Xenotransplantation, Xenogeneic Infections, Biotechnology, and Public Health

Abstract

Xenotransplantation is the attempt to use living biological material from nonhuman animal species in humans for therapeutic purposes. Clinical trials and preclinical studies have suggested that living cells and tissue from other species have the potential to be used in humans to ameliorate disease. However, the potential for successful xenotransplantation to cure human disease is coupled with the risk that therapeutic use of living nonhuman cells in humans may also introduce xenogeneic infections of unpredictable significance. Animal husbandry practices and xenotransplantation product preparation may eliminate most exogenous infectious agents prior to transplantation. However, endogenous retroviruses are present in the genomes of all mammalian cells, have an inadequately defined ability to infect human cells, and have generated public health concern. The history of xenotransplantation, the implications for public health, the global consensus on the public safeguards necessary to accompany clinical trials, and the future direction of xenotransplantation are discussed in the context of public health. Mt Sinai J Med 76:435-441, 2009. © 2009 Mount Sinai School of Medicine

Xenotransplantation can be briefly described as an attempt to use living biological material from nonhuman animals in humans for therapeutic benefit. The US Public Health Service defines xenotransplantation more formally as the transplantation, implantation, or infusion into a human recipient of either (1) live cells, tissues, or organs from a nonhuman animal source or (2) human body fluids, cells, tissues, or organs that have had ex vivo contact with live nonhuman animal cells, tissues, or organs. 1

Xenotransplantation may sound like an unlikely event. However, in vitro fertilization, a practice by which infertile couples are often able to bear children via the removal of eggs and sperm from the intended parents, fertilization of the eggs in a laboratory, growth of those fertilized eggs to a multicell stage over 3 to 5 days, and implantation of the eggs into the mother's uterus, is no longer an unlikely event. In the 1990s, the substrate used in the laboratory to support the development of a fertilized egg to the multicell stage was frequently a cell line of nonhuman origin. Thus, the multicell-stage fertilization product implanted into the mother's uterus, and ultimately the resulting infant, was, by the US Public Health Service definition, a xenotransplantation product. Additionally, hundreds of patients have been treated with investigational xenotransplantation products intended (1) to sustain patients suffering hepatic failure until a liver is available for transplant (hemoperfusion through a porcine liver or hepatocytes), (2) to decrease the dependence of diabetics on insulin (porcine pancreatic cell implants), or (3) to improve function in patients with Parkinson's disease (implantation of porcine neurological cells) or other conditions. 2,3

WHY IS XENOTRANSPLANTATION A PUBLIC HEALTH CONCERN?

Rationale for Xenotransplantation

As transplantation surgery has become more technically proficient, the factor limiting transplant patient survival has ceased to be the technical complexity of the surgery and has instead become the limited availability of donor organs appropriate for transplantation.
During that same period, 18% of the people who were both in need of a heart transplant and on the waiting list died while awaiting transplantation. Thus, as a result of the mismatch between the demand for and supply of donor organs for transplantation, the greatest risk of dying due to heart transplantation today is attributable to the scarcity of transplantable organs rather than to the risk of either surgery or posttransplantation organ rejection. 4

The shortage of donor organ availability first drove interest in exploring whether organs from nonhuman animals could be adapted for transplantation into humans. Xenotransplantation has been envisioned as a source of spare-part organs to be transplanted as a definitive cure for end organ failure. However, xenotransplantation has also been envisioned as a bridge therapy: an interim therapeutic step intended only to sustain the patient long enough to allow a compatible human donor organ to become available. The disparity between the demand for transplantation generated by organ failure and the supply of donated human organs was the original driving force behind efforts to develop xenotransplantation. However, a second driving force emerged after it was recognized that differences in species susceptibility to specific infections might be exploited to human advantage. Two early landmark experiments demonstrated the application of this concept.

When Dr. Thomas Starzl, a pioneering transplant surgeon in Pittsburgh, learned that baboons were refractory to infection with either hepatitis B or human immunodeficiency virus (HIV), he wondered if this species difference in susceptibility could be exploited to the advantage of HIV-infected patients who were dying of liver failure due to hepatitis B virus infection. In 1992, following advances in the ability to control both cellular and humoral components of xenograft rejection in vitro, Dr. Starzl and his colleagues attempted to transplant a baboon liver into a 35-year-old HIV-infected man with hepatitis B virus-associated chronic active hepatitis. The patient survived with little evidence of rejection, and products of hepatic synthesis became those of the baboon liver without evidence of an obvious adverse impact. The patient died on day 70 after transplantation because of a cerebral and subarachnoid hemorrhage caused by an angioinvasive Aspergillus infection. Although Dr. Starzl did not repeat this experiment, he concluded that this experiment had demonstrated the feasibility of controlling the rejection of baboon livers transplanted into human recipients. 5 Conceptually, he had ushered in a new era of thought about xenotransplantation.

On December 14, 1995, a 38-year-old activist who had been infected with HIV for more than 15 years underwent a controversial experiment. His bone marrow was suppressed (not ablated) by sublethal doses of radiation and chemotherapeutic drugs, after which he received an infusion of stem cells and facilitator cells procured from the bone marrow of a baboon. Facilitator cells, discovered by Dr. Suzanne Ildstad, another Pittsburgh surgeon, appear to allow stem cells to proliferate in other species without producing graft-versus-host disease. 6 This experimental attempt to reconstitute chimeric functional bone marrow in a patient with acquired immune deficiency syndrome (AIDS) was another conceptualization of how the species differences in susceptibility to infection could be exploited to human health advantage.
This experiment occurred prior to the advent of cocktail antiretroviral therapy, at a point when the standard treatment approaches were not working adequately and the AIDS activist community was increasingly convinced that radical new approaches were necessary. In this atmosphere of desperation, a US Food and Drug Administration advisory committee debated and then voted to allow this controversial human experiment, which might actually have shortened rather than prolonged the patient's life. The experiment went forward, and the patient survived and improved for reasons that are unclear, as the baboon bone marrow cells were not identifiable in his bone marrow beyond the first month. 7

These 2 experiments illustrate a transition in the conceptual nature of xenotransplantation. The first concept, that whole animal organs were anticipated to serve as spare-part replacements for failed human organs, was driven by the inability of the supply of human organs to keep pace with the ability of technological advances in transplantation to save lives. Increasingly since 1990, the development of xenotransplantation applications has been influenced by the recognition that unique properties of therapeutic materials originating from nonhuman species may be exploited to the advantage of human health. Increasingly, the products are cellular rather than whole organs. Problems with immune rejection, and the absence of preclinical data meeting specific recommendations for the survival of xenogeneic organs in nonhuman primates prior to clinical use in humans, further drive the preferential interest in cellular transplantation over organ transplantation. Recent success with islet allotransplantation for diabetes is driving renewed interest in using porcine islets because of the lack of a sufficient supply of human islets for allotransplantation. The transplantation of cellular products likely represents the most viable near future of xenotransplantation.

Why Are Public Health Professionals Interested in Xenotransplantation?

Xenotransplantation first came to the compelling attention of public health authorities in the mid-1990s, around the time of the previously described experiments by Dr. Starzl and Dr. Ildstad. 8 The combination of advances in the control of immune rejection and the engineering of transgenic pigs that contained certain human genes anticipated to improve porcine xenotransplantation product survival in human recipients led to great enthusiasm for the rapid movement of xenotransplantation applications into human clinical trials. The absence of any precedent for regulatory policy for xenotransplantation clinical trials was of pressing concern to the public health authorities charged with safety oversight. Although Dr. Starzl, Dr. Ildstad, and their colleagues were focused on the potential to transform the health of individuals, the public health focus was on the protection of community health.

Some xenotransplantation proposals could provide great societal benefit if successful. For example, in 2005-2006, the crude prevalence of total diabetes in US residents 20 years old or older was 12.9%. 9 For decades before human insulin became available, diabetes was managed through the intermittent injection of porcine insulin. Imagine the impact if diabetes could be functionally cured through the infusion of functioning porcine pancreatic islet cells.
One modeling study estimated the health economic impact of maintaining glycosylated hemoglobin values in all US patients with currently uncontrolled type 1 or type 2 diabetes mellitus at the American Diabetes Association standard of 7.0% and at the American Association of Clinical Endocrinologists target of 6.5%. This analysis, run from a societal perspective over a 10-year time horizon, estimated that achieving these levels of maintenance could yield total direct medical cost savings of 35 to 50 billion US dollars, respectively, over 10 years. When indirect cost savings were included, the total savings increased to 50 to 72 billion US dollars, which corresponded to 4% to 6% of the total annual US health care costs of 1.3 trillion US dollars. 10 This analysis estimated only financial savings and did not address quality of life or years of productive life reclaimed.

Although xenotransplantation's potential for positive benefits to individuals and society was a primary driver for the clinical research community, the attention of the public health community was captured more by concern for the potential for unintended negative consequences. Xenotransplantation is intended to benefit the health of individuals by replacing nonfunctioning or malfunctioning human cells, tissues, or organs with functioning nonhuman animal cells, tissues, or organs. However, the implantation, transplantation, or infusion of living nonhuman tissue into humans for therapeutic purposes also has the potential to transfer infections across species lines into humans. Because xenotransplantation applications breach normal host defenses and are frequently accompanied by pharmacological immune suppression, xenotransplantation may be ideally suited to have the unintended consequence of introducing new infections into the human recipients. The potential for implanted living nonhuman animal cells to also transfer infections across species lines into the human population (xenozoonosis or xenogeneic infections) was a major concern.

Zoonotic infections occur in nature and produce disease in individuals and epidemics in human populations. Examples of zoonotic epidemics include the 1993 hantavirus pulmonary syndrome outbreak in the American southwest 11 and the epidemic of encephalitis that followed the introduction of West Nile virus into New York in 1999. 12 Zoonotic infection of humans by avian influenza viruses caused the largest recorded pandemic in human history, the Influenza Pandemic of 1918. 13 AIDS, now understood to have resulted from the introduction of a simian immunodeficiency virus infection across species lines into humans, 14 has gone through all these stages and is no longer a zoonosis, an epidemic, or a pandemic. AIDS now is simply an endemic infection affecting all human populations throughout the world, and as much as any single force in this century, it is reshaping the world.

This, then, is the primary basis for public health interest in xenotransplantation and related biotechnologies. The potential for xenotransplantation to benefit individual patients is inevitably linked to a potential to introduce harm to human populations through the unintended introduction of xenogeneic infections. Because infectious diseases do not respect geopolitical boundaries, a xenotransplantation clinical trial anywhere that is not accompanied by adequate public safeguards is a concern to the global community.
XENOTRANSPLANTATION AND PUBLIC HEALTH IN 1995

In 1995, at the request of Phil Lee, the Assistant Secretary for Health of the Department of Health and Human Services, federal agencies first began to examine xenotransplantation as a public health issue. At that time, the potential for new biotechnical approaches to alleviate human suffering by the use of living biological material from nonhuman animals was recognized. These experimental approaches had the exciting potential to have an unprecedented positive impact on a broad spectrum of human disease. However, these approaches also carry an unquantifiable, probably small, but still real risk of unintended negative side effects via the introduction of xenogeneic infections into the human population. Recognition of the potential for xenotransplantation to have a negative public health impact due to xenogeneic infections inspired collaborative efforts by the academic, clinical, industrial, research, and public health communities to identify ways to develop this promising biotechnology while adequately safeguarding the health of the larger community. The first public health priority was the development of an international consensus on the public safeguards necessary to allow xenotransplantation clinical trials to proceed with public confidence. Once consensus was achieved, it was implemented through the development of appropriate public health policy translated into regulatory practice. Simultaneous research efforts explored clinical interventions and bioengineering approaches that might increase the safety of clinical trials. The stakeholder communities also undertook collaborative basic science research to better define fundamental understandings of both the risk and the potential of xenotransplantation.

Public Health Guidance

Public health guidance documents are available from national health authorities in most nations in which clinical xenotransplantation trials have been undertaken or considered, including the United States, Canada, multiple European countries, the European Union, and Australia, and from multinational organizations such as the World Health Organization and the Organization for Economic Cooperation and Development. 1,15 These various guidances are interpretations of a single global consensus. The US Public Health Service and other guidances build on a foundation that requires xenotransplantation product source animals to originate from closed colonies of purpose-bred animals with husbandry practices that limit and define the lifelong exposures of these animals. These guidances emphasize the importance of pretransplantation screening of source animals and herds, to identify and eliminate problematic infectious agents prior to the development of xenotransplantation products, and of posttransplantation surveillance of xenotransplantation recipients, to identify and contain any xenogeneic infections. Posttransplantation surveillance is necessary to identify infectious agents that (1) were transplanted because they were not known to exist at the time of transplant screening (eg, severe acute respiratory syndrome-associated coronavirus prior to 2003), (2) were known to exist but could not be detected because of inadequate diagnostic tools (eg, prions), or (3) were known to be present in the source animal and could not be removed (eg, endogenous retroviruses).

Endogenous Retroviruses

Retroviruses are RNA viruses that replicate by transcribing viral RNA into DNA by reverse transcription.
16 Proviral DNA is integrated into the host cell genome and replicated with the host cell DNA. Exogenous retroviruses exist as independent cell invaders that are transmitted horizontally by infection; the most familiar example is HIV. However, when retroviruses become integrated into the host cell genome within a germ cell, they can then be transmitted vertically by inheritance through the germline DNA. These endogenous retroviruses exist as proviral DNA integrated into the germlines of all mammals, including the genomes of humans and of all species considered as source animals for xenotransplantation. Endogenous retroviruses represent a sort of fossil remnant of what are presumed to have once been exogenous retroviruses that integrated into the host germline eons ago and remain as an inherited part of the genetic structure of every cell of the species today. These endogenous retroviruses may express infectious viruses but do not cause disease in the host species. However, many endogenous retroviruses are xenotropic; that is, they are able to infect cells from other species. Endogenous retroviruses of pigs and baboons are able to infect human cells in vitro. Thus, living animal tissue that is apparently devoid of exogenous infectious agents nonetheless retains an innate infectious potential due to the presence of endogenous retroviruses.

C-type particles (crescent-shaped formations on the membranes of cells associated with the budding of C-type retroviruses) expressed from a variety of porcine cell lines were identified during the 1970s and 1980s and characterized as endogenous retroviruses capable of infecting ST-Iowa cells (a cell line derived from Sus scrofa) in vitro. After 1995, concerns about endogenous retroviruses in xenotransplantation products inspired studies that defined infectivity and host ranges for porcine endogenous retrovirus (PERV). 17,18 PERV is expressed from multiple porcine cell lines and primary tissues. Of 3 identified variants, 2 (PERV-A and PERV-B) productively infect multiple human cell lines, although human peripheral blood mononuclear cells appear resistant to productive infection. The murine leukemia virus and the feline leukemia virus each share more than 60% homology with PERV. This homology has been exploited to develop serologic assays for PERV, and the recognized characteristics of these viruses have been used as a basis for reasoning by analogy about how PERV may behave biologically. 2,3,13,18

In response to these findings, on October 16, 1997, the US Food and Drug Administration placed all xenotransplantation trials using porcine products in the United States on clinical hold. Resumption of clinical trials required the development of assays for the detection of infectious PERV in xenotransplantation products, implementation of surveillance for PERV infections in recipients, and development of informed consent documents adequately informing clinical trial participants of the potential risks associated with the presence of PERV in porcine xenotransplantation products. 19 In 1999, in recognition of a global public health consensus, the US Food and Drug Administration issued additional guidances that preclude the use of nonhuman primates as source animals for xenotransplantation products and that defer xenotransplantation recipients from the donation of blood and other biological materials as precautionary measures. 20 New tools were necessary to enable laboratory surveillance for endogenous retrovirus infection.
Because PERV DNA is a normal part of the genome of every porcine cell, polymerase chain reaction (PCR) identification of PERV DNA is inevitable whenever transplanted porcine cells are present. The ability to discriminate PERV infection from the mere presence of PERV DNA-containing porcine cells in a xenotransplantation product recipient therefore requires additional testing. Most approaches combine a PCR assay for PERV DNA with a PCR assay for a marker of porcine (or other source animal) DNA. Although these assays frequently target source animal mitochondrial DNA, other repetitive sequences such as centromeric sequences have also been used. 21-25 The presence of PERV DNA in the absence of porcine mitochondrial DNA implies PERV infection, whereas in the presence of porcine mitochondrial DNA it simply implies that not all xenogeneic cells have been rejected by the recipient (a schematic of this decision logic is sketched in the code below). 21-25 This basic approach is further refined with quantitative tests that assess the relative abundance of PERV DNA and host mitochondrial DNA in biological material from recipients with respect to the ratio that existed in source animal cells prior to transplantation. These, combined with reverse-transcriptase PCR tests that identify PERV-specific RNA evidence of viral expression, Amp-RT or similar assays that identify generic reverse-transcriptase activity, and western blot assays that identify seroreactivity against homologous retroviruses, compose the basic armamentarium for laboratory surveillance for PERV infection in porcine xenotransplant recipients. 21-25
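The paired-PCR interpretation described above reduces to a small decision table. The sketch below is a minimal schematic of that logic, not any laboratory's actual software; the function and result strings are illustrative only, and real surveillance additionally weighs the quantitative and serologic assays listed above.

```python
def interpret_perv_surveillance(perv_pcr_positive: bool,
                                pig_mito_pcr_positive: bool) -> str:
    """Schematic reading of the paired-PCR logic described in the text.

    PERV DNA is carried by every porcine cell, so a PERV signal alone is
    ambiguous; the porcine mitochondrial-DNA assay tells the two apart."""
    if perv_pcr_positive and pig_mito_pcr_positive:
        # Porcine cells persist (microchimerism); the PERV DNA may simply
        # belong to those surviving xenogeneic cells.
        return "porcine cells present - PERV infection not established"
    if perv_pcr_positive and not pig_mito_pcr_positive:
        # PERV DNA with no porcine-cell marker implies PERV sequences have
        # entered the recipient's own cells, i.e. suggests infection.
        return "suggests PERV infection of recipient cells"
    return "no PERV DNA detected"

print(interpret_perv_surveillance(True, True))
print(interpret_perv_surveillance(True, False))
```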
An early series of retrospective studies failed to identify evidence of PERV infection in recipients of porcine xenotransplantation products. Two landmark negative studies were published in Lancet in 1998. Patience et al. 23 found no evidence of PERV infection in 2 patients who had experienced short-term extracorporeal perfusion of their blood through pig kidneys in the absence of immunosuppression. Heneine et al. 24 did not identify PERV infection in 10 immunosuppressed diabetic recipients of human kidney transplants who also received fetal porcine islet cells either infused into the portal vein or inserted under the capsule of the kidney allograft; porcine cells had persisted for up to 6 months following transplantation. In 1999, the negative results of a global collaboration by the xenotransplantation research community were published. 25 This collaboration studied a large number of patients (n = 160) who were exposed to porcine xenotransplantation products through a wide variety of methods. PERV infection was sought through double-blinded testing in 2 laboratories using independently developed assays. Microchimeric porcine cells were identified in the peripheral blood of 23 recipients up to 8.5 years after exposure ended. This finding, while unexpected, is consistent with evidence of Y chromosome-containing microchimeric cells in the peripheral blood of women decades after they gave birth to male offspring. 26 In the context of xenotransplantation, this finding signifies that transient exposure to xenotransplantation products may result in persistent exposure to risk of PERV infection.

2009: WHAT IS THE FUTURE OF XENOTRANSPLANTATION?

Since the dialogue between public health and xenotransplantation began in the mid-1990s, much effort has been applied to studies attempting to elucidate aspects of xenotransplantation relevant to infectious risk. 27 The development of a global agreement on the public safeguards that should accompany xenotransplantation clinical trials was a significant advance. A growing body of evidence has failed to identify PERV infection in xenotransplantation product recipients, although recent advances intended to overcome immune rejection (ie, the development of alpha-gal knockout source pigs) may inadvertently increase the risk of PERV transmission. 28 Modifications of xenotransplantation product bioengineering have been shown to diminish the release of PERV virions in vitro; this observation suggests that such modifications may reduce a recipient's risk of exposure to infectious PERV in vivo. 29 Other in vitro experiments suggest that transient use of antiretrovirals peri-transplant may increase selective pressure against persistent infection. 30 The recognized homology between PERV and feline leukemia virus suggests that effective vaccination may be possible, although other observations suggest that vaccines which protect against PERV infection may also contribute to the rejection of porcine xenotransplantation products. 31

Although much remains to be explored in all these areas, findings to date endorse the decision by regulatory authorities to allow xenotransplantation clinical trials to proceed with public safeguards in place, and they diminish, but do not eliminate, concern about the unintended introduction of new infections into the human population as a byproduct of efforts to cure individual disease. The potential for xenotransplantation to introduce xenogeneic infections remains a concern but is no longer the rate-limiting step in the advancement of the field. Whether xenotransplantation will deliver on the promise raised by early visions will depend more on whether basic research can overcome the remaining immune barriers to long-term survival of xenotransplantation products in vivo, on discoveries about the adequacy of xenoproduct physiological function in humans, and on whether advances in the related fields of cloning and regenerative medicine outpace the rate of discovery in xenotransplantation to the point of irrelevancy.
A Review of Heterogeneous Resource Allocation in LTE-A based Femto cell Networks with CQI Feedback

Background/Objectives: Long-Term Evolution-Advanced (LTE-A) based heterogeneous networks focus on Femto-based cell deployment. These Femto cells benefit customers and service providers in terms of network coverage and spectral efficiency. In such networks, an optimal means of resource allocation is the major concern. Methods/Statistical Analysis: This paper presents a detailed review of resource allocation in Femto cell based LTE-A networks. In addition, a resource allocation strategy is suggested by means of Heterogeneous Channel Quality Index (CQI) based Scheduling Techniques (HCBST). It makes use of an indexed adaptive modulation and coding technique driven by the various CQI parameters. Findings: The HCBST algorithm is implemented by analyzing various scheduling techniques at the Femto and Macro base stations. The proposed resource allocation strategy is evaluated in two scenarios, namely, Femto without node mobility and Femto with node mobility. Application/Improvements: The proposed HCBST system, with modified largest weighted delay first with CQI at the Macro cell and exponential with CQI at the Femto cell (ML-EXP-CQI) based scheduling, performs 1.1879% and 2.85% better in terms of throughput and spectral efficiency, respectively, compared with the existing scheduling algorithms.

Introduction

The development of Long-Term Evolution-Advanced (LTE-A) technology marked a golden chapter in the development of the wireless communications domain. To improve the performance of LTE-A based networks, relays and Pico cells came into the picture. A relay is a wireless radio extension of the Base Station (BS), and it provides limited intelligence to network operators. Pico-based deployment, on the other hand, provides an intelligent BS, which is owned, planned, and placed by the operator. In both deployments, the customer does not come into the picture; in order to give customers more control, Femto cells came into the picture. Several schedulers have been studied for such networks, including the Best CQI scheduler, the Kwan Max Throughput Scheduler, the Proportional Fair Scheduler, the Max-Min scheduler, and the Resource Fair Scheduler. Among these schedulers, the Best Channel Quality Index (CQI) scheduler performs better than the others in terms of throughput. Alternatively, the Round Robin and Fair Proportional schedulers do not perform well under mobility conditions.

The choice of optimal admission control and resource allocation for uplink traffic is discussed by the Joint Admission Control and Resource Allocation algorithm 2. It focuses on transport-layer resource allocation for Femto equipment along with interference-aware resource allocation. A further advancement of resource allocation is based on a cooperative power scheme that reduces the power setting with Carrier Aggregation based Femto cells 3. Femto cells cooperatively adjust their power in order to optimize it, but this scheme is applicable only under dense Femto cell conditions. Later, a distributed algorithm came into the picture, which combined power control and scheduling for the Femto cell network under downlink conditions 4. It considers the channel quality and iteratively updates power, and it is applied to cell-edge and cell-centre users, but only in interference-limited bandwidth. A novel spectrum method is proposed in literature [4-6], which partitions the network completely, where the Macro base station divides the frequency spectrum into two parts:
(1) a particular frequency band allocated to the Femto cells, and (2) another frequency band allocated to the Macro cell. Such a scheduling scheme maintains an interference pool table and mostly focuses on the resources allocated for link monitoring, on handling and managing the radio resources, and on channel state information with feedback. A Q-learning based scheduling has been analysed as a first approach towards heterogeneous networks; it is an advanced version of the Round Robin algorithm that weeds out the resource blocks (RBs) with high interference at the Macro Evolved Node B (eNB) and uses round robin over the rest to maintain fair resource allocation (a minimal sketch of this idea is given at the end of this section). The dynamic system-level behaviour of multiple users of Femto cells placed in a Micro cell has been discussed in literature [7-9], which investigates multi-user and multi-cell performance in Single Input Single Output (SISO) and Multiple Input Multiple Output (MIMO) antenna configurations. Later, self-optimization of Femto cells was considered through cooperative spectrum sharing, power allocation, and scheduling techniques; a dynamic approach for spectrum sharing and power allocation showed that the effective channel usage ratio can reach 75% of the average spectrum utilization. The impact of coordination delay during resource allocation with a distributed algorithm is explained in 10: prior to scheduling among multiple users, it checks the channel traffic and prioritizes the traffic. The prime focus there is on signalling of transmission power over pilots so that neighbouring cells know the Signal to Noise plus Interference Ratio (SNIR) of the corresponding node; based on the SNIR value, an adaptive coding and modulation technique is deployed. The round robin method is utilized, and resource scheduling is carried out channel-independently. A resource allocation scheme 11 focuses on dense Femto cell interference with the Macro cell in the uplink and interference among Femto cells in the downlink. The importance of considering the channel quality index during resource allocation is discussed in literature 12, which uses dynamic sub-carrier allocation; it explains in detail the limitations of full CQI feedback in real-time scenarios and shows how a partial CQI technique is much better in the case of mobility. Such a CQI technique has further been combined with fair proportional scheduling to improve the performance of the network 13. A decentralized, user-centric scheduling has been suggested for small cells 14,15: each Macro cell is divided into small cells, and one node acts as a centric node that takes care of channel quality and provides feedback to the eNB [16-21].

As found in the literature, redundant bits or an additional physical device are required to realize such scheduling techniques in Femto-based networks. In this paper, without increasing the complexity of the network, a heterogeneous scheduling technique is proposed for the Macro-Femto scenario with seamless service under mobile conditions. The rest of the paper is organised as follows: Section 2 presents the design of the proposed system. Section 3 describes the simulation results for immobile Femto nodes and for Femto nodes with mobility. Finally, Section 4 concludes with the major findings and future research directions.
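As a minimal sketch of the interference-aware round robin idea mentioned above, the following Python fragment drops high-interference RBs and cycles the remaining ones over the users. The Q-learning step that would adapt the interference threshold online is omitted, and all names and values are illustrative, not taken from the cited work.

```python
from collections import deque

def interference_aware_round_robin(users, rb_interference, threshold):
    # Keep only resource blocks whose measured interference at the Macro
    # eNB does not exceed the threshold.
    usable_rbs = [rb for rb, level in enumerate(rb_interference)
                  if level <= threshold]
    queue = deque(users)            # round-robin order preserves fairness
    allocation = {user: [] for user in users}
    for rb in usable_rbs:
        user = queue.popleft()      # next user in turn receives the RB
        allocation[user].append(rb)
        queue.append(user)          # and rejoins the back of the queue
    return allocation

# Toy run: five RBs, two of them too noisy to use.
print(interference_aware_round_robin(
    ["u1", "u2", "u3"], [0.1, 0.9, 0.3, 0.2, 0.8], threshold=0.5))
# -> {'u1': [0], 'u2': [2], 'u3': [3]}
```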
System Model

The system model considered in this section consists of two scenarios, namely, Femto nodes without mobility, as shown in Figure 1, and Femto nodes with mobility, as shown in Figure 2. In Figure 1, the system consists of a cell of hexagonal shape with a radius of 10 km. The eNB of the Macro cell is placed at the centre of the hexagonal cell and is named the Macro BS. The User Equipments (UEs) associated with the eNB are placed randomly in the cell, and they are mobile in nature; in every iteration, the location of each UE changes.

In the first scenario, Femto cells are placed inside a building without mobility. The assumption made here is that the building is a 5×5 grid consisting of 25 apartments, and Femto cells are placed in these apartments randomly. One Femto BS can carry a load of 30 home Femto User Equipments (FUEs). The distance between a Femto BS and an FUE is 35 m, and between FUEs it is 10-20 m. To evaluate the performance of the system, VoIP, video, and best-effort traffic are the target services considered for evaluation. Video is considered a Real Time (RT) service, whereas Voice over Internet Protocol (VoIP) and Best Effort (BE) are Non-Real Time (NRT) services. All resource allocation happens in the downlink of the LTE-A based Femto cell network.

In the proposed scenario, the scheduling technique used across the Femto and Macro tiers is heterogeneous. The term heterogeneous here means the usage of Fair Proportional (FP) scheduling in the Macro scenario and a scheduler other than FP in the Femto cell. This paper considers three scheduling schemes, namely, FP, Modified Largest Weighted Delay First (MLWDF), and exponential (EXP) scheduling; their per-user scheduling metrics are sketched in the code after this subsection. FP focuses on maximizing the throughput of the channel while providing minimal service (fairness) to all the users; in other words, it provides a trade-off between overall system gain and data-rate fairness among the users. The drawback of the FP algorithm is that it assumes independent, identically distributed channel conditions among the users and is not very efficient under mobility conditions. The other two algorithms, MLWDF and exponential scheduling, are more focused on real-time applications such as video services. MLWDF is an advanced version of Largest Weighted Delay First (LWDF); the basic idea behind this algorithm is that packets with the same Quality of Service (QoS) class are gathered in a single queue, and packets in that queue get an equal scheduling opportunity. It generally ensures that the probability of a packet exceeding its delay budget does not exceed the maximum allowable packet loss rate. Alternatively, the exponential algorithm is an advanced version of FP scheduling and is applicable to both real-time and non-real-time applications; the only difference is that it gives higher priority to real-time services than to non-real-time services.
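To make the comparison concrete, the sketch below gives the per-user metrics commonly quoted in the LTE scheduling literature for these three schedulers; at each transmission interval the scheduler grants a resource block to the user with the largest metric. The paper does not list its exact formulas, so the textbook forms and all parameter names here (delay budget, target loss probability) are assumptions.

```python
import math

def pf_metric(inst_rate, avg_rate):
    # Proportional Fair: instantaneous achievable rate relative to the
    # user's long-run average throughput.
    return inst_rate / max(avg_rate, 1e-9)

def mlwdf_metric(inst_rate, avg_rate, hol_delay, delay_budget, loss_prob):
    # M-LWDF: weight the PF metric by the head-of-line (HOL) delay, scaled
    # so the delay budget is violated with probability below loss_prob.
    a = -math.log(loss_prob) / delay_budget
    return a * hol_delay * pf_metric(inst_rate, avg_rate)

def exp_rule_metric(inst_rate, avg_rate, hol_delay, delay_budget,
                    loss_prob, mean_weighted_delay):
    # EXP rule: boosts real-time flows exponentially as their weighted HOL
    # delay rises above the average weighted delay of the real-time flows.
    a = -math.log(loss_prob) / delay_budget
    boost = math.exp((a * hol_delay - mean_weighted_delay)
                     / (1.0 + math.sqrt(mean_weighted_delay)))
    return boost * pf_metric(inst_rate, avg_rate)

# Example: a video packet 80 ms into a 100 ms budget outranks a best-effort
# user with identical channel conditions under M-LWDF.
print(mlwdf_metric(1.0e6, 5.0e5, hol_delay=0.08, delay_budget=0.1,
                   loss_prob=0.05))
print(pf_metric(1.0e6, 5.0e5))
```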
The second scenario considered in this paper adds mobility of the FUEs inside a Femto cell. This scenario is almost identical to the first except that the Femto equipment is mobile inside the building premises. With mobility of the FUEs, handoffs often occur between the HeNB and the eNB, and hence a handoff-aware resource allocation is considered along with these three resource allocation algorithms. In order to elaborate handoff-aware resource allocation in detail, consider the sample illustration of the Macro-Femto scenario shown in Figure 2. This paper considers one eNB and two HeNBs; the UEs and FUEs are all associated with their respective eNB or HeNB, and the FUEs are in the mobile state. As per the literature and our simulations, a high velocity of the Femto equipment causes unnecessary interruptions to the service. To overcome this hurdle, we found that if the velocity of the Femto user is more than 3 km/h, the respective HeNB is not suitable for serving it; in that case, a handoff from the HeNB to the eNB is required, as the eNB can handle high-velocity equipment. The black arrow in Figure 2 shows the handoff from the HeNB to the eNB.

Once done with the system architecture, we simulated the above network with the scheduling techniques under Femto node (FUE) immobility as well as Femto node mobility. Under the no-mobility scenario, the combination of exponential scheduling at the Macro eNB and fair proportional scheduling at the FeNB shows better performance than the other possible combinations. However, under the mobility condition, MLWDF at the Macro eNB with exponential scheduling at the FeNB is the better combination. To improve the system performance, we consider a channel-quality-aware feedback technique along with the best possible scheduler combination under the immobility and mobility scenarios. This combination is named the Heterogeneous CQI based Scheduling Technique (HCBST). Channel quality awareness is identified by means of the channel quality index, which has 4 bits and is numbered from 1 to 15. Based on the reported index, the respective adaptive coding and modulation (ACM) scheme is applied, as shown in Table 1. Further, to reduce the number of feedbacks, an M-Feedback CQI technique is used in HCBST: each carrier is divided into S sub-channels, each user computes the SNR on these sub-channels and sends the indices, and based on the reported sub-channel indices, the base station chooses the highest index (a sketch of this mapping and selection follows below). The range of signal-to-noise ratio (SNR) values and their respective indices is shown in Table 1. Here, R_FUE is the distance between an FUE and its HeNB, and m is the Femto cell coverage.

Figure 3 and Figure 4 display the flow chart of the HCBST algorithm. It consists of five iterations, and in each iteration the user equipment changes its position. Initially, set i = 1. First create the Macro scenario along with its equipment, and after that deploy the Femto cells inside it; while deploying the Macro and Femto cells, assign the respective scheduling techniques. Once done with that, check the CQI for the next channel and assign the ACM for it. Once i > 5, the algorithm comes out of the loop.
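The CQI mapping and best-M feedback just described can be sketched as follows. The SNR thresholds below are assumed placeholders; the actual index boundaries are those of Table 1, which is not reproduced in the extracted text.

```python
import numpy as np

# Illustrative SNR thresholds (dB) for CQI indices 1..15 -- assumed values,
# standing in for the real boundaries given in Table 1 of the paper.
SNR_THRESHOLDS_DB = np.linspace(-6.0, 20.0, 15)

def snr_to_cqi(snr_db: float) -> int:
    # Highest CQI index whose SNR threshold is still met (clamped to 1..15).
    idx = int(np.searchsorted(SNR_THRESHOLDS_DB, snr_db, side="right"))
    return min(max(idx, 1), 15)

def best_m_feedback(subchannel_snrs_db, m: int):
    # M-Feedback CQI: report only the M best sub-channel indices and their
    # CQI values instead of a full per-sub-channel report.
    best = np.argsort(subchannel_snrs_db)[::-1][:m]
    return [(int(s), snr_to_cqi(float(subchannel_snrs_db[s]))) for s in best]

# The base station serves the reported sub-channel with the highest CQI and
# selects the matching adaptive coding and modulation scheme.
rng = np.random.default_rng(1)
snrs = rng.uniform(-5.0, 18.0, size=12)    # 12 sub-channels, toy values
print(best_m_feedback(snrs, m=4))
```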
Heterogeneous Scheduling in Macro-Femto Scenario under Immobile FUE

The simulation parameters used for evaluating the performance of the network and the various scheduling mechanisms are shown in Table 2 and Table 3, respectively. The curves in Figure 5 show the throughput of heterogeneous scheduling in the Macro-Femto scenario under immobile nodes with the INF-BUF application. EXP-FP shows the better performance in the case of immobile nodes: for the considered scenario it achieves an average throughput of 25.633 Mbps, which is 2.145% better than the second-best, ML-EXP based resource allocation. Resource allocation by ML-FP exhibits a throughput of 23.195 Mbps due to an inappropriate combination of scheduling at the Macro eNB and the Femto HeNB. From the curves it can be inferred that the throughput of FP-EXP changes drastically with an increase in the number of users.

The curves in Figure 6 display the spectral efficiency of heterogeneous scheduling in the Macro-Femto scenario under immobile nodes. Among the schedulers, EXP-FP shows the best spectral efficiency of 2.085185 b/s/Hz, whereas ML-EXP shows the least spectral efficiency of 1.8831 b/s/Hz; EXP-FP thus shows an improvement of 10.7315% over ML-EXP. In the immobile-node scenario, exponential scheduling is used in the Macro cell, where the nodes are in a mobile condition, while FP shows the best performance in the immobile Femto condition: it tries to maintain the maximum throughput along with minimal service to the users, whereas exponential scheduling is better suited for highly mobile conditions.

Heterogeneous Scheduling in Macro-Femto Scenario under Mobile FUE

The curves in Figure 7 display the throughput of heterogeneous scheduling in the Macro-Femto scenario under mobility conditions. Scheduling with ML-EXP is the best possible combination under the mobile condition, giving a throughput of 24.833 Mbps. Alternatively, ML-FP shows the minimum throughput of 23.54 Mbps, which is 5.49277% less than ML-EXP based resource allocation. The curves in Figure 8 display the spectral efficiency of heterogeneous scheduling in the Macro-Femto scenario under mobility. ML-EXP scheduling shows the best spectral efficiency of 2.0123 b/s/Hz, and ML-FP shows the least, with a spectral efficiency of 1.97365 b/s/Hz; the ML-EXP scheduler thus performs 1.9315% better than the ML-FP scheduler. Hence, ML-EXP is considered the best combination under the mobility condition. Here, ML-EXP resource allocation uses MLWDF scheduling at the Macro eNB and exponential scheduling at the Femto HeNB. In this scenario, FP has not been chosen at all; the reason is that FP is not a suitable candidate in the mobile condition, whereas MLWDF suits the high-mobility condition because it generally ensures that the probability of a packet exceeding its delay budget does not exceed the maximum allowable packet loss rate. Exponential scheduling is more suitable for multimedia services and also stands up well in the mobile condition.

EXP-FP and EXP-FP-CQI in Femto Node Immobile Scenario

The graphs in Figure 9 and Figure 10 display the throughput and spectral efficiency comparison of EXP-FP-CQI with EXP-FP. Figure 11 and Figure 12 display the throughput and spectral efficiency comparison between ML-EXP-R-CQI and ML-EXP-R scheduling under Femto node mobility conditions. Here, ML-EXP-R-CQI based scheduling shows better performance than ML-EXP-R based scheduling. In particular, ML-EXP-R-CQI gives a peak throughput of 25.128 Mbps and a spectral efficiency of 2.1324 b/s/Hz; it is 1.1879% and 2.85% better in terms of throughput and spectral efficiency, respectively, than ML-EXP-R based scheduling. The mean comparison of the various scheduling schemes for Femto cell based LTE-A networks for immobile users and mobile users is shown in Table 4 and Table 5, respectively. It can be inferred that ML-EXP-CQI based scheduling performs better than the existing and other scheduling algorithms discussed for LTE-A networks.

Conclusion

This paper has detailed resource allocation in scenarios with immobile Femto nodes and with Femto node mobility. With immobile Femto nodes, exponential scheduling at the Macro eNB and fair proportional scheduling at the Femto eNB show the better performance; with M-Based CQI consideration, the performance of the combined scheduler is 1.8945% and 5.504% better in terms of throughput and spectral efficiency, respectively, than the heterogeneous scheduler without CQI. Similarly, in the case of Femto node mobility, the MLWDF scheduler at the Macro eNB and exponential scheduling at the Femto eNB showed the best possible combination; combining M-Based CQI with this heterogeneous scheduler increases the system performance by 1.1879% and 2.85% in terms of throughput and spectral efficiency, respectively, compared with not considering CQI.
Inequalities for integrals of the modified Struve function of the first kind

Simple inequalities for some integrals involving the modified Struve function of the first kind $\mathbf{L}_{\nu}(x)$ are established. In most cases, these inequalities have best possible constant. We also deduce a tight double inequality, involving the modified Struve function $\mathbf{L}_{\nu}(x)$, for a generalized hypergeometric function.

Introduction

In the recent papers [9] and [11], simple lower and upper bounds, involving the modified Bessel function of the first kind $I_\nu(x)$, were obtained for the integrals
$$\int_0^x \mathrm{e}^{-\gamma t} t^{\nu} I_\nu(t)\,\mathrm{d}t, \qquad \int_0^x \mathrm{e}^{-\gamma t} t^{\nu+1} I_\nu(t)\,\mathrm{d}t, \tag{1.1}$$
where $x>0$, $0\le\gamma<1$ and $\nu>-\frac{1}{2}$. For $\gamma>0$ there do not exist simple closed-form expressions for the integrals in (1.1). The inequalities of [9,11] were needed in the development of Stein's method [18,6,17] for variance-gamma approximation [7,8,10], although, as they are simple and surprisingly accurate, the inequalities may also prove useful in other problems involving modified Bessel functions; see, for example, [5], in which inequalities for modified Bessel functions of the first kind were used to obtain lower and upper bounds for integrals involving modified Bessel functions of the first kind.

In this note, we consider the natural problem of obtaining inequalities, involving the modified Struve function of the first kind, for the integrals
$$\int_0^x \mathrm{e}^{-\gamma t} t^{\nu} \mathbf{L}_\nu(t)\,\mathrm{d}t, \qquad \int_0^x \mathrm{e}^{-\gamma t} t^{\nu+1} \mathbf{L}_\nu(t)\,\mathrm{d}t, \tag{1.2}$$
where $x>0$, $0\le\gamma<1$ and $\nu>-\frac{3}{2}$, and $\mathbf{L}_\nu(x)$ is the modified Struve function of the first kind defined, for $x\in\mathbb{R}$ and $\nu\in\mathbb{R}$, by
$$\mathbf{L}_\nu(x)=\sum_{k=0}^{\infty}\frac{1}{\Gamma\!\left(k+\frac{3}{2}\right)\Gamma\!\left(k+\nu+\frac{3}{2}\right)}\left(\frac{x}{2}\right)^{2k+\nu+1}.$$
The modified Struve function $\mathbf{L}_\nu(x)$ is closely related to the modified Bessel function $I_\nu(x)$, and either shares or has a close analogue to the properties of $I_\nu(x)$ that were exploited in the derivations of the inequalities for the integrals in (1.1) by [9,11]. The function $\mathbf{L}_\nu(x)$ is itself a widely used special function; see a standard reference, such as [16], for its basic properties. It arises in manifold applications, including leakage inductance in transformer windings [12], perturbation approximations of lee waves in a stratified flow [15], and scattering of plane waves by soft obstacles [19]; see [1] for a list of further application areas.

When $\gamma=0$ both integrals in (1.2) can be evaluated exactly. Indeed, the second integral is equal to $x^{\nu+1}\mathbf{L}_{\nu+1}(x)$ (see [16], formula 11.4.29). The first integral can be evaluated because the modified Struve function $\mathbf{L}_\nu(x)$ can be represented as a generalized hypergeometric function. To see this, recall that the generalized hypergeometric function (see [16] for this definition and further properties) is defined by
$${}_pF_q(a_1,\dots,a_p;b_1,\dots,b_q;x)=\sum_{k=0}^{\infty}\frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\frac{x^k}{k!},$$
where the Pochhammer symbol is given by $(a)_0=1$ and $(a)_k=a(a+1)(a+2)\cdots(a+k-1)$, $k\ge1$. Then, for $-\nu-\frac{3}{2}\notin\mathbb{N}$, we have the representation
$$\mathbf{L}_\nu(x)=\frac{x^{\nu+1}}{2^{\nu}\sqrt{\pi}\,\Gamma\!\left(\nu+\frac{3}{2}\right)}\,{}_1F_2\!\left(1;\frac{3}{2},\nu+\frac{3}{2};\frac{x^2}{4}\right)$$
(see also [1] for other representations in terms of the generalized hypergeometric function). A straightforward calculation then yields
$$\int_0^x t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t=\frac{x^{2\nu+2}}{2^{\nu+1}\sqrt{\pi}\,(\nu+1)\Gamma\!\left(\nu+\frac{3}{2}\right)}\,{}_2F_3\!\left(1,\nu+1;\frac{3}{2},\nu+2,\nu+\frac{3}{2};\frac{x^2}{4}\right). \tag{1.3}$$

When $\gamma\neq0$, there does, however, not exist a closed-form formula for the integrals in (1.2). Moreover, even when $\gamma=0$ the first integral is given in terms of the generalized hypergeometric function. This provides the motivation for establishing simple bounds, involving the modified Struve function $\mathbf{L}_\nu(x)$, for these integrals.
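The $\gamma=0$ evaluation (1.3) above had to be reconstructed from the series definition during editing, so a quick numerical cross-check is worth recording. The sketch below, assuming mpmath's `struvel`, `hyper`, and `quad` routines, compares direct quadrature of the left-hand side of (1.3) with the ${}_2F_3$ closed form; the two printed columns should agree to working precision.

```python
from mpmath import mp, mpf, quad, struvel, hyper, gamma, sqrt, pi

mp.dps = 30  # working precision (decimal digits)

def lhs(nu, x):
    # Direct quadrature of the integral on the left-hand side of (1.3).
    return quad(lambda t: t**nu * struvel(nu, t), [0, x])

def rhs(nu, x):
    # Reconstructed closed form: prefactor times 2F3.
    pre = x**(2*nu + 2) / (2**(nu + 1) * sqrt(pi) * (nu + 1)
                           * gamma(nu + mpf(3)/2))
    return pre * hyper([1, nu + 1],
                       [mpf(3)/2, nu + 2, nu + mpf(3)/2], x**2/4)

for nu, x in [(mpf(0), mpf(2)), (mpf('0.5'), mpf(1)), (mpf(2), mpf(3))]:
    print(nu, x, lhs(nu, x), rhs(nu, x))
```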
The approach taken in this note to bound the integrals in (1.2) is similar to that used by [9,11] to bound the related integrals involving the modified Bessel function $I_\nu(x)$, and the inequalities obtained in this note are of a similar form to those obtained for the integrals involving $I_\nu(x)$. As already noted, the reason for this similarity is that many of the properties of the modified Bessel function $I_\nu(x)$ that were exploited in the proofs of [9,11] are shared by the modified Struve function $\mathbf{L}_\nu(x)$, which we now list. All these formulas can be found in [16], except for the inequality (1.8), which is given in [2]. Further inequalities for $\mathbf{L}_\nu(x)$ can be found in [2] and [4], some of which improve results of [14].

For positive values of $x$ the function $\mathbf{L}_\nu(x)$ is positive for $\nu>-\frac{3}{2}$. The function $\mathbf{L}_\nu(x)$ satisfies the recurrence relation and differentiation formula
$$\mathbf{L}_{\nu-1}(x)-\mathbf{L}_{\nu+1}(x)=\frac{2\nu}{x}\,\mathbf{L}_\nu(x)+\frac{(x/2)^{\nu}}{\sqrt{\pi}\,\Gamma\!\left(\nu+\frac{3}{2}\right)}, \tag{1.4}$$
$$\frac{\mathrm{d}}{\mathrm{d}x}\big(x^{\nu}\mathbf{L}_\nu(x)\big)=x^{\nu}\mathbf{L}_{\nu-1}(x). \tag{1.5}$$
The function $\mathbf{L}_\nu(x)$ has the following asymptotic properties:
$$\mathbf{L}_\nu(x)\sim\frac{2}{\sqrt{\pi}\,\Gamma\!\left(\nu+\frac{3}{2}\right)}\left(\frac{x}{2}\right)^{\nu+1},\quad x\downarrow0, \tag{1.6}$$
$$\mathbf{L}_\nu(x)\sim\frac{\mathrm{e}^x}{\sqrt{2\pi x}},\quad x\to\infty. \tag{1.7}$$
We shall also use the inequality
$$\mathbf{L}_\nu(x)<\mathbf{L}_{\nu-1}(x),\quad x>0,\ \nu\ge\tfrac{1}{2}. \tag{1.8}$$

We end this introduction by noting that [9,11] also derived lower and upper bounds for the analogous integrals involving $K_\nu(x)$, the modified Bessel function of the second kind. Analogously to the problem studied in this note, it is natural to ask for bounds for the corresponding integrals involving $\mathbf{M}_\nu(x)$, the modified Struve function of the second kind. However, the inequalities of [9,11] for integrals involving $K_\nu(x)$ do not have a natural analogue for $\mathbf{M}_\nu(x)$. Unlike the function $\mathbf{L}_\nu(x)$, for general values of $\nu$, some of the crucial properties of $K_\nu(x)$ that were exploited in the proofs of [9,11] do not have an analogue for $\mathbf{M}_\nu(x)$. Indeed, the function $\mathbf{M}_\nu(x)$ does not have the exponential decay as $x\to\infty$ that $K_\nu(x)$ has, and is in fact unbounded when $\nu>1$ (see formula 11.6.2 of [16]). Moreover, despite possessing some interesting monotonicity properties (see [3]), $\mathbf{M}_\nu(x)$ does not have an analogue of the inequality $K_\nu(x)>K_{\nu-1}(x)$, $\nu>\frac{1}{2}$ (see [13]), which was heavily used in the proofs of [9,11].

Inequalities for integrals of the modified Struve function of the first kind

In the following theorem, we establish inequalities (2.9)-(2.15) for the integrals in (1.2), which are natural analogues of the inequalities for the integrals in (1.1) that are given in Theorem 2.1 of [9] and Theorem 2.3 of [11]. We have equality in (2.12) and (2.14) if and only if $\gamma=0$. The constants in the bounds (2.10)-(2.15) cannot be improved, and the constant in (2.9) is also best possible if $\gamma=0$. Inequalities (2.9) and (2.14) hold for all $\gamma>0$.

Applying inequality (2.9) with $\gamma=0$ to the integral on the right-hand side of the above expression then yields (2.11), as required.

(vi) Let $\nu>-\frac{3}{2}$, so that the integral exists. Since $\gamma>0$,

(vii) Consider the function

In order to prove the result, we argue that $u(x)>0$ for all $x>0$. Using the differentiation formula (1.5), we have

where we used (1.8) to obtain the inequality. Also, from (1.6), as $x\downarrow0$,

Thus, we conclude that $u(x)>0$ for all $x>0$, as required.

Thus, if $M>1$ then $u_M(x)>0$ in a small positive neighbourhood of the origin, from which we conclude that the constant ($M=1$) in (2.14) is best possible.

We end by noting that we can combine the inequalities of Theorem 2.1 and the integral formula (1.3) to obtain lower and upper bounds for a generalized hypergeometric function (Corollary 2.2).

Remark 2.3. We know from Theorem 2.1 that the constants in the double inequality in Corollary 2.2 are best possible. Also, the double inequality is clearly tight in the limit $\nu\to\infty$, and part (viii) of the proof of Theorem 2.1 tells us that the inequality is tight as $x\to\infty$.
To gain further insight into the approximation, we used Mathematica to carry out some numerical experiments. Denote by $L_\nu(x)$ and $U_\nu(x)$ the lower and upper bounds in the double inequality, and denote by $F_\nu(x)$ the expression involving the generalized hypergeometric function that is bounded by these quantities. The relative errors in approximating $F_\nu(x)$ by $L_\nu(x)$ and $U_\nu(x)$ are given in Tables 1 and 2. For a given $x$, we observe that the relative error in approximating $F_\nu(x)$ by either $L_\nu(x)$ or $U_\nu(x)$ decreases as $\nu$ increases. We also notice that, for a given $\nu$, the relative error in approximating $F_\nu(x)$ by $L_\nu(x)$ decreases as $x$ increases. However, from Table 2 we see that, for a given $\nu$, as $x$ increases the relative error in approximating $F_\nu(x)$ by $U_\nu(x)$ initially increases before decreasing. This is because, for $\nu>-\frac{1}{2}$, $\lim_{x\downarrow0}U_\nu(x)/F_\nu(x)=1$, and so the relative error in approximating $F_\nu(x)$ by $U_\nu(x)$ is 0 in the limit $x\downarrow0$. The limit $\lim_{x\downarrow0}U_\nu(x)/F_\nu(x)=1$ follows from combining the formula $F_\nu(x)=x^{-\nu}\int_0^x t^{\nu}\mathbf{L}_\nu(t)\,\mathrm{d}t$ and the limiting forms (2.19) and (2.20) (with $n=0$).
Airway ultrasound to detect subglottic secretion above endotracheal tube cuff

Background: Subglottic secretion has been proven to be one of the causes of microaspiration and an increased risk of ventilator-associated pneumonia (VAP). The role of ultrasound in detecting subglottic secretion has not yet been established.

Purpose: The purpose of this study is to determine the sensitivity and specificity of upper airway ultrasound (US) in the detection of subglottic secretions as compared with computed tomography (CT) scanning.

Material and methods: A prospective observational study was carried out in adult trauma patients requiring mechanical ventilation and a cervical CT scan. All patients had an endotracheal tube cuff pressure maintained between 20 and 30 cm H2O. Airway US was performed at the bedside immediately before the patient was transferred to the CT scan suite. The sensitivity, specificity, and positive/negative predictive values (PPV, NPV) of upper airway US detection of subglottic secretions were then calculated and compared with the CT findings.

Results: Fifty participants were consecutively included. Subglottic secretions were detected in 31 patients using upper airway US. The sensitivity and specificity of upper airway US in detecting subglottic secretion were 96.7% and 90%, respectively (PPV 93.5%, NPV 94.7%). Eighteen (58%) patients with subglottic secretions developed VAP during their ICU stay (p = 0.01). The area under the receiver operating curve (AUROC) was 0.977 (95% CI 0.936-1.00).

Conclusions: Upper airway US is a useful tool for detecting subglottic secretions, with high sensitivity and specificity.

Clinical implications: This study shows that upper airway US may aid in detecting subglottic secretions, which are linked to VAP; that detecting subglottic secretions at the bedside aids in determining the best frequency of subglottic aspiration to clean the subglottic trachea; and that upper airway US may also aid in detecting the correct ETT position.

Trial registration: ClinicalTrials.gov identifier NCT04739878; date of registration 2nd May 2021; URL of trial registry record: https://clinicaltrials.gov/ct2/show/NCT04739878.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13089-023-00318-5.

Introduction

Orotracheal intubation is provided to many critically ill patients who require airway protection and/or ventilator support. To maintain an appropriate airway seal, the endotracheal tube (ETT) cuff pressure should be maintained between 20 and 30 cm H2O, so as to minimize airway leak while avoiding compromising the integrity of the tracheal mucosa [1]. Secretions that accumulate above the ETT cuff are not easily removed with tracheal tubes that lack subglottic secretion drainage (SSD) ports, and they therefore predispose to microaspiration and ventilator-associated pneumonia [2-5]. Nevertheless, the SSD technique is still underutilized [4,5]. The main reasons are the costs and safety issues of SSD, which may cause prolapse of the tracheal mucosa into the suction port, especially with continuous aspiration of the subglottic area [6]. Therefore, intermittent SSD is recommended, and its efficiency may be improved by synchronizing drainage with subglottic secretion accumulation [6,7]. Subglottic secretions above the ETT cuff can be detected directly by aspiration of the subglottic area [1] or through visualization by imaging [8].
CT features of subglottic secretions include complex shapes, internal air bubbles, location in the dependent portion, and CT attenuation of Hounsfield units < 21.7 [9]. To our knowledge, studies using airway ultrasound (US) to visualize subglottic secretions in intubated patients are scarce [10, 11]. The main objective of our study is to compare the performance of US with CT in detecting subglottic secretions above the ETT cuff. Study design, setting and ethical consideration Consecutive trauma patients admitted to the Emergency Department of Raja Permaisuri Bainun Hospital from November 2021 to January 2022 who required endotracheal intubation and a cervical computed tomography (CT) scan were enrolled. Ethical approval was granted by the Medical Research and Ethics Committee of the Malaysia Ministry of Health [NMRR-21-1852-61475 (IIR)]. The study was also registered at ClinicalTrials.gov (identifier: NCT04739878). Written informed consent was obtained from the patients or their next of kin. Patients were excluded if they had any of the following criteria: (1) subcutaneous emphysema of the neck; (2) scars or surgical dressings around the neck, which can lead to difficulty in obtaining optimal ultrasound images. Intervention All patients were kept in the supine position, intubated with an Idealcare (Ideal Healthcare Sdn Bhd, Malaysia) oral high-volume, low-pressure cuffed ETT, and then mechanically ventilated. The decision to intubate the patient was made by the primary care team, without the participation of the investigator team. The ventilator settings were initiated at the discretion of the treating physician. The ETT pilot balloon was connected to a cuff pressure tube with filter (Promepla S.A.M., Monaco) and inflated with air to adequately seal the airway. The ETT cuff pressure was continuously measured and monitored using the IntelliCuff of the Hamilton G5 ventilator (Hamilton, Switzerland). The target cuff pressure was set between 20 and 30 cm H2O [12]. IntelliCuff® automatically adjusted the cuff pressure within these values. In the event of a damaged cuff, the device generated an alarm while simultaneously increasing the pressure so as to maintain the desired cuff pressure. The following data were recorded at the time of intubation: patient demographics, size of the ETT, and physiological parameters. Studied patient outcomes included the incidence of VAP, mortality, intensive care unit (ICU) and hospital lengths of stay, and days on mechanical ventilation. The CT scan was used as the gold standard for delineating subglottic secretions. All patients were admitted to the ICU after the CT scan. Ultrasound examination Airway US was performed at the bedside immediately before the patient was transferred to the CT scan suite. This ensured that subglottic secretions observed by US would also be detected by CT. Airway US was performed by critical care physicians and emergency physicians trained in critical care sonography, with a minimum of 5 years of experience. All investigators had undergone airway US training by the World Interactive Network Focused On Critical Ultrasound (WINFOCUS). A sagittal (longitudinal) view examination was performed at the anterior midline of the neck to identify the air-mucosal (A-M) interface, the ETT cuff, and surrounding structures of importance such as the thyroid cartilage, cricoid cartilage, cricothyroid membrane and tracheal rings (Fig. 1, Additional file 1: Video S1).
Transverse view examination was performed at the level of the cricoid cartilage, transversely across the anterior surface of the neck (Fig. 2, Additional file 2: Video S2). In order to acquire an image of subglottic secretions at the posterior part of the ETT cuff, a parasagittal (lateral to the midline) scan was performed on the lateral right side of the neck with the transducer tilted caudally (Additional file 1: Video S3). The presence of subglottic secretions was defined as heterogeneous or homogeneous fluid collections, or comet-tail artefacts caused by bubble-rich secretions, above the ETT cuff [10]. The air-mucosal interface was observed as bright, hyperechoic mobile lines. The thyroid and cricoid cartilages both had an oval hypoechoic appearance in the parasagittal view and appeared as a hump in the transverse view. The thyroid cartilage was more anterior and larger in size compared to the cricoid cartilage. In the longitudinal plane, the tracheal cartilages were seen as a "string of beads", and as an inverted U in the transverse plane. A failed US consisted of the inability to identify key anatomical structures or the inability to visualize the ETT balloon cuff [13, 14]. Neck CT scans were performed using a 64-multislice detector CT machine (Toshiba Aquilion CX 2010, Japan). CT findings were examined by a radiologist with more than 10 years of experience. The investigator radiologist and US operators were blinded to the findings obtained with the other technique. Sample size considerations We calculated the sample size to determine whether an area under the curve (AUC) of ≥ 0.75 was achieved for a receiver operating characteristic (ROC) plot of neck US for detecting subglottic secretions versus cervical CT as the gold standard. The null hypothesis was set as AUC 0.5 (meaning no discriminating power), with a type 1 error of 0.05 and power of 80%. Based on unpublished data from our own experience, with a precision of 10% and an expected proportion of subglottic secretions on cervical CT scan of 80%, the sample size required was 45. Taking into account the potential for 10% incomplete data from neck US or cervical CT, we included 49 patients for the final analysis. The AUROC sample size calculation was performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium) [15]. Statistical analysis The characteristics of the patients were summarized as medians and interquartile ranges for continuous variables, and as numbers and percentages for qualitative variables. The ROC curve and AUC estimates were determined for the relationship of neck US and cervical CT in diagnosing subglottic secretions. Sensitivity (Se), specificity (Sp), negative predictive value (NPV) and positive predictive value (PPV) were provided with their 95% confidence intervals (CIs). ROC and AUC, Se, Sp, NPV and PPV were determined for neck US in diagnosing subglottic secretions according to the cervical CT findings as the gold standard. For all calculations, SPSS Statistics for Windows, Version 20.0 (IBM, Armonk, NY, USA) was used. The significance level was set at p < 0.05. The inter-observer and intra-observer agreement percentages were calculated by dividing the number of occasions of agreement by the total number of occasions. Weighted kappa statistics were applied to determine the degree of agreement.
The kappa statistic was interpreted as follows: less than 0.00, poor agreement; 0.00–0.20, slight agreement; 0.21–0.40, fair agreement; 0.41–0.60, moderate agreement; 0.61–0.80, substantial agreement; and 0.81–1.00, almost perfect agreement. The level of statistically significant difference was p < 0.01. Statistical analyses were performed with SAS software version 9.1 (SAS Institute) [16]. [Fig. 4: STARD flow diagram.] The mean time from intubation to bedside airway US was 3.7 h (± 2.5 h). There was no significant difference between patients with subglottic secretions and those with no subglottic secretions in terms of comorbidities, ETT size, Glasgow Coma Scale on arrival, Sequential Organ Failure Assessment score, Acute Physiology and Chronic Health Evaluation II (APACHE II) score, Simplified Acute Physiology Score II (SAPS II) and antibiotic use (Table 1). Secondary outcomes There was a significantly higher incidence of VAP in patients with subglottic secretions compared to those with no subglottic secretions (58% vs 11%, p = 0.01). However, there was no significant difference between the groups in terms of mortality, ICU length of stay, hospital length of stay, and days on the ventilator (Table 1). Discussion Proper management of either bronchial secretions or subglottic secretions is of utmost importance to prevent VAP in intubated patients [5], in addition to maintaining the ETT cuff pressure within the recommended target values [17]. VAP incidence is lower in patients who have been intubated with a specialized ETT that includes a subglottic secretion drainage port [17]. While the impact of subglottic secretions is often recognized only once aspiration has occurred and VAP develops, airway US, as shown in our study, can detect subglottic secretions repeatedly at the patient's bedside before aspiration occurs, aiding in providing timely subglottic drainage and preventing microaspiration. In addition, airway US may aid in determining the position of the ETT cuff and in adjusting ETT depth as needed [14]. Compared with US, advanced techniques such as CT [18] or magnetic resonance imaging delineate the subglottic space with precision [19]. However, their main indications are the detection of subglottic stenosis, laryngeal tumours and neck trauma [19, 20]. Furthermore, these modalities require moving the patient to the radiology department, are costly, and expose the patient to ionizing radiation (CT). In contrast, point-of-care US is widely available in the emergency department and ICU, is cheaper, does not expose the patient to ionizing radiation, and can be performed at the patient's bedside. US has been proven to assist in distinguishing endotracheal from esophageal intubation, with a short learning curve. In addition, it provides adequate images of the subglottic space to detect subglottic secretions and the position of the ETT cuff [21–24]. The detection of subglottic secretions using plain radiography was described by Greene et al. in 1994 [8]. Using airway US to observe the subglottic area is a more recent development. Tao et al. first demonstrated ultrasound-guided visualization of subglottic secretions in an intubated patient, by injecting saline through the subglottic catheter above the ETT cuff [10]. This was followed by a case report by Yan et al., who detected subglottic secretions in a patient who had gastric regurgitation while undergoing general anaesthesia [11].
Our study showed that airway US can be a reliable tool for visualizing subglottic secretions in intubated patients in the emergency department; their early visualization may prompt timely secretion aspiration and eventually reduce microaspiration. Continuous SSD has been found to be associated with tracheal mucosal damage. Performing intermittent SSD when an accumulation of subglottic secretions is detected by ultrasound may help to mitigate this risk [6, 7, 25]. However, this hypothesis should be tested in an appropriately designed and powered trial, which should also evaluate the impact on VAP prevention. Limitations The limitations of our study are the small sample size and a study population consisting exclusively of trauma patients. There was also a selection bias, as patients with airway trauma were excluded. Nevertheless, our findings may be replicable in other types of patients. An important limitation concerns the correlation with VAP, since our study assessed subglottic secretions at only one point in time (i.e., before patient admission to the ICU). The best frequency at which to perform US and subglottic suctioning has yet to be determined. Regular US detection of subglottic secretions synchronized with subglottic suctioning could more clearly define the impact on VAP and is a matter for future studies.
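As a concrete illustration of the diagnostic-accuracy arithmetic reported above, the published percentages (Se 96.7%, Sp 90%, PPV 93.5%, NPV 94.7% among 50 patients) imply a 2 × 2 contingency table. A minimal Python sketch follows, in which the cell counts are inferred from those percentages rather than stated explicitly in the paper:

# Cell counts inferred from the reported percentages and N = 50; they are an
# assumption for illustration, not values quoted in the paper.
TP, FP, FN, TN = 29, 2, 1, 18   # US result vs. CT (gold standard)

sensitivity = TP / (TP + FN)    # 29/30 ≈ 0.967
specificity = TN / (TN + FP)    # 18/20 = 0.900
ppv = TP / (TP + FP)            # 29/31 ≈ 0.935
npv = TN / (TN + FN)            # 18/19 ≈ 0.947

print(f"Se={sensitivity:.1%}  Sp={specificity:.1%}  "
      f"PPV={ppv:.1%}  NPV={npv:.1%}")

These four ratios reproduce the reported values exactly, which is a useful consistency check when reading diagnostic-accuracy studies.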
An SCC-DFTB Repulsive Potential for Various ZnO Polymorphs and the ZnO–Water System We have developed an efficient scheme for the generation of accurate repulsive potentials for self-consistent charge density-functional-based tight-binding calculations, which involves energy-volume scans of bulk polymorphs with different coordination numbers. The scheme was used to generate an optimized parameter set for various ZnO polymorphs. The new potential was subsequently tested for ZnO bulk, surface, and nanowire systems as well as for water adsorption on the low-index wurtzite (101̅0) and (112̅0) surfaces. By comparison to results obtained at the density functional level of theory, we show that the newly generated repulsive potential is highly transferable and capable of capturing most of the relevant chemistry of ZnO and the ZnO/water interface. INTRODUCTION Zinc oxide is a wide band gap semiconductor (band gap 3.4 eV), used in several technologically important applications, such as heterogeneous catalysis, 1 gas sensors, 2 and microelectronic devices. 3 Many of these applications have in common that they rely on the specific electronic properties of ZnO and how they can be modified by dopants or by specific structural features. In particular, ZnO nanoparticles of various sizes can be grown to exhibit a large number of different shapes, such as wires, spheres, and helices. 2 ZnO nanoparticles can also be used in, for example, sunscreens to protect against UV irradiation, but the toxicity of the nanoparticles is debated 4,5 and may be related to the dissolution of Zn2+ ions. The interaction of ZnO with water or OH groups is of particular interest, because water in either the liquid or the gas phase tends to be present in most of the applications involving ZnO. At ambient conditions, ZnO exhibits the wurtzite crystal structure and principally exposes four different surfaces: the nonpolar ZnO(101̅0) and ZnO(112̅0) and the polar Zn-terminated ZnO(0001) and O-terminated ZnO(0001̅). 6 Adsorbed water adopts different structures on different surfaces. On ZnO(101̅0), experiments and calculations have shown that the most favorable adsorption structure for water is "half-dissociated", where every other water molecule is dissociated. 7 This leads to a particularly stable hydrogen-bonded network on the surface. However, the half-dissociated adsorption structure coexists with a molecular adsorption structure on the surface, 8 and calculations suggest that the amount of dissociated water may increase upon addition of more water layers. 9 On ZnO(112̅0), the situation is to a large extent unknown, although it has been proposed by density-functional theory (DFT) calculations that water adsorbs either fully dissociated 10 or half-dissociated. 11 The clean polar Zn-terminated ZnO(0001) surface exhibits triangular pits 12 to stabilize the inherent polarity in the ZnO[0001] direction. On this surface, water adsorbs dissociatively and reduces the pit sizes. 13 Finally, the O-terminated ZnO(0001̅) surface is usually hydrogen-covered under normal preparation conditions. Adsorption of water on this hydrogen-covered surface has been proposed to be molecular, while adsorption on the clean (non-hydrogen-covered) surface has been proposed to be dissociative.
14 Recently, there has also been interest in crystal structures of ZnO other than wurtzite, such as the closely related cubic zincblende structure, 15 as well as high-pressure modifications such as the NaCl-type and CsCl-type structures 16−19 and the lesser-known "graphitic" 20−25 and low-density "body-centered tetragonal" 26 (BCT) 25,27−32 and cubane-type 33 structures. The graphitic and BCT structures have been proposed to form during thin-film growth of ZnO to quench the macroscopic dipole moment formed as wurtzite ZnO grows along the polar ZnO[0001]/ZnO[0001̅] directions. Each of these polymorphs has its own set of lattice parameters, formation energies, and electronic properties that may make it more suitable than the wurtzite structure for certain applications. Currently, all of the polymorphs except the CsCl, BCT, and cubane structures have been synthesized in experimental laboratories. However, a BCT-like structure has been shown to form dynamically at the ZnO(101̅0) surface. 32 With this palette of ZnO structures and their individual characters, it is highly desirable to have access to a set of theoretical methods which would allow for the study of different polymorphs of ZnO and their interaction with water at experimentally relevant size and time scales, concerning both structural and electronic properties. Given the significant computational cost associated with very large-scale and simultaneously accurate theoretical calculations at the ab initio level today, other methods, which bridge the existing size and time gaps between experiments and theory, are needed. Such methods are invariably parametrized or semiempirical, i.e., they rely on precalculated interaction parameters between different atomic species. The parameters are obtained by fitting them to the results of a set of reference calculations, usually performed at the ab initio level. Ideally, the method should contain only a small set of parameters that can be calculated from ab initio methods in a direct and easy manner. One such method is the density-functional-based tight-binding method with self-consistent charges 34 (SCC-DFTB), a method which has been successfully used in several applications, including biological systems 35 and inorganic materials. 36 SCC-DFTB allows for explicit calculations of the electronic structure (e.g., orbital population analysis and band structure calculations) of a system, which makes the approach suitable for studying, for example, charge transfer, a key ingredient in many chemical reactions. There is an SCC-DFTB parameter set for the interactions between Zn, O, and H, called znorg-0-1. 37 It is based on the mio parameter set. 34 Another SCC-DFTB parametrization involving ZnO has also been reported, 38 but that parameter set is not publicly available. The znorg-0-1 potential has been used to study intrinsic defects in ZnO nanowires 39 and adsorption of various molecules on ZnO surfaces. 11,40,41 However, we have found that this potential displays overbinding at long Zn−O distances, which tend to be present in structures with high coordination numbers. This has implications for the comparison of different structural phases of ZnO. For example, experimentally, the cubic NaCl-type structure of ZnO is less stable than the wurtzite structure, i.e., at ambient conditions the wurtzite structure is the most stable. However, in this paper we find that the znorg-0-1 potential gives the opposite result.
The overbinding at long Zn−O distances is also problematic when comparing adsorption configurations for water molecules on ZnO surfaces in which the water oxygen atom coordinates one or two Zn atoms on the surface. In an effort to overcome these problems, we have here generated a new SCC-DFTB repulsive potential which accurately handles all relevant phases of ZnO, for different bond lengths and coordination numbers. We achieved this by, in contrast to previous works, including structures of different coordination numbers directly in the parametrization. In this procedure, we have adopted ideas that have previously been used for the parametrization of a reactive force field for zinc oxide. 42 We will show that this new parametrization also improves the chemical description of ZnO surfaces, nanowires, and especially of the ZnO/water interface, for which we obtain good agreement with higher-level theoretical calculations (DFT) and with available experimental data. In this work, we propose a method for efficient optimization of the repulsive potential within the SCC-DFTB framework. Specifically, based on a set of reference DFT calculations, we optimize the Zn−O repulsive potential to accurately describe the low-energy polymorphs of ZnO under various strains. We then use this optimized parameter set, which we will call znopt, to study ZnO nanowires and the ZnO/water interface. In the paper, we repeatedly demonstrate the accuracy of the calculated SCC-DFTB results with respect to calculations at the GGA-PBE 43 density functional level, which constitutes our framework of reference. METHODS 2.1. SCC-DFTB. The SCC-DFTB method has been described in detail elsewhere. 44,45 Here we will just briefly present the points necessary to appreciate the results of this paper. SCC-DFTB is an approximate quantum-chemical method where the total energy is expressed as a second-order expansion of the DFT energy with respect to charge density fluctuations. The total energy is usually expressed as the sum of three terms: the band-structure term E_BS, the second-order term E_second, and the repulsive term E_rep. The band-structure and repulsive terms are also found in "normal" (non-SCC) DFTB, while the second-order term is unique to the SCC-DFTB method. The band-structure term corresponds to the sum of the energies of all occupied electronic eigenstates of the Hamiltonian. The eigenstates are expressed in a minimal basis of pseudoatomic orbitals generated from an all-electron calculation for each atomic species. The (distance-dependent) overlap matrix elements and the matrix elements representing the Hamiltonian in this basis are precalculated once and stored in Slater−Koster tables. The second-order term contains the energy contributions due to charge density fluctuations in the system. These fluctuations are approximated by atomic charges derived from Mulliken population analysis. 46 The energy associated with the interaction of these atomic Mulliken charges is derived using the approximation of spherically symmetric Gaussian-shaped atomic charge densities, leading to an analytical expression that determines the on-site and interatomic contributions to the second-order term. These contributions depend on the individual atomic hardnesses, which are usually expressed in terms of the Hubbard U parameter. The band-structure term and the second-order term constitute the electronic part of the total energy.
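For orientation, the decomposition just described can be written out explicitly. The following is a standard textbook form of the SCC-DFTB total energy, consistent with refs 44 and 45, given here as a sketch rather than reproduced from this paper:

\[ E_\mathrm{tot} \;=\; \underbrace{\sum_i^{\mathrm{occ}} n_i \varepsilon_i}_{E_\mathrm{BS}} \;+\; \underbrace{\frac{1}{2} \sum_{I,J} \gamma_{IJ}\, \Delta q_I\, \Delta q_J}_{E_\mathrm{second}} \;+\; E_\mathrm{rep}, \]

where n_i and ε_i are the occupations and eigenvalues of the DFTB Hamiltonian, Δq_I is the Mulliken charge fluctuation on atom I, and γ_IJ is the distance-dependent Coulomb kernel that reduces to the Hubbard parameter U_I for the on-site (I = J) contributions.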
The total energy of the system also includes the repulsive term, which contains the ion−ion repulsion (hence the name) as well as exchange-correlation contributions and "double-counting" corrections. In practice, it is usually approximated by a sum of pairwise atomic interactions (eq 1), \( E_\mathrm{rep} = \sum_{I<J} V_\mathrm{rep}^{IJ}(r_{IJ}) \), where \( V_\mathrm{rep}^{IJ}(r_{IJ}) \) is the potential between atoms I and J at a distance r_IJ. These pairwise potentials are obtained by fitting them either to experimental data or to the results of theoretical calculations of higher accuracy, e.g., ab initio calculations. 2.2. Generation of the Repulsive Potential. To retain compatibility with the well-established mio parameter set, 34 and to some extent with the znorg-0-1 37 parameter set (which also involves interactions between Zn and the elements C, S, N, and P that are not treated in the present work), we have kept all the Hubbard parameters for the elements and the Slater−Koster tables from the mio and znorg-0-1 parameter sets. The only part that we have changed is the Zn−O repulsive potential from the znorg-0-1 set (i.e., all the other repulsive potentials are also kept the same as in znorg-0-1). The znorg-0-1 repulsive potential was only trained to reproduce results for the cubic zincblende polymorph of ZnO. 37 In the present work we show that the repulsive energy term obtained with the znorg-0-1 potential is too small at longer distances (above 2 Å), which are present in ZnO structures with high Zn coordination numbers. Thus, using the znorg-0-1 potential, the NaCl-type polymorph of ZnO, which is six-coordinated and therefore exhibits relatively long Zn−O bonds, is considerably more stable than the experimentally found wurtzite structure, which is four-coordinated and exhibits relatively short Zn−O bonds. We therefore set out to generate a new repulsive potential (znopt), with an increased repulsion at long distances, to obtain the correct stability order of the wurtzite and NaCl-type polymorphs of ZnO. To achieve this, we performed volume scans for ZnO in the wurtzite structure and the NaCl-type structure (see Figure 1) using density functional theory (details are given in section 2.3). The necessity of including structures with different coordination numbers and different bond lengths in the parametrization has been pointed out before, 45,47 but this seems not yet to have been the practice for SCC-DFTB parametrizations of solid-state materials. Atomic positions and lattice parameters were optimized with DFT, giving a Zn−O bond length of 2.17 Å in the NaCl-type structure and a (shortest) bond length of 2.01 Å in the wurtzite structure. The optimized DFT geometry was then scaled isotropically in a volume range of 73% to 133% in steps of approximately 6%. Single-point energy calculations were performed at each volume, using both DFT and SCC-DFTB. The ideal SCC-DFTB repulsive energy, i.e., the repulsive energy which would result in perfect agreement between DFT and SCC-DFTB and which we denote Ẽ_rep, is the difference between the DFT total energy and the electronic part of the SCC-DFTB energy for each structure S, calculated with respect to the DFT-optimized wurtzite structure S_ref (eq 2): \( \tilde{E}_\mathrm{rep}(S) = \left[ E_\mathrm{DFT}(S) - E_\mathrm{el}(S) \right] - \left[ E_\mathrm{DFT}(S_\mathrm{ref}) - E_\mathrm{el}(S_\mathrm{ref}) \right] + \tilde{E}_\mathrm{rep}(S_\mathrm{ref}) \), where E_DFT is the total DFT energy and E_el is the electronic part of the SCC-DFTB energy. The ideal value for Ẽ_rep(S_ref) is the repulsive energy value that gives the desired atomization energy of the solid. It is, however, in principle possible to choose any arbitrary value for this quantity.
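A minimal sketch of how the fitting targets of eq 2 can be assembled from paired calculations (Python; the numerical values are placeholders, and the variable names are ours, not part of the original workflow):

import numpy as np

# Placeholder energies (eV) along an isotropic volume scan. In practice,
# E_dft comes from the DFT code and E_el is the electronic part
# (band-structure + second-order terms) of the SCC-DFTB energy computed
# for the same structures.
E_dft = np.array([-8.90, -9.20, -9.10])   # E_DFT(S)
E_el  = np.array([-7.10, -7.60, -7.40])   # E_el(S)
i_ref = 1            # index of the DFT-optimized wurtzite reference S_ref
E_rep_ref = 0.0      # value assigned to the reference repulsive energy (see text)

# Eq 2: target repulsive energy for each structure, relative to S_ref.
E_rep_target = (E_dft - E_dft[i_ref]) - (E_el - E_el[i_ref]) + E_rep_ref
print(E_rep_target)

The pairwise potential V_rep is then adjusted until the summed pair contributions reproduce these targets as closely as possible.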
The absolute values of the atomization energies in the SCC-DFTB calculations will then not match the DFT values, although the relative atomization energies will match. By sacrificing some of the accuracy of the atomization energy, it may be possible to improve other aspects such as surface relaxation 37 or water adsorption energies. This has been the approach used in this work, i.e., the absolute values of the atomization energies were varied until all other properties considered in this work were reproduced satisfactorily by the SCC-DFTB calculations. The error introduced in the absolute atomization energies can thus effectively be regarded as an error in the description of the isolated atoms, but because we are primarily interested in the chemistry of ZnO with a coordination number near 4, we believe this to be a sound approach. The goal of the parametrization was thus to find V_rep of eq 1 so that the difference between Ẽ_rep and E_rep was minimized. We chose to express V_rep using the four-range Buckingham potential, which divides V_rep into four different functional forms depending on the interatomic distance (see the schematic form sketched below). This expression allows for some flexibility without giving rise to unphysical variations or kinks in the potential, as the first and second derivatives are continuous over the entire distance range. In the parametrization, we varied the five limits r_min to r_max manually and added a tapering function to let the repulsive potential smoothly approach zero at r = 3.0 Å, which is smaller than the second-nearest-neighbor Zn−O distances in the structures considered. 2.3. Computational Details. The SCC-DFTB calculations were performed with the DFTB+ code, 49 together with both the original znorg-0-1 repulsive potential by Moreira et al. 37 and with our new improved repulsive potential, znopt. An empirical dispersion correction, similar to the Grimme dispersion correction 50 for DFT, was not used, because we validated our SCC-DFTB repulsive potential against results obtained with DFT without such a dispersion correction. The DFT reference calculations were performed using the generalized gradient approximation exchange-correlation functional PBE 43 in an implementation involving a plane-wave basis set with an energy cutoff of 500 eV and projector augmented wave (PAW) type pseudopotentials. 51,52 We explicitly treated one, six, and twelve valence electrons for H, O, and Zn, respectively. The PBE DFT functional was used as a reference because it was the functional used to derive the SCC-DFTB atomic basis for Zn. 37 The PBE calculations were performed using the VASP 53−55 package. Periodic boundary conditions in three dimensions were employed throughout. Geometry optimizations were performed using the conjugate gradient algorithm until all forces on the atoms were smaller than 5 meV/Å. Convergence tests with respect to the k-point sampling were made for each system under study (for the bulk calculations, we typically used a Γ-centered 7 × 7 × 7 k-point grid and correspondingly smaller grids for the surface and nanowire calculations). 2.4. Systems Studied. 2.4.1. ZnO Bulk Polymorphs. In this work, we have calculated properties such as lattice parameters, bulk moduli, and band gaps for the CsCl-type, NaCl-type, wurtzite, zincblende, graphitic, BCT, and cubane structures of ZnO. The unit cells for each of the seven polymorphs are shown in Figure 1. The hexagonal wurtzite structure is characterized not only by its lattice parameters a and c but also by the internal parameter u, which corresponds to the fractional displacements of the Zn and O sublattices along the c direction.
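The explicit four-range expression is not reproduced in this extraction. As a hedged sketch, a common four-range Buckingham convention for a purely repulsive pair potential (following, e.g., the implementation in the GULP code; the polynomial orders and range boundaries below are assumptions, not values taken from the paper) reads

\[ V_\mathrm{rep}(r) = \begin{cases} A\, e^{-r/\rho}, & r \le r_1, \\ \sum_{i=0}^{5} a_i r^i, & r_1 < r \le r_2, \\ \sum_{i=0}^{3} b_i r^i, & r_2 < r \le r_3, \\ 0, & r > r_\mathrm{max}, \end{cases} \]

with the coefficients a_i and b_i fixed by requiring continuity of V_rep and of its first and second derivatives at the range boundaries, consistent with the smoothness property stated above.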
We have chosen to model the BCT structure as an "ideal" BCT structure, i.e., with a = b so that the structure is tetragonal. The BCT structure then possesses two internal parameters, one of which we call u and which corresponds to the fractional displacements of the Zn and O sublattices along the a or b direction, and the other of which corresponds to the Zn−O−Zn angle α depicted in Figure 1. The internal parameters u and α are defined in a similar way for the cubane polymorph. In the parametrization procedure, only the wurtzite and NaCl-type polymorphs were included, while the other polymorphs were used for testing purposes. Thus, to ensure the transferability of our generated parameter set, we validated the parameters by calculating energy−volume scans for the high-density CsCl, intermediate-density graphitic and zincblende, and low-density BCT and cubane polymorphs. The optimized structures were strained from 73% to 133% in steps of approximately 6%. For each volume under consideration, the c/a ratio in the wurtzite, BCT, and graphitic structures was optimized, and all atoms were allowed to fully relax. The bulk modulus B_0 was obtained by fitting the energy−volume curves to a Murnaghan-type equation of state. The atomization energies E_at of the bulk polymorphs were calculated as \( E_\mathrm{at} = E_\mathrm{Zn\text{-}atom} + E_\mathrm{O\text{-}atom} - E_\mathrm{ZnO\text{-}crystal}/n \), where n is the number of ZnO formula units in the crystal unit cell, and E_Zn-atom, E_O-atom, and E_ZnO-crystal correspond to the energies of an isolated Zn atom, an isolated (spin-polarized) O atom, and the ZnO crystal, respectively. 2.4.2. Clean ZnO Surfaces. In the calculation of structural properties of the clean ZnO surfaces, we used twenty-layer-thick ZnO(101̅0) and ZnO(112̅0) slabs with a vacuum gap of at least 15 Å between neighboring systems. The systems were constructed using the bulk lattice parameters native to each method (i.e., the PBE-optimized bulk wurtzite lattice was used to construct the surfaces for the PBE calculations, etc.). Side and top views of these systems are shown in Figure 2. We characterized the surface structure through the angle θ that the ZnO "dimers" in the top layer along the polar ZnO[0001]/ZnO[0001̅] directions make to the corresponding layers in the bulk (indicated in Figure 2). Additionally, we calculated the surface energy as \( \sigma = (E_\mathrm{slab} - n E_\mathrm{ZnO\text{-}bulk})/A \), where E_slab is the total energy of the slab supercell, n is the number of ZnO formula units in the cell, E_ZnO-bulk is the energy of a formula unit of ZnO in the wurtzite bulk, and A is the total surface area of the slab supercell (twice the area of one face). The surface energies were converged to within 0.01 J/m2 for eight-layer-thick slabs. For this reason, eight-layer-thick slabs were used for the adsorption of water (see the following sections). No calculations were performed for the polar ZnO(0001) and ZnO(0001̅) surfaces, because these typically exhibit significant surface reconstructions and possibly surface metallization. 56,57 Because of the complexity involved in modeling these surfaces, they lie outside the scope of the current article. 2.4.3. ZnO Nanowires. We generated wires in the hexagonal wurtzite structure and in the NaCl-type structure (see Figure 3), and calculated formation energies as a function of wire diameter d for diameters in the range 9 to 44 Å using SCC-DFTB, and for diameters in the range 9 to 27 Å using PBE. The largest nanowire studied using PBE contained 200 formula units, while the largest nanowire studied using SCC-DFTB contained 588 formula units.
A vacuum gap of at least 15 Å was introduced between neighboring wires. The hexagonal wires of the wurtzite structure were terminated by {101̅0} facets and were periodic in the [0001̅] direction. The square wires of the NaCl-type structure were terminated by {100} facets and were periodic in the [001] direction. The formation energies of the nanowires per ZnO formula unit were calculated as \( \Delta E_\mathrm{f} = E_\mathrm{wire}/n - E_\mathrm{ZnO\text{-}bulk} \), where n is the number of ZnO formula units in the supercell, E_wire is the total energy of the wire supercell, and E_ZnO-bulk is the energy per ZnO formula unit in wurtzite bulk ZnO. 2.4.4. Water Adsorption on ZnO Surfaces. Single water molecules were adsorbed onto the ZnO(101̅0) and ZnO(112̅0) surfaces in various adsorption configurations, similar to the calculations performed by Meyer et al. 7 and große Holthaus et al. 11 The calculations were performed in a 3 × 2 supercell for ZnO(101̅0) (ca. 10 Å × 10 Å) and in a 2 × 2 supercell for ZnO(112̅0) (ca. 11 Å × 10 Å). Additionally, full-monolayer calculations in a 2 × 2 cell for both surfaces were performed in fully dissociated, half-dissociated, and molecular configurations. The water molecules were adsorbed onto only one side of the eight-layer-thick slab. The adsorption energy per water molecule was calculated as \( E_\mathrm{ads} = (E_\mathrm{tot} - E_\mathrm{slab} - N E_\mathrm{H_2O})/N \), where N is the number of water molecules per supercell, E_tot is the energy per supercell of the system with water adsorbed onto the ZnO slab, E_slab is the energy per supercell of the clean optimized ZnO slab, and E_H2O is the energy of an optimized isolated water molecule. Thus, the more negative the adsorption energy, the more stable the system. RESULTS AND DISCUSSION 3.1. The Optimization of the SCC-DFTB Repulsive Potential. Our optimized znopt repulsive potential is compared to the original znorg-0-1 repulsive potential in Figure 4. At distances longer than 2 Å, the znopt potential is more repulsive than the znorg-0-1 potential. Figure 5 shows energy−volume curves calculated using znorg-0-1, znopt, and the reference PBE method for the wurtzite and NaCl-type bulk structures of ZnO. These were the polymorphs that were explicitly fitted against in the znopt parametrization procedure. By inspection of Figure 5, it is clear that we have addressed the problem of the too-stable NaCl-type structure that plagued the znorg-0-1 repulsive potential. In the following sections, we will show that we improve the descriptions of many other types of systems as well. 3.2. ZnO Bulk Polymorphs. The calculated optimized lattice parameters, band gaps, and bulk moduli of the various bulk phases considered (CsCl-type, NaCl-type, wurtzite, zincblende, graphitic, BCT, and cubane) of ZnO are given in Table 1. The energy−volume curves calculated with both PBE and the znopt SCC-DFTB repulsive potential for all considered phases are shown in Figures 5 and 6. We find that the agreement between the znopt and the reference PBE results is in general very good, both for those structures that formed part of the parametrization and for the others. With both methods, the most stable structure is the wurtzite structure (as is the case in experiment), followed closely by the zincblende structure. At high pressures, a phase transformation into the NaCl-type structure can be expected, while at highly negative pressures, the BCT and cubane structures are the most stable. There are, however, some minor discrepancies between the PBE and znopt results.
For example, the equilibrium volume of the wurtzite structure is somewhat smaller with znopt compared to PBE, although the znopt-optimized lattice parameters are closer to the experimental values (a = 3.250 Å, c = 5.207 Å, u = 0.3825). 58 Additionally, the band gap obtained with znopt is 4.33 eV while the PBE band gap is 0.73 eV, so the znopt band gap is closer to the experimental value of 3.4 eV. 3 PBE greatly underestimates the band gap because of electron self-interaction, while SCC-DFTB overestimates the band gap because of the use of a minimal basis set. Here, we should point out that the original znorg-0-1 potential in fact gives a somewhat smaller band gap (3.78 eV). This is a result of the fact that the Zn−O bonds are longer in the znorg-0-1-optimized structure compared to the znopt-optimized structure. The underestimation of the band gap by PBE reaches an extreme for the CsCl-type polymorph of ZnO, where a metallic solution is obtained (as also noted by Wrobel and Piechota). 18 With znopt, a small indirect band gap of 0.48 eV remains. Uddin and Scuseria 17 performed hybrid density functional calculations on this polymorph and also noted a decrease of the band gap compared to the wurtzite structure, albeit only by 1.2 eV (from 2.9 to 1.7 eV). For the NaCl-type structure, we obtain an indirect band gap of 2.32 eV with znopt, which is similar to the experimental value of 2.45 ± 0.15 eV. 59 A more detailed view of the electronic structure of the various polymorphs is given in Figure 7. In Figure 7a, the electronic DOS obtained by znopt and PBE is shown for the wurtzite polymorph of ZnO. The energies have been aligned with respect to the O2s energy, which is the most "core-like" state that is explicitly treated in the calculations. It is clear that the larger band gap obtained with znopt mainly comes from a shift of the unoccupied states, i.e., the conduction band. The occupied valence band, on the other hand, is well described compared to PBE. In Figure 7b, schematic views of the DOS for all seven polymorphs are shown. The agreement between the znopt- and PBE-calculated valence bands for the four-coordinated wurtzite, zincblende, BCT, and cubane-type structures, as well as the five-coordinated graphitic structure, is very good. The positions of the valence bands relative to the O2s states with PBE and znopt match very well, and the znopt-calculated valence bands are only slightly more narrow than the corresponding PBE-calculated valence bands. For the higher-coordinated NaCl-type and CsCl-type polymorphs, the agreement between znopt and PBE is worse, with the valence bands being much too narrow in the znopt calculations. Thus, the valence bands of these high-coordinated phases are not very well described, and the znopt potential is consequently not very suitable for studying these phases, even if the formation energies relative to the wurtzite phase are qualitatively correct. [Table 1, footnote a: The band gaps for the NaCl- and CsCl-type structures are indirect (denoted with an asterisk). The angle α for the BCT and cubane structures is defined in Figure 1. The space groups and space group numbers for the various structures are also given.] The znopt-calculated bulk moduli are in general too high compared to PBE. This can be seen in Figures 5 and 6, where the znopt-calculated energy−volume curves are too steep compared to PBE, although good agreement with experiment is obtained (B_0 = 142.6 GPa for wurtzite and B_0 = 202.5 GPa for the NaCl-type structure). 60
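The bulk-modulus extraction described in the Methods (fitting energy−volume curves to a Murnaghan-type equation of state) can be sketched in a few lines of Python. The data points below are synthetic placeholders, not values from the paper:

import numpy as np
from scipy.optimize import curve_fit

def murnaghan(V, E0, V0, B0, B0p):
    # Murnaghan equation of state E(V); with E in eV and V in Å^3,
    # the fitted B0 comes out in eV/Å^3.
    return (E0
            + B0 * V / B0p * (((V0 / V) ** B0p) / (B0p - 1.0) + 1.0)
            - B0 * V0 / (B0p - 1.0))

# Placeholder energy-volume scan (per formula unit).
V = np.array([19.0, 21.0, 23.0, 25.0, 27.0])          # Å^3
E = np.array([-8.90, -9.15, -9.22, -9.18, -9.05])     # eV

p0 = [E.min(), V[np.argmin(E)], 0.9, 4.0]             # initial guess
(E0, V0, B0, B0p), _ = curve_fit(murnaghan, V, E, p0=p0)
print(f"V0 = {V0:.1f} Å^3, B0 = {B0 * 160.2176:.0f} GPa")  # 1 eV/Å^3 ≈ 160.22 GPa

Fitting the same functional form to the znopt and PBE scans is what yields B_0 values such as those quoted above.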
We note that the atomization energies of all the polymorphs are quite overestimated, as is often the case in SCC-DFTB calculations. 37,61 The reason for this is that by imposing the correct atomization energy (which we found can be achieved by increasing the Zn−O repulsive potential by roughly 0.65 eV near the Zn−O equilibrium distance in wurtzite, cf. eq 2), the adsorption of water molecules (see the following sections) on the ZnO surfaces becomes far too weak. Our reported znopt potential thus corresponds to a trade-off between correct descriptions of bulk ZnO and of water adsorption. Finally, we comment on the PBE-calculated results for the BCT polymorph. Our calculated equilibrium volume (26.1 Å3/ZnO) and relative stability (0.05 eV/ZnO less stable than the wurtzite structure) agree very well with the results of Morgan 29 and Zwijnenburg et al. 30 However, the results are considerably different from the equilibrium volume (33 Å3/ZnO) and relative stability (1.15 eV/ZnO less stable than wurtzite) reported by Zhang et al., 33 who performed similar PBE calculations and suggested that the cubane polymorph is the most stable low-density polymorph of ZnO. In contrast to the results of Zhang et al., 33 our calculations show that the BCT structure is in fact more stable than the cubane structure. 3.3. ZnO Surfaces. The calculated surface energies of the ZnO(101̅0) and ZnO(112̅0) surfaces, together with the corresponding relaxation angles, are given in Table 2. Although our znopt-calculated surface energies are still too high compared to PBE, we improve on both the surface energies and the relaxation angles compared to znorg-0-1. SCC-DFTB gives higher surface energies than PBE because of the higher atomization energy (see Table 1), so that it is more unfavorable to break Zn−O bonds. 3.4. ZnO Nanowires. The calculated formation energies as a function of the nanowire diameter are given in Figure 8 for the reference PBE method as well as for the znopt and znorg-0-1 potentials. The PBE and znopt results are comparable, with the NaCl-type nanowire being consistently less favorable than the wurtzite-type nanowire (experimentally, ZnO nanowires indeed exhibit the wurtzite structure). 62 In fact, the difference in formation energies between the two phases appears to be almost constant over the diameter range considered, with the difference for PBE amounting to 0.24 eV and the difference for znopt amounting to 0.35 eV, i.e., slightly larger than the PBE difference. Conversely, the znorg-0-1 potential favors the NaCl-type nanowire for all wire diameters larger than roughly 10 Å. The results for the nanowires thus follow closely the results of the bulk calculations, where the NaCl-type structure was more stable than the wurtzite structure for the znorg-0-1 potential. Both the formation energies for the NaCl-type and wurtzite-type nanowires are higher with znopt than with PBE (for example, the formation energy of the wurtzite nanowire of diameter 23 Å is 0.22 eV/ZnO with PBE but 0.31 eV/ZnO with znopt). [Figure 8: Formation energies ΔE_f for ZnO nanowires in the wurtzite and NaCl-type structures as a function of wire diameter, calculated with PBE, the znopt potential, and the znorg-0-1 potential.] This is consistent with the higher surface energies obtained with znopt (see the previous section). We additionally calculated band gaps of the nanowires as a function of wire diameter.
In agreement with a previous SCC-DFTB study, 37 we find that the band gap decreases with increasing size until it converges to a value slightly below the one for the optimized bulk. This is due to surface states in the nanowire, and similar effects are observed for the surface slabs. 3.5. The ZnO/Water Interface. The structures of all studied water/ZnO systems are shown in Figure 9, and the corresponding adsorption energies are given in Table 3 for PBE, znopt, and znorg-0-1. The adsorption energies are also given in a graphical diagram in Figure 10. In general, the znorg-0-1 potential overestimates the interaction between the water molecules and the ZnO surface (resulting in much too negative, i.e., too stable, adsorption energies), compared to both PBE and znopt. In ref 11, even more negative adsorption energies for the ZnO(112̅0)/water interface obtained with the znorg-0-1 potential were reported, but those numbers contained errors. 63 The agreement between PBE and the newly generated znopt potential is in general very good. One of the reasons our znopt potential gives weaker adsorption than the znorg-0-1 potential is the greater value of the Zn−O repulsive potential at the water−Zn distances (2.0−2.1 Å). In the following, we will discuss in detail the results obtained for single-molecule and full-monolayer adsorption of water on the ZnO(101̅0) and ZnO(112̅0) surfaces. 3.5.1. Single Water Molecule Adsorption on ZnO(101̅0). Meyer et al. 7 also performed DFT calculations of isolated water molecules on ZnO(101̅0) using PBE. We found that only four of the nine adsorption structures reported by Meyer et al. 7 correspond to true local minima (at least using our implementation of DFT; the other five structures represented shallow regions on the potential energy surface). The four structures are the d, e, f, and i structures in Figure 1 of ref 7, and we choose to call them M-2, D-2, D-1, and M-1, respectively (see Figure 9), where M denotes molecular adsorption, D denotes dissociated adsorption, and the species labeled "1" is more stable than the species labeled "2". In the molecular M-1 configuration (Figure 9a), the water oxygen atom coordinates one Zn atom, which adopts an almost bulk-like tetrahedral coordination of four oxygen atoms. One of the water hydrogen atoms forms a hydrogen bond to an oxygen atom on the surface. The dissociated D-1 configuration is shown in Figure 9c. For these four configurations at the PBE level, we obtain the same relative stability and similar adsorption energies as those obtained by Meyer et al. 7 The M-1 configuration is the most stable, with an adsorption energy of −0.98 eV, followed by the D-1 configuration (E_ads = −0.89 eV). Considerably less stable are the M-2 (E_ads = −0.60 eV) and D-2 (E_ads = −0.66 eV) configurations. The old znorg-0-1 potential gives much too strong adsorption for these four adsorption structures, particularly for the most stable M-1 and D-1 configurations, where the adsorption energies are 0.3−0.4 eV more negative than the PBE values. In contrast, our optimized znopt potential is in excellent agreement with the PBE results, and also predicts the M-1 configuration to be more stable than the D-1 configuration, as is the case in PBE but not with znorg-0-1. The improvement is displayed in a clear manner in Figure 10a.
With znorg-0-1, the D-1 configuration becomes the most stable because there is an increased coordination of the water O atom to the ZnO surface. The relative stability of the M-1 (low-coordinated) and D-1 (high-coordinated) adsorption configurations of water on ZnO(101̅0) thus exactly follows the relative stability trend for the bulk phases of wurtzite (low-coordinated) and NaCl-type (high-coordinated), irrespective of the method used. This could explain why the correct stability order of M-1 and D-1 is obtained with znopt but not with znorg-0-1. [Figure 9: the studied water/ZnO adsorption structures (full caption not recovered). Figure 10: Adsorption energies of all the ZnO/water structures considered in this work, except the D-1 configuration for ZnO(112̅0), which has been removed for clarity. The structures are shown in Figure 9, and the values can also be found in Table 3.] 3.5.2. Full Water Monolayer Adsorption on ZnO(101̅0). Increasing the coverage of water to full monolayer coverage (1 ML, one water molecule per surface Zn atom) on the ZnO(101̅0) surface, we distinguish between three different cases: 1 × 1 molecular adsorption (Figure 9e), 2 × 1 half-dissociated adsorption (Figure 9f), and 1 × 1 dissociated adsorption (Figure 9g). In the half-dissociated case, every other water molecule along the [12̅10] direction of ZnO is dissociated. In the PBE calculations, the half-dissociated adsorption is the most stable (E_ads = −1.17 eV), in agreement with experiment 8 and previous DFT calculations. 7,64 The molecular configuration is less stable (E_ads = −1.07 eV) and the dissociated configuration even more so (E_ads = −0.96 eV). Excellent agreement between the PBE and znopt calculations is obtained also for the full-coverage case of water adsorption on ZnO(101̅0) (see Table 3 and Figure 10b). For the original znorg-0-1 potential, the most favorable adsorption energy in the full-coverage case (the half-dissociated configuration) is −1.29 eV per water molecule, which is less stable than the isolated water molecule in the D-1 configuration (E_ads = −1.31 eV), i.e., there is an effective repulsion between the water molecules. However, it is known both from experiment 8 and from higher-level theoretical calculations 7,64 that the 2 × 1 half-dissociated network stabilizes the adsorbed water. The performance of the znorg-0-1 potential is thus not very good in this case. The znopt potential performs much better, with adsorption energies that are within 0.07 eV of the PBE-calculated values. 3.5.3. Single Water Molecule Adsorption on ZnO(112̅0). On the ZnO(112̅0) surface, we find two stable adsorption sites, both of the molecular kind (M-1 and M-2). These correspond to configurations B and A in ref 11, respectively. In the M-1 configuration (Figure 9h), the water molecule forms two hydrogen bonds to the surface, while in the M-2 configuration (Figure 9i), only one hydrogen bond is formed. Here, the znopt potential overestimates the binding strength by about 0.1 eV compared to the PBE values (Table 3 and Figure 10c). These values considerably improve upon those obtained with znorg-0-1, which are about 0.3 eV too negative compared to PBE. Nevertheless, the relative stabilities of the two adsorption sites are the same with all three methods, and the differences in adsorption energies between M-1 and M-2 are all about 0.2 eV. For completeness, we have calculated the adsorption energy of a dissociated water molecule on this surface (D-1, see Figure 9j). In this case, we were unable to stabilize the dissociated hydrogen atom near the remaining hydroxyl fragment.
The only local minimum found had the dissociated hydrogen atom displaced by almost a full surface unit cell along the [0001̅] direction away from the hydroxyl group. Although this is a local minimum, the adsorption energy obtained with PBE, znopt, and znorg-0-1 is positive, which implies that the structure is not stable with respect to desorption. The adsorption energies obtained with either SCC-DFTB repulsive potential are about 0.3 eV higher than the PBE value, but because the structure is so unstable, it is unlikely to appear during, for example, molecular dynamics simulations. 3.5.4. Full Water Monolayer Adsorption on ZnO(112̅0). For the ZnO(112̅0) surface, similar to the case of the ZnO(101̅0) surface, the adsorbed water monolayer can form molecular (Figure 9k and 9l), half-dissociated (Figure 9m), or dissociated (Figure 9n) structures. PBE predicts that the half-dissociated configuration is the most stable. This result is consistent with the DFT calculations of große Holthaus et al. 11 but at odds with the results of Cooke et al., 10 who found that the fully dissociated configuration is 0.06 eV per water molecule more stable than the half-dissociated configuration. This may be related to the choice of supercell, as Cooke et al. 10 used a 1 × 1 supercell, while in the current work (as well as in ref 11) a 2 × 2 supercell was used, which gives room for two nonidentical molecularly adsorbed water molecules alongside the dissociated water molecules. In any case, the energy differences between the different configurations are small, and coexistence of various configurations under normal conditions can be expected, as noted previously. 10,11 Table 3 and Figure 10d show that the agreement between the PBE and SCC-DFTB calculations is not very good, in that the PBE result favors half-dissociated adsorption while the SCC-DFTB results strongly favor molecular adsorption. Even so, the znopt potential significantly improves on the znorg-0-1 results, where the largest error compared to PBE is 0.4 eV for the molecular-1 configuration, while our largest error with the znopt potential is 0.2 eV (also for the molecular-1 configuration). However, for a study where the ZnO(112̅0)/water interface is in focus (not this one), the SCC-DFTB parameters may need to be improved further. This may involve changing the O−H repulsive potential or the Slater−Koster tables. In the present work, we limited ourselves to changing only the Zn−O repulsive potential, and already this modification resulted in major improvements, as we have seen. große Holthaus et al., 11 who used the original znorg-0-1 potential, stated that the half-dissociated configuration was the most stable one at full monolayer coverage, but they did not actually report the adsorption energy of a fully molecular adsorption layer. große Holthaus et al. 11 subsequently performed MD simulations with additional layers of liquid water on top of the surface and found that the dissociated or half-dissociated water layers nearest to the ZnO surface began to convert into molecular layers. The authors attributed this effect to the formation of a developed hydrogen-bonded network above the surface, although a more likely explanation, in our opinion, is that the molecular monolayer actually is the most stable for both the znorg-0-1 and the znopt potentials, as we have shown here. CONCLUSIONS We have developed an efficient scheme to generate SCC-DFTB repulsive potentials.
The scheme is based on energy−volume scans of bulk polymorphs with different coordination numbers and was applied here to ZnO, although it can easily be generalized to other materials. The key to a successful and transferable parametrization appears to be the inclusion of structures with different coordination numbers in the parametrization procedure, something which to our knowledge has not been done before for SCC-DFTB parametrizations of solid-state materials. The previously reported znorg-0-1 37 potential gives the incorrect stability order of the wurtzite and NaCl phases (Figure 5). Our newly generated znopt potential, on the other hand, gives the correct stability order for the NaCl-type and wurtzite structures and additionally performs very well for other possible bulk structures of ZnO (CsCl-type, zincblende, graphitic, body-centered tetragonal (BCT), and cubane). The improved description of chemical properties extends to lower-dimensional systems, as reflected in surface relaxations, surface energies, and nanowire formation energies. Finally, we found that the znopt potential greatly improves the description of the ZnO/water interface at both low and high coverage, particularly for the ZnO(101̅0) surface, where excellent agreement is obtained compared to reference DFT data. Notes The authors declare no competing financial interest.
Storage of Soybean Seeds and Addition of Insecticide and Micronutrients The objective of this work was to evaluate the effects of seed treatment with insecticide, polymers, and micronutrients on the physiological attributes of soybean seeds throughout storage. The experimental design was completely randomized in a factorial scheme, with four seed treatments × two storage periods. The analysis of variance revealed a significant interaction between seed treatments and storage times for both cultivars at 5% probability, for the characteristics shoot length (SL), primary root length (RL), shoot dry mass (SDM) and dry mass of the primary root (RDM) for the cultivar Fundacep 37 RR. The addition of seed treatments influences the physiological performance of seedlings originating from soybean seeds stored for 240 days. Regarding shoot length, primary root length, and shoot dry mass, the isoenzyme esterase is expressed in both the shoot and the primary root of the seedling, malate dehydrogenase is expressed in the primary root, while peroxidase is evident in the shoot of the seedlings. Introduction Soybean (Glycine max (L.) Merrill) belongs to the Fabaceae family and is one of the main oleaginous crops produced worldwide, owing to its economic importance and nutritional quality coupled with a high crude protein concentration (Follmann et al., 2014). In Brazil, it is the most widely cultivated species across highly varied agricultural regions, and national production increased by 12.18% in the 2016/2017 crop season (Conab, 2017). These increases are due to technological advances and to the quality of the seeds used, which directly influence plant performance and establishment in the field (Ferrari et al., 2014; Meira et al., 2016). However, before the seeds are used in the field, they are subjected to periods of storage during which treated seeds may be exposed to stresses that reduce, or at best merely maintain, their physiological quality. Among these treatments, the use of insecticides combined with polymers, fungicides, and micronutrients stands out (Pereira et al., 2005; Karam et al., 2007; Carvalho et al., 2015; Zanatta et al., 2018). The use of insecticides in seed treatment in combination with polymers and micronutrients may increase seedling uniformity, change field emergence, and provide conditions for the seeds to express their maximum vigor (Follmann et al., 2014); on the other hand, some of the products used may compromise seedling emergence (Souza et al., 2015; Szareski et al., 2015).
In general, insecticides are used to control insect pests and exert bioactivities that benefit attributes of agronomic interest as well as soybean yield (Pelegrin et al., 2016; Ferrari et al., 2015). Seeds treated and stored for up to 120 days may show positive effects on seedling dry matter accumulation and leaf area (Ludwig et al., 2015); in contrast, prolonged storage may negatively affect the physiological quality of the seeds (Dan et al., 2010; Souza et al., 2015; Gabriel et al., 2018).
Certain situations require longer storage periods for treated seeds, which may affect physiological attributes such as seedling dry mass and enzymatic activity; at the same time, the possibility of performing seed treatment in advance may facilitate the organizational logistics of the seed processing unit. In this context, the objective of this work was to evaluate the effects on the physiological attributes of soybean seeds submitted to seed treatment with the addition of insecticide, polymers, and micronutrients throughout storage.
Material and Methods
The work was conducted at the Seed Analysis Laboratory of the Postgraduate Program in Seed Science and Technology of the Eliseu Maciel Agronomy College, Federal University of Pelotas, located in the municipality of Capão do Leão, RS, Brazil. The soybean seeds used came from the cultivars Fepagro 37 RR and Nidera 6411 RR, with initial germination of 81% and 82%, respectively. The treatments were: T1: untreated seeds; T2: seeds treated with micronutrients (2 mL kg⁻¹ of a commercial product containing 6.7% copper, 3.2% molybdenum, 15% zinc, and 9.4% manganese) + insecticide (thiamethoxam, 350 g L⁻¹ formulation) + polymer (0.5 mL kg⁻¹ of seeds); T3: seeds treated with insecticide (thiamethoxam, 350 g L⁻¹ formulation) + polymer (0.5 mL kg⁻¹ of seeds); T4: seeds treated with micronutrients (2 mL kg⁻¹ of the same commercial product) + polymer (0.5 mL kg⁻¹ of seeds).
Seeds were treated using a spray volume of 6 mL kg⁻¹ of seeds at the commercial concentration of each formulation, and the seeds of each treatment were distributed in individual plastic bags. After homogenization, the seeds were transferred to paper bags and kept at room temperature for 24 hours; the remaining seeds were stored in paper bags under controlled conditions of mean air temperature (15 °C) and relative humidity (50%) for 360 days.
The measured characteristics were the primary root and shoot lengths of the seedlings, the dry mass of the primary root and of the shoot, and the expression of the isoenzymes esterase, glutamate oxaloacetate transaminase, malate dehydrogenase, and peroxidase, all determined 240 days after seed treatment. The following methodologies were used to obtain the characters of interest.
Shoot and primary root lengths were obtained by measuring 10 seedlings at the end of the germination test: the shoot was measured from the basal insertion of the primary root to the apex of the shoot, and the primary root from the radicle base to the root apex, with results expressed in millimeters (mm). Dry mass of the primary root and of the shoot was obtained from 10 seedlings at the end of the germination test; the seedlings were placed in brown paper envelopes and dried in a forced-ventilation oven at 70 °C until constant mass, with results expressed in milligrams (mg).
The determination of the isoenzymes was performed on 10 seedlings at the end of the germination test, 8 days after sowing (Brasil, 2009). The expression of the isoenzymes esterase, glutamate oxaloacetate transaminase, malate dehydrogenase, and peroxidase was assessed by vertical electrophoresis in polyacrylamide gel (Malone et al., 2007). The seedlings were macerated separately in a porcelain mortar kept in an ice bath; 200 mg of the macerate from each sample was transferred to microcentrifuge tubes, to which extraction solution (0.2 M lithium borate at pH 8.3 + 0.2 M Tris-citrate at pH 8.3 + 0.15% 2-mercaptoethanol) was added in the ratio 1:2 (m/v). Electrophoresis was performed on 7% polyacrylamide gels, applying 20 μL of each sample, with the staining systems described by Scandálios (1969) and Alfenas (1998). The interpretation of the isoenzyme results was based on visual analysis of the electrophoresis gels, considering the presence or absence, as well as the intensity, of each electrophoretic band for each isoenzymatic system.
The experimental design was completely randomized in a factorial scheme, with four seed treatments × two storage periods. The analyses were performed for each soybean cultivar separately, and the treatments were arranged in four replicates. The data were submitted to analysis of variance, testing the interaction between seed treatments and storage times (zero and 12 months) at 5% probability; traits showing a significant interaction were decomposed into simple effects, whereas those without interaction were analyzed through the main effects of each factor separately, with means compared by Tukey's test at 5% probability.
Results and Discussion
The analysis of variance revealed a significant interaction between seed treatments and storage times for both cultivars at 5% probability for the characteristics shoot length (SL), primary root length (RL), shoot dry mass (SDM), and dry mass of the primary root (RDM) for the cultivar Fundacep 37 RR. The initial growth of the soybean seedlings before storage (zero) differed among the seed treatments tested for both soybean cultivars (Table 1), as shown by the performance of shoot length (SL) and shoot dry mass accumulation (SDM).
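For readers who wish to reproduce this kind of analysis, the following is a minimal sketch of the 4 × 2 factorial ANOVA with Tukey's HSD described in Material and Methods; the data frame, values, and column names are fabricated placeholders for demonstration, not the study's measurements.

```python
# Illustrative sketch of the 4 x 2 factorial analysis: two-way ANOVA
# (seed treatment x storage period) followed by Tukey's HSD at 5% probability.
# Data below are fabricated placeholders, not the study's measurements.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Completely randomized design: 4 treatments x 2 storage times x 4 replicates.
data = pd.DataFrame({
    "treatment": ["T1", "T2", "T3", "T4"] * 8,
    "storage":   (["zero"] * 4 + ["12mo"] * 4) * 4,
    "shoot_len": [110, 118, 121, 115, 96, 99, 90, 101,
                  112, 120, 119, 117, 94, 97, 92, 103,
                  109, 117, 122, 114, 95, 100, 91, 100,
                  111, 119, 120, 116, 97, 98, 89, 102],
})

model = ols("shoot_len ~ C(treatment) * C(storage)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))          # tests the interaction at 5%

# If the interaction is significant, compare treatments within each storage time.
subset = data[data["storage"] == "12mo"]
print(pairwise_tukeyhsd(subset["shoot_len"], subset["treatment"], alpha=0.05))
```

When the treatment × storage interaction is significant, as reported above, the Tukey comparisons are run separately within each storage period, mirroring the decomposition into simple effects used by the authors.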
The primary root length (RL) was superior with the use of thiamethoxam + polymer for the cultivar Fepagro 37 RR; this insecticide may have stimulated the initial growth of these seedlings (Almeida et al., 2012; Kavalco et al., 2015). In contrast, NS6411 RR performed better in the absence of seed treatment and with the use of polymer + micronutrients (Table 1), which gave superior root length. In this context, the use of micronutrients and polymers on soybean seeds can maintain and even enhance the physiological quality of the seeds (Bays et al., 2007). For both cultivars there was a decrease in primary root length after storage (12 months); these results corroborate Dan et al. (2010), who determined that seed treatment reduces seedling length after prolonged storage.
Shoot length (SL) was superior for seedlings from seeds treated with thiamethoxam + micronutrients + polymer for the cultivar Fepagro 37 RR; in contrast, the cultivar NS6411 RR was superior when the seeds were treated with micronutrients + polymer (Table 1). The differential performance between the cultivars is due to metabolic responses intrinsic to their genetic and morphological constitution, as well as to the contact surface of the plasma membranes, which can increase or decrease the effects of the treatment on the seeds (Moterle et al., 2011; Szareski et al., 2016c).
Table 1. Influence of seed treatments and storage times on primary root (RL) and shoot (SL) lengths of seedlings from soybean seeds (Fepagro 37 RR (F.37) and NS6411 RR (NS6411)). Note. * Means followed by the same letter, lower case in the column and upper case in the row, do not differ statistically by Tukey's test at 5% probability.
After seed storage, the shoot (SL) and primary root (RL) lengths with the use of thiamethoxam + polymer were inferior to the other treatments for the cultivar Fepagro 37 RR; likewise, the shoot length (SL) of cultivar NS6411 RR (Table 1) decreased after storage. Thus, seed treatment with insecticides (Dan et al., 2010; Piccinin et al., 2013; Szareski et al., 2016a) and polymers (Avelar et al., 2011) that is maintained for a long period may negatively influence the physiological potential of soybean seeds.
The dry mass of the primary root (RDM) and of the shoot (SDM) did not differ among the seed treatments (Table 2) for either cultivar in the absence of storage (zero). Research has shown that the use of insecticide and bioregulator does not result in greater dry matter accumulation in the seedlings (Dan et al., 2012; Moterle et al., 2011; Zimmer et al., 2016; Rigo et al., 2018). However, the cultivar Fepagro 37 RR showed reduced dry mass of the primary root after seed storage.
According to Ludwig et al. (2011), the quality of soybean seeds stored after coating with polymers and insecticides does not differ with respect to seedling dry matter accumulation.
For the cultivar Fepagro 37 RR, shoot dry mass (SDM) was superior in the absence of seed treatment and in seeds treated with thiamethoxam + polymer; in contrast, the cultivar NS6411 RR showed no statistical differences (Table 3). According to Dan et al. (2011), biomass accumulation in seedlings may not differ as a function of the seed treatment used. Regarding storage times, however, this trait increased in seedlings from seeds exposed to storage. This increase may result from the consumption of the energy available in the cotyledons: when this consumption is high, the biomass production of the shoot and the primary root is increased (Henning et al., 2010; Nardino et al., 2016; Dellagostin et al., 2016; Vargas et al., 2018).
In relation to the enzymatic profiles, isoenzymatic expression differed between the cultivars and after seed storage (Figure 1). Esterase (EST) expression showed two bands in the seedling shoot for the cultivar Fepagro 37 RR, more intense for the treatments thiamethoxam + polymer and micronutrients + polymer; in contrast, for the primary root, the treatment thiamethoxam + polymer showed only one band, of higher intensity. For the cultivar NS6411 RR, the enzyme was expressed with higher intensity in two bands for the thiamethoxam + polymer treatment, in both the shoot and the root (Figure 1a). This enzyme acts on lipid metabolism and ester hydrolysis reactions (Peske et al., 2012) and on controlled deterioration (Padilha et al., 2001; Dubal et al., 2016), processes that culminate in the reduction of the physiological quality of the seeds.
Glutamate oxaloacetate transaminase (GOT) did not show intensity variation in the shoot or root bands for either cultivar (Figure 1b). An increase in metabolic activity may lead to deterioration through increased expression of glutamate oxaloacetate transaminase, which is responsible for the oxidation of amino acids and the reduction of α-ketoglutarate for the synthesis of new amino acids, minimizing the energy supply to the Krebs cycle and to the developing embryo (Tunes et al., 2010; Strobel et al., 2016).
The expression of malate dehydrogenase (MDH) showed only one band for both cultivars (Figure 1c), in both the shoot and the root. This enzyme acts in cellular respiratory processes (Satters et al., 1994): it catalyzes the conversion of malate into oxaloacetate in the Krebs cycle, producing NADH, and acts in the conversion of stored triacylglycerols into glucose, providing energy for germination and initial seedling growth; in this way, its expression may be associated with reduced seed vigor (Pedó et al., 2006).
Table 2. Influence of seed treatments and storage times on dry mass of the primary root (RDM) and shoot dry mass (SDM) of seedlings from soybean seeds (Fepagro 37 RR (F.37) and NS6411 RR (NS6411)). Note. * Means followed by the same letter, lower case in the column and upper case in the row, do not differ statistically by Tukey's test at 5% probability.
Activation of indistinguishability-based quantum coherence for enhanced metrological applications with particle statistics imprint
Significance: Quantum coherence has a fundamentally different origin for nonidentical and identical particles, since for the latter a unique contribution exists due to indistinguishability. Here we experimentally show how to exploit, in a controllable fashion, the contribution to quantum coherence stemming from spatial indistinguishability. Our experiment also directly proves, on the same footing, the different role of particle statistics (bosons or fermions) in supplying coherence-enabled advantage for quantum metrology. Ultimately, our results provide insights toward viable quantum-enhanced technologies based on tunable indistinguishability of identical building blocks.
Introduction.-A quantum system can reside in coherent superpositions of states, giving rise to nonclassicality [1,2], which implies the intrinsic probabilistic nature of predictions in the quantum realm [3-7]. Besides this fundamental role, quantum coherence is also at the basis of quantum algorithms [8-13] and, from the modern information-theoretic perspective, constitutes a paradigmatic basis-dependent quantum resource [14-16], providing a quantifiable advantage in certain quantum information protocols. For a single quantum particle, coherence emerges when the particle is found in a superposition of states in a given basis of the Hilbert space. For multiparticle compound systems, the physics underlying the emergence of coherence is richer and strictly connected to the nature of the particles, with fundamental differences between nonidentical and identical particles. In fact, states of identical-particle systems can manifest coherence even when no particle resides in a superposition state, provided that the wavefunctions of the particles overlap [17-19]. In general, a special contribution to quantum coherence arises thanks to the spatial indistinguishability of identical particles, which cannot exist for nonidentical (or distinguishable) particles [17]. Recently, it has been found that spatial indistinguishability of identical particles can be exploited for entanglement generation [20], applicable even to spacelike-separated quanta [21] and robust against preparation and dynamical noises [22-24]. The presence of entanglement is a signature that the bipartite system as a whole carries coherence even when the individual particles do not, the amount of this coherence being dependent on the degree of indistinguishability. We name this specific contribution to the quantumness of compound systems "indistinguishability-based coherence", in contrast to the more familiar "single-particle superposition-based coherence". Indistinguishability-based coherence qualifies in principle as an exploitable resource for quantum metrology [17]. However, it requires sophisticated control techniques to be harnessed, especially in view of its nonlocal nature. Moreover, a crucial property of identical particles is the exchange statistics, while operating both Bosons and Fermions in the same setup is generally challenging. In this work, we experimentally investigate the operational contribution to quantum coherence stemming from spatial indistinguishability of identical particles. By virtue of our recently developed photonic architecture
capable of tuning the indistinguishability of two uncorrelated photons [25], we observe the direct connection between the degree of indistinguishability and the amount of coherence, and show that indistinguishability-based coherence can be concurrent with single-particle superposition-based coherence. In particular, we demonstrate that it has operational implications, providing a quantifiable advantage in a phase discrimination task [26,27], as depicted in Fig. 1. Furthermore, we design a setup capable of testing the impact of particle statistics on coherence production and phase discrimination for both Bosons and Fermions. This is accomplished by compensating for the exchange phase during state preparation, simulating Fermionic states with photons, which leads to a statistics-dependent efficiency of the quantum task.
Fig. 1. Illustration of the indistinguishability-activated phase discrimination task. A resource state ρ_in that contains coherence in a computational basis is distilled from spatial indistinguishability. The state then enters a black box which implements a phase unitary Û_k = e^{iĜφ_k}, k ∈ {1, . . . , n}, on ρ_in. The goal is to determine the φ_k actually applied through the output state ρ_out: indistinguishability-based coherence provides operational advantage to the task.
Indistinguishability-based coherence.-To formally recall the idea of coherence activated by spatial indistinguishability [17], we first consider the basic scenario where the wavefunctions of two identical particles with orthogonal pseudospins, ↓ and ↑, overlap at two spatially separated sites, L and R. Omitting the unphysical labeling of identical particles [28], the state is described as |Ψ⟩ = |ψ ↓, ψ′ ↑⟩, with |ψ⟩ = l|L⟩ + r|R⟩ and |ψ′⟩ = l′|L⟩ + r′|R⟩ denoting the spatial wavefunctions associated with the two pseudospins. Let us use spatially localized operations and classical communication, i.e., the sLOCC framework [20], to activate and exploit the operational coherence. Projecting onto the operational subspace B = {|Lσ, Rτ⟩; σ, τ = ↓, ↑} yields the normalized conditional state [17]
|Ψ_LR⟩ = (lr′ |L↓, R↑⟩ + η l′r |L↑, R↓⟩)/√(N_LR^Ψ),    (1)
with N_LR^Ψ = |lr′|² + |l′r|², where the exchange phase factor η = 1 (−1) originates from the Bosonic (Fermionic) nature of the indistinguishable particles. We see that, although each particle starts from an incoherent state (namely, |ψ ↓⟩, |ψ′ ↑⟩) in the pseudospin computational basis, the final state |Ψ_LR⟩ overall resembles a coherent, nonlocally-encoded qubit state in the compound basis B under spatially local operations and classical communication (sLOCC). Also, considering that this coherence vanishes when the two particles are nonidentical and thus individually addressable [17], the emergence of coherence in |Ψ_LR⟩ essentially hinges on the spatial indistinguishability of the identical particles, in strict analogy to the emergence of entanglement between pseudospins [20,25,29].
The coherence of the state of Eq. (1) is independent of the Bosonic or Fermionic nature of the particles because of the specific choice of the initial single-particle states. However, in general, particle statistics plays a role in determining the allowed spatial overlap properties of identical particles and is thus crucial for the coherence of the overall state of the system. Hence, we shall extend our experimental investigation to a state where these fundamental aspects can be observed. Taking again a scenario with two indistinguishable particles, one of the particles is now initialized with innate coherence in the pseudospin basis, i.e., the initial two-particle state reads |Ψ⟩ = |ψ ↓, ψ′ s⟩, where |s⟩ = a|↑⟩ + b|↓⟩ with |a|² + |b|² = 1.
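Before moving to the three-level case, it may help to check the projection step behind Eq. (1) explicitly. The following worked expansion is our own, following the no-label formalism introduced above (linearity in each slot, with |Rσ, Lτ⟩ = η|Lτ, Rσ⟩):

```latex
% Worked expansion of the sLOCC projection behind Eq. (1).
\begin{align*}
|\Psi\rangle = |\psi\!\downarrow,\psi'\!\uparrow\rangle
  &= l\,l'\,|L\!\downarrow, L\!\uparrow\rangle
   + l\,r'\,|L\!\downarrow, R\!\uparrow\rangle
   + r\,l'\,|R\!\downarrow, L\!\uparrow\rangle
   + r\,r'\,|R\!\downarrow, R\!\uparrow\rangle ,\\
|R\!\downarrow, L\!\uparrow\rangle &= \eta\,|L\!\uparrow, R\!\downarrow\rangle
  \;\;\Rightarrow\;\;
  \hat{\Pi}_{B}\,|\Psi\rangle
  = l\,r'\,|L\!\downarrow, R\!\uparrow\rangle
  + \eta\,l'\,r\,|L\!\uparrow, R\!\downarrow\rangle .
\end{align*}
```

The one-site terms |L↓, L↑⟩ and |R↓, R↑⟩ are filtered out by the projection onto B, and normalization by √(N_LR^Ψ) with N_LR^Ψ = |lr′|² + |l′r|² reproduces Eq. (1).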
Projecting onto B generates the three-level distributed state [17]
|Φ_LR⟩ = [a lr′ |L↓, R↑⟩ + a η l′r |L↑, R↓⟩ + b (lr′ + η l′r) |L↓, R↓⟩]/√(N_LR^Φ),    (2)
where N_LR^Φ = a²(|lr′|² + |l′r|²) + b²|lr′ + η l′r|². In this state, indistinguishability-based coherence coexists with single-particle superposition-based coherence, giving rise to an overall multilevel coherence in the operational basis B.
A photonic coherence synthesizer.-We demonstrate the preparation of two-level and three-level indistinguishability-based coherence by means of the photonic configuration shown in Fig. 2(a). The correspondence between photon polarization and pseudospin reads |H⟩ ↔ |↑⟩, |V⟩ ↔ |↓⟩, with |H⟩ and |V⟩ identifying horizontal and vertical polarization, respectively. Frequency-degenerate photon pairs are generated by pumping a beamlike type-II β-barium borate (BBO) crystal via spontaneous parametric down-conversion [30] and sent to the main setup via two single-mode fibers. The two-photon initial state |H⟩ ⊗ |V⟩ is uncorrelated, and two half-wave plates (HWPs, #1 and #2), with their orientations set at 22.5° and θ/2, respectively, are used to adjust the photons' polarization. The two-level state |Ψ_LR⟩ is effectively prepared by the setup already employed to demonstrate polarization-entanglement activation by spatial indistinguishability [25]. Each of the two initially uncorrelated photons passes through a polarizing beam splitter (PBS), which distributes its spatial wavefunction between two remote sites, L and R, according to the polarization state. Next, additional HWPs are added in different paths to revert the photons' polarization, and a beam displacer (BD) is inserted at each site to combine the propagation directions of the two photons. At this point, the spatial wavefunctions of the two photons overlap, allowing for the preparation of the state |Ψ_LR⟩ via sLOCC. Explicitly, a pair of polarization analysis devices (PADs) are inserted to cast the polarization measurement, and the coincidence photon counting process realizes the desired projection onto the distributed basis B (for more details, see Ref. [25]). To prepare the three-level state |Φ_LR⟩, an additional part of the setup, consisting of an HWP set at 22.5° and a BD, is appended at each site, L and R, with the orientations of HWPs #3 and #4 also adjusted to prepare one of the photons in the polarization-superposition state (see dashed box in Fig. 2(a)). The coherence underpinning the system is finally activated and detected via sLOCC.
As a first observation, we want to prove the direct quantitative connection between the produced coherence and the spatial indistinguishability of the photons, in analogy to what has been done for entanglement [25]. In the present experimental study, the resource of interest is quantum coherence, and such a preliminary analysis is essential in view of its controllable exploitation for the specific quantum metrology protocol. This analysis is performed for the two-level state |Ψ_LR⟩ resulting from the original elementary state |Ψ⟩. Various methods have been proposed to quantify coherence [26,31-34]. Here, we adopt the l₁ norm of the density matrix ρ, that is, C_l1(ρ) = Σ_{i≠j} |ρ_ij| [31]. For convenience, we omit the site delimiters of the distributed state and simply denote it using polarization. The system is prepared in |Ψ_LR(θ)⟩ = cos θ|HV⟩ + sin θ|VH⟩, and its measure of coherence in the basis B is C_l1(Ψ_LR) = |sin 2θ|.
The coherence completely stems from the indistinguishability of the photons, as it vanishes in the limit θ = kπ/2 (k an integer), i.e., when the two photons are distinguishable. To quantify the spatial indistinguishability of the two photons we use the entropic measure [22]
I = −P_LR log P_LR − P′_LR log P′_LR,
where P_LR = |lr′|²/N_LR^Ψ (P′_LR = |l′r|²/N_LR^Ψ) is the probability of finding the photons from ψ and ψ′ (ψ′ and ψ) ending at L and R, respectively. For our setup, one has I = −cos²θ log(cos²θ) − sin²θ log(sin²θ). The experimental result for the measurement of coherence versus indistinguishability is plotted in Fig. 3(a), clearly revealing the monotonic dependence in accord with theoretical predictions. Here and below, the error bars represent the 1σ standard deviation of the data points, deduced by assuming a Poisson distribution for the counting statistics and resampling over the recorded data. The inset shows the result of quantum state tomography at θ = π/4, which has a fidelity of 0.988 to the maximally coherent state.
Phase discrimination.-Having generated tunable coherence using sLOCC, we apply it to the phase discrimination task to demonstrate the operational advantage due to indistinguishability and the role of particle statistics. The formal definition of the phase discrimination task is as follows: a phase unitary among n possible choices, Û_k = e^{iĜφ_k}, k ∈ {1, . . . , n}, is randomly applied to an initial state ρ_in with probability p_k, where the generator of the transformation, Ĝ = Σ_{σ,τ=↑,↓} ω_στ |Lσ, Rτ⟩⟨Lσ, Rτ|, is diagonal in the computational basis (the ω_στ are arbitrary coefficients) and Σ_{k=1}^n p_k = 1. We shall identify the φ_k that is actually applied with maximal confidence from the output state ρ_out, by casting on it positive operator-valued measurements (POVMs). Here, we focus on the n = 2 scenario with φ₁ = 0, φ₂ = φ, and solve the task using experimentally feasible minimum-error discrimination [35,36]. We first investigate phase discrimination with the two-level state and, without loss of generality, choose the generator Ĝ = |L↑, R↓⟩⟨L↑, R↓| (obtained by fixing ω_↑↓ = 1 and ω_↑↑ = ω_↓↑ = ω_↓↓ = 0). Consequently, the output states after being affected by Û_k read
|Ψ_k⟩ = (lr′ |L↓, R↑⟩ + η e^{iφ_k} l′r |L↑, R↓⟩)/√(N_LR^Ψ),    (3)
and they are discriminated by a POVM (a von Neumann projective measurement in this case) comprising two projectors Π = {Π̂₁, Π̂₂}: when Π̂_k clicks, the phase is identified as φ_k. By this definition, the chance of making an error is P_err = p₁⟨Ψ₁|Π̂₂|Ψ₁⟩ + p₂⟨Ψ₂|Π̂₁|Ψ₂⟩, and it is lower bounded by the Helstrom-Holevo bound [37,38], namely, P_err ≥ (1 − √(1 − 4p₁p₂|⟨Ψ₁|Ψ₂⟩|²))/2. For a two-level coherent state, it is straightforward to identify the measurement projectors Π̂₁ and Π̂₂ [17]. The phase discrimination game is experimentally realized using the setup of Fig. 2(b). The photons of the state |Ψ_LR⟩ at site R are sent into an unbalanced Mach-Zehnder interferometer (UMZI), while the photons at site L are directly detected. We place an HWP between two QWPs fixed at 45° to build a phase gate, and situate one phase gate in each of the arms after a non-polarizing beam splitter (BS). In the short arm of the UMZI, the state |Ψ_LR⟩ remains unchanged, while in the long arm a relative phase φ between |LV, RH⟩ and |LH, RV⟩ is imparted. A movable shutter (not shown) is placed in one of the arms to adjust the parameters p₁ and p₂. After the UMZI, the photons are projected onto the desired state.
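As a numerical illustration of the minimum-error bound just quoted (our own sketch, not the authors' analysis code), the following evaluates the Helstrom-Holevo bound for the two phase-shifted outputs of the two-level state and compares it with the incoherent guessing baseline:

```python
# Minimal numerical sketch: Helstrom-Holevo bound for discriminating the two
# phase-shifted outputs of |Psi(theta)> = cos(theta)|Lu,Rd> + sin(theta)|Ld,Ru>.
import numpy as np

def helstrom_error(theta, phi, p1=0.44):
    """Minimum error probability for distinguishing U_1|Psi> (phi_1 = 0)
    from U_2|Psi> (phi_2 = phi), with prior probabilities p1 and 1 - p1."""
    p2 = 1.0 - p1
    # Basis ordering: (|L up, R down>, |L down, R up>).
    psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
    U = lambda phase: np.diag([np.exp(1j * phase), 1.0])  # generator |Lu,Rd><Lu,Rd|
    overlap = np.vdot(U(0.0) @ psi, U(phi) @ psi)
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p1 * p2 * abs(overlap) ** 2))

p1 = 0.44
guess_baseline = min(p1, 1.0 - p1)  # best strategy without coherence
for phi in (np.pi / 2, np.pi):
    err = helstrom_error(np.pi / 4, phi, p1)  # maximally coherent state
    print(f"phi = {phi:.2f}: P_err >= {err:.3f}  (guessing: {guess_baseline:.2f})")
```

At θ = π/4 and φ = π the two outputs are orthogonal and the bound drops to zero, well below the guessing baseline of 0.44, which is the coherence-enabled advantage probed in the experiment.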
Since |Ψ_LR⟩ is a two-level coherent state, the measurement projectors Π̂₁ and Π̂₂, defined in the basis {|LV, RH⟩, |LH, RV⟩}, are realized by drawing the corresponding subspace from the product (single-particle) state measurement. This procedure is as follows. At site L (R), the polarization projector is Ô_L = |χ⟩⟨χ| with |χ⟩ = α|H⟩ + β|V⟩ (Ô_R = |χ′⟩⟨χ′| with |χ′⟩ = α′|H⟩ + β′|V⟩); the product projector is thus Ô_L ⊗ Ô_R, leading to the two-photon projector |Ψ_αβ⟩⟨Ψ_αβ| with |Ψ_αβ⟩ = αβ′|LH, RV⟩ + βα′|LV, RH⟩ in the subspace of interest {|LV, RH⟩, |LH, RV⟩}. Thanks to the final PAD unit of the setup of Fig. 2(b), the parameters {α, β, α′, β′} can be adjusted to perform the desired projective measurements Π̂₁, Π̂₂ and eventually obtain the error probability of discrimination P_err.
We directly measure the error probability of phase discrimination for various φ at p₁ = 0.44, employing the maximally coherent state |Ψ_LR(π/4)⟩ and optimizing over the measurement settings of Π̂₁ and Π̂₂. The experimental result, matching well with the theoretical prediction, is shown in Fig. 3(b). Note that without coherence, the best strategy of phase discrimination is to constantly guess the phase with the greater probability, yielding P̄_err = p₁ (top dashed line). The reduced P_err thus unravels the almost ubiquitous advantage of indistinguishability-based coherence.
Particle statistics matters.-The symmetric form of Eq. (3) prevents the exchange phase factor η from affecting the outcome of the |Ψ_LR⟩-based phase discrimination task. However, when |Φ_LR⟩ is utilized in the same task, the intrinsic statistics of the indistinguishable particles renders the situation more complicated. In our optical setup, any prepared state necessarily has η = +1. For simplicity, we choose a = b and set l = l′ = r = r′ (r = l′ = 0) to maximize (destroy) indistinguishability. This is experimentally achieved by setting the orientations of HWPs #3 and #4 to 22.5° and θ = π/4 (0). However, the investigation of Fermionic systems with η = −1 is also possible, which follows from the observation that η in Eq. (2) can be absorbed into l′. By setting θ = −π/4, we invert the sign of l′ to simulate the indistinguishability-activated coherence of Fermionic particles. The state preparation in all the above cases is characterized via quantum state tomography, and the results are presented in Fig. 4(a); the magnitudes of the imaginary parts of the density matrices are smaller than 0.07. For the Bosonic case, the outcome authenticates the presence of coherence between all three vectors of the computational basis shown in Eq. (2). For the distinguishable case, the coherence is in contrast solely inherited from one of the particles and localized at site R. For the Fermionic case, the destructive interference completely eliminates the amplitude on the symmetric basis vector |VV⟩ ∼ |L↓, R↓⟩, and the resulting state interestingly resembles the two-level state |Ψ_LR(π/4)⟩, as substantiated by the deduced density matrices. In the experiment, we simulate this case with an average fidelity of 95.9(2)% compared with |Ψ_LR(±π/4)⟩. Notice that a minus sign appears in the coefficient of the |VH⟩ terms, which is attributed to the π phase acquired by the photons upon reflection at the PBS.
We are now in a position to demonstrate the role of particle statistics in the phase discrimination task. The corresponding operations Û_k are again realized using the phase gates within the UMZI, yielding two output states |Φ_k⟩ [17] written as
|Φ_k⟩ = [a e^{iω_↓↑φ_k} lr′ |L↓, R↑⟩ + a η e^{iω_↑↓φ_k} l′r |L↑, R↓⟩ + b e^{iω_↓↓φ_k} (lr′ + η l′r) |L↓, R↓⟩]/√(N_LR^Φ).
Here, we set ω_↓↑ = 1, ω_↑↓ = 2 and ω_↓↓ = 3 in the generator Ĝ.
Differently from the two-level situation, in this three-level coherent case we need to place a UMZI at each site, L and R. The UMZI has a path difference equivalent to 2.7 ns between the long and short paths, and the coincidence interval is set at 0.8 ns. The quantum states affected by the two phase operations in the UMZIs are registered separately [39,40]. We adjust the electronic delay of the coincidence module to pick out the events in which the two photons took the long/short and short/long paths, which correspond to the state after being affected by Û₁ and Û₂, respectively. Moreover, for the measurement of the three-level system, to minimize the error probability of discrimination P_err, three projectors Π̂₁, Π̂₂ and Π̂₃ are required, where Σ_{i=1}^{3} Π̂_i = I and Tr[Π̂₃ ρ_LR] = 0 (the third projector never clicks on the output states). The projectors Π̂_i (i = 1, 2, 3) are built from the three linearly independent basis vectors B = {|L↑, R↓⟩, |L↓, R↑⟩, |L↓, R↓⟩}. Similarly to the method used above for the two-level state, these three projectors are also extracted from the subspace of the product projectors on the two sites L and R and implemented by the PAD unit of the setup.
Fig. 4(b) reports the measured error probabilities for phase discrimination with the three-level states, where we omit the experimental result for the Fermionic case because it is identical to the two-level case already given above (see Fig. 3(b)). A clear discrepancy between the credibility of phase discrimination using different kinds of particles can be observed. In particular, both types of indistinguishable particles provide an advantage over distinguishable ones within the range φ ∈ (2π/3, 4π/3), but Fermions further outperform Bosons by a difference in P_err of 0.119 at φ = π. This can be intuitively interpreted by recalling that the exchange interaction of Fermions prevents them from occupying the same state, so the wavefunction amplitude disperses between different states and produces a large amount of coherence. In contrast, Bosons tend to bunch in a single state, so the applicable coherence is reduced.
Discussion.-Coherence activated from spatial indistinguishability is a fundamental contribution to the quantumness of multiparticle composite systems, intimately related to the presence of identical particles (subsystems). It cannot exist between different types of quanta, that is, in systems made of nonidentical (or distinguishable) particles. Due to its intrinsic nonlocal trait, in order to exploit indistinguishability-based coherence for quantum information tasks, transformations and measurements on the resource state must admit a direct product decomposition into local operations, which is achieved by sLOCC. We note that in the case of two identical particles, the Schmidt decomposition recovers our capability to perform all possible measurements [41]. Therefore, the application of indistinguishability-based coherence among three or more quanta remains an open research route. In this paper, we have experimentally investigated indistinguishability-based coherence, demonstrating its operational usefulness in a quantum metrology protocol. Our photonic architecture is capable of tuning the degree of spatial indistinguishability of two uncorrelated photons and of adjusting the interplay between indistinguishability-based coherence and single-particle superposition-based coherence. This has allowed us to prepare via sLOCC various types of resource states and exploit them in the phase discrimination task to characterize the operational coherence.
Interestingly, our setup has been designed in such a way that both Bosonic and Fermionic exchange statistics can be addressed on the same footing, directly revealing the imprint of particle statistics on the coherence-enabled metrological advantage.
Age and Gender Effects of Workforce Composition on Productivity and Profits: Evidence from a New Type of Data for German Enterprises
This empirical paper documents the relationship between the composition of a firm's workforce (with a special focus on age and gender) and its performance (productivity and profitability) for a large representative sample of enterprises from manufacturing industries in Germany, using newly available, unique data. We find concave age-productivity profiles and a negative correlation of age with firms' profitability. Moreover, our micro-econometric analysis reveals for the first time that the ceteris paribus lower level of productivity in firms with a higher share of female employees does not go hand in hand with a lower level of profitability in these firms.
Introduction
Economic research has a long tradition of explaining differences in firm performance (e.g., Bartelsman & Doms, 2000; Syverson, 2011). Whereas some studies are interested in the effects of work practices (e.g., codetermination, training, incentive schemes) on firm performance, others are more interested in the relationship between the demographic structure of the workforce and firm performance. The latter stream of literature has received increasing attention due to persistent inequalities in the labor market (e.g., wage differentials between men and women, employment problems of older workers), increasing female employment rates, and the demographic change leading to an ageing workforce. To understand such inequality issues and to learn about potential aggregate productivity (welfare) changes in ageing societies with increasing female employment, micro-econometric studies on the effects of the age and gender composition of firms' workforces are important.
In the last two decades, several new databases have been made available to researchers. These databases include establishment and linked employer-employee datasets. These new data sources are usually large representative panel datasets obtained from surveys or official statistics, which allow the application of advanced econometric techniques to the analysis of firm performance. In Germany, the most used datasets in this context are the IAB Establishment Panel (Fischer et al., 2009) and the linked employer-employee data of the IAB (LIAB) (Alda, Bender, & Gartner, 2005), which combines the survey data of the IAB Establishment Panel with process-produced employee data of the social security agencies. A disadvantage of such voluntary survey information is that information about firms' productivity, costs, profits, and other variables is often seen as confidential by firms and might include measurement errors that can distort the empirical link between explanatory variables and outcomes.
In this paper, we use a new type of data (KombiFiD project) for German enterprises from the manufacturing sector that combines official statistics on employees covered by social security with information from mandatory enterprise-level surveys performed by the German Statistical Offices. Therefore, we have more reliable information than most previous studies. Moreover, we can compute firms' rates of profit, yielding new insights into the firm performance literature, as previous studies have primarily focused on productivity. A table in the appendix presents a review of recent econometric studies that explicitly address the effects of age and gender on firm performance.
All the reviewed studies have in common that they use linked employer-employee data to study the productivity effects of age and gender. The datasets used come from different countries (Germany, Netherlands, Denmark, Finland, Belgium, Portugal, Canada, USA, Taiwan). The main findings of previous research can be summarized as follows. The age-productivity profiles are mostly positive concave or inverse u-shaped. However, the estimates differ among different methods and specifications. The employment share of women mostly has significant negative effects on firm productivity in OLS (Ordinary Least Squares) regressions and non-significant effects in GMM (Generalized Method of Moments) regressions. Especially noteworthy are the last three papers in the appendix table, by Cardoso, Guimarares, and Varejao (2011) for Portugal, van Ours and Stoeldraijer (2011) for the Netherlands, and Göbel and Zwick (2012) for Germany, as they are the most comparable to our study with respect to data, variables, specifications, and methods.
Although previous research has analyzed firm productivity and the productivity-wage gap, we do not know of any study that has explicitly analyzed the effects of the age and gender composition of the workforce on firms' profitability. 1 Consequently, we present the first evidence for direct links between workforce composition and firm profits. In our micro-econometric analysis, we use a balanced panel of enterprises from German manufacturing and find concave age-productivity profiles and a negative correlation between age and profitability. The finding for productivity is consistent with standard human capital considerations (amortization periods, depreciation). The finding for profit is consistent with deferred compensation considerations (underpayment of younger and overpayment of older employees). Whereas the concave age-productivity profiles do not support fears of declining productivity due to an ageing workforce and cannot explain the employment problems of older workers, the negative effect of age on firm profits highlights the employment barrier for older workers from the labor demand side. Our analysis furthermore reveals, for the first time, that the ceteris paribus lower level of productivity in firms with a higher share of female employees does not go hand in hand with a lower level of profitability in these firms. If anything, profitability is (slightly) higher in firms with a larger share of female employees. This finding might indicate that the lower productivity of women is (over)compensated by their lower labor costs, which in turn might indicate general labor market discrimination against women or lower reservation wages and less engagement in individual wage bargaining by women.
The rest of the paper is organized as follows. Section 2 describes the data used and the definitions of the variables and presents descriptive statistics. Section 3 presents and discusses the approaches for our micro-econometric investigation. Section 4 contains the results of our micro-econometric analyses. The paper concludes in Section 5 with a summary and discussion of our results as well as comments on the newly available data for enterprises from German manufacturing used for the study.
Data, definition of variables and descriptive statistics
The empirical investigation uses data for manufacturing industry enterprises 2 . These data come from two sources. The first source is the cost structure survey for enterprises in the manufacturing sector.
This survey is carried out annually by the statistical offices as a representative random sample survey stratified according to the number of employees and industries (see Fritsch et al., 2004). The sample covered by the cost structure survey represents all enterprises with at least 20 employees from manufacturing industries. Approximately 45 percent of the enterprises with 20 to 499 employees and all enterprises with 500 or more employees are included in the sample. 3 Although firms with 500 or more employees are covered by the cost structure survey in each year, the sample of smaller firms is part of the survey for four years in a row only.
This survey is the source of information on productivity, profitability, firm size, and industry affiliation. Productivity is measured as labor productivity, defined as value added per head (in Euro and in current prices). Information on the capital stock of a firm is not available from the cost structure survey, so more elaborate measures of total factor productivity cannot be used in this study. Bartelsman and Doms (2000, p. 575) note that heterogeneity in labor productivity is accompanied by similar heterogeneity in total factor productivity in the reviewed research where both concepts are measured. In a recent comprehensive survey, Chad Syverson (2011) argues that high-productivity producers will tend to appear efficient regardless of the specific way their productivity is measured. 4 Furthermore, Foster, Haltiwanger and Syverson (2008) show that productivity measures that use sales (i.e., quantities multiplied by prices) or quantities only are highly positively correlated. Therefore, we argue that labor productivity is a suitable measure for productivity at the firm level. Labor productivity is computed as:
labor productivity = value added / number of persons employed.
Firm size is measured by the number of people working in a firm. This measure is also included in squares in the empirical models to address non-linearity in the relation between firm size and firm performance. Industry affiliation of a firm is recorded at the two-digit level.
The second source of data is the Establishment History Panel. Combining such confidential micro data from different data producers is difficult (though not impossible) and is legal only if the firm agrees in writing. The basic idea of the KombiFiD (an acronym that stands for Kombinierte Firmendaten für Deutschland, or combined firm-level data for Germany) project, described in detail on the web (see www.kombifid.de), is to ask a large sample of firms from all parts of the German economy to agree to the matching of confidential micro data for these firms, which are kept separately by three data producers (the Statistical Offices, the Federal Employment Agency, and the German Central Bank), into one dataset. These matched data are made available for scientific research while strictly obeying the data protection law, i.e., without revealing micro-level information to researchers outside the data-producing agencies. In KombiFiD, 54,960 firms were asked to agree in writing to the merging of firm-level data from various surveys and administrative data for the reporting period.
Descriptive statistics for all variables and the pooled data are reported in Table 1. It is evident from these descriptive statistics that the variation of the share of females and of the shares of employees in the two qualification groups is small over the four years covered compared with the variation between the firms in the sample. The same holds for the variation in firm size.
Therefore, the within-firm variation of important dimensions of employee diversity over time cannot be used in fixed-effects models to sufficiently identify any relationship between changes in firm performance over time and the diversity of employees. Furthermore, a comparison of the means and standard deviations of the variables indicates that some firms may have characteristics that differ by orders of magnitude from the rest of the firms in the sample. Unfortunately, due to strict data protection rules, it is not possible to report the minimum and maximum values of the variables in Table 1 (because these are figures for a single firm that may not be revealed). This is less of a problem for the variables that are defined as shares, because their values are bounded between zero and one hundred percent by definition. However, for value added per head, the rate of profit, and firm size, we know from (unpublished) results of investigations of the KombiFiD Agreement Sample that there are extremely low or high values of these variables for some firms. These extreme observations, or outliers, may be highly influential in any empirical investigation. This aspect of the data, therefore, should be addressed.
Approaches for the micro-econometric investigation
The investigation of the link between the diversity of employees (especially the composition of the workforce by age and gender) and two dimensions of firm performance (productivity, measured as value added per head in Euros, and profitability, measured as the rate of profit in percent) 12 uses empirical models that regress the performance variable on the shares of employees from different age groups, the share of female employees 13 , the shares of highly and medium qualified employees 14 , and further firm characteristics. These regression equations are simply a vehicle to test for and estimate the size of the relation between firm performance and one dimension of workforce diversity, controlling for other firm characteristics. Furthermore, note that productivity differences at the firm level are notoriously difficult to explain empirically: "At the micro level, productivity remains very much a measure of our ignorance" (Bartelsman & Doms, 2000, p. 586). Syverson (2011) surveys the recent literature on the determinants of productivity at the firm level. Inter alia, he mentions effects of competition, organizational structures within firms, payment systems, other human resources practices, managerial talent, human capital, higher-quality capital inputs, information technology (IT), and R&D. All these determinants of productivity are important for profitability as well, and they cannot be examined here with the data at hand. These limitations should be kept in mind when putting the results in perspective.
In a first step, the empirical models were estimated for the pooled data from 2003 to 2006 by Ordinary Least Squares (OLS). The descriptive statistics presented above revealed that the variation of the share of females and of the shares of employees in the two large qualification groups is small over the four years covered compared with the variation between the firms in the sample. The same holds for the variation of firm size. Therefore, the within-firm variation of important dimensions of employee diversity over time cannot be used to identify any relationship between changes in firm performance over time and the diversity of employees by adding fixed firm effects.
To address the dependence of the error term between observations from one firm over the four years, the standard errors of the estimated regression coefficients are clustered at the firm level. 16 In a second step, we address the problem that some firms have extreme values of the dependent variables. In contrast to the least squares estimator, quantile regression estimates place less weight on outliers and are found to be robust to departures from normality. Quantile regression at the median is identical to least absolute deviation (LAD) regression, which minimizes the sum of the absolute values of the residuals rather than the sum of their squares (as in OLS). This estimator is also known as the L1, or median, regression.
Results of the micro-econometric investigation
Our main empirical results for the links between dimensions of workforce composition and firm productivity (value added per head in Euro) are reported in Table 2. To check for potentially influential outliers, we re-estimated Model 1 with the robust MM regression technique, which supports our main findings from OLS. The estimated coefficients are slightly smaller and the estimated standard errors are substantially smaller for most variables in the robust MM regression, which leads to higher significance levels. Although the coefficient for the oldest age group is now significant and positive, we still find an inverse u-shaped relationship between age and productivity. The estimated negative coefficient of the employment share of women is 189 Euros in the robust MM regression, compared with 219 Euros in OLS. A noteworthy difference arises for firm size, which is likely driven by influential outliers: whereas OLS indicates a positive concave relationship, because the maximum is reached at more than 70,000 employees, the robust MM regression suggests an inverse u-shape with a maximum at approximately 4,000 to 5,000 employees.
Because we are especially interested in age-productivity profiles, Model 2, with less aggregated age groups, is also estimated with OLS and robust MM regressions. The results in Table 2 show no noteworthy changes in the estimated parameters for the other variables, so we focus on age. To facilitate interpretation, we plotted the estimated coefficients and corresponding 95% confidence intervals for OLS in Figure 1 and for robust MM in Figure 2. Note that the share of employees aged below 20 years serves as the reference group and that we neglect the oldest age group, with workers aged 65 years and older, because they may no longer be regular workers. Both plotted age-productivity profiles show in principle the same pattern: productivity increases for younger workers until approximately age 30 and does not significantly change afterward. Thus, we find a more positive concave than inverse u-shaped age-productivity profile, which does not support potential negative productivity effects due to an ageing workforce.
Our results from the pooled OLS and robust MM regressions, which address influential outliers in the data, are in principle only correlations and need not be causal, due to potential endogeneity issues stemming from omitted variables and reverse causality; reverse causality could arise, for example, if firms adjust the composition of their workforce in the case of negative productivity shocks. Therefore, we perform GMM first-difference regressions for Model 2 as robustness checks, whose results for productivity are presented in the last column of Table 2.
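To illustrate the estimation strategy just described (pooled OLS with firm-clustered standard errors, plus a median regression as an outlier-robust check), here is a minimal sketch; the data frame is synthetic and the variable names are illustrative assumptions, not the confidential KombiFiD variables.

```python
# Minimal sketch: pooled OLS with standard errors clustered at the firm level,
# and a median (LAD / quantile) regression as an outlier-robust check.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_years = 500, 4
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile([2003, 2004, 2005, 2006], n_firms),
    "share_female": np.repeat(rng.uniform(0, 100, n_firms), n_years),
    "share_age_30_49": np.repeat(rng.uniform(0, 100, n_firms), n_years),
    "firm_size": np.repeat(rng.lognormal(4, 1, n_firms), n_years),
})
df["productivity"] = (50_000 - 200 * df["share_female"]
                      + 100 * df["share_age_30_49"]
                      + rng.normal(0, 5_000, len(df)))

formula = "productivity ~ share_female + share_age_30_49 + firm_size + C(year)"

# Pooled OLS, standard errors clustered at the firm level.
ols = smf.ols(formula, data=df).fit(cov_type="cluster",
                                    cov_kwds={"groups": df["firm"]})
print(ols.summary().tables[1])

# Median regression (q = 0.5): identical to LAD, less sensitive to outliers.
lad = smf.quantreg(formula, data=df).fit(q=0.5)
print(lad.params)
```

Clustering reflects the fact that observations from the same firm over the four years are not independent, while the median regression mimics, in a simpler form, the outlier-robust logic of the MM estimator used in the paper.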
Figure 3 plots the age-productivity profile, which is again positive concave and supports our previous findings from the pooled OLS and robust MM regressions.
Table 3. Estimates for profitability. Note: The dependent variable is the rate of profit (%). All models include dummy variables for years and 2-digit-level industries. Standard errors clustered at the firm level are in parentheses for the OLS and MM regressions. Two-step GMM first-difference regressions for Model 2; first differences are instrumented with the second and third lags of their own levels. Robust standard errors are in parentheses for the GMM regressions. Coefficients are significant at * p<0.10, ** p<0.05, and *** p<0.01.
The OLS results for Model 1 are reported in Table 3; the corresponding age-profit profiles are plotted in Figure 4 for OLS and in Figure 5 for the robust MM regressions. It can be seen that profitability increases until age 30, as was the case for productivity, and decreases afterward, in contrast to the rather flat productivity profiles. The results of the GMM regression for profitability are presented in the last column of Table 3. As was the case for productivity, the estimated coefficients and standard errors are larger than in the pooled OLS and robust MM regressions. The estimates reveal positive coefficients for some age groups and for the female employment share, although the latter is not significant. The age-profit profile in Figure 6 shows an increase until age 30 and a slight decline afterward, although the differences between age groups are not statistically significant. Despite the low statistical significance, the overall GMM results support our previous findings from the pooled OLS and robust MM regressions.
Discussion and concluding remarks
We start our discussion with a short summary of our basic findings about age and gender effects on firm performance. In line with previous research, we find concave age-productivity profiles that increase until age 30 and are flat afterward. The age-profit profiles indicate an increase until age 30 and a decline afterward. The employment share of women and productivity are significantly negatively correlated in our pooled OLS and robust MM regressions but significantly positively correlated in the GMM regressions. Profitability seems to be positively correlated with the share of female employees in all our regressions, although not significantly in the GMM regressions. Overall, most of our findings on firm productivity are in line with findings from previous research, which is summarized in the appendix table, and we have provided new findings on firm profitability.
Our finding for age and productivity is consistent with standard human capital considerations. Human capital theory (Mincer, 1974) implies that incentives to invest in human capital decrease with age as the amortization period decreases. Moreover, human capital is usually subject to depreciation. Both arguments lead to concave or even inverse u-shaped age-productivity profiles. Our finding for age and profit is consistent with deferred compensation considerations (Lazear, 1979). In deferred compensation models with long-term employment contracts, younger workers are paid below their marginal product and older workers are paid above their marginal product to provide work incentives.
Consequently, firms' short-term profits are positively affected by younger workers with short tenure, who in effect pay loans to the firm, and negatively affected by older workers with long tenure, who receive the repayments of their loans. Although we cannot explicitly analyze tenure effects due to missing information in the data, age can be interpreted in this context, because the German manufacturing sector is characterized by stable employment, making age and tenure quite collinear. Moreover, seniority arrangements with respect to age are usually part of collective contracts, which are binding for most firms in the German manufacturing sector. Whereas the concave age-productivity profiles cannot explain the employment problems of older workers, the negative effect of older workers on profits highlights the employment barrier for older workers from the labor demand side, which might be explained by deferred compensation schemes (Heywood, Jirjahn, & Tsertsvardze, 2010; Hutchens, 1986). A similar conclusion can be drawn from previous studies that analyze the productivity-wage gap (e.g., Cardoso et al., 2011; Cataldi, Kampelmann, & Rycx, 2011; van Ours & Stoeldraijer, 2011). Moreover, our findings are important in that they do not support the fear of declining productivity in ageing societies.
Although our findings for gender and productivity are unclear from a causal perspective, we were able to document that firms with higher shares of female employees do not have lower levels of profitability. If anything, profitability is (slightly) higher in firms with a larger share of female employees. However, if firms' profits are only larger due to an underpayment of women caused by lower reservation wages and fewer wage bargaining activities, equal pay legislation might not have adverse effects on female employment but will negatively impact firms' profits. Overall, a combination of female quotas and equal pay legislation might be necessary to effectively improve the employment situation of women and to reduce gender wage gaps. Whether such a policy would be effective cannot be answered with the data at hand.
[Appendix table (excerpt): columns cover country, period, and data; estimation method; workforce composition measures; and main findings. First listed entry: Haltiwanger, Lane, and Spletzer (1999), USA, 1985-1997, linked employer-employee data; OLS: pooled levels (1990 & 1994), pooled differences (1986-1990 & 1990-1994).]
8 Note that this information on the diversity of the employees is not available in greater detail; for example, the number of female employees in a specific age group cannot be identified. 10 The sample is limited to firms from West Germany. There are large differences between enterprises from West Germany and the former communist East Germany, even many years after the unification in 1990. Therefore, an empirical study should be performed separately for both parts of Germany. 14 The reference category in our regression models is the share of employees who are either known not to be medium or highly qualified employees or whose qualification level is not reported in the data and is, therefore, unknown.
Synchronization of Fractional Reaction-Diffusion Neural Networks With Time-Varying Delays and Input Saturation

This study is concerned with the synchronization of two fractional reaction-diffusion neural networks with input saturation and time-varying delays via the Lyapunov direct method. We extend the traditional ellipsoid method by giving new definitions of the ellipsoid and of the linear region of saturation, which makes our method succinct and effective. First, we linearize the saturation terms using the properties of convex hulls. Then, using a new Lyapunov-Krasovskii functional, we give synchronization criteria and estimate the domain of attraction. All results are presented in the form of linear matrix inequalities (LMIs). Finally, two numerical experiments verify the validity and reliability of our method.

I. INTRODUCTION
After the conception of the ''small world'' [1] came up, research on complex networks entered a stage of rapid development. A complex network is a network, dynamically evolving in time, whose structure is both regular and complex [2]; it is an abstract description of the interaction between individuals in nature over time. Therefore, complex networks can describe not only global but also local behavior. As one kind of complex network, neural networks have attracted many scholars' interest because they can model many practical problems. Under the existing theoretical framework, a neural network is described in two parts: the topological structure and the dynamical model. From the point of view of the dynamical model, previous studies mainly focused on ODE models. In practice, however, the reaction-diffusion phenomenon cannot be ignored, owing to the need to describe the behavior of substances in space; thus reaction-diffusion neural networks have become a research hotspot in recent years [3]-[5]. On the other hand, as an extension of the integer-order reaction-diffusion equation, the fractional-order reaction-diffusion equation can model more complex phenomena due to its non-local properties. It has achieved great success in such fields as anomalous diffusion [6], image enhancement [7], and porous-media seepage [8], [9]. For neural networks, existing research mainly focuses on ODE problems with Caputo and Riemann-Liouville derivatives [10], utilizing the Laplace transform and properties of the Mittag-Leffler function to obtain stability conditions [11]-[14]. Adaptive control laws have also attracted the attention of scholars. In [15], an adaptive sliding mode control method was presented for a class of fractional-order nonlinear time-delay systems with uncertainties to solve the target output tracking problem. By employing Hermitian-form Lyapunov functionals and fractional techniques, [16] presents some sufficient criteria for fractional complex projective synchronization. In [17], sufficient conditions for the global asymptotic stabilization of a class of fractional-order nonautonomous systems were obtained by constructing quadratic Lyapunov functions and utilizing a new property of the Caputo fractional derivative. In [18], the sliding mode control problem for a normalized singular fractional-order system with matched uncertainties was investigated.
The global stabilization criteria were given in [19] for fractional memristor-based neural networks with the aid of Lyapunov functions and the comparison principle. In general, the above research mainly focuses on ODE systems. Only the recent work [20] concerns the fractional reaction-diffusion neural network (FRDNN) problem with the Riemann-Liouville derivative. Hence, the study of neural networks with a fractional-order reaction-diffusion model will further develop related fields. In almost all practical applications, time delays are unavoidable, so in this paper we also take this factor into account. In many cases, researchers construct a special Lyapunov functional to solve such problems: the Lyapunov-Krasovskii functional. The Lyapunov-Krasovskii stability theorem for fractional systems with delay was investigated in [21], and the Lyapunov-Krasovskii functional has many applications to stability criteria and controller design [22]-[25]. As discussed above, synchronization between the nodes of neural networks is a widespread phenomenon. Usually, we need to introduce controllers to synchronize the nodes of the network. Fortunately, many synchronization control strategies, such as pinning control [26], [27], sliding mode control [28], adaptive control [29], and sampled-data control [30], have been implemented and can be applied to this topic. On the other hand, when designing a controller for synchronization, input saturation cannot be neglected, owing to limits on maximum power or on propagation and reaction rates. Once the control signal reaches or exceeds the saturation state, the system becomes hard to control or completely uncontrollable. At present, methods such as the ellipsoid method [31] and anti-windup [32], [33] have been applied to solve such problems. The problem of adaptive neural control for a class of strict-feedback stochastic nonlinear systems with multiple time-varying delays subject to input saturation was investigated in [34]; neural network-based adaptive control for spacecraft under actuator failures and input saturation was handled in [35]; and [36] investigates the reliable estimation problem for Markovian jump neural networks with sensor saturation. There is extensive research on control systems with saturation [31], [37]-[41]. The ellipsoid method [31] is simple and reliable and has been applied successfully to some discrete or ODE models [42], [43], but no application has been seen in PDE models such as reaction-diffusion problems. In fact, some methods that are successful for ODE models cannot be directly applied to PDE models, where we must consider the evolution of the model over the whole space. Compared with other anti-windup methods, which usually introduce a dead zone, the main advantage of this method is that the saturated controller can easily be linearized by introducing an auxiliary gain function, and the estimate of the domain of attraction can be obtained by solving LMIs. To the best of our knowledge, synchronization of FRDNNs with input saturation has not yet been fully investigated, and it has theoretical and practical value.
We hope that by putting forward such a Riemann-Liouville neural network model, combined with the existing research, we can contribute to technological development in related fields and obtain more universal results. Hence, motivated by the above reasons, the synchronization of FRDNNs with input saturation is investigated in this paper. We mainly intend to extend the ellipsoid method [44], combined with a Lyapunov-Krasovskii functional, to fractional partial differential models. We focus on the synchronization of FRDNNs with time-varying delays and input saturation; the saturated input is linearized using properties of convex hulls. The main contributions and innovations of this paper are as follows: a) New definitions of the ellipsoid and of the linear region of saturation are given for the FRDNN input-saturation problem. b) A novel Lyapunov-Krasovskii functional is employed. c) The saturation controller based on convex hulls is extended to Riemann-Liouville FRDNNs; meanwhile, the designed method can easily be extended to systems with Neumann boundary conditions. d) The domain of attraction is also estimated, to ensure that the range of initial values does not exceed the control capacity of the saturated input.

This paper is organized as follows. Section II gives the basic concepts, symbols, assumptions, and lemmas needed in the later proofs. Sections III-IV give the synchronization criterion and the estimate of the domain of attraction. In Section V, we verify the theorem of Section III with numerical examples. Section VI summarizes the paper and outlines future research. Notation: Throughout this paper, R^n denotes the n-dimensional Euclidean vector space, I_n denotes the n × n identity matrix, and ⊗ denotes the Kronecker product.

II. PRELIMINARIES
Problem Formulation: In this paper, we set the response system to be the Riemann-Liouville FRDNN (1) with Dirichlet boundary conditions and initial conditions, where Δ = Σ_j ∂²/∂x_j² is the Laplace diffusion operator on the bounded spatial domain Ω; φ_i(x, t) is a bounded continuous function; x ∈ Ω ⊂ R^n is the spatial independent variable; u_i(x, t) ∈ R^n is the n-dimensional state of the ith neuron at time t; c_i and d_i are n×n constant diagonal matrices, where c_i represents the rate with which the ith neuron resets its potential to the resting state when disconnected from the network and external inputs at position x, and d_i represents the transmission diffusion coefficient along the ith neuron; a_ij and g_ij are n×n constant matrices, where a_ij denotes the connection strength and g_ij the coupling strength between the ith and jth nodes; f_j is the excitation function of the jth node; τ(t) is the time-varying delay, satisfying 0 ≤ τ(t) ≤ τ and 0 ≤ τ̇(t) ≤ σ ≤ 1; J_i(x, t) is the control input; sat(·) is the saturation function with input saturation upper bound ū = (ū_1, …, ū_n)^T; and Γ(·) denotes the Gamma function appearing in the definition of the Riemann-Liouville fractional derivative. Then we set the drive system as (5); taking e_i(x, t) as the synchronization error function, we obtain the error system (7). Next, some useful definitions are presented.

Definition 1: The ellipsoid is defined as ε(P, ρ) = {e : max_{x∈Ω} e^T P e ≤ ρ}, where P is a positive definite matrix, and V(Ω) denotes the volume of Ω.

Definition 2: The range of state values in which the control input remains linear with respect to e_i(x, t) is defined as L(H) = {e : max_{x∈Ω} |(H_i e_i(x, t))_l| ≤ ū_l, l = 1, 2, …, n, i ∈ N}.

Remark 1: Definitions 1 and 2 extend the conception of the ellipsoid and of the linear region of saturation in [44].
By introducing the spatial variables, we use the maximum value of the function over its domain of definition to represent the function's properties. This definition turns out to be very convenient for dealing with the FRDNN problem in the later proofs. Definition 3 ([44]) introduces the convex hull co{·} of the e_i, and Definition 4 ([44]) defines, for an initial condition φ(t_0), the domain of attraction for u. The assumptions given below are essential for achieving the main results of this paper, and the following important lemmas will be employed during the proofs in the later sections.

Lemma 1 ([20]): Let u(x, t) ∈ C^n(Ω × [t_0, +∞)) be a continuous function whose Riemann-Liouville fractional-order derivative exists; then the inequality D_t^α (u^T P u) ≤ 2 u^T P D_t^α u holds, where P ∈ R^{n×n} is a positive definite matrix.

Then, inspired by [43], we note that the saturation terms can be treated independently of the spatial coordinates; thus sat(Ku(x, t)) can be expressed as in the following lemma.

Lemma 2: Let D be the set of n×n diagonal matrices whose diagonal elements are either 1 or 0. Suppose each element of D is labeled Λ_l, and denote Λ_l^- = I_n − Λ_l. Clearly, if Λ_l ∈ D, then Λ_l^- ∈ D. Let K, H ∈ R^{n×n}; then, for any u(x, t) ∈ L(H), sat(Ku(x, t)) ∈ co{Λ_l Ku(x, t) + Λ_l^- Hu(x, t), l = 1, 2, …, 2^n}.

Lemma 3 ([46]): For any vectors x, y ∈ R^n and positive definite matrix H ∈ R^{n×n}, the inequality ±2x^T y ≤ x^T H x + y^T H^{-1} y holds.

Hence, according to Lemma 2 and the properties of the Kronecker product, the synchronization errors (7) can be rewritten in the compact form (10).

III. MAIN RESULTS
In this section, we derive sufficient conditions for synchronization of the systems with Dirichlet boundary conditions and saturated control input. Theorem: Suppose Assumption 1 holds; then systems (1) and (5) achieve synchronization if there exist a positive definite matrix Q and matrices K, H such that the LMI conditions (11) and (12) hold and ε(I, ρ) ⊆ L(H), with known matrices A, B, C, D, G.

Proof: Choose a Lyapunov functional V(t) = V_1(t) + V_2(t); then V(t) ≥ 0 holds obviously. According to Lemma 1, the derivative of V_1(t) along the trajectories of system (10) can be bounded as in (17). Utilizing Green's formula and the boundary conditions, and according to assumption (A1) and Lemma 3, the third and fourth terms satisfy inequalities (18)-(20). Substituting (18)-(20) into (17), we obtain (21). Similarly, the derivative of V_2(t) satisfies inequality (22). Substituting (21) and (22) into (14), we obtain (23); thus, according to condition (11), V̇(t) ≤ 0. Since (12) can be transformed through the Lagrange multiplier method, where M = V(Ω) denotes the volume of Ω, we obtain (28), which by the Schur complement can be expressed in the LMI form (29). Thus, systems (1) and (5) achieve synchronization under the saturated input control. Meanwhile, according to (14), V(t) ≤ ϑ, where ϑ is a constant. Accordingly, since V̇(t) ≤ 0, for any initial value e(x, 0) ∈ ε(I, ρ), e(x, t) will not leave ε(I, ρ); that is, e(x, t) ∈ ε(I, ρ) ⊆ L(H) holds for all t > 0. The proof is completed.

Remark 1: In [47], when dealing with the Lyapunov functional V, the fractional derivative is taken directly, yielding Mittag-Leffler stability. For the time-delay problem, fractional differentiation of the functional V does not work, so employing a Lyapunov-Krasovskii functional and differentiating V with respect to t is the more convenient route.
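As a quick numerical sanity check on Lemma 2, the following sketch verifies, for a random state inside L(H), that sat(Ku) lies in the convex hull of the vertices Λ_l Ku + Λ_l^- Hu. Because each coordinate of a vertex independently takes either (Ku)_l or (Hu)_l, the hull here is an axis-aligned box, so membership reduces to a componentwise interval test. All matrices and the state are arbitrary illustrative choices, not the paper's data.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n = 2
ubar = np.ones(n)                        # saturation bounds
K = rng.normal(size=(n, n))
H = 0.3 * rng.normal(size=(n, n))        # auxiliary gain
# scale a random state u so that |(Hu)_l| <= 0.9*ubar_l, i.e. u in L(H)
u = rng.normal(size=n)
u *= min(1.0, 0.9 * np.min(ubar / np.maximum(np.abs(H @ u), 1e-12)))

sat_Ku = np.clip(K @ u, -ubar, ubar)     # componentwise saturation

# vertices: component l takes either (Ku)_l or (Hu)_l (Lambda_l picks rows)
vertices = []
for picks in product([0, 1], repeat=n):
    D = np.diag(picks)
    vertices.append(D @ (K @ u) + (np.eye(n) - D) @ (H @ u))
V = np.array(vertices)

# the convex hull of these vertices is an axis-aligned box here,
# so membership reduces to a componentwise interval check
inside = np.all((sat_Ku >= V.min(axis=0) - 1e-9) &
                (sat_Ku <= V.max(axis=0) + 1e-9))
print("sat(Ku) in co{vertices}:", inside)   # True
```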
Remark 2: The Lyapunov-Krasovskii functional presented in our paper is a traditional and mature tool in related work on ODEs. Still, existing research seldom handles the Riemann-Liouville derivative together with reaction-diffusion terms and saturation, so we have made an original exploration of this issue.

Corollary: Assume that τ(t) ≡ 0; then systems (1) and (5) can reach synchronization under the feedback control if the corresponding conditions hold, with known matrices A, B, C, D, G, a positive definite matrix Q, and matrices K, H.

IV. ESTIMATE OF THE DOMAIN OF ATTRACTION
Due to the nonlinear influence of saturation, the stability region is often local. In this section, we give sufficient conditions on the initial conditions that ensure the two systems reach synchronization in finite time. Dealing with the spatial variables directly is difficult, so we simplify the problem: we consider the set of maximum values of the system's initial values over its domain that conforms to a certain shape reference set χ_R, and we want this shape reference set to fill the attraction region of the system as fully as possible. If χ_R is a polygon, constraint a) is equivalent to γ²M e_imax^T e_imax ≤ ρ, i = 1, …, N. According to the Schur complement, we obtain (38). Hence, (38) is a sufficient condition on the domain of initial conditions that ensures the two systems achieve synchronization under the conditions of the above theorem. We can also solve the optimization problem (39) to obtain the maximal volume of ε(I, ρ), where ζ = 1/ρ.

Remark 3: It should be noted that (38) is only a sufficient condition, which means that the estimate of the domain of attraction is often smaller than the theoretical one. In other words, the initial conditions obtained by the above method are usually safe enough, but we still hope to find less conservative conditions in future work.

V. NUMERICAL EXAMPLES
In this section, we give two examples. In Example 1, we take parameters satisfying all the conditions of the Theorem to test the feedback-control capability; in Example 2, we test the upper bound of tolerable initial errors. We take f(u(x, t)) = tanh(u(x, t)), and the initial values are given as u_0i(x) = 0.41 sin(2πix) and v_0i(x) = 0, i = 1, 2, 3, 4. Then, according to (4), these can be transformed approximately as u_i(x, t) = 0.66 sin(2πix) and v_i(x, t) = 0 for t ∈ [−τ, 0]. Thus, we can get the maximal ρ ≈ 1.4625 by solving (39). FIGURE 1 shows that the error system keeps oscillating over a large range without input control, which implies that the two neural networks cannot reach synchronization, owing to the influence of the reaction terms and time-varying delays. We use the MATLAB LMI control toolbox to solve the LMIs in the Theorem and obtain a feasible solution for the above parameters. Based on this solution, we can choose the feedback gain matrix K. With the above control gain, FIGURE 2 illustrates that the errors between the two neural networks reach a neighborhood of 0 over the entire domain. Seen from another angle, as FIGURE 3 depicts, the errors between the two systems decay very quickly under the proposed control input. FIGURES 4 and 5 then show the input control signal of each node; in this situation, saturation was not triggered. Next, Example 2 will test the robustness of the designed control law.
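Before turning to Example 2, it may help to show how a feasibility check of this kind can be set up outside MATLAB. The sketch below uses CVXPY to solve a simple Lyapunov-type LMI as a stand-in for conditions (11)-(12), whose full matrix blocks are not reproduced here; the matrix A and the tolerance eps are illustrative choices, not the paper's data.

```python
import cvxpy as cp
import numpy as np

# Stand-in data: a stable matrix playing the role of the error-system
# dynamics; the paper's actual LMI blocks (11)-(12) are more involved.
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])
n = A.shape[0]

Q = cp.Variable((n, n), symmetric=True)

# Feasibility problem: Q > 0 and A^T Q + Q A < 0 (strictness via eps*I)
eps = 1e-6
constraints = [Q >> eps * np.eye(n),
               A.T @ Q + Q @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

print("status:", prob.status)   # 'optimal' means the LMIs are feasible
print("Q =\n", Q.value)
```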
Example 2: Consider the parameters in Example 1; we now replace the initial conditions with some ''sick'' ones to test the maximal tolerance of the initial errors. From (38), naturally, except for the two boundaries, we can take the initial errors to be the maximum value over the whole domain, that is, u_0i(x) = e_imax and v_0i(x) = 0. Let γ = 1; then we have e_imax ≈ 0.8426, and the numerical experiment indicates that the two systems can reach synchronization, as FIGURE 6 illustrates. Increasing e_imax to 20, we found that although the control input reached the saturation state, the error system could still approach a neighborhood of zero in finite time, as FIGURES 10-12 show. Increasing e_imax further to 25, we found that the errors of node 1 grow rapidly, as FIGURES 13 and 14 illustrate, which indicates that under the saturation bound J_i = 50 the systems cannot synchronize.

VI. CONCLUSION
In this work, the definitions of the ellipsoid and of the linear region of saturation were first extended to the PDE case. Within this framework, we constructed a suitable Lyapunov-Krasovskii functional for synchronizing two fractional reaction-diffusion neural networks and obtained sufficient conditions under saturated control inputs by using convex hulls and some Riemann-Liouville fractional integral properties. Besides, we estimated the domain of attraction. All conditions are presented in the form of LMIs and thus can easily be solved with the MATLAB toolbox. Finally, two numerical experiments showed that the proposed control laws remain reliable when the saturation state is triggered; meanwhile, the designed control law is safe enough given our estimate of the domain of attraction. As we can see, our method is simple and sufficient, but the estimate of the domain of attraction is rather small. In future work, we hope to find more suitable inequalities that yield less conservative conditions and to apply our approach to network consensus, fault tolerance, adaptive fuzzy control, etc.
Efficient Face Recognition in Video by Bit Planes Slicing

INTRODUCTION
In day-to-day life, face recognition systems for still images and video have been in the limelight, both for commercial purposes and for security reasons. Owing to the increased importance of security in recent years, face recognition has received substantial attention from researchers in the biometrics, pattern recognition, and computer vision communities. Various techniques have been developed for entrance control in buildings, personal access to computers, and criminal investigation. The growing demand for security has added a cost factor to face recognition systems, which makes them hard to afford for everyone. A face recognition system extracts the features of a face and compares them with a prepared database. People are naturally good at face recognition through the brain and nerve cells (Zhang and Gao, 2009); it is very difficult to understand how our brain responds to stimuli and recognizes faces, and the approach taken here to recognize faces is altogether different. Human faces have been researched for more than twenty years, during which many new techniques have been developed. A face recognition system is a software application that identifies or verifies a person from a still image or a video sequence. Unfortunately, developing a computational model for face recognition from video sequences is quite difficult because of the complexity of faces under different pose and illumination conditions. Face recognition using bit planes followed by PCA is used to implement the model: a test database of video sequences is matched against a training set of stored faces, including some real-time variations. The method is based on extracting the significant bit planes from the image and creating a virtual image by combining the bit planes that carry most of the information, multiplied by weights (Dai et al., 2009); weights are selected such that higher-order bit planes carry more weight. Why a virtual image? It reduces the within-class differences of a face, so that faces of the same person are more similar to each other while faces of different people are more discriminable; it provides robustness to various pose and illumination conditions; and, since less image information is used than in the original image, the processing time is lower than with the conventional PCA method. The virtual image is decomposed into a small set of characteristic feature images called eigenfaces, which are the principal components of the initial training set of still face images. The mean of all the virtual images is calculated (Dai et al., 2009), and each virtual image in the training database has the mean subtracted from it to give the normalized image. The covariance matrix is calculated from the normalized images by multiplying the normalized image matrix by its transpose; from the covariance matrix, eigenvectors and eigenvalues are derived, and from these the projected images are calculated. Recognition is performed by reading the video sequences and grabbing the video frames at a rate of 30 frames per second. Preprocessing of the frames is done (Poon et al., 2009): the mean of the frames is calculated to derive a mean frame, histogram equalization is applied to the frame, and bit-plane slicing is performed. By combining the significant bit planes, the frame is converted to a virtual frame by the same method. This virtual test frame is subtracted from the mean of the training images, the covariance matrix is found again, and from it the eigenvectors and eigenvalues are calculated.
The projected input image is then calculated. For testing, the projected input image is compared with the projected images; if the difference is less than a threshold, the face is recognized from the training database.

MATERIALS AND METHODS
The complete face recognition system consists of two processes: a training stage and a recognition stage. The training stage comprises the following operations:
• Prepare the initial set of face images as the training set.
• Apply histogram equalization to the face images; this makes the lower-order bit planes contribute alongside the higher-order bit planes.
• Calculate the eight bit planes, since pixel values range from 0 to 255.
• Create the virtual image by combining the planes that provide most of the information.
• Calculate the eigenfaces of the virtual images, keeping only the highest eigenvalues; these M images define the face space. As new faces are encountered, the eigenfaces can be updated or recalculated.
• Calculate the corresponding distribution in the M-dimensional weight space for each known individual by projecting their face images.
These operations can be performed from time to time, whenever there is free operational capacity. The results can be cached for use in later steps, eliminating the overhead of re-initialization and decreasing execution time, thereby increasing the performance of the entire system (Dai et al., 2009). Having trained the system, the recognition process involves the following steps:
• Test frames are obtained from test videos.
• After preprocessing, the mean of all frames is taken and a virtual image is created.
• The input image is subtracted from the mean of all training images to get the mean input image.
• The projected input image is obtained by multiplying the transpose of the eigenfaces by the mean input image.
• The Euclidean distance is calculated by subtracting the projected image from the projected input image.
• A threshold is set up for comparison; if the error is less than the threshold, the image is recognized with the least index value.
Initialization, calculation of the virtual image: Initially, each pixel value in 0-255 is converted to an eight-bit binary value. The bits at the same position are collected to form a plane; in this way all eight bit planes are formed. Figure 1 shows the bit planes before histogram equalization; the leftmost is the most significant bit plane and the rightmost the least significant. Since only the significant bit planes were giving most of the information, histogram equalization was performed. Figure 2, the bit planes after histogram equalization, shows the increased importance of the lower-order bit planes. This change, with the lower-order bit planes now carrying a significant amount of information, is due to the fact that after histogram equalization the pixel values are uniformly distributed, which can easily be verified from Fig. 3 and 4, which show the number of pixels versus gray level before and after histogram equalization, respectively: initially, low gray levels had few pixels, and after histogram equalization their pixel counts increased. The training data set has to be mean-adjusted before calculating the covariance matrix or eigenvectors. Denoting the M training images by Γ_1, …, Γ_M, the average face is calculated as Eq. 3: Ψ = (1/M) Σ_{i=1}^{M} Γ_i. Each image minus the average, Φ_i = Γ_i − Ψ, is the mean-adjusted data. The covariance matrix is given by Eq. 5 and 6: C = (1/M) Σ_{n=1}^{M} Φ_n Φ_n^T = A A^T, where A = [Φ_1, Φ_2, …, Φ_M]. The matrix C is an N² by N² matrix and would generate N² eigenvectors and eigenvalues.
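To make the initialization step concrete, here is a minimal NumPy sketch of histogram equalization, bit-plane slicing, and the weighted recombination into a virtual image, as described above. The choice of the four most significant planes and the power-of-two weights are illustrative assumptions; the text specifies only that the most informative planes are combined and that higher-order planes carry more weight. The covariance-matrix discussion continues after the sketch.

```python
import numpy as np

def histogram_equalize(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

def virtual_image(img, planes=(7, 6, 5, 4)):
    """Combine selected bit planes, weighting higher planes more."""
    eq = histogram_equalize(img)
    out = np.zeros(img.shape, dtype=np.float64)
    for k in planes:
        plane = (eq >> k) & 1            # k-th bit plane (0/1 image)
        out += plane * float(2 ** k)     # weight grows with significance
    return out

# toy example on a random 8-bit 200x180 "face" image
img = np.random.default_rng(0).integers(0, 256, size=(200, 180), dtype=np.uint8)
v = virtual_image(img)
print(v.shape, v.min(), v.max())
```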
With image sizes like 256 by 256, or even lower, such a calculation would be impractical to implement. A computationally feasible method was suggested to find the eigenvectors: if the number of images in the training set is less than the number of pixels in an image (i.e., M < N²), then we can solve an M by M matrix instead of an N² by N² matrix. Consider the covariance matrix as A^T A instead of A A^T. The eigenvectors Y_i can then be calculated from Eq. 7: A^T A Y_i = μ_i Y_i, where μ_i is the eigenvalue. Here the covariance matrix is M by M, so we obtain M eigenvectors instead of N². Pre-multiplying Eq. 7 by A, we have Eq. 8: A A^T (A Y_i) = μ_i (A Y_i). The products A Y_i give us the M eigenfaces of order N² by 1, and all such vectors span an image space of dimensionality M.

Recognition process, using eigenfaces to classify a face image: A new image T is transformed into its eigenface components (projected into 'face space') by the simple operation of Eq. 9: w_k = u_k^T (T − Ψ), for k = 1, 2, …, M, where u_k is the kth eigenface. The weights obtained form a vector Ω^T = [w_1, w_2, w_3, …, w_M] that describes the contribution of each eigenface to the representation of the input face image. This vector may then be used in a standard pattern recognition algorithm to find which of a number of predefined face classes, if any, best describes the face. A face class can be calculated by averaging the weight vectors of the images of one individual. The face classes to be made depend on the classification desired; for example, a face class can be made of all images in which the subject wears spectacles, and with this face class one can classify whether a subject wears spectacles or not. The Euclidean distance of the weight vector of the new image from the face-class weight vector is calculated as Eq. 10: ε_k = ||Ω − Ω_k||, where Ω_k is the vector describing the kth face class. The face is classified as belonging to class k when the distance ε_k is below some threshold value θ_ε; otherwise the face is classified as unknown. Database preparation: The database was obtained with 10 photographs of each person at different viewing angles and with different expressions, as shown in Fig. 5; there are 10 persons in the database. A database was also prepared for the testing phase by taking short video sequences of persons with different expressions and viewing angles using a low-resolution camera.
• Select a video sequence or a test image 'T' to be tested using an open-file dialog box, or use a webcam for real-time image testing. The test image is also reshaped to 200×180, as shown in Fig. 7.
• Frames are taken from the videos and the mean frame is found; a virtual frame is created. In the case of a test image, it is read and converted to a virtual face. It is then normalized (Eq. 11) by taking the difference of the test virtual frame or test image from the mean image of the trained database.
Testing:
• We project the normalized test data set.
• For each column in the matrix Z_N, we calculate the Euclidean norm of the difference with the projected vectors of matrix Y_N. Finally, the test image is identified as the person with the smallest value among all the Euclidean norm values. Figure 8 shows the identified person.

RESULTS AND DISCUSSION
We considered a total of 100 images, comprising 10 images per person for 10 individuals under varying conditions. The performance of the algorithm increases with the number of images per person, as shown in Table 1.
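The dimensionality trick and the matching rule translate directly into a few lines of NumPy. The sketch below builds eigenfaces from the small M-by-M matrix A^T A, projects the training set, and classifies a test vector by the smallest Euclidean distance against a threshold. Array sizes, the eigenvalue cut-off, and the threshold value are illustrative.

```python
import numpy as np

def train_eigenfaces(X):
    """X: (N2, M) matrix whose columns are vectorized virtual images."""
    psi = X.mean(axis=1, keepdims=True)          # average face (Eq. 3)
    A = X - psi                                  # mean-adjusted data
    mu, Y = np.linalg.eigh(A.T @ A)              # small M x M problem (Eq. 7)
    order = np.argsort(mu)[::-1]                 # keep largest eigenvalues
    keep = [i for i in order if mu[i] > 1e-10 * mu.max()]
    U = A @ Y[:, keep]                           # eigenfaces (Eq. 8)
    U /= np.linalg.norm(U, axis=0)               # normalize columns
    W = U.T @ A                                  # projected training set
    return psi, U, W

def recognize(t, psi, U, W, theta):
    """Return index of best-matching training image, or -1 if unknown."""
    w = U.T @ (t.reshape(-1, 1) - psi)           # projection (Eq. 9)
    d = np.linalg.norm(W - w, axis=0)            # Euclidean distances (Eq. 10)
    k = int(np.argmin(d))
    return k if d[k] < theta else -1

# toy run: 10 random vectorized 200x180 "faces"
rng = np.random.default_rng(0)
X = rng.normal(size=(200 * 180, 10))
psi, U, W = train_eigenfaces(X)
print(recognize(X[:, 3], psi, U, W, theta=1e-6))  # matches training image 3
```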
CONCLUSION
This method, using bit-plane slicing for face recognition followed by PCA, was motivated by information theory, basing face recognition on a small set of image features that best approximates the set of known face images. The eigenface approach provides a practical solution well fitted to the problem of face recognition: it is fast, relatively simple, and works well in different environments. Certain issues of robustness to changes in lighting, head size, and head orientation are addressed. The trade-off in the number of eigenfaces necessary for unambiguous classification remains a matter of concern. The processing speed is faster than that of other methods. This project is based on a bit-plane-sliced approach that gives maximum accuracy when the images are taken in a constrained environment. Adaptive algorithms may be used to obtain an optimum threshold value. There is scope for future improvement of the algorithm by using neural network techniques, which can give better results than the eigenface approach and improve accuracy. Instead of a constant threshold, an adaptive one could be used, depending on the conditions and the database available, so as to maximise accuracy. The whole software depends on the database, and the database depends on the resolution of the camera; with a good-resolution digital or analog camera, the results could be considerably improved. The method can also be extended to the recognition of multiple faces.
Fluorescent Nanoparticle-Based Indirect Immunofluorescence Microscopy for Detection of Mycobacterium tuberculosis

A method of fluorescent nanoparticle-based indirect immunofluorescence microscopy (FNP-IIFM) was developed for the rapid detection of Mycobacterium tuberculosis. An anti-Mycobacterium tuberculosis antibody was used as the primary antibody to recognize Mycobacterium tuberculosis, and an antibody-binding protein (Protein A) labeled with tris(2,2′-bipyridyl)dichlororuthenium(II) hexahydrate (RuBpy)-doped silica nanoparticles was then used to generate the fluorescent signal for microscopic examination. Prior to detection, Protein A was immobilized on the RuBpy-doped silica nanoparticles with a coverage of ∼5.1×10² molecules/nanoparticle. With this method, Mycobacterium tuberculosis was detected in a bacterial mixture as well as in spiked sputum. The use of the fluorescent nanoparticles yields amplified signal intensity and higher photostability than the direct use of a conventional fluorescent dye as the label. Our preliminary studies have demonstrated the potential of the FNP-IIFM method for rapid detection of Mycobacterium tuberculosis in clinical samples.

INTRODUCTION
Tuberculosis (TB) is a global public health emergency, fueled by the spread of human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) and the emergence of drug-resistant strains of Mycobacterium tuberculosis (M. tuberculosis). Approximately 2 billion people, one third of the human population, are currently infected with TB, with one new infection occurring every second. Each year there are more than 8.8 million cases and close to 2 million deaths attributed to TB worldwide. Experts at the World Health Organization (WHO) predicted that these numbers would escalate in the coming decades: between now and 2020, nearly 1 billion people would become newly infected, over 150 million would become sick, and 36 million would die worldwide if control were not further strengthened [1]. Rapid and accurate diagnosis of tuberculosis is a critical step in the management and control of TB. For decades, diagnosis has largely relied on acid-fast staining and culture of bacilli. However, the sensitivity of acid-fast staining is poor, and culture is a relatively time-consuming process. Many efforts have been directed toward developing techniques for rapid diagnosis of tuberculosis with higher sensitivity and reliability [2], including methods based on molecular biology (molecular diagnostic techniques) [3], such as nucleic acid amplification tests (NAA tests) [4,5] and DNA probes [6,7], and methods based on immunology (serodiagnostic techniques) [8], such as enzyme-linked immunosorbent assay (ELISA) [9,10], immunochromatographic assays [11], and latex agglutination assays [12]. Recently, simpler, direct, and visually detectable assays using Au nanoparticles have been developed for rapid diagnosis of TB [13,14]. These approaches have contributed much to improving the sensitivity and accuracy of detection but still exhibit deficiencies to some extent [15]. NAA tests have been the subject of a number of investigations; many commercial kits are available, including the Amplicor and MTD tests, which are currently US FDA approved. NAA tests have high specificity and work well to rule in TB; however, their sensitivity is lower, and they are less suited to ruling out TB. Serological tests for the diagnosis of tuberculosis have been attempted for decades.
Dozens of commercial kits are available, most of which focus on antibody detection. However, assays based on antibody detection can hardly distinguish active TB from BCG vaccination or past infection. Therefore, further studies are needed to develop and improve detection methods for tuberculosis. Dye-doped silica nanoparticles [16,17], which exhibit such important advantages as high luminescence and photostability compared with conventional fluorescent dyes, have been widely applied in biological imaging and ultrasensitive bioanalysis, including cell staining [18], DNA detection [19,20], cell surface receptor targeting [21-24], and ultrasensitive detection of Escherichia coli O157:H7 [25]. Owing to the dye-encapsulated structure, thousands of dye molecules embedded in one nanoparticle contribute to the luminescence of that particle, causing significant signal amplification. In this paper, we establish a rapid immunological method for the detection of M. tuberculosis by combining highly luminescent RuBpy-doped nanoparticles with indirect immunofluorescence microscopy. Since direct anchoring of antibodies onto solid supports via covalent methods often entails a loss of antibody activity, Protein A was applied as an affinity adsorber. In order to retain full antibody activity, M. tuberculosis was first recognized with the specific antibody in solution and then signaled by Protein A-functionalized fluorescent nanoparticles. This method was used to detect M. tuberculosis in mixed bacterial samples and spiked sputum samples. Meanwhile, the signal intensity and photostability of the method were compared with those of the conventional fluorescein isothiocyanate (FITC) dye-labeling method. Bacteria: The H37Ra strain of M. tuberculosis was obtained from the National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). M. tuberculosis was cultured by Dr. Songlin Yi (Hunan Tuberculosis Hospital, Hunan, China) on modified Lowenstein-Jensen medium at 37°C for 3-4 weeks to obtain a pure bacterial culture for use in establishing the detection method. M. tuberculosis was harvested in pH 7.4, 0.01 M phosphate-buffered saline (PBS) to form a predominantly single-cell suspension using a previously described method [26]. E. coli strain DH5α (Microbial Culture Collection Center of Guangdong Institute of Microbiology, Guangdong, China) was grown overnight in Luria-Bertani broth at 37°C. The bacterial suspensions were counted in a Petroff-Hausser chamber, and the concentrations of bacteria were adjusted for the experiments. Instrumentation: The morphology and uniformity of the RuBpy-doped silica nanoparticles were examined with an atomic force microscope (AFM) SPI3800N-SPA400 (Seiko). The size distribution of the RuBpy-doped silica nanoparticles was determined at 25°C by dynamic light scattering (DLS) using a Zetasizer 3000HS A (Malvern). The volume-weighted average diameter obtained with the manufacturer's software was used for the calculation of the average nanoparticle volume; a refractive index of 1.47 (that of silica) was used for the nanoparticles. Viscosity was determined at 30°C using a cone-plate digital viscometer LVDV-III+CP (Brookfield). Protein concentration was determined according to the Bradford method with a UV-Vis spectrophotometer DU-800 (Beckman) [28].
Biological modification of the RuBpy-doped silica nanoparticles: RuBpy-doped silica nanoparticles were prepared using the water-in-oil (W/O) microemulsion method described previously [21]. In order to immobilize Protein A onto the nanoparticles, the surface of the RuBpy-doped silica nanoparticles was first activated with CNBr. Nanoparticles (11.2 mg) were suspended in 2 ml of 2 M sodium carbonate solution by ultrasonication. A solution of CNBr in acetonitrile (0.78 g of CNBr dissolved in 2 ml of acetonitrile) was then added dropwise to the particle suspension under stirring at room temperature for 5 minutes. After the activation reaction, the particles were washed twice with ice-cold water and twice with pH 7. Indirect immunofluorescence detection of M. tuberculosis with bioconjugated nanoparticles: Rabbit anti-M. tuberculosis antibody was added to a 500 μl suspension of M. tuberculosis in PBS (final antibody concentration: 5 μg/ml) and incubated at 37°C for 1 hour. The suspension was subsequently washed twice with PBS. Nanoparticle-Protein A conjugates (0.1 mg/ml) were then added, and the mixture was incubated at 37°C for 1 hour. To remove free nanoparticle-Protein A conjugates that did not bind to the bacteria, the mixture was centrifuged at 8000 rpm for 2 minutes and the supernatant was discarded; the pellet was washed twice more. A smear slide was made by spreading the pellet on a glass slide and observed by fluorescence microscopy or confocal microscopy. For controls, a rabbit anti-p53 antibody or PBS alone was substituted for the primary antibody. Another bacterium, E. coli DH5α, was treated with the same strategy to test for cross-reaction with the bioconjugated nanoparticles. For immunofluorescence detection of M. tuberculosis with FITC-labeled antibody, FITC-conjugated rabbit anti-M. tuberculosis antibody was added to a 500 μl suspension of M. tuberculosis in PBS (final antibody concentration: 25 μg/ml), and the mixture was incubated at 37°C for 1 hour. The suspension was subsequently washed three times with PBS and then spread on a glass slide for microscopic examination. Preparation of the mixed bacterial sample: The mixed bacterial sample was prepared by mixing FITC-labeled E. coli and unlabeled M. tuberculosis. The FITC-labeled E. coli was first obtained as follows: E. coli was incubated at a concentration of 10⁹ cells/ml with 0.5 mg of FITC in 0.1 M Na₂CO₃-NaHCO₃ buffer (pH 9.2) at 37°C for 2 hours in the dark. The E. coli was then washed three times with PBS to remove free FITC and resuspended in PBS. A 500 μl mixed bacterial sample was prepared by simply mixing 1.8×10⁶ cells/ml FITC-labeled E. coli and 3.6×10⁵ cells/ml unlabeled M. tuberculosis. The mixture was analyzed with the FNP-IIFM method. Preparation of the spiked sputum sample: Sputum (2 ml) from a healthy individual was collected and divided equally into two portions. One portion was spiked with M. tuberculosis, whereas the other portion was used as the unspiked sample. The samples were then liquefied with the NALC-NaOH method. In brief, the samples were mixed with equal volumes of NALC-NaOH solution (2% NaOH, 1.45% Na-citrate, and 0.5% NALC), shaken vigorously for digestion, and allowed to stand for 15 minutes at room temperature. Then the samples were diluted with 8 ml of water. To remove large agglomerates in the sputum, the mixtures were centrifuged at 1000 rpm for 2 minutes.
The precipitates were discarded, and the supernatants were centrifuged at 4000 g for 15 minutes. After the supernatant fluids were carefully decanted, the sediments were resuspended in 10 ml of PBS and centrifuged again at 4000 g for 15 minutes. The supernatants were discarded, and the resulting pellets were suspended in 500 μl of PBS and analyzed with the FNP-IIFM method. Microscopy imaging: An inverted fluorescence microscope ECLIPSE TE300 (Nikon) equipped with a 100 W mercury lamp, a filter block (consisting of a 450-490 nm bandpass excitation filter and a 515 nm longpass emission filter), and a color CCD (Digital Camera DXM1200, Nikon) was used for routine smear microscopic examination. Confocal microscopy was performed on an inverted Olympus IX70 microscope with an argon/krypton laser emitting at 488 nm to excite both RuBpy-doped nanoparticle and FITC fluorescence. We used a dichroic beam splitter (DCB) around 560 nm, together with either a longpass (LP) 560 nm filter for the RuBpy-doped nanoparticle signal or an LP 505 nm filter for the FITC signal. The RuBpy-doped nanoparticle signal was displayed in pseudocolor red and the FITC signal in green. To study the differentiation between M. tuberculosis and E. coli in mixed bacterial samples with the FNP-IIFM method, the smears were scanned in sequential excitation mode. In brief, an argon/krypton laser emitting at 488 nm and a helium/neon laser emitting at 543 nm were used to excite FITC and RuBpy-doped silica nanoparticle fluorescence, respectively. We used a DCB around 560 nm together with the following emission filters: a bandpass (BP) 505-525 nm filter when the argon/krypton laser was used (FITC signal) or an LP 560 nm filter when the helium/neon laser was used (RuBpy-doped silica nanoparticle signal). A ×60 objective (Olympus PlanApo, NA 1.4, oil) was used for routine studies. The pixel format was 512 × 512. Highly luminescent and photostable fluorescent nanoparticles: We used an easy and efficient water-in-oil microemulsion method to synthesize the RuBpy-doped silica nanoparticles. The obtained nanoparticles were uniform and well dispersed, as shown in the AFM image (Figure 1(a)). Dynamic light scattering (DLS) measurements showed that the size distribution of the RuBpy-doped nanoparticles was narrow, with a volume-weighted mean hydrodynamic diameter of 63.8 nm (Figure 1(b)). In dye-doped silica nanoparticles, the dye molecules are trapped inside the silica matrix, which endows the nanoparticles with two important merits. First, the fluorescence emitted by one nanoparticle is contributed by the thousands of dye molecules embedded in the silica matrix, so one dye-doped nanoparticle is much more luminescent than one dye molecule; this is the significant signal-amplification effect. This attribute makes dye-doped nanoparticles advantageous for improving detection sensitivity in many respects and very suitable for the detection of bacteria with higher sensitivity. Second, owing to the protective function of the silica matrix, the nanoparticles are much more photostable than ordinary dye molecules. As shown in Figure 1(c), after continuous intensive illumination with a laser source for 80 seconds, the fluorescence intensities of both the RuBpy and FITC dyes decreased to below 20%, while the fluorescence intensity of the RuBpy-doped nanoparticles remained above 80%.
Covalent immobilization of Protein A on nanoparticles: Covalent attachment of antibodies directly to solid supports via glutaraldehyde, carbodiimide, succinimide ester, and so forth is often accompanied by a loss of biological activity of the antibodies. One of the main reasons for this reduction is the random orientation of the asymmetric macromolecules on the support surface [30]. Several approaches for achieving oriented antibody coupling, with good steric accessibility of the active binding sites and increased stability, have been developed, including the use of Protein A or Protein G [31], chemical or enzymatic oxidation of the immunoglobulin (IgG) carbohydrate moiety [32], and the use of biotin-avidin or streptavidin techniques [33]. Protein A, a highly stable 42 kDa coat protein extracted from Staphylococcus aureus, is capable of binding the Fc portion of immunoglobulins, especially IgGs, from a large number of species [34]. In our scheme, Protein A was used as an affinity adsorber to avoid direct attachment of the antibody to the nanoparticles. For immobilization of Protein A on the RuBpy-doped silica nanoparticles, the CNBr method was used to activate the surface of the silica nanoparticles and then couple the Protein A. The surface coverage of Protein A on the nanoparticles was quantified by the Bradford method, the average mass of one particle was determined through the viscosity/light scattering method, and the number of Protein A molecules attached to one particle was then calculated. The amount of Protein A immobilized on the nanoparticles was calculated approximately as [29] (Eq. 1): q = (C_i − C_t)V/m, where q is the amount of Protein A immobilized onto a unit mass of the nanoparticles (mg/mg); C_i and C_t are the concentrations of Protein A in the initial solution and in the supernatant after the immobilization reaction, respectively (mg/ml); V is the volume of the aqueous phase (ml); and m is the mass of the nanoparticles (mg). C_i and C_t were determined by the Bradford method [28]. The amount of Protein A immobilized on the nanoparticles calculated according to (1) in our experiment was ∼0.41 mg/mg. The average mass of one particle was then determined as (Eq. 2): m_i = C/N, where m_i is the average mass of one nanoparticle (mg); C is the concentration of the nanoparticle suspension (mg/ml); and N is the number of nanoparticles per unit volume of suspension (particles/ml), calculated through the viscosity/light scattering method [35] as (Eq. 3): N = φ/[(4/3)π(d/2)³], where (4/3)π(d/2)³ is the average volume of a nanoparticle, d is the volume-weighted diameter determined by light scattering, and φ is the volume fraction of the particles, determined from viscosity as (Eq. 4): φ = (h/h₀ − 1)/2.5 (the Einstein relation for dilute suspensions), where h is the viscosity of the nanoparticle suspension and h₀ is the viscosity of the solvent without nanoparticles. According to (2)-(4), the calculated average mass of one nanoparticle was ∼8.8 × 10⁻¹⁷ g. Thus there were ∼3.6 × 10⁻¹⁴ mg of Protein A on one particle, that is, ∼5.2 × 10² Protein A molecules per particle. This provided a foundation for optimal binding of the nanoparticle-Protein A conjugates with the antibody in the later process.
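The chain of calculations in Eqs. (1)-(4) can be verified with a few lines of arithmetic. The sketch below reproduces the ∼5.2 × 10² molecules/particle figure from the values quoted above; the Avogadro constant and the 42 kDa molar mass of Protein A are the only inputs not taken from this section.

```python
# Worked check of the Protein A surface-coverage calculation (Eqs. 1-4).
# Quoted values come from the text; N_A and the 42 kDa molar mass of
# Protein A are standard constants.

N_A = 6.022e23            # Avogadro's number, 1/mol
MW_protein_a = 42_000.0   # Protein A molar mass, g/mol (42 kDa)

q = 0.41                  # mg Protein A per mg nanoparticles (Eq. 1)
m_particle_g = 8.8e-17    # average mass of one nanoparticle, g (Eqs. 2-4)

# mass of Protein A carried by a single particle (~3.6e-17 g)
m_protein_per_particle_g = q * m_particle_g

# number of Protein A molecules per particle
n_molecules = m_protein_per_particle_g / MW_protein_a * N_A
print(f"{n_molecules:.0f} molecules/particle")   # ~517, i.e. ~5.2e2
```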
Detection of M. tuberculosis in pure culture: A method of fluorescent nanoparticle-based indirect immunofluorescence microscopy (FNP-IIFM) was developed for the rapid detection of Mycobacterium tuberculosis; the principle of the method is illustrated in Figure 2. In this scheme, M. tuberculosis was first recognized by a rabbit anti-M. tuberculosis antibody, and the nanoparticle-Protein A conjugates were then used to generate the fluorescent signal. To examine the binding of the bioconjugated nanoparticles to bacteria, the incubated bacteria were imaged using either fluorescence microscopy or confocal microscopy. A pure M. tuberculosis suspension was first immunodetected with the FNP-IIFM method, and the resulting confocal images are shown in Figures 3(a), 3(f). The bacteria displayed bright fluorescence, indicating that large quantities of nanoparticles had bound to the M. tuberculosis cells. In order to determine whether the binding of the bioconjugated nanoparticles to M. tuberculosis occurred solely through the antigen-specific targeting pattern or whether there were other nonspecific interactions between the Protein A-nanoparticle conjugates and other surface molecules of the bacteria, two controls were set up in which the primary antibody was substituted with (1) PBS only or (2) a rabbit anti-p53 antibody. No fluorescence was observed to associate with the M. tuberculosis in either control, as shown in Figures 3(b), 3(g) and 3(c), 3(h), suggesting that there was little nonspecific interaction between the Protein A-nanoparticle conjugates and the M. tuberculosis cell wall. These results indicate that the bioconjugated nanoparticles bind to M. tuberculosis through the antibody-mediated antigen-binding pattern. Another bacterium, E. coli DH5α, was also tested with the FNP-IIFM method; no labeling of the bacteria with the nanoparticle bioconjugates was observed, as shown in Figures 3(d), 3(i). This shows that the anti-M. tuberculosis antibody does not cross-react with E. coli DH5α and that the nanoparticle bioconjugates do not attach to E. coli DH5α nonspecifically, indicating that the FNP-IIFM method can be used to detect Mycobacterium tuberculosis in pure culture. The fluorescence-enhancement capability of the bioconjugated nanoparticle label in the FNP-IIFM method was also investigated: detection of M. tuberculosis with bioconjugated RuBpy-doped nanoparticles was compared with detection using a commercial FITC-conjugated rabbit anti-M. tuberculosis antibody. The final antibody concentration used in the FITC method was 25 μg/ml, 5-fold higher than that used in the FNP-IIFM method; we used a higher antibody concentration in the FITC method because the fluorescence signal was too low at an antibody concentration of 5 μg/ml. Figures 3(e), 3(j) show the confocal images of M. tuberculosis recognized by the FITC method. The fluorescence signal from the bacteria recognized with the FITC method (Figure 3(j)) was much weaker than the signal obtained with the FNP-IIFM method (Figure 3(f)). Although the primary antibody used in the FNP-IIFM method was only one fifth of that used in the FITC method, the average fluorescence intensity of M. tuberculosis recognized with the FNP-IIFM method was determined to be more than five times that obtained with the FITC method. This experiment reveals the signal advantage that the fluorescent nanoparticles possess over conventional fluorescent dye. Detection of M. tuberculosis in mixed bacterial samples: To evaluate the detection capability of the FNP-IIFM method in complex samples, artificial complex samples consisting of M. tuberculosis and E. coli were used for testing. In order to estimate the accuracy of detection with the FNP-IIFM method in a bacterial mixture, E. coli was labeled with FITC to distinguish it from M. tuberculosis prior to the detection. The FITC-labeled E. coli
was then mixed with unlabeled M. tuberculosis to constitute the mixed bacterial samples, which were analyzed with the FNP-IIFM method. The results obtained with confocal microscopy are shown in Figure 4(a). The image in Figure 4(a)-A shows the FITC fluorescence associated with E. coli in the mixture (pseudocolor green, emission filter: BP 505-525 nm). Figure 4(a)-B shows the fluorescence of the bioconjugated RuBpy-doped nanoparticles bound to bacteria (pseudocolor red, emission filter: LP 560 nm). If the nanoparticles had also attached to E. coli, the fluorescence would appear yellow in the overlay image (the combination of green plus red). The overlay image in Figure 4(a)-C shows no colocalization of the red fluorescent nanoparticles with E. coli, so the bioconjugated nanoparticles bound only to the M. tuberculosis. The detection was also observed with less expensive fluorescence microscopy and, as shown in Figure 4, the result was likewise good. These results indicate that the FNP-IIFM method can be used to detect M. tuberculosis in mixed bacterial samples. Meanwhile, the photostability of the fluorescent label in the FNP-IIFM method was also investigated: we compared the photostability of RuBpy-doped nanoparticles bound to M. tuberculosis and of FITC dye labels on E. coli. The fluorescence of FITC was dim after continuous irradiation for 2 minutes, while that of the nanoparticles was still bright, as shown in Figure 5. This demonstrates that the bioconjugated RuBpy-doped silica nanoparticles used in the FNP-IIFM method possess much better photostability than the FITC dye label. Detection of M. tuberculosis in spiked sputum: In order to demonstrate the usefulness of our method for M. tuberculosis detection under clinical conditions, M. tuberculosis was spiked into sputum and detected with the FNP-IIFM method. The result was compared with an unspiked sputum control to ascertain whether M. tuberculosis could be detected in the sputum. Sputum from a healthy individual was collected and divided equally into two portions; one portion was spiked with M. tuberculosis, whereas the other was used as the unspiked sample. The spiked and unspiked samples were pretreated in parallel and analyzed by the FNP-IIFM method. For sample pretreatment, we used the NALC-NaOH method to liquefy the sputum. After liquefaction for 15 minutes, the viscosity of the sputum was greatly decreased; however, there were some visible large agglomerates in both the spiked and unspiked sputum that could neither be liquefied nor broken up by vigorous vortexing. These large agglomerates caused poor smear quality, such as uneven thickness, and were best removed before the immunoreaction. To remove them, we centrifuged the liquefied sputum samples at low speed (1000 rpm, 2 minutes) and discarded the precipitates. The supernatants were analyzed with the FNP-IIFM method. As expected, the sputum samples were complex mixtures containing a great deal of bacteria and impurities, as shown in Figure 6. In the unspiked sputum sample, no fluorescent bacterium was found (Figure 6(b)), indicating that the bioconjugated nanoparticles have little nonspecific interaction with the sputum components and the oral bacteria. In the spiked sputum sample, we found highly luminescent bacteria in many microscopic fields, as shown in Figure 6(a) (a luminescent bacterium is indicated by the arrow). By comparison with the unspiked sample, the luminescent bacteria were considered to be M. tuberculosis
Detection of M. tuberculosis in spiked sputum

To demonstrate the usefulness of our method for M. tuberculosis detection under clinical conditions, M. tuberculosis was spiked into sputum and detected with the FNP-IIFM method. The result was compared with an unspiked sputum control to determine whether M. tuberculosis could be detected in sputum. Sputum from a healthy individual was collected and divided equally into two portions: one portion was spiked with M. tuberculosis, whereas the other was used as the unspiked sample. The spiked and unspiked samples were pretreated in parallel and detected by the FNP-IIFM method. For sample pretreatment, we used the NALC-NaOH method to liquefy the sputum. After liquefaction for 15 minutes, the viscosity of the sputum was greatly decreased. However, there were some visible large agglomerates in both the spiked and unspiked sputum that could be neither liquefied nor broken up by vigorous vortexing. These large agglomerates caused poor smear quality, such as uneven thickness, and were best removed before the immunoreaction. To remove them, we centrifuged the liquefied sputum samples at low speed (1000 rpm, 2 minutes) and discarded the precipitates. The supernatants were then detected with the FNP-IIFM method.

As expected, the sputum samples were very complex mixtures containing a great deal of bacteria and impurities, as shown in Figure 6. In the unspiked sputum sample, no fluorescent bacterium was found (Figure 6(b)), indicating that the bioconjugated nanoparticles have little nonspecific interaction with sputum components and oral bacteria. In the spiked sputum sample, we found highly luminescent bacteria in many microscopic fields, as shown in Figure 6(a) (a luminescent bacterium is indicated by the arrow). By comparison with the unspiked sample, the luminescent bacteria were considered to be M. tuberculosis recognized by the bioconjugated nanoparticles. The high intensity of fluorescence associated with the recognized M. tuberculosis clearly distinguished the target bacteria from the complex background. The time needed to detect M. tuberculosis in sputum with the FNP-IIFM method is <4 hours after receipt of the specimen (sample pretreatment: <1 hour; immunoassay and smear examination: <3 hours). This demonstrates that our FNP-IIFM method is useful for rapid detection of M. tuberculosis in sputum.

CONCLUSIONS

We have developed a new method for the detection of M. tuberculosis using fluorescent nanoparticle-based indirect immunofluorescence microscopy. With this method, M. tuberculosis can be detected in both mixed bacterial samples and sputum samples. Total assay time, including sample pretreatment, is within 4 hours. Compared with conventional fluorescent dyes, the use of fluorescent nanoparticles as the label in immunofluorescence microscopy offers higher luminescence and higher photostability. The method can be integrated with epifluorescent filter techniques to further shorten the time needed for detection. In addition, by substituting an antibody suited to other bacteria, this technique has the potential to be developed into a universal method for detecting a wide variety of bacteria in biomedical and biotechnological applications.
Sex Dependence in Control of Renal Haemodynamics and Excretion in Streptozotocin Diabetic Rats—Role of Adenosine System and Nitric Oxide

Recently, we compared the interplay of the adenosine system and nitric oxide (NO) in the regulation of renal function between male normoglycaemic (NG) and streptozotocin-induced diabetic rats (DM). Considering the between-sex functional differences, e.g., in NO status, we present similar studies performed in female rats. We examined whether the effects of theophylline (a non-selective adenosine antagonist) in NG and DM females with or without active NO synthases differed from the earlier findings. In anaesthetised female Sprague Dawley rats, both NG and DM, untreated or after NO synthesis blockade with L-NAME, the effects of theophylline on blood pressure, renal haemodynamics and excretion, and renal tissue NO were investigated. Renal artery blood flow (Transonic probe), cortical, outer-, and inner-medullary flows (laser-Doppler technique), and the renal tissue NO signal (selective electrode) were measured. In contrast to males, in female NG and DM rats theophylline induced renal vasodilation. In NO-deficient females, theophylline induced comparable renal vasodilatation, confirming the vasoconstrictor influence of renal adenosine. In NG and DM females with intact NO synthesis, adenosine inhibition diminished kidney tissue NO, contrasting with the increase reported in males. Lowered baseline renal excretion in DM females suggested stimulation of renal tubular reabsorption due to the prevalence of antinatriuretic over natriuretic tubular actions of adenosine receptors. An opposite inter-receptor balance pattern emerged previously from male studies. The study exposed between-sex functional differences in the interrelation of adenosine and NO in rats with normoglycaemia and streptozotocin diabetes. The findings also suggest that in diabetes mellitus the abundance of individual receptor types can differ distinctly between females and males.

Introduction

Diabetes is a chronic metabolic disease characterized by elevated blood glucose levels (hyperglycaemia), which contributes to the development of multi-organ complications that pose a serious threat to patients' lives and generate huge social and economic costs. Diabetes complications include retinopathy and nephropathy, heart disease, stroke, and diabetic foot syndrome. In addition to hyperglycaemia, a high level of oxidative stress is described in diabetes. Simultaneously, in diabetic patients and in animal experimental diabetic models, the level of nitric oxide (NO) and its bioavailability were found to be lower than in normoglycaemia. NO is an established important factor in the control of vascular tone [1,2], particularly in the kidney. Inhibition of NO synthesis results in an increase in renal vascular resistance (RVR), a decrease in renal plasma flow (RPF) and sodium excretion (U Na V), and, as a consequence, in hypertension [3].

The vascular endothelium is the main site of NO production. The actual activity of endothelial NO synthases (NOSs) depends on the bioavailability of several cofactors, e.g., tetrahydrobiopterin (BH 4 ) [4]. Interestingly, sex-specific differences have been described in NO bioavailability and NOS protein expression, observed both in Sprague Dawley and in spontaneously hypertensive rats [3,5,6].
The changes in NO synthesis and bioavailability, in combination with hyperglycaemia, are responsible for endothelial damage and dysfunction, leading to an increase in vascular resistance, hypertension, and proteinuria. All these processes contribute to, or are an effect of, oxidative stress, which per se is also responsible for the mentioned negative phenomena.

In the kidney, reactive oxygen species (ROS) interact with NO. Superoxide (O 2 − ) and its derivative molecules, such as hydrogen peroxide (H 2 O 2 ) and peroxynitrite (ONOO − ), are known to regulate solute and water reabsorption and thereby help maintain electrolyte homeostasis and extracellular fluid volume [7]. Also, NO has a strong natriuretic and diuretic action, as a result of its inhibitory effect on proximal tubular water and Na + transport [7,8]. ROS were shown to constrict the systemic and intrarenal arterial vasculature by modulating the action of other endothelium-derived factors. On the other hand, optimal ROS production is required for normal cell signalling, as ROS serve as second messengers for the activation of some pathways involved in cell growth, inflammation, apoptosis, and cell differentiation.

Adenosine (Ado) dilates the vessels in most tissues, while in the kidney it can cause both vasoconstriction and vasodilatation, depending on the prevailing stimulation of adenosine A1 or A2 receptors (A1R, A2A/BR), respectively [9]. The renal effect of another Ado receptor, A3, has been less investigated. The mediator of the Ado vasodilatory effect could be NO, as was shown previously in a study on the action of endogenous or exogenous adenosine [10]. The NO-adenosine relationship may be important especially in the medulla, which operates under relative hypoxia and is susceptible to ischaemic damage [11]. It was demonstrated that A2AR-induced efferent arteriole vasodilation through NO could counteract the vasoconstriction mediated by the adenosine A1 receptor in the afferent arterioles. This mechanism is thought to maintain the glomerular filtration rate within the normal range, but it is abolished in diabetes [12]. Moreover, some data indicate that A2AR activation promotes an increase in ROS generation, but may also activate eNOS, causing an increase in NO production, which may counteract the deleterious effects mediated by ROS and oxidative stress [13].

The abundance of particular types of Ado receptors (which belong to the P1 receptor subtype family, P1R) differs between the renal cortex and medulla. A higher A1R level in the cortex compared to the medulla and an opposite pattern for A2R expression are usually reported [14]. It is also suggested that the abundance of both receptor types is distinctly altered in pathological states, e.g., in diabetes mellitus (DM), as reviewed by Burnstock and Novak (2013) and Antonioli et al. (2015) [15,16]. However, the limited data regarding P1R in diabetes come from studies performed in male animals. Also, tubular transport is under contrary control of P1R: the net effect of A2R activation is natriuresis, whereas A1R activation is antinatriuretic.

P1R also participate in the regulation of glucose metabolism and therefore have a role in diabetes mellitus. For instance, Ado causes a rapid increase in carrier-mediated glucose uptake and acts via receptors linked to a signalling pathway that involves intracellular cyclic adenosine monophosphate (cAMP) production [17].
While it has been shown that Ado, through its P1R, can contribute to the maintenance of glucose homeostasis and modulate diabetes, the role of individual P1R subtypes is still unclear [17]. Most of the relevant experiments were performed on male animals. Single studies on purinergic receptor activity in females were performed in vitro. For instance, in women with gestational diabetes, hyperglycaemia was associated with elevated gene expression levels of A2BRs in leukocytes [17].

Recently, we showed a close interrelationship between NO and Ado receptors both in NG and DM rats (adenosine-caused vasodilation seemed to be NO-dependent); however, that study was performed in male rats only [18]. It should be noted here that sex dependence exists in NOS activity, which is higher in females than in males; the result might be greater NO synthesis in the former, possibly triggering excessive synthesis of ROS. There is a growing awareness of the need to perform experiments on animals of both sexes. The American Physiological Society and the British Pharmacological Society recently recommended that sex should no longer be ignored as an experimental variable to be tested. Indeed, this is crucially important in preclinical research as a prerequisite for the successful translation of results into clinical practice [19,20].

Considering such convincing recommendations, we have decided to extend our earlier research performed in male rats and supplement it with studies in their female counterparts, both normoglycaemic and with experimentally streptozotocin-induced diabetes (STZ-induced DM). The focus was to examine whether the effect of theophylline (Theo, a non-selective Ado antagonist) in NG and DM females with or without chronic NOS blockade differed from the earlier findings in males. Theo, a nonselective Ado receptor antagonist with very low affinity for A3R compared with the A1R and A2R subtypes of P1R [21], was often shown to inhibit adenosine-induced vascular actions in the kidneys of dogs and rats [22]. The diuretic and natriuretic effects of theophylline are well known, and inhibition of Ado receptors is critical for its natriuretic action [23].

Chronic Study

During the chronic part of the project, several parameters were tested. For convenience and clarity of presentation, we show only those in which significant changes or differences were observed. Table 1 summarises the data from chronic observations (2 weeks = 14 days) of NG and DM rats (i.e., after buffer or STZ injection), with or without L-NAME treatment for the four days before the acute experiment (+/−L-NAME 4 ).

Phase of the Oestrus Cycle, Body Weight (Bwt), Blood Glucose Level (BG)

In order to determine the phase of the oestrous cycle, a vaginal smear was performed: all females were found to be in the proestrus phase. This could be ensured thanks to good animal synchronization: the rats shared the same cage prior to the start of the experiment.

In NG females, a gradual increase in Bwt was observed over the 14 days of observation. In DM, Bwt did not differ between day 0 and day 14 (237 ± 3 vs. 233 ± 8 g), except for a transient decrease noted on day 7 (227 ± 4 g, p < 0.05 vs. day 0). Notably, chronic NOS inhibition (L-NAME 4 ) did not alter Bwt gain in NG and DM rats.

BG remained stable in NG rats throughout the two weeks. As expected, in DM rats glycaemia increased beginning from the third day after STZ injection (430 ± 16 mg/dL, p < 0.05 vs. day 0) and remained elevated until the end of the observation.
L-NAME 4 treatment did not affect glycaemia in NG or DM rats; however, a visibly higher BG (by 30 mg/dL compared with day 10) was seen in the DM group.

Blood Parameters

During chronic observation prior to L-NAME treatment, haematocrit (Hct) values increased in NG but did not change in DM animals. However, after L-NAME 4 treatment, Hct was unchanged in NG, whereas in DM females it was slightly higher vs. day 0 (p < 0.02).

There was no significant difference in plasma osmolality (P osm ) between NG and DM rats without L-NAME treatment; in DM females, a 5% increase was seen on day 10 (p < 0.01). Notably, in both groups, L-NAME 4 caused a significant increase in P osm vs. day 10 (by 7%, p < 0.03).

Table 1. Body weight, data for blood and plasma samples (upper section), and data for 24 h observations in metabolic cages (lower section) for samples obtained before (day 0) and 10 or 14 days after streptozotocin (STZ) or solvent injections in female Tac:Cmd:SD rats, normoglycaemic (NG) or diabetic (DM), before (day 10) and after L-NAME treatment for 4 days (the box shaded grey depicts effects shown on day 14).

In NG animals, plasma sodium concentration (P Na ) did not change during the chronic observation, either without or with L-NAME 4 . However, in DM animals, P Na was significantly lower before L-NAME treatment in comparison with the respective day 0 (p < 0.03), and also lower than P Na in NG animals (p < 0.04). Plasma potassium concentration (P K ), in both NG and DM animals, did not change over the whole period of chronic observation (14 days), similarly without and with L-NAME 4 .

Daily Water Intake and Urine Excretion

As expected, in NG animals prior to L-NAME treatment no changes in water intake were observed, whereas in DM animals an approximately 3-fold increase (p < 0.001) was seen. L-NAME 4 caused an increase (vs. day 10) in the volume of water drunk (not significant).

In NG animals, daily urine flow did not change during chronic observation, irrespective of L-NAME treatment. In DM animals, an approximately 8-fold increase (p < 0.0001) in diuresis was seen (day 10), and chronic addition of L-NAME caused a 25% increase in diuresis (not significant in comparison with the respective day 10).

Daily total solute excretion (U osm V) during chronic observation before L-NAME treatment (day 10) decreased in NG (p < 0.001), whereas in DM animals it increased about 6-fold (p < 0.001) compared with day 0. U osm V differed significantly between groups (p < 0.001). Chronic addition of L-NAME restored solute excretion towards the basal value in NG but did not alter it significantly in the DM group. However, the U osm V difference between NG and DM females proved significant.

The pattern of daily U Na V changes was similar to that of U osm V. During chronic observation before L-NAME treatment, U Na V decreased in NG (p < 0.001), whereas in DM animals it approximately doubled (p < 0.002) compared with day 0. U Na V differed significantly between groups (p < 0.001). Chronic addition of L-NAME restored sodium excretion towards the basal value in NG but did not alter it in the DM group. However, the U Na V difference between NG and DM females proved significant.

Acute Experiments
2.2.1. Effects of L-NAME 4 Pretreatment in NG and DM Rats on Baseline Haemodynamics and Renal Circulation

The data on basal haemodynamics, renal perfusion, and excretion are collected in Table 2. No difference in MABP between NG and DM females without L-NAME 4 treatment was observed during control periods. However, as expected, in both the NG and DM groups MABP without pretreatment was lower than in the groups receiving L-NAME 4 : in NG by 20% (123 ± 3 vs. 149 ± 2 mmHg, p < 0.001), and in DM by 15% (124 ± 2 vs. 143 ± 4 mmHg, p < 0.0001).

RBF, whole-kidney blood flow; RVR, renal vascular resistance; CBF, cortical blood flow (laser-Doppler flux); V, urine flow; U osm V, U Na V, U K V, the excretion of total solutes, sodium, and potassium, respectively. The values are means ± SEM; n = 7-8 for each group; *, significantly different from the corresponding untreated group; †, significantly different from the NG group.

Interestingly, both basal OMBF and IMBF were unaffected by diabetes or L-NAME 4 pretreatment.

Effects of Theophylline on MABP, Heart Rate, and Renal Total and Regional Perfusion in NG and DM Rats Untreated or Pre-Treated with L-NAME 4

Figure 1 shows that Theo affected MABP in NG-L-NAME 4 rats only, with a gradual decrease from 147 ± 4 to 139 ± 4 mmHg. Interestingly, HR was increased by Theo both in NG and DM females (by 16-26%), irrespective of L-NAME 4 treatment; a significant elevation above baseline persisted after cessation of drug infusion.

The changes in medullary blood perfusion differed from those seen in the cortex. Medullary blood flow was affected by Theo only in DM rats treated with L-NAME 4 ; however, only the increase in OM-BF (by 23 ± 5%) was significant (p < 0.01). In NG females, only the Theo effects on IM-BF differed between the untreated and L-NAME-treated groups.
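The RVR values reported in Table 2 and discussed below are derived from the measured pressure and flow. A minimal sketch of that derivation, approximating the driving pressure by MABP and using hypothetical illustrative values (raw flows are not reproduced here):

```python
def rvr(mabp_mmHg: float, rbf_mL_min: float) -> float:
    """Renal vascular resistance as driving pressure over flow.

    Approximates renal perfusion pressure by MABP (renal venous pressure
    neglected), giving RVR in mmHg.min/mL.
    """
    return mabp_mmHg / rbf_mL_min

# Hypothetical values for illustration only.
baseline = rvr(123.0, 6.5)
after_theo = rvr(123.0, 7.5)   # renal vasodilation: RBF up at constant MABP
pct = 100.0 * (after_theo - baseline) / baseline
print(f"RVR change: {pct:+.1f}%")  # negative = decreased resistance
```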
Since in NG, NG+L-NAME 4 , and DM+L-NAME 4 the RBF increase induced by Theo occurred with no change or a decrease in MABP, the calculated renal vascular resistance (RVR) decreased in NG rats only and persisted below the baseline value even after cessation of the drug infusion. Dissimilarly, in DM rats, only in those pre-treated with L-NAME 4 did Theo evoke an RVR decrease, by 28 ± 5% (p < 0.01), without recovery to the baseline value until the end of the experiment (Figure 1).

2.2.4. Effects of Theophylline (Theo) on Renal Tissue NO Signal in NG and DM Female Rats, Untreated or Pre-Treated with L-NAME 4

Theo infusion significantly decreased the renal medullary tissue NO signal in NG rats, by 5 ± 2%, p < 0.05. This effect was not clearly altered by L-NAME pretreatment; however, the Theo-induced change of 4 ± 3% was not significant (Figure 1). Moreover, we found here that an i.v. bolus of L-NAME, given after completing experiments with control (no Theo treatment) NG and DM rats chronically pre-treated with different doses of L-NAME, induced only a minimal 1-2% decrease in the tissue NO signal, quite similar in the NG and DM groups. This indicates that the chronic dosage, higher in diabetic than in normoglycaemic animals, was adequate and maximally efficient in both groups. We have not attempted to make any quantitative inter-group comparisons of basal or post-treatment NO signal values: while the method enables reliable estimation of changes in bioavailable tissue NO during a given experiment, it may be of doubtful value as a measure of absolute NO concentration.

2.2.5. Effects of Theophylline on Renal Excretion in NG and DM Female Rats, Untreated or Pre-Treated with L-NAME

Similarly to the effects of Theo on renal haemodynamics (Figure 1), the changes induced in the renal excretion parameters were unidirectional (Figure 2). Usually, an increase in V, U osm V, U Na V, and U K V during Theo infusion was seen in NG and DM rats, irrespective of L-NAME 4 pretreatment. The exception was U osm , which in NG rats was elevated after Theo infusion, similarly in untreated and pre-treated rats; U osm did not change in DM animals (data shown in Supplementary Files).

Urine flow (V) was increased by Theo in NG and even more so in the NG+L-NAME 4 group (by 156 ± 33% vs. 263 ± 38%, respectively, p < 0.05). On the other hand, in DM rats, the Theo-induced V elevation was smaller in the L-NAME 4 -treated group (177 ± 63% vs. 83 ± 19%, respectively, p < 0.002).
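The percentage responses quoted throughout (e.g., 156 ± 33%) are per-animal changes from baseline summarized as mean ± SEM. A minimal sketch of that summary, with hypothetical per-animal values for illustration:

```python
import numpy as np

def pct_response(baseline: np.ndarray, response: np.ndarray):
    """Per-animal % change from baseline, returned as (mean, SEM)."""
    pct = 100.0 * (response - baseline) / baseline
    return pct.mean(), pct.std(ddof=1) / np.sqrt(pct.size)

# Hypothetical urine-flow values for n = 7 rats (arbitrary units).
baseline = np.array([4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 4.2])
during_theo = np.array([10.9, 9.2, 12.6, 11.5, 9.8, 12.1, 10.4])
mean, sem = pct_response(baseline, during_theo)
print(f"{mean:.0f} +/- {sem:.0f} %")
```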
The increase in potassium excretion (U K V) caused by Theo was also reduced by L-NAME 4 treatment: in NG from 220 ± 30% to 135 ± 40% (change not significant), whereas the difference between DM and DM+L-NAME 4 , 190 ± 50% vs. 55 ± 20%, was significant (p < 0.03). A similar tendency of post-Theo changes was seen for U osm V, both in NG+L-NAME 4 and in the DM and DM+L-NAME 4 groups. However, these differences were not significant.

Discussion

The study aimed to compare the role of the adenosine system, in its interrelation with NO, in the control of renal and systemic circulation and renal excretion between normoglycaemic and streptozotocin diabetic female rats. In most of the relevant studies males were used, to avoid the effect of the oestrous phase on the results obtained. In the current study, females were kept in synchronised groups, which enabled synchronisation of the oestrus cycle and ensured a comparable hormonal environment in all groups. Discussed below are the data from female rats and how they compare with the results of our recent studies performed in males [18,24].

Effects of STZ-Induced Hyperglycaemia on Metabolic and Renal Excretion Parameters

An increase in food intake without any weight increase was shown in hyperglycaemic rats independent of sex, possibly depending on STZ-induced hypoinsulinaemia. In addition to the loss of muscle mass, the relative Bwt lowering may have been due to dehydration caused by osmotic diuresis. Dehydration was probable because of the plasma osmolality increase in DM; in NG animals, no such alteration was seen. The same pattern of changes was observed in females and, as reported earlier [18,24], in males. However, there were no glycaemia-dependent changes in P Na and P K in female rats, whereas in males a decrease in P K was shown at the end of chronic observation.

Effects of STZ-Induced Hyperglycaemia on Systemic and Renal Haemodynamics and Renal Excretion

In the females, baseline MABP did not differ between the NG and DM groups, as was also seen in males. As expected, L-NAME only tended to increase baseline MABP, both in NG and DM rats. Dissimilarly, in NG and DM males the difference between the untreated and L-NAME groups was significant. Thus, under baseline conditions (without Ado blockade), the tonic influence of NO could be more important in female than in male rats.

As in males, baseline HR in females was slightly higher in the NG than in the DM group. However, after L-NAME, in contrast to males, HR was lower than in untreated NG and DM females.

As in males, baseline RBF in females was slightly higher in the NG than in the DM group. As expected, after L-NAME, as in males, baseline RBF was significantly lower in both NG and DM female rats, most probably due to the lack of vasodilatory NO after chronic NOS inhibition.

In females, baseline RVR did not differ between NG and DM rats and, as expected, significant increases were observed after L-NAME in both groups. A similar pattern of changes was shown previously in our study with male rats. Nevertheless, a higher RVR might be expected in females than in males, as was pointed out in some reviews [25,26].
Taken together, the data from our studies indicate that any differences in baseline (prior to the acute experiment) vascular responses between female and male rats were negligible, which suggests that the vasodilator influence of NO was similar in either sex.

Renal Regional Blood Perfusion

Interestingly, in the females neither basal OMBF nor IMBF was affected by diabetes or L-NAME treatment, despite evidence that the renal medulla is characterized by high NOS expression and activity, especially nNOS [27,28]. We reported that in normal male rats, NO bioavailability is much higher in the renal medulla compared with the cortex [29]. It is worth mentioning that in Wistar NG males, blockade of NOS activity tended to decrease medullary perfusion, in parallel with peripheral vasoconstriction (an increase in MABP) and a decrease in RBF [30].

Regarding the current findings, one can speculate that the applied dose of L-NAME 4 was too low to effectively block local medullary NOS activity. However, the same dose evidently caused peripheral vasoconstriction (increased MABP); simultaneously, it reduced RBF (i.e., perfusion of the whole renal cortex) and CBF (perfusion of the superficial cortex). In general, irrespective of sex, NOS activity in the medulla is higher than in the kidney cortex, and in females it could be additionally elevated by short-term diabetes [5,6]. It is unclear why in SD rats of both sexes chronic L-NAME treatment caused peripheral and renal cortical vasoconstriction whereas it did not modify medullary perfusion. It seems that, irrespective of sex, the renal medullary circulation is well preserved. This could be connected with its postulated role in the long-term control of blood pressure [31].

Renal Excretion

Unexpectedly, in female rats basal urine flow was slightly higher in the NG than in the DM group, in contrast to what was seen in the respective male groups. In the latter, a distinctly higher (at least twofold) baseline diuresis was shown in DM compared with NG [18,24]. Noteworthily, data from chronic observations of both sexes showed several-times-higher renal excretion in DM compared with the NG group. However, this dissimilarity between groups was preserved after anaesthesia in males only [18,24] and evidently not in females. Astonishingly, chronic NOS inhibition did not affect urine flow in female rats, even though L-NAME treatment was associated with a blood pressure increase (~25 mmHg) and a pressure-dependent increase in diuresis could be expected. The same puzzling response was also noted in the study with male rats [18].

Surprisingly, basal renal sodium excretion was higher in NG than in DM females, in contrast to the distinct opposite difference in male rats. Moreover, in females, chronic blockade of NOS activity reversed the sodium excretion response pattern, whereas in male rats there was no difference in sodium excretion between NG and DM rats [18]. Notably, this occurred while baseline blood pressure was increased, both in females and males, independent of the glycaemia level. Noteworthily, in experiments with acute intravenous L-NAME delivery to male rats, a significant increase in sodium excretion without changes in blood pressure was shown. This accords with the notion of a specific effect of L-NAME on tubular transport [32]. The differences between female and male renal excretion in NG and DM rats, as described above, are schematically summarized in Table 3.
Table 3. A comparison of baseline differences in mean arterial blood pressure (MABP) and parameters of renal excretion and urine concentration between female and male rats, normoglycaemic (NG) or with streptozotocin diabetes (DM), untreated (NO-intact) or pre-treated with L-NAME (NO-deficient). (Columns: NO status, Females, Males.) V, urine excretion; U osm V, total solute excretion; U Na V and U K V, urine excretion of sodium and potassium; U osm , urine osmolality. "(+)"/"(−)", higher or lower mean value in DM vs. NG rats in the respective sex; ↔, no difference between DM and NG baseline in the respective sex; *, significantly different vs. NG.

Taken together, these findings indicate that in females diabetes affects renal excretion differently than in males, especially in the case of sodium and potassium excretion and urine osmolality. Apparently, long-lasting hyperglycaemia causes in females an increase in the reabsorption of sodium and potassium, probably depending on the tubular action of NO.

Effects of Ado Receptor Blockade in NO-Intact Rats on Systemic and Renal Haemodynamics, and on Renal Excretion and Tissue NO

As also seen in males, nonspecific A1 and A2 receptor inhibition with Theo did not modify MABP in NG and DM females. Thus, irrespective of sex, under our experimental conditions (anaesthesia and surgery) there was no apparent basal tonic influence of the Ado system on total peripheral vascular resistance (TPVR).

Renal Haemodynamics

Theo administration induced an increase in whole-kidney perfusion (RBF) in both NG and DM female rats. This was in striking contrast to the opposite changes in NG and DM males, an increase vs. a decrease, respectively [18,24]. Consequently, in females, the changes in RVR after Ado receptor blockade did not differ between the NG and DM groups. Again, this was in contrast to what was seen in males, where, as could be expected, a decrease in RVR in NG vs. an increase in DM was shown [18,24].

Interestingly, the described between-sex variation in renal haemodynamics after Ado receptor blockade was not seen within the medulla: OMBF remained unchanged in females, whereas it tended to decrease in males, irrespective of glycaemia. Dissimilarly, IMBF tended to increase. These findings support the view that the Ado system contributes to the control of blood perfusion, but differently in the cortex and medulla [18,24,33,34]. However, in diabetes, the dissimilarity between the cortical and medullary circulation appears to be modulated by sex.

Renal Excretion

Acute intravenous delivery of a non-selective purinergic receptor antagonist (Theo) increased renal excretion (V, U osm V, U Na V) in both NG and DM female rats, similar to what was observed in male rats [18]. Notably, in NG female rats the natriuresis increase was greater than in DM, whereas no such difference was shown between NG and DM males. Thus, in the situation without NO blockade, purinergic receptors contribute to the control of tubular transport in both sexes independent of glycaemia. However, differently than in males, Ado blockade causes a greater reduction in sodium reabsorption and, consequently, a higher U Na V in NG than in DM females.
Tissue NO

Surprisingly, contrary to the increase in the renal tissue NO signal observed in males [18], acute non-selective antagonism of purinergic receptors did not induce any such change in NG and DM female rats. Rather, a tendency toward a decrease in DM and a slight decrease in tissue NO in NG were noted (Figure 1). This effect could be expected, because inhibition of A2R by Theo would cause a decrease rather than an increase in kidney tissue availability of NO.

Effects of Ado Receptor Blockade in NO-Deficient Rats on Systemic and Renal Haemodynamics, and on Renal Excretion and Tissue NO

3.4.1. Unexpected Post-Theo Decrease in MABP in NG Female Rats

Under conditions of impaired NO synthesis in NG females, Theo caused a significant and sustained decrease in MABP, whereas in the DM group blood pressure remained unchanged. This suggests an important NO participation in MABP control in NG females, and contrasts with the unresponsiveness of MABP in male NG rats, both without and with NO inhibition [18].

Why, in the NO-deficient normoglycaemic rats, was the overall TPVR sensitive to the blockade of Ado receptors? Any association of such sensitivity with A2 blockade seems unlikely: the role of the A2 effect (NO stimulation) was excluded by NO blockade before Theo administration. Nevertheless, the influence of NO-mediated A2 receptors must still be considered: the reduction in MABP may have been due to a reduction in cardiac output. Adenosine can enhance coronary circulation in rats by stimulating A2 receptor activity, also in the absence of NO. It was reported that A2 receptors (both A and B subtypes) can evoke NO-independent smooth muscle relaxation in coronary arteries [35]. Therefore, inhibition of these receptors with Theo may induce coronary vasoconstriction, leading to a decrease in cardiac output followed by a fall in blood pressure. However, the increase in heart rate shown in all our female groups, irrespective of glycaemia, does not support this hypothesis. Alternatively, the TPVR decrease and blood pressure drop could be explained by elimination by Theo of the vasoconstrictor effect of A1R. However, MABP did not decrease in NG rats without NO blockade; perhaps, if removal of A1R-mediated vasoconstriction was offset there by a simultaneous inhibition of A2R-induced vasodilation, a zero net effect would be likely. The same explanation would also be valid for DM rats.

The unusual hypotensive response to Theo seen in NG rats with NO deficiency was not due to a different basal TPVR, because the baseline post-L-NAME level of BP was almost the same in NG and DM rats and the RVR was comparable (Table 2). One can speculate that the reason could be a higher basal density of A1R in NG compared with DM rats.

In general, our results do not accord with the evidence of Ado-induced relaxation (rather than contraction) reported in a recent study of endothelium-denuded rat aorta isolated from nondiabetic males [36]; however, caution is needed in attempts to compare the results of ex vivo and whole-animal studies.

In addition to the unusual post-Theo blood pressure decrease, the NO-deficient female NG rats showed a similarly unusual decrease in renal inner-medullary perfusion (IMBF). Interestingly, such a pattern of post-Theo changes in MABP and IMBF was reported in males, but only in those with hyperglycaemia. We suggest that it was secondary to the decrease in MABP because of impaired autoregulation of IMBF. Indeed, such impairment in the face of blood pressure alterations has been repeatedly reported [31,37].
Renal Blood Perfusion

Interestingly, in the females, contrary to the tubular effects of Theo (see below), the haemodynamic effects were not dependent on NOS activity, unlike in the study with male rats [18]. In addition, we saw that under conditions of systemic NO deficiency, Ado receptor blockade induced renal vasodilatation and increased perfusion, probably due to abolishment of baseline (pre-Theo) vasoconstriction. The effect tended to be more pronounced in NG rats. Remarkably, the pronounced decrease in RVR was similar to the situation without NO blockade, when Ado receptor blockade slightly decreased RVR. On the whole, under conditions of NO deficiency there was only a slight difference between NG and DM rats in the response of the renal circulation to Ado receptor blockade, which speaks for a comparable vasoconstrictor influence of the Ado system in females, independent of glycaemia (Figure 1). Our results from Ado inhibition experiments do not quite accord with those obtained with the application of exogenous Ado. In normoglycaemic males, the renal vasculature was found to be more sensitive to adenosine-mediated vasoconstriction when NO synthases were inhibited [38], whereas in females we did not show such dependency. Similarly, the kidneys of streptozotocin-diabetic rats were clearly more responsive to the vasoconstrictor action of Ado in NO-deficient animals [10], which was not observed in our study with females.

Noteworthily, in diabetic females under conditions of systemic NO deficiency, a slight increase and a tendency to an increase after Ado receptor blockade were seen for OMBF and IMBF, respectively. This suggests that hyperglycaemia can evoke some changes in the contribution of Ado receptors to the control of the medullary circulation, which was not seen in males [18]. The post-Theo increase in perfusion seems not to be simply the effect of elimination of A1R, because no increase in OMBF or IMBF was seen in the NG and DM females given Theo without NO blockade.

Renal Excretion

In females, the Theo effects after NOS inhibition differed between NG (an increase) and DM (a decrease), whereas no such difference was earlier reported in males. Similar post-Theo effects were shown in females for sodium and potassium, but not for total solute excretion, which showed no difference between NG and DM, both with and without NO blockade. This suggests that the NO influence on the tubular action of purinergic receptors in female rats is modified by glycaemia.

Tissue NO

Remarkably, under conditions of inhibited NO synthesis, acute non-selective antagonism of purinergic receptors did not induce any changes, similarly in NG and DM female rats. This was, in some sense, unexpected in the NG group, because a parallel decrease in MABP and impairment of IMBF (a possible reduction in tissue anoxia) might be a stimulus for NO release from a source other than NOS, e.g., from S-nitrosothiols, a main form of NO storage within the vasculature [39]. In males, the same direction of changes in blood pressure and medullary perfusion (actually more pronounced) was associated with an increase in tissue NO in the DM group. On the other hand, in the NG group, Theo did not alter MABP and IMBF, nor tissue NO [18].
Animals

All protocols were approved by the Second Local Ethical Committee, Warsaw, Poland (number 156/2021), and were in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and European Union Directive 63/2010. Female Sprague Dawley (Tac:Cmd:SD) rats, diabetic (DM) or non-diabetic (NG), were used in this study. Hyperglycaemia was induced in rats aged 6-7 weeks obtained from the intramural animal breeding house. The animals were housed in groups of 2-4 under a 12:12 h light/dark cycle and had free access to tap water and standard rat chow (dry pellets with 0.25% Na w/w, ssniff GmbH, Soest, Germany). Female rats were kept in stable groups, both during breeding and before the primary chronic experiment, which ensured that the oestrus cycle was synchronized. Additionally, at the beginning of the experiment, a vaginal smear was performed to determine the phase of the cycle.

Chronic Studies

4.2.1. Induction of Diabetes

Diabetes was induced with streptozotocin (STZ i.p.; 60 mg/kg, Santa Cruz Biotechnology, Inc., Dallas, TX, USA) dissolved in citrate buffer (0.05 mol/L, pH 4.5) directly before injection. The rats' body weight (Bwt) and blood glycaemia (BG) were determined from the day before STZ injection until the acute experiment. On days 3, 7, and 10 or 14 after STZ administration, the glucose level was measured following 2 h of food deprivation. The animals were considered diabetic if blood glucose was higher than 300 mg/dL when measured 72 h after STZ injection and remained so elevated until the end of the observation. In order to define the phase of the oestrus cycle, a vaginal smear was taken in the chronic part of the experiment.

L-NAME Pretreatment

Prior to the acute experiment, in randomly selected NG and DM animals, N(G)-nitro-L-arginine methyl ester (L-NAME; Sigma, Poznań, Poland), a nonselective NOS inhibitor, was administered orally for four days (L-NAME 4 ) by admixing the powdered substance, 5 mg/100 mL, to the drinking water. The rationale for using the same L-NAME concentration for NG and DM rats, even though the latter drank more water, was that in DM rats the production of renal NO has often been found to be elevated [40]. The effectiveness of the resulting higher L-NAME dose in DM animals was confirmed by measuring the response of the tissue NO signal to a post-experiment i.v. injection of an L-NAME bolus (see the details below). In our previous studies, an analogous approach resulted in effective inhibition of endogenous NO production in rats on standard and high sodium intake; the latter showed higher water intake but also produced more NO [30].
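Because the inhibitor was dosed via the drinking water at a fixed concentration, the delivered mg/kg dose scales with water intake. A minimal sketch of that scaling (the roughly 3-fold DM water intake is taken from the chronic results above; the body weights and intake volumes are illustrative assumptions):

```python
CONC_MG_PER_ML = 5.0 / 100.0  # L-NAME: 5 mg per 100 mL drinking water

def daily_dose_mg_per_kg(water_intake_mL: float, bwt_g: float) -> float:
    """Daily oral L-NAME dose implied by water intake at fixed concentration."""
    return CONC_MG_PER_ML * water_intake_mL / (bwt_g / 1000.0)

ng = daily_dose_mg_per_kg(water_intake_mL=30.0, bwt_g=240.0)  # illustrative
dm = daily_dose_mg_per_kg(water_intake_mL=90.0, bwt_g=235.0)  # ~3x intake
print(f"NG: {ng:.1f} mg/kg/day, DM: {dm:.1f} mg/kg/day")      # DM ~3x NG
```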
Acute Experiment

4.3.1. Surgical Preparations

Rats were anaesthetized with intraperitoneal sodium thiopental (Thipen, Samarth, India), 100 mg/kg, which provided stable anaesthesia for at least 4 h. They were placed on a heated surgery table to maintain rectal temperature at about 37 °C. A polyethylene tube was placed in the trachea to ensure free airways. The jugular vein was cannulated for fluid infusions, and the carotid artery for mean arterial blood pressure (MABP) measurement (Stoelting blood pressure meter and transducers, Wood Dale, IL, USA). During surgery, 3% bovine serum albumin solution was infused i.v. at 3 mL/h to preserve plasma volume. The left kidney was exposed from a subcostal flank incision and placed in a plastic holder similar to that used for micropuncture. The ureter was cannulated for timed urine collection, and urine volume was determined gravimetrically. The details of the measurement of whole renal blood flow (RBF) (non-cannulating renal artery Transonic probe) as well as superficial cortical (CBF), outer medullary (OM-BF), and inner medullary (IM-BF) flows were as described previously [18,30].

For measurement of the tissue NO signal in the kidney, a needle-shaped ISO-NOP 200 sensor (0.2 mm in diameter), connected to a Free Radical Analyser (TBR 4100, World Precision Instruments, Inc., Sarasota, FL, USA), was inserted vertically into the medulla, to a depth of 5-7 mm from the kidney surface. The details of NO measurement and calibration of the results were as described previously [18,29].

Experimental Protocols

At the end of surgical preparation, after placement of the intrarenal probes and recovery from surgery, four 15 min urine collections (control periods) were made to determine baseline water, sodium, and total solute excretion rates in each of the eight groups. After stabilization of renal haemodynamics, Theo (0.2 mmol/kg/h) or saline (S) was infused for 45 min, followed by recovery periods. This basic protocol was applied in each of the experimental groups (n = 7-8 per group).

Analytical Procedures and Calculations

BG was measured with a glucometer (ACCU-CHECK Active, Model GC; Roche, Mannheim, Germany). Urine volumes were determined gravimetrically. Urinary osmolality (U osm ) was measured with a cryoscopic osmometer, Osmomat 030 (Gonotec, Berlin, Germany). Urine sodium (U Na ) and potassium (U K ) concentrations were measured by flame photometry (Flame Photometers, BWB Technologies, UK). Urine flow (V) and the excretion of total solutes (U osm V), sodium (U Na V), and potassium (U K V) were calculated from the usual formulas and standardized to g kidney weight (U X V/g KW).

Statistics

Values are expressed as mean ± SEM. p < 0.05 was considered the significance level. When two sets of data within one group or two groups were compared, a two-tailed Student's t-test for paired or unpaired samples, respectively, was applied. With more than two data sets or groups, the significance of changes was evaluated by analysis of variance (ANOVA) with repeated measurements, with Bonferroni's correction in the case of multiple comparisons, followed by Tukey's post hoc test (STATISTICA, version 10.0, StatSoft Inc., Tulsa, OK, USA).
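For readers reproducing this kind of analysis outside STATISTICA, the repeated-measures design maps naturally onto common statistics libraries. A minimal sketch in Python; the column names, group labels, and data are our illustration, not the study's dataset:

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per rat per collection period (hypothetical data).
rng = np.random.default_rng(0)
rats, periods = range(8), ["baseline", "theo", "recovery"]
df = pd.DataFrame(
    [(r, p, 4.0 + i * 3.0 + rng.normal(0, 0.5)) for r in rats
     for i, p in enumerate(periods)],
    columns=["rat", "period", "V"],
)

# Repeated-measures ANOVA across periods within one group.
print(AnovaRM(data=df, depvar="V", subject="rat", within=["period"]).fit())

# Paired t-test for a single two-set comparison within one group.
base = df[df.period == "baseline"].V.to_numpy()
theo = df[df.period == "theo"].V.to_numpy()
print(ttest_rel(base, theo))
```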
Conclusions

The present study in female rats, using blockade of Ado and NO, alone or combined, provided some new insights regarding the interrelation of the two systems in normoglycaemic (NG) and diabetic (DM) animals.

1. In both male and female rats with intact NO synthesis, no tonic influence of the Ado system on the resistance of the peripheral vasculature was seen. However, in the kidneys of female rats, unlike in males, the vasoactive influence of the Ado system was not altered by hyperglycaemia.

2. In female rats, blockade of NO synthesis caused a slightly greater increase in MABP and a decrease in renal haemodynamics in NG compared with DM animals, indicating an enhanced vasodilator influence of NO in diabetic females; this was not found in an earlier study with diabetic males.

3. In NO-deficient NG and DM female rats, Ado receptor blockade induced comparable renal vasodilatation, suggesting a comparable vasoconstrictor influence of the Ado system in this sex, independent of the glycaemia level. However, the novel finding in NG female rats was an associated decrease in arterial pressure, of unclear origin.

4. Another novel and unexpected finding was that in female rats, both with intact and with deficient NO synthesis, Ado receptor blockade had no or only a very slight impact on kidney tissue NO, in contrast to the distinct increase reported in males. Thus, in females only, Theo might somehow weaken rather than stimulate NO synthesis. The mechanism might be, in both sexes, the abolishment of the NO-inhibitory action of P1 receptors, presumably A1.

5. Lowered baseline renal excretion in female DM suggested stimulation of renal tubular fluid reabsorption, possibly due to the prevalence of antinatriuretic A1 over natriuretic A2 receptors. Remarkably, an opposite balance pattern between individual P1 receptor types emerged from the studies with males.

Figure 1. Effects of theophylline (Theo) on MABP, HR, renal haemodynamics, and the in situ tissue NO signal in the renal medulla of normoglycaemic (NG) and hyperglycaemic female rats (DM), untreated or pre-treated with L-NAME. Means ± SEM. MABP, mean arterial blood pressure; HR, heart rate; RBF, whole-kidney blood flow; CBF, OM-BF, and IM-BF, cortical, outer-, and inner-medullary blood flow, respectively; RVR, renal vascular resistance. *, significantly different from the respective baseline; †, significantly different from the NG rats; #, significant difference between NG non-treated and L-NAME-pre-treated rats; $, significant difference between DM non-treated and L-NAME-pre-treated rats.

Figure 2. Effects of theophylline (Theo) on renal excretion in normoglycaemic (NG) and hyperglycaemic female rats (DM), untreated or pre-treated with L-NAME. Means ± SEM. V, urine flow; U osm V, U Na V, U K V, total solute, sodium, and potassium excretion, respectively. *, significantly different from the respective baseline; †, significantly different from the NG; #, significant difference between NG and DM pre-treated with L-NAME.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijms25147699/s1.

Author Contributions: M.K.: conceptualization; data curation; writing-original draft; writing-review and editing; formal analysis. L.D.: conceptualization; data curation; writing-review and editing; formal analysis. All authors have read and agreed to the published version of the manuscript.

Funding: This research was supported by internal funding from the Mossakowski Medical Research Institute, Polish Academy of Sciences, Warsaw, Poland (Theme 18, 2021, 2022).

Institutional Review Board Statement: The experimental procedures were approved by the Second Local Ethical Committee, Warsaw, Poland (R.WAW2/156/2021; 10.11.2021), and followed European Union Directive 63/2010 and the National Institutes of Health Guide for the Care and Use of Laboratory Animals.

Informed Consent Statement: Not applicable.
Open source in cachexia?

It is the concept of freely sharing technological information so it can be improved through multiple insights and viewpoints. Because the technology is 'open source', the amount of work involved is decreased, as many individuals add multiple contributions. The central theme of open research is to make the methodology, and any data or results extracted or derived from it, freely available via the Internet. This permits a massively distributed collaboration, in which anyone can participate at any level of the project. Open 'science' is the application of open source methods to science (Figure 1). Open science removes the traditional hierarchy of research and encourages scientists of all levels—student or professor—to engage and contribute. Ideally, complete data release and collaboration happen in real time, to prevent duplication of effort and to maximize useful interaction between participants. 1

Why in cachexia? 'None of us is as smart as all of us'

At the 2nd Cancer Cachexia Conference in Montreal (2014), several concerns came up: for instance, the participation of clinical staff, and the possibility of reaching out to everybody who wants to learn about the subject and to those who are going to judge and authorize therapies. Let's not forget the industry that funds the ongoing research; nonprofit organizations especially feel much more at ease 'in the open'. There is also a lot of heterogeneity when it comes to animal models: going to open source cachexia and getting in touch with hundreds of other scientists with just a click may solve the problem.

Who is already using it?

Beyond software, other areas are reaching out for it: the open source movement has increased transparency in biotechnology research. A good example is that of Cambia, 'an Australian non-profit organization focusing on open science, biology, and intellectual property'. Cambia's efforts to freely distribute scientific tools and techniques gave rise to the Biological Open Source (BiOS) Initiative. Through an open source biotechnology licence and material transfer agreement, BiOS seeks to establish freedom to operate for innovators. A primary project of Cambia is the free full-text online patent search facility and knowledge resource, 'The Lens'; it allows free searching of almost 10 million full-text patent documents. Another example is the open source search for the malaria cure (OSM).
With research often veiled in secrecy and complicated by patents and intellectual property issues, scientists are not always the best at sharing their results, at least not until they are published in peer-reviewed journals, and sometimes only after letting evidence fall under the table. This means that lots of data, especially 'negative' data, are hidden. Avoiding the loss of vast quantities of data is just one of the reasons behind the open approach.

The use of open source for drug discovery

Because no drug has ever been discovered using an open source approach, it is difficult to be certain about how this would work. However, it seems likely that the biggest impact of the open approach would be in the early phases, before clinical trials have started. Open methods could also have an impact on the process chemistry phase, in creating an efficient chemical synthesis on a large scale. [3][4][5] One negative aspect to bear in mind is that open work cannot be patented, because there can be no delays to the release of data. Open source drug discovery must operate without patents. The hypothesis is that by working in an open mode, research and development costs are reduced and research is accelerated; this offsets the lack of capital support for the project. The costs of clinical trials and product registration would have to be sourced from government and non-government organizations. It has to be pointed out, however, that some large pharmaceutical companies (GSK and Novartis) have used open source; GSK even has an 'Open Lab' in Tres Cantos, Spain, which provides an opportunity for visiting scientists from leading international institutions to work at the campus for a dedicated period of time, accessing GSK drug discovery expertise as part of an integrated team to discover new medicines. Funding comes from elsewhere, for example the Bill and Melinda Gates Foundation and the Global Fund, and even large multinational coalitions.

Interest in cachexia research is growing every day, and because wasting is present in many diseases, such as cancer, AIDS, COPD, and chronic heart disease, it attracts researchers and pharmaceutical companies; therefore open source should be encouraged (Figure 1). 6,7

Open source tools

Together, these constitute an essential, powerful platform for academic researchers who want to develop, finance, and conduct research projects. Such a platform may sustain the accessibility of academic research together with new ways of research funding. The main tools required are as follows:

(1) A platform offering distributed revision control and source code management (e.g. GitHub).
(2) Raw experimental data recorded in an online, openly readable electronic lab notebook.
(3) A Google+ page to keep up with developments and discussions.
(4) LinkedIn as a way of connecting with relevant experts.
(5) A wiki (a web application that allows people to add, modify, or delete content in collaboration with others) to host the current overall project status.
(6) A Facebook page where updates on the project's progress can be posted; this is also a place for interaction.

Project management is important: assigning tasks, creating deadlines, tracking activity, and posting results.
Numerical approach to the onset of the electroweak phase transition

We investigate whether the universe was homogeneously in the false vacuum state at the critical temperature of a weakly first-order phase transition, such as the electroweak phase transition, by means of a series of numerical simulations of a phenomenological Langevin equation whose noise term is derived from the effective action but whose dissipative term is set so that the fluctuation-dissipation relation is met. The correlation function of the noise terms given by non-equilibrium field theory has a distinct feature depending on whether it originates from interactions with a boson or with a fermion. The spatial correlation function of noises from a massless boson damps with a power law, while the fermionic noises always damp exponentially above the inverse-temperature scale. In the simulation with the one-loop effective potential of the Higgs field, the latter turn out to be more effective in disturbing the homogeneous field configuration. Since noises of both types are present in the electroweak phase transition, our results suggest that the conventional picture of a phase transition, namely, nucleation of critical bubbles in a homogeneous background, does not apply, or that the one-loop approximation breaks down in the standard model.

I. INTRODUCTION

In modern cosmology our universe has presumably experienced a number of phase transitions in the early stage of its evolution. Among them, those at the grand unification scale, which may be related to inflation and/or the formation of topological defects, are very speculative in that we know neither the symmetry-breaking pattern nor the initial state before the transition, which must be highly non-thermal due to the rapid cosmic expansion then. In contrast, the dynamics of the electroweak phase transition (EWPT) in the standard model is a much more solid subject of study, because we have a perturbative expression for the effective potential with only one undetermined degree of freedom, namely the Higgs mass, $M_H$ [1] [2] [3], and we may well assume a thermal state before the transition, since the relevant particle interaction rates are much larger than the cosmic expansion rate by that time.

Nevertheless the dynamics of EWPT is not fully understood yet, mainly because, although the one-loop effective potential of the Higgs field, $\phi$, shows it is of first order, the potential barrier between the two minima at the critical temperature, $T_c$, is so shallow that it has been doubted whether the conventional picture of nucleation of critical bubbles in the homogeneous false-vacuum background really works. Whether the transition is of first order with super-cooling is a very important cosmological issue in judging whether electroweak baryogenesis [4] is possible.

Much work has already been done on this topic. For example, Gleiser, Kolb, and Watkins [5] considered the role of subcritical bubbles of the correlation volume as a noise effect in a weakly first-order phase transition, and Gleiser and Kolb [6] concluded that for a Higgs mass larger than 57 GeV the universe is not in the false vacuum state uniformly with $\phi = 0$ but in a mixture of $\phi = 0$ and the true vacuum $\phi \equiv \phi_+(T_c)$. See also Gleiser and Ramos [7]. Gleiser [8] and Borrill and Gleiser [9] have confirmed the occurrence of such "phase mixing" by numerical simulation of a phase transition, solving a simple phenomenological Langevin equation of the form

$$\Box\phi(x,t) + \eta\,\dot{\phi}(x,t) + V'_{\rm eff}(\phi, T_c) = \xi(x,t)\,, \qquad (1.1)$$

at the critical temperature $T = T_c$.
Later, Shiromizu et al. [10] treated the size of a subcritical bubble as a statistical variable and argued that its typical size is smaller than the correlation length. They concluded that for any experimentally allowed value of the Higgs mass, M_H ≳ 60 GeV [11], phase mixing does occur already at the critical temperature. Furthermore, in Monte-Carlo lattice simulations of the Euclidean four-dimensional theory, and of the dimensionally reduced three-dimensional model of the finite-temperature electroweak theory, an analytic cross-over behavior is observed for most experimentally permitted values of the Higgs mass [12][13].

On the other hand, Dine et al. [3] calculated the root-mean-square amplitude of the Higgs field on the correlation scale, i.e., the inverse-mass length scale at φ = 0 at the critical temperature, and concluded that for a Higgs boson with M_H ≃ 60 GeV it is much smaller than the distance between the two minima, and that the fraction of the asymmetric phase of the universe is negligible (e^{−12}), so that subcritical fluctuations do not affect the dynamics of the EWPT. Bettencourt [14] confirmed their conclusion by estimating the probability that the mean value of φ averaged over a correlation volume exceeds the distance to the maximum of the effective potential separating the two minima, finding it to be extremely small. Finally, in response to Shiromizu et al. [10], Enqvist et al. [15] treated both the amplitude and the spatial size of subcritical fluctuations as statistical variables and argued that subcritical bubbles, if they exist at all, resemble critical bubbles, so that the usual description of a first-order phase transition applies. Their analysis, however, suffers from a severe divergence, and they had to introduce a cutoff ad hoc.

Thus a number of independent analyses have drawn different conclusions about how the EWPT proceeds. In the present paper we attempt to elucidate why such a discrepancy has arisen, with the help of numerical simulations of a phenomenological Langevin equation that is better motivated than (1.1) and (1.2) from non-equilibrium field theory. In fact, the origin of the discrepancy is quite simple: it only reflects the spatial scale at which one estimates the amplitude of fluctuations. We approach the problem step by step.

The rest of the paper is organized as follows. We start with a re-analysis of the simple Langevin equation (1.1) with white noise (1.2) in Sec. II. First we consider a non-selfinteracting massive scalar model and show that numerical solutions of (1.1) and (1.2) on a lattice can reproduce the finite-temperature spatial correlation function of a massive scalar field correctly, as long as the lattice spacing is taken comfortably smaller than the correlation length, or Compton wavelength. We then solve the same equation with an effective potential of the standard model, as was done by Borrill and Gleiser [9]. We find that not only the behavior of the correlation function but also the limit on M_H above which phase mixing occurs changes drastically with the lattice spacing. In order to obtain a sensible bound on M_H, therefore, we should choose a reasonable value of the lattice spacing with the help of a fundamental theory. This issue is discussed in Sec. III, where a new phenomenological Langevin equation is proposed.
In Sec. IV we report the results of numerical simulations of the dynamics of the field based on this equation. In Sec. V we give an intuitive explanation of the numerical results using a simple Boltzmann equation. Finally, Sec. VI is devoted to a summary and discussion.

II. RE-ANALYSIS OF THE SIMPLE LANGEVIN EQUATION

In this section we elucidate the origin of the discrepancy in the previous literature. For this purpose, we first solve the simple Langevin equation (1.1) in the case where only the mass term is present in the potential, namely V_eff(φ) = m²φ²/2, where ξ(x, t) is a random Gaussian noise satisfying (1.2) with D = 2ηT. After discretizing the system on a lattice, we follow the time evolution from the initial condition φ(x, 0) = 0 and φ̇(x, 0) = 0 at each lattice point. Dimensionless variables, including μ ≡ T⁻¹m, are introduced for the numerical calculations, but we omit the tildes below.

We arrange three different lattices for comparison: one with N = 32³ lattice points, grid spacing δx = 1.0, time step δt = 0.1, and total run time t = 500; another with N = 40³, δx = 0.8, δt = 0.1, and t = 500; and a third with N = 64³, δx = 0.5, δt = 0.1, and t = 500. The discretized master equation is integrated with the second-order staggered leapfrog method, the field and its velocity being stored on interleaved time slices; here i represents the spatial index and n the temporal one, and μ is set to 0.125. The correlation of the noise on the lattice is the discretized counterpart of (1.2),

    \langle \xi_{i,n}\,\xi_{j,m} \rangle = \frac{2\eta T}{\delta x^{3}\,\delta t}\,\delta_{ij}\,\delta_{nm}.    (2.4)

Since the noise is white both spatially and temporally, we have only to generate Gaussian white noise on each grid point, ξ_{i,n} = (2ηT/δx³δt)^{1/2} G_{i,n}, where G_{i,n} is a Gaussian random number with vanishing mean and unit dispersion. Periodic boundary conditions are imposed. Under the above conditions we take the ensemble average over five different noise realizations for each case. The correlation function C(r) ≡ ⟨Φ(x)Φ(y)⟩, with r = |x − y|, is obtained numerically by averaging the product Φ(x)Φ(y) over all combinations satisfying r − 0.5 ≤ |x − y| < r + 0.5.

On the other hand, the analytic expression for the equal-time correlation function of a non-selfinteracting scalar field with mass m at temperature T is

    C(r) = T \sum_{n=-\infty}^{\infty} \int \frac{d^{3}k}{(2\pi)^{3}}\, \frac{e^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{y})}}{k^{2}+\omega_n^{2}+m^{2}} = \frac{m}{4\pi^{2}} \sum_{n=-\infty}^{\infty} \frac{K_{1}\!\left(m\sqrt{r^{2}+n^{2}\beta^{2}}\right)}{\sqrt{r^{2}+n^{2}\beta^{2}}},    (2.5)

where r = |x − y|, β = 1/T, ω_n = 2πn/β, and K_j is the modified Bessel function of the j-th order. This correlation function damps exponentially above the inverse-mass scale. Both the numerical and analytic results are depicted in Fig. 1 in dimensionless units†. We find that the correlation functions obtained numerically damp in a manner independent of the lattice spacing, and that they coincide with the analytic formula (2.5), i.e., they damp exponentially with a correlation length given by the inverse mass. Thus, for a massive non-selfinteracting scalar field, not only the simple Langevin equation (1.1) with the random noise (1.2), but also its numerical solution on a lattice, reproduces the actual finite-temperature behavior.

Next we consider an interacting scalar field, borrowing the one-loop improved effective potential, V_EW, of the Higgs field in the electroweak theory, following Borrill and Gleiser [9]. The master equation is

    \Box\phi(x,t) + \eta\,\dot\phi(x,t) + V'_{\rm EW}(\phi,T) = \xi(x,t),    (2.6)

where ξ(x, t) is again a random Gaussian noise with no correlation. V_EW takes the standard high-temperature form

    V_{\rm EW}(\phi,T) = D\,(T^{2}-T_0^{2})\,\phi^{2} - E\,T\,\phi^{3} + \frac{\lambda_T}{4}\,\phi^{4},    (2.7)

with

    D = \frac{2M_W^{2}+M_Z^{2}+2M_t^{2}}{8\sigma^{2}}, \qquad E = \frac{2M_W^{3}+M_Z^{3}}{4\pi\sigma^{3}},

for M_W = 80.6 GeV, M_Z = 91.2 GeV, M_t = 174 GeV, and σ = 246 GeV [11]. We also find

    T_0^{2} = \frac{M_H^{2}-8B\sigma^{2}}{4D}, \qquad B = \frac{3}{64\pi^{2}\sigma^{4}}\left(2M_W^{4}+M_Z^{4}-4M_t^{4}\right),    (2.12)

and the temperature-corrected Higgs self-coupling is given by

    \lambda_T = \lambda - \frac{1}{16\pi^{2}\sigma^{4}}\left[\sum_B g_B M_B^{4}\ln\frac{M_B^{2}}{c_B T^{2}} - \sum_F g_F M_F^{4}\ln\frac{M_F^{2}}{c_F T^{2}}\right],

where the sum is performed over bosons and fermions with their degrees of freedom g_{B(F)}, and ln c_B = 5.41, ln c_F = 2.64.
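The explicit forms of D, E, B, and T₀ written above are the standard high-temperature expressions (cf. Dine et al. [3]); they were restored here from context rather than copied from this record, so treat them as an assumption. They can, however, be cross-checked numerically, since they must reproduce the dimensionless constants α = 0.065 and the factor 1.72 quoted just below:

```python
# Editorial check of the restored potential parameters (not the authors' code)
import numpy as np

MW, MZ, Mt, sigma, MH = 80.6, 91.2, 174.0, 246.0, 60.0   # GeV

D = (2 * MW**2 + MZ**2 + 2 * Mt**2) / (8 * sigma**2)
E = (2 * MW**3 + MZ**3) / (4 * np.pi * sigma**3)
B = 3 * (2 * MW**4 + MZ**4 - 4 * Mt**4) / (64 * np.pi**2 * sigma**4)
T0 = np.sqrt((MH**2 - 8 * B * sigma**2) / (4 * D))

print(D, E, T0)                  # D ~ 0.17, E ~ 0.01, T0 ~ 93 GeV
print((2 * D) ** -0.75 * 3 * E)  # alpha ~ 0.065, matching the value below
print((2 * D) ** -0.5)           # ~ 1.72, the lambda_T conversion factor below
```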
The Higgs field φ appearing in the potential (2.7) is, in the actual electroweak theory, the amplitude of an SU(2)-doublet complex scalar field. In solving the Langevin equation (2.6), however, we neglect the gauge-nonsinglet nature of the field for simplicity and constrain its dynamics along the real neutral component, treating φ as if it were a real singlet field, as in Borrill and Gleiser [9]. Thus this should not be regarded as a simulation of the actual Higgs field, although our results should be suggestive for it.

† The correlation function derived analytically is regularized by subtraction of the vacuum energy.

In terms of the dimensionless field Φ and temperature θ, the potential reads

    U(\Phi,\theta) = \frac{\theta^{2}-1}{2}\,\Phi^{2} - \frac{\alpha\theta}{3}\,\Phi^{3} + \frac{\lambda}{4}\,\Phi^{4},

where the dimensionless parameters are defined as α ≡ (2D)^{-3/4}(3E) = 0.065 and λ̃ ≡ (2D)^{-1/2} λ_T = 1.72 λ_T. Hereafter we omit the tildes. The effective potential is depicted in Fig. 2. For θ > θ₁ ≡ (1 − α²/4λ)^{−1/2} there is only one minimum, at Φ = 0. At θ = θ₁ the inflection point Φ = αθ₁/2λ appears. As the temperature drops further, another minimum, which is metastable, appears, and at the critical temperature, θ_c ≡ (1 − 2α²/9λ)^{−1/2}, the two minima,

    \Phi = 0 \quad\text{and}\quad \Phi_{+} \equiv \frac{\alpha\theta}{2\lambda}\left[1 + \sqrt{1 - 4\lambda(1-1/\theta^{2})/\alpha^{2}}\,\right],

are degenerate. Below θ = θ_c the symmetric state Φ = 0 becomes metastable in turn, and at θ₂ ≡ 1 the local maximum at Φ₋ ≡ (αθ/2λ)[1 − √(1 − 4λ(1 − 1/θ²)/α²)] disappears. This is a typical model representing a first-order phase transition.
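The cubic-plus-quartic form of U written above is inferred from the quoted expressions for θ₁, θ_c and Φ±, all of which follow from it; the short editorial sketch below (assuming that form) verifies numerically that the two minima are indeed degenerate at θ_c:

```python
# Check of the dimensionless potential structure, assuming
# U(Phi, theta) = (theta^2 - 1)/2 * Phi^2 - (alpha*theta/3)*Phi^3
#                 + (lam/4)*Phi^4
import numpy as np

alpha, lam = 0.065, 0.06          # values used for M_H ~ 60 GeV in the text

def U(phi, theta):
    return (0.5 * (theta**2 - 1) * phi**2
            - (alpha * theta / 3) * phi**3
            + 0.25 * lam * phi**4)

theta1 = (1 - alpha**2 / (4 * lam)) ** -0.5      # second minimum appears
thetac = (1 - 2 * alpha**2 / (9 * lam)) ** -0.5  # minima degenerate
disc = np.sqrt(1 - 4 * lam * (1 - 1 / thetac**2) / alpha**2)
phi_plus = alpha * thetac / (2 * lam) * (1 + disc)

print(theta1, thetac)                       # theta1 > thetac > 1
print(U(0.0, thetac), U(phi_plus, thetac))  # both ~0: degenerate minima
```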
We investigate whether the universe is in a homogeneous state of the false vacuum at the onset of the phase transition, i.e., at the critical temperature, following Borrill and Gleiser [9]. Taking the initial condition Φ(x, 0) = Φ̇(x, 0) = 0 for all x, we follow the evolution of the field and trace the fraction of the symmetric phase, f₀(t), defined as the fractional volume of the lattice with Φ ≤ Φ₋; the fraction of the asymmetric phase, f₊(t) ≡ 1 − f₀(t), is that with Φ ≥ Φ₋. As in the non-selfinteracting case, the system is discretized and the second-order staggered leapfrog method is used; the noises are generated in the same way. We have confirmed that, in all the cases of interest, changing η affects only the relaxation time scale, while the properties of the final configuration are insensitive to it [9]; hereafter we therefore report results with η = 1. Contrary to Borrill and Gleiser [9], we perform the numerical calculations with various values of the grid spacing, which turns out to affect the final configuration greatly, as seen below.

First, as is often assumed in the literature [3][14], we adopt the so-called "correlation length," the curvature scale of the potential at Φ = 0, r_c ≡ (U″[Φ = 0])^{−1/2}, as the coarse-graining scale, namely the lattice spacing. We thus set δx = (θ_c² − 1)^{−1/2} and arrange a lattice with N = 64³, δt = 0.1, and t = 1500. For several values of λ the fraction of the symmetric phase, f₀(t), is depicted in Fig. 3. For λ = 0.06 (δx ≃ 7.9), corresponding to the Higgs mass M_H ≃ 60 GeV [11], phase mixing does not occur. The result shows that if the Higgs mass is not too large, phase mixing does not occur, consistent with the results of [3][14]. In Fig. 4 we depict the correlation function ⟨Φ(x)Φ(y)⟩ at t = 1500, which does not necessarily approach zero at large separation because the average value of Φ is not equal to zero. In this figure we also depict the theoretical curve (2.17), i.e., Eq. (2.5) with m = (U″[Φ = 0])^{1/2} = (θ_c² − 1)^{1/2} in dimensionless units. The numerically obtained function damps much more mildly than the analytic counterpart (2.17), motivating us to study the case of smaller lattice spacing, as was done in [9].

Next we investigate the dependence of the results on the grid spacing δx. Two lattices are arranged, one with N = 64³, δt = 1.0, t = 1500, and δx = 1.0, and the other with the same properties except for δx = 0.5. The results are depicted in Figs. 5(a) and 5(b). The former case reproduces Borrill and Gleiser's result. Comparing the two, we find that the smaller the lattice spacing, the smaller the values of λ — i.e., the lighter the Higgs mass — for which phase mixing occurs. This result can be understood as follows: taking a lattice is equivalent to cutting off the momentum, and, as is seen in Eq. (2.4), the smaller the lattice spacing, the larger the momenta that contribute, and the more easily phase mixing occurs. The correlation functions are depicted in Figs. 6(a) and 6(b). In the former case, with δx = 1.0, the curve is similar to the analytic one (2.17) apart from an offset, while in the latter, with δx = 0.5, the correlation damps more rapidly.

In order to examine the dependence on the lattice spacing further, we have also performed simulations with N = 64³, δt = 0.1, t = 3000, and λ fixed at 0.06. As seen in Fig. 7, the results depend strongly on the lattice spacing: δx ≃ 6.0 is critical, and for smaller δx phase mixing is manifest. Therefore, unless we specify the lattice spacing by a physical argument, we cannot draw any quantitative conclusion about whether the two phases mix or not.

From the analogy with the non-selfinteracting massive scalar field analyzed at the beginning of this section, many authors have adopted the Compton wavelength, the inverse-mass scale at Φ = 0, as the coarse-graining scale. In the present case, however, the correlation function changes significantly with the lattice spacing, as shown in Figs. 6(a) and 6(b), contrary to the non-selfinteracting case. The above results therefore suggest that the previous analyses adopting the correlation length, i.e., the inverse mass at Φ = 0, as the coarse-graining scale are inappropriate when the potential contains interactions. Thus we must reconsider the derivation of the Langevin equation before performing further numerical analysis based on the simplified equation (2.6).
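The momentum-cutoff argument above can be made quantitative with a one-line estimate (an editorial sketch, under the assumption of the classical high-temperature limit of a free field): the equal-point variance ⟨φ²⟩ grows linearly with the cutoff Λ ≈ π/δx, so finer lattices carry larger fluctuations and phase mixing sets in at smaller λ.

```python
# <phi^2>_Lambda = T/(2 pi^2) * (Lambda - m*arctan(Lambda/m)), from
# integrating T/(k^2 + m^2) over modes up to the cutoff Lambda ~ pi/dx.
import numpy as np

def phi2(Lambda, m=0.125, T=1.0):
    return T / (2 * np.pi**2) * (Lambda - m * np.arctan(Lambda / m))

for dx in (7.9, 1.0, 0.5):
    print(dx, phi2(np.pi / dx))   # variance roughly doubles from dx=1 to 0.5
```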
III. PROPERTIES OF THE NOISES DERIVED FROM A NON-EQUILIBRIUM QUANTUM FIELD THEORY

Here we review a field-theoretic approach to deriving an effective Langevin equation, with particular emphasis on the origin of its noise term. The standard quantum field theory, appropriate for evaluating the transition amplitude ⟨out|O|in⟩ of some field operator O from an 'in' state to an 'out' state, is not suitable for tracing the time evolution of an expectation value in a non-equilibrium system. In order to follow the time development of the expectation value of some field, it is necessary to establish an appropriate extension of quantum field theory, often called the in-in formalism. This was first done by Schwinger [16] and developed by Keldysh [17].

Following Morikawa [18] and Gleiser and Ramos [19], we first briefly review the derivation of an effective Langevin-like equation for a coarse-grained field using the non-equilibrium quantum field theory based on the in-in formalism, and then extract the information on the noises, which are essential for generating inhomogeneity of the system.

A. Non-equilibrium quantum field theory

Let us consider, for illustration, a Lagrangian density of the schematic form for a singlet scalar field ϕ with quartic self-coupling λ, interacting with another scalar field χ and with a fermion ψ,

    \mathcal{L} = \frac{1}{2}(\partial_\mu\varphi)^{2} - \frac{m^{2}}{2}\varphi^{2} - \frac{\lambda}{4!}\varphi^{4} + \frac{1}{2}(\partial_\mu\chi)^{2} - \frac{m_\chi^{2}}{2}\chi^{2} - \frac{g^{2}}{2}\varphi^{2}\chi^{2} + \bar\psi\,(i\gamma^{\mu}\partial_\mu - m_\psi)\,\psi - f\varphi\bar\psi\psi.

Although this Lagrangian density is much simpler than the standard model, the model fully accounts for the nature of bosonic noises arising from interactions with gauge particles and from Higgs self-interactions, as well as of fermionic noises from quarks and leptons.

In order to follow the time development of ϕ, only the initial condition is fixed, and so the time contour in the generating functional, starting from the infinite past, must run to the infinite future without fixing the final condition and come back to the infinite past again. In the resulting closed-time-path generating functional, the suffix c represents the closed time contour of integration, ϕ₊ (χ₊, ψ₊, ψ̄₊) denotes a field component on the + branch (−∞ to +∞), and ϕ₋ (χ₋, ψ₋, ψ̄₋) one on the − branch (+∞ to −∞). The symbol T_c represents time ordering along the closed contour, T₊ ordinary time ordering, and T₋ anti-time ordering. J, K, η, and η̄ are the external fields for the scalar and Dirac fields, respectively. In fact each pair of external fields, J₊ (K₊, η₊, η̄₊) and J₋ (K₋, η₋, η̄₋), is identical, but for technical reasons we treat them as different and set J₊ = J₋ (K₊ = K₋, η₊ = η₋, η̄₊ = η̄₋) only at the end of the calculation. ρ is the initial density matrix. Strictly speaking, we would have to couple the time development of the expectation value of the field to that of the density matrix, which is practically impossible. Accordingly, we assume that the deviation from equilibrium is small and use the density matrix of the finite-temperature state. The generating functional is then expressed as a path integral with the classical action S. As in the Euclidean-time formulation, the scalar field is periodic and the Dirac field anti-periodic along the imaginary time direction, now with ϕ(t − iβ, x) = ϕ(t, x) and ψ(t − iβ, x) = −ψ(t, x). The effective action for the scalar field is defined from the connected generating functional in the usual way.

We quote the finite-temperature propagators before the perturbative expansion. On the closed path the scalar propagator has four components, involving n_χ(k) = (e^{βω_χ(k)} − 1)^{−1}, ω_χ(k)² = k² + m_χ², and ε(k₀) = θ(k₀) − θ(−k₀) [21]. Similar formulae apply to the ϕ field, and analogous expressions hold for a Dirac fermion.

B. One-loop finite-temperature effective action

The perturbative loop expansion of the effective action Γ is obtained by shifting ϕ → ϕ₀ + ζ, where ϕ₀ is the field configuration that extremizes the classical action S[ϕ, J] and ζ is a small perturbation around it. Up to one loop and O(λ², g⁴, f²), Γ is made up of the graphs depicted in Fig. 8. Summing up these graphs, one finds that the last term of the resulting effective action (3.10) is imaginary. We can attribute these imaginary terms to functional integrals over Gaussian fluctuations ξ₁ and ξ₂ [18].
That is to say, we can interpret the imaginary part of the effective action as arising from random fluctuations acting on the expectation value. Thus we rewrite (3.10) in terms of noise fields ξ₁ and ξ₂ with the probability distribution functional that reproduces this imaginary part.

C. Equation of motion

Applying the variational principle to S_eff, we obtain the equation of motion (3.25) for φ_c, with nonlocal kernels Ã₁ and B̃₁. Though Ã₁ and B̃₁ each receive two contributions, from the χ and ϕ fields, these have the same properties except for the values of the coefficients and masses. From now on we consider only the contribution from the χ field for simplicity and omit the suffix c. The right-hand side of (3.25) consists of the noise terms, while the last two terms on the left-hand side are the combination of a dissipation term and a one-loop correction to the classical equation of motion, which would reduce to the derivative of the effective potential, V′_eff(φ), if φ(x′) were restricted to be constant in space and time. Equation (3.25) extends equation (3.2) of Gleiser and Ramos [19] in that we have incorporated not only the self-interaction but also the interactions with a boson χ and a fermion ψ.

In [19], Gleiser and Ramos proposed several further approximations to reduce their equation to the form of the simple Langevin equation (1.1) with (1.2). In particular, in order to cast the equation in a local form, they handled the spatial nonlocality by keeping only contributions with zero external momentum, which is physically equivalent to dealing only with nearly homogeneous fields. With this approximation the correlation function of the bosonic noise (3.18), for example, becomes spatially white. One would thus obtain spatially uncorrelated noise, which would violate the spatial homogeneity of φ in the severest manner and lead to a self-inconsistent result. Since the noise terms in (3.25) are the only source of inhomogeneous evolution of φ, we should not adopt this approximation.

D. Spatial correlation of the noises

From the above discussion we see that the correlation length of the noise is the most important scale for investigating the effect of fluctuations on the dynamics of the phase transition. The correlation functions of the noises are given in (3.26); they are, unfortunately, too complicated to be applied directly in numerical simulations. Since a nontrivial temporal correlation is expected to affect only the relaxation process and its time scale, we adopt the approximation that the temporal correlation is white, but take the spatial correlation fully into account. We therefore evaluate the equal-time spatial correlations.

First we consider the bosonic noise. The equal-time propagator in momentum space is the standard finite-temperature expression, [1 + 2n_χ(k)]/2ω_χ(k) [21], whose configuration-space counterpart has, of course, the same form as (2.5). Using this representation, one finds that the equal-time spatial correlation of the bosonic noise damps exponentially for r > m_χ⁻¹. For a massless bosonic noise the infinite series can be summed explicitly (3.31), and the resulting equal-time correlation damps only as a power law. We thus find that for a massive bosonic noise the correlation function damps exponentially, with a damping scale given by the inverse mass, while for a massless bosonic noise it damps much less rapidly, according to a power law.
For the fermionic noise, similarly, the equal-time propagator in momentum space is given in [21], and from its configuration-space form the equal-time correlation of the fermionic noise is found to damp exponentially, at the inverse-mass scale for m_ψβ ≫ 1 and at the scale β for m_ψβ ≲ 1. The corresponding massless expressions show that for the fermionic noise, unlike the bosonic case, the correlation function damps exponentially regardless of the mass. This is a very interesting feature of the noise, which we can interpret physically as follows: because of Pauli blocking, fermionic particles tend to stay apart from one another, so the correlation is easily destroyed, whereas bosonic particles can occupy the same state, so the correlation is maintained. At zero temperature both the fermionic and the bosonic massless propagators damp according to a power law; the above feature is thus an example of the fact that the statistical difference between fermions and bosons appears more markedly at finite temperature than at zero temperature.
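This contrast can be checked numerically from the thermal weights 1 + 2n_B(k) = coth(βk/2) and 1 − 2n_F(k) = tanh(βk/2) that enter the boson and fermion propagators. The noise kernels (3.26) are built from products of such propagators, but the damping scales are controlled by the transforms below; the code is an editorial illustration, not the authors'.

```python
# Massless equal-time kernels: Fourier transform of (1 ± 2n(k)) over 3D modes.
# The vacuum piece integrates to 1/r; the thermal piece converges rapidly.
import numpy as np
from scipy.integrate import quad

beta = 1.0   # inverse temperature

def kernel(r, n, sign):
    val, _ = quad(lambda k: np.sin(k * r) * 2.0 * n(k),
                  1e-9, 60.0 / beta, limit=300)
    return (1.0 / r + sign * val) / (2.0 * np.pi**2 * r)

n_B = lambda k: 1.0 / np.expm1(beta * k)        # Bose-Einstein
n_F = lambda k: 1.0 / (np.exp(beta * k) + 1.0)  # Fermi-Dirac

for r in (0.5, 1.0, 2.0, 4.0, 8.0):
    b = kernel(r, n_B, +1)   # -> T/(2 pi r): power-law tail
    f = kernel(r, n_F, -1)   # -> ~ exp(-pi T r): exponential damping
    print(f"r = {r:4.1f}   boson {b:.3e}   fermion {f:.3e}")
```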
E. Dissipation term

The equation of motion (3.25) derived in subsection III C contains, in the last two terms on its left-hand side, contributions representing the dissipative effect. Since these terms are nonlocal in time, what is often done in the literature [18][19] to extract local terms proportional to φ̇ is to assume that the field changes adiabatically, i.e., to put

    \phi(\mathbf{x}',t') \simeq \phi(\mathbf{x}',t) + (t'-t)\,\dot\phi(\mathbf{x}',t)    (3.40)

in the integrand of (3.25). But the dissipation terms thus evaluated vanish as long as bare propagators are used. This is usually interpreted as a manifestation of the fact that the dissipative effect is intrinsically non-perturbative and cannot be captured by perturbation theory [22]. In order to see a damping effect we must observe the system over a finite duration of time, typically proportional to an inverse power of the coupling constants; after such a period, however, perturbation theory may already have broken down, as explained in [22] with a toy model. So, in order to obtain a damping effect, non-perturbative contributions are often incorporated by using a "dressed" propagator with an explicit width, obtained by resumming certain classes of higher-loop graphs, instead of the bare one [18,19]. Although finite dissipation terms may be obtained in this way, they do not satisfy the fluctuation-dissipation relation. More serious is the fact that this approach is not self-consistent, as criticized by Greiner and Müller [23]: apart from the validity of perturbation theory, the adiabatic expansion (3.40) itself breaks down before the dissipative effect can be observed [23]. To cure the situation, Greiner and Müller [23] proposed a linear harmonic expansion of the Fourier mode φ(k, t′),

    \phi(\mathbf{k},t') \simeq \phi(\mathbf{k},t)\cos[\omega_k(t'-t)] + \dot\phi(\mathbf{k},t)\,\frac{\sin[\omega_k(t'-t)]}{\omega_k},    (3.41)

and calculated the dissipation term in a simple model, showing that the fluctuation-dissipation relation is met in the classical limit [23]. Unfortunately, the applicability of this method is rather limited, and we cannot calculate the dissipation term of our model explicitly. Therefore we give priority to thermodynamics and determine the dissipation term so that the fluctuation-dissipation relation is met. We also identify the remaining part of the integral terms of (3.25) with the derivative of the effective potential, V′_eff(φ), primarily for simplicity.

But if the system turns out to be homogeneous as a result of the numerical simulations, this choice will be justified, since a homogeneous expectation value should settle where the effective potential is minimized; otherwise it is unjustified. In the latter case the one-loop approximation also breaks down, and we would have to deal with the full effective action, which is beyond the scope of the present analysis. We thus interpret the integral terms in (3.25) as consisting of two parts, one contributing to the derivative of the effective potential and the other to the dissipation term, the division being made such that the dissipation term satisfies the fluctuation-dissipation relation. Note that, although this procedure is physically motivated, it should be regarded as an ansatz rather than an approximation: we have been unable to derive these terms rigorously from first principles. Nevertheless, we stress that our approach is self-consistent if the field configuration turns out to be homogeneous.

We next put the above scheme into practice. First we consider the case in which thermalization proceeds only through the bosonic noises and determine the corresponding dissipation term. To do this we rewrite (3.25) in a form whose dissipative kernel is to be fixed by the fluctuation-dissipation relation. For the purpose of the numerical simulations we adopt the approximation that ξ₁(x) is a temporally white noise, which is a good approximation since the temporal correlation damps exponentially beyond |t − t′| > β/(2π). The Langevin equation can then be converted into a Fokker-Planck equation for the distribution function P[φ, π], with π(x) = φ̇(x) and the Hamiltonian

    H = \int d^{3}x \left[ \frac{1}{2}\pi^{2} + \frac{1}{2}(\nabla\phi)^{2} + V_{\rm eff}(\phi) \right].

In order for this equation to possess a stationary solution, it is at least necessary that A₁(x − x′, t) not depend on time; then P ∝ exp(−H/T) constitutes a stationary solution of the equation (3.44), from which the dissipation kernel follows. The fermionic contribution can be treated similarly, and we find the corresponding kernel (3.48) for the case in which thermalization is realized only through the fermionic noises.
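The role of the fluctuation-dissipation requirement can be illustrated on a single mode (an editorial toy check, with arbitrary parameter values): whatever the value of η, the stationary distribution is thermal, so equipartition ⟨π²⟩ = T and ⟨φ²⟩ = T/ω² must hold.

```python
# Toy check: one damped mode with FDR-consistent white noise relaxes to
# the thermal state exp(-H/T), i.e. <pi^2> ~ T and <phi^2> ~ T/omega^2.
import numpy as np

T, omega, eta, dt = 1.0, 0.5, 1.0, 0.01
rng = np.random.default_rng(1)
phi = pi = 0.0
samples = []
for step in range(400_000):
    xi = np.sqrt(2 * eta * T / dt) * rng.standard_normal()
    pi += dt * (-omega**2 * phi - eta * pi + xi)
    phi += dt * pi
    if step > 50_000:                 # discard the relaxation transient
        samples.append((phi, pi))
phi2, pi2 = np.mean(np.square(samples), axis=0)
print(phi2, T / omega**2)             # approximately equal
print(pi2, T)                         # approximately equal
```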
IV. NUMERICAL SIMULATIONS

Using the phenomenological Langevin equation obtained in the previous section, we now perform numerical simulations in the same way as in Sec. II. The essential difference from Borrill and Gleiser's formulation [9] is that we generate noises with the spatial correlations calculated in the previous section. In order to capture some aspects of the electroweak phase transition in the standard model, we adopt the one-loop improved effective potential of the Higgs field in our phenomenological Langevin equation. Since the properties of massless bosonic noises, such as those from gauge interactions, and of fermionic noises are very different, we analyze the cases in which thermalization proceeds through bosonic and through fermionic noises separately.

A. Bosonic noise

We first examine the extreme case in which the thermalization of φ proceeds only through massless bosonic noises, which have power-law correlation functions such as those arising from gauge interactions. As a power law is scale free, we set the lattice spacing equal to that of the fermionic case, β/(2π), in order to see the difference between the behavior of the bosonic noises and that of their fermionic counterparts. Before the calculation it should be noted that the bosonic noise derived so far is multiplicative; accordingly, if we set the initial condition Φ = Φ̇ = 0, the system does not evolve. Hence, in this case we add the two-loop contributions of order g⁴ (Fig. 9), which lead to an additive noise. The corresponding correction to the effective potential has an imaginary last term, which we again regard as arising from a stochastic noise, and the dissipation term is obtained so that the fluctuation-dissipation relation is met. The dimensionless equation of motion with only the bosonic contributions then takes the form (4.7).

In reality, the correlation function of the noises derived from a fundamental theory is proportional to some power of the coupling constants. However, once the fluctuation-dissipation relation is assumed, this magnitude does not affect the final equilibrium configuration at all; hence we have normalized the amplitude of the noises in our numerical calculations. Noises with the required correlation can easily be generated by working in Fourier space: since ξ(x) is assumed to be random Gaussian, its Fourier transform also obeys a Gaussian distribution, each mode being distributed independently with a variance set by the power spectrum P(k) (N′ being the normalization factor), where we have used the reality condition on ξ(x). Because this distribution has no correlation between different Fourier modes, we have only to generate white noise in momentum space and Fourier transform it into configuration space.

Next we discretize the system. The discretized master equation is obtained with the second-order leapfrog method, adopting the Crank-Nicolson scheme for the diagonal term only, because of its dominance for numerical stability. We also set B¹_i = 1/|i|² and B⁴_i = 1/|i|³, except B¹₀ = 1/N² and B⁴₀ = 1/N³. The initial conditions are Φ_{i,0} = Φ̇_{i,0} = 0 for all i. The results are depicted in Fig. 10: massless bosonic noises do not disturb the homogeneous field configuration, at least for small enough values of λ.
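The Fourier-space generation procedure described above can be sketched as follows (our illustration; the power spectrum P(k) ~ 1/k is a stand-in for the power-law correlations of the massless bosonic noise, not the paper's exact spectrum):

```python
# Generate a Gaussian random field with a prescribed power spectrum:
# draw white noise, weight each mode by sqrt(P(k)), transform back.
import numpy as np

def correlated_noise(N, power, rng):
    kx = np.fft.fftfreq(N) * 2 * np.pi
    KX, KY, KZ = np.meshgrid(kx, kx, kx, indexing="ij")
    k = np.sqrt(KX**2 + KY**2 + KZ**2)
    k[0, 0, 0] = 2 * np.pi / N          # regularize the zero mode
    white = rng.standard_normal((N, N, N))
    xi_k = np.fft.fftn(white) * np.sqrt(power(k))
    # xi_k inherits Hermitian symmetry from the real white-noise field,
    # so the inverse transform is real up to rounding
    return np.fft.ifftn(xi_k).real

rng = np.random.default_rng(2)
xi = correlated_noise(32, lambda k: 1.0 / k, rng)
```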
B. Fermionic noise

Next we consider the fermionic case. The spatial correlation of the fermionic noise damps exponentially for both massive and massless fermions. The fermions interacting with the Higgs field in the standard model acquire finite masses if and only if the Higgs field has a non-vanishing expectation value. Hence, in the same spirit as adopting the effective potential in the equation of motion of the field, we perform the numerical calculations with massless fermionic noises. That is, if φ remains homogeneously equal to zero, reflecting the initial condition, this choice is consistent. In fact, even when phase mixing is manifest, the expectation value of the Higgs field remains at most about 50 GeV for M_H ≃ 60 GeV, which means that the quark masses remain much smaller than the temperature, so the massless approximation is justified in this case as well. Since the correlation of the fermionic noise damps exponentially, we can use white noise as long as the lattice spacing is taken larger than the correlation length, which equals β/(2π); in our dimensionless units this corresponds to δx = (2D)^{1/2}/(2πθ_c) (= 0.092 for M_H = 60 GeV). The master equation is then the same as that used by Borrill and Gleiser [9], equation (2.14). The solutions of this master equation have already been depicted in Fig. 7 for various grid spacings δx; Figure 11 shows the case in which δx is set equal to this fundamental length. As is seen in these figures, the fermionic noises are more effective in disturbing the field configuration from a homogeneous state toward an inhomogeneous one, with possible mixing of the two phases.

V. INTERPRETATION IN TERMS OF A SIMPLE BOLTZMANN EQUATION

As is seen in Fig. 7, for some choices of δx and λ the system seems to relax to a state with f₀ between 0.5 and 1. One may suspect that this is because our simulation time is too short, so that the system still keeps the memory of its particular initial condition, and that, if we could trace the evolution long enough, the system would relax into a state with f₀ = 0.5. In fact, however, if we examine the time variation of the field configuration through snapshots at different times, as depicted in Figs. 12(a)-(e), we can easily convince ourselves that the system is in a stationary state with constant f₀ (> 0.5), repeatedly creating and annihilating a number of small domains of the asymmetric state whose typical radius is at most a few times the lattice spacing. In this section we present an analytic argument supporting the view that the state followed in the numerical simulation is a thermal state. A similar analysis has been done by Gleiser, Heckler, and Kolb [24] in a slightly different situation; see also Gelmini and Gleiser [25].

As in [24], let d₊(R, Φ, t) be the number density of asymmetric-state bubbles with radius R (> δx) and amplitude Φ (> Φ₋) at time t. In order to obtain the Boltzmann equation for d₊(R, Φ, t), we count the processes that change it: (i) thermal nucleation of asymmetric-state bubbles in an almost homogeneous symmetric-state sea; (ii) annihilation of these asymmetric-state bubbles back into the symmetric-state sea. Process (ii) is to be distinguished from the nucleation of a symmetric-state bubble in a homogeneous background of the asymmetric state, whose rate would be identical to that of (i) for a degenerate potential. We expect process (i) to have a smaller rate than process (ii), since the former requires more energy. We do not take into account the process in which nucleated bubbles dynamically shrink (the |v|∂d₊/∂R term in [24]), because the typical radius of nucleated bubbles is comparable to the lattice spacing, so that shrinking bubbles and vanishing bubbles are hardly distinguishable in our simulations. The Boltzmann equation for d₊(R, Φ, t) then balances G_{(s-phase⇒R,Φ)}, the nucleation rate of process (i), against G_{(s-phase⇐R,Φ)}, the rate of process (ii). We assume that the nucleation rates can be obtained from the Gibbs distribution, G = A exp(−F/θ_c), where A is a constant; for G_{(s-phase⇒R,Φ)} we put F = bΦ²R, taking the surface tension of the created bubbles into account, with b a constant. Since the inverse process is not Boltzmann suppressed, we assume its rate to be a constant, B.

In an equilibrium state, ∂d₊/∂t vanishes for all Φ and R. Summing the resulting detailed-balance relations over all Φ and R (5.3), one obtains the equilibrium fraction f₊^eq = I/(I + 1), where I is the summed ratio of the nucleation to the annihilation rates. f₊^eq is depicted in Fig. 13 as a function of λ. Since the case δx = 1.0 is suitable for observing the change of the asymmetric fraction from non-mixing to percolation, we set δx = 1.0; the analytic solution then fits the numerical simulation very well with A/B = 65 and b = 2.77. Note that even if |v| equals zero, f₊^eq can take values different from 0.5. The essence lies in the fact that the creation rate of an asymmetric-state domain in the symmetric phase and that of the reverse process can differ, owing to the surface tension, even at the critical temperature, provided the background is sufficiently homogeneous.
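A toy version of this detailed-balance argument (our sketch; the bubble amplitude Φ and radius R below are illustrative, only A/B and b are the fit constants quoted in the text) shows how an asymmetric fraction different from 0.5 arises even for a degenerate potential:

```python
# Two-state detailed balance: cells flip symmetric -> asymmetric at rate
# G1 = A*exp(-b*Phi^2*R/theta_c) and back at constant rate G2 = B, giving
# a stationary asymmetric fraction f_eq = I/(I+1) with I = G1/G2.
import numpy as np

A_over_B, b, theta_c = 65.0, 2.77, 1.008
Phi, R = 0.8, 2.0                  # illustrative bubble amplitude and radius
I = A_over_B * np.exp(-b * Phi**2 * R / theta_c)
f_eq = I / (I + 1.0)

# stochastic check with many independent cells
rng = np.random.default_rng(3)
state = np.zeros(10_000, dtype=bool)   # False = symmetric phase
dt, G1, G2 = 1e-3, I, 1.0              # rates in units of B
for _ in range(5_000):
    flip_up = (~state) & (rng.random(state.size) < G1 * dt)
    flip_dn = state & (rng.random(state.size) < G2 * dt)
    state ^= flip_up | flip_dn
print(f_eq, state.mean())              # the two agree, and differ from 0.5
```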
Of course, when the average amplitude of the fluctuations is large enough, percolation occurs quickly and the two processes acquire the same rate, resulting in f₊^eq = 0.5. The above argument is expected to apply only when Φ is initially localized around the origin. If it is initially localized around Φ = Φ₊, on the contrary, we expect the system to settle into a state with f₊^eq = 1 − I/(I + 1), or f₀^eq = I/(I + 1), since the potential we are using is symmetric. In order to see what happens when Φ is not localized around either minimum, we have run five simulations starting from a checkerboard configuration of Φ = 0 and Φ = Φ₊, using different realizations of the random numbers. The result is depicted in Fig. 14 for λ = 0.06, δx = 7.0, and N = 64³; it shows that the system approaches either equilibrium state, with f₊^eq or f₀^eq = I/(I + 1), although it takes a much longer time to relax than in the cases starting from Φ = 0 or Φ = Φ₊. This result implies that the configuration with f₊ = f₀ = 0.5 is unstable in this case, even though it contains the maximum number of microscopic states. Note that this configuration also costs more domain-boundary energy than any other configuration, which is why the system may relax to a configuration with f₊ ≠ f₀.

Although these arguments are interesting in themselves, we must be cautious in interpreting them: they may not be directly relevant to the actual dynamics of the EWPT, because our use of the effective potential in the Langevin equation is not strictly justifiable when the field configuration becomes inhomogeneous. The same warning applies to the argument of Anderson [28], who claims that the basic picture of a phase transition proceeding through subcritical bubbles contradicts the second law of thermodynamics; this criticism, too, is based on an expression for the free energy that is not strictly correct in inhomogeneous situations.

VI. SUMMARY AND DISCUSSION

In the present paper we have performed a series of numerical simulations of Langevin equations toward understanding aspects of a weakly first-order cosmological phase transition such as the electroweak phase transition. First we confirmed that the simple Langevin equation (1.1) with random Gaussian white noise (1.2) can reproduce the thermal equilibrium state of a massive non-selfinteracting scalar field; in particular, the correlation length is given by the inverse-mass scale independently of the lattice spacing. We then applied the same technique to the one-loop improved effective potential of the Higgs field in the electroweak theory. Taking the coarse-graining scale, i.e., the lattice spacing, equal to the inverse-mass scale at φ = 0, we confirmed that phase mixing does not occur at the critical temperature for a small enough Higgs mass, consistent with the previous analytic estimates of the amplitude of fluctuations [3][14][15]. At the same time, we argued that in order to reproduce the shape of the corresponding massive-scalar correlation function we should take δx smaller. Such simulations showed that the correlation function of the Higgs field obtained numerically may damp on a smaller scale, depending on the choice of δx; this proves that the so-called "correlation length," the inverse-mass scale at φ = 0, is not a good measure of the coarse-graining scale when the potential contains nontrivial interactions.
In this case we also found that the final configuration of the simulation depends severely on the lattice spacing, a manifestation of the fact that the lattice spacing serves as an ultraviolet cutoff of an otherwise divergent theory. Since the renormalization prescription of [26] does not apply to the temperature-dependent potential, we attempted to fix the lattice spacing by a physical argument. For this purpose we re-examined the derivation of the Langevin-like equation in the literature [18][19]. We stressed the importance of the correlations of the noise terms, which are the only source of inhomogeneity in the system, so that the simulation should reflect their properties. Although the noise terms can be derived from perturbative non-equilibrium field theory, no completely satisfactory derivation has been given of the other ingredient of thermalization, namely the dissipation terms. Hence we gave priority to thermodynamics and determined them from the fluctuation-dissipation relation. Another difficulty of the equation of motion obtained from the perturbative effective action is that it contains integral terms that are nonlocal in both space and time. Since it is impossible to treat them numerically, we replaced them by the derivative of the one-loop effective potential. This procedure is justified if and only if the field configuration remains homogeneous. Since we set a homogeneous and static initial condition, φ(x, t) = φ̇(x, t) = 0, and what concerns us is whether the system remains homogeneous, the above approximation is sensible. Note, however, that when phase mixing is manifest in the final result, our simplified equation is no longer valid; the one-loop approximation also breaks down at the same time, and we would have to deal with the full effective action, which is not feasible at present.

Keeping the above-mentioned limits of our approach in mind, let us consider the implications of the numerical results. We examined the effects of bosonic and fermionic noises separately. In the former case the field remained practically homogeneous, at least for small enough values of M_H, while in the latter case phase mixing was evident and our approximation broke down. If only one species of noise and the corresponding dissipation term are taken into account, the strength of the noise and the dissipation do not affect the final equilibrium configuration, as long as the fluctuation-dissipation relation is satisfied, although they do affect the relaxation time scale. In the realistic case both types of noises are present, and their amplitudes are also important in determining the thermalization process of the system. Since the Yukawa coupling of the top quark is larger than the gauge coupling squared, and fermionic noises are more effective, we expect the results of subsection IV B to apply to the actual electroweak phase transition. In short, although the gauge interactions play the essential role in inducing the cubic term in the effective potential, the non-equilibrium dynamics is dominated by the fermionic interactions; as a result, the conventional picture of a first-order phase transition based on the one-loop potential is suspect.

So far we have traced the behavior of the expectation value of the scalar field. Although the equation of motion has been motivated from quantum theory, as long as we concentrate on an expectation value we must make sure that the effect of quantum uncertainty is sufficiently small. This condition is satisfied if the number of quanta contained in one lattice volume is much larger than unity [27][28]. The simulations with a lattice spacing equal to the fundamental correlation length of the fermionic noise, β/(2π), do not satisfy this constraint. Our conclusion, however, remains intact, since at the critical value δx = 6.0 in Fig. 7 one lattice volume already contains more than 40 quanta, much larger than unity.
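A rough classicality estimate along these lines (our sketch; it counts one massless bosonic degree of freedom and neglects the unit conversions specific to the paper's dimensionless variables) illustrates both statements: a cell of side β/(2π) holds far fewer than one quantum, while cells several 1/T across hold tens of them.

```python
# Thermal quanta per cubic cell of side L for one massless bosonic d.o.f.:
# number density = zeta(3) T^3 / pi^2, so N = zeta(3) (L T)^3 / pi^2.
import numpy as np
from scipy.special import zeta

def quanta_per_cell(L_times_T):
    return zeta(3) * L_times_T**3 / np.pi**2

print(quanta_per_cell(1 / (2 * np.pi)))   # ~5e-4 << 1: not classical
print(quanta_per_cell(7.0))               # ~40: marginally classical
```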
In the light of the above results, and of the fact that all the previous studies claiming that the usual nucleation picture in a homogeneous background applies to the electroweak phase transition estimated the fluctuations on too large a spatial scale, it is now evident that the conventional picture with the one-loop effective potential does not work. On the other hand, in order to clarify the real dynamics of the phase transition, much work remains to be done, including the derivation of the correct equation of motion of the order parameter, which should replace the crude phenomenological equation employed here.
Microstructure and Texture Evolution in a Post-dynamic Recrystallized Titanium During Annealing, Monotonic and Cyclic Loading

The post-dynamic recrystallization behavior of ultrafine-grained (UFG: 0.44 μm) cp-Ti under annealing, room-temperature (RT) monotonic and cyclic loading was investigated across a range of temperatures and deformation rates wherever appropriate. By characterizing the grain and boundary structures, it was confirmed that recrystallization and grain growth occurred due to annealing (≥ 600 °C) and R = −1 fatigue at RT. There was a noticeable 30 deg aggregation in the misorientation distribution, along with the increased grain size. However, the hypothetical correlation between the 30 deg aggregation and Σ13a or the other characteristic coincidence site lattice boundaries was found to be weak. The fatigue-induced grain growth is particularly intriguing for two reasons. First, large monotonic deformation at low strain rate cannot trigger grain growth. Second, fatigue sharpened the basal intensity around the ND and caused a weaker texture component close to the TD (load axis along the LD, perpendicular to the TD-ND plane). By contrast, high-temperature annealing only strengthened the UFG-processing-induced basal pole without affecting its location. Novel insights into this fatigue-induced texture evolution in UFG cp-Ti are provided. The lattice rotation during fatigue can be attributed to the combined effect of the activation of prismatic ⟨a⟩ slip parallel to the LD and basal ⟨a⟩ slip perpendicular to it. The theoretically calculated stress to activate dislocation slip, assuming a non-equilibrium grain boundary state, lends support to this assertion. Moreover, TEM observations clearly showed the characteristics of dislocation cross-slip and multiple slip in the grain interior.

I. INTRODUCTION

The generic term recrystallization describes the replacement of a cold-worked microstructure by the formation of new grains during annealing at temperatures of ≥ 0.5T_m. This process is referred to as discontinuous static recrystallization. By contrast, discontinuous dynamic recrystallization (dDRX) occurs during high-temperature straining. In both cases recrystallized grains co-exist with the deformed ones, hence a discontinuous process. The recrystallization mechanism was studied for several decades, and a thorough review of the literature was published in 1997. [1] Since then the electron backscattered diffraction (EBSD) technique has become widespread; it enables mapping crystallographic orientations of a large number of grains in much less time than transmission electron microscopy (TEM). The paper published in 2014 [2] provided a comprehensive review of two types of dynamic recrystallization: dDRX and continuous dynamic recrystallization (cDRX). A later review paper [3] considered geometric DRX in addition to the above two.
The evolving recrystallization terminology can be attributed to the expansion of processing methods, in particular those arising from severe plastic deformation (SPD). [2] The SPD-induced ultrafine-grained (UFG) microstructure can be characterized by the gradual transformation of low-angle into high-angle grain boundaries (LAGBs to HAGBs), which is the essence of cDRX. The dynamic UFG formation is accompanied by a dramatic decrease in dislocation density in the grain interior, while the boundaries are in a non-equilibrium state. [4] Over the last decades, much improved knowledge has been gained about the creation of UFG microstructures and the underlying cDRX mechanism. [5] However, the post-dynamic recrystallization behavior of UFG materials has not been explored very much. [2] The annealing behavior was examined for a range of UFG materials that included Cu, Type 304 stainless steel, Ni, Al and Mg. [2] It can be concluded that a continuous static recrystallization (cSRX) mechanism was responsible for the grain growth. Compared to the cubic materials, far fewer studies on hexagonal close packed (hcp) materials (hot-deformed AZ31 Mg [6,7]) have been published to date. Furthermore, limited work (both studies on Mg [8,9]) has been performed to elucidate the microstructure and texture evolution during fatigue loading. Therefore, further work is required in this field.

The present work focuses on commercially pure cp-Ti with a UFG microstructure created by multi-directional forging (MDF). The reason is twofold: first, basal ⟨a⟩ slip is no longer the only active slip system during deformation; and secondly, twinning activity is restricted due to the small grain size and high stress threshold. Hence, this work is distinct from that on Mg. Among the SPD methods, MDF is the simplest and can easily be scaled up for the processing of sizeable semi-products. [2] Our previous work [10] on MDF cp-Ti revealed that high-cycle fatigue loading at room temperature (RT) can promote the formation of a different but strong texture concurrently with grain growth. However, we were unable to explain the reasons. A systematic study of MDF cp-Ti would be valuable, not only in elucidating the underlying mechanisms responsible for the dramatic texture change, but also in providing a complete picture of the post-dynamic recrystallization behavior under annealing, monotonic and cyclic loading. Using TEM, EBSD and transmission Kikuchi diffraction (TKD) techniques, the deformation mechanisms of MDF cp-Ti were examined with particular focus on the orientation dependence of the active slip modes of individual grains and their collective effect. The validity of the determined slip modes was also discussed on the basis of the Schmid factor.

A. Material and Multi-directional Forging (MDF)

Commercially pure titanium (cp-Ti, ASTM grade 2) with a single α-phase hcp crystal structure had a coarse grain size of 35 ± 15 μm, measured using the linear intercept method. The chemical composition was Fe 0.28 pct, C 0.08 pct, N 0.03 pct, H 0.015 pct, O 0.25 pct (all in wt pct), with Ti in balance. A set of cylindrical bar samples (35 mm in diameter and 60 mm in length) was pre-heated to 450 °C, followed by high strain rate (~20 s⁻¹) forging. Three MDF cycles were performed to introduce a high cumulative strain of ~5. Within each MDF cycle, the hammer force was applied along three orthogonal directions. The strain applied to the normal (ND) and transversal (TD) planes was similar, hence there is no fundamental difference between the ND and TD.
The final dimensions of the MDF billet were 23 × 25 × 90 mm³.

B. Isothermal Annealing, Monotonic and Fatigue Loading

The MDF cp-Ti was the starting material condition for studying the post-dynamic recrystallization behavior during annealing, monotonic tensile and fatigue loading. Isothermal annealing was carried out in a muffle furnace with the temperature controlled to within ± 2 °C. Cube specimens of 4 × 5 × 6 mm³ were cut from the middle part of the MDF billet, with the 5 × 6 mm² plane parallel to the longitudinal direction (LD, Figure 1). The annealing temperature ranged from 400 °C to 800 °C and the time from 1 to 4 hours, as summarized in Table I. The annealing temperatures were well below the β-transus of cp-Ti (882 °C).

For the monotonic tensile loading, dog-bone-shaped specimens with a gauge cross-section of 0.5 × 2.5 mm² and length of 8 mm were used (Figure 1). All specimens were made using wire electrical-discharge machining, and the tensile direction was parallel to the LD. Tensile tests were performed at RT on a Shimadzu servopulser micromechanical testing system, under constant displacement rates ranging from 0.05 to 5 mm/min. The corresponding strain rates were calculated as 1 × 10⁻⁴, 5 × 10⁻⁴, 1 × 10⁻³, 5 × 10⁻³ and 1 × 10⁻² s⁻¹ (Table I).

For the fatigue loading, two cylindrical specimens with a gauge length of 14 mm and diameter of 8 mm were extracted from the MDF cp-Ti billet along the LD (Figure 1). Specimens were fatigued in air at RT, at a constant stress amplitude of 280 MPa, R = −1 and a frequency of 14 Hz. One specimen was fatigued for 1000 cycles, the other for 4.3 × 10⁴ cycles, with an equivalent loading time of ~50 minutes (Table I). The latter specimen is the same one as that studied by means of in situ neutron diffraction. [10]

Fig. 1 — A schematic showing the EBSD/TKD examination plane of annealed and loaded specimens with respect to the coordinate system defined for the MDF billet. ND, TD and LD denote normal, transversal and longitudinal directions, respectively.

All of the specimens, both tensile and fatigue, were abraded using SiC paper up to 2000 grit and then polished with OPS to obtain a deformation-free surface condition. The typical surface roughness was measured to be 0.3 μm by an Alicona 3D profilometer.

C. Microstructure and Property Characterization Techniques

For the post-test examination, all of the metallographic specimens were cut close to the central position of the deformed specimens, or within 2 mm of the fracture surface. Their longitudinal planes were scrutinized by TEM and EBSD/TKD (Figure 1). TEM was carried out using a JEM-2100 high-resolution microscope operated at 200 kV. Both the bright-field imaging mode and selected area diffraction (SAD) were used to characterize the evolution of the grain and boundary structures. TEM thin-foil specimens were mechanically ground and then twin-jet thinned in an electrolytic solution of 4 pct perchloric acid + 96 pct alcohol at 75 V.

The microstructure and texture evolution in the as-MDF, annealed, tensile and fatigued specimens (Table I) were studied quantitatively using a Zeiss AURIGA FIB-SEM workstation equipped with an EBSD detector having TKD capability. The main advantage of TKD over EBSD is its significantly higher spatial resolution; hence this technique was used for analyzing the sub-micrometer grains in the as-MDF cp-Ti (Table I). Meanwhile, EBSD was used to characterize the annealed specimens exhibiting a relatively large grain size.
TEM thin-foil specimens were used for the TKD examination; the instrument operated at 30 kV with a step size of 50 nm. EBSD scans were performed at 20 kV, with a range of step sizes selected according to the grain size (200 nm for the 600 °C, 300 nm for the 700 °C, and 1.2 μm for the 800 °C annealed samples). Micro-hardness measurements were performed on an HXD-1000TM/LCD tester at a load of 0.98 N for 15 seconds per indent; the average of 10 individual measurements is presented. Again, the longitudinal plane was examined.

D. Microstructure and Texture Analysis Based on the EBSD/TKD Dataset

To ensure the statistical rigor of the microstructure and texture characterization, a large field-of-view EBSD/TKD map was collected for each material condition. The EBSD scanning areas are 60 × 90 μm for the 600 °C, 100 × 150 μm for the 700 °C, and 400 × 600 μm for the 800 °C annealed samples. The numbers of grains covered by the EBSD scans are greater than 1200 for the 600 °C, 300 for the 700 °C, and 600 for the 800 °C samples. For the tensile and fatigue samples, the number of grains covered by the TKD scans is greater than 500. The data analysis was carried out using HKL Technology Channel 5 and the MTEX Matlab toolbox (Version 5.2.4).

Inverse pole figure (IPF) orientation maps superimposed with grain boundary characters were used to illustrate the microstructural evolution. In the IPF orientation maps, thick black lines denote HAGBs (defined by misorientation angles of > 15 deg), while thin white lines denote LAGBs, with a lower threshold cut-off angle of 2 deg. The change in the fraction of LAGBs relative to HAGBs was used to determine the recrystallization type. Grain size distribution histograms and their lognormal fits were used to characterize the grain growth behavior. It is known that the calculated fraction of HAGBs and the average grain size would differ if a cut-off misorientation angle of > 10 deg were selected. Taking the grain size measurements of the 600 °C annealed samples as an example: the cut-off misorientation angle of > 15 deg adopted in the present work resulted in grain sizes of 0.44, 1.83, 2.16, and 2.28 μm for the as-MDF, 1 h, 2 h and 4 h annealed samples (see Figure 11 for data illustration), whereas the use of > 10 deg led to respective grain sizes of 0.42, 1.81, 1.99, and 2.05 μm. It was therefore confirmed that the selection of the cut-off misorientation angle of > 15 deg does not affect the conclusions drawn in the present work.

The fraction of recrystallized grains was calculated on the basis of internal average misorientation angle (IAMA) maps. Grains with IAMA values of > 1 deg were classified as deformed grains. Grains consisting of sub-grains with IAMA values of < 1 deg but sub-grain misorientation angles of > 1 deg were termed substructured; all others were classified as recrystallized grains. The local grain misorientation was calculated based on kernel average misorientation (KAM) analysis, from which the geometrically necessary dislocation (GND) density was derived; any local misorientation angle of > 2 deg was excluded to avoid the influence of sub-grains. In-grain misorientation profile (IGMP) analysis was performed on two typical grains: one with its c-axis parallel to the ND, the other randomly oriented and away from the basal texture. In order to elucidate the texture evolution in relation to dislocation slip, Schmid factor (SF) analysis was also performed.
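As an illustration of the KAM-to-GND step described above, the following sketch uses the common strain-gradient estimator ρ_GND = 2θ_KAM/(b·u), with θ_KAM in radians, u the scan step size, and b the Burgers vector. The pre-factor convention varies between studies, so this is an editorial example rather than the authors' exact pipeline.

```python
import numpy as np

b = 0.295e-9          # Burgers vector of cp-Ti, m (value used in the text)
u = 50e-9             # TKD step size used for the as-MDF maps, m

def gnd_density(kam_deg):
    kam = np.radians(np.asarray(kam_deg, dtype=float))
    kam = np.where(kam > np.radians(2.0), np.nan, kam)   # exclude sub-grains
    return 2.0 * kam / (b * u)

# an average KAM of ~0.6 deg reproduces the order of the reported
# rho ~ 1.4e15 m^-2
print(np.nanmean(gnd_density([0.5, 0.7, 0.6])))
```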
A. UFG Microstructure Characteristics

The EBSD map and pole figure of the as-MDF cp-Ti are presented in Figure 2(a). The grain size was measured to be 0.44 μm, with an aspect ratio of 2.18. The sub-micrometer grains were distributed uniformly throughout the microstructure, proving the effectiveness of the MDF process. The pole figure shows the strength of the clustering of poles relative to that of a random distribution; hence the magnitude of the multiples of uniform density (mud) can be used to indicate the crystallographic texture. It is evident from the pole figure that the as-MDF cp-Ti had a strong texture along the [0001] direction. The c-axis was tilted between 40 deg and 60 deg from the TD toward the ND (denoted the TD-split texture). The basal pole intensity, with a maximum density of 13.2 mud, spread both within the TD-ND plane and perpendicular to it.

The TEM observation did not reveal any deformation twins (Figure 2(b)). The split diffraction spots in the SAD pattern (Figure 2(c)) indicated the existence of LAGBs, while the elongated ones suggested the presence of high internal stress. Both are typical characteristics of grain boundaries in a non-equilibrium state. [5]

The grain boundary misorientation distribution of the as-MDF cp-Ti is shown in Figure 2(d). The "correlated" data in the figure were derived from the misorientations between neighboring points, while the "uncorrelated" data were derived from the misorientations between randomly chosen points; "random" outlines the theoretically calculated misorientation distribution of randomly oriented grains in a polycrystal. The distribution of correlated misorientations differed from the uncorrelated one, particularly for the LAGBs. This suggests a special relationship between adjacent grains, i.e., the presence of sub-grains in the present circumstance. Furthermore, the uncorrelated distribution differed from the random distribution (Figure 2(d)), indicating a preferred crystallographic orientation.

The IAMA and KAM maps are presented in Figures 2(e) and (f). The as-MDF cp-Ti had a large number of substructured grains (~60 pct, in yellow), with the rest being recrystallized (blue) and deformed (red) grains. The three characteristic grain types were distributed uniformly in the as-MDF microstructure (Figure 2(e)). The KAM map in Figure 2(f) showed an evenly distributed local misorientation, with the high-intensity regions corresponding to the deformed grains (red in Figure 2(e)). The average GND density was derived as ρ = 1.41 × 10¹⁵ m⁻², indicating that the as-MDF cp-Ti contained high stored energy. Using a shear modulus of G = 40 GPa and a Burgers vector of b = 0.295 nm for cp-Ti, [11] the driving force p for recrystallization was calculated to be 2.45 × 10⁶ J/m³ based on the relation [12]

    p = ρGb²/2.

This suggests that the non-equilibrium grain boundaries in the as-MDF cp-Ti are prone to recrystallization when certain conditions are met.
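The quoted stored-energy value is consistent with this relation, as a quick check confirms:

```python
# Stored-energy driving force for recrystallization, p = rho*G*b^2/2
rho = 1.41e15        # GND density, m^-2
G = 40e9             # shear modulus, Pa
b = 0.295e-9         # Burgers vector, m
p = 0.5 * rho * G * b**2
print(p)             # ~2.45e6 J/m^3, matching the value in the text
```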
B. Annealing Behavior of Post-dynamic Recrystallized UFG Microstructure

Annealing heat treatments were performed at low temperatures (400°C and 500°C, up to 4 hours) to assess the thermal stability of the UFG microstructure. A representative TEM micrograph and SAD pattern, obtained from the MDF cp-Ti after annealing at 500°C for 4 hours, are shown in Figures 3(a) and (b). The average grain size of this sample was determined to be 0.49 μm by the linear intercept method, and thus the grain growth was marginal. The SAD pattern obtained from this specimen showed much less elongated diffraction spots (compare Figure 3(b) with Figure 2(c)). This indicates that low-temperature annealing caused relaxation of the high internal stress. Indeed, the annealed specimen showed less dislocation contrast (Figure 3(a)) compared with the as-MDF condition (Figure 2(b)), suggesting the annihilation of dislocations by recovery. This agreed well with the negligible effect of annealing on the micro-hardness: 273.4 ± 8.6 for the as-MDF condition versus 268.8 ± 9.0 for the specimen annealed for 4 hours at 500°C (Table I). Note that all the other specimens within this low-temperature group did not show any sign of grain growth under TEM examination (not shown for brevity), and their micro-hardness values were similar to that of the as-MDF cp-Ti (Figure 3(g)). Overall, low-temperature annealing could hardly trigger recrystallization.

In terms of the high-temperature annealing (600°C, 700°C and 800°C, up to 4 h), grain growth occurred and the grain size became larger with increasing temperature and time. Representative TEM micrographs together with SAD patterns obtained from the 600°C and 800°C annealed specimens are shown in Figures 3(c) through (f). Neither the split diffraction spots nor the elongated ones were observed, indicating that recrystallization reduced the LAGBs and that the internal stress relaxed to a greater extent. For the annealing time of 2 hours, some dislocation tangles remained in the grain interior (600°C in Figure 3(c) and 800°C in Figure 3(e)). By contrast, the grain interior appeared transparent due to the reduced dislocation contrast when the annealing time increased to 4 hours (600°C in Figure 3(d) and 800°C in Figure 3(f)). The specimen subjected to 4 hours of annealing at 800°C is typical of a fully recrystallized microstructure; the grain boundaries became well-defined (i.e., transitioned into an equilibrium state) and individual dislocation lines were distinguishable in the grain interior (Figure 3(f)). These observed dislocation changes and the grain growth agreed well with the reduced micro-hardness (Figure 3(g)).

EBSD was used to reveal the textural evolution and to provide a quantitative description of the microstructural evolution due to high-temperature annealing. Representative EBSD orientation maps are shown in Figures 4(a), (b) and (c) for the MDF cp-Ti annealed for 4 hours at 600°C, 700°C and 800°C, respectively. Compared with the as-MDF condition (Figure 2(a)), the size of the grains increased with increasing temperature and their shape became more near-equiaxed. The grain size basically followed a lognormal distribution and no bimodal size distribution can be seen in Figure 4(d). This means that recrystallization happened uniformly throughout the microstructure. The average grain size increased from 0.44 μm (as-MDF condition) to 2.28 μm (600°C), 7.36 μm (700°C) and 16.53 μm (800°C) after 4 hours of annealing, respectively. The kinetics of grain growth during annealing have been derived and described in Appendix A; growth exponents of n = 0.15, 0.29, and 0.33 were obtained for the 600°C, 700°C, and 800°C annealed samples, respectively.
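The lognormal character of the grain size distributions noted above is straightforward to test numerically. A minimal sketch, assuming grain equivalent-circle diameters have already been extracted from an EBSD map; the input data here are synthetic, not the authors' measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical grain diameters (um) extracted from an EBSD map
rng = np.random.default_rng(1)
diameters = rng.lognormal(mean=np.log(2.28), sigma=0.45, size=1500)

# Fit a lognormal distribution; loc is fixed at 0 for physical grain sizes
shape, loc, scale = stats.lognorm.fit(diameters, floc=0)
print(f"median grain size ~ {scale:.2f} um, sigma(log) ~ {shape:.2f}")

# A one-sample KS test gives a rough check of the lognormal hypothesis;
# a large p-value means no evidence against a unimodal lognormal fit
ks = stats.kstest(diameters, 'lognorm', args=(shape, loc, scale))
print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```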
It is evident from the pole figures (the insets of Figures 4(a) through (c)) that the MDF deformation texture remained more or less the same, with the maximum basal pole intensity appearing within 40 deg to 60 deg from the TD to ND. The basal pole intensity in the upper hemisphere tended to be strengthened, and the maximum intensity increased from 13.2 mud (as-MDF condition in Figure 2(a)) to 17.6, 30.3 and 24.6 mud after 4 h of annealing at 600°C, 700°C and 800°C, respectively. The intensity change is likely a consequence of the reduced grain boundary proportion caused by the grain growth. Note that all the other specimens within this high-temperature group had a similar basal pole position (not shown for brevity), indicating the lack of lattice rotation during the grain growth. The misorientation distribution histogram obtained from the specimen annealed for 4 hours at 800°C is shown in Figure 4(e). Several features are worth noting. First, the difference between the correlated and uncorrelated misorientations was significantly reduced compared with the as-MDF condition (Figure 2(d)), indicating that the transition from LAGBs to HAGBs occurred concurrently with the grain growth; a characteristic feature of continuous recrystallization. [13] Second, the distribution of correlated and uncorrelated misorientations differed evidently from the random distribution (Figure 4(e)). This can be attributed to the 30 deg aggregation phenomenon. Based on the correlated misorientations, it was found that the fraction of LAGBs reduced from 24.9 pct in the as-MDF cp-Ti to 7.6 pct in the annealed condition (4 hours at 600°C), and then increased to 19.6 pct (4 hours at 800°C). This inverse relationship indicates that the 30 deg aggregation phenomenon was enhanced at higher temperatures. The development of 30 deg aggregation accompanying the grain growth across all the annealed specimens is collectively presented in Figure 4(f). At 600°C, the 30 deg aggregation was relatively limited even though the grain size increased to 2.28 μm after 4 hours of annealing. With the temperature increase to 700°C and 800°C, there was incipient 30 deg aggregation after 1 hour, and the peak became stronger after 4 hours (Figure 4(f)). Note that the 30 deg aggregation phenomenon has been reported many times (e.g., References 14-16) and claimed as a recrystallization feature for hcp metals. [17] The 30 deg aggregation was exclusively attributed to the formation of coincidence site lattice (CSL) boundaries with potentially low energies corresponding to a 27.8 deg rotation around the c-axis. [15,16] Unfortunately, there is little experimental evidence to support this interpretation, apart from some molecular dynamics results. [18] Note that the latest version of the HKL Channel 5 software does not incorporate the database required to analyze CSL boundaries in hcp metals. Thus, the axis/angle values as well as the Σ-values for cp-Ti obtained from Reference 19 were inputted into the software, and the Brandon criterion [20] was used to categorize CSL boundaries. It was found that the fractions of the Σ11b, Σ17c, Σ19c, Σ22a, Σ23b and Σ23c CSL boundaries were extremely low in all the specimens. We also confirmed that the most frequently mentioned Σ13a, which had been claimed as the reason for 30 deg aggregation in equal channel angular pressed (ECAP) Mg, [15] did not show any significant increase after annealing at 800°C for 4 hours. This suggests that the 30 deg peak (Figure 4(f)) in the annealed MDF cp-Ti cannot be attributed to the CSL boundaries.
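The Brandon-criterion classification used above can be sketched as follows. This is a simplified illustration, not the Channel 5 implementation: it checks whether a measured boundary (axis/angle pair) falls within the Brandon tolerance θ_c = 15°·Σ^(-1/2) of a reference CSL description. Only the Σ13a entry (the 27.8 deg rotation about the c-axis discussed in the text) is included; a real analysis would load the full hcp axis/angle table from Reference 19.

```python
import numpy as np

# Reference CSL descriptions for hcp Ti as (sigma, angle_deg, axis).
# Sigma13a is the 27.8 deg <0001> rotation discussed in the text; further
# entries (Sigma11b, Sigma17c, ...) would come from Reference 19.
CSL_TABLE = {"13a": (13, 27.8, np.array([0.0, 0.0, 1.0]))}

def is_csl(angle_deg, axis, label, axis_tol_deg=5.0):
    """Classify a boundary against one CSL entry using the Brandon criterion."""
    sigma, ref_angle, ref_axis = CSL_TABLE[label]
    theta_c = 15.0 / np.sqrt(sigma)                 # Brandon tolerance, deg
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    axis_dev = np.degrees(np.arccos(np.clip(abs(axis @ ref_axis), -1.0, 1.0)))
    return abs(angle_deg - ref_angle) <= theta_c and axis_dev <= axis_tol_deg

# A 28.5 deg rotation nearly about the c-axis: within tolerance of Sigma13a?
print(is_csl(28.5, [0.02, 0.01, 0.99], "13a"))  # True: |28.5-27.8| < 15/sqrt(13)
```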
C. Strain-Induced Microstructure and Texture Evolution

The tensile curves in true stress and strain under a range of strain rates are presented in Figure 5(a). The presence of stress fluctuation behavior during hot deformation (i.e., up-and-down oscillation [21] or a stress peak [22] in the stress-strain curve) has been recognized as a DRX characteristic. This applies to different deformation modes including tensile, [23,24] compressive, [25] and torsion. [26] Therefore, the absence of stress fluctuation in Figure 5(a) indicates that dynamic recrystallization did not occur in the MDF cp-Ti under monotonic loading. In addition, the tensile properties did not show any significant rate dependence (Table II). The micro-hardness and grain size also showed little difference under the selected strain rates. To this end, the specimen subjected to the lowest tensile rate of 1 × 10⁻⁴ s⁻¹ was selected for a comparative analysis with the cyclically loaded specimen (R = −1 and 14 Hz, fatigued at RT). The IPF misorientation map and pole figure from the tensile specimen are shown in Figure 5(b). The basal texture became much weaker, with the maximum intensity reduced to 7.6 mud (against 13.2 mud in the as-MDF condition), while its location remained the same, although there was some orientation spread. The high level of stress (hence the high plastic strain) was likely to induce the activation of more slip systems, including the ⟨c+a⟩ mode. Therefore, the orientation spread toward both ND and LD (Figure 5(b)) during monotonic tension may be related to the activation of pyramidal ⟨c+a⟩ slip, as suggested in the work [27] for magnesium alloys. By comparison, a dramatic texture change was found in the fatigued specimens (Figures 5(c) and (d)). Note that a small number of fatigue cycles (i.e., less than 1000) seemed to be sufficient to trigger the texture evolution. Further fatigue loading of up to 4.3 × 10⁴ cycles did not cause any significant textural difference. Recall that the high-temperature annealing only strengthened the basal pole intensity without affecting its location. This makes the fatigue-induced recrystallization in MDF cp-Ti particularly intriguing, especially the formation of a much sharper basal pole intensity around the ND. Figure 5(e) shows that the grain size distribution followed the lognormal distribution and no bimodal characteristic was observed after the fatigue loading. The average grain size increased to 0.80 μm (1000 cycles) and 1.05 μm (4.3 × 10⁴ cycles). The grain growth in fatigued MDF cp-Ti was accompanied by a change in the fraction of LAGBs (Figure 5(f)). There was a moderate increase of LAGBs from 24.9 pct in the as-MDF condition to 34.9 pct after 1000 cycles, followed by a decrease to 24.1 pct after 4.3 × 10⁴ cycles. The initial increase in the fraction of LAGBs was probably due to the dislocation activation associated with fatigue loading. With further fatigue loading, the driving force for recrystallization seemed to be enhanced, and the stored energy at some point was sufficiently high to trigger recrystallization, hence a transition from LAGBs to HAGBs. Considering that the fraction of LAGBs increased to 43.5 pct due to monotonic loading (Figure 5(f)), this leads us to conclude that the accumulated monotonic plastic strain energy cannot trigger recrystallization and grain growth in the MDF cp-Ti. The presence of 30 deg aggregation in the fatigued MDF cp-Ti but not in the tensile one (Figure 5(f)) provides further evidence for the fatigue-induced recrystallization.
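The LAGB/HAGB fractions tracked above follow directly from the correlated boundary misorientation angles. A minimal sketch with the thresholds used in the paper (2 deg lower cut-off, 15 deg LAGB/HAGB transition); the input array is illustrative:

```python
import numpy as np

def boundary_fractions(misorientations_deg, low=2.0, high=15.0):
    """Return (LAGB fraction, HAGB fraction) from boundary misorientation angles."""
    m = np.asarray(misorientations_deg, float)
    m = m[m >= low]                  # angles below 2 deg are not counted as boundaries
    lagb = np.mean(m < high)
    return lagb, 1.0 - lagb

# Toy data: a mix of sub-grain boundaries and high-angle boundaries
rng = np.random.default_rng(2)
angles = np.concatenate([rng.uniform(2, 15, 250), rng.uniform(15, 93, 750)])
lagb, hagb = boundary_fractions(angles)
print(f"LAGB = {lagb:.1%}, HAGB = {hagb:.1%}")   # ~25 pct / ~75 pct, as-MDF-like
```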
An aggregation peak at around 80 deg to 90 deg appeared in the fatigued MDF cp-Ti but was not observed in the tensile specimen, while there was some hint of it in the annealed one (Figure 4(e)). The 85 deg aggregation in hcp Mg has generally been related to the formation of {10-12}⟨10-11⟩ tension twinning. However, there were no measurable twins in the present MDF cp-Ti according to our EBSD/TKD analysis. Hence, such an aggregation peak at around 80 deg to 90 deg was probably caused by the texture evolution or grain growth.

D. In-grain Misorientation Profiles (IGMP)

The discrete pole figures and in-grain misorientation profiles (IGMP) of selected grains (marked by the capital letters A and B) are presented in Figures 6(a) through (d). Grain A represents those grains with the c-axis aligned with the ND, while grain B is typical of those with a random orientation. Four conditions were considered: as-MDF in Figure 6(a), annealed (800°C, 4 hours) in Figure 6(b), tensile deformed (1 × 10⁻⁴ s⁻¹, RT) in Figure 6(c), and fatigue loaded (4.3 × 10⁴ cycles, RT) in Figure 6(d). In these figures, the black curve represents the misorientation data relative to the previous point (point to point), which can be used to determine the presence of LAGBs within the grain. The blue curve represents the misorientation data relative to the first point (point to origin), indicating the accumulated change of misorientation, i.e., the degree of plastic deformation. For the as-MDF cp-Ti, there were many peaks in the black curve with misorientation angles of 2 deg to 12 deg in grain A, suggesting the presence of sub-grains separated by LAGBs within the grain. For the blue curve, the monotonic angle change within each sub-grain was up to 5 deg, indicating that the interior of the sub-grains carried large internal stress. By comparison, grain B, with its c-axis away from the ND, showed much fewer LAGBs, with misorientation angles typically below 2 deg, and the monotonic change of the blue curve was up to 3 deg. These IGMP characteristics of grains A and B indicate that high stored energy was accumulated in the as-MDF cp-Ti (especially in grain A), hence its susceptibility to recovery or recrystallization. After annealing, the selected grain A did not contain any sub-grain (no peak with misorientation angles in the range of 2 deg to 15 deg in the black curve), and there was only one LAGB, with a misorientation angle of 10 deg, in grain B (Figure 6(b)). The state of more complete recrystallization in grain A seems to be consistent with the higher stored energy in grain A compared to grain B in the as-MDF condition (Figure 6(a)). An alternative explanation for the remaining LAGB (10 deg misorientation angle) in grain B is that this boundary was just about to finish the sub-grain coalescence and would have migrated into an HAGB if the annealing time had been increased. Meanwhile, the monotonic change of the blue curves for both grains was less than 1 deg (Figure 6(b)), confirming that the high internal stress was relaxed and that the grains were fully recrystallized. This agrees with the TEM micrograph and SAD pattern presented in Figure 3(f).

Fig. 6 - In-grain misorientation profiles of two typical grains (one with the c-axis aligned with the ND, the other with the c-axis far away from the ND): (a) as-MDF, (b) annealed at 800°C for 4 h, (c) tensile deformed at a rate of 1 × 10⁻⁴ s⁻¹ at RT, and (d) fatigued to 4.3 × 10⁴ cycles at RT. The metallographic specimens were taken within 2 mm of the fracture surface.
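The point-to-point and point-to-origin profiles described above are simple to compute once a line of orientations has been sampled across a grain. A minimal sketch using scipy rotations; for brevity it ignores the hcp crystal symmetry operators, which a rigorous disorientation calculation would apply, so the angles are upper bounds:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def igmp(euler_deg):
    """Point-to-point and point-to-origin misorientation profiles (degrees).

    euler_deg : (N, 3) Bunge Euler angles sampled along a line across a grain.
    """
    rots = R.from_euler("ZXZ", euler_deg, degrees=True)
    def angle(a, b):
        return np.degrees((a.inv() * b).magnitude())
    p2p = [angle(rots[i - 1], rots[i]) for i in range(1, len(rots))]
    p2o = [angle(rots[0], rots[i]) for i in range(1, len(rots))]
    return np.array(p2p), np.array(p2o)

# Toy profile: a slowly rotating lattice with one 8 deg sub-boundary jump
eulers = np.zeros((40, 3))
eulers[:, 0] = np.linspace(0, 4, 40)     # gradual in-grain lattice rotation
eulers[20:, 0] += 8.0                    # an LAGB crossing at point 20
p2p, p2o = igmp(eulers)
print(f"max point-to-point = {p2p.max():.1f} deg (the LAGB), "
      f"final point-to-origin = {p2o[-1]:.1f} deg")
```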
After tensile deformation, an increased number of peaks in the black curve with misorientation angles of ≥ 2 deg (LAGBs) can be seen for both grains (Figure 6(c)). Furthermore, there was a significant monotonic increase of the blue curves, up to around 20 deg for both grains A and B. These IGMP features indicate that the grains were subjected to heavy plastic deformation under tensile loading. However, there was no evidence to suggest the presence of recovery or recrystallization. After fatigue loading, the number of LAGBs (peaks with misorientation angles of 2 deg to 15 deg in the black curve) in both grains A and B was reduced (Figure 6(d)) when compared with the as-MDF (Figure 6(a)) and tensile (Figure 6(c)) conditions. The monotonic change of the blue curve within each sub-grain was found to be up to 2 deg. This suggests that the high internal stress was relaxed during fatigue loading. Therefore, the post-dynamic recrystallized cp-Ti processed by MDF is prone to recrystallization during fatigue loading. Compared with the annealed condition (Figure 6(b)), grain A in the fatigued condition (Figure 6(d)) exhibited higher misorientation angles within the sub-grains. This means that the grains were not in a state of complete recrystallization after fatigue loading. This was probably owing to the material hardening and softening that occurred alternately during the fatigue of MDF cp-Ti, typical of the dynamic recrystallization process. [28]

IV. DISCUSSION

A. Continuous Static and Dynamic Recrystallization (cSRX and cDRX)

The post-dynamic recrystallized cp-Ti processed by MDF was subjected to isothermal annealing, RT tensile loading under a wide range of strain rates, and RT fully reversed (R = −1) high-cycle fatigue loading at 14 Hz. It is evident from the TEM and EBSD/TKD observations (Figures 3(a), (c), (e) and 4(a) through (c)) that the UFG grains grew rapidly at temperatures of 600°C and above. At lower annealing temperatures limited grain growth was found, but there was clear evidence of relaxation of the high internal stress (Figure 3(b)) as well as dislocation re-arrangement, especially in the grain boundary regions (the inset in Figure 3(a)). These microstructural observations agreed well with the micro-hardness measurements (Figure 3(g)). All of the findings above suggest that MDF cp-Ti has a thermally stable microstructure (up to 400°C). By comparison, the microstructural stability of MDF cp-Ti under RT cyclic loading is far inferior. The UFG grains grew to 0.80 and 1.05 μm after 1000 and 4.3 × 10⁴ cycles, respectively (Figure 5(f)). The fatigue-induced grain growth is interesting, as the underlying mechanism triggering the mechanical grain growth cannot be readily understood, especially considering that insignificant grain growth was found under the slow tensile rate (Figures 5(a) and (b)). Previous studies on nanocrystalline materials (e.g., Al, [29] Ni-Fe and Co-P alloys [30]) revealed mechanical grain growth under RT tensile loading. However, it is worth noting that their conclusions were drawn from the presence of abnormal grain growth, leading to a bimodal microstructure with grain sizes up to a few hundred nanometers compared to initial sizes of 12 to 90 nm. Also, as-deposited nanocrystalline materials differ intrinsically in dislocation density and structure from the as-MDF ones (i.e., the bulk nanocrystalline materials fabricated by SPD). Such a difference would impact the post-recrystallization behavior.
In the present work, the observed grain growth in MDF cp-Ti (initial grain size of 0.44 μm), triggered by annealing and fatigue but not by tensile loading, was characterized by a lognormal distribution shift toward larger grain sizes (Figures 4(d) and 5(e)). No evidence was found for abnormal grain growth. This is consistent with the previous work on nanocrystalline Ni under tensile loading [31] and on ECAP and MDF cp-Ti under fatigue loading. [10] The grain size distribution resemblance between the annealed and fatigued cp-Ti seems to suggest that a thermally assisted dislocation process already occurred at RT. Figure 7(a) summarizes the change of average GND density with reference to the as-MDF condition. For comparison purposes, the equivalent time duration for the tensile and fatigue tests was estimated from the strain rate of 1 × 10⁻⁴ s⁻¹ and the frequency of 14 Hz, respectively. The average GND density increased from 1.41 × 10¹⁵ to 1.72 × 10¹⁵ m⁻² due to tensile loading, but decreased remarkably, by two orders of magnitude, to 3.33 × 10¹³ m⁻² after 4 h of annealing at 800°C (Figure 7(a)). Note that there was a continuous decrease in LAGBs (Figure 4(f)) to form the recrystallized grains. Hence, cSRX was the controlling mechanism for the grain growth due to annealing. The increased GND density in the tensile loaded MDF cp-Ti is expected, because a large amount of plastic strain accumulated according to the tensile curve (Figure 5(a)). By contrast, a moderate decrease in the GND density (to 8.21 × 10¹⁴ m⁻²) was found in the fatigued MDF cp-Ti (Figure 7(a)). In addition, the fraction of LAGBs increased and then decreased with increasing fatigue cycles (Figure 5(f)). Thus, the controlling mechanism for the fatigue-induced grain growth is believed to be cDRX. The reason why the GND density drop in the fatigued MDF cp-Ti was much smaller than in the annealed counterpart can be attributed to the dynamic superposition of hardening by dislocation generation and softening by recrystallization. For the as-MDF cp-Ti, the IAMA map in Figure 2(e) illustrates a mixture of substructured grains as the majority, with recrystallized and deformed ones as the minority. The fraction of each characteristic grain type was calculated based on the IAMA maps obtained from the representative samples after the tensile, fatigue and annealing tests. The comparison with the as-MDF condition is shown in Figure 7(b). For the annealed MDF cp-Ti, the proportion of deformed grains reduced to almost zero, while that of recrystallized ones increased remarkably. Meanwhile, there was a 20 pct decrease in the fraction of substructured grains compared to the as-MDF cp-Ti. By contrast, the fatigued MDF cp-Ti showed a dramatic increase in the fraction of substructured grains (~85 pct) at the expense of the recrystallized and deformed ones. Therefore, the IAMA analysis proved to be an effective means of distinguishing cDRX from cSRX. Last but not least, the increased fraction of deformed grains in the tensile MDF cp-Ti, accompanied by reduced fractions of the other two types, provides further evidence that recrystallization did not occur during the monotonic loading. Li [32] pointed out that extra dislocations in nanocrystalline grain boundaries can initiate mechanical grain growth. This has thus been believed to be the reason why mechanical grain growth can take place in nanocrystals, which often have a non-equilibrium grain boundary structure, but not in microcrystals.
Since the present UFG cp-Ti was processed by MDF, the resulting grain boundaries are deemed to be in a non-equilibrium state with extra dislocations. It then becomes interesting to interrogate whether dislocation activity could play a role in the fatigue-induced recrystallization and grain growth. The shear stress σxy required to remove dislocations from a pure grain boundary can be calculated using the relation proposed in Reference 32:

σxy = αGb / [2π(1 − ν)h] [2]

where the shear modulus G = 40 GPa, Burgers vector b = 0.295 nm and Poisson's ratio ν = 0.34 are used for cp-Ti. [11,33] It was assumed that the free dislocations were parallel to the z axis and that the dislocation wall was in the yz plane. The magnitude of α in Eq. [2] is equal to 0.8, independent of the number of free dislocations, if more than 3 dislocations are removed from an equilibrium boundary. [32] According to the TEM micrographs, the dislocation spacing h was estimated as 5 to 10 nm for the as-MDF cp-Ti. The stress required is thus calculated as σxy = 455.3 to 227.6 MPa according to Eq. [2]. In addition, for a non-equilibrium grain boundary, i.e., in the presence of extra free dislocations, the required stress would be reduced to a very low level (σxy = 34.1 to 17.1 MPa) because the magnitude of α can be as low as 0.06. [32] Therefore, dislocation activity is very likely to play a role in MDF cp-Ti during the fatigue loading, even though the applied stress level (stress amplitude of 280 MPa) is much lower than the yield strength (552 MPa, Table II). Also, there was TEM evidence of dislocation activity, revealing both multiple slip and cross-slip characteristics in the post-fatigued MDF cp-Ti (Figures 8(a) and (b)). To this end, the fatigue-triggered cDRX and grain growth phenomenon in MDF cp-Ti can be satisfactorily explained. However, it is still not clear why a large tensile plastic strain induced by monotonic loading, even at the very low strain rate of 1 × 10⁻⁴ s⁻¹, cannot trigger such mechanical grain growth. Since the UFG microstructure in MDF cp-Ti was proven to be thermally stable up to 400°C, it is difficult to attribute this primarily to the temperature rise caused by 14 Hz fatigue loading. An infrared thermography technique was used to monitor the surface temperature under high-cycle fatigue loading at 40 Hz, [8] where the maximum surface temperature was measured to be below 50°C. However, one should not neglect that the low thermal conductivity of cp-Ti (24.5 W/mK) is about one-fourth of that of Fe (94 W/mK) and Ni (106 W/mK). This may make cp-Ti easier to heat up under cyclic loading, hence promoting dislocation activity. In fact, two experimental works have shown that at very low temperatures (where thermally activated processes should be inoperative) the grain growth of UFG Cu (−50°C) [34] and cp-Ti (−200°C) [10] under fatigue loading can be restricted. Therefore, the present work does not permit us to rule out completely the thermally activated nature of cDRX.
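The threshold stresses quoted for Eq. [2] can be verified with a few lines of arithmetic. A minimal check using the constants stated above:

```python
import numpy as np

# Shear stress needed to strip dislocations from a grain boundary, Eq. [2]:
# sigma_xy = alpha*G*b / (2*pi*(1 - nu)*h)
G, b, nu = 40e9, 0.295e-9, 0.34          # cp-Ti constants from the text

def sigma_xy(alpha, h):
    return alpha * G * b / (2 * np.pi * (1 - nu) * h)

for alpha, label in [(0.8, "equilibrium boundary"), (0.06, "non-equilibrium boundary")]:
    lo, hi = sigma_xy(alpha, 10e-9) / 1e6, sigma_xy(alpha, 5e-9) / 1e6
    print(f"{label}: {hi:.1f} to {lo:.1f} MPa for h = 5 to 10 nm")
# -> ~455.3 to 227.6 MPa and ~34.1 to 17.1 MPa, matching the values in the text
```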
B. Fatigue-Induced Texture Change

Accompanying the fatigue-induced grain growth, a different but strong basal texture developed, with a significant number of grains having their c-axis aligned with the ND (Figures 5(c) and (d)). This suggests that significant lattice rotation took place. Previous work on an AZ31B Mg alloy [9] also found a fatigue-induced crystallographic texture change, but both the deformation condition and the lattice rotation direction were different from those in the present work. First, we applied a low-stress-amplitude R = −1 fatigue, distinctly different from the R = 0.1 adopted in that work. [9] Second, we observed the lattice rotation within the TD-ND plane, i.e., always perpendicular to the load axis, whereas the lattice rotation was found to be toward the fatigue loading direction in the previous work. It is worth noting that the grain growth and texture evolution revealed by the TKD results (Figures 5(c) and (d)) are consistent with the previous in situ neutron diffraction results. [10] Obviously, a much larger sampling volume was used in the neutron diffraction measurement. It has already been confirmed that the micro-texture information based on the TKD analysis was consistent with the distinctly different relative peak intensities among all the diffraction peaks revealed by the two neutron detector banks located at ±90 deg relative to the incident beam. More details can be found in Reference 10. Also, the good consistency across the two techniques proved that the MDF processing can produce a homogeneous microstructure. Thus, the fatigue-induced texture change in MDF cp-Ti is repeatable with good consistency. Although both dislocation slip and twinning can cause crystal reorientation, it is less likely that deformation twins are activated in small-sized grains. [35] The twinning-to-slip transition occurred with grain size refinement from 19 to 5 μm in Mg. [36] In addition to the intrinsic nature of the MDF cp-Ti (grain size of 0.44 μm), there are two fatigue test conditions that do not favor twinning. First, the applied stress amplitude of 280 MPa (only half of the yield strength) would be too low to activate twinning. In previous work on cp-Ti, [37] the critical resolved shear stresses (CRSSs) to activate twinning were very close to the yield strength. Second, even if a small amount of twinning deformation did occur during the tensile phase, the subsequent loading reversal into the compressive phase (R = −1) would favor detwinning. [38] Note that both the TEM and EBSD/TKD observations (Figures 5(c), (d) and 8) confirmed the absence of deformation twins. In sum, the fatigue-induced texture change should not be attributed to twinning activation in MDF cp-Ti. A study on the microstructure evolution during high-cycle fatigue of a Mg-6Zn-1Mn alloy also revealed a relationship between texture evolution and DRX. [8] Unfortunately, the authors did not provide any plausible explanation of their experimental results, although the possibility of twinning was ruled out. Hence, the present work represents the first study to explain the fatigue-induced texture evolution in a bulk nanocrystalline hcp metal. On the other hand, grain boundary sliding of the UFG grains can also contribute to the accommodated plastic deformation, causing texture change. [39] Since the lattice rotation essentially serves to accommodate further deformation, it is possible to predict the evolution of grain orientation by examining the slip mode. In other words, grain rotation occurs so that the crystal orientation reaches the optimal position for activating dislocation slip. In terms of dislocation slip, the major deformation modes of cp-Ti are basal and prismatic ⟨a⟩ slips, whose CRSSs were estimated as 49 and 37 MPa, respectively. [37] By contrast, pyramidal ⟨a⟩ and ⟨c+a⟩ slips are rarely activated at RT as their CRSSs are much higher.
[40] It seems that the activation probabilities of basal and prismatic ⟨a⟩ slips are almost equal for cp-Ti, which is distinctly different from the more widely studied Mg (15 MPa for the former, but 67 MPa for the latter [41]). This means that the knowledge gained regarding the texture evolution in Mg during loading might not be applicable to cp-Ti. Therefore, it is necessary to examine the dislocation slip mode in order to accurately predict the texture evolution during fatigue loading of MDF cp-Ti. Under monotonic tensile loading, the slip plane and direction tend to rotate toward the load axis, whereas under compression they tend to rotate perpendicular to the load axis. [42] Both are controlled by the CRSS law to accommodate plastic deformation, and the grain rotation behavior is illustrated schematically in Figure 9(a). Under tensile loading, the activation of either the basal or the prismatic ⟨a⟩ slip would make the c-axis perpendicular to the load axis. Under compression, however, the basal ⟨a⟩ slip would cause the c-axis to become parallel to the load axis, while the prismatic ⟨a⟩ slip keeps it perpendicular. Figure 10(a) shows the IPF map superimposed with the 3D grain orientation diagram for the post-fatigued cp-Ti. According to the Schmid law, the orientation of each grain defines the most probable slip systems. Two typical grain groups (A and B) were selected. Group A has the c-axis nearly perpendicular to the fatigue loading direction (LD), and the SFs of prismatic ⟨a⟩ slip (0.46 and 0.47) are much higher than those of basal ⟨a⟩ slip (0.26 and 0.16). This indicates that the prismatic ⟨a⟩ slip was predominant in these grains. Thus, grains belonging to group A would rotate around their c-axis and keep their c-axis perpendicular to LD under R = −1 fatigue, as schematically shown in Figure 10. On the other hand, a few grains/sub-grains were located in a more favorable orientation for the activation of basal ⟨a⟩ slip, namely group B in Figure 10(b). Micromechanical modeling of the monotonic loading of cp-Ti [43] showed that the yield strength obtained under compression was always higher than that under tension. This probably led to the fact that, once the c-axis of those grains (group B) had been rotated toward perpendicular to LD by the activation of basal ⟨a⟩ slip (i.e., into an orientation similar to that of group A), the prismatic ⟨a⟩ slip became predominant in the following fatigue cycles. A similar transition of the predominant slip system from basal to prismatic [44] was also reported in the plasticity analysis of texture development in Mg during extrusion. Thus, the transition from basal ⟨a⟩ to prismatic ⟨a⟩ slip in cp-Ti is probably the reason why the present R = −1 fatigue did not cause a symmetric/reversible effect on grain rotation. The SFs of the prismatic and basal ⟨a⟩ slips for the fatigued condition are presented in Figures 10(c) and (d), respectively; the activation of prismatic ⟨a⟩ slip is thus much more probable than that of the basal slip. With the increase of fatigue cycles, most grains rotated to the stable direction with their c-axis perpendicular to the load axis due to the activation of prismatic ⟨a⟩ slip (Figures 9(a) and 10(b)). Accordingly, the polar projections of most grains converged to the TD-ND plane in the {0001} pole figure under fatigue, as schematically shown in Figure 9(b). In addition, according to the Schmid law of Figure 9(a), tension causes {10-10} to become parallel to LD, while compression causes {11-20} to become parallel to LD. Ultimately, this produced the final texture with {10-10} or {11-20} parallel to the LD direction, as shown in Figure 10(e).
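The Schmid factor comparison driving this argument is easy to reproduce for a given grain orientation. A minimal sketch, treating only the basal (0001)⟨11-20⟩ and prismatic {10-10}⟨11-20⟩ families in an ideal hcp geometry; the loading direction is expressed in the crystal frame for simplicity, and the example orientation is illustrative:

```python
import numpy as np

c = np.array([0.0, 0.0, 1.0])                    # hcp c-axis in the crystal frame
a_dirs = [np.array([np.cos(t), np.sin(t), 0.0])  # the three <11-20> directions
          for t in np.deg2rad([0, 120, 240])]

# (plane normal, slip direction) pairs for basal and prismatic <a> slip
basal = [(c, a) for a in a_dirs]
prism = [(np.cross(c, a), a) for a in a_dirs]    # {10-10} normals lie in the basal plane

def max_schmid(load_dir, systems):
    """Maximum Schmid factor |cos(phi)*cos(lambda)| over a slip family."""
    L = np.asarray(load_dir, float)
    L /= np.linalg.norm(L)
    return max(abs(L @ n) * abs(L @ s) for n, s in systems)

# A group-A-like grain: c-axis nearly perpendicular to the loading direction
load = np.array([1.0, 0.3, 0.15])                # illustrative loading direction
print(f"basal SF = {max_schmid(load, basal):.2f}, "
      f"prismatic SF = {max_schmid(load, prism):.2f}")
```

For such an orientation the prismatic SF comes out near 0.5 while the basal SF stays low, consistent with the 0.46-0.47 versus 0.26-0.16 values quoted for group A.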
It is worth noting that the activation of prismatic ⟨a⟩ slip along LD could not cause c-axis rotation within the TD-ND plane. Since the last two forging directions for the as-MDF cp-Ti were perpendicular to the ND and TD planes, the combined effect of the compressive residual stress and the resolved shear stress perpendicular to LD (caused by the Poisson ratio effect) could activate dislocation slip. Hence, the SFs for a stress direction along ND were also calculated. The average SFs of the basal and prismatic ⟨a⟩ slips for the as-MDF cp-Ti were calculated as 0.39 and 0.24, while those for the fatigued cp-Ti were 0.21 and 0.22. The favorable orientation for basal ⟨a⟩ slip in the as-MDF cp-Ti may be attributed to the inclination of around 40 deg to 50 deg of the basal planes from ND toward TD, as schematically shown in Figure 9(c). In addition, the stacking fault energies (SFEs) of the basal and prismatic planes were reported to be 300 and 150 mJ/m² for Ti, respectively. [45] The higher SFE of the basal plane may make cross-slip easier (Figure 8(b)), which could enhance dynamic recovery. On the other hand, the lower SFE of the prismatic plane promotes an increase in dislocation density, making dislocation slip on the prismatic plane increasingly difficult as deformation progresses. Therefore, the formation of the texture with the c-axis parallel to the ND and TD in the fatigued MDF cp-Ti is likely attributable to the activation of basal ⟨a⟩ slip perpendicular to LD. Note that a texture evolution from the {0001} TD-split texture to an ND basal texture was also observed during differential speed rolling of cp-Ti, [46] and this phenomenon was likewise attributed to lattice rotation caused by basal slip activity. According to crystal plasticity theory, the slip systems would seem hardly activatable during high-cycle fatigue loading because of the low applied stress, i.e., only half of the yield strength. However, in the present case, the grain growth-induced material softening and the temperature rise (due to the low thermal conductivity of cp-Ti) during fatigue should not be neglected. Such material softening can facilitate the activation of dislocation slip. Furthermore, the residual compressive stress on the TD and ND planes induced by the MDF process, superimposed on the applied stress, can also contribute to reaching the critical stress for activating dislocation slip. In sum, the fatigue-induced lattice reorientation is attributed to gradual lattice rotation by the activation of prismatic ⟨a⟩ slip parallel to LD (Figure 9(b)) and the activation of basal slip perpendicular to LD (Figure 9(c)). In addition, grain boundary sliding and the cDRX-dominated grain growth may also serve as auxiliary mechanisms for the dislocation slip-induced grain rotation during fatigue loading.

V. CONCLUSIONS

The post-dynamic recrystallization behavior of cp-Ti processed by MDF was studied under the following three conditions: annealing, RT monotonic tensile loading, and R = −1 high-cycle fatigue loading. The main conclusions are summarized as follows:

1) Recrystallization and uniform grain growth occurred under high-temperature (≥ 600°C) annealing and R = −1 fatigue. No evidence was found for abnormal grain growth. By contrast, RT tensile loading at a low strain rate could not trigger grain growth.

2) 30 deg aggregation occurred concurrently with the grain growth, but there was no evidence to suggest its correlation with the characteristic CSL boundaries.
3) The recrystallization texture revealed in the annealed MDF cp-Ti was enhanced in intensity but unchanged in location compared to the as-MDF condition. The mechanism behind this was confirmed to be cSRX.

4) The fatigue-induced texture evolution and grain growth were controlled by cDRX, and there was clear evidence of dislocation activity. The formation of a different but strong texture in the fatigued MDF cp-Ti can be attributed to the activation of prismatic ⟨a⟩ slip parallel to LD and the activation of basal ⟨a⟩ slip perpendicular to LD.

ACKNOWLEDGMENTS The authors are grateful to their collaborators from Liaoning University of Technology (China) for their help and fruitful discussions about the in-depth interpretation of the EBSD/TKD dataset.

OPEN ACCESS This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

APPENDIX A: GRAIN GROWTH KINETICS OF POST-DYNAMIC RECRYSTALLIZED CP-TI

Grain growth is a process of reduction of the internal energy of a material at high temperature by reducing the total volume of grain boundaries. Grain growth occurs by the movement of grain boundaries driven by the local curvature. The kinetics of grain growth can be represented as [47]:

D^(1/n) − D₀^(1/n) = k₀ t exp(−Q/(RT)) [A1]

where D is the average grain size after annealing at temperature T for a given time t, and D₀ is the initial grain size (i.e., the as-MDF condition). n is the grain/sub-grain growth exponent, k₀ is the rate constant, R is the gas constant and Q is the activation energy. Figure 11 presents the average grain sizes in both the high-temperature annealed and RT fatigued conditions as well as the derived n. The value of n physically corresponds to the grain boundary mobility and depends on the interfacial energy and atomic diffusion. It can be seen in Figure 11 that n increased with T from 0.15 (600°C) to 0.33 (800°C). For single-phase materials under annealing, n values in the range of 0.2 to 0.5 have been reported; higher material purity tends to push n toward 0.5. [48] By contrast, the fatigued MDF cp-Ti showed a much smaller n value (0.07), indicating that the underlying mechanism differed between the high-temperature annealing and the RT fatigue loading-induced grain growth. To calculate the Q value, the average value of n = 0.30 was used. For the present post-dynamic recrystallized cp-Ti, the average Q value was thus determined to be 241.7 kJ/mol with a very small standard deviation of ±17.6 kJ/mol. Compared with the literature data available for cp-Ti, the present Q value for MDF cp-Ti (241.7 kJ/mol) appears higher than that derived from the recrystallization of cold rolled cp-Ti (156.8 kJ/mol) [49] and that from 8-pass ECAP cp-Ti (179.0 kJ/mol). [47]
For UFG cp-Ti with an initial grain size of 1.4 μm, the recrystallization activation energy Q was determined to be 248.0 kJ/mol, [50] while a value of 342.0 kJ/mol was reported for a 10-pass ECAP cp-Ti. [51] In light of the significant inconsistency of Q values in the literature, it is not appropriate to draw a firm conclusion from this comparison. Nevertheless, the Q value of the present MDF cp-Ti, as well as those of the previous ECAP cp-Ti, [50,51] with one outlier excepted, [47] were significantly greater than that of lattice self-diffusion in Ti (169.1 kJ/mol). [52] This seems to be backed up by our knowledge of the relatively good thermal stability of SPD metals compared with conventionally plastically deformed ones.
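The extraction of n and Q from Eq. [A1] can be sketched as a two-step fit: the growth exponent from the slope of log(D) versus log(t) at each temperature (valid for D much larger than D₀, so the D₀ term is neglected), then Q from an Arrhenius plot of the rate constants. A minimal illustration; the 700°C and 800°C series are made-up placeholders of the right order of magnitude, not the authors' measurements, so the printed Q is for the toy data only and will not reproduce the 241.7 kJ/mol derived from the full dataset with n fixed at 0.30:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# (time h, grain size um) series per annealing temperature (K); the 600C sizes
# are from the text, the 700C/800C intermediate points are illustrative
data = {873:  [(1, 1.83), (2, 2.16), (4, 2.28)],
        973:  [(1, 4.92), (2, 6.02), (4, 7.36)],
        1073: [(1, 10.5), (2, 13.2), (4, 16.53)]}

n_vals, k_vals, temps = [], [], []
for T, series in sorted(data.items()):
    t = np.array([p[0] for p in series], float)
    D = np.array([p[1] for p in series], float)
    n, logD1 = np.polyfit(np.log(t), np.log(D), 1)   # D ~ (k*t)^n for D >> D0
    n_vals.append(n)
    temps.append(T)
    k_vals.append(np.exp(logD1 / n))                  # effective rate constant k

# Arrhenius fit: ln k = ln k0 - Q/(R*T), so Q = -slope*R
slope, _ = np.polyfit(1.0 / np.array(temps), np.log(k_vals), 1)
print("n per temperature:", [f"{n:.2f}" for n in n_vals])   # ~0.15, 0.29, 0.33
print(f"apparent Q for the toy data ~ {-slope * R / 1e3:.0f} kJ/mol")
```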
Research on fragmentation quality percentage of coal in hot air dense medium fluidized bed

This paper studies the fragmentation percentage of coal in a hot air dense medium fluidized bed, investigates the influence of coal surface moisture, drying temperature, drying time, air volume and other factors on the fragmentation percentage, and establishes a mathematical correlation for the fragmentation percentage. It shows that the fragmentation percentage increases with increasing drying temperature, increases with increasing air volume, increases with increasing drying time, and increases with increasing surface moisture of the coal.

Introduction

About two-thirds of China's coal resources are distributed in the northwest and other dry, water-deficient areas, where it is difficult to use wet coal preparation technology to improve coal quality, [1,2] hence the urgent demand for dry coal preparation technology. [3,4] Air dense medium fluidized bed coal preparation technology uses solid particles of a certain size as the solid-phase weighting material; air is then introduced to completely fluidize the weighting particles to achieve coal separation. [5,6] In this test, because the self-developed integrated air dense medium drying and separation device introduces heat energy, the void structure, brittleness and other physical properties of coal with high moisture content, especially lignite, change continuously during the hot fluidized bed separation process. Coupled with the continuous collision and friction of coal in the fluidized bed, the coal is prone to breakage. In actual production, the discharge end of the separator sits at a certain height above the vibrating separation screen; after separation, the product falls onto the separation screen and is thus also prone to breakage. Crushing produces coal powder of < 1 mm, which cannot be removed by the normal medium-removal screen. It enters the undersize medium together with the magnetite powder and participates in the medium-splitting operation. Therefore, it is necessary to study the crushing behavior of coal in the hot fluidized bed, which can guide the control of the split flow and the removal of non-magnetic substances.

Test system

The experimental device is an integrated drying and separation model device based on an air dense medium fluidized bed, composed of an air supply system, an electric heating system, a separation system and a temperature detection system. The structure diagram is shown in Figure 1, and a photograph is shown in Figure 2.

Establishment of the mathematical correlation of the fragmentation quality percentage

According to the experimental data of the fragmentation quality percentage, the comprehensive R² analysis of the two models recommended by Design-Expert is shown in Table 1. The results show that the standard deviations and adjusted R² values of the two models are not much different, but the predicted R² of the 2FI model is greater than that of the quadratic model, and its residual sum of squares is smaller than that of the quadratic model, which shows that the 2FI model is suitable for simulation and analysis of the experimental results. However, considering that the adjusted R² of the 2FI model is 0.8435 while the predicted R² is 0.6989, there is a certain gap between the two, indicating that the simulation accuracy of the model needs to be improved; therefore the 2FI model needs to be revised.
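The adjusted and predicted R² statistics used in this model comparison are easy to reproduce by hand. A minimal sketch, where the predicted R² is computed from the PRESS statistic via the hat matrix; the design matrix X and response y are illustrative placeholders, not the paper's dataset:

```python
import numpy as np

def r2_stats(X, y):
    """Return (R2, adjusted R2, predicted R2) for a least-squares fit y ~ X."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    H = X @ np.linalg.pinv(X.T @ X) @ X.T              # hat matrix
    press = np.sum((resid / (1.0 - np.diag(H))) ** 2)  # leave-one-out residuals
    ss_res, ss_tot = resid @ resid, np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)
    r2_pred = 1.0 - press / ss_tot
    return r2, r2_adj, r2_pred

# Toy 2FI-style design: intercept, four factors A-D, and one interaction A*D
rng = np.random.default_rng(3)
F = rng.uniform(-1, 1, size=(30, 4))
X = np.column_stack([np.ones(30), F, F[:, 0] * F[:, 3]])
y = (5 + 2*F[:, 0] + F[:, 1] + 1.5*F[:, 2] + 0.8*F[:, 3]
     + 1.2*F[:, 0]*F[:, 3] + rng.normal(0, 0.5, 30))
print("R2 / adjusted / predicted = %.3f / %.3f / %.3f" % r2_stats(X, y))
```

A large gap between adjusted and predicted R², as found for the unrevised 2FI model above, signals overfitting and motivates pruning terms.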
The analysis of variance was performed on the model parameters of the 2FI model, as shown in Table 2. The F-value test was used to assess the significance of the model parameters: when the "Prob > F" of a model parameter is greater than 0.1, the parameter is not significant, and when "Prob > F" is less than 0.05, the parameter is significant. From Table 2, it can be seen that the AB, BD, CD and other terms in the model are less significant. In order to improve the simulation, these terms were removed and a new model, the revised 2FI model, was established. A comprehensive R² analysis was performed on the revised 2FI model, as shown in Table 3. The adjusted R² is 0.8424 and the predicted R² is 0.7703; both are relatively large and close in value, showing good consistency. This indicates that the simulation accuracy of the revised 2FI model is relatively high. Therefore, it was decided to use the revised 2FI model to simulate the fragmentation percentage experiment. Through simulation, the mathematical correlation between the fragmentation quality percentage and the various operating parameters was obtained (Table 4). The normal distribution of the studentized residuals is shown in Figure 2, and the comparison between the experimental and predicted values is shown in Figure 3. It can be seen that the studentized residuals basically conform to a normal distribution, and the experimental and predicted values are in good agreement. From Table 5, it can be seen that the five model parameters A, B, C, D, and AD are significant factors, of which A and C are the most significant, followed by B, D, and AD.
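The pruning of non-significant interaction terms described above can be reproduced with standard regression tooling. A minimal sketch using statsmodels; the data frame and the mapping of factor letters A-D to moisture, temperature, time and air volume are illustrative placeholders:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical coded factors (A-D) and fragmentation percentage response y
rng = np.random.default_rng(4)
df = pd.DataFrame(rng.uniform(-1, 1, size=(30, 4)), columns=list("ABCD"))
df["y"] = (10 + 3*df.A + 1.5*df.B + 2.5*df.C + df.D
           + 1.2*df.A*df.D + rng.normal(0, 0.4, 30))

# Full 2FI model: all main effects plus all two-factor interactions
full = smf.ols("y ~ (A + B + C + D)**2", data=df).fit()

# Prune interactions whose p-value (Prob > F for single-df terms) exceeds 0.1,
# keeping all main effects, mirroring the revised-2FI step in the text
keep = [t for t in full.pvalues.index
        if t == "Intercept" or ":" not in t or full.pvalues[t] < 0.1]
formula = "y ~ " + " + ".join(t for t in keep if t != "Intercept")
revised = smf.ols(formula, data=df).fit()
print(f"kept terms: {keep}")
print(f"adjusted R2 of revised model = {revised.rsquared_adj:.3f}")
```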
4. The effect of drying temperature and air volume on the fragmentation quality percentage

Figure 4 shows the effect of drying temperature and air volume on the fragmentation percentage. It can be seen from Figure 4 that as the drying temperature increases, the fragmentation percentage increases, and as the air volume increases, the fragmentation percentage increases. This is because lignite contains a large number of microporous structures of different sizes, and water is easily stored in these micropores in a clustered structure. The more water on the coal surface, the more water penetrates into the micropores. Because lignite is rich in oxygen-containing functional groups, which have strong water absorption, moisture easily combines with these functional groups. When the coal is dried in the fluidized bed, the moisture on the coal surface and inside the micropores diffuses and undergoes a phase change, releasing energy. During this process, the microporous structure on the surface and in the interior of the lignite is destroyed, part of the structure collapses, and the size and number of micropores continuously decrease, resulting in volume shrinkage of the lignite as well as an increase in the overall brittleness and hardness of the coal. The increase in drying temperature increases the heat and mass transfer intensity between the wet coal and the hot fluidized bed. The movement of the water molecules inside the coal intensifies, and the diffusion and evaporation rates increase, which in turn affects the structure of the lignite and makes it easier to break. Therefore, as the drying temperature increases, the fragmentation percentage also increases. The increase in air volume makes the contact between gas and solid more sufficient, increases the efficiency of heat and mass transfer, and intensifies the collision and friction between lignite particles. Therefore, as the air volume increases, the probability of lignite crushing increases, and the fragmentation percentage increases.

Fig. 4 The effect of drying temperature and air volume on the fragmentation quality percentage

5. The effect of drying time and coal surface moisture on the fragmentation quality percentage

Figure 5 shows the effect of drying time and coal surface moisture on the fragmentation percentage. It can be seen from Figure 5 that the fragmentation percentage increases with increasing drying time and with increasing coal surface moisture. This is because a longer drying time lengthens the contact time between the coal and the hot fluidized bed, and the total amount of water diffusion and evaporation increases, which increases the degree of damage that drying does to the structure of the lignite. Therefore, the fragmentation percentage increases with increasing drying time. With increasing coal surface moisture, the contact area between moisture and lignite increases, the diffusion and evaporation of moisture increase, and the severity of the impact on the structure of the lignite increases; the coal is then more easily broken during fluidized bed drying and during collision and friction between coal particles, so the percentage of broken coal increases with increasing surface moisture of the coal.

(2) With the increase of drying temperature, the percentage of crushing increases; with the increase of air volume, the percentage of crushing increases.
E-Commerce Strategy in Driving Sharing Economy in Culinary Industry

Targeting the Jakarta market, e-commerce catering has emerged to provide practical solutions for routine eating needs. The object of this study is Kulina, which was initially founded as a marketplace in 2015. The initial purpose of its establishment was to drive the sharing economy through co-creation with kitchen and distribution partners to meet the lunch needs of its customers, employees in Jakarta. The research data were gathered through interviews with the Digital Marketing Manager, the Customer Experience Head, and the Supervising Delivery. Additionally, observations were conducted on Kulina's digital marketing communication activities on the @Kulina.id Instagram account during September 2017-January 2018. Document searches were carried out via the internet on Instagram with the keyword #Kulina and on other sites containing information about Kulina based on a Google search with the keyword Kulina. The study found that customer demand communicated through the website affects complementarity, the development of economies of scale, and standard-setting. The information was used to open and develop a sharing economy network and business terms for partners. Nevertheless, complementarity in Kulina was influenced not only by the meeting of supply and demand but also by other factors such as traffic jams. From these points, this research aims to describe Kulina's strategy as e-commerce in driving the sharing economy through co-creation to increase the number of customers after rebranding.

INTRODUCTION

Indonesia is fruitful ground for e-commerce growth, with 70% of the population under 40 years old and demographically spread over islands with limited access to retail shops. The presence of e-commerce can be an alternative solution for shopping, where e-commerce acts as an information intermediary between buyers and retail stores.
Product information becomes a marketed commodity and a solution for reaching marketing areas spread across thousands of islands in Indonesia. The rapid growth of the middle class in Indonesia has also increased purchasing power, so Indonesia's gross domestic product is estimated to grow by 5.3% in the next ten years. It is therefore not surprising that 52% of the e-commerce market in Southeast Asia, worth as much as US$46 billion, is developing in Indonesia (Pramisti, NQ, Bhaskara, LA, 2018). E-commerce, or electronic commerce, is defined as the process of buying, selling, and exchanging products, services, and information through computer networks, both intranet and internet (Sims, 2018). Currently, various kinds of product information are easily marketed through e-commerce. In practice, e-commerce acts as an information broker that allows everyone to participate as sellers and buyers at the same time, armed only with a smartphone. Further, this growth stimulates the creativity of its users to innovate. One of the products driving the growth of Indonesia's creative economy is the culinary business. Based on a survey conducted by the Indonesian Creative Economy Agency and the Central Bureau of Statistics in 2016, the culinary sub-sector recorded a 41% contribution to the creative economy's gross domestic product (Salam, F., & Hasan, 2017). The growth of e-commerce in culinary services in Indonesia was preceded by food delivery services as a solution for restaurants that do not have delivery services. Foodpanda was present for the first time as a start-up from Germany that spread its wings to Indonesia, followed by GoFood, which acts as an information broker connecting restaurant partners with a network of tens of thousands of GoJek drivers to run the GoFood food delivery service. The rapid growth can be seen from the 37,000 restaurant partners that have joined GoFood services and have made it the leader in the food delivery service category in big cities such as Bandung and Jakarta (Pramisti, NQ, Bhaskara, LA, 2018). E-commerce in the culinary field in Indonesia is increasingly developing to meet the needs for culinary information, ranging from culinary reviews and restaurant searches to catering. Zomato is an information intermediary application that connects culinary lovers and culinary places through search services and reviews of popular culinary places. This start-up has been operating in 23 countries, including Indonesia, and has reviewed 30 thousand restaurants in Jakarta, Bogor, Depok, Tangerang, Bekasi, and Bali. Local e-commerce applications from Indonesia that provide review and search services for restaurants have also started to appear, such as Makan Mana, which has been downloaded 100 thousand times, and Foodsessive, both of which have recently reviewed numerous restaurants in Surabaya (Pramisti, NQ, Bhaskara, LA, 2018). Throughout 2018, the growth of e-commerce in Indonesia reached 78%, the highest in the world (Zuraya, 2019). Google Trends recorded that demand for food delivery services increased 15-fold from 2015 to 2019 and generated a US$6 billion turnover. In terms of managing the culinary e-commerce business, growth reached six times from 2015 to 2019, with an average growth rate of 57%. The factor supporting this growth is an overall change in consumer behavior toward ordering food rather than coming to the seller's place.
Consumers, whether individuals, families, or institutions, prefer to avoid congestion and erratic weather by ordering and eating from home (Putri, 2021). Targeting the Jakarta market, e-commerce caterers started to emerge to provide practical solutions to everyday eating needs. One e-commerce of this type is Kulina, founded in 2015 with a marketplace concept. Kulina acts as a culinary information broker that connects 300 wedding reception catering kitchens from various parts of Jakarta - most of which operated only on Saturdays and Sundays - delivery services, and employees in business centers who need lunch. Kulina sets food prices, including shipping costs, at a price range of Rp 30,000, which can be reduced depending on the number of customers in the building (Triwijanarko, 2018). From these points, this research aims to describe Kulina's strategy as e-commerce in driving the sharing economy through co-creation to increase the number of customers after rebranding.

The development of internet-based communication technology has triggered digital marketing communication research to promote issues and move netizens. The first study is on the promotion of tourism in Yogyakarta, which is driven by user-generated content (UGC). In that study, Amalia and Erwan observe the digital marketing communications used to promote Jogja tourism and mobilize netizens to interact in the comment column of the @explorejogja Instagram account. The results of the study are the involvement that arises from recommendations, invitations to visit tourist destinations, and reviews about tourist locations (Amalia & Sudiwijaya, 2020). The next study is Paracrisis and Social Media: Social Network Analysis of Hashtag #uninstallbukalapak on Twitter. In that research, Acniah Damayanti observes the digital marketing communications that drive the hashtag #uninstallbukalapak. The study shows that the most central actors or accounts in the #uninstallbukalapak hashtag network are @achmadzaky, @bukalapak, and @jokowi. The discussion topics that appear in the network and amplify #uninstallbukalapak include the attribution of mistakes to Achmad Zaky, the association of the hashtag with support for a presidential candidate, and support for Achmad Zaky and Bukalapak (Damayanti, 2020). Previous research took data from the point of view of the communicant, namely followers and netizens. This study, on the contrary, takes a brand manager's point of view to describe e-commerce digital marketing communications in driving the sharing economy in the culinary industry.

Sharing economy is defined as the value obtained from the process of managing neglected assets to be accessible to the community through online media, which leads to a reduced need for ownership of these assets. From this definition, the sharing economy can be broken down into its main concepts, namely:

1. Value as the basis of exchange, creating economic value either with money or through barter.
2. Neglected (idle) assets.
3. Online access via the internet.
4. A community with shared beliefs, social interactions, and values.
5. A reduced need for ownership.

The emergence of the sharing economy coincided with the trend of collaborative consumption, which marked the waning of the overconsumption of the 20th century. Collaborative consumption is driven by the reputation of brands with community-determined choices in shared access. The essence of collaborative consumption is collaboration through internet media to connect with each other, form communities, and carry out many interactions (Sundararajan, 2017).
Co-creation describes a new approach to innovation through collaboration between companies, consumers, producers, and interconnected partners. The experience of a product is an accumulation of individual consumer experiences in which consumers personalize the use of products according to their needs and desires to create the highest value. This consumer customization data enriches the platform as consumer feedback. Moreover, the further co-creation stage positively affects customer satisfaction with service companies, customer loyalty, and service expenditure (Grissemann, U.S., & Stokburger-Sauer, 2012). Technology is a key driver of the sharing economy and makes economic activity easier and cheaper by reducing transaction costs. Before the advent of the sharing economy, transaction costs could be very high, requiring direct interaction and making selling prices expensive. Consumers and suppliers therefore often had to initiate transactions with an agreement in advance, and many of the interactions ended up being canceled due to prohibitive costs. Now the internet, smartphones, and other new technologies are addressing this problem (Demary, 2015). In a sharing economy, blockchain technology can create a commons-oriented ecosystem (Pazaitis, A., Filippi, PD, & Kostakis, 2017). The peer-to-peer business model in the sharing economy is basically a virtual network that connects individual consumers and individual suppliers. This network has six characteristics: complementarity, compatibility, standards, externalities, consumer switching costs, and significant economies of scale. All of these characteristics apply to the peer-to-peer businesses of the sharing economy (Demary, 2015). This research describes how Kulina, as e-commerce, drives the sharing economy through co-creation with kitchen partners and distribution partners to meet the lunch needs of employee customers in the Jakarta business center through:
1. Complementarity. The sharing economy network between platform, suppliers, and consumers is complementary. The platform matches unmet individual demand with the available individual supply. However, without a supplier providing a common good or service, the sharing economy business platform cannot keep up with demand. On the other hand, without demand, the platform cannot do business with the supplier.
2. Compatibility. Compatibility is closely related to complementarity: supply and demand must be able to move and work in harmony for the sharing economy network to function.
3. Standards. In the sharing economy, the platform serves as a standard for transactions, including business terms, payments, and communications.
4. Network externalities. In a sharing economy, the effects of the network do not directly affect the transacting producers, distributors, and consumers. The hallmark of this peer-to-peer business is that the more consumer demands are met on the platform, the more the platform's utility value increases.
5. Consumer switching costs. When users are familiar with the standards applied by a platform, it takes a long time to adjust to other platforms. In addition, users incur search costs to find other platforms with the same service. Every interaction between producers, distributors, and consumers on the platform marks the beginning of the formation of trust. Therefore, the transition of users to a new platform can only begin with a trust-building process.
Research by Nysveen and Pedersen (2014) demonstrated that participation loyalty and satisfaction are mediated by brand experience.
6. Economies of scale. The costs incurred by sharing economy companies are quite low. At the beginning of operations, it costs just enough to create a platform and do marketing. Even after the platform programming is complete and the number of platform users continues to increase, there are no additional costs that the company must incur. Therefore, a sharing economy company can spread its business wings as widely as possible to reach many consumers and suppliers, making it more flexible in entering the competition.

METHOD

This study uses a qualitative research method with a descriptive approach. The data were obtained through interviews, observation, and document analysis. The interview participants were selected based on their involvement in the co-creation process from September 2017 to January 2018. Based on these criteria, the sources for this research were the Digital Marketing Manager, the Customer Experience Head, and the Supervising Delivery team. The interviews were carried out with the Kulina team at the Innovation Factory, Jl. Prof. Herman Yohanes No.1212, Terban, Gondokusuman, Yogyakarta, and on the internet: on the Instagram account @Kulina.id, via searches with the keyword #Kulina, on Kulina's official website, www.kulina.id, and on other sites containing information about Kulina found through Google searches with the keyword Kulina. Observations were made of Kulina's digital marketing communication activities on the @Kulina.id Instagram account from September 2017 to January 2018. In addition, document searches were carried out via the internet on Instagram through searches with the keyword #Kulina and on other sites containing information about Kulina found through Google searches with the keyword Kulina. The data in this research are presented by compiling the obtained and reduced data. The data are presented by describing the co-creation process in text and visual form. The text is presented as direct quotations in the speakers' own words and as a narrative, with sentences arranged logically and systematically so that they are easy to read and understand, according to the views of the subject but focused on the operationalization of the research concept. The researchers reviewed co-creation from Kulina's perspective; therefore, the data obtained in this study were transcripts of interviews with informants involved in the co-creation process. The other documentation data in this study were news items and customer blogs outside Kulina's official media, and the observation data were the digital marketing communication activities on Kulina's Instagram. The researcher then reduced these data into a presentation grouped according to the research model, which was then compared with the theoretical basis to produce conclusions and suggestions. After the data were obtained, the researcher checked their validity using the triangulation method, comparing the interview information with observations and other documentation sources. The valid data were then analyzed using an interactive analysis model by collecting data from the informants. After being collected in recorded form, the data were presented in written form as interview transcripts.
For all data to stay focused on the research objectives, a process of selection, simplification, and data grouping was required so that the process of co-creation management in Kulina became visible.

RESULT AND DISCUSSION

Focusing on lunch catering services, Kulina is a technology company that does not have its own kitchen. It works with partner kitchens from caterers scattered throughout Jakarta and its surroundings. Kulina connects customers with the closest partner kitchens and arranges food delivery routes so that the food arrives at the customer's address on time via the distribution partners. The Kulina CEO explained on their blog that, in this business, Kulina is technically not a caterer, as it does not have its own kitchen; nevertheless, it is responsible for the quality of the food. Likewise, although it is not a food delivery courier service, it is responsible for the punctuality of delivery (Handika, 2017). As a virtual network, Kulina has six characteristics, namely complementarity, compatibility, standards, externalities, switching costs, and economies of scale.

Complementarity

Kulina arrived as a solution for its customers, mainly employees in Jakarta, dealing with lunch problems. The lunch break frequently turns into an hour-long competition against the obstacles of queuing for elevators and queuing at the food shops; the afternoon break becomes a tiring time to rest. Easy internet access has opened the door to an abundance of information and keeps the target market constantly faced with multiple choices. This finding laid the foundation for Kulina to create a simple and straightforward product. Kulina also views the sharing economy network as a pattern of complementary relationships between Kulina as the platform owner, kitchen partners, distribution partners, and customers. For this reason, Kulina seeks to understand the needs of its targeted customers, namely employees in Jakarta, and then matches them with more than 300 partner kitchens of wedding reception caterers from various parts of Jakarta. These catering partners have good kitchens and are not burdened with the expensive rents they would face if they had to open shops in shopping centers (Triwijanarko, 2018). For delivery from the kitchen to the customer, Kulina works with Ninja Van as a distribution partner. Ninja Van is a start-up engaged in goods expedition that cooperates with motorbike riders. Kulina's CEO explained that the recipe for Kulina's success in matching customers and kitchens is the use of algorithms, so that costs and distribution times are more efficient and customers get their food on time at affordable prices (Handika, 2017). The complementarity process of the sharing economy network in Kulina is oriented towards customer demand. The Kulina platform produces a customer demand map that is used as a basis for deciding whether to open a sharing economy network in an area.

Conformity

One of the daily complaints that Kulina receives is the mismatch between the arrival of the food and the predetermined delivery time. The delays that customers complain about can occur for two reasons: distribution partners who deliberately fake the Proof of Delivery, or customers who only pick up the food from the package's recipient after 12.00. For this reason, investigations are carried out involving customers and distribution partners.
Another complaint received by the Customer Experience team is a mismatch between the menu promised on the website and the food received: substandard quality, smaller food sizes, or stale food. For Kulina's sharing economy network to function through conformity between promise and realization, Kulina's management team works every day to ensure that each menu can be cooked and served as offered. Kulina also makes sure the menu is cooked on time by the partner kitchen so that distribution partners have enough time to deliver to customers precisely at 12.00. Every day, the Food Quality Supply and Operation team checks the readiness of each partner kitchen via a Telegram group bot, which is set to automatically inquire about the readiness of the food every hour from seven o'clock until the cooking deadline at nine o'clock. From each photo sent by a partner kitchen, the Food Quality Supply and Operation team monitors the composition and readiness of that kitchen. In addition to routine daily coordination, routine weekly coordination with partner kitchens is carried out to confirm the menu for the coming week, in the form of a recipe description and serving rules. Communication between Kulina and partner kitchens is carried out to solve any problems that arise while cooking. For example, when raw materials are difficult for one partner kitchen to find, the Kulina team helps gather information from other partner kitchens, because partner kitchens are not directly connected to each other. If a kitchen is off, Kulina coordinates with other kitchens to replace it. In addition to kitchen management, the Kulina team also supervises the delivery of packages from the kitchens to the customers every day. This routine starts from the customers' order deadline; orders are systematically recapitulated and grouped by kitchen area, and this data is then sent to Ninja Van. In the morning, the Ninja Van coordinator sends the person-in-charge (PiC) and rider data for every kitchen. The Supervising Delivery team then starts checking the attendance time of every rider with each PiC. Anticipatory steps are taken as soon as a rider fails to appear, with coordination to find a replacement. Customers are immediately notified if their rider is running late. Delivery delays are caused not only by distribution partners but also by partner kitchens that have not finished cooking by nine in the morning. In fact, the packing spaces of some partner kitchens are narrow, which makes packing take a long time. For all potentially late routes, customers are informed that their late meal will be free of charge and that they can prepare to buy another meal for lunch. On the other hand, each customer can determine where the food will be delivered, with a guarantee that the food is free of charge if it arrives after 12.00 noon. For this reason, coordination with distribution partners is an important routine for jointly managing the risk of delays. Customers receive delivery notifications via WhatsApp from Kulina at nine in the morning, so they too can monitor the journey of their food orders (Hidayah, 2018).
Delays can also be detected from the customer complaints received every day by the Customer Experience team, followed by coordination by the Supervising Delivery team with distribution partners to find out the rider's last position and the obstacles faced in the field. An evaluation of each delivery rider is always conducted after the delivery deadline has passed. All delays are monitored from the time the completed status is reported by the rider. Not all delays immediately incur non-payment sanctions. The Supervising Delivery team analyzes all packages delivered by each rider. In some cases, a delayed completed status can also be caused by weak signals in several buildings in Jakarta. This once happened with ten packages to the same destination, where the first to seventh packages had the completed status at 10 o'clock, yet the eighth to tenth packages had the completed status after twelve o'clock, with no complaints from customers at that destination. In that case, the rider was entitled to the delivery fee for all ten packages. Complaints are initiated not only by customers but also by distribution partners, especially when distribution partners feel disadvantaged because a delivery was on time but was reported late by customers who did not directly confirm the delivery, triggering the free-meal consequence for the delay. The conformity between customer orders and kitchens can be evaluated through complaints about differences in the taste or size of food among customers who subscribe together in one building. Here, the role of Customer Experience is to explain that one office's customers may be served by two different partner kitchens. The Food Quality Supply and Operation team then carries out an investigation into the kitchen concerned. Complaints about the composition of the food can be seen in the content posted by Kulina on December 14, 2017. In the comment column, the account @dinjani complained about eggs that were not on that day's menu, and @fifipradyta regretted not having picture evidence to show that the menu she received was not as complete as in the content. Kulina immediately responded to these complaints by asking for an order ID as material for an investigation. As a collaborative business company, Kulina finds the conformity process challenging to stabilize. This problem will always arise and become more complex as the network of customers and partners grows. It has therefore become a routine job at Kulina to register complaints from customers and partners and follow them up with investigative steps so that supply and demand can always be kept in harmony. Apart from that, the internal team also does matching within Kulina. After consumers receive all the packages, Kulina's internal team conducts a distribution analysis to investigate cases of delays that were not reported by customers, explores the causes, and uses them as evaluation material. In addition, Kulina's internal team analyzes food satisfaction by posting the menu sent that day on Instagram, so that customers who feel they received food with a different composition can complain in the comment column.

Standard

For customers, product standardization is a sensitive, high-risk issue. Silva felt the inconsistency of portion standards at Kulina (Silva, 2017). She was a subscriber who finally decided not to continue subscribing because, after one week of subscription, she received fewer side dishes relative to the rice.
To maintain product and service standards, Kulina coordinates with partner kitchens every week to provide a menu guide for the coming week in the form of recipes and arrangement guidelines. Nevertheless, the portion standard of each Kulina menu remains challenging to monitor so that it stays the same in every box delivered to customers. For distribution partners, Kulina sets a delivery deadline of noon. The consequence of a delay is that the delivery fee is not paid. To facilitate monitoring, Kulina defines two statuses that must be updated by distribution partners: transit, while the food is in the delivery process, and completed, once it has arrived and been handed over to the customer with the recipient's signature as proof. The guarantee Kulina gives customers for the negligence of distribution partners who violate the standards is that the food is free of charge on that day, so that the order can be transferred to the next day. The standards for business terms, transactions, and communication at Kulina are oriented towards customer satisfaction, where the menu requests, location, and delivery hours from customers are Kulina's basis for establishing business terms for partners. To date, Kulina keeps striving to deliver the food menu per the transaction agreement, on time, at the location requested by the customer, through coordination between Kulina and its partners.

Network Externalities

Food quality is a focus carefully guarded by Kulina's Food Quality Supply and Operation team. The risk of fatal errors, such as spoiled food, is anticipated as much as possible. For partner kitchens, the Kulina team explains at the beginning of the contract that the harshest sanctions in such a case are a penalty of three times the price of the food purchased from the partner kitchen, a ban for a specific period, or exclusion from partnering with Kulina altogether. Customers are often disadvantaged by counterfeit Proof of Delivery (PoD) from distribution partners, where the delivery status is reported completed at 12.00 but the customer has not received the food. As a reference for assessing partner performance, Kulina maintains an assessment standard that is rated by customers. Every day the Kulina team sends emoji-based evaluation emails to customers to rate the quality of the food they enjoyed that day. These ratings are routinely recapitulated and shared with kitchen partners every week. To increase customer engagement in rating, Kulina promotes its rating system on Instagram. Customers can give ratings via Kulina's website, email, and customer service through live chat on the website and the telephone number 085 574 677 678. Customers participate regularly in maintaining the quality and consistency of their lunch. For instance, Hidayah (Hidayah, 2018) never misses filling in the feedback that Kulina sends every day; as a customer, her principle is that the reviews she sends will come back to her in the form of benefits she can feel. The assessment of Kulina's distribution partners is determined by the timeliness of delivery. Distribution partners update the delivery status from transit to completed once the package arrives at its destination by sending the Proof of Delivery. The proof must include the recipient's signature, serving as the customer's assessment of the distribution partner's timeliness.
The Kulina team starts an investigation by finding the name of the distribution partner in charge of delivering to the customer and then coordinating in the Ninja Van WhatsApp group to ask the distribution partner for the Proof of Delivery. The moment the Proof of Delivery is received, Kulina's team sends it to the customer and asks the recipient to confirm the package. Should the Proof of Delivery prove to be fake, the distribution partner concerned receives sanctions from Ninja Van. This assessment also determines the number of partner kitchen orders in the following week, because from it Kulina can gauge the capacity of each partner kitchen. To increase the usability value of the Kulina platform, Kulina minimizes risks by conducting daily evaluations through customer assessments, which measure the usability value of the platform as well as the performance of partners, because customers are the ones who experience the service directly and interact directly with partners.

Consumer Switching Costs

Kulina provides a skip feature, allowing customers to pause orders on specific days so that the orders can be postponed to a later day. Customers take advantage of this feature to postpone orders for periods ranging from several days to months, because they want to try services from other lunch providers. The long-term lag is a challenge for Kulina, as it reduces the number of daily orders. Kulina's management regards this departure as time for customers to compare and evaluate the quality of Kulina's food and service, so that they can find a convenient service and decide which brand to choose for their lunch. Some customers who decided to return to Kulina after a long hiatus realized that Kulina offers small but valuable things, which became a reason to resubscribe. The skip feature, which is not found in competitors, makes Kulina feel like a flexible platform for the customer's routine. Another thing customers appreciate is friendly customer service with fast responses and solutions. Customer complaints, whether about big or small mistakes, are received and responded to effectively by updating information, apologizing, and giving free orders. These attitudes have made some of Kulina's complaining customers loyal. One loyal customer who experienced a quick response to her complaints is Stevani (Stevani, 2017), who had been subscribed for four months. She complained when she noticed that a side dish was missing from her order package, even though the package she chose on the website had a composition of rice + vegetables + meat + complement; she also complained on Twitter. Within 30 minutes, she had received a response from the Kulina admin team. The closeness between customers and Kulina's Customer Experience team has shifted the communication relationship from brand-to-customer to peer-to-peer. This is evident in some team members who have resigned from Kulina but are still contacted by customers about delayed order dates and complaints (Figure 6: Customer Complaints, Source: Stevani, 2017). The ebb and flow of customers is a challenge faced by Kulina's team. Quite a few customers resubscribe after pausing and comparing with competitors' lunch services. Little things such as friendly and responsive customer service providing quick solutions are why customers come back and stay loyal. This is the foundation of trust on which Kulina builds long-term relationships with customers.
Economies of Scale

Kulina's economies-of-scale development is focused on Jakarta, without other restrictions. Kulina therefore also invites kitchen owners to become partner kitchens through promotions on its Instagram account. However, high demand to become a partner kitchen in one area does not mean that services will be opened for that area: Kulina's development is based on requests from customers. Customer demand mapping is carried out periodically by Kulina, and areas that experience growth see an increase in partner kitchens. An article entitled Efficient Lunch Solutions in Koran Sindo describes Kulina's business growth along with the increase in the number of customers; at the time, Kulina used 14 partner kitchens and added one partner kitchen each week on average (Nararya, 2018). At Kulina, the development of service coverage is oriented towards customer growth in each area. Educating potential customers is needed to maximize customer growth in Jakarta's lunch market while expanding the customer network. Co-creation in Kulina describes the interconnected collaboration between Kulina, customers, kitchen partners, and distribution partners. The co-creation process started when Kulina created a website that combines algorithms, bots, and management by the Kulina team. Consumers can use this website to customize the order date and delivery address according to their needs. Information and communication technology, as the main driver of a sharing economy like Kulina's, makes economic activities easier and cheaper. As a virtual network, Kulina's co-creation involves six processes: complementarity, compatibility, standards, externalities, switching costs, and economies of scale. The complementarity of a sharing economy network works best when suppliers' goods and services meet demand (Demary, 2015). The complementarity process of the sharing economy network in Kulina is oriented towards customer demand on the website, which connects customers with Kulina. This demand data then forms a customer demand map that is used as a basis for opening economic networks in an area. The demand map underlying Kulina's complementarity is in line with Du, P., & Chou (2020), whose research shows that the intersection between technology, work, and organization plays an important role in co-creation and in encouraging the development of a sharing economy. Compatibility is closely related to complementarity: supply and demand must be able to move and work in harmony (Demary, 2015). Kulina follows up on complaints arising from food delays with investigations to determine whether the fault lies with partners, customers, or management, and the solutions that emerge are oriented towards keeping supply and demand in harmony. Service improvements that result in innovation are identified by Hollebeek, L., & Rather (2019) as the main drivers of co-creation, satisfaction behavior, advocacy, and customer loyalty intentions. In the sharing economy business model, the platform functions as a transaction standard covering business terms, payments, and communications (Demary, 2015). The platform actively encourages emotional work practices even in the absence of direct formal control (Bucher, E., Fieseler, C., Lutz, C., & Newlands, 2020). The standard provisions for business, transactions, and communications at Kulina are oriented towards customer satisfaction.
Requests for menu, location, and delivery hours communicated by customers through the website are Kulina's basis for establishing business terms for partners. This two-sided customer relationship framework helps the company take the proper steps to make all actors involved in the process satisfied, loyal, and profitable in the long run (Kumar, V., Lahiri, A., & Dogan, 2018). Fulfilling consumer demand becomes a standard that increases the usability value of the peer-to-peer business platform (Demary, 2015). Kulina built a rating facility that users can use to assess the quality of the service and food they receive. The ratings given by customers are used by Kulina to determine the number of subsequent orders for kitchen partners. The rating facility, in the research of Bucher, E., Fieseler, C., Lutz, C., & Newlands (2020), is a valuation mechanism that conditions consumers to perform socially desirable behavior during sharing transactions, where open platforms have increased connectivity and sociality among actors (Fehrer, JA, Woratschek, H., & Brodie, 2018). The fast handling of complaints has made critical customers loyal to Kulina. Responsiveness to testimonials on social media needs to be fast, because the total number of online customer reviews (OCR volume) is a more popular quality indicator for customers than the average star rating (Hoskins, JD, & Leick, 2019). Product reviews by customers have a strong positive effect on the act of sharing trust in peers on such platforms (Möhlmann, 2016) as a form of repeated positive reinforcement, creating an emotional bond that encourages commitment (Jalilvand, MR, Salimipour, S., Elyasi, M., & Mohammadi, 2017), where the relationship among them is mediated by social values (Wu, W., Wang, H., Wei, C., & Zheng, n.d.), entertainment value, and trustworthiness (Cai, S., Phang, CW, Pang, X., & Zhang, 2017). The ebb and flow of customers is a challenge faced by Kulina's team. Quite a few customers have resubscribed after skipping and comparing with competitors' lunch services. Little things like friendly and responsive customer service providing quick solutions are their reasons to come back. This is the foundation of trust on which Kulina builds long-term relationships with customers. This finding is in line with the concept of switching costs (Demary, 2015), where every interaction between producers, distributors, and consumers on the platform marks the beginning of the formation of trust. Therefore, user switching to a new platform can be prevented by the process of building trust. When users are familiar with the standards applied by a platform, it takes a long time to adjust to another platform. This dependence is motivated by the economy rather than the supplier of goods (Böcker, L., & Meelen, 2017). In addition, users incur search costs to find other platforms with the same service. The low costs incurred by a start-up with a sharing economy business model can be seen from the initial capital spent to build the platform; after that, its reach can be widened as far as possible without additional platform costs (Demary, 2015). At Kulina, the development of service coverage is oriented towards customer growth in each area. This area-by-area development focus is described by Dreyer, B., Lüdeke-Freund, F., Hamann, R., & Faccer (2017) as outreach development in a sharing economy that emerges from the trend of collaborative consumption.
The development must adapt to the local context, because in local settings Light and Miskelly (2015) emphasize co-organizing to create shared spaces for the collaborative use of resources and joint ownership of projects and premises.

CONCLUSION

The ability of the internet of things to link the entire value chain shapes the management of the collaboration and interaction that take place horizontally between the platform, kitchen partners, distribution partners, and customers, with the customer playing an essential role at this stage of co-creation. Customer requests communicated via the website influence complementarity, the development of economies of scale, and standard-setting. This information is used to open and develop the sharing economy network and to establish business terms for partners. In fact, though, complementarity in Kulina is influenced not only by the meeting of supply and demand but also by other factors, such as traffic jams, which affect the quality of complementarity every day. Complaints that arise are followed up with investigations to determine whether the fault lies with partners, customers, or management, so that the solutions that emerge keep supply and demand in harmony. This process has become a routine part of customer management, with a variety of new problems every day. The benchmark for measuring network externalities underlies the utility value of the Kulina platform and the partners' performance value; the measurement process is based on customer satisfaction, communicated through ratings, and customer disappointment, communicated through complaints. The entire interaction between Kulina, partners, and customers, and between customers and partners, is monitored to ensure the suitability of this sharing economy network so that its members can continue to complement and satisfy each other.
Paternal Involvement and Adverse Birth Outcomes in South Gujarat, India

Abstract Background and Objectives: While the impact of maternal factors on birth outcomes is widely reported, the extent to which paternal involvement and varying cultural family dynamics influence birth outcomes, particularly in an international context, remains understudied. The purpose of this study was to assess the relationship between paternal involvement and adverse birth outcomes in South Gujarat, India. Methods: An in-person questionnaire was administered to adult women at delivery or during the one-month postpartum visit at New Civil Hospital in South Gujarat, India, between May and June 2016 to assess the level of paternal support, attendance at prenatal appointments, and household structure. Pregnancy variables, including birthweight and gestational age at delivery, were collected from maternal and newborn record/chart review. Chi-square and t-tests were used to assess demographics, as appropriate. Logistic regression was used to examine the association between paternal involvement and birth outcomes. Results: Of the 404 infants born during the study period, 26.7% were premature (<37 weeks gestation) and 29% were of low birth weight (<2500 g). More than 40% of the women surveyed reported their in-laws were the primary household decision-makers; however, those who reported high paternal attendance were less likely to report in-laws as the primary decision-maker (p=0.03). Adjusted logistic regression analysis indicated the odds of delivering a low birth weight infant were greater among mothers who reported low paternal support and low paternal attendance at prenatal visits (OR=2.99 (95% Confidence Interval (CI): 1.84-4.86) and OR=2.16 (95% CI: 1.35-3.47), respectively). Conclusion and Global Health Implications: Low paternal support during pregnancy may be a missed opportunity to increase healthy practices during pregnancy as well as to decrease the risks associated with limited social support during pregnancy. It is important to consider varying socio-cultural family dynamics in different populations and how they may influence paternal involvement during pregnancy.

Background of the Study

The health of a nation is often defined by its infant mortality rate. The 2008 National Family Health Survey conducted in India reported infant mortality to be 41 per 1,000 live births. 1 Compared to other middle-income countries, India's rate is concerning. 2 While the impact of maternal factors on birth outcomes is globally reported, the extent to which paternal involvement influences birth outcomes remains poorly understood. 1 Paternal involvement and birth outcomes have shown at least a moderate level of association in the United States. 5 Studies have examined paternal age, 3 financial contributions and paternity acknowledgment, 4 and overall paternal involvement in relation to birth outcomes. 5 In international settings such as India, paternal involvement could be an important factor affecting birth outcomes, and identifying this is a significant step towards optimizing healthcare delivery to children and women. 1 However, few studies have explored the relationship between paternal involvement, different cultural family dynamics, and infant outcomes. These studies have primarily focused on the association between paternal involvement and antenatal or childcare utilization and have not explored the influence of paternal involvement, or the factors influencing paternal involvement, on birth outcomes.
6,7,8 In many countries where extended households are common, household power is not divided solely between a husband and wife. Specifically in India, the extended household includes at least the wife, the husband, and the wife's in-laws. 9,10,11,12 India's socio-cultural norms and traditions, which differ from those of Western countries, may produce different household dynamics, impacting paternal involvement and consequently birth outcomes. Within the socio-cultural dynamics of Indian households, mothers-in-law have a unique set of influences. The presence of a mother-in-law as a figure of power in a household may alter the balance of power between a husband and a wife. The treatment of wives differs throughout India. In the North, women are traditionally less valued than their male counterparts, 9,12 and therefore mothers-in-law may hold their son's needs at higher importance than the needs of their daughter-in-law.

Objectives of the Study

The objective of this study was to assess the relationship between paternal involvement and family dynamics and birth outcomes such as low birth weight (weighing less than 2500 grams) and premature delivery (birth at less than 37 weeks of gestation). Partner/paternal support was assessed using an 8-item Likert scale, and family dynamics were assessed by inquiring about the primary decision-maker in the household. It was hypothesized that low paternal support and low paternal attendance at prenatal appointments would be associated with increased odds of women delivering infants too early (premature) or too small (low birth weight).

Methods

This cross-sectional study was conducted in South Gujarat, India, between May and July 2016. Inclusion criteria were adult women (18 years of age and older) in South Gujarat and surrounding areas who had delivered within the Gujarat Medical College of Surat and New Civil Hospital within the study timeframe. Women were interviewed during their postpartum stay or at the one-month postpartum visit.

Study Variables

Data were collected through a questionnaire that included medical and social history, in addition to information about prenatal visits and factors that facilitated or deterred prenatal visits. Detailed questions were also asked about the father's contributions to the pregnancy as they related to accompaniment to prenatal visits, knowledge about the pregnancy, time spent together with the spouse during pregnancy, and willingness to support the spouse during pregnancy. Questions regarding partner support were adapted from an 8-item Likert scale which assesses constructs ranging from financial support to emotional support. 13 To assess paternal support, the scores (ranging from 4, strongly agree, to 1, strongly disagree) for all 8 items were totaled, and the median was used as a cut-point separating high paternal support from low paternal support. Similarly, the median was used to distinguish high paternal attendance from low paternal attendance. Additionally, women were asked to identify the primary decision-maker of their household as either their in-laws, their husbands, themselves, or equally distributed household power. Maternal history, delivery information, and newborn records were obtained from chart review.

Statistical Analysis

Bivariate analyses were conducted using chi-square and t-tests, as appropriate, to describe the population's demographics, family dynamics, pregnancy complications, and birth outcomes.
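A minimal sketch of the median-split scoring described above, assuming hypothetical item names q1..q8 that are not from the study's instrument; whether a score exactly at the median counts as "high" is our assumption, since the paper only states that the median was the cut-point:

```python
import pandas as pd

# Hypothetical responses for five women on the 8 Likert items,
# coded 4 = strongly agree ... 1 = strongly disagree.
df = pd.DataFrame(
    [[4, 4, 3, 4, 3, 4, 4, 3],
     [2, 1, 2, 2, 1, 2, 1, 2],
     [3, 3, 3, 2, 3, 3, 2, 3],
     [4, 3, 4, 4, 4, 3, 4, 4],
     [1, 2, 1, 1, 2, 1, 1, 1]],
    columns=[f"q{i}" for i in range(1, 9)])

# Total the 8 item scores per respondent (possible range 8-32).
df["support_score"] = df.sum(axis=1)

# Median split into high vs. low paternal support.
df["high_support"] = df["support_score"] > df["support_score"].median()
print(df[["support_score", "high_support"]])
```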
Logistic regression was used to determine the association between paternal involvement and pregnancy complications/birth outcomes. Adjusted models accounted for differences in maternal age, maternal education, gravidity, primary household decision-maker, and family socioeconomic status. Analysis was completed using IBM SPSS Statistics for Windows, Version 25. 14

Ethical Approval

This research was conducted in accordance with prevailing ethical principles and reviewed by an Institutional Review Board in both the United States, by the University of South Florida, and in India, by the Human Research Ethics Committee at the Government Medical College of Surat (GMCS). Research staff completed the informed consent process in private, in the participant's preferred language of English, Gujarati, or Hindi. Due to the literacy level of the participants, the survey was administered orally by research staff as an interview in the participant's preferred language. Completed surveys were coded with pseudo-identifiers. A separate list containing the pseudo-identifiers and the corresponding patient identifiers was stored securely in an administrative office in the GMCS hospital.

Results

Sociodemographic Characteristics

Women who enrolled in the study (n=404) ranged in age from 18 to 35 years (mean = 23.54 ± 3.36). The majority of the women were housewives and had less than a secondary school education. More than 40% reported their in-laws, either mother- or father-in-law, were the primary decision-makers in their household; mothers-in-law represented the majority (Table 1). Half (50.0%) of the women surveyed reported high paternal support, and a third (33.3%) reported high paternal attendance. Low paternal support was significantly associated with lower maternal education level, whereas low paternal attendance was associated with maternal occupation, family socioeconomic status, gravidity, and the primary decision-maker in the household. Nearly one-third of pregnancies resulted in a premature birth (26.7%) or a low birth weight infant (29.0%). The most common pregnancy complications included anemia (52.0%), prolonged latent stage labor (13.9%), and pre-eclampsia (5.2%) (Table 2). More than 20% of the study sample had a history of abortion, 8.9% had a history of preterm birth, and 11.4% had a previous cesarean section.

Adverse Birth Outcomes

Crude logistic regression analysis indicated paternal support and paternal attendance were associated with delivering a low birth weight infant. Women who reported lower partner attendance at prenatal appointments and lower paternal support had over two times higher odds of delivering a low birth weight infant after adjusting for maternal age, maternal education, gravidity, primary household decision-maker, and family socioeconomic status (adjusted odds ratio (AOR) = 2.16; 95% Confidence Interval (CI): 1.35-3.47 and AOR = 2.99; 95% CI: 1.84-4.86, respectively), compared to their counterparts who reported higher partner attendance and higher partner support. The relationship between paternal support or attendance and preterm birth was not statistically significant (Table 3). Over 40% of the participants indicated their in-laws, primarily the mother-in-law, as the primary decision-maker, and preliminary analysis demonstrated this may be related to low paternal attendance at prenatal appointments.
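The adjusted models described above can be reproduced with standard statistical software. The study itself used SPSS; below is a minimal sketch in Python with statsmodels on synthetic data, where all column names are illustrative assumptions, not the study's variable names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data set; low_bw is 1 for a low birth weight infant.
rng = np.random.default_rng(0)
n = 404
df = pd.DataFrame({
    "low_bw": rng.integers(0, 2, n),
    "low_support": rng.integers(0, 2, n),
    "age": rng.normal(23.5, 3.4, n),
    "education": rng.integers(0, 3, n),
    "gravidity": rng.integers(1, 5, n),
    "decision_maker": rng.choice(["in_laws", "husband", "self", "equal"], n),
    "ses": rng.integers(0, 3, n),
})

# Adjusted logistic regression: odds of low birth weight given low paternal
# support, controlling for the covariates named in the text.
model = smf.logit(
    "low_bw ~ low_support + age + C(education) + gravidity"
    " + C(decision_maker) + C(ses)", data=df).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```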
Discussion

This study demonstrated a relationship between paternal involvement and low birth weight; however, an association between paternal support or attendance and preterm birth was not apparent in this population. Previous studies have reported associations between absent fathers and higher rates of preterm birth, low birth weight, very low birth weight, and being small for gestational age. 15 This study contributes to the literature by examining this relationship in an international context and exploring the socio-cultural implications in-laws may have for paternal involvement, taking into account a different cultural dynamic. Over 40% of the participants indicated their in-laws as the primary decision-maker, and preliminary analysis demonstrated this may be related to low paternal attendance at prenatal appointments. However, additional research is needed to adequately explore the influence of in-laws' decision-making on paternal involvement. Limitations of this study include potential underreporting of less acute pregnancy complications and the omission of pertinent prenatal care information, including risk factors such as smoking, substance use, and maternal pre-pregnancy BMI. Additionally, antenatal support and attendance were reported during the postpartum period, requiring mothers to recall levels of support. Furthermore, the participants' husbands were not interviewed. However, a strength of this study is that outcome data (e.g., birth weight, gestational age at delivery) were extracted directly from the patients' prenatal medical records, minimizing the potential for recall bias in outcome measurements. Future studies should consider prenatal enrollment and interviews with fathers to assess their perceptions of their involvement during pregnancy.

Conclusion and Global Health Implications

Low paternal support during pregnancy may be a missed opportunity to increase healthy practices as well as to decrease the risks associated with limited social support during pregnancy. It is important to consider varying socio-cultural family dynamics in different populations and how they may influence paternal involvement during pregnancy.
DISCONA: distributed sample compression for nearest neighbor algorithm

Sample compression using an 𝜖-net effectively reduces the number of labeled instances required for accurate classification with nearest neighbor algorithms. However, one-shot construction of an 𝜖-net can be extremely challenging in large-scale distributed data sets. We explore two approaches for distributed sample compression: one where a local 𝜖-net is constructed for each data partition and then merged during an aggregation phase, and one where a single backbone of an 𝜖-net is constructed from one partition and aggregates target label distributions from the other partitions. Both approaches are applied to the problem of malware detection in a complex, real-world data set of Android apps using the nearest neighbor algorithm. Examination of the compression rate, computational efficiency, and predictive power shows that a single backbone of an 𝜖-net attains favorable performance while achieving a compression rate of 99%.

Introduction

This article discusses distributed sample compression for nearest neighbor algorithms from the perspective of the smartphone security domain. Smartphones have become an integral part of our everyday lives. With annual sales estimated at 1.373 billion units in 2019 (according to [18]), and this figure expected to increase, they are set to become even more widespread. One of the key factors behind this commercial success is the possibility to extend and adjust their functions according to personal requirements by installing various applications (apps). The extreme popularity of handheld devices and mobile apps also means that they are trusted with more and more personal and sensitive data, ranging from browser history to health and physical activity records and banking information. This trend, in turn, underlines the need to improve the trustworthiness and security of the devices and, therefore, the user data. There are currently over three million apps registered in the most popular marketplace, Google Play, as estimated by [3], providing users with a myriad of additional functions. The apps are created by large developer studios and recognized companies as well as anonymous individuals. They are published exclusively in binary format, and reviewing their safety and trustworthiness is a huge effort. In this paper, we present an AI-based method to support the classification of Android apps on a large scale. The content of each app is analyzed and used to classify the app as malicious or benign. This technique is known as static malware analysis [4,33]. Each app is disassembled, and the list of functions it uses is extracted. Formally, each app is represented as a data point in a metric space with distances defined according to the sets of their functions. Classifying Android apps based on their nearest labeled neighbor in this kind of representation has been proven to be efficient by [13].
In general, nearest neighbor search (NNS) is a family of simple yet powerful techniques commonly used in machine learning. No abstract model is fitted to the training data; instead, each test sample is compared to the most similar training data points. The computational complexity of the NNS depends on the size of the training data set. Compressing the training set by creating an 𝜖-net that retains only a small fraction of the original training samples has clear benefits [20]. Intuitively, it uses less space and shortens search times, but it can also reduce classification accuracy. In our case, the problem at hand is too complex for conventional NNS (the data set is too large), meaning that a sample compression algorithm must be used. Furthermore, given the huge size of data sets such as collections of Android apps, the sample compression process itself requires considerable resources that may not be available on a single machine. Given the large set of apps and their functions, we aim to compress this set in a distributed manner to efficiently perform an NNS and classify apps as malicious or benign. Distribution is necessary for dividing the workload, but it also raises additional challenges. For instance: how can the outputs from distributed computations be merged to produce a uniform compressed data set? How does the distribution affect the trade-off between compression and accuracy? Should the compression parameters change in the case of distributed compression? In this paper, we discuss two approaches for distributed sample compression, merge-based sample compression and stream-based sample compression, and evaluate them on a large-scale, real-world data set of Android apps. The main contributions of this paper are as follows:
• We propose a novel distributed sample compression algorithm.
• Our results demonstrate the non-trivial parameterization of the 𝜖-net for sample compression.
• We provide insights into the scalability of the proposed solution.
• We show that the compressed NNS achieves a favorable area under the precision-recall curve of 0.9884 with a compression ratio of 0.9767.
• We attain these favorable results for a real-world problem that was too complex for a conventional NNS solution.
The rest of the paper is structured as follows. Section 2 reviews the related work and provides the theoretical background to our algorithm. We then describe the details of the proposed solution in Section 3, evaluate the presented solutions in Section 4, and conclude with a summary and outlook in Section 5.

Related work

Before explaining our approach, we provide a brief review of relevant works in this field. The two relevant topics that constitute the basis for this work are approaches to malware detection and the theoretical background of sample compression for nearest neighbor search.
Malware detection

There is a substantial body of work on mobile app malware detection. Our approach belongs to the static analysis domain [4,33], which analyzes the content of the app (and its metadata), rather than the runtime behaviour as in the case of dynamic methods [32,40]. Both approaches have their strengths and weaknesses. With dynamic analysis, it is difficult to enforce all the possible execution paths of an app. This process can be made even more difficult if the attacker uses anti-tracking and anti-debugging techniques. Under certain circumstances, it therefore cannot be guaranteed that the app will not become malicious. Static analysis, on the other hand, can be hindered by code obfuscation, i.e. deliberately making code more difficult to read and comprehend. This problem is, to a certain extent, orthogonal to our work. There are a number of anti-obfuscation solutions; an extensive overview is provided by Zang et al. [42]. Regardless of the way the information about an app was collected (dynamically or statically), it must be analyzed to classify the app as malicious or benign. A wide range of techniques and solutions has been employed here, including a number of popular machine learning techniques: k-means by [39], vector embedding and support vector machines by [4], and (deep) neural networks in [31]. For an extensive overview of the current state of work in this field we refer the reader to the survey by Odusami et al. [28]. Our work is based on the foundations laid by [13], which includes, among other things, an overview of the effectiveness of the proposed methods, summarized in Table 1. None of these works considers distributed data processing. Moreover, distributed kNN is not used for static malware detection, as it would require searching through all the samples in large malware databases each time a classification is made. Even if done in a distributed way, this would be prohibitively expensive. Since we do not see such solutions in the prior art, our baseline is the non-distributed case. Our sample compression scheme, on the other hand, creates an implicit model (capturing domain knowledge) and speeds up subsequent classification. This is a crucial characteristic for the problem at hand.

kNN and compression for NNS

Nearest neighbor search, proposed originally by [15], remains a popular and powerful machine learning technique. Formally, given a set S ⊆ X of points in a metric space (X, d) with a distance function d and a query q ∈ X, the nearest neighbor search localizes the point in S nearest (in terms of the d function) to q.
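A minimal sketch of this definition, assuming a generic distance function; the identifiers are illustrative, not from the paper:

```python
from typing import Callable, Sequence, TypeVar

P = TypeVar("P")

def nearest_neighbor(S: Sequence[P], q: P, d: Callable[[P, P], float]) -> P:
    """Return the point in S closest to the query q under the metric d."""
    return min(S, key=lambda x: d(x, q))

# Example in a one-dimensional metric space with absolute difference.
print(nearest_neighbor([1.0, 4.0, 7.5], 5.0, lambda a, b: abs(a - b)))  # 4.0
```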
Translating this to the setting of our study: for a given app, we find the app closest to it in the unrestricted space of all apps, based on the similarity of their function usage. The two main challenges when employing kNN are the selection of the value of k and of a proper distance function, as highlighted by [41]. There are many different distance functions that can be used with kNN: Euclidean, Mahalanobis [38], Minkowski [22], Levenshtein [10], etc. There is no perfect distance function. The function is application-specific and should be able to detect the similarity between samples, allowing them to be compared and classified. In our work, we opted for the Jaccard distance function based on the encouraging results of previous work on malware detection by [13]. The correct setting of the k parameter is also specific to the application (data set) and interlinked with the selection of the distance function [41]. We refer the reader to the original work of Kontorovich et al. [20] for theoretical discussions of the value of the k parameter in 𝜖-nets; we selected k = 1 for our evaluations. In most settings, nearest neighbor search is a simple yet effective classification algorithm. In real-world situations (such as the one we are dealing with), however, it can suffer from a number of problems. It has high storage requirements (the training set needs to be stored); the efficiency of classification declines as the data grows (i.e. more distance calculations are needed); and it has low noise tolerance (especially 1-NNS). As shown by [7], all these shortcomings can be addressed by data reduction techniques. The idea is to obtain a representative data set from the training data set that is smaller and can still be used to perform NNS with good accuracy. The accuracy on the compressed set is sometimes even higher, as compression reduces the noise present in the full data set. The reduction techniques go by different names: instance selection, prototype selection, data set condensation, and coresets. Regardless of the name, the goal is always to remove noisy and redundant data from the original data set before running the classification. Coresets are probably the most general theoretical framework for sample size reduction [27]. "A coreset is a reduced data set which can be used as proxy for the full data set; the same algorithm can be run on the coreset as the full data set, and the result on the coreset approximates that on the full data set" [30]. Coresets are specific to the algorithm; there are solutions for the smallest enclosing ball, 𝜖-kernel, quantiles, k-means, and k-median clustering (see [30] for an excellent overview). In addition, coresets are built under many assumptions about the data set in order to derive fundamental guarantees on the upper bounds of the coreset's cardinality. Our work, on the other hand, deals with a practical problem: it has to handle the noise and inconsistencies found in the data set. The application of coresets in a distributed setting also requires a merging algorithm that is specific to the algorithm used. A coreset for NNS proposed by [11] thus cannot be directly applied to our problem.
One of the first proposed data reduction techniques for NNS is the Condensed Nearest Neighbor (CNN) Rule by [15]. In short, the algorithm takes an arbitrary starting point to initialize the condensed set. The remaining points from the training set are considered one at a time, and if the label of their nearest neighbor in the condensed set differs from their actual label, they are added to the condensed set. The algorithm has three main drawbacks: it is order-dependent, cannot handle inconsistent points (i.e. points with the same attributes but different labels), and has bad running times (a minimal code sketch of the original rule is given at the end of this section). The CNN rule has therefore been extended and modified multiple times, e.g. [2,12,23]. The fundamental idea behind these approaches is to split the CNN rule into two phases. In the first phase, instead of the random initialization of the condensed set, a representative of the training set is selected, which is then refined in the second phase of the algorithm. With a good initialization technique, this ensures that the algorithm is order-independent. The condensation techniques use labels from the training set to improve the performance of the NNS algorithm.

In this study, we argue that real-world applications often produce inconsistent data sets, i.e. sets with points that have the same coordinates but different labels. Our algorithm does not rely on the correctness of the labels, or on the consistency of the training set. We believe that if the problem at hand enables the use of labels, such a refinement could benefit from our work. An ε-net is a representation of the data set and could be refined in a similar way, by classifying the remaining training points to boost its accuracy. MCNN was also used as a starting point for a parallel MCNN by [9]. The proposed algorithm works in a distributed setting but requires a lot of communication between cooperating nodes (probably MPI-based). Our algorithm, on the other hand, reduces the level of communication (as the partitions are analyzed independently), rendering the calculation more robust in a distributed setting.

ε-net-based compression for NNS

The papers by Kontorovich et al. [19,20] form the theoretical basis of this work. They propose a novel approach for generating a subset for a nearest neighbor rule, i.e. a sample compression that can still achieve good prediction performance. For estimations of the prediction error at a given scale, as well as the complexity of the set creation, we refer the reader to the original works. In our case, we focus more on the practical implications of such a compression, in particular the problem of distributing the compression process, which is not addressed in the aforementioned theoretical works.

In practice, real data sets may contain identical data points with different labels, due to insufficient data or noise. This can be addressed by taking a majority vote among the k nearest neighbors, as suggested by [34]. We use a similar technique in the process of creating the compressed data set and for app classification. An alternative to majority voting are algorithms based on fuzzy class membership, as reported in [6]. We believe that this technique could be integrated into our algorithm, although it is not beneficial for the crisp binary classification problem at hand.

Our compression uses point networks (also known as ε-nets), as described by [21]. They also proposed using a hierarchy of such networks to speed up the search. Speeding up the search in this way was not our primary goal, but it could be an interesting avenue for future work.
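For reference, the following is a minimal Python sketch of Hart's CNN rule as discussed above; the explicit shuffle makes the order dependence criticized in the text tangible. All names are ours, and this is an illustration of the classic rule, not the authors' code.

import random

def cnn_condense(points, labels, dist):
    """Hart's Condensed NN rule (sketch): a point is added to the
    condensed set whenever the condensed set misclassifies it.
    Order-dependent by construction."""
    idx = list(range(len(points)))
    random.shuffle(idx)  # the arbitrary ordering the text criticizes
    condensed = [idx[0]]
    changed = True
    while changed:
        changed = False
        for i in idx:
            # nearest neighbor of points[i] within the condensed set
            j = min(condensed, key=lambda c: dist(points[i], points[c]))
            if labels[j] != labels[i]:
                condensed.append(i)
                changed = True
    return condensed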
One of our proposed algorithms (Stream DISCONA in Section 3.3.2) was inspired by on-line incremental learning. In short, the idea of incremental learning is to perform model creation when only parts of the data are available, and to then update the models when new data arrive. Further information on such approaches is provided by [16] and [25].

Finally, it is worth mentioning the seminal work by Littlestone and Warmuth [24], who suggest that the compression process is reminiscent of learning. A compressed set of samples can thus be viewed as a degenerated model learnt from the distributed training data.

DISCONA algorithm

Our DIstributed Sample COmpression Algorithm (DISCONA) is based on point networks. Such structures allow for a space-efficient representation of metric spaces, while enabling a nearest neighbor search.

Point network creation

Let (X, d) be a metric space, in which X represents a set of points and d denotes a distance function. Let K(y, ε) = {x ∈ X : d(y, x) ≤ ε} denote a sphere of radius ε around y. Given ε, a point network of X is a Y ⊂ X that satisfies two conditions:

1. for every x, y ∈ Y, d(x, y) ≥ ε, and
2. X ⊆ ∪_{y∈Y} K(y, ε).

We define d(p, Y) = max_{y∈Y} {d(p, y)}. Thus, for every node y ∈ Y in a point network it is understood that d(y, K(y, ε)) ≤ ε. Throughout this paper, we refer to a point network for a given fixed ε as an ε-net. A simple, brute-force algorithm for point network creation is provided in [14].

The compression ratio of an ε-net depends on the value of the parameter ε. The higher the ε, the fewer points will be selected to constitute the network. For ε = 0, on the other hand, only the compression resulting from the removal of duplicates is performed. Once an ε-net is created, the classification comes down to finding the nearest point to the given query q, but only among the points in the ε-net (Y ⊂ X); q is then classified to the same class as its nearest neighbor in Y.

The metric space (X, d) comprises a set of points as well as a distance function d. There are many possible distance functions to choose from. Based on the previous results in this area [13], we decided to use the Jaccard distance, which is based on the Jaccard coefficient. The coefficient measures the similarity between sets; for two sets, it is defined as the size of their intersection divided by the size of their union. The Jaccard distance is complementary to the coefficient:

d_J(A, B) = 1 − |A ∩ B| / |A ∪ B|.

In our case, the distance calculation between apps is based on the set of functions they use. When pre-processing the data set, each app and each function is assigned a unique (hash-based) identifier [see 13]. Take Example 1 of two apps, app 0 and app 1, which use the functions given in the curly braces. The applications use five unique functions, two of which (101925, 178583) are common. Thus, according to the previous definition, the Jaccard distance between these applications is 1 − 2/5 = 0.6.

Example 1 Jaccard distance calculation

An ε-net constitutes a compression scheme for a given set. Berend and Kontorovich [5] have shown that such a network, despite containing less information, can be used to correctly perform majority voting, and the results will be consistent with the results of the majority vote of the nearest neighbors running on the full data set.

Instead of completely disregarding data points not in the ε-net, we use a slightly modified data structure. In the training (i.e. compression) phase, for each point of the ε-net (we refer to these points as anchors) we store aggregated information about the labeled data points in its vicinity, i.e.
the distribution of app labels in K(y, ε), as shown in Algorithm 1. This aggregated information also helps to derive confidence bounds on the actual classification. Let malicious (M) and benign (B) denote the labels assigned to apps in X, and let l : X → {M, B} be the labeling function. Let C_{p,M}, C_{p,B} ∈ N denote the number of malicious and benign apps, respectively, in K(p, ε). Algorithm 1 maintains the incumbent ε-net Y and collects there every point that is farther than ε from the existing anchors (see lines 3-5). Every point p in the vicinity (p ∈ K(y, ε)) of the nearest anchor y ∈ Y is aggregated according to its label (see line 10). Note that every anchor is in the vicinity of itself. For the ε-net in Example 2, the app with id = 0 is an anchor point and it aggregates information about 19 apps (including itself), all of which are malicious. The app with id = 4096, on the other hand, aggregates information about 24 apps, 23 of which are benign.

Example 2 An aggregating ε-net

Merging

The sheer size of the data we have to deal with renders the calculation of ε-nets on the full set impractical. We therefore look into a distributed solution. The set of apps is partitioned, and the partitions are distributed among several compute nodes. All the information about an app (the functions used) is stored together on the same compute node. Each node derives an ε-net by processing the local data. The networks are subsequently merged together. Since the networks already compress the locally available data, their size should be much smaller than the size of the entire partition; such an exchange is therefore feasible. In the following sections, we discuss different possibilities for merging the results from the partitions to ultimately form the solution.

Conservative merging

To form a single network from the ε-nets of the partitions, they need to be merged together. The following approach allows each anchor to aggregate only the data points that are closer than ε in the original data set. Given a set of ε-nets Y_1, Y_2, . . ., create a network Y = ∪_i Y_i. All the anchors from the input networks become anchors in the resulting network, retaining their label distributions; only the label distributions of coinciding anchors are added together. The following example involves the merging of two networks, Y_1 and Y_2:

Example 3 Conservative merging

All unique anchors in each partition are transferred to the resulting network Y (including the label distribution). For the common anchor (4096), the label distributions are added together. The resulting network Y, however, may not satisfy the first condition in the definition of an ε-net: it is likely to contain anchors that are closer to each other than the given ε. In practical terms, the resulting network is larger than it could have been, achieving lower compression.

Aggressive merging

Another possibility for merging two networks is to build the ε-net hierarchically, with the anchors of the input networks Y_i as input data points. If two anchors from the input networks are closer than ε, then only one of them may be retained in the merged network Y. As a result, all anchors in the merged network satisfy the condition: for every x, y ∈ Y, d(x, y) ≥ ε.
When an anchor x ∈ Y_i is closer than ε to one or more anchors in Y, its label distribution is aggregated by the closest one. This is another imperfect solution. Unlike with the conservative merge, in this case the anchors may aggregate information from apps that were at a distance larger than ε in the original data set, because

x ∈ K(y, ε) ∧ y ∈ K(y′, ε) does not imply x ∈ K(y′, ε).

Thus, we define the effective radius ε̇ ≥ ε of the merged network Y in such a way that each anchor y ∈ Y aggregates labels from data points in the original data set X at a distance of ε̇ or less.

Distributed network creation

Provided with a mechanism to merge networks calculated at distributed compute nodes, we can create an ε-net for larger data sets. In this paper, we evaluated two algorithms for distributed network creation. We assume that each compute node has a partition of the original data set. The partitions are random, roughly equal in size, and do not overlap. In general, such a partitioning can be achieved, for instance, by means of consistent hashing attributing apps to particular compute nodes.

Merge-based DISCONA

In the merge-based approach, all nodes calculate an ε-net in parallel for their partition of the data. The resulting networks are subsequently passed to a node responsible for merging. The process is schematically depicted in Fig. 1. Depending on the size of the resulting networks, the merging can either be performed in one shot, or as a sequence of smaller (e.g. pairwise) merges. In the second case, more nodes are responsible for merging, and thus the workload can be better distributed. However, more merges (of smaller networks) have to be conducted, so the overall workload might even increase.

For this kind of network creation, aggressive merging is preferable. The anchors of the resulting networks are used to calculate a set of anchors for the merged network. Conservative merging would substantially increase the size of the resulting network, and its overhead depends on the number of partitions.

Stream-based DISCONA

In this case, one partition (i) is initially selected at random as an origin, and an ε-net is calculated for it. The idea here is to retain all anchors of the origin network (Y = Y_i) and to only update their label distributions based on the ε-nets of the other partitions. The origin network is then passed to the other compute nodes. Each node j adjusts the label distributions of the anchors in the origin network Y_i. Technically speaking, each node performs an NNS for the local data and the origin anchors. Each local data point x ∈ X_j is attributed to one anchor y ∈ Y_i from the origin network. This attribution is used to create a local ε-net with the same anchors as Y_i but with label distributions specific to each partition. In the last step, the networks from all compute nodes are merged together in a conservative fashion. The process is shown in Fig. 2.

Fig. 2 Stream-based compression

The use of the aggressive merge algorithm would evidently be a waste of resources in this case, as the anchors of the resulting networks stem from the origin network, which was already a correct ε-net. Since the aggregation is based solely on the nearest neighbor calculation and only uses ε as an implicit, hidden parameter, the resulting network might have an effective ε̇ ≥ ε.

Evaluation

Here we present the results of our study. We examine the compression achieved by our solution and compare it with the performance of the model working on the compressed data.
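Before detailing the setup, we include a compact, single-node Python sketch of the constructions described above: the label-aggregating ε-net of Algorithm 1 and the two merging strategies. The set-of-function-IDs point representation and all names are our assumptions for illustration, not the authors' published code.

from collections import Counter

def build_eps_net(points, labels, eps, dist):
    """Sketch of Algorithm 1: greedy eps-net where each anchor keeps
    a label histogram of the points in its vicinity K(anchor, eps)."""
    anchors, hists = [], []
    for p, lab in zip(points, labels):
        if anchors:
            j = min(range(len(anchors)), key=lambda k: dist(p, anchors[k]))
            if dist(p, anchors[j]) <= eps:
                hists[j][lab] += 1  # aggregate p at its nearest anchor
                continue
        anchors.append(p)                # p is farther than eps from all anchors
        hists.append(Counter({lab: 1}))  # every anchor is in its own vicinity
    return anchors, hists

def merge_conservative(nets):
    """Union of anchors; histograms of coinciding anchors are summed.
    The result may violate the eps-separation of anchors."""
    merged = {}
    for anchors, hists in nets:
        for a, h in zip(anchors, hists):
            key = frozenset(a)  # assumes points are sets of function IDs
            merged[key] = merged.get(key, Counter()) + h
    return [set(k) for k in merged], list(merged.values())

def merge_aggressive(nets, eps, dist):
    """Re-run the greedy eps-net construction on the anchors themselves;
    an absorbed anchor's histogram is added to its closest survivor."""
    out_a, out_h = [], []
    for anchors, hists in nets:
        for a, h in zip(anchors, hists):
            if out_a:
                j = min(range(len(out_a)), key=lambda k: dist(a, out_a[k]))
                if dist(a, out_a[j]) <= eps:
                    out_h[j] += h
                    continue
            out_a.append(a)
            out_h.append(Counter(h))
    return out_a, out_h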
Setup and data sets

We conducted our experiments based on the data sets used in [13]. We refer the reader to the original work for details on how the data sets were created and pre-processed. It is, however, important to mention one processing step: the data sets only include nonempty functions that are defined and/or used in more than 100 apps.

The code in this study was implemented in Python using popular libraries. In particular, we used pandas [26] and Scikit-learn [29] for data manipulation and model evaluation. The Matplotlib library [17] was used to create visualizations. The machine learning library Turi Create by [36] provided us with ways to efficiently calculate Jaccard similarity and perform NNS.

Throughout the evaluation, we use two data sets of different sizes. The smaller one enabled quick hypothesis testing, as some experiments are too expensive to be conducted on the full data set. In particular, the research question on the effective ε̇ requires a lot of pairwise distance computations and was, therefore, studied on the basis of the smaller subset. In addition, initial results on compression ratios, the choice of ε, and the robustness of the streaming DISCONA were all obtained with the smaller set of 10003 apps: 4987 malicious apps obtained from [37], and 5016 benign apps collected using the Androzoo API [1]. This data set (referred to as VTAz) was composed by Frenklach et al. [13] and is used here for performance comparison. The apps used about 700000 unique functions. The overall size of the data set was over 35000000 records. Prior to the experiments, we withheld a random test set of 1000 apps, which we later used to assess the quality of the predictions. The remaining data were divided into 4 distinct partitions.

After gaining the initial insights, the results from the VTAz data set were transferred to experiments with a large-scale Virus Total (VT) data set. It comprised 95220 benign and 94241 malware apps obtained from [37]. An app is tagged as malware if it is detected as malicious by five or more VT antiviruses. The data set consisted of 188452 unique apps and 1052842 unique functions. The withheld test set comprised 1000 randomly selected apps and remained constant throughout the experiments. The remaining apps were divided into 16 distinct, nonoverlapping partitions.

Compression vs. predictive power

The goal of our work was to achieve the highest possible compression while preserving the high predictive power of the model using the compressed data. The compression ratio of the algorithm can be regulated using the radius parameter ε. The relation between compression and the radius is presented in Fig. 3 and was examined using VTAz.

Fig. 3 Compression ratio for increasing radius ε

We define compression as the complement of the ratio between the size of the ε-net (number of anchors) and the size of the input data set (unique apps). The stream-based solutions result in stronger compression. Furthermore, it is worth noting that even for ε = 0, we achieve substantial compression, resulting from the removal of app versions sharing the same frequent functions.
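Two small helpers make the quantities used in this setup concrete: a hash-based partition assignment (a simple stand-in for the consistent hashing mentioned earlier, not the authors' exact scheme) and the compression measure as defined above. The anchor count in the comment is an illustrative value, not a reported number.

import hashlib

def partition_of(app_id: str, n_partitions: int) -> int:
    """Assign an app to one of n partitions by hashing its identifier."""
    digest = hashlib.sha1(app_id.encode()).hexdigest()
    return int(digest, 16) % n_partitions

def compression(n_anchors: int, n_unique_apps: int) -> float:
    """Compression: the complement of the ratio of eps-net size to
    the number of unique input apps."""
    return 1.0 - n_anchors / n_unique_apps

# e.g. compression(4400, 188452) is roughly 0.9767, the order of magnitude
# reported for the full VT data set in the conclusion.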
Phase transition of the hyper-parameter ε

We use precision and recall as the primary performance metrics in our evaluation. To calculate the performance metrics, we made predictions on the withheld set using the resulting networks. To compare performance across different compression ratios using a single unparametrized measure, we used the area under the precision/recall curve (precision/recall AuC, or PrAUC for short; a minimal sketch of its computation is given at the end of this section). We believe that malware classification is indeed best characterized by a trade-off between precision and recall. Precision shows how specific the malware detection is; for example, 1 − Precision quantifies the human effort involved in handling non-malware applications mistakenly classified as malware. Recall corresponds to the malware detection rate, quantifying the fraction of malware applications correctly classified as malware. Figure 4 shows the reduction in PrAUC with increasing ε.

We compare three methods. The reference method calculated the ε-net for the entire data set without distribution. The merged and streamed networks are created in a distributed way. The performance appears to be stable (and high) for ε values up to ε₀ = 0.65, as indicated by the dashed line on the plot. Beyond this threshold, performance declines substantially for all compression schemes considered. In order to produce the reference results, we used VTAz for this experiment.

For a better comparison of the networks, we present the full precision/recall plot for the threshold value of the radius, ε = ε₀ = 0.65, in Fig. 5. It is important to note that both distributed solutions achieve the performance of the reference network; the streamed network is only slightly worse than the merged one. Based on the results presented in Fig. 3, the compression for ε = ε₀ is 9.47 × 10⁻² for the merged and 4.35 × 10⁻² for the streamed network. In terms of overall problem-specific model performance, we also achieve very good results (compare Table 1).

The effective ε̇

As mentioned earlier in Section 3.2, the distributed construction of an ε-net may reduce its quality by aggregating apps into the label distributions of incorrect anchors. This behavior manifests itself as an increase of the effective ε̇ of the network, i.e. there will be app-anchor pairs in the data set that are at a larger distance than the requested ε. To empirically assess the magnitude of this problem, we plot the normalized distance distributions for both the merged and streamed networks in Figs. 6 and 7. The results were produced using the smaller data set, VTAz. The dashed line represents the desired value of ε. The tail of the distance distribution to the right of the dashed line can be considered as an error. We can see that the distance distribution for the stream-based network is more skewed than for the merged one, and has a longer right tail. This means that the label distributions of anchors in the stream-based network are affected by apps that are significantly less similar to the anchor than in the case of the merge-based network. In particular, these apps include very distinctive functions, which do not have much in common with other apps in the data set. Such unique apps, scattered across partitions, do not become anchors in the origin network, and thus are also not anchors in the output network Y. Their exclusion from the ε-net increases the compression ratio compared to the reference network and to the merge-based one in Fig. 3.
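For completeness, a minimal Python sketch of the PrAUC computation used as the performance measure in the preceding subsections; the scoring choice in the comment is a hypothetical example, not a description of the paper's exact scoring.

from sklearn.metrics import precision_recall_curve, auc

def pr_auc(y_true, y_score):
    """Area under the precision/recall curve."""
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    return auc(recall, precision)

# y_score could be, e.g., the fraction of malicious labels in the
# histogram of the nearest anchor (one possible scoring choice).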
Robustness of the streamed network

The performance of the stream-based DISCONA depends on the selection of the initial anchors (the origin network). To evaluate the sensitivity of the stream-based DISCONA to this random selection, we conducted four additional experiments with the VTAz data set. In each run, a different partition was used to create the origin network (initial anchor selection). The results are depicted in Fig. 8. Here, we again fixed the network radius at the previously identified value ε₀ = 0.65. Although there are some differences in performance, we believe that the overall stability of the streamed network's performance with regard to the selection of initial anchors is high. We also include the values of the area under the curve, which show that the stream-based network is capable of producing results similar to those of the merge-based network in Fig. 5.

Performance as a function of the number of partitions

The number of partitions can influence the performance of distributed sample compression, both in terms of classification accuracy and running time. A single large partition is the most accurate (at high ε), as it becomes the reference network (see Figs. 4 and 3, respectively). Here we investigate the influence of the number of partitions on the performance of sample compression. To this end, we divided the VTAz data set into 4, 6, 10, 16, and 32 partitions.

As can be seen in Fig. 9, the merged network is insensitive to the number of partitions. We attribute the robustness of the merged network to the aggressive merging strategy used in the merge-based network construction. In contrast, in the streamed network creation, increasing the number of partitions increases the compression ratio (Fig. 10) on the one hand; on the other hand, the higher compression ratio comes at the cost of classification accuracy. Updating the label distributions during conservative merging is unable to compensate for the loss of potentially good anchors due to the decreasing size of the first partition.

Scalability evaluation

To evaluate the scalability of the proposed approach, we measure the network creation times. Figure 11 depicts the network creation time as a function of the number of partitions for three selected values of ε on the VTAz data set. We can see that, regardless of the radius parameter ε, the overall running times decline with an increasing number of partitions.

We compare DISCONA to a single-partition baseline adapted from [13]. The baseline finishes in 746 s, which is 50% higher than the time reported in [13] for the same data set. We attribute this discrepancy to differences in implementation and hardware. The streamed DISCONA performs much better than the baseline across the range of ε values and with only a few partitions. The network creation time drops to 88 s and 55 s with 16 and 32 partitions, respectively. This shows the benefits of the parallel label distribution update in the streamed DISCONA algorithm.

The situation is very different in the merged case, where the effect of an increase in the number of partitions is equivocal (see Fig. 12). As in the streamed case, the partitions are processed in parallel. However, the subsequent step of aggressive merging takes time quadratic in the total number of anchors of the networks Y_i, where Y_i is the network created for partition i. Thus, only for higher values of ε do we observe improvements in the running time. A comparison with the baseline model creation is depicted in Fig. 13.
The number of partitions is fixed to 16. The streamed DISCONA shows favorable compute times; the merged DISCONA is faster than the baseline only for higher values of ε.

To further corroborate the differences between the scalability of the streamed and merged cases, we analyzed the breakdowns of the running time for a fixed number of 16 partitions. The time to create a streamed network (Fig. 14) is dominated by the origin network creation (in orange) and the label distribution update (in blue). For the merged case, however, as can be seen in Fig. 15, the dominating factor is the aggressive merge of the networks Y_i (in light blue).

Performance as a function of compression

After performing the extended parameter study on the initial (smaller) VTAz data set, we applied the obtained knowledge to our full VT data set and used 16 partitions. In the stream-based DISCONA, the origin network was created from the first partition.

Firstly, we evaluated the achieved performance as a function of compression (see Fig. 16). Throughout this section, we use the area under the precision/recall curve as the performance metric. In total, we created 8 streamed and 8 merged networks, with increasing ε values [0.15, 0.25, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95]. Overall, the stream-based DISCONA exhibits a better trade-off between compression and PrAUC than the merge-based algorithm, due to the exclusion of unique apps from the set of anchors.

The compression ratio of the streamed network has a lower bound corresponding to the size of the origin network: it can only use apps from the first partition in the origin network, even for very small or zero values of ε.

Sub-partitions for the streamed network

The compression of the stream-based DISCONA can be influenced by the size of the origin network. We examined this in an experiment where the origin network was created from a subsample of the first partition of the VT data set. We generated an origin network from 20%, 40%, 60%, and 80% of the apps in the first partition and then proceeded normally, updating the label distributions for each anchor according to the 16 VT partitions (including the remainder of the first partition). Thus, the networks comprise the same amount of information, although the number of anchors is (artificially) reduced.

The results of this experiment are presented in Fig. 17. For the purpose of clarity, only the networks with origins created from 20% and 100% of the data points in the first partition (the extreme cases) are depicted. For each sub-partition, we calculated networks with an increasing radius ε ∈ [0.001, 0.95]. The dashed rectangle denotes the highest value, ε = 0.95. We also noted the lowest values of ε for each network.

Sub-sampling the initial partition naturally allows for a further increase in the compression ratio. On the one hand, the performance of a network created from the full partition and ε = 0.85 is inferior to that obtained with ε = 0.75 and the 20% subsample, which results in a substantially smaller model. On the other hand, the best performance is achieved by ε-nets created from the full partition and low values of ε (left-hand side of the plot), as expected.

With the 20% partition, we almost reach the minimal compression ratio with ε = 0.001. With these settings, 98% of the unique apps in the sub-partition become anchors in the origin network, similar to the phenomenon described in Section 4.8.

Random networks

A substantial drop in performance for the high values of ε (dashed rectangle in Fig. 17) is caused by the aggregation of the vast majority of apps by the first (random) anchor.
The remaining potential anchors each aggregate a small number of apps, diminishing the predictive power of the ε-net. These are clearly not the best settings for the proposed DISCONA algorithm, as it cannot show its full potential. In this regime, the proposed algorithms are likely to exhibit performance inferior to that of a set of randomly selected points of the same size. To verify this hypothesis, we conducted one more experiment with the full VT data set, in which, instead of creating origin networks, we selected a random set of points from the first partition.

The results of this experiment are presented in Fig. 18. A random selection of points indeed achieves a better performance than a point network with the highest ε values (ε > 0.994). With decreasing ε and an increasing size of the random subset, however, the advantages of DISCONA can be clearly seen (left side of the plot). This is the regime where the algorithm can use its sophistication to select a good (rather than just small) set of representative malicious and benign apps. It should also be stressed that the random sample is used as an origin network and enriched with information from the remaining partitions by our streamed DISCONA algorithm, which substantially increases its predictive power.

Conclusion

In this paper, we presented the first distributed sample compression for NNS based on ε-nets. It is based on point network generation and subsequent merges of the partition results. The algorithms were evaluated with a real-world data set to perform Android malware classification, and they solve the problem very well, achieving a performance of 0.9884 (measured as the area under the precision/recall curve) while maintaining a compression ratio of 0.9767. We extensively examined the significant trade-off between the compression and the predictive power of the NNS, showing the best range of the ε parameter to work with for this data set. We also demonstrated the scalability of the solution.

Future work may examine alternative ways of building the ε-net. In particular, the aggressive merge phase can be accelerated by applying a distributed hierarchical merge approach. In addition, applications of the proposed solution beyond malware classification, possibly requiring some domain-specific tweaks, should be pursued. A clear extension of the proposed algorithm is the multi-class classification case. This is theoretically possible, as shown by [19], but would require changes in the internal structures of the ε-net (see Example 2).

One of the proposed algorithms, the stream-based DISCONA, allows for on-line learning, which is relevant in the application area and could be further examined.

Algorithm 1 Point network with label distributions

Fig. 17 Compression and performance of the streamed network for the origin created only from a fraction (in percent) of the initial partition
Construction and stability of blowup solutions for a non-variational semilinear parabolic system

Abstract. We consider the following parabolic system whose nonlinearity has no gradient structure: $$\left\{\begin{array}{ll} \partial_t u = \Delta u + |v|^{p-1}v, \quad&\partial_t v = \mu \Delta v + |u|^{q - 1}u,\\ u(\cdot, 0) = u_0, \quad&v(\cdot, 0) = v_0, \end{array}\right. $$ in the whole space $\mathbb{R}^N$, where $p, q>1$ and $\mu>0$. We show the existence of initial data such that the corresponding solution to this system blows up in finite time $T(u_0, v_0)$ simultaneously in $u$ and $v$ only at one blowup point $a$, according to the following asymptotic dynamics: $$\left\{\begin{array}{c} u(x,t)\sim \Gamma\left[(T-t) \left(1 + \dfrac{b|x-a|^2}{(T-t)|\log (T-t)|}\right)\right]^{-\frac{(p + 1)}{pq - 1}},\\ v(x,t)\sim \gamma\left[(T-t) \left(1 + \dfrac{b|x-a|^2}{(T-t)|\log (T-t)|}\right)\right]^{-\frac{(q + 1)}{pq - 1}}, \end{array}\right.$$ with $b = b(p,q,\mu)>0$ and $(\Gamma, \gamma) = (\Gamma(p,q), \gamma(p,q))$. The construction relies on the reduction of the problem to a finite dimensional one and a topological argument based on the index theory to conclude. Two major difficulties arise in the proof: the linearized operator around the profile is not self-adjoint even in the case $\mu = 1$; and the fact that the case $\mu \ne 1$ breaks any symmetry in the problem. In the last section, through a geometrical interpretation of quantities of blowup parameters whose dimension is equal to the dimension of the finite dimensional problem, we are able to show the stability of these blowup behaviors with respect to perturbations in initial data.

1. Introduction

In this paper we are concerned with finite time blowup for the semilinear parabolic system

$$\left\{\begin{array}{ll} \partial_t u = \Delta u + |v|^{p-1}v, \quad&\partial_t v = \mu \Delta v + |u|^{q - 1}u,\\ u(\cdot, 0) = u_0, \quad&v(\cdot, 0) = v_0, \end{array}\right. \tag{1.1}$$

posed in $\mathbb{R}^N$, with $p, q > 1$ and $\mu > 0$. The solution (u, v) may develop a singularity in finite time $T < +\infty$, in the sense that

$$\lim_{t\to T}\big(\|u(t)\|_{L^\infty(\mathbb{R}^N)} + \|v(t)\|_{L^\infty(\mathbb{R}^N)}\big) = +\infty.$$

In that case, T is called the blowup time of the solution. A point a ∈ R^N is said to be a blowup point of (u, v) if (u, v) is not locally bounded near (a, T), in the sense that |u(x_n, t_n)| + |v(x_n, t_n)| → +∞ for some sequence (x_n, t_n) → (a, T) as n → +∞. We say that the blowup is simultaneous if

$$\limsup_{t\to T}\|u(t)\|_{L^\infty(\mathbb{R}^N)} = \limsup_{t\to T}\|v(t)\|_{L^\infty(\mathbb{R}^N)} = +\infty, \tag{1.2}$$

and that it is non-simultaneous if (1.2) does not hold, i.e. if one of the two components remains bounded on R^N × [0, T). For system (1.1), it is easy to see that the blowup is always simultaneous: indeed, if u were uniformly bounded on R^N × [0, T), then the second equation would yield a uniform bound on v. More specifically, we say that u and v blow up simultaneously at the same point a ∈ R^N if a is a blowup point both for u and for v. In the case of a single equation, namely when system (1.1) is reduced to the scalar equation

$$\partial_t u = \Delta u + |u|^{p-1}u, \quad u(\cdot, 0) = u_0, \quad p > 1, \tag{1.3}$$

the blowup question for equation (1.3) has been studied intensively by many authors, and no list can be exhaustive.
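As a quick consistency check (our own computation, standard in this literature), the blowup exponents appearing above can be recovered by balancing the diffusion-free version of (1.1). Seeking $u \sim A(T-t)^{-\alpha}$ and $v \sim B(T-t)^{-\beta}$ in $u' = v^p$, $v' = u^q$ forces
$$\alpha + 1 = p\beta, \qquad \beta + 1 = q\alpha,$$
and hence
$$\alpha = \frac{p+1}{pq-1}, \qquad \beta = \frac{q+1}{pq-1},$$
which are precisely the exponents in the blowup rates of u and v.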
Let us sketch the main results for the case of equation (1.3). Considering a blowup solution u of (1.3) and T its blowup time, we know from Giga and Kohn [18] that

$$\|u(t)\|_{L^\infty(\mathbb{R}^N)} \le C(T-t)^{-\frac{1}{p-1}} \tag{1.4}$$

for some positive constant C, provided that $1 < p \le \frac{3N+8}{3N-4}$, or $1 < p < \frac{N+2}{N-2}$ with $u_0 \ge 0$. This result was extended by Giga, Matsui and Sasayama [21] to all $1 < p < \frac{N+2}{N-2}$ without assuming the non-negativity of initial data.

The study of the blowup behavior of solutions of (1.3) is done through the introduction of the similarity variables

$$W_{T,a}(y,s) = (T-t)^{\frac{1}{p-1}}u(x,t), \qquad y = \frac{x-a}{\sqrt{T-t}}, \qquad s = -\log(T-t). \tag{1.5}$$

According to Giga and Kohn in [19] (see also [17,18]), we know that if a is a blowup point of u, then $W_{T,a}(y,s) \to \pm\kappa$ as $s \to +\infty$, uniformly on compact sets |y| ≤ R, where $\kappa = (p-1)^{-\frac{1}{p-1}}$. This estimate has been refined up to higher order by Filippas, Kohn and Liu [12], [13], and Herrero and Velázquez [23], [25], [39], [41], [40]. More precisely, they classified the behavior of $W_{T,a}(y,s)$ for |y| bounded, and showed that one of the following cases occurs (up to replacing u by −u if necessary):

• either there exists k ∈ {1, · · · , N} such that, after an orthogonal change of space coordinates,

$$W_{T,a}(y,s) = \kappa + \frac{\kappa}{2ps}\Big(k - \frac{1}{2}\sum_{j=1}^{k} y_j^2\Big) + o\Big(\frac{1}{s}\Big), \tag{1.6}$$

• or, for some even integer m ≥ 4,

$$W_{T,a}(y,s) = \kappa - e^{-\left(\frac{m}{2}-1\right)s}\Big(\sum_{|\alpha|=m} c_\alpha y^\alpha + o(1)\Big), \tag{1.7}$$

where the homogeneous multilinear form $\sum_{|\alpha|=m} c_\alpha y^\alpha$ is non-negative.

From Bricmont and Kupiainen [3] and Herrero and Velázquez [25], we have examples of initial data leading to each of the above-mentioned scenarios. Moreover, Herrero and Velázquez [24] proved that the asymptotic behavior (1.6) is generic in the one-dimensional case; they announced the same for the higher dimensional case, but never published it. Note also that the asymptotic profile described in (1.6) with k = N has been proved to be stable with respect to perturbations in the initial data or the nonlinearity by Merle and Zaag in [29] (see also Fermanian, Merle and Zaag [14], [16], and Nguyen and Zaag [33] for other proofs of the stability).

As for system (1.1), much less is known, in particular concerning the asymptotic behavior of the solution near singularities. As far as we know, the only available results on the blowup behavior are due to Andreucci, Herrero and Velázquez [1] and Zaag [43], where system (1.1) is considered with μ = 1. When μ = 1, according to Escobedo and Herrero [7] (see also [8]), any nontrivial positive solution of (1.1) which is defined for all x ∈ R^N must necessarily blow up in finite time if pq > 1 and $\frac{\max\{p,q\}+1}{pq-1} \ge \frac{N}{2}$, and both functions u(x, t) and v(x, t) must blow up simultaneously. See also [9] for the case of boundary value problems.

The study of blowup solutions for system (1.1) is done through the introduction of the following similarity variables, for all a ∈ R^N (a may or may not be a blowup point):

$$\Phi_{T,a}(y,s) = (T-t)^{\frac{p+1}{pq-1}}u(x,t), \quad \Psi_{T,a}(y,s) = (T-t)^{\frac{q+1}{pq-1}}v(x,t), \quad y = \frac{x-a}{\sqrt{T-t}}, \quad s = -\log(T-t). \tag{1.10}$$

From (1.1), $(\Phi_{T,a}, \Psi_{T,a})$ (or (Φ, Ψ) for simplicity) satisfy the following system: for all (y, s) ∈ R^N × [−log T, +∞),

$$\left\{\begin{array}{l} \partial_s \Phi = \Delta\Phi - \frac{1}{2}\,y\cdot\nabla\Phi - \frac{p+1}{pq-1}\,\Phi + |\Psi|^{p-1}\Psi,\\[1mm] \partial_s \Psi = \mu\Delta\Psi - \frac{1}{2}\,y\cdot\nabla\Psi - \frac{q+1}{pq-1}\,\Psi + |\Phi|^{q-1}\Phi. \end{array}\right. \tag{1.11}$$

Assuming (1.8) holds, and considering a blowup point a ∈ R^N of (u, v), we know from [1] that (recall that we are considering the case μ = 1):

• either $(\Phi_{T,a}, \Psi_{T,a})$ goes to (Γ, γ) exponentially fast,

• or there exists k ∈ {1, · · · , N} such that, after an orthogonal change of space coordinates and up to replacing (u, v) by (−u, −v) if necessary, $(\Phi_{T,a}, \Psi_{T,a})$ approaches the explicit profile (1.12), where (Γ, γ) is given by (1.9) and

$$c_1 = c_1(p,q) = \frac{2pq + p + q}{8pq(p+1)(q+1)}, \tag{1.13}$$

and the convergence takes place in $C^\ell_{loc}(\mathbb{R}^N)$ for any ℓ ≥ 0. In the first case, we have other profiles, some of them different from those occurring in the scalar case (1.3); see Theorems 3 and 4 in [1] for more details.
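For the reader's convenience, the constants (Γ, γ) of (1.9) can be characterized as the unique positive constant solution of (1.11); the following short derivation is our computation. A constant solution $(\Phi,\Psi) \equiv (\Gamma,\gamma)$ of (1.11) must satisfy
$$\gamma^p = \frac{p+1}{pq-1}\,\Gamma, \qquad \Gamma^q = \frac{q+1}{pq-1}\,\gamma,$$
whose unique positive solution is
$$\Gamma = \left[\Big(\frac{p+1}{pq-1}\Big)\Big(\frac{q+1}{pq-1}\Big)^{p}\right]^{\frac{1}{pq-1}}, \qquad \gamma = \left[\Big(\frac{q+1}{pq-1}\Big)\Big(\frac{p+1}{pq-1}\Big)^{q}\right]^{\frac{1}{pq-1}}.$$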
Note that the value of c₁ given in (1.13) was not made precise in [1], but we can justify it by explicit computations as in [1]. Beside the results already cited, let us mention the work by Zaag [43], where the author obtained a Liouville theorem for system (1.1) that improves the results of [1]. Based on this theorem, he was able to derive sharp estimates of asymptotic behaviors, as well as a localization property, for blowup solutions of (1.1). For other aspects of system (1.1), especially concerning the blowup set, see Friedman and Giga [11], Mahmoudi, Souplet and Tayachi [28], Souplet [37].

In this paper, we want to study the profile of the solution of (1.1) near blowup, and the stability of such behavior with respect to perturbations in initial data. More precisely, we prove the following result.

Theorem 1.1 (Existence of a blowup solution for system (1.1) with the description of its profile). Consider a ∈ R^N. There exists T > 0 such that system (1.1) has a solution (u, v) defined on R^N × [0, T) such that:

(i) u and v blow up in finite time T simultaneously at the single blowup point a and only there;

(ii) as t → T,

$$\Big\|(T-t)^{\frac{p+1}{pq-1}}u(\cdot,t) - \Phi^*\Big(\tfrac{\cdot - a}{\sqrt{(T-t)|\log(T-t)|}}\Big)\Big\|_{L^\infty(\mathbb{R}^N)} + \Big\|(T-t)^{\frac{q+1}{pq-1}}v(\cdot,t) - \Psi^*\Big(\tfrac{\cdot - a}{\sqrt{(T-t)|\log(T-t)|}}\Big)\Big\|_{L^\infty(\mathbb{R}^N)} \to 0, \tag{1.14}$$

where the profiles are given by

$$\Phi^*(z) = \Gamma\big(1 + b|z|^2\big)^{-\frac{p+1}{pq-1}}, \qquad \Psi^*(z) = \gamma\big(1 + b|z|^2\big)^{-\frac{q+1}{pq-1}}, \tag{1.15}$$

with

$$b = b(p,q,\mu) = \frac{(pq-1)(2pq+p+q)}{4pq(p+1)(q+1)(1+\mu)} > 0; \tag{1.16}$$

(iii) there exist u*, v* ∈ C(R^N \ {a}) such that (u(x, t), v(x, t)) → (u*(x), v*(x)) as t → T, uniformly on compact subsets of R^N \ {a}, with (u*, v*) singular at x = a.

Remark 1.6. The result of Theorem 1.1 holds for more general nonlinearities than (1.1), namely when the nonlinear terms in (1.1) are replaced by |v|^{p−1}v + f(u, v, ∇u, ∇v) and |u|^{q−1}u + g(u, v, ∇u, ∇v), for suitable lower-order perturbations f and g. Note that in the setting (1.10), the terms f and g turn out to be exponentially small. Therefore, a perturbation of our method works, although we need in addition some parabolic regularity results in order to handle the nonlinear gradient terms (see [10] and [38] for such parabolic regularity techniques). For simplicity, we only give the proof when the nonlinear terms are exactly given by |v|^{p−1}v and |u|^{q−1}u.

Remark 1.7. Our method can be naturally extended to a system of m equations of the form (1.17), where p_i > 1 and μ_i > 0 for i = 1, 2, · · · , m. Up to a complication in parameters, we suspect that our analysis yields the existence of a solution for (1.17) which blows up in finite time T only at one blowup point a ∈ R^N and satisfies, for i = 1, 2, · · · , m, an asymptotic behavior analogous to (1.14).

As a consequence of our techniques, we show the stability of the constructed solution with respect to perturbations in initial data. More precisely, we have the following result.

Theorem 1.8 (Stability of the blowup profile (1.15)). Let (û₀, v̂₀) be initial data for system (1.1) such that the corresponding solution (û, v̂) blows up in finite time T̂ at only one blowup point â, and (û(x, t), v̂(x, t)) satisfies (1.14) with T = T̂ and a = â. Then there exists a neighborhood W₀ of (û₀, v̂₀) in L∞(R^N) × L∞(R^N) such that for any (u₀, v₀) ∈ W₀, system (1.1) has a unique solution (u, v) with initial data (u₀, v₀) which blows up in finite time T(u₀, v₀) at only one blowup point a(u₀, v₀). Moreover, parts (ii) and (iii) of Theorem 1.1 are satisfied, and

$$T(u_0, v_0) \to \hat T, \qquad a(u_0, v_0) \to \hat a \qquad \text{as } (u_0, v_0) \to (\hat u_0, \hat v_0).$$

Remark 1.9. With the stability result, we expect the blowup profile (1.15) to be generic, i.e. that there exists an open, everywhere dense set U₀ of initial data whose corresponding solutions to (1.1) either converge to the steady state (1.9) or blow up in finite time at a single point according to the asymptotic behavior (1.14). In particular, we suspect that a numerical simulation of (1.1) should lead to the profile (1.15). To our knowledge, the only available proof of genericity is given by Herrero and Velázquez [24] for the case of equation (1.3) in one space dimension.
As in [24], a first step towards the genericity of the profile (1.15) is to classify all possible asymptotic behaviors of blowup solutions of (1.1); this was established in [1] (see also [43]) in the case μ = 1.

Let us now give the main idea of the proof of Theorem 1.1. Our proof uses some ideas developed by Merle and Zaag [29] and Bricmont and Kupiainen [3] for equation (1.3). This kind of method has been proved to be successful in various situations, including parabolic and hyperbolic equations. For parabolic equations, we would like to mention the work by Masmoudi and Zaag [30] (see also the earlier work by Zaag [42]) for the complex Ginzburg-Landau equation with no gradient structure,

$$\partial_t u = (1 + i\beta)\Delta u + (1 + i\delta)|u|^{p-1}u. \tag{1.18}$$

There are also the works by Nguyen and Zaag [32] for a logarithmically perturbed version of (1.3) (see also Ebde and Zaag [10] for a weakly perturbed version of (1.3)), by Nouaili and Zaag [31] for a non-variational complex-valued semilinear heat equation, and the recent work by Tayachi and Zaag [38] for the nonlinear heat equation with a critical power nonlinear gradient term,

$$\partial_t u = \Delta u + |u|^{p-1}u + \mu|\nabla u|^{\frac{2p}{p+1}}, \qquad p > 3, \ \mu > 0.$$

When p → +∞, this equation formally reduces to an equation with exponential nonlinearity, which is studied in [22]. There are also the constructions of multi-solitons for the semilinear wave equation in one space dimension by Côte and Zaag [5], for wave maps by Raphaël and Rodnianski [34], for Schrödinger maps by Merle, Raphaël and Rodnianski [27], for the critical harmonic heat flow by Schweyer [36], and for the two-dimensional Keller-Segel equation by Raphaël and Schweyer [35] and Ghoul and Masmoudi [20].

One may think that the method used in [29] and [3] should work the same way for system (1.1), perhaps with some technical complications. This is not the case, since the fact that μ ≠ 1 breaks any symmetry in the problem and makes the diffusion operator associated to (1.1) not self-adjoint. In other words, the method we present here is not a simple perturbation of the treatment of equation (1.3) in [29] and [3].

More precisely, our proof relies on the understanding of the dynamics of the selfsimilar version (1.11) around the profile (1.15). In the setting (1.10), constructing a solution of (1.1) satisfying (1.14) is equivalent to constructing a solution (Φ, Ψ) of (1.11) converging to the profile (Φ*, Ψ*) of (1.15) as s → +∞, i.e. such that the difference (Λ, Υ) between (Φ, Ψ) and a suitable approximate profile goes to zero (see (3.2) below). Satisfying such a property is guaranteed by the condition that (Λ(s), Υ(s)) belongs to some set V_A(s) ⊂ L∞(R^N) × L∞(R^N) which shrinks to 0 as s → +∞ (see Definition 4.1 below for an example). Since the linearization of system (1.11) around the profile (Φ*, Ψ*) gives N + 1 positive modes, zero modes, and an infinite-dimensional negative part (see Lemma 3.2 and Remark 3.3), we can use the method of [29] and [3], which relies on two arguments:

- The use of the bounding effect of the heat kernel (see Proposition 5.3) to reduce the problem of the control of (Λ(s), Υ(s)) in V_A(s) to the control of its positive modes. Note that the linearized operator around the profile, that is H + M defined in (3.4) and (3.5), is not self-adjoint. This is one of the major difficulties arising in this paper.

- The control of the positive modes thanks to a topological argument based on index theory.

In addition to the difficulties concerning the linearized operator mentioned above, we also have to deal with the number of parameters in the problem (p, q and μ), leading to actual complications in the analysis. According to the general framework of [29], some crucial modifications are needed.
In particular, we have to overcome the following challenges:

(i) Finding the profile (Φ*, Ψ*) is not obvious, in particular determining the value of b given by (1.16), which is crucial in many algebraic identities in the rigorous analysis. See Section 2 for a formal analysis justifying this profile. We emphasize that the formal approach actually provides us with a suitable profile to linearize around (see (2.8) and (2.9)).

(ii) Defining the shrinking set V_A (see Definition 4.1) to trap the solution. Note that our definition of V_A is different from that of [29]. Here, we follow the idea of [30] to find such a suitable definition of V_A. In particular, it comes from many relations in our proof, one of them related to the dynamics of the linearized problem stated in Proposition 5.3.

(iii) A good understanding of the dynamics of the linearized operator H + M + V of equation (3.3) around the suitable profile (ϕ, ψ) given in (2.8) and (2.9) is needed, according to the definition of the shrinking set V_A. Because the behavior of the potential V defined in (3.6) is different inside and outside the blowup region, the effect of the linearized operator is considered according to this region. Outside the blowup region, the linear operator H + M + V behaves as an operator with fully negative spectrum, which greatly simplifies the analysis in this region (see Section 5.2.4). Inside the blowup region, the potential V is considered as a perturbation of the effect of H + M; therefore, a good study of the spectral properties of H + M is needed. Note that the linear operator H + M is not diagonal, but it can be diagonalized (see Lemma 3.2). Using this diagonalization, we then define the projections on subspaces of the spectrum of H + M (see Lemma 3.4).

For the proof of the single blowup point (part (i) of Theorem 1.1), we use part (ii) and an extension of a result of [19], namely a no-blowup-under-some-threshold criterion for parabolic inequalities (see Proposition 4.7). The derivation of the final profile (u*(x), v*(x)) (part (iii) of Theorem 1.1) follows from part (ii) by the same argument as in [42] and [26].

The rest of the paper is organized as follows:

- In Section 2, we first explain formally how we obtain the profile (Φ*, Ψ*) and give a suggestion for a suitable profile to linearize around.
- In Section 3, we give a formulation of the problem in order to justify the formal argument. We also give the spectral properties of the linear operator H + M, as well as the definition of the projections on the eigenspaces of H + M.
- In Section 4, we give all the arguments of the proof of Theorem 1.1, assuming technical results which are left to the next section.
- Section 5 is central in our analysis. It is devoted to the study of the dynamics of the linearized problem. In particular, we prove Proposition 5.3, from which we reduce the problem to a finite-dimensional one.
- In Section 6, we give the proof of Theorem 1.8. Since it is a consequence of the existence proof (part (ii) of Theorem 1.1), thanks to a geometrical interpretation of the blowup parameters, whose number equals the dimension of the finite-dimensional problem, we only explain the main ideas of the proof there.

2. A formal analysis

In this section, we give a formal analysis leading to the asymptotic behaviors described in (1.14), by means of matched asymptotics. For simplicity, we shall look for (u, v), a positive solution of (1.1), in the one-dimensional case.
By translation invariance in space, we assume that (u, v) blows up in finite time T > 0 at the origin, and we write (Φ, Ψ) instead of (Φ_{T,a}, Ψ_{T,a}) for short. From the transformation (1.10), the behavior (1.14) is equivalent to showing that, as s → +∞,

$$\sup_{y\in\mathbb{R}}\left(\left|\Phi(y,s) - \Gamma\Big(1 + \frac{b y^2}{s}\Big)^{-\frac{p+1}{pq-1}}\right| + \left|\Psi(y,s) - \gamma\Big(1 + \frac{b y^2}{s}\Big)^{-\frac{q+1}{pq-1}}\right|\right) \to 0, \tag{2.1}$$

where Γ, γ are defined in (1.9) and b is given in (1.16). We use here the method of [30], developed for the complex Ginzburg-Landau equation, which was slightly adapted from the method of Berger and Kohn [2] for equation (1.3). Following the approach of [30], we formally search for a regular solution (Φ, Ψ) of system (1.11) of the form

$$\Phi(y,s) = \Phi_0(z) + \frac{1}{s}\Phi_1(z) + \cdots, \qquad \Psi(y,s) = \Psi_0(z) + \frac{1}{s}\Psi_1(z) + \cdots, \qquad z = \frac{y}{\sqrt{s}}. \tag{2.2}$$

Injecting (2.2) into (1.11) and comparing the terms of order 1/s^j with j = 0, 1, · · ·, we obtain, for j = 0,

$$-\frac{z}{2}\Phi_0' - \frac{p+1}{pq-1}\Phi_0 + \Psi_0^p = 0, \qquad -\frac{z}{2}\Psi_0' - \frac{q+1}{pq-1}\Psi_0 + \Phi_0^q = 0, \tag{2.3}$$

and, for j = 1,

$$-\frac{z}{2}\Phi_1' - \frac{p+1}{pq-1}\Phi_1 + p\Psi_0^{p-1}\Psi_1 = F(z), \qquad -\frac{z}{2}\Psi_1' - \frac{q+1}{pq-1}\Psi_1 + q\Phi_0^{q-1}\Phi_1 = G(z), \tag{2.4}$$

where F(z) = −Φ₀'' − (z/2)Φ₀' and G(z) = −μΨ₀'' − (z/2)Ψ₀'. Solving system (2.3) equipped with the data Φ₀(0) = Γ and Ψ₀(0) = γ at zero yields

$$\Phi_0(z) = \Gamma(1 + bz^2)^{-\frac{p+1}{pq-1}}, \qquad \Psi_0(z) = \gamma(1 + bz^2)^{-\frac{q+1}{pq-1}}, \tag{2.5}$$

for some integration constant b, where (Γ, γ) is given by (1.9). Since we want (Φ, Ψ) to be regular, we impose the condition b > 0.

Let us now determine the value of b in (2.5). To do so, we first evaluate F and G at z = 0 by using (2.5), finding F(0) = 2bΓ(p+1)/(pq−1) and G(0) = 2bμγ(q+1)/(pq−1); using the definition of (Γ, γ) given in (1.9), the resulting system for (Φ₁(0), Ψ₁(0)) can be simplified. Let us now expand (Φ₁, Ψ₁) in powers of z, namely

$$\Phi_1(z) = \sum_{j\ge 0} d_j z^j, \qquad \Psi_1(z) = \sum_{j\ge 0} e_j z^j.$$

Injecting these forms into (2.4) and expanding F and G in powers of z, we obtain at order z an equation of the form $(\Gamma\gamma - (pq-1)\Gamma^q\gamma^p)\,e_1 =: A e_1 = 0$, together with a companion equation linking d₁ and e₁. A straightforward computation gives A < 0; hence d₁ = e₁ = 0. For the terms of order z² in the expansion of F and G, we obtain a linear system in d₂, e₂ and b. Multiplying the second equation by $\frac{p(q+1)}{q(p+1)}$ and combining with the first, we find that the coefficients of d₂ and e₂ disappear, leading to

$$b = \frac{(pq-1)(2pq+p+q)}{4pq(p+1)(q+1)(1+\mu)},$$

which is the desired result. Note that our computation fits with the result of the case μ = 1, obtained by combining (2.5), (1.12) and (1.13).

3. Formulation of the problem

In this section, we give a formulation for the proof of Theorem 1.1. We will only give the proof in the one-dimensional case (N = 1) for simplicity, but the proof remains the same in higher dimensions N ≥ 2. We want to prove the existence of suitable initial data (u₀, v₀) such that the corresponding solution (u, v) of system (1.1) blows up in finite time T only at one point a ∈ R and satisfies (1.14). By translation invariance of equation (1.1), we may assume that a = 0. Through the transformation (1.10), we want to find s₀ > 0 and (Φ(y, s₀), Ψ(y, s₀)) such that the solution (Φ, Ψ) of system (1.11) with this initial data satisfies

$$\lim_{s\to+\infty}\Big\|\big(\Phi(\cdot,s),\Psi(\cdot,s)\big) - \big(\Phi^*(\cdot/\sqrt{s}),\Psi^*(\cdot/\sqrt{s})\big)\Big\|_{L^\infty(\mathbb{R})} = 0,$$

where Φ* and Ψ* are given by (1.15). Introducing

$$(\Lambda, \Upsilon) = (\Phi, \Psi) - (\varphi, \psi), \tag{3.2}$$

where (ϕ, ψ) is the approximate profile suggested by the formal analysis of Section 2 (see (2.8) and (2.9)), the problem is then reduced to constructing functions (Λ, Υ) such that $\lim_{s\to+\infty}\|(\Lambda,\Upsilon)(\cdot,s)\|_{L^\infty(\mathbb{R})} = 0$, and (Λ, Υ) satisfies the following system:

$$\partial_s \begin{pmatrix}\Lambda\\ \Upsilon\end{pmatrix} = (\mathcal{H} + \mathcal{M} + V)\begin{pmatrix}\Lambda\\ \Upsilon\end{pmatrix} + \begin{pmatrix}F_1\\ F_2\end{pmatrix} + \begin{pmatrix}R_1\\ R_2\end{pmatrix}, \tag{3.3}$$

where

$$\mathcal{H} = \begin{pmatrix}\mathcal{L}_1 & 0\\ 0 & \mathcal{L}_\mu\end{pmatrix}, \qquad \mathcal{L}_\eta = \eta\,\partial_y^2 - \frac{y}{2}\,\partial_y, \tag{3.4}$$

$$\mathcal{M} = \begin{pmatrix}-\frac{p+1}{pq-1} & p\gamma^{p-1}\\ q\Gamma^{q-1} & -\frac{q+1}{pq-1}\end{pmatrix}, \qquad V = \begin{pmatrix}0 & p\,(\psi^{p-1} - \gamma^{p-1})\\ q\,(\varphi^{q-1} - \Gamma^{q-1}) & 0\end{pmatrix}, \tag{3.5, 3.6}$$

and (F₁, F₂) gathers the terms which are at least quadratic in (Λ, Υ), while (R₁, R₂) is the error term generated by (ϕ, ψ).

Note that the term (F₁, F₂) is built to be quadratic in the inner region |y| ≤ 2K√s. Indeed, for all K > 1 and s ≥ 1,

$$\sup_{|y|\le 2K\sqrt{s}} \big(|F_1| + |F_2|\big) \le C\big(|\Lambda|^2 + |\Upsilon|^2\big).$$

Note also that the term (R₁, R₂) measures the defect preventing (ϕ, ψ) from being an exact solution of (1.11). Since (ϕ, ψ) is an approximate solution of (1.11), one easily checks that $\|(R_1, R_2)(\cdot, s)\|_{L^\infty(\mathbb{R})} = O(1/s)$ as s → +∞. Therefore, since we would like to make (Λ, Υ) go to zero as s → +∞ in L∞(R^N) × L∞(R^N), the dynamics of (3.3) are influenced by the asymptotic limit of its linear term, H + M. From the definition (3.6), we see that the potential V(y, s) has two fundamental properties that will strongly influence our analysis:

(i) V(·, s) → 0 as s → +∞ in L²_{ρ₁}(R) × L²_{ρ_μ}(R), where the weight ρ_η is defined by

$$\rho_\eta(y) = \frac{e^{-\frac{y^2}{4\eta}}}{\sqrt{4\pi\eta}}, \qquad \eta \in \{1, \mu\}. \tag{3.10}$$

In particular, the effect of V inside the blowup region, or inner region |y| ≤ K√s, will be a perturbation of the effect of H + M.
(ii) Outside the blowup region, or outer region |y| ≥ K√s, we have the following property: for all ε > 0, there exist K_ε > 0 and s_ε > 0 such that

$$\sup_{s \ge s_\varepsilon,\; |y| \ge K_\varepsilon\sqrt{s}} \Big( \big|p\,\psi^{p-1}(y,s)\big| + \big|q\,\varphi^{q-1}(y,s)\big| \Big) \le \varepsilon.$$

In other words, outside the blowup region, the linear operator H + M + V behaves as

$$\mathcal{H} + \begin{pmatrix}-\frac{p+1}{pq-1} & \varepsilon_1\\ \varepsilon_2 & -\frac{q+1}{pq-1}\end{pmatrix}, \qquad |\varepsilon_1|, |\varepsilon_2| \le \varepsilon.$$

Given that the spectrum of H is nonpositive (see (3.19) below) and that the above matrix has negative eigenvalues for ε₁ and ε₂ small, we see that H + M + V behaves as an operator with fully negative spectrum, which greatly simplifies the analysis in that region.

Since the behavior of the potential V inside and outside the blowup region is different, we will consider the dynamics for |y| ≥ K√s and |y| ≤ 2K√s separately, for some K to be fixed large. Let us consider a non-increasing cut-off function χ₀ ∈ C∞₀([0, +∞)), with supp(χ₀) ⊂ [0, 2] and χ₀ ≡ 1 on [0, 1], and introduce

$$\chi(y,s) = \chi_0\Big(\frac{|y|}{K\sqrt{s}}\Big), \tag{3.11}$$

where K is chosen large enough so that various technical estimates hold. We define

$$\begin{pmatrix}\Lambda_e\\ \Upsilon_e\end{pmatrix} = (1-\chi)\begin{pmatrix}\Lambda\\ \Upsilon\end{pmatrix}, \tag{3.12}$$

so that (Λ_e, Υ_e) is the part of (Λ, Υ) for |y| ≥ K√s. As announced a few lines above, and as we will see in Section 5.2.4, the spectrum of the linear operator of the equation satisfied by (Λ_e, Υ_e) is negative, which makes the control of ‖Λ_e(s)‖_{L∞(R)} and ‖Υ_e(s)‖_{L∞(R)} easy.

While the control of the outer part is easy, this is not the case for the part of (Λ, Υ) for |y| ≤ 2K√s. In fact, inside the blowup region |y| ≤ 2K√s, the potential V can be seen as a perturbation of the effect of H + M, whose spectrum has two positive eigenvalues and a zero eigenvalue, in addition to infinitely many negative ones (see Lemma 3.2 below). Therefore, we have to expand (Λ, Υ) inside the blowup region with respect to these eigenvalues in order to control ‖Λ(s)‖_{L∞(|y|≤2K√s)} and ‖Υ(s)‖_{L∞(|y|≤2K√s)}. To do so, we need to find a basis in which H + M is diagonal, or at least in Jordan block form. Since the operator H is built from L₁ and L_μ, let us first recall well-known spectral properties of the operator L_η, where η ∈ {1, μ}.

• Spectral properties of L_η. Given η > 0, let us consider the Hilbert space

$$L^2_{\rho_\eta}(\mathbb{R}^N, \mathbb{R}) = \Big\{ v : \int_{\mathbb{R}^N} |v(y)|^2 \rho_\eta(y)\,dy < +\infty \Big\}, \tag{3.13}$$

where ρ_η is defined by (3.10). Note that we can write L_η in the divergence form

$$\mathcal{L}_\eta v = \frac{\eta}{\rho_\eta}\,\mathrm{div}\big(\rho_\eta \nabla v\big),$$

and that L_η is self-adjoint with respect to the weight ρ_η. Indeed, for any v and w in L²_{ρ_η}(R^N, R),

$$\int_{\mathbb{R}^N} v\, \mathcal{L}_\eta w\, \rho_\eta\, dy = \int_{\mathbb{R}^N} w\, \mathcal{L}_\eta v\, \rho_\eta\, dy = -\eta \int_{\mathbb{R}^N} \nabla v \cdot \nabla w\, \rho_\eta\, dy. \tag{3.14}$$

Let us introduce, for each α = (α₁, · · · , α_N) ∈ N^N, the polynomial

$$\tilde h_\alpha(y) = c_\alpha \prod_{i=1}^N H_{\alpha_i}\Big(\frac{y_i}{2\sqrt{\eta}}\Big),$$

where H_n is the standard one-dimensional Hermite polynomial, i.e.

$$H_n(x) = (-1)^n e^{x^2}\frac{d^n}{dx^n}e^{-x^2}, \tag{3.15}$$

and c_α ∈ R is chosen so that the term of highest degree in h̃_α is $\prod_{i=1}^N y_i^{\alpha_i}$. In the one-dimensional case, we have $\tilde h_n(y) = \eta^{n/2} H_n(y/(2\sqrt{\eta}))$; for example, h̃₀(y) = 1, h̃₁(y) = y and h̃₂(y) = y² − 2η. Each h̃_α is an eigenfunction of L_η, with L_η h̃_α = −(|α|/2) h̃_α. The family of eigenfunctions of L_η constitutes an orthogonal basis of L²_{ρ_η}(R^N, R), in the sense that for any different α and β in N^N,

$$\int_{\mathbb{R}^N} \tilde h_\alpha\, \tilde h_\beta\, \rho_\eta\, dy = 0, \tag{3.16}$$

and that any v ∈ L²_{ρ_η}(R^N, R) can be expanded as

$$v = \sum_{\alpha \in \mathbb{N}^N} v_\alpha\, \tilde h_\alpha, \qquad v_\alpha = \frac{\int v\,\tilde h_\alpha\,\rho_\eta\,dy}{\int \tilde h_\alpha^2\,\rho_\eta\,dy}. \tag{3.17}$$

Remark 3.1. We remark that for any polynomial P_n(y) of degree n, we have by (3.17)

$$\int_{\mathbb{R}^N}\tilde h_\alpha(y)\,P_n(y)\,\rho_\eta(y)\,dy = 0 \quad \text{for all } |\alpha| > n.$$

• Spectral properties of H. Let us consider the functional space $L^2_\rho(\mathbb{R}^N, \mathbb{R}^2) = L^2_{\rho_1}(\mathbb{R}^N, \mathbb{R}) \times L^2_{\rho_\mu}(\mathbb{R}^N, \mathbb{R})$. If we introduce, for each α ∈ N^N,

$$h_\alpha = \begin{pmatrix} a_\alpha \prod_i H_{\alpha_i}(y_i/2)\\ 0\end{pmatrix}, \qquad \hat h_\alpha = \begin{pmatrix} 0\\ \hat a_\alpha \prod_i H_{\alpha_i}\big(y_i/(2\sqrt{\mu})\big)\end{pmatrix}, \tag{3.18}$$

where H_n is defined by (3.15), and a_α and â_α are constants chosen so that the terms of highest degree in h_α and ĥ_α are $\prod_i y_i^{\alpha_i}$, then {h_α, ĥ_α}_{α∈N^N} is a family of eigenfunctions of H in L²_ρ(R^N, R²):

$$\mathcal{H}\, h_\alpha = -\frac{|\alpha|}{2}\, h_\alpha, \qquad \mathcal{H}\, \hat h_\alpha = -\frac{|\alpha|}{2}\, \hat h_\alpha. \tag{3.19}$$

• Spectral properties of H + M. As announced at the beginning of Section 3, we switch back to the case N = 1 for simplicity. Of course, our proof remains valid in the case N ≥ 2, though with some complications in the notation. We want to find a basis in which H + M is diagonal, or at least in Jordan block form. More precisely, we have the following:

Lemma 3.2 (Spectral properties of H + M). For all n ∈ N, there exist polynomials f_n, g_n, f̃_n and g̃_n of degree n such that

$$(\mathcal{H} + \mathcal{M})\begin{pmatrix}f_n\\ g_n\end{pmatrix} = \Big(1 - \frac{n}{2}\Big)\begin{pmatrix}f_n\\ g_n\end{pmatrix} \tag{3.20}$$

and

$$(\mathcal{H} + \mathcal{M})\begin{pmatrix}\tilde f_n\\ \tilde g_n\end{pmatrix} = \Big(-\frac{(p+1)(q+1)}{pq-1} - \frac{n}{2}\Big)\begin{pmatrix}\tilde f_n\\ \tilde g_n\end{pmatrix}. \tag{3.21}$$

Moreover, (f_n, g_n) and (f̃_n, g̃_n) can be expanded as

$$\begin{pmatrix}f_n\\ g_n\end{pmatrix} = \sum_{0\le 2j\le n}\Big[d_{n,n-2j}\begin{pmatrix}h_{n-2j}\\ 0\end{pmatrix} + e_{n,n-2j}\begin{pmatrix}0\\ \hat h_{n-2j}\end{pmatrix}\Big], \tag{3.22}$$

$$\begin{pmatrix}\tilde f_n\\ \tilde g_n\end{pmatrix} = \sum_{0\le 2j\le n}\Big[\tilde d_{n,n-2j}\begin{pmatrix}h_{n-2j}\\ 0\end{pmatrix} + \tilde e_{n,n-2j}\begin{pmatrix}0\\ \hat h_{n-2j}\end{pmatrix}\Big], \tag{3.23}$$

where the coefficients d_{n,n−2j}, e_{n,n−2j}, d̃_{n,n−2j}, ẽ_{n,n−2j} depend on the parameters p, q and μ; in particular, explicit values of the leading and subleading coefficients are given in (3.24) and (3.25).

Remark 3.3.
The spectrum of H + M thus contains two positive eigenvalues λ₀ = 1 and λ₁ = 1/2, corresponding to the eigenvectors (f₀, g₀) and (f₁, g₁); a zero eigenvalue λ₂ = 0, corresponding to the eigenvector (f₂, g₂); and all remaining eigenvalues, namely 1 − n/2 for n ≥ 3 and −(p+1)(q+1)/(pq−1) − n/2 for n ≥ 0, are negative.

Proof. For each n ∈ N, we want to find (F_n, G_n) in the form of polynomials of degree n such that

$$(\mathcal{H} + \mathcal{M})\begin{pmatrix}F_n\\ G_n\end{pmatrix} = \lambda \begin{pmatrix}F_n\\ G_n\end{pmatrix}.$$

Let us assume that

$$\begin{pmatrix}F_n\\ G_n\end{pmatrix} = \sum_{i=0}^{n}\begin{pmatrix}a_{n,n-i}\\ b_{n,n-i}\end{pmatrix} y^{n-i}, \qquad a_{n,n} \ne 0, \; b_{n,n} \ne 0.$$

Since the terms of highest degree of h_m and ĥ_m are y^m, and h_m and ĥ_m are even (respectively odd) polynomials if m is an even (respectively odd) integer, we can rewrite the expression of (F_n, G_n) in terms of (h_j, 0) and (0, ĥ_j) for j = 0, 1, · · · , n, as stated in (3.22) and (3.23). In order to make precise the values of (d_{n,n−2}, e_{n,n−2}) and (d̃_{n,n−2}, ẽ_{n,n−2}), let us compute (a_{n,n−2}, b_{n,n−2}).

- For λ = λ₊, we use (3.30) with i = 2 to obtain an explicit expression for (a_{n,n−2}, b_{n,n−2}), with the prefactor −n(n−1). Recalling from the definition (3.18) that h_n(y) = y^n − n(n−1)y^{n−2} + · · · and ĥ_n(y) = y^n − n(n−1)μ y^{n−2} + · · ·, we deduce from (3.22) that

$$\begin{pmatrix}d_{n,n-2}\\ e_{n,n-2}\end{pmatrix} = n(n-1)\begin{pmatrix}d_{n,n}\\ \mu\, e_{n,n}\end{pmatrix} + \begin{pmatrix}a_{n,n-2}\\ b_{n,n-2}\end{pmatrix};$$

hence (3.25) follows after a straightforward calculation. This concludes the proof of Lemma 3.2.

For the sake of controlling (Λ, Υ) in the region |y| ≤ 2K√s, we will expand (Λ, Υ) with respect to the family {(h_n, 0), (0, ĥ_n)}_{n∈N}. We start with the identity

$$\begin{pmatrix}\Lambda\\ \Upsilon\end{pmatrix}(y,s) = \sum_{n\le M}\Big[Q_n(s)\begin{pmatrix}h_n\\ 0\end{pmatrix} + \hat Q_n(s)\begin{pmatrix}0\\ \hat h_n\end{pmatrix}\Big] + \begin{pmatrix}\Lambda_-\\ \Upsilon_-\end{pmatrix}(y,s) \tag{3.32}$$

(note that this identity is precisely the definition of (Λ₋, Υ₋)), where M is a fixed even integer, chosen large enough in terms of a norm of the matrix M defined in (3.5), see (3.33); in view of the definition (3.5) of M, any norm for (2 × 2) matrices is indeed suitable here. As we will show in Section 5.2.3, the choice of M is crucial and allows us to successfully use a Gronwall inequality in the control of the infinite-dimensional part (Λ₋, Υ₋). Moreover:

• Q_n(s) and Q̂_n(s) are the projections of (Λ, Υ) on (h_n, 0) and (0, ĥ_n), respectively, defined through the L²_ρ expansion (3.17);

• Π₋,M is the projector on the subspace of L²_{ρ₁} × L²_{ρ_μ} where the spectrum of H is lower than (1 − M)/2; note that, for all s, (Λ₋, Υ₋)(s) = Π₋,M (Λ, Υ)(s);

• we also introduce Π₊,M = Id − Π₋,M and the complementary part (Λ₊, Υ₊) = Π₊,M (Λ, Υ), which is called the finite-dimensional part of (Λ, Υ). We will expand it with respect to the basis of eigenfunctions of H + M computed in Lemma 3.2, namely the family {(f_n, g_n), (f̃_n, g̃_n)}_{n≤M}, as

$$\begin{pmatrix}\Lambda_+\\ \Upsilon_+\end{pmatrix}(y,s) = \sum_{n\le M}\Big[\theta_n(s)\begin{pmatrix}f_n\\ g_n\end{pmatrix}(y) + \tilde\theta_n(s)\begin{pmatrix}\tilde f_n\\ \tilde g_n\end{pmatrix}(y)\Big],$$

where θ_n(s) and θ̃_n(s) are the projections on (f_n, g_n) and (f̃_n, g̃_n), respectively. This is possible since, from Lemma 3.2, we can express θ_n(s) and θ̃_n(s) in terms of Q_n(s) and Q̂_n(s) as in (3.39) and (3.40), where the coefficients A_{n+2j,n}, B_{n+2j,n}, Ã_{n+2j,n} and B̃_{n+2j,n}, for j = 0, 1, 2, · · ·, depend on p, q and μ; their leading values are given in (3.41) and (3.42).

The change-of-basis matrix is "lower triangular", in the sense that it can be expressed in terms of (2 × 2) blocks (see (3.22) and (3.23)) as in (3.43) and (3.44), where 0 denotes the (2 × 2) zero matrix. By extracting the (2 × 2) blocks in (3.44), we derive from (3.32) the expressions (3.39) and (3.40). It remains to compute (3.41) and (3.42) in order to complete the proof of Lemma 3.4. To this end, we note from (3.44) that

$$\begin{pmatrix}A_{n,n} & \tilde A_{n,n}\\ B_{n,n} & \tilde B_{n,n}\end{pmatrix} = T, \qquad \begin{pmatrix}A_{n+2,n} & \tilde A_{n+2,n}\\ B_{n+2,n} & \tilde B_{n+2,n}\end{pmatrix} = G_{n+2,n} = -T\, D_{n+2,n}\, T,$$

and the conclusion follows from (3.32) and (3.38). Note that the decomposition (3.45) is unique.

4. Proof of the existence result assuming some technical lemmas

This section is devoted to the proof of Theorem 1.1. We will first show the existence of a solution (Λ, Υ) of (3.3) satisfying (4.1), which concludes part (ii) of Theorem 1.1 (though with no estimate of the error). The proof of parts (i) and (iii) then follows from part (ii). We will give all the arguments of the proof without technical details, which are left to the next section. Hereafter, we denote by C a generic positive constant depending only on p, q, μ and the constant K introduced in (3.11).
Given A ≥ 1 and s₀ ≥ e, we consider initial data for system (3.3), depending on two real parameters d₀ and d₁, of the form (4.2), where (f_i, g_i), i = 0, 1, are defined by (3.22) and χ is introduced in (3.11). The solution of system (3.3) with initial data (4.2) will be denoted by (Λ(y, s), Υ(y, s))_{d₀,d₁,s₀,A}, or by (Λ(y, s), Υ(y, s)) when there is no ambiguity. Our aim is to show that if A is fixed large enough, then, for s₀ fixed large enough depending on A, we can fix the parameters (d₀, d₁) ∈ [−2, 2]² so that the solution (Λ, Υ)_{d₀,d₁,s₀,A} is defined for all s ≥ s₀ and converges to (0, 0) as s → +∞ in L^∞(R), meaning that (4.1) holds.

According to the decomposition (3.45) and the definition (3.12), it is enough to control the solution in a shrinking set V_A(s) defined as in Definition 4.1, where Λ_e, Υ_e are defined by (3.12) and Λ₋, Υ₋, θ_n, θ̃_n are defined as in (3.45). Let us first make sure that the initial data (4.2) belongs to V_A(s₀). In particular, we claim the following:
(ii) For all (d₀, d₁) ∈ D_{s₀}, (Λ₀, Υ₀) ∈ V_A(s₀) with strict inequalities except for θ₀ and θ₁, in the sense that Λ_{0,e} = Υ_{0,e} = 0, … .

If s*(d₀, d₁) = +∞ for some (d₀, d₁) ∈ D_{s₀}, then the proof is complete. Otherwise, we argue by contradiction and suppose that s*(d₀, d₁) < +∞ for every (d₀, d₁) ∈ D_{s₀}. By continuity and the definition of s*, we note that the solution at time s* is on the boundary of V_A(s*). Thus, at least one of the inequalities in the definition of V_A(s*) is an equality. In the following proposition, we show that this can happen only for the two components θ₀(s*) and θ₁(s*). Precisely, we have the following result:
(i) (Reduction to a finite-dimensional problem) We have … . As a matter of fact, one can check that if … .
(ii) (Transverse outgoing crossing) There exists δ₀ > 0 such that … .

Remark 4.6. In N dimensions, θ₀ ∈ R and θ₁ ∈ R^N. In particular, the finite-dimensional problem is of dimension N + 1. This is why, in the initial data (4.2), one has to take d₀ ∈ R and d₁ ∈ R^N.

The proof of Proposition 4.5 is a direct consequence of the dynamics of system (3.3). The idea is to project system (3.3) on the different components of the decompositions (3.45) and (3.12). However, because of the number of parameters in our problem (p, q and μ) and of the coordinates in (3.45), the computations become quite long. That is why a whole section (Section 5.2) is devoted to the proof of Proposition 4.5. Let us now assume Proposition 4.5 and continue the proof of Proposition 4.2. Let A ≥ A₃ and s₀ ≥ max{s_{0,2}, s_{0,3}}. From part (i) of Proposition 4.5, it follows that … . Hence, we may define the rescaled flow Θ at s = s* as follows: … .

If x₀ = 0, then we see from (1.14) that … . Hence, u and v both blow up at time T at x₀ = 0. It remains to show that if x₀ ≠ 0, then x₀ is not a blowup point. The following result from Giga and Kohn [19] allows us to conclude: if … for some x₀ ∈ R and r > 0, then (u, v) does not blow up at (x₀, T). The result applies here because … does the same, and because the semigroup and the fundamental solution generated by ηΔ with η ∈ {1, μ} have the same regularizing effect, independently of η. Indeed, we see from (1.14) that, as t → T, … ; hence, x₀ is not a blowup point of (u, v) by Proposition 4.7. This concludes the proof of part (i) of Theorem 1.1. We now give the proof of part (iii) of Theorem 1.1.
Using the technique of Merle [26], we derive the existence of a blowup profile (u*, v*). The profile (u*, v*) is singular at the origin, as we will see shortly after deriving its equivalent as x → 0. Since our argument is exactly the same as in Zaag [42], used for equation (1.18) with β = 0 (no new idea is needed), we just give the key arguments and kindly refer the reader to Section 4 in [42] for more details. Consider K₀ > 0, to be fixed large enough later. If x₀ ≠ 0 and |x₀| is small enough, we introduce for all (ξ, τ) the functions (g(x₀, ξ, τ), h(x₀, ξ, τ)) as in (4.3), where t₀(x₀) is uniquely determined by (4.4). From the invariance of system (1.1) under dilation, (g(x₀, ξ, τ), h(x₀, ξ, τ)) is also a solution of (1.1) on its domain. From (4.3), (4.4) and (1.14), we have, as x₀ → 0, … . Using the continuity with respect to initial data for system (1.1), associated to a space-localization in the ball B(0, |log(T − t₀(x₀))|^{1/4}), we show as in Section 4 of [42] that … is the solution of system (1.1) with constant initial data (Φ*(K₀), Ψ*(K₀)). Making τ → 1 and using (4.4), we see that …, as x₀ → 0, which concludes the proof of part (iii) of Theorem 1.1, assuming that Propositions 4.5 and 4.3 hold.

Proof of the technical results. In this section, we prove all the technical results used in the proof of the existence of a solution of system (3.3) satisfying (4.1). In particular, we give the proofs of Propositions 4.3 and 4.5, each in a separate subsection.

Preparation of the initial data. In this subsection, we give the proof of Proposition 4.3. Let us start with some properties of the set V_A(s) introduced in Definition 4.1:
(iii) For all y ∈ R, |Λ(y, s)| + |Υ(y, s)| ≤ C A^{M+1} (log s / s²) (1 + |y|^{M+1}) (remember from Lemma 3.2 that f_n, g_n, f̃_n and g̃_n are polynomials of degree n). The same estimate holds for |Υ(y, s)|, which concludes the proof of (i).

Reduction to a finite-dimensional problem. In this subsection, we give the proof of Proposition 4.5, which is the crucial part of our analysis. The idea of the proof is to project system (3.3) on the different components defined by (3.12) and the decomposition (3.45). More precisely, we claim that Proposition 4.5 is a direct consequence of the following:
(ii) (ODE satisfied by the null mode) …
(iii) (Control of the finite-dimensional part) …
(v) (Control of the outer part) … , where r = min{p, q}.

Because of the number of parameters in our problem (p, q and μ) and of the coordinates in (3.45), the proof of Proposition 5.3 is quite long. For that reason, we organize the rest of this subsection in 4 separate parts for the reader's convenience:
-Part 1: We assume the result of Proposition 5.3 in order to complete the proof of Proposition 4.5. The proof of Proposition 5.3 will be carried out in the next three parts.
-Part 2: We deal with system (3.3) to write the ODEs satisfied by θ_n and θ̃_n for n ≤ M. The definition of the projections of (Λ, Υ) on (f_n, g_n) and (f̃_n, g̃_n) given in Lemma 3.4 will be the main tool to derive these ODEs. Then, we prove items (i), (ii) and (iii) of Proposition 5.3.
-Part 3: We derive from system (3.3) a system satisfied by (Λ₋, Υ₋) and prove item (iv) of Proposition 5.3. Unlike the estimates on θ_n and θ̃_n, where we use the properties of the linear operator H + M, here we use the operator H. The fact that M is large enough (as fixed in (3.33)) is crucial in the proof, in the sense that this choice of M allows us to successfully apply a Gronwall inequality at the end for the control of the infinite-dimensional part.
-Part 4: In this shortest part, we project system (3.3) to write a system satisfied by (Λ_e, Υ_e) and prove item (v) of Proposition 5.3. As mentioned earlier, the linear operator of the equation satisfied by Λ_e and Υ_e has a negative spectrum, which makes the control of ‖Λ_e(s)‖_{L^∞(R)} and ‖Υ_e(s)‖_{L^∞(R)} easy.

Proof of Proposition 4.5 assuming Proposition 5.3. We give the proof of Proposition 4.5 assuming that Proposition 5.3 holds. Consider A ≥ A₃ and s₀ = −log T ≥ s₃(A), where A₃ and s₃ are given in Proposition 5.3. On the other hand, we have from (ii) of Proposition 5.3, …, and a contradiction follows if A ≥ C + 1. Hence, the estimates given in (5.2) are proved for all s ∈ [s₀, s₀ + λ].

Case 2: s > s₀ + λ. Using parts (iv)–(v) of Proposition 5.3 with τ = s − λ > s₀ and recalling that τ ≥ s/2 from (5.3), we write … . It is clear that if A ≥ A₅ for some A₅ ≥ 1 large enough, all the estimates in (5.2) hold, except for the strict inequality for θ₂(s), which is treated similarly as in the first case. This concludes the proof of part (i) of Proposition 4.5. The conclusion of part (ii) follows directly from part (i). Indeed, from item (i), we know that for n = 0 or 1 and ω = ±1, we have θ_n(s₁) = ωA/s₁². Therefore, using item (i) of Proposition 5.3, we see that … . Taking A large enough gives ωθ′_n(s₁) > 0, which means that θ_n is transverse outgoing to the bounding curve s ↦ ωAs^{−2} at s = s₁. This concludes the proof of part (ii) and finishes the proof of Proposition 4.5, assuming that Proposition 5.3 holds. Let us now give the proof of Proposition 5.3 in order to complete the proof of Proposition 4.5. The proof is given in the next three parts.

The finite-dimensional part. In this part, we give the proof of items (i), (ii) and (iii) of Proposition 5.3. We proceed in two steps:
-In the first step, we find the main contribution to the projections P_{n,M} and P̃_{n,M} of the various terms appearing in (3.3).
-In the second step, we gather all the estimates obtained in the first step to derive items (i), (ii) and (iii) of Proposition 5.3.

-Step 1: The projection of system (3.3) on the eigenfunctions of the operator H + M. In the following, we will find the main contribution to the projections P_{n,M} and P̃_{n,M} of the five terms appearing in (3.3) (note that we handle (H + M)(Λ, Υ) as one term).

• Third term: We claim the following: Lemma 5.5 (Power series of V₁ and V₂ as s → +∞). The functions V₁(y, s) and V₂(y, s) given in (3.6) satisfy (5.7) and, for all k ∈ N*, (5.8), where W_{i,j}(y) is an even polynomial of degree 2j and W̃_{i,k}(y, s) satisfies … . Moreover, we have for all |y| ≤ √s and s ≥ 1, … .

Proof. Since the estimates of V₁ and V₂ are the same, we only deal with V₁. Let us introduce … and consider z = |y|²/s; we then see from (3.6) that … .

We now use Lemma 5.5 to derive the projections of (V₁Υ, V₂Λ) on (f_n, g_n) and (f̃_n, g̃_n). More precisely, we have the following:
(i) For all n ≤ M and for all s ≥ 1, we have … ; for n = 0, 1, 2, … .

Proof. From Lemma 3.4, let us write … . Using (5.7), the first term can be bounded by … . Since I₂ and I₃ are estimated in the same way, we only focus on the estimate for I₂.
-If i ≥ m − 2, we use (5.7) to obtain the bound (5.14). Let us prove (5.14). We use (5.8) to write …, where we take k to be the largest integer such that i + 2k < m, that is, k = ⌊(m − i − 1)/2⌋. Since |ρ₁(y)| ≤ C e^{−cs} when |y| > √s, the first term can be bounded by C e^{−cs}. The last term is … .
For the second term, we note that deg(g_i W_{1,l}) = i + 2l ≤ i + 2k < m; hence, by the orthogonality (3.17), … . It directly follows that the second term is bounded by C e^{−cs}, which concludes the proof of (5.14). Hence, we have just proved (5.15). Similarly, it holds that (5.16). Injecting (5.15) and (5.16) into (5.12) and (5.13) and making the change of index m = n + 2j, we obtain … . We rewrite the last term as follows: … . This concludes the proof of item (i). Using Definition 4.1 of V_A(s), item (ii) simply follows from item (i). This finishes the proof of Lemma 5.6.

Using estimates (5.9) and (5.10), we further refine the estimate concerning the projection of (V₁Υ, V₂Λ) on (f₂, g₂) as follows:
(ii) For all A ≥ 1, there exists s₆(A) ≥ 1 such that for all s ≥ s₆(A), if (Λ(s), Υ(s)) ∈ V_A(s), then: … .

Proof. Using (5.9), (5.10) and the decomposition (3.45), let us write for all |y| < √s, …, where … . We first note that … and … . Therefore, the problem is reduced to proving that … . To do so, let us write …, where h₀, h₂, h₄ and ĥ₀, ĥ₂, ĥ₄ are defined as in (3.16) with η = 1 and η = μ respectively, and … . Using the definition of P_{2,M} given in (3.39) and the orthogonality (3.17), we see that …, where the values of A_{2,2}, B_{2,2}, A_{4,2} and B_{4,2} are explicitly given by (3.41) and (3.42), that is, … . A straightforward calculation yields … .

• Fourth term: (F₁(Υ, y, s), F₂(Λ, y, s)). We first claim the following: Lemma 5.8 (Decompositions of F₁ and F₂). The functions F₁(Υ, y, s) and F₂(Λ, y, s) given in (3.7) can be decomposed, for all |Λ| ≤ 1 and |Υ| ≤ 1, as follows: …, for all s ≥ 1 and |y| < √s, where F^l_{i,k} is an even polynomial of degree less than or equal to M and F̃^l_{i,k}(y, s) satisfies … . On the other hand, we have for all y ∈ R and s ≥ 1, … .

Proof. We only deal with F₁(Υ, y, s), since the same proof holds for F₂(Λ, y, s). We first note that in the region {|y| < √s} and for s ≥ s₀ for some s₀ ≥ 1, ψ(y, s) is bounded from above and from below. Thus, we Taylor expand F₁ in terms of Υ and write … . Now, we expand E_{1,k}(ψ) in terms of the variable 1/s and write … . Then, we expand E^l_{1,k}(Ψ*) in terms of z = y/√s as follows: … . Finally, we set …, which yields the desired result. This concludes the proof of Lemma 5.8.

Using Lemma 5.8, let us now find estimates on the projections of (F₁, F₂) on (f_n, g_n) and (f̃_n, g̃_n). In particular, we claim the following: … ; for n = 0, 1, 2, … .

Proof. Let us write from Lemma 3.4 the projections of (F₁, F₂) on (f_n, g_n) and (f̃_n, g̃_n) for n ≤ M as follows: … . We see that it is enough to estimate Π_m, since this implies the same estimate for P_{n,M} and P̃_{n,M}. Since the estimates for Π_m and Π̃_m are similar, we only deal with Π_m(F₁, F₂), which is defined as follows: … . Using Lemma 5.8, let us write … . We use part (iii) of Proposition 5.1 to get the estimate … . From part (ii) of Proposition 5.1 and (5.18), we see that … for all y ∈ R and s ≥ A^{2(M+2)}. Since ρ₁(y) ≤ C e^{−cs} for |y| > √s, we then get … . Let us now estimate I₁. We write …, where e^{l,i}_{1,k} are the coefficients of the polynomial F^l_{1,k} defined in (5.19). We note from part (ii) of Proposition 5.1 that ‖Υ(s)‖_{L^∞} ≤ C for all s ≥ A^{2(M+2)}, from which we derive …, where k ≥ 2 and Υ₊ = Σ_{j=0}^{M} (θ_j g_j + θ̃_j g̃_j). From Definition 4.1 of V_A(s), we have … . Hence, the contribution coming from Υ₋ to the estimate of … . This concludes the proof of Lemma 5.9.

• Fifth term: (R₁, R₂). We first expand R₁(y, s) and R₂(y, s) as power series of 1/s as s → +∞, uniformly for |y| < √s. More precisely, we claim the following: Lemma 5.10 (Power series of R₁ and R₂ as s → +∞).
For all m ∈ N, the functions R₁(y, s) and R₂(y, s) given in (3.8) can be expanded as in (5.20), for all |y| < √s and s ≥ 1, where R_{i,k} is a polynomial of degree 2k. In particular, … .

Proof. Let us consider z = y/√s and write from (2.8) and (2.9), …, where Φ*, Ψ* are defined as in (1.15) and b is given by (1.16). Using the fact that (Φ*, Ψ*) ≡ (Φ₀, Ψ₀) satisfies (2.3), we can write from (3.8), …, where F(ξ) = ξ^p and G(ξ) = ξ^q. We only deal with R₁, because the estimate for R₂ follows similarly. For |z| < 1, there exist positive constants c₀ and s₀ such that |Φ*(z)|, |Ψ*(z)|, Φ*(z) + D/s and Ψ*(z) + E/s are larger than 1/c₀ and smaller than c₀, uniformly in |z| < 1 and s ≥ s₀. Since F(ξ) is C^∞ for 1/c₀ ≤ |ξ| ≤ c₀, we expand it around ξ = Ψ*(z) as follows: …, where the F_j(ξ) are C^∞. Hence, we can expand F_j(ξ) around ξ = Ψ*(0) and write … . Gathering all the above expansions in the expression of R₁(y, s), we find that the term of order 1/s is given by …; hence, (5.20) follows.

-If n = 0 or n = 2, then … . Since R₁(y, s) and R₂(y, s) are even functions of y, we deduce that …, from which (5.23) follows. Now, when n ≥ 4 is even, we use (5.20) with m = n/2 and write for i = 1, 2, …, for all |y| < √s, s ≥ 1, where R̃_i is a polynomial in y of degree less than n − 1. It is enough to estimate Π_k(R₁, R₂) and Π̃_k(R₁, R₂) with n ≤ k = n + 2j ≤ M, since the same bound holds for P_{n,M} and P̃_{n,M}. We only estimate Π_k(R₁, R₂), because the same proof holds for Π̃_k, …, where we used the fact that deg(R_{1,n/2}) ≤ n − 1 < k and the orthogonality (3.17), resulting in ∫_R R_{1,n/2} h_k ρ₁ dy = 0, and the fact that the integral over the domain |y| > √s is controlled by C e^{−cs}. We have proved (5.24). When n = 0 or n = 2, estimate (5.25) directly follows from (5.20) with m = 1, that is, … . It remains to prove (5.26). To this end, let us write from (5.20) …, where R_{1,1} and R_{2,1} are given by (5.21) and (5.22). Estimate (5.26) will follow if we show that … . Using Lemma 3.4 and the orthogonality (3.17) (note that deg(R_{i,1}) = 2, i = 1, 2), we obtain … after a straightforward simplification. This concludes the proof of Lemma 5.11.

The infinite-dimensional part. We prove item (iv) of Proposition 5.3 in this part. We proceed in two steps:

Step 1: Projection Π₋,M of all the terms appearing in (3.3). In this step, we will find the main contribution to the projection Π₋,M of the various terms appearing in (3.3). Since deg(g_n) = deg(g̃_n) = n and n + 2k = M − 1 < M, we deduce that I₁ = 0. Moreover, since |W̃_{1,k}(y, s)| ≤ C s^{−(k+1)} (1 + |y|^{2k+2}), we deduce from (iv) of Lemma A.2 that ‖Π₋,M(V₁(θ_n g_n + θ̃_n g̃_n))/(1 + |y|^{M+1})‖ … . Similarly, when M − n is even, we use (5.8) with k = (M − n)/2 and argue as above to obtain the same estimate. This concludes the proof of part (i). Part (ii) simply follows from part (i) and Definition 4.1 of V_A(s). This finishes the proof of Lemma 5.12.

Proof. We only deal with F₁(Υ, y, s), because a similar estimate holds for F₂(Λ, y, s). Since the proof is similar to the proof of Lemma 5.12, we just give the key estimate. We first notice that for every polynomial function f(y) of degree M, we have Π₋,M f(y) = 0. Hence, the conclusion follows once we show that there exists a polynomial function F_{1,M} of degree M in y such that, for all y ∈ R and s ≥ 1, (5.27) holds, where p̄ = min{2, p}. In particular, we take … . To prove (5.27), we recall from Lemma 5.8 that … . We first consider the region |y| ≥ √s.
From (5.18) and part (ii) of Proposition 5.1, we have … . From Lemma 5.9, we know that for all n ≤ M, … . In the region |y| ≤ √s, we use the same argument as in the proof of Lemma 5.9 to deduce that the coefficient of degree k ≥ M + 1 of the polynomial … (1 + |y|^{M+1}). To control the term |Υ|^{M+2}, we use parts (i) and (iii) of Proposition 5.1 to get … . A collection of all the above estimates yields (5.27). The conclusion of Lemma 5.13 follows from (5.27) by using the same argument as in the proof of Lemma 5.12.

Fifth term: (R₁, R₂). From Lemma 5.10, we have the following: Lemma 5.14 (Projection of (R₁, R₂) using Π₋,M). The functions R₁(y, s) and R₂(y, s) defined by (3.8) satisfy … .

Proof. Applying Lemma 5.10 with m = (M + 2)/2, we write for all |y| ≤ √s and s ≥ 1, … . Since deg(R_{i,k}) = 2k ≤ M, we have Π₋,M R_{i,k} = 0. The conclusion simply follows by using (iv) of Lemma A.2. This ends the proof of Lemma 5.14. We are ready to prove part (iv) of Proposition 5.3.

Step 2: Proof of item (iv) of Proposition 5.3. Applying the projection Π₋,M to system (3.3) and using the various estimates given in the first step, we see that Λ₋ and Υ₋ satisfy the following system: …, where G_{1,−} and G_{2,−} satisfy … . Using the semigroup representation of L_η with η ∈ {1, μ}, we write for all s ∈ [τ, τ₁], … . Using part (iii) of Lemma A.2, we get …, where M_∞ = max{(p+1)/(pq−1) + pγ/(p−1), (q+1)/(pq−1) + qΓ/(q−1)}. Since we have already fixed M in (3.33) such that …, this concludes the proof of part (iv) of Proposition 5.3.

Stability of the constructed solution. In this section, we give the proof of Theorem 1.8. The proof strongly relies on the same ideas as the proof of Theorem 1.1, namely the use of finite-dimensional parameters, the reduction to a finite-dimensional problem, and continuity. As for the proof of Theorem 1.1, we only give the proof of Theorem 1.8 in the one-dimensional case for simplicity; however, the same proof holds in higher dimensions. We claim the following, from which Theorem 1.8 directly follows:

Proposition 6.1. Let (û₀, v̂₀) be initial data for system (1.1) such that the corresponding solution (û, v̂) blows up in finite time T̂ at only one blowup point â, and (û(x, t), v̂(x, t)) satisfies (1.14) with T = T̂ and a = â. Then, there exist B₀ ≥ 1, s₀ ≥ 1, a neighborhood E_{s₀} of (T̂, â) in R² and a neighborhood W₀ of (û₀, v̂₀) in L^∞(R) × L^∞(R) such that the following holds: for any (u₀, v₀) ∈ W₀, there exists (T, a) ∈ E_{s₀} such that for all s ≥ s₀, (Λ_{T,a}(s), Υ_{T,a}(s)) ∈ V_{B₀}(s), where Λ_{T,a}(y, s) = Φ_{T,a}(y, s) − φ(y, s) and Υ_{T,a}(y, s) = Ψ_{T,a}(y, s) − ψ(y, s), (6.1), where (Φ_{T,a}, Ψ_{T,a}) is defined as in (1.10) with (u, v) the unique solution of (1.1) with initial data (u₀, v₀), and φ, ψ are defined in (2.8) and (2.9).

Indeed, once Proposition 6.1 is proved, we deduce from part (ii) of Proposition 5.1 and (1.10) that (1.14) holds for (u, v). Then, Proposition 4.7 applied to (u, v) shows that (u, v) blows up at time T at one single point a. Since part (iii) of Theorem 1.1 follows from part (ii), we conclude the proof of Theorem 1.8, assuming that Proposition 6.1 holds. Let us now give the proof of Proposition 6.1. The proof is analogous to the case of equation (1.3) treated in [29] (see also [38]). For the reader's convenience, we give here the main idea of the proof. The interested reader is kindly referred to the stability sections in [29] and [38] for more details.
We consider (û, v̂), the solution of system (1.1) constructed in Theorem 1.1, and call (û₀, v̂₀) its initial data in L^∞(R) × L^∞(R), and (T̂, â) its blowup time and blowup point. In view of (6.4) and (6.5), (Λ₀, Υ₀)(T, a, u₀, v₀, σ₀) appears as initial data for system (3.3) at time s = s₀(σ₀, τ), and our parameters are now (T, a), replacing (d₀, d₁) in (4.2). In particular, we have the following property:

Proof. The proof directly follows from the expansion of (Λ₀, Υ₀) given in (6.4) and (6.5) for (T, a) close to (T̂, â). It happens that the proof is completely analogous to the case of equation (1.3) treated in [29] (see also [38]). For this reason, we omit the proof and kindly refer the interested reader to Lemma B.4, page 186 in [29] and Lemma 6.2 in [38] for analogous proofs.

With the result of Proposition 6.2 at hand, we are ready to complete the proof of Proposition 6.1. Recall that in the existence proof given in Section 4, we had to make a specific choice of the two parameters (d₀, d₁) ∈ R² appearing in (4.2) in order to guarantee that (Λ(s), Υ(s))_{d₀,d₁} ∈ V_A(s) for all s ≥ s₀, for some A ≥ 1 and s₀(A) ≥ 1 large enough. In particular, we chose (d₀, d₁) ∈ D_{s₀} so that the initial data at s = s₀ of (3.3) is small in V_A(s₀). Together with the dynamics of system (3.3) given in Proposition 5.3, we showed that it stays small in V_A(s) up to s = s₀ + λ for some λ = log A (see Subsection 5.2.1). In the case s ≥ s₀ + λ, we did not use the data at s = s₀; we only used Proposition 5.3 to derive the smallness of the solution. In particular, we derived the so-called reduction of the problem to a finite-dimensional one (see Proposition 4.5). Then, the topological argument for the finite-dimensional problem involving the two parameters (d₀, d₁) allowed us to conclude the existence of (d₀, d₁) ∈ D_{s₀} such that the solution of (3.3) with initial data (4.2) is trapped in V_A(s) for all s ≥ s₀. Now, starting from (Λ₀, Υ₀)(T, a, u₀, v₀, σ₀) at time s = s₀ and applying the same procedure as in the existence proof, including the reduction to a finite-dimensional problem (see Proposition 4.5) and the topological argument involving the two parameters (T, a), we end up with the existence of (T̄(u₀, v₀), ā(u₀, v₀)) ∈ D̄_{B,σ₀,u₀,v₀} such that system (3.3), with initial data at time s = s₀, … . This concludes the proof of Proposition 6.1 as well as that of Theorem 1.8.

A. Some elementary lemmas. The following lemma is the integral version of Gronwall's inequality, stated in its standard form:

Lemma A.1. Let φ, a and b be continuous functions on [s₀, s₁], with b ≥ 0. If φ(s) ≤ a(s) + ∫_{s₀}^{s} b(τ)φ(τ) dτ for all s ∈ [s₀, s₁], then φ(s) ≤ a(s) + ∫_{s₀}^{s} a(τ)b(τ) exp(∫_{τ}^{s} b(r) dr) dτ.

Proof. See Lemma 2.3 in [19].

In the following lemma, we recall some linear regularity estimates for the linear operator L_η defined in (3.4). We also have the following: (iv) For all k ≥ 0, we have … .

Proof. The expressions of e^{τL_η}(y, x) and e^{τL_η} are given in [3], page 554. The proofs of (i)–(ii) follow by straightforward calculations using (A.1) and (A.2). For (iii)–(iv), see Lemmas A.2 and A.3 in [30].
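For the reader's convenience, we also record the kernel in the case η = 1, N = 1. The formula below is quoted in its usual Mehler-type form from the framework of [3]; the normalization is assumed (it corresponds to L = ∂²_y − (y/2)∂_y + 1 and should be compared with (A.1)–(A.2)), and the case of general η follows by scaling:

\[
e^{\tau L}(y,x) \;=\; \frac{e^{\tau}}{\sqrt{4\pi\,(1 - e^{-\tau})}}\,
\exp\!\left[ -\,\frac{\bigl( y\, e^{-\tau/2} - x \bigr)^{2}}{4\,(1 - e^{-\tau})} \right],
\qquad
\bigl(e^{\tau L} r\bigr)(y) \;=\; \int_{\mathbb{R}} e^{\tau L}(y,x)\, r(x)\, dx.
\]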
Zeolite-templated NiO Nanostructure for Methanol Oxidation Reaction

Nickel oxide nanoparticles with particle sizes of 10.0 to 15.0 nm were successfully prepared using zeolite as a template and loaded (NiO 10 wt.%) on functionalized carbon nanofibers (CNFs). The as-prepared NiO-CNFs material was characterized and tested as an electrocatalyst and as a catalyst for methanol conversion. Electrocatalytic results showed high stability, evidenced by repetitive cycles as a result of catalyst surface activation. Gas-phase catalytic tests were carried out at 290°C over the NiO-CNFs catalyst in fresh, reduced, and oxidized forms. The results showed that dimethyl ether (DME) and CO2 were obtained as the main products, while formaldehyde (FA), methyl formate (MF) and dimethoxymethane (DMM) were obtained as traces. The conversion of methanol in the absence of H2 or O2 over the inactivated NiO-CNFs catalyst suggested that DME reacts with the formed H2O to produce CO2. In both cases of reactivation of the catalyst, by H2 (reduction) or by air (oxidation), the conversion increased, indicating regeneration of the catalyst.

INTRODUCTION
Porous solids are categorized in three classes: microporous materials with pore diameters < 2 nm, mesoporous materials with pore diameters of 2-50 nm, and macroporous materials with pore diameters > 50 nm [1,2]. Mesoporous materials have attracted high interest due to their unique properties, such as large surface area, uniform pore size and pore size distribution, which make them suitable for many applications such as energy conversion [3], sensors [4], catalysis [5] and bio-related applications [6]. They are also used as templates for making molecular sieves and sensors and for the synthesis of controlled-size nanoparticles [2]. On the other hand, microporous materials such as zeolites are known for many applications, such as catalysis and sorption, owing to their high stability and selectivity [7]. It has been reported that zeolites can be used as starting materials to produce mesoporous structures through a desilication process, i.e. alkaline treatment of the zeolite [7,8]. Here we report the synthesis of nickel oxide nanoparticles using zeolite ZSM-5 to produce a mesoporous structure, followed by removal of the zeolite by alkali treatment. The prepared NiO nanoparticles were supported on carbon fibers to be used as an electrocatalyst and as a catalyst for methanol conversion.

Zeolite Preparation and loading
In a typical hydrothermal synthesis procedure [9], exact amounts of the precursors were reacted. The product was filtered, rinsed with deionized water, dried, and calcined at 540°C for 8 h to remove the template [10]. A certain amount of the prepared ZSM-5 zeolite was then kept under vacuum at 150°C overnight, then added to a solution of Ni(NO3)2 and sonicated for 4 h, followed by vacuum drying. The zeolite sample loaded with the nickel salt was then calcined at 350°C for 2 hours. The zeolite template was removed using 1 M NaOH solution for 5 h at 80°C with stirring.
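A quick stoichiometric sketch ties these steps together: it estimates how much nickel salt must be converted to NiO to reach the 10 wt.% composite loading used later. This is not from the paper: the 500 mg CNF batch size and the choice of Ni(NO3)2·6H2O as the nickel source are illustrative assumptions.

# Minimal sketch (not from the paper): back-of-envelope masses for a target
# 10 wt.% NiO loading on CNFs, assuming Ni(NO3)2.6H2O as the nickel source.
# The 500 mg CNF batch size is a hypothetical illustration, not a reported value.

M_NIO = 74.69                # g/mol, NiO
M_NI_NITRATE_6H2O = 290.79   # g/mol, Ni(NO3)2.6H2O

def precursor_mass_for_loading(m_cnf_mg, loading_wt_frac=0.10):
    """Mass of Ni(NO3)2.6H2O (mg) needed so that, after calcination to NiO,
    the composite contains `loading_wt_frac` NiO by weight."""
    m_nio = m_cnf_mg * loading_wt_frac / (1.0 - loading_wt_frac)  # mg NiO
    return m_nio * M_NI_NITRATE_6H2O / M_NIO, m_nio

m_salt, m_nio = precursor_mass_for_loading(500.0)
print(f"NiO needed: {m_nio:.1f} mg; Ni(NO3)2.6H2O needed: {m_salt:.1f} mg")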
Preparation of the NiO/carbon-nanofiber catalyst
The carbon nanofiber support was prepared according to the method reported by Al-Enizi et al. [11]. In a typical synthesis, polyacrylonitrile fibers were produced using the electrospinning technique. The produced fibers were collected, dried, and then converted to carbon fibers by pyrolysis under a nitrogen atmosphere at 1100°C for 6 hours. The obtained NiO material was dried at 70°C overnight. To achieve 10 wt.% loading of NiO on the carbon nanofibers (CNFs), appropriate amounts of NiO and CNFs were mixed and sonicated in deionized water for 30 minutes.

Materials characterization
The C, H, and N contents of the prepared NiO/carbon-fiber nanocatalyst were determined using a PerkinElmer 2400 Series II CHNS/O elemental analyzer, and a Bruker D8 Advance X-ray diffractometer was used for the characterization of the crystal structure. The TEM characterization of the NiO nanoparticles was performed on a JEOL high-resolution transmission electron microscope (HRTEM 2100) operated at 200 kV.

Catalytic measurements
The conversion of methanol was used to test the acid and redox properties of the catalyst. Oxidation tests were carried out at atmospheric pressure using a fixed-bed continuous-flow reactor. The catalyst (250 mg), packed in a stainless-steel reactor, was preconditioned at 330°C under air flow for 2 h at a rate of 80 ml/min. After the pre-treatment, the reagent mixture, which consisted of methanol and air with a volume ratio air/CH3OH = 2/0.5, was admitted into the reactor (admission of methanol was carried out automatically and continuously by micro-injection through a syringe). The reaction was conducted at 250°C. As for the reduction tests, the loaded catalyst sample was preconditioned at 330°C under H2 flow for 2 h at a rate of 80 ml/min. After the pre-treatment, the reagent mixture of methanol and H2 (H2/CH3OH = 2/0.5) was admitted into the reactor. The reaction products were analyzed with a gas chromatograph (Agilent 6890N) equipped with a flame ionization detector and a capillary column (HP-PLOT Q, length 30 m, ID 0.53 mm).

Electrocatalytic measurements
Working electrodes were fabricated by dispersing 0.2 mg of the prepared material in a solution of 0.5 ml isopropyl alcohol and 0.1 ml Nafion (10 wt.%) and sonicating the dispersion for 1 h. Next, 10 μl of the dispersion was cast onto a glassy carbon (GC) disk electrode (GAMRY Instrument Company, A = 0.071 cm²) and left to dry overnight. Before inserting the electrode into the electrolytic cell, its surface was flushed with the same electrolyte used in the measurements in order to confirm the surface wettability [11].

RESULTS AND DISCUSSION
The result of the X-ray powder diffraction analysis of the calcined ZSM-5 zeolite is shown in Fig. 1. The sample exhibited the characteristic diffraction lines of the ZSM-5 framework, with reflections in the 2θ ranges of 7−9° and 23−25°, which matched well with the standard pattern of ZSM-5 zeolite [12]. The obtained XRD spectrum confirmed the successful preparation of highly crystalline ZSM-5, as indicated by the well-resolved XRD peaks and the weak background noise in the XRD patterns. The TEM image after alkali treatment of the zeolite template (Fig. 2) indicates the formation of uniform NiO particles with a thickness of about 3-6 nm. The XRD pattern of the prepared NiO/CNFs electrocatalyst shows diffraction peaks at 2θ = 37.3° and 43.3°, indexed to the (111) and (200) planes of the cubic NiO phase, respectively (Fig. 3) [13].
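The NiO crystallite size implied by these reflections can be cross-checked against the TEM estimate with the Scherrer equation, D = Kλ/(β cos θ). The sketch below is illustrative only: the Cu Kα wavelength and the 0.8° FWHM are assumed values, since neither is reported here.

# Illustrative Scherrer estimate of the NiO crystallite size from the reported
# 2-theta positions. The wavelength (Cu K-alpha) and FWHM values are
# assumptions, not values reported in the paper.
import math

WAVELENGTH_NM = 0.15406  # Cu K-alpha, assumed
K = 0.9                  # Scherrer shape factor, common choice

def scherrer_size_nm(two_theta_deg, fwhm_deg):
    """Crystallite size D = K*lambda / (beta * cos(theta)), beta in radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return K * WAVELENGTH_NM / (beta * math.cos(theta))

# (111) and (200) NiO reflections from the text; FWHM of 0.8 deg is hypothetical.
for hkl, two_theta in (("111", 37.3), ("200", 43.3)):
    print(hkl, f"{scherrer_size_nm(two_theta, 0.8):.1f} nm")

With a 0.8° assumed peak width, this gives crystallites of roughly 10 nm, which would be consistent with the 10-15 nm particle size quoted in the abstract.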
A diffraction peak at about 26° can be assigned to the (002) plane of the hexagonal structure of the carbon nanofibers (CNFs) [10]. The other, smaller XRD peaks at about 2θ = 52° and 77° are assigned to the (004) and (110) planes of the hexagonal structure of graphite C 2H (ICDD 00-041-1487), respectively.

The prepared electrocatalyst with a composition of 10% NiO/CNFs was tested for the methanol oxidation reaction. The electrocatalytic results, obtained at a scan rate of 50 mV s⁻¹, are depicted in Fig. 6. Fig. 6 shows that the peak current density increased from 0.95 mA cm⁻² in the 1st cycle to 1.2 mA cm⁻² in the 20th cycle as a result of catalyst surface activation by repetitive cycling. However, after the NaOH treatment of the zeolite, the formation of nickel silicate cannot be excluded; this may be, in part, the reason for the high stability of the catalyst. Additional studies of both the intermediate NiO material and the final NiO-CNFs catalyst are needed.

The reaction of methanol conversion is considered to be sensitive to the structure [14], the dispersion [15] and the interaction force between the active phase and the support [16]. For this reason, it was selected as a test reaction to characterize the as-prepared catalyst. The reactions of methanol conversion were carried out at 290°C and 250°C over the as-prepared catalyst. The results show that the reaction led to dimethyl ether (DME) and CO2 as the main products, while formaldehyde (FA), methyl formate (MF) and dimethoxymethane (DMM) were observed as traces.

DME can be produced by methanol dehydration: 2 CH3OH → CH3OCH3 + H2O. CO2 can be produced by total oxidation of methanol or of DME: CH3OH + 3/2 O2 → CO2 + 2 H2O and CH3OCH3 + 3 O2 → 2 CO2 + 3 H2O.

The acid character of the catalyst is indicated by the amount of dimethyl ether (DME) formed, while the redox character is indicated by the formation of CO2. Besides CO2, the other reaction products such as FA, MF and DMM, which are formed by condensation of methanol and FA in consecutive reactions, also indicate the redox character of the catalyst.

The catalyst was initially tested for the conversion of methanol at a reaction temperature of 290°C without being activated. The results (Fig. 7) showed that the conversion increased as the reaction time increased from 0.5 to 2 hours; beyond 2 h it decreased, indicating deactivation of the catalyst. As for the variation of the selectivities with time on stream, it was found that when the reaction time increases from 0.5 to 2 h, the selectivity to DME decreases in favor of CO2. The formation of DME is due to the acidic sites (OH) created during the functionalization (treatment with H2SO4) of the CNF support. In fact, the treatment with H2SO4 leads to the creation of oxygenated groups (acid and redox sites) such as COOH, C=O and OH. The DME formed is then oxidized by the redox sites of the support and of NiO, leading to the formation of CO2. These results are in agreement with those reported in the literature [17], where it was mentioned that methanol leads to DME, which reacts with H2O to produce CO2 (CH3OCH3 + 3 H2O → 6 H2 + 2 CO2) [18]. CO2 is also formed by reactions of methanol with H2O [19]. The sample that was tested for the reaction of methanol alone at 290°C was then activated by H2 and tested for the CH3OH/H2 reaction; after that, it was activated by air and tested for the CH3OH/air reaction at the same reaction temperature.
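For reference, conversion and product selectivities of the kind reported below are typically computed from the GC-quantified molar flows as in the following sketch; all numerical inputs are hypothetical placeholders, not data from this work.

# Minimal sketch (not from the paper) of how conversion and carbon-based
# selectivities are typically computed from GC-quantified molar flows.
# All numbers below are hypothetical placeholders, not measured data.

def conversion(n_meoh_in, n_meoh_out):
    """Methanol conversion, X = (in - out) / in."""
    return (n_meoh_in - n_meoh_out) / n_meoh_in

def carbon_selectivities(product_mols, carbons_per_mol):
    """Selectivity of each product on a carbon basis."""
    c = {k: product_mols[k] * carbons_per_mol[k] for k in product_mols}
    total = sum(c.values())
    return {k: v / total for k, v in c.items()}

# Hypothetical molar flows (mol/h) at the reactor outlet.
products = {"DME": 0.08, "CO2": 0.05, "FA": 0.002, "MF": 0.001, "DMM": 0.001}
carbons = {"DME": 2, "CO2": 1, "FA": 1, "MF": 2, "DMM": 3}

print(f"X = {conversion(1.0, 0.77):.2%}")
for name, s in carbon_selectivities(products, carbons).items():
    print(f"S({name}) = {s:.2%}")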
The changes of the conversion and the selectivities with time on stream are shown in Figs. 8 and 9, respectively. It can be seen that in both cases the conversion increased with temperature and reached a value higher than that obtained for the reaction of methanol alone. As for the selectivities, it can be seen that H2 increased the selectivity to DME to the detriment of that of CO2, whereas air (O2) decreased the DME selectivity in favor of that of CO2. These results can be explained by the fact that H2 shifts the equilibrium of the conversion of DME to the left (equation 4) and thus decreases the conversion of DME into CO2. On the other hand, O2 increases the conversion of methanol and DME to CO2.

In order to see the effect of temperature on the CH3OH/O2 reaction, the catalyst sample was activated at 330°C and then tested at 250°C. The results (Fig. 10) show a slight decrease in the conversion, which indicates a slight deactivation of the catalyst. On the other hand, a slight decrease in the selectivity of DME in favor of MF and DMM was observed, while the selectivity of CO2 remained almost unchanged. These results are in agreement with those reported in the literature [20,21], where it has been reported that DMM and MF can be formed from DME. Compared to the results obtained at 290°C, it can be concluded that MF and DMM, which are formed from CH3OH or via DME, are converted to CO2 by total oxidation. Taking into account the obtained results, the possible reaction routes to CO2, DME, MF and DMM can be found in Scheme 1.

Fig. 7. Dependence of the conversion and product selectivities on time on stream for the reaction over NiO/CNFs at 290°C.

CONCLUSION
NiO nanoparticles were successfully prepared and loaded on functionalized carbon nanofibers (CNFs). The as-prepared NiO-CNFs material was tested as an electrocatalyst and as a catalyst for the conversion of methanol in the gas phase. The results show that the NiO/CNFs catalyst has considerable activity for methanol oxidation. In fact, it showed high stability, as proven by repetitive cycles as a result of catalyst surface activation. However, additional studies of both the intermediate NiO material and the final NiO-CNFs catalyst are recommended for future work. The conversion of methanol in the gas phase at 290°C over the NiO-CNFs catalyst in fresh, reduced, and oxidized forms led to dimethyl ether (DME) and CO2 as the main products; formaldehyde (FA), methyl formate (MF) and dimethoxymethane (DMM) were observed as traces.

The conversion of methanol alone at 290°C over the fresh (neither reduced nor oxidized) NiO-CNFs catalyst showed a decrease of the selectivity to DME in favor of that of CO2, which suggests that DME reacts with the formed H2O to produce CO2.

Reactivation of the catalyst by H2 (reduction) increased the selectivity to DME to the detriment of that of CO2 (partial oxidation), whereas reactivation by air (oxidation) decreased the DME selectivity in favor of that of CO2 (total oxidation). Both cases of reactivation increased the conversion of methanol.

Results of methanol conversion at the lower reaction temperature of 250°C showed a decrease of the DME selectivity in favor of those of MF and DMM, which indicates that DMM and MF can be formed via DME at lower temperature.
Association of hypoxia inducible factor 1-alpha gene polymorphisms with multiple disease risks: A comprehensive meta-analysis

HIF1A gene polymorphisms have been confirmed to be associated with cancer risk through statistical meta-analyses based on single genetic association (SGA) studies. A good number of SGA studies have also investigated the association of the HIF1A gene with several other diseases, but no researcher has yet performed a statistical meta-analysis to confirm this association more accurately. Therefore, in this paper, we performed a statistical meta-analysis to reach a consensus decision about the association of HIF1A gene polymorphisms with several diseases other than cancers, giving weight to large sample sizes. This meta-analysis was performed based on the findings of 41 SGA studies, in which the polymorphisms rs11549465 (1772 C/T) and rs11549467 (1790 G/A) of the HIF1A gene were analyzed based on 11544 and 7426 cases and 11494 and 7063 control samples, respectively. Our results showed that the 1772 C/T polymorphism is not significantly associated with overall disease risks. The 1790 G/A polymorphism was significantly associated with overall diseases under the recessive model (AA vs. AG + GG), which indicates that the A allele is responsible for overall diseases even though it is recessive. The subgroup analysis based on ethnicity showed a significant association of the 1772 C/T polymorphism with overall disease for the Caucasian population under all genetic models, which indicates that the C allele controls overall diseases. The ethnicity subgroup showed a significant association of the 1790 G/A polymorphism with overall disease for the Asian population under the recessive model (AA vs. AG + GG), which indicates that the A allele is responsible for overall diseases. The subgroup analysis based on disease types showed that 1772 C/T is significantly associated with chronic obstructive pulmonary disease (COPD) under two genetic models (C vs. T and CC vs. CT + TT), with skin disease under two genetic models (CC vs. TT and CC + CT vs. TT), and with diabetic complications under three genetic models (C vs. T, CT vs. TT and CC + CT vs. TT), where the C allele is a high risk factor for skin disease and diabetic complications (since ORs > 1) but a low risk factor for COPD (since ORs < 1). Also, the 1790 G/A variant was significantly associated with the subgroups of cardiovascular disease (CVD) under the homozygote model, diabetic complications under the allelic and homozygote models, and other diseases under four genetic models, where A is a high risk factor for diabetic complications and a low risk factor for CVD. Thus, this study provides more evidence that the HIF1A gene is significantly associated with COPD, CVD, skin disease and diabetic complications. These might be severe comorbidities and risk factors for multiple cancers due to the effect of the HIF1A gene, and they need further investigation accumulating a large number of studies.

Introduction
In the scientific community, hypoxia-inducible factor 1α (HIF1A), a transcription factor, has been a research focus to explain its role in oxygen sensing under normal and hypoxic conditions. Many aspects of human physiology need to match oxygen supply to cellular metabolism, which presumably regulates gene expression by oxygen sensing [1]. HIF1A regulates the expression of hundreds of genes [2,3] involved in many biological processes, including neovascularization, angiogenesis, cytoskeletal structure, apoptosis, adhesion, migration, invasion, metastasis, glycolysis, and metabolic bioenergetics [4-6].
Low oxygen levels, or hypoxia, represent an important microenvironmental condition affecting the pathology of many human diseases, including cancer, diabetes, aging, and stroke/ischemia [7,8]. The HIF1A 1772 C/T (rs11549465) and 1790 G/A (rs11549467) single nucleotide polymorphisms (SNPs) have been identified in association with different types of cancers [9-14]. In recent years, a study also reviewed the association of the HIF1A 1772 C/T and 1790 G/A polymorphisms with different types of cancers and found that both polymorphisms are significantly associated with overall cancers [15]. The subgroup analyses indicated that the 1772 C/T polymorphism is associated with a decreased risk of renal cell carcinoma and that the 1790 G/A polymorphism is associated with significantly increased cancer risk in the Asian and Caucasian populations [15]. However, a good number of single genetic association (SGA) studies have also reported the association of these two polymorphisms with other diseases, including type 2 diabetes (T2D), cardiovascular diseases (CVD), lung disease, autoimmune diseases, inflammatory diseases, preeclampsia, osteoarthritis, lumbar disc degeneration, high altitude polycythemia, age-related macular degeneration and many more. The SGA study of Hernández-Molina et al. [18] reported that HIF1A 1772 C/T is a significant genetic factor for autoimmune disease, whereas some other studies [25,31] found its association insignificant. Similarly, some authors showed a significant association of HIF1A (1772 C/T and 1790 G/A) with cardiovascular diseases (CVD) [21,40], though other authors did not find a significant effect on the same question [22,26]. Again, for inflammatory diseases, a significant association was claimed by [20,38] and an insignificant association by [27,32,41]. For chronic obstructive pulmonary disease (COPD), Yu et al. [17] and Putra et al. [39] claimed significant and insignificant associations with HIF1A gene polymorphisms, respectively. Wei et al. [37] showed a significant association of the 1772 C/T polymorphism and an insignificant association of the 1790 G/A polymorphism of HIF1A with COPD. Both SNPs of the HIF1A gene were significantly associated with preeclampsia in [16,24], but another study found insignificant associations [34]. Likewise, Geza et al. [29] reported a significant association of diabetes (types 1 and 2) with the HIF1A 1772 C/T polymorphism, and Yamada et al. [35] also suggested that HIF1A 1772 C/T is significantly associated with type 2 diabetes (T2D) while HIF1A 1790 G/A is not. Another two studies claimed an insignificant association between HIF1A gene polymorphisms and type 2 diabetes [45,50]. Ekberg et al. [51] and Bi et al. [52] both found a significant association of HIF1A gene polymorphisms with diabetic complication diseases, but Liu et al. (a) [45] and Pichu et al. (b) [50] found no relation. Also, Lin et al. [33] reported that HIF1A 1790 G/A might play a significant protective role against the development of lumbar disc degeneration (LDD), while HIF1A 1772 C/T did not play any role in the severity of LDD. Some authors also checked the association of the HIF1A gene with cellulite [28], hemodialysis patients [30], high-altitude polycythemia (HAPC) [36], and age-related macular degeneration (AMD) [23]. They found a significant association of HIF1A with cellulite and insignificant associations of HIF1A with hemodialysis patients, HAPC and AMD risk.
Thus, we observed that different SGA studies produce inconsistent results about the association of HIF1A gene polymorphisms with multiple disease risks beyond cancers. Such inconsistent results may be produced by the small sample size and/or the heterogeneous population of each individual SGA study. Therefore, a consensus decision about the association of HIF1A gene polymorphisms with multiple disease risks is required to make a treatment plan against this genetic effect. To reach a consensus decision about the contradictory findings of different studies more accurately, researchers usually consider statistical meta-analysis [15,55-60]. A meta-analysis makes a decision about the association more accurately than SGA studies. Therefore, in this study, we used statistical meta-analysis to reach a consensus decision about the association of HIF1A gene (1772 C/T and 1790 G/A) polymorphisms with several disease risks excluding cancers, giving weight to large sample sizes and appropriate statistical modeling.

Eligibility criteria
The titles and abstracts of the primarily selected relevant studies were independently investigated by two authors. For the final review, some important inclusion-exclusion criteria were used to extract data; studies were included only if they (i) were designed to examine the association between HIF1A gene polymorphisms (C1772T, G1790A) and disease/disorder risk, (ii) were human case-control studies, and (iii) provided sufficient genotype frequency data.

Data extraction
For the final review, the following information was extracted from each included study: first author, year of publication, country of origin, ethnicity of the study subjects, numbers of cases and controls, disease type, allelic and genotypic distributions, and so on, according to the PRISMA statement [61]. To confirm the validity of a selected SGA study for inclusion in the meta-analysis, the Hardy-Weinberg equilibrium (HWE) test was performed using the chi-square statistic. A study was considered suitable for meta-analysis only if Pr{χ² ≥ χ²_obs} ≥ 0.05 (Table 1).

Quality assessment
Two authors independently assessed individual study quality using the Newcastle-Ottawa Scale (NOS) [62]. The total nine-point NOS score was generated from the categories of selection (4 points), comparability (2 points), and exposure (3 points). The NOS score of an individual study was considered to indicate poor (0-3), fair (4-6) or excellent (7-9) quality. In our meta-datasets, 38 studies showed excellent quality and 3 were of fair quality (S1 Table).

Statistical meta-analysis
To perform the meta-analysis on the SGA studies, we used the following statistical analyses. The HWE test was performed using the chi-square statistic to confirm the suitability of a selected study for inclusion in the meta-analysis. The consistency of the genotypic ratio in the control population was used as the null hypothesis (H0) for the HWE test. The test statistic for H0 is defined as

χ² = Σ_i (O_i − E_i)² / E_i,   (1)

which follows a chi-square distribution with 1 degree of freedom, where O_i and E_i denote the observed and expected frequencies of the ith genotype, respectively. Let p and q denote the probabilities of the two alleles (e.g. C and T), respectively, and let O_i = obs(i) be the observed frequency of the ith genotype among the 3 genotypes CC, CT and TT.
Then p and q are defined as

p = [2·obs(CC) + obs(CT)] / (2n),   q = 1 − p,

and the expected frequency of the ith genotype is E_i = E(i), defined as E(CC) = p²n, E(CT) = 2pqn and E(TT) = q²n, where n is the total number of observations.

To investigate the association of the SNPs with multiple diseases based on pooled odds ratios (ORs), the individual OR of the kth SGA study was calculated as [63,64]

OR_k = (a_k d_k) / (b_k c_k),

where a_k, b_k, c_k and d_k are the cell counts of the kth 2 × 2 (case-control × allele/genotype) table. Heterogeneity was tested using Cochran's Q statistic, introduced below. To calculate the pooled OR under the fixed-effect model (FEM), the Mantel-Haenszel (MH) method was used:

OR_MH = Σ_k (a_k d_k / n_k) / Σ_k (b_k c_k / n_k),   with n_k = a_k + b_k + c_k + d_k.

To calculate the pooled OR under the random-effects model (REM), the inverse-variance method was used:

ln(OR_REM) = Σ_k w*_k ln(OR_k) / Σ_k w*_k,   w*_k = 1/(v_k + τ²),

where v_k is the within-study variance of ln(OR_k) and τ² is the between-study variance (estimated, e.g., by the DerSimonian-Laird method). The 95% confidence interval (CI) for a pooled OR was obtained from the z-statistic as

95% CI = exp[ ln(OR) ± 1.96 × SE(ln OR) ].

Then Cochran's Q statistic [65] is defined as

Q = Σ_k w_k [ ln(OR_k) − ln(OR_pooled) ]²,   w_k = 1/v_k,

and its extension, the Higgins and Thompson I² statistic [66], was also used to check the heterogeneity of the SGA studies. The I² statistic is defined as

I² = max{0, (Q − df)/Q} × 100%,   df = K − 1,

where K is the number of studies. I² values >25%, >50% and >75% were defined as low, moderate and high heterogeneity, respectively. Subgroup analyses were performed based on ethnicity and disease types. Sensitivity analysis was carried out using both the full data and the reduced data, where the reduced dataset excluded the SGA studies rejected by the HWE validation test. To investigate publication bias among the SGA studies included in the meta-analysis, we constructed funnel plots, in which the standard error (SE) of the estimated effect was plotted against the OR [63,64,67]. Also, Egger's regression test and Begg's test [68,69] were performed for quantitative evaluation (a p-value < 0.05 indicates the existence of publication bias). The Egger regression test was performed under H0: α = 0 (absence of publication bias), using the test statistic

t = α̂ / SE(α̂),

where α̂ is the intercept estimated by least squares in the regression of the standardized effect ln(OR_k)/SE_k on the precision 1/SE_k, with errors ε_k ~ iid N(0, σ²). The Begg test was performed under the null hypothesis of no publication bias, using the test statistic

Z = (C − D) / sqrt[ K(K − 1)(2K + 5)/18 ],

where C and D represent the numbers of concordant and discordant pairs, respectively, obtained from Kendall's ranking of the standardized effect sizes against their variances, with t_k = OR_k denoting the OR of the kth study. Also, we computed the false positive report probability (FPRP) to verify whether the findings could be regarded as false positives or not [70]. We computed the statistical power and FPRP based on our significant ORs using

FPRP = α(1 − π) / [ α(1 − π) + (1 − β)π ],

where α is the level of significance, π is the prior probability and (1 − β) is the statistical power. To implement all the statistical analyses, we used the 'meta' package in the R program (http://metaanalysis-with-r.org/).
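To make the pipeline above concrete, the following minimal sketch (written in Python for self-containment, whereas the actual analysis used the R 'meta' package; all 2 × 2 counts are hypothetical) computes per-study ORs, the Mantel-Haenszel pooled OR, Cochran's Q and I².

# Toy re-implementation of the fixed-effect pipeline described above.
# Counts (a, b, c, d) = (exposed cases, unexposed cases, exposed controls,
# unexposed controls) are hypothetical, not data from the paper.
import math

studies = [(120, 380, 90, 410), (45, 155, 30, 170), (200, 600, 150, 650)]

def mh_pooled_or(tables):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

def cochran_q_i2(tables, pooled):
    # Q centered on the MH pooled estimate (a common approximation to the
    # inverse-variance fixed-effect center).
    q = 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        var = 1/a + 1/b + 1/c + 1/d          # variance of log OR
        q += (log_or - math.log(pooled)) ** 2 / var
    df = len(tables) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

pooled = mh_pooled_or(studies)
q, i2 = cochran_q_i2(studies, pooled)
print(f"OR_MH = {pooled:.3f}, Q = {q:.2f}, I^2 = {i2:.1f}%")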
Study characteristics
Initially, 187 studies that included the HIF1A gene and its polymorphisms in their titles or abstracts were selected through text mining. After screening out duplicates and excluding the studies that did not match the eligibility criteria or had incomplete information, a total of 41 studies were selected for the final review based on the PRISMA statement (Fig 1). In this study, 35 studies comprising 38 datasets for the HIF1A 1772 C/T polymorphism, with a sample size of 23038 (11544 cases and 11494 controls), and 22 studies comprising 24 datasets for the HIF1A 1790 G/A polymorphism, with a sample size of 14489 (7426 cases and 7063 controls), were incorporated. For the meta-analysis of the HIF1A 1772 C/T and 1790 G/A polymorphisms, the types of diseases included (after excluding all types of cancer) were grouped as cardiovascular diseases (CVDs), type 2 diabetes (T2D), autoimmune diseases, inflammatory diseases, chronic obstructive pulmonary disease (COPD), preeclampsia, skin disease, diabetic complications, and other (age-related macular degeneration (AMD), hemodialysis, lumbar disc degeneration (LDD), high altitude polycythemia (HAPC), metabolic syndrome, pressure injury). The 'other' disease group gathered the diseases represented by a single study each, to allow this meta-analysis. The subgroups of the respective diseases are shown in S4 Table.

Quantitative synthesis of the HIF1A 1772 C/T polymorphism. The results generated through this meta-analysis indicated that the HIF1A 1772 C/T polymorphism was insignificantly associated with overall disease risk (Table 2). That subgroup may be the main source of heterogeneity in the meta-analysis of the HIF1A 1772 C/T polymorphism.

Forest plot legend (columns: Subgroup, Study number): the square on each horizontal line represents the individual study-specific odds ratio (OR) with its 95% confidence interval (CI), and the black area of the square represents the corresponding study weight. The black diamond reflects the pooled OR, and the lateral points of the diamond represent the CI of the overall analysis. The solid vertical line at OR = 1 indicates no effect. The dashed vertical line shows the corresponding pooled OR of the analysis. The results for the HIF1A 1790 G/A polymorphism are given in Table 3.

Sources of heterogeneity
In this meta-analysis, significant heterogeneity was observed among the studies of the HIF1A polymorphisms. The subgroup analysis suggested that some genetic models showed significant heterogeneity in the cases of COPD, CVD, T2D, diabetic complications, and the Asian and Caucasian populations (S2 Table).

Publication bias checking
Funnel plots were used to check the publication bias of the HIF1A gene 1772 C/T and 1790 G/A polymorphisms under the allelic models C vs. T and A vs. G, respectively. The conventionally constructed plots showed a symmetric distribution of ORs against the standard error and suggested no evidence of publication bias (Fig 4). Also, Begg's test and Egger's linear regression test confirmed no significant publication bias under the allelic model of the HIF1A 1772 C/T polymorphism (C vs. T allele; p-value = 0.9900 and 0.5052, respectively) or of the 1790 G/A polymorphism (A vs. G allele; p-value = 0.7284 and 0.8537, respectively) (S3 Table).

Sensitivity analysis
In this study, sensitivity analysis was performed to increase the reliability of the meta-analysis results. The studies that did not qualify under HWE were excluded to investigate the robustness of the attained results. The statistical associations of the results were not altered after excluding the respective studies, which confirmed the reliability of this meta-analysis (Tables 2 and 3).
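A minimal sketch of the Egger regression computation used above is given below; the log-ORs and standard errors are hypothetical, and the actual analysis was done with the R 'meta' package.

# Minimal Egger regression sketch: regress the standardized effect (logOR/SE)
# on the precision (1/SE); a non-zero intercept suggests small-study asymmetry.
# Inputs are hypothetical, not the paper's data.
import math

log_or = [0.10, 0.25, -0.05, 0.40, 0.15]
se     = [0.12, 0.30,  0.18, 0.45, 0.22]

y = [l / s for l, s in zip(log_or, se)]   # standardized effects
x = [1.0 / s for s in se]                 # precisions
n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
alpha = my - beta * mx                    # Egger intercept
resid = [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]
s2 = sum(r * r for r in resid) / (n - 2)
se_alpha = math.sqrt(s2 * (1.0 / n + mx ** 2 / sxx))
print(f"intercept = {alpha:.3f}, t = {alpha / se_alpha:.3f} (df = {n - 2})")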
False positive report probability (FPRP) and power analyses
We performed a false-positive report probability (FPRP) analysis to assess whether the associations reported previously were false positives. We preset an FPRP of 0.2 as the threshold for biological importance and a prior probability π of 0.01 to detect a significant OR [71]. We computed the statistical power and FPRP by fixing the odds ratio at 1.5 (or 1/1.5 for a protective effect) for identifying important biological effects [70]. It should be mentioned here that an OR of 1.5 is considered a plausible value for a significant biological effect [72,73]. An association was considered significant when the FPRP value was less than 0.2 [74]. Based on this analysis, the rs11549465 SNP significantly increased the overall disease risk in Caucasian patients; it also significantly increased the risk of diabetic complications and decreased the risk of COPD (Table 4). The rs11549467 SNP significantly decreased the overall disease risk for Asian patients and the risk in the CVD subgroup (Table 4).
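Since the explicit FPRP mechanism is only described in words in the Methods, the sketch below assumes the standard Wacholder et al. formulation, FPRP = α(1 − π)/[α(1 − π) + (1 − β)π], with an illustrative standard error for the power calculation.

# FPRP in the standard Wacholder et al. form (assumed here). The SE used for
# the power calculation is illustrative, not a value from the paper.
import math
from scipy.stats import norm

def power_for_or(or_alt, se_log_or, alpha=0.05):
    """Approximate power to detect log(or_alt) at two-sided level alpha."""
    z_a = norm.ppf(1 - alpha / 2)
    z = abs(math.log(or_alt)) / se_log_or
    return 1 - norm.cdf(z_a - z) + norm.cdf(-z_a - z)

def fprp(alpha, power, prior):
    """FPRP = alpha*(1-prior) / [alpha*(1-prior) + power*prior]."""
    return alpha * (1 - prior) / (alpha * (1 - prior) + power * prior)

pw = power_for_or(1.5, se_log_or=0.12)   # hypothetical SE of ln(OR)
print(f"power = {pw:.3f}, FPRP = {fprp(0.05, pw, 0.01):.3f}")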
When SNPs occur within a gene or in a regulatory region near a gene, they are known as cis-acting factors, and they may play a more direct role in disease development by affecting the gene's function. When SNPs occur far away from the disease-causing genes, they are known as trans-acting factors. The cis- and trans-acting factors are usually considered the causal and non-causal risk factors of disease development, respectively. SNPs can be silent when they occur within noncoding regions, or they may change the encoded amino acids when they occur within the coding region. They may influence promoter or enhancer activities, messenger RNA (mRNA) stability, and the subcellular localization of mRNAs and/or proteins, and hence may drive disease development. A post-transcriptional modification (PTM) in mRNA known as N4-acetylcytidine (ac4C), which occurs on cytidine, plays a vital role in the stability and regulation of mRNA translation. At least 15 nucleotide modifications are found in mRNA, of which m6A and N1-methyladenosine (m1A) are similar in function to ac4C. They play a significant role in the translation process of mRNA and its stability, which contributes to the progression of several human diseases [84][85][86][87]. If genotype data for the HIF1A SNPs (1772 C/T and 1790 G/A) become available for COPD, CVD, skin disease, and diabetic complication cases together with control samples, an effective disease prediction model may be developed using a suitable machine learning technique such as a logistic classifier (a minimal sketch follows at the end of this section). For example, some recent studies developed SNP-based disease prediction models [88,89]. However, there were some limitations in this study: (i) heterogeneity factors such as gender, age, smoking, drinking, blood pressure, and family history were not considered when estimating the combined effect for the overall or subgroup analyses, as in [56][57][58][59][60], because the dataset spanned multiple diseases (excluding cancer) and we could not focus on specific behavioral factors given the insufficient information in the underlying GWAS studies; (ii) the metadata were collected considering the English language only; (iii) some subgroup analyses may be affected by small subgroup sample sizes and unavailable data owing to the limited number of GWAS studies. In conclusion, this study reached a consensus decision about the association of HIF1A gene polymorphisms with the risk of multiple diseases excluding cancers. The meta-analysis results showed that the HIF1A 1772 C/T polymorphism is not significantly associated with overall disease risk. The HIF1A 1790 G/A polymorphism was associated with overall disease under the recessive model, where the A allele controls the disease even though it is recessive. The ethnicity subgroup analysis showed a significant association of the HIF1A 1772 C/T polymorphism with overall disease for the Caucasian population under all genetic models, where the C allele controls the disease, while the HIF1A 1790 G/A polymorphism was significantly associated with overall disease for the Asian population under the recessive model only, owing to the influence of the A allele. The subgroup analysis based on disease types showed that HIF1A 1772 C/T is significantly associated with chronic obstructive pulmonary disease (COPD), skin disease, and diabetic complications, where the C allele is the high risk factor for skin disease and diabetic complications and the low risk factor for COPD. The HIF1A 1790 G/A polymorphism showed a significant association with CVD under the homozygote model and with diabetic complications under the allelic and homozygote models.
The remaining diseases showed no significant association with the HIF1A gene under any of the five genetic models in the subgroup analysis. Taken together, the results of this study suggest that HIF1A could be a useful prognostic biomarker for COPD, CVD, skin disease, and diabetic complications. In the future, the availability of more SGA studies on different ethnic populations might shed more light on, and confirm, the association of HIF1A gene polymorphisms with different diseases.
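As a sketch of the SNP-based prediction model suggested in the discussion, assuming additive genotype coding for the two HIF1A SNPs and entirely simulated data (so the resulting AUC is meaningless except as a usage illustration):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Simulated features: minor-allele counts (0/1/2) for the two SNPs plus age.
X = np.column_stack([
    rng.integers(0, 3, n),    # rs11549465 (1772 C/T)
    rng.integers(0, 3, n),    # rs11549467 (1790 G/A)
    rng.normal(55, 10, n),    # illustrative covariate
])
y = rng.integers(0, 2, n)     # 1 = case, 0 = control

clf = LogisticRegression(max_iter=1000)
print("cross-validated AUC:",
      cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())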
2022-08-17T06:16:19.577Z
2022-08-16T00:00:00.000
{ "year": 2022, "sha1": "7cbb772bbd7c4f96039c61941c5f88366ca626e6", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "a54d243cced8fa18c24eaf49eb4011ce7b04ba88", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
245503880
pes2o/s2orc
v3-fos-license
Central retinal vein occlusion post-COVID-19 vaccination Coronavirus disease 2019 (COVID-19) is known to cause thromboembolic episodes apart from acute respiratory distress syndrome (ARDS). With large vaccine drives all across the world, there are a few case reports on post-vaccine thrombotic events seen with the AZD1222 (ChAdOx1) vaccine. Here, we present two cases of central retinal vein occlusion presenting immediately after receiving the second dose of the Covishield vaccine. Although a causal relationship cannot be drawn, the ophthalmologist should be aware of this adverse reaction. Case 1 A 50-year-old diabetic male came with right eye diminution of vision 4 days after receiving the second dose of vaccination (Covishield, AZD1222, ChAdOx1). The best-corrected visual acuity was OD 6/60 and OS 6/6. The anterior segment examination and intraocular pressure were within normal limits in both eyes. The fundus examination revealed disk edema, dilated and tortuous veins, and diffuse retinal hemorrhages with macular edema, suggestive of central retinal vein occlusion [Fig. 1a]. The left eye fundus showed mild non-proliferative diabetic retinopathy changes. Optical coherence tomography showed cystoid macular edema with a central foveal thickness of 571 µm in the right eye [Fig. 1b]. The lab reports revealed uncontrolled diabetes with an HbA1C of 13.2 and a deranged renal profile (blood urea: 80 mg/dL, creatinine: 1.9 mg/dL), with normal C-reactive protein (CRP) 1.9 mg/L (n: <5), d-dimer 233 ng/mL (n: <500), prothrombin time (PT)/activated partial thromboplastin time (APTT), differential leukocyte count (N60, L38, E2), complete blood count (CBC), basic coagulation profile, lipid profile, and cardiac echography, and a negative reverse transcriptase polymerase chain reaction (RT-PCR) for COVID-19. He was given an intravitreal injection of anti-vascular endothelial growth factor (VEGF) for the cystoid macular edema. Case 2 A 43-year-old female with an unremarkable systemic history presented with right eye sudden-onset diminution of vision 3 days after receiving the second dose of vaccination (Covishield, AZD1222, ChAdOx1). Her best-corrected visual acuity was OD 5/60 and OS 6/12. The anterior segment examination showed an immature senile cataract in both eyes and a dense central posterior subcapsular cataract in the right eye (OD). The intraocular pressure was within normal limits in both eyes. The fundus examination of the right eye showed a hyperemic and edematous disk, tortuous veins, and intraretinal hemorrhages in all quadrants, suggestive of impending central retinal vein occlusion [Fig. 1c and d]. The left eye fundus was unremarkable. The blood investigations revealed a raised erythrocyte sedimentation rate (ESR): 49, CRP 14.6 (n: <5), rheumatoid factor (RF) 11 (n: <8), and d-dimer 6077.4 ng/mL (n: <500). The differential leukocyte count was N65, L23, E12, and HbA1C was 4.6, with normal CBC, peripheral smear, serum angiotensin converting enzyme (ACE), bleeding and clotting time, renal function tests, lipid profile, and PT/APTT. RT-PCR of an oral/nasal swab for COVID-19 was negative. Optical coherence tomography (OCT) revealed no cystoid macular edema (CME), and fundus fluorescein angiography (FFA) could not be done as the patient was not willing to undergo an invasive procedure, so she was closely followed up. Discussion COVID-19 has led to a global-scale pandemic, creating an unprecedented burden on human health and public health processes.
Although acute respiratory distress syndrome (ARDS) represents the hallmark of COVID-19-associated clinical manifestations, thromboembolic events are catastrophic in patients with severe COVID-19 and have been linked to morbidity and mortality in patients infected with SARS-CoV-2, owing to hyperinflammation and pre-existing cardiovascular disease. The spike protein of SARS-CoV-2 mediates receptor binding and membrane fusion during infection, and these roles make it an attractive vaccine antigen. [1] With many vaccines under trial, India is running the largest vaccination campaign, with mainly Covishield (AZD1222) and Covaxin (BBV152) in circulation. The ChAdOx1 nCoV-19 vaccine (AZD1222) was developed at Oxford University and consists of a replication-deficient chimpanzee adenoviral vector, ChAdOx1, containing the full-length structural surface glycoprotein (spike protein) of SARS-CoV-2 with a tissue plasminogen activator leader sequence. [1] Lately, there have been reports of vascular thromboembolic catastrophes post-vaccination, especially with ChAdOx1. A Scottish national population-based analysis among 2.53 million people who received their first dose of SARS-CoV-2 vaccines revealed a potential association between receiving a first-dose ChAdOx1 vaccination and the occurrence of immune thrombocytopenic purpura (ITP), arterial thromboembolic, and hemorrhagic events, with an incidence of 1.13 cases per 100,000 vaccinations. Adverse events were also more frequent after the first dose of ChAdOx1 than after BNT162b2 (Pfizer). [2] Another study, involving people aged 18-65 years who received the ChAdOx1 vaccine in Denmark and Norway, observed increased rates of venous thromboembolic events, including cerebral venous thrombosis (standardized morbidity ratio of 1.97, 95% CI 1.50-2.54) and intracerebral hemorrhage (standardized morbidity ratio of 2.33, 95% CI 1.01-4.59). [3] There are a few reports of thrombotic events after Covishield in India too. [4,5] Greinacher et al. [6] suggested that interactions between the vaccine and platelets, or between the vaccine and PF4 (platelet factor 4), could play a role in the pathogenesis. The proposed mechanisms include the formation of autoantibodies against PF4, antibodies induced by the free deoxyribonucleic acid (DNA) in the vaccine that cross-react with PF4 and platelets, and binding of the adenovirus to platelets causing platelet activation. Since the vaccination of millions of people will be complicated by a background of thrombotic events unrelated to the vaccination, they suggested a PF4-dependent enzyme-linked immunoassay (ELISA) or a PF4-enhanced platelet-activation assay to confirm the diagnosis of vaccine-induced immune thrombotic thrombocytopenia arising through this novel mechanism of post-vaccine formation of platelet-activating antibodies against PF4, and they named this novel entity vaccine-induced immune thrombotic thrombocytopenia (VITT) to avoid confusion with heparin-induced thrombocytopenia. Apart from the thrombotic events, a few cases of neurological adverse effects have also been reported post-vaccination. Román et al. [7] reported the occurrence of three cases of acute transverse myelitis among 11,636 participants in the AZD1222 vaccine trials. To the best of our knowledge, there are no previous reports of retinal venous occlusion following AZD1222 vaccination. However, considering the rarity of such events, their potential risks should be interpreted in light of the proven beneficial effects of the vaccine.
Conclusion Herein, we report two cases of central retinal vein occlusion (CRVO) following Covishield vaccination, with the purpose of generating awareness among ophthalmologists regarding this rare adverse event and outlining the possible pathophysiology behind it. At this juncture, it would be premature to draw a causal relationship between the COVID-19 vaccine and CRVO. These reports should never discourage the vaccine rollout, but monitoring of the evolving data should be carried out by manufacturers and independent authorities before coming to a definitive conclusion. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
2021-12-24T06:17:02.793Z
2021-12-23T00:00:00.000
{ "year": 2021, "sha1": "7232e5235ca753fa6ec0f7067c58ad0352a75692", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/ijo.ijo_1757_21", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d0b229297f299b58cd1acbb2f3afaddd55085ea3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119114484
pes2o/s2orc
v3-fos-license
Second order tunneling of two interacting bosons in a driven triple well We investigate quantum tunneling of two repulsive bosons in a triple-well potential subject to a high-frequency driving field. By means of the multiple-time-scale asymptotic analysis, we evidence a far-off-resonant strongly interacting regime in which selective coherent destruction of tunneling can occur between the paired states and the unpaired states, and the dominant tunneling of the paired states is a second-order process. The two Floquet quasienergy bands of both kinds of states are given analytically, where a fine structure up to the second-order corrections is displayed. The analytical results are confirmed numerically based on the exact model, and may be particularly relevant to controlling correlated tunneling in experiments. Introduction Advances in laser technology have enabled studies of quantum tunneling and its coherent control for a single particle in light-induced quantum wells without dissipation [1,2]. Research attempting to manipulate quantum states has been underway for a long time [3,4]. The time-periodic driving field is a powerful tool to control the tunneling dynamics and can lead to important phenomena, such as dynamic localization [5,6], coherent destruction of tunneling (CDT) [7][8][9][10] and photon-assisted tunneling [11][12][13]. In recent years, the effects of interparticle interaction have attracted much attention. It has been shown that adjusting the interaction can give rise to richer behavior, including many-body selective CDT [14,15] and the second-order tunneling of two interacting bosons [16]. The two-body interaction model is the simplest model for studying interaction effects, and it has received much attention [17][18][19][20] since the seminal experimental result was reported [21]. The tunneling dynamics are governed by the interplay between the on-site interparticle interaction and the external field; the former can be tuned by the Feshbach resonance technique [22,23]. In the presence of interaction and a periodic external field, the quantum well system may be nonintegrable, which necessitates the use of a perturbation method for an analytical investigation. The multiple-scale technique is a very useful perturbation method and has been extensively employed for different physical systems [24][25][26][27][28][29][30]. It has been demonstrated that, in the multiple-scale perturbation method, the usual high-frequency approximation corresponds to the first-order perturbation correction [30]. In the far-off-resonant strongly interacting regime with a stronger reduced interaction [31], the high-frequency approximation is no longer valid. In this case, the dominant tunneling of doublons, which correspond to the paired states of [32], is a second-order process on a long time scale that can be described by the second-order perturbation correction. The correlated tunneling of two strongly interacting atoms, corresponding to time-resolved second-order tunneling, has been observed directly in an undriven double-well system [16]. Very recently, Longhi et al [32] studied the second-order effect of two far-off-resonant strongly interacting bosons in a periodically driven optical lattice by using a multiple-time-scale asymptotic analysis. As mentioned above, there have been a number of studies on the tunneling dynamics of two interacting atoms, which have focused on systems with double-well or optical lattice potentials.
The triple-well system is a bridge between the double-well and optical lattice systems, and it is very important for fully understanding the coherent control of particle tunneling in quantum wells [33][34][35][36][37][38][39][40]. From the triple-well (or equivalent three-level) system one can study richer physical phenomena, such as the effects of finite and multiple dimensions for a few-particle case, the simplest non-nearest-neighbor coupling and nonadjacent dipolar interaction [34,35], and the adiabatic transport of a quantum particle from the left well to the right well with negligible middle-well occupation [33,40]. The tunneling dynamics of two particles in a triple-well system have also attracted extensive attention [41][42][43]; however, research on the second-order effect of the system has not yet been reported. In this paper, we investigate the coherent control of second-order tunneling for two triple-well confined bosons driven by a high-frequency ac field. By means of the multiple-time-scale asymptotic analysis, we construct the second-order Floquet solutions, including the Rabi oscillation state and the quasi-NOOOON state, and characterize the quantum dynamics of the two bosons under a continuous increase of the interaction intensity. A far-off-resonant strongly interacting regime is demonstrated in which two bosons initially occupying the same well form a stable bound pair because of the CDT from the doublons to the unpaired states. Taking into account the second-order tunneling effect, selective CDT can also occur between the doublons or between the unpaired states in the six-dimensional Hilbert space. The result is confirmed by the Floquet quasienergy analysis, where the Floquet quasienergy band of the three unpaired states exhibits avoided level crossings (or new level crossings) at (or near) the collapse points, and the fine structure of the quasienergy band of the three doublons shows different level crossings beyond the former collapse points. Good agreement between the analytical and numerical results is shown. The theoretical predictions could be verified further with currently accessible experimental setups [16,21,37,40,44,45]. The model and high-frequency approximation We consider two interacting bosons confined in a triple-well potential and driven by an ac field. The Hamiltonian of the system in the tight-binding approximation is described by the three-site Bose-Hubbard model (equation (1)) [41][42][43], where the operator â_l (â†_l) annihilates (creates) a boson in well l; J denotes the nearest-neighbor hopping matrix element; U_0 is the on-site interaction energy; and ε(t) = ε cos(ωt) is the ac driving of amplitude ε and frequency ω. Throughout this paper we take ħ = 1 and use the parameter J to appropriately scale the energies U_0, ε and ω, such that these quantities become dimensionless [14]. Experimentally, the triple-well potential can be generated by different methods [37,40], and the periodic driving can be applied by shaking the system [44,45]. Alternatively, the driven two-boson Hubbard model can also be simulated by light transport in coupled optical waveguides [46]. Here we have assumed that the three wells are deep enough that the Wannier functions of the two interacting bosons belonging to different wells have very small overlap.
A Fock basis |N_L, N_M, N_R⟩ of the six-dimensional Hilbert space is useful to describe the system, where N_L, N_M and N_R are the numbers of bosons localized in the left, middle and right wells, respectively, with N_L + N_M + N_R = 2. The quantum state |ψ(t)⟩ of the system is expanded as the linear superposition (2) of the Fock states, where c_j(t) (j = 1, 2, ..., 6) denote the time-dependent probability amplitudes of finding the two bosons in the six different Fock states and obey the normalization condition Σ_{j=1}^{6} |c_j(t)|² = 1. Inserting equations (1) and (2) into the Schrödinger equation i∂_t|ψ(t)⟩ = Ĥ(t)|ψ(t)⟩ yields the coupled equations (3) for the amplitudes c_j(t). Although it is difficult to obtain exact analytical solutions of equation (3), we can approximately study some interesting phenomena in the high-frequency regime with ω ≫ J. To do so, we rewrite the interaction strength as U_0 = mω + u with |u| ≤ ω/2 and m = 0, 1, 2, ..., where u is the reduced interaction strength [31], and make the function transformations c_j(t) = a_j(t) exp[−iθ_j ϕ(t)] with appropriate integer phases θ_j, in particular c_5(t) = a_5(t) and c_6(t) = a_6(t) exp[−iϕ(t)], with the a_j(t) being slowly varying functions and ϕ(t) = ∫_0^t ε cos(ωτ) dτ = (ε/ω) sin(ωt). Then equation (3) is transformed into coupled equations in terms of a_j(t). Under the high-frequency approximation, the rapidly oscillating functions appearing in the equations can be replaced by their time averages, such that the equations for the a_j(t) become equation (4) [43], where J_m is the mth-order Bessel function of the first kind and the e^{±iut} are slowly varying functions for small u. In [27], Frasca studied localization in a driven two-level system beyond the high-frequency approximation by means of the multiple-scale perturbation method. Recently, Longhi [30] proposed that the well-known high-frequency approximation commonly used to study CDT corresponds to the first-order perturbation approximation of the multiple-time-scale asymptotic analysis. If the first-order correction term vanishes in the perturbation treatment, then the higher-order correction terms become important. Notice that, for a set of fixed external-field parameters, the dynamical behavior of the system (4) is governed by the self-interaction intensity. In this work we are not concerned with very strong interactions (e.g. U_0 ≳ 6ω), since for such interactions one must consider not only the usual on-site atom-interaction strength but also the interactions between atoms on neighboring lattice sites [17], which is beyond the considered case. When the condition J_0(ε/ω) = 0 is satisfied, equation (4) shows that CDT occurs in the weakly interacting case (m = 0, |u| ≪ ω), which drives all the first derivatives of the probability amplitudes to zero. This will be further confirmed by calculation of the Floquet quasienergies of the system in section 4. Besides, for the resonant strongly interacting case (u = 0, m = 1, 2, ...), CDT for the doublons is observed when the condition J_m(ε/ω) = 0 is satisfied, which drives the first derivatives ȧ_j(t) (j = 1, 2, 3) of the doublon-state amplitudes to zero. This is consistent with the behavior of two interacting electrons in quantum dot arrays found by numerical computation of the Floquet quasienergies [47]. As an example, we show the time evolutions of the probabilities P_j(t) = |c_j|² = |a_j|² (j = 1, 2, ..., 6) for the resonant case with J = 1, U_0 = ω = 80, ε/ω = 2.405 and P_2(0) = 1, P_{j≠2}(0) = 0, as in figure 1, where the first-order result (the circular points) from equation (4) is confirmed by the direct numerical simulation (the curves) of equation (3).
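Equation (3) can be checked numerically by building the 6 x 6 Hamiltonian matrix in the Fock basis and integrating the Schrödinger equation directly. A minimal Python sketch follows; since the displayed Hamiltonian did not survive extraction, the Fock-state ordering and the tilt form ε(t)(n̂_3 − n̂_1) of the driving term are assumptions, while the hopping and on-site terms follow the text.

import numpy as np
from scipy.integrate import solve_ivp

J, U0, omega = 1.0, 80.0, 80.0
eps = 2.405 * omega                     # parameters of figure 1

# Assumed Fock ordering: |200>, |020>, |002>, |110>, |101>, |011>
states = [(2,0,0), (0,2,0), (0,0,2), (1,1,0), (1,0,1), (0,1,1)]

def hop(s, i, j):
    # Amplitude and target state of a_i^dag a_j acting on Fock state s.
    t = list(s)
    if t[j] == 0:
        return 0.0, None
    amp = np.sqrt(t[j]); t[j] -= 1
    amp *= np.sqrt(t[i] + 1); t[i] += 1
    return amp, tuple(t)

def hamiltonian(t):
    H = np.zeros((6, 6), dtype=complex)
    for a, s in enumerate(states):
        # On-site interaction plus the assumed linear tilt of the ac drive.
        H[a, a] = 0.5 * U0 * sum(n * (n - 1) for n in s) \
                  + eps * np.cos(omega * t) * (s[2] - s[0])
        for i, j in [(0, 1), (1, 0), (1, 2), (2, 1)]:  # nearest-neighbor hopping
            amp, target = hop(s, i, j)
            if target in states:
                H[states.index(target), a] += -J * amp
    return H

c0 = np.zeros(6, dtype=complex); c0[1] = 1.0   # both bosons in the middle well
sol = solve_ivp(lambda t, c: -1j * hamiltonian(t) @ c,
                (0.0, 10.0), c0, max_step=1e-3)
P = np.abs(sol.y)**2                           # P_j(t) = |c_j(t)|^2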
From figure 1 we can see that transitions between the doublons, with probabilities P_j, j = 1, 2, 3, in figure 1(a), and the unpaired states, with probabilities P_j, j = 4, 6, in figure 1(b), happen periodically in the resonant case. We will come back to this property to compare it with the far-off-resonant case in the next section. It is known that equation (4) is a good approximation of equation (3) only for small values of the reduced interaction strength, |u| ≪ ω. When the |u| values tend to their maximum |u| = ω/2, the functions e^{±iut} vary moderately quickly compared with the rapidly oscillating driving field. Consequently, in equation (4), although the e^{±iut} may be replaced by their average value of zero [43], such that the probability amplitudes a_1(t), a_2(t) and a_3(t) of the doublons are approximately frozen, the effectiveness of the high-frequency approximation is partly lost. In particular, for moderate |u| values, which are neither very small nor too large, equation (4) is no longer a good approximation. For a stronger reduced interaction with a larger |u| value, we need to employ other approximation methods and to explore the second-order tunneling effects. Second-order tunneling in the far-off-resonant strongly interacting regime We will now consider the far-off-resonant case with a stronger reduced interaction to investigate the tunneling dynamics of the system, by means of the multiple-time-scale asymptotic analysis. In the high-frequency regime, we set σ = J/ω as a small positive parameter and t' = ωt as the rescaled time. The probability amplitudes a_j(t') (j = 1, 2, ..., 6) are expanded as a power series in σ, a_j(t') = a_j^(0)(t') + σ a_j^(1)(t') + σ² a_j^(2)(t') + ... . Since the higher-order infinitesimals can be neglected in the high-frequency regime, we approximately identify the probability amplitudes with the leading order, a_j(t') ≈ a_j^(0)(t') ≡ A_j(t'), where |A_j|² (j = 1, 2, ..., 6) denotes the probabilities of finding the two bosons in the six different Fock states of equation (2). According to the perturbation analysis in the appendix, these amplitudes are slowly varying functions of time and satisfy the linear equations (6) and (7) with constant coefficients, where the ρ_i (i = 1, 2) are set as in equation (8) (see the appendix), which is well defined for U_0/ω + n ≠ 0. Therefore, equations (6) and (7) are always definable and applicable, except in the resonant case, in which the ρ_i tend to infinity. It is worth noting that for a stronger reduced interaction obeying at least |u| > J, the value of any term in the summations of equation (8) is less than σ^{−1}, such that σ²ρ_i is a second-order quantity and equations (6) and (7) are applicable as a set of second-order equations. In fact, |ρ_i| < σ^{−1} implies an inequality that results in the conditions |u| > J and ω − 2|u| > 2J, validating the second-order approximation. In particular, we will numerically illustrate the parameter regions of |u|, ω and U_0 in which the second-order perturbation method works best, namely |u| ≳ 10J = ω/8 with U_0 = ω + u and |u| ≳ 5J = ω/16 with U_0 = 2ω + u. Combining equation (6) with equation (7), we note that the dynamics of the three doublons (i.e. the two bosons occupying the same site, with amplitudes A_j(t') for j = 1, 2, 3) is decoupled from that of the three unpaired states (i.e. the two bosons occupying distinct sites, with amplitudes A_j(t') for j = 4, 5, 6).
Clearly, for |u| > J, equations (6) and (7) describe the second-order approximation, where the time evolution of any doublon-state amplitude is a second-order, long-time-scale process, since its time derivative is proportional only to the second-order constant σ². Similar results have been seen previously in the tight-binding optical lattice [32]. The second-order coupling coefficients of equations (6) and (7) are proportional to the parameter σ²ρ_2, which describes the second-order tunneling rate of the system. A nonzero tunneling coefficient means that tunneling can occur between the three doublons, according to equation (6), and between the three unpaired states, according to equation (7). In figure 2 we plot the factor ρ_2 of the second-order tunneling coefficients as a function of the driving parameter ε/ω and the self-interaction intensity U_0/ω, where figure 2(b) is the plan view of figure 2(a). Combining equation (8) with figure 2, we see that the factor ρ_2 tends to infinity for any integer value of U_0/ω and arbitrary values of ε/ω, while its values are small enough in the considered far-off-resonant regime. Note that in figure 2 the very large ρ values are not shown, because we have avoided the integer values of U_0/ω by selecting a rational step, such that the multiple-time-scale asymptotic analysis holds. In the second-order approximation, equations (6) and (7) show that CDT between the doublons occurs provided that the condition ρ_2 = 0 is satisfied. According to equations (6) and (7), we make an exact comparison of the tunneling rates of the three doublons and the three unpaired states for two different initial conditions, as follows. First, for the two bosons initially occupying the middle well [i.e. P_2(0) = 1, P_{j≠2}(0) = 0], we seek the analytical solutions P_j(t) (j = 1, 2, ..., 6) of equations (6) and (7). To do so, we make the function transformations of equations (9) and (10). Inserting these expressions into equation (6) yields the coupled equations (11); combining equations (9) with (11) produces a closed solution, while equation (10) yields a decoupled equation. In figure 3 [caption: in (a) the dashed line corresponds to the probabilities P_{1,3}, and the thin and thick solid lines indicate the probabilities P_2 and P_{4,5,6}, respectively; in (b) the dashed line corresponds to the probabilities P_{4,6}, and the thin and thick solid lines indicate the probabilities P_5 and P_{1,2,3}, respectively; in this figure and in the following figures, all circular points indicate the analytical solutions and the curves represent the numerical results], we numerically plot the time evolutions of the probabilities P_j (j = 1, 2, ..., 6) based on equation (3) for the above two initial conditions, with the circular points corresponding to the analytical results. Obviously, the analytical results are in perfect agreement with the numerical simulations. The zero probability of the unpaired states in figure 3(a) and the zero probability of the doublons in figure 3(b) signify the CDT from the doublons to the unpaired states. This CDT leads to the interesting two-boson correlated tunneling. In order to further confirm the analytical results in equations (4), (6) and (7), we define the time-averaged total probability of finding the two interacting bosons in the three doublons as ⟨S⟩ = ⟨P_1 + P_2 + P_3⟩ = (1/τ) ∫_0^τ (P_1 + P_2 + P_3) dt, with τ = 200 J^{−1}.
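The time-averaged doublon probability just defined can be evaluated directly from a simulated trajectory; a small continuation of the earlier sketch, assuming the earlier integration was re-run with t_span = (0.0, 200.0) to match τ = 200/J:

import numpy as np

# Reuses sol from the earlier integration sketch.
P = np.abs(sol.y)**2
S_avg = np.trapz(P[0] + P[1] + P[2], sol.t) / sol.t[-1]
print("time-averaged doublon probability <S>:", S_avg)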
The normalization means that the time-averaged total probability in the three unpaired states is 1 − ⟨S⟩. Taking the initial conditions P_2(0) = 1, P_{j≠2}(0) = 0 and the parameter J = 1, from equation (3) we numerically obtain ⟨S⟩ as a function of the self-interaction U_0 for ω = 80 and ε/ω = 2.405, as in figure 5(a). It is shown that the time-averaged total probability in the three doublons possesses different features at the multiphoton resonance points, in the far-off-resonant regions, and in the near-resonant regions, respectively. First, at each of the resonance points (i.e. U_0 = mω with m = 1, 2, ..., 5 an integer), ⟨S⟩ drops to its lowest points, which means that the separation probability 1 − ⟨S⟩ of the two bosons is largest, as shown in figure 1. Second, the two bosons can also be separated in the near-resonant regions; however, the time-averaged total probability ⟨S⟩ tends to one and the separation probability tends to zero rapidly as the reduced interaction strength |u| increases beyond |u_1| ≈ 10J = ω/8 for U_0 = ω + u and |u_2| ≈ 5J = ω/16 for U_0 = 2ω + u, which belong to the far-off-resonant strongly interacting regime of this paper. Such a CDT enables the two bosons to form a stable bound pair that cannot move independently for a stronger reduced interaction [47]. Comparing with the similar phenomena in [21], it can be understood that any doublon state has a potential-energy offset U_0 with respect to the unpaired states, such that the breaking up of the stable pair is suppressed by the band structure and energy conservation of the system. Only in the resonant case can the boson pair be separated, which happens because of the resonant absorption of energy from the ac driving field. In the weakly interacting regime, CDT is expected to occur in figure 5(a) because ε/ω = 2.405 is the first root of J_0(ε/ω) = 0, which we will consider further in the next section. To show that the above analysis is generic in the high-frequency regime, we plot the time-averaged total probability ⟨S⟩ of the doublons as a function of the ratio U_0/ω of the interaction to the driving frequency in figure 5(b), with ε = 160 and U_0 = 200. Figure 5(b) explicitly shows that the lowest points of ⟨S⟩ appear at the resonance points U_0/ω = n for integer n in the high-frequency regime (n ≤ 30, i.e. ω ≥ 6.66). The results agree with the above analysis of figure 5(a). Plots similar to figures 5(a) and (b) can also be produced for the parameter regions ω ∈ [10, 80] and U_0 ∈ [10, 200], respectively, which are not shown here. It is worth noting that, in plots similar to figure 5(a), a lower frequency ω is associated with a smaller |u_m| value. It is well known that CDT can occur at the collapse points of the Floquet quasienergy spectrum [30,49,50], so the above-mentioned tunneling properties will now be confirmed by the following Floquet quasienergy analysis. Floquet quasienergy analysis The Floquet theory provides a powerful tool to analyze the dynamics of a time-periodic quantum system [51]. According to the Floquet theory, the solutions of the time-dependent Schrödinger equation can be written as |ψ_k(t)⟩ = e^{−iE_k t} |φ_k(t)⟩, with |φ_k(t)⟩ being the Floquet states and E_k the Floquet quasienergies. In analogy to the Bloch solutions for a spatially periodic system, a quasienergy can only be determined up to an integer multiple of the photon energy ω. For the sake of definiteness, it is usually assumed to vary in the first Brillouin zone −ω/2 < E ≤ ω/2.
The Floquet states inherit the period of the Hamiltonian and are eigenstates of the time-evolution operator over one period of the driving, U(T, 0) = T̂ exp[−i ∫_0^T Ĥ(t) dt], where T̂ is the time-ordering operator and T = 2π/ω is the period of the driving. Noticing that the eigenvalues of U(T, 0) are exp(−iE_k T), the quasienergies of this system can be determined directly as long as we diagonalize U(T, 0). In figure 6, selecting the parameters J = 1 and ω = 80, we show the numerical results for the quasienergy spectra as functions of the driving parameter ε/ω for U_0 = mω, m = 0, 1, with zero reduced interaction, respectively. In figure 6(a), for two noninteracting bosons, the quasienergy spectrum shows collapses at fixed values of the driving parameters for which J_0(ε/ω) = 0. The inset of figure 6(a) is an enlargement of the quasienergies near the collapse point ε/ω = 2.405, corresponding to the first zero of J_0(ε/ω), and shows an exact level crossing at this collapse point; this is analogous to a single boson in a triple-well system [35]. In the resonant regime with U_0 = ω = 80, the quasienergies of figure 6(b) show that crossings of some quasienergies and avoided crossings of the other quasienergies appear at the zeros (ε/ω = 3.832, ...) of the first-order Bessel function J_1(ε/ω). From equation (4) with m = 1, u = 0, we know that the first derivatives of the probability amplitudes of the three doublons, ȧ_j (j = 1, 2, 3), are equal to zero while those of the three unpaired states, ȧ_j (j = 4, 5, 6), do not vanish at the zeros of J_1(ε/ω), which indicates that CDT occurs between the three doublons at the crossings of the quasienergy bands, while tunneling can still appear between the three unpaired states at the avoided crossings of the quasienergy bands. In the considered case u = 0, equations (6) and (7) are no longer valid and any quasienergy may be associated with both the doublons and the unpaired states; however, different quasienergies may have different projections onto the doubly or singly occupied basis vectors. In [47], Creffield and Platero distinguished the quasienergies according to whether the Floquet states project mainly onto doubly or singly occupied basis vectors. Based on this scheme, we also numerically compute the different kinds of quasienergies and distinguish them in figure 6, where the thin lines indicate the quasienergies associated with states projecting mainly onto doubly occupied basis vectors and the heavy lines correspond to the singly occupied states. In figure 6(a), for U_0 = 0, we show that the curve of zero energy is a coincidence of a thin and a heavy line, while this coincident line separates into two nearby curves in figure 6(b) with U_0 = ω. Although the level crossings occur for all states in figure 6(a), they occur only for the doublons in figure 6(b), which shows the different tunneling properties (consistent with the above-mentioned analytical results). When the reduced interaction strength is sufficiently large (e.g. |u| > J), the quasienergy spectrum is divided into two energy bands, which correspond to the three doublons and the three unpaired states, respectively (as shown in figure 7), where the quasienergies of the doublons (or unpaired states) oscillate aperiodically near the value u (or 0); hence the width of the energy gap between the two bands is proportional to the |u| value.
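The quasienergies can be extracted numerically exactly as described: propagate each basis vector over one driving period to assemble U(T, 0), then diagonalize it. A sketch reusing hamiltonian and omega from the earlier code (the integration tolerances are illustrative choices):

import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi / omega
U = np.zeros((6, 6), dtype=complex)
for k in range(6):                      # one column of U(T, 0) per basis state
    e_k = np.zeros(6, dtype=complex); e_k[k] = 1.0
    sol = solve_ivp(lambda t, c: -1j * hamiltonian(t) @ c, (0.0, T), e_k,
                    max_step=1e-4, rtol=1e-10, atol=1e-12)
    U[:, k] = sol.y[:, -1]

# Eigenvalues of U(T, 0) are exp(-i E_k T); fold E_k into (-omega/2, omega/2].
E = -np.angle(np.linalg.eigvals(U)) / T
E = (E + omega / 2) % omega - omega / 2
print(np.sort(E))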
For a weaker interaction with U_0 = u = 2, CDT between the different doublons and between the different unpaired states can be realized for the same driving parameters, as indicated by the level-crossing points in figure 7(a), when the ratio of the field amplitude ε to the field frequency ω is a root of the equation J_0(ε/ω) = 0. Precise agreement between the numerical results based on equation (3) and the analytical results from equations (19), (20) and (23) is observed in figure 7(a) over a sufficiently large range of the ratio ε/ω. [In figure 7, the circular points are the analytical results of equations (19), (20) and (23), the thin dotted lines indicate the zeros of J_0(ε/ω), and the arrow in 7(b) marks the magnified region of the inset.] The small deviation between the two results in figure 7(a) indicates that the second-order perturbation method is perfectly applicable only in suitable parameter regions. We have also investigated the quasienergy spectra for |u| = 5, 6, ..., which are not shown here. The results verify that, in the far-off-resonant strongly interacting regime, ω/2 ≥ |u| ≥ |u|_1 ≈ 10J is just such a suitable parameter region. Interestingly, the energy band corresponding to the doublons becomes narrower and the energy gap tends to widen as the self-interaction intensity increases from |u| = 2 < |u|_1 to |u| = 30 > |u|_1. A wider gap means that the quantum transition between the two kinds of states is hard to achieve. A narrower band necessitates analyzing the Floquet quasienergy spectrum separately for the unpaired states and the doublons. Avoided level-crossing of unpaired states In the far-off-resonant strongly interacting regime, selecting the parameters J = 1, ω = 80 and U_0 = 50 (i.e. m = 1 and u = −30), from equation (3) we numerically plot the quasienergy spectrum versus ε/ω in figure 7(b). In this figure, the quasienergies corresponding to the unpaired states show collapses when ε/ω is a root of J_0(ε/ω) = 0; however, the energy band corresponding to the doublons has collapsed into an approximately straight line. The inset of figure 7(b) is an enlargement of the quasienergies of the three unpaired states near the first collapse point ε/ω ≈ 2.405. The fine structure of the energy spectrum shows that the pseudocollapse point is converted into an avoided-crossing point at ε/ω ≈ 2.405 and two distinct crossing points, owing to the second-order correction terms in equation (7). To explain the numerical result, from equation (7) we analytically calculate the quasienergies of the three unpaired states. Note that the period of the functions exp[−iϕ(t)] and exp[±2iϕ(t)] is T. Therefore, we can construct the Floquet states by setting [35] A_j(t) = B_j exp(−iEt) (j = 4, 5, 6) for the three unpaired states, with constant B_j, and rewriting equation (7) in the time-independent form (17). The existence condition for a nontrivial solution of equation (17) reads as equation (18), from which we obtain the three Floquet quasienergies (19) of the three unpaired states, with the abbreviations set in equation (20). We now compare the analytical results of equations (19) and (20) with the numerical computation based on the original equation (3). A typical behavior of the quasienergies near the first crossing point is plotted in the inset of figure 7(b). It is clearly shown that the analytical result (the circular points) is in perfect agreement with the direct numerical computation (the curves).
A fine structure of the quasienergy spectrum of doublons Next, we examine some detailed features of the quasienergies corresponding to the three doublons. In [32], Longhi et al proposed that in a lattice system CDT can be realized between the doublons and between the unpaired states for the same parameters, namely when the field parameters take the second root ε/ω = 5.52 of J_0(ε/ω) = 0 and the interaction intensity obeys U_0/ω = 2.58, corresponding to ρ_2 = 0. Here, for the triple-well system, we prove a similar result and exhibit a fine structure of the quasienergy spectrum of the doublons, based on the analytical Floquet solutions of equations (6)-(8). According to equation (6), CDT occurs between the doublons if the condition ρ_2 = 0 is satisfied. From equation (8), we have ρ_2 = 0 at ε/ω ≈ 0.95 in the region ε/ω ∈ [0, 8] for U_0/ω = 1.6, and ρ_2 = 0 at ε/ω ≈ 1.20, 2.02, 5.52 and 5.74 in the same region for U_0/ω = 2.58. Selecting these two different values of the self-interaction intensity, from equation (3) we numerically plot the quasienergy spectrum versus ε/ω in figures 8(a) and (b), respectively, with the insets being enlargements of the quasienergies of the three doublons. At the points fitting ρ_2 = 0, the level crossing of two quasienergies occurs. This indicates that CDT for the doublons can be observed at the crossing points of the partial levels. The predictions of the perturbation analysis can be confirmed by direct numerical computation of the temporal evolution of the boson occupation probabilities from equation (3) (not depicted here). We then analytically calculate the quasienergies corresponding to the three doublons. Substituting the Floquet solutions A_j(t) = B_j exp[−i(E − U_0)t] (j = 1, 2, 3) into equation (6), we readily obtain a time-independent eigenvalue problem, whose existence condition for a nontrivial solution yields the three Floquet quasienergies (23) of the three doublons. We note that the quasienergies E_j for j = 4, 5, 6 should be converted to E'_j in the first Brillouin zone (i.e. E'_j = E_j − mω = u + E_j − U_0). Thus, the quasienergies are mainly decided by the reduced interaction strength u in the high-frequency regime, and their positions depend on the sign and the magnitude of u. At the same time, the quasienergies are no longer degenerate, owing to the strong-interaction-induced second-order corrections. For example, U_0 = 2.58ω ≈ 206.4, so the quasienergies are rewritten as E'_j = E_j − 3ω for ω = 80 (j = 4, 5, 6). We plot the analytical quasienergies versus ε/ω for U_0/ω = 1.6 and 2.58 in the insets of figures 8(a) and (b) as circular points, respectively, which are in perfect agreement with the direct numerical computations (the curves) based on the original equation (3). Conclusion and discussion We have investigated the tunneling dynamics of two bosons in a high-frequency-driven triple well for a continuously increasing interaction intensity, by means of the multiple-time-scale asymptotic analysis. In the far-off-resonant strongly interacting regime, we have analytically constructed the second-order Floquet solutions, including the Rabi oscillation state and the quasi-NOOOON state. It is shown that the dominant tunneling effect of doublons is a second-order process, similar to two bosons in a driven optical lattice [32].
For a stronger reduced interaction, we have made an exact comparison of the tunneling rates of the doublons and the unpaired states, and we find that two bosons initially occupying the same well form a stable bound pair, while stable unpaired states are likewise maintained for two initially unpaired particles. Therefore, selective CDT between the doublons or between the unpaired states can occur for different values of the interaction intensity. However, in the near-resonant case such initially paired bosons can separate, owing to the multiphoton resonance. Furthermore, we have calculated the Floquet quasienergy spectrum and demonstrated that, for reduced interaction strengths obeying |u| > J, the quasienergy spectrum is divided into two energy bands corresponding to the three doublons and the three unpaired states. The width of the energy gap between the two bands is proportional to the |u| value. The prediction of CDT is confirmed by the quasienergy spectra, in which avoided level crossings and new level crossings near the collapse points are exhibited for the three unpaired states owing to the second-order corrections, while for the three doublons a fine structure of the quasienergy spectrum up to second order is displayed, by which we show the different level crossings beyond the former collapse points. These analytical results are consistent with the direct numerical computations from the time-dependent Bose-Hubbard Hamiltonian for sufficiently large values of the reduced interaction strength |u|, which allows the wide parameter regions ω ∈ [10, 80] and U_0 ∈ [10, 200] to fit experimental requirements. The second-order results on the long time scale may be conveniently applied to the adiabatic manipulation [4,52] of paired-particle correlated tunneling in experiments. In the experiments [16,21], the second-order tunneling of two strongly interacting bosons has been observed in an undriven optical lattice [21] and an undriven double well [16]. The triple-well potential has also been prepared by using different methods [37,40]. Combining these with the periodic-shaking method [44,45] and the Feshbach resonance technique [22,23], experimental verification of the theoretical predictions of this paper could be expected with the currently accessible setups [16,21,37,40,44,45]. In particular, the results for the triple-well system can be extended to the three-level system [33] and the triple-quantum-dot system [9], exhibiting richer new physics. For the convenience of our discussion, we simplify equation (A.5) as i ∂a_j^(1)/∂T_0 = −i ∂A_j/∂T_1 + G_j^(1)(T_0) for j = 1, 2, ..., 6. To avoid the occurrence of secularly growing terms in the solution a_j^(1), the solvability condition [30,32] i ∂A_j/∂T_1 = Ḡ_j^(1)(T_0) (A.6) must be satisfied, where the overline denotes the time average with respect to the fast time variable T_0 (i.e. the dc component of the driving term G_j^(1)(T_0)). The amplitudes a_j at order σ are then given accordingly. Employing equations (A.5) and (A.6), one obtains equation (A.8), so the solutions at order σ read as given through equation (A.10). We note from equation (A.8) that the probabilities of finding the two strongly interacting bosons in the same wells are constant in time up to the first-order time scale T_1. Therefore, we need to carry the asymptotic analysis to order σ². Following the same procedure as outlined above, we arrive at the second-order equations, where we have set the quantities of equation (8) for U_0/ω + n ≠ 0.
Thus the evolution of the amplitudes A_j up to the second-order long time scale is given by the corresponding solvability condition at order σ², from which we obtain the two sets of coupled equations, equations (6) and (7).
2013-04-01T01:13:51.000Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "f2e8f35e7b8cfb0c55ccbeb4ca0fbe8c7788cdb2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1367-2630/15/12/123020", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "802a008a18d377525befc7a4f51edd4cf80be61f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
263279125
pes2o/s2orc
v3-fos-license
Identifying the Relationship between PM2.5 and Hyperlipidemia Using Mendelian Randomization, RNA-seq Data and Model Mice Subjected to Air Pollution Air pollution is an important public health problem that endangers human health. However, the causal association and pathogenesis between particles < 2.5 μm (PM2.5) and hyperlipidemia remain incompletely understood. Mendelian randomization (MR) and transcriptomic data analysis were performed, and an air pollution model using mice was constructed to investigate the association between PM2.5 and hyperlipidemia. MR analysis demonstrated that PM2.5 is associated with hyperlipidemia and the triglyceride (TG) level in the European population (IVW method for hyperlipidemia: OR: 1.0063, 95%CI: 1.0010-1.0118, p = 0.0210; IVW method for TG level: OR: 1.1004, 95%CI: 1.0067-1.2028, p = 0.0350). Mest, Adipoq, Ccl2, and Pcsk9 emerged among the differentially expressed genes of the liver and plasma of PM2.5 model mice, which might mediate the atherosclerosis accelerated by PM2.5. The animal model shows that Paigen Diet (PD)-fed male LDLR−/− mice had higher total cholesterol (TC), TG, and CM/VLDL cholesterol levels than the control group after ten intranasal instillations of 5 mg/kg PM2.5 given once every three days. Our study revealed that PM2.5 has a causal relationship with hyperlipidemia, and PM2.5 might affect liver secretion, which could further regulate atherosclerosis. The lipid profile of PD-fed Familial Hypercholesterolemia (FH) model mice is more likely to be jeopardized by PM2.5 exposure. Introduction Air pollution is an important public health problem that jeopardizes human health. Particles < 2.5 µm (PM2.5) come from the combustion of coal, oil, and gasoline and from the transformation products of nitrogen oxides (NOx) and sulfur dioxide (SO2) [1], and are composed of sulfate, nitrate, ammonium, hydrogen ions, elemental carbon, organic compounds, polycyclic aromatic hydrocarbons (PAHs), metals, particle-bound water, and biogenic organic material [2]. PM2.5 can be deposited in the respiratory bronchioles and alveoli, where gas exchange occurs [3]. These particles can impair gas exchange, penetrate the lungs, and escape into the bloodstream, causing significant cardiovascular problems [4]. Arteriosclerotic cardiovascular disease (ASCVD) is the leading cause of human mortality around the world [5], and dyslipidemia, composed of hypercholesterolemia, hypertriglyceridemia, hypoalphalipoproteinemia, and hyperbetalipoproteinemia, is a major cause of atherosclerosis [6]. Numerous studies have shown that air pollution can affect lipid metabolism and cardiovascular disease incidence and mortality; however, the results are inconsistent (Table S1). Yang et al. conducted a large-scale epidemiological study of 15,477 subjects in 33 communities in China and found that long-term ambient air pollution is associated with dyslipidemia, especially among overweight or obese patients [25]. PM2.5 is positively associated with total cholesterol (TC), triglyceride (TG), and low-density lipoprotein cholesterol (LDL-C) and negatively associated with high-density lipoprotein cholesterol (HDL-C) in the study by Zhang et al. [23]. Nevertheless, Mao et al.
demonstrated that an increment in PM2.5 is related to increases in TC, LDL-C, hypercholesterolemia, hyperbetalipoproteinemia, and hypoalphalipoproteinemia, and associated with decreases in TG and HDL-C [27]. Therefore, the relationship between exposure to atmospheric fine particulate matter and plasma TC is mostly positive, while the relationship between exposure and TG is positive in some studies [23,25,28] and negative in others [19,27,29]. Regarding air pollution animal models, Song et al. found that after exposure to PM2.5 versus filtered air (FA), TG and TC levels were increased and HDL was decreased in both C57BL/6 and db/db mice [30]. The average concentrations of PM2.5 in the exposure chamber and FA chamber were 324.2 ± 45.2 µg/m3 and 17.3 ± 3.7 µg/m3, respectively [30]. Meanwhile, there is still a lack of studies on PM2.5 and hyperlipidemia at the human genome-wide level. Transcriptome sequencing analyses in animal model studies and the distributions of lipids and lipoproteins in model animals are seldom reported. We attempted to analyze the association between PM2.5-related gene loci and hyperlipidemia gene loci through the GWAS database and to explore the association between PM2.5 exposure and hyperlipidemia through transcriptome data and laboratory test results from air pollution model mice. The RNA-sequencing data of mouse liver and plasma were acquired from the Gene Expression Omnibus database (GEO database: GSE146508). Mendelian Randomization Analysis We chose SNPs as instrumental variables at the p < 1 × 10−5 significance level, which carries a low possibility of weak-instrumental-variable bias in the MR analysis, since only 8 SNPs passed the genome-wide significance threshold of 5 × 10−8 [31]. MR methods, including the simple median, weighted median (WM), inverse variance weighted (IVW), MR-Egger, weighted mode, and simple mode, were selected to evaluate the causal effect between PM2.5 and hyperlipidemia. Among them, IVW is the major analysis method [32]. The weighted median method [33] and the MR-Egger method [32] were conducted for sensitivity analysis to account for potential bias from unknown pleiotropy. The MR-Egger estimate is less precise than that from IVW because its variance is additionally affected by the variability among the genetic associations with the exposure. A heterogeneity test was conducted using Cochran's Q-test to identify whether the MR results were biased by potentially heterogenic factors. A leave-one-out permutation test was performed to assess whether the IVW estimate was biased by the influence of particular SNPs. Causal estimates between PM2.5 and the hyperlipidemia risk were expressed as odds ratios (OR) with a 95% confidence interval (CI) per standard deviation increment. All analyses with p < 0.05 were considered statistically significant. All statistical analyses were performed using R Studio (R version 4.3.0) and the R package "TwoSampleMR". Enrichment Analysis To further investigate the biological mechanisms of the DEGs, GO, KEGG, and GSEA analyses were conducted using the "ClusterProfiler" R package. The three categories assessed via GO analysis were as follows: biological process (BP), cellular component (CC), and molecular function (MF), which describe the molecular biological function of the selected genes. The STRING app of Cytoscape v.3.8.2 was used to conduct the PPI analysis of the intersecting DEGs.
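The analyses above were run with the TwoSampleMR package in R; as a language-neutral illustration of what the IVW and MR-Egger estimators compute, here is a minimal numpy sketch on hypothetical per-SNP summary statistics (all numbers are invented for illustration):

import numpy as np
import statsmodels.api as sm

# Hypothetical per-SNP effects: beta_x on PM2.5 exposure, beta_y on
# hyperlipidemia (log odds), and the SE of beta_y.
beta_x = np.array([0.021, 0.015, 0.030, 0.018, 0.025])
beta_y = np.array([0.0002, 0.0001, 0.0004, 0.0001, 0.0003])
se_y = np.array([0.0001, 0.0001, 0.0002, 0.0001, 0.0001])

# IVW: weighted regression of beta_y on beta_x through the origin.
w = 1.0 / se_y**2
b_ivw = np.sum(w * beta_x * beta_y) / np.sum(w * beta_x**2)
print("IVW OR per SD of PM2.5:", np.exp(b_ivw))

# MR-Egger: the same regression with an intercept; a nonzero intercept
# signals directional pleiotropy.
egger = sm.WLS(beta_y, sm.add_constant(beta_x), weights=w).fit()
print("Egger intercept p:", egger.pvalues[0], "slope:", egger.params[1])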
Estimation of Immune Cell Infiltration in Mice Livers CIBERSORT tools were used to explore differences in immune cell marker expression in the mice livers between cases and controls. The CIBERSORT reference set of gene expression features for 22 immune cell subtypes was used. Animal Model Ten 7-week-old male C57BL/6J mice weighing 20-25 g were purchased from Beijing HFK Bio-Technology Company, and twelve 7-week-old male LDLR−/− mice weighing 20-25 g were obtained from the State Key Laboratory of Vascular Homeostasis and Remodeling (Peking University, Beijing, China). The sample size was calculated according to the sample size formula [35]. All animals were allowed to adapt to the animal room environment for one week before the study. C57BL/6J mice were fed a chow diet during the construction of the air pollution model. Eight-week-old LDLR−/− mice were fed a Paigen diet (PD; RESEARCH DIETS, D12109C, New Brunswick, NJ, USA; HF Rodent Diet with Regular Casein, 1.25% Added Cholesterol and 0.5% Sodium Cholate) for 6 weeks to further assess the effect of PM2.5 on plasma lipids. Mice of each strain were randomly assigned to two groups via a randomized block design: a PM2.5 group and a saline group. Intratracheal instillation is a more convenient method than exposure chambers in air pollution studies and allows exposure doses to be calculated easily [36,37]. According to the lung surface area-dose exchange algorithm [34,38], we converted the human PM2.5 exposure dose (500 µg/m3) into the corresponding mouse dose (5 mg/kg). The intratracheal instillation was performed on 8-week-old C57BL/6J mice and 8-week-old LDLR−/− mice after two weeks of being fed a chow diet (CD) or PD, respectively. During the procedure, the mice were anesthetized with sodium pentobarbital (50 mg/kg) via intraperitoneal injection, and a rodent respirator (ALCV9A; Shanghai Alcott Biotech Co., Ltd., Shanghai, China) was used for a ventilation test to ensure successful tracheal intubation. The mice in the PM2.5 group were given 5 mg/kg PM2.5 [39][40][41] (in 50 µL 0.9% normal saline) once every three days, 10 times [42]; mice in the control group were given 50 µL 0.9% normal saline at the same frequency. CD and PD feeding continued during the instillation period. After 6 weeks of eating a CD or PD, the mice were anesthetized with 1% pentobarbital sodium, the plasma and organs were collected for further analysis, and all mice were sacrificed by exsanguination. All procedures followed the guidelines of Laboratory Animal Care (NIH Publication No. 85Y23, revised 1996), and the experimental protocol was approved by the Animal Care Committee, Peking University First Hospital (J2022109). The Assays of Plasma Lipids and Lipoproteins Blood samples were collected from the retro-orbital plexus of the mice after 4 h of fasting under sodium pentobarbital anesthesia. The plasma TC and TG levels were determined enzymatically using commercially available kits (100000180, Total Cholesterol Assay Kit (CHOD-PAP), calibration product 150 mg/dL-230 mg/dL; 100000220, Triglyceride Assay Kit (TG), enzyme colorimetry (GPO-PAP), calibration product 1.2 mmol/L-3.2 mmol/L; Zhongsheng Beikong, Beijing, China).
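CIBERSORT itself estimates cell fractions with nu-support-vector regression against its signature matrix; as a simplified stand-in for the deconvolution idea used in the immune-infiltration step above, a non-negative least-squares fit on simulated data looks like this (the signature matrix and fractions are random, purely for illustration):

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
S = rng.lognormal(size=(200, 22))          # genes x 22 immune cell subtypes
true_frac = rng.dirichlet(np.ones(22))     # hidden mixing fractions
b = S @ true_frac + rng.normal(0, 0.1, 200)  # simulated bulk liver profile

frac, _ = nnls(S, b)          # non-negative mixing weights
frac /= frac.sum()            # normalize to fractions summing to 1
print(frac.round(3))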
To analyze the lipid distribution, fast protein liquid chromatography (FPLC) of plasma lipoproteins was performed using 200 µL of pooled plasma from the animals of each group with the indicated genotypes; samples were filtered through 0.22 µm filters and then applied to Tricorn high-performance Superose S-6 10/300 GL columns (Amersham Biosciences), eluting with PBS at a constant flow rate of 0.25 mL/min. The eluted fractions (500 µL per fraction) were assessed for TG and cholesterol concentrations using the same TG and cholesterol kits described above.

qPCR

Total RNA from the mouse livers was isolated using TRIzol Reagent (ET111-01, TransGen Biotech, Beijing, China). Briefly, the total RNA was reverse-transcribed to cDNA using a Reverse Transcription Reagent Kit (AH301-02, TransGen Biotech, Beijing, China). The resulting cDNA was amplified via 40 cycles of qPCR using Top Green qPCR SuperMix (AQ132-24, TransGen Biotech, Beijing, China). The mRNA level of each target gene was normalized to endogenous β-actin expression. The ∆∆Ct method was used to evaluate relative expression levels (fold changes). The primer sequences used in our study are listed in Table S2.

Statistical Analysis

All data from the animal model passed the normality tests (Shapiro-Wilk and Kolmogorov-Smirnov) and are presented as means ± SEM. Statistical comparisons were performed using an unpaired two-tailed t-test. GraphPad Prism 9.0 software was used for statistical analyses. A value of p < 0.05 indicates a statistically significant difference.

Mendelian Randomization Indicates a Genetic Association between PM2.5 and Hyperlipidemia

In general, this MR study analyzed a total of 423,796 European individuals. We extracted IVs that were significantly associated with PM2.5 from the GWAS (p < 1 × 10⁻⁵).

As shown in Table 1 and Figure 1A, the MR analyses revealed causal associations between PM2.5 and hyperlipidemia in the European cohort. A causal inference of genetic liability between PM2.5 and hyperlipidemia in the European population was noted (Table 1 and Figure 1A), as was a causal inference of genetic liability between PM2.5 and the TG level (Table 2 and Figure 1D). Sensitivity analyses for MR were performed. Using leave-one-out analysis, we discovered no single SNP that drove the causal link between PM2.5 and hyperlipidemia/the TG level (Figure 1C,F). We performed a pleiotropy test to investigate horizontal pleiotropy (Figure 1B,E), and the results confirmed that pleiotropy was unlikely to bias the causal relationship (p > 0.05). Moreover, bidirectional Mendelian randomization was conducted to analyze reverse causal relationships between PM2.5 and hyperlipidemia/the TG level. Although the Mendelian randomization analysis indicated that PM2.5 can cause hyperlipidemia or affect the TG level, neither hyperlipidemia nor the TG level led to a higher risk of PM2.5 exposure.

The significant SNPs of PM2.5 were mapped to 34 genes using NCBI. We then evaluated the DEGs in the livers of air pollution model mice from GSE146508 of the GEO database and found that Clcn1 was differentially expressed between the PM2.5 and control groups.
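Returning to the ∆∆Ct method described in the qPCR section above, the 2^(−ΔΔCt) calculation can be written out explicitly. This is a minimal base-R sketch with made-up Ct values; the function and data are illustrative, not the authors' code.

```r
# Relative expression by 2^(-ddCt): normalise each target Ct to beta-actin,
# then calibrate to the mean dCt of the control group.
ddct_fold_change <- function(ct_target, ct_actb, group) {
  dct  <- ct_target - ct_actb                  # normalise to endogenous beta-actin
  ddct <- dct - mean(dct[group == "control"])  # calibrate to the control group mean
  2^(-ddct)                                    # relative expression (fold change)
}

# Example with hypothetical Ct values, three control and three PM2.5 mice
grp <- c("control", "control", "control", "PM2.5", "PM2.5", "PM2.5")
fc  <- ddct_fold_change(c(24.1, 24.3, 24.0, 25.6, 25.9, 25.4),  # target gene Ct
                        c(18.0, 18.2, 17.9, 18.1, 18.0, 18.2),  # beta-actin Ct
                        grp)
```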
RNA-seq Data of Liver and Plasma in Air Pollution Model Mice Revealed Hub Genes in Muscle Contraction and Lipid Metabolism

The "limma" package of R software was utilized to analyze the differentially expressed genes (DEGs) in the liver and plasma RNA samples from the mice of the GSE146508 set; a minimal sketch of this step is given after this subsection. With a cutoff for log2FC in the liver of 0.645, 706 genes were downregulated and 515 genes were upregulated (Figure 2A). The DEGs of the liver are displayed in the heatmap (Figure 2B).

To reveal the interactions between the DEGs in the livers of model mice exposed to air pollution, we conducted GO, KEGG, and GSEA enrichment and PPI network analyses. In the GO-BP analysis (Figure 2C), the recombination of immune receptors built from the immunoglobulin superfamily, muscle contraction, and muscle system processes were the most significantly enriched. In the GO-CC analysis (Figure 2C), the major pathways were associated with the actin cytoskeleton, contractile fiber, and myofibril. The GO-MF results (Figure 2C) mainly refer to actin binding, compound binding, and channel activity. These GO enrichment results revealed that muscle contraction, in which Clcn1 takes part, might play a prominent role in PM2.5-exposed mice. Meanwhile, KEGG analysis enriched the pathways of cytokine-cytokine receptor interaction and vascular smooth muscle contraction (Figure 2D). GSEA indicated results similar to the GO and KEGG analyses (Figure 3A).

Clcn1-related genes in humans were mapped using STRING: functional protein association networks (string-db.org) (Figure 3B). We overlapped the mouse liver DEGs and muscle contraction-related genes from NCBI (Figure 3C) and illustrated the PPI network using Cytoscape tools (Figure 3E). A correlation test was performed for the hub genes Clcn1, Scn4a, and Tnnt3, which all exist in the human and mouse data (Figure 3D).

With a cutoff for log2FC in plasma of 1.000, 552 genes were downregulated and 858 genes were upregulated (Figure 4A). The DEGs of plasma are displayed in the heatmap (Figure 4B). Subsequently, we looked for overlap between lipid metabolism-related genes from the NCBI database and the DEGs from liver and plasma in GSE146508; four genes (Mest, Adipoq, Ccl2, and Pcsk9) are shown in the Venn diagram (Figure 4C). These intersecting DEGs indicate a close relationship between PM2.5 and lipid metabolism. In addition, the regulation of the lipid metabolic process, the positive regulation of the lipid metabolic process, the neutral lipid metabolic process, and lipid localization pathways were enriched in the GO analysis (each p < 0.05).

The PPI network of the 48 overlapping DEGs in liver and lipid metabolism-related genes was illustrated using STRING (Figure 4D) and rearranged in Cytoscape (Figure 4E). Adipoq and Ccl2 are among the top 13 hub genes, and the Adipoq and Ccl2 mRNA expression levels were decreased in the RNA-seq data (Figure 4F). The hub genes Adipoq and Ccl2 are thought to play essential roles in PM2.5-related hyperlipidemia.
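As flagged above, the limma differential-expression step can be sketched minimally as follows. It assumes `expr` is a normalized log2 expression matrix from GSE146508 and `group_labels` marks each sample as "PM2.5" or "control"; the exact significance filter is not stated in the paper, so the adjusted p < 0.05 threshold below is an assumption.

```r
library(limma)

group <- factor(group_labels, levels = c("control", "PM2.5"))
design <- model.matrix(~ 0 + group)
colnames(design) <- levels(group)

fit <- lmFit(expr, design)                                  # per-gene linear model
fit <- contrasts.fit(fit, makeContrasts(PM2.5 - control, levels = design))
fit <- eBayes(fit)                                          # moderated t-statistics
tab <- topTable(fit, coef = 1, number = Inf)

# Liver cutoff reported in the paper: |log2FC| > 0.645 (plasma used 1.000);
# the adjusted-p filter is an assumption, not stated in the text.
deg <- subset(tab, abs(logFC) > 0.645 & adj.P.Val < 0.05)
table(sign(deg$logFC))  # the text reports 706 down- and 515 up-regulated liver genes
```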
Immune Infiltration Analysis

Based on the enrichment analysis results, we used the CIBERSORT algorithm to evaluate the immune cell distribution in model mouse livers (Figure 5A) and the associations between PM2.5 exposure and immune infiltration (Figure 5B). PM2.5 exposure was associated with upregulated naïve CD8+ T cells, Th2 cells, and monocytes and with downregulated activated CD8+ T cells and memory CD4+ T cells (Figure 5B). Moreover, to understand sex differences in immune infiltration, the RNA-seq data were divided into male and female groups. In male mice, naïve CD8+ T cells and Th2 cells were upregulated, and memory CD4+ T cells were remarkably downregulated in the PM2.5 group (Figure 6A). In female mice, monocytes and Th17 cells were increased, and M0 macrophages and follicular CD4+ T cells were significantly decreased in the PM2.5 group (Figure 6B).

The four hub genes were significantly associated with different immune cells, indicating a relationship between lipid metabolism-related secretory factors and the PM2.5-affected immune environment (Figure 5C). Adipoq is positively related to activated CD8+ T cells and negatively related to naïve CD8+ T cells and Th2 cells. Ccl2 is negatively related to eosinophils, naïve CD8+ T cells, Th1 cells, and resting NK cells. Pcsk9 is highly positively correlated with monocytes but has a negative relationship with naïve CD8+ T cells, Th1 cells, and Th2 cells. Mest is positively correlated with activated CD8+ T cells but is negatively related to naïve CD8+ T cells, Th1 cells, Th2 cells, and resting NK cells.
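For the deconvolution and hub-gene correlations above, a minimal sketch follows. It assumes the standard CIBERSORT.R script (which defines a `CIBERSORT()` function) and its LM22 signature matrix have been obtained separately; the file names, permutation count, and use of Spearman correlation are placeholders rather than the authors' stated settings.

```r
source("CIBERSORT.R")  # canonical script distributed by the CIBERSORT authors

# Estimate fractions of 22 immune cell subtypes per liver sample
frac <- CIBERSORT(sig_matrix   = "LM22.txt",
                  mixture_file = "liver_expression.txt",
                  perm = 100, QN = FALSE)

# Correlate a hub gene (e.g., Adipoq) with each estimated fraction, as in
# Figure 5C; `expr` is the same expression matrix with samples in columns.
adipoq <- as.numeric(expr["Adipoq", rownames(frac)])
cors <- apply(frac[, 1:22], 2, function(f)
  cor.test(adipoq, f, method = "spearman")$estimate)
sort(cors)  # positive/negative associations with each immune subset
```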
Lipid and Lipoprotein Profiles in Model Mice Exposed to Intranasal Instillation-Induced Air Pollution

Ten 8-week-old male C57BL/6J wild-type (WT) mice and twelve 8-week-old male LDLR−/− mice were used to construct the air pollution model. Intranasal instillation was performed once every 3 days, 10 times in total, after 2 weeks of chow diet or Paigen diet (PD) feeding. All animals were sacrificed on the day after the last instillation. First, we tested the mRNA expression levels of Adipoq and Ccl2 using qPCR in the livers of WT model mice exposed to intranasal instillation-induced air pollution or control treatment (Figure 7A,B); both were decreased. The PM2.5 and control groups of WT mice did not differ in TC or TG after intranasal instillation (Figure 7C,D). However, the PD-fed LDLR−/− mice had higher TC and TG levels after PM2.5 intranasal instillation than the control group (Figure 7E,F, p < 0.05). FPLC showed a higher CM/VLDL cholesterol level in the PM2.5 LDLR−/− group, whereas the CM/VLDL TG level in the PM2.5 group appeared lower than that of the control group (Figure 7G,H).

Discussion

The reported effect of PM2.5 on hyperlipidemia is inconsistent, but our study confirmed the causation between PM2.5 and hyperlipidemia/the TG level from the perspective of genetics. Although previous epidemiological studies showed contradictory effects of PM2.5 on the TG level, we demonstrated that it has a positive effect. MR analysis can avoid reverse causation and potential confounders and evaluate the causality between PM2.5 and hyperlipidemia with less susceptibility to bias.

The gene Clcn1 was screened via Mendelian randomization, and its related genes were confirmed in the mouse transcriptome data. Clcn1 is a member of the Clcn family of voltage-gated chloride ion channels. The Clcn1 channel plays a role in the regulation of muscle excitability and repolarization [43]. These results suggest that muscle contraction-related pathways may be involved in mediating the effect of PM2.5 on hyperlipidemia. The functions of the hub genes Tnnt2 and Myh7 have both been discussed in regard to cardiomyopathy and skeletal muscle myopathy [44,45]. These findings may hint at crosstalk between the liver and other organs, such as the heart and skeletal muscle.
Abnormal lipid metabolism is closely related to the formation of atherosclerosis. In particular, we focused on lipid metabolism-related genes in the liver and plasma transcriptome data from PM2.5 model mice to search for liver-produced secretory factors and speculate on their roles in lipid metabolism and atherosclerosis progression. Ccl2 regulates the migration and infiltration of a wide range of immune cells, including monocytes, macrophages, memory T lymphocytes, and natural killer (NK) cells [46,47]. The Ccl2-induced migration of monocytes to the vessel wall is an essential activity contributing to the development of atherosclerosis during this process [48]. Adipoq, also known as adiponectin, is an adipocytokine produced by adipocytes, skeletal and cardiac myocytes, and endothelial cells [49]. Many epidemiological studies suggest that adiponectin deficiency is associated with coronary artery disease [50]. It has been proven that adiponectin prevents endothelial apoptosis through an AMPK-mediated pathway [51] and suppresses the proliferation and migration of vascular smooth muscle cells [52]. Adiponectin is effective in alleviating alcoholic and nonalcoholic fatty liver diseases, including hepatomegaly, steatosis, and elevated serum alanine aminotransferase levels [53]. Mest has been proven to enlarge adipocytes and could serve as a marker of adipocyte size [54]. Pcsk9 is secreted into the plasma by the liver and binds low-density lipoprotein (LDL) receptors at the surface of hepatocytes, thereby preventing their recycling and enhancing their degradation, resulting in reduced LDL-cholesterol clearance [55]. PCSK9 inhibitors have been developed and are recommended in the guidelines for the management of dyslipidemia [56]. The four hub genes related to lipid metabolism are closely associated with atherosclerosis and metabolic syndrome, and they play a crucial role in the effect of PM2.5 on hyperlipidemia.
At the same time, the CIBERSORT deconvolution algorithm was used to analyze immune cell infiltration in the livers of PM2.5 model mice, where we observed a significant increase in CD8+ T cells and CD4+ T cells. To further study the sex-related effect of PM2.5 on dyslipidemia, the mice were divided into male and female groups. Naïve CD8+ T cells and Th2 cells were upregulated, and memory CD4+ T cells were downregulated, in the males. The numbers of monocytes and Th17 cells were increased, and those of M0 macrophages and follicular CD4+ T cells were decreased, in the female mice. This suggests that the difference in immune cell infiltration induced by PM2.5 is closely related to sex [57]. The increased monocyte and decreased M0 macrophage numbers in the female mice indicated that monocytes were recruited after exposure to PM2.5 and that phenotypic transformation of macrophages occurred in the PM2.5-exposed female mice. Moreover, CD8+ T cells and CD4+ T cells are among the few resident cells recruited rapidly during liver infection or injury [58]. The number of naïve CD8+ T cells and Th2 cells (a subtype of CD4+ T cells) was increased in the PM2.5-exposed male mice, and that of Th17 cells (a subtype of CD4+ T cells) was increased in the PM2.5-exposed female mice, meaning the immune reaction resembled that occurring during a liver infection. This reaction may lead to the formation of tertiary immune structures, also known as intrahepatic myeloid cell aggregates for T cell population expansion (iMATEs) [59]. Meanwhile, we found that the potential secretory factors affecting lipid metabolism and the atherosclerosis process also have positive or inverse relationships with different types of immune cells.

Finally, we established intranasal instillation models in WT mice and LDLR−/− mice to measure the differences in TC and TG levels between the saline and PM2.5 groups. At the same time, a difference in lipoprotein distribution was observed via the FPLC method. The increase in CM/VLDL cholesterol is presumed to be the cause of PM2.5-induced atherosclerosis.

Advantages and innovations:

1. The Mendelian randomization method was used for the first time to explore the relationship between PM2.5 and hyperlipidemia in this population, with transcriptome data used as validation.
2. Focusing on the influence of PM2.5 on lipid metabolism, we searched for hub genes, enriched the pathways related to lipid metabolism, and performed qPCR verification.
3. We established an animal model of air pollution in which WT mice simulated healthy people and LDLR−/− mice simulated FH patients. We focused on the blood lipids of WT mice fed a chow diet and LDLR−/− mice under PD feeding conditions and discussed, for the first time, the differences in blood lipids and lipoprotein distribution in the LDLR−/− mice in the air pollution model.

Limitations:

1. Although the gene Clcn1 selected via MR was validated in the enrichment pathways from the mouse liver RNA-seq data, it appeared to have only a minimal association with hyperlipidemia. Therefore, we focused on the lipid metabolism pathways, which were less significant in the enrichment analysis.
2. The plasma lipid levels of wild-type mice fed a high-fat diet (HFD) and LDLR−/− mice fed a chow diet could be further tested to study the effect of different diets during PM2.5 exposure. Given the different immune cell infiltration in male and female mouse livers, female mice could also be utilized to identify the sex-related effect of PM2.5 on dyslipidemia. RNA transcriptome and single-cell RNA sequencing could be further performed on the livers of LDLR−/− mice to analyze the differences between the PM2.5 and control groups in terms of lipid metabolism after HFD feeding.
3. Dyslipidemia is closely related to atherosclerosis, so the effect and mechanism of PM2.5 on atherosclerosis could be discussed further.

Conclusions

Our study revealed the causality between PM2.5 and hyperlipidemia using genome-wide Mendelian randomization. Four possible proteins secreted from the liver into plasma, which might accelerate atherosclerosis, were identified from the RNA-seq data. The immune cell infiltration of PM2.5-exposed mouse livers was explored, and a sex-related effect was found. The lipid profile and lipoprotein distribution of LDLR−/− PM2.5 model mice supplement research on the dyslipidemia of FH patients affected by PM2.5. The results from GWAS, RNA-seq, and the animal model together indicate that PM2.5 can aggravate dyslipidemia and may accelerate atherosclerosis.

PM2.5-related genetic instruments were extracted from a large GWAS study with 423,796 samples comprising 9,851,867 SNPs of European people (GWAS trait ID: ukb-b-10817). Hyperlipidemia and total triglyceride genetic instruments from other GWAS studies were used (GWAS trait IDs: ukb-b-17462 and met-d-Total_TG).

Figure 1. (A) Scatter plot of SNPs related to PM2.5 and hyperlipidemia. The slope of each line demonstrates the estimated effect of the Mendelian randomization method. (B) Funnel plot of PM2.5 and hyperlipidemia. Vertical lines represent estimates with all SNPs. Symmetry of the IVW method in the funnel plot demonstrates no obvious horizontal pleiotropy. (C) Leave-one-out analysis of PM2.5 and hyperlipidemia. There was no substantial change in the IVW causal estimate after removing any of the instrumental SNPs. (D) Scatter plot of SNPs related to PM2.5 and TG level. (E) Funnel plot of PM2.5 and TG level. (F) Leave-one-out analysis of PM2.5 and TG levels. SNP, single-nucleotide polymorphism; IVW, inverse variance weighted; TG, triglyceride.
Figure 2. (A) Volcano plot of DEGs in PM2.5-exposed mouse liver and controls from GSE146508. The cutoff for logFC is 0.645; 706 genes are up and 515 are down. (B) Heatmap of DEGs in PM2.5-exposed mouse liver and controls from GSE146508. (C) Top 10 pathways of GO enrichment analysis (BP, CC, and MF) for DEGs in PM2.5-exposed mouse liver and controls from GSE146508. (D) Top 10 pathways of KEGG enrichment analysis for DEGs in PM2.5-exposed mouse liver and controls from GSE146508. DEG, differentially expressed gene; GO, Gene Ontology; BP, biological process; CC, cellular component; MF, molecular function; KEGG, Kyoto Encyclopedia of Genes and Genomes.

Figure 3. (A) Protein-protein interaction network (PPI) of Clcn1 in humans. (B) Venn plot of DEGs in PM2.5-exposed mouse liver and muscle contraction-related genes. (C) Top 5 pathways of GSEA enrichment analysis for DEGs in PM2.5-exposed mouse liver and controls from GSE146508. (D) Correlation between hub genes Clcn1, Scn5a, and Tnnt3, which all exist in humans and mice. The correlation coefficients are shown in the squares. (E) PPI network of muscle contraction-related genes arranged by degree using Cytoscape. A large, red circle means that the gene is more important.
Figure 4. (A) Volcano plot of DEGs in PM2.5-exposed mouse plasma and controls from GSE146508. The cutoff for logFC is 1.000; 858 genes are up and 552 are down. (B) Heatmap of DEGs in PM2.5-exposed mouse plasma and controls from GSE146508. (C) Venn plot of DEGs in PM2.5-exposed mouse liver, plasma, and lipid metabolism-related genes. (D) PPI network of lipid metabolism-related genes. (E) PPI network of lipid metabolism-related genes arranged by degree in Cytoscape. A large, red circle means that the gene is more important. (F) mRNA expression of Ccl2 and Adipoq from GSE146508. * p < 0.05, ** p < 0.01.

Figure 6. (A) Box plots of immune cells in PM2.5-exposed male mouse liver and controls from GSE146508. (B) Box plots of immune cells in PM2.5-exposed female mouse liver and controls from GSE146508. * p < 0.05, ** p < 0.01.

Table 1. Mendelian randomization estimates of the causal relationships between PM2.5 and hyperlipidemia.

Table 2. Mendelian randomization estimates of the causal relationships between PM2.5 and TG level.
Possible Participation of Ionotropic Glutamate Receptors and the l-Arginine-Nitric Oxide-Cyclic Guanosine Monophosphate-ATP-Sensitive K+ Channel Pathway in the Antinociceptive Activity of Cardamonin in Acute Pain Animal Models

The perception of pain caused by inflammation serves as a warning sign to avoid further injury. The generation and transmission of pain impulses involves various pathways and receptors. Cardamonin isolated from Boesenbergia rotunda (L.) Mansf. has been reported to exert antinociceptive effects in thermal and mechanical pain models; however, the precise mechanism has yet to be examined. The present study investigated the possible mechanisms involved in the antinociceptive activity of cardamonin acting on protein kinase C, N-methyl-d-aspartate (NMDA) and non-NMDA glutamate receptors, the l-arginine/cyclic guanosine monophosphate (cGMP) mechanism, as well as the ATP-sensitive potassium (K+) channel. Cardamonin was administered to the animals intraperitoneally. The present findings showed that cardamonin significantly inhibited pain elicited by intraplantar injection of phorbol 12-myristate 13-acetate (PMA, a protein kinase C activator), with a calculated mean ED50 of 2.0 mg/kg (0.9-4.5 mg/kg). The study showed that pre-treatment with MK-801 (an NMDA receptor antagonist) and NBQX (a non-NMDA receptor antagonist) significantly modulated the antinociceptive activity of cardamonin at 3 mg/kg when tested with the glutamate-induced paw licking test. Pre-treatment with l-arginine (a nitric oxide precursor), ODQ (a selective inhibitor of soluble guanylyl cyclase), and glibenclamide (an ATP-sensitive K+ channel inhibitor) significantly enhanced the antinociception produced by cardamonin. In conclusion, the present findings showed that the antinociceptive activity of cardamonin might involve the modulation of PKC activity, NMDA and non-NMDA glutamate receptors, the l-arginine/nitric oxide/cGMP pathway, and the ATP-sensitive K+ channel.

Both in vitro and ex vivo studies reported that cardamonin possesses inhibitory action against pro-inflammatory cytokine production [2,25]. Further studies reported the inhibitory action of cardamonin against inflammatory responses, involving disruption of nitric oxide (NO) production and downregulation of iNOS expression via modulation of the NF-κB pathway [3,25,26]. An in vivo study with a lipopolysaccharide (LPS)-challenged ICR mouse model reported that cardamonin also suppressed the generation of nitric oxide [27]. A previous study showed that cardamonin exhibited antinociceptive activity against PBQ-induced writhing and carrageenan-induced hyperalgesia [28,29]. Taking all this into account, a deeper investigation of the mechanism of the antinociceptive activity of cardamonin had to be carried out.

In a previous study, cardamonin demonstrated antinociceptive activity in the acetic acid-induced abdominal writhing test, the hot plate test, and glutamate-induced nociception tests [29]. Activation of N-methyl-d-aspartate (NMDA) and non-NMDA glutamate receptors is probably involved in glutamate-induced nociception. In particular, activation of the NMDA receptor is mediated by the l-arginine-nitric oxide-cyclic GMP pathway [30,31]. Thus, in the present study, we attempted to examine the possible participation of ionotropic glutamate receptors and the nitric oxide/cyclic GMP/ATP-sensitive K+ channel pathway in the antinociceptive activity of cardamonin.
Acetic Acid-Induced Abdominal Writhing Test

Figure 1 presents the effect of systemically administered cardamonin in the acetic acid-induced abdominal writhing test. Cardamonin at doses of 0.3, 1, 3, and 10 mg/kg produced significant dose-dependent inhibition of acetic acid-induced pain, with percentages of inhibition of 45%, 56%, 80%, and 100%, respectively. For use throughout the mechanism studies, the calculated mean ED50 value for intraperitoneal administration of cardamonin was 2.1 mg/kg (1.9-2.5 mg/kg). Indomethacin (Indo; 10 mg/kg; i.p.), which served as the positive control drug, showed significant inhibition, with 80% inhibition of acetic acid-induced pain in mice, consistent with our previously published results [29].

Involvement of Protein Kinase C

Intraperitoneal administration of cardamonin at doses of 0.3, 1, 3, and 10 mg/kg produced significant dose-dependent inhibition in the phorbol 12-myristate 13-acetate (PMA)-induced paw licking test in mice, with 61%, 68%, 74%, and 83% inhibition, respectively (Figure 2). The calculated mean ED50 value for this study was 2.0 mg/kg (CI, 0.9-4.5 mg/kg). Indomethacin (Indo; 10 mg/kg; i.p.) was used as the positive control drug and showed significant inhibition compared to the control group, with 81% inhibition of PMA-induced nociception.

Figure 2. Effect of cardamonin (0.3, 1, 3 and 10 mg/kg; i.p.) against phorbol 12-myristate 13-acetate (PMA)-induced nociception. Each column represents the mean ± S.E.M. of 6 mice. The control group received only the vehicle used to dilute the compound. Indomethacin (Indo; 10 mg/kg; i.p.) was used as the positive control drug. Statistical analysis was determined by one-way ANOVA, followed by Tukey's post hoc test. Values with different superscript letters are statistically different from each other at p < 0.05.

Effect of MK-801 and NBQX on Glutamate-Induced Nociception

Figure 3 presents the antinociceptive effect of cardamonin on the NMDA glutamate receptor subtype (Panel A) and the non-NMDA glutamate receptor subtype (Panel B) when assessed in the glutamate-induced paw licking test. The NMDA receptor antagonist MK-801 (0.3 mg/kg; i.p.) produced significant inhibition when administered alone intraperitoneally, with 86% inhibition of paw licking compared to the control group. Pre-treatment with MK-801 prior to the administration of cardamonin (1 and 3 mg/kg) significantly enhanced the effect of the respective cardamonin treatment alone. Administration of the AMPA/kainate receptor antagonist NBQX (3 mg/kg; i.p.) alone produced significant inhibition of glutamate-induced nociception, with 48% inhibition compared to the control group. Pre-treatment with NBQX produced no significant change for the treatment with cardamonin at 1 mg/kg compared to the treatment alone, but with cardamonin at 3 mg/kg, NBQX significantly reversed the antinociceptive effect of cardamonin.

Involvement of the l-Arginine/Nitric Oxide Pathway

The result depicted in Figure 4 showed that pre-treatment with the nitric oxide precursor l-arginine (100 mg/kg; i.p.), at a dose that produced no significant difference compared to the control group, significantly reversed the antinociception exhibited by the nitric oxide synthase inhibitor l-NOARG (20 mg/kg; i.p.) and significantly enhanced the antinociceptive effect of cardamonin (1 mg/kg; i.p.) when analysed with the acetic acid-induced nociceptive test.
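As an aside on how mean ED50 values with confidence limits, such as the 2.1 mg/kg (1.9-2.5 mg/kg) reported above, can be derived, the sketch below fits a log-logistic dose-response curve with the drc R package. The software actually used is not stated in the paper, and the inhibition values are taken from the percentages quoted in the text, so this is illustrative only.

```r
library(drc)

# Percent inhibition at each dose from the acetic acid writhing test above
d <- data.frame(dose = c(0.3, 1, 3, 10),
                inhibition = c(45, 56, 80, 100))

# Four-parameter log-logistic model with the floor fixed at 0% and the
# ceiling at 100%, leaving the slope and ED50 free
m <- drm(inhibition ~ dose, data = d,
         fct = LL.4(fixed = c(NA, 0, 100, NA)))

ED(m, 50, interval = "delta")  # ED50 estimate with 95% confidence limits
```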
Involvement of Cyclic Guanosine Monophosphate (cGMP)

Figure 5 shows the effect of cardamonin upon injection of the specific guanylyl cyclase inhibitor ODQ (2 mg/kg; i.p.), analysed with the acetic acid-induced nociceptive test. Both ODQ and cardamonin, when administered alone, produced significant inhibition in the acetic acid-induced abdominal writhing test. However, when given together, ODQ significantly enhanced the antinociceptive effect of cardamonin compared to the treatment alone.

Involvement of the ATP-Sensitive K+ Channel

Pre-treatment with the ATP-sensitive K+ channel inhibitor glibenclamide (10 mg/kg; i.p.) produced no significant difference compared with the control group when administered alone, but significantly enhanced the antinociceptive effect of cardamonin (1 mg/kg; i.p.) in the acetic acid-induced abdominal writhing test (Figure 6).

Discussion

The perception of pain involves various pathways that transmit pain impulses from the site of injury to the peripheral nervous system and then the central nervous system. A previous study showed that cardamonin exhibited antinociceptive action by interrupting the opioidergic pathway and TRPA1 activation [24,32]. Cardamonin was also reported to inhibit nociception through action on the TRPV1 channel and glutamate receptors [29]. The stimulation of nerve endings involves the release of various inflammatory mediators, which leads to sensitization of the respective receptors embedded on the neuronal surface [33]. Prostaglandin E2 sensitization of TRPV1 activity in mice involves a protein kinase C (PKC)-dependent pathway [34]. The findings of the present study show that the antinociceptive effect of cardamonin involved inhibition of protein kinase C activity. Systemic administration of cardamonin exhibited significant dose-dependent inhibition of overt nociception in the phorbol 12-myristate 13-acetate (PMA)-induced paw licking test. The protein kinase C signalling pathway plays an important role in regulating the excitation of sensory neurons through the phosphorylation of membrane-bound receptors and ion channels. The intraplantar injection of PMA, a phorbol ester representing a pharmacological analogue of diacylglycerol, into the mouse paw directly activates protein kinase C [35]. The nociceptive behavioural responses induced by the injection of PMA into the mouse paw are attributed to translocation of protein kinase C isoforms from the cytoplasmic region to the peripheral endings of primary afferent nerves [35,36]. Thus, it was postulated that cardamonin might inhibit the translocation of protein kinase C to the primary afferent nerve endings, thereby reducing peripheral nociception. A previous study reported that activation of protein kinase C potentiates the effect of capsaicin on the TRPV1 receptor [37]. Protein kinase C-mediated sensitization of the TRPV1 receptor enhances glutamatergic synaptic transmission at the central terminals of sensory neurons in the dorsal horn of the spinal cord [38]. At the central level, cardamonin might be capable of interfering with the binding of protein kinase C to the TRPV1 receptor and thus reduce the influx of calcium, as well as decrease glutamate activity, resulting in inhibition of paw licking behavioural responses. Cardamonin has been reported to inhibit pain behaviour in the glutamate-induced nociceptive test in mice [29].
There are three subtypes of ionotropic glutamate receptors, namely the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA), kainate, and N-methyl-d-aspartate (NMDA) glutamate receptors. In earlier work, AMPA receptors were thought to mediate rapid excitatory neurotransmission in the central nervous system, but more recent studies have demonstrated that spinal AMPA receptors contribute to the development of both acute and persistent pain responses. The C-terminal intracellular regulatory domain of the AMPA receptor subunit presents multiple phosphorylation sites for various protein kinases that participate in the regulation of AMPA receptor function. The activation of protein kinase cascades, including calcium/calmodulin-dependent protein kinase II (CaMKII), protein kinase C (PKC), and protein kinase A (PKA), by noxious stimulation of peripheral tissues may play a crucial role in the phosphorylation of glutamate receptors in spinal nociceptive neurons, which is then followed by enhanced activity of glutamatergic synapses [39]. In the present study, animals were treated with the quinoxalinedione antagonist NBQX to block glutamate binding to the non-NMDA receptors, although this antagonist has a limited ability to distinguish AMPA from kainate receptors [40]. The present findings suggest that cardamonin at the dose of 3 mg/kg exerted its antinociceptive property possibly by blocking the binding of glutamate to the AMPA receptor and by inhibiting protein kinase cascades, in particular protein kinase C-mediated phosphorylation of glutamate receptors, which then leads to reduced activity at glutamatergic synapses.

The activation of the NMDA glutamate receptor in chronic pain settings happens at the peripheral, spinal, and supraspinal levels of the neural axis [41]. Peripheral administration of NMDA receptor antagonists attenuated the nociceptive behaviours caused by local injections of glutamate or NMDA, which shows that NMDA receptors are present in the periphery [42]. The present study showed that systemic injection of an NMDA receptor antagonist could reduce the nociceptive behaviours caused by local injection of glutamate into the mouse paw: intraperitoneal administration of MK-801 (an NMDA receptor antagonist) produced significant inhibition of glutamate-induced paw licking behaviour compared to the control group. Consistent with these findings, MK-801 also produced dose-dependent inhibition of glutamate-induced nociception when administered peripherally, systemically, spinally, and supraspinally [30]. Despite their antinociceptive property, NMDA receptor antagonists also cause disturbances in motor coordination, and these two effects cannot be separated [30,43]. This study showed that cardamonin produced significant inhibition, although not as effective as MK-801, when tested in the glutamate-induced paw licking test. When pre-treated with MK-801, cardamonin produced a significant enhancement of the inhibitory effect compared to the respective dose of cardamonin alone. These findings suggest that the inhibitory effect of cardamonin might involve blocking of the NMDA receptor. Thus, the present study postulated that the antinociceptive effect of cardamonin in glutamate-induced nociception might involve modulation of the activity of ionotropic glutamate receptors, including the NMDA, AMPA, and kainate receptors.

Nitric oxide is a diffusible gas that functions as a neurotransmitter in the pain processing pathway.
The catalytic action of nitric oxide synthase (NOS) generates nitric oxide from l-arginine and molecular oxygen [44]. Systemic administration of l-arginine into the peritoneal region provided substrate for the enzyme nitric oxide synthase (NOS) to produce a sufficient amount of nitric oxide to induce pain-related behaviour when examined with the acetic acid-induced abdominal constriction model. Thus, in order to control the synthesis of nitric oxide, one must regulate the activity of its producing enzyme, nitric oxide synthase (NOS) [45]. Previous studies reported that administration of nitric oxide synthase inhibitors, both systemically and intrathecally, is capable of reducing hyperalgesia [46,47]. Furthermore, it was reported that l-NG-nitro-arginine (l-NOARG), a selective inhibitor of nitric oxide biosynthesis, showed antinociceptive activity when tested in a mouse model [48]. In the present study, systemic administration of Nω-nitro-l-arginine (l-NOARG) reduced the pain induced by acetic acid, as shown by the reduced abdominal constriction behaviour of the tested animals. Upon administration, the selective nitric oxide synthase (NOS) inhibitor l-NOARG inhibited the synthesis of nitric oxide by inactivating the enzyme; the resulting disturbance in the downstream signalling by nitric oxide that regulates various ion channels leads to the cessation of pain processing. On the other hand, when l-arginine was administered prior to the selective nitric oxide synthase inhibitor l-NOARG, the inhibitory effect of l-NOARG was reversed, as the nitric oxide produced was sufficient to induce pronociceptive behaviour.

To determine whether the antinociceptive pathway of cardamonin involves inhibition of nitric oxide signalling, the animals were pre-treated with l-arginine prior to the systemic administration of cardamonin. The results showed that the group pre-treated with l-arginine before cardamonin exhibited significantly enhanced inhibition in the acetic acid-induced pain behavioural study compared with the group treated with cardamonin only. Since cardamonin was postulated to block the NMDA receptor, the influx of calcium ions into the intracellular cavity is reduced; thus, the calcium/calmodulin-dependent nitric oxide synthase (NOS) fails to catalyse l-arginine into nitric oxide and l-citrulline, which then leads to disruption of pain signalling. It was also suggested that the downstream signalling of the nitric oxide pain signalling pathway involves the release of glutamate [49]. Thus, the present study suggests that the enhanced antinociceptive effect of cardamonin when pre-treated with l-arginine probably involves modulation of the glutamate receptors.

The administration of ODQ, a selective inhibitor of soluble guanylyl cyclase, was expected to reduce the synthesis of cGMP and thus reduce the behavioural response to pain. In the group of animals treated with ODQ prior to cardamonin, enhanced antinociceptive activity was produced, with a decrease in the abdominal writhing response compared to the cardamonin-only treated group. This study postulated that the enhanced inhibitory effect may be attributed to a synergistic effect of ODQ and cardamonin.
The soluble guanylyl cyclase inhibitor ODQ should reduce the synthesis of cGMP, whose downstream signalling pathway may lead to pain processing; with additional help from cardamonin, which has shown inhibitory effects against glutamate, nitric oxide, and other neurotransmitters, the inhibitory action in the nociceptive behavioural study improved compared with each individual treatment. Furthermore, it was reported that ODQ significantly decreased the glutamate concentration and reduced the intensity of NMDA-induced pain-related behaviour [49]. Thus, cardamonin was postulated to exert its antinociceptive activity by inhibiting the NMDA activation evoked by the nitric oxide/cGMP/glutamate release cascade.

The activation of nociceptors initiates an increase in inward currents by activating non-potassium channels and/or a reduction in outward currents, which leads to membrane depolarization followed by the generation of action potentials. The membrane depolarization and excitation of dorsal root ganglion neurons by nociceptive stimuli provide a basis for the manifestation of ATP-sensitive potassium channel opening. The opening of potassium channels plays a pivotal role in regulating the resting membrane potential and the action potential firing threshold [50,51]. Glibenclamide is a sulfonylurea drug that binds to the sulfonylurea receptor (SUR) protein and affects the opening of ATP-sensitive potassium channels [52]. Previous studies reported that sulfonylurea drugs such as glibenclamide cause neither hyperalgesia nor antinociceptive activity when tested individually [53]. Furthermore, administration of glibenclamide alone did not affect the pain behaviour arising from formalin injection, in either phase 1 or phase 2 [54]. The results of this study agree with this evidence: glibenclamide, when injected alone, did not cause any change in pain behaviour in the acetic acid-induced abdominal writhing test compared to the control group. Pre-treatment with glibenclamide blocked the opening of ATP-sensitive potassium channels, but when followed by cardamonin treatment, the pain behaviour induced by acetic acid was greatly reduced and significantly different from the group pre-treated with vehicle.

Nitric oxide exerts dual effects on nociception and antinociception. The antinociceptive effect of nitric oxide involves the activation of ATP-sensitive potassium channels [55]. Nitric oxide binding to soluble guanylyl cyclase (sGC) catalyses the conversion of guanosine triphosphate (GTP) to guanosine 3′,5′-cyclic monophosphate (cGMP), followed by the activation of cGMP-dependent protein kinase (PKG). Phosphorylation of ATP-sensitive potassium channels by PKG leads to hyperpolarization of the membrane potential, thus resulting in cessation of nociceptive signal transmission [56,57]. In addition, a study showed that nitric oxide was capable of up-regulating the expression of ATP-sensitive potassium channels in primary sensory neurons [58]. The findings of the present study suggest that cardamonin might facilitate the up-regulation of ATP-sensitive potassium channel expression in the presence of nitric oxide.

Plant Material

The fresh rhizomes of Boesenbergia rotunda (5 kg) were commercially purchased from the local market, Serdang, Malaysia, and were authenticated by a resident botanist at the Institute of Bioscience (IBS), Universiti Putra Malaysia (UPM). A voucher specimen (SK1780/10) was deposited at the Herbarium, located at the Laboratory of Natural Products, IBS, UPM.
A small part of the rhizomes was cultivated in the Medicinal Plant Garden at IBS, UPM for future reference.

Extraction and Isolation

The fresh rhizomes of Boesenbergia rotunda were sliced into small flat pieces and dried in the shade for one week. The dried rhizomes were ground into a fine powder using a domestic food processor. The dried powder (2.5 kg) was soaked in distilled methanol for two to three days. The methanolic extract was filtered and concentrated on a rotary evaporator, and 255 g of crude extract was obtained. The crude methanolic extract was then subjected to solvent partitioning: it was dissolved in 250 mL of distilled water and transferred into a separating funnel. About 150 mL of hexane was added to the aqueous layer, which was subsequently extracted with chloroform, ethyl acetate, and butanol. The chloroform layer was finally passed over anhydrous sodium sulphate to remove moisture. The chloroform extract was subjected to flash column chromatography using ethyl acetate and hexane as eluents. Finally, the compound was purified from the chloroform extract and identified as cardamonin after detailed spectroscopic analysis [23,29].

Experimental Animals

Adult male ICR mice (20-30 g) were used throughout the study. Animals were randomly divided into groups of six mice (n = 6) and were housed at the Animal House of the Faculty of Medicine and Health Sciences, Universiti Putra Malaysia. Housing conditions were set at a 12 h light/12 h dark cycle with free access to standard pellets and water ad libitum. Animals were acclimatized to the laboratory conditions one hour before the experiments.

Acetic Acid-Induced Abdominal Writhing Test

The acetic acid-induced abdominal writhing test was carried out as previously described [59], with slight modifications. Animals were pre-treated with cardamonin (0.3, 1, 3 and 10 mg/kg; i.p.) 30 min before the challenge with an injection of 0.8% acetic acid (10 mL/kg; i.p.). The doses of cardamonin used were previously reported not to cause any apparent qualitative toxicity or disruption of motor coordination in animals [29]. The control group received a similar volume of vehicle (10 mL/kg; i.p.). Indomethacin (Indo, 10 mg/kg; i.p.) was used as the reference drug. Following the injection of acetic acid, the animals were immediately placed into a Perspex chamber, and the number of abdominal writhes was recorded for 30 min, beginning 5 min after the acetic acid injection.

Involvement of Protein Kinase C

The experiment was carried out as previously described [60,61]. A volume of 20 µL of phorbol 12-myristate 13-acetate (PMA; 0.03 µg/paw) was injected intraplantarly into the ventral surface of the right hind paw. Animals were individually placed into an observation chamber and were observed from 15 to 45 min following the PMA injection. The amount of time spent licking and biting the injected paw was timed with a chronometer and was considered indicative of pain. The animals were treated with cardamonin (0.3, 1, 3 and 10 mg/kg; i.p.), indomethacin (10 mg/kg; i.p.) or vehicle (10 mL/kg; i.p.) 30 min before the PMA injection.

Effect of MK-801 and NBQX on Glutamate-Induced Nociception

To investigate the possible participation of the NMDA receptor in the antinociceptive effect of cardamonin, the procedure used was similar to that previously described [30,62,63], with slight modifications.
The animals were pre-treated with MK-801 (0.3 mg/kg; i.p.; an NMDA receptor antagonist) or NBQX (3 mg/kg; i.p.; a non-NMDA receptor antagonist) 15 min before the injection of either vehicle (10 mL/kg; i.p.) or cardamonin (1 and 3 mg/kg; i.p.). The animals were then challenged with an injection of 20 µL of glutamate (10 µmol/paw) into the ventral surface of the right hind paw 30 min after the treatment. Animals were then observed individually for 15 min following the glutamate injection. The amount of time spent licking and biting the injected paw was timed with a chronometer and considered an indication of nociception.

Involvement of the l-Arginine/Nitric Oxide Pathway

To assess the possible participation of the l-arginine/nitric oxide pathway in the antinociceptive effect of cardamonin in the acetic acid test, animals were pre-treated with l-arginine (100 mg/kg; i.p.; a nitric oxide precursor) 15 min before the administration of cardamonin (1 mg/kg; i.p.), Nω-nitro-l-arginine (l-NOARG; 20 mg/kg; i.p.; a nitric oxide synthase inhibitor) or vehicle (10 mL/kg; i.p.), as described previously [64,65]. Nociceptive responses to acetic acid were then recorded for 30 min after the administration of cardamonin, l-NOARG or vehicle, beginning 5 min after the acetic acid injection. The number of abdominal writhes was considered an indication of pain behaviour. Another group of animals was pre-treated with vehicle (10 mL/kg; i.p.), and after 15 min, they received cardamonin (1 mg/kg; i.p.), l-NOARG (20 mg/kg; i.p.) or vehicle, 30 min before the acetic acid injection.

Involvement of Cyclic Guanosine Monophosphate (cGMP)

To elucidate the possible contribution of cGMP to the antinociceptive effect of cardamonin in the acetic acid test, animals were pre-treated with ODQ (2 mg/kg; i.p.; a selective inhibitor of soluble guanylyl cyclase) 15 min before the administration of cardamonin (1 mg/kg; i.p.) or vehicle (10 mL/kg; i.p.), as described previously [66,67], with slight modifications. Nociceptive responses to acetic acid were then recorded for 30 min after the administration of cardamonin or vehicle, beginning 5 min after the acetic acid injection. The number of abdominal writhes was counted as an indication of pain behaviour. Another group of animals was pre-treated with vehicle (10 mL/kg; i.p.), and after 15 min, they received cardamonin (1 mg/kg; i.p.) or vehicle, 30 min before the acetic acid injection.

Involvement of the ATP-Sensitive K+ Channel

To investigate the possible participation of the K+ channel in the antinociceptive effect of cardamonin in the acetic acid test, animals were pre-treated with glibenclamide (10 mg/kg; i.p.; an ATP-sensitive K+ channel inhibitor) 15 min prior to the injection of either cardamonin (1 mg/kg; i.p.) or vehicle (10 mL/kg; i.p.), as described previously [60,64], with slight modifications. The animals were then challenged with acetic acid (i.p.) 30 min after the treatment. The animals were immediately placed into a Perspex chamber after the injection of acetic acid, and the number of abdominal writhes was recorded for 30 min, beginning 5 min after the acetic acid injection. Another group of animals was pre-treated with vehicle (10 mL/kg; i.p.), and after 15 min, they received cardamonin or vehicle, 30 min before the acetic acid injection.

Data Analysis

The data collected are expressed as the mean ± S.E.M. for six animals per group and were analysed using one-way ANOVA followed by Tukey's multiple comparison test. Differences between means were considered statistically significant at p < 0.05. The percentage of inhibition was calculated by comparing the results of the treatment group with those of the control group.
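The stated analysis (one-way ANOVA, Tukey's multiple comparison test, and percent inhibition versus control) maps onto a few lines of R. The analysis software is not specified in the paper, and the writhing counts below are made up, so the sketch is purely illustrative of the procedure.

```r
# Hypothetical writhing counts for three groups of six mice each
treatment <- factor(rep(c("vehicle", "cardamonin_1", "cardamonin_3"), each = 6))
writhes   <- c(42, 38, 45, 40, 44, 39,
               21, 24, 19, 23, 20, 22,
                9, 11,  8, 10, 12,  9)

fit <- aov(writhes ~ treatment)  # one-way ANOVA
summary(fit)
TukeyHSD(fit)                    # Tukey's multiple comparison test, p < 0.05

# Percent inhibition of a treatment group relative to the vehicle control
pct_inhibition <- function(treated, control)
  100 * (mean(control) - mean(treated)) / mean(control)

pct_inhibition(writhes[treatment == "cardamonin_3"],
               writhes[treatment == "vehicle"])
```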
Conclusions

The findings of the present study suggest that systemic administration of cardamonin, at doses that do not produce any apparent toxicity or motor impairment in animals, exerted antinociceptive effects through the inhibition of PKC activation, modulation of NMDA and non-NMDA glutamate receptors, interruption of the NMDA activation evoked by the nitric oxide/cGMP/glutamate release cascade, and modulation of the nitric oxide/cGMP-mediated activation of ATP-sensitive potassium channels. The precise underlying mechanism remains to be investigated through molecular analysis.
Rise in broadly cross-reactive adaptive immunity against human β-coronaviruses in MERS-recovered patients during the COVID-19 pandemic

To develop a universal coronavirus (CoV) vaccine, long-term immunity against multiple CoVs, including severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants, Middle East respiratory syndrome (MERS)-CoV, and future CoV strains, is crucial. Following the 2015 Korean MERS outbreak, we conducted a long-term follow-up study and found that although neutralizing antibodies and memory T cells against MERS-CoV declined over 5 years, some recovered patients exhibited increased antibody levels during the COVID-19 pandemic. This likely resulted from cross-reactive immunity induced by SARS-CoV-2 vaccines or infections. A significant correlation in antibody responses across various CoVs indicates shared immunogenic epitopes. Two epitopes, the spike protein's stem helix and intracellular domain, were highly immunogenic after MERS-CoV infection and after SARS-CoV-2 vaccination or infection. In addition, memory T cell responses, especially polyfunctional CD4+ T cells, were enhanced during the pandemic, correlating significantly with MERS-CoV spike-specific antibodies and neutralizing activity. Therefore, incorporating these cross-reactive and immunogenic epitopes into pan-CoV vaccine formulations may facilitate effective vaccine development.

INTRODUCTION

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of COVID-19, belongs to the family of positive-sense RNA human coronaviruses (hCoVs) (1). This family includes α-coronaviruses (hCoV-229E and hCoV-NL63) and β-coronaviruses [hCoV-HKU1, hCoV-OC43, Middle East respiratory syndrome coronavirus (MERS-CoV), and SARS-CoV]. All hCoVs have homologous genomic structures encoding the viral spike (S), envelope (E), membrane (M), and nucleocapsid (N) structural proteins; a large polymerase complex composed of 16 nonstructural proteins (NSPs); and several accessory proteins (2). These proteins retain highly conserved motifs among the various hCoVs, which may broadly support cross-reactive immune responses (3). Such cross-reactive responses to SARS-CoV-2 proteins preexisted in naïve participants even before the COVID-19 pandemic, and most cross-reactive epitopes are located within the conserved structural proteins and NSPs of hCoVs (3-7). However, the specific role of these cross-reactive responses in the outcomes of SARS-CoV-2 infection and COVID-19 vaccination remains unclear (3). Nonetheless, hCoV cross-reactive antibodies are boosted by SARS-CoV-2 infection (8). Considering that repeated infections with different SARS-CoV-2 variants and other hCoVs, along with COVID-19 vaccination, may have continued in the post-COVID-19 pandemic era, cross-reactive memory responses against various hCoVs may form a complex pool of heterogeneous herd immunity against these viral pathogens. Therefore, the specific role of broad cross-reactivity against various hCoVs needs to be determined to prevent severe disease progression and to develop effective and universal hCoV vaccines for future CoV pandemics.
Newly emerging zoonotic CoVs that cause acute respiratory syndrome have threatened global public health via potential spillovers from animal hosts, such as bats (9). SARS-CoV and MERS-CoV emerged in 2002 and 2012, respectively, whereas SARS-CoV-2 has spread globally at an alarming rate since December 2019 and has successfully adapted to the human population. Although their transmission potential and virulence in humans vary greatly, the continuous challenge of diverse zoonotic CoVs may create a unique immune repertoire against the CoV family in humans. However, our current knowledge of the long-term dynamics and the impact of the repeated challenge of various CoVs on the human immune system is limited. In addition, the recent global vaccine campaign against COVID-19 has introduced an additional layer of artificial immune modulation against these viral pathogens. In South Korea, the confirmed SARS-CoV-2 infection rate was 1.2% of the entire population at the end of 2021, which increased to 56.3% by the end of 2022. In addition, COVID-19 vaccine coverage expanded to 85.0% in 2021 and further to 86.7% by the end of 2022 (https://covid19.who.int/). Therefore, tracing adaptive immunity in humans could elucidate a mechanistic understanding of long-term changes in our immune system against ongoing insults by SARS-CoV-2 variants, sporadic MERS-CoV outbreaks, and future novel pandemic CoVs.

In 2015, a large MERS outbreak swept South Korea, resulting in 186 confirmed cases and 38 deaths (10). This outbreak was initiated by an infected traveler from the Middle East and amplified in healthcare settings (11). Our group launched a Korean MERS cohort in 2016 and performed follow-up studies for up to 7 years, including the COVID-19 pandemic period, to evaluate MERS-CoV-specific adaptive immune responses in MERS-recovered patients (10, 12-16). The present study aimed to further explore the antibody responses against hCoVs, including SARS-CoV, SARS-CoV-2, and MERS-CoV. This enabled us to landscape the chronological changes in adaptive immunity against various hCoVs in MERS-recovered patients during the COVID-19 pandemic and to highlight the heterogeneous but focused boosting of cross-reactive antibody responses, as well as neutralizing activity, against conserved epitopes shared by diverse hCoVs, including SARS-CoV and SARS-CoV-2.

Increased neutralizing antibody levels against MERS-CoV in MERS-recovered patients during the COVID-19 pandemic

According to disease severity during the 2015 MERS outbreak in Korea (13), the participants were classified into group I (G I): asymptomatic or those with mild disease not progressing to pneumonia; group II (G II): those with mild pneumonia without hypoxemia; and group III (G III): those recovering from prolonged and severe pneumonia, who experienced hypoxemia and were treated with oxygen during hospitalization (table S1).

Specific antibody responses against the MERS-CoV spike antigen (S1) were assessed in serum samples collected from participants up to 7 years after symptom onset (Fig. 1A). The mean optical density (OD) ratios against S1 peaked in the second year (mean ± SD, 1.90 ± 1.69) and gradually decreased thereafter (0.78 ± 0.67 in the seventh year). The magnitude and durability of antibody responses and the seropositivity rate strongly correlated with disease severity. Patients in G II and G III showed stronger and more sustained antibody responses than those in G I (Fig.
1B).The seropositivity rate persisted for up to the fourth year (50.7 to 54.9%) and sharply decreased to 17.1% in the seventh year (Fig. 1, C and D).No participant in G I was seropositive from the third year, whereas 75.0 to 83.3% of the participants in G III showed persistent seroconversion up to the fourth year, and 27.3% of the participants remained seropositive in the seventh year.The foci reduction neutralization titer (FRNT 50 ) against MERS-CoV persisted up to the third year and significantly decreased from the fourth year after infection (Fig. 1E).The mean FRNT 50 value in the first year (mean ± SD, 2359 ± 2813) was reduced by 70.4% in the fourth year (699 ± 1187) and by 82.1% in the sixth year (422 ± 511).The mean neutralizing antibody (nAb) titer marginally increased in the seventh year (450 ± 543, 6.7% increase relative to that in the sixth year) in contrast to the continuously reducing antibody levels against the S1 antigen (Fig. 1G).Although the nAb titers were generally higher among patients in G II and G III than those in participants of G I throughout the follow-up period (Fig. 1F), the mean FRNT 50 values in all three groups were consistently elevated in the seventh year (percent increase: G I = 8.6%, G II = 3.8%, and G III = 13.0%)compared to those in the sixth year.Thus, the neutralizing activity against MERS-CoV in our cohort may have been affected by active vaccination campaigns and/or SARS-CoV-2 infection in South Korea during the COVID-19 pandemic. Increased cross-reacting antibody levels against human βCoVs in MERS-recovered patients during the COVID-19 pandemic To investigate the sudden increase in nAb levels against MERS-CoV in the seventh year, we further investigated the kinetic changes in specific antibody responses against MERS-CoV spike antigens in 33 participants whose sera were available from the fourth (2019) to the seventh year (2022).Antibodies against the MERS-CoV S1 antigen gradually decreased (Fig. 2A), as seen in Fig. 1A.In contrast, the antibody responses against the S2 antigen and full-spike ectodomain antigen (S) gradually increased in the sixth and seventh years (Fig. 2, B and C).In addition, there was a slight upward trend in the neutralizing activity of the sera throughout the study period (Fig. 2D).However, all these changes in antibody levels were marginal and not statistically significant.Nevertheless, gradual increases in antibody responses [anti-S2 and S immunoglobulin G (IgG)] and neutralizing activities were consistently observed in all patient groups, although the antibody responses were generally higher in patients who recovered from MERS pneumonia (G II and G III) than in those without pneumonia (G I) (Fig. 2, E to H). COVID-19 vaccination and infection history of the 33 patients revealed that majority of them (93.9%) were immunized with various combinations of commercial vaccines in the sixth and seventh years (table S2 and fig.S1).Moreover, nine participants (37.3%) experienced confirmed SARS-CoV-2 infection in the sixth or seventh year.On comparing mean neutralizing activities before and after exposure to the COVID-19 vaccine and/or SARS-CoV-2 infection, 17 participants showed an increase in nAb responses against MERS-CoV, with seven participants demonstrating >50% increase in neutralizing activity (table S2 and Fig. 
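The year-over-year changes quoted above are simple percent changes of the mean FRNT50 titers. A minimal sketch of that arithmetic, using the rounded means reported in the text (not the underlying per-participant data), is shown below.

```python
def percent_change(reference_mean: float, later_mean: float) -> float:
    """Percent change of a later mean relative to a reference mean."""
    return (later_mean - reference_mean) / reference_mean * 100.0

# Mean FRNT50 values reported in the text for years 1, 4, 6, and 7.
year1, year4, year6, year7 = 2359.0, 699.0, 422.0, 450.0

print(percent_change(year1, year4))  # about -70.4 -> "reduced by 70.4%"
print(percent_change(year1, year6))  # about -82.1 -> "reduced by 82.1%"
print(percent_change(year6, year7))  # about +6.6 with these rounded means
                                     # (the text reports 6.7% from unrounded values)
```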
Increased cross-reacting antibody levels against human βCoVs in MERS-recovered patients during the COVID-19 pandemic

To investigate the sudden increase in nAb levels against MERS-CoV in the seventh year, we further examined the kinetic changes in specific antibody responses against MERS-CoV spike antigens in 33 participants whose sera were available from the fourth (2019) to the seventh year (2022). Antibodies against the MERS-CoV S1 antigen gradually decreased (Fig. 2A), as seen in Fig. 1A. In contrast, the antibody responses against the S2 antigen and the full-spike ectodomain antigen (S) gradually increased in the sixth and seventh years (Fig. 2, B and C). In addition, there was a slight upward trend in the neutralizing activity of the sera throughout the study period (Fig. 2D). However, all these changes in antibody levels were marginal and not statistically significant. Nevertheless, gradual increases in antibody responses [anti-S2 and anti-S immunoglobulin G (IgG)] and neutralizing activities were consistently observed in all patient groups, although the antibody responses were generally higher in patients who recovered from MERS pneumonia (G II and G III) than in those without pneumonia (G I) (Fig. 2, E to H).

The COVID-19 vaccination and infection history of the 33 patients revealed that the majority (93.9%) were immunized with various combinations of commercial vaccines in the sixth and seventh years (table S2 and fig. S1). Moreover, nine participants (37.3%) experienced confirmed SARS-CoV-2 infection in the sixth or seventh year. On comparing mean neutralizing activities before and after exposure to the COVID-19 vaccine and/or SARS-CoV-2 infection, 17 participants showed an increase in nAb responses against MERS-CoV, with seven participants demonstrating a >50% increase in neutralizing activity (table S2 and Fig. 2I). In contrast, the remaining 16 participants showed decreased neutralizing activity, indicating individual variation in antibody responses against MERS-CoV during the COVID-19 pandemic. On the basis of our clinical data, changes in neutralizing activity were not significantly associated with MERS severity, age, sex, or SARS-CoV-2 infection history, although male patients who recovered from severe MERS generally presented higher neutralizing activity (fig. S1). nAb responses against MERS-CoV significantly correlated with antibody levels against the S1, S2, and S antigens (Fig. 2J). Thus, the gradual increase in neutralizing activity against MERS-CoV after exposure to the COVID-19 vaccine and/or SARS-CoV-2 infection may be attributed to emerging cross-reactive antibody responses against S2, which are elevated during COVID-19 (3), rather than to those against S1, which gradually decreased during the surveillance period (Fig. 2K). Given that the antibody response to the S1 antigen strongly correlated with neutralizing activity, these antibodies may still play a substantial role in the neutralization of MERS-CoV (Fig. 2J).

COVID-19 vaccination and SARS-CoV-2 infection induced specific antibody responses against SARS-CoV-2 spike antigens and neutralizing activity against SARS-CoV-2. Antibody responses specific to the SARS-CoV-2 S and S2 antigens were markedly elevated in the sixth and seventh years (Fig. 3, A and B). In addition, neutralizing activities against SARS-CoV-2 wild type and a recent variant, BA.5, were significantly enhanced in the seventh year (Fig. 3, C and D). Regardless of MERS severity, quantitative and kinetic trends in antibody responses were consistently observed in all participants (Fig. 3, E to H), except those who were unvaccinated and uninfected during the surveillance period (Fig. 3, I to L, black line). We also observed a significantly strong correlation between neutralizing activity and specific IgG levels against the SARS-CoV-2 S2 and S antigens (Fig. 3, M to P).

Furthermore, we assessed antibody responses against other hCoVs, including SARS-CoV, hCoV-OC43, and hCoV-NL63. Anti-SARS-CoV S IgG and neutralizing activity gradually increased during the pandemic, with significantly higher antibody levels in the seventh year than in the fourth, fifth, and sixth years (Fig. 4, A and B). Regardless of MERS severity, quantitative and kinetic trends in antibody responses were consistently observed in all participants (Fig. 4, E and F), except one participant who was unvaccinated and uninfected during the surveillance period (Fig. 4, I and J). The antibody levels against the SARS-CoV S antigen also positively correlated with neutralizing activity against the virus (Fig. 4M). Considering the absence of any SARS-CoV exposure history, this increase in spike-specific antibodies strongly suggests the emergence of cross-reactive antibodies after COVID-19 vaccination and/or SARS-CoV-2 infection. Moreover, we detected a marginal increase in specific antibodies against hCoV-OC43 in the seventh year (Fig. 4C), where G III patients presented significantly higher levels (mean titer50: 4952.8) of anti-hCoV-OC43 S IgG than G I participants (mean titer50: 1393.2) (Fig. 4G). Among the 33 participants, 25 (75.8%) presented increased specific antibody responses in the seventh year compared to the sixth year, whereas eight participants (24.2%) showed a marginal reduction in antibody responses against hCoV-OC43 during the pandemic (Fig. 4K). Two participants presented markedly increased specific IgG responses against the phylogenetically distant endemic hCoV-OC43, by 17- and 65-fold in the seventh year compared to the sixth year (Fig. 4K), suggesting that some MERS-recovered patients had potent cross-reactive antibodies produced by memory B cells, or that they may have been infected with hCoV-OC43 during the pandemic. In contrast, antibody responses against the hCoV-NL63 spike antigen were not significantly affected during the pandemic (Fig. 4, D, H, and L). Thus, a specific increase in broadly cross-reactive antibodies against human βCoVs was induced in some MERS-recovered patients by COVID-19 vaccination and/or SARS-CoV-2 infection during the pandemic; however, these responses might be barely reactive to human αCoVs. In addition, we failed to detect significant changes in specific IgG levels against other respiratory viruses, such as human influenza A virus and rhinovirus, during the pandemic (fig. S2).

Positive correlation of antibody levels against human βCoVs in MERS-recovered patients

The above findings indicated potential correlations among specific antibody responses in our MERS-recovered cohort, which was confirmed using pairwise correlation analysis (Fig. 4, N to P). Antibody responses against hCoV-OC43, an endemic embecovirus, significantly correlated with those against sarbecoviruses (SARS-CoV-2 and SARS-CoV) and a merbecovirus (MERS-CoV). Antibody responses against hCoV-NL63 exhibited a positive correlation solely with antibodies against MERS-CoV S2 and hCoV-OC43 but barely correlated with antibodies against the spike antigens of other βCoVs (Fig. 4, N to P). These results are consistent with the amino acid sequence conservation rate of spike antigens (S2 > S > S1) among hCoVs (fig. S3).
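The pairwise analysis described above is an all-against-all Spearman correlation across antibody readouts. A minimal sketch of this kind of analysis is given below, on simulated data (the column names and values are illustrative, not the cohort's measurements); only correlations with P < 0.05 are retained, mirroring the display convention of Fig. 4.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 132  # 33 participants x 4 follow-up years, as analyzed in the study

# Simulated log-scale antibody readouts sharing a common latent factor,
# so that betaCoV antigens correlate while the alphaCoV column does not.
shared = rng.normal(size=n)
data = pd.DataFrame({
    "MERS_S2": shared + rng.normal(scale=0.5, size=n),
    "SARS2_S": shared + rng.normal(scale=0.5, size=n),
    "SARS_S":  shared + rng.normal(scale=0.5, size=n),
    "OC43_S":  shared + rng.normal(scale=0.8, size=n),
    "NL63_S":  rng.normal(size=n),  # alphaCoV: largely independent
})

rho, p = spearmanr(data.values)  # pairwise rho and P-value matrices
rho = pd.DataFrame(rho, index=data.columns, columns=data.columns)
p = pd.DataFrame(p, index=data.columns, columns=data.columns)
print(rho.where(p < 0.05).round(2))  # keep significant pairs only
```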
We also confirmed the significant induction of antibodies against the spike antigens of SARS-CoV-2 and SARS-CoV after COVID-19 vaccination in healthy controls unexposed to MERS-CoV (Fig. 5A). In addition, the control group exhibited a significant rise in the antibody response to the S2 antigen of MERS-CoV. However, before vaccination, the control group's baseline antibody levels against the S antigens of SARS-CoV-2 and MERS-CoV were significantly lower than those of MERS-recovered patients who had not yet been vaccinated against COVID-19 (fifth year). MERS-recovered patients in their seventh year also showed notably elevated antibody levels for the SARS-CoV-2 S, SARS-CoV S, and MERS-CoV S/S2 antigens compared to the control group after COVID-19 vaccination. For the MERS-recovered group in the seventh year, postvaccination antibody levels for the S antigens of SARS-CoV-2 (mean titer ± SD: 21,953 ± 24,768), SARS-CoV (8266 ± 8653), and MERS-CoV (16,762 ± 15,817) were significantly higher than those in the non-MERS controls after vaccination (anti-SARS-CoV-2 S: 3873 ± 4271, anti-SARS-CoV S: 1975 ± 2063, and anti-MERS-CoV S: 1012 ± 2396). Moreover, the fold increase in antibodies specific to the S antigens of SARS-CoV-2 and SARS-CoV after COVID-19 vaccination was also substantially higher in the MERS-recovered group (fifth versus seventh year) than in the non-MERS control group (Fig. 5B). Thus, prior exposure to MERS-CoV spike antigens might further contribute to the antibody responses against SARS-CoV-2 and SARS-CoV spike antigens after COVID-19 vaccination. The antibody fold increase for the S and S2 antigens of MERS-CoV was not significantly different between the non-MERS control and MERS-recovered groups, which may be due to the preexisting higher antibody levels in MERS-recovered subjects.

Landscaping linear epitopes in hCoVs' spikes recognized by cross-reacting antibodies in MERS-recovered patients

The antibody responses cross-reacting with hCoVs were further characterized using a microarray system. Sera collected from MERS-recovered patients in the first, third, fifth, or seventh year after MERS-CoV infection were pooled and screened for reactive linear epitopes. Pooled sera from healthy controls, unexposed to MERS-CoV or SARS-CoV-2 antigens, were used as negative controls. Antibody reactivities, presented as normalized intensity values, were arranged on sequence alignments of the hCoVs' spike domains. This approach allowed us to landscape the linear epitopes recognized by serum antibodies and to quantify the kinetic responses of the reactive antibodies (fig. S4A). A general temporal increase in the overall binding intensities against the spike antigens derived from the seven hCoVs was observed (fig. S4B). Among the peptide epitopes, we identified five immunodominant epitopes within the S2 antigens, which were more conserved than the S1 subunits and presented broad reactivity across hCoVs (Fig. 6). These include a region [common epitope #2 (CE#2)] encompassing the fusion peptide, CE#4 containing a conserved stem-helix region, and the cytoplasmic tail (CE#5) of human CoVs. CE#2 and CE#4 are highly conserved among diverse CoVs and exhibit the broadest neutralizing spectrum by preventing S2 cleavage and the fusion of viral and cellular membranes, respectively (3). In particular, the binding capacity of antibodies against CE#4 increased for all five human βCoVs in the pooled sera collected in the seventh year (Fig. 6, A and B). This strongly suggests an active boosting of the antibody repertoire against the conserved stem-helix region of the spike antigen by COVID-19 vaccination and/or SARS-CoV-2 infection. We confirmed that antibody responses against MERS-CoV CE#4 were elevated during the pandemic period (sixth or seventh year) in all MERS-recovered patients except one participant who was unvaccinated and uninfected with SARS-CoV-2 (Fig. 6, C to E). Notably, this antibody response was significantly higher in G III than in G I in the seventh year (Fig. 6D). A pairwise correlation analysis involving antibodies against various spike antigens showed the most robust positive correlation between antibody levels against MERS-CoV CE#4 and neutralizing activity against SARS-CoV-2 and SARS-CoV (Fig. 6, F and G). This was also observed with antibody levels against the S antigens (S or S2) of SARS-CoV-2, SARS-CoV, and MERS-CoV. However, there was moderate to no significant correlation with antibodies against hCoV-OC43 S and MERS-CoV S1 or with the neutralizing activity against MERS-CoV. The antibody levels against MERS-CoV CE#4 were also significantly increased in the non-MERS control group after COVID-19 vaccination (Fig. 6H). However, the specific antibody levels were elevated in only 56.8% (21 of 37) of cases after vaccination, unchanged in 37.8% (14 of 37), and decreased in 5.4% (2 of 37), compared to those before vaccination. This antibody response increased in 84.8% (28 of 31) of the MERS-recovered group after vaccination. In addition, the fold increase in antibody levels specific to CE#4 of the MERS-CoV spike antigen was significantly higher in the MERS-recovered group than in the non-MERS control group (Fig. 6H, right). Again, prior exposure to MERS-CoV spike antigens may better amplify the antibody response to the conserved CE#4 (stem-helix epitope) in MERS-recovered subjects after COVID-19 vaccination, more so than in the non-MERS group.

Notably, antibodies against βCoV CE#5 (cytoplasmic tail) were moderately induced in the first year after the MERS outbreak and sustained up to the seventh year, although variations were observed depending on the βCoV species (Fig. 6B and fig. S4C). The antibody response to CE#5 appears to be specific to the viral species: an enhanced response to the CE#5 of MERS-CoV was detected in the first year, whereas responses to those of SARS-CoV-2 and SARS-CoV increased during the pandemic.
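The microarray described above tiles each spike protein into overlapping 15-mer peptides. A minimal sketch of such tiling is shown below; the 4-residue step (11-residue overlap) and the sequence fragment are assumptions for illustration, since the text specifies the peptide length but not the array's step size.

```python
def tile_peptides(sequence: str, length: int = 15, step: int = 4) -> list[str]:
    """Tile a protein sequence into overlapping peptides of a fixed length."""
    windows = [sequence[i:i + length]
               for i in range(0, len(sequence) - length + 1, step)]
    # Add a final C-terminal peptide if the last window stopped short.
    if windows and not sequence.endswith(windows[-1]):
        windows.append(sequence[-length:])
    return windows

# Illustrative N-terminal fragment of a spike protein (not a study sequence).
fragment = "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSFTRGVYYPDKVFRSSVLHS"
peptides = tile_peptides(fragment)
print(len(peptides), peptides[0], peptides[-1])
```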
Kinetic changes in memory T cell responses against MERS-CoV during the COVID-19 pandemic

Stimulation of peripheral blood mononuclear cells (PBMCs) with synthetic MERS-CoV peptides revealed a gradual decrease in interferon-γ (IFN-γ)-producing T cell levels in most participants, yielding a 64% overall positivity rate of memory T cell responses in the fifth year after infection (15). Here, the mean frequency of reactive T lymphocytes increased in participants during the COVID-19 pandemic (Fig. 7A). The positivity rates of memory T cell responses specific to any of the viral antigens (S1, S2, E, M, and N) were 80.6 and 65.7% in the sixth and seventh years, respectively. Increases in specific memory T cells were detected in a subset of participants [>1.5-fold increase in 44.4% (16 of 36)] during the pandemic. However, these annual changes did not reach statistical significance. The mean specific memory T cell frequencies were moderately elevated in the sixth and seventh years [mean ± SD, 103 ± 95 spot-forming cells (SFCs)/2 × 10⁵ PBMCs] compared to the fourth and fifth years (74 ± 62 SFCs/2 × 10⁵ PBMCs) (Fig. 7B). This kinetic pattern was similar among all the severity groups (fig. S5A). The moderate increase may be attributed to cross-priming by COVID-19 vaccination and/or SARS-CoV-2 infection, as observed for the antibody responses. Simultaneous increases in the frequency of SARS-CoV-2- and MERS-CoV-reactive T cells observed in several MERS-recovered patients who were vaccinated and infected with SARS-CoV-2 during the sixth and seventh years indirectly supported this possibility (fig. S6, A and F). In addition, an increase in the positivity rate of IFN-γ+ T cells was observed in cells stimulated with S2 peptides (36.1, 58.3, and 60.0% in the fifth, sixth, and seventh years, respectively) (Fig. 7A, middle). Because a subset of participants displayed an increase in specific T cell responses against E/M/N (fig. S6A), which were not included in COVID-19 vaccines, these responses might have been induced by cross-reactive T cells after exposure to SARS-CoV-2 and/or other βCoVs during the pandemic. In general, these responses were higher than those of the specific T cells responding to the spike protein (S1 or S2) (Fig. 7, A and B). However, none of the annual changes in T cell responses were statistically significant, and they occurred at moderate levels.

Furthermore, we determined which subsets of T lymphocytes participated in the persistent memory T cell responses by performing intracellular cytokine staining after stimulation of PBMCs with viral peptides. MERS-CoV-reactive IFN-γ-secreting T cells generally persisted in the participants up to the seventh year, with a gradual rise in some participants in the sixth and seventh years (fig. S6, B to D), similar to that observed for antibody responses. The frequency of virus-reactive cells among CD4+ and CD8+ T lymphocytes also tended to persist and even increased in some patients during the pandemic, driving a gradual rise in the mean frequency of the memory T cell subsets (fig. S6D). A rising trend (>1.5-fold increase) during the pandemic was observed in both CD4+ (38.9% of participants) and CD8+ T cells (52.8% of participants) when compared to the fourth and fifth years, whereas only two (5.6%) and five (13.9%) participants presented a >50% reduction in antigen-specific CD4+ and CD8+ T cells, respectively. In addition, the CD4+ T cell frequency positively correlated with that of CD8+ T cells in the fourth to seventh years (fig. S6E). Both CD4+ and CD8+ T cells generally responded better to the E/M/N proteins than to the S protein antigens during the follow-up period (fig. S6D), as reported previously (15). Notably, the frequencies of SARS-CoV-2 antigen-specific CD4+ and CD8+ T cells simultaneously increased in participants during the sixth and seventh years (fig. S6G), suggesting that the increase in MERS-CoV-specific T cell responses might have been induced by cross-reactive T cells after vaccination and/or SARS-CoV-2 infection. Nonetheless, there was no significant difference in the levels of MERS-CoV-specific memory CD4+ and CD8+ T cells among the three severity groups, and there were few correlations with age, sex, and SARS-CoV-2 infection history in our cohort [figs. S5 (B and C) and S7A].
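Two classification rules recur in this section: an ELISpot response is called positive when it exceeds the mean + 2 SD of healthy controls (the cutoff stated in the Fig. 7 legend), and a response is called rising when it increases by more than 1.5-fold between periods. A minimal sketch of both rules, on made-up SFC counts, is shown below.

```python
import numpy as np

def elispot_positive(sfc: float, control_sfc: np.ndarray) -> bool:
    """Positive if the response exceeds mean + 2 SD of healthy controls."""
    cutoff = control_sfc.mean() + 2 * control_sfc.std(ddof=1)
    return sfc > cutoff

def rising(pre: float, post: float, fold: float = 1.5) -> bool:
    """Flag a >1.5-fold rise between two follow-up periods."""
    return pre > 0 and post / pre > fold

controls = np.array([4.0, 7.0, 3.0, 9.0, 5.0, 6.0])  # illustrative SFC/2e5 PBMCs
print(elispot_positive(42.0, controls))  # True: well above the control cutoff
print(rising(60.0, 95.0))                # True: ~1.6-fold increase
```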
We also examined the polyfunctionality of memory T cells that secrete multiple cytokines, such as IFN-γ, tumor necrosis factor-α (TNF-α), and interleukin-2 (IL-2), upon antigenic (S1/S2/E/M/N) stimulation. MERS-CoV-reactive CD4+ T cells showed a similar distribution of single-, double-, and triple-cytokine-producing cells, whereas single-cytokine-producing cells were dominant among memory CD8+ T cells (Fig. 7, C and D). Notably, IFN-γ+ CD8+ T cells were highly up-regulated in the sixth and seventh years (Fig. 7E). Predominantly multifunctional CD4+ T cells together with enhanced IFN-γ+ CD8+ T cells specific to SARS-CoV-2 antigens after viral infection have been reported previously (17, 18). Therefore, potential cross-reactive T cell responses induced by COVID-19 vaccination and/or SARS-CoV-2 infection may drive antigen-specific memory T cell responses against MERS-CoV.

Correlations between memory T cell and antibody responses

We further assessed the correlation between antigen-specific antibodies and T cell responses observed in MERS-recovered patients from the fourth to seventh years. In pairwise comparisons, MERS-CoV-specific T cells measured using enzyme-linked immunosorbent spot (ELISpot) analysis and flow cytometry after stimulation with mixed peptide pools derived from the S/E/M/N antigens extensively correlated with each other (Fig. 7F). In particular, CD4+ T cells significantly correlated with the nAb titers against MERS-CoV (Fig. 7, F and G). Antibody responses against MERS-CoV S1, SARS-CoV-2, and SARS-CoV barely correlated with T cell responses. However, we observed a broader and more significant correlation between MERS-CoV S antigen-specific T cells and antibody levels against MERS-CoV S/S2 and hCoV-OC43 (Fig. 7F and fig. S7B). Thus, broadly cross-reacting spike-specific T cell responses, especially those against the S2 subunit generated by COVID-19 vaccination, may induce broadly cross-reacting antibodies against βCoVs, as well as neutralizing activities, but not against the highly variable S1 subunit. Furthermore, IFN-γ-secreting CD4+ T cells specific to MERS-CoV antigens significantly correlated (P = 0.0265) with nAb levels against MERS-CoV, whereas those of CD8+ T cells, except cells stimulated with the E/M/N antigens, failed to do so (fig. S7, C and D). Therefore, memory B cells with broadly cross-reactive neutralizing potential against βCoVs observed in MERS-recovered patients might be boosted by elevated CD4+ T cells induced by COVID-19 vaccination and/or SARS-CoV-2 infection.
DISCUSSION

Information on the characteristics, longevity, and cross-reactivity of antibodies and memory T cell responses after acute CoV infection is important for developing effective control strategies. Our long-term follow-up study with a well-defined cohort of MERS-recovered patients revealed the following characteristics of humoral and cellular memory responses.

First, antibody responses and neutralizing activity against MERS-CoV peaked 1 to 2 years after infection and gradually declined thereafter (Fig. 1). The estimated half-lives of anti-MERS-CoV S1 IgG and nAbs in seropositive participants during the 5 years after the MERS outbreak are 61 and 20 months, respectively (16). Virus-specific antibody responses and memory T lymphocytes were detected in 17.1% (anti-MERS-CoV S1 IgG) and 65.7% (IFN-γ+ cells) of the survivors 7 years after infection, indicating a relatively longer persistence of memory T cells than of antibody responses. Memory T cells predominated in the CD4+ T lymphocyte compartment, although CD8+ T cells were also found in some participants (fig. S6). Specific IgG antibodies to SARS-CoV have been largely undetectable in patients with SARS (8.7% positivity) 6 years after infection, along with an absence of SARS-CoV-specific memory B cell responses (20). However, memory T cell responses to a pool of SARS-CoV S peptides have been detected in 60.9% of patients 6 years after recovery (20), and long-lasting memory T cells reactive to the SARS-CoV N protein have been detected 17 years after the 2003 SARS outbreak (4). Thus, memory T cell immunity lasting for >7 years has been observed in at least 50% of MERS-CoV-infected participants, which is similar to that acquired from SARS-CoV infection. Moreover, MERS-specific cellular memory responses could last for up to 6.9 years, although this was shown in a study of a small number of MERS survivors at single time points (21). Antibody responses and neutralizing activity against viral antigens during SARS-CoV-2 infection generally peak in the first month after infection, gradually decline, and stabilize 4 to 6 months after infection (22). The estimated half-life of the SARS-CoV-2 nAb is >200 days, and neutralizing activity is detectable in approximately 80 to 90% of SARS-CoV-2-infected individuals 12 months after infection (23, 24). The estimated half-lives of SARS-CoV-2-specific memory CD4+ and CD8+ T cells vary widely, from 100 to 400 days and 100 to 200 days, respectively, and long-lasting memory T cells exist against SARS-CoV-2 (22). Although long-term follow-up studies on the antibody and memory T cell responses against SARS-CoV-2 are required to confirm the longevity of adaptive immunity and for comparative analysis with MERS-CoV and SARS-CoV infections, active vaccine campaigns and the continuous emergence of SARS-CoV-2 variants may hamper this approach.
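The half-lives cited in this paragraph come from fitting antibody decay kinetics. A minimal sketch of how a half-life can be estimated from longitudinal mean titers, using a single-exponential model on made-up values (not the cohort data), is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a0, t_half):
    """Single-exponential decay parameterized directly by the half-life."""
    return a0 * 0.5 ** (t / t_half)

# Illustrative mean nAb titers at 12 to 60 months post infection.
months = np.array([12.0, 24.0, 36.0, 48.0, 60.0])
titers = np.array([2400.0, 1600.0, 1050.0, 720.0, 470.0])

(a0, t_half), _ = curve_fit(decay, months, titers, p0=(2500.0, 24.0))
print(f"estimated half-life: {t_half:.0f} months")  # ~20 months for these values
```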
Second, the antibody levels among patients in G II and G III were consistently higher than those in G I throughout the surveillance period. Moreover, the initial antibody responses in patients with severe MERS (G III) were delayed compared to those in patients with moderate pneumonia (G II) (25), whereas more rapid and robust antibody responses specific to SARS-CoV-2 antigens are consistently observed in patients with severe COVID-19, potentially due to extrafollicular B cell activation (26-28). In addition, the longevity of specific antibodies in patients with COVID-19 (less than 1 year) is considerably shorter than that in patients with MERS (approximately 5 years) (16, 29). Thus, the differential mechanisms modulating initial antibody responses, their role in disease progression, and the regulatory networks controlling the longevity of memory responses among various zoonotic CoV infections, depending on their virulence in humans, need to be further characterized (30). Nonetheless, prolonged antibody responses and neutralizing activity in nonhospitalized patients who recovered from more severe MERS were observed for up to 6 years after infection (21). Although the frequency of memory T lymphocytes was higher in severely or moderately ill patients than in mildly ill patients during early recovery from infection (14), this difference gradually disappeared (fig. S5).

Third, antibody and memory T cell responses against MERS-CoV were boosted by other βCoV infections and/or vaccinations, such as those involving SARS-CoV-2, in only some MERS-recovered patients. The degree of antibody and memory T cell resurgence against MERS-CoV may be independent of initial MERS severity. Memory T lymphocytes, particularly CD4+ T cells, responded better to the E/M/N proteins than to the S protein of MERS-CoV (fig. S6). In addition, the polyfunctionality of these memory T lymphocytes did not significantly change during the follow-up period but was slightly enhanced during the pandemic, potentially due to cross-priming through SARS-CoV-2 vaccination and/or infection, similar to the boost in preexisting cross-reactive memory T cells in SARS-CoV-1 survivors after COVID-19 vaccination (31). In addition, COVID-19 vaccination or SARS-CoV-2 infection can induce T cell proliferation with cross-reactivity to MERS-CoV in individuals previously uninfected with MERS-CoV (32). The simultaneous increase in MERS-CoV- and SARS-CoV-2-reactive T cells in some MERS survivors infected with SARS-CoV-2 and/or vaccinated also suggests cross-reactivity (fig. S6). Therefore, the increase in MERS-CoV-reactive memory T cell numbers observed in only some patients during the pandemic might be caused by exposure to SARS-CoV-2 antigens; however, this needs further investigation. The positive correlation of the memory T cell responses, particularly CD4+ T cells, with the MERS-CoV antibodies suggests that the boosted antibody responses might be supported by antigen-specific memory T cells (Fig. 7F) and by cross-reactive memory B cell responses against conserved epitopes such as S2, as revealed by correlation analysis using a dataset of antibody responses against various hCoVs (Fig. 4).

Unbiased clustering based on all antibody response datasets revealed two groups in our MERS cohort, one with high and sustained anti-MERS-CoV antibodies and the other with relatively low anti-MERS-CoV antibodies (fig. S1). The first group was male-dominated (82.4%, 14 of 17) and had recovered from more severe MERS (G II: 58.8%, G III: 41.2%), whereas the second group was female-dominated (62.5%, 10 of 16) and generally exhibited milder MERS (G I: 43.8%, G II: 31.3%, and G III: 25.0%). Although the first group presented more enhanced antibody responses against hCoV-OC43, which correlated better with MERS-CoV antibodies than with SARS-CoV-2 or SARS-CoV antibodies (Fig. 4N), both groups showed similar degrees of elevation in anti-SARS-CoV-2 and anti-SARS-CoV antibodies, indicating that COVID-19 vaccination equally cross-boosted anti-SARS-CoV antibodies, regardless of previous anti-MERS-CoV antibody levels.
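The unbiased grouping described above can be reproduced with standard agglomerative (hierarchical) clustering cut into two clusters. A minimal sketch on a simulated participant-by-antigen matrix is shown below (illustrative values, not the cohort data; the study's exact linkage settings are not specified).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)

# Simulated log-titer profiles: 33 participants x 6 antibody features,
# built so that roughly half the cohort has high, sustained responses.
high = rng.normal(loc=2.0, scale=0.6, size=(17, 6))
low = rng.normal(loc=0.0, scale=0.6, size=(16, 6))
profiles = np.vstack([high, low])

tree = linkage(profiles, method="ward")             # agglomerative clustering
labels = fcluster(tree, t=2, criterion="maxclust")  # cut the tree into 2 groups
print(np.bincount(labels)[1:])                      # sizes of the two clusters
```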
However, we observed a remarkable increase in cross-reactive antibody responses against the conserved CE#4 linear epitope in MERS survivors, especially in the G III group, which recovered from severe MERS (Fig. 6D). The antibody levels in the seventh year were significantly higher (P = 0.0068) in the first group [mean log2(titer) ± SD, 7.2 ± 2.1] than in the second group (5.5 ± 1.6) (fig. S1). In addition, the fold increase in antibody levels against the SARS-CoV S antigen and the MERS-CoV CE#4 epitope in MERS survivors was notably higher than that observed in the non-MERS group following COVID-19 vaccination (Figs. 5B and 6H). Moreover, prior exposure to MERS-CoV spike antigens contributed to significantly higher antibody responses against the MERS-CoV, SARS-CoV-2, and SARS-CoV spike antigens compared to the non-MERS group after COVID-19 vaccination (Fig. 5A). Previous studies also reported the induction of cross-reactive antibody responses in MERS-recovered patients during the COVID-19 pandemic, although they examined smaller cohorts (33, 34). Thus, stronger and sustained anti-MERS-CoV antibody responses and memory B cells after recovery may support enhanced cross-reactivity of antibodies against the conserved stem-helix epitopes of various human βCoVs, as well as those against βCoV spike antigens, after COVID-19 vaccination and/or SARS-CoV-2 infection. Broadly neutralizing monoclonal antibodies against this epitope protect against all three human βCoVs in an in vivo infection model (35). Nine patients with confirmed COVID-19 in our cohort presented with only mild respiratory symptoms without pneumonia. Antibodies specific to the stem helix of various human βCoVs are less frequent, implying its low immunogenicity, in individuals previously infected with SARS-CoV-2 (7 to 44%) or completely vaccinated against COVID-19 (22 to 76%) (36). Notably, specific antibodies to MERS-CoV CE#4 were elevated in 56.8% of vaccinated participants unexposed to MERS-CoV, whereas the antibody response increased in 84.8% of the MERS-recovered group after vaccination (Fig. 6H). Thus, if there are preexisting memory responses, even those established up to 7 years prior against other human βCoVs including MERS-CoV, plasma antibody responses to the stem helix are robustly triggered by SARS-CoV-2 infection or vaccination. Nonetheless, antibodies specific to MERS-CoV CE#4 demonstrate a significant correlation with neutralizing activity against SARS-CoV-2 and SARS-CoV but only a marginal correlation with neutralizing antibodies against MERS-CoV (Fig. 6, F and G). Considering that antibody levels against the MERS-CoV S1 antigen, which includes the receptor-binding domain, have remained relatively high in our MERS cohort even during the pandemic (Fig. 1) and exhibit a strong correlation with neutralizing activity against MERS-CoV (Figs. 2J and 4N), the contribution of antibodies specific to the S1 antigen may be more substantial than that of antibodies targeting MERS-CoV CE#4, which are induced by COVID-19 vaccination and/or SARS-CoV-2 infection.

Peptide arrays also identified the epitope CE#5 (Fig. 5) as a potential cytoplasmic epitope representing immunogenic and conserved epitopes of human βCoVs, which can be boosted by repeated infection and/or vaccination with heterologous spike antigens. However, the immunological role of antibodies against the intracellular domain of hCoVs needs further investigation. We also observed a relative increase in antibody levels against the highly conserved epitope (CE#2) containing the S2′ fusion peptide region of all CoV genera in the first year after the MERS outbreak (Fig. 6, A and B) (3). Although these antibodies may provide excellent broad neutralizing activity against all seven hCoVs (37), antibodies against this epitope in humans were barely boosted upon COVID-19 vaccination and/or SARS-CoV-2 infection, suggesting a relatively weaker immunogenicity than those against CE#4 and CE#5 (fig. S4C). Nonetheless, we noted a general increase in antibody levels against the spike peptide pools following MERS-CoV infection (from baseline to the first year) and during the pandemic (from the fifth to the seventh year), despite wide variations depending on the βCoV species.

The efficacy of vaccines against SARS-CoV-2 through nAb generation gradually weakens with the appearance of variant viruses and the rapid disappearance of nAbs in humans (38). Virus-reactive T cells are also generated upon vaccination and are critical in preventing disease development and progression to severe infection (39). Therefore, the active induction of long-lasting antibodies with broad neutralizing activity and memory T cells against various hCoVs by targeting conserved epitopes should be optimized to develop an effective pan-CoV universal vaccine. The inclusion of immunogenic epitopes within spike antigens, which can be easily boosted by repeatedly vaccinating the general human population and can potently induce broad neutralizing activity against various hCoVs, could be extremely beneficial. In addition, the generation of long-lasting memory T cells, particularly CD4+ T cells, is also a promising strategy for developing effective pan-CoV vaccines. Recent reports suggest that the sustained presence and functional role of spike-specific CD4+ follicular helper T cells following SARS-CoV-2 infection and vaccination may be crucial for the maintenance of antibodies and the recall response, potentially offering long-term protection (40, 41). Considering the hCoV species-dependent heterogeneity of
Fig. 1. Kinetic changes in S1 IgG antibody responses, seropositivity, and neutralizing activity against MERS-CoV in recovered patients. (A) The serum spike (S1)-specific IgG levels were semiquantitatively determined by calculating the OD ratios. The intermediate OD ratio range is marked gray. (B) Anti-S1 IgG levels according to clinical MERS severity, represented as box-and-whisker (minimum to maximum) plots including the median value. (C and D) Seropositivity based on OD ratios in all participants (C) and in each clinical severity group (D) at each time point. (E) Neutralizing activity (FRNT50) levels of collected sera. Dashed line: limit of detection. (F) Neutralizing activities according to clinical MERS severity. (G) Kinetic changes in mean OD ratios against the S1 antigen and neutralizing activity after nonlinear regression analysis (solid line) with 95% confidence intervals (CIs) (shaded cyan and orange). nAb, neutralizing antibody; n, sample number; purple line, geometric mean value; *P < 0.05; **P < 0.01; ***P < 0.001 by Kruskal-Wallis test.

Fig. 2. Kinetic changes in antibody responses against spike antigens and neutralizing activity against MERS-CoV. (A to D) Antibody response levels against the indicated MERS-CoV spike antigens and neutralizing activity 4 to 7 years after the Korean MERS outbreak. Purple line, geometric mean; nAb, neutralizing antibody; dashed line, limit of detection; n = 33 per year. (E to H) Antibody response levels according to clinical MERS severity, represented as box-and-whisker (minimum to maximum) plots including the median value. n = 33 per year. (I) Kinetic changes in individual neutralizing activities with the indicated disease severity. Kinetic lines are shown for seven participants with a >50% increase in neutralizing activity during the COVID-19 pandemic, according to MERS severity (light blue: G I; orange: G II; brown: G III; pink: unvaccinated and SARS-CoV-2-infected participant; black: unvaccinated and uninfected participant). (J) Correlation of neutralizing activity against MERS-CoV with the indicated antibody levels, assessed using linear regression (brown line) and Spearman's rank test (r and P value in indigo). n = 132 (33 × 4 years). (K) Kinetic changes in mean antibody levels against the indicated spike antigens and neutralizing activity after nonlinear regression analysis (solid line) with 95% CI (shaded colors). *P < 0.05; **P < 0.01; ***P < 0.001 using the Kruskal-Wallis test.

Fig. 3. Kinetic changes in antibody responses against spike antigens and neutralizing activity against SARS-CoV-2. (A to D) Antibody response levels against the indicated SARS-CoV-2 spike antigens and neutralizing activity 4 to 7 years after the Korean MERS outbreak. Purple line, geometric mean; nAb, neutralizing antibody; dashed line, limit of detection; n = 33 per year. (E to H) Antibody response levels against the indicated SARS-CoV-2 spike antigens and neutralizing activity according to clinical MERS severity, represented as box-and-whisker (minimum to maximum) plots including the median value. n = 33 per year. (I to L) Kinetic changes in antibody levels against the indicated spike antigens and neutralizing activity of individual participants (pink: unvaccinated and SARS-CoV-2-infected participant; black: unvaccinated and uninfected participant). (M to P) Correlation of neutralizing activity against SARS-CoV-2 wild type (WT) [(M) and (N)] or the BA.5 variant [(O) and (P)] with the indicated antibody levels, assessed using linear regression (brown line) and Spearman's rank test (r and P value in indigo). *P < 0.05; **P < 0.01; ***P < 0.001 using the Kruskal-Wallis test.

Fig. 5. Comparison of antibody responses against zoonotic CoV spike antigens between the non-MERS and MERS-recovered groups after COVID-19 vaccination. (A) Comparison of antibody responses to the indicated CoV spike antigens in the non-MERS control group (n = 36) and the MERS-recovered cohort (n = 31) before (pre) and after (post) COVID-19 vaccination. Purple line, geometric mean. *P < 0.05; **P < 0.01; ***P < 0.001 by Kruskal-Wallis test. ns, not significant. (B) Comparison of fold increases in antibody responses to the indicated CoV spike antigens between the non-MERS control group and the MERS-recovered cohort before and after COVID-19 vaccination. P value calculated using the Mann-Whitney test.

Fig. 6. Kinetic changes in antibody responses against hCoVs' linear spike epitopes and their correlation with antibody levels against hCoV spikes. (A) Landscape of antibody responses against overlapping 15-mer peptides derived from various hCoVs' spikes. The width of the antigenic peak for each spike protein was adjusted approximately, considering sequence similarities and lengths. Broadly reactive representative peptide regions (① to ⑤) within the S2 antigens were selected. Ctl, negative control pooled sera. (B) Peptide sequences in the five regions and kinetic responses against them. (C) Antibody response levels against CE#4. Purple line: geometric mean; dashed line: limit of detection; n = 33 per year. (D) Antibody response levels against CE#4 according to clinical MERS severity. (E) Kinetic changes in individual serum antibody levels against CE#4 (pink: unvaccinated and SARS-CoV-2-infected participant; black: unvaccinated and uninfected participant). The seropositivity rate for each year is also presented. (F) Correlation of anti-CE#4 antibody with antibody levels against various hCoVs' spike antigens, assessed using Spearman's rank test. n = 132. (G) Correlation of anti-CE#4 antibody with the indicated neutralizing activity, assessed using linear regression (brown line) and Spearman's rank test. (H) Change in serum anti-CE#4 antibody levels before and after COVID-19 vaccination. *P < 0.05; **P < 0.01; ***P < 0.001 by Kruskal-Wallis test or the Mann-Whitney test. a.u., arbitrary units.

Fig. 7. Kinetic changes in MERS-CoV-specific T cell responses and their correlation with antibody responses against various hCoVs' spike antigens and neutralizing activity. (A) Ex vivo IFN-γ ELISpot responses to MERS-CoV structural proteins, with the annual positivity rates of memory T cell responses (over mean + 2 × SD of healthy controls, dashed line). Purple line, geometric mean; SFC, spot-forming cell; PBMC, peripheral blood mononuclear cell. (B) Mean memory T cell response kinetics after nonlinear regression analysis (solid line) with 95% CI (shaded color). (C and D) Cytokine production profiles of the MERS-CoV-specific CD4+ T cells (C) and CD8+ T cells (D) in the MERS-recovered cohort. All possible combinations of IFN-γ, IL-2, and TNF-α are shown on the x axis. The response at each year is grouped according to the number of functions, and the data are summarized using pie charts. Each slice of the pie represents the fraction of the total response consisting of T cells positive for a given number of functions. Purple line, geometric mean. (E) Kinetic changes in the mean value of polyfunctional and IFN-γ+ memory T cell responses after nonlinear regression analysis (solid line) with 95% CI (shaded color). (F) Correlation matrix of MERS-CoV-specific memory T cells with antibody levels against various hCoVs' spike antigens and neutralizing activity. The circle sizes and color intensities are proportional to Spearman's correlation coefficients. Only significant correlations (P < 0.05) are presented. (G) Correlation of neutralizing activity against MERS-CoV with memory T cells specific to MERS-CoV structural peptide pools (S1/S2/E/M/N), assessed using linear regression (brown line) and Spearman's rank test. n = 132.
2024-03-01T05:14:42.394Z
2024-02-28T00:00:00.000
{ "year": 2024, "sha1": "480246928f1eaf27703b09cdea485107480ff5e9", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "480246928f1eaf27703b09cdea485107480ff5e9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269174719
pes2o/s2orc
v3-fos-license
Pressure Ulcer Management Virtual Reality Simulation (PU-VRSim) for Novice Nurses: Mixed Methods Study

Background: Pressure ulcers (PUs) are a common and serious complication in patients who are immobile in health care settings. Nurses play a fundamental role in the prevention of PUs; however, novice nurses lack experience in clinical situations. Virtual reality (VR) is highly conducive to clinical- and procedure-focused training because it facilitates simulations.

Objective: We aimed to explore the feasibility of a novel PU management VR simulation (PU-VRSim) program using a head-mounted display for novice nurses and to investigate how different types of learning materials (ie, VR or a video-based lecture) impact learning outcomes and experiences.

Methods: PU-VRSim was created in the Unity 3D platform. This mixed methods pilot quasi-experimental study included 35 novice nurses categorized into the experimental (n=18) and control (n=17) groups. The PU-VRSim program was applied using VR in the experimental group, whereas the control group received a video-based lecture. The PU knowledge test, critical thinking disposition measurement tool, and Korean version of the General Self-Efficacy Scale were assessed before and after the intervention in both groups. After the intervention, the experimental group was further assessed using the Clinical Judgment Rubric and interviewed to evaluate their experience with PU-VRSim.

Results: The results compared before and after the intervention showed significant improvements in PU knowledge in both the experimental group (P=.001) and control group (P=.005). There were no significant differences in self-efficacy and critical thinking in either group. The experimental group scored a mean of 3.23 (SD 0.44) points (accomplished) on clinical judgment, assessed using a 4-point scale. The experimental group interviews revealed that the VR simulation was realistic and helpful for learning about PU management.

Conclusions: The results revealed that PU-VRSim could improve novice nurses' learning of PU management in realistic environments. Further studies using VR for clinical training are recommended for novice nurses.

Introduction

Pressure ulcers (PUs) refer to skin damage caused by ischemia of the skin, subcutaneous fat, and muscles due to a continuous blood circulation disorder in a compressed area of the body [1]. PUs result in a reduction of the oxygen and nutrition delivered to the cells, which can contribute to the development of cancer and cardiovascular disease, ultimately amounting to high medical expenses [1]. The incidence of PUs is estimated at 12% in hospitals, representing a common but important health problem because PUs lead to high nursing burdens, increased medical costs, and mortality [2]. PUs are among the global health indicators and are included in standards of nursing [3]. Most PUs are preventable by maintaining proper skin integrity, and prevention is considered as important as treatment [3,4].
Nurses have a great responsibility for the well-being and safety of patients [5]. In addition, nurses are required to perform appropriate nursing care for the prevention and management of PUs. However, they experience difficulties in performing clinical nursing and providing the necessary care to patients [6]. Novice nurses are those with less than 3 years of working experience based on the Benner novice-to-expert model; although they can recognize the basic order of nursing tasks and make decisions, it is generally more difficult for novice nurses to establish priorities [7]. In particular, novice nurses who have completed the regular curriculum do not have sufficient opportunities for practice during their training and thus experience difficulties in adapting to a new environment and to changes in roles in the clinical field, leading to stress and anxiety [8,9]. Therefore, a program that can help novice nurses adapt to the clinical environment is needed.

Virtual reality (VR) is characterized by interaction, immersion, and imagination, and has been increasingly used in nursing curricula, with great potential for course development [10]. VR using a head-mounted display (HMD) provides learning experiences of communication between medical staff and patients, as well as simulations of standardized and controlled situations [11]. VR has been readily accepted by learners in various medical environments and plays an essential role in improving their performance [11,12]. In nursing education, VR has been used in areas such as cardiopulmonary resuscitation, respiratory nursing, and delivery nursing, as well as for improving professional knowledge, clinical reasoning skills, and learning satisfaction [13,14]. In addition, as a learning method, VR meets the expectations and learning styles of the new generation of young learners [14,15].

In nursing education, teaching methods have shifted from traditional lecture-style education to simulation education [15]. Lecture-style education is effective in terms of knowledge transfer to novice nurses. However, this approach is limited in improving nursing work skills in hospitals, where various problems can occur [16]. Video-based education for novice nurses is a time-efficient and economically effective method given their heavy workload and lack of physical time; however, this format often lacks an appropriate feedback system [17]. Simulation education provides educational opportunities for clinical practice without putting patients or others at risk, and learners have the advantage of safely learning from experience [18]. VR is a representative technology for simulation education [15]. VR simulation can be used by novice nurses freely, which has been shown to improve their knowledge, critical thinking, and self-efficacy [13,18], thereby helping them transform into professional clinical nurses.

Simulation creates a learning environment in which learners can experience interventions and treatment in a safe manner, and various educational theories and structural models can be applied to achieve effective learning results [19].
The Analysis, Design, Development, Implementation, and Evaluation (ADDIE) instructional design model [20] is an effective and efficient development model based on the five steps of analysis, design, development, implementation, and evaluation. Kolb's experiential learning theory [21] states that learning is achieved through the process of "active experimentation," starting with a "concrete experience," followed by "reflective observation" and "abstract conceptualization." Through the concrete experience of simulation, learners make reflective observations and abstract conceptualizations by trying and practicing new techniques in a safe environment, and they then perform active experimentation to understand the patient's situation in an actual clinical environment and provide appropriate nursing practice.

This study aimed to develop a nursing PU management VR simulation program (PU-VRSim) and assess the feasibility of the novel virtual program for novice nurses. Toward this end, we applied the ADDIE instructional design model and Kolb's experiential learning theory. The first objective was to assess the feasibility of implementing PU-VRSim for nursing education on PUs. The second objective was to compare the effects of the VR program and video-based lectures on PU knowledge, self-efficacy, and critical thinking, and to confirm the level of clinical judgment and the experience of participants after undergoing PU-VRSim. The main research questions were as follows: (1) Is implementing PU-VRSim for nursing education feasible? (2) What is the effect of PU-VRSim compared with that of video-based lectures? (3) What are the participants' experiences with PU-VRSim?

Design

This study applied Kolb's experiential learning theory [21] based on the ADDIE model [20], an instructional design model, to develop PU-VRSim for preventing and managing PUs using VR in a nursing education program. This was a mixed methods, pilot quasi-experimental study [22] including nurses with less than 2 years of clinical experience to confirm the effectiveness of PU-VRSim. PU-VRSim was created in the Unity 3D platform (Unity Technologies). Participants experienced the program through an HMD and hand controllers (HTC Corporation, VIVE Pro).
Participants

For data collection, nurses with less than 2 years of clinical experience were notified of the purpose, period, conditions of participation, and benefits and disadvantages of participating in the study via nurses' community bulletin boards. Recruitment for the preintervention survey, intervention, and postintervention survey was conducted through convenience sampling. Participants were categorized into the two groups based on the work schedules of the novice nurses, and participants were blinded to their group allocation. A total of 35 participants were recruited voluntarily from October 10 to December 31, 2022, with 18 assigned to the experimental group and 17 assigned to the control group from January 1 to March 31, 2023. In both groups, one researcher conducted a one-on-one survey and measured general characteristics, PU knowledge, critical thinking, and self-efficacy using preliminary questionnaires, which were sent to the two groups before implementing the program. Regarding the intervention, the experimental group participated in the PU-VRSim program in the simulation room, whereas the control group participated in a video-based lecture on the prevention and care of PUs. After the program, PU knowledge, critical thinking skills, and self-efficacy were measured in both groups. The effectiveness of the program was further assessed with participants in the experimental group via interviews and the Lasater Clinical Judgment Rubric (LCJR) (Figure 1).

The sample size required to compare variables between groups with the t test was calculated using the G*Power 3.1 program according to the method of Polit and Sherman [23], using a significance level of α=.05, an effect size (f) of 0.80, and a power (1-β) of .90 [24]. Considering that the sample size satisfying these conditions was at least 16 people per group, 36 participants were selected prior to data collection, including 18 in the experimental group and 18 in the control group. One participant in the control group dropped out of the study. Finally, 35 participants were included in the analysis.
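For readers who want to reproduce this kind of a priori calculation outside G*Power, a minimal sketch using statsmodels is shown below. Note two assumptions: the sketch uses a two-sample t test family, and it converts G*Power's effect size f to Cohen's d via d = 2f (a common two-group convention); depending on the test family and tails selected in G*Power, the resulting per-group n can differ from the minimum of 16 reported here.

```python
from statsmodels.stats.power import TTestIndPower

# Inputs as reported: alpha = .05, effect size f = 0.80, power = .90.
f = 0.80
d = 2 * f  # assumed conversion from G*Power's f to Cohen's d for two groups

n_per_group = TTestIndPower().solve_power(
    effect_size=d, alpha=0.05, power=0.90, alternative="two-sided"
)
print(round(n_per_group))  # per-group sample size under these assumptions
```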
Ethical Considerations

Data collection began after obtaining approval from the Institutional Review Board (40525-202204-HR-016-03) of Keimyung University in Daegu City for the protection of the research participants. The purpose of the study, the procedures, the guarantee of anonymity and confidentiality, and the assurance that there would be no consequences in case of withdrawal from the study were explained to the research participants, and they were allowed to respond to the questionnaire only when they agreed to participate in the research. The researchers conducted the preintervention survey, application of the programs, and postintervention survey. All data collected during this study were anonymized. Participants were compensated for their contribution with a beverage coupon worth 10,000 KRW (~US $8) after the postintervention survey.

PU Knowledge

The Pieper-Zulkowski pressure ulcer knowledge test (PZ-PUKT), a PU knowledge tool developed by Pieper and Zulkowski [25] and modified and supplemented by Park [26], was used in this study. The PZ-PUKT comprises 39 questions, including 19, 9, and 11 questions on PU stage confirmation, wound assessment, and dressing methods, respectively. Each question was answered "yes," "no," or "don't know," with 1 point for correct answers and 0 points for incorrect answers. The total score ranges from 0 to 39, with higher scores indicating greater knowledge of PUs. The Cronbach α value was 0.80 and 0.70 in the studies by Pieper and Zulkowski [25] and Park [26], respectively, and was 0.69 in our study.

Critical Thinking

The critical thinking disposition measurement tool developed by Yun [27] and modified and supplemented by Shin et al [28] was used for evaluating the impact of the intervention on critical thinking skills. This tool comprises 27 questions divided into 7 subdomains: intellectual passion/curiosity (5 questions), prudence (4 questions), confidence (4 questions), systemicity (3 questions), intellectual fairness (4 questions), healthy skepticism (4 questions), and objectivity (3 questions). Answers are rated on a scale of 1 point for "not so" to 5 points for "very much so"; a higher score indicates a stronger critical thinking disposition. The Cronbach α value was 0.84 in the studies of both Yun [27] and Shin et al [28] and was 0.83 in our study.

Self-Efficacy

The Korean version of the General Self-Efficacy Scale developed by Schwarzer and Jerusalem [29] and adapted by Schwarzer et al [30] was used to determine general self-efficacy. The Korean version of the General Self-Efficacy Scale comprises 10 questions rated on a 4-point Likert scale, with total scores ranging from 10 to 40 and higher scores indicating higher self-efficacy. The Cronbach α value was 0.90 and 0.88 in the studies by Schwarzer and Jerusalem [29] and Schwarzer et al [30], respectively, and was 0.86 in our study.

Clinical Judgment Rubric

The LCJR, developed by Lasater [31], was used to evaluate the simulation experience. This rubric comprises 11 items based on the following four phases: noticing, interpreting, responding, and reflecting. The LCJR evaluates participants' performance as beginning (1 point), developing (2 points), accomplished (3 points), or exemplary (4 points). The total score ranges from 11 to 44, with a higher score indicating higher clinical judgment ability. The Cronbach α value was 0.83 in the study of Shin et al [32] and was 0.92 in our study.
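Each instrument above is summarized by a Cronbach α reliability coefficient. A minimal sketch of how α is computed from an item-response matrix is shown below, on simulated binary responses (illustrative data, not the study's); the same function applies to Likert-scale items.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(7)
ability = rng.normal(size=(35, 1))  # 35 respondents, as enrolled in the study
noise = rng.normal(size=(35, 39))   # 39 items, as in the PZ-PUKT
responses = (ability + noise > 0).astype(float)  # simulated correct/incorrect
print(round(cronbach_alpha(responses), 2))
```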
Development Overview

PU-VRSim was developed by applying Kolb's experiential learning theory to the ADDIE model (Figure 2).

Analysis Stage

The analysis stage identified learners' general and learning-related characteristics. Through a literature review [33], the importance of PU care, the factors contributing to PU occurrence, and prevention and management methods were confirmed, and the factors to be included in the development of the VR simulation were analyzed.

Design Stage

In the design stage, the teaching method for developing an effective educational program was determined. Kolb's experiential learning theory [21] was applied to the basic data collected during the analysis stage to determine the teaching method, using the VR simulation (concrete experience) and debriefing (reflective observation). The design aimed to improve critical thinking, self-efficacy, and clinical judgment (abstract conceptualization). Through this approach, learners would become able to perform PU prevention and nursing care (active experimentation) well with actual patients.

Development Stage

In the development stage, a VR-based program was developed based on the educational topics selected in the analysis and design stages. A VR platform (Unity 3D, Unity Technologies) was constructed in collaboration with a professional company. A preliminary VR program was tested by five nurses, each with more than 5 years of clinical experience in the scenarios and nursing care of patients with PUs. Errors in the VR program were checked and corrected, and operational problems were thereby addressed.

Implementation and Evaluation Stage

The implementation stage involved the application and operation of the program completed in the development stage (Table 1).

Overview of Study Design

The program was used during the evaluation stage. PU knowledge, critical thinking, and self-efficacy in the experimental and control groups were measured once before the start of the study and then again after the program. For the experimental group only, assessment using the LCJR was performed after the program, and the effect of PU-VRSim was evaluated through an interview.

Preintervention Survey

The preintervention survey of the experimental group was conducted from October 1 to November 30, 2022, and that of the control group was conducted from January 1 to March 1, 2023. After the participants signed a consent form to participate in the study, their PU knowledge, self-efficacy, and critical thinking were measured using the structured questionnaires described above.

Implementation

The experimental group received the VR simulation program, comprising a prebriefing session (15 minutes) in which participants briefly learned about the definition, classification, prevention, and wound management of PUs. Participants were then exposed to PU-VRSim (10 minutes), including PU assessment, nursing care, and patient education delivered through VR. This was followed by a debriefing session (20 minutes), in which participants were assessed using the LCJR after the simulation.

The control group received a video-based lecture. The video format was selected to reduce the time burden on participants who work in shifts and to ensure safety from SARS-CoV-2 infection in accordance with the participants' hospital work. In total, 17 participants in the control group received lecture materials and a 30-minute video-based lecture on the definition, classification, prevention, and management of PUs.

Postintervention Survey

After the program, PU knowledge, self-efficacy, and critical thinking were assessed in both groups. The experimental group was further assessed using the LCJR, and an interview was conducted to explore their experience.
Interview

To explore the experience of participating in PU-VRSim, which could not be captured with objective data alone, the participants were interviewed after the program. The interview procedure included a self-introduction by the researcher and the participant, recording of the interview, a guarantee of anonymity, and an explanation that the results would be used only for research purposes; the interviews were conducted with the participants' voluntary consent. One-on-one interviews were conducted in all cases in a quiet seminar room. Before the interviews, the questions were drafted based on the purpose of the study, and the interviews proceeded in the order of introduction, transition, and main questions, as shown in Textbox 1.

Data Analysis

The data collected in this study were analyzed using IBM SPSS 23.0, and two-tailed tests were performed at a significance level of .05. The normality of the dependent variables was verified using the Shapiro-Wilk test. The homogeneity of the experimental and control groups was verified using the χ2 test and the independent t test. General characteristics of the participants and performance on the LCJR aspects were presented as means (SDs) and n (%), respectively. Wilcoxon signed rank and Mann-Whitney U tests were used to verify differences in PU knowledge, critical thinking, and self-efficacy within and between the experimental and control groups.

The data collected through the interviews were analyzed using an inductive approach, one of the content analysis methods suggested by Elo and Kyngäs [34]. For the data analysis, the researcher repeatedly read the transcripts of the interviews, interpreted the meaning of the key statements, and created categories by assigning titles. After data analysis, the authors discussed their interpretations to reach a consensus. Subsequently, the semantic units identified were grouped into higher-level categories, their properties were stated, and keywords were derived by coding the contents accordingly.

Feasibility of PU-VRSim

Our first objective was to assess the feasibility of implementing the PU-VRSim program for nursing education on PUs in the implementation and evaluation stages.

The general characteristics of the participants are presented in Table 2. The average age and work experience of the 35 novice nurses were 24.8 years and 14 months, respectively. The experimental group comprised 18 (100%) women, whereas the control group comprised 2 (12%) men and 15 (88%) women. We analyzed the homogeneity of the two groups in terms of general characteristics such as age, educational level, VR experience, and PU education experience; no significant difference was observed between the two groups (all P>.05), and thus homogeneity between the two groups was confirmed.
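To make the testing workflow just described concrete before turning to the results, the following minimal sketch runs the named tests with SciPy; the score vectors are hypothetical placeholders, not the study's data.

```python
# Sketch of the analysis pipeline with hypothetical scores, mirroring
# the tests named above (Shapiro-Wilk, Wilcoxon signed rank,
# Mann-Whitney U); not the study's actual data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre_exp = rng.integers(15, 30, 18).astype(float)    # experimental, pre
post_exp = pre_exp + rng.integers(0, 6, 18)         # experimental, post
post_ctrl = rng.integers(18, 34, 17).astype(float)  # control, post

# Normality check on the dependent variable
print(stats.shapiro(post_exp))

# Within-group change (paired, nonparametric)
print(stats.wilcoxon(pre_exp, post_exp))

# Between-group difference (independent, nonparametric)
print(stats.mannwhitneyu(post_exp, post_ctrl, alternative="two-sided"))
```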
Effect of PU-VRSim on Outcomes

Our second objective addressed the effects of the VR intervention on PU knowledge, self-efficacy, critical thinking, and clinical judgment.

As shown in Table 3, in the experimental group, the PU knowledge score increased by 2.88 points and the self-efficacy score increased by 0.56 points compared with the preintervention survey. In the control group, the PU knowledge score increased by 4.12 points, the critical thinking score by 4.0 points, and the self-efficacy score by 0.76 points. Each group showed significant improvements in PU knowledge after the intervention. However, there were no significant differences in critical thinking or self-efficacy in either group, and there were no significant differences in the changes in PU knowledge, critical thinking, and self-efficacy between the two groups.

The results of the clinical judgment assessment are summarized in Table 4. In the experimental group, after PU-VRSim, the overall clinical judgment score of the novice nurses was 3.23 points. When the four phases were evaluated to confirm whether each reached the level of "accomplished" out of a total of 4 points, the mean scores for noticing, interpreting, responding, and reflecting were 3.27, 3.31, 3.32, and 2.91 points, respectively. The items "well-planned intervention/flexibility" and "skill proficiency" in the responding phase scored the highest, with 3.67 points, whereas "commitment to improvement" in the reflecting phase scored the lowest, with 2.78 points.

Theme 1: Realistic VR Scenarios

The participants gained practical experience through the VR scenarios. The subtopics related to this theme were "real field" and "real experience."

It was realistic to be able to assess and manage wounds, which I had learned about from books, using VR. [S4]

Although it is VR for pressure ulcer nursing, which is difficult to understand through lectures alone, it was good to apply it as a direct action. [S11]

Theme 2: Helpfulness of VR Learning

The participants expressed that VR was helpful for learning. Subtopics related to this theme were "improvement of knowledge learning" and "improvement of skills."

I was able to learn about PU care. [S8]

When applying nursing care to patients with PUs through VR, it seems to be more memorable. [S9, S15, S16]

I was able to confirm the PU classification concept. [S12]

There were no bedsore patients in the ward where I worked, and through this virtual reality program, I was able to assess, intervene in, and evaluate PUs. [S2]

Theme 3: Usability

The participants described the experience of VR learning as "safe" and "easy to access."

There were times when the mannequins were dirty in nursing practice, but it was nice that the virtual patients were not contaminated. [S13]

It was nice to be able to participate without the burden of time and place. [S5]

Theme 4: Satisfaction

The participants expressed their satisfaction with VR learning as "pleasure," "fun," and a "new experience." They showed interest in VR and experienced fun and enjoyment through learning.

It was my first time using virtual reality, and I was able to enjoy it. I want to try again. [S5, S13, S17]

It was a new experience, and I enjoyed it. [S8, S14, S17]

Theme 5: Limitation of VR Equipment

The participants noted limitations in the equipment used in the VR program.

It was a bit heavy to wear on my head. [S8]

The preparation for running the program was complex and took a long time. [S12]

It took a long time when the focus was not good, and the text looked blurry and the controller was not recognized well in VR. [S13]
Principal Findings

In this study, we developed PU-VRSim by applying the ADDIE model and Kolb's experiential learning theory [20,21]. PU-VRSim was designed as a PU prevention and nursing simulation program for novice nurses with less than 2 years of clinical experience. The participants in the PU-VRSim group showed significant improvements in PU knowledge and reached the accomplished level of clinical judgment. They commented that the program was realistic and helpful for learning about PU management.

PU-VRSim was developed by applying an analysis-design-development-implementation-evaluation method according to the ADDIE model [20]. In the analysis stage, a literature review [33] confirmed that PU care is an important indicator of the quality of nursing services, one that is becoming increasingly important [35]. PUs are caused by immobility, pressure, and friction. The factors to be included in the education were analyzed by evaluating methods for preventing and managing PUs, such as support surface management, position changes, and dressing application to relieve pressure on the skin surface. Previous studies [36,37] have confirmed improvements in nurses' PU knowledge and nursing performance through PU nursing education, suggesting that continuous education on PUs for nurses is needed. Kolb's experiential learning theory [21], applied at the design stage of PU-VRSim, connects theory and practice in VR simulation. Through concrete experience and reflective observation, an abstract conceptualization of theories in realistic situations can help learners acquire knowledge and skills that can be used in real situations. Kolb's theory has also been applied in simulation education in various health fields [38], and VR provides learners with experience-based learning in a realistic environment in which they make decisions and take appropriate actions as in real situations [10]. Through PU-VRSim, novice nurses can freely apply the theoretical knowledge acquired through existing knowledge and prior learning materials to the process of solving problems encountered by patients in a safe virtual clinical environment. Ultimately, positive results can be expected when the improved nursing capabilities are applied in actual clinical practice.

After the novice nurses underwent the program, they showed improvement in PU knowledge and reached the "accomplished" stage of clinical judgment. PU knowledge scores increased on average in both the experimental and control groups after the educational program. This shows that the effect of knowledge transfer [16] can be achieved both via traditional teaching and lectures and with the new VR simulation method. According to Kolb's experiential learning theory, during knowledge transfer via VR, learners can improve their knowledge by reapplying it through concrete experiences and reflection, and can also learn how to utilize what they have learned and gain new knowledge. PU knowledge is the basis of PU care, and professional nursing can be delivered through critical thinking and improvement of clinical performance skills. Furthermore, clinical judgment is a particularly important skill in nursing that has received recent attention, and VR has a positive impact on clinical judgment in nursing education [39]. Therefore, positive effects and the acquisition of new skills can be expected if field-tailored simulation [40] is applied to nurses to reproduce clinical situations.
Interview contents were analyzed to explore the experiences of the novice nurses participating in the PU-VRSim program. The analysis revealed that the realistic scenarios of PU-VRSim helped with learning, and usability in terms of safety, easy accessibility, and satisfaction were expressed as positive experiences. However, inconveniences in using the equipment required to implement VR programs were expressed as negative experiences. This is consistent with the findings of Adhikari et al [41] on the experience of VR programs in terms of acceptability, applicability, areas of improvement, and limitations. VR can be safely and repeatedly applied in situations that could be dangerous to patients; however, it is expensive and has usage limitations [42]. VR was deemed a safe and effective educational method for use during the COVID-19 pandemic [43]. The development of VR programs that reduce the inconvenience and cost burden of the equipment is expected to increase the use of VR in nursing education.

Our participants confirmed that PU-VRSim was helpful for learning because it could be used repeatedly to work through disease-focused nursing problems. There is a need for education on the various clinical situations in which nurses can apply nursing interventions according to the situation and an overall assessment of the patient [44,45]; PU-VRSim reflects the clinical situation of PUs, for which nursing education is required. In addition, the program was confirmed as a positive experience, suggesting that improvements in nursing knowledge and clinical performance ability, as well as repetitive learning, are possible through the promotion of learners' spontaneous thinking and immediate feedback, which were identified as advantages of simulation in previous studies [46,47]. These results confirm the possibility of using PU-VRSim as an educational program in clinical practice.

In nursing practice using mannequins during the COVID-19 pandemic, the participants expressed concerns about infection via contamination of the mannequins from multiple contacts. Using VR, they felt safe because the risk of infection could be avoided. In previous studies, VR simulation was suggested and used as a non-face-to-face practice method when clinical practice was not possible due to the prevalence of COVID-19 [43,48]. In addition, participants did not feel a burden of time and space when participating in VR education. This is an advantage of VR, in which one can experience the actual medical field using only computers and equipment. Furthermore, individual learning is possible; therefore, VR can provide optimal learning to individual learners and help them overcome obstacles in the physical environment [49]. VR may be an appropriate training method for shift workers, because nurses who work shifts can access the education without experiencing a time burden.
Novice nurses in this study regarded VR education as a new experience and evaluated it as enjoyable. A VR learning environment enhances immersion and activates learners' imagination to simulate the real world [50]. The VR program is a teaching method that incorporates the latest technology and meets the learning needs of a new generation. Educational programs are being developed on various topics for nursing students and novice nurses, and effects on enjoyment and fun have been confirmed [13]. Learning satisfaction gained through enjoyable and fun VR improves learners' learning motivation and confidence, and they experience reduced fear in real situations [14]. Because enjoyment and fun in learning are factors that stimulate learning motivation and interest, gamification can be applied when developing programs so that learners can enjoy various experiences in the virtual world.

When implementing the VR program, participants had difficulty using the HMD; in particular, participants wearing glasses experienced inconvenience when wearing the device together with their glasses. As in previous studies, most of the participants experienced technical difficulties [41]. A VR program is typically executed using a computer program, an HMD, and a controller; however, the HMD and controller devices do not recognize participants' fine movements, making it difficult to proceed [42]. In the future, the development of a more convenient version of the HMD, with a clear field of view, ease of wearing, and good usability, may lead to an increase in the use of VR education.

PUs are common health problems in hospitals, and novice nurses experience difficulties in treating them. Education of nurses has been regarded as an integral component of PU prevention [51]. VR is an ideal educational technology, and the number of educational programs applying VR in nursing education has been increasing recently [52]. Through the VR education program, we confirmed the improvement in participants' knowledge gained through experience in prevention and nursing interventions for patients with PUs. Improvement in clinical performance can be expected with improved knowledge. In addition, the novice nurses in this study expressed satisfaction with VR education as a new experience and a safe learning method. Considering the limitations of VR equipment, it is necessary to develop and utilize an accessible simulation program that is more user-friendly and can be manipulated easily. Based on this study, we suggest the development of VR nursing education programs focusing on the educational needs of novice nurses and incorporating new technologies such as artificial intelligence as the technology develops.

Conclusion

The PU-VRSim program developed in this study was found to be effective in improving novice nurses' knowledge of PUs and was positively evaluated as a pleasant experience conducive to learning in an environment resembling an actual hospital. Therefore, PU-VRSim can be used as an effective educational method for novice nurses, as well as for nursing students and clinical nurses. In addition, a synergistic effect can be expected when the content is delivered through various software programs, including VR simulation programs.
Limitations exist in understanding and generalizing the effects of nonrandomized control-group experiments targeting novice nurses. To address this, we propose a follow-up study that applies the PU-VRSim program to nursing students and clinical nurses, as well as a randomized controlled experimental study of novice nurses. All participants in the experimental group were women; therefore, we propose a further study with a more heterogeneous group of participants including both male and female nurses. In addition, we suggest the development of a field-tailored VR simulation for health professionals, including novice nurses, and study of its educational effects. Finally, developing a VR simulation program is expensive, and wearing an HMD while running the program is uncomfortable. We propose the development of software and VR simulations using technologies such as smartphone apps, which are inexpensive, comfortable, and easy to use. In conclusion, we propose the continuous development and improvement of VR nursing education programs for novice nurses applying new technologies.

Table 1. Core contents and images of the virtual reality system.

Learning objectives: assess the degree of risk of developing PUs (pressure ulcers) in the patient; classify the PU stage of the patient; apply a proper dressing to the patient's PU; provide PU prevention education and care to the patient.

Patient information: patient case (hospitalization history); patient information (name, sex, age, diagnosis, past medical history, social history); identifying data (vital signs, blood test results, x-ray findings, physical exam, medication).

Nursing interventions: risk assessment for PU prevention (Braden scale score; a lower score indicates a higher risk of developing PUs); assessment and evaluation of PUs (assessment of size and stage; evaluation by writing a report); management of PUs (dressing on PUs: stage 1, film dressing; stage 2, foam dressing); patient education on PU prevention and management (skin care, urinary and fecal incontinence management).

Textbox 1. Interview question structure.

Introduction question: Thank you for taking the time after work to participate in the virtual reality (VR) program. Can you briefly describe your feelings?

Transition question: Now, we would like to take the time to talk freely about the program's effectiveness and improvements.

Main questions: What helped you with the program? What do you think about the content and methods of the VR program in which you participated? What do you need to improve or add to this program?

Table notes: (a) df=17 for the experimental group and 16 for the control group; (b) PU: pressure ulcer; (c) VR: virtual reality.

Table 3. Effect of the pressure ulcer management virtual reality simulation on outcomes. (a) Wilcoxon signed rank test; (b) Mann-Whitney U test.

Table 5. Qualitative outcomes of the pressure ulcer (PU) management virtual reality (VR) experience in novice nurses.
Economic crisis, women entrepreneurs and bank loans: some empirical evidence from Italy

Abstract This paper presents the main findings from an empirical research project, whose aim was to answer the following research questions: (1) Did men and women entrepreneurs ask for new bank loans during the crisis? (2) Did they obtain the required bank loans under the same conditions? (3) Which variables, other than gender, influence access to bank credit? Data show that firms were very cautious about access to finance during the crisis and that female-led firms asked for bank loans more rarely than male-led ones. Entrepreneurs' gender, age and education, banking history and industry only slightly affected access to credit during the crisis.

KEYWORDS Female entrepreneurship; access to credit; bank loans; financing; economic crisis

Introduction

In the literature on female entrepreneurship, great attention has been paid to access to credit, and this issue has been the subject of many studies and empirical investigations. Earlier research provides clear-cut evidence that female businesses exhibit various peculiar financial patterns. For instance, women entrepreneurs use lower ratios of debt finance (Haines, Orser, & Riding, 1999) and are more likely than men to use personal loans (Coleman & Robb, 2009). Reasons for gender differences within business financing are still unclear. In fact, despite the great amount of available data, there are still no unequivocal or widely accepted explanations, and recent studies present different interpretations of the reasons for weaker financial patterns within female-owned businesses and their lower ratios of debt finance.

This issue has gained new interest as a result of the recent crisis involving several countries, including Italy. During the crisis, a slowdown in credit availability was extensively documented, as a result of a mix of supply- and demand-side factors. On one hand, due to great uncertainty about future economic conditions and a considerable slowdown in sales and production, firms reduced their demand for loans. On the other hand, banks faced a liquidity shock following a capital shortage. This paper aims to discuss whether men- and women-owned firms were affected by the crisis in the same way, or whether bank-firm dynamics followed different trends in the two cases. Moreover, we wonder whether variables other than gender have affected access to credit.

The empirical research is based on a questionnaire survey involving a sample of 300 sole proprietors (150 men and 150 women) and owners of micro-enterprises located in the Marche region in central Italy. The purpose of the questionnaire was to identify the existence of any gender differences in access to credit during the crisis, and to check whether other variables, selected from those emerging from the prevailing literature on the subject, could have affected access to credit. The remainder of the paper is structured into four sections. In the second section, the main literature on access to credit and female entrepreneurs is presented. After that, the methodology used in the empirical research is described. Finally, the key results of the study are discussed and the main conclusions are drawn.

Women entrepreneurs and access to bank credit. A literature review

Numerous studies show the existence of significant differences between male and female businesses regarding the use of debt capital (Coleman & Robb, 2009; Constantinidis, Cornet, & Asandei, 2006; Fairlie & Robb, 2009; Robb & Walken, 2002).
In fact, female entrepreneurs start their firms with a lower level of funding than male entrepreneurs (Alsos, Isaksen, & Ljunggren, 2006), are less likely to raise capital from external sources (Constantinidis et al., 2006; Fairlie & Robb, 2009; Robb & Walken, 2002), even in the subsequent phases of their entrepreneurial life cycle (Coleman & Robb, 2009), and are more likely than men to use personal loans from family and friends (Coleman & Robb, 2009). These differences result from a combination of different factors, due in part to the characteristics of women entrepreneurs and their businesses, and in part to the criteria adopted by banks for the granting of loans.

Gender differences in business financing and access to credit have received three main interpretations: (1) the existence of structural dissimilarities between male- and female-owned businesses (size, age, industry); (2) supply-side discrimination; (3) demand-side factors relating to women entrepreneurs' choices, preferences and motivations.

According to the first interpretation, female businesses' lower debt ratios stem from structural differences between male and female businesses, with particular reference to industry, size and age. Countless studies, in fact, show that female businesses primarily operate in the retail trade and service industries (GEM, 2013; Unioncamere, 2014). These sectors on average generate lower financial needs with respect to industrial enterprises, both in the start-up and in the later stages of firms' life cycle. Moreover, female entrepreneurship is a much more recent phenomenon than male entrepreneurship. As a consequence, female firms are typically younger and smaller than male firms (GEM, 2013; Unioncamere, 2014). According to this interpretation, female firms have lower debt ratios because on average they require fewer financial resources. These structural characteristics also affect firms' ability to obtain bank financing. Precisely because of their businesses' young age, female entrepreneurs have a shorter banking history and shorter financial and administrative experience, whereas banks prefer long-term and well-known customers because they are considered more reliable (Shaw, Carter, & Brierton, 2001). An entrepreneur's young age, lack of business experience and low level of business education can also negatively affect access to credit.

The second interpretation (supply-side discrimination) suggests that banks adopt discriminatory behaviours. In fact, ceteris paribus, banks would be less willing to grant loans to female businesses (taste-based discrimination). This assumption, however, has been confirmed in only some cases. Muravyev, Talavera, and Schäfer (2009) found that female businesses are less likely than their male counterparts to get bank loans and have to pay higher interest rates. In Italy, Alesina, Lotti, and Mistrulli (2013) found that female businesses pay higher interest rates, even though they are not riskier borrowers. Moreover, they found that the interest rate paid by women decreases if they involve a man as a guarantor, whereas their interest rate increases if the guarantor is a woman. According to Bellucci, Borisov, and Zazzaro (2010), banks more frequently ask female businesses to provide collateral. According to Calcagnini, Giombini, and Lenti (2014), banks ask for higher collateral from female businesses, and this request can only be partially explained by structural differences between male and female businesses.
On the contrary, other studies have not confirmed the existence of gender-based discrimination (Buttner & Rosen, 1989; Carter, Shaw, Lam, & Wilson, 2007). The third interpretation (demand-side factors) explains female businesses' lower debt ratios as a result of women entrepreneurs' personal choices and motivations (Watson, 2006). It is noted, for example, that women entrepreneurs have a higher risk aversion (Byrnes, Miller, & Schafer, 1999; Croson & Gneezy, 2009; Powell & Ansic, 1997), which may reduce their propensity towards debt (Morris, Miyasaki, Watters, & Coombes, 2006). Other authors (Coleman, 2002) emphasise female entrepreneurs' lack of financial literacy, seeing that women may have more difficulty in dealing with financial partners and in adequately expressing their financing needs. For these reasons they have a lower propensity towards debt (Cesaroni, 2010; Coleman, 2002; Moro & Fink, 2010) and, in the end, they are discouraged from applying for external sources (Sena, Scott, & Roper, 2012). Conclusively, other authors maintain that women suffer from a type of 'preventive fear', which makes them more reluctant than men to turn to banks, because they believe that their requests for funding have little chance of being accepted (Ongena & Popov, 2013; Robb & Walken, 2002).

These three interpretations of the relationship between gender and funding choices are quite dissimilar and have not produced unequivocal results. The onset of the economic crisis has aroused new interest in this subject. In fact, questions have arisen as to whether the crisis has changed the relationship between businesses and access to credit, and whether it has produced different effects on male and female businesses. Women entrepreneurs may have suffered the effects of the crisis more than their male counterparts, due to supply-side factors. Banks, in fact, may have selected their customers and applied stricter conditions to customers who were considered less attractive, such as female-led firms, seeing that they are younger, smaller and have a shorter banking history. At the same time, the crisis may have emphasised some demand-side factors: in fact, the crisis might have increased women entrepreneurs' risk aversion, their sense of discouragement and fear of receiving a refusal from banks, and thus induced them to give up on asking for more loans.

Analyses carried out in different countries show mixed results. Cowling, Liu, and Ledger (2012) investigated small-business experiences in the UK during the pre-recession (2007-2008) and recession (from December 2008 to February 2010) periods. They found that female-led firms maintained a lower demand for external finance also during the economic recession, probably because of their higher risk aversion. No difference was found in lenders' behaviour towards them. Tabuenca, Martí, and Romero (2015) analysed the dynamics and evolution of women's entrepreneurial activity in Spain in the period 2003-2013. They found that women-owned companies present a lower degree of indebtedness than men-owned businesses, both before and after the emergence of the crisis. In fact, they did not observe any changes in this pattern caused by the economic crisis. Robb and Marin Consulting, LLC (2013) investigated women-owned firms and how the economic crisis affected their access to credit in the United States during the period 2007-2010.
Women were less likely to apply for new loans than their male counterparts for fear of having their loan application denied during the years of the economic crisis. Moreover, data showed that women-owned businesses faced greater credit constraints than similar start-ups owned by men during the years of the financial crisis. Stefani and Vacca (2013), using European Central Bank survey data for the period 2009-2011, showed that female-owned firms faced greater difficulties in obtaining credit with respect to their male counterparts. The main reasons are demand-side factors - as women more often than men anticipate a rejection - and supply-side factors - female-owned firms experienced a higher rejection rate as they are structurally different from male firms. They also found some differences across European countries: women-led firms in Italy are more likely to have their loan requests rejected. With regard to the Italian context, Cesaroni, Lotti, and Mistrulli (2013), using data from the Credit Register at the Bank of Italy for the period 2007-2009, also found that women-owned firms faced a more pronounced credit contraction with respect to other firms. In particular, their analysis showed that the growth rate of total and short-term loans was consistently negative in that period and sometimes so low as to push firms' loans below the Central Credit Register threshold. However, the authors state that the results do not allow one to clearly explain the reasons for the greater credit contraction shown by women-owned firms, nor do they explain this result as a consequence of banks' behaviour.

Research on entrepreneurs' gender and access to credit during the economic crisis has not produced unequivocal results. In addition, investigations carried out in Italy focused only on the first phase of the crisis. In this context, it is therefore useful to further investigate male and female entrepreneurs in order to understand how they related to the banking system throughout the economic crisis.

Methodology. Survey data collection and data analysis

To achieve these goals, a questionnaire survey was carried out. The survey involved a sample of 300 men and women sole proprietors (hereafter M and W) and owners of Italian micro-enterprises located in the Marche Region. A non-proportional stratified sample, with the same number of M and W, was selected using the list of members of one of the main regional business associations. Starting from a list of 1,627 sole proprietors (429 W and 1,198 M), a sample of 300 sole proprietors (150 M and 150 W) was randomly extracted. The decision to involve only sole proprietors in the survey was motivated by several reasons: (1) in companies with members of both genders, it is not easy to determine whether financial decisions are in fact taken by a man or a woman; (2) in Italy, sole proprietorships represent a very high percentage of the total number of female enterprises (61% in 2010); (3) in the case of companies or partnerships, information on the gender of shareholders, partners and directors is not always available. Entrepreneurs selected in this manner took part in a telephone questionnaire between October and November 2013, and the questions referred to the previous 5 years (Autumn 2008-Autumn 2013). The purpose of the questionnaire was to answer the following research questions. (1) Did men and women entrepreneurs ask for new bank loans during the crisis, or did they prefer to give up?
(2) Did men and women entrepreneurs obtain the required bank loans? If so, did they get bank loans under the same conditions? (3) Which variables, other than gender, may have influenced access to bank credit?

The variables considered in the research include some of the entrepreneurs' personal characteristics (gender, age, entrepreneurial experience, education) and some of the firms' structural characteristics (industry, age). They are described in Table 1. We did not consider firms' size, since only sole proprietors are included in the sample. Moreover, start-ups were excluded: only firms formed before the beginning of the crisis (Autumn 2008) are included in the sample, seeing that our aim is to understand how the onset of the recession affected access to credit.

The survey enabled us to obtain 218 fully completed questionnaires: 110 from women and 108 from men. The response rate was particularly high, standing at 73% and substantially similar for entrepreneurs of both genders (M: 72%; W: 73.3%). The first results of the analysis are presented below. These are to be considered merely descriptive results, to be deepened with further analysis.

Access to bank credit. A comparison between men and women entrepreneurs

Of the 218 respondents, only 48 (22% of the sample; 26 men and 22 women) asked for new bank loans during the period 2008-2013 (Table 2). This figure shows a general tendency to contain debt levels and to avoid new loans during the crisis years. The percentage of women who asked for new loans is also slightly lower than that of men (20% versus 24%). This is consistent with previous research indicating that female firms maintained a lower demand for external finance also during the economic recession (Cowling et al., 2012). This figure expresses the tendency of micro-entrepreneurs to minimise new investment and to face the crisis with great prudence and a defensive attitude. Other research shows that during the crisis the vast majority of entrepreneurs adopted defensive strategies, based on cutting and downsizing actions (Cesaroni & Sentuti, 2014; Del Giovane, Eramo, & Nobili, 2011). As a consequence, financing needs were very low and firms did not require new loans. This interpretation is confirmed by the responses to the questionnaire. As shown in Table 3, 75% of entrepreneurs said they did not ask for new loans because they did not need them. The same result also stems from the tendency to prefer more cautious funding choices and to limit firms' debt exposure, thus preferring other sources, such as personal capital or funds from family members (approximately 19% of entrepreneurs refrained from applying for bank funding because they thought it was better not to get into debt). A low percentage of entrepreneurs (4.7%) did not rely on the banking system and were convinced that they had little chance of getting a loan. From this point of view, the survey does not reveal significant differences between M and W. This result differs from Robb and Marin Consulting, LLC (2013), who reported that female entrepreneurs were less likely to apply for new loans than male entrepreneurs for fear of having their loan application denied. However, a higher percentage of female entrepreneurs (79.5% versus 69.5% of men) mentioned that they did not request a new bank loan because it was unnecessary. The reason for this difference lies in the strategies that male and female entrepreneurs adopted in facing the crisis.
Against a general attitude of prudence, women adopted defensive strategies more than men. They therefore decided not to make new investments or launch new initiatives, and as a consequence they did not require new funding (Cesaroni & Sentuti, 2014). In contrast, the necessity to finance new investments is the main reason given by those who applied for new bank loans (22% of the sample) (Table 4). A significant percentage of firms faced cash flow problems caused both by decreasing sales and by growing difficulties in collecting receivables. The need to apply for new loans to solve cash flow problems was felt most by men (46%) compared with women entrepreneurs (32%).

Access to bank credit. Banks' response and credit conditions

To apply for funding, men and women entrepreneurs turned to their reference bank, and most of them (79%) obtained the entire required amount. However, the percentage of women who got the entire funding is lower (72.7% against 84.6% of men) (Table 5). These data seem consistent with Cesaroni et al. (2013) and Stefani and Vacca (2013), who showed that Italian female firms faced more difficulties in obtaining credit with respect to their male counterparts during the economic crisis.

In about half of the considered cases (irrespective of gender), the involvement of a third person as guarantor was required to get bank funding. The guarantor provided a real (e.g., a mortgage) or a personal guarantee to hedge the credit risk (Table 6). In most cases, the entrepreneur's spouse was involved as guarantor (67%), and occasionally his/her father (17%) or brother. From this point of view there were no significant differences between men and women, and this is in line with the results of other investigations, which show that the request to involve a spouse or family member is not reserved for female entrepreneurs only (Cesaroni, 2010). This result may be due to the fact that in recent years banks have reduced their risk tolerance and consequently strengthened security measures, generalising the demand for guarantees in order to minimise the risks associated with loans.

Credit guarantee consortia also played an important role, given their institutional aim of facilitating access to credit by providing collateral to firms that require bank financing. The practice of involving consortia has been rather widespread for both genders. In fact, 50% of entrepreneurs who took a bank loan turned to a guarantee consortium, with no distinction between men and women (Table 7). Furthermore, it is interesting to observe that all women entrepreneurs considered the involvement of the consortium useful. Thanks to consortia they obtained significant benefits (Table 8), especially because it was easier for them to obtain the entire funding requested (83.3%) and they paid a lower interest rate (83.3%). In conclusion, the guarantee consortia played a very important role in enabling entrepreneurs to obtain bank loans. Consortia, in fact, provide businesses with a guarantee for the bank loan. Moreover, they reassure banks that the creditworthiness of the applying entrepreneurs is sound, so loans can be obtained on less onerous conditions. For this reason, the benefits typically associated with the involvement of a consortium concern greater ease of obtaining the loan and lower interest rates.
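As an illustration of how such gender comparisons can be tested formally, the sketch below runs a chi-square test on full-funding outcomes. The cell counts are back-calculated from the reported percentages (26 male and 22 female applicants; 84.6% and 72.7% obtaining the entire amount), so they are approximate reconstructions rather than the authors' raw data.

```python
# Chi-square test of full-funding rates by gender; counts are
# reconstructed from the reported percentages, not the raw dataset.
from scipy.stats import chi2_contingency

#               fully funded, not fully funded
table = [[22, 4],   # men   (26 applicants, 84.6% fully funded)
         [16, 6]]   # women (22 applicants, 72.7% fully funded)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```

Consistent with the paper's overall finding, a 2 × 2 table this small does not yield a difference that is significant at conventional levels under this reconstruction.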
Other variables influencing access to bank credit

In addition to gender, other variables - described in Table 1, with specific literature references - were analysed to understand whether they affected access to credit during the crisis. In particular, we tried to understand whether, with respect to these variables, entrepreneurs' answers show significant differences with regard to three issues: (1) whether the requested funding was obtained; (2) whether banks asked for a guarantor; (3) whether consortia were involved in the funding application. Our hypothesis is that firms with younger owners, a lower level of education and a shorter banking history had more difficulties in accessing bank credit during the crisis. As previous research shows, these variables outline an entrepreneurial profile that is less appreciated by banks. We also considered the industry, because the crisis may have had different impacts in different industries. Results are briefly summarised in Table 9. The data reveal no notable differences. In some cases, clear similarities emerge beyond doubt. With regard to the entrepreneur's age, for example, the sample is perfectly divided with respect to both the involvement of guarantors and that of consortia. Entrepreneurs who own manufacturing firms (61% versus 43% in services) and those with shorter experience involved a consortium more frequently (75% versus 0% of mature entrepreneurs and 54% of seniors). Entrepreneurs' answers were subjected to the chi-square test in order to check for statistically significant differences; however, none emerged.

Conclusion

The survey shows that only a few firms (about 20%) applied for new loans during the crisis. Among these, almost 92% received their funding entirely (79.2%) or partially (12.5%). We can say, then, that the crisis did not result in a severe contraction of credit by banks. For the most part, firms did not ask for a new loan because it was deemed unnecessary, while the majority of companies that requested new loans received them. Consistent with Cowling et al. (2012), the data confirm that female entrepreneurs preferred to maintain a lower demand for loans during the economic crisis. Nevertheless, the analysis has not revealed significant differences between genders. With regard to fear of having a bank loan application denied, for instance, and contrary to Robb and Marin Consulting, LLC (2013), we did not find significant differences between male and female entrepreneurs. Some small differences, however, remain, given that women had greater difficulties in receiving the requested funding in its entirety. This result is consistent with previous research (Cesaroni et al., 2013; Stefani & Vacca, 2013), which showed that Italian female-owned firms faced more difficulties in obtaining credit with respect to their male counterparts. The analysis of the other variables - entrepreneurs' age and education, firms' banking history and industry - showed small differences, mostly in relation to the involvement of consortia. Therefore, these variables do not appear to have significantly affected access to credit during the crisis. This study has important limitations, given that we analysed only individual firms, located in a limited geographic area and members of a single association. Moreover, the number of companies is limited and does not allow us to obtain generalizable results. Further research should examine a larger sample involving a wider geographic area and different legal forms.
It would also be interesting to extend this analysis to other countries, in a comparative fashion, in order to better identify gender discrepancies in access to credit during the economic recession in each country and among countries.
Exploring community evolutionary characteristics of microbial populations with supplementation of Camellia green tea extracts in microbial fuel cells

This first-attempt study deciphered the combined characteristics of species evolution and bioelectricity generation of the microbial community in microbial fuel cells (MFCs) supplemented with Camellia green tea (GT) extracts for biomass energy extraction. Prior studies indicated that polyphenol-rich extracts, as effective redox mediators (RMs), could exhibit significant electrochemical activities to enhance power generation in MFCs. However, the supplementation into MFCs of Camellia GT extract obtained at room temperature, despite its significant redox capabilities, unexpectedly exhibited an obvious inhibitory effect on power generation. This systematic study indicated that the presence of antimicrobial components (especially catechins) in GT extract might significantly alter the distribution of the microbial community, in particular decreasing microbial diversity and evenness. For practical applications to different microbial systems, pre-screening criteria for selecting biocompatible RMs should consider not only their promising redox capabilities (abiotic), but also their possible inhibitory potency (biotic) towards receptor microbes. Although Camellia tea extract is well characterized as a GRAS energy drink, some of its contents (e.g., catechins) may still inhibit organisms, and further assessment of biotoxicity may be inevitably required in practice.

Introduction

In the face of the gradual exhaustion of fossil energy around the globe, biomass energy is considered the most green and sustainable alternative, with the environmental friendliness required for worldwide utilization [1–4]. Among the myriad bioresources of biomass energy, microbial fuel cells (MFCs) are effective electrochemical systems that convert chemical energy for bioelectricity generation via simultaneous waste biotreatment and product biosynthesis [5–7]. However, the relatively low capacity of bioelectricity generation still greatly limits their potential for practical applications due to low electron transfer efficiency [8]. To overcome this disadvantage, exogenous supplementation of redox mediators (RMs) is considered an effective way to improve electron transfer capability and augment power generation [9,10]. Regarding such electroactive RMs, several pioneering works selected artificially synthesized compounds for feasibility studies [11–13]. For example, aromatic compounds with electrochemically active functional groups (e.g., -OH and -NH2) at ortho or para positions could exhibit significant redox capabilities to effectively enhance power-generating capabilities in MFCs [11]. Compared with the amino group, the hydroxyl group sometimes has a more reversible and stable electrochemical activity for electron shuttling [14]. However, for green sustainability, using synthetic compounds as RMs may still introduce several inevitable concerns in practice (e.g., inhibitory potency affecting biocompatibility with biological systems) [15,16]. Therefore, natural resources and products have recently been considered to replace such artificially synthesized compounds for sustainable applications. Several studies clearly revealed that appropriate supplementation of natural products abundant in electrochemically active substances could significantly promote simultaneous bioelectricity generation and wastewater treatment in MFCs [17–20].
Recently, Xu et al. [20] extracted natural herbal substances abundant in anthocyanins to effectively increase the efficiency of bioelectricity generation in MFCs by nearly threefold. From the perspective of chemical structure, these bioelectricity-stimulating substances are in fact mostly polyphenols, which are abundant in natural plants [17–20]. Regarding polyphenols in natural plants: for some thousands of years, Camellia green tea (GT) has been used as a traditional drink in China and India and has become popular all over the world. In particular, the UK has been one of the world's greatest tea consumers since the eighteenth century [21]. Based upon the degree of tea fermentation, Camellia tea can be classified as GT (0%), yellow tea (10%–30%), Oolong tea (30%–80%) and red tea (80%–100%) [22]. The medicinal value of tea has long been known (e.g., "Cha Jing" (Tea Bible) by Lu Yu of the Tang Dynasty), but the potential health properties of tea polyphenols, including anticancer and antioxidant effects, have been validated scientifically only within the last 3–4 decades [23]. Modern medicine has used the theories of free radicals and immunity to elucidate such specific effects of Camellia tea on human health. In summary, tea contains not only essential nutrients for the human body, but also medicinal ingredients beneficial to the restoration of human health under certain pathological conditions [24]. In fact, tea polyphenols, the main antioxidant components in Camellia tea, have been discussed in the medical literature. For instance, Bag et al. [25] and Henning et al. [26] pointed out that tea polyphenols could prevent cancer by regulating the expression of genetic aberrations occurring in targeted DNA, modified histones and microRNAs. Ding et al. [27] further indicated that Pu'er tea had a significant hypoglycemic effect on diabetic rats induced by streptozotocin. Chen et al. [28] even pointed out that theaflavin-3,3'-digallate (TF) in Pu'er and black tea could effectively inhibit the SARS-CoV 3C-like protease. That is, tea polyphenols may be a potential source of anticancer and antiviral drugs with fewer side effects and lower prices. Moreover, recent studies also deciphered that the abundant tea polyphenols in tea extract provide a reliable source of electrochemically active RMs [29,30].

Considering the stimulation of effective bioenergy extraction, the protagonists of bioelectricity production in MFCs are electroactive microbes or mixed consortia, which effectively release electrons from the oxidation of organic matter [20]. Of course, the bioactivities of the electricity-generating microorganisms (e.g., biofilm formation and electron-transporting characteristics) directly influence the power-generating efficiency of MFCs [17,19]. Because inhibitory/antagonistic interactions may take place, it cannot be guaranteed that all natural substances are appropriate sources of natural RMs. This might also explain why the extracellular metabolites of Chlorella with significant redox capacities sometimes could not significantly stimulate the bioelectricity production of MFCs, as previously mentioned [31]. Even if natural extracts exhibit significant redox-mediating capabilities, they may also express toxic potency towards the receptor organisms. Therefore, significant redox capability cannot be the sole screening criterion to select electrochemically active natural RMs for bioenergy applications.
For example, as the literature mentioned [29,30], some tea polyphenols have anti-inflammatory and bacteriostatic effects (e.g., noticeable inhibition of Typhoid bacillus, Paratyphoid bacillus, yellow haemolytic staphylococcus, Streptococcus aureus and Dysentery bacillus). Some studies have also shown that tea polyphenols have a strong killing effect on Streptococcus mutans in the oral cavity [32]. Therefore, tea polyphenols have also been widely used in the industrial preparation of toothpaste. That is, such strong antibacterial effects of components of tea polyphenols may also have negative effects on the bacteria in MFCs. As prior work indicated, different methods of extraction could result in different levels of antioxidant/electron transfer capabilities (e.g., H2O extracted > EtOH extracted) [33]. Moreover, many medicinal herbs may be more appropriate to extract at high rather than low temperature. In fact, high temperature, via maceration and heat-drying or processing heating, could attenuate the contents of some toxic species and sometimes even reduce the potency of side effects in medication. To clearly reveal whether such phenomena may also take place in MFCs, Camellia GT was selected herein as the study material, using water and methanol as extraction solvents. Furthermore, to reveal the novelty of this study and to compare with results of high-temperature extraction in prior studies, low-temperature extraction was intentionally selected to inspect the electrochemical activity and/or antibacterial characteristics of Camellia green tea extract. After centrifugation and freeze drying, extracted powders of Camellia GT obtained from extraction with the different solvents were harvested for comparison. In addition, the electrochemical properties of the extracts were characterized via electrochemical measurements to evaluate whether their electrochemical activities qualify them as RMs. Quantitative analyses of bioelectricity generation and community ecology in MFCs supplemented with these electrochemically active extracts were further implemented to quantitatively present the inhibitory or stimulating responses affecting the performance of bioenergy-extracting processes in MFCs.

Preparation of tea extracts

To achieve maximal extraction efficiency, a 5.0 g GT sample was ground into powder and screened through a particle size sieve (diameter of 40 mm). In addition, the final solid/liquid ratio (S/L) was set at 5 g/50 mL to provide an identical basis for comparison. For comparative assessment with prior studies, deionized water and 80% methanol (MeOH) aqueous solution were intentionally used as extraction solvents at room temperature in a total volume of 50 mL with continuous stirring for 12 h [33–35]. Supernatants of GT extracts were obtained via centrifugation at 13,000 rpm and 25°C for 10 min. Harvested supernatants were then purified via 0.22 μm filters (Millipore Millex®-GS 0.22 μm filter unit) to eliminate residual particles. After refrigeration at −80°C overnight, the frozen extracts were placed in a freeze dryer for 48 h and then pulverized. Because the resultant powder may be light-sensitive, it was placed in dry, dark brown glass containers to avoid deliquescence and illumination.

High performance liquid chromatography (HPLC) analysis

For comparison of the components and contents of tea extracts obtained with different extraction solvents, the water extract and methanol extract of Camellia GT at the same mass concentration were analyzed by HPLC. The HPLC system consisted of a Chromaster-5110 single pump, a Chromaster-5260 autosampler, and a Chromaster-5420 UV-VIS detector (Hitachi High-Tech Sci. Corp., Japan) using an InertSustain C18 column (5 μm, 4.6 × 250 mm, GL Sciences Inc., Japan) in gradient elution mode. The gradient used mobile phase A (3% acetic acid solution, HAc:water = 3:97) and mobile phase B (methanol), where the percentage of mobile phase A was changed over time as follows: 0–1 min, 100%; 1–28 min, 100–37%; 28–33 min, 37–100%. The injection volume was 10 μL with a flow rate of 1 mL min−1. The chromatographic separation was completed in 33 min for each sample. The separated components were detected at 280 nm using the Chromaster-5420 UV-VIS detector. In fact, as Cabrera et al. [36] recommended, this is a more reliable, rapid and simple HPLC method for the simultaneous determination of catechins, gallic acid (GA) and caffeine (CAF).
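The gradient program just described is piecewise linear in time, so the mobile-phase composition at any moment can be interpolated; a minimal sketch (breakpoints taken from the text above):

```python
# Piecewise-linear interpolation of the HPLC gradient described above
# (% mobile phase A over the 33-min run).
import numpy as np

times = [0, 1, 28, 33]          # min, segment breakpoints from the text
percent_a = [100, 100, 37, 100] # % mobile phase A at each breakpoint

def phase_a_at(t: float) -> float:
    return float(np.interp(t, times, percent_a))

for t in (0.5, 10, 28, 30):
    a = phase_a_at(t)
    print(f"t = {t:>4} min: A = {a:5.1f}%, B = {100 - a:5.1f}%")
```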
The HPLC system comprised a Chromaster-5110 single pump, a Chromaster-5260 autosampler, and a Chromaster-5420 UV-VIS detector (Hitachi High-Tech Sci. Corp., Japan), using an InertSustain C18 column (5 μm, 4.6 × 250 mm; GL Sciences Inc., Japan) in gradient-elution mode. The gradient used mobile phase A (3% acetic acid solution, HAc:water = 3:97) and mobile phase B (methanol), where the percentage of mobile phase A was changed over time as follows: 0–1 min, 100%; 1–28 min, 100–37%; 28–33 min, 37–100%. The injection volume was 10 μL with a flow rate of 1 mL min⁻¹. The chromatographic separation was completed in 33 min for each sample. The separated components were detected at 280 nm using the Chromaster-5420 UV-VIS detector. In fact, as Cabrera et al. [36] recommended, this is a more reliable, rapid and simple HPLC method for the simultaneous determination of catechins, gallic acid (GA) and caffeine (CAF).

Cyclic voltammetric (CV) analysis

To assess the electrochemical characteristics of the tea extracts, comparative CV analysis of these candidate redox mediators was carried out on an electrochemical workstation (ALS/DY2325 BI-POTENTIOSTAT, Taiwan). A glassy carbon electrode (0.07 cm²; CH Instruments Inc., USA) polished with 0.05 μm alumina polish was used as the working electrode. A square platinum electrode (6.08 cm²) served as the counter electrode and was soaked in hydrogen peroxide (H2O2) prior to use. As the reference electrode, a Hg/Hg2Cl2 electrode was filled with saturated KCl(aq) to maintain electrochemical stability and reproducibility. Prior to analysis, the test solutions were purged with nitrogen for 15 min to remove residual oxygen. A symmetric scan range from −1.5 to +1.5 V was used with a scanning rate of 10 mV s⁻¹. As the direct parameter to assess the redox capacity, the closed-curve area of the voltammogram, i.e., Area = ∫_{V_L}^{V_H} (i_h − i_l) dV, was determined with Origin 8. In this calculation, V_H and V_L represent the CV scanning voltages of +1.5 V and −1.5 V, respectively, and i_h and i_l denote the oxidation current and the reduction current at a specific scan voltage, respectively. Moreover, 100 cycles of CV scanning were conducted to verify the electrochemical reversibility and stability of the redox-mediating characteristics.

MFC construction

Membrane-free air-cathode single-chamber MFCs (SC-MFCs) were constructed from cylindrical polymethyl methacrylate (PMMA) tubes (cell sizing ID = 54 mm, L = 95 mm) with a working volume of ca. 230 mL (i.e., π(5.4 cm)²/4 × (9.5 + 2 × 0.3) cm ≈ 231.3 mL). Porous carbon cloth (CeTech) (without waterproofing catalyst) with a projected area of ca. 22.9 cm² (i.e., π × 2.7²) on one side was used as the anode electrode. The air cathode was almost identical to the anode in size and included a polytetrafluoroethylene (PTFE) diffusion layer (CeTech) on the air-facing side. Detailed procedures of the bacterial cultures for the MFCs (e.g., bacterial acclimation, cell immobilization, bioelectricity stimulation) are described elsewhere [18,20]. To guarantee that stable and reproducible electrochemical characteristics were fully expressed, the seed microbe Shewanella haliotis was used to inoculate open-system LB-based MFCs for at least 1 month of acclimation. Then, two domesticated microbial cultures (i.e., consortia A and consortia B) with high electricity-generating capacities were adopted as the study MFC platforms for comparison.
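As an illustration (not part of the original protocol), the enclosed-curve area defined above can also be approximated numerically from a recorded voltammogram; the arrays below are hypothetical stand-ins for measured data.

import numpy as np

def cv_enclosed_area(voltage, current):
    """Approximate the closed-curve area of one CV cycle,
    Area = integral over [V_L, V_H] of (i_h - i_l) dV,
    where i_h/i_l are the currents of the forward (oxidation)
    and backward (reduction) sweeps at each voltage."""
    # Split the cycle at the turning point (maximum voltage).
    turn = np.argmax(voltage)
    v_fwd, i_fwd = voltage[:turn + 1], current[:turn + 1]
    v_bwd, i_bwd = voltage[turn:], current[turn:]
    # Interpolate both branches onto a common voltage grid.
    grid = np.linspace(voltage.min(), voltage.max(), 500)
    i_h = np.interp(grid, v_fwd, i_fwd)
    i_l = np.interp(grid, v_bwd[::-1], i_bwd[::-1])
    # Trapezoidal integration of the branch difference.
    return np.trapz(i_h - i_l, grid)

# Hypothetical single CV cycle from -1.5 V to +1.5 V and back:
v = np.concatenate([np.linspace(-1.5, 1.5, 300), np.linspace(1.5, -1.5, 300)])
i = 1e-6 * np.tanh(2 * v) + 5e-7 * np.sign(np.gradient(v))  # toy redox loop
print(f"Enclosed area: {cv_enclosed_area(v, i):.3e} A*V")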
To explore the electrochemical influences of the different Camellia GT extracts on the same MFC, the same microbial culture (e.g., consortia A) was seeded into two blank MFCs for comparison (e.g., marked as MFC-A1 and MFC-A2).

Power generation measurement

Experimental data of electric current (I_MFC) and voltage (V_MFC) were automatically collected with a data acquisition system (DAS 5020; Jiehan Technology Corporation, Taiwan). For comparison with prior results, the external resistance of the microbial fuel cells was intentionally set at 1 kΩ. The power density and current density of the MFCs were calculated with the formulae below:

Power density = (V_MFC × I_MFC)/A_anode, Current density = I_MFC/A_anode,

where V_MFC and I_MFC could be directly measured with linear sweep voltammetry supported by an electrochemical analysis workstation (Jiehan 5600, Jiehan Technology Corporation, Taiwan). The parameter A_anode was the actual working area of the graphite anode.

Microbial community analysis

(a) Sample preparation, library construction and sequencing: For total genomic DNA extraction, samples of the bacterial solutions (ca. 50 mL) in MFC-A and MFC-B were taken and then separated by high-speed centrifugation (10,000 rpm for 10 min) to harvest the bottom bacterial "precipitate". DNA was extracted using a QIAamp® DNA Stool Mini Kit (Qiagen, Valencia, CA, USA) to obtain an OD260/OD280 ratio between 1.8 and 2.0 for analysis. After library construction, samples were mixed with the MiSeq Reagent Kit v3 (600-cycle) and loaded onto a MiSeq cartridge, then a 2 × 300 bp paired-end sequencing run was performed on the MiSeq platform (Illumina, San Diego, CA, USA). (b) Operational Taxonomic Units (OTUs) analysis: The paired-end raw FASTQ reads generated from the Illumina MiSeq platform were filtered using Bowtie 2. Trimmomatic was used to remove sequences with average QV < 20 to produce clean reads. Low-quality tails and primers were then trimmed and filtered based on length using Mothur to produce filtered tags. USEARCH was used to remove PCR chimeras to produce effective tags and to construct OTUs at 97% sequence identity.

Chemical composition analysis

For solid-liquid extraction, the composition and content of the extract are directly associated with the solvent used. In this study, two types of solvents (pure water and 80% methanol) were selected to extract Camellia GT at room temperature, and the chemical composition and content of the obtained solid extract powders were therefore expected to differ greatly. The chemical constituents of the water extract and methanol extract of Camellia GT were thus quantitatively analyzed by HPLC for comparative assessment. According to Jiang et al. [37], Camellia green tea extracts mainly contain tea polyphenols and alkaloids, while the major contents of tea polyphenols are phenolic acids and catechins. Therefore, representative substances of these three categories were selected for standard quantitative analysis (i.e., gallic acid (GA, phenolic acid), epigallocatechin gallate (EGCG, catechin) and caffeine (CAF, alkaloid)). According to the HPLC spectra in Fig 1(A, B and C), the fingerprint peaks of standard GA, EGCG and CAF appeared at retention times of 8 min, 17 min and 18 min, respectively, indicating that the green tea extracts contained these standards.
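As a minimal numerical illustration of the power- and current-density formulae given in the measurement section above (all values hypothetical):

# Minimal sketch of the power/current density calculation described above.
# Values are hypothetical and only illustrate the unit handling.

V_mfc = 0.2           # measured cell voltage (V)
R_ext = 1000.0        # external resistance (ohm), 1 kOhm as in the setup above
A_anode = 22.9e-4     # projected anode area (m^2), i.e., 22.9 cm^2

I_mfc = V_mfc / R_ext                    # cell current (A), Ohm's law
power_density = V_mfc * I_mfc / A_anode  # W m^-2
current_density = I_mfc / A_anode        # A m^-2

print(f"Current density: {current_density * 1e3:.1f} mA m^-2")
print(f"Power density:   {power_density * 1e3:.1f} mW m^-2")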
To compare the relative contents of the three standard substances in the two different green tea extracts, the HPLC peak heights could be taken to approximately represent the relative contents, since the mass concentrations of the prepared tea extracts were identical. As indicated in Fig 1(D), the content of CAF in the two extracts was nearly equal, while the content of GA in the water extraction was relatively higher than that in the methanol extraction. However, as Table 1 indicates, the content of EGCG was higher in MEOH than in water, since EGCG is more soluble in MEOH than in water. In fact, Ahmad Muhamud and Amran [38] indicated that the EGCG contents in Camellia sinensis GT were 0.9347 (MEOH extract) and 0.6705 (water extract) mg/mL. Furthermore, Oh et al. [39] also mentioned that a MEOH extract of GT could reach 60–580 g major catechins/kg dry extract, whereas a water extract reached only 385 g major catechins/kg dry extract for 85 °C extraction. In addition, extraction with pure organic solvents was found to yield the highest content of catechins. As indicated in Fig. 1, some unknown substances appeared at retention times of 3 min and 21 min; however, their contents in the water extraction were relatively small. This is owing to the different solubilities of these substances in the two solvents of different polarity, which resulted in the different contents in the extracts [17]. To conduct a detailed quantitative analysis of the standards in the two extracts, HPLC analysis was implemented on the pure standards at different concentrations ranging from 10 mg L⁻¹ to 500 mg L⁻¹ for calibration [40]. As shown in Fig 2, the retention-time peaks of the three standards increased with increasing concentrations of GA, EGCG and CAF. The linear relationship between the integrated area of the retention-time peak and the concentration of the three standards is shown in Fig 2(D). As indicated by the calibration lines, the concentrations of these three standards and the integrated areas of the corresponding retention-time peaks showed a well-correlated linear relationship. The corresponding concentrations of GA, EGCG and CAF in the two extracts could then be quantitatively determined by matching the integrated area of the corresponding retention-time peak against the standard calibration line. The concentrations of the three standards in the water extraction and MEOH extraction are listed in Table 1. According to the detailed data in Table 1, the contents of CAF in the two extracts were 77 mg L⁻¹ and 71 mg L⁻¹, respectively, nearly the same level. The content of GA in the water extraction was nearly 4 times that in the MEOH extraction, while its content of EGCG was only ca. 50% of that in the MEOH extraction. This indicates that the contents of the three standards and the other unknown substances in the two extracts were significantly different. This considerable difference might result in the diverse outcomes of electrochemical and antioxidant activity revealed in detail below.

Electrochemical capability assessment

To reveal the electrochemical capability of the water extract and MEOH extract from green tea (e.g., reversibility and stability of the oxidation and reduction potential peaks), 100 cycles of CV scanning of these candidate redox mediators at the same mass concentration were also carried out [11].
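The calibration described above amounts to a linear least-squares fit of integrated peak area against standard concentration, inverted to quantify the extracts; the following sketch uses hypothetical numbers, not the study's data.

import numpy as np

# Hypothetical calibration data for one standard (e.g., GA):
# concentrations of the injected standards (mg/L) and the
# corresponding integrated peak areas (arbitrary units).
conc = np.array([10.0, 50.0, 100.0, 250.0, 500.0])
area = np.array([1.1e4, 5.3e4, 1.05e5, 2.62e5, 5.21e5])

# Fit area = slope * conc + intercept (the calibration line).
slope, intercept = np.polyfit(conc, area, deg=1)
r = np.corrcoef(conc, area)[0, 1]
print(f"slope={slope:.1f}, intercept={intercept:.1f}, R^2={r**2:.4f}")

# Invert the calibration line to quantify an unknown extract
# from its measured peak area.
unknown_area = 2.7e4
unknown_conc = (unknown_area - intercept) / slope
print(f"Estimated concentration: {unknown_conc:.1f} mg/L")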
As indicated in Fig 3, both the water extract and the MEOH extract of GT exhibited significant redox potential peaks, which directly reflect a bioenergy-stimulating capability. Regarding electrochemical reversibility, the redox peaks of the water extract tended to stabilize after serial CV scanning (i.e., repeated reduction and oxidation), and both oxidation peaks and reduction peaks could be clearly revealed. However, the redox peaks of the MEOH extract showed significant electrochemical instability in the CV profiles and gradually attenuated. This was likely due to antioxidant and/or anti-reductant compositions present in most of the extracted contents. Therefore, the water extract possessed more significant electroactive capability than the MEOH extract for bioenergy extraction. Since both extracts were mixtures of phenolic acids and catechins, especially GA and EGCG, the redox capability should be the overall response of the electrochemically active components present. In fact, the redox capabilities of both GA and EGCG were also evaluated by CV inspection under the same scanning conditions for comparison. As shown in Fig 4, both GA and EGCG share a chemical structure with three consecutive hydroxyl groups attached to a benzene ring. However, their redox capabilities differed significantly. The CV profile of GA showed significant redox potential peaks with stable reversibility. In contrast, the redox capability of EGCG tended to attenuate, possibly due to electrochemical instability. This might indicate that GA possesses a more significant electrochemical capability than EGCG for electron-shuttling catalysis, and it seems to explain why the redox capability of the water extract was greater than that of the MEOH extract: the water extract contained a higher amount of electrochemically convertible GA, leading to its different electrochemical capability from the MEOH extract.

Table 1. The measured concentrations of GA, EGCG and CAF in the water extraction and MEOH extraction.

Extract             GA           EGCG          CAF
MEOH extraction     7 mg L⁻¹     122 mg L⁻¹    77 mg L⁻¹
Water extraction    26 mg L⁻¹    68 mg L⁻¹     71 mg L⁻¹

Bioenergy performance analysis

As prior studies [11] indicated, aromatic compounds with electron-shuttling functional groups (e.g., ortho- or para-dihydroxyl (−OH) substituent(s)) could act as RMs to enhance process efficiency: RMs can be reversibly inter-converted between reduced and oxidized forms of intermediates to enhance electron transfer between electron donor(s) and electron acceptor(s), thereby augmenting electricity generation. Therefore, the application of the two green tea extracts to MFCs should clearly show whether such electrochemical activities can be stably expressed by the microbes. This would be reflected in a remarkable electron-shuttling phenomenon, improving the bioelectricity-augmenting performance of the MFCs [17]. Thus, the water extract and MEOH extract were supplemented to MFCs at the same mass concentration to evaluate the power-generating capacity for comparison. Note that a nearly identical mass concentration of tea extract simply means that the same weight of Camellia tea (biomass) was used for comparison. In fact, two kinds of MFCs inoculated with different mixed bacterial consortia (i.e., MFC-A and MFC-B) were operated to verify whether the expression of electrochemical RMs in MFCs is still controlled by the electroactive bacterial populations.
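As a quick arithmetic check (illustrative only) of the ratios quoted from Table 1:

# Quick check of the Table 1 ratios discussed above (values in mg/L).
table1 = {
    "MEOH":  {"GA": 7.0,  "EGCG": 122.0, "CAF": 77.0},
    "water": {"GA": 26.0, "EGCG": 68.0,  "CAF": 71.0},
}

ga_ratio = table1["water"]["GA"] / table1["MEOH"]["GA"]
egcg_ratio = table1["water"]["EGCG"] / table1["MEOH"]["EGCG"]

print(f"GA, water/MEOH:   {ga_ratio:.2f}  ('nearly 4 times')")
print(f"EGCG, water/MEOH: {egcg_ratio:.2f} ('ca. 50%')")
# -> 3.71 and 0.56, consistent with the statements in the text.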
However, supplementation with both the water and MEOH extracts inhibited rather than stimulated power generation. Regarding the power generation capacity, the power densities of MFC-A1 and MFC-B1 decreased from 14 and 18 mW m⁻² to 10 and 9 mW m⁻², respectively, with the successive supplementation of water extract. Of course, the dose concentration strongly influenced the degree of inhibitory potency expressed, since a higher dose would trigger more severe adverse effects on the receptor organisms. That is why the power density after the second supplementation was even lower than after the first (Fig. 5). This also suggests that different levels of inhibitory response result from differences in the microbial communities. Furthermore, the addition of MEOH extract tended to dramatically inhibit the power generation of MFC-A2 and MFC-B2, which decreased from 15 and 19 mW m⁻² to 5 and 3 mW m⁻², respectively. These results directly suggest that the inhibition by the MEOH extract was even more considerable than that by the water extract. Since the water extract and MEOH extract came from the same green tea, the apparent inhibitory differences were most likely due to differences in the composition and content of the extracts caused by the different solvents and temperatures of extraction, as discussed below.

Microbial community analysis

As the two Camellia green tea extracts contained chemical components with strong antibacterial activities, their application to MFCs clearly expressed an inhibitory effect on power generation, directly affecting the electrochemical activities exhibited by the electrogenic microorganisms in the MFCs. To clearly decipher this inhibitory effect of green tea extract on the bacterial strains in the MFCs, microbial community analysis of the MFCs before and after supplementation with MEOH extract was implemented [41]. The changes of the microbial community in the MFCs were analyzed at the phylum, class, order and family levels (Fig 6). Regarding MFC-A, at the phylum level the original flora could be mainly divided into Proteobacteria and Firmicutes, the two phyla accounting for 48.21% and 51.78%, respectively, in a relatively even distribution (i.e., a community of higher biodiversity). However, after the supplementation of MEOH extract, the proportion of Proteobacteria increased significantly (up to 85.60%), while the Firmicutes could not adapt to such environmental stress, leading to a significant loss of cell viability (decreasing to 14.40%). The same result could be found in the microbial community analysis of MFC-B: the Proteobacteria and Firmicutes changed from 42.73% and 57.27% to 67.00% and 33.00%, respectively. In addition, such phenomena could also be observed at the class and order levels. At the more detailed family level, in addition to the obvious changes of the microbial community in MFC-A, Clostridiales UC and Clostridiaceae-1 strains seemed to become extinct with the supplementation of MEOH extract. In MFC-B, in addition to the dying-out of Clostridiaceae-1, Carnobacteriaceae strains were also eliminated from the population owing to the addition of MEOH extract. These results indicate that these strains were unlikely to resist such supplementation of MEOH extract. Considering the community distribution diversity, Shannon's Diversity Index (H′) was adopted herein as a performance index to characterize the species (OTU) diversity in the bacterial community, reflecting both the abundance and the evenness of the species (OTUs) in the population [42].
Thus, the Shannon's Diversity Index for comparative inspection was calculated by the following formula:

H′ = −Σ_{i=1}^{S_obs} P_i ln(P_i),

where S_obs is the total number of species (OTUs) and P_i is the fraction of the total number of individuals in a particular genotype or taxon i. Values of H′ range from 0 (low diversity) to 5 (high diversity) and directly reflect the abundance level of the bacterial community. In addition, to express the species evenness of the bacterial community, Shannon's Evenness Index (also known as Pielou's Evenness Index, J) was also calculated from the following formula:

J = H′/H′_max,

where H′_max is the maximum possible value of Shannon's Diversity Index. Values of J range from 0 (low evenness) to 1 (high evenness) and directly reflect the uniformity of the distribution of the bacterial community. If every species in the bacterial community were equally distributed, H′_max could be calculated as H′_max = ln(S_obs). As indicated by these calculated indices (Table 2), the continuous increase of Shannon's Diversity Index (H′) and decrease of Shannon's Evenness Index (J) from phylum to family in both MFC-A and MFC-B suggest that the bacterial community tended to be highly diverse with low evenness when more specific levels were considered. This might suggest that some species populations were possibly on the verge of extinction as well. Furthermore, comparing the two bacterial communities before and after the supplementation of MEOH extract, both the Shannon's Diversity Index and the Shannon's Evenness Index exhibited significant decreases. This result is consistent with the significant extinction of species indicated by the microbial community analysis.

Mechanism exploration

To elucidate the different inhibitory outcomes of the GT extracts, a comparative analysis against prior studies and the literature was carried out. As aforementioned, the contents of GA, EGCG and several unknown compositions in the water extract and MEOH extract were evidently significantly different. As indicated in the literature [30,31,41], EGCG has apparent inhibitory effects on bacterial species (e.g., Staphylococcus aureus, Proteus vulgaris, Salmonella typhosa, Pseudomonas aeruginosa, Bacillus subtilis, oral streptococci, E. coli, Stenotrophomonas maltophilia). Moreover, EGCG can even express significant antibacterial activity toward food-poisoning bacteria and plant-pathogenic bacteria. Therefore, the higher the content of EGCG in a GT extract, the greater the inhibitory potency revealed towards the bacterial populations. In addition to EGCG, there are many chemical components with similar chemical structures (e.g., the ortho-dihydroxyl-bearing aromatic compounds EC, EGC, ECG, GA and TF) in GT extracts. According to Zuo et al. [34] and Yao et al. [35], the catechins in tea extracts can also include epigallocatechin (EGC), epicatechin gallate (ECG) and epicatechin (EC), which likely correspond to the unidentified peaks in the HPLC results. Similar to EGCG, these catechins have high antioxidant capacities and also exhibit very strong antibacterial activities. Furthermore, GT extracts obtained through room-temperature extraction could also contain higher contents of inhibitory chemical species than extracts obtained at higher temperature (e.g., 65 °C). These findings all support the view that the presence of antibacterial components in GT extract may directly alter the distribution of the microbial community, further decreasing the power-generating capabilities of MFCs, possibly due to a reduction of the electroactive populations in the community.
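A minimal sketch of how the two indices defined above can be computed from OTU counts (the counts below are illustrative only, loosely mirroring the reported phylum-level shift):

import numpy as np

def shannon_diversity(counts):
    """Shannon's Diversity Index H' = -sum(P_i * ln(P_i))."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()   # fractions P_i of each taxon
    return -np.sum(p * np.log(p))

def pielou_evenness(counts):
    """Pielou's Evenness Index J = H' / H'_max, with H'_max = ln(S_obs)."""
    s_obs = np.count_nonzero(counts)        # number of observed OTUs
    return shannon_diversity(counts) / np.log(s_obs)

# Hypothetical OTU counts at the phylum level before/after supplementation.
before = [4821, 5178]        # e.g., Proteobacteria, Firmicutes (even)
after = [8560, 1440]         # strongly skewed after MEOH extract

for label, counts in [("before", before), ("after", after)]:
    print(f"{label}: H' = {shannon_diversity(counts):.3f}, "
          f"J = {pielou_evenness(counts):.3f}")
# Both H' and J drop after supplementation, matching the reported trend.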
However, such a strong inhibitory effect of tea extract seemed to differ from our prior findings [17,18]. From a comparison of the experimental methods, the extraction temperature seems to be the main reason for such a difference. In fact, different degrees of inhibition caused by changes in extraction temperature have also been found for natural products (e.g., medicinal herbs). As a matter of fact, many medicinal herbs may not be appropriate for extraction at low temperature (e.g., room temperature). According to the practices of herbal medicine, maceration with heat-drying or processing (páozhì) under higher-temperature heating (65–85 °C) can attenuate some inhibitory chemical species and sometimes even remove the side effects of a medication. For example, artemisinin (Qinghaosu) was found to be effective against malaria. However, this finding was due to an "accidental" modification of the extraction at low temperature by Youyou Tu's group [42]. After further separation of the acidic extract, the natural extract indeed contained very promising antimalarial activity, even much stronger than the well-known chloroquine used for the clinical medication of patients with malaria. Moreover, this phenomenon is commonly observed for extracts of active compositions of medicinal herbs used against disease. As revealed by Oh et al. [39], an ethanol extract of GT at 20 °C possessed potent antimicrobial activity against all five test pathogens, compared to a water extract at 80 °C. That is, ethanol extracts contained higher concentrations of chemicals inhibitory to pathogens than water extracts did. This may be due to the solubility of antimicrobial compounds in organic alcohols (e.g., methanol, ethanol) being much higher than in water. In summary, note the Chinese saying "every medicine has its side effect" (yào jí shì dú). As this study and the literature [1–9] indicate, there are evidently at least three crucial conditions affecting the inhibitory potency of GT towards bacteria: (1) the temperature of extraction (e.g., room or higher temperature), (2) the solvent of extraction (e.g., water, ethanol or methanol) and (3) the concentration of the extract (e.g., powder, concentrated solution). They all affect the role of GT as either "medicine" (yào) or "poison" (dú). In addition, the herbal tolerance and susceptibility of the test organism also strongly influence whether the response is "toxicity" or medication. These all strongly suggest why maceration with heat-drying or processing (páozhì) is of great importance to clinical medication. The novelty of this MFC study was to depict a promising platform for evaluating possible herbal species for their bioenergy-extracting potential without the sacrifice of living mice or other animals in practice. The scope also points out some information of great importance on the threshold criteria of "toxicity" for the characteristics of Camellia tea extract (e.g., when and how may it be a "drug" or a "poison", and to what receptor organisms?) [43–52].

Conclusion

Camellia green tea extract obtained from "inappropriate" extraction procedures (e.g., room-temperature extraction) can be inhibitory and reduce the power generation of MFCs, in contrast to the bioelectricity stimulation achieved by supplementation with green tea extracted at higher temperature. Evidently, the main components of green tea extracted by pure water and by MEOH showed significant differences in concentration and composition, which could directly lead to the differences in inhibitory potency and biocompatibility.
Although the extracts obtained with the different solvents (i.e., water and MEOH) possessed significant redox capabilities, their application in MFCs unexpectedly exhibited a considerable inhibitory effect. Microbial community analysis showed that the supplementation of green tea extract significantly altered the distribution of the microbial community, especially decreasing microbial diversity and evenness. Therefore, in the practical application of different microbial culture systems, the selection of appropriate RMs should consider not only their redox-mediating characteristics but also their inhibitory responses, as screening criteria for feasibility evaluation.
Delayed Puberty due to Pituitary Stalk Dysgenesis and Ectopic Neurohypophysis

Hypopituitarism is not a common cause of delayed puberty. A 22-year-old man was referred to our clinic because of the absence of the development of secondary sexual characteristics. The patient had no complaints of physical discomfort. Random serum testosterone and luteinizing hormone levels were obtained and found to be low. The combined pituitary function stimulation test revealed partial hypopituitarism. Pituitary magnetic resonance imaging (MRI) was obtained and showed decreased pituitary stalk enhancement and an ectopic neurohypophysis. Therefore, we conclude that the delayed puberty was a result of hypopituitarism due to pituitary stalk dysgenesis and ectopic neurohypophysis. The patient was started on hormone replacement therapy and gradually developed secondary sexual characteristics.

INTRODUCTION

The etiology of delayed puberty is heterogeneous and includes constitutional delay, hypogonadotropic states, hypergonadotropic states as well as chronic illness 1). Hypopituitarism is not a common cause of delayed puberty 2). The current approach to the diagnosis of hypothalamic-hypophyseal lesions using a combined pituitary function stimulation test and pituitary MRI has improved detection and allowed for the diagnosis of hypopituitarism as a cause of delayed puberty 2). We report the case of a 22-year-old man who presented with absent secondary sexual characteristics and was otherwise asymptomatic. Using a combined approach with the pituitary function stimulation test and pituitary MRI, we could identify the cause of delayed puberty to be hypopituitarism due to a pituitary structural abnormality, i.e., pituitary stalk dysgenesis and ectopic neurohypophysis. The patient was started on hormone replacement therapy and gradually developed secondary sexual characteristics.

CASE REPORT

A 22-year-old man was referred to our clinic because of the absence of the development of secondary sexual characteristics. The past medical history was unremarkable; there were no significant pregnancy or perinatal problems. The family history was negative for growth failure and pituitary or thyroid disease. During childhood, the patient reported that he had always been short for his age. However, after a delayed growth spurt, he began to grow steadily at 20 years of age. At the time of admission, he was 183 cm tall, weighed 75 kg, and his body mass index (BMI) was 22.4 kg/m². He had normal intelligence and normal body proportions. His bone age was 15 years, slightly below that expected. X-ray evaluation of both hands showed open epiphyses (Figure 1). Physical examination revealed a micropenis, no pubic or axillary hair, and palpable though small testes (Figure 2). These findings were compatible with Tanner stage I development. The patient had a high-pitched voice and appeared young for his stated age. The patient denied physical discomfort, had no signs of gynecomastia, no signs of orthostatic hypotension, and no symptoms that could be associated with hypothyroidism. The laboratory findings showed a normal erythrocyte sedimentation rate and normal hematological parameters, blood glucose, serum sodium, potassium, calcium, phosphate, magnesium, and creatinine. Random testosterone (0.08 ng/mL, reference range 2.4~18.3 ng/mL) and luteinizing hormone (LH) (below 0.1 mIU/mL, reference range 0.4~5.7 mIU/mL) levels were low; the follicle-stimulating hormone (FSH) level was within normal limits. The karyotype was normal, 46,XY.
As a result of the random sexual hormone levels and the karyotyping analysis, we could classify the delayed puberty as hypogonadotropic hypogonadism. Therefore, we performed a combined pituitary function stimulation test and pituitary MRI for further assessment. We injected regular insulin (0.1 U/kg), TRH (200 μg), and LHRH (100 μg) and observed that 2 hours later the blood glucose fell to 60 mg/dL, with the patient complaining of hypoglycemic symptoms. The combined pituitary function stimulation test showed no increase in serum LH, FSH, growth hormone (GH) or adrenocorticotropic hormone (ACTH) levels, while the serum levels of prolactin and thyroid-stimulating hormone (TSH) showed a normal increment (Table 1). The MRI showed decreased pituitary stalk enhancement and an enhancing structure in the hypothalamic area, which appeared to be an ectopic neurohypophysis (Figure 3A, 3B). Therefore, considering the results of the above evaluation, the delayed puberty in this 22-year-old man was the result of hypopituitarism due to pituitary stalk dysgenesis and ectopic neurohypophysis. The patient was started on hormone replacement therapy with prednisolone 5 mg/day, levothyroxine 0.1 mg/day and testosterone 250 mg IM every 3 weeks. After 3 months of treatment, secondary sexual characteristics began to develop; pubic and axillary hair was noted and the voice changed to a lower pitch. The serum testosterone level, previously 0.08 ng/mL, increased to 0.57 ng/mL.

DISCUSSION

Delayed puberty is generally defined as pubertal development more than two standard deviations (SD) below the mean for a given population 1,3). Hypopituitarism is not a common cause of delayed puberty, but recently its detected incidence has increased due to improvements in diagnostic tools 2). The clinical manifestations associated with hypopituitarism vary, depending on the severity of the pituitary hormone deficiency 4,5). The presentation of symptoms is variable and includes acute adrenal insufficiency and profound hypothyroidism, symptoms indicating a pituitary mass lesion, or nonspecific symptoms such as fatigue and delayed puberty, as reported in our case 2,4,5). Therefore, hypopituitarism should be considered in all patients with abnormal development of secondary sexual characteristics, even if they are otherwise asymptomatic. In our case, the cause of delayed puberty was hypopituitarism due to pituitary dysgenesis. The pituitary gland develops as a result of a fusion of the adenohypophysis and neurohypophysis during the embryonic period. Incomplete downward migration of the neurohypophysis, as a result of a genetic defect or a pituitary stalk injury from perinatal trauma, may lead to fusion defects. Fusion defects are associated with anterior pituitary gland atrophy due to the lack of stimulation from the hypothalamus 7,8). However, other studies propose a congenital cause for this form of pituitary abnormality. In these reports, mutations in genes responsible for normal posterior pituitary lobe descent and stalk development are suggested to explain cases of hypopituitarism due to a pituitary structural abnormality 11-15). To date, although there are possible candidates such as the HESX1 16) and LHX4 genes 17), the specific genetic abnormalities leading to pituitary stalk dysgenesis and posterior pituitary ectopia have not been identified.
Future analyses are planned to examine other transcription factors and signaling molecules important during pituitary embryogenesis, especially those involved in posterior pituitary ectopia and pituitary stalk dysgenesis. In a high proportion of patients with non-familial idiopathic growth hormone deficiency (both isolated growth hormone defect and multiple pituitary hormone defect), characteristic radiographic findings include a) a small to absent anterior pituitary gland, b) a small or absent pituitary stalk and c) an ectopic posterior pituitary hyperintensity located at the base of the hypothalamus 14). If an enhancing structure in the hypothalamic area is identified, the diagnosis must distinguish an ectopic neurohypophysis from granulomatous disease and metastatic disease. The findings in our case suggested an ectopic neurohypophysis because of an enhancing structure that showed high signal intensity in both precontrast and postcontrast images. The presence of neurosecretory granules containing the arginine vasopressin-neurophysin complex in astrocytic glial cells can explain the high signal intensity of the posterior pituitary lobe 18). In previously reported cases, many patients with an ectopic neurohypophysis had no posterior pituitary gland dysfunction. In 1996, Adamsbaum reported that, in three thousand normal individuals, the posterior pituitary gland was always located in a fixed site; however, in patients with growth hormone deficiency an ectopic neurohypophysis was identified in 40~60%, and 60~80% of them had partial pituitary hormone deficiency but rarely diabetes insipidus 19). This is because of the proliferation of axons and reorganization of the posterior pituitary lobe at the site above the cutting point of the pituitary stalk 7). Our patient had no sign of diabetes insipidus, and therefore we did not test for it with water deprivation. Our patient had no history of abnormal gestation, breech presentation, or ischemic insult at birth. We could therefore assume that the structural pituitary abnormality of this 22-year-old man might be the result of a congenital, genomic abnormality. However, we performed no further evaluation at the genomic level. The combined pituitary function testing was notable for the absence of cortisol and growth hormone responses. However, the patient had no complaints suggestive of adrenal insufficiency, and his height was within normal limits (183 cm). Extensive destruction of the pituitary gland, greater than 60~70 percent, is required for symptoms of pituitary insufficiency; when present, these symptoms are diverse. As in our patient, there may be no specific symptoms of steroid deficiency. However, with greater stress, symptoms may become more apparent; this might occur as a result of infection or in association with a surgical procedure. Prior reports have shown that, in spite of GH deficiency due to pituitary stalk dysgenesis, height can be within normal limits. D.T. den Ouden et al. explained this phenomenon by offering the following three hypotheses: First, because of the total absence of estradiol, the epiphyses do not close, and the patient continues to grow, possibly due to other factors such as insulin. Insulin can act as a growth stimulus, partially through activating the IGF-1 receptor. Second, the fact that estrogens have a slightly antagonistic effect on the bioactivity of GH could explain why, in the absence of estrogens, low GH secretion has a greater effect than expected.
Last, high prolactin (PRL) levels may provide a growth stimulus in patients with low GH, a mechanism that has been implicated in the pathophysiology of growth without GH in craniopharyngioma 20). In conclusion, there are many cases of partial hypopituitarism that go undiagnosed because they are asymptomatic apart from the absence of secondary sexual development. Therefore, it is important to consider the possibility of hypopituitarism, even in patients who present without other symptoms, as in our case. Furthermore, if hypopituitarism results from a structural pituitary abnormality, especially without a history of perinatal birth injury such as breech presentation, further evaluation is indicated at the molecular level to determine whether a genetic abnormality may explain the structural pituitary malformation, such as posterior pituitary ectopia and pituitary stalk dysgenesis.
Dual-energy micro-CT for quantifying the time-course and staining characteristics of ex-vivo animal organs treated with iodine- and gadolinium-based contrast agents

Chemical staining of soft tissues can be used as a strategy to increase their low inherent contrast in X-ray absorption micro-computed tomography (micro-CT), allowing fast three-dimensional structural information of animal organs to be obtained. Though some staining agents are commonly used in this context, little is known about the staining agents' ability to stain specific types of tissues, the times necessary to provide sufficient contrast, and the effect of the staining solution in distorting the tissue. Here we contribute to studies of animal organs (mouse heart and lungs) using staining combined with dual-energy micro-CT (DECT). DECT was used in order to obtain an additional quantitative measure of the amount of staining agent within the sample in the form of 3D maps. Our results show that the two staining solutions used in this work diffuse differently in the tissues studied, that staining times of some tens of minutes already produce high-quality micro-CT images and that, at the concentrations applied in this work, the staining solutions tested do not cause relevant tissue distortions. While one staining solution provides images of the general morphology of the organs, the other reveals organ features on the order of a hundred micrometers.

Although sample contraction due to the use of alcohol-based staining agents has already been described 9,18, an evaluation of the benefits in contrast enhancement in the context of tissue shrinkage and distortions is still incomplete, as is a comparison of the ability of a staining solution to stain one or another type of tissue. The architecture of ex-vivo mouse heart and lungs has already been well described with staining combined with micro-CT 2,4,5,19-22. Thus, rather than providing a morphological description of these organs, here we contribute to studies of soft-tissue staining combined with micro-CT by providing new quantitative information about the diffusion of staining solutions as a function of staining time, tissue type and staining solution type. To that end, we stained ex-vivo mouse heart and lungs, controlling the time and using two well-known contrast-enhancing solutions: one based on iodine (I2 in ethanol, known as I2E) and the other on gadolinium (Gadovist). We imaged the samples with a dual-energy micro-CT (DECT) technique, exploring an easy-to-use image acquisition protocol based on knowledge of the X-ray emission spectrum and the sample components. When compared to regular micro-CT, DECT has the great advantage of providing a quantitative description of the staining agent content within the organs. Thus, our measurements allow a segmentation of the staining agents within the tissues, from which we could create 3D maps of the distribution of the contrast agents within the samples. Our results describe for the first time the major differences in contrast and in sample size due to staining time, tissue type and contrast agent. Furthermore, our measurements reveal that much shorter staining times than those usually described in the literature are enough to provide high-quality, high-resolution micro-CT images of these organs.
Results

To study the effect of the staining time and the staining agent type on the contrast in micro-CT images, we performed sequential staining procedures on mouse heart and attached lungs, controlling the staining times and using either I2E or Gadovist as the contrast-enhancing agent. We consecutively X-ray imaged the samples after each staining procedure, as illustrated in Fig. 1 for I2E staining. Imaging was performed with two different X-ray energy spectra, at a low and a high energy.

Figure 1. Workflow for the I2E staining procedure of the specimens. For I2E staining, the fixated samples were dehydrated in a graded series of ethanol solutions up to 100% ethanol (top row). Then, they were immersed in the I2E staining solution for 30 min. Subsequently, the organs were imaged by dual-energy micro-CT, and the staining procedure was then repeated for 30 more min, for a total of 60 min of staining. After micro-CT imaging of the organs stained for 60 min, the staining procedure was repeated, to a total of 90 min. The whole staining-plus-micro-CT-imaging procedure was repeated with the same experimental parameters until the organs had been stained for a total of 150 min. The colour gradient in the figure illustrates the increasing amount of staining solution accumulated in the organs along the staining time, which also causes a colour change observable with the naked eye. A similar procedure was performed with Gadovist for staining times of 60, 120 and 180 min and without the dehydration step. Gadovist does not cause a change in the organs' colour observable with the naked eye, as suggested in the figure.

Single-energy spectra imaging. A comparison between tomograms of the non-stained and I2E-stained heart imaged at low energy (40 kVp) shows that the non-stained sample delivers poor contrast among the tissues of the organ and, thus, all sample features are lost in the low signal-to-noise ratio (Fig. 2A and a2). A better contrast is obtained in the tomogram of the heart stained for 30 min with I2E, with the borders of the organ well delineated (Fig. 2B). Indeed, the tomograms of the heart imaged under the same experimental conditions show an increase in contrast with increasing I2E-staining times (Fig. 2A-F). This effect is clearly seen in the histograms: after each staining round, the signal representing the gray-value distribution of the sample continuously shifts away from the background signal (black dashed line in Fig. 2 and S1 a1-f1). The details revealed with staining time include the orientation of the muscular fibers in the intraventricular septum (IVS) and the inner and outer walls of the heart, which can be identified after only 60 min of staining (Fig. 2C). Though after 90 min the saturation of the staining agent in the tissue composing the heart walls is reached (Fig. 2 d2-f2, light-gray areas), longer staining times still cause an increase in the relative intensity of the signal related to the IVS (Fig. 2, a2-f2, dark gray area). Similar observations are also made for the lung tissue after each staining round with I2E, as a general increase in contrast is observed in the lungs, with the airways better delineated after each staining round (Supplementary Fig. S1). After 180 min of staining, I2E highlights structures of less than 100 μm in the lungs. Specifically in the esophagus, a more intense increase in contrast is observed compared with the structures in the lungs (Supplementary Fig. S1, white arrow).
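The gray-value histograms referred to above can be produced directly from reconstructed slices; the following Python sketch uses stand-in arrays in place of the actual tomograms (all names and values hypothetical):

import numpy as np

# Hypothetical reconstructed tomogram slices (2D arrays of gray values);
# in practice these would be loaded from the reconstruction output.
slice_unstained = np.random.normal(100, 10, (512, 512))   # stand-in data
slice_stained = np.random.normal(160, 25, (512, 512))     # stand-in data

# Common binning so the two histograms are directly comparable.
bins = np.linspace(0, 300, 256)
h0, _ = np.histogram(slice_unstained, bins=bins)
h1, _ = np.histogram(slice_stained, bins=bins)

# A rightward shift of the stained-sample distribution away from the
# background signal indicates increased contrast after staining.
print("mean gray value, unstained:", slice_unstained.mean())
print("mean gray value, stained:  ", slice_stained.mean())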
We performed a similar study with the heart and attached lungs consecutively stained with Gadovist and imaged at a low energy (70 kVp). As observed for I2E staining, the Gadovist-stained samples also show an improvement in contrast with increasing staining times, as seen in the tomograms, the histograms and the line-plot values of the stained heart and lungs (Fig. 3 and S2). However, a general increase in contrast of the entire sample is seen after each Gadovist staining round; thus, almost no detail of the tissues' morphology is revealed, such as the fibrous muscular tissue of the intraventricular septum and the heart walls, or micrometric structures in the lungs. After the staining procedure, an amount of Gadovist solution remains in the heart ventricles (Fig. 3, black arrows). The same effect is observed in some of the lungs' airways (Supplementary Fig. S2, black arrows), and the airway surfaces are then seen in a darker tone than the lung tissue and the airways. The excellent contrast generated by longer staining times allowed us to better reconstruct the parts of the heart and lungs stained with I2E (Fig. 4A-D) and Gadovist (Fig. 5A-D).

Dual-energy spectra imaging. Though single-energy spectra imaging allows the increase in contrast due to staining time to be observed, dual-energy micro-CT enables a deeper insight into the distribution of the staining media with a high spatial resolution. Dual-energy micro-CT quantifies the local concentration of the staining agent, and we therefore used this technique to better identify the accumulation of the contrast agent in specific parts of the organs analysed. We processed the data obtained with two different X-ray energy spectra using an image-based material decomposition method for dual-energy X-ray computed tomography (Supplementary Fig. S3) 23.

Figure 4. Iodine distribution in the organs, obtained after processing the data acquired with two different X-ray energy spectra. The iodine distribution is visible in the virtual cuts (E-H, corresponding to the blue planes in the images immediately above) and the reconstructed volumes (I-L) of the heart and lungs with increasing staining times, with the red-coloured areas indicating higher amounts of iodine and the green-coloured areas lower amounts, quantified with the dual-energy experiments.
With longer staining times, this effect is pronounced and, after 180 min of staining, the profile of the lung lobes is well delineated and the heart walls are depicted with the blood vessels highlighted on their surface (Fig. 5H and L). With longer staining times, along with the gain in contrast that can be quantified by the increasing range and values of the attenuation coefficients (Fig. 6A and B), changes in the total sample volumes are also observed after the sequential staining procedures (Fig. 6C and D). A shrinkage of approximately 20% occurs after the first I2E-staining round, as expected after changing the organ from the aqueous fixative solution to the ethanolic I2E solution. The sample size remains virtually unchanged after each of the further staining sessions (Fig. 6C). The sample size changes by less than 10% with the sequential Gadovist-staining procedures (Fig. 6D).

Figure 5. Volumetric renderings and gadolinium maps of non-stained and Gadovist-stained heart and lungs with increasing staining times. Samples are shown according to increasing staining times of 0, 60, 120 and 180 min, from left to right. Using the same gray-value threshold, comparable tomographic slices of the non-stained and the stained organs are shown for the 70 kVp measurement (A-D). The gadolinium distribution in the organs was obtained after processing the data collected with two different X-ray energy spectra. The gadolinium distribution is seen in the virtual cuts of the reconstructed volumes (E-H, corresponding to the blue planes in the images immediately above) and in the entire volumes of the organs (I-L) with increasing staining times. Red-coloured areas indicate higher amounts of gadolinium and green-coloured areas lower amounts, quantified with the dual-energy experiments. The sample is slightly more compressed onto the tube at 60 min and appears different in the first row.

Discussion

Our present study has focused on the time necessary for two commonly used staining solutions (I2E and Gadovist) to increase the contrast in micro-CT and on the differences observed in the accumulation of these contrast agents in the heart and lungs of ex-vivo mice. We used an imaging protocol with two different X-ray energy spectra sequentially acquired. The single low X-ray energy spectrum tomograms and histograms of the samples stained with I2E (Fig. 2 and S1) and Gadovist (Fig. 3 and S2) show the uptake of these solutions in the heart and lungs, and the increase in contrast that is related to the staining agent and the staining time. Indeed, after each staining round, the gradual shift of the soft-tissue signal in the histograms is consistent with the wider range of intensity values of the samples stained for longer periods of time. Though conventional micro-CT imaging (single-energy spectra) shows the increase in contrast due to staining time, the combined processing of the two datasets obtained with two different X-ray energy spectra results in a three-dimensional map of the staining agent content per voxel. This 3D map cannot be achieved with conventional micro-CT imaging because the X-ray attenuation of the tissue cannot be eliminated even for a well-calibrated system. Dual-energy scans, on the other hand, allow quantitative calibration for the known absorbing materials (iodine and gadolinium) present in the staining agents used.
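As a minimal sketch (with stand-in masks, not the study's segmentations), the relative volume changes of Fig. 6C and D can be computed by voxel counting on the segmented sample masks:

import numpy as np

def relative_volume_change(mask_before, mask_after, voxel_size_mm):
    """Relative sample volume change between two segmented scans.
    Masks are boolean 3D arrays; voxel_size_mm is the isotropic edge length."""
    v_vox = voxel_size_mm ** 3
    v_before = mask_before.sum() * v_vox
    v_after = mask_after.sum() * v_vox
    return (v_after - v_before) / v_before

# Hypothetical segmentations before/after the first I2E staining round:
rng = np.random.default_rng(0)
before = rng.random((100, 100, 100)) < 0.50   # stand-in mask
after = rng.random((100, 100, 100)) < 0.40    # stand-in, ~20% smaller
print(f"Relative change: {relative_volume_change(before, after, 0.05):+.1%}")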
Therefore here, to obtain a quantified distribution of the contrast agent in the samples, we processed the two different X-ray energy spectra using an image-based material decomposition method for dual-energy X-ray computed tomography (Supplementary Fig. S3) 24. The main requirement for material decomposition on sequentially acquired computed tomographies is an adequate distinction of the X-ray energy spectra used with respect to the attenuating material for a broad-band X-ray source. This means that either the spectra must differ sufficiently by being shaped with filters and different peak energies (Supplementary Figs S4 and S5), or the attenuation coefficient of the main absorbing material must vary significantly over the specified energy range, as is the case in the regime around an X-ray absorption edge. A combination of both was used for our choice of imaging parameters, according to the obtained spectra. On the one hand, this dual-energy imaging protocol helps discriminate two known materials in the measured sample, as it allows differentiation of, for example, iodine or gadolinium from soft tissue, independent of the density of the staining element and the total X-ray attenuation. On the other hand, it allows quantification of changes in concentration or density of a specific material over the entire sample with high sensitivity. Image-based material segmentation methods suffer from image artifacts such as beam hardening, but the reliability of these techniques can be improved using the information obtained with material phantom measurements (Supplementary Figs S6 and S7). The material-selective volumes obtained by dual-energy imaging for iodine (Fig. 4) or gadolinium (Fig. 5) display the concentrations of these elements and allow quantification of the diffusion process, particularly in the three-dimensional view. The numbers on the colour bars in Figs 4 and 5 refer to the fraction of the iodine or gadolinium solution used for the calibration of the setup. These solutions were identical to the media used for the staining (0.5% I2 w/v, named I2E; and pure Gadovist).

Figure 6. Samples' relative volume and contrast changes with increasing staining times. For I2E staining, the increasing range of the attenuation coefficients (average of the sample within the grey area) along the increasing staining times is seen in (A), and the changes in the relative volume of the entire sample (heart and attached lungs) are seen in (C). The same data are shown for Gadovist staining in (B) and (D). The green and blue lines connecting the data points in (C) and (D) do not depict a mathematical dependence and are only a guide to the eye.

In consequence, a comparison between the material-selective volumes for the different solutions shows that I2E first accumulates in the heart (up to approximately 1.3 times the staining solution concentration at longer staining times) and gives a strong contrast also for small sample features in the organ, whilst Gadovist has higher concentrations in the lung tissue (Fig. 5E-H and I-L) and does not depict the small details in either the heart or the lung tissues. After 90 min, structural details are seen in the I2E-stained heart, but not in the lungs. In contrast, the heart stained with Gadovist requires longer staining times than the lungs to appear well depicted in the images. I2E increases the contrast of the heart walls more than that of the blood vessels (Fig. 5L).
Gadovist, by contrast, accumulates more in the blood vessels and less in the heart walls, thus highlighting the heart blood vessels. Another difference lies in the areas where the staining solution accumulates in the lungs and adjacent tissues: I2E accumulates in the airway walls of the lungs (bronchial tree) and in the esophagus (Supplementary Fig. S1, arrow), while Gadovist diffuses into and accumulates in the entire organ, but not much in the esophagus or the airway walls, which therefore appear in a darker shade (Supplementary Fig. S2). The general differences observed between I2E and Gadovist staining are related to their ability to stain the heart, the esophagus and the lungs' airways. Our results agree with published data, which show that, among the various types of soft tissue, iodine-stained muscular tissue shows the highest grayscale values 9. Indeed, according to the literature, muscle tissues (like those composing the heart) and epithelial tissues (lining the esophagus and parts of the bronchial tree) are well stained by iodine-containing solutions (both water- and ethanol-based), possibly due to local anisotropy in tissue diffusivity 9. For Gadovist staining, our results suggest that the diffusibility of Gadovist into the lungs' airways plays a more critical role than the local anisotropy of the muscular tissue of the heart. The difference in the staining process between I2E and Gadovist is also illustrated by the different profiles of the curves showing how the values of the attenuation coefficients change with the staining time (Fig. 6A and B). Our results show that I2E staining caused a more pronounced change in size than Gadovist staining (Fig. 6C and D). Sample shrinkage has already been described in the literature for I2E 9,18, due to the presence of ethanol used as solvent, which is known to remove water from tissues 9,25. Also, the I2E staining procedure used here involved a prior step of dehydration of the organs in ethanol, which is known to increase cell membrane permeation, helping iodine diffusion into the organs 2. Thus, the overall size differences observed between the I2E- and Gadovist-stained samples are most likely due to the dehydration effect of the ethanol used in the I2E staining solution. Still, the dehydration caused by ethanol does not seem to cause distortions in the heart and lung tissue, at least in the sample features observable in the images presented here. In contrast, Gadovist is a water-based solution 26 that causes virtually no sample shrinkage. The variations in volume observed for Gadovist, and for I2E after 30 min of staining, could be due to small errors in the automated size measurements or to small changes in the position of the sample in the plastic tube during the staining steps. Besides observing the staining maps, another way of interpreting dual-energy CT results is the use of two-dimensional correlation histograms, which represent the number of voxels with a certain combination of effective polychromatic attenuation coefficients measured at low and high energy (Supplementary Fig. S8 for the I2E-stained sample and Supplementary Fig. S9 for the Gadovist-stained sample) 27. The two-dimensional correlation histogram results show that, whilst only a small range of attenuation coefficients represents soft tissue (organ), plastic (sample container) and air (negative values correspond to scattering by the sample), the diversity of attenuation coefficients increases rapidly due to the staining.
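As an illustration of how such a correlation histogram can be built from two co-registered volumes (a generic sketch with stand-in arrays, not the study's data):

import numpy as np

# Hypothetical co-registered attenuation volumes (same voxel grid),
# standing in for the aligned low- and high-energy reconstructions.
rng = np.random.default_rng(1)
mu_low = rng.normal(0.5, 0.15, (64, 64, 64))    # stand-in low-energy volume
mu_high = rng.normal(0.3, 0.10, (64, 64, 64))   # stand-in high-energy volume

# 2D correlation histogram: counts of voxels per (mu_low, mu_high) pair.
hist, low_edges, high_edges = np.histogram2d(
    mu_low.ravel(), mu_high.ravel(), bins=256,
    range=[[-0.2, 1.5], [-0.2, 1.5]])

# Clusters in this histogram correspond to materials (air, plastic,
# soft tissue, stained tissue); staining spreads voxels to higher values.
print("most populated bin count:", hist.max())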
With increasing staining times, the number of voxels with a large attenuation coefficient, corresponding to a high amount of I2E, also increases, reflecting iodine accumulation in the tissue (Supplementary Fig. S8). In the correlation histograms for staining with Gadovist, the amount of the staining solution in the lung tissue is clearly differentiated from that in the heart tissue (Supplementary Fig. S9). Indeed, over time the lung tissue becomes saturated, and the Gadovist concentration inside the heart approaches saturation at the longest staining time used in this work. In conclusion, our easy-to-use dual-energy X-ray micro-CT acquisition protocol and data analysis allow the quantification and segmentation of staining agents within soft-tissue samples. Our results describe the major differences in contrast as a function of staining time, staining agent and organ studied. Supported by the 3D mapping of contrast agent accumulation in the organs, our results show a clear improvement in contrast after every staining round with the two staining solutions tested, both of which are commonly employed in micro-CT. Moreover, they show that the staining solutions used here diffuse differently in heart and lung tissues, and that the saturation of some tissues is virtually reached in less than 3 h of staining. Thus, we show that much shorter staining times than those described in the literature (ranging from hours to weeks) are enough to generate the contrast needed for high-quality, high-resolution micro-CT images. Comparing the two staining solutions, Gadovist is a good contrast agent to reveal the general morphology of the organs studied, while I2E provides more detailed images with features on the order of a hundred micrometers, though the heart's blood vessels are well depicted by both staining solutions. Also, the combination of two effects, namely the local anisotropy in tissue diffusivity and the higher permeability of tissues immersed in ethanol solutions, is consistent with I2E staining the heart, the esophagus and the bronchial tree faster than the lung tissue, while the opposite is observed for Gadovist.

Methods
Sample preparation. Organ removal was approved by the internal animal protection committee of the preclinical centre (ZPF) of Klinikum rechts der Isar, Munich, Germany (internal reference number 4-005-09). Two 6-month-old female C57Bl/6 mice (Charles River Laboratories, Europe) were sacrificed in accordance with the relevant guidelines and regulations, and the lungs and hearts were excised and fixed in 1% formaldehyde/2.5% glutaraldehyde in 1x PBS. All staining and imaging procedures were performed in the same container, and care was taken to prevent the samples from moving between staining procedures, to avoid alignment issues during image processing. Only freshly prepared staining solutions were used in all staining steps. For micro-CT imaging at staining time 0, the sample was removed from the fixative solution and inserted in a plastic tube with a small volume of fixative solution at the bottom of the tube, not touching the sample. Imaging was performed with the tube closed to prevent the sample from drying. Immediately afterwards, the heart and attached lungs were dehydrated in a graded series of ethanol solutions (50, 70, 80, 90, 96% and absolute ethanol, 1 h each) and then left in the iodine staining solution (0.5% w/v I2 in absolute ethanol, named I2E).
After 30 min of staining, the staining solution was removed from the container, the organs were washed 3 times in pure ethanol, and the sample was imaged in a closed plastic tube containing enough ethanol to prevent the sample from drying, but without touching it. This 30 min staining procedure followed by micro-CT imaging was repeated 5 consecutive times; thus, the organs were imaged at staining times of 30, 60, 90, 120 and 150 min. For the heart and lungs stained with Gadovist™ (Bayer Healthcare; solution for intravenous injection containing 604.72 mg mL−1 of gadobutrol, a 1.0 mmol mL−1 solution of 10-(2,3-dihydroxy-1-hydroxymethylpropyl)-1,4,7,10-tetraazacyclododecane-1,4,7-triacetic acid), a similar procedure of staining followed by micro-CT imaging was performed. However, since Gadovist and the fixative solution are both aqueous solutions, the dehydration step necessary for I2E staining was not performed here. The heart and lungs were removed from the fixative solution and first imaged with micro-CT, corresponding to staining time 0. Then they were washed 3 times in deionised water and stained for 60 min with pure Gadovist. This 60 min staining procedure followed by micro-CT imaging was repeated 2 more consecutive times; thus, the organs were imaged at staining times of 60, 120 and 180 min.

Data acquisition. Owing to its ability to maintain sub-micrometer resolution even for large samples, a VersaXRM-500 3D X-ray microscope from Zeiss was used to acquire the X-ray computed tomography measurements. All measurement parameters were chosen to reach an appropriate signal-to-noise ratio within a sufficiently short scan time, to avoid sample movement during the scan. A dual-energy technique was used to attain an iodine- or gadolinium-specific image, which maps the element distribution over the heart and lung samples. The dual-energy technique relies on the energy dependence of the X-ray attenuation for different materials (Supplementary Figs S6B and S7B). The samples were imaged with a low-energy spectrum (40.2 kVp for the iodine-stained sample and 60.2 kVp for the gadolinium-stained sample) and immediately afterwards with a high-energy spectrum (70.1 kVp for the iodine-stained sample and 140.0 kVp for the gadolinium-stained sample), using the parameters in Table S1. Additionally, filter materials ensured a sufficient separation of the X-ray emission spectra for every measurement (Supplementary Figs S6A and S7A). For the calibration of the system, phantoms consisting of a rod of PMMA as a soft-tissue-equivalent material were used.

Digital image processing and dual-energy decomposition. The image processing steps are presented in the flowchart in Supplementary Fig. S3 and described in the following text. To achieve high accuracy in image registration, it was necessary to align the dual-energy datasets with each other before reconstruction and to perform an additional post-reconstruction image alignment of the volumes. These steps account for shifts in both translation and rotation. Furthermore, median filtering was used for noise reduction. The sample segmentation was done by drawing regions of interest manually and performing linear interpolation in between. The dual-energy decomposition makes use of two CT scans consecutively acquired at two different energy spectra (Supplementary Figs S8A and S9A) to isolate the contribution of a specific material.
Especially for the non-simultaneous scanning procedure, which does not require a dedicated dual-energy CT setup, it is recommended to perform the decomposition post-reconstruction, as this simplifies the necessary image registration significantly. Even though this method does not prevent image artifacts such as beam hardening, it provides a satisfying quantitative description of the material contents, which gives insight into the distribution of a contrast medium and enhances the understanding of its behaviour. For material decomposition we used a linear optimisation approach with non-negative constraints23. This solves voxel-wise systems of equations describing a composition of two materials, given by:

μ_LE(x,y,z) = F_stain(x,y,z)·μ_stain,LE + F_PMMA(x,y,z)·μ_PMMA,LE
μ_HE(x,y,z) = F_stain(x,y,z)·μ_stain,HE + F_PMMA(x,y,z)·μ_PMMA,HE

The polychromatic attenuation coefficients μ_HE (high energy) and μ_LE (low energy) are given by the flat-field-corrected reconstructed volumes. The values μ_stain and μ_PMMA can be obtained from calibration measurements, as explained above. The resulting material-specific F(x,y,z) gives the fraction of each calibration material for every voxel. Using air as a third material for the decomposition improves the results; its attenuation coefficients are given as the mean value of phantom-free regions in the reconstructed calibration measurements. The systems of equations then change to:

μ_LE = F_stain·μ_stain,LE + F_PMMA·μ_PMMA,LE + F_air·μ_air,LE
μ_HE = F_stain·μ_stain,HE + F_PMMA·μ_PMMA,HE + F_air·μ_air,HE

The validation of the algorithm is shown by the decomposition of the phantoms used for calibration (Supplementary Figs S4 and S5).

Sample volume evaluation. The volume of the samples was computed from the low-energy CT data. The sample container was removed manually with a drawing tool in FEI Avizo Fire 8.1, and voxels consisting of air were set to zero by simple thresholding. Holes and chambers inside the sample were excluded from this step, and they therefore also contribute to the total sample volume. The number of voxels containing values larger than zero was counted automatically and multiplied by (pixel size)3 to obtain the final volume (see Table). Though this size measurement approach might not give an absolute size value, it is an accessible method to compare changes in sample volume.

Availability of data and material. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
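To make the voxel-wise decomposition step described in the Methods concrete, here is a minimal sketch of the non-negative least-squares solve. It is an illustration only, not the authors' implementation: the calibration attenuation values are placeholders, and the sum-to-one closure of the 2x3 system is an assumption that the paper does not state explicitly.

```python
# Sketch of the voxel-wise dual-energy material decomposition described
# above: solve a small non-negative least-squares problem per voxel.
import numpy as np
from scipy.optimize import nnls

# Columns: stain, PMMA, air; rows: low-energy, high-energy attenuation
# coefficients [1/mm]. Placeholder numbers; in practice they come from
# phantom scans and phantom-free (air) regions, as in the Methods.
MU = np.array([[0.80, 0.25, 0.010],
               [0.30, 0.15, 0.005]])

# Close the 2x3 system with a sum-to-one row on the fractions
# (an assumed design choice to make the per-voxel problem well-posed).
A = np.vstack([MU, np.ones(3)])

def decompose(mu_le, mu_he):
    """Return material-fraction volumes F_stain, F_pmma, F_air."""
    shape = mu_le.shape
    out = np.empty((3,) + shape)
    for idx in np.ndindex(shape):          # voxel-wise NNLS solve
        b = np.array([mu_le[idx], mu_he[idx], 1.0])
        out[(slice(None),) + idx], _ = nnls(A, b)
    return out

# Tiny synthetic self-test: forward-project known fractions, then recover.
rng = np.random.default_rng(0)
f_true = rng.random((3, 4, 4)); f_true /= f_true.sum(axis=0)
mu_le, mu_he = np.tensordot(MU, f_true, axes=1)
f_est = decompose(mu_le, mu_he)
assert np.allclose(f_est, f_true, atol=1e-6)
```

Adding a third material (air) or a sum-to-one row are two common ways to close the under-determined two-energy system; the paper reports that including air improved its results.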
Functional outcome of proximal tibia fractures treated with bicondylar plating by a dual approach: Lobenhoffer and lateral approach

Background: Proximal tibia fractures necessitate early diagnosis and management to prevent severe complications. The goal of treating proximal tibia fractures is to achieve anatomical reduction of the articular surface and bring functional mobility back to the pre-injury status. The Lobenhoffer approach provides direct visualization of the fracture and a better reduction technique, which helps to achieve anatomical reduction and alignment. Aim: To assess the reduction of proximal tibia fractures, radiological union and the functional outcome of these fractures after internal fixation by dual plating through a dual approach. Materials and Methods: A total of 30 cases of bicondylar proximal tibia fractures were studied. Observation and Results: Our study used the Honkonen and Jarvinen criteria for functional outcomes and showed good results. Conclusion: Early surgical management of proximal tibia fractures is necessary, as the tibial plateau can tolerate only modest deformities. The Lobenhoffer and lateral approach provides better visualization of the fracture and aids in better surgical management of bicondylar proximal tibial fractures with dual plating, giving excellent anatomical reduction and maintenance of the mechanical axis, and hence a better functional outcome with effective rehabilitation.

Introduction
The tibial plateau is an important part of the articular surface of the knee joint and plays a major role in its biomechanics, i.e. weight transmission and mobility [1]. Management of proximal tibia fractures has always been very challenging, with high complication rates due to soft tissue concerns and fracture morphology [2-4]. With an increase in high-velocity accidents, the incidence of proximal tibia fractures is also on the rise. These fractures encompass many varied fracture configurations that involve the medial, lateral or both plateaus with varied degrees of articular depression and displacement [5]. With the recent change in trends, most authors recommend open reduction and internal fixation (ORIF) for these fractures. The objective of treating these fractures is to reduce complications and restore pre-injury functional mobility. The goal of ORIF is to achieve stable fixation and early mobilization, with near-anatomical reduction of the articular surface, maintenance of the mechanical axis, and anatomical alignment [6,7]. There is a high chance of varus collapse and later post-traumatic arthritis when bicondylar fractures are treated with a single plate and a lag screw [8]. Hence, fixation of both condyles using separate plates has been advocated. To achieve anatomical reduction and alignment, the Lobenhoffer approach has been used for direct visualisation and fixation of the posteromedial fragments of the fracture. The Lobenhoffer approach in the prone position involves minimal soft tissue and neurovascular bundle dissection, better visualization and appropriate placement of hardware [9]. The lateral condyle is fixed using a lateral approach in the supine position. The main objective of the study was to assess the functional outcome of proximal tibia fractures (Schatzker type V and type VI) treated with dual plating using the Lobenhoffer and lateral approaches. The anatomical reduction of the articular surface of the proximal tibia, radiological union and clinical outcome with this treatment modality were assessed.
Material and Methods
This was a clinical, prospective, observational study done in our tertiary centre from June 2018 to July 2020. The study was conducted on patients diagnosed with proximal tibia fractures (Schatzker type V and type VI) on AP and lateral views of knee x-rays. A total of 30 cases were studied. The patients were treated surgically with dual plating using the Lobenhoffer and lateral approaches.

Pre-operative planning
 Patients were received and a detailed history was taken; initial stabilization was done. A thorough examination was carried out, the limb was immobilized with a splint, and relevant imaging (x-ray and CT) and haematology studies were done.
 Once the diagnosis was made, patients were planned for bicondylar plating using the dual approach. Definitive fixation was done only after adequate healing of the soft tissues. Appropriate implants (LCP plates, posteromedial and lateral plates) and instruments were selected.
 Appropriate antibiotic prophylaxis and pre-anaesthetic medications were given.

Surgical procedure
The patient is placed in the prone position, and the parts are painted and draped. The limb is exsanguinated and the tourniquet is inflated. As per the Galla and Lobenhoffer approach, an incision is made over the popliteal fossa along the medial head of the gastrocnemius, extending 6-8 cm distally from the knee joint line. The incision is opened in layers through the subcutaneous tissue and fascia. The semitendinosus and the medial head of the gastrocnemius are identified. The semitendinosus is retracted medially and the medial head of the gastrocnemius is retracted laterally. The periosteum is then incised and the fracture is identified. The fracture fragments are reduced with the knee in extension and a simultaneous axial pull. Fixation is done using posteromedial plates. A wound wash is given and the incision is closed in layers. The position is then changed to supine and the lateral condyle fracture is approached anterolaterally. An "S"-shaped incision is made starting 5 cm proximal to the joint line, curving anteriorly over Gerdy's tubercle and extending distally 1 cm lateral to the anterior border of the tibia. The joint capsule is incised. The tibialis anterior is elevated by blunt dissection. If depression of the articular surface is present, it is elevated, and the fracture is reduced and fixed using a proximal tibia lateral locking compression plate. Reduction and fixation are confirmed under fluoroscopy.

Post-operative protocol
Sterile compression dressing was done. Regular wound inspection and change of dressing were carried out. Active knee mobilisation was started as tolerated. Suture removal was done on day 12. Patients were advised non-weight-bearing walker mobilisation for 8 weeks and regular follow-up for 6 months.

Follow up
At every follow-up, the operative site was examined for wound dehiscence and signs of infection, and imaging was done to assess fracture union, any loss of reduction and implant-related failure. Patients were reviewed in the out-patient department at 6 weeks, 12 weeks and 6 months. Signs of union were noted on the serial x-rays taken. Partial weight bearing was encouraged after 8 weeks. Patients were advised full weight-bearing mobilization after radiological signs of union were noted. Functional, radiological and clinical outcomes were assessed and scored according to the Honkonen and Jarvinen criteria at each follow-up.

Results
The observational study was carried out on 30 patients diagnosed with Schatzker type V and VI proximal tibia fractures.
The majority of the patients in our study were males aged between 31 and 40 years. Motor vehicle accidents and falls from height accounted for the modes of injury in these fractures. The majority of cases were operated on within 4-6 days following injury; the mean delay to surgery was 4.9 days. 70% of the fractures were Schatzker type VI and the remaining 30% were Schatzker type V. Twenty-five patients in our study had no complications. Knee stiffness was the most common complication, noted in 3 of our patients, and these patients were started on a strict physiotherapy regimen. Superficial infection was noted in one patient and was managed with dressings and appropriate antibiotics according to the culture and sensitivity report. One patient had skin necrosis, which was managed with skin grafting and regular saline dressings.

Discussion
With the increase in the incidence of motor vehicle accidents, tibial fractures may justifiably be termed fractures of the modern age. In-depth technical knowledge and expertise in surgical skills are necessary for the management of tibial plateau fractures. In the past, these fractures were treated through a single midline incision (Mercedes-Benz incision), which was associated with high wound complication rates and secondary loss of alignment [10]. Wound complications led to the use of hybrid external fixators and the Ilizarov system for these proximal tibia fractures [11]. Coronal fractures such as the posteromedial fragments were missed in the earlier days and usually led to loss of alignment and early arthritis [12]. Due to their localization, posteromedial fragments are difficult to incorporate into the wire assembly of hybrid fixators, and given the large forces acting on them, purely transcutaneous screw fixation of such fragments without a supporting plate makes little biomechanical sense. The goals of management for tibial plateau fractures are anatomical reduction, maintenance of alignment, and stable fixation to allow early rehabilitation. Initially, locking compression plates were used only on the lateral side, but this method of fixation was associated with many complications, such as varus collapse of the medial fragments stabilised by screws [13,14]. This further led to the concept of dual plating using the two-incision technique. The Association for Osteosynthesis has advised two-incision dual plating for the treatment of bicondylar proximal tibia fractures [15]. Dual plating with two incisions provides better visualization and hence better fixation with a rigid construct and fewer wound complications. The Galla and Lobenhoffer approach provides better visualization of the fracture and an easier anatomical reduction manoeuvre. In 1992, Honkonen and Jarvinen described a comprehensive grading system according to which the outcomes of proximal tibia fractures are classified as excellent, good, fair, or poor. The HJ criteria are based upon four parameters: subjective assessment, clinical assessment, functional evaluation, and radiological scoring. The subjective assessment records the frequency of symptoms experienced by the patient: daily, weekly, fortnightly, monthly or never. The clinical evaluation is based upon extension lag, range of flexion, and thigh atrophy. The functional assessment comprises the ability to walk, climb stairs, jump, squat, and duck walk.
The radiological assessment includes the degree of varus/valgus and tilting of the plateau, the articular step-off and condylar widening in millimetres, and relative joint space narrowing, which indicates degenerative changes after the plateau fracture [16]. In our study, males were predominantly affected, which can be attributed to our Indian setting, where the female population predominantly works indoors. Most of our patients were between 31 and 40 years of age; hence we can conclude that the younger sections of our society sustain these fractures owing to their active lifestyle. Among the 30 patients, the most common mode of injury was road traffic accidents, followed by falls from height, and the left side was more commonly involved than the right; we had 21 Schatzker type VI and 9 Schatzker type V fractures. Our study used the Honkonen and Jarvinen criteria for the functional and clinical outcomes, which showed excellent to good results: the clinical outcome was 73.3% excellent, 23.3% good and 3.3% fair, and the functional outcome was 80% excellent, 13.3% good, 3.3% fair and 3.3% poor. The complications seen in our study were knee stiffness, the most common, in 3 cases (10%), skin necrosis in 1 case (3.3%) and superficial wound infection in 1 case (3.3%). We used the Galla and Lobenhoffer approach in all cases and had excellent results with the method, suggesting that this surgery reduces surgical trauma to the soft tissue, decreases the need for immobilization and decreases the overall complications associated with surgery, leading to an excellent functional outcome of the knee joint, as confirmed by the follow-up of the cases.

Conclusion
Considering the treatment modalities followed in the past, we can conclude that proximal tibia fractures can tolerate only minimal deformities. Hence, from our study we conclude that surgical management of bicondylar fractures is ideal. The Lobenhoffer approach offers a safe alternative for addressing the posteromedial fractures often seen in high-energy tibial plateau fractures. It allows direct visualization of the fracture fragment for accurate reduction and plating. Utilization of this approach will maximize the treatment of isolated posteromedial and bicondylar fractures of the tibial plateau. Stable fixation and effective post-operative rehabilitation are important in the management of proximal tibia fractures.
The effect of noise intensity on stochastic parabolic equations

In the present paper, the effect of noise intensity on stochastic parabolic equations is discussed. We focus on the effect of noise on the energy solutions of the stochastic parabolic equations. By utilising Itô's formula and the energy estimate method, we obtain the excitation indices of the energy solutions $u$ at any finite time $t$. Furthermore, we improve certain existing results in the literature by presenting a comparably simple method to show those existing results.

Introduction
In recent years, many authors have attempted to explore the role of noise in various dynamical equations, in both analytical and numerical aspects. For example, noise can make the solution smooth [10], can prevent singularities in linear transport equations [9], can prevent the collapse of Vlasov-Poisson point charges [6], and can also induce singularities (finite-time blow-up of solutions) [3,4,19]. In the present paper, we focus on the effect of noise on parabolic equations. "Intermittency" is the property that the solution u(t, x) develops extreme oscillations at certain values of x, typically when t is large. Intermittency was announced first (1949) by Batchelor and Townsend at a conference in Vienna [1], and slightly later by Emmons [8] in the context of boundary layer turbulence. Meanwhile, intermittency has been observed in an enormous number of scientific disciplines; for example, it is observed as "spikes" in neural activity and as "shocks" in finance. Tuckwell [24] contains a gentle introduction to SPDEs in neuroscience.

Recently, Khoshnevisan-Kim in [14,15] considered the following stochastic heat equation
$$\frac{\partial u}{\partial t}(t,x) = (\mathcal{L}u)(t,x) + \lambda\,\sigma(u(t,x))\,\xi(t,x), \qquad (1.1)$$
where t ∈ (0, ∞) stands for the time variable, x ∈ G the space variable, with G being a given nice state space such as R, Z (a discrete set) or a finite interval like [0, 1], and the initial value u_0 : G → R is deterministic (i.e., non-random) and well behaved. The operator $\mathcal{L}$ acts on the spatial variable x ∈ G only, and is taken to be the generator of a nice Markov process on G, and ξ denotes space-time white noise on (0, ∞) × G. Here, λ > 0 is a constant and the coefficient σ : R → R is supposed to be a Lipschitz continuous function.

Let u be a mild solution of (1.1) with given initial data u_0. Set $u_t := u(t,\cdot)$ and then define
$$\mathcal{E}_t(\lambda) := \sqrt{\mathbb{E}\big(\|u_t\|_{L^2(G)}^2\big)},$$
which stands for the energy of the solution at time t. In papers [14,15,11,18], the authors showed that the energy $\mathcal{E}_t(\lambda)$ behaves like exp(const · λ^q), for a certain fixed positive constant q, as λ ↑ ∞. In order to do so, the following two quantities have been introduced:
$$\underline{e}(t) := \liminf_{\lambda\to\infty}\frac{\log\log\mathcal{E}_t(\lambda)}{\log\lambda}, \qquad \overline{e}(t) := \limsup_{\lambda\to\infty}\frac{\log\log\mathcal{E}_t(\lambda)}{\log\lambda}.$$
Clearly, $\underline{e}(t)$ and $\overline{e}(t)$ represent the lower and upper excitation indices of u at time t, respectively. In many interesting cases, $\underline{e}(t)$ and $\overline{e}(t)$ are exactly equal, and they do not depend on the time variable t ∈ [0, ∞). In such situations, we tacitly write $e$ for that common value, just for simplicity. In paper [14], Khoshnevisan and Kim proved, among other results, the following: (ii) Suppose that G is connected and (1.4) holds; then $\underline{e}(t) \ge 4$ for all t ≥ 0, provided that in addition either G is non-compact, or G is compact, metrizable, and has more than one element. (iii) For every θ ≥ 4 there exist models of the triple (G, $\mathcal{L}$, u_0) for which $e = \theta$. One such model is $\mathcal{L} := -(-\Delta)^{\alpha/2}$ (the generator of a symmetric α-stable Lévy process) for 1 < α ≤ 2.
In [15], Khoshnevisan-Kim considered the following problem for the stochastic evolution equation
$$\frac{\partial u}{\partial t}(t,x) = \frac{\partial^2 u}{\partial x^2}(t,x) + \lambda\,\sigma(u(t,x))\,\dot{w}(t,x), \qquad 0 < x < L,\ t > 0, \qquad (1.5)$$
with Dirichlet boundary conditions u(t, 0) = u(t, L) = 0 and initial data u(0, x) = u_0(x), where $\dot{w}$ is a space-time white noise, L > 0 is fixed, u_0(x) ≥ 0 is a non-random, bounded continuous function and σ : R → R is a Lipschitz continuous function with σ(0) = 0. They derived results on the corresponding excitation indices. More recently, Foondun-Joseph [11] complemented the results of [15]; that is, they obtained $e = 4$. It is easy to see that a mild solution u of (1.5) which is adapted to the natural filtration of the white noise $\dot{w}$ satisfies the following mild formulation of the evolution equation:
$$u(t,x) = (G_D u_0)(t,x) + \lambda\int_0^t\!\!\int_0^L p_D(t-s,x,y)\,\sigma(u(s,y))\,w(dy\,ds),$$
where $(G_D u_0)(t,x) := \int_0^L u_0(y)\,p_D(t,x,y)\,dy$ and $p_D(t,x,y)$ denotes the Dirichlet heat kernel, D := [0, L]. They used estimates of the kernel $p_D(t,x,y)$ and a new Gronwall inequality to prove that $e = 4$. Using a similar method, Liu-Tian-Foondun [18] considered the fractional Laplacian on a bounded domain.

Now, given a complete probability space endowed with a filtration (Ω, F, {F_t}_{t≥0}, P), let us consider the following linear stochastic differential equation (SDE), with λ > 0 as given before:
$$dX_t = \lambda X_t\,dB_t, \qquad X_0 = x_0 \neq 0.$$
For simplicity, we assume that B(t) is a standard one-dimensional Brownian motion on (Ω, F, {F_t}_{t≥0}, P). It is easy to see that the unique solution of the above SDE is explicitly given by
$$X_t = x_0\exp\Big(\lambda B_t - \frac{\lambda^2 t}{2}\Big).$$
Direct calculations then show that
$$\mathbb{E}\big(X_t^2\big) = x_0^2\,\exp(\lambda^2 t),$$
which yields that the excitation index of X_t is 2. This is clearly different from the results obtained in [11,15,25], where the authors proved the excitation index of the solution (with the energy taken over [ε, L − ε], ε being a sufficiently small constant) to be 4. A natural and very interesting question then appeared: are there solutions of (1.5) whose associated indices equal 2? This motivated us to initiate the present paper.

Another purpose of our paper is to introduce a comparably simpler method to prove the result of [11] in a simple case. That is, we consider the following stochastic parabolic equation
$$du(t,x) = \Delta u(t,x)\,dt + \lambda\,\sigma(u(t,x))\,dB_t, \qquad x \in D,\ t > 0, \qquad (1.7)$$
where D ⊂ R^n (n ≥ 1) and B_t is a standard one-dimensional Brownian motion on (Ω, F, {F_t}_{t≥0}, P) as given above. We will obtain a result similar to [11] by changing the stochastic parabolic equations into random parabolic equations. Moreover, it is not hard to see that in the earlier results the authors only consider the expression $\mathbb{E}\,\|u(t)\|_{L^2(G)}^2$, while we can consider the moments $\mathbb{E}\,\|u(t)\|_{L^2(G)}^{p}$, p > 0, which is clearly an interesting generalisation. In this paper, we will focus on the noise excitability of energy solutions of some parabolic equations. We obtain a new result regarding noise excitability, namely $e = 2$ under the same condition as in [11], when the noise is only a time white noise (not a space-time white noise). The contribution of our paper is that we consider energy solutions (compared to [11], where mild solutions are considered). The rest of the paper is organised as follows. In Section 2, some preliminaries and the main results are given. Section 3 is devoted to the proofs of the main results. In Section 4, we consider a special noise case of (1.7), and we discuss the noise excitability of stochastic equations involving nonlocal operators.

Preliminaries and two main results
Inspired by [14,15,11,20,23], in this paper we consider the simple case
$$du(t,x) = \Delta u(t,x)\,dt + \lambda\,\sigma(u(t,x))\,dB_t, \qquad x \in D,\ t > 0, \qquad (2.1)$$
where D ⊂ R^n (n ≥ 1) is a bounded domain and B_t denotes a one-dimensional Brownian motion. The existence of solutions of (2.1) was obtained in [2]. The main results of this paper are formulated as follows.

Theorem 2.1 Assume that (1.6) holds.
Then the noise excitation index of the energy solution to (2.1) with initial data u_0(x) ≥ 0 (≢ 0) is 2.

If the one-dimensional Brownian motion is replaced by a Q-Wiener process, where Q is a trace-class operator on L²(D), the result of Theorem 2.1 still holds. In fact, for the following equation, driven by a noise that is white in time,
$$du(t,x) = \Delta u(t,x)\,dt + \lambda\,\sigma(u(t,x))\,dw(t,x), \qquad x \in D,\ t > 0, \qquad (2.2)$$
where D ⊂ R^n (n ≥ 1) is a bounded domain and w(t, x) is a Q-Wiener process, we have the following result. The existence of solutions of (2.2) was also obtained in [2]. In this case, the noise $\dot{w}(t,x)$ is white in time and colored in the space variable.

Theorem 2.2 Assume that 0 < sup_{x∈D} q(x, x) ≤ q_1 < ∞. Then the upper excitation index of the solution to (2.2) with initial data u_0(x) ≥ 0 is 2. Furthermore, if σ ≥ 0 (or ≤ 0) and there is a positive real number q_0 > 0 such that q_0 < inf_{x,y∈D} q(x, y), then the excitation index of the solution to (2.2) with initial data u_0(x) ≥ 0, ≢ 0, is 2.

Remark 2.1 We now give the reason why we cannot consider the case in which the nonlinearity satisfies only a local Lipschitz condition. More precisely, consider the following general case
$$du(t,x) = Au(t,x)\,dt + f(u(t,x))\,dt + \lambda\,\sigma(u(t,x))\,dB_t, \qquad (2.3)$$
where A is a divergence-form operator and f and σ satisfy a local Lipschitz condition. For example, let f(u) ≥ au^{1+α} and σ(u) = u^m. Then the solutions of (2.3) will blow up in finite time (see [4,19]). Moreover, the largest existence time T → 0 as λ → ∞. So we cannot consider problem (2.3).

The proofs of our main results
In this section, we will prove Theorem 2.1 and Theorem 2.2 by using the energy method. Let us first prove Theorem 2.1.

Proof of Theorem 2.1. By using the ideas of [20,23], one can prove that there exists a unique energy solution. It follows from the results of [19] that the energy solution remains positive if the initial data satisfy u_0 ≥ 0 almost surely. We divide our proof into two steps.

Step 1: $\overline{e}(t) \le 2$. By Itô's formula, we have
$$d\|u(t)\|_{L^2(D)}^2 = 2(u,\Delta u)\,dt + \lambda^2\|\sigma(u)\|_{L^2(D)}^2\,dt + 2\lambda\,(u,\sigma(u))\,dB_t.$$
Integrating by parts shows that $(u,\Delta u) = -\|\nabla u\|_{L^2(D)}^2 \le 0$, so that, taking expectations and using the upper bound in (1.6),
$$\mathbb{E}\,\|u(t)\|_{L^2(D)}^2 \le \|u_0\|_{L^2(D)}^2 + \lambda^2 L_\sigma^2\int_0^t \mathbb{E}\,\|u(s)\|_{L^2(D)}^2\,ds.$$
It follows from Gronwall's inequality that
$$\mathbb{E}\,\|u(t)\|_{L^2(D)}^2 \le \|u_0\|_{L^2(D)}^2\,\exp\big(\lambda^2 L_\sigma^2\,t\big),$$
which implies that $\overline{e}(t) \le 2$.

Step 2: $\underline{e}(t) \ge 2$. In order to get the lower bound, let us consider the following eigenvalue problem for the elliptic equation
$$-\Delta\phi = \lambda_1\phi \ \text{ in } D, \qquad \phi = 0 \ \text{ on } \partial D.$$
Since all the eigenvalues are strictly positive and increasing, and the eigenfunction φ corresponding to the smallest eigenvalue λ_1 does not change sign in the domain D, as shown in [13], one can normalise it in such a way that φ > 0 in D. Noting that under the assumptions of Theorem 2.1 the solutions of (2.1) remain positive, we can consider (u(t), φ), since (u(t), φ) > 0. Denote $\hat{u}(t) := (u(t),\phi)$. By applying Itô's formula to $\hat{u}^2(t)$ and making use of (3.3), we get
$$d\hat{u}^2(t) = \big(-2\lambda_1\hat{u}^2(t) + \lambda^2(\sigma(u),\phi)^2\big)\,dt + 2\lambda\,\hat{u}(t)\,(\sigma(u),\phi)\,dB_t.$$
Taking expectations and using the lower bound in (1.6) then yields that
$$\frac{d}{dt}\,\mathbb{E}\,\hat{u}^2(t) \ge \big(\lambda^2 l_\sigma^2 - 2\lambda_1\big)\,\mathbb{E}\,\hat{u}^2(t).$$
By the comparison principle, we know that
$$\mathbb{E}\,\hat{u}^2(t) \ge \hat{u}^2(0)\,\exp\big((\lambda^2 l_\sigma^2 - 2\lambda_1)\,t\big).$$
Since $\hat{u}(0) = (u_0,\phi) > 0$, this implies $\underline{e}(t) \ge 2$, which completes the proof.

Outline of the proof of Theorem 2.2. Similar to the proof of Theorem 2.1, equation (2.2) has a unique positive energy solution.

Remark 3.1 1. We have considered the problem in higher space dimensions, as we study equations perturbed by a noise that is white in time and colored in the space variable; in papers [14,15,11,18] the authors only considered one space dimension, because their equations are driven by space-time white noise. 2. The Laplace operator Δ can be replaced by a divergence-form operator A.

A special case and the noise excitability for nonlocal operators
In this section, we consider the following problem
$$du(t,x) = \Delta u(t,x)\,dt + \lambda\,u(t,x)\,dB_t, \qquad x \in D,\ t > 0; \qquad u(0,x) = u_0(x), \quad x \in D, \qquad (4.1)$$
where D ⊂ R^n (n ≥ 1) and B_t is a standard one-dimensional Brownian motion on a stochastic basis (Ω, F, {F_t}_{t≥0}, P). We first give an equation equivalent to (4.1).

Lemma 4.1 Let $v(t,x) := u(t,x)\exp(-\lambda B_t + \lambda^2 t/2)$. Then v solves the parabolic equation
$$\frac{\partial v}{\partial t}(t,x) = \Delta v(t,x), \qquad v(0,x) = u_0(x), \quad x \in D. \qquad (4.2)$$
The proof of this lemma is standard; see e.g. the proof of Proposition 1.1 of [7]. We therefore omit it here.
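Lemma 4.1 reduces (4.1) to a deterministic heat equation multiplied by the scalar lognormal factor exp(λB_t − λ²t/2); the excitation index 2 then rests on the same moment identity E[exp(2λB_t − λ²t)] = exp(λ²t) that appeared for the linear SDE in the Introduction. The following short Monte Carlo check of that identity is an illustration added here, not part of the original paper:

```python
# Monte Carlo check of the moment computation behind the excitation
# index 2: for dX_t = lam * X_t dB_t, the exact solution
# X_t = x0 * exp(lam*B_t - lam**2 * t / 2) gives
# E[X_t**2] = x0**2 * exp(lam**2 * t),
# so log log E[X_t**2] / log(lam) -> 2 as lam grows.
import numpy as np

rng = np.random.default_rng(42)
x0, t, n = 1.0, 1.0, 2_000_000

for lam in (0.5, 1.0, 1.5):
    # Sample B_t ~ N(0, t) directly; no time stepping is needed
    # because the solution is known in closed form.
    b_t = rng.normal(0.0, np.sqrt(t), size=n)
    x_t = x0 * np.exp(lam * b_t - 0.5 * lam**2 * t)
    mc = np.mean(x_t**2)
    exact = x0**2 * np.exp(lam**2 * t)
    print(f"lambda={lam}: MC={mc:.3f}, exact={exact:.3f}")

# Note: for large lam, the lognormal tails make the plain Monte Carlo
# estimate very noisy; that heavy-tail behaviour is itself a fingerprint
# of the intermittency discussed in this paper.
```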
Theorem 4.1 Let u be a weak solution of (4.1) with deterministic initial data u_0 satisfying
$$c_1 \le u_0(x) \le c_2, \qquad x \in D,$$
where c_i, i = 1, 2, are positive constants. Then, for every p > 0, the corresponding excitation index equals 2.

Proof. It follows from Lemma 4.1 that the solutions of (4.1) can be expressed as
$$u(t,x) = v(t,x)\,\exp\Big(\lambda B_t - \frac{\lambda^2 t}{2}\Big).$$
It follows from classical parabolic theory that the solutions v of (4.2) can be written as $v(t,x) = (e^{t\Delta}u_0)(x)$. Combining the above two estimates, we get the desired result. The proof is thus complete.

Next, we will consider the following initial value problem for nonlocal equations
$$\frac{\partial u}{\partial t}(t,x) = -(-\Delta)^{\alpha/2}u(t,x) + \lambda\,\sigma(u(t,x))\,\dot{w}(t,x), \qquad x \in \mathbb{R},\ t > 0, \qquad (4.5)$$
where α ∈ (1, 2], $(-\Delta)^{\alpha/2}$ is the L²-generator of a symmetric α-stable process X_t such that E exp(iξ · X_t) = exp(−t|ξ|^α), and $\{\dot{w}(x,t)\}_{t\ge 0,\,x\in\mathbb{R}}$ denotes the space-time white noise. In paper [18], the authors considered equation (4.5) on a bounded domain. Here we would like to generalise the result to the situation of the whole spatial space. When σ satisfies a global Lipschitz condition, it is routine to show that (4.5) has a unique global mild solution; see e.g. the monographs [2,21,17], as well as Dalang [5] and Foondun-Khoshnevisan [12]. It is easy to see that the mild solution of (4.5) fulfills the following mild formulation:
$$u(t,x) = \int_{\mathbb{R}} p(t,x-y)\,u_0(y)\,dy + \lambda\int_0^t\!\!\int_{\mathbb{R}} p(t-s,x-y)\,\sigma(u(s,y))\,w(dy\,ds), \qquad (4.6)$$
where p(t, x) is the transition density function of the symmetric α-stable process X_t. Before we state our main results, we recall some properties of the kernel (transition density) function p(t, x): (iii) $p(t,x) \asymp t^{-1/\alpha} \wedge \dfrac{t}{|x|^{1+\alpha}}$. By using Proposition 4.1, it is easy to verify that
$$\int_{\mathbb{R}} p(t,x)\,p(s,x)\,dx = p(t+s,0). \qquad (4.7)$$
In particular, $\|p(t,\cdot)\|_{L^2(\mathbb{R})}^2 = p(2t,0)$. Assume that (4.8) holds, where µ is a positive constant; then the noise excitation index of the solution to (4.5) with initial data u_0(x) ≥ 0 (≢ 0) is 2α/(α − 1).

Remark 4.1 We remark that the condition (4.8) does indeed make sense. Let us give an example.
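As a side note on the kernel identity (4.7) used above: it follows in one line from the symmetry of the stable density and the Chapman-Kolmogorov (semigroup) property. The short computation below is an annotation added here, not from the original text:

```latex
\begin{align*}
\int_{\mathbb{R}} p(t,x)\,p(s,x)\,\mathrm{d}x
  &= \int_{\mathbb{R}} p(t,x)\,p(s,-x)\,\mathrm{d}x
     && \text{(symmetry of the stable density)} \\
  &= p(t+s,0)
     && \text{(Chapman-Kolmogorov property)}.
\end{align*}
```

Setting s = t recovers $\|p(t,\cdot)\|_{L^2(\mathbb{R})}^2 = p(2t,0)$; together with the self-similarity $p(t,x) = t^{-1/\alpha}\,p(1,\,t^{-1/\alpha}x)$, this gives $p(2t,0) = (2t)^{-1/\alpha}p(1,0)$, the kind of α-dependent scaling that enters the index 2α/(α − 1).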
Pulmonary Benign Metastasizing Leiomyoma from the Uterine Leiomyoma: A Case Report

Summary
Background: Benign metastasizing leiomyoma (BML) is a rare condition described as multiple well-differentiated leiomyomas at sites distant from the uterus. Apart from the lungs, it has also been reported in lymph nodes, heart, brain, bone, skin, eye and spinal cord. We present a case of pulmonary benign metastasizing leiomyoma in a female patient admitted to our hospital with suspicion of a left adnexal tumor. Case Report: A 45-year-old woman was referred to our hospital with suspicion of a left adnexal tumor. The control transvaginal ultrasound examination performed at admission to the Gynecological Department excluded an adnexal neoplasm. However, a large amount of fluid within the pouch of Douglas raised oncological concern. The patient had undergone myomectomy in 2005. In the same year she was diagnosed with multiple lung nodules and underwent pulmonary wedge resection, with the diagnosis of pulmonary benign metastasizing leiomyoma being stated. The decision was made to re-evaluate the specimen, perform a control CT, and puncture the Douglas pouch fluid. Computed tomography performed at the Department of Diagnostic Imaging and Interventional Radiology of the Pomeranian Medical University Hospital revealed multiple bilateral nodules. The microscopic examination of the samples confirmed the initial diagnosis of benign metastasizing leiomyoma, with no evidence of neoplastic cells within the fluid. Conclusions: Pulmonary benign metastasizing leiomyoma is a rare entity. However, it should always be taken into consideration in women with a previous or coincident history of uterine leiomyoma, especially when no evidence of other malignancy is present.

Background
Benign metastasizing leiomyoma (BML) is a rare entity first described by Steiner in 1939 [1-5]. The clinical course is usually indolent, with incidental finding of pulmonary nodules on routine chest X-rays [2,3]. It affects middle-aged women with a previous or coincident history of uterine leiomyoma. Despite its ability to metastasize, BML is considered benign due to the lack of mitotic figures and anaplasia [5]. The lung is the most common site of involvement, whereas lymph nodes, heart, brain, skin and eye are more rarely affected [1-3,5]. There is much controversy concerning the pathogenesis and treatment of this condition [6]. We present a case of pulmonary BML in a 45-year-old woman.

Case Report
A 45-year-old asymptomatic woman was referred to our hospital with suspicion of a left adnexal tumor revealed by transvaginal ultrasonography (TVUS) performed in a private practice. The patient had a past medical history significant for depression and gallbladder calculosis. She did not smoke and drank alcohol occasionally. She had undergone appendectomy several years earlier, and her family history was significant for liver cancer in her aunt. In 2005 she was diagnosed with uterine leiomyoma, with subsequent myomectomy. In the same year she was found to have multiple, well-defined nodules of the lungs on a routine chest radiograph. The lesions, approximately 15 mm in size, were located in both lungs. The fiberoptic bronchoscopy, lavage and sputum examinations performed at a local hospital did not show any tumor; therefore, the patient was sent for open pulmonary biopsy for diagnosis. After a chest CT examination, wedge resection of the left lower and upper lobes was performed at the Thoracosurgery Department, and the pathologic diagnosis of benign metastasizing leiomyoma was made.
She was then referred to the Pulmonary Institute in Warsaw for further follow-up. In 2007, due to recurrence of leiomyomas, the patient underwent hysterectomy without oophorectomy. In 2012 she was admitted to our hospital with suspicion of a left adnexal tumor. The control TVUS performed at admission to the Gynecological Department revealed a corpus luteum. However, the presence of fluid within the pouch of Douglas raised oncological concern. The decision was made to re-evaluate the specimen, perform a control CT, and puncture the Douglas pouch fluid; fluid cytology was negative for malignant cells. Chest computed tomography (CT) performed at our department showed multiple, mostly round, well-circumscribed, slightly enhancing nodules of maximum 35 mm in size. The lesions had increased in size compared with the initial CT examination performed before thoracotomy in 2005. No mediastinal lymphadenopathy was observed. We could not compare our results with those from the Pulmonary Institute in Warsaw, as her medical records from that time period were lost (Figure 1). During re-evaluation of the specimen, the pathologic findings from the open lung biopsy were compared with the pathologic findings of the resected uterine leiomyomas, with additional staining for estrogen and progesterone receptors. The histopathological report revealed that the resected lung tumors were of similar microscopic appearance. Irregular cystic areas lined with a single layer of lung cells were noted, and between those areas spindle cells were present. There were no mitotic figures, areas of necrosis or nuclear atypia. The immunohistochemical staining of the epithelial cells was positive for CK AE1/AE3 and TTF-1, both of which are lung cell antigens, whereas the immunohistochemical staining of the spindle cells was positive for SMA, desmin, and estrogen and progesterone receptors. The specimen was also positive for Ki-67 (1%), but negative for HMB-45. The histopathological results ruled out the possibility of lymphangioleiomyomatosis and confirmed the presence of smooth muscle cells related to the uterine body; thus the diagnosis of BML was made. The histopathological appearance of the resected uterine tumors was typical of benign leiomyomas. The biggest lesion was 25 mm in size. There were no mitotic figures, areas of necrosis or nuclear atypia. The immunohistochemical staining results were identical to those of the spindle cells of the lung nodules and were positive for SMA, desmin, Ki-67 (1%), and estrogen and progesterone receptors. After the initial diagnosis of BML was confirmed, the patient was offered hormonal treatment with GnRH agonists and was referred to the outpatient clinic for further observation (Figure 2).

Discussion
Uterine leiomyomas are the most common tumors of the uterus in women [4].

[Figure 1. (A, B) Axial chest computed tomography scans show multiple, well-defined pulmonary nodules in the right lower lobe.]

By contrast, benign metastasizing leiomyoma is a rare disease described as multiple well-differentiated leiomyomas at sites distant from the uterus [1-3,7]. The lung is the most common location of involvement, whereas lymph nodes, heart, brain, bone, skin, eye and spinal cord are more rarely affected [1-3,5]. Though first described by Steiner in 1939, the pathogenesis of pulmonary BML has not yet been completely identified [2,3,8]. Steiner hypothesized that benign smooth muscle cells are transported from the uterine leiomyoma and colonize the lung [3,8].
In contrast, some authors regard these lesions as metastases of low-grade leiomyosarcoma, or as uterine leiomyomas coexisting with lymphangioleiomyomatosis of the lungs [2]. However, the histological structure of BML, as well as currently used immunohistochemical tests and molecular analyses, exclude the latter mechanisms. Patton et al. assessed clonality by analyzing the variable length of the polymorphic CAG repeat sequence within the human androgen receptor gene [9]. The pulmonary and uterine tumors showed identical patterns of androgen receptor allelic inactivation, indicating that they were clonal [9]. The authors also evaluated telomere length with the FISH method; telomeres appeared long or very long in both BML and the corresponding uterine tumors. Therefore, telomere shortening did not prove to be the crucial stage in metastasis formation. Nuovo and Schmittgen analyzed micro-RNA with the FISH method in BML, leiomyomas and leiomyosarcomas [10]. In none of the 10 cases of BML, nor in any of the 8 cases of leiomyoma, was altered expression of the micro-RNAs strongly correlating with a neoplastic phenotype observed; it was, however, present in 13 of 15 cases of leiomyosarcoma. Our patient was asymptomatic and had lung nodules discovered several months after myomectomy, during an annual health check examination. Most of the BML patients reported on remain asymptomatic and are diagnosed when pulmonary lesions are incidentally found on imaging. In rare cases, symptoms such as cough, chest pain, dyspnea and bloody sputum have been described [2,4,5,7]. The entity usually occurs in women with a previous or coincident history of uterine leiomyoma [1-3,7]. The mean time period from hysterectomy to the appearance of lung nodules is 15 years; however, metastatic foci of leiomyoma have been discovered up to 24 years after hysterectomy [4,5]. The typical radiological findings of BML include diffuse, bilateral nodular opacities on chest X-rays. Computed tomography usually reveals solitary or multiple well-defined soft-tissue masses scattered in both lungs, with no enhancement after intravenous contrast medium administration, ranging in size from a few millimeters to several centimeters [2-4]. Miliary patterns, interstitial lung disease, cavitary nodules and multiloculated fluid-containing cystic lesions have rarely been noted. Endobronchial and pleural sparing is also characteristic, and there is no mediastinal lymphadenopathy. Pulmonary nodules may remain stable, decrease or increase in size [2-4]. In our case, the patient had multiple, perfectly outlined, slightly enhancing nodules which had increased in size compared with the CT examination performed before thoracotomy. There was no evidence of enlarged lymph nodes. Before stating the diagnosis of BML, it is important to exclude leiomyosarcoma [4]. In BML, the typical histological pattern includes spindle-shaped cells with low variance in cell size and shape, without mitotic figures or a disorganized growth pattern. Moreover, the Ki-67 index for BML is lower than that for leiomyosarcoma [4]. It is considered that leiomyosarcoma, even with low cellular atypia or no necrotic areas, has distinctly higher proliferative activity. Nuovo and Schmittgen reported a mean Ki-67 index of 3.4% (range: 0.7% to 8.1%) for BML and 28.6% (range: 14.4% to 62%) for leiomyosarcoma (p<0.025) [5]. The Ki-67 index for both the lung and uterine tumors in our patient was 1%. Another entity with which BML should not be confused is lymphangioleiomyomatosis (LAM).
Contrary to BML, in the course of LAM there is proliferation of atypical smooth muscle cells along blood vessels, lymphatics and small airways. Immunohistochemical staining for HMB-45 is positive in LAM, but negative in BML [2,3]. The re-evaluated tumor specimens in our case were negative for HMB-45. Because of the presence of estrogen and progesterone receptors in BML, its treatment modalities are based on hormonal manipulation by means of surgical or medical oophorectomy. Lung nodules tend to remain stable or occasionally regress after various treatment options, including GnRH agonists, progesterone, estrogen receptor antagonists and aromatase inhibitors [2-4]. Some authors, however, prefer a 'wait-and-see' strategy, especially as in some cases regression of metastatic lesions has been observed in situations where estrogen levels naturally drop, such as termination of pregnancy and menopause [2,3]. Our patient was offered medical treatment with GnRH agonists due to the size progression of the lung lesions.

Conclusions
Pulmonary BML is a rare entity. However, it should always be taken into consideration in women with a previous or coincident history of uterine leiomyoma, especially when no evidence of other malignancy is present. The accurate diagnosis should be based not only on the medical history but also on histopathological and immunohistochemical examinations of the lung nodules. Since a standard treatment of BML has not yet been established, an individual approach should be considered in particular clinical cases.
CO2 and CH4 fluxes are decoupled from organic carbon loss in drying reservoir sediments

Reservoirs are a prominent feature of the current global hydrological landscape, and their sediments are the site of extensive organic carbon burial. Meanwhile, reservoirs frequently go dry due to drought and/or water management decisions. Nonetheless, the fate of organic carbon buried in reservoir sediments upon drying is largely unknown. Here, we conducted a 45-day-long laboratory incubation of sediment cores collected from a western Mediterranean reservoir to investigate carbon dynamics in drying sediment. Drying sediment cores emitted more CO2 over the course of the incubation than sediment cores incubated with overlying water (206.7 ± 47.9 vs. 69.2 ± 18.1 mmol CO2 m−2 day−1, mean ± SE). Organic carbon content at the end of the incubation was lower in drying cores, which suggests that this higher CO2 efflux was due to organic carbon mineralization. However, the apparent rate of organic carbon reduction in the drying sediments (568.6 ± 247.2 mmol C m−2 day−1, mean ± SE) was higher than the carbon emission. Meanwhile, sediment cores collected from a reservoir area that had already been exposed for 2+ years displayed a net CO2 influx from the atmosphere to the sediment (−136.0 ± 27.5 mmol CO2 m−2 day−1, mean ± SE) during the incubation period. Sediment mineralogy suggests that this CO2 influx was caused by a relative increase in calcium carbonate chemical weathering. Thus, we found that while organic carbon decomposition in newly dry reservoir sediment causes measurable organic carbon loss and carbon gas emissions to the atmosphere, other processes can offset these emissions on short time frames and compromise the use of carbon emissions as a proxy for organic carbon mineralization in drying sediments.

[Figure 1. Methodological scheme for the collection, incubation, and analysis of sediment cores. Treatments with dark brown sediment represent cores collected from the "Wet" site, while treatments with light brown color represent cores collected from the "Dry" site. SEM: scanning electron microscopy; XRD: X-ray diffraction.]

[...] a dry environment conducive to core drying, and to avoid CO2 build-up inside the incubation chamber. Filtered water was periodically added to the "Incubation: Wet" cores throughout the incubation period to replace water lost through evaporation and maintain a constant water level.

Gas flux measurements
Core mass and fluxes of greenhouse gases were measured on the first day of the incubation and periodically thereafter. For the analysis of CO2 flux to the overlying air, "Incubation: Dry" and "Incubation: Drying" cores were temporarily covered with air-tight, custom-made caps. These caps created closed chambers with a surface area of 28.3 cm2 and a volume of 283 cm3. The chambers were connected to an environmental gas monitor (EGM-4, PP Systems, Massachusetts, U.S.A.) to directly measure CO2 flux. The CO2 concentration within the chamber was analyzed every 4.8 seconds with an accuracy of 1%. Each chamber analysis lasted at least 300 seconds or until a 10 µatm change in CO2 was recorded, whichever came first. CO2 flux was measured approximately every other day.
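The chamber calculation implied above can be sketched in a few lines: the flux follows from the slope of the chamber CO2 record, scaled by chamber volume and area, and the net exchange over the incubation is obtained later by trapezoidal integration of the flux time series (see Data analysis). The following Python sketch mirrors that workflow; the molar-volume conversion, the example numbers and all variable names are illustrative assumptions, not the authors' code (the paper's analysis was done in R):

```python
# Sketch: closed-chamber CO2 flux from a concentration time series,
# then net flux over the incubation by trapezoidal integration.
# Chamber geometry from the Methods: area 28.3 cm^2, volume 283 cm^3.
import numpy as np

AREA_M2 = 28.3e-4       # chamber surface area [m^2]
VOLUME_M3 = 283e-6      # chamber volume [m^3]
MOLAR_VOLUME = 0.0245   # m^3 per mol of ideal gas at ~25 degC, 1 atm (assumed)

def chamber_flux(t_s, co2_ppm):
    """CO2 flux [mmol m^-2 d^-1] from chamber ppm readings over time [s]."""
    slope_ppm_s = np.polyfit(t_s, co2_ppm, 1)[0]      # linear fit slope
    mol_per_s = slope_ppm_s * 1e-6 * VOLUME_M3 / MOLAR_VOLUME
    return mol_per_s / AREA_M2 * 86400 * 1e3          # -> mmol m^-2 d^-1

def net_flux(day, flux):
    """Net CO2 exchange [mmol m^-2] over the incubation (trapezoid rule),
    mirroring the pracma::trapz call in the paper's R analysis."""
    return np.trapz(flux, day)

# Example: one chamber deployment and a 45-day flux series (made-up numbers).
t = np.arange(0, 300, 4.8)            # EGM-4 logs every 4.8 s
ppm = 410 + 0.02 * t                  # hypothetical rising CO2 concentration
print(chamber_flux(t, ppm))
print(net_flux(np.array([0, 1, 5, 19, 45]),
               np.array([60.0, 45.0, 20.0, -5.0, -10.0])))
```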
"Incubation: Dry" and "Incubation: Drying" cores were capped as described above for 30 minutes for CH 4 flux analysis on days 0, 1, 5, 19, and 45 of the incubation.10 ml air samples were collected via syringe through a septa at the beginning and end of chamber deployment and transferred to pre-evacuated glass vials (Exetainers 339 W, Labco Lim.Lampeter, UK). Samples were analyzed for CH 4 within 3 weeks of sampling using a gas chromatograph (7820A with a 77697 headspace sampler, Agilent, CA, U.S.A.).The gas chromatograph was calibrated at the beginning and end of each analysis session using a standard curve spanning 1-25 ppm CH 4 in a N 2 gas mixture, with a precision of 0.6 ppm CH 4 . For "Incubation: Wet" cores, CO 2 and CH 4 fluxes were also measured on incubation days 0, 1, 5, 19, and 45.The overlaying water of all cores was covered with an airtight plastic seal for 20 minutes.Water was collected via syringe immediately before and after the 20 minutes and analyzed for DIC as a proxy for sediment CO 2 flux.DIC was analyzed via catalytic oxidation using a TOC-VCSH analyzer (Shimadzu, Japan).Additional water samples were allowed to equilibrate with headspace in enclosed syringes (10 ml water to 10 ml air).The CH 4 concentration of the equilibrated air was analyzed as described above for the other treatments. Drying over the course of the incubation was periodically measured by placing each "Incubation: Dry" and "Incubation: Drying" core on a balance to track change in core mass. Sediment analyses All cores were drained, sectioned and analyzed for water and organic matter content, either upon arrival to the lab for "Initial" or following the incubation for "Incubation" cores.Cores were sliced into 12 sections (0-1, 1-2, 2-3, 3-4, 4-5, 5-7, 7-9, 9-11, 11-13, and 13-15 cm in depth).Water content was determined by drying an aliquot of each section in a 70°C oven until a constant mass was reached (Fig. S2).This dry sediment was then combusted at 450°C for 4 hours and re-weighed to determine the percentage of weight lost on ignition (LOI), a proxy for sedimentary organic carbon content.We assumed that organic carbon content was equivalent to LOI/2 (Dean and Gorham, 1998).All remaining sediment was frozen at -20°C.Sediment from near the surface (1-2 cm in depth) for all cores was later defrosted and analyzed for pH and alkalinity using a Metrohm 848 Titrino Plus Titrator.Sediment was suspended in a 2:1 deionized water:sediment solution and filtered, and the pH of the resultant filtrate was analyzed.Frozen incubation treatment sediment was freeze-dried for additional mineralogical analyses.Freeze-dried sediment was ground, and its mineralogical composition was determined using a Siemens D500 automatic X-Ray diffractometer (working conditions: Cu K-alpha, 40 kV and 30 mA).The identification of mineralogical species was carried out using EVA software attached to the diffractometer and their quantification was done using the standard procedure (Chung, 1974).The uncertainties associated to the quantification method are 5% wt.Albite, calcite, clinochlorite, dolomite, gypsum, kaolinite, microcline, muscovite, and quartz were identified and quantified.To qualitatively assess differences between treatments, freeze-dried sediment from 0-1 cm (i.e.shallowest) and 13-15 cm (i.e.deepest) was coated with gold and imaged using a scanning electron microscope JEOL J-6510 equipped with an EDS detector at the Scientific and Technological Centers of the University of Barcelona. 
The influence of biological activity on carbon dioxide flux was assessed using defrosted sediment from 1-2 cm depth from a randomly selected "Incubation: Dry" core. This sediment sample was split into 3 replicate sections of 5 ml each, each of which was placed in a separate 100 ml glass beaker and covered with an airtight seal. Baseline CO2 flux from each section was analyzed as described above for sediment cores. Samples were then sterilized by exposure to UV light under a laminar flow hood (AH-100, Telstar, Catalonia, Spain) for 45 minutes, followed by microwaving at 700 W for 90 seconds in 30-second increments, each separated by 1 minute of shaking. Sediment was then allowed to return to room temperature, re-sealed, and analyzed again for CO2 flux.

Data analysis
All statistical analyses were conducted in R (R Core Team, 2018). To test differences in CO2 flux, CH4 flux, core mass, and water content between treatments, one-way analyses of variance (ANOVA) were conducted on mixed-effects models with treatment considered a fixed effect and replicate core within treatment a random effect, using the lmer function of the package nlme (Pinheiro et al., 2018). Depth was considered an independent variable in the water-content models, and time was considered an independent variable in the CO2 flux, CH4 flux, and core mass models. To analyze differences in organic carbon content between treatments, we first identified sediment core layers as defined by a clustering analysis of the organic carbon content profiles from all cores. The clustering analysis was performed using the chclust function from the R package rioja, constraining the result by sample depth (Juggins, 2017). This analysis was performed to identify the depth of the surface layer affected by organic carbon changes during the incubation, since we expected organic carbon changes to be unlikely beyond a surface layer of a priori unknown depth. After identifying the depth of the surface layer most affected by organic carbon changes, we assessed differences in surface-layer organic carbon content between treatments using ANOVA. Post-hoc Tukey tests were conducted to identify differences between treatments using the lsmeans package (Lenth, 2016). Net CO2 flux during the incubation was determined by trapezoidal integration under the curve of the observed gas flux data points over time, using the trapz function in the package pracma (Borchers, 2018). Extreme outliers were removed from all data sets following examination of box plots, Cook's influential outlier tests, and Cleveland dotplots (Zuur et al., 2010). All plots were created using ggplot from the tidyverse package (Wickham and Team, 2017).

Incubation carbon gas fluxes
Gaseous carbon fluxes differed between the incubation treatments (F = 68.3, p < 0.001 for pairwise post-hoc tests) (Table 1 and Fig. 2). "Incubation: Wet" cores generally displayed positive CO2 fluxes (i.e., out of the sediment) that declined in magnitude over time (Fig. 2). Meanwhile, "Incubation: Dry" cores displayed positive fluxes for the first week but then consistently negative (i.e., into the sediment) fluxes for the remainder of the incubation (Fig. 2). Post-incubation analysis showed that this CO2 influx to the sediment persisted even after sediment sterilization. "Incubation: Wet-Drying" cores initially displayed positive CO2 fluxes, but by the end of the incubation two out of three of these cores also displayed negative CO2 fluxes (Fig. 2).
Change in CO2 flux over time significantly correlated with drying, as measured by decline in core mass (p = 0.004, r2 = 0.27). However, net CO2 flux over the incubation period did not correlate with core-specific sediment organic carbon or water content.

Consistent nonzero mean CH4 flux values were only observed for the "Wet" and "Drying" treatments. Positive CH4 fluxes were observed on Days 0, 1, and 6 for the "Wet" treatment and on Day 1 for the "Drying" treatment (Fig. S3).

The low methane fluxes observed in this study likewise stress the relevance of local sediment properties in controlling carbon gas fluxes from sediment. Here, "Incubation: Wet-Drying" sediment cores only displayed nonzero CH4 flux on Day 0 (Fig. S3). This positive CH4 flux (28.1 ± 2.0 µmol m−2 d−1) was much smaller than both the CO2 efflux observed here and the CH4 effluxes observed in other reservoir drying studies (Jin et al., 2016; Kosten et al., 2018). This suggests that site-specific sediment properties, such as an organic carbon content or a grain size and porosity promoting oxic conditions during drying, prevented significant methanogenesis from occurring.

Drying sediment carbon loss

Sediment organic carbon data suggest that the CO2 efflux in "Incubation: Wet-Drying" sediment cores was driven by organic matter decomposition. Statistical analyses showed that "Incubation: Wet-Drying" cores displayed lower organic carbon content than "Incubation: Wet" cores (Table 1, Fig. 3). If this small-scale experiment were representative of in-situ reservoir drying, the carbon loss via organic matter decomposition implied by this discrepancy has significant implications for the reservoir's carbon budget.

The difference in organic carbon content between the "Incubation: Wet" and "Incubation: Wet-Drying" treatments corresponds to an average organic carbon loss rate of 0.57 ± 0.14 mol m−2 d−1 over the course of the incubation and a net organic carbon loss of 3.07 ± 0.76 Mg ha−1. This loss over just 45 days is comparable in magnitude to changes in soil organic carbon stock during the transition from tropical secondary forest to perennial crops (Don et al., 2011). Moreover, it is equivalent to reversing approximately 1 year of carbon burial at the average burial rate of 250 g C m−2 yr−1 reported by Mendonça et al. (2017) for inland waters. This significant carbon loss may undermine the notion of organic carbon burial in reservoirs as a long-term carbon sink, particularly in regions such as the western Mediterranean in which reservoirs are fairly dynamic ecosystems. Instead, decomposition during prolonged drying events may mineralize a sizeable fraction of the organic carbon buried in sediment during the reservoir's lifetime. Thus, sediment carbon burial should not necessarily be considered a carbon sink in a reservoir's long-term carbon budget, especially in regions where drying events are expected to become more frequent in the future.

However, the large variability between replicate cores suggests significant spatial heterogeneity in sediment composition and highlights the need for spatial replication within and across reservoirs in future studies. The previously discussed evidence for organic carbon loss during drying implies that the initial carbon content would be lower in "Initial: Dry" cores than in "Initial: Wet" cores, but this was not the case (Table 1, Fig. 3).
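As a quick arithmetic check of the loss figures reported above (a sketch in R, using only the reported means):

# 0.57 mol C m-2 d-1 sustained over the 45-day incubation:
g_C_m2 <- 0.57 * 45 * 12.011   # ~308 g C m-2
g_C_m2 * 1e4 / 1e6             # ~3.08 Mg ha-1, matching the reported net loss
# Against the 250 g C m-2 yr-1 burial rate of Mendonça et al. (2017):
g_C_m2 / 250                   # ~1.2 years of average carbon burial reversed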
3)."Initial: Dry" sediment did not significantly differ from "Initial: Wet" sediment in organic carbon content.Considering the large variability among replicate cores collected from the same location (which would therefore be more accurately described as pseudo-replicates), greater spatial replication within the reservoir would likely be necessary to resolve differences in sediment carbon content between wet and dry sites.Therefore, although our dataset supports the presence of an enhanced mineralization process during drying, the potential implications at the whole water body scale cannot be fully resolved. Decoupling of carbon gas efflux from sediment carbon loss While the observed organic carbon loss from "Incubation: Wet-Drying" cores was consistent with the observed CO 2 fluxes in direction, approximately three times more organic carbon was lost than CO 2 was emitted.The difference in sediment organic carbon content in the surface layer (0-5 cm) of "Initial: Wet" and "Incubation: Wet-Drying" cores would correspond to a net efflux of 72.3 ± 17.8 mmol CO 2 (mean ± SE).However, observed net efflux was only 26.3 ± 6.1 mmol CO 2 (mean ± SE) per core.Thus, even considering the aforementioned variability in sediment organic carbon data, it appears that a significant portion of the organic carbon consumed via decomposition was not emitted as CO 2 but rather consumed by one or more other processes.This is also supported by the observed influx of CO 2 from the atmosphere into the sediment for "Incubation: Dry" treatment cores.Consistent CO 2 influxes in "Incubation: Dry" cores similar in magnitude to the CO 2 effluxes in "Incubation: Wet-Drying" cores were observed after one week of incubation across all replicates (Table 1, Fig. 2).Furthermore, by the end of the incubation two out of three replicate "Incubation: Wet-Drying" cores also displayed CO 2 influxes.These findings show the relevance of the CO 2 consumption pathway(s) active in these sediments.They also imply that the CO 2 effluxes observed in "Incubation: Wet-Drying" cores must be considered the net result of CO 2 production and consumption processes. Sediment carbon consumption via calcium carbonate chemical weathering Sediment mineralogy results suggest that the observed sediment carbon consumption was likely caused by an increase in during sediment drying, which is consistent with the elevated pore water Ca 2+ ions expected to accompany an increase in calcium carbonate dissolution. SEM imagery provides further evidence of sediment calcium carbonate chemical weathering on the time-scale of the incubation.Calcium carbonate crystals in the "Incubation: Wet" treatment were euhedric, but crystals in the "Incubation: Dry" and "Incubation: Drying" treatments were visibly corroded (Figure 5).Similarly, most carbonate in the "Incubation: Wet" treatment was present in the form of discrete crystals and visually biogenic in origin, while most carbonate in the "Dry" and "Drying" treatments appeared as a thin calcium carbonate coating covering all sediment surfaces.This suggests the existence of calcium carbonate precipitation and chemical weathering cycles, probably occurring as a response to sediment drying and flooding cycles.This supports the hypothesis that an increase in chemical weathering relative to precipitation may occur later in the drying process due to the common-ion effect. 
Few soil or sediment science investigations link sediment CO2 influx to an increase in calcium carbonate chemical weathering relative to precipitation. Those that do generally link chemical weathering to factors that are not applicable in the context of this investigation, i.e., climate (Lapenis et al., 2008), high sediment alkalinity (Lapenis et al., 2008; Emmerich, 2003; Xie et al., 2009; Wang et al., 2016; Ma et al., 2014), and diurnal cycling (Roland et al., 2013; Hamerlynck et al., 2013; Chen and Wang, 2014; Fa et al., 2016). This raises the question of what conditions caused the calcium carbonate chemical weathering hypothesized to occur here. One possible explanation is a combination of 1) high-carbonate sediments (29.6 ± 1.4% CaCO3, mean ± SE, for wet cores) and 2) high air flow, and thus CO2 availability, due to sediment dryness. The role of dryness in promoting air flow would explain the lack of both chemical weathering in "Incubation: Wet" sediments and CO2 influx to dry sediments during the first week of the incubation. Core collection was performed within 48 hours of a rain event, so even exposed sediments were relatively humid at the beginning of the incubation. We posit that sediments may need to reach a certain dryness threshold to establish sufficient air flow, and thus CO2 availability, for chemical weathering to occur. Under this scenario, sediment humidity is crucial in determining chemical weathering; sediments must be dry enough to establish sufficient air flow but humid enough for water to be available for the chemical weathering reaction to proceed.

Regardless of the precise mechanism(s) causing chemical weathering, high sediment calcium carbonate content and intermittently dry conditions are the most likely driving factors in this context. Thus, this process may regularly occur in this western Mediterranean reservoir as well as in similar systems around the world. Drying reservoir and lake sediments are understudied, so the calcium carbonate chemical weathering and precipitation observed here may be prevalent in a wide variety of contexts. Given the limited spatial replication and laboratory nature of this investigation, further work is needed to determine the relevance of this process under natural conditions.

Implications for drying reservoir carbon dynamics

The decoupling of organic carbon loss from CO2 efflux and the proposed consumption of inorganic carbon via calcium carbonate chemical weathering in reservoir sediments may have important implications for our understanding of sediment carbon dynamics. First, it indicates that the common strategy of equating carbon gas flux with organic carbon decomposition may be flawed. Thus, if calcium carbonate chemical weathering in sediments is geographically widespread, organic carbon mineralization rates in dry sediments may be significantly underestimated by studies that only measure CO2 efflux as a proxy for carbon mineralization. Although we cannot draw conclusions regarding the prevalence of this decoupling in other reservoirs from our experiment, our findings constitute a warning that further research is necessary to understand the significance of this process to overall freshwater carbon cycling.
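For reference, the CO2-consuming process invoked here is the standard calcite dissolution (chemical weathering) equilibrium, which consumes one mole of CO2 and produces two moles of bicarbonate per mole of CaCO3 dissolved; the reverse reaction (precipitation) releases CO2:

\[ \mathrm{CaCO_3 + CO_2 + H_2O \rightleftharpoons Ca^{2+} + 2\,HCO_3^{-}} \]

This is why a shift toward dissolution appears as a CO2 influx and as an increase in pore water alkalinity and Ca2+.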
Further research on the fate of the alkalinity produced by the calcium carbonate chemical weathering process is also needed to determine its impact on the carbon budget of lakes and reservoirs that seasonally or permanently dry. While CaCO3 chemical weathering decreases CO2 efflux, it is unlikely to constitute a long-term carbon sink if the bicarbonate ions produced by dissolution eventually transform to CO2 and re-enter the atmosphere through equilibration (Wang et al., 2016). However, if the bicarbonate ions produced by CaCO3 dissolution are sequestered in either sediment or groundwater, it is also possible that the CO2 influx observed in this study constitutes the basis of a previously unrecognized long-term carbon sink. The rate of carbon dioxide uptake shown here is comparable to rates from a variety of Mediterranean and temperate forest soils (Baldocchi et al., 2018). Such a large sediment carbon sink would therefore carry considerable implications for our understanding of the freshwater carbon cycle, and this question also merits further research.

Conclusions

This investigation used a laboratory sediment core incubation to explore the effects of reservoir drying on sediment carbon dynamics. We directly linked organic carbon loss to carbon dioxide emissions in drying reservoir sediment for the first time, undermining the idea that organic carbon burial in active reservoir sediments represents a long-term carbon sink. However, we also found a decoupling between carbon loss and carbon gas fluxes and observed carbon dioxide influxes to most sediment cores analyzed. Mineralogical sediment composition suggests that these discrepancies were due to an increase in calcium carbonate chemical weathering. Together, these findings show that while reservoir sediment drying can cause organic carbon decomposition, and thus carbon gas efflux to the atmosphere, other sediment processes can potentially offset or even reverse these fluxes.

To determine mineralogical trends among treatments, we conducted a principal component analysis (PCA) on the correlation matrix of the arcsine √(x) transformed percent abundance data for each mineral, using the prcomp function from R core. Pearson correlation tests were conducted between CO2 flux and core organic carbon content, water content, and change in core mass, and between the first two principal axes of the mineralogy PCA and organic carbon and water content, using R core. Pearson correlation tests were also conducted between averages of the first two principal axes per replicate core and CO2 flux and change in core mass.

Figure 2. Sediment core CO2 + CH4 fluxes over the course of the 45-day-long incubation, as determined by core headspace CO2 and CH4 measurements for "Incubation: Dry" and "Incubation: Drying" treatments, and by overlying water DIC and dissolved CH4 measurements for "Incubation: Wet" cores. All replicates are shown in the graph. CH4 fluxes composed less than 1% of carbon gas fluxes for all data points (see Fig. S3 for CH4 fluxes alone). Colored lines are splines fitted to the data and are included only for visual reference.
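A minimal R sketch of the mineralogy PCA described above, assuming a numeric matrix minerals of per-sample percent abundances (0-100) for the nine identified minerals; the object name and data are hypothetical.

props  <- minerals / 100                 # percent abundances to proportions
transf <- asin(sqrt(props))              # arcsine square-root transform
pca    <- prcomp(transf, scale. = TRUE)  # scaling gives a correlation-matrix PCA
summary(pca)                             # variance explained per axis
pca$rotation[, 1:2]                      # mineral loadings on axes 1 and 2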
Mineralogical sediment transformations

X-ray diffraction results suggested that mineralogical transformations occurred during drying (Figs. 4, S4). A PCA run using the percent abundances of the nine identified minerals revealed divergence between treatments (Figure 4). The first two axes of the PCA explained 74.51% of the variance. The first axis of the PCA explained 45.6% of the variance, with positive loadings of calcite and kaolinite and negative loadings of clinochlore and quartz. The second axis accounted for 28.8% of the variance, with a positive loading of quartz and negative loadings of dolomite, muscovite, and kaolinite. "Incubation: Wet-Drying" core samples were grouped by high quartz and clinochlore content, "Incubation: Dry" cores were grouped by high muscovite and dolomite content, and "Incubation: Wet" cores were grouped by high calcite content. The first PCA axis scores correlated with organic carbon content (p < 0.001, r2 = 0.14) and water content (p < 0.001, r2 = 0.29), and the second PCA axis scores correlated with organic carbon content (p = 0.002, r2 = 0.13). Average first and second PCA axis scores did not correlate with either CO2 flux or change in core mass.

Figure 4. Principal component analysis (PCA) of mineralogy data for sediment samples from varying depths of "Incubation: Wet", "Incubation: Dry", and "Incubation: Wet-Drying" cores. Color identifies the treatment, while the size of the dots represents core sample depth.

Table 1. Summary of sediment core properties and incubation gas fluxes averaged across treatment types (all values are mean ± SE) for the treatments Initial: Control Wet, Initial: Control Dry, Incubation: Wet, Incubation: Dry, and Incubation: Wet-Drying.
2019-06-07T22:36:12.450Z
2019-05-14T00:00:00.000
{ "year": 2019, "sha1": "a9caaf6fed0ffd7b51363814b426b4f9606fbed9", "oa_license": "CCBY", "oa_url": "https://bg.copernicus.org/preprints/bg-2019-128/bg-2019-128.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "a9caaf6fed0ffd7b51363814b426b4f9606fbed9", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
247247982
pes2o/s2orc
v3-fos-license
Preparation and Characterization of Lidocaine-Loaded, Microemulsion-Based Topical Gels

Microemulsion-based gels (MBGs) were prepared for transdermal delivery of lidocaine and evaluated for their potential for local anesthesia. Lidocaine solubility was measured in various oils, and phase diagrams were constructed to map the concentration ranges of oil, surfactant, cosurfactant, and water for oil-in-water (o/w) microemulsion (ME) domains, employing the water titration method at different surfactant/cosurfactant weight ratios. Refractive index, electrical conductivity, droplet size, zeta potential, pH, viscosity, and stability of the fluid o/w MEs were evaluated. Carbomer® 940 was incorporated into the fluid drug-loaded MEs as a gelling agent. The microemulsion-based gels were characterized in terms of spreadability, pH, viscosity, and in-vitro drug release, and based on the results obtained, the best MBGs were selected and subsequently subjected to ex-vivo rat skin permeation, anesthetic effect, and irritation studies. Data indicated the formation of nano-sized ME droplets ranging from 20-52 nm with a polydispersity of less than 0.5. In-vitro release and ex-vivo permeation studies on MBGs showed significantly higher drug release and permeation in comparison to the marketed topical gel. The developed MBG formulations demonstrated greater potential for transdermal delivery of lidocaine and an advantage over the commercially available gel product, and therefore, they may be considered as potential vehicles for the topical delivery of lidocaine.

Background

Topical anesthetics, as valuable tools in the field of dermatology, are widely used to control cutaneous pain associated with medical procedures and to prevent or treat chronic conditions such as post-herpetic neuralgia, complex regional pain syndrome, and cancer-related pain. These compounds are expected to produce painless, cutaneous analgesia with a quick onset and sufficient duration of action (1-3). Based on the chemical structure of their intermediate chain, these weak bases are classified into aminoester (e.g., benzocaine, procaine, tetracaine) and aminoamide (e.g., bupivacaine, lidocaine, prilocaine) classes (1, 4). Local anesthetic (LA) drugs are commercially available in various pharmaceutical dosage forms such as gels, creams, ointments, solutions, and patches (5, 6), and many of them are available over-the-counter without the need for a prescription. The purpose of such transdermal formulations is to increase skin permeability, reduce the effective drug concentration required, provide painless, cutaneous analgesia and numbness with a quick onset and sufficient duration of action, and minimize side effects (6, 7).

Several strategies have been adopted to enhance the skin permeability of LAs and improve their onset and duration of action, as well as to prevent systemic absorption and reduce side effects. Among the most common physical techniques are iontophoresis, sonophoresis, magnetophoresis, electroporation, microporation, and microneedle technologies (9, 11, 12). However, the use of these methods is restricted because of their high cost and the need for special devices and qualified staff (13). Other delivery strategies include the incorporation of LAs into innovative colloidal carrier delivery systems such as liposomes, niosomes, ethosomes, nanospheres, nanoparticles, and microemulsions (5, 13).
Microemulsions (MEs), first introduced by Hoar and Schulman (14), are transparent, spontaneously formed, dispersed systems in which the interfacial layer is stabilized by a layer of surfactant molecules (usually in combination with a co-surfactant) (15). These transparent, low-viscosity, thermodynamically stable (showing no tendency toward flocculation or coalescence) colloidal dispersions with droplets less than 120 nm in diameter offer several advantages for efficient transdermal delivery of drugs. MEs can be formulated as water-in-oil (w/o), oil-in-water (o/w), and bicontinuous systems (16-18). Their ease of preparation, relatively high solubilizing capacity for a variety of hydrophilic and lipophilic molecules (owing to the existence of two microdomains in a single-phase solution), long-term thermodynamic stability, and good production feasibility have made them promising drug delivery systems (19-21). The greater amount of drug incorporated in MEs, compared to conventional topical formulations, could increase the flux of drug through the skin. Moreover, enhancement of drug solubility can increase the concentration gradient and thermodynamic activity of the drug, which could favor its partitioning into the skin. The possibility of employing ME ingredients with skin penetration-enhancing effects can also affect the barrier function of the stratum corneum (SC), promoting permeation of the drug (22-25).

Since MEs are of low viscosity, their poor skin adherence has restricted their topical application (26). To overcome this challenge and retain the applied dose on the skin for a sufficient time, ME-based gel (MBG) formulations have been developed, utilizing a suitable thickening agent to modify the rheological behavior. MBGs, also known as hydrogel-thickened MEs, are nanocarriers derived from o/w MEs composed of a dispersed oil phase within a continuous aqueous phase, which is thickened with a suitable hydrophilic gelling agent (27, 28). With the addition of gelling components, the application of MEs to the skin becomes easier compared to runny fluid MEs. Various gelling agents such as Carbopol®, xanthan gum, chitosan, poloxamer, hydroxypropyl methylcellulose, and carrageenan have been utilized for the preparation of MBGs (27, 28). Depending on the type of polymer used, different procedures for the preparation of MBGs have been employed. In a two-stage procedure, a mixture of oil, surfactant, and cosurfactant with the dissolved drug is added to a previously prepared hydrogel matrix. Alternatively, an o/w ME is prepared and then gelled by directly dispersing a suitable thickening agent (28). These gels have the advantages of both MEs and hydrogels, including ease of preparation, enhanced drug solubility and permeability, optical clarity, longer shelf-life, water solubility, and spreadability (29, 30).

In recent years, numerous studies have demonstrated that MBGs are potential transdermal delivery systems for a wide variety of drugs commonly used in different skin disorders or even systemic diseases (31, 32). Negi et al. showed that phospholipid MBGs containing lidocaine and prilocaine enhanced skin permeation and improved the analgesic effect significantly compared to the commercial cream (13), while remarkable analgesic activity has been observed with ropivacaine-loaded MBGs formulated with Transcutol® HP and Capryol® 90 (33). In 2017, Ustundag Okur et al.
investigated the permeation of Carbopol® 940-based, benzocaine-loaded MBGs and confirmed high permeability through the skin with fewer systemic side effects and no sign of inflammation or irritation (34).

Objectives

Since the dermal delivery of lidocaine is still a concern, the suitability of MBGs for its transdermal delivery was examined in this investigation. Thus, this study was planned to develop and characterize lidocaine-loaded MBGs formulated with pharmaceutically acceptable components. It was hypothesized that a rapid onset and longer duration of the anesthetic effect of lidocaine might be produced when the drug is incorporated in MBGs.

High-Performance Liquid Chromatography (HPLC) Method

A quantitative assay of lidocaine base was carried out by an HPLC method outlined in the United States Pharmacopoeia (USP 41-NF 36). Chromatographic studies were implemented on a Knauer HPLC system (Germany), equipped with a UV detector (Smartline 2500), pump (Smartline 1000), and software (Chromgate V3.1.7). The separation was carried out on a reversed-phase C18 column (5 µm, 250 × 4.6 mm), using a freshly prepared and degassed mobile phase consisting of acetonitrile and water/glacial acetic acid (930:50; pH 3.4) in the ratio of 1:4. The flow rate was fixed at 1.2 mL/min, and all measurements were performed at room temperature. Samples were injected via a Rheodyne® injector equipped with a 20 µL loop. The UV detector was set at 254 nm. The calibration curve was found to be linear in the concentration range of 10-100 µg/mL (r2 = 0.9985).

Determination of Lidocaine Oil Solubility

The solubility of lidocaine was evaluated in triacetin, IPM, castor oil, and olive oil by the shake-flask method (35). An excess amount of lidocaine was added to 1 mL of the oil. The mixture was then continuously stirred, using a magnetic stirrer, at room temperature (25°C) for 72 h in order to achieve equilibrium. To remove the undissolved drug, samples were centrifuged (Sigma 1-14, Osterode am Harz, Germany) at 5000 rpm for 10 min; the supernatant was separated, filtered through a 0.22 µm membrane filter, and then diluted with a suitable solvent (chloroform or ethanol 96% v/v). Finally, the amount of the drug dissolved in each oil was assayed with a UV spectrophotometer (UV-2601, Rayleigh, China) at a wavelength of 263 nm, using oil samples with known drug concentrations for calibration.

Construction of Phase Diagrams

Castor oil and triacetin (selected based on the oil solubility studies), four non-ionic surfactants (Tween 80, Labrasol®, Cremophor® EL, and Cremophor® RH40), and three co-surfactants (PEG 400, Transcutol® P, and PG) were chosen to construct the phase diagrams and determine the o/w microemulsion domains. Surfactant/co-surfactant weight ratios (Rsm) were kept constant at values of 1:1, 1:2, and 2:1. Clear oil-surfactant mixtures with various weight ratios of 1:9 to 9:1 were prepared by weighing appropriate amounts of each component into screw-capped vials and mixing thoroughly at room temperature. Samples were then titrated with small aliquots of triple-distilled water while stirring for a sufficient time to attain equilibrium. The course of each titration was inspected visually and under crossed polaroids to determine the clarity and the possible formation of a birefringent liquid crystalline phase. The triangular diagrams were mapped with the top apex representing a fixed Rsm (1:1, 1:2, or 2:1) and the right and left apices representing the oil and water, respectively.
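As a sketch of how concentrations can be back-calculated from the linear HPLC calibration described above (10-100 µg/mL), the following R snippet fits a calibration line and inverts it; the peak-area values are hypothetical placeholders, not data from this study.

conc <- c(10, 25, 50, 75, 100)          # standard concentrations, ug/mL
area <- c(152, 371, 748, 1115, 1490)    # assumed peak areas
fit  <- lm(area ~ conc)                 # linear calibration fit
summary(fit)$r.squared                  # linearity check (cf. r2 = 0.9985)

unknown_area <- 620                     # assumed sample peak area
(unknown_area - coef(fit)[1]) / coef(fit)[2]   # back-calculated conc., ug/mL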
All mixtures that produced optically transparent, non-birefringent solutions in the relatively water-rich parts of the phase diagrams were designated as o/w MEs.

Preparation of Lidocaine-Loaded MEs

Following the determination of the o/w ME regions on the phase diagrams, those oil/surfactant/cosurfactant systems that demonstrated a relatively extended o/w ME area on the phase diagrams, composed of a minimum of 5% (w/w) oil and not more than 25% (w/w) surfactant mixture, were selected for drug loading. Lidocaine-loaded MEs were prepared by the spontaneous emulsification method. A given amount of lidocaine base was dissolved gradually in the oil phase, to which the surfactant mixtures were then added, and the required amount of distilled water was finally added dropwise while stirring the mixture gently until a transparent solution was obtained. The formulations were stored at room temperature and were evaluated for clarity, drug precipitation, and phase separation within 72 h.

Characterization of Fluid MEs

3.6.1. Refractive Index (RI), pH, and Conductivity

The refractive index of drug-loaded MEs was measured with an Abbe refractometer (2WAJ, Bluewave Industry Co., Ltd., Shanghai, China). The pH and electrical conductivity of MEs were determined using a calibrated pH meter (744, Metrohm AG, Switzerland) and a conductivity meter (712, Metrohm AG, Switzerland), respectively.

Determination of Particle Size and Zeta Potential

Mean droplet size (Z-ave), polydispersity index (PDI), and zeta potential of ME formulations were measured at 25°C, using a Malvern Zetasizer (Nano-ZS, Malvern Instruments, Worcestershire, UK), equipped with Nano ZS® software for data acquisition and analysis. Each sample was analyzed in triplicate, and the results were reported as mean ± SEM.

Determination of Viscosity and Rheological Behavior

A Brookfield DV2T cone-and-plate viscometer (LV, Brookfield Engineering Laboratories, Middleboro, USA), equipped with a CP-42 spindle, was used to measure the viscosity and examine the rheological behavior of the ME formulations. To evaluate thixotropic behavior, measurements were carried out at rotation speeds ranging from 2 to 70 rpm for both up curves and down curves, at 25 ± 1°C. Results within the 10-100% range of torque were considered acceptable and recorded. The shear stress (Pa) was plotted vs. shear rate (1/s), and the viscosity was calculated from the slope of the linear portion of the plots.

Spreadability

To measure the spreadability of MBGs, a circle 1 cm in diameter was marked on a glass plate. Half a gram of the test gel was placed on the circle, and a second glass plate was placed on the gel. A 5 g weight was put on the upper glass plate, and after 5 min the weight was removed, and the diameter of the spread gel was measured and reported (36).

pH Measurement

One gram of MBG was mixed with 99 g of distilled water and stirred thoroughly until a uniform mixture was obtained. The pH was measured in triplicate, using a calibrated pH meter (744, Metrohm AG, Switzerland).

Determination of Viscosity and Rheological Behavior

Viscosity and rheological properties of the MBGs were determined at 25 ± 1°C, using a Brookfield DV-III Ultra programmable rheometer (Brookfield Engineering Laboratories, Middleboro, USA), fitted with spindle no. 51. Measurements were performed at rotation speeds ranging from 0.5 to 250 rpm for both upward and downward flow curves. Flow curves (rheograms) were plotted, and the viscosities were then calculated from the slope of the linear portion of the plots.
Stability Tests

Fluid MEs were stored in sealed glass vials at 25°C for 15 months and observed for any macroscopic changes, including turbidity, phase separation, drug precipitation, and color change (37). The stability of the selected MBGs was also evaluated for 6 months at ambient temperature, 2-8°C, and 40 ± 2°C (relative humidity: 75 ± 5%), and the gels were checked for their appearance and viscosity. In addition, the optimum gel formulations were centrifuged (5702, Eppendorf AG, Hamburg, Germany) at 5000 rpm for 30 min and subsequently subjected to seven heating/cooling cycles (24 h at 4°C followed by 24 h at 40°C) and three 24-hour freeze-thaw (FT) cycles (-5 and 25°C) (38-40).

In-Vitro Drug Release

3.9.1. Cellulose Acetate Membrane

The in-vitro permeation study of lidocaine-loaded MBGs was carried out using a vertical Franz diffusion cell with a 1.767 cm2 effective diffusion surface area. A synthetic cellulose acetate membrane (MW cut-off 12,000 Da), previously soaked in phosphate buffer pH 7.4 for 24 h at 2-8°C, was placed between the donor and receptor compartments of the diffusion cell. The receptor chamber (25 mL) was filled with phosphate buffer solution (0.1 M, pH 7.4) and thermostated at 37 ± 0.5°C under continuous stirring (400 rpm). A quantity of 200 mg of the gel was applied to the membrane, and the donor chamber was covered with Parafilm®. At predetermined time intervals (5, 7, 10, 15, 20, 30, 45, 60, 75, 90, 105, and 120 min), an aliquot of 2 mL was taken from the release medium, and the same volume of fresh buffer was added to the receptor chamber to maintain sink conditions. During the test, the diffusion cells were checked for the presence of bubbles on both sides of the membrane. The cumulative amount of drug released from the MBGs at each time point was measured. As a control, a commercially available 5 wt% lidocaine gel was used.

Ex-vivo Permeation Study

The ex-vivo permeability study protocol was approved by the local Animal Ethics Committee of Shahid Beheshti University of Medical Sciences. The cumulative percentage of lidocaine in withdrawn samples was calculated, and the results were plotted as a function of time (in minutes) and compared with those obtained from the commercial gel. The ex-vivo release profile was fitted to various mathematical models, i.e., zero-order, first-order, Higuchi, and Korsmeyer-Peppas, in order to elucidate the kinetic release model. All the experiments were performed in triplicate, and the results were reported as mean ± SEM. Data were statistically analyzed by one-way analysis of variance (ANOVA), followed by Tukey's post hoc test, using GraphPad Prism version 8.0.1 (GraphPad Software, Inc., USA). A 0.05 level of probability was considered the level of significant difference (*P < 0.05: significant, **P < 0.01: very significant, and ***P < 0.001: extremely significant).

Evaluation of the Local Anesthetic Effect

Male Wistar albino rats (200-250 g) and New Zealand white male albino rabbits (2.0-2.5 kg) were obtained from the Pasteur Institute (Tehran, Iran) and used for the local anesthetic studies and skin irritation tests, respectively. The animals were housed in suitable cages at a controlled temperature (20-24°C), on a 12:12 h day/night cycle, with free access to a pellet diet and water ad libitum.
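Because each 2 mL sample withdrawn from the receptor is replaced with fresh buffer, the receptor is progressively diluted, and the usual correction adds back the drug removed in earlier samples. A minimal R sketch of this bookkeeping, with hypothetical concentration values:

V_r  <- 25    # receptor volume, mL
V_s  <- 2     # sample volume withdrawn and replaced, mL
conc <- c(1.2, 2.0, 2.9, 3.5, 4.0)   # assumed measured conc. (ug/mL) per time point

# Q_n = C_n * V_r + V_s * sum(C_1 ... C_(n-1))
cum_ug <- conc * V_r + V_s * c(0, cumsum(conc)[-length(conc)])
cum_ug        # cumulative amount released (ug) at each sampling time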
All animal experiments were performed in accordance with the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals (8th edition), approved by the Institutional Animal Care and Use Committee and the local Animal Ethics Committee of Shahid Beheshti University of Medical Sciences (No. IR.SBMU.PHARMACY.REC.1399.002). The local anesthetic effect of the formulations was assessed by performing a manual von Frey test. All experiments were carried out between 9:00 and 16:00. Before the experiments, the adult male rats were placed in individual clear acrylic boxes with an elevated plastic wire mesh floor, allowing them to acclimate for 30 min in the testing environment. Animals were divided randomly into the following groups (n = 8): (1) placebo control groups (control A, B, and C); (2) groups treated with the selected lidocaine-loaded MBGs (formulations A, B, and C); and (3) a group treated with the commercial lidocaine gel. In this behavioral study, 0.5 g of each gel was topically applied to the rat hind paw. Then, a series of 10 von Frey filaments with logarithmically incremental stiffness (4, 6, 8, 10, 15, 26, 60, 100, 180, and 300 g), applied in ascending order, was used to determine the mechanical allodynia threshold of the animals 10-210 min after application of the gel, at 10-min intervals. Each nylon filament was applied five times through the mesh floor to the plantar surface of the rat paw until it bent (buckled). Brisk paw withdrawal, licking, or shaking of the stimulated paw was considered a positive response (42). The strongest filament inducing up to two responses out of five stimuli was recorded as the mechanical threshold at each time point. Results were reported as mean ± SEM. Statistical differences were evaluated using two-way ANOVA with Bonferroni's post-test to compare the mechanical thresholds at each time point, and one-way ANOVA followed by Tukey's post-test to evaluate the area under the time-course curve (AUC10-210 min) of the mechanical threshold produced by each formulation during the test. As stated earlier, P < 0.05 was considered statistically significant (*P < 0.05, **P < 0.01 and ***P < 0.001).

Skin Irritation Test

The acute dermal irritation potential of the final formulation was evaluated in accordance with the OECD guideline (43). The animals were acclimatized for one week before the beginning of the study and had access to a standard diet and water. The hair on the backs of the rabbits was trimmed with an electrical clipper 24 h prior to administration of the formulation. The animals were divided into three groups (n = 3) as follows: (1) no application (control); (2) blank MG4; and (3) drug-loaded MG4. Half a gram of the gel formulation was applied uniformly to the test area (approximately 6 cm2). At the end of the 4-h exposure duration, the residual gel was wiped off with water. All rabbits were observed for any visible change such as erythema or edema 1, 24, 48, and 72 h after gel application. If skin damage could not be classified as irritation or corrosion after 72 h, observation was to continue until day 14 in an attempt to determine the reversibility of the effects. Erythema and edema were graded according to the standard Draize criteria (0, no visible reaction; 1, very slight reaction; up to 4, severe reaction).

Drug Solubility in the Oil Phase

The ability of the oil phase to solubilize the drug is considered the most important criterion for o/w microemulsion formulations (44). Lidocaine solubility results for the different oils are given in Figure 2.
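A minimal R sketch of the AUC10-210 min computation used to summarize the paw withdrawal threshold time course (trapezoidal integration via the pracma package); the threshold values below are hypothetical placeholders, not data from this study.

library(pracma)
t_min <- seq(10, 210, by = 10)   # 21 observation times, min
pwt   <- c(60, 100, 180, 180, 100, 60, 26, 15, 15, 10, 10,
           8, 8, 6, 6, 6, 4, 4, 4, 4, 4)   # assumed thresholds, g
trapz(t_min, pwt)                # AUC in g*min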
As can be seen, the highest solubility was obtained in castor oil and triacetin (538.460 ± 7.457 mg/mL and 530.727 ± 6.029 mg/mL, respectively). This demonstrates the potential of these oils to solubilize lidocaine, and therefore, they were selected for the ME and MBG preparations.

Phase Diagrams and o/w ME Domains

Phase diagrams of the four-component systems were constructed to determine the appropriate concentration ranges of the components required to form MEs. An o/w ME region was observed on the triacetin-based phase diagrams but not on those of the systems containing castor oil. Figure 3 indicates that in nearly all phase diagrams (except for triacetin/Tween 80/PEG 400/water systems at Rsm of 1:2 and triacetin/Labrasol®/PEG 400/water systems, regardless of Rsm), a transparent, isotropic o/w ME region formed in the oil-poor part of the phase diagram. It should be noted that, because of the difficulty in accurately determining the boundaries between the ME domains and the surfactant-rich area at the top of the phase diagrams, samples with up to 50 wt% surfactant mixture were considered MEs, above which the area was considered surfactant-rich. The following generalizations could be made about the investigated systems: (1) Tween-based systems showed higher water solubilization capacity in comparison to the other surfactants; (2) irrespective of the type of surfactant and Rsm, the largest and smallest ME areas were seen in the presence of Transcutol® P and PEG 400, respectively; and (3) regardless of the type of surfactant and co-surfactant, Rsm did not have a significant influence on the extent of the o/w ME region. Various ME formulations were selected from the relatively extended o/w ME areas on the phase diagrams, considering the minimum possible concentration of surfactants. Those with no drug precipitation and phase separation at the time of preparation and after 72 h of storage were chosen for the further characterization tests (Table 2).

ME Characteristics

The prepared formulations (Table 2) were found to be macroscopically identical, i.e., homogeneous, single-phase, and transparent by visual inspection. The colloidal nature of these systems was also confirmed by observing the Tyndall effect. Table 3 lists the Z-average, PDI, zeta potential, pH, conductivity, and viscosity data of the drug-loaded ME formulations. The isotropic nature of the formulations was also confirmed, as a completely dark field was observed under the cross-polarized light microscope. The refractive indices of all formulations ranged between 1.3750 and 1.3890, close to that of water as the external phase (1.334). Electrical conductivity measurement is a useful tool to differentiate w/o droplets from o/w-type droplets and bicontinuous structures. Generally, low conductivity indicates the formation of w/o droplet MEs (because water makes up the internal phase), while systems showing high conductivity are defined as bicontinuous or o/w systems. Droplet sizes were evaluated by the dynamic light scattering technique. PDI was also determined to provide information about the deviation from the mean size. Table 3 presents the results of the size and PDI analysis. As can be seen, in all systems the average size of the ME droplets was less than 60 nm (ranging from 20.626-59.506 nm), which lies within the proposed range for ME systems (< 120 nm). All formulations exhibited unimodal droplet size distribution patterns (diagrams not shown).
In most cases, PDI values, as a measure of droplet size uniformity, were found to be less than 0.4, suggesting that the droplets in nearly all MEs were relatively uniform in size. To analyze the charge of the droplets, the zeta potential was determined. Zeta potential values indicated that the interface had a low surface charge (-0.711 to +0.832 mV). The charge in the interfacial area may, in general, originate from many factors, such as the composition of the oil, the presence of electrolytes in the water phase, and the nature of the surfactants. The very low (nearly zero) zeta potential values obtained in this study could be ascribed to the presence of non-ionic surfactants. Zeta potential is not usually considered an important measure for the stability prediction of MEs prepared with non-ionic surfactants (46). The pH values of all MEs were found to vary between 7.76 and 8.28. A slight increase in pH could be attributed to the presence of lidocaine in the formulations. The MEs possessed very low viscosity (7.85 to 27.1 mPa.s), independent of shear rate. For all formulations, a linear section was observed on the flow curves constructed with shear stress vs. shear rate.

MBGs

Three different gelling agents were used to increase the viscosity of the MEs. In the presence of Carbomer® 934, all gels were found to be opaque. HPMC was also unable to yield clear, homogeneous MBGs with the desired viscosity. Turbidity and lack of homogeneity were resolved by substituting Carbomer® 940 for HPMC and Carbomer® 934 (47). As described by Chen et al., the reason might be associated with the dissociation of the Carbomer® 934 and HPMC matrices from the hydrated state by the surfactant and co-surfactant in the microemulsion (48). The gels were also evaluated in terms of stickiness, ease of spreading, and coarseness by rubbing a sufficient amount of gel between the index finger and thumb. In general, the results showed that all Carbomer® 940-based gels were homogeneous, transparent, and smooth, without any particulate matter, grittiness, or lumps, and therefore MBGs with 1 wt% of Carbomer® 940 were finally prepared for further investigation.

MBGs Properties

The data obtained from the characterization of the MBGs in terms of spreadability, pH, and viscosity are given in Table 4. The spreadability of gel formulations, that is, the ability of the gels to spread uniformly on the skin surface, is a property upon which the therapeutic efficiency of a gel depends, and it helps in uniform gel application. The values in Table 4 refer to the extent to which the formulations readily spread on the glass plates on application of a small amount of shear. Results indicated that the highest spreading diameter (4.2 cm) was obtained for formulation MG8, which possessed the lowest viscosity, whereas the lowest spreading diameter (3.1 cm) was found for formulation MG11, which had the highest viscosity. The appropriate spreadability of the MBGs may be related to their loose gel matrix due to the presence of oil globules (49). As shown in Table 4, the pH values of the MBGs were within the physiological range, varying from 6.87 to 7.42. This pH range suggests that the gels would cause little irritation to the skin. The decrease in pH of the MBGs in comparison with the MEs may be attributed to the acidic properties of Carbomer® 940 (17). The use of fluid MEs on the skin is very difficult because of their fluidity.
For a dermal pharmaceutical or cosmetic product, an appropriate viscosity with sufficient retention time on the skin is required. Hence, MBGs were developed using Carbomer® 940 in an attempt to modify the rheological behavior. Viscosity values for the lidocaine-loaded gels are also shown in Table 4. As expected, following the incorporation of the gelling agent into the MEs, the viscosity of the systems increased significantly (from 224.83 to 871.62 mPa.s), and pseudoplastic behavior was observed. The latter could facilitate and improve the spreading characteristics of the formulation. The flow indices (n) were found to be less than 1 (0.2788-0.4479), indicating that all MBGs were shear-thinning in nature according to the power law equation (13). Rheograms also revealed the absence of thixotropy in the gels investigated (see Figure 5).

Stability Studies of Fluid MEs and MBGs

The stability of the MEs was evaluated after 15 months of storage at room temperature. The MBGs were also kept at different storage conditions (5 ± 3°C, 25 ± 2°C and 40 ± 2°C) for 9 months, and their transparency and consistency were monitored. As shown in Figure 6, all ME formulations (except ME13) were clear, without any turbidity or sedimentation. The gels also remained clear with homogeneous structures and displayed no macroscopic physical changes following storage at ambient temperature and in a refrigerator (see Figure 7). However, a loss of viscosity was observed after 60 days of storage at 40°C. The stability of the MBGs was also evaluated under stressed conditions by visual inspection. When subjected to centrifugation at 5000 rpm for 30 min, it was found that this stress induced no damage; the formulations remained homogeneous and exhibited no sign of phase separation or breakdown. The effect of heating-cooling cycles on the stability of the MBGs was also verified. In each heating-cooling cycle, the sample was first heated to 40°C for 24 h and subsequently cooled to 4°C for 24 h. Seven heating-cooling cycles were run to record the gel responses to temperature fluctuations. Finally, the influence of repeated freeze-thaw treatment (-5 and 25°C for 24 h) on the stability of the gels was investigated. The results obtained from these stability tests suggest that the MBGs have good physical stability, since no phase separation was observed and the textural properties were not influenced by temperature variation.

Permeation Study

Drug release and permeation studies of the MBGs, through cellulose acetate membrane and rat skin respectively, were carried out using vertical Franz diffusion cells. Although human skin is considered the gold standard in permeation studies of topical formulations, its limited availability, variability, and ethical considerations have led to the use of animal skin models (50). Some structural similarities between rat skin and human skin (e.g., thickness, lipid content, and water uptake) support the use of rat skin as a surrogate for permeation studies (51, 52). For preliminary drug permeation screening, a lipophilic artificial membrane was employed, and subsequently, an ex-vivo permeation study on rat skin was conducted for the formulations with the highest flux values through the artificial membrane.

In-Vitro Drug Release Through an Artificial Membrane

This part of the investigation was aimed at selecting the best MBG formulations for the ex-vivo skin permeation and animal tests. The cumulative percentage of released lidocaine was plotted as a function of time (Figure 8).
In general, it was observed that for all MBGs the drug release percentage at all sampling points was significantly greater than that of the commercial gel, suggesting that MBGs could improve the release pattern of the drug in comparison with the marketed product. The in-vitro drug release profiles also revealed that the formulations MG3 (triacetin/Tween 80/Transcutol® P at Rsm of 2:1), MG5 (triacetin/Cremophor® EL/PEG 400 at Rsm of 1:1), and MG4 (triacetin/Tween 80/PG at Rsm of 2:1) released the maximum amount of lidocaine (61.65 ± 1.62%, 61.24 ± 0.70% and 61.04 ± 0.76%, respectively) after 2 hours (P < 0.01) (Figure 8), while the system MG11 (triacetin/Cremophor® EL/PG at Rsm of 2:1) displayed the lowest amount of released drug (50.42 ± 0.76%), with no statistically significant difference compared to the commercial gel (P > 0.05). Therefore, MG3, MG4, and MG5 were considered the optimum gel systems and were chosen for the ex-vivo drug permeation investigations.

Ex-vivo Drug Permeation Through the Skin

The ex-vivo drug permeation study through the skin was carried out in an attempt to formulate a vehicle with suitable skin uptake and penetration. The results are depicted in Figure 9. As can be seen, drug permeation from formulations MG3, MG4, and MG5 started immediately, without any lag phase, followed by a continuous increase over time. No significant difference in permeation was found between MG3 and MG4 until 15 min, suggesting that their onset of action could be almost the same. However, higher drug release was observed from MG4, which supports a longer duration of action. A rapid onset of action is very important for LAs, and therefore a high initial permeation is immediately required. The cumulative drug release per unit area of skin surface from all formulations after 10, 20, and 60 min demonstrated a significant enhancement of flux in comparison with the commercial gel (P < 0.001), such that for MG4, as the optimized system, 4.09-, 3.54-, and 1.91-fold increases in flux were observed, respectively. It is crucial to determine the minimum amount of drug permeation that induces local anesthesia (i.e., the anesthetic threshold). The lidocaine anesthetic threshold was calculated to be 500 µg/cm2, based on data obtained from in-vitro drug release and in-vivo anesthetic examination (tail-flick test) (8). Considering this value, formulation MG4 could be expected to cause local anesthesia faster than the other MBGs, within 7 minutes of applying the gel. Formulations MG3 and MG5 also induced their effects after 10-15 min; however, the amount of drug needed to initiate the anesthetic effect of the commercial gel was released in 30 to 45 min. In general, it is concluded that faster local anesthesia was achieved by the use of MBGs compared to the commercially available gel. The increase in the permeation rate within the first two hours could be explained as follows. Due to the presence of both hydrophilic and lipophilic components and the resulting combined effects, MEs possess a favorable solubilizing behavior. This increases the thermodynamic activity of the drug, which is a driving force for drug release and its penetration (49). Besides, it has previously been reported that topically applied MEs are expected to penetrate the skin and exist intact in the stratum corneum (SC). Kweon et al.
have suggested that MEs, once they enter the SC, could alter both the polar and lipid pathways, and the subsequent interaction of the lipid portion of the MEs with the SC makes the dissolved drug partition into the existing lipids. On the other hand, the bilayer structure of the SC could be destabilized by the intercalation of ME droplets between its lipid chains (53). The hydration effect of the hydrophilic domain of MEs on the drug uptake of the SC should also be considered. It is thought that the aqueous phase of MEs would increase the interlamellar volume and disrupt the lipid bilayers through swelling of the intercellular proteins, allowing easier penetration of the drug through the lipid pathway of the SC (53). In conclusion, the greater penetration-enhancing activity of MEs may be attributed to the combined effects of both the lipophilic and hydrophilic domains of microemulsions. As can be seen in Figure 8, the release of lidocaine molecules from the investigated MBGs (MG3-MG5) was sustained for 10 h. This phenomenon may be explained by considering the release of the loaded lidocaine from the internal phase, which might act as a drug reservoir, to the external phase, and then from the continuous phase to the skin through passive diffusion. Lidocaine can also be partially solubilized in the external phase and interfacial film of the ME, which can supply fast release at the initial stage of the study, leading to a fast onset of action without any lag time. It has been suggested that gel formation in an ME limits the diffusion of the drug dissolved in the droplets and therefore slows down its release. Thus, one can conclude that MBGs are potentially able to sustain the release of drugs as compared with their fluid systems. Therefore, the high permeation rate of MG4 could be related to its ability to create a highly saturated vehicle, which can result in high thermodynamic activity (54). The particle size of ME droplets plays an important role in percutaneous drug absorption. It has been reported in the literature that by decreasing the droplet size, the number of particles that can interact with the skin surface is probably increased (49, 53, 55). In this investigation, the particle size of all ME formulations was in the range of 20-52 nm. This suggests that a large surface area is available for the transfer of lidocaine to the skin. The higher lidocaine flux from the formulated MBGs compared to the commercial gel originates from the penetration-enhancing effect of the applied components. Cao et al. prepared a celecoxib-loaded MBG using Tween 80 and Transcutol® P and evaluated the ex-vivo permeation of the drug into mouse skin. The results revealed that the formulation of interest could have a 4-fold greater permeability than the conventional gel (56). These findings had also previously been obtained by Shakeel et al., using penetration enhancers such as Labrafil®, triacetin, Tween 80, and Transcutol® P for aceclofenac (57). Similarly, other researchers have developed MBGs containing a mixture of Cremophor® EL and PEG 400 as the surfactant phase for co-delivery of evodiamine and rutaecarpine. By application of this nano-based gel formulation, it was shown possible to achieve an approximately 2.6-fold higher transdermal flux compared with the control hydrogel (58).
In general, numerous studies on ME gels prepared with Tween 80 and PG have shown that this surfactant mixture has an important impact on increasing skin permeability as well as the stability of the systems, and it has also been stated that PG can exhibit an additive effect on drug permeation in combination with other penetration enhancers (59).

Drug Release Kinetics

To determine the kinetics of permeation from these vehicles, the data obtained from the ex-vivo permeation experiments were kinetically analyzed according to the zero-order, first-order, Higuchi, and Korsmeyer-Peppas models, and the fits of the data to these models were evaluated by the highest correlation coefficient (R2). Based on the best goodness of fit (see Table 5), it was found that MG3, MG4, and the marketed product followed the Higuchi kinetic model (MG3: R2 = 0.9942, MG4: R2 = 0.9862, marketed gel: R2 = 0.9832). Higuchi model-based permeation, previously reported for indomethacin (chitosan-based), terbinafine (chitosan-based), itraconazole (Lutrol® F127-based), and ibuprofen (Carbopol® 940-based) MBGs and for topical ketoprofen and pentoxifylline MEs (24, 60-64), suggests that the release process could be mainly controlled by Fickian diffusion of dissolved lidocaine through the gel network of Carbomer® 940. However, analysis of the release plot for MG5 revealed that lidocaine followed the first-order model of controlled permeation, suggesting that the release rate is concentration-dependent (23, 65). If diffusion is the main drug release mechanism according to the Higuchi equation, then a plot of the amount of drug released versus the square root of time should result in a straight line. However, a deviation from the Fickian equation may be observed, and the mechanism of diffusion from polymeric dosage forms may follow non-Fickian behavior. The Korsmeyer-Peppas equation (Equation 1) is a more general relationship that describes a mixed mechanism of drug release (polymer swelling and/or diffusion) from a polymeric system:

Mt/M∞ = k·t^n (1)

where k is a constant incorporating the geometric and structural characteristics of the dosage form, n is the release exponent indicative of the release mechanism, and Mt/M∞ is the fractional release of the drug. This equation relates the drug release to the elapsed time (t). In this study, to elucidate the drug release mechanism, the first 60% of the drug release data was used to calculate the values of n, k, and the correlation coefficient (R2) (Table 5). Values of the release exponent for the MG3, MG4, and MG5 formulations and the marketed product were calculated to be between 0.484 and 0.854. Therefore, it was concluded that the mechanism of transport for all these formulations followed anomalous (non-Fickian) behavior, as described in Table 6, possibly including both diffusion and/or polymer erosion phenomena. These results are in accordance with those reported for zaltoprofen and griseofulvin MBGs (47, 66, 67) and a contraceptive vagino-adhesive propranolol HCl gel (68).

Anesthetic Effect

Paw withdrawal threshold (PWT) values of the lidocaine-treated rats were found to be significantly higher than those of their respective controls, confirming the induction of the anesthetic effect of lidocaine (P < 0.001). Repeated-measures, two-way ANOVA (followed by Bonferroni's post-test) revealed that the MG4 formulation showed a markedly greater anesthetic effect in comparison with the marketed gel. This finding supports the results of the ex-vivo permeation test (Figure 10).
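A minimal R sketch of fitting the Korsmeyer-Peppas model by linear regression on log-transformed data, restricted to the first 60% of release as described above; the time points and release fractions are hypothetical placeholders.

t_min <- c(5, 7, 10, 15, 20, 30, 45)                  # sampling times, min
frac  <- c(0.08, 0.11, 0.15, 0.21, 0.26, 0.35, 0.47)  # Mt/Minf, all <= 0.6

fit   <- lm(log(frac) ~ log(t_min))   # log(Mt/Minf) = log(k) + n*log(t)
n_exp <- unname(coef(fit)[2])         # release exponent n
k_est <- exp(unname(coef(fit)[1]))    # kinetic constant k
c(n = n_exp, k = k_est)

Depending on the geometry assumed, n near the Fickian limit (roughly 0.45-0.5) indicates diffusion-controlled release, while intermediate values up to about 0.89-1 are read as anomalous (non-Fickian) transport, consistent with the exponents of 0.484-0.854 reported above.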
Also, MG3 showed no statistically significant difference in PWT values during approximately the first two hours of the study, and for MG5 the induction of local anesthesia was similar to that of the marketed gel. In order to compare the average pain threshold over the complete period of observation following application of the formulations, the area under the time-course curve was calculated. As can be clearly seen in Figure 11, MG4 and MG3 induced a statistically significantly higher pain threshold than the commercial product (P < 0.001, one-way ANOVA followed by Tukey's post-test), although the difference between MG5 and the marketed gel was not significant.

Skin Irritation Test

The irritation potential of any transdermal formulation is a critical factor that can limit its use and patient acceptability. In the present study, special consideration was given to selecting the formulation components on the basis of solubility and minimal skin irritation tendency. The Draize primary skin irritation test was performed on albino rabbit skin to study the irritation potential of the optimal formulation. The results obtained from the skin irritation studies 1, 24, 48, and 72 h after gel application are listed in Table 7. The prepared gels were not found to be skin irritants.

Conclusion

In the present study, various formulations of lidocaine-loaded MBGs were prepared and characterized. It was concluded that MBGs can be considered a promising approach for the transdermal delivery of lidocaine owing to their appropriate viscosity and rheological behavior, spreadability, pH, high penetration ability, skin tolerability with no irritation, high stability, and improvement in PWT and anesthetic effect. However, further research and clinical investigations need to be conducted to elucidate the possible mechanism(s) of lidocaine delivery to the skin and to confirm the therapeutic efficacy. and R. Aboofazeli before submission. All authors approved the final manuscript for submission. This study was the subject of the Pharm.D. thesis of M. Daryab, proposed and approved by the School of Pharmacy, Shahid Beheshti University of Medical Sciences, Iran.
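The kinetic analysis described under Drug Release Kinetics can be made concrete with a short script. The sketch below is illustrative only: the release fractions are hypothetical placeholders (the actual MG3-MG5 profiles are reported only graphically in Figure 8), and the model equations are the standard textbook forms rather than the authors' own analysis code.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative-release data (time in h, fraction released);
# the real MG3-MG5 profiles appear only graphically in Figure 8.
t = np.array([0.5, 1, 2, 3, 4, 6, 8, 10])
q = np.array([0.12, 0.18, 0.27, 0.33, 0.38, 0.46, 0.53, 0.58])

models = {
    "zero-order":       lambda t, k: k * t,
    "first-order":      lambda t, k: 1 - np.exp(-k * t),
    "Higuchi":          lambda t, k: k * np.sqrt(t),
    "Korsmeyer-Peppas": lambda t, k, n: k * t**n,
}

for name, f in models.items():
    # The Korsmeyer-Peppas exponent is conventionally fitted to the
    # first 60% of release only, as done in the study.
    mask = q <= 0.6 if name == "Korsmeyer-Peppas" else np.full(q.shape, True)
    n_params = f.__code__.co_argcount - 1  # parameters besides t
    popt, _ = curve_fit(f, t[mask], q[mask], p0=[0.5] * n_params)
    pred = f(t[mask], *popt)
    r2 = 1 - np.sum((q[mask] - pred) ** 2) / np.sum((q[mask] - q[mask].mean()) ** 2)
    print(f"{name:17s} params={np.round(popt, 3)}  R^2={r2:.4f}")

Under the usual interpretation of the Korsmeyer-Peppas exponent for thin films, n near 0.5 indicates Fickian diffusion and 0.5 < n < 1.0 indicates anomalous (non-Fickian) transport, which is how the exponents of 0.484-0.854 in Table 5 are read in Table 6.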
On envelopes of circle families in the plane

In this paper we investigate the relationships between envelopes of circle families and some special curves in the plane, such as evolutes, pedals, evolutoids and pedaloids.

Introduction

Envelopes of plane curve families have been well investigated since the beginning of the history of differential geometry (see for instance [6]), and they often arise in many guises in the physical sciences. In the most typical case, families of straight lines in the plane have been studied. In [8] (see also [9], an accessible expository article focused on envelopes of line families in the plane), solving four basic problems (the existence problem, representation problem, uniqueness problem and equivalence problem of definitions), the second author constructed a general theory for envelopes created by straight line families. The study of circle families in the plane is also important, since their envelopes have several practical applications. For instance, there is an application to soil mechanics. In the analysis of the stability of soil masses, the shear strength τf of a soil at a point on a particular plane is expressed as a linear function of the effective normal stress σf at failure: τf = c + σf tan ϕ, where ϕ and c are the angle of shearing resistance and the cohesion intercept, respectively. A method using Mohr circles to obtain the shear strength parameters ϕ and c can be found in [2]. Following [2], a brief description of this method is as follows. The stress state of a soil can be represented by a Mohr circle, which is defined by the effective principal stresses σ1 and σ2. The center and the radius of the Mohr circle are ((σ1 + σ2)/2, 0) and (σ1 − σ2)/2, respectively. By experiments, we obtain some values of the effective principal stresses σ1 and σ2 at failure. The Mohr circles in terms of effective principal stress are drawn in Figure 1. The envelope created by the Mohr circles is called the Mohr failure envelope, which may be a curved line (Figure 2). Therefore, in order to investigate the shear strength parameters as precisely as possible, it is important to study the envelopes created by circle families systematically. Moreover, the study of envelopes of circle families can be applied to seismic surveying (see for example [1]). For these reasons, the authors have constructed a general theory of envelopes created by circle families in the plane in [10].

For a given point P ∈ R^2 and a positive number λ ∈ R_+ = {λ ∈ R | λ > 0}, the circle centered at P with radius λ is naturally defined by C_(P,λ) = {x ∈ R^2 | (x − P) · (x − P) = λ^2}, where the dot stands for the standard scalar product of two vectors. For an open interval I, let γ : I → R^2 (resp., λ : I → R_+) be a C^∞ mapping (resp., a C^∞ function). Then, the circle family C_(γ,λ) is naturally defined as C_(γ,λ) = {C_(γ(t),λ(t)) | t ∈ I}. It is reasonable to assume that the normal vector at any point of the curve γ is well-defined. Thus, we naturally reach the following definition.

Definition 1. A C^∞ mapping γ : I → R^2 is called a frontal if there exists a C^∞ mapping ν : I → S^1 such that (dγ/dt)(t) · ν(t) = 0 for each t ∈ I. For a frontal γ, the mapping ν : I → S^1 is called the Gauss mapping of γ.

We set μ(t) = J(ν(t)), where J is the anti-clockwise rotation by π/2. Then we have a moving frame {ν(t), μ(t)} along the frontal γ(t). Denote l(t) = (dν/dt)(t) · μ(t), where (dν/dt)(t) ∈ T_ν(t)S^1 ⊂ T_ν(t)R^2 and the two vector spaces R^2 and T_ν(t)R^2 are canonically identified.
Then, the Frenet formula of γ(t) with respect to the moving frame {ν(t), μ(t)} is given by (dν/dt)(t) = l(t)μ(t) and (dμ/dt)(t) = −l(t)ν(t). In addition, there exists a C^∞ function β(t) such that (dγ/dt)(t) = β(t)μ(t). Hence, t_0 is a singular point of γ if β(t_0) = 0. The pair (l(t), β(t)) is called the curvature of the frontal γ, which is an important invariant of frontals (cf. [3]). A point t_0 ∈ I is called an inflection point of γ if l(t_0) = 0. We say that a frontal γ is a front if (l(t), β(t)) ≠ (0, 0) for any t ∈ I.

In this paper, the curve γ : I → R^2 used in the definition of a circle family C_(γ,λ) is assumed to be a frontal, and the following is adopted as the definition of an envelope created by a circle family.

Definition 2. Let C_(γ,λ) be a circle family. A C^∞ mapping f : I → R^2 is called an envelope of C_(γ,λ) if the following two conditions are satisfied for any t ∈ I: (1) f(t) ∈ C_(γ(t),λ(t)), that is, (f(t) − γ(t)) · (f(t) − γ(t)) = λ(t)^2; (2) (df/dt)(t) · (f(t) − γ(t)) = 0, that is, f is tangent to each circle of the family at the point of contact.

The following is the key notion for envelopes of circle families.

Definition 3 ([10]). Let γ : I → R^2 be a frontal with Gauss mapping ν : I → S^1 and let λ : I → R_+ be a positive C^∞ function. Then, the circle family C_(γ,λ) is said to be creative if there exists a C^∞ mapping ν̃ : I → S^1 such that (dλ/dt)(t) + (dγ/dt)(t) · ν̃(t) = 0 for any t ∈ I.

Set cos θ(t) = −ν̃(t) · μ(t). Then the creative condition is equivalent to the condition that there exists a C^∞ function θ : I → R such that the following identity holds for any t ∈ I: (dλ/dt)(t) = cos θ(t)β(t).

Theorem 1 ([10]). Let γ : I → R^2 be a frontal with Gauss mapping ν : I → S^1 and let λ : I → R_+ be a positive C^∞ function. Then, the following three hold.

(1) The circle family C_(γ,λ) creates an envelope if and only if it is creative.

(2) Suppose that the circle family C_(γ,λ) creates an envelope f : I → R^2. Then, the created envelope f is represented as f(t) = γ(t) + λ(t)ν̃(t), where ν̃ : I → S^1 is the mapping defined in Definition 3.

(3) Suppose that the circle family C_(γ,λ) creates an envelope. Then, the number of envelopes created by C_(γ,λ) is characterized as follows.

(3-i) The circle family C_(γ,λ) creates a unique envelope if and only if the set consisting of t ∈ I satisfying β(t) ≠ 0 and (dλ/dt)(t) = ±β(t) is dense in I.

(3-ii) There are exactly two distinct envelopes created by C_(γ,λ) if and only if the set of t ∈ I satisfying β(t) ≠ 0 is dense in I and there exists at least one t_0 ∈ I such that the strict inequality |(dλ/dt)(t_0)| < |β(t_0)| holds.

(3-∞) There are uncountably many distinct envelopes created by C_(γ,λ) if and only if the set of t ∈ I satisfying β(t) ≠ 0 is not dense in I.

By the assertion (2) of Theorem 1, it is reasonable to call ν̃ the creator of the envelope f created by C_(γ,λ). On the other hand, it is well known that the evolute of a regular curve without inflection points in the Euclidean plane is not only the locus of its centers of curvature, but also the envelope of its normal lines. The involute of a curve is the locus of a point on a piece of taut string as the string is unwrapped from the curve. Hence, the involute varies as the fixed point varies, and the curve is the evolute of any of its involutes. Taking advantage of envelope theory, P. Giblin and J. Warder introduced the notion of evolutoids, which fill in the gap between the evolute and the original curve (see [5]). Each member of the family of evolutoids is defined as the envelope of a family of lines, each of which makes a constant angle with the tangent line of the original curve. Additionally, the pedal and the contrapedal of a frontal are defined as the loci of the feet of the perpendiculars from a given point to the tangents and normals of the original curve, respectively. Analogous to the evolutoids, S. Izumiya and N.
Takeuchi introduced the notion of pedaloids in [7], which fill in the gap between the pedal and the contrapedal. By using the moving frame {ν(t), μ(t)}, the associated curves of a frontal γ mentioned above are defined as follows. If there exists a C^∞ function α : I → R such that β(t) = l(t)α(t), the evolute of the frontal γ is defined as follows, where φ is a fixed angle. For a fixed point P ∈ R^2, the φ-pedaloid of γ relative to P is defined as follows, where Pe_{γ,P}(t) is the pedal of γ relative to P, and CPe_{γ,P}(t) is known as the contrapedal of γ relative to P.

Example 1. (1) Let γ : R_+ → R^2 be the mapping defined by γ(t) = (0, t). Then, it is clear that γ is a frontal. Let λ : R_+ → R_+ be the positive function defined by λ(t) = t. Then, it is easily seen that the origin (0, 0) of the plane R^2 is the unique envelope created by the circle family C_(γ,λ), and (0, 0) can be regarded as an involute of γ(t), or as a pedal of γ(t) relative to the origin. For more details on Example 1, see Section 3.

The main result of this paper is the following Theorem 2.

Theorem 2. Let γ : I → R^2 be a frontal with Gauss mapping ν : I → S^1 and let λ : I → R_+ be a positive C^∞ function. Then, we have the following:

(1) Suppose that the circle family C_(γ,λ) creates an envelope f : I → R^2. Then f(t) is a frontal with Gauss mapping ν̃ : I → S^1, and its curvature is expressed in terms of the functions l, β : I → R defined in the paragraph just after Definition 1 and the function θ : I → R defined in Definition 3.

(2) Suppose that the set consisting of t ∈ I satisfying β(t) ≠ 0 and (dλ/dt)(t) = ±β(t) is dense in I, and that f(t) is the unique envelope created by the circle family C_(γ,λ) with Gauss mapping ν̃ : I → S^1. Then the following four hold.

(2-i) γ(t) is the evolute of f(t).

(2-ii) C_(γ, cos φ·λ) creates two envelopes f_1(t) and f_2(t) such that

(2-iii) Let P ∉ {Ev(f)(t) | t ∈ I} be a fixed point. Then the circle family C_(γ1,λ1) creates envelopes f_1(t) and f_2(t), such that

(2-iv) Let φ be a fixed angle and let P ∉ {Ev_f[φ + π/2](t) | t ∈ I} be a fixed point. Then the circle family C_(γ2,λ2) creates envelopes f_1(t) and f_2(t) such that

(3) Suppose that the set of t ∈ I satisfying β(t) ≠ 0 and |(dλ/dt)(t)| ≤ |β(t)| is dense in I, and that the circle family C_(γ,λ) creates envelopes f_1(t) and f_2(t). Suppose moreover that f_1(t) is a constant vector. Then we have

This paper is organized as follows. The proof of Theorem 2 is given in Section 2. In Section 3, in order to show how Theorem 2 can be applied effectively, several examples, including (1) and (2) of Example 1 above, are given. Finally, in Section 4, some applications of Theorem 2 are investigated.

By the assumption, it follows that λ(t)l_f(t) = β_f(t) for any t ∈ I. According to the definition of evolutoids of frontals, we obtain For the case of ν̃(t) = μ(t) and (dλ/dt)(t) = −β(t), the proof proceeds in the same way. ✷

Proof of the assertion (2-iii) of Theorem 2. The proof of the assertion (2-iv) given in Subsection 2.5 proves the assertion (2-iii) as well. ✷

2.5. Proof of the assertion (2-iv) of Theorem 2. By the assumption, the creative condition always holds for the circle family C_(γ1,λ1). Then C_(γ1,λ1) is creative and P is an envelope of C_(γ1,λ1). Moreover, by the proof of the assertion (2-ii), we have Since P ∉ {Ev_f[φ + π/2](t) | t ∈ I}, it follows that λ_1(t) > 0 for any t ∈ I. Without loss of generality, we may choose the origin as P. Then the envelope f(t) of C_(γ1,λ1) satisfies From this equation, we have In addition, since f(t) is the envelope, the following equality holds.

Proof of the assertion (3) of Theorem 2.
Considering the continuity of the functions (dλ/dt)(t) and β(t), it is easily seen that C_(γ,λ) is creative. By Definition 2, the assertion "an envelope of a circle family is a point" is equivalent to the assertion "all circles of the family pass through that point". Without loss of generality, we may choose the origin as the constant vector f_1. Then the envelope f(t) of C_(γ,λ) satisfies This implies On the other hand, since f(t) is the envelope, it follows that Thus, in the same way as in the proof of the assertion (2-iv), we have f(t) = 0 or f(t) = 2(γ(t) · ν(t))ν(t). Therefore, by the assumption f_1(t) = 0, we have

More examples are provided to demonstrate Theorem 2 as follows. Since the set consisting of t ∈ I satisfying β(t) ≠ 0 and (dλ/dt)(t) = ±β(t) is dense in R, by the assertion (2-i) of Theorem 2, γ(t) is the evolute of f(t). On the other hand, by the assertion (1) of Theorem 2, f : R → R^2 is a frontal with Gauss mapping ν̃ and curvature (2/(1 + 4t^2), √(1 + 4t^2)). Thus, the evolute of f is parametrized as follows. It also follows that γ(t) is the evolute of f(t). The circle family C_(γ,λ) and its envelope f(t) are depicted in Figure 3. We calculate that By a similar analysis to that given in Example 4, it is relatively easy to show that the circle family C_(γ,λ) creates the unique envelope f(t) = (t, t^2 − 1/2). We consider the circle family C_(γ1,λ1), where By the assertion (3-ii) of Theorem 1, the circle family C_(γ1,λ1) creates two envelopes f_1(t) and f_2(t). We calculate that Then, the envelopes of C_(γ1,λ1) are parametrized as follows. Since f_1(t) = (0, 0) and the set consisting of t ∈ R_+ satisfying β(t) ≠ 0 and (dλ/dt)(t) = ±β(t) is dense in R_+, by the assertion (2-iii) of Theorem 2, we have f_2(t) = CPe_{f,0}(t). On the other hand, by the definition of contrapedals of frontals, one can check that f_2(t) is the contrapedal curve of f(t) relative to the origin (see Figure 5). By the assertion (3-ii) of Theorem 1, the circle family C_(γ2,λ2) creates two envelopes f_3(t) and f_4(t), and we calculate that Therefore, the envelopes of C_(γ2,λ2) are parametrized as follows. Since f_3(t) = (0, 0) and the set consisting of t ∈ R_+ satisfying β(t) ≠ 0 and (dλ/dt)(t) = ±β(t) is dense in R_+, by the assertion (2-iv) of Theorem 2, it follows that f_4(t) = Pe_{f,0}[π/4](t). Moreover, it is not difficult to show, from the definition of pedaloids of frontals, that f_4(t) is the π/4-pedaloid of f(t) relative to the origin (see Figure 6). By calculation, we obtain the function cos θ(t) satisfying (dλ/dt)(t) = cos θ(t)β(t).

Proposition 1. Suppose that the circle family C_(γ,λ) creates a unique envelope f(t). Then f : I → R^2 is a frontal with Gauss mapping ν̃ : I → S^1. Moreover, t_0 is a singular point of f(t) if and only if t_0 is an inflection point of γ(t).

Proof. By the assumption, the equality ν̃(t) = ±μ(t) holds for any t ∈ I. It follows from ν̃(t) = −cos θ(t)μ(t) ± sin θ(t)ν(t) that θ(t) = kπ is constant, where k is an integer. By the assertion (1) of Theorem 2, we obtain the curvature of f(t). Moreover, t_0 is a singular point of f(t) if and only if l(t_0)λ(t_0) = 0. Since λ(t) ≠ 0 for any t ∈ I, l(t_0)λ(t_0) = 0 is equivalent to l(t_0) = 0, which means that t_0 is an inflection point of γ(t).

In the case of the circle family C_(γ,λ) creating a unique envelope f(t), by the assertion (2-i) of Theorem 2, the centre γ(t) is the evolute of f(t).
Then the given circle C_(γ(t),λ(t)) is the osculating circle of f at t if t is a regular point of f. This fact can be generalized as follows.

Proposition 2. Suppose that the circle family C_(γ,λ) creates an envelope f : I → R^2 and |cos θ(t_0)| = 1. If t_0 is not an inflection point of f, then the given circle C_(γ(t_0),λ(t_0)) is the osculating circle of f at t_0.

According to the proof of the assertion (1) of Theorem 2, |cos θ(t_0)| = 1 is equivalent to (df/dt)(t_0) = λ(t_0)(dν̃/dt)(t_0). It follows that λ(t_0)l_f(t_0) = β_f(t_0) if |cos θ(t_0)| = 1. In this case, t_0 is a regular point of f(t) if and only if it is not an inflection point of f(t). According to the definition of the evolute of a frontal, the point γ(t_0) = f(t_0) − λ(t_0)ν̃(t_0) lies on Ev(f)(t), and λ(t_0) is the radius of the osculating circle of f at t_0.

Proposition 3. Suppose that the circle family C_(γ,λ) creates two envelopes f_i : I → R^2, where i = 1, 2. Then, the following three hold. (1) Suppose moreover that f_1(t) is a constant vector. Then t_0 is a singular point of f_2(t) if and only if t_0 is an inflection point of γ(t). By simplification, we have
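As an illustrative cross-check of Example 1 (not part of the original paper), the unique envelope can also be recovered with the classical discriminant method, which characterizes an envelope of a family F(x, y, t) = 0 by the pair of conditions F = 0 and ∂F/∂t = 0. Here F is the defining function of the circles with centers γ(t) = (0, t) and radii λ(t) = t.

import sympy as sp

x, y, t = sp.symbols('x y t', real=True)

# Circle family of Example 1: centers gamma(t) = (0, t), radii lambda(t) = t.
F = x**2 + (y - t)**2 - t**2

# Classical discriminant method: an envelope point satisfies both
# F(x, y, t) = 0 and dF/dt(x, y, t) = 0 for some parameter value t.
envelope = sp.solve([F, sp.diff(F, t)], [x, y], dict=True)
print(envelope)  # [{x: 0, y: 0}] -- every circle of the family passes through the origin

This is consistent with Theorem 1: for this family β(t) ≠ 0 and (dλ/dt)(t) = ±β(t) on a dense subset of R_+, so the envelope is unique, and it degenerates to the single point (0, 0).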
Strategy-in-Practices: A process philosophical approach to understanding strategy emergence and organizational outcomes

Emergence of a firm's strategy is of central concern to both Strategy Process (SP) and Strategy-as-Practice (SAP) scholars. While SP scholars view strategy emergence as a long-term macro conditioning process, SAP advocates concentrate on the episodic micro 'doing' of strategy actors in formal strategy planning settings. Neither perspective explains satisfactorily how process and practice relate in strategy emergence to produce tangible organizational outcomes. The conundrum of reconciling the macro/micro distinction implied in process and practice stems from a shared Substantialist metaphysical commitment that attributes strategy emergence to substantive entities. In this article, we draw on Process metaphysics and the practice-turn in social philosophy and theory to propose a Strategy-in-Practices (SIP) perspective. SIP emphasizes how the multitude of coping actions taken at the 'coal-face' of an organization congeal inadvertently over time into an organizational modus operandi that provides the basis for strategizing. Strategy, therefore, inheres within socio-culturally propagated predispositions that provide the patterned consistency that makes the inadvertent emergence of a coherent strategy possible. By demonstrating how strategy is immanent in socio-culturally propagated practices, the SIP perspective overcomes the troublesome micro/macro distinction implied in SP and SAP research. It also advances our understanding of how strategy emergence impacts organizational outcomes.

Introduction

The Strategy Process (SP) and Strategy-as-Practice (SAP) research traditions share a common concern with how strategies emerge in practice. Where SP scholars emphasize strategy emergence as a long-term conditioning process and focus primarily on realized strategy as a macro development happening over time (Pettigrew, 1987, 2012), SAP advocates attribute strategy emergence to the micro 'doing' of strategy actors in formal strategy planning settings (Jarzabkowski et al., 2007; Johnson et al., 2003). Despite a common concern with strategy emergence, how process relates to practice continues to be an area of lively and seemingly intractable theoretical debate (Burgelman et al., 2018; Guérard et al., 2013; Hutzschenreuter and Kleindienst, 2006; Jarzabkowski et al., 2016a; Pettigrew, 2012; Sminia and De Rond, 2012; Vaara and Whittington, 2012; Whittington, 2007; Wolf and Floyd, 2017). Neither perspective explains satisfactorily how process and practice relate to one another in strategy emergence to produce tangible organizational outcomes.

The theoretical impasse between SP and SAP, we argue in this article, stems from an implicitly shared commitment to a Substantialist metaphysics, which construes processes and practices as processes/practices of primary autonomous actors (Tsoukas and Chia, 2002). The direct consequence of this Substantialist metaphysical commitment is a methodological individualism (Chia and MacKay, 2007), which assumes the prior existence of a 'self-contained individual confronting a world "out there"' (Ingold, 2000: 4). Process and practices are therefore cast as epiphenomenal 'doings' of such autonomous agents. A continued commitment to this Substantialist metaphysics, we argue, is an obstacle to understanding how process and practice are related to one another in strategy emergence and how that affects organizational outcomes.
This is because it perpetuates a misleading macro/micro distinction and overlooks the possibility that strategy emergence is immanent in the socio-culturally infused modus operandi and predispositions of an organization. In this article, we draw on Process metaphysics (e.g. Chia, 1999; Chia and MacKay, 2007; Langley and Tsoukas, 2010; MacKay and Chia, 2013; Tsoukas and Chia, 2002), which assumes process is reality (Whitehead, 1978/1929), as well as the practice turn in social philosophy and theory (e.g. Bourdieu, 1977, 1990; De Certeau, 1984; Dreyfus, 1991; Schatzki, 2001, 2005, 2006), to propose an alternative Strategy-in-Practices (SIP) perspective that overcomes the macro process/micro practice conundrum.

But if process is reality, it is also inherently unliveable. What follows from this metaphysical assumption is that practices are viewed as the primary means through which we actively fashion out a 'surrogate' social world that is needed for us to function effectively (Weick, 1979: 177). They provide the means for us to selectively extract and create order, stability and coherence out of the 'blooming, buzzing confusion' that is ultimate reality (James, 1996/1911). Thus, unlike the more established SP and SAP traditions, the SIP perspective that we develop here reverses the metaphysical assumption privileging substantial actors and entities and instead adopts a Process metaphysics that places practices at the centre of strategy emergence. Accordingly, process is a primary existential condition and socio-cultural practices are the sole means we employ to extract a coherent and liveable world out of this fluxing ultimate reality. Understood this way, practices are cumulative aggregations of 'know-how' that we rely on to cope practically with the external environment. They find their expression in the multitudinous coping actions taken 'at-the-coal-face' of an organization, and it is through this socio-culturally propagated modus operandi that a coherent strategy inadvertently emerges. The SIP perspective thus circumvents the misleading macro/micro distinction inherent within SP and SAP research and offers a 'third way' to understand how process and practices are related in strategy emergence and how this affects organizational outcomes. By explaining how the 'seeds' of a strategy are already sown via seemingly inconspicuous local coping actions, a SIP perspective reveals how strategy is often already immanent in an organization's modus operandi, which in turn impacts eventual organizational outcomes (Bourdieu, 1977, 1990).

The research question we address is: how do process and practices relate to one another in strategy emergence and how are tangible organizational outcomes produced? In addressing this question, we make two key contributions to strategic management theory and practice. First, we respond to calls by strategy scholars to investigate the relationship between process, practice and their links to organizational outcomes (e.g. Burgelman et al., 2018; Chia and Holt, 2006; Chia and MacKay, 2007; Vaara and Lamberg, 2016; Vaara and Whittington, 2012). We do this through a radical revision of the metaphysical commitments underpinning the SP and SAP research traditions, from 'substance' to 'process' (Chia, 1999; Chia and Holt, 2006; MacKay and Chia, 2013; Prigogine, 1996; Sandberg and Tsoukas, 2011; Tsoukas and Chia, 2002; Whitehead, 1978/1929; see also Dreyfus, 1991).
This is accompanied by a metaphysical shift from construing reality in entitative terms as a 'succession of instantaneous configurations of matter' (Whitehead, 1925: 63), so that practice is conceptualized as the doings of 'discrete entities' (Sandberg and Dall'Alba, 2009: 1361), to one where 'process' is ultimate and practices are constitutive of social reality (Whitehead, 1978/1929). Doing so allows us to overcome the prevailing theoretical impasse between SP and SAP scholarship and 'significantly advance our understanding' of 'strategy emergence' (Vaara and Whittington, 2012: 320). Second, we respond to calls for completing the 'practice turn' (e.g. Chia and MacKay, 2007; Seidl and Whittington, 2014; Whittington, 2006), which some scholars argue has yet to have a significant impact on strategy scholarship (e.g. Pettigrew, 2012). We do so by demonstrating how a metaphysical shift from a Substantialist to a Process worldview (e.g. Chia, 1999; Langley and Tsoukas, 2010; MacKay and Chia, 2013; Tsoukas and Chia, 2002) is patently consistent with the more radical implications of the 'practice turn' in social philosophy and theory (e.g. Bourdieu, 1977, 1990; De Certeau, 1984; Dreyfus, 1991; Rouse, 2006; Schatzki et al., 2001).

The SIP perspective that we propose explains strategy emergence and organizational outcomes by circumventing the 'macro'/'micro' distinctions inherent in SP and SAP research. It does so by showing how, through socio-cultural influences, an immanent strategy is ever present in organizational life, thereby reflecting the lived experience of practitioners strategizing at the organizational 'coal-face'. Hence, the SIP perspective not only extends current theorizing, but also opens up new vistas for empirical research into strategy emergence. Immanent strategy, therefore, provides the underlying substrate for the subsequent explication of both deliberate and emergent strategies in acts of strategizing. In explaining strategy emergence and outcomes, we show here that deliberate strategizing activities are themselves dependent upon a prior practice-shaped, socio-cultural modus operandi; strategy actors are never fully autonomous in their strategic deliberations and hence in the choices made. But, far from removing agency from explanations, the SIP perspective maintains that the actions of practitioners are simultaneously constrained and enabled by such practices.

The article is structured as follows: first, we expand on the theoretical tensions surrounding the SAP and SP perspectives within strategic management. We identify theoretical commitments to a dominant Substantialist metaphysics as the source of these tensions and explore its consequences. Next, we outline Process metaphysics and show how, by embracing the assumption that process is reality, we are better able to appreciate the fundamentally constitutive role that socio-cultural practices play in shaping strategic priorities. We then articulate our SIP perspective. Examples of strategy emergence at IKEA, along with a comparison of how the strategies of eBay and Alibaba emerged as they competed in China during the early 2000s, are then used to illustrate our SIP perspective. Our examples show how local practical coping actions and socio-cultural legacies inadvertently shape the emergence of a coherent strategy even in the absence of deliberate strategic planning.
Finally, we conclude by drawing attention to the ever-present existence of socio-cultural influences, which we call immanent strategy, that inevitably makes organizational strategy emergence possible.

Tensions surrounding Strategy-as-Process (SP) and Strategy-as-Practice (SAP) perspectives

The SP tradition views strategy emergence as a macro 'pattern in a stream of actions' (Mintzberg and Waters, 1985: 257); an observed consistency of actions created by strategy actors over time. It emphasizes the importance of attending to the behavioural and emergent dimensions of strategizing (Barnett and Burgelman, 1996; Burgelman and Grove, 2007; MacKay and Chia, 2013; Sminia and De Rond, 2012), and it draws attention to the 'relation between strategic content, context, and process' (Pettigrew, 1987: 666). SP focuses on realized strategy as a 'convergence of intended strategy and emergent strategy', and acknowledges that while strategizing is oftentimes deliberate and intentional, the universal experience of strategy practitioners is that 'there are so many things that can intervene' to thwart any intended strategy (Sminia, 2009: 97). Hence, the SP tradition has sought to understand strategy emergence through the foci of identifiable strategy actor, action and decision processes as they evolve over time (Burgelman et al., 2018). By directing attention to 'a sequence of events that describes how things change over time' (Van de Ven, 1992: 169; see also Langley et al., 2013), the SP tradition has helped to show that strategy emergence is essentially 'a long-term conditioning process' (Pettigrew, 1987: 666).

Despite shared interest in the complexity and richness of a common focal phenomenon - strategy emergence - and claims of affinity between strategy practice and SP, suggestions that the former is a subset of the latter (Hutzschenreuter and Kleindienst, 2006; Sminia and De Rond, 2012) have been vehemently disputed (Mirabeau et al., 2018; Vaara and Whittington, 2012; Whittington, 2007). SP scholars, for instance, express scepticism about the relevance of formal strategy practices to the emergence of realized strategy (Kouamé and Langley, 2018), and maintain that SAP scholars' enthusiasm for 'a micro-level of activity' and 'fascination with the details of managerial conduct, distract them from issues with substantive impact on organizational outcomes' (Burgelman et al., 2018: 540). By invoking the practice turn in social theory (Seidl and Whittington, 2014; Whittington, 2006), SAP scholars counter that SP scholarship either misses or misrepresents 'intrinsic features of the phenomena they attempt to describe' (Burgelman et al., 2018: 539), because it has been insufficiently attentive to the 'doings' 'that make up . . . strategizing in practice' (Johnson et al., 2003: 3; see also Jarzabkowski et al., 2016b).

While the SP and SAP traditions have both made significant contributions towards advancing strategy theory, scholars have more recently recognized opportunities for cross-fertilizing insights that have emerged from each of these perspectives and acknowledged the need for a combined research stream that they label 'Strategy as Process and Practice' (SAPP) (Burgelman et al., 2018: 532). However, several persistent challenges remain that prevent a comprehensive theoretical integration of the SP and SAP perspectives.
Indeed, the very label itself - strategy as process and practice - points to a theoretical impasse and arguably perpetuates rather than reconciles the differences between the two fields of strategy inquiry and their relationship with organizational outcomes. It does so in five ways.

First, SP and SAP research have different understandings of what 'strategy research' entails. While SP regards 'strategy research' as elucidating 'the process by which firms realize performance as well as maintain and develop their ability to perform' (Sminia and De Rond, 2012: 1338), SAP scholars, instead, are more interested in privileging 'the detailed processes and practices that constitute the day-to-day activities of organizational life' (Johnson et al., 2003: 3; Lê and Jarzabkowski, 2015: 440), without concerning themselves with organizational outcomes. Therefore, while SP research has tended to focus on organizational issues such as survival, strategic change, competitive advantage or innovation (Kouamé and Langley, 2018), SAP research is focused on how an institutionalized practice succeeds in 'achieving widespread diffusion and adoption' (Whittington, 2007: 1579) through its 'practice-in-use' (Jarzabkowski et al., 2016b). SAP remains relatively silent on how strategy practices lead to desirable organizational outcomes. Several scholars have noted this and insisted that for a practice to be deemed strategic, it must demonstrate how it attained 'a particular coherence or direction to organizational activity' (Fenton and Langley, 2011: 1191; see also Carter et al., 2008). While SP research attempts to link dynamic 'macro' processes to outcomes, it does not, therefore, attend to the practices that explain how strategy emergence is possible. SAP research, however, with its focus on the 'micro-activities' of institutionalized practices, remains unable to account for macro organizational outcomes.

Second, the theoretical relationship between practice and process remains unclear within both SAP and SP research. Process is understood in common-sense terms as a 'process of', comprising a sequence or succession of changes, while practice is understood as the detailed 'doings' of pre-designated strategy actors. SP research has thus been criticized for not opening up the 'black box' of process (Johnson et al., 2003: 3; see also Jarzabkowski et al., 2016b), and for focusing too much on 'remote and abstract processes' that are too 'coarse-grained' (Chia and MacKay, 2007: 220), so that its findings are 'unamenable to practical action' (Burgelman et al., 2018: 539-540). For SAP scholars, by contrast, practice is tellingly 'what is inside the process' (Johnson et al., 2003: 11), thereby reinforcing the macro/micro relationship between process and practice. To add to the confusion, Carter et al. (2008: 91) point out that SAP researchers appear to embrace two very different notions of practice simultaneously. On the one hand, 'practice seems to mean "being closer to reality" or "being more readily applicable"'; on the other hand, 'practice is understood in the Mintzbergian sense of "what people actually do when they strategize"'. This rather loose employment of the term 'practice' has not helped in clarifying the relationship between process, practice, strategy emergence and organizational outcomes.
Third, conceptually relating practice and process in simple micro/macro terms underplays the fact that organizational strategies and outcomes are historically constituted and socially embedded aggregate phenomena (Vaara and Lamberg, 2016). In their thoughtful article, Guérard et al. (2013: 568) suggest that SAP scholars avoid attending to organizational outcomes because 'the path between the practice itself and the aggregate bottom line is improbably long and winding'. For them, attempts to connect practices with strategic impact on firm-level performance are replete with contradictions and inconsistencies (see also Miller et al., 2013). SAP's response to this conundrum has therefore been to measure performance (or at least outcomes) at less aggregated levels and through proximal indicators more closely attuned to the specific phenomena being studied (Jarzabkowski and Spee, 2009; Johnson et al., 2007), be it at the individual or group level. Reconciling the macro/micro distinction between SP and SAP, however, entails addressing directly the notoriously difficult-to-justify connection between process, practice and organizational outcomes in strategy emergence in a way that takes into account how different historical and socio-cultural influences shape identities, outlooks and inclinations. The turn to practice in social philosophy and theory is one way of sensitizing strategy scholars to these broader influences on organizational outcomes.

Fourth, as intimated earlier, the notion of 'practice' as employed within SAP research appears at odds with the larger 'practice turn' in social theory and philosophy in two crucial respects. First, by focusing on 'micro-activities' (Jarzabkowski and Balogun, 2009: 1258), 'micro-processes' (Lê and Jarzabkowski, 2015: 458), 'micro and macro-level consequences of strategy processes and practices' (Jarzabkowski et al., 2016b: 272) and the 'micro level study of practices in context' (Vaara and Lamberg, 2016: 636), SAP research continues to rely on the micro/macro dualism that advocates of the 'practice turn' singularly reject (Bourdieu, 1990; Dreyfus, 1991; Schatzki, 2005). Second, while SAP acknowledges that strategy making is a 'situated, socially accomplished activity' comprising 'those actions, interactions and negotiations of multiple actors' (Jarzabkowski et al., 2007: 7-8), it underemphasizes the fact that strategy practitioners are themselves socio-cultural beings, so that what they perceive and 'do' is always already influenced by their socio-culturally acquired modus operandi (Bourdieu, 1977, 2005). A key contribution of the 'practice turn' in social philosophy and theory (e.g. Bourdieu, 1977, 1990; Dreyfus, 1991; Schatzki, 2001, 2005) has been the realization that an acquired modus operandi inevitably shapes the strategic predispositions of practitioners themselves (Bourdieu, 1990: 52). While some scholars have acknowledged this broader socio-cultural influence implied in the practice turn (e.g. Seidl and Whittington, 2014; Vaara and Whittington, 2012), the linking of strategy-making practices with such broader socio-cultural influences is insufficiently emphasized in both the SP and the SAP literature (Burgelman et al., 2018; Jarzabkowski et al., 2007; Johnson et al., 2003).

Finally, how the everyday operational connects with the strategic, and vice versa, remains largely unexamined in SAP research in particular, despite early SP work alluding to their intimate connection within strategy emergence (e.g.
Burgelman, 1983; Burgelman and Grove, 2007; Mintzberg and Waters, 1985; Pettigrew, 1987, 2012). The emergence of a coherent strategy does not happen in isolation from an organization's operational concerns and its established ways of dealing with problem situations. These comprise the entire milieu of practical coping actions it takes 'at-the-coal-face' of the organization/environment interface on an everyday basis. The traditional, but unhelpful, academic separation of the strategic from the operational has led to a truncated understanding of strategy making as somehow the sole prerogative of pre-designated strategy practitioners. So how everyday operational activities feed into strategic priorities, and hence how an organization's strategy can emerge from its operational strengths, remains largely unexamined (Jarzabkowski et al., 2016a, 2016b; Pettigrew, 2012; Whittington, 1996, 2006).

Summary

The differences in the SP and SAP understanding of 'what' strategy means, the disproportionate methodological and theoretical focus within SAP research on identifiable strategy episodes versus whole processes in SP studies, the lack of a clear conceptual link between 'process' and 'practice', the perpetuation of the macro/micro distinction inherent in process and practice studies, a lack of fidelity to the key principles of practice theory, and an artificial separation of the operational from the strategic have all hampered a more nuanced understanding of how strategy emerges and is realized in practice. The theoretical challenge of reconciling practice and process in strategy emergence, and hence the micro and macro levels of analysis they imply, has led a growing number of scholars to call for a re-examination of the metaphysical assumptions underpinning much current theorizing within SP and SAP research (e.g. Chia and MacKay, 2007; Sandberg and Tsoukas, 2011; Vaara and Whittington, 2012). In what follows, we scrutinize key metaphysical assumptions held by both SP and SAP research.

The metaphysics of process and practice

Unleashing 'the full power of the practice perspective', scholars point out, requires drawing more deeply on its theoretical insights and taking its metaphysical commitment much more seriously (Vaara and Whittington, 2012: 289). Examining the metaphysical assumptions of SP and SAP research is crucial for advancing theory building. They ultimately determine a theory's explanatory scope and predictive accuracy, its logical consistency and its ability to generate new insights by 'increasing the causal "grain" of explanations' (Foss and Hallberg, 2017: 412). The task of metaphysics is 'to provide a cogent and plausible account of the nature of reality at the broadest, most synoptic, and most comprehensive level . . . and to render intelligible the world as our experience presents it to us' (Rescher, 1996: 8). To this end, we begin by examining the Substantialist metaphysical commitments underpinning much of current SP and SAP theorizing, before turning to the revised Process metaphysical view that anchors our SIP perspective.

Substantialist metaphysics

Much of SP and SAP research is shaped by a Parmenidean-inspired Substantialist worldview, which presupposes ultimate reality to be essentially pre-ordered, atomistic and stable. Reality is construed as comprising discrete, identifiable and stable entities 'set side by side like the beads of a necklace' and held together by an equally solid thread (Bergson, 1998/1911).
Each entity is assumed to possess properties that are relatively unchanging, so that 'substance, identity, . . . causality, subject, object' and so on are privileged as the primary features of reality (Morin, 2008: 34). Consequently, substance is privileged over process, individuality over interactive relatedness (i.e. practices) and classificatory stability over fluidity and evanescence (Rescher, 1996: 31-35). Things change, but change is not inherently constitutive of things. Within the social sciences, this Substantialist worldview manifests itself in the widespread construal of the primacy of autonomous individual agents; an approach that has been labelled 'methodological individualism' (Chia and MacKay, 2007). Methodological individualism assumes that 'all actions are performed by individuals . . . a social collective has no existence and reality outside of the individual members' actions' (Von Mises, 1998/1949). This means that 'processes' and 'practices' are epiphenomenal 'effects' of pre-existent individual agents. This is the metaphysical position assumed by both SP and SAP.

From this vantage point, 'process' is construed as a change from state 'A' to state 'B' (Hutzschenreuter and Kleindienst, 2006; Jarzabkowski et al., 2016a, 2016b; Langley et al., 2013; Mintzberg and Waters, 1985; Mirabeau et al., 2018; Pettigrew, 2012; Van de Ven, 1992). Thus, when Mintzberg and Waters (1985: 257) describe strategy as 'patterns in streams of action', when Pettigrew (1997: 338) insists on the importance of observing the 'sequence of individual and collective events, actions, and activities unfolding over time in context' and when Langley et al. (2013: 1) draw attention to how 'managerial and organizational phenomena emerge, change, and unfold over time', they are all essentially relying on this common-sense understanding of process as a transitional phase from one stable state to another. Process, put differently, merely binds a 'succession of unique events' together (Ingold, 2011: 233, our emphasis).

Such a Substantialist worldview is also retained by SAP advocates (Sandberg and Dall'Alba, 2009). For instance, when Vaara and Whittington (2012) and Whittington (2006) rely on categories such as practice, practitioners and praxis, they assume these to be self-evident and unproblematic rather than insecure distinctions created through arbitrarily parsing, fixing and naming an essentially fluxing and undifferentiated reality (James, 1996/1911). The idea that ultimate reality is essentially a Process, an 'aboriginal sensible muchness' (James, 1996/1911) characterized by equivocality, serendipity and unpredictability, is not seriously entertained. This Substantialist worldview leads to SAP's common-sense treatment of practices as simply what self-identical agents 'do' (Jarzabkowski et al., 2016b; Johnson et al., 2003; Whittington, 1996); practices are practices of strategy actors. The axioms of methodological individualism are thereby reinforced. Whether it is about the micro-activities carried out by strategy actors in strategy meetings and in away-day strategy workshops (Hendry et al., 2010; Lê and Jarzabkowski, 2015; Whittington, 2006), or the discursive and rhetorical practices of strategy actors and their sense-making activities (Kwon et al., 2014; Laine and Vaara, 2007; Samra-Fredericks, 2003), SAP perspectives are predicated upon the assumed autonomy of the individual actor.
How the identities, perceptions and predispositions of actors themselves have been shaped and influenced by prior historical, cultural and material conditioning remains relatively unexamined in this common-sense understanding of practice (see Nicolini, 2012 for an exception). The idea that 'processes rather than things best represent the phenomena that we encounter in the natural world about us' (Rescher, 1996: 2), and that practices are in fact fundamentally reality-constituting and identity-shaping, is overlooked. SAP theorists' desire to go 'inside the process' (Burgelman et al., 2018: 532) to examine the activities involved in strategy work, and SP's construal of process as a change from point 'A' to point 'B' (Burgelman et al., 2018; Mirabeau et al., 2018), both betray the dominance of a Substantialist worldview in which practices are related to processes in terms of a micro/macro relationship. However, an alternative and more coherent understanding of how process and practices are related is possible if we embrace a Process worldview. This metaphysical revision enables the 'macro'/'micro' and 'process'/'practice' dualisms to be overcome in such a way that it helps reveal how local coping actions aggregate and congeal into broader socio-cultural practices that then provide the patterned regularities facilitating the possibility of strategy emergence and ultimately shaping organizational outcomes.

Process metaphysics

Process metaphysics implies an acceptance that process is reality (MacKay and Chia, 2013; Rescher, 1996; Whitehead, 1978/1929). Flux, change and ongoing transformation are fundamental features of ultimate reality; everything flows and nothing abides. Distinctions and categories, events and entities as such, are products of our linguistic interventions into this flowing reality. Such a Process worldview owes its origin in the West to the ancient Greek philosopher Heraclitus, who cryptically asserted that reality is always 'in flux like a river' (fragment 5.10, in Mansley-Robinson, 1968: 89), while in the East, ancient Chinese philosophers insisted that the 'Great Tao' of reality 'flows everywhere' (Lao Tzu, in Chan, 1963: 157); change is an immanent feature of reality. Such a processual view of reality has been more recently revived by philosophers such as Bergson (1998/1911), James (1996/1911) and Whitehead (1978/1929), and by physicists such as Bohm (1980) and Prigogine (1996). Process metaphysics offers immense explanatory potential in understanding the flux of social life and the role that practices play in the artificial construction of social orders.

From this Process worldview, 'only flux is experientially real; physical reality as we experience it is always unstable' (Rescher, 1996: 18). All social entities, including institutions, organizations and even individuals, are necessarily 'effects' of socio-cultural practices (Bourdieu, 2005); they are temporary, stabilized patterns of relations forged from the manifold of changes that is ultimate reality. What really exist from this Process worldview are 'not things made but things in the making' (James, 2011/1909). Therefore, all social entities, including society, institutions and organizations, are temporary 'bundles' of relationships and practices. Even the individual, as such, is not an isolatable, autonomous unit, but rather a product of socio-cultural practices; each 'emerges as a locus within fields' of social relationships (Ingold, 2000: 3).
Process metaphysics therefore does not 'deny the reality of substances but merely reconceptualise them as manifolds of process' (Rescher, 1996: 52). A fluxing and ever-changing reality, however, is eminently unliveable. Social beings require a 'workable level of certainty' to lead productive and meaningful lives (Weick, 1979: 6). This is the reason we collectively develop shared practices to help us construct our identities and the social orders that we then find so familiar and necessary. Practices, then, from a Process worldview, are our collectively shared and culturally embedded ways of abstracting, fashioning, regularizing and hence creating social entities, events and structures out of this fluxing ultimate reality (James, 1996/1911; Whitehead, 1925: 68-69). They help us reduce the 'equivocality' of our lived experience through its progressive ordering into a relatively stable 'surrogate' social reality to which we then subsequently respond (Weick, 1979: 177).

Therefore, from this alternative Process worldview, practices are aggregates of coping actions that have evolved through extended collective efforts at dealing with a fluxing reality. The gradual congealing of an initially disparate multitude of local coping actions into a set of established practices provides us with the means to construct social entities such as 'individual' and 'environment', 'markets' and 'organization', 'resources' and 'assets', 'competitors' and 'competitive advantage', 'supplier' and 'producer', 'operations' and 'strategy' (Schatzki, 2005, 2006). Each distinction is forged and reinforced through its practical application, so that such distinctions eventually become so self-evident that we treat each one 'as a thing . . . forgetting that the very permanence of its form is only the outline of a movement' (Bergson, 1998/1911); 'eddies in a river current' (Ingold, 2011: 168); patterns in the flow of actions (Bohm, 1980). Put differently, Process metaphysics is 'perfectly prepared to acknowledge substantial things, but see them rather in terms of processual activities and stabilities' (Rescher, 1996: 52).

It is this implicit understanding that reality is process that underpins the practice turn and that has inspired its advocates to insist that practices constitute us, shape our modes of existence and predispose us in our engagement with the external environment (Bourdieu, 1977, 1990; De Certeau, 1984; Dreyfus, 1991). Understood thus, practices are 'manifolds of actions that are ontologically more fundamental than actions' themselves (Schatzki, 1997: 284). Accordingly, actors themselves are temporarily stabilized 'bundles of practices' (Schatzki, 2005: 466), 'patterns of public comportments . . . sub-patterns of social practices' (Dreyfus, 1991: 151), 'carriers' of collective practices (Reckwitz, 2002: 256). Artificial stabilities such as 'institutions', 'structures', 'organizations', 'markets', 'firms', 'strategies' and so on are, consequently, all the result of the gradual 'firming up' of collective socio-cultural practices that are 'processional, rather than successional' (Ingold, 2011: 53, our emphasis). Hence, every activity constituting a practice is a 'recurrent' rather than an 'occurrent' movement (Ingold, 2011: 60, emphases in original); a development of the one before and a preparation for the one that follows. From this process-based understanding, practices are not simply what people 'do' (e.g.
Burgelman et al., 2018; Jarzabkowski et al., 2007; Johnson et al., 2003; Whittington, 1996, 2006). Instead, practices constitute 'people' in the first instance. A serious commitment to the practice turn therefore requires us to rethink how such recurrent socio-cultural practices render strategy emergence possible and influence an organization's strategic outcomes. As Rouse (2006: 645-646) notes, a major concern of the practice turn has been to 'by-pass perennial discussions of the relative priority of individual agency and social or cultural structures'. Stated differently, it is precisely the rejection of methodological individualism, and of an alternative structuralism, that lies at the heart of the practice turn in social philosophy and theory. As such, the recourse to practices is motivated by the desire to overcome the 'micro'/'macro' dualism by showing how all 'macro' social phenomena, such as structure, culture, organization, firm, strategy and so on, are the result of the congealing of aggregate local 'micro' coping actions into a pattern of accepted socio-cultural practices. Reconceptualizing practices as our means for dealing with a processual reality (Whitehead, 1978/1929) helps us 'circumvent' the micro/macro, agency/structure, process/practice and operational/strategic conundrums facing strategy theorists. Practices, then, are not about the 'internal life of process' (Brown and Duguid, 2000: 95). Rather, process provides the imperative for our recourse to practices as the primary means for creating stability and the social orders we find all around us in an ever-changing world. This 'third way' of understanding the more fundamental nature of process, and how practices relate to it, enables us to reconceptualize strategy emergence as deriving from the underlying patterned consistency of actions immanent in the inadvertent propagation of practices.

Summary

Our analysis establishes how theoretical advancement on strategy emergence is hindered by the hegemony of a Substantialist metaphysics within both SP and SAP research. Despite their differences in emphasis, both SP and SAP assume processes of, rather than process is, reality. Therefore, within this Substantialist worldview, processes and practices are epiphenomenal to individuals, systems and organizations. The broader understanding of practices as fundamentally a cultivated, ever-expanding bundle of interactions (Bourdieu, 1977, 1990; Schatzki, 2005), rather than simply the visible doings of strategy practitioners in strategy meetings, remains unexplored in much of SAP and SP research. However, from the alternative Process worldview implicit in the practice turn in social philosophy and theory, process is what makes practices an imperative in constructing social reality. Accepting that reality is process impels us to view practices as the primary means for selectively fixing, stabilizing and creating the social orders and institutions that we find all around us. Thus, the dissonances between practice and process are effectively dealt with; the macro and the micro, the operational and the strategic, all 'enfold and unfold' into each other (Bohm, 1980). This alternative 'third way' of understanding the more fundamental nature of process, and how practices properly relate to it, enables us to rethink strategy emergence as arising from the underlying patterned consistency of actions resulting from the propagation of socio-cultural practices. Strategy, as such, is immanent in such practices.
We call this perspective Strategy-in-Practices (SIP).

Towards a Strategy-in-Practices (SIP) perspective: Immanence, modus operandi and emergence

The SIP perspective that we develop here begins with the assumption that process is reality (Bergson, 1998/1911; James, 2011/1909; Whitehead, 1978/1929). From this SIP perspective, practices are 'the manifestations of . . . complex bundles of coordinated processes' (Rescher, 1996: 49, emphasis in original). Practices enable us to create 'islands' of artificial stability (social entities) that provide the raw material for constructing and sustaining social reality and the social orders that we find so familiar and necessary, by 'bundling' and coordinating selective aspects of an ever-flowing ultimate reality. Social practices are therefore the visible foundations of economic, social and cultural life (Bourdieu, 2005). As Rouse (2006: 646) points out, practices provide a revised understanding of the pervasive socio-cultural backdrop influencing human behaviour by showing that 'social or cultural structures (exist) only through their continuing reproduction in practices', so much so that culture and structure are in fact abstract instantiations of underlying recurrent practices (Bourdieu, 1977, 1990). Institutions, organizations, individuals, discourse, activities and strategy are quintessentially the effects of practice, not the other way around (Schatzki, 2005, 2006). Practices enable us to 'harness' the flux of reality in order to 'drive it better to our ends' (James, 1996/1911). They define and predispose the members of a community, so that both the types of action taken to deal with the exigencies of a situation and the manner in which they are carried out are uniquely shared by that community (Bourdieu, 1977, 1990). Hence, what socially constructed strategy practitioners 'choose' to do in formalized strategy settings is already irretrievably shaped by their prior socio-cultural conditioning and by their extended immersion in an organization's modus operandi (Bourdieu, 1977, 1990). The possibility of strategy emergence, and the organizational outcomes it produces, is thus always immanent in such practices.

Practices, from a SIP perspective, are fundamental to our understanding of the emergence of social phenomena, including and especially the phenomenon of strategy. Unlike the SAP tradition, which disembodies practices from context and time, or the SP tradition, which investigates realized strategy processes without recourse to the background array of socio-cultural practices, our SIP perspective shows how socio-cultural practices, comprising a complex milieu of local coping actions that aggregate into a modus operandi, are able to account for the inadvertent emergence of a coherent strategy without the latter ever being the 'product of a strategic orientation' (Bourdieu, 1977: 73). Practices contain 'patterns of regularities' forged through repeated coping actions taken at the 'coal-face' of the organization/environment interface by members of a collective. It is this pattern of regularities that enables a coherent strategy to emerge inadvertently. Practices recursively shape, and are themselves subsequently shaped and refined by, coping actions, so that they are dynamically evolving (e.g.
MacKay and Chia, 2013); each engagement modifies and refines the practices themselves, thereby resulting in an ever more patterned regularity of responses that we can retrospectively recognize as being inherently 'strategic' (Bourdieu, 2005). To understand SIP, practices must be analysed alongside context and time. But context here refers to the wider array of socio-cultural practices from which individuals draw in response to situational demands (Schatzki, 2001). Importantly, these practices 'do not arise from beliefs, rules or principles'; rather, we are 'socialized into . . . what it is to be a human being' through 'social practices' (Dreyfus, 1991: 23). Practices that emerge serve to orient members and to predispose them to dealing with future situations in a relatively consistent and predictable manner (Schatzki, 2005, 2006). They generate 'all the "reasonable", "common-sense", behaviours . . . which are possible within the limits of these regularities' (Bourdieu, 1990: 55). This underlying pattern of practice regularities that makes up the socio-cultural milieu surrounding an organization is tacitly propagated in the form of established 'ways of engaging and of doing things', or modus operandi, so that it serves to shape those immanent strategic predispositions that Mintzberg and Waters (1985: 257) observed to be a 'pattern in a stream of actions'. Mintzberg and Waters (1985: 257) originally coined the terms 'deliberate' and 'emergent' strategies to distinguish between organizational strategies that are 'realized as intended' and 'patterns or consistencies' that are 'realized despite, or in the absence of intentions'. Two issues are salient within this original conceptualization. First, emergent strategy is conceptualized as occurring on the macro level, in contrast to the micro-level activities and processes out of which it arises. This reinforces the macro/micro distinction, which advocates of the practice turn squarely reject (Bourdieu, 1990; Dreyfus, 1991; Schatzki, 2005). Second, the process of emergence remains a black box: one can discern both the lower-level inputs (deliberate and emergent strategy) and the higher-level outputs (realized strategy), but not how the lower was transformed into the higher during emergence. In other words, emergence 'is merely "a label for a mystery", inviting the question of what other factor or process manages to explain how these characteristics arise' (Haldane, 1996: 265). The SIP perspective overcomes these twin limitations by conceptualizing strategy as immanent in established social practices. Immanence refers to the latent potential of the tendencies or impulses that inhere within practices and find expression in their actualization. For example, when we say 'immanent within an acorn is an oak tree', what we mean is that an acorn is a stage of an evolving organism 'moving continually along its predestined journey towards its eventual condition as an oak tree' (Rescher, 1996: 11). The idea of immanence suggests that tendencies and impulses require favourable circumstances (in the case of the acorn: the right climate, the right soil, protection from rodents, etc.) to be realized. An immanent strategy emerges in the process of actualization. An appeal to immanence is a way to redirect attention to the unique dynamics of socio-cultural practices in order to explain more adequately what is actually going on.
Emergence, on the other hand, by focusing on what something is (or is not), 'functions not so much as an explanation but rather as a descriptive term pointing to the patterns, structures or properties that are exhibited' (Goldstein, 1999: 58). Therefore, unlike the deliberate and planned strategies inherent to the SP and SAP perspectives, where strategy depends on the autonomous actor's intentions, the SIP perspective recognizes that practices can and do serve as the 'source of these strings of "moves" that are objectively organized as strategies without being the product of a genuine strategic intention' (Bourdieu, 1990: 60). The construct of immanence is therefore only a foundation on which to build an explanation, not its terminus. In other words, immanent in the socio-cultural context is a modus operandi propagated inadvertently through the established practices of a collective: a particular nurtured sensitivity to the local environment, a way of relating to it and a preferred way of engaging and responding to it that appears common-sensically evident. This strategic predisposition, or modus operandi, is what we mean by SIP. Such a modus operandi willy-nilly ensures a degree of convergence of approaches in dealing with the exigencies of any given situation faced by an organization. It is this possibility of convergence that makes the inadvertent emergence of a coherent strategy possible in the first instance. Put differently, strategic coherence can also emerge inadvertently without any deliberate intention or design on the part of actors (Chia and Holt, 2009). Within the SIP perspective, what differentiates effective from ineffective practices in a given context is the extent to which such practices sensitize and enskill members of a community or organization to find 'the grain of the world's becoming' and to follow its course 'while bending it to their evolving purpose' (Ingold, 2011: 211). Viewed from this broader understanding of the practice turn, a modus operandi makes for an immanent strategy that enables 'agents to cope with unforeseen and constantly changing situations' (Bourdieu, 1990: 61), while all the time remaining consistent and coherent with an organization's history and socio-cultural heritage. To construe practices as simply the doings of practitioners is thus to trivialize the significance of the 'practice turn'. The SIP perspective advocated here prioritizes how the seemingly inconsequential everyday practical coping actions taken at all levels of an organization inadvertently aggregate into a set of established practices that then shape its strategic predispositions and hence strategy emergence and organizational outcomes. Our conceptual development does not preclude the role that conscious deliberation plays in coping actions and practices. It merely suggests that, through this socio-culturally shaped modus operandi, organizational actors are predisposed to acting in certain habituated ways when confronted with situation-specific circumstances. By showing how strategy emerges through these local coping actions congealing into established practices, the SIP perspective directs attention to how the 'microcosm and macrocosm are coordinated, linked to one another in a seamless web of process' (Rescher, 1996: 21) so that they affect organizational outcomes.

Summary

An organization's strategic predispositions are always already contextually shaped by socio-culturally propagated practices.
These socio-cultural practices are infused with and ultimately propagate a modus operandi that shapes how an organization approaches, deals with and responds to the exigencies and extenuating circumstances it faces. We conceptualize such a modus operandi as immanent strategy. Immanent strategy therefore refers to the ever-present pattern of socio-cultural tendencies that facilitates convergence of organizational actions such that the inadvertent emergence of a coherent organizational strategy is possible. We thus direct attention to how wider socio-cultural influences, perceptions and tendencies are expressed through preferred social practices that in turn shape an organization's strategic priorities. This, we propose, accounts for the possibility of inadvertent strategy emergence and the concomitant strategic outcomes.

Illustrating a Strategy-in-Practices perspective: IKEA and eBay versus Alibaba

The inadvertent emergence of strategy at Ikea from initial operational considerations, and how the socio-cultural moorings of SIP at eBay and Alibaba shaped strategy emergence and organizational outcomes, both help illustrate our SIP perspective.

Strategy emergence and Strategy-in-Practices at IKEA

Ikea, currently the largest furniture chain in the world, finds its roots in the agrarian Swedish province of Småland (literally 'small country'). Often portrayed synonymously with its charismatic founder Ingvar Kamprad, its success, as Jarrett and Huy (2018) note, was more a function of 'emergence, haphazardness, and invention through necessity' than planned strategy. Founded in 1943 as a mail-order business selling nylon stockings and pens, followed by furniture in 1948, Ikea launched its first mail-order furniture catalogue in 1951. Competitors responded by launching a price war. On the cusp of bankruptcy, its founder opened the first showroom in 1953 in the Swedish town of Älmhult, with the hope that by being able to see and touch the furniture, customers would realize the difference in quality from its competitors. With over 1000 people lined up on its opening day, a new modus operandi of selling through showrooms rather than mail order had been created from coping actions born out of operational necessity (Kamprad and Torekull, 1999). The idea behind Ikea's flat-pack furniture is credited to Ikea's former chief designer, Gillis Lundgren. Lundgren was frustrated trying to fit a new, leaf-shaped table he had designed into a small post-war car to take it to a nearby photo studio to be photographed for an upcoming catalogue. He decided to take its legs off. The original idea had come from another Swede, Folke Ohlsson, who in 1949 had patented a ready-to-assemble chair. But after Lundgren convinced Kamprad that it would cut costs for assembly, inventory and shipping, this practice emerged as the cornerstone of Ikea's strategy, allowing Ikea to grow from a small, rural Swedish operation into a multinational player and, in turn, entrenching Ikea's functionalist, geometric and minimalist approach to design, democratizing access to well-designed furniture, and finding resonance in markets further afield (Brownlee, 2016). The morning that Ikea's first 31,000 square foot store opened in 1958, 18,000 people lined up at its doors. The company had not accounted for its popularity, resulting in too few check-outs, frustrated customers and long queues. To cope, staff let customers begin retrieving their own products. It was from this experience that Ikea's self-service model emerged.
As Jarrett and Huy (2018) suggest, 'When we focus on . . . [Kamprad], we overlook important, hidden elements of the company . . . Ikea's success did not result from the kind of planful strategy development that is still taught in some business schools.' To understand SIP at Ikea is thus to eschew the macro/micro distinction inherent in tensions between the SP and SAP perspectives, and to recognize how close-quarter engagement with an extant environment results in local coping actions that become established practices, which subsequently provide the basis for competitive advantage. Hence, the immanence of strategy emergence. Figure 1 summarizes the local coping actions at Ikea that we call Strategy-in-Practices. Ikea's very founding is infused with the socio-cultural sensibilities of Småland and its egalitarian, hard-working and resourceful peasant culture, where employees are referred to as 'colleagues' or 'co-workers' and everyone is encouraged to participate in continuous innovation in its products and services (Jarrett and Huy, 2018). Småland is an area that, historically, had been agrarian and poor, and its egalitarian values of frugality and hard work stem from a history of shared poverty in the area. The Ikea way thus acquires the socio-cultural Swedish notions of social democracy and a functionalist design ethos that offers 'a wide range of well designed, functional home furnishing products at prices so low that as many people as possible will be able to afford them' (www.ikea.com). Indeed, even the contradictions in Swedish narratives about itself (e.g. egalitarianism, social democracy) appear to be embedded in practice (see Lindqvist (2009) for a critical account of Ikea's history). What is striking at Ikea is the degree to which socio-cultural practices mutually reinforce and congeal to produce a modus operandi that enabled the emergence of a coherent strategy, right from the way Ikea's stores are organized. From the children's play area at the entrance, to the arrows on the floor designed to guide customers through its showrooms, marketplace and self-service warehouses where customers retrieve their flat-packs, to the absence of employees on the shopfloor that leads customers to try out the products, to the Nordic names given to its products and the Swedish food served in its cafeterias, everything encourages customers to participate in the practical experience (Lindqvist, 2009). The Ikea example is illustrative of where process and activity are privileged over substance, interactive relatedness over discrete individuality, productive energy over descriptive fixity and emergence over stasis (Rescher, 1996; Whitehead, 1925). It offers an insightful understanding of the relationship between process, practice and outcomes and reveals the immanent strategy (SIP) always already present in socio-cultural practices.

Socio-cultural moorings of Strategy-in-Practices at eBay and Alibaba

The contrasting practices deployed by eBay and Alibaba provide another illustration of the SIP perspective. They do so by demonstrating how socio-cultural influences shaped each organization's modus operandi, which in turn led to the emergence of strategies that ultimately impacted organizational outcomes. eBay was founded in 1995 by Pierre Omidyar and is based in San Jose, USA. It became popular for offering goods through online auctions, with the transactions taking place between consumers themselves. This pioneering online auctions model allowed eBay to create an e-marketplace for private buyers and sellers.
In 1999, Jack Ma founded Alibaba.com as a business-to-business (B2B) website to provide an outlet for millions of small Chinese factories to market their manufactured goods overseas. Since small factory owners lacked the skills and had to rely on state-owned trading companies to sell their goods overseas, Alibaba offered the opportunity to cut out these 'middlemen' by connecting suppliers directly with buyers. In 1999, the whole of China had 2 million Internet users, or less than 1% of the country's population online. Yet by 2002, China was the world's fifth largest online market. Attracted by this exponential market growth, eBay entered China in March 2002 by acquiring a 33% stake in EachNet, a website founded by Shao Yibo, who sought to replicate eBay's online auctions model in China (eBay, 2003). This acquisition made eBay a leading player within Chinese e-commerce. The numbers received a bigger boost in 2003 after the outbreak of Severe Acute Respiratory Syndrome (SARS). SARS convinced millions of Chinese, afraid to go outdoors, to try shopping online instead (Erisman, 2016). eBay's planned mode of strategizing on entering the Chinese market contrasted sharply with Alibaba's emergent approach to strategizing. In the words of eBay's Senior Vice President William Cobb: 'It was quite clear this market was taking off. [Shao Yibo] had studied eBay up one-side and down the other and had really tried to adapt a lot of the eBay principles to the market' (in Clark, 2016: 154). In contrast, Jack Ma's struggle to give coherence to the multitudinous acts of everyday practical coping at Alibaba in order to convince investors is evident when he remarks: 'We don't really have a clearly defined business model yet. If you consider Yahoo a search engine, Amazon a bookstore, eBay an auction centre, Alibaba is an electronic market. Yahoo and Amazon are not perfect models and we're still trying to figure out what's best' (in Clark, 2016: 121). Concerned that eBay would eventually encroach and compete in Alibaba's B2B space, Jack Ma launched Taobao in May 2003. Yet SIP at Taobao differed markedly from that at eBay. Taobao (meaning 'treasure hunt'), unlike eBay, was a platform consisting of storefronts run by individuals or small traders. These micro-merchants could set their stalls up on Taobao for free, and this 'effectively gave these small retailers a place to market their wares online . . . (and) introducing features such as instant messaging and elaborate seller rating systems that allowed for convenience, communication and trust building' (Erisman, 2016: 193). But, unlike eBay, where auction prices start low and get bid up, on Taobao prices often start high and get haggled down. Taobao thus brought the vibrancy of the Chinese street market's much-loved haggling practices to the online shopping experience (Shiying and Avery, 2009). We thus find the Chinese socio-cultural practice of haggling pitted against the US practice of auctioning, embedded in the respective strategic modus operandi of Alibaba and eBay. eBay responded by buying out EachNet, thereby achieving a 95% market share and making it instantly the largest player within Chinese e-commerce (Bloomberg Businessweek, 2004). This large market share prompted eBay to monetize its e-commerce platform by charging merchants a listing fee and introducing commissions on all transactions. Taobao, by contrast, was free from the outset.
Buyers did not have to pay to register or transact, nor did sellers have to pay to list their products or sell online. This 'freemium' model meant that, unlike eBay, Taobao did not have to worry about preventing vendors and buyers from figuring out ways to use the website simply as a place to connect with one another and then conducting their transactions offline or through other means. Erisman (2016: 90) explains: 'Afraid that buyers and sellers might circumvent its system and avoid paying eBay's commissions, eBay went out of its way to keep buyers and sellers blind to each other and unable to communicate with one another before a purchase.' In order to overcome the strategic challenge of the 'trust deficit' between buyers and sellers that was inhibiting Chinese e-commerce participation, eBay and Alibaba again adopted contrasting practices. Alibaba introduced an escrow-based payment system called Alipay. 'Consumers know that when they pay with Alipay their accounts will be debited only when they have received and are satisfied with the products they have ordered' (Clark, 2016: 18). In contrast, eBay responded by acquiring PayPal for $1.5bn and introducing this direct payment service between buyers and sellers. eBay's fee-based auction business model depended on keeping buyers and sellers apart until the sale was processed; its main priority was to improve the velocity of trade by slashing the time internet users spent completing transactions (The Economist, 2004). PayPal helped slash payment transaction times, and eBay offered payment protection on goods sold by eligible traders (those who had built up good reputations within eBay's ranking system). Alibaba's indigenous escrow-based payment system and eBay's trader ranking system transplanted from its US operations represent two contrasting local coping attempts at overcoming the trust deficit within Chinese e-commerce, and therefore contrasting modes of SIP. The resulting strategic divergence - one involving tighter control of transactions (eBay), the other entirely open and loosely regulated (Alibaba) - favoured Alibaba, whose approach reinforced the practical logic of China's street markets and a 'freemium' socio-cultural modus operandi. When Henry Gomez, then eBay's Vice President for Public Relations, publicly questioned Alibaba's strategic practices by issuing a press release declaring '"Free" is not a business model' (Erisman, 2016: 164), Jack Ma, reflecting the 'win-win' Chinese philosophy, retorted: 'Well, there are a lot of ways we can make money . . . right now our website is totally free, because we want to attract new members. Once our members make money, we will make money' (in Erisman, 2016: 31, emphasis added). In December 2006, eBay exited the market by selling off its Chinese subsidiary, eBay-EachNet, to Tom Online, a venture backed by Hong Kong businessman Li Ka-Shing. Figure 2 summarizes the contrasting local coping actions we call Strategy-in-Practices. The eBay versus Alibaba example offers three profound insights into the analytical potential of the SIP perspective. First, since the SIP perspective is anchored in Process metaphysics, it is able to illuminate how the uniquely unfolding dynamics of socio-cultural practices shape strategy emergence and subsequent organizational outcomes in both instances.
By emphasizing 'what is going on', the SIP perspective is able to demonstrate how contrasting socio-cultural practices - auctioning, trader ratings and a fee-based model in the case of eBay, and street haggling, escrow accounts and a freemium model in the case of Alibaba - led to contrasting strategy emergence that then impacted organizational outcomes. Second, the openness to an immanent strategy developed here allows us to appreciate the existence of a modus operandi that enables strategic actors to 'act before everything is fully understood to respond to an evolving reality rather than having to focus on a stable fantasy' (Mintzberg and Waters, 1985: 271). The contrasting modus operandi at eBay and Alibaba are alluded to by Porter Erisman (2016: 233), the former Vice President of the Alibaba Group, when he remarked: 'When Chinese and Western management styles come together, the Chinese management style resembles flowing water, whereas the Western management style resembles the rocks. [. . .] In . . . an entrepreneurial market, going with the flow like water was much more important than standing in the water's way like a rock.' Third, the SIP perspective offers an analytical lens to investigate the strategic 'effectiveness' of contrasting socio-culturally infused practices and modus operandi within firms in specific socio-cultural contexts. While eBay responded by embracing a planned approach to its local coping actions, Alibaba responded by embracing the wider socio-cultural milieu immanent in practices, which allowed its members to feel their way 'through a world that is itself in motion, continually coming into being through the combined actions of human and non-human agencies' (Ingold, 2000: 155, emphasis in original). Alibaba was therefore able to leverage the logic of Chinese street markets, and the dynamic vibrancy of the direct interactions through haggling between Chinese traders that it entails. On the other hand, eBay had sought to apply a logic rooted in US auctions and tightly controlled business practices onto a marketplace imbued with a very different, historically constituted market logic. eBay's strategy was the result of socialization within a set of socio-cultural norms that did not find resonance in the Chinese market. Alibaba's intimate understanding of a distinct set of socio-cultural practices that resonate with Chinese shoppers gave it a strategic advantage.

Summary

As the Ikea and eBay versus Alibaba examples illustrate, the SIP perspective shows how broader socio-cultural practices predispose firms, via a modus operandi, in their engagements with the external world, and this is how strategies emerge: strategy is immanent in socio-cultural practices. This is evident in the emergence of Ikea's strategy of offering self-assembly flat-pack furniture at affordable prices, where the SIP perspective demonstrates how an effective strategy can emerge from a firm's operational strength derived from its history of practical coping. Likewise, the SIP perspective is able to account for the different responses of eBay and Alibaba as they competed for the Chinese market: one based on acquisition, market domination and a principle of auctioning, the other based on evolutionary growth through offering a free platform and capitalizing on the Chinese penchant for haggling. The SIP perspective also explains how and why practices at eBay and Alibaba resulted in the emergence of two contrasting strategies and the eventual organizational outcome.
Strategy-in-Practices perspective: Implications for theory and practice

Both SP and SAP research have struggled to satisfactorily explain strategy emergence (Burgelman et al., 2018; Jarzabkowski et al., 2016a, 2016b; Vaara and Whittington, 2012). The inability to reconcile tensions between the macro/micro dualism and the resulting process/practice quagmire is indicative of the theoretical dissonance between the SP and SAP perspectives. Our SIP perspective, by reverting to a processual understanding of the 'practice turn', circumvents these tensions and offers an alternative 'third way' for understanding strategy emergence that links directly with organizational outcomes. An immanent strategy, as both examples demonstrate, is a strategy born out of socio-cultural predispositions manifested in organizational practices. Practices shape the coping actions taken when dealing with an ever-changing world. Even before organizational strategies are formally explored, discussed and deliberated upon in strategy workshops, reviews or meetings, strategic tendencies are always already influenced by an acquired modus operandi that inevitably shapes the choices arrived at on these occasions. SIP brings the macro inherent in SP research and the micro inherent in SAP research together by identifying socio-cultural practices as the basis for explaining strategy emergence and the organizational outcomes that subsequently ensue. The presence of an immanent strategy explains how and why eBay's strategic approach led to a negative outcome, as well as how and why Ikea's and Alibaba's emergent strategizing led to positive ones. Immanent strategy is the underlying substrate that unifies strategy 'process' and 'practices' and helps explain 'outcomes'. Instead of assuming a two-tier Substantialist reality that attempts to combine micro-practices with macro processes (e.g. Burgelman et al., 2018), a SIP perspective based on Process metaphysics settles for a one-tier ontology of process alone (Rescher, 1996). In so doing, it replaces the troublesome ontological dualism of micro-practices (SAP) and macro process (SP) with a more nuanced understanding of the fundamental co-constitution of practices and process as they enfold and unfold into each other. These are the vital insights implied by the 'practice turn' that have yet to be countenanced by either SP or SAP advocates. SIP, therefore, offers an opportunity to develop an integrative understanding of strategy emergence that begins with the multitude of seemingly innocuous everyday coping activities taken in situ and ends with broader strategic consequences for the organization. This processual understanding that strategy is always immanent in practices is what we mean by Strategy-in-Practices (SIP). The differences between the SP, SAP and SIP perspectives are summarized in Table 1. The SIP perspective has several implications that can further unlock and advance strategy research, theory and practice. First, the SP notion that strategy is something that an organization has, and the SAP notion of strategy as something that individuals in an organization do, set up a false dichotomy that obscures the reasons why practices are immanently strategic (Jarzabkowski et al., 2016b). SIP's notion of immanent strategy clarifies this relationship. This 'immanent' strategy is expressed through the socio-culturally shaped modus operandi inherent in everyday practical coping.
It explains why an organization's coping practices matter in strategy emergence and how these practices often lead to unique and idiosyncratic strategic outcomes. Investigating how practices congeal and give rise to a modus operandi within a firm is therefore a topic that is ripe for future research. Second, while the case for a complementary approach between the SAP and SP traditions in a SAPP perspective (Burgelman et al., 2018: 533; Kouamé and Langley, 2018) is laudable, it is still predicated on a Substantialist metaphysics that misses the wider import of the practice turn in social philosophy and theory. It perpetuates the micro-macro and operational-strategic divide by continuing to view the micro as individual practices, practitioners and praxis, and the macro as behaviours, capabilities, cognition, control systems, organizational performance and so on. This limits the possibility of studying communities, institutions, governments, organizations and societies as 'either features of, collections of, or phenomena instituted and instantiated in practices' (Schatzki, 2001: 6, emphasis added). Therefore, the SIP perspective requires a methodological orientation that allows theorists not just to observe practices, but to actually 'watch what is going on' (Ingold, 2011: 233, emphasis in original). It requires theorists to shun the distanced and disinterested contemplation of 'strategizing' by 'seeing what is out there' (Ingold, 2011: 233, emphasis in original) through 'non-participant observation' (Lê and Jarzabkowski, 2015: 444), and instead opt for techniques that capture and describe effective socio-culturally infused coping practices with an accuracy and sensitivity honed by detailed observation and prolonged first-hand experience. Third, a SIP perspective clarifies the theoretical relationship between process and practice by showing how process and practices enfold and unfold into each other and are culturally imbued. Their separation into either abstract processes within SP approaches, or strategizing episodes within SAP approaches, limits their analytical capacity to explain strategy emergence. A SIP perspective encourages scholars to move beyond such false dichotomies inherent within a Substantialist metaphysics by taking the 'processual reality of strategy as the starting point' (Sminia and De Rond, 2012: 1334-1335). Researching strategy through the SIP lens requires a 'study with practices' (Ingold, 2011: 241, our emphasis) rather than a 'study of practices' (Jarzabkowski et al., 2016a, 2016b). This necessitates deploying an armoury of research approaches and methods to uncover the modus operandi within organizations (e.g. Burgelman et al., 2018; Johnson et al., 2007), and by doing so to seek a more fine-grained understanding of the strategy immanent in socio-culturally infused practice. Fourth, while identifying the relationship between practice, process and organizational outcomes remains elusive to strategy scholars, a SIP perspective enables us to interrogate consequents from a practice perspective, and this allows different types of outcomes to aggregate into performance through the logic of practice. While this might require, to some degree, the 'leap of faith' called for by Langley (1999), from a SIP perspective, organizational outcomes, be they performance or otherwise, are effects of wider historically constituted socio-cultural practice-complexes that are themselves merely momentary instantiations of an ever-changing organizational reality.
Finally, a key implication of the switch from a Substantialist to a Process metaphysics is a renewed appreciation that the central foci of strategy research - institutions and organizations - are brought into being and sustained by socio-cultural practices. The SIP perspective encourages researchers to 'relax their core assumptions about the reified nature of organizations and institutions', moving from one where organizations and institutions are conceptualized as 'enduring formal objective structures detached from the actors who authored them' to one where such social entities are temporarily stabilized effects of socio-cultural practices (Suddaby et al., 2013: 338). It can therefore be theoretically deployed to pry open the black box of the 'strategizing' and 'institutional work' undertaken in creating and sustaining 'organizations' and 'institutions'. By enabling theorists and practitioners to probe the 'what', 'why' and 'how' that make the underlying socio-cultural practices strategic, SIP offers refined insights into the inner workings of strategy emergence.

Conclusion

Both SP and SAP underestimate the significance of the practice turn in social philosophy and theory. In order to restore this significance and to overcome the theoretical impasse between the two, our article investigates the relationship between process and practice and their links to strategy emergence and organizational outcomes (e.g. Burgelman et al., 2018; Chia and Holt, 2006; Chia and MacKay, 2007; Vaara and Lamberg, 2016; Vaara and Whittington, 2012). Our key argument here is that a processual understanding of the 'practice turn' is necessary for fully appreciating how the everyday operational, the socio-cultural and the strategic can be coherently linked together in an integrative framework for explaining strategy emergence. Therefore, what differentiates the SIP perspective from SP, SAP or SAPP is an underlying metaphysical outlook that embraces process as the basis of reality and the notion of practices as our primary means for extracting order, stability and coherence from an otherwise fluxing and uncertain reality (e.g. Chia, 1999; Langley and Tsoukas, 2010; MacKay and Chia, 2013; Tsoukas and Chia, 2002). Such an immanent SIP perspective allows us to see how socio-cultural predispositions inevitably shape our strategic tendencies and how everyday organizational coping actions taken at operational levels can feed into and influence strategic emergence and outcomes. An immanent SIP, therefore, offers an alternative 'third way' of explaining strategy emergence and organizational outcomes. It helps us to acknowledge that strategizing activities are themselves dependent upon prior practice-shaped, socio-cultural predispositions, so that agents are never fully autonomous in their strategic deliberations and hence in the choices made. From this SIP view, the actions of practitioners are simultaneously constrained and enabled by their acquired modus operandi. This modus operandi originates from a seemingly innocuous multitude of local coping actions taken at the firm/environment interface that subsequently congeal into an established set of sensitivities and embodied practices, which then provide the capacity to respond to the uncertainties of an ever-changing environment. It is this modus operandi, as an immanent SIP idiosyncratic to an organization, that makes possible the strategy emergence captured in our SIP perspective.
The SIP perspective developed here seeks to go beyond the idea of practice as the 'doings' of strategy actors (Jarzabkowski et al., 2016b) and to overcome the macro/micro distinction implicit in SP and SAP perspectives. It shows how strategies can emerge inadvertently because of the immanent presence of a socio-cultural modus operandi that provides the generative principle behind strategy emergence. We have shown that such a perspective exhibits a fidelity consistent with the more radical implications of the 'practice turn' in social philosophy and theory (e.g. Bourdieu, 1977, 1990; De Certeau, 1984; Dreyfus, 1991; Rouse, 2006; Schatzki et al., 2001). We also demonstrate how such a perspective can have a significant impact on strategy scholarship and the understanding of strategy emergence (Pettigrew, 2012). The SIP perspective offers scholars and practitioners new conceptual and empirical frontiers for theorizing strategy emergence that resonate with the lived experience of practitioners. Future research can direct attention towards questions related to the immanence of strategy as expressed in a socio-cultural modus operandi, the advantage-gaining nature of practices, and organizational outcomes as an aggregation of the innocuous coping actions of numerous actors. In sum, from a SIP perspective, practices are collectively embodied sets of dispositions that make us who we are and shape how we respond to the circumstances we find ourselves in. In effect, they contain an immanent strategy directed towards gaining advantage in any circumstance we find ourselves in. The same applies to organizations or society. From this perspective, practices develop regularities, 'patterns in streams of actions' (Mintzberg and Waters, 1985: 257), that can be construed as an immanent strategy. In this regard, it is easy to see how a coherent strategy can also emerge inadvertently and non-deliberately through the coalescing of coping actions taken at the coal-face of an organization. This is the key insight that the practice turn in social philosophy and theory affords us; it enables us to reintegrate the wider social and the operational with the strategic and the outcomes they subsequently produce.
Impact of perceived discrimination and coping strategies on well-being and mental health in newly-arrived migrants in Spain

Objectives

To explore how perceived discrimination impacts the emotional well-being and mental health of newly-arrived migrants in Spain, and to identify the coping strategies and behavioral changes used to deal with perceived discrimination.

Design

102 individual audio-recorded in-depth qualitative interviews were conducted. The interviews were transcribed and analyzed through content analysis.

Results

Negative emotions related to perceived discrimination included disgust, sadness, fear, loneliness, humiliation, a sense of injustice, rage, feeling undervalued or vulnerable, and mixed emotions. Changes in behavior due to perceived discrimination comprised westernization or cultural assimilation, creating a good image, avoiding going out or going out alone, hypervigilance, ceasing to participate in politics, self-sufficiency, positive adaptation and, paradoxically, becoming an oppressor. The identified coping strategies to deal with perceived discrimination were ignoring or not responding, isolation, self-medication, engagement in intellectual activities, leisure and sport, talking to or insulting the oppressor, denouncement, physical fight or revenge, seeking comfort, increasing solidarity with others, crying, or using humor. Discrimination-related stress and related mental health problems were also conveyed, as were challenges related to substance abuse and addictive behaviors, mood, and anxiety.

Conclusions

Findings establish initial evidence of the great impact of perceived discrimination on the health, emotional well-being, and behavior of newly-arrived migrants in Spain, highlighting the need for targeted policies and services to address the effects of discrimination in this population. Further research is needed to explore more closely the causes and effects of perceived discrimination on mental health, in order to develop more targeted and effective interventions.

Introduction

Nowadays, Spain is one of the leading destinations worldwide for international migrants, although it was not established as a receiving country for migration until the early 1980s [1]. In 2021, a total of 457,701 people arrived in Spain, bringing the number of foreigners living in Spain to 5,440,148 [2].
In recent years, international scientific evidence has pointed out the role of racism - structural, cultural, and individual - in health [3,4]. More specifically, perceived discrimination has been conceptualized as a significant stressor affecting the mental health of the global migrant population [5,6]. As a possible explanation, Sam & Berry [7] have extensively described acculturative stress as arising from adapting to the host country's cultural norms and values while still living by the standards of the country of origin. Most studies have associated experiences of discrimination with significantly higher levels of mental health conditions such as anxiety, depression [8], substance abuse [9], or suicidal ideation [10]. Equivalently, perceived discrimination has shown correlations with depression and anxiety and, more strongly, with psychological distress [4,6]. In terms of subjective well-being, Hadjar & Backes [11] found a great disadvantage among first-generation migrants. Paradoxically, other authors have concluded that the life satisfaction of the migrant population is generally positive, depending mainly on cultural origin (with higher levels of well-being associated with closer cultures) and gender. Moreover, the perception of stigma and discrimination hinders access to mental health services; it contributes to unmet mental health needs [12] and to legal and language barriers in accessing healthcare or primary care services [13]. This is supported by empirical evidence showing how sociodemographic and socioeconomic disadvantage (i.e., gender, race, socioeconomic status, and age) impacts health. Still, these factors alone cannot explain inequities in migrant populations' health [14]. Despite the extensive body of research supporting a higher prevalence of mental health problems among migrant communities, few studies have focused on the types of perceived discrimination and subjectively perceived rejection experiences in the Spanish context. Besides, there is a lack of studies regarding the impact of these incidents on migrants' mental health and emotions. In addition, scarce literature examines the associated behavioral changes and coping strategies. In the Spanish territory, migrant communities experience a greater burden of mental health disorders, perceived discrimination, and negative feelings like rejection [15], compared to those born in Spain [16]. According to Gil-González et al. [17], discrimination may constitute a risk factor for health in migrant workers and could explain some health inequalities among migrant populations in Spanish society. Previous research has highlighted that structural stigma and minority stress mechanisms can encourage the deterioration of the mental health of migrants in the host country [12,14,18]. Additionally, a high percentage of the migrant sample interviewed by Agudelo-Suárez et al. [19] reported perceived discrimination, associated mainly with their condition of being a migrant, but also with their physical appearance and with their workplace. These authors also emphasized that migrants' health worsened after arriving in Spain, compared to their health in the country of origin. Nationwide, Sevillano et al.
[16] detected that perceived stress was the best predictor of physical and mental health. While this concept relates negatively to a sense of coherence and satisfaction with life, it relates positively to psychological distress and feelings of social exclusion. Some migrants under extreme migratory grief manifest somatizations and confusional and anxious-depressive symptoms, but also mourning due to separation, feelings of failure and loneliness, guilt, emotions associated with loss of status, the breaking of solidarity ties and extensive communities, fear of punishment, and hopelessness [20]. Furthermore, there is often an inadequate response from health care systems, and at a bureaucratic level, migrants often have to overcome challenging requirements concerning access to health treatment and other basic means [21]. At the base of many of these health determinants, stress is defined as a person-environment, biopsychosocial interaction, wherein environmental events (stressors) are first appraised as unwanted and negative, and require some action to cope with when adaptation fails [22]. The transactional stress model considers the existence of an interaction between the individual and the environment. The impact of the stressor firstly depends on the cognitive appraisal and the meaning given to it. Secondary appraisal assesses the abilities or resources available to cope with the event. If the individual interprets the stressor as negative or threatening, this evaluation predisposes the development of coping strategies. Folkman & Lazarus [23] conveyed that coping refers to the cognitive and behavioral responses individuals use to manage or tolerate stress. Additionally, the authors define two types of coping functions: the first is aimed at problem-solving, while the second is focused on reducing or managing emotional distress. As commented above, perceived discrimination has been conceptualized as a stressor. However, each individual's response will be different depending on the prior perception of the stress situation [24]. Lahoz & Forns [25] found that people who perceived themselves as being discriminated against tended to use more cognitive avoidance strategies to face the situation. In parallel, other studies have found that identifying oneself with the underrepresented group can mitigate the negative consequences of racial prejudice and lead to a positive impact on well-being [26,27]. Considering the complexity and diversity involved in the analysis of the effects of migratory processes on mental health [28], migrant narratives were analyzed within a convenience sample, aiming to describe their experiences of perceived discrimination and migration-related stress. Since significant differences have been found in association with different types of psychosocial vulnerability [29], gender and culture were considered when recruiting the sample for the study. It is hypothesized that perceived discrimination may increase the risk of mental health-related issues and the subsequent loss of emotional well-being. Participants who experience perceived discrimination may report difficulties associated with psychological distress, such as negative emotions, behavioral changes, discrimination-related stress, or mental health problems. Some individuals may refer to coping strategies to deal with the perceived discrimination.
In connection therewith, the goals of the present study are: (1) to identify the negative emotions, coping strategies, and behavioral changes related to perceived discrimination in newly-arrived migrants in Spain, and (2) to explore how perceived discrimination impacts their emotional well-being and mental health.

Study design

The current study used data from the MigraSalud project [30]. The MigraSalud project of the Parc Sanitari Sant Joan de Déu (in collaboration with the Juan Ciudad Foundation) comprises four independent studies of national scope financed by the Ministry of Labor, Migrations and Social Security in Spain. The project was born in 2018 to provide new scientific knowledge on post-migration well-being and health, with the ultimate purpose of contributing to society by improving healthcare practices. The present work is a qualitative study designed to understand the discrimination-related stress of newly-arrived migrants in the Spanish territory. A qualitative design was used due to the need to emphasize personal narratives and experiences in order to better comprehend the cultural reality. An in-depth interview instrument with 100 open-ended questions and 14 sections was created. The interview covers sociodemographic questions, employment conditions, the journey to Spain and the post-migration period, attachment, adverse life experiences, perceived discrimination and stress, social network, identity, intercultural mediation, the public health care system in Spain, and the COVID-19 pandemic.

Setting and sample

A snowball sampling strategy was used, recruiting seeds from each social group to participate in the interviews and help identify other eligible newly-arrived migrants for recruitment. The recruitment channels were mainly non-profit organizations settled in three Spanish cities (Madrid, Valencia, and Barcelona). Participants were also recruited from the MigraSalud project webpage [31], personal communications, and social media. The process of recruiting participants continued until thematic saturation was reached, which depended on including enough study participants (nearly 100) from the relevant autonomous communities (Madrid, Valencia, and Barcelona) in Spain and from the newly-arrived migrant vulnerable groups, constrained by the languages available in our study. The inclusion criteria were (1) being 18 years or older, (2) not being a Spanish citizen, (3) having lived in Spain for less than 5 years, (4) living, studying, or working in Barcelona, Madrid, or Valencia, and (5) speaking Spanish, English, Chinese, Urdu, French, or Arabic. The cities of Madrid, Barcelona, and Valencia were chosen since they are the most populated cities in Spain and can be compared with one another [32].

Ethical statement

Ethical approval was provided by Parc Sanitari Sant Joan de Déu, Barcelona, Spain (PIC . Participants were thoroughly informed about the objectives and procedures of the study. Each respondent had to provide written informed consent before participation. All documents, including informed consent forms, were available in English, Spanish, Chinese, Urdu, French, and Arabic.
Data collection

Data were collected through 102 individual audio-recorded in-depth interviews from February 2020 to November 2020. This sample size was deemed sufficient for the qualitative analysis and scale of this study; even so, reasons to stop recruitment included time and financial constraints. A guide for the semi-structured interview was developed specifically for this research and used by the facilitators. Questions ranged from daily-life discrimination experiences to health and emotional well-being concerns. Due to the COVID-19 pandemic, interviews were administered by phone or through online video-call platforms, ensuring data encryption for security and privacy reasons. Five interviews were conducted in person, as some participants did not know how to use information technologies. Only the participant and the interviewer were present during the interviews, to avoid the influence of the presence of non-participants. Consent forms and semi-structured interviews were available in English, Spanish, Chinese, Urdu, French, and Arabic because of the regional migration demographics [33]. Interviews lasted approximately two hours (for more detailed information, see S2 Fig). Interviewers were native speakers and asked open-ended, culturally appropriate questions. Before getting into the field, exhaustive training on how to conduct in-depth interviews was provided to the interviewers. Training on confidentiality, transcription, and translation processes was also ensured. Interviewers were also fluent in Spanish. Later, the interviewers transcribed the qualitative data in their native tongues, and translations into Spanish were produced. Usually, the same researcher who conducted the interview undertook the transcription and translation into Spanish, except for the Spanish-language interviews, which in some cases were transcribed by a different researcher. Participants were interviewed in their native language and by an interviewer from a similar ethnic group. BMM, MCA-B, YH, REH-E, FV and H conducted the interviews, and they had no prior personal relationship with any of the participants (see S2 . Although 105 interviews were conducted, 3 interviews had to be excluded from the analysis due to recording problems that prevented their transcription (N = 2) or failure to meet the inclusion criteria (N = 1), yielding a final sample of 102 participants. As participation was voluntary and the interview was performed in one session, the study had no refusals or drop-outs.
Data analysis

Two researchers who did not participate in the interview or transcription process independently analyzed the participants' responses. Through conventional content analysis, they could gain information based on participants' unique experiences without imposing preconceived categories or theoretical perspectives. Independently, the two coders read and re-read the transcripts several times to achieve an understanding of the content of the interviews, while writing observations and highlighting text that appeared to be relevant to the aim of the study. As they read, they started to identify codes based on patterns, similarities, differences, and relationships. Reflective remarks and memos were also used for the analysis. After independent open coding, the two coders met to discuss the preliminary codes until reaching a consensus. Code definitions were used as the central criteria for assigning codes. Then, they kept coding by using these codes and adding new ones when they encountered data that did not fit into an existing code. During the coding process, the coders had weekly meetings to discuss new codes and group them into sub-categories and categories. In cases in which there were doubts or a lack of context, coders also had meetings with interviewers (for more detailed information about the coding process, see S1 and S2 Figs). Agreement and in-depth discussions among researchers guaranteed intercoder reliability.

Distribution of participants

Table 1 shows the characteristics of the overall study sample and by region of origin. Considering a non-binary gender approach, we recruited 48 (47.1%) self-identified women, 51 (50.0%) self-identified men, and 3 (2.9%) interviewees who reported another gender identity. The age of the participants ranged from 18 to 57, with a mean age of 30.67 (SD = 9.81). Most of the participants had lived in Spain for less than 3 years on average. Splitting participants by origin, 26 in-depth interviews were conducted with Chinese participants, 21 with Arabic participants from Morocco, 6 with interviewees from other African countries (i.e., Algeria, Cameroon, Mali, Nigeria, Senegal, and Sudan), 40 with participants from Latin America (i.e., Argentina, Brazil, Chile, Colombia, Cuba, Dominican Republic, Honduras, Mexico, Peru, and Venezuela), and 10 with participants from Pakistan and India. Most participants (94 of 102) reported having experienced some type of discrimination since their arrival in the host country. As for those who did not report having been discriminated against when asked directly, their later narratives revealed that they had in fact experienced some form of discrimination; however, it had not been perceived as such. This could be attributed to a lack of knowledge of the concept, to not being aware of the phenomenon, or to stigmatization, as some participants reported that, for them, it implied low self-esteem and weakness. An example of this experience was described by Participant 18 (P18):

P18: I've never experienced it [discrimination], but similar things have happened to me, for example there are people in the street with their phone and they are scared or walk away from the Moroccan [referring to himself], or you ask someone a question, and he/she doesn't answer. . . .
Findings

Five emerging categories related to the aim of the present study were found post hoc: (1) 'Negative emotions due to perceived discrimination', (2) 'Changes in behavior due to perceived discrimination', (3) 'Identified coping strategies to face discrimination-related stress', (4) 'Discrimination-related stress', and (5) 'Mental health problems due to discrimination-related stress'. A total of 48 codes were agreed upon by the two researchers included in the present study. The 11 sub-categories that emerged from the coding process facilitated the categorization into the 5 main categories (see S1 Table). Most of the participants openly communicated having experienced negative emotions due to perceived discrimination, and some of them reported a decrease in their functional well-being. In general, most of the participants reported changing their behaviors after perceiving discrimination, and many of them were also able to identify coping strategies to deal with the discrimination phenomenon. Not all the participants linked health problems or emotional distress to their perceived discrimination. However, when the interviewers dug into some discrimination narratives, it was possible to distinguish new descriptions of discrimination-related stress and mental health problems.

3.2.1. Negative emotions related to perceived discrimination. When participants were asked about their thoughts, feelings, and emotions just after being discriminated against, the majority of them reported negative emotions. Overall, when a person is subject to discrimination, their experience is unique and unrepeatable. Even though the range of emotions varies from one person to another, it was possible to identify key feelings and emotions that were more prevalent in the narratives of the people interviewed. 'Disgust', 'sadness' - which contains disappointment and feeling bad - and 'fear' - which includes worry, anxiety, and fear of being discriminated against - were described when the participants felt that, although they had not done anything wrong, they could not avoid being discriminated against. 'Loneliness' - including feeling lonely, marginalized, misunderstood, rejected, and abandoned - 'humiliation' - which also embraces feeling offended or denigrated - 'hypervigilance' - containing the sense of being judged or watched by others before acting - and 'mixed' emotions (i.e., when two or more feelings, emotions, or moods occur together) were also stated by some participants, like Participant 1 (P1):

P1: When this happens to me [being discriminated against] I feel bad all day. It also makes me worry about something similar happening to me again. [26 years, from China]

'Shame' and 'guilt' were some of the emotions conveyed mainly by women from the Chinese community, usually after being excluded or ridiculed by a group of people from their work or study place. The codes 'sense of injustice' and 'rage' - which includes anger, revenge, or impotence - were repeatedly expressed by young men from African countries, and were related to discrimination that mainly happened on public transportation:

P2: Once, some friends sneaked into the subway because they didn't have transportation tickets, so the subway security guards cursed us and asked for our documents, and they started saying 'these MENAS [Spanish acronym for unaccompanied minors, used derogatorily], they are criminals', and that is not true, and neither is the word MENAS fair. . . and then I must keep quiet, because if I speak without witnesses for the facts, I won't get anything. . .
[19 years, from Morocco]

Lastly, almost all interviewees described that they changed the way they felt about themselves. Some of the most repeated feelings were 'feeling undervalued or inferior' [i.e., feeling worthless or that others do not value you as they should] [quote by P3] and 'feeling vulnerable', which was mainly attributed to being undocumented and to the lack of institutional or social support [quote by P4].

P3: ...when we get on the public bus, people move away from us, it's obvious. [...] I feel inferior, I feel like a shit, they treat me like I'm disgusting. [35 years, from Cameroon]

P4: I feel very bad because we don't have any relatives who can defend us, nor acquaintances. We don't have anyone who can help us to denounce it or anything. I am feeling bad. And that's all about it. [18 years, from Morocco]

3.2.2. Change in behaviors due to the perceived discrimination. Perceived discrimination not only causes immediate negative emotions but also leads participants to change their behaviors to avoid being discriminated against in the future.

Some changes in behavior, mainly seen in individuals from Pakistan, China, and Morocco, focused on being more accepted by the local community [quotes by P5 and P6] through 'westernization or cultural assimilation' [i.e., changing their lifestyle and dress code to look like a local and attract less attention] and through 'creating a good image' of oneself and one's culture.

P5: There are people [referring to migrants] who steal and others who don't, but people [referring to locals] see all of us the same... especially in the subway. I tell my colleagues that if someone stares at you badly, you just ignore it to avoid problems. I also advise them to change the way they dress and their hairstyle, because unfortunately, people glance at you for your physical appearance instead of looking at you for your behavior or your sufferings in life. [19 years, from Morocco]

P6: They have no respect for Islam. People don't speak well about my religion, so I avoid conflict situations; for example, I don't have a beard because it is not well seen. [47 years, from Pakistan]

Changes in behavior to avoid discriminatory situations, such as 'not going out', 'avoiding going alone' [quote by P7], 'hypervigilance' [i.e., changing routes or places where they used to go and being very careful and more concerned over what they say or do], and 'stopping participating in politics', were also mentioned. Additionally, a participant from Senegal noticed that he had become more 'self-sufficient', showing less care for or solidarity with others.

P7: The lesson learned from this experience is to avoid sitting alone in the subway. I have to be in a crowd of people. If I am alone, I am more likely to be offended without company or support around me. [22 years, from China]

Finally, a few participants adopted extreme behavioral changes. On the one hand, 'positive adaptation' was assimilated by some individuals as a healthy functioning posture [quote by P8], while a couple of participants acquired a negative adaptation by 'becoming the oppressor' and repeating the same patterns as their bullies.

P8: Thanks to them [people who discriminated against her] I can better manage the stress and pressure of discrimination. The other day I defended another person against a thief on the street. If it were earlier, I would not have had the courage. [26 years, from China]

3.2.3. Identified coping strategies to face discrimination-related stress.
Once a participant identified a discriminatory experience, the interviewers inquired about the coping strategies the person put in place to deal with it. Generally, each participant had a singular way of dealing with perceived discrimination afterwards, but parallel strategies among members of the same nationality background could be distinguished.

Internalized coping strategies were defined by the research team as those strategies that did not require interaction with others. As internalized coping strategies, participants conveyed 'ignoring' or 'not responding' to the act or acts of discrimination. Reasons described by the participants included, for instance, fear of having legal documents revoked, or of not obtaining them because of possible accusations of uncivil behavior. Moroccan unaccompanied minors like P9 felt extremely vulnerable and mainly used these coping strategies:

P9: Because if you don't know how to speak the language, and you are undocumented, it's better to ignore and that's it... [19 years, from Morocco]

Chinese individuals in the academic context repeatedly reported 'isolation' as an adopted coping strategy, which includes detaching oneself from the bully or bullies:

P1: I isolate myself, observing everything. For example, when I was abroad on my Erasmus time, I just left my place and found another one where I felt more accepted. [26 years, from China]

A small number of individuals from a variety of nationalities mentioned 'self-medication' to deal with daily life discrimination:

P10: I am not taking alcohol anymore, but I am taking sleeping pills again. I take them every night before sleeping time. Then, I get up as if I were a ghost; you know, my body feels dead. I do some exercise to sleep, but that does not help me a lot. [24 years, from Sudan]

Finally, mainly Chinese participants reported 'intellectual activities, leisure and sport', which includes a range of internalizing strategies such as writing an introspective diary, reading, listening to music, exercising, and cognitive training.

Externalized coping strategies were operationalized as social behaviors that involved others. They included 'talking with the oppressor' [i.e., searching for a constructive dialogue with the bully and requesting to solve common problems], 'insulting the oppressor' [which mainly happened in the street with an unknown attacker], 'denouncing' [e.g., using social media platforms, recording the discrimination situation on their phones to obtain public attention, calling the police, etc.], 'physical fight' or 'revenge' by attacking the bully, 'seeking comfort' [in this case, trusting a meaningful person; quote by P11], and 'increasing solidarity with others'.

P11: I seek comfort with my boyfriend. My boyfriend has many experiences managing social situations and interactions, and with him, I can seek comfort. [31 years, from China]

Mixed coping strategies were identified that were used either individually or in a social context. Examples are 'crying', as some individuals cried in isolation while others [mainly from Latin American communities] preferred to share their crying with somebody else, and 'using humor', as some of the participants used this resource as an internal managing thought, while others externalized the use of humor by engaging with another person.

3.2.4. Discrimination-related stress. Some of the interviewees reported that being discriminated against increased their emotional distress response:

P12: I am feeling more stressed out and nervous...
when I think about it, I just want to ignore everything. [18 years, from Morocco]

P13: All these issues are generating huge emotional and mental distress for me. Very huge [emphasizing the word huge]. To begin with, the relationship with my tutor was very difficult. In fact, he is the most important person with whom I have to interact here abroad. Our interpersonal relationship is by far the most difficult for me. For this reason, it generates a lot of emotional and mental distress for me; I have to think about each word I say, everything I do, and I have to behave carefully. Recurrently, I have to avoid his comments, and I have to be diplomatic with him constantly. [32 years, from China]

3.2.5. Mental health problems due to discrimination-related stress. Participants disclosing discrimination-related stress mostly described mental health-related problems.

'Problems related to substance abuse and addictive behaviors' were conveyed by some of the interviewees, for example, 'alcohol and marihuana abuse' [quote by P14], 'misuse of anxiety medications', and 'compulsive video gaming'.

P14: I get recurrent nightmares because of my stress. The way I deal with it is drinking, and complaining with friends. [35 years, from China]

'Problems related to mood' encompasses codes such as 'sadness', 'depression', and 'suicidal thoughts', as explained by P15:

P15: I feel discriminated against, and I feel very bad. I feel like a dog. My neighbor's name is Juan [a typical Spanish name] but my name is [an Arabic name]. Sometimes I think about suicide, because I feel very humiliated. [46 years, from Morocco]

'Problems related to anxiety' were also described. Anxiety was operationalized to also include overthinking and excessive worry. Some of the participants used the word 'anxiety', but others, especially from Morocco, where the term is not culturally present, used expressions such as 'nervousness' or 'having nerves'. Behavioral symptoms of anxiety, such as 'nail biting', were pointed out. 'Eating compulsively' and 'sleeping problems' narratives were also outlined. In the following case, a man was talking about the systemic discrimination he experienced and how it affected his health:

P16: Even if you are an undocumented migrant, you deserve a health care access option. Like everyone else, right? If I have to pay for health care services it is not a problem, but right now, it costs a lot [...] so, what can I do? I am afraid. I can't get my checkups done because I can't afford this option... [43 years, from Argentina]

Another participant disclosed 'psychological enuresis' as an anxiety problem:

P10: I recurrently pee in my bed during nighttime, you know? Sometimes I am afraid of everyone, I can't take the bus, I can't walk around the streets, I feel everyone is against me, you know? Everyone is against me... [24 years, from Sudan]

Lastly, 'not classified problems', for example 'headaches', were conveyed:

P17: Yes, this situation is really creating stress on me, and... well, I began having headaches. [34 years, from Argentina]

In general, physical health issues were not directly linked with the stress of being discriminated against. In contrast, a connection between mental health narratives and perceived discrimination did exist. Physical health problems were instead associated with immigration-related issues such as harsh labor conditions [e.g., having body or back pain]. The only situation in which a physical condition was described as related to the perceived discrimination was the headaches mentioned by a Latin American individual.
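Since the Findings above hinge on a code-to-category hierarchy, a minimal Python sketch may help visualize how coded interview segments roll up into the five main categories. The main category labels are taken from the study; the sub-category names and the code-to-sub-category mapping shown here are an invented, illustrative fragment, not the actual 48-code scheme documented in S1 Table.

```python
from collections import defaultdict

# Illustrative fragment of a codebook: a few codes mapped to sub-categories,
# and sub-categories mapped to main categories. Sub-category names are
# hypothetical; only the main category labels come from the study.
code_to_subcategory = {
    "sadness": "core negative emotions",
    "fear": "core negative emotions",
    "feeling inferior": "self-perception changes",
    "ignoring": "internalized coping",
    "to denounce": "externalized coping",
}
subcategory_to_category = {
    "core negative emotions": "Negative emotions due to the perceived discrimination",
    "self-perception changes": "Negative emotions due to the perceived discrimination",
    "internalized coping": "Identified Coping Strategies to face Discrimination-Related Stress",
    "externalized coping": "Identified Coping Strategies to face Discrimination-Related Stress",
}

# Roll a handful of coded interview segments up to their main categories.
category_counts = defaultdict(int)
for code in ["fear", "ignoring", "sadness", "to denounce"]:
    category_counts[subcategory_to_category[code_to_subcategory[code]]] += 1
print(dict(category_counts))
```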
Main findings

The data indicate that perceived discrimination negatively impacts the well-being and mental health of newly-arrived migrants in Spain. The negative expressions narrated span a broad continuum, ranging from negative emotions and psychological distress to more severe mental health-related concerns. In this qualitative study, the data showed that newly-arrived migrants in Spain have internal, external, and mixed coping strategies to deal with perceived discrimination. As a main finding, changes in their behavior may occur once they perceive unfair treatment.

Interpretation

The interviews revealed several negative emotions. Similar findings were reported by Crocker [34], who found emotional suffering linked to stress, loneliness, fear, depression, and trauma in migrants, which have been documented to contribute to mental health risk [35].

In this same line, the 'shame' or 'guilt', 'sense of injustice', or need for 'vengeance' reported by the study sample are secondary emotions that imply feeling something about the primary feelings of 'rage' or 'anger' expressed in relation to discrimination. As stated by Braniecka et al. [36], these outputs are useful in terms of functional interpretation and emotional adaptation, as they may facilitate adaptive coping by promoting the motivational and informative functions of emotions. Fostering solution-oriented actions instead of avoidance, facilitating access to insight, supporting better narrative organization, and providing a resilient attitude towards stress and difficult experiences may also ease adaptive coping [36].

The results obtained in terms of coping strategies to face perceived discrimination and the related stress are not based on any pre-established scale, which allowed responses to be obtained without preconceived ideas. Most coping strategies mentioned by the interviewees largely match categories established in previous studies, like the eight dimensions described by Folkman et al. [37] [i.e., confrontive coping, distancing, self-controlling, seeking social support, escape-avoidance, planful problem-solving, positive reappraisal, and accepting responsibility]. The dimension 'accepting responsibility' was the only one not represented in the study.

Although previous studies have analyzed the coping strategies used by a migrant community in a specific country [6,27], few investigations have focused on the different forms of coping among distinct nationalities that coexist in the same context. Although this research has been able to discern some diversity in the coping approaches of the studied communities, further investigation is needed to delve into the nature of these differences.
The manifestations of 'westernization or cultural assimilation' and the efforts described to 'create a good image' found in the present analysis have also been identified by Bhugra & Becker [38], who studied how an individual's cultural identity may be lost during the assimilation process within the host society that follows acculturation [39]. Acculturation frequently results in stress, self-esteem problems, and mental health damage [40]. Changes in behaviors to avoid being discriminated against were part of the narrative repeated throughout the interviews. Individuals in the present study described feeling 'vulnerable, worthless, inferior, or undervalued'. These concepts have been explained by Leary & Springer [41] as the result of interactions that involve relational devaluation, which often causes hurt feelings. This could also explain the lack of social support perceived by the interviewed participants and their reaction of shifting to a 'self-sufficient' attitude as a defense mechanism against discriminatory contexts and events.

The 'emotional distress response' expressed by the participants as a reaction to discrimination has previously been acknowledged by Wallace et al. [42]. The authors reported migrants feeling unsafe and vigilant. Migrants also showed anticipatory stress about a possible future racist encounter. Anticipatory stress increased the probability of avoiding certain spaces, suggesting that past exposure to racial discrimination, or awareness of racial discrimination experienced by others, can continue to affect individuals' mental health after arriving in the host country.

Concerning the mental health outcomes raised by the participants' answers, 'problems related to substance abuse and addictive behaviors' have been captured by other authors. For instance, Borges et al. [43] found patterns of substance use disorders when facing loneliness, social isolation, stress, and discrimination linked to broader social changes associated with transnational migration, naming direct exposure to substance use opportunities, the transfer of social norms of substance use, and economic means as factors involved. Horyniak et al. [44] also found regional and global differences in patterns of substance use, which may be influenced by local contextual factors such as the availability of substances and social norms.

The mood symptoms expressed by the interviewed sample in this study agree with the literature of the last decade. Fortuna et al. [45] and Wolf et al. [46] described experiences of 'depression, trauma exposure, pessimism, sense of failure, guilt feelings, punishment feelings, and suicidal thoughts'. The present results also support previous findings related to 'anxiety problems'. Szaflarski et al. [47] explain them as related to stress and the preference for socializing outside one's racial-ethnic group, while in other studies, Sapmaz et al. [48] and Mares [49] show varying rates of feeding and sleeping problems, nail-biting, enuresis, and other regressive symptoms among children, as the present investigation does with adults.

In sum, perceived discrimination may constitute a risk factor for mental health deterioration and psychological distress. Most of the experiences described by the target sample in the present study match the above findings, in which the frequency of perceived discrimination events relates negatively to well-being levels in the migrant population [3] and different coping strategies are displayed to face these experiences.
Strengths and limitations

4.3.1. Convenience sampling. This study has several strengths and limitations. Convenience sampling was chosen as the recruitment method, as it is cost- and time-efficient and simple to implement in one-year funded projects [50]. Because the topic was relatively new in Spain, and scarce literature on it was available in the Spanish context, the team wanted to begin from scratch and ask newly-arrived migrants directly about discrimination and mental health. Nevertheless, this sampling method lacks generalizability, and the results cannot be transferred to the general Spanish population or across the different cultural communities of newly-arrived migrants in Spain [50]. Although the present study focused on three cities that are similar in terms of immigration, these cities also present some local particularities [such as the presence of co-official languages in Barcelona and Valencia] that may have an impact on the integration process due to language barriers.

4.3.2. Recruiting. Considering that convenience sampling is often not reflective of the target population [50], it is essential to mention the limitations in this regard. In general, older adults are underrepresented in this research. This may be because the recruitment channels used were tied to particular age groups. Moreover, Idescat [51] indicates that the average age of registered migrants in Catalonia from the countries interviewed is 30 to 44 years. The proportion of migrants over 60 years old in Catalonia is low [less than 10%]. These numbers may be connected with the idea that Spain is a relatively new host country. Idescat's [51] data are salient because most of the interviews were conducted in Catalonia, except for the interviewees from Latin America [N = 38], who were mainly spread across Valencia [N = 16] and Madrid [N = 13].

With an intersectional perspective, the individuals recruited in the present study were of different nationalities, races, ethnicities, classes, religions, cultures, gender identities, and sexual orientations. In the Latin American sample, diversity in gender and sexual orientation is well represented [N = 6]. The team had difficulties incorporating disability in the sample [N = 1].
The underrepresentation in terms of gender in some communities hinders the identification of possible gender differences. Specifically, in the sample of newly-arrived Chinese migrants, there was a predominance of women [N = 19] with high-level college degrees and a stable financial situation. As a great strength, these interviews were conducted by a culturally aware native Chinese speaker. The sample from Morocco has its constraints too. It was mainly composed of male unaccompanied minors transitioning to adult life from foster care placements in Barcelona [N = 15]. In future studies, more self-identified women should be interviewed to explore gender issues regarding stress and discrimination in this community. A main strength was that the interviewer is a social worker from Morocco who has been working with unaccompanied minors for several years and who carried out the interviews in the Moroccan dialect of Arabic, known as Darija. The same researcher interviewed the participants from the other African countries, as he speaks Arabic and French and shared a cultural and religious background with some of the interviewees. The final sample from African countries [excluding Morocco] was tiny [N = 6], as was the selection from Pakistan and India [N = 10], which was a major barrier to the study of these specific communities. New ways to reach individuals from these countries must be explored in future studies, as this research was conducted during the COVID-19 pandemic and challenges with the use of technology emerged.

4.3.3. Data collection and analysis. Our results should be interpreted considering some data collection and analysis limitations. First, there is the one-to-one limitation linked to the interviews, as the data obtained depend mainly on the interviewer's sensitivity and persistence and on the interpersonal interaction. Second, our study had to conduct most of the interviews online, instead of face-to-face, due to the COVID-19 pandemic social distancing measures. Interviews through online platforms might have excluded people without internet access, a computer, or a mobile phone, or those unfamiliar with telematic tools. Finally, it is important to highlight that the mental health problems due to discrimination-related stress disclosed by the participants were self-reported. Self-reported data are subject to social desirability bias and recall bias and can be limited by the introspective ability of the participants and their personal biases in relation to the topic. Nonetheless, our study provides an important addition to the literature regarding perceived discrimination in Spain and its consequences for health. Considering the lack of previous research on the topic, the qualitative methodology used allows the complexity of people's behavior and their health issues to be captured by obtaining direct information from study participants, without imposing preconceived categories or theoretical perspectives before the analysis [52].
Implications, macro-level interventions, and areas of future research

Qualitative research can find suitable answers for health and social care policy-makers and professionals. The present study and the data available to date show that discrimination is becoming a significant public health problem [4,53]. The present research reaffirms that discrimination is connected to negative emotions, distress, and other mental health challenges. This has several implications, such as the need to invest in the mental well-being of Spain's migrant population and to promote access to mental health care and resources within the community. Accessibility of mental health and social services will help to detect discrimination cases early. Mental health care access is necessary, but identifying discrimination requires a multidisciplinary and culturally sensitive team of professionals. An antiracist curriculum and culture within the agencies are essential to cover all aspects of practice with newly-arrived migrants in a culturally responsive form. Psychological support is critical for preventing mental health challenges, but legal aid needs to exist in parallel to detect and manage discrimination in a social justice-oriented form. A human rights-based perspective in clinical and social practice that supports the existing Declaration of Human Rights and the present European human rights treaties will improve the current public health issue. Newly-arrived migrants, in conjunction with local people, must empower themselves to identify different types of discrimination in order to create common synergies of respect, tolerance, and advocacy towards social change. This work must be mutually reciprocal.

To create and promote these anti-discrimination programs and policies, it is relevant to understand that each interviewed community and culture perceives discrimination differently. This is because the stereotypes linked to migrants vary depending on their nationality, skin color, gender, sexual orientation, social class, weight, and religion. Accordingly, an intersectional analysis is needed to propose new policy improvements within the Spanish social system. Programs and policies need to be adapted specifically to the different communities, be antiracist and inclusive, and consider social needs by integrating the narratives and life stories of newly-arrived migrants. Individuals from the same community need to be actively involved in creating and implementing programs and policies, through peer support or by coordinating efforts with thematic experts. In parallel, policies and awareness-raising campaigns must cover different fields of action and disciplines, such as schools, universities, public institutions, and state security forces.

Future research must address the effects of the COVID-19 pandemic and the convergence of discrimination, stress, and mental health in newly-arrived migrants. Today, more than ever, this pandemic is showing its effects as travel restrictions arise and border closures solidify. Considering Real Decreto 463/2020, released by the Spanish Ministry of Inclusion, Social Security and Migrations [54], subsequent studies must not forget how bureaucracy and procedures regarding legal status were frozen in Spain during the pandemic.

Table 1. Characteristics of the overall study sample and description of the participants by origin. [Table not reproduced here.]
Frequencies and proportions [in percentages] are displayed for categorical variables, and means with standard deviation [SD] for continuous variables.
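As a small illustration of the reporting convention this footnote describes, the following Python sketch computes the same kinds of descriptive statistics; the mini-roster is hypothetical, not the study data.

```python
import pandas as pd

# Hypothetical mini-roster; values are illustrative only, not the study data.
df = pd.DataFrame({
    "origin": ["China", "Morocco", "China", "Argentina", "Sudan", "Morocco"],
    "age": [26, 19, 31, 34, 24, 18],
})

# Categorical variable: frequencies and proportions (percentages).
counts = df["origin"].value_counts()
summary = pd.DataFrame({"n": counts, "%": (100 * counts / len(df)).round(1)})
print(summary)

# Continuous variable: mean with standard deviation (SD; sample SD, ddof=1).
print(f"Age: {df['age'].mean():.1f} (SD {df['age'].std():.1f})")
```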
A Brief Introduction to Pendleton's Rules and Their Application in Echocardiographic Training

Dear Editor,

Feedback is the cornerstone of effective clinical training: correct performances are reinforced, incorrect ones are modified, and a path toward progress is identified. Feedback provides trainees with the information needed to minimize the gap between desired and actual performance and encourages them to rethink and improve their performance (1). The present article describes Pendleton's rules, their benefits and criticisms, their modified form (Pendleton Plus), and their application in echocardiographic training.

Pendleton's rules, which outline the usual process for giving feedback to trainees (2), include the following stages:

- The trainee states which items he/she has done well
- The trainer states which items the trainee has done well and discusses with the trainee how these were performed well
- The trainee states which skills he feels should be performed differently
- The trainer states what the trainee has to do to improve the identified skills
- The trainee provides his practical performance-improving program (3)

According to these rules, the trainer provides the trainee with balanced feedback when there is a suggestion for improvement (2, 4). The trainee and the trainer first focus on the trainee's strengths, then on his weaknesses, and then the trainer provides suggestions for improvement. Thus, strengths and weaknesses are considered equally: strengths are reinforced, and the trainee is given the opportunity to evaluate his performance prior to receiving criticism, which significantly reduces defensiveness against the criticism received. Stating his own limitations gives the trainee the opportunity to rethink, creating a safe environment for receiving feedback (2, 4, 5). For learning to happen, the trainer should go beyond merely stating what areas are lacking and should provide the trainee with corrective suggestions (4).

However, several criticisms have been leveled against these rules, including inflexibility; the provision of feedback in an artificial setting (2); the impossibility of separating strong and weak points in many cases (5); hypocrisy; lack of consideration for constructive criticism and interactive discussion; being time-consuming; the allocation of little time to assessing weaknesses (4); making the trainee anxious due to the delayed assessment of weak points (2, 4); describing events with inadequate analysis; the absence of comment on how good a trainee's performance is (6); and the fact that, in applying these rules, the trainer often states either what needs to be changed or how the performance can be improved, and rarely both together (6).

According to the conscious-competence model designed for learning skills, when a trainer asks a trainee what he feels he has done well, he is referring to the conscious-competence stage, in which the trainee has acquired the skill but has to focus profoundly on that skill when performing it. When the trainer cites any unmentioned items done well by the trainee, he is referring to the unconscious-competence stage, where the trainee has mastered the skill and performs it unconsciously, without thinking (the trainee can also perform other tasks at the same time). When the trainee is asked to state skills that need to be improved, this refers to the conscious-incompetence stage, since the trainee is aware of these skills and of the need to acquire them.
When the trainer reviews items that need to be altered to enhance the trainee's skill set, he refers to the unconscious-incompetence stage, since the trainee has no awareness of the intended skill (7, 8).

Given these criticisms, a modified version of these rules has been presented as Pendleton Plus:

- The trainer asks for the trainee's general opinion about his overall performance, and then briefly provides comments in response. For example, the trainee rates his own performance as excellent, very good, good, a little problematic, or problematic. In this stage, during the assessment of the trainee's insight, general feedback is provided to the trainee, which prevents the trainee and the trainer from becoming submerged in the narration of events.
- The trainer asks the trainee what skills he has performed well, and why and how they were done well, and the trainer provides a response. In this way, the first and second stages of Pendleton's rules are integrated.
- The third stage of Pendleton Plus is almost the same as the third stage of Pendleton's rules, in which the trainee states which skills require improvement, and the trainer encourages the trainee to analyze his performance by asking why, and how, he can improve in the future.
- To sum up, the trainer asks the trainee to state the instances where he felt he performed adequately, as well as those that require modification (6).

A schematic sketch of this staged flow is given below.
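Purely as an illustrative aid (and not part of the published rules), the staged exchange can be written down as an ordered checklist; the following Python sketch paraphrases the stages above.

```python
# Hypothetical sketch: the Pendleton Plus stages as an ordered checklist.
# Stage wording paraphrases the letter; nothing here is prescriptive.
PENDLETON_PLUS_STAGES = [
    ("trainee", "gives a general rating of the overall performance"),
    ("trainer", "briefly comments on that self-rating"),
    ("trainee", "states what was done well, and why/how"),
    ("trainer", "responds, confirming and adding items done well"),
    ("trainee", "states which skills require improvement"),
    ("trainer", "prompts analysis: why, and how to improve"),
    ("trainee", "summarizes adequate items and items to modify"),
]

def print_session_plan() -> None:
    """Print the staged feedback flow in order."""
    for step, (speaker, action) in enumerate(PENDLETON_PLUS_STAGES, start=1):
        print(f"Stage {step}: {speaker} {action}")

if __name__ == "__main__":
    print_session_plan()
```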
The assumption in using the Pendleton Plus rules in echocardiography training is that the assistant performs echocardiography on a patient independently and that the images of measurements and the videos of all pathologies and normal structures, obtained through the different echocardiographic modalities, are stored on the device. First, the trainer asks the assistant to provide an overview of the patient's echocardiogram and then expresses his opinion on all images and videos. In the second stage, the trainer asks the assistant to state which measurements, images, and videos of the heart were obtained appropriately, and why. For example, the assistant might state: "The pressure gradient of the pulmonary valve and the measurement of the inner diameter of the left ventricle during the systolic and diastolic periods were assessed correctly, owing to the alignment of the flow through the pulmonary valve, the acquisition of clear images of the heart in the parasternal long-axis view, the attention paid to the endocardial movement of the posterior wall of the left ventricle in the systolic and diastolic periods, and the careful detection of the papillary muscle and its distinction from the posterior wall." At this stage, the trainer provides corrections for any images wrongly assessed by the assistant, confirms the cases stated by the assistant that were performed adequately, and points out any other portions that were completed correctly. In the third stage, the trainer asks the assistant to state the assessments or images with which he is dissatisfied, asks why he is dissatisfied, and provides the necessary guidance. The trainer then asks him to state how he would correct his performance so as not to repeat the same problems with the next patient. For example, if the assistant states, "The four-chamber view of the patient is not ideal due to the patient's obesity," the trainer repeats the echocardiography, shows the assistant a good four-chamber image, and explains how to obtain a good image by turning the probe below the center-line. Lastly, the assistant restates the trainer's instructions for performing echocardiography on obese people.

Given that echocardiography depends on the individual's skill in identifying pathologies of the heart, and understanding that there are both normal cases and complex multistage processes, the trainer should perform echocardiography in the presence of the assistant. While repeating this process, he should provide the assistant with both positive and negative feedback at every stage of measurement, imaging, and pathological assessment. If faults are found in the assistant's performance at any stage, the trainer should still first point out to the assistant the appropriate steps completed in that specific stage (not in a previous stage or in the overall echocardiogram), then mention the portions that require modification, and lastly teach him how to correct the faulty cases. In the fourth stage, the assistant is asked to state some of his strong skills and some that require improvement. For example, the assistant states: "The parasternal long-axis measurements of the left ventricle have been done properly, but the four-chamber images of the heart and their measurements need to be corrected." The author, who is involved in training specialist heart assistants and echocardiography fellowship assistants, recommends knowledge of the Pendleton Plus rules and their use in echocardiography training.
Expression of Caspases in the Pig Endometrium Throughout the Estrous Cycle and at the Maternal-Conceptus Interface During Pregnancy and Regulation by Steroid Hormones and Cytokines

Caspases, a family of cysteine protease enzymes, are a critical component of apoptotic cell death, but they are also involved in cellular differentiation. The expression of caspases during apoptotic processes in reproductive tissues has been shown in some species; however, the expression and regulation of caspases in the endometrium and placental tissues of pigs have not been fully understood. Therefore, we determined the expression of the caspases CASP3, CASP6, CASP7, CASP8, CASP9, and CASP10 in the endometrium throughout the estrous cycle and pregnancy. During the estrous cycle, the expression of all caspases and, during pregnancy, the expression of CASP3, CASP6, and CASP7 in the endometrium changed in a stage-specific manner. Conceptus and chorioallantoic tissues also expressed caspases during pregnancy. CASP3, cleaved-CASP3, and CASP7 proteins were localized to endometrial cells, with increased levels in luminal and glandular epithelial cells during early pregnancy, whereas apoptotic cells in the endometrium were limited to some scattered stromal cells, with increased numbers on Day 15 of pregnancy. In endometrial explant cultures, the expression of some caspases was affected by steroid hormones (estradiol-17β and/or progesterone), and the cytokines interleukin-1β and interferon-γ induced the expression of CASP3 and CASP7, respectively. These results indicate that caspases are dynamically expressed in the endometrium throughout the estrous cycle and at the maternal-conceptus interface during pregnancy in response to steroid hormones and conceptus signals. Thus, caspase action could be important in regulating endometrial and placental function and epithelial cell function during the implantation period in pigs.

INTRODUCTION

The structure and function of the uterus change significantly during the reproductive cycle and pregnancy in mammalian species. The degree of change in the endometrium during the cycle varies by species, with the most dramatic changes found in humans and non-human primates, which form a hemochorial type placenta (1-3). In pigs, which form a true epitheliochorial type placenta, the endometrium also undergoes morphological and functional change during the estrous cycle and pregnancy (4). During the estrous cycle in pigs, endometrial change is affected mainly by the ovarian steroid hormones estrogen and progesterone (5,6), and during early pregnancy, it is driven by conceptus-derived signals, including estrogen and the cytokines interleukin-1β (IL1B), interferon-δ (IFND), and interferon-γ (IFNG), in addition to ovarian steroid hormones (4,7,8).

Apoptosis, a programmed cell death, plays a critical role in a variety of physiological processes in multicellular organisms. For example, it maintains functional tissue homeostasis by eliminating unwanted or dysfunctional cells (9,10). Apoptosis occurs in the endometrium during the estrous cycle and pregnancy to regulate endometrial homeostasis (11,12). In the human endometrium, apoptotic cell death is observed in endometrial epithelial and stromal cells, with a higher apoptotic rate in the late secretory to early proliferative phases than in the late proliferative to mid-secretory phases of the menstrual cycle (13).
In pigs, cells undergoing apoptosis are detected mainly in the endometrial stroma during the estrous cycle and early pregnancy and in luminal epithelial cells at the proestrus phase of the estrous cycle, but apoptotic cell death does not occur as dramatically in pigs as it does in primates during the reproductive cycle (14).

Apoptotic cell death is induced by intrinsic and extrinsic pathways. The intrinsic pathway is mediated by various intracellular stress and mitochondrial factors, whereas the extrinsic pathway is triggered by extracellular death signals, such as tumor necrosis factor (TNF) superfamily members: TNF-α, Fas ligand (FASLG), and TNF-related apoptosis-inducing ligand (TRAIL, also known as TNFSF10) (15,16). The two pathways result in the activation of caspases, which are cytoplasmic cysteine protease enzymes, to induce apoptotic cell death. Caspases play essential roles in apoptosis and inflammation and are divided into two groups, initiator caspases (CASP8, CASP9, and CASP10) and executioner caspases (CASP3, CASP6, and CASP7) (9,17,18). Once the executioner caspases are activated by the initiator caspases, they recognize the aspartic residue of various intracellular target proteins and cleave them to cause apoptotic cell death. In that way, caspases are used as a representative marker for cells in which apoptosis has occurred. However, the apoptotic signaling pathway that activates caspases also plays an important role in the differentiation of various cell types, such as immune cells, trophoblasts, spermatocytes, epithelial cells, and stem cells (19,20). It has been suggested that caspase activation is locally regulated during cellular remodeling without causing apoptotic cell death and that transient caspase activity is used for cell fate determination (10,21).

Although endometrial changes during the estrous cycle and pregnancy involve the apoptotic process, and the function of caspases is essential during apoptotic cell death and cellular differentiation, the pattern of caspase expression in the endometrium during the estrous cycle and pregnancy is not fully understood in pigs. We hypothesized that caspases are expressed in the endometrium during the estrous cycle and at the maternal-conceptus interface during pregnancy to regulate apoptosis and cellular differentiation. Therefore, we determined in pigs (1) the expression of caspases (CASP3, CASP6, CASP7, CASP8, CASP9, and CASP10) in the endometrium during the estrous cycle and pregnancy, in conceptus tissues during early pregnancy, and in chorioallantoic tissues during mid- to late pregnancy; (2) the localization of caspases and apoptotic cells in the endometrium; and (3) the regulation of caspase expression by the steroid hormones estrogen and progesterone and by the cytokines IL1B and IFNG in endometrial tissues.

MATERIALS AND METHODS

Animals and Tissue Preparation

All experimental procedures involving animals were conducted in accordance with the Guide for the Care and Use of Research Animals in Teaching and Research and approved by the Institutional Animal Care and Use Committee of Yonsei University and the National Institute of Animal Science. Sexually mature Landrace and Yorkshire crossbred female gilts of similar age (6-8 months) and weight (100-120 kg) were assigned randomly to either cyclic or pregnant status, as described previously (22). Gilts assigned to the pregnant status group were artificially inseminated with fresh boar semen at the onset of estrus (Day 0) and 12 h later.
The reproductive tracts of the gilts were obtained immediately after slaughter on Days 0, 3, 6, 9, 12, 15, or 18 of the estrous cycle or Days 10, 12, 15, 30, 60, 90, or 114 of pregnancy (n = 3-6/day/status). Pregnancy was confirmed by the presence of apparently normal filamentous conceptuses in uterine flushings on Days 10, 12, and 15 and by the presence of embryos and placenta on later days of pregnancy. Conceptus tissues were obtained from uterine flushings on Days 12 and 15 of pregnancy. Uterine flushings were obtained by introducing and recovering 25 ml of phosphate-buffered saline (PBS; pH 7.4) into each uterine horn. Chorioallantoic tissues were obtained on Days 30, 60, 90, and 114 of pregnancy (n = 3-4/day). Endometrial tissues from prepubertal gilts (n = 8; approximately 6 months of age) that had not undergone the estrous cycle, with no corpus luteum formed, were obtained from a local slaughterhouse. Endometrium, dissected free of myometrium, was collected from the middle portion of each uterine horn, snap-frozen in liquid nitrogen, and stored at -80°C prior to RNA extraction. For immunohistochemistry, cross-sections of the endometrium were fixed in 4% paraformaldehyde in PBS (pH 7.4) for 24 h and then embedded in paraffin, as previously described (23).

Explant Cultures

To determine the effects of steroid hormones, IL1B, and IFNG on the expression of caspase mRNA in the endometrium, endometrial tissue was dissected from the myometrium and placed into warm phenol red-free Dulbecco's modified Eagle's medium/F-12 (DMEM/F-12) (Sigma) containing penicillin G (100 IU/ml) and streptomycin (0.1 mg/ml), as described previously (23-25) with some modifications. The endometrium was minced with scalpel blades into small pieces (2-3 mm3), and 500 mg were placed into T25 flasks with serum-free modified DMEM/F-12 containing 10 µg/ml insulin (Sigma), 10 ng/ml transferrin (Sigma), and 10 ng/ml hydrocortisone (Sigma). To analyze the effect of steroid hormones on the expression of caspases, endometrial explants from immature gilts, immediately after mincing, were cultured with rocking in the presence of increasing doses of estradiol-17β (E2; 0, 5, 50, or 500 pg/ml; Sigma) or progesterone (P4; 0, 0.3, 3, or 30 ng/ml; Sigma) for 24 h in an atmosphere of 5% CO2 in air at 37°C. The doses were chosen to encompass the full concentration range of physiological levels of E2 and P4 in the endometrium during the estrous cycle and pregnancy (8). To analyze the effect of IL1B on CASP3 and the effect of IFNG on CASP7 expression, endometrial explant tissues from Day 12 of the estrous cycle were treated with E2 (10 ng/ml), P4 (30 ng/ml), and increasing doses of IL1B (0, 1, 10, and 100 ng/ml; Sigma) or IFNG (0, 1, 10, and 100 ng/ml; R&D Systems, Minneapolis, MN, USA) at 37°C for 24 h. To determine the effect of the steroid hormones on the expression of CASP3 during the implantation period, endometrial explant tissues from Day 12 of the estrous cycle were treated with ethanol (control), E2 (10 ng/ml; Sigma, USA), P4 (30 ng/ml; Sigma, USA), P4+E2, P4+E2+ICI 182,780 (ICI; an estrogen receptor antagonist; 200 ng/ml; Tocris Bioscience, Ellisville, MO, USA), or P4+E2+RU486 (RU; a progesterone receptor antagonist; 30 ng/ml; Sigma, USA) for 24 h. The explant tissues were then harvested, and total RNA was extracted for real-time RT-PCR analysis to determine the expression levels of caspase mRNA.
These experiments were conducted using endometrium from three gilts on Day 12 of the estrous cycle in triplicate and from eight immature gilts.

Total RNA Extraction, Reverse Transcription-Polymerase Chain Reaction (RT-PCR), and Cloning of Porcine Caspase cDNA

Total RNA was extracted from endometrial and conceptus tissues using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's recommendations, as described previously (22). The quantity of RNA was assessed spectrophotometrically, and RNA integrity was validated following electrophoresis in 1% agarose gel. Four micrograms of total RNA from endometrial, conceptus, and chorioallantoic tissues were treated with DNase I (Promega, Madison, WI, USA) and reverse transcribed using SuperScript II Reverse Transcriptase (Invitrogen) to obtain cDNA. The cDNA templates were then diluted at a 1:4 ratio with sterile water and amplified by PCR using Taq polymerase (Takara Bio, Shiga, Japan) and specific primers based on porcine caspase mRNA sequences. The PCR conditions, sequences of primer pairs for caspases, and expected product sizes are listed in Supplementary Table 1. The PCR products were separated on 2% agarose gel and visualized by ethidium bromide staining. The identity of each amplified PCR product was verified by sequence analysis after cloning into the pCRII vector (Invitrogen).

Quantitative Real-Time RT-PCR

To analyze the levels of caspase expression in the endometrial and chorioallantoic tissues, real-time RT-PCR was performed using an Applied Biosystems StepOnePlus System (Applied Biosystems, Foster City, CA, USA) with the SYBR Green method, as described previously (22). Complementary DNA was synthesized from 4 µg of total RNA isolated from different uterine endometrial and chorioallantoic tissues, and the newly synthesized cDNA (total volume of 21 µl) was diluted 1:4 with sterile water and used for PCR. Power SYBR Green PCR Master Mix (Applied Biosystems) was used for the PCR reactions. The final reaction volume of 20 µl contained 2 µl of cDNA, 10 µl of 2× Master Mix, 2 µl of each primer, and 4 µl of distilled H2O. The annealing temperature and number of cycles for PCR were the same for all products obtained. The results are reported as expression relative to that detected on Day 0 of the estrous cycle, that on Day 30 of pregnancy in chorioallantoic tissues, or that in control explant tissues, after normalization of the transcript amount to the geometric mean of the endogenous porcine ribosomal protein L7 (RPL7), ubiquitin B (UBB), and TATA-binding protein (TBP) controls, all using the 2^-ΔΔCT method as previously described (26).
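To make the quantification concrete, here is a minimal Python sketch of the 2^-ΔΔCT arithmetic just described; note that averaging the reference-gene Cts arithmetically is equivalent to normalizing against the geometric mean of their linear-scale quantities (2^-Ct). All Ct values and the function name are hypothetical, not taken from the study.

```python
import numpy as np

def rel_expression(ct_target, ref_cts, ct_target_cal, ref_cts_cal):
    """2^-ΔΔCt relative expression with a multi-gene reference.

    Averaging reference Cts arithmetically equals normalizing against the
    geometric mean of their linear-scale quantities (2^-Ct).
    """
    d_ct_sample = ct_target - np.mean(ref_cts)        # ΔCt, sample
    d_ct_cal = ct_target_cal - np.mean(ref_cts_cal)   # ΔCt, calibrator
    return 2.0 ** -(d_ct_sample - d_ct_cal)           # 2^-ΔΔCt

# Hypothetical Cts: a target gene in a treated sample vs. a calibrator
# sample, each with three reference-gene Cts (e.g., RPL7, UBB, TBP).
print(rel_expression(24.1, [18.2, 19.0, 21.5], 25.6, [18.4, 19.1, 21.3]))
# -> ~2.8-fold the calibrator level (illustrative numbers only)
```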
Immunohistochemical Analysis

To identify the type(s) of porcine endometrial cells expressing CASP3, cleaved-CASP3, CASP7, poly (ADP-ribose) polymerase (PARP1), an enzyme that is cleaved during apoptosis and used as a hallmark for apoptosis (27), and cleaved-PARP1, sections were immunostained. Sections (5 µm thick) were deparaffinized and rehydrated in an alcohol gradient. Tissue sections were boiled in citrate buffer (pH 6.0) for 10 min. They were then washed with PBST (PBS with 0.1% Tween-20) three times, and a peroxidase block was performed with 0.5% (v/v) H2O2 in methanol for 30 min. Tissue sections were then blocked with 10% normal goat serum for 30 min at room temperature. Rabbit polyclonal anti-CASP3 antibody (5 µg/ml; Cell Signaling, Danvers, MA, USA), rabbit polyclonal anti-cleaved-CASP3 antibody (5 µg/ml; Cell Signaling), mouse monoclonal anti-CASP7 antibody (5 µg/ml; Enzo Life Sciences, Farmingdale, NY, USA), rabbit polyclonal anti-PARP1 antibody (1 µg/ml; Santa Cruz Biotechnology, Santa Cruz, CA, USA), or rabbit monoclonal anti-cleaved-PARP1 antibody (1 µg/ml; GeneTex, Irvine, CA, USA) was added and incubated overnight at 4°C in a humidified chamber. For each tissue tested, purified normal rabbit IgG or mouse IgG was substituted for the primary antibody as a negative control. Tissue sections were washed with PBST three times. Biotinylated goat anti-rabbit or anti-mouse secondary antibody (1 µg/ml; Vector Laboratories, Burlingame, CA, USA) was added and incubated for 1 h at room temperature. Following washes with PBST, a streptavidin-peroxidase conjugate (Invitrogen) was added to the tissue sections, which were then incubated for 10 min at room temperature. The sections were washed with PBST, and aminoethyl carbazole substrate (Invitrogen) was added to the tissue sections, which were then incubated for 10 min at room temperature. The tissue sections were washed in water, counterstained with Mayer's hematoxylin, and coverslipped. Images were captured using an Eclipse TE2000-U microscope (Nikon, Seoul, Korea) and processed with Adobe Photoshop CS6 software (Adobe Systems, Seattle, WA, USA).

TUNEL Assay and Immunofluorescence

Apoptotic cells in endometrial tissue sections were analyzed using the terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL) assay with an In Situ Cell Death Detection Kit (Roche Diagnostics, Mannheim, Germany) used according to the manufacturer's recommendations, as described previously (28). Endometrial tissue sections (5 µm thick) were deparaffinized and rehydrated in an alcohol gradient. The sections were then boiled in 0.1 M citrate buffer (pH 6.0) for 3 min, cooled at room temperature for 10 min, and washed three times in PBS. As a positive control for TUNEL staining, sections were treated with DNase I (3 U/ml; Promega) in 50 mM Tris-HCl (pH 7.5), 10 mM MgCl2, and 1 mg/ml bovine serum albumin (BSA; Bovogen Biologicals, Melbourne, Australia) for 10 min at room temperature and then washed with PBS. Tissue sections were then blocked with 0.1 M Tris-HCl (pH 7.5) containing 3% (w/v) BSA and 20% (v/v) normal bovine serum for 30 min at room temperature. The TUNEL reaction was performed according to the kit instructions. After the TUNEL reactions, tissue sections were washed with PBS. The tissue sections were counterstained with 4′,6-diamidino-2-phenylindole (DAPI), and fluorescence images were captured using an Eclipse TE2000-U microscope (Nikon, Seoul, Korea) with Adobe Photoshop CS6 software (Adobe Systems, Seattle, WA, USA).

Statistical Analysis

Data from real-time RT-PCR for caspase expression were subjected to ANOVA using the general linear models procedures in SAS (Cary, NC, USA). As sources of variation, the model included day, pregnancy status (cyclic or pregnant, Days 12 and 15 post-estrus), and their interaction to evaluate steady-state levels of caspase mRNA. Data from real-time RT-PCR performed to assess the effects of day of the estrous cycle (Days 0, 3, 6, 9, 12, 15, and 18) and pregnancy (Days 10, 12, 15, 30, 60, 90, and 114) and the effects of day of pregnancy (Days 30, 60, 90, and 114) on chorioallantoic tissues were analyzed using least squares regression analysis.
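As a rough illustration of such trend analysis (this is not the authors' SAS code), linear, quadratic, and cubic day effects can be tested with orthogonal polynomial contrasts in a least squares regression; the sketch below uses invented expression values, not the study data.

```python
import numpy as np
import statsmodels.api as sm

def orthogonal_poly(x, degree):
    """Orthogonal polynomial scores (as in R's poly()): QR of the Vandermonde matrix."""
    x = np.asarray(x, dtype=float)
    X = np.vander(x - x.mean(), degree + 1, increasing=True)
    Q, _ = np.linalg.qr(X)
    return Q[:, 1:]  # drop the constant column

# Hypothetical relative-expression values (3 gilts per day), illustrative only.
days = np.repeat([0, 3, 6, 9, 12, 15, 18], 3)
expr = np.array([1.0, 1.2, 0.9, 1.4, 1.3, 1.5, 1.9, 2.1, 1.8,
                 2.6, 2.4, 2.8, 2.2, 2.0, 2.3, 1.4, 1.6, 1.3, 1.1, 0.9, 1.2])

P = orthogonal_poly(days, degree=3)           # linear, quadratic, cubic scores
fit = sm.OLS(expr, sm.add_constant(P)).fit()  # least squares regression
for name, p in zip(["linear", "quadratic", "cubic"], fit.pvalues[1:]):
    print(f"{name} effect of day: P = {p:.4f}")
```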
The effects of E2, P4, IL1B, and IFNG doses on explant cultures were analyzed by one-way ANOVA followed by Tukey's post-test. Data from real-time RT-PCR to assess the effects of steroid hormones and their receptor antagonists in explant culture were analyzed by preplanned orthogonal contrasts (control vs. E2; control vs. P4; P4 vs. P4+E2; P4+E2 vs. P4+E2+ICI; and P4+E2 vs. P4+E2+RU). Data are presented as means with standard error of the mean. A P-value <0.05 was considered significant, and P-values of 0.05-0.10 were considered to indicate a trend toward significance.

RESULTS

Expression of Caspase mRNA in the Endometrium During the Estrous Cycle and Pregnancy

In real-time RT-PCR analyses, we found that CASP3, CASP6, CASP7, CASP8, CASP9, and CASP10 mRNA was expressed in the endometrium during the estrous cycle and pregnancy (Figure 1). During the estrous cycle, the steady-state levels of CASP3 (quadratic, P = 0.0572), CASP6 (quadratic, P < 0.01), CASP7 (linear, P < 0.05), CASP8 (quadratic, P < 0.01), and CASP10 (quadratic, P < 0.05) mRNA changed, with the highest levels of CASP3, CASP6, CASP7, and CASP8 in the proestrus phase and that of CASP10 in the proestrus to metestrus phase. On Days 12 and 15 post-estrus, the expression of CASP3 was affected by day (P < 0.05), status (P < 0.01), and the day x status interaction (P < 0.05). The expression of CASP6 was affected by the day x status interaction (P < 0.05), that of CASP7 was affected by day (P < 0.05), that of CASP8 was affected by status (P < 0.01), and that of CASP9 was affected by day (P < 0.01). The expression of CASP10 was not affected by day, status, or the day x status interaction. During pregnancy, the steady-state levels of CASP3 (linear, P = 0.0526), CASP6 (cubic, P = 0.073), CASP7 (linear, P < 0.05), and CASP10 (quadratic, P < 0.05) mRNA, but not of CASP8 or CASP9 mRNA, changed, with the highest levels on Day 12 for CASP3, on Day 15 for CASP7, and on Day 60 for CASP6 and CASP10.

Expression of Caspase mRNA in Conceptuses During Early Pregnancy and in Chorioallantoic Tissues in Later Stages of Pregnancy

In RT-PCR analysis using cDNAs from conceptuses from Days 12 and 15 of pregnancy, we detected CASP3, CASP6, CASP7, CASP8, and CASP10 mRNA, but not CASP9 mRNA, in conceptuses from both days of early pregnancy (Figure 2A). These caspases were also detectable in endometrial tissues from the same days. In addition, we performed real-time RT-PCR analyses to determine whether the expression of CASP3, CASP6, CASP7, CASP8, CASP9, and CASP10 mRNA changed in chorioallantoic tissues during pregnancy. The abundance of CASP3, CASP6, CASP7, CASP8, CASP9, and CASP10 mRNA in chorioallantoic tissues changed, with the highest levels on Day 30 for CASP3 and at term for CASP6, CASP7, CASP8, CASP9, and CASP10 (linear effect of day for CASP6, CASP7, CASP8, CASP9, and CASP10, P < 0.01; quadratic effect of day for CASP3, P < 0.01) (Figure 2B).

Localization of CASP3, Cleaved-CASP3, and CASP7 Proteins in the Endometrium on Days 12 and 15 Post-estrus

Having determined that CASP3, CASP6, CASP7, CASP8, CASP9, and CASP10 mRNA was present in the endometrium during the estrous cycle and pregnancy and in conceptuses and chorioallantoic tissues during pregnancy, and that the expression of CASP3 and CASP7 mRNA was highest during early pregnancy, we next determined the cellular localization of the CASP3, cleaved-CASP3 (an active form), and CASP7 proteins in the endometrium on Days 12 and 15 post-estrus using immunohistochemistry (Figure 3).
CASP3 proteins were mainly detected in endometrial luminal (LE) and glandular epithelial (GE) cells and in scattered stromal cells, with stronger signal intensity on Days 12 and 15 of pregnancy than during the estrous cycle, and they were localized subcellularly to both the cytoplasm and the nucleus (Figure 3A). The active form of CASP3, cleaved-CASP3 protein, was localized primarily to the nucleus of LE cells and some stromal cells in the endometrium on Days 12 and 15 of pregnancy (Figure 3B). Both CASP3 and cleaved-CASP3 proteins were detected in the small intestine used as a positive control. CASP7 protein was localized to the cytoplasm of LE and stromal cells in the endometrium, but only on Day 15 of pregnancy (Figure 3C). Trophectoderm cells in conceptuses were also positive for CASP7 protein on Day 15 of pregnancy (Figure 3C). CASP7 protein was detected in the lymph node used as a positive control. Immunohistochemistry for cleaved-CASP7 was not performed due to the lack of an appropriate antibody to detect porcine cleaved-CASP7 protein.

TUNEL Staining and PARP Cleavage Analysis for in situ Apoptotic Cell Death in the Endometrium During the Estrous Cycle and Pregnancy

Because CASP3 and CASP7 proteins were localized to endometrial epithelial and stromal cells during the estrous cycle and pregnancy, we determined whether cells expressing CASP3 and CASP7 were undergoing apoptotic cell death. Because apoptotic cells undergo DNA degradation and PARP1, an enzyme involved in DNA repair, is cleaved by caspases (29), we performed the TUNEL assay and immunostaining of PARP1 and cleaved-PARP1 in endometrial tissues from pregnant pigs. We found that apoptotic cells in the endometrium during pregnancy were predominantly stromal cells, not epithelial cells, with many apoptotic cells found on Day 15 of pregnancy and very few found during the later stages of pregnancy (Figure 4A). PARP1 protein was localized to most cell types in the endometrium on Days 12 and 15 of the estrous cycle and pregnancy (Figure 4B), but cleaved-PARP1, a marker for apoptotic cells, was localized primarily to stromal cells on Day 15 of pregnancy (Figure 4C). The PARP1 and cleaved-PARP1 proteins were also detected in the ovary used as a positive control.

Effects of the Steroid Hormones E2 and P4 on Caspase Expression in Endometrial Tissue of Prepubertal Gilts

Because the expression of caspases changed during the estrous cycle and because E2 from the ovary and P4 from the corpus luteum regulate the expression of many endometrial genes during the cycle (4, 8), we hypothesized that E2 and P4 might affect the expression of caspases in the endometrium. Therefore, we obtained endometrial tissues from immature gilts, which had not been exposed to cyclical ovarian hormones, and treated them with increasing doses of E2 or P4. We found that the expression of CASP7 mRNA was decreased by E2 (0 vs. 500 pg/ml, P < 0.05), but the expression of CASP3, CASP6, CASP8, CASP9, and CASP10 mRNA was unaffected by E2 (Figure 5). The expression of CASP7 (0 vs. 30 ng/ml, P < 0.05), CASP8 (0 vs. 3 ng/ml and 0 vs. 30 ng/ml, P < 0.01), and CASP10 (0 vs. 3 ng/ml, P < 0.01; 0 vs. 30 ng/ml, P < 0.05), but not CASP3, CASP6, and CASP9, was affected by P4 treatment (Figure 6).
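Throughout these experiments, mRNA abundance is reported relative to a control group after normalization to the RPL7, UBB, and TBP reference transcripts (see the figure captions below). As a minimal illustration of one standard way to compute such values, the sketch below implements a 2^-ΔΔCt calculation, assuming equal amplification efficiencies; the function and variable names are our own, not from the study's analysis code.

import numpy as np

def relative_expression(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    """Fold change of a target transcript relative to a control sample,
    normalized to several reference genes (e.g., RPL7, UBB, TBP).

    Ct values are on a log2 scale, so averaging the reference-gene Cts
    corresponds to taking the geometric mean of their expression levels.
    """
    d_ct = ct_target - np.mean(ct_refs)                 # sample, normalized
    d_ct_ctrl = ct_target_ctrl - np.mean(ct_refs_ctrl)  # control, normalized
    return 2.0 ** -(d_ct - d_ct_ctrl)                   # 2^-ddCt fold change

# Example: CASP7 in a P4-treated explant vs. the 0 ng/ml control (made-up Cts).
fold = relative_expression(24.1, [18.0, 19.2, 22.5], 25.0, [18.1, 19.0, 22.4])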
Effects of IL1B and Steroid Hormones on CASP3 and the Effect of IFNG on CASP7 Expression in Endometrial Tissues

Because the expression of CASP3 and CASP7 was highest on Days 12 and 15 of pregnancy, respectively, and porcine conceptuses secrete estrogen and IL1B2 into the uterine lumen on Day 12 and IFND and IFNG on Day 15 (4, 8), we reasoned that the expression of CASP3 on Day 12 could be affected by estrogen and IL1B and that of CASP7 on Day 15 of pregnancy could be affected by IFNG. We treated endometrial explant tissues from Day 12 of the estrous cycle with increasing doses of IL1B and steroid hormones and found that IL1B induces the expression of CASP3 (0 vs. 1 ng/ml, P < 0.05; Figure 7A), but steroid hormones and their receptor antagonists do not affect the expression of CASP3 (Figure 7B). When increasing doses of IFNG were administered, IFNG induced the expression of CASP7 (0 vs. 10 pg/ml, 0 vs. 100 pg/ml; P < 0.01) (Figure 7C).

DISCUSSION

The significant findings of this study in pigs were: (1) caspases CASP3, CASP6, CASP7, CASP8, CASP9, and CASP10 were expressed in the endometrium during the estrous cycle and pregnancy in a stage- and pregnancy status-specific manner; (2) conceptuses on Days 12 and 15 of pregnancy and chorioallantoic tissues from Day 30 of pregnancy to term expressed caspases, except for CASP9 on Days 12 and 15 of pregnancy; (3) CASP3, cleaved-CASP3, and CASP7 proteins were localized to endometrial cells, with increased signal intensity in LE and GE cells during early pregnancy; (4) apoptotic cells in the endometrium were localized to some scattered stromal cells, with increased numbers on Day 15 of pregnancy; (5) E2 and P4 affected the expression of some caspases in endometrial tissues; and (6) IL1B and IFNG upregulated the expression of CASP3 and CASP7, respectively, in endometrial explant tissues.

[Figure 5 caption] Endometrial explants from immature gilts were cultured at 37°C in DMEM/F-12 with increasing doses of estradiol-17β (E2; 0, 5, 50, and 500 pg/ml) for 24 h. Experiments were performed with endometria from eight gilts. The abundance of mRNA, determined by real-time RT-PCR, is relative to that of CASP3, CASP6, CASP7, CASP8, CASP9, and CASP10 mRNA in the control group of endometrial explants (0 pg/ml E2) after normalization to the transcript amounts of RPL7, UBB, and TBP mRNAs. Data are presented as the mean with standard error. The asterisk denotes a statistically significant difference compared with the control group: *P < 0.05.

Caspases are essential mediators of apoptosis and play an important role in a variety of biological processes (9, 10, 17). Two groups of caspases, initiator caspases and executioner caspases, are activated along the pathway to apoptotic activation. Caspases are expressed in the endometrium during the reproductive cycle and pregnancy, and they mediate apoptotic cell death in various species (2, 30, 31). However, the expression of all initiator and executioner caspases in the endometrium throughout the estrous/menstrual cycle and at the maternal-conceptus interface during pregnancy has not been fully studied in any species.
[Figure 6 caption] Endometrial explants from immature gilts were cultured at 37°C in DMEM/F-12 with increasing doses of progesterone (P4; 0, 0.3, 3, and 30 ng/ml) for 24 h. Experiments were performed with endometria from eight gilts. The abundance of mRNA, determined by real-time RT-PCR, is relative to that of CASP3, CASP6, CASP7, CASP8, CASP9, and CASP10 mRNA in the control group of endometrial explants (0 ng/ml P4) after normalization to the transcript amounts of RPL7, UBB, and TBP mRNAs. Data are presented as the mean with standard error. The asterisks denote statistically significant differences compared with the control group: *P < 0.05; **P < 0.01.

The results of this study indicate variable expression of the initiator and executioner caspases in the endometrium during the estrous cycle and pregnancy and in conceptus/chorioallantoic tissues throughout pregnancy in pigs. During the estrous cycle, the expression of caspases CASP3, CASP6, CASP7, CASP8, and CASP10 changed with the stage of the cycle, with the highest levels in the proestrus phase for CASP3, CASP6, and CASP7 and in the proestrus to metestrus phase for CASP8 and CASP10. These data indicate that the expression of caspases is dynamically regulated in the endometrium during the estrous cycle and may be related to cyclic remodeling of this tissue in pigs. The incidence of apoptotic cell death in LE cells was previously shown by TUNEL assay to be highest in the estrus phase in pigs (14), suggesting that caspases expressed in the proestrus phase could cause apoptotic cell death in the endometrium in the estrus phase.

[Figure 7 caption] Endometrial explants from gilts on Day 12 of the estrous cycle were cultured (A) with increasing doses of IL1B (0, 1, 10, and 100 ng/ml) in the presence of E2 (estradiol-17β; 10 ng/ml) and P4 (progesterone; 30 ng/ml), (B) with steroid hormones [control (C), E2 (E), P4 (P), E2+P4 (PE), E2+P4+ICI (I, an estrogen receptor antagonist) (PEI), or E2+P4+RU (R, a progesterone receptor antagonist) (PER)], or (C) with increasing doses of IFNG (0, 1, 10, and 100 ng/ml) in the presence of E2 (10 ng/ml) and P4 (30 ng/ml). The abundance of mRNA expression, determined by real-time RT-PCR analyses, is relative to that of CASP3 and CASP7 mRNA in the control group of endometrial explants after normalization to the transcript amounts of RPL7, UBB, and TBP mRNAs. Data are presented as means with standard error. These treatments were performed in triplicate using tissues obtained from each of three gilts. The asterisks denote statistically significant differences compared with the control group: *P < 0.05; **P < 0.01.

In bovine endometrium, CASP3 expression does not change during the estrous cycle, but active forms of CASP3 protein increase in the follicular and early luteal phases compared with the mid- to late luteal phase (2). Furthermore, CASP8 expression in the bovine endometrium increases toward the follicular phase from the luteal phase (31). Thus, it seems that the endometrial expression of some caspases increases in pigs and cows as the cycle moves toward the estrus phase. The pattern of caspase expression in the endometrium during the estrous cycle led us to postulate that the expression of caspases and the activation of apoptotic signaling could be related to cyclical changes in the endometrium triggered by the actions of steroid hormones from the ovary. In this study, we found that P4 decreased the expression of CASP8 and CASP10 in endometrial explant tissues.
Because the endometrial expression of CASP8 and CASP10 was low in the diestrus phase and high in the proestrus to metestrus phase of the estrous cycle, it is likely that P4 causes the decreased levels of CASP8 and CASP10 expression in the endometrium in the diestrus phase of the cycle in pigs. However, P4 increased the expression of CASP7, whereas E2 decreased the expression of CASP7 in endometrial explant tissues, even though the endometrial expression of CASP7 was high in the proestrus phase of the cycle, when plasma levels of P4 and E2 decrease and increase, respectively (8). These data indicate that the regulation of CASP7 expression in the endometrium during the estrous cycle is much more complex than can be explained by the simple action of P4 and E2 and thus needs further analysis. Although the levels of caspase expression during pregnancy have not been extensively studied in any species, it has been shown that the levels of active Casp3 protein in the rat endometrium are highest at mid-pregnancy (30). In this study, the endometrial expression of caspases CASP3, CASP6, CASP7, and CASP10 changed during pregnancy, with the highest levels occurring during early pregnancy for CASP3 and CASP7 and during mid-pregnancy for CASP6 and CASP10, suggesting that the expression of caspases is pregnancy stage-specific and varies with the type of caspase. In particular, we observed that the expression of CASP3 and CASP7 was highest on Days 12 and 15 of pregnancy, respectively, which is the period when conceptuses interact with the endometrium for implantation (4, 7, 8). Because the implanting porcine conceptus secretes estrogen and IL1B on Day 12 of pregnancy and the type I and II IFNs, IFND and IFNG, around Day 15 of pregnancy (4, 8), we postulated that estrogen and/or IL1B might be responsible for inducing CASP3 expression in the endometrium on Day 12 and that IFNG might be responsible for inducing CASP7 expression on Day 15 of pregnancy. Indeed, our endometrial explant culture experiments revealed increased expression of CASP3 in response to IL1B but not E2, whereas CASP7 expression was stimulated by IFNG. These data indicate that the expression of CASP3 and CASP7 in the endometrium during early pregnancy in pigs is induced by conceptus-derived IL1B and IFNG, respectively. Because CASP3 and CASP7 are well-known executioner caspases during apoptosis (9, 17), we determined which cell type(s) expressed CASP3 and CASP7 proteins in the endometrium during early pregnancy and whether the cells expressing CASP3 and CASP7 were undergoing apoptotic cell death. Our results show that CASP3 and active CASP3 proteins were predominantly localized to LE and stromal cells on Days 12 and 15 of pregnancy and that the CASP7 protein was primarily localized to LE cells on Day 15 of pregnancy. Interestingly, however, results from the TUNEL assay and cleaved-PARP1 staining show that only stromal cells in the endometrium, not epithelial cells, were undergoing apoptosis during early pregnancy. These data suggest that CASP3 and CASP7 might not be involved in endometrial epithelial apoptosis during early pregnancy in pigs. The executioner caspases CASP3, CASP6, and CASP7 play critical roles in both apoptotic cell death and cell differentiation in various cell types, such as keratinocytes, muscle cells, neurons, and stem cells (18).
Also, the initiator caspase CASP8, which is expressed by cytotrophoblast cells, is involved in the differentiation of cytotrophoblast cells into the syncytiotrophoblast layer in human placental villi (32-34). Thus, the increased endometrial expression of CASP3 and CASP7 in response to conceptus-derived signals at the time of conceptus implantation could be expected to act on epithelial cell differentiation instead of activating apoptosis. Indeed, at the time of implantation, LE cells of the porcine endometrium show various aspects of differentiated cellular characteristics: changed morphology (35-37), increased production of secretory proteins, including fibroblast growth factor 7 (38) and secreted phosphoprotein 1 (39), and increased expression of immunity-related molecules, including interferon α/β receptor 1 and 2 (40), interferon gamma receptor 1 and 2 (25), cysteine-X-cysteine motif chemokine ligand 12 (41), TNF superfamily member 10 (28), and cytotoxic T-lymphocyte-associated protein 4 (Yoo and Ka, unpublished data). The expression of most of those molecules in endometrial epithelial cells is induced by conceptus signals, estrogen, IL1B, or IFNG, and those molecules play important roles in conceptus implantation. Thus, it is likely that CASP3 and CASP7 are also involved in activating the differentiation process of endometrial LE cells in response to conceptus-derived signals. However, the nature of the differentiated cellular characteristics mediated by CASP3 and CASP7 in endometrial LE cells still needs further study. Our results also show that caspases were expressed in conceptus tissues during early pregnancy and in chorioallantoic tissues during mid- to late pregnancy. In particular, the levels of CASP6, CASP7, CASP8, CASP9, and CASP10 expression in chorioallantoic tissues increased as pregnancy approached term. However, as determined by TUNEL assay in this study, apoptotic cells were barely detectable in chorioallantoic tissues during pregnancy. It has been shown that CASP3 protein levels in porcine placental tissue are higher on Day 30 than on Days 60, 80, and 90 of pregnancy, which coincides with the expression pattern of CASP3 mRNA in this study (42). In ovine placentas, CASP3 and CASP9 proteins increase toward term, and the levels of active CASP3 and CASP9 are increased in placentas from pregnancies with intrauterine growth restriction compared with normal pregnancies (43). In bovine placental tissues obtained at parturition, CASP3 and CASP8 mRNA and proteins are expressed (44), and CASP8 and CASP10 are expressed in human placental villi at term (33, 45). Thus, the expression of caspases in placental tissues is common among mammalian species and increases toward term. In addition, because accumulated evidence shows that apoptotic cell death increases in the placenta when pregnancy complications occur, such as intrauterine growth restriction, preeclampsia, and preterm premature rupture of membranes in humans (46, 47), and in placental tissues derived from somatic cell nuclear transfer-cloned embryos in pigs (48), it is likely that caspases are important in regulating placental function and are activated in situations of inappropriate placental development during pregnancy.
CONCLUSION

In conclusion, the results of this study in pigs show that caspases are expressed in the endometrium, with differential expression patterns throughout the estrous cycle and pregnancy, and in the conceptus and chorioallantoic tissues during pregnancy; CASP3 and CASP7 are localized primarily to endometrial epithelial cells during early pregnancy; the steroid hormones E2 and P4 regulate the expression of caspases in endometrial tissues; and IL1B and IFNG induce the expression of CASP3 and CASP7, respectively, in endometrial tissues. These results suggest that caspases dynamically expressed in the endometrium and at the maternal-conceptus interface could play important roles in the establishment and maintenance of pregnancy in pigs by regulating apoptosis and epithelial differentiation.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

ETHICS STATEMENT

The animal study was reviewed and approved by the Institutional Animal Care and Use Committee of Yonsei University.
Multi-Modal Geometric Learning for Grasping and Manipulation

This work provides an architecture that incorporates depth and tactile information to create rich and accurate 3D models useful for robotic manipulation tasks. This is accomplished through the use of a 3D convolutional neural network (CNN). Offline, the network is provided with both depth and tactile information and trained to predict the object's geometry, thus filling in regions of occlusion. At runtime, the network is provided a partial view of an object. Tactile information is acquired to augment the captured depth information. The network can then reason about the object's geometry by utilizing both the collected tactile and depth information. We demonstrate that even small amounts of additional tactile information can be incredibly helpful in reasoning about object geometry. This is particularly true when information from depth alone fails to produce an accurate geometric prediction. Our method is benchmarked against and outperforms other visual-tactile approaches to general geometric reasoning. We also provide experimental results comparing grasping success with our method.

I. INTRODUCTION

Robotic grasp planning based on raw sensory data is difficult due to occlusion and incomplete information regarding scene geometry. Often, for example, one sensory modality does not provide enough context to enable reliable planning. For example, a single depth sensor image cannot provide information about occluded regions of an object, and tactile information is incredibly sparse. This work utilizes a 3D convolutional neural network to enable stable robotic grasp planning by incorporating both tactile and depth information to infer occluded geometries. This multi-modal system is able to utilize both tactile and depth information to form a more complete model of the space the robot can interact with and also to provide a complete object model for grasp planning. At runtime, a point cloud of the visible portion of the object is captured, and multiple guarded moves are executed in which the hand is moved towards the object, stopping when contact with the object occurs. The newly acquired tactile information is combined with the original partial view, voxelized, and sent through the CNN to create a hypothesis of the object's geometry. Depth information from a single point of view often does not provide enough information to accurately predict object geometry. There is often unresolved uncertainty about the geometry of the occluded regions of the object.

[Footnote] This work is supported by NSF Grant CMMI 1734557. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. Authors are with Columbia University, (davidwatkins, jvarley, allen)@cs.columbia.edu

[Figure 1 caption] These completions demonstrate that small amounts of additional tactile sensory data can significantly improve the system's ability to reason about 3D geometry. The Depth Only Completion for the pitcher does not capture the handle well, whereas the tactile information gives a better geometric understanding. For this example, the additional tactile information allowed the CNN to correctly identify a handle in the completion mesh, and similar completion improvement was found for the rubber duck. The rubber duck was not present in the training data.
To alleviate this uncertainty, we utilize tactile information to generate a new, more accurate hypothesis of the object's 3D geometry, incorporating both visual and tactile information. Fig. 1 demonstrates an example where the understanding of the object's 3D geometry is significantly improved by the additional sparse tactile data collected via our framework. An overview of our sensory fusion architecture is shown in Fig. 2. This work is differentiated from others [1] in that our CNN acts on both the depth and tactile data as input, fed directly into the model, rather than using the tactile information to update the output of a CNN not explicitly trained on tactile information. This enables the tactile information to produce non-local changes in the resulting mesh. In many cases, depth information alone is insufficient to differentiate between two potential completions, for example a pitcher vs. a rubber duckie. In these cases, the CNN utilizes sparse tactile information to affect the entire completion, not just the regions in close proximity to the tactile glance. If the tactile sensor senses the occluded portion of a drill, the CNN can turn the entire completion into a drill, not just the local portion of the drill that was touched. The contributions of this work include: 1) a framework for integrating multi-modal sensory data to holistically reason about object geometry and enable robotic grasping, 2) an open source dataset for training a shape completion system using both tactile and depth sensory information, 3) open source code for alternative visual-tactile general completion methods, 4) experimental results comparing the completed object models using depth only, the combined depth-tactile information, and various other visual-tactile completion methods, and 5) real and simulated grasping experiments using the completed models. This dataset, code, and extended video are freely available at http://crlab.cs.columbia.edu/visualtactilegrasping/.

II. RELATED WORK

The idea of incorporating sensory information from vision, tactile and force sensors is not new [2]. Despite the intuitiveness of using multi-modal data, there is still no consensus on which framework best integrates multi-modal sensory information in a way that is useful for robotic manipulation tasks. While prior work has been done to complete geometry using depth alone, none of these works consider tactile information [3] [4]. In this work, we are interested in reasoning about object geometry, and in particular, creating models from multi-modal sensory data that can be used for grasping and manipulation. Several recent uses of tactile information to improve estimates of object geometry have focused on the use of Gaussian Process Implicit Surfaces (GPIS) [5]. Several examples along this line of work include [12]. This approach is able to quickly incorporate additional tactile information and improve the estimate of the object's geometry local to the tactile contact or observed sensor readings. There have additionally been several works that incorporate tactile information to better fit planes of symmetry and superquadrics to observed point clouds [13] [14] [15]. These approaches work well when interacting with objects that conform to the heuristic of having clear detectable planes of symmetry or are easily modeled as superquadrics. There has been successful research in utilizing continuous streams of visual information similar to Kinect Fusion [16] or SLAM [17] in order to improve models of 3D objects for manipulation, an example being [18] [19].
In these works, the authors develop an approach to building 3D models of unknown objects based on a depth camera observing the robot's hand while moving an object. The approach integrates both shape and appearance information into an articulated ICP approach to track the robot's manipulator and the object while improving the 3D model of the object. Similarly, another work [20] attaches a depth sensor to a robotic hand and plans grasps directly in the sensed voxel grid. These approaches improve their models of the object using only a single sensory modality but from multiple points in time. In previous work [21], we created a shape completion method using single depth images. The work provides an architecture to enable robotic grasp planning via shape completion, which was accomplished through the use of a 3D CNN. The network was trained on an open source dataset of over 440,000 3D exemplars captured from varying viewpoints. At runtime, a 2.5D point cloud captured from a single point of view was fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object. The runtime of shape completion is rapid because most of the computational costs of shape completion are borne during offline training. This prior work explored how the quality of completions varies based on several factors. These include whether or not the object being completed existed in the training data, how many object models were used to train the network, and the ability of the network to generalize to novel objects, allowing the system to complete previously unseen objects at runtime. The completions are still limited by the training datasets and occluded views that give no clue to the unseen portions of the object. From a human perspective, this problem is often alleviated by using the sense of touch. In this spirit, this paper addresses this issue by incorporating sparse tactile data to better complete the object models for grasping tasks.

III. VISUAL-TACTILE GEOMETRIC REASONING

Our framework utilizes a trained CNN to produce a mesh of the target object, incorporating both depth and tactile information. We utilize the same architecture as found in [21]. The model was implemented using the Keras [22] deep learning library. Each layer used rectified linear units as nonlinearities except the final fully connected (output) layer, which used a sigmoid activation to restrict the output to the range [0, 1]. We used the cross-entropy error E(y, y') as the cost function with target y and output y':

E(y, y') = -Σ_i [ y_i log(y'_i) + (1 - y_i) log(1 - y'_i) ]

This cost function encourages each output to be close to either 0 for unoccupied target voxels or 1 for occupied target voxels. The optimization algorithm Adam [23], which computes adaptive learning rates for each network parameter, was used with default hyperparameters (β1 = 0.9, β2 = 0.999, ε = 10⁻⁸) except for the learning rate, which was set to 0.0001. Weights were initialized following the recommendations of [24] for rectified linear units and [25] for the logistic activation layer. The model was trained with a batch size of 32. We used the Jaccard similarity [26] to evaluate the similarity between a generated voxel occupancy grid and the ground truth.
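For concreteness, the two quantities just described (the cross-entropy training loss and the Jaccard evaluation metric) can be written in a few lines of NumPy. This is an illustrative sketch rather than the paper's released code.

import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-7):
    """E(y, y'): summed binary cross-entropy over a voxel occupancy grid."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # guard against log(0)
    return -np.sum(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def jaccard(a, b):
    """Intersection over union of two boolean occupancy grids."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Example on random 40^3 grids, matching the paper's resolution.
gt = np.random.rand(40, 40, 40) > 0.5
pred = np.random.rand(40, 40, 40)
print(cross_entropy(gt.astype(float), pred), jaccard(gt, pred > 0.5))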
IV. COMPLETION OF SIMULATED GEOMETRIC SHAPES

Three networks with the architecture from [21] were trained on a simulated dataset of geometric shapes (Fig. 3) where the front and back were composed of two differing shapes. Sparse tactile data was generated by randomly sampling voxels along the occluded side of the voxel grid. We trained a network that only utilized tactile information. This performed poorly due to the sparsity of information. A second network was given only the depth information during training and performed better than the tactile-only network did. It still encountered many situations where it did not have enough information to accurately complete the obstructed half of the object. A third network was given depth and tactile information, which successfully utilized the tactile information to differentiate between plausible geometries of occluded regions. The Jaccard similarity improved from 0.890 in the depth only network to 0.986 in the depth and tactile network. This task demonstrated that a CNN can be trained to leverage sparse tactile information to decide between multiple object geometry hypotheses. When the object geometry had sharp edges in its occluded region, the system would use tactile information to generate a completion that contained similar sharp edges in the occluded region. This completion is more accurate not just in the observed region of the object but also in the unobserved portion of the object.

[Figure 2 caption] Both tactile and depth information are independently captured and voxelized into 40³ grids. These are merged into a shared occupancy map which is fed into a CNN to produce a hypothesis of the object's geometry.

V. COMPLETION OF YCB/GRASP DATASET OBJECTS

We used the dataset from [21] to create a new dataset consisting of half a million triplets of oriented voxel grids: depth, tactile, and ground truth. Depth voxels are marked as occupied if visible to the camera. Tactile voxels are marked occupied if tactile contact occurs within the voxel. Ground truth voxels are marked as occupied if the object intersects a given voxel, independent of perspective. The point clouds for the depth information were synthetically rendered in the Gazebo [27] simulator. This dataset consists of 608 meshes from both the Grasp [28] and YCB [29] datasets. 486 of these meshes were randomly selected and used for a training set, and the remaining 122 meshes were kept for a holdout set. The synthetic tactile information was generated according to Algorithm 1, of which only a fragment is preserved here:

Algorithm 1 (fragment)
9:  for z in range(grid_dim - 1, -1, -1) do
10:     if vox_gt_cf[x, y, z] == 1 then
11:         tactile_vox.append(x, y, z)
12:         continue
13: tactile_points = vox2point_cloud(tactile_vox)
14: return tactile_points

In order to generate tactile data, the voxelization of the ground truth high resolution mesh (vox_gt) (Alg.1:L1) was aligned with the captured depth image (Alg.1:L4). 40 random (x, y) points were sampled in order to generate synthetic tactile data (Alg.1:L5-6). For each of these points (Alg.1:L7), a ray was traced in the -z direction and the first occupied voxel was stored as a tactile observation (Alg.1:L11). Finally, this set of tactile observations was converted back to a point cloud (Alg.1:L13).
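Since only part of Algorithm 1 survives above, the following is a hedged reconstruction of the full sampling procedure in NumPy, pieced together from the line references in the text (Alg.1:L1-L13); the variable and function names are assumptions, not the authors' released code.

import numpy as np

def sample_tactile_points(vox_gt_cf, num_samples=40, rng=None):
    """Synthetic tactile sampling: cast a ray in -z at random (x, y) locations
    of the camera-frame ground-truth voxel grid (Alg.1:L5-7) and record the
    first occupied voxel as a tactile contact (Alg.1:L9-11)."""
    rng = rng or np.random.default_rng()
    dim = vox_gt_cf.shape[0]  # 40 in the paper's setup
    tactile_vox = []
    for _ in range(num_samples):
        x, y = rng.integers(0, dim, size=2)
        for z in range(dim - 1, -1, -1):
            if vox_gt_cf[x, y, z] == 1:
                tactile_vox.append((x, y, z))
                break  # first contact only; move on to the next (x, y)
    # vox2point_cloud (Alg.1:L13) would rescale indices to metric coordinates;
    # here we simply return the voxel indices.
    return np.asarray(tactile_vox)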
Two identical CNNs were trained, where one CNN was provided only depth information (Depth Only) and a second was provided both tactile and depth information (Tactile and Depth). During training, performance was evaluated on simulated views of meshes within the training data (Training Views), novel simulated views of meshes in the training data (Holdout Views), novel simulated views of meshes not in the training data (Holdout Meshes), and real non-simulated views of 8 meshes from the YCB dataset (Holdout Live). The Holdout Live examples consist of depth information captured from a real Kinect and tactile information captured from a real Barrett Hand attached to a Staubli Arm. We used depth filtering to mask out the background of the captured depth cloud. The object was fixed in place during the tactile data collection process. While collecting the tactile data, the arm was manually moved to place the end effector behind the object, and 6 exploratory guarded motions were made where the fingers closed towards the object. Each finger stopped independently when contact was made with the object, as shown in Fig. 4.

[Figure 5 caption] As the difficulty of the data splits increases, the delta between the Depth Only CNN completion accuracy and the Tactile and Depth CNN completion accuracy increases. The additional tactile information is more useful on more difficult completion problems.

Fig. 5 demonstrates that the difference between the Depth Only CNN completion and the Tactile and Depth CNN completion becomes larger on more difficult completion problems. The performance of the Depth Only CNN nearly matches the performance of the Tactile and Depth CNN on the training views. Because these views are used during training, the network is capable of generating reasonable completions. Moving from Holdout Views to Holdout Meshes to Holdout Live, the completion problems move further away from the examples experienced during training. As the problems become harder, the Tactile and Depth network outperforms the Depth Only network by a greater margin, as it is able to utilize the sparse tactile information to differentiate between various possible completions. This trend shows that the network is able to make more use of the tactile information when the depth information alone is insufficient to generate a quality completion. We generated meshes from the output of the combined tactile and depth CNN using a marching cubes algorithm. We also preserve the density of the rich visual information and the coarse tactile information by utilizing the post-processing from [21].

A. Alternative Visual-Tactile Completion Methods

In this work we benchmarked our framework against the following general visual-tactile completion methods.

Partial Completion: The set of points captured from the Kinect is concatenated with the tactile data points. The combined cloud is run through marching cubes, and the resulting mesh is then smoothed using Meshlab's implementation of Laplacian smoothing. These completions are incredibly accurate where the object is directly observed but make no predictions in unobserved areas of the scene.

[Figure 6 caption] The entire Holdout Live dataset. These completions were all created from data captured from a real Kinect and a real Barrett Hand attached to a Staubli Arm. The Depth and Tactile Clouds have the points captured from a Kinect in red and points captured from tactile data in blue. Notice many of the Depth Only completions do not extend far enough back but instead look like other objects that were in the training data (e.g., cell phone, banana). Our method outperforms the Depth Only, Partial, and Convex Hull methods in terms of Hausdorff distance and Jaccard similarity. Note that the GPIS completions form large and inaccurate completions for the Black and Decker box and the Rubbermaid Pitcher, whereas our method correctly bounds the end of the box and finds the handle of the pitcher.

Convex Hull Completion: The set of points captured from the Kinect is concatenated with the tactile data points.
The combined cloud is run through QHull to create a convex hull. The hull is then run through Meshlab's implementation of Laplacian smoothing. These completions are reasonably accurate near observed regions; however, a convex hull will fill regions of unobserved space.

Gaussian Process Implicit Surface Completion (GPIS): Approximated depth cloud normals were first calculated for the observed points. We found M = 300 to be a good tradeoff between speed and completion quality. Additionally, we used s = 0.001, d = 0.0005, and n = 100.

In prior work [21], the Depth Only CNN completion method was compared to both a RANSAC-based approach [32] and a mirroring approach [33]. These approaches make assumptions about the visibility of observed points and do not work with data from tactile contacts that occur in unobserved regions of the workspace.

B. Geometric Comparison Metrics

The Jaccard similarity was used to compare 40³ CNN outputs with the ground truth. We also used this metric to compare the final resulting meshes from the several completion strategies. The completed meshes were voxelized at 80³ and compared with the ground truth mesh. The results are shown in Table I. Our proposed method results in higher similarity to the ground truth meshes than all other described approaches. The Hausdorff distance metric computes the average distance from the surface of one mesh to the surface of another. A symmetric Hausdorff distance was computed with Meshlab's Hausdorff distance filter in both directions. Table II shows the mean values of the symmetric Hausdorff distance for each completion method. By this metric, our tactile and depth CNN mesh completions are significantly closer to the ground truth than the other approaches' completions. Both the partial and Gaussian process completion methods are accurate close to the observed points but fail to approximate geometry in occluded regions. We found that, in our experiments, the Gaussian process completion method would often create a large and unruly object if the observed points covered only a small portion of the entire object or if no tactile points were observed in simulation. Using a neural network has the added benefit of abstracting object geometries, whereas the alternative completion methods fail to approximate the geometry of objects which do not have points bounding their geometry.
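As an aside for reproducibility, the symmetric, averaged surface-distance metric described above can be approximated by sampling points on both mesh surfaces. The sketch below assumes such point samples as input and may differ in detail from Meshlab's filter.

import numpy as np
from scipy.spatial import cKDTree

def directed_mean_distance(a_pts, b_pts):
    """Mean distance from points sampled on surface A to the nearest
    point sampled on surface B."""
    dists, _ = cKDTree(b_pts).query(a_pts)
    return dists.mean()

def symmetric_hausdorff(a_pts, b_pts):
    """Symmetric variant: run the directed measure both ways and average,
    mirroring the 'in both directions' computation in the text."""
    return 0.5 * (directed_mean_distance(a_pts, b_pts) +
                  directed_mean_distance(b_pts, a_pts))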
C. Grasp Comparison in Simulation

In order to evaluate our framework's ability to enable grasp planning, the system was tested in simulation using the same set of completions. The use of simulation allowed for the quick planning and evaluation of 7900 grasps. GraspIt! was used to plan grasps on all of the completions of the objects by uniformly sampling different approach directions. These grasps were then executed, not on the completed object, but on the ground truth meshes in GraspIt!. In order to simulate a real-world grasp execution, the completion was removed from GraspIt! and the ground truth object was inserted in its place. Then the hand was placed 20 cm away from the ground truth object along the approach direction of the grasp. The spread angle of the fingers was set, and the hand was moved along the approach direction of the planned grasp either until contact was made or a maximum approach distance was traveled. At this point, the fingers closed to the planned joint values. Then each finger continued to close until either contact was made with the object or the joint limits were reached. Table III shows the average difference between the planned and realized Cartesian fingertip and palm poses, while Table IV shows the difference in pose of the end effector between the planned and realized grasps, averaged over the 7 joints of the hand. Using our method, the end effector ended up closer to its intended location in terms of both joint space and the palm's Cartesian position versus the other completion methods' grasps.

D. Live Grasping Results

To further test our network's efficacy, grasps were planned and executed on the Holdout Live views using a Staubli arm with a Barrett Hand. The grasps were planned using meshes from the different completion methods described above. For each of the 8 objects, we ran the arm once using each completion method. The results are shown in Fig. 6 and Table V. Our method enabled an improvement over the other visual-tactile shape completion methods in terms of grasp success rate and resulted in executed grasps closer to the planned grasps, as shown by the lower average joint error (and it ran much faster than GPIS).

VII. CONCLUSION

This work provides a novel open-source visual-tactile completion method which outperforms other general visual-tactile completion methods in completion accuracy, time of execution, and grasp posture, using a dataset representative of household and tabletop objects. We demonstrated that even small amounts of additional tactile information can be incredibly helpful in reasoning about object geometry. Our CNN uses both dense depth information and sparse tactile information to fill in occluded regions of an object. Experimental results verified that utilizing both vision and touch was superior to using depth alone. In the future, we hope to relax the fixed-object assumption by using tactile sensors developed in our lab that allow contact to be detected without moving the object. We are also interested in more general tactile exploration algorithms for the unseen portions of objects.
Indy: a virtual reality multi-player game for navigation skills training

Working in complex industrial facilities requires spatial navigation skills that people build up with time and field experience. Training sessions consisting in guided tours help discover places, but they are insufficient to become intimately familiar with their layout. They imply passive learning postures, are time-limited, and can be experienced only once because of organization constraints and potential interference with ongoing activities in the buildings. To overcome these limitations and improve the acquisition of navigation skills, we developed Indy, a virtual reality system consisting in a collaborative game of treasure hunting. It has several key advantages: it focuses learners' attention on navigation tasks, implies their active engagement, and provides them with feedback on their achievements. Virtual reality makes it possible to multiply the number and duration of situations that learners can experience to better consolidate their skills. This paper discusses the main design principles and a typical usage scenario of Indy.

INTRODUCTION

Developing spatial understanding of an industrial building's structure can be quite difficult. Indeed, the facility's priority use is production. In particular, learning sessions in such a building, even short ones, have to meet production constraints such as busy schedules and safety measures. Virtual learning environments (VLE) offer a great alternative for learning tasks difficult to undertake in the real world [4]. Indeed, they allow learners to visit a facility without any consideration about availability, distances or safety. They also give trainers the possibility to modify characteristics of the environment beyond what is possible in real life. They give learners the possibility to learn through active action instead of passive knowledge acquisition, as explained for instance by Pan et al. [14]. Besides, attention, active engagement, feedback and strengthening phases are important in a learning process to ensure effective acquisition of knowledge, as already identified by Dehaene [5] in the way children learn how to read. Indy is a new virtual reality application for professionals who will work in industrial facilities. It aims at helping them get familiarized with the facilities during their training period. It is designed to provide professional trainers a tool to build new pedagogical strategies based on virtual reality.

Collaboration

Literature tends to show that fostering social interactions and collaboration leads to higher learning efficiency for a virtual group [5] [11] [19]. Based on this hypothesis, we designed Indy to foster collaboration in a learning context, as defined by Roschelle et al. [16]: "the mutual engagement of participants in a coordinated effort to solve the problem together". According to Slavin [19], group goals and individual accountability can contribute to collaborative learning achievements. Kreijns et al. [11] also cite positive interdependence and promotive interaction as levers to enforce collaboration. The authors insist on the fact that "the key to the efficacy of collaborative learning is social interaction, and lack of it is a factor causing the negative effectiveness of collaborative learning". Nonetheless, they point out that technology allowing communication won't automatically imply social interaction, and that off-task casual communication is important for group cohesion.
Indy offers trainers a tool to create a training scenario where several teams are immersed in a virtual industrial building, relying on an asymmetric collaboration method [12]:

- Some learners are immersed, using a head-mounted display (HMD), in a virtual mockup of the building. They will play the "hunters", who will have to find their way to the objective.
- Some learners use floor maps and 360° photographs, on a desktop computer. They will play the "radios", who will have to guide the hunters.

With this method, learners have access to complementary information according to their role: each one has a key capability to achieve the scenario objective. This makes communication within the team necessary to navigate in the virtual building. Besides, each team member has a particular point of view and professional background that can be shared with the other team members, either to help one be more efficient or to explain his/her choices. In Indy, communication between teammates relies on two components: oral communication and pointing at objects in the 3D environment. This way they can help each other, share their viewpoints to confirm the itinerary (current position and future direction) and identify the objective. However, communication is not limited to the VLE: Indy is designed as part of a full training sequence, during which the trainer and the learners are physically in the same room. Although VLEs allow virtual teams, with physically separated members [10], we wanted to preserve the training sequence, which fosters social interaction.

Gamification

Gamification can be defined as "the use of game design elements in non-game contexts" [6]. It is commonly used to increase users' motivation and engagement [8] [15]. Nah et al. [13] list the following design elements commonly used in gamified applications in the educational and learning contexts: points, levels/stages, badges, leaderboards, prizes and rewards, progress bars, storyline, and feedback. Sailer et al. [18] also list specific elements known to show positive effects on users' motivation: points, badges, leaderboards, performance graphs, meaningful stories, avatars and teammates. Indy is a treasure hunt in an industrial facility: learners have to search the building for a specific equipment or zone. These situations simulate the ones operators often face in real life, when planning an intervention in a building in which they can go only occasionally. This approach is similar to "investigation-scenarios" already experimented in learning sessions [7]. Trainers can launch a contest between teams: the time taken by the teams to find the objective is recorded and displayed at the end of the hunt. The trainer then has the opportunity to use the following elements:

- Levels/stages: the trainer can create a new hunt in a few seconds, adapting the difficulty level to the class. The difficulty can vary according to the objective itself, but also to the presence of obstacles placed by the trainer, and the kind of information he/she gives orally or through the application before the hunt.
- Points / leaderboard: the trainer can use the recorded time of each team to that end.
- Feedback: after the end of a hunt, an interface allows the trainer to show the learners all their itineraries during the hunt and screenshots that he/she might have taken. He/she can use all this material for debriefing.
Spatial navigation training

3D VLEs are an efficient tool for acquiring spatial navigation skills [4] [1]. Waller et al. [20] suggest that using immersive VLEs for spatial navigation training might be equivalent to training in a real environment. Indeed, Rodrigues et al. [17] confirmed that good spatial knowledge transfer can occur from virtual reality to the real world. Chrastil et al. [3] affirmed that active navigation leads to better spatial knowledge. Particularly, active decision making plays a great role in elaborating a mental model of the place. Carbonell-Carrera et al. [2] also insisted on the importance of maps as a complement to VLEs for spatial orientation. As a result, Indy aims at maximizing the spatial knowledge acquired during training:

- The "hunters" have to move along the entire itinerary, at a standard walking pace.
- The "radios" have to orally explain the itinerary to the hunters.
- Finding the objective involves decision making along the way, particularly when the trainer places obstacles.

Indy focuses on the development of 4 skills: navigating in the building (landmark identification), reading a map (survey knowledge), converting information between both, and communicating route information to others.

BUILDING MOCKUP

Indy uses the virtual mockup of a reactor building. It includes a detailed 3D model reconstructed from laser scans, 360° photographs and updated floor maps [9].

Creating the learning scenario

When the trainer creates a new hunt, several configuration options are available:

- Type of hunt: the objective can be to point at a specific equipment or regroup into a target zone.
- Starting point: the trainer can choose any room of the building as a starting point.
- Objective: depending on the type of hunt, the trainer can either point at an equipment in the 3D model or draw a circle on a floor map.
- Objective as displayed to the learners: text that will be displayed, on demand, on the learners' side.
- Obstacles: the trainer can place obstacles or markings anywhere in the building.

Hunt

The learners are divided into teams of 3 or 4 members. In each team, one person plays the "radio" and the others play the "hunters". Before starting the hunt, each team can gather around the radio's computer to prepare its itinerary. The learners can look for the equipment in the photos, and decide which itinerary seems best and even select alternative paths. Afterwards, each hunter puts on a HMD to start the hunt. All teammates can communicate orally, in order for the radio to give navigation instructions to the hunters. They also have the possibility to point at objects in the 3D environment, either with the mouse (for the radio) or with the Vive controllers (for the hunters). When doing so, a ray comes from the player's avatar and the object intersected by the ray is highlighted. The players can't see players from other teams. The visibility between teammates is defined with the following rules (see the sketch after this list):

- The radio can see the hunters' avatars (and pointing rays) in the 3D environment, and their positions on the floor maps.
- The hunters can see other hunters' avatars (and pointing rays), but not the radio's avatar (nor pointing ray).

This keeps the hunters from simply following the radio.
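A minimal sketch of how such asymmetric visibility rules might be encoded is shown below; the class and function names are our own and do not come from Indy's actual codebase, and the trainer's configurable visibility (described next) is modeled as a simple set of team names.

from dataclasses import dataclass

@dataclass(frozen=True)
class Player:
    name: str
    team: str
    role: str  # "hunter", "radio", or "trainer"

def can_see(viewer: Player, target: Player, trainer_visible_to=frozenset()) -> bool:
    """Return True if `viewer` should see `target`'s avatar and pointing ray."""
    if viewer.role == "trainer":
        return True  # the trainer can observe everyone
    if target.role == "trainer":
        return viewer.team in trainer_visible_to  # trainer chooses who sees him/her
    if viewer.team != target.team:
        return False  # teams never see each other
    if viewer.role == "radio":
        return target.role == "hunter"  # the radio sees its team's hunters
    # viewer is a hunter: fellow hunters are visible, the radio is not
    return target.role == "hunter" and viewer != target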
During the whole hunt, the trainer can:

- See all the learners on the floor maps or in the 3D environment: he/she can choose which team(s) to see, and which team(s) can see his/her own avatar in the 3D environment.
- Point at objects (similarly to the radio players). The pointing ray will be seen by all players seeing his/her avatar.
- See all learners' points of view.
- Take screenshots: they are saved with their timestamp relative to the hunt.

End of the hunt

To complete the hunt, all the hunters of the team must either (depending on the type of hunt; see the sketch at the end of this section):

- point at the target equipment at the same time: they have to keep pointing at the object for two seconds, while a validation animation is displayed; or
- be present in the target zone at the same time, and stay in it for two seconds (with the same validation animation).

Once the objective is found and validated, the team's total time is saved. When all teams have found the objective, the hunt ends: on all screens, a scoreboard shows the total time of each team. The trainer can then start the debriefing.
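The completion rule above, every hunter satisfying the objective condition simultaneously and continuously for two seconds, can be sketched as follows; hunter.on_target_since is a hypothetical per-player timestamp set when the hunter starts pointing at the target (or enters the zone) and cleared otherwise.

def hunt_completed(hunters, now, hold_time=2.0):
    """True once every hunter has satisfied the objective condition,
    simultaneously, for at least `hold_time` seconds."""
    starts = [h.on_target_since for h in hunters]
    if any(s is None for s in starts):
        return False  # at least one hunter is not on target yet
    # All hunters are on target: the shared hold timer only starts when
    # the last of them got there, hence max(starts).
    return now - max(starts) >= hold_time

The max(starts) term is what makes the condition simultaneous: the two-second window restarts whenever the last hunter reacquires the target.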
Debriefing

For a training session to produce better results, we provide feedback data to the trainer. On his/her computer, the trainer can see the total time of each team in a single timeline, hunters' paths drawn on floor maps, and screenshots he/she took during the session. The debriefing interface is completely interactive. The trainer can move a cursor along the timeline to update hunters' paths on the maps. All the screenshots the trainer took are also situated on the timeline. The column on the right, used to change floors, also indicates with markers the floors where the learners currently are, synchronized with the timeline cursor. The debriefing interface was designed as a tool, leaving the trainer the liberty to use it as he/she wants, and to display it in the classroom to focus on the elements pedagogically relevant for the training.

DISCUSSION

Indy is a tool for professional trainers, allowing them to create and propose scenarios adapted to all kinds of learners: beginners, experienced professionals, engineers or operators. The treasure hunts can be repeated and debriefed as many times as necessary to achieve the trainers' pedagogical objectives. The debriefing is a key functionality for trainers. It is designed to be as little restrictive as possible, so that trainers have full liberty to define the pedagogical valorization of the hunts. Among its possible uses, we can anticipate the following: discussing learners' itineraries and their implications in terms of safety, suggesting alternatives taking into account additional constraints beyond those experienced during the hunt, asking learners to explain their own choices, comparing different sessions with the same objective, or even evaluating long-term progression during the learners' careers. The variety of scenarios that trainers can build with Indy offers several benefits. It allows trainers, when needed, to adapt to a specific work profile. Indeed, different professionals have different needs in terms of spatial knowledge of the facilities: emergency intervention teams, logistics supervisors, operators performing non-destructive testing, safety engineers, etc. The obstacle-adding functionality, in particular, plays a central role in adapting the scenarios. It also offers a way to evaluate the learners' global understanding of the building structure, as they will adapt their strategies in real time to find the objective. By fostering collaboration and turning the learning experience into a game, we intend to focus learners' attention and actively engage them in using their navigational abilities. Indy offers feedback and allows trainers to integrate strengthening phases in the learning sequence. Indy is a tool, and the learning scenarios built upon it are an expression of the pedagogical strategy trainers will choose. The best practices are yet to be explored. However, Indy has yet to be tested in a real learning session. In order to confirm our hypotheses on the pedagogical benefits of the proposed approach, a comprehensive study should be undertaken. Despite several promising studies, the empirical data is still too sparse to fully understand the transfer mechanisms from virtual reality to real life, especially for professionals in an industrial context.

CONCLUSION

We presented Indy, a collaborative virtual reality application using gamification to help professionals acquire the navigation skills needed to work in industrial facilities. Indy consists in a treasure hunt in a virtual building, making learners cooperate in order to find an objective. It is designed to be integrated into existing training methods, and to be used by a trainer with a group of learners. This paper summarized the design principles of Indy and its key functionalities offering pedagogical benefits.

ACKNOWLEDGMENTS

This work is funded by EDF R&D, and was developed by Florian Gavel during his 6-month internship.
Vitamin D3 Receptor Activation Rescued Corticostriatal Neural Activity and Improved Motor Function in a -D2R Tardive Dyskinesia Mice Model
Haloperidol-induced dyskinesia has been linked to a reduction in dopamine activity characterized by the inhibition of dopamine receptive sites on the D2-receptor (D2R). As a result of D2R inhibition, calcium-linked neural activity is affected and is seen as a decline in motor-cognitive function after prolonged haloperidol use in the treatment of psychotic disorders. In this study, we have elucidated the relationship between haloperidol-induced tardive dyskinesia and the neural activity in the motor cortex (M1), basal nucleus (CPu), prefrontal cortex (PFC) and hippocampus (CA1). Also, we explored the role of Vitamin D3 receptor (VD3R) activation as a therapeutic target in improving motor-cognitive functions in dyskinetic mice. Dyskinesia was induced in adult BALB/c mice after 28 days of haloperidol treatment (10 mg/Kg; intraperitoneal). We established the presence of abnormal involuntary movements (AIMs) in the haloperidol-treated mice (-D2) through assessment of the threshold and amplitude of AIMs for the limb (Li) and orolingual (Ol) areas (Li and Ol AIMs). As a confirmatory test, the dyskinetic mice (-D2) showed a high global AIMs score when compared with the VD3RA intervention group, together with a decline in motor function. Ultimately, we deduced that VD3RA activation reduced the threshold of abnormal movement in haloperidol-induced dyskinesia.

Introduction
Antipsychotics are often employed in the management of depression, schizophrenia and other neurological disorders; however, prolonged use of these drugs often results in tardive dyskinesia (TD) and other associated movement disorders [1]. The primary effects of these drugs involve the inhibition of the dopaminergic D2 receptor in the nigrostriatal system and cortical projections, leading to persistent involuntary movements in the face, limbs, oral region and trunk, and a decline in memory function [2] [3]. Previous studies have shown that haloperidol-induced motor disorders involve partial inhibition of dopamine receptive sites on the D2-receptor, which prevents the heteromeric combination of D1 and D2 receptors during dopaminergic neurotransmission [4]. As a result of the inhibition of the D1-D2 combination, calcium transport is impaired, and this creates a state of D2 receptor sensitivity and calcium-linked oxidative stress [4]-[6]. An important aspect of drug-induced dyskinesia is its effect on motor-cognitive function due to dopaminergic D2 inhibition. Several studies have shown a decline in motor function and cognition after prolonged use of these drugs in the treatment of depression and schizophrenia [7] [8]. Going further, the observed behavioral deficits were attributed, in part, to loss of dopaminergic neurotransmission in the motor and memory neural systems, predominantly due to D2R inhibition and abnormal calcium currents in corticostriatal outputs [3] [5]. The significance of D2R stimulation and calcium currents supports the wide role of dopamine in various brain centers involved in motor and memory functions [5] [9]. Specifically, dopamine interacts with glutamate in hippocampal memory formation, striatal motor function, addiction and reward. In addition, glutamate-dopamine cross-talk has been described in glucose metabolism and oxidative stress in these brain centers [10]-[12].
Our previous experiments have shown that activation of the Vitamin D3 receptor (VD3R) reduces calcium toxicity through central and peripheral mechanisms, and improves motor-cognitive function in mice after haloperidol-induced parkinsonism [13]. In the present study, we investigate the link between haloperidol-induced dyskinesia and M1, CPu, PFC and CA1 neural activities in vivo. Furthermore, we studied the role of VD3R activation in improving motor-cognitive functions through restoration of epoch neural activities in the brain areas of dyskinetic mice.

Materials
All chemical reagents were sourced from Sigma-Aldrich, Germany. Haloperidol injection was procured from Kanada Pharmacy, Nigeria and re-suspended in dextrose saline. VD3 was procured from Standard Pharma, Nigeria and dissolved in normal saline. Haloperidol and VD3 solutions were prepared weekly as needed and stored at 4˚C.

Motor Function
At the end of the treatment phase (Day 28 for Vehicle, -D2, +VDR; Day 35 for -D2/+VDR), the animals were examined in various tests for motor function. All animals were familiarized with the behavioral testing tools during the treatment phase and were moved to the testing area 72 hours before the commencement of the tests.

Motor Function Tests for Dyskinesia
Abnormal Involuntary Movements (AIMs): Involuntary movements were assessed in AIMs (Orolingual and Limb AIMs) tests for dyskinesia. The values were recorded as amplitude of movement and basic movement using the methods of Cenci and co-workers [15]. Three experienced scientists assessed each animal independently using a grading scale of 1-4. The average score was adopted in each case for the amplitude and basic movement score at 0, 15 and 30 minutes respectively.
Parallel Bar Test: Motor coordination was assessed on two raised 1 m long (1 mm) parallel bars (3 cm apart) mounted on a 60 cm high wooden frame. The animal was placed at the 0.5 m mark (center of the raised bars), following which we determined the time taken by the mice to make a 90˚ turn (latency of turning; LOT) [16].
Rotarod: The test involved three trials of 3 minutes each (T1, T2 and T3) separated by an inter-trial time of 90 minutes. The time spent on the Rotarod in T1, T2 and T3 was determined and averaged to calculate the latency of fall (LOF).
Cylinder Test: Each mouse was placed in a 500 ml transparent beaker (cylinder) and was allowed to explore the walls of the cylinder with the forelimbs while standing on its two hind limbs [15]. The number of times an animal explored the wall of the cylinder with the forelimbs was counted to determine the average climbing attempts score for each group.
Bar Test: The magnitude of motor impairment was also measured in the bar test [15]. The forelimbs of the animal were placed on a raised wooden bar for 3 minutes. The time taken by the animal to move the limbs off the raised bar was measured for the treatment and control groups.

Electrophysiology
Electrophysiological recordings of extracellular calcium hyperpolarization currents were obtained from the basal nucleus (CPu), motor cortex (M1; L4-L6), hippocampus (CA1) and prefrontal cortex (PFC) using chronically implanted wire electrodes. Thirty minutes before the implant, animals received 2 mg/Kg i.p. meloxicam and were deeply anesthetized using 100 mg/Kg Ketamine and 5 mg/Kg Diazepam (i.p.)
to keep the mice immobile but awake for basal motor functions (corneal reflexes and diaphragmatic movement) [17] [18]. Using a stereotaxic frame, the scalp was removed above the bregma to expose the cranium. Periosteal tissue was removed using hydrogen peroxide solution and a cotton bud. The M1, CPu, PFC and Hipp were located using a calibrated grid to determine the position and depth (electrode length) relative to the bregma [M1 (AP: +3.34 mm, ML: +3 mm, DV: +2.5 mm), CPu (AP: +2.28 mm, ML: +3 mm, DV: +6 mm), Hipp (AP: -4.4 mm, ML: +2 mm, DV: +3 mm), PFC (AP: +2.2 mm, ML: +1 mm, DV: +2.5 mm)]. A dental drill was used to make holes in the cranium, following which insulated wire electrodes were inserted to the appropriate depth. The ground electrode was placed on the cranium of the contralateral side. Subsequently, the implant was covered with orthodontic resin to hold the electrodes in position during the recording procedure. The terminal wires of the electrodes were connected to the amplifier through small head-sockets in preparation for immobile awake recordings. The data from the amplifier (Spiker Box; Backyard Brains, Michigan, USA) were captured on the Audacity software v4.2 and analyzed in SigView v2.1 (Signal Labs, USA) to determine the extracellular summation epoch neural activity (calcium signals) expressed as frequency (Hz) per unit time (Figure 1(g) and Figure 2(a)).

AIMs Study for Dyskinesia
The global AIMs scores for Limb (Li) and Orolingual (Ol) AIMs were assessed from 0-30 minutes after the animals were pretreated with 10 mg/Kg haloperidol (intraperitoneal). Basic and amplitude scores were allotted by three independent scientists on a scale of 1-4 at 0, 15 and 30 minutes. Subsequently, the basic and amplitude scores were converted to the global AIMs score (basic × amplitude) for the -D2 group after 28 days of haloperidol treatment to confirm dyskinesia, and for the -D2/+VDR treatment 7 days after VD3RA intervention (Day 35); a minimal sketch of this scoring is given after the motor function test overview below.
Li AIMs: Abnormal involuntary movement was observed in the limbs of the haloperidol-treated mice after 28 days (-D2). The frequency and threshold of such movements increased between 0-15 minutes after haloperidol treatment and decreased sharply between 15-30 minutes. After 7 days of VD3RA intervention (-D2/+VDR), the AIMs score decreased significantly when compared with the -D2 at 0-15 and 15-30 minutes (Figure 3(a)).
Ol AIMs: Haloperidol treatment (-D2) caused abnormal involuntary movements of the orolingual area, characterized by uncontrolled chewy mouth movements and tongue protrusion. The threshold (AIMs score) was at its peak shortly after haloperidol was given intraperitoneally (0-15 minutes) and decreased significantly thereafter (15-30 minutes). Similar to the observations in Li AIMs, VD3RA intervention also reduced the severity of Ol AIMs when the -D2/+VDR treatment was compared with the -D2 group. Lower AIMs scores were recorded in this group throughout the duration of the test (from 0-15 and 15-30 minutes) (Figure 3(b)).
Thus, VD3RA treatment after haloperidol-induced dyskinesia significantly reduced the threshold of Li and Ol AIMs, seen as a decline in the AIMs score from 0-30 minutes post haloperidol treatment. It is important to mention that the mice were also assessed for Axial AIMs (Ax) but showed no significant change in ipsilateral and contralateral turns when compared with the control (untreated animals).

Motor Function Tests
After establishing that dyskinesia was induced in the animals, the dyskinesia models (-D2 and -D2/+VDR) were compared with the VD3RA-treated (+VDR) and untreated control groups (NS) in motor function tests.
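As a concrete illustration of the scoring arithmetic just described, the sketch below averages the three raters' basic and amplitude scores at a time point and forms the global AIMs score as their product. The data and names are illustrative assumptions, not the authors' scoring sheets.

```python
# Minimal sketch of the global AIMs score: three independent raters give basic
# and amplitude scores on a 1-4 scale; the per-dimension rater averages are
# multiplied to give the global score (basic x amplitude).
from statistics import mean

def global_aims(basic, amplitude):
    """basic, amplitude: lists of 1-4 ratings from the three raters."""
    return mean(basic) * mean(amplitude)

# Hypothetical -D2 mouse, Li AIMs at the 15-minute time point.
score = global_aims(basic=[3, 4, 3], amplitude=[4, 3, 4])
print(f"Li global AIMs score: {score:.2f}")  # (10/3) * (11/3) = 12.22
```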
Rotarod: After haloperidol-induced dyskinesia (-D2), the LOF decreased significantly on the treadmill when compared with the control (P < 0.05). Subsequent VD3RA intervention (-D2/+VDR) caused an increase in the LOF (motor function) when compared with the -D2 treatment and the control (P < 0.05). However, VD3RA treatment (28 days; +VDR), without prior induced dyskinesia, significantly increased the LOF (motor function) when this treatment was compared with the control and the -D2/+VDR (P < 0.01) (Figure 1(a)).
Cylinder Test: Motor activity was measured as a function of the climbing attempts score. Haloperidol-induced dyskinesia (-D2) caused a decrease in motor activity, seen as a decline in the climbing attempts score versus the control (Figure 1(b)). VD3RA intervention in dyskinetic mice (-D2/+VDR) increased the climbing attempts to match control scores, as no significance was observed between the -D2/+VDR and the control (untreated) mice. Similar to our observations in the Rotarod test, VD3RA treatment (+VDR) without prior induced dyskinesia significantly increased the climbing attempts (motor function) when compared with the control and -D2/+VDR (P < 0.05) (Figure 1(b)).
Parallel Bar Test: All groups were compared with the untreated control (standard) such that a significant increase or decrease in the LOT score was considered a decline in motor function. LOT scores decreased after haloperidol-induced dyskinesia when the -D2 treatment was compared with the control, thus suggesting a decline in motor coordination (P < 0.05). Subsequent VD3RA intervention (-D2/+VDR) significantly increased the LOT when compared with the control (P < 0.05) and -D2 (P < 0.001). This outcome suggests a decline in motor coordination when compared with the control, as the animals were characterized by freezing of movements on the raised bars. VD3RA treatment (+VDR) in non-dyskinetic control mice caused a decline in LOT when compared with the control (P < 0.05) and was not significant versus the -D2 treatment (Figure 1(c)), thus suggesting a decline in motor coordination when compared with the control.
Bar Test: No significant change was observed in the bar test when the treatment groups were compared with the control. However, empirical data suggest that the -D2, -D2/+VDR and VD3RA treatments did induce hyperkinesia, causing a reduction in the time taken for the mice to remove the limbs from the raised platform (versus the control; Figure 1(d)).
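Before turning to the neural recordings, the following sketch shows one way an epoch of an extracellular trace might be summarized as an RMS amplitude and a threshold-crossing spike frequency, the two quantities reported below. The threshold rule (k × RMS) and sampling rate are our illustrative assumptions, not the SigView pipeline the authors used.

```python
# Illustrative epoch summary: RMS amplitude and spike frequency (Hz) from a
# 1-D array of voltage samples; not the authors' actual analysis pipeline.
import numpy as np

def epoch_rms(trace):
    """Root-mean-square amplitude of one epoch."""
    return float(np.sqrt(np.mean(np.square(trace))))

def spike_frequency(trace, fs, k=4.0):
    """Upward crossings of a k*RMS threshold, converted to events per second."""
    thr = k * epoch_rms(trace)
    above = trace > thr
    crossings = np.count_nonzero(~above[:-1] & above[1:])
    return crossings * fs / len(trace)

# Example: 2 s of unit noise at 10 kHz with ten injected spikes.
fs = 10_000
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 2 * fs)
trace[::2000] += 20.0
print(epoch_rms(trace), spike_frequency(trace, fs))
```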
Motor Neural Activity (M1 and CPu) in -D2-Induced Dyskinesia
Haloperidol-induced dyskinesia (-D2) caused a decline in motor function (increase in Ol/Li AIMs) and abnormal neural activities in the M1 (L4-L6) and CPu when the -D2 was compared with the control. Prominent changes in the neural spike trains involved an increase in M1 activity (Figure 1(f)) and a reduced threshold of CPu outputs when the dyskinetic group (-D2) was compared with the control (Figure 1(e)). After VD3RA intervention (-D2/+VDR), an increase in motor activity (decreased threshold of Ol/Li AIMs; Figure 3(a)) was observed versus the control and dyskinetic mice (-D2; Figure 1(e) and Figure 1(f)). Furthermore, haloperidol-induced dyskinesia (-D2) increased M1 but reduced CPu neural outputs and was associated with a decline in motor function, while VD3RA treatment (+VDR) of control mice decreased M1 activity and was associated with an increase in motor coordination. However, VD3RA intervention after haloperidol-induced dyskinesia (-D2/+VDR) decreased M1 activity and increased CPu outputs, when compared with -D2, leading to an improvement in motor function. From these findings, we deduced that loss of CPu burst frequencies was associated with abnormal movement in dyskinetic and VD3RA-only treated mice. By contrast, VD3RA intervention reduced abnormal movements (dyskinesia) by increasing CPu burst frequencies in vivo. From these findings, both M1 and CPu neural outputs were affected in dyskinesia and subsequent VD3RA intervention.

-D2-Induced Dyskinesia Affects Cognition-Related Brain Centers
The effect of haloperidol-induced dyskinesia was further investigated in memory-related brain centers (PFC and Hipp). Unilateral electrode recordings in immobile awake mice showed resting-state PFC activity and hippocampal CA1 bursts in the control. Haloperidol-induced dyskinesia increased the PFC frequency in immobile awake animals and reduced the CA1 burst activity when compared with the control (Figure 2(c)). This treatment was also characterized by an irregular RMS threshold for PFC and CA1 versus the control (Figure 2(b)). However, VD3RA intervention in dyskinetic mice (-D2/+VDR) reduced the PFC neural frequency (reduced RMS threshold) while restoring the CA1 burst pattern (increased RMS threshold). Thus we deduced that VD3RA intervention can significantly restore memory alterations associated with dyskinesia by increasing CA1 bursts and reducing PFC activity (the opposite of dyskinetic mice). Interestingly, VD3RA treatment of control animals (+VDR), however, increased PFC activity and reduced CA1 bursts, similar to the effect of -D2. Due to movement impairments, memory function was not assessed in these animals in behavioral tests (Figure 2).

Discussion
Taken together, the outcome of this study confirms the role of dopaminergic D2 receptor (D2R) blockade in tardive dyskinesia after prolonged intraperitoneal haloperidol treatment. Similar to the findings in the L-DOPA-induced dyskinesia model (6-hydroxydopamine; 6-OHDA lesion) [15], haloperidol-induced dyskinesia caused abnormal involuntary movements in the orolingual region and limbs. In this study, we observed no prominent change in axial movement when compared with the described symptoms for L-DOPA-induced dyskinesia. The main abnormal movement observed in the orolingual region and limbs can be described as hyperkinesia (Figure 3). Subsequent VD3RA intervention in dyskinetic mice reduced the threshold and amplitude of the orolingual and limb AIMs observed over a duration of 30 minutes
post haloperidol treatment. The dyskinetic mice (-D2) showed a steady increase in abnormal limb movement (Li) from 0-15 minutes, and this declined rapidly between 15-30 minutes. Similarly, the global AIMs score for Ol was highest within the first 5 minutes after haloperidol treatment and reduced significantly between 15-30 minutes. However, VD3RA treatment reduced the threshold and amplitude of Li and Ol AIMs when the intervention group (-D2/+VDR) was compared with the dyskinetic group (-D2) at 0, 15 and 30 minutes respectively (Figure 3).

Motor Function in Haloperidol-Induced Dyskinesia
After confirming the role of haloperidol in -D2-induced dyskinesia, and the effect of VD3RA in ameliorating the dyskinesia-linked movement disorders, we examined the general effects of prolonged D2R blockade and VD3RA intervention on motor function in these animals using arrays of motor function tests. Dyskinetic mice (-D2) showed a decline in motor function in behavioral tests (Rotarod, cylinder test, parallel bar test) but exhibited no significant change in the raised bar test when compared with the control. The dyskinetic mice recorded a decline in latency of fall (LOF) when compared with the control (P < 0.05; Figure 1(a)). Similarly, these animals scored lower in climbing attempts when assessed in the cylinder test versus the control (untreated animals) (P < 0.05; Figure 1(b)). Subsequent analysis of motor coordination on raised parallel bars revealed a decline in motor coordination when compared with the control, seen as a significant decrease in latency of turning (LOT) (Figure 1(c); P < 0.05). Surprisingly, these animals showed no significant change in motor function when examined for limb removal in the raised bar test, and the outcome was rather inconclusive. However, this can be attributed to the abnormal movements associated with the limbs (due to tardive dyskinesia) rather than a swift removal of limbs as a result of motor coordination (Figures 1(a)-(d)). Although the role of VD3RA in parkinsonism has been studied extensively, its importance in dyskinesia remains poorly explored. Other studies have favored experiments on the effect of vitamins (Vitamin E) in reducing the threshold of tardive dyskinesia [19]. The anti-dyskinetic effect of Vitamin E was attributed mostly to its antioxidant properties and its role in radical detoxification [19] [20]. Similarly, VD3 deficiency has been linked to the cause and progression of various movement disorders [21], including PD. By virtue of its role in radical detoxification, calcium-related signaling and general brain health, VD3 represents a potential therapeutic target for reducing dyskinetic symptoms, similar to the effects of Vitamin E [22] [23]. In support of this hypothesis, 7 days of VD3RA intervention in the dyskinetic mice reduced Ol and Li AIMs and was associated with an overall improvement in motor function when compared with the untreated dyskinetic mice (-D2) (Figure 3(a) and Figure 3(b), Figure 1(a) and Figure 1(b)). In the Rotarod test, the intervention group (-D2/+VDR) recorded an increase in LOF when compared with the -D2 (untreated dyskinesia) (P < 0.05; Figure 1(a)). Similarly, these animals recorded an increase in the climbing attempts score when compared with the untreated mice (-D2) (Figure 1(b)). These outcomes suggest that VD3RA can reduce dyskinesia significantly, similar to what was described for Vitamin E [19]-[21].
Neural Epoch Activity in Dyskinesia
The abnormal involuntary movements observed after prolonged haloperidol treatment (Figure 3(a) and Figure 3(b); Figures 1(a)-(d)) were associated with prominent changes in the M1 and CPu neural outputs in deeply sedated (immobile awake) mice. Haloperidol-induced dyskinesia was characterized by an increase in M1 and CPu outputs when compared with the control. This increase in cortical and striatal outputs was attributed to the observed hyperkinesia (AIMs) in these dyskinetic mice, similar to the reports by Lindenbach and Bishop [24]. By contrast, VD3RA intervention decreased the M1 cortical output, facilitated CPu burst frequencies and was associated with a decrease in global AIMs scores (improved motor function) in the -D2/+VDR. Thus, these observations, and other reports [25], suggest that a reduction in CPu burst frequency and an increase in M1 cortical outputs were associated with the observed motor deficits seen in dopaminergic receptor inhibition.

Corticostriatal Outputs in Dyskinetic Mice
Haloperidol-induced dyskinesia (-D2) decreased CA1 bursts and reduced PFC cortical outputs in resting mice when compared with the control (Figure 2(c)). Subsequent VD3R intervention (-D2/+VDR) improved motor function and was associated with a restoration of hippocampal bursts while reducing the PFC cortical outputs when compared with the untreated dyskinetic mice (-D2; Figure 2(c)). From these observations, we deduced that changes in neural activities in the PFC and CA1 were associated with haloperidol-induced dyskinesia when the M1-CPu was compared with the PFC-CA1 outputs (Figure 1(f) and Figure 2(c)). In addition, haloperidol-induced dyskinesia increased M1 and PFC (cortical outputs) but decreased the CA1 and CPu burst frequencies. A similar effect was described by Wang and Goldman-Rakic on the role of D2R in the control of burst activities through its long-term depression (LTD) effects on glutamatergic systems, often explored in the treatment of schizophrenia [26]. However, after VD3RA intervention (-D2/+VDR), restoration of CA1 and CPu burst frequencies was observed. This was associated with an increase in motor activity and a decrease in the threshold of abnormal movements when the animals were assessed in behavioral tests for motor function. These observations suggest that D2R inhibition affects motor and cognitive brain centers following drug-induced dyskinesia, through disruption of CA1 and corticostriatal projections to the PFC and M1.

Conclusion
Haloperidol-induced dyskinesia was associated with a loss of the CPu-CA1 burst pattern and increased M1 neural frequencies. VD3RA intervention improved motor function and reduced the AIMs score through reversal of M1, CPu and CA1 outputs in dyskinetic mice, specifically restoration of CPu-CA1 burst frequencies.
Figure 1. ((a)-(d)) Motor function tests for dyskinesia and the VD3RA-treated haloperidol-induced dyskinesia mice model. (a) Rotarod: The latency of fall (LOF) was determined on the treadmill for a test duration of 3 minutes. The dyskinesia group (-D2) recorded a reduction in the LOF when compared with the control and the +VDR groups (P < 0.01). However, VD3RA intervention in -D2 mice increased the LOF significantly when compared with the -D2 and the control group (P < 0.05). Similarly, VD3RA treatment (only) significantly increased motor function versus the control (P < 0.01); this showed the effect of VD3RA in increasing motor function in normal and dyskinesia mice models. (b) Cylinder Test: The climbing attempts in the cylinder reduced significantly in the -D2 mice when compared with the control (P < 0.05). The VD3RA intervention group (-D2/+VDR) recorded an increase in climbing attempts when compared with the -D2 treatment and the control. Similar to our observations in the Rotarod test, VD3R (only) treatment significantly increased motor function, as the +VDR group recorded the highest climbing attempts when compared with the -D2 (P < 0.001) and -D2/+VDR (P < 0.05). (c) Parallel Bar Test: The latency of turning (LOT) was reduced in the -D2 treatment; the animals were unstable and constantly moving on the bars. VD3RA intervention (-D2/+VDR), however, increased the LOT when compared with the -D2 (P < 0.001). The outcome for the -D2/+VDR can be described as an increase in coordination but a reduction in motor activity when compared with the +VDR and -D2 (P < 0.001). (d) Raised Bar Test: No significant change was recorded in this test for the treatment groups and the control. ((e)-(g)) Motor neural electrophysiological recordings for the dyskinesia mice model and VD3RA intervention. ((e)-(f)) Haloperidol treatment induced dyskinesia in mice after 28 days of intraperitoneal administration. This was characterized by a reduction in RMS in the M1 (L4-L6), loss of burst activity and a reduction in RMS (*) in the CPu (Figure 1(f)). VD3RA intervention (-D2/+VDR) reduced the RMS in the M1 and CPu when compared with the control (Figure 1(e)) spike pattern (Figure 1(f)). This was also associated with a reduction of the global AIMs score (Ol and Li) and an increase in motor coordination when the -D2/+VDR treatment was compared with the -D2 group. VD3RA (only) treatment caused an increase in motor activity in control animals when compared with the control (untreated) animals. Neural recordings for VD3RA treatment showed a decline in RMS in the M1 (L4-L6) coupled with a decreased frequency of the spike train. However, this group recorded a slight decline in CPu RMS when compared with the control (Figure 1(f)). The decreased RMS in VD3RA treatment facilitated a loss of burst action potentials in the spike pattern (Figure 1(f)). Thus, it was inferred that VD3RA treatment in control animals can cause neural changes seen as hyperactivity in behavioral studies. Similar to the observations in -D2, VD3RA also caused a loss of burst pattern in the CPu; in both cases hyperkinesia was observed in the animals. (g) Schematic illustration of chronic unilateral electrode implants in the M1 (L4-L6) and the CPu.
Figure 2.
Neural epoch activity in the PFC and hippocampus (CA1) of immobile awake dyskinesia and VD3RA intervention mice. (a) Schematic illustration of electrode placement in the PFC and hippocampus (CA1) regions. ((b)-(c)) The dyskinesia group (-D2) recorded an increase in PFC activity but a decrease in CA1 bursts when compared with the control (Figure 2(c)). VD3RA intervention (-D2/+VDR) facilitated an increase in CA1 bursts when compared with the -D2 and the control and was associated with an increased CA1 RMS (Figure 2(b)). However, the -D2/+VDR treatment caused a decrease in PFC RMS when compared with the control (Figure 2(b)) and was associated with a decrease in spike train frequency. Prolonged VD3RA treatment of control animals caused an increase in PFC activity and RMS, and also a reduction in CA1 bursts and RMS (Figure 2(b) and Figure 2(c)).
Figure 3. Abnormal Involuntary Movements (AIMs) study for haloperidol-induced dyskinesia and VD3RA intervention. (a) Limb AIMs (Li): Intraperitoneal haloperidol treatment for 28 days (-D2) induced prominent limb dyskinesia in the animals 30 minutes after the treatment. The peak of limb AIMs was observed at 15 minutes and declined rapidly between 20-30 minutes. Interestingly, we observed that VD3RA intervention for 7 days after dyskinesia (-D2/+VDR) significantly reduced the severity of the symptoms in these animals. A decline in the global AIMs score was also recorded from 0 minutes to 30 minutes. (b) Orolingual AIMs (Ol): Haloperidol-induced dyskinesia significantly increased the global AIMs score for orolingual AIMs, with prominent constant movement of the mouth and chewy activities. These symptoms were most noticeable after haloperidol was administered (0 minutes) and gradually reduced by 15-30 minutes. The -D2/+VDR treatment showed very limited signs of uncontrolled orolingual movements throughout the duration of the experiment and recorded a minimal global AIMs score when compared with the -D2 treatment (Figure 3(a) and Figure 3(b)).
PAKISTAN AFGHAN RELATION: HISTORY CONFLICTS AND CHALLENGES
Pakistan and Afghanistan are two immediate Muslim neighbors that not only share a common border but also have many other commonalities, such as strong linguistic, historical, ethnic, cultural, and religious ties. Despite this geo-strategic location and the various common factors, the nature of Pak-Afghan relations is characterized by mistrust, suspicion, and painful experience for both countries. Since its inception, Pakistan has faced a hostile attitude from its western neighbor Afghanistan. Except during the Taliban's four-year era, all the rulers of Afghanistan showed fluctuating degrees of dissatisfaction towards Pakistan. Conflict over the Durand Line, the Soviet-Afghan war, Pakistan's support for the Taliban, Pakistan's role in the War on Terror, and growing cross-border militancy have stained relations between the two countries. The main objective of this study is to evaluate the Pakistan-Afghanistan relationship in a historical context and highlight those factors which have been the main hurdles in the way of a smooth and positive relationship.

Introduction
Interstate behaviour can exhibit three types of group relations, namely conflict, competition, and cooperation. While the majority of situations reflect cooperation and competition (Ali Meherunisa 2001: 143), it is unfortunate that Pakistan-Afghanistan relations have carried a steady undertone of conflict and competition rather than cooperation and cordiality. Pakistan and Afghanistan, two immediate Muslim neighbors, share a 2640-kilometer land border frequently known as the Durand Line. This line has for centuries been crossed each day by thousands of people, making it a handy source of people-to-people contacts as well as trade and economic interactions. Afghanistan is a landlocked country that acts as a bridge between South Asia and the Middle East. Due to its geostrategic significance and location, Afghanistan is a gateway to the natural-resource-rich Central Asian states; it offers some of the shortest and most cost-effective air and land routes for access from the resource-rich Central Asian states to other regions and the globe. Afghanistan is also important because it lies adjacent to the Middle East and Central Asian countries who hold more than half of the world's total oil reserves. The country shares its border with Iran and Turkmenistan, the world's second and third largest producers of natural gas. Pipeline routes for the transfer of gas are a singularly important focus of economic rivalry, especially in this area where two of the world's largest gas reserves are present (Kahn & Than K. 2015). Afghanistan, in this geopolitical battle, is a strategic piece of real estate. The two neighboring countries not only share a common border but also have many other commonalities, such as linguistic, socio-economic, and strong historical, ethnic, cultural, and religious ties. Both countries are members of various regional and sub-regional organizations like the Economic Cooperation Organization (ECO), the South Asian Association for Regional Cooperation (SAARC), and the Organization of Islamic Conference (OIC). Despite its geostrategic location and bonds of faith, history, and culture, Pakistan's ties with Afghanistan remain a painful experience rather than the smooth relationship it enjoys with other countries. Since its inception, Pakistan has faced a hostile attitude from its western neighbor Afghanistan.
Except during the Taliban's four-year era (1997-2001), all the rulers of Afghanistan have shown fluctuating degrees of dissatisfaction towards Pakistan. Conflict over the Durand Line, the Soviet-Afghan war, Pakistan's support for the Taliban, Pakistan's role in the War on Terror, and growing cross-border militancy have stained relations between the two countries. Post-9/11, bilateral relations have revolved around serious distrust, a blame game, the refugee crisis, and cross-border violations, leading to an environment of low-intensity hostility towards each other. Keeping in view the above introduction, bilateral relations between the two countries can be divided into three phases, which helps in understanding the bilateral ties in the present scenario at the regional and global level:

Pakistan Afghanistan Relations: 1947-1979
From the very day of its inception, Pakistan demonstrated deeply positive sentiments towards the Muslim world based on common bonds of faith, culture, and history, and attached great importance to its relations with the Muslim states. Afghanistan, though a Muslim country, created the problem of Pakhtunistan for Pakistan, besides making territorial claims on Pakistani territory in the provinces of the NWFP and Baluchistan. It gave financial aid to the Pakistani tribes and encouraged them to challenge the Pakistani authorities. Afghanistan was the only country to vote against Pakistan's admission to the United Nations in 1947. It is a written fact of history that Afghanistan was used by the British government as a buffer zone against the huge white bear in South Asia; nevertheless, despite this cruel reality, the monarchical government in Afghanistan had been at ease with its British colonial masters (Javaid, U. 2016: 137). All this is highly correlated with the historical Anglo-Afghan wars of the 19th and 20th centuries, in which the ill-will against Pakistan was deeply rooted, when the British powers were taking control over the territory of present-day Pakistan before Pakistan had even emerged on the global map. As discussed earlier, maintaining friendly relations with all neighboring states in general and Muslim states in particular is a basic principle of Pakistan's foreign policy. But Afghanistan has been a problem for Pakistan since 1947. The core issue has been Afghanistan's refusal to recognize the Durand Line drawn by the British in 1893 as the official border between the two countries and, by extension, its claims to the NWFP and the Pathan-dominated parts of Baluchistan. Afghanistan's support for agitation among some Pathans in the NWFP for an independent homeland, 'Pakhtoonistan' or 'Pashtunistan', strained relations with successive governments in Pakistan during the 1950s and 1960s. Afghanistan, disregarding this intention of Pakistan, gave Pakistan a very rude awakening on the day it was to be accepted as an independent state: Afghanistan not only refused to accept Pakistan's independence but also voted against it in the United Nations (Durani, M. U., & Khan, A. 2009). It claimed ownership over the whole of the NWFP, Baluchistan, and some areas of Punjab, and also questioned the agreement on the Durand Line that had been signed in order to formalize Afghanistan's frontier with British-ruled Hindustan. Importantly, Kabul declared the Durand Line to be an imaginary line on July 26, 1949, voiding all the agreements that had been approved previously (Shah, S. 2017).
But these claims went unnoticed at the time, as the world had moved on from the old-fashioned 18th-century geopolitics (Ahmad, N. 2016). Although the negative vote was withdrawn soon after, it sowed the seed of mistrust in the earliest days of the evolution of bilateral relations, leaving a lasting bad taste (Shamshad A 2010: 303). The border arrangement and the already existing Durand Line were the first issues raised by Afghanistan on different forums, despite the fact that the Border Agreement of 1893 had been continuously ratified by the successive rulers of Afghanistan. As per the treaty, the Durand Line has the status of an international border and, according to Article 11 of the Vienna Convention on Succession of States in Respect of Treaties (VCSSRT), it has international acceptance and legitimacy (Ahmad Shayeq Qassem, 2008: 13). That is why, though Afghanistan has raised the Durand Line issue at the bilateral level as an occasional pressure point against Pakistan, it has never taken it up at any multilateral forum. Based on the Durand Line, Afghanistan often puts forward territorial claims over Pakistani territories covering some of the Pashtun-inhabited areas falling in the tribal areas, Khyber Pakhtunkhwa, and parts of Balochistan province. It was right after the inception of the country that bilateral relations between the two became ill-fated, particularly from 1947 to 1963. Despite being neighboring countries with close ethnic ties, discord and conflict became the basis of the relationship. The era of Sardar Daoud, with its two spells (from 1947 to 1963 and again from 1973 to 1978), represents the most disturbed period and the most painful experience in the relationship between the two countries. Sardar Daoud's era was responsible for various unpleasant incidents: border security clashes, disruption of embassies, trade embargoes, and the burning of national embassies and flags are proof of the bitter bilateral diplomatic ties (Durani, M. U., & Khan, A. 2009). To take advantage of the initial domestic and international problems faced by Pakistan after independence, Afghanistan launched a two-fold strategy to weaken and destabilize Pakistan. Firstly, it openly aligned with Pakistan's rival India and also with the Soviet Union. Secondly, it provided political and financial assistance to, and backed, secessionist politicians in the NWFP to weaken Pakistan. All these acts were warmly welcomed and fully supported by India. Due to the Indo-Afghan nexus, Pakistan joined SEATO and CENTO for defense purposes, an act which created further irritation between the two Muslim countries. When Pakistan joined the defense pacts in 1954 and 1955, Afghanistan exploited the occasion to provoke feelings of indignation in the Middle East. In these anti-Pakistan activities, the Afghan authorities were supported by India, whose interest lay in ensuring that, in the event of war with Pakistan over Kashmir, the Afghans would open a second front against Pakistan in the North-West Frontier (UK.Diss 2019). Despite this, Pakistan followed a policy of patience with Afghanistan and gave it facilities for trade and the passage of goods by its railway.

Detente in Pak-Afghanistan Relations (1963-73)
On a number of occasions, Iran's mediation helped ease Pakistan's tense relations with other countries. In intra-regional disputes especially, Pakistan found Iran a steadfast friend. In 1963, Iran's mediation helped restore diplomatic ties between Pakistan and Afghanistan after a break of two years.
King Zahir Shah of Afghanistan was coaxed to tone down his support for the 'Pakhtunistan' issue (Ali Meherunisa: 145). After the successful mediation of Iran (the Tehran Accord), Pakistan and Afghanistan agreed to reinstate their diplomatic ties, resume commercial and trade relations, and open their air and land borders. It was further decided by both countries that they would resolve all their mutual disputes according to international law and develop an atmosphere of mutual trust and friendship. All these confidence-building measures (CBMs) diluted Kabul's focus on the long-standing issue of Pakhtunistan and fostered more positive views of Pakistan among the Afghan rulers. The state visit of King Zahir Shah to Pakistan in 1968 brought a further improvement in mutual relations and an increase in economic cooperation and trade. Moreover, Islamabad's decision to disband One Unit was also welcomed by Kabul, which further improved the tone of Pak-Afghan relations, including greater mutual respect and economic cooperation. Notwithstanding the earlier bitterness, during the two India-Pakistan wars (1965 & 1971), Afghanistan upheld firm impartiality and assured Pakistan that no threat would be posed from its western border, which helped Pakistan in relocating its troops from the Pak-Afghan border to combat zones on the India-Pakistan border (Umar S, 2009).

Pak Afghan Relations After the Soviet Invasion (1979-1996)
The Soviet invasion of Afghanistan came on December 27, 1979. The invasion was perceived by Islamabad as a calculated move, and Moscow's takeover of Afghanistan deeply perturbed Zia-ul-Haq's sense of Islamic brotherhood. In the opinion of Ahmad S. Q. (2008: 13), Pakistan, right after the invasion, seemed to have pursued several goals in the Afghan conflict, including forging closer ties with the Mujahideen in the war against the Soviets; convincing the Pushtoons on the other side of the border to come closer to Islamabad so that it might influence them to withdraw their principal separatist claim; stoking anti-Indian sentiment; and increasing its fighting capability to the extent of becoming a nuclear power. Soon after the Soviet military intervention in Afghanistan, Pakistan tried to explore the prospects of a negotiated settlement of the problem based on the withdrawal of the Soviet troops, a guarantee of non-intervention, and the return of the Afghan refugees. Pakistan's efforts enjoyed international support, including that of the US, China, and the Muslim world. Pakistan took the initiative of organizing a meeting of the foreign ministers of the Islamic countries at a conference, which condemned the Soviet action and appointed a committee comprising the foreign ministers of Pakistan and Iran and the secretary-general of the organization to resolve the issue. The committee could not make any headway because of the non-cooperation of the Soviet Union. The Afghanistan problem was also taken up by the UN General Assembly in January and September 1980, which passed resolutions with an overwhelming majority calling for a peaceful settlement of the Afghanistan crisis, including the withdrawal of the Soviet troops.

Pak-Afghan Relations and the Geneva Accords
The major accomplishment of Pakistan on the foreign policy front during this specific period was the signing of the historic Geneva Accord in April 1988, which formed the basis for an end to the Soviet occupation of Afghanistan.
(Jilani Anees, 2001: 374) The accord was generously welcomed almost all over the world, particularly by the parties involved in the Afghan conflict. This was due to the fact that, after Vietnam (1964-1975), it was the first time that a superpower had decided to withdraw from an ongoing war to which it was deeply committed. Hamid Karzai visited Islamabad and declared that Pakistan's role in any long-term Afghan peace settlement remained crucial. Nawaz Sharif made a reciprocal visit to Kabul in November, during which he assured Karzai of Pakistan's willingness to assist the Afghan peace process. However, a significant improvement in Afghan-Pakistani relations was evident following the presidential inauguration of Ashraf Ghani in September 2014. Ghani visited Pakistan twice in the ensuing months and, most importantly, visited key Pakistani security and economic partners such as China and Saudi Arabia, trying to convince them that instability in Afghanistan would not be in their interest, particularly after the departure of Western military forces. He stressed that "the hostility between Pakistan and Afghanistan has been buried in the past two days." Ghani also appeared to be willing to significantly downsize diplomatic relations and security co-operation with India, an old Pakistani demand, in exchange for greater Pakistani co-operation in bringing Afghan insurgents to the negotiating table. But a few months later he followed the path of his predecessor, Hamid Karzai: after a couple of attacks, presumably by the Taliban, on Kabul Airport and the Afghan Parliament, Ashraf Ghani accused Pakistan of steering the attacks and renounced his overtures. Clashes between troops from the two countries at the Torkham border crossing in June 2016 appeared to discourage any likelihood of Afghan-Pakistani co-operation on security issues in at least the short term. Afghan-Pakistani relations remained tense at mid-2017, with more border closures and mutual accusations of support for insurgent groups (South Asia 2018: 7).

Issues and Challenges
Bilateral relationships always come with a package of challenges and opportunities, and the challenges are indeed greater where many regional and international players are involved in trying to end this long-simmering conflict. Nevertheless, the leadership of the two countries now needs to rise to the occasion and capitalize on the opportunities. If the relationship remains hell-bent on its irritants, only third parties that are unwilling to bring peace to the region will benefit, at the cost of the wellbeing of the people of the two countries. Around one million Afghans have so far been killed in foreign military interventions, led first by the Soviet Union between 1979 and 1989 and then by the US-led coalition forces from October 2001 to date. Many have been left homeless and have ended up as refugees in Pakistan, Iran, and other neighboring countries. As many as three generations of Afghans have been affected by civil war, and still there is no letup as far as violence is concerned. Until the country becomes stable, it will continue to destabilize its neighbors, especially Pakistan. As for the US, it spent as much as $1 trillion in Afghanistan in the longest war in its history, and it has almost withdrawn without, regrettably, accomplishing any of the desired results (Ahmer Monis 2020). By default, Pakistan has a key role in Afghan reconciliation processes, both ongoing and those which could evolve.
Pakistan has always supported an "Afghan-led and Afghan-owned" peace process. However, there are serious limitations to this concept. Besides the peace process, there are many areas where Pakistan and Afghanistan could work jointly. They could, for example, put in a combined effort to improve the living standards of their citizens, including health and education facilities, and increase employment opportunities through increased bilateral trade. Both countries need each other: Pakistan needs Afghanistan for stability, economic prosperity, and the security of its borders; and, notwithstanding the Afghan leadership's desire to diversify the country's trade through the Iranian Chabahar port, Afghanistan will continue to need Pakistan for the Karachi and Gwadar ports, as well as Pakistani land routes, to trade with other countries. Afghanistan and Pakistan need to adopt a more realistic approach towards each other's sensitivities. The India factor in the respective policies of both sides has remained a source of deterioration in relations. Although, as an independent and sovereign state, Afghanistan has the liberty to choose its friends, its interactions with such friends should not pose security threats to neighboring countries. The presence of India's consulates in Afghanistan's major cities and their role in jeopardizing the western border is a matter of concern for Pakistan. Bilateral ties can also be strengthened if both Kabul and Islamabad opt for economic cooperation and trade as the basic foundation of their engagement. Being a landlocked state, Afghanistan has long routed its trade through Pakistan. Afghanistan is the second-largest export market for Pakistani products, and the two are each other's largest trading partners. Though the official bilateral trade level is very low, i.e. $1.5 billion, informal trade and smuggling of various goods account for over $6 billion. This informal trade, though to the peril of both economies, is one of the underlying strengths of the people-to-people bond. Improvement in road infrastructure and the removal of institutional constraints to make travel between the two countries easy could further strengthen people-to-people relations (Mairajul-Hamid 2017: 59). They already have the Afghanistan Pakistan Transit Trade Agreement (APTTA) in place as a rational and viable means to strengthen bilateral economic ties and trade with other regional states. Diplomatic initiatives are required to press Afghanistan to consider delinking Pakistan's trade with Central Asia from demands for a free two-way Afghan transit route to India, or for providing India access to Afghanistan and beyond through its land routes, as one of the CBMs.

Conclusion
Being an immediate neighbor, Pakistan always attaches dominant significance to its relations with Afghanistan, as Pakistan's peace and stability depend on Afghan peace and stability. Pakistan, from the first day of its inception, has supported clinching friendly relationships with other Muslim countries as an important notion of its foreign policy. However, Afghanistan nurtured a negative attitude in the face of their shared colonial legacy. Traditionally, the Pak-Afghan relationship has been characterized by mutual mistrust and a lack of confidence, and third parties have always been a decisive factor in determining Pak-Afghan relations.
Keeping in view the above comprehensive analysis of the fluctuating Pak-Afghan bilateral relationship: Pakistan, from the day of its independence, has supported clinching amicable relations with Afghanistan and other Muslim countries as an important paradigm of its foreign policy. However, Afghanistan kept nurturing a negative attitude in the face of their shared colonial legacy. Based on the above-mentioned geostrategic importance of both neighboring states, it is certain that both must remain engaged in their respective identities and roles in the region. Therefore, the border issues that have dragged on between Pakistan and Afghanistan should be settled promptly, provided that the political interests of both countries are not compromised. There is a major area of overlap in the national interests of the two countries, with the exception of a few complexities. It must be understood by both countries that this commonality of interests is driven by geographical connectedness and cultural, ethnic, and historical similarities. Both countries have to safeguard their interests by engaging with each other and, with cooperation, working in unison to overcome irritants. Working out a robust and durable peace with a countrywide insurgency led by the Taliban is the need of the hour. The differences within Kabul government circles over major appointments and over negotiations with the Taliban and other militants are obstructing long-term stability. Afghanistan cannot achieve durable peace all alone; co-opting Pakistan would help to achieve it together. The Afghan government has to offer political concessions first to bring the Taliban to the negotiation table, and then to keep them engaged until a final settlement. The rise of Daesh (ISIS) and its spread in Afghanistan is another area warranting close coordination between Afghanistan and Pakistan. Pakistan should be generous in letting Afghan refugees return at their own pace. Afghanistan should respect Pakistan's sensitivities concerning its security issues with India and avoid overt and covert appeasement of India at the cost of Pakistan. Islamabad should also realize the fact that no one is a favorite for it in Kabul. Afghanistan needs to abide by its international obligations: firstly, regarding the demarcation of the international border, and secondly, by following international norms with regard to border management, to put an end to drugs, small arms, and human trafficking as well as cross-border attacks. The international community and the neighboring states are obliged to respect the sovereignty of Afghanistan and help it in its stabilization instead of preparing for another great game and using Afghanistan as a battlefield of proxy wars against each other. The paper concludes that no durable relationship can be improved or normalized unless the lack of confidence and mistrust that characterize relations are addressed. An effective mechanism is needed from both sides to generate an environment favorable to long-lasting peace and security in the region.
Chondrocyte source for cartilage regeneration in an immature animal: Is iliac apophysis a good alternative?
Background: Autologous articular cartilage at present forms the main source of chondrocytes for cartilage tissue engineering. In children, the iliac apophysis is a rich and readily accessible source of chondrocytes. This study compares the growth characteristics and phenotype maintenance of goat iliac apophysis growth plate chondrocytes with those sourced from goat articular cartilage, and thereby assesses their suitability for autologous chondrocyte transplantation in immature animals for growth plate and articular cartilage regeneration. Materials and Methods: Four sets of experiments were carried out. Cartilage samples were harvested under aseptic conditions from goat iliac apophysis and knee articular cartilage. The chondrocytes were isolated in each set, viable cells were counted, and the cells were subsequently cultured as a monolayer in tissue culture flasks containing chondrogenic media at 2.5 × 10³ cells/cm². Growth was periodically assessed with phase contrast microscopy, and the cells were harvested on the 8th and 15th days for morphology, cell yield, and phenotype assessment. Student's t-test was used for comparison of the means. Results: Confluence was reached in the iliac apophysis growth plate chondrocyte flasks on the 10th day and in the articular cartilage chondrocyte flasks on the 14th day. The mean cell count of growth plate chondrocytes on the 8th day was 3.64 × 10⁵ (SD = 0.601) and that of articular cartilage chondrocytes was 1.40 × 10⁵ (SD = 0.758) per flask. The difference in the means was statistically significant (P = 0.003). On the 15th day, the mean cell number had increased to 1.35 × 10⁶ (SD = 0.20) and 1.19 × 10⁶ (SD = 0.064) per flask, respectively. This difference was not statistically significant (P = 0.26). The population doubling time on the 8th day of cell culture was 3.18 and 6.24 days, respectively, for iliac apophyseal and articular cartilage chondrocytes, which changed to 3.59 and 3.1 days, respectively, on the 15th day. Immunocytochemistry showed 100% retention of collagen 2 positive and collagen 1 negative cells in both sets of cultures in all samples. Conclusion: The iliac apophysis is a rich source of chondrocytes with a high growth rate and the ability to retain phenotype when compared to articular cartilage derived chondrocytes. Further in vivo studies may determine the efficacy of physeal and articular repair in children with apophyseal chondrocytes.

Introduction
Cartilage tissue engineering in humans for focal articular cartilage defects is a standard therapy offered in many centers in the world. [1][2][3][4] The sources of cells for this are mainly articular cartilage and autologous mesenchymal stem cells. 4 Chondrocytes are the logical cells to be used as a source for cartilage tissue engineering. In adults, these are obtained from the non-weight-bearing articular cartilage of the joints by an arthroscopic procedure at the time of diagnosing joint pathology. 1 When considering articular cartilage or growth plate tissue engineering in a child, the iliac crest apophysis forms a rich and readily accessible source of chondrocytes. 5 The growth and culture characteristics of chondrocytes isolated from the iliac apophysis have not been extensively studied. Additionally, its efficacy as a rich and proliferative source of chondrocytes in monolayer cultures has not been compared to the usual source of autologous chondrocytes, i.e.
articular cartilage. The aim of this study was to compare the growth characteristics and phenotype maintenance of chondrocytes sourced from articular cartilage with those sourced from the iliac apophysis. This could in turn form the basic groundwork for future studies aimed at cartilage regeneration using chondrocytes isolated from the iliac apophysis. The beneficiaries of this would be children with cartilage defects in the articular surfaces and growth plate. 6,7

Materials and Methods
This study was carried out at the basic science laboratory as part of the preliminary work leading to the development of the use of iliac crest chondrocytes for growth plate regeneration in goats. This was an Institutional Review Board approved study funded by the Department of Biotechnology, Government of India. Four paired sets of experiments were designed, comparing iliac crest derived chondrocytes with articular cartilage derived cells from freshly slaughtered goats.
Technique
Goat legs and hindquarters (taken from freshly slaughtered goats) were collected from the slaughter house. In the laboratory, the outer skin from the goat leg and the iliac bone was removed using a scalpel, and the leg and iliac bone were wrapped in sterile tissue paper and soaked in 70% ethanol for 1 hour. Under sterile conditions, cartilage samples from the iliac apophysis and the articular surface of the knee were dissected out. Perichondrium and other surrounding tissues were removed from the cartilage by sharp dissection using a knife. The cartilage specimens were transferred to Dulbecco's Modified Eagle Medium/Hams F12 (DMEM/F12; Sigma Aldrich, St. Louis, MO, USA). All cartilage samples were processed within 4 hours of slaughter.
Chondrocyte isolation
The cells were isolated from the cartilage by mincing it into small pieces and transferring them into DMEM/F12 media containing 2 mg/ml of collagenase type II (Worthington, Lakewood, NJ, USA). The tubes were incubated in a 37°C water bath shaker overnight. After incubation, the collagenase action was diluted by adding 20 ml of DMEM/F12 media to 10 ml of sample, and the digested cartilage sample was filtered through a 100 µm cell strainer (BD Biosciences, San Jose, CA, USA) and centrifuged at 2400 rpm for 10 minutes at 25°C. The supernatant was discarded and the cells were resuspended in 5 ml of media and centrifuged. The pellet was again suspended in media, and then 100 µl of cell suspension mixed with Trypan Blue was placed in a hemocytometer and viable cells were counted to arrive at the cell yield. 8 Cells were then cultured as a monolayer in T-25 cm² flasks containing DMEM/F12 medium at a density of 2.5 × 10³ cells/cm². Culture flasks were incubated for 15 minutes in a CO₂ incubator, and then 10% fetal bovine serum (Sigma) and 62 µg/ml ascorbic acid (Sigma) were added. Monolayer cultured chondrocytes were harvested on the 8th and 15th days for the assessment of morphology, phenotype by immunocytochemistry [Figure 1a-f], and population doubling time (PDT). Phase-contrast microscopy was done to assess cell growth, morphology, and time to confluence [Figure 2a-d]. The PDT can be calculated given two measurements of a growing quantity, q₁ at time t₁ and q₂ at time t₂; assuming a constant growth rate, the doubling time is PDT = (t₂ − t₁) × log 2 / log(q₂/q₁). The PDT was determined by using an online calculating software. 9
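The formula above is easy to check against the study's own numbers: seeding at 2.5 × 10³ cells/cm² in a T-25 flask corresponds to 6.25 × 10⁴ cells at day 0. The sketch below applies the doubling-time formula to the day-8 counts; it reproduces the reported iliac apophyseal PDT almost exactly (the articular value it gives, ~6.9 days, is close to but not identical with the reported 6.24 days, which may reflect a different effective starting count).

```python
# Population doubling time under constant exponential growth:
# PDT = (t2 - t1) * ln(2) / ln(q2 / q1)
import math

def pdt(q1, t1, q2, t2):
    """Doubling time (same units as t) between counts q1 at t1 and q2 at t2."""
    return (t2 - t1) * math.log(2) / math.log(q2 / q1)

seeded = 2.5e3 * 25  # 2.5e3 cells/cm^2 in a 25 cm^2 flask at day 0

print(pdt(seeded, 0, 3.64e5, 8))  # iliac apophysis, day 8: ~3.15 days
print(pdt(seeded, 0, 1.40e5, 8))  # articular cartilage, day 8: ~6.88 days
```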
Iliac and articular chondrocyte phenotype on the 8th and 15th days was assessed by immunocytochemistry using anti-collagen type I and type II antibodies, following the protocol by Marlovits et al. 10 The statistical analysis was performed using Statistical Package for Social Sciences (SPSS) software ver. 16. Results The articular cartilage sample of set 2 was discarded because of contamination [Figure 1a-f]. Discussion The source of chondrocytes for articular cartilage regeneration has been extensively investigated in the last two decades. 4,11,12 In the initial stages of autologous chondrocyte transplantation for focal chondral defects, the source of chondrocytes was the non-weight-bearing cartilage harvested from the affected joint, expanded in vitro, and transplanted as a monolayer. 1 In the recent past, there have been two major directions in research. One is to explore the role of scaffolds in maintaining the phenotype. 1 The second is to find an alternative source of chondrocytes because of the diminished growth potential and availability in the adult human. 4,11,12 The chondrogenic differentiation of mesenchymal stem cells from bone marrow, adipose tissue, and umbilical cord has been explored. 4,11,12 One of the major issues in chondrocyte expansion is the quality of the chondrocytes grown in vitro, mainly their ability to retain their phenotype in culture, if hyaline cartilage regeneration is to be achieved. 4,[11][12][13] There is a good source of autogenous chondrocytes in the iliac apophyseal growth plate in children, which can potentially be used for treating growth plate abnormalities and articular cartilage defects. 14 In this study, we focused on the growth rates of the harvested cells as evidenced by their doubling time and time to reach confluence when seeded in a uniform manner. Our results showed that the growth rate of the cells from the iliac apophysis was much higher than that of the articular cartilage chondrocytes, as demonstrated by the significantly higher total number of cells in the 8th-day culture, a shorter doubling time on the 8th day, and earlier confluence in each flask (10th vs. 14th day). This implies that iliac apophysis chondrocytes divide much faster than articular cartilage chondrocytes. We attribute the paradoxical slowing of the growth rate of the iliac apophyseal chondrocytes on the 15th day to the fact that confluence was reached much earlier in these cultures, so that factors like contact inhibition, space, and nutrition limitation slowed the expansion in their number. 15 We were unable to find a similar study in the literature comparing the growth rates of chondrocytes derived from different sources. Previous studies comparing chondrocytes from the growth plate with those from articular cartilage have focused on the expression of markers that distinguish articular cartilage chondrocytes from those derived from the growth plate. The second aim was to study the ability of the chondrocytes to retain morphology and phenotype, which determines their ability to form hyaline cartilage when transplanted. The phenotypic assessment done using markers for collagen I and collagen II on the 8th and 15th days showed maintenance of the chondrocyte phenotype, indicating that growth plate chondrocytes are satisfactory in this respect and suitable for transplantation up to the 15th day.
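The significance testing of the day-8 growth difference can be checked from the summary statistics reported above. The sketch below assumes n = 4 flasks per group, based on the four experimental sets; this is an assumption, since one articular sample was discarded for contamination.

```python
from scipy import stats

# Day-8 mean cell counts per flask (x 10^5), as reported: mean (SD).
# n = 4 flasks per group is an assumption based on the four experimental sets.
res = stats.ttest_ind_from_stats(mean1=3.64, std1=0.601, nobs1=4,
                                 mean2=1.40, std2=0.758, nobs2=4)
print(res.pvalue)  # ~0.0036, consistent with the reported P = 0.003
```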
The ratio of collagen II to collagen I expression in in vitro culture is time dependent: it has been shown to peak at 215- to 430-fold in the first week of human articular chondrocyte culture and to decrease slowly from day 10 onward. 10 The shift toward collagen I expression is associated with a more fibroblastic phenotype, which is not suitable for transplantation as it forms fibrocartilage in vivo. While we did not perform a quantitative assay, the comparable retention of 100% of the cells as collagen I negative and collagen II positive in growth plate cultures up to 15 days suggests that they are suitable for transplantation. Another study has shown that porcine growth plate chondrocytes show comparable phenotype maintenance in the first week of monolayer culture, but progressively increased collagen I expression on continued culture, similar to articular chondrocytes. 16 There are a number of limitations in this study. The growth and doubling time of the chondrocytes would have been better assessed using a larger number of time points. 8 Once confluence is achieved, it becomes more difficult to assess the number of cells in the culture because of the voluminous extracellular matrix produced by the chondrocytes. This needs to be digested, and the cells need to be released without lysis for accurate cell counts. One of the important features of growth plate chondrocytes is their ability to undergo hypertrophy and cell death as an initial step toward endochondral ossification. 17 These differences can best be investigated by looking for hypertrophy markers such as collagen X, parathyroid hormone receptor, and Indian hedgehog. 18 While hypertrophy is desirable in growth plate reconstitution, this feature is not advantageous when undertaking articular cartilage regeneration. These hypertrophy markers were not tested in this experiment as they were outside the scope of this study. However, the literature shows that collagen X, a hypertrophy marker, is expressed in higher quantity in growth plate chondrocytes than in articular chondrocytes in monolayer and 3-D culture. 16 There is also evidence to suggest that in vitro growth plate derived tissue engineered cartilage in 3-D pellet culture gradually loses hypertrophy markers and expresses markers common with articular cartilage. 18 More in vivo studies would also be warranted to understand the behavior of growth plate chondrocytes when subjected to the normal mechanical stresses of the joint and studied for articular replacement therapy. Conclusion The iliac apophysis is a rich source of chondrocytes with a very high growth rate and a comparable ability to retain the phenotype. We posit that it is suitable as a chondrocyte source for growth plate regeneration. This has clinical use in children and adolescents, who have an immature iliac apophysis, as a source of autologous chondrocytes for the treatment of articular cartilage defects such as osteochondritis dissecans and traumatic osteochondral defects, and for repairing and replacing physeal defects following infection and trauma. We expect the harvesting procedure, if eventually translated, to be simpler and less morbid when compared to arthroscopic harvesting from articular cartilage for autologous cartilage transplantation.
2018-04-03T00:00:38.674Z
2012-07-01T00:00:00.000
{ "year": 2012, "sha1": "6746a1fae8ef93f8560540bc08463679710f00ab", "oa_license": "CCBYNCSA", "oa_url": "https://europepmc.org/articles/pmc3421929", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "29a20cde346d32de41a81ad0e5bffd6c477a18b3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259837227
pes2o/s2orc
v3-fos-license
Making the Nyström method highly accurate for low-rank approximations The Nyström method is a convenient heuristic method to obtain low-rank approximations to kernel matrices in nearly linear complexity. Existing studies typically use the method to approximate positive semidefinite matrices with low or modest accuracies. In this work, we propose a series of heuristic strategies to make the Nyström method reach high accuracies for nonsymmetric and/or rectangular matrices. The resulting methods (called high-accuracy Nyström methods) treat the Nyström method and a skinny rank-revealing factorization as a fast pivoting strategy in a progressive alternating direction refinement process. Two refinement mechanisms are used: alternating the row and column pivoting starting from a small set of randomly chosen columns, and adaptively increasing the number of samples until a desired rank or accuracy is reached. A fast subset update strategy based on the progressive sampling of Schur complements is further proposed to accelerate the refinement process. Efficient randomized accuracy control is also provided. Relevant accuracy and singular value analysis is given to support some of the heuristics. Extensive tests with various kernel functions and data sets show how the methods can quickly reach prespecified high accuracies in practice, sometimes with quality close to SVDs, using only small numbers of progressive sampling steps. 1. Introduction. The Nyström method is a very useful technique for data analysis and machine learning. It can be used to quickly produce low-rank approximations to data matrices. The original Nyström method in [35] is designed for symmetric positive definite kernel matrices and it essentially uses uniform sampling to select rows/columns (that correspond to some subsets of data points) to serve as basis matrices in low-rank approximations. It has been empirically shown to work reasonably well in practice. The Nyström method is highly efficient in the sense that it can produce a low-rank approximation in complexity linear in the matrix size n (supposing the target approximation rank r is small). For problems with high coherence [13,32], the accuracy of the usual Nyström method with uniform sampling may be very low. There have been lots of efforts to improve the method. See, e.g., [10,13,22,42]. In order to gain good accuracy, significant extra costs are needed to estimate leverage scores or determine sampling probabilities in nonuniform sampling [8,11,21]. Due to its modest accuracy, the Nyström method is usually used for data analysis and not much for regular numerical computations. In numerical analysis and scientific computing where controllable high accuracies are desired, often truncated SVDs or more practical variations like rank-revealing factorizations [6,17] and randomized SVD/sketching methods [19,33] are used. These methods can produce highly reliable low-rank approximations but usually cost O(n²) operations. The purpose of this work is to propose a set of strategies based on the Nyström method to produce high-accuracy low-rank approximations for kernel matrices in about linear complexity. The matrices are allowed to be nonsymmetric and/or rectangular. Examples include off-diagonal blocks of larger kernel matrices that frequently arise from numerical solutions of differential and integral equations, structured eigenvalue solutions, N-body simulations, and image processing.
There has been a rich history in studying the low-rank structure of these off-diagonal kernel matrices based on ideas from the fast multipole method (FMM) [15] and hierarchical matrix methods [18]. To obtain a low-rank approximation to such a rectangular kernel matrix A with the Nyström method, a basic way is to choose respectively random row and column index sets I and J and then get a so-called CUR approximation (1.1) A ≈ A :,J A + I,J A I,: , where A :,J and A I,: denote submatrices formed by the columns and rows of A corresponding to the index sets J and I, respectively, and A I,J can be understood similarly. However, the accuracy of (1.1) is typically low, unless the so-called volume of A I,J happens to be sufficiently large [14]. It is well known that finding a submatrix with the maximum volume is NP-hard. Here, we would like to design adaptive Nyström schemes that can produce controllable errors (including near machine precision) while still retaining nearly linear complexity in practice. We start by treating the combination of the Nyström method and a reliable algebraic rank-revealing factorization as a fast pivoting strategy to select significant rows/columns (called representative rows/columns as in [37]). We then provide one way to analyze the resulting low-rank approximation error, which serves as a motivation for the design of our new schemes. Further key strategies include the following. 1. Use selected columns and rows to perform fast alternating direction row and column pivoting, respectively, so as to refine selections of representative rows and columns. 2. Adaptively attach a small number of new samples so as to perform progressive alternating direction pivoting, which produces new expanded representative rows and columns and advances the numerical rank needed to reach high accuracies. 3. Use a fast subset update strategy that successively samples the Schur complements so as to improve the efficiency and accelerate the advancement of the sizes of basis matrix toward target numerical ranks. 4. Adaptively control the accuracy via quick estimation of the approximation errors. Specifically, in the first strategy above, randomly selected columns are used to quickly perform row pivoting for A and obtain representative rows (which form a row skeleton A I,: ). The row skeleton is further used to quickly perform column pivoting for A to obtain some representative columns (which form a column skeleton A :,J ). This refines the original choice of representative columns. Related methods include various forms of the adaptive cross approximation (ACA) with row/column pivoting [4,24], the volume sampling approximation [8], and the iterative cross approximation [23]. In particular, the method in [23] iteratively refines selections of significant submatrices (with volumes as large as possible). However, later we can see that this strategy alone is not enough to reach high accuracy, even if a large number of initial samples is used. Next in the second strategy, new column samples are attached progressively in small stepsizes so as to repeat the alternating direction pivoting until convergence is reached. Convenient uniform sampling is used since the sampled columns are for the purpose of pivoting. This eliminates the need of estimating sampling probabilities. The third strategy enables to avoid applying pivoting to row/column skeletons with growing sizes. That is, the row (column) skeleton is expanded by quickly updating the previous skeleton when new columns (rows) are attached. 
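To make the baseline concrete, here is a minimal numpy sketch of the plain CUR-style Nyström approximation in (1.1) with uniformly sampled index sets. The kernel, point sets, and sizes are illustrative assumptions, not the paper's test configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative kernel matrix: 1/|x - y| interaction between two separated point sets.
x = rng.uniform(0.0, 1.0, 500)
y = rng.uniform(2.0, 3.0, 800)
A = 1.0 / np.abs(x[:, None] - y[None, :])

r = 20
I = rng.choice(A.shape[0], r, replace=False)  # random row indices
J = rng.choice(A.shape[1], r, replace=False)  # random column indices
A_tilde = A[:, J] @ np.linalg.pinv(A[np.ix_(I, J)]) @ A[I, :]  # (1.1)
print(np.linalg.norm(A - A_tilde, 2) / np.linalg.norm(A, 2))
```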
We also give an aggressive subset update method that can quickly reach high accuracies with a small number of progressive sampling steps in practice. With the fourth strategy, we can conveniently control the number of sampling steps until a desired accuracy is reached. It avoids the need to perform quadratic-cost error estimation. The combination of these strategies leads to a class of low-rank approximation schemes which we call high-accuracy Nyström (HAN) schemes. They are heuristic schemes that are both fast and accurate in practice. Although a fully rigorous justification of the accuracy is lacking, we give different perspectives to motivate and support the ideas. Relevant analysis is provided to understand certain singular value and accuracy behaviors in terms of both deterministic rank-revealing factorizations and statistical error evaluation. We demonstrate the high accuracy of the HAN schemes through comprehensive numerical tests based on kernel matrices defined from various kernel functions evaluated at different data sets. In particular, an aggressive HAN scheme can produce approximation accuracies close to the quality of truncated SVDs. It is numerically shown to have nearly linear complexity and furthermore usually needs just a surprisingly small number of sampling steps. Additionally, the design of the HAN schemes does not require analytical information from the kernel functions or geometric information from the data points. They can then serve as fully blackbox fast low-rank approximation methods, as indicated in the tests. The remaining discussions are organized as follows. We show the pivoting strategy based on the Nyström method and give a way to study the approximation error in Section 2. The detailed design of the HAN schemes together with relevant analysis is given in Section 3. Section 4 presents the numerical tests, followed by some concluding remarks in Section 5. 2. Pivoting based on the Nyström method and an error study. We first consider a low-rank approximation method based on a pivoting strategy consisting of the Nyström method and rank-revealing factorizations of tall and skinny matrices. A way to study the low-rank approximation error will then be given. These will provide motivations for some of our ideas in the HAN schemes. Let A be the m × n kernel matrix as in (2.1), which is sometimes also referred to as the interaction matrix between x and y. We would like to approximate A by a low-rank form. The strong rank-revealing QR or LU factorizations [17,26] are reliable ways to find low-rank approximations with high accuracy. They may be used to obtain an approximation (called an interpolative decomposition) of the form (2.2) A ≈ A :,J V T with V T = ( I F ) Q T , where Q is a permutation matrix, r ≡ |J | (size or cardinality of J ) is the approximate (or numerical) rank, and ‖F‖ max ≤ c with c ≥ 1. c is a user-specified parameter and may be set to be a constant or a low-degree polynomial of m, n, and r [17]. We suppose r is small. The column skeleton A :,J corresponds to a subset t ⊂ y which is a subset of landmark points. Here we also call t a representative subset, which can be selected reliably by strong rank-revealing factorizations. A strong rank-revealing factorization may be further applied to the transpose of A :,J to select a representative subset s ⊂ x corresponding to a row index set I in A :,J . That is, we can find a pivot block A I,J . Without loss of generality, we may assume |I| = |J | = r.
(If the factorization produces I with |I| < |J |, V can be modified so as to replace J by an appropriate index set with size |I|.) Thus, the resulting decomposition may be written as an equality where P is a permutation matrix and, with 1 : m standing for 1, 2, . . . , m, Since A :,J is a tall and skinny matrix, we refer to (2.3) as a skinny rank-revealing (SRR) factorization. (2.2) and (2.3) in turn lead to the approximation With (2.5), we may further obtain a CUR approximation like in (1.1) (with the pseudoinverse replaced by A −1 I,J ). The direct application of strong rank-revealing factorizations to A to obtain (2.2) is expensive and costs O(rmn). To reduce the cost, we can instead follow the Nyström method and randomly sample columns from A to form A :,J . However, the accuracy of the resulting approximation based on the forms (2.2) or (2.5) may be low. On the other hand, we can view the SRR factorization (2.3) as a way to quickly choose the representative subset s (based on the interaction between x and t instead of the interaction between x and y). In other words, (2.3) is a way to quickly perform row pivoting for A so as to select representative rows A I,: from A. Then we can use the following low-rank approximation: which may be viewed as a potentially refined form over (2.2) when J is randomly selected. (Note that P and E depend on J .) We would like to gain some insights into the accuracy of approximations based on the Nyström method. There are various earlier studies based on (1.1). Those in [5,42] are relevant to our result below. When A is positive (semi-)definite, the analysis in [42] bounds the errors in terms of the distances between the landmark points and the remaining data points. A similar strategy is also followed in [5, Lemma 3.1] for symmetric A. The resulting bound may be very conservative since it is common for some data points in practical data sets to be far away from the landmark points. In addition, the error bounds in [5,42] essentially involve a factor A −1 I,J 2 (or A + I,J 2 ), which may be too large if high accuracy is desired. This is because the smallest singular value of A 11 may be just slightly larger than a smaller tolerance. Here, we provide a way to understand the approximation error based on (2.6). It uses the minimization of a slightly overdetermined problem and does not involve A −1 I,J 2 . The following analysis does not aim to precisely quantify the error magnitude (which is hard anyway). Instead, it can serve as a motivation for some strategies in our high-accuracy Nyström methods later. It is obvious that where the last step is because A i,J A −1 I,J is a row of E in (2.4) and its entries have magnitudes bounded by c. With c ≥ 1, we further have Since this holds for all v ∈ R r , take the minimum for v to get the desired result. The bound in this lemma can be roughly understood as follows. If A Ii,j is nearly in the range of A Ii,J for all i, j, the bound in (2.7) would then be very small and we would have found I and J that produce an accurate low-rank approximation (2.6). Otherwise, to further improve the accuracy, it would be necessary to refine I and J and possibly include additional i and j indices respectively into I and J . A heuristic strategy is to progressively pick i and j so that A Ii,j is as linearly independent from the columns of A Ii,J as possible. Motivated by this, we may use a subset refinement process. 
First, use randomly picked columns A :,J to generate a row skeleton and then use the row skeleton to generate a new column skeleton. The new column skeleton suggests which new j should be attached to J . Next, if a desired accuracy is not reached, then randomly pick more columns to attach to the refined set J and start a new round of refinement. Such a process is called progressive alternating direction pivoting (or subset refinement ) below. 3. High-accuracy Nyström schemes. In this section, we show how to use the Nyström method to design the high-accuracy Nyström (HAN) schemes that can produce highly accurate low-rank approximations in practice. We begin with the basic idea of the progressive alternating direction pivoting and then show how to perform fast subset update and how to conveniently control the accuracy. 3.1. Progressive alternating direction pivoting. The direct application of strong rank-revealing factorizations to A has quadratic complexity. One way to save the cost is as follows. Start from some column samples of A like in the usual Nyström method. Use the SRR factorization to select a row skeleton, which can then be used to select a refined column skeleton. The process can be repeated in a recursive way, leading to a fast alternating direction refinement scheme. A similar empirical scheme has been adopted recently in [23,29]. However, when high accuracies are desired, the effectiveness of this scheme may be limited. That is, just like the usual Nyström method, a brute-force increase of the initial sample size may not necessarily improve the approximation accuracy significantly. A high accuracy may require the initial sample size to be overwhelmingly larger than the target numerical rank, which makes the cost too high. Here, we instead adaptively or progressively apply the alternating direction refinement based on step-by-step small increases of the sample size. We use one round of alternating row and column pivoting to refine the subset selections. After this, if a target accuracy τ or numerical rank r is not reached, we include a small number of additional samples to repeat the procedure. The basic framework to find a low-rank approximation to A in (2.1) is as follows, where the subset J is initially an empty set and b ≤ r is a small integer as the stepsize in the progressive column sampling. 1. (Progressive sampling) Randomly choose a column index set J ⊂ {1 : n}\J with | J | = b and set J = J ∪ J . (Row pivoting) Apply an SRR factorization to A :, J to find a row index set I: where U looks like that in (2.3). 3. (Column pivoting) Apply an SRR factorization to A I,: to find a refined column index set J : where V looks like that in (2.2). 4. (Accuracy check ) If a desired accuracy, maximum sample size, or a target numerical rank is reached or if I stays the same as in the previous step, return a low-rank approximation to A like the following and exit: Otherwise, repeat from Step 1. (More details on the stopping criteria and fast error estimation will be given in Section 3.3.) This basic HAN scheme (denoted HAN-B) is illustrated in Figure 3.1, with more details given in Algorithm 3.1. Note that the key outputs of the SRR factorization (2.3) are the index set I and the matrix E. (The permutation matrix P is just to bring the index set I to the leading part and does not need to be stored.) For convenience, we denote (2.3) by the following procedure in Algorithm 3.1 (with the parameter c in (2.4) assumed to be fixed): The scheme may be understood heuristically as follows. 
Initially, with J a random sample from the column indices, it is known that the expectation of the norm of a row of A :, J is a multiple of the norm of the corresponding row in A (see, e.g., [1,9]). Thus, the relative magnitudes of the row norms of A can be roughly reflected by those of A :, J . It then makes sense to use A :, J for quick row pivoting (by finding A I, J with determinant as large as possible). This strategy shares features similar to the randomized pivoting strategies in [25,40] which are also heuristic and work well in practice, except that the methods in [25,40] need matrix-vector multiplications with costs O(mn). With the resulting row pivot index set I, the scheme further uses the SRR factorization to find a submatrix A I,J of A I,: with determinant as large as possible, which enables to refine the column selection. It may be possible to further improve the index sets through multiple rounds of such refinements like in [23,29]. However, the accuracy gain seems limited, even if a large initial sample size is used (as shown in our test later). Thus, we progressively attach additional samples (in small stepsizes) to the refined subset J and then repeat the previous procedure. In practice, this makes a significant difference in reducing the approximation error. In this scheme, the sizes of the index sets I and J grow with the progressive sampling. Accordingly, the costs of the SRR factorizations (3.1)-(3.2) increase since the SRR factorizations at step i are applied to matrices of sizes m × (ib) or (ib) × n. With the total number of iterations N ≈ r b , the total cost (excluding the cost to check the accuracy) is With i increases, the iterations advance toward the target numerical rank or accuracy. Fast subset update via Schur complement sampling. In the basic scheme HAN-B, the complexity count in (3.3) for the SRR factorizations at step i gets higher with increasing i. To improve the efficiency, we show how to update the index sets so that at step i, the SRR factorization (for the row pivoting step for example) only needs to be applied to a matrix of size (m − (i − 1)b) × b instead of m × (ib), followed by some quick postprocessing steps. Suppose we start from a column index set J = J ∪ J as in Step 1 of the basic HAN scheme above. We would like to avoid applying the SRR factorizations to the full columns A :, J in Step 2 and the full rows A I,: in Step 3. We seek to directly produce an expanded column index set over J , as illustrated in Figure 3.2. It includes two steps. One is to produce an update I to the row index set I (Figure 3.2(a), which replaces Steps (c)-(d) in Figure 3.1) and the other is to produce an update to the column index set ( Figure 3.2(b)). Clearly, we just need to show how to perform the first step. With the row pivoting step like in (3.1), we can obtain a low-rank approximation of the form (2.6). Using the row permutation matrix P in (2.6) (computed in (2.3)), we may write A as At this point, we have where S is the Schur complement. Remark 3.1. In the usual strong rank-revealing factorizations like the one in [26], the low-rank approximation is obtained also from a decomposition of the form (3.5) with S dropped. Here, our fast pivoting scheme is more efficient. 
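A minimal sketch of the basic progressive alternating-direction pivoting loop is given below. It is not the authors' implementation: a column-pivoted QR (via scipy) stands in for the SRR/strong rank-revealing step, a pseudoinverse is used for the interpolation basis, and the full-norm stopping test is only for clarity, since the paper replaces it with randomized estimation to keep the complexity near linear. All parameter values are illustrative.

```python
import numpy as np
from scipy.linalg import qr

def srr_select(B, k):
    """Stand-in for the SRR step: a column-pivoted QR of B picks the k most
    significant column indices of B (a strong RRQR would give guarantees)."""
    _, _, piv = qr(B, mode='economic', pivoting=True)
    return np.sort(piv[:k])

def han_b(A, b=5, tol=1e-14, max_samples=200, seed=0):
    """Progressive alternating-direction pivoting in the spirit of HAN-B."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    J = np.array([], dtype=int)
    normA = np.linalg.norm(A)
    while J.size < max_samples:
        fresh = rng.choice(np.setdiff1d(np.arange(n), J), size=b, replace=False)
        J = np.concatenate([J, fresh])        # Step 1: progressive sampling
        I = srr_select(A[:, J].T, J.size)     # Step 2: row pivoting via sampled columns
        J = srr_select(A[I, :], I.size)       # Step 3: column pivoting via row skeleton
        U = A[:, J] @ np.linalg.pinv(A[np.ix_(I, J)])
        # Quadratic-cost check, for demonstration only.
        if np.linalg.norm(A - U @ A[I, :]) <= tol * normA:
            break
    return I, J, U
```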
Of course, the strong rank-revealing factorization in [26] guarantees the quality of the low-rank approximation in the sense that, there exist low-degree polynomials c ≥ 1 and f ≥ 1 in m, n, and k (size of A 11 ) such that (2.4) holds and, for 1 where σ i (·) denotes the i-th largest singular value of a matrix. Our subset update strategy is via the sampling of the Schur complement S. In fact, when A I,: = A 11 A 12 is accepted as a reasonable row skeleton, we then continue to find a low-rank approximation to S in (3.5) so it makes sense to sample S. It is worth noting that the full matrix S is not needed. Instead, only its columns corresponding to A :, J are formed. That is, we form where L corresponds to J and selects entries from {1 : n}\J in a two-level composition of the index sets as follows: That is, sampling the columns of A with the index set J is essentially to sample the columns of S with L. For notational convenience, suppose the columns of A have been permuted so that Now, apply an SRR factorization to S :,L to get Then S ≈P Î E S K,: . Accordingly, we may write S as where S K,: = S 22 S 23 . From (3.8) and (3.9), S can be further written as whereŜ is a new Schur complement (and is not formed). At this point, we have the following proposition which shows how to expand the row index set I by an update I. whereP is a permutation matrix, I = I ∪ I with Proof. (3.5) and (3.10) lead to We can now factorize the second factor on the far right-hand side of (3.13) as Then, A may be written as The block  21Â22Â23 essentially corresponds to the rows of A with index set I in (3.12). This is because of the special form of the second factor on the right-hand side of (3.15). Then get (3.11) by letting E = ĒÊ . This proposition shows that we can get a factorization (3.11) similar to (3.5), but with the expanded row skeleton A I,: . Accordingly, we may then obtain a new approximation to A similar to (2.6): To support the reliability of such an approximation, we can use the following way. As mentioned in Remark 3.1, if (3.5) is assumed to be obtained by a strong rankrevealing factorization, then we would have nice singular value bounds in (3.6). Now, if we assume that is the case and (3.10) is also obtained by a strong rank-revealing factorization, then we would like to show (3.11) from the subset update would also satisfy some nice singular value bounds. For this purpose, we need the following lemma. where s = σ k (A) σ k+1 (A) . Proof. By (3.6) and the interlacing property of singular values, Similarly, by the interlacing property of singular values and (3.17), where the result σ 1 (S) ≥ σ k+1 (A) directly follows from Weyl's inequality or [20,Theorem 3.3.16]: Then As a quick note, here s = σ k (A) σ k+1 (A) reflects the gap between σ k (A) and σ k+1 (A). Since we seek to expand the index sets I and J (and k hasn't yet reached the target numerical rank r), it is reasonable to regard s as a modest magnitude. Now we are ready to show the singular value bounds. This proposition indicates that, if (3.6) and (3.17) are assumed to result from strong rank-revealing factorizations, then (3.11) as produced by the subset update method would also enjoy nice singular value properties like in a strong rank-revealing factorization. This supports the effectiveness of performing the subset update. 
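Since only the sampled columns of the Schur complement are ever needed, the formation step can be written compactly. The following is a minimal numpy sketch of forming S[:, L] for S = A22 − A21 A11⁻¹ A12 from (3.5), without building S; the index-handling conventions are illustrative rather than the paper's exact bookkeeping.

```python
import numpy as np

def schur_columns(A, I, J, L):
    """Form only the columns S[:, L] of the Schur complement
    S = A22 - A21 @ inv(A11) @ A12, where A11 = A[I, J], without forming S.
    L indexes columns of S, i.e., positions among the columns of A outside J."""
    rest_rows = np.setdiff1d(np.arange(A.shape[0]), I)
    cols = np.setdiff1d(np.arange(A.shape[1]), J)[L]
    A11 = A[np.ix_(I, J)]
    # Solve A11 @ X = A[I, cols] rather than forming inv(A11).
    X = np.linalg.solve(A11, A[np.ix_(I, cols)])
    return A[np.ix_(rest_rows, cols)] - A[np.ix_(rest_rows, J)] @ X
```

The cost is linear in the matrix height for a fixed number of sampled columns, which is what keeps the subset update fast.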
Although here we obtain (3.6) and (3.17) through the much more economic SRR factorizations coupled with the Nyström method, it would be natural to use subset updates to quickly get the expanded index set I (from the original index set I). The SRR factorizations are only applied to blocks with column sizes b instead of ib in step i. In a nutshell, the subset update process starts from a row skeleton A I,: , samples the Schur complement S, and produces an expanded row skeleton A I,: and the basis matrixŨ in (3.16). The process is outlined in Algorithm 3.2. I← I ∪ I,Ẽ ← ĒÊ 9: end procedure Such a subset update strategy can also be applied to expand the column index set J . That is, when I is expanded into I ∪ I, we can apply the strategy above with J replaced by I, J replaced by I, and relevant columns replaced by rows. We then incorporate the subset update strategy into the basic HAN scheme. There are two ways to do so with different performance (see Algorithm 3.3). • HAN-U: This is an HAN scheme with fast updates for both the row subsets and the column subsets. Thus, both the index sets I and J are expanded through updates. In this scheme, |I| and |J | are each advanced by stepsize b in every iteration step. if i = 1 then ⊲ Column pivoting in the initial step . . . ⊲ Keeping the remaining lines of Algorithm 3.1 15: end procedure If Algorithm 3.2 is applied at the ith iteration of Algorithm 3.3 as in line 6, the main costs are as follows. • The formation of S :,L costs • The SRR factorization of S :,L in (3.8) costs • The computation of (3.14) costs These costs add up to O(ib(2bm + m + 2b 2 )), where some low-order terms are dropped and b is assumed to be a small fixed stepsize. The HAN-U scheme applies Algorithm 3.2 to both the row and the column subset updates. Accordingly, with N ≈ r b iterations, the total cost of the HAN-U scheme is which is a significant reduction over the cost in (3.3). The cost of the HAN-A scheme depends on how many iteration steps are involved and on how aggressive the index sets advance. In the most aggressive case, suppose at each step the updated index set I (or J ) doubles the size from the previous step, then it only needsÑ ≈ log 2 r b steps. Accordingly, the cost is which is comparable to ξ HAN−U . Moreover, in such a case, HAN-A would only need about b log 2 r b column samples instead of about r samples, which makes it possible to find a low-rank approximation with a total sample size much smaller than r. This has been observed frequently in numerical tests (see Section 4). 3.3. Stopping criteria and adaptive accuracy control. The HAN schemes output both I and J so we may use U A I,: , A :,J V T , or U A I,J V T as the output lowrank approximation, where V and U look like those in (2.2) and (2.6), respectively. Based on the differences of the schemes, we use the following choice which works well in practice: The reason is as follows. A :,J V T is the output from the end of the iteration and is generally a good choice. On the other hand, since HAN-A obtains U from a full strong rank-revealing factorization step which potentially gives better accuracy, so U A I,: is used for HAN-A. The following stopping criteria may be used in the iterations. • The iterations stop when a maximum sample size or a target numerical rank is reached. The numerical rank is reflected by |I| or |J |, depending on the output low-rank form in (3.21). • In HAN-B and HAN-A, the iteration stops when I stays the same as in the previous step. 
• Another criterion is when the approximation error is smaller than τ . It is generally expensive to directly evaluate the error. There are various ways to estimate it. For example, in HAN-U and HAN-A, we may use the following bound based on (3.5) and (3.10): (Note the approximations to A and S are obtained by randomization.) We may also directly estimate the absolute or relative approximation errors without the need to evaluate A −à 2 or A 2 . In the following, we give more details. The following lemmas suggest how to estimate the absolute and relative errors. If (3.5) is further assumed to satisfy (3.6), then Proof. From where the equality from the first line to the second directly comes by the definition of expectations and is a trace estimation result in [1]. This gives (3.22). If The probability result indicates that, even with small b, θ is a very accurate estimator for E 2 F (provided that (3.6) holds). We can further consider the estimation of the relative error. Proof. With (3.23), where E(CC T ) = b n−r I is simply by the definition of expectations and has been explored in, say, [9]. This leads to which, together with A 11 2 ≤ A 2 , yields the first inequality in (3.24). The second inequality in (3.24) is based on (3.6): From these lemmas, we can see that the absolute or relative errors in the low-rank approximation may be estimated by using S :,L and A 11 . For example, a reasonable estimator for the relative error of the low-rank approximationà is given by This estimator can be quickly evaluated and only costs O(b(m − k) + b 2 + k 2 ). The cost may be further reduced to O(b 2 + k 2 ) by using n−k b SK,L 2 A11 2 since S K,L results from a strong rank-revealing factorization applied to S :,L and there is a low-degree polynomial g in m − k and b such that To enhance the reliability, we may stop the iteration if the estimators return errors smaller than a threshold consecutively for multiple steps. 4. Numerical tests. We now illustrate the performance of the HAN schemes and compare with some other Nyström-based schemes. The following methods will be tested: • HAN-B, HAN-U, HAN-A: the HAN schemes as in Algorithms 3.1 and 3.3; • Nys-B: the traditional Nyström method to produce an approximation like in (1.1), where both the row index set I and the column index set J are uniformly and randomly selected; • Nys-P: the scheme to find an approximation like in (2.6) but with I obtained by one pivoting step (2.3) applied to uniformly and randomly selected A :,J ; • Nys-R: the scheme that extends Nys-P by applying several steps of alternating direction refinements to improve I and J like in lines 7-9 of Algorithm 3.1, which corresponds to the iterative cross-approximation scheme in [23]. (In Nys-R, the accuracy typically stops improving after few steps of refinement, so we fix the number of refinement steps to be 10 in the tests.) In the HAN schemes HAN-B, HAN-U, and HAN-A, the stepsize b in the progressive column sampling is set to be b = 5. The stopping criteria follow the discussions at the beginning of Section 3.3. Specifically, the iteration stops if the randomized relative error estimate in (3.25) is smaller than the threshold τ = 10 −14 , or if the total sample size S (in all progressive sampling steps) reaches a certain maximum, or if the index refinement no longer updates the row index set I. Since the HAN schemes involve randomized error estimation, it is possible for some iterations to stop earlier or later than necessary. 
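Because the randomized stopping test plays a central role here, a sketch of the column-sampling error estimate involved may help. The function below mirrors the trace-estimation idea behind the lemmas above; it is not the paper's exact estimator (3.25), whose precise form could not be recovered from the extracted text, and the stepsize is an illustrative value.

```python
import numpy as np

def estimate_abs_error(A, U, I, b=5, seed=0):
    """Column-sampling estimate of ||A - U @ A[I, :]||_F: for b uniformly
    chosen columns Js, E[(n/b) * ||residual[:, Js]||_F^2] equals the true
    squared Frobenius error, so the estimate below is unbiased in expectation."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Js = rng.choice(n, size=b, replace=False)
    resid = A[:, Js] - U @ A[np.ix_(I, Js)]
    return np.sqrt(n / b) * np.linalg.norm(resid)
```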
Also, HAN-B does not use the fast subset update strategy in Section 3.2, so an extra step is added to estimate the accuracy with (3.25). The Nyström-based schemes Nys-B, Nys-P, and Nys-R are directly applied with different given sample sizes S and do not really have a fast accuracy estimation mechanism. In the plots below for the relative approximation errors A−à 2 A 2 , the Nyström and HAN schemes are put together for comparison. However, it is important to distinguish the meanings of the sample sizes S for the two cases along the horizontal axes. For the Nyström schemes, each S is set directly. For the HAN schemes, each S is the total sample size of all sampling steps and is reached progressively through a sequence of steps each of stepsize b. In the three Nyström schemes, the cardinality |I| will be reported as the numerical rank. In the HAN schemes, the numerical rank will be either |I| or |J |, depending on the low-rank form in (3.21). Since the main applications of the HAN schemes are numerical computations, our tests below focus on two and three dimensional problems, including some discretized meshes and some structured matrix problems. We also include an example related to high-dimensional data sets. The tests are done in Matlab R2019a on a cluster using two 2.60GHz cores and 32GB of memory. Example 1. First consider some kernel matrices generated by the evaluation of various commonly encountered kernel functions evaluated at two well-separated data points x and y in two and three dimensions. x and y are taken from the following four data sets (see Figure 4.1). (a) Flower: a flower shape curve, where the x set is located at a corner and |x| = 1018, |y| = 13965. (b) FEM: a 2D finite element mesh extracted from the package MESHPART [12], where the x set is surrounded by the points in y with |x| = 821, |y| = 4125. The mesh is from an example in [41] that shows the usual Nyström method fails to reach high accuracies for some kernel matrices even with the number of samples near the numerical rank. (c) Airfoil: an unstructured 2D mesh (airfoil) from the SuiteSparse matrix collection (http://sparse.tamu.edu), where the x and y sets are extracted so that x has a roughly rectangular shape and |x| = 617, |y| = 11078. (d) Set3D: A set of 3D data points extract from the package DistMesh [30] but with the y points randomly perturbed with |x| = 717, |y| = 6650. The points in the data sets are nonuniformly distributed in general, except in the case FEM where the points are more uniform. The data points in two dimensions are treated as complex numbers. The setup of the x and y sets has the size of x just several times larger than the target numerical rank. This is often the case in the FMM and structured solvers where the corresponding matrix blocks are short and wide offdiagonal blocks that need to be compressed in the hierarchical approximation of a global kernel matrix (see, e.g., [15,31,36,38]). We consider several types of kernels as follows: where α is a parameter. Such kernels are frequently used in the FMM and in structured matrix computations like Toeplitz solutions [7] and some structured eigenvalue solvers [16,28,34]. For data points in three dimensions, |x−y| represents the distance between x and y. For each data set, we apply the methods above to the kernel matrices A as in (2.1) formed by evaluating some κ(x, y) at x and y. Most of the kernel matrices have modest numerical ranks. The schemes Nys-B, Nys-P, and Nys-R use sample sizes S up to 400 in almost all the tests. 
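As a rough stand-in for such a test configuration, the sketch below builds a kernel matrix over 2D points represented as complex numbers on a flower-like curve and computes the scaled singular values used as the SVD reference; the specific curve and the log-distance kernel are assumptions, since the exact data sets and kernel definitions are not recoverable from the extracted text.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 2000)
pts = (1 + 0.3 * np.cos(5 * t)) * np.exp(1j * t)   # flower-like curve (assumed)
x, y = pts[:100], pts[700:1300]                    # two well-separated subsets
A = np.log(np.abs(x[:, None] - y[None, :]))        # log-distance kernel (assumed)

s = np.linalg.svd(A, compute_uv=False)
print(s[:20] / s[0])  # scaled singular values, cf. the SVD reference curves
```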
The HAN schemes use much smaller sample sizes. HAN-B and HAN-U use sample sizes S ≤ 200 for most tests, and HAN-A uses sample sizes S ≤ 50 for all the cases. For some kernels evaluated at the set Flower, the relative errors A−à 2 in one test run are reported in Figure 4.2. With larger S, the error typically gets smaller. However, Nys-B is only able to reach modest accuracies even if S is quite large. (The error curve nearly stagnate in the first row of Figure 4.2 with increasing S.) The accuracy gets better with Nys-P for some cases. Nys-R can further improve the accuracy. However, they still cannot get accuracy close to τ = 10 −14 and their error curves in the second row of Figure 4.2 get stuck around some small rank sizes insufficient to reach high accuracies. In comparison, the HAN schemes usually yield much better accuracies, especially with HAN-B and HAN-A. HAN-U is often less accurate than HAN-B but is more efficient because of the fast subset update. The most remarkable result is from HAN-A, which quickly reaches accuracies around 10 −15 after few sampling steps (with small overall sample sizes). The second row of Figure 4.2 also includes the scaled singular values σi(A) σ1(A) . We can observe that HAN-B and particularly HAN-A produce approximation errors with decay patterns very close to that of SVD. To further confirm the accuracies, we run each scheme 100 times and report the results in Figure 4.3. In general, we observe that the HAN schemes are more accurate, especially HAN-B and HAN-A. The direct outcome from HAN-U is not accurate, but this is likely due to the quality of the V factor in (3.21). In fact, most other schemes end the iteration with a low-rank approximation in (3.21) after one row or column pivoting step by an SRR factorization. Thus, if we apply an additional row pivoting step to A :,J at the end of HAN-U so as to generate a new approximation U A I,: like in (3.21), then the resulting errors of HAN-U (called effective errors in The aggressive rank advancement also makes HAN-A very efficient. For each data set, the average timing of HAN-A and Nys-R from 100 runs is shown in Table 4.1. HAN-A is generally faster than Nys-R by multiple times. Example 2. Next, consider a class of implicitly defined kernel matrices with varying sizes. Suppose C is a circulant matrix with eigenvalues being discretized values of a function f (t) at some points in an interval. Such matrices appear in some image processing problems [27], solutions of ODEs and PDEs [2,3], and spectral methods [31]. They are usually multiplied or added to some other matrices so that the circulant structure is destroyed. However, it is shown in [31,39] that they have small off-diagonal numerical ranks for some f (t). Such rank structures are preserved under various matrix operations. The matrix A we consider here is the n × n upper as the sample size S increases in one test, where the second row shows the errors with respect to the resulting numerical ranks corresponding to the first row and the SVD line shows the scaled singular values. right corner block of C (with half of the size of C). It is also shown in [39] that A is the evaluation of an implicit kernel function over certain data points. We consider A with its size n = 512, 1024, . . . , 16384 so as to demonstrate that HAN-A can reach high accuracies with nearly linear complexity. For each n, we run HAN-A for 10 times and report the outcome. As n doubles, Figure 4.10(a) shows the numerical ranks r from HAN-A, which slowly increase with n. 
This is consistent with the result in [39] where it is shown that the numerical ranks grow as a low-degree power of log n. The low-rank approximation errors are given in Figure 4.10(b) and the average time from the 10 runs for each n is given in Figure 4.10(c). The runtimes roughly follow the O(r 2 n) pattern, as explained in Section 3.2. Example 3. Finally for completeness, we would like to show that the HAN schemes also work for high-dimensional data sets. (We remark that practical data analysis may not necessarily need very high accuracies. However, the HAN schemes can serve as a fast way to convert such data matrices into some rank structured forms that allow quick matrix operations.) We consider kernel matrices resulting from the evaluation of some kernel functions at two data sets Abalone and DryBean from the UCI Machine Learning Repository (https://archive.ics.uci.edu). The two data sets have 4177 and 13611 points in 8 and 16 dimensions, respectively. Here, each data set is standardized to have mean 0 and variance 1. We take the submatrix of each resulting kernel matrix formed by the first 1000 rows so as to make it rectangular and nonsymmetric. A set of test results is given in Figure 4.11. Nys-B can only reach modest accuracies around 10 −5 . Nys-R can indeed gets quite good accuracies. Nevertheless, HAN-A still reaches high accuracies with a small number of sampling steps. Similar results are observed with multiple runs. for high-dimensional tests, where σ in the kernel functions is set to be four times the maximum distance between the data points and the origin, and the SVD line shows the scaled singular values. 5. Conclusions. This work proposes a set of techniques that can make the Nyström method reach high accuracies in practice for kernel matrix low-rank approximations. The usual Nyström method is combined with strong rank-revealing factorizations to serve as a pivoting strategy. The low-rank basis matrices are refined through alternating direction row and column pivoting. This is incorporated into a progressive sampling scheme until a desired accuracy or numerical rank is reached. A fast subset update strategy further leads to improved efficiency and also convenient randomized accuracy control. The design of the resulting HAN schemes is based on some strong heuristics, as supported by some relevant accuracy and singular value analysis. Extensive numerical tests show that the schemes can quickly reach high accuracies, sometimes with quality close to SVDs. The schemes are useful for low-rank approximations related to kernel matrices in many numerical computations. They can also be used in rank-structured methods to accelerate various data analysis tasks. The design of the schemes is fully algebraic and does not require particular information from the kernel or the data sets. It remains open to give statistical or deterministic analysis of the decay of the approximation error in the progressive sampling and refinement steps. We are also attempting a probabilistic study of some steps in the HAN schemes that may be viewed as a randomized rank-revealing factorization.
2023-07-13T07:36:16.341Z
2023-07-11T00:00:00.000
{ "year": 2023, "sha1": "96b124059d04d425bd02f5c53b0bb292c721320b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "96b124059d04d425bd02f5c53b0bb292c721320b", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
219935649
pes2o/s2orc
v3-fos-license
Relationship Between Levels of Fasting Blood Glucose and HbA1C in Prediabetes Patients Prediabetes is a condition in which blood glucose levels are higher than normal but below the threshold for a diagnosis of diabetes mellitus. According to the American Diabetes Association (2017), the parameters for prediabetes are a fasting blood glucose level of ≥100 and <126 mg/dL in impaired fasting glucose, a 2-hour blood glucose level of ≥140 mg/dL in impaired glucose tolerance, and an HbA1C level of 5.7-6.4%. The HbA1C test describes the state of blood glucose over the last 2-3 months. The study aimed to determine the relationship between fasting blood glucose and HbA1C values in determining prediabetes. The research method was descriptive, using secondary data. Data collection was carried out from April to September 2018 at one of the private hospitals in East Bekasi. A total of 92 samples met the inclusion criteria. The Spearman test showed a weak positive relationship (r = 0.230) at α = 0.05, and linear regression yielded the line equation HbA1C value = 5.430 + 0.003 (blood glucose value) + 0.005 (age), with an R value of 0.309, between the fasting blood glucose test and HbA1C values in prediabetes. The results of this study indicate that the fasting blood glucose value affects the HbA1C value: the higher the blood glucose, the more hemoglobin molecules bind to glucose. INTRODUCTION Diabetes mellitus is a metabolic disorder that has become a global problem. Based on data from Riskesdas (2013), with 5.7% of people in Indonesia suffering from diabetes, almost 73.7%, or around 8,485,329 people, were not diagnosed with diabetes. This can be considered dangerous because a late diagnosis can cause many of the complications that occur in type 2 diabetes mellitus [1]. The fasting blood glucose test (hexokinase method) using venous blood is the gold standard for the diagnosis of type 2 diabetes mellitus and is often used in hospitals [2,3]. According to the International Diabetes Federation (IDF), the American Diabetes Association (ADA), and the Indonesian Endocrinology Association (Perkeni), the diagnosis of diabetes can be confirmed if the fasting blood sugar is above 126 mg/dL and the blood sugar 2 hours after meals (2 hours postprandial) is above 200 mg/dL. The Impaired Fasting Glucose (IFG) condition applies if the fasting blood sugar level is between 100 and 125 mg/dL, while the Impaired Glucose Tolerance (IGT) condition applies if the fasting blood sugar is above 126 mg/dL but the blood sugar 2 hours after eating is 140-200 mg/dL. Both IFG and IGT are also called prediabetes, and both are strong candidates for future diabetes. Prediabetes is a danger sign, a yellow light, a marker of diabetes later, or a "candidate" for diabetes. The ratio of diabetes patients to people with prediabetes in Indonesia is 1:3 [4]. Indonesia ranked third in the world in the number of people with prediabetes, at 29 million, ahead of China. It is predicted that by 2040 Indonesia will rank first, with the number of people with prediabetes estimated to reach 36.8 million. The recommended laboratory test for prediabetes is the HbA1C test because it can be performed on both IFG and IGT patients [5]. HbA1C is a blood glucose test based on the measurement of hemoglobin A1C levels found in erythrocytes. HbA1C is hemoglobin that can bind to glucose (glycohemoglobin) [6]. The HbA1C test describes the state of blood sugar in the last 2-3 months. Several recent studies recommend the HbA1C test to diagnose or screen for prediabetes alongside the venous blood glucose test [5,7].
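The diagnostic cutoffs cited above translate directly into simple classification rules. The sketch below encodes the fasting-glucose and HbA1C thresholds as stated in this paper; the boundary between normal and prediabetes HbA1C is taken at 5.7% here, since the text gives both "below 5.6%" and "5.7-6.4%".

```python
def classify_fpg(fpg_mg_dl):
    """Classify fasting plasma glucose per the ADA 2017 cutoffs cited above."""
    if fpg_mg_dl >= 126:
        return "diabetes"
    if fpg_mg_dl >= 100:
        return "prediabetes (IFG)"
    return "normal"

def classify_hba1c(hba1c_pct):
    """Classify HbA1C per the cited cutoffs (prediabetes 5.7-6.4%)."""
    if hba1c_pct >= 6.5:
        return "diabetes"
    if hba1c_pct >= 5.7:
        return "prediabetes"
    return "normal"
```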
The use of two different methods of testing blood glucose is recommended to confirm and establish a diagnosis of prediabetes or diabetes [5]. HbA1C values for normal people are below 5.6%, prediabetes is between 5.7 and 6.4%, and diabetes is above 6.5% [4]. II. MATERIALS AND METHODS The research method was descriptive observational analytic and the sampling technique was cross-sectional. Data were collected at the Clinical Pathology Laboratory of a private hospital in East Bekasi. The data taken were secondary data meeting the inclusion criteria: an HbA1C test result of 5.7-6.4%, a fasting blood sugar test performed, and no routine history of undergoing the HbA1C and fasting blood sugar tests. Data were collected from April to September 2018. The data were processed using the SPSS program for descriptive analysis, normality testing, Spearman correlation testing, and regression, with an error rate of 5% [8]. III. RESULTS Descriptive Analysis and Data Normality Test In this study, the sample data meeting the inclusion criteria comprised 92 samples consisting of 43 men (46.7%) and 49 women (53.3%). In the age distribution of the 92 prediabetes samples, the youngest was 10 years old, the oldest was 81 years old, and the mode was 49 years (6 people). IV. DISCUSSION Based on Table 1, prediabetes was more common among women (53.3%) than men (46.7%). Sex differences can affect risk factors for type 2 diabetes mellitus, such as differences in sex hormones between women and men; these differences have a major influence on energy metabolism, body composition, vascular function, and inflammatory response. Thus, endocrine imbalance can increase the risk of progression from prediabetes to type 2 diabetes mellitus, especially in women, who have a greater risk from both biological factors and stress exposure. Prediabetes with impaired glucose tolerance (IGT) is more often experienced by women, and prediabetes with impaired fasting glucose (IFG) is more often experienced by men [9]. The prediabetes criteria according to the American Diabetes Association 2017 in Table 1 show fasting glucose levels >126 mg/dL in IGT patients and between >100 and <126 mg/dL in IFG. Based on this, the results of this study indicate that the average fasting blood glucose value was 100.97 mg/dL in men and 103.3 mg/dL in women. These results place the prediabetes criteria for both men and women in the IFG category. Prediabetes can be experienced by anyone at any age, especially people with obesity and those over the age of 45 years; Table 1 shows that the most common age among patients with prediabetes was 49 years. Factors that can affect the prediabetes condition include level of education, level of income, place of residence, daily activities, and family history of diabetes [10]. The majority of factors that influence prediabetes are social and behavioral. Nowadays people tend to share activities such as food and fast-food restaurants on social media, which can entice others to try them and contribute to obesity. Prediabetes increases the risk of progression to type 2 diabetes mellitus and cardiovascular disease. Early identification and management of prediabetes can reduce the incidence of diabetes and its complications [11].
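The reported regression can likewise be applied directly. The helper below uses the coefficients stated in this study; note that with R = 0.309 the model explains little of the variance, so the prediction is indicative only.

```python
def predicted_hba1c(fpg_mg_dl, age_years):
    """Predicted HbA1C (%) from the study's reported regression:
    HbA1C = 5.430 + 0.003 * FPG + 0.005 * age  (R = 0.309)."""
    return 5.430 + 0.003 * fpg_mg_dl + 0.005 * age_years

print(predicted_hba1c(103.3, 49))  # ~5.99% at the female mean FPG and modal age
```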
Management of prediabetes includes screening people at risk of diabetes who are asymptomatic, such as weight screening in adolescents who are overweight or obese (BMI ≥25 kg/m², or ≥23 kg/m² in Asian Americans) and routine blood glucose testing for anyone 45 years or older. If the test result is normal, the test should be repeated at least 3 years after the last blood glucose test. Prediabetes can also be associated with obesity (especially abdominal or visceral obesity), dyslipidemia with high triglycerides, low HDL cholesterol, and hypertension [12]. HbA1C is a specific glycated hemoglobin formed by the addition of glucose to the N-terminal valine of the hemoglobin β-chain. In this study, a weak relationship was found between fasting blood glucose results and HbA1C. Despite the weak association, HbA1C is still used as a marker to diagnose prediabetes alongside fasting blood glucose testing. This is because the concentration of glycated hemoglobin (HbA1C) depends on the concentration of blood glucose and the lifespan of a red blood cell, which is typically 120 days, meaning that the relative proportion of HbA1C at any one time depends on the mean circulating blood glucose level over that 3-month period [13]. HbA1C and fasting blood glucose results can be used as a basis for making adjustments to the treatment of prediabetes mellitus [14]. Use of HbA1C as a screening marker has rapidly been adopted. V. CONCLUSION There is a correlation between fasting blood glucose and HbA1C results in patients with prediabetes, with a weak (R = 0.309) and positive relationship.
A Real-Time Indoor Localization Method with Low-Cost Microwave Doppler Radar Sensors and Particle Filter

We propose a novel method of localization based on low-cost, unmodulated continuous-wave Doppler microwave radar sensors. We use both velocity measurements and distance estimates obtained from the RSS of the radar sensors, and we implement a particle filter for real-time localization. Experiments show that, with a reasonable initial estimate, it is possible to track the movements of a person in a room with enough accuracy to consider using this type of device for person-monitoring or indoor guiding applications.

Indoor monitoring of elderly people and guiding of visually impaired persons (see e.g. [1]) require accurate and fast real-time localization systems. Indoor positioning is a very active research topic, and many technologies have been developed in this field (see e.g. the survey [2]). Nevertheless, unmodulated continuous-wave Doppler radar has been little explored for pure positioning applications, although it could be inexpensive and simple to deploy. Doppler radars are widely used in presence detectors (door opening) as well as for vehicle speed control. Unlike frequency-modulated continuous-wave (FMCW) or pulse radar, they do not measure distances. Instead, they may determine the radial velocity components of moving objects in the field of the radar.

We propose in this paper to use very low-cost, rudimentary radar modules (see Fig. 1) originally designed for presence detection. This opens the door to very low-cost monitoring or blind-guidance applications. Unlike metrology-grade sensors (e.g. Doppler radar for road speed control), miniature microwave radars offer limited accuracy as the counterpart of their low cost (about 10 euros). These devices are used in research projects for many applications. Some researchers use them for gait monitoring and movement classification [3,4], while others try to estimate walking speed [5] or investigate guidance and obstacle-avoidance applications [6].

At first sight, it seems unreasonable to use these sensors alone to locate a person, because they do not provide absolute distance measurements. This is why some researchers have used Kalman filters to combine these radar measurements with more reliable technologies that measure distances (e.g. UWB, see [7]). We propose here a different approach, in which we use the Received Signal Strength (RSS) of the radar in order to obtain a distance evaluation.

Measurements Model

According to the Doppler effect equation, the speed v measured by the radar can be written as:

v = c·f_d / (2·f_tx·cos α)    (1)

with c the speed of light, f_d the Doppler frequency, f_tx the frequency of the radar signal (typically 24 GHz), and α the angle formed between the direction of motion and the radar beam. Notice that, for our practical application, the Doppler signal contains several frequency components related to the limb movements of the subject. As already explained, these sensors are not designed to measure distances. However, the Received Signal Strength Indication (RSSI) provides some distance information that can nevertheless be used, although with a priori very limited accuracy.
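Before turning to the RSS model, Eq. (1) can be illustrated concretely. The following Python sketch converts a measured Doppler frequency into a radial speed for a 24 GHz module; the function name and example values are illustrative assumptions, not sensor specifications.

```python
import math

C = 3.0e8        # speed of light (m/s)
F_TX = 24.0e9    # radar carrier frequency (Hz), typical for these modules

def radial_speed(f_doppler_hz: float, alpha_deg: float = 0.0) -> float:
    """Speed from Eq. (1): v = c * f_d / (2 * f_tx * cos(alpha)).

    alpha is the angle between the direction of motion and the
    radar beam; alpha = 0 means moving straight toward the sensor.
    """
    return C * f_doppler_hz / (2.0 * F_TX * math.cos(math.radians(alpha_deg)))

# A 160 Hz Doppler shift head-on corresponds to about 1 m/s:
print(radial_speed(160.0))          # ~1.0 m/s
print(radial_speed(160.0, 60.0))    # same shift at 60 degrees -> ~2.0 m/s
```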
The RSSI is generally considered unreliable for distance evaluation: it is difficult to model and very sensitive to the environment (shadowing effects, reflections, lack of antenna polarization alignment), even though, in radio-wave-based triangulation methods, analytical and empirical approaches have been proposed to handle the delicate problem of reflections in indoor environments [8]. For radars, considering waves that make a round trip between the radar and the object, the received power P_r is usually written in terms of the transmitted power P_t as:

P_r = P_t·G²·λ²·σ / ((4π)³·R⁴)    (2)

where G is the antenna gain, σ the radar cross section (reflectivity) of the target, λ the wavelength, and R the distance to the target. For a transmitted signal s_t(t) = cos(2πf_c t), neglecting the phase term, the received signal can be written as s_r(t) = α·cos(2π(f_c + f_d)t), where α is a distance-dependent attenuation factor deduced from Eq. (2). In the sensor, the received signal is mixed with the emitted signal (cf. Fig. 2), giving:

s_t(t)·s_r(t) = (α/2)·[cos(2πf_d t) + cos(2π(2f_c + f_d)t)]

Filtering the result through a low-pass filter provides a signal from which the frequency f_d can be estimated, and whose amplitude is directly proportional to the received power and can therefore be used to estimate a distance. Rather than using the total received energy level, we propose to use the magnitude of the predominant frequency in the signal, which likely corresponds to the direct path. Finally, the distance R is recovered from this magnitude, assuming that the target cross section is constant during movement and that the magnitude follows a free-field k/R⁴ model (cf. (2)). Since no emitter or receiver is carried by the user, this method is naturally immune to the shadowing or antenna-polarization alignment problems from which RSS techniques usually suffer.

Real Time Localization Method

The experimental setup consists of a set of static sensors (at least two orthogonal sensors), as shown in Fig. 3. These sensors use planar patch antennas and are not very directional; they have an attenuation of less than 3 dB at ±60° in the horizontal plane. It is therefore advisable to move them a few metres away from the working surface to cover it completely without introducing too much attenuation linked to directivity. The Doppler output of each sensor is connected to a suitable amplification circuit (about 60 dB) including a 5 Hz-900 Hz pass-band filter (corresponding to the expected range of Doppler frequencies for human motion).

In order to increase the precision of the system, the user can furthermore be equipped with an Inertial Motion Unit (IMU) that combines data from an accelerometer, a gyroscope, and a magnetometer to estimate his/her orientation. We use an IMU composed of the low-cost MEMS (microelectromechanical systems) sensor TDK-InvenSense MPU9250 connected to a microcontroller running the Madgwick data fusion algorithm [9]. Such a device can provide orientation information with an accuracy of about 3-5°. It is a small wireless device (the size of a matchbox) that can be worn on the belt. In the method described below, it is assumed that the user moves in the direction of the sagittal plane (i.e. orienting himself in his direction of travel). The IMU also helps suppress the forward-backward ambiguity which exists when using the Doppler radars alone, at least in their most simple use.

We aim to develop an algorithm for estimating the position of the subject. In that respect, several difficulties need to be solved:

- Radar sensors provide highly noisy, often unusable (no sharp peaks in the spectrum), or missing measurements.
- It is impossible to distinguish motionless situations from those with no measurement (out of range).
- The measurement noise is not Gaussian and is hard to model.

To address these issues, we have developed a localization algorithm based on the particle filter (PF) method. PFs are algorithms for estimating the state of a dynamic system using Monte-Carlo methods. They are suitable for (strongly) non-linear models, non-Gaussian measurement noise, and incomplete measurements. The particle filter algorithm is given in Algorithm 1 below.

Algorithm 1: Doppler radar particle filter algorithm.

(Initialisation) Random creation of a set of particles representing the possible states, including speed and position.
for k = 0 to Max do
  - For all particles: predict the next particle state, assuming a constant velocity.
  - Measure the sensors' radial velocities, the distances deduced from RSS, and the IMU orientation:
    • identify static-target situations (sub-threshold velocity and RSSI for all sensors);
    • discard inconsistent measurements.
  - For all particles: update the particle weight taking the measurements into account.
  - Remove small-weight particles and resample.
  - Compute the position estimate (using a weighted average).
end

3 Results

Distance Estimation Accuracy

We evaluate the ranging accuracy using the magnitude of the Doppler signal alone, both for a displacement along the axis of the sensor and for a displacement in the orthogonal direction. First, a subject walks back and forth towards the sensor at roughly constant walking speed. Figure 4-left compares the actual distance to the measured one. We observe that the dispersion of the data increases with distance, but the k/R⁴ model nevertheless provides a realistic estimate. We report in Fig. 4-right the distance measurements when moving in a direction orthogonal to the sensor. In this case, the model is suitable as well, but the dispersion increases sharply as the distance increases (Table 1).

Localization with a Particle Filter

We have implemented the setup described in Fig. 3 over a surface area of 8 × 8 m. Tests are conducted with a single person walking (speed from 0.5 to 1.5 m/s) in this area.

Tracking on Different Courses. In order to show the tracking capabilities of the system, we have performed different types of displacements, some of which are visible in Fig. 5. These movements include smooth and continuous trajectories as well as abrupt changes of direction, and both round and long trips. These tests were carried out using radar RSS and velocity data, but also incorporating the orientation of the person given by the IMU. During our tests, after initial convergence, the maximum error remained less than 1.5 m while the average error was around 0.5 m. In most cases, the IMU improves the quality of positioning by significantly smoothing trajectories, which is useful for the audio-guidance applications we develop.

Sensor Fusion Efficiency. Figure 6 illustrates localization on a circular course using RSSI only, velocity only, velocity+RSSI fusion, and velocity+RSSI+IMU fusion. Localization using velocity alone can quickly be affected by drift, while RSSI alone produces a rather erratic trace due to the imprecision of the measurements. On a circular-type course of about ten revolutions, including U-turns at non-constant speed, the average error was about 0.4 m (0.9 m maximum) using the velocity/distance/IMU fusion.
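A minimal sketch of the filtering loop of Algorithm 1 is given below, written in Python with NumPy. For brevity it fuses only a single RSS-derived range measurement; the velocity and IMU updates would enter the weighting step in the same way. The state model, noise levels, and the constant k of the k/R⁴ inversion are assumptions for illustration, not the implementation used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                      # number of particles
K_RSS = 1.0                   # assumed constant in the k/R^4 magnitude model

def distance_from_rss(magnitude: float) -> float:
    """Invert the free-field magnitude ~ k/R^4 model to get a range."""
    return (K_RSS / magnitude) ** 0.25

# State: [x, y, vx, vy]; initialised around a rough known start position.
particles = rng.normal([4.0, 4.0, 0.0, 0.0], [0.5, 0.5, 0.2, 0.2], (N, 4))
weights = np.full(N, 1.0 / N)

def step(z_range: float, sensor_xy: np.ndarray, dt: float = 0.1,
         sigma_r: float = 0.8) -> np.ndarray:
    """One predict/update/resample cycle against a single range measurement."""
    global particles, weights
    # Prediction: constant-velocity model plus process noise.
    particles[:, 0] += particles[:, 2] * dt + rng.normal(0, 0.05, N)
    particles[:, 1] += particles[:, 3] * dt + rng.normal(0, 0.05, N)
    # Update: weight each particle by the likelihood of the measured range.
    pred = np.hypot(particles[:, 0] - sensor_xy[0],
                    particles[:, 1] - sensor_xy[1])
    weights *= np.exp(-0.5 * ((z_range - pred) / sigma_r) ** 2) + 1e-300
    weights /= weights.sum()
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        c = np.cumsum(weights)
        c[-1] = 1.0  # guard against floating-point round-off
        idx = np.searchsorted(c, (rng.random() + np.arange(N)) / N)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    # Estimate: weighted mean of the particle positions.
    return np.average(particles[:, :2], weights=weights, axis=0)

print(step(distance_from_rss(0.001), np.array([0.0, 0.0])))
```

Resampling is triggered only when the effective sample size collapses, which helps preserve particle diversity during the stretches of missing or inconsistent measurements mentioned above.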
Without the IMU, the average error is about 0.7 m (1.3 m maximum); Fig. 7 compares the RSSI+velocity and RSSI+velocity+IMU configurations. In all experiments, the accuracy is limited by the error in the estimation of the distances and the velocities. Nevertheless, it is noticeable that no significant drift is observed, even in experiments with many turns on circular courses.

Conclusion

Our experiments have shown that the fusion of RSSI and velocity data allows very low-cost Doppler radar sensors to be used for localization applications that do not require high accuracy. Indeed, RSSI measurements limit the position drift that would be observed with velocity measurements alone. The limited range (about 10 m) of the system is the main issue of this technology. Although limited, the accuracy could be sufficient for guidance applications. We plan to investigate further with more sensitive sensors, in particular for applications in guiding visually impaired people during indoor sport activities, as we have already done with Ultra-Wideband or RTK-DGNSS outdoors [1].
Integration of Myeloblastosis Associated Virus proviral sequences occurs in the vicinity of genes encoding signaling proteins and regulators of cell proliferation

Aims: Myeloblastosis Associated Virus type 1 (N) [MAV 1(N)] specifically induces nephroblastomas within 8-10 weeks when injected into newborn chickens. The MAV-induced nephroblastomas constitute a unique animal model of the pediatric Wilms' tumor. We have made use of three independent nephroblastomas representing increasing tumor grades to identify the host DNA regions in which MAV proviral sequences were integrated.

Methods: Cellular sequences localized next to MAV integration sites in the tumor DNAs were used to screen a Bacterial Artificial Chromosome (BAC) library and to isolate BACs containing about 150 kilobases of normal DNA corresponding to MAV integration regions (MIRs). These BACs were mapped on the chicken chromosomes by fluorescent in situ hybridization (FISH) and used for molecular studies.

Results: The different MAV integration sites that were conserved after tumor cell selection identify genes involved in the control of cell signaling and proliferation. Syntenic fragments in human DNA contain genes whose products have been implicated in normal and pathological kidney development, and several oncogenes responsible for tumorigenesis in humans.

Conclusion: The identification of putative target genes for MAV provides important clues for understanding the pathogenic potential of MAV. These studies identified ADAMTS1 as a gene upregulated in MAV-induced nephroblastoma and established that ccn3/nov is not a preferential site of integration for MAV, as previously thought. The present results support our hypothesis that the highly efficient and specific MAV-induced tumorigenesis results from the alteration of multiple target genes in differentiating blastemal cells, some of which are required for progression to highly aggressive stages. This study reinforces our previous conclusion that the MAV-induced nephroblastoma constitutes an excellent model in which to characterize new potential oncogenes and tumor suppressors involved in the establishment and maintenance of tumors.

Introduction

Chicken nephroblastomas induced by MAV1(N) represent a unique animal model of the Wilms tumor, a kidney cancer occurring in young children at a frequency of about 1:6000 births. Early cytogenetic studies identified multiple chromosomal alterations in Wilms tumors, raising the possibility that several steps in the differentiation pathway of blastemal cells could represent potential targets for tumorigenic events [1]. In an attempt to characterize genes that are altered at various stages of tumor development, we have taken advantage of the histological similarities between the Wilms tumor and the MAV-1(N)-induced nephroblastoma. MAV is a replication-competent retrovirus that can induce nephroblastomas, osteopetrosis, and lymphoid leukosis when injected into chickens [2]. Molecular cloning of the MAV1(N) proviral genome allowed us to isolate a pure viral strain that specifically induces nephroblastomas when injected either intravenously in ovo on embryonic day 18 or intraperitoneally into day-old chickens [3].
The characterization of MAV sequences contained in avian nephroblastomas established that these tumors were polyclonal and that, in tumor DNA, MAV was inserted at a limited number of sites, suggesting either that integration of MAV at other sites was not associated with nephroblastoma induction, or that the selection pressure occurring naturally during tumor development had counter-selected cells carrying MAV proviral genomes at other sites [1,4]. The analysis of lambda libraries obtained from these tumors reinforced the idea that, in the tumor DNA, the MAV proviral sequences were integrated at a few distinct cellular sites. The MAV genomes present in well-developed tumors were all heavily rearranged, whereas in diffuse tumors of smaller size showing a less advanced tumor phenotype, the MAV genomes were full length and functional [5]. The use of junction fragments containing viral U3 and adjacent cellular sequences made it possible to establish that, in one of the most developed tumors, one of the proviral genomes was integrated within a gene now known as ccn3, which we originally designated "nov" for "nephroblastoma overexpressed" [5].

CCN3 is one of the three founders of the CCN family of proteins, which presently consists of six different members. Its expression is associated with cell quiescence [6,7]. Under normal conditions, the expression of ccn3 undergoes spatiotemporal regulation in several different tissues originating from the three germ layers, with major sites of expression being the adrenal gland, nervous system, cartilage and bone, muscle, and kidney [7-14]. The production of the CCN3 protein can be increased or decreased upon carcinogenesis [7,12,15-20]. In Wilms' tumors, the expression of ccn3 was a marker of differentiation [12], whereas in Ewing's tumors, the expression of ccn3 was associated with a higher risk of developing metastases [17]. In all cases, the full-length CCN3 protein shows antiproliferative activity. Although its expression was elevated in all avian tumors, the ccn3 gene was found to be disrupted in only one case, suggesting that either an unknown viral product or the MAV LTR enhancer was responsible for the increased ccn3 expression. Indeed, it is well known that LTR enhancer sequences can activate the transcription of genes localized several tens of kilobases away. However, the limited size (20 kb) of the insert DNA contained in the lambda recombinants did not make it possible to establish whether MAV LTR sequences were present in the vicinity (at a genome scale) of the ccn3 gene in the DNA of all tumor cells.

Since we had isolated and studied tumors representing three increasing developmental stages, we took advantage of this material to ask whether the progression from an initial diffuse tumor to a well-developed tumor was accompanied by the selection of cells carrying particular MAV integration sites. To tackle this problem, we used the BAC (bacterial artificial chromosome) and FISH (fluorescent in situ hybridization) strategies. The results we report here confirm that a limited number of MAV integration sites are detected in the DNA of MAV-induced nephroblastomas, with an over-representation of integration sites on chromosome 2. In well-developed tumors, MAV integration sites are localized in the vicinity of genes encoding proteins involved in matrix remodeling, angiogenesis, and signaling. Our results also indicated that ccn3 is not a common integration site in these tumors.
Labeling of the BAC DNA fragments

Prior to labelling, the BAC DNA fragments were amplified by PCR using the Expand High Fidelity PCR System from Boehringer Mannheim. One hundred nanograms of insert was mixed with U and R primers (0.3 µM/L), 8 µl dNTPs (10 mM), mix II buffer (10 µl), TaqE (3 U), and water to 50 µl. Amplification was performed for 30 cycles of 94°C for 30 seconds, 50°C for 1 min, and 72°C for 1 min. The size of the amplified fragments was checked by electrophoresis in 1% agarose gels prior to purification with the QIAquick PCR Purification Kit (Qiagen). DNA fragments (50 ng of each) were labeled with the Amersham Multiprime DNA Labelling System (Amersham Pharmacia Biotech RPN161Z Life Science) under the conditions recommended by the supplier and purified by filtration through Sephadex G50 to remove unincorporated nucleotides.

Screening of the BAC library

Duplicate filters onto which BAC DNA preparations had been transferred were incubated with labelled probes as described above. Colonies containing positive BACs were picked and grown at 37°C overnight in 4 ml of LB medium containing 10 µg/ml chloramphenicol. The DNA contained in the pelleted cells was extracted as described above and resuspended in 40 µl of TE containing RNase A. The DNA content of each BAC was analyzed by both dot blotting and Southern blotting of HindIII-digested DNA.

Ccn3 probe

The pC1K clone [5] was used as a source of chicken ccn3. For preparation of the ccn3 probe, the 2.0 kb KpnI fragment was purified by electroelution as described [21].

Figure 1: Isolation and characterization of BACs. Panel A shows typical results obtained with a BAC containing a MAV integration site detected by one of the probes derived from avian nephroblastoma. Filters of BAC DNA were duplicated. To check the specificity of the probes used, two micrograms of genomic chicken DNA were digested with 40 units of HindIII restriction endonuclease at 37°C for 18 hours and run in a 1% agarose gel at 2 volts/cm for 20 hours. The separated DNA fragments were denatured by incubation in 0.5 N NaOH for 45 min and neutralized in 0.5 M Tris-HCl, 1.5 M NaCl. Transfer onto an Appligene positive membrane was performed in 20× SSC for 18 hours, and the membrane was baked for 2 hours at 80°C prior to hybridization with labeled cloned cellular fragments. All cellular probes cloned from nephroblastoma DNA libraries detected a single fragment in HindIII-digested normal DNA (see panel C for a typical result), except for P38, which contained chicken repetitive sequences (panel B). Panels D and E: DNA preparations from positive BACs were digested with NotI (panel E shows ethidium bromide staining of the gel) and transferred onto nitrocellulose prior to hybridization with the probes used for their isolation. A single NotI fragment is detected by the probes in the BAC DNA (panel D).

Isolation of polyadenylated RNA from normal kidneys and nephroblastomas

Frozen tissues were homogenized with a Polytron, and 0.5 g of powder was resuspended in 9 ml guanidine thiocyanate buffer for purification of total RNA as previously described [21]. Final RNA pellets were resuspended in 400 µl sterile distilled water, and the concentration of each sample was determined by densitometry. To isolate polyadenylated RNA species, each sample (1 mg total RNA in 500 µl water) was mixed with 55 µl Oligotex suspension (Qiagen) and incubated for 3 min at 70°C in a water bath.
After 10 min at room temperature, the Oligotex:mRNA complex was pelleted by 2 min centrifugation at 14,000-18,000 g and the supernatant carefully removed. The pellet was further treated as recommended by the supplier, and the polyadenylated RNA fraction was collected in a final volume of 50 µl.

Labelling of polyadenylated RNA preparations

To prepare labelled RNA probes, 500 ng of each polyA+ RNA preparation was mixed with 500 ng oligo(dT), incubated for 10 min at 70°C, and chilled on ice for 5 min. Samples were then mixed with 5 µl of 10× PCR buffer, 5 µl of 25 mM MgCl2, 5 µl of 0.1 M DTT, 2.5 µl of a mixture of dTTP, dATP, and dGTP (10 mM each), 2.5 µl of ddTTP (1 mM), and 5 µl of 32P-dCTP, and incubated for 5 min at 25°C. After addition of 1 µl of reverse transcriptase (Invitrogen; 200 U/µl), the mix was incubated for 10 more min at 25°C and for 50 min at 42°C. The reaction was stopped by incubation at 70°C for 15 min. Each labelled preparation was purified by chromatography through a column of Sephadex G50.

Hybridization of BAC DNA filters

The blots were rinsed with 6× SSC and prehybridized at 68°C for 18 hours. After hybridization with the labeled probe in the presence of Cot-1 DNA, the blots were washed with 2× SSC, 0.1% SDS at 56°C for 1 hour and with 0.1× SSC, 0.1% SDS at 65°C for 1 more hour. Autoradiography of the dried blot was performed at -80°C.

Figure 3: Distribution of positive BACs on chicken chromosomes.

Purification of BAC DNA fragments

To recover the DNA fragments containing sequences encoding differentially expressed RNAs, 4 µg of BAC DNA were digested with HindIII and run in a 1% low-melting agarose gel. The fragments of interest were eluted by incubation at 65°C for 10 min prior to addition of 1 ml Wizard Plus resin (Promega) and filtration through a minicolumn connected to a vacuum manifold. The column was rinsed with washing buffer, and the DNA fragments were eluted with 50 µl of TE buffer.

Ligation and transformation

HindIII-digested purified DNA fragments (50-100 ng) were ligated to 50 ng of dephosphorylated, HindIII-digested pUC18 vector in the presence of 2.5 µl T4 ligase (Appligene) at 14°C for 18 hours. For transformation, 2 µl samples of the ligation mixture were mixed with 200 µl of DH5α competent cells. Electroporation was performed at 2.45 kV, 25 µF, 400 Ohm. After addition of 1 ml of cold LB medium, bacteria were spread onto LB plates containing 100 µg/ml ampicillin and incubated at 37°C.

Screening of chicken cDNA libraries

A library of chicken spleen cDNA was spread on LB agar plates containing tetracycline (7.5 µg/ml) and ampicillin (12.5 µg/ml) and incubated at 37°C overnight. The colonies were transferred to Qbiogene neutral membranes and replicated onto LB agar plates containing tetracycline and ampicillin. The membranes were incubated successively in 0.5 N NaOH for 10 min, 0.2 N NaOH and 1.5 M NaCl for 10 min, 0.2 M Tris-HCl and 2 mM EDTA for 20 seconds, and 2× SSC for 20 seconds. The filters were baked at 80°C for 2 hours prior to hybridization with the appropriate probes.

Sequencing of positive cDNA clones

Sequencing of the cDNAs was performed with the T7 universal primer and the CDM8 (TAAGGTTCCTTCACAAA) primer.

Northern blot hybridization

Samples of total RNA (20 µg in 9.3 µl water) were mixed with 20 µl deionized formamide, 6.7 µl formaldehyde, and 4 µl 10× MOPS, incubated for 5 min at 68°C, and chilled on ice before loading. Formaldehyde-MOPS gels were run at 100 volts for loading and at 50 volts overnight in 1× MOPS buffer.
The gels were then rinsed with DEPC-treated 20× SSC containing 2-mercaptoethanol, transferred to Appligene positive membranes, and treated for hybridization.

Fluorescent in situ hybridization (FISH)

The purified DNA inserts were labelled by nick translation with biotin- or digoxigenin-16-dUTP (Appligene Oncor). Chromosome slides were incubated at 70°C for 2 min in 70% formamide, 2× SSC (pH 7.2) and dehydrated in an ethanol series at 4°C. 2 µl of a 1/5 dilution of labelled probe was mixed with 6 µl of human Cot-1 DNA (from a 1 mg/ml solution; GibcoBRL) and 32 µl of Hybrisol VI (Oncor), denatured for 5 min at 80°C, and incubated for 30 min at 37°C before deposition on the slide. The slides were then incubated overnight at 37°C in a humidified chamber and washed three times for 5 min at 42°C in 50% formamide and in 2× SSC at 42°C. After being rinsed in 4× SSC at room temperature, the slides were incubated for 30 min at 37°C in blocking solution (Roche Diagnostics, Meylan, France). The digoxigenin was detected using an anti-digoxigenin rhodamine-labelled antibody (Appligene Oncor) and the biotin with FITC-labelled avidin (Appligene Oncor). The slides were washed with 4× SSC for 10 min on a shaker. After draining the excess liquid, DAPI was used as counterstain and Vectashield (Vector Laboratories Inc., Burlingame, CA 94010, USA) as antifading solution. Pictures were acquired on a Zeiss epifluorescence microscope using a tri-CCD camera and Vysis computer software (Smart Capture 2). Chromosome assignments were made using reverse DAPI by reference to the GTG-banded ideograms of chicken proposed by Ladjali-Mohammed et al. [22]. The cytogenetic localization of the various BAC DNAs was performed in two independent FISH runs, in which 50 and 25 cells were analyzed, respectively. Except for rare cases, four chromatids per cell showed a specific signal, and the percentage of triploid and tetraploid cells present in the embryonic chicken fibroblast culture was taken into account.

Figure 4: Mapping of BAC sequences on chicken chromosome 1.

Figure 5: Mapping of BAC sequences on chicken chromosome 2.

Figure 6: Mapping of BAC sequences on chicken chromosomes 3 and 5.

Figure 7: Co-localisation and asymmetrical duplication of areas identified by BACs 50 (red) and 65 (green).

Figure 8: Mapping of BAC2 on Gga1q14 (triploid metaphase).

Human-chicken genomic comparisons

Syntenic conserved chromosome segments between human and chicken were determined from Schmid et al. [23]. Human cancer genes localized in the syntenic areas were selected from the "Atlas of Genetics and Cytogenetics in Oncology and Hematology" (http://www.infobiogen.fr/services/chromcancer/). Their presence in the presumed chicken chromosome areas was investigated by data processing using different web sites (NCBI: http://www.ncbi.nlm.nih.gov, EMBL-EBI: http://www.ebi.ac.uk/embl/, and Infobiogen: http://www.infobiogen.fr for human; NCBI and Wageningen University: http://www.zod.wau.nl/vf/ for chicken) and by calculation from the physical to the cytogenetic localisation using the relative position in the sequence and the "consensus mid-points" of the markers reported in Schmid et al. [23].
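The physical-to-cytogenetic conversion described above is, in essence, an interpolation between marker "consensus mid-points". The following Python sketch illustrates one way such a calculation could be performed, under the assumption of nearest-flanking-marker assignment; the anchor positions and bands are invented for illustration, not taken from Schmid et al. [23].

```python
def physical_to_band(pos_mb: float, anchors: list[tuple[float, str]]) -> str:
    """Map a physical position (Mb) to a cytogenetic band using the
    flanking anchor markers with known band assignments.

    anchors: sorted (position_mb, band) pairs, e.g. consensus mid-points
    of mapped markers. Here the band of the physically closer flanking
    marker is returned; a finer scheme could interpolate fractionally.
    """
    below = [a for a in anchors if a[0] <= pos_mb]
    above = [a for a in anchors if a[0] > pos_mb]
    if not below:
        return above[0][1]
    if not above:
        return below[-1][1]
    lo, hi = below[-1], above[0]
    # Assign to whichever flanking marker is physically closer.
    return lo[1] if (pos_mb - lo[0]) <= (hi[0] - pos_mb) else hi[1]

# Hypothetical anchors on chicken chromosome 2q (positions invented):
anchors = [(60.0, "2q24"), (90.0, "2q26"), (120.0, "2q31"), (150.0, "2q34")]
print(physical_to_band(100.0, anchors))   # -> "2q26"
```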
Isolation of BAC clones harbouring genomic DNA fragments flanking the viral/cellular junctions of MAV1-related proviruses

From the libraries of lambda recombinant DNA prepared from tumors 501D, 501, and 725 [5], we derived a total of 23 DNA fragments containing genomic sequences flanking the junction fragments previously identified in these three MAV-induced tumors. These fragments are representative of MAV integration regions (MIRs) at the chromosomal scale. Their size ranged from 800 bp to 4.7 kb. Each of them was checked by Southern blot analysis on chicken genomic DNA for the presence of repetitive sequences (figure 1B, C). Only one of them was found to contain repetitive sequences (figure 1B) and was discarded. The 22 remaining clones were used as probes for screening a chicken BAC (bacterial artificial chromosome) library containing 60,000 clones arrayed in 96-well plates, with average insert sizes of 120-150 kb (figure 1A). The probes were pooled in groups of three or four prior to hybridization. A total of 78 positive clones were selected for further studies; their DNA was digested with NotI and hybridized with the different probes separately to verify that they indeed contained MIRs (figure 1D, E).

Three different groups of BACs were isolated (figure 2) with the probes originating from the three nephroblastomas. Each group was tumor-specific: none of the probes from one given tumor hybridized with BACs corresponding to another tumor (data not shown). Furthermore, when all BACs from each group were radiolabeled and used as probes on BACs from the two other groups, no overlapping sequence could be found (data not shown). These results indicated that no DNA fragment was common to the collection of 78 BACs. Interestingly, probes 4 and 16 hybridized with several different BACs (4, 7, 8, 10, 14, 16) in the same group, and probes 81 and 62 hybridized with two different BACs (23.4 and 24.4) belonging to another group. These results suggested that two integration sites of MAV were localized in the same DNA fragments in tumor 725 and in tumor 501; however, they correspond to different MIRs in the chicken genome. In order to localize the various MIRs at the chromosomal scale, we performed FISH experiments using the various BACs as probes.

Figure 9: Detection of CCN3 sequences in normal and tumor DNA. RNA species purified from normal kidney cells (N) and tumor cells (T) were labeled (see Materials and Methods) and used to probe BAC15 DNA, which harbors the ccn3 gene. The DNA fragments detected with the RNA species expressed in tumor cells (panel B) confirm that ccn3 is overexpressed in the tumor context. As a control, the DNA fragments from BAC15 were hybridized with radiolabeled chicken ccn3 cDNA. The fragments detected correspond to exons encoding the ccn3 RNA species that are highly expressed in the tumor context (panel C).

Chromosome localization of BACs

Eighteen BACs containing sequences detected with the different probes could be assigned to the chicken chromosomes (figure 3). Although the distribution of MIRs on macro- and micro-chromosomes corresponded to the expected theoretical value, the number of MIRs mapped on chromosome Gga2 was twice as large as the number that would be expected from a random distribution.
The precise identification of the micro-chromosomes that gave a positive signal was not performed, but co-hybridization experiments established that the four positive BACs corresponded to loci localized on 4 different microchromosomes (data not shown). It is worth noting that the DNAs from 2 BACs (22 and 100), which were isolated with three different cellular probes from tumor 501 (41, 57, and 51, respectively), co-localized on Gga 1q11 (figure 4) and could not be separated in interphase nuclei. From these results, one could estimate that the distance separating the sequences contained in BACs 22 and 100 is smaller than a thousand kilobases. Similarly, DNA sequences from BACs 50 and 65, which were detected by probes 10 and 18, co-localised on Gga 2q21 (figure 7). In that case, the analysis of asynchronous replication figures allowed us to establish that the corresponding loci were distinct and probably contained within a DNA segment of a few hundred kilobases. Most interestingly, two BACs (1 and 90), isolated with probes from two different tumors (725 and 501), were assigned to Gga 5q23-25.

Figure 10: Differential expression of genes contained in positive BACs. The digested BAC DNAs were hybridized with labeled RNA species isolated from either normal kidney cells (N) or tumor cells (T). Comparison of the hybridization patterns allowed the identification of DNA fragments containing genes whose expression is either enhanced or abolished in tumors.

The strong signals obtained after hybridization of the chicken BACs with other galliform chromosomes suggested that the sequences of the MAV targets are relatively well conserved throughout the galliform order. The chromosome assignments were identical with respect to karyotype evolution among these birds. As an example, markers of Gga2 are scattered over two chromosomes in Catreus wallichii (data not shown). The assignment of ccn3 to Gga 2q34-36 was already reported [28]. To date, a series of nine genes have been assigned to chicken chromosome 2q: PRKDC (q24-25), PENK, MOS, LYN (q26), CALB1 (q26), CA2, TRHR, MYC, and HSF1, of which the human homologs lie on chromosome segment 8q11-q24.1 [23]. The localization of ccn3 on human 8q24 and on chicken chromosome 2q34-36 reinforced the chromosomal homology between the two species and suggested that the syntenic segment between the two species could be extended up to avian 2q3. Furthermore, the mouse ccn3 gene maps to chromosome 15 [33] in a region of conserved synteny with man including TRHR, MYC, and HSF1 [23]. The order of genes on the chicken map is still subject to changes, and, based on detailed analyses of other chicken chromosomes, many rearrangements are known to occur within syntenic regions. However, with this restriction regarding the order of the genes, the present findings suggest that ccn3 and ADAMTS1 belong to syntenic groups well conserved between chicken, mouse, and man. These genes also constitute another example where synteny is better conserved between chicken and man than between man and mouse.
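The over-representation of integration sites on Gga2 noted above can be made quantitative with a simple binomial test, sketched below in Python; the observed count and the genome fraction assigned to Gga2 are illustrative assumptions, not the exact values of this study.

```python
from scipy.stats import binomtest

# Illustrative numbers: k integration sites observed on Gga2 out of n
# mapped sites, with Gga2 assumed to carry roughly a fraction p of the
# genome (so that n * p sites would be expected at random).
k, n, p = 6, 18, 0.15

result = binomtest(k, n, p, alternative="greater")
print(f"observed {k}/{n} on Gga2, expected ~{n * p:.1f} at random; "
      f"one-sided p = {result.pvalue:.3f}")
```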
ADAMTS1 sequences are overexpressed in MAV-induced nephroblastomas

The detection of the ADAMTS1 locus as a MAV integration site caught our attention because the ADAMTS1 protein belongs to a family of proteases involved in angiogenesis and tissue remodeling, both of which are required for tumor progression [32]. In order to establish whether the BACs of interest indeed contained genes differentially expressed in tumor samples, polyadenylated RNA species purified from normal kidneys and nephroblastomas were used to probe digested BAC DNAs. Typical results obtained with a series of different BACs are shown in figure 10. Various BACs contained DNA sequences encoding RNA species that were either decreased or increased in tumors, or both. As a first step in our identification of MIRs, we focused on BAC2 because it contained sequences mapping in the vicinity of ADAMTS1 and provided a simple differential hybridization pattern. Two DNA fragments (6.2 kb and 3.2 kb) encoded abundant RNA species in the tumor samples, whereas a low-molecular-weight fragment encoded sequences that were slightly reduced in tumors. Only the 6.2 kb and 3.2 kb fragments could be subcloned. When used as a probe on chicken DNA, the 3.2 kb fragment was found to contain repetitive sequences and could not be used for further studies. The 6.2 kb fragment of BAC2 could be used as a probe to check that it was indeed strongly detected by RNA from the tumor samples (data not shown). To identify the sequences expressed from this 6.2 kb DNA fragment, the cloned insert was used as a probe to screen a chicken spleen cDNA library. Sequencing of two positive cDNA clones indicated that they shared 100% identity with part of the human ADAMTS1 coding sequence (figure 11). These results indicated that the DNA locus containing the chicken ADAMTS1 gene was a MAV integration site and suggested that deregulation of ADAMTS1 might be involved in the development of MAV-induced nephroblastomas. Northern blotting of RNA species isolated from normal kidneys and four different MAV-induced nephroblastomas indeed established that ADAMTS1 was overexpressed in the 4 tumors.

Figure 12: Detection of ADAMTS1 expression in normal kidney and nephroblastoma tissues. RNA samples 3, 4, 5, and 20 were prepared from MAV1-induced nephroblastomas collected 18 weeks after injection of MAV1, and RNA samples 8, 12, and 12 were prepared from normal kidneys of 18-week-old chickens. Electrophoresis and Northern blotting were performed as described in the text. The resulting blot was hybridized under stringent conditions with radiolabeled cDNA corresponding to the chicken ADAMTS1 sequence. Samples of glyceraldehyde-3-phosphate dehydrogenase RNA were used as a quantitation standard.

Discussion

The studies we have performed during the past decade have allowed us to identify the viral sequences responsible for the very high efficiency and restricted pathogenic potential of MAV1(N) [29-31]. We also established that MAV-induced nephroblastomas are polyclonal tumors [4] that constitute a unique model of the pediatric Wilms tumor [1]. The analysis of genomic libraries prepared from MAV1-induced tumors representing three different stages of tumor progression established that the MAV proviral genomes contained in the DNA of the tumor cells were not integrated at common sites.
However, the relatively small size of the DNA inserts transduced by the recombinant lambda phages did not allow us to exclude the possibility that MAV proviral genomes were inserted in common regions at the chromosome scale. In order to determine whether MAV-induced rearrangements of the host genome were common to the three chicken nephroblastomas representing increasing developmental stages [5], we isolated and characterized 78 BACs containing the normal DNA fragments corresponding to the insertional sites of MAV in the genome of these tumor cells. The molecular analysis of these BACs identified only one integration site common to the three different tumors. It is well known that selective pressures likely occurred during tumor progression and that the integration sites identified at late stages could be associated with events that led to tumor establishment. Therefore, the lack of a common integration site at a scale of 150 kb probably reflects the various developmental stages and phenotypes of the tumors. On the other hand, our results suggested that preferential MAV integration sites might be conserved in the developed tumors, since independent junction fragments corresponding to different proviral genomes cloned from a given tumor hybridized with the same BACs. These results suggested that the distribution of MAV integration sites in the tumors might not represent initial events but rather reflect the complex chromosome rearrangements that occur during tumor progression.

The use of BACs to perform a FISH analysis of the MAV integration sites permitted us to gain better insight into the distribution of integration sites in the various tumors analyzed. In spite of the polyclonal nature of the nephroblastomas, a rather simple profile was obtained. The chicken genome is composed of 34 chromosomes, among which are 9 macrochromosomes and 25 microchromosomes. The MAV integration sites were found to be equally distributed between the micro- and macrochromosomes that stained positive. However, these sites were not distributed randomly: the number of MAV integration sites on chromosome 2 was much higher than expected, and 3 integration sites were detected on this chromosome by two independent BACs. These findings suggested that, during the establishment and progression of nephroblastomas, cells maintaining chromosome 2 alterations were preferentially selected. The results obtained by FISH confirmed that MAV proviral genomes were integrated at a limited number of sites, as previously predicted by junction fragment analysis [4] and pulsed-field electrophoresis [1]. During the preparation of this manuscript, Pajer et al. reported the use of inverse PCR and LTR-RACE to identify nephroblastoma-associated loci (Nals) in MAV2-induced avian nephroblastomas (Pajer, personal communication and manuscript in press). In order to compare the positions of the MAV insertion sites identified by FISH and by PCR, we calculated the physical localization of the Nals on each corresponding chromosome and found a fairly good match between the two sets of results (see Additional file 1: Compilation of cytogenetic data obtained from FISH analysis). In both studies, MAV integration sites were found to be mainly distributed among chromosomes 1 and 2. Of particular interest was the identification of hot spots for proviral sequences at 1q1 and 2q2.
Whether these sites represent preferential MAV integration sites or regions containing genes required for tumor development remains an open question. Two different types of information can be drawn from these observations: i) the distribution of MIRs and Nals points to chromosome regions that frequently harbour proviral MAV sequences in tumors, and these regions likely contain genes that are important for tumor development; and ii) the reduced number of MAV integration sites maintained during tumor progression points to genes that are probably important at later stages, and the comparison of MAV integration sites in early and late tumors might help to distinguish between genes involved in the establishment and in the maintenance of the tumor state. Based on the relatively well-conserved synteny between man and chicken, it was also possible to predict the nature of potential genes of interest. The use of normal and tumor RNAs as probes to identify BAC fragments containing genes that are differentially expressed in normal and tumor tissue (figure 10) also provided critical information that could be used as another clue to assign potential genes to MAV insertion loci.

One of these regions corresponds to a locus that is lost in sporadic, non-papillary renal cell carcinomas and oncocytomas. GPH (gephyrin, at 14q23.3) is a cytoplasmic, peripheral protein that anchors the glycine receptor; although widely expressed, it is particularly abundant in kidney. TRAF3 (TNF receptor-associated factor 3, at 14q32-33) encodes an adapter protein that recruits other signaling molecules to the ligand-bound TNF family receptor; a gradient of TRAF3 is detected along the nephron, with progressive expression from the proximal tubule to the collecting duct. TGFB3 is a well-known transforming growth factor. The region defined by BACs 1 and 90 also corresponded to Nal 5-13 (Pajer et al., in press). Because these integration sites were identified in tumors representing different developmental stages, this area corresponds to a common integration region whose alteration is conserved during tumor progression, suggesting that the gene(s) encoded by this portion of the genome might be critical for nephroblastoma development and/or tumor progression. In addition, the two other integration sites identified in the most developed tumor by BACs 15 and 2 corresponded respectively to ccn3/nov (8q24.1 in human) and ADAMTS1 (21q21.3 in human), two genes whose involvement in angiogenesis, matrix remodeling, and tumorigenesis is well documented [7,19-27]. The ccn3 gene was previously mapped on chicken chromosome 2q34-36 [28]. Although the present study and the results of Pajer et al. indicated that ccn3 is not a common integration site for MAV, this gene was identified as a MAV target in both studies. However, the MAV2-induced tumors analyzed by Pajer et al. did not show any increase in ccn3 expression. Since both the MAV1- and the MAV2-induced nephroblastomas that we analyzed showed elevated levels of ccn3 expression [1], these conflicting observations may result from the route of injection, the time frame of injection, the different nature of the viral strains, or host differences. The MAV2(O) strain used in our previous studies was molecularly cloned and sequenced [2]; it induced 20% nephroblastomas, as opposed to the 100% efficiency of the MAV1(N) strain.
In both cases, nephroblastomas were induced after intravenous injection of 14-day-old embryos or intraperitoneal injection of day-old chickens [1]. Since we have established that blastemal cells undergoing epithelial differentiation are the targets of MAV1, the time frame and route of injection may be critical. Indeed, the blastemal cells express high levels of ccn3 (Cherel et al., manuscript in preparation). Therefore, the elevated levels of ccn3 expression detected in all MAV-induced nephroblastomas might result from the expansion of blastemal cells that are transformed at a well-defined stage of differentiation upon MAV1 infection.

Hybridization of BACs containing MAV1 integration sites with labeled mRNAs isolated from normal kidney tissue and nephroblastomas also permitted an analysis of the genes that are proximal to MAV integration sites and differentially expressed between normal and tumor conditions. Among the different genes uncovered in this study, the ADAMTS1 gene was of particular interest. The ADAMTS1 protein is a matricellular proteinase known to participate in the late stages of tumorigenesis. Forty-five percent of newborn ADAMTS1-null mice died, probably as a result of kidney malformation that becomes apparent at birth [34]. Comparison of the expression patterns of CCN3 and ADAMTS1 shows striking similarities: in both cases, overexpression of the protein is detected in all tumors tested, while the MAV proviral sequences are detected only once in the vicinity of these genes. These observations suggest that MAV-induced nephroblastoma arises via a multistep process involving a cascade of proteins acting along a common signaling pathway. Direct or indirect alteration of any step could result from MAV integration within or in the vicinity of critical genes whose increased expression would eventually be required for tumor progression. The identification of the TGFβ3 locus as a target for MAV integration in two independent tumors (501 and 725) favors such a hypothesis. The role of TGFβ1 in the expression of CCN genes has been widely documented, and the antagonistic activities of TGFβ1 and TGFβ3 have been shown to be critical in several instances. The activation of TGFβ3 expression by MAV might therefore result in an increased expression of CCN3 in tumors, similar to that observed upon integration of MAV within the ccn3 gene itself. Interestingly, tumor 725, which is the most developed, is the only one in which integration of MAV occurred at three gene loci whose alterations would have cumulative effects. A less developed tumor such as 501 shows integration only in the vicinity of TGFβ3, and the early, diffuse tumor does not show any of them.

In summary, our present study suggests that the development of nephroblastoma from an initial diffuse tumor phenotype (501D) to a well-developed compact tumor (725) is accompanied by the selection of MAV integration sites in chromosome loci where genes involved in kidney differentiation are localized. The alteration of any of these genes by MAV integration at early stages of blastemal cell differentiation would trigger the tumorigenic process. The multiplicity of potential genetic and cellular targets would account for the very high efficiency of MAV1(N), which can induce nephroblastomas in 100% of cases within an 8-week period post injection.
It will be interesting to determine whether the phenotypic variability of the MAV-induced nephroblastomas compares to that of Wilms' tumors, and whether the various tumor subtypes result from different sequences of gene alterations.
Human embryonic stem cell-derived neurons establish region-specific, long-range projections in the adult brain

While the availability of pluripotent stem cells has opened new prospects for generating neural donor cells for nervous system repair, their capability to integrate with adult brain tissue in a structurally relevant way is still largely unresolved. We addressed the potential of human embryonic stem cell-derived long-term self-renewing neuroepithelial stem cells (lt-NES cells) to establish axonal projections after transplantation into the adult rodent brain. Transgenic and species-specific markers were used to trace the innervation pattern established by transplants in the hippocampus and motor cortex. In vitro, lt-NES cells formed a complex axonal network within several weeks after the initiation of differentiation and expressed a composition of surface receptors known to be instrumental in axonal growth and pathfinding. In vivo, these donor cells adopted projection patterns closely mimicking endogenous projections in two different regions of the adult rodent brain. Hippocampal grafts placed in the dentate gyrus projected to both the ipsilateral and contralateral pyramidal cell layers, while axons of donor neurons placed in the motor cortex extended via the external and internal capsule into the cervical spinal cord and via the corpus callosum into the contralateral cortex. Interestingly, acquisition of these region-specific projection profiles was not correlated with the adoption of a regional phenotype. Upon reaching their destination, human axons established ultrastructural correlates of synaptic connections with host neurons. Together, these data indicate that neurons derived from human pluripotent stem cells are endowed with a remarkable potential to establish orthotopic long-range projections in the adult mammalian brain.

Electronic supplementary material: The online version of this article (doi:10.1007/s00018-011-0759-6) contains supplementary material, which is available to authorized users.

Introduction

Recent progress with the derivation of neural stem cells from embryonic [1-3], fetal [4-6], and adult [7] sources provides interesting prospects for regenerative medicine [8]. However, the capability of grafted neurons to integrate, and in particular to establish appropriate long-range projections, in the adult brain has been a matter of controversy. Pioneering studies employing primary fetal human donor cells [9,10] showed a substantial capacity for axonal outgrowth from telencephalic transplantation sites. However, massive in vitro expansion of neural cells was in some studies associated with impaired axonal outgrowth [11-13], while other studies reported extensive or even enhanced axonal outgrowth after extensive pre-transplant in vitro proliferation of the donor cells [2,14,15]. Site of implantation, age of the transplant recipient, presence and extent of local lesions, and glial scarring are also considered to influence axonal outgrowth from grafted neurons [16]. Park et al. found bioscaffolds highly effective in facilitating axonal growth from grafts after hypoxic injury, which was otherwise inhibited [17].
Technically, labeling strategies sufficient for the detection of distant processes were not always applied, which might have led to an underestimation of long-range projections in some studies. Mechanistically, axonal growth and pathfinding, as well as their inhibition, strongly depend on a precise interplay between endogenous signaling molecules and receptors on donor cells. In the adult brain, axonal outgrowth is, for example, inhibited by proteins associated with CNS myelin (e.g., Nogo) via signaling through the respective receptors on growth cones [18]. However, why axonal outgrowth from some donor populations escapes the inhibitory environment of the adult brain while it is blocked for others has not yet been determined. Species differences might contribute to the different results obtained in various xenograft models, potentially due to a mismatch of endogenous inhibitory molecules with xenogeneic receptors on human neurons. However, this notion is challenged by data from Gaillard et al., showing that long-range axonal outgrowth is possible following transplantation of murine cells into adult isogenic hosts [19]. Moreover, the results of this and several other recent studies suggest that murine donor cells exhibiting the same regional identity as the implantation site can establish region-specific axonal projections in newborn [20,21] and adult hosts [19]. However, the potential of human neural grafts to establish axonal projections in the adult brain still deserves further investigation.

We have recently established a stable population of long-term self-renewing neuroepithelial stem cells (lt-NES cells) from pluripotent human ES cells [2], which give rise to neurons with a posteriorized regional phenotype in vitro. We used this highly uniform population to explore whether heterotopically grafted human neural stem cells with a highly restricted regional phenotype can give rise to region-specific axonal projections. To that end, lt-NES cells were transplanted into the cortex and hippocampus of adult rodents, i.e., locations exhibiting different and highly specific neuronal innervation patterns of clinical relevance. Our data show that lt-NES-derived neurons develop axonal projections highly specific for the implantation site and establish morphologically mature synapses.

Cell culture

Human ES cell-derived long-term self-renewing neuroepithelial stem cells (lt-NES cells, derived from hES cell lines I3 and H9.2) were generated and transduced for GFP expression as described previously [2,22] and in the Supplementary Methods. For transplantation, donor cells at passages 25-55 were trypsinized, washed in calcium- and magnesium-free PBS supplemented with 0.1% DNase, and concentrated to 7.5 × 10⁴ cells/µl.

PCR

RNA was extracted using standard procedures from lt-NES cells and their differentiating progeny after 2, 4, and 8 weeks of in vitro differentiation, and cDNA was subsequently generated. Primer pairs (Supplementary Methods, Table 1) were designed with Primer3, and PCRs were performed using common cycling parameters.

Animals and transplantation

Severe combined immunodeficient-beige (SCID-bg) mice were used at an age of 8-10 weeks (n = 52, body weight 22-28 g). Alternatively, female 12-week-old Sprague-Dawley rats (n = 20, body weight 220-280 g) were used. Rats and mice were stereotactically transplanted according to coordinates adopted from Paxinos et al. [23] (Supplementary Methods). Sprague-Dawley rats were immunosuppressed with daily injections of cyclosporine (10 mg/kg i.p.).
Animals were monitored for wound infections and neurological deficits on a daily basis during the first 2 weeks after transplantation and at weekly intervals thereafter. Care and use of the animals conformed to institutional policies and state legislation.

Immunohistochemistry and microscopy

Primary antibodies (Supplementary Methods, table 2) of the same species were never used together, to avoid cross-reactivity. Primary antibodies were visualized using corresponding FITC-, Cy3- or Cy5-conjugated secondary antibodies. Sections were analyzed on a Fluoview 1000 confocal microscope (Olympus) or, if DAPI visualization was required, on a Zeiss Axioimager Z1 equipped with Apotome technology (Zeiss) to reconstruct optical sections. Pre-embedding immunolabeling for electron microscopy was performed with the human-specific anti-synaptophysin antibody. Ultrathin sections were examined under an electron microscope (CM-10, Philips).

Statistical analysis

In vivo analysis for the assessment of viability was performed in analogy to the Cavalieri method (Supplementary Methods). For the determination of phenotypes in vivo, at least 150 cells per animal (n = 3 per time point) were counted for every marker. Values represent % ± standard deviation. Statistical significance was calculated using the paired Student's t test [*p (two-sided) = 0.01-0.05].

Results

Prolonged differentiation into mature, non-tumorigenic grafts

Human ES cell-derived long-term self-renewing neuroepithelial stem cells (lt-NES cells) were propagated as described previously [2]. In the presence of FGF2 and EGF, these cells exhibit uniform expression of the neural stem cell-associated genes nestin and sox2, a rosette-like growth pattern, high neurogenic differentiation potential, and a regional phenotype corresponding to an anterior hindbrain location, with all these properties remaining stable for at least 80 passages [2]. For transplantation, 7.5 × 10⁴ lt-NES cells (passages 25-55; derived from lines I3 and H9.2 [24,25]) expressing EGFP from the PGK promoter [2] were stereotactically injected into the dentate gyrus or motor cortex of adult immunodeficient SCID-bg mice or immunosuppressed Sprague-Dawley rats. Recipient mouse brains were analyzed 3, 6, 12, 24, and 48 weeks after transplantation, and rat brains at 3, 6, and 12 weeks after transplantation. Analysis of the hippocampal grafts in mice revealed that the survival of GFP-positive donor cells in the hippocampus decreased from 48.7 ± 5.5% at 3 weeks after transplantation to 15.8 ± 4.8% at 12 weeks after transplantation and remained largely stable thereafter. In parallel, the graft volume decreased from 100.7 ± 8.6 × 10⁶ to 30.1 ± 6.3 × 10⁶ µm³. Despite almost stable donor cell numbers beyond 12 weeks post grafting, the graft volume decreased further until the end of the experiment (p = 0.03, Fig. 1A). The majority of individual donor cells showed protracted differentiation characteristics along the neuronal lineage (Fig. 1B). Cells expressing nestin decreased from 39.2 ± 10.0% at 3 weeks after transplantation to undetectable levels at >6 months after transplantation. Within the same time period, the percentage of proliferating Ki67-positive cells decreased from 15.9 ± 6.4% to undetectable levels (Fig. 1B-F). Not a single teratoma or neurogenic tumor was detected in more than 70 animals upon both macroscopic and microscopic examination.
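The volume comparison above (p = 0.03) illustrates the paired, two-sided Student's t test described in the statistical analysis. A minimal sketch follows; the numbers are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of a paired, two-sided Student's t test as used for the
# longitudinal graft measurements. Values below are hypothetical placeholders.
from scipy import stats

# e.g., graft volumes (x10^6 um^3) for the same animals at two time points
volumes_12w = [31.2, 28.4, 30.7]
volumes_48w = [24.9, 22.1, 25.3]

t_stat, p_two_sided = stats.ttest_rel(volumes_12w, volumes_48w)
print(f"t = {t_stat:.2f}, two-sided p = {p_two_sided:.3f}")
# The paper flags results with p (two-sided) between 0.01 and 0.05 as significant (*).
```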
Immunohistochemical analysis of the grafts failed to detect any human cells positive for Oct4, cytokeratin, α-fetoprotein, or smooth muscle actin as markers for residual pluripotent or non-neural cells (data not shown). The neuronal marker MAP2ab (Fig. 1G) was present from the earliest time point of analysis and increased up to 82.7 ± 3.3% at 12 months after implantation, whereas NeuN, another marker expressed in mature human neurons, was expressed only after more than 12 weeks in vivo and increased up to 24.7 ± 9.3% at 12 months post grafting (p = 0.04, Fig. 1H). Most of the grafted cells remained within the primary transplantation site (Fig. 1C, E, G). However, occasional mature neurons (<0.1%) were found to populate adjacent hippocampal and cortical regions, where they could be detected for up to 60 weeks after transplantation, the latest time point assessed (Figs. 1H, 3C). GFAP-positive astrocytes accounted for less than 0.3% of detectable donor cells, and staining with the oligodendrocyte-specific marker O4 revealed no immunopositive cells (data not shown). Donor cells could be identified not only by virtue of their GFP expression, but also using human-specific antibodies to human nuclei, Ki67, nestin, synaptophysin and NF-M [26]. Furthermore, human nuclei show a remarkably homogeneous heterochromatin pattern in DAPI stains (Fig. 1D, H, asterisks), which can be distinguished from the coarse heterochromatin of mouse cells (e.g., Figs. 1H, 3H). Nonetheless, fusion of donor cells with host cells remains a concern in the interpretation of transplant studies, in particular when donor cells appear to acquire traits of resident cells. It is thus essential to distinguish whether regional differentiation patterns are truly due to donor cell plasticity rather than mimicry through cell fusion. We addressed this issue by staining grafted brain slices 6 and 9 months after transplantation into SCID-bg mice with an antibody against human nuclei and combining this staining with in situ hybridization for the detection of mouse satellite DNA. More than 500 human nuclei were analyzed at both transplantation sites, but not a single nucleus co-stained positive for the murine DNA in situ probe (Fig. 1I), suggesting that cell fusion is not a relevant event after transplantation of human lt-NES cells into the adult mouse brain. Human lt-NES cells acquire a posterior phenotype corresponding to the hindbrain area under standard in vitro differentiation conditions [2]. Therefore, we were interested in whether these cells might, upon integration into the telencephalon, acquire an anterior phenotype. More than 150 donor cells were analyzed per animal, and only two cells with a co-localization of human nuclei and BF1, a transcription factor widely expressed in the telencephalon, were found (Fig. 2A; <0.2%). Mature human neurons were, irrespective of their location, of an inhibitory, GAD67-positive (Fig. 2B; 55.8 ± 6.8%) and calretinin-positive (Fig. 2C) phenotype, with no major difference between cortical and hippocampal transplants. In addition, clusters of donor cells also stained positive for the excitatory synaptic marker vGlut2 (Fig. 2D). However, the punctate and distal synaptic staining pattern of this marker precluded a reliable quantification. The remaining cells could not be clearly assigned. Specifically, no human nuclei or synapses co-localized with vGlut1 or with markers of dopaminergic or serotonergic differentiation.
This transmitter phenotype is also in line with the in vitro regional code of the donor cell population and further supports the heterotopic identity of the graft with respect to the telencephalic transplantation sites. Human cells also stained positive for the GABA-A receptor (Fig. S1A) and the GluR1 subunit (Fig. S1B) of the AMPA receptor within hippocampal and motor cortex locations.

lt-NES-derived neurons generate axons in vitro and in vivo

After initiation of in vitro differentiation by growth factor withdrawal, lt-NES cells formed a complex axonal network within several weeks (Fig. S2A). RT-PCR analyses showed that differentiated lt-NES cells express a number of factors and receptors known to be involved in axon outgrowth and guidance [27] (Fig. S2B). Members of the netrin, ephrin, semaphorin and robo/slit families of guidance molecules and receptors were expressed in proliferating lt-NES cells, but some of them decreased during in vitro differentiation, suggesting the presence of a cell-autonomous time window for the direction of axonal outgrowth after transplantation. In contrast, within the adult mammalian brain, axonal growth is limited to an absolute minimum [27]. Molecular stop signals such as Nogo, myelin-associated glycoprotein (MAG), myelin-oligodendrocyte glycoprotein (MOG) and the repulsive guidance molecule (RGM-A) [27] play an important role. These inhibitors signal through a receptor complex composed of the Nogo receptor (NgR1), LINGO and p75. RT-PCR analysis of proliferating and differentiating human lt-NES cells revealed a continuous expression of Nogo and RGM-A, whereas MAG and MOG were not expressed (Fig. S2C). NgRs and LINGO were expressed at a constant level, whereas p75 was upregulated upon neuronal differentiation (Fig. S3). Upon in vivo transplantation, axons from primary motor cortex grafts grew at a speed of up to 1 mm/week within the first 6 weeks after transplantation in mice and rats, as measured by the expression of the human neurofilament protein (hNF-M) (schematic representation in Fig. 3A; seen in 11 out of 12 mice surviving for 6 weeks or longer). Most cells remained within their primary cortical transplantation clusters (Fig. 3B), with single cells (<0.1%) migrating out of these primary transplantation sites (Fig. 3C). Human axons originating from these cells entered and followed the corpus callosum in the ipsilateral hemisphere and frequently branched off into the adjacent neocortex or entered the internal capsule (Fig. 3D). Upon high-power magnification, the grey matter of the basal ganglia, too, was found to be scattered with axons staining for human synaptophysin. Patches of human synaptophysin immunoreactivity were frequently found in close spatial relationship with dots staining for the postsynaptic marker PSD95, suggesting the formation of xenogenic synapses (Fig. 3d, inset). Some of the donor-derived axons entered the cerebral peduncles (Fig. 3E) and could be further followed into the ipsi- and contralateral grey and white matter of the cervical spinal cord in 3 out of 8 mice surviving for 9 or 12 months after transplantation (Fig. 3F). Long-range projections were also found to extend through the corpus callosum into the contralateral hemisphere, where they branched off the corpus callosum and proceeded through all cortical layers (Fig. 3G). Here and in other target regions, human synaptophysin-positive dots were often closely associated with host axons, suggesting the formation of synaptic structures (Fig. 3H).
For hippocampal transplants, 7.5 × 10⁴ cells were stereotaxically delivered to the upper blade of the dentate gyrus (DG) of adult mice and rats (Fig. 4B). Many axons emanating from these grafts projected to the ipsilateral CA3 sector, an innervation pattern characteristic of the mossy fiber pathway. Here, human axons, identified by virtue of their GFP expression, were found to spread across a distance of up to 2 mm within 3 weeks after transplantation (Fig. S4, rat hippocampus). Within the CA3 sector, human axons followed the stratum radiatum of the pyramidal layer (Fig. 4C), from where they further projected to reach the adjacent CA2 and CA1 sectors. Remarkably, some of the donor-derived axons traversed the pyramidal cell layer, entered the fimbria and crossed to the contralateral hemisphere, where they approached the contralateral hippocampal pyramidal cell layer (Fig. 4D), a trajectory typical of commissural hippocampal axons. Quantification of axons positive for human neurofilament revealed a high abundance of donor-derived fibers in the ipsi- and contralateral hippocampus as compared with extrahippocampal regions such as the corpus callosum (p = 0.004), the entorhinal (p = 0.005) or motor cortex, and the thalamus (Fig. 4E). These data strongly support the notion that the engrafted neurons adopt projection patterns typical of resident hippocampal neurons [28].

Evidence for the formation of xenogenic synapses

Considering the well-characterized synaptic circuitry of the hippocampus, we chose hippocampal grafts to assess morphologically the formation of synapses between donor and host cells. At 6 months after transplantation, human axons identified with the human-specific NF-M antibody were found inside the stratum radiatum (STR; Fig. 5A). Alongside these projections, small patches of human synaptophysin immunoreactivity were detected, suggesting the formation of presynaptic terminals. Some of these patches were co-stained with an antibody to vGlut2 (dilution 1:4,000; boxed areas in Fig. 5B). No vGlut2 immunoreactivity was detected outside the hSyn-positive patches under the conditions used here. No hSyn or vGlut2 signal was detected in non-transplanted hippocampi. When the vGlut2 antibody was used at higher concentrations (1:800), abundant small immunoreactive puncta became detectable (data not shown), most likely corresponding to the endogenous vGlut2-positive terminals in this region [29]. vGlut2 may thus, under the special conditions described here, be considered a human-specific marker. In further triple-labelings, the antibody against hSyn was replaced by a MAP2ab antibody labeling host dendrites (Fig. 5C). High magnification revealed a dotted vGlut2-positive staining of human synaptic terminals around host dendrites, suggesting the formation of xenogenic synapses (Fig. 5c). Interestingly, in the ipsilateral hippocampus, hSyn and vGlut2 immunoreactivity were mainly detected within the stratum radiatum (STR, p = 0.015), whereas within the contralateral hippocampus (Fig. 5D), vGlut2 immunoreactivity was, though generally less abundant (STR ipsilateral vs. STR contralateral, p = 0.049), preferentially found in the vicinity of human fibers within the stratum oriens (Fig. 5E).
The clear identification of hSyn-positive terminals and their co-localization with markers for specific neurotransmitters allowed the quantification of inhibitory and excitatory donor-derived synaptic terminals in different projection fields. This analysis revealed that within the ipsilateral pyramidal cell layer, 36.7 ± 5.9% of human terminals co-stained positive for GAD67, whereas 12.8 ± 3.2% of the human terminals co-stained positive for vGlut2 24 weeks after transplantation. Within the contralateral pyramidal cell layer, 32.9 ± 5.9% of the human terminals stained positive for vGlut2, and no clear co-localization with GAD67 could be detected. To confirm the formation of xenogenic synapses at the ultrastructural level, we performed pre-embedding electron microscopy with an antibody recognizing human synaptophysin 1 year after transplantation. In line with the light microscopic data, strong immunoreactivity was selectively found in axon terminals (Fig. 6), demonstrating correct protein targeting of synaptophysin in the human lt-NES cell-derived neurons. Stained terminals formed regular synaptic contacts in the hilus close to the transplant (Fig. 6A) and within the ipsilateral stratum radiatum of CA1-3 (Fig. 6B, C). As a structural sign of functional activity, the terminals displayed abundant vesicles, and in particular docked vesicles at the presynaptic membrane (Fig. 6a-c), suggesting the presence of a readily releasable pool of vesicles [30].

Fig. 3 caption (excerpt): Single human neurons migrated out of the clusters for a maximum distance of 1.5 mm, still residing within the cortex, where they oriented radially (C). Most human fibers entered and proceeded within the corpus callosum (CC), frequently branched off into the adjacent cortex, or turned medially to join the internal capsule (IC) (D). Magnification from the striatum (d) revealed human fibers within the white matter of the internal capsule, and human-specific synaptophysin immunoreactivity, frequently in close association with PSD95 immunoreactivity (magnified insert). Human axons were also identified in the cerebral peduncle (E) and within the grey and white matter of the cervical spinal cord (F). Human axons that crossed to the contralateral hemisphere branched off the corpus callosum and traversed the adjacent cortex in a tangential orientation (G). Here, human-specific synaptophysin immunoreactivity was found in close association with axons of host neurons (magnified insert in H). Scale bars: B, C, E and F, 30 µm; D and G, 100 µm; d and H, 10 µm. In B and C, arrows point to the pial surface.

Discussion

The most important finding of this study is the remarkable specificity with which human ES cell-derived long-term self-renewing neuroepithelial stem cells (lt-NES cells) recapitulate endogenous axonal projections within the adult brain. Some previous studies with primary and in vitro propagated human cells had already hinted at their capacity for extensive axonal innervation [9,10,15,31], which was mainly found to follow white matter tracts close to the site of transplantation. Our study revealed that upon transplantation into the motor cortex, lt-NES cells establish ipsi- and contralateral projections as well as trajectories into the pyramidal and extrapyramidal motor system, including the cerebral peduncles and the cervical spinal cord. Interestingly, the same cells adopted a hippocampus-specific projection profile with laminar specificity when transplanted into the dentate gyrus.
The contralateral fiber projection and termination pattern via the fimbria-fornix closely resembles that of endogenous commissural fibers [28]. Importantly, only glutamatergic human terminals were detectable along these long-range fibers, despite the fact that the majority of donor cells show a GABAergic phenotype after transplantation. To exclude fusion with host cells as a possible explanation for this phenomenon, we used all available genetic, immunological, and morphological markers to unambiguously identify human cells. We also demonstrate that human axon terminals establish contacts displaying all morphological characteristics of normal active synapses. So far, only a few studies employing murine cells have reported such a highly region-specific projection pattern of grafted neurons. Gaillard et al. [19] showed that motor cortex grafts can reestablish appropriate long-range projections within the adult murine motor system. They found the homotopic nature of their explants to be important for successful reconstruction. This notion is further supported by two recent publications employing transplantation of murine ES cell-derived cortical precursors into newborn mice [20,21]. In contrast, the human ES cell-derived lt-NES cells used in our study do not exhibit a telencephalic phenotype. They are posteriorized and show a marker profile compatible with an anterior hindbrain identity, a bias acquired during long-term in vitro expansion in the presence of growth factors [2]. After transplantation into the adult hippocampus and motor cortex, they do not acquire a region-specific phenotype, as indicated by the absence of BF1, a transcription factor broadly expressed in both telencephalic regions. Thus, for both target regions, our donor cell population can be regarded as heterotopic. Yet, the cells exhibit highly specific patterns of axonal outgrowth. The precise mechanisms for this remarkable specificity remain to be elucidated. As in the study by Gaspard et al., we left the surrounding endogenous motor cortex or hippocampal tissue intact in order to maintain potential local guidance cues [27]. However, it seems unlikely that complex non-linear trajectories such as the innervation of target regions in the contralateral hemisphere via the fimbria-fornix pathway are merely guided by chemoattractants and repellents. A more likely explanation could be that the newly formed axons grow alongside host fiber tracts, which, by nature, represent region-specific trajectories. Considering that many of the newly formed axonal projections pass myelinated fiber tracts such as the corpus callosum and the fimbria-fornix, it is remarkable that this process appears not to be inhibited by myelin-associated inhibitors of axonal growth. In this respect, the slow maturation of human neurons might provide a substantial advantage. We found that p75 is only upregulated after several weeks of in vitro differentiation.

Fig. 5 caption (excerpt): Immunohistochemistry of xenogenic synapses. A Axons positive for human neurofilament (hNF-M) project through the stratum radiatum of CA1, which exhibits abundant dotty immunoreactivity for human synaptophysin, suggesting formation of presynaptic terminals by the donor neurons. B Triple immunofluorescence staining reveals that a subset of human synapses co-expresses the glutamatergic marker vGlut2 (boxed purple punctae). C Triple staining with antibodies to hNF-M, vGlut2 and MAP2 reveals a close association of the human glutamatergic terminals with host dendrites (c shows magnification of the boxed area in C).
D In the contralateral CA3 sector, vGlut2 immunoreactivity localizes mainly within the stratum oriens and the pyramidal cell layer (arrowheads), as quantified in E. PCL, pyramidal cell layer; STR, stratum radiatum; STO, stratum oriens. Scale bars 20 µm.

This delayed expression of an important member of the receptor complex inhibiting axonal outgrowth [32,33] might determine a permissive time window for axonal outgrowth from human lt-NES cells in the adult brain.

Prospects for experimental brain repair

Although the mechanisms enabling long-range axonal projections in the adult mammalian nervous system require further investigation, the results of our study and of other studies in related systems [9,10,15,31] indicate that human neural grafts may, under appropriate conditions, eventually be used for the innervation of remote targets in the host brain. This prospect could be particularly relevant for diseases affecting a specific group of neurons with defined projections. Examples include central motor neurons, which are affected in ALS, and nigral dopamine neurons projecting into the striatum, the main target of Parkinson's disease. In this regard, it is important to realize that substantial functional benefits might already result from incomplete structural repair [34]. However, functional data going beyond the morphological detection of axons and synapses will be essential to validate the efficacy of graft-derived projections. Safety considerations play an essential role with respect to the source of the donor cells [35,36]. The potential to propagate human ES cell-derived lt-NES cells over many passages without compromising their neuronal differentiation potential enables the generation of large numbers of pure neural donor cells. In our preclinical model, and with the cell numbers used, this population did not result in tumor or teratoma formation in any of the transplant recipients, which were followed up to the limit of the recipient animals' lifespan. Taken together, human neuroepithelial stem cells derived from pluripotent cells show promising results in terms of transplant survival, safety, neuronal differentiation, axonal pathfinding, and synaptogenesis in vivo. In combination with efficient strategies to direct the donor cells towards fates of therapeutic value [37-41] and iPS technology [36,42] for the generation of immunocompatible grafts, they should provide a versatile tool for experimental nervous system repair.
Comparative Study of Biometric Models for Individuality Investigation

Security systems are essential throughout the world for the protection of life and property, and biometrics is a growing technology that has become increasingly used in our daily life. Areas of application include, but are not limited to, the commercial banking sector, educational institutions, border control via passport verification, and voter registration and verification. In order to provide such needed and adequate security, biometric systems are essential. Biometrics is the technique used to identify an individual based on his/her physiological (e.g., fingerprint, face, retina) and behavioral (e.g., gait, signature, voice) characteristics. Every individual's identity relies mainly on these categories of traits. Traditional methods of establishing a person's identity are knowledge-based (password, username) and possession-based (card, token). A biometric system that uses a single biometric trait for recognition is prone to problems related to non-universality, spoof attacks, a limited degree of freedom, large intra-class variability, and noisy data. Some of these problems can be overcome by integrating the use of multiple biometric traits of a user (e.g., face and fingerprint). This paper provides a comparative study of commonly known biometric models for individuality investigation, with emphasis on methodologies, strengths and weaknesses.

INTRODUCTION

The word 'biometrics' is derived from the Greek words 'bios', meaning life, and 'metric', meaning measurement [1]. Biometrics is the most practical means of identifying and authenticating individuals in a reliable and fast way through unique physiological and behavioral characteristics. Any characteristic can be used as a biometric identifier to recognize a person as long as it satisfies the following requirements [3-5]:

a. Universality: Every individual should possess the biometric characteristic.
b. Uniqueness: No two persons should be the same in terms of the characteristic.
c. Permanence: The biometric characteristic should be invariant over time.
d. Collectability (Measurability): Ease of acquisition or measurement of the trait. The acquired data should be in a form that allows subsequent processing and extraction of the relevant feature sets.
e. Performance: The recognition accuracy, speed, and robustness to operational and environmental factors should be acceptable.
f. Acceptability: The extent to which people are willing to accept the characteristic, such that they are willing to have their biometric traits captured and assessed.
g. Circumvention: How difficult it is for fraudulent techniques to fool a system that is based on the characteristic.

The biometric characteristics a person possesses can be either physiological or behavioral, as shown in Figure 1.

Figure 1: Classification of Biometrics

Physiological characteristics are unique characteristics physically present in the human body. These can be either morphological or biological: morphological identifiers mainly consist of the face, fingerprint, iris, and ear, while biological identifiers are DNA, blood, saliva, or urine, which may be used by medical teams and police forensics. Behavioral characteristics are related to the behavior of a person and include the signature, voice, gait and walking pattern, and so on [2].
Biometrics offers certain advantages: biometric traits are distinctive, cannot be forgotten or lost, and are difficult to forge or steal, and the person to be authenticated needs to be physically present at the point of identification [3,5]. Human behavioral characteristics can change with time, but physiological characteristics can hardly be changed and have the benefit of remaining stable throughout the life of an individual. Biometrics is inherently more reliable and more capable than traditional knowledge-based and token-based techniques. In most cases, a typical biometric system consists of the following components, as depicted in Figure 2:

a. Sensor module: This module is responsible for the acquisition of biometric data.
b. Feature extraction module: This is where the acquired data is processed to extract feature vectors.
c. Matching module: The extracted feature vectors are compared against those stored in the template database to ascertain the degree of similarity.
d. Decision module: Where the user's identity is verified or a claimed identity is accepted or rejected.

A biometric system can be either unimodal or multimodal. A unimodal system relies on the evidence of a single source of information for authentication, while a multimodal system relies on more than one source of information. A unimodal system is prone to the following deficiencies [3]: (a) noisy data from sensors, (b) high intra-class variation, (c) high inter-class similarity, (d) non-universality, (e) non-invariant representation, and (f) spoofing. A multimodal biometric system based on multiple traits is expected to be more robust to noise, to address the problem of non-universality, to improve matching accuracy, and to provide reasonable protection against spoof attacks [6,7]. The use of biometrics is application-dependent, and there is no single biometric that can meet all the requirements of every possible application. Generally, a biometric system can operate either in verification mode or in identification mode [7]. In verification (or authentication) mode, the system performs a one-to-one (1:1) comparison of the captured biometric with a stored template in order to confirm a claimed identity. In identification mode, a one-to-many (1:N) comparison against a biometric database is performed in an attempt to establish the identity of an unknown individual: the user's template is matched against all the templates stored in the database, and the identity of the template with the highest similarity is returned [8]. Identification can be used either for positive recognition (where the user does not provide any claim about the template to be used) or for negative recognition, where the system establishes whether a person is who he/she denies being; negative recognition can only be achieved through biometrics, since other methods of personal recognition such as passwords, PINs, or keys are ineffective for this purpose. Areas of application of biometrics to ascertain individuality include, but are not limited to, the following [4]: law enforcement and public security, military, border/travel/immigration control, voter registration and identification, healthcare and subsidies, commercial applications, and physical and logical access control. Section 2 of this paper presents various biometric models for individuality investigation, while Section 3 presents a review of research works on biometric models by different authors, with emphasis on methodologies, strengths and weaknesses. The conclusion drawn is presented in Section 4.
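Before turning to the individual models, here is a minimal sketch contrasting the verification (1:1) and identification (1:N) modes described above. The feature vectors, the cosine similarity measure, and the threshold value are illustrative assumptions, not a specific system's design.

```python
# Minimal sketch of verification (1:1) vs. identification (1:N) matching.
# Feature vectors, similarity measure, and threshold are illustrative only.
import numpy as np

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, claimed_template, threshold=0.85):
    """1:1 comparison: accept or reject a claimed identity."""
    return similarity(probe, claimed_template) >= threshold

def identify(probe, template_db):
    """1:N comparison: return the enrolled identity with the highest score."""
    scores = {user: similarity(probe, tmpl) for user, tmpl in template_db.items()}
    return max(scores, key=scores.get), scores

# Toy usage with random vectors standing in for extracted features
rng = np.random.default_rng(0)
db = {"alice": rng.random(64), "bob": rng.random(64)}
probe = db["alice"] + 0.05 * rng.random(64)   # noisy re-capture of alice
print(verify(probe, db["alice"]))             # True for a genuine claim
print(identify(probe, db)[0])                 # 'alice'
```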
BIOMETRIC MODELS FOR INDIVIDUALITY INVESTIGATION

Summaries of various biometric models for individuality investigation are presented below.

Fingerprint

A fingerprint is an impression left by the ridges and valleys of a human finger. The ridges are the dark, raised portions, while the valleys are the white, lower portions [3]. Human fingerprints are detailed, unique, difficult to alter, and durable over the life of an individual, making them suitable for human identification and classification. The ridges of the finger form six major pattern types, namely arch, tented arch, left loop, right loop, twin loop, and whorl, as shown in Figure 3 [3,4].

Figure 3: Types of fingerprint patterns

Acquisition of the fingerprint image is considered to be the most crucial step in an automated fingerprint identification and authentication system, as it has a drastic effect on the overall system performance. The performance of an automated identification system relies heavily on the fingerprint image quality, which can be affected by several factors, such as the presence of scars, variations in the pressure between the finger and the acquisition sensor, the introduction of spurious features, contaminants, or artifacts, and the environmental conditions during the acquisition stage [4]. The procedure for capturing a fingerprint using a sensor (optical, ultrasonic, capacitive, or thermal) consists of rolling or touching the finger on the platen. Fingerprint image enhancement is performed to remove the noticeable noise and other contaminants acquired during enrolment, and it requires a number of processes, which include segmentation, normalization, filtering, binarization, and thinning [3].

Iris Recognition

The iris is the elastic, thin, pigmented, circular connective tissue in the eye, which controls the size and diameter of the pupil and limits the amount of light entering the eye [8]. A typical iris is depicted in Figure 4. The iris develops early in life in a process called morphogenesis [2]. The iris is unique to each individual, and even identical twins have different iris patterns. The texture of the iris is very complex and distinctive, which is very useful for the recognition process. The iris has been described as an important part of the body for biometric identification for the following reasons [16]: (a) the iris is an internal organ that is well protected against damage and wear, unlike the fingerprint, which can be difficult to recognize due to bruises or cuts; (b) the iris is mostly flat, and its geometric configuration is only controlled by two complementary muscles (the sphincter pupillae and dilator pupillae) that control the diameter of the pupil. Challenges confronting iris recognition include the growing difficulty of acquisition at distances larger than a few meters and the need for cooperation from the individual to be identified. It is also susceptible to low performance on poor-quality images [3]. Unlike fingerprint scanners, which can be easily acquired, iris scanners are relatively expensive; scanners can be defeated or fooled by a high-quality image; and, ultimately, the cooperation of the user is required during the iris data acquisition stage [2,12].

Face recognition

This is a way of recognizing a human face through technology such as photography or video. This technology analyses the shape and position of different parts of the face to determine a match.
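Returning to the fingerprint enhancement pipeline described above (normalization, filtering, binarization, thinning), here is a minimal sketch using OpenCV and scikit-image. The file path and parameter values are illustrative assumptions, not a reference implementation.

```python
# Minimal fingerprint-enhancement sketch: normalization, filtering,
# binarization, thinning. 'fingerprint.png' is a placeholder path;
# the threshold parameters are illustrative only.
import cv2
import numpy as np
from skimage.morphology import skeletonize

img = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)

# Normalization: stretch intensities to the full 0-255 range
norm = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)

# Filtering: mild Gaussian blur to suppress sensor noise
blur = cv2.GaussianBlur(norm, (3, 3), 0)

# Binarization: adaptive thresholding copes with uneven contact pressure
binary = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, blockSize=15, C=4)

# Thinning: reduce ridges to 1-pixel skeletons for later minutiae extraction
skeleton = skeletonize(binary > 0).astype(np.uint8) * 255
cv2.imwrite("fingerprint_skeleton.png", skeleton)
```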
Facial recognition has received substantial attention because of its role in various security applications, such as airport screening, criminal detection, face tracking, and forensics. Automated face recognition involves several processes, including face detection, face normalization, face feature extraction, and matching, as depicted in Figure 5.

Figure 5: Block Diagram of Face Recognition System

The most common approaches used for facial recognition systems are feature-based techniques, template-based techniques, appearance-based techniques, model-based techniques, and hybrid methods [24]. In feature-based techniques, facial features such as the eyes, nose, mouth, eyebrows, and shape of the face, together with the positional relationships between them, are extracted, and their locations, geometry, and appearance are fed into a structural classifier. A major challenge of this technique is that it does not allow for feature restoration, particularly when the system tries to retrieve features that are invisible due to large variations. In template-based techniques, facial features such as the eyes, nose, and mouth are extracted based on a template function and an appropriate energy function. In appearance-based techniques, linear transformations and statistical methods are used to find basis vectors to represent the face. In model-based techniques, a new sample is introduced to the model, and the parameters of the model are used to recognize the image; this technique usually classifies images as 2D or 3D. The hybrid method uses a combination of both holistic and feature extraction methods, and 3D images are generally used. The image of the face is captured in 3D to capture the curves of the eye sockets and the shapes of the chin or forehead. The 3D system comprises detection, position, measurement, representation, and matching. Each of the feature extraction methods has its shortcomings; for example, template-based feature extraction does not represent the global face structure, whereas appearance-based feature extraction does represent the global face structure, albeit at a high computational cost. Detecting faces in images is prone to some limitations, including illumination problems, pose variations, and occlusions due to accessories. Various face detection techniques have been developed to address these limitations, including Principal Component Analysis, neural networks, machine learning, geometrical modelling, the Hough Transform, and template matching [24]. Generally, face recognition has a number of significant weaknesses. The technology focuses mainly on the face portion, that is, from the hairline down; as a result, a person has to look straight at the camera to make recognition possible at the enrolment stage. Also, although the technology is still developing at a rapid pace, the level of security it currently offers is not yet commensurate with that of other biometrics.

Gait Recognition

Gait recognition is the study of the way humans walk, which can also be used for identification purposes. It is a process in which features of human motion are extracted and used to authenticate the identity of the person in motion. The major strengths of gait include the following [15]: it does not require user interaction; it can be easily measured at a distance, as long as the gait is visible; and it is difficult to disguise or occlude. Also, it is robust to low-resolution images.
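Before continuing with gait, here is a minimal sketch of the appearance-based (eigenface-style) face approach from the preceding subsection: PCA finds basis vectors representing the face space, and a simple classifier matches in that reduced space. The dataset, component count, and classifier choice are illustrative assumptions.

```python
# Minimal appearance-based (eigenface-style) face recognition sketch with PCA.
# The Olivetti faces dataset and all parameter choices are for illustration.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# PCA finds the basis vectors ("eigenfaces") that best represent the face space
pca = PCA(n_components=50, whiten=True, random_state=0).fit(X_train)
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)

print("test accuracy:", clf.score(pca.transform(X_test), y_test))
```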
A gait recognition system involves capturing images of human walking, extracting salient gait features, and recognizing those features, as conceptualized in Figure 6. Typically, there are two approaches to gait recognition: the motion-based approach and the model-based approach [23]. In the motion-based approach, the human gait is considered as a sequence of images, and features are extracted from these images, while the model-based approach extracts the motion of the human body by fitting models to the input images. The model-based approach is scale-invariant and reflects the kinematic characteristics of the walking manner. Several feature extraction techniques for gait exist in the literature [3].

Palm Print Recognition

Like the fingerprint, the palm print has enormous application in criminal, forensic, and commercial settings. A palm print is an image acquired from the palm region of the hand, either online (through a scanner) or offline (through ink and paper). The palm consists of principal lines, wrinkles (secondary lines), and epidermal ridges. As with the fingerprint, the uniqueness of the palm print is attributed to its formation at birth; no two individuals have exactly the same palm print patterns. The palm print is also distinctive and easily captured by low-resolution devices. A palm print recognition system broadly consists of four modules, namely palm print scanning, preprocessing, feature extraction, and matching. A scanner is used to collect palm print images, and the acquired palm print image undergoes segmentation at the preprocessing stage. Most of the preprocessing involves the following steps [16]: (a) binarizing the palm images, (b) extracting the shape of the hand, (c) detecting the salient points, (d) establishing a coordinate system, and (e) extracting the central part. Palm print feature algorithms are categorized into line-based, subspace-based, local statistical, global statistical, and coding-based approaches [18]. Palm print classifiers such as neural networks, hidden Markov models, and correlation filters, together with various measures, including the cosine measure, weighted Euclidean distance, and learned distance, have been used for palm print classification [19]. The major weakness of the palm print is that it changes with time, depending on the type of work a person does over an extended period of time.

Signature Recognition

This is a behavioral biometric that identifies an individual on the basis of their handwritten text (as conceptualized in Figure 7). Signature recognition requires an individual to supply a sample of text, which serves as a basis for measuring their writing. The purpose of the signature recognition process is to identify the writer of a given sample, while the signature verification process is to confirm or reject the sample. Basically, there are two techniques for processing signatures: static and dynamic [10]. The static technique (often referred to as the off-line mode of recognition) requires the individual to supply their signature on paper; it is then digitized through an optical scanner or camera and processed by a software algorithm that recognizes the text by analyzing its shape. Dynamic signature recognition is a biometric modality that uses the anatomic and behavioral characteristics that an individual exhibits when signing his/her name or a document. Some of the signature recognition techniques are dynamic time warping, hidden Markov models, and vector quantization.
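Dynamic time warping, listed above among the signature recognition techniques, aligns two time series of different lengths by finding the cheapest monotone correspondence between their samples. A minimal sketch follows; the pen-trajectory arrays are hypothetical examples.

```python
# Minimal dynamic time warping (DTW) sketch for comparing two signature
# trajectories of different lengths; the sample sequences are hypothetical.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with Euclidean point cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two (x, y) pen trajectories sampled at different rates
sig_ref = np.array([[0, 0], [1, 1], [2, 0], [3, 1]], dtype=float)
sig_probe = np.array([[0, 0], [0.5, 0.5], [1, 1], [2, 0], [3, 1]], dtype=float)
print("DTW distance:", dtw_distance(sig_ref, sig_probe))  # small for a genuine match
```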
Dynamic signature recognition characteristics are complex and unique to the handwriting style of the individual. Its major weakness is large intra-class variability; that is, an individual's own signature may vary from one signing instance to another, which often makes dynamic signature recognition difficult and cumbersome [11].

Hand Geometry Recognition

This is a biometric that identifies persons by the shape of their hands. Hand geometry is very reliable when combined with other forms of identification, such as an ID card or PIN. In large populations, hand geometry is not suitable for one-to-many applications, in which a user is identified from his/her biometric without any other identification. Hand geometry biometric systems utilize features such as finger length, width, thickness, and finger area to perform personal authentication [3]. The device measures these features of an individual's hand while it is guided on a plate, using a camera to capture a silhouette image of the hand. The enrolment process of a hand geometry system typically requires the capture of three sequential images of the hand, which are evaluated and measured to create a template of the user's characteristics. To be verified, the person places his/her hand on the plate, and the system captures an image, which is compared to the template developed upon enrolment. A similarity score is produced and, based on the threshold of the system, the claim is either accepted or rejected. Hand geometry recognition systems have gained immense popularity and public acceptance, as evident from their extensive deployment for applications in access control, time and attendance, and several other verification tasks. Major strengths of hand geometry include simple imaging requirements (features can be extracted from low-resolution hand images), the ability to operate under harsh environmental conditions (it is immune to dirt on the hand and other external factors), and extremely fast verification. One of the weaknesses of the hand geometry characteristic is that it is not highly unique, limiting the application of hand geometry systems to verification tasks only [11].

Voice/Speaker Recognition

Voice recognition relies on how a person speaks. The acoustic patterns of speech are used to differentiate individuals; these patterns consist of both behavioral traits (speech style, voice pitch) and physical traits (shape and size of the throat and mouth) [8]. Speaker recognition is the identification of a person from the characteristics of his/her voice. During enrolment, audio devices such as microphones and telephones are used to capture the voice of the individual, who is asked to repeat a word or phrase. An electrical signal is generated by the microphone and digitized using an analog-to-digital converter (ADC), as shown in Figure 8. Voice recognition can be either speaker-dependent or speaker-independent. A speaker-dependent system is based on knowledge of an individual's voice traits, while a speaker-independent system recognizes the speech, words, or phrases of users without restriction to a particular speaker. There are different techniques for voice recognition. In the text-dependent style, the user is requested to repeat the word or phrase stored earlier during enrolment for verification purposes.
In addition, shared secrets (passwords and PINs) or knowledge-based information can be employed in order to create a multi-factor authentication scenario; in this style, the text during enrolment and verification is usually the same. In the case of a text-independent system, the text during enrolment and verification is always different, and enrolment may happen without the user's knowledge; reference templates are generated for different phonetic sounds of the human voice rather than for samples of certain words [2]. Basically, identification and verification in voice recognition comprise four stages: voice recording, feature extraction, pattern matching, and decision (accept/reject). The voice biometric is reliable, inexpensive, and easy to use, and no special instruction is required. However, some of its limitations include the following: it is susceptible to microphone quality and environmental noise; the voice changes if the person is sick or of old age; there is a high rate of false non-matches, as the technology fails to recognize speakers at a wide distance; and it depends on the emotional condition of the individual [25].

SYNOPSIS OF SOME RESEARCH WORKS ON BIOMETRIC MODELS FOR INDIVIDUALITY INVESTIGATION

A summary of the objectives, methodologies, and limitations of some research works that are based on the models presented in the preceding section is given in this section. The author in [12] proposed an iris biometric system using a hybrid approach. The algorithm developed comprises four major steps: (a) image processing using histogram matching, thresholding, and the Canny edge operator; (b) localization of the pupillary and limbic boundaries using the Circular Hough Transform; (c) iris normalization using Daugman's rubber sheet model; and (d) feature extraction using the Haar wavelet and binary encoding. The algorithm was validated using B-tree matching with the Hamming distance as the matching metric. The algorithm is not susceptible to pupil dilation due to varying illumination, specular reflections, or erratic and inconsistent limbic or pupillary boundaries. The authors in [13] presented a face-iris multimodal biometric identification system. The system used a facial feature extraction technique based on singular spectrum analysis (SSA) modeled by the normal-inverse Gaussian (NIG) distribution, together with statistical features (entropy, energy, and skewness) derived from the wavelet transform. The authors performed the classification process using the Fuzzy K-Nearest Neighbor (FK-NN) classifier. The fusion of the face-iris features was performed using score fusion and decision fusion. The developed system performed optimally and efficiently improved on the performance of unimodal biometrics based on the face or iris alone. However, the system is not effective when using low-resolution images. A human identification approach based on silhouette correlation analysis is proposed in [14]. The author extracted features consisting of the following three dimensions: the horizontal axis (x), the vertical axis (y), and the temporal axis (t). The correlation result between the original silhouette and the new one is used as the raw feature of human gait. The author used the discrete Fourier transform to extract features from the correlation result, followed by a normalization process to minimize the effect of noise. The dimensionality of the features was reduced by applying principal component analysis. The implementation was carried out using the CASIA database. The algorithm produced efficient and better classification results.
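The Hamming distance used as the matching metric in the iris system of [12] simply measures the fraction of disagreeing bits between two binary iris codes. A minimal sketch follows; the code length, noise level, and decision threshold are hypothetical stand-ins, not values from [12].

```python
# Minimal iris-code matching sketch using the normalized Hamming distance.
# Codes and the decision threshold are hypothetical stand-ins for real
# iris codes and a tuned operating point.
import numpy as np

def hamming_distance(code_a, code_b, mask=None):
    """Fraction of disagreeing bits, optionally ignoring masked (noisy) bits."""
    diff = np.bitwise_xor(code_a, code_b)
    if mask is not None:                      # mask marks valid bits with 1
        return diff[mask == 1].mean()
    return diff.mean()

rng = np.random.default_rng(1)
enrolled = rng.integers(0, 2, size=2048, dtype=np.uint8)
probe = enrolled.copy()
flip = rng.random(2048) < 0.08                # ~8% bit noise for a genuine probe
probe[flip] ^= 1

hd = hamming_distance(enrolled, probe)
print(f"HD = {hd:.3f} ->", "match" if hd < 0.32 else "non-match")
```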
However, the system only works with two-image correlation and performs poorly for correlations of three or more images. The authors in [15] proposed a model-based approach for gait recognition using the mathematical theory of geometry and image processing techniques. The images were segmented using the Hough transform and a corner detection technique. The authors fed the segmented images to the Canny edge detection algorithm in order to detect the image edges and to reduce the noise by means of Gaussian filtering. The Hough transform algorithm was then implemented to isolate the extracted gait features. The authors then applied the Harris corner detection technique to detect the corners and generate the feature points. A digital camera was used by the authors to collect the gait data, placed at angles of 90 degrees and 270 degrees in an environment with controlled illumination, and the output was stored in a database. The model developed produced a better rate of recognition than previous methods. Also, it does not require silhouette images. However, the segmentation approach is not robust enough, since it produces poorly segmented output. The model also fails when the gait database is large. A multimodal biometric system for person identification using palm print and iris modalities is proposed in [20]. The model was based on the Minimum Average Correlation Energy (MACE) filter for matching. The outputs of the subsystems (iris and palm print) are combined using data fusion at the matching score level. The experimental results proved the superiority of the multimodal system over the unimodal systems. A model to establish a statistically inferable measure of iris discrimination is proposed in [21]. The individuality model was validated by transforming the multi-class problem into a dichotomy, using a distance measure between two samples of the same class and between samples of two different classes. Both features (distance measures) and classifiers were evaluated. Feature extraction was carried out using simple binary and multilevel 2D wavelet approaches. Scalar distance, feature vector distance, and histogram distance were used as distance measures, while the Bayes decision rule, nearest neighbor, artificial neural networks, and support vector machines served as classifiers. The authors in [22] proposed a multimodal biometric technique based on the palm and fingerprint. The IITD palm print database, comprising 230 right- and left-hand color images, and the UPEK fingerprint database were used for the experimental evaluation. All the images in the two databases were subjected to image enhancement. The algorithm produced a more accurate and faster recognition result when compared with other techniques, and it is more suitable for real-time palm print verification than other models. The authors in [23] presented a human gait recognition system. The data were collected by recording a video of the subject with a camera and then converting the video into frames of still images. A feature extraction technique was applied to obtain the silhouettes, and noise filtering was applied to the extracted silhouettes to obtain better-quality images, which were stored in a database. The silhouettes were then processed with Principal Component Analysis (PCA). A limitation was that the camera produced low-quality images. Table 1 further summarizes some of these works.
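As a flavor of the silhouette-plus-PCA pipeline used in [23], the sketch below averages aligned binary silhouettes into a gait energy image (GEI) and projects it with PCA. The array shapes and the random silhouettes are hypothetical stand-ins for real, preprocessed video frames, and the GEI step is a common simplification rather than the exact feature of [23].

```python
# Minimal gait-feature sketch: average aligned binary silhouettes into a
# gait energy image (GEI), then reduce dimensionality with PCA.
# Random silhouettes stand in for real, preprocessed video frames.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_subjects, frames, h, w = 10, 30, 64, 44

# One gait energy image per subject: mean of its binary silhouette frames
geis = np.stack([
    rng.integers(0, 2, size=(frames, h, w)).mean(axis=0).ravel()
    for _ in range(n_subjects)
])

pca = PCA(n_components=5).fit(geis)          # low-dimensional gait features
features = pca.transform(geis)
print(features.shape)                        # (10, 5): one feature vector per subject

# Recognition can then be a nearest-neighbor search in this feature space.
```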
CONCLUSION

Biometric recognition has become an essential tool for increasing security in all facets of human endeavor owing to its increasing public acceptance and massive accuracy gains, and today many applications make use of biometric technology. This paper has presented some of the existing biometric models for individuality investigation. The motivations, methodologies, strengths, and weaknesses of the physiological and behavioral forms of biometrics have also been presented. These techniques for identifying individuality offer advantages over traditional methods involving ID cards or PINs; hence, such systems have proven to be highly dependable computer-based security systems. The usefulness and selection of a particular biometric system depend on the application area, and this application-dependence governs its adoption across biometric authentication systems.
Use of multimodal dataset in AI for detecting glaucoma based on fundus photographs assessed with OCT: focus group study on high prevalence of myopia

Background: Glaucoma is one of the major causes of blindness; it is estimated that over 110 million people will be affected by glaucoma worldwide by 2040. Research on glaucoma detection using deep learning technology has been increasing, but the diagnosis of glaucoma in a large population with a high incidence of myopia remains a challenge. This study aimed to provide a decision support system for the automatic detection of glaucoma using fundus images, which can be applied for general screening, especially in areas with a high incidence of myopia.

Methods: A total of 1,155 fundus images were acquired from 667 individuals with a mean axial length of 25.60 ± 2.0 mm at the National Taiwan University Hospital, Hsinchu Branch. These images were graded, based on the findings of complete ophthalmology examinations, visual field tests, and optical coherence tomography, into three groups: normal (N, n = 596), pre-perimetric glaucoma (PPG, n = 66), and glaucoma (G, n = 493), and divided into a training-validation set (N: 476, PPG: 55, G: 373) and a test set (N: 120, PPG: 11, G: 120). A multimodal model, with the Xception model for image feature extraction combined with machine learning algorithms [random forest (RF), support vector machine (SVM), dense neural network (DNN), and others], was applied.

Results: The Xception model classified the N, PPG, and G groups with a micro-average area under the receiver operating characteristic curve (AUROC) of 93.9% under tenfold cross-validation. Although the normal and glaucoma sensitivities reached 93.51% and 86.13%, respectively, the PPG sensitivity was only 30.27%. The AUROC increased to 96.4% for the N + PPG versus G grouping. For the multimodal model with the N + PPG versus G grouping, the AUROCs of RF, SVM, and DNN were 99.56%, 99.59%, and 99.10%, respectively; the N versus PPG + G grouping differed by less than 1%. The test set showed AUROCs that were overall 3%–5% lower than the validation results.

Conclusion: The multimodal model had a good AUROC when detecting glaucoma in a population with a high incidence of myopia. The model shows potential for general automatic screening and telemedicine, especially in Asia.

Trial registration: The study was approved by the Institutional Review Board of the National Taiwan University Hospital, Hsinchu Branch (no. NTUHHCB 108-025-E).

Supplementary Information: The online version contains supplementary material available at 10.1186/s12880-022-00933-z.

Introduction

Background

Glaucoma, a neurodegenerative disease, is a significant cause of blindness worldwide. Because of its ambiguous symptoms, early diagnosis is difficult, and by the time signs develop, part of the visual field (VF) is usually already lost. Chua et al. estimated that approximately 50% of patients with glaucoma are undiagnosed [1]. Accurate glaucoma detection is based on a combination of clinical examinations and various tests for structural and functional optic nerve head damage, including fundus photography, visual field testing, and optical coherence tomography (OCT) [2]. However, most of these devices are only available in regional hospitals or medical centers. Fundus examination is a basic tool commonly used for the diagnosis of glaucoma. In addition, some prospective research on taking fundus photographs with only a smartphone has appeared recently [3], which is likely to increase the convenience and prevalence of fundus photography in the future.
However, examining glaucoma through fundus photography requires considerable clinical experience, and the conclusions often differ among experts [4,5]. Artificial intelligence (AI) methods therefore have great potential to address this problem, because recent well-established convolutional neural networks can process big data with increased speed and accuracy. Over the past few years, AI has been widely applied to different aspects of ophthalmology [6-10], of which glaucoma is one of the most popular targets. Researchers have developed several algorithms to differentiate glaucomatous eyes from normal eyes using fundus photography, and high areas under the receiver operating characteristic curve (AUROCs) have been reported in studies such as those by Ting et al. [11-13]. However, these studies did not focus on the increasing prevalence of myopia. Myopia may cause morphological changes in the retina and optic disc; for instance, retinal tessellation, staphyloma, optic disc tilting, and peripapillary atrophy [14,15] may make the diagnosis of glaucoma through the fundus more difficult. Li et al. found that when AI differentiates glaucomatous images from non-glaucomatous ones, pathologies or high myopia are the most common causes of false-negative results [12]. Myopia also plays an important role in false-positive results. The condition is prevalent worldwide: according to a previous study, the global prevalence of myopia will reach 49.8% in 2050, which is more than twice the prevalence reported in 2000 [16]. In Taiwan, from 2010 to 2011, the prevalence of myopia in men between 18 and 24 years of age was 86.1% [17]. Therefore, the high prevalence of myopia has become an obstacle for AI-based glaucoma diagnosis.

Objective

This study aims to provide a tool that can help diagnose glaucoma using color fundus images, which can be applied in areas with a low level of medical resources or areas where many people have myopia.

Methods

This study was conducted at the National Taiwan University Hospital, Hsinchu Branch, Taiwan, in accordance with the Declaration of Helsinki (1964). The study was approved by the Institutional Review Board of the National Taiwan University Hospital, Hsinchu Branch (no. NTUHHCB 108-025-E). Informed consent was obtained from all participants. In order to make the proposed methodology clearer, we list the main steps in our methods as follows:

1. Collecting data and performing precise glaucoma grading with optical examinations, fundus images, visual field, and OCT data
2. Selecting the Xception transfer learning model as the fundus image classifier
3. Training a fundus image regression model to predict the OCT-obtained RNFLT and C/D vertical ratios
4. Training a multimodal model using the predicted RNFLT and C/D vertical ratios (both from the regression model), color fundus images, and the numerical results of the optical examinations
5. Performing error analysis to identify false predictions and improve model training

Subjects and data collection

All data were collected when the participants visited the hospital's general or ophthalmologic clinic from June 2019 to September 2020.
After providing informed consent, the participants' demographic data (for example, age and sex) were collected and the participants underwent a set of ophthalmologic examinations, including a visual acuity test using a Snellen chart, measurement of intraocular pressure (IOP) using a non-contact tonometer NT-530P (Nidek Co., Gamagori, Japan), measurement of refractive error using an ARK-510A (Nidek Co., Gamagori, Japan), measurement of axial length using an AL-SCAN (Nidek Co., Gamagori, Japan), slit-lamp examination, gonioscopy, fundoscopy, and a visual field test using a Humphrey Field Analyzer-840 (HFA-840; Carl Zeiss Meditec, Inc., Dublin, CA, USA). Both 45° optic disc-centered and macula-centered color fundus images with a resolution of 1620 × 1440 were captured using a Zeiss VISUCAM 524 (Carl Zeiss Meditec, Inc., Dublin, CA, USA) in the color and green modes. Both eyes of each participant were evaluated and imaged. All participants underwent OCT angiography (OCTA; Angiovue, Optovue Inc., Fremont, CA, USA) using the split-spectrum amplitude-decorrelation angiography algorithm. All images contained retinal nerve fiber layer (RNFL) thickness, ganglion cell complex (GCC) thickness, and optic nerve head analysis, including the cup/disc ratio, rim area, and disc area. The RNFL thickness was measured in an annulus centered on the optic disc, with an outer diameter of 4 mm and an inner diameter of 2 mm. The GCC scan, comprising the retinal nerve fiber, ganglion cell, and inner plexiform layers, was centered 1 mm temporal to the fovea with a 7 mm × 7 mm scan area. All scans were reviewed manually to ensure correct disc/cup segmentation and adequate quality. The RNFL and GCC thickness calculations were divided into superior and inferior sectors to identify glaucomatous defects compatible with the visual field examination. Subjects between 20 and 80 years of age, with or without pseudophakia, were included. Subjects with cataract were excluded because of blurred color images. The exclusion criteria included: best-corrected visual acuity below 12/20; a history of ocular disease, such as vitreoretinal diseases, ocular trauma, uveitis, non-glaucomatous optic neuropathy, retinopathy, or any other ocular disease that may affect the optic nerve or the visual field results; systemic diseases, such as diabetic retinopathy, hypertensive retinopathy, or stroke with visual field loss; an OCTA scan signal strength index of less than 40; and an unreliable visual field test (fixation loss > 15%, false-positive rate > 20%, or false-negative rate > 10%). A grading system was established using the OCT numerical data extracted from OCTA and the visual field examination. According to the GCC and RNFL thickness, the OCT numerical data classified the macular and optic nerve head fiber layers of each image as normal, borderline, or abnormal. The borderline and normal categories were combined into the normal category to simplify the grading system. All participants were diagnosed as normal (N), pre-perimetric glaucoma (PPG), or glaucoma (G). Specifically, PPG refers to cases where the GCC, the RNFL, or both were damaged, but the visual field was still normal. For the G group, the GCC, the RNFL, or both were damaged and showed compatible visual field changes. A visual field mean defect worse than -2 dB was considered an abnormal visual field. Our study team had previously built a normative OCT and OCTA database for normal subjects with different axial lengths [18].
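The grading rule described above can be summarized as a simple decision procedure. The following Python sketch is our illustrative rendering of that rule, not the authors' code; the function name and the structure-status encoding are hypothetical, while the -2 dB mean-defect threshold is taken from the text.

```python
def grade_eye(gcc_abnormal: bool, rnfl_abnormal: bool, vf_mean_defect_db: float) -> str:
    """Grade one eye as N, PPG, or G following the paper's rule.

    gcc_abnormal / rnfl_abnormal: structural damage flags from the OCTA report
        (borderline results are merged into "normal" before this step).
    vf_mean_defect_db: visual field mean defect; values worse (more negative)
        than -2 dB count as an abnormal visual field.
    The check that VF defects are spatially compatible with the damaged
    superior/inferior sector is omitted here for brevity.
    """
    structural_damage = gcc_abnormal or rnfl_abnormal
    vf_abnormal = vf_mean_defect_db < -2.0
    if not structural_damage:
        return "N"    # no GCC/RNFL damage -> normal
    if not vf_abnormal:
        return "PPG"  # structural damage but intact visual field
    return "G"        # structural damage with compatible VF loss

# Example: damaged RNFL with a preserved visual field is pre-perimetric glaucoma.
assert grade_eye(gcc_abnormal=False, rnfl_abnormal=True, vf_mean_defect_db=-0.5) == "PPG"
```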
All glaucoma patients were followed for more than one year, and their data were confirmed by an experienced glaucoma specialist. The analysis collapsed the three groups into binary groupings, (N + PPG) vs. G and N vs. (PPG + G), to form non-glaucomatous and glaucomatous groups. Note that each image corresponds to a unique person. We sorted the collected data in chronological order, and then took the data from the last two weeks as the test set and the rest as the training set. Preprocessing of images and numerical data The fundus images were automatically cropped by keeping only the pixels within the red circle, as shown in Additional file 1. The images were cropped using the OpenCV package [19]; details are provided in Additional file 1. Most of the numerical data from the optical examinations, OCTA, and patient demographics had missing ratios below 5%; all missing values were imputed with the MICE algorithm [20]. Transfer learning image classification This study applied a transfer learning model with Xception as the base model, using weights pretrained on the ImageNet dataset. The last output layer was replaced with a dense layer for classification. A self-attention model was applied following the method of Guan et al. [21]. Self-attention can help the model identify key regions of fundus images associated with glaucoma. The key regions identified by self-attention are cropped, and both the cropped image and the whole image are sent to two Xception models initialized with pretrained weights. Each Xception model generates a feature map (or output vector) of 1 × 1024. These two feature maps are concatenated to form a vector of 1 × 2048, which is then sent to a fully connected neural network with 5 hidden layers to generate an output for predicting glaucoma. Details of the data processing are shown as a block diagram under the self-attention section in Fig. 2. All the convolution layers and the last classification layer were fine-tuned [22]. The Adam optimizer was used with a learning rate of 0.0001 and a decay of 0.001, with categorical cross-entropy as the loss function. All models were trained for 80 epochs in batches of 30 images per step, with tenfold cross-validation for model evaluation. The validation accuracy was monitored during training, and the model with the best validation accuracy was saved to predict the test data. First, a single transfer learning image classification model was trained with both the disc-centered and macula-centered images to select the best base model from Inception v3, Inception ResNet v2, and Xception [23,24]. After the selection, we trained disc-centered and macula-centered transfer learning image classification models separately and used them in the multimodal model. All models were trained on a computer with Ubuntu (16.04.6), an Intel(R) i7-7740X CPU, two GeForce GTX 1080 Ti 11 GB GPUs, and 62 GiB of system memory.
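A minimal sketch of the two-branch classifier described above, written with tensorflow.keras. This is our reconstruction rather than the authors' code: for simplicity a single Xception backbone is shared between the whole image and the attention crop (the paper uses two separate Xception instances), the 1 × 1024 projection layers and the hidden-layer width are assumptions, and the self-attention cropping step is represented only by its second image input.

```python
from tensorflow import keras
from tensorflow.keras import layers

IMG = (299, 299, 3)  # Xception's default input size (assumed here)

# ImageNet-pretrained backbone with global average pooling; all of its
# convolution layers stay trainable, mirroring the fine-tuning in the paper.
backbone = keras.applications.Xception(include_top=False, weights="imagenet", pooling="avg")

whole = keras.Input(shape=IMG, name="whole_image")
crop = keras.Input(shape=IMG, name="attention_crop")  # produced by the self-attention step

# Project each branch to the 1 x 1024 feature vector stated in the paper.
f_whole = layers.Dense(1024, activation="relu", name="proj_whole")(backbone(whole))
f_crop = layers.Dense(1024, activation="relu", name="proj_crop")(backbone(crop))

x = layers.Concatenate(name="concat_2048")([f_whole, f_crop])
for i in range(5):  # five fully connected hidden layers (width assumed)
    x = layers.Dense(512, activation="relu", name=f"fc_{i}")(x)
out = layers.Dense(3, activation="softmax", name="n_ppg_g")(x)

model = keras.Model([whole, crop], out)
# lr = 1e-4 with a 1e-3 time-based decay, as reported in the paper.
lr = keras.optimizers.schedules.InverseTimeDecay(1e-4, decay_steps=1, decay_rate=1e-3)
model.compile(optimizer=keras.optimizers.Adam(lr),
              loss="categorical_crossentropy", metrics=["accuracy"])
```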
Regression model for predicting the OCTA-acquired C/D v ratio and average RNFL thickness To include essential features such as the cup/disc vertical ratio (C/D v ratio) and the average RNFL thickness, we used a regression model that takes color fundus images as input to predict the OCTA-acquired C/D v ratio and average RNFL thickness, with a method similar to that of Medeiros et al. [25]. The color fundus images were magnified to the optic disc area and manually cropped to increase the model accuracy. The regression model was trained using the transfer learning method with the Xception network. The last output layer was changed to a dense layer with a linear activation function, and a mean-squared-error loss function was used for value prediction. Multimodal model Patients' demographic data, OCT numerical data extracted from OCTA, and color fundus images were collected. The data for each eye consisted of numerical data and two images: a disc-centered and a macula-centered color fundus image. The numerical data, alongside the disc-centered and macula-centered color fundus images, were included in the training input. The numerical data included age, sex, axial length, visual acuity, heart rate, and blood pressure. Glaucoma specialists selected these features, considering that most of the subjects in this study had high myopia. The regression model predicted the C/D v ratio and average RNFL thickness as additional inputs for the multimodal model. After training the models with the disc-centered and macula-centered color fundus images, the feature maps extracted by the second-to-last layer were concatenated with the selected numerical features. Random forest (RF), support vector machine (SVM), Ada-boost (Ada), decision tree (with either the CART or C4.5 algorithm), logistic regression (LogReg), Naïve Bayes (NB), k-nearest neighbors (KNN), and a dense neural network (DNN) were trained on the concatenated inputs for the final prediction. Finally, a webpage was built with the proposed multimodal model for general glaucoma screening and telemedicine applications. An illustration of the process described above is shown in Additional file 2. Statistics All statistics were computed using the Python packages Scikit-learn and NumPy. Receiver operating characteristic (ROC) curves, plotting the true-positive rate (sensitivity) against the false-positive rate (1 − specificity), were used to visualize the tradeoff between sensitivity and specificity, and the area under the ROC curve (AUROC) was used to assess model performance. Accuracy, F-measure, and the confusion matrix were computed at the ROC curve's optimal cutoff point, chosen using Youden's J statistic [26]. All metric equations are listed in Additional file 3. Study design flow and proposed models To design a decision-making model for precise glaucoma diagnosis, both disc-centered and macula-centered images were required for each eye. The dataset in Table 1 was obtained after excluding blurred or ambiguous images and eyes with only a single centered image. Our models predict the probabilities of normal, PPG, or glaucoma as outcomes. These probabilities were regrouped into either (normal + PPG) and glaucoma, or normal and (PPG + glaucoma). The models used the following predictors (features): disc- and macula-centered RGB color fundus images (RGB channels normalized to 0-1 values), C/D v ratio (%), RNFL thickness (μm), age, sex (0 = female, 1 = male), visual acuity (logMAR), axial length (mm), heart rate (bpm), systolic blood pressure (mmHg), and diastolic blood pressure (mmHg); the full feature list is given in Table 1. The correlation matrix of the features was calculated and is shown in Additional file 4.
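A sketch of how such a multimodal feature table can be assembled and fed to the classical learners named above, using scikit-learn. The array shapes, feature order, and random stand-in data are illustrative assumptions; cnn_disc and cnn_mac represent the 1 × 1024 feature vectors taken from the second-to-last layer of the two fine-tuned Xception models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 904  # training-validation eyes (476 N + 55 PPG + 373 G)

# Stand-ins for the CNN feature maps and the tabular features.
cnn_disc = rng.normal(size=(n, 1024))  # disc-centered Xception features
cnn_mac = rng.normal(size=(n, 1024))   # macula-centered Xception features
numeric = rng.normal(size=(n, 9))      # C/D v ratio, RNFL, age, sex, VA, AL, HR, SBP, DBP
y = rng.integers(0, 3, size=n)         # 0 = N, 1 = PPG, 2 = G

X = np.hstack([cnn_disc, cnn_mac, numeric])  # concatenated multimodal input

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
svm = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X, y)
print(rf.predict_proba(X[:1]), svm.predict_proba(X[:1]))
```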
The subjects who participated in this study had mean axial lengths of 25.19 ± 1.78 mm, 25.93 ± 1.84 mm, and 25.67 ± 2.23 mm in the N, PPG, and G groups, respectively, as shown in Table 1. An eye with a spherical equivalent ≤ -6.0 diopters (D) or an axial length (AL) ≥ 26 mm is defined as highly myopic [27]. The number of participants with high myopia (axial length ≥ 26 mm) was 199 out of 596 (33%) in the N group, 38 out of 66 (57%) in the PPG group, and 218 out of 493 (44%) in the G group. After grading and preprocessing the data, the study design flow (Fig. 1) was followed to train the proposed model, as shown in Fig. 2. Transfer learning image classification The results of selecting the best base model for transfer learning from Xception, Inception v3, and Inception ResNet v2 are shown in Fig. 3; there was no significant difference between these three CNN models. Although the overall accuracy of the model reached 87.09%, the PPG group's sensitivity was low, as shown in Fig. 3 and Table 2. The N + PPG, G grouping had a higher AUROC than the N, PPG + G grouping for the validation set. Although the test results showed higher performance for the N, PPG + G grouping, the difference was less than 1%, as shown in Table 2. Fig. 1 The flow chart of the study design. First, dataset collection was done at the National Taiwan University Hospital, Hsinchu Branch. After glaucoma specialists reviewed all the collected images as well as the participants' demographic and OCT-extracted numerical data, each participant's eye was precisely graded as N, PPG, or G. OCT-extracted numerical data were not included in the training process. Data from the last two weeks were kept as a test dataset, and the rest of the data were used for training the models with tenfold cross-validation. The model was built into a webpage for telemedicine, and the labeled color fundus images will be published as open data. N, normal; OCTA, optical coherence tomography angiography; PPG, pre-perimetric glaucoma; G, glaucoma. Regression model for predicting the OCTA-acquired cup/disc vertical ratio and average RNFL thickness The regression model was trained to predict the average RNFL thickness using the color fundus images. In the validation set, Pearson's correlation coefficient was 0.856 and the coefficient of determination (R²) was 73.29%. Moreover, as shown in Fig. 4, the model predicted the C/D v ratio with a high Pearson's correlation coefficient of 0.885 and an R² of 78.13%. The model also predicted well on the test set, with high Pearson correlation coefficients (average RNFL: 0.905; C/D v ratio: 0.926) and R² values (average RNFL: 76.31%; C/D v ratio: 76.65%).
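The agreement figures above can be reproduced from paired predictions with standard tools; a small sketch follows (the arrays are illustrative stand-ins, not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import r2_score

y_true = np.array([95.0, 88.0, 70.5, 102.3, 64.8])  # e.g., OCTA average RNFL (um)
y_pred = np.array([93.2, 90.1, 68.0, 99.8, 67.5])   # regression-model predictions

r, p = pearsonr(y_true, y_pred)  # Pearson's correlation coefficient
r2 = r2_score(y_true, y_pred)    # coefficient of determination
print(f"Pearson r = {r:.3f} (p = {p:.3g}), R^2 = {100 * r2:.2f}%")
```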
Multimodal model After predicting the C/D v ratio and average RNFL thickness using the regression model, the patients' demographic data and images were used to expand the model modalities. For a clear view of the multimodal models' performances, the top three models used in the final prediction were identified; their performances are shown in Figs. 5 and 6 and Additional file 5. In the validation and test sets, the top three models for classifying the N, PPG, and G groups were random forest, SVM, and DNN. These three models achieved high AUROCs in the validation set: random forest (99.7%), SVM (99.4%), and DNN (99.1%). In the test set, the AUROCs of the top three models were also high: SVM (95.1%), DNN (95.0%), and random forest (94.1%), as shown in Fig. 5. The three groups were collapsed into binary groups in the multimodal models with the same rationale mentioned above. After converting the three-group classification to binary-group classification, the model performance improved, and the multimodal models with the top three AUROCs remained the same as in the three-group classification (shown in Fig. 5), namely random forest, SVM, and DNN. In the N, PPG + G grouping, the AUROCs of the random forest, SVM, and DNN models reached 99.56%, 99.59%, and 99.18% in the validation set; in the test set, the AUROCs decreased slightly to 93.77%, 94.42%, and 94.45%, respectively. In contrast, in the N + PPG, G grouping, the AUROCs of the random forest, SVM, and DNN models reached 99.62%, 99.68%, and 99.01% in the validation set and decreased slightly to 93.84%, 93.29%, and 95.38%, respectively, in the test set, as shown in Additional file 5 and Fig. 6. Comparing both grouping methods, the (N + PPG, G) grouping showed the highest AUROCs in both the validation and test sets. The accuracy, precision, sensitivity, and F-measure were computed after selecting the optimal cutoff point with Youden's J statistic. These metrics showed a trend similar to the AUROCs for the top three models in both the test and validation sets, as shown in Additional file 5. Main findings In this study, the multimodal model achieved an AUROC of 99.7% with SVM in the N + PPG, G grouping in the validation set. The multimodal model using DNN maintained an AUROC of 95.4% in the test set, showing promising results for detecting glaucoma in a myopic population. These improved performances were mainly attributable to the precise glaucoma grading of color fundus images with OCT-extracted numerical data and to the increase in model modalities combining numerical data and images. Precise grading of normal, PPG, and glaucoma groups with OCT-extracted numerical data Li et al. trained an algorithm using a dataset of 31,745 fundus images, achieving an AUC of 0.986 [12]. Christopher et al. reported AUCs in the range of 0.79-0.97 by training AI with 17,195 images [13]. With a total of 3,132 fundus images, Shibata et al. achieved an AUC of 0.965 [28]. Compared to previous studies, several new strategies were adopted to create a new perspective in our study. First, the exclusion criteria were stricter, to avoid lower-quality images and erroneous estimations. This study excluded cataract patients because cataract undermines the quality of color fundus images and contributes to the underestimation of retinal layer thickness [29]. Second, our grading system was more objective. In previous studies, the ground truth for glaucoma was usually obtained by human adjudication of color fundus photography. Subjective evaluation results in low reproducibility, and even experienced ophthalmologists cannot provide a fair grading standard [4,30,31]. Therefore, a precise and objective grading system was constructed using OCTA to collect patients' structural data, with reference to other clinical examination histories, and subjects were divided into three groups: normal, PPG, and glaucoma. This classification system was the first to use OCT numerical data extracted from OCTA together with the visual field as grading standards applied to AI trained with color fundus photography. Third, the dataset reflected the worldwide trend of increasing myopia: 455 of 1,155 participants (39%) had high myopia with an axial length ≥ 26 mm. The results in Additional file 6 show that, after separating myopic from non-myopic cases in the test dataset, both the (N, PPG + G) and (N + PPG, G) groupings showed 2-5% lower AUROCs in the myopia group than in the non-myopia group. Although high myopia leads to relatively lower AUC values, as highly myopic cases are difficult to differentiate using color fundus images, patients with high myopia were still included, in preparation for the era of high myopia prevalence.
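The optimal-cutoff selection with Youden's J statistic used for the metrics above is a one-liner on top of the ROC curve; a sketch with scikit-learn (the labels and scores are illustrative, not the study's outputs):

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])  # 1 = glaucomatous group
y_score = np.array([0.1, 0.4, 0.8, 0.35, 0.2, 0.9, 0.7, 0.5, 0.6, 0.3])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr              # Youden's J = sensitivity + specificity - 1
best = np.argmax(j)        # index of the optimal cutoff
print(f"optimal cutoff = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```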
The benefit of using a multimodal model and a different approach for handling multimodality problems The Xception model was selected as the transfer learning base model, although it showed performance similar to Inception v3 and Inception ResNet v2, mainly because it is the latest modification in the Inception model series [32][33][34]. Taking advantage of the Xception model as a feature extractor, we proposed a multimodal model based on the following observations. First, increasing the number of modalities often enhances a model's performance, as has been widely reported in the literature [35,36]; however, few studies have increased the multimodality relevant to glaucoma diagnosis in machine learning. One example used four different OCT images together with color fundus images for glaucoma diagnosis through machine learning [37]. Second, combining modalities can increase model accuracy when dealing with heterogeneous clinical data. Third, experts view multiple sources of information from various medical reports before making a diagnosis of glaucoma. The goal of this study was to make the model accessible in areas with limited medical resources. Therefore, OCT-extracted numerical data were not included as direct inputs, because the machines needed to capture such data are expensive. To compensate for the lack of OCT-extracted numerical inputs in the multimodal model, a regression model was trained to predict the C/D v ratio and average RNFL thickness (which could otherwise be derived from the acquired OCT data) using color fundus images. The rationale for collapsing the three groups into binary groups The models in this study predicted the probabilities of N, PPG, and G from each participant's data. The N + PPG, G grouping has commonly been applied for precise G prediction in previous studies. In contrast, the N, PPG + G grouping tends to overestimate the G group. Whether PPG patients need treatment is still under debate. Studies of PPG eyes reported mean defect progressions of the VF of -0.09 ± 0.25 [38], -0.17 ± 0.72 [39], and -0.39 ± 0.64 dB/year [40], respectively, slightly more severe than the median mean deviation rate of the population, -0.05 dB/year [41]. In conclusion, determining which binarization is preferable requires further research. Differences between the test and validation sets The multimodal model showed a decrease of almost 5% in AUROC for the test set compared with the training and validation sets. This decrease may be due to the following factors. First, the test set was collected in the last two weeks, while tenfold cross-validation was performed on the first eight weeks of data; this difference in time span is likely to degrade test-set performance. Second, the change in the ratio of PPG cases to total images in the test set may also degrade performance. Although the test set did not perform as well as the validation set, it still achieved an AUROC of 95%, which is a good result compared with other studies involving a large population with high myopia. As shown in Additional file 7, there are no significant differences between the training and test sets in terms of age, axial length, HR, SBP, DBP, and visual acuity.
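Covariate comparisons of this kind are typically done with two-sample t-tests; the paper does not state the exact test used for Additional file 7, so the following is only a sketch of the idea (with synthetic values):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
train_age = rng.normal(55, 12, size=900)  # stand-in for the training-set ages
test_age = rng.normal(55, 12, size=250)   # stand-in for the test-set ages

t, p = ttest_ind(train_age, test_age, equal_var=False)  # Welch's t-test
print(f"age: t = {t:.2f}, p = {p:.3f}")  # p > 0.05 suggests no significant difference
```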
Figure caption: There were decreases in the AUROCs for the test dataset with both grouping methods, but the best AUROC of the three models still exceeded 90%. RF, random forest; Ada, adaptive boosting; SVM, support vector machine; LogReg, logistic regression; NB, Naïve Bayes; KNN, k-nearest neighbor; CART, classification and regression tree; C4.5, C4.5 decision tree; DNN, dense neural network; AUROC, area under the receiver operating characteristic curve. Error analysis: false-positive and false-negative cases The figure in Additional file 8 shows false-positive and false-negative cases produced by our model. Although our model achieved a high AUROC on a dataset with a highly myopic population, high-myopia cases were still the majority of the false-positive cases, as shown in Additional file 8. On the other hand, the false-negative cases show that our model tends to predict incorrectly for images with smaller disc and cup areas than normal color fundus images; such small-cup-and-disc cases can have misleading cup-to-disc ratios and cause the model to produce false negatives [42]. Comparison with currently used methods and public dataset testing A table comparing our research with existing studies is shown in Additional file 9. Most of the listed studies trained convolutional neural networks with a single modality (color fundus images). The study by Guangzhou An et al. used a multimodal/ensemble learning model with OCTA and color fundus data and obtained results similar to our model [35]. Although some studies showed higher test results than ours, this might be caused by the high prevalence of myopia in our study or by the limitations listed below. To validate the model performance on external datasets, two Kaggle datasets were tested and evaluated [43,44]. These datasets had different modalities from ours; therefore, a new preprocessing method and a disc-centered model were used to compare the results. The methods and results are shown in Additional file 10. Additional file 10 shows that after retraining the disc-centered model with our training dataset plus the Kaggle training dataset (the adapted model), the tenfold cross-validation AUROC decreased by 3%-4% to 90.52%, compared with our original model's AUROC of 95.91% shown in Table 2. After testing the adapted model on the Kaggle test set, the AUROC dropped further to 68.88%, as shown in Additional file 11. The published codes on the Kaggle website showed the same level of results as our model, with approximately 60%-70% accuracy [45,46]. To validate the code mentioned above, a tenfold cross-validation model was trained with only the Kaggle training dataset (model 2), as shown in Additional file 11. Our test set performed better than the Kaggle test set, with AUROCs of 86.61% and 70.47%, respectively, as shown in Additional file 11. The adaptation model mentioned above was also applied to a second Kaggle dataset [44]. The second dataset had a large population but consisted of different types of ocular diseases. The color fundus images labeled as glaucoma (n = 203) and randomly selected normal fundus images (n = 300) were used. Although the second Kaggle dataset reached a higher AUROC of 83.01%, the first dataset still performed the worst. We suspected that the low performance on the first Kaggle dataset was mainly due to dataset mismatch, for several hypothesized reasons. First, the color fundus images in these two datasets might have been collected from different populations.
Second, the fundus images of the false-positive cases included several types of images that our dataset had excluded, such as blurred images that might have been caused by cataracts or other systemic disorders, as shown in Additional file 11. Third, the class-imbalance ratios of the two datasets were different: the Kaggle dataset contained more normal images, and data imbalance in machine learning is still an obstacle that is difficult to overcome. For these reasons, it is important that users strictly follow the guidelines listed on the webpage when uploading their data, in order to obtain an optimal prediction. Limitations One limitation of this study is its relatively small database. However, the database contains 1,150 color fundus photographs, complete ophthalmological data (for example, OCTA, visual field, IOP, and axial length), and an accurate grading system. If more data were collected, the algorithm might achieve a higher AUROC. Undoubtedly, the major limitation of this study is that some potential confounders, such as age and axial length, were not distributed equally among the groups; young adults showed a higher prevalence of myopia. We adopted several methods to control for confounding effects, such as excluding cataract and age-related macular disease, and a questionnaire about the participants' health condition was completed beforehand. A better sampling method is difficult to achieve but is planned for our future work. With the proposed multimodal model, large-scale screening and telemedicine can be applied in rural areas. Moreover, a webpage was built to provide a telemedicine service using the multimodal model [47]. We would like to release the labeled color fundus data for interested researchers and academic studies. An example of a labeled image is presented in Additional file 12. The model and data will be free to use and easily accessible globally.
2022-11-26T14:44:46.265Z
2022-11-24T00:00:00.000
{ "year": 2022, "sha1": "e35cc486452c3f9d149818d94139eded9d4e8b9d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "e35cc486452c3f9d149818d94139eded9d4e8b9d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234014784
pes2o/s2orc
v3-fos-license
Centrifugal convection in a two-layer system of reacting miscible fluids The authors study the effect of uniform rotation on a system of two reacting miscible liquids placed in a cylindrical Hele-Shaw cell. The cell rotates with a constant velocity around its axis of symmetry, resulting in a radially directed inertial field. The initial configuration of the system is statically stable and consists of two concentric layers of aqueous solutions of acid and base, which are spatially separated. When the liquids are brought into contact, a neutralization reaction begins, which is accompanied by the release of salt. In this work, we obtain a system of governing equations and present the results of numerical simulation. We found that reaction-diffusion processes lead to the formation of a non-monotonic density profile with a potential well. If the rotation rate gradually increases, then a cellular convection pattern can develop in the potential well. We found that with further growth of the control parameter, the periodicity of the pattern is violated due to the influence of another convective instability, which develops independently in the domain close to the axis of rotation. The action of the inertial field results in the ejection of some convective vortices from the potential well. Introduction Because the world of chemical reactions is so rich, researchers dealing with chemo-hydrodynamics focus on the effects of model reactions with relatively simple but nonlinear kinetics. The neutralization reaction is ideal for this role [1]. The most popular initial configuration of a convective system, which can be realized relatively easily in an experiment, includes two spatially separated reactant solutions. If the liquids are miscible (for example, if both solutions are aqueous), then immediately after they are brought into contact, a narrow transition zone with a reaction front forms between them; the faster the reaction rate, the thinner this zone. The results of experiments by different authors [2][3][4] have shown that in a gravity field the reaction proceeds frontally and is accompanied by the onset of convective motion caused by the difference in the diffusion rates of the reacting components. The emerging convective state is a disordered finger-like structure spreading on both sides of the transition zone. In this case, the mass transfer still occurs under the control of diffusion, which leads to a slow reaction rate. In the paper [5], the authors attempted to classify all possible types of buoyancy-driven instabilities arising in two-layer miscible systems, based on the asymptotic behavior of the system at long evolution times. The classification rests on the assumption that the processes occurring above and below the reaction front are reliably separated by an interlayer of liberated salt, so that their features can be considered separately. The main conclusion of this classification is that a neutralization reaction does not give rise to new types of instabilities but rather changes the features of previously well-known instability mechanisms: double-diffusive (DD) instability, diffusive-layer convection (DLC), and Rayleigh-Taylor (RT) instability. However, the classification given in [5], as was soon shown in [6], is not complete, since there exists a reaction regime controlled by intense convection in the form of a shock wave with a front oriented perpendicular to the force of gravity.
This chemoconvection mode occurs when the density of the upper layer is approximately equal to the density of the lower layer. It is important to note that this effect has been confirmed experimentally for a homologous series of reactants [6]. Besides, the diffusion coefficients of the reactants dissolved in water were shown to depend on the concentration of the species, and this can lead to a periodic system of chemoconvective cells that resembles Rayleigh-Benard convection in its regularity. In this work, we study reaction-diffusion-convection processes developing not under a constant but under a spatially variable inertial field, which is created by centrifuging the two-layer reacting system around its axis of symmetry. In addition to spatial inhomogeneity, the centrifugal field can be tuned by changing the rotation frequency, which also gives the system new degrees of freedom. Mathematical formulation Let us consider a two-layer system of miscible reacting liquids placed in a cylindrical Hele-Shaw (HS) cell rotating at a constant angular velocity Ω = Ω0γ, where Ω0 is the absolute value of the angular velocity and γ is a unit vector directed along the rotation axis (Fig. 1a). Let us denote the radius of the HS cell as R and its gap width as h. The necessary condition for the Hele-Shaw approximation is R >> h; the sufficient condition requires that the convective structures that may arise during pattern formation be much larger than the gap of the HS cell. Describing the system in a coordinate frame rotating together with the cell leads to the appearance of the Coriolis and centrifugal forces. In what follows, we neglect the influence of static gravity. Thus, the inertial field acts on a fluid element along the layer, which makes it possible to assume that any fluid flow is quasi-two-dimensional. In previous works, we reported that the diffusion coefficients of the substances depend on their concentrations. For definiteness, let us consider a specific pair of reactants for which we previously developed the diffusion model tested in [6]. Let A denote the concentration of an aqueous solution of nitric acid HNO3, and let B stand for the concentration of sodium hydroxide NaOH. The initial configuration of the system is shown in Fig. 1b. The contact of the two solutions triggers a neutralization reaction producing water and sodium nitrate NaNO3. In a simplified form, the kinetics of this reaction can be written as A + B → S, where S denotes the salt and α is the reaction rate constant, while the water production and heat release are neglected. One can notice that the wide sidewalls of the HS cell are usually made of glass, which transmits significant heat because the thermal conductivity coefficients of water and glass are nearly the same. In comparison with the concentration effects, the thermal effects can be controlled to a greater extent during the experiment: thermally insulated walls enhance the role of heat, while perfectly conductive walls make the heat effect negligible. We rewrite the problem in dimensionless form using the following measurement units: length, h; time, h²/DA0; velocity, DA0/h; pressure, ρνDA0/h²; and concentration, Alim, where DA0 stands for the tabulated value of the diffusion coefficient of the fastest of the three substances (nitric acid), Alim is the maximum acid concentration up to which the approximation for the concentration-dependent diffusion (CDD) effect works well, and ρ and ν are the density and kinematic viscosity of the fluid, respectively.
Then we obtain a nonlinear system of dimensionless convection-reaction-diffusion equations (2), written in a two-field formulation in terms of the stream function Ψ and the vorticity Φ. Several dimensionless similarity criteria appear in the system of equations (2). The Schmidt number Sc = ν/DA0 characterizes the ratio of the characteristic diffusion time of the solute (acid) to the diffusion time of momentum in the solvent (water); for the set of substances under consideration, Sc = 317. The Damköhler number Da = h²αAlim/DA0 is the ratio of the characteristic diffusion time of the fastest reactant to the characteristic reaction time. The neutralization reaction is considered fast, so that the reactants do not have time to penetrate deep into the solution of the paired species; therefore, the reaction between two initially separated solutions proceeds in a narrow zone called the reaction front. In the paper [7], the authors estimated the value of the kinetic constant of the neutralization reaction; the resulting range of the Damköhler number was from 10² to 10⁵. In this work, we take the value Da = 10³, which falls within the range defined in [7]. The set of concentration Rayleigh numbers appearing in (2), defined in (3), determines the buoyancy effects in solutions of the corresponding substances under the centrifugal force field. The CDD effect is key to this work. We assume for simplicity that each diffusion coefficient depends only on the concentration of its own substance and that the data fall, within the experimentally defined range, on a straight line f(x) = a + bx, where x is the concentration and a and b are constants; these linear laws are written in dimensionless form in (4). Finally, we formulate the boundary conditions (5) and the initial conditions (6). Results of numerical simulation As we have shown earlier in [6], the control parameter of the problem is the ratio of the initial concentrations of the solutions, γ = γB/γA, defined in (7). The initial configuration of the system remains statically stable down to the lowest value γ = RA/RB ≈ 0.83, which corresponds to equal densities of the central and peripheral layers (the isopycnic line). We demonstrated that there exists a bifurcation point γ* for γ > RA/RB that separates the onset of two fundamentally different regimes of reaction-diffusion-convection processes. In the range RA/RB < γ < γ*, a traveling shock-like wave mode is realized, which proceeds under the control of convection. In the case γ > γ*, the transfer processes are controlled by diffusion; convection can still take place, but its intensity is determined by the diffusion processes. In this paper, we restrict ourselves to the effect of centrifugal action in the case γ > γ*. For definiteness, let us fix the initial concentrations at γA = 0.667 and γB = 0.667 (γ = 1) and trace how the type of instability changes with increasing rotation frequency. The Rayleigh number RA unambiguously determines the intensity of rotation in the system, but one should keep in mind that the centrifugal force increases with distance from the axis of rotation; therefore, the inertial effect differs at different points of the HS cell. Thus, the Rayleigh numbers defined by (3) characterize the centrifugal force at the edge of the cuvette. In what follows, we present preliminary results of the numerical simulation of the nonlinear problem (2), (4)-(6) obtained for different values of RA.
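Before turning to the full simulations, the reaction-diffusion mechanism behind the potential well can be illustrated with a deliberately simplified sketch. The following Python script is not the authors' finite-difference scheme: it solves a one-dimensional, convection-free A + B → S problem with an explicit Euler scheme, using assumed diffusivities, a moderated Damköhler number, and illustrative buoyancy weights, merely to show how salt accumulation near the front produces a non-monotonic density profile.

```python
import numpy as np

# One-dimensional, convection-free sketch of A + B -> S along the radius.
nx, length = 400, 40.0         # grid points, domain size (in gap widths)
dx = length / nx
dt = 0.2 * dx**2               # stable for the explicit scheme used below
Da = 50.0                      # moderated Damkohler number (illustrative)

D_A, D_B, D_S = 1.0, 0.8, 0.5  # assumed diffusivities: acid fastest, salt slowest
R_A, R_B, R_S = 1.0, 1.2, 2.5  # illustrative buoyancy weights for A, B, S

x = np.linspace(0.0, length, nx)
A = np.where(x < length / 2, 1.0, 0.0)   # inner layer: acid
B = np.where(x >= length / 2, 1.0, 0.0)  # outer layer: base
S = np.zeros(nx)

def lap(c):
    """1D Laplacian with no-flux (Neumann) boundaries."""
    out = np.empty_like(c)
    out[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    out[0] = 2.0 * (c[1] - c[0]) / dx**2
    out[-1] = 2.0 * (c[-2] - c[-1]) / dx**2
    return out

for _ in range(20000):
    r = Da * A * B                # reaction rate of A + B -> S
    A += dt * (D_A * lap(A) - r)
    B += dt * (D_B * lap(B) - r)
    S += dt * (D_S * lap(S) + r)

rho = R_A * A + R_B * B + R_S * S  # density addition to the solvent
# With these weights the accumulated salt locally raises the density near the
# front, producing the barrier-and-well structure discussed in the text.
slope_sign_changes = np.count_nonzero(np.diff(np.sign(np.diff(rho))))
print(f"peak density {rho.max():.3f} at x = {x[rho.argmax()]:.1f}; "
      f"{slope_sign_changes} slope sign changes -> non-monotonic profile")
```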
The problem has been solved by the finite-difference method; the details of the computational scheme are given in [8]. It is convenient to present the numerical results not for the individual concentration fields but for the total density, i.e., the addition to the density of the solvent produced by all three dissolved species. In contrast to a static gravity field, in this problem we can change the intensity of the inertial field. When the rotation frequency is low, the HS cell is effectively in microgravity, and the inertial force cannot give rise to chemoconvective motion of the fluid. Fig. 2 shows the frames of the time evolution obtained for RA = 1.6·10³, which corresponds to an overload of 0.02g at the edge of the HS disk. One can see from the figure that there exists a reaction-diffusion base state, which gives rise to two statically stable potential wells in the density field. The potential wells are concentric: one of them is closer to the axis of rotation and wider, and the other is located farther away and narrower. The wells occur when the emerging component (salt) starts to accumulate near the reaction front, creating a potential barrier near the reaction zone. The numerical simulation of the time evolution of the base state shows that the shape and depth of the wells remain practically unchanged with time, while their width slowly increases under the influence of diffusion (Fig. 2a-c). The large value of the kinetic reaction constant guarantees almost instantaneous formation of a potential well after the solutions are brought into contact (Fig. 2a). In contrast, the onset and growth of chemoconvective instability require much more time. With an increase in the rotation frequency, one can observe the appearance of cellular convection in the potential well located farther from the rotation axis, as well as a periodic system of plumes floating up toward the center of the cell. Fig. 3 shows the sequential development of both instabilities at RA = 2.4·10⁴ (about 0.3g). The figure shows that a cellular structure appears at about t = 0.6 and acquires a mature form by the time t = 0.8. There are 52 convection cells on a circle of length R/√2; therefore, the structure wavelength is about lCDD = 1.7 (the wavenumber is kCDD = 3.7). The characteristic wavelength of the plumes is lDLC = 6.4 (kDLC = 0.9). The DLC-convection plumes move toward the center of the HS cell due to the low density of the reaction zone. The intensity of the fluid flow in the vicinity of the rotation axis decreases significantly owing to the weakening of the inertial field. The large-scale eddies inherent to this instability define radial directions for the intense injection of fresh acid toward the periphery of the HS cell, which leads to the formation of a quasiperiodic structure of chemoconvective cells (Fig. 3b). Earlier, we studied this effect in detail for the case of a constant gravity field and an infinitely extended layer. In the present problem, the instabilities grow near the reaction front, which has the shape of a circle of finite length; therefore, synchronization occurs between the structures.
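As a quick consistency check of the wavelengths and wavenumbers quoted above (assuming the standard relation k = 2π/λ):

\[
k_{\mathrm{CDD}} = \frac{2\pi}{l_{\mathrm{CDD}}} = \frac{2\pi}{1.7} \approx 3.7,
\qquad
k_{\mathrm{DLC}} = \frac{2\pi}{l_{\mathrm{DLC}}} = \frac{2\pi}{6.4} \approx 0.98 \approx 0.9,
\]

in agreement with the reported values up to rounding.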
Figure 4 illustrates the case of an inertial field that exceeds the static gravity field; it shows frames of the time evolution of the density field at RA = 1.3·10⁵ (approximately 1.3g). Here, we observe irregular convective motion of the liquid in the potential wells almost from the very beginning of the evolution. It is interesting to note that some convective cells leave the potential well under the action of the centrifugal force, since fluctuations of the density field can exceed the height of the potential barrier farthest from the axis of rotation. These droplets of high density move in the radial direction and may even reach the edge of the disk (Fig. 4c). Notice that a high level of mixing in the central area is achieved already at t = 1. The time evolution of the absolute value of the stream function maximum is shown in Figure 5 for each of the previously considered values of the Rayleigh number. The sharp increase in the stream function at early times corresponds to the development of the instability within the potential well; this process continues until the sequence of cells is finally formed. Convection evidently becomes more intense with increasing rotation frequency. During rotation, the redistribution of density under the action of the centrifugal force leads to collision (competition) of vortices in the center of the HS cell, which explains the non-stationary dynamics of the stream function maximum shown in Figure 5. Conclusion In this work, we present a theoretical study of the effect of uniform rotation on the development of chemoconvective instabilities arising in two-layer systems of miscible reacting liquids. We show that at low rotation rates (if the inertial force measured at the edge of the HS cell is less than 0.1g), convection does not develop at all. For higher rotation speeds, we observe two different types of chemoconvection, which are excited in two potential wells separated by a potential barrier. An increase in the rotation speed beyond 1g leads to density fluctuations that result in chaotic fluid motion inside the potential wells. As a result, one can observe the phenomenon of the radial ejection of some chemoconvective cells outside the potential well.
2021-05-10T00:03:33.541Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "9e37bd30dc936b7e53fa906f54853f980c0bcd33", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1809/1/012017", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f4ce8e7cce71e0465ec076a1b0c265e66b183e2c", "s2fieldsofstudy": [ "Chemistry", "Engineering" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
78564840
pes2o/s2orc
v3-fos-license
Evaluation of Physical Activity Intensities and Energy Expenditure in Overweight and Obese Adults Purpose: This study aims to compare total energy expenditure (TEE) estimations made by Actiheart® and Armband®, as well as by MET values, with TEE measured by indirect calorimetry in an overweight population. Methods: Thirteen volunteers were equipped with Actiheart® and Armband® devices and wore a Fitmate® facemask during a controlled scenario of daily-living activities to evaluate TEE. TEE errors were calculated as the ratio of the differences between the Actiheart®, Armband®, and MET estimations and the Fitmate® measurements. The time spent in sedentary, light-, moderate- and vigorous-intensity activities was estimated and compared across the devices. Results: The three mean absolute values of the TEE errors were significantly different from zero and different among themselves. The absolute values of the errors differed between Armband® and Actiheart®, but not between Armband® and the MET values or between Actiheart® and the MET values. Armband® was the most accurate device for estimating TEE during the activity schedule in this overweight population sample. The distributions of the differences varied less around the means, suggesting a smaller inter-individual variability in TEE estimated using Armband® than with Actiheart® and the MET values. For the time spent in each category of activity, Actiheart® and Fitmate® provided results that were significantly different from the recorded scenario, with differences ranging from 5 to 18%. In contrast, there was no significant difference between the time estimated by Armband® and the scenario. Conclusions: Our results showed that Armband® was more effective than Actiheart® at the individual level for estimating TEE and daily light-intensity activities in overweight or obese people. Introduction The Western lifestyle, characterized by a lack of physical activity and a diet rich in fat and refined sugars, is associated with various non-communicable chronic health diseases. The prevalence of obesity worldwide is steadily increasing. In 2014, 39% of adults worldwide were overweight and 13% were obese [1]. Physical inactivity, sedentary behaviors and an excessively rich diet are responsible for a chronic imbalance between energy intake and expenditure, favoring the development of obesity and its co-morbidities. Sedentary behaviors are defined as "any waking behavior characterized by an energy expenditure lower than 1.5 METs", and they are associated with the development of several chronic diseases and with premature mortality in adults [2]. Sedentary time, light-, and moderate- to vigorous-intensity activities represent about 57%, 39% and 4% of the awake period in the general population [3]. However, among the sedentary behaviors, sitting time was 1 to 2.7 hours/day longer in obese than in normal-weight population samples [4]. Obese people self-reported more time spent watching television or using computers for leisure compared to normal-weight and overweight groups [5]. Recent intervention studies suggest that replacing sitting with standing may result in rapid and positive changes in health markers [6]. Light-intensity activities, such as standing or slow walking, could therefore play an essential role in fighting obesity.
Self-monitoring is a key point in long-term behavior change, especially for physical activity. Thus, simple, valid and non-invasive tools are necessary to assess activity intensities and durations, in order to make people aware of their sedentary behavior. Knowledge of the total and resting energy expenditure allows the physical activity level to be assessed. The validity of the energy expenditure estimations made by such devices is therefore a crucial issue; it has to be investigated and established against a reference method. Indirect calorimetry (IC) is the gold standard and consists in measuring gas exchange: the volume of oxygen (VO2) consumed and of carbon dioxide (VCO2) produced. On the basis of these variables, it is possible to deduce TEE using Weir's equation [7]. Some portable devices, such as Fitmate® (Fitmate Pro, Cosmed, Rome, Italy), measure only VO2 [8]. Fitmate® was validated in a population with a BMI ranging from 18.3 to 32.5 kg.m⁻² that performed activities such as walking on a treadmill at different speeds and slopes [9]. However, IC techniques are very expensive and require specific equipment and qualified staff. Therefore, TEE is sometimes evaluated on the basis of the Metabolic Equivalent Task (MET) values associated with each activity [10]. The MET value represents the ratio of the physical activity energy expenditure to the resting energy expenditure. Classically, activities are organized into five categories: sedentary (≤ 1.5 METs); light (1.6-2.9 METs); moderate (3-5.9 METs); vigorous (6-8.9 METs); and high-intensity (≥ 9 METs) [11]. One MET is equal to approximately 1 kcal.kg⁻¹.h⁻¹ or 3.5 ml.kg⁻¹.min⁻¹ in a single 40-year-old man with a body mass of 70 kg [12]. Several authors recently reported that the standard MET value of 3.5 ml.kg⁻¹.min⁻¹ was significantly higher than the values measured in healthy men (3.21 ml.kg⁻¹.min⁻¹) [12] and in overweight to obese men and women (2.62 ml.kg⁻¹.min⁻¹ and 2.47 ml.kg⁻¹.min⁻¹, respectively) [13]. It has been recommended that a correction factor be used to adjust MET levels on the basis of an estimate of RMR that accounts for age, height, weight and gender [10]. These MET values therefore make it possible to estimate the activity energy cost and to classify activities according to their intensity. However, this method of TEE estimation using MET values is burdensome because it requires the detailed recording of the successive activities.
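As an illustration of these two estimation routes, the following sketch computes energy expenditure from gas exchange with the abbreviated Weir equation and classifies an activity by its MET value using the thresholds above. The Weir coefficients (3.941 and 1.106 kcal per litre of O2 and CO2) are the commonly used abbreviated form, not values taken from this paper.

```python
def weir_kcal_per_min(vo2_l_min: float, vco2_l_min: float) -> float:
    """Abbreviated Weir equation: energy expenditure from gas exchange."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

def intensity_category(met: float) -> str:
    """Classify an activity using the five classical MET categories."""
    if met <= 1.5:
        return "sedentary"
    if met < 3.0:
        return "light"
    if met < 6.0:
        return "moderate"
    if met < 9.0:
        return "vigorous"
    return "high-intensity"

# Example: VO2 = 1.05 L/min, VCO2 = 0.90 L/min during brisk walking, 70 kg subject.
ee = weir_kcal_per_min(1.05, 0.90)  # ~5.1 kcal/min
met = (1.05 * 1000 / 70) / 3.5      # VO2 in ml/kg/min divided by the standard 3.5
print(f"{ee:.2f} kcal/min, {met:.1f} METs -> {intensity_category(met)}")
```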
As a result, portable tools that are more economical than IC and easier to use than activity recording have been developed, especially for routine measurements. These devices use accelerometry and heart rate (Actiheart®), body temperature, impedancemetry and heat flux (Armband®), or only accelerometry (RT3®, Actigraph®). The reliability and validity of Actiheart® were studied in controlled conditions (CC), including periods of rest, walking and running, in normal-weight and overweight people (BMI: 20-30 kg.m⁻²) [14]. Measurements of movement and heart rate were accurate and reliable, allowing Actiheart® to potentially estimate TEE. It was also validated in the general population when simultaneous measurements were made by indirect calorimetry and Actiheart® during physical activity on a treadmill [11]. No significant difference was noted between the two measurements, except for the step at 9.6 km.h⁻¹, where Actiheart® underestimated TEE. Armband® has frequently been used in controlled (CC) and free-living conditions (FLC). REE and TEE evaluated by Armband® were validated for the general population using IC [15]. However, it seems that the accuracy of Armband® varies with the intensity of the activity: in a heterogeneous population, Armband® overestimated the TEE of light- and moderate-intensity activities and underestimated the TEE of vigorous-intensity activities [16]. Recently, the TEE estimated in CC by Actiheart® and Armband® was compared with the TEE measured by IC in a normal-weight adult population [17]. This study showed that Armband® was more effective for evaluating the TEE of light- and moderate-intensity activities and that Actiheart® was more accurate for evaluating the TEE of vigorous-intensity activities performed in CC. Most device validation studies have been performed either on normal-weight (18.5-25 kg.m⁻²) [15] or general (18.5-40 kg.m⁻²) population samples [18], and few studies have specifically involved overweight people [19]. Thus, the aim of our work was to study the validity of the TEE estimated in overweight and obese participants by the Actiheart® and Armband® devices, and on the basis of MET values, during a controlled activity scenario, against the indirect calorimetry measurements taken by Fitmate®. Method Participants For this study, 13 adults aged between 18 and 60 years old with a BMI ranging from 28 to 42 kg.m⁻² were recruited through the sports medicine department of the G.
Montpied University Hospital (Clermont-Ferrand, France) and through advertisements in a local newspaper. First, a medical visit allowed us to verify the selection criteria, i.e., age, BMI, absence of cardiovascular or locomotor diseases, and no recent major surgery. During this visit, the volunteers were weighed, and their height and neck, hip and waist circumferences were measured by the physician. They also signed an informed consent form and underwent a resting electrocardiogram validated by a cardiologist. A maximal exercise test was then performed under the supervision of a cardiologist. All the participants performed a progressive cycling test on an electromagnetically braked cycle ergometer (Ergoline, Bitz, Germany) until volitional exhaustion to determine the maximal values of ventilation (VEmax), oxygen uptake (VO2max), carbon dioxide output and respiratory exchange ratio (RERmax) by the direct method (Oxycon Pro, JAEGER, Germany). VO2 and VCO2 were measured breath-by-breath through a mask connected to O2 and CO2 analysers (Oxycon Pro-Delta, Jaeger, Hoechberg, Germany). Calibration of the gas analysers was performed with commercial gases of known concentration. Ventilatory parameters were averaged every 30 s. The electrocardiogram and heart rate (HR) were measured continuously using 10 precordial electrodes. The first stage of the test lasted 3 min, and the initial power output was 35 W. Power output was then increased by 35 W every 2 min 30 s. Pedaling rate was maintained at 60 revolutions per minute. The criteria for the achievement of VO2max were subjective exhaustion, a maximal HR (HRmax) close to the age-predicted maximum HR (i.e., 220 − age ± 10 beats.min⁻¹), and/or a respiratory exchange ratio (RER, VCO2/VO2) above 1.02, and/or a plateau of VO2. During this exercise, the heart rate and gas exchange were monitored in order to establish the relationship between total energy expenditure and heart rate, which is necessary for the individual calibration of the Actiheart® device. The protocol was approved by the French Committee for the Protection of Human Participants and was registered under the reference IDRCB 2013-A01140-45 in the ANSM system and 02348554 in ClinicalTrials. Study design The volunteers performed each of the nine activities several times, for periods of 2-20 minutes, according to a defined scenario: sitting, slow, normal and brisk walking, climbing and descending stairs (four floors), standing, slow running and taking public transport (tramway). A researcher followed each volunteer in order to observe the beginning and the end of each activity and to record the real duration of each activity using the smartphone application "Activity Diary", downloadable from https://play.google.com/store/apps/details?id=fr.inra.activitydiary. The total time was estimated at 106 minutes, but the session could be stopped at any time if the volunteer requested it.
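The VO2max attainment criteria above translate into a small check; a sketch follows (the function is our illustration, with the 1.02 RER cutoff taken as reported; subjective exhaustion is assessed separately by the supervising physician):

```python
def vo2max_attained(age: int, hr_max: float, rer_max: float, vo2_plateau: bool) -> bool:
    """Check the exercise-test criteria for reaching VO2max."""
    hr_criterion = abs(hr_max - (220 - age)) <= 10  # within 220 - age +/- 10 bpm
    rer_criterion = rer_max > 1.02                  # RER = VCO2/VO2
    return hr_criterion or rer_criterion or vo2_plateau

# Example: a 45-year-old reaching 172 bpm with a maximal RER of 1.05.
print(vo2max_attained(age=45, hr_max=172, rer_max=1.05, vo2_plateau=False))  # True
```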
Time spent in the four activity categories: We classified the activities into four categories. The first category ranged from 0.9 to 2 METs and included sedentary behaviors such as sitting, standing and transportation. The second category, represented by slow walking, corresponded to light-intensity activities and ranged from 2 to 3 METs. The third category included moderate-intensity activities, corresponding to 3-5 METs, such as normal walking, descending stairs and brisk walking. The last category, which included vigorous-intensity activities, was higher than 5 METs, with running and climbing stairs. The general MET values were defined for normal-weight adults but depend on individual characteristics. This is why some authors [20] recommended personalizing the MET values by taking into account the REE estimated by the Mifflin-St. Jeor equations, which are adapted to overweight and obese men and women [21]. The personalized value is deduced by converting the REE expressed in kcal.d⁻¹ into ml.kg⁻¹.min⁻¹: METp = METg × 3.5 / REE_Mifflin, where METp is the personalized MET; METg is the general MET; 3.5 corresponds to the general oxygen consumption at rest (ml.kg⁻¹.min⁻¹); and REE_Mifflin is the Resting Energy Expenditure predicted by the Mifflin-St. Jeor equations (ml.kg⁻¹.min⁻¹). For each activity category, the error of the TEE estimation (%) was expressed either in relative or in absolute values. T-tests and paired t-tests were performed on the relative and absolute values of the errors to determine whether the errors differed from zero and to compare the error levels between methods. The time spent in each activity category was already expressed as a percentage of the total recording time; we therefore calculated the gap between the results from the devices and the duration determined from the scenario, which is the reference for time. All t-tests were performed using SAS 9 software. The percentages of time estimated in each activity category (sitting/standing, light-, moderate- or vigorous-intensity) by Actiheart®, Armband® and Fitmate® were compared to those recorded by the scenario by performing paired t-tests. Statistical significance was set at p < 0.05. Agreement between the portable monitoring devices, the TEE estimations and the reference measurements was evaluated by Bland-Altman plots [23]. The bias was estimated by the mean difference (M) and the standard deviation (s); statistically, 95% of the differences lie between M ± 2s (the agreement limits). Participant characterization All participants were overweight or obese (28.5 < BMI < 41.6 kg.m⁻²) and middle-aged (Table 1). There was no difference in age (p = 0.77), weight (p = 0.31), height (p = 0.13), BMI (p = 0.70), waist circumference (p = 0.17) or hip circumference (p = 0.22) between male and female volunteers. However, neck circumference was significantly higher in men than in women (p = 0.02). The waist-to-hip ratio was not significantly different between men and women, although it tended to be higher in men (p = 0.06). On the basis of the activity scenario, i.e., the durations of the activities and their MET values, we calculated TEE_scenario. On the basis of the TEE given minute-by-minute by Armband®, Actiheart® and Fitmate®, MET values were calculated and the time spent in each category was estimated.
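A sketch of the MET personalization described above, in Python. The Mifflin-St. Jeor equations are the standard published forms; the conversion from kcal.d⁻¹ to ml.kg⁻¹.min⁻¹ assumes an energy equivalent of about 5 kcal per litre of O2, which is our assumption rather than a value stated in the paper.

```python
def mifflin_ree_kcal_day(weight_kg: float, height_cm: float, age_y: float, male: bool) -> float:
    """Mifflin-St. Jeor resting energy expenditure (kcal/day)."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_y
    return base + 5.0 if male else base - 161.0

def personalized_met(met_general: float, weight_kg: float, height_cm: float,
                     age_y: float, male: bool, kcal_per_l_o2: float = 5.0) -> float:
    """METp = METg * 3.5 / REE_Mifflin, with REE converted to ml O2 per kg per min."""
    ree_kcal_day = mifflin_ree_kcal_day(weight_kg, height_cm, age_y, male)
    ree_ml_kg_min = ree_kcal_day / kcal_per_l_o2 * 1000.0 / 1440.0 / weight_kg
    return met_general * 3.5 / ree_ml_kg_min

# Example: brisk walking (METg ~ 4.3) for an obese 45-year-old man, 105 kg, 178 cm.
# The personalized value exceeds the general one, as reported for overweight subjects.
print(f"{personalized_met(4.3, 105, 178, 45, male=True):.2f} METs")
```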
TEE measurements made by Fitmate®: The Fitmate® device (Cosmed, Rome, Italy) was used as the reference for TEE measurement. We used facemasks with a turbine flowmeter (28 mm diameter) adapted to ventilation measurement during exercise. The device also includes a galvanic fuel-cell oxygen sensor for analyzing the expired oxygen fraction and subsequently estimating VO2. The facemask was connected to the central unit, which was placed in a backpack during the activities for practical reasons. A chest belt connected to the central unit monitored the heart rate in real time. This real-time monitoring makes it possible, in conjunction with the VO2 measures, to estimate changes in the respiratory quotient (RQ) during exercise. TEE_Fitmate was then calculated using an Excel macro provided by Delta Medical, which takes Weir's equation and the variation of RQ into account. A recent study reported that the average RQ at rest is higher for overweight and obese people (0.87) than for normal-weight people (0.85) [22].

Statistical analysis

The anthropometric and individual characteristics (age, weight, height, BMI, circumferences) recorded in men and women were compared using t-tests. The TEE values given by the devices (Actiheart® and Armband®) and estimated from the scenario were compared to those of Fitmate®. Taking Fitmate® as the reference, we calculated the error as follows:

Error (%) = 100 × (TEE_Device − TEE_Fitmate) / TEE_Fitmate,

where TEE_Device is the total energy expenditure estimated by Actiheart®, Armband® or the MET values, and TEE_Fitmate is the total energy expenditure measured by Fitmate®.

Activity scenario: duration and MET values

The expected time for the whole set of activities was 106 minutes and the actual mean time was 99 ± 18 minutes. The varying durations were due to external factors such as the weather (rain or snow), a tramway strike, and the volunteers' reactions, such as fatigue.

Each physical activity of the scenario was associated with a general MET value and then a personalized one, as shown in Table 2. The personalized MET values were higher than the general MET values because the volunteers were overweight. This means that the intensity of a given activity is higher for an overweight participant than for a normal-weight one.

Comparison between the TEE estimated by the devices and the scenario, and TEE_Fitmate®

The TEE values measured by Fitmate® were the references for calculating errors. The mean relative error in TEE estimation was not significantly different from zero for Actiheart® and the MET values: −7.0% ± 15.9% and 3.4% ± 13.0% (p > 0.05), but it was significantly different from zero for Armband®, indicating an underestimation of −7.7% ± 8.4% (p = 0.008). The errors in absolute value were all significantly different from zero: 9.3% ± 6.9%, 16.3% ± 6.6% and 12.6% ± 4.7% for Armband®, Actiheart® and the scenario, respectively. However, the mean difference between the Actiheart® and Armband® errors in absolute value was significantly different from zero (−6.7%, p = 0.004). Thus, the error expressed in absolute value was higher for Actiheart® (15.9% ± 6.8%) than for Armband® (8.4% ± 6.6%). The two other mean differences in error (between Armband® or Actiheart® and the scenario) were not significant (p = 0.21).

Comparison between activity duration estimations and the scenario

The percentages of time spent in sedentary (sitting and standing), light-, moderate- and vigorous-intensity activities are shown in Table 3.
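As a methods aside, the error metric and the Bland-Altman agreement limits defined above can be computed as follows; the per-volunteer TEE values in the example are hypothetical.

```python
import numpy as np

def tee_errors(tee_device, tee_ref):
    """Relative error (%) of each device estimate against the Fitmate reference."""
    tee_device, tee_ref = np.asarray(tee_device, float), np.asarray(tee_ref, float)
    return 100 * (tee_device - tee_ref) / tee_ref

def bland_altman_limits(tee_device, tee_ref):
    """Bias (mean difference M) and limits of agreement M +/- 2s."""
    diff = np.asarray(tee_device, float) - np.asarray(tee_ref, float)
    m, s = diff.mean(), diff.std(ddof=1)
    return m, (m - 2 * s, m + 2 * s)

# Hypothetical per-volunteer TEE values (kcal) over the scenario:
device = [520, 480, 610, 555, 498]
ref = [540, 500, 650, 560, 530]
print(tee_errors(device, ref).round(1))
print(bland_altman_limits(device, ref))
```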
Sitting and standing activities took up most of the time: 57%, 50%, 54% and 59% according to the Armband®, Actiheart®, Fitmate® and scenario recordings, respectively. The paired t-tests showed significant underestimations of −8.7% (p = 0.006) and −4.4% (p = 0.002) by Actiheart® and Fitmate®, respectively, relative to the scenario for sitting/standing activities. A similar result was observed for moderate-intensity activities (−9.7%, p = 0.008 and −8.6%, p < 0.0001 for Actiheart® and Fitmate®, respectively). Conversely, there was an overestimation of light-intensity activity duration by Actiheart® and Fitmate® compared to the scenario (p < 0.0001). Otherwise, there were few vigorous-intensity activities: less than 7% of the time, with only one significant overestimation (4.5%, p = 0.006), by Fitmate® compared to the scenario. No difference was observed between Armband® and the scenario, regardless of the activity intensity (sitting/standing: p = 0.15; light: p = 0.10; moderate: p = 0.28; and vigorous: p = 0.16), or between Actiheart® and the scenario for the estimation of vigorous-intensity activity (p = 0.89).

Discussion

In the present study, the performances of two portable devices (Actiheart® and Armband®) and of personalized MET values for TEE estimation were compared with the results given by a reference method of indirect calorimetry (Fitmate®) in a sample of overweight and obese volunteers. Knowledge of TEE is essential in order to estimate the time spent in active behaviors (duration and intensity) and to propose a diet adapted to the physical activity level. Moreover, research devoted to the study of total and physical-activity energy expenditure depends on valid devices. Actiheart® and Armband® were validated on normal-weight populations [11,16]. It was therefore necessary to specifically test and validate them on a sample of overweight and obese people in controlled conditions. Firstly, we compared the TEE estimated by the devices, calculated from the MET values associated with the activities in the scenario, and measured with the Fitmate® device. Our findings showed that the absolute values of the errors were smaller for the SenseWear Pro-3 Armband® (9%) than for Actiheart® (16%) and the MET values (12%). Moreover, the limits of agreement were closer for Armband® than for the other two methods. Few studies have focused on the validity of both of these devices in overweight and obese adults. As regards the energy expenditure estimated by the Armband® Pro 2 (version 4.0) during exercise sessions performed for five minutes by twenty obese adults, an overestimation of TEE by 20, 30 and 31% during bicycling, stair stepping and walking was observed [19], and the Bland-Altman plots did not show agreement between the Armband Pro 2 and indirect calorimetry in obese volunteers. In our study, the Armband Pro 3 (version 6.1) was used, and small biases and good agreement between this recent version and IC were observed. Furthermore, TEE was evaluated over a longer time span in the present study than in Papazoglou et al.'s work; device bias may increase over brief time spans by reason of edge effects. To our knowledge, Actiheart® had never been specifically validated in an overweight or obese adult population. To improve the accuracy of this heart rate monitor, the individual or group "heart rate-TEE" relationship has to be established and implemented in the TEE prediction model [14,24]. The best results were obtained with the individual calibration that was used in our study. Nevertheless, Actiheart® provided a less accurate TEE estimation than Armband®. The personalization of the MET values associated with the activity scenario allowed us to obtain a third evaluation of TEE. The magnitude of its error in absolute value was intermediate between those of Actiheart® and Armband®, without being significantly different from either. Thus, if the activity durations are precisely known, TEE estimation on the basis of the activity scenario and personalized MET values is accurate. However, TEE evaluation from MET values requires accurate activity recordings, which would be very demanding and tedious in free-living conditions without technological support.

Recent studies have pointed out that sedentary time is the most detrimental to health [25]. We therefore compared the times spent in immobile activities (sitting and standing), light (slow walking), moderate (normal and brisk walking) and vigorous (running and stair climbing) activities as evaluated by Actiheart® and Armband®, calculated from Fitmate® and recorded by an engineer with the smartphone application "Activity Diary". The scenario provided the real time spent in each category, since every change of activity was precisely recorded. The results are similar for Actiheart® and Fitmate®, which significantly underestimated the percentage of time spent in sitting/standing and moderate-intensity activities and overestimated the light-intensity activities included in the scenario. Both devices, Actiheart® and Fitmate®, estimate these times on the basis of TEE and, therefore, from variables such as heart rate. Following an activity, there is a recovery phase during which the heart rate and TEE gradually decrease back to their resting values. This phase can be the cause of the underestimation of sitting/standing activities and the overestimation of light-intensity activities by these two devices. There was no difference between Armband® and the scenario. This better accuracy is probably due to the fact that the measures made by the Armband®
are based not only on accelerometry, but also on body temperature, impedancemetry and heat flux. The accuracy of the Armband® device in estimating sedentary time, together with the possibility of obtaining minute-by-minute results, makes it a sensor that is particularly well suited to the overall assessment of sedentary behavior. The results obtained with Actiheart® and Fitmate® depended on heart rate and its recovery. Heart rate is also known to be influenced by factors other than physical activity, such as stress. For these reasons, these last two devices underestimated immobility and were therefore less well suited than Armband® for estimating this activity category.

As in other studies, a small sample of volunteers was studied, because the volunteers were physically homogeneous and the protocol was performed in controlled conditions [14,15,26]. The volunteers of the present study were all able-bodied, without gait disorders, and sedentary. On a metabolic basis, they were similar too: their TEE values recorded during the first sitting activity in the scenario were correlated neither with age nor with BMI (results not shown). Their activities during the scenario were standardized and controlled. Furthermore, the small sample size of the present study was sufficient to show significant differences between the devices. In a further study, it would be interesting to test both of these sensors in a larger number of volunteers in free-living conditions, where activities are brief, spontaneous and heterogeneous, since such activities are assumed to have beneficial effects on health. However, since both of these sensors only provide minute-by-minute results, they cannot identify very short light-intensity activities (less than one minute), such as the small displacements involved in daily tasks. The design of a device capable of integrating novel functionalities able to capture the frequency and duration of brief light-intensity activities is therefore needed.

Health problems related to obesity and sedentary behavior are on the rise. The SenseWear Pro-3 Armband® sensor provided accurate results, with a difference of less than 2% in the time spent in each of the four intensity categories compared to the scenario, and it estimated TEE with less than a 10% error compared to IC. Actiheart® was well adapted to estimating the TEE of normal-weight people. Valid assessment of total energy expenditure and sedentary behavior by research devices is therefore a major issue in the overweight population.

Table 2: General and personalized MET values (METg and METp) according to the activity category.
Table 3: Time spent (%) in each intensity category determined by the three devices and the scenario (Mean ± SD).
Instabilities and resistance fluctuations in thin accelerated superconducting rings

The non-equilibrium properties of a driven quasi-one-dimensional superconducting ring subjected to a constant electromotive force (emf) are studied. The emf accelerates the superconducting electrons until the critical current is reached and a dissipative phase slip occurs that lowers the current. The phase-slip phenomenon is examined as a function of the strength of the emf, the thermal noise, and the normal-state resistivity. Numerical and analytic methods are used to make detailed predictions for the magnitude of phase slips and the subsequent dissipation.

I. INTRODUCTION

When driven away from equilibrium, many systems encounter instabilities leading to new states or phases. Often there exists a multiplicity of possible states that can be selected near the onset of the instability. The selected state may depend on various factors, such as the rate at which the system is driven through the instability, noise, internal excitations, different dissipation mechanisms, and the system size. In this paper, the selection of states is studied in driven superconducting rings. Many of the phenomena observed here are not limited to superconducting rings but appear in many other physical systems, ranging from pattern-forming systems [1,2,3,4,5] to lasers [6]. The relative simplicity of the superconducting system makes it possible to obtain information about some of the general questions in driven nonlinear systems, such as state selection and the effect of dissipation on the state selection process itself. The mesoscopic nature of the system, i.e., the fact that the superconducting ring has a finite circumference with a finite number of accessible states, is fundamental to this problem. First, it leads to the existence of a finite number of metastable current-carrying states, which can compete for occupation. It is this competition that lies at the heart of the problem. Second, care must be taken to distinguish between voltage-driven and current-driven systems. As shown by Tarlie et al. [7], for systems that are not in the thermodynamic limit, i.e., mesoscopic systems, the choice of ensemble is not free. In this paper we focus on voltage-driven systems as opposed to current-driven systems. In addition to providing a prototype system for studying various aspects of driven systems in general, non-equilibrium superconductivity is of great interest in its own right. Indeed, current-induced transitions in superconducting filaments have been a subject of intense experimental and theoretical study for almost three decades; Ref. [8] provides a comprehensive review of the field. We concentrate on the emergence of the dissipative phase-slip state [9,10,11,12,13] in voltage-driven mesoscopic systems. When a superconductor (below T_c) is driven by a voltage source, the supercurrent increases until it reaches a critical value, at which point the system becomes unstable. Several interesting phenomena may then occur: the system can enter the dissipative phase-slip state, Joule heating can take place, mode locking can occur, as well as other phenomena. Here, the focus is on the onset of the instability and its effect on the dynamics of the superconducting state. The transitions between the current-carrying states can take place via two fundamentally different routes: (i) by a nucleation process involving thermal fluctuations and an activation energy barrier, or (ii) the system may be driven to an instability by an external driving force.
In the context of nucleation and metastability, the decay of persistent currents in thin superconductors is an old and extensively studied problem [14,15,16,17]. The second route [13], however, involves a decay from a point of instability, and it is relatively poorly understood. One of the major difficulties is this: whereas in the case of nucleation the decay is from a metastable state and involves thermal activation over a saddle point, in the latter case the external force drives the system to a point of instability where there is no energy barrier left, i.e., the energy landscape looks locally flat. In this instance, the decay and the final state depend on various factors, such as how fast the system was driven, the relative strength of fluctuations, internal excitations, and so on. This makes a precise theoretical formulation of the problem difficult, since it is not possible to use the free-energy formulation as in the case of metastability [18].

II. THE SYSTEM

The physical system considered is a quasi-one-dimensional superconducting ring of finite circumference; i.e., the radius of the cross-sectional area (S) of the superconducting filament is much smaller than the coherence length (ξ) and the magnetic penetration length (λ): √S ≪ ξ(T) and √S ≪ λ(T), respectively (see Fig. 1a). When the ring is placed in a time-dependent magnetic field, an emf is induced in the ring by Faraday's law of induction. This leads to a current that increases in time (here E is the electric field, J_s the supercurrent density, and c the speed of light). The time-dependent increase in the current cannot continue indefinitely; eventually the current reaches a critical value, at which point the system becomes unstable and a dissipative phase slip occurs, reducing the current [13] by a discrete amount. It is important to re-emphasize that the system dynamics in the case under study here, viz. the decay of the system from a point of instability, is very different from the historically well-studied problem of the decay of the system from a point of metastability. The picture of the system hopping from one local minimum to the next no longer applies. Rather, the picture now is one where the system is initially in a locally stable state, but, as a consequence of the voltage source, the energy landscape evolves in such a way that as the critical current is reached, the system finds itself at the top of a hill. When this situation is encountered, it is possible that there exists a variety of different valleys for the system to flow into, each valley leading to a locally stable state. In this picture, each of these locally stable states competes for occupation.

To examine these phenomena, the Ginzburg-Landau theory of dirty superconductors will be considered. The Ginzburg-Landau free energy functional can be written as

F = ∫ d³r [ a|Ψ|² + (b/2)|Ψ|⁴ + (1/2m_e)|(−iħ∇ − (2e/c)A)Ψ|² + B²/8π ],   (1)

where A is the vector potential, Ψ is the complex-valued order parameter, e is the electron charge, m_e is the electron mass, c is the speed of light, ħ is Planck's constant, and a and b are the expansion coefficients. Since the current is induced in the loop by a time-varying magnetic flux, the effect of the induced electromotive force (emf) must be included in the GL description. By Faraday's law of induction, the electrons in the loop are subjected to an emf

E = −(1/c) dΦ(t)/dt,   (2)

where E is the induced emf and Φ(t) is the magnetic flux through the loop. The magnetic flux and the magnetic field are related by

Φ(t) = B(t) S_l,   (3)

where S_l is the area of the loop. Eqs.
(2) and (3) can be combined to obtain a relationship between the vector potential and the electric field; i.e., if E_x is used to denote the tangential component of the field, then ∂A_x/∂t = −cE_x, where A_x is the tangential component of the vector potential. This in turn gives A_x = −cEt/L, where L is the length of the wire. The one-dimensional nature of the problem allows several simplifications. First, since the wire is narrow, the magnetic field generated by the supercurrent does not significantly influence the order parameter. This allows one to treat the vector potential A_x as a parameter instead of as a dynamical variable. In addition, since the magnetic-field energy due to the supercurrent is much smaller than the energy associated with the order parameter, the magnetic-field term can be dropped from the free energy [15]. Finally, since the radius of the wire is less than ξ, the order parameter is a function only of the tangential direction (x). The geometry of the wire implies periodic boundary conditions, i.e., Ψ(x) = Ψ(x + L).

For further analysis and computational efficiency, it is convenient to rewrite the equation in dimensionless form using the transformations of Eq. (4): the order parameter is measured in units of its equilibrium value √(|a|/b), lengths in units of the coherence length ξ, with ξ² = ħ²/(2m_e|a|), and times in units of the Ginzburg-Landau time τ_GL, the natural measure for time, i.e., t → t/τ_GL; it is implicitly assumed that the temperature is below the superconducting transition (i.e., a < 0). In the following, we work in dimensionless units; i.e., we perform the transformations defined above and drop the primes for convenience. The last transformation in Eq. (4) involves v_ec, the electrochemical potential generated by the normal current, which will be formally introduced in the next section, where the GL theory is extended to include normal (Ohmic) current generation. In addition, following the scalings in Eq. (4), it is natural to measure the length of the ring in units of the coherence length as ℓ = L/ξ. The rescaled boundary condition then reads Ψ(x) = Ψ(x + ℓ), and the dimensionless free energy becomes

f = ∫₀^ℓ dx [ −|Ψ|² + (1/2)|Ψ|⁴ + |(∂_x − iA_x)Ψ|² ].

To describe the dynamics of the superconducting condensate, relaxational dynamics are assumed, leading to the standard stochastic time-dependent Ginzburg-Landau (STDGL) equation of motion

∂Ψ/∂t = (∂_x − iA_x)²Ψ + Ψ − |Ψ|²Ψ + η,   (7)

where η ≡ η(x, t) is an uncorrelated Gaussian noise source with correlations

⟨η(x, t) η*(x′, t′)⟩ = 2D δ(x − x′) δ(t − t′).

The angular brackets denote an average, and D is the intensity of the noise, determined by the fluctuation-dissipation theorem [17] in terms of the temperature and the critical field H_c (Eq. (8)). To make the model numerically more tractable, it is convenient to make a gauge transformation [17,19] that absorbs the vector potential into the phase of the order parameter, with ω = 2eEτ_GL/ħ. This transformation twists, or winds, the order parameter along the wire; its effect is to map the current-carrying states to twisted plane waves, as illustrated in Fig. 1b. After the transformation, the periodic boundary condition becomes the twisted condition Ψ(x + ℓ, t) = e^{iωt} Ψ(x, t), and the equation of motion obtained from Eq. (7) reads

∂Ψ/∂t = ∂²Ψ/∂x² + Ψ − |Ψ|²Ψ + η.   (11)

This formulation neglects the electrochemical potential due to normal-current generation at a phase-slip center; its inclusion is discussed next.

A. Electrochemical potential

Eq. (11) would be a sufficient description if the generation of a normal current at a phase slip could be neglected. This approximation is valid when the normal-state resistivity is negligible [13,17]. However, the Ginzburg-Landau free energy is only valid for 'dirty' superconductors, in which the normal-state resistivity is appreciable even at low temperatures.
One aim of the current study is to examine the effect of the resistive normal current on the process. To facilitate this goal, the equation of motion (i.e., Eq. (11)) must be generalized to include the creation of electrochemical-potential gradients at phase-slip locations. A phase slip occurs when the system locally loses superconductivity and becomes a normal Ohmic conductor. As discussed above, below T_c the system regains the fully superconducting state after making a transition to a state of lower current. An important question is the effect of the generation of normal current on the dynamics and the state selection problem. To account for the generation of normal current, the time derivative in the STDGL equation of motion must be replaced by ∂/∂t + iv_ec, where v_ec ≡ v_ec(x, t) is the electrochemical potential generated by the normal current [17,20,21,22,23]. With that substitution, the dimensionless equation of motion becomes

∂Ψ/∂t + iv_ec Ψ = ∂²Ψ/∂x² + Ψ − |Ψ|²Ψ + η.   (12)

Physically, the appearance of the electrochemical potential is due to a local charge imbalance in the superconductor. Gorkov [24] was the first to point out that in a superconductor the Fermi level, and thus the electrochemical potential, is a local time-dependent variable related to the coherence of the superconducting state. Qualitatively, if the local charge balance is disturbed, the Fermi level experiences a local time-dependent perturbation, which in turn affects the local energy gap. Gorkov showed that gauge invariance is preserved if the order parameter depends on time as exp(−2iμ_F t/ħ), where μ_F is the Fermi energy. This leads to the second term on the left-hand side of Eq. (12). The electrochemical potential can be determined by combining charge conservation and Ohm's law in the following manner. Charge conservation implies that ∂_x(J_n + J_s) = 0, where J_n is the normal current and J_s is the supercurrent [25]. From Ohm's law, i.e., ∂_x v_ec = −αJ_n, this can be written as

∂²v_ec/∂x² = α ∂J_s/∂x,   (13)

where α is a dimensionless Ohmic resistivity that depends on the normal-state resistivity ρ_n, the reduced temperature t = T/T_c, and the critical field H_c(T); for a dirty superconductor [28], α can be expressed in terms of the mean free path l_F.

III. LINEAR STABILITY ANALYSIS

The aim of the linear stability analysis is to gain insight into the stability of the current-carrying state against small perturbations, how the perturbations grow or decay in time, and how different modes are selected. In general, when the total current exceeds the critical supercurrent, an Eckhaus instability occurs. The Eckhaus instability is a longitudinal secondary instability that appears in many systems exhibiting spatially periodic patterns [26,27]. To study the Eckhaus instability in superconducting rings, the order parameter is linearized around a current-carrying state by setting Ψ(x, t) = Ψ₀ + δΨ(x, t), where Ψ₀ = √(1 − q²) e^{iqx} and q = (ω/ℓ)t. Ψ₀ is a current-carrying (or uniformly twisted plane-wave) state that is a solution of Eq. (12) in the limit of slow driving; this limit is satisfied for the range of ω/ℓ considered in this paper (i.e., 2×10⁻⁶ < ω/ℓ < 2×10⁻³). Since the system possesses translational invariance and admits plane-wave solutions, the perturbation is given in terms of its Fourier expansion, i.e.,

δΨ(x, t) = e^{iqx} Σ_n a_{k_n}(t) e^{ik_n x},

where a_{k_n}(t) is the amplitude of mode n associated with wavevector k_n = 2πn/ℓ. Substituting into Eq. (12), using Eqs. (9) and (13) to solve for v_ec, and linearizing in δΨ gives an equation of motion for δΨ, or, in Fourier space, for a_{k_n}.
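Although the analysis here proceeds analytically, Eqs. (12) and (13) are also what the simulations of Section IV integrate: Euler time stepping for Ψ, with v_ec obtained in Fourier space. The following is a minimal sketch under stated assumptions: it uses the pre-twist gauge with a spatially uniform dimensionless vector potential A_x = −(ω/ℓ)t and periodic boundary conditions (equivalent to the twisted frame), illustrative parameter values, and a schematic noise normalization.

```python
import numpy as np

# Minimal Euler sketch of the dimensionless STDGL ring (Eq. (12)) with the
# electrochemical potential of Eq. (13) solved spectrally.
ell, N, dt = 64.0, 128, 0.05
dx = ell / N
omega, alpha, D = 1e-3, 0.1, 1e-3
ik = 2j * np.pi * np.fft.fftfreq(N, d=dx)   # spectral derivative factors i*k

psi = np.ones(N, dtype=complex)             # uniform (zero-current) start
rng = np.random.default_rng(0)

def grad(f):                                # centered periodic derivative
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def lap(f):                                 # periodic Laplacian
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

for step in range(100000):
    A = -(omega / ell) * (step * dt)        # uniform gauge field A_x(t)
    Js = np.imag(np.conj(psi) * (grad(psi) - 1j * A * psi))  # supercurrent
    # Eq. (13): d^2 v/dx^2 = alpha dJs/dx  ->  v_hat = alpha Js_hat / (i k)
    Js_hat = np.fft.fft(Js)
    v_hat = np.zeros_like(Js_hat)
    v_hat[1:] = alpha * Js_hat[1:] / ik[1:]
    v = np.real(np.fft.ifft(v_hat))
    cov2 = lap(psi) - 2j * A * grad(psi) - A**2 * psi        # (d/dx - iA)^2 psi
    eta = np.sqrt(2 * D * dt / dx) * (rng.standard_normal(N)
                                      + 1j * rng.standard_normal(N))
    psi = psi + dt * (cov2 + psi - np.abs(psi)**2 * psi - 1j * v * psi) + eta
```

In this gauge the supercurrent of the uniform state grows linearly as (ω/ℓ)t, so the loop drives the ring toward the critical current, after which phase slips occur spontaneously from the noise.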
Setting a_{k_n}(t) = a_{k_n} e^{λ(q,α)t} leads to an eigenvalue equation whose solution yields two branches, λ±_n(q, α). When λ±_n is negative, the corresponding mode is stable: fluctuations decay back to zero and the superconducting state persists. When λ+_n is positive, the current-carrying states are unstable with respect to fluctuations of finite wavevector k_n. For the following discussion, λ−_n can be neglected, as it is negative definite. In Fig. 2, λ+ is shown for the first three modes as a function of q for several different values of α. For small q, all the modes are stable, i.e., λ+_n < 0. The inset in Fig. 2a shows that the modes become unstable sequentially: the lowest mode first, then the mode n = 2, and so on. The time t_n at which a given mode becomes linearly unstable is determined by the condition λ_n(t_n) = 0, which gives

q_n² = [(ω/ℓ)t_n]² = 1/3 + k_n²/6.   (18)

For a wire of infinite length, this time corresponds to the time at which the current reaches the critical value, i.e., q_n² → 1/3 and J_c = q_c(1 − q_c²) = 2/√27. While Eq. (18) implies that single-phase-slip (i.e., n = 1) processes will dominate, this effect is offset by the rate of increase of λ+_n; i.e., ∂λ+_n/∂q is an increasing function of n. This can be seen in the small-α limit, where evaluating ∂λ+_n/∂q at q_n yields an expression that grows with k_n; in particular, in the small-k_n limit, the rate of increase of the positive eigenvalue increases with n. The situation is somewhat analogous to the classic tortoise/hare race if only two modes are considered (say n = 1 and 2). In this case, the tortoise (n = 1) begins the race first, since t₁ < t₂, but the hare accelerates faster, since ∂λ+₁/∂q|_{q₁} < ∂λ+₂/∂q|_{q₂}. To first order in α, the effect of dissipation is to increase the rate of acceleration of both the tortoise and the hare equally. Since the tortoise begins the race first, this tends to favor the tortoise winning the race. In terms of mode analysis, increasing the dissipation (i.e., α) increases the probability of a single phase slip (n = 1) occurring over a double phase slip (n = 2).

The linear predictions can be used to estimate the relative probabilities of a phase slip of order n occurring. In the linear prediction, the equal-time correlation function ⟨|a_n(t)|²⟩ of the nth mode is given by Eq. (21). Following the instability, Eq. (21) describes the evolution of the nth mode from the initial current-carrying state Ψⁱ = √(1 − q²) exp[iqx] to the new current-carrying state Ψ_n = ā_n exp[i(q − k_n)x], where ā_n = √(1 − (q − k_n)²). The quantity â_n ≡ |a_n(t)|²/ā_n describes the 'distance' from the initial to the final nth state and can be thought of as an orthogonal coordinate in an n-dimensional space. The unit of measure in this space is then the Euclidean distance d in these coordinates. If it is assumed that a phase slip has occurred when d = 1, then it is natural to interpret the relative probability P_n of an nth-order phase slip in terms of the â_n (Eq. (25)). Eq. (25) provides a qualitative picture of the state selection process and makes it possible to compare the linear theory to numerical results. This will be done in Section IV (in particular, see Fig. 9). In addition to the dependence of P_n on λ+_n, P_n also depends on the noise strength. While this is not directly visible from Eq. (25), it should be noted that the condition d = 1 imposes a D dependence on â_n and P_n. Physically, the noise strength depends on the temperature of the system via the fluctuation-dissipation theorem: the intensity of the thermal noise increases as T → T_c, as demonstrated by Eq. (8).
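Before continuing, a short numerical check of these linear-theory quantities may be helpful. The q_n expression below is Eq. (18) as reconstructed above, and the ℓ and ω values are the illustrative ones of Section IV.

```python
import numpy as np

# Critical wavenumber and current of the plane-wave branch J(q) = q (1 - q^2)
q = np.linspace(0.0, 1.0, 100001)
J = q * (1.0 - q**2)
print(q[np.argmax(J)], 1 / np.sqrt(3))   # q_c ~ 0.5774
print(J.max(), 2 / np.sqrt(27))          # J_c = 2/sqrt(27) ~ 0.3849

# Instability times t_n of the first few modes, from q_n^2 = 1/3 + k_n^2/6
ell, omega = 64.0, 1e-3
for n in (1, 2, 3):
    kn = 2 * np.pi * n / ell
    qn = np.sqrt(1/3 + kn**2 / 6)
    print(n, round(qn, 4), round((ell / omega) * qn))  # lower modes first
```

The printed t_n increase with n, which is the "tortoise starts first" half of the race; the growing slope ∂λ+_n/∂q supplies the "hare accelerates faster" half.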
Thus, close to T_c the relative importance of the noise increases, whereas away from T_c the driving force is dominant. Since α has no time dependence, the expansion of ∫₀ᵗ dt₁ λ+_n(t₁, α) leads to the same result as obtained by Tarlie and Elder [13]; i.e., in terms of the intrinsic and extrinsic parameters, the instability of order n becomes active at time τ_n = ℓ(∂_qλ+_n ωℓ)^{−1/2}. To summarize, the linear analysis shows that state selection has a subtle dependence on both the applied driving force and the intrinsic properties of the system. It is important to note that this analysis can only be expected to give a qualitative description of the process, since it does not account for competition between the various modes. These results will be compared with numerical simulations of the stochastic time-dependent GL equation in Section IV.

IV. NUMERICAL RESULTS

The parameters that enter the numerical simulations can be estimated by considering typical experimental values, such as T_c = 3 K, T = 0.93T_c, H_c = 300 G, and ξ(0) = √S = 1000 Å. With these values the intensity of the noise is D = 10⁻³, the GL time is τ_GL = 1.4×10⁻¹¹ s, and ω ≈ E/23 µV. In the simulations the temperature, and thus the intensity of the noise, is fixed. ω was varied between 0.0001 and 0.1, corresponding to electromotive forces from 2 nV to 2 µV. For dirty superconductors the normal-state resistivity can vary between 0.01 and 1.0 µΩ·cm, and ρ₀ varies from 1.0 to 100.0 µΩ·cm. Using these values, the dimensionless resistivity is α ≈ 10⁻⁴−1.0, depending on the dimensions and the material. A simple Euler algorithm was used for the time integration of Eq. (12), and Eq. (13) was solved in Fourier space. The complex order parameter was separated into its real and imaginary parts. The simulation parameters were L = 64, dx = 0.85, and dt = 0.2, where dx and dt are the smallest discrete elements of space and time, respectively. A useful parametrization of the length of the system is n_ℓ ≡ ℓq_c/2π, where q_c = 1/√3 from the Eckhaus analysis of the GL equation discussed above; n_ℓ is interpreted as the winding number of the order parameter when the Eckhaus instability is encountered. For the simulations to follow, n_ℓ = 5 (see Fig. 3). This allows enough complexity due to the interaction between different modes, i.e., five modes can compete for occupation, while remaining numerically tractable. When computing the probability of an nth-order phase slip, P_n, the averaging was typically over 2000 phase-slip events (small ω) up to 15,000 phase slips (large ω). Simulations performed at large values of α and ω (not shown here) often led to unusual results, which may be due to numerical inaccuracies.

A. Dynamics of the order parameter

The dynamics of the order parameter around a phase slip is illustrated in Figs. 3 and 4. Fig. 3 illustrates that a current-carrying state is a uniformly twisted plane wave. As the current increases, the helix becomes more tightly wound. Due to fluctuations, there will be weak spots where the local supercurrent reaches the critical current before the rest of the system. This is the point where the amplitude of the order parameter starts to decay rapidly toward zero. When |Ψ|² → 0, the phase-slip center momentarily disconnects the phases to the left and right of it, the helix loses a loop, and the supercurrent jumps to a lower value. This cycle is repeated periodically. In Figs.
3 and 4, the behavior described by the linear analysis in the previous section is clearly visible: as the supercurrent increases, the absolute value of the order parameter, |Ψ|², decreases, and at the moment of the phase slip it approaches zero. After the phase slip, the order parameter rapidly recovers, as Fig. 4 demonstrates; this allows the amplitude to relax toward equilibrium in the vicinity of the phase-slip center (times t₂ and t₃ in Fig. 4 and Fig. 3c,d). After a short time, the wire regains a uniform current (t₄ in Fig. 4). To quantify the phase-slip events it is useful to consider two quantities at time t: the spatially averaged supercurrent and the winding number. The spatially averaged supercurrent is given by

J̄_s(t) = (1/ℓ) ∫₀^ℓ dx Im[Ψ* ∂_x Ψ].   (26)

The winding number is a measure of the total phase change in the system and can be defined as

n_w(t) = (1/2π) ∫₀^ℓ dx ∂_x φ(x, t),   (27)

where φ is the phase of the order parameter. As described above, the order parameter can change its total phase only by 2πn, where n = ±1, ±2, ...; only changes by an integral multiple of 2π are possible, in order to preserve the continuity of the order parameter. This also implies that at a single (multiple) phase slip, the system removes exactly one (an integral multiple of one) fluxoid. Fig. 5 displays the time development of the supercurrent and winding number defined in Eqs. (26) and (27), respectively. The electric field drives the current to the critical current, where an instability occurs and the current jumps to a lower value. As suggested by Fig. 5, several modes can be present simultaneously. In the figure, phase slips of order two dominate, but occasionally there are jumps of order three. The relative occurrence of phase slips of all orders is shown in Fig. 6 as a function of the driving force (i.e., ω) for several values of α. From Fig. 6, it can be seen that the probabilities of double and triple phase slips are almost equal, but there is still a small probability of single slips. As discussed in connection with the linear stability analysis, the appearance of phase slips of different order is a subtle issue. For example, every now and then the winding number displays little dips, as if the total phase slip were the result of a two-stage process. It is instructive to look at the state selection probabilities in Fig. 6 together with the dynamics of the supercurrent and the winding number in Fig. 5. As seen from Fig. 6, phase slips of order n = 1 dominate the process at low driving forces. As the driving force is increased, phase slips of order n = 2 become dominant and the shape of the probability curve becomes skewed. Order by order, other modes become dominant in a similar manner. This is consistent with the linear stability analysis, as shown in Fig. 2. The little dips referred to above are a result of competition between the modes. As seen in the linear stability analysis, modes of lower order become unstable first, but the higher-order ones grow at a faster rate. This leads to competition and crossover effects, implying that the dips in Fig. 5 are not the result of a two-stage process in which a phase slip of higher order occurs via two lower-order processes, but are instead due to the coexistence of different modes with different growth rates. Fig. 7 illustrates the complicated nature of the phase slip when several modes are simultaneously present: there is competition between different modes, and it is even possible for several phase-slip centers to exist (almost) simultaneously.
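As an illustration, both diagnostics can be computed from a discretized order parameter such as the one evolved in the earlier sketch; psi is a complex array on a periodic grid and A is the uniform gauge field (zero in the twisted frame).

```python
import numpy as np

def winding_number(psi):
    """Total phase change around the ring in units of 2*pi (an integer)."""
    phase = np.angle(psi)
    dphi = np.diff(np.concatenate([phase, phase[:1]]))  # include wrap-around link
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi         # map each jump into [-pi, pi)
    return int(round(dphi.sum() / (2 * np.pi)))

def avg_supercurrent(psi, dx, A=0.0):
    """Ring-averaged J_s = Im[psi* (d/dx - iA) psi] on a periodic grid."""
    dpsi = (np.roll(psi, -1) - np.roll(psi, 1)) / (2 * dx)
    return np.mean(np.imag(np.conj(psi) * (dpsi - 1j * A * psi)))
```

Logging these two quantities at each step reproduces sawtooth traces of the kind shown in Fig. 5: the current ramps linearly until a slip, at which point the winding number drops by an integer n.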
Fig. 8 shows the time rate of change of |Ψ|² and the electrochemical potential at the phase-slip center as a function of time, when the mode n = 1 is dominant; the simulation parameters correspond to a case in which single slips dominate almost completely, and the time frame is selected so that the figures cover the immediate vicinity of the phase slip. At the moment of the phase slip, |Ψ|² = 0. After the phase slip, Ψ rapidly recovers its equilibrium value; since a constant emf acts on the superconductor, |Ψ| starts to decrease again after its recovery. As seen in the lower figure, v_ec regains its equilibrium value (v_ec = 0) at the phase-slip center considerably more slowly than Ψ. This can be understood in the following way. The electrochemical potential is zero if the current is uniform throughout the sample. However, as seen in Figs. 4 and 7, the time required to reach a uniform current is much longer than the time required for the healing of the order parameter at the phase-slip center. Physically, this corresponds to the relaxation of the charge imbalance [28] in a superconductor. The relaxation is diffusive [8,10,29], with time scales typically of order 10⁻⁹−10⁻¹⁰ s. As discussed in the preceding section, the electrochemical potential, or dissipation, changes the probability of making an nth-order phase slip. This can be seen in Fig. 6 for all values of ω. To highlight this feature, the selection probabilities were numerically estimated as a function of α for ω = 10⁻³ and are displayed in Fig. 9. In this figure the linear prediction (i.e., Eqs. (21)-(25)) is also included for comparison. While the linear analysis fails to predict the correct amplitudes of the different modes, it provides the correct qualitative picture and predicts the order in which the different modes become dominant. The quantitative discrepancies stem from two factors. First, the nonlinear terms seem to favor a separation between the modes: Fig. 6 shows that once a mode becomes dominant, it quickly suppresses all the others, whereas in the linear theory all the allowable modes have much higher amplitudes at all values of the driving force ω [30]. Second, in the limit α → 0 the linear theory accurately predicts the crossover points where a new mode becomes dominant [30], but the presence of dissipation (finite α) has a significant effect on these points, as can be seen in Figs. 6 and 9.

B. Power dissipated at a phase slip

A phase slip is a dissipative process in which electrical energy is locally converted into heat, owing to the Ohmic resistance at the phase-slip center. Early experiments [10,29] showed that the differential resistance related to the phase slip is temperature independent over a wide range of temperatures, except very close to T_c. This is a delicate issue: heating due to Ohmic resistance changes the local critical current, and issues related to charge imbalance and relaxation may become important [8]. In the following, the heat generated at a phase slip is estimated, assuming that the normal carriers follow Ohm's law. The Joule heating law can be used to estimate the heat generated at a phase slip. The power generated is

P = ∫ dΩ j_n E_x = S ∫ dx j_n E_x,

where dΩ is a volume element, S is the cross-sectional area of the ring, j_n is the dimensional normal current density, and E_x is the electric field along the wire.
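In the dimensionless units used here, Ohm's law (∂_x v_ec = −αJ_n) lets the instantaneous power be evaluated directly from the electrochemical potential: P reduces to an integral of αJ_n² over the ring. A minimal sketch, under the same assumptions as the earlier snippets, follows.

```python
import numpy as np

def dissipated_power(v_ec, alpha, dx):
    """Instantaneous Ohmic power (dimensionless), P ~ integral of alpha * J_n^2 dx."""
    dv = (np.roll(v_ec, -1) - np.roll(v_ec, 1)) / (2 * dx)
    Jn = -dv / alpha                     # Ohm's law: dv/dx = -alpha * J_n
    return alpha * np.sum(Jn**2) * dx

# Accumulated heat: Q += dissipated_power(v, alpha, dx) * dt inside the time loop.
```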
In terms of the electrochemical potential, the field along the wire is E_x = −∂_x v_ec, so the power is determined entirely by the gradients of v_ec. The energy per unit volume can then be written in terms of E₀ = (2H_c²/l)α and the dimensionless normal current density J_n. The increase in temperature due to a phase slip can be estimated using the heat capacity per unit mass, which involves the specific heat c. The change in temperature then follows from Eq. (30), where ρ_m is the mass density and H_c² ∝ (1 − t)², with t = T/T_c [28]. Eq. (30) can be used to estimate the change in temperature due to a phase slip. The dependence of the heating on (1 − t)² expresses the well-known fact [10,29] that close to T_c the effects of Joule heating are less significant. Evaluation of ∆T requires information about the time and length scales of v_ec, and therefore we have not estimated it here. Fig. 10 shows the accumulated energy and the power dissipated as a function of time.

V. CONCLUSION

Here, the dynamics of accelerated quasi-one-dimensional superconductors under the influence of a voltage source was studied. A constant emf was used to accelerate the supercurrent to the critical current, at which point the Eckhaus instability is encountered and multiple metastable states can compete for occupation. Each of these competing metastable states corresponds to a state with a different supercurrent. The transition to a new state of lower current involves the generation of a resistive phase-slip center that heals after the phase slip. Because the system was driven by a voltage source, it allowed the study of a very general phenomenon connected to the general methods and problems of nonlinear dynamics, statistical mechanics, and pattern formation. Linear stability analysis was used to investigate the Eckhaus instability. It was found that, within the linear approximation, the state selection process is a competition between two factors: the characteristic time at which a mode a_n(t) becomes unstable, and the growth rates of the other modes. For small driving forces, the low-order modes have time to grow and dominate the process, whereas for larger driving forces the faster growth rates of the high-order modes lead to their dominance. In the intermediate region, the competition leads to crossover. Numerical simulations were performed by integrating the stochastic time-dependent Ginzburg-Landau equation. The behavior was found to be consistent with the predictions of the linear analysis, although nonlinearities and interactions between the phase slips at higher driving forces and higher normal-current resistivity lead to quantitative differences. In spite of the simplicity of the system, it displays rich and complex phenomena, and more analytical and numerical studies are needed. To the authors' knowledge, there exists no systematic method to study state selection in accelerated systems. Recent work [18,30] suggests that the path-integral method of Onsager and Machlup [31] may offer a framework for a systematic study of the decay of systems from points of instability when multiple modes compete for occupation. The extension of this approach to problems in which the dynamical system itself evolves in time, as is the case here, has not been explored. Additionally, future work could explore the two-dimensional case numerically.
A Review on Practical Considerations and Solutions in Underwater Wireless Optical Communication

Underwater wireless optical communication (UWOC) has attracted increasing interest in various underwater activities because of its order-of-magnitude higher bandwidth compared to acoustic and radio-frequency technologies. Testbeds and pre-aligned UWOC links have been constructed for physical-layer evaluation, verifying that UWOC systems can operate at tens of gigabits per second or over distances approaching a hundred meters. This holds promise for realizing a globally connected Internet of Underwater Things (IoUT). However, due to the fundamental complexity of the ocean water environment, there are considerable practical challenges in establishing reliable UWOC links. Thus, in addition to providing an exhaustive overview of recent advances in UWOC, this article addresses various underwater challenges and offers insights into the solutions. In particular, oceanic turbulence, which induces scintillation and misalignment in underwater links, is one of the key factors degrading UWOC performance. Novel solutions are proposed to ease the requirements on pointing, acquisition, and tracking (PAT) for establishing robust UWOC links. The solutions include a light-scattering-based non-line-of-sight (NLOS) communication modality as well as PAT-relieving scintillating-fiber-based photoreceivers and large photovoltaic cells as the optical signal detectors. Naturally, the dual-function photovoltaic-photodetector device readily offers a means of energy harvesting for powering up future IoUT sensors.

Subsea military activities are examples of the growing need to explore the oceans for industrial, scientific, and military purposes. For instance, Saudi Aramco, the largest oil and gas company in the world, has over 43,000 km of offshore oil pipelines to be monitored, thereby requiring an efficient, secure, and high-speed underwater wireless communication technology. Acoustic communication, the most common technology in underwater wireless communication, dates back to 1490, when Leonardo da Vinci suggested detecting ships at a distance by acoustic means [1]. Today, studies of the physical layer of underwater acoustic communication have reached a certain level of maturity. Numerous sea trials have demonstrated such communication over tens of kilometers or beyond [2] and at transmission rates of tens of kilobits per second or higher [3]-[7], the latter being a substantial advance on the few tens of bits per second of the early stage [8], [9]. Acoustic-based video transmission has also been demonstrated [6].
Figure 1 shows the published experimental performance of underwater acoustic telemetry systems in terms of data rate versus range, with a range-times-rate bound used to estimate the existing performance envelope [10]. As the physical-layer verifications mature, calls are emerging to integrate acoustic modems into networks. Some platforms (e.g., SUNRISE [11], LOON [12], and SWARMs [13]) require network technologies such as medium access control (MAC) [14], multiple input and multiple output (MIMO) [15], [16], localization [17], [18], route discovery [19], and energy harvesting [20]. Considering the limited data rate of the acoustic method, regardless of its maturity, the increasing need for high-speed underwater data transmission is driving the development of high-bandwidth communication methods. Radio-frequency (RF) technology typically delivers digital communication or full-bandwidth analog voice communication at rates of tens of megabits per second in terrestrial environments over the kilometer range [21]. However, researchers are also attempting to deploy RF technology in unconventional environments, such as (i) underground, to monitor soil properties and build underground networks [22], [23], and (ii) underwater, to build underwater sensor networks. Despite the considerable RF attenuation in water, which increases drastically with frequency [24], there are a few prior works on underwater RF communications [25]-[27]. In these works, a long transmission distance is always achieved by sacrificing bandwidth (40 m and 100 bit/s at 3 kHz) [26], or vice versa (16 cm and 11 Mbit/s at 2.4 GHz) [25]. Table I summarizes the realizable ranges and data rates of underwater RF communication systems [24].

Given the limited performance of underwater acoustic and RF communication, underwater wireless optical communication (UWOC) has become a transformative alternative. Optical wireless communication (OWC) is data transmission in an unguided propagation medium through an optical carrier, namely ultraviolet (UV), visible, or infrared light. Unlike the expensive, licensed, and limited electromagnetic spectrum in RF, the largely unlicensed spectrum (100-780 nm, corresponding to roughly 3 PHz of spectrum) in OWC enables wireless data transmission at extremely high data rates of up to gigabits per second (Gbit/s) [28]. In fact, the development of OWC has been ongoing since the very early years of human civilization: signaling by means of beacon fires, smoke, ship flags, and semaphore telegraphs can be considered historical forms of OWC [29]. In 1880, Alexander Graham Bell invented the photophone, based on modulated sunbeams, thereby creating the world's first wireless telephone system that allowed the transmission of speech [30]. The recent development of high-speed, power-efficient optoelectronic devices has offered the promise of OWC data rates of up to 100 Gbit/s [31] and transmission links of a few kilometers [32]. Such devices include light-emitting diodes (LEDs) [33], superluminescent diodes [34], laser diodes (LDs) [35], photodetectors [36], modulators [37], and integrated combinations of these devices [38]. Furthermore, because of the high energy efficiency of these high-speed optical emitters, OWC with dual functionality, such as light fidelity (Li-Fi), has been proposed for simultaneous lighting and communication purposes [39].

However, because of the complexity of aquatic environments, the early development of UWOC lagged far behind terrestrial OWC. The first experimental UWOC demonstration was made by Snow et al.
in 1992, achieving a data rate of 50 Mbit/s over a 5.1 m water channel with a gas laser [40]. In 2006, using a 470 nm blue LED, Farr et al. achieved a 91 m UWOC link at a rate of 10 Mbit/s [41]. The first gigabit (1 Gbit/s) UWOC system was implemented by Hanson et al. in 2008 using a diode-pumped solid-state laser [42]. However, more considerations are needed for the physical layer of UWOC to mature, one being the selection of a light wavelength that is suitable for use underwater. In the presence of underwater microscopic particulates and dissolved organic matter in different ocean waters, absorption and multiple scattering cause irreversible loss of optical intensity and severe temporal pulse broadening, respectively [43], which in turn degrade the 3 dB channel bandwidth [44]. Because of the low attenuation coefficients, blue-green light is preferable in clear and moderately turbid water conditions [45]. For highly turbid water, the channel bandwidth can be broadened by using a red laser because of the lower scattering at longer wavelengths, as investigated numerically by Xu et al. [46]. Based on that study, Lee et al. demonstrated the performance enhancement experimentally by utilizing a near-infrared laser; they showed that the overall frequency response of the system gains an increment of up to a few tens of megahertz with increasing turbidity [47]. These investigations led to the demonstration of real-time ultra-high-definition video transmission over underwater channels with different turbidities [48].

Besides the selection of a suitable transmission wavelength, recent years have seen much consideration of modulation schemes, system configurations, and optoelectronic devices. Efficient and robust modulation schemes and system configurations, such as orthogonal frequency-division multiplexing (OFDM) [49], pulse-amplitude modulation (PAM) [50], discrete multitone (DMT) with bit and power loading [51], and injection locking [52], are now used to achieve high data rates. Highly sensitive photodetectors such as photomultiplier tubes (PMTs) [53], single-photon counters [54], and multi-pixel photon counters are now used for long-haul communication [55]. Figure 2 summarizes the recent advances in laser-based UWOC systems [40], [42], [46], [49]-[70]. In that plot, the extinction length, defined as the product of the transmission range and the attenuation coefficient of the water channel, is used to normalize the effect of water turbidity.

Despite the aforementioned investigations, if UWOC is to be used in real oceanic environments, then we must consider how UWOC systems are affected by oceanic turbulence. One of the main challenges with conventional UWOC systems is posed by the strict requirements on pointing, acquisition, and tracking (PAT). It is especially challenging to maintain PAT in the presence of oceanic turbulence because of optical beam fluctuations and the resulting misalignments. To build robust UWOC links that mitigate the effect of turbulence, we highlight herein our solutions, including an NLOS UWOC modality, scintillating-fiber-based photoreceivers, and photovoltaic (PV) cells with a large active area as signal detectors to ease the PAT requirement. Furthermore, by using highly sensitive PV cells as photodetectors, we show simultaneous energy harvesting and signal detection in an underwater environment, thereby also providing a solution to the question of how to supply energy to an underwater data transceiver.
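Since Fig. 2 normalizes ranges by the extinction length, a short sketch may help make the bookkeeping concrete. The attenuation coefficient below (~0.15 m⁻¹, an often-quoted clear-ocean value) and the transmit power are illustrative, and the Beer-Lambert law accounts only for absorption and scattering loss, not geometric or pointing loss.

```python
import math

def extinction_length(range_m, attenuation_per_m):
    """Attenuation lengths: product of range and the water attenuation coefficient."""
    return range_m * attenuation_per_m

def received_power_dbm(p_tx_dbm, range_m, attenuation_per_m):
    """Beer-Lambert channel loss only: P_rx = P_tx * exp(-c * L)."""
    loss_db = 10 * math.log10(math.e) * attenuation_per_m * range_m
    return p_tx_dbm - loss_db

# e.g., a 50 m link in clear ocean water with a 20 dBm (100 mW) transmitter:
print(extinction_length(50, 0.15))            # 7.5 attenuation lengths
print(round(received_power_dbm(20, 50, 0.15), 1))
```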
II. OCEANIC TURBULENCE

In the presence of oceanic turbulence, the optical signal suffers random variations commonly known as scintillation. This phenomenon is due to random changes in the refractive index along the propagation path, which in turn cause random changes in the direction of photons traveling through the water medium. Because the active areas of commonly used photodetectors are kept small to ensure fast communication links, even slight variations in the direction of the beam can cause signal fading. Underwater turbulence, which can persist for a relatively long time, can be induced by variations in temperature, salinity, or pressure, and by air bubbles in the water channel. Understanding this turbulence-induced fading is critical to establishing long-distance yet stable UWOC links, which is the primary motivation for the vast amount of previous research into water turbulence and its effects on optical links. This research has examined the statistical characteristics of underwater turbulence, its impact on the propagation of light, and potential techniques to mitigate those effects.

One way to quantify the strength of the turbulence is to determine the scintillation index of the received signal, which is defined as the variance of the received normalized intensity and is expressed as

σ_I² = (⟨I²⟩ − ⟨I⟩²) / ⟨I⟩²,

where I is the received intensity and ⟨·⟩ denotes an average taken over a long duration. High values of the scintillation index correspond to strong turbulence, which results in poorer performance of UWOC links.

A study conducted in the Tongue of the Ocean in the Bahamas measured the refractive-index structure constant to quantify the strength of the turbulence [71]. Other experiments have been conducted in emulated laboratory environments to study statistically the histogram of the received intensity in the presence of turbulence-induced fading caused by random and gradient changes in temperature and salinity, and by air bubbles in the water channel [72]-[74]. In those studies, the experimentally obtained histograms were fitted with well-known statistical distributions, and the goodness of fit was reported in each case. Such statistical results allow underwater turbulence to be modeled in calculations and simulations and facilitate methods to counter the associated performance degradation. For example, a model was developed to produce a closed-form expression for the bit error ratios (BERs) in vertical underwater channels and was verified using computer simulations [75]. Numerical calculations have also been used to study turbulence and to confirm that increasing the aperture size improves the performance under turbulence-induced fading [76]. Similarly, it was also shown experimentally that using wider beams can improve the performance of UWOC links in the presence of air bubbles [77]. Using beam expansion and aperture averaging is analogous to using spatial diversity in MIMO systems, because the light beam travels through a wider space compared to a narrower beam. Moreover, spatial diversity can be achieved by using multiple transmitters. For example, the performance of a multiple-input single-output system has been evaluated [78], in which the transmitters were arranged in a uniform circular array, and it was shown that such a system improves the performance of UWOC links in turbulent water channels. A comprehensive study of the performance of MIMO systems has also been presented [79], and the performance of different wavelengths in the presence of temperature and salinity gradients has been studied [80].
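The scintillation index is straightforward to estimate from recorded intensity samples. Below is a minimal sketch, with log-normally distributed samples standing in for a weak-turbulence record; the log-normal choice here is purely illustrative, not a claim about the best-fitting distribution for any given channel.

```python
import numpy as np

def scintillation_index(intensity):
    """sigma_I^2 = <I^2>/<I>^2 - 1, the variance of the normalized intensity."""
    I = np.asarray(intensity, dtype=float)
    return np.mean(I**2) / np.mean(I)**2 - 1.0

# Emulated weak-turbulence record: log-normal intensity samples.
rng = np.random.default_rng(1)
I = rng.lognormal(mean=0.0, sigma=0.2, size=100_000)
print(scintillation_index(I))   # ~ exp(0.2**2) - 1 ~ 0.04
```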
A study conducted in the Tongue of the Ocean in the Bahamas measured the refractive-index structure constant to quantify the strength of the turbulence [71]. Other experiments have been conducted in emulated laboratory environments to study statistically the histogram of the received intensity in the presence of turbulence-induced fading caused by random and gradient changes in temperature and salinity, and by air bubbles in the water channel [72]-[74]. In those studies, the experimentally obtained histograms were fitted with well-known statistical distributions, and the goodness of fit was reported in each case. Such statistical results allow underwater turbulence to be modeled in calculations and simulations and facilitate methods to counter the associated performance degradation. For example, a model was developed to produce a closed-form expression for the bit error ratio (BER) in vertical underwater channels and was verified using computer simulations [75]. Numerical calculations have also been used to study turbulence and to confirm that increasing the aperture size improves the performance under turbulence-induced fading [76]. Similarly, it was shown experimentally that using wider beams can improve the performance of UWOC links in the presence of air bubbles [77]. Using beam expansion and aperture averaging is analogous to using spatial diversity in MIMO systems because the light beam travels through a wider space than a narrower beam does. Moreover, spatial diversity can be achieved by using multiple transmitters. For example, the performance of a multiple-input single-output system has been evaluated [78], in which the transmitters were arranged in a uniform circular array, and it was shown that such a system improves the performance of UWOC links in turbulent water channels. A comprehensive study of the performance of MIMO systems has also been presented [79], and the performance of different wavelengths in the presence of temperature and salinity gradients has been studied [80]. The results showed that the scintillation index decreases significantly with wavelength, which suggests that performance can be improved by using longer wavelengths because they are more immune to scintillation. However, it is important to note the critical tradeoff between using longer wavelengths, which suffer from higher attenuation, and using shorter wavelengths, which suffer from stronger turbulence-induced fading. Furthermore, the reciprocity of the effects of underwater turbulence on UWOC performance has also been studied [81]. The importance of channel reciprocity lies in the fact that it alleviates the need for feedback to the transmitters to provide channel state information in duplex links, because the transmitters can extract that information from the received signals.

To show how turbulence affects the beam position, we used a quadrant-detector sensor head (PDQ90A; Thorlabs) with its auto-aligner cube (KPA101; Thorlabs) to monitor the change in beam position in the presence of a 0.1 °C/cm temperature gradient. Figure 3 shows the relative position recorded over 100 s with a sampling rate of 1 kHz. Based on Fig. 3, we note that the beam position in the presence of turbulence changes randomly with time, thereby potentially degrading the performance of UWOC. We also note that the excursion on the horizontal axis (−1 to 1) exceeds that on the vertical axis (−0.4 to 0.4); this is due to the deformation of the beam by the vertical temperature difference, which gives the beam profile an oval shape. The oval shape is reflected in the variance of the relative position, which is 0.06 for the horizontal axis and 0.02 for the vertical axis.
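A sketch of the corresponding post-processing is given below. Since the raw quadrant-detector log is not reproduced here, an AR(1) random-walk surrogate (an assumption) stands in for the recorded (x, y) positions, tuned so that its stationary variances land near the reported 0.06 and 0.02.

```python
import numpy as np

fs, duration = 1000, 100          # 1 kHz sampling over 100 s, as in Fig. 3
rng = np.random.default_rng(1)

def ar1(n, rho, scale):
    """AR(1) surrogate for slow beam wander (stationary var = scale^2/(1-rho^2))."""
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = rho * x[k - 1] + scale * rng.standard_normal()
    return x

x = ar1(fs * duration, 0.999, 0.011)    # horizontal: wider spread (oval beam)
y = ar1(fs * duration, 0.999, 0.0063)   # vertical: narrower spread

print("var(x) ~", round(x.var(), 3))    # close to the reported 0.06
print("var(y) ~", round(y.var(), 3))    # close to the reported 0.02
```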
III. NON-LINE-OF-SIGHT UNDERWATER WIRELESS OPTICAL COMMUNICATION

Because of the complexity of the oceanic environment, including turbulence [80], turbidity [82], and undersea obstacles [77], severe signal fading occurs if the optical link becomes misaligned in line-of-sight (LOS) UWOC, leading to degraded information transfer. By contrast, NLOS UWOC [83], a modality that relieves the strict PAT requirements, promises robust data-transfer links in the absence of perfect alignment. An NLOS UWOC system relies on either reflection from the water surface [84] or light scattering [85] from molecules and particles in the water (e.g., plankton, particulates, and inorganics). Compared with reflection-based NLOS, scattering-based NLOS is more robust because it avoids the possibility of signal fading from a wavy surface. Furthermore, to receive the signal, reflection-based NLOS requires a certain pointing angle to the water surface so that the reflected light travels into the field of view (FOV) of the receiver. Therefore, we focus herein on scattering-based NLOS, which entirely relieves the PAT requirements. In such links, the transmitted photons are redirected multiple times by the molecules in the water before being detected by the photoreceiver. Therefore, a light beam with strong scattering properties is favorable in NLOS UWOC. Cox et al. measured the total light-scattering cross sections of microscopic particles across the entire visible spectrum [86]. They showed that shorter wavelengths exhibit higher scattering for both Rayleigh and Mie scattering. Therefore, blue light (400-450 nm), the shortest-wavelength visible light, is preferred for NLOS UWOC. However, constrained by the general state of device development, previous work on NLOS UWOC relied mainly on simulations. Monte Carlo simulations [87] and the Henyey-Greenstein (HG) phase function [88] were used to develop models describing the trajectories of the transmitted photons. The impulse response [85], BER performance [89], and the effects of channel geometry on path loss [83], [90] have also been predicted based on theoretical simulations. Herein, for the first time, we experimentally demonstrate a high-speed blue-laser-based NLOS UWOC system in a diving pool.

In our pool deployment, we used as the transmitter a 450 nm blue LD (PL TB450B; Osram) operating at 0.18 A with an optical emission power of 50 mW, enclosed in a remotely operated vehicle (ROV-1), and as the receiver a PMT (R955; Hamamatsu) with a high sensitivity of 7×10⁵ A/W, carried by ROV-2. As shown in Fig. 4(a), the laser and PMT were separated by either 1.5 or 2.5 m. At the far end of the laser beam, a beam dump made of black silicon was used to minimize the light reflected from the pool wall and to ensure that all the received light was due to the scattering process. As shown in Fig. 4(b), the laser and PMT pointed in parallel to fully relieve the alignment requirements. At the transmitter side, an alternating-current (AC) signal was generated by a pattern generator (ME522A) with a pseudorandom binary sequence of pattern length 2¹⁰ − 1, modulated with non-return-to-zero on-off keying (NRZ-OOK). The PMT was operated at 15 V with a high-voltage controller voltage of 2 V, and an OD2 neutral-density filter was placed in front of the PMT window to keep the incident power within the detection range of the PMT. The water was pool water with an absorption coefficient of 0.01 m⁻¹ and a scattering coefficient of 0.36 m⁻¹.

Figure 5 shows that a data rate of 48 Mbit/s was achieved with a BER of 2.6×10⁻³ when the transmitter-receiver separation distance was 1.5 m, which is below the forward-error-correction (FEC) limit of 3.8×10⁻³. Meanwhile, for the separation distance of 2.5 m, a maximum data rate of 20 Mbit/s was obtained with a BER of 2×10⁻⁴. The corresponding eye diagrams are shown in Fig. 6. Upon increasing the data rate, the eyes close, indicating a higher BER. Moreover, the eyes for the 1.5 m separation are cleaner than those for 2.5 m; the extra noise at 2.5 m is due to the weaker received light and the increased inter-symbol interference caused by multipath scattering at the greater separation. Nevertheless, we have demonstrated, for the first time, a high-speed NLOS UWOC link with the PAT requirements fully relieved by using a blue laser. Furthermore, we envisage that longer-haul NLOS UWOC could be developed in the future based on photon-counting modes using algorithms for pulse counting, synchronization, and channel estimation [91].
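The following hedged Python sketch mimics the receiver-side check reported above: a PRBS-10 pattern, as used by the pattern generator, is thresholded in additive noise and the estimated BER is compared against the FEC limit of 3.8×10⁻³. The noise standard deviation is an assumed value chosen only to place the BER near that limit, not a measured link parameter.

```python
import numpy as np

rng = np.random.default_rng(2)

def prbs10(length):
    """PRBS-10 bit stream from an LFSR with taps x^10 + x^7 + 1 (period 2^10 - 1)."""
    state = np.ones(10, dtype=int)
    out = np.empty(length, dtype=int)
    for k in range(length):
        fb = state[9] ^ state[6]
        out[k] = state[9]
        state = np.roll(state, 1)
        state[0] = fb
    return out

bits = prbs10(100_000)                     # NRZ-OOK levels: 0 (off) and 1 (on)
sigma = 0.17                               # assumed receiver noise std (illustrative)
rx = bits + sigma * rng.standard_normal(bits.size)
rx_bits = (rx > 0.5).astype(int)           # threshold midway between the OOK levels
ber = np.mean(rx_bits != bits)
print(f"BER = {ber:.2e} (FEC limit 3.8e-3: {'pass' if ber < 3.8e-3 else 'fail'})")
```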
IV. OMNIDIRECTIONAL FIBER PHOTODETECTOR WITH LARGE ACTIVE AREA

Paving the way for the upcoming era of the Internet of Underwater Things (IoUT), developments on the transmitter side have enabled transmission of up to gigabits per second in underwater environments [92]. However, on the receiver side, the small detection area of conventional photodiodes impedes practicality. Although commercial photodiodes have demonstrated modulation bandwidths of up to gigahertz, their detection areas are limited to only a few square millimeters. This is largely attributable to the resistance-capacitance limit of the photodiode [93]. Considering the severe conditions in underwater environments, and to relieve the strict PAT requirement, large-area photoreceivers with high modulation speeds are essential both for practicality and to improve the connectivity among trillions of IoUT devices.
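The resistance-capacitance limit mentioned above can be made tangible with the sketch below, which evaluates f_3dB = 1/(2πRC) for a photodiode whose junction capacitance scales with active area. The 10 pF/mm² capacitance density and the 50 Ω load are assumed, order-of-magnitude values rather than datasheet figures.

```python
import math

def rc_limited_bandwidth(area_mm2, cap_per_mm2_pf=10.0, load_ohm=50.0):
    """f_3dB = 1 / (2*pi*R*C) for a photodiode whose junction capacitance
    grows linearly with active area (assumed 10 pF/mm^2, illustrative)."""
    c_farad = area_mm2 * cap_per_mm2_pf * 1e-12
    return 1.0 / (2.0 * math.pi * load_ohm * c_farad)

for area in (1, 10, 100, 500):  # mm^2; 500 mm^2 ~ the 5 cm^2 fiber bundle below
    print(f"{area:4d} mm^2 -> f_3dB ~ {rc_limited_bandwidth(area)/1e6:8.2f} MHz")
```

Under these assumptions the RC-limited bandwidth collapses from hundreds of megahertz at 1 mm² to well below 1 MHz at 5 cm², which is precisely the trade-off the scintillating-fiber approach sidesteps.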
Scintillating fibers, which rely on the photon-conversion process of the doped molecules in the fiber to propagate the converted light to the fiber end, were used as optical receivers for corona discharges in early work [94], [95]. Having working principles similar to those of luminescent solar concentrators [96]-[99], scintillating fibers rely on the doped molecules in the core of the fiber to absorb the incoming light and re-emit it at a longer wavelength. The re-emitted light then propagates efficiently along the fiber core to the fiber end. The first demonstration of scintillating fibers as the photoreceiver for free-space optical (FSO) communication was reported by Peyronel et al. in 2016 [100]. That design was devised for indoor visible-light communication under eye-safe conditions. The advantages of scintillating fibers include the flexibility to form large-area photoreceivers of various sizes with no significant deterioration in response speed. Inspired by these prior studies, we aim to demonstrate the fundamental potential of scintillating fibers as large-area photoreceivers for UV-based UWOC. Compared with traditional photodiodes, this would eventually improve the practicality of UWOC in actual ocean environments by providing a large angle of view and omnidirectional detection [101].

As a proof of concept, a large-area photoreceiver made of commercially available scintillating fibers was constructed, as shown in Fig. 7. As shown in Fig. 7(a), the photoreceiver comprises around 90 strands of scintillating fiber and thus forms a planar detection area of roughly 5 cm². To demonstrate the modulation capabilities of the scintillating-fiber-based photoreceiver, we used a 375 nm UV LD (NDU4116; Nichia) as the transmitter to send a modulated optical signal over a 1.5-m-long water channel. The photoreceiver was placed at the other end of the water tank, and the fiber ends were coupled into a commercial avalanche photodetector (APD) (APD430A2; Thorlabs) through a series of condenser lenses. Figure 7(b) shows the collimated UV light beam incident on the planar detection area of the large-area scintillating-fiber-based photoreceiver. It is apparent that the photoreceiver is sufficiently large to cover the entire profile of the collimated beam with no additional lenses. In addition, the small-signal frequency response of the large-area scintillating-fiber-based photoreceiver was tested over the same water channel. Figure 8 shows the small-signal frequency response of the photoreceiver, with a 3 dB bandwidth of 91.91 MHz, which is high compared with that of a conventional photodiode of the same detection area. The modulation bandwidth is governed primarily by the recombination lifetime of the dye molecules [100], [104], which eliminates the need to balance the design trade-off between detection area and modulation bandwidth that constrains conventional photodiodes. Moreover, although the angle of view of a conventional photodiode can be improved by using additional receiver lenses, it is challenging to attain flexibility and omnidirectional detection that way. The inset of Fig. 8 shows a photograph of the large-area scintillating-fiber-based photoreceiver, which is flexible enough to form a spheroid-like photoreceiver for omnidirectional detection.

Moreover, the data rate of the scintillating-fiber-based photoreceiver in an underwater communication link was tested by modulating the 375 nm UV laser. The transmitter was connected to a BER tester (J-BERT N4903B; Agilent) for OOK signal generation. The signals were transmitted through a 1.5-m-long water channel to the scintillating-fiber-based photoreceiver before being coupled into the APD, which was then connected back to the J-BERT. Figures 9(a) and (b) show the eye diagrams and corresponding BERs below the FEC limit at 150 Mbit/s and at the maximum attainable rate of 250 Mbit/s, respectively. Thus, the potential of a large-area, high-bandwidth scintillating-fiber-based photoreceiver for UV-based data transmission in underwater channels is demonstrated. By using a more complex modulation scheme (e.g., PAM, OFDM, DMT) coupled with bit-loading and pre-equalization techniques, data rates of up to gigabits per second could be expected with the large-area scintillating-fiber-based photoreceiver.
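A sketch of how such a 3 dB bandwidth can be read off a measured small-signal sweep is given below. The single-pole response used as input is a synthetic stand-in (an assumption) with its pole placed near the reported 91.91 MHz.

```python
import numpy as np

def f3db(freq_hz, response_db):
    """Interpolate the frequency at which a measured small-signal response
    first drops 3 dB below its low-frequency value."""
    resp = np.asarray(response_db, dtype=float)
    ref = resp[0]                                  # low-frequency reference level
    below = np.nonzero(resp <= ref - 3.0)[0]
    if below.size == 0:
        return None                                # never drops 3 dB in the sweep
    i = below[0]
    f1, f2 = freq_hz[i - 1], freq_hz[i]            # bracketing sweep points
    r1, r2 = resp[i - 1], resp[i]
    return f1 + (ref - 3.0 - r1) * (f2 - f1) / (r2 - r1)

# Synthetic single-pole response standing in for the measured sweep of Fig. 8
f = np.linspace(1e6, 300e6, 600)
resp = -10 * np.log10(1 + (f / 92e6) ** 2)         # pole placed near 92 MHz
print(f"f_3dB ~ {f3db(f, resp)/1e6:.1f} MHz")
```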
Table II summarizes the photodetection techniques used in UWOC. The photodetection scheme based on scintillating fibers offers a larger modulation bandwidth than prior work, without sacrificing detection area. Moreover, compared with conventional photoreceivers based on Si photodiodes [46], [59], [67], [102] and solar panels [103], the use of scintillating fibers provides large-area detection while preserving the modulation bandwidth of the accompanying Si-based photodiode. This could also bypass the costly and time-consuming development path toward a UV-based photoreceiver with a large detection area and a high response speed [104]-[107]. Hence, the approach can accelerate the realization of a UV-based NLOS communication modality that obviates the strict PAT requirements in UWOC.

V. PHOTOVOLTAIC CELLS FOR SIMULTANEOUS SIGNAL DETECTION AND ENERGY HARVESTING

Following the vigorous development of information technology and the popularization of the IoUT concept, energy supply has become a bottleneck for power-hungry UWOC devices. To support underwater equipment for massive data processing and long-distance communication, it is essential to develop and use sustainable energy resources and to explore advanced energy-storage technologies. As a renewable and green energy source, solar energy is undoubtedly an alternative for resolving these energy issues. In recent years, PV cells, which are increasingly popular alternatives to traditional photodetectors, have been studied extensively in the field of OWC [108]-[113]. Most of the previous work on PV cells for OWC has focused on improving the data rate and bandwidth by using various novel PV cells. In [111], 34.2 Mbit/s signals were received by using organic PV cells and a red laser over a 1 m air channel. In addition to using LEDs or lasers operating in the visible-light band together with silicon-wafer-based PV cells for OWC [108]-[111], researchers have also employed mature near-infrared laser sources and GaAs PV cells to implement efficient energy harvesting and high-speed FSO communication [113]. However, an essential prerequisite for realizing long distances and high speeds is strict alignment, which limits the use of conventional OWC for mobile underwater platforms. Consequently, work to resolve these issues remains lacking. Inspired by these previous studies, PV cells with the dual functions of signal acquisition and energy harvesting show good prospects for application in energy-hungry marine environments. In Ref. [103], the authors first stressed the importance of PV cells for UWOC in resolving energy issues in underwater environments. Considering the complexity of underwater channels, the authors also highlighted the advantages of PV cells with large detection areas, which can significantly alleviate the alignment issues caused by mobile transmitters and receivers. To promote the application of PV cells in practical UWOC scenarios, we use in the following a white laser with a large divergence angle for simultaneous lighting and optical communication in UWOC [121]. We explore PV cells with large detection areas that are capable of detecting weak light, which alleviates the alignment issues and lays the foundation for the future implementation of long-distance underwater communication.

Figure 10 shows the schematic of the PV-cell-based UWOC system. Because the measured bandwidth of the PV cell is only around 290 kHz, a highly spectrally efficient modulation format (i.e., OFDM) was used in the experiment to improve the data rate. The OFDM signals were generated offline. The bit number of the pseudorandom binary sequence was 2²⁰ − 1; the remaining OFDM parameters are given with the description of Fig. 10 below.
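A minimal sketch of the offline 4-QAM-OFDM generation described here (and completed with the description of Fig. 10 below) follows. The mapping of the 93 data subcarriers after a 10-bin gap near DC, the Hermitian symmetry used to obtain a real-valued drive signal, and the omission of the training-symbol construction are all assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
NFFT, N_DATA, DC_GAP, CP, N_SYM = 1024, 93, 10, 10, 150
QAM4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

k = np.arange(DC_GAP, DC_GAP + N_DATA)           # data-bearing subcarrier bins
blocks = []
for _ in range(N_SYM):
    data = QAM4[rng.integers(0, 4, N_DATA)]
    spec = np.zeros(NFFT, dtype=complex)
    spec[k] = data
    spec[NFFT - k] = np.conj(data)               # Hermitian symmetry -> real IFFT output
    t = np.fft.ifft(spec).real
    blocks.append(np.concatenate([t[-CP:], t]))  # prepend the cyclic prefix

waveform = np.concatenate(blocks)                # drive signal for the AWG (5 MHz rate)
print(waveform.size)                             # 150 * (1024 + 10) samples
```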
VI. FUTURE WORK

Beyond the challenges and solutions mentioned above, areas remain that require extensive investigation for practical UWOC deployment. An example is the physical layer of UWOC, which still requires considerable effort before networks can be constructed. Apart from the required compact, high-speed, and low-power optoelectronic devices, a solid understanding of water channels and modem algorithms is urgently needed, along with both analytical and computational exploratory studies. Higher-layer networking technologies are also in demand, including medium access control (MAC), localization, route discovery, and multihop communication. Furthermore, low-power and compact computing technology is a major consideration when designing a practical UWOC system for field deployment. This includes digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and future general-purpose computing platforms.

VII. CONCLUSIONS

While UWOC offers high-speed data transfer and complements the existing RF and acoustic technologies, its ultimate performance is affected by the complex underwater environment. The main concerns are (i) alignment loss under oceanic turbulence and (ii) the energy supply for power-hungry underwater devices. Oceanic turbulence, which is induced by temperature or salinity gradients in the water, causes time-varying behavior of seawater channels and thus results in severe distortion of received signals, large pointing errors, and even communication failure. However, such PAT issues, as well as energy harvesting in underwater environments, can be addressed by several novel system configurations and device innovations. NLOS UWOC, by taking advantage of underwater light scattering, significantly eases the PAT requirements. The demonstration of a 20 Mbit/s blue-laser-based NLOS UWOC link over 2.5 m proves the feasibility of alignment-free optical communication. Besides innovating the communication system configuration, it is also promising to mitigate such pointing errors by using novel photodetectors with a large active area. The study of a 250 Mbit/s, 1.5 m scintillating-fiber-based photoreceiver link with a 5 cm² active area shows the capability of easing the alignment issues while still maintaining high-speed communication. The PV cell, with a large active area of 36 cm² as a photoreceiver, in turn shows great potential for simultaneous signal detection and energy harvesting for underwater sensors. Beyond the considerations and solutions mentioned, there are other core areas of research interest for field deployment, such as theoretical models and algorithms for randomly varying water channels, higher-layer networking technologies, and low-power computing systems for underwater environments. It can be envisaged that this comprehensive suite of technologies may soon revolutionize underwater communication to meet the demand for comprehensive undersea interconnectivity under the framework of the IoUT.

Fig. 2. Plot of data rate versus range (in terms of extinction length) of recent experimental work on laser-based UWOC.

Fig. 4. (a) Pool testbed for deployment of the 450 nm laser-based non-line-of-sight (NLOS) UWOC modality based on two ROVs. (b) Photograph of the transmitter and receiver pointing in parallel to form an NLOS configuration.
Fig. 3. (a) Beam position on the receiver side with no temperature gradient. (b) Beam position on the receiver side with a 0.1 °C/cm temperature gradient.

Fig. 8. Measured small-signal frequency response of the large-area scintillating-fiber-based photoreceiver over a 1.5-m-long water channel. The inset shows a photograph of the spheroid-like omnidirectional scintillating-fiber-based photoreceiver.

Apart from harvesting energy through the direct-current (DC) component of the light source, a PV cell can also convert AC signals superimposed on the light source back into electrical signals for signal detection. Table III summarizes the communication performance of several OWC systems based on different kinds of PV cells.

Fig. 9. Received BER and eye diagrams at (a) 150 Mbit/s and (b) 250 Mbit/s over a 1.5-m-long water channel using a 375 nm UV laser as the transmitter.

The size of the inverse fast Fourier transform was 1024. The numbers of efficient subcarriers and of subcarriers reserved for the frequency gap near DC were 93 and 10, respectively. The number of OFDM symbols was 150, including four training symbols for channel equalization and two for timing synchronization. The cyclic-prefix length was 10. Four-quadrature-amplitude-modulation (4-QAM) OFDM signals were sent from an arbitrary-waveform generator (AWG) with a sampling rate of 5 MHz. After being adjusted by an amplifier (AMP) and an attenuator (ATT), the OFDM signals were superposed on a white LD via a bias tee. Over a 2.4 m transmission distance in the diving pool, the optical signals were detected by a PV cell with a detection area of 36 cm² (6 cm × 6 cm). Note that the experiment was conducted in the daytime, so the main background noise is attributed to sunlight and the underwater channel. To separate the AC signals from the DC signals, a receiver circuit was designed for the PV cell. In addition, an amplifier and a filter were included to amplify the signals and to filter out noise outside the detection band. Finally, the signals were captured by a mixed-signal oscilloscope with a sampling rate of 25 MHz and processed offline. After transmission through the 2.4 m underwater channel, the achieved gross data rate of the OFDM signals was 908.2 kbit/s. The constellation map of the received 4-QAM-OFDM signals, shown in Fig. 11, is well converged; the corresponding BER was 1.010×10⁻³.

Fig. 10. Schematic of the UWOC system based on a PV cell.

Yujian Guo (S'18) received a Bachelor's degree in electrical engineering from the University of Electronic Science and Technology of China, Chengdu, Sichuan, China, in 2017. He is currently a Ph.D. student in the Department of Computer, Electrical and Mathematical Sciences & Engineering, KAUST, Kingdom of Saudi Arabia. His current research interests include underwater wireless optical communication and underwater optical channel characterization.

Mustapha Ouhssain received a B.Sc. degree in chemistry from Université Montpellier 2, France (2006) and an M.S. degree in Sciences, Technology and Marine Environments from Toulon University, Toulon, France. He is now a Laboratory Engineer in the Red Sea Research Center, KAUST, Saudi Arabia. His research interests include ocean optics, analytical services, and field and laboratory analysis of marine environments.

Yang Weng (S'15) received a B.S. degree (2015) from the Ocean University of China, Qingdao, China, and an M.S.
degree (2018) from the National Taiwan University, Taiwan, China. From 2018 to 2019, he was a visiting student at KAUST. His research interests include underwater wireless optical communication and the navigation of autonomous underwater vehicles.

Burton H. Jones received his Ph.D. degree from Duke University. He is now a Professor of Integrated Ocean Processes at KAUST. His current interests include biological oceanography, physical and biological interactions, ocean optics, coastal urban issues, and integrated observation and modeling.

Tien Khee Ng (SM'17) received his Ph.D. (2005) and M.Eng. (2001) from Nanyang Technological University (NTU), Singapore. He is a senior research scientist at KAUST, and a co-principal investigator responsible for innovation in MBE-grown nanostructures and devices at the KACST Technology Innovation Center at KAUST. His research focuses on the fundamental and applied research of wide-bandgap group-III nitride, novel hybrid materials and multi-functional devices for efficient light emitters, optical wireless communication and energy harvesting. He is also a senior member of OSA, and a member of SPIE and IOP.

Boon S. Ooi is a Professor of Electrical Engineering at KAUST. He received his B.Eng. and Ph.D. in electronics and electrical engineering from the University of Glasgow. His research focuses on the study of semiconductor lasers, LEDs, and photonic integrated circuits for applications in energy-efficient lighting and visible-light communication. He has served on the editorial board of IEEE Photonics Journal, and on the technical programs of IEDM, OFC, CLEO and IPC. Presently, he is an Associate Editor of Optics Express (OSA) and the Journal of Nanophotonics (SPIE). He is a Fellow of OSA, SPIE and IOP (U.K.). Besides, he is also a Fellow of the U.S. National Academy of Inventors (NAI).

A Review on Practical Considerations and Solutions in Underwater Wireless Optical Communication
Xiaobin Sun, Student Member, IEEE, Chun Hong Kang, Student Member, IEEE, Meiwei Kong, Member, IEEE, Omar Alkhazragi, Student Member, IEEE, Yujian Guo, Student Member, IEEE, Mustapha Ouhssain, Yang Weng, Student Member, IEEE, Burton H. Jones, Tien Khee Ng, Senior Member, IEEE, Boon S. Ooi*

Fig. 1. Data rate versus transmission range of published experimental work on underwater acoustic systems (from [10]).

Xiaobin Sun (S'19) received a B.S. degree in semiconductor physics from the University of Science & Technology, Beijing and an M.S. degree in photonics from King Abdullah University of Science & Technology (KAUST). He is currently working toward a Ph.D. degree at KAUST. His research interests include underwater wireless optical communication and free-space optical and visible-light communication.

Meiwei Kong (M'18) received a B.S. degree in Material Physics from Zhejiang Normal University, China, and a Ph.D. degree in Marine Information Science and Engineering from the Ocean College of Zhejiang University. She is a postdoctoral researcher at KAUST. Her research interest is underwater wireless optical communication.

Omar Alkhazragi (S'19) received the degree of Bachelor of Science in Electrical Engineering in 2018 from King Fahd University of Petroleum and Minerals (KFUPM), Saudi Arabia. He is now an M.S./Ph.D. student in electrophysics in the Photonics Laboratory at KAUST, Saudi Arabia. The primary focus of his research is on experimental and theoretical studies of optical wireless communication systems.
2019-12-19T09:22:04.850Z
2020-01-15T00:00:00.000
{ "year": 2020, "sha1": "6629f97a264af9eb6bec90769fce56be4b594b82", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/50/8966974/08933430.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "83a2705be3ac112274fb24ea3100d0563ebca6b0", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
199169740
pes2o/s2orc
v3-fos-license
Effect of Organic Matter on Swell and Undrained Shear Strength of Treated Soils

This paper presents a laboratory and statistical study of the swell and undrained shear strength of cemented organic clays blended with eco-friendly (by-product) cementitious materials, namely ground granulated blast-furnace slag (GGBS) and cement kiln dust (CKD). The presence of organic matter in soils can be very problematic, especially during the construction of infrastructure such as roads and foundations. Therefore, experimental and statistical investigations are crucial to further understand the effect of organic matter on the swell and strength performance of soils treated with by-product materials (GGBS and CKD). Five artificially synthesised organic clays with 0%, 5%, 10%, 15% and 20% organic matter were mixed with 20% cement during the first phase of mixing. In the second phase, the cement content was reduced to 4% and blended with 12% GGBS and 4% CKD. All mixed samples were cured for up to 56 days and subjected to undrained strength tests and a one-dimensional oedometer swell test. The undrained shear strength of the untreated soils decreases from 22.47 kPa to 15.6 kPa as the organic matter increases from 0% to 20%, while the swell increases from 1.17% to 3.83% over the same range. The results also show improvement in strength and swell upon the addition of 20% cement for all investigated samples. For samples treated with 4% cement plus 12% GGBS and 4% CKD, the treated soils showed better performance in terms of swell potential, owing to a greater reduction in plasticity compared with the soils treated with 20% cement. The undrained shear strength increases from 632 kPa to 804.9 kPa and from 549.8 kPa to 724.4 kPa with reduction in organic matter upon the addition of 20% CEM and of 4% CEM: 12% GGBS: 4% CKD, respectively, after 56 days. The results show that the inclusion of GGBS and CKD reduces swell and increases undrained shear strength irrespective of the percentage of organic matter, owing to the cementation effect. However, the statistical studies show that the presence of organic matter influences the extent of performance of the cement-, GGBS- and CKD-treated soils.

Introduction

Peat is an organic material, and soils rich in organic matter pose a serious challenge to vital geotechnical and geological land-development undertakings, especially in areas where their deposits are abundant and unavoidable. Organic clays are particularly unsuitable as bedrock or foundations for structures such as roads, railways, tunnels and buildings because of their potentially low bearing strength, high plasticity, high compressibility, low hydraulic conductivity and high shrinkage [1]. Soil-mechanics practitioners and researchers consider the study of the influence of organic matter content in clays to be very pertinent, not only because of the undesirable mechanical and geotechnical properties it confers but also because the amount and composition of organic matter vary greatly in natural deposits [2,3]. The variation of the soil properties is thought to be indirectly proportional to the organic matter content, and a minimum quantity of about 3-4% of organic matter is sufficient to cause a change in the soil properties [3]. Conversely, the behaviour of the clay has also been reasoned to depend on the nature of the organic matter [4].
Organic matter found in soils can loosely be divided into three groups: non-humic matter (vegetal, animal, or micro-organism remains), humic matter (alkanes, fatty acids, humic acids, fulvic acids, and humins) and anthropogenic contaminants (oils and a variety of compounds) [4,5]. Chemical stabilization of organic clays has been adopted over the past few decades to remedy their weak engineering behaviour. Moreover, studies on the influence of binder composition, curing conditions and testing methods on major engineering properties such as the unconfined compressive strength and one-dimensional compression behaviour of the stabilized product are rife [4,[6][7][8][9][10][11][12][13][14][15]. Only varying degrees of success have been achieved in the utilization of calcium-based hydraulic binders such as lime and cement in the stabilization process [16]. Some studies have indicated that the cementation process does indeed occur, with the formation of strong bonds between the binders and the organic clays leading to better mechanical behaviour despite the presence of organic contaminants in the stabilized product [17][18][19]. Notwithstanding, several studies have shown that organic matter negatively affects chemically stabilized soil, considering that the lime or cement grains become coated by the organics (mostly humic acids), thus delaying or preventing the formation of hydration-reaction products and therefore reducing the strength of the stabilized clays [4,20,21]. In fact, it has been reported that even a little over 1% humic acid content in clay can render the process of lime stabilisation ineffective [8,12,22]. The mechanisms of lime treatment of organic clay are believed to be influenced by the moisture content and by insufficient dissolution of the clay minerals during the pozzolanic reaction. Organic matter possesses a high water-retention capacity, hence limiting the quantity of water available for the hydration process [7,23]. Furthermore, high water content may produce more spacing between aggregates, thus hindering the required cementation bonding. There are also concerns regarding the possibility of leaching of calcium-based binders from organic clays after stabilization [12]. Hydrogeologists have shown that water-soluble organic carbon (WSOC) frequently transports contaminants through the clay soil profile and into the groundwater [24]. Therefore, WSOC may leach calcium ions from the binder over the course of time and render the binder ineffective.

In view of the foregoing, it is suggested that some of the problems that occur in the chemical stabilization of soils of high organic matter can be avoided by adopting either a partial or a total replacement of the hydraulic binders with industrial waste or by-product materials. Apart from potentially enhancing the engineering properties of the organic clays, the introduction of waste or by-product materials guarantees a reduction in the cost of construction as well as preservation of the environment. This recommendation is in line with the current growing trend towards the use of alternative binders in soil stabilization [13,14,[25][26][27][28][29][30][31][32][33][34][35][36][37]. The inclusion of cement, PFA and GGBS in the treatment of clayey soils reduces the plasticity index of the treated soil [30]. However, this index property can be influenced by the presence of organic matter in the clay and can affect strength.
Undoubtedly, the effects of calcium-based stabilizers on the geotechnical properties of soils have been studied extensively; however, there seems to be little or no research on the stabilization of organic clays by partially replacing cement with ground granulated blast-furnace slag (GGBS) and cement kiln dust (CKD). This paper explores the possible organic clay-binder interactions and the influence of the organic matter on the strength and swelling capacity of the stabilized products. To achieve this, predetermined quantities of Irish Moss Peat mixed with the clay were used to simulate the organic matter (by mass of dry clay) present in the clay. The resulting synthesized products were then stabilized with predetermined cement quantities, with some of the cement subsequently replaced by GGBS and CKD in different combinations. The following objectives were set in order to implement the aim of this study: i. investigation of the effect of organic matter on swell and undrained shear strength; ii. study of the performance of cement, and of the inclusion of GGBS and CKD, in enhancing the swell and undrained shear strength of the investigated soils; iii. statistical investigation of the significance of curing time, percentage of organic matter and binder type on the swell and undrained shear strength of the investigated soils. It is worth mentioning that the percentage of organic matter, as used in this study, refers to the percentage of Irish moss peat by weight of dry clay mixed with the clay.

Materials and Sample Preparation

The materials used in this study consisted of clay (Soil 1), Irish Moss Peat, Portland cement (CEM), GGBS and CKD. Irish moss peat is a fibrous, open-structured soil with a high water-holding capacity and a pH of 5.5-6, used mainly for reinvigorating garden soils, while GGBS is a cement substitute and a by-product of iron production. CKD is produced in high volumes during cement manufacturing and is mostly sent to landfill. GGBS and CKD are both by-product materials, and every year in the UK 2 Mt of GGBS is used in the cement industry. Essentially, GGBS comprises silicates and aluminosilicates of calcium and other bases that are produced in a blast furnace under molten conditions simultaneously with the iron. There are many advantages of using GGBS and CKD in soil mixing, especially the economic benefits [30]. Therefore, studying the possibility of using cement with GGBS and CKD in soil mixing instead of landfilling these by-products is worthwhile, because they will continue to be produced for as long as cement is manufactured. In this study, 0%, 5%, 10%, 15% and 20% contents of organic matter (Irish Moss Peat) by weight of dry clay were thoroughly mixed with the clay for a period of 10 min. The simulated clay-organic matter mixed soils are referred to as Soil 1, Soil 2, Soil 3, Soil 4 and Soil 5, as presented in Table 1. Thereafter, the soil-mixing operation was carried out following the procedure outlined in EuroSoilStab [38] for cementation of soils. In order to investigate the effect of organic matter on the strength and swell properties of the treated soils, the different soil types were mixed with CEM and with a combination of CEM, GGBS and CKD, as presented in Table 2. In the first stage of mixing, the five soil types were mixed with 20% cement content by weight of dry soil at the optimum moisture content.
Samples for the strength tests were placed into 40 mm diameter by 76 mm high cylindrical tubes in stages, and in each stage the cylinder was tapped several times against a hard surface to ensure the removal of any air bubbles trapped within the samples. To ensure that an equal degree of compaction at saturation was achieved for all mixed samples, a dead weight of 10 kg was placed on the mixed samples in the cylindrical tubes before extraction. Samples for the swelling test were prepared by extracting core samples 75 mm in diameter and 20 mm in height from compacted treated soils in a Proctor mould. According to Ganjian et al. and Jalull et al. [39,40], the production of cement accounts for approximately 8% of global CO2 emissions, and cement can be substituted in a cement mix with GGBS up to 55%. Therefore, in order to exploit the economic and environmental benefits of GGBS and CKD in soil mixing, the second stage of mixing considered an 80% reduction in cement and substitution with 60% GGBS and 20% CKD, leading to the production of cemented soils consisting of a blend of 4% CEM, 12% GGBS and 4% CKD. All treated samples were sealed, wrapped in thick plastic and cured under water for 7, 14, 28 and 56 days.

Laboratory Testing

After proper mixing, Atterberg limit tests were conducted on the treated organic clay soils to obtain the plasticity index of the soils, based on the procedures outlined in BS 1377-2: 1990.

Unconfined and 1-D Oedometer Compression Tests

In most experimental programs reported in the literature, unconfined compression and oedometer tests have been widely used to study undrained shear strength and swell because they are simple and fast, yet reliable and cheap. The indirect methods of swell prediction are based mainly on the properties of the soil tested and on classification systems, whereas the direct methods rely on physical measurement of the swelling potential, mainly using 1-D oedometer compression tests in the laboratory [41]. Therefore, in order to investigate the performance of cement, and of the inclusion of GGBS and CKD, in enhancing the swell and undrained shear strength of the investigated soils, both the treated and untreated samples were subjected to a series of unconfined compression and swell tests following the procedures outlined in BS 1377-7: 1990 and BS 1377-6: 1990, respectively. Samples were tested for strength after each curing period, and the undrained shear strength was taken as half of the average unconfined compressive strength of three samples, while the samples subjected to the one-dimensional oedometer compression test under a 5 kPa load were tested after 7 days.

Statistical Validation of Investigated Factors

A two-way analysis of variance (ANOVA) was conducted as a means of investigating the significance of the effects of organic matter, curing time and binder combination on swell and undrained shear strength. A two-way ANOVA without replication was performed on undrained shear strength and swell, assuming that the data are approximately normal and that the groups may have different means but the same standard deviation. The variance was estimated, and the F-statistic and p-value were adopted for the significance tests. Consider an I × J analysis of variance in a two-way design, defined in terms of two independent factors (namely curing time and percentage of organic matter) applied in the prediction of a response variable, the undrained shear strength (µ).
An I × J matrix for the two-way ANOVA can be defined, where I and J represent the levels of the curing time and the percentage of organic matter, respectively. In this analysis, every level of the curing time (I) appears in combination with every level of the percentage of organic matter (J), resulting in a 5 × 4 matrix and enabling the comparison of I × J groups, as shown in Table 4.

Table 4. Group means µij of the I × J (5 × 4) design:
1: µ11 µ12 µ13 µ14
2: µ21 µ22 µ23 µ24
3: µ31 µ32 µ33 µ34
4: µ41 µ42 µ43 µ44
5: µ51 µ52 µ53 µ54

Where the sample size for level i of the curing time and level j of the percentage of organic matter is nij, the total number of observations is given as

N = Σi Σj nij.

Assuming independent simple random samples (SRSs) of size nij from each of the I × J groups, with possibly different means µij but equal standard deviation σ, where µij and σ are unknown parameters, and letting xijk be the kth observation from the group having curing time at level i and percentage of organic matter at level j, the statistical model can be written as

xijk = µij + εijk,

where the deviations εijk are assumed to be independent and normally distributed with mean 0 and standard deviation σ.

In investigating the effect of organic matter and time on the undrained shear strength of the investigated soils, a null hypothesis was developed, stating that the variation in organic matter has no effect on strength and swell except for changes in strength and swell over time. The test of significance was performed using ANOVA to assess the strength of the evidence against the null hypothesis. The significance of the effect of the investigated variables was tested by comparing the p-value against a significance level of 0.05 to ascertain the reliability and validity of the analysis [42].
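A hedged sketch of this two-way ANOVA without replication, using the statsmodels Python package, is given below. The strength readings are synthetic stand-ins (assumptions), not the measured data behind Tables 5-10; with one observation per cell, no interaction term can be fitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical strength readings (kPa): 5 organic-matter levels x 4 curing times,
# one observation per cell (two-way ANOVA without replication).
organic = [0, 5, 10, 15, 20]
days = [7, 14, 28, 56]
rng = np.random.default_rng(4)
rows = [{"organic": o, "days": d,
         "strength": 600 - 8 * o + 2.5 * d + rng.normal(0, 15)}
        for o in organic for d in days]
df = pd.DataFrame(rows)

model = ols("strength ~ C(organic) + C(days)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)  # F-statistic and p-value per factor
print(table)  # reject H0 for a factor when its p-value < 0.05
```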
Results and Discussion

The problems associated with swelling soils have generally occurred in clay soils dominated by expansive lattice-type minerals such as montmorillonite [43]. However, clay soils can also be classified according to their range of plasticity as low-, medium- or high-swelling soils. This means that any factor capable of influencing the plasticity of a clay can influence its swelling potential. The presence of organic matter in clay is one such factor, owing to its effect on the liquid limit. Clay soils with low to high swelling capacities swell with increasing moisture content, and this makes construction activities difficult because of density variation and consequent strength variation. Undoubtedly, cement has been widely employed in the construction field for the enhancement of plasticity, soil strength and compressibility [44][45][46][47]. However, with concerns about the environmental impact of cement-soil mixing becoming increasingly urgent, the application of by-product (eco-friendly) materials such as GGBS and CKD in soil mixing cannot be overemphasised. Therefore, this study has investigated the strength and swelling capacity of a cemented medium-swelling clay blended with GGBS and CKD.

Undrained Shear Strength

The shear strength is a fundamental property required in the analysis of construction projects over organic soils, and it generally has a limiting low value for such soils. Therefore, in the present study, the effect of varying organic matter on the undrained shear strength of cemented clay blended with GGBS and CKD was studied. The results presented in Figure 1 show a decrease in undrained shear strength for the different soil types in order of increasing organic matter (0 to 20%), owing to the high affinity of organic materials for water.

The undrained shear strength of the untreated soils decreases from 22.47 kPa to 15.6 kPa as the organic matter increases from 0% to 20%. The undrained shear strength of the investigated soils increases with time upon mixing with 20% CEM and with 4% CEM: 12% GGBS: 4% CKD, owing to the hydration reaction, as shown in Figure 2(a-b). The strength increase can also be attributed to the gradual formation of cementitious compounds between CKD and GGBS, owing to the higher cementing power of GGBS and a "pore-blocking" effect of the hydraulic reactions of GGBS and CKD. Soil-GGBS mixtures could be used in highway embankments, and this implies that deep mixing of soil with GGBS and a reduced amount of cement can provide fill materials of strength comparable to most overconsolidated soils [48]. For clay soils containing organic matter, the fibrous organic materials absorb water during mixing and lose it after the hydration process, and this creates barriers to soil particle-to-particle interactions. The fibrous barrier increases with increasing organic matter in the soil and reduces the cementation effect and the bonding between individual soil particles, hence the decrease in undrained shear strength. As a result, the undrained shear strength of the treated soil with 0% organic matter (Soil 1) increases with time irrespective of additive type, as shown in Figure 2(a-b). The undrained shear strength increases from 632 kPa to 804.9 kPa and from 549.8 kPa to 724.4 kPa with reduction in organic matter upon the addition of 20% CEM and of 4% CEM: 12% GGBS: 4% CKD, respectively, after 56 days. The results show that the inclusion of GGBS and CKD reduces swell and increases undrained shear strength irrespective of the percentage of organic matter, owing to the cementation effect. Interestingly, the inclusion of GGBS and CKD reduces the effect of organic matter on the undrained shear strength (normalised arbitrarily at 28 days) compared with the cement-only treated soils, owing to the pozzolanic reaction, as shown in Figure 3(a-b). The results indicate that an increase in organic matter suppresses the pozzolanic reaction of the treated soils. It has been stated that the stabilization effect on soil-cement admixtures increases when fly ash additives are added to cement, even at high organic matter content, owing to the pozzolanic reaction [49]. The strength increase upon the addition of GGBS and CKD can be attributed to the presence of certain amounts of SiO2, Al2O3 and CaO, which enhance the pozzolanic reaction in the soil-cement mix during curing. It is true that the presence of organic matter in soils reduces the pozzolanic reactivity, but the addition of GGBS increases it [50]. The strength of an improved soil is an indication of the degree of reaction in the soil-binder-water mixture, based on the rate of hardening of the improved mixture. Therefore, the type of binder used during deep mixing of soils with medium to high organic matter is of great significance in determining the extent of improvement. For cement contents between 5% and 20%, soils of lower plasticity index exhibit greater strength enhancement than soils of slightly higher plasticity [30]. However, with the inclusion of GGBS/PFA blended with less than 5% cement, soils of higher plasticity also gain strength upon treatment, owing to Ca2+ exchange and the pozzolanic reaction [51]. Soils with organic content (peat) possess very low shear strength, low specific gravity and high water content [52].
However, the physical and mechanical properties of organic soils are enhanced upon stabilisation or treatment with by-product cementitious materials, as observed in the present study. This is due to the reduced void ratio and the filling of the space between organic soil particles by grouting or by a creatively prepared cementitious binder [53]. The strength gain can also be attributed to the neutralisation of humic acid within the organic soil, which propagates the formation of more calcium silicate hydrate gel and increases the strength and densification of the stabilised organic soil [54].

Swell

The effect of organic matter on the swell capacity of the different soil types was studied using the 1-D oedometer compression test under a 5 kPa load, with the displacement recorded at different time intervals over 24 hours, as shown in Figure 4(a-c). The displacement-time curves show that the maximum displacement increases with increasing organic matter for both the treated and untreated soils. However, the soils treated with CEM/GGBS/CKD show higher efficiency in reducing displacement than the soils treated with cement only. The swell capacity of the treated and untreated soils, defined in terms of percentage swell, was derived from the displacement-time curves at maximum displacement; a minimal sketch of this computation is given after this paragraph. The results presented in Figure 5(a-b) and Figure 6 show that the swell increases from 1.17% to 3.83% over the 0-20% range of organic matter and that, for the samples treated with 4% cement plus 12% GGBS and 4% CKD, the treated soils perform better in terms of swell potential, owing to a greater reduction in plasticity than in the soils treated with 20% cement. The presence of organic matter in the soils increases the liquid and plastic limits and hence the plasticity index. As stated in the previous section, swelling capacity increases with plasticity index, and this is evidenced in this study. The results plotted in Figure 5(a-b) show that an increase in organic matter causes a corresponding increase in plasticity index and hence in swell. This implies that the presence of organic matter in a clay of medium swell capacity increases the swell, owing to the increased moisture absorbed by the fibrous materials. The plasticity index of the cement-treated soils was found to be greater than that of the CEM/GGBS/CKD-treated soils at 20% additive content, as shown in Figure 5(a). This can be attributed to the higher plasticity index of the untreated soils with varying organic matter. Figure 6 shows that the greater reduction in plasticity index for the soils treated with CEM/GGBS/CKD resulted in a corresponding reduction in swell compared with the cement-treated soils at 20% additive content. This explains the suitability of cementitious by-product materials such as GGBS and CKD for the treatment of soils with varying organic matter, owing to the pozzolanic activity of GGBS and the water-absorptive quality of CKD.
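A minimal sketch of the percentage-swell computation from a displacement-time record follows. The 20 mm initial specimen height comes from the core-sampling description above, while the exponential displacement curves are synthetic stand-ins (assumptions) whose asymptotes match the reported 1.17% and 3.83% bounds.

```python
import numpy as np

H0_MM = 20.0  # initial specimen height from the core-sampling description

def percent_swell(displacement_mm):
    """Percentage swell = maximum recorded heave / initial height x 100."""
    return np.max(displacement_mm) / H0_MM * 100.0

# Stand-in displacement-time records (mm) over 24 h for two untreated mixes
t_h = np.linspace(0, 24, 97)
om_0pc  = 0.234 * (1 - np.exp(-t_h / 6))  # levels off near 1.17% swell (0% OM)
om_20pc = 0.766 * (1 - np.exp(-t_h / 6))  # levels off near 3.83% swell (20% OM)

for name, d in [("untreated, 0% OM", om_0pc), ("untreated, 20% OM", om_20pc)]:
    print(f"{name:20s} swell = {percent_swell(d):.2f} %")
```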
Statistical Validation of Investigated Factors

Statistical analysis was performed using ANOVA to investigate the contributions of the different levels of the independent factors (organic matter, curing time and additive type) to the undrained shear strength and swell. The significance of the effect of organic matter on the undrained shear strength and swell capacity of a medium-swelling clay was analysed. It was assumed that the independent factors have no effect on the undrained strength and swell (the null hypothesis). In order to test the significance of the null hypothesis, the p-values of the F-tests in the ANOVA were compared with an alpha level of 0.05. The influence of an independent factor in predicting the response variable is considered significant when the F-statistic is greater than the F-critical value and the corresponding p-value is less than 5%.

Effect of Organic Matter

i. Undrained Shear Strength

The effect of varying organic matter on the undrained shear strength (the response variable) was validated via analysis of variance in a two-way design. The ANOVA was defined in terms of two independent factors (organic matter and time). The significance of each factor was examined, and the results are presented in Tables 5 to 7. The result presented in Table 6 shows that the F-statistic is greater than the F-critical value, with a p-value less than 0.05. This implies that it can be claimed with 95% confidence that organic matter and curing time contributed to the observed changes in undrained shear strength, as expected. In other words, the investigated independent factors are significant in the observed changes in undrained shear strength, with the F-statistic greater than the F-critical value and the p-value less than 5%, as shown in Table 6. The results presented in Tables 7 and 8 also show the significance of the effects of organic matter and curing, and the reasons to reject the null hypothesis. Table 8 shows that the F-statistic is greater than the F-critical value, with a p-value less than 5%. This shows that the percentage of organic matter and the curing time are significant in the observed changes in undrained shear strength, which is consistent with the results presented in Section 3.1. This implies that the presence of organic matter tends to affect the performance of soils stabilised with cement and a blend of GGBS and CKD.

ii. Swell

In the case of swell, samples were tested only at 7 days, and therefore the analysis of variance in a two-way design was defined in terms of organic matter and type of additive. The significance of each factor for the swelling capacity of the investigated soils was examined and validated, and the results are presented in Tables 9 and 10, respectively. The result presented in Table 10 shows that the F-statistic is greater than the F-critical value, with a p-value less than 0.05. Therefore, it can be claimed with 95% confidence that organic matter and additive type contributed to the observed changes in the swelling capacity of the investigated soils. This is consistent with the results presented in Section 3.2 and is very strong evidence to reject the null hypothesis.

Conclusions

The undrained shear strength and swell potential are concerns both during construction, for supporting construction equipment, and at the end of construction, for supporting the structure. Organic soils behave distinctly differently from inorganic soils because of their fibrous organic matter. The effect of this fibrous matter on strength and swell has been investigated for cement- and CEM/GGBS/CKD-treated soils. Following the analysis and results of this study, the following conclusions have been drawn: The presence of organic matter in clay reduces the undrained shear strength and increases the swell potential of the untreated soil, owing to the fibrous organic matter and high moisture ingress, but treatment with CEM, GGBS and CKD reduces the swell potential.
The undrained shear strength of the soil with 0% organic matter treated with 20% CEM, or with 4% CEM plus 12% GGBS and 4% CKD, is greater in all cases than that of the soils with varying organic matter. Soils treated with 4% CEM plus 12% GGBS and 4% CKD show better performance in terms of reduction in the plasticity index of the treated soils and hence in swell reduction. The use of CEM with GGBS and CKD can provide engineering benefits in controlling the swelling of organic soils and enhancing the undrained shear strength when used in moderate proportions, especially in the case of cement. The results of the statistical studies show that the presence of organic matter influences the extent of performance of cement-, GGBS- and CKD-treated soils.
2019-08-02T22:12:36.469Z
2019-07-12T00:00:00.000
{ "year": 2019, "sha1": "1085b87a65aa46d21d5ef7b22e7c3ad9cd58ad30", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.jccee.20190402.12.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "1085b87a65aa46d21d5ef7b22e7c3ad9cd58ad30", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
2172501
pes2o/s2orc
v3-fos-license
Dermatofibrosarcoma Protuberans of Scalp With Cervical Lymph Node Metastasis

Dermatofibrosarcoma protuberans (DFSP) is an uncommon, slow-growing and locally aggressive tumor of the skin with a high rate of recurrence even after supposedly wide excision. Reports of regional lymph node metastasis and distant metastasis are very rare. Because of the extreme rarity of cases with metastasis, experience with the management of such patients is very limited. A case of recurrent DFSP of the scalp, with metastasis to the regional lymph nodes, in a 17-year-old boy is reported here. This is the second case of DFSP involving the scalp, and the 16th case of DFSP of all sites, metastasizing to the regional lymph nodes reported in the literature. The patient was treated with wide excision of the lesion and ipsilateral radical neck dissection (including excision of the overlying involved skin).

Case report

A 17-year-old boy was admitted to Lok Nayak Hospital with the chief complaints of a gradually increasing swelling in the right occipito-temporal region of the scalp for 11 years and multiple swellings in the ipsilateral neck for 1 year. His parents had first noticed a small 2 × 2 cm swelling on the right side of the scalp when the patient was 6 years old. This swelling was excised at a small peripheral medical center when the patient was 10 years of age. Until then, it had increased in size very slowly. The parents did not have any medical record or histopathological report. The swelling reappeared about 1 year after the excision. Since then, it had again been increasing in size slowly and painlessly. One year prior to presentation to us, the patient also noticed a few swellings in the right half of the neck. These were also painless and slow to increase in size. General physical and systemic examination of the patient showed no abnormality or unusual feature. All the findings were localized to the head and neck. Examination of the scalp showed an 8 × 6 × 4 cm fungating, non-tender swelling over the right temporo-parieto-occipital region. It was mobile over the scalp and firm in consistency. There was another swelling, 4 × 5 cm in size, on the right side of the neck. It was fixed to the sternocleidomastoid muscle and adherent to the overlying skin. There was no ulceration in the neck (Fig. 1). Multiple lymph nodes were palpable in the posterior triangle of the neck. They were mobile, firm in consistency and non-tender, with the largest about 2 cm in size. A clinical diagnosis of soft tissue tumor of the scalp with metastasis in the neck was made. A skiagram of the chest showed no evidence of metastasis. CT scan of the head and neck was suggestive of a malignant soft tissue tumor in the scalp with metastasis in the neck, with normal calvarium and intracranial structures. Fine needle aspiration cytology from the neck tumor was reported to be a malignant mesenchymal lesion suggestive of DFSP. The patient underwent wide excision of the primary on the scalp with a 4 cm margin all around. The neck nodes were managed by radical neck dissection on the right side of the neck, with wide excision (taking a 4 cm margin all around) of the involved adherent skin. The scalp lesion was adherent to the periosteum over an area 2.5 cm in diameter in the central region of the tumor, and hence the periosteum also required removal. The rest of the area had a healthy and intact periosteum. The denuded bone was healthy. The whole of the defect on the scalp and the neck was covered with a split skin graft from the thigh.
As expected, there was a small graft loss over the denuded bone, but conservative management using dressings with antibiotic ointment and petrolatum gauze every third day allowed granulation tissue to develop and the graft to spread. Complete healing occurred in 5 weeks (Fig. 2). The patient has remained disease-free over a follow-up period of 18 months. Histopathological examination of the excised scalp specimen showed the tumor cells to be spindle-shaped and at places arranged in a storiform pattern (Fig. 3). The tumor was poorly circumscribed and showed infiltrative margins. At a few places, the storiform pattern gave way to more fascicular areas. The overlying skin showed ulceration and infiltration by the tumor cells. The neck specimen showed histopathological features similar to the scalp lesion. Although there was no ulceration, the tumor cells reached up to the epidermis. The lymph nodes from the posterior triangle of the neck showed tumor metastasis (Fig. 4). Discussion Dermatofibrosarcoma protuberans (DFSP) is a tumor with a high rate of local recurrence and an extremely low but definite risk of metastasis. Hematogenous spread is very rare and lymphatic involvement is even rarer. In an extensive review of the literature by Rutgers et al. 1 involving 913 cases, 11 were found to have regional lymph node metastasis. They even tabulated the various parameters of all these cases reported by various authors. However, a later article by Mavili et al. 2 reported their own case to be the tenth. Study of these two reports, their similarities and discrepancies, and other case reports published prior to or later than these review articles brings the figure to 15, with the present authors' case being the 16th case with lymph node metastasis in a case of DFSP. The articles reporting such cases, according to their year of publication, are those by Gentele in 1951, 3 Woolridge in 1957, 4 … 9 Hausner et al. in 1978, 10 Volpe et al. in 1983, 11 Petoin et al. in 1985, 12 Hirabayashi et al. in 1989, 13 Mavili et al. in 1994 2 and Lal et al. in 1999. 14 Of these articles, only the case reported by Hirabayashi et al. 13 had involvement of the scalp as the primary site. Thus, the case reported by the present authors becomes the second reported case of DFSP of the scalp with metastasis to the regional cervical lymph nodes. In view of the extreme rarity of regional metastasis, prophylactic lymph node dissection is not advocated, although dissection is clearly indicated when there is proven involvement. In the present case, since the overlying skin in the neck was also adherent over a large area, a wide excision of the skin was also done and the resultant defect covered with a split skin graft.
Total hip arthroplasty following failed fixation of proximal hip fractures Background: Most proximal femoral fractures are successfully treated with internal fixation, but a failed surgery can be very distressing for the patient due to pain and disability. For the treating surgeon it can be a challenge to perform salvage operations. The purpose of this study was to evaluate the short-term functional outcome and complications of total hip arthroplasty (THA) following failed fixation of proximal hip fractures. Materials and Methods: In a retrospective study, 21 hips in 20 patients (13 females and seven males) with complications of operated hip fractures, as indicated by either established nonunion or fracture collapse with hardware failure, were analysed. The mean age of the patients was 62 years (range 38 years to 85 years). Nine patients were treated for femoral neck fracture, 10 for intertrochanteric (I/T) fracture and two for subtrochanteric (S/T) fracture of the hip. Uncemented THA was done in 11 hips, cemented THA in eight and hybrid THA in two. Results: The average duration of follow-up was four years (2-13 years). The mean duration of surgery was 125 min and blood loss was 1300 ml. There were three dislocations postoperatively. Two were managed conservatively and one was reoperated. There was one superficial infection and one deep infection. Only one patient required a walker while four required a walking stick for ambulation. The mean Harris Hip Score increased from 32 preoperatively to 79 postoperatively at the one-year interval. Conclusion: Total hip arthroplasty is an effective salvage procedure after failed osteosynthesis of hip fractures. Most patients have good pain relief and functional improvement in spite of the technical difficulties and higher complication rates than primary arthroplasty. INTRODUCTION Due to the increase in the aging population, the number of hip fractures in the elderly is increasing. The management of these fractures ranges from conservative methods to osteosynthesis and primary replacement arthroplasty. More and more of these fractures are treated surgically by osteosynthesis for better rehabilitation and early return to function. Various factors causing failure following osteosynthesis include osteoporosis, unstable fracture reduction and poor implant position. Management of failed internal fixation of proximal hip fractures includes revision osteosynthesis or conversion total hip arthroplasty (THA). Total hip arthroplasty is generally accepted as the most successful salvage procedure for failure of these fixation devices. 1 Conversion of failed hip surgeries to THA is indicated where the bone quality is poor, the head is damaged due to previous internal fixation, bone stock is deficient, or the limb is shortened. Total hip arthroplasty in these patients may be difficult because of the presence of a previous implant, poor bone stock, scarred tissues and an increased risk of infection. The purpose of this study is to evaluate the short-term functional outcome, technical difficulties and complications associated with hip arthroplasty performed after failed fixation of proximal hip fractures. MATERIALS AND METHODS Between 1994 and 2005, 21 hips [Table 1] in 20 patients (13 females and seven males) with a mean age of 62 years (range, 38 to 85 years) were treated at our institution with hip arthroplasty after failed fixation of proximal hip fractures, as established by nonunion or implant failure.
Records of the patients were retrieved from our computer database. Out of the 21 hip fractures where conversion THA was done, 10 were intertrochanteric fractures, nine were fractures of the neck of femur and two were subtrochanteric fractures. In all cases primary reduction and fracture fixation had been done within three weeks of sustaining the fracture. Four out of 21 cases had two surgeries before conversion THA was done. In one hip, an intertrochanteric fracture (case no. 2) was treated with dynamic hip screw (DHS) fixation, which failed with loss of reduction. Osteotomy was done one year after the first surgery using a double-angle barrel plate, which also failed, for which uncemented THA was done. The second case (case no. 15), a 56-year-old female with an intertrochanteric fracture of the left femur, was treated with DHS fixation, which failed. A salvage operation with a proximal femoral nail was done seven months after the first surgery. The fracture went into nonunion and conversion THA was done one year after the second surgery [Figure 2]. In the third case (case no. 8), a 38-year-old male with a right femoral neck fracture was treated initially with closed reduction and DHS fixation. On implant failure, valgus osteotomy and DHS fixation were done after four months. The fracture of the neck of femur did not unite and hence conversion to uncemented THA was done six months after the second surgery. In the fourth case (case no. 4), a 66-year-old female patient with an intertrochanteric fracture of the left femur was treated with blade plate fixation but the fracture did not unite. Revision surgery was performed with DHS after four years and two months. The implant cut through the neck. Uncemented THA was done five years after the second surgery. Of the remaining seven intertrochanteric fractures, one patient (case no. 7) had fractures of both hips one year apart [Figure 3]. She had Parkinson's disease. The fractures on both sides were fixed initially with DHS, which cut through the head, and conversion THA had to be performed. One patient with an intertrochanteric fracture was fixed with a proximal femoral nail [Figure 4]. The lag screws backed out within six weeks and the fracture collapsed, for which conversion THA was done. The remaining four patients with intertrochanteric fractures were treated with DHS, where lag-screw cut-out occurred through the femoral head and conversion THA was performed. One fracture of the neck of femur was managed with primary Pauwels' osteotomy and fixation with a double-angle barrel plate [Figure 1]. The fracture remained ununited at the end of one year. In the other seven femoral neck fractures, fixation was done with cannulated hip screws and the fractures went into nonunion. In two of these cases the screws were broken. One of the subtrochanteric fractures was treated with a blade plate assembly; the implant failed and the plate lifted off the bone. The other subtrochanteric fracture was managed with DHS fixation; the lag screw backed out excessively with collapse and there was failure of fixation. In both cases the bones were very osteoporotic. Failure of primary fixation had also caused damage to the femoral head. Both patients were more than 80 years of age. Considering their advanced age, it was felt that THA would give them a better chance of early ambulation. All patients underwent detailed preoperative clinical examination and were evaluated for medical co-morbidities. Patients' medical history, operative notes, discharge summaries and previous and fresh radiographs were retrospectively reviewed.
Occult infection as a cause of failure was always considered, and a complete preoperative blood count with differential, determination of the erythrocyte sedimentation rate and C-reactive protein was done. Surgical procedure Total hip arthroplasty was performed by a team headed by the same surgeon. Cemented or uncemented THA was done as decided by the operating surgeon according to the age of the patient and the condition of the bone as seen on preoperative radiographs and intraoperatively. Cementless THA was the preferred choice. Physiological age and the patient's level of activity were major determinants. Younger and active patients were advised uncemented THA. In patients with a defect in the acetabulum, grafting and cemented THA were done. Uncemented THA was performed in 11 cases, cemented THA in eight cases and hybrid THA in two cases. An attempt was made to incorporate the previous scar in the incision, but if that was not possible a fresh incision was made. A transtrochanteric approach was used in those cases where the trochanter was fractured or avulsed (n=11). Intraoperative specimens were sent for Gram stain and AFB stain (in all cases) to rule out any infection. Implants were removed and bony defects in the femur and acetabulum assessed. Autograft was used in five cases to fill up the bony defect in the acetabulum. Cemented or uncemented THA was performed depending on the bone quality, bone defects, the patient's general condition and affordability. Replacement was done using standard technique and stability was checked before closure of the wound. Failed subtrochanteric and intertrochanteric fractures were more challenging and difficult than femoral neck fractures. There was abnormal position of the trochanteric mass because of rotational defects and malunion of the upper femur in the frontal and sagittal planes. There was difficulty in intraoperative identification of limb length due to loss of the usual landmarks such as the lesser trochanter. Removal of fracture screws sometimes required use of a trephine and bridging the last screw hole with a longer stem. While implanting cemented stems, screw holes were closed with either bone graft or the assistant's finger, but there was some cement extrusion from the medial side. Attention was paid to maintaining the integrity of the abductor mechanism. The trochanter, if detached, was reattached with the tension band wiring technique of Charnley (n=12). Active assisted exercises were started during the first postoperative day and, according to patient condition, ambulation started on the second or third day. Patients were first ambulated with a walker, then with a stick, and gradually progressed to ambulation without any support according to their recovery. Antibiotics were started on the day of surgery, with the first dose given preoperatively, and continued till the third postoperative day. Dressings were changed on the third postoperative day and at the time of discharge on the fifth day. Stitches were removed on the 14th day after surgery. Thereafter patients were reviewed at six weeks, three months, six months, one year and yearly thereafter. A follow-up proforma was filled in and clinical and radiological results were recorded at each visit. The cement grade of the femoral stem was evaluated according to the criteria reported by Barrack, Mulroy, and Harris. 2 Radiological loosening of the acetabular component was classified according to the criteria of DeLee and Charnley 3 and those proposed by Hodgkinson et al. 4
An acetabular component was considered to be loose if a continuous radiolucent line was evident in all three zones, or if the acetabular component had migrated. Migration of an acetabular component was defined by a change in the opening angle of more than 8° or a difference in component position of more than 3 mm between comparable radiographs. Fixation of the cementless femoral component was evaluated according to the criteria described by Engh et al. 5,6 and was classified as "bone ingrowth", "stable fibrous" or "unstable". Loosening of the cemented femoral component was evaluated according to the criteria described by Harris et al. 7,8 and was graded as "definite", "probable", "possible" or "none".
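The radiographic definitions above amount to simple threshold rules. The following is a minimal Python sketch of how they could be encoded, assuming hypothetical function and variable names that are not from the original paper; it is an illustration, not the authors' method.

def acetabular_component_migrated(angle_initial_deg, angle_followup_deg,
                                  position_initial_mm, position_followup_mm):
    # Migration: change in opening angle > 8 degrees, or a difference in
    # component position of more than 3 mm between comparable radiographs.
    angle_change = abs(angle_followup_deg - angle_initial_deg)
    position_change = abs(position_followup_mm - position_initial_mm)
    return angle_change > 8.0 or position_change > 3.0

def acetabular_component_loose(radiolucent_zones, migrated):
    # Loose: a continuous radiolucent line in all three DeLee-Charnley
    # zones, or a migrated component.
    return {1, 2, 3}.issubset(set(radiolucent_zones)) or migrated

# Example: a 10 degree change in the opening angle counts as migration,
# so the component is classified as loose.
print(acetabular_component_loose([1], acetabular_component_migrated(45, 55, 10.0, 11.0)))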
Results The median duration between the primary surgery for fixation of the fracture and THA was nine months (range 3 months to 8 years). The mean operating time for the hip arthroplasty was 125 min (range 95 min to 210 min), which included the time taken to remove retained hardware such as screws or barrel plates. The mean estimated blood loss was 1300 ml. In one patient difficulty was encountered in removing the screws from the plate, and the screw heads had to be cut to remove the plate and extricate the screws. Three patients had intraoperative complications. In two patients the proximal femoral canal fractured during reaming and was treated with cerclage wires. One patient had a fracture of the greater trochanter and wiring was done. Uncemented THA was performed in 11 cases, cemented THA in eight cases and hybrid THA in two cases. Trochanteric wiring was done in 12 patients. All surgeries were performed in a single stage. Three patients had postoperative complications in the form of dislocation of the hip. All the dislocations were early. In one the dislocation was anterior, observed on the third postoperative day; it was reduced under general anesthesia (GA) and maintained with careful and protected physiotherapy. There was no further dislocation afterwards. The second patient dislocated the head anteriorly on the first postoperative day; the hip was reduced under GA but was unstable on the operating table, so surgery was done and an acetabular lip liner with 10-mm elevation was oriented anteriorly. No further dislocation occurred thereafter. In the third patient, the hip dislocated during physiotherapy and was reduced under GA, but redislocated. It was managed conservatively with a master hinge brace for six weeks, after which there was no dislocation. Infection was seen in two patients. The superficial infection in one case was treated with antiseptic dressings and antibiotics only. The second case developed a deep infection eight months after THA. This was a case of fracture of the neck of femur where primary Pauwels' osteotomy with double-angle barrel plate fixation had been done. The patient was also a known diabetic. The wound was opened and debridement done, following which antibiotics were given and repeated dressings done. The prosthesis was retained and the infection was controlled in three months. One patient (case no. 1) had aseptic loosening of a Charnley cemented THA eight years after salvage arthroplasty; this was revised to uncemented THA and was stable at 63 months of follow-up. Four patients developed medical complications, namely paralytic ileus (n=2), urinary tract infection (n=1) and congestive heart failure (n=1). All recovered over a period of time. There was no intraoperative or postoperative mortality. The average duration of follow-up was four years (range two years to 13 years). There was no mortality during follow-up. The patients were followed up at the time of removal of skin stitches at two weeks, then at six weeks, three months, six months, one year and yearly thereafter. There was dramatic pain relief in all the patients, with four patients reporting moderate and three reporting mild residual pain. One patient was using a walker whereas four were using a walking stick for ambulation. The mean Harris Hip Score increased from 32 preoperatively to 79 postoperatively at the one-year interval. We did not find any clinical difference between cemented and uncemented hips with regard to pain and function. Radiographic follow-up of more than three years was done in 15 patients. In one patient there was aseptic symptomatic loosening of the cemented Charnley stem after eight years, which was revised to an uncemented stem. Stable nonprogressive radiolucent lines were found around the cup in two cases and around the stem in one case. Clinically, the patients were asymptomatic in all three cases. DISCUSSION Although most fractures of the proximal femur are treated with a favorable outcome, a complication can result in ongoing hip pain and disability. The reported failure rate of internal fixation for intertrochanteric fractures is in the range of 3-12%, with device penetration (2-12%), nonunion (2-5%) and malunion causing varus deformity (5-11%). 9 In displaced intracapsular hip fractures, 20-36% of patients initially treated with reduction and internal fixation required revision within two years, usually because of nonunion or avascular necrosis. 10 Parker et al. also showed a reoperation rate of 40% for displaced femoral neck fractures treated with internal fixation. 11 Total hip arthroplasty is generally accepted as the most successful salvage procedure for failure of these fixation devices. 1 Hip arthroplasty dramatically alleviated pain and improved function in the majority of these patients, for whom other salvage techniques would have been difficult or had been tried and had failed. The operation allowed most patients to regain function that otherwise would have been lost, which is the hallmark of an effective salvage procedure. The surgeon who is faced with failed internal fixation of a proximal hip fracture should always consider occult infection as a potential cause of the failure. Our current protocol involves a complete preoperative blood count with differential, determination of the erythrocyte sedimentation rate and C-reactive protein level. If there is evidence of infection, all hardware is removed, irrigation and debridement are performed, and the arthroplasty is performed in a staged fashion after the intravenous administration of organism-specific antibiotics. In our series there was no evidence of infection as a cause of failure of internal fixation in any of the cases and all surgeries were performed in a single stage. Failed internal fixation devices, frequently with broken screws, must be removed from the femur. Special instruments for the removal of broken screws can simplify this process. The surgery takes longer because the internal fixation device must first be removed. The surgeon must dissect through the old scars to expose the internal fixation device. This also increases blood loss. The ununited head and neck fragment or fragments usually lie in a deformed position and must be mobilized before being excised.
Many specific problems may occur during conversion of failed internal fixation of intertrochanteric fractures to hip arthroplasty. The anatomy of the proximal femur usually is distorted, especially if the reduction of the hip fracture was imperfect or if there is comminution of the medial bony buttress. The bone quality usually is poor as a result of preexisting osteoporosis, which decreases further as a result of disuse after the failure of internal fixation. The greater trochanter either is not solidly healed or can be fragmented again during hip arthroplasty, thus affecting abductor function, which leads to an increased dislocation rate and can adversely affect ambulatory function. In our series there were three dislocations, of which one was managed surgically and the other two were managed conservatively. Mabry et al. showed a dislocation rate of 9% for secondary total hip arthroplasty. 12 A high dislocation rate (6% for total hip replacement and 12% for hemiarthroplasty) has been demonstrated in other series in which THA was performed for the treatment of nonunion at the site of a femoral neck fracture. 13 Proper reattachment of the trochanter with either tension band wiring or a trochanteric plate is necessary for the stability of the hip and proper functioning of the abductor mechanism. One difficulty encountered in intertrochanteric fractures is containment of cement while it is being pressurized into the femoral medullary canal. The lag screw hole can be closed by the assistant's thumb, by firmly packed gauze, by a surgical glove inflated with saline or by fashioning a bone plug from the femoral head. 14 For the screw holes, one can apply direct finger pressure, use gauze, or use screws cut short to close the holes over the lateral cortex when cement is injected. 15 Complication rates in conversion THAs are higher than those seen in primary THAs. Infection rates generally increase in previously operated areas and with additional hardware. 16 The combined reported complications from published series are a deep infection rate of 3.8%, a periprosthetic fracture rate of 6.2%, a dislocation rate of 11.4%, an early implant failure rate of 1.5% and a reoperation rate of 10.9%. 9,17-22 These complication rates are higher than we would normally see in an osteoarthritic population undergoing primary THA. In our series also there was a deep infection rate of 4.76%, a reoperation rate of 4.76% and a dislocation rate of 14.28%, which is comparable to other studies of a similar nature. Several authors have found salvage THA for failed intertrochanteric fractures to be more difficult, with a higher potential for complications, than salvage THA for failed femoral neck fractures. Despite this finding, Haidukewych and Berry 17 reported relatively few complications and good pain relief and function in their large series of salvage THA after failed I/T fractures. By contrast, McKinley and Robinson 18 reported poor outcomes in their series of salvage THA for failed subcapital fractures. Our subanalysis of salvage THA for failed internal fixation of intertrochanteric fractures and intracapsular neck fractures did not demonstrate any difference in complication rate or clinical outcome. The majority of our patients had good pain relief and marked functional improvement. In the few patients with residual hip pain, the most common apparent cause was trochanteric nonunion or trochanteric bursitis.
Hip arthroplasty performed after failed internal fixation of proximal hip fractures is technically more difficult than routine primary THA, but despite these challenges it is an effective salvage procedure.
A randomized controlled intervention of workplace-based group cognitive behavioral therapy for insomnia Purpose Sleep disturbance is common in the working population, often associated with work stress, health complaints and impaired work performance. This study evaluated a group intervention at work, based on cognitive behavioral therapy (CBT) for insomnia, and the moderating effects of burnout scores at baseline. Methods This is a randomized controlled intervention with a waiting list control group. Participants were employees working at least 75% of full time, reporting self-perceived regular sleep problems. Data were collected at baseline, post-intervention and at a 3-month follow-up through diaries, wrist-actigraphy and questionnaires including the Insomnia Severity Index (ISI) and the Shirom–Melamed Burnout Questionnaire (SMBQ). Fifty-one participants (63% women) completed data collections. Results A multilevel mixed model showed no significant differences between groups for sleep over time, while there was a significant effect on insomnia symptoms when excluding participants working shifts (N = 11) from the analysis (p = 0.044). Moreover, a moderating effect of baseline levels of burnout scores was observed on insomnia symptoms (p = 0.009). A post-hoc analysis showed that individuals in the intervention group with low burnout scores at baseline (SMBQ < 3.75) displayed significantly reduced ISI scores at follow-up, compared to individuals with high burnout scores at baseline (p = 0.005). Conclusions Group CBT for insomnia given at the workplace did not reduce sleep problems looking at the group as a whole, while it was indicated that the intervention reduced insomnia in employees with regular daytime work. The results also suggest that workplace-based group CBT may improve sleep in employees with primary insomnia if not concomitant with high burnout scores. Electronic supplementary material The online version of this article (10.1007/s00420-018-1291-x) contains supplementary material, which is available to authorized users. Introduction Sleep problems are very common in the working population (Kessler et al. 2011; Lallukka et al. 2010) and negative daytime consequences are often reported in association with sleep complaints (Cooper and Dewe 2008; Ford et al. 2011; Haaramo et al. 2012; Hui and Grandner 2015; Kyle et al. 2013; Rosekind et al. 2010; Riemann and Voderholzer 2003; Shekleton et al. 2010; Söderström et al. 2012). This implies a considerable economic burden for the individual, the employer and society (Blom et al. 2015; Daley et al. 2009; Sivertsen et al. 2009; Walsh 2004). Consequently, cost-effective, evidence-based and easy-to-use interventions are warranted to treat incipient sleep problems and prevent development of more severe and chronic sleep disorders among employees. A group intervention for insomnia administered at the workplace would meet these requirements. Two recent meta-analyses (Koffel et al. 2015; Navarro-Bravo et al. 2015) confirm the efficacy of group CBT for insomnia on symptoms assessed through validated scales, and on sleep parameters such as sleep efficiency measured through diaries. However, among the 13 randomized controlled trials included, none had focused specifically on working individuals, and the interventions were mainly conducted in clinical settings. Insomnia is commonly present together with other conditions, such as anxiety, depression and burnout (Bélanger et al. 2004; Ekstedt et al. 2009; Johnson et al.
2006; Lustberg and Reynolds 2000; Riemann 2007; Roth and Drake 2004; Söderström et al. 2012). Earlier studies have found that CBT for insomnia is effective even in the presence of anxiety and depression (Lichstein et al. 2010; Manber et al. 2008; Rybarczyk et al. 2009; Smith et al. 2005; Stepanski and Rybarczyk 2006), also when delivered in group settings (Blom et al. 2015; Belleville et al. 2011; Edinger et al. 2009; Germain et al. 2006; Järnefelt et al. 2014; Okajima et al. 2011; Ye et al. 2015). However, there is a lack of studies investigating the effect of CBT for insomnia comorbid with burnout or stress-related exhaustion. Two recent systematic reviews based on prospective studies show a relationship between high psychosocial work stress and an increased risk of sleep disturbances (Linton et al. 2015; Van Laethem et al. 2013). Moreover, studies using physiological sleep measurements have shown that in particular high job burnout interferes with sleep architecture and causes sleep fragmentation (Ekstedt et al. 2009). Despite the well-defined link between work-related stress and sleep problems, this is to our knowledge the first randomized controlled study to evaluate the efficacy of a group CBT-intervention for insomnia in a working population, conducted at the workplace. Aim The present study aimed to investigate whether a workplace-based intervention could improve sleep among employees with moderate insomnia symptoms. This was done through evaluation of a group CBT-intervention for insomnia (called a 'sleep school') by means of a randomized controlled trial. Sleep was evaluated both objectively, through actigraphy, and subjectively, through diary ratings and questionnaires. Participation in the group intervention during work hours was expected to reduce insomnia symptoms and improve sleep parameters such as subjective sleep quality and quantitative sleep efficiency. Moreover, an explorative aim was to evaluate the impact of the level of burnout scores at baseline on the effect of the intervention. Design Twenty workplaces in the retail sector were invited to participate in the study. Employees would be able to participate in a group CBT-program free of charge, at their own head office, during working hours over a period of 3 months. They were informed that the study comprised randomization into an intervention group or a control group and that the control group had to wait at least 6 months before participating in the group intervention. Participants received two cinema tickets after completing measurements at baseline as well as after having fulfilled all three periods of data collection. Two workplaces were interested in participating in the study, and the subjects included were employees from offices, stores, warehouses and logistics. Applicants were informed that participation was voluntary and could be withdrawn at any time. The study was approved by the regional ethical committee in Stockholm (No. 2013/2043-31/2) and informed consent was obtained from all included individuals. There were two inclusion criteria: employees should (1) work at least 75% of full time (corresponding to at least 30 weekly working hours), and (2) report regular (at least four to five times a week) sleep problems. Participants were excluded if they reported an active substance abuse problem or a diagnosed severe mental illness. A psychologist made the screening via a short web-based questionnaire on insomnia symptoms (e.g. "How many times a week do you… (1) Have difficulties falling asleep?
(2) Wake up several times during the night with difficulties going back to sleep? (3) Get daytime consequences due to poor sleep?"). In case of self-reported sleep apnea ("Do you suffer at least 2-3 times a week from sleep apnea; gasping or snoring during sleep?"), applicants were recommended to visit the occupational health care service for evaluation and advice on treatment. In total, 72 of the 73 applicants met the criteria and were offered participation in the group CBT-program. Randomization was made through the research randomizer tool at http://www.randomizer.org. Study sample Out of the 72 included individuals, 2 chose to withdraw before randomization. After the randomization, another 6 of the 70 remaining participants chose to withdraw (4 from the intervention group and 2 from the control group). They stated that they had too much to do, were going on leave of absence, were on sick leave, or were about to change workplace. Altogether 64 individuals were invited to participate in the baseline measurement. However, two participants dropped out before the baseline measurement, and another six from the control group withdrew after the first measurement period because of sick leave, time pressure or diminishing interest in participating in the study. Four individuals from the intervention group were excluded from the statistical analyses, since only data from participants who fulfilled measurements at baseline and either the post-measurements or the follow-up measurements, or both, were used. Furthermore, one participant in the intervention group, who reported an ISI value at baseline lower than 8, was excluded from the analyses, since that indicated no presence of clinical insomnia. This resulted in a sample size of N = 51 (see Fig. 1). Participants worked in four different areas: store, office, warehouse or logistics, although two-thirds worked in offices. Those working at the office had regular working hours, while those working in stores often had irregular and flexible work time arrangements. Employees within warehouse and logistics typically work in regular shifts (but no night shifts). Background parameters are presented in Table 1. The age range was 22-60 years, 63% were women and about one-third of the participants reported they had children living at home. The majority (73%) of the participants suffered from clinical levels of insomnia at baseline (ISI > 14), 60% showed high levels of burnout scores (SMBQ > 3.75), 42% showed clinical levels of anxiety (HADSa > 10) and 8% clinical levels of depression (HADSd > 10). See Table 2. Chi-squared tests showed that the groups did not differ at baseline (p > 0.156). Correlations between ISI, SMBQ, HADSd and HADSa at baseline are presented in Table 3, showing a significant positive correlation between insomnia (according to ISI), high burnout scores (according to SMBQ) and anxiety (according to HADS). Attrition analyses were made to investigate selection bias; independent t tests did not show any significant differences between completers (participants having completed the baseline measurement as well as the post- and/or follow-up measurement; N = 51) and non-completers (N = 4 in the intervention group, N = 6 in the control group) in terms of gender, age or level of ISI at baseline (p > 0.669).
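As a rough illustration of the screening and allocation just described, the sketch below re-expresses the two inclusion criteria, the exclusion criteria and a 1:1 randomization in Python. The paper used the web tool at http://www.randomizer.org; Python's random module, the function names and the assumption of an exactly even split are stand-ins introduced here, not details from the study.

import random

def eligible(work_fraction, sleep_problem_nights_per_week,
             substance_abuse=False, severe_mental_illness=False):
    # Inclusion: at least 75% of full time and regular (at least four to
    # five nights a week) sleep problems; exclusion: active substance
    # abuse or a diagnosed severe mental illness.
    return (work_fraction >= 0.75
            and sleep_problem_nights_per_week >= 4
            and not substance_abuse
            and not severe_mental_illness)

def randomize(participant_ids, seed=None):
    # Shuffle the eligible participants and split them into an
    # intervention group and a waiting list control group.
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"intervention": ids[:half], "control": ids[half:]}

groups = randomize(range(1, 71), seed=1)  # 70 participants remained at randomization
print(len(groups["intervention"]), len(groups["control"]))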
Procedure Baseline measurements were made before the start of the intervention. Post-measurements were made approximately 3 months later, directly after the intervention was finished, and follow-up measurements another 3 months later. When all measurement periods were completed, the control group started their participation in the group CBT-program. Consequently, no data were collected after this point in time, except for an evaluation questionnaire which was distributed after participation in the program. During data collections, participants filled out a sleep-and-wake diary and wore a wrist-actigraph for 10 days. They also filled out a web-based questionnaire. See the timeline in supplementary figure S1. Measurements The questionnaire, which was distributed at each data collection period, included questions on demographic factors and working parameters as well as questions on sleep and stress levels. Insomnia symptoms were measured through the Insomnia Severity Index (ISI), which has robust psychometric properties (Bastien et al. 2001), even when administered online (Thorndike et al. 2011) and in the presence of comorbid insomnia (Geiger-Brown et al. 2015). Burnout was measured through the Shirom–Melamed Burnout Questionnaire (SMBQ; Melamed et al. 1992). Symptoms of burnout were rated on a 7-graded scale (1 = almost never to 7 = almost always) according to the question "Please indicate to what extent these feelings usually occur". The mean of the 22 questions gives a measure of burnout, with values of ≥ 3.75 indicating a high level of burnout (Grossi et al. 2003). Depression and anxiety were evaluated through the Hospital Anxiety and Depression Scale (HADS; Zigmond and Snaith 1983; Sullivan et al. 1993). The seven items of each scale (anxiety and depression, respectively), covering experiences during the last week, were rated on a 4-graded scale (from 0 to 3), giving global scores of 0-7 for non-cases, 8-10 for possible cases and 11-21 for probable cases. During data collections, each morning, participants were asked to answer a modified and shorter version of the Karolinska Sleep Diary (KSD; Åkerstedt et al. 1997) that included questions on stress or worries at bedtime (1 = very worried/aroused to 5 = very calm/relaxed), subjective sleep quality (How did you sleep?; 1 = very poorly to 5 = very well) and non-refreshing sleep (Do you feel refreshed?; 1 = not at all to 5 = completely). In addition, participants wore a wrist-actigraph (Actiware Spectrum Pro by Philips Respironics) day and night, on the non-dominant hand, during the measurement period. Participants were asked to push an event button at lights out in the evening and at final wake-up time in the morning. Movements are measured through a sensitive accelerometer, which makes it possible to classify whether the wearer has slept or not, based on the amount of movement per minute. Thereby, the length of the sleep period and the proportion of time awake during the night (sleep efficiency in percent) could be calculated. Sleep length refers to the time between sleep onset and wake-up. Sleep efficiency is based on calculations of the amount of time awake during sleep time; high efficiency corresponds to a low amount of time awake. Scorings and calculations were made in ActiWare Software version 6.0.2 (http://www.actigraphy.com/solutions/actiware/). Actigraphy is a well-established and validated objective method, complementing the individual's own ratings in the diary (Sadeh 2011). Mean values were calculated on actigraphy data as well as on diary data for each measurement period. ISI has been extensively used in CBT-research (Blom et al. 2015; Geiger-Brown et al. 2015; Ho et al. 2015; Navarro-Bravo et al. 2015; Seyffert et al. 2016) and constitutes the primary sleep outcome in the present study. However, actigraphy was needed to measure outcomes related to quantity of sleep, such as sleep duration and sleep efficiency. Furthermore, diary measurement of sleep provides psychometric strengths in terms of low risk of recall bias and high reliability when evaluating progress in relation to an intervention (Bolger et al. 2003; Libman et al. 2000).
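The cut-offs used above translate directly into code. The sketch below collects them as small Python helpers; the thresholds are taken from the text (ISI > 14 for clinical insomnia, the SMBQ mean of 22 items with ≥ 3.75 as high burnout, the HADS bands, and actigraphy-based sleep efficiency), while the function and argument names are hypothetical.

def clinical_insomnia(isi_score):
    # The study used ISI > 14 as the threshold for clinical insomnia.
    return isi_score > 14

def smbq_score(item_ratings):
    # Mean of the 22 SMBQ items, each rated 1-7; >= 3.75 = high burnout.
    assert len(item_ratings) == 22
    return sum(item_ratings) / 22

def hads_category(scale_score):
    # 0-7 non-case, 8-10 possible case, 11-21 probable case.
    if scale_score <= 7:
        return "non-case"
    if scale_score <= 10:
        return "possible case"
    return "probable case"

def sleep_efficiency_percent(minutes_asleep, minutes_in_sleep_period):
    # Proportion of the sleep period actually spent asleep (actigraphy).
    return 100.0 * minutes_asleep / minutes_in_sleep_period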
Group CBT-intervention: procedure The sleep school was a CBT-based program involving both theory and practice, developed and led by a trained, certified clinical psychologist. The program was carried out in groups (maximum eight participants in each group) and included five sessions (of 2 h each) over a period of approximately 3 months. The aim was to increase participants' knowledge about sleep and to provide practical tools to improve sleep and to decrease stress and worries about sleep. All sessions included a psycho-educative lecture and group discussions. At the end of each session, participants were encouraged to choose a personal homework task related to the topic of the session. The program involved five sections (one section per session): (1) basic knowledge of sleep, circadian rhythm and sleep regulation, (2) lifestyle factors, including evening routines, physical activity, alcohol and caffeine use, (3) sleep schedules, sleep restriction and stimulus control, (4) stress and the daily balance of activity and relaxation/rest, and (5) mindfulness and acceptance strategies. Absences from sessions were recouped through instructions via e-mail from the group leader together with the digital presentation from the session and the assigned homework. Evaluation questionnaires After completing the group CBT-intervention, participants answered an evaluation questionnaire on how they had experienced the program and how much they had been engaged; e.g. "Has the sleep school taken a lot of your time? Has it been effective? Has it been difficult?" (0 = I do not agree to 4 = I completely agree), "How many sessions have you participated in?" (0, 1, 2, 3, 4 or 5), "Have you recouped the sessions you missed?" (Yes/No) and "I would recommend participation in a sleep school to employees within retail" (Yes/No/Maybe). Moreover, the (waiting list) control group answered questions to determine whether some of the participants had undergone any other type of intervention against their sleep problems during the period of measurements. This was done to verify their function as a control group against the intervention group. Statistical analyses The statistical analyses were based on multilevel mixed modelling. The model included the outcome variable (e.g. ISI scores or the mean value of sleep duration) and the fixed effects of the between-group factor Group (Intervention vs Control; level 2), the within-group factor Time (Baseline, Post-measurement and Follow-up; level 1) and the interaction between Group and Time. Because of the nesting of data over time, the model was fitted by modelling the autocorrelation. Additional analyses were performed: two sensitivity analyses and item analyses of the seven items included in ISI. These results are presented in a supplement (Tables S1 to S5). The first sensitivity analysis was made by excluding employees in warehouse and logistics from the sample (four participants from the intervention group and seven from the control group), since they were typically working shifts. The second was made by including only the 17 completers in the model.
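The sketch below shows roughly how such a Group × Time model could be specified in Python with statsmodels; this is an assumption for illustration, as the authors' analyses were run in STATA (noted further on). A random intercept per participant captures the nesting of repeated measures, although MixedLM does not reproduce the residual autocorrelation structure the authors modelled, and the data file and column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# One row per participant and measurement occasion, with columns
# "isi" (outcome), "group" (Intervention/Control), "time"
# (Baseline/Post/Followup) and "pid" (participant id).
df = pd.read_csv("sleep_study_long.csv")

# Fixed effects of Group, Time and their interaction; a random
# intercept per participant accounts for the repeated measures.
model = smf.mixedlm("isi ~ C(group) * C(time)", data=df, groups=df["pid"])
result = model.fit()
print(result.summary())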
Evaluation of the moderating effect of burnout scores at baseline (SMBQbl) was made by adding a second between-group factor, SMBQbl (high vs low; cut-off 3.75), into the model, resulting in several two-way interactions and a three-way interaction of Group × Time × SMBQbl. The three-way interaction reveals whether there is an effect on sleep that depends on both group affiliation (intervention vs control) and level of burnout score (high vs low) over time. Furthermore, post hoc analyses were made on the intervention group only, through a model where the between-group factor was SMBQbl (high vs low) and the within-group factor was Time. A two-tailed alpha-level of 0.05 was used when testing for statistical significance. Descriptives, t tests and Chi-squared tests were carried out in SPSS 24, whereas multilevel analyses were made in STATA 14. Results The proportion of participants in the intervention group with values over the threshold for clinical insomnia dropped from 68% at baseline to 33% after the intervention and was 35% 3 months later. The corresponding figures for the control group were 77% at baseline, 65% post-intervention and 62% at follow-up. Using diagnostic criteria, 53% in the intervention group were in remission (ISI ≥ 15 at baseline and < 15 at follow-up), compared to 20% in the control group. Contrast scores were calculated separately by group and time period, and it was found that the intervention group improved significantly on ISI scores between baseline and post-measurement (− 2.534; p = 0.001), but not between post-measurement and follow-up. Thus, a significant within-group effect on insomnia was observed. The control group increased their sleep length (as measured with actigraphy) between baseline and post-measurement (Contrast = 0.258; p = 0.042) and improved on ISI scores between post-measurement and follow-up (Contrast = − 1.530; p = 0.045). No other significant changes were found over time. The multilevel mixed model showed no difference between groups over time in the degree of insomnia, nor in the sleep parameters as measured with the sleep diary or with actigraphy. Mean values are presented in Table 4 and interaction effects in Table 5. Sensitivity analyses were made by excluding shift workers. A significant interaction effect between groups over time was observed in this analysis, with a decrease in ISI for the intervention group (Estimate = − 1.319, p = 0.044, CI = − 2.600 to − 0.038; see Tables S1 and S2 in the supplement). Item analyses of the seven items constituting ISI showed two significant interaction effects of group over time: dissatisfaction with current sleep (p = 0.032) and experiencing sleep problems as disturbing (p = 0.045). These two components of insomnia improved significantly more over time in the intervention group compared to the control group (see Tables S3 and S4 in the supplement). Exploring the moderating effect of burnout scores at baseline There is an increasing variation in ISI with time (see distribution measurements in Table 4). Spaghetti plots (Fig. 2) illustrate these individual differences in relation to the intervention. Indeed, when SMBQbl was added as an additional factor in the model, it was shown that burnout scores significantly moderated the improvement of ISI. This was indicated by a significant three-way interaction, Group × Time × SMBQbl (estimate 3.28; p = 0.009; CI 0.818-5.746). There were no differences between the intervention group and the control group in the baseline levels of burnout scores (t = − 0.51, p = 0.612).
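Continuing the hypothetical statsmodels sketch above (same df and imports), the moderation analysis adds the dichotomized baseline burnout factor, so that the three-way interaction term corresponds to the reported Group × Time × SMBQbl effect; the post hoc model restricted to the intervention group is sketched as well. Column names remain hypothetical.

import statsmodels.formula.api as smf  # df as in the previous sketch

# Dichotomize baseline burnout at the 3.75 cut-off and test the
# three-way interaction Group x Time x SMBQbl.
df["smbq_high"] = df["smbq_baseline"] >= 3.75
moderation = smf.mixedlm("isi ~ C(group) * C(time) * smbq_high",
                         data=df, groups=df["pid"]).fit()
print(moderation.summary())

# Post hoc: within the intervention group only, SMBQbl x Time.
intervention = df[df["group"] == "Intervention"]
post_hoc = smf.mixedlm("isi ~ smbq_high * C(time)",
                       data=intervention,
                       groups=intervention["pid"]).fit()
print(post_hoc.summary())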
Further analyses were made on the intervention group only, which was divided into two groups based on the value of SMBQbl (N high = 13, N low = 11; cut-off 3.75). A significant interaction effect of SMBQbl × Time (Estimate 2.51; p = 0.005; CI 0.770-4.244) showed that participants with low levels of burnout scores at baseline improved significantly on insomnia over time, whereas participants with high burnout scores at baseline did not (see Fig. 3). The effects on other sleep parameters measured with diary or actigraphy were not moderated by the level of burnout scores at baseline (p = 0.129-0.409). A corresponding post hoc analysis on the control group revealed no differences between participants with high or low levels of burnout scores over time (see Fig. 3). Participants' evaluation of the intervention Altogether 36 out of 51 participants from both the intervention group and the control group completed the evaluation questionnaire after having participated in the group CBT-program. The results showed that 94% felt they had been helped by the intervention and that 91% would recommend the program to other employees within their organization. Participants were relatively satisfied with the structure of the program (86% answered "3" or "4" on a scale ranging from 0 = I do not agree to 4 = I completely agree) and a large proportion felt confident in being able to handle their sleep problems in the future (86%). About half of the participants thought they were more well-rested in the morning (54%) and more alert during the day (54%) after having participated in the program. The majority (76%) felt calmer at bedtime. Adherence to the group intervention In the intervention group, 17 out of the 22 participants who filled out the evaluation questionnaire were considered to have fully participated in the program, meaning they had participated in at least three sessions and in case of absence had actively recouped the session. Sensitivity analyses, including only these 17 completers in the model, showed no differences in significant interaction effects compared to the original analyses (see Table S5 in the supplement). Analyses through t tests showed no significant differences between completers and non-completers in level of burnout scores (t = − 1.08, p = 0.295, CI − 1.86-0.59) or insomnia symptoms at baseline (t = − 0.92, p = 0.367, CI − 2.01-6.06). Discussion The hypothesis that a group CBT-program for insomnia in a workplace setting would lead to decreased levels of insomnia symptoms compared to a control group was partially confirmed. For the full sample, there was no improvement in insomnia based on ISI for the intervention group compared to the control group. There were no effects on sleep parameters such as subjective sleep quality, sleep length and sleep efficiency, measured through diary or actigraphy. However, within-group effects on insomnia symptoms were observed in the intervention group, as ISI scores decreased between baseline and follow-up. Moreover, when shift workers were excluded from the sample, a significant interaction effect between groups over time was observed through reductions of ISI scores in the intervention group compared to the control group. These results are partially in line with previous studies on group CBT for insomnia, showing positive effects on insomnia symptoms assessed both through validated scales (e.g. ISI) and diaries, but no effects on sleep duration (Navarro-Bravo et al. 2015). Similarly, the meta-analysis by Koffel et al.
(2015) found small effects on insomnia symptoms such as sleep efficiency and only within-group effects on ISI scores (d = − 0.70), sleep length and sleep quality. The fact that there was no significant effect on sleep duration in the present study could be explained by one of the methods applied during the second half of the program: sleep restriction, in which time in bed is diminished to enhance sleep efficiency. However, both sleep length and sleep efficiency, objectively measured with actigraphy, were relatively good at baseline (see Table 3), leading to expectations of small observed effects in these sleep parameters. This could further explain the absence of a significant effect on sleep efficiency. Moreover, it should be noted that the effects of a CBT-program for insomnia might occur later in time, once the participants have actually implemented the tools and tested the different methods. In the meta-analysis by Koffel and colleagues (2015), patients continued to improve on total sleep time and sleep quality even after the group CBT-program had ended. Consequently, positive results on sleep might appear later than 3 months after completion of the group CBT-intervention. However, with a waiting list control group, a follow-up period longer than 3 months was not feasible. Importantly, improvements on ISI scores in the intervention group were found between baseline and post-measurement and not between post-measurement and follow-up in the present study. Koffel et al. (2015) also found that increased total time spent in group sessions had a better effect on sleep. The number of sessions in the included studies varied between 4 and 8, and each session was 60-120 min. In the present study, the program was kept relatively short in order to reduce time expenditure during work hours. Interestingly, the item analyses of ISI showed no effects on the items measuring disturbed sleep, but positive effects on the two components 'dissatisfaction with current sleep' and 'experiencing sleep problems as disturbing'. This indicates that the group CBT-intervention influenced participants' attitude to their sleep problems, but not their sleep disturbance. Notably, insomnia symptoms were moderate at baseline, and larger effects might have been seen in a population with more severe insomnia symptoms. The moderating effect of level of burnout scores at baseline A moderating effect of the level of burnout scores at baseline was found, showing that there was a positive effect of the intervention on insomnia symptoms for individuals who scored low on burnout at baseline. Earlier studies have found group CBT for insomnia to be effective despite comorbid anxiety and depression (Blom et al. 2015; Lichstein et al. 2010; Rybarczyk et al. 2009; Smith et al. 2005; Stepanski and Rybarczyk 2006), whereas in this study it was shown that comorbid burnout interfered with the effect of the intervention. The moderating effect of the level of burnout scores at baseline is an interesting finding, since no other study has investigated treatment efficacy in relation to stress levels in a workplace setting, even though sleep and stress are so closely related (Armon et al. 2008; Ekstedt et al. 2009; Linton et al. 2015; Van Laethem et al. 2013). For employees suffering from high burnout levels, a group CBT-program at the workplace involving presence during work hours, reading and other homework tasks might be too demanding.
Another explanation could be that higher burnout scores relate to a decreased ability to benefit and learn from the intervention, despite the level of effort (Vogel and Schwabe 2016). Strengths and limitations The program was highly appreciated by the participants; almost all rated it as helpful and would recommend the program to other employees. Data collections comprised several complementary and reliable instruments: subjective diary data and questionnaires as well as objective actigraphy, which provides well-founded results. The possible benefits or weaknesses of a workplace setting for such a sleep intervention program were not evaluated in this study. However, by locating the sessions at the workplace, the program is easily accessible for the participants (even though some employees needed to travel from the warehouse or the store to the main office) and it might provide a feeling of support from the workplace, which may increase motivation to participate in the treatment. Moreover, employees from the same workplace in a group setting can discuss potential shortcomings of their workplace and how these problems can be solved. The participants can also support and encourage each other to complete the homework related to the sessions and the measurements used for evaluating the program. A possible downside of a workplace setting could be that participants do not feel comfortable talking openly about their problems together with their colleagues. In large organizations, as in the present study, this problem is however partly reduced. Moreover, participation in the program may increase the workload unless job demands are decreased on the days when participants have a treatment session. The main limitation of this study is that participant attrition led to reduced statistical power. The a priori power calculation indicated that 64 participants (32 in the intervention group and 32 in the control group) would be the threshold for ensuring detection of a large effect size of sleep improvement. The present study included 51 participants. Besides the limited statistical power, there are some additional limitations in this study that should be pointed out. First, the wide inclusion criteria led to heterogeneity of the participants in terms of degree of insomnia, comorbidity and working conditions (e.g. employees working daytime vs shift workers). Importantly, the sensitivity analyses, excluding employees working shifts, showed a significant interaction effect between groups over time on the level of insomnia symptoms. This suggests that group CBT for insomnia might need to be adapted to the specific sleep problems related to shift work, such as frequent sleep restriction and irregular sleep timing. Moreover, using a waiting list control group in treatment studies might entail certain downsides. In this study, control questions revealed that almost half of the individuals in the control group had found other ways of handling their sleep problems during the measurement period. They had either seen a physician, tried a self-help program or made structural changes to overcome their problems. Furthermore, with a waiting list control group at the same workplace as the intervention group, colleagues from the different groups might interact during the intervention. If the participants in the study group discussed the content of the intervention sessions, the control group may have got tools and recommendations on how to improve their sleep despite the lack of formal treatment.
For example, in the present study, the control group increased their sleep length and improved on insomnia symptoms during the measurement periods. Finally, five participants in the intervention group were considered non-completers: one participant attended only two sessions out of five, and four participants did not recoup the one or two sessions they missed. This might have affected the efficacy and the outcome of the program (Matthews et al. 2013). Practical implications and future research Given the results of this study, it might be of interest to take the level of burnout symptoms (operationalized as high burnout scores) into account when implementing a program for sleep problems in a workplace setting. Levels of burnout scores should preferably be evaluated before participation to enable customization of the program. Such a customized program could, for example, be internet-based or available via an app (Bostock et al. 2016; Blom et al. 2015), allowing participants to work at their own pace. Subjects with more severe stress symptoms or high burnout scores should receive adequate help for these symptoms. Results should preferably be generalized only to large companies, since we do not know whether the size of the company may influence the possibility to participate in the sessions during normal working hours. Notably, it is very important to point out that the sleep program should not replace organizational interventions aiming to improve, for example, work scheduling and the balance between workload and recovery. Future studies should investigate possible benefits of a workplace setting as well as changes in work ability and productivity in relation to such a program. Conclusion We conclude that a group CBT-intervention for insomnia in a workplace setting did not improve sleep for the investigated group as a whole over a 3-month period when compared to a waiting list control group. However, within-group changes and subgroup analyses were promising, showing that the intervention reduced insomnia in working subjects with low levels of concurrent burnout scores and in employees with daytime work. These findings indicate that a workplace-based group CBT-intervention for insomnia might be a feasible method to treat sleep problems and prevent development of more severe and chronic sleep disorders. However, the program employed in the present study needs to be developed and further evaluated in employees outside the retail sector.
Fusarium graminearum growth inhibition mechanism using phenolic compounds from Spirulina sp.

Introduction

There has been an ongoing search for new solutions for losses caused by fungal contamination during the farming, storage, and processing of grains, because the active agents in fungicides can persist in the environment besides selecting for resistant and toxigenic species (DORS et al., 2011; HEIDTMANN-BEMVENUTI et al., 2012). One widely investigated alternative solution to this problem is to search for natural compounds of microbial or vegetable origin with proven antifungal power (VIUDA-MARTOS et al., 2008). Toxigenic fungal species of the genera Fusarium and Aspergillus are often used to study the effect of such compounds (ZABKA; PAVELA; GABRIELOVA-SLEZAKOVA, 2011). The chemical compounds extracted from natural sources with antifungal properties include essential oils, phenolic compounds, and peptides (OLIVEIRA; ...) ... Fusarium graminearum. Therefore, further investigations into the behaviour of various toxigenic species in the presence of natural fungicides are important, since such knowledge could support the use of these fungicides with a focus on food safety.

In order to reduce the multiplication of fungal biomass, inhibitory compounds act on the primary metabolism of nutrient production reactions, the production of membranes or cell walls, respiratory activity, and cell differentiation (CASTRO et al., 2004). These development difficulties can lead to the production of secondary metabolites such as mycotoxins as a defence against growth medium stress (OLIVEIRA; BADIALE-FURLONG, 2008).

Measurement of the zones of inhibition of the colonies, of cell wall and membrane constituents (ergosterol and glucosamine), and of alterations in enzyme activity with consequently reduced biomolecular synthesis are indicators of the cell multiplication inhibition mechanism. Few of these effects are considered in terms of the production of mycotoxins by toxigenic species. Therefore, information on the alteration of these metabolic pathways is fundamental prior to any recommendation or purification of extracts for use in the prevention or inhibition of microbial contamination in the food chain.

This study involved the assessment of the antifungal activity of phenolic compounds extracted from Spirulina sp. LEB-18 on 12 toxigenic strains of Fusarium graminearum isolated from barley and wheat, through the determination of structural compounds and enzyme activity of the microorganisms' primary metabolism, in order to identify the principal metabolic path affected.
Isolation and growth of Fusarium graminearum species

The twelve fungal strains used in this study (Table 1) were obtained from the Plant Epidemiology Laboratory (UFRGS) and were isolated from wheat and barley grains grown in Rio Grande do Sul, Brazil, and harvested in 2007/2008. The fungus was identified based on observations of structures such as the mycelia and spores under an optical microscope, comparing them with literature reports. In addition, chemotaxonomy was performed (ASTOLFI et al., 2010, 2011) for the identification of the toxigenic profile of the microorganisms used in the study.

Cultures of fungi were grown on Spezieller Nährstoffarmer Agar (SNA) at 25 °C to sporulation and maintained at 4 °C on SNA slants. All fungal growth experiments were carried out in potato dextrose agar (PDA) for 7 days to obtain discs of mycelial material.

Biomass production system and extraction of phenolic compounds

Spirulina LEB-18, isolated from the Mangueira Lagoon, was cultivated in Mangueira Lagoon water (MLW) supplemented with 20% (v/v) Zarrouk medium, referred to in the text as MLW-S medium, for maintenance, inoculum, and biomass production. The pilot plant for the production of Spirulina sp. is located near the shore of the Mangueira Lagoon (33° 30' 13" S and 53° 08' 59" W) and consists of raceway tanks of different dimensions and volumes depending on their purpose. All tanks were lined with glass fiber and covered by a greenhouse structure with transparent polyethylene film. They were agitated by a paddle wheel rotating at 18 rpm, 24 hours a day. The culture medium volume was maintained by periodic addition of MLW to compensate for evaporation; roughly 12 L day⁻¹ was added over the course of the experiment (MORAIS; COSTA, 2007).

The phenolic compounds were extracted from 6 g of Spirulina homogenized with 20 mL of methanol in an orbital shaker (Tecnal, Erechim, Brasil) at 25 °C for 60 minutes at 200 rpm. The extract was centrifuged, filtered, and evaporated in a rotary evaporator (Fisatom 801/802, São Paulo, Brasil) at 50 °C, dissolved in 25 mL of sterile distilled water, and clarified with 5 mL of 0.1 M barium hydroxide and 5 mL of 5% zinc sulfate. The clarified extract was vacuum filtered (Marconi, MA 454, São Paulo, SP) through a sterile membrane with a pore size of 0.25 μm.

Phenolic compounds (PC) were quantified by spectrophotometry (Varian, Cary 100, California, USA) using the Folin-Ciocalteau reagent, and their content was determined from a calibration curve of gallic acid with concentrations ranging from 4 to 50 μg mL⁻¹.
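As an illustration of how such a calibration curve is typically applied, the sketch below fits a least-squares line to hypothetical gallic acid calibration points (the 4-50 μg mL⁻¹ range mirrors the curve described above, but the absorbance values are invented for illustration) and back-calculates the phenolic content of a sample; numpy is assumed to be available.

```python
import numpy as np

# Hypothetical gallic acid calibration points; concentrations (ug/mL) span
# the 4-50 ug/mL range described in the text, absorbances are illustrative.
conc = np.array([4.0, 10.0, 20.0, 30.0, 40.0, 50.0])
absorbance = np.array([0.05, 0.12, 0.24, 0.37, 0.49, 0.61])

# Least-squares calibration line: A = slope * C + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)

def phenolics_ug_per_ml(sample_abs):
    """Back-calculate total phenolics (gallic acid equivalents) from absorbance."""
    return (sample_abs - intercept) / slope

print(f"GAE = {phenolics_ug_per_ml(0.30):.1f} ug/mL")
```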
Antifungal activity

The multiplication of food-pathogenic fungi was tested by the agar dilution method in the appropriate culture medium (PDA). The phenolic compounds at different concentrations (10%, 8%, 6%, 5%, and 3.5% p/p) were added to the culture medium at a temperature of 35-40 °C and poured into Petri dishes (10 cm diameter). The moulds were inoculated after medium solidification. A disc (1.1 mm in diameter) of mycelial material, taken from the edge of seven-day-old fungal cultures, was placed at the centre of each Petri dish. The Petri dish with the inoculum was then incubated at 25 °C (Quimis, Q216F20M, São Paulo, Brasil). The efficiency of the treatment was evaluated each day for seven days by measuring the diameter of the colonised fungus. The values were expressed in millimeters of diameter h⁻¹. All tests were performed in quintuplicate.

The daily measurement data of the mycelial growth zone in the control and treatment dishes were adjusted to the logistic and Gompertz models in order to determine the maximum growth rate and the lag phase length (Table 2) (NAKASHIMA; ANDRÉ; FRANCO, 2000; HAMIDI-ESFAHANI et al., 2007). Comparison of the models was made using the correlation coefficient (R²).

Fungal development was estimated based on the control group (without phenolic extract) for all inhibition indicators assessed in the study. The percentage inhibition of fungal growth was calculated according to Nguefack et al. (2004): Inhibition = 100 × ((control − treatment)/control). The median inhibitory concentration (MIC50) was considered as the PC concentration that resulted in 50% inhibition of the fungal growth when compared to the control groups.

On the seventh day, the dishes were frozen and the biomass dried at 60 °C (Quimis, Q314M242, São Paulo, Brasil) for 3 hours to determine the glucosamine and ergosterol levels. To determine amylase, protease, and lipase activity, the Petri dishes were not subjected to the drying process.
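A minimal sketch of the inhibition calculation and the MIC50 estimate defined above, using invented colony diameters (real values would come from the quintuplicate plate measurements); numpy's linear interpolation stands in for whatever fitting procedure the authors actually used:

```python
import numpy as np

# Hypothetical 7-day colony diameters (mm) at each PC concentration (% p/p).
pc_conc = np.array([3.5, 5.0, 6.0, 8.0, 10.0])
control_diam = 62.0
treated_diam = np.array([40.0, 31.0, 24.0, 12.0, 6.0])

# Inhibition = 100 * ((control - treatment) / control), per Nguefack et al. (2004).
inhibition = 100.0 * (control_diam - treated_diam) / control_diam

# MIC50: the PC concentration giving 50% inhibition, by linear interpolation.
mic50 = np.interp(50.0, inhibition, pc_conc)
print(f"MIC50 = {mic50:.1f}% (p/p)")
```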
Glucosamine content

Glucosamine from dry fungal biomass (1 g) was extracted with 5 mL of HCl 6 mol L⁻¹ at 121 °C for 20 minutes. The hydrolyzed material was neutralized with NaOH 3 M, and reverse titration was carried out with KHSO₄ (1 g 100 mL⁻¹). Finally, the colorimetric method was used for the determination of glucosamine (SOUZA et al., 2011). The absorbance units were obtained by spectrophotometry (Varian, Cary 100, California, USA) at 530 nm, and the concentrations were established using a standard curve for glucosamine (0.01 to 0.2 g L⁻¹). The measurements were carried out in triplicate, and the results were expressed as glucosamine per mg of sample.

Ergosterol content

A modified version of the Gutarowska and Zakowska (2009) technique was used to determine the ergosterol content in the dry biomass: 0.2 g of sample with 10 mL of methanol was agitated in a shaker (Tecnal, TE-141, Erechim, Brasil) at 200 rpm for 30 minutes. This procedure was carried out three times. The methanol extract was centrifuged at 3200 g at 20 °C for 10 minutes. Next, it was heated under reflux for 30 minutes and cooled at 4 °C. The refluxed material was submitted to four partitions with 20 mL hexane. The hexane fraction was dried in a rotary evaporator (Fisatom 801/802, Erechim, Brasil) at 60 °C. The residue was dissolved in 10 mL methanol, and the transmittance was determined at 283 nm. The ergosterol content was estimated using a calibration curve of standard ergosterol with concentrations ranging from 1.5 to 16.5 μg mL⁻¹.

Enzymatic activity

The enzyme extract was obtained from the fungal biomass with 20 mL of NaCl 0.9% in an ultrasonic bath (Unique, USC-800A, São Paulo, Brasil) for 40 minutes, centrifuged (Cientec, CT-5000R, São Paulo, Brasil), and filtered. The α-amylase activity was determined by starch degradation estimated quantitatively by iodometric titration; protease activity was determined using an albumin substrate and tyrosine as hydrolysis indicator (BARAJ; GARDA-BUFFON; BADIALE-FURLONG, 2010); and lipolytic activity was measured by the release of fatty acids during hydrolysis (FEDDERN et al., 2010).

Statistical analysis

Conventional statistical methods were used to calculate means and standard deviations. Data were analyzed statistically by ANOVA for significant differences (p < 0.05). To ascertain significant differences between the levels of the main factor, Tukey's test was applied between means (BARROS NETO; SCARMINIO; BRUNS, 2003).

Results and discussion

Using the data on the multiplication of the 12 fungi tested in the absence of the phenolic extract, different primary models were adjusted to obtain the growth parameters (Table 3): the maximum exponential growth rate (µmax) and the lag phase length (tl). Both models explained more than 97% of the mean variance, and the Gompertz model proved more applicable due to its better fit to the experimental results. Therefore, this model was selected to assess the effects of the phenolic extracts on the kinetic parameters of microbial growth (Table 4).
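A sketch of how µmax and the lag phase could be extracted from daily colony-diameter data with the Gompertz parameterisation used here (the µmax and tl formulas are those listed with the table captions below); the data points are invented, and scipy is assumed to be available:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, B, M):
    """Gompertz curve: A = asymptotic diameter, B = rate constant,
    M = time of the inflection point."""
    return A * np.exp(-np.exp(-B * (t - M)))

# Hypothetical daily colony diameters (mm) over seven days of incubation.
t = np.array([24.0, 48.0, 72.0, 96.0, 120.0, 144.0, 168.0])  # hours
diam = np.array([3.0, 9.0, 22.0, 38.0, 50.0, 57.0, 60.0])

(A, B, M), _ = curve_fit(gompertz, t, diam, p0=[60.0, 0.03, 80.0])

mu_max = A * B / np.e   # maximum growth rate (mm/h): (A*B)/e
t_lag = M - 1.0 / B     # lag phase length (h): M - (1/B)
print(f"mu_max = {mu_max:.3f} mm/h, lag = {t_lag:.1f} h")
```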
The twelve fungi assayed showed a growth latent period that ranged from 1 to 14.7 hours for microorganisms B-8 and C-4 in the control media. Under the same conditions, the exponential growth rate varied from 0.014 to 0.024 h⁻¹ (Table 3). In media containing growing concentrations of PE, the microorganisms grew at slower rates, inversely proportional to the concentration level. Therefore, the phenolic extracts reduced the growth rate of the Fusarium toxigenic species studied (Tables 3 and 4).

When the culture medium with 10% PE (p/p) was used, a longer lag phase was observed since, on average, microbial development first became apparent 50 hours after incubation, indicating the fungistatic effect of the PE, especially for the F. graminearum strains 702-01, 702-25, 46C-1, 14C-1, and 18C-2. Moreover, when the PE concentration reached 10% in the culture medium, the mycelial growth rates were reduced to 0.7, 0.9, and 0.8 mm h⁻¹. Figure 1 illustrates the behaviour of the most susceptible fungal species towards the effect of the active agent.

The IC50 values found for species 702-01, B-10, C-4, 702-25, 46C-1, 18C-2, and A-3 were obtained with a 3% concentration (444 µg); a concentration of 4% (600 µg) was required for the microorganisms B-8, 730-10, and 14C-1, and an 8% concentration (1990 µg) for 39C-2 and A-6. This fact shows the variable resistance of the species assayed to the same active agent.

The inhibition results obtained in this study are consistent with those found by other authors who have studied Spirulina platensis cultivated under other conditions, which was more efficient than other cyanobacteria (Anabaena oryzae and Tolypotrix ceytonica) and green microalgae (Chlorella pyrenoidosa and Scenedesmus quadricauda) (ABEDIN; TAHA, 2008). Tantawy (2011) also assessed the antifungal potential of some microalgae, and the results with Spirulina platensis extracts were effective in inhibiting Fusarium oxysporum. It is worth mentioning that the greatest inhibition obtained by these authors was approximately 60%, whereas in this study, the inhibition results were approximately 90%.

The percentages of mycelial growth inhibition in all of the species tested, with values ranging from 50% to 90%, can be attributed to the fact that the phenolic extract acts on the mycelium hyphae, causing discharge of cytoplasmic components, loss of rigidity, and loss of integrity of the hypha, resulting in the collapse and death of the mycelium, similar to that reported by Sharma and Tripathi (2006).

Such a possibility is suggested by the inhibition of amylase activity (95%) of the F. graminearum 702-01 stock isolated from wheat. This results from the inhibition, by PE components, of the interaction between amylase and the substrate available in the culture medium, hindering the production of the energy required to form the structures needed to maintain cellular viability (Table 2).

The protease inhibition values reached 98% for the species F. graminearum 702-01 and B-8, which were also isolated from wheat. Among the species isolated from barley is F. graminearum 14C-1, whose proteolytic activity was inhibited by 96% (Table 2), which also hinders the structural organization of the microbial cell.

Phenolic compounds are also capable of inhibiting amino acid synthesis by hindering the reaction between phosphoenolpyruvate and erythrose-4-phosphate that produces shikimic acid, which results in the production of tryptophan, and by preventing the production of phenylalanine or tyrosine through the prephenic acid pathway (CASTRO et al., 2004). Therefore, while the use of raw phenolic extract from Spirulina sp. exhibited a promising growth inhibitory profile, the data show that this results from the inactivation of the enzymatic systems of Fusarium graminearum. These extracts may have acted as direct inhibitors of the catalysis of the affected enzymes or may have prevented their synthesis due to the lack of amino acids important for the constitution of the protein chain.

Although the glucosamine content was lower in the PE-treated biomass, this was not the most heavily affected component during inhibition of fungal multiplication. In the B-8, 730-10, 46C-1, and 39C-2 species, there was reduced production under PE concentrations greater than 8% (p/p), whereas in 702-25, C-4, 14C-1, and A-6, inhibition was obtained using lower phenolic extract concentrations (5% and 3.5% p/p). Again, the variation in the susceptibility of each strain to the effects of the PE is evidenced (Table 5).

The ergosterol content and lipolytic activity were not significantly affected by the PE (Table 2). According to Aragão et al. (2009), microorganisms synthesise extracellular enzymes such as lipase to subsequently produce their triglycerides and other metabolic lipids. In this case, the non-reduction in the ergosterol synthesis pathway and in the lipase activity suggests that the fungi used the fatty acids of the microalgae that may be present in the PE (COLLA et al., 2007) for the synthesis of endogenous lipids, thus maintaining the stability of the functions that depend on them. This is a very promising finding since mycotoxin synthesis is triggered by the alteration of the metabolic pathway for the production of fatty acids and their byproducts.

Conclusions

Phenolic compounds extracted from Spirulina sp. LEB-18 showed promising antifungal activity against Fusarium graminearum strains when applied to the culture medium at concentrations ranging from 3% to 8% (p/p) to reach the IC50. Parallel to this inhibition, there was a six- to seven-fold reduction (on average) in amylase and protease activity in the fungal biomass compared to that of the control groups. The other metabolic activity indicators showed negligible changes.
Results relative to µmax were estimated according to the Gompertz model. Values are expressed as means (R²). The same lower case letters in the same column indicate non-significant differences between the means at 95% confidence; the same capital letters in the same line indicate non-significant differences between the means at 95% confidence.

Model parameters: Logistic: µmax = (A·B)/4; tl = (D − 2)/B. Gompertz: µmax = (A·B)/e; tl = M − (1/B). A: parameter that describes the logarithmic growth of the population; D: parameter associated with cell growth; B: dimensionless parameter.

Table 2. Parameters of the logistic and Gompertz models associated with the lag phase length (tl) and the maximum cellular growth rate (µmax).

Table 3. Logistic and Gompertz models of the microbial growth curves in the absence of phenolic extract.

Table 4. Maximum growth rate of fungi treated with different concentrations of phenolic extract.
APC-Cdh1 Inhibits the Proliferation and Activation of Oligodendrocyte Precursor Cells after Mechanical Stretch Injury

The incidence of spinal cord injury (SCI) continues to increase; however, the mechanisms involved remain unclear. The Anaphase Promoting Complex (APC) and its regulatory subunit Cdh1 play important roles in the growth, development, and repair of the central nervous system (CNS). Cdh1 is involved in the pathophysiological processes of neuronal apoptosis and astrocyte-reactive proliferation after ischemic brain injury, whereas the role played by APC-Cdh1 in the proliferation and activation of oligodendrocyte precursor cells (OPCs) after SCI remains unresolved. Using primary cultures of spinal oligodendrocyte precursor cells, we successfully established an in vitro mechanical stretch injury model to simulate SCI. Cell viability and proliferation were determined by MTT assay and flow cytometric analysis of the cell cycle. Real-time fluorescent quantitative PCR and Western blot analysis determined the mRNA and protein expression levels of Cdh1 and its downstream substrates Skp2 and Id2. Mechanical stretch injury decreased the proliferative activity of OPCs and enhanced cellular Cdh1 expression. Dampened expression of Cdh1 in primary OPCs significantly promoted the proliferation and activation of OPCs after SCI. In addition, the expression of the downstream substrates of Cdh1, Skp2 and Id2, was decreased following mechanical injury, whereas adenovirus-mediated Cdh1 RNA interference increased the postinjury expression of Skp2 and Id2. These findings suggest that APC-Cdh1 might be involved in regulating the proliferation and activation of OPCs after mechanical SCI. Moreover, degraded ubiquitination of the downstream substrates Skp2 and Id2 might play an important role, at least in part, in the beneficial effects of OPC activity following SCI.

Introduction

Spinal cord injury (SCI) is a disabling and traumatic disease of the central nervous system (CNS), which often causes permanent and irreversible loss of function in afflicted patients. It has been suggested that oligodendrocyte precursor cells (OPCs), which are located in the white matter of the CNS, can rapidly respond to SCI and then proliferate and differentiate into mature oligodendrocytes (OLs) with a myelin-forming capability [1]. The proliferation, activation, and differentiation of OPCs after SCI are regulated by many factors that play a critical role in the processes of subsequent axonal remyelination [2]. However, the specific mechanisms involved in regulating the proliferation and activation of OPCs after SCI have not been fully elucidated.

The large multimeric E3 ubiquitin ligase Anaphase Promoting Complex (APC) is recognized as one of the main E3 ubiquitin protease systems in living cells and serves a pivotal role in controlling sister chromatid segregation and cellular exit from mitosis [3]. APC helps drive the degradation of protein regulators of the cell cycle, the complex system of molecular events that collectively coordinate chromosome replication and segregation with cell division and growth [4]. Moreover, Cdh1 is an APC coactivator that directly binds to substrates of the APC [4]. Acting as the key regulatory APC subunit, Cdh1 mediates targeted degradation by specifically binding to downstream substrates such as Skp2, Id2, SnoN, and Cyclin B1 [5].
In recent years, it has been found that Cdh1 is highly expressed in neurons and that APC-Cdh1 is involved in many fundamental life activities of the CNS, including axonal elongation, neuronal survival, differentiation, glucose metabolism, and even glial cell proliferation. APC-Cdh1 is also an important regulator of the growth and development of the CNS [6]. Previous work has shown that APC-Cdh1 also plays an important role in CNS injury, and its abnormal activity is an important cause of neuronal apoptosis and astrocyte-reactive proliferation after ischemic brain injury [7,8]. In addition, APC was identified as the major ubiquitin protease system in cells, and together with its regulatory subunit Cdh1 it constitutes the APC-Cdh1 pathway, which serves as an important intracellular mechanism to negatively regulate the cell cycle [9]. Thus, we speculated that APC-Cdh1 might also influence the proliferation and activation of OPCs following SCI. By regulating the activity of APC-Cdh1, it might be possible to promote the activation and regeneration of OPCs following CNS injury.

Isolation, Purification, and Culture of Spinal Cord-Derived OPCs

Spinal cord tissue was collected from newborn Sprague-Dawley rats within 48 hours of birth, and primary rat spinal OPCs were cultured in vitro [10]. The isolated spinal cord tissues were prepared under a dissecting microscope and sliced into small pieces of approximately 1 mm³. The pieces were digested with trypsin for 15 min, and the digestion was then neutralized. The liberated cells were resuspended in DMEM culture medium (Gibco, USA) containing 20 percent fetal bovine serum and then seeded into a 75 cm² culture flask that had been precoated with poly-L-lysine (0.1 mg/mL; Sigma). The cells were cultured in a 37 °C, five percent CO₂ incubator with the culture medium being changed every three days. After 10 days of culture, a large number of OPCs lay on top of the astrocyte layer. The culture flask was fixed on a horizontal shaker at a constant temperature of 37 °C and preshaken for one hour to remove the microglia, following which the culture continued to be shaken at 200 rpm overnight. The cell culture supernatant was then reseeded into an uncoated culture dish for 40 min to remove excess astrocytes and microglia, so that purified OPCs could be obtained. Single-cell suspensions of OPCs were prepared using oligodendrocyte precursor cell culture medium, or OPCM (supplemented with PDGF-AA and bFGF), and seeded at a density of 1 × 10⁴/cm² onto a BioFlex VI six-well plate (Flexcell, USA) that was coated with poly-L-lysine and collagen I. The purified OPCs were further cultured for three days, after which subsequent experiments were performed. Immunofluorescence staining showed that more than 95 percent of cells expressed the OPC-specific antigen A2B5. All animal-related procedures in this study were conducted in accord with published regulations (Guide for the Care and Use of Laboratory Animals, 8th edition, NRC, USA) and approved by the local Ethics Committee of Shanxi Medical University.

Construction of the Mechanical Stretching Injury Model of OPCs

The mechanical stretch injury model of OPCs was designed according to the methods published in a prior study [11]. Briefly, BioFlex six-well plates were seeded with OPCs and then connected to the FX-4000T6 flexible substrate stretching system (Flexcell, USA) to perform uniform periodic mechanical stretching of the OPCs.
The loading parameters were as follows: a stretching amplitude of 10 percent, a frequency of 0.1 Hz, and a sine waveform. The cells were loaded for different durations according to the experimental groupings. After loading in the stretch culture system and completing the culture, the cells were harvested after 48 hours of static culture. Control cells were also seeded and cultured in the BioFlex six-well plate system and subjected to the same environmental conditions as described for the injury group (37 °C, 5% CO₂), but without being connected to the stretching system. The degree of cell injury was measured by an Annexin V Apoptosis Detection Kit assay (Sangon, China) as determined by flow cytometry (BD, USA). The apoptotic rate in the injured group was significantly increased, and the cells did not demonstrate any obvious shedding as compared with those in the control group. This observation suggested that the mechanical stretching injury model of oligodendrocyte precursor cells was successfully constructed.

Adenovirus Transduction of Oligodendrocyte Precursor Cells

The Cdh1-shRNA adenoviral vector construct was obtained from Hanheng Biotechnology Co., Ltd. The Cdh1-shRNA adenoviral construct and the empty vector adenoviral construct were separately added to the cells at a multiplicity of infection (MOI) of 60. The system was changed to normal culture medium for continuous culture after a transduction time of eight hours at 37 °C. After 72 hours of transduction, total RNA and total protein were extracted in order to verify the effects of Cdh1 RNA interference. Cells that had been transduced with the Cdh1-shRNA construct for 72 hours were subjected to stretch injury for 12 hours, as described above.

MTT Assay

The MTT colorimetric assay was used for the determination of cell viability. Briefly, 2 × 10⁵ OPCs were seeded into a 96-well plate and cultured for 48 hours. After adding 150 µL of MTT solution to each well, incubation of the plate was continued for 4 hours at 37 °C in a five percent CO₂ incubator. After termination of the culture, the supernatant was discarded. Next, 150 µL of DMSO was added to each well to solubilize the purple/blue formazan crystals, and the resulting colored solution was shaken at 37 °C for 15 min to completely dissolve the formazan crystals. The formazan solution was then transferred to a 96-well plate, and the OD absorbance values of each group were measured at a wavelength of 570 nm using a microplate reader (Bio-Rad, USA) to reflect viable cell proliferation.

Cell Cycle

After cells were seeded and cultured for 48 hours, the cells were harvested and fixed in 70 percent ice-cold ethanol at 4 °C overnight. Next, the stages of the cell cycle were estimated according to the manufacturer's instructions (KeyGen, China). The ModFit-LT software was used to analyze the frequency of cells in each phase of the cell cycle. Cell proliferation was estimated by measuring the proportion of cells in the S phase by flow cytometry.

Real-Time PCR

Following treatment, total RNA was extracted with Trizol reagent (Invitrogen, USA), and changes in the mRNA expression of Cdh1 and its downstream substrates were quantified by two-step qRT-PCR. Total RNA was reverse transcribed using a Reverse Transcription Kit (Fermentas, USA), following which PCR was carried out on a StepOnePlus real-time fluorescence quantitative PCR system using SYBR Green I as the fluorescent dye. All the primers in this experiment (see Table 1) were designed and synthesized by TAKARA. β-actin was used as an internal reference control, amplified with the genes of interest, and detected in the same reaction system. PCR conditions were as follows: predenaturation at 94 °C for 10 min to activate the Taq enzyme, followed by 40 cycles of denaturation at 94 °C for 15 s, annealing at 60 °C, and extension for 60 s. The relative mRNA expression levels of Cdh1 and its downstream substrates, Skp2 and Id2, were analyzed by the 2^(−ΔΔCT) method.
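For concreteness, a minimal sketch of the 2^(−ΔΔCT) calculation described above, with invented Ct values and β-actin as the reference gene, as in the methods; this is an illustration of the standard Livak formula, not the authors' actual analysis script:

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^(-ddCT): relative expression of a target gene, normalised to a
    reference gene and to the control condition."""
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: Cdh1 vs. beta-actin, stretched vs. control cells.
fold = fold_change(ct_target=24.1, ct_ref=17.0,
                   ct_target_ctrl=26.3, ct_ref_ctrl=17.1)
print(f"Cdh1 fold change after stretch: {fold:.2f}x")  # > 1: increased expression
```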
Western Blot

Following treatment, the cells were fully lysed on ice with PMSF-containing RIPA lysis buffer (Beyotime, China), and total cellular protein was extracted. After quantification of total protein, all samples were equilibrated to the same concentration with RIPA lysis buffer. Equal volumes of boiled, denatured protein were separated by 10 percent SDS-PAGE, after which the protein was transferred to a methanol-pretreated PVDF membrane. After blocking with a five percent BSA nonspecific protein solution, a Cdh1 polyclonal antibody (1:2000, Abcam, UK), a Skp2 monoclonal antibody (1:1500, Abcam, UK), an Id2 monoclonal antibody (1:1700, Abcam, UK), and a β-actin monoclonal antibody (1:1000, Santa Cruz, USA) were added to the blotted membranes and incubated overnight at 4 °C. The next day, the membranes were rinsed and incubated with horseradish peroxidase-labeled secondary antibody (1:6000, Abcam, UK) for 2 h at room temperature. Development of the antibody-probed membranes by enhanced chemiluminescence (ECL) was performed after three rinses in TBST buffer. Photographic imaging of the developed membranes was performed using the Bio-Rad ChemiDoc MP versatile gel imaging analysis system. The relative expression levels of the Cdh1, Skp2, and Id2 proteins were corrected using β-actin as an internal reference control.

Statistical Analysis

Statistical analysis was performed using SPSS version 17.0 software. The measured data were expressed as mean ± standard deviation (mean ± SD). An independent-sample Student's t-test was used to compare two groups of data. One-way ANOVA compared observations between multiple groups, and Tukey's posttest was used to correct for multiple comparisons between and within groups. The test level was set at an alpha value of P<0.05, which was considered a statistically significant difference.

Decrease in OPC Proliferation and Coordinated Increase in Expression of Intracellular Cdh1 after Mechanical Stretching

By MTT cell viability assay and cell cycle analysis, we analyzed the effects of mechanical injury on the proliferation of OPCs. Cell viability began to decrease after two hours of mechanical stretching, and the cell viability of the injury group decreased as a function of extending the stretching time, especially in the Stretch-12 hours group (P<0.05; Figure 1(a)). Cell cycle analysis showed similar results. The proportion of injury group cells in the S phase of the cell cycle gradually and significantly decreased as a function of extended stretch time and was lower than that found in the control group (P<0.05; Figures 1(b) and 1(c)). These observations suggested that mechanical stretching inhibited OPC proliferation in a time-dependent manner, which was most obvious following 12 hours of stretch. Western blot analysis and real-time PCR analysis showed that mechanical stretching increased Cdh1 expression in a time-dependent manner (Figures 1(d)-1(f)).
These findings suggest that mechanical stretching of OPCs decreases cell proliferation and is accompanied by enhanced expression of intracellular Cdh1; the 12-hour stretch condition was used for subsequent analyses.

Effectiveness of an Adenoviral Vector in Silencing Cdh1 RNA Expression

To confirm the effectiveness of the Cdh1-shRNA adenoviral vector in silencing Cdh1 expression, OPCs were collected 72 hours after adenoviral transduction, following which the expression of Cdh1 was determined by RT-PCR and Western blot analysis (Figures 2(a)-2(c)). Compared with the vehicle group, the Cdh1-shRNA adenoviral construct significantly decreased Cdh1 (all P<0.05). In addition, changes in the expression of the Cdh1 downstream substrates Skp2 and Id2 were also detected after interference. Observations demonstrated that the mRNA and protein expression levels of Skp2 and Id2 exceeded those seen in the empty virus group (P<0.05, respectively), in contrast with the observed decrease in Cdh1 expression (Figures 2(a)-2(c)).

Cdh1 Knockdown Promoted OPC Proliferation and Activation after Mechanical Stretching

Next, the role played by APC-Cdh1 in the proliferation and activation of OPCs after mechanical stretch injury was explored by knocking down Cdh1 expression by RNA interference; cells were divided into Control, Stretch, Ad-Control-Stretch, and Ad-Cdh1-Stretch groups. Western blot analysis confirmed that Cdh1 protein expression in the Ad-Cdh1-Stretch group was significantly lower than in the Ad-Control-Stretch group (P<0.05). This observation indicated that Cdh1 knockdown significantly inhibited the ability of mechanical stretch to alter Cdh1 protein expression (Figures 3(a) and 3(b)). Next, the effect of Cdh1 silencing on the inhibition of OPC proliferation following mechanical stretching was explored. As expected, the cellular viability in the Stretch group and the Ad-Control-Stretch group was significantly lower than that of the normal untreated control group (all at P<0.05). The viability of OPCs in the Ad-Cdh1-Stretch group was significantly increased as compared with the Ad-Control-Stretch group but did not recover to normal control group levels (P<0.05). This suggested that silencing Cdh1 expression significantly counteracted the decrease in cell proliferation activity induced by mechanical stretching (Figure 3(c)). In addition, cell cycle analysis by flow cytometry showed that interference with Cdh1 RNA expression increased the frequency of OPCs in the S phase of the cell cycle after mechanical stretching as compared with that seen in the Ad-Control-Stretch group (P<0.05; Figures 3(d) and 3(e)). These findings suggest that Cdh1 silencing could, at least in part, promote OPC proliferation and activation after mechanical stretch injury.

Possible Role of Skp2 and Id2 Expression in Cdh1-Mediated OPC Proliferation

Expression levels of Skp2 and Id2 in the Stretch-2-hour, 6-hour, and 12-hour groups were lower than those found in the control group and decreased gradually with extended mechanical stretch time (all at P<0.05; Figures 4(a)-4(c)). Following Cdh1 knockdown, the expression of Cdh1 in the Ad-Cdh1-Stretch group was significantly decreased, while the protein expression levels of Skp2 and Id2 in the Ad-Cdh1-Stretch group were significantly higher than those found in the Ad-Control-Stretch group (all P<0.05; Figures 4(d)-4(f)). These observations suggest that Skp2 and Id2 might play a role in Cdh1-mediated OPC proliferation and activation.
Discussion

We established an in vitro mechanical stretch injury model to simulate the pathophysiological changes of OPCs after SCI. We found that mechanical stretch injury decreased OPC proliferation and enhanced the expression of intracellular Cdh1. Disruption of Cdh1 expression in primary OPCs promoted their postinjury proliferation and activation. Moreover, while the expression of the downstream substrates Skp2 and Id2 decreased after stretch injury, the expression of both Skp2 and Id2 increased following disrupted Cdh1 expression. From these observations, APC-Cdh1 might be an important regulator of OPC activation and proliferation after mechanical injury, through a mechanism that might depend on the targeted degradation of Skp2 and Id2 by ubiquitination.

In a prior report, Siebert et al. used an immunological double-labeling technique to demonstrate an almost complete absence of BrdU/Olig1 or Ki67/Olig1 positive cells in the injured area following SCI in the rat model [12]. The results showed that OPC proliferation was most likely derived from the injured marginal area or the normal white matter, but almost certainly not from in situ activated OPC proliferation in the injured area; additionally, both necrosis and apoptosis remained major manifestations in the center of the lesion [13]. Thus, the proliferative behavior of OPCs in the injured area was significantly inhibited as compared with the injured marginal area and normal tissue farther away from the injured site, an observation that is consistent with the conclusions of the current study. Moreover, OPC proliferation and activation after SCI are affected by the complex internal environment of the body. This process is mainly initiated by CXCL1, IGF-1, FGF-2, and other derived signals and growth factors secreted by astrocytes and microglia [14]. We used the FX-4000T6 loading device to stretch OPCs in a model system designed to simulate in situ lesions after SCI, and the cells in vitro did not receive the usual physiological signals that trigger proliferation through the functional contributions of astrocytes and microglia. Therefore, in the in vitro model, mechanical stretch damage to OPCs might affect their proliferative potential.

APC-Cdh1 is an important factor suppressing cell cycle transition from the G1 stage to the S stage. Cdh1 can activate APC through dephosphorylation at late mitosis and the G1 stage in proliferating cells. In addition, it can degrade downstream cell cycle-related proteins by means of irreversible ubiquitination, so that the cells are maintained at the G1 stage, thus preventing excessive cell proliferation [15]. In this study, with the extension of stretch time, Cdh1 expression in oligodendrocyte precursor cells gradually increased. At the same time, flow cytometry analysis suggested that the proportion of G1 stage cells gradually increased, while the proportion of S stage cells gradually decreased. The mechanism might be that the increased APC-Cdh1 suppressed the signaling factors that promote the transition of the cell cycle from the G1 stage to the S stage.

Inhibitor of DNA binding 2 (Id2) is a unique class of molecule in the helix-loop-helix (HLH) protein family and is extensively expressed in the brain of adult rats, especially in the oligodendrocyte lineage. It can promote the phosphorylation of the Rb protein or suppress the activities of cyclin-dependent kinase inhibitors (such as p21) to promote cell cycle transition from the G1 stage to the S stage [16]. Wang et al.
discovered through an in vitro study that silencing Id2 expression with a plasmid vector could evidently inhibit the proliferation of OPCs and accelerate their differentiation [17]. Therefore, we speculated that APC-Cdh1 might block cell cycle progression from the G1 stage to the S stage in OPCs through the targeted degradation of Id2, thus restraining the proliferation and activation of OPCs after mechanical injury.

Activation of S-phase kinase associated protein 2 (Skp2) promotes cell entry into the S phase of the cell cycle by decreasing the activity of the cell cycle-dependent kinase inhibitor p27 (kip1), and its excessive activation might be associated with the reactive proliferation of astrocytes after brain injury [18]. It is formally possible that Skp2 might be recognized as a substrate by APC-Cdh1 and thus participate in ubiquitin-mediated degradation, due in part to the existence of the destruction (D-) box motif in Skp2. Hu et al. showed that APC-Cdh1 could bind to Skp2 in the nucleus via nuclear transport under the induction of the regulatory cytokine TGF-β, leading to decreased Skp2 expression by ubiquitination and subsequent inhibition of cellular proliferation [19]. In this study, OPC expression of Cdh1 was increased after mechanical stretch injury, following which Skp2 expression decreased significantly. By contrast, adenovirus-mediated Cdh1 silencing enhanced Skp2 expression by OPCs following injury. This observation indicated that Skp2 might indeed be a direct downstream substrate of APC-Cdh1 in the pathological process of mechanical spinal cord injury. We speculated that enhanced Cdh1 expression in OPCs after injury might block the cell cycle through a mechanism that might be in part dependent on ubiquitin-mediated degradation of Skp2, a process that might block the activation and proliferation of OPCs after injury.

In our prior in vivo study, we found that the mRNA expression of Cdh1 in the cortical somatosensory motor region was significantly increased after SCI in rats. Decreasing Cdh1 expression by RNA interference stimulated the regeneration of damaged axons [20]. These results suggested that APC-Cdh1 is involved in axonal repair following SCI, and its mechanism might be related to the degradation of the downstream substrates SnoN and Id2 [21,22]. Combined with the comprehensive analysis reported in this study, APC-Cdh1 is not only an important pathway in regulating axonal regeneration after SCI but might also serve as an intracellular factor that disrupts the activation and proliferation of OPCs. APC-Cdh1 thus lies at the crossroads of axonal regeneration and the activation and proliferation of OPCs. Exploration of the effects of regulating APC-Cdh1 activity on OPC activation and proliferation could therefore prove quite significant for understanding the mechanisms of protecting the spinal cord after injury.

Observations made in the current study add to the growing body of literature revealing novel functions of APC in the CNS, including cell cycle regulation, axonal guidance, synaptic plasticity, neurogenesis, and survival of neuronal cells. Interestingly, others have found that degradation products of APC-targeted substrates are associated with neurodegenerative conditions, including Alzheimer's disease, an observation that might implicate dysregulation of APC in neurodegenerative diseases [23] and that highlights the role of APC-Cdh1 as a potential therapeutic target in neuronal degeneration and injury.
Therapeutic targeting of Cdh1 also has a historically important context, since it was shown more than a decade ago that inhibition of Cdh1, a coactivator of the APC/cyclosome, in embryonic primary cultures promotes axon elaboration and overrides growth-suppressing activities that are mediated by soluble myelin-derived factors [24]. Furthermore, recognizing that demyelination is a key initiating event in SCI and that oligodendrocyte apoptosis plays an important role in triggering neuronal demyelination, Huang et al. developed a compressed SCI rat model [25]. This group set out to determine whether or not demyelination and oligodendrocyte apoptosis are seen following compressed SCI. Neuronal demyelination occurred shortly after compressed SCI and was provoked, at least in part, by oligodendrocyte apoptosis, a process that was associated with enhanced expression of Id2 after compressed SCI in the rat model [25].

Conclusions

In conclusion, mechanical stretch injury was shown to dampen the activation and proliferation of OPCs in a process that was accompanied by high intracellular expression of Cdh1. The downregulation of Cdh1 expression by RNA interference stimulated the activation and proliferation of OPCs after mechanical injury. The present study indicates that APC-Cdh1 might serve as a potential therapeutic target in the settings of both recovery and regeneration of neural function after SCI.

Data Availability

The data set supporting the results of this article is included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
Experimental Study on Mechanical Strength of Diesel-Contaminated Red Clay Solidified with Lime and Fly Ash

Diesel-polluted soil is unstable, migrates easily with environmental changes, and causes secondary pollution. In this paper, 0# diesel is used as the pollutant, and lime-fly ash is selected as the solidifying material. Four curing ages (7D, 14D, 21D, and 28D), four pollution concentrations (0%, 5%, 10%, and 15%), and four moisture contents (20%, 25%, 30%, and 35%) were used to conduct unconfined compression tests, direct shear tests, and scanning electron microscope tests on diesel-contaminated red clay. The results show that the curing age significantly affects the curing effect, and a curing age of 21D is optimal. The mechanical properties of the cured soil were best at the optimum age when the pollution concentration was 5%. At the optimal age and the same pollution concentration, the mechanical properties of the solidified soil were best at a moisture content of 30%. Additionally, the scanning electron microscope data indicate that as the pollution concentration increases, the cement created by the pozzolanic interaction of lime and fly ash is increasingly blocked by the "oil film" generated by diesel oil seeping into the soil and is unable to fill the soil's pores, hence reducing the soil's strength.

Introduction

With the growth of China's economy, there are more and more engineering projects. Du et al. [1] pointed out that the annual oil production in China has exceeded 1.8 × 10¹¹ kg, and the oilfield area covers about 3.2 × 10⁵ km². Thus, as industry develops and the demand for diesel fuel for vehicle usage increases, diesel fuel consumption and transportation often result in leakage, as shown in the 2013 Beihai diesel tank leaking event in Guangxi and the 2014 Guangxi Nanning diesel tank rollover disaster. A significant quantity of diesel oil seeps into the red clay roadbed and foundation, causing variable degrees of soil pollution. Diesel-contaminated soil is inherently unstable and rapidly migrates in response to environmental changes, resulting in secondary contamination. As a result, efficient remediation of diesel-contaminated soil is a pressing issue.

The remediation methods for diesel-contaminated soil are mainly divided into physical methods (physical separation, steam extraction, thermal decomposition, electrolysis, etc.), chemical methods (chemical reduction, chemical leaching, soil performance improvement and remediation technology, etc.), and biological methods (bioaugmentation, biological culture, bacterial injection, etc.) [2]. Diesel has complex components and varied structural properties, which very easily cause secondary pollution; a single repair method often has an insignificant effect and a high cost. Research on the restoration of soil contaminated with heavy metal ions has formed a system, but the restoration of oil-contaminated soil and its secondary utilization are still at an exploratory stage. He et al. [3] pointed out that the compressive strength of oily soil treated with lime and fly ash first increased and then decreased with the number of dry-wetting cycles. Shah et al. [4] showed that the geotechnical properties of petroleum-contaminated soil were improved after treating the soil with different stabilizers such as lime, fly ash, and cement, alone or as admixtures.
Stabilizers improve soil geotechnical properties through cation exchange, agglomeration, and pozzolanic action; adding 10% lime, 5% fly ash, and 5% cement to the contaminated soil works best. Kogbara and Al-Tabbaa [5] mixed one part of slaked lime with four parts of slag, and one part of cement with nine parts of slag, in diesel-contaminated sandy soil. The results show that cement- and lime-activated GGBS can effectively reduce the leaching of pollutants in polluted soil. Al-Rawas et al. [6] used cement and cement bypass dust as stabilizers to effectively improve the properties of oil-contaminated soils and provide a safe and effective solution for practical construction applications. Portelinha et al. [7] pointed out that diesel pollution affects soil water-holding capacity and unsaturated hydraulic conductivity, forming a curved shape similar to clay materials. Bian et al. [8] evaluated the shear characteristics of oil-contaminated soil by resistivity and pointed out that under constant compaction and saturation, the shear strength of oil-contaminated soil decreased with increasing resistivity. Zhou et al. [9] used a direct shear test, a variable-head permeability test, and a compression test to show that with increasing diesel content, the cohesion of oily soil first increased and then decreased, and the internal friction angle first changed slightly and then increased; the compressibility and permeability of oil-contaminated soil first decreased and then increased with increasing diesel content. Zheng et al. [10] found through unconfined compressive strength tests that strength decreases with increasing oil content, but when the water content is low, the soil strength increases instead. Chen et al. [11] found through indoor quick shear tests that the bonding force between sand particles is small, that the influence of nondielectric oil on the bonding force between soil particles is much smaller than that of water, and that the effect of crude oil and diesel oil on the shear strength of unsaturated sand is not significant. Li [12] pointed out that the permeability coefficient of diesel-contaminated loam is about 97% lower than that of clean loam when the oil content is 8%. He et al. [13] simulated the dry-wet cycle indoors and confirmed, with the help of unconfined compression tests, the immobilization of oil-contaminated soil by lime-fly ash; the solidified soil has high compressive strength. Zha et al. [14] and other studies pointed out that the use of a fly ash and small-dose lime mixture can effectively improve the engineering properties of expansive soil, reduce its expansion and shrinkage, and improve its strength. Li et al. [15] confirmed that lime-fly ash can effectively improve the mechanical properties of oil-contaminated saline soil. Han et al. [16] pointed out that under a constant normal stiffness (CNS) boundary condition, the shear stress for single-joint and double-joint specimens increases slowly with increasing shear displacement. Song et al. [17] proposed that calcium oxide enhances the strength of Zn²⁺-contaminated soil because calcium oxide reacts with the SiO₂, Al₂O₃, and Fe₂O₃ in red clay to produce C-S-H and C-A-H. Song et al. [18] compared red clay before and after pollution and found that under the same axial strain, the damage variable increases with increasing confining pressure. Song et al.
[19] pointed out that with an increasing number of wetting and drying cycles, the connections between soil particles become closer, the soil porosity decreases, and the strength increases. In environmental and geotechnical engineering, the solidification treatment of diesel-contaminated red clay is a study area that cannot be overlooked. Therefore, in this paper, Guilin red clay is used as the test material; lime, with its strong curing effect, and fly ash, with its strong adsorption, are selected; and through unconfined compression tests, direct shear tests, and scanning electron microscope tests, the effects of different pollution concentrations, moisture contents, and curing ages on the mechanical properties and microstructure of diesel-contaminated red clay co-solidified with lime and fly ash are investigated.

Experimental Materials

Diesel. The pollutant used in this test is 0# diesel, obtained from a Sinopec gas station in Guilin City. The 0# diesel is light yellow with a light green luster, slightly soluble in water, with good fluidity but greater viscosity than water, and substantially volatile; it has a special pungent odor. The relative density of the diesel used is 0.857, the viscosity coefficient is 3.56-4.05 mPa·s, and the freezing point is -25.82 °C.

Red Clay. The soil used in the test was taken from a foundation pit in Lingui District, Guilin City, which belongs to the subregion of the Guofeng Plain in the structural erosion landform area, with many low mountains and hills. The soil layer structure can generally be divided into three layers. The first layer is plain fill (Q4ml), mainly composed of cohesive soil, crushed stone, and schist, with a loose structure; the layer thickness is 0.40-2.00 m, and the average thickness is 1.01 m. The second layer is plastic secondary red clay (Q3al+pl), which is uniform in soil quality, smooth in section, and slightly glossy; the layer thickness is 0.20-5.70 m, and the average thickness is 1.65 m. The third layer is pebble gravel soil (Q3al+pl), mainly gray-brown pebble gravel; the layer thickness is 0.50-2.50 m, and the average thickness is 0.85 m. The red clay used in this experiment was taken from the second layer at a depth of 3-5 m. The red clay was retrieved, air-dried, crushed, passed through a 2 mm sieve, stored in a moisture-proof plastic bucket, sealed for use, and subjected to geotechnical tests. Its basic physical properties and parameters are shown in Table 1.

Fly Ash. The fly ash was purchased from Gongyi Longze Water Purification Material Co., Ltd. It is mainly composed of coal ash and slag, with strong adsorption properties; the specific gravity is between 1.95 and 2.36, and the dry density is 450 kg/m³ to 700 kg/m³. The specific surface area is between 220 m²/kg and 588 m²/kg, and the main components are SiO₂, Al₂O₃, and Fe₂O₃. Fly ash can undergo recrystallization, ion adsorption and exchange, carbonation, and pozzolanic reaction with lime, forming "reticular" and "rod-like" structures in the soil, which significantly improve the mechanical properties of the solidified soil.

Lime. Lime is a common solidifying material, purchased from Xilong Science Co., Ltd. as bottled quicklime in white or gray lumps, granules, or powder. The calcium oxide content is greater than or equal to 98%. CaO reacts with water to form Ca(OH)₂, whose OH⁻ can decompose the Si-O and Al-O bonds in the glass body of fly ash, fully stimulating the activity of the fly ash.
At the same time, it provides Ca²⁺ for the formation of hydraulic gels in the hydration reaction.

Test Method. The naturally air-dried red clay was crushed and passed through a 2 mm sieve. The masses of air-dried soil, water, diesel oil, and lime-fly ash required to prepare each soil sample were calculated (diesel and solidifying material are dosed as mass percentages of the dry soil). The procedure for the test is as follows. In the first step, the weighed diesel oil is distributed evenly into the soil, covered with plastic wrap, and allowed to sit for 12 hours so that the oil molecules may infiltrate the soil particles. In the second step, the air-dried soil sample is sprayed with distilled water. In the third step, the contaminated soil and the prescribed dose of curing agent were combined, stirred evenly, and sealed for 24 hours; during this time, the soil samples were turned over often to ensure that all of the components were well mixed. According to the "Geotechnical Test Method Standards" (GB/T50123-2019) [20], standard cylindrical triaxial samples with a diameter of 39.1 mm and a height of 80 mm and remolded ring-knife samples with a diameter of 61.8 mm and a height of 20 mm were prepared by the static pressure method for the unconfined compressive strength and direct shear tests. The unconfined compressive strength tests and direct shear tests were carried out after curing for 7D, 14D, 21D, and 28D under standard curing conditions (curing temperature of 20 ± 2 °C, humidity ≥ 95%); the dry density of the samples was 1.40 g/cm³.

Since diesel oil is a nonaqueous liquid, it does not dissolve in the pore water after infiltrating the soil, forming a "diesel pore liquid." Therefore, fly ash, with its strong adsorption and alkalinity, was selected; it can bind strongly and irreversibly with oily substances through intermolecular attraction and chemical bonding. Lime is a common solidifying material for treating polluted soil, and lime-solidified soil has high shear strength and compressive strength. The combination of lime and fly ash is used to solidify diesel-polluted soil: the two undergo pozzolanic and solidification reactions, forming cementitious compounds that fill the spaces between soil particles, which gives a good solidification effect and improves the soil's mechanical strength. After many attempts, the moisture contents were set to 20%, 25%, 30%, and 35%; the oil contents were set to 0%, 5%, 10%, and 15%; the curing ages were set to 7D, 14D, 21D, and 28D; and the curing material was 20% fly ash + 12% lime.

Characteristics of the Unconfined Compressive Strength of Cured Diesel-Contaminated Red Clay [20,21]. According to the geotechnical test method standard, the maximum axial stress reached without lateral confinement is taken as the unconfined compressive strength; if the maximum axial stress is not apparent, the stress corresponding to an axial strain of 15% is taken as the unconfined compressive strength. From Table 1, it can be seen that the optimal moisture content of the soil used in this test is 30%. Therefore, the unconfined compression tests used a moisture content of 30% and pollution concentrations of 0%, 5%, 10%, and 15%, and explored the unconfined compressive strength at the same curing agent dosage for curing ages of 7D, 14D, 21D, and 28D. The unconfined compressive strengths at the different curing ages obtained from the tests are shown in Figure 1.
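Before turning to the results, a small sketch of the strength-picking criterion above (the peak axial stress, or the stress at 15% axial strain when no clear peak appears), applied to an invented stress-strain record; numpy is assumed, and the data are purely illustrative:

```python
import numpy as np

def ucs(strain, stress, cap_strain=0.15):
    """Unconfined compressive strength: the peak axial stress, or the stress
    at 15% axial strain when the curve shows no clear peak."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    i_peak = int(np.argmax(stress))
    if i_peak == len(stress) - 1:      # stress still rising: no clear peak
        return float(np.interp(cap_strain, strain, stress))
    return float(stress[i_peak])

# Hypothetical record: axial strain (as a fraction) vs. axial stress (kPa).
strain = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06]
stress = [0.0, 180.0, 390.0, 560.0, 640.0, 610.0, 520.0]
print(f"UCS = {ucs(strain, stress):.0f} kPa")  # picks the peak at 4% strain here
```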
It can be seen from Figure 1 that the unconfined compressive strength of lime-fly ash-solidified diesel-polluted soil first increases and then decreases with increasing curing age, reaching its peak at a curing age of 21D. Compared with the corresponding strengths at a curing age of 7D, the unconfined compressive strength increased by 159%, 166%, 134%, and 144% for the four pollution concentrations, respectively. With increasing age, the failure strain corresponding to the ultimate strength of the solidified diesel-contaminated red clay first increased and then decreased (2.48%, 2.61%, 2.31%, and 1.64%, respectively). Analysis of the reasons shows that the pozzolanic reaction between lime and fly ash occurs fully, and this reaction is a slow-developing and time-consuming process. With increasing curing age, the active silica in the fly ash and the Ca2+ ionized from the lime form C-S-H and C-A-H, which condense on the surface of the soil particles, increasing the effective contact area between soil particles and thereby improving the soil strength. At the same time, after lime hydration produces Ca(OH)2, the SiO2 and Al2O3 on the surface of the fly ash glass body slowly dissolve into sols and gradually react with Ca(OH)2 to form calcium silicate, calcium aluminosilicate, and other compounds, which fill the spaces between soil particles; the connections between soil particles are strengthened, and the unconfined compressive strength of the solidified soil is improved.

Influence of Pollution Concentration on the Unconfined Compressive Strength of Cured Diesel-Contaminated Soil. To explore the influence of diesel pollution concentration on the unconfined compressive strength of cured soil, the optimal conditions of a 21D curing age and a moisture content of 30% were selected, and the unconfined compressive strength of the cured soil was measured under different diesel pollution concentrations. The unconfined compressive strengths obtained under the different concentrations are shown in Figure 2. It can be seen from Figure 2 that as the pollution concentration increased from 5% to 15%, the unconfined compressive strength of the cured soil decreased with increasing pollution concentration, by 15.5% and 19.4%, respectively. The stress-strain curve of the cured soil under the different concentrations can be divided into four stages: the elastic stage, in which the stress-strain curve is linear and the cured soil sample deforms elastically; the yield stage, in which the stress-strain curve is close to a straight line and the solidified soil sample begins to undergo plastic deformation; the strengthening stage, in which the stress slowly increases to its peak, at which point the solidified soil sample is about to fail; and the failure stage, in which the stress-strain curve trends downward until the cured soil sample is destroyed. The main reason is that after the diesel molecules penetrate the soil, a layer of "oil-film" forms on the surface of the soil particles, and as the pollution concentration increases, the "oil-film" gradually grows and thickens.
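As a small worked check of how such percentage gains are computed, the relative increase is (q_21D − q_7D)/q_7D × 100%. The values below are illustrative placeholders chosen only to reproduce the reported gains, not the study's raw data:

```python
# Illustrative only: percent gain of the 21D strength over the 7D baseline
# for each pollution concentration (values are placeholders).
q_7d  = {0: 180.0, 5: 160.0, 10: 140.0, 15: 120.0}   # kPa at 7D
q_21d = {0: 466.0, 5: 426.0, 10: 328.0, 15: 293.0}   # kPa at 21D

for w_oil in q_7d:
    gain = (q_21d[w_oil] - q_7d[w_oil]) / q_7d[w_oil] * 100.0
    print(f"oil content {w_oil:>2}%: +{gain:.0f}% over 7D")
```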
Because diesel is hydrophobic, the "oil-film" hinders the infiltration of water, so the gelatinous particles generated by the reaction of the curing materials are unable to fill the spaces between the soil particles; consequently, the unconfined compressive strength decreases with increasing pollution concentration.

Influence of Moisture Content on the Unconfined Compressive Strength of Cured Diesel-Polluted Soil. To explore the influence of moisture content on the unconfined compressive strength of cured soil, tests were carried out at a curing age of 21D, a pollution concentration of 5%, and moisture contents of 20%, 25%, 30%, and 35%. The unconfined compressive strengths obtained under the different moisture contents are shown in Figure 3. It can be seen from Figure 3 that the unconfined compressive strength of the solidified soil first increases and then decreases with increasing water content, while the failure strain of the cured soil gradually increases; the failure strains at 20%, 25%, 30%, and 35% are 1.38%, 1.88%, 2.47%, and 3.07%, respectively. The primary explanation may be that as the moisture content rises, the proportion of water molecules in the soil increases relative to the oil molecules and the cementitious gels formed by the curing materials. Most of the pore space in the cured soil is then occupied by water molecules, which weakens the curing effect, so the compressive strength of the soil is reduced. At the same time, the diesel that infiltrates the soil produces a cohesive effect on the soil particles; because the number of water molecules between soil particles increasingly exceeds the number of oil molecules, this bonding effect steadily diminishes. Beyond the optimum, therefore, the unconfined compressive strength of the cured soil decreases with increasing moisture content.

Shear Strength Characteristics of Solidified Diesel-Polluted Soil

The shear strengths of the solidified soil at different curing ages, obtained from the direct shear tests, are shown in Figure 4. It can be seen from Figure 4 and Table 2 that, for the same pollution concentration, the shear strength of the cured soil under a given vertical load gradually increases with curing age. Compared with the cured soil with a curing age of 7D and a pollution concentration of 10%, the shear strength increased by 318% at a vertical load of 100 kPa, by 228% at 200 kPa, by 190% at 300 kPa, and by 173% at 400 kPa. The cohesion of the cured soil increased by 507%, and the internal friction angle increased by 54% (the cohesion and internal friction angle follow from a linear Mohr-Coulomb fit of shear strength against vertical load; a minimal fitting sketch is given below). Several factors contributed to the lower early-age strengths: at low vertical loads and short curing ages, the Ca(OH)2 produced by the lime hydration reaction could not yet fully activate the fly ash, and the calcium silicate and other gels created by the pozzolanic reaction were less developed. Some of the products are blocked by the "oil-film" formed on the soil particle surfaces by the infiltrating diesel and cannot fill the spaces between the soil particles, leaving pores between them, so the shear strength of the soil is reduced.
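For readers reproducing such direct shear results, the cohesion c and internal friction angle φ follow from a least-squares fit of the Mohr-Coulomb relation τ = c + σ·tan(φ) to the measured (vertical stress, shear strength) pairs. A minimal sketch with illustrative numbers, not the study's data:

```python
import numpy as np

# Vertical (normal) stresses and measured shear strengths (illustrative, kPa).
sigma = np.array([100.0, 200.0, 300.0, 400.0])
tau   = np.array([ 95.0, 150.0, 210.0, 262.0])

# Least-squares line tau = c + sigma * tan(phi)  (Mohr-Coulomb envelope).
slope, intercept = np.polyfit(sigma, tau, 1)
c_kpa = intercept                        # cohesion, kPa
phi_deg = np.degrees(np.arctan(slope))   # internal friction angle, degrees

print(f"cohesion c = {c_kpa:.1f} kPa, friction angle phi = {phi_deg:.1f} deg")
```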
With increasing curing age, the curing reaction proceeds sufficiently, and a large number of gels such as calcium silicate are generated, covering the "oil-film." When the vertical load increases, these products are pressed into the "oil-film" and fill the spaces between the soil particles, so that the pores between soil particles become fewer, thereby improving the shear strength of the cured soil.

Influence of Pollution Concentration on the Shear Strength of Cured Diesel-Contaminated Soil. Shear tests of the cured soil samples with a curing age of 21D were used to obtain the shear strength of cured diesel-contaminated red clay under different pollution concentrations, as shown in Figure 5. It can be seen from Figure 5 that, at the same curing age and the same moisture content, the lime-fly ash-solidified soil with a diesel pollution concentration of 5% shows the best shear resistance. Compared with the cured soil with a pollution concentration of 15%, its shear strength is 40% higher at a vertical load of 100 kPa, 44% higher at 200 kPa, 32% higher at 300 kPa, and 9% higher at 400 kPa. Compared with the cured soil with a pollution concentration of 10% at the same curing age, the cohesion of the cured soil decreased by 34%, and the internal friction angle decreased by 2.5%. In other words, the contamination concentration is negatively correlated with the curing effect. Analysis of the reasons: after diesel infiltrates the soil, the soil particle surfaces become covered with a layer of "oil-film." When the pollution concentration is low, the "oil-film" cannot cover the soil completely; little of the gel generated by the curing reaction is adsorbed onto the "oil-film," and most of the products fill the pores between the soil particles. With increasing pollution concentration, the coverage area of the "oil-film" increases, and most of the products are adsorbed and cannot fill the spaces between the soil particles to provide strength for the solidified soil. Therefore, as the pollution concentration increases, the shear strength of the cured soil decreases.

Effect of Moisture Content on Shear Strength of Solidified Diesel-Polluted Soil. Shear tests of the cured soil samples with a curing age of 21D were used to obtain the shear strength of cured diesel-contaminated red clay at different moisture contents. It can be seen from Figure 6 that, under the same curing age and the same pollution concentration, the shear resistance of the lime-fly ash-solidified contaminated soil is highest at the lower moisture contents; that is, the moisture content is negatively correlated with the curing effect. Analysis of the reasons: when the moisture content is low, the pores between soil particles are filled with water molecules and the cementitious gel generated by the curing reaction, the moisture in the soil can contribute strength, and the shear strength of the cured soil is provided jointly by the water molecules and the cement condensate generated by the curing reaction. With increasing moisture content, the water in the soil becomes excessive and no longer provides strength; the shear strength of the cured soil is then provided mainly by the gel condensate generated by the curing reaction, while the pore water mainly plays a lubricating role. Therefore, the shear strength of cured soil at low moisture content is higher than that of cured soil at high moisture content.
At the same time, a higher moisture content reduces the adsorption of the cement, so the cohesion of the cured soil decreases with increasing moisture content.

Influence of Pollution Concentration on the Microscopic Characteristics of Solidified Diesel-Polluted Soil

Cured soil samples with the optimum curing age of 21D and a moisture content of 30% were chosen for scanning electron microscopy tests to compare the microscopic characteristics under the various pollution levels. The scanning electron microscopy (SEM) tests used a Hitachi S-4800 field emission scanning electron microscope. To obtain clearer microscopic morphology, the central part of each soil sample was coated with gold to increase its electrical conductivity. Figure 7 shows the microscopic morphology of the cured soils at the different concentrations at 1000x and 5000x magnification. Comparing the micrographs of the lime-fly ash co-solidified diesel-contaminated soil under the different pollution concentrations: when the pollution concentration is 0%, the curing reaction generates a large number of filamentous, needle-like, and flake-like gel crystals. Through the adsorption effect of the fly ash, it is closely connected with the soil particles, mainly through point-to-point or point-to-surface contacts. Some of the products are stacked on top of the soil particles in flakes, and the number of loose, tiny soil particles in the soil body is small. When the pollution concentration is 5%, the diesel molecules penetrate the soil and the fly ash adsorbs the diesel onto the soil particles; the arrangement is uneven and there are still many pores, but the gel crystals generated by the multiple curing reactions form a smaller-scale mesh unit structure (see Figure 7(d)), which improves the strength of the cured soil. When the pollution concentration reaches 10%, the number of oil molecules rises, and the gel crystals form larger agglomerations that are more uniformly structured but still contain large pores. More granular debris is adsorbed by the diesel onto the surface of the soil particles, and a large number of diesel molecules form a honeycomb structure and bond with the agglomerates. When the pollution concentration is 15%, large-scale agglomerates form aggregates, which are primarily in surface-to-surface contact, with fewer pores and larger particles. The increase in pollution concentration increases the thickness of the "oil-film" formed by the diesel infiltrating the soil, and its adhesion to the large particle aggregates is stronger, so that the cured products cannot fill the spaces between the soil particles and the soil strength cannot be improved.

Conclusion

This paper addresses the problem of diesel-contaminated red clay under different influencing factors, using lime-fly ash as the curing agent. Based on the indoor geotechnical tests, the following conclusions can be drawn:

(1) The diesel-contaminated red clay cured with lime and fly ash has higher compressive strength and shear strength. For a fixed curing-agent content, a curing age of 21D is optimal. Compared with a curing age of 7D, the unconfined compressive strength at the same moisture content and the same pollution concentration increases by 159%, 166%, 134%, and 144%, respectively.
Compared with the shear strengths at vertical loads of 100 kPa, 200 kPa, 300 kPa, and 400 kPa under the same moisture content and pollution concentration at the 7D curing age, the shear strength increases by 40%, 44%, 32%, and 9%, respectively.

(2) The failure mode of solidified diesel-contaminated soil is strain softening. When the pollution concentration increased from 5% to 15%, the unconfined compressive strength of the solidified soil decreased with increasing pollution concentration, by 15.5% and 19.4%, respectively. As the pollution concentration increases, the viscosity of the diesel oil slows the rate of the pozzolanic reaction. Moreover, the "oil-film" formed by the diesel infiltrating the soil particles gradually thickens, which hinders the gels generated by the curing reaction from filling the soil; therefore, the mechanical strength of the contaminated soil cannot be improved.

(3) The main reason for the improved strength of the solidified contaminated soil is that the cementitious particles formed by the multiple pozzolanic reactions between fly ash and lime connect to form a network structure that fills the spaces between the soil particles, which become closely arranged and tightly connected. As the pollution concentration increases, the diesel-coated cementitious particles cannot enter the spaces between the soil particles, and the soil strength decreases. The consistency between the microstructural characteristics and the changes in the macroscopic mechanical strength of the solidified diesel-polluted soil was thus confirmed.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.
Explainable Link Prediction for Privacy-Preserving Contact Tracing

Abstract: Contact tracing has been used to identify people who were in close proximity to those infected with the SARS-CoV-2 coronavirus. A number of digital contact tracing applications have been introduced to facilitate or complement physical contact tracing. However, there are a number of privacy issues in the implementation of contact tracing applications, which make people reluctant to install them or to update their infection status on these applications. In this concept paper, we present ideas from Graph Neural Networks and explainability that could improve trust in these applications and encourage adoption by people.

Introduction

Contact tracing is the task of searching for people who might have come into physical contact with individuals of interest, such as those who have tested positive for the SARS-CoV-2 coronavirus. Digital contact tracing is the version of contact tracing that relies on mobile phones and/or wearable devices to monitor events in which individuals were in close proximity. As governments have tried to encourage the use of such digital contact tracing applications in response to COVID-19, a number of privacy-related issues have been raised, and a number of recommendations for contact tracing applications have been proposed. Although a number of solutions have been proposed, including those based on federated learning, the adoption of contact tracing apps by people has been lukewarm; adoption is less than 15% of the population in several countries that have seen significant COVID-19 outbreaks. The low adoption of contact tracing and the related exposure notification apps has led to concerns that these apps are not going to work if people are not motivated to use them (Bengio [2020]). Even among the people who have downloaded such apps, usage remains particularly low, making them ineffective in combating the spread of the SARS-CoV-2 virus. Abueg et al. [2020] argued that contact tracing apps could be useful even when adoption is low, but increased adoption and information sharing benefit society at large.

In this work, we propose ideas that can encourage the adoption of digital contact tracing and exposure notification applications while adhering to privacy considerations. First, we propose that not all instances of proximity between individuals need to generate exposure notifications; instead, only predictions of possible exposure to the coronavirus should lead to alerts. This can be accomplished using link prediction and node classification tasks in graphs. Link prediction is the task of finding missing links in a graph. Given a property graph where nodes are people and their physical contacts are links, Graph Neural Network (GNN) models can be trained to predict additional exposure links. Such links can exist even when there is no recorded physical proximity event (because apps were switched off, or people were not carrying their phones during a chat in the office, and other similar potential exposure events). If we are able to predict exposure links well, classifying whether a node is exposed is a relatively easier problem, so in this work we focus only on link prediction. The other related task in graphs, namely entity resolution, has applications in physical contact tracing, and the solutions proposed here are applicable to entity resolution as well. Explaining the neural model predictions is naturally very important to build trust in digital contact tracing apps.
But making the explanations more human-understandable is particularly important in applications aimed at the general population. We propose an improvement to the state-of-the-art Anchors solution of Ribeiro et al. [2018], and we also introduce a new path-ranking-based explainability solution. As in any social network, contact tracing apps are only as good as the number of people participating in the network, and especially the number willing to share information with it. We propose Graphsheets, based on Factsheets by Arnold et al. [2019], to provide standardized information and to increase trust in contact tracing applications and the underlying GNN model predictions. Finally, we draw on the nudge idea introduced by Thaler and Sunstein [2009] in the behavioural sciences to encourage users to share relevant information, based on explainability techniques and Graphsheets.

Related Work

Tang [2020] presents a survey of currently available contact tracing applications and their technology choices. MIT Technology Review [2020] maintains a contact tracing app tracker with data on adoption in the respective geographical areas. Mosoff et al. [2020] also maintain a similar list of apps. Hamilton et al. [2017] introduced GraphSAGE (SAmple and AggreGatE), an inductive framework that leverages node feature information (e.g., text attributes, node degrees) to efficiently generate node embeddings for previously unseen data or entirely new (sub)graphs. In this inductive framework, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. The Position-Aware Graph Neural Network (P-GNN) significantly improves performance on the link prediction task over Graph Convolutional Networks. Much of the recent work on explanations is based on post-hoc models that try to approximate the predictions of complex models using interpretable models. Vannur et al. [2020] present post-hoc explanations of the links predicted by a Graph Neural Network by treating it as a classification problem, presenting explanations using LIME (Ribeiro et al. [2016]) and SHAP (Lundberg and Lee [2017]). Arya et al. [2020] introduced the AIX360 toolkit, which has a number of explainability solutions that can be used for post-hoc explanation of graph models if they can be approximated by interpretable models. Agarwal et al. [2020] introduced Neural Additive Models, which learn a model for each feature to increase interpretability.

We begin by explaining our experimental setup. We use an existing GNN framework, which in turn uses PyTorch (Paszke et al. [2019]) and, more specifically, PyTorch Geometric (Fey and Lenssen [2019]). One of the pre-processing steps we perform is an all-pairs shortest path calculation using appropriate approximations. This pre-processing step also comes in handy when explaining the links.

Link Prediction for Contact Tracing

Following the procedure in prior work, we choose only connected components with at least 10 nodes for our experiments. A positive sample is created by randomly choosing 10% of the links. For each negative sample, we use one of the nodes involved in a positive sample and pick a random unconnected node as the other node. The number of negative samples is the same as that of the positive samples (a minimal sketch of this sampling scheme is given below). We discuss hard negative samples in our future work section. Our batch size is typically 8 subgraphs, and for P-GNN we use 64 anchor nodes.
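The following minimal sketch illustrates the sampling scheme just described: 10% of existing edges as positives and, for each positive, an unconnected partner node as a negative. It assumes a NetworkX graph of contacts; the function and variable names are ours, not from the paper's codebase.

```python
import random
import networkx as nx

def sample_links(g: nx.Graph, pos_frac: float = 0.10, seed: int = 0):
    """Sample positive edges and matched negative (non-)edges for link prediction."""
    rng = random.Random(seed)
    edges = list(g.edges())
    n_pos = max(1, int(pos_frac * len(edges)))
    positives = rng.sample(edges, n_pos)

    nodes = list(g.nodes())
    negatives = []
    for u, _ in positives:
        # Pair one endpoint of a positive edge with a random unconnected node.
        v = rng.choice(nodes)
        while v == u or g.has_edge(u, v):
            v = rng.choice(nodes)
        negatives.append((u, v))
    return positives, negatives

# Tiny usage example on a random stand-in for a contact graph.
g = nx.erdos_renyi_graph(50, 0.1, seed=1)
pos, neg = sample_links(g)
print(len(pos), "positives,", len(neg), "negatives")
```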
Table 1: Comparison of link prediction performance with different GNNs.

Dataset   Model   ROC AUC   Std. Dev.
UDBMS     GCN     0.4689    0.0280
UDBMS     P-GNN   0.6456    0.0185

As shown in Table 1, the P-GNN model performs better on this dataset. In the next section, we explore ways to explain link prediction for the example shown in Figure 2.

Explanations

One of the commonly used methods for explainability is LIME (Ribeiro et al. [2016]). Anchors (Ribeiro et al. [2018]) is an improvement over LIME that works by focusing only on the important features. The link prediction problem can perhaps be posed as a simple classification problem of predicting whether a person is exposed. A typical explanation for link prediction is as shown in Figure 3. However, such interpretability solutions are rarely satisfactory to end users. Hence, we propose an improvement to Anchors by incorporating ideas from GNN-Explainer.

Graph Anchors for Link Explanation

We train a simple classifier to predict whether a source node and a target node are linked. This classifier is trained on the output of the GNN model described in Section 3. Compared to the GNN model, this post-hoc classification model is considered more interpretable. In the explanation above, the classifier model predicts a link between Ellen DeGeneres and Heidi Montag, just like the original GNN model. The Anchors model then explains that these two nodes will have a link 52.3% of the time if the conditions shown in Figure 3 hold. This result suffers from loss of information between the GNN model and the post-hoc interpretable model. Our classifier above is rudimentary, and with more feature engineering we could make the classifier model closer to the GNN model. However, even with reduced accuracy, providing a summary of the conditions under which a model predicts a link is more intuitive than the more complex graph explainability solutions. Hence, we tried generating the Anchors explanation on the output of the GNN-Explainer model rather than on the post-hoc classification model. The model consumed by Anchors now consists of a subset of the original features, and the target is a binary variable based on the output of the original GNN model. This simple modification to the input of the Anchors explainability solution seems to work reasonably well but is still unsatisfactory from the end-user point of view, as shown in Figure 4. GNN-Explainer and Anchors both use the broad idea of removing unnecessary features and using only the important features to explain the links.

Path Ranking Based Link Explanation

Our next explainability solution is inspired by ideas in error detection in knowledge graphs. In particular, we use the PaTyBRED approach described in Melo and Paulheim [2017] and Melo and Paulheim [2020]. We propose using the ranking function in the PaTyBRED algorithm to explain the links predicted by GNNs. The algorithm works by ranking already existing paths between two nodes (Figure 5: path-ranking-algorithm-based explanation). The idea here is that the information contained in such a path is more useful than that in other paths for understanding the predicted link. This idea is somewhat similar to the GNN-Explainer idea of exploring the subgraph around the predicted link, except that we prefer to use an independent algorithm to rank all the paths rather than the subgraph around the two nodes. As shown in Figure 5, presenting the paths between two nodes that are predicted to be linked can be a substitute for what would otherwise be unsatisfactory explanations based on features and common neighbors; a minimal path-ranking sketch follows.
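As a rough illustration of path-ranking-style explanation, one can enumerate short paths between the two nodes and rank them. The scoring heuristic below (penalizing long paths and high-degree intermediate nodes) is our own simplified stand-in, not the actual PaTyBRED ranking function:

```python
import networkx as nx

def ranked_paths(g: nx.Graph, src, dst, cutoff: int = 4, top_k: int = 3):
    """Rank simple paths between src and dst as candidate explanations.

    Scoring is a simplified heuristic: shorter paths through low-degree
    (more specific) intermediate nodes score higher.
    """
    def score(path):
        inner = path[1:-1]
        degree_penalty = sum(g.degree(n) for n in inner)
        return 1.0 / (len(path) + degree_penalty)

    paths = nx.all_simple_paths(g, src, dst, cutoff=cutoff)
    return sorted(paths, key=score, reverse=True)[:top_k]

# Usage on a toy contact graph.
g = nx.Graph([("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("D", "E")])
for p in ranked_paths(g, "A", "C"):
    print(" -> ".join(p))
```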
Graphsheets

After training models for the contact tracing use case, it is important to ensure that downstream applications and model builders follow similar practices for fairness, privacy, data protection, and ethics. Towards this goal, we introduce Graphsheets. Based on Factsheets (Arnold et al. [2019]), Graphsheets include facts about the graph being studied and a number of FAQ-style questions and answers that model developers have to consider before training link prediction models on their own datasets. As shown in Figure 6, we provide some sample information regarding the model, the graph, and the systems used in training graph neural models on the dataset. Other relevant information can be added depending on the application.

Nudging

Having created a Graphsheet and being able to generate human-understandable explanations for the predicted links, we can try to nudge users both to use contact tracing applications and to share their personal data with the community. Thaler and Sunstein [2009] proposed nudging: altering choice architectures to make people behave in ways that are assumed to be good for them, without making other choices difficult or costly. In privacy-preserving contact tracing, there is an inherent tension between the societal good of people sharing personal information and the privacy risk involved in sharing such information. Prainsack [2020] discusses the broader implications of nudging for public healthcare data. For the limited purpose of contact tracing, where each participant is likely to benefit from sharing personal information, nudging them to share such information could be beneficial. As shown in Figure 7b, insights drawn from both explainability and Graphsheets can be used for nudging. Email ID is not a useful feature for contact tracing and hence is better not collected, or made clear to users to be optional and used only for administrative purposes, as the case may be. Age, on the other hand, is a significant factor in assessing risk for COVID-19 and other health outcomes. Models could be trained with age as a feature, or alternatively the age could be used to tailor alerts while being stored only on the user's device. For example, if the model predicts an exposure link with low confidence, the app could decide whether or not to alert the user based on the user's risk profile.

Conclusion

We described a Graph Neural Network model to predict exposure links in a contact tracing application, to reduce both false positives (unnecessary alerts) and false negatives (lack of updates by participants). We proposed three methods to encourage people to install the applications and to share information with the community, and optionally with a centralized authority, by increasing trust in the applications: Graphsheets to increase trust, human-understandable explanations for the exposure notifications, and nudging based on important features identified by explainability techniques.
Imaging and Characterization of Oxidative Protein Modifications in Skin

Abstract: Skin plays an important role in protection, metabolism, thermoregulation, sensation, and excretion whilst being consistently exposed to environmental aggression, including biotic and abiotic stresses. During the generation of oxidative stress in the skin, the epidermal and dermal cells are generally regarded as the most affected regions. The participation of reactive oxygen species (ROS) as a result of environmental fluctuations has been experimentally proven by several researchers and is well known to contribute to ultra-weak photon emission via the oxidation of biomolecules (lipids, proteins, and nucleic acids). More recently, ultra-weak photon emission detection techniques have been introduced to investigate the conditions of oxidative stress in various living systems in in vivo, ex vivo, and in vitro studies. Research into two-dimensional photon imaging is drawing growing attention because of its application as a non-invasive tool. We monitored spontaneous and stress-induced ultra-weak photon emission under the exogenous application of a Fenton reagent. The results showed a marked difference in the ultra-weak photon emission. Overall, these results suggest that triplet carbonyls (3C=O*) and singlet oxygen (1O2) are the final emitters. Furthermore, the formation of oxidatively modified protein adducts and protein carbonyl formation upon treatment with hydrogen peroxide (H2O2) were observed using an immunoblotting assay. The results from this study broaden our understanding of the mechanism of the generation of ROS in skin layers, and the formation/contribution of various excited species can be used as a tool to determine the physiological state of the organism.

Introduction

Numerous crucial bodily processes and functions, such as thermoregulation, metabolism, sensory perception, excretion, and hormone and vitamin synthesis, can be attributed, partially or fully, to the largest organ, the skin [1]. Additionally, the skin represents the primary mechanical barrier that secures the body against invading pathogens, solar irradiation, fluctuating temperature, dehydration, and various chemical and mechanical aggressions [2][3][4]. Therefore, it is imperative to study the mechanisms of its multitudinous roles to discover novel effective treatments/procedures to maintain or restore its healthy condition. To achieve this, researchers often turn to animal models as a representation of human skin [5,6]. In addition to alleviating ethical and financial burdens, the variety of model systems and tools available makes them an attractive alternative [7][8][9]. These systems must possess high levels of similarity with human skin in terms of the attributes that are integral to a given research purpose. Pig/porcine skin ranks at the top (particularly in dermatology) due to the myriad anatomical, biochemical, and physiological similarities between pigs and humans [10]. Its advantage also lies in its widespread accessibility and cost-effectiveness, which can be traced to pork being the most produced/consumed meat worldwide [11]. Similarities start from the multilayered skin composition, which can be grouped into three main layers: the epidermis, the dermis, and the hypodermis (also called the subcutis) [10,12]. However, no animal model system shares all the features of human skin in their entirety.
For our study, porcine skin was selected as an ex vivo model, owing to its comparably higher similarity to human skin, in the features essential for our research (such as its morphology and biochemical composition), than the other available models. Skin is rich in reactive oxygen species (ROS), such as, but not limited to, hydrogen peroxide (H2O2) and the hydroxyl radical (HO•) [7,13,14]. This is due to their endogenous and exogenous production/stimulation. Constant contact with molecular oxygen (O2), xenobiotics, solar radiation, oxidative metabolism, pathogen-destroying immune cells, and physiological and psychological stress are examples of factors that affect the delicate balance of natural oxidants/antioxidants in the cell and, subsequently, the tissue/organ/organism [15,16]. Under regulated conditions, ROS are generally neutralized by a network of non-enzymatic antioxidants, such as glutathione and ascorbic acid, or enzymatic antioxidants such as superoxide dismutase (SOD), catalase (CAT), glutathione peroxidase (GPX), glutathione reductase, and thioredoxin reductase (TRX). A key enzyme that detoxifies H2O2 is the peroxisome-localized catalase. Enzymatic antioxidants act in a coordinated way to maintain normal redox homeostasis [13,17]. If this balance is not restored through innate enzymatic and non-enzymatic mechanisms or exogenous antioxidants, the result is a condition called oxidative stress [18,19]. Eustress is attributed to ROS involved in cellular signaling, but the risk comes when oxidative stress prevails, reaches toxic levels, and important biomolecules are negatively affected [18]. Biomolecule oxidation has been found to be a cause of, or factor in, the formation or aggravation of several diseases, such as diabetes, psoriasis, Alzheimer's disease, and other age-related diseases [20,21]. Oxidative stress-induced protein modifications are a common feature of several pathologies and are routinely employed as a marker of oxidative processes, along with malondialdehyde (MDA), a by-product of lipid peroxidation [22].

This study aims to increase our understanding of ROS-induced oxidative stress in skin. An exogenous oxidant (H2O2), with or without transition metal ions, was used to mimic chemical/environmental pollutants. The exogenous use of transition metals drastically enhances the oxidative process; thus, its use in our two-dimensional studies was intended to enhance the subsequent photon emission, described later in this section. Using a non-invasive ultrasensitive charge-coupled device (CCD) camera, a 1O2 scavenger (sodium ascorbate), and an interference filter, we attempted to characterize the degree of damage to biomolecules reflected by ultra-weak photon emission. As skin is rich in iron and other transition metals, we believe that the cascade of reactions (mediated by the formation of HO•) might have played a role in the eventual oxidative damage to lipids and proteins. To further understand the mechanism and the possible oxidative consequences, we used protein immunoblotting, where anti-MDA and anti-DNP antibodies were used to observe the protein modifications.

Spontaneous and Fenton Reagent-Induced Ultra-Weak Photon Emission from Skin

Two-dimensional images of the ultra-weak photon emission were measured from the porcine ears spontaneously and after the topical application of the Fenton reagent, in the setup shown and described in detail in Section 3.
Variable concentrations of H2O2 (0, 2.5, 5, and 10 mM) and FeSO4 were topically applied to the skin biopsies, and the corresponding ultra-weak photon emission images were captured (Figure 1). The upper panel (Figure 1A) shows photographs of the prepared skin biopsies; Figure 1B shows the spontaneous ultra-weak photon emission images, to demonstrate any variability in the spontaneous emission; Figure 1C shows the dependence of the ultra-weak photon emission on the increasing concentration of Fenton reagent. It is obvious that with increasing oxidant concentration there is a corresponding increase in the intensity of the ultra-weak photon emission. Following this optimization, we carried out our study on ex vivo porcine ears, with the treatment condition limited to 10 mM H2O2/250 µM FeSO4 (Figure 2). Figure 2A (left panel) shows an image of the ultra-weak photon emission from an ex vivo porcine ear without any stimulation/induction of oxidative stress. In Figure 2B, the ultra-weak photon emission image was measured following treatment with the Fenton reagent. Figure 2C shows the photon intensity at the pixels marked on the images by white dotted lines. As apparent from the intensity of the ultra-weak photon emission, the skin not treated with Fenton reagent (control) shows no enhancement, whereas the skin treated with Fenton reagent shows a maximum intensity of ~25 counts/pixel.

Sodium ascorbate (10 mM), a scavenger of 1O2 [23], was topically applied to the porcine skin 10 min before the application of the Fenton reagent. It is evident that the presence of sodium ascorbate before the application of the Fenton reagent noticeably lowered the ultra-weak photon emission (Figure 3). The Fenton reagent-treated porcine skin shows a higher intensity, which was suppressed almost completely in the skin pretreated with sodium ascorbate. It is thus apparent that the contribution of 1O2 dimol photon emission to the overall ultra-weak photon emission can be substantial (Figure 3). This conclusion is based on the fact that, in an oxygen-rich environment, the excitation energy from 3C=O* can be transferred to O2 via triplet-singlet energy transfer, which can lead to the formation of 1O2. The collision of two 1O2 molecules results in photon emission in the red band of the spectrum (634 and 703 nm), referred to as dimol emission [24].
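Profiles of the kind plotted in Figure 2C (counts per pixel along the white dotted line) can be read directly out of the corrected CCD frame. The sketch below is a hypothetical illustration with a synthetic image, not the study's analysis code:

```python
import numpy as np

# Synthetic corrected CCD frame (counts per pixel after background subtraction).
rng = np.random.default_rng(7)
frame = rng.poisson(lam=1.0, size=(256, 256)).astype(float)
frame[100:140, 80:180] += 20.0   # a bright region mimicking the treated area

# Counts/pixel along a horizontal line through the treated region,
# analogous to the white dotted lines marked on the images.
row = 120
profile = frame[row, :]
print(f"peak along the line: {profile.max():.0f} counts/pixel")
```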
To confirm the claimed primary source (3C=O*) of the photon emission under induced oxidative stress, we mounted a blue-green interference filter (type 644), with a transmission between 340 and 540 nm, in front of the objective lens, under the same experimental conditions as in Figure 3. Even when the transmission is limited to the blue-green region, which is typical of 3C=O* emission, partial photon emission can still be captured (Figure 4). This indicates that 3C=O* can be one of the significant contributors to ultra-weak photon emission during oxidative radical reactions in the skin.
Figure 3. Ultra-weak photon emission imaging performed on a porcine ear treated with 10 mM H2O2/250 µM FeSO4 in the absence (red circle) and presence (black circle) of sodium ascorbate (10 mM). Samples were treated with sodium ascorbate 10 min before the topical application of the Fenton reagent. The Y-axis in the lower panel indicates the number of photon counts after 30 min of accumulation, whereas the X-axis shows the pixels of the image.

Protein Modification under Generated ROS

Reactive oxygen species drive oxidative radical reactions in cells, in which several cellular components, for example DNA, proteins, lipids, and carbohydrates, undergo modifications [25]. In the present study, the proteins undergoing modification by ROS were characterized using an immunoblotting technique. We limited the stress induction to H2O2 treatment alone, as such high oxidative damage is not necessarily required to study protein modification using western blotting; the level of endogenous transition metal ions is believed to be sufficient to mediate the process. In contrast, to image photon emission as a result of oxidative damage, a moderate to high level of oxidative damage is needed, and thus metal ions were additionally supplemented. To study the protein modifications (MDA-protein adduct formation and protein carbonyl formation), we used anti-MDA and anti-DNP antibodies, respectively. For characterization, proteins from skin biopsies treated with H2O2 (10 mM) and from non-treated control samples were separated using SDS-PAGE, with the samples loaded in duplicate. Anti-MDA antibodies bind to MDA-modified proteins, thereby enabling the detection of MDA-protein adducts. Malondialdehyde reacts specifically with amino acid residues such as Lys, Arg, His, and Cys. With reference to the anti-MDA blot (Figure 5A), MDA-protein adducts
The western blot analysis of the control and H 2 O 2 -treated skin biopsies displayed protein carbonyl levels, as measured by the anti-DNP antibodies ( Figure 6A). A distinct band at 130 kDa was observed in both groups with varied patterns. It is clear that the carbonylated proteins isolated from the control groups are significantly less formed than the treatment groups. Figures 7 and 8 (created with elements from BioRender.com) show the steps involved in the formation of the MDA-protein adduct and protein carbonyl formation, respectively. Differences between the control and treatment groups are presented as a separate densitogram ( Figure 6B). Additional studies targeting the identification and characterization of selected proteins from both anti-MDA and anti-DNP blots are under study. were observed around 15 kDa, 45 kDa, 50 kDa, 65 kDa, 130 kDa and 250 kDa. However, the band density of 65 kDa, 130 kDa and 250 kDa proteins were found to be enhanced in comparison to the control untreated groups. Differences in the levels were represented as densitogram in separate panels for each protein ( Figure 5B) and the mechanism involved is presented in Figure 6. To monitor the protein carbonyl formation, derivatization was conducted, as mentioned in Section 3.4. The western blot analysis of the control and H2O2-treated skin biopsies displayed protein carbonyl levels, as measured by the anti-DNP antibodies ( Figure 6A). A distinct band at 130 kDa was observed in both groups with varied patterns. It is clear that the carbonylated proteins isolated from the control groups are significantly less formed than the treatment groups. Figures 7 and 8 (created with elements from BioRender.com) show the steps involved in the formation of the MDA-protein adduct and protein carbonyl formation, respectively. Differences between the control and treatment groups are presented as a separate densitogram ( Figure 6B). Additional studies targeting the identification and characterization of selected proteins from both anti-MDA and anti-DNP blots are under study. Porcine Skin Porcine ears were obtained from a local slaughterhouse. They were transported at a low temperature (on ice) within the first 30 min. Whole ear/skin biopsies for two-dimensional imaging and immunoblotting were prepared according to the procedure described by Chiu and Burd (2005) [26], with minor modifications. Skin samples, collected each day, were used for each set of measurements. Porcine Skin Porcine ears were obtained from a local slaughterhouse. They were transported at a low temperature (on ice) within the first 30 min. Whole ear/skin biopsies for twodimensional imaging and immunoblotting were prepared according to the procedure described by Chiu and Burd (2005) [26], with minor modifications. Skin samples, collected each day, were used for each set of measurements. Reagents and Antibodies Fenton reagent preparation was conducted using H 2 O 2 (Sigma-Aldrich Chemie GmbH, Mannheim, Germany) and ferrous sulfate (FeSO 4 .7H 2 O) (BDH Laboratory Supplies, Poole, UK). A variable concentration of H 2 O 2 (2.5 mM, 5 mM, 10 mM) was used with a fixed concentration of iron sulphate (FeSO 4 ) (250 µM) to chemically generate HO • . The procedure for topical application and its duration are specified in the figure legends, as applicable. Phosphatase and protease inhibitors were purchased from Roche (Mannheim, Germany). 
Rabbit polyclonal anti-MDA antibody was purchased from Abcam [anti-MDA antibody (ab27642)] (Cambridge, UK), and polyclonal goat anti-rabbit IgG conjugated with horseradish peroxidase (HRP) from Bio-Rad (Hercules, CA, USA). Rabbit polyclonal dinitrophenyl-KLH antibody (anti-DNP) was procured from ThermoFisher Scientific (Waltham, MA, USA).

Experimental Conditions and Setup for Two-Dimensional Imaging of Ultra-Weak Photon Emission

A specially designed dark room is a prerequisite to ensure the absence of any interfering photons. In the current study, all of the ultra-weak photon imaging measurements were conducted in an experimental dark room; further details on the adopted methodology can be found in Prasad and Pospíšil (2013) [27]. The dark room and the measurement setup are shown in Figure 9. All of the experiments were carried out in three biological replicates, and representative images are presented. To study the spectral distribution of the ultra-weak photon emission in the oxidation reactions with Fenton reagents, filter type 644 (Schott and Gen, Jena, Germany), a blue-green interference filter with a transmission in the range 340-540 nm, was mounted in front of the objective lens of the CCD camera [27].

Two-dimensional photon emission imaging was measured in porcine ear/skin biopsies using a sensitive CCD camera. The skin samples were dark-adapted for 30 min to eliminate any interference from delayed luminescence and were treated afterward. The other experimental conditions follow the procedure described in the listed reference [27]. The VersArray 1300B CCD camera (Princeton Instruments, Trenton, NJ, USA), with a spectral sensitivity of 350-1000 nm and ~90% quantum efficiency, was used under the following parameters: scan rate, 100 kHz; gain, 2; accumulation time, 30 min (porcine ear/skin biopsies). The CCD camera was cooled to −108°C using a liquid nitrogen cooling system, which helps to reduce the dark current. Before each measurement, the data were corrected by subtracting the background noise from the experimental dataset.

Figure 9. Schematic diagram of the experimental setup for two-dimensional photon emission imaging using a CCD camera. The diagram shows the inner dark room (gray) and the outer control room (green). The filter position for the spectral measurement was in front of the objective lens, as shown.
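The background correction mentioned above can be as simple as subtracting a dark frame (acquired under identical camera settings with the shutter closed) from the accumulated signal frame. The following sketch is our own illustration with synthetic arrays, not the instrument's acquisition software:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: a 30-min accumulated signal frame and a dark frame
# recorded with the shutter closed under identical camera settings.
signal_frame = rng.poisson(lam=4.0, size=(512, 512)).astype(float)
dark_frame = rng.poisson(lam=2.0, size=(512, 512)).astype(float)

# Background (dark-noise) correction; clip negatives introduced by noise.
corrected = np.clip(signal_frame - dark_frame, 0.0, None)

print(f"mean background: {dark_frame.mean():.2f} counts/pixel")
print(f"mean corrected signal: {corrected.mean():.2f} counts/pixel")
```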
Protein Immunoblotting

Skin biopsies were first washed with physiological solution (0.9% NaCl). Then, 0.5 g of skin biopsy was subjected to the desired treatment (control, or Fenton reagent applied topically for 30 min), followed by rinsing with distilled water. Subsequently, the samples were snap-frozen in liquid N2. The samples were then homogenized (three times, 1 min each) in radioimmunoprecipitation assay (RIPA) buffer (150 mM NaCl, 50 mM Tris (pH 8.0), 0.5% sodium deoxycholate, 0.1% SDS, and 1% NP-40) containing 1% (v/v) protease and phosphatase inhibitors, followed by sequential centrifugations at 8000 rpm (30 min, once) and 14,000 rpm (45 min, twice). The supernatant was collected and quantified with a Pierce BCA protein estimation kit (Thermo Fisher Scientific, Paisley, UK). The detailed sample preparation procedure is presented in Figure 10. Protein samples for western blotting were prepared with SDS Laemmli sample buffer and then subjected to electrophoresis and immunoblotting analysis using the anti-MDA antibody. For immunoblotting, two biological replicates were performed for each measurement.

Figure 10. Steps showing the workflow and optimized protocol for whole-skin tissue lysate preparation, isolation, and the BCA assay for protein estimation.
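Protein quantification with a BCA kit reduces to fitting a standard curve of absorbance against known BSA standards and interpolating the unknowns. The sketch below uses made-up absorbance values, not the study's measurements:

```python
import numpy as np

# Hypothetical BCA standard curve: BSA standards (µg/mL) vs absorbance at 562 nm.
standards_ug_ml = np.array([0, 125, 250, 500, 1000, 2000], dtype=float)
absorbance_std = np.array([0.05, 0.15, 0.26, 0.48, 0.90, 1.70])

# Linear fit A = m*c + b over the working range.
m, b = np.polyfit(standards_ug_ml, absorbance_std, 1)

def protein_conc(a562: float) -> float:
    """Interpolate a sample's concentration (µg/mL) from its absorbance."""
    return (a562 - b) / m

for a in (0.32, 0.75):
    print(f"A562 = {a:.2f} -> {protein_conc(a):.0f} µg/mL")
```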
To detect protein carbonyl formation, the collected protein fractions were subjected to derivatization. Carbonyl groups present in the protein side chains were derivatized with 2,4-dinitrophenylhydrazine (DNPH), leading to the formation of a stable 2,4-dinitrophenylhydrazone (DNP) derivative. This involves the addition of an equal volume of protein and 12% SDS (final concentration 6%) and the subsequent addition of 1X DNPH solution (50 mM solution in 50% sulphuric acid). The mixture was incubated at RT for 30 min, and the reactions were neutralized with 2 M Tris base and 30% glycerol (0.75× v/v of the DNPH solution). The resulting protein fractions were centrifuged at 14,000 rpm for 10 min, and the supernatants were loaded onto SDS gels for immunoblotting with the anti-DNP antibody.

Whole-cell homogenates (10 µg/lane), resolved on 10% SDS gels, were transferred to nitrocellulose blotting membranes using a Trans-Blot Turbo transfer system (Bio-Rad, Hercules, CA, USA). The membranes were blocked (BSA in phosphate-buffered saline, pH 7.4, containing 0.1% Tween 20) overnight at 4 °C. The blocked membranes were probed for 2 h with the anti-MDA antibody at RT. After 4 cycles of washing with PBST and incubation for 1 h at room temperature with HRP-conjugated anti-rabbit secondary antibody (dilution 1:10,000), followed by further washing [PBST, 5× (5 min each)], the immunocomplexes were visualized using Immobilon Western Chemiluminescent HRP Substrate (Sigma-Aldrich, GmbH, Mannheim, Germany) and imaged using an Amersham 600 imager (GE Healthcare, Amersham, UK).

Densitometry analysis of the blots was performed using ImageJ 1.53t [public domain software (Bethesda, MD, USA) provided by the National Institute of Mental Health, United States].
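Band densitometry of the kind performed here in ImageJ amounts to integrating background-corrected pixel intensities over a band region; a minimal sketch follows, with synthetic arrays and hypothetical regions of interest rather than the study's blots.

```python
import numpy as np

def band_density(blot: np.ndarray, roi: tuple, background: float) -> float:
    """Integrated, background-corrected intensity of one band.

    blot       : 2-D array of inverted pixel intensities (dark band = high value)
    roi        : (row_start, row_stop, col_start, col_stop) of the band region
    background : mean intensity of an empty region of the same lane
    """
    r0, r1, c0, c1 = roi
    band = blot[r0:r1, c0:c1].astype(np.float64)
    return float(np.clip(band - background, 0.0, None).sum())

# Hypothetical comparison of a treated band against its control band
img = np.random.default_rng(1).normal(10.0, 1.0, size=(400, 600))
img[100:120, 50:150] += 40.0   # synthetic "control" band
img[100:120, 250:350] += 70.0  # synthetic "treated" band
ratio = band_density(img, (100, 120, 250, 350), 10.0) / \
        band_density(img, (100, 120, 50, 150), 10.0)
print(round(ratio, 2))  # relative band density, treated vs. control
```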
Conclusions

The skin is the primary interface between the body and environmental aggressors, and the excessive production of reactive species, including ROS and reactive nitrogen species, is known to occur in skin tissues. Oxidative stress and its resulting oxidation products have been reported to be a main cause of ageing. Due to the limitations associated with the use of human skin for research purposes, porcine skin was used as a model in the present study, with H2O2 as an exogenous oxidant. The two-dimensional spatiotemporal images and their spectral analysis confirmed the participation of triplet excited carbonyls and singlet oxygen dimol emission as substantial contributors resulting from oxidative radical reactions in the skin. The resultant oxidatively modified protein adducts and the differences in the band density of the selected proteins in the non-treated and H2O2-treated skin tissues were confirmed by blotting analysis. As oxidative stress remains a key factor in distinguishing physiological and pathological conditions, the present study helps to identify the specific protein targets involved in the process of oxidative damage in the skin. In addition, spatiotemporal imaging with a CCD camera, also demonstrated by Abdlaty and co-workers [4], is presented as a powerful tool for non-invasive imaging that has the potential to pave the way for its widespread use in research and/or clinical trials.
Topical or intravenous administration of tranexamic acid accelerates wound healing

Objectives In this study, we aimed to investigate the morphological and histological effects of tranexamic acid (TA) on wound healing in a rat wound model.

Materials and methods A total of 24 adult male Wistar Albino rats were used in this study. All rats were randomly divided into three groups of eight rats each (simple randomization). A full-thickness skin defect was created on the back of the rats in all groups. Physiological saline (2 mL) was instilled as drops after wound formation (control group). A wound was created and topical TA (0.12 to 0.15 mL [30 mg/kg]) was applied (local group). TA (0.12 to 0.15 mL [30 mg/kg]) was administered intravenously before the wound was created (intravenous group). The wound diameters of the groups were photographed and measured on Days 0, 3, 7, 10, and 14; at the end of Day 14, the rats were sacrificed, and their histopathological findings and wound diameters were compared.

Results Fibroblast count values of the control group were found to be significantly lower than those of the local group (p=0.002), and no significant difference was observed between the local and intravenous groups (p>0.05). The collagen density (%) values of the control group were found to be significantly higher than those of the local and intravenous groups (p=0.016 and p=0.044). Wound diameter values of the control group on Day 10 were found to be significantly higher than those of the local and intravenous groups (p=0.001). In addition, the wound diameter values of the control group on Day 14 were found to be significantly higher than those of the local and intravenous groups (p=0.001 and p=0.0001). The wound diameter changes of the control group on Days 0-10 were found to be significantly lower than those of the local and intravenous groups (p=0.001). In addition, the wound diameter changes of the control group on Days 0-14 were found to be lower than those of the local and intravenous groups (p=0.001 and p=0.0001).

Conclusion The use of local or intravenous TA may have positive effects on the fibroblast count and wound contraction in a rat wound model.

Tranexamic acid (TA) is an antifibrinolytic agent used to treat local or systemic hyperfibrinolysis. [2] It has been widely used for many years in orthopedic, cranial, cardiac, and urological surgery procedures to minimize intra- and postoperative blood loss and the resulting transfusion requirement. [3,4] Since TA has a hemostatic effect by inhibiting plasminogen activation and providing fibrin-protective activity, we hypothesized in the present study that it would have a positive effect in a rat wound healing model. We therefore aimed to investigate the morphological and histological effects of TA on wound healing in rats.
MATERIALS AND METHODS

A total of 24 male Wistar Albino rats with an average age of 2.5 (range, 2 to 3) months and an average weight of 250 (range, 200 to 300) g were used in this study. All procedures were carried out in the Experimental Animals Breeding and Research Center of the Medical Faculty of Düzce University between 1 March 2019 and 15 July 2019. The subjects were randomly divided into three groups (simple randomization) and observed for one week preoperatively to keep them under standardized conditions in the laboratory, with eight rats in each cage. They were identified using a tail-marking counting system to ensure proper matching. During the study, the rats were given unlimited tap water (ad libitum) and standard rodent food. The rats were monitored in a cage in a temperature-controlled room (23-25°C) with a 12/12-h light/dark cycle. Antibiotics were not administered in any group before or after surgery. Physiological saline was instilled as drops after wound formation (control group). A wound was created and topical TA was applied (local group). TA was administered intravenously before the wound was created (intravenous group). The control group received a single dose of 2 mL of saline topically immediately after surgery, the local group received a single dose of 0.12 to 0.15 mL (30 mg/kg) topically immediately after surgery, and the intravenous group received a single dose of 0.12 to 0.15 mL (30 mg/kg) intravenously immediately before surgery.

Surgical technique

The rats, which had been monitored and prepared, were taken to the intervention room. The anesthetic dose was calculated by measuring the weight of each rat on an electronic scale. Ketamine (Eczacıbaşı, Istanbul, Türkiye) 50 mg/kg and xylazine (Bayer, Istanbul, Türkiye) 10 mg/kg were used as anesthetics. Anesthesia was administered intraperitoneally in the left inguinal region.

The rats were stained with povidone-iodine (Batticon®, ADEKA, Türkiye) after shaving their dorsal regions (Figure 1a, b). Mean wound sizes of 17.93±1.06, 19.26±1.26, and 19.3±1.43 mm were created in the control, local, and intravenous groups, respectively. In the control group, a full-thickness skin defect was created using a scalpel, and 2 mL of saline solution was dripped into the wound after surgery. The wound created in the local group was coated with oxidized cellulose (Surgicel™; Johnson & Johnson, Piscataway, New Jersey, USA) sutured to the wound edges so that locally applied substances could be absorbed faster and remain in the wound area longer. Afterwards, TA 0.12 to 0.15 mL (30 mg/kg) (Transamine®, Actavis, Türkiye) was applied topically to the wound. In the intravenous group, vascular access was obtained with a cannula through the tail vein, and a single intravenous dose of 0.12 to 0.15 mL (30 mg/kg) of TA was administered before the surgical intervention. A scalpel was then used to create a skin defect in a shaved area of the back (Figure 2a, b). Then, on Days 0, 3, 7, 10, and 14, the diameters of the skin lesions of the rats in all groups were measured in millimeters (mm) to track contraction, photographed, and recorded simultaneously. Diameter measurement on the photographs was made using the AutoCAD software (Autodesk, USA) (Figure 3). The rats were sacrificed at the end of Day 14, and the wound scars on their dorsal areas were excised, placed in 10% formaldehyde, and examined histopathologically.
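The 30 mg/kg dose corresponds to the reported 0.12-0.15 mL volumes only for a particular stock concentration. Assuming a 50 mg/mL TA stock, which is consistent with the reported volumes but is not stated in the source, the arithmetic is:

```python
def ta_dose_volume_ml(weight_g: float,
                      dose_mg_per_kg: float = 30.0,
                      stock_mg_per_ml: float = 50.0) -> float:
    """Volume of tranexamic acid stock to draw for a given rat weight.

    The 50 mg/mL stock concentration is an assumption consistent with
    the reported 0.12-0.15 mL volumes; it is not stated in the source.
    """
    dose_mg = dose_mg_per_kg * weight_g / 1000.0
    return dose_mg / stock_mg_per_ml

print(ta_dose_volume_ml(200))  # 0.12 mL for a 200 g rat
print(ta_dose_volume_ml(250))  # 0.15 mL for a 250 g rat
```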
Histopathological evaluation

After the usual follow-up procedures, the samples were embedded in paraffin, and 7-µm-thick sections were obtained. Routine hematoxylin-eosin and Masson's trichrome staining were performed on the sections. Additionally, immunohistochemical staining for vessel density and transforming growth factor-beta 1 (TGF-β1) was performed with CD31 (Biocare, 1/200) and TGF-β1 (Santa Cruz, 1/100) antibodies using the avidin-biotin immunoperoxidase method (Figure 4a, b).

Lymphocyte density was categorized as mild (1), moderate (2), or severe (3). [5] The number of fibroblasts per square millimeter was also measured. Numerical values were obtained by measuring scar thickness on a computer. Images were taken from Masson's trichrome-stained sections and converted to black-and-white format. The density of the scars was then measured in the most intense area of each sample using the ImageJ image processing software (Figure 5a-c and Figure 6a, b). Vessel density per square millimeter was obtained using the CD31 stain. If TGF-β1 immune expression was focal (≤10%), the score was 1; if it was between 10 and 25%, the score was 2; and if >25%, the score was 3. A mild intensity score was given as 1, a moderate intensity score as 2, and a severe intensity score as 3. The total score was obtained by multiplying these values with each other. [5] The histopathological evaluation was done once at the end of Day 14 by the same specialist in the Department of Pathology.

Statistical analysis

Statistical analysis was performed using the Number Cruncher Statistical System (NCSS) 2007 Statistical Software (NCSS LLC, Utah, USA). Descriptive data were expressed as mean ± standard deviation (SD), median and interquartile range (IQR), or number and frequency. The distribution of variables was examined with the Shapiro-Wilk normality test, the Friedman test was used for time comparisons of non-normally distributed variables, the Kruskal-Wallis test was used for intergroup comparisons, Dunn's multiple comparison test was used for subgroup comparisons, and the chi-square test was used for comparisons of qualitative data. A p value of <0.05 was considered statistically significant.

RESULTS

There was no statistically significant difference in lymphocyte count, polymorphonuclear leukocyte (PMNL) count, scar thickness, or microvascular density values between the control, local, and intravenous groups (p=0.087, p=0.994, p=0.098, and p=0.315, respectively); however, a statistically significant difference was observed between the fibroblast count values (p=0.018). Fibroblast count values of the control group were found to be statistically significantly lower than those of the local group (p=0.002), and no statistically significant difference was observed between the local and intravenous groups (p>0.05). In addition, a statistically significant difference was observed between the collagen density (%) values of the control, local, and intravenous groups (p=0.049). The collagen density (%) values of the control group were found to be statistically significantly higher than those of the local and intravenous groups (p=0.016 and p=0.044) (Table I, Figures 7, 8). Wound diameter values of the control group on Day 10 were found to be statistically significantly higher than those of the local and intravenous groups (p=0.001). In addition, the wound diameter values of the control group on Day 14 were found to be statistically significantly higher than those of the local and intravenous groups (p=0.001 and p=0.0001) (Table II, Figure 9).
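The Day 0-10 and Day 0-14 percentage changes reported below are presumably the relative reduction in diameter from baseline; a minimal sketch of that arithmetic follows, with illustrative diameters rather than the study's measurements.

```python
def pct_change(d0_mm: float, dt_mm: float) -> float:
    """Percent reduction in wound diameter from Day 0 to Day t."""
    return (d0_mm - dt_mm) / d0_mm * 100.0

# Hypothetical example: a 19.3 mm wound closing to 0 mm by Day 14
print(round(pct_change(19.3, 4.5), 1))  # Day 0-10 change, about 76.7%
print(round(pct_change(19.3, 0.0), 1))  # Day 0-14 change, 100.0%
```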
There was no statistically significant difference between the Day 0-3 and Day 0-7 percentage change values of wound diameter in the control, local, and intravenous groups (p=0.505 and p=0.501); however, a statistically significant difference was observed between the percentage change values on Days 0-10 and Days 0-14 (p=0.0001 and p=0.0001). Wound diameter changes of the control group on Days 0-10 were found to be statistically significantly lower than those of the local and intravenous groups (p=0.001). In addition, wound diameter changes of the control group on Days 0-14 were found to be significantly lower than those of the local and intravenous groups (p=0.001 and p=0.0001), and no statistically significant difference was observed between the local and intravenous groups (p=0.146) (Table III, Figure 10). A statistically significant difference was observed between the TGF-β1 distributions of the control, local, and intravenous groups (p=0.0001). The proportion of mild TGF-β1 expression in the control group was found to be statistically significantly higher than in the local and intravenous groups.

None of the subjects in any group died after surgery. Wound infection was not observed in any of the rats during follow-up.

DISCUSSION

Wound healing has been an important problem throughout history, both after major traumas and after major surgical interventions. [8] In a systematic review and meta-analysis of topical and intravenous TA on blood loss and wound healing in bone surgery, Xu et al. [9] reported that the two routes had an equal effect on blood loss after bone surgery and that TA was also beneficial for wound healing. Similarly, in the animal study in which we modeled wound healing, we found that when topical or intravenous TA was applied to the wound area, it had a positive effect on wound healing in the early period compared to the control group.

Tranexamic acid is a drug that has been used for a long time, particularly in orthopedic and cardiovascular surgery, to stop bleeding. [6,7] In the literature on the use of TA in orthopedic surgery, most studies have addressed intra- and postoperative bleeding. [10,11] Another study showed that TA significantly reduced hemoglobin loss and the number of transfusions in patients who underwent primary unilateral total knee arthroplasty (TKA) compared to the control group, but no significant difference was observed in thromboembolic complications or wound healing. [12] In the English literature, there is a limited number of clinical studies on TA and wound healing, and there is no animal study. [15][16][17][18][19][20] For instance, intravenous administration provides fast and uniform distribution of TA in the knee joint, while locally applied TA quickly reaches the maximum concentration in the knee and does not cause the systemic side effects caused by intravenous TA administration. [15,16] Additionally, Wong et al. [16] reported that locally applied TA in TKA was more effective and caused fewer complications. In the study by Manor and Sadeh, [17] in which water- and fat-soluble drugs were injected into the anterior tibial muscles of rats, some amphiphilic and lipid-soluble drugs with heterogeneous pharmacological properties caused acute muscle fiber necrosis, whereas injection of TA, a water-soluble drug, caused no tissue damage. This suggests that our local application does not cause muscle necrosis. Another study by Reichel et al.
[18] showed that TA decreased the migration of inflammatory cells and the exaggerated post-ischemic neutrophilic response in ischemia/reperfusion injury in rats administered TA in vivo. Similar to the studies in the literature, there was no additional drug-related inflammation beyond the surgery-induced inflammation in our study; therefore, it can be speculated that TA application may have an anti-inflammatory effect on wound healing.

Considering the studies on TA in the literature, topically applied TA in rats with Achilles tendon rupture has been found to have a negative effect on tendon healing in the late period, but no negative or positive effect on wound healing has been reported. [19] In another study, Xie et al. [20] reported a lower incidence of wound complications compared to the control group in their study evaluating the effect of TA on the reduction of postoperative blood loss in open reduction with internal fixation and bone grafting for calcaneal fractures. In our study, we did not encounter any wound complications, particularly wound infection, in any of the rats during follow-up.

Many studies have shown that TA has an effect on the inflammatory system in addition to its bleeding-reducing properties. In these studies, the authors have mostly advocated an anti-inflammatory effect, but there are also studies showing a pro-inflammatory effect. [21,22] The study of Xie et al. [23] evaluated patients who received intravenous TA at different doses in total hip replacement surgery. The rate of inflammation in the postoperative period decreased more in patients who received high doses than in the other groups (lower C-reactive protein and interleukin [IL]-6 levels). On the contrary, in the study of Grant et al. [24] in TKA patients, TA significantly increased monocyte chemoattractant protein-1 (MCP-1), tumor necrosis factor-alpha (TNF-α), IL-1β, and IL-6 levels after surgery compared to patients who did not receive TA. In our study, inflammatory parameters were not measured in the blood of the rats. This is one of the limitations of our study in terms of inflammation. However, histopathologically, there was no significant difference in the lymphocyte and PMNL values among the three groups in our study.

Wound healing is a process in which three phases, hemostasis-inflammation, proliferation, and remodeling, follow one another.
[25] When we evaluated the inflammation phase of our study, no statistically significant difference was observed between the Day 14 lymphocyte count and PMNL values of the control, local, and intravenous groups. The reason may be that lymphocytes and PMNLs, which dominate the hemostasis-inflammation phase (the first stage of wound healing), decrease rapidly as the wound passes into the proliferation phase. At the end of the inflammation phase, growth factors and cytokines released from platelets attract fibroblasts to the wound and initiate the proliferative phase. In our study, histopathologically, the fibroblast count was statistically significantly higher in the rats treated with local TA than in the control group; however, there was no significant difference between the control and intravenous groups. At the last stage of wound healing, the number of fibroblasts begins to decrease, and the lack of a significant difference in the number of fibroblasts between the intravenous and control groups in our study may be due to the measurement being made on Day 14. The proliferative phase is characterized by angiogenesis, collagen production and deposition, and wound contraction. In angiogenesis, new blood vessels are formed from epithelial cells, providing nutrients and oxygen for new cells. In epithelialization, epithelial cells proliferate and spread over the wound site; fibroblasts also proliferate at the wound site and transform into myofibroblasts. [26] In the current study, the collagen density (%) values of the control group on Day 14 were found to be significantly higher than those of the local and intravenous groups, and no significant difference was observed between the local and intravenous groups. This may be because the wound diameter in the local and intravenous groups on Day 14 was 0 mm, the wounds had completely healed, and the remodeling phase had started earlier, whereas healing was still continuing in the control group on Day 14. In addition, while no significant difference was observed between the control, local, and intravenous group wound diameters on Days 0, 3, and 7, the local and intravenous group wound diameters were found to be significantly smaller than those of the control group on Days 10 and 14. No significant difference was observed between the local and intravenous groups. This result indicates that TA has a positive effect on wound healing and accelerates wound healing compared to the control group. However, the fact that there were no significant differences between the wound diameters in the local and intravenous groups does not bring a new perspective to the debate over TA best practice, which remains a controversial issue in the literature.

Various growth factors play a role in wound healing, of which TGF-β is of particular importance for all stages of this process. It has a pleiotropic effect on wound healing by regulating cell proliferation, differentiation, programmed cell death, and extracellular matrix production, and by modulating the immune response. There are three TGF-β isoforms (TGF-β1, -β2, and -β3), each showing a unique expression pattern. There is increasing evidence that TGF-β1 has a profibrotic role both in vivo and in vitro. TGF-β3 plays a powerful and specific role in preventing scar formation, whereas overproduction of the TGF-β1 and -β2 isoforms promotes scar formation.
[26] TGF-β3 is more common than TGF-β1 in the early stages of wound healing. TGF-β1 is detected at high levels in wound healing only after epithelialization begins. In particular, recent data have shown that TGF-β1 may have a fibrotic effect, while TGF-β3 may have an anti-fibrotic effect, during wound healing and in different tissues (skin, lips, oral and laryngeal mucosa). [27] In our study, the TGF-β1 concentration in the local and intravenous groups was found to be statistically significantly higher than that of the control group. In addition, as the wound diameter on Day 14 was significantly lower in the local and intravenous groups compared to the control group, it can be suggested that TGF-β1 has a profibrotic effect, in line with the information in the literature.

The main limitations of our study are that it is an animal study and that an animal model of wound healing could be created with longer follow-up times and more groups. In addition, the scaffold (Surgicel™) we used to fix the TA in the wound area may increase inflammation; therefore, animal model studies without such a scaffold should be performed. Furthermore, the levels of biochemical indicators in the blood during wound healing could be determined, or a biomechanical study of resistance to wound tension forces could be conducted.

In conclusion, TA is widely used both topically and intravenously at various doses to reduce the amount of bleeding that can occur after major orthopedic surgery, as well as the amount of blood and blood products needed for replacement. Our results suggest that, beyond this hemostatic purpose, TA may shorten wound healing time and reduce scarring in rats.

Figure 2. (a) Application of absorbable oxidized regenerated cellulose (Surgicel™) in the topical tranexamic acid (local) group. (b) Application of local tranexamic acid.

Figure 3. Measurement of the wound diameters of the control, local, and intravenous groups with the AutoCAD program on Days 0, 3, 7, 10, and 14.

Figure 9. Change in the wound diameters of the control, local, and intravenous groups from Day 0 to Day 14.

Table I. Comparison of lymphocyte, PMNL, fibroblast count, scar thickness, collagen density, and MVD values of the control, local, and intravenous groups. PMNL: polymorphonuclear leukocyte; MVD: microvascular density; SD: standard deviation; IQR: interquartile range; Kruskal-Wallis test.

Figure 8. Mean collagen density values of the control, local, and intravenous groups.

Table III. Comparison of the percentage changes in wound diameter of the control, local, and intravenous groups on Days 0, 3, 7, 10, and 14. SD: standard deviation; IQR: interquartile range.
Incidence and predictors of adverse outcomes in patients with rheumatic mitral stenosis following percutaneous balloon mitral valvuloplasty: a study from a tertiary center in Thailand Background Rheumatic mitral stenosis (MS) remains a common and concerning health problem in Asia. Percutaneous balloon mitral valvuloplasty (PBMV) is the standard treatment for patients with symptomatic severe MS and favorable valve morphology. However, studies on the incidence and predictors of adverse cardiac outcomes following PBMV in Asia have been limited. This study aims to evaluate the incidence and predictors of adverse outcomes in patients with rheumatic MS following PBMV. Methods A retrospective cohort study was conducted on patients with symptomatic severe MS who underwent successful PBMV between 2002 and 2020 at a tertiary academic institute in Thailand. Patients were followed up to assess adverse outcomes, defined as a composite of cardiac death, heart failure hospitalization, repeat PBMV, or mitral valve surgery. Univariable and multivariable analyses were performed to identify predictors of adverse outcomes. A p-value of < 0.05 was considered statistically significant. Results A total of 379 patients were included in the study (mean age 43 ± 11 years, 80% female). During a median follow-up of 5.9 years (IQR 1.7–11.7), 74 patients (19.5%) experienced adverse outcomes, with an annualized event rate of 2.7%. Multivariable analysis showed that age (hazard ratio [HR] 1.03, 95% confidence interval [CI] 1.008–1.05, p = 0.006), significant tricuspid regurgitation (HR 2.17, 95% CI 1.33–3.56, p = 0.002), immediate post-PBMV mitral valve area (HR 0.39, 95% CI 0.25–0.64, p = 0.01), and immediate post-PBMV mitral regurgitation (HR 1.91, 95% CI 1.18–3.07, p = 0.008) were independent predictors of adverse outcomes. Conclusions In patients with symptomatic severe rheumatic MS, the incidence of adverse outcomes following PBMV was 2.7% per year. Age, significant tricuspid regurgitation, immediate post-PBMV mitral valve area, and immediate post-PBMV mitral regurgitation were identified as independent predictors of these adverse outcomes. Introduction Rheumatic heart disease, particularly mitral stenosis (MS), remains a notable health issue in Asia, including Thailand, leading to considerable mortality and morbidity, such as heart failure and cardiac death [1].The management of MS has significantly evolved since the early 1980s with the introduction of the Inoue balloon [2].Percutaneous balloon mitral valvuloplasty (PBMV) has become the intervention of choice for patients with symptomatic severe MS and favorable valve morphology [3,4]. However, limited data exist on the long-term adverse cardiac outcomes of rheumatic MS patients undergoing PBMV in Asia, including Thailand, where rheumatic heart disease is prevalent.Therefore, this study aims to evaluate the incidence and predictors of adverse cardiac outcomes in patients with rheumatic MS undergoing PBMV at a tertiary academic institution in Thailand. 
Study population

This was a retrospective cohort study. Eligible patients were those aged 18 years or older with symptomatic severe rheumatic MS who underwent PBMV using the Inoue technique between July 2002 and September 2020 at Siriraj Hospital, Bangkok, Thailand. The diagnosis of severe MS was defined according to the guidelines in effect at the time. In cases where patients underwent more than one PBMV procedure, only information from the first procedure during the study period was considered, with any subsequent procedures counted as outcomes. Only patients who had complete transthoracic echocardiographic data both before and immediately after the procedure were included. Patients with missing information on procedural success, missing echocardiographic data, or insufficient follow-up time were excluded from the study. The study was conducted in accordance with the Declaration of Helsinki. The institutional ethics committee (Siriraj Institutional Review Board [SIRB], Faculty of Medicine Siriraj Hospital, Mahidol University) approved this retrospective study and waived the need for additional written informed consent.

Echocardiography

All patients underwent comprehensive echocardiography before and immediately after PBMV, in accordance with guideline recommendations [10]. The mitral valve area was measured using transthoracic echocardiography with 2-dimensional planimetry in the parasternal short-axis view. If planimetry was not available, the mitral valve area was determined using the pressure half-time method. Mitral valve morphology was assessed using the standard Wilkins echocardiographic scoring system, which involves semiquantitative grading of four components: leaflet mobility, valve thickening, subvalvular fibrosis, and valvular calcification [11]. Immediate post-PBMV transthoracic echocardiographic results were collected and included immediate post-PBMV mitral valve area, immediate post-PBMV mitral valve pressure gradient, and post-PBMV mitral regurgitation (MR). Right ventricular systolic pressure (RVSP) was estimated using the tricuspid regurgitation (TR) maximum velocity and right atrial pressure. The severity of TR was assessed using multiple methods, such as visual assessment, vena contracta width/area, and regurgitant volume [12]. Significant TR was defined as moderate to severe TR by any assessment method.

PBMV technique

The decision regarding the mitral valve intervention procedure was made by a heart team that considered clinical and echocardiographic data (e.g., age, comorbidities, surgical risk, severity of MR, presence of left atrial thrombus) as well as the mitral valve Wilkins score to decide whether each patient should undergo PBMV or mitral valve replacement. All PBMV procedures were performed using an Inoue balloon catheter via an anterograde transseptal approach. Both right and left cardiac catheterizations were conducted before and during the procedure to evaluate hemodynamic alterations. The appropriate balloon size (in millimeters) was determined using the formula: (height in cm / 10) + 10. The balloon was gradually inflated from lower to higher volumes. Following each inflation, alterations in the transmitral mean pressure gradient and the extent of MR were monitored. Based on the interventionist's judgment, and in order to achieve optimal results, balloon inflation could be continued to up to 1-2 mm more than the estimated size. In the final stage, left ventriculography was conducted to assess the degree of final MR.
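The balloon sizing rule quoted above is simple arithmetic; a minimal sketch follows. Rounding to commercially available balloon sizes is deliberately left out, since the source notes that operators could inflate 1-2 mm beyond the estimate.

```python
def inoue_balloon_size_mm(height_cm: float) -> float:
    """Reference Inoue balloon size from patient height: (height in cm / 10) + 10."""
    return height_cm / 10.0 + 10.0

print(inoue_balloon_size_mm(150))  # 25.0 mm
print(inoue_balloon_size_mm(160))  # 26.0 mm
```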
Clinical follow-up

Follow-up data were collected through clinical visits and medical records. Patients were monitored for adverse cardiac outcomes, which included a composite of cardiac death, heart failure hospitalization, repeat PBMV, or mitral valve surgery. Cardiac death was defined according to standard recommendations [13]. In cases where patients experienced multiple events, only the first event was considered for the event-free survival analysis.

Statistical analysis

Statistical analyses were performed using IBM SPSS Statistics for Windows, version 20.0 (IBM Corp., Armonk, NY, USA). Continuous variables with a normal distribution were reported as mean ± standard deviation (SD), while continuous variables with a non-normal distribution were reported as median and interquartile range (IQR). The normality of the variable distributions was assessed using the Kolmogorov-Smirnov test. Categorical variables were presented as absolute numbers and percentages. Differences between groups were compared using Student's unpaired t-test or the Mann-Whitney U test for continuous variables, and the chi-square test or Fisher's exact test for categorical variables, as appropriate. Event rates were estimated using the Kaplan-Meier method. To analyze predictors of composite adverse outcomes, we conducted a Cox regression analysis to assess univariable predictors based on baseline characteristics and echocardiographic variables. Variables with a p-value < 0.05 in the univariable analysis were included in the multivariable analysis to identify independent predictors. The results of the Cox regression analysis are presented as hazard ratios (HR) with corresponding 95% confidence intervals (CI). A p-value < 0.05 was considered statistically significant for all tests.

Results

A total of 472 patients were studied. Ninety-three patients were excluded for the following reasons: incomplete pre-PBMV echocardiographic data in 33 patients, incomplete post-PBMV echocardiographic data in 38 patients, congenital heart disease in 4 patients, and incomplete follow-up data in 16 patients. Therefore, 379 patients were included in the final analysis. Figure 1 illustrates the patient flowchart. The median follow-up time was 5.9 years (IQR 1.7, 11.7). Adverse outcomes occurred in 74 (19.5%) patients, with an annualized event rate of 2.7%. Table 1 shows the baseline characteristics, echocardiographic data, and procedural data of the study population, with comparison between those with and without adverse outcomes.

The mean age was 43.7 ± 11.4 years, and 80.7% were women. Sixty-nine patients (18.2%) were in NYHA functional class III-IV, and 53% had atrial fibrillation. The mean mitral valve area was 0.92 ± 0.22 cm², and the median Wilkins score was 8 (IQR 8, 9). Patients with adverse outcomes had a higher proportion of individuals in NYHA functional class III-IV (52.7% versus 9.8%, p < 0.001), a greater left atrial dimension (57.5 ± 7.4 versus 54.7 ± 8.6 mm, p = 0.01), and a higher prevalence of significant TR (33.8% versus 20.0%, p = 0.01) compared to those without adverse outcomes. There was no significant difference in mitral valve area or Wilkins score between patients with and without adverse outcomes. Patients with adverse outcomes had a significantly lower immediate post-PBMV mitral valve area, a higher immediate post-PBMV mitral valve pressure gradient, and a higher prevalence of immediate post-PBMV MR compared to those without adverse outcomes (all p < 0.05).
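The univariable-then-multivariable Cox screening strategy described under Statistical analysis can be sketched as follows. The study itself used SPSS, so this Python version with the lifelines package is only an illustrative reimplementation; the file and column names are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: time_years, event (1 = composite outcome),
# age, significant_tr, post_mva, post_mr, nyha_34
df = pd.read_csv("pbmv_cohort.csv")  # hypothetical data file

candidates = ["age", "significant_tr", "post_mva", "post_mr", "nyha_34"]

# Univariable screen: retain predictors with p < 0.05
keep = []
for var in candidates:
    cph = CoxPHFitter().fit(df[["time_years", "event", var]],
                            duration_col="time_years", event_col="event")
    if cph.summary.loc[var, "p"] < 0.05:
        keep.append(var)

# Multivariable model on the retained predictors
final = CoxPHFitter().fit(df[["time_years", "event"] + keep],
                          duration_col="time_years", event_col="event")
final.print_summary()  # hazard ratios with 95% confidence intervals
```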
Table 2 presents the adverse cardiac outcomes. Patients with significant TR had a significantly higher rate of adverse outcomes than those without significant TR (log-rank p = 0.007).

Discussion

The main findings of the study were that, during the median follow-up time of 5.9 years, 19.5% of patients with rheumatic MS undergoing PBMV experienced adverse cardiac outcomes, with an annualized event rate of 2.7%. Age, significant TR, immediate post-PBMV mitral valve area, and immediate post-PBMV MR were identified as independent predictors of adverse outcomes, with significant TR emerging as the strongest predictor.

Rheumatic heart disease remains a significant health concern in many parts of the world, particularly in low- and middle-income countries, where it is one of the leading causes of cardiovascular morbidity and mortality [14]. Ou et al. reported global trends in rheumatic heart disease, noting increasing trends in the age-standardized rates of incidence and prevalence worldwide. The respective estimated annual percentage changes were 0.58 and 0.57, with increasing trends commonly observed in low- and middle-socioeconomic countries [15].

PBMV should be considered as an initial treatment for selected patients who exhibit mild to moderate calcification or impaired subvalvular apparatus but otherwise possess favorable clinical characteristics [16]. A recent meta-analysis showed lower procedural morbidity associated with PBMV compared with mitral valve replacement, thus supporting the recommendation of PBMV in young patients with suitable anatomy [17]. Several studies report long-term outcomes of patients with MS following PBMV, demonstrating an incidence of adverse outcomes ranging from 16 to 19% [5-9]. In our study, the incidence of adverse outcomes was comparable to prior studies, at 19.5%. Patients with adverse outcomes had a worse functional class and a greater left atrial dimension, findings also similar to previous studies [8,9]. However, in our study, NYHA functional class was a predictive factor only in the univariable analysis. It should be noted that NYHA functional class, although a strong predictive factor for adverse outcomes, is a subjective variable. The clinical data from patients reporting a given NYHA functional class were assessed by physicians, which may result in differences in interpretation among patients.

Significant TR was the strongest predictor of adverse outcomes in our study, as has also been reported in prior studies. TR is most often the consequence of left-sided cardiac diseases that induce right-sided chamber dilatation, and hemodynamically significant TR can cause significant morbidity and mortality [18,19]. Although rheumatic TR can occur, secondary TR due to pulmonary hypertension is far more common in patients with rheumatic heart disease. Significant TR can develop over time even after successful PBMV [20]. Sagie et al. studied the association between the presence of TR and immediate and late adverse outcomes in patients undergoing PBMV. They found that the prevalence of significant TR was 31%, and that patients undergoing PBMV with significant TR exhibited advanced mitral valve and pulmonary vascular disease, suboptimal immediate results, and poor late outcomes [21]. Another study by Caldas et al.
also showed that the prevalence of significant TR was 12.8% in patients with rheumatic MS undergoing PBMV and that it was independently associated with adverse outcomes [22]. Our study revealed a prevalence of significant TR similar to that reported by Sagie et al., but higher than that reported by Caldas et al. This difference could be attributed to variations in the definitions of severity and the assessment methods used in the studies. Nevertheless, significant TR consistently emerges as a strong predictor of adverse outcomes in all studies, including ours. Immediate post-PBMV mitral valve area has been associated with long-term outcomes in patients with rheumatic MS undergoing PBMV in prior studies [23,24]. Our results showed consistent findings. Significant MR following PBMV is a frequent event, mainly related to commissural splitting, with favorable clinical outcomes [25]. However, patients with a damaged central leaflet scallop or subvalvular apparatus had the worst outcomes compared to patients with mild or commissural MR [26]. In our study, although we were unable to classify the severity or mechanism of immediate MR following PBMV, immediate post-PBMV MR still emerged as an independent predictor of adverse outcomes.

The clinical implication of our study is that PBMV in patients with severe MS demonstrated good long-term outcomes with a relatively low rate of adverse outcomes. Most adverse outcomes were mitral valve surgeries, with very low rates of mortality or heart failure. Additionally, our study highlighted significant known predictors of adverse outcomes, such as age, immediate post-PBMV mitral valve area, and immediate post-PBMV MR, as well as an emerging predictor, significant TR, which should be integrated into the care of this patient population.

Limitations

Our study had several limitations. First, the study methodology was retrospective, and therefore some confounding factors could not be totally eliminated; however, multivariable analysis was performed to the best of our ability. Second, the PBMV procedures were performed by experienced operators in a tertiary center, which may limit generalizability; however, the rate of adverse outcomes in our study was comparable to prior studies. Third, we were unable to conduct follow-up echocardiography after discharge, which could be associated with adverse long-term outcomes. Fourth, we graded the severity of valvular lesions other than MS using multiple methods, including qualitative and quantitative methods, and were unable to quantify the severity of these lesions (e.g., TR) in every patient, which may introduce inconsistency.

Conclusion

In patients with symptomatic severe rheumatic MS, the incidence of adverse outcomes following PBMV was 2.7% per year. Age, significant TR, immediate post-PBMV mitral valve area, and immediate post-PBMV MR were identified as independent predictors of these adverse outcomes.

Fig. 2. Kaplan-Meier survival curve of patients with rheumatic mitral stenosis undergoing percutaneous balloon mitral valvuloplasty.
Fig. 3. Kaplan-Meier survival curve of patients with rheumatic mitral stenosis undergoing percutaneous balloon mitral valvuloplasty, comparing patients with and without significant TR. Abbreviation: TR = tricuspid regurgitation.

Table 1. Baseline characteristics, echocardiographic, and procedural data of the study population, with comparison between those with and without adverse outcomes. Data are expressed as number (%), mean ± standard deviation, or median and IQR. Bold-italic values are < 0.05.

Table 2. Adverse cardiac outcomes.

Table 3. Univariable and multivariable Cox regression analyses for the predictors of adverse cardiac outcomes. Abbreviations: CI = confidence interval, HR = hazard ratio, IMR = immediate post-valvulotomy mitral regurgitation, IMVA = immediate post-valvulotomy mitral valve area, IMVPG = immediate post-valvulotomy mitral valve pressure gradient, NYHA = New York Heart Association, TR = tricuspid regurgitation.
Vital rates of two small populations of brown bears in Canada and range-wide relationship between population size and trend

Abstract Identifying mechanisms of population change is fundamental for conserving small and declining populations and determining effective management strategies. Few studies, however, have measured the demographic components of population change for small populations of mammals (<50 individuals). We estimated vital rates and trends in two adjacent but genetically distinct, threatened brown bear (Ursus arctos) populations in British Columbia, Canada, following the cessation of hunting. One population had approximately 45 resident bears but had some genetic and geographic connectivity to neighboring populations, while the other population had <25 individuals and was isolated. We estimated population-specific vital rates by monitoring survival and reproduction of telemetered female bears and their dependent offspring from 2005 to 2018. In the larger, connected population, independent female survival was 1.00 (95% CI: 0.96–1.00) and the survival of cubs in their first year was 0.85 (95% CI: 0.62–0.95). In the smaller, isolated population, independent female survival was 0.81 (95% CI: 0.64–0.93) and first-year cub survival was 0.33 (95% CI: 0.11–0.67). Reproductive rates did not differ between populations. The large differences in age-specific survival estimates resulted in a projected population increase in the larger population (λ = 1.09; 95% CI: 1.04–1.13) and a population decrease in the smaller population (λ = 0.84; 95% CI: 0.72–0.95). Low female survival in the smaller population was the result of both continued human-caused mortality and an unusually high rate of natural mortality. Low cub survival may have been due to inbreeding and the loss of genetic diversity common in small populations, or to limited resources. In a systematic literature review, we compared our population trend estimates with those reported for other small populations (<300 individuals) of brown bears. Results suggest that once brown bear populations become small and isolated, populations rarely increase and, even with intensive management, recovery remains challenging.

| INTRODUCTION

Many of the world's terrestrial large carnivore populations are declining (Ripple et al., 2014). In most cases, the principal causes of decline are habitat loss and habitat fragmentation, as well as human-caused mortality from legal harvest, conflict with humans over safety or livestock, and persecution. Independently or synergistically, these factors erode the geographic ranges of species, contracting and fragmenting populations, leaving increased interface edges and population isolates (Henschel et al., 2014; Kenney et al., 2014; Proctor et al., 2012; van Oort et al., 2011). At some point along the continuum of decline, the initial causes may become outweighed in their effect by additional threats arising as a function of diminishing population size. Threats associated with small population sizes include an increased vulnerability to demographic stochasticity, loss of genetic variability, and Allee effects (Berec et al., 2007; Brook et al., 2008; Caughley, 1994). Successful conservation requires an understanding of the initial causes of decline and of those that might be specific to small, remnant populations (Brook et al., 2008; van de Kerk et al., 2019). Many threatened large carnivore species are wide-ranging, occur at low density, and have long generation times.
Therefore, even in large populations, it unavoidably takes years to collect sufficient sample sizes of vital rates to infer population trends and their underlying mechanisms (e.g., Gough & Kerley, 2006; Regehr et al., 2018; Schwartz et al., 2006). For small populations, with inevitably small sample sizes, acquiring the data required for strong inferences is improbable (Mosnier et al., 2015; Zipkin & Saunders, 2018). As a result, estimates of vital rates and the causes of their suppression are seldom obtained in wild populations with few individuals (e.g., Tosoni et al., 2017; Wittmer et al., 2005), despite their world-wide relevance for the development of effective conservation strategies.

Brown bears (Ursus arctos; Figure 1), called grizzly bears over most of North America, are large-bodied, long-lived omnivores with a late onset of reproduction. Females are predominantly philopatric and therefore do not rapidly colonize neighboring habitats or provide demographic rescue to small populations, while males will often disperse outside of their natal home range (McLellan & Hovey, 2001; Proctor et al., 2004). Brown bears usually reach maturity between 4 and 6 years of age, and females remain with dependent offspring for two or more years (Garshelis et al., 2005; Mace et al., 2012; McLellan, 2015; Schwartz et al., 2006). Brown bears typically have high (>95%) annual adult survival probabilities even as populations approach carrying capacity, and population regulation occurs due to variable recruitment rates that reflect the abundance of food or other density-dependent effects (Keay et al., 2018; McLellan, 2015; Schwartz et al., 2003; van Manen et al., 2015). As a result, recovery efforts for this species usually target adult female survival, which is often reduced by human-caused mortality.

Substantial range contraction has resulted in a fragmented global population of brown bears with many small isolates in need of conservation attention (Mattson & Merrill, 2002; McLellan et al., 2017; Zedrosser et al., 2001). There have been several successful efforts to recover such brown bear populations. For example, in the 1930s, as few as 130 brown bears remained in Sweden, but following a reduction in human-caused mortality, the population grew to approximately 700 by the mid-1990s (Swenson et al., 1995) and to over 3,200 bears by 2008 (Kindberg et al., 2011). In the United States, brown bear populations in both the Yellowstone Ecosystem and northern Montana grew at up to 7.6% annually for over 20 years (van Manen et al., 2015) to over 700 bears in each population (Haroldson et al., 2014; Kendall et al., 2009) following the reduction of human-caused mortality.

Not all recovery attempts of brown bear populations, however, have resulted in marked population increases. In Italy, the relict Apennine bear population has been isolated from other brown bears for more than 1,000 years (Benazzo et al., 2017) and, in recent times, has consistently numbered between 50 and 60 individuals without evidence of increase, notwithstanding persistent conservation efforts. In the French Pyrenees, brown bear populations declined to <10 individuals in 1990, and, despite efforts to reduce adult mortality, by 1995 only five individuals remained in the western part of the range, and the central population required reintroduction with bears from Slovenia (Chapron et al., 2003). Reasons for the variable outcomes of recovery efforts in brown bears remain poorly understood but may be related to the small size and increasing isolation of remnant populations.
Here, we use data collected from telemetered brown bears to estimate causes of mortality, age-specific survival probabilities, reproductive rates (age of primiparity, litter size, and interbirth interval), and population growth in two adjacent but genetically separate populations (Apps et al., 2014; McLellan et al., 2019). Genetic capture-recapture monitoring had previously indicated that the North Stein-Nahatlatch population declined by approximately 5% each year (McLellan et al., 2019). We then compare our population growth estimates to those published for other small (<300 individuals) brown bear populations across their distribution using a literature review. Combined, our two objectives aim to better understand the relationship between population size, connectivity, and vital rates in brown bears, contributing not only to our understanding of the effectiveness of brown bear recovery efforts but also to the limited empirical knowledge on small population demography in general.

| Study area

The western portion of the study area is a temperate rain forest dominated by western red cedar (Thuja plicata) and western hemlock (Tsuga heterophylla). At high elevations, the winter snowpack is deep, and a maritime climate results in moderate summer temperatures. The eastern portion of the study area is warmer and drier; Douglas-fir (Pseudotsuga menziesii) dominates low-elevation forests, and higher elevation forests consist mostly of Engelmann spruce (Picea engelmannii) and subalpine fir (Abies lasiocarpa), ranging eastward from wet variants of the forest to very dry variants (MacKenzie, 2012). Alpine ecosystems range from lush, moist herbaceous meadows to dry heather (Cassiope spp.) and rock-dominated topography. The average annual rainfall is approximately 1,030 mm on the western side and 400 mm on the eastern side of the study area. Avalanche chutes are common throughout the study area and are often used by bears in the spring and early summer (McLellan & McLellan, 2015). Mean road density is 0.21 and 0.16 km/km² in the McGillvary Mountains and North Stein-Nahatlatch populations, respectively, and each has similar amounts of settled lands along its perimeter.

| Bear capture and monitoring

We captured, collared, and monitored bears from 2005 to 2018. Bears were immobilized by darting from a helicopter (see McLellan & McLellan, 2015). Capture was carried out in spring, shortly after den emergence when bears were feeding in avalanche chutes and open alpine meadows, or in early autumn when they fed on huckleberries (Vaccinium membranaceum). Once immobilized, we weighed, measured, and fitted bears over 2 years old with either GPS or VHF collars (Lotek Inc.). We also obtained a tissue sample for genetic identification and a vestigial premolar for determining age via cementum annuli. We classified 2 to 5-year-old female bears as subadults and those 6 years and older as adults (Garshelis et al., 2005). Insufficient data were collected from males to include them in this analysis. All collars had canvas spacers to ensure that the collar would eventually drop off, and we weakened the canvas on the collars.

Capture effort, defined as the time we spent searching for bears, was evenly distributed between the study populations until 2014, when it had become apparent that the North Stein-Nahatlatch population was not only small but also had an unusually high incidence of adult female mortality. Although no bears had been injured or killed in our research, the risk of additional female mortality due to capture was deemed too high, so captures from 2015-2018 were limited to the McGillvary Mountains population.
We continued to monitor the vital rates of all collared bears in both populations until 2018. Throughout the first part of the study (2005-2008), we located collared bears by fixed-wing aircraft every 2 weeks from May to November. On each flight, we downloaded GPS location data and attempted to visually locate each bear and count the dependent offspring of females. If we did not find a bear for more than 8 consecutive weeks, we censored them from the analyses estimating survival and reproductive parameters at the time of their last known status. The populations were also monitored using genetic capture-recapture, and all females were genetically detected after their collars had dropped or stopped working; therefore, all had known fates while collared. In the second part of the study (2010-2018), some females were fit with VHF collars to reduce the number of recaptures needed to maintain continuity of monitoring. Collared bears were subsequently located and observed from a helicopter at least once each spring and then again in summer and fall. We intermittently located bears from the ground between aerial attempts. Offspring age was determined by size for cubs (bears <1 year old) and yearlings. We grouped attendant offspring that were 2 years of age and older because we could not visually distinguish between 2- and 3-year-old bears with certainty if the previous year's status was unknown. All collars were programmed to signal if the collar had not moved in 24 hr, and these mortality signals were investigated as soon as possible after detection. Whenever we found a dropped collar with rotted canvas, we assumed the bear was still alive. If a bear was found dead, we performed an investigation and necropsy in the field to determine the cause of death.

| Survival analysis

We estimated survival (S) for adult and subadult female bears using the staggered entry design for the Kaplan-Meier estimator (Pollock et al., 1989). We considered population (S~Pop) and age class (S~Age) as categorical covariates in candidate models, along with the null model (S~1), because we were interested in establishing whether the previously identified differences in population trend (McLellan et al., 2019) were indicative of differences in population-specific survival, and because survival often differs among age classes (McLellan, 2015; Schwartz et al., 2006). We used months as our monitoring interval from April to October, when most bears were active (McLellan & McLellan, 2015), and amalgamated November through March into one monitoring interval because monthly mortality would not be distinguishable when bears were hibernating. We used RMark v.2.25 (Laake & Rexstad, 2008; White, 2008) to fit models and compared models using AIC. Models that deviated from the top model by <2 ΔAIC units were averaged to obtain survival estimates for each population and age class (Burnham & Anderson, 2004). For consistency with the analysis of reproductive parameters (below), we also tested for the statistical significance of the effects present.

We estimated cub survival by observing the number of cubs for each collared female shortly after den emergence and at least once before the following denning season. We assumed cub mortality if cubs were no longer seen with their mothers. We censored from the analysis bears with cubs not located within a month of den emergence, as well as mothers that dropped their collars or that were not visually located again that year.
Due to a possible lack of independence of cub survival within a litter (Swenson et al., 2001), we first tested whether individual cub survival differed from survival within litters using an ANOVA, and then tested for a difference in cub survival between populations. We resampled cub survival data with replacement (bootstrapped) 1,000 times to obtain mean survival and 95% confidence intervals (McLellan, 2015). We used the same method to estimate yearling survival. Analyses were conducted using PopTools (Hood, 2011).

| Age of primiparity, litter size, and interbirth interval

We located all telemetered females shortly after den emergence in spring to determine whether they had reproduced and to count the number of offspring. To estimate the average age of primiparity, we used the techniques developed by Garshelis et al. (1998). This method incorporates data from all monitored nulliparous females, including those not monitored to parturition. For each age, the proportion of females in the sample that reproduced is weighted by the proportion of females in the study population available to have a first litter at that age. The resulting estimates are not biased toward early-maturing bears, which, due to shed collars and mortality, are less likely to be lost from the sample than late-maturing bears. We used the same weighting technique of Garshelis et al. (1998), as adapted by McLellan (2015), to estimate the average interbirth interval for each population. We used data from all collared females for which one or more birthing events were known. The proportion of the female sample that had a subsequent litter during each year following the birth year was used to estimate breeding intervals without bias toward shorter intervals. For each parameter, means and confidence intervals were obtained for each population by bootstrapping the original sample 1,000 times using PopTools (Hood, 2011).

| Reproductive state transition and reproductive rate

The stable reproductive state distribution describes the proportion of adult females in each reproductive state (Schwartz & White, 2008). For brown bears, female reproductive state is defined by the presence and age of dependent offspring: alone (A), with cubs (C), with yearlings (Y), or with 2-year-old or older offspring (T). To obtain an estimate of each population's stable reproductive state distribution, we first estimated the probability that a female will transition from one reproductive state to another using the multistate model in RMark (Laake & Rexstad, 2008). We considered ten biologically possible transitions that are observable when an adult female is monitored for two or more consecutive years. We then iteratively multiplied the resulting state distribution by the transition probability matrix in a Markov chain until it reached the asymptotic stable reproductive state distribution (a minimal sketch of this iteration follows below). For this analysis, we used the markovchain package v.0.6.9.15 (Spedicato et al., 2014) in program R (v.3.6.1; R Core Team, 2019). We resampled the data with replacement and bootstrapped estimates of the stable reproductive state distribution 1,000 times to estimate means and 95% CIs for each population. We did not have a sufficient sample size to include female age as a covariate for transition probabilities. We estimated the mean reproductive rate (m_x; female cubs/year/female) for each population by multiplying the estimated number of female cubs per litter by the stable reproductive state proportion of females with cubs (Schwartz & White, 2008).
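The iteration to the asymptotic stable reproductive state distribution can be sketched in a few lines of base R. The transition probabilities below are invented placeholders for illustration, not the estimates fitted with RMark's multistate models.

# Reproductive states: alone (A), cubs (C), yearlings (Y), 2+ offspring (T).
# Rows are the current state, columns the state one year later; rows sum to 1.
P <- matrix(c(0.35, 0.65, 0.00, 0.00,
              0.30, 0.05, 0.65, 0.00,
              0.20, 0.10, 0.00, 0.70,
              0.60, 0.40, 0.00, 0.00),
            nrow = 4, byrow = TRUE,
            dimnames = list(c("A","C","Y","T"), c("A","C","Y","T")))

state <- c(1, 0, 0, 0)                 # start all females "alone"
for (i in 1:500) state <- state %*% P  # iterate the Markov chain
round(drop(state), 3)                  # asymptotic stable state distribution

In the analysis itself, this iteration is repeated on each of the 1,000 bootstrap resamples to obtain means and 95% CIs.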
Because the sex of each cub was not determined, we assumed the number of female cubs to be half the total number of cubs. Like all methods that pool data across time, this method for estimating the reproductive rate assumes constant transition probabilities, but it is more robust to possible sampling bias from capture and to variability in monitoring duration than simply using the proportion of monitored individuals (Schwartz & White, 2008).

| Population growth and stable age distribution

We estimated the finite population growth rate (λ) by constructing an age-structured matrix population (Leslie matrix) model using repeated random samplings (Monte Carlo estimation) from the bootstrapped probability distributions of age-specific survival and reproduction for each population. We considered the age of last reproduction to be 24 years (McLellan, 2015; Schwartz et al., 2003). For each iteration, we solved for the dominant eigenvalue, which is the population growth rate. For each population, we also estimated the net reproductive rate (R0), defined as the estimated number of female cubs an adult female will produce in her lifetime, and the mean generation time (GenT), defined as the time required for a typical female to produce R0 offspring (Caswell, 2001). Analyses were conducted using the popbio v.2.2.4 package (Stubben & Milligan, 2007) in program R (R Core Team, 2019); a minimal sketch follows below. We repeated this process 1,000 times for each population to estimate the mean and variance of each derived quantity.

We estimated the stable age distribution of each population by converting the age-structured matrix population model into a stage-structured population model. Because vital rates were estimated for age groups, only transition rates required calculation. The transition rate is the expected proportion of individuals transitioning from one life stage to the next and is conditional on individual survival and the population growth rate (Caswell, 2001; Fujiwara & Diaz-Lopez, 2017). We applied the conditional age-group transition rate described by Caswell (2001), where the probability that an individual will transition from one age group to the next (P_{j,i}) is

P_{j,i} = λ^(−x_j) l(x_j) / Σ_{x = x_i}^{x_j − 1} λ^(−x) l(x),

where λ is the population-specific growth rate, x_i is the first age in stage i, x_j is the first age in stage j = i + 1, and l(x) is the survivorship at age x. We repeated this process for all 1,000 bootstrapped Leslie matrices from the preceding analysis to estimate the confidence interval around each population's stable age class outcome.

| Population growth estimates in small populations of brown bears

To investigate the relationships between population size and population growth rates in small populations of brown bears, we used the IUCN Red List of Threatened Species, which lists all isolated populations globally (McLellan et al., 2017). We limited our search to unhunted populations that had fewer than 300 bears. Then, we used Google Scholar and Web of Science to search for additional information. Specifically, we searched the name of the country or region, the species, and the term "population" (e.g., "Himalaya" + "Ursus arctos" + "population") for each population we had identified above, and we only included populations for which a population size or trend estimate was available. For populations with long histories of research and sometimes changing population size, we used the size estimated toward the beginning of the population change, and we did not consider examples from populations that lacked temporally overlapping estimates of size and growth rate.
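Here is a minimal popbio-based sketch of the Leslie-matrix step. All vital rates are placeholders rather than the study's estimates, and serve only to show how λ, R0, and GenT are extracted.

library(popbio)

# Female-only Leslie matrix with ages 0-24 (last reproduction at age 24).
s <- c(0.60, 0.85, rep(0.92, 23))    # cub, yearling, and older survival
m <- ifelse(0:24 >= 8, 0.15, 0)      # female cubs/female/year from age 8
A <- matrix(0, 25, 25)
A[1, ] <- s * m                      # fecundity row (survive the year, then breed)
for (i in 1:24) A[i + 1, i] <- s[i]  # survival sub-diagonal

lambda(A)                  # finite population growth rate (dominant eigenvalue)
net.reproductive.rate(A)   # R0: lifetime female cubs per female
generation.time(A)         # mean generation time (GenT)
stable.stage(A)            # stable age distribution

Repeating this over the 1,000 bootstrapped matrices gives the Monte Carlo distributions of λ, R0, and GenT described in the text.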
We categorized the populations according to their known geographic or genetic connectivity (connected to other populations or isolated) and history of augmentation (augmented or not).

| Survival of independent females

Comparison of known-fate survival models indicated that survival differed between populations.

| Age of primiparity, interbirth interval, and litter size

Six of the monitored females in the McGillvary Mountains were nulliparous during the course of the study; three did not reproduce by 6, 7, and 8 years old, when they were censored because they had lost their collars, and one female reproduced for the first time at age 11. The remaining two had first surviving litters at ages 8 and 9. Although both these females were monitored from ages 3 to 7 and 6 to 8, neither was observed the year immediately before having surviving cubs, so they could have had nonsurviving cubs at ages 7 and 8, respectively. Two more females were first captured at age 7 with cubs. The estimated mean age of primiparity was 8.3 years (95% CI: 7.0-10.0) excluding possible nonsurviving first litters. Including the possible nonsurviving first litters, the mean age of primiparity would be 7.9 years (95% CI: 6.1-10.0). In the North Stein-Nahatlatch, the age of first surviving litter was observed for one bear at 12 years, and two nulliparous females died at ages 7 and 8. We were unable to obtain estimates of primiparity with sufficient precision to compare populations. Interbirth intervals did not measurably differ between populations. We observed six interbirth intervals and six partial intervals.

| Survival of dependent offspring

We estimated the survival rates of cubs and yearlings, as well as the reproductive rates of adult females, from the reproductive data for each population; cub survival was lower in the North Stein-Nahatlatch (Table 3; Figure 3a).

| Stable age distribution

The difference in survival probabilities resulted in differences in the stable age distribution of each population (Figure 3b). The McGillvary Mountains had a higher proportion of yearling and subadult bears, whereas the North Stein-Nahatlatch had proportionately more adults and older bears (Figure 3b).

Figure 3. Density plots of (a) the stable reproductive state distribution estimated using multistate transition models on reproductive data from collared adult female brown bears (≥6 years), and (b) bootstrapped estimates of the stable age distribution estimated from vital rates, for grizzly bears in the McGillvary Mountains and North Stein-Nahatlatch populations in southwestern British Columbia, Canada.

| Population growth estimates in small populations of brown bears

Comparing population growth rates with those of other brown bear populations with available data showed that the projected increase in the McGillvary Mountains population was similar to that of other connected populations following reductions in human-caused mortality (Figure 4). In contrast, the projected growth rate of the North Stein-Nahatlatch population was very low and mirrored observed declines in other small, isolated brown bear populations.

| Discussion

We used data from telemetered individuals in two adjacent but distinct brown bear populations in British Columbia, Canada, to measure the demographic components of population change following the elimination of human-caused mortality from legal hunting.
Although both populations were in ecologically similar environments, one had triple the density and some genetic and demographic connectivity to neighboring bear populations, while the other was small (<25 individuals) and genetically isolated. Despite similar management efforts, the larger, connected population we studied had higher independent female and cub survival rates than the small, isolated population, resulting in widely divergent population trends.

| Small population inferences

An interesting caveat of studying small populations with very few adult females is that vital rate estimates, and subsequent inferences, were unavoidably drawn from small sample sizes. Extreme outcomes (high and low) are common with small samples, and therefore researchers have little confidence in small samples obtained from large populations. However, because the number of individuals we monitored approached the size of the entire population, we are confident that our population-specific estimates and projections are accurate for the period of study, but less confident that the results necessarily reflect systemic factors in the observed trends. Chance events can play a major role in the dynamics of small populations (Engen & Saether, 1998). If, by chance, a different group of females with the same genetic makeup had been living in the study area, we would likely have generated different estimates. Although we were in principle sampling from an infinite potential population, our sample size was severely restricted to the few animals that were actually there.

| Population growth estimates in small populations of brown bears

Comparing data from the two populations we studied with other small, unhunted brown bear populations suggested some relationships among population size, connectivity, and population growth for the species. Although a statistically rigorous comparison was not possible due to the limited availability of vital rate estimates and their variances, small, isolated bear populations below approximately 50 individuals usually continued to decline even with efforts to reduce mortality, unless they were augmented (Figure 4). This agrees with observations from other studies of small populations of carnivores.

| Survival estimates in small populations of brown bears

Adult female survival in the small and isolated North Stein-Nahatlatch population was much lower (0.81) than in the McGillvary Mountains. Other small and isolated brown bear populations also report low adult female survival rates (range 0.91-0.93) (Chapron et al., 2009; Kasworm et al., 2007; Tosoni et al., 2017). In these populations, despite considerable conservation efforts, female survival was similar to or even lower than in several heavily hunted brown bear populations (McLellan, 2015; Miller et al., 1997). High adult female mortality is further illustrated by the high natural mortality rate of 0.12 mortalities/bear-year of monitoring in the North Stein-Nahatlatch population, much higher than the <0.005 mortalities/bear-year of monitoring recorded in large populations such as the Yellowstone Ecosystem (Schwartz et al., 2006).

Cub survival in the North Stein-Nahatlatch population was among the lowest documented for the species. Only a few high-density, unhunted Alaskan populations in remote areas, where bears were thought to be at carrying capacity, had similarly low cub survival (Keay et al., 2018; Sellers et al., 1999). Inbreeding depression may also have reduced cub survival; inbreeding has been associated with reduced juvenile survival in captive mammal (Ralls et al., 1988) and wild ungulate (Walling et al., 2011) populations.
Infanticide by adult males is another possible contributor to low cub survival (McLellan, 1994). Sexually selected infanticide has been documented in brown bears in Sweden, and it may be exacerbated when the adult sex ratio favors males (Chapron et al., 2009; McLellan, 2005). In small populations with few adult females and long interbirth intervals, there will often be years when all adult females are with dependent offspring and none are reproductively available (e.g., Gonzalez et al., 2016), likely resulting in an increased chance of infanticide and creating a component Allee effect on juvenile survival as the population decreases.

| Reproductive rates in small populations of brown bears

Reproductive rates were similar in both of our study populations but generally lower than in other brown bear populations below carrying capacity (McLellan, 2015; Zedrosser et al., 2011). The reproductive state distributions of the two populations we studied suggest that adult females in the North Stein-Nahatlatch population may be without dependent offspring more frequently than in other brown bear populations (e.g., Garshelis et al., 2005; Schwartz et al., 2003; Støen et al., 2006), including the McGillvary Mountains. Long interbirth intervals and a late age of primiparity contributed to an increased number of adult females without dependent offspring (offspring ≤2 years) and the associated reduction in reproductive rates. Although the sample sizes used to estimate reproductive parameters in these populations are insufficient to support any definitive conclusions, the observed ages of primiparity in the North Stein-Nahatlatch were older than the 5-7 years of age at which females in most other populations reproduce (McLellan, 2015; Schwartz et al., 2006; but see Garshelis et al., 2005).

| Small population dynamics

The increased vulnerability of small populations to decline has been widely discussed, with the general conclusion that small populations of sexually reproducing diploids are subject to loss of genetic diversity and some form of Allee effect (Berec et al., 2007; Caughley, 1994; Frankham et al., 2019). Further, the effects of demographic stochasticity on population growth and the probability of extinction are increased in small populations. Random fluctuations in birth and death events have very little effect on population growth in large populations; in small populations, however, simultaneous "bad luck" among a few individuals can cause the population to decline to zero (Engen & Saether, 1998), as may have occurred in the North Stein-Nahatlatch population. Our findings support the theory that as populations decline, we can expect wide variation in vital rates that diverge from those common for the species, potentially exacerbating the rate of decline (Lande, 1998). They also suggest that small, isolated populations of large carnivores may not respond as well to recovery actions that were successful for larger, or more connected, populations of the same species (Fanshawe et al., 1991; Ferreras et al., 2001; Groom et al., 2014; Tosoni et al., 2017). For many carnivores, this means that focusing solely on reducing adult mortality is likely to be insufficient to promote recovery and should not be the only conservation strategy. By understanding the demographic components of population change, and how they may differ in small populations compared to larger ones, we not only increase our understanding of the relationships between population size and the potential for recovery but are also better able to convincingly prescribe targeted, population-specific recovery initiatives.
ACKNOWLEDGMENTS

DATA AVAILABILITY STATEMENT

Data are available at https://datadryad.org/stash/share/vuGxVQ-HVBxBsg1bLnV4593oycNlmHHdS-FCsuDVbt4
The Gromov-Witten potential associated to a TCFT

This is the sequel to my preprint "TCFTs and Calabi-Yau categories", math.QA/0412149. Here we extend the results of that paper to construct, for certain Calabi-Yau A-infinity categories, something playing the role of the Gromov-Witten potential. This is a state in the Fock space associated to periodic cyclic homology, which is a symplectic vector space. Applying this to a suitable A-infinity version of the derived category of sheaves on a Calabi-Yau yields the B model potential, at all genera. The construction doesn't go via the Deligne-Mumford spaces, but instead uses the Batalin-Vilkovisky algebra constructed from the uncompactified moduli spaces of curves by Sen and Zwiebach. The fundamental class of Deligne-Mumford space is replaced here by a certain solution of the quantum master equation, essentially the "string vertices" of Zwiebach. On the field theory side, the BV operator has an interpretation as the quantised differential on the Fock space for periodic cyclic chains. Passing to homology, something satisfying the master equation yields an element of the Fock space.

Notation

We work throughout over a ground field K containing Q. Often we will use topological K vector spaces. All tensor products will be completed. All the topological vector spaces we use are inverse limits, so the completed tensor product is also an inverse limit. All the results remain true without any change if we work over a differential graded ground ring R, and use flat R modules. (An R module is flat if the functor of tensor product with it is exact.) We could also have only a Z/2 grading on R.

Acknowledgements

I would like to thank Tom Coates, Ezra Getzler, Alexander Givental and Paul Seidel for very helpful conversations, and Dennis Sullivan for explaining to me his ideas on the Batalin-Vilkovisky formalism and moduli spaces of curves.

Topological conformal field theories

Let S be the topological category whose objects are the non-negative integers, and whose morphism space S(n, m) is the moduli space of Riemann surfaces with n parameterised incoming and m parameterised outgoing boundaries, such that each connected component has at least one incoming boundary. These surfaces are not necessarily connected. Let S_χ(n, m) ⊂ S(n, m) be the space of surfaces of Euler characteristic χ. S is a symmetric monoidal topological category, under disjoint union.

Let C_* be the functor of normalised¹ singular simplicial chains with coefficients in K, any field containing Q. As C_* is a symmetric monoidal functor, C_*(S) is a differential graded symmetric monoidal category. We also need a shifted version: define C^(d)_*(S(n, m)) by

C^(d)_i(S)(n, m) = ⊕_χ C_{i+d(χ−n+m)}(S_χ(n, m)).

Note that χ − n + m is even, so this shift in degree doesn't affect the signs.

Definition 3.0.1. A topological conformal field theory of dimension d over K is a symmetric monoidal functor F : C^(d)_*(S) → Comp_K to the category of complexes over K, with the property that the tensor product maps F(n) ⊗ F(m) → F(n + m) are quasi-isomorphisms.

The notion of topological conformal field theory was introduced independently by Getzler [Get94] and Segal [Seg99]. One source of topological conformal field theories is the following theorem.

Theorem (C., [Cos04]). Let C be a Calabi-Yau A∞ category of dimension d over K. Then there is a topological conformal field theory F, of dimension d, with a natural quasi-isomorphism CC_{*−d}(C)^⊗n ≅ F(n), where CC_* refers to the Hochschild chain complex.
Remark: To make the signs easier to deal with, I have changed the notation a little from [Cos04]. This explains the shift by d in the Hochschild chain complex, which wasn't present in [Cos04].

We are also interested in Z/2 graded TCFTs. Such a theory is a symmetric monoidal functor from C_*(S) to the category of Z/2 graded complexes of vector spaces, compatible with differentials and with the grading. To keep notation simple, we will always work with only a Z/2 grading.

Informal outline of the construction

This paper constructs, for certain TCFTs, something playing the role of the Gromov-Witten potential. One way to do this is due to Kontsevich [Kon03]. His idea is that, in certain circumstances, we can extend the TCFT to include operations coming from the Deligne-Mumford spaces. Then the Gromov-Witten potential is defined in the usual fashion, using the fundamental class of Deligne-Mumford space and ψ classes. However, it turns out that there is a choice involved in this construction, essentially of a trivialisation of the circle action on the TCFT. The present paper provides an alternative to Kontsevich's construction, which is canonical, but instead of a generating function gives us a state in a Fock space².

The constructions in this paper bypass Deligne-Mumford space completely. Instead we use Sen and Zwiebach's [SZ94, SZ96] Batalin-Vilkovisky algebra, which is constructed from the uncompactified moduli spaces of curves. The fundamental class of Deligne-Mumford space is replaced by a certain solution of the quantum master equation in this BV algebra. We construct from this solution of the master equation a ray in a certain Fock space associated to a TCFT. The idea that the fundamental chain satisfies the master equation is not new here, but is due to Sullivan [Sul04], and appeared implicitly in earlier work of Sen and Zwiebach [SZ94, SZ96]. This fundamental chain is essentially what Zwiebach calls "string vertices"; it is unique up to homotopy. The connection with the Fock space seems to be new, though.

There is also a BV algebra associated to a TCFT. The solution of the master equation in moduli spaces gives one, say S, in this BV algebra. The master equation says that

(d + △) exp S = 0.

We interpret the total BV operator d + △ as the quantised differential on a chain level Fock space for a certain dg symplectic vector space. With this differential, the Fock space becomes a dg module for the Weyl algebra. Thus, passing to homology, exp S becomes an element of the Fock space for the homology of our symplectic vector space.

There are various technicalities which make a rigorous exposition of this construction a little unreadable. Therefore I'll start by giving a sketch of the construction, which emphasises the geometry and de-emphasises the technical details.

4.1. Complexes with a circle action. Let F be a TCFT.

¹ Normalised means we quotient by the subcomplex of degenerate simplices.

² Previously, in an attempt to understand Kontsevich's lecture [Kon03], I constructed a homotopy functor which when applied to the uncompactified moduli spaces of curves yields the Deligne-Mumford spaces. It follows automatically that applying the same functor to a TCFT yields something which carries operations from the Deligne-Mumford spaces. I sketched this construction in the introduction to [Cos04], without proof. However, one of the results stated there is not right. My calculation of the result of the functor applied to a TCFT was over-optimistic. I had claimed we get cyclic homology, but instead we get cyclic homology tensored with the ring of functions on a certain space of inner products on cyclic homology. So in order to get operations from Deligne-Mumford spaces we need to choose such an inner product.
In this section we will make the simplifying assumption that the maps F(1)^⊗n → F(n) are isomorphisms, and not just quasi-isomorphisms. This is just for expository purposes. Let V = F(1). As the monoid S(1, 1) contains S¹ as a subgroup³, the algebra C_*(S¹) acts on V. This is formal; there is a quasi-isomorphism⁴ H_*(S¹) → C_*(S¹). Let D : V → V be the odd operator corresponding to the fundamental class of S¹. We have three associated complexes: the homotopy invariants V^{hS¹}, the homotopy coinvariants V_{hS¹}, and the Tate complex V^{Tate}. These are recalled in detail in section 5; the subscript or superscript hS¹ refers to homotopy coinvariants or invariants, respectively.

³ We allow "infinitely thin" annuli in S(1, 1), so that S becomes a unital category.

⁴ This only works if C_* is the normalised singular simplicial chains.

4.2. The category of annuli. Let M(m) be the moduli space of Riemann surfaces with m (outgoing) boundary components. Such surfaces may be disconnected; also they may have connected components with no boundary. We allow m = 0. Let M_g(m) ⊂ M(m) be the subspace of connected surfaces of genus g.

Now we define a topological category A, which is a subcategory of S. The objects of A are the non-negative integers, and the morphisms are the morphisms in S(n, m) given by Riemann surfaces all of whose connected components are annuli. As each such annulus has at least one incoming boundary component, this is a subcategory.

Taking singular chains, we find a functor C_*(A) → Comp_K, sending n → C_*(M(n)). If F is a TCFT, sending n → F(n) also defines a functor from C_*(A) to complexes. A TCFT contains operations from the uncompactified moduli spaces of curves, so we should be able to relate F and M. One could hope for a natural transformation C_*(M) → F, encoding the TCFT structure. However, there is a problem; F carries operations from the moduli space S(n, m) of Riemann surfaces which have at least one incoming boundary component, whereas M is given by Riemann surfaces with no incoming boundary components, and possibly no boundary at all. The way around this problem is to construct a kind of semi-direct product functor F ⋉ M : C_*(A) → Comp_K, which contains both F and M, as well as the data of the action of S on F. This construction will be explained in the body of the paper. Here we will simply pretend that there is a natural transformation C_*(M) → F. This is purely for expository purposes. The argument used in the body of the text is more complicated but relies on the same ideas.

4.3. Batalin-Vilkovisky algebras. Recall [Get94] that a Batalin-Vilkovisky algebra is a commutative dga B equipped with an odd differential operator △, which is of order 2, and satisfies △² = [d, △] = 0.

Consider the complex

F(M) = ⊕_m C_*(M(m))_{h(S¹ ≀ S_m)},

where S¹ ≀ S_m is the wreath product group (S¹)^m ⋉ S_m. Let F_{g,n}(M) be the part coming from connected surfaces of genus g with n boundaries. The complex F(M) is a commutative differential graded algebra, where the product comes from disjoint union of surfaces. We will give F(M) the structure of a Batalin-Vilkovisky algebra. This BV algebra from moduli spaces was introduced by Sen and Zwiebach [SZ94, SZ96], and also studied by Sullivan [Sul04]. The operator △ is defined⁶ as a sum of all possible ways of gluing pairs of boundary components together, with a full twist by S¹. This operator comes from the fundamental one-chain in C₁(A(2, 0)).
It turns out that any other symmetric monoidal functor C_*(A) → Comp_K defines a Batalin-Vilkovisky algebra, in a similar way. In particular, there is a BV structure on Sym* V_{hS¹}. A point in A(2, 0), thought of as a zero chain, gives a pairing ⟨ , ⟩ on V. The operator △ on Sym* V_{hS¹} is the order 2 differential operator which is zero on Sym^≤1 V_{hS¹}, and on Sym² V_{hS¹} is given by contracting with the pairing and the circle action (the explicit formula appears in section 7.1). There is a map of BV algebras

F(M) → Sym* V_{hS¹}.

A solution of the quantum master equation in a BV algebra B is an odd element S satisfying (d + △) exp S = 0. This definition extends to elements S ∈ B ⊗ R for any ring R. We are interested in

Theorem 1. There exists a sequence of elements S_{g,n} ∈ F_{g,n}(M), of degree 6g − 6 + 2n, with the following properties.

(1) S_{0,3} is the class of a point in F_{0,3}(M).

(2) Form the generating function S = Σ_{g,n} λ^{2g−2+n} S_{g,n}. Then S satisfies the Batalin-Vilkovisky quantum master equation:

(d + △) exp S = 0.

Further, such an S is unique up to homotopy through such series. In particular the class [e^S] in d + △ homology is independent of any choice. A homotopy of solutions of the master equation (or of anything else) is a family of such, parameterised by the contractible dga K[t, dt] = Ω*(A¹).

Remarks:

(1) This result is essentially a mathematical formalisation of the work of Sen and Zwiebach [SZ94, SZ96]. The choice of such a solution of the master equation is essentially the same as the choice of string vertices in their work, which for these authors is a certain subspace of M_g(n). They realised that string vertices satisfy the master equation, and that changing the choice of string vertices changes e^S by a d + △ exact term. Also the string vertices must correspond to the fundamental class, since every Riemann surface appears uniquely by gluing surfaces lying in the string vertices. This point was made clear in the work of Sullivan [Sul04] on chain level Gromov-Witten invariants.

(2) The proof of this theorem is very easy. The only facts we need about moduli spaces are trivial facts about the rational homological dimension of M̄_{g,n}/S_n. Therefore the result is true if we use any other sequence of spaces M_g(n) with the same gluing structure. The solution of the master equation is intrinsic to the modular operad with compatible circle actions given by the spaces M_g(n).

Let M̄(n) be the moduli space of stable, possibly nodal, possibly disconnected, algebraic curves, with n marked smooth points. Consider the commutative dga F(M̄) defined by

F(M̄) = ⊕_n C_*(M̄(n)/S_n).

The algebra structure comes from disjoint union. Make this into a BV algebra by setting △ = 0. Let

[M̄] = Σ_{g,n} λ^{2g−2+n} [M̄_{g,n}/S_n],

where [M̄_{g,n}/S_n] is an orbifold fundamental chain for M̄_{g,n}/S_n.

Theorem 2. There is a map F(M) → F(M̄) in the homotopy category of BV algebras, such that S maps to [M̄].

These results are not as mysterious as they might at first appear. The master equation can be rephrased as

d S_{g,n} + Σ_{g₁+g₂=g, n₁+n₂=n+2} ½ {S_{g₁,n₁}, S_{g₂,n₂}} + △S_{g−1,n+2} = 0,

where { , } is a certain odd Poisson bracket on the space F(M), constructed in a standard way from the BV operator △. If α ∈ F_{g,n}(M) and β ∈ F_{h,m}(M) then {α, β} ∈ F_{g+h,n+m−2}(M) is the sum over ways of gluing a boundary of α to one of β, with a full twist by S¹. The twist by S¹ raises degree by 1. Similarly, △α is the sum over ways of gluing a boundary of α to itself, with a twist by S¹.
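To see how the componentwise equation arises from (d + △) exp S = 0, one only needs that △ is an order 2 operator with △1 = 0; the following standard identity is a quick check:

\[
\triangle e^{S} = \Bigl(\triangle S + \tfrac12\{S,S\}\Bigr)e^{S},
\qquad
d\, e^{S} = (dS)\, e^{S},
\]

where $\{a,b\} = \triangle(ab) - (\triangle a)b - (-1)^{|a|}a\,\triangle b$. Hence $(d+\triangle)e^{S} = 0$ is equivalent to $dS + \tfrac12\{S,S\} + \triangle S = 0$, and matching coefficients in each $F_{g,n}(\mathcal M)$ recovers the componentwise equation displayed above.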
We can see why the fundamental chain satisfies the master equation, and relate it to the fundamental class of Deligne-Mumford space, using a nice model for the spaces M(m) introduced by Kimura, Stasheff and Voronov [KSV95]. This is the moduli space N(m) of algebraic curves in M̄(m) decorated at each marked point with a ray in the tangent space, and at each node with a ray in the tensor product of the rays of the tangent spaces at each side. This is an orbifold with corners, whose interior consists of non-singular curves; this shows that it is homotopy equivalent to M(m). The operations of gluing marked points and of rotating the rays at the marked points make N(n) into a functor from a category A′, weakly equivalent to A. We can construct from N a BV algebra F(N), in the same way we constructed F(M). These two BV algebras are quasi-isomorphic.

Consider the space

X(n) := N(n)/(S¹ ≀ S_n),

where S¹ ≀ S_n refers to the wreath product group (S¹)^n ⋉ S_n. A surface in X(n) has unordered marked points, and unparameterised boundaries (i.e. no ray in the tangent space). The BV algebra F(N) is given by the chains on the spaces X(n).

The space X(n) is an orbifold with corners. Let X_g(n) be the subspace of connected surfaces of genus g. The boundary of X_g(n) is (away from codimension 2 strata) a union of bundles over products of similar moduli spaces. There is a component for each way of splitting g = g₁ + g₂, n + 2 = n₁ + n₂, and a component corresponding to the loop, where we have a genus g − 1 surface with n + 2 marked points.

Let us describe this last component in detail. It is a bundle over the space X_{g−1}(n + 2), consisting of a point in this moduli space together with a choice of two marked points, and a way of gluing them together. (There is an S¹ of possible ways of gluing.) Let us call this space Y. There is a diagram

X_{g−1}(n + 2) ← Y → X_g(n).

Pulling a chain in X_{g−1}(n + 2) back to Y (and so increasing the degree by 1), and then pushing it forward to X_g(n), is precisely the operator −△. A similar picture holds at the other boundary components, except we find the bracket operator { , } instead of △.

Now suppose that the orbifold with corners X_g(n) has a fundamental chain [X_g(n)], behaving in a nice functorial manner. Then the discussion above would imply that

d[X_g(n)] + Σ_{g₁+g₂=g, n₁+n₂=n+2} ½ {[X_{g₁}(n₁)], [X_{g₂}(n₂)]} + △[X_{g−1}(n + 2)] = 0.

In other words, the fundamental chains of these moduli spaces satisfy the BV master equation. Now it is not difficult to show (using the fact that H_i(X_g(n)) = 0 for i ≥ 6g − 7 + 2n, (g, n) ≠ (0, 3)) that there is a unique solution of the master equation up to homotopy, which sits in the correct degrees and has the correct leading term. Also, quasi-isomorphic BV algebras have the same set of homotopy classes of solutions of the master equation. These results now explain why the image of the class S_{g,n} in H_{6g−6+2n}(M̄_{g,n}/S_n) is the fundamental class.
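The degree 6g − 6 + 2n appearing here is forced by the geometry; as a quick dimension count:

\[
\dim_{\mathbb R} X_g(n) = \dim_{\mathbb R}\bigl(M_g(n)/(S^1)^n\bigr)
= (6g - 6 + 3n) - n = 6g - 6 + 2n,
\]

since a genus $g$ surface with $n$ parameterised boundaries has $6g-6+3n$ real moduli and each boundary rotation removes one. A fundamental chain of $X_g(n)$ therefore sits in exactly the degree required of $S_{g,n}$, which is also the top degree of $\overline{M}_{g,n}/S_n$.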
4.4. Weyl algebras and Fock spaces. We have a map of BV algebras F(M) → Sym* V_{hS¹}. Let D ∈ Sym* V_{hS¹}[[λ]] denote the image of exp S. As S satisfies the master equation, D satisfies

(d + △)D = 0.

The last step involves interpreting the homology class of D as an element of a Fock space, by interpreting the differential d + △ as the natural differential on a chain-level Fock space. V carries an inner product ⟨ , ⟩, coming from an annulus in C_*(S(2, 0)). We can arrange that d is skew self-adjoint, and that the circle operator D is self-adjoint, with respect to this pairing. Define an antisymmetric pairing Ω on V^{Tate} by

Ω(vf(t), wg(t)) = ⟨v, w⟩ Res_{t=0} f(−t)g(t) dt.

Ω is compatible with the differential d + tD on V^{Tate}. This is the same form as that used in the work of Coates and Givental [CG01, Giv01, Giv04]. The symplectic nature of Tate cohomology is also studied by Morava [Mor01].

Let W(V^{Tate}), the Weyl algebra, be the free algebra generated by u ∈ V^{Tate} modulo the relation [u, u′] = Ω(u, u′). Here [u, u′] is the super commutator. V^{Tate} has a decomposition V^{Tate} = V^{hS¹} ⊕ V_{hS¹} into isotropic subspaces. The subspace V^{hS¹} is preserved by the differential, but V_{hS¹} is not in general. The left ideal in the Weyl algebra generated by V^{hS¹} is also preserved by the differential. Let F(V^{Tate}) be the quotient module. The action of Sym* V_{hS¹} ⊂ W(V^{Tate}) on the image of 1 in F(V^{Tate}) gives an identification

F(V^{Tate}) ≅ Sym* V_{hS¹}.

The Weyl algebra action here is such that V_{hS¹} acts by multiplication, and V^{hS¹} acts by differentiation. F(V^{Tate}) is a dg module for the dg algebra W(V^{Tate}), i.e. the differential is compatible with the action.

Lemma 4.4.1. Under this identification, the natural differential on F(V^{Tate}) is d + △.

Proof. We can consider Sym* V_{hS¹} ⊂ W(V^{Tate}) as a subalgebra, which is not preserved by the differential. The differential d_{W(V^{Tate})} is a derivation. So the associated map d_{W(V^{Tate})} : Sym* V_{hS¹} → W(V^{Tate}) is characterised by how it behaves on the generators V_{hS¹}. On a generator u ∈ V_{hS¹} we have d_{W(V^{Tate})} u = (d + tD)u, which has a component in V_{hS¹} and, when u contains t^{−1}, a component in V^{hS¹}. Therefore, using the relations in W(V^{Tate}), we find that d_{W(V^{Tate})}(u₁ ⋯ u_n) consists of terms ±u₁ ⋯ (d u_i) ⋯ u_n together with contraction terms ±Ω(d u_i, u_j) u₁ ⋯ û_i ⋯ û_j ⋯ u_n arising from moving the V^{hS¹} components to the right, where û indicates that we skip that term. Modulo the ideal generated by V^{hS¹} this is the same as d + △. The operator d + △ is the unique odd differential on the space Sym* V_{hS¹} which makes it into a dg module for the Weyl algebra.

Let us use the notation H = H_*(V^{Tate}) and H₊ = H_*(V^{hS¹}). We will assume the map D : V → V is zero on homology⁷, and that the pairing on H_*(V) is non-degenerate. This implies that H is symplectic, and the map H₊ → H is injective with Lagrangian image. It is easy to see that

H_*(F(V^{Tate})) ≅ F(H),

where F(H) is the quotient of W(H) by the left ideal generated by H₊. Therefore [D] ∈ F(H) is an element in the Fock space for H.

Of course, this construction is not restricted to the fundamental class. Suppose R is a graded commutative algebra, and φ ∈ F(M) ⊗ R satisfies (d + △)φ = 0. Then φ carries over to an element of F(V^{Tate}) ⊗ R and so, when we pass to homology, an element of F(H) ⊗ R. This allows us to include various tautological classes, such as kappa classes, etc.

As explained in section 10, the choice of a complementary subspace to H₊, preserved by t^{−1}, leads to an identification of F(H) with a space of functions, and thus a more familiar looking Gromov-Witten potential. Changing the polarisation changes this potential by an element of Givental's twisted loop group.

Remark: Maxim Kontsevich has informed me that he independently discovered the relation with Givental's group.

Remark: The main point at which this informal exposition differs from the rigorous construction contained in the rest of the paper is the following. Recall that we don't really have a natural transformation C_*(M) → F, but instead we have a kind of semi-direct product functor F ⋉ M. It turns out that we don't end up with an element in the BV algebra Sym* V_{hS¹}, but instead we construct a module for the Weyl algebra W(V^{Tate}). This is encoded in the ideal in the Weyl algebra which annihilates it.

⁷ When the TCFT comes from a Calabi-Yau A∞ category, this corresponds to degeneration of the non-abelian Hodge to de Rham spectral sequence.
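To make the pairing Ω and the operator △ of lemma 4.4.1 concrete on monomials, here is a sketch (the overall signs depend on the Koszul conventions and are suppressed):

\[
\Omega(v\,t^{a},\, w\,t^{b}) = \langle v,w\rangle \operatorname{Res}_{t=0} (-t)^{a} t^{b}\,dt
= (-1)^{a}\langle v,w\rangle\,\delta_{a+b,-1},
\]

so both $V^{hS^1}$ ($a, b \ge 0$) and $V_{hS^1}$ ($a, b \le -1$) are isotropic, and $t^{a}$ pairs only with $t^{-1-a}$. For $u = v\,t^{-1} \in V_{hS^1}$, the differential produces the term $(Dv)\,t^{0} \in V^{hS^1}$; moving it past $w\,t^{-1}$ in $\mathcal W(V^{\mathrm{Tate}})$ leaves the commutator $\Omega((Dv)t^{0}, w\,t^{-1}) = \langle Dv, w\rangle$. Hence, up to sign,

\[
\triangle\bigl((v f(t))(w g(t))\bigr) = \pm\,\langle Dv, w\rangle
\operatorname{Res}_{t=0} f\,dt \cdot \operatorname{Res}_{t=0} g\,dt,
\]

which vanishes unless both $f$ and $g$ contain $t^{-1}$.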
4.5. The holomorphic anomaly. This picture connects very well with Witten's approach to the holomorphic anomaly [Wit93]. In Witten's picture, as I understand it, we interpret the B model potential (without descendents) for a Calabi-Yau 3-fold X as an element of the Fock space associated to the symplectic vector space H³(X). On the moduli space of Calabi-Yaus, there is a Gauss-Manin connection on H³(X), which preserves the symplectic form. This therefore induces a projectively flat connection on the associated Fock space, and the line spanned by the potential should be flat.

Witten doesn't phrase things in quite this way. Rather, he thinks of a single Fock space, associated to H³(X), for some Calabi-Yau X. This doesn't depend on the complex structure on X. Each choice of complex structure, however, yields a polarisation into a direct sum of Lagrangian subspaces. Also each complex structure on X yields a B model potential, which can be considered to be a function on H^{3,0} ⊕ H^{2,1}. Using the polarisation, we can identify the space of functions on H^{3,0} ⊕ H^{2,1} with the Fock space for H³(X, C), and the claim is that the line in the Fock space is independent of the choice of complex structure on X in a given connected component of the moduli space of complex structures. This picture is of course equivalent to the one in the previous paragraph.

Let us now see how this works in our context. For each Calabi-Yau A∞ category, we have a Fock space with an element in it. We can think of this as a sheaf of left ideals in the sheaf of Weyl algebras on the CY moduli space⁸. The Weyl algebra is associated to the periodic cyclic chain complex, shifted by d. There should be a Gauss-Manin flat connection on this, which is the chain level version of the one introduced by Getzler [Get93] on periodic cyclic homology. The latter was used in Barannikov's work on the B model [Bar99, Bar00].

Conjecture. After modifying the Gromov-Witten potential to take account of the unstable moduli spaces (g, n) = (0, 1) and (0, 2), the ideal is preserved by this flat connection.

This will be discussed elsewhere. For a Calabi-Yau X over K, we use an appropriate A∞ version of the derived category of sheaves. Then we can use the Hochschild-Kostant-Rosenberg theorem to identify periodic cyclic homology with H^{−*}_{DR}(X)((t)), where t has degree −2. The Gauss-Manin connection on this is the K((t))-linear extension of the usual one.

It is interesting to note that the ideal in the Weyl algebra can be defined even for degenerate TCFTs, where the pairing on V is degenerate on homology. The TCFT constructed from a non-compact symplectic manifold should yield an example of such. Calabi-Yau A∞ categories where the pairing on Hochschild homology is degenerate can be thought of as lying on the boundary of the moduli space of Calabi-Yau A∞ categories, corresponding to large complex structure (B-model) or large volume (A-model) limits. This idea is made clear in Seidel's work [Sei02], where the Fukaya category of a projective variety is seen as a deformation of the Fukaya category of an affine piece.

4.6. Open-closed Gromov-Witten invariants. In future work I plan to consider the open-closed version of these constructions. In a similar way to the closed case, Zwiebach [Zwi98] has constructed a Batalin-Vilkovisky algebra from the moduli spaces of Riemann surfaces with open and closed boundary. Again, there is a unique up to homotopy solution of the quantum master equation in this BV algebra satisfying certain properties, which plays the role of the fundamental chain in these open-closed moduli spaces. This corresponds to the fundamental chain of the moduli space of surfaces with open-closed markings constructed by Liu [Liu02], which is an orbifold with corners. For a Calabi-Yau A∞ category, these fundamental chains give operators between spaces of morphisms in the category and the Hochschild complex. This structure is a kind of quantisation of the A∞ structure.
For the A model, these operators should correspond to "counting" surfaces with Lagrangian boundary conditions and with marked points constrained to lie in certain cycles.

Barannikov [Bar99, Bar00] has previously constructed the genus 0 B model potential. His construction works for a Calabi-Yau A∞ category satisfying the same conditions as are used here. His idea is that the genus zero potential is encoded in a flat connection on the periodic cyclic chain complex on the moduli space of Calabi-Yau A∞ categories. The periodic cyclic chain complex is quasi-isomorphic to what we have been calling V^{Tate}. For each point of the moduli space we have a subspace V^{hS¹}, which corresponds to negative cyclic chains. This subspace satisfies a Griffiths transversality condition, giving what Barannikov calls a semi-infinite variation of Hodge structure. After choosing a polarisation, Barannikov shows how to construct a Frobenius manifold from such a semi-infinite variation of Hodge structure.

A formulation of these ideas closer to what we are doing here has been given by Givental [Giv01, Giv04]. Fix a reference Calabi-Yau A∞ category C. Each nearby category C′ has a subspace tV^{hS¹}(C′) ⊂ V^{Tate}(C′). We can translate these to subspaces tV^{hS¹}(C′) ⊂ V^{Tate}(C) using the flat connection on V^{Tate}. Here they sweep out a Lagrangian cone. If we choose a polarisation of V^{Tate}(C), then the cone is the graph of a function on the positive part, which Givental shows satisfies the equations of a genus 0 potential. Givental's twisted loop group acts by change of polarisation.

Our construction yields an ideal in the Weyl algebra for V^{Tate}. The semi-classical limit of this is an ideal in the symmetric algebra of V^{Tate}, which cuts out a Lagrangian submanifold in the dual of V^{Tate}, which is quasi-isomorphic to a completion of V^{Tate}. After taking account of the moduli spaces of curves with (g, n) = (0, 1) and (0, 2), this should correspond to the cone constructed by Barannikov and Givental.

5. Complexes with a circle action

Most of the rest of the paper consists of going through this construction in more detail. I will give all the definitions of the previous section again, but more carefully. Firstly, we consider again complexes with a circle action.

Let V be a chain complex, which is either Z or Z/2 graded. A circle action on V is by definition an action of the dga H_*(S¹). This consists of a map D : V → V, which is of square zero, commutes with d, and in the Z graded case is of degree 1. There are several natural associated complexes, as explained in [Jon87, HJ87, Lod98]. The first is the homotopy invariants

V^{hS¹} = V[[t]], with differential d + tD.

The Tate complex is

V^{Tate} = V((t)), with differential d + tD.

The homotopy coinvariants is the space

V_{hS¹} = V ⊗ t^{−1}K[t^{−1}],

with differential induced from that on V^{Tate} (a worked form is given below).

Remark: The use of the completed spaces K[[t]] and K((t)) is essential. If we used K[t] and K[t, t^{−1}], then the functors sending V → V^{hS¹} or V^{Tate} would not be exact. In the definition of the various cyclic (co)homology groups it is also essential to complete in this way.

If V is graded, and not just Z/2 graded, then all of these spaces are graded. The grading on V^{hS¹} and V^{Tate} is defined by giving t degree −2. The grading on V_{hS¹} is defined by giving t^{−k} degree 2k − 2.

Let

E_{S¹} = t^{−1}K[t^{−1}] ⊕ ε t^{−1}K[t^{−1}],

with the differential d f = −εtf (discarding any resulting term in εt⁰) and the circle action D f = εf. Here we give t^{−k} degree 2k − 2 as before, and ε degree one. Note that the coinvariants for the H_*(S¹) action on V ⊗ E_{S¹} is V_{hS¹}. That is,

V ⊗_{H_*(S¹)} E_{S¹} ≅ V_{hS¹}.
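Concretely, under the description above the induced differential on the coinvariants acts on monomials as follows (a routine unwinding of the quotient, assuming these definitions):

\[
d\bigl(v\,t^{-k}\bigr) = (dv)\,t^{-k} + (Dv)\,t^{-k+1} \quad (k \ge 2),
\qquad
d\bigl(v\,t^{-1}\bigr) = (dv)\,t^{-1},
\]

since for $k = 1$ the term $(Dv)\,t^{0}$ produced by $d + tD$ lies outside $V_{hS^1}$ and is discarded in the quotient.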
Similar constructions exist when there are n commuting circle actions, i.e. an action of H_*((S¹)^n). We have to be a little careful about completions here. The space we use to define homotopy invariants is K[[t₁, …, t_n]]. That used to define the Tate complex is K((t₁, …, t_n)), which by definition consists of series such that λ_{i₁,…,i_n} = 0 whenever min(i₁, …, i_n) is sufficiently small.

We are also interested in complexes with an action of H_*(S¹ ≀ S_n). If V is such a complex, we let

V_{h(S¹ ≀ S_n)} = (V ⊗ t₁^{−1} ⋯ t_n^{−1} K[t₁^{−1}, …, t_n^{−1}])_{S_n},

where the subscript S_n refers to coinvariants, and the differential is d + Σ_i t_i D_i.

5.1. Relation with the equivariant homology of spaces. Let X be a (reasonable) topological space with an S¹ action, and let ES¹ be a contractible space with a free S¹ action.

Lemma. There is a natural isomorphism H_*(C_*(X)_{hS¹}) ≅ H_*(X ×_{S¹} ES¹). The action of K[[t]] on the right hand side corresponds to cap product with the first Chern class of the S¹ bundle X × ES¹ over X ×_{S¹} ES¹.

Proof. This result is true in much greater generality. S¹ could be replaced by a topological monoid or topological category satisfying some mild topological conditions. The best proof in this generality involves simplicial methods. There is a simplicial space model for the homotopy quotient X//S¹ ≃ X ×_{S¹} ES¹. Singular chains on this give a simplicial object of Comp_K. This should be compared with a simplicial model for the homotopy tensor product C_*(X) ⊗^L_{C_*(S¹)} K.

Instead of going through this argument, I will sketch a geometric construction of the map H_*(C_*(X)_{hS¹}) → H_*(X ×_{S¹} ES¹), which makes it clear that multiplication by t corresponds to cap product with the first Chern class.

Let S^∞ ⊂ C^∞ be the set of vectors of norm 1. It is well known that S^∞ is contractible, and that the natural S¹ action is free. The quotient S^∞/S¹ = CP^∞ is a model for BS¹. The complex C_*(S^∞) has a circle action, coming from the map C_*(S¹) ⊗ C_*(S^∞) → C_*(S^∞) and the fundamental chain in C_*(S¹). We define a map E_{S¹} → C_*(S^∞), compatible with the differential and the circle action, as follows. We send t^{−1} to the point (1, 0, …), considered as a zero chain. Then εt^{−1} must go to the circle (z, 0, …) with |z| = 1, with its canonical anti-clockwise orientation. Since −t^{−2} bounds εt^{−1}, we can send −t^{−2} to a fundamental chain of the cycle (z₁, z₂, 0, 0, …) ∈ S^∞ with z₂ ∈ [0, 1]. This is oriented in a canonical way, as it is isomorphic to the disc |z₁| ≤ 1. We can continue on in this fashion, and find that t^{−k} gets sent to (−1)^{k+1} times a fundamental chain for (z₁, z₂, …, z_k, 0, …) with z_k ∈ [0, 1], and εt^{−k} gets sent to (−1)^{k+1} times a fundamental chain of the 2k − 1 sphere (z₁, …, z_k, 0, …). The sphere is oriented as the boundary of the ball (z₁, …, z_k, 0, …).

Similar remarks hold for cohomology. To check that cap product by the first Chern class corresponds to multiplication by t, all we have to check is the sign. We can do this on BS¹. Note that the line bundle over CP^∞ corresponding to the principal S¹ bundle S^∞ → CP^∞ is O(−1), which makes it clear that multiplication by t is the same as cap product with c₁(O(−1)).
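As a sanity check of the lemma, take X to be a point, so that d = D = 0 on V = K:

\[
H_*\bigl(C_*(\mathrm{pt})_{hS^1}\bigr) \cong t^{-1}K[t^{-1}] \cong H_*(BS^1),
\]

with $t^{-k}$, of degree $2k-2$, matching the even-dimensional cells of $BS^1 = \mathbb{CP}^\infty = \mathrm{pt} \times_{S^1} ES^1$.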
6. The category of annuli and functors from it

For an integer m, let M_g(m) be the moduli space of connected Riemann surfaces of genus g with m boundary components, considered to be outgoing. We allow m = 0. Also we allow "unstable" surfaces; the only restrictions are that g ≥ 0, m ≥ 0. However, we need to treat the cases when g = 0, 1 and m = 0 separately. Since we can't glue anything to surfaces in these spaces, these are essentially placeholders. We declare that M₀(0) and M₁(0) are both a point.

Technically, the spaces M_g(0) are topological stacks. We will always work with the coarse moduli space. This is reasonable in our setting, as ultimately we only care about rational singular chains. We will somewhat loosely use the language of orbispaces. For instance, we will say that X is a principal S¹ orbi-bundle over Y to mean that Y is the coarse moduli space of an orbispace over which X is a principal S¹ bundle.

The boundary components of the surfaces in M are parameterised with the opposite orientation to that induced from the orientation on the surface. That is, if we take the vector field on the boundary associated to the parameterisation, and apply the complex structure J to it, it becomes outward-pointing. On M_g(n)/(S¹)^n we have n principal S¹ orbi-bundles. The associated complex line bundles correspond to the tangent lines at the marked points of a punctured curve.

Define M(m) like M_g(m) except that the surfaces need not be connected. As before, we consider any two complex structures on a torus or sphere with no boundaries to be the same. Let M^s(m) ⊂ M(m) be the subspace of stable surfaces, that is those surfaces each of whose connected components have negative Euler characteristic. Sometimes it will be more convenient to use M^s, and sometimes M. The main advantage of using M is that it includes the operation of forgetting a boundary component; if we just used M^s we would lose information. On the other hand, using M^s makes notation much simpler when we compare the solution of the master equation with the fundamental class of Deligne-Mumford space.

Now we define a topological symmetric monoidal category A, which is a subcategory of S. The objects of A are the non-negative integers, and the morphisms are the morphisms in S(n, m) given by Riemann surfaces each of whose connected components is an annulus. As each such annulus has at least one incoming boundary component, this is a subcategory. Note that M^s(m) ⊂ M(m) is a sub-functor.

If we used actual annuli, the categories S and A would not be unital. So let us modify the definition a little, to something weakly equivalent. In S, instead of annuli we now use "infinitely thin" annuli, i.e. circles. The parameterisations on each "boundary" of the infinitely thin annulus are then required to differ from each other only by a rotation and possibly (if both boundaries are incoming) a change of orientation. With this definition, A is a unital category, and A(1, 1) = S¹. Also A(n, n) = S¹ ≀ S_n as a group. This modification doesn't change anything essential, as in [Cos04] I showed that quasi-isomorphic symmetric monoidal categories have homotopy equivalent categories of functors.

Let C_* : Top → Comp_K denote the functor of normalised singular simplicial chains with K coefficients. Normalised means we quotient out by degenerate simplices. This is a symmetric monoidal functor: the monoidal structure comes from the shuffle product C_*(X) ⊗ C_*(Y) → C_*(X × Y). Therefore we get a dg symmetric monoidal category C_*(A). The homology category H_*(A) is generated by two morphisms:

(1) The fundamental class of the circle A(1, 1). Call this D ∈ H₁(A(1, 1)).

(2) The class of a point in A(2, 0), the moduli space of annuli with two incoming boundaries. Call this G ∈ H₀(A(2, 0)).

The relations express that D squares to zero and that G is symmetric and compatible with the circle actions on the two inputs (a plausible explicit list is spelled out below).

Proof. It suffices to write down the map on the generating morphisms of H_*(A). The morphism D goes to a fundamental chain in C₁(A(1, 1)). The morphism G goes to the chain associated to an annulus in C₀(A(2, 0)). Pick the annulus where both parameterisations start at the same point.
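The relations are only described in words above; a minimal plausible list, written here as our reading of the structure rather than quoted verbatim from the source, is:

\[
D^2 = 0, \qquad G\,\sigma = G \quad (\sigma \in S_2 \text{ the flip}),
\qquad G\,(D \otimes 1) = G\,(1 \otimes D),
\]

the last relation (up to sign conventions) reflecting that rotating either parameterised boundary of the connecting annulus sweeps out the same one-parameter family of annuli.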
It is easy to check the relations hold. It is crucial here that we use the normalised singular simplicial chain complex.

Thus, instead of considering functors from C_*(A), we will always use functors from H_*(A). We can describe such functors explicitly, using the generators and relations description for H_*(A). A functor F : H_*(A) → Comp^{Z/2}_K is given by:

(1) For each n ≥ 0, a Z/2 graded complex F(n), with maps F(n) ⊗ F(m) → F(n + m), and S_n actions on F(n).

(2) Odd operators D_i : F(n) → F(n), for 1 ≤ i ≤ n, coming from D.

(3) Gluing maps G_{ij} : F(n) → F(n − 2), coming from G.

This data satisfies some straightforward axioms, most of which simply express the fact that the operators G_{ij}, D_i interact well with the symmetric group actions and tensor products. Some other axioms are that each D_i is of square zero and that the gluing maps are compatible with the circle actions on the glued boundaries.

A particularly simple case happens when the functor is split, that is, the maps F(1)^⊗n → F(n) are isomorphisms, for all n (including n = 0, when we find F(0) = K). In this case, let V = F(1). Such a functor amounts to:

(1) The complex V.

(2) An odd operator D : V → V which is of square zero and commutes with d, i.e. a circle action.

(3) An even symmetric pairing ⟨ , ⟩ on V, such that d is skew self-adjoint and D is self-adjoint.

In the Z graded case, the operator D is of degree one.

We now describe the semi-direct product functor F ⋉ M, whose value on n is a sum of direct summands F(I) ⊗ C′_*(M(J)) over decompositions {1, …, n} = I ⊔ J. On the direct summand F(I) ⊗ C′_*(M(J)), the circle action D_i is the corresponding one on F(I) or C′_*(M(J)), depending on whether i ∈ I or i ∈ J. We will define the gluing maps also on the direct summand F(I) ⊗ C′_*(M(J)). If i, j ∈ I, or i, j ∈ J, then G_{ij} is the gluing map from F or C′_*(M). If i ∈ I and j ∈ J, then the gluing map is more difficult to construct. This uses the action of C_*(S) on F. It is enough to define this map on connected surfaces, C_*(M^{conn}(J)). Then we can turn the j boundary around to give an element in C_*(S(j, J \ {j})) which acts on F. More formally, by the definition of C′_*(M(J)), this complex decomposes according to the partition of J into connected components, say J = J₁ ⊔ ⋯ ⊔ J_k. Now let i ∈ I and j ∈ J₁; we will define the gluing map on one of the direct factors of this decomposition. For simplicity, we will assume k = 1. Then turning the j boundary around identifies the relevant factor with chains on a space of surfaces with one incoming boundary, and the TCFT structure gives a map from its tensor product with F(I) to F((I \ {i}) ⊔ (J \ {j})). Composing these maps gives the required operator G_{ij}.

7. Weyl algebras and Fock spaces

Let F : H_*(A) → Comp^{Z/2}_K be a symmetric monoidal functor. We will construct an associated Weyl algebra and Fock space.

7.1. The construction in a simplified case. Let us first consider the simplified case when F is split. Let V = F(1). Then V has a circle action, and we have the auxiliary equivariant chain complexes V^{hS¹}, V_{hS¹} and V^{Tate}. It is not difficult to check that this data defines a functor H_*(A) → Comp^{Z/2}_K.

Define an antisymmetric form Ω on V^{Tate} by

Ω(vf(t), wg(t)) = ⟨v, w⟩ Res_{t=0} f(−t)g(t) dt.

This is the same as the form used in the work of Givental and Coates [CG01, Giv01, Giv04]. In the case when the inner product on V is non-degenerate, this is symplectic. Note that Ω is compatible with the differential, that is,

Ω(d(vf(t)), wg(t)) + (−1)^{|v|} Ω(vf(t), d(wg(t))) = 0.

This follows from the fact that on V, d is skew self-adjoint and D is self-adjoint with respect to the pairing ⟨ , ⟩.

Thus, we have an associated Weyl algebra W(V^{Tate}). V^{Tate} is polarised, as V^{Tate} = V^{hS¹} ⊕ V_{hS¹}. The differential on V^{Tate} preserves V^{hS¹}, but not in general V_{hS¹}. Let F(V^{Tate}) be the associated Fock space. This is defined to be the quotient of W(V^{Tate}) by the left ideal generated by V^{hS¹}. We can identify

F(V^{Tate}) = Sym* V_{hS¹},

as we can consider Sym* V_{hS¹} as a subalgebra of W(V^{Tate}), using the splitting of the map V^{Tate} → V_{hS¹}; the action of this subalgebra on the element 1 ∈ F(V^{Tate}) gives the isomorphism. This is not an isomorphism of complexes, however, because V_{hS¹} ⊂ V^{Tate} is not a subcomplex. Let us write the natural differential on F(V^{Tate}) as d̂.
This is the differential obtained by realising F(V^{Tate}) as a quotient of W(V^{Tate}). It is an order 2 differential operator. Let d denote the usual differential on Sym* V_{hS¹}, which we identify with F(V^{Tate}). Then we can write

d̂ = d + △,

where △ is an odd order 2 differential operator on F(V^{Tate}), and satisfies △² = [d, △] = 0.

We can describe △ explicitly. It is an order 2 differential operator on Sym* V_{hS¹}. Such an operator is uniquely characterised by its behaviour on Sym^≤2 V_{hS¹}. △ is zero on Sym^≤1 V_{hS¹}, and on (v₁f₁(t₁))(v₂f₂(t₂)) ∈ Sym² V_{hS¹} it is given by contracting with the pairing and the circle action, as in the monomial computation of section 4.4. This has been proved in lemma 4.4.1.

7.2. The construction in general. We want to mimic this construction in general. Let F : H_*(A) → Comp^{Z/2}_K be any symmetric monoidal functor. On F(n) there are n commuting circle actions, that is, operators D_i for 1 ≤ i ≤ n, which (super)commute and square to zero. Thus we can form the various auxiliary complexes, for instance

F^{Tate}(n) = F(n) ⊗ K((t₁, …, t_n)),

and similarly F^{hS¹}(n) and F_{hS¹}(n). Let G_{ij} : F(n) → F(n − 2) be the gluing map, coming from the class of a point in H₀(A(2, 0)). For each 1 ≤ i < n, let σ_i ∈ S_n be the transposition of i with i + 1. Recall S_n acts on F(n); this action extends to each of the auxiliary complexes mentioned above. There are tensor product maps F^{Tate}(n) ⊗ F^{Tate}(m) → F^{Tate}(n + m), and similarly for F^{hS¹} and F_{hS¹}. The space ⊕_n F^{Tate}(n) is an associative algebra, with product coming from these tensor product maps.

We define the Weyl algebra W(F) to be the quotient of ⊕_n F^{Tate}(n) by the two-sided ideal generated by the relation which, for each x ∈ F^{Tate}(n), identifies the supercommutator of the last two tensor factors with their contraction by the gluing map and the residue pairing; in the split case this is the relation [u, u′] = Ω(u, u′). The Fock space F(F) is defined to be the quotient of W(F) by the left ideal spanned by those elements x ∈ F^{Tate}(n) which contain no negative powers of t_n.

We can consider ⊕_n F_{hS¹}(n)_{S_n} as a subalgebra of W(F), using the standard splitting of the map F^{Tate}(n) → F_{hS¹}(n). Here the subscript S_n refers to coinvariants, so that ⊕_n F_{hS¹}(n)_{S_n} is a commutative algebra. The action of ⊕_n F_{hS¹}(n)_{S_n} on the vector 1 ∈ F(F) generates F(F), and induces an isomorphism

F(F) ≅ ⊕_n F_{hS¹}(n)_{S_n}.

As before, this is not an isomorphism of complexes. We will refer to the natural differential on the left hand side as d̂, and that on the right hand side as d. The differential d̂ is an order 2 differential operator, whereas d is a derivation. It is easy to see that, as before, △ := d̂ − d is an order two operator which satisfies △² = [d, △] = 0.

As before, we can write this operator down explicitly. For 1 ≤ i < j ≤ n, define a map △_ij : F_{hS¹}(n) → F_{hS¹}(n − 2) by applying the twisted gluing operator G_{ij} D_i to F(n) and taking the double residue

Res_{z=0} Res_{w=0} f(t₁, …, t_{i−1}, z, t_i, …, t_{j−2}, w, t_{j−1}, …, t_{n−2}) dz dw

of the coefficient function. (In this expression z is in the i'th position and w is in the j'th position, and the remaining places are filled with t₁, …, t_{n−2} in increasing order.) Then Σ_{i<j} △_ij commutes with the symmetric group actions, and so descends to give a map

△ : ⊕_n F_{hS¹}(n)_{S_n} → ⊕_n F_{hS¹}(n)_{S_n}.

It is now not difficult to check that d̂ = d + △. The proof is the same as that of lemma 4.4.1, which proves this in the special case that F is split.

We will need these constructions when F is the functor C′_*(M) associated to moduli spaces of curves. In that case we use the notation W(M), F(M). Note that if F → G is a natural transformation of functors H_*(A) → Comp^{Z/2}_K, there is an associated homomorphism of Weyl algebras W(F) → W(G), and a map F(F) → F(G) of W(F) modules.
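On monomials the double residue is easy to evaluate; under the description of △_ij above (a sketch, with signs suppressed), one finds

\[
\triangle_{ij}\bigl(x \otimes t_1^{-a_1}\cdots t_n^{-a_n}\bigr)
= \pm\,(G_{ij} D_i\, x) \otimes \prod_{k \ne i,j} t_k^{-a_k}
\quad\text{if } a_i = a_j = 1,
\]

and zero otherwise, since $\operatorname{Res}_{z=0}\operatorname{Res}_{w=0}$ picks out exactly the coefficient of $z^{-1}w^{-1}$. This matches the split case, where $\triangle$ vanished unless both factors contained $t^{-1}$.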
7.3. Geometric interpretation of the differential on F(M). In calculating the differential on F(M) we used certain operators built from the circle actions and the gluing maps. These operators have a geometric interpretation, which gives a geometric interpretation to the order two part △ of the differential d̂ = d + △. Let

Y = M(m + 2)/((S^1)^m × S^1),

where we quotient by the circle actions on all boundaries except the i and j ones, and by the anti-diagonal circle action from the i, j boundaries.

Proof. This consists of unravelling the definition. For simplicity we will consider the case m = 0. Recall that ES^1 is a certain contractible complex with a circle action. This is a map of dg H_*(S^1 ≀ S_2)-modules. The map has the following interpretation: take an element of C_*(M(2))_{h(S^1)^2}, lift it (in any way) to C_*(M(2)) ⊗ (ES^1)^{⊗2}, apply D_1, then apply the gluing map there. This makes the result clear.

8. Batalin-Vilkovisky algebras

Definition 8.0.2. A Batalin-Vilkovisky (BV) algebra is a differential Z/2 graded supercommutative algebra B, together with an odd operator △ : B → B, which is an order 2 differential operator and satisfies △² = [d, △] = 0.

For each functor F : H_*(A) → Comp^{Z/2}_K, the Fock space F(F) constructed in the previous section is a Batalin-Vilkovisky algebra. If B is a BV algebra, then it acquires an odd Poisson structure. The bracket is defined by

{a, b} = △(ab) − (△a)·b − (−1)^{|a|} a·(△b).

This satisfies the Jacobi identity; see [Get94]. There it is also shown that d is a derivation of this bracket, up to the usual Koszul sign. Therefore B becomes a differential Z/2 graded Lie algebra, with this Lie bracket and differential d.

Let K[t, ε] be the commutative dg algebra generated by t and ε, where t is of degree 0 and ε is of degree −1, with differential ε d/dt. Let g be a differential graded Lie algebra, with differential of degree −1. A solution of the Maurer-Cartan equation in g is an element S ∈ g_{−1} satisfying the Maurer-Cartan equation; if g is only Z/2 graded, S must simply be odd. A homotopy between solutions S_0, S_1 of the Maurer-Cartan equation in g is an element S(t, ε) ∈ g[t, ε] which satisfies the Maurer-Cartan equation and such that S(0, 0) = S_0 and S(1, 0) = S_1. Note that we can write S(t, ε) = S_a(t) + ε S_b(t). The Maurer-Cartan equation for S implies that S_a satisfies the Maurer-Cartan equation, and that the t-derivative of S_a is the infinitesimal gauge transformation determined by S_b, so that the path in g_{−1} given by S_a(t) is tangent to the action of g_0 on solutions of Maurer-Cartan in g_{−1}. Let MC(g) be the set of Maurer-Cartan elements in g and let π_0(MC(g)) be the quotient of this set by the equivalence relation generated by homotopy. These definitions work in the Z/2 graded case, and also for odd Lie algebras, with obvious changes. If B is a BV algebra, let BV(B) be the set of solutions of the master equation in B, that is, the set of solutions of the Maurer-Cartan equation in B considered as an odd dg Lie algebra. Let π_0 BV(B) be the set of homotopy classes of solutions of the master equation, defined as above. It follows that a homotopy equivalence g → g′ (i.e. a map which has an inverse up to homotopy) induces an isomorphism on π_0 MC. In nice cases, quasi-isomorphisms of dg Lie algebras also induce isomorphisms on the set of homotopy classes of solutions of the Maurer-Cartan equation. Suppose g is a dg Lie algebra with a filtration g = F^1 g ⊃ F^2 g ⊃ ..., such that g is complete with respect to the filtration, and such that [F^i g, F^j g] ⊂ F^{i+j} g. In particular g/F^2 g is Abelian and each g/F^i g is nilpotent. Then we say g is a filtered pro-nilpotent Lie algebra.
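In formulas (with signs fixed by the standard conventions, which I am assuming agree with the ones in force here), the definitions just given read:

dS + \tfrac{1}{2}[S, S] = 0, \qquad S(t,\varepsilon) = S_a(t) + \varepsilon S_b(t), \qquad \frac{d}{dt} S_a(t) = d\,S_b(t) + [S_a(t),\, S_b(t)].

The first is the Maurer-Cartan equation, and the third is the statement that the path S_a(t) is tangent to the gauge action, with S_b(t) the generating gauge parameter.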
Lemma 8.1.1. Let g, g′ be filtered pro-nilpotent dg Lie algebras, and let f : g → g′ be a filtration preserving map. Suppose the map Gr g → Gr g′ induces an isomorphism on H_i for i = 0, −1, −2. Then the map π_0 MC(g) → π_0 MC(g′) is an isomorphism.

Proof. This result seems to be well known. For instance it is essentially theorem 5.1 of [HL04], or theorem 2.1 of [Get02]. For completeness I will sketch a proof. First we will show that we can replace f by a surjective map. Let g″ ⊂ g ⊕ g′[t, ε] be the subset of elements (γ, α(t, ε)) such that f(γ) = α(0, 0). This is an analog of the Serre construction which replaces any map of topological spaces by a fibration. It is easy to see, by mimicking the corresponding topological argument, that the natural maps g ↪ g″ and g″ → g are inverse homotopy equivalences. The key point in the topological argument is to use the multiplication map [0, 1] × [0, 1] → [0, 1]. Here we instead use a bi-algebra structure on K[t, ε]. The coproduct is defined on the generators by t ↦ t ⊗ t, ε ↦ t ⊗ ε + ε ⊗ t. It is easy to check that this coproduct is compatible with the differential. It remains to show that the map π_0 MC(g″) → π_0 MC(g′) is an isomorphism. There are three things to prove; the third is:

(3) The map MC(g″) → MC(g′) has the path lifting property.

All of these are proved by the same kind of inductive argument. A similar result holds in the Z/2 graded case, under the assumption that Gr f is a quasi-isomorphism.

9. The master equation and the fundamental chain

We have constructed the Sen-Zwiebach Batalin-Vilkovisky algebra F(M) associated to moduli spaces. Now we will construct in it a solution S of the master equation, which plays the role of the fundamental class. There is a natural inclusion map of the subspace corresponding to connected surfaces, where M_g(n) ⊂ M(n) is the subspace of connected surfaces of genus g. Denote by F_{g,n}(M) this subspace. F(M) is freely generated as a commutative algebra by the subspaces F_{g,n}(M).

Proposition 9.0.2. For each g, n with 2g − 2 + n > 0, there exists an element S_{g,n} ∈ F_{g,n}(M) of degree 6g − 6 + 2n, with the following properties.

(1) S_{0,3} is the class of a point.
(2) The generating function S formed from the S_{g,n} satisfies the Batalin-Vilkovisky quantum master equation.

Further, such an S is unique up to homotopy through such elements. A homotopy of such elements is a solution of the master equation in F(M) ⊗ K[t, ε], satisfying the analogous conditions. Here t has degree 0 and ε has degree −1, and dt = ε.

Proof. Let M_{g,n} be the usual moduli space of smooth algebraic curves of genus g with n marked points. This is rationally homotopy equivalent to M_g(n)/(S^1)^n. We will need the following bound on the homological dimension of M_{g,n}/S_n:

(9.0.1) H_i(M_{g,n}/S_n) = 0 for i ≥ 6g − 7 + 2n, if (g, n) ≠ (0, 3).

To see this, observe that the compactification M̄_{g,n} is simply connected as an orbifold, because the mapping class group is generated by Dehn twists, and compactifying M_{g,n} has the effect of trivialising the elements of π_1(M_{g,n}) coming from Dehn twists. In particular H^1(M̄_{g,n}) = 0. It follows that H^1(M̄_{g,n}/S_n) = 0, as we are using coefficients in K ⊃ Q. The boundary of M̄_{g,n}/S_n is always connected. (When (g, n) ≠ (0, 4), the boundary of M̄_{g,n} itself is connected.) Poincaré duality and the cohomology long exact sequence for the pair (M̄_{g,n}/S_n, ∂M̄_{g,n}/S_n) give the required bound. Alternatively, we could use the bounds on the homological dimension of M_{g,n} obtained by Harer [Har86]. This gives the result except when (g, n) = (0, 4); in that case, it is easy to see that the coinvariants of the S_4 action on H^1(M_{0,4}) = K² are trivial.

Now define a dg Lie algebra g. The space g_i is the set of sums of elements S_{g,n} such that S_{g,n} ∈ F_{g,n}(M), with S_{g,n} of degree 6g − 6 + 2n + 1 + i. The bracket [ , ]_g is { , } and the differential is d_g = d̂.
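Unwinding the definitions, a Maurer-Cartan element of g is exactly a solution of the quantum master equation. Concretely (the λ-weighting by 2g − 2 + n is my assumption, chosen to be consistent with the filtration used in the next step; what matters is only the shape of the equation):

S = \sum_{2g-2+n>0} \lambda^{\,2g-2+n}\, S_{g,n}, \qquad \hat d\, S + \tfrac12\{S,S\} = 0 \iff dS + \triangle S + \tfrac12\{S,S\} = 0.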
The set of homotopy equivalence classes of solutions of the Maurer-Cartan equation in g is the same as the set of homotopy equivalence classes of solutions S of the master equation in F(M) satisfying S_{g,n} ∈ F_{g,n}(M), with S_{g,n} of degree 6g − 6 + 2n. Filter g by saying F^k g is the set of those S such that S_{g,n} is zero for 2g − 2 + n < k. Then this is a descending filtration by dgla ideals, and g is complete with respect to it. The bounds (9.0.1) on the homological dimensions of moduli spaces, together with the fact that M_{0,3} is a point, tell us that Gr g → Gr(g/F² g) induces an isomorphism on H_i for i = 0, −1, −2. Therefore the map g → g/F² g satisfies the conditions of lemma 8.1.1. The result follows immediately.

Note that S comes from the stable moduli spaces. Recall that M^s(m), the space of surfaces each of whose components has negative Euler characteristic, is a sub-functor of M(m). Therefore we have an associated BV algebra F(M^s) and an injective map of BV algebras F(M^s) → F(M); S lies in F(M^s). We want to compare this solution of the master equation with the usual fundamental class of Deligne-Mumford space. Let M̄_{g,n} be the space of stable nodal curves of genus g with n marked smooth points. Let M̄(n) be the moduli space of possibly disconnected stable nodal curves with n marked points. Define F(M̄) to be the resulting algebra ⊕_n C_*(M̄(n))_{S_n}. This forms a Batalin-Vilkovisky algebra, with BV operator △ = 0, so that d̂ = d.

Remark: In fact, we could use M instead of M^s, but this would involve some messing around with unstable surfaces.

We will construct this comparison using a nice model for the spaces M^s(m), introduced by Kimura, Stasheff and Voronov [KSV95]. Let N_g(n) be the moduli space of algebraic curves in M̄_{g,n}, together with, at each marked point, a ray in the tangent space, and, at each node, a ray in the tensor product of the tangent spaces at each side. N_g(n) is a torus bundle over a certain real blow-up of M̄_{g,n}, and is an orbifold with corners, whose boundary consists of the locus of singular curves. Let N(n) be defined in the same fashion, except using possibly disconnected curves. The space N(n) has an action of S^1 ≀ S_n. Also there are gluing maps G_ij : N(n) → N(n − 2), for each 1 ≤ i < j ≤ n. These satisfy various compatibility conditions, which means that sending n ↦ N(n) defines a symmetric monoidal functor. Passing to chain level, we find a functor C_*(N) : H_*(A) → Comp_K. Therefore, we have an associated BV algebra F(N). Proposition 9.0.2 applies without change to F(N), giving a solution of the master equation, also denoted by S, in F(N). We want to compare this with the solution in F(M^s). First we need to compare the two BV algebras. The BV algebra is associated functorially to a functor H_*(A) → Comp_K. A natural transformation F → G between such functors is a quasi-isomorphism if it induces a quasi-isomorphism on the chain complexes F(n) → G(n) for all n ∈ Z_{≥0} = Ob H_*(A). Two functors are quasi-isomorphic if they can be connected by a chain of quasi-isomorphisms. If F, G are quasi-isomorphic functors, then the associated BV algebras F(F), F(G) are quasi-isomorphic on d homology (but not necessarily on d̂ homology).

Sketch of proof. We will show the corresponding result at the level of the functors N, M^s : A → Top. That is, we will show these functors are rationally weakly equivalent. A rational weak equivalence is a natural transformation that induces an isomorphism on the rational homology of the associated spaces. We need to construct a chain of rational weak equivalences between M^s and N in the category of functors A → Top.
Let me sketch such a construction. Firstly, consider a moduli space P like N, except that instead of a ray in the tangent space, the surfaces now have an embedded parameterised disc at each marked point, together with a number t ∈ [0, 1/2]. We need to define on this space the structures above. The circle actions are given by rotating the discs. We need to say how to glue two marked points together. If these have numbers t, t′ ∈ [0, 1/2] where 0 < t ≤ t′, we glue together the circles of radius t around the marked points. If t = 0, we glue the marked points together to get a node, with a ray in the tensor product of the tangent lines for each side. We get a chain of weak equivalences as follows. Let P_t be the part of P where all marked points have the same label t ∈ [0, 1/2]. The inclusion P_t ↪ P is a weak equivalence. Also there is a weak equivalence P_0 → N, and a weak equivalence M^s → P_{1/2}.

By definition, F(N) = ⊕_n C_*(N(n))_{hS^1 ≀ S_n}, so that we have an algebra homomorphism F(N) → F(M̄). It is clear that this intertwines the ordinary differential d on F(N) with that on F(M̄).

Lemma 9.0.6. The map π : F(N) → F(M̄) intertwines the quantised differential d̂ on F(N) with the usual differential d on F(M̄).

Proof. Recall we can write d̂ = d + △, where △ is an order 2 operator on F(N). It suffices to show that π(△(x)) = 0 for all x ∈ F(N). Recall the explicit description of △ at the end of section 7. There is a gluing map G_ij : M̄(n) → M̄(n − 2), for each 1 ≤ i < j ≤ n, and the corresponding diagram relating the gluing maps on N(n) and on M̄(n) commutes. Also, the analogous diagram commutes, where p is the projection map and a_i is the action map for the S^1 action on M̃(n) which rotates the ray at the i'th marked point. To show π(△(x)) = 0, it therefore suffices to show the following. Let D_i be the degree one operator on C_*(M̃(n)) coming from the i'th circle action. We need to show that, for all y ∈ C_*(M̃(n)), G_ij(D_i y) = 0. That this is a sufficient condition follows from the explicit description of △ at the end of section 7; and the condition itself follows from the commutativity of the diagrams above.

As the BV operator on F(M̄) vanishes, the differential is a derivation, and the Poisson bracket associated to the BV structure is trivial. Therefore, the image of S in F(M̄) is closed, and we can write it as the sum of the images π(S_{g,n}), each of which is closed.

This corresponds to a map

△ : H_*(X(n), ∂X(n)) → H_{*+1}(∂X(n − 2), ∂²X(n − 2)).

This map has the following geometric description. The space ∂X(n) \ ∂²X(n) is the moduli space of curves C ∈ X(n) with a single node. Equivalently, it is the moduli space of curves C ∈ X(n + 2) with a choice of two unordered points, and a ray in the tensor product of the tangent spaces at these points. Thus there is a map

φ : ∂X(n) \ ∂²X(n) → X(n + 2) \ ∂X(n + 2).

The space X(n) has a canonical Q orientation, as it is a complex orbifold. This induces an orientation on the boundary ∂X(n). We can use Poincaré duality to identify H_*(X(n), ∂X(n)) with H^*(X(n) \ ∂X(n)), and H_*(∂X(n), ∂²X(n)) with H^*(∂X(n) \ ∂²X(n)). Then the required map △ is minus the pull-back map φ^*. There is also a natural boundary map d : H_*(X(n), ∂X(n)) → H_{*−1}(∂X(n), ∂²X(n)). The operator d + △ makes H_*(Gr^{≤1} F(N)) into a BV algebra. The algebra structure is given by identifying it with H_*(Gr F(N)/Gr^{≥2} F(N)). This BV algebra structure gives us a Lie bracket, and one checks that

{[X_{g1,n1}], [X_{g2,n2}]} = 0.

It is clear that [X_{0,3}] = [S_{0,3}]; in fact we chose S_{0,3} to satisfy this. It follows that [X_{g,n}] = [S_{g,n}] for all g and n.
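For reference, here is a condensed (and slightly interpretive) restatement of the mechanism just described, with [X_{g,n}] the fundamental classes:

\triangle = -\,\varphi^*\ \text{(under Poincaré duality)},\qquad (d+\triangle)[X_{g,n}] = 0,\qquad \{[X_{g_1,n_1}],\,[X_{g_2,n_2}]\} = 0,

so the classes [X_{g,n}] satisfy the same master equation as the classes [S_{g,n}] in H_*(Gr^{≤1} F(N)), and agreement in the initial case (0, 3) propagates to all (g, n) by the uniqueness argument of proposition 9.0.2.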
Note that the space C_*(M̃(n))_{h(S^1)^n} has an action of K[t_1, ..., t_n]. On homology, H_*(C_*(M̃(n))_{h(S^1)^n}) = H_*(M̄(n)). The action of K[t_1, ..., t_n] is by cap product with minus the ψ classes, i.e. with the first Chern classes of the tangent lines at the marked points. This is because the oriented torus bundle M̃(n) → M̄(n) is the one associated to the tautological tangent line bundles. The space C_*(N)_{h(S^1)^n} also carries an action of K[t_1, ..., t_n], and the map C_*(N)_{h(S^1)^n} → C_*(M̃(n))_{h(S^1)^n} is a map of K[t_1, ..., t_n]-modules. Therefore we can see not just the fundamental class, but also its cap products with ψ classes.

10. The Gromov-Witten type invariants associated to a TCFT

In this section, for each TCFT we will construct a left ideal in the associated Weyl algebra, together with an element in it. This encodes the Gromov-Witten potential of the TCFT. Recall that so far, for each TCFT F, we have constructed a Weyl algebra W(F) and a Fock space F(F). There is also a Weyl algebra and a Fock space, W(M) and F(M), associated to moduli space. As I mentioned earlier, in an ideal world we might hope for a natural comparison between them. Of course, the space F(F_M) is far too big. As an algebra it is isomorphic to F(M) ⊗ F(F). The differential does not respect this decomposition, nor does the action of W(F). Consider the commutative subalgebra of elements which contain only negative powers of the t_i; this subalgebra is not preserved by the differential. The submodule F_D(F) is then given explicitly in terms of this subalgebra. The only thing to check is that this space is preserved by the action of W(F). If we have an element X ∈ F(1)[[t]] ⊂ W(F), then it acts by a derivation of F(F_M). This derivation takes elements of F_{g,n}(M) into F(F); therefore the subspace is preserved. If the map F(1)^{⊗n} → F(n) were an isomorphism, and not just a quasi-isomorphism, this would be enough. In the general case, a little more needs to be checked, but it is not difficult.

11. Choice of polarisation and Givental's group

To extract a more familiar looking potential, we need to pass to homology. In this section we will make the following two assumptions on our TCFT:

(1) The map D : F(1) → F(1) is zero on homology. When our TCFT comes from a Calabi-Yau A_∞ category as in [Cos04], this is equivalent to the degeneration of the spectral sequence from Hochschild homology tensored with K((t)) to periodic cyclic homology. This spectral sequence is the non-commutative analog of Hodge to de Rham.

(2) The inner product on H_*(F(1)) is non-degenerate. This implies that H_*(F(1)_Tate) is symplectic, i.e. the natural anti-symmetric pairing is non-degenerate. Also the map H_*(F(1)_{hS^1}) → H_*(F(1)_Tate) is injective, and the image is a Lagrangian subspace.

In this section we will write H = H_*(F(1)), and 𝓗 = H_*(F(1)_Tate) for the symplectic space, with 𝓗_+ ⊂ 𝓗 the Lagrangian subspace just described.

Proof. The uniqueness part is well known. We will show existence. Modulo λ, this follows immediately from the definition of F_D(F). The point is that the potential D = exp(S) is 1 modulo λ.

In order to get a more familiar kind of potential, we need to choose some extra data. The symplectic vector space 𝓗 has various natural structures, namely the isomorphisms given by multiplication by t and t^{−1}, and the Lagrangian subspace 𝓗_+. We will look for polarisations of 𝓗 compatible with these structures. The space 𝓗 is naturally filtered, by the subspaces t^k 𝓗_+. The associated graded is canonically isomorphic to H((t)), as a K((t)) module.
The corresponding symplectic form on H((t)) is given by the same residue formula as before,

Ω(f(t), g(t)) = Res_{t=0} ⟨f(−t), g(t)⟩ dt,

where ⟨ , ⟩ refers to the pairing on H = H_*(V) coming from that on V. The polarisations we consider are given by symplectomorphisms φ(t) = 1 + φ_1 t + φ_2 t² + ... of H((t)) satisfying φ(−t)^* φ(t) = 1, where φ^* is obtained by replacing each φ_i by its adjoint. This means that φ(t) is an upper-triangular element in Givental's twisted loop group. Any such operator can be quantised to act on the Fock space. Indeed, any such φ admits a logarithm, which is an infinitesimal symplectomorphism of H((t)) commuting with the t action. Such an infinitesimal symplectomorphism, A say, admits a quantisation, in a standard fashion [Giv01], to an element Â in the Weyl algebra W(H((t))). Â is characterised up to an additive scalar by the condition that the inner derivation [Â, −] of W(H((t))), when restricted to H((t)) ⊂ W(H((t))), is A. Then the quantised operator φ̂ acts on the Fock space by φ̂ = exp(Â) with A = log φ, where Â acts on the Fock space Sym^* t^{−1}H[t^{−1}] in the standard way, as an element of the Weyl algebra. The exponential makes sense because Â is a locally nilpotent operator on Sym^* t^{−1}H[t^{−1}]. The symplectomorphism φ also induces an automorphism of the Weyl algebra W(H((t))); we can twist the action of W(H((t))) on Sym^* t^{−1}H[t^{−1}] via this automorphism.

Lemma 11.0.13. φ̂ is the unique (up to scale) isomorphism of Sym^* t^{−1}H[t^{−1}] such that, for w ∈ W(H((t))) and x ∈ Sym^* t^{−1}H[t^{−1}],

φ̂(w · x) = φ(w) · φ̂(x).

Therefore, if we change the isomorphism 𝓗 ≅ H((t)) by a symplectomorphism φ, then the corresponding line D changes by φ̂.

12. Relation with ordinary Gromov-Witten invariants

Suppose our TCFT is equipped with operations from the Deligne-Mumford spaces, and not just the uncompactified spaces. This should happen for the A-model TCFT associated to a compact symplectic manifold. Then the ancestral potential constructed above, for a certain choice of polarisation, coincides with the actual ancestral potential, coming from the fundamental class and ψ classes in Deligne-Mumford space. In fact, this is immediate from the results of section 9, but I will briefly go through some of the details anyway. For simplicity we will suppose that the TCFT is also equipped with operations from the space of curves with no marked points, although this is not necessary. Also we will assume, to keep the notation simple, that our TCFT is split. So, suppose we are given a complex V, with an inner product, and maps φ : C_*(M̄(n)) → V^{⊗n} such that the gluing maps between the spaces M̄(n) correspond to the inner product on V. Give V the trivial circle action, D = 0. Then V defines a functor, say F : H_*(A) → Comp_K. Recall the space M̃(n) is the principal (S^1)^n orbi-bundle over M̄(n) given by curves in M̄(n) with, at each marked point, a ray in the tangent space. The complexes C_*(M̃(n)) define a functor H_*(A) → Comp_K. The complexes C_*(N(n)) do also, and there is a natural transformation between them. There is a natural transformation C_*(M̃) → F, and an associated map of Fock spaces. We have F(M̃) ≃ ⊕_n C_*(M̄(n))_{S_n}, and we have already seen that the image of the solution of the master equation in F(N) goes to the fundamental classes of M̄_{g,n}/S_n. Now, since the circle operator on V is zero, there is a canonical isomorphism 𝓗 ≅ H((t)). This arises from the canonical polarisation, which in this case is compatible with the differential. The map C_*(M̄(n))_{S_n} → Sym^n t^{−1}V[t^{−1}] is given, at least on homology, by cap products with the ψ classes. The point is that the action of K[t_1, ..., t_n] on C_*(M̄(n)) ≃ C_*(M̃(n))_{h(S^1)^n} is given on homology by cap product with minus the ψ classes.
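Schematically, writing T_i for the tangent line at the i-th marked point (this block only restates the last sentence in symbols):

t_i \cdot x \;=\; -\,\psi_i \cap x, \qquad -\psi_i \;=\; c_1(T_i) \in H^2(\overline{M}(n)),

so that, on homology, the image of the master-equation solution records the fundamental classes of the M̄_{g,n}/S_n together with all of their cap products with monomials in the ψ classes, which is exactly the data of the ancestral Gromov-Witten correlators.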
It follows that the ray in the Fock space we have constructed coincides with the ordinary ancestral potential.

13. The B model and the holomorphic anomaly

Suppose that X is a smooth projective Calabi-Yau variety of dimension d, for simplicity over C (which in this section we take to be our base field). Pick a holomorphic volume form Vol_X on X. We would like to use the results of this paper and [Cos04] to construct the B model analog of the Gromov-Witten potential of X. Currently there is a small technical gap in this construction, which will be discussed elsewhere. In [Cos04], I showed how to associate to a Calabi-Yau A_∞ category a TCFT, whose homology is the Hochschild homology of the category. The derived category of coherent sheaves on X, D^b(X), is a Calabi-Yau category. However, it is not the category we want, for various reasons explained in [Cos04]. One instead works with a category chosen in such a way that the circle action ∂ on V corresponds to D on F(1). Also, F(1) is quasi-isomorphic to the Hochschild chain complex shifted by d, where the circle operator corresponds to the B operator. Then F(1)_Tate ≃ V_Tate, V_{hS^1} ≃ F(1)_{hS^1} and V^{hS^1} ≃ F(1)^{hS^1}. Now consider the de Rham complex Ω^{−*,−*}(X). Rescaling the holomorphic volume form corresponds to rescaling λ. The fact that the potential is flat tells us what happens when we change the complex structure on X. This corresponds to keeping the same symplectic vector space, Fock space, and line in the Fock space, but changing the polarisation of 𝓗.
2014-10-01T00:00:00.000Z
2005-09-12T00:00:00.000
{ "year": 2005, "sha1": "de19da61ced2c84ef13e167bd954badb31f5b9e4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "de19da61ced2c84ef13e167bd954badb31f5b9e4", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
212417512
pes2o/s2orc
v3-fos-license
Analysis of serum inflammatory mediators in type 2 diabetic patients and their influence on renal function

Aim: To evaluate the serum concentrations of inflammatory mediators in patients with type 2 diabetes mellitus (T2DM) with or without renal function alteration (RA).

Methods: Serum samples from 76 patients with T2DM and 24 healthy individuals were selected. Patients with T2DM were divided into two groups according to eGFR (> or < 60 mL/min/1.73 m²). Cytokine, chemokine and adipokine levels were evaluated using the Multiplex immunoassay and ELISA.

Results: TNFR1 and leptin were higher in the T2DM group with RA than in the T2DM group without RA and the control group. All patients with T2DM showed increased resistin, IL-8, and MIP-1α compared to the control group. Adiponectin was higher and IL-4 was decreased in the T2DM group with RA compared to the control group. eGFR correlated positively with IL-4 and negatively with TNFR1, TNFR2, and leptin in patients with T2DM. In the T2DM group with RA, eGFR was negatively correlated with TNFR1 and resistin. TNFR1 was positively correlated with resistin and leptin, as was resistin with IL-8 and leptin.

Conclusion: Increased levels of TNFR1, adipokines and chemokines, together with decreased IL-4, play an important role in the inflammatory process developed in T2DM and decreased renal function. We also suggest that TNFR1 is a strong predictor of renal dysfunction in patients with T2DM.

Introduction

Type 2 diabetes mellitus (T2DM) is one of the most prevalent subtypes of diabetes mellitus (DM). It is a metabolic disorder resulting from a relative deficiency of insulin production and/or action, leading to increased serum glucose levels, and it is considered the main cause of chronic kidney disease (CKD) [1,2]. Hyperglycemia in T2DM is strongly associated with the development of macrovascular and microvascular complications, which may result in decreased renal function. An imbalance between inflammatory mediators triggers or enhances T2DM complications. Activation of the innate immune system alone can induce hyperglycemia and insulin resistance; thus, diabetes and inflammation are simultaneously involved, feeding a positive feedback loop [8]. Early identification of the risk of progressive loss of renal function in patients with T2DM might delay diabetes complications in these patients. Because direct measurement of the glomerular filtration rate is not feasible in clinical practice, the estimated glomerular filtration rate (eGFR) and albuminuria have been used as parameters to evaluate the renal function of patients with T2DM. An eGFR < 60 mL/min/1.73 m² may characterize decreased renal function [9,10]. Plasma and urinary markers have recently shown that early progressive renal decline, in the context of T2DM, has multiple causes [11]. Given the need to identify factors that contribute to low-grade inflammation and its complications, as reflected in the renal function of patients with T2DM, this study aimed to evaluate the serum concentrations of inflammatory mediators in patients with T2DM with or without renal alteration (RA), as determined by the eGFR, and to verify the correlation of these mediators with decreased renal function.

Patients

Patients with T2DM were recruited in the Endocrinology Outpatient Clinic of the Federal University of Triângulo Mineiro (UFTM), Uberaba, Minas Gerais, Brazil, between January and December 2018. Healthy volunteers were recruited from the facilities of the same university.
Patients included in the study had a T2DM diagnosis, were over 18 years old, and were under medical follow-up in the Endocrinology Outpatient Clinic of the UFTM. Healthy people aged over 18 years and with normal renal function were also included for comparison. Pre-diabetic patients, patients with T2DM aged under 18 years, and patients with T2DM or healthy people without sufficient data for eGFR calculation (age, race, serum creatinine and gender) were excluded from the study. A total of 100 adult participants were recruited for this study, 76 of whom had T2DM (28 men and 48 women) and 24 of whom were healthy volunteers (10 men and 14 women). The patients with T2DM were divided into two groups according to the eGFR (mL/min/1.73 m²) using the equation proposed by the Chronic Kidney Disease Epidemiology Collaboration study (CKD-EPI) [12]. These were the T2DM group without RA (n = 56, patients with T2DM with eGFR > 60 mL/min/1.73 m²), with a median age of 59.5 (18-84) years, and the T2DM group with RA (n = 20, patients with T2DM with eGFR < 60 mL/min/1.73 m²), with a median age of 75 (37-94) years. The control group consisted of 24 healthy individuals without DM and with eGFR > 60 mL/min/1.73 m², with a median age of 34 (22-58) years.

Because the age difference between study participants is a limitation to be clarified, and age is considered a very important factor in the development of various conditions, we used statistical tests to assess the contribution of the age difference in the studied sample. In the present study, it was possible to demonstrate that this parameter did not directly influence the evaluated markers. To demonstrate this result and eliminate any bias, we performed analyses comparing elderly and non-elderly patients in the T2DM group without RA and elderly and non-elderly patients in the T2DM group with RA, as follows, for the T2DM group without RA: IL-4 (p = 0.3436; t = 0.9554); TNFR1 (p = 0.2640; t = 1.129); TNFR2 (p = 0.0476; t = 2.027); TNF-α (p = 0.5707; U = 276.5); IFN-γ (p = 0.0479; t = 2.024); IL-8 (p = 0.7313; U = 288.5); and eotaxin.

Clinical and laboratory data of the patients in the study were obtained from the information in the follow-up medical records of the patients with T2DM and from the results of routine blood tests previously acquired from the volunteers. The study was conducted in the laboratories of the General Pathology and Immunology Departments of the Federal University of Triângulo Mineiro (UFTM), Uberaba, Minas Gerais, Brazil. The study was approved by the Research Ethics Committee of the Federal University of Triângulo Mineiro under opinion number 3,001,006. All samples were archived and identified by codes with letters and numbers to ensure that individuals were anonymized. All patients and volunteers who were invited to participate in the study signed the informed consent form after clarification.

Methods

Patients with T2DM were approached individually at the time of routine clinical consultation, at the doctor's office. Recruited healthy people were referred to a reserved room in the General Pathology Department of the UFTM. They were instructed about the research, and those who agreed to participate signed the consent form and had a biological sample collected. The biological sample was collected in a reserved and appropriate blood collection room, where general data of the participants were also recorded. The sample was collected in a sterile tube containing separating gel and, after 30 min of rest, centrifuged at 3,000 rpm and 4 °C for 15 min to obtain the serum. The serum sample was stored at −80 °C until analysis.
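A minimal sketch of the eGFR-based grouping described above, assuming the 2009 CKD-EPI creatinine equation; the function and variable names are illustrative, and the example values are fabricated stand-ins rather than study data.

# CKD-EPI (2009) estimated GFR from serum creatinine, age, sex and race.
# scr: serum creatinine in mg/dL; age in years; female, black: logical.
ckd_epi_egfr <- function(scr, age, female, black) {
  kappa <- ifelse(female, 0.7, 0.9)
  alpha <- ifelse(female, -0.329, -0.411)
  141 *
    pmin(scr / kappa, 1)^alpha *
    pmax(scr / kappa, 1)^(-1.209) *
    0.993^age *
    ifelse(female, 1.018, 1) *
    ifelse(black, 1.159, 1)          # result in mL/min/1.73 m^2
}

# Example: assign the study groups at the 60 mL/min/1.73 m^2 cut-off
egfr  <- ckd_epi_egfr(scr = c(0.8, 2.1), age = c(60, 75),
                      female = c(TRUE, FALSE), black = c(FALSE, FALSE))
group <- ifelse(egfr < 60, "T2DM with RA", "T2DM without RA")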
Statistical analysis

For the statistical analysis, data were entered into an electronic spreadsheet (Microsoft Excel) and analyzed using the GraphPad Prism software, version 7.0 (GraphPad Software, USA). The variables were tested for normality using the Kolmogorov-Smirnov test. For non-normal distributions, we used the Mann-Whitney U test for comparisons between two groups and the Kruskal-Wallis H test, followed by Dunn's post hoc test, for comparisons among three or more groups. Proportions were compared using the chi-square test (χ²) or Fisher's exact test. We used Pearson's r test to correlate parametric variables and Spearman's test (rS) for nonparametric variables. Differences were considered statistically significant when p < 0.05.
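As an illustration of this pipeline (the paper used GraphPad Prism; this R sketch is an assumed equivalent, with simulated data, and with the FSA package's dunnTest standing in for Prism's Dunn post hoc test):

library(FSA)  # for dunnTest(); an assumption, not the software used in the study

# Simulated stand-in data: one serum marker across the three study groups
set.seed(1)
df <- data.frame(
  marker = c(rnorm(24, 1.0, 0.3), rnorm(56, 1.3, 0.3), rnorm(20, 1.8, 0.4)),
  group  = factor(rep(c("Control", "T2DM without RA", "T2DM with RA"),
                      times = c(24, 56, 20)))
)

# Normality check (Kolmogorov-Smirnov against a standard normal)
ks.test(as.numeric(scale(df$marker)), "pnorm")

# Two-group comparison for non-normal data (Mann-Whitney U)
wilcox.test(marker ~ group,
            data = droplevels(subset(df, group != "T2DM with RA")))

# Three-group comparison: Kruskal-Wallis followed by Dunn's post hoc test
kruskal.test(marker ~ group, data = df)
dunnTest(marker ~ group, data = df, method = "bonferroni")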
Clinical and laboratory characteristics of the participants

A total of 100 participants were selected for the study and classified into three groups: the control group with 24 individuals (24%), the T2DM group without RA with 56 patients (56%), and the T2DM group with RA with 20 patients (20%). Regarding the general characteristics of the groups, there was a predominance of women in all three, with 14 (58.3%) in the control group, 36 (64.3%) in the T2DM group without RA, and 12 (60%) in the T2DM group with RA. The participants were mainly Caucasian, with 20 (83.3%) in the control group, 45 (80.4%) in the T2DM group without RA, and 16 (80%) in the T2DM group with RA. Most patients with T2DM had hypertension, with 36 (64.3%) in the T2DM group without RA and 18 (90%) in the T2DM group with RA, whereas no participant in the control group had hypertension. The body mass index (BMI) was higher in the T2DM groups than in the control group. Patients with T2DM with RA had a longer DM duration than patients with T2DM without RA. Moreover, most patients in both groups reported the use of insulin to control diabetes, with 38 (67.8%) in the T2DM group without RA and 17 (85%) in the T2DM group with RA. Regarding the laboratory data, as expected, fasting serum glucose and glycated hemoglobin levels were higher in patients with T2DM. Serum urea and creatinine levels were higher in the T2DM group with RA than in the other groups. Serum total cholesterol, high-density lipoprotein cholesterol, and triglyceride levels were similar among the groups. However, serum low-density lipoprotein cholesterol levels were higher in the control group than in the other groups. Regarding alcohol consumption, most participants in the control group (14; 58.3%) reported social use, while 30 (53.6%) patients in the T2DM group without RA and 12 (60%) in the T2DM group with RA reported not consuming alcohol. Most participants in all groups were non-smokers. Regarding physical activity, half of the participants in both the control and T2DM with RA groups did not practice physical activities, as was the case for 35 (62.5%) patients in the T2DM group without RA. The clinical and laboratory characteristics of the patients are detailed in Table 1.

Imbalance in serum cytokine production in patients with T2DM with RA

Inflammatory cytokines were analyzed in patients with T2DM to evaluate their production in the context of renal function. Patients with T2DM showed a significant decrease in serum IL-4 and TNFR2 levels (p = 0.0165, U = 617, and p = 0.0024, U = 541.5, respectively) and a significant increase in TNFR1 levels compared to controls (p = 0.0035; U = 554.5). However, there was no significant difference in TNF-α levels between the groups (p = 0.7919; U = 879), and there was only a tendency toward increased IFN-γ levels in patients with T2DM (p = 0.0724; U = 690). Comparing the groups based on RA, the T2DM group with RA had decreased IL-4 levels compared to the control group (p = 0.0090; H = 9.413, Dunn's post hoc test) and increased TNFR1 levels compared to both the T2DM group without RA and the control group.

Increased serum adipokine production in patients with T2DM with RA

Observing the imbalance in serum cytokine production in patients with T2DM with RA, and considering T2DM as a low-grade chronic inflammatory process, we analyzed adipokine production in these patients. There was a significant increase in serum adiponectin and resistin levels in patients with T2DM compared to the control group (p = 0.0230, U = 631.5, and p = 0.0003, U = 478.5, respectively). Moreover, there was a tendency toward increased leptin levels in patients with T2DM compared to the control group (p = 0.0844; U = 698). Comparing the groups based on RA, there was an increase in adiponectin levels in the patients with T2DM with RA compared to the control group (p = 0.0422; H = 6.329, Dunn's post hoc test). Regardless of RA, patients with T2DM showed an increase in resistin levels compared to the control group (p = 0.0014; H = 13.12, Dunn's post hoc test). However, serum leptin levels were significantly higher in the T2DM group with RA than in the T2DM group without RA and the control group (p = 0.0076; H = 9.759, Dunn's post hoc test) (Fig 2).

Increased serum chemokine production in patients with T2DM with RA

Observing the increase in adipokine production associated with the imbalance in cytokine production in patients with T2DM with RA, we analyzed chemokine production in these patients. There was a significant increase in serum IL-8 (p < 0.0001; U = 322), eotaxin (p = 0.0330; U = 648.5), MIP-1α (p = 0.0011; U = 541), and MIP-1β (p = 0.0380; U = 655.5) levels in patients with T2DM compared to the control group. Comparing the groups based on RA, it was found that, regardless of RA, serum IL-8 levels remained significantly elevated in the T2DM group without RA and the T2DM group with RA compared to the control group (p < 0.0001; H = 22.8, Dunn's post hoc test). The same pattern was observed for MIP-1α levels (p = 0.0011; H = 13.61, Dunn's post hoc test). There was no significant difference in eotaxin (p = 0.0816; H = 5.011, Dunn's post hoc test) or MIP-1β levels (p = 0.1173; H = 4.286, Dunn's post hoc test) between the groups. However, both eotaxin and MIP-1β tended to behave similarly to IL-8 and MIP-1α (Fig 3).

Correlation between eGFR and cytokines/chemokines/adipokines in patients with T2DM with RA

To evaluate the relationship of these mediators with renal function, correlations between eGFR and cytokines/chemokines/adipokines were analyzed. Patients with T2DM showed a positive and significant correlation between eGFR and IL-4, and negative and significant correlations between eGFR and TNFR1, TNFR2, and leptin; in the T2DM group with RA, eGFR was negatively correlated with TNFR1 and resistin (Fig 4). Patients with T2DM with RA showed a positive and significant correlation between TNFR1 and resistin (p = 0.0002; rS = 0.7349) and between TNFR1 and leptin (p = 0.0420; rS = 0.4586). A positive and significant correlation was also observed between resistin and leptin (p = 0.0192; r = 0.5185) and between resistin and IL-8 (p = 0.0087; rS = 0.57) (Fig 5).
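A sketch of this correlation analysis in R (illustrative only: the vectors below are simulated stand-ins for the measured eGFR and mediator levels, and the choice between Pearson and Spearman would in practice follow the normality testing described above):

# Simulated stand-ins for the T2DM-with-RA group (n = 20)
set.seed(2)
egfr     <- runif(20, 15, 59)                       # mL/min/1.73 m^2
tnfr1    <- 2000 - 20 * egfr + rnorm(20, 0, 150)    # fabricated values
resistin <- 30 - 0.3 * egfr + rnorm(20, 0, 3)       # fabricated values

# Nonparametric correlation (reported as rS in the paper)
cor.test(egfr, tnfr1, method = "spearman")

# Parametric correlation (reported as r)
cor.test(egfr, resistin, method = "pearson")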
Discussion

Previous studies have demonstrated the involvement of inflammatory cytokines in the development and progression of diabetic kidney disease [16][17][18]. In this context, this study analyzed the serum cytokine/chemokine/adipokine levels in patients with T2DM with or without RA, as determined by eGFR, in relation to those in healthy individuals, to investigate the association of these inflammatory mediators with decreased renal function.

In our study, patients with T2DM had decreased serum IL-4 levels compared to the control group. A decreased IL-4 level was also found in patients with T2DM with RA compared to the control group, indicating that not only diabetes but also RA, characterized by eGFR < 60 mL/min/1.73 m², is associated with decreased IL-4 levels. IL-4 is a Th2-profile anti-inflammatory cytokine that acts to reduce the secretion of proinflammatory cytokines by activated macrophages and stimulates the production of a number of anti-inflammatory molecules, such as IL-1ra.

In this study, the findings in patients with T2DM differed between the two TNF receptors: serum TNFR1 levels increased and TNFR2 levels decreased compared to the control group. One of the main findings of our study was that the increase in TNFR1 levels alone distinguished the patients with T2DM with RA from those with T2DM without RA and from healthy volunteers. This was confirmed by the negative and significant correlation between eGFR and TNFR1 in patients with T2DM, and by an even stronger correlation in patients with T2DM with RA. Thus, this result indicates that TNFR1 can predict a decrease in renal function in patients with T2DM. Although TNFR2 was decreased in patients with T2DM without RA compared to the control group, its serum level may increase in patients with T2DM with RA, which is supported by the fact that eGFR correlates negatively with TNFR2 levels. Nevertheless, despite the relevance of the results found with its receptors, we did not observe any significant difference between the groups in relation to TNF-α.

TNF-α is a pleiotropic cytokine that plays an important role in the mediation of inflammatory processes. It is a transmembrane homotrimeric protein produced by many cells, including fat cells, endothelial cells, and leukocytes. In plasma, TNF-α appears free or bound to the circulating receptors TNFR1 and TNFR2 [28]. In a 12-year follow-up study conducted in patients with T2DM, it was observed that, of all markers analyzed, only TNFR1 and TNFR2 were associated with the risk of end-stage renal disease. A stronger association was found with TNFR1, suggesting that high serum levels of this receptor can predict the progression of T2DM to CKD [29]. Other studies have shown that elevated plasma TNFR1 levels are associated with decreased eGFR in patients with T2DM [30,31], which corroborates our findings. In contrast, a recent study showed that patients with T2DM had increased plasma levels of not only TNFR1 but also TNF-α and TNFR2 compared to the control group, which differs from our findings; however, similar to our results, TNFR1 and TNFR2 were strongly associated with kidney injury [32]. It is still unclear why serum TNFR levels are more closely associated with eGFR. One possible explanation is that, because TNFR levels are at least 100 times greater than TNF-α levels, circulating TNFRs play an important role in the progression of diabetic kidney disease, regardless of TNF-α levels [33].

In association with the cytokine findings, our study found an increase in serum adipokine levels in patients with T2DM. Patients with T2DM with RA showed a significant increase in adiponectin levels.
Regardless of RA, patients with T2DM also showed an increase in resistin levels. Moreover, serum resistin levels tend to increase as eGFR decreases, which was confirmed by the negative and significant correlation found between eGFR and resistin levels. Additionally, another important finding of our study concerned serum leptin levels, which distinguished patients with T2DM with RA and also showed a negative and significant correlation with eGFR. In light of these findings, our results demonstrate that an increase in adipokine levels is related to a decrease in renal function in patients with T2DM.

Adiponectin is an adipokine secreted exclusively by human adipocytes [34]. It has beneficial effects on insulin resistance and has anti-inflammatory [35] and anti-oxidative [36] properties. It has been suggested that the anti-inflammatory action of adiponectin is due to the inhibition of proinflammatory cytokine production, such as that of IL-6 and TNF-α, by macrophages and/or a reduction of their phagocytic action [37]. We observed a significant increase in adiponectin levels in our patients; this factor may have contributed to the observed serum levels of proinflammatory cytokines such as TNF-α and IFN-γ. Although some studies reported that patients with T2DM have lower circulating quantities of adiponectin than those without T2DM [38,39], other studies have shown that, under various kidney disease conditions [40,41] and in patients with T2DM with CKD [42,43], serum adiponectin levels are increased, which corroborates our findings. Another study evaluating more than 1,200 patients with T2DM showed an inverse correlation between serum adiponectin levels and eGFR [44]. The correlation of adiponectin and CKD is still controversial. It has been suggested that, in individuals with kidney dysfunction, increased adiponectin levels represent not only decreased renal excretion but also a temporary homeostatic mechanism in an attempt to reduce renal damage through anti-inflammatory and anti-oxidative mechanisms [45,46].

Resistin is a protein secreted mainly by macrophages and monocytes in humans and has proinflammatory effects [47,48]. The association between serum resistin levels and CKD in diabetes is also unclear. It was recently observed that patients with microalbuminuria and T2DM with eGFR < 60 mL/min/1.73 m² showed a significant increase in serum resistin levels compared to patients with T2DM with normal renal function. Additionally, serum resistin levels correlated negatively with eGFR and positively with C-reactive protein levels. Thus, the main determinants of resistin levels in patients with T2DM are renal function and inflammation [7]. Axelsson et al. demonstrated that high resistin levels in patients with T2DM with CKD were associated with decreased eGFR and inflammation [49]. A prospective cohort study showed that high resistin and TNFR2 levels are related to a higher risk of decline in renal function [50]. Moreover, an increase in resistin levels was observed in the early stages of CKD [51]; this means that even with mildly decreased renal function, there is already an increase in resistin levels, which corroborates our findings. In agreement, other investigations suggest that resistin might promote endothelial dysfunction by enhancing oxidative stress, an effect that would eventually culminate in glomerular dysfunction [52,53], and that the adverse effects of resistin could be attributed to its ability to stimulate proinflammatory cytokine production [47,54].
Another adipokine with proinflammatory effects, which promotes the synthesis of other inflammatory cytokines, is leptin. It is involved in the control of food intake, leading to appetite suppression. Patients with obesity have hyperleptinemia due to the development of leptin resistance [55]. High leptin levels are associated with insulin resistance and the development of T2DM [56]. It has been shown that an increase in serum leptin levels is related to a decline in eGFR, and this association has been described as stronger in women [57] and in patients with CKD [58]. Both decreases and increases in leptin levels are risk factors for the decline in renal function in patients with T2DM [59]. Our results showed that patients with T2DM with decreased renal function had increased serum leptin levels. In addition to decreased renal excretion due to renal dysfunction, unfavorable actions of leptin, such as activation of the sympathetic nervous system, rather than causing beneficial effects, may contribute to the decline in renal function in patients with hyperleptinemia. This can then be further compromised by the leptin resistance found in these patients [59].

Among the chemokines analyzed, we observed that patients with T2DM had a significant increase in serum IL-8, eotaxin, MIP-1α, and MIP-1β levels compared to the control group. One study evaluated urinary cytokine levels in patients with T2DM with normo- and microalbuminuria and found a significant increase in urinary IL-8, IP-10, MCP-1, G-CSF, eotaxin, RANTES, and TNF-α levels in patients with microalbuminuria compared to patients with normoalbuminuria. Patients with microalbuminuria also had a significant increase in GM-CSF, MIP-1α, and MIP-1β levels compared to the control group. These results indicated that determination of urinary cytokine levels might be useful in the diagnosis and early treatment of diabetic nephropathy [60]. Our results showed that the increase in serum IL-8 levels in patients with T2DM is independent of the presence of RA, although its increase accompanying the decrease in eGFR is noticeable.

IL-8 (CXCL8) was the first chemokine to be discovered and has a predominantly chemoattractant effect on neutrophils [61,62]. It enhances the expression of adhesion molecules by endothelial cells and antagonizes IgE production stimulated by IL-4 [63]. It is produced mainly by monocytes/macrophages and, to a lesser extent, by fibroblasts, endothelial cells, keratinocytes, hepatocytes, melanocytes, and chondrocytes. IL-1, TNF-α, and IFN-γ are its main stimulators [64]. In the kidneys, podocytes and endothelial cells of interstitial vessels are the main sources of IL-8, while tubular epithelial cells express small amounts of this cytokine. In inflammatory kidney diseases, IL-8 expression increases fivefold compared to that in normal structures. IL-8 activates endothelial cells near the inflammatory site, facilitates the recruitment and transendothelial migration of leukocytes, and alters the expression of adhesion molecules [65]. Urinary IL-8 levels have been observed to be elevated in the early stages of diabetic nephropathy in patients with T2DM [66]. Another study that evaluated the association between urinary cytokine levels and decreased eGFR in patients with T2DM with DN found that increased urinary levels of IL-6, IL-8, TNF-α, and TGF-β were predictors of a faster decline in renal function, indicating the clinical utility of these levels in stratifying the risk of renal disease progression [67].
In patients with T2DM, IL-8 was negatively associated with eGFR and positively associated with BMI [68]. These studies reveal an association between increased IL-8 levels and decreased eGFR in patients with T2DM. Our results showed that the increase in serum IL-8 levels anticipated the decrease in renal function in patients with T2DM. A possible explanation is that the hyperglycemic environment itself promotes increased serum levels of this chemoattractant cytokine. This contributes to the onset and progression of the inflammatory process, from the recruitment of inflammatory cells, especially neutrophils, to vascular changes such as increased permeability, which favors the arrival of new inflammatory cells at the inflammatory site and results in renal function impairment [69].

Similar to the findings for serum IL-8 levels, in our study the patients with T2DM also showed an increase in serum MIP-1α and MIP-1β levels compared to the control group. However, only MIP-1α levels showed a significant difference when the T2DM groups without and with RA were compared with the control group. Thus, as with IL-8, the increase in serum MIP-1α levels in patients with T2DM is independent of the presence of RA; however, its increase also seems to accompany the decrease in eGFR. MIP-1α (CCL3) and MIP-1β (CCL4) belong to the CC subfamily of chemokines and induce the expression of adhesion and costimulatory molecules on the surface of T cells, NK cells, macrophages, and monocytes. These chemokines not only mediate the chemotaxis of these cells but also promote the secretion of proinflammatory cytokines [70]. One study evaluated the serum levels of inflammatory cytokines in 64 patients with T2DM with CKD and observed that patients with an eGFR of 30-59 mL/min/1.73 m² had increased serum MIP-1α levels; this was associated with the decline in eGFR and also correlated positively with urinary albumin excretion [71]. Patients with T2DM and a diagnosis of DN showed an increase in serum MIP-1β levels in CKD stages 1-2 [72]. Thus, corroborating these studies, our results suggest that increased serum MIP-1α and MIP-1β levels may anticipate the decline in renal function in patients with T2DM.

In this study, patients with T2DM showed an increase in serum eotaxin levels compared to the control group. Although there was no difference between patients with T2DM with and without RA, a trend toward increased serum eotaxin levels in these patients was noted. Eotaxin is a CC chemokine that acts in chemotaxis, mainly of eosinophils. It is secreted by endothelial cells, macrophages, fibroblasts, and smooth muscle cells [73]. In 2015, a study conducted in African American patients with type 1 diabetes was the first to report that increased plasma eotaxin levels are an independent predictor of renal failure [74]. A prolonged hyperglycemic process increases the excretion of urinary eotaxin and other inflammatory mediators [75]. Increased urinary eotaxin levels were found in patients with microalbuminuria and T2DM compared to patients with normoalbuminuria and controls [60]. In addition to having angiogenic properties [76] and contributing to renal interstitial eosinophilia [77], studies have shown that an increase in serum eotaxin levels in patients with T2DM could play an important role in the atherosclerosis that develops in patients with T2DM [78] and chronic renal disease [72].
Increased serum eotaxin levels have also been observed in obese mice and humans [79]. Thus, it is possible to associate the increase in serum and urinary eotaxin levels with the development of T2DM complications and renal function impairment, probably in relation to the obesity that may affect patients with T2DM and favor the low-grade chronic inflammatory process [72,79].

The low-grade chronic inflammation promoted by T2DM is associated with macrophage infiltration in the kidney. Monocytes/macrophages and neutrophils are considered primordial cells that drive inflammation and the concomitant production of proinflammatory cytokines in vivo [80,81]. Increased infiltration of activated monocytes, macrophages, and T lymphocytes has been described in the kidneys of patients with T2DM with DN [82,83], reinforcing the hypothesis that T2DM is a disease of the innate immune system [84,85]. Thus, our study demonstrates that patients with T2DM with RA show increased stimulation of the recruitment of innate immune cells, through an increase in the serum levels of proinflammatory chemokines, stimulated by the increase in adipokines and TNFR1, with a consequent decrease in IL-4, favoring the inflammatory process. Hence, this immune mechanism could be associated with, and play an important role in promoting, the decline in renal function in patients with T2DM. Therefore, our study highlights the importance of screening for increased serum TNFR1, adipokine, and chemokine levels and decreased serum IL-4 levels in patients with T2DM to identify individuals at risk of progressive loss of renal function.

We demonstrated that TNFR1 correlated positively with resistin and leptin in patients with T2DM with RA, showing its contribution to the increase of these adipokines in conditions of decreased renal function. The relationship between these molecules appears to be complex, and several points related to their signaling still need to be better understood. Although studies such as that of Fasshauer et al. (2001) have shown a negative effect of resistin on TNF-α production in an adipocyte cell line [86], several studies have shown that TNF-α signaling via TNFR1 is a strong stimulus for resistin and leptin production [47,87-90]. Resistin also acts by stimulating proinflammatory cytokines such as TNF-α and IL-12 [91,92], and may be the link between inflammation and insulin resistance in an inflammatory environment [93,94]. Moreover, the relationship between adipokines, TNF-α, and their receptors has been reported in other studies on diseases with an important inflammatory component, such as lupus [95], inflammatory bowel disease [96], rheumatoid arthritis [47,97,98], atherosclerosis [99,100], and chronic kidney disease in the absence of diabetes mellitus [101,102]. The real role of leptin and resistin as risk indicators for kidney injury has yet to be further clarified. Studies have reported that leptin is inversely related to the glomerular filtration rate [103,104] and positively associated with chronic kidney disease [58]. Leptin is metabolized mainly by renal proximal tubular cells, and a decreased glomerular filtration rate may result in decreased leptin clearance and therefore higher serum leptin levels. High serum leptin levels have been observed in the early stages of kidney disease in T2DM patients, demonstrating that leptin degradation is already impaired in the early stages of nephropathy [105].
Studies have also reported increased serum resistin inversely associated with the estimated glomerular filtration rate in T2DM patients [49,106]. Individuals with renal dysfunction have accumulated serum resistin levels, possibly due to reduced renal clearance. However, resistin levels are significantly increased even in individuals with an eGFR between 60 and 89 mL/min/1.73 m², in whom polypeptides would be filtered almost normally; serum resistin levels in such patients with mild renal dysfunction are still significantly higher than in those with an eGFR > 90 mL/min/1.73 m² [51]. Thus, it has been hypothesized that resistin plays an important role in decreasing renal function [51], possibly through its proinflammatory effect, which may be detrimental to renal function [106]. Mills and colleagues reported that leptin and resistin are significantly associated with the risk and severity of chronic kidney disease [104]. Thus, we can suggest that increased levels of resistin and leptin, associated with increased levels of TNFR1, may indicate a risk of renal injury in T2DM patients. Furthermore, under this same condition, resistin was shown to correlate positively with leptin and IL-8. Thus, the adipokines resistin and leptin, together with TNFR1 and IL-8, exhibit similar behavior in the inflammatory process of patients with T2DM with decreased renal function.

Conclusions

Our study showed that serum TNFR1, IL-4, adipokines, and chemokines play an important role in the inflammatory process in T2DM and decreased renal function. Moreover, our data indicate that TNFR1 is a strong predictor of renal dysfunction in patients with T2DM.
2020-03-05T10:30:09.411Z
2020-03-04T00:00:00.000
{ "year": 2020, "sha1": "1103705a4bea6cffa159926732fb3e1407d6a111", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0229765&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "71a271eee62b8fe00efbad9121a8ae8e2518c01b", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
256471842
pes2o/s2orc
v3-fos-license
Network analysis of potential risk genes for psoriasis

Psoriasis is a complex chronic inflammatory skin disease. The aim of this study was to analyze potential risk genes and molecular mechanisms associated with psoriasis. GSE54456, GSE114286, and GSE121212 were collected from the Gene Expression Omnibus (GEO) database. Differentially expressed genes (DEGs) between psoriasis and controls were screened in each of the three datasets, and common DEGs were obtained. The biological roles of the common DEGs were identified by enrichment analysis. Hub genes were identified using protein-protein interaction (PPI) networks, and their risk for psoriasis was evaluated through logistic regression analysis. Moreover, differentially methylated positions (DMPs) between psoriasis and controls were obtained in the GSE115797 dataset. Methylation markers were identified after comparison with the common genes. A total of 118 common DEGs were identified, which were mainly involved in keratinocyte differentiation and the IL-17 signaling pathway. Through the PPI network, we identified the top 10 genes by degree as hub genes. Among them, high expression of CXCL9 and SPRR1B may be risk factors for psoriasis. In addition, we selected 10 methylation-modified genes with higher area under the receiver operating characteristic curve (AUC) values as methylation markers. A nomogram showed that TGM6 and S100A9 may be associated with an increased risk of psoriasis. These findings suggest that immune and inflammatory responses are active in keratinocytes of psoriatic skin. CXCL9, SPRR1B, TGM6 and S100A9 may be potential targets for the diagnosis and treatment of psoriasis.

Introduction

Psoriasis is an immune-mediated inflammatory skin disease with important physiological and psychosocial consequences [1]. Psoriasis has complex characteristics, affecting about 2% of the general population, and its prevalence varies from country to country [2,3]. There are several clinical variants of this disease, namely plaque psoriasis, erythrodermic psoriasis and pustular psoriasis, among which plaque psoriasis accounts for more than 90% of cases [4]. Clinically, red patches covered with silver-white multilayered scales are characteristic of psoriasis; the epidermis is thickened and clearly demarcated from adjacent non-lesional skin [5]. When the disease affects more than 10% of the body surface area, systemic drug therapy is usually required. Inflammatory infiltrates consisting of dermal dendritic cells, macrophages, T cells, and neutrophils are present in psoriatic plaques [6]. Helper T cells have long been recognized as an important pathogenic factor in psoriasis. Further evidence suggests that a strong T cell component maintains psoriasis through Th1, Th17, and Th22 cells and their derived cytokines [7]. The complex interaction of pro-inflammatory cytokines, chemokines, growth factors and chemical mediators initiated by Th17 cells may be the key factor inducing keratinocyte proliferation, angiogenesis and neutrophil influx, ultimately leading to keratinocyte over-proliferation and the characteristics of psoriatic plaques [8]. Recently, mast cells have been shown to be major producers of interleukin-22 in psoriasis and atopic dermatitis [9]. Among the various molecules associated with psoriasis, tumor necrosis factor (TNF)-α, IL-23, IL-17 and IL-22 are important regulators [10]. Emerging evidence suggests that different phenotypes have different immunogenetic characteristics, which may affect treatment options [11].
Epigenetic modification of epidermal genes is considered an essential factor in the pathogenesis of psoriasis [12]. Epigenetic changes also play an important role in the differentiation of CD4(+) T lymphocyte subsets and in the pathogenesis of psoriasis [13]. Among all epigenetic mechanisms, DNA methylation is one of the important factors in keratinocyte differentiation [14,15]. Common drugs used to treat psoriasis, such as methotrexate, have been reported to interfere with the methyl transfer function of folic acid, thereby restoring normal methylation status [16]. However, researchers still need to make greater efforts to elucidate how abnormal epigenetic modifications affect the pathogenesis of psoriasis. In addition, some environmental factors, including modifiable variables such as physical trauma, certain drugs, psychological stress, and obesity, are associated with the development and deterioration of psoriasis [17]. In this study, psoriasis-related sequencing data in public databases were used to explore the molecular mechanisms and potential risk factors of the disease. Data collection We collected psoriasis-related datasets from the gene expression omnibus (GEO) database. GSE54456 included gene expression profiles from skin samples of 92 psoriatic and 82 normal punch biopsies. Genes annotated in RefSeq were used to quantify gene expression levels, and expression was normalized as reads per kilobase per million mapped reads (RPKM). GSE114286 included gene expression profiles from 9 normal skin samples from healthy volunteers and 18 lesional skin samples from patients with psoriasis; expression was likewise normalized as RPKM. GSE121212 included gene expression profiles of skin tissues obtained from a carefully matched and tightly defined cohort of 28 psoriasis patients and 38 healthy controls. Paired reads were mapped to the human reference genome (b37), and the number of reads for each gene was counted using HTSeq. Identification of differentially expressed genes The differentially expressed genes (DEGs) between psoriasis and normal samples were obtained with the limma R package. Log2 fold change (FC) ≥ 2 and P-value < 0.05 were considered statistically significant. Enrichment analysis Enrichment analysis of the DEGs was performed using the clusterProfiler R package, including gene ontology (GO) functional analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis. GO terms comprised the cellular component (CC), biological process (BP) and molecular function (MF) categories. A P-value < 0.05 was considered statistically significant. Construction of protein-protein interaction (PPI) network The DEGs were entered into the online tool STRING (https://string-db.org) to construct the PPI network. A combined score of ≥ 0.5 was considered significant. Hub genes were chosen based on a higher number of associations with other genes (degree) in the PPI network. DNA methylation analysis GSE115797 included DNA methylation profiles of 24 psoriatic skin tissues and adjacent normal skin samples. Signal intensities along with detection P-values were calculated for each CpG probe after BMIQ normalization. Normalized average beta values were calculated using the ChAMP R package. Differentially methylated positions (DMPs) between psoriasis and normal samples were obtained with the limma R package, with the screening threshold set at a false discovery rate (FDR) < 0.05. Statistical analyses Statistical analyses were carried out with SPSS Statistics 21.0.
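As a rough illustration of the screening, enrichment, and methylation steps described above, the following R sketch strings the named packages together; the objects expr, group, beta_norm, and pheno are hypothetical placeholders for the prepared GEO data, and the thresholds follow the ones stated in the methods.

```r
library(limma)
library(clusterProfiler)
library(org.Hs.eg.db)
library(ChAMP)

# expr: normalized log2 expression matrix (genes x samples); group: factor
# with levels "normal" and "psoriasis" -- both hypothetical placeholders
design <- model.matrix(~ group)
fit <- eBayes(lmFit(expr, design))
tab <- topTable(fit, coef = 2, number = Inf)
degs <- rownames(tab)[tab$logFC >= 2 & tab$P.Value < 0.05]  # thresholds from the methods

# GO (CC/BP/MF) and KEGG enrichment of the DEGs, P < 0.05
ids <- bitr(degs, fromType = "SYMBOL", toType = "ENTREZID", OrgDb = org.Hs.eg.db)
ego <- enrichGO(ids$ENTREZID, OrgDb = org.Hs.eg.db, ont = "ALL", pvalueCutoff = 0.05)
ekegg <- enrichKEGG(ids$ENTREZID, organism = "hsa", pvalueCutoff = 0.05)

# DMP screening on BMIQ-normalized beta values (GSE115797), FDR < 0.05
dmp <- champ.DMP(beta = beta_norm, pheno = pheno, adjPVal = 0.05)
```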
The differences between the two groups were compared by Student's t test. The lrm function in the rms R package was used for logistic regression analysis. Differentially expressed genes in psoriasis By comparing psoriasis with healthy controls, we obtained differentially expressed genes (DEGs). A total of 401, 1857, and 466 DEGs were obtained in GSE54456, GSE114286 and GSE121212, respectively (Fig. 1A). Among these DEGs, we found 118 genes simultaneously present in all three datasets and defined them as common DEGs (Fig. 1B). PPI network of differentially expressed genes After screening using the combined score, a total of 86 DEGs entered the PPI network (Fig. 3A). Among them, we identified the top 10 genes with the highest degree as hub genes (Fig. 3B): IL1B, CXCL10, S100A7, IL17A, CCL20, SPRR1B, CXCL1, PI3, LCN2 and CXCL9. Notably, all of them were up-regulated in all three datasets (Fig. 3C). The AUC values of the hub genes were all greater than 0.7, suggesting an ability to discriminate psoriasis, especially for SPRR1B (Fig. 3D). Importantly, we performed logistic regression analysis and presented the risk prediction of hub genes for psoriasis through a nomogram (Fig. 3E). The results suggested that the higher the expression of CXCL9 and SPRR1B, the greater the risk of psoriasis. Hub genes were also mainly enriched in the IL-17 signaling pathway. Discussion High-throughput sequencing studies have revealed the impact of large numbers of gene expression and epigenetic modification changes on psoriasis through transcriptome and epigenetic modification studies. However, the complementarity of multiple omics data has not been fully utilized for comprehensive systematic analysis to obtain more views on disease regulation. In this study, three datasets of psoriasis were analyzed to obtain more accurate psoriasis-related genes. The molecular dysregulation mechanisms and potential risk factors of psoriasis were further explored by enrichment analysis. Combined with methylation analysis, we obtained methylation-modified markers, which expanded the known regulatory mechanisms of psoriasis. In addition to previous molecular mechanism studies, our study provided a more in-depth understanding of psoriasis epidermal gene regulation. Among the three groups of DEGs, we found 118 common genes, which may be closely related to the dysregulation mechanisms of psoriasis. These DEGs were significantly involved in psoriasis-related biological functions and signaling pathways. As one of the best-known immune processes underlying the pathogenesis of psoriasis, the interleukin-17 (IL-17) pathway showed a strong enrichment effect on psoriasis-related genes and epigenetic variation [18]. In addition, there is an imbalance between the differentiation and proliferation of keratinocytes in patients with psoriasis, and IL-17A can promote the proliferation of epidermal keratinocytes [19,20]. This promotes the development of thickened skin lesions infiltrated with a variety of inflammatory cells [21]. The inhibitory effect of anti-IL-17A in psoriasis plays an important role in the early clinical, histopathological and molecular treatment of psoriasis [22]. In general, the main function of TLRs is to mediate the inflammatory response, which has also been shown to be involved in the development of psoriasis [23]. Keratinocytes also recognize pathogens and endogenous cellular stress signals through NOD-like receptors (NLRs), thereby mediating the immune response [24].
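A minimal sketch of the logistic-regression and nomogram step described above, assuming a hypothetical data frame df with one row per sample, a binary status column (0 = control, 1 = psoriasis), and hub-gene expression columns; the lrm and nomogram calls mirror the rms tools named in the methods, while the pROC call for AUC is our own assumption.

```r
library(rms)
library(pROC)

# df: hypothetical data frame with columns status (0/1), CXCL9, SPRR1B, ...
dd <- datadist(df)
options(datadist = "dd")

# logistic regression of psoriasis status on hub-gene expression (lrm, rms package)
fit <- lrm(status ~ CXCL9 + SPRR1B, data = df)

# nomogram translating expression values into a predicted probability of psoriasis
nom <- nomogram(fit, fun = plogis, funlabel = "Risk of psoriasis")
plot(nom)

# single-gene discriminative ability, e.g. AUC for SPRR1B
auc(roc(df$status, df$SPRR1B))
```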
Compared with healthy controls, the hub genes we identified were up-regulated in all three psoriasis datasets. Interleukin-1β interferes with epidermal homeostasis by inducing insulin resistance, thereby participating in the development of psoriasis [25]. IL-17A activates the expression of CXCL1, CCL20 and S100A7 in keratinocytes, thereby activating the innate immune system [10]. The chemokines CXCL9 and CXCL10 released by epidermal keratinocytes have a strong chemotactic effect on monocytes, neutrophils and T cells, key cell types in psoriasis [26,27]. Consistent with our analysis, SPRR1B has also been identified as a potential biomarker of psoriasis by other studies [28,29]. The psoriasis area and severity index score was positively correlated with the expression of PI3 [30]. Lipocalin-2 (LCN2) is significantly higher in patients with psoriasis than in healthy controls and may serve as a clinical indicator of psoriatic pruritus [31]. In addition, psoriasis patients show different degrees of DNA methylation compared with healthy human skin, and these differences are associated with immune activation and immunoinflammatory diseases [32]. The presence of differentially methylated positions in patients with psoriasis may therefore be of great interest. Transglutaminase 6 (TGM6) has been reported to be associated with keratinocyte differentiation [33]. S100A9 can be an effective target for the treatment of psoriasis [34]. The level of S100A9 is related to the severity of psoriasis, and it induces the production of inflammatory mediators by activating TLR2 and TLR4 [35]. A limitation of this study is that we only analyzed mRNA expression levels in a limited number of epidermal specimens and did not assess protein expression. Secondly, we only explored the methylation modification of genes, and their specific roles in the pathogenesis of psoriasis remain to be studied further. In addition, more experiments are needed to determine whether the key genes have a role in the diagnosis and treatment of psoriasis. Conclusion This study identified multiple differentially expressed genes associated with psoriasis. Importantly, CXCL9 and SPRR1B, as well as the methylation markers TGM6 and S100A9, have an impact on the risk of psoriasis. These genes were mainly enriched in keratinocyte differentiation and the IL-17 signaling pathway. This suggests that these genes have great potential for the diagnosis and treatment of psoriasis.
Challenge of Politico-Economic Sanctions on Pharmaceutical Procurement in Iran: A Qualitative Study Background: Politico-economic sanctions over the recent years have led to significant challenges in the pharmaceutical supply chain (PSC) in Iran. Given the importance of the chain's resilience for the health system and its impact on accessibility, equity, and public health, this study was conducted to determine the major challenges facing pharmaceutical procurement in Iran after the imposition of these sanctions. Methods: This was a qualitative study with a content analysis approach conducted in 2019. Eighteen policymakers and administrative managers in the food and drug administrations of two Iranian medical universities and Iran's Ministry of Health were included via snowball sampling and semi-structured interviews. The data were analyzed using framework analysis in MAXQDA 10. Results: Five main themes and 15 sub-themes were identified, which addressed pharmaceutical supply chain challenges under politico-economic sanctions. These included challenges in financing, purchasing, importing, and manufacturing domestic products, in addition to storing and distributing medicines, along with challenges facing the general public, particularly patients. Conclusion: The results revealed that pharmaceuticals are not immune to politico-economic sanctions, although they are not directly subjected to them. Sanctions, similar to any economic crisis, can affect public health and limit access to healthcare. Identifying supply chain challenges and planning to address them could help policymakers find solutions to enhance PSC resilience in the future. • Politico-economic sanctions have had direct and indirect adverse effects on health systems. • The pharmaceutical supply chain may lose its resilience against these sanctions. • The quantity and quality of pharmaceutical access could be restricted as a result of these sanctions. What's New • Sanctions impose several challenges to financing, purchasing, and importing new pharmaceuticals in Iran. • Sanctions vastly restrict access to international collaboration in terms of technology transfer and joint research and development. • Producing domestic products and storing and distributing them are among other challenges affected by sanctions in Iran. Introduction It is widely acknowledged that medicines are one of the most important components of healthcare, prompt access to which is one of the most important objectives of healthcare systems around the world. 1 Easy access to medicines is also one of the main concerns of governments. 2 Hence, the pharmaceutical supply chain (PSC) in a country is required to provide adequate and quality medicines at a reasonable price, at the right time, and in the right place in order to achieve the objectives of a health system as well as those of the stakeholders in the PSC. 3,4 The PSC is one of the main components of a healthcare system and consists of all the processes, information, resources, financing, technologies, and pharmaceutical industry stakeholders, such as manufacturers, suppliers, brokers, service providers, buyers, and sellers. 5 Therefore, any unexpected shock to the PSC may lead to excessive consumption of resources, which threatens public health. 6 Various interdependencies within national and international domains have accordingly led to complexity, opacity, and uncertainty in the PSC, aggravating its vulnerability and consequently exposing it to greater risks. 7
These risks could also jeopardize supply chains that have been carefully developed according to scientific principles. 8,9 Therefore, there is a dire need to manage such risks in the PSC in order to prepare for and respond appropriately to a wide range of threats. 9 Building resilience in a given chain, especially in critical and abnormal situations, can act as a bulwark against risks in the PSC. 8 Resilience provides the supply chain with the ability to return to its original state, or even to a more desirable state than before, and the ability to detect threats. Accordingly, a resilient PSC could predict, regulate, or even prevent potential failures and vulnerabilities. 10 A major threat to the PSC in Iran over recent years has been the imposition of unilateral and multilateral politico-economic sanctions against the country. 3 This situation was exacerbated when the United States (US) withdrew (on May 8, 2018) from the Joint Comprehensive Plan of Action (JCPOA), the agreement aimed at halting Iran's uranium enrichment beyond a certain level in exchange for the lifting of some sanctions, leading to the reinstatement of all the previous US sanctions despite the commitments of the other parties to the agreement, such as the European Union (EU). 11 This should be seen in light of the fact that the EU was the primary source of Iranian supplies of medicines. Under these conditions, although medicines have not been the direct subject of sanctions against Iran, the PSC has encountered numerous problems, including payment for imported medicines, the supply of raw materials, and the sale of pharmaceutical products. 12,13 One report highlighted that the US sanctions have caused numerous difficulties in the supply of medical equipment and medicines and widespread problems in Iran's PSC. 13 Another study showed that the prices of new chemotherapy drugs and new biologic cancer drugs increased significantly after the US sanctions, which restricted patients' access. 14 Setayesh and Mackey also claimed that 73 types of medicines closely related to the burden of disease in Iran became scarce following the US sanctions, 44% of which were also classified as essential medicines by the World Health Organization. 15 Sanctions can affect health systems by increasing transportation complications, currency transfer problems, and shortages of money. These, along with limited access to chemotherapeutic agents and severe drug shortages, are cited as the main effects of sanctions on cancer treatment. 16 Barriers to accessing new radiotherapy equipment and limitations in obtaining spare parts for accelerators due to sanctions are other effects of sanctions on public health, particularly on severely ill patients. 17 In short, it appears that the politico-economic sanctions have had direct and indirect effects on the drug procurement capabilities of the Iranian PSC. According to Yazdi-Feyzabadi and others, weak health services, high health technology costs, increased prices of drugs and medical supplies, and decreased affordability and accessibility are among the most critical direct effects of sanctions on the health system. The socioeconomic, structural, and intermediate determinants of health also play an essential role, as indirect adverse effects of sanctions, on the health system. 18 In light of the above, this study investigates the challenges of Iran's PSC in different areas after the imposition of politico-economic sanctions, which may affect the resilience of the supply chain.
Identifying the challenges could help policymakers find solutions to increase the resilience of the PSC in the future. Methods The current work is a qualitative content analysis conducted in 2019. The Ethics Committee of Shiraz University of Medical Sciences approved the study protocol under the ID IR.SUMS.REC.1397.18779. Semi-structured interviews were conducted with presidents, vice-presidents, and heads of the Food and Drug Administration of Iran's Ministry of Health, Shiraz University of Medical Sciences, and Tehran University of Medical Sciences to explore the challenges facing the PSC in the wake of politico-economic sanctions. The interviewees were selected using the snowball sampling method. Initially, the Vice-president and executives of the Vice-Chancellor's Office of Food and Drug were interviewed at Shiraz University of Medical Sciences. Subsequently, they were asked to introduce other individuals knowledgeable in the field. The study participants were selected from well-informed individuals who had sufficient information on drug storage, distribution, and delivery, as well as the challenges of the PSC, and who were willing to share their opinions. One-on-one interviews were conducted with the participants between October and December 2019, mostly at their workplaces, by one of the researchers (ZD). At the beginning of the interviews, general explanations about the study objectives and the need for confidentiality of information were given verbally. Written informed consent was obtained from all the interviewees, and they were assured that they could withdraw from the study at any time. The average duration of the interview sessions was approximately 50±10 min, and all the interviews were conducted by one of the researchers. The interviews were recorded with the consent of the participants and transcribed shortly after completion. The interviews continued until saturation was reached, which occurred after 18 interviews. To prepare the semi-structured interview guide, which consists of five main items and 18 sub-items, a review of relevant literature and an open-ended pilot interview were conducted. The guide was then validated by two experts from the Food and Drug Administration associated with Shiraz University of Medical Sciences. In addition, the validity of the interview guide was confirmed after conducting two pilot interviews with participants. The main questions were as follows: In your opinion, what are the main negative impacts of sanctions on drug procurement in Iran? What are the main barriers to importing, producing, and distributing medicines during the sanctions? How can the politico-economic sanctions impact the PSC? In what areas can the resilience of the supply chain be damaged by the sanctions? To increase the accuracy and precision of the study, the four criteria developed by Guba and Lincoln were considered, namely credibility, confirmability, transferability, and dependability. 19 To increase the credibility of the study, long-term engagement and continuous observation were used. As a result, the researchers were fully involved in the study, adequate communication was established with the participants, and general concepts that emerged during the study were accepted. To this end, the interviews and a review of related literature were incorporated. To increase the confirmability of the findings, the participants were presented with the coded data to verify their accuracy.
The transferability of the study findings was promoted by describing the characteristics of the informed study participants and the interview method in a comprehensible manner. An attempt was also made to select the samples entirely in line with the objectives of the study and without any bias. The data were analyzed in parallel with their collection to help the researchers become fully acquainted with the principles of theoretical research. To increase the dependability of the study findings, the process of coding the concepts, themes, and audio and textual information was documented. To ensure this, two researchers analyzed the content individually and discussed the themes to reach a consensus in case of disagreement. In addition, data analysis was conducted using a five-step framework approach. First, the researcher listened to the audio files of the interviews several times, and the transcribed texts were read several times to identify the data. Second, to identify a thematic framework, repeated ideas in the identification process were transformed into groups of similar ideas or codes. Third, indexing units, or parts of the data associated with a particular code, were characterized. Subsequently, after indexing, the data were summarized in a code table based on the thematic framework. The data were ultimately combined, and graphs and interpretations were employed to define the concepts, show the relationships between them, specify the nature of the phenomenon, and offer explanations to determine the challenges. 20 The coding and classification of the data were performed by two of the researchers (PB and GM), who had sufficient familiarity and reflexivity with qualitative analysis, using the software MAXQDA 10 (VERBI GmbH, Germany). Results All 18 respondents were male (100%), with a mean age of 55.25±8.3 years. The analysis of the interviews led to the identification of five main themes and 15 sub-themes related to the challenges facing the PSC during the politico-economic sanctions. As represented in table 1, the main themes included the following challenges: financing medicines, purchasing and importing medicines, domestic production of medicines, storage and distribution of medicines, and community- and patient-associated challenges. Financing Challenges Challenges related to drug financing comprised four sub-themes, namely international transaction banks, lack of financial resources, interactions with international organizations, and reverse trafficking. All the respondents believed that sanctions have caused numerous challenges in the PSC through disruptions in electronic transactions. Several interviewees also acknowledged that the unavailability of foreign currency and Iranian Rial meant that medicines could not be delivered. In this regard, one of the participants said: "Unfortunately, the funds for the supply of medicines are currently in the hands of the health insurance companies, which are unable to pay due to insolvency." [P6] In addition, some of the interviewees felt that interactions with international organizations were very limited under the sanctions, although these organizations did provide some support to Iran in this area; for example, one of the participants said: "Before the sanctions, international organizations did not help our country much in terms of financial problems. Their support was also not strong enough in the field of health.
During the sanctions, remittance procedures overshadowed such contributions." [P4] Most interviewees mentioned reverse trafficking triggered by Iran's exchange rate fluctuations as one of the challenges facing the PSC. In this regard, one of the participants said: "Prices in our country are not set at a real rate, and exchange rate fluctuations can create a two-price market and lead to reverse trafficking." [P15] Challenges in Purchasing and Importing Challenges in purchasing and importing medicines included three sub-themes: inadequate attention to medicine requirements, problems associated with Iran's Customs Administration and clearance procedures, and the processes involved in importing medicines. Certain respondents cited the lack of attention to medical needs in Iran and the lack of estimation of annual needs when ordering medicines from importers as challenges in purchasing and importing medicines; for instance, one of the participants stated: "Information on diseases, such as information on the prevalence and incidence of certain diseases in different regions of Iran, is of the utmost importance to make an initial estimate of the needs, which should be based on accurate information about actual needs." [P3] Most of the respondents also counted Iran's Customs Administration and clearance procedures among the challenges of the PSC. They believed that the clearance of medicines was delayed by Iran's Customs Administration and that the procedures for the clearance of essential items, in particular, were time-consuming. In this regard, one of the participants opined: "Currently, there are few interactions between the Ministry of Health and Medical Education and the Customs Administration. I think the authorities in Iran's Customs Administration must know that the timely release of items is of great importance to avert harm to the pharmaceutical market." [P7] Almost all the respondents said that the importation of medicines faced many challenges, such as a lack of assessment of importers' capabilities, instability and delays in importing medicines and medical devices, and reduced competition because some products are imported by only one company. In this regard, one of the participants said: "Unfortunately, currently, government corruption and rent-seeking related to imported items are rife, and the government has created a kind of self-imposed sanction against medicine imports." [P5] Challenges Facing Domestic Production The challenges facing the domestic production of drugs included three sub-themes: hoarding by manufacturing companies, lack of raw materials for production, and inability to update scientific knowledge. Some interviewees pointed to hoarding by manufacturing companies as one of the challenges facing the PSC during the sanctions; for example, one of the participants stated: "In the case of the current sanctions, some manufacturing companies are hoarding a large amount of the most expensive prescription drugs, which may lead to increased prices and unavailability of certain items at short intervals."
[P11] From the perspective of the majority of the interviewees, the lack of raw materials for production was a challenge facing the PSC. In this regard, one of the participants noted: "Although we can produce drugs, we are highly dependent on imported raw materials, especially those for packaging." [P9] The inability to update scientific knowledge was also mentioned by three interviewees as a challenge for the PSC. In this regard, one of the participants added: "Sanctions have stopped us from updating our knowledge, which is their least important effect; for example, there is no possibility of progress in terms of new packaging, new formulations, and scientific techniques without any contact with the outside world." [P7] Storage and Distribution Challenges Challenges in the storage and distribution of medicines included three sub-themes: exploitation by stakeholders and brokers, lack of control over medical products, and problems in prescribing and supplying. Certain participants believed that exploitation by some stakeholders and brokers could be a challenge for the PSC under sanctions. In this regard, one of the participants stated: "Since a big proportion of the profits belong to the manufacturing and distribution companies, pharmacies stockpile drugs to increase prices and disrupt the supply and demand cycle. After the scarcity sets in, they sell them at high prices, and the drugs gradually enter the black market." [P13] Some interviewees identified the lack of control over medical products and devices as one of the challenges facing the PSC. In this regard, one of the participants said: "There are a few problems in terms of controlling and monitoring medicines, but there is less monitoring in the area of consumables due to the volume of activities and delivery points. Thus, pharmacies are inspected at least twice a month, yet medical products are not inspected." [P2] Problems with prescribing and supply were also mentioned by some interviewees as challenges related to the sanctions. In this regard, one of the participants stated: "In the current situation, it is not possible to talk about replacing and changing your medication regime." [P14] Community and Patient-Associated Challenges Challenges related to the community and patients included two sub-themes: social and psychological impacts on the community and compromised health and well-being. Almost all the respondents believed that sanctions have caused negative social and psychological impacts, such as feelings of insecurity and high dissatisfaction, among people. One of the participants said: "I think the most important problem related to sanctions is social dissatisfaction and the reduction of social capital. I think the trust between the people and the government is damaged." [P12] In addition, most respondents acknowledged that sanctions jeopardize the health and well-being of the community and that patients with chronic illnesses, in particular, suffer. In this case, one of the respondents stated: "Since medicines and equipment are imported with delays, the cost of treatment delays is one of the problems associated with sanctions. Since drugs are available for a limited duration and are difficult to get at other times, the treatment is not completed, and the course of treatment is prolonged."
[P8] Discussion The findings of this study revealed that the challenges facing the PSC during the period of politico-economic sanctions included financing, purchasing, importation, production, and storage and distribution of drugs, in addition to community- and patient-associated challenges. As mentioned earlier, the findings indicated that the difficulty of working with international transaction banks, lack of financial resources, limited interaction with international organizations, and reverse trafficking were among the challenges in financing medicines. It should be noted that Iran has faced various regional and international sanctions on foreign trade and financial and banking services over the recent decade, intensified by the US after May 2018. On the other hand, Iran's pharmaceutical industry usually plays an important role in providing medicines and necessary medical services to patients. 21,22 Although medicines are exempted from sanctions, pharmaceutical companies have encountered many difficulties importing raw materials and medicines after the sanctions due to foreign exchange restrictions. A study by Cheraghali and others also revealed that one of the weaknesses of the PSC under sanctions, which reduces its resilience, was the mode of payment for imported drugs and raw materials. 12 Similarly, according to Kokabisaghi, politico-economic sanctions have had severe effects on the macroeconomy, decreasing government revenues, increasing inflation, depreciating the national currency, and raising unemployment, among others. Such macroeconomic pressures could directly lead to a progressive deterioration in the well-being and health of the community and reduce access to healthcare and medicine. 23 Challenges in procuring and importing drugs also included lack of attention to drug needs, problems with Iran's Customs Administration and clearance process, and the processes involved in importing drugs. It appears that the lack of access to prevalence rates of diseases in different regions of Iran has led to inaccurate assessment of treatment needs. In addition to shortfalls in certain items, some other medicines expire due to excessive imports and orders, resulting in a loss of financial resources. Moreover, the abundance and variety of stringent regulations in Iran's Customs Administration have resulted in a time-consuming clearance process that can delay access to essential medicines. As a result of sanctions, medicines are imported exclusively by a limited number of companies, which has led to reduced monitoring and inadequate assessment of importers. Similarly, Moret highlighted the negative impact of sanctions on access to and use of medicines by the citizens of Iran and Syria. 24 This restriction may be partially due to limits on the importation of essential medicines. The challenges associated with the domestic production of drugs related to hoarding by manufacturing companies, lack of raw materials for production, and failure to update scientific knowledge. It seems that some manufacturing companies are taking advantage of this chaotic situation in Iran and exacerbating the crisis through storage and hoarding. In addition, domestic production is extremely dependent on imported pharmaceutical and non-pharmaceutical products and advanced equipment and machinery, which is hindered by sanction-related conditions and the inability to access new pharmaceutical production science.
The results of the study by Ghiasi and others also showed that access to imported and domestic medicines for the treatment of asthma was reduced by 19% and 42%, respectively, due to sanctions, which has increased patients' suffering. 25 Storage and distribution challenges were associated with exploitation by stakeholders and brokers, lack of control and monitoring of medical products, and problems in prescribing and supply. Unfortunately, it appears that pharmacies, as the last link in the PSC, are storing away drugs and creating a black market. Moreover, the limited access of manufacturing companies to the best treatments based on scientific and experimental resources causes numerous problems for patients. The results of a study by Kheirandish and others indicated that sanctions have reduced access to medicines for asthma and cancer patients, mainly imported medicines and those whose raw materials depend on imports. 26 Finally, the challenges associated with the community and patients included the social and psychological impact on people and the threat to health and well-being. Sanctions have created a hostile social and psychological atmosphere, a sense of insecurity, a decline in public trust in the government, reduced social capital, and higher levels of dissatisfaction among people. In this regard, the good performance of pharmaceutical companies and their adherence to ethical obligations and provision of affordable medicines, specifically for the poor, have resulted in increased social capital and better social welfare in Bangladesh. 27 Furthermore, the lack of timely provision of medicines endangers patients' lives and increases the severity of their diseases. Examining data on infants and children in 69 countries, Petrescu found that underweight infants and children in countries subject to sanctions are more likely to die before the age of three. 28 The current status of the drug market in Iran suggests that sanctions have affected the general public and community health sectors, leading to limited access to necessary medicines in the domestic market and increased patient suffering. Similarly, Aigbogun and others cited environmental fluctuations, external pressures, high sensitivity of the PSC, and dependencies on and linkages with external elements as the main challenges to PSC resilience in Malaysia, 7 which is consistent with the findings of the present study. To sum up, the present findings imply that politico-economic sanctions can harm the PSC and raise several issues for public health. Evidence from other countries' experiences also shows that sanctions can limit many diagnostic and treatment services. According to a report on North Korea, the food crisis, the decline in medical research, and the reduced number of therapeutic and diagnostic items available to citizens are among the greatest challenges resulting from the imposition of sanctions on that country. 29 Another study from Cuba also reported that, despite the negative impact of sanctions on the health system, proper management of the PSC could mitigate their adverse effects. 30 Based on the aforementioned discussion, attending to the role of suppliers and producers, as well as to the role of the medical community and consumers, as a strategy to address the challenges of the PSC under politico-economic sanctions can go a long way toward improving the quality of medicines, increasing production and exports, and addressing the challenges posed by sanctions in the pharmaceutical sector.
Most importantly, policymakers should consider informing the medical community about the use of alternative medicines and methods, the role of providers in cases of poor brand loyalty, and consumer-patient collaboration. The present paper has a limitation: a national document analysis could have been merged with the data obtained from the semi-structured interviews for more robust data and to achieve triangulation in the qualitative research. Conclusion The findings of this study imply that pharmaceuticals may be affected by politico-economic sanctions in five main areas, namely financing, purchasing and importing, manufacturing of domestic products, storage and distribution of medicines, and challenges to people and patients. Identifying supply chain challenges and planning to address them can inform policymakers on how to increase PSC resilience in the future.
Bioremediation of dyes by fungi isolated from contaminated dye effluent sites for bio-usability. Biodegradation and detoxification of the dyes Malachite green, Nigrosin, and Basic fuchsin have been carried out using two fungal isolates, Aspergillus niger and Phanerochaete chrysosporium, isolated from dye effluent soil. Three methods were used for biodegradation: the agar overlay method and liquid media methods under stationary and shaking conditions at 25 °C. Aspergillus niger recorded maximum decolorization of the dye Basic fuchsin (81.85%), followed by Nigrosin (77.47%), Malachite green (72.77%) and the dye mixture (33.08%) under shaking conditions, whereas P. chrysosporium recorded maximum decolorization with Nigrosin (90.15%), followed by Basic fuchsin (89.8%), Malachite green (83.25%) and the mixture (78.4%). The selected fungal strains performed better under shaking conditions compared to the stationary method; moreover, inoculation of the fungus also brought the pH of the dye solutions from acidic to neutral. A seed germination bioassay showed that seeds germinated in inoculated dye solutions, while uninoculated dyes inhibited germination even after four days of observation. Similarly, microbial growth was inhibited by uninoculated dyes. The excellent performance of A. niger and P. chrysosporium in the biodegradation of textile dyes of different chemical structures suggests and reinforces the potential of these fungi for environmental decontamination. Introduction Due to rapid industrialization and urbanization, many chemicals, including dyes, are manufactured and used in day-to-day life. About 100,000 commercial dyes, spanning several varieties such as acidic, basic, reactive, azo, diazo, and anthraquinone-based metal-complex dyes, are commercially available, with an annual production of over 7 × 10⁵ metric tons (Campos et al., 2001). Approximately 50% of the dyes are released in industrial effluents (Zollinger, 1991). They are used on several substrates in the food, cosmetics, paper, plastic and textile industries. Some of them are dangerous to living organisms due to their potential toxicity and carcinogenicity. Dyes in wastewater can have serious consequences; for example, the incidence of bladder tumors has been reported to be particularly higher in dye industry workers than in the general population (Suryavathi et al., 2005). Natural pigments used for coloring textiles have been replaced by "fast colors" which do not fade on exposure to light, heat and water. These features unfortunately come with the peril of harmful effluent quality. About 15% of the dyes used for textile dyeing are released into processing waters (Eichlerova et al., 2006). Besides being unaesthetic, these effluents are mutagenic, carcinogenic and toxic (Chung et al., 1992). Commonly applied treatment methods for color removal from colored effluents consist of integrated processes involving various combinations of biological, physical and chemical decolorization methods (Galindo and Kalt, 1999; Robinson et al., 2001; Azbar et al., 2004); even so, approximately 10-15% of unused dyes enter the wastewater after dyeing and the subsequent washing processes (Rajamohan and Karthikeyan, 2006). Chemical and physical methods for the treatment of dye wastewater are not widely applied in textile industries because of exorbitant costs and disposal problems.
Green technologies to deal with this problem include adsorption of dyestuffs on bacterial and fungal biomass (Fu and Viraraghavan, 2002; Yang et al., 2009) or on low-cost non-conventional adsorbents (Crini, 2006; Ferrero, 2007). A variety of physicochemical treatments have previously been devised for dyes and textile wastewater. However, these suffer from some serious drawbacks in terms of their limited applications or their high cost. Besides, chemical treatments create an additional chemical load in water bodies that eventually results in sludge disposal problems. Several factors determine the technical and economic feasibility of each single dye removal technique. These include dye type and concentration, wastewater composition, operation costs (energy and material), environmental fate, and handling costs of generated waste products. A very small amount of dye in water (10-50 mg/L) is highly visible and reduces light penetration in water systems, thus negatively affecting photosynthesis (Khaled et al., 2010; Dhanjal et al., 2013). Recently, dye removal has become a research area of increasing interest, as government legislation concerning the release of contaminated effluent becomes more stringent. Various treatment methods for the removal of dyes from industrial effluents, such as chemical coagulation using alum, lime, ferric chloride, ferric sulphate, and electrocoagulation, are very time-consuming and costly, with low efficiency. Among the numerous water treatment technologies, research interest in fungal bioremediation has increased significantly for the decolorization and degradation of synthetic dyes, owing to the greater biomass of fungi compared to bacteria (Shahid et al., 2013). Keeping the above points in view, the main objective of this work was to screen and employ selected fungal species from textile dye effluent soil capable of decolorizing and detoxifying textile dyes, using solid and liquid media under shaking and stationary conditions. Methods and Materials Chemicals and media All the chemicals, dyes and media, such as Potato Dextrose Agar (PDA), Potato Dextrose Broth (PDB) and Nutrient Agar (NA), were procured from Himedia, Mumbai, India. Source of fungal strains The fungal strains A. niger and P. chrysosporium, isolated from dye effluent soil (from the Meerut region, Uttar Pradesh, India), were maintained on Potato Dextrose Agar (Himedia, Mumbai, India) and subcultured periodically to maintain their viability. Identification of these soil fungal strains had been done previously based on their morphological characters (Yao et al., 2009). These fungi were selected for the present study because A. niger and P. chrysosporium have been widely studied for dye decolorization. Screening of soil-derived fungi for dye decolorization activities Sixty-one dye effluent soil fungal strains were screened for their ability to degrade dyes using the tube overlay method. Initially, the fungal strains were grown on culture plates pre-filled with Potato Dextrose Agar (PDA) and incubated at room temperature for 14 days. Following incubation, mycelial agar plugs (~5 mm²) were cut approximately 5 mm from the colony margin and inoculated into test tubes (in triplicate) containing 5 mL of PDA overlaid with 1 mL of PDA with 0.01% (w/v) of the respective textile dye. All culture tubes were incubated at room temperature (~25 °C) and observed weekly for up to four weeks. Clearing of the overlaid dye indicates full decolorization (+++).
Partial dye decolorization (++) was indicated by lower dye intensity in comparison with the control (uninoculated PDA overlaid with PDA + 0.01% dye). The fungal strains were selected on the basis of full or maximum (+++) decolorization. Decolorization of dyes in solid medium (tube overlay method) The selected fungal strains were further tested for their ability to decolorize dyes on PDA and Sabouraud Dextrose Agar (SDA) medium (Himedia, Mumbai, India). This was done to determine which medium better supported the growth and dye decolorization activities of the selected fungal isolates. Initially, the fungal strains were grown as previously described. Following incubation, fungal mycelial agar plugs (~5 mm²) were cut approximately 5 mm from the colony margin and inoculated into test tubes (in triplicate), each pre-filled with 2 mL of PDA or SDA medium supplemented separately with 0.01% (w/v) of one of the following dyes: Malachite green, Nigrosin, or Basic fuchsin (Lopez et al., 2006). The culture tubes were then incubated at room temperature (~25 °C). The growth of the fungi and their ability to decolorize the dye were observed weekly for up to four weeks. The depth of dye decolorization (in mm), indicated by clearing of the dye, was then measured. Based on the growth of the fungal strains and dye decolorization, the PDA medium was chosen for further studies. Assay for the dye decolorization activities of fungi in liquid media The spores and mycelia of A. niger and P. chrysosporium were dislodged from Petri plates using a flame-sterilized inoculating loop and mixed thoroughly with one mL of sterile distilled water. From this mixture, 10 mL of the fungal spore and mycelium inoculum were added to culture vials (in triplicate) pre-filled with 25 mL Potato Dextrose Broth (PDB) supplemented with 0.01% of one of the following dyes: Malachite green, Nigrosin or Basic fuchsin. Three sets were prepared and incubated either under constant agitation/shaking (100 rpm, Yorko Scientific orbital shaker) or under stationary conditions without shaking (Park et al., 2007). All culture vials were incubated at room temperature (~25 °C) for 10 days, and all assays were performed in triplicate. Growth and dye decolorization were noted every day. After culture for 10 days, the culture filtrates were decanted and subjected to spectrophotometric analysis. Absorbance maxima of the tested dyes were read at the following wavelengths: Malachite green, 620 nm; Nigrosin, 600 nm; and Basic fuchsin, 550 nm. The extent of dye decolorization by the soil fungal strains in liquid media was calculated using the following formula: Pdd = [(Absorbance_c − Absorbance_i) / Absorbance_c] × 100, where Pdd = percent dye decolorization, Absorbance_c = absorbance of the uninoculated control, and Absorbance_i = absorbance of the inoculated sample. Finally, the mycelial biomass was harvested on clean Petri plates and observed directly. Fungal hyphae were also mounted on clean glass slides and observed under a compound light microscope (Olympus, 1500×) for the biosorption of dyes.
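The decolorization formula above reduces to a one-line calculation; a hypothetical R helper with illustrative absorbance readings (the numbers are made up, chosen only to reproduce a value in the range reported in the results) might look like this.

```r
# percent dye decolorization from control and inoculated absorbance readings
percent_decolorization <- function(abs_control, abs_inoculated) {
  (abs_control - abs_inoculated) / abs_control * 100
}

# hypothetical example: Nigrosin read at 600 nm
percent_decolorization(abs_control = 1.32, abs_inoculated = 0.13)  # ~90.15%
```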
Enzyme assay Laccase activity was measured using syringaldazine as a substrate, as per the method of Valmaseda et al. (1991). The activity was assayed using 1.0 mL of 0.2 M sodium phosphate buffer (pH 5.7) and 0.2 mL syringaldazine (1.6 mg/mL; 4.47 mM) in absolute ethanol. Reactions were initiated by the addition of syringaldazine and, after mixing, incubations were conducted at 30 °C for 1 h, because the highest laccase activity was observed after 1 h. The absorbance was measured in a spectrophotometer (ELICO SL 150) before (0 time) and after incubation (60 min) at 526 nm, and the increase in absorbance was calculated. One unit of activity was defined as the amount of enzyme producing one absorption unit/min at 526 nm. Seed germination bioassay The effect of bioremediated and untreated dye solutions on wheat seed germination was observed. The wheat seeds were sterilized using 0.1% HgCl₂ solution for 50 s and washed 6-7 times with sterile distilled water to remove traces of HgCl₂. Sterile filter paper was placed in sterile Petri plates and soaked in bioremediated dye solution or untreated dye solution, with filter paper soaked in sterile distilled water as the control. Ten wheat seeds were kept in each Petri plate, and the experiment was conducted in triplicate. Seed germination was observed for four days. The experiment was conducted at a room temperature of 25 ± 1 °C. Bacterial toxicity The effect of bioremediated and untreated dye solutions on bacterial growth was assessed by measuring zones of inhibition. Log-phase cells of E. coli (0.1 mL of a 10⁻⁸ dilution) were evenly spread on Petri plates, and sterile filter paper discs impregnated with bioremediated dye solution, untreated dye solution, or sterile distilled water were placed on the seeded bacterial cells at equal distances, pressed lightly, and kept at 30 °C for 48 h; any zone of inhibition was then recorded (Kumar, 2011). Results Out of 61 fungal isolates, two were selected after comprehensive screening for textile dye biodegradation for further studies. Dye decolorization in liquid medium by P. chrysosporium exhibited a different tendency from that in the tube overlay method. In this method, the highest decolorization was observed with Nigrosin (90.15%), followed by Basic fuchsin (89.8%), Malachite green (83.25%) and, least of all, the dye mixture (68.4%) under shaking conditions (Figure 3). Changes in pH were also observed with A. niger and P. chrysosporium under shaking and stationary conditions; after inoculation, the pH of all the dye samples increased to neutral or near neutral (Figures 4, 5). The microbial bioassay showed that untreated dye (control) inhibited the growth of E. coli on Petri plates by forming a zone of inhibition, while the treated dye did not produce any zone of inhibition (Figure 6). Moreover, the seed germination experiment showed the effect of untreated and treated dyes on the germination of seeds: the germination percentage reached up to 90% with treated dye, while the untreated (control) dye inhibited the germination of wheat seeds (Figure 7). Data on zones of clearance and seed germination are not shown for brevity. Discussion The use of these fungi could thus offer a much cheaper and more efficient alternative treatment of wastewaters heavily contaminated with textile dyes. However, even though qualitative assays using the tube overlay method are powerful tools in screening fungi for extracellular enzyme production, they are not conclusive, in that a negative reaction is not an absolute confirmation of a species' inability to produce a particular enzyme (Abdel-Raheem and Shearer, 2002). Hence, the tube agar overlay method (Lopez et al., 2006) served here only as an initial screen. The removal of dye color is vital for the potential application of effluent soil fungal organisms as bioremediation agents in wastewater treatment plants and in runoff waters. Thus, it is essential to test dye-contaminated-soil fungal strains for dye decolorization in liquid medium. Aspergillus niger and P. chrysosporium bioremediated Nigrosin, Basic fuchsin and Malachite green within 6 days, with decolorization of up to 90%.
Kanmani et al. (2011) reported that Nigrosin was a substrate for the ligninolytic enzyme lignin peroxidase. Microbes have developed enzyme systems for the decolorization of azo dyes; moreover, although dye molecules display a high structural variety, they are degraded by only a few enzymes. These biocatalysts have one common mechanistic feature: they are all redox-active molecules and thus exhibit relatively wide substrate specificities (Mester and Tien, 2000). In the case of enzymatic remediation of azo dyes, azo reductases and laccases seem to be the most promising enzymes. Laccases are multicopper phenol oxidases that decolorize azo dyes through a highly nonspecific free radical mechanism forming phenolic compounds, thereby avoiding the formation of toxic aromatic amines (Wong and Yu, 1999). Fungal systems appear to be most appropriate for the treatment of azo dyes (Ezeronye and Okerentugba, 1999). The capacity of fungi to reduce azo dyes is related to the formation of exoenzymes such as peroxidases and phenol oxidases. Peroxidases are hemoproteins that catalyze reactions in the presence of hydrogen peroxide (Duran et al., 2002). Laccase oxidizes the phenolic group of a phenolic azo dye, with the participation of one electron, generating a phenoxy radical that is sequentially oxidized to a carbonium ion. A nucleophilic attack by water on the phenolic ring carbon bearing the azo linkage then takes place, producing 3-diazenyl-benzenesulfonic acid (III) and 1,2-naphthoquinone. The use of A. niger and P. chrysosporium as dye biodegraders or decolorizers has been studied in this report, and the efficient decolorization may be attributed either to the action of extracellular enzymes such as laccase or to biosorption by the fungal biomass, or both. Laccase production by the soil fungal species was studied; this test was performed to determine whether the laccase enzyme plays any role in the biodegradation of dyes. Furthermore, the soil fungal strains also showed promising decolorization activities against the tested dyes (Figures 3, 4). Ali et al. (2009) and Vasudev (2011) showed that Malachite green was readily degraded in liquid culture by A. flavus, A. solani and some white rot fungi within six days, up to 96%, as also shown in this study. The results of dye biodegradation by soil fungi in this study, obtained by spectrophotometric analysis, were comparable with the percent dye decolorization exhibited by the white-rot fungi Trametes versicolor and Pleurotus ostreatus (Yao et al., 2009), and even P. chrysosporium (Bumpus and Brock, 1988). Soil fungi possess ligninolytic enzymes and play an important role in the degradation of lignocellulose in soil ecosystems (Okino et al., 2000). These lignin-degrading enzymes are directly involved not only in the degradation of lignin in natural lignocellulosic substrates but also in the degradation of various xenobiotic compounds, including dyes. Moreover, ligninolytic enzymes have been reported to oxidize many recalcitrant substances such as chlorophenols, polycyclic aromatic hydrocarbons (PAHs), organophosphorus compounds, and phenols (Wesenberg et al., 2003). Additionally, it is not unusual for some species to demonstrate both enzyme-mediated degradation and biosorption in the decolorization of textile dyes (Park et al., 2007; Shahid et al., 2013).
It is thus feasible that, in addition to the production of extracellular enzymes, the ability of the dye effluent soil fungi to decolorize synthetic dyes is coupled with their biosorption abilities (Kaushik and Malik, 2009). We also observed dye absorption by the test fungal mycelium under a light microscope (1500×) (Figure 8). This may account for the more efficient textile dye biodegradation by the soil fungal strains (Kirby et al., 2000) (Figures 1-4). It is therefore possible that the ability of the dye effluent soil-derived fungi to degrade Malachite green, Nigrosin and Basic fuchsin, as revealed in this study, can be largely attributed to the lignin-degrading enzyme system of the organisms. In addition to extracellular enzymes, it is also likely that the dye decolorization activity of these fungi can be attributed to the ability of their mycelia to adsorb/absorb the dye. The bioremediation rate was higher with individual dyes than with the dye mixture, which could be because a mixture of dyes forms different complex structures that become resistant to biodegradation. Biosorption of dyes occurs essentially through complexation, adsorption by physical forces, precipitation, entrapment in the inner spaces of the fungal mycelium, ion exchange due to surface ionization, and the formation of hydrogen bonds (Yeddou-Mezenner, 2010). Due to an increased cell-to-surface ratio, fungi have greater physical contact with the environment. Thus, some fungi have demonstrated dye adsorption potential exceeding that of activated charcoal (Fu and Viraraghavan, 2002). Detoxification of all the dyes was finally confirmed by the wheat seed germination and bacterial growth bioassays. The untreated dyes inhibited wheat seed germination even after four days of incubation, while seed germination was observed after 48 h in the treated dye treatments. Similarly, filter paper discs impregnated with untreated dye solution exhibited zones of inhibition of microbial growth, while discs impregnated with treated dyes showed no zone of inhibition. The results of this study suggest that potentially competent fungal strains can be efficiently used for the detoxification and bioremediation of harmful dyes. Conclusions The decolorization of dyes was studied under stationary and shaking conditions; encouraging results were obtained after 3 days, but maximum decolorization of all the dyes was obtained after 6 days. In this study, we observed higher decolorization under shaking conditions by P. chrysosporium and A. niger, which could be due to better oxygenation of the fungus and regular contact of the secreted enzymes with dye molecules; moreover, agitation also helps the fungus to grow better. The disappearance of dye color may be due to biodegradation of the chromophore in the dye molecule by extracellular enzymes produced by the fungi, along with absorption and adsorption. Owing to the environmentally friendly techniques it utilizes, bioremediation has been characterized as a soft technology. Its cost-effectiveness and the little disturbance it causes in the environment render this technology a very attractive alternative method of choice. The identification and study of new fungal strains with the aid of molecular techniques will further improve the practical applications of fungi, and it is anticipated that fungal remediation will soon be a reliable and competitive dye remediation technology.
Fabrication of Kidney Proximal Tubule Grafts Using Biofunctionalized Electrospun Polymer Scaffolds

The increasing prevalence of end-stage renal disease and the persistent shortage of donor organs call for alternative therapies for kidney patients. Dialysis remains an inferior treatment, as clearance of large and protein-bound waste products depends on active tubular secretion. Biofabricated tissues could make a valuable contribution, but kidneys are highly intricate and multifunctional organs. Depending on the therapeutic objective, suitable cell sources and scaffolds must be selected. This study provides a proof-of-concept for stand-alone kidney tubule grafts with suitable mechanical properties for future implantation purposes. Porous tubular nanofiber scaffolds are fabricated by electrospinning 12%, 16%, and 20% w/v poly-ε-caprolactone (PCL) (chloroform and dimethylformamide, 1:3) around 0.7 mm needle templates. The resulting scaffolds consist of 92%, 69%, and 54% nanofibers compared to microfibers, respectively. After biofunctionalization with L-3,4-dihydroxyphenylalanine and collagen IV, 10 × 10^6 proximal tubule cells per mL are injected and cultured until experimental readout. A human-derived cell model can bridge all fiber-to-fiber distances to form a monolayer, whereas small-sized murine cells form monolayers on dense nanofiber meshes only. Fabricated constructs remain viable for at least 3 weeks and maintain functionality as shown by inhibitor-sensitive transport activity, which suggests clearance capacity for both negatively and positively charged solutes.

Introduction

Due to a constant scarcity of donor kidneys, approximately two million end-stage renal disease patients worldwide must undergo dialysis, which is the only treatment option besides organ transplantation. Unfortunately, hemodialysis does not provide the same long-term beneficial effects on quality of life and survival as kidney transplantation; annual mortality on hemodialysis lies around 20%. [1,2] This poor outcome is related to the therapeutic restrictions of dialysis: diffusion and convection remove only a fraction of metabolic waste products from the blood, predominantly small uremic solutes (<500 Da). Meanwhile, large and protein-bound solutes remain in the body, as their clearance depends on active tubular secretion. Retention and gradual accumulation of these waste products, also known as uremic toxins, are a hallmark of chronic kidney disease (CKD) and are associated with disease progression, cardiovascular complications, and increased mortality. [3] For this reason, renal assist devices (RADs) are being developed to enhance conventional dialysis. These devices include an extracorporeal cellular unit of renal proximal tubule epithelial cells, which express multiple transporters that cooperate in the basolateral uptake and luminal excretion of various endogenous metabolites, including uremic toxins. [3,4] FDA-approved phase I/II clinical studies with a RAD suggested a potential added value of such a device in the treatment of critically ill subjects. [5][6][7] The development, current status, and technical challenges of RADs have been extensively reviewed elsewhere. [2,[8][9][10] Unfortunately, the efficiency of current RADs is limited to short-term extracorporeal applications. In the long run, implantable constructs for continuous blood clearance and maintained cell function would be the best solution.
Promising kidney engineering approaches comprise innovations from stem cell-based therapies and de novo organogenesis to kidney decellularization techniques and additive manufacturing technologies like 3D printing. [11] Lab-grown functional and autologous transplants are the holy grail for overcoming donor kidney shortage, graft rejection, and lifelong immunosuppressive therapy. However, due to the immense anatomical and physiological complexity of the kidney, these approaches are still in their infancy and far from clinical application. To steer a middle course, we propose to downscale kidney engineering from a complex whole organ to implantable hollow tubes that follow the principle of RADs by taking advantage of the active secretion system of proximal tubule cells. For the creation of kidney proximal tubules, recent approaches have mainly been focusing on the use of hydrogels. [12][13][14][15][16][17][18] Although promising results have been obtained, these tubules are either intended for in vitro testing only, too fragile for transplantation, or embedded in bulk gels, which would hamper nutrient supply and clearance capacity if not adequately vascularized. Thus, other technologies are required for the fabrication of implantable kidney tubule constructs that display both high diffusibility and mechanical stability.

Solution electrospinning is a traditional method that has to date been employed to fabricate tubular poly-ε-caprolactone (PCL) scaffolds for vascular or neuronal grafts, which have proven biocompatibility and sufficient mechanical stability in mouse, dog, and sheep models. [19][20][21][22][23][24][25][26] Electrospinning also enables the production of nanofiber scaffolds with high porosity and surface-to-volume ratio, which is essential for the desired diffusibility of bioengineered kidney tubes. Considering their excellent properties for high mechanical stability and diffusibility, electrospun nanofiber tubes would be superior to current hydrogel models for the creation of kidney proximal tubule grafts. Furthermore, biofunctionalized nanofiber meshes could mimic the micro-architecture of the native extracellular matrix (ECM), since the renal basement membrane mainly consists of cross-linked collagen IV fibers. [27] By mimicking the macromolecular ECM architecture and composition as well as its stiffness, cell-ECM interactions could promote normal tissue homeostasis. [28] Thus, electrospun nanofiber meshes have the potential to provide additional physicochemical cues for proper graft functionality. The goal of this study was to electrospin porous tubular scaffolds that enable luminal epithelialization with proximal tubule epithelial cells to construct functional kidney tubule grafts. We electrospun 12%, 16%, and 20% w/v PCL (chloroform and dimethylformamide, 1:3) scaffolds with distinct morphology and mechanical properties. After biofunctionalization with an L-3,4-dihydroxyphenylalanine (L-DOPA) and collagen IV double coating, we cultured two renal cell lines with different cell sizes and origin on the luminal scaffold surface and investigated tight junction formation, long-term viability, and transport functionality.

Scaffold Fabrication

In this study, we designed and characterized biofunctionalized polymer scaffolds for the fabrication of kidney proximal tubule grafts with a luminal tight monolayer of functional renal proximal tubule epithelial cells. An overview of the workflow is depicted in Figure 1.
We used 12%, 16%, and 20% w/v PCL dissolved in dimethylformamide and chloroform (ratio 3:1) to fabricate tubular nanofiber scaffolds, hereafter referred to as 12%, 16%, and 20% PCL scaffolds. These scaffolds had an inner diameter of 0.7 mm, an outer diameter of approximately 1 mm, and distinct morphologies on the microscale. The inner diameter was prespecified by the 0.7 mm needle template, which corresponds to around ten times the diameter of kidney tubules in situ. Although the physiological dimension might be technically feasible, it would have been impractical for experimental handling and is, aside from that, not a prerequisite for heterotopic implantations. Key parameters of the electrospinning set-up, that is, electrical potential, spinneret-to-collector distance, and feeding rate, were optimized to obtain a stable electrified jet. Optimized settings were found with 9-17 kV, a feeding rate of 0.3-0.8 mL h−1 for 20-30 min, and a template distance of 7-12 cm. Using these settings, electrospinning of a 12% PCL solution resulted in stable tubular scaffolds with a wall thickness of 147 ± 63 μm, as compared to 298 ± 107 μm for 16% and 247 ± 90 μm for 20% PCL (Figure 2a,b,d). The wall thickness varied between but also within the scaffolds because of the nonuniform fiber organization, which is a typical characteristic of the electrospinning process. However, the fibers effectively distributed over the rotating needle template due to its electrostatic charge attraction and thereby tended toward an axial orientation pattern (Figure 2c). Also, the fiber diameter was slightly variable within each scaffold. With fiber diameters of 0.53 ± 0.30, 0.88 ± 0.44, and 1.06 ± 0.66 μm for 12%, 16%, and 20% PCL scaffolds, respectively, the diameter increased significantly. Thereby, the percentage of nanofibers compared to microfibers dropped from 92% to 69% and 54%, respectively. It is worth noting that electrospinning of 12% PCL solutions resulted in the formation of beaded fibers, which are the typical consequence of too high a surface tension and too low a polymer concentration or charge density. Beads are often considered a defect, as their presence results in lower mechanical properties. However, this feature does not, per definition, hamper the use of 12% scaffolds for biological application. In the current study, the demonstrated variations in wall and fiber thickness, and hence limited reproducibility, are considered a minor problem, as we strived for a proof-of-concept for the fabrication of electrospun kidney tubule grafts with sufficient mechanical stability and maintained renal transport activity. Nonetheless, the production of scaffolds with well-defined wall thicknesses and fiber organization would be of future interest in order to comply with GMP guidelines as well as to obtain improved cell differentiation and function by providing a favorable microenvironment. [29] To this end, the recently developed technology of melt electrospinning writing could advance the fabrication of tubular scaffolds with highly accurate fiber dispositions, but this falls beyond the scope of the current study. [30,31]

Mechanical Scaffold Properties

After fabrication and structural characterization, the mechanical behavior of the tubular scaffolds was investigated under uniaxial tensile loading (Figure 3a,b). An overall increase in tangent modulus as well as stress and strain at break was observed with increasing polymer concentration, which demonstrates increasing mechanical stability.
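The tangent modulus values reported below are, per the Mechanical Analysis section later in this paper, the slope of the engineering stress-strain curve fitted over its linear 2-5% strain region. A minimal sketch of that calculation follows, run on synthetic stress-strain data rather than the study's measurements:

```python
import numpy as np

def tangent_modulus(strain, stress, lo=0.02, hi=0.05):
    """Slope (in the units of `stress`) of a linear fit to the
    stress-strain curve in the 2-5% engineering strain window."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    mask = (strain >= lo) & (strain <= hi)
    slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
    return slope

# Synthetic curve (strain dimensionless, stress in MPa), shaped loosely
# like a stiff PCL scaffold; these are not measured data.
strain = np.linspace(0.0, 0.6, 300)
stress = 68.0 * strain - 30.0 * strain**2
print(f"Tangent modulus: {tangent_modulus(strain, stress):.1f} MPa")
```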
The obtained tangent modulus of 20% PCL scaffolds, at 67.7 ± 7.4 MPa, was significantly higher than for 12% and 16% PCL scaffolds, at 16.5 ± 7.6 and 23.5 ± 13.6 MPa, respectively (Figure 3c). Interestingly, the tangent moduli of 12% and 16% PCL scaffolds were comparable to the elasticity of native tubular basement membranes, for which values of 3-10 MPa have been reported. [32] Furthermore, 16% and 20% PCL scaffolds exhibited a strain at break of at least 60%, which is a pivotal property, for example, to withstand fluid pressures and to resist the deformation required during implantation. The overall weaker mechanical properties of 12% PCL scaffolds can be explained by the earlier reported presence of beaded fibers (Figure 2c).

Luminal Epithelialization

For epithelialization of the inner scaffold wall, two renal cell lines of different origin, and of considerably different cell size, were used. Induced renal tubular epithelial cells (iREC) were derived from murine fibroblasts through conversion into renal-like cells by transduction of essential transcription factors. [33] For these cells, an average size of 13.7 ± 1.3 μm was measured when cultured in PCL tubular scaffolds. Conditionally immortalized proximal tubule epithelial cells (ciPTEC) are urine-derived proximal tubule cells of human origin. [34] In this study, we used a clonal cell line transduced with Organic Anion Transporter 1 (OAT1), which has been proven to be functionally stable. [35] The measured cell size of these cells was 29.7 ± 5.5 μm when grown in PCL tubular scaffolds. Due to the hydrophobic nature of PCL, cells were only partly able to adhere to the scaffolds, but adhesion properties were considerably improved by coating the PCL scaffolds with the mussel-inspired adhesive L-DOPA and collagen IV, according to a method established previously. [4,[36][37][38] The double coating of L-DOPA and collagen IV enabled the formation of a complete and polarized epithelial monolayer (Figure 4a). Both cell lines were cultured on the luminal side of all three scaffolds and monolayer formation was investigated after 3 weeks (Figure 4b,c). Interestingly, iREC adhered to scaffolds of all three polymer concentrations, but were only able to form a continuous intercellular barrier with high expression of the tight junction marker Zonula Occludens 1 (ZO-1) on 12% PCL scaffolds, while ciPTEC were able to form tight monolayers on all three scaffolds. This indicates that the choice of polymer concentration for scaffold fabrication is not only critical for the desired mechanical properties of the scaffold; the resulting scaffold morphology also has an essential impact on cell barrier formation, which is crucial to allow leakage-free and selective transport of solutes into the lumen. Here, the high content of nanofibers in 12% PCL scaffolds formed a sufficiently dense fiber mesh for the small-sized iREC to adhere and to form cell-cell contacts, whereas higher PCL concentrations created fiber-to-fiber distances that could only be bridged by the bigger-sized ciPTEC. A transverse scaffold view, as shown for ciPTEC (Figure 4c), and a transverse cut of a 12% PCL scaffold with iREC (Figure 4d) show that the cells formed luminal monolayers without significant migration into the scaffold and that the monolayers covered the entire scaffold surface. A viability assay confirmed the formation of dense and viable monolayers for both cell lines, without any sign of material toxicity, after at least 3 weeks of culture (Figure 4e,f).
Construct Functionality

For active uremic toxin removal, kidney proximal tubule grafts must not only possess a complete and tight monolayer; they must also be able to effectively transport metabolic solutes from the outside of the construct into the lumen for subsequent drainage. Renal proximal tubule epithelial cells possess a coordinated network of a multitude of transporters with overlapping specificities for the efficient transcellular transport of a broad spectrum of solutes. OAT1 and Organic Cation Transporter 2 (OCT2) are the most prominent basolateral transporters in human proximal tubule cells, responsible for the uptake of anionic and cationic metabolites, respectively. To demonstrate renal transport activity in our fabricated kidney proximal tubule grafts, 20% PCL scaffolds of matured ciPTEC were incubated for 10 min with the fluorescent organic anion fluorescein or the fluorescent organic cation 4-(4-(dimethylamino)-styryl)-N-methylpyridinium iodide (ASP+), in the absence or the presence of the OAT1 inhibitor probenecid or the OCT2 inhibitor tetrapentylammonium (TPA+). Fluorescence microscopy imaging confirmed substrate uptake, with a significant decrease in both fluorescein uptake (p < 0.0001) and ASP+ uptake (p < 0.025) in the presence of the respective transport protein inhibitor (Figure 5). These results suggest that electrospun tubular scaffolds allow the rapid diffusion of both negatively and positively charged compounds through the fibrous scaffold wall toward the basolateral side of the luminal cell monolayer. Moreover, we demonstrated that OAT1 and OCT2, both located at the basolateral membrane of tubular epithelial cells, maintained renal transport functionality for at least 3 weeks of culture. These are two very important features of kidney proximal tubule grafts for the efficient and continuous clearance of metabolic waste products from the body.

Conclusion

Tissue engineering is a rapidly developing field, but kidney engineering strives for a whole organ of as yet inimitable complexity. Inspired by the principle of RADs, we downscaled kidney engineering to simple tubular scaffolds with proximal tubule cells to focus on the unmet medical need of active uremic toxin removal. Among the technologies used for the fabrication of porous tubular scaffolds, solution electrospinning is traditional and yet popular due to its simplicity and cost-effectiveness. Here, we demonstrated that biofunctionalized electrospun polymer scaffolds can be used for the creation of kidney proximal tubule grafts. Sufficient mechanical stability, rapid diffusibility, tight cellular monolayer formation, and prolonged construct viability and functionality demonstrated superior properties over existing proximal tubule models with regard to implantation purposes and continuous blood clearance as renal replacement therapy. It should be noted that different cell sources must be developed and extensively characterized before implantation, for example, patient-derived or HLA-matched induced pluripotent stem cells differentiated to proximal tubule epithelial cells. Moreover, advanced biomaterials will likely further improve scaffold characteristics. For this first proof-of-concept of porous stand-alone kidney tubule grafts, we used PCL, a well-characterized biodegradable polymer, and a simple electrospinning set-up.
However, scaffold fabrication should be extended to more advanced materials, for example, collagen IV, hydrogel/scaffold composites, or decellularized extracellular matrices, as well as to more advanced technologies, for example, melt electrospinning writing. By selecting optimal parameters regarding scaffold dimensionality, topography, effective surface stiffness, and substrate thickness, we will be able to produce well-defined scaffolds with conceivably enhanced cell function. [39][40][41][42] Thereby, advanced biofabrication approaches could enable the adaptation of RAD principles to implantable, well-defined tubular tissue constructs with fine-tuned mechanical properties and biological functionality.

Solution Electrospinning

An in-house built solution electrospinning setup was used in this study, as illustrated in Figure 1. The system consisted of a programmable syringe pump (NE-1000, New Era Pump Systems, Inc., USA) with a metallic syringe needle as spinneret, a brass tube as rotating collector equipped with a DC motor, and a high voltage source (Heinzinger, LNC 1000-5 POS, 0-10 kV, Germany). Electrospun fibers were collected on the grounded rotating collector with a diameter of 0.7 mm, positioned at 7-12 cm opposite the syringe pump. A rotation speed of approximately 140 rpm was fixed and kept for 20-30 min, while 12%, 16%, or 20% (w/v, chloroform and dimethylformamide, 1:3) PCL solutions were electrospun with a feeding rate of 0.3-0.8 mL h−1 and an applied voltage of 12-17 kV.

Mechanical Analysis

The mechanical behavior of the tubular constructs was tested under uniaxial tensile loading using a universal testing machine (Zwick Z010, Germany) equipped with a 1 kN load cell. Tests were performed at a rate of 1 mm min−1. Prior to testing, the nominal dimensions of each sample, that is, diameter and length, were measured. The tangent modulus, strain, and stress at break were determined from the engineering stress-strain curves. Tangent moduli were calculated in the linear region (i.e., the 2-5% strain region).

Scanning Electron Microscopic Imaging and Analysis

To analyze scaffold characteristics, images were captured with a Phenom desktop scanning electron microscope (Phenom World, Eindhoven, the Netherlands) at an acceleration voltage of 10 kV. The samples were prepared by freezing three scaffolds per PCL concentration in liquid nitrogen, and three samples were cut: one section from the middle part of the scaffold and two from the outer ends. The wall thicknesses were measured at the top, left, right, and bottom of the section cuts. Fiber diameters were measured using 8000× magnified images in ImageJ (https://imagej.net/Fiji/Downloads). In each longitudinally opened sample, three locations were selected at the top, middle, and bottom. A line was drawn across the middle and the first 15 fibers that intersected this line were measured at the point of intersection, leading to a total of 45 measurements per sample and 135 per scaffold. For each scaffold, the ratio of micro- to nanofibers was calculated.

Induced renal tubular epithelial cells (iREC) were developed by Kaminski et al. by directly reprogramming mouse fibroblasts into renal-like cells through transfection of the transcription factors Emx2, Hnf1b, Hnf4a, and Pax8. [33] Cells were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% FBS and 1% penicillin/streptomycin. The latter was omitted 7 days prior to experimental readout.
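For clarity, the operating windows quoted in the Solution Electrospinning section above can be bundled into a small parameter record with range checks. This is purely an illustrative bookkeeping sketch; no such software was described in the study, and the class name, field names, and defaults are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SpinningRun:
    """One electrospinning run; names are illustrative, not the authors'."""
    voltage_kv: float           # applied potential
    feed_rate_ml_h: float       # syringe pump feed rate
    distance_cm: float          # spinneret-to-template distance
    rotation_rpm: float = 140   # collector rotation (reported as ~140 rpm)

    def within_reported_window(self) -> bool:
        # Operating windows from the Solution Electrospinning section.
        return (12 <= self.voltage_kv <= 17
                and 0.3 <= self.feed_rate_ml_h <= 0.8
                and 7 <= self.distance_cm <= 12)

run = SpinningRun(voltage_kv=15, feed_rate_ml_h=0.5, distance_cm=10)
print("Within reported operating window:", run.within_reported_window())
```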
Cell Culture in Biofunctionalized PCL Scaffolds

Immediately after electrospinning, the tubular scaffolds were first sterilized using 365 nm UV light (2.6 mW cm−2, UVP CL-1000) on both sides for 15 min each. Subsequently, the constructs were injected twice with 10 × 10^6 cells per mL, followed by 2 h incubations at 33 °C (ciPTEC-OAT1) or 37 °C (iREC) in dry 12-well plates with a 180° turn in between, before culture medium was added. The following day, the scaffolds were transferred into new 12-well plates and cultured for 3 weeks.

Immunofluorescence

Matured ciPTEC and iREC were fixed with 2% w/v paraformaldehyde in HBSS and permeabilized with 0.3% v/v Triton X-100 in HBSS for 10 min. To prevent nonspecific antibody binding, the cells were exposed to a block solution consisting of 2% v/v FCS, 2% w/v bovine serum albumin (BSA), and 0.1% v/v Tween-20 in HBSS for 30 min. The primary antibody against the tight junction protein zonula occludens 1 (ZO-1, Thermofisher Scientific) was diluted 1:50 in block solution and the cells were incubated for 1 h, followed by incubation with a goat-anti-rabbit-Alexa488 conjugate (1:200, Life Technologies Europe BV, Bleiswijk, the Netherlands) for 30 min. Finally, nuclei were stained using DAPI (Sigma-Aldrich, 1:1000) for 7 min and the scaffolds were mounted with ProLong Antifade Mounting Medium (Thermofisher Scientific) in WillCo Wells glass-bottom dishes (WillCo Wells BV, Amsterdam, the Netherlands). ZO-1 expression and localization were examined using a Leica TCS SP8 X confocal microscope, filters of 410-494 nm and 512-551 nm, and Leica Application Suite X software.

Live/Dead Viability Assay

To determine cell viability/cytotoxicity after 3 weeks of cell culture in tubular PCL scaffolds, the scaffolds were rinsed in HBSS and incubated with 2 μM calcein-AM and 1 μM ethidium homodimer-1 (Thermofisher Scientific) for 15 min at 37 °C. Images were captured with filters set at 544-572 nm and 625-686 nm, using the Leica TCS SP8 X confocal microscope and Leica Application Suite X software.

Transport Assays

To test transport functionality after 3 weeks of cell culture, five 20% PCL scaffolds with ciPTEC-OAT1 were rinsed in HBSS and incubated with 1 μM of the fluorescent OAT1 substrate fluorescein in the presence or the absence of 100 μM of the OAT1 inhibitor probenecid for 10 min at 37 °C. Another five scaffolds were incubated with 5 μM of the OCT2 substrate 4-(4-(dimethylamino)-styryl)-N-methylpyridinium iodide (ASP+) in the presence or the absence of 20 μM of the OCT2 inhibitor tetrapentylammonium (TPA+) for 10 min at 37 °C. After incubation, the scaffolds were rinsed in ice-cold HBSS, cut open longitudinally, and images were captured at 520-600 nm using the Leica TCS SP8 X confocal microscope and Leica Application Suite X software. Fluorescence intensity was semi-quantified using ImageJ with 16-bit images and background subtraction.

Data Analysis

Unless stated otherwise, a minimum of three scaffolds of each polymer concentration was used per experiment. Data were analyzed in GraphPad Prism 7 (GraphPad Software Inc., La Jolla, USA) using Student's unpaired t-test or one-way ANOVA with multiple comparisons.

Figure 1. Workflow of design and fabrication of the biofunctionalized electrospun polymer scaffolds for kidney proximal tubule grafts. a) A kidney contains 200 000-1 000 000 functional units, the nephrons. After blood filtration through the glomerulus, active secretion of metabolic waste products takes place between the proximal tubules and peritubular capillaries. While filtration can be replaced by hemodialysis, active secretion requires proximal tubule cells as part of advanced renal replacement therapies. b) Solution electrospinning was used to fabricate tubular scaffolds with different polymer concentrations. Two cell lines of proximal …

Figure 2. Scaffold fabrication. a-c) Scanning electron microscopic images of tubular electrospun scaffolds made of 12%, 16%, and 20% w/v PCL. Scale bars: (a) 1 mm and (b,c) 100 μm. d) Comparison of scaffold wall thicknesses, measured at four locations in three sections of three different scaffolds. e) Comparison of scaffold fiber diameters, measured at three locations in three sections of three different scaffolds. Boxplots present the mean and 5-95 percentiles. *p < 0.05, **p < 0.01, ****p < 0.0001 using one-way ANOVA.

Figure 5. Cell functionality. ciPTEC-OAT1 cultured in 20% PCL scaffolds showed uptake of (left) 1 μM fluorescein via Organic Anion Transporter 1 (OAT1) and (right) 5 μM ASP+ via Organic Cation Transporter 2 (OCT2), which could be inhibited by 100 μM probenecid and 20 μM TPA+, respectively (n = 5). Scale bars: 100 μm. Data are expressed as mean ± SD. *p < 0.05, ***p < 0.0001 using Student's unpaired t-test.
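The inhibitor-sensitivity comparison described in the Transport Assays and Data Analysis sections above reduces to an unpaired t-test on background-subtracted fluorescence intensities. A minimal sketch of that comparison follows, assuming SciPy is available; the intensity values are hypothetical stand-ins for the ImageJ measurements, not the study's data.

```python
from statistics import mean
from scipy import stats

def net_intensity(mean_signal: float, background: float) -> float:
    """Background-subtracted fluorescence intensity (arbitrary units)."""
    return mean_signal - background

# Hypothetical per-scaffold mean intensities (n = 5 each), for illustration.
substrate_only = [net_intensity(s, 12.0) for s in (95, 102, 88, 110, 99)]
with_inhibitor = [net_intensity(s, 12.0) for s in (31, 28, 40, 35, 30)]

t_stat, p_value = stats.ttest_ind(substrate_only, with_inhibitor)
print(f"Mean uptake {mean(substrate_only):.1f} vs {mean(with_inhibitor):.1f} a.u., "
      f"unpaired t-test p = {p_value:.2g}")
```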
Perspective: the nose and the stomach play a critical role in the NZACE2-Pātari* (modified ACE2) drug treatment project of SARS-CoV-2 infection

ABSTRACT Background: COVID-19 has caused calamitous health, economic and societal consequences globally. Currently, there is no effective treatment for the infection. Areas covered: We have recently described the NZACE2-Pātari project, which seeks to administer modified Angiotensin Converting Enzyme 2 (ACE2) molecules early in the infection to intercept and block SARS-CoV-2 binding to the pulmonary epithelium. Expert opinion: Since the nasopharyngeal mucosa is infected in the first asymptomatic phase of the infection, treatment of the nose is likely to be safe and potentially effective. The intercepted virus will be swallowed and destroyed in the stomach. There is however a limited window of opportunity to alter the trajectory of the infection in an individual patient, which requires access to rapid testing for SARS-CoV-2. The proposed strategy is analogous to passive immunization of viral infections such as measles and may be of particular benefit to immunodeficient and unvaccinated individuals.

Introduction

SARS-CoV-2, the zoonotic agent responsible for COVID-19, originated in the Hubei province of China [1]. COVID-19 has caused catastrophic health, economic and societal consequences globally. There is currently no universally effective treatment for the infection. Over the last fifteen months, an unprecedented international scientific effort has been launched to understand the infection and to develop new treatments and vaccines. References relevant to the current article have been identified from the published literature to 11 March 2021. Only peer-reviewed articles have been included. The reader is advised to seek up-to-date references, as this field is changing each day.

The phases of the infection

The infection evolves in three overlapping clinical phases (Figure 1) [6]. In the incubation period, the nasopharyngeal mucosa is infected [7]. The virus gains access to the respiratory epithelium by binding angiotensin converting enzyme 2 (ACE2) on the cell surface [8]. Host proteases facilitate viral entry by cleaving the S1 subunit of the viral spike (S) glycoprotein, allowing the S2 subunit to fuse with the host cell [9]. Following fusion with the epithelial cell membrane, viral RNA is released, which hijacks cellular organelles for the production of progeny virions. Release of daughter virus causes destruction of host pneumocytes. In older individuals and those with comorbidities, there is often progression to lung involvement. In this second pulmonary phase, typically around day 5, patients experience increasing breathlessness, fever, fatigue and myalgia. Laboratory correlates of worsening disease include neutrophilia, lymphopenia, and raised inflammatory markers including C-reactive protein (CRP) and ESR [11]. Around day 10, those destined to suffer the systemic phase will progress to acute respiratory distress syndrome (ARDS) and experience dysfunction of other target organs including the heart, brain and kidneys, as well as activation of the coagulation cascade [12]. The systemic phase is associated with viral sepsis leading to a cytokine storm, with release of multiple cytokines including IL-6. Persistent elevation of troponin may signify myocarditis leading to heart failure. Patients entering the systemic phase have a high mortality rate from multi-organ failure in spite of invasive ventilation and extra-corporeal membrane oxygenation (ECMO).
Survivors of COVID-19 are often left with disabilities including strokes, myocarditis, renal impairment, pulmonary dysfunction and a condition similar to chronic fatigue syndrome [13,14]. 'Long COVID' is being increasingly recognized in COVID-19 survivors [15]. Their long-term prognosis is unknown. Pandemic psychiatric morbidity is likely to be a major consequence of COVID-19 [16]. In contrast to the severe outcomes noted above, in many cases the disease is mild and some patients are asymptomatic. The explanation for asymptomatic infection, even in older persons, is poorly understood [17]. These asymptomatic individuals can however transmit the virus and are a logistical and societal challenge to countries attempting to eradicate the infection [18].

The immune response to SARS-CoV-2

The correlates of protective immunity to SARS-CoV-2 are not currently understood [19]. The initial nasal phase is largely asymptomatic. The virus engages a series of ploys to avoid detection and activation of the innate immune system [20]. Cytoplasmic pattern recognition molecules such as Toll-like receptors, RIG-like receptors, MDA5 and protein kinase R (PKR) are normally triggered by the presence of viral RNA or its variants, including double-stranded RNA, during viral replication [20]. Viral endonucleases cloak the viral RNA, preventing activation of pathogen pattern recognition molecules [20]. In addition, anti-interferon antibodies [21] impede the action of type 1 interferons [22]. During the initial five days, there is thus unopposed viral replication in the nasal mucosa. Other arms of the innate immune system, such as NK cells, are also rendered ineffective [23].

Article highlights
• There is currently no effective treatment for COVID-19
• Safe and effective vaccines face financial and logistical challenges, which will hinder global deployment
• The virus initially infects the nasopharyngeal mucosa by binding cell-surface ACE2. By evading the innate immune system the virus is able to multiply exponentially and in some cases infect the lungs and other organs.
• Here we describe the potential use of recombinant modified ACE2 molecules to target the virus in the nasal phase
• Treating the virus in the asymptomatic incubation phase may alter the prognostic trajectory of the infection
• If deployed globally with rapid testing, this treatment may help mitigate the ongoing pandemic
• Rapid viral evolution may render some vaccines and monoclonal antibodies ineffective
• In contrast, modified ACE2 molecules are likely to be effective even with rapid viral evolution, as the virus requires ACE2 to gain entry into cells

Figure 1. The three overlapping clinical phases of COVID-19 [6]. The asymptomatic (incubation) nasal phase is followed by the pulmonary and systemic phases, leading to multi-organ failure in some patients. Administration of NZACE2-Pātari by a nasal dropper may intercept SARS-CoV-2, leading to a reduced viral load and milder disease [35,36]. Stopping proton pump inhibitors (PPIs) may allow the stomach to function as an effective antiviral organ [10]. Because the drug will be administered several times a day, there will be no requirement to fast. *NZACE2-Pātari has been created but has not entered clinical trials at the time of writing. The simplest trial design would be to randomize unvaccinated households with an infected individual to receive either placebo or active drug to determine infection rates. This diagram was constructed from free images on the Internet.
The complement cascade may have a role in aggravating the infection [24]. Activation of the inflammasome may lead to pyroptosis and could contribute to the cytokine storm in patients experiencing multi-organ failure [20]. The adaptive immune system is also subverted by the virus, with the loss of CD8 cells [25]. The S protein is highly glycosylated, rendering it less immunogenic [26]. Humoral immunity to SARS-CoV-2 may not be protective and there is the possibility of antibody disease enhancement (ADE) [27]. Some patients dying from the infection had both high viral loads and antibody titers, indicating these antibodies were unable to neutralize the virus. The in vitro correlates of ADE are only partially understood [28].

Treatment strategies for COVID-19

There is currently no universally effective curative treatment for COVID-19. Repurposing existing drugs has largely been disappointing, with multiple trials showing the lack of efficacy of treatments such as hydroxychloroquine [29]. Dexamethasone has shown modest benefits for severely ill patients [30]. Although a large number of vaccines have entered production and in some cases received emergency approval for use, they face many logistical challenges including global distribution, long-term efficacy and the risk of adverse effects, including vaccine-induced ADE or thrombotic events. Selection of escape mutants by vaccines remains a concern. Very recently, the AstraZeneca vaccine was shown to be less effective against the South African variant (B.1.351) and the planned roll-out for HCWs has been suspended in South Africa. There are also increasing fears that monoclonal antibodies such as bamlanivimab, casirivimab and imdevimab may be rendered ineffective by viral evolution, particularly by variants bearing the E484K substitution [31].

The NZACE2-Pātari project to treat COVID-19

We have recently described the NZACE2-Pātari project, which aims to intercept SARS-CoV-2 and block infection of respiratory epithelial cells [32]. We have constructed modified ACE2 molecules, which will be administered by an inhaler during the early phases of the infection [33,34]. We expect our drugs may mitigate the viral pneumonia and consequently reduce the risk that patients will progress to the systemic phase of the infection, which carries high morbidity and mortality. As part of this project, we also plan to administer the ACE2-derived drugs (NZACE2-Pātari) by the nasal route (Figure 1). In this article we explore the risks and benefits of nasal administration of these drugs. We expect NZACE2-Pātari will result in a reduction in the number of virions that are able to infect the nasal mucosa. Consequently, there will be fewer virions that can reach the lungs by microaspiration. A lower viral burden at each stage of the infection may reduce the number of patients entering the pulmonary and systemic phases of the disease. Current data indicate a lower viral burden is associated with milder disease [35,36]. Using this strategy, we expect disease severity will be mitigated.

Administration of NZACE2-Pātari to the nose

Nasal NZACE2-Pātari can be easily administered by a dropper (Figure 1). A dropper is a low-pressure device and is unlikely to denature the molecules by shear stress. NZACE2-Pātari will be administered with patients leaning back over the bed and then rotating their head laterally to ensure coverage of the nasal mucosa. Patients would receive 4 mg of NZACE2-Pātari to the nose over 2 days.
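The adequacy of such a dose can be illustrated with a back-of-envelope molar-excess estimate, of the kind the next paragraph refers to. The sketch below shows the shape of that calculation only: the molecular weight, the spikes-per-virion count, and the fraction of the dose assumed to contact virus are illustrative guesses rather than the authors' parameters, so the printed ratio will not reproduce their quoted ~170-fold figure.

```python
AVOGADRO = 6.022e23

def molar_excess(dose_mg, mw_kda, virions, spikes_per_virion, contact_fraction):
    """Ratio of drug molecules to viral spike binding sites.

    All arguments are illustrative assumptions: e.g., a ~100 kDa soluble
    ACE2 ectodomain, ~25 spike trimers per virion, and only a tiny
    fraction of the nasal dose actually meeting virus.
    """
    moles = dose_mg * 1e-3 / (mw_kda * 1e3)   # grams / (grams per mole)
    molecules = moles * AVOGADRO * contact_fraction
    return molecules / (virions * spikes_per_virion)

excess = molar_excess(dose_mg=4, mw_kda=100, virions=1.5e7,
                      spikes_per_virion=25, contact_fraction=1e-6)
print(f"Approximate molar excess: {excess:.0f}-fold")
```

Even with a pessimistic one-in-a-million contact fraction, the drug molecules in this sketch outnumber spike binding sites by well over an order of magnitude, which is the intuition behind the stoichiometric argument that follows.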
From the calculations presented here, we would expect SARS-CoV-2 to be overwhelmed by NZACE2-Pātari, which stoichiometrically far exceeds the number of virions (by approximately 170×). Some studies have suggested higher nasal viral loads (1.5 × 10^7), but most virus will likely be bound to NZACE2-Pātari [38]. Nasal secretions may increase toward day 5 of infection, but the proposed dose should compensate for higher nasal mucus production. Furthermore, viral titers decrease toward the end of the nasal phase [38]. Repeated administration of the drugs over 2 days will substantially reduce the viral load and may alter the trajectory of the infection (Figure 1). The SARS-CoV-2/NZACE2-Pātari complexes will reach the pharynx by naso-ciliary transport and be swallowed, leading to hydrolytic destruction in the stomach, as discussed below. It is acknowledged that some viral particles may escape binding to NZACE2-Pātari, particularly if there is a delay in diagnosis, but we expect the overall viral burden will be reduced, mitigating disease severity [35].

Advantage of the nasal route

The NZACE2-Pātari project exploits a critical vulnerability of the virus, which is the obligate requirement for the receptor-binding domain (RBD) of the S glycoprotein to bind cellular ACE2 to infect cells. The RBD is therefore a stable target for antiviral strategies and unlikely to mutate without loss of pathogenicity [39]. The nasal route for therapeutic administration is very attractive for several reasons. Delivery of drugs to the site of infection allows smaller doses to maximize the therapeutic effect and minimize adverse reactions. The administration of nasal drops is simple; it does not require any special skill, and self-medication is possible once dispensed by a health-care worker (HCW). Therefore, nasal delivery increases the likelihood of patient compliance and reduces treatment cost. The solution used to reconstitute these products is simple. The formulation will include a protein such as human serum albumin to stabilize NZACE2-Pātari. It will also contain mucoadhesives to increase the residence time of the drug in the nasal cavity. The drugs could be lyophilized and reconstituted when needed. This would allow easier global deployment of the drug, as transporting lyophilized proteins carries a low risk of inactivation in hostile environmental conditions around the world. Overdosage is unlikely to be a problem; these drugs are likely to have a high therapeutic index, and any excess drug will be swallowed and hydrolyzed in the stomach. We expect excess drug also to be transported to the pharynx, which may provide antiviral activity against the SARS-CoV-2 throat infection. A further advantage of the nasal route is that a portion of the drug could reach the brain from the nasal cavity [40]. The brain is a primary target for SARS-CoV-2, and if NZACE2-Pātari enters the brain it may provide a direct neuroprotective effect against the virus [41,42]. We acknowledge NZACE2-Pātari will have no efficacy in patients who develop viral sepsis leading to neurological damage. In the initial nasal incubation phase of COVID-19 there is minimal inflammation, hence the patient is asymptomatic. This offers NZACE2-Pātari the advantage of reduced degradation by cellular proteases, which could otherwise shorten the drug's half-life. Furthermore, in the absence of severe inflammation, an intact muco-ciliary clearance system allows transit of SARS-CoV-2/NZACE2-Pātari complexes to the pharynx.
Once swallowed, this will lead to hydrolytic destruction of the virus in the stomach, as detailed below.

Safety, potential risks and adverse reactions to nasal NZACE2-Pātari

The nose appears to be a very safe route compared to most other routes of drug delivery. Administration of drugs such as erythropoietin or glucagon to the nose did not cause serious adverse reactions [43,44]. Similarly, nasal ketorolac challenges in highly sensitized Samter's triad patients rarely caused bronchospasm [45]. The angiotensin-converting catalytic site of NZACE2-Pātari has been rendered inactive by a single amino acid substitution (R273A), while the affinity for the virus has been improved by removing an N-glycosylation site (N90D). ACE2 catalyses the production of angiotensin 1-7 and 1-9, which have a role in blood pressure control [46]. We would not expect adverse cardiovascular events with supraphysiological doses of inactivated ACE2 administered to the nose, even if some of the drug enters the circulation. A comparable drug, APN01 (containing the active ACE2 catalytic site), was well tolerated in high doses when administered intravenously for ARDS [47]. Since the correlates of protective immunity for COVID-19 are not understood, we are not administering modified ACE2 conjugated to an Fc domain. The consequences of early activation of macrophages are unknown at this time, and there is a risk this could trigger ARDS. Such conjugates could be considered in the future, with a better understanding of the immunological conundrum posed by SARS-CoV-2. Nasal adverse reactions are typically allergies, and given the short duration of treatment, we would not expect patients to become sensitized to the modified ACE2 drugs. Any such local reaction can be treated with a decongestant or an antimuscarinic agent such as ipratropium. A short course of treatment is unlikely to trigger an autoimmune neurological disorder. These risks must be balanced against a lethal infection with a propensity to cause severe long-term disability, for which there is no widely available curative treatment.

The role of the stomach in mitigating COVID-19

The stomach is likely to play an important role in the defense against SARS-CoV-2 (Figure 1) [48]. The virus is inactivated by the pH range (1.5-3) found in the stomach [49]. It is likely that much of the virus from the nasal infection is swallowed and destroyed in the stomach [48]. There are now multiple studies showing that the use of proton pump inhibitors (PPIs) is associated with worse outcomes in COVID-19 [50,51]. There is more than one interpretation of this observation. First, it is possible the active viral burden is increased by a high stomach pH. This would allow intact virus to travel to the small intestine and gain entry through gut epithelial cells [52]. Diarrhea and abdominal discomfort can be symptoms of COVID-19 [53]. Secondly, the use of PPIs may be more common in those with obesity and type 2 diabetes. These individuals are at increased risk of gastroesophageal reflux (GER). It is possible the use of PPIs allows the stomach to act as a reservoir for active virus, which leads to greater microaspiration into the lungs from GER. Recent data from China suggest that young HCWs exposed to greater inocula of the virus, before the use of personal protective equipment (PPE), had more severe outcomes [54]. The result of increased GER with active virus might have the same consequence as exposure to a greater pulmonary inoculum of the virus before the use of PPE.
Lower rates of GER in children may be an important age-related protective factor against COVID-19, and GER may be one explanation for severe outcomes in those with obesity and type 2 diabetes. In the absence of effective antiviral drugs, the stomach will play an important role in destroying swallowed SARS-CoV-2 complexed to NZACE2-Pātari (Figure 1). A critical part of our strategy is to stop PPIs temporarily. The stomach would then no longer serve as a reservoir of active viral particles to aggravate the pneumonitis or to gain entry through the gut. Alternatives to PPIs may need to be considered, including metoclopramide for symptomatic GER. Elevating the bed at night and avoiding dinner just before bedtime are other simple measures to reduce GER.

Clinical utility

NZACE2-Pātari is best suited to treating unvaccinated patients in the earliest phases of the disease [32]. It could be deployed at the onset of a disease outbreak in a nursing home, when unvaccinated patients first test positive. It could be used in prisons and refugee centers, where physical and social distancing is difficult. NZACE2-Pātari may have a role at the NZ border to treat infected travelers and their contacts. These drugs could be used for infected staff at the border, including military and hotel personnel. If effective, they will assist front-line hospital nursing and medical staff, who are at high risk of infection [55]. NZACE2-Pātari may mitigate community outbreaks where unvaccinated contacts can be rapidly traced, tested and offered treatment if they are infected. This will reduce the R0 of the virus and will have major economic benefits, as it may reduce the need for stringent quarantines. This strategy may be particularly effective for unvaccinated patients with comorbidities, including obesity and diabetes, who are at risk of severe outcomes. It may also be beneficial for infected Māori and Pasifika patients, who have an increased burden of comorbidities and would be expected to have severe outcomes [3]. Intervention rates in Māori and Pasifika patients have been shown in multiple studies to lag behind those in patients of other ethnicities, which could contribute to severe outcomes [56]. COVID-19 is likely to exacerbate preexisting economic, health, and social disparities in vulnerable groups [4,57]. NZACE2-Pātari will also serve as insurance against future coronavirus infections that utilize ACE2 to gain entry into cells. It may also assist in the unfortunate event that COVID-19 vaccines cause ADE in some patients [58]. This could cause serious reputational damage to vaccines in general, leading to a resurgence of vaccine-preventable diseases [59][60][61]. In the event SARS-CoV-2 vaccines cause ADE, NZACE2-Pātari could mitigate the situation by outcompeting such vaccine-induced antibodies. These drugs could be of benefit to immunodeficient patients and the elderly, who respond poorly to many vaccines [62][63][64][65]. Many immunodeficient patients are also disadvantaged because they have pulmonary disease from the infective and inflammatory sequelae of immune system failure. If proven to be safe and effective, these drugs will help a broad array of unvaccinated, infected individuals early in the disease.

Caveats

Our strategy is analogous to administering normal immune globulin as post-exposure prophylaxis for measles. We plan to treat newly diagnosed unvaccinated patients in the presymptomatic nasal phase (Figure 1). These patients will have a positive SARS-CoV-2 test, ideally by PCR or by a rapid antigen test.
Testing is a challenge in many parts of the world, and turnaround times vary greatly for PCR tests. The PCR result or antigen detection test should be available on the same day. If proven safe and effective, the global deployment of NZACE2-Pātari will depend on access to rapid testing for SARS-CoV-2. In the event a patient receives NZACE2-Pātari treatment in the absence of disease, it is very unlikely this will cause adverse effects. Ideally, patients should only need treatment once, to minimize the small risk of sensitization to NZACE2-Pātari, which has two amino acid differences from wild-type ACE2. Again, an adverse reaction is likely to result only in temporary nasal obstruction. The nose is a very attractive route to safely treat COVID-19, given the limited understanding of the correlates of protective immunity. There is however a small window of opportunity to influence disease severity. As noted above, nasal NZACE2-Pātari is less likely to be effective once patients enter the pulmonary and systemic phases. The key to success is early identification and treatment of COVID-19 patients in the presymptomatic nasal phase.

Expert opinion

Since SARS-CoV-2 binds cell surface ACE2 to infect cells, this represents an attractive target to intercept the virus. Mutation of the receptor-binding domain of the viral spike glycoprotein is unlikely to be tolerated without loss of pathogenicity. The receptor-binding domain is thus a stable target for antiviral strategies. The viral infection evolves in three overlapping clinical stages: the nasal, pulmonary and systemic phases. SARS-CoV-2 deploys strategies to avoid provoking an inflammatory response during the initial nasal incubation period. As a result, the virus is able to multiply exponentially, unchallenged. Patients destined to have more severe disease then progress to the pulmonary and systemic phases, with dysfunction of multiple organs including the lungs, brain, heart and kidneys, as well as activation of the coagulation cascade. Targeting the nose during the incubation period is thus an attractive strategy before the second pulmonary or third systemic phases, for which there is no universally effective treatment. In the absence of widely available effective antiviral drugs, we have identified the stomach as a key antiviral organ. The virus is destroyed by the low pH levels typically found in the stomach. Current data indicate that patients being treated with proton pump inhibitors have a high mortality. Our interpretation of these data is that the swallowed virus remains intact in patients on PPIs and can infect the lungs by gastroesophageal reflux or gain access to other organs through cells of the small intestine. The nose and the stomach are, therefore, the key organs allowing interception and destruction of the virus before the pulmonary and systemic phases of infection. Treating the nose is very attractive, as there is a very low risk of adverse effects and, furthermore, treating the site of infection requires minimal doses of the drug. Our calculations based on published data show that the virus can be overwhelmed with a total dose of 4 mg, particularly if the modified ACE2 molecules (NZACE2-Pātari) have a higher affinity for the virus compared to the wild-type ACE2 molecules expressed on the cell surface. Treating the nose with frequent dosing will intercept each wave of daughter virus released from the nasal mucosa. NZACE2-Pātari can be synthesized in molecular biology laboratories and lyophilized for easy transport globally.
The drug can be easily reconstituted and self-administered by a nasal pipette. The risk of nasal adverse effects is very low, and any local reaction can be treated with a decongestant. If the treatment is coupled with a point-of-care rapid antigen test, treatment could be commenced within 24-48 hours of infection. Early treatment is the key to mitigating disease severity and reducing viral transmission. NZACE2-Pātari could be deployed in many clinical settings, including outbreaks of the virus in nursing homes, prisons, refugee camps and other areas where social and physical distancing is difficult. If safe and effective, this strategy may mitigate disease severity in individual unvaccinated patients and may rapidly bring the infection under control if there is a new cluster of infection. Repurposing existing drugs for COVID-19 has been disappointing. The recent successes of the Pfizer, Moderna, Johnson & Johnson and AstraZeneca vaccines are encouraging, but not all patients will be protected by vaccines. Immunodeficient patients may not respond to vaccines. Similarly, those who are not immunized will have no protection against the virus and its variants. Escape mutants, as noted above with the AstraZeneca vaccine, remain a serious concern. The recent description of rare cases of vaccine-associated/induced thrombosis and thrombocytopenia could cause reputational damage to adenovirus-based vaccines. Furthermore, the stringent storage and transport requirements for mRNA-based vaccines will create logistical challenges. If proven to be safe and effective, NZACE2-Pātari may mitigate the health crises in countries such as Brazil and India until new drugs and vaccines are developed and widely deployed.

Acknowledgments

The NZACE2-Pātari project has not reached clinical trials. Prototypes of these drugs have been produced and were partially tested in vitro. This project has been suspended because of a lack of funding and institutional support. We are gifting our ideas so colleagues with funding and facilities can bring these treatments to fruition to save lives. Science will ultimately prevail against SARS-CoV-2.

Author contributions

RA conceived the idea of intercepting the virus with ACE2 molecules and wrote the first draft of the manuscript. All other authors were part of the writing team and modified and edited the manuscript.

Declaration of interest

The project team may patent future modified ACE2 molecules to facilitate global use. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties.

Reviewer disclosures

Peer reviewers on this manuscript have no relevant financial or other relationships to disclose.
Mind and body: how the health of the body impacts on neuropsychiatry

It has long been established in traditional forms of medicine and in anecdotal knowledge that the health of the body and the mind are inextricably linked. Strong and continually developing evidence now suggests a link between disorders which involve hypothalamic-pituitary-adrenal (HPA) axis dysregulation and the risk of developing psychiatric disease. For instance, adverse or excessive responses to stressful experiences are built into the diagnostic criteria for several psychiatric disorders, including depression and anxiety disorders. Interestingly, peripheral disorders such as metabolic disorders and cardiovascular diseases are also associated with HPA changes. Furthermore, many other systemic disorders associated with a higher incidence of psychiatric disease involve a significant inflammatory component. In fact, inflammatory and endocrine pathways seem to interact in both the periphery and the central nervous system (CNS) to potentiate states of psychiatric dysfunction. This review synthesizes clinical and animal data looking at interactions between peripheral and central factors, developing an understanding at the molecular and cellular level of how processes in the entire body can impact on mental state and psychiatric health.

INTRODUCTION

The concept that our mind and our mental processes are influenced by the health of our bodies is intuitively appealing and central to many approaches to health and wellbeing. However, there has been a recent explosion of clinical and physiological evidence to support this theory, shifting a "commonsense" approach to health toward a clinically useful and pharmacologically targetable model. We are now moving toward mechanistic models for the interactions between peripheral and central factors, gaining an understanding at the molecular and cellular level of how processes in the entire body can impact on mental state and psychiatric health. Although good evidence exists for these associations in many psychiatric disorders, in this review we will focus on depression, for which the evidence is perhaps most compelling. Some epidemiological associations between corporeal disorders and psychological ill-health are well established. The link between coronary artery disease and depression, for example, has been extensively investigated (Nemeroff and Musselman, 2000; Rugulies, 2002; Barth et al., 2004), and it appears that not only are the two disorders strongly associated but that depression is a predictor of poor cardiovascular outcome. Such epidemiological evidence reinforces the widely held notion that the sadness of depression both co-occurs with and potentiates cardiac disease. However, we are now moving toward an understanding of the shared molecular processes which may underpin the link between these disorders. Although evidence of psychiatric and peripheral comorbidities abounds in the literature, there is also growing interest in the more subtle variations in physiological function which may be antecedents of overt illness but which may be sufficient to modulate CNS processes and mental state. In this review we will focus on several of the major pathways implicated in the aetiology of depression which may mediate the links between the mind and body.
SYSTEMIC DISORDERS ASSOCIATED WITH DEPRESSION

Strikingly, a recent study conducted in the United States indicated that of middle-aged or older adults meeting diagnostic criteria for a major depressive disorder, two thirds reported comorbid cardiovascular disease (González and Tarraf, 2013). Up to 20% of patients with coronary heart disease meet diagnostic criteria for major depression, and up to 47% report significant and long-lasting depressive symptoms (Bush et al., 2005; Carney and Freedland, 2008). Recent reports have indicated that this effect is not restricted to individuals with cardiovascular disease, as patients undergoing rehabilitation for pulmonary disease were even more likely than cardiac patients to exhibit clinically significant depression and psychological distress (Serber et al., 2012). Cardiovascular risk factors are pathologically relevant even prior to diagnosis. Studies of patients with long-term depressive or anxiety disorders revealed an elevated incidence of sub-clinical cardiovascular disease, as measured by a variety of parameters including plaque deposition and arterial stiffness (Seldenrijk et al., 2013), and blood pressure, glucose, body mass index (BMI), diet, and physical activity (Kronish et al., 2012). Interestingly, and relevant to the sex differences often observed in the context of anxiety and depressive disorders [for which prevalence can be up to twice as high in women as in men; see Bekker and van Mens-Verhulst (2007); Kimbro et al. (2012)], significant depressive symptoms are more common in younger women with peripheral arterial disease than in other gender-age groups (Smolderen et al., 2010). Also, a recent meta-analysis of cardiovascular risk factors and depression in later life demonstrated relatively strong associations between depression and diabetes, cardiovascular disease and stroke (Valkanova and Ebmeier, 2013). These findings also highlight the relationship between diabetes and psychiatric health. Meta-analytic evidence suggests that patients with depression have an elevated risk of developing type 2 diabetes (Knol et al., 2006; Mommersteeg et al., 2013), and conversely that patients with diabetes have a significantly increased risk of developing depression (Anderson et al., 2001; Rotella and Mannucci, 2013). A longitudinal study revealed that the incidence of diabetes was highest in individuals with the greatest number of depressive symptoms (Carnethon et al., 2003), and a large community-based study demonstrated that diabetes was associated with an increased risk of depression (de Jonge et al., 2006). This bi-directional relationship is suggestive of convergent pathological processes rather than a simplistic cause-and-effect relationship. Interestingly, some clinical studies have hypothesized that the doubled rate of depression in female diabetic patients could help explain the high prevalence of coronary heart disease in women with diabetes (Clouse et al., 2003). Autoimmune disease, for example rheumatoid arthritis, is also associated with a markedly elevated risk of depression (Margaretten et al., 2011a; Covic et al., 2012). Notably, there appears to be a strong correlation between the severity of rheumatoid arthritis and the incidence of depression, with a recent meta-analysis demonstrating that those with the most severe form of arthritic disease have a six-fold higher incidence of depression relative to those with the mildest form (Godha et al., 2010).
Clearly the impact of declining quality of life associated with severe systemic disease cannot be overlooked. However, these findings, and the many others describing strong associations between psychiatric disease and peripheral illness, do provoke the question of whether there are fundamental mechanisms in common. How does the health of the body affect the health of the mind, and what are the underlying pathological processes which underpin this relationship? Although we do not yet have a full understanding of the complexities of the bidirectional relationship between body and brain, convergent evidence suggests that the endocrine response to stress (via the HPA axis) and immune dysregulation (via inflammatory pathways) may be playing a central role.

STRESS RESPONSIVITY AND THE HYPOTHALAMIC-PITUITARY-ADRENAL AXIS

The best-established example of mind-body interaction is the link between psychological stress and psychological ill-health. In fact, adverse or excessive responses to stressful experiences are built into the diagnostic criteria for several psychiatric disorders, including depression and anxiety disorders. The body's response to stress is mediated by the hypothalamic-pituitary-adrenal (HPA) axis, by which stressful stimuli modulate the activity of a tightly regulated cycle of circulating hormones. Stress per se is not necessarily problematic; the body is well equipped to respond to stressful stimuli, and to some extent stress is necessary for normal function. However, excessive or prolonged stress, or perturbations in the function or regulation of the HPA axis, may result in abnormal changes in hormones circulating through both the periphery and the CNS. As previously mentioned, women are twice as likely as men to suffer from stress-related psychiatric disorders, and there is evidence that sex differences in stress responses could account for this sex bias (Bangasser and Valentino, 2012). The HPA axis is the primary circuit that mediates the physiological response to stress and regulates the level of circulating glucocorticoid hormones (e.g., CORT: cortisol in humans, corticosterone in rodents). Arginine vasopressin (AVP) and corticotrophin-releasing hormone (CRH, also originally referred to as CRF, for corticotrophin-releasing factor) are synthesised and released from the paraventricular nucleus (PVN) of the hypothalamus, and are arguably the highest-order regulators of HPA axis activity within the central nervous system (CNS). These neuro-hormones act synergistically to stimulate adrenocorticotrophin (ACTH) secretion from the anterior pituitary, culminating in increased levels of circulating CORT. The HPA axis is modulated by a negative feedback loop encompassing the hippocampus, hypothalamus and anterior pituitary. Following CORT secretion into the peripheral blood circulation, CORT passes through the plasma membrane of cells, particularly in the pituitary, hypothalamus, and hippocampus, where it binds to the glucocorticoid receptor (GR). Finally, glucocorticoid catabolism involves 5α-reductase type 1 (predominantly a liver enzyme) and 11β-hydroxysteroid dehydrogenase type 2 (in the kidney). The psychological determinants of an individual's response to stress are important predictors of outcome, although this area is beyond the scope of this review [reviewed comprehensively by Liu and Alloy (2010)]. However, physiological variations in HPA axis function and related pathways may also modulate the response to stress and alter the threshold for psychiatric disorders.
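To make the feedback logic of this cascade concrete, the following is a deliberately minimal toy model (our illustration, not a model taken from the studies cited here) in which CRH drives ACTH, ACTH drives CORT, and CORT inhibits further CRH release; all rate constants are arbitrary hypothetical values chosen only to display the qualitative behavior of a negative feedback loop.

import numpy as np

def simulate_hpa(stress, dt=0.01, t_end=50.0):
    """Toy HPA axis: CRH -> ACTH -> CORT, with CORT inhibiting CRH.

    All rate constants are arbitrary illustrative values, not
    physiological measurements. `stress` is a function of time
    returning the stressor drive onto the hypothalamus.
    """
    # Hypothetical parameters: production gains and clearance rates.
    k_crh, k_acth, k_cort = 1.0, 1.0, 1.0   # production gains
    d_crh, d_acth, d_cort = 0.5, 0.5, 0.5   # first-order clearance
    ki = 2.0                                 # feedback inhibition strength

    n = int(t_end / dt)
    crh = acth = cort = 0.0
    trace = np.zeros((n, 3))
    for i in range(n):
        t = i * dt
        # CORT suppresses CRH release (the negative feedback arm).
        dcrh = k_crh * stress(t) / (1.0 + ki * cort) - d_crh * crh
        dacth = k_acth * crh - d_acth * acth
        dcort = k_cort * acth - d_cort * cort
        crh += dcrh * dt
        acth += dacth * dt
        cort += dcort * dt
        trace[i] = (crh, acth, cort)
    return trace

# Transient stressor between t=5 and t=10; CORT rises and is then
# restored toward baseline by the feedback loop.
out = simulate_hpa(lambda t: 1.0 if 5.0 <= t <= 10.0 else 0.1)
print(out[-1])  # near-baseline hormone levels after the stressor ends

In such a sketch, weakening the feedback term (reducing ki) leaves hormone levels elevated well after the stressor has ended, loosely analogous to the impaired negative feedback that the dexamethasone suppression test probes clinically, as discussed below.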
Despite substantial limitations in the objective assessment of stress, multiple studies have documented an association between stressful life experiences and depression (Kendler and Gardner, 2010). Interesting examples of HPA axis dysfunction modulating psychiatric health come from Cushing's syndrome and Addison's disease, states of hyper- and hypo-cortisolaemia, respectively. Cushing's syndrome is associated with a high prevalence of psychopathology, primarily depressive symptoms but also mania and anxiety (Pereira et al., 2010). Addison's disease has been less extensively investigated but appears to be associated with an increased risk of a variety of psychiatric symptoms, including depression, delusions, hallucinations, and anxiety (Anglin et al., 2006). In both disorders it should be borne in mind that adrenal dysfunction can also lead to electrolyte and metabolic abnormalities which can themselves contribute to CNS disturbances. Nonetheless, the fact that treatment of the hyper- or hypo-cortisolaemia resolves the psychiatric symptoms in most cases strongly suggests that changes in adrenal corticosteroids are a primary driving force for the psychiatric symptoms (even though this is not the sole determining factor, as half of subjects with Cushing's do not develop depressive symptoms). Therapeutic administration of high doses of corticosteroids has been associated with the development of a manic behavioral state (Warrington and Bostwick, 2006; Kenna et al., 2011; Fardet et al., 2012). These observations also highlight a critical pathway by which HPA axis function may alter mental state. Corticosteroids are generally prescribed in cases of uncontrolled inflammatory disease, and act as powerful anti-inflammatory factors. As we will discuss below, inflammatory states are strongly linked to perturbations in psychiatric health. More subtle variations in HPA axis function have also been directly associated with psychiatric disorders, in particular depression. A recent meta-analysis described the magnitude of the difference between depressed and non-depressed groups in cortisol, ACTH and CRH levels. Across 361 studies, the results show that depression is overall associated with small-to-moderate elevations in ACTH and cortisol and a reduction in CRH levels (Stetler and Miller, 2011). However, in older people, the association between cortisol and major depression was U-shaped (Bremmer et al., 2007). Another large cohort study revealed significant associations between major depressive disorders and specific HPA axis indicators, such as a higher cortisol awakening response in MDD patients compared to controls. Those modest but significant differences were also observed in patients with anxiety disorders (Vreeburg et al., 2010). In line with clinical findings, the circadian pattern of corticosterone has been reported to be disrupted in rodent models of depression (Touma et al., 2009; Bonilla-Jaime et al., 2010). In rats, chronic stress induces a depressive-like phenotype, associated with dysregulation of the HPA axis and reductions in dopaminergic and serotonergic transmission in the PFC (Mizoguchi et al., 2008). Affective-like behavioral deficits have been reported in mouse mutants with altered HPA axis function [see Renoir et al. (2013) for review]. Chronic treatment with corticosterone, as well as isolation rearing, increases depressive-like behavior in GR-dependent and -independent manners (Ago et al., 2008).
Chronic elevation of corticosterone creates a vulnerability to a depression-like syndrome that is associated with increased expression of the serotonin synthetic enzyme tryptophan hydroxylase 2 (tph2), similar to that observed in depressed patients (Donner et al., 2012). Interestingly, the effects of chronic corticosterone administration in animal models have also been studied in the context of affective and systemic disorders. In that regard, chronic corticosterone in mice was found to induce anxiety/depression-like behaviors (David et al., 2009) as well as to decrease sucrose consumption in a model of anhedonia (Gourley et al., 2008). Chronic antidepressant treatment reversed those behavioral impairments. Furthermore, and relevant to the relationship between stress and metabolic syndrome, 4-week exposure to high doses of corticosterone in mice has been found to increase weight gain and plasma insulin levels as well as to reduce home-cage locomotion (Karatsoreos et al., 2010). Using a chronic mild stress (CMS) paradigm, in which mice were housed individually and alternately submitted to unpredictable "mild" stressors (such as periods of continuous overnight illumination, short periods of food/water deprivation, etc.), Palumbo et al. (2010) found that mice subjected to the CMS procedure exhibited an increase in serum corticosterone levels during the first few weeks of exposure. However, these elevated corticosterone levels returned to baseline after 6 weeks of CMS. Similarly, Adzic et al. (2009) reported reduced CORT levels in chronically isolated rats (for 21 days), whereas CORT was increased after an acute 30-min immobilization stress. Altered circadian activity of the HPA axis has also been reported in a CMS rat model of depression (Christiansen et al., 2012). Interestingly, this study suggests a recovery of the diurnal corticosterone rhythm after 8 weeks of CMS. Taken together, these observations suggest an adaptive capacity of the HPA axis to cope with prolonged stress. The effects of chronic stress on HPA axis function have been widely studied in both animal models and clinical populations. Many of those investigations have focused on the negative feedback arm of the HPA axis (mainly mediated by the GR). Such feedback is efficiently probed by the established combined dexamethasone-suppression/corticotrophin-releasing hormone stimulation (dex/CRH) test (Ising et al., 2007). Altered dex/CRH test responses are seen in major depression (Mokhtari et al., 2013) as well as in chronic stress conditions. For example, overcommitment in chronically work-stressed teachers was significantly associated with a blunted response to the dex/CRH challenge (Wolfram et al., 2013). Further regression analyses showed that low social support at work and high job strain were associated with more cortisol suppression after the dexamethasone suppression test (Holleman et al., 2012). In rodents, social isolation decreased the feedback sensitivity of the HPA axis to dexamethasone (Evans et al., 2012). Another animal study reported that socially deprived mice had increased adrenal weights as well as a greater increase in corticosterone levels in response to acute stress (Berry et al., 2012). Interestingly, those chronic stress-induced HPA axis dysfunctions were associated with depressive/anxiety-like behavior as well as impaired hippocampal plasticity (i.e., altered hippocampal neurogenesis and reduced BDNF levels) (Berry et al., 2012; Evans et al., 2012).
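Since the unpredictability of the CMS paradigm is itself part of the experimental design, it can be useful to see how such a schedule is typically constructed. The snippet below is a hypothetical illustration of a randomized stressor scheduler; the stressor pool and session structure are assumptions made for illustration and do not reproduce the exact protocols of the studies cited above.

import random

# Hypothetical pool of mild stressors, loosely based on those the
# text mentions (overnight illumination, brief food/water deprivation).
STRESSORS = [
    "continuous overnight illumination",
    "food deprivation (short)",
    "water deprivation (short)",
    "cage tilt",
    "damp bedding",
]

def cms_schedule(weeks, sessions_per_week=7, seed=0):
    """Return an unpredictable week-by-week stressor schedule.

    Consecutive sessions are forced to differ, so no stressor
    becomes predictable through repetition.
    """
    rng = random.Random(seed)
    schedule = []
    prev = None
    for _week in range(weeks):
        days = []
        for _session in range(sessions_per_week):
            choice = rng.choice([s for s in STRESSORS if s != prev])
            days.append(choice)
            prev = choice
        schedule.append(days)
    return schedule

for week, days in enumerate(cms_schedule(2), start=1):
    print(f"week {week}: {days}")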
Polymorphisms in genes controlling the activity of the HPA axis are also associated with differential risk of psychiatric disease. Polymorphisms in the GR gene have been associated with major depression in multiple cohorts (van West et al., 2005; van Rossum et al., 2006) [but also see Zou et al. (2010); Zimmermann et al. (2011)]. Interestingly, some GR polymorphisms are also predictors of the HPA axis response to psychosocial tests (Kumsta et al., 2007) and have been found to be associated with the extent of stress hormone dysregulation in major depression (Menke et al., 2013). Genotype-phenotype associations have also been identified in terms of antidepressant response (Ellsworth et al., 2013). Evidence of gene-environment interactions in the stress response and psychiatric susceptibility comes from a study of the corticotrophin-releasing factor receptor (CRF-R) (Bradley et al., 2008). Individuals with a particular CRF-R genotype who had experienced child abuse had an enhanced risk of depression as adults, an observation repeated in two ethnically different populations. Overall, studies suggest that the degree of HPA axis hyperactivity can vary considerably across psychiatric patient groups, likely due to genetic and environmental factors acting during early development or adult life. In that regard, two separate studies reported that polymorphisms of the FKBP5 gene that potentially modify the sensitivity of the GR are associated with an increased likelihood of adult depression in individuals exposed to adverse life events (Zimmermann et al., 2011) and childhood physical abuse (Appel et al., 2011). Genes involved in other pathways may also potentiate an aversive response to stress. A landmark early study described an association between a variant in the serotonin transporter gene and the response to stressful life experiences (Caspi et al., 2003). This functional variant in a major target of antidepressant therapies is associated with an elevated response to fearful stimuli, elevated hormonal responses to stress, and increased risk of depression in response to stress exposure (Lesch et al., 1996; Hariri et al., 2002; Jabbi et al., 2007). Variants in multiple genes in the serotonergic pathway have also been associated with altered behavioral phenotypes in animal models [reviewed in Holmes (2008)]. Critically, changes in circulating corticosteroids can regulate the activity of the rate-limiting serotonin synthetic enzyme tryptophan hydroxylase 2 in the brain (Clark et al., 2005, 2007). In rodent models, acute restraint stress up-regulates serotonin production in the amygdala (Mo et al., 2008), whilst chronic administration of ACTH to disrupt HPA axis function results in an increased level of serotonin in the prefrontal cortex in response to acute stress (Walker et al., 2013). Taken together, these findings demonstrate that alterations in HPA axis function can directly impact on CNS systems known to be associated with psychiatric disease.

PERIPHERAL DISORDERS ASSOCIATED WITH HPA CHANGES AND PSYCHIATRIC DISEASE

A wealth of evidence is now emerging to illustrate the link between stress and risk factors for physiological disorders, in particular metabolic disorders. Hyperactivity of the HPA axis and hypercortisolaemia are associated with the metabolic syndrome (Anagnostis et al., 2009).
Similarly, both chronic stress and chronic treatment with glucocorticoids are associated with central adiposity, dyslipidaemia, atrophy of skeletal muscles, insulin resistance, and glucose intolerance: a suite of symptoms remarkably reminiscent of the metabolic syndrome itself (Kyrou and Tsigos, 2009; van Raalte et al., 2009). Elevations of circulating glucocorticoids have also been linked with an increased risk of depression in those with metabolic disorder, and relative insensitivity to the dexamethasone suppression test has been documented in patients with this disorder (Kazakou et al., 2012). On the other hand, disturbances in fatty acid metabolism have been observed in cohort studies of depression (Assies et al., 2010). Fatty acid levels appear to have a bidirectional relationship with HPA axis activity, with glucocorticoids modulating fatty acid metabolism (Brenner et al., 2001; Macfarlane et al., 2008), and supplementation of polyunsaturated fatty acids reducing cortisol levels in both healthy subjects (Delarue et al., 2003) and those with depression (Jazayeri et al., 2010; Mocking et al., 2012). A study examining this relationship in more detail has shown that circadian changes in cortisol have a different association with the major fatty acid forms in major depression patients compared to controls (Mocking et al., 2013). Other studies have demonstrated changes in both visceral fat levels and adrenal gland volume in women with major depressive disorders (Ludescher et al., 2008). Some of these associations appear to have developmental antecedents, with exposure to a high-fat diet in the perinatal period being linked with both altered HPA axis function and mood changes (Sasaki et al., 2013). If metabolic disorders are considered as a spectrum, then diabetes is arguably positioned as the end point of this decline in function. Chronic stress and sustained dysregulation of corticosteroid production are strongly associated with the development of type 2 diabetes mellitus in both human cohorts and animal models (Chan et al., 2003; Rosmond, 2005; Reagan et al., 2008; Anagnostis et al., 2009; Matthews and Hanley, 2011). As an example, in mice, streptozotocin (STZ)-induced diabetes resulted in increased depressive-like behavior as well as increased corticosterone levels (Ho et al., 2012). The convergence of the associations between HPA axis dysfunction and both diabetes and depression is striking, with compelling evidence for links between the two disorders and this central underlying risk factor [reviewed in Champaneri et al. (2010)]. Dysfunction of HPA signaling also appears to interact with the autonomic nervous system to influence cardiovascular function. Components of the HPA axis act outside the hypothalamus to regulate sympathetic outflow, and thus heart rate. Elevated heart rate has been associated with depression in multiple studies (Forbes and Chaney, 1980; Carney et al., 1993, 2000; Lechin et al., 1995), and is a strong predictor of multiple parameters of cardiovascular disease, including myocardial ischaemia, arrhythmias, hypertension, and cardiac failure (Dyer et al., 1980; Kannel et al., 1987; Palatini and Julius, 1997). Depression is associated with an increased risk of mortality in patients with cardiovascular disease (Mann and Thakore, 1999), and this increased risk is strongly linked with hypercortisolaemia (Jokinen and Nordstrom, 2009).
In healthy subjects, cortisol and ACTH responses to the dex/CRH test were negatively associated with central adiposity and blood pressure, strong risk factors for cardiovascular disease, and positively associated with HDL cholesterol (Tyrka et al., 2012). Taken together, these studies speak to the accumulating evidence suggesting a link between disorders which involve HPA dysregulation and the risk of developing psychiatric disease. This is illustrative of the bidirectional relationship between peripheral illness and mental health: HPA axis changes may be either contributors to or consequences of peripheral disorders, but also have the capacity to modulate brain function and predispose to psychiatric disease.

PHARMACOLOGICAL TARGETING OF THE HPA AXIS

The GR antagonist mifepristone has been tested as an adjunctive treatment for psychiatric disorders (Schatzberg and Lindley, 2008). Most recently, a randomized controlled trial of adjunctive mifepristone in patients with bipolar disorder demonstrated alterations in cortisol levels which were correlated with improvements on neuropsychological tests of working memory (Watson et al., 2012). An earlier, smaller-scale trial by the same group showed improvements in both neurocognitive function and depression rating scores (Young et al., 2004). However, a similar study in schizophrenia showed alterations in plasma cortisol but no significant change in symptoms (Gallagher et al., 2005). These mixed findings do highlight the potential utility of therapeutics targeting HPA axis function, but are also suggestive of heterogeneity in the role of the HPA axis across, and potentially also within, psychiatric disorder diagnoses. The main challenge in pharmacological targeting of the HPA axis is that blockade of all GR-dependent processes could ultimately lead to counteractive effects such as elevated endogenous corticosterone levels. In that context, a newly developed high-affinity GR ligand (C108297) shows promising characteristics in rats (Zalachoras et al., 2013). Indeed, C108297 displays partial agonistic activity for suppression of CRH gene expression and potently enhances GR-dependent memory consolidation. This compound, which does not lead to disinhibition of the HPA axis, could help in dissecting the molecular signaling pathways underlying stress-related disorders. In recent years, other therapeutic strategies interacting at different levels of the HPA axis have been developed. These include agents acting on the CRH-R1 receptor and on adrenal steroidogenesis, as well as modulators of 11β-hydroxysteroid dehydrogenase type 1 (11β-HSD1), an enzyme regulating cortisol metabolism (Thomson and Craighead, 2008; Martocchia et al., 2011). In patients who were successfully treated with fluoxetine, the secretion of cortisol decreased (Piwowarska et al., 2012). Furthermore, recent data suggest that GR levels in lymphocytes could be used to predict response to antidepressant treatment in major depressive patients (Rojas et al., 2011). However, it should be noted that GR levels seemed inconsistent over time in this study. Also, measuring cortisol levels in depressed patients before and following treatment with an SSRI, Keating et al. (2013) concluded that stress physiology was unlikely to be a key factor in the response to antidepressant treatment.
The variation in findings from these studies may reflect differing modes of activity of the different antidepressant drug classes, superimposed on a heterogeneous patient population. This was illustrated in a study examining changes in daily cortisol patterns in patients using SSRIs, tricyclic antidepressants, other therapeutics, or no medications (Manthey et al., 2011). A complex pattern emerged, with some antidepressants suppressing the morning peak in cortisol, and others altering the response to the dexamethasone suppression test. However, the challenges inherent to measuring a circulating factor which is both diurnally regulated and acutely sensitive to environmental cues should not be underestimated.

IMMUNE DYSREGULATION, INFLAMMATION AND PSYCHIATRIC HEALTH

There is strong evidence that peripheral growth factors, pro-inflammatory cytokines, endocrine factors, and metabolic markers contribute to the pathophysiology of major depressive disorders and antidepressant response (Schmidt et al., 2011). Similarly, many of the systemic disorders associated with a higher incidence of psychiatric disease involve a significant inflammatory component. In fact, as our understanding of the aetiology of these disorders deepens, it has become apparent that there is significant overlap between the factors driving peripheral inflammatory disease and psychiatric disorders. Elevations of pro-inflammatory cytokines have been observed in both clinical populations and animal models of heart failure (Levine et al., 1990; Francis et al., 2003), after coronary surgery (Hennein et al., 1994), and following heart transplants (Azzawi and Hasleton, 1999). Importantly, the pathogenesis of atherosclerosis is intrinsically inflammatory (Koenig, 2001), with elevated local and circulating pro-inflammatory cytokines. In addition, the acute-phase marker C-reactive protein (CRP) is strongly associated with cardiovascular disease (van Holten et al., 2013), and can be used as a diagnostic or prognostic factor. As discussed above, cardiovascular disease is strongly associated with changes in psychiatric health, in particular depression. Cardiovascular disease is in turn closely linked with obesity, dyslipidemia, diabetes and metabolic disease. The elevated frequency of anxiety and depression in these disorders may in part underlie the association between cardiovascular and psychiatric risk factors. In studies of diabetic patient cohorts, the inflammatory marker CRP was consistently linked to depression severity, lipid profiles and obesity levels (van Reedt Dortland et al., 2013). Similarly, increased risk of depression in a cohort of patients with diabetes was associated with a higher BMI, illustrating the link between depression and poor control of cardiovascular risk factors (Kimbro et al., 2012). Obesity itself is considered to be a state of low-grade inflammation, and is linked with elevated depressive symptoms. In addition, in a longitudinal study, CRP levels at baseline were statistically associated with depression scores (Daly, 2013). Other disease states involving inflammatory processes are associated with elevated risk of depression. Major depression is the most common psychiatric manifestation of multiple sclerosis, with an incidence approaching 50% (Lo Fermo et al., 2010).
Likewise, although the incidence rate varies significantly between studies, an elevated incidence of depression has been documented in systemic lupus erythematosus (Palagini et al., 2013) and rheumatoid arthritis (Dickens et al., 2002). Common to all of these disorders is an autoimmune-mediated elevation of inflammatory signaling, with increased circulating pro-inflammatory cytokines observed in the periphery and in the CNS. Large case-control studies have described increased rates of anxiety and depression in patients with inflammatory bowel disease (Kurina et al., 2001; Ananthakrishnan et al., 2013a,b). Altered gut permeability to enteric bacteria has also been associated with depression. Translocation of bacterial antigens, in particular lipopolysaccharide (LPS), stimulates a systemic immune response characterized by elevated IgM and IgA antibodies reactive to the bacteria. Individuals with chronic depression are more likely to display increased LPS-reactive IgM and IgA than control subjects, indicating that elevated gut permeability may be potentiating a systemic inflammatory state (Maes et al., 2008, 2012a). The case for altered peripheral inflammation in psychiatric disease is strong, perhaps most so for major depression. Individuals with clinically classifiable major depression exhibit a wide range of changes in inflammatory markers, including elevated cytokines, chemokines, and acute-phase proteins, findings which have been replicated in several meta-analyses and which in some studies appear to be correlated with specific depressive symptoms (Miller et al., 2009). There appears to be a shift in the function of the immune system in depression, with an increase in pro-inflammatory cytokines accompanied by a decrease in cellular immunity (Zorrilla et al., 2001; Dowlati et al., 2010). The strength of these findings is heightened by a positive correlation between the elevations in pro-inflammatory cytokines and the severity of depression rating scores (Howren et al., 2009). A recent longitudinal population-based study demonstrated strong associations between depressive symptoms and elevated levels of the pro-inflammatory cytokine IL-6 and CRP (Lu et al., 2013). Notably, heightened IL-6, CRP and depressive symptoms were all predictive of reduced pulmonary function, in a cohort with no known history of obstructive pulmonary disease. This large study highlights the substantial cross-over between inflammatory disease and depressive symptomatology. However, there may be differences between sub-populations in depression, with some individuals more likely to display an inflammatory pathophysiology. Although the number of patients classified as suffering atypical depression is relatively low, these patients may be more likely to show high levels of inflammatory markers such as CRP (Hickman et al., 2013). Part of the population variance may result from polymorphisms in the CRP gene. The association between CRP levels and depressive symptoms may be moderated by CRP gene haplotype, in a complex manner which may underpin some of the variation in other association studies (Halder et al., 2010). Patients receiving therapeutic administration of cytokines [in particular interferon (IFN)-alpha and interleukin-2] for cancer or chronic viral infections frequently experience psychiatric symptoms, including the development of frank major depression in a significant proportion of patients (Capuron et al., 2004; Raison et al., 2005).
IFN-alpha stimulates both peripheral and central release of pro-inflammatory cytokines, a fact which underpins the behavioral effects of this cytokine and highlights the capacity of systemic immune signals to regulate CNS processes (Capuron et al., 2000, 2001, 2002, 2003, 2004; Raison et al., 2005; Eller et al., 2009; Alavi et al., 2012; Birerdinc et al., 2012; Udina et al., 2012). Of particular relevance to the treatment of depressive disorders is the emerging evidence that at least part of the therapeutic efficacy of currently available antidepressants may result from their concomitant anti-inflammatory effects. Although the response rate and efficacy of current antidepressants are far from universal, at least some patient populations derive significant benefit from these medications. However, the previously accepted notion that modulation of synaptic monoamines represents the sum total of the therapeutic effects of these drugs has now come into question. Recent studies have shown that selective serotonin reuptake inhibitor (SSRI) medications can suppress immune cell activation and release of inflammatory cytokines in the periphery and ex vivo (Diamond et al., 2006; Taler et al., 2007; Branco-de-Almeida et al., 2011). Notably, this immune-regulatory effect is not restricted to the periphery, but can also affect microglia, the immune cells of the CNS (Hashioka et al., 2007; Horikawa et al., 2010). A recent meta-analysis of human depression studies showed that antidepressant treatment at least partially ameliorates the elevations of pro-inflammatory cytokines associated with the disorder (Hannestad et al., 2011). Although it is clear that drug discovery in psychiatric disease needs to look beyond established drug classes, these findings emphasize the potential clinical utility of targeting inflammatory function in depression. Finally, potential sex differences have been suggested when assessing the effects of LPS on cytokine gene expression. Indeed, females had increased hippocampal levels of IL-6 and TNF-α with respect to males after repeated administration of LPS (Tonelli et al., 2008).

MECHANISMS OF IMMUNE MODULATION OF PSYCHIATRIC FUNCTION

Historically, the CNS was regarded as a "privileged" site with regard to the immune system, with little immune communication across the blood-brain barrier except in cases of frank CNS infection. However, it is now clear that the brain is sensitive to peripheral immune stimuli and can respond with activation of central immune cells and local production of inflammatory cytokines. Microglia are the CNS equivalent of macrophages, releasing cytokines upon activation and facilitating a central immune response, even in the absence of peripheral immune cell migration into the CNS. The brain's response to peripheral inflammatory stimuli can be seen most clearly in the pattern of behavioral changes which reliably results from systemic infection, administration of synthetic bacterial wall components, or administration of cytokines (Dantzer, 2004; Pucak and Kaplin, 2005). Termed "sickness behavior," this encompasses changes in motor activity, consummatory behavior, social interaction, circadian rhythms, and responsivity to hedonic and aversive stimuli. The parallels between these behavioral changes and aspects of depression have been well noted and have been a prompt for extensive research.
Systemic administration of synthetic bacterial endotoxin, or LPS, induces a well-established pattern of peripheral inflammation. However, multiple studies have now also demonstrated that systemic inflammation activates CNS microglia, including in non-human primates (Henry et al., 2008; Hannestad et al., 2012). In mice, systemic LPS causes microglial activation and synthesis of cytokines (Puntener et al., 2012). Microglia form close contacts with synaptic structures and appear to regulate synaptic strength (Wake et al., 2009). These cells also express multiple neurotransmitter receptors and are therefore acutely responsive to neuronal signaling (Kettenmann et al., 2011). Activated microglia are also a key source of reactive oxygen species, contributing to a state of inflammation-induced oxidative stress in the CNS (Dringen, 2005). Oxidative stress, driven both peripherally and centrally, is strongly associated with psychiatric aetiology. Reduced plasma L-tryptophan, the precursor for serotonin, is a potential biomarker of "vulnerability to depression" (Maes et al., 1993). Indeed, tryptophan depletion is widely used to study the contribution of reduced serotonin transmission to the pathogenesis of major depressive disorder (Van der Does, 2001) and is also relevant in the context of immune activation (Kurz et al., 2011). The depressive symptomatology associated with immunomodulatory therapy may be mediated in part by changes in tryptophan metabolism. Pro-inflammatory cytokines such as IFN-γ, IFN-α, and TNF-α, and reactive oxygen species, induce activation of the enzyme indoleamine 2,3-dioxygenase (IDO) in microglia, which metabolizes tryptophan via the kynurenine pathway (Maes, 1999; Wichers et al., 2005; Dantzer et al., 2008; Maes et al., 2012b). This shifts the balance of tryptophan toward kynurenine and away from serotonin, reducing serotonin bioavailability (Capuron et al., 2002, 2003; Vignau et al., 2009). Notably, in the CNS only microglia further metabolize kynurenine to quinolinic acid, which exerts neurotoxic effects (Guillemin et al., 2005; Soczynska et al., 2012). Patients treated with IFN-α for hepatitis C infection developed depressive symptoms, including negative moods, that were correlated with increased levels of kynurenine (Wichers et al., 2005). In addition, analysis of plasma tryptophan and kynurenine pathway metabolites in patients with major depression showed increased rates of tryptophan degradation compared to normal control subjects (Myint et al., 2007). Taken together, these findings indicate that cytokine-induced microglial activation can mediate changes in neurotransmitters and other bioactive metabolites which may underpin mood disorders. Also, recent data indicate that cognitive impairments (as well as the decline in neurogenesis observed during ageing) can be attributed in part to dysregulation of blood-borne factors, such as changes in peripheral CCL11 chemokine levels (Villeda et al., 2011). These findings support crosstalk between peripheral molecular processes and central effects related to cognitive and emotional function.

PHARMACOLOGICAL TARGETING OF INFLAMMATORY PATHWAYS

Several of the therapies for the inflammatory disorder rheumatoid arthritis potentiate the effects of antidepressant therapies (Margaretten et al., 2011b). Such drugs target pro-inflammatory cytokine pathways, for example TNF-α antagonists such as etanercept.
This particular drug is also commonly used in the treatment of the inflammatory skin condition psoriasis, and large-scale studies have indicated that patients with psoriasis receiving it show reduced depression scores relative to placebo (although the level of depressive symptoms in these patients was relatively low overall, and would not constitute a diagnosis of major depression) (Tyring et al., 2006). Interestingly, follow-up studies indicated that the change in depression score was independent of disease state (Krishnan et al., 2007). Drugs with a similar TNF-α antagonist activity have also shown antidepressant activity in trials in patients with other inflammatory conditions, including Crohn's disease and ankylosing spondylitis (Persoons et al., 2005; Ertenli et al., 2012). Critically, a recent study of the TNF-α antagonist infliximab in otherwise healthy patients with major depression demonstrated that the antidepressant activity of this drug was dependent on the level of inflammatory markers at baseline. This study demonstrated that depressed patients with higher levels of the inflammatory markers TNF-α and CRP showed a decrease in depression rating scores over the course of the study. It is also worth noting that the patients in this study were poorly responsive to classical antidepressant therapy, which may indicate that a sub-population exists in whom inflammation is correlated with both poor antidepressant response and efficacy of anti-inflammatory medication. A second recent study also demonstrated that patients with depression who experienced a decline in symptoms with infliximab treatment also showed elevated inflammatory gene expression in peripheral immune cells (Mehta et al., 2013). Response to infliximab was also associated with reductions in the expression of other genes involved in innate immune activation. Agents such as infliximab are too large to cross the blood-brain barrier, and therefore the amelioration of depressive symptoms is more likely associated with resolution of peripheral inflammation than with direct effects of the drug in the brain. However, as we have discussed above, CNS microglia are acutely sensitive to circulating cytokine levels, and so their level of activity may well be modulated by anti-inflammatory treatment. The developing focus on inflammatory function in depression has spurred trials of other anti-inflammatory drugs as adjuncts to antidepressant treatment. A large-scale longitudinal population study revealed that statin users were less likely than non-users to have depression at baseline (Otte et al., 2012). Statin users who did not have depressive symptoms at baseline were also less likely to develop depression during the follow-up period. Statins are commonly prescribed to individuals who have had a cardiac event or intervention. A prospective study in this population showed that prescription of statins reduced the likelihood of developing depression by up to 79% (Stafford and Berk, 2011). A large community study also documented reduced exposure to statins and aspirin (another non-steroidal anti-inflammatory agent) in women with major depressive disorder (Pasco et al., 2010). Likewise, women who were exposed to these agents were less likely to develop depression over the course of the study. Similar results were also observed in a large population-based cohort of elderly patients, with statins exerting a protective effect against the development of depressive symptoms.
Notably, this study also documented a positive correlation between the use of systemic corticosteroids and depression. The cyclooxygenase-2 (COX-2) inhibitor celecoxib is a non-steroidal anti-inflammatory drug used widely in the treatment of pain, particularly related to arthritic conditions. This drug has been found to improve depressive symptoms when administered in conjunction with the antidepressants sertraline (Abbasi et al., 2012), reboxetine (Muller et al., 2006), and fluoxetine (Akhondzadeh et al., 2009). However, it should be noted that other trials have resulted in conflicting findings, with several showing no beneficial effect of celecoxib in depression (Musil et al., 2011; Fields et al., 2012). The discrepancies in these study results are potentially reflective of the complexity of the inflammatory pathways, in which COX-2 and many other key molecules may play multiple roles. In the brain, COX-2 has anti-inflammatory and neuroprotective effects (Minghetti, 2004), and COX-2-deficient mice show increased neuronal damage, microglial reactivity and oxidative stress markers (Aid et al., 2008). Hence targeting of inflammatory pathways in depression requires careful investigation of both peripheral and central responsivity. COX-2, in particular, may not be the most appropriate target for adjunct therapies in depression [reviewed in Maes (2012)]. In addition, modulation of immune and inflammatory signaling necessitates caution with regard to the potential lowering of defenses against opportunistic infection and malignancy. Long-term use of immune-modifying drugs has been associated with increased incidence of serious infections and cancer (Bongartz et al., 2006; Atzeni et al., 2012; van Dartel et al., 2013). This raises the possibility that agents which directly regulate the CNS rather than the peripheral inflammatory response, or which have milder anti-inflammatory effects, may be more appropriate targets for the pharmacotherapy of depression. Still peripherally active, but arguably milder in effect, are the non-steroidal anti-inflammatory medications, including aspirin. Animal studies using aspirin have shown moderate but discernible effects on depressive behavior (Wang et al., 2011). Preliminary clinical trials have corroborated this, showing a synergistic effect of co-therapy with antidepressants and aspirin (Mendlewicz et al., 2006). However, perhaps more compelling is the result from a large-scale longitudinal cohort study, which documented an association between aspirin use and lowered risk of depression (Pasco et al., 2010). Echoing this is a cross-sectional study which demonstrated that men with elevated plasma homocysteine, a marker of cardiovascular risk, had a reduced risk of depression if they had been taking aspirin (Almeida et al., 2012). Minocycline, a second-generation tetracycline derivative, has recently attracted significant attention for its potential efficacy as an antidepressant. This well-characterized drug has potent anti-inflammatory and neuroprotective effects which are independent of its antibiotic efficacy (Pae et al., 2008; Dean et al., 2012). Most importantly, minocycline readily crosses the blood-brain barrier and is known to inhibit microglial activation (Pae et al., 2008; Dean et al., 2012). Studies in mice have demonstrated that minocycline attenuated the elevations in CNS IL-1β, IL-6, and IDO induced by bacterial endotoxins (Henry et al., 2008).
This study also showed that pre-treatment with minocycline prevented the development of depressive-like behavioral endophenotypes, and normalized the kynurenine/tryptophan ratio in the plasma and brain (Henry et al., 2008). These findings clearly indicate that minocycline acts on microglia through inhibition of the synthesis of pro-inflammatory cytokines and of IDO up-regulation, and that these effects may flow through to ameliorate mood states. Echoing this, a small open-label study reported that minocycline (150 mg/day) in combination with a serotonin reuptake inhibitor helped ameliorate depressive mood and psychotic symptoms in patients with psychotic unipolar depression (Miyaoka et al., 2012). The developing appreciation of the role of inflammatory function in depression has highlighted the potential role of dietary sources of anti-inflammatory species. Deficiencies of the antioxidant and anti-inflammatory coenzyme Q10 (CoQ10) have been associated with depressed mood (Maes et al., 2009), and a preliminary study of supplementation with CoQ10 showed an amelioration of depression scores in a cohort with bipolar disorder (Forester et al., 2012). Several studies in pre-clinical models have shown potential antidepressant effects of omega-3 fatty acids (Watanabe et al., 2004), and conversely, deficient diets during prenatal development have been associated with persistent changes in mood state (Chen and Su, 2013). Adding to this, altered lipid profiles have been described in the cortex of patients with mood disorders (Tatebayashi et al., 2012). Large-scale population assays have shown associations between dietary lipid profiles and the risk of depression (Hoffmire et al., 2012). Although the outcomes of clinical trials using omega-3 supplementation are still under some debate, recent meta-analyses have pointed to some degree of improved outcome in depressed patients (Lin and Su, 2007; Bloch and Hannestad, 2012; Martins et al., 2012). Intriguingly, omega-3 fatty acids have received particular attention for the treatment of depressive symptoms post-myocardial infarction (Gilbert et al., 2013; Siddiqui and Harvey, 2013). In such cases, the anti-inflammatory effects of these lipids may be ameliorating both the peripheral inflammatory state and the secondary central inflammation.

INTERFACES BETWEEN HPA AXIS AND IMMUNE DYSFUNCTION

Whilst it is clear that both inflammation and HPA dysfunction are associated with psychiatric pathology, these two systems interact at multiple levels and may together exert a synergistic effect on neuronal function. Across the spectrum of systemic disorders associated with peripheral inflammation and an increased risk of depression, many are also associated with elevated susceptibility to stress, or worsening of symptoms in response to stress. A large-scale longitudinal study showed an association between inflammatory bowel disease (Crohn's disease and ulcerative colitis) and depressive symptoms (Ananthakrishnan et al., 2013b). These disorders are strongly associated with perceived life stress, with time to relapse predicted by stress levels (Triantafillidis et al., 2013). Studies of metabolic syndrome, diabetes and associated cardiovascular diseases have shown not only that this suite of disorders is associated with increased risk of depression and a low-grade inflammatory state, but that chronic stress is a strong promoting factor [reviewed in Kyrou and Tsigos (2009)].
These interactions may have developmental antecedents, with exposure to a high-fat diet in early life being associated with altered HPA axis function, altered inflammatory regulation, and disordered behavioral profiles in later life (Sasaki et al., 2013). Nonetheless, the question remains as to how these complex systems interact in both the periphery and the CNS, and by what mechanisms these systems modulate neuronal function and mood. Synthetic glucocorticoids are used therapeutically at supraphysiological levels for their anti-inflammatory effects. However, when examining the relationship between the HPA axis and the immune system in physiological or pathophysiological states, the situation appears more complex. Glucocorticoids modulate the immune system through binding to receptors expressed by immune cells, which down-regulates transcription of pro-inflammatory genes and up-regulates production of anti-inflammatory cytokines (Barnes, 2006; Leonard, 2006). Glucocorticoids also regulate the circulating numbers, tissue distribution and activity profile of lymphocytes in a time-dependent manner [comprehensively reviewed in Dhabhar (2009)]. Compared to acute stress, chronic stress appears to suppress some of the protective aspects of immune regulation, whilst enhancing the drive toward a pro-inflammatory state. The complexity of these interactions reflects the fact that chronic stimulation of the HPA axis may not in fact result in a hypercortisolaemic state; given the capacity of the HPA axis for negative feedback regulation, baseline cortisol levels in chronic stress may actually be lower than normal. Glucocorticoids can be used therapeutically as immuno-suppressants but in some experimental models appear to have pro-inflammatory effects. Part of this discrepancy may come from differences between in vivo and in vitro models; in addition, the complexities of chronic stress in an animal model should not be overlooked. Chronic stress may appear to increase or decrease circulating glucocorticoids depending on the method of stress and the method of glucocorticoid measurement employed. An animal with chronic down-regulation of HPA axis responsivity, for example, may respond to the acute stress of blood collection or some forms of euthanasia with an overshoot of the normal glucocorticoid response, giving the impression of elevated circulating hormone levels in response to the chronic stress. Immune activation may also feed back to modulate glucocorticoid sensitivity. Production of cytokines up-regulates expression of the GR and modulates the sensitivity of the HPA axis to negative feedback (Arzt et al., 2000). Elevation of pro-inflammatory cytokines, including IL-2, appears to inhibit nuclear translocation of the GR and suppress glucocorticoid signaling (Goleva et al., 2009; Schewitz et al., 2009). Likewise, administration of IL-1 up-regulates HPA axis activity (Dunn, 2000). Systemic exposure to pro-inflammatory stimuli such as bacterial LPS induces secretion of CRH, thereby activating the HPA axis (Sternberg, 2006). These studies illustrate the complex bidirectional interactions between HPA axis function and regulation of inflammation. Potential sex differences have been suggested when assessing the effects of LPS on the stress response. Indeed, female rats showed a higher LPS-induced corticosterone release compared to male animals (Tonelli et al., 2008).
The relationship between HPA axis activity and inflammation may also be regionally specific. The peripheral response to stress and HPA activation is likely to be qualitatively, quantitatively and temporally distinct from that observed in the CNS. In a mouse model of chronic stress, increases in basal inflammatory markers were observed in multiple brain regions (Barnum et al., 2012). Chronic unpredictable stress can also up-regulate the response to peripheral inflammatory stimuli, mediated by glucocorticoid signaling (Munhoz et al., 2006). This differs somewhat from the concept of glucocorticoid signaling as immunosuppressive, and highlights the need for further investigation of the nexus between HPA and immune function in the brain. Microglia represent the critical interface between the activity of the HPA axis, circulating inflammatory signals and the brain's inflammatory response. Microglial number and the morphological changes associated with activation can be increased by chronic stress in animal models (Nair and Bonneau, 2006; Tynan et al., 2010). Blockade of glucocorticoid signaling can block stress-induced sensitization of microglial inflammatory responses (Frank et al., 2011), and microglial activation can be primed by in vivo exposure to glucocorticoids (Nair and Bonneau, 2006) or chronic stress (Farooq et al., 2012). Within the CNS, the balance between pro- and anti-inflammatory responses to peripheral immune stimuli is modulated by the density of microglial cells (Pintado et al., 2011). The relationship between microglial activation and the stress response has been most comprehensively investigated in animal models. Repeated exposure to restraint stress induced microglial activation in male C57BL/6 mice, as measured by the degree of proliferation of microglia (Nair and Bonneau, 2006). The increase in microglial number was positively correlated with the elevation of serum corticosterone levels induced by stress exposure. Similarly, chronic restraint stress caused a significant increase in activated microglia and in the number of microglia in multiple brain regions (Tynan et al., 2010; Hinwood et al., 2012), and inescapable stress potentiates the microglial response to immune stimuli (Frank et al., 2012). However, high doses of glucocorticoid agonists suppress microglial production of inflammatory cytokines (Chantong et al., 2012). These differential responses may reflect central vs. peripheral differences, in addition to a switch from a pro- to an anti-inflammatory response between physiological and pharmacological levels of glucocorticoids. Nonetheless, the consensus from these studies is that microglia are acutely sensitive to both HPA axis function and inflammatory signals, and act as an inflection point between peripheral and central responses to these stimuli. As discussed above, the activation state of the microglial population has direct effects on neuronal function, via secondary cytokine production, reactive oxygen species production, neurotoxic effects and modulation of neurotransmitter production.

CONCLUSIONS

It has long been established in traditional forms of medicine and in anecdotal knowledge that the health of the body and the mind are inextricably linked. Although strong associations between somatic illnesses and psychiatric disturbances have routinely been described in the literature, it is only recently that western medicine has sought to, or indeed had the means to, investigate the mechanisms underlying these associations.
Strong and continually developing evidence now suggests that converging disruptions to inflammatory and endocrine pathways may interact in both the periphery and the CNS to potentiate states of psychiatric dysfunction, in particular depressed mood. Further evidence highlights the potential role of the CNS inflammatory cells, microglia, as a critical nexus between HPA axis activity, inflammation and neuronal dysfunction (Figure 1).

FIGURE 1 | Biological mechanisms by which peripheral dysfunction may impact on neuronal function and therefore psychiatric state. Schematic illustration of the potential role of the CNS inflammatory cells, microglia, as a critical nexus between HPA axis activity, inflammation, and neuronal dysfunction.

Aspects of these pathways may therefore present as possible targets for therapeutic interventions for psychiatric disease or for psychiatric complications of somatic disease. Even more efficacious may be targeting multiple aspects of these pathways, or convergence points such as central microglial cells. In this review we have focused on the biological mechanisms by which peripheral dysfunction may impact on neuronal function and therefore psychiatric state. However, we do not wish to discount the psychological influence of ill health on mental function. Clearly the psychological stresses associated with chronic illness or suboptimal health may themselves potentiate, perpetuate and exacerbate psychiatric disease. An effective clinical approach to integrated patient management may therefore need not only to target the HPA axis dysfunction, inflammatory changes or other pathological processes associated with peripheral disorders, but also to attend to the psychological health of the patient.
2016-06-17T21:54:41.935Z
2013-10-14T00:00:00.000
{ "year": 2013, "sha1": "a406a5a7c554350ae00323c76bae4130f210c14e", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2013.00158/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a690c1535dfac07b50cd41593d105a96fbe1c8f2", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
118505431
pes2o/s2orc
v3-fos-license
Theoretical investigations of electronic structure and magnetism in Zr2CoSn full-Heusler compound The half-metallic properties of a new and promising full-Heusler compound, Zr2CoSn, are investigated by means of ab initio calculations within the Density Functional Theory framework. The ferromagnetically ordered Hg2CuTi-type crystal structure is energetically favorable and the optimized lattice parameter is 6.76 Å. The calculated total magnetic moment is 3 μB/f.u. and follows a typical Slater-Pauling dependence. The half-metallicity disappears if the unit cell volume is contracted by 5 %.

Introduction

Advances in microelectronics and magnetic data storage devices nowadays depend on novel approaches to device fabrication based on the synergistic use of the charge and spin dynamics of electrons in multifunctional materials. Various new device concepts have already found practical applications in magnetoelectronics or spintronics (e.g., read heads for magnetic recorders or nonvolatile memory components). For efficient spintronic devices, it is desirable to have nearly 100% spin-polarized carrier injection. Since half-metallic materials have electrons of only one spin state present around the Fermi level [1], they are promising candidates for use as spin injectors. One of the most interesting classes of compounds with half-metallic characteristics are the Heusler alloys [2], reported in the literature in two variants: the full-Heusler X2YZ compounds and the half-Heusler XYZ compounds, where X is a transition metal, Y a transition metal or a rare-earth metal, and Z a main-group element. The full-Heusler materials crystallize either in the Cu2MnAl-type (L21) or in the Hg2CuTi-type structure. If the X atom is more electronegative than Y, the Cu2MnAl phase with space group Fm-3m is obtained [3]. The often so-called inverse Heusler structure (Hg2CuTi prototype) with space group F-43m is reported when the Y element is more electronegative than X. In the Hg2CuTi-type structure, the X atoms are placed in the Wyckoff positions 4a(0,0,0) and 4c(1/4,1/4,1/4), while Y and Z occupy 4b(1/2,1/2,1/2) and 4d(3/4,3/4,3/4), respectively [4]. Even though no single set of properties can characterize the entire Heusler family, the magnetic behavior and multifunctional properties recently reported in the literature allow these half-metallic systems to play an important role in the research field of magnetic tunnel junctions [5] and spintronics [6,7]. Nowadays, promising materials which crystallize in the Hg2CuTi prototype, e.g. Mn2-, Ti2-, and Sc2-based Heusler compounds, are intensively studied theoretically and experimentally for potential use in devices that can inject currents of high spin polarization [8,9,10,11,12,13]. In this paper, the electronic structure and magnetic properties of a newly proposed Zr2CoSn full-Heusler compound, calculated from first principles, are reported. Investigations based on Density Functional Theory (DFT) predict that the ideal Zr2CoSn system exhibits half-metallic behavior and might be suitable for use in spintronics.
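To make the crystallography above concrete, here is a minimal sketch, not part of the original paper and not the authors' WIEN2k input, of how the inverse-Heusler Zr2CoSn cell could be set up in Python with the Atomic Simulation Environment (ASE); the space group, Wyckoff positions, and lattice parameter are those quoted in the text.

```python
# Minimal sketch (not from the paper): inverse-Heusler (Hg2CuTi-type) Zr2CoSn
# built from the Wyckoff positions quoted above, space group F-43m (no. 216).
from ase.spacegroup import crystal

a = 6.76  # optimized lattice parameter in angstrom, as reported in the text
zr2cosn = crystal(
    symbols=["Zr", "Zr", "Co", "Sn"],
    basis=[(0.00, 0.00, 0.00),    # Zr on 4a
           (0.25, 0.25, 0.25),    # Zr on 4c
           (0.50, 0.50, 0.50),    # Co on 4b
           (0.75, 0.75, 0.75)],   # Sn on 4d
    spacegroup=216,               # F-43m
    cellpar=[a, a, a, 90, 90, 90],
)
print(len(zr2cosn))  # 16 atoms in the conventional cubic cell (4 formula units)
```

From such an object the structure can be exported to the input format of any DFT package; the electronic-structure parameters reported below would then still have to be chosen as in the original calculation.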
Method of calculation

The structural parameters of bulk Zr2CoSn were determined using the Full-Potential Linearized Augmented Plane Wave (FPLAPW) method, as implemented in the WIEN2k code [14]. The Perdew-Burke-Ernzerhof [15] Generalized Gradient Approximation (GGA) was employed for the exchange-correlation parametrization. The muffin-tin radii RMT of 2.45 a.u., 2.52 a.u. and 2.38 a.u. were used for Zr, Co and Sn, respectively. Inside the muffin-tin spheres, lattice harmonics of up to l = 10 were selected for the basis set. In the interstitial region, the plane-wave cut-off value used was KmaxRMT = 8 (where Kmax is the maximum modulus of the reciprocal lattice vectors). The tetrahedron method [16] with a grid containing 560 k points in the irreducible part of the Brillouin zone (BZ) was used to construct the charge density in each self-consistency step. A cut-off energy of −6 Ry defined the separation between the valence and core states. The charge convergence was checked versus the number of k points. Self-consistency was considered achieved when the total energy deviation was better than 0.01 mRy per cell and the integrated charge difference between two successive iterations was less than 0.0001 e/a.u.³.

Results and Discussions

Ab initio calculations were performed for the bulk Zr2CoSn full-Heusler material to investigate the existence and nature of its magnetic properties in correlation with the electronic structure. In general, experimental preparation and interpretation of true half-metallic compounds are still scarce; therefore, structural optimization calculations need to be considered from the beginning to identify the magnetically and structurally stable phase by means of total energy minimization. The ferromagnetic configurations of Zr2CoSn in the two crystal structure prototypes (Cu2MnAl and Hg2CuTi) typical for the full-Heusler materials, as well as the antiferromagnetic and non-magnetic states of Zr2CoSn with the L21 structure, are the starting point of the calculations, as shown in Figure 1. Based on the results, the ferromagnetic configuration of Zr2CoSn with the Hg2CuTi structure is energetically favorable and has the lowest calculated total energy; therefore, the Zr2CoSn compound should crystallize in the inverse Heusler structure with space group F-43m. Hence, the Wyckoff sequence considered for further calculations is 4a(Zr), 4c(Zr), 4b(Co) and 4d(Sn), and the calculated equilibrium lattice parameter of the ferromagnetic Zr2CoSn phase with the Hg2CuTi-type structure is 6.76 Å. In the GGA scheme used in these investigations, half-metallic properties are illustrated by the occurrence of an energy band gap in one of the spin channels and an integer total magnetic moment of the compound. The total and partial densities of states of Zr2CoSn as functions of the energy relative to the Fermi level, E − EFermi, calculated at the equilibrium lattice constant, are displayed in Figure 2. The electronic structure reveals a metallic character in the majority spin channel and a semiconducting behavior, with an energy gap around the Fermi level, in the minority spin channel. It is concluded that Zr2CoSn exhibits half-metallic properties with complete spin polarization in the ground state. Figure 3 illustrates the band structure of the Zr2CoSn full-Heusler alloy at the optimized geometry. The majority-spin band structure is displayed in the left panel, while the right panel shows the minority-spin band structure, where the complete absence of states at the Fermi level implies the existence of an energy band gap of 0.543 eV. The indirect band gap is formed between the highest occupied states of the valence band (VB) at the Γ point and the lowest unoccupied states of the conduction band (CB) at the L point.
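For readers unfamiliar with the term, "complete spin polarization" can be made quantitative with the standard definition P = (N↑(EF) − N↓(EF)) / (N↑(EF) + N↓(EF)). The toy Python check below (my illustration, with made-up DOS values, not numbers from the paper) shows why a vanishing minority-spin DOS at the Fermi level gives P = 100 %.

```python
def spin_polarization(n_up_ef: float, n_dn_ef: float) -> float:
    """P = (N_up(EF) - N_dn(EF)) / (N_up(EF) + N_dn(EF)) from the DOS at E_F."""
    return (n_up_ef - n_dn_ef) / (n_up_ef + n_dn_ef)

# Half-metal: finite majority-spin DOS, zero minority-spin DOS at E_F.
print(spin_polarization(1.7, 0.0))  # -> 1.0, i.e. 100 % spin polarization
# Ordinary ferromagnet: both spin channels contribute at E_F.
print(spin_polarization(1.7, 0.9))  # -> ~0.31
```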
The spin-flip gap, calculated as the gap from the highest minority-spin valence-band maximum to the Fermi level, is 0.484 eV for the Zr2CoSn compound at the optimized lattice constant. The Zr 4d and Co 3d electrons determine the energy band gap around the Fermi level. The density of states located above the Fermi level mainly comes from the Zr(4a) atoms, while below the Fermi level the density of states of the Co atoms has the main contribution (see Figure 4). The dependence of the gap on the lattice parameter is illustrated in Figure 5. The compound is predicted to be an ideal candidate for spintronics, due to the existence of a gap in only one spin direction over a large lattice parameter range. The half-metallic properties disappear for a volume contraction of more than 5 % (corresponding to a lattice parameter of 6.67 Å). For further contraction, the Fermi level falls within the conduction band (CB), so that the compound becomes a typical ferromagnet and the spin polarization decreases. However, such a critical volume contraction is large, making the compound a very stable one with respect to its polarization properties. The calculated total magnetic moment of the Zr2CoSn Heusler compound with the Hg2CuTi structure is 3 μB and follows the Slater-Pauling dependence Mt = Zt − 18 μB/f.u. [10], where Mt is the total spin magnetic moment per formula unit and Zt the total number of valence electrons; with Zt = 2 × 4 (Zr) + 9 (Co) + 4 (Sn) = 21 valence electrons, Mt = 21 − 18 = 3 μB/f.u., in agreement with the calculated moment. The main contribution to the total magnetic moment comes from the Zr(4a) and Co atoms, which are ferromagnetically coupled (see Figure 6). The spin-polarized calculations reveal that the site-resolved magnetic moments per atom at the optimized lattice constant are 0.446, 0.946, 0.816 and −0.013 μB for Zr(4c), Zr(4a), Co and Sn, respectively. The magnetic moments of the Zr (4a and 4c) atoms decrease, while that of the Co atoms increases, with increasing lattice constant. The different atomic environments determine the dissimilar local magnetic moments of the zirconium atoms.

Conclusions

First-principles investigations of the electronic and magnetic properties of the Zr2CoSn full-Heusler alloy have been reported. The half-metallic behaviour, in relation to the densities of states of the bulk material, is reproduced for the ground state. In the minority spin channel, the energy gap is 0.543 eV at the optimized lattice constant of 6.76 Å. From the applications point of view, the investigated material, Zr2CoSn, is predicted to be suitable for spintronic devices due to its high spin polarization.

Acknowledgments

A. Birsan would like to thank Dr. P. Palade for his support, and Dr. L. Ion for helpful discussions. The authors acknowledge the financial support provided by the Romanian National Authority for Scientific Research through the CORE-PN45N projects. This work was also supported by the strategic grant POSDRU/159/1.5/S/137750, "Project Doctoral and Postdoctoral programs support for increased competitiveness in Exact Sciences research", cofinanced by the European Social Fund within the Sectoral Operational Programme Human Resources Development 2007-2013.
2014-11-26T09:50:43.000Z
2014-11-26T00:00:00.000
{ "year": 2015, "sha1": "cf10aea97cd6a739df4d0c50d8b09dc11ec533df", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1411.7154", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cf10aea97cd6a739df4d0c50d8b09dc11ec533df", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
10083389
pes2o/s2orc
v3-fos-license
Wedge wetting by electrolyte solutions The wetting of a charged wedge-like wall by an electrolyte solution is investigated by means of classical density functional theory. As in other studies on wedge wetting, this geometry is considered as the most simple deviation from a planar substrate, and it serves as a first step towards more complex confinements of fluids. By focusing on fluids containing ions and surface charges, features of real systems are covered which are not accessible within the vast majority of previous theoretical studies concentrating on simple fluids in contact with uncharged wedges. In particular, the filling transition of charged wedges is necessarily of first order, because wetting transitions of charged substrates are of first order and the barrier in the effective interface potential persists below the wetting transition of a planar wall; hence, critical filling transitions are not expected to occur for ionic systems. The dependence of the critical opening angle on the surface charge, as well as the dependence of the filling height, of the wedge adsorption, and of the line tension on the opening angle and on the surface charge, are analyzed in detail.

I. INTRODUCTION

Over the past few decades numerous theoretical and experimental investigations have been performed aiming at a microscopic understanding of the phenomena of fluids at interfaces, e.g., capillarity, wetting, and spreading, which are of technological importance for, e.g., coating processes, surface patterning, or the functioning of microfluidic devices [1][2][3][4][5]. Particularly simple model systems to investigate these phenomena theoretically are planar homogeneous substrates, which have been studied intensively [6][7][8]. This way, methods have been developed to relate the thickness of fluid films adsorbed at substrates and the contact angle to fluid-fluid and wall-fluid interactions, to infer surface phase diagrams, and to characterize the order of wetting transitions. However, the preparation of truly flat homogeneous substrates requires a huge technical effort, and in nature there is no such thing as a perfectly flat surface [9]. On the one hand, one is always confronted with geometrically or chemically structured substrates, irregularly shaped boundaries, or geometrical disorder. On the other hand, modern surface patterning techniques allow for the targeted fabrication of structured substrates with pits, posts, grooves, edges, wedges etc. in order to generate functionality, e.g., superhydrophobic surfaces [10]. This leads to the necessity of studying substrates beyond the simple flat geometry, but the wetting properties of such nonplanar substrates are very different from those of smooth and planar walls, and their description is much more complex. Perhaps the most simple of the aforementioned elementary topographic surface structures are wedges, which are formed by the intersection of two planar walls meeting at a particular opening angle. First predictions of the phenomenon of the filling of a wedge upon decreasing the opening angle have been based on macroscopic considerations [11,12]. Microscopic classical density functional theory and mesoscopic approaches based on effective interface Hamiltonians revealed that systems with long-ranged van der Waals interactions, where critical wetting transitions of planar walls occur, exhibit critical wedge filling transitions with universal asymptotic scaling behavior of the relevant quantities [13][14][15].
It has been argued that the order of a filling transition equals the order of the wetting transition of a planar wall [16]. However, it turned out later that the relation between the orders of wetting and filling transitions is more subtle: if the wetting transition is critical, then the filling transition is critical, too. Otherwise, if the wetting transition is of first order, then the filling transition may be first order or critical, depending on whether or not a barrier exists in the effective interface potential at the filling transition [17,18]. A consequence of the latter scenario with first-order wetting transitions is the possibility to have first-order filling transitions, if the critical opening angle is wide, and critical filling transitions, if it is narrow. These predictions from mesoscopic approaches have recently been verified by microscopic classical density functional theory [19,20]. In order to reduce complexity, all cited previous theoretical studies on wedge wetting have been performed for models of simple fluids. However, many fluids used in applications, including pure water due to its autodissociation reaction, are complex fluids containing ions, so that the generic situation of wedge wetting by electrolyte solutions is of enormous interest from both the fundamental as well as the applied point of view. Despite the huge relevance of electrolytes as fluids involved in wedge wetting scenarios [21], this setup has not been theoretically studied before on the microscopic level, probably due to the expected lack of universality and the increased complexity as compared to cases with critical wetting and filling transitions. Indeed, it turned out for planar walls that the presence of ions, not too close to bulk critical points, generates first-order wetting and a non-vanishing barrier in the effective interface potential below the wetting transition [22]. Hence, on very general grounds, one expects first-order filling transitions of wedges to take place for electrolyte solutions. In the present work, a microscopic lattice model is studied within a classical density functional theory framework in order to investigate the properties of wedge wetting by electrolyte solutions. The usage of a lattice model allows for technical advantages over continuum models [22][23][24]. The model and the density functional formulation are specified in Sec. II. In Sec. III, first the bulk phase diagram and the wetting behavior of a planar wall are reported for the considered model. Next, wedge wetting is studied in terms of three observables: the wedge adsorption, the filling height, and the line tension. The dependence of these quantities on the wedge opening angle, on the surface charge density of the walls of the wedge, as well as on the strength and the range of the nonelectrostatic wall-fluid interaction are discussed in detail. Concluding remarks on the first-order filling transition considered in the present work and the more widely studied critical filling transition are given in Sec. IV.

A. Setup

In the present work, the filling behavior of an electrolyte solution close to a wedge-like substrate is studied. Consider in three-dimensional Euclidean space a wedge composed of two semi-infinite planar walls meeting at an opening angle θ along the z-axis of a Cartesian coordinate system (see Fig. 1). Due to the translational symmetry in the z-direction the system can be treated as quasi-two-dimensional.
In between the two walls an electrolyte solution composed of an uncharged solvent (index "0"), univalent cations (index "+"), and univalent anions (index "−") is present. The wedge is in contact with a gas bulk at thermodynamic coexistence between the liquid and the gas phase. This choice of the thermodynamic parameters allows for two different filling states of the wedge. From macroscopic considerations [11,12], a critical opening angle

θC = 180° − 2ϑ,    (1)

with ϑ the contact angle of the liquid, can be derived, which marks the transition between the wedge being filled by gas ("empty wedge") for θ > θC and the wedge being filled by liquid for θ < θC. It is of utmost importance for the following to realize that, from the microscopic point of view, a macroscopically empty wedge is typically partially filled by liquid.

FIG. 1. Schematic depiction of the studied system. The two unit vectors eu and eu′ are parallel to the two walls, which meet at the opening angle θ. An arbitrary location r can be specified by the lateral and the normal components ru, rv or ru′, rv′ with respect to the walls. The parallelogram close to the wedge apex indicates the geometry of the unit cells by which the space in between the walls is tiled.

Characterizing the dependence of the critical opening angle θC on the wall charge and describing the partial filling upon approaching the filling transition for θ ≳ θC are the objectives of the present study.

B. Density functional theory

In order to determine the equilibrium structure of the fluid in terms of the density profiles of the three species, classical density functional theory [25] is used. As wetting phenomena typically require descriptions on several length scales, computational advantage is gained by studying a lattice fluid model in the spirit of Refs. [22][23][24]. In order to account for the special geometry of the system at hand, the standard lattice fluid model is adapted by using parallelograms as basic elements of the grid, as indicated by the parallelogram close to the apex of the wedge in Fig. 1. The size of an elementary parallelogram, which can be occupied by at most one particle of either species, is chosen such that, with d denoting the particle diameter, the sides parallel to the wall are of length d and they are a distance d apart from each other (see Fig. 1). Each cell is identified by a pair (l, j) of integer indices, where l ≥ 0 denotes the distance from the wall and j represents the location parallel to the walls (see Fig. 1). The approximative density functional of this model used in the present work can be written as

βΩ[φ] = Σ_{l,j} [ Σ_{α∈{0,±}} φα;l,j (ln(φα;l,j) − μ*α + βVl,j) + (1 − φtot;l,j) ln(1 − φtot;l,j)
        + (1/2) Σ_{n,m} βU*l,j;n,m φtot;l,j φtot;n,m ] + βU_el,    (2)

where φα;l,j = ρα;l,j d³ denotes the packing fraction of fluid component α ∈ {0, ±} inside the cell specified by the indices (l, j), φtot = φ0 + φ+ + φ− is the sum of the partial packing fractions, μ*α is the effective chemical potential of component α, and ρmax = 1/d³ is the maximal number density of the fluid. In the following, the values kBT = 1/β with T = 300 K and ρmax = 55.5 mol/l are chosen in correspondence with water at room temperature. Whereas the first line of Eq. (2) corresponds to the exact lattice fluid of non-interacting particles in an external field, the terms in the second line of Eq. (2) describe interactions amongst the particles in a mean-field-like fashion. The external potential Vl,j in Eq. (2)
describes the nonelectrostatic interaction of the wall with a particle in cell (l, j). It is chosen to be independent of the specific particle type. Here the wall-fluid interaction strength at a given position r results from a superposition of interactions with all points s at the surface of the walls (see Fig. 1), where βΦ is the underlying molecular pair potential of the wall-fluid interaction. For the sake of simplicity, the Gaussian form with decay length λ is used, which leads to the nonelectrostatic wall-fluid interaction, Eq. (3), in which the dimensionless coefficient h describes the wall-fluid interaction strength. The two remaining expressions in Eq. (2) account for the interactions among the particles, which we consider as being composed of an electrically neutral molecular body and, in the case of the ions, an additional charge monopole. These interactions are treated as split into two contributions: the interaction between uncharged molecular bodies, which we refer to as the non-electrostatic contribution, and the interaction between charge monopoles. In the present work we ignore the cross-interactions between a charge monopole and a neutral body. However, the chosen model proves to be sufficiently precise, as it qualitatively captures the relevant feature of an increase of the ion density for an increasing solvent density. For example, in the case of a liquid phase with density φ0 = 0.80907 coexisting with a gas phase with density φ0 = 0.19093, the ion densities increase from φ± = 1.81541 × 10⁻³ in the gas to φ± = 7.51554 × 10⁻³ in the liquid. In Eq. (2), the non-electrostatic contribution to the fluid-fluid interaction is treated within the random-phase approximation (RPA) based on the interaction pair potential U*l,j;n,m between a fluid particle in cell (l, j) and another one in cell (n, m). Here this interaction is assumed to be independent of the particle type and to act only between nearest neighbors, i.e., between particles located in adjacent cells. Finally, in Eq. (2) all electrostatic interactions, both wall-fluid and fluid-fluid, are accounted for by the electric field energy βU_el. The electric field entering βU_el is determined by Neumann boundary conditions set by a uniform surface charge density σ at the walls of the wedge, by planar symmetry far away from the wedge symmetry plane, and by global charge neutrality. Furthermore, the dielectric constant is assumed to depend on the solvent density. It is chosen to interpolate linearly between the values for vacuum (ε = 1) and water (ε = 80). This linear interpolation has previously been shown to match the behavior of the dielectric constant in mixtures of fluids very well [26]. In addition, it is important to note that here the surface charge is not caused by the dissociation of ionizable surface groups, i.e., charge regulation as in Ref. [27] is not relevant here; instead, the surface charge is assumed to be created by an external electrical potential applied to the wall. One can imagine the wall being an electrode, with the counter electrode placed far from the wall inside the fluid.
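Before composing the grand potential below, it is worth seeing how liquid-gas coexistence emerges from the uniform bulk limit of a functional of this type. The following Python sketch is my own illustration, not the authors' code, and rests on stated assumptions: the ion-free case with a per-site mean-field interaction −φ²/T*, for which particle-hole symmetry places coexistence at μ*0 = −1/T*, exactly as quoted in Sec. III A below; the value T* = 0.43 is illustrative (it roughly reproduces the coexistence densities 0.191/0.809 quoted above).

```python
# Minimal sketch: bulk liquid-gas coexistence of an ion-free mean-field lattice gas.
# Per-site free energy: f(phi) = phi*ln(phi) + (1-phi)*ln(1-phi) - phi**2 / T_star,
# so mu(phi) = ln(phi/(1-phi)) - 2*phi/T_star, and particle-hole symmetry puts
# coexistence at mu = -1/T_star.
import math
from scipy.optimize import brentq

T_star = 0.43  # assumed illustrative reduced temperature (below the critical 1/2)

def mu(phi: float) -> float:
    return math.log(phi / (1.0 - phi)) - 2.0 * phi / T_star

mu_coex = -1.0 / T_star
phi_gas = brentq(lambda p: mu(p) - mu_coex, 1e-6, 0.5 - 1e-6)
phi_liq = brentq(lambda p: mu(p) - mu_coex, 0.5 + 1e-6, 1.0 - 1e-6)
print(phi_gas, phi_liq)  # symmetric about 1/2; ~0.195 and ~0.805 for T* = 0.43
```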
C. Composition of the grand potential

Upon minimizing the density functional βΩ[φ] in Eq. (2) one obtains the equilibrium packing fraction profiles φ_eq, which lead to the equilibrium grand potential βΩ_eq = βΩ[φ_eq] of the system. This equilibrium grand potential can be decomposed into three contributions:

Ω_eq = −pV + γA + τL.    (6)

The first contribution, −pV, with the pressure p and the fluid volume V, equals the bulk energy contribution. It corresponds to the grand potential of an equally-sized system completely filled with the uniform gas bulk state. The second term, γA, with the interfacial tension γ and the total wall area A, corresponds to the quasi-one-dimensional case of the gas being in contact with a planar wall. The third contribution, τL, with the line tension τ and the length L of the wedge, is the only contribution to the total grand potential where the influence of the wedge enters, and it is therefore of particular importance in the present work.

A. Bulk phase diagram

In the bulk region, far from any confinements, the densities φα, α ∈ {0, ±}, of the three fluid components become constant and, due to local charge neutrality, φ+ = φ−. This simplifies the density functional βΩ[φ] in Eq. (2), and the corresponding Euler-Lagrange equations can be expressed in terms of a reduced temperature T*, where 1/T* is proportional to the strength of the fluid-fluid interaction βU*.

[Figure caption fragment: ... shows the corresponding ion packing fractions φ+ and φ− as functions of the distance v from the wall. Panel (a) identifies the system exhibiting partial wetting for the present configuration.]

For the ion-free case I = 0 the liquid-gas coexistence line is given by the analytical expression μ*0 = −1/T* (see the solid red line in Fig. 2). For fixed but non-vanishing ionic strengths I, the liquid-gas coexistence lines have been calculated numerically (see the black crosses in Fig. 2). Whereas the deviations from the ion-free case are only marginal in the bulk phase diagram for all ionic strengths considered here, it is of major importance to determine the coexistence conditions precisely, because surface and line properties (see Eq. (6)) are highly sensitive to them.

B. Electrolyte wetting on a planar wall

Before studying the filling behavior of a wedge, it is important to study the wetting of a planar wall, because the results enter as the surface contributions to the total grand potential, Eq. (6), and the quasi-one-dimensional packing fraction profiles provide the boundary conditions far away from the wedge symmetry plane. In the case of a planar wall the density functional βΩ[φ] simplifies to a quasi-one-dimensional one and, due to the corresponding relations ru = −ru′, rv = rv′ (see Fig. 1), the expression Eq. (5) for the fluid-wall interaction simplifies accordingly. With this set of equations one can determine the equilibrium packing fractions φα;i of the fluid close to the planar wall, where the integer index i ≥ 0 denotes the distance of the cell from the wall. One possibility to characterize the wetting of a planar wall is by means of the excess adsorption

Γ[φtot] = Σ_{i≥0} (φtot;i − φtot^(gas)),    (9)

with the total packing fraction φtot^(gas) of the gas phase at liquid-gas coexistence for the given temperature T*, which measures the additional amount of particles in excess of the gas bulk phase due to the presence of the wall. Alternatively, one can consider the film thickness

l[φtot] = d Γ[φtot] / (φtot^(liquid) − φtot^(gas)),    (10)

with the total packing fraction φtot^(liquid) of the liquid phase at liquid-gas coexistence for the given temperature T*, which corresponds to the thickness of a uniform liquid film of packing fraction φtot^(liquid) with the same excess adsorption Γ[φtot] as the equilibrium total packing fraction profile φtot. Minimizing the grand potential functional Eq. (2) for a planar wall (see Eq. (8)) with the constraint of fixed excess adsorption Γ[φtot], Eq. (9), or fixed film thickness l[φtot], Eq.
(10), and subtracting the bulk contribution of the grand potential as well as the wall-liquid and the liquid-gas interfacial tensions (γsl and γlg, respectively), one obtains the effective interface potential βω [6]. An example of βω(l) is displayed in Fig. 3(a). The position l = l_eq of the minimum of the effective interface potential βω(l) corresponds to the equilibrium film thickness. The corresponding equilibrium total packing fraction profile φtot for the parameters chosen in Fig. 3(a) is shown in Fig. 3(b). Using this procedure, one can determine the equilibrium density profiles for different ionic strengths I, temperatures T*, wall-fluid interaction strengths h, decay lengths λ, and surface charge densities σ. Here, the wall charge σ is varied and a wetting transition is observed at a critical value σC. All four setups in Fig. 4 exhibit the characteristics of first-order wetting transitions, which are identified by finite limits of Γ upon h ↗ hC or σ ↗ σC. In addition, for all these cases the first-order nature has been verified by studying the effective interface potential (see the inset in Fig. 4(a)), where it is clearly manifested by the energy barrier separating the local and the global minimum. For the quasi-ion-free case σ = 0 in Fig. 4(a), the choice Eq. (4) of the molecular pair potential of the wall-fluid interaction leads to a wetting transition of first order, in contrast to the choice of the nearest-neighbor potential in Ref. [23], which generates a second-order wetting transition. However, it has been shown that for σ ≠ 0 (see Fig. 4(b)) wetting transitions are of first order once the Debye length is larger than the bulk correlation length [22].

C. Wedge wetting by an electrolyte solution

Having studied the system under consideration in the bulk (Sec. III A) and close to a planar wall (Sec. III B), one can investigate wedge-shaped geometries. As explained in the context of Eq. (1), the system undergoes a filling transition for the opening angle θ (see Fig. 1) approaching the critical opening angle θC from above. For θ < θC the wedge is macroscopically filled by liquid, whereas for θ > θC the wedge is macroscopically empty. In the following, the filling of an empty wedge, i.e., θ ↘ θC, will be studied. Following Eq. (1), the critical opening angle θC can be calculated from the contact angle ϑ of the liquid, which is related to the depth of the minimum of the effective interface potential by [6]

cos ϑ = 1 + ω(l_eq)/γlg,    (12)

with the liquid-gas surface tension γlg. Hence, the critical opening angle θC can be inferred from the wetting properties of a planar wall using the method of Sec. III B. Figure 5 displays the critical opening angle θC as a function of the wall charge σ for the case of decay lengths λ ∈ {1 d, 2 d}. As the contact angle ϑ decreases upon increasing the wall charge due to the electrowetting effect [28], the critical opening angle θC increases with increasing wall charge. For the critical wall charge σ = σC the critical opening angle θC reaches the value of 180°, since for this wall charge the wetting transition of the planar wall occurs (compare Fig. 4(b)); i.e., for a planar wall the wetting and the filling transition are identical.
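Numerically, Eqs. (12) and (1) chain together directly: the depth of the interface-potential minimum yields the contact angle, which yields the critical opening angle. Below is a minimal sketch of this chain (my own; the ratio ω(l_eq)/γlg is a hypothetical input, not a value from the paper, and Eq. (1) is used in the reconstructed form θC = 180° − 2ϑ).

```python
import math

def contact_angle_deg(omega_min_over_gamma: float) -> float:
    """Eq. (12): cos(vartheta) = 1 + omega(l_eq)/gamma_lg. At partial wetting
    the minimum omega(l_eq) is negative, so the acos argument lies in [-1, 1]."""
    return math.degrees(math.acos(1.0 + omega_min_over_gamma))

def critical_opening_angle_deg(vartheta_deg: float) -> float:
    """Eq. (1) (reconstructed): theta_C = 180 - 2*vartheta; the wedge fills for
    opening angles theta < theta_C, and theta_C -> 180 at complete wetting."""
    return 180.0 - 2.0 * vartheta_deg

ratio = -0.2  # hypothetical depth of the effective interface potential minimum
vartheta = contact_angle_deg(ratio)
print(round(vartheta, 1), round(critical_opening_angle_deg(vartheta), 1))
# -> 36.9 106.3: a deeper minimum (stronger partial wetting) narrows theta_C
```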
Figure 6 displays the equilibrium packing fraction profiles inside wedges with opening angles θ = 180° (Fig. 6(a)) and θ = 80° (Fig. 6(b)), with the parameters h, λ, and σ identical to those of Fig. 3(b). Away from the wedge symmetry plane the structure rapidly converges towards that of a planar wall, which verifies that the chosen size of the numerical grid is sufficiently large to capture all interesting effects. Furthermore, the decrease of the opening angle, as shown in Fig. 6(b), leads to an increase of the density close to the tip of the wedge. For example, the maximal density increases from 15 % of the relative density difference between the liquid and gas densities to almost 30 %. However, the increase in the density is limited to the close vicinity of the tip of the wedge, which is an indication of first-order filling transitions. In fact, in the presence of ions, wetting transitions at a planar wall are of first order, with a barrier in the effective interface potential βω(l) (see Fig. 3(a)) being present for all states below the wetting transition of a planar wall [22]. Hence, filling transitions of wedges are expected to be of first order, too [17,18]. In order to describe the filling transition of a wedge quantitatively, several quantities have been studied.

[FIG. 6 caption: Far away from the symmetry plane of the wedge the packing fraction profiles coincide with those at planar walls (see Fig. 3(b)). Upon decreasing the opening angle θ, an increase of the density close to the tip of the wedge occurs (see panel (b)).]

Firstly, the wedge adsorption ∆, defined in terms of the length l_wall of the wall, shall be discussed. In the spirit of the excess adsorption Γ at a planar wall (Eq. (9)), this quantity ∆ measures the excess of an inclined wedge above the excess adsorption Γ of a planar wall. In Fig. 7 the wedge adsorption ∆ is shown as a function of the opening angle θ and of the wall charge density σ for decay lengths λ = 1 d (Fig. 7(a)) and λ = 2 d (Fig. 7(b)). The ionic strength is I = 100 mM and the wall-fluid interaction strength h has been chosen as in Fig. 4(b). Upon decreasing the opening angle θ the wedge adsorption ∆ increases, regardless of the wall charge density σ, the decay length λ, or the non-electrostatic wall-fluid interaction strength h. However, the limits of ∆ upon approaching the filling transition, θ ↘ θC, are finite, which signals a first-order filling transition (see in particular the inset of Fig. 7(a)). Moreover, for any fixed opening angle θ > θC, the wedge adsorption ∆ increases with increasing wall charge density σ. Both observations can be understood in terms of the strength of the interaction between wall and fluid. In the case of an increasing wall charge density σ, the increase of ∆ stems from an increase of the counterion density which is stronger than the accompanying decrease of the coion density. This phenomenon is well known for non-linear Poisson-Boltzmann-like theories such as the present one. For the case of a decreasing opening angle θ > θC, the growing overlap of the wall-fluid interactions, both the non-electrostatic and the electrostatic one, leads to an increase in the density. Besides these general qualitative trends there are quantitative differences between the two cases in Fig. 7, which differ in the values of the decay length λ. One way to compare Figs. 7(a) and 7(b) is to consider the limits ∆(θC⁺) upon θ ↘ θC for a common value of the wall charge density σ. In this case, the shorter-ranged wall-fluid interaction, λ = 1 d (see Fig. 7(a)), leads to higher values of ∆(θC⁺) than the longer-ranged one, λ = 2 d (see Fig. 7(b)). However, since shorter decay lengths λ lead to smaller critical opening angles θC (see Fig.
5), which correspond to stronger overlaps of the wall-fluid interactions of the two walls of the wedge, the increase in the wedge adsorption ∆ is caused mostly by geometrical reasons. Alternatively, if one compares Figs. 7(a) and 7(b) for a fixed opening angle θ > θC and a fixed wall charge density σ, the wedge adsorption ∆ is larger for the case of the longer-ranged wall-fluid interaction. This can be readily understood given the fact that, for fixed opening angle and wall charge, the interaction strength at a specific point in the system is the stronger the longer-ranged the interaction is. As a second quantity to describe the filling of a wedge, the filling height l_w is considered, defined in terms of Γ_sym, the excess adsorption along the symmetry plane (cell index j = 0) of the wedge. The definition of the filling height l_w of a wedge is similar to that of the film thickness l at a planar wall (see Eq. (10)). It expresses the distance of the liquid-gas interface of the adsorbed film from the tip of the wedge. Figure 8 displays the filling height l_w as a function of the opening angle θ and of the wall charge σ, with the decay lengths λ = 1 d in Fig. 8(a) and λ = 2 d in Fig. 8(b). When discussing the filling height l_w one has to account for the geometrical effect of an increasing side length l_w1(θ) := d/sin(θ/2) of the elementary parallelograms in the direction of the symmetry plane (see Fig. 1) upon decreasing the opening angle θ. It is equivalent to a filling height of exactly one cell and is displayed in Fig. 8 as a black dashed curve. By comparing the filling height l_w(θ) with the trend given by the side length l_w1(θ), one infers a stronger increase of the former upon approaching the filling transition θ ↘ θC, which can be attributed to the filling effect. Similarly to the wedge adsorption ∆, the filling height l_w increases either upon decreasing the opening angle θ towards the critical opening angle θC or, for fixed θ > θC, upon increasing the magnitude of the wall charge density σ. The reason for these observed trends of the filling height l_w is again, as for the wedge adsorption ∆, a consequence of the increased magnitude of the wall-fluid interaction. Finally, the filling height l_w, like the wedge adsorption, approaches a finite limit upon θ ↘ θC, which is in agreement with the expectation of a first-order filling transition. As shown in Eq. (6), the equilibrium grand potential Ω_eq may contain a contribution scaling proportionally to a linear extension L of the system, and the corresponding coefficient of proportionality, which has the dimension of an energy per length, is called the line tension τ. In the present context of a wedge, the line tension τ measures the structural difference between a wedge and a planar wall, and the contribution τL scales with the length L of the wedge along the z-direction. Figure 9 displays the line tension τ as a function of the opening angle θ and of the wall charge density σ for decay lengths λ = 1 d (Fig. 9(a)) and λ = 2 d (Fig. 9(b)). The qualitative dependence of the line tension τ on the opening angle θ turns out to depend on the wall charge density σ: for small wall charge densities the line tension is negative and it decreases monotonically with decreasing opening angle. For sufficiently large wall charge densities the line tension is positive for large opening angles and, if the critical opening angle θC is small enough, negative for small opening angles, i.e., the line tension may depend non-monotonically on the opening angle.
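Equation (6) also suggests the bookkeeping by which τ is extracted in practice: compute the full equilibrium grand potential, then subtract the bulk and surface contributions and divide by the wedge length. A minimal sketch of that subtraction follows (my own illustration with made-up numbers in units of kBT and the particle diameter d; these are not values from the paper).

```python
def line_tension(omega_eq: float, pressure: float, volume: float,
                 gamma: float, wall_area: float, wedge_length: float) -> float:
    """Invert Eq. (6), Omega_eq = -p*V + gamma*A + tau*L, for the line tension tau."""
    return (omega_eq + pressure * volume - gamma * wall_area) / wedge_length

# Hypothetical inputs: the residual after removing the bulk (-p*V) and surface
# (gamma*A) terms is the line contribution tau*L.
tau = line_tension(omega_eq=-1250.0, pressure=0.05, volume=20000.0,
                   gamma=-0.30, wall_area=800.0, wedge_length=1.0)
print(tau)  # -> -10.0 for these inputs
```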
For molecular length scales d ≈ 3 Å and room temperature T ≈ 300 K, the order of magnitude of the line tension, |τ| ≈ 1 pN, is in accordance with the literature [24,29,30].

IV. CONCLUSIONS AND SUMMARY

In the present work the filling of charged wedges by electrolyte solutions has been studied within microscopic classical density functional theory of a lattice model (Fig. 1). As in previous studies [22][23][24], considering lattice models offers technical advantages over continuum models, as the former allow for the explicit description of larger parts of the system. The electrolyte solution comprises a solvent and a univalent salt. A short-ranged attractive interaction between the fluid particles leads to a liquid-gas phase transition of the bulk electrolyte solution (Fig. 2). A fluid-wall interaction derived from a Gaussian pair potential (Eq. (4)) gives rise to first-order wetting transitions of a planar wall in contact with a gas bulk phase (Fig. 3). This first-order wetting transition of a planar wall can be driven by the wall-fluid interaction strength or by the surface charge density (Fig. 4). The critical opening angle, below which the wedge is filled, depends on the surface charge density and on the decay length of the wall-fluid interaction (Fig. 5). Upon approaching the critical opening angle from above, a macroscopically small but microscopically finite amount of fluid accumulates close to the apex of the wedge (Fig. 6). This observation, as well as the finite limits of the wedge adsorption (Fig. 7), of the filling height (Fig. 8), and of the line tension (Fig. 9), are compatible with a first-order filling transition. Upon increasing the surface charge density, the line tension as a function of the opening angle changes from a monotonically increasing negative function via a function exhibiting a positive maximum to a monotonically decreasing positive function (Fig. 9). The unequivocally first-order filling transitions found within the model of the present work are in full agreement with the general expectation for systems with barriers in the effective interface potential at the filling transition [17,18]. Moreover, this is expected to be the case for any electrolyte solution not too close to a critical point, as such systems exhibit barriers in the effective interface potential for all conditions of partial wetting [22]. Therefore, the optimistic point of view of Ref. [19], expecting the experimental accessibility of systems displaying critical filling transitions, requires excluding the vast class of dilute electrolyte solutions as potential candidates. On the other hand, being assured of the first-order nature of filling transitions in the presence of electrolyte solutions allows one to efficiently set up more realistic numerical models, which are not restricted to a lattice for technical reasons, to quantitatively describe wetting and filling of complex geometries.
2017-09-13T08:00:24.000Z
2017-06-29T00:00:00.000
{ "year": 2017, "sha1": "bfd99fb6f4f55602e0408bb377cf897b0087114e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1706.09678", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "bfd99fb6f4f55602e0408bb377cf897b0087114e", "s2fieldsofstudy": [ "Chemistry", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics", "Medicine" ] }
11372551
pes2o/s2orc
v3-fos-license
Crisis and Caring for Inner Selves: Psychiatric Crisis as a Social Classification in Sweden in the 1970s This article aims to contribute to the understanding of the introduction of crisis psychotherapy in the 1970s in psychiatric clinics in Sweden. The article discusses how this psychotherapy became central in the work of the clinics in supporting patients towards well-being and inner growth. The ambition was that patients in an acute crisis situation would be offered care immediately, aiming at a short and intensive contact with the professionals to avoid hospitalization and long-term sick leave. These ideas were by no means new; in the 1960s, a Western debate had emerged in which hospitalization in psychiatric clinics had received criticism. In Sweden, the psychiatrist Johan Cullberg was a key actor during the 1970s in the introduction of the psychiatric crisis perspectives. Here, his publication 'The Psychic Trauma' from 1971 is analysed. The publication inspired psychiatric clinics to introduce crisis psychotherapy in three different pilot projects. The projects were presented in articles in the Swedish Medical Journal. These articles have also been analysed here. Self-care is highlighted through this material as a concept to be analysed. The question discussed is how the concept of the psychiatric crisis initiated and institutionalized a new form of social classification in which the patients were to take more responsibility for their own inner growth.

Introduction

We all are likely to run into psychiatric crises - the person who never does is rather to be pitied. It is also a situation where we all should have the right to receive help - help to listen to our own capabilities of finding a solution, not to run away from the sometimes painful self-defining that the situation often contains (Cullberg 1971:3, my translation).

In the publication 'The Psychic Trauma: About Crisis Theory and Crisis Psychotherapy' from 1971, the Swedish psychiatrist Johan Cullberg presented the concept of the psychiatric crisis. In this text, the crisis is presented as something essential for the human being and something we must not run away from. The psychiatric crisis should instead be seen as an important part of how humans define their inner self, almost necessary for the individual in order to develop a strong and complete self. In this article, the psychiatric crisis will be used as a starting point for discussing how crisis psychotherapy in the 1970s manifested a specific psychological being that was expected to take responsibility for his or her own inner self, a form of self-care. Focus is on how this form of self-care is institutionalized: how patients in crisis are categorized in an outpatient care unit in Sweden.
In his book Inventing Our Selves, the sociologist Nikolas Rose argues that there has been a transformation in Western society: the individual is increasingly regarded as a psychological being with an inner mental process of growth. This has changed 'our conceptions of what persons are and how we should understand and act toward them, and our notions of what each of us is in ourselves, and how we can become what we want to be' (Rose 1998:11). Rose links this change to the growth of psychology in Europe and North America in the twentieth century and to how psychological knowledge has come to have a central role in how individuals care for their inner selves. As emphasized by the philosopher Michel Foucault, the care of the self is an old idea from classical and late antiquity concerning how the subject relates to his or her own actions (Foucault 1990). This idea was accentuated when psychology made the self into a matter of psychological knowledge. From this theoretical perspective, the psychological knowledge highlighted by Rose's Foucauldian perspective can be regarded as a form of self-caring project that is placed upon individuals, making them responsible for their own inner growth. As we will see in this article, conceptualizing the self with psychological knowledge in this way provides a new perspective on what the human being can be and what she or he can strive for. In this article, this theoretical argument will be analysed from a Swedish perspective, using the psychiatric crisis as a case of how crisis psychotherapy in the 1970s initiated and institutionalized a new form of psychological knowledge in which patients were to take increasing responsibility for their own inner growth. More specifically, the subject of the analysis is the crisis psychotherapy that was introduced in psychiatric treatment in clinics during the 1970s. In this article, the crisis psychotherapy is utilized as a case for discussing how care of the self has become part of Swedish psychiatry, and how cultural ideas about self-care received practical form in a specific psychiatric treatment.

The crisis psychotherapy was a treatment which, in Sweden, presented an alternative to more traditional psychiatric treatments in the 1970s. The ambition was that patients in an acute crisis situation would be offered care immediately, with the aim of a short and intensive contact with the professionals to avoid hospitalization and long-term sick leave. These ideas were by no means new; in the 1960s, a Western debate had emerged in which hospitalization in psychiatric clinics had received criticism (Goffman 1961; Szasz 1961; Scheff 1966; Foucault 1967). It was not just an attempt to find a new psychiatry, but also a process of finding other ways to perceive the patient who consulted the clinic for treatment (cf. Micale & Porter 1994). In Sweden, the psychiatrist Johan Cullberg was a key actor in the 1970s in the introduction of the psychiatric crisis perspectives. 1 Particularly, the previously mentioned publication from 1971, 'The Psychic Trauma', became central for many of the psychiatric clinics that introduced the crisis treatment (Cullberg 1971).
2 A main point in Cullberg's publication was how the psychiatric crisis was presented as having a developmental potential for the individual, meaning that the crisis could be something beneficial and normal to go through. This alternative psychiatric treatment can be considered as a means for the clinic to give the patient more responsibility for his or her own potential to grow as a human being. In this article, this matter is analysed as a change in the attitude of the psychiatric clinics, which implied avoiding hospitalization of the patients and instead focusing upon the patients' possibilities to handle the psychiatric crisis on their own under the care of a psychiatric treatment.

Method

In the psychiatric disciplines - the clinics as well as the psychiatric researchers - paying attention to the patient's acute crisis situation was a perspective creating a new classification of when a patient had a crisis and what care that patient needed. In the early 1970s, the theories of the psychiatric crisis were gradually applied in psychiatric treatment in Sweden. Consequently, the classification of what a crisis is also started to interact with certain kinds of behaviour among the patients. This is what the philosopher Ian Hacking defines as classificatory looping, meaning that social classifications, in this case the psychiatric crisis, interact with the behaviour that has been classified (Hacking 1999). Social classifications can be studied methodologically through what Hacking names a style of reasoning. In which way is social classification associated with an ontological discussion concerning the different kinds of behaviour that should be incorporated in the specific classification that is identified as the psychiatric crisis (Hacking 2004)? Through the theories about the psychiatric crisis, psychiatry gained a territorial extension that provided the professionals with new principles, or logical sentences, for their style of reasoning concerning some specific human behaviour. In examining those sentences more closely, it is possible to study the social classifications that are associated with the psychiatric crisis, supporting the objectivity of the theoretical framework behind the concept (of the psychiatric crisis). Hacking points this out when he writes, 'The truth of a sentence (of a kind introduced by a style of reasoning) is what we find out by reasoning using that style. Styles become standards of objectivity because they get at the truth' (Hacking 1992:13). In this article, the style of reasoning in Cullberg's publication is studied. The style of reasoning is also examined in articles by other psychiatrists on how crisis psychotherapy initiated a new form of psychological knowledge implying that the patients should take more responsibility for their own inner growth.

For my analysis, two different empirical categories have been used to study the style of reasoning concerning the psychiatric crisis. The first category consists of Cullberg's short publication 'The Psychic Trauma' from 1971 (Cullberg 1971). This publication was the first longer and more comprehensive introduction to crisis theory and crisis psychotherapy in Sweden.
3 The publication is of importance since it introduced the psychiatric crisis perspective, but also because it started to inspire other psychiatrists to introduce crisis psychotherapy in psychiatric clinics. Cullberg's main reasons are presented in the article and are analysed with Hacking's theoretical perspective, arguing that the style of reasoning can unfold those social classifications that give the arguments their truth (Hacking 1992). Focus is on those sections in the publication where Cullberg claims that patients ought to be more responsible for their own inner growth. These arguments are analysed in relation to the criticism of psychiatry in the mid-1960s and 1970s (see Psychiatric Crises and Selves).

The second category comprises the articles of other psychiatrists, in which they present and analyse their introduction of the new crisis psychotherapy in clinics. Through a search in the Swedish Medical Journal, I have found three articles from the 1970s that present these clinical introductions. The articles are 'Crisis Intervention in an Outpatient Care Unit - Alternative Psychiatric Care' (Stenstedt 1973, my translation), 'Crisis Therapy - An Alternative' (Boëthius et al. 1977, my translation) and 'Two Years of Experiences of Crisis Therapy' (Ardelius et al. 1978, my translation). As the titles proclaim, these articles represented trial projects at different clinics in Sweden, where crisis psychotherapy had been introduced, used and evaluated. The question for the three different trial projects was whether crisis psychotherapy could be used in clinics and whether it had any benefits for the patients. The first article - 'Crisis Intervention in an Outpatient Care Unit' - is probably the first documented example of interventions applying crisis psychotherapy in Sweden. The pilot project started as early as December 1971 at the Psychiatric Clinic, Karolinska Hospital in Stockholm. Thus, this was the same year that Cullberg's publication 'The Psychic Trauma' was published. The reason why everything started the same year is that the psychiatrist Karin Stenstedt, the writer of the article, was a colleague of Cullberg's and well versed in his reasoning. By analysing the article, it is possible to give a perspective on how the arguments in the publication were transferred to the clinic. For this reason, my analysis is focused on the first article from 1973. The two other articles are mentioned to illustrate the fact that the arguments of Stenstedt and Cullberg were used in other clinics. Stenstedt's reasons are analysed with regard to Hacking's classificatory looping. The introduction of the concept of the psychiatric crisis in clinics started a form of interaction with the kinds of behaviour that had been classified (Hacking 1999). First, this interaction is presented as a new classification that is introduced in clinics (see A New Classification); thereafter, the new classification is analysed as a form of self-care (see Individualized Care).
Psychiatric Crises and Selves

From the mid-1960s, an increasing number of actors articulated a criticism of the kind of psychiatry that was practiced internationally as well as in Sweden. Among many things, it was a critique of an individual approach to how to care for people's mental health problems. This was seen as a structural problem. A central point was also the critique of those norms in society that concerned what was considered normal development and adaptation to society. The criticism was directed towards a prevailing belief that people would adjust to what was considered normal, and that this would bring about a more harmonious society; if people behaved 'normally', society would also function more normally (Ohlsson 2008; Jönsson forthcoming).

Cullberg's publications from this period originated from the criticism of regarding people as a form of individual normality. Instead, Cullberg came to join those who preferred to regard people as part of the community. A principal matter in this critique, and this was pointed out very clearly in Cullberg's publication, was that the individual had the right to occasionally feel bad and receive appropriate treatment for this malaise (Cullberg 1971). Considering the publication more closely, we can see how Cullberg integrated this theoretical view of the self and at the same time presented his perspectives in a medical mode more appropriate for the psychiatric disciplines. For example, we find that traditional medical case histories were presented, representing typical traumatic situations that may lead to crisis. The typical traumatic situations that are presented by Cullberg comprise object loss, loss of autonomy, reproductive problems, problems with relationships, social shame, changes in the societal structure and external disasters. In the publication, Cullberg also describes a model for understanding the course of the crisis, as well as its symptomatology and treatment. In this way, the psychiatric crisis was a concept with inherent opportunities to see each patient as a psychological individual who was entitled to self-defining and psychological help.

In the principles of crisis psychotherapy, Cullberg points out that the therapist had the role of a catalyst for the healing process. He writes, 'He should give the patient an opportunity, under as decent conditions as possible, to go through the crisis so that he achieves a new direction and preferably with experiences that increase his self-knowledge' (Cullberg 1971:31, my translation). The patient had the responsibility not to repress the crisis, but instead to promote a healing process that would give him possibilities to go through the crisis. The professionals had the role of supporting this process of the patient's quest to feel better. Accordingly, not only the healing process was important; the crisis was also a way to conceptualize the self.
Cullberg reveals that this gave the professional a new role in the healing process, in which the responsibility should not be the doctor's or the therapist's, but the patient's. Hence, he saw two immediate consequences for the professionals. The first point was 'The therapist's task is not to give back what the patient has lost or to take away the painful reality'; the second point was 'The therapist's task is not primarily to cure or remove the "symptoms", because these are part of the process and the reality' (Cullberg 1971:31, my translation). Of course, if the patient had too much pain or self-destructive manifestations, he or she should be given some form of alleviating treatment. Nevertheless, the fact of the matter was that the patient should take responsibility for the painful reality involved in the crisis.

This can be seen as the first step towards finding new perspectives on patients who had a psychiatric crisis. Moreover, it was the first step towards a classificatory looping in which theories about psychiatric crises could be used by psychiatric clinics to identify the kind of behaviour that had been classified in theory (Hacking 1999, cf. Blumer 1971). In this classificatory looping, the patient's psychiatric crisis was something that he or she should be encouraged to understand as a self-caring project. It was in enduring the painful reality that the patient had the possibility to invent himself (Rose 1998). For this reason, Cullberg's points of view can be seen as a rationalized programme for the patient.

A New Classification

In December 1971, a pilot project started at the Psychiatric Clinic, Karolinska Hospital in Stockholm, offering crisis psychotherapy. The project was later presented in the article 'Crisis Intervention in an Outpatient Care Unit' in Swedish Medical Journal (Stenstedt 1973, my translation, see also Falk & Stenstedt 1973). The background for the project was that the clinic was to be rebuilt and the number of beds reduced from 77 to 31. At the same time, the responsibility for the patients was not to be affected. An outpatient care unit consisting of nine professionals was assembled: two psychiatrists, one psychologist, one social worker, two psychiatric nurses, one occupational therapist and one part-time physiotherapist. The assistant manager was Karin Stenstedt. The aim of the unit was to receive patients in emergency crisis situations and provide them with swift and individualized care. It was vital to offer various kinds of activities and to be flexible to the patients' needs. This might involve individual conversations, movement treatment, occupational therapy and so on. Consequently, the ideas of the psychiatric disciplines were implemented in actual practice by professionals with set guidelines for how the crisis treatment should be managed (Rose 1998). The theories about the psychiatric crisis were transformed into guidelines and practical counselling with patients.
Although there were no medical diagnoses for crises, the crisis treatment affected how the patient was classified. In the article, Stenstedt highlights the matter: '[…] at the beginning of the work of the outpatient care unit, the concept of crisis was not very consistently defined among the professionals in the unit' (Stenstedt 1973:4157, my translation). The professionals used the definition of the psychiatric crisis that Cullberg had described; but at the same time, it was a definition that needed to be more consistently applied in the outpatient care unit. As Stenstedt points out in the article, the definition of the psychiatric crisis became more solid the longer the professionals in the outpatient care unit worked together. Returning to Hacking, this can be seen as a classificatory looping in which the psychiatric crisis gave rise to new classifications; this provided new cases, which created more knowledge about the cases, generating more experts, which created a need for more research and so on (Hacking 1999). The psychiatric crisis should be seen as a concept that was constantly changing while it was in the loop.

However, the classification was also confirmed while it was in the loop, giving the professionals possibilities to distinguish between patients who had a psychiatric crisis and those who had not. Thirty-nine percent of the patients who came to the clinic were classified as having a psychiatric crisis. The remaining patients were classified according to three diagnoses that were traditional at the time: psychosis, neurosis and borderline.4 Those who received the psychiatric crisis classification had been affected by an event that was said to trigger a crisis. The description of these triggers was largely taken from Cullberg's publication 'The Psychic Trauma'. In Stenstedt's article this is pointed out:

The most common cause for crisis is undoubtedly more or less acute relationship problems; about a third of the cases concern infidelity. In frequency after relationship problems are problems at work. […] Next are those who have consulted us because of object loss, particularly due to the death of a close relative. Then there are those who consulted us in relation to reproductive problems (Stenstedt 1973:4157-4158, my translation).

The triggers can be regarded as so common that we can expect many cases that could confirm the classification of the psychiatric crisis. However, there were other projects in the 1970s that confirmed these classifications. One example is reported in the article 'Crisis Therapy - An Alternative', using Cullberg's psychiatric crisis criteria from 1971 (Boëthius et al. 1977, my translation).5 In 1978, 'Two Years of Experiences of Crisis Therapy' was published (Ardelius et al. 1978, my translation). In the latter article, there was not only a confirmation of the classifications presented in the articles from 1973 and 1977, but also a statement from the authors that this treatment was something society should offer patients suffering from a psychiatric crisis:

In recent years, the acute crisis reaction that people may develop has received ever more attention. A crisis reaction means that a previously healthy and functioning human being is affected by a substantial setback in life; the loss of a relative or any other matter that places new demands on the individual. […] In these cases, society must be willing to provide crisis treatment (Ardelius et al. 1978:4147, my translation).
Social classifications, here in the form of the psychiatric crisis, not only interacted with the kinds of behaviour that had been classified, in this case the acute crisis reactions, but also became something that could be used in an argumentation that society should invest resources in this treatment. Psychiatric crisis, crisis reactions and crisis psychotherapy were parts of a classificatory looping in the 1970s, which confirmed the importance of this psychiatric perspective and established the need to work with it in society (cf. Hacking 1999). Cullberg's psychiatric crisis criteria were vital points in this looping, but it was in clinical practice that the classified behaviour started to interact and create a classificatory looping. It was in the psychiatric clinic that a transformation from psychiatric crisis theory to care practice took place (cf. Mol et al. 2010). When these theories were introduced, the professionals attained new perspectives on what a patient was and on which responsibilities the patient had for his or her own well-being.

Individualized Care

Likewise, the introduction of the psychiatric crisis in clinics had an impact on what may be termed the care practice, in which the introduction of the psychiatric crisis created other forms of cultural and social practices in the clinic (cf. Mol et al. 2010). Regarding Stenstedt's article, some of these practices can be analysed in relation to the criticism of psychiatry in the 1960s and 1970s. Primarily, there was a concrete aim for the outpatient care unit at the Psychiatric Clinic, Karolinska Hospital: the intention of not hospitalizing the patients. This idea must be understood in the context of the general criticism of psychiatry in the 1960s and 1970s (Ohlsson 2008). In the article, this criticism can be discerned:

The aim was therefore to try to organize a small outpatient care unit, which, without waiting time, would be receiving patients in emergency crisis situations and for a limited time giving them an intensive problem-focused contact. An exceedingly important point was the possibility of individualized care. This should be adapted in a flexible way to the specific needs of each individual. Firstly, the intention was to be able to offer various forms of activities; secondly, and above all, to provide patients with an opportunity to work through their current problems in group discussions or private conversations (Stenstedt 1973:4154, my translation).

Returning to Rose's arguments, the psychiatric disciplines, here in the form of a new small outpatient care unit, were generated to meet the requirements that the patients at this time were considered to have (Rose 1998). The aim of the care was to be flexible in view of the individual's needs, with no waiting time and with a problem-focused contact with the patients.6 This specific psychiatric discipline was created in contrast to the old psychiatric care; consequently, it defined what the discipline should not be. Simultaneously, this redefinition of psychiatric care also influenced the idea of what a patient is and should be. The patient appeared as an actor who was expected to be interested in individualized care and to have specific needs for this care. Thus, the objectives of the outpatient care unit were to transform the mental health services for some of the patients who needed treatment.
The outpatient care unit organized a new type of treatment; the focus was said to be on adjusting the care to the patients' needs. In this reorganization, the patients were increasingly regarded as isolated individuals, separated from a unifying patient category. This is a cultural process that arose during the 1970s and that has been widely analysed within individualization theories (Giddens 1991; Lasch 1991; Beck & Beck-Gernsheim 2001). In these theories, the character of the individual is pointed out as increasingly negotiable and less governed by traditions and norms. A person's character tends to be more 'for the time being' and less consistent. Based on this cultural process, I want to argue that Cullberg's psychiatric crisis criteria provided a possibility for the psychiatric clinics to meet this new group of patients, and at the same time to create this patient within the social classification of the psychiatric crisis (cf. Hacking 1999). A central point for this line of reasoning is that the patient should now feel that he or she was in a process of psychosocial development, and that every stage in life contains experiences and challenges for human development.7 In the practical work in the outpatient care unit, as described in the three articles, the focus was on helping the patient to understand and explore his or her own feelings. In Stenstedt's article, this is pointed out very clearly: 'The patient must be allowed and encouraged to express those feelings of sadness, shame, hostility, anxiety etc, that are associated with the crisis situation and are often perceived as forbidden' (Stenstedt 1973:4155, my translation). The patients ought to take their feelings seriously and were encouraged to talk about how they felt.

The Swedish researcher Claes Ekenstam, historian of ideas and sciences, has stressed that in the 1950s and 1960s a representation of people as feeling human beings became more common. This was not a new idea, but it attained a strong position in disciplines such as psychology, sociology and biology. It was a representation that emerged in a polemic against the understanding of humans as rational and calculating, an idea that can be found in the description of man as mechanical, economic or stoic (Ekenstam 2007). Reasoning concerning the feeling human being is vital to understanding how the psychiatric crisis not only became part of the perspectives of the psychiatric clinics in meeting the individualized patient, but was also significant in the presentation of a treatment that could interact with the behaviour that had been classified through the psychiatric crisis (cf. Hacking 1999). It became essential for the care that the patient was encouraged to take his or her emotions seriously; through these feelings, the patient could take responsibility for his or her own potential as a human being. This was highlighted in Stenstedt's article: 'An important aspect of crises, which needs to be emphasized, is that these are not necessarily entirely negative life experiences, but contain positive aspects and provide opportunities for development. The crisis holds, as Lydia Rapoport (1967) puts it, significant "growth-promoting potentials"' (Stenstedt 1973:4155, my translation). An important part of this reasoning was the change in the responsibility of the psychiatric clinic for the patient; the psychiatric crisis became something for which the patient had responsibility as well.
The psychiatric crisis became a social classification that affected how the professionals should take care of the patients and what responsibility the patient had for his or her well-being. I would like to draw attention to the shift towards encouraging the patient to take responsibility for the 'recovery' and for the opportunities of development embedded in the psychiatric crisis. The psychiatric crisis not only interacted with the behaviour that had been classified; it also evoked a new morality concerning which responsibilities the patient had for his or her own well-being.

Discussion

As pointed out in this article, theories about the psychiatric crisis and crisis psychotherapy in the 1970s created opportunities in the psychiatric clinics to respond to the patient as a feeling human being (cf. Ekenstam 2007). A significant conception during this period was the representation of the human being as a feeling person; another prominent idea concerned individualization. In the following quotation, we can sense how the professionals in the outpatient care unit felt that their ideas were well suited to the times:

When we started, we did it entirely according to the conviction, based on our experiences on the weekly ward, that an activity like this should be able to fill great practical needs. Like many others, we had been inspired by Johan Cullberg's publication 'The Psychic Trauma' (1971) and had begun to be interested in psychological crises and crisis therapy (Stenstedt 1973:4154-4155, my translation).

The impression is that it was conviction that made them start using the psychiatric crisis as a possibility to regard the patient from new perspectives (cf. Foucault 2003). This conviction has many similarities to Hacking's explanation of how social classifications can change our consciousness and let us enter new worlds (Hacking 1992). Using theories about the psychiatric crisis is one example of how professionals attained new perspectives in the 1970s and regarded the patients from the ward in a slightly new way. At the same time, it is important to point out that this feeling of conviction is related to the self-fulfilling potential in the psychiatric crisis theory. There must be a classificatory looping when the social classifications interact with the behaviour that has been classified (Hacking 1999). This perspective could be confirmed by interaction with the patient, and with other professionals. Further, if we go back to the quotation, Stenstedt stresses that 'many others' had been inspired by Cullberg's publication.

Partly, it was an expected change in the psychiatric clinics. Patients were not to be hospitalized, but were instead provided with a psychiatric treatment that could give individuals possibilities to handle the psychiatric crisis largely on their own. The crisis psychotherapy was now to be a support for the patient on his or her way to well-being and inner growth; this is an argumentation that has been highlighted in previous studies (Frykman 1994; Rose 1998; Ekenstam 2006). Through this change, the internal and mental self-control of the patients emerged, replacing the external control. On the basis of this reasoning, my claim is that the psychiatric crisis can be seen as a form of self-caring project for the individual. Not only did social classifications interact with the behaviour that had been classified; they also interacted with a moral category of what a patient was and should be.
Finally, if we once again return to Hacking, he discusses how social classifications can change our experience of which moral category we belong to (Hacking 1999). I would argue that the psychiatric crisis had this effect in the 1970s, when new conceptual meanings changed how a crisis situation could be experienced, altering the responsibilities of the individual in this situation. Thus, self-care should be understood as a central part of the classificatory looping of this specific social classification, the psychiatric crisis. It is when the psychiatric crisis as a social classification interacts with the patient's behaviour that self-care can also be activated and institutionalized as a practice in the care unit (cf. Mol et al. 2010). Therefore, self-care must be analysed in relation to those social classifications that are part of a historical and cultural context.

In order to understand this change, it is important to relate the transformation in the psychiatric clinics to a more general change in the historical and cultural context. Using these different cultural expressions, the article shows how self-realization and individual development became embedded as a cultural ideal. This can provide us with perspectives on how self-care came to be used in practice in the beginning of the 1970s and influenced both healthcare and the everyday lives of people.

Conclusion

In this article, I have studied how the psychiatric crisis became a social classification in the 1970s, not only providing new perspectives on some specific kinds of behaviour, but also transforming this behaviour into part of a self-caring project. I have traced this historical development to international psychologists and psychoanalysts; it was introduced in Sweden by the psychiatrist Cullberg, in the publication 'The Psychic Trauma' from 1971 (Cullberg 1971). In the 1970s, these theories about the psychiatric crisis and crisis psychotherapy were tested in different pilot projects at psychiatric clinics. In the article, the pilot projects are understood as answers to the need to encounter patients with individualized requests, which enhanced the need for a treatment that took the feelings of the patients seriously. The patient's care for him- or herself became more important than external control. This provided opportunities for the crisis psychotherapy to be regarded as a self-caring project.

Acknowledgements

I would like to thank Lars-Eric Jönsson and Gabriella Nilsson, Lund University, for thorough and constructive comments. I would also like to thank the anonymous peer reviewers for thorough and constructive comments.

Kristofer Hansson is an ethnologist, Ph.D. and researcher at the Department of Arts and Cultural Sciences, Lund University. His primary research interests are biomedical technologies, medical praxis in health care, and citizen participation. He has published in Swedish and international journals as well as anthologies on these subjects. He is also a member of The Nordic Network for Health Research within Social Sciences and the Humanities (http://nnhsh.org). E-mail: Kristofer.Hansson@kultur.lu.se

Notes

1 The concept of the psychiatric crisis developed theoretically from the 1940s and onwards; many psychologists and psychoanalysts from America came to use the word. See for example the psychiatrist Eric Lindemann (Lindemann 1944), the psychoanalyst Elliot Jaques (Jaques 1965) and the psychoanalyst Erik Homburger Erikson (Erikson 1993).
2 In 1975, Johan Cullberg published the book Crisis and Development (1980), which became a very popular textbook in Sweden. However, the shorter publication from 1971, which was used in the alternative psychiatric treatment, is studied in this article.

3 In the 1970s and the beginning of the 1980s, more crisis titles were published in Sweden by both Swedish authors and translated authors. See for example: Ekselius et al. 1976; Fried 1978; Folksams sociala råd och TCO:s socialpolitiska råd 1979; Ewing 1980.

4 The classification was divided into the following categories: psychosis 7 percent, neurosis 45 percent, borderline 5 percent, 'crisis' 39 percent and others 4 percent (Stenstedt 1973:4157).

5 Moreover, a similar point was the focus on preventing long-term hospitalization and on the patients going back to work as soon as possible.

6 Another vital matter for the outpatient care unit was to reduce the medication of psychotropic drugs. With this form of crisis treatment, the attempt of the outpatient care unit was to get away from medicalization and instead try to find other forms of care for those patients who needed psychiatric help. Psychotropic drugs were in the article defined as the less desirable option for treatment, and were considered to make the patient passive and regressive in the course of his or her illness (Stenstedt 1973).
2017-10-11T07:50:52.585Z
2012-11-09T00:00:00.000
{ "year": 2012, "sha1": "9959d7994f154c1d58f86480167ecab3cf4c7948", "oa_license": "CCBY", "oa_url": "https://cultureunbound.ep.liu.se/article/download/2016/1382", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "9959d7994f154c1d58f86480167ecab3cf4c7948", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Sociology" ] }
222311590
pes2o/s2orc
v3-fos-license
Circadian Profile of Salivary Melatonin Secretion in Hypoxic Ischemic Encephalopathy

Purpose In the present study, the salivary melatonin secretion in children with hypoxic ischemic encephalopathy (HIE) was measured. The logit model was fitted to the data to obtain the salivary dim light melatonin onsets (DLMOs), and the results were compared with the values estimated from the classic threshold method with a linear interpolation and with those previously published for the blood measurements. Materials and Methods 9 patients suffering from HIE aged from 65 to 80 months were included in the study. The melatonin levels were assessed by a radioimmunoassay (RIA). The diurnal melatonin secretion was estimated using a nonlinear least squares method. Student's t-test and the Mann-Whitney U test were used for the comparisons of the obtained parameters. Results The circadian profiles of the melatonin secretion for both calculation methods do not differ statistically. The DLMO parameters obtained in the blood and saliva samples in children with hypoxic ischemic encephalopathy were similar.

Introduction

Melatonin (N-acetyl-5-methoxytryptamine), secreted mainly by the pineal gland, but synthesized also in many other tissues and cells, diffuses into blood plasma and saliva, where it can be measured experimentally. Because it is involved in the regulation of circadian rhythms, such as the sleep-wake rhythm, neuroendocrine rhythms, or body temperature cycles, the disturbances of its secretion are considered an early indicator of certain disorders and also a biomarker for their follow-up [1]. The most commonly reported phase marker of the melatonin secretion is the DLMO, the dim light melatonin onset [2]. Under physiological conditions, it is particularly convenient, since it can usually be obtained about 2 to 3 hours prior to habitual sleep. In our previous studies, we showed that melatonin secretion in children with hypoxic ischemic encephalopathy (HIE) is significantly disturbed, even more strongly than in epilepsy. Its characteristic features, as seen from the blood sampling [3,4], are the delayed melatonin phase release and the shift of the DLMO parameters to the later morning hours. Mathematical modelling of the circadian melatonin cycle was used by us to objectify the description of the melatonin secretion in blood [5]. Although many mathematical models are used to obtain the information on the circadian phase from the plasma melatonin rhythm [6], they may be insufficient when modelling the salivary melatonin secretion due to the much lower concentration of melatonin in saliva than in the plasma. Moreover, because taking the saliva samples without disturbing a sleeping individual is difficult, the estimates may be of lower resolution, and in consequence, the lack of data from the entire time of melatonin secretion makes it impossible to use any threshold calculation that depends on the overall amplitude of the pulse [7]. Most of the modelling methods depend on curve-fitting of the melatonin profile and/or the crossing of a threshold to determine phase. The simple threshold interpolation is one of the methods most widely used for the DLMO description in endogenous melatonin secretion [8,9], though it is claimed to be less accurate than the more flexible curve-fitting models [10,11]. In saliva, the phase estimates are usually calculated by the same curve-fitting and threshold methods as for plasma melatonin, but the nonlinear shape of melatonin secretion and missing data may lead to contradictory results.
Therefore, more complex interpolations, biophysical model fitting, or differential equation methods were introduced in the circadian rhythm analysis [4,10,12]. In our previous work [5], we applied a bell-shaped function to model the melatonin secretion in blood in the HIE children; such modelling requires a complete melatonin curve to calculate the DLMOs. The logit model proposed in this work makes it possible to estimate the DLMO parameters with a shorter melatonin collection time. We used such a model to fit the salivary melatonin secretion onsets in the HIE children, and its results were compared with those obtained from the classic threshold method with linear interpolation. Then, the estimates of the salivary DLMO parameters were compared with the blood values from the previous study [5].

Materials and Methods

The study was approved by the Ethics Committee of the Medical University of Silesia in Katowice. Informed written consent was obtained from the parents or caregivers. The study was carried out at the Department of Pediatric Neurology, School of Medicine in Katowice, the Medical University of Silesia in Katowice.

2.1. Patients. Nine patients suffering from hypoxic ischemic encephalopathy aged from 61 to 82 months (mean age 5.92 years, SD ± 0.56) were included in the study from a group of 19 patients. Ten patients were rejected at the initial stage of the data analysis due to missing data from the rising part of the salivary melatonin onset; in such cases, the phase markers could not be calculated. The demographic characteristics of the participants are shown in Table 1. The recruitment procedure of the study group was the same as described in our previous work [5]. Before and during the experimental period, the subjects were not administered medications affecting melatonin secretion, such as benzodiazepines and their antagonists, fluvoxamine, caffeine, vitamin B12, and nonsteroidal anti-inflammatory drugs (aspirin, ibuprofen, indomethacin, adrenolytics, prostaglandin inhibitors, calcium channel blockers, dexamethasone, and antidepressants). Furthermore, the legal guardians of the patients were instructed not to use a toothpaste or mouthwash during the assessments. In 3 patients, the epileptic seizures did not occur on the day or the day before the melatonin measurements, and 6 children had them on the day of sampling. The amplitudes of the melatonin release were compared between these patients using a nonparametric Mann-Whitney U test, and no statistical significance was found (p = 0.3662).

Experimental Design. In order to determine the melatonin concentration and its circadian excretion profile, all subjects had their saliva taken every hour starting from 17:00 till 7:00 am. Saliva was collected in dim red light (10 lux) into the Salivette tubes (Sarstedt, Germany) by chewing on a cotton swab for 1-2 min. The sampling took place during the hospitalization at the Department of Pediatric Neurology, Medical University of Silesia in Katowice. During the collection, the patients stayed in a darkroom. The use of tablets and cell phones was prohibited. The collected samples were shipped on dry ice to the laboratory to be radioimmunoassayed for melatonin detection. The experiment was performed under a dim light condition. The enzyme-linked immunosorbent assay (ELISA) method was used. The lower limit of sensitivity was determined by interpolating the mean optical density minus 2 SDs of 30 sets of duplicates at the 0 pg/mL level. The minimal concentration of melatonin that can be distinguished from 0 was 1.37 pg/mL.
The functional sensitivity of the assay (Salivary Melatonin EIA Salimetrics) was 1.42 pg/mL, and the intra- and interassay coefficients of variability were 0.2% and 16.6%, respectively.

Data Analysis. The circadian timing was determined by calculating the DLMO values as the marker of the individual circadian clock and, in particular, as an indicator of the beginning of the internal biological night. Moreover, the minimum and maximum melatonin concentrations were also calculated [8,11,13].

Melatonin Secretion Parameters: Curve-Fitting Method. In order to describe the nonlinear character of the melatonin onset, the three-parameter logit estimation was applied to the data [4,10]. Due to the changes in the melatonin secretion amplitudes in pediatric patients, relative thresholds were used in the description of the melatonin profiles to normalize the amplitude differences and facilitate the comparisons [2,10]. The model is based on a time-dependent melatonin function MLT(t) (equation (1)). The melatonin secretion parameters were normalized, and their biophysical interpretation is as follows:
(1) b1 is the melatonin release amplitude (pg/mL), where the amplitude is the difference between the minimum and maximum melatonin concentrations;
(2) b2 (DLMO50E) denotes the time at which the melatonin level exceeds 50% of the amplitude (h), where E indicates the method of determining the parameter (E for estimation);
(3) b3 is the minimum melatonin concentration (pg/mL) from the range [0, +∞);
(4) the maximum melatonin concentration bmax is the sum of b1 and b3.
The dim light melatonin onset 25% (the time at which the melatonin level exceeds 25% of the amplitude) was calculated from equation (1). The phase and amplitude parameters may have different meanings in some situations depending on the methods used in the secretion profile analysis [4,5,10,14-17]. Thus, Figure 1 presents the graphical representation of the melatonin secretion model and visualizes the biophysical meaning of its parameters. This set of parameters was estimated in order to define a melatonin cycle in saliva. The data were fitted with a nonlinear least squares fitting analysis based upon the Levenberg-Marquardt method in Statistica 12 software. The quality of the obtained models was verified by the normality test of the residuals' distribution, the statistical significances of the estimated parameters, the percentage of the explained variance (>80%), and the R value (>0.89).

Melatonin Secretion Parameters: Threshold-Based Method with Linear Interpolation. The salivary DLMO was also determined using the threshold-based methods that depend on the crossing of a predetermined melatonin concentration. A linear interpolation was used to determine the threshold-crossing time. Relative thresholds of 25% and 50% of the melatonin amplitude were chosen to obtain DLMO25I and DLMO50I, respectively (the I index denotes that the parameters were obtained via interpolation). The minimum and maximum melatonin concentrations correspond to the lowest and highest values in the measured secretion profile, respectively.
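For illustration, the two DLMO determination approaches described above can be sketched in a few lines of Python. This is a minimal sketch, not the computation actually performed in Statistica: it assumes a generic logistic onset MLT(t) = b3 + b1/(1 + exp(-(t - b2)/s)) with an explicit slope parameter s, which may differ from the exact parameterization of equation (1), and the sampling grid, starting values, and noisy synthetic profile are illustrative assumptions only.

import numpy as np
from scipy.optimize import curve_fit

def logistic_onset(t, b1, b2, b3, s):
    # Assumed logistic onset: baseline b3, amplitude b1,
    # time of half-maximal rise b2, slope parameter s.
    return b3 + b1 / (1.0 + np.exp(-(t - b2) / s))

def dlmo_from_fit(b2, s, fraction):
    # Solving b3 + b1/(1 + exp(-(t - b2)/s)) = b3 + fraction*b1 for t
    # gives the estimated crossing time (fraction = 0.5 returns b2 itself).
    return b2 + s * np.log(fraction / (1.0 - fraction))

def dlmo_by_interpolation(t, y, fraction):
    # Threshold method: linear interpolation of the first crossing of
    # min(y) + fraction*(max(y) - min(y)), as for DLMO25I and DLMO50I.
    threshold = y.min() + fraction * (y.max() - y.min())
    i = int(np.argmax(y >= threshold))  # index of the first supra-threshold sample
    if i == 0:
        return float(t[0])
    t0, t1, y0, y1 = t[i - 1], t[i], y[i - 1], y[i]
    return float(t0 + (threshold - y0) * (t1 - t0) / (y1 - y0))

# Hourly sampling from 17:00 to 07:00, expressed as hours since midnight of day 1.
t = np.arange(17.0, 32.0, 1.0)
rng = np.random.default_rng(0)
y = logistic_onset(t, 60.0, 24.5, 4.0, 1.2) + rng.normal(0.0, 2.0, t.size)

# Nonlinear least squares; with no bounds given, curve_fit uses Levenberg-Marquardt.
p0 = [y.max() - y.min(), 24.0, max(y.min(), 0.1), 1.0]
(b1, b2, b3, s), _ = curve_fit(logistic_onset, t, y, p0=p0, method="lm")

print("DLMO50E:", b2)                               # estimated 50% crossing
print("DLMO25E:", dlmo_from_fit(b2, s, 0.25))       # estimated 25% crossing
print("DLMO50I:", dlmo_by_interpolation(t, y, 0.50))
print("DLMO25I:", dlmo_by_interpolation(t, y, 0.25))

By construction, DLMO50E coincides with the fitted b2, while DLMO25E lies s·ln(3) hours earlier; the interpolated values depend directly on the sampled minimum and maximum and are therefore more sensitive to noise and to missing samples.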
Statistical Comparison of the Melatonin Secretion. The obtained melatonin parameters (the minimum and maximum melatonin concentrations and the DLMO parameters) were interpreted in terms of their biophysical and clinical meaning and analyzed using Statistica 12 software. In order to compare the estimated (curve-fitting method) secretion parameters with those obtained with the threshold-based method (with a linear interpolation), Student's t-test was used. Since, for the minimum melatonin concentration, the individual groups did not meet the requirements for the parametric tests (the data were not normally distributed), a nonparametric Mann-Whitney U test was used. Finally, the salivary melatonin DLMO parameters estimated using the curve-fitting method were compared with those obtained from the blood samples of the HIE patients [5]. Values less than 0.05, the predetermined significance level, were accepted as indicating that the observed result would be highly unlikely under the null hypothesis.

Results

The salivary melatonin onset curves were approximated for each patient separately, and an illustrative example of the estimated model obtained for a representative patient with hypoxic ischemic encephalopathy is shown in Figure 2. The approximated parameters are b1 = 60.07 pg/mL; b2 = 24.65 h; b3 = 4.05 pg/mL; bmax = 64.12 pg/mL; and DLMO25 = 23.26 h.

Comparison of the Salivary Melatonin Secretion Parameters Calculated from the Curve-Fitting and Threshold-Based (with a Linear Interpolation) Methods. The salivary melatonin secretion profiles (DLMO50, DLMO25, and maximum and minimum melatonin concentrations) estimated by the curve-fitting method were compared with those obtained using the traditional threshold-based method with a linear interpolation. The statistical analysis was performed using Student's t-test, and the results are presented in Tables 2 and 3. Moreover, the minimum melatonin concentrations were compared using the Mann-Whitney-Wilcoxon test because the requirements for the parametric tests were not fulfilled for this variable (the data were limited and not normally distributed).

Comparison of the Salivary and Blood Melatonin DLMO Parameters in Hypoxic Ischemic Encephalopathy. The salivary DLMO parameters obtained using the curve-fitting method were compared with those obtained from the blood secretion profiles for the HIE group from our previous study [5] (Table 4). The results show more evidence for the null hypothesis of no difference between the salivary and blood DLMOs in the HIE groups. The statistical analysis of the other concentration parameters was not performed due to the expected significant disproportion of the melatonin concentration values in the two biofluids.
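As a companion to the sketch above, the cohort-level comparison of the two methods can also be expressed compactly. The arrays below are hypothetical per-patient DLMO50 values introduced only for illustration; they are not the study's data.

import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

# Hypothetical per-patient DLMO50 values (hours) from the two methods.
dlmo50_estimated = np.array([24.7, 23.9, 25.1, 24.2, 24.8, 23.5, 25.4, 24.0, 24.6])
dlmo50_interpolated = np.array([24.5, 24.1, 25.0, 24.4, 24.9, 23.7, 25.2, 24.1, 24.4])

# Student's t-test for parameters meeting the parametric assumptions.
t_stat, p_t = ttest_ind(dlmo50_estimated, dlmo50_interpolated)

# Mann-Whitney U test for parameters that violate normality,
# such as the minimum melatonin concentrations.
u_stat, p_u = mannwhitneyu(dlmo50_estimated, dlmo50_interpolated, alternative="two-sided")

print(f"t-test: t = {t_stat:.3f}, p = {p_t:.3f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {p_u:.3f}")

A p value above the 0.05 significance level would, as in Tables 2-4, indicate no statistically significant difference between the two calculation methods.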
Discussion

Determination of the melatonin levels in saliva is the most popular method, due to its ease, relatively low invasiveness, and the relationship between the circadian changes of the melatonin concentration in saliva and the melatonin variations in plasma [18] or serum [19-21]. Also, the urinary excretion of 6-sulphatoxymelatonin (aMT6s), the major melatonin metabolite in humans, has been shown to oscillate consistently with the melatonin concentration in plasma and saliva [2,22]. As we found in the PubMed database, in the last decade, the salivary melatonin measurements were mentioned in about 57% of the papers concerning the melatonin secretion, whereas about 25% related to its evaluation in blood or serum and about 18% in urine. The concentration of melatonin in saliva is 24-33% of the plasma melatonin (this percentage reflects the free melatonin fraction, not bound to albumin), making its routine determination an analytical challenge [18,23,24]. Due to the volume differences of plasma and saliva, the interindividual differences in sensitivity to light, the diurnal variation in melatonin synthesis, and the effects associated with the continuous production of saliva and the low salivary melatonin concentrations, the salivary melatonin sampling is of lower resolution and sensitivity than that of blood [2,25]. Moreover, the salivary measurements are associated with limitations resulting from eating, drinking, and oral hygiene measures that could falsify the results. Thus, taking into account the evident quantitative advantage of the salivary measurements over those in other biofluids, but also the much lower accuracy of the salivary melatonin profile estimates, one may ask whether the latter are reliable and what the actual correspondence of the results obtained for various biofluids is. Though such comparisons are available for healthy subjects [26], papers on the correlation between the salivary and blood melatonin levels in HIE children are sparse [2,6,27]. In our study, the salivary DLMO parameters obtained for the HIE children were compared with the blood phase markers of children with the same diagnosis [5]. As revealed from the comparison, the DLMO values are consistent, showing that the circadian melatonin phase markers in blood and saliva for children with hypoxic ischemic encephalopathy are similar. Additionally, for both data sets, the curve-fitting method was applied in the DLMO calculations. Unfortunately, the differences in the data collection procedures did not allow the use of one circadian model. However, when using different fitting curves in the estimation process, one must be aware that this may introduce uncertainty into the determined values. On the other hand, due to the frequent limitations of the saliva and blood collection methods, one universal melatonin secretion modelling method seems to be unavailable. Therefore, to facilitate the comparisons of the results and to verify them, some comparative testing should be applied [2], and in our study, the comparison of the salivary parameters with the blood melatonin ones plays such a role. Though the saliva and blood samples were collected for two separate HIE groups, the inclusion criteria for the children with the same clinical diagnosis were the same. In the estimations, we focused on the DLMO, as the most accepted and reliable circadian phase marker, claimed to be more reliable than the DLMOoff (the dim light melatonin offset) and the phase markers derived from the core body temperature rhythm [11,15]. On the other hand, the greatest diversity in the published methods occurs with the determination of the onset of melatonin secretion. The main hypothesis of our work was that the DLMO parameters obtained via the estimation and interpolation methods are compatible. Moreover, since full salivary melatonin profiles without missing data are usually difficult to obtain, we tested the ability of the simplified model to predict the rising part of the melatonin synthesis onset accurately. Two methods of the DLMO determination from the salivary melatonin measurements in the HIE patients were compared: the threshold-based interpolation and the curve-fitting method. The logit model was developed to describe the onset part of the melatonin secretion cycle. The statistical analysis of the results confirmed the consistency of the circadian parameters estimated by both methods.
As expected, the salivary melatonin data are highly spread, especially during the night part of the cycle. Due to the observed fluctuations, the criteria for the model quality acceptance were lowered (the percentage of the explained variance (>80%) and the R value (>0.89)) compared to the literature data [5,16,28]. The fitting method is useful when other methods of the melatonin cycle description are either impossible or impractical [5,10,16,17,28], in case of difficult (a large statistical spread) or incomplete data, where the bell-shaped model, being more demanding, cannot be applied [5]. Unfortunately, due to the simplicity of the logit function, it does not allow estimation of the DLMOoff, indication of the location of the release amplitude, or calculation of the duration of the night melatonin release [4].

The individual differences in the sleep/wake schedules can also be analyzed and described in terms of the patient's chronotype. Because the circadian clocks vary with sex, age, the genetic background [29], and light exposure [30], the Morningness-Eveningness Chronotype Questionnaires, such as the Munich ChronoType Questionnaire (MCTQ) [31], the Morningness-Eveningness Questionnaire (MEQ) [32], or the Morningness-Eveningness Scale for Children (MESC) [33], may be applied as a simplified estimate of the circadian timing. Importantly, the chronotypes assessed with them are generally strongly correlated with the DLMO [34], both in adults [35] and in healthy school-aged children and adolescents [36-38], but the reports on infants are scarce [39,40]. According to Simpkin et al. and Randler et al. [39,40], at toddler age there is a prevalence of the morning types, but during the following years a progressive delay in chronotype takes place [39], and finally, each type of chronotype can be seen in preschool children [36,39,41]. The differences between the morning and evening chronotypes are seen as a shift of the DLMO values towards the night hours [35]. The morning-type individuals have earlier sleep-wake schedules, earlier diurnal peaks of alertness and performance, and earlier sleep propensity rhythms than the evening-type individuals [42]. On the other hand, accumulating evidence suggests that there is a feedback between the epileptic seizures and the circadian rhythms; in consequence, the seizure timing influences the timing of the daily activities, sleeping, and wakefulness, i.e., the chronotype [4,16]. One study [43] showed that the phase shift of the melatonin release occurs later in the epileptic patients and found that there is a significant relationship between a phase shift of the melatonin peak and the seizures. However, supporting analyses with the application of the chronotype questionnaires were not performed in that study. Our current results confirm the previous observations, as in the HIE children, the DLMO50 and DLMO25 values are shifted to the late night hours too. In accordance with the previous findings, also in this study, in 6 patients (66.6% of the studied group), the epileptic seizures occurred on the day of the melatonin sampling, leading the melatonin secretion in children with hypoxic ischemic encephalopathy to be strongly disturbed. Thus, the obtained results point towards the supposition that, in the HIE children, the evening-type chronotype may dominate.
However, the studied group is too small (9 subjects), and the salivary measurements were not supported by chronotype questionnaire analyses, leaving this supposition uncertain. Because the reports concerning the influence of antiepileptic drugs on the melatonin levels in saliva and plasma of pediatric patients are ambiguous [44-46], we decided not to exclude patients on the basis of the treatment applied. Along with the small group size, this is the main drawback of the study, but as shown in Tables 2-4, no statistically significant differences between the biofluids and the methods were found. A larger study, based upon the mathematical modelling of the whole melatonin profiles and with the application of the chronotype questionnaires, would be necessary to gain a better insight into the disturbances of the circadian rhythms in HIE patients and into their chronotype. The main disadvantage of the saliva sampling is the lack of standardized sampling protocols and standardized normative values enabling the comparison of the results in the tested groups, especially in young children or in epileptic children. In our study, the saliva and blood samples were taken every hour, which, according to Crowley et al., allows the DLMO to be estimated as accurately as with a sampling interval of 0.5 hour [47]. Abeysuriya et al. indicate that the development of modelling will open new possibilities to calculate and compare the melatonin secretion profiles independently of the biomaterial being tested [26]. Such modelling may allow normative values for melatonin to be established. We hope that our comparative two-model mathematical approach to the evaluation of the melatonin secretion parameters (DLMO) in two biofluids brings us closer to such a solution and underlines the role of mathematical modelling. Generally, a higher sampling rate and more data streams used for fitting are necessary to obtain more accurate prospective predictions.

Conclusions

In this study, we compared the basic parameters of melatonin secretion calculated using the curve-fitting method and the popular threshold method (with a linear interpolation). We showed that the results do not differ statistically, which, in our opinion, argues in favor of using a simple and well-known method that is more resistant to imperfect sampling. Moreover, we compared the results of the determined time parameters (DLMO25 and DLMO50) with those obtained in the blood melatonin measurements from our previous work [5]. Despite the differences in the nature of these biofluids and in the sampling schemes (regular blood measurements vs. frequent incomplete data in saliva), we showed that the results of the examined parameters do not differ statistically. In both studies, different mathematical models were used, but the obtained DLMO parameters agree and do not differ statistically, which allows us to conclude that they could be used interchangeably as needed.

Data Availability

The datasets generated for this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
2020-10-14T05:06:36.333Z
2020-09-25T00:00:00.000
{ "year": 2020, "sha1": "28579601523c010bda51e1ece5048b271fafedd4", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ije/2020/6209841.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "28579601523c010bda51e1ece5048b271fafedd4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266678624
pes2o/s2orc
v3-fos-license
The Role of the Toll-like Receptor 2 and the cGAS-STING Pathways in Breast Cancer: Friends or Foes?

Breast cancer stands as a primary malignancy among women, ranking second in global cancer-related deaths. Despite treatment advancements, many patients progress to metastatic stages, posing a significant therapeutic challenge. Current therapies primarily target cancer cells, overlooking their intricate interactions with the tumor microenvironment (TME) that fuel progression and treatment resistance. Dysregulated innate immunity in breast cancer triggers chronic inflammation, fostering cancer development and therapy resistance. Innate immune pattern recognition receptors (PRRs) have emerged as crucial regulators of the immune response as well as of several immune-mediated or cancer cell-intrinsic mechanisms that either inhibit or promote tumor progression. In particular, several studies showed that the Toll-like receptor 2 (TLR2) and the cyclic GMP-AMP synthase (cGAS)-stimulator of interferon genes (STING) pathways play a central role in breast cancer progression. In this review, we present a comprehensive overview of the role of TLR2 and STING in breast cancer, and we explore the potential to target these PRRs for drug development. This information will significantly impact the scientific discussion on the use of PRR agonists or inhibitors in cancer therapy, opening up new and promising avenues for breast cancer treatment.

Introduction

Breast cancer is the most prevalent malignancy in women and ranks as the second leading cause of cancer-related deaths on a global scale [1]. Despite advancements in the treatment of breast cancer, a significant proportion of patients still advance to metastatic disease, posing a considerable challenge to effective therapy. The limited success of current therapies often stems from a predominant focus on targeting cancer cells alone, overlooking the intricate interactions within the tumor microenvironment (TME). Various cell populations in the TME engage in complex communication with cancer cells, facilitating cancer progression and resistance to treatments. In some instances, the complex crosstalk between cancer and immune cells leads to immunosuppression, redirecting the immune system toward a protumorigenic response.

Dysregulation of innate immunity is often associated with breast cancer and significantly contributes to inducing a chronic inflammatory state within the TME, a hallmark of cancer that contributes to all steps of cancer development and to resistance to current therapies [2]. Moreover, some of the receptors typically expressed by innate immune cells, such as pattern recognition receptors (PRRs), may promote tumor growth through intrinsic mechanisms within cancer cells [3].
Mounting evidence suggests a significant association between Toll-like receptors (TLRs) and the development of breast cancer [4,5]. TLRs exhibit a dual role in cancer: they can mediate tumor cell death, activating an effective antitumor immune response, or inhibit the immunosuppressive activity of myeloid-derived suppressor cells (MDSCs), preventing tumor growth [6,7]. However, they also possess protumorigenic functions [7]. This is particularly evident for TLR2, the most expressed TLR in triple-negative breast cancer [4]. Indeed, data from the Kaplan-Meier plotter (kmplot.com (accessed on 21 December 2023)) and from the literature indicate that high expression of TLR2 is associated with poor relapse-free survival in breast cancer patients [8]. On the contrary, the expression levels of the other TLRs are not associated with poor prognosis in breast cancer (kmplot.com (accessed on 21 December 2023)). TLR2 activation by pathogen-associated or damage-associated molecular patterns (PAMPs or DAMPs, respectively) activates a signaling cascade initiated by Myeloid Differentiation Primary Response protein 88 (MyD88). The consequent recruitment of the interleukin-1 receptor-associated kinase (IRAK)-TNF Receptor Associated Factor (TRAF) complex leads to the activation of the NF-κB and MAPK pathways. This induces the production of pro-inflammatory cytokines, and in cancer cells, may stimulate epithelial to mesenchymal transition (EMT) and proliferation (Figure 1). TLR2 promotes cancer cell survival and proliferation in breast [8] and gastric cancers [9], as well as in pancreatic ductal adenocarcinoma [10]. However, some studies suggested that TLR2 activation by PAMPs, or DAMPs such as high-mobility group box 1 (HMGB1) and heat shock proteins (HSPs), may promote anticancer immune responses [11-13]. The conflicting reports on TLR2's antitumor and protumor properties underscore the critical need to comprehend its context-dependent role in breast cancer for potential therapeutic advancements.
Figure 1. TLR2 is localized in the outer cell membrane and mainly dimerizes with TLR1 and TLR6. TLR2 uses the canonical MyD88 pathway to transduce a signal that, through the IRAK-TRAF6 complex, induces the activation of NF-κB and MAPK. NF-κB is responsible for the transcription of several pro-inflammatory genes.

Recent attention has focused on another PRR with a dual role in breast cancer, the stimulator of interferon genes (STING). The cyclic GMP-AMP synthase (cGAS)-STING pathway plays a crucial role in the innate immune system, recognizing cytosolic dsDNA [14]. Upon dsDNA binding, cGAS triggers cGAMP production, activating STING and the interferon regulatory factor 3 (IRF3) transcription factor. This cascade leads to the expression of type I interferons (IFNs), pro-inflammatory cytokines, and chemokines [15,16]. The consequent capability of the STING pathway to stimulate the cytotoxic activity of natural killer (NK) and cytotoxic CD8+ T cells [16] led to the development of many exogenous STING agonists used to stimulate anticancer immune responses. However, despite promising preclinical data, the majority of the clinical trials using STING agonists failed. One reason could lie in STING-induced activation of the NF-κB and MAPK pathways, which may foster cancer growth [17]. Indeed, chronic activation of STING may increase the expression of immunoregulatory genes [18] and promote breast cancer progression [19].
The emerging conflicting roles of TLR2 and STING in breast cancer progression, associated with their therapeutic potential, have prompted us to focus on the role of these two PRRs in breast cancer. Understanding the molecular mechanisms underlying their antitumor and protumorigenic effects holds promise for developing new combined therapies that could ameliorate cancer patients' prognosis.

Role of TLR2 in Anticancer Immune Responses

TLRs are fundamental tools exploited by the immune system to trigger the innate immune response against pathogens and subsequently activate adaptive immunity. The immune surveillance is not only focused on preventing or fighting infections. It is also crucial for identifying and eliminating malignant cells that can initiate tumor development. When a tumor arises, the immune system engages in a complex process to counteract tumor progression, and attempts to resist cancer-induced immune suppression. In this context, the activation of TLRs by DAMPs released from cancer cells may enhance antigen presentation mechanisms in dendritic cells (DCs) and promote macrophage differentiation towards the M1 phenotype, known for its antitumor activity [20]. Consequently, TLR2 agonists have been evaluated in several solid tumor models as adjuvants for immune stimulation. For instance, the TLR2 natural ligand polysaccharide krestin (PSK) was able to stimulate TLR2 and elicit NK cell-mediated antitumor responses in different cancer models. In particular, PSK potentiated trastuzumab-induced antibody-dependent cell cytotoxicity of HER2+ breast cancer cells by stimulating NK cell activation [21]. Confirming that TLR2 stimulates NK cells, Ke et al. demonstrated that treatment with Strongylocentrotus nudus egg polysaccharide, a TLR2 agonist, induces NK cell proliferation, cytotoxicity and release of interleukin (IL)-2 and IFN-γ in a mouse model of lung cancer [22]. In lung cancer preclinical models, the administration of PAMPs or synthetic TLR2 ligands induced the differentiation of M1 macrophages that release nitric oxide, IFN-γ and pro-inflammatory cytokines, suggesting that TLR2 activation favors antitumor immune reactions [23]. In melanoma, TLR2 stimulation using the synthetic compound diprovocim was reported to induce antitumor activity in response to ovalbumin vaccination [24]. Similarly, in a fibrosarcoma study, combining vaccination against tumor-associated antigens with TLR2 agonists resulted in an increase in CD8+ T cells and antibody production, with a concomitant reduction in Treg frequency. However, the administration of TLR2 ligands alone produced the opposite effect, increasing Tregs. This suggests that TLR2 can yield both pro- and antitumoral effects, depending on the context [25].

Thus, the role of TLR2 in anticancer immune responses remains controversial and largely reliant on tumor types and models. TLR2 appears to possess the potential to either stimulate the immune system or suppress it. Part of this controversy might arise because TLR2 expression is not confined to the immune system. Many cancer cell types express TLR2 and take advantage of the activation of its signaling pathway, as will be discussed in the following paragraph.
The Protumoral Role of TLR2 in Breast Cancer

Immune cells exploit PRRs to detect disruptions in tissue homeostasis caused by cancer cells, thereby triggering an antitumor immune response. However, cancer cells can express PRRs as well, benefiting from the activation of their signaling pathways [26]. Among the PRRs, members of the TLR family exhibit an intriguing dual role in cancer. TLR2 mRNA expression is significantly higher in breast cancer than in normal tissues, with a higher expression in the triple-negative and HER2+ subtypes as compared to the luminal A and luminal B [8]. A significant increase in the levels of soluble TLR2 was observed in the sera from patients with both metastatic and non-metastatic breast cancer, as compared with healthy donors. Of note, soluble TLR2 was significantly higher in metastatic than in non-metastatic breast cancer patients, suggesting that TLR2 might be used as a biomarker to monitor disease progression [27]. Moreover, a significant correlation was observed between high TLR2 expression and poor prognosis in breast cancer patients [8]. Similarly, a positive correlation exists between the expression levels of its partners, TLR1 and TLR6, and the development of brain metastases in breast cancer patients [28], indicating the potential involvement of the TLR2 heterodimers in the metastatic spread of pre-existing tumors. Furthermore, high TLR2 mRNA levels are associated with poor relapse-free and overall survival in breast cancer patients who underwent surgery [8], as well as in patients treated with endocrine therapy or chemotherapy [8,29]. Importantly, TLR2 expression levels can predict the response to endocrine therapy with high accuracy in both luminal A and luminal B breast cancer patients [8]. Resistance to both endocrine therapy and chemotherapy has been associated with the presence of cancer stem cells (CSCs) [30]. TLR2 is expressed by breast cancer cells and promotes CSC self-renewal, invasiveness and drug resistance [31]. These effects result from various mechanisms, mostly dependent on the availability of TLR2 ligands, particularly DAMPs, in the TME. These molecules can be actively secreted by cancer cells or passively released during chemo- or radiotherapy, activating TLR2 signaling in either cancer or immune cells. Among the DAMPs that induce TLR2 activation, HMGB1 plays a major role. In the nucleus, HMGB1 is involved in DNA repair processes and transcription, interacting with transcription factors like p53 [32]. However, HMGB1, like other moonlighting proteins, not only acts as a DNA-binding protein but also functions outside the cell. It can be actively secreted in response to cytokine stimulation or passively released from necrotic or damaged cells, subsequently inducing TLR2 activation [33]. Notably, breast cancer patients exhibiting high HMGB1 expression are more prone to develop cancer metastasis, especially in triple-negative breast cancer [34]. Other important DAMPs activating TLR2 include heat shock proteins (HSPs), a family of proteins that play a dual role. While their intracellular function involves supporting the correct folding or refolding of nascent and misfolded proteins, they can also be actively secreted or released by necrotic cells, binding to TLR2 in the TME. In breast cancer, the extracellular HSP90 co-chaperone Morgana acts as another significant activator of TLR2 [35], whereas versican has been identified as the DAMP involved in TLR2-mediated tumor promotion in glioma and lung cancer [36]. TLR2 also serves as a sensor of PAMPs,
representing a bridge between eukaryotic cells and the microbial world, whose alterations may influence cancer development in different ways. In addition, TLR2 expressed on immune cells may not only exert antitumor activity but also lead to several immune suppressive effects that indirectly promote cancer progression [37]. Hence, we can classify TLR2's protumoral effects into two primary categories: cancer cell-intrinsic and -extrinsic.

The Cancer Cell-Intrinsic Protumoral Effects of TLR2

In breast cancer cells, TLR2 is overexpressed and can be activated by endogenous ligands such as HSPs, HMGB1 and other DAMPs, or by exogenous ligands derived from pathogens, like bacterial lipoproteins. Once activated, TLR2 signaling promotes tumor growth and survival. Among the critical downstream molecules activated by TLR2 is NF-κB, which triggers the transcription of various pro-survival and pro-inflammatory genes [37]. Wenjie Xie and colleagues have demonstrated that TLR2 activation promotes breast cancer cell survival, proliferation and invasion through the activation of NF-κB and the secretion of protumoral cytokines. They also reported a 10-fold higher TLR2 expression in the invasive MDA-MB-231 human triple-negative breast cancer cells compared to the poorly invasive ER+ MCF-7 cells [38]. TLR2 plays a role in the regulation of CSCs, a subpopulation of tumor cells with self-renewal and tumor-initiating capabilities. Activation of TLR2 in mammary epithelial stem cells and in breast cancer cells enhances the expression of stemness-related genes, promoting the acquisition of CSC properties that contribute to treatment resistance and tumor recurrence [39]. We have previously demonstrated the upregulation of TLR2 in breast CSC-enriched tumorspheres compared to epithelial-like breast cancer cells. We showed that the HMGB1-TLR2-NF-κB axis promotes CSC self-renewal through the release of IL-6 and tumor growth factor-β (TGF-β), subsequently activating STAT3 and Smad3 [31]. More recently, we demonstrated that TLR2 promotes breast cancer progression and metastasis in HER2+ transgenic mice in a cancer cell-intrinsic way. Indeed, tumor growth was impaired in HER2+ TLR2 KO mice as compared to their TLR2 WT littermates. Transplantation experiments using breast cancer cell lines derived from these mice demonstrated that TLR2 KO cells were not tumorigenic, in contrast to TLR2 WT cells. However, as discussed in the next section, TLR2 also contributes to tumor progression in a tumor cell-extrinsic manner, by shaping an immunosuppressive TME [29]. Similarly, TLR2 deletion impairs tumorigenesis in MMTV-Wnt1 transgenic mice that spontaneously develop ER-negative mammary tumors. Of note, TLR2 further upregulates the transcription of Wnt-dependent genes (including Cd44 and Lgr5) [39], which serve as significant mediators of NF-κB protumorigenic activity [40].
Apart from its direct effect on tumorigenesis, we have demonstrated that TLR2 mediates breast cancer resistance to doxorubicin and other drugs inducing immunogenic cell death (ICD) and the release of DAMPs that activate TLR2 signaling in cancer cells, thereby enhancing their survival [29]. In addition to NF-κB, TLR2 signaling activates the MAPK pathway, involved in cell proliferation and migration. In gastric cancer, this activation leads to the EMT, a pivotal process linked to cancer cell invasion and metastasis. TLR2-mediated EMT allows cancer cells to acquire a more migratory and invasive phenotype, facilitating their dissemination to distant sites and the formation of metastasis (Figure 2) [41]. Another important aspect of TLR2's protumoral role is its impact on angiogenesis, the process of forming new blood vessels to supply nutrients and oxygen to growing tumors. TLR2 activation in breast cancer cells upregulates the expression of vascular endothelial growth factor (VEGF) and other pro-angiogenic factors, fostering the formation of new blood vessels that support tumor growth [38].

Beyond its direct effects on cancer cells, TLR2 may also confer protection from CD8+ T cell killing. Other TLRs, such as TLR4, induce the expression of immune checkpoint molecules like programmed death-ligand 1 (PD-L1), which can lead to T-cell exhaustion and suppress antitumor immune responses [42]. However, the correlation between TLR2 and PD-L1 expression requires further elucidation.

TLR2 serves as both a DAMP detector and a sensor of PAMPs, connecting TLR2-expressing cells with the microbial world. This mechanism has been largely studied in the immune system for its protective role against infections. Recently, researchers have focused on studying the role of the microbiota in different diseases. Alterations in the microbiota composition, and its interaction with our cells, might be involved in many pathogenic processes, including carcinogenesis and cancer recurrence [37,43]. Several studies have demonstrated the correlation between specific bacterial species and tumor development, especially in the gastrointestinal system. For instance, Helicobacter pylori contributes to gastric and colon carcinogenesis through various mechanisms, including TLR2-mediated activation of NF-κB and Wnt, and the induction of EMT in epithelial cells [44,45]. A specific microbiota has been detected in the mammary gland, and it is altered in breast cancer. Bacteroides fragilis is found in tumor biopsies of breast cancer patients, and promotes tumorigenesis. Fusobacterium nucleatum (F. nucleatum), an oral commensal bacterium, can spread to other organs and become pathogenic. Indeed, F. nucleatum is detected in colon and breast cancer tissues, directly promoting tumor progression by activating TLR2 signaling in cancer cells and inducing immunosuppression, as described in the following paragraph [46][47][48].

Beyond the individual protumoral role played by specific bacterial species under certain conditions, it is important to note more complex alterations in the microbiota across various parts of the body. These conditions, called dysbiosis, can cause carcinogenesis or promote the progression of existing tumors by enhancing resistance to therapies and the spread of metastasis. Correlations between antibiotic therapies before cancer diagnosis or in its early stages and poorer prognosis in breast cancer patients have been reported. This is probably caused by the establishment of dysbiosis and an increased presence of protumoral bacteria. Furthermore, chemotherapy can induce dysbiosis and opportunistic infections, increasing the availability of TLR2 ligands and potentially limiting therapy effectiveness [49][50][51].

Collectively, this evidence strongly suggests that in breast cancer, the presence of TLR2 along with its endogenous or exogenous ligands can significantly influence a worse prognosis.
The Cancer Cell-Extrinsic Protumoral Effects of TLR2

TLR2 does not just impact cancer cells directly; it also plays a significant role in cancer progression through its influence on the immune system. When activated, TLR2 can either promote or hinder tumor growth, depending on the specific immune cells involved. Tregs, MDSCs, neutrophils and macrophages express TLR2, and upon its activation, they contribute to cancer progression and metastasis due to their immunosuppressive functions [37]. Treg frequency is significantly decreased both in the TME and in the periphery in TLR2 KO breast tumor-bearing mice, since TLR2 activation on Treg cells induces their expansion [29,52]. Moreover, upon TLR2 stimulation, macrophages release chemokines that recruit Tregs [53]. Tregs subsequently inhibit the antitumor activity of CD8+ T cells through the release of immunosuppressive cytokines, like IL-10 [37]. Similarly, TLR2 activation in B lymphocytes induces their differentiation into regulatory B cells (Bregs) that produce IL-10 and suppress the T-cell antitumor response [54]. TLR2 activation in CD4+ T cells, triggered by HSP90 on autophagosomes released by breast cancer cells, initiates an autocrine IL-6 cascade. This process induces the expression of IL-10 and IL-21, fostering immune suppression and inhibiting antitumor responses [55]. TLR2 activation in bone marrow precursors induces their differentiation to MDSCs, which accumulate in the TME and release protumoral cytokines. Moreover, this contributes to the polarization of macrophages towards the protumoral M2 phenotype and the release of nitric oxide (NO), a potent inhibitor of effector T cells. DAMPs such as HMGB1 and serum amyloid A 1 protein, secreted by breast cancer cells and elevated in the plasma and tumor biopsies of patients with advanced triple-negative breast tumors, induce an immunosuppressive response in neutrophils via TLR2 [34,56]. Specifically, HMGB1 prompts the release of neutrophil extracellular traps, favoring the development of lung metastasis in triple-negative breast cancer mouse models [34].

These mechanisms collectively create an immunosuppressive microenvironment that supports tumor progression. TLR2 activation within immune cells triggers the release of various factors that hinder immune responses against tumors, promoting their growth and spread. Understanding these interactions helps identify potential targets to disrupt the immunosuppressive TME and bolster antitumor immunity.
Role of cGAS-STING in Antitumor Immunity

Besides its crucial functions in the immune response against pathogens, the cyclic GMP-AMP synthase (cGAS)-stimulator of interferon genes (STING) pathway plays a pivotal role in the antitumoral immune response. This pathway is widely expressed in immune cells as well as cancer cells, influencing carcinogenesis through various mechanisms. One such mechanism involves the induction of a robust type I interferon (IFN) response mediated by cGAS-STING activation by tumor-derived DNA in immune cells and tumor cells themselves [57]. Type I IFNs enhance tumor immunogenicity and facilitate the adaptive immune response against cancer cells [58]. In the context of cancer, the cGAS-STING pathway is activated by cytosolic DNA derived from chromosomal instability (CIN), a hallmark of cancer [59,60], or by DNA damage resulting from cancer therapy. Upon activation in tumor cells, the release of type I IFNs, characteristic of "hot" tumors, shapes the TME and triggers an antitumor response. Type I IFNs target DCs in the TME, which play a crucial role in initiating tumor-specific T cell responses and eliminating the tumor [61]. The activation of the cGAS-STING pathway can also trigger downstream NF-κB signaling, which plays a significant role in regulating tumor growth. Under certain circumstances, the activation of the canonical NF-κB pathway may synergize with cGAS-STING activation, enhancing the type I IFN response and strengthening the antitumor immune defense [62]. In addition to cell-intrinsic DNA sensing in tumor cells, the cGAS-STING pathway can be activated in immune cells within the TME by tumor-derived DNA via membranous vesicles, such as exosomes [63], which fuse with the immune cell membrane. DCs take up extracellular DNA, resulting in a type I IFN response that, in turn, increases tumor-infiltrating DCs and enhances presentation to CD8+ T cells [64]. Furthermore, other studies have demonstrated that not only tumor-derived DNA but also the immunostimulatory second messenger cGAMP released by tumor cells can activate the cGAS-STING pathway [65,66]. Tumor-derived cGAMP can be transferred to immune cells in the TME, triggering the release of type I IFNs via STING, which, in turn, activates NK cells, crucial for antitumor immunity [67,68]. Additionally, the uptake of cGAMP has also been observed in monocytes and macrophages [69], where it activates STING signaling and leads to reprogramming of M2 tumor-promoting macrophages to an M1 antitumor phenotype [70]. However, a pan-cancer analysis has demonstrated that elevated activation of the cGAS-STING pathway, particularly of IRF3, is associated with poor prognosis in patients diagnosed with specific types of cancer, including colorectal, prostate and lung adenocarcinomas [71]. Indeed, it is becoming more and more evident that the cGAS-STING pathway exerts a controversial role in cancer, supporting diverse and sometimes opposing functions, favoring tumor progression in some contexts. These aspects will be dissected in the following paragraph.
The Protumoral Role of cGAS-STING in Breast Cancer

As discussed above, the cGAS-STING pathway plays an important role in antitumor adaptive immunity, involving the release of type I IFNs that shape the TME in an antitumoral setting. Consistently, defective STING signaling has been suggested to promote tumorigenesis and host immunosurveillance evasion [72]. Various studies have demonstrated that epigenetic silencing of the cGAS and STING genes, rather than mutation, occurs in different types of cancer [73,74]. However, cGAS-STING signaling has also been linked to cancer cell survival and tumor progression. In the context of breast cancer, it has been shown that cGAS-STING signaling activation often yields paradoxical outcomes. While the type I IFN response induced by acute STING activation has antitumor effects, persistent activation of this pathway results in chronic inflammation. This induces a shift from the type I IFN response and canonical NF-κB signaling to noncanonical NF-κB signaling, contributing to tumor-promoting effects [75,76]. Persistent activation of cGAS-STING signaling has been observed in different tumor types with high levels of CIN, including breast cancer, resulting in tumor progression, invasion, and metastasis formation [75,77] through both cell-intrinsic and cell-extrinsic mechanisms.

In the context of breast cancer, CIN and the subsequent DNA damage occur frequently [77] and are known to activate cGAS-STING signaling. It has recently been reported that tumor cells with CIN rely on the cGAS-STING pathway to promote cancer cell survival through STING-mediated NF-κB activation, which in turn induces the expression of IL-6 and the subsequent activation of STAT3 [78]. Furthermore, DNA damage induced by chemotherapy in triple-negative breast cancer cells triggers the cGAS-STING pathway, leading to NF-κB activation and the pro-survival IL-6-STAT3 axis. This promotes immune escape by upregulating PD-L1 expression [79]. Additional studies revealed that PD-L1 expression by breast cancer cells contributes to the expression of the IFN-related DNA damage resistance signature (IRDS), a subset of IFN-induced genes that protect cancer cells from DNA damage. This mechanism is sustained by low and persistent levels of type I IFNs due to DNA damage-induced cGAS-STING activation [80]. Similarly, another study highlights that genotoxic stress induced by chemotherapy sustains the DNA damage response and breast cancer cell resistance by triggering chronic cGAS-STING-dependent IFN production. This, in turn, induces the expression of PARP12, a member of the ADP-ribosyl transferase family involved in the control of protein translation and inflammation, whose high expression is associated with poor prognosis in breast cancer patients [81]. Conversely, conflicting studies stated that chronic activation of cGAS-STING signaling by CIN or DNA damage induced by radio- and chemotherapy results in the downregulation of the type I IFN response. Instead, it promotes downstream alternative NF-κB signaling, leading to the upregulation of EMT-related gene expression, supporting tumor invasion and metastasization [75,82]. In addition, mutated p53, but not wild-type p53, has been reported to suppress the canonical cGAS-STING-TBK1-IRF3 axis in breast cancer. This occurs by binding to TBK1 and preventing the formation of the trimeric STING-IRF3-TBK1 complex [83]. Ultimately, this results in reduced type I IFN production, switching to the alternative NF-κB signaling (Figure 3). Collectively, these studies indicate that breast tumor cells
can rewire the cGAS-STING signaling to promote cancer cell survival, tumor progression and metastasization.
Figure 3. cGAS-STING antitumoral and protumoral mechanisms. This graphical representation illustrates the antitumoral and protumoral roles played by the cGAS-STING signaling pathway. In tumor cells, the activation of the cGAS-STING pathway is predominantly triggered by CIN or DNA damage induced by chemo- and radiotherapy. Upon activation, double-stranded DNA fragments bind to cGAS, leading to the synthesis of cGAMP. Subsequently, cGAMP binds to STING dimers in the endoplasmic reticulum (ER) membrane, activating STING and causing its trafficking to an ER-Golgi intermediate compartment. In this compartment, TBK1 is recruited to phosphorylate STING, and this phosphorylation, in turn, recruits IRF3. Phosphorylated IRF3 can form dimers, translocate to the nucleus, and activate the transcription of target genes, including type I IFNs, that are released by tumor cells, recruiting antitumor immune cell populations. Conversely, a chronically low production of type I IFNs activates PARP12, whose overexpression is associated with poor prognosis in breast cancer patients. Furthermore, dysregulated cGAS-STING signaling can activate the transcription factor NF-κB, leading to chronic inflammation and tumor-promoting effects. This includes the activation of the IL-6-STAT3 axis, promoting pro-survival effects, tumor cell proliferation, and PD-L1 expression, resulting in immune escape. Mutated forms of p53 play a role in the transition from the canonical cGAS-STING-TBK1-IRF3 signaling to the cGAS-STING-NF-κB signaling, contributing to the tumor-promoting effects of the cGAS-STING pathway. Created with BioRender.com (accessed on 14 November 2023).

Activation

Innate immunity serves as our primary defence against various diseases, including cancer, and acts as a vital intermediary for the activation of adaptive immunity. Consequently, employing agonists capable of stimulating innate immune pathways appears to be a logical approach in the realm of anticancer therapies. Despite the considerable attention these molecules have garnered in the past decade among researchers, they remain relatively unexplored as therapeutic targets. Particularly in breast cancer, only a limited number of studies have been published on this subject.
To date, the majority of the investigations on TLR2 ligands have centered on Pam3CSK4. This agonist has been utilized to enhance vaccination efficacy by augmenting antigen presentation by DCs and, consequently, CD8+ T cell activation in preclinical cancer therapy [84][85][86]. Recently, Wei Shi and colleagues demonstrated the effectiveness of a self-assembled vaccine targeting tumor-specific antigens combined with TLR2 agonists as an immunotherapeutic approach against breast cancer. This combination was found to induce DC maturation and bolster the CD8+ T cell response [87]. Additionally, a study by Bichern Liu and colleagues demonstrated that administering Streptococcus-derived PepO protein in a triple-negative breast cancer mouse model stimulates antitumor immunity by steering macrophage differentiation towards the M1 phenotype through TLR2 signaling [88]. Prior research by Hailing Lu and colleagues showed that the mushroom-derived polysaccharide krestin (PSK) exhibits antitumor activity by triggering an immune response in NK cells via TLR2 activation. Hence, PSK synergizes with trastuzumab, enhancing its ability to mediate antibody-dependent cell cytotoxicity (ADCC) in a mouse model of HER2+ mammary cancer. Similar results were obtained using human NK cells and MDA-MB-231 cells [21,89]. However, apart from these studies, there is a lack of other preclinical studies on TLR2 activation as a treatment for breast cancer. To our knowledge, clinical trials administering TLR2 agonists in breast cancer patients have not been conducted.

In contrast, the stimulation of cGAS-STING signaling is a strategy that has shown promising results in different tumors in preclinical models. The study of STING agonists has gained ground in recent years, either as a standalone treatment or in combination with other therapies. Recently, Corrales et al. demonstrated that the intratumoral injection of STING agonists leads to an effective antitumor T-cell response triggered by a potent type I IFN production [90]. Furthermore, STING agonist cyclic dinucleotides can serve as potent adjuvants for radiotherapy, enhancing the adaptive antitumor response and reducing the immunosuppressive microenvironment by reprogramming M2 macrophages [91]. Concerning breast cancer, different STING agonists have been employed to enhance the antitumor response, mostly in combination with other strategies. Systemic administration of cGAMP, the endogenous ligand of STING, showed high efficacy in suppressing tumor growth and reducing lung metastasis in a mouse model of triple-negative breast cancer [92]. Other STING agonists, such as c-di-GMP, have been shown to increase the potency of a bacterial-based vaccine for metastatic breast cancer in the mouse triple-negative breast cancer 4T1 model. This approach suppresses metastasization and impairs tumor growth by targeting MDSCs [93]. Due to the high expression of PD-L1 in many types of breast cancer, STING agonists have been tested in combination with immune checkpoint blockade (ICB). It has been observed that STING agonists combined with atezolizumab have a synergistic effect in improving the antitumor response in a model of breast cancer by increasing CD8+ T cells and reducing Tregs infiltrating the tumor [94]. cGAS-STING exogenous activation can also promote NK cell antitumor immunity. Lu et al.
demonstrated that treatment with high doses of cGAMP can prime the signaling activation in NK cells from PBMCs of patients with cancer [68]. Conversely, another study indicates that direct uptake of cGAMP by NK cells results in cell death, while the delivery of encapsulated STING agonists is able to indirectly activate NK cells through STING signaling activation in DCs [95]. As discussed in the previous paragraphs, mutated forms of p53 can interfere with the antitumor activities of cGAS-STING signaling in breast cancer, while wild-type p53 contributes to its activation [83]. For this reason, pharmacological reactivation of p53 could be an effective strategy to reactivate the cGAS-STING-dependent IRF3 signaling in tumor cells, promoting cancer cell apoptosis and modulating the TME [83] (Figure 4).
Despite the promising outcomes observed in preclinical tumor models, the translation of STING agonists into successful clinical trials has proven challenging. This discrepancy can be attributed to the limited efficacy of different STING agonists, such as 5,6-dimethylxanthenone-4-acetic acid (DMXAA), which was the inaugural STING agonist evaluated in clinical trials and the sole one to reach phase III. These agonists have demonstrated efficacy restricted to murine STING [96]. Notably, structural analyses of human and murine STING underscored the significance of developing STING agonist derivatives specifically binding to the human receptor [97]. Furthermore, the inherent instability of the majority of STING agonists in vivo has hindered their therapeutic effectiveness. Consequently, numerous preclinical trials across various solid cancers are underway to explore formulations aimed at circumventing this challenge. Strategies include incorporating these molecules into nanoparticles or inducing their in vivo production by bacteria or cells. However, as of now, no reported outcomes exist regarding breast cancer patients' responses to these trials [98].
Inhibition

Considering TLR2's multifaceted role in fostering protumoral and immunosuppressive effects, as delineated in the preceding sections, we propose an innovative therapeutic strategy centered on TLR2 inhibition (see Figure 4). The ambiguous nature of TLR2 in cancer necessitates a meticulous approach, weighing the use of agonists or inhibitors contingent upon specific contexts. While TLR2 stands as a crucial molecule for the immune system, insights from TLR2 KO mice, exhibiting no evident alterations in immune cells or other functions, support the feasibility and safety of its inhibition [29,99]. However, no clinical trials have been conducted, or are currently ongoing, pertaining to TLR2 inhibition in breast cancer or other solid tumors. Nevertheless, the utilization of TLR2-targeting monoclonal antibodies or inhibitors has shown promise in clinical trials in hematological malignancies [100]. Studies in mouse models of various cancers, such as gastric and pancreatic cancer, have revealed that genetic deletion of TLR2 or inhibition using blocking antibodies like OPN-301 hindered tumorigenesis [41]. Recent publications have also highlighted the impact of natural compounds like Robinin in inhibiting pancreatic cancer progression through the modulation of the TLR2-PI3K-Akt signaling pathway [101]. Moreover, investigations in head and neck squamous cell carcinoma demonstrated that TLR2-blocking antibodies restrained the growth of organoids and patient-derived xenografts (PDXs) [102]. Despite promising outcomes observed with TLR2 antagonistic targeting in various cancer models, there remains a lack of investigations into TLR2 inhibition as a treatment for breast cancer. Recently, our team published a study demonstrating that TLR2 promotes breast cancer progression in preclinical models, both through cancer cell-intrinsic mechanisms and immunosuppression. Notably, TLR2 plays a pivotal role in chemotherapeutic resistance, since chemotherapy triggers the release of DAMPs, activating its signaling pathway. Importantly, inhibition of TLR2 using the small molecule CU-CPT22 potentiated the efficacy of chemotherapy both in vitro and in vivo [29]. We believe that our study, coupled with existing data on TLR2 targeting in diverse cancers, could pave the way for a novel approach in breast cancer treatment, warranting comprehensive exploration in the forthcoming years.
Conclusions

The wide array of PRRs exhibiting both tumor-suppressive and tumor-promoting functions presents an unparalleled prospect for the development of anticancer treatments aimed at either enhancing or inhibiting their signaling pathways. However, a comprehensive understanding of how a particular receptor operates in a specific cancer environment is crucial to avoid inadvertently bolstering tumor-promoting immune suppression or instigating prolonged inflammation during treatment. In the context of breast cancer, TLR2 and STING emerge prominently among PRRs for their potential in pioneering new therapeutic approaches. Modulating these specific PRRs holds the promise of triggering both intrinsic anticancer effects within tumor cells and eliciting immune-mediated anticancer responses. Such modulation may augment the sensitivity of cancer cells to chemotherapy and ICB. Directly inhibiting TLR2 with antagonists, or addressing elements within the gut or tumor microbiota that trigger its activation using antimicrobial agents, could potentially heighten the efficacy of certain anticancer treatments in tumors resistant to standard therapies. Simultaneously, activating the cGAS-STING pathway in immune cells could foster T lymphocyte recruitment within the TME and restore an antitumor immune microenvironment. It is worth noting that potential challenges associated with employing drugs that modulate these PRRs in cancer immunotherapy could be mitigated by targeting these drugs specifically to breast cancer or immune cells, utilizing selectively targeted nanoparticles or other sophisticated delivery systems. The integration of these strategies with chemotherapy and/or ICB paves the way for the development of innovative therapies in the ongoing battle against breast cancer.

Figure 1. Schematic representation of TLR2 dimers and their signaling pathway. Like other TLRs, TLR2 forms homo- or heterodimers that allow for activation and signaling upon ligand binding. TLR2 is localized in the outer cell membrane, and mainly dimerizes with TLR1 and TLR6. TLR2 uses the canonical MyD88 pathway to transduce a signal that, through the IRAK-TRAF6 complex, induces the activation of NF-κB and MAPK. NF-κB is responsible for the transcription of several pro-inflammatory cytokines. The MAPK pathway induces the epithelial to mesenchymal transition, promoting cancer cell invasion and metastasis. Created with BioRender.com (accessed on 22 December 2023).
Figure 2. TLR2-mediated protumoral mechanisms. Cancer cells actively or passively release DAMPs, especially following cell death induced by chemo- or radiotherapy. In addition, alterations in the microbiota due to antibiotic therapies, chemotherapies, or other factors can foster the proliferation of bacterial species that exert protumoral activity, releasing PAMPs in the TME. These endogenous and exogenous TLR2 ligands activate its signaling, promoting tumor progression in a cancer cell-intrinsic manner (on the left). TLR2 signaling activates NF-κB, which subsequently transcribes protumoral cytokines such as IL-6, TGF-β and VEGF. This stimulates cancer cell survival, angiogenesis and resistance to therapies. Additionally, TLR2 triggers the MAPK pathway, inducing EMT and promoting metastasis. Moreover, TLR2 is expressed by immune cells, where it exerts immunosuppressive effects favoring tumor progression in a cancer cell-extrinsic manner (on the right). TLR2 induces the differentiation of T and B regulatory cells, as well as of MDSCs from bone marrow precursors. MDSCs contribute to immunosuppression through several mechanisms, including the reprogramming of macrophages into the M2 phenotype. These processes collectively result in the production of cytokines that inhibit CD8+ T cells and their activity, promoting tumor immune evasion. Created with BioRender.com (accessed on 10 November 2023).
Figure 4. Approaches to PRR-targeted therapies for breast cancer. Schematic representation of the therapeutic approaches proposed in this review. Left panel: The cGAS-STING pathway exhibits alterations in various tumors, including breast cancer. The pathological shift of this signaling, from an immunostimulatory state to a protumoral one, can be reversed by utilizing STING agonists combined with agents that restore the canonical NF-κB and IRF3 pathways, hindering cancer progression (left circle). The proposed combination of STING agonists with the drug APR-246, known for reactivating the tumor suppressor p53 in p53-mutated tumors, is suggested here. The efficacy of these therapies could potentially be enhanced through combination with ICB immunotherapies, such as anti-PD-L1 antibodies. Right panel: Conflicting data exist in the current literature concerning the role of TLR2 in cancer. Consequently, the use of agonists or inhibitors must be carefully evaluated based on the specific context. TLR2 agonists appear promising in enhancing antigen presentation when coupled with anticancer vaccines (right upper circle). However, considering the protumoral roles of TLR2 as described in this review, TLR2 inhibition might serve as an alternative strategy, particularly in tumor subtypes where TLR2 correlates with poor prognosis. Furthermore, TLR2 inhibition has shown promising outcomes when combined with chemotherapies inducing immunogenic cell death, thereby releasing DAMPs capable of activating TLR2 signaling and its described protumoral effects. Created with BioRender.com (accessed on 14 November 2023).
Stochastic integration based on simple, symmetric random walks

A new approach to stochastic integration is described, which is based on an a.s. pathwise approximation of the integrator by simple, symmetric random walks. Hopefully, this method is didactically more advantageous, more transparent, and technically less demanding than other existing ones. In a large part of the theory one has a.s. uniform convergence on compacts. In particular, it gives a.s. convergence for the stochastic integral of a finite variation function of the integrator, which is not càdlàg in general.

Introduction

The main purpose of the present paper is to describe a new approach to stochastic integration, which is based on an a.s. pathwise approximation of the integrator by simple, symmetric random walks. It is hoped that this method is pedagogically more advantageous, more transparent, and technically less demanding than other existing ones. This way hopefully a larger, mathematically less mature audience may get acquainted with a theory of stochastic integration. Since stochastic calculus plays an ever-increasing role in several applications (mathematical finance, statistical mechanics, engineering, ...) nowadays, this aim seems to be justified.

A basic feature of the present theory is that, except for the integral of general predictable integrands, one always has almost sure uniform convergence on compact intervals. This is true for the approximation of the integrators, quadratic variations, local times, and stochastic integrals when the integrand is a $C^1$ or a finite variation function of the integrator. We believe that for a beginner it is easier to understand almost sure pathwise convergence than $L^2$ or in-probability convergence that typically appear in stochastic calculus textbooks. We mention that our method significantly differs from the a.s. converging approximation given by Karandikar [4].

Important tools in our approach are discrete versions of Itô's and Itô-Tanaka formulas. The continuous versions easily follow from these by a.s. pathwise convergence. (See earlier versions of it in [9] and [10].) To our best knowledge, our approach is new in giving a.s. convergence for the stochastic integral of a finite variation function of the integrator, which is not càdlàg in general. In the case of a more general, e.g. predictable integrand, our method is closer to the traditional ones, e.g. we too have $L^2$ convergence then. The only, hopefully interesting, difference is that in the approximating sums random partitions are used so that the values of the integrand are multiplied by $\pm 1$ times a constant scaling factor. The signs are from an independent, symmetric coin tossing sequence.

The most general integrators considered in this paper are continuous local martingales $M$. It is easy to extend this to semimartingales $X$ that can be written $X = M + A$, where $M$ is a continuous local martingale and $A$ is a finite variation process. The reason is simple: integration with respect to such an $A$ can also be defined pathwise.

Preliminaries

A basic tool of the present paper is an elementary construction of Brownian motion (BM). The specific construction we are going to use in the sequel, taken from [10], is based on a nested sequence of simple, symmetric random walks (RW's) that uniformly converges to the Wiener process (= BM) on bounded intervals with probability 1. This will be called the "twist and shrink" construction in the sequel. Our method is a modification of the one given by Frank Knight in 1962 [5].
We summarize the major steps of the "twist and shrink" construction here. We start with an infinite matrix of independent and identically distributed random variables $X_m(k)$, $P\{X_m(k) = \pm 1\} = \frac{1}{2}$ ($m \ge 0$, $k \ge 1$), defined on the same complete probability space $(\Omega, \mathcal{F}, P)$. (All stochastic processes in the sequel will be defined on this probability space.) Each row of this matrix is a basis of an approximation of the Wiener process with a dyadic step size $\Delta t = 2^{-2m}$ in time and a corresponding step size $\Delta x = 2^{-m}$ in space. Thus we start with a sequence of independent simple, symmetric RW's $S_m(0) = 0$, $S_m(n) = \sum_{k=1}^{n} X_m(k)$ ($n \ge 1$).

The second step of the construction is twisting. From the independent RW's we want to create dependent ones so that after shrinking temporal and spatial step sizes, each consecutive RW becomes a refinement of the previous one. Since the spatial unit will be halved at each consecutive row, we define stopping times by $T_m(0) = 0$, and for $k \ge 0$,
$$T_m(k+1) = \min\{ n : n > T_m(k), \; |S_m(n) - S_m(T_m(k))| = 2 \}.$$
These are the random time instants when a RW visits even integers, different from the previous one. After shrinking the spatial unit by half, a suitable modification of this RW will visit the same integers in the same order as the previous RW. We operate here on each point $\omega \in \Omega$ of the sample space separately, i.e. we fix a sample path of each RW. We define twisted RW's $\tilde S_m$ recursively for $m = 1, 2, \dots$ using $\tilde S_{m-1}$, starting with $\tilde S_0(n) = S_0(n)$ ($n \ge 0$) and $\tilde S_m(0) = 0$ for any $m \ge 0$. With each fixed $m$ we proceed for $k = 0, 1, 2, \dots$ successively, and for every $n$ in the corresponding bridge, $T_m(k) < n \le T_m(k+1)$. Any bridge is flipped if its sign differs from the desired one, and then $\tilde S_m(n) = \tilde S_m(n-1) + \tilde X_m(n)$. Then $\tilde S_m(n)$ ($n \ge 0$) is still a simple symmetric RW [10, Lemma 1]. The twisted RW's have the desired refinement property:
$$\tilde S_m(T_m(k)) = 2\, \tilde S_{m-1}(k) \qquad (k \ge 0). \tag{1}$$

The third step of the RW construction is shrinking. The sample paths of $\tilde S_m(n)$ ($n \ge 0$) can be extended to continuous functions by linear interpolation, this way one gets $\tilde S_m(t)$ ($t \ge 0$) for real $t$. The $m$th "twist and shrink" RW is defined by $\tilde B_m(t) = 2^{-m}\, \tilde S_m(t\, 2^{2m})$. Then the refinement property takes the form
$$\tilde B_m\big(T_m(k)\, 2^{-2m}\big) = \tilde B_{m-1}\big(k\, 2^{-2(m-1)}\big) \qquad (k \ge 0). \tag{2}$$
Note that a refinement takes the same dyadic values in the same order as the previous shrunken walk, but there is a time lag in general: $T_m(k)\, 2^{-2m} \ne k\, 2^{-2(m-1)}$. It is clear that this construction is especially useful for local times, since a refinement approximates the local time of the previous walk, with a geometrically distributed random number of visits with half-length steps (cf. Theorem C below).

Now we quote some important facts from [10] and [12] about the above RW construction that will be used in the sequel.

Theorem A. On bounded intervals the sequence $(\tilde B_m)$ almost surely uniformly converges as $m \to \infty$ and the limit process is Brownian motion $W$. For any $C > 1$, and for any $K > 0$ and $m \ge 1$ such that $K 2^{2m} \ge N(C)$, a quantitative bound for $\sup_{0 \le t \le K} |\tilde B_m(t) - W(t)|$ holds, in which $K^* := K \vee 1$ and $\log^* K := (\log K) \vee 1$. ($N(C)$ here and in the sequel denotes a large enough integer depending on $C$, whose value can be different at each occasion.)

Conversely, with a given Wiener process $W$, one can define the stopping times
$$s_m(0) = 0, \qquad s_m(k+1) = \inf\{ t : t > s_m(k), \; |W(t) - W(s_m(k))| = 2^{-m} \} \qquad (k \ge 0), \tag{3}$$
which yield the Skorohod embedded RW's $B_m(k 2^{-2m})$ into $W$. With these stopping times the embedded dyadic walks by definition are
$$B_m(k 2^{-2m}) = W(s_m(k)) \qquad (k \ge 0). \tag{4}$$
This definition of $B_m$ can be extended to any real $t \ge 0$ by pathwise linear interpolation. If a Wiener process is built by the "twist and shrink" construction described above using a sequence $(\tilde B_m)$ of nested RW's and then one constructs the Skorohod embedded RW's $(B_m)$, it is natural to ask about their relationship; a simulation sketch of the construction itself is given below.
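The following short simulation is our own illustrative sketch, not code from [10] or [12]; the function name and all parameters are ours. It implements the three steps above (independent walks, twisting, shrinking) and prints the endpoints $\tilde B_m(1)$ of the successive approximations, which should stabilize as $m$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def twist_and_shrink(M, K):
    """Return the shrunken twisted walks B~_0, ..., B~_M on [0, K].
    B~_m has spatial step 2**-m on the time grid of mesh 2**-2m."""
    # independent +/-1 increments for each level; level m uses K * 4**m steps
    X = [rng.choice([-1, 1], size=K * 4**m) for m in range(M + 1)]
    S = [np.concatenate(([0], np.cumsum(X[0])))]           # S~_0 = S_0
    for m in range(1, M + 1):
        prev, x = S[m - 1], X[m].copy()
        s = np.concatenate(([0], np.cumsum(x)))
        # stopping times T_m(k): visits to even integers, different from the last one
        T = [0]
        for n in range(1, len(s)):
            if s[n] % 2 == 0 and s[n] != s[T[-1]]:
                T.append(n)
        # twist: flip each bridge whose +/-2 increment differs from the one
        # dictated by the previous twisted walk (bridges beyond prev are left as is)
        for k in range(len(T) - 1):
            if k + 1 < len(prev):
                want = prev[k + 1] - prev[k]               # +/-1 step of S~_{m-1}
                got = (s[T[k + 1]] - s[T[k]]) // 2         # +/-1 sign of the bridge
                if got != want:
                    x[T[k]:T[k + 1]] *= -1
        S.append(np.concatenate(([0], np.cumsum(x))))
    # shrink: B~_m(k 2**-2m) = 2**-m S~_m(k)
    return [2.0**-m * S[m] for m in range(M + 1)]

walks = twist_and_shrink(M=6, K=1)
print([w[-1] for w in walks])   # endpoints B~_m(1): successive approximations of W(1)
```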
The next lemma explains that they are asymptotically identical. In general, roughly saying, $(\tilde B_m)$ is more useful when someone wants to generate stochastic processes from scratch, while $(B_m)$ is more advantageous when someone needs discrete approximations of given processes.

Lemma A. For any $C > 1$, and for any $K > 0$ and $m \ge 1$ such that $K 2^{2m} \ge N(C)$, take the subset $A^*_{K,m}$ of $\Omega$ on which the "twist and shrink" walks and the Skorohod embedded walks are uniformly close (with a suitable quantitative bound) up to time $K$; the complement of $A^*_{K,m}$ has probability going to zero exponentially fast as $m \to \infty$.

The next theorem shows that the rate of convergence of $(B_m)$ to $W$ is essentially the same as the one of $(\tilde B_m)$, cf. Theorem A.

Theorem B. For any $C > 1$, and for any $K > 0$ and $m \ge 1$ such that $K 2^{2m} \ge N(C)$, a bound analogous to the one in Theorem A holds for $\sup_{0 \le t \le K} |B_m(t) - W(t)|$.

In the last part of this section we quote a result from [12] about an elementary definition of Brownian local time, based on the "twist and shrink" RW's $(\tilde B_m(t))$. This is basically a finite-time-horizon version of a theorem of Révész [7], in a somewhat sharper form. First, one can define one-sided, up and down local times of the twisted walks,
$$\tilde\ell^{\pm}_m(n, x) := \#\{ j : 0 \le j < n, \; \tilde S_m(j) = x, \; \tilde S_m(j+1) = x \pm 1 \}. \tag{5}$$
The one-sided (and two-sided) local times of the $m$th "twist and shrink" walk $\tilde B_m$ at a point $x \in 2^{-m}\mathbb{Z}$ up to time $t \in 2^{-2m}\mathbb{Z}_+$ are defined as
$$\tilde L^{\pm}_m(t, x) := 2^{-m}\, \tilde\ell^{\pm}_m(t\, 2^{2m}, x\, 2^m), \tag{6}$$
corresponding to the fact that the spatial step size of $\tilde B_m$ is $2^{-m}$. Then $\tilde L^{\pm}_m(t, x)$ can be extended to any $t \in \mathbb{R}_+$ and $x \in \mathbb{R}$ by linear interpolation, as a continuous two-parameter process.

Theorem C. The sequence $(\tilde L^+_m(t, x))$ almost surely uniformly converges as $m \to \infty$ to one half of the Brownian local time $L(t, x)$, with a quantitative bound of the type appearing in Theorems A and B.

Very similar statements hold when the "twist and shrink" walks $\tilde B_m(t)$ are replaced by Skorohod embedded walks $B_m(t)$ in the definitions (5) and (6) of local time; this local time will be denoted by $L^{\pm}_m(t, x)$ in the sequel. This is the content of the following corollary.

Corollary 1. The sequence $(L^+_m(t, x))$ almost surely uniformly converges on $[0, K] \times \mathbb{R}$ as $m \to \infty$ to one half of the Brownian local time $L(t, x)$. Similar statements hold for $(L^-_m(t, x))$ as well.

Proof. By Lemma A, almost everywhere on an event $A^*_{K,m}$ whose complement has probability going to zero exponentially fast with $m$, the Skorohod embedded walk agrees with the "twist and shrink" walk up to a small uniform error on $[0, K]$. Thus on $A^*_{K,m}$, for almost every $\omega$ and for all $(t, x)$, the local times $L^{\pm}_m(t, x)$ and $\tilde L^{\pm}_m(t, x)$ are correspondingly close. Hence, by the triangle inequality and by Theorem C, we obtain the above statement.

A discrete Itô-Tanaka formula and its consequences

It is interesting that one can give discrete versions of Itô's formula and of the Itô-Tanaka formula, which are purely algebraic identities, not assigning any probabilities to the terms. Despite this, the usual Itô's and Itô-Tanaka formulae follow fairly easily in a proper probability setting. Discrete Itô formulas are not new. Apparently, the first such formula was given by Kudzma in 1982 [6]. A similar approach was recently used by Fujita [3] and Akahori [1]. Discrete Tanaka formulae were given by Kudzma [6] and Csörgő-Révész [2]. The elementary algebraic approach used in the present paper is different from these; it was introduced by the first author in 1989 [9].

Lemma 1. Take any function $f : \mathbb{R} \to \mathbb{R}$, $a \in \mathbb{R}$, step $\Delta x > 0$, and numerical sequence $X_r = \pm 1$ ($r \ge 1$). Let $S_0 = a$, $S_n = a + (X_1 + \cdots + X_n)\Delta x$ ($n \ge 1$). Let $T_f$ denote the trapezoidal sum of $f$ on the grid $a + \Delta x\, \mathbb{Z}$, with $T_f(a) = 0$ and the usual sign convention for arguments below $a$, and let $\ell^{\pm}(n, x) := \#\{ r : 1 \le r \le n, \; S_{r-1} = x, \; X_r = \pm 1 \}$ be the discrete up and down local times. Then the following equalities hold:
$$\sum_{r=1}^{n} \frac{f(S_{r-1}) + f(S_r)}{2}\, X_r \Delta x = T_f(S_n) - T_f(S_0) \tag{7}$$
(discrete Stratonovich formula). Alternatively,
$$\sum_{r=1}^{n} f(S_{r-1})\, X_r \Delta x = T_f(S_n) - T_f(S_0) - \frac{\Delta x}{2} \sum_{r=1}^{n} \big( f(S_r) - f(S_{r-1}) \big) X_r \tag{8}$$
(discrete Itô's formula). Finally,
$$\sum_{r=1}^{n} f(S_{r-1})\, X_r \Delta x = T_f(S_n) - T_f(S_0) - \frac{\Delta x}{2} \sum_{x} \Big[ \big( f(x + \Delta x) - f(x) \big)\, \ell^+(n, x) + \big( f(x) - f(x - \Delta x) \big)\, \ell^-(n, x) \Big] \tag{9}$$
(discrete Itô-Tanaka formula).

Proof. Algebraically,
$$T_f(S_r) - T_f(S_{r-1}) = \frac{f(S_{r-1}) + f(S_r)}{2}\, X_r \Delta x = f(S_{r-1})\, X_r \Delta x + \frac{\Delta x}{2}\, \big( f(S_r) - f(S_{r-1}) \big) X_r$$
$$= f(S_{r-1})\, X_r \Delta x + \frac{\Delta x}{2} \Big[ \big( f(S_{r-1} + \Delta x) - f(S_{r-1}) \big)\, \mathbf{1}_{\{X_r = 1\}} + \big( f(S_{r-1}) - f(S_{r-1} - \Delta x) \big)\, \mathbf{1}_{\{X_r = -1\}} \Big].$$
The first equality follows from the fact that if $X_r = 1$, one has to add a term, while if $X_r = -1$, one has to subtract a term in the trapezoidal sum. Then the second equality follows since $1/X_r = X_r$, and the third equality is obviously implied by the second. Summing up for $r = 1, \dots, n$, the sum on the left telescopes, and from the three equalities one obtains the three formulae, respectively.

Introducing the notation $h_{\pm\Delta x}(x) := (f(x \pm \Delta x) - f(x))/(\pm\Delta x)$ and comparing (8) and (9), we obtain a discrete occupation time formula, cf. (19):
$$\sum_{r=1}^{n} h_{X_r \Delta x}(S_{r-1})\, (\Delta x)^2 = \sum_{x} \big[ h_{\Delta x}(x)\, \ell^+(n, x) + h_{-\Delta x}(x)\, \ell^-(n, x) \big] (\Delta x)^2.$$

Let us apply Lemma 1 with Skorohod embedded walks $(B_m(t))$; a numerical check of the purely algebraic identity (8) is sketched below.
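Before letting $m \to \infty$, it is instructive to check that (8) is an exact algebraic identity and not an approximation. The sketch below is our illustration (the function and all names are arbitrary); it verifies (8) for a random $\pm 1$ sequence, and the two sides agree up to floating-point rounding.

```python
import numpy as np

rng = np.random.default_rng(2)

def trapezoidal(f, a, y, dx):
    """Signed trapezoidal sum T_f(y) of f on the grid a, a+dx, ..., y
    (y = a + k*dx, k integer), with T_f(a) = 0 and T_f negative below a."""
    k = round((y - a) / dx)
    if k == 0:
        return 0.0
    grid = a + dx * np.arange(0, abs(k) + 1) * np.sign(k)
    vals = f(grid)
    return np.sign(k) * dx * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)

f = np.sin                      # any f works: the identity is purely algebraic
a, dx, n = 0.3, 0.1, 1000
X = rng.choice([-1, 1], size=n)
S = a + dx * np.concatenate(([0], np.cumsum(X)))

lhs = np.sum(f(S[:-1]) * X * dx)                           # discrete Ito sum
ito_term = (dx / 2) * np.sum((f(S[1:]) - f(S[:-1])) * X)   # second-order correction
rhs = trapezoidal(f, a, S[-1], dx) - ito_term              # right side of (8)
print(lhs, rhs, abs(lhs - rhs))                            # difference ~ 1e-14
```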
Let us apply Lemma 1 with the Skorohod embedded walks $(B_m(t))$. Elementary calculus shows, cf. [9, Theorem 6], that when $g \in C^2(\mathbb{R})$ and we set $f = g'$ in (7) or in (8), then, as $m \to \infty$, the terms converge almost surely to the corresponding terms of the Stratonovich and Itô formulae, respectively. On the one hand, this gives a definition of the Itô integral and of Stratonovich sums, respectively, as almost sure limits of the stochastic sums
$$(f(W) \cdot W)^t_m := \sum_{r=1}^{\lfloor t 2^{2m} \rfloor} f\!\left( B_m((r-1) 2^{-2m}) \right) X_m(r)\, 2^{-m}. \qquad (10)$$
Here $(X_m(r))_{r=1}^{\infty}$ is an independent, $\pm 1$ symmetric coin tossing sequence, $2^{-m} X_m(r) = B_m(r 2^{-2m}) - B_m((r-1) 2^{-2m})$. This essentially means that in the sums approximating the Itô integral we apply a partition with the Skorohod stopping times $0 = s_m(0) < s_m(1) < \cdots < s_m(\lfloor t 2^{2m} \rfloor)$, since $B_m(r 2^{-2m}) = W(s_m(r))$. On the other hand, taking almost sure limits as $m \to \infty$, one immediately obtains the corresponding Itô formula
$$g(W(t)) - g(W(0)) = \int_0^t g'(W(s))\, dW(s) + \frac12 \int_0^t g''(W(s))\, ds \qquad (11)$$
and the Stratonovich formula $g(W(t)) - g(W(0)) = \int_0^t g'(W(s)) \circ dW(s)$.
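To see the Skorohod-partition sums at work, here is a rough numerical illustration with $g(x) = x^2$ and $f = g'$, so that Itô's formula (11) gives the exact benchmark $\int_0^t 2W\, dW = W(t)^2 - t$. The first-passage times are detected on a discrete simulation grid, so this only approximates the true embedding; all numerical choices below are ours.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate W on a fine grid and extract the Skorohod embedded walk B_m:
# record a new value each time W first moves 2**-m away from the last one.
T_hor, dt, m = 1.0, 1e-6, 5
dx = 2.0 ** -m
W = np.concatenate([[0.0],
                    np.cumsum(rng.normal(0.0, np.sqrt(dt), int(T_hor / dt)))])

emb, lvl = [0.0], 0.0
for w in W:
    while abs(w - lvl) >= dx:          # crude grid-based first-passage detection
        lvl += dx * np.sign(w - lvl)
        emb.append(lvl)
emb = np.array(emb)

X = np.sign(np.diff(emb))              # the embedded walk's +-1 steps
stoch_sum = np.sum(2.0 * emb[:-1] * X * dx)   # the sum (10) with f(x) = 2x
print(stoch_sum, W[-1] ** 2 - T_hor)   # close: the sum approximates W(T)^2 - T
```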
In the same way, we now show the almost sure convergence of the stochastic sums $(f(W) \cdot W)^t_m$ when $g$ is the difference of two convex functions and $f = g'_-$, its left derivative. Then we immediately obtain the Itô-Tanaka formula as well, with the help of (9).

Theorem 1. Let $g$ be the difference of two convex functions, $g'_-$ be its left derivative, and let $\mu$ be the signed measure defined by the change of $g'_-$ when restricted to compacts (the second derivative of $g$ in the generalized function sense). Then for arbitrary $K > 0$, almost surely as $m \to \infty$,
$$\sup_{0 \le t \le K} \left| (g'_-(W) \cdot W)^t_m - \int_0^t g'_-(W(s))\, dW(s) \right| \longrightarrow 0, \qquad (12)$$
and for any $t \ge 0$,
$$g(W(t)) - g(W(0)) = \int_0^t g'_-(W(s))\, dW(s) + \frac12 \int_{-\infty}^{\infty} L(t, x)\, \mu(dx). \qquad (13)$$

Proof. The basic idea of the proof is that one may substitute the Skorohod embedded walks $B_m(r 2^{-2m}) = W(s_m(r))$, $B_m(0) = W(0) = a \in \mathbb{R}$, into (9) to obtain, with $t_m := 2^{-2m} \lfloor t 2^{2m} \rfloor$,
$$(g'_-(W) \cdot W)^t_m = T_a^{B_m(t_m)}(g'_-) - \frac{2^{-m}}{2} \sum_{x} \left( h_{2^{-m}}(x)\, L^{+}_m(t, x) + h_{-2^{-m}}(x)\, L^{-}_m(t, x) \right), \qquad (14)$$
where $h$ is as in Lemma 1 with $f = g'_-$ and $\Delta x = 2^{-m}$, and the sum runs over $x \in a + 2^{-m}\mathbb{Z}$. Then it is enough to deal with the almost sure uniform convergence on compacts of the terms on the right side of (14); that will imply the same convergence of the stochastic sum on the left side and will prove the Itô-Tanaka formula (13) as well.

First we show (13) for measures $\mu$ supported in a compact interval $[-M, M]$. Because of the linearity of (13) in $g$, it can be supposed that $g$ is convex. Then $g'_-$ is non-decreasing, left-continuous, and $\mu$, determined by $\mu([x, y)) := g'_-(y) - g'_-(x)$ for $x < y$, is a regular, finite, positive Borel measure on $\mathbb{R}$, with total mass $\mu(\mathbb{R}) = g'_+(M) - g'_-(-M)$. We are going to prove (13) pathwise. For this, let $\Omega_0$, $P\{\Omega_0\} = 1$, denote a subset of the sample space $\Omega$ on which, as $m \to \infty$, $B_m(t)$ uniformly converges to $W(t)$ on $[0, K]$ and $L^{\pm}_m(t, x)$ uniformly converges to $L^{\pm}(t, x) = \frac12 L(t, x)$ on $[0, K] \times \mathbb{R}$, cf. Theorem B and Corollary 1 above. We fix an $\omega \in \Omega_0$. Then, obviously, $W(t)$ and $L^{\pm}(t, x)$ will have continuous paths.

Consider the first term on the right side of (14). We want to show that it uniformly converges to $g(W(t)) - g(W(0))$ for $t \in [0, K]$. With the notation above we can write
$$\left| T_a^{B_m(t_m)}(g'_-) - \left( g(W(t)) - g(a) \right) \right| \le \left| T_a^{B_m(t_m)}(g'_-) - \int_a^{B_m(t_m)} g'_-(x)\, dx \right| + \left| g(B_m(t_m)) - g(W(t)) \right|. \qquad (15)$$
The first term on the right side of (15) goes to zero as $m \to \infty$, because $g'_-$ is non-decreasing, hence Riemann integrable on any compact interval $[a, B_m(t_m)]$, and the trapezoidal sum is a Riemann sum on the same interval, so their difference can be estimated from above by the difference of the upper sum and the lower sum of $g'_-$ on $[a, B_m(t_m)]$, which in turn is dominated by $2^{-m}(g'_+(M) - g'_-(-M))$ for any $t \in [0, K]$. The second term on the right side of (15) converges to 0 as well, since $B_m(t_m)$ converges uniformly to $W(t)$ for $t \in [0, K]$, and $g'_-$ is bounded on $\mathbb{R}$. Since the slope of $g$ at any point is bounded above by $g'_+(M)$ and below by $g'_-(-M)$, we see that $g$ is Lipschitz, so absolutely continuous. Hence
$$g(B_m(t_m)) - g(a) = \int_a^{B_m(t_m)} g'_-(x)\, dx.$$
Thus we have proved that
$$\sup_{0 \le t \le K} \left| T_a^{B_m(t_m)}(g'_-) - \left( g(W(t)) - g(W(0)) \right) \right| \longrightarrow 0 \qquad (16)$$
almost surely as $m \to \infty$.

Now we turn to the convergence of the second term on the right side of (14). Fixing an $\omega \in \Omega_0$, we want to show that, as $m \to \infty$,
$$\sup_{0 \le t \le K} \left| \sum_{x} \mu([x, x + 2^{-m}))\, L^{+}_m(t, x) - \frac12 \int_{-\infty}^{\infty} L(t, x)\, \mu(dx) \right| \longrightarrow 0, \qquad (17)$$
noting that $2^{-m} h_{2^{-m}}(x) = \mu([x, x + 2^{-m}))$. (The other half, containing the $L^{-}_m(t, x)$ terms, is similar.) Now, we have that
$$\left| \sum_{x} \mu([x, x + 2^{-m}))\, L^{+}_m(t, x) - \frac12 \int L\, d\mu \right| \le \sum_{x} \mu([x, x + 2^{-m})) \left| L^{+}_m(t, x) - \tfrac12 L(t, x) \right| + \left| \frac12 \sum_{x} \mu([x, x + 2^{-m}))\, L(t, x) - \frac12 \int L\, d\mu \right|. \qquad (18)$$
The first term on the right side of (18) goes to zero, since, by Corollary 1, $L^{+}_m(t, x)$ uniformly converges to $\frac12 L(t, x)$ on $[0, K] \times \mathbb{R}$, and the total mass $\mu(\mathbb{R})$ is finite. The second term on the right side of (18) also goes to zero, since it is the difference of a Riemann-Stieltjes integral and its approximating sum. Recall that $L(t, x)$ is continuous, non-decreasing in $t$ for any $x$, $L(K, x)$ has compact support, so is bounded as $x$ varies over $\mathbb{R}$, and the total mass $\mu(\mathbb{R}) < \infty$. Thus by (16) and (17), both statements (12) and (13) of the theorem follow for measures $\mu$ supported in a compact interval.

For a general $g$, take an integer $M \ge 1$, let $T_M := \inf\{ t \ge 0 : |W(t)| \ge M \}$, and let $g_M$ coincide with $g$ on $[-M, M]$ and be continued linearly outside this interval. Define $\mu_M$ as the measure determined by the change of $(g_M)'_-$, which is supported in $[-M, M]$ with finite mass. Clearly, $g(W(t)) = g_M(W(t))$ if $0 \le t \le T_M$ and, since $L(T_M, x) = 0$ for all $x$ with $|x| > M$,
$$\int_{-\infty}^{\infty} L(t, x)\, \mu(dx) = \int_{-\infty}^{\infty} L(t, x)\, \mu_M(dx) \qquad (0 \le t \le T_M).$$
The previous argument proves (12) for the interval $[0, K \wedge T_M]$, and also (13) if $0 \le t \le T_M$. Since the stopping times $(T_M)_{M=1}^{\infty}$ increase to $\infty$ a.s., this proves the theorem. □

Comparing Itô's formula (11) and the Itô-Tanaka formula (13) when $g$ is $C^2$ and convex, we obtain that
$$\int_0^t g''(W(s))\, ds = \int_{-\infty}^{\infty} g''(x)\, L(t, x)\, dx.$$
Since this holds for any continuous and positive function $g''$, by a monotone class argument we obtain the well-known occupation time formula for any bounded, Borel measurable function $h$:
$$\int_0^t h(W(s))\, ds = \int_{-\infty}^{\infty} h(x)\, L(t, x)\, dx. \qquad (19)$$
As a special case, one gets $\int_{-\infty}^{\infty} L(t, x)\, dx = t$. Also, from (13) with $g(x) = |x - a|$, one obtains Tanaka's formula
$$|W(t) - a| - |W(0) - a| = \int_0^t \operatorname{sgn}(W(s) - a)\, dW(s) + L(t, a),$$
where $\operatorname{sgn}$ denotes the left derivative of $|\cdot|$, i.e. $\operatorname{sgn}(x) = 1$ for $x > 0$ and $-1$ for $x \le 0$.
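Both consequences have exact discrete counterparts on a single walk: summing the discrete local times over all lattice points recovers the elapsed time, and the discrete Itô-Tanaka formula with $g(x) = |x|$ is an exact identity. A short NumPy check, with bookkeeping of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
m, t = 6, 2.0
dx = 2.0 ** -m
n = int(round(t / dx ** 2))                    # time step is dx^2 = 2**-2m
X = rng.choice([-1, 1], size=n)
P = np.concatenate([[0], np.cumsum(X)])        # integer lattice path; S = dx * P
lev = P[:-1]                                   # level each step starts from

# discrete occupation identity: sum_x (l+ + l-)(x) * dx = n * dx^2 = t, exactly
vals, cnt = np.unique(lev, return_counts=True)
L2 = dx * cnt                                  # two-sided local times at x = dx*vals
print(np.sum(L2 * dx), t)                      # both equal t

# discrete Tanaka formula at a = 0 (exact): |S_n| - |S_0| equals the
# stochastic sum with f = left derivative of |x|, plus 2 * l+_n(0)
f = np.where(lev > 0, 1.0, -1.0)               # f(x) = 1 for x > 0, else -1
up0 = np.count_nonzero((lev == 0) & (X > 0))   # up-steps taken from level 0
print(dx * abs(P[-1]), dx * np.sum(f * X) + 2 * dx * up0)   # identical
```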
Integration of predictable processes

In this section our aim is to show that when the integrand $(Y(t))_{t \ge 0}$ is a predictable process satisfying some integrability condition, our previous approach, approximating the stochastic integral by sums of values of the integrand at the Skorohod stopping times $s_m(k)$, multiplied by $\pm 2^{-m}$, where the signs are obtained by a fair coin tossing sequence, still works. In other words, the standard non-random partitions of a time interval $[0, t]$ may be replaced by our random partitions $0 = s_m(0) < s_m(1) < \cdots < s_m(\lfloor t 2^{2m} \rfloor)$ coming from the Skorohod embedding of random walks into the Wiener process. This is not surprising because, as Lemma A shows, such Skorohod partitions are asymptotically equivalent to the partitions $(k 2^{-2m})_{k=0}^{\lfloor t 2^{2m} \rfloor}$ as $m \to \infty$. This approach will be described rather briefly, since in this case the details are essentially the same as in the standard approach to stochastic integration found in textbooks.

Let $(\mathcal{F}_t)_{t \ge 0}$ denote the natural filtration defined by our BM $W$. In this paper, stopping times and adapted processes are understood with respect to this filtration. By definition, we say that $Y$ is a simple, adapted process if there exist stopping times $0 \le \tau_0 \le \tau_1 \le \cdots$ increasing to $\infty$ a.s. and random variables $\xi_0, \xi_1, \dots$ such that $\xi_j$ is $\mathcal{F}_{\tau_j}$-measurable, $E(\xi_j^2) < \infty$ for all $j$, and
$$Y(t) = \sum_{j=0}^{\infty} \xi_j\, 1_{(\tau_j, \tau_{j+1}]}(t) \qquad (t \ge 0). \qquad (20)$$
In the sequel, the only case we will consider is when, with a given $m \ge 0$, the stopping time sequence is the one given by the Skorohod embedding (3). Let $Y$ be a left-continuous, adapted process and, with a fixed $b > 0$, take the truncated process
$$Y^b(t) := (Y(t) \wedge b) \vee (-b). \qquad (21)$$
Further, with $m \ge 0$ fixed, take
$$Y^b_m(t) := \sum_{r=1}^{\infty} Y^b(s_m(r-1))\, 1_{(s_m(r-1),\, s_m(r)]}(t),$$
with the Skorohod stopping times (3). Then $Y^b_m$ is a simple, adapted process. Then, similarly as in (10), a stochastic sum of $Y^b_m$ is defined as
$$(Y^b \cdot W)^t_m := \sum_{r=1}^{\lfloor t 2^{2m} \rfloor} Y^b(s_m(r-1))\, X_m(r)\, 2^{-m}, \qquad (22)$$
where $(X_m(r))_{r=1}^{\infty}$ is a sequence of independent, $P\{X_m(r) = \pm 1\} = \frac12$ random variables. Observe that (22) without the $X_m(r) = \pm 1$ factors would be asymptotically an ordinary Riemann sum of the integral $\int_0^t Y^b(s)\, ds$. (The differences $s_m(r) - s_m(r-1)$ are asymptotically equal to $2^{-2m}$ by Lemma A.) The random $\pm 1$ factors multiplying the terms of the sum transform it into an approximation of a stochastic integral.

It is clear that the usual properties of stochastic integrals hold for such stochastic sums. Namely, one has linearity in the integrand and, since $X_m(r)$ is independent of $\mathcal{F}_{s_m(r-1)}$, the terms of (22) are orthogonal martingale differences, which gives the Itô-type isometry
$$E\left( \left( (Y^b \cdot W)^t_m \right)^2 \right) = 2^{-2m} \sum_{r=1}^{\lfloor t 2^{2m} \rfloor} E\left( \left( Y^b(s_m(r-1)) \right)^2 \right).$$

Lemma 2. Let $K > 0$ be fixed and let $Y$ be a left-continuous, adapted process such that $\|Y\|_K^2 := E \int_0^K Y^2(s)\, ds < \infty$. Then for each $b > 0$ and $t \in [0, K]$ the stochastic sums $(Y^b \cdot W)^t_m$ converge in $L^2(\Omega)$ as $m \to \infty$ to a limit $J^b(t)$; moreover, $\|Y - Y^b\|_K \to 0$ as $b \to \infty$.

Proof. By the left-continuity of $Y$ and the isometry above, the sums form a Cauchy sequence in $L^2(\Omega)$, while $\|Y - Y^b\|_K \to 0$ by bounded convergence. These prove the lemma. □

So let $Y$ be a left-continuous, adapted process such that $\|Y\|_K < \infty$ and take $b \to \infty$: since $Y^b$ converges to $Y$ in $L^2([0, K] \times \Omega)$, by isometry, $J^b(t)$ tends to a random variable $J(t)$ in $L^2(\Omega)$, which is defined as the stochastic integral $\int_0^t Y(s)\, dW(s)$ for $t \in [0, K]$. From this point, the discussion of the usual properties of stochastic integrals and extensions to more general integrands largely agrees with the standard textbook case, and is therefore omitted.

Extension to continuous local martingales

The extension of the methods of the previous sections to continuous local martingales is rather straightforward. By a useful theorem of Dambis and Dubins-Schwarz (DDS), a continuous local martingale $M$ can be transformed into a Brownian motion $W$ by a change of time, when time is measured by the quadratic variation $[M]_t$. Let a right-continuous and complete filtration $(\mathcal{F}_t)_{t \ge 0}$ be given in $(\Omega, \mathcal{F}, P)$ and let the continuous local martingale $M$ be adapted to it. Define $W$ as the DDS BM of $M$, so that $M(t) = W([M]_t)$; let $(s_m(k))$ be its Skorohod stopping times as in (3), with the embedded walks $B_m$, and put
$$\tau_m(j) := \inf\{ t \ge 0 : [M]_t > s_m(j) \}, \qquad N_m(t) := 2^{-2m}\, \#\{ j \ge 1 : \tau_m(j) \le t \}.$$
Thus $N_m(t)$ almost surely uniformly converges on compacts to $[M]_t$, and $B_m(N_m(t))$ similarly converges to $M(t)$. Combining the DDS time change and the pathwise approximations of local time described in Section 2, it is not too hard to generalize the local time results of [12] to continuous local martingales $M$. Let the $m$th approximate local time of $M$ at a point $x \in 2^{-m}\mathbb{Z}$ up to time $t \in 2^{-2m}\mathbb{Z}_+$ be defined by
$$L^{M,\pm}_m(t, x) = 2^{-m}\, \#\{ s : B_m(s) = x, \ B_m(s + 2^{-2m}) = x \pm 2^{-m} \},$$
where $s \in 2^{-2m}\mathbb{Z} \cap [0, [M]_t)$ and $B_m(j 2^{-2m}) = M(\tau_m(j))$. Then $L^{M,\pm}_m(t, x)$ can be extended to any $t \in \mathbb{R}_+$ and $x \in \mathbb{R}$ by linear interpolation, as a continuous two-parameter process. The major difference compared to Corollary 1 is that the time interval
2009-07-06T19:31:38.000Z
2007-12-23T00:00:00.000
{ "year": 2007, "sha1": "a68b82f1d794fc4b1b8da032cc6cba7b63315b8f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0712.3908", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a68b82f1d794fc4b1b8da032cc6cba7b63315b8f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
103452185
pes2o/s2orc
v3-fos-license
pH sensitive mesoporous nanohybrids with charge-reversal properties for anticancer drug delivery

The surface/interface state of nanomaterials plays a key role in their biomedical applications. Nanotechnology offers a versatile means to develop nanoparticles with well-defined architecture. In this study, mesoporous silica nanoparticles were first loaded with an anticancer drug (doxorubicin, DOX) and then decorated with a cationic oligomer (low molecular weight polyethyleneimine, LPEI) to acquire an increased surface charge. The resulting particles were further self-assembled with negatively charged bovine serum albumin (BSA) as natural protein nanoblocks to offer surface charge tunability. The resulting mesoporous nanohybrids (MDPB) acquired charge-reversal ability: they presented a negative charge under biological conditions (beneficial to biocompatibility), while displaying a positively charged state under acidic conditions mimicking the tumor extracellular microenvironment (favoring cell uptake and tumor penetration). Furthermore, the nanohybrids not only allowed for effective loading of the DOX drug, but also accelerated its release under acidic tumor microenvironments in a sustainable way. In vitro biological study indicated that the DOX-free nanoparticles were biocompatible, while MDPB exerted good cytotoxicity against cancer cells, suggesting their promise for therapeutic delivery applications.

Introduction

Facing the increasing threat of cancer, how to overcome multidrug resistance and improve therapeutic bioactivity is still a challenging topic in the field of anticancer research. [1][2][3][4][5] Even though small anticancer drugs (e.g., doxorubicin (DOX) and paclitaxel (PTX)) can be encapsulated into some liposome-based nanoformulations and employed for cancer treatment at a clinical and/or trial clinical level, [3][4][5] success has never been achieved because the current nanocarriers have low tumor penetration ability as well as passive release properties. [3][4][5] Therefore, it is important to develop a kind of nanocarrier that can carry drug(s), penetrate through a solid tumor, and smartly deliver the drug(s) to kill cancer cells. 6 Normally, negatively charged nanoparticles have better biocompatibility than positively charged ones. 7,8 It is also known that tumors assume a more acidic extracellular environment (pH 6.5) compared to normal biological conditions. 11,12 For nanomedicines used in intravenous injection applications, it is better that they present a negative charge during blood circulation (to decrease protein adsorption in plasma as well as blood cell trapping), while being transformed into positively charged ones in the acidic extracellular environment (pH 6.5) to enhance their interactions with tumor cells, whose membranes are negatively charged. 9,10 Nanoparticles with such pH sensitivity are called charge-reversal nanosystems. 11,12 To achieve this purpose, pH-sensitive chemical bonds (e.g., β-carboxylic acid) have been grafted onto some polymers with positive charges to transform them into temporarily negative ones, which can be triggered back into a positive state upon arrival at the tumor site to enhance their interactions with cancer cells. 13 However, this kind of modification involves a series of chemical reactions, which greatly complicates the fabrication process and increases the production cost. 13
Herein, we first prepared mesoporous silica nanoparticles (MSN), which have been widely used in therapeutic delivery studies because of their biocompatibility, biodegradability and controllable structure. 14 After that, MSN with mesoporous structure were used to encapsulate doxorubicin (DOX, as a model anticancer drug). For tuning their surface charge, cationic hyperbranched polyethylenimine with low molecular weight (LPEI), which displays less cytotoxicity than high molecular weight PEI, was assembled onto MSN/DOX (MD) to acquire the positive charges of LPEI, whose protonation effect may offer pH sensitivity in drug release. 15,16 Then, the LPEI-modified MSN nanosystems (MDP) were further decorated with a natural protein (bovine serum albumin, BSA) with negative charges for further adjustment of the nanoparticle surface to enable the charge-reversal property. As a main component protein of blood plasma, BSA covering the nanocarriers may protect them from trapping by blood cells. 17 Furthermore, BSA has a nanosized architecture (∼7 nm), which may act as temporary nanoblocks to block DOX leakage, while its biodegradability may enable sustainable therapeutic delivery. 17,19,20 Also, the process for preparation of the mesoporous nanoparticles is very environmentally friendly and does not involve any organic solvent (Scheme 1). The results indicated that MDPB nanoparticles effectively encapsulated DOX, which could be released in an acid-accelerated, sustainable way under acidic conditions mimicking both extracellular and intracellular tumor microenvironments. Furthermore, the nanoparticles themselves were biocompatible, and delivered DOX to cancer cells to exert good cytotoxicity against them.

Results and discussion

Synthesis and characterization of charge-reversal mesoporous nanohybrids

A sol-gel chemistry approach was used to synthesize mesoporous silica nanoparticles (MSN) through an emulsion method. After that, cationic doxorubicin was mixed with MSN to obtain DOX-loaded samples (MD), which were assembled with PEI of low molecular weight (LPEI) to afford MDP nanoparticles, followed by decoration with BSA to obtain MDPB nanohybrids. LPEI introduction can endow the nanoparticles with positive charges, which can be balanced via assembly with negative BSA to obtain samples with the charge-reversal property. The advantage of using LPEI, instead of PEI with high molecular weight, is that the former is more biocompatible than the latter. 21,22 Through optimization, it was found that decoration of MD with 0.2 mg mL−1 LPEI and 0.1 mg mL−1 BSA (this sample was named MDPB_0.1) can endow the nanoparticles with the charge-reversal property upon pH variation from normal conditions (pH 7.4) to the acidic tumor extracellular microenvironment (pH 6.5) (Fig. 1). Generally, BSA decoration did not affect the drug loading capacity of the formed nanohybrids: the encapsulation efficiency of the nanohybrids at all studied BSA contents was maintained at ∼60% (Fig. 2). For morphological analysis, the samples were imaged via transmission electron microscopy. As can be seen from Fig. 3, all the samples assumed a spherical shape with diameters around 100 nm. MD presented an obvious mesoporous structure, while surface modification resulted in a blurring of the particle surfaces, especially for the MDPB nanohybrids, suggesting successful coating of LPEI and BSA onto the surface of the MSN nanoparticles.
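The screening described above amounts to finding the LPEI/BSA ratio whose zeta potential changes sign between pH 7.4 and pH 6.5. A minimal sketch of that selection logic, with hypothetical zeta-potential readings (the measured values are in Fig. 1):

```python
# Hypothetical zeta-potential readings (mV): mg/mL BSA -> (at pH 7.4, at pH 6.5)
zeta = {
    0.05: (+8.2, +14.1),   # too little BSA: positive at both pH values
    0.10: (-6.5, +9.3),    # charge-reversal window (cf. MDPB_0.1)
    0.20: (-15.4, -4.2),   # too much BSA: negative at both pH values
}

def charge_reversal_doses(readings):
    """BSA doses that are negative at pH 7.4 but flip positive at pH 6.5."""
    return [dose for dose, (z74, z65) in readings.items() if z74 < 0.0 < z65]

print(charge_reversal_doses(zeta))   # -> [0.1]
```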
In vitro drug release study

As mentioned above, it is important to improve the controllability of nanomedicines to increase their therapeutic bioactivity. [23][24][25] Even though MSN nanoparticles have a good porous structure, the openness of their pores is not beneficial to achieving sustainability in drug release. Our idea in this study was to introduce a temporary protective layer on their surface by assembling a positive oligomer, polyethylenimine (LPEI), onto the surface, followed by decoration with bovine serum albumin (BSA), which has a nanosized architecture of ∼7 nm. 17 The existence of BSA in the outer layer may act as temporary nanoblocks to improve the controllability of DOX release from the nanoparticles, thus reducing its burst release. To verify our hypothesis, the release behaviors of unmodified DOX-loaded MSN (MD) and of LPEI-coated MD before (MDP) and after BSA (MDPB) treatment were studied in PBS solution at pH 7.4. As shown in Fig. 4a, a burst release (8 h, ∼65%) occurred for the MD systems. Although the physical coating of LPEI onto MSN decreased the DOX release to some degree, the release rate was still quite high (8 h, 54%). In comparison, MDPB presented a limited DOX release (43%) up to 8 h, indicating that the BSA coating on the LPEI layer successfully sustained the release of the encapsulated DOX from the nanoparticles.

Scheme 1: Schematic representation of how to prepare MDPB nanohybrids via decoration of doxorubicin (DOX)-loaded mesoporous silica nanoparticles (MSN) with low molecular weight PEI (LPEI), followed by introduction of bovine serum albumin (BSA) through a self-assembly approach. The charge-reversal property can be achieved by adjustment of the ratio of cationic LPEI and anionic BSA, which can improve tumor penetration, cancer cell uptake and acid-accelerated drug delivery targetability, resulting in good cytotoxicity against cancer cells.

Generally, pH responsiveness is a useful stimulus for achieving controllable anticancer drug release, since various solid tumors present an acidic extracellular microenvironment (pH 6.5) and intracellular compartments (e.g., endo/lysosomal compartments) display an even more acidic state (pH 5.0). 26,27 In order to check their pH sensitivity, the DOX release behaviors of the nanoparticles under both physiological (pH 7.4) and acidic (pH 6.5 and 5.0) conditions were investigated. As can be seen from Fig. 4b, the decrease in pH value significantly increased the release of DOX from MDPB. For instance, after 8 h of incubation, the cumulative releases in PBS solution at pH 7.4, 6.5 and 5.0 were about 43%, 55% and 63%, respectively. This is an important indicator for targetable drug delivery, because during the blood circulation period most DOX can be protected from release from the nanoparticles. However, after arrival around the tumor site and/or uptake by cancer cells presenting acidic microenvironments, DOX can be released in an accelerated manner to increase its anticancer bioactivity and decrease its side effects. [28][29][30] PEI tends to undergo protonation under acidic conditions (low pH values), which may lead to swelling of the PEI shell of the nanoparticles, resulting in easier diffusion of the protonated, more soluble DOX out of the particles and thus an accelerated release.
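The cumulative-release percentages quoted above come from repeated sampling of a small dialysis sink (the protocol appears in the experimental section: a 9 mL PBS sink, with 100 µL aliquots withdrawn and replaced at each time point). Below is a hedged sketch of the standard mass-balance bookkeeping for such data; the correction for previously removed aliquots is a common convention, not a formula stated by the authors, and the readings are illustrative.

```python
V_SINK, V_ALIQUOT, W_TOTAL = 9.0, 0.1, 100.0   # mL, mL, and ug of loaded DOX

def cumulative_release(concs_ug_per_ml):
    """Percent DOX released at each sampling time, adding back the drug that
    left the sink in earlier aliquots (illustrative, not study data)."""
    removed, percents = 0.0, []
    for c in concs_ug_per_ml:
        w_t = c * V_SINK + removed          # drug now in sink + drug sampled out
        percents.append(100.0 * w_t / W_TOTAL)
        removed += c * V_ALIQUOT            # this aliquot leaves the sink
    return percents

print(cumulative_release([1.2, 2.4, 3.5, 4.3, 4.8]))   # rises toward ~43%
```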
The pH sensitivity of the MDPB carriers may be used to improve their therapeutic targetability to tumors by limiting drug leakage during blood circulation while allowing for timely delivery of the drug upon arrival at the tumor site and/or uptake by cancer cells. 31

In vitro biological study

For biomedical applications, the native nanoparticles should be biocompatible, while the drug-loaded ones should display good therapeutic bioactivity. 32,33 Therefore, the cytotoxicity of the DOX-free nanoparticles was analyzed via the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay by culturing them with A549 cells (a carcinomic human alveolar basal epithelial cell line). The results indicate that cells treated with MPB nanohybrids displayed a high cell viability (∼85%) even after a cell culture time of 48 h and at nanoparticle concentrations up to 7.50 mg mL−1 (Fig. 5). This means that the nanoparticles are biocompatible. It can be seen from Fig. 6 that MDPB showed a dosage-dependent cytotoxicity towards A549 cells, Hep-G2 cells and C2C12 cells, with IC50 values of 2.11, 2.03 and 6.90 µM, respectively, revealing that the cytotoxic effect came only from the drug loaded within the nanohybrids, and that the cytotoxicity was selective for cancer cells compared to normal cells (Fig. 6). The high cytotoxicity may be associated with their charge-reversal ability, which is beneficial to an enhanced cell uptake capacity (Fig. 7). The biodegradability and good cytocompatibility of the MDPB nanoparticles, as well as their pH-sensitive drug release controllability, make them promising for anticancer therapeutic delivery applications.

Preparation and characterization of MDPB nanoparticles

Mesoporous silica nanoparticles (MSN) were prepared according to an emulsion method. 18 Briefly, 1.5 mL TEOS was added to 20 mL aqueous solution containing 2.18 g CTAB and 0.08 g TEA under magnetic stirring at 95 °C, and the reaction was maintained under the same conditions for another 4 h. The precipitate was collected by centrifugation and washed three times with water/ethanol, followed by a 2 day reflux in an ethanol solution of hydrochloric acid (10% v/v) at 78 °C. The mixture was filtered and lyophilized to obtain the mesoporous silica nanoparticles (MSN). For drug loading, 1 mL DOX aqueous solution (2 mg mL−1) was mixed with 19 mL phosphate-buffered saline (PBS, pH 7.4) containing 10 mg MSN under 400 rpm stirring for 24 h, followed by three rounds of centrifugation/water washing (10 000 rpm, 10 min). The obtained DOX-loaded MSN were abbreviated as "MD". For surface modification, 10 mg MD samples in 9 mL distilled water were mixed with 1 mL aqueous solution of LPEI (0.2 mg mL−1) under magnetic stirring for 1 h. The mixture was then centrifuged/washed three times to obtain the MDP samples, which were resuspended in 9 mL water and treated with 1 mL aqueous solution containing different amounts of BSA under stirring at room temperature for 12 h. The solution then underwent centrifugation/washing to obtain the MDPB nanoparticles. The unloaded DOX in the supernatant was evaluated by DOX fluorescence analysis (λex = 480 nm, λem = 580 nm) using a microplate reader (SpectraMax M2, Molecular Devices, USA) to calculate the drug loading amount. The zeta potentials of the nanoparticles at different pH values in PBS were analysed with a Zetasizer instrument (Nano ZS, Malvern Instruments, UK) via the dynamic light scattering (DLS) technique.
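The ∼60% encapsulation efficiency reported earlier follows from this fluorescence quantification by simple mass balance. A minimal sketch; the conversion of fluorescence readings to concentrations via a calibration curve is assumed to happen separately, and the numbers are illustrative.

```python
def encapsulation_efficiency(dox_fed_ug, dox_in_supernatant_ug):
    """EE% = (fed - unloaded) / fed * 100, by mass balance over the washes."""
    return 100.0 * (dox_fed_ug - dox_in_supernatant_ug) / dox_fed_ug

# 1 mL of 2 mg/mL DOX fed = 2000 ug; if ~800 ug remains in the pooled
# supernatants (hypothetical reading), EE is ~60%, matching Fig. 2.
print(encapsulation_efficiency(2000.0, 800.0))   # -> 60.0
```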
The morphology of the nanoparticles was examined by transmission electron microscopy (TEM, JEOL JEM-2100, Japan) at an accelerating voltage of 120 kV. Before measurement, the samples were dispersed in ethanol (0.5 mg mL−1) under sonication. The aqueous suspensions of the samples were dropped onto a 400 mesh copper grid, followed by air-drying before analysis.

In vitro drug release study

For the drug release experiments, DOX-loaded nanoparticles in 1 mL water containing an equivalent DOX amount (100 µg) were introduced into a dialysis membrane (MWCO: 14 000 Da, Shanghai Yuan Ju Biological Technology Co. Ltd., Shanghai, China). Dialysis was then performed against 9 mL PBS solution at different pH values (7.4, 6.5 or 5.0) at a temperature of 37 °C. At different time intervals, an aliquot of the PBS solution (100 µL) was taken out for fluorescence analysis and replaced with 100 µL fresh PBS solution. The released DOX was quantified by measuring the DOX fluorescence (λex = 480 nm, λem = 580 nm) using a microplate reader (SpectraMax M2, Molecular Devices, USA). The cumulative release (C_r) of DOX against time was obtained according to the equation
$$C_r\,(\%) = \frac{W_t}{W_{tot}} \times 100\%,$$
where $W_t$ and $W_{tot}$ are the cumulative amount of drug released at time $t$ and the total drug contained in the nanohybrids used for drug release, respectively.

Evaluation of cell viability

A549 cancer cells (a carcinomic human alveolar basal epithelial cell line) and Hep-G2 cancer cells (a human liver cancer cell line), together with C2C12 cells (a mouse myoblast cell line) as a normal cell control, were incubated in flasks containing Dulbecco's modified Eagle medium (DMEM) and 10% fetal bovine serum (FBS) in a humidified atmosphere with 5% carbon dioxide in an incubator (Corning) at 37 °C. The cytotoxicity of the DOX-free or DOX-loaded nanoparticles was evaluated by examining the viability of A549 cells using the MTT assay. Briefly, cells were incubated in a 96-well plate at a density of 5000 cells per well. After 1 day, the culture DMEM solution was replaced with 200 µL fresh DMEM solutions of the DOX-free and DOX-loaded nanoparticles. Subsequently, cells were incubated for 48 h at 37 °C before the MTT assay. For the MTT assay, 30 µL MTT solution was added to each well. After further incubation for 4 h at 37 °C, 200 µL DMSO was added to each well to replace the culture medium and dissolve the insoluble formazan crystals. The absorbance at 492 nm was measured using a UV spectrophotometer. The relative cell viability was calculated as OD_test/OD_control × 100%. For cell uptake quantification, intracellular drug accumulation was investigated via flow cytometry assay. Briefly, A549 cells (10 000 cells per well in a 6-well plate) were incubated with free DOX or MDPB_0.1 in DMEM containing 3.0 µM equivalent DOX. After 24 h of incubation, cells were washed with PBS, trypsinized and collected, then re-suspended in PBS (0.5 mL) for the flow cytometry assay.

Conclusions

In summary, we developed an effective approach to fabricate a kind of stimuli-responsive mesoporous silica nanoparticle. An anticancer drug, doxorubicin (DOX), was first loaded into mesoporous silica nanoparticles (MSN), whose surface was then coated with a cationic oligomer (LPEI) as well as a natural, negatively charged protein (BSA). The variation of the LPEI and BSA contents allowed for controllable adjustment of the surface charge to acquire the charge-reversal ability.
The resulting nanoparticles (MDPB) presented pH-sensitive DOX delivery under conditions mimicking intracellular compartments and the acidic tumor microenvironment, resulting in good cytotoxicity against cancer cells.

Conflicts of interest

There are no conflicts of interest to declare.
2019-04-09T13:06:11.258Z
2017-09-26T00:00:00.000
{ "year": 2017, "sha1": "5809236b632214910e2b3905237b902006b88bcc", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c7ra05912d", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "69d702d9103664fa390ddd13357f5b0300d395f8", "s2fieldsofstudy": [ "Medicine", "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
201640217
pes2o/s2orc
v3-fos-license
Mesoporous bioactive glass combined with graphene oxide scaffolds for bone repair

Recently there has been an increasing interest in bioactive factors with robust osteogenic ability and angiogenesis function to repair bone defects. However, previously tested factors have not achieved satisfactory results due to low loading doses and a short protein half-life. Finding a validated stable substitute for these growth factors and applying it to the construction of porous scaffolds with the dual function of osteogenesis and angiogenesis is therefore vital for bone tissue regeneration engineering. Graphene oxide (GO) has attracted increasing attention due to its good biocompatibility and its osteogenic and angiogenic functions. This study aims to design a scaffold composed of mesoporous bioactive glasses (MBG) and GO to investigate whether the composite porous scaffold promotes local angiogenesis and bone healing. Our in vitro studies demonstrate that the MBG-GO scaffolds have better cytocompatibility and higher osteogenic differentiation ability with rat bone marrow mesenchymal stem cells (rBMSCs) than the purely MBG scaffold. Moreover, MBG-GO scaffolds promote vascular ingrowth and, importantly, enhance bone repair at the defect site in a rat cranial defect model. The new bone was fully integrated not only with the periphery but also with the center of the scaffold. From these results, it is believed that the MBG-GO scaffolds possess excellent osteogenic-angiogenic properties, which make them appealing candidates for repairing bone defects. The novelty of this research is to provide a new material to treat bone defects in the clinic.

Introduction

In recent years, methods for bone tissue engineering with the help of bioactive materials have represented a promising strategy for bone repair after trauma, severe infection, tumor resection and congenital skeletal abnormalities [1], but bone repair remains a major challenge in orthopedic surgery. Conventional surgical treatment has plateaued because the gold-standard autologous bone transplantation causes unavoidable secondary damage to the donor site [2], so implantation of a bioactive bone graft is needed to bridge the gap. The bone regeneration process needs close temporal and spatial coordination of events involving bone cells, marrow stroma and associated vascular elements [3]. Neoangiogenesis, which is related to nutrient supply, is critical for bone repair [4]. New bone formation and vascularization are often limited to the periphery of the scaffolds due to the damaged blood vessels in the center of the defect [4,5]. Methods that aim to address this issue have been explored, including the use of an expensive recombinant pro-angiogenic vascular endothelial growth factor (VEGF) and bone morphogenetic protein (BMP) in combination with tissue engineering scaffolds [6][7][8]. These have strong properties of angiogenesis and osteogenesis, respectively. However, these approaches may prove problematic due to the high doses of proteins required, a short protein half-life and the inability to easily sustain biological activity [9,10]. Finding a validated stable substitute for these growth factors and applying it to the construction of porous scaffolds with the dual function of osteogenesis and angiogenesis is a goal for bone tissue regeneration engineering. Graphene oxide (GO), due to its high strength, large surface area and good cytocompatibility, has been widely investigated for various biomedical applications [11][12][13][14][15].
GO's carboxyl, hydroxyl and epoxy groups promote interfacial interaction with polymeric matrices and ceramics, leading to improved mechanical strength [16][17][18]. GO is also capable of encouraging osteogenic differentiation and hydroxyapatite mineralization, thus increasing calcium fixation [19][20][21][22]. Due to these properties, GO has been incorporated into several matrices aimed at bone regeneration, such as GO/TCP and GO/PLGA [23][24][25][26]. Apart from the excellent biocompatibility, physical and osteogenic properties of GO, recent studies have shown that graphene oxide can promote angiogenesis [27,28]. Angiogenesis is a basic process in bone tissue regeneration [4,5], and GO can induce angiogenesis and contribute to nutrient formation and transportation in bone regeneration [29]. Another important aspect of bone tissue engineering is the three-dimensional scaffold, which provides a template for seeded cells to stimulate cell proliferation and differentiation, and also an interconnected pore structure to allow nutrients to penetrate into the scaffold. Mesoporous bioactive glasses (MBG) have attracted increasing interest in bone tissue engineering in the last several years [30]. MBG scaffolds are similar to the porous structure of subchondral bone because of their highly interconnected large pores (300-500 μm) and ordered nanopore structure (2-50 nm) [30]. Consequently, MBG scaffolds promote greatly enhanced attachment, spreading and proliferation of cells, resulting in high bioactivity and degradation properties which benefit from the improved nanopore volume and surface area [31]. However, the clinical translation of these scaffolds is impeded by weak osteogenic inducible activity for the differentiation of osteogenesis-related precursor cells. As a solution, composite MBG scaffolds were developed to overcome the weak osteogenesis and angiogenesis by adding bioactive factors. It was difficult to achieve these two biological functions simultaneously using only one bioactive growth factor due to the differing mechanisms. Recent research has reported that graphene improved cytocompatibility and significantly enhanced the hardness and Young's modulus of BGs [32]. However, the use of a low-dimensional material such as GO to enhance the performance of mesoporous bioglass for repairing bone defects has not been reported. In our study, we have explored a high temperature calcination technique to prepare MBG-based composite porous scaffolds using GO as an active reinforcer. We aim to determine whether these orderly porous scaffolds are suitable for cell adhesion and for promoting the proliferation and osteogenic differentiation of bone marrow mesenchymal stem cells, as well as bone regeneration following a critical defect located in the cranium of rats.

Materials

Graphene oxide with a thickness of 0.8-1.2 nm and a lateral size of 0.5-5 µm was purchased from Nanjing XFNANO Materials Technology Co., Ltd (Nanjing, China). P123 (EO20PO70EO20) and the other reagents were analytical grade and were used as received.

Porous MBG-GO scaffolds were prepared using a high temperature calcination technique. Briefly, GO was homogeneously mixed into an MBG solution; the ratios of GO : MBG were 0 : 4, 2.5 : 4 and 25 : 4 (mg/mL), and the resulting materials were named MBG, MBG-LGO and MBG-HGO, respectively. Each blend was mixed to form a homogeneous slurry and cast into a gelatin sponge with a diameter of 5 mm and a height of 2 mm. The gelatin sponge infiltrated with the MBG-GO slurry was then dried at 60 °C.
Finally, the dried scaffolds were calcined at 500 °C under nitrogen protection for 5 h to obtain the porous MBG-GO scaffolds. The scaffolds were sterilized using gamma irradiation.

Characterization of scaffolds

The morphologies of the synthesized scaffolds were characterized by scanning electron microscopy (SEM, JEOL JSM-6701F) and high resolution micro-CT (mCT-80, Scanco Medical AG, Bassersdorf, Switzerland). The phase composition of the scaffolds was characterized by Fourier transform infrared spectroscopy (FTIR, Bruker IFS66V spectrometer).

In vitro cellular evaluation

Rat bone marrow mesenchymal stem cells (rBMSCs) were obtained from four femurs of Sprague-Dawley (SD) rats. Briefly, marrow of the femoral midshaft was extracted and then suspended in minimum essential medium alpha (α-MEM) containing 10% fetal bovine serum (FBS, Hyclone, Logan, UT, USA), 100 U/mL penicillin and 100 mg/L streptomycin (Hyclone). The non-adherent cells were discarded, and when the adherent cells reached 80-90% confluence they were passaged and became passage one (P1) cells. Sub-cultured rBMSCs at passages 4-5 were adopted in all in vitro cellular experiments.

Cell attachment

To examine cellular morphology on the scaffolds, 200 µL of rBMSC suspension containing 5 × 10³ cells was directly seeded onto the tested MBG and MBG-GO scaffolds. After 24 h of incubation, prior to SEM observation, the scaffolds were removed from the culture wells, rinsed with phosphate buffered saline (PBS) and fixed with 2.5% glutaraldehyde in PBS for 1 h. They were washed with PBS, followed by sequential dehydration in graded ethanol (30% to 100%) and freeze drying. The cell-scaffold constructs were sputter-coated with gold, and the morphological characteristics of the attached cells were characterized using SEM.

Cell proliferation

The scaffold extracts were obtained following the International Standard Organization protocol (ISO 10993-5). Briefly, the scaffolds were immersed in minimum essential medium alpha (α-MEM) containing 10% fetal bovine serum (FBS, Hyclone, Logan, UT, USA), 100 U/mL penicillin and 100 mg/L streptomycin (Hyclone) in a cell incubator (humidified atmosphere with 5% CO2 at 37 °C) for 72 h (3 cm²/mL). The supernatant was filtered and refrigerated at 4 °C for use within 7 days. A suspension containing 2 × 10³ cells was seeded into each well of a 96-well plate and the cells were incubated under humidified culture conditions. A Cell Counting Kit-8 assay (Dojindo Molecular Technologies, Inc., Japan) was performed to evaluate cell proliferation with the MBG and MBG-GO scaffold extracts. Briefly, 90 µL of culture medium and 10 µL of CCK-8 solution were added into each well at days 1, 3, and 7 and incubated at 37 °C for another 4 h. At the end of the incubation period, 100 µL of solution was removed from each well and transferred into another 96-well plate. The light absorbance was measured at 450 nm with a microplate reader (Bio-Rad 680, USA). All the results are presented as the optical density (OD) values minus the absorbance of blank wells. The study was performed in triplicate.

ALP activity, staining and immunofluorescence evaluation

To evaluate the osteogenic effect, rBMSCs were cultured with the extracts for 7 and 14 days to study their osteogenic differentiation ability. At the different time points, the cells were lysed using 100 µL RIPA lysis buffer, and the cell supernatant was collected into a 96-well plate.
The alkaline phosphatase (ALP) activity in the supernatant was evaluated with the Alkaline Phosphatase Assay Kit (Beyotime, China). After co-incubation of the extracts and p-nitrophenol for 30 min at 37 °C, the ALP activity was determined at a wavelength of 405 nm. Finally, the ALP levels were normalized to the total protein content determined by the bicinchoninic acid (BCA) Protein Assay Kit (Beyotime, China). The study was performed in triplicate. ALP staining was performed to detect ALP expression in the rBMSCs. At each predetermined time point, cells were washed with PBS three times, fixed with 4% paraformaldehyde for 15 min, and incubated with the ALP staining kit (Beyotime) for 30 min at 20 °C according to the manufacturer's protocol. After washing with PBS, the stained cells were examined using an inverted microscope (Leica DMI6000B, Solms, Germany). We also detected ALP expression by immunofluorescence staining at day 7. Briefly, the cells were fixed with 4% paraformaldehyde for 15 min and incubated in 0.1% Triton for 30 min to permeabilize the cells. Non-specific protein-protein interactions of the cells were blocked with 1% BSA for 1 h. The cells were then incubated with the ALP antibody (1 : 200, Abcam 108337) overnight at 4 °C. The secondary antibody, donkey anti-rabbit Alexa Fluor 488 (1 : 200, Abcam), was applied for 1 h. Finally, the cytoskeleton and nuclei were stained with FITC-phalloidin and DAPI, respectively. A fluorescence microscope (Leica) was used to acquire representative images.

Alizarin red S staining and OCN immunofluorescence assay

rBMSCs were cultured as described above. At day 14, the cell layers were fixed with 4% paraformaldehyde for 15 min and washed with PBS three times, followed by the addition of 2% Alizarin red S solution (Cyagen) for 10 min. Cells were washed with PBS three times and the mineralized nodules were then examined using an inverted microscope (Leica DMI6000B, Solms, Germany). Finally, the staining was extracted by adding 10% cetylpyridinium chloride (Sigma-Aldrich Co., USA) for 15 min at 37 °C. The absorbance was recorded at a wavelength of 595 nm using a microplate reader (Bio-Rad 680, USA). We determined a later osteogenic differentiation protein, osteocalcin (OCN), by immunofluorescence staining at day 14. Briefly, the cells were fixed in 4% paraformaldehyde for 15 min, permeabilized with 0.1% Triton-X for 30 min, blocked using 1% BSA for 1 h and incubated with primary antibodies for OCN (1 : 200, Abcam 13420) overnight at 4 °C. Secondary donkey anti-mouse Alexa Fluor 488 (1 : 200, Abcam) was applied to bind the primary antibody for 1 h. Finally, the cytoskeleton and nuclei were stained red and blue with FITC-phalloidin and DAPI, and observed with a fluorescence microscope (Leica).

Osteogenesis/angiogenesis-related gene expression of rBMSCs

The expression levels of the osteogenesis/angiogenesis-related genes alkaline phosphatase (ALP), runt-related transcription factor 2 (RUNX-2), osteocalcin (OCN), collagen type 1 (COL1), vascular endothelial growth factor (VEGF) and hypoxia-inducible factor 1-alpha (HIF-1α) were measured using quantitative reverse transcription polymerase chain reaction (qRT-PCR) analysis. Typically, the cells were seeded at a density of 2 × 10⁴ cells per well, cultured for 7 days in a 6-well plate, then harvested using Trizol Reagent (Invitrogen, Carlsbad, CA, USA) to extract the RNA.
The obtained RNA was reverse-transcribed into complementary DNA (cDNA) using a RevertAid First Strand cDNA Synthesis Kit (Thermo Fisher Scientific, Waltham, MA, USA), and qRT-PCR analysis was performed on an ABI Prism 7300 Thermal Cycler (Applied Biosystems, Foster City, CA, USA) using SYBR Green detection reagent. GAPDH was employed as the housekeeping gene for internal normalization. All samples were assayed in triplicate and independent experiments were performed three times. The relative expression was calculated using the formula 2^(−ΔΔCt); a worked sketch of this calculation appears after the scaffold characterization results below. Primer information is given in Table 1.

Cranial bone defect model and artificial scaffold implantation

The SD rat cranial bone defect model was used to investigate the osteogenic capacity of the scaffolds in vivo. The experimental procedures, housing and animal care were approved and carried out in accordance with the regulations for animal experiments of the Animal Ethics Committee of Shanghai Sixth People's Hospital, affiliated to Shanghai Jiao Tong University. Eight-week-old male SD rats were obtained from Shanghai Xipuer-Bikai Laboratory Animal Co., Ltd (Shanghai, China) and housed in a standard SPF animal laboratory. After adaptation for one week, 250-300 g SD rats were used for establishing the critical cranial bone defect model. For the surgical procedure, as previously described [33], the cranium was exposed through a central incision after general anesthesia with an intraperitoneal injection of 0.5% pentobarbital sodium (9 mL/kg body weight). Two critical-sized calvarial defects with a diameter of 5 mm were created, one on each side of the cranium, using a dental trephine irrigated with ice-cold saline solution to avoid thermal injury. After the bone was removed, the drilled holes were rinsed with saline solution and the 5 mm × 2 mm scaffolds were then randomly implanted into the defects. Following the operation, the animals received intramuscular antibiotic injections, were allowed free access to food and water, and were monitored daily for potential complications. In total, 24 animals were divided into three groups as follows: (1) MBG scaffold group, n = 8; (2) MBG-LGO scaffold group, n = 8; and (3) MBG-HGO scaffold group, n = 8.

Microcomputed tomography (micro-CT)

To evaluate the in vivo bone ingrowth of the implanted porous scaffolds, crania were harvested and evaluated at 12 weeks using a high-resolution micro-CT (mCT-80, Scanco Medical AG, Bassersdorf, Switzerland) at an isotropic resolution of 18 μm. Scanco software was used for analysis. Three-dimensional grayscale images were generated using the CTVol program. As there are density differences between scaffolds and new bone, the CTAn software used in this study can differentiate between them. The percentage of new bone volume relative to tissue volume (BV/TV) and the bone mineral density (BMD) in the bone defect were both calculated.

Microfil perfusion in the bone defect

The vasculature of the SD rats was injected with 20 mL of silicone rubber compound (Microfil MV-122, Flow Tech, Carver, MA) after they were euthanized at 12 weeks post-operation [34]. Briefly, the animals were anesthetized and the rib cage was opened. The descending aorta was clamped and the auricula dextra was incised. Heparinized saline and Microfil were successively perfused into the left ventricle with an angiocatheter. Successful perfusion was defined as a yellow color change in the eyes and tongue.
Finally, the rats were stored at 4 °C overnight to ensure plasticization of the contrast medium, after which the crania were dissected and fixed in 4% paraformaldehyde for another 48 h. The fixed crania were decalcified in 10% ethylenediaminetetraacetic acid (EDTA; Sigma, US) for four weeks. Images were obtained with a high-resolution micro-CT imaging system at 9 µm resolution, and the number and volume of vessels within the 5 mm diameter region surrounding the bone defect were evaluated.

Sequential fluorescent labeling in the bone defect

At 2, 4, and 6 weeks after the operation, the SD rats were intraperitoneally injected with tetracycline (TE, 25 mg/kg body weight), alizarin red (AL, 30 mg/kg body weight) and calcein (CA, 20 mg/kg body weight), respectively. The mineralized tissue was observed using the trichromatic sequential fluorescent labeling method [3].

New bone formation and mineralization analysis

One part of each specimen was dehydrated in ascending concentrations of alcohol from 70% to 100% and embedded in polymethylmethacrylate (PMMA). After hardening, the sagittal sections of the specimens were cut into 150 μm thick slices using a microtome (Leica Microsystems Ltd., Wetzlar, Germany), followed by grinding and polishing to a final thickness of approximately 50 μm. The sections were first viewed using confocal laser scanning microscopy (CLSM) (Leica) to examine the fluorescent labeling. New bone formation and mineralization were quantified at six locations of the defect site. The mean value of the six measurements was calculated to obtain average values for each group. The sections were then stained with van Gieson's stain to identify new bone formation. Red indicated new bone formation, and black indicated residual materials [35]. The area of new bone formation was evaluated quantitatively in six randomly selected sections using Image Pro 5.0 (Media Cybernetics, Rockville, MD, USA).

Immunohistochemical (IHC) analysis

The other half of each cranium was decalcified for 4 weeks in 10% EDTA solution, dehydrated with gradient alcohols, embedded in paraffin, and then sectioned into 4 µm thick sections. Osteogenesis and angiogenesis were evaluated by IHC analysis for osteocalcin (OCN, Abcam 13420) and CD34 (Abcam 81289).

Statistical analysis

All the above data are presented as mean ± standard deviation (SD). Differences between groups were calculated by one-way analysis of variance (ANOVA) and Student-Newman-Keuls post-hoc tests. The statistical analysis was conducted using SPSS 17.0 software (SPSS Inc., Chicago, IL, USA). The difference was considered significant when P < 0.05.

Characterization of MBG and MBG-GO composite scaffolds

To verify that MBG was successfully bound with GO through chemical bonding, FTIR spectra were obtained (Fig. 2D). For both MBG and MBG-GO, broad bands appeared at around 3400 cm−1, associated with the O-H stretching vibration of adsorbed water molecules. After binding with MBG, the typical carbonyl group band of GO at 1730 cm−1 disappeared. A new band of the composites appeared at 1085 cm−1, attributed to Si-O stretching vibrations, consistent with chemical bonding between MBG and the GO sheets. To observe the pore size and porosity of the scaffolds, SEM and micro-CT scans of the materials were performed. The results show that the three types of scaffolds have similar pore sizes (300-500 µm) and porosities (63.2%-68.7%) (Fig. 2A-B, 2E). It was observed that the content of GO had no significant effect on the pore diameter of the scaffold, and even increased the porosity of the scaffold to some extent.
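Before moving to the cell results, the 2^(−ΔΔCt) normalization used for the qRT-PCR data above can be made concrete. A small sketch with made-up Ct values, taking GAPDH as the housekeeping gene and the MBG extract group as the reference condition, as in the methods:

```python
def rel_expression(ct_gene, ct_gapdh, ct_gene_ref, ct_gapdh_ref):
    """Relative expression by the 2^-ddCt method (illustrative values only)."""
    d_ct = ct_gene - ct_gapdh              # normalize to the housekeeping gene
    d_ct_ref = ct_gene_ref - ct_gapdh_ref  # same, for the reference group
    return 2.0 ** -(d_ct - d_ct_ref)

# Hypothetical ALP Ct values: MBG-HGO group vs. the MBG reference group
print(rel_expression(ct_gene=24.1, ct_gapdh=16.0,
                     ct_gene_ref=26.3, ct_gapdh_ref=16.1))  # ~4.3-fold up
```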
Biocompatibility of the scaffolds with rBMSCs

rBMSCs were cultured on the scaffolds to investigate the cytocompatibility of the porous MBG-GO scaffolds. The attachment and morphology of cells on the scaffolds were observed by SEM (Fig. 2B-C). After being cultured for 24 h, rBMSCs had attached to the surface of the pore struts in the scaffolds. Well-spread morphology was observed, and the pore walls of the MBG-GO groups were almost completely covered by the cytoskeleton. As determined by the CCK-8 proliferation assay (Fig. 2F), all MBG and MBG-GO scaffold extracts supported cell proliferation well. However, the proliferation rates of the MBG-GO scaffold groups were significantly higher than those of the MBG group at days 1, 3 and 7 (P < 0.05). The MBG-LGO group showed the best rate, but the difference was not statistically significant when compared to MBG-HGO (P > 0.05).

Osteogenic differentiation effect of rBMSCs cultured with extracts

Fig. 3A reveals that ALP expression increased over time, and the highest ALP expression was observed in the MBG-HGO group, followed by the MBG-LGO group. Consistent with the ALP staining results, a similar trend was observed in the ALP activity and immunofluorescence staining assays, with the highest ALP activity and ALP (green) fluorescence intensity detected in the MBG-HGO group (Fig. 3B-D). A later osteogenic differentiation protein, osteocalcin (OCN), was detected by immunofluorescence staining at day 14. The results showed that rBMSCs cultured with the MBG-HGO extract expressed more OCN (green) than those in the MBG or MBG-LGO groups (Fig. 4A). To study the mineralization level of rBMSCs cultured with scaffold extracts, Alizarin red S staining was conducted at day 14. A greater number of calcified nodules were stained red in the MBG-HGO group than in the other groups (Fig. 4B), and this trend was further confirmed by the quantitative test shown in Fig. 4C. To further clarify the osteogenic differentiation effect of rBMSCs cultured with scaffold extracts, several marker genes essential during osteogenesis were examined. The results showed that these osteogenesis-related genes were all up-regulated in cells cultured with MBG-GO extracts compared to the MBG group (P < 0.05) (Fig. 5A-D). Simultaneously, we found that the expression of the ALP and COL1 genes was stronger in the MBG-HGO group than in the MBG-LGO group (P < 0.05) (Fig. 5A-B), indicating that the MBG-HGO scaffold extract could better enhance osteogenic differentiation.

Porous MBG-GO scaffolds promote bone regeneration in vivo

According to the results of the in vitro study, we further studied the in vivo osteogenic effect of the porous MBG-GO scaffolds in a large cranial defect in rats. The rats survived well; none of them died or developed infections during the course of the study after scaffold implantation. Three-dimensional micro-CT reconstructed images showed the morphology of the newly formed bone (Fig. 6A-B). In the sagittal plane (Fig. 6C), more newly formed bone was observed in the MBG-HGO scaffold group than in the other groups. Quantitative analysis of the newly formed bone was performed with the image analysis system. The local BMD was markedly higher in the MBG-HGO scaffold group (0.64 ± 0.08 g/cm³) than in the MBG scaffold group (0.10 ± 0.04 g/cm³) or the MBG-LGO scaffold group (0.50 ± 0.04 g/cm³) (P < 0.05) (Fig. 6D). The differences in BV/TV between these groups showed the same pattern (Fig. 6E).
The results indicate that MBG-GO scaffolds can significantly improve bone regeneration and that BMD increases with increasing GO content.

Porous MBG-GO scaffolds promote vascularization in vivo

In our in vitro study, we examined two marker genes, VEGF and HIF-1α, which are essential during angiogenesis. The results showed that both were significantly up-regulated in the MBG-GO groups compared to the MBG group when cultured with scaffold extract for 7 days (Fig. 5E-F). To clarify the effects on local vessel formation in the scaffolds after 12 weeks of implantation, micro-CT imaging was carried out. Three-dimensional reconstruction images were obtained and typical images are displayed (Fig. 7C). We could observe the newly formed vascular networks in the defect area from the corresponding images. They showed that the MBG group had almost no visible new vascular formation, whereas in the other groups a considerable number of vessels extended along the scaffolds from the edges of the defects. The number and volume of new vessels in the MBG-HGO group were both larger than in the other groups (Fig. 7D-E). Similar to the pattern of new bone growth, large groups of vascular networks also existed in the center of the MBG-HGO scaffolds (Fig. 7C).

New bone formation and mineralization analysis

As shown in Fig. 7A, new bone formation and mineralization were analyzed at 2, 4 and 6 weeks by sequential fluorescence labels. At 2 weeks, the percentage of TE labeling (yellow) in the MBG-HGO scaffold group (1.09 ± 0.19%) was greater than that in the MBG scaffold group (0.29 ± 0.07%) or the MBG-LGO scaffold group (0.76 ± 0.14%) (P < 0.05). At 4 weeks, the highest percentage of AL labeling (red) was observed in the MBG-HGO scaffold group (1.09 ± 0.13%), but there was also a significant difference between the MBG-LGO scaffold group (0.86 ± 0.10%) and the MBG scaffold group (0.35 ± 0.05%) (P < 0.05). At 6 weeks, the percentage of CA labeling (green) in the MBG-HGO scaffold group (1.02 ± 0.16%) was significantly higher than that in the MBG scaffold group (0.33 ± 0.06%) or the MBG-LGO scaffold group (0.80 ± 0.11%) (P < 0.05) (Fig. 7B). The results indicate that GO can promote bone formation at early stages. Consistent with the above results, histological analysis using van Gieson staining of undecalcified specimens showed extensive new bone formation in the defect areas (Fig. 8A). Bone regeneration was markedly increased in the MBG-HGO scaffold group (71.05 ± 8.07%), with a new bone formation area significantly greater than that in the MBG scaffold group (6.17 ± 1.59%) or the MBG-LGO scaffold group (33.28 ± 8.97%) (P < 0.05) (Fig. 8B).

Immunohistochemical (IHC) analysis

To further clarify the osteogenic and angiogenic functions of the scaffolds in vivo, the osteogenic and angiogenic markers OCN and CD34 were detected by immunohistochemical staining of decalcified cranial specimens. There was virtually no obvious positive staining for OCN/CD34 in the pure MBG scaffold group, but positive brown staining for OCN/CD34 was apparent in the MBG-GO groups (Fig. 8C, E), with the greatest positive staining found in the MBG-HGO scaffold group (Fig. 8D, F). The analysis of bone regeneration in the cranial defects indicated that MBG-GO scaffolds can significantly improve new bone formation and neovascularization, which increased in line with increasing GO content. These results were consistent with the previous micro-CT results.
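The group comparisons reported throughout (e.g., the BMD values above) follow the one-way ANOVA plus post-hoc scheme given in the statistical methods. A hedged SciPy sketch: the values below are synthetic, drawn around the reported group means, and Tukey's HSD stands in for the Student-Newman-Keuls test, which SciPy does not implement.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(0)
# Synthetic BMD samples (g/cm^3), n = 8 rats per group as in the study design,
# centered on the reported means: MBG 0.10, MBG-LGO 0.50, MBG-HGO 0.64.
mbg = rng.normal(0.10, 0.04, 8)
lgo = rng.normal(0.50, 0.04, 8)
hgo = rng.normal(0.64, 0.08, 8)

F, p = f_oneway(mbg, lgo, hgo)             # one-way ANOVA across the 3 groups
print(f"ANOVA: F = {F:.1f}, p = {p:.1e}")  # tiny p: group means clearly differ

print(tukey_hsd(mbg, lgo, hgo))            # pairwise post-hoc comparisons
```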
Discussion

The regeneration process for bone defects relies primarily on a physical bridge between the defect ends and the chemical guidance of bioactive molecules and proteins. Bioactive ceramics have been widely accepted and used as successful biomaterials in studies of bone repair and drug carriers [1-3, 6, 30, 31, 36-39]. Particularly in the bone tissue engineering field, various bioactive factors like DMOG, VEGF and BMP-2 encapsulated in mesoporous bioactive glasses have already been tested to enhance osteogenesis and angiogenesis [3,[6][7][8]. However, it is difficult to achieve these two biological functions simultaneously using only one bioactive growth factor, due to their differing mechanisms. Their inherent shortcomings, including short half-life, low activity, side effects at larger dosages and potential immune reactions, severely restrict their application in clinical settings [9,10]. To achieve fully functional and structural recovery, an advanced bioactive scaffold is needed to provide an ideal environment for bone tissue regrowth. In recent years, low-dimensional nano-materials including carbon nanotubes, graphene, and boron nitride nanotubes have shown significant potential in reinforcing bioactive ceramics because of their unique structures and properties [40]. GO is a representative new conductive material with the ability to enhance mechanical properties, cell attachment, proliferation and, more importantly, osteogenesis-angiogenesis [16,22,29,41]. Thus far, most studies on graphene oxide composites have focused on polymer matrices; few studies have been carried out on graphene oxide/glass ceramic composites. Considering that skeletal development occurs in close spatial and temporal association with angiogenesis [3], in this study MBG was responsible for carrying GO to enhance osteogenesis and angiogenesis. GO is a nano-sized particle with excellent dispersion properties in water and ethanol [42], and thus GO can be mixed into an MBG solution homogeneously. In addition, unlike metal elements such as copper and cobalt, which are also beneficial to bone formation [43,44], GO contains an abundance of hydroxyl, epoxy and carboxyl groups on the basal planes of its sheets, and it therefore possesses a large surface area and enhances cell adhesion [16]. It can also react with TEOS to ensure that the MBG firmly binds with the GO sheets [32,45]. In this study, we varied the weight ratio of MBG to GO to synthesize three types of composite scaffolds, consisting of pure MBG, MBG-LGO and MBG-HGO. By using a high temperature calcination technique, the scaffolds can be produced with symmetrical macroporous struts and mesoporous interfaces. We also confirmed that MBG can firmly bind with GO sheets via the FTIR assay. The porosity of the scaffolds increased slightly with the addition of graphene oxide. The pore size of the MBG-GO scaffolds (300-500 μm) permits the free exchange of nutrients, such as necessary proteins, oxygen and water, thus facilitating the delivery of energy for bone regeneration and the formation of capillaries [46]. In this study, we also confirmed that a pore size of 300-500 μm is conducive to cell adhesion and the growth of new bone and blood vessels. Excellent cytocompatibility and osseointegration with the host are prerequisites for biomaterials, and important criteria in evaluating whether a biomaterial can be implanted in vivo [32].
The surface physicochemical properties of scaffolds are important for cell behavior. A succession of processes occurs during the initial adhesion of cells to implants, and cell adhesion directly impacts cell growth, migration, and differentiation. Direct cellular adhesion and the subsequent cellular responses are therefore critical prerequisites for osteointegration and osteoconduction [47,48]. Herein, the cytocompatibility of the MBG-GO composites was first examined by SEM, which allows direct observation of cell morphology and attachment on the materials and is a commonly used method to assess possible harmful effects that materials induce in cells. When the cells were cultured for 24 h, the cells on the surfaces of the MBG, MBG-LGO and MBG-HGO scaffolds all maintained their fusiform shapes, but the cells spread out more on the surfaces of the MBG-LGO and MBG-HGO scaffolds. This result further demonstrated the functional bioactive environment provided by GO. The viability and proliferation of these cells were assessed by CCK-8 test after one, three, and seven days of incubation. According to the data shown in Fig. 2F, the MBG-GO groups had a significantly higher cell viability rate than the control group, indicating that GO has a significant effect on the proliferation of rBMSCs. The enhanced proliferation was further demonstrated by immunofluorescence assays, in which more cytoskeletons and nuclei were seen at high magnification (Fig. 3A, 4A). The results from the SEM and CCK-8 assays indicated that a small amount of GO had no obvious cytotoxicity, but excessive GO content may inhibit cell proliferation due to toxicity.

Differentiation of rBMSCs is the key process for bone regeneration. It has been demonstrated that GO can enhance the osteogenic activity of osteoprogenitor cells and stimulate in vivo bone regeneration [19,26]. In our study, we also confirmed that the incorporation of GO significantly promoted rBMSC osteogenic differentiation and new bone formation. Generally, the differentiation of cells is a significant step that occurs before bone mineralization, and the fundamental processes of cell differentiation and function are governed by the interaction of cells with their substrate [31,49]. GO can promote osteogenic differentiation through activation of the Wnt/β-catenin signalling pathway, and the osteogenic effect appears to be concentration-dependent [50]. In our in vitro study, ALP, RUNX2, OCN and COL1 gene expression was significantly enhanced in the MBG-GO groups, and the qRT-PCR results were further confirmed by ALP staining, mineralized nodule staining, and ALP/OCN immunofluorescence tests. The comprehensive use of micro-CT, histological examination, fluorochrome labelling and IHC revealed more intensive bone formation in the MBG-GO groups, indicating that GO stimulated the participation of rBMSCs in the repair of bone defects.

Microvessels are also vital to bone regeneration [4,5]. To verify the influence of the different scaffolds on angiogenesis in the process of bone regeneration, various markers were used, including VEGF, HIF-1α and CD34. The HIF-1 complex, one of the most important angiogenesis signalling pathways, initiates the expression of numerous genes including VEGF, and modulates stem cell proliferation, differentiation and pluripotency [51]. CD34, which belongs to a family of single-pass transmembrane proteins, is closely associated with vascular-associated tissue [52].
Low concentrations of GO can promote the expression of VEGF and angiogenesis by activating the AKT signaling pathway, upregulating p-eNOS and initiating downstream NO activation [53]. Our in vitro results revealed notably elevated levels of VEGF and HIF-1α gene expression in rBMSCs cultured in the MBG-GO scaffold extracts compared with the pure MBG extract, which may result from the relatively low concentrations of GO in the scaffold. In line with these results, areas with more new vessels were found in the MBG-GO groups in vivo by Microfil and CD34 staining evaluations. Furthermore, it is an interesting result that GO content played a vital role in both angiogenesis and osteogenesis: the highest levels of ALP, COL1 and VEGF gene expression were found in the MBG-HGO group in vitro, and MBG-HGO also promoted osteogenesis and angiogenesis most effectively in vivo. In our in vivo experiment, van Gieson staining showed that part of the MBG had degraded after 12 weeks, and GO had no significant effect on the degradation of MBG. Recent studies have revealed that GO can be degraded by myeloperoxidase secreted by activated neutrophils [54,55]. However, our in vivo results showed that GO did not degrade completely after 12 weeks (Fig. 8E). Further studies are needed on the mechanisms of osteogenesis-angiogenesis and the degradation of GO in vivo.

Conclusions

In summary, MBG-GO scaffolds were successfully fabricated by a high-temperature calcination technique. The results showed that the MBG-GO scaffolds possessed ordered macropores, exhibited good biocompatibility, and stimulated the proliferation and osteogenic differentiation of rBMSCs. In a bone defect model, MBG-GO scaffolds significantly enhanced new bone and vessel formation in both the inner and peripheral scaffold areas of the defects without the presence of growth factors or stem cells. MBG-GO scaffolds therefore demonstrated excellent osteogenic-angiogenic properties and will be appealing candidates for bone defect repair.
Product Market Competition and Economic Growth: The Role of Increasing Returns to Production Specialization

In a simple one-sector endogenous growth model of imperfect competition, we show that the competitiveness-growth relationship can be mixed, crucially depending on the degree of the increasing returns to specialization. This ambiguity not only reconciles the theoretical prediction with the recent empirical evidence, but also provides a plausible explanation for the diversity in the competitiveness-growth relationship across countries.

Introduction

Is more intense product market competition (PMC) good or bad for growth? This question is important, since its answer will govern the development of antitrust and other competition policies. The conventional Schumpeterian paradigm indicates that monopoly power is viewed as the reward accruing to successful firms from their innovative activities; the larger this reward, the stronger the incentive to innovate. Since tougher competition erodes the monopolistic rents that can be appropriated by successful innovators, more intense PMC is harmful to technological progress and hence economic growth ([1-3]). However, this theoretical prediction is not supported by empirical studies. References [4-8] have pointed to a positive correlation between PMC and productivity growth at the firm and industry level, thereby leading to a positive link between PMC and aggregate economic growth. [9,10] use recent data on the patenting activity of a panel of UK and US firms and refer to an inverted-U relationship between PMC and innovation (growth).

To reconcile the theory with the empirical evidence, the Schumpeterian paradigm has recently been re-formulated and several extensions of the R&D model of endogenous growth have been proposed in the theoretical literature. [11,12] introduce agency issues into the basic Schumpeterian growth model and show that tougher competition can force managers to speed up the adoption of new technologies, which is beneficial to economic growth. The study of [14] shows that there exists an inverted-U relationship between PMC and growth in a model à la Romer with human capital. By allowing products to be both horizontally and vertically differentiated, [15] also obtains a positive relationship between PMC and growth. Unlike these studies, the present study departs from the Schumpeterian paradigm and attempts to shed light on the role of the returns to specialization in the determination of the PMC-growth relationship in a simple one-sector AK model of imperfect competition with endogenous entry. In line with [16,17], in our model endogenous growth is based on the returns to specialization, rather than on firms' R&D. We show that increasing returns to production specialization (IRPS) can serve as an alternative that provides a plausible theoretical explanation for the mixed competitiveness-growth relationship found in the empirical literature. It is important to emphasize that, in a departure from the model setting in [17-19], we follow [20,21] to introduce a distinction between the returns to specialization and the markup. By doing so, we not only have a better measure of PMC, but can also further verify the role played by the returns to specialization in the PMC-growth relationship.
IRPS have been shown to have practical importance. [23] (Chapter 5) and [24,25] argue that if the same assortment of commodities can be manufactured in specialized firms, the resulting scale economies will lead to better work performance and improved organization of work. Studies conducted in the United States both during and after World War II have shown that, in several industries, productivity has tended to increase by 18-20 percent as accumulated output has doubled through the production of a particular commodity (horizontal specialization) (see [26] for the details). Specialization is also often used to explain higher productivity in US as compared to Canadian textile plants (see, for instance, [23]). Recent studies, such as [27-29], further point out that nowadays many products are becoming more modular over time and that this development is often associated with a change in industry structure towards higher degrees of specialization. It has contributed to specific activities becoming more suitable and has attracted a large number of de novo entrants.

Our analysis suggests that the competitiveness-growth relationship can be either positive or negative depending on the degree of IRPS. The economic intuition underpinning this PMC-growth relationship is as follows. Firstly, by forcing price to converge to marginal cost, tougher competition decreases the distortion of market imperfections that yields a lower long-run level of capital in comparison with a perfectly competitive economy. In an AK-type (a [30]-type) endogenous growth model, this efficiency gain gives rise to a positive effect on economic growth. By contrast, competition may decrease the rate of economic growth in the presence of endogenous entry in the long run. Higher monopoly power, on the one hand, raises incumbents' profits and, on the other hand, provides an incentive for new firms to enter the market. If the increase in the number of firms leads individual firms to specialize in a single product, increasing returns to specialization occur. When the positive effect of this externality is substantial, monopoly power gives rise to a favorable, rather than harmful, effect in terms of boosting economic growth. The ambiguity of our result reconciles the theoretical prediction with the recent empirical evidence in a one-sector AK model, rather than the Schumpeterian endogenous growth model. Interestingly, we show that there exists an inverted-U-shaped relationship between PMC and growth. This implies that, due to distinct status quo levels of PMC, competition is beneficial in countries where the degree of IRPS is relatively low, but remains detrimental elsewhere with relatively high IRPS. This provides a plausible explanation for the diversity in the competitiveness-growth relationship across countries.

The rest of the paper is organized as follows. Section 2 sets up the model of households, firms, and the conditions for macroeconomic equilibrium. Section 3 presents the analysis of the results. Section 4 provides some concluding remarks.

The Model

Consider an economy which consists of households and firms. Time t is continuous.
Households

The economy is populated by a unit measure of identical and infinitely-lived households. For the sake of analytical convenience, we assume that each household supplies inelastically one unit of labor services per unit of time, i.e., the fixed quantity of labor is $H_t = 1$. Our main results remain valid in a model with a labor-leisure choice. In equilibrium, the labor market clears and the household obtains the desired quantity of employment.

Given a constant time preference rate $\rho$ and an initial capital stock $K_0$, each household seeks to maximize the following lifetime utility by choosing consumption and capital:

$$U = \int_0^{\infty} \ln C_t \, e^{-\rho t} \, dt, \quad (1)$$

subject to its budget constraint $\dot{K}_t = r_t K_t + w_t H_t + \Pi_t - C_t$, where $w_t$ ($r_t$) is the wage (rental) rate and $\Pi_t = N_t \pi_{it}$ are the aggregate profits derived from intermediate firms ($N_t$ and $\pi_{it}$ denote the number of intermediate goods produced and an individual firm's profits, respectively). Solving the household's problem yields the standard Keynes-Ramsey rule:

$$\frac{\dot{C}_t}{C_t} = r_t - \rho, \quad (2)$$

together with the transversality condition $\lim_{t \to \infty} \mu_t K_t = 0$, where $\mu_t$ is the shadow price associated with the budget constraint.

Firms

On the production side, there are two types of goods: a homogeneous final good and differentiated intermediate inputs indexed by i. The final good market is perfectly competitive, while the intermediate goods market is characterized by monopolistic competition. Following [20], the final good is produced using a continuum of intermediate inputs $y_{it}$ and takes the following generalized form of production function:

$$Y_t = N_t^{\,1+\varepsilon-\frac{1}{\lambda}} \left( \int_0^{N_t} y_{it}^{\lambda} \, di \right)^{\frac{1}{\lambda}}, \quad 0 < \lambda < 1, \quad (3)$$

where $\varepsilon$ measures the degree of returns to specialization. As in [20,21], the specification of (3) allows us to clearly separate increasing returns from imperfect competition, so that both effects can be fully disentangled. Of importance, it provides us with a better measure of PMC. As we will see, this is particularly important when we are exploring the competitiveness-growth relationship.

Assume that the final good is the numéraire and that $p_{it}$ is the relative price of the intermediate good i. Thus, the profit maximization problem for the final good firm is given by:

$$\max_{y_{it}} \; Y_t - \int_0^{N_t} p_{it} \, y_{it} \, di.$$

Accordingly, the corresponding first-order condition is as follows:

$$p_{it} = N_t^{\,1+\varepsilon-\frac{1}{\lambda}} \left( \int_0^{N_t} y_{it}^{\lambda} \, di \right)^{\frac{1-\lambda}{\lambda}} y_{it}^{\lambda-1}. \quad (4)$$

Equation (4) is the demand function for the ith intermediate good, which is characterized by a constant price elasticity $\frac{1}{1-\lambda}$. As is evident, a larger $\lambda$ implies a higher price elasticity of demand for intermediate good i and, accordingly, indicates that the intermediate good sector is more competitive.

Intermediate good producers employ capital $k_{it}$ and labor $h_{it}$ to produce their product and sell it to the final good producers at the profit-maximizing price. With an overhead cost $\phi$ (paid in units of intermediate good output), the production technology for intermediate good i can be expressed as:

$$y_{it} = k_{it}^{a} \, h_{it}^{1-a} - \phi, \quad (5)$$

where the parameter a (1 - a) measures the capital (labor) share. Subject to (4) and (5), profit maximization by each intermediate good producer yields the factor prices:

$$r_t = \lambda a \, p_{it} \, \frac{y_{it}+\phi}{k_{it}}, \qquad w_t = \lambda (1-a) \, p_{it} \, \frac{y_{it}+\phi}{h_{it}}, \quad (6)$$

and the associated profits:

$$\pi_{it} = p_{it} \left[ (1-\lambda) \, y_{it} - \lambda \phi \right]. \quad (7)$$

Moreover, free entry guarantees zero profits for each intermediate good producer. Thus, from (7), the quantity of each intermediate good produced is:

$$y_{it} = y = \frac{\lambda \phi}{1-\lambda}.$$

With this resulting relationship, (5) gives the equilibrium number of firms:

$$N_t = \frac{(1-\lambda) \, K_t^{a} H_t^{1-a}}{\phi}. \quad (8)$$

Given the fixed supply of labor $H_t = 1$, substituting (8) and the symmetric output level into (3) yields the aggregate output of the final good:

$$Y_t = \lambda (1-\lambda)^{\varepsilon} \, \phi^{-\varepsilon} \, K_t^{a(1+\varepsilon)}. \quad (9)$$

To ensure a balanced-growth-path (BGP) equilibrium, we need to impose the constraint $a(1+\varepsilon) = 1$.
With (8) and (9), the aggregate consistency condition refers to $K_t = \int_0^{N_t} k_{it} \, di = N_t k_t$. Thus, given the symmetric equilibrium relationships $k_{it} = K_t/N_t$ and $h_{it} = H_t/N_t$, and by substituting the intermediate good producers' profits (7), the factor prices (6), and the intermediate good's price into the household's budget constraint, the economy-wide resource constraint is given by:

$$\dot{K}_t = \lambda (1-\lambda)^{\varepsilon} \, \phi^{-\varepsilon} \, K_t - C_t. \quad (10)$$

The Relationship between PMC and Growth

We are now ready to investigate the relationship between the degree of imperfect competition (or PMC) and the growth rate of the BGP equilibrium. Under symmetric equilibrium, the Keynes-Ramsey rule (2) and the aggregate resource constraint (10) can be combined and, accordingly, the balanced-growth rate $\gamma$ is:

$$\gamma = a \lambda (1-\lambda)^{\varepsilon} \, \phi^{-\varepsilon} - \rho, \quad (11)$$

where $a = 1/(1+\varepsilon)$ by the BGP restriction. Based on (11), we then establish the following proposition:

Proposition 1. In the presence of IRPS ($\varepsilon > 0$), there exists an inverted-U-shaped relationship between PMC ($\lambda$) and growth ($\gamma$).

Proof. By differentiating (11) with respect to $\lambda$, we immediately have:

$$\frac{\partial \gamma}{\partial \lambda} = a \, \phi^{-\varepsilon} (1-\lambda)^{\varepsilon-1} \left[ 1 - (1+\varepsilon)\lambda \right], \quad (12)$$

which is positive for $\lambda < 1/(1+\varepsilon)$ and negative for $\lambda > 1/(1+\varepsilon)$.

Proposition 1 indicates that the growth-competitiveness relationship can be either positive or negative depending on the degree of IRPS $\varepsilon$ and the level of competitiveness $\lambda$. Intuitively, stronger competition (a higher $\lambda$) promotes production efficiency and increases each existing firm's output (given $y = \lambda\phi/(1-\lambda)$). This gives rise to a positive effect on the balanced growth rate. By contrast, more intense PMC may decrease the balanced growth rate in the presence of endogenous entry. As shown in (7), higher monopoly power tends to raise profits in equilibrium, hence creating an incentive for new firms to enter the market. If the presence of endogenous entry leads to increasing returns to specialization ($\varepsilon > 0$), the increase in the number of firms will generate a positive external effect in terms of boosting economic growth. As is evident, the growth-enhancing effect stemming from increasing returns to specialization becomes larger if $\varepsilon$ is higher. Under such a situation, stronger competition is more likely to harm the balanced-growth rate, leading to a negative competitiveness-growth relationship.

Note that if the returns to production specialization are either absent ($\varepsilon = 0$) or decreasing ($\varepsilon < 0$), monopoly power cannot generate a sufficiently positive external effect on growth. Consequently, there is an unambiguously positive competitiveness-growth relationship. In contrast, if we adopt the specification of [17,19] and tie the degree of IRPS to the markup by setting $1+\varepsilon = 1/\lambda$, (12) turns out to be negative, indicating that the latter entry effect dominates the former effect of production efficiency. As a result, there is a monotonically negative relationship between PMC and growth. Such a result is debatable, since it obviously comes from an inappropriate specification which cannot clearly separate increasing returns from imperfect competition.
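As a purely illustrative numerical check of Proposition 1, the balanced-growth rate in (11) can be evaluated over a grid of values of the competition parameter; in the following Python sketch the values of phi and rho are arbitrary placeholders (they shift the level of growth but not the location of the peak), while the degree of IRPS is set to 1 + epsilon = 1.43 as in Section 3.

import numpy as np

# gamma(lambda) from (11), with a = 1/(1+eps) imposed by the BGP restriction a(1+eps) = 1
def growth_rate(lam, eps, phi=1.0, rho=0.02):
    a = 1.0 / (1.0 + eps)
    return a * lam * (1.0 - lam) ** eps * phi ** (-eps) - rho

eps = 0.43                                  # degree of IRPS: 1 + eps = 1.43
lams = np.linspace(0.01, 0.99, 9801)        # grid over the PMC measure lambda
gammas = growth_rate(lams, eps)

lam_hat = lams[np.argmax(gammas)]           # numerical growth-maximizing PMC
print(f"numerical argmax:      {lam_hat:.3f}")            # ~0.699
print(f"closed form 1/(1+eps): {1.0 / (1.0 + eps):.3f}")  # 0.699, i.e. about 0.7

The two printed values coincide, confirming numerically that the growth-maximizing level of competition is interior whenever the degree of IRPS is positive, which is the inverted-U of Proposition 1.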
Interestingly, there exists an inverted-U-shaped relationship between PMC ($\lambda$) and growth ($\gamma$) with a maximum growth rate, which is consistent with the recent evidence (see [9,10]). As shown in Figure 1, the growth-maximizing PMC requires that the condition $\hat{\lambda} = 1/(1+\varepsilon)$ be satisfied. This implies that the status quo level of PMC and the degree of IRPS jointly govern the competitiveness-growth relationship. If we follow [32] and set the degree of IRPS $1+\varepsilon = 1.43$, the maximum growth rate is located at around $\hat{\lambda} = 1/(1+\varepsilon) \approx 0.7$. Note that this value is in accordance with the empirically plausible range 1.32-1.49 estimated from US data by [33] and is also within the range of the various areas summarized from 8 studies by [34]. [35] estimates that during 1981-2004 the weighted average PMC in the European area was λ = 0.73; notably, Italy shows higher markups (λ = 0.62). Specifically, PMC λ = 0.91 in Japan, λ = 0.89 in the UK, and λ = 0.88 in the US during 1975-2002 (estimated by [36]), λ = 0.58 in Egypt in the 1990s (estimated by [37]), and λ = 0.62 in Thailand during 2001-2005 (estimated by [38]). In addition, the 2008 OECD indicators of Product Market Regulation reveal that, due to deregulation policy, OECD countries in general are more competitive than less developed non-OECD countries, such as China, Russia, India, and South Africa. With these observations, Figure 1 suggests that, given a specific degree of IRPS, more intense competition will be more likely to stimulate economic growth in countries with less competition at the status quo (such as Italy or less developed countries), while it will be more likely to hamper growth in countries with high competition (such as Japan and the European area). In other words, given a specific degree of IRPS, a positive competitiveness-growth relationship is more likely in the less developed countries.

Concluding Remarks

In this paper we set up a simple one-sector endogenous growth model of imperfect competition and use it to highlight the importance of IRPS in the competitiveness-growth relationship. The ambiguity of our result allows us to reconcile the theoretical prediction with the recent empirical evidence in a one-sector AK model, rather than the Schumpeterian endogenous growth model. Of importance, it allows us to provide a simple, but interesting, numerical example, indicating that due to the distinct status quo level of PMC, competition is beneficial in countries where the degree of IRPS is relatively low, but remains detrimental elsewhere with relatively high IRPS. Our result not only reconciles the theoretical prediction with the recent empirical evidence, but also provides a plausible explanation for the considerable diversity in the PMC-growth relationship across countries.

References

[1] P. M. Romer, "Endogenous Technological Change," Journal of Political Economy, Vol. 98, No. 5, 1990, pp. S71-S102.
A New Hope: Sodium-Glucose Cotransporter-2 Inhibition to Prevent Atrial Fibrillation

Atrial arrhythmias are common in patients with diabetes mellitus (DM), and despite recent advances in pharmaceutical and invasive treatments, atrial fibrillation (AF) and atrial flutter (AFl) are still associated with substantial mortality and morbidity. Clinical trial data imply a protective effect of sodium-glucose cotransporter-2 inhibitors (SGLT2is) on the occurrence of AF and AFl. This review summarizes the state of knowledge regarding DM-mediated mechanisms responsible for AF genesis and recurrence but also discusses the recent data from experimental studies, published trials and metanalyses.

Introduction

Patients with diabetes mellitus (DM) are constantly increasing in numbers and are estimated to reach 580 million worldwide in a 10-year projection. Diabetes mellitus induces intra-cardiac processes such as left ventricular hypertrophy (LVH), endothelial dysfunction, interstitial fibrosis, inflammation and microvascular damage, and significantly impacts the treatment of cardiovascular patients [1,2]. The occurrence of atrial fibrillation (AF) is strongly connected with DM, and AF manifestation in diabetic patients signifies unfavorable cardiovascular outcomes [3]. AF is the most common sustained cardiac arrhythmia and is related to increased morbidity and mortality, standing as one of the primary causes of stroke and heart failure (HF) onset [4]. The reciprocal effect of DM and AF and their respective outcomes has long been a matter of discussion, and the quest for therapeutic options in both diseases is more challenging than ever. Conventional antiarrhythmic drugs, acting as ion blockers, are ineffective in about half of patients suffering from AF and are associated with severe cardiac and extracardiac side effects [5]. Because of their pleiotropic actions, it has been suggested that antidiabetic agents may have a role in AF inhibition. However, thus far, no glucose-lowering treatment has proven such a benefit, and results from corresponding studies with antidiabetic agents are conflicting [6].

Sodium-glucose cotransporter-2 inhibitors (SGLT2is) are one of the latest antidiabetic treatments. They have shown a broad class effect in reducing HF hospitalizations in patients with or without baseline cardiovascular disease (CVD), as well as a proven benefit in renal protection [7,8]. SGLT2is promote glycosuria and natriuresis by restraining glucose and Na+ reabsorption. They also produce reductions in blood glucose and HbA1c levels, arterial blood pressure, and body weight, which are critical factors for CVD and HF [9]. Because of the abovementioned mechanisms and others yet to be discovered, they have demonstrated a favorable outcome in HF patients with or without DM [10,11]. In one of the major SGLT2i cardiovascular outcome trials (CVOT), dapagliflozin decreased the incidence of reported episodes of AF-adverse events in high-risk DM patients, but it remains to be shown whether these effects apply to the class of SGLT2is [12]. In this review, we aim to summarize the DM-mediated mechanisms responsible for AF genesis and recurrence and to discuss recent experimental and clinical data on SGLT2is and AF.

Connexin (Cx) expression is commonly found in the atria [28]. Cx expression and distribution are affected by DM, resulting in atrial conduction abnormalities and the promotion of AF development.

Myocardial Energetics in DM and Their Effect on AF Genesis

Myocardial energy metabolism depends strongly on fatty acids and glucose. Fatty acid metabolism is more oxygen-consuming and less efficient for ATP production compared to glucose metabolism.
In people with diabetes, there is increased fatty acid metabolism and decreased glucose uptake due to insulin resistance, and the diabetic heart is prone to ischemia because of its constrained metabolic pathway [33,34]. The shift toward fatty acid metabolism causes deterioration of mitochondrial structure and decreased mitochondrial oxidative phosphorylation, promoting the accumulation of toxic lipid metabolites and the generation of reactive oxygen species (ROS). In addition, atrial mitochondria in diabetic patients are characterized by decreased respiratory capacity and increased oxidative stress [35]. Moreover, levels of mitochondrial biogenesis-related proteins, such as peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1a), nuclear respiratory factor 1 (NRF-1) and transcription factor A (Tfam), are reduced in DM [36]. The combination of these and several other pathological mechanisms leads to mitochondrial dysfunction that will eventually compromise cardiac ATP generation, promoting contractile dysfunction and arrhythmogenesis [37]. Adenosine monophosphate-activated protein kinase (AMPK) acts against metabolic stress by elevating ATP levels in cardiomyocytes via a rise in fatty acid and glucose metabolism [38]. In diabetic patients, impaired AMPK activity and disturbed atrial calcium homeostasis trigger AF onset and recurrence; conversely, adequate AMPK activation supports atrial calcium homeostasis and may protect against metabolic stress and AF progression [39]. Moreover, diabetic hearts are characterized by increased epicardial adipose tissue infiltration, another factor positively linked with AF [13].

Autonomic Dysfunction in DM and AF Genesis

Abnormal autonomic innervation is well recognized as an important mechanism of AF development and progression. Autonomic nervous system (ANS) disturbances can induce major changes in atrial electrophysiology and induce atrial tachyarrhythmias, such as AF [40,41]. In AF, simultaneous sympathetic and parasympathetic activation is the most common trigger for arrhythmogenesis; the onset of AF results from this imbalance between the two arms of the cardiac ANS [41]. The pivotal role of the autonomic nervous system in atrial arrhythmogenesis is also supported by circadian variation in people suffering from symptomatic AF [42]. A strong association has also been found between autonomic dysfunction and subclinical atrial fibrillation in patients with type 2 DM who underwent 48 h ambulatory ECG monitoring [43]. Heart rate recovery, an index of cardiac autonomic neuropathy, has been regarded as another predictor of AF risk in type 2 DM patients [44]. Regardless of the unexplored underlying cellular mechanisms, all the above findings suggest a strong link between autonomic remodeling and the occurrence of AF in patients with diabetes, considering the significant impact that the cardiac ANS has on cardiac electrophysiology.

Effect of SGLT2 Inhibition in Atrial Fibrillation: Basic Science and Experimental Data

Mechanisms responsible for the SGLT2i benefit in AF prevention and reduction include plasma volume reduction, cardiac remodeling, and an enhanced cardiac energy status via increased ketone oxidation and effects on cardiomyocyte Na+/H+ exchange [45]. Incoming data from basic science and animal models demonstrate an interaction between SGLT2is and the atrial myocardium. Reactive oxygen species, originating from mitochondrial dysfunction, are pro-arrhythmic and have been related to AF onset and progression.
Fibrosis and hypertrophy of cardiac myocytes are common in atrial fibrillation and are also linked with reactive oxygen species [19]. In a study by Yurista et al., SGLT2is showed the capacity to elevate mitochondrial biogenesis and strengthen mitochondrial function [46]. Their study investigated the association between mitochondrial dysfunction and the susceptibility to develop AF and also demonstrated that empagliflozin potentially restores mitochondrial function, thus ameliorating electrical and structural remodeling. In another relevant study, by Shao et al., empagliflozin had a beneficial impact on atrial structural and electrical remodeling by improving mitochondrial function and mitochondrial biogenesis in type 2 DM, acting as a prevention agent against DM-related atrial fibrillation [36]. Administration of empagliflozin in diabetic rats significantly prevented the development of atrial myopathy and improved atrial mitochondrial respiratory function and biogenesis [36].

SGLT2is act in the renal proximal convoluted tubules, halting sodium/glucose reabsorption; they thus increase glucose excretion in the urine and lower blood glucose levels, but also generate natriuresis and diuresis and, eventually, a reduction in atrial volume [12]. Increased uric acid levels have been related to AF onset and progression, and SGLT2is may exert a favorable effect against AF through a reduction in plasma uric acid levels [47,48]. Hypomagnesemia is linked to AF, as it increases sinus node automaticity and supraventricular ectopy [49]. SGLT2is preserve serum magnesium levels, as they improve insulin sensitivity and the insulin/glucagon ratio, thereby avoiding hypomagnesemia [50]. Insulin shifts extracellular magnesium intracellularly, reducing circulating magnesium in individuals with or without diabetes. Increased serum magnesium levels can also generate an anti-ischemic and anti-inflammatory effect on the heart [51]. In addition, SGLT2is have been shown to produce a beneficial reduction in epicardial fat; this dynamic pathogenic tissue has been related to coronary artery disease but also to increased AF incidence and severity [13,52]. In their study, Sato et al. demonstrated that patients who received dapagliflozin presented significant epicardial fat volume reduction at a six-month follow-up compared with the control group [53]. SGLT2is also decrease obesity, systemic inflammation, oxidative stress and sympathetic overdrive, which are important factors for AF onset and progression [9,40,54].

Glycemic variations, namely hypoglycemia, have been linked to an increased risk of AF in DM patients [55]. The hypoglycemia risk with SGLT2is is negligible compared to other glucose-lowering agents, and thus AF events triggered by glycemic variations are avoided in patients receiving SGLT2is. The glucose-lowering action of SGLT2is is insulin-independent and relies on decreased glucose reabsorption in the kidney; the risk of hypoglycemia is therefore minimal, in contrast to agents such as insulin or sulphonylureas, which can cause hypoglycemia [56,57].

In the EMPA-HEART study, 97 DM patients who received empagliflozin demonstrated a reduction in left ventricular (LV) mass, as measured by cardiac magnetic resonance imaging after a follow-up of six months. Along with the improvement in LV mass indexed to body surface area, the empagliflozin group exhibited a reduction in systolic/diastolic arterial pressure and an increase in hematocrit, which may account in part for the beneficial cardiovascular outcomes [58].
In a sub-study of the EMPA-HEART study, Mazer et al. demonstrated a beneficial impact on red blood cell production, induced by increased erythropoietin secretion, in patients receiving empagliflozin. Erythropoietin can produce favorable hemodynamic and myocardial energetic alterations, adding an extra systemic anti-inflammatory/proangiogenic effect and promoting SGLT2i cardioprotection [59]. SGLT2is deliver further cardiovascular benefits in human physiology by attenuating inflammasome activity. In a study by Sukhanov et al., smooth muscle cells (SMCs) from human aortas were used to investigate the role of empagliflozin in vascular inflammation. Empagliflozin mitigates NLRP3-associated overproduction of the cytokines IL-1β and IL-18 as well as SMC migration and proliferation [60]. In another study, by Kim et al., empagliflozin demonstrated marked inflammasome mitigation, an outcome strongly connected to improved metabolic factors such as lower levels of fasting serum insulin and uric acid in parallel with higher levels of serum ketone bodies. They compared NLRP3 activity in human macrophages (paracrine IL-1β production and mRNA levels) between 29 patients receiving empagliflozin and 32 receiving a sulfonylurea (glimepiride) for 30 days [61]. The abovementioned protective mechanisms of SGLT2is on AF are summarized schematically in Figure 1.

Effect of SGLT2 Inhibition in Atrial Fibrillation: Clinical Data and Metanalyses

In the DECLARE-TIMI 58 (Dapagliflozin Effect on Cardiovascular Events-Thrombolysis in Myocardial Infarction 58) study, 769 AF episodes occurred in 589 patients over a median follow-up of 4.2 years. A total of 124 patients had two AF episodes, 36 patients had three episodes, and 20 patients had ≥ four episodes [62].
The maximum number of episodes in a single patient was six and seven. A total of 6.5% of the participants (1116) suffered from AF at baseline; these patients tended to be older, to have a higher body mass index, a history of coronary artery disease and HF, a higher urine albumin-to-creatinine ratio at baseline, and a lower baseline estimated GFR [12]. All these factors contribute to AF manifestation. The risk of a first AF episode was reduced by 19% (264 versus 325 episodes; 7.8 versus 9.6 events per 1000 patient-years; hazard ratio [HR], 0.81 [95% CI, 0.68-0.95]; P = 0.009). However, these positive results should be taken with a grain of salt. As stated by the authors, ECGs were not routinely collected nor independently reviewed, and events of AF/AFl were not centrally confirmed.

In the EMPA-REG OUTCOME trial, the beneficial effect of empagliflozin on HF-related outcomes, such as HF hospitalizations, CV death, all-cause death, first administration of loop diuretics and development of edema, was consistent in patients both with and without AF [63,64]. On the other hand, patients with baseline AF presented a higher rate of the abovementioned events; still, the absolute number of events prevented by empagliflozin was greater than in patients with no baseline AF. In the trial, new events of AF were limited, and no difference in AF onset was noticed between empagliflozin and placebo, as the prevalence of new-onset AF was 2.4% with placebo and 2.9% with empagliflozin. However, the AF analysis was performed in a DM population with high CV risk, and the possibility of AF prevention could be revisited in upcoming trial outcomes [64].

The CANVAS and CANVAS-R trials with canagliflozin were CV safety studies that compared the SGLT2i canagliflozin with placebo; the overall prevalence of new-onset AF was 5.96% with canagliflozin against 6.04% with placebo [65,66]. When subjects were categorized based on the presence or absence of HF, there was a significantly higher prevalence of AF in HF patients (14.4% vs. 4.6%, p < 0.001), but AF as a risk factor did not seem to affect CV death or hospitalization for HF during the trial (p = 0.47) [66,67].

In a population-based propensity score-matched cohort study of 79,150 DM patients receiving SGLT2is compared with 79,150 matched DM patients receiving DPP-4 inhibitors, a 17% reduction in new-onset AF was shown in the SGLT2i arm [68]. The study aimed to evaluate the risk of new-onset arrhythmias and all-cause mortality in type 2 DM patients receiving SGLT2is [68].

The CVD REAL Nordic study enrolled T2D patients receiving antidiabetic drugs and was conducted between 2012 and 2015 using nationwide registries in Denmark, Norway and Sweden [69]. Patients were divided into two groups: (i) new users of dapagliflozin (n = 10,227) and (ii) new users of DPP-4 inhibitors (n = 30,681). New-onset AF occurred at a rate of 1.47 episodes per 100 patient-years with dapagliflozin and 1.58 episodes per 100 patient-years with DPP-4 inhibitors, respectively (HR 0.92, not significant) [69,70].

In another observational study, by Norhammar et al., dapagliflozin was examined in a Swedish population with regard to cardiovascular outcomes [71]. Data from the D360 Nordic program were collected and analyzed in a cohort of 7102 type-2 DM subjects. A group of 21,306 propensity-matched control patients taking other glucose-lowering medications served as comparison, and after a mean follow-up of 1.6 years, dapagliflozin demonstrated cardioprotective outcomes comparable to those in DECLARE-TIMI 58.
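As a quick arithmetic check of the DECLARE-TIMI 58 figures quoted earlier in this section, the crude rate ratio implied by the reported incidence rates reproduces the 19% relative reduction; note that a crude rate ratio only approximates, and need not exactly equal, a Cox hazard ratio.

# Crude cross-check of the DECLARE-TIMI 58 first-AF incidence figures
rate_dapagliflozin = 7.8    # events per 1000 patient-years
rate_placebo = 9.6          # events per 1000 patient-years

rate_ratio = rate_dapagliflozin / rate_placebo
print(f"crude rate ratio:   {rate_ratio:.2f}")      # ~0.81, in line with the reported HR 0.81
print(f"relative reduction: {1 - rate_ratio:.0%}")  # ~19%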
A statistically significant reduction in all-cause and cardiovascular mortality was noticed, but with regard to AF episodes, no difference was demonstrated (HR 0.94; p = 0.425) [71].

In a study of HF patients of non-ischemic etiology with type 2 DM, SGLT2is reduced the risk of developing new-onset AF [72]. A total of 210 non-ischemic HF patients in sinus rhythm with a reduced left ventricular ejection fraction of 31.0 ± 8.2% were enrolled; 60 of them also suffered from DM. Kaplan-Meier curve analysis showed that non-ischemic HF patients without DM had fewer AF occurrences than those with DM (log-rank p = 0.0003). Of the 60 HF and DM patients, those treated with SGLT2is (20 with dapagliflozin, 7 with empagliflozin and 5 with canagliflozin) experienced fewer occurrences of new-onset AF than those not receiving SGLT2is (log-rank p = 0.040). Despite the limited number of patients, it was shown that DM is related to new-onset AF manifestation and that SGLT2i administration can significantly reduce AF development in DM patients [72].

The number of published metanalyses evaluating the effectiveness of SGLT2is in AF prevention is steadily increasing. In their work, Okunrintemi et al. included the four CV safety trials (EMPA-REG OUTCOME, CANVAS, DECLARE-TIMI 58 and VERTIS CV), three renal outcome trials (CANVAS-R, CREDENCE and DAPA-CKD) and one HF trial (DAPA-HF) [73]. In these trials, 0.9% (295 of 31,261) of participants who received SGLT2is had at least one AF episode reported as a serious adverse event, against 1.1% (291 of 24,705) of participants in the placebo group. Metanalysis of these trials demonstrated a significantly lower incidence of AF in participants with and without DM, favoring SGLT2is against placebo (RR [95% CI] = 0.79 [0.67, 0.93], I² = 0%), and the number needed to treat (NNT) was 427 [73].

In a metanalysis by Fernandes et al., which included 34 randomized trials with 63,166 patients, canagliflozin, dapagliflozin, empagliflozin and ertugliflozin were investigated with respect to their effect on AF prevention [75]. The SGLT2i class demonstrated a significant reduction in the risk of incident atrial arrhythmias (OR 0.81, 95% CI 0.69-0.95; P = 0.008) in patients with diabetes. Combined data from three major SGLT2i CVOTs (the CANVAS Program, DECLARE-TIMI-58, and CREDENCE) revealed that AF/AFl events were reduced by 19% in the SGLT2i group, even though that was not the primary analysis endpoint [76]. However, data from the CREDENCE sub-analysis did not support the beneficial effect on AF/AFl incidence (HR 0.76; 95% CI 0.53-1.10; p = 0.15), raising the hypothesis that the positive result was driven by data from the DECLARE-TIMI-58 trial [77].

A recent metanalysis by Zheng et al. included 20 randomized trials involving 63,604 patients, most of them with DM; the primary analysis evaluated the incidence of AF and stroke [78]. The SGLT2 inhibitors evaluated were dapagliflozin (seven studies, 28,834 patients), empagliflozin (five studies, 9082 patients), canagliflozin (seven studies, 17,440 patients), and ertugliflozin (one study, 8246 patients). SGLT2i administration was related to a significant attenuation in the risk of AF incidence (odds ratio = 0.82; 95% confidence interval, 0.72-0.93; P = 0.002) compared with control. However, no significant difference was demonstrated in stroke between SGLT2is and controls (odds ratio = 0.99; 95% confidence interval, 0.85-1.15; P = 0.908).
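The pooled NNT of 427 quoted above for the Okunrintemi et al. metanalysis can likewise be reproduced from the raw event counts reported in the text; this is a simple risk-difference sketch, whereas the published analysis may use a slightly different estimator.

# Reproduce the number needed to treat (NNT) from the pooled event counts
events_sglt2i, n_sglt2i = 295, 31261       # AF serious adverse events, SGLT2i arms
events_placebo, n_placebo = 291, 24705     # AF serious adverse events, placebo arms

risk_sglt2i = events_sglt2i / n_sglt2i     # ~0.94%
risk_placebo = events_placebo / n_placebo  # ~1.18%
arr = risk_placebo - risk_sglt2i           # absolute risk reduction
print(f"NNT = 1/ARR = {1.0 / arr:.0f}")    # ~427, matching the reported value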
However, given the heterogeneity of the studies analyzed, there was no standard protocol for AF assessment, and AF risk factors may be mismatched between the SGLT2i and control groups, which is a major problem with all current metanalyses. The clinical data published thus far are, without doubt, hypothesis-generating. In addition, an SGLT2i class effect on AF prevention cannot be extrapolated, since current meta-analyses do not consistently clarify that data from DECLARE-TIMI-58 (with dapagliflozin) predominantly drive the results in terms of AF prevention [77]. In order to prevent further confusion in clinical practice, adequately powered randomized SGLT2i trials should be performed with primary AF endpoints, such as new-onset AF and AF recurrence/burden, in patients with or without DM and with different SGLT2i agents. Such trials, like EMPA-AF (NCT04583813) with empagliflozin in patients with DM or obesity, HF and AF, and DAPA-AF with dapagliflozin versus placebo in patients undergoing AF catheter ablation, are ongoing.

Conclusions

The results of the abovementioned studies and analyses provide experimental and clinical evidence for a favorable effect of SGLT2is on the incidence of AF. SGLT2i agents are now validated HF drugs due to their impressive outcomes in clinical trials, not only in the DM population but also in patients with HF or at high risk for CVD, and their pleiotropic effects may provide a great benefit in reducing AF. Further research at the experimental and clinical levels is required to evaluate AF manifestation and progression in well-defined populations of patients with or without DM and to explore the dynamics of SGLT2is in this process.

Author Contributions: N.K. and V.K. wrote the manuscript based on an idea by E.T. E.T. and I.P. supervised the process and edited the manuscript. All authors have read and agreed to the published version of the manuscript.
Unpacking Qualitative Methodology to Explore Experiences of Mothers with Children with Autism Spectrum Disorder in the UAE: A Thematic Analysis Inquiry

The current study provides a detailed description of the qualitative research design and methodology used while exploring challenges and support structures experienced by expat mothers of children with Autism Spectrum Disorder (ASD) in the United Arab Emirates. In-depth, face-to-face, semi-structured interviews were administered with 17 mothers recruited using purposive and snowball sampling. Recurrent and relevant themes were generated using thematic analysis. Given that there is a greater need for highlighting methodological rigor in qualitative research, we discuss steps such as a) using field knowledge to create an interview protocol, b) administering collaborative qualitative research, c) having strong eligibility criteria for participants, d) incorporating the perspectives of multiple coders in the analytical process, e) being reflective and aware of one's potential biases, f) enhancing the interview protocol based on pilot interviews, and g) focusing on the quality of perspectives, or information power, instead of the quantity of perspectives. Interpretation of the findings and the recommendation of evidence-informed guidelines incorporate the strengths and limitations of the qualitative methodology utilized in the study.

Background

ASD is a developmental disability in which children experience deficits in their social and communication skills, often combined with repetitive behaviors and other behavioral problems, making parenting extremely distressing (Kuhn & Carter, 2006). Parents of children with ASD face greater parenting stress (Hayes & Watson, 2013) and poorer family functioning than parents of children with other developmental disabilities (Al Khateeb et al., 2019; Cridland et al., 2014). Diagnosis is often delayed (Linnsand et al., 2021) and may lead to feelings of loss and failure as a parent (Ying et al., 2018). In addition, parents may also experience systemic challenges related to admission to mainstream schools and streamlining therapeutic interventions. Moreover, it has been noted that parents of children with ASD experience greater stress related to family functioning and parenting, along with increased levels of financial stress, when compared with parents of typically developing children (Zeffane & Melhem, 2017). Mothers, often the primary caregivers, experience greater psychological problems and often have to move away from their careers (Meadan et al., 2010; Riahi & Izadimazidi, 2012). Research further suggests that they lack support and may feel alienated from family and community (Solomon & Chung, 2012). Such issues are often exacerbated in expat or immigrant families, who must learn to navigate new social, cultural, and political structures.

Eighty-eight percent of the population in the UAE are expats (Edarabia, 2021). While the government prioritizes support for children with special needs (Federal Law No. 29), the advancements are relatively recent compared to several countries in the West (Borsay, 2012; Kim et al., 2019). Therefore, with its aim to facilitate support for citizens and residents in the UAE, the government encourages research examining the effectiveness of laws and policies regarding families and children with special needs (Sheikh, 2015).
However, there is little information about both personal and systemic support structures and challenges experienced by caregivers of children with special needs, especially from a qualitative perspective (Hussein & Taha, 2013; Shahrokhi et al., 2021; Sheikh, 2015). Overall, factors such as a) the lack of knowledge about the experiences of expat parents of children with ASD, b) limited research in the field of inclusive and special education, c) calls for the identification of areas of advancement and development in the field, and, most importantly, d) the lack of information on protective factors for expat mothers, who are likely to experience psychological problems and high parenting stress, inspired this research in the UAE.

Theoretical Positioning

A family-systems theory acknowledges the uniqueness of a family as an interactive and reactive unit, consisting of its own norms and values. Hodgetts et al. (2013) mention that family systems theory extends beyond the immediate family and includes support structures from environmental factors. In line with the present study's context, previous research has shown that families with children with ASD experience high adjustment difficulties (Cridland et al., 2014). The experience of raising children with ASD depends on various support structures. Mothers with adequate social support have been shown to experience less stress and increased levels of optimism, leading to better family outcomes (Ekas et al., 2010; Zaidman-Zait et al., 2017). Additionally, a recent meta-analysis showed that positive sources of support, including parent-to-parent support, decrease symptoms of depression in mothers (Schiller et al., 2021). Cridland and colleagues (2014) further report that internal as well as external support provides caregivers with the opportunity to convey experiences, share emotional difficulties and receive support. This support ranges from social support from other parents of children with autism to problem-solving skills and strategies, along with adequate health and medical support. However, seeking support and relying on support structures could differ due to cultural and socio-economic differences. Moreover, external factors such as geographical location and internal factors such as ethnicity or education could also significantly impact a family's willingness to seek support services (Krakovich et al., 2016). The overall lack of research in the field in the Middle East, the lack of information about the experiences of expat mothers, and the need to identify the challenges and support structures of families with children with ASD, inclusive of pandemic-related experiences, were the primary research-based motivations of the authors.

The Present Study

The primary aim of the study is therefore to explore the support structures and challenges experienced by expat mothers of children with ASD in the UAE (Lamba et al., 2022). The current article describes the rationale and methodology used to qualitatively analyze the lived experiences of these mothers. Mothers shared their experiences of parenting, help-seeking, diagnoses, schooling, support groups and community, in relation to their children's ASD. The materials used were interview domains designed using previous research studies that explored similar domains. The findings of the study can be used at the psychosocial and policy levels.
Explanation and Justification of Method

Design: Qualitative analysis of semi-structured interviews

We utilized a qualitative approach to enable an in-depth understanding of the lived experiences of mothers of children with ASD. Qualitative methodology requires participants to reflect on their experiences, embraces subjectivity and meaning-making within a specific socio-cultural context, and establishes 'truth' by using intersubjectivity (Biggerstaff, 2012; Howitt & Cramer, 2010; Lazard & McAvoy, 2020). A well-researched qualitative study focuses on 'the understanding and explanation' of the dynamics of social relations (Queirós et al., 2017). Based on this principle, the core values of this research are structured around social relationships and the interactions between them. The methodology also supported the larger aim of the study, as qualitative research studies are increasingly being used to inform policy guidelines and systems decisions (Lewin & Glenton, 2018). Qualitative research is often administered in a natural setting, and, unlike quantitative research, the researchers actively engage in the generation and analysis of detailed textual data. There are many data sources, such as interviews, diary entries, focus groups, and observations. We used in-depth, semi-structured, face-to-face interviews in the current study to collect data.

Sampling/Recruitment

Participants were recruited with a combination of purposive and snowball sampling. We defined the target group and established the inclusion criteria. The interviewer (AT), being a special needs educator in the region, had immense field knowledge and an extensive network, and initially approached two participants via email (purposive sampling). Afterwards, these two participants were requested to identify and approach mothers in their networks who would be willing and appropriate for the study (snowball sampling). Fifteen participants were successfully recruited using snowball sampling. They also approached a few prospective participants via social media. Phase 2 of the study was administered to explore participants' experiences during the pandemic. Both purposive and snowball sampling are nonprobability sampling techniques commonly used in qualitative research (Verma, 2019). In particular, snowball sampling facilitates the collection of data from 'hidden', systematically marginalized, or vulnerable populations (Woodley & Lockard, 2016); this supported the collection of data from otherwise 'hard-to-reach' participants in the current study.

Inclusion criteria. Key characteristics, such as specific demographics and experience-based attributes, are often used in high-quality research to closely define the sample. In the current study, a strong eligibility criterion for participants facilitated a psychologically homogeneous sample (Robinson, 2014) and increased the internal and external validity of the research (Patino & Ferreira, 2018). Mothers who 1) had at least one child diagnosed with ASD, 2) were expats in the UAE, and 3) spoke basic communicative English were invited to participate. These characteristics were closely in line with the research objective. In particular, since expats or immigrants often face different challenges and support structures and may not have access to the same sources of information as individuals from the host culture, we deemed it necessary to study their experiences separately.
Given the lack of similar previous research in the UAE, the exploratory nature of the current study, and the 'hard-to-reach' sample, we made sure that the inclusion criteria were not too restrictive but still facilitated external validity, greater homogeneity in the sample, and quality data collection (Golafshani, 2003). Overall, 37 mothers were identified and approached for the study; however, 20 could not find time to participate, leading to a response rate of 46%. We anticipated a low response rate, as mothers of children with ASD have extremely hectic schedules, and it is possible that mothers who were extremely distressed could not find time to participate in the research. In addition, participants were not provided any incentives for participation. The final sample consisted of 17 mothers.

Qualitative researchers use a sample size that enables the creation of nuanced and insightful information (Biggerstaff, 2012). For the current study, the rather broad aim of the study, the feasibility of data collection, the diversity of experiences shared by the participants, previous studies of similar designs, and the choice of data analysis informed the final sample size (Malterud et al., 2016; Onwuegbuzie & Leech, 2007a). Based on Onwuegbuzie and Leech's (2007b) recommendations, we ensured that a) the sample size was not so large that it became difficult to extract rich and meaningful themes and b) the sample size was not so small that it became difficult to reach data saturation and achieve information power (Malterud et al., 2016). It is also important to note that we observed a saturation of ideas after approximately 10 interviews concerning experiences with spouses and support groups; however, participants continued to showcase the diversity of experiences and share unique anecdotes about systemic challenges, such as the diagnoses of their children and experiences with teachers and schools. Therefore, we continued data collection and interviewed all the mothers who accepted our invitation. We believe that the final sample size of 17 participants is sufficient to report frequencies and qualitative insights using thematic analysis.

The end of the initial phase of data collection overlapped with the start of the pandemic. To explore the experiences of participants in the context of the COVID-19 pandemic, we emailed follow-up questions, exploring challenges and support structures during the pandemic, to all the participants. Seven participants sent detailed responses about their experiences, leading to a response rate of 41%. Again, the researchers expected a lower response rate given participants' extremely busy schedules during the pandemic; however, the participants who responded to the questions offered detailed insights regarding their experiences. Therefore, despite the small sample size, a clear theme related to the pandemic emerged.

Other procedural insights. The interviews were administered in a private conference room of a hotel in Dubai. We wanted to ensure that participants felt comfortable in a secure and nonthreatening environment. Open and noisy public spaces are discouraged for interviews of a personal nature, as they do not provide participants with the 'relaxed and safe' environment that they may need to share their thoughts and feelings (Ecker, 2017; Elwood & Martin, 2000). The interviewer (AT) engaged participants in informal conversations and ensured that they felt comfortable before starting the interview.
The questions were communicated in simple spoken English, as English was not the first language of several respondents. The interviews were conducted between March 2019 and August 2020 and ranged from 37 to 110 minutes. At the end of each interview, participants were asked if they wanted to add anything they felt might be relevant to the research. This allowed some participants to offer new information that we might otherwise have missed. After the first two interviews, NL and AT re-evaluated the interviewer's (AT) positionality in relation to the research topic and identified potential weaknesses in the interview protocol. This step, often underutilized in qualitative research (Sampson, 2004), allows for methodological adjustments, strengthens the interview protocol, and enhances the quality of the research (Malmqvist et al., 2019). In fact, during the pilot stage, the first two participants also helped improve the interview schedule by providing their input, supporting the integrated knowledge transmission model of research (Nguyen et al., 2020). Given participants' enthusiasm for increasing awareness of autism and the interviewer's personal and professional network as a special needs educator in the UAE, the findings will be disseminated to relevant schools and governmental institutions upon publication.

Measures. We used semi-structured in-depth interviews to collect data, as they provide opportunities for detailed probing and follow-up questions. The interviewer (AT) recorded participants' non-verbal behavior and tone of voice in her field notes, which were later incorporated into the coding process. The interview schedule was created based on our reading of previous literature and discussion of key factors associated with the lived experiences of expat mothers of children with ASD in the UAE (see Appendix A). The questions tapped into the challenges experienced and the support structures available to the participants. Each section contained approximately 4-5 questions, related to:
A. Diagnosis (e.g., What were some of the early signs that you picked up?)
B. School system (e.g., Does your child receive additional support services in the school?)
C. Therapeutic services (e.g., Do you feel that some therapy services are more essential than others?)
D. Support from family and other structures (e.g., What was your family's reaction to the diagnosis of your child?)
E. Other life challenges (e.g., How do you prepare your child for new experiences?)
F. Experiences related to the pandemic (e.g., Please describe your experience of the challenges of caregiving during the pandemic.)
Participants were also asked to rate their satisfaction with the diagnosis, schooling system, therapy, spouse, and support groups on a five-point rating scale ranging from 1 (very dissatisfied) to 5 (very satisfied).

Data Handling/Analysis

The audio recordings were transcribed verbatim, and the transcriptions were verified by AT and NL.

Deciding on an Analytic Approach and Process of Analysis Chosen

The study adopted an inductive, bottom-up, data-driven approach to gain insight into participants' lived experiences (Potter & Wetherell, 1987; Priya & Dalal, 2016). The interviews were transcribed and then analyzed using thematic analysis (Braun & Clarke, 2006) to identify repetitive patterns and themes in the textual data collected from mothers of children with ASD in the UAE.
Thematic analysis was selected because it is a flexible approach, offers theoretical independence, and searches for themes that concisely describe the phenomenon studied and its relations to the social context (Terry et al., 2017). Braun and Clarke (2006) highlight the importance of the researcher's judgment in identifying themes. We recognized a theme when it was repetitive and captured crucial points in relation to the research question. We included semantic and latent coding so as to incorporate both explicit meanings, such as frequent keywords and phrases, and implicit meanings, such as underlying ideas, in the coding process (Braun & Clarke, 2006).

The data analysis involved a six-step process to ensure validity and reliability (Howitt & Cramer, 2010). The first step, familiarization with the data, involved reading, repeated reading, and transcribing the data; the primary coder immersed herself in the data to become thoroughly familiar with the content. The second step involved the generation of initial codes relevant to the data. We also calculated frequencies for each code to indicate how many mothers experienced each phenomenon. Third, we categorized the codes into potential sub-themes, and clusters of sub-themes were identified as primary themes. We also generated a mind map (using simple post-its) of all the important codes and prospective primary and sub-themes during this process, which helped us visualize all the themes as a team. Fourth, we reviewed the themes. At this stage, we discarded codes that did not fit into any cluster of themes. This was an important step, as we re-evaluated the importance of each theme in relation to the transcripts and the primary aim of the research study, and made sure that the final map truly reflected the data set (see Appendix B). It is important to note that during the analysis we also identified a few quotes that best explained participants' experiences; this was an extremely important step for us, as we wanted to showcase mothers' feelings and concerns appropriately. The fifth step involved naming all the sub-themes and primary themes using simple and self-explanatory terms, after which we created a final thematic map. The final step involved relating the findings to the literature and writing the report using frequencies and quotes.

Ethics

Information sheets highlighting the aims of the research study were provided to the participants. Participants were asked for their consent before participating in the interviews and were further informed of their right to withdraw from the study at any point. In summary, voluntary participation through consent was sought, which also helped maintain the authenticity of the data collected. Participants were also assured of anonymity and confidentiality. Although the interviews were recorded, transcripts were anonymized through pseudonyms, and only anonymized excerpts were used for publication. Direct or indirect identifiers were additionally removed where necessary. Moreover, where data relating to a particular category of participant revealed considerably sensitive information about the wider group, those transcripts were condensed to maintain confidentiality. As noted in the literature, discussing struggles and experiences related to parenting a child with ASD can cause significant distress to parents due to the sensitivity of the subject. Participants were therefore informed that they could take breaks during the interview or decline to answer any question.
Furthermore, if participants did exhibit distress, the interviewer would give them the time and space to feel better and enquire whether they would like to proceed. If a participant expressed concern and did not want to continue, the interviewer would conclude the formal interview, thank the participant for their time, and provide debriefing information.

Dissemination

The findings from the study will be used only for academic purposes, including journal articles, conference presentations, and policy recommendations/proposals. The findings will be disseminated via summaries presented through infographics and PowerPoint presentations. The key study findings will be summarized and presented to groups of educators, mental health professionals, parents, and caregivers in the United Arab Emirates. These plans may be amended based on feedback from research advisories. Most importantly, the interviewer's (AT) network as a special needs teacher in the UAE is being utilized to further raise public awareness via discourse and policy recommendations. To this end, the authors (AT and NL) have created a workshop for parents of children with ASD, teacher assistants, and shadow teachers in the UAE, to highlight some of the concerns raised by mothers in the current study.

Rigor

We followed Lincoln and Guba's (1985) criteria of dependability, transferability, credibility, and confirmability as essential factors for ensuring rigor in a qualitative study. Protocols were developed for collecting data, refining the proposed methods, and analyzing the findings. Lincoln and Guba explained that the credibility of a study is established when researchers or readers who are confronted with the experience recognize it. To ensure credibility in the current study, we engaged in multiple activities: persistent observation of the data collected; prolonged engagement with the data to familiarize ourselves with it; and triangulation, both of the data with existing literature and between researchers, with two researchers independently analyzing the results and comparing findings. To maintain trustworthiness, we maintained reflexivity and rigor throughout the research process (Barrett et al., 2020; Barry et al., 1999). To ensure dependability, we kept the research protocol traceable, clearly documented, and logical. Additionally, because reflexivity is central to the audit trail, the primary researcher kept a self-critical account of the research process, including her external and internal dialogue, daily logistics, rationales, methodological decisions, and personal reflections on her values, insights, and interests (Sandelowski, 1986). To support transferability, thick descriptions were provided for the themes that emerged, so that future researchers can judge the applicability of the findings and attempt to replicate the study protocol. Lastly, confirmability refers to the extent to which the researcher's interpretation of the findings is clearly derived from the data. Tobin and Begley (2004) suggest that confirmability is usually met when credibility, transferability, and dependability are achieved. These processes were therefore documented for the theoretical, analytical, and methodological choices made throughout the study, to ensure that others understood why and how decisions were made.
Sandelowski (1986) notes that a study and its findings are auditable when another researcher can clearly follow the decision trail made by the primary investigator. Accordingly, 30 percent of the interviews (n = 5) were selected using a random number generator in Excel and coded by a second coder (NL). The intraclass correlation coefficient (average measures) was 0.70 (a minimal sketch of this computation is provided below). Coding disagreements were then discussed to reach mutual agreement. Themes were cross-checked by both coders (AT and NL), a process that systematically enhanced the "truth value", credibility, and overall trustworthiness of the findings (Lincoln & Guba, 1985). Lastly, we used peer scrutiny and debriefing: the fresh perspectives provided by peers helped the primary researchers challenge earlier assumptions, and debriefing sessions among the researchers helped reduce researcher bias.

Discussion

We explored the support structures and challenges of expat mothers of children with ASD in the UAE (Lamba et al., 2022). Mothers' retrospective narratives suggest that they felt dissatisfied with medical professionals and extremely stressed during the process of diagnosis. Several mothers felt that their child/ren had been misdiagnosed or that the diagnosis was delayed; diagnosis was often delayed because symptoms of ASD could be confused with symptoms of comorbid impairments such as delayed language development. Mothers also narrated initial struggles to find an appropriate school or therapeutic interventions for their child/ren with ASD. Given that the mothers are expats, it is possible that access to information about interventions was initially scattered and unclear. Limited inclusive staffing across schools and healthcare settings in the region may also have posed a systemic challenge, leading to frequent switching between healthcare professionals and schools and further increasing mothers' financial and emotional stress. However, most reported feeling satisfied with therapy at the time of the interview.

Approximately half of the sample was not satisfied with the support they received from their husbands. While most appreciated the financial support, they wished for more emotional and instrumental support. Mothers also felt alienated from the community at large, and a few narrated experiences of negative comments or behaviors from family members such as grandparents and neurotypical siblings. Qualitative insights suggest that such experiences, unfortunately, perpetuate the feeling of being and remaining 'invisible' in society. Importantly, mothers felt extremely satisfied with support groups, in which they shared common experiences, empathized with each other, and exchanged key information about school admissions and therapeutic support. Previous research has shown that such experiences of belongingness facilitate self-efficacy, effective coping, and the overall psychological well-being of participants (Ekas et al., 2010; Zhang et al., 2015).

In addition, seven mothers shared their experiences of caring for children with ASD during the pandemic. Children with ASD thrive on routine and predictability; uncertainty, interrupted schooling and therapy, and the lack of routine during the pandemic exacerbated the situation and contributed to violent and disruptive behaviors. While the answers related to the pandemic were rich and insightful, only seven mothers responded to the follow-up questions.
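As flagged in the Rigor section above, the double-coding reliability check can be illustrated with a short computational sketch. The following Python code is hypothetical, not the study's own procedure: the transcript counts mirror the study (17 interviews, 5 double-coded), but the seed, variable names, and segment-level code counts are invented, and the intraclass correlation is computed here with the two-way random effects, average-measures formula (ICC(2,k); Shrout & Fleiss, 1979) rather than in Excel.

```python
# Hypothetical sketch of the double-coding reliability check described above:
# randomly select ~30% of transcripts for a second coder, then compute ICC(2,k).
import random
import numpy as np

transcript_ids = list(range(1, 18))              # 17 interviews in the study
random.seed(7)                                   # arbitrary seed for the sketch
double_coded = random.sample(transcript_ids, 5)  # ~30% go to the second coder
print("Second-coded transcripts:", sorted(double_coded))

def icc2k(ratings: np.ndarray) -> float:
    """Two-way random effects, average-measures ICC(2,k).
    rows = targets (coded segments), columns = raters."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# Invented per-segment code counts from two coders on one transcript
coder_a = np.array([3, 1, 4, 2, 5, 2, 3, 1])
coder_b = np.array([2, 1, 4, 3, 5, 2, 2, 1])
print("ICC(2,k) =", round(icc2k(np.column_stack([coder_a, coder_b])), 2))
```

In practice, an agreement threshold (0.70 is a commonly cited benchmark) is set in advance; values below it trigger the kind of discussion-to-consensus process described above.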
It is also important to note that qualitative research does not necessarily include a highly representative sample and lacks the power of generalizability. The findings, however, showcase information power within the social context of the research. The steps described above to enhance the methodological rigor of the current study contributed to greater validity and trustworthiness of the findings.

Appendix A. Interview Schedule

· Turn on the recording device.
· Thank the participant for coming and for contributing to the study.
· Provide the participant with some information on the study (provide Information Sheet).
· Assure the participant that their identity and the data that they provide will be kept confidential.
· State that the participant has the right to pass on a question if they wish not to answer.
· Inquire whether the participant has any further questions.
· Make sure that the participant has signed the consent form prior to continuing the interview.

Interview Protocol

Consent (Note whether verbal consent has been provided and whether the participant is aware of the aim of the research.)

Developmental History and Personal Information
How old was your child when he/she was diagnosed with Autism Spectrum Disorder? (Prompt: Would you like to tell me about that day and what made you decide to seek support/diagnosis from a medical professional? Were you satisfied with the support that the medical professional provided you with?)
What were some of the early signs that you picked up on that something wasn't quite right? (Prompt: Did your child/children start crawling later than other infants/toddlers their age? Did your child have limited vocabulary?)
What does a typical day at home look like? (Prompt: Describe your morning routines to get the children up and ready for the day.)
Can you describe your relationship with your child? (Prompt: What makes them happy? How would you describe your child to others around you?)
How do you communicate with your child? (Prompt: Do you use PECS? Do you use gestures? Do you use basic syntax and gestures?)

Schooling
At what age did your child attend school/nursery? (Prompt: Was it easy to find a school in the community?)
Do you think your child has a good relationship with his/her teacher? (Prompt: Can you describe this relationship? Does your child respond well to his/her teacher?)
Did the teachers approach you to speak to you about your child's needs? (Prompt: Were the teachers and staff approachable and supportive when speaking to you?)
Have you ever approached the class teacher to discuss any matters? (Prompt: Can you describe a particular event that comes to mind when thinking of your relationship with the teacher?)
Does your child receive additional support services in the school? (Prompt: Does your child receive support from a shadow teacher or learning support?)
What has your experience been like with these support services? (Prompt: Are you satisfied with the services provided by the school?)

Therapeutic Centres
Does your child receive any type of support therapy? (Prompt: Does your child receive ABA therapy/speech therapy/occupational therapy etc.?)
Let's talk about the support services and your experiences thus far. (Prompt: Are you pleased with the type of services available? Do you believe that there are services that are better/worse than others?)
Do you feel that some therapy services are more essential than others? (Prompt: If you could give advice to other mums regarding therapy, what would that be?
What type of services do you feel are essential to the well-being of your child?)
Do you believe that the services provided by the mentioned centre are beneficial to your child? (Prompt: Are you satisfied with the services provided by the centre?)
Have you seen a significant difference in your child/children's abilities when comparing pre- and post-support-services development? (Prompt: What are some of the differences you have noticed?)

Relationships
Can you describe your family? (Prompt: Can you name the members of your immediate family? Can you name the members of your extended family?)
If you think about your family, who do you think of? (Prompt: What do you think best describes your family?)
What was your family's reaction to the diagnosis of your child? (Prompt: Did you feel that you were supported from the start?)
What was your spouse's reaction to the diagnosis of your child? (Prompt: Did your husband seek a second opinion? Did your husband provide you with support?)
What do you consider your spouse's role in your family to be? (Prompt: How does your husband support you?)
Can you describe the level of support that you get from your immediate family? (Prompt: How does your spouse support you?)
And can you describe the level of support that you get from your extended family? (Prompt: How do they support you?)
(If no) What kind of support do you think is needed from extended families? (Prompt: Helping with school pick-ups and drop-offs, or supporting with consistency in behavioural interventions, etc.?)
(If yes) Could you describe the similarities in the types of support of the two family structures? (Prompt: Can you give some examples of how your immediate family and your extended family support you?)
Do you think that support from immediate family or extended family is important in families with special needs children? (Prompt: Why is support important in families that have SEN children?)

Environment
What impact do you think the home environment has on the well-being of your child? (Prompt: Do slight changes in plans upset your child, and how do you deal with these changes?)
How do you prepare your child for new experiences? (Prompt: Can you think of a time that you went on holiday or a time when you had to run an errand?)
What do you think are the neighbours' or community's reactions towards your family? (Prompt: What do you think they feel when they see your family?)
Do you feel that your community or your neighbours are supportive? (Prompt: What makes your community reliable? What does not make your community reliable?)
Are you currently in a support group for mothers who have children with Autism Spectrum Disorder? How has this group supported you? Can you tell me about your experiences in this group? (Prompt: Are you satisfied with the support you are receiving from the mothers in your group?)

COVID-19 Pandemic
Please describe (in detail) your experiences of the challenges of caregiving during the pandemic.
Please describe (in detail) the support structures available to you during the pandemic.

Is there anything else you would like to add?
Thank the participant for taking part in the research.

Additional Notes
Aspects that were mentioned by the participant that require more probing:
Non-Verbal Communication and Observations by researcher:
Stress, psychosocial factors and the New Zealand forest industry workforce: Seeing past the risk of harm to the potential for individual and organisational wellbeing

Background: There is clear evidence that stress is having an impact on the health and wellbeing of the forest industry workforce in Aotearoa New Zealand. While this has legal ramifications under the national health and safety legislation, international research also shows that harm to mental health invariably leads to reductions in workforce productivity and business profitability. The reverse is also true: improved mental wellbeing can lead to greater worker engagement and commitment, which in turn increases productivity and profitability. Although these relationships are well substantiated, managers and leaders in the forest industry may not be aware of either the existence of a workplace stress problem or of its impact.

Methods: A critical review is undertaken of stress and psychosocial hazards research within the international forest industry or similar industries (e.g. construction), with particular attention given to the explanation of psychosocial hazards.

Results: International research on the forest industry largely confirms what we know about harmful aspects of job content and workplace conditions. However, it is argued that the focus within this research on job content and immediate workplace conditions obscures the impact of the wider social context. This limits the potential of management to move beyond seeing psychosocial factors simply as risks to be minimised at the workplace level. Bringing an ecological perspective to the analysis of forestry workplaces makes it easier to identify the elements of forest management practice that may contribute to stress within the workforce. It also becomes easier to identify the interactions between family, community and workplaces that may either exacerbate or reduce workforce stress.

Conclusions: This paper highlights particular opportunities for reducing stress and enhancing wellbeing within the New Zealand forest industry workforce. It suggests that the psychosocial conditions that contribute to mental ill-health can be reconfigured to promote mental health, with wellbeing benefits that extend beyond the workplace. Psychosocial demands on a person can be motivating as long as the person has the resources to meet the challenge. Successful stewardship of the psychosocial environment at the forest management level is thus an opportunity to increase value to both investors and other stakeholders.

Keywords: work-related stress, psychosocial hazards, forest industry workforce, ecological perspective

Introduction

At the core of the constitution of the World Health Organisation is the notion that health is a human right that goes beyond the absence of harm to include physical, mental and social wellbeing (World Health Organisation 2019). In the psychosocial domain, health can be understood as emerging from the relations between our physical and mental capabilities and the social environment (Leka et al. 2015). Yet rather than managing those psychosocial factors to promote worker wellbeing (Leka et al. 2015), the occupational health and safety frameworks in most developed countries seek to prevent harm by eliminating or minimising the risks to worker health represented by hazardous psychosocial conditions (Chirico et al. 2019).
Recent changes in New Zealand's health and safety legislation are a good example of the limitations of this approach. The Health and Safety at Work Act 2015 contains a clear expectation that the work-related risks to a person's mental health should be managed by the people in charge of that work or workplace (Health and Safety at Work Act 2015). In the interpretations (section 16) of the Act, the definition of a hazard includes behaviour that has the potential to harm, "whether or not that behaviour results from physical or mental fatigue, drugs, alcohol, traumatic shock, or another temporary condition that affects a person's behaviour". The understanding of "health" in the Act includes both mental and physical health. However, managing workplace factors that impact psychosocial health through a framework of psychosocial hazards may obscure the opportunity those same factors represent to enhance both individual and organisational health. Health should be addressed as something more than harm elimination or reduction (Leka et al. 2015). Designing workplaces and work processes in ways that go beyond harm elimination and reduction can improve a worker's quality of life and enhance productivity and sustainability.

Making such interventions can be a challenge when the workforce is largely employed through service contracts, however, as is the case with the New Zealand forest industry. While the direct terms and conditions of the employment relationship are set by the contractor/employer, the scope of those conditions is largely controlled by the agreement between the contractor and the forest owner. The organisation of the work and the workplace is therefore not totally within the control of the employer. However, the Health and Safety at Work Act 2015 places the responsibility for the primary duty of care onto a "Person conducting a business or undertaking" (PCBU). This means that the obligation for managing the risk of a negative health outcome arising from mental distress sits with whoever creates that risk, regardless of where in the process of work that risk arises (and irrespective of the nature of the employment relationship between the PCBU and the worker who suffers harm).

This obligation to manage risk is more than just a legal and economic matter. The principles behind the International Labour Organisation's health and safety standards are not just that work should take place in a safe and healthy working environment, but also that conditions of work should be consistent with workers' wellbeing and human dignity. Work should offer real possibilities for personal achievement, self-fulfilment and service to society (Forastieri 2016). Although the legal challenge for the forest industry is to design workplaces and processes that reduce mental harm across business boundaries, this expectation also presents a moral and economic opportunity. By focusing on wellbeing rather than harm reduction, the industry could both improve workers' quality of life and reap the potentially significant financial benefits of a more loyal, engaged, and productive workforce.

What makes this challenging for forest owners and managers is that the feedback loops that could bring the impacts of mental distress to their attention are poorly developed. While noting signs of stress in the workforce, Lovelock and Houghton (2017) and Nielsen (2015b) both concluded that more work was needed to increase awareness of the full range of risks faced by forestry workers and the health impacts of those risks.
Furthermore, the arm's length nature of their service agreements means that forest owners and managers do not have the direct relationship with worker health and safety that would enable awareness of stress and its impacts. Despite widespread annual physical health checks of workers by contractors, there is no consistent and centralised assessment process in operation within the industry (Forest Industry Safety Council 2018). The generally available investigation methods used to generate a learning feedback loop after an accident are unable to take psychosocial factors and any associated stress into account (Van Wassenhove & Garbolino, 2008, as cited by Leka et al. 2015). Those charged with managing health and safety within the New Zealand forest industry could well be operating somewhat unaware of the potential impacts of stress, in its various expressions, on workers' wellbeing.

Against this backdrop, this paper examines the research on stress within the international forest industry workforce so as to identify opportunities for enhancing wellbeing amongst forest industry workers in New Zealand. It begins by reviewing how work-related stress and its risk factors are generally explained. It then considers what the extant research on work-related stress from the world's forest industries suggests about health impacts and psychosocial hazards within forestry. It finishes by questioning whether a focus on workplace psychosocial hazards is the most appropriate framework to address stress and wellbeing within the forest industry workforce in New Zealand. An alternative approach is presented, based on ecological systems theory, and some of the implications for potential interventions are noted.

Explaining work-related stress

Work-related stress has generated a large body of academic research that focuses primarily on how a person fits or does not fit into his or her work environment (Väänänen et al. 2014). In this framework, work-related stress is seen as psychological strain or a set of negative psychophysiological responses and reactions (Chirico et al. 2019) that occur either when the demands of the work environment exceed the capabilities and resources of the worker or when the needs of the worker cannot be supplied by the work environment (Dewe & Cooper 2017; Forastieri 2016). Stress is thought to occur when that mismatch becomes chronic or unmanageable (Leka et al. 2015). Much research has sought to clarify the relationship between the work environment and the individual's body and mind through investigations of the impact of workplace characteristics on particular unhelpful behaviours and psychological and somatic symptoms (Väänänen et al. 2014). Measuring both the psychosocial hazards and the symptoms has required the development of self-rating scales (Väänänen et al. 2014). This work has shown that chronic and unresolvable exposure to a number of workplace characteristics can increase the likelihood that a proportion of the workforce will suffer a negative psychophysiological response as a result (Leka & Jain 2010; Maslach & Leiter 2016).

Both the New Zealand Workplace Barometer (Bentley et al. 2019) and the World Health Organisation review of psychosocial hazards at work (Leka & Jain 2010) use the following definition of psychosocial hazards: "those aspects of work design and the organisation and management of work, and their social and environmental contexts, which have the potential for causing psychosocial or physical harm" (Cox et al. 2003, p. 195).

The World Health Organisation (2008) developed a summary of work-related psychosocial hazards (see Table 1) for the European Framework for Psychosocial Risk Management. The framework identifies ten psychosocial domains, each of which can be thought of as a potential source of work-related stress (Forastieri 2016). The domains are divided into two groups: work content (which includes psychosocial hazards related to the conditions, organisation and component tasks of the job), and work context (which includes psychosocial hazards related to workplace organisation) (Cox & Griffiths, 2005, as cited by Forastieri 2016). The New Zealand Workplace Barometer is closely based on this EU framework, in that it incorporates all ten domains and adds workplace bullying to the domain of interpersonal relationships at work (Bentley et al. 2019). Given that the New Zealand Workplace Barometer lists forestry as one of the industries with the highest reported levels of bullying - with more than 10% of respondents reporting having been bullied - the addition of bullying is highly relevant to the industry. This is particularly the case as the definition of bullying used by the survey required the harassment to occur over a period of time and to involve one or more perpetrators (Bentley et al. 2019). The World Health Organisation (2008) points out that while bullying can be considered a psychosocial risk, it should also be regarded as a consequence of a poor psychosocial work environment. The implication of this perspective is that if an organisation mitigates the risks listed in Table 1, then the risk of bullying will also reduce.

The findings of other research overlap considerably with this framework, albeit with different emphases. In their review of two decades of research on burnout and its causes and outcomes, Maslach and Leiter (2016) pointed to six key domains of psychosocial hazards: workload, control, reward, community, fairness and values. Relative to the European Framework, Maslach and Leiter (2016) place much greater emphasis on the role of rewards (financial, institutional or social), fairness (the extent to which decisions are perceived as being fair and equitable) and values (the alignment between the individual's values and those of the organisation they work for) in the development of burnout. Nevertheless, these differences may be quite important to the health and wellbeing of the various actors within the New Zealand forest industry. Issues such as whose interests are represented in the service contracts that form the basis for employment of the workforce, and how those contracts distribute risk and reward, will shape the perception of whether those agreements are seen as fair and equitable. Furthermore, with a workforce that is approximately 37% Māori (Ministry for Primary Industries 2020), there is significant potential for differences in world views between a substantial part of the workforce and the forest industry, creating misalignment in values (B. Hooper, personal communication, 13 August 2020).

Similarly, in a review of the epidemiological literature of work-related stress, Pfeffer (2018) points to ten workplace exposures that affect human health through stress. As with the psychosocial hazards associated with burnout, this perspective is largely the same as that used in the European Framework. However, there are key differences that are important in the context of the New Zealand forest industry.
Job insecurity, whether for one's own job or that of colleagues, is much more prominent in Pfeffer's framework. This could be considered important in an industry which employs most of its workforce on a contractual basis. Workers paid by piece rates or hourly rates are exposed to the risk of reduced hours or job loss resulting from contractual transgressions or downturns in the log market. Job insecurity is also highlighted as a key work-related psychosocial stressor by other authors (e.g. Dewe & Cooper 2017). Furthermore, Pfeffer included access to health care as a significant stressor, reflecting the "US-centric" nature of the epidemiological literature. However, any industry reliant on a rurally located workforce in New Zealand should be cognisant of reduced access to health care for those who live outside the urban centres, a pattern which reflects health service restructuring between 1980 and 2001 and the consequent differences in all-cause mortality rates between urban and rural regions (Pearce et al. 2008).

The differences between these frameworks highlight the contextual nature of psychosocial hazards and the need for psychosocial risks within the New Zealand forest industry to be researched more thoroughly than is currently the case. However, research has been undertaken within the forest industries of other countries that is relevant in the New Zealand context. This research, which highlights potential psychosocial hazards in the New Zealand forest industry and their impacts on health and wellbeing, is discussed in the next section.

Evidence of health impacts of work-related stress in the forest industry

Within the international forest industry, the study of known mental health conditions and their association with wellbeing and safety is centred on an 18-year prospective cohort study of workers at a Finnish-based multinational forest industry company (Väänänen et al. 2008). This study assessed health and potential risk factors within the workforce, which included manual labourers and machine operators. Research based on data from this study has highlighted the association of burnout with negative health and safety outcomes. Burnout was assessed using the Maslach Burnout Inventory (MBI; Maslach, Jackson & Leiter, 1996, as cited in Maslach & Leiter 2016), which consists of three dimensions: overwhelming physical and emotional exhaustion, arising from depleted emotional and physical resources with insufficient recovery (Maslach & Leiter 2017); feelings of cynicism that reflect a detached attitude towards work and an increasing disregard for one's co-workers and clients (Toppinen-Tanner et al. 2002); and a reduced sense of accomplishment and effectiveness (Seidler et al. 2014). Assessments occurred at various times throughout the study period and could be correlated with a number of different health outcomes recorded by Finland's National Population Register Centre and the company itself (Väänänen et al. 2008).

The health outcomes explored over the life of this research program are significant to the New Zealand forest industry for two reasons: firstly, the studies involve large numbers of participants (ranging from 3895 to 10062 employees), most of whom are men (greater than 76%) involved in manual work or machine operation (greater than 62%); and, secondly, burnout is correlated with clinically derived indicators of health (Väänänen et al. 2008), which are considered more reliable than self-report measures (Väänänen et al. 2014).
The research facilitated by this program consistently points to burnout being associated with negative health outcomes. An increase in the MBI summary score of one unit was associated with a 35% increase in the risk of mortality among workers less than 45 years old (Ahola et al. 2010); per-unit estimates of this kind are typically derived from proportional hazards models (a minimal illustrative sketch is given below). Of the subscales, only exhaustion produced a statistically significant hazard ratio when adjusted for sociodemographic and baseline health factors. A similar study of the relationship between burnout and severe injuries by the same research group found a one-unit increase in the burnout summary score to be related to a 10% increase in the risk of injury requiring hospitalisation or causing death (Ahola et al. 2013). Of the MBI subscales, emotional exhaustion was associated with a 9% increase in the risk of injury, while cynicism was related to a 10% increase. This suggests that having both the energy and the motivation to act safely is important for preventing workplace injury or death. Toppinen-Tanner et al. (2005) reported on burnout as an event prior to sickness absence for different medically certified causes of absence. They found that the MBI summary score was positively correlated with the risk of future medically certified absence (after adjustment for age, gender, occupation, and baseline absence). The increased risk of future illness was shown to include mental and behavioural disorders and diseases of the cardiovascular and musculoskeletal systems. Burnout also predicted future hospital admissions for mental health and cardiovascular disorders among participants who had not suffered the disorder prior to the start of the study (Toppinen-Tanner et al. 2009). Although none of these studies defined a causal pathway between burnout and negative health outcomes, they do suggest that work-related stress conditions are associated with increased risk of injury, illness and early mortality within a male-dominated, manual and machine-operator workforce. Such research is relevant to the New Zealand forest industry.

Of concern, therefore, is that there are already indications that mental distress is having an impact on New Zealand forest industry workers. The New Zealand forest industry is part of an occupational group (Forestry and Farming) that comprises 6.8% of male suicide victims in New Zealand (Suicide Mortality Review Committee 2016). If that percentage still holds, increases in male suicide levels in New Zealand (Coronial Services 2020) suggest that deaths by suicide could have exceeded accidental workplace deaths for the farming and forestry occupational group in both 2018 and 2019. WorkSafe New Zealand's National Health and Safety Attitudes and Behaviours Survey (NHABS) also noted that "stress-related or mental illness was more likely to be identified as a long-term health problem by workers who had personally experienced a serious harm incident (22% compared with 12% of those who had not experienced an incident) or a near miss incident (19% compared with 11%)" (Nielsen 2015a, p. 68). The same survey found that 27% of employees and 36% of employers had experienced a serious harm near miss or actual incident in the preceding 12 months (Nielsen 2015a). This is in line with international evidence that highlights (i) the interaction between exposure to actual and potential trauma and mental health disorders (Tehrani 2004) and, more specifically, (ii) the relationship between exposure to risks and hazards and mental distress (Nahrgang et al. 2011).
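To make the per-unit figures above concrete, the following sketch simulates a toy cohort and fits a Cox proportional hazards model, the standard way a "hazard ratio per one-unit increase" is estimated in studies of this kind. Everything here is illustrative: the data are simulated, the column names are invented, and the 1.35 hazard ratio is simply planted in the simulation to echo the 35% figure reported by Ahola et al. (2010); this is not the Finnish study's data or code.

```python
# Illustrative Cox proportional hazards fit on simulated cohort data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 500
mbi = rng.uniform(0, 6, n)             # hypothetical burnout summary scores
baseline_hazard = 0.05
true_hr_per_unit = 1.35                # assumed effect, echoing the 35% figure

# Exponential event times whose hazard scales multiplicatively with the score
times = rng.exponential(1.0 / (baseline_hazard * true_hr_per_unit ** mbi))
censoring = rng.exponential(15, n)     # administrative censoring times

df = pd.DataFrame({
    "mbi_score": mbi,
    "duration": np.minimum(times, censoring),
    "event": (times <= censoring).astype(int),  # 1 = outcome observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.hazard_ratios_)  # should recover roughly 1.35 per unit of mbi_score
```

Because the fitted coefficient is multiplicative on the hazard scale, a hazard ratio of 1.35 per unit implies roughly 1.35² ≈ 1.82 times the hazard for a worker two units higher on the scale than the reference worker.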
Furthermore, the Lovelock and Houghton (2017) review of the industry highlighted the prevalence among workers of health conditions such as hypertension and diabetes, poor lung function due to high levels of smoking, and high levels of substance abuse. All of these conditions have some association with stress, as lifestyle responses to mental distress (Forastieri 2016; Leka & Jain 2010; Solar & Irwin 2010). The 2014 NHABS (Nielsen 2015a) also lists fatigue, ill health, stress and addictions as barriers to improvement in health and safety outcomes, and notes that emotional and physical stress is of high concern to those working in the industry.

Mental distress and strain are also known to have significant negative impacts on business profitability and sustainability (Leka et al. 2015; Pfeffer 2018; World Economic Forum 2008). Presenteeism (presenting for work while sick or injured) has been shown to reduce worker productivity, with a cost impact four times greater than that of directly treating the condition (Edington & Burton 2003). The latest NHABS reported that 53% of forest workers surveyed had worked while sick or injured and 46% had worked while overtired (Nielsen 2018). Similarly, a reduction in psychological health has been associated with the sort of risky and dangerous behaviour that can lead to both accidents and quality loss arising from adverse events (Du Plessis et al. 2013; Forastieri 2016; Leka et al. 2015). The same study that identified exposure to risks and hazards as a risk factor for mental distress also found an association between mental distress and risky and dangerous behaviour (Nahrgang et al. 2011). Finally, workers exposed to hazardous psychosocial environments are less likely to engage in re-training or further learning (Leka et al. 2015). This should be of concern to an industry looking to adapt to physical safety risks through the introduction of mechanised harvesting systems (Steepland Harvesting Programme 2018) and to increase its workforce to take advantage of growth opportunities (Harris 2017; Moore 2017).

Health impacts of psychosocial workplace conditions

Within the international forest industry there are several studies that examine the relationship between psychosocial workplace conditions and workers' health in forest and logging operations. Although none employ the psychosocial hazard framework outlined in Table 1, all consider factors that fit within that framework. Elements such as psychological demand, intellectual discretion and exposure to risks and hazards have been associated with disorders of the neck/shoulders and lower back (Hagen et al. 1998), mental strain (Inoue 1996), and reduced job or life satisfaction (Mylek & Schirmer 2015). These international findings fit with the work of Lilley et al. (2002) on fatigue, work/rest patterns and recent injury and near-injury experience. That study found that 78% of participants reported experiencing fatigue sometimes, often, or always, with 19% experiencing fatigue often or always. There was also a significant association between self-reported near misses in the previous 12 months and the reported level of fatigue experienced at work. Getting eight hours of sleep and taking breaks was associated with reduced fatigue, but the majority of participants reported having seven hours or less sleep per night (and almost 25% reported six hours or less). These are all psychosocial conditions associated with reduced mental wellbeing.
Despite the paucity of research, there is enough evidence to suggest forestry workplaces contain psychosocial hazards that are harmful to mental health and that these hazards fit within the European Framework (see Table 1). Research in the New Zealand forest industry (Lilley et al. 2002; Lovelock & Houghton 2017; Nielsen 2015b) suggests these conditions also apply to local forestry workplaces.

However, some research within the forest industry highlights a key difference between physical hazards and the psychosocial domains listed as hazards in Table 1. As noted above, the risk management objective for a physical hazard is to reduce the potential of that hazard negatively impacting the health condition of the worker (Health and Safety at Work Act 2015). The goal is for the worker to go home at the end of the day in the same health state as when they arrived. On the other hand, effective management of psychosocial risks creates the potential for the worker to go home with enhanced wellbeing (Bentley et al. 2019; Leka et al. 2015).

In their study of the impact of job content (see Table 1) on logging machine operator wellbeing, Hanse and Winkel (2008) found that daily task variety, job rotation and access to breaks when required were all positively associated with job satisfaction to a statistically significant degree. They also found a statistically significant positive association between job control and job rotation with reduced musculoskeletal symptoms, and between job rotation and access to breaks with reduced headaches and sleeping problems. Overall, job rotation - defined as operating a shift system that broke up machine operating hours, altering tasks to reduce machine operating tasks, and restricting or controlling the number of machine operating hours - had a positive impact across all three measures of wellbeing in the study.

Similarly, in their survey of Australian forestry managers and workers, Mylek and Schirmer (2015) found a number of work context elements (see Table 1) were associated with improved wellbeing. Participants who felt they had more control over their work, reported a better work-life balance and were more satisfied with their income also reported higher life satisfaction and general health. Other psychosocial conditions that were significantly associated with higher life satisfaction included job security, a positive workplace culture (defined as confidence in being able to express views), a felt level of social support, higher work efficacy and a positive work-related social identity. Interestingly, only job control, work-life balance, income, a positive culture, and work-related efficacy were positively associated with general health.

What these results reflect is that psychosocial factors can be managed, not just to reduce work-related stress but to promote worker engagement, "a persistent, positive affective-motivational state of fulfilment that is characterised by the three components of vigour, dedication and absorption" (Maslach & Leiter 2016, p. 104), as a state of wellbeing. Furthermore, Nahrgang et al. (2011) found engagement was positively associated with reductions in risky and unsafe behaviour, adverse events, and accidents and injuries. If psychosocial risk management is approached with engagement as the goal, psychosocial factors can switch from hazards to be eliminated to protective factors that can be pursued, not only to protect workers from harm but also to promote wellbeing.
Changing Psychosocial Conditions - from harmful to "well-ful"

While the Health and Safety at Work Act 2015 makes it clear that any forest owner or manager must ensure, to a reasonably practicable extent, the mental and physical health of those working in the forest, the nature of their relationship with the workforce does not fit easily with the 'employee' focus of the psychosocial risk framework. A forest owner or manager could easily be forgiven for thinking that psychosocial risks exist only within the organisations and workplaces in which the workers are directly employed. That highlights a specific weakness of this approach to thinking about work-related stress: the model assumes that all of the stress experience captured in the research originates within the worker's immediate work context (Theorell et al. 2015). This is of concern for the forest industry, as it ignores the broader social structures and systems (e.g. piece rate contracts) that may drive those risk factors in the immediate work context. It also overlooks the ways people exist within adjacent systems that may have highly permeable boundaries. With such a narrow view of context, the focus falls on the individual and what can be done to enable individual coping (Harkness et al. 2005). As a result, workplace wellbeing interventions typically seek either to modify micro-organisational factors (e.g. decision latitude and social support) that surround the individual (Väänänen et al. 2014) or to enhance the individual's ability to cope through counselling or stress management techniques (Harkness et al. 2005). Macro-organisational and wider social system issues are often not addressed, and the opportunity to eliminate stress through removal of the stressors in the wider context is not considered (Dewe et al. 2010).

In considering how psychosocial factors could be managed to the benefit of both individuals and the organisations in which they work, it is important to recognise that workplaces sit within an ecological system, in relationship with all other parts of that system. Ecological systems theories, such as that proposed by Bronfenbrenner (1977), explain human behaviour by recognising that individuals always act within these larger social and ecological systems (Figure 1). To understand behaviour it is also important to understand the nature of the institutions and social structures within each level of the system and the ways in which those levels interact and may reinforce each other (Golden & Earp 2012). Stokols (1992) argues that the social, physical, and cultural aspects of this multilayered environment each have a cumulative effect on health. There are consequently multiple influences on specific health behaviours and outcomes, and multiple opportunities to intervene. Achieving change will require interventions at a number of different points within the system (Sallis et al. 2008). Unfortunately, interventions for worker wellbeing within the New Zealand forest industry, as guided by the legislation and the traditional conception of workplace mental health, are focused almost entirely on the specific settings in which people work. Yet what the ecological perspective shows is that health is determined as much by what goes on in the mesosystem (where those settings interact, see Figure 1) and by the social, political and cultural settings of the exo- and macrosystems as by what goes on within the specific work setting.
The significance of this point for designing wellbeing interventions can be illustrated by considering the interactions between the various industry players. Lilley et al. (2002) confirmed that the total workday length for forestry workers in New Zealand was increasing, that there were substantial groups of workers whose break times were compromised, and that there had been a reduction over the preceding ten years in the number of workers getting two consecutive days off in every seven. Hide et al.'s (2010) study of cable logging operations noted the inconsistent break times, and that work pace and workload were often driven by the pace of the adjacent workstations. These are all factors directly controllable within the workplace (the microsystem, in Bronfenbrenner's framework, see Figure 1). However, they also pointed to the impact of elements beyond the direct control of the contractor. The challenge of achieving daily piece rate targets, working on sites with limited operating and storage space, and bottlenecks in the downstream supply chain all directly impacted the working day length. These conditions arise from the mesosystem (interactions within the supply chain) and the exosystem (outsourcing operations using piece rate contracts). Furthermore, long commutes were found to increase the length of the workday, suggesting that urbanisation, a macrosystem change, was adding to the problem.

Ecological systems theory also helps explain the impact of interactions between work, family, the community, and wider societal issues such as gender and socio-economic status. Lovelock and Houghton (2017) identified that the poor health and safety outcomes in the New Zealand forest industry may originate with psychosocial stressors outside of the workplace. These included high drug use in workers' families and communities, insecure and overcrowded accommodation, and conflict with unemployed family members. Studies from outside the industry have also highlighted the potential of family conflict to reduce the cognitive resources available to an employee at work (Du et al. 2017). While confirming partner conflict as a predictor of wellbeing (in this case, using burnout as the measure of wellbeing), Rössler et al. (2015) also found an association with never having been married. This suggests that it is not only what goes on in families that impacts worker performance and wellbeing (Kinnunen et al. 2006) but also the structure of the family itself.

As ecological systems theory indicates, work can also impact wellbeing within settings outside work. What appear to be unhealthy lifestyle choices (e.g. smoking, drug and alcohol use, a carbohydrate-dense diet associated with obesity, diabetes and hypertension) could be, in part, a coping response to stress arising from work or from the situations workers find themselves in as a result of the way their work is organised (Forastieri 2016; Leka & Jain 2010). Construction workers in Australia have linked several personal health issues, including increased use of alcohol, to the pressures of long working hours (McKenzie, 2008, as cited by Du Plessis et al. 2013). Evaluations of health promotion programs within male-dominated industries in Canada and Australia have also found that while workers recognise the importance of healthy lifestyle choices for their physical and mental health, they face a number of obstacles in making those choices (Lingard & Turner 2015; Seaton et al. 2019).
[Figure 1. Levels of the ecological system (after Bronfenbrenner 1977): Micro - specific settings such as place, time, physical features, activity, participant, and role (e.g. a specific logging crew); Meso - interactions between the major settings an individual inhabits (e.g. specific log supply chains); Exo - specific social structures that impinge upon or encompass the immediate settings in which people are found (e.g. the forest industry, WorkSafe, FISC).]

Low socio-economic status, long work hours that interfere with family commitments, and socio-cultural constructions of masculinity that emphasise material success can contribute to a culture that inadvertently promotes unhealthy diets, alcohol misuse, risk taking, and stoicism in the face of difficulties (Du Plessis et al. 2013; Iacuone 2005; Kolmet et al. 2006; Lingard & Turner 2015; Seaton et al. 2019). Lovelock and Houghton (2017) pointed to a similar conflict between the imposition of safety rules on New Zealand logging crews and the socio-cultural constructs operating within those crews (e.g. the role of experience in establishing crew hierarchy). Lingard and Turner (2015) concluded that the underlying environmental causes of construction workers' unhealthy behaviours may be structural and that health promotion initiatives designed to change workers' health behaviour will consequently be of limited effectiveness. This could well apply to the New Zealand forest industry.

As stated above, an ecological approach suggests that improving workers' health outcomes will require intervening at multiple places within the system (Sallis et al. 2008). Poor mental health at work will most likely reflect multiple psychosocial factors, some of which will be located outside of the direct relationship with the employee or, indeed, outside of the workplace entirely, in families, the community and society more generally (Forastieri 2016; Lingard & Turner 2015; Sallis et al. 2008). However, it is also important to recognise that the benefits of successful interventions are likewise likely to accrue in multiple places within the system. Leka et al. (2015) argue that successful psychosocial risk management can result in benefits to organisational productivity and quality. A study of 7000 Polish machine operators using the European Framework for Psychosocial Risks set out in Table 1 highlighted the inverse relationship between the level of psychosocial risk reported by the participants and their reported levels of commitment to, and enjoyment of, the work and their workplace (Mościcka-Teske et al. 2017). While the target setting for intervention may be in the forest, benefits such as improvements to productivity, quality and worker commitment will flow beyond the immediate employer to the forest owner and industry level. Similarly, psychosocial protective factors experienced at work have the potential to spill over into the family environment through enhanced mood and skills such as time management (Kinnunen et al. 2006) or self-esteem and social support (Ten Brummelhuis & Bakker 2012).

Given this complexity, improving the psychosocial factors within forestry workplaces will mean looking beyond the day-to-day work settings and workplaces in which forest workers are engaged and considering the forest management practices and operations that impact the way work is organised and completed. Figure 2 sets out some aspects of forest management practice that have the potential to influence the psychosocial risk factors for stress.
They represent risk factors because of their potential impact on the relationship with the contractor, particularly with respect to the contractor's profitability, the balance of power within the contract, and its impact on business risk. Examples of the ways in which risk is transferred to the contractor through the contract include the setting of a production target as the basis for payment, and the forest owner / manager's engineering of the work site, particularly the quality of the access and, for harvesting, the setting layout, the maximum and average haul distances, and the skid size. Some of these elements of risk involve decisions made with information gathered for the forest owner's own purposes, which may not be fit for purpose for managing the contractor's risk (e.g. inventory data). Some key decisions may be made in the absence of data or evidence (e.g. estimating production targets without prior productivity measurement). The forest owner / manager may also retain control of the sources of risk even though the consequences of that risk have been handed over to the contractor (e.g. establishing piece rates using production when payment is actually based on uplift, and the trucking and delivery is directly contracted and managed by the forest owner). Elements of the forest owner's / manager's own risk can be mitigated by passing some of that risk to the contractor (e.g. the need for layoffs during a market downturn). Risk is also imposed on the contractor through the terms of the contract, including the crew day rate used as the basis for the piece rate and the way in which perceived transgressions against the contract conditions are dealt with (e.g. stand-downs).

The allocation of risk between the forest owner / manager and their contractors can be thought of as an expression of the forest owner / manager's psychosocial safety climate, which refers to the "shared perceptions of organisational policies, practices and procedures for the protection of worker psychological health and safety that are largely driven from senior management" (Idris et al. 2012, p. 19). The terms and conditions of the contract have a material impact on the demands made on contractors and their workers (e.g. work pressure resulting from production targets or throughput) and on the resources available to them (e.g. profitability, cashflow, skills, machinery, work study data, control over site layout). The Job Demands-Resources model (Bakker & Demerouti 2007) describes how work-related stress arises from the balancing of demands and resources: when demands outweigh resources, stress results. Being inherently motivational, resources can offset the costs associated with demands and generate engagement. The potential for the work contained in a contract to have a positive impact on wellbeing is therefore established largely through the process of building the relationship between the forest owner / manager and the contractor, and then capturing that relationship within the contracting processes.

Figure 2 also suggests that working directly with communities may be required to ensure interventions in the workplace are successful (Sallis et al. 2008). There is a need to engage with the workforce and their communities in a socially and culturally aligned manner (Wold & Mittelmark 2018).
The New Zealand forestry industry is dominated by men who often conform to the dominant constructs of working-class masculinity in Aotearoa New Zealand, irrespective of whether that helps or hinders the industry's efforts to mitigate the health risks of stress. Working with that dominant construct means involving those men in the design, decision making and implementation of any efforts to mitigate mental health risks. Fortunately, there are some good examples of successful mental health initiatives (mostly focused on suicide prevention) centred on male participant empowerment, such as the Community Response to Eliminating Suicide (CORES) programme developed in rural Tasmania (Jones et al. 2015) and the Mates in Construction initiative developed in the Queensland construction industry (Martin et al. 2016). Technology is also being used in mental health prevention and care to overcome obstacles to accessing help services (Luxton et al. 2011). Working with communities also means recognising that the community's contribution to health and wellbeing involves infrastructure and services such as housing, schools and health centres (Solar & Irwin 2010), and that businesses are increasingly playing a role in the development of community capability as community partners (Lee 2011).

Implications

If the New Zealand forest industry accepts that its workers operate in conditions that pose a mental health risk, then ecological systems theory can be used as a basis for turning that risk into an opportunity to enhance the industry's value and social licence. However, it has not been the intention of this paper to be specific about recommended interventions. While the little research that does exist about the psychosocial conditions within New Zealand's forestry workplaces suggests that they can be understood through the internationally recognised frameworks, Lovelock and Houghton (2017) show that even well-intentioned initiatives, such as the imposition of greater controls around safe practice, can be met with resistance if they do not fit with the socio-cultural constructs in operation for this particular workforce. Socio-cultural constructs of gender have also been implicated in the resistance of Australian construction workers to making healthier eating choices (Lingard & Turner 2015). As the research reviewed here indicates, proceeding with worker wellbeing interventions in the absence of an ecological perspective carries some risk. Further research that aims to understand what those who work in the bush perceive as their biggest threats and challenges, and what they regard as their coping resources and obstacles, is required before interventions can be prescribed with confidence.

Conclusions

This paper has summarised what the extant research can tell forest managers in New Zealand about stress and its various expressions in the workplace, where it is both a potential risk and a potential opportunity. It has assessed that risk by looking at the health and safety consequences of mental distress and by examining what is known about psychosocial hazards within forestry workplaces. It then suggested that mitigating those risks will require going beyond harm reduction as a strategy, to thinking about psychosocial factors as possible drivers of a more engaged and committed workforce.
Interventions aimed at taking advantage of those opportunities, both within forest management practice and in the environments outside the workplace, will require thinking beyond the contracts engaging the workforce, focusing instead on the risk factors inherent in forest management practice and on the communities in which workers reside. The rewards for doing so go beyond compliance with health and safety legislation. At its heart, the provision of safe and healthy work environments is a moral and ethical issue. As noted earlier, the International Labour Organisation's principles are that work should take place not only in a safe and healthy working environment, but also in an environment that offers real possibilities for personal achievement, self-fulfilment and service to society (Forastieri 2016). In other words, the imperative in health and safety management is to go beyond ensuring that workers survive to enabling workers to thrive. The New Zealand forest industry has an opportunity to go beyond the harm-reduction focus of the current legislation by promoting worker health and wellbeing, and this should enhance both the industry's economic performance and its environmental sustainability.
An Algerian Corpus and an Annotation Platform for Opinion and Emotion Analysis

In this paper, we address the lack of resources for opinion and emotion analysis related to North African dialects, targeting Algerian dialect. We present TWIFIL (TWItter proFILing), a collaborative annotation platform for crowdsourcing the annotation of tweets at different levels of granularity. The platform allowed the creation of the largest Algerian dialect dataset annotated for sentiment (9,000 tweets), emotion (about 5,000 tweets), and extra-linguistic information including author profiling (age and gender). The annotation also resulted in the creation of the largest Algerian dialect subjectivity lexicon, of about 9,000 entries, which can constitute a valuable resource for the development of future NLP applications for Algerian dialect. To test the validity of the dataset, a set of deep learning experiments was conducted to classify a given tweet as positive, negative or neutral. We discuss our results and provide an error analysis to better identify classification errors.

Introduction

Currently, there are more than 4 billion Internet users worldwide. More than 50% of the North African population has access to the Internet, and the region has seen a growth of more than 17% in the number of social media users compared to 2017 [1]. In Algeria, more than 50% of the population are registered users on different social platforms, and around 46% of them use mobile devices for such activity [2]. These numbers represent a growth of 17% in social media use and of more than 19% in the use of mobile devices for such platforms. Twitter use in Algeria reached 8.73% in August 2019, compared to 2.96% in August 2018; the number almost tripled over a year, making Twitter the third most used platform by active social media users [3].

On social media platforms, 76% of users express their sentiments by clicking corresponding buttons when available, such as "Like" or "Dislike". Around 50% express views or sentiments using "emoticons", "emojis" or "smileys". Across the Arab region, more than 30% of users use Arabic script, 26% use Latin script (mostly English and French), and about 15% combine both (Salem, Feb 5 2017). Compared to other Arabic dialects, the North African dialects have further peculiarities, as several languages are used in everyday conversations. For example, the expression "Nro7o ensemble?" combines the French word "ensemble", meaning "together", with the Arabic word "nro7o ( )", so that the whole expression means "shall we go together?".

Sentiment analysis and emotion detection in Arabic have been widely studied (Baly et al., 2017; Al-Smadi et al., 2018; Abo et al., 2018). Most related work focuses on Modern Standard Arabic (MSA), although a few studies have investigated Arabic dialects, such as Jordanian (Atoum and Nouman, 2019; Duwairi, 2015), Egyptian (Shoukry and Rafea, 2012), Iraqi (Alnawas and Arici, 2019), Levantine (Baly et al., 2019; Qwaider et al., 2019) and Tunisian (Medhaffar et al., 2017). North African dialects, including Algerian dialects (ALGD), are less normalised compared to MSA. They have been enriched by many languages over the years, which has resulted in a complex linguistic situation.

Footnotes: [1] https://wearesocial.com/blog/2018/01/global-digital-report-2018 (visited on 23 November 2019); [2] https://www.slideshare.net/wearesocial/digital-in-2018-innorthern-africa-86865355; [3] http://gs.statcounter.com/social-media-stats/all/algeria/2019
Also, we found a significant lack of resources, such as lexicons, dictionaries, and annotated corpora, for most of these dialects. In this paper, we address the lack of resources for opinion and emotion analysis related to North African dialects, targeting Algerian dialect. We present TWIFIL (TWItter proFILing), a collaborative annotation platform for crowdsourcing the annotation of tweets at different levels of granularity. The platform allowed the creation of the largest Algerian dialect dataset annotated for sentiment (9,000 tweets), emotion (about 5,000 tweets), and extra-linguistic information including author profiling (age and gender). The annotation also resulted in the creation of the largest Algerian dialect subjectivity lexicon, of about 9,000 entries, which can constitute a valuable resource for the development of future NLP applications for Algerian dialect.

This paper is organised as follows: Section 2 provides a general overview of opinion and emotion analysis (OEA) for ALGD; Section 3 introduces the specificities of ALGD; the annotation platform is described in Section 4 and the experiments in Section 5. We conclude with some perspectives for future work.

Related Work

Over the years, OEA has been used in a wide variety of applications, such as marketing and politics. These applications have inspired several methods, ranging from lexicon-based approaches (Al-Moslmi et al., 2018) to corpus-based ones (Abdul-Mageed and Diab, 2012) and, more recently, deep learning (Al-Smadi et al., 2018). As mentioned in the introduction, numerous studies on Arabic sentiment analysis have been carried out in recent years (Abdul-Mageed and Diab, 2012; Nabil et al., 2015; Badaro et al., 2018). The Arabic dialects are varieties of MSA with less normalisation and standardisation (Saadane and Habash, 2015); they differ from MSA on all levels of linguistic representation, from phonology and morphology to lexicon and syntax. It is worth mentioning that the highest proportion of available resources and research publications in Arabic OEA is devoted to MSA. Among the Arabic dialects, the Middle-Eastern and Egyptian dialects have received the largest share of research effort and funding, while very little work has been conducted on the OEA of the Maghrebian dialects (Medhaffar et al., 2017). In addition, research into ALGD is rare, which has resulted in a lack of resources. The proposed Arabic OEA approaches focus mainly on MSA, and only a few Arabic dialects have been explored: Jordanian (Atoum and Nouman, 2019; Duwairi, 2015), Egyptian (Shoukry and Rafea, 2012), Iraqi (Alnawas and Arici, 2019), Levantine (Baly et al., 2019; Qwaider et al., 2019) and Tunisian (Medhaffar et al., 2017). Nevertheless, the community is paying more and more attention to the Arabic dialects, with competitions such as the first task of the 2018 Semantic Evaluation workshop [4], which included five sub-tasks on inferring the affectual state of a person from their tweet: 1. emotion intensity regression; 2. emotion intensity ordinal classification; 3. valence (sentiment) regression; 4. valence ordinal classification; and 5. emotion classification. For each sub-task, labeled data were provided for English, Arabic, and Spanish (Mohammad et al., 2018). North African countries are known for their diversity of spoken dialects, which in recent years have generated huge volumes of written data on social media; Algerian Arabic in particular is widely used on social networks.
In (Qwaider et al., 2019), the authors studied the feasibility of taking MSA approaches and applying them directly to a Levantine corpus. As expected, they obtained no more than 60% accuracy; however, when they tested different machine learning algorithms, they reached an accuracy of 75.2%. The same approach was initially adopted to tackle ALGD, where the OEA methods applied to ALGD were the same as those applied to MSA. At first this seemed promising, but it yielded significantly lower performance (Saadane and Habash, 2015), so it was deemed necessary to develop solutions and build resources specifically for the OEA of ALGD. (Saadane and Habash, 2015) proposed a list of phonetic rules to facilitate automatic translation between Algerian Arabic and MSA, in both directions; such tools could be used in several Natural Language Processing (NLP) applications, including OEA. The authors rely on the CODA (Conventional Orthography for Dialectal Arabic) spelling model proposed by (Habash et al., 2012) for the Egyptian dialect. Furthermore, (Zribi et al., 2014) extended the CODA guidelines to take into account the Tunisian dialect, and (Jarrar et al., 2014) adapted them to a further dialect; translation solutions between MSA and ALGD, in both directions, have also been proposed.

Mataoui et al. (2016) presented a lexicon-based sentiment analysis approach for vernacular Algerian Arabic that addresses specific aspects of the ALGD used on social networks. A manually annotated corpus, three lexicons (a negation-words lexicon, an intensification-words lexicon, and a list of emoticons with their assigned polarities), and a dictionary of common phrases of the ALGD were proposed and tested for polarity computation. Rahab et al. (2017) proposed an approach to annotate Arabic comments extracted from Algerian newspaper websites as positive or negative. For this work, they created an Arabic corpus named SIAAC (Sentiment polarity Identification on Arabic Algerian newspaper Comments) and tested two well-known supervised learning classifiers, Support Vector Machines (SVM) and Naive Bayes (NB), using different parameters and various measures (recall, precision and F-measure) to compare and evaluate the results. In terms of precision, the best results were obtained using SVM and NB, and the use of bigrams was shown to increase the precision of both models. Furthermore, when compared to OCA (Opinion Corpus for Arabic (Rushdi-Saleh et al., 2011)), SIAAC showed competitive results. Guellil and Azouaou (2017) proposed an automatic parser for the ALGD, which they called "ASDA" (Syntactic Analyzer of the Algerian Dialect), and which labels the terms in a given corpus; their work presents a table containing, for each term, its stem and its different prefixes and suffixes. The goal of such work is to help determine the different grammatical parts of a given text, in order to perform automatic translation of the ALGD. (Guellil et al., 2018) proposed a simple polarity calculation method for corpus annotation. It is a lexicon-based approach where the lexicon is automatically created from the English lexicon "SOCAL" (Taboada et al., 2011): words were translated into Arabic while their polarity remained the same, and the generated lexicon was then used to annotate the corpus. It is clear from studying related work that publicly available resources for sentiment analysis in ALGD are rare.
Those which are available, such as (Mataoui et al., 2016), give only the polarity of the collected comments, without any information on the emotion expressed or on the user expressing the opinion; the same holds for the resource proposed by (Guellil et al., 2018). Therefore, we propose the first and largest Algerian corpus annotated at both the sentiment and emotion levels, as well as at the extra-linguistic level (age, gender, etc.).

Algerian Dialect Specificities and Challenges

Algerian Arabic, or Algerian dialect, is considered less normalised and standardised than MSA. It has a vocabulary inspired by Arabic, but the original words have been altered phonologically and morphologically (Meftouh et al., 2012). Algerians express themselves in several languages: Arabic, French, English, and Tamazight, the original language of the first inhabitants of the region. Tamazight is itself divided according to regions, for example Kabyl, Chaoui, Mzabi and Tergui. More than 99% of Algerians have Tamazight or ALGD as their native language: about 73% of the country's population speak ALGD while 27% speak Tamazight [5]. The ALGD is a mixture of Turkish, Italian, Spanish, English and French, although it is mainly Arabic; words from other languages, such as Japanese and Korean, are also adopted by fans of those cultures. The situation is much the same for Tunisians and Moroccans, although Egyptians do not use as much French. The following properties are not exclusive to the Algerian dialect.

• Code-switching: North Africans alternate between two or more languages, or language varieties, in the context of a single conversation. This is illustrated in the following example: "C'est bon ". The user has used an Arabic expression " " and a French expression "C'est bon", which together mean "It tastes good, thank you". The Algerian dialect is also formed of words transformed from the languages that have inspired Algerians through the ages. Take the word " ", which is inspired by the Arabic word " " meaning "ear", where the first letter was changed. This phenomenon is known in linguistics as "intra-word switching" (Sankoff and Poplack, 1981), where a switch can occur in one or more places within the same word.

• Encoding a language in the letters of another language: either Arabic expressions encoded in Roman letters, known as "arabizi" (a form of romanisation), or the opposite, where foreign expressions are written in Arabic script. As an example of arabizi, we have "ya3tik lsaha", written in Arabic as " " and meaning "thank you"; conversely, " ", written in Arabic script, refers to the English expression "bye bye".

• The combination of the two, code-switching and encoding a language in the letters of another: "sba7 l5ir ça va?" mixes the Arabic expression "sba7 l5ir ( )", meaning "good morning", with the French "ça va?", meaning "how are you?".

• The use of numbers instead of letters or words: this phenomenon has been observed with the proliferation of mobile phones and the social web, where users have come to use more and more abbreviations. Since some numbers resemble certain letters and syllables, they are used to replace those letters and syllables. Table 3 gives examples of the meaning of each number with its use.

• Derivatives of the Algerian dialect: North Africans speak a variety of dialects in each region. In Algeria, each area is characterised by its own spoken variation of the dialect; Eastern and Western areas speak with totally different accents. For example, for the word "woman": in the East she is called " ", pronounced "m'ra"; in the West, " ", pronounced "sheera".
• Social media chat language: social web users, especially the young, use many emoticons and emojis. Besides the abbreviations already mentioned, social media has its own language. Since emoticons help express an emotion in a single character, their use has spread widely. "Hashtags" are used to find, follow, and contribute to a conversation, while "sharing/retweeting" a post is a way of showing support, participating, or even trivialising the post. Another characteristic of social chatting is the use of capital letters, the Internet code for yelling and shouting; in most cases this is considered rude, while in other instances typing everything in capitals conveys the importance of the text. There are other methods of emphasising a word or a text, such as the use of *asterisks* and s p a c i n g words out, or letter repetition to convey non-verbal signals (joy, anger, etc.). Letter repetition is used to overstate comments, for example: yaaaaay, stoooop [6].

• Idioms and expressions, which are mostly used for sarcasm, or to suggest something indirectly or covertly. For instance, " " is a way of calling someone boring, where the expression is a common name.

Above all, there is the possible existence of more than one language in the same sentence. The many possible writing styles, the possible writing errors, and the new words that frequently appear make the Algerian dialect very difficult to understand and very complex to process automatically. These linguistic diversities call for special attention, and they make the spoken and written dialects very rich and varied languages [7].

Contribution

Tools and resources are essential if progress is to be made in this field of research. To ensure the credibility of the resources, crowd-sourcing was considered; hence, an open platform for manual annotation, which we called "TWIFIL", was created. The three main contributions of our work are:

• a crowd-sourcing annotation platform;
• multi-grained annotations, made at both word and tweet level;
• multi-level annotations, including sentiment, emotion and extra-linguistic information (age, gender and topic).

TWIFIL

TWIFIL (TWI from Twitter, representing the social media source, and FIL from proFILe, meaning that a profile is attached to the published data: age of the author, gender, etc.) is a public platform accessible to everyone through the web [8] or mobile [9]. It was created to facilitate the generation of Algerian dialect resources (corpus, dictionary, lexicon), but also to help researchers annotate their own data. The annotators were given guidelines on how to annotate each text, along with a description of each category (polarity, emotion, etc.) and examples of already annotated texts of the same category. To respect users' control and privacy, the texts of the tweets were the only data displayed. The administrators or the corpus holders can validate or ignore an annotation based on its consistency with regard to OEA; this was implemented to help recheck the relevance of the annotations.

Footnotes: [8] https://twifil.com; [9] shorturl.at/jntMY
The annotation guidelines are as follows:

• the sentiment polarity of the shared text, labeled on a scale from -10 to +10;
• the opinion class (positive, negative, neutral);
• the emotion felt by the reader of the text, labeled as joy, anger, disgust, fear, sadness, surprise, trust, anticipation, love, or neutral; we followed the Plutchik eight-emotion set (Plutchik, 1984), to which we added love and a neutral class to account for factual tweets;
• the topic of the text (politics, sports, diverse, etc.);
• the age of the author, labeled using age classes ([12-20], ...);
• the gender of the author (male, female, other).

The dialect lexicon is annotated in practically the same way, without age or gender. Annotators provide their impressions regarding the polarity and the emotion of a word or an idiom from the ALGD. New words can be added to the dictionary (subject to validation by an admin), together with different spellings of the words and different related words. Annotators can also add idioms with their descriptions, to facilitate the comprehension and use of the idiom. The platform also allows users to upload their own corpora to be annotated.

The Generated Annotated Corpus

The data displayed on the TWIFIL platform are tweets collected through the Twitter API, using both the standard and the streaming endpoints. TWIFIL holds more than 140k tweets collected using geo-tagging and keywords; the set of keywords contained the names of well-known figures from politics to arts and sports, the names of some places and local events, etc. In the end, we collected tweets posted between 2015 and 2019, obtained from different random geo-locations in Algeria. With the help of 26 annotators, it was possible to generate a corpus and a lexicon, which were validated by the admins of the platform. Considering only tweets annotated by at least three different annotators, the labels of the corpus were assigned according to a majority vote: a label had to be used more than once, otherwise the tweet was not selected to be part of the corpus (examples can be found in Table 2).

Data statistics

As mentioned, a corpus of 9,000 annotated and validated tweets was built for sentiment: the corpus has 4,350 positive tweets, 2,615 negative tweets and 2,191 neutral tweets. Table 3 gives the details for the tweets annotated for emotion analysis. For age and topic, we collected about 300 annotated tweets, and for gender more than 700 (413 male, 255 female and 36 other).

The Generated Annotated Lexicon

Our approach constructs a lexicon containing words in both Arabic and Latin letters together with their polarity, emotion, and different spellings, starting from the words of the lexicon proposed by (Mataoui et al., 2016), which contains 5,027 words, without retaining their original polarities, since we used a different scale. The lexicon was enriched by the TWIFIL users and now counts about 9,000 terms and expressions of the ALGD (examples can be found in Table 4). We followed the same approach as for the generation of our corpus, with the labels of the words chosen by majority vote.

Experiments and results

The experiments undertaken exploited the sentiment corpus and are as follows. First, we implemented and tested not only SVM with different data representations (binary, frequency, etc.) but also SVC (Support Vector Classification) (Chang and Lin, 2011), an adaptation of SVM for classification problems.
Second, we built a Multi-Layer Perceptron (MLP) sentiment classifier based on different neural architectures and different data representations. Third, we explored lexicon-based methods to compare results. The lexicon-based method consists of adding two columns to the bag-of-words (BOW) vector: the first represents the number of negative words in the tweet, calculated using the proposed lexicon, and the second represents the number of positive words in the tweet. Finally, we evaluated whether deep learning models achieve better performance for Algerian OEA than other state-of-the-art approaches.

Deep learning (DL) is a recent sub-field of machine learning that has grown out of artificial neural networks, and many researchers have studied DL for OEA in recent years. We also aim to improve the OEA of ALGD by combining the tested DL models with various preprocessing techniques. For this, two DL models are used, namely CNN and LSTM; we implemented the classical CNN and LSTM architectures introduced in (Zhou et al., 2015). We also used word embeddings (WE) as part of our deep learning models, via the Keras Python library, specifically its Embedding layer [10]. This layer requires the input data to be numerically encoded, so we used word frequencies; the Embedding layer is initialised with random weights and learns an embedding for all of the words in the training dataset. Since researchers have recently started exploring contextual embeddings, we also tested BERT (Bidirectional Encoder Representations from Transformers), a language model introduced by Google that has recently been added to TensorFlow Hub, which simplifies its integration into Keras models; we tested the BERT-Base, Multilingual Uncased model. We therefore separately tested each of these algorithms, namely SVM, MLP classifiers, convolutional neural networks (CNN) and long short-term memory networks (LSTM), on ALGD.

Data Pre-treatment and Methodology

Worldwide, expressed opinions and comments constitute a valuable mine of information. However, the majority of the text produced on social websites is unstructured or noisy, owing to the lack of standardisation, spelling mistakes, missing punctuation, non-standard words, repetitions and more; such text therefore needs special treatment. The purpose of this stage is to prepare the data for the following step, the classification of tweets: for instance, "Oooh chaba bzaf", which translates to "ohh it is very beautiful", should be recognised as positive, while the sentence " ", meaning "why do you do this?", should be classified as negative. To correctly classify these and other sentences, we perform a set of treatments: 1) text treatment, and 2) transformation of the texts into a machine-readable (binary/digital) format. The steps undertaken are detailed in the following:

• Filtering: replacement of URL links (e.g. http://example.com) by the term "link" and of Twitter user names (e.g. @pseudo, with the symbol @ indicating a user name) by the term "person".
• Cleaning: removal of all punctuation marks as well as of exaggerations, such as "heyyy", which is replaced by "hey"; consecutive white spaces were also removed.
• Tokenization: segmentation of the text by splitting it on spaces to form our BOW.
• Removing stop-words: removal of articles (" ", " ", etc.) from the BOW.

Footnote: [10] https://keras.io/layers/embeddings/#embedding
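As a rough illustration of how these pre-processing steps chain together, the following Python sketch applies them in order. It is a minimal sketch of our own, not the authors' implementation: the regular expressions, the function name preprocess, and the placeholder stop-word list (the Arabic articles are not reproduced here) are all assumptions.

```python
import re

# Placeholder stop-word list: the paper removes Arabic articles whose
# glyphs are not reproduced here, so these entries are illustrative only.
STOP_WORDS = {"el", "w", "ya"}

def preprocess(tweet: str) -> list[str]:
    """Minimal sketch of the filtering, cleaning and tokenization steps."""
    text = tweet.lower()
    # Filtering: URLs -> "link", @user mentions -> "person"
    text = re.sub(r"https?://\S+|www\.\S+", "link", text)
    text = re.sub(r"@\w+", "person", text)
    # Cleaning: drop punctuation, collapse letter repetitions ("heyyy" -> "hey"),
    # and squeeze consecutive white space
    text = re.sub(r"[^\w\s]", " ", text)
    text = re.sub(r"(\w)\1{2,}", r"\1", text)
    text = re.sub(r"\s+", " ", text).strip()
    # Tokenization: split on spaces to form the bag of words
    tokens = text.split()
    # Stop-word removal
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("Oooh chaba bzaf @friend http://example.com !!"))
# -> ['oh', 'chaba', 'bzaf', 'person', 'link']
```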
Fig. 2 shows the performance of the classifier during our experiments with and without the pre-processing treatments; an improvement can be seen with each treatment applied separately, and also when all of them are applied together. The results demonstrated that pre-processing strategies applied to the texts increase the performance of the classifiers. The collected data are then used to extract the characteristics on which the classifier is trained.

Table 4: An excerpt of the generated lexicon via TWIFIL
Word/expression and different spellings | Polarity | Polarity class | Emotion
Hayla == great / , , | 5 | Positive | joy
T3ayi == boring / , , t3ay | -5 | Negative | disgust
Ch7al == how much / | 0 | Neutral | Neutral

Figure 2: The evolution of the classifier's performance after each pre-processing step.

The existence of a word was used as a binary characteristic and was also taken as the baseline. Tests were performed on different information representation methods proposed in the information retrieval literature, such as the frequency of occurrence of a keyword, which is considered a more appropriate characteristic. During our review of approaches using this type of formatting, we found that (Pak and Paroubek, 2010) rejected the idea, and we quote: "the overall sentiment may not necessarily be indicated through the repeated use of keywords"; their work was based only on a binary representation. However, others have recently used such representations: (ElSahar and El-Beltagy, 2015) trained their classifiers using TF*IDF and word count, and their tests concluded that TF*IDF was the least performing method on a 3-class classification problem, while word count gave the best accuracy, reaching 60%. In addition, (Das and Chakraborty, 2018) compared the use of TF*IDF and word-existence representations; their experiments showed that TF*IDF was the best suited formatting for their problem. TF*IDF has been used as an alternative to the binary model, while for sentiment analysis the binary model has been widely used by several researchers; hence, we chose to test the different data representations used in the literature, namely binary, count, frequency and TF*IDF.

The result of the previous step is a vector of words, which in this step is transformed into a numerical vector by, firstly, using the same vector dimension for all texts and, secondly, replacing the words according to one of the following configurations: 1) binary: 0 or 1 to represent the presence of a term; 2) count: a simple count of the words in the text; 3) frequency: the frequency of each word as a ratio over the words within each text; 4) TF*IDF: term frequency-inverse document frequency, a statistic that reflects the importance of a word in a document, in our case the corpus.

Results and Discussion

This section presents the different results obtained with the different trained models, as well as the tests performed to choose the length of the BOW. The corpus had about 26,000 distinct terms, among which tests revealed 3,000 to be the most relevant; this vector size per tweet is what yielded the best performance in terms of accuracy (Acc). Table 5 shows the results of the different tests performed using different data representations (D-R): the first row gives the SVC results and the second the MLP results, in terms of accuracy. The last column gives the results of the lexicon-based (lex) method, which uses word frequency vectors concatenated with word-polarity count vectors.
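To make this feature construction concrete, the sketch below builds the four representations and appends the two lexicon-count columns used by the hybrid (lex) method. It is a hedged illustration: the paper does not name an implementation library, and the scikit-learn calls, toy corpus, and toy lexicon sets POS and NEG here are our own assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

tweets = ["hayla bzaf", "t3ayi bzaf bzaf", "ch7al hayla"]    # toy corpus
POS, NEG = {"hayla"}, {"t3ayi"}                              # toy lexicon entries

binary = CountVectorizer(binary=True).fit_transform(tweets)  # 1) presence/absence
counts = CountVectorizer().fit_transform(tweets).toarray()   # 2) raw word counts
freq = counts / counts.sum(axis=1, keepdims=True)            # 3) within-tweet frequency
tfidf = TfidfVectorizer().fit_transform(tweets)              # 4) TF*IDF

# Hybrid (lex) features: per-tweet counts of positive and negative
# lexicon words, appended to the frequency vectors.
def lexicon_counts(tweet):
    tokens = tweet.split()
    return [sum(t in POS for t in tokens), sum(t in NEG for t in tokens)]

hybrid = np.hstack([freq, np.array([lexicon_counts(t) for t in tweets])])
print(hybrid.shape)  # (3, vocabulary_size + 2)
```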
As shown, exploiting the lexicon to create a hybrid method with machine learning yielded promising results. The same behavior was observed with the DL models, CNN and LSTM, in line with results reported for other Arabic dialects ((Alayba et al., 2018) and Levantine (Elnagar et al., 2018)).

Error analysis

We extracted all the wrongly classified texts and, after studying them, chose the most representative ones, which are illustrated in Table 8. The first example represents texts that are positive but contain ambiguous words, like "hungry"; that is, texts that share similar vocabulary but are classified differently. The second example was classified as positive while having been annotated as negative, which suggests a lack of context, since there is not enough text to know for sure. Looking at the wrongly predicted examples, we find that the most recurrent errors occur when a text contains both positive and negative words, in addition to misspellings and grammatical errors. There are also some examples that do not carry a sentiment, like the third example, but were nevertheless given a positive or a negative class. Significant phrases or words present in texts of the positive class may fall under the negative class in the training set, or vice versa, which may lead to misclassification. The out-of-vocabulary problem, whereby many words are skipped, can be another reason.

[Table 8: examples of wrongly classified texts, with columns Text, Prediction, and Actual class; the first example reads "Our worst maybe!! But we are aware and will not hinder our renaissance and the efforts of the others will not be lost in vain".]

Conclusion

The Arabic language is characterised by a wide variety of dialects, and with the emergence of the social web, users can express their opinions in these dialects. The Algerian dialect differs from MSA on all levels of linguistic representation, from phonology and morphology to lexicon and syntax. Opinion and emotion analysis of the ALGD is challenging due to the rich morphology of the language, and extracting the enormous volume of comments and reviews present on the social web requires taking into account the peculiarities and characteristics of the Algerian dialect (arabizi, code-switching, etc.). Publicly available resources for OEA of the ALGD are scarce. In this paper, we presented an open platform for public annotation, which we called "TWIFIL". It helped create a fairly large annotated corpus as well as a dialectal lexicon; these tools can be exploited for opinion and emotion analysis at a relatively low cost. This resource is now available to the community and will provide a useful benchmark for those developing opinion and emotion analysis tools for the Algerian dialect.

As a final step, we applied various machine learning models to classify ALGD tweets as positive, negative or neutral, and we measured their accuracy and efficiency. We also analysed and evaluated the performance of the selected algorithms when applied to ALGD with different pre-processing techniques, such as normalisation and the removal of stop words and URLs. To enhance the results of the models, we trained them with different data representations, among which term frequency proved more effective than binary and TF*IDF. To further boost the results, we used a hybridisation of machine learning models and lexicon-based methods, which surpassed the baseline results of all models. We also tested contextual embeddings using the BERT model, which did not surpass our baseline.
In fact, the experimental results show that deep learning models achieve better performance for OEA of the ALGD than classical approaches (support vector machines and multi-layer perceptrons). In the future, we plan to continue this research and address the remaining challenges: developing additional resources and tools for opinion and emotion analysis of Maghrebian multilingual dialects, using the obtained data to build a multilingual sentiment classifier, and implementing and testing other machine learning algorithms. We also plan to complete the development of the platform, to allow users to add their own classes and to offer part-of-speech annotations, and, above all, to enlarge the corpus and the lexicon.
Mapping genetic interactions in cancer: a road to rational combination therapies

The discovery of synthetic lethal interactions between poly(ADP-ribose) polymerase (PARP) inhibitors and the BRCA genes, which are involved in homologous recombination, led to the approval of PARP inhibition as a monotherapy for patients with BRCA1/2-mutated breast or ovarian cancer. Studies following the initial observation of synthetic lethality demonstrated that the reach of PARP inhibitors is well beyond just BRCA1/2 mutants. Insights into the mechanisms of action of anticancer drugs are fundamental for the development of targeted monotherapies or rational combination treatments that will synergize to promote cancer cell death and overcome mechanisms of resistance. The development of targeted therapeutic agents is premised on mapping the physical and functional dependencies of mutated genes in cancer. An important part of this effort is the systematic screening of genetic interactions in a variety of cancer types. Until recently, genetic-interaction screens have relied either on the pairwise perturbation of two genes or on the perturbation of genes of interest combined with inhibition by commonly used anticancer drugs. Here, we summarize recent advances in mapping genetic interactions using targeted, genome-wide, and high-throughput genetic screens, and we discuss the therapeutic insights obtained through such screens. We further focus on factors that should be considered in order to develop a robust analysis pipeline. Finally, we discuss the integration of functional interaction data with orthogonal methods and suggest that such approaches will increase the reach of genetic-interaction screens for the development of rational combination therapies.

Background

Whole genome and exome sequencing have provided an encyclopedia of genes that are involved in cancer development and progression, as part of programs such as The Cancer Genome Atlas (TCGA). These heroic efforts have revealed that many cancer cells hijack defined signature cancer pathways through acquired mutations that activate oncogenes or inactivate tumor suppressors [1]. Yet these efforts have also demonstrated that the genetic backgrounds of different types of cancers are vastly heterogeneous, resulting in a high number of cases with inaccurate prognosis and ineffective chemotherapy treatments. Precision cancer therapeutics, which aims to tailor a treatment regimen to the unique genetic background of each disease, is a targeted and promising approach. This strategy relies on targeting particular mutants by exploiting their genetic dependencies, through the identification and mechanistic characterization of the genetic interactions involved in tumorigenesis, treatment response, and the development of drug resistance.

A genetic interaction occurs when pairwise perturbations of two genes involved in the same or parallel pathways result in a phenotype that is different from the expected additive effect of each individual mutation [2][3][4]. Genetic (epistatic) interactions can be synergistic (or synthetic), where the interaction of two genes exaggerates the phenotype, or buffering, where the perturbation of one gene masks the perturbation of another. Genes that show a synergistic effect are commonly interpreted as working in compensatory pathways.
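Throughout the screens discussed below, this deviation from the expected combined effect is typically quantified with an interaction score. One common choice, used in many yeast E-MAP and SGA analyses, is the multiplicative model sketched here; the function and the fitness values are illustrative and not the scoring scheme of any particular study cited in this article.

```python
def gi_score(f_a: float, f_b: float, f_ab: float) -> float:
    """Multiplicative-model genetic interaction score:
    epsilon = observed double-perturbation fitness minus the product
    of the single-perturbation fitness values (expected if independent).
    epsilon < 0: synergistic (synthetic sick/lethal);
    epsilon > 0: buffering/alleviating; epsilon ~ 0: no interaction."""
    return f_ab - f_a * f_b

# Illustrative fitness values relative to wild type (1.0):
print(gi_score(0.9, 0.8, 0.72))  # ~0.0  -> no interaction
print(gi_score(0.9, 0.8, 0.10))  # -0.62 -> synthetic sick/lethal
print(gi_score(0.9, 0.8, 0.90))  # +0.18 -> buffering
```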
The identification of such functional networks is particularly important for understanding oncogenic pathways, because the heterogeneity in the genetic background of cancers is often associated with connected pathways that can provide multiple potential rewiring mechanisms. Large-scale assessment of genetic interactions to identify functional networks has been performed using high-throughput assays in model organisms. One such example, in yeast, is the epistatic mini array profile (E-MAP) approach, which uses a symmetric matrix of gene perturbations to enable quantitative analysis of the type and strength of the interaction between each pair of genes that have been deemed to be functionally or physically related [5][6][7][8]. Hierarchical clustering analyses of the scores obtained from these genetic-interaction screens reveal functionally related genes and complexes.

In this article, we discuss recent targeted, genome-wide, and high-throughput screening studies that have employed dual loss-of-function, chemical-genetic interaction, and combined gene activation and inhibition methods to identify relevant genetic interactions. We also review the clustering and analysis pipelines used in high-throughput genetic-interaction screens for rapid translation of the generated data into effective therapies for cancer treatment. Furthermore, we suggest that combining genetic-interaction screens with orthogonal quantitative approaches to generate global networks can facilitate the development of rational combination therapies.

Genetic interactions as therapeutic targets in cancer

Cancer cells often obtain a selective advantage through functionally cooperative genetic interactions, in which the deleterious effects of oncogenic or tumor suppressor mutations are, presumably, compensated for by secondary alterations. For example, cancer cells can tolerate the higher levels of replication stress that result from the overexpression of oncogenes because of the amplification of replication stress response kinases, such as ataxia telangiectasia mutated (ATM) and ataxia telangiectasia and Rad3-related (ATR) kinases [9,10]. Efforts by TCGA have revealed such co-occurring and mutually exclusive genomic alterations in cancer. In this context, co-occurring mutations are potential candidates for dependency factors, while mutually exclusive alterations are potential candidates for synthetic lethality. Yet it is important to emphasize the possible limitations of such approaches for functional interpretation. First, the differential classification of functional genetic variants, to distinguish these from random passenger variants, is not trivial. Second, sequencing results do not reflect the protein levels or post-translational modifications in the cell: even though the mutation of two genes may appear to be mutually exclusive at the genomic level, investigation of their final protein products may indicate a tendency toward co-occurring alterations.

Inhibition of gain-of-function mutations in oncogenes is an effective cancer therapy approach, but restoring the function of loss-of-function mutations in tumor suppressors is not yet clinically feasible. Rather than functional restoration, a strategic approach to exploiting such mutations is to identify synthetic lethal interactions of tumor-suppressor genes in order to target tumor cells. Synthetic lethality is a form of synergistic genetic interaction in which the simultaneous deletion of two genes results in cell death, whereas deficiency of either one of the same genes alone does not.
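The mutual-exclusivity heuristic described above can be screened for with a simple contingency test. The sketch below applies Fisher's exact test to toy mutation calls; the data are invented, and real analyses must also account for mutation burden, tumor subtype, and multiple testing, so this is an illustration rather than the method of any study cited here.

```python
from scipy.stats import fisher_exact

# Toy mutation calls (1 = altered) for two genes across eight tumors.
gene_a = [1, 1, 1, 0, 0, 0, 0, 1]
gene_b = [0, 0, 0, 1, 1, 1, 0, 0]

pairs   = list(zip(gene_a, gene_b))
both    = sum(1 for a, b in pairs if a and b)
a_only  = sum(1 for a, b in pairs if a and not b)
b_only  = sum(1 for a, b in pairs if b and not a)
neither = sum(1 for a, b in pairs if not a and not b)

# Odds ratio < 1 suggests mutual exclusivity (a synthetic-lethality
# candidate); odds ratio > 1 suggests co-occurrence (a candidate
# dependency factor).
odds_ratio, p_value = fisher_exact([[both, a_only], [b_only, neither]])
print(odds_ratio, p_value)
```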
Specific synthetic lethal interactions between the driver mutations of a tumor and druggable targets have been exploited to develop effective cancer treatments. For example, drugs targeting poly(ADP-ribose) polymerase (PARP) enzymes are synthetically lethal with loss-of-function mutations of BRCA1 and BRCA2 in tumor cells, leading to cell death resulting from defective homologous recombination repair [2,[11][12][13]]. PARP1 is a DNA damage sensor that binds to DNA damage sites, leading to the poly-ADP-ribosylation (PARylation) of target proteins for the recruitment of DNA repair effectors; in addition, PARP1 auto-PARylation mediates its own release from the DNA damage sites [14]. PARP1 is also implicated in the reversal and repair of blocked replication forks [15]. Inactivation of the catalytic activity of PARP1 disrupts single-stranded DNA damage repair and causes PARP1 trapping by impairing its own release from the DNA damage site; these events block replication fork reversal and cause double-stranded DNA breaks [15]. In cells with a deficiency in homologous recombination repair, PARP1 trapping results in double-stranded lesions and eventually leads to cell death, providing an opportunity for targeted therapy in BRCA-mutant cancer cells (Table 1).

The use of PARP1 inhibitors as monotherapies for patients with BRCA-mutated cancer demonstrates how effective synthetic-lethality screens can be for drug development. Yet, as with many other therapies, resistance to PARP1 inhibitors arises in advanced disease, suggesting that the most effective responses to treatment with PARP1 inhibitors might be elicited either in early-stage disease or through the development of rational drug combinations [16]. To address both of these issues, several clinical trials are currently evaluating the efficacy of therapies that combine PARP1 inhibitors with chemotherapy or mutation-specific inhibitors (ClinicalTrials.gov reference NCT02576444) [17]. The PARP inhibitor niraparib was also tested as a maintenance therapy in platinum-sensitive ovarian cancer, regardless of BRCA1 status [18]; the median duration of progression-free survival was significantly longer for patients receiving niraparib. These results, together with the observation that about 50% of epithelial ovarian cancer patients without BRCA1 mutations exhibit defective homologous recombination, already indicate the potentially wider reach of these PARP inhibitor therapies [19].

The dynamic rewiring of cancer cells exposed to anticancer drug treatments adds an additional layer of complexity to traditional functional interaction studies. In the clinic, the targeting of multiple factors within the same pathway has proven to be an effective strategy, possibly because targeting a signaling pathway can result in differential responses depending on the presence of upstream mutations [20,21]. Moreover, therapy-resistance mechanisms in tumor cells rely on compensatory pathways that functionally buffer the inhibition of drug target genes. An example of this is the acquired resistance of BRAFV600E-mutant melanoma cells to BRAF inhibitors, which occurs as a result of MAPK pathway activation. In this case, specifically in the BRAFV600E-mutant background, melanoma patients treated with a combination of a BRAF inhibitor and a MEK inhibitor exhibited improved progression-free survival compared to patients treated with a BRAF inhibitor alone [20][21][22] (Table 1).
Combination therapy targeting both the primary target and the resistance mechanism has been further supported as an effective strategy. A short hairpin RNA (shRNA) screen of human kinases and several kinase-related genes revealed that knockdown of the epidermal growth factor receptor (EGFR) synergized with PLX4032, a BRAF inhibitor, in the suppression of BRAFV600E-mutant colorectal cancers [23]. A phase 3 clinical trial recently demonstrated that a combination of encorafenib (a BRAF inhibitor), binimetinib (a MEK inhibitor), and cetuximab (an EGFR inhibitor) achieved an overall response rate (ORR) of 48% in BRAFV600E-mutant metastatic colorectal cancer patients, an increase in ORR compared to controls [24]. The development of high-throughput genetic-interaction screens with robust analysis and clustering pipelines is thus imperative to accelerate the identification of new druggable synthetic-lethal or other genetic interactions, and to guide the improved prediction of drug synergies and rational combination drug therapies.

Cancer models in mammalian cells and their applications in anticancer drug discovery

The key driver mutations causing oncogenesis and the factors involved in rewiring cancer cells in response to therapy remain incompletely understood, and systematic, high-throughput approaches to dissect these functionally interconnected pathways might be clinically beneficial. Recent efforts to identify genetic interactions on high-throughput platforms involve combinatorial pairwise perturbations of two genes in an arrayed or genome-wide format (Table 2). The most common approaches to date are pairwise gene knockouts or a combination of a gene knockout and drug inhibition. A more recent and less explored approach is to combine gene activation with gene inhibition, although the activation of a mutated gene is currently not feasible in the clinic.

Dual loss-of-function methods

Dual loss-of-function studies form the foundation of genetic-interaction studies. Pairwise genetic-interaction screens in mammalian cells can involve the pairwise knockdown of specific genes using short interfering RNA (siRNA) or CRISPR interference (CRISPRi) platforms, in which a catalytically dead version of Cas9 is fused to a Krüppel-associated box (KRAB) transcriptional repression domain [25,26]. Downregulation of target genes can result in a partial phenotype, so this approach can be used advantageously to target genes that are essential for viability [27]. Alternatively, combinatorial gene knockouts in mammalian cells can be mediated using the CRISPR-Cas9 platform [28,29]. For example, Shen et al. [30] developed a systematic approach to map genetic networks by combining CRISPR-Cas9 perturbations. Pairwise knockouts of 73 cancer genes with dual-guide RNAs in three human cell lines, HeLa (human papilloma virus-induced cervical adenocarcinoma cells), A549 (an adenocarcinomic alveolar basal epithelial cell line), and HEK293T (human embryonic kidney cells), enabled the identification of interactions with potential therapeutic relevance; these interactions were then tested with drug combinations in order to develop synthetic-lethal therapies [30]. Interestingly, only 10.5% of the identified interactions were common to any given pair of cell lines, and no shared interactions were seen in all three cell lines. These observations suggest a high degree of diversity in genetic interactions between different tumors, demonstrating the necessity of using a large number of cell lines and samples when performing similar studies.
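One simple way to express this kind of between-screen agreement, such as the 10.5% shared-interaction figure above, is a Jaccard index over each screen's hit set. The sketch below uses placeholder gene pairs, not the published hit lists.

```python
from itertools import combinations

# Toy per-cell-line hit sets of (geneA, geneB) interactions; the real
# screen reported only ~10.5% of hits shared between cell-line pairs.
hits = {
    "HeLa":    {("A", "B"), ("C", "D"), ("E", "F")},
    "A549":    {("A", "B"), ("G", "H")},
    "HEK293T": {("C", "D"), ("I", "J")},
}

def jaccard(s1, s2):
    """Fraction of the pooled interactions found in both screens."""
    return len(s1 & s2) / len(s1 | s2)

for (n1, s1), (n2, s2) in combinations(hits.items(), 2):
    print(f"{n1} vs {n2}: {jaccard(s1, s2):.2f}")
```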
Combinatorial CRISPRi screening platforms have been used to increase the throughput of approaches in which individual genes or gene pairs are downregulated [31,32]. A proof-of-concept experiment, which targeted 107 chromatin-regulation factors in human cells using a pool of double-sgRNA constructs for the pairwise downregulation of genes, revealed both positive and negative genetic interactions [31]. In this context, it is important to confirm the repression efficiency of each combination of single-guide RNAs (sgRNAs), because the efficiency of double-sgRNAs was observed to be lower than that of single sgRNAs [31]. This study was followed by a large-scale quantitative mapping of human genetic interactions using a CRISPR interference platform, in which 472 gene pairs were systematically perturbed in two related human hematopoietic cancer cell lines (K562 and Jurkat) [32]. Interestingly, even though this experimental pipeline captured 79.3% of the interactions listed in the STRING (Search Tool for the Retrieval of Interacting Genes/Proteins) database for the tested genes, the vast majority of the highly correlated gene pairs (315 of 390 genetic interactions (GI) with GI correlation > 0.6) were not captured by the STRING annotation [33]. These results are indicative either of a lack of physical interactions between these functionally related gene pairs or of as yet unidentified protein-protein interactions. Systematic gene ontology annotation of the emergent gene clusters enabled the identification of gene clusters that might be functionally related in K562 and Jurkat cells, and suggested new factors involved in vital processes such as ER protein trafficking and DNA synthesis. The epistasis analysis used in this study revealed that the accumulation of an endogenous metabolic intermediate, isopentenyl pyrophosphate (IPP), causes replicative DNA damage and therefore increases the dependence of cells upon an intact DNA damage response pathway. This finding suggests a potential combination-treatment strategy that both targets the pathway promoting the accumulation of IPP and, at the same time, exploits the newly acquired dependence of the tumor cells upon the DNA damage response pathway. These experiments illustrate the potential of genetic-interaction maps in revealing combinations of druggable target genes that do not have a known physical association.

Mapping chemical-genetic interactions

Quantitative chemical-genetic studies, in which inhibition by a compound is combined with a gene knockdown or knockout, are an alternative to pairwise gene perturbations [34,35]. For example, investigation of the influence of the knockdown of 612 DNA-repair and cancer-relevant genes on the response to 31 chemotherapy compounds revealed that loss-of-function mutations in ARID1A and GPBP1 contribute to PARP inhibitor and platinum resistance in MCF10A, a non-tumorigenic human breast epithelial cell line [34]. This result contrasts with the findings of another chemical-genetic screen, which tested isogenic ARID1A-deficient MCF10A cells against a panel of chemotherapeutic drugs and DNA-repair inhibitors and indicated an increased sensitivity of ARID1A-deficient cells to a combination of ionizing radiation with PARP inhibition [36]. Inactivating mutations in ARID1A have been detected in multiple forms of human cancer.
ARID1A is a component of the SWI/SNF chromatin remodeling complex and is implicated in non-homologous end joining (NHEJ), suggesting that it might be an important modulator of the response to PARP inhibitors and combination therapies. Deep investigation of the genetic targets of therapies that have already been approved by the US Food and Drug Administration has the potential to expand the number of patients who can benefit from these therapies by revealing novel targets that are highly mutated in cancer cells. For example, further investigation of the synthetic lethality of PARP inhibitors with BRCA1 and BRCA2 mutations instigated a series of discoveries suggesting that PARP inhibitors can also be used to target deficiencies in other genes that are involved in homologous recombination [37-40]. Several studies investigated the synthetic-lethal interactions of PARP inhibitors [11,41] and ATR inhibitors [9,42] using custom siRNA libraries. The clinical relevance of these studies is currently being tested in clinical trials of multiple rational drug combination therapies (Table 1, ClinicalTrials.gov reference NCT04065269) [17,43,44]. In addition to defects in genes involved in homologous recombination, mutations in other genes have also been shown to sensitize cancer cells or immortalized cells to PARP inhibitors. Recently, a genome-wide dropout CRISPR screen for genes that, when mutated, sensitize cells to PARP inhibition was performed using the human cell lines HeLa, RPE1-hTERT (a telomerase-immortalized retinal pigment epithelium cell line), and SUM149PT (a triple-negative breast cancer cell line with a BRCA1 mutation). Dropout screens are generally used to identify genes that are essential for cell viability, and they involve RNA interference (RNAi) or CRISPR screening of two or more cell lines over a series of cell divisions. In this case, the screen revealed the hypersensitivity of RNase-H2-deficient cells to PARP inhibition [35]. Of 155 high-confidence gene knockouts that sensitized cells to the PARP inhibitor olaparib, 13 genes scored positive in all three cell lines, and 60 genes were common to two cell lines. Besides the factors that are known to be involved in homologous recombination and Fanconi anemia, and the kinases ATM and ATR (which are involved in the DNA damage response), genes encoding splicing and transcription factors and the RNase H2 enzyme complex were shown to sensitize cells to olaparib treatment in all three cell lines. A parallel screen utilized a similar genome-wide CRISPR-Cas9-based approach in three independent human cell lines to identify genes that, when depleted, showed synthetic lethality with ATR inhibition [45]. Interestingly, depletion of the RNase H2 enzyme also led to synthetic lethality with ATR inhibition. Collectively, these data indicate that loss of RNase H2 might be a promising biomarker for PARP and ATR inhibitor-based therapy, and they provide an opportunity for a rational combination therapy involving PARP and ATR inhibitors upon RNase H2 loss. An orthogonal strategy, which has the simultaneous advantage of increasing the throughput of screens, is to leverage the conserved interactions in model organisms. Large-scale genetic-interaction screens have been developed in the yeasts Saccharomyces cerevisiae and Schizosaccharomyces pombe, and have been used extensively to gather biological insights [5, 46-48]. However, the genetic interactions observed in model organisms need to be validated in mammalian cells and in the clinic.
Thus, a viable hybrid approach is to target conserved tumor suppressor genes for genetic interactions in yeast, followed by validation of the identified interactions in mammalian cells. For this purpose, synthetic genetic array (SGA) analysis provides a convenient and large-scale platform for the systematic construction of double mutants in yeast, allowing the mapping of synthetic genetic interactions. SGA involves the construction of double mutants by crossing a query mutant strain to an array of approximately 5000 viable deletion mutants [48]. In order to connect tumor suppressor genes to druggable targets, Srivas et al. [49] used SGA technology in S. cerevisiae and constructed a genetic-interaction map covering 43,505 pairs of genes that are small-molecule targets, tumor suppressors, or otherwise clinically relevant [49]. Guided by the yeast network, a more targeted chemo-genetic interaction map obtained using 21 drugs and 112 tumor suppressor genes in HeLa cells revealed a total of 127 synthetic-sick or synthetic-lethal interactions. Clonogenic assays were then performed to determine whether the interactions identified in the chemo-genetic screen (on the basis of an observed reduction in cell growth) also resulted in the reduced survival of individual tumor cell clones. Five of the seven combinations identified from the conserved tumor suppressor XRCC3 network resulted in negative effects on tumor cell clonal survival when XRCC3 was also knocked down. XRCC3 is involved in the homologous recombination repair pathway. These results suggest that the drugs targeting the relevant genes should be investigated as therapies for tumors with XRCC3 loss-of-function mutations.

Mapping the directionality of genetic interactions

Functional and modular data obtained through genetic-interaction methods can fall short in providing information about directional and regulatory dependencies. Orthogonal approaches that can be incorporated with the genetic-interaction data to overcome this limitation are discussed in the next sections. This shortcoming has also been addressed by several studies. For example, in combinatorial RNAi screens conducted in Drosophila cells, regulatory and temporal directionality was derived through mathematical modeling and time-dependent analysis of differential genetic interactions [50,51]. A recent quantitative dual screen addressed this issue by combining the CRISPR-mediated activation (CRISPRa) of one gene with the knockout of a second gene [52]. This combinatorial approach has the additional advantage of enabling studies of the effects of gene amplification or gain-of-function alterations of several proto-oncogenes, which are known to be just as important as gene deletions for the rewiring of cancer cells. This enabled the formation of a directional dependency network for human K562 leukemia cells. The systematic identification of genes whose activation altered the fitness of K562 cells treated with the tyrosine kinase inhibitor imatinib was conducted using a genome-wide library targeting every coding transcript and over 4000 non-coding transcripts [52]. In addition to genes with known roles in leukemia and imatinib resistance, this screen identified previously uncharacterized genes (BBX, NOL4L, and ZC3HAV1), which were shown to have roles in drug resistance. To quantify dual genetic interactions, activating sgRNAs targeting 87 candidate genes from the primary screen were combined with knockout sgRNAs targeting 1327 genes from KEGG-annotated cancer-relevant signaling pathways.
The directional dependencies of the genetic interactions were then inferred for those cases in which one gene activated its partner. In these gene pairs, individual activation and knockout of the activating gene partner produce opposing phenotypes, providing an opportunity to incorporate this information into a genetic-interaction scoring algorithm that accounts for both the singular and the combinatorial perturbation phenotypes. Such a high-throughput approach enables the identification of genes that can be exploited for cancer therapy. As this approach has been limited to K562 cells, it remains to be explored whether the method is widely applicable to other models.

Considerations for a robust analysis pipeline

The inference of functional data from large-scale genetic network mapping in human cells requires robust and thorough data-analysis pipelines. In this context, a data-analysis workflow involves considerations for experimental design, quality control, and mathematical scoring. The earliest studies on the use of genetic-interaction mapping to dissect the functions of protein complexes involved E-MAPs in yeast [47,53,54], as mentioned earlier. These studies established the ground rules, in terms of experimental design, for isolating hits and building a reliable genetic-interaction map. The computational scoring and clustering algorithms used to analyze the data include statistical analyses of the strength of each genetic interaction, of the correlation between replicates, and of the clustering of biological complexes [55]. Similar computational scoring algorithms can be applied to mammalian systems. In mammalian systems, several high-throughput genetic-interaction screens have been conducted using a targeted approach with some prior knowledge of the interaction networks of the genes or pathways to be studied [30-32, 34, 49]. This kind of approach decreases the noise and minimizes the potential for false negatives in the data, allowing milder phenotypes to be detected. Even though these milder phenotypes might not be good targets for monotherapy, they might provide clues for combinatorial drug design and about functional redundancy in cancer cells. A promising strategy for combinatorial drug discovery is to target compensatory pathways to block functional redundancies. With current methodologies, genome-wide trigenic interaction mapping is not trivial, but these milder phenotypes can be used to predict target candidates for combinatorial drugs and can be tested in combinatorial, trigenic contexts [56]. As compared to targeted screens, genome-wide analysis allows the unbiased determination of genetic interactions without prior knowledge of physical or functional networks [45,57,58]. Genome-wide screens have the potential to reveal unexpected interactions between previously uncharacterized gene pairs (Table 2). However, any CRISPR-Cas9-based genetic-interaction analysis comes with three major considerations. First, there is heterogeneity in the editing efficiencies provided by different sgRNAs. This consideration applies to CRISPR-Cas9-based screens performed either in an arrayed format or as pooled libraries. In addition to using at least three sgRNAs for each targeted gene, quantitative assessment of gene-editing efficiency in arrayed knockout experiments should be conducted using tools such as TIDE (Tracking of Indels by Decomposition) or ICE (Inference of CRISPR Edits) analysis following Sanger sequencing [59-61].
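As a toy illustration of this quality-control step, the snippet below estimates per-sgRNA editing efficiency as the fraction of aligned amplicon-sequencing reads that carry an indel at the expected cut site. This is a deliberate simplification of what TIDE and ICE actually do (they decompose Sanger sequencing traces); the sgRNA identifiers, read counts, and acceptance threshold are invented.

```python
# Simplified stand-in for TIDE/ICE-style quality control: call an sgRNA
# "validated" when enough aligned reads carry an indel at the cut site.
read_counts = {
    # sgRNA id: (reads with an indel at the cut site, total aligned reads)
    "sgRNA_1": (8200, 10000),
    "sgRNA_2": (3100, 10000),
    "sgRNA_3": (9500, 10000),
}

MIN_EFFICIENCY = 0.70  # illustrative acceptance threshold

for sg, (edited, total) in read_counts.items():
    eff = edited / total
    verdict = "pass" if eff >= MIN_EFFICIENCY else "re-design"
    print(f"{sg}: editing efficiency {eff:.0%} -> {verdict}")
```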
Once the gene-editing efficiency for each sgRNA is confirmed, the genotype-phenotype correlation in arrayed formats is straightforward. In comparison, the analysis of genome-wide pooled screens requires the use of next-generation sequencing (NGS) technologies for genotype-phenotype correlation. The second consideration is cell-line variability. The Cancer Genome Atlas (TCGA) dataset indicates that 89% of tumors, across 33 cancer types, contain at least one somatic driver alteration in ten canonical signaling pathways that are known to be highly mutated in cancer [1]. These data represent commonalities between different cancer types. Yet, predictions of disease prognosis and drug sensitivity in cancer are vastly inaccurate because of the diverse mutational landscape of individual tumors. For example, a recent study suggested that the tumor lineage determines whether mutations in BRCA1 and BRCA2 are indispensable founding events or biologically neutral events for tumorigenesis [62]. In addition, the genomic copy number of different cell lines was suggested to affect CRISPR targeting and toxicity after genome editing [63,64]. These observations are indicative of the importance of conducting functional interaction screens in multiple different cell lines, not only for the identification of robust synthetic-lethal or other interactions, but also for the identification of more targeted precision treatment opportunities. Third, drug dosing and timing should be considered. Importantly, for screens that measure phenotype upon drug treatment, the dynamic range of experiments is highly dependent on the drug concentration and treatment duration. Boettcher et al. [52] showed that, when compared with a single treatment, repeated drug treatment can allow for greater enrichment of resistance genes. For chemogenetic interaction profiling that accounts for the stated considerations, drugZ scoring has been introduced as a software tool for the identification of both synergistic and suppressor interactions [35,65].

Combining genetic interaction screens with orthogonal quantitative approaches to generate global networks

A major goal of functional-interaction mapping studies is to elevate gene-association studies from the identification of individual genes that are associated with phenotypes to providing more interpretable genetic information on the biological pathways that are involved. In addition, the ability to combine functional interactions with physical interaction modules, in order to build global interaction networks, is important for dissecting the effects of differential mutations in cancer. High-throughput genetic-interaction screens generate an unprecedented amount of cell-specific functional genomics data that can help to reveal genetic networks. Genetic-interaction profiles provide a quantitative measurement of functional similarity. These maps can be overlapped with different kinds of network information obtained by orthogonal approaches to further inform functional interpretation and the prediction of novel gene function (Fig. 1) [67]. These approaches include gene-ontology analysis, as well as analyses of the mutational landscape of patient tumors, gene regulation, and protein-protein interactions. Gene-ontology analysis provides a platform for the systematic annotation of gene clusters that are enriched for genes known to act in similar pathways or in a given complex [32,68].
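A minimal sketch of the statistics behind such an annotation step is shown below: a one-sided hypergeometric test asks whether a Gene Ontology term is over-represented in a cluster relative to the screened background. The gene identifiers and term size are invented; only the background of 472 screened genes is borrowed from the CRISPRi study described above.

```python
from scipy.stats import hypergeom

def go_enrichment(cluster, term_genes, background_size):
    """One-sided hypergeometric over-representation test.

    Returns the overlap size k and P(X >= k) for drawing k term-annotated
    genes in a cluster-sized sample from the screened background.
    """
    k = len(cluster & term_genes)
    p = hypergeom.sf(k - 1, background_size, len(term_genes), len(cluster))
    return k, p

# A 12-gene cluster and a hypothetical GO term annotating 30 of the
# 472 screened genes; 8 genes are shared between the two sets.
cluster = {f"g{i}" for i in range(12)}
term = {f"g{i}" for i in range(8)} | {f"x{i}" for i in range(22)}

k, p = go_enrichment(cluster, term, background_size=472)
print(f"{k}/{len(cluster)} cluster genes carry the term; P = {p:.2e}")
```

In practice, such P-values would be corrected for testing many terms (e.g., with Benjamini-Hochberg) before a cluster is annotated.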
Statistical analysis of the genomic mutational landscape of patient tumors from TCGA provides an additional layer of information, as gene pairs that are rarely co-mutated are candidates for synthetic-lethal interactions [69-71]. In addition, because cancer cells are under selective pressure, two genes might need to be co-mutated to provide a growth advantage to tumor cells. Yet, as discussed earlier, these approaches to functional interpretation are statistically limited by the small number of tumors that have been sequenced and by the unclear classification of functionally relevant mutations. Integrating co-expression data and gene-regulation information from gene expression profiles can also be a useful approach for establishing correlations and extracting functional subnetworks. In particular, recent advances in the analysis of single-cell RNA sequencing data provide a reliable platform for the interrogation of gene-gene relationships [72-74]. Perturb-seq, which combines single-cell RNA-seq with pooled CRISPR-based gene perturbations, was developed to obtain more mechanistic information from genetic perturbation screens by identifying gene targets through changes in gene expression [74]. Norman et al. [73] also applied this technology to the CRISPRa platform, and were able to determine the differential expression profiles of 112 genes whose activation resulted in growth enhancement or retardation in K562 human leukemia cells [73]. Finally, the incorporation of annotated protein-protein interaction data into genetic-interaction screens can enable the mapping of comprehensive global networks that include information at both the genomic and the proteomic level in the cell. Protein-protein interaction studies using multiple different cell lines can provide a network-level framework for the differential genetic interactions that are observed in various cell lines [75]. Several recent studies have employed integrated network analysis to investigate the long-standing question of the involvement of virus infections in the development of cancer. Large-scale protein-protein and genomic screens addressed the roles of human papillomavirus (HPV) in oncogenesis and of human T-lymphotropic virus type 1 (HTLV-1) in adult T cell leukemia/lymphoma (ATLL) [76,77]. The physical interactions of HPV and human proteins in three different cell lines (C33A, HEK293, and Het-1A) were determined by mass spectrometry following the affinity purification of complexes associated with viral proteins. The protein-protein interaction data were then combined with data defining the genomic mutational landscape of tumors. Comparison of HPV+ and HPV− tumor samples led to the identification of eight genes that are altered frequently in HPV− tumors but rarely in HPV+ tumors. This finding was followed by the generation of a network propagation framework, in which proteins were scored on the basis of their proximity to HPV-interacting proteins or to proteins that are preferentially mutated in HPV− tumors within the Reactome functional interaction (ReactomeFI) reference network. This integrative approach resulted in the identification of an interaction between the HPV L2 protein and the RNF20/40 histone ubiquitination complex that promotes tumor cell invasion [76,78]. Around the same time, a pooled shRNA screen targeting lymphoid regulatory factors in eight ATLL cell lines revealed essential roles for the BATF3-IRF4 transcriptional network in malignant ATLL cell proliferation [77].
The gene expression profiles of BATF3 and IRF4 knockdowns overlapped significantly with each other, with 494 genes decreasing significantly in both. In addition, inactivation of HBZ, the HTLV-1 viral protein whose expression is maintained in all ATLL cells, resulted in a decreased abundance of BATF3 and MYC mRNAs. ChIP-seq analysis revealed that MYC is a direct target of BATF3-IRF4, but not of HBZ, suggesting that HBZ regulates MYC expression through BATF3. Finally, the relevance of this type of analysis to the development of new treatments was tested by evaluating the sensitivity of ATLL cells to the bromodomain and extraterminal motif (BET) inhibitor JQ1. BET family proteins can regulate the expression of several oncogenes by recognizing histone lysine acetylation to assemble transcriptional activators and chromatin-interacting complexes [79]. JQ1 treatment was toxic to the ATLL cells and reduced BATF3 and MYC mRNA levels in the cell. Currently, BET inhibitors are being studied extensively in clinical trials, both as monotherapy and in combination therapy, to halt the transcription of oncogenes and to decrease cancer cell survival in multiple different cancer types [80].

Fig. 1 Hypothetical integration of genetic-interaction screens with orthogonal quantitative approaches to enable the identification of pathways (panels: genetic interaction screen; gene ontology annotations; genomic mutational analysis of tumors; protein-protein interaction). From left to right, the experimental pipeline is such that genetic interactions are scored and clustered to identify genes that are potentially involved in the same or parallel functionally relevant pathways and/or in potential protein complexes. These genes are annotated using Gene Ontology terms [66]. The mutational landscapes of the genes of interest are tested for statistically significant co-mutation or mutual exclusivity. A co-immunoprecipitation experiment is conducted to identify the proteins that interact with the protein encoded by the gene of interest. Data obtained through these orthogonal approaches are combined to deduce the biological pathway.

Such screens can nominate druggable genes or pathways through dual loss-of-function or chemicogenetic analysis, respectively. The combination of CRISPR-based screening technologies and integrative analysis pipelines has enabled the formation of interaction networks that provide new insights into the functions of genes. Moreover, synthetic-lethal or synthetic-sick interaction pairs guide the design of selective combination therapies (Fig. 2).

Fig. 2 A loss-of-function mutation in gene a is indicated to be a driver mutation for cancer development. The hypothetical case indicates a synthetic-sick interaction between gene a (which is involved in DNA repair) and gene g (which is involved in cellular metabolism). From left to right, inhibition of gene f or gene g in the cancer (a−/−) background results in synthetic sickness, but not lethality. Synthetic lethality in the cancer background is only achieved by co-inhibition of genes f and g (or of genes f and h).

For example, mutations in several homologous recombination factors, as well as inhibitors of the phosphatidylinositol 3-kinase signaling pathway, were shown in preclinical studies to synergize with PARP inhibition in BRCA1-proficient cancer cells, and such combinations are currently being tested in clinical trials (ClinicalTrials.gov reference NCT03344965). In line with this, buffering genetic interactions of drug target genes are candidates for drug-resistance mechanisms. Thus, the inhibition of these resistance mechanisms together with the primary targets may be an effective therapeutic strategy. It is imperative that genetic-interaction screens are expanded to include more genes and cell types to enable the identification of global networks. Comparisons of different cell types can reveal differences that have important distinguishing biological implications.
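As a concrete illustration of the co-mutation analysis step named in the Fig. 1 pipeline, the sketch below tests a gene pair for mutual exclusivity of mutations across tumors with a one-sided Fisher's exact test; as noted above, rarely co-mutated pairs are crude candidates for synthetic lethality. All tumor counts are hypothetical.

```python
from scipy.stats import fisher_exact

def mutual_exclusivity_test(n_both, n_a_only, n_b_only, n_neither):
    """One-sided Fisher's exact test on a 2x2 mutation table for genes A and B.

    An odds ratio < 1 with a small P-value means A and B are co-mutated
    less often than expected by chance (mutual exclusivity); an odds
    ratio > 1 would instead indicate co-occurrence.
    """
    table = [[n_both, n_a_only],
             [n_b_only, n_neither]]
    odds_ratio, p_value = fisher_exact(table, alternative="less")
    return odds_ratio, p_value

# Hypothetical mutation calls in 1000 sequenced tumors: about 12
# co-mutated tumors would be expected by chance, but only 2 are seen.
odds, p = mutual_exclusivity_test(n_both=2, n_a_only=120, n_b_only=95, n_neither=783)
print(f"odds ratio = {odds:.2f}, P(mutual exclusivity) = {p:.3g}")
```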
Conclusions and future directions

To gain insights into the dynamic functional relationships between cellular processes and the rewiring of cancer cells in response to changing conditions such as drug treatment, it is important to consider differential genetic-interaction approaches in response to a stimulus. Most genetic-interaction analysis in mammalian systems is limited by 'end-point' experiments and by the use of non-specific phenotypic readouts, such as cellular growth rate. The analysis of genetic network plasticity and context-dependent rewiring events has been demonstrated in yeast and Drosophila cells, where quantitative comparisons of genetic interactions in untreated and treated conditions at different timepoints have revealed an enrichment of interactions in the target pathway [51,81]. Similar dynamic rewiring events can also be revealed by time-resolved analysis following loss-of-function mutations in mammalian systems. Coupling CRISPR-based gene perturbations to more mechanistic readouts, such as proteomic, transcriptomic, or cell-localization phenotypes, will also enable the mechanistic elucidation of epistatic interactions. A derivative approach that is yet to be conducted in high-throughput systems is the inference of drug-resistance mechanisms. These approaches would inform rational drug combinations and accelerate the development of targeted therapies.

To date, genetic-interaction screens in mammalian cells have relied on differential gene copy number and expression profiles in cancer cells and on cell-proliferation readouts. Yet, most tumors arise as a result of a mutation rather than the complete absence of a gene [71]. Distinguishing driver mutations and their specific functions will facilitate the discovery of target pathways. Therefore, conducting gene-interaction screens using pathogenic mutant versions of the target genes, rather than complete gene knockouts, will be important for drug development. Analyses of the mutational landscapes of tumors indicate that each tumor harbors a high number of somatic mutations. Global network analysis might reveal that these mutations converge on several hub events, such as protein interactions or transcriptional regulation. The integration of genetic-interaction datasets with other sources of information obtained through orthogonal experimental and computational tools is challenging and requires effective collaborations between molecular and cancer biologists, computational biologists, and clinicians. Several groups have formed such collaborative mapping initiatives in mammalian systems [73,75,82]. Ultimately, these efforts promise to lead to global network maps, which could allow predictions of effective drug-target combinations for each individual cancer cell background.
Analysis of the Environmental Factors Affecting the Growth Traits of Iran-Black Sheep

A study was conducted to evaluate the effects of non-genetic factors on the growth behavior of Iran-Black sheep. The growth performance data, birth weight (BW), weaning weight (W3), and weight at 6, 9, and 12 months of age (W6, W9, and W12, respectively), were taken from 1522 lambs belonging to the data bank of the Abbas Abad Sheep Breeding Station, located in the North-east of Iran, over a period of five years. Statistical analyses were performed using a general linear model including the non-genetic factors lamb sex, birth year, and litter size as main effects, the lamb's age when weighed as a covariate, and the interactions between these factors. Results showed that all traits were significantly (P ≤ 0.001) influenced by these non-genetic factors.

INTRODUCTION

Sheep breeding is an important part of livestock production in Iran, as there are about 50 million head of sheep in this country (FAOSTAT, 2016). The Iran-Black is a new composite sheep breed that has been developed by crossbreeding the Chios and Balouchi breeds at the Abbas Abad sheep breeding station in Iran. This breed is more resistant to disease and arid conditions, with a stronger meat orientation and better reproductive performance. The breed shows variation in several production traits, which suggests that there is potential for improvement of its economic traits. However, growth performances are the preferred traits to improve, due to the low economic value of wool compared to meat production. In this situation, more emphasis should be placed on growth traits and carcass quality as well as reproductive traits (Snyman et al., 1995). Estimation of heritability indicates the potential for genetic improvement. The amount of heritability depends on both genetic and environmental variation in growth performance. Any selection program to improve growth traits should be designed based on the genetic and environmental effects on the objective traits (Yazdi et al., 1999). Non-genetic factors must be corrected for before starting genetic analysis. Some environmental factors can be adjusted for before any statistical analysis; however, there remain unknown environmental differences between animals, known as residual error. An adjustment should be made for environmental and physiological sources of variation such as age, sex, birth type or litter size, years, seasons, and other such environmental variables that can be evaluated (Babar et al., 2004). The effect of non-genetic factors on growth performance in sheep has been investigated in several studies. These factors have area-specific effects that reflect the environmental characteristics of the corresponding areas (Gbangboche et al., 2006; Momoh et al., 2013). Therefore, the present study was carried out to investigate the effect of sex of lamb, year of birth, and litter size on the body weight of Iran-Black lambs at different ages.

Animals and location of study area

The data on 1522 lambs born from 547 Iran-Black ewes sired by 60 rams, kept at the Abbas Abad sheep breeding station located in a semi-arid area in the North-east of Iran during 2005-2009, were utilized to estimate the effect of environmental factors affecting BW, W3, W6, W9, and W12. The animals were raised in a closed system and fed alfalfa, barley, and straw. Sheep were supplemented in the last month of gestation and during lactation (usually with barley), and births occurred mainly in April and May. Lambs were left with their dams until 90 days of age, and from this age they were kept for fattening until reaching slaughter age.
Data and analyses

The data file contained information on individuals, sire and dam identification codes, sex, litter size, birth date, date of weighing, and measured body weight. The data were analyzed to estimate the effect of year of birth, litter size, and sex of the lamb on lamb growth. The mathematical model assumed for the least-squares analysis was:

Yijklm = P + Si + Aj + Lk + (SA)ij + (SL)ik + b(Age − Āge) + Hijklm (1)

where Yijklm is the weight of a lamb; P is the overall mean; Si is the sex of the lamb; Aj is the year of birth of the lamb; Lk is litter size; (SA)ij is the interaction between sex and year of birth; (SL)ik is the interaction between sex and litter size; b is the regression coefficient on the lamb's age at weighing, fitted as the deviation of Age from the overall mean age Āge; and Hijklm is the residual error. A statistical analysis using the univariate general linear model from the statistical package Minitab v.16 was used to analyze the effect of the fixed factors and the interactions between them on the total variance of the records. The lamb's age at weighing time was used as a covariate to correct the records of W3, W6, W9, and W12. Comparison of means was performed by the Tukey test, setting P < 0.05 to identify significant differences between treatments.

III. RESULTS AND DISCUSSION

The data used in the present study belong to the Abbas Abad sheep breeding station, where the Iran-Black breed was created. As shown in Fig. 1, the variation in all traits among different years was not large, but it was significant. Two reasons can be supposed for this result: first, a scientific selection program has not been applied, and second, environmental factors significantly influence the traits. The effects of sex, birth year, and litter size are shown in Tables 1 to 3, respectively. All non-genetic factors investigated in this study significantly influenced lamb weights at all ages (P ≤ 0.001). However, the interactions between these factors had non-significant effects on growth performance. Male animals were heavier than females, as shown in Table 1. This has been reported in other studies (McManus et al., 2003; Babar et al., 2004; Macedo and Arredondo, 2008; Baneh and Hafezian, 2009; Ulutas et al., 2010; Gbangboche et al., 2011; Momoh et al., 2013; Lupi et al., 2015). Differences in physiological functions between the two sexes cause such a tendency in body weight. Testosterone, a steroid hormone whose anabolic effects act as a growth promoter, contributes to postnatal growth in males (Lupi et al., 2015). The variation in lamb weights at different ages observed in different years (Table 2) may be due to variation in the environment, resulting primarily from differences in the amount of rainfall and the quantity and quality of herbage available. Management factors include the farm manager's ability to supervise the staff, the availability of financial resources, and selection strategies. Climate and environmental changes affect the quality and quantity of pasture forage, which in turn affects the provision of food (Assan and Makuza, 2005; Momoh et al., 2013). Adequately fed ewes are expected to produce heavy lambs. Litter size (single or multiple) had significant effects on live weight at different ages; single-born lambs were heavier than multiple-born lambs (Table 3). This result is in accordance with earlier studies (Dimsoski et al., 1999; Assan and Makuza, 2005; Hinojosa-Cuéllar et al., 2012; Gavojdian et al., 2013).
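As a concrete illustration of how the least-squares analysis of model (1) can be reproduced, the sketch below fits the same fixed effects and age covariate with ordinary least squares in Python. All data are synthetic stand-ins (the real data bank holds 1522 lambs), and every effect size is invented purely to make the script runnable.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data for 200 lambs.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "sex":    rng.choice(["M", "F"], n),
    "year":   rng.choice([2005, 2006, 2007, 2008, 2009], n),
    "litter": rng.choice(["single", "multiple"], n),
    "age":    rng.normal(90, 5, n),            # age at weighing, days
})
df["age_dev"] = df["age"] - df["age"].mean()    # the b(Age - mean Age) covariate

# Invented effects: males +1.5 kg, singles +1.2 kg, 0.15 kg per day of age.
df["W3"] = (16.0
            + 1.5 * (df["sex"] == "M")
            + 1.2 * (df["litter"] == "single")
            + 0.15 * df["age_dev"]
            + rng.normal(0.0, 1.5, n))          # residual term H

# Model (1): main effects, sex x year and sex x litter interactions, age covariate.
fit = smf.ols("W3 ~ C(sex) + C(year) + C(litter) "
              "+ C(sex):C(year) + C(sex):C(litter) + age_dev",
              data=df).fit()
print(fit.summary().tables[1])
```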
The low birth weight and subsequent lower growth rate of twin-born lambs can be attributed to competition for nutrients in utero. This could be due to uterine space and the limited capacity of ewes to provide nourishment for the development of multiple fetuses and more milk for the lambs (Gbangboche et al., 2006; Momoh et al., 2013). However, multiple-born lambs may demonstrate compensatory growth after weaning. Low birth weight was found to be a leading cause of reduced lamb viability (Wilson, 1986). Therefore, particular nutritional attention should be given to ewes lambing twins. Nutritional stress prevents lambs from expressing their full genetic potential (Chang and Rae, 1972) for birth weight and weaning weight. Table 4 presents the coefficients of phenotypic correlation between body weights and the corresponding Pearson correlation P-values. Although all correlation coefficients are significant, the phenotypic correlations of birth weight with the body weights at subsequent ages ranged from low to intermediate and were positive. Similar results were observed in previous studies of Tellicherry goats and Iran-Black and Lori-Bakhtiari sheep (Thiruvenkadan et al., 2009; Rashidi, 2013; Vatankhah, 2013, respectively). The W3 body weight had significant, positive, and moderate to high correlations with the subsequent body weights (0.356-0.732). This indicates that selection for increased body weight at this age would result in genetic improvement at subsequent ages. The phenotypic correlation between two traits includes both the genetic and the environmental correlation. With an appropriate design, the genetic correlation can be separated from the environmental correlation (Momoh, 2013). Therefore, in this study the environmental correlation between the weaning weight and the post-weaning weights may be higher than that between the pre-weaning weights.

IV. CONCLUSION

The results obtained in the present study revealed that environmental factors cause differences in the live weight of Iran-Black sheep from birth to 12 months of age. A breeding program needs to adjust records according to non-genetic effects in order to estimate the breeding values of animals accurately. Sex of lamb, year of birth, and litter size influenced the body weight of Iran-Black lambs. Hence, the effects of these factors should be considered in mixed-model approaches to find the pure genetic values of animals.

V. ACKNOWLEDGEMENT

I would like to express my thanks to my colleagues in the Sheep Breeding Station of Abbasabad, especially Mr. Majid Jafari, for permission to use the data for this study.
The study of contact properties in edge-contacted graphene-aluminum Josephson junctions

Transparent contact interfaces in superconductor-graphene hybrid systems are critical for realizing superconducting quantum applications. Here, we examine the effect of the edge-contact fabrication process on the transparency of the superconducting aluminum-graphene junction. We show significant improvement in the transparency of our superconductor-graphene junctions by promoting the chemical component of the edge-contact etch process. Our results compare favorably with state-of-the-art graphene Josephson junctions. The findings of our study contribute to advancing the fabrication knowledge of edge-contacted superconductor-graphene junctions.

Recent studies have established the importance of the transparency at the superconductor-semiconductor interface for exploring proximity-induced superconductivity in hybrid systems [16-18]. One figure of merit, IcRn, is often used to quantify the contact transparency in JJ devices [19], where Ic is the critical supercurrent and Rn is the normal-state resistance of the junction. Theory predicts that a graphene JJ with transparent contacts should achieve IcRn = 2.44Δ/e in the short limit, where Δ is the superconducting gap [20]. Previous demonstrations of edge-contacted graphene JJ devices reported significantly smaller IcRn/Δ than theory [2,10,12-14,21]. A possible reason for this observation is the short coherence length (ξ) of the superconductors (e.g., Nb, NbN, MoRe) in those studies, which is comparable to or smaller than the graphene channel length (Lch). This design constraint prevents operation in the short limit, thereby making it difficult to assess the extent of graphene-superconductor transparency using IcRn/Δ. A recent study reported a significant improvement in the normalized IcRn for edge-contacted graphene JJ devices, establishing the prospects of achieving operation in the short limit [22]. A key to this demonstration was the fabrication of short-channel (Lch < 0.2 μm) graphene devices that were contacted by an aluminum (Al) superconductor, which satisfies a requirement for short-limit operation, i.e., Lch ≪ ξ (the coherence length of aluminum). More importantly, the high Ic (about 6 μA) of these short-channel graphene devices indicates high transparency at the superconductor-graphene interface. While edge-contacted graphene JJs have improved considerably, there is still significant room for enhancing their performance to reach the theoretical prediction. Studies of the structural, chemical, and electronic properties of the superconducting contacts to graphene are crucial for developing optimization strategies. Our work here is a step in this direction. Here, we fabricated edge-contacted graphene devices with Al superconducting contacts as the test vehicle. Specifically, we examined the effect of the edge-contact etch process on the performance of the resulting JJ devices. We observed that modifying this step can lead to a significant enhancement in IcRn/Δ that outperforms graphene JJ devices with large-gap superconducting electrodes [2,10,12-14,21], while matching the "second-to-champion" devices with Al superconducting electrodes [22]. Lastly, we examined the structural properties and elemental composition of the contacts to graphene in our hBN-graphene-hBN (BGB) JJ devices using high-resolution transmission electron microscopy (HRTEM).
This analysis revealed the unintentional incorporation of carbon (C) and oxygen (O) impurities at the contact interface with graphene. In this work, we used a poly(vinyl alcohol) (PVA)-assisted graphene exfoliation method to produce the monolayer graphene flakes [23]. A sub-10 nm PVA film was spin-coated onto Si substrates covered with SiO2 and used as the adhesion promotion layer during graphene exfoliation. The BGB heterostructures were constructed using the Quantum Material Press tool [24]. To achieve BGB heterostructures with atomically clean interfaces, we followed a stacking process that allows the full removal of polymeric contamination across the entire dimensions of the heterostructures [8]. Before device fabrication, we confirmed the monolayer graphene and the interface cleanliness using Raman spectroscopy [25] (see Supplementary Fig. S1). Figure 1a shows the schematic illustration of a BGB JJ device, where graphene is edge-contacted by bilayer Ti/Al (10/30 nm) electrodes. In this structure, the silicon substrate functions as the global bottom gate. We observed that direct deposition of Al yields poor electrical contact to graphene. In contrast, we found experimentally that the incorporation of a 10-nm-thin Ti interlayer considerably improved the contact quality. We employed a self-aligned process for fabricating the BGB JJ devices [10]. This fabrication process is useful for minimizing the polymer residues at the contact region, while relaxing the requirement of good alignment in the lithography step. We fabricated the graphene JJ devices by first defining a rectangle-shaped BGB mesa using a combination of e-beam lithography and an etching step. A subsequent lithography step defined the self-aligned metal contact regions, followed by reactive ion etching (RIE) of the BGB heterostructures, which exposes the graphene edge for contacting to the metal electrodes. Lastly, Ti/Al metal electrodes were formed through sputtering and lift-off processes. We fabricated two sets of device samples (Al-1 and Al-2) by varying the edge-contact etch process. Specifically, we employed two different CHF3/O2 gas mixtures by adjusting the flow rates (40/4 sccm for Al-1 and 60/4 sccm for Al-2 samples). Other etch conditions in this step were kept unchanged. Our rationale for this experimental design is that increasing this gas ratio is known to promote chemical etching [26]. We hypothesized that promoting the chemical component of the BGB etch can be favorable for enhancing the chemical activation of the graphene edge to yield stronger coupling to the metal electrode [27]. Figure 1b shows the optical image of Al-1 JJ devices configured in a transfer-length-measurement (TLM) structure with Lch = 0.5, 0.75, and 1 μm. We designed the JJ devices on Al-2 to include smaller Lch than Al-1, with Lch = 0.3, 0.4, and 0.6 μm. The channel widths of all devices are W = 5 μm. Our electrical characterization of the BGB JJ devices initially focused on studying carrier transport in graphene at room temperature (Figure 1) and at a low temperature of 1.5 K (Figure 2). These measurements were performed using a standard low-frequency lock-in technique with an excitation current of 5-10 nA. The BGB JJ devices had a quasi-four-point structure, which eliminates the resistance contribution from the metal electrode leads. The current was injected from lead 1 to 2 and the voltage was measured between leads 3 and 4 (see Figure 1b). This configuration measures the sum of the graphene channel resistance and the contact resistance.
Figures 1c-d show the room-temperature device resistance (R) as a function of the gate bias (Vg) for JJ devices on the Al-1 and Al-2 samples, respectively. In all devices, the total resistance exhibits an electron-hole asymmetry, where the hole branch has a higher resistance than the electron branch. This electron-hole asymmetry is attributed to contact-induced electron doping [2,10,28]. This phenomenon yields the formation of p-n junctions at the contact region, creating an additional barrier for the hole carriers at the metal-graphene interfaces. We employed TLM analysis to examine the contact resistance and carrier transport at room temperature. The width-normalized resistance can be written as a linear function of Lch as follows:

RW = RcW + RchW = RcW + Lch/σ

where Rc is the total contact resistance, Rch is the graphene channel resistance, and σ is the graphene conductivity. By applying a linear fit to the TLM data (see Supplementary Fig. S2), we extracted the width-normalized Rc (see Figures 1e and g). We also calculated the mean free path (Lmfp) by assuming diffusive transport in these devices, using σ = (2e²/h)·kF·Lmfp, where kF = √(πn) is the Fermi wavevector. This analysis revealed that Al-2 devices have a smaller Rc than Al-1 devices at high carrier density. Furthermore, we calculated 0.5 μm < Lmfp < 1 μm in Al-1 devices at high electron density, which is comparable to the device dimensions (see Figure 1f). The resistance of the Al-2 devices at high electron density showed negligible channel-length dependence, which yielded Lmfp ≫ 0.6 μm (marked by the gray shading in Figure 1h). The TLM analysis at room temperature suggests that the Al-1 and Al-2 devices operate in the crossover regime between diffusive and ballistic transport. Next, we studied the low-temperature transport at T = 1.5 K, which is above the superconducting critical temperature (Tc) of Al. Figures 2a-b show the gate-voltage-dependent plots of resistance for Al-1 and Al-2 devices, respectively. The data revealed negligible channel-length dependence of the resistance away from the charge neutrality point (CNP), suggesting ballistic transport. Assuming ballistic transport in graphene, we investigated the metal-graphene contact properties. A ballistic conductor (i.e., Lmfp > Lch) with no scattering centers has zero channel resistance. Therefore, the total resistance in a ballistic device can be attributed to the resistance in the immediate vicinity of the conductor-contact interfaces, which is known as the Sharvin resistance (Rs) [29-31]. This quantum-limited resistance is determined by the number of conducting modes (M) in the ballistic conductor. Thus, the graphene Sharvin resistance can be written as

Rs = 1/(4g0M) = h/(4e²M), with M = kFW/π,

where h is the Planck constant, g0 = e²/h is the conductance quantum, and the factor 4 comes from the spin and valley degeneracy in graphene. In Figures 2a-b, we plotted Rs to allow comparison with the measured device resistances. This analysis revealed an observable difference between the theoretical and measured values, suggesting that additional factors should be considered in describing the device resistance. In Landauer's formula, a finite transmission probability (Tr) is used to account for the difference between R and Rs [32]. Tr describes the averaged probability of electron transmission from one metal contact to the other. Thus, the overall conductance G can be written as G = Gs·Tr, where Gs = 1/Rs. From this equation, we extracted Tr as a function of the carrier density in graphene, as shown in Figures 2c-d.
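A short numerical check of this extraction is sketched below, using the Sharvin expression as reconstructed above and the device parameters quoted in this work (W = 5 μm, n = 1.4×10^12 cm−2, measured R ≈ 50 Ω for the Al-2 devices at high electron density); it reproduces Tr ≈ 0.4.

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

def sharvin_resistance(n_cm2, width_m):
    """Ballistic (Sharvin) resistance of a graphene channel.

    M = k_F * W / pi conducting modes with 4-fold spin/valley degeneracy,
    R_s = h / (4 e^2 M), where k_F = sqrt(pi * n).
    """
    n_m2 = n_cm2 * 1e4              # carrier density: cm^-2 -> m^-2
    k_f = np.sqrt(np.pi * n_m2)     # Fermi wavevector, m^-1
    modes = k_f * width_m / np.pi   # number of conducting modes M
    return h / (4 * e**2 * modes)

r_s = sharvin_resistance(1.4e12, 5e-6)   # ~19 ohm for these parameters
r_measured = 50.0                        # ohm, Al-2 at high electron density
print(f"R_s = {r_s:.1f} ohm, Tr = R_s / R = {r_s / r_measured:.2f}")
```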
This analysis revealed that, at high electron density, Al-2 devices exhibit about 30% higher Tr than the Al-1 devices (Tr ≈ 0.4 versus 0.3). While this analysis estimates the extent of the non-ideality in the ballistic transport of graphene devices, further studies are required to identify the exact sources of the finite Tr. For example, various non-idealities in the materials can contribute to the deviation of Tr from unity, such as disorder at the contact interfaces (i.e., the graphene edge or metal electrode), non-specular boundary scattering in graphene, and a mismatch of conducting modes between metal and graphene [29,31]. Nevertheless, this analysis revealed the noticeable role of the edge-contact etch in our fabrication process in shaping the electronic properties of the contact. Next, we evaluated the Josephson characteristics of the Al-1 and Al-2 devices. Figure 3 shows representative Josephson dc measurements of the 0.4 μm device on the Al-2 sample at 15 mK. The data show Ic of 1.63 and 0.3 μA at electron and hole carrier densities of 1.4×10^12 and −5×10^11 cm−2, respectively. These results confirm the gate tunability of the supercurrent. Ic increases with the carrier density on both the electron and hole branches. Moreover, the minimum Ic (0.06 μA) occurs at the CNP. The differential resistance map in Figure 3b revealed the electron-hole asymmetry in Ic. For the same carrier density, the electron branch provides over 4 times higher Ic than the hole branch. The considerably smaller Ic on the hole branch is due to the contact-induced doping [2,10,28]. These results confirm the proximity-induced superconductivity of graphene. We have provided the results of the other devices on the Al-1 and Al-2 samples in Supplementary Fig. S3. Next, we examined the supercurrent interference pattern in the presence of a vertical magnetic field (Bz). These measurements provide information about the field profile in the junction by analyzing the node periodicity of the supercurrent [33,34]. Assuming a rectangular junction with uniform current distribution, the interference pattern follows

Ic(Bz) = Ic0 |sin(πΦ/Φ0)/(πΦ/Φ0)|,

where Φ0 is the flux quantum, Φ is the out-of-plane flux in the junction region, and Ic0 is the zero-field critical current. In Figures 3c-e, we plot the measured Fraunhofer patterns at carrier densities of 1.4×10^12 cm−2 and −5×10^11 cm−2, and at the CNP. The period of the field oscillation is 0.43 mT, which gives Lch + 2λ = 0.93 μm (hence λ = 0.26 μm), where λ is the penetration depth into the Al electrodes. Furthermore, we used these datasets to reconstruct the current density distribution of the junction [35]. Figure 3f illustrates the critical current extracted from Figure 3c. In Figure 3g, we calculated the current density profile, which shows a nearly constant current density within the junction. This result implies a uniform contact quality and homogeneous transport properties in this BGB JJ device (see Supplementary Fig. S3 for measurements in other junctions). The large Ic of 1.63 μA in Figure 3a points to a good transparency of the Al-graphene junction. Therefore, we next examined the transparency of the superconducting Al-graphene junction by calculating IcRn. Figures 4a-b show the I-V curves of the Al-1 and Al-2 devices, measured at high electron density using a biasing current up to 10 μA. The use of a high biasing current is an important consideration for the accurate extraction of Rn, as it must be extracted in the regime where the junction becomes fully ohmic.
From the slopes of these I-V curves, we calculated Rn ≈ 90 Ω for Al-1 devices at 5×10^11 cm−2. For Al-2 devices, we obtained Rn ≈ 50 Ω at 1.4×10^12 cm−2. These extracted Rn values are consistent with the transport measurements at 1.5 K. Furthermore, we used the cold-branch I-V curves in Figures 4a-b to obtain Ic, yielding Ic = 0.6, 0.46, and 0.38 μA for Al-1 devices and Ic = 1.7, 1.63, and 1.45 μA for Al-2 devices. Figure 4c shows the summary of the IcRn data for Al-1 and Al-2 devices normalized to the superconducting gap Δ (notice the star symbols in this plot). An important consideration in obtaining IcRn/Δ for evaluating the contact transparency is to use the bulk superconducting gap of Al [17]. To do so, we measured the Tc of Al, which was 1.15 K and 0.9 K in the Al-1 and Al-2 samples. Correspondingly, the bulk superconducting gaps of Al in the Al-1 and Al-2 samples were 175 and 136 µeV. Using this information, we calculated IcRn/Δ to be 0.3, 0.24, and 0.2 for Al-1 devices, and 0.68, 0.6, and 0.54 for Al-2 devices. Comparing IcRn/Δ of the devices with similar Lch on the Al-2 and Al-1 samples indicates an almost 2 times improvement in the transparency of the superconducting Al-graphene junction. This analysis provides further evidence for the important role of the edge-contact etch in our fabrication process. To put these results in perspective, in Figure 4c we compared IcRn/Δ of our devices with state-of-the-art counterparts that use large-gap superconductors [2, 10, 12-14, 21, 36]. For a fair comparison in this plot, we normalized IcRn to the bulk superconducting gap for all data. This summary plot indicates a higher IcRn/Δ for our devices relative to their counterparts with similar lengths. We attribute the observed improvement, in part, to the longer coherence length of Al in our devices. The measurements of IcRn as a function of temperature can also provide information about the contact transparency [19]. Figure 4d shows the experimental IcRn for the 0.5 and 0.3 μm devices on the Al-1 and Al-2 samples. From the data, we fitted IcRn = τΔ/e using the Kulik-Omelyanchuk relation [37] and found τ = 0.22 and 0.43 for the 0.5 and 0.3 μm devices, respectively (see Supplementary Note 2). This analysis further confirms the improved junction transparency in the Al-2 devices. Another important property of a JJ device is the induced superconducting gap ∆ind, which can be obtained from multiple Andreev reflection (MAR) [17]. Figures 4e-f show the measured conductance of the 1 and 0.3 μm devices (on the Al-1 and Al-2 samples) at different gate voltages. Several discernible conductance peaks can be observed in the conductance. The positions of those peaks correspond to the energy levels 2∆ind/N, where N is an integer. From Figures 4e-f, we extracted ∆ind = 100 and 80 µeV for the Al-1 and Al-2 devices. Our observation of a smaller ∆ind than the bulk superconducting gap is consistent with previous studies on Al devices [17]. This analysis emphasizes the importance of normalizing IcRn to the bulk superconducting gap for assessing the junction transparency.
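As a worked check of the numbers above, the snippet below derives the quoted bulk gaps from the measured Tc via the weak-coupling BCS relation Δ = 1.764·kB·Tc and recomputes IcRn/Δ for one representative device per sample (Ic = 0.6 μA with Rn = 90 Ω for Al-1, and Ic = 1.63 μA with Rn = 50 Ω for Al-2); pairing these particular Ic and Rn values into single devices is an assumption made for illustration.

```python
K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def bcs_gap_ueV(tc_kelvin):
    """Weak-coupling BCS estimate of the zero-temperature gap,
    Delta = 1.764 * k_B * Tc, returned in micro-eV."""
    return 1.764 * K_B * tc_kelvin * 1e6

samples = [
    # (label, Tc in K, Ic in uA, Rn in ohm)
    ("Al-1", 1.15, 0.60, 90.0),
    ("Al-2", 0.90, 1.63, 50.0),
]

for label, tc, ic_uA, rn_ohm in samples:
    delta_ueV = bcs_gap_ueV(tc)   # ~175 ueV (Al-1), ~137 ueV (Al-2)
    ic_rn_ueV = ic_uA * rn_ohm    # uA * ohm = uV, i.e. ueV per electron charge
    print(f"{label}: Delta = {delta_ueV:.0f} ueV, "
          f"IcRn/Delta = {ic_rn_ueV / delta_ueV:.2f}")
```

Running this reproduces the gaps of about 175 and 136 µeV and the normalized IcRn values of about 0.3 and 0.6 quoted above.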
Lastly, we analyzed the structural properties and elemental composition of the metal-graphene junction using HRTEM and energy-dispersive X-ray spectroscopy (EDS). Figure 5a shows the cross-sectional HRTEM image of a BGB JJ with Lch = 0.5 μm. Figure 5b shows a zoomed-in view of the same BGB JJ at the edge-contact region. The HRTEM images confirmed the continuity of the Ti/Al metal stacks on the BGB sidewall. They also revealed the sharpness of the metal-BGB interface. The basal plane of the BGB layers is visible in Figure 5b, confirming the atomically clean interfaces between the graphene and hBN layers. These results confirmed the excellent structural quality of the BGB stacks. Figure 5h provides detailed information about the elemental composition of the metal-graphene junction and confirms the formation of the edge contact between graphene and the Ti layer. Curiously, this analysis also revealed the presence of elemental C and O atoms, which mainly overlap with the Ti interlayer. In our experiment, we did not investigate the mechanism of O and C incorporation or the nature of their interactions with Ti. However, we suspect that the incorporation of these impurities occurs during the Ti deposition, due to its known high reactivity with C and O [38-40]. While we do not know how these impurities affect the junction transparency, we conjecture that they may be a limiting factor in improving the performance of our graphene JJ devices. Therefore, a future research direction is to investigate the effect of alternative interlayers that are less susceptible to the unintentional incorporation of C and O impurities. In conclusion, the results reported here illustrate the noticeable role of the edge-contact etch in our fabrication process on the transparency of the superconducting Al-graphene junction. Specifically, we found that promoting the chemical component of the etch process was beneficial in improving the device performance. With this modification, the normalized IcRn of our devices surpassed that of state-of-the-art counterparts with large-gap superconductors (Refs. [2, 10, 12-14, 21, 36]), while matching the second-to-champion devices with Al (Ref. [21]; see Supplementary Fig. S4). Furthermore, our HRTEM study revealed the presence of C and O impurities in the Ti interlayer. We conjecture that further improvement of the junction transparency may require strategies for achieving an impurity-free metal interlayer. Lastly, many studies have contributed to the persistent
Functional and environmental performance of plant-produced crumb rubber asphalt mixtures using the dry process

Incorporating crumb rubber (CR) using the dry process, directly in the asphalt mixture rather than into the bituminous binder, requires no plant retrofitting and therefore is the most practical industrial method for CR incorporation into asphalt mixtures. Nevertheless, very few large-scale studies have been conducted. This work uses a holistic approach and reports on the functional and environmental performance of asphalt mixtures with different concentrations of CR fabricated in asphalt plants employing the dry process. Gaseous emissions were monitored during production, and laboratory leaching tests, simulating the release of pollutants during rain, were conducted to evaluate the toxicology of both the CR material alone and the modified asphalt mixtures. In addition, laboratory-compacted samples were tested to assess their fatigue behavior. Furthermore, noise-relevant surface properties of large roller-compacted slabs were evaluated before and after being subjected to a load simulator (MMLS3) to evaluate their resistance to permanent deformation. The results confirm that comparable performance can be achieved with the incorporation of CR using the dry process for high-performance surfaces such as semi-dense asphalt, which usually require the use of polymer-modified binders. The environmental performance can be improved by a washing step of the CR material, which could remove polar CR additives that have commonly been used as vulcanization accelerators during rubber production.

Abbreviations: TOC, total organic compounds; VOC, volatile organic compounds.

Introduction

Crumb rubber (CR) from waste tires has been used as a modifier of asphalt materials for the last 40 years [1]. Various blending methods have been developed over the years, which can be summarized under three general categories. The first is the wet process, which refers to the blending of CR with bitumen, added to the asphalt mixture after a reaction time; the second is the dry process, where the CR is added directly to the mixture; and the third is the terminal blend, which refers to bitumen with CR that is digested into the bitumen at the refinery or at a bitumen terminal. Many practical studies have shown that the use of CR can improve the properties of asphalt mixtures as a result of interaction with the bituminous binder. The main physical mechanism governing this interaction involves a swelling process of the rubber particles by the lower-molecular-weight fractions of the binder (e.g., maltenes) [2]. Nevertheless, the final performance of this solution is closely linked to a good mixture design, including the percentages of binder and CR. Along with the selection of a compatible asphalt binder, the type of CR particle, a consequence of its production process, plays a key role in obtaining a reliable behavior of the final asphalt mixture [3]. Nowadays, the dry manufacturing process seems to be the most practical technique for the industry, as it does not require significant changes in the production plants. Besides the practical knowledge gained, the development of chemical treatments focused on enhancing the compatibility between the CR and the asphalt binder has helped overcome the initial inconsistencies found during the first test trials [4]. These surface treatments can involve catalytic agents, dispersing agents, and hydro-thermal reactions to activate the CR particles [2].
As a consequence of this, CR particles can now be added directly to the mixer as an additive or as a partial replacement of the mineral aggregates. After mixing with the optimal amount of hot bitumen (ca. 160-220 °C), the mixture usually requires a curing, maturation or digestion time at high temperature to achieve a proper reaction between the CR particles and the asphalt binder. The nature of this interaction is a combination of physical and chemical effects, as reported in a review by Lo Presti [5], comprising partial digestion of the rubber into the bitumen on the one hand and, on the other, adsorption of the aromatic oils within the polymeric chains that are the main components of the rubber. The latest versions of modified CR have reduced this time to 30 min or lower, which could easily coincide with the hauling time from the plant to the construction site [6]. Most of the published works on the performance of CR asphalt mixtures modified using the dry process focus on laboratory-scale studies [4,7,8]. However, very few report on the performance of optimized designs produced at large industrial-scale asphalt plants. For example, Feiteira Dias et al. [9] evaluated the mechanical response of gap-graded asphalt rubber mixtures manufactured in an asphalt plant using the dry process in a field study of trial sections. The laboratory testing indicated that the mixtures with CR showed better rutting and fatigue performance. Moreover, visual inspection after five years of service confirmed a satisfactory performance, both with regard to the structural and the functional performance. Nevertheless, due to the elevated binder content used in these experimental mixtures (> 8.5%), it was difficult to link this enhanced response only to the addition of the CR. Likewise, Eskandarsefat et al. [10] have shown the effect of CR addition with the dry process on dense asphalt mixtures with reclaimed asphalt pavement (RAP) by means of both laboratory-scale and in situ tests. They aimed at studying the influence of the elastic CR particles on the stiffening effect usually associated with the presence of aged binder from the RAP. Moreover, the potential absorption of the rejuvenating agent by the unmodified CR was analyzed. They observed that CR modified asphalt mixtures showed an optimal response against permanent deformation and moisture susceptibility. However, their results confirmed that the binder content needed to be slightly increased in the design of the CR modified asphalt mixtures to enhance the workability and to meet the volumetric requirements. In addition, the skid resistance measured on the test tracks was found to be reduced with CR, which was also found by Miró et al. [40] in the evaluation of the functional characteristics of several test tracks built with gap-graded asphalt mixtures modified with CR by the dry process. Specifically, the addition of CR using the dry process led to a decrease in macrotexture, which was proportional to the increase in CR content. Although it was observed that these mixtures were more prone to wear, they emphasized that construction and service conditions could strongly affect the surface characteristics. In a more recent study, Sangiorgi et al. [11] carried out a field evaluation of the viability of Stone Mastic Asphalt (SMA) mixtures with CR incorporated as a partial replacement of limestone filler. They found that the modified mixtures fabricated in the plant obtained similar volumetric and mechanical properties compared with the standard mixtures.
Moreover, although the texture values of these rubberized surfaces were in line with the parameters typically recorded for gap-graded mixtures, they showed lower tire-road noise levels. In addition to the field performance, the environmental effect of such mixtures is a debate that is ongoing today in all construction sectors and, in fact, has become central for the assessment of new asphalt mixture designs, including the ones modified with CR by the dry process. CR is a mixture of natural and synthetic rubber (such as styrene-butadiene rubber), carbon black, sulfur and sulfur-based cross-linking agents and various other additives (e.g. aging retardants, reinforcing agents, accelerants, antioxidants, plasticizers, fillers, or textiles) [2]. Due to its constituents, monitoring of the environmental effects of CR is important. An earlier technical report [12] showed higher concentrations of diverse pollutants in air samples taken during the production of CR asphalt mixtures as compared with conventional ones. It noted that the presence of polycyclic aromatic hydrocarbons (PAHs) in the modified mixtures could have the potential to cause health problems (e.g. cancer or respiratory irritation) in workers due to occupational exposure. However, in a more recent study, Nilsson et al. [13] concluded that it was not evident whether exposure to rubber bitumen poses a higher risk than exposure to standard bitumen in terms of air pollutants such as benzothiazole and PAH emissions. Likewise, Sangiorgi et al. [11] did report effective benefits for rubberized mixtures, with a reduction in air emissions of respirable dust particles and PAHs during the placement process. Also in Italy, Zanetti et al. [14] investigated the gaseous emissions produced during paving operations of asphalt mixtures modified with CR by the dry process. For the determination of the concentration of volatile organic compounds (VOCs) and PAHs, different analytical tests were conducted in the laboratory. They concluded that the composition of fumes was affected by several material-specific (i.e. mixture composition, CR type and base bitumen type) as well as site-specific (i.e. layer thickness, placement and air temperature, wind, air pressure) factors. Nevertheless, the relative contributions of bitumen quantity, type and composition seem to be the most relevant parameters. Moreover, the results showed that the toxic and carcinogenic risks for workers on site in the case of bituminous mixtures containing CR were comparable to those of standard paving materials. Along with air pollutant emissions during the construction of different test tracks, Santagata et al. [15] quantified the concentration of PAHs, VOCs and metals by means of leaching tests. The results confirmed that the values obtained for CR asphalt mixtures complied with the required regulations. The same conclusion had already been reported in several American studies conducted to evaluate the potential leachate of hazardous components present in crumb rubber [16][17][18]. The authors concluded that the CR modified asphalt mixtures did not pose a relevant threat to the environment or human health either. In an updated review, Wang et al. [19] summarize the overall environmental impact associated with CR modified asphalt mixtures. The report states that rubberized asphalt technology was favorable to reducing greenhouse gas (GHG) emissions [20]. Also, the authors note the study by Stout et al.
[21], which mentions that the emissions of O2, N2, CO2, NOx and SO2 from the production of rubberized asphalt mixtures were similar to those for hot mix asphalt. However, emissions of CO and CH4 were much lower from rubberized asphalt mixtures, by ca. 40% and 60%, respectively. It should be noted that these were measured during the wet process for a continuous manufacturing process. In that review, it is also emphasized that Feraldi et al. [22] stated that the use of a commercial asphalt modifier based on recycled scrap tires showed excellent environmental advantages and reduced the volatile organic compounds (VOCs) by 30% in comparison with SBS modified asphalt. With the current state of the art, many researchers consider that asphalt mixtures modified with CR by the dry process can potentially be a substitute for polymer modification. These high-performance polymer-modified mixtures were originally developed to improve the mechanical behavior to meet the requirements of the growing traffic demand observed during the past decades. However, issues related to their high cost as well as to production temperatures (i.e. greenhouse gas emissions) and the recyclability of polymer-modified mixtures have been a topic of discussion within the international community. In spite of the vast amount of knowledge on the use of CR as a performance-enhancing additive in road materials and an effective waste mitigation measure, the technology readiness level (TRL) varies worldwide. For example, Piao et al. [23] have shown that wet process CR has reached a TRL of 7-9 worldwide (application is partially or completely industrialized), whereas in Switzerland it has reached a TRL of only 1-4 (progress at laboratory scale or lower). On the other hand, dry process CR has a worldwide TRL of 5-7 (pilot projects have been implemented in the field), whereas the Swiss TRL is only 1-4. This is a typical trend, i.e. technologies reaching different TRLs in different regions. The worldwide discrepancy is due to various factors, ranging from a lack of legislation and incentives to a lack of know-how and of trust in the new technologies by practicing professionals and decision makers. The current work aims at closing this knowledge gap by using a holistic approach, analyzing the functional performance and environmental effects of plant-produced asphalt mixtures modified with CR using the dry process in comparison with a conventional one prepared with polymer modified bitumen (PmB). Three batches of semi-dense asphalt mixtures were produced in an asphalt plant, where VOC emissions were measured. Afterwards, samples were taken from each mixture in order to conduct a series of leaching tests. Additionally, fatigue tests were conducted on cylindrical samples, while large slabs were roller compacted and used for the evaluation of their surface texture characteristics as well as their response to permanent deformation at medium scale under repetitive loading with a load simulator.
Materials
A semi-dense asphalt mixture with a maximum aggregate size of 4 mm (SDA4) was selected for this study. This type of mixture is commonly used as a low noise surface course (SNR 640 436: 2015). A total of three 800 kg batches were manufactured in an asphalt plant. The one produced with PmB 45-80-65 was the reference following the Swiss standard.
In parallel, a base asphalt binder of type 70/100 was used for the mixtures that incorporated different percentages of CR by the dry process, added directly in the asphalt mixer with the pre-heated mineral fractions. The CR particles (< 0.6 mm) were produced by mechanical shredding and chemically activated by a patented treatment for use in the dry process [24]. While the CR particles could occupy some space in the mixture, no reduction was made in the aggregates or binder to compensate for this. However, a slight increase of the asphalt binder content based on the CR percentage was recommended by the CR producer. This was aimed at countering the potential swelling process, which generally involves the absorption of lighter fractions of the asphalt binder into the internal matrix of the rubber particles. The details of the mixture designs are shown in Table 1. The operating parameters, mixing times and temperatures were the same within each mixture manufacturing process. Furthermore, a digestion time of 30 min at the mixing temperature (160 °C) was used to allow a proper CR-asphalt binder interaction. When working with chemically activated crumb rubber, as here, swelling is much faster, hence the material reaches a state at which it can be paved more quickly. This is evidenced in the viscosity measurements over time performed previously [6]. The rubber amount was based on ca. 15% of fresh bitumen, as shown by previous experiments [6]. Due to the potential ageing effects on the mixture induced by this stage, and to allow a proper comparison between the mixtures, the reference mixtures were subjected to the rest period at high temperature (digestion process) as well. During the production process this CR-asphalt binder interaction took place in silos at the plant site. The mixtures were thereafter placed in 25 kg boxes and transferred to the laboratory for further testing. Prior to the sample preparation, the mixtures were heated briefly in a microwave oven to attain workability.
VOC emissions
During the plant production process, which is a batch process, the temperatures of the standard materials (mineral aggregates, filler and asphalt binder) were the same for the three different mixtures. This made it possible to isolate the effect of the CR, which was incorporated at ambient temperature directly into the mixer before the addition of the hot bitumen. No significant difference in terms of handling and mixing was noticed between the mixtures, and a temperature of approximately 160 °C was always recorded at the exit of the mixer. Likewise, in order to assess the effect of the CR incorporation on the concentration of VOCs during the production of the mixtures, emission samples were taken at two locations, from the mixer aspiration duct and from the exhauster (downstream), to quantify the VOCs. The measurements were carried out with a flame ionization detector (SICK FID 3006). This equipment was used for measuring total organic compounds (TOC), i.e. VOCs counted on carbon atoms (VOC-C1). The sensors were installed at two locations: in the mixer aspiration duct and in the clean gas duct, i.e. downstream of the plant's main bag filter (Fig. 1) and exhauster (main blower), which are located upstream of the exhaust gas chimney. All emission data were recorded with a mobile NI (National Instruments) LabVIEW SignalExpress system using a sampling rate of 1 data point per second for each signal.
Leaching tests
As mentioned earlier, CR is a mixture of natural and synthetic rubber (such as styrene-butadiene rubber), carbon black, sulfur and sulfur-based cross-linking agents and various other additives (e.g. aging retardants, reinforcing agents, accelerants, antioxidants, plasticizers, fillers, or textiles) [2]. In particular, CR can contain various benzothiazole (BT) derivatives, which are sulfur-based compounds that are used as vulcanization accelerators and/or are by-products of the accelerators formed during tire vulcanization. Therefore, in asphalt mixtures modified with CR, the release of these compounds in fumes during road work and in run-off water during the service life can lead to exposure of humans and the environment to carcinogenic and toxic compounds. Benzothiazole (BT) derivatives are designed to be thermally labile and as such decompose at elevated temperatures, e.g. during the vulcanization of rubber and during road work. The BT decomposition products in the fumes were not specified. However, BT derivatives are relatively stable at ambient temperatures and can be dissolved in water, especially in acidic water, and can thereby be released from CR-modified asphalt. Likewise, any concentration of PAHs has the potential to cause cancer in workers exposed to their fumes during the production of the asphalt mixtures or the final construction of the road infrastructure. In order to assess the concentrations of PAHs and benzothiazoles in the CR material, samples of the CR used in this study were extracted with dichloromethane (DCM) to dissolve the PAHs and BTs. All SDA mixtures produced in the plant were analyzed in the laboratory by means of a set of leaching tests with acidic water (diluted acetic acid, pH 4.93) using established methods [25]. The materials for the leaching tests were prepared in triplicate. For each test, a 100 g sample of loose asphalt mixture was placed in a glass bottle and 2 l of acidic solution (pH = 4.93) was added. Mildly acidic conditions simulate leaching with rain, which is slightly acidic too. The bottle was rotated at 30 rpm for 18 h at room temperature. Then the extract was separated from the solid using a borosilicate glass fiber filter (0.6-0.8 µm). The PAHs in the leachates were extracted by using solid-phase extraction disks (ENVI-18 DSK SPE disks, Supelco). A high resolution combined gas chromatograph mass spectrometer (GC-Ultra-HRMS Orbitrap QExactive) was used to identify PAHs by following an established leaching procedure [26]. Concentrations of BT derivatives were determined with a combined liquid chromatography-triple-quadrupole mass spectrometer (LC-QQQ-MS, Agilent 1290 Infinity). A list of the investigated compounds, the abbreviations used, molecular masses and, if available, water solubilities [27] is given in Table 2.
Resistance to fatigue
Cylindrical Marshall specimens of 100 mm diameter and 40 mm thickness were prepared by hammer compaction and used to perform indirect tensile tests to obtain the fatigue resistance (EN 12697-24, AL-SP-Asphalt 09). During the fatigue test until failure, a continuous sinusoidal load was applied with a frequency of 10 Hz at a constant temperature of 10 °C. According to the standard, three loading amplitudes were implemented, using 0.035 MPa as the lower stress and an upper stress between 0.4 and 0.8 MPa. The material's fatigue function is expressed as

N_macro = C1 · (ε_el)^C2

where ε_el is the horizontal elastic initial strain and C1, C2 are fitting constants.
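As an illustration of the fatigue law above, the sketch below fits N_macro = C1 · ε_el^C2 to indirect-tensile fatigue data by linear regression in log-log space and derives the ε6 parameter discussed later in the paper. The strain and cycle values are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Illustrative (not measured) fatigue data: horizontal elastic initial
# strain (um/m) and cycles to the energy-ratio peak for three amplitudes.
strain = np.array([120.0, 90.0, 60.0])
n_macro = np.array([2.0e4, 1.1e5, 9.0e5])

# Fit N_macro = C1 * eps^C2 by linear regression in log-log space.
C2, logC1 = np.polyfit(np.log(strain), np.log(n_macro), 1)
C1 = np.exp(logC1)

# eps6: the strain amplitude corresponding to one million cycles.
eps6 = (1.0e6 / C1) ** (1.0 / C2)
print(f"C1 = {C1:.3e}, C2 = {C2:.2f}, eps6 = {eps6:.1f} um/m")
```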
N_macro is the number of loading cycles at which the energy ratio (ER) reaches its peak, N being the number of cycles and E(N) the stiffness modulus at each cycle:

ER(N) = N · E(N)

Permanent deformation with the model mobile load simulator MMLS3
In order to reproduce stresses and strains that are similar to those produced in real pavements, the model mobile load simulator MMLS3 was used as a medium-scale accelerated pavement loading testing system [28]. Permanent deformation tests at 25 °C were conducted on large roller-compacted slabs (two from each asphalt mixture). These were 1600 mm in length and 435 mm in width, with a thickness of 40 mm. During the test, the evolution of the vertical deformation of the rut generated on the wheel path was monitored with a laser scanner with a precision of 1 mm (Fig. 2). Three reference lines transversal to the wheel direction (A, B and C) were scanned at 1000, 5000, 10,000, 20,000, 40,000 and 60,000 loading cycles. The percent rut depth was calculated as an average of the measured points on the rut width (100 mm) at the reference lines, resulting in 3 locations with 100 points per location (A, B, C in Fig. 2), i.e. 300 measurement points in total, as follows:

d̄_n = (1/300) Σ_{x,y} d_{x,y}(n),  RD_n (%) = 100 · (d̄_n − d̄_initial) / h

where d_{x,y} are the punctual vertical displacements in mm measured along the width of the rut (x-direction) on each reference line (y-direction) along lines A, B and C (refer to Fig. 2) before the test (initial) and after different numbers of cycles (n), d̄_n is the average of the vertical displacements, and h is the slab thickness.
Surface texture and skid resistance
The texture of the surface is usually associated with driving safety under wet conditions, noise emission and fuel consumption [29]. In addition, it plays a relevant role in the wearing of the rolling tires [30]. This results in tire dust that could potentially have a detrimental effect on the surrounding environment by means of soil and groundwater leachates as well as ultrafine particles suspended in air affecting the human cardiovascular and respiratory systems. Nevertheless, previous research has shown a positive influence of porous pavements in mitigating the resuspension of road particles [31]. In this study, the surface properties of the SDA slabs were investigated through their micro- and macro-texture characteristics. The friction/adhesion between tires and the rolling surface is related to micro-texture. The skid resistance for each slab was measured by using a British pendulum tester on wet surfaces with three different pendulum swings at the same location (EN 13036-4). The tester incorporates a spring-loaded slider made of a standard rubber which, upon release, passes over the test surface. This contact results in an energy loss that is quantified by the upswing of the arm using a calibrated scale. The Pendulum Test Value (PTV) was determined by the constant value achieved by the final three swings. In addition, the surface macrotexture (texture wavelengths 2.5-100 mm) was measured with stationary laser profilometry (Ames Engineering 9400HD) mounted on top of the asphalt slabs. Texture levels (in dB) were obtained by scanning the profile of the surface. Two different punctual measurements (100 mm × 50 mm) were carried out with resolutions of 0.005 mm vertically, 0.006 mm along the length of the scan and 0.02 mm across the width. Afterwards, the mean profile depth (MPD) was calculated (ISO 13473-2) as the average depth of the surface over a 100 mm baseline.
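A minimal numerical sketch of the rut-depth averaging described above is given below. It assumes the scans are stored as 3 reference lines × 100 points and that the percent value is taken relative to the 40 mm layer thickness (an assumption consistent with the Swiss rutting convention cited later); the scan data themselves are synthetic.

```python
import numpy as np

def percent_rut_depth(initial: np.ndarray, after_n: np.ndarray,
                      layer_thickness_mm: float = 40.0) -> float:
    """Average rut depth over 3 reference lines x 100 points (300 values),
    expressed as a percentage of the layer thickness (assumed convention)."""
    assert initial.shape == after_n.shape == (3, 100)
    settlement = after_n - initial          # vertical displacement per point (mm)
    d_mean = settlement.mean()              # average over the 300 points
    return 100.0 * d_mean / layer_thickness_mm

# Illustrative scans: nearly flat initial surface, ~1.5 mm mean rut after n cycles.
rng = np.random.default_rng(0)
initial = rng.normal(0.0, 0.05, size=(3, 100))
after = initial + rng.normal(1.5, 0.2, size=(3, 100))
print(f"rut depth: {percent_rut_depth(initial, after):.1f} %")
```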
The texture level (L_TX,λ) relative to the texture wavelength, λ, is calculated by taking the 1/3rd octave band power spectral density (PSD) graphs for every 10 scanlines and using the following equation derived from ISO 13473-4:

L_TX,λ = 10 · log10( (Z_p,λ · 0.232f) / a_ref² )

where Z_p,λ is the 1/3rd octave band PSD amplitude for a certain texture bandwidth λ, 0.232f represents the bandwidth, and a_ref is the reference value of the surface profile amplitude (10⁻⁶ m, given by ISO 13473-4). In addition, an extra set of measurements was conducted on the rolling path after the MMLS3 test in order to evaluate the surface evolution after the wearing caused by the rolling of the tires. Two measurements, before and after wearing respectively, were conducted for each sample.
Results and discussion
The VOC emissions were measured at the mixer and the chimney during the production of the different SDA mixtures at the Weibel Oberwangen trials. The obtained values are shown in Fig. 3. As expected, the concentration observed at the chimney was lower in comparison to the one measured directly at the mixer. As the graphs show, the concentrations at the exhauster were much lower (see the scaling in Fig. 3). In fact, the levels at the blowers were minor due to significant dilution with fresh air from the total plant process. In general, VOC peak concentrations were found to be higher during the production of the experimental CR mixture batches than during the mixing of the reference polymer-modified mixture. To further evaluate these results, the emissions obtained at the mixer are analyzed in detail next. The areas corresponding to each mixture production were calculated. The calculation of the areas allows the entire VOC-versus-time curve to be considered, and not only the peak. In this case, a baseline correction was applied first to accurately define the accumulated concentration during the slightly different mixing times. A lower peak was observed for the reference mixture, which could be related to the slightly longer mixing time used to fabricate the reference mixture (Fig. 3). Furthermore, it is important to note that this evaluation was based on small single-batch productions (800 kg) for one type of mixture and one type of plant. In normal operating conditions hundreds of tons are produced daily. Therefore, these results must only be considered as a preliminary evaluation, and future measurement campaigns with an increased number of batches should be undertaken to reach more solid conclusions and statements. These could further assess other contributing factors such as plant type, schematic and layout, airflows, various mixture designs and mixing sequences, bitumen types and qualities as well as additives (e.g. CR types and addition rates). It should further be noted that mixer emissions are only one of many contributors to the total plant exhaust gas emissions measured in the chimney, the latter being relevant for compliance with exhaust gas emission limits. Other emission sources are burners, transfer points and aspiration systems. The presence of PAHs in the CR used as modifier and in the different SDA mixtures prepared for this investigation was evaluated, and the release of PAHs from these materials to the environment was studied by leaching tests with acidic water. Figure 4 displays the chemical structures of the 16 priority PAHs. Abbreviations are given in Table 2. Concentrations of DCM-extractable PAHs (mg/kg) of the CR material alone are shown in Fig. 5.
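The baseline-corrected area computation used above for the mixer VOC traces can be sketched as follows; the baseline estimate (median of the trace ends) and the synthetic 1 Hz trace are assumptions for illustration, not the plant data.

```python
import numpy as np

def voc_batch_area(signal_mg_m3: np.ndarray, fs_hz: float = 1.0) -> float:
    """Integrate a VOC concentration trace over one mixing batch after
    subtracting a constant baseline (here: median of the trace ends)."""
    baseline = np.median(np.r_[signal_mg_m3[:30], signal_mg_m3[-30:]])
    corrected = np.clip(signal_mg_m3 - baseline, 0.0, None)
    t = np.arange(corrected.size) / fs_hz   # 1 sample per second
    return float(np.trapz(corrected, t))    # (mg/m^3) * s

# Illustrative trace: a 5-minute batch with a Gaussian emission peak.
t = np.arange(300.0)
trace = 2.0 + 40.0 * np.exp(-((t - 90.0) / 25.0) ** 2)
print(f"accumulated VOC exposure: {voc_batch_area(trace):.0f} (mg/m^3)*s")
```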
As reported before [32], pyrene (8) was found to be the main compound within the CR sample, followed by benzo(ghi)perylene (15), fluoranthene (7) and phenanthrene (5). The priority PAH concentration of 40 mg/kg CR was significantly lower than the German and Swiss limit values of 250 and 5000 mg/kg for binder materials, respectively [33,34]. Regarding the genotoxic potential, the toxicity-equivalence-weighted (TEQ) sum of the carcinogenic PAHs was calculated. With this approach, the importance of carcinogenic compounds such as benzo(a)pyrene (13), which has the highest genotoxicity factor of 1.0, is better evaluated, and the genotoxic potential of the CR is found to be 1.2 mg TEQ/kg. The TEQ-weighted pattern of the genotoxic PAHs is also shown in Fig. 5. Leachate concentrations (ng/l) of the 16 priority PAHs from the different asphalt mixture samples are shown in Fig. 6. It can be observed that the leachates from all asphalt mixtures present similar patterns, which are directly related to the asphalt PAH pattern rather than to the CR pattern (Fig. 5). Comparison of Figs. 5 and 6 shows that the addition of CR had a small effect on the PAH leachate pattern (Fig. 6). Mainly 2- and 3-ring PAHs, which are water soluble to some degree (Table 2), were released from the asphalt mixtures and accumulated in all leachates. The PAH concentrations in the experimental mixtures with CR of 248 ± 16 and 168 ± 6 ng/l were lower in comparison to the concentration obtained for the reference mixture prepared with PmB (314 ± 26 ng/l). The extraction experiments were repeated three times (n = 3) and produced reproducible results. Thus, the observed differences are more related to the different batches of mixtures than to the CR contents; the PAH patterns in the three leachates are very similar. The genotoxic potential of these leachates can be compared in Fig. 7. Similar patterns are obtained, with benzo(a)pyrene (13) as the dominant genotoxic PAH along with contributions of naphthalene (1). There is no increase of the genotoxic potential due to the presence of CR either. The concentrations were found to be lower for the SDA mixtures with CR (1.0 and 0.9 ng TEQ/l) than for the reference mixture (4.3 ng TEQ/l). In parallel, the CR and asphalt mixture samples were checked for the presence of benzothiazole (BT) compounds. Although various BT derivatives are used in the rubber industry, in this study cyclohexyl-amino-BT (17, HABT) and 2,4-morpholino-BT (18, MoBT) were used as rubber markers. The respective chemical formulas are given in Fig. 8.
Fig. 5 Concentrations (mg/kg) of 16 priority PAHs in DCM extracts of the CR material (n = 1) and pattern of the toxicity-equivalence-weighted genotoxic potential (TEQ-%). For abbreviations refer to Table 2.
These BT derivatives have been identified before in CR and are well known. Furthermore, isotope-labeled standard materials are available to quantify them. Both rubber markers could indeed be identified in the type of CR used in this work. HABT (17) and MoBT (18) contents (ng/g) in organic CR extracts (DCM) and the respective concentrations (ng/l) in aqueous leachates of the different asphalt mixtures are shown in Table 3. It is confirmed that MoBT (18) and HABT (17) are present in the CR material and are extractable with acidic water (pH = 4.93) from CR-modified asphalt mixtures with the given leaching procedure. Concentrations of MoBT (18) and HABT (17) in the organic extracts of the CR sample were found to be similar (810 ng MoBT/g versus 690 ng HABT/g).
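The toxicity-equivalence weighting used above for the genotoxic PAHs can be sketched as below. Only the benzo(a)pyrene factor of 1.0 is taken from the text; the remaining factors and the sample concentrations are illustrative placeholders, not the study's values.

```python
# Toxicity-equivalence-weighted sum of carcinogenic PAHs. Only the
# benzo(a)pyrene factor of 1.0 is stated in the text; the other factors
# below are illustrative placeholders.
tef = {
    "benzo(a)pyrene": 1.0,        # stated in the text
    "benzo(a)anthracene": 0.1,    # placeholder
    "benzo(b)fluoranthene": 0.1,  # placeholder
    "chrysene": 0.01,             # placeholder
}

def teq(concentrations_mg_per_kg: dict) -> float:
    """Sum of concentration x toxicity-equivalence factor (mg TEQ/kg).
    Compounds without a factor (e.g. non-carcinogenic PAHs) contribute 0."""
    return sum(c * tef.get(name, 0.0)
               for name, c in concentrations_mg_per_kg.items())

sample = {"benzo(a)pyrene": 0.9, "chrysene": 4.0, "pyrene": 12.0}  # illustrative
print(f"{teq(sample):.2f} mg TEQ/kg")
```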
As expected, substantially higher concentrations of these BT compounds were found in aqueous leachates of the CR-modified asphalt mixtures (Table 3). MoBT (18) concentrations of 10 ± 1, 124 ± 4 and 169 ± 2 ng/l were found in leachates of asphalt without CR and with 0.7% and 1.0% CR, respectively. HABT (17) concentrations even increased by factors of 26 and 30, from 2.1 ± 0.1 to 53 ± 3 and 61 ± 3 ng/l, for leachates of asphalt without CR and with 0.7% and 1.0% CR. These values seem to be related to the amount of CR in each asphalt mixture. Interestingly, the MoBT/HABT ratios in the leachates differ from the one observed for the CR. In this case, the higher polarity of MoBT (18) results in an increased MoBT concentration in the leachates with respect to HABT (17). It can be concluded that the leaching of MoBT (18) from the CR-modified asphalt is more efficient than that of HABT (17). However, both BT derivatives, which are more polar than PAHs, are released from CR asphalt and can be detected in leachates, and will therefore eventually be washed out from CR-modified asphalt roads. A washing step of the CR material with acidic water, before its application in asphalt mixtures, could possibly lower the amounts of benzothiazoles in the CR and in the CR-modified asphalt, and thus their release to the environment. The mechanical performance of the two asphalt mixtures with CR was evaluated in comparison to the response obtained for the reference asphalt mixture prepared with PmB. Figure 9 shows the behaviour of the cylindrical specimens regarding their resistance to fatigue. It can be observed that the use of PmB (blue curve) offers a better response at high and low strain levels. Regarding the CR content (orange and gray curves), no significant difference was found between the two experimental mixtures. Furthermore, the classical parameter ε6, defined as the strain needed to reach one million cycles, was also calculated from the obtained fatigue data as ε6,SDA4-Ref_PmB = 50.3 µm/m and ε6,SDA4-%CR = 41.8 µm/m. These values confirm that the resistance to fatigue was worse for the CR mixtures. However, previous studies of SDA mixtures with similar voids content have shown a fatigue resistance of 33 µm/m for SDA mixtures [35].
Fig. 8 Chemical structures of the two benzothiazole derivatives found in the CR material and in leachates from CR-modified asphalts: N-cyclohexyl-amino-benzothiazole (17) and 2,4-morpholino-benzothiazole (18). For abbreviations refer to Table 2.
Table 3 Benzothiazole derivatives in organic CR extracts (ng/g, n = 1) using dichloromethane and in leachates of non-modified and CR-modified asphalts (ng/l, n = 3) using acidic water.
Regarding permanent deformation, it was confirmed that conventional mixtures with PmB, designed for high performance, behave considerably better, with very low rutting (< 2%). The CR mixtures also showed an acceptable response against permanent deformation, with maximum values after 60,000 cycles of 3.8% and 5.6% for the mixtures with 1.0% and 0.7% CR, respectively. The trends are similar to those previously reported for similar mixtures. In a previous study, it was shown that EC asphalt showed similar rutting performance to the ECR [36]. Although no requirements exist for this type of test on SDA mixtures, the conventional rutting test requirements lie in the 7.5-10% range (SN 640-431-1c). Using these values as a guide indicates that the obtained rut depths for the CR modified mixtures are comparatively low.
This suggests that the incorporation of CR using the dry process would not compromise the performance of SDA mixtures against rutting at service temperatures (Fig. 10). Finally, the properties of the surface texture of the different slabs were analyzed. The PTVs were measured to quantify skid resistance under wet conditions. In this case, higher values were obtained for the SDA mixtures with CR (PTV = 65 for SDA4-1.0%CR and PTV = 63 for SDA4-0.7%CR) in comparison to the reference one (PTV = 46). Some countries specify certain thresholds for the skid resistance; for example, a retained PTV higher than 55 after the first two months of service is required in Italy [11]. The values obtained for the CR modified mixtures are above these limits. However, it is important to remark that, unlike field measurements, the laboratory measurements were carried out on surfaces not subjected to any wearing process due to tire friction. In addition, the MPD levels of the different surfaces were assessed (Fig. 11), showing a slight increase in MPD with CR, which was in contrast to previous studies on dense mixtures with dry process CR (Paje et al. 2010) [10]. The results show that, after experiencing wearing from the MMLS3 test, the reference mixture with PmB and the SDA with 0.7% CR showed an MPD decrease, whereas the 1% CR mixture showed a slight increase. Nevertheless, this change was not significant for either experimental mixture with CR. These results confirm how the MPD may evolve differently with the modification of porous [37] or dense pavements [38][39][40]. The texture level profiles (Fig. 12) confirm that the samples are similar in macrotexture (> 1 mm), but the SDA-0.7%CR sample has significantly lower microtexture before wearing. After wearing, the texture level of all the samples is reduced. The microtexture (< 1 mm) of the SDA-1.0%CR sample is especially lower after wearing, with a 5 dB reduction at 0.1 mm wavelength, and a reduction in the texture level overall. In this case, the decrease in both the micro- and macrotexture was consistent with previous findings with dry process CR [10], showing the texture level to be more reliable than the MPD.
Conclusions
Due to their high performance characteristics, conventional semi-dense asphalt (SDA) pavements have required the use of polymer-modified binders in their mix design. However, the production and life cycle of these polymer-modified solutions can involve economic and environmental disadvantages. As an alternative, in this study, the potential use of crumb rubber (CR) from waste tires as an additive for SDA mixtures using the dry process was assessed. Two contents of a CR type specially engineered for dry process applications were used to prepare different batches of SDA mixtures in an asphalt plant. Gaseous emissions during the production processes, leaching of CR additives with acidic water, and performance tests as well as surface characterizations were carried out in order to evaluate the experimental SDA mixtures with CR and to compare them with a conventional SDA mixture (with PmB). It was observed that the addition of CR had no negative effect on the overall emission of volatile organic compounds (VOC) or on the release of polycyclic aromatic hydrocarbons (PAH). However, both investigated benzothiazole derivatives, which are derived from vulcanization additives used in rubber production, were released from CR-modified asphalt under the given leaching conditions, which simulate exposure to acid rain.
It is proposed that an additional washing step of the CR material, before mixing with asphalt, can remove these polar compounds and thereby lower the risk of their release to the environment. Although the conventional mixture fabricated with PmB obtained better responses against fatigue and permanent deformation, the presence of CR in the mixtures did not compromise the mechanical behavior beyond the requirements for this type of surface. Likewise, similar macrotexture levels were measured for all the slabs compacted from the different mixtures. Finally, the amount of CR incorporated and the binder content did not seem to have a relevant influence on the factors evaluated in this study. Therefore, it can be concluded that the use of CR and its incorporation using the dry process could become a cheaper and more environmentally friendly alternative even for high performance asphalt mixtures. The future construction of test tracks with these designs will be useful to further evaluate this technology, since a more accurate VOC analysis during the production of larger amounts of mixtures with CR could then be conducted, and a field study of the final surface properties after compaction could be decisive to assess the importance of the differences found in micro-texture levels at lab scale.
2021-10-18T16:59:38.946Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "8e9a601791d75a05245d4889834f2787b3114c88", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1617/s11527-021-01790-y.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "85094e29cf79a379ae90a45daf7970f771c39836", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
111903255
pes2o/s2orc
v3-fos-license
Effect of interference fit size on local stress in single lap bolted joints
The interference fit is an effective process technique to improve the fatigue life of aircraft structures. In this article, experiments including interference fit bolt installation and tensile loading of bolted joints were carried out. A three-dimensional finite element model was established to simulate the experimental process, and the finite element model was validated by comparing the simulated data with the experimental data for the squeeze forces and the strains. By finite element simulation and analysis, it can be concluded that the location of the maximum value of the maximum principal stress on the upper plate faying surface moves away from the hole edge with increasing interference fit size. Furthermore, by analyzing the hoop stress variations along a prescribed path, the maximum value of the hoop tensile stress is smallest at an interference fit size of 1.5%.
Introduction
The bolted joint is one of the most important mechanical connection methods in aircraft assembly. At the junctions of the main load-bearing components in the aircraft, such as wing beams, fuselage strengthening frames, and other important structural connected parts, interference fit bolted joints are widely used to improve the structural fatigue performance. 1 Because of their high strength, light weight, controlled preload, high fatigue life, and other characteristics, titanium (Ti) alloy bolts are widely used: as many as 40,000 in a B747 and 5500 with interference fit in an A320's wing. 2,3 In addition, interference fit bolted joints are often adopted as an important solution in aircraft repair. 4 As interference fit fasteners, Ti-alloy bolts can achieve lighter weight and longer fatigue life for aircraft structures. Residual stress around the hole, induced by the interference fit bolt insertion, plays a key role in the component's fatigue life. However, to better understand the anti-fatigue behavior, it is necessary to conduct a comprehensive analysis of the stress and strain around the hole for the whole process of interference fit bolt insertion, clamping, and cyclic tensile loading. Some researchers [5][6][7] simplified the interference fit assembly as a plane strain or plane stress problem to calculate the stress distribution around the hole by using the theory of elastic and plastic mechanics. However, the above stress analyses were limited to idealized assumptions and did not consider the process of the interference fit installation. Jiang et al. 8,9 developed a two-dimensional axisymmetric finite element (FE) model to simulate the interference fit bolt insertion process, and then analyzed the deformation and residual stress distribution around the hole. For interference fit fastened structures subjected to an external load, the studies [10][11][12][13][14][15][16] are plentiful, but they only considered the stress analysis in the loading stage and lacked the simulation of the fastener installation stage. Usually, an evenly distributed residual compressive stress was directly assumed at the beginning of loading in those studies. Recently, several studies of interference fit riveting 17-20 examined the hoop stress or maximum principal stress around the holes in single lap joints from the riveting process to the tensile loading stage by FE simulation, and the simulated results for riveting force and residual strain were verified by experiment.
Chakherlou and colleagues 21-23 simulated the entire process covering the interference fit pin or bolt insertion, bolt clamping, and cyclic loading by the FE method, which was used to explain the crack initiation phenomena observed in fatigue tests. This article studies the influence of the interference fit on the local stress in a single shear bolted lap joint with an interference fit in the upper and lower plates, and validates the FE model against the experimental results. In this article, first, the specimens of single lap joints with different interference fit sizes were designed, Ti-alloy hi-lock bolts were chosen, and the test of the entire loading sequence of quasi-static hi-lock bolt insertion, bolt clamping, and tensile loading was carried out. The squeeze forces during bolt insertion, the clamping torque on the nut, and the strains on the upper plate in the tensile loading stage were measured and recorded. Then, by using the FE software ABAQUS 6.10-1, 24 a three-dimensional FE model was established to simulate the above experimental processes. The simulated results were validated by the experimental results. Finally, the hole vicinity stress distribution after interference fit bolt installation was analyzed by FE simulation. Moreover, the maximum principal stress and the hoop (circumferential) stress variation on the upper plate faying surface around the hole were studied when the joints were subjected to an external load.
Specimen information
A total of four joint specimens were designed and manufactured. Each lap joint consists of four 7050-T7451 aluminum (Al) alloy plates: the upper plate, the lower plate, and two backing plates, each 5 mm thick. The dimensions of the upper and lower plates were 140 × 60 × 5 mm³, and the distance between the plate free edge and the hole center was 20 mm. The specimen configuration is shown in Figure 1. To ensure that the holes in the upper and lower plates matched and were coaxial during hi-lock bolt insertion, the two plates were pre-fastened by using four common M4 screws. The tightening torque was T0 = 0.5 N m at each common screw (the total torque value was 2 N m). By the empirical formula T0 = 0.2F0·d0, the total clamping force is F0 = 2.5 kN. Because the lap area is about 40 × 60 = 2400 mm², the average pressure is q0 = 1 MPa, which is the preload before interference fit bolt insertion. The 10 mm wide material on each side was cut off by wire electrical discharge machining (WEDM) after the insertion of the bolt and nut, and the four common screws were then eliminated. In this article, the interference fit size was defined as

I = (d − D)/D × 100%

where d is the bolt shank diameter and D is the hole diameter. A Ti-alloy (Ti-6Al-4V) M8 hi-lock bolt was chosen for the experiment, and four bolts with the same shank diameter of 7.97 mm were specially selected. The holes for the bolt insertion were milled by a computerized numerical control (CNC) machining center. The diameters of the holes and the interference fit sizes are listed in Table 1. In the experimental test, strain gauges were employed to capture the strain variation during tensile loading. The gauges were made by Zhejiang Measurement and Instrument Company, type BX120-1AA, with a sensitivity factor tolerance of 1% and a transverse effect factor of 0.5%. After the installation of the hi-lock bolt and nut, strain gauges 1-5 were mounted on the top surface and gauge 6 on the bottom of the upper plate. All the gauges were oriented in the joint longitudinal direction, as shown in Figure 2.
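A minimal sketch of the two empirical relations used in the specimen preparation, the interference fit size I = (d − D)/D and the torque-preload formula T = 0.2F·d, is given below; the 7.85 mm hole diameter is an illustrative value, not one taken from Table 1.

```python
def interference_fit_size(d_bolt_mm: float, d_hole_mm: float) -> float:
    """Relative interference I = (d - D) / D, returned in percent."""
    return 100.0 * (d_bolt_mm - d_hole_mm) / d_hole_mm

def clamping_force_kN(torque_Nm: float, nominal_dia_mm: float) -> float:
    """Empirical torque-preload relation T = 0.2 * F * d from the text."""
    return torque_Nm / (0.2 * nominal_dia_mm * 1e-3) / 1e3

# Shank diameter 7.97 mm; a hole of 7.85 mm gives roughly 1.5% interference.
print(f"I = {interference_fit_size(7.97, 7.85):.2f} %")
# Pre-fastening screws: 0.5 N m on an M4 gives 0.625 kN per screw,
# i.e. 2.5 kN in total for the four screws, matching the text.
print(f"F0 per screw = {clamping_force_kN(0.5, 4.0):.3f} kN")
```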
The distance between the gauge centerline and the hole center was 10 mm, and the positioning error was less than 0.5 mm. Strain gauges 2 and 4 were used to measure the hole vicinity hoop strains, and gauges 1 and 3 were used to measure the radial strains. The difference between the values of gauges 5 and 6 was used to evaluate the secondary bending strain in the tensile loading stage.
Experimental test
As shown in Figure 3, the interference fit insertion of the hi-lock bolt was carried out on the material testing machine. A supporting fixture with a central hole was placed on the base seat of the machine. In order to ensure a static or quasi-static insertion of the bolt, the squeeze speed was set to 2 mm/min. The data collection device measured and recorded the squeeze force during the insertion process via a pressure sensor. After the interference fit bolt insertion, a spring steel washer (outer diameter of 13 mm, inner diameter of 9 mm) and an M8 × 1 common steel nut were applied for clamping. The tightening torque on the nut of every specimen was T = 3.5 N m. By T = 0.2Fd, the clamping force was F = 2.2 kN, and the pressure between the washer and the lower plate was about q = 30 MPa. The tensile test of the bolted specimen was carried out in the multi-function testing machine, as shown in Figure 4. The testing machine (made by the Institute of Applied Mechanics, Zhejiang University) consists of the reaction frame, hydraulic power devices (YZB50-2 × 2 pump and YCD-5 jack), and a force sensor (RCT-21 kN, produced by Showa, measuring error 0.3%). The specimen was connected to the frame by two M14 common screws and two U-shape connectors. The strain gauges on the specimen were connected to a dynamic strain indicator and a PC (not shown in Figure 4). In the tensile test, each specimen underwent three cycles of tensile loading. The remote maximum tensile load was 10 kN, and one cycle time was 1 min.
Material properties
To obtain the exact elastic and plastic material properties of the lap plates and hi-lock bolt, compressive tests were performed on an Al-alloy cylinder and a Ti-alloy cylinder. Three tests were performed for each material, and the results of the three tests were very similar; the middle test was then chosen to fit the curve. Nominal stress-strain data were directly obtained from the force-displacement curve in the test, but true stress-strain values are required in the FE software ABAQUS. Using the formulas ε_true = ln(1 + ε_nom) and σ_true = σ_nom(1 + ε_nom), the nominal stress-strain values were converted to true values. The true stress versus true strain curves are shown in Figure 5.
FE model
The FE model was developed in the software ABAQUS, as shown in Figure 6. Since the bolted joint is a symmetrical structure, the FE model takes only half of the specimen, and therefore symmetry boundary conditions were imposed on the symmetry plane. To simplify the model, the bolt threaded section was replaced with a simple cylinder, and the nut and washer were regarded as one part. The details, namely an arc of 2.5 mm radius and 25° at the bolt transition region (also called the import part) and a C0.6 mm chamfer at the hole entrance of the lap joint, are shown in Figure 6(c). The model was meshed with hexahedral reduced-integration elements C3D8R, which have higher computing speed and greater accuracy than the tetrahedral elements C3D4 in ABAQUS. The mesh in the hole vicinity was refined, and the mesh at the ends of the joint was coarsened.
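The nominal-to-true conversion described above can be sketched as follows; the sample data points are illustrative, not the measured compression curves.

```python
import numpy as np

def to_true_stress_strain(eps_nom: np.ndarray, sig_nom_MPa: np.ndarray):
    """Convert nominal (engineering) stress-strain data to the true values
    required by ABAQUS: eps_true = ln(1 + eps_nom),
    sig_true = sig_nom * (1 + eps_nom). Valid before necking occurs."""
    eps_true = np.log1p(eps_nom)
    sig_true = sig_nom_MPa * (1.0 + eps_nom)
    return eps_true, sig_true

# Illustrative points from a compression test, not the measured curves.
eps = np.array([0.002, 0.02, 0.05])
sig = np.array([140.0, 420.0, 470.0])
print(to_true_stress_strain(eps, sig))
```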
Complicated contact friction is produced between the bolt shank and the hole wall during hi-lock bolt insertion, and a coefficient of friction of 0.1 was specified. The coefficient of friction between Al-alloy plates generally ranges from 0.2 to 0.5 in fatigue tests of fastened joints. 13,14,16 In this test, the specimen only experienced a few cycles of loading, so a friction coefficient of 0.2 was chosen between the upper and lower plates in the FE model.
Load and boundary conditions
Corresponding to the experimental process, the FE simulation has three analysis steps. The first step is the interference fit bolt insertion process, which is defined as a static analysis. A reference point (RP)-1 was defined, and a coupling constraint was established on the bolt top surface. The bolt was uniformly pressed into the hole by applying a displacement condition U_z on RP-1 (using a default time of 1 s); at the same time, the squeeze force was recorded through RP-1. The other boundary conditions are U_y = 0 at the symmetry plane of the bolt and plates, U_x = 0 at both end surfaces, and U_z = 0 (7 ≤ R ≤ 14, R = √(x² + y²)) at the bottom surface of the lower plate. The uniform preload q0 = 1 MPa was applied to the top surface of the upper plate, as shown in Figure 6. The second step is applying the clamping force and is also defined as a static analysis. The washer and nut, modeled as one part, are displaced to obtain an average pressure of q = 30 MPa on the washer surface in contact with the lower plate (corresponding to the tightening torque of 3.5 N m). In the FE model, the initial distance between the washer and the lower plate is 0.3 mm. When the interference fit size is 1%, the applied displacement is U_z = 0.335 mm. The bottom of the bolt is fully constrained. The interference fit bolt insertion created a protruded deformation around the hole on the exit plane, 5,9,23 and the deformation on the exit plane of the lower plate increased with the increase in interference fit size, so the applied distance slightly decreased with the increase in interference fit size. The third step is defined as an implicit dynamic analysis. In this step, the bolted joint is subjected to tensile loading. One end of the joint is fixed, and a cyclic force is applied at the other end, with F(t) = (1/3 × t) kN for t = 0-30 s and F(t) = 1/3 × (60 − t) kN for t = 30-60 s, as shown in Figure 6(a). To ensure that the nut and bolt form a single part after nut installation and before the tensile loading stage, a ''Tie'' constraint is imposed on them. However, if the Tie constraint is established and active in the first step, it does not work correctly in the third step. The technique ''Transferring results between ABAQUS analyses'' is therefore used. First, an old job was generated to conduct the first and second steps in the FE model. Second, the Tie constraint was established, the FE model was updated, and the results of the old job were imported. Finally, a new job was generated to finish the last step.
Results and discussion
Comparison of experimental results with simulation results for squeeze force
In the interference fit bolt insertion, the squeeze force (F_sq) generally consists of two components: the deforming force of the bolt import part, which deforms the hole wall, and the frictional force between the hole wall and the bolt shank. The squeeze force determines the hole deformation and the hole vicinity stress distribution to a certain extent.
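The triangular load history applied in the third step can be sketched as a simple function of time, as below; the periodic extension beyond one cycle is an assumption made here for the three-cycle test.

```python
def tensile_load_kN(t_s: float, period_s: float = 60.0,
                    peak_kN: float = 10.0) -> float:
    """Triangular load cycle of the third analysis step:
    F(t) = (1/3)*t kN for 0-30 s and F(t) = (1/3)*(60 - t) kN for 30-60 s."""
    t = t_s % period_s
    half = period_s / 2.0
    rate = peak_kN / half          # 1/3 kN/s for the values given in the text
    return rate * t if t <= half else rate * (period_s - t)

# One cycle sampled every 10 s: 0, 3.33, 6.67, 10, 6.67, 3.33 kN.
print([round(tensile_load_kN(t), 2) for t in range(0, 60, 10)])
```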
Figure 7 shows the history curves of the experimental and simulated squeeze forces for the hi-lock bolt insertion at an interference fit size of 1.5%. It can be seen that the squeeze force does not increase linearly with insertion depth through the plate thickness, but experiences two cycles of increase and decrease. At first, the squeeze force increases nonlinearly as the hi-lock bolt is pressed into the hole. When the bolt import part approaches the hole exit of the upper plate, because of the protruded deformation, 8,9,23 the deforming force component decreases gradually. Then, as the bolt is continually pressed into the lower plate hole, another cycle of the squeeze force variation occurs. Finally, at the end of the insertion, all that remains is the frictional resistance force. The maximum squeeze forces of the experimental tests and FE simulations under the four different interference fit sizes are presented in Table 2. The relative error between the experimental and simulation results was calculated, and the maximum error is 8%. The errors could have a number of causes. In the FE simulation, the friction conditions are based on idealized assumptions, and the bolt is perfectly perpendicular to the surface of the plates when pressed into the holes. In the experimental test, the friction coefficient of the entire hole wall cannot be completely consistent, the hole may have form and position errors, and the hi-lock bolt may undergo tiny deflections during the pressing process. From the comparison of the squeeze force history and maximum value, the test results are basically consistent with the simulation results, so the FE simulation results can be trusted for the bolt insertion stage. It is important to understand how sensitive the squeeze force is to the friction coefficient of the contact surface between the bolt shank and the hole wall, so additional FE simulations with three different friction coefficients were conducted. As shown in Figure 8, the friction coefficient has a direct effect on the squeeze force, and the squeeze force increases rapidly as the friction coefficient increases. As seen in Figures 7 and 8, the simulated squeeze force with a friction coefficient of 0.1 is most similar to the test results, so 0.1 was adopted as the friction coefficient in the FE model.
Comparison of experimental results with simulation results for strains
As expected, the load path eccentricity causes secondary bending when single lap bolted joints are in tension, which seriously influences fretting fatigue in the single lap joint. Figure 9 shows the secondary bending and strain distribution from the FE prediction when the tensile load is 10 kN and the interference fit size is 1.5%. G5 and G6 were located near the joint overlap end on the upper plate top surface and bottom surface, respectively, the locations that usually experience the maximum bending in a single shear lap joint. In this article, the strains at G5 and G6 are used to estimate the secondary bending. The bending level is represented by the curvature, determined by subtracting the strains measured with G5 and G6 and then dividing by the plate thickness:

C_bend = (ε_bot − ε_top)/T_plate = (ε_6 − ε_5)/5    (2)

When the interference fit size is 1.5%, Figure 10 shows the strain variations during the tensile loading stage obtained from both the experimental and FE simulation results.
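Equation (2) can be evaluated directly from the gauge pair, as in the sketch below; the gauge readings are illustrative, chosen only to reproduce the order of magnitude reported for the curvature in the following discussion.

```python
def bending_curvature(eps_bottom_microstrain: float,
                      eps_top_microstrain: float,
                      plate_thickness_mm: float = 5.0) -> float:
    """Secondary-bending curvature from the G5/G6 gauge pair:
    C_bend = (eps_bot - eps_top) / T_plate, in microstrain per mm."""
    return (eps_bottom_microstrain - eps_top_microstrain) / plate_thickness_mm

# Illustrative gauge readings (not the measured values): tension on the
# bottom face (G6) and compression on the top face (G5) near the overlap end.
print(f"C_bend = {bending_curvature(1500.0, -1200.0):.0f} microstrain/mm")
```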
Because the strains at G1-G6 were set to 0 before the tensile test, the experimental strain data in Figure 10 are the actual measured values, and the simulated strain data are the differences in strain during and before tension. It can also be seen that the strain changes nonlinearly with the increase in tensile load. G2 and G3 were mounted on the upper plate top surface to measure the hoop strain and radial strain variation around the fastener hole; these strains are compressive because of the secondary bending, as shown in Figure 10(a) and (b). G5 and G6 were used to measure the longitudinal strain variation and to show the secondary bending; their strains are compressive and tensile, respectively, as shown in Figure 10(c) and (d). For a tensile load reaching the maximum value of 10 kN, Tables 3 and 4 list the maximum strains of G2, G3, G5, and G6 from the experimental and simulated results under the different interference fit sizes. It can be seen from Table 3 that the experimental secondary bending strain curvature of the upper plate middle plane is about 540 × 10⁻⁶/mm, and the simulated value is approximately 480 × 10⁻⁶/mm. The simulated results were slightly smaller than the experimental results. It was found that the bending strains were little related to the interference fit sizes; they are instead closely related to the joint overlap length and the plate thickness and width, as presented in other studies. 13,25 In Table 4, comparing the experimental and simulation results, the relative errors for the radial strain of G3 were less than 14%, but the maximum error for the hoop strain of G2 nearly reached 30%. Similar conclusions were also reached in some of the relevant literature. 17,18 The reason may be that the bending deformation of the bolt head is very complex and produced errors when the tensile loading simulation was conducted, which affects the strain value in the direction of G2 (and G4). Furthermore, as the interference fit size increased from 0.5% to 1.5%, the strain values also increased, but the strain values decreased as the size continued to increase to 2%, which is related to the elastic-plastic deformation states of the holes in the bolt insertion stage under different interference fit sizes. The discrepancy between the experimental and simulation results could be explained as follows: (1) there were inherent errors in the FE model from the bolt insertion to the bolt clamping stage. (2) Differences existed in how the strain values were obtained: the strain was determined from a point/node in the FE simulation, while it was averaged over the gauge area in the experiments. (3) Errors were associated with the strain gauges, for example, gauge reliability and gauge mounting conditions. It can be concluded from these strain comparisons that the FE simulation in the tensile loading stage basically reflects the actual experimental situation, and that the local stress and strain on the hole wall and on the faying surface between the upper and lower plates can be analyzed by using the current FE results with reasonable accuracy.
Stress distributions
In ABAQUS, the minimum principal stress is used for viewing the residual compressive stress in a body, and the maximum principal stress is used to examine the tensile stress. Figures 11 and 12, respectively, show the full-field contours of the minimum principal stress after the hi-lock bolt insertion and of the maximum principal stress when the tensile force was at its maximum of 10 kN (bolt, washer, and nut hidden).
Stress distributions

In ABAQUS, the minimum principal stress is used to view the residual compressive stress in a body, and the maximum principal stress is used to examine the tensile stress. Figures 11 and 12, respectively, show the full-field contours of the minimum principal stress after the hi-lock bolt insertion and of the maximum principal stress when the tensile force was at its maximum of 10 kN (bolt, washer and nut hidden).

It can be observed from Figure 11 that the residual stress on and around the fastener hole is compressive after the bolt insertion, and that the maximum compressive stress increases with the interference fit size. Since compressive residual stress effectively resists fatigue crack initiation and propagation, a larger interference fit size may improve the fatigue life further. On the fastener hole wall, the stress distribution differed notably among the interference fit sizes. When the interference fit size was 0.5%, the compressive stress was distributed uniformly. For the interference fit size of 1%, a larger compressive stress (370 MPa) occurred at the hole entrances of the upper and lower plates. As the interference fit size increased to 1.5% and 2%, the locations of the larger stresses (420 and 470 MPa) moved to the middle and exit positions of the hole (shown with arrows in Figure 11).

The maximum principal stress is one of the most important quantities in the study of crack nucleation and structural failure when the joint is in tension. It can be seen from Figure 12 that when the interference fit size was 0.5%, the maximum tensile principal stress was 347 MPa. When the interference fit size increased to 1%, 1.5% and 2%, the maximum tensile stress decreased to 276, 275 and 277 MPa, respectively, which is far below the static tensile strength of Al-alloy 7050-T7451 (525 MPa). On the hole wall, a stress concentration region appeared at the faying position of the plates (shown with circles in Figure 12), especially for I = 0.5% and I = 1%. When the interference fit sizes were 1.5% and 2%, another stress concentration region occurred at the upper plate bottom in contact with the washer and nut (shown with arrows in Figure 12).

Usually, crack nucleation occurs first on the upper plate faying surface around the fastener hole during fatigue tests. 17,18 Figure 13 shows the maximum principal stress distribution on the upper plate faying surface of the bolted joint. With increasing interference fit size, the maximum stress values decreased. When the interference fit sizes were 0.5% and 1%, the position of the maximum stress was close to the hole edge. As the interference fit sizes increased to 1.5% and 2%, the positions moved to about 2.5 and 3.7 mm away from the hole edge, and the direction of the maximum stress formed a 45° angle with the transverse direction.
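As a side note, the principal stresses reported by ABAQUS are simply the eigenvalues of the symmetric Cauchy stress tensor at each material point. The sketch below illustrates that computation for an arbitrary stress state; the tensor values are invented for illustration, not extracted from the model.

```python
# Sketch of recovering principal stresses from a stress tensor at an
# integration point (the eigenvalues of the symmetric Cauchy stress
# tensor). The tensor below is an arbitrary illustrative value in MPa.
import numpy as np

def principal_stresses(sigma: np.ndarray) -> tuple[float, float]:
    """Return (min, max) principal stresses of a symmetric 3x3 tensor."""
    eig = np.linalg.eigvalsh(sigma)  # eigenvalues in ascending order
    return float(eig[0]), float(eig[-1])

sigma = np.array([[-300.0,   40.0,    0.0],
                  [  40.0, -420.0,   25.0],
                  [   0.0,   25.0, -120.0]])  # MPa, compressive residual state

s_min, s_max = principal_stresses(sigma)
print(f"min principal = {s_min:.0f} MPa, max principal = {s_max:.0f} MPa")
```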
Hoop stress variation

Net-section failure is a main mode of fatigue damage in fastened joints, and the hoop stress around the fastener hole along the transverse direction is a major component driving fatigue crack nucleation and growth. Because the hoop stress or strain on the upper plate faying surface is difficult to measure with traditional test methods, FE simulation is a good way to evaluate and analyze it. Figure 14 shows the simulated hoop stress variation along the transverse path on the upper plate faying surface under two tensile loads, 0 and 10 kN. Figure 14(a) shows the hoop stress variation at the beginning of tension (that is, after bolt insertion and bolt clamping). It can be seen that tensile stress was produced at the hole edge and its vicinity at the low interference levels (0.5% and 1%). At the larger interference fit sizes (1.5% and 2%), residual compressive stress was produced around the hole (within 1 and 2 mm, respectively). Furthermore, the maximum hoop stresses were all tensile and increased with the interference fit size, but the position of the maximum value moved gradually farther from the hole edge. It is worth mentioning that the bolt clamping force (tightening torque) is the same for all four interference fit sizes, so the differences in the hoop stress variation at the beginning of tension are mainly caused by the interference fit insertion.

Figure 14(b) shows the hoop stress variations when the tensile load was 10 kN. It indicates that the residual hoop stress from the interference fit installation directly influences the stress in the tensile loading stage. With increasing interference fit size, the hoop stress at the hole edge decreased, implying a reduced risk of fatigue cracks initiating at the hole edge. The position of the maximum tensile stress was basically the same as before tensile loading, and the maximum value decreased gradually with increasing interference fit size. The maximum hoop tensile stress was smallest when the interference fit size was 1.5% and increased again as the interference fit size rose to 2%.

The above results show that the stress distribution on the hole wall and on the upper plate faying surface depends on the interference fit size, and that no further improvement in the stress distribution is obtained once the interference fit size exceeds 1.5%. Another study by Jiang et al. 9 showed that the protuberance at the hole exit surface increases rapidly when the interference fit size goes beyond 1.5%, and this protuberance can seriously harm the fretting strength. Therefore, for the Ti-alloy (Ti-6Al-4V) hi-lock bolt, the interference fit size most beneficial for delaying fatigue crack nucleation is 1.5%, which can be chosen as the optimal value in aircraft interference assembly.
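The selection logic behind this conclusion can be summarized in a few lines: among the candidate interference fit sizes, choose the one with the lowest peak tensile stress that does not exceed the protuberance threshold reported by Jiang et al. 9 The sketch below encodes this rule using the simulated maximum principal stresses quoted above.

```python
# Sketch of the final selection rule. The stresses are the simulated
# peak maximum-principal (tensile) stresses quoted above (MPa); the
# 1.5% cap reflects the protuberance threshold reported by Jiang et al.
PROTUBERANCE_LIMIT = 1.5  # % interference; beyond this the hole-exit
                          # protuberance grows rapidly and harms fretting strength

max_tensile_stress = {0.5: 347.0, 1.0: 276.0, 1.5: 275.0, 2.0: 277.0}

feasible = {i: s for i, s in max_tensile_stress.items() if i <= PROTUBERANCE_LIMIT}
optimal = min(feasible, key=feasible.get)
print(f"optimal interference fit size: {optimal}%")  # -> 1.5%
```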
Biological Properties of Transition Metal Complexes with Metformin and Its Analogues

Metformin is a widely prescribed medication for the treatment and management of type 2 diabetes. It belongs to the class of biguanides, which are characterized by a wide range of diverse biological properties, including anticancer, antimicrobial, antimalarial, cardioprotective and other activities. It is known that biguanides serve as excellent N-donor bidentate ligands and readily form complexes with virtually all transition metals. Recent evidence suggests that the mechanism of action of metformin and its analogues is linked to their metal-binding properties. These findings prompted us to summarize the existing data on the synthetic strategies and biological properties of various metal complexes with metformin and its analogues. We demonstrated that coordination of biologically active biguanides to various metal centers often resulted in an improved pharmacological profile, including reduced drug resistance as well as a wider spectrum of activity. In addition, coordination to redox-active metal centers, such as Au(III), allowed for various activatable strategies, leading to the selective activation of the prodrugs and reduced off-target toxicity.

Brief Historical Outlook

Based on the World Health Organization (WHO) list of essential medicines, metformin is considered an essential drug for people with diabetes [1]. Due to its safety profile and low cost, metformin has been used worldwide for the management of type 2 diabetes for more than half a century. In addition, metformin is also commonly used off label for the management of other medical conditions, such as polycystic ovary syndrome (PCOS) [2], insulin resistance and obesity [3].

Metformin belongs to the class of biguanides, which have a long medical history [4,5]. Long before the discovery of metformin, an extract from the G. officinalis plant was used by medieval European physicians to treat the symptoms that are now associated with type 2 diabetes. It was discovered that the most active extract was rich in guanidine (Figure 1). This chemical was synthetically produced at the end of the 19th century but, despite its hypoglycemic properties, was too toxic to be used in humans. In the 1920s, two synthetic biguanides, synthalin A and synthalin B, were introduced into clinical practice. Although their chemical structures consisted of two guanidine fragments separated by long aliphatic chains, these compounds were still somewhat toxic and were eventually replaced on the market by insulin. Despite the marked structural similarity between guanidines and biguanides, the clinical potential of the latter was underappreciated until the discovery of the antimalarial drug paludrine (proguanil), which was further modified to metformin hydrochloride (at that time called flumamine). It was reported that in 1949 flumamine was used to treat a local influenza outbreak in the Philippines [6]. Only in 1957 was the antidiabetic potential of metformin rediscovered, and the drug was marketed under the name glucophage ("glucose eater") [7]. Subsequently, less polar analogues of metformin, phenformin and buformin, were reported to efficiently reduce blood glucose levels and were introduced to the market in some countries [4,5]. However, their use was associated with the incidence of lactic acidosis and, by the 1980s, they had been withdrawn from clinical use in most countries [4,5].
Unexpectedly, various retrospective epidemiologic analyses of patients with diabetes taking metformin or phenformin for prolonged periods revealed that these drugs reduced the incidence of cancer as well as cardiovascular diseases [8][9][10]. In addition, some beneficial effects on liver and renal function were observed [8]. Overall, the antidiabetic and anticancer mechanisms of action of metformin are rather complex and have been described in detail elsewhere [9,[11][12][13]]. In brief, metformin and its analogues alter the energy metabolism of cells, thereby acting as energy disruptors [14]. Metformin was shown to decrease glucose absorption in the small intestine, increase glucose transport into cells and reduce plasma free fatty acid concentrations, thereby inhibiting gluconeogenesis [13,15]. In addition, metformin was shown to inhibit mitochondrial respiratory chain complex I and decrease hepatic energy status by activating the AMP-activated protein kinase (AMPK), which plays a central role in its mechanism of action [12]. The anticancer effects of metformin are exerted either directly or indirectly, i.e., via the induction of an energetic crisis or a systemic reduction of insulin levels [9,14]. Finally, the cardiovascular protective action of metformin might be related to its favorable effects on lipid metabolism, hypercoagulation, endothelial function, calcium signaling and platelet hyperactivity [16].

The promising epidemiological findings and extensive studies in various animal models prompted the re-evaluation of metformin, phenformin and their analogues for use in other diseases [17][18][19]. Since the mechanisms of action, biomolecular targets, pharmacokinetics, pharmacodynamics and safety profiles of these antidiabetic drugs have already been established, some preclinical studies might be by-passed, leading to accelerated approval of these drugs for the treatment of other diseases.

Diverse Therapeutic Applications of Metformin Derivatives

Biguanides are characterized by a diverse range of therapeutic activities, which have recently been summarized in the excellent review of Bharatam et al. [20]. Herein, we briefly discuss the application of several metformin derivatives for the treatment and management of diseases other than diabetes, and touch upon several strategies for improving metformin activity.

The interest in the development of biguanide compounds with antimalarial properties arose from the success of proguanil (paludrine, Figure 1), which has been frequently used since the 1940s. Even nowadays, chemoprophylaxis and treatment of malaria can be accomplished using malarone, a fixed-dose drug combination of proguanil and atovaquone [21]. Following the discovery of proguanil, global screening and synthetic efforts revealed several structurally similar compounds with antimalarial properties, including PS-15 (Figure 2).
PS-15 and its analogues demonstrated excellent in vitro and in vivo activity against different resistant strains of P. falciparum, which causes the most dangerous form of malaria, falciparum malaria [22][23][24]. Subsequently, a large number of cyclic biguanides with antimalarial properties have been evaluated [20].

Moroxydine is a biguanide in which one amine group has been replaced by a morpholine group. This compound efficiently inhibited both DNA and RNA viruses, including but not limited to herpes zoster virus, herpes simplex virus and adenovirus [20]. In addition, it was shown that moroxydine significantly reduced the duration of fever and pharyngitis [25]. As a result, it was extensively used in the 1960s for the treatment of viral infections such as influenza, measles and mumps. Although moroxydine hydrochloride is still used in several countries as an antiviral agent, its full biological potential has never been realized. However, the temporary clinical success of moroxydine prompted the investigation of various compounds with a biguanide moiety, which revealed prominent suppression of various DNA and RNA viruses, including HIV [20]. In light of the COVID-19 pandemic, moroxydine, metformin and other biguanides are being considered for the treatment and management of SARS-CoV-2 [26][27][28].

The investigation of the antimicrobial properties of biguanides has led to the discovery of chlorhexidine and alexidine, as well as the polymeric compound polyhexanide (PHMB), which demonstrated strong bactericidal activity against a broad panel of gram-negative and gram-positive strains, as well as fungicidal activity, in particular against C. albicans and streptococci [20]. Chlorhexidine, alexidine and PHMB are widely used as disinfectants in human and veterinary practice, including surgeries, dental procedures and the management of burns and mouth hygiene [20,29,30]. In addition, these drugs are used for the treatment of dermatological conditions, e.g., Candida infections [20,29]. Subsequently, synthetic efforts by medicinal chemists, as well as high-throughput screening of compound libraries, resulted in the discovery of novel biguanides with promising antimicrobial properties [20].
Following the epidemiological analysis of diabetic populations and the discovery of correlations between the use of metformin or phenformin and a reduced risk of cancer incidence, both antidiabetic agents were investigated in various in vitro and in vivo cancer models [31,32]. Both compounds exhibited cytotoxicity in the millimolar or high micromolar concentration range and potentiated the anticancer activity of clinically used anticancer drugs, such as tamoxifen [33], doxorubicin [34], cisplatin [35] and other chemotherapeutic agents, both in vitro and in vivo. However, the potential use of metformin and phenformin in cancer treatment is hindered by serious drawbacks. According to the Biopharmaceutics Classification System (BCS) and the Biopharmaceutics Drug Disposition Classification System (BDDCS), metformin is classified as a Class 3 compound (high solubility and low permeability). Due to its hydrophilic nature, metformin penetrates cellular membranes poorly [36]; therefore, the desired anticancer activity can be achieved only at high doses. Phenformin is less polar than metformin; however, its anticancer effects in vitro and in vivo were also apparent only at high concentrations [37,38]. Since the pathophysiological mechanisms underlying cancer may lead to lactic acidosis in patients at different stages of the disease, chemotherapeutic regimens based on repeated high doses of metformin, or especially phenformin, would not be desirable.

There are various strategies to overcome the difficulties associated with the poor penetration of metformin and phenformin, including their encapsulation into nanocarriers [39], conjugation with targeting moieties [40], or the development of prodrugs [41]. The simple modification of the metformin structure with pyrrolidine or furan heterocycles resulted in the formation of the novel biguanide-based anticancer agents HL156A [42,43] and NT1014 [44], respectively. Both compounds were characterized by increased AMPK activity and significantly enhanced cytotoxicity and in vivo activity in comparison with metformin; however, their cytotoxicity remained in the high micromolar range [44].
By contrast, conjugation of the metformin backbone with a mitochondria-targeting triphenylphosphonium (TPP+) moiety via aliphatic chain linkers resulted in a series of compounds with markedly improved anticancer activity [40]. In particular, the lead compound mito-metformin (Mito-Met, Figure 2) was at least 1000 times more active than metformin against pancreatic ductal adenocarcinoma (IC50 = 1.1 µM and 1.3 mM for Mito-Met and metformin, respectively). It was shown that the anticancer mechanism of action of Mito-Met was based on AMPK activation as well as inhibition of mitochondrial respiration via inhibition of mitochondrial complex I and stimulation of superoxide and hydrogen peroxide formation [40,45].

One more approach to enhancing the intracellular accumulation of metformin without inducing unwanted toxicity to healthy cells is the development of more lipophilic, pharmacologically inactive prodrugs, which are biotransformed into metformin after absorption. In fact, the antimalarial compounds proguanil and PS-15 also serve as prodrugs, since they are transformed into active cycloguanil metabolites inside the cells [46,47]. It was shown that proguanil and PS-15 activation are mediated by cytochrome P450 2C19 (CYP2C19) and cytochrome P450 3A4 (CYP3A4), respectively [48,49]. Besides malaria, metformin prodrugs might be useful for the treatment of various diseases, such as cancer [50], diabetes, Alzheimer's disease [51] and others [52]. For example, metformin sulfenamide prodrugs demonstrated improved bioavailability and absorption (by ≈25%) and were readily converted into metformin upon interaction with intracellular thiols [41,53], thereby supporting the viability of the approach. These sulfenamide prodrugs exhibited beneficial effects on plasma haemostasis [52] and inhibited acetylcholinesterase (AChE), a target relevant to neurodegeneration [51].

Biological Consequences of Intracellular Interactions of Metformin with Endogenous Metals

Biguanides serve as excellent N-donor bidentate ligands due to the presence of two imine groups in cis positions and the localization of charge density on the terminal nitrogen atoms, which ultimately enhance the stability of the newly formed chelates. One of the first reports on the interactions of biguanides with transition metals, such as Cu or Pt, dates back more than a century [54]. Subsequently, a wide range of biguanide complexes with various transition metals have been reported, and their molecular structures were supported by crystallographic evidence [55][56][57]. However, despite extensive structural and synthetic evidence, the biological role of metal-biguanide complex formation was not investigated until recently.

It was found that in the absence of intracellular Cu, metformin-mediated AMPK activation in H4IIE liver cells was reduced by at least 50% [58]. The comparison of metformin, biguanide, propanediimidamide (PDI) and malonohydroxamamide (MHA) revealed that only those compounds that could form high-affinity pseudo-aromatic Cu complexes (metformin and biguanide, but not PDI and MHA) induced activation of AMPK signaling. In agreement, only biguanides, but not PDI, inhibited mitochondrial respiration and the expression of gluconeogenic genes in H4IIE liver cells and suppressed hepatic glucose production in primary hepatocytes, suggesting that the antihyperglycemic properties of metformin might be Cu-dependent [58].
The computational analysis of Cu-binding energies revealed that the observed differences in the biological effects of metformin, biguanide and PDI could not be explained by differences in Cu-binding energy [59]. Therefore, it was suggested that metformin and other biguanides might act as pH-sensitive Cu-binding prodrugs whose activation occurs at elevated mitochondrial pH, while PDI would require an even higher pH for activation [59].

Since biguanides and other antidiabetic drugs are commonly characterized by antimalarial properties, there might be some similarities between the therapeutic mechanisms in both diseases. Cysteine proteases play a role in both diseases and might be inhibited by endogenous metals. Therefore, it was hypothesized that biguanides might act as trans-compartmental metal shuttles, bringing endogenous metals into the proximity of the active site of a cysteine protease and subsequently releasing the metals upon dissociation [60]. It was shown that in the absence of the metals, biguanides did not appreciably inhibit falcipain-2 and cathepsin B activity, while in the presence of Zn(II) or Fe(III), both metformin and phenformin markedly increased the inhibitory effects of the metals by at least 25-55% [60]. The most prominent effects were observed with phenformin (0.02 µM) in the presence of an inactive concentration of Cu(II) (0.5 µM), which caused a remarkable 75% proteolytic inhibition [60]. It is possible that biguanides play a similar metal-binding role in the context of diabetes, where they bind to the excess of Zn(II) ions on the surface of insulin, thereby preventing its degradation by cysteine proteases [60].

It was reported that not only the antidiabetic and antimalarial, but also the anticancer properties of metformin might be Cu-dependent. The concurrent treatment of several cancer cell lines with 400 µM CuSO4 and increasing concentrations of metformin revealed a significant increase in metformin's cytotoxicity [61]. However, it is not clear whether the observed effects were caused by Cu(II) alone or by the combination of Cu(II) and metformin. It is well-known that an excess of intracellular Cu results in the disturbance of cellular Cu homeostasis, oxidative stress and DNA damage [62][63][64][65].

In another work, an alkyne-containing metformin analogue was developed with the aim of establishing in situ labelling of metformin by means of click chemistry [66]. Although the analogue was characterized by higher cytotoxicity than metformin, it functionally phenocopied metformin in several in vitro models and therefore could be reliably used as a metformin surrogate for subsequent mechanistic investigations [66]. Based on the localization of the click-activated fluorescence, it was suggested that the metformin surrogate selectively accumulated in the mitochondria of breast cancer cells. Moreover, the intensity of the fluorescent signal significantly decreased upon co-incubation with metformin as a competitor. More detailed investigations confirmed the ability of biguanides to remove redox-active Cu(I) ions from mitochondrial proteins and promote their oxidation to Cu(II), leading to an increase in mitochondrial Cu(II) levels and a decrease in mitochondrial Cu(I) levels [66], as predicted by the computational analysis [59].
Finally, to investigate whether the anticancer activity of metformin might indeed be linked to its Cu-binding ability, the effects of this drug on the epithelial-to-mesenchymal transition (EMT) were investigated. EMT is commonly linked with the progression of cancer, the formation of metastases and increased tumor resistance. It is believed that Cu is an essential component of EMT; hence, it was hypothesized that the Cu-binding properties of metformin might lead to the suppression of EMT and decreased tumor stemness [66]. In agreement with this hypothesis, both metformin and its clickable analogue significantly reduced the expression of mesenchymal markers, such as fibronectin, vimentin and Zeb1, and decreased the proportion of CD24−/CD44+ cancer stem cells [66].

Interestingly, the anticancer activity of metformin and its mitochondria-targeting analogue Mito-Met was markedly enhanced in the presence of several Fe(III) chelators, such as deferasirox (DFX) [45]. Since metformin readily binds endogenous Zn(II), Cu(II), Fe(III) and other metal ions, and since its anticancer potency largely depends on Cu binding, it is plausible that DFX or other metal chelators reduced the competitive binding of metformin and other biguanides to Fe(III) and other metals.

In the presence of endogenous metals, the biguanide moiety forms metal complexes in proportion to the relative binding affinities and availabilities of the metals in cells and tissues. As a consequence, simultaneous competitive binding to different metals might negatively affect the on-target biological activity of metformin and its analogues and induce off-target toxicity. A feasible approach is to administer pre-formed metal complexes of metformin and other biguanides, thereby delivering the most favorable biguanide/metal ratio for optimal biological function. Moreover, coordination of metformin to metal centers is expected to alter its uptake mechanisms and improve its intracellular accumulation and absorption into the bloodstream.

Biologically Active Metal Complexes with Metformin and Its Analogues

In recent years, increased interest in bioactive metal complexes has led to a multitude of studies describing the synthesis and biological activity of transition metal complexes with metformin and its analogues.

Scheme 1. Synthetic route toward metformin complexes 1-4 with Y(III), La(III), Ce(III) and Sm(III).

Nd(III) complexes 5 and 6 with metformin and its more lipophilic derivative were obtained from NdCl3·6H2O as a starting material (Scheme 2), and their antidiabetic properties were tested in comparison with the uncoordinated ligands and the respective Nd(III) salt in Kunming white rats with induced diabetes [69].
It was shown that complex 5, the Nd(III) salt and the respective ligand did not affect blood sugar levels 2 h after the compounds were administered, and only a slight decrease in blood sugar levels was observed in rats treated with metformin and 6. All compounds demonstrated similar, moderate antioxidant activity, which did not correlate with their antidiabetic properties [69].

Scheme 2. Synthetic routes toward Nd(III) complexes 5 and 6 with metformin and its derivative.

Dy(III) complexes 7 and 8 with metformin derivatives were prepared from Dy(NO3)3·5H2O in ≈65% yield (Scheme 3) and were also investigated in the context of diabetes [70]. The interactions of 7 and 8 with glucose were studied using spectrophotometric methods as well as viscosity measurements. It was revealed that 7 and 8 strongly bound glucose in aqueous solutions at physiological pH, which can be useful for the detection of glucose [70]; however, additional in vitro or in vivo experiments were not performed.

Group IV (Ti, Zr, Hf)

To the best of our knowledge, there are only a few examples of metformin complexes with group IV elements, and only one complex has been described in the context of its biological activity. Coordination of metformin to a Zr(IV) center in the presence of 1,4-diacetylbenzene (DAB) resulted in the formation of complex 9 in excellent yield (Scheme 4) [71]. The antibacterial and antifungal activities of 9 were tested against various bacterial and fungal cultures using the standard disk diffusion method in comparison with metformin, DAB and the antibacterial drug moxifloxacin.
When complex 9, metformin and DAB were tested against two fungal strains, A. niger and C. albicans, none of the compounds demonstrated fungicidal properties. However, complex 9 showed good antibacterial activity against all tested bacterial strains, namely E. faecalis, S. aureus, K. pneumoniae and Shigella, being only 1.1-2.2 times less active than moxifloxacin. In contrast, neither metformin nor DAB showed activity against any of the tested strains, indicating the important role of the Zr(IV) metal center in the antibacterial properties of complex 9. It should be noted that various Zr(IV) complexes and nanoparticles have shown marked antibacterial and antifungal activity, suggesting that the antibacterial activity of complex 9 originated from the metal center [72,73].

Group V (V, Nb, Ta)

It is well-known that various V compounds are able to effectively normalize glucose levels both in vitro and in vivo, which makes them promising drug candidates for the treatment of diabetes [74,75]. Therefore, it was hypothesized that the combination of the antidiabetic drug metformin and a V center might lead to synergistic activity of the two fragments. Coordination of two equivalents of metformin, phenformin or biguanide to an oxovanadium(IV) fragment resulted in the formation of complexes 10-12 of the type VO(L)2 in varying yields (31-81%) (Scheme 5) [76]. The antidiabetic activity of the oxovanadium(IV) metformin complex 10 was investigated in Wistar diabetic rats in comparison with metformin and bis(maltolato)oxovanadium(IV) (BMOV), which had previously demonstrated potent antidiabetic properties under similar experimental conditions.
Diabetes was induced by a single intravenous injection of streptozotocin (STZ), resulting in blood glucose levels of over 13 mM. Subsequently, complex 10 was given to the animals either via acute intraperitoneal (i.p.) injection at a dose of 0.12 mmol/kg or via acute oral gavage at a dose of 0.60 mmol/kg. Tail vein blood glucose levels were compared prior to drug administration and at selected times up to 72 h after drug administration.

Following acute i.p. injection of complex 10, BMOV or metformin, a response was observed only in rats treated with 10 and BMOV but not metformin. However, the glucose-lowering effects of complex 10 were less pronounced and less persistent than those of BMOV, and an obvious side-effect in the form of diarrhea was observed. When the i.p. injection was replaced with acute oral gavage, only mild gastrointestinal effects were observed in all treated groups. In total, 100% of rats responded to the treatment with complex 10, and their blood glucose levels returned to the normal range (less than 9 mM) within 24 h; however, all rats returned to hyperglycemic levels after 72 h. By contrast, only 43% of BMOV-treated rats returned to hyperglycemic levels, indicating a more sustained response. No positive effects were observed in the metformin-treated group. These results indicated that the oxovanadium(IV) metformin complex 10 induced a significantly better antidiabetic response in vivo than uncoordinated metformin, yet no synergistic or additive effects with metformin were detected.

Subsequently, the insulinotropic effects of complex 10 were investigated in comparison with [VO(pyrrolidine-N-dithiocarbamate)2] (VODTC) and VOSO4 using pancreatic islets isolated from rats with stimulated exocrine pancreatic secretion [77]. The islets were incubated with increasing concentrations (0.1-1 mM) of the compounds of interest, followed by measurements of insulin concentrations. Among all the tested complexes, only VODTC induced significant insulin secretion, while complex 10 did not affect insulin release.

Protein tyrosine phosphatases (PTPs) play an important role in the pathogenesis of various diseases, including diabetes and obesity. In an attempt to link the mild antidiabetic activity of complex 10 with its ability to inhibit PTPs, it was incubated with protein tyrosine phosphatase 1B (PTP1B), T cell protein tyrosine phosphatase (TCPTP), hematopoietic protein tyrosine phosphatase (HePTP) and Src homology 2 domain-containing tyrosine phosphatase 1 (SHP1), as well as alkaline phosphatase (ALP) [78]. The phenformin complex 11 and the moroxydine complex 13 were used for comparison. All complexes demonstrated strong inhibition of PTP1B and TCPTP (IC50, 80-160 nM), slightly weaker inhibition of HePTP (IC50, 190-410 nM) and SHP-1 (IC50, 0.8-3.3 µM) and very weak inhibition of ALP (IC50, 17-35 µM). Complex 13 was about two-fold less effective towards PTP1B, TCPTP and HePTP than complexes 10 and 11, while complex 11 demonstrated 3-4 times stronger inhibition of SHP-1 than complex 10 [78]. The inhibition of PTP1B and ALP occurred via typical competitive inhibition of the active site of the enzymes. Based on these observations, it can be hypothesized that the structure of the biguanide might to some extent affect the selectivity of the complexes towards various PTPs and their antidiabetic properties in vivo.
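Since the inhibition of PTP1B and ALP is described as classically competitive, the reported IC50 values can in principle be converted to inhibition constants via the Cheng-Prusoff relation, Ki = IC50/(1 + [S]/Km). A minimal sketch follows; the substrate concentration and Km are hypothetical, as the assay conditions are not reproduced here.

```python
# Cheng-Prusoff conversion for a competitive inhibitor,
# Ki = IC50 / (1 + [S]/Km). The substrate concentration and Km used
# below are hypothetical placeholders; the study does not report them.

def ki_competitive(ic50_nM: float, s_uM: float, km_uM: float) -> float:
    """Cheng-Prusoff Ki (nM) for a competitive inhibitor."""
    return ic50_nM / (1.0 + s_uM / km_uM)

# e.g. PTP1B inhibition, IC50 = 80 nM, assumed [S] = 10 uM, Km = 5 uM
print(f"Ki ~ {ki_competitive(80.0, s_uM=10.0, km_uM=5.0):.0f} nM")  # ~27 nM
```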
To investigate whether the mode of metformin coordination to a V center affects the antidiabetic properties of the resulting complexes, oxovanadium(IV) complexes with metformin-derived Schiff bases, 14 and 15, were prepared (Scheme 6) [79]. Diabetes in Swiss albino mice was induced by i.p. injections of alloxan (150 mg/kg/day). Subsequently, the mice were treated via the i.p. route with complexes 14, 15 or uncoordinated metformin for 14 days (20 or 40 mg/kg). It was shown that metformin reduced blood glucose levels by 47-53% but did not affect total serum cholesterol levels. In contrast, complexes 14 and 15 reduced blood glucose levels by up to 75% and decreased total cholesterol levels. None of the treatment regimens improved the integrity of the pancreatic islets, which could indicate that control of hyperglycemia was achieved by extrapancreatic mechanisms.

Since the oxovanadium(IV) metformin complex 10 did not demonstrate antidiabetic effects superior to the combination of metformin and a vanadate fragment, another strategy was employed. The decavanadate anion [V10O28]6− consists of 10 octahedral vanadium centers and has various advantages over monomeric vanadates; in particular, it showed higher potency in lowering elevated blood glucose levels in diabetic rats. Considering the high anionic charge of decavanadate, its biological properties, in particular its ability to interact with biological membranes, are highly dependent on the counterions [80].
Since metformin affects hydrogen bonding in water, the replacement of the Na+ counterion in Na6[V10O28] with a metforminium cation resulted in a significant increase in the solubility of the decavanadate salt in DMSO and in the inhomogeneous environment of reverse micelles [81]. Subsequently, various metforminium decavanadates, in which metformin molecules serve as counterions, were prepared in moderate to good yields [81][82][83][84].

The effects of metforminium decavanadate 16 (MetfDeca, Scheme 7), as well as of uncoordinated metformin, were investigated in Wistar rats given a hypercaloric (HC) diet for 3 months prior to treatment. Rats exposed to an HC diet were characterized by poor carbohydrate tolerance and the deposition of triglycerides in various organs, indicating insulin resistance. Metformin was given daily at a dose of 0.12 M/kg together with the HC diet, and 16 was given twice a week at a dose of 2.5 µM/kg together with the HC diet for 30 days. Both treatments produced significant improvement in the morphometric regulation of body mass index (BMI) and fat percentage; however, only 16 demonstrated improvement in biochemical regulation. Importantly, the dose of 16 was 48,000 times lower than the dose of metformin, and the administration frequency was reduced to twice a week, indicating the promising therapeutic potential of this compound. Additionally, the antidiabetic effects of compound 16 were confirmed in other insulin-dependent and insulin-independent animal models [85].
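The dose comparison quoted above is easy to verify; the short sketch below reproduces the 48,000-fold figure from the stated doses (per administration, ignoring the different dosing frequencies).

```python
# Quick check of the dose comparison quoted above: metformin at
# 0.12 M/kg daily versus MetfDeca (16) at 2.5 uM/kg twice weekly.
# Only the molar ratio per administration is computed here.

metformin_dose_M_per_kg = 0.12    # 0.12 M/kg
metfdeca_dose_M_per_kg = 2.5e-6   # 2.5 uM/kg

fold = metformin_dose_M_per_kg / metfdeca_dose_M_per_kg
print(f"MetfDeca dose is {fold:,.0f}-fold lower per administration")  # 48,000
```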
Subsequently, the in vivo antidiabetic effects of 16 were compared simultaneously to metformin and NaVO3 [86]. Hyperglycemia and hypoinsulinemia were induced in Wistar rats via three days of i.p. applications of alloxan (150 mg/kg). Subsequently, the rats were treated with either insulin (1 UI/100 mg/dL of glucose/day), metformin (350 mg/kg/day), 16 (3.5 µM/0.1 kg/day) or NaVO3 (3.5 µM/0.1 kg/day). It was shown that NaVO3 demonstrated better hypoglycemic properties than metformin; however, the most pronounced hypoglycemic effects were demonstrated by insulin and 16, reflected in a restored redox balance in liver and muscles, as well as restored insulin levels. Importantly, this study revealed that complex 16 not only demonstrated better antidiabetic properties than metformin and monovanadate, but also mediated the regulation of hyperglycemia and oxidative stress through different pathways than monovanadate.

Recently, it was reported that hypercaloric consumption in mice resulted in memory deterioration caused by impaired function of the hippocampus [87]. Therefore, it was investigated whether complex 16 could induce hippocampal regeneration and improve recognition memory in Wistar rats with metabolic syndrome [88]. Initially, rats were administered a normal or HC diet for 3 months and subsequently treated with 16 via oral gavage at a dose of 1.23 µg/0.1 kg twice a week for 60 days. As expected, complex 16 improved zoometric and biochemical parameters in rats given an HC diet. Importantly, 16 improved short-term recognition memory, diminished oxidative stress and improved antioxidant activity in rat brains. Administration of 16 reduced inflammation of the hippocampus, characterized by reduced levels of the pro-inflammatory cytokine TNF-α and increased levels of the anti-inflammatory cytokine IL-10. In addition, 16 improved the morphology of hippocampal neurons, characterized by the rearrangement of dendritic trees and an increased number of dendritic spines in pyramidal neurons. Based on these observations, 16 might delay the onset of neurodegenerative diseases provoked by metabolic disorders.

Besides their role in the treatment of diabetes and other metabolic disorders, oxovanadium(IV) complexes with metformin and its structural analogues might be effective in the treatment of other diseases. For example, the ability of these complexes to irreversibly bind DNA might be useful for the treatment of cancer [89,90].
The incorporation of glycine or histidine into the oxovanadium(IV)-metformin backbone resulted in the formation of two water-soluble complexes, 17 and 18, in excellent yields (Scheme 8) [91]. The DNA-binding ability of these complexes was investigated using standard absorption titration experiments, fluorescence displacement experiments with EtBr, as well as viscosity measurements and gel electrophoresis, which suggested that the complexes effectively bound DNA. Subsequent docking studies revealed that the strongest binding of 17 and 18 to DNA nucleotides occurred within the metformin binding pocket. Despite the promising DNA-binding results, the anticancer activity of these complexes has not been investigated.

Interestingly, the reaction of metformin with vanadyl sulfate resulted in the formation of a dinuclear oxovanadium(IV) metformin complex 19, (VO)2(metf)2(SO4)2, with two SO42− anions acting as bridges (Scheme 8) [92]. The activity of complex 19 at a concentration of 1 mg/mL against various gram-positive and gram-negative bacterial strains, as well as fungal strains, was investigated using the standard disk diffusion method in comparison with uncoordinated metformin (1 mg/mL), streptomycin (10 mg/mL) and ketoconazole (10 mg/mL). Complex 19 demonstrated moderate activity against all tested bacterial and fungal strains, approximately 2-4 times lower than the activity of streptomycin and ketoconazole; however, these results cannot be compared directly because of the significantly different drug concentrations. As expected, uncoordinated metformin was devoid of any significant activity against all tested bacterial and fungal strains. It was speculated that the improved antibacterial and antifungal activity of complex 19 in comparison with metformin might be related to easier penetration of the metal complex through the bacterial or fungal cell membrane; however, this hypothesis was not experimentally confirmed.

Group VI (Cr, Mo, W)

Similar to V complexes, Cr complexes with metformin demonstrate antibacterial, antifungal and antidiabetic properties. Cr(III) complex 20, bearing three bidentate metformin ligands, was obtained by the reaction of CrCl3·6H2O with 3 equiv. of metformin in 72% yield (Scheme 9) [92]. Its activity was investigated against various bacterial and fungal strains under the same experimental conditions as complex 19. In comparison with 19, the Cr(III) complex 20 demonstrated 1.5-, 2- and 1.8-fold stronger inhibition of the B. subtilis, P. aeruginosa and A. niger strains, respectively, and 2.2- and 1.6-fold weaker inhibition of the E. coli and C. albicans strains, respectively. These results indicate that coordination of metformin to different metal centers allows for fine-tuning of the selectivity of the resulting complexes towards specific bacterial and fungal strains.

While the antidiabetic properties of V compounds are well-documented, the role of Cr in diabetes is less established [93].
There is some evidence that Cr supplementation may improve glycemic control in patients with diabetes [94]; therefore, Cr(III) supplements are commonly used for diabetes and obesity treatment [95]. In addition, several Cr(III) complexes with various ligands induced sensitization of insulin signaling pathways in vitro and in vivo [96]. To investigate whether the combination of Cr(III) and metformin would result in enhanced antidiabetic properties, complex 20 (12.58 mg/kg and 25.16 mg/kg, corresponding to 1000 µg/kg and 2000 µg/kg of Cr) was administered orally for 30-60 days to C57BL/6 mice with high-fat diet/STZ-induced diabetes, in comparison with metformin (16.6 mg/kg) and CrCl3·6H2O (5.12 mg/kg, corresponding to 1000 µg/kg of Cr) [97]. It was shown that all tested compounds efficiently lowered blood glucose and insulin levels by approximately 11-30%; however, complex 20 demonstrated the most pronounced effects in decreasing abnormal lipid levels. Importantly, neither 20 nor metformin caused any histopathological changes in the pancreas, kidneys or liver, indicating no sub-chronic toxicity.
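The elemental-Cr dosing above can be sanity-checked from standard molar masses; the sketch below confirms that 5.12 mg/kg of CrCl3·6H2O delivers approximately 1000 µg/kg of Cr (the molar masses are standard values, not taken from the paper).

```python
# Sanity check of the elemental-Cr dosing above: a CrCl3.6H2O dose of
# 5.12 mg/kg should correspond to 1000 ug/kg of Cr. Molar masses are
# standard values; rounding explains small residual differences.

M_CR = 52.00           # g/mol
M_CRCL3_6H2O = 266.45  # g/mol

cr_fraction = M_CR / M_CRCL3_6H2O
cr_dose_ug_per_kg = 5.12 * cr_fraction * 1000.0  # mg/kg -> ug/kg of Cr
print(f"elemental Cr dose ~ {cr_dose_ug_per_kg:.0f} ug/kg")  # ~1000
```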
The most well-studied and best-selling Cr(III) supplement, which is believed to ameliorate insulin resistance and reduce the risk of cardiovascular diseases, is Cr picolinate [95]. Therefore, it was hypothesized that the combination of Cr(III), dipicolinate and metformin might result in synergistic antidiabetic effects [98]. The X-ray diffraction analysis of complex 21 revealed that the coordination sphere of the Cr(III) metal center was composed of two tridentate dipicolinate ligands, while metformin acted as a counterion (Scheme 9). The antidiabetic activity of 21 was assessed in mice with STZ-induced diabetes in comparison with CrCl3 and metformin. All tested compounds demonstrated only a moderate decrease of fasting blood glucose levels from ≈11.7 mmol/L to ≈7.8-8.6 mmol/L. However, complex 21 demonstrated a significant reduction of total cholesterol and triglyceride levels, as well as partial normalization of high- and low-density lipoproteins. In agreement with the initial hypothesis, the effects of 21 were more pronounced than the effects of metformin and the respective inorganic Cr(III) salt. The post-mortem histological analysis of kidney and liver sections in treated mice did not reveal any pathological changes, indicating low toxicity of complex 21 [98].

In order to understand whether the replacement of the metal center from V to Cr might result in significant changes in antidiabetic activity, Cr(III) complex 22, which is structurally similar to V complex 15, has been prepared (Scheme 10). The glucose-lowering properties of 22 were investigated in diabetic mice under the same experimental conditions as 15. Additionally, the activity was compared to complex 23, in which the biguanide fragment of metformin was not involved in the coordination to the metal center [99]. It was shown that Cr complexes 22 and 23 decreased blood glucose levels in mice with alloxan-induced diabetes by up to 4.24 and 24.62%, respectively, at a 20 mg/kg dose and by up to 66-67% at 40 mg/kg. These results indicate that the coordination mode of metformin might play an important role in its antidiabetic effects. It should be noted that the structurally similar complex 15 demonstrated higher potency at the lower dose and equal potency at the higher dose.

Additionally, a series of Cr(III) complexes 24-26 with metformin and other bidentate N-donor ligands has been prepared and their DNA-binding properties have been investigated (Scheme 11) [100]. It was shown that these complexes could effectively bind DNA grooves, and the strength of DNA binding, based on the DNA photocleavage study, decreased in the following order: 26 > 25 > 24. On the other hand, docking studies revealed that complex 25 and uncoordinated metformin were characterized by the highest docking scores.

Scheme 11. Synthetic route toward Cr(III) complexes 24-26 with deprotonated metformin ligand.

Group VII (Mn, Tc, Re)

There are several reports of Mn(II) complexes with various organic ligands that demonstrate some antibacterial and antifungal activity [101,102].
Mn(II)-metformin complexes were also investigated in the context of their antimicrobial activity. Coordination of 2 equiv. of metformin to a Mn(II) center resulted in the formation of the octahedral complex 27 (Scheme 12) [103]. This complex demonstrated a broad range of antibacterial activity against E. coli, S. enteritidis, P. aeruginosa, B. subtilis, L. monocytogenes and S. aureus, as well as antifungal activity against C. albicans, which was 2-16-fold higher than the activity of metformin. However, no significant differences were observed between complex 27 and the Mn(ClO4)2·6H2O salt, indicating that the observed biological effects stem from the Mn(II) center. The replacement of the perchlorate axial ligands with acetate ligands in complex 28 did not result in significant changes in antibacterial or antifungal activity [104]. Additionally, the preliminary anticancer activity of complex 28 has been tested against cervical carcinoma HeLa cells. While no significant cytotoxicity was observed, 28 induced cancer cell cycle arrest at the G2/M phase. Surprisingly, other authors reported an antibacterial study of complex 29 with chlorido axial ligands, in which no antibacterial activity was observed [105]. We cannot unambiguously confirm the negative role of the Cl axial ligands, since the experiments were performed under different experimental conditions.

99mTc radiopharmaceuticals are widely used in diagnostic nuclear medicine due to the excellent nuclear properties of 99mTc [106]. However, even though 99mTc radionuclides are able to induce DNA double-strand breaks, their therapeutic use is hindered by their insufficient accumulation in cancer cells [107]. It was shown that conjugation of radionuclides to a DNA intercalator facilitated drug internalization and allowed for 99mTc decay in close proximity to DNA, leading to the formation of double-strand breaks [107]. Since metformin and its derivatives were shown to effectively bind the minor/major groove of DNA in both intercalative and non-intercalative modes [108,109], they might enhance the accumulation of 99mTc radionuclides in the vicinity of DNA. Tricarbonyl 99mTc(I) complex 30 with phenformin was prepared in two steps starting from readily available Na99mTcO4 (Scheme 13) [110]. Complex 30 demonstrated high stability in the presence of histidine and cysteine and moderate stability in rat serum and might exhibit some potential as a radiotherapeutic agent; however, its interaction with DNA has not been studied [110].

Scheme 13. Synthetic route toward radiolabelled 99mTc(I) complex 30 with phenformin.
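Among the "excellent nuclear properties" mentioned above, the short physical half-life of 99mTc (≈6.0 h; a standard nuclear-data value, not a figure from this review) is central: imaging can be performed shortly after injection, while the activity decays to a few percent within a day. A minimal sketch:

import math

# Sketch: fraction of 99mTc activity remaining after t hours,
# assuming the standard ~6.0 h physical half-life.
T_HALF_H = 6.0

def remaining_fraction(t_hours: float) -> float:
    """A(t)/A0 for simple exponential decay."""
    return math.exp(-math.log(2) * t_hours / T_HALF_H)

for t in (1, 6, 24):
    print(f"{t:>2} h: {remaining_fraction(t):.3f}")  # ~0.89, 0.50, 0.06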
Group VIII (Fe, Ru, Os)

It was reported that various Fe complexes demonstrated a broad range of anticancer and antibacterial activities [111-113]. For example, Fe(III) complexes with Schiff base-derived ligands significantly inhibited the growth of gram-positive bacteria, possibly through the induction of ferroptosis [112]. Structurally different Fe(III)-metformin complexes 31-33 also demonstrated some antibacterial activity [105,114]. It was shown that the product of the reaction between metformin and FeCl3·6H2O was dependent on the amount of added base (Scheme 14) [114]. In particular, the addition of 1 equiv. of KOH (based on metformin) resulted in the formation of the dinuclear bridged complex 31, while the addition of 0.5 equiv. of KOH yielded a typical square planar coordination complex of the ML2 type. Subsequently, the antibacterial activity of both complexes and metformin was tested against S. aureus, P. aeruginosa, E. coli, K. pneumoniae and the fungal strain C. albicans using the disk diffusion method. As expected, uncoordinated metformin did not show any activity, except against S. aureus and E. coli, and its coordination to Fe(III) resulted in a significant improvement in antibacterial and antifungal properties. The structure of the complexes determined their selectivity towards particular strains: while complex 31 was more active towards P. aeruginosa and E. coli, complex 32 was more selective towards S. aureus and K. pneumoniae. In contrast to Mn complex 29, the structurally similar Fe(III) complex 33 demonstrated some inhibitory potential towards E. coli, P. aeruginosa and S. aureus [105].

Scheme 14. Synthetic routes toward Fe(III) complexes 31-33 with metformin or its deprotonated form.
Ru(II) and Ru(III) complexes with biological properties have gained considerable popularity in recent decades [115-117]. The initial interest in Ru anticancer complexes was centered on the belief that Ru can mimic Fe and can be selectively transported to cancer cells with a high Fe demand by Fe transporters. Nowadays, the role of transferrin in the transport of Ru-based drug candidates is debatable, and the exact mechanism of their subcellular localization remains elusive [118,119]. Nevertheless, the success of trans-[tetrachloridobis(1H-indazole)ruthenate(III)] (KP1019) or its sodium salt (KP1339, also known as IT-139 or BOLD-100) in clinical trials (e.g., NCT04421820, NCT01415297) [120,121] suggests that the development of Ru-based anticancer complexes is a viable therapeutic strategy. In particular, half-sandwich Ru(II) anticancer complexes are interesting from the perspective of their easy functionalization and conjugation with various biologically active fragments [116]. Typically, DNA is not considered the main biomolecular target of half-sandwich Ru(II) complexes, since a large number of Ru(II) complexes demonstrated a strong preference towards thiol-containing blood serum proteins, such as bovine serum albumin (BSA) [122]. Therefore, it was hypothesized that coordination of metformin, which was shown to effectively bind the minor/major groove of DNA [108,109], might enhance the interactions of half-sandwich Ru(II) complexes with DNA, leading to DNA damage [123]. Complexes 34 and 35 with metformin were prepared in 74-86% yields using the standard (η6-p-cymene)Ru or (η6-benzene)Ru dimers as starting materials (Scheme 15). These drug candidates were active against human breast carcinoma MDA-MB-231 cells, human lung carcinoma A549 cells, as well as human ovarian carcinoma A2780 cells in the range of ≈8-30 µM, while metformin was not cytotoxic. On average, 34 was at least 1.5-fold more active than 35 in all cancer cell lines. Importantly, 34 and 35 were not toxic against healthy embryonic kidney HEK293 cells, thereby providing a wide therapeutic window for anticipated treatment strategies. Based on competitive fluorescence assays and docking simulations, it was concluded that 34 and 35 bound to DNA in a non-intercalative manner. The propensity of metformin for strong hydrogen bonding with DNA nucleobases [108,109] significantly contributed towards the DNA-binding affinity of the complexes [123]. In addition, viscosity measurements and gel electrophoresis studies with the supercoiled pUC19 DNA plasmid revealed covalent adduct formation with DNA. As expected, some binding interactions with BSA were observed, which were more pronounced for complex 35 than for 34 [123].
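Cytotoxicity values such as the ≈8-30 µM range quoted for 34 and 35 are typically obtained by fitting a dose-response (Hill-type) curve to cell-viability data. The sketch below illustrates this workflow with entirely hypothetical data points; none of the numbers originate from ref. [123].

import numpy as np
from scipy.optimize import curve_fit

# Sketch: extracting an IC50 from viability data with a two-parameter
# Hill-type dose-response model. All data points are hypothetical.
def hill(conc, ic50, h):
    """Viability falling from 1 to 0 with increasing concentration."""
    return 1.0 / (1.0 + (conc / ic50) ** h)

conc = np.array([1, 3, 10, 30, 100], dtype=float)    # uM, hypothetical
viability = np.array([0.95, 0.85, 0.55, 0.20, 0.05])  # fraction, hypothetical

(ic50, h), _ = curve_fit(hill, conc, viability, p0=(10, 1))
print(f"IC50 ~ {ic50:.1f} uM, Hill slope ~ {h:.2f}")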
We hypothesize that complexes 34 and 35 might be transported into cancer cells using BSA as a carrier, where they subsequently induce extensive DNA damage, leading to apoptosis. Since the novel compounds were not toxic to normal cells, their antidiabetic properties were investigated by measuring α-amylase activity, the inhibition of which typically prevents the absorption of glucose in diabetic patients. It was shown that both complexes could effectively inhibit α-amylase activity in the high micromolar range; however, they were at least twice less efficient than the standard drug acarbose [123].

Group IX (Co, Rh, Ir)

The Co group is represented by a whole range of structurally different Co(II) metformin complexes with various biological properties, including antibacterial, antifungal, antiviral, anticancer and antidiabetic ones. The reaction of metformin with CoCl2·6H2O in a 1:1 ratio resulted in the formation of the tetrahedral complex 36 (Scheme 16) [124]. It was determined by the liquid medium dilution method that the antibacterial activity of complex 36 against E. coli, K. pneumoniae and P. aeruginosa was lower than that of metformin; however, this complex demonstrated good inhibitory potential towards B. subtilis (MIC 64 µg/mL) and S. aureus (MIC 128 µg/mL). When metformin was added to a Co(II) center in the presence of additional chelating and non-chelating ligands, such as water, DAB or Schiff bases, the resulting complexes adopted octahedral geometry (Scheme 16) [71,125]. In contrast to 36, complex 37 was devoid of activity against various bacterial and fungal strains, including S. aureus and K. pneumoniae [71]. Surprisingly, the activity of 37 against gram-negative Shigella bacteria was even higher than the activity of the antibacterial drug moxifloxacin [71].
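The MIC values quoted above (64 and 128 µg/mL) follow from the liquid medium (broth) dilution method, in which the compound is tested over a two-fold dilution series and the MIC is read as the lowest concentration preventing visible growth. A minimal sketch with an illustrative growth readout:

# Sketch of the two-fold broth dilution series behind MIC values such as
# 64 and 128 ug/mL. Concentrations and the "growth" readout are illustrative.
def dilution_series(top_ug_ml: float = 1024, steps: int = 10) -> list:
    """Standard two-fold serial dilution, e.g. 1024, 512, ..., 2 ug/mL."""
    return [top_ug_ml / 2 ** i for i in range(steps)]

def mic(concentrations: list, growth: list) -> float:
    """MIC = lowest tested concentration with no visible growth."""
    inhibited = [c for c, g in zip(concentrations, growth) if not g]
    return min(inhibited)

concs = dilution_series()            # 1024 ... 2 ug/mL
growth = [c < 64 for c in concs]     # hypothetical: growth only below 64
print(mic(concs, growth))            # -> 64.0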
It is known that Schiff bases are commonly characterized by a wide range of biological properties, including antimicrobial activity [126]; therefore, the combination of metformin, a Schiff base and a Co(II) center was expected to demonstrate an improved antibacterial profile [125]. As a result, complex 38 demonstrated slightly improved activity towards E. coli (zone of inhibition: 11.29 mm (38), 10.41 mm (metformin) and 7.14 mm (Schiff base)). However, no improvement in activity against B. megaterium was observed (zone of inhibition: 8.29 mm (38), 10.07 mm (metformin) and 8.01 mm (Schiff base)).

In one of the most recent studies, a series of Co(III) complexes 39-43 with metformin and its analogues was prepared via a three-step synthesis (Scheme 17) [127,128]. In the first step, the biguanide ligands were coordinated to a Co(II) salt in an alkaline medium. In the second step, the resulting Co(II) complex was oxidized to a Co(III) complex using H2O2, and in the last step the OH- counterion was replaced by Cl- using diluted HCl. It should be noted that complex 44 was not converted to the chloride, and its moroxydine ligand was coordinated in a deprotonated form. Subsequently, the antiviral activity of the novel complexes was tested against the influenza virus in comparison with [Co(En)3]Cl3 [127]. Complexes 39, 40 and 43 inhibited the virus and were at least 8 times more selective towards the virus than towards mammalian cells, while Co(En)3Cl3 did not show any inhibitory potential, indicating the role of the biguanide ligands [127]. In another work, the cytotoxicity of 39 was tested against mouse muscle C2C12 cells and human liver carcinoma HepG2 cells [129]. Similar to 43, 39 did not show significant toxicity, indicating that it can be safely used as an antiviral agent or for other purposes [129]. Surprisingly, despite the lack of activity of complexes 41, 42 and 44 against the influenza virus, they demonstrated excellent inhibitory potential against herpes simplex virus type 2 strain MS (HSV-2) [128]. In particular, complex 41 inhibited HSV-2 at ED50 = 6.25 µg/mL and was at least 16 times more selective towards the virus than towards mammalian cells [128]. These results revealed the excellent therapeutic potential of Co-biguanide complexes as antiviral agents and the drastic influence of the biguanide ligands on the antiviral activity and selectivity of the complexes.
The biological properties of Co complexes have been extensively investigated for the last 70 years, and the anticancer potential of Co is well-documented [130]. Since Co is an essential trace element, which is particularly required for the biosynthesis of vitamin B12, the disruption of Co homeostasis can be used as an effective therapeutic strategy in cancer. In addition, the fine-tuneable redox activity of Co complexes allows for the facile delivery of bioactive ligands to cancer cells. The anticancer potential of several Co complexes with metformin has also been investigated. Co(II) complexes 45-48 (Scheme 18) with metformin and bidentate N-donor ligands demonstrated the ability to bind DNA within the binding pocket of metformin, similar to Cr complexes 24-26 (Scheme 11) [131,132]. It should be noted that both the Cr and Co complexes with metformin and o-phenylenediamine were characterized by the highest DNA docking scores. In addition, the antidiabetic activity of complex 48 was investigated in mice with STZ-induced diabetes. This complex significantly decreased blood glucose levels and normalized lipid profiles; however, no improvement in comparison with metformin was observed [132].

The anticancer activity of Co(II) complex 49, with two metformin ligands and two nitrate anions contributing to the octahedral coordination sphere (Scheme 18), has been investigated in vitro against Ehrlich ascites carcinoma (EACC) cells. As expected, metformin was devoid of cytotoxicity, while incubation of the cancer cells with 300 µg/mL of 49 resulted in only 19% residual cell viability [133]. Since both metformin and Co complexes were reported to act as antioxidants [134,135], the antioxidant activity of 49 was tested in comparison with uncoordinated metformin using the stable free radical α,α-diphenyl-β-picrylhydrazyl (DPPH). Both 49 and metformin demonstrated relatively high antioxidant activities of 62 and 41%, respectively [133].
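The DPPH figures quoted above follow the standard radical-scavenging calculation, % activity = (A_control - A_sample)/A_control x 100, where A is the absorbance of the DPPH solution at ≈517 nm. The absorbance values in the sketch below are hypothetical and merely reproduce the reported percentages:

# Sketch: standard DPPH radical-scavenging calculation behind the 62% (49)
# and 41% (metformin) figures. Absorbances below are hypothetical.
def dpph_scavenging(a_control: float, a_sample: float) -> float:
    """% antioxidant activity = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

print(dpph_scavenging(0.80, 0.304))  # ~62%
print(dpph_scavenging(0.80, 0.472))  # ~41%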
Ir(III) complexes represent a promising class of metal-based biologically active compounds due to the relative inertness of the low-spin 5d6 electronic configuration of Ir(III) and the relatively high stability of its complexes [136]. Sadler et al. prepared a comprehensive series of half-sandwich Ir(III) complexes with metformin and its analogues, aiming to investigate whether the antimicrobial properties of the complexes can be fine-tuned by the choice of substituents on the π-bonded arene or biguanide ligands (Scheme 19) [137].

Scheme 19. Synthetic routes toward half-sandwich Ir(III) complexes 50-62 with metformin and its analogues.

Subsequently, the antibacterial activity of the resulting 16- and 18-electron complexes, in comparison with several uncoordinated biguanide ligands, was determined against a panel of gram-positive and gram-negative bacterial strains, as well as fungal strains. Importantly, some relationships between the structure, hydrophobicity and antimicrobial activity of the complexes were established. All tested ligands, including metformin, as well as the more hydrophilic metformin complexes 50 and 51, were devoid of activity against various pathogenic bacterial and fungal strains (minimum inhibitory concentration, MIC > 32 µg/mL). On the other hand, the more lipophilic metformin complex 52 demonstrated increased activity against gram-positive strains, probably due to a higher level of penetration through the bacterial membrane. Other lipophilic complexes with phenyl and biphenyl substituents, 53-58, demonstrated excellent activity against gram-positive (MIC 0.125-1 µg/mL) and gram-negative bacterial strains (MIC 1-16 µg/mL), with the exception of P. aeruginosa, which is known to have poor membrane permeability. Interestingly, complexes 59-62, bearing a sulfonyl group with aromatic substituents, demonstrated similarly high activity against gram-positive strains and MRSA and no activity against gram-negative strains. All lipophilic complexes (i.e., all complexes except 50 and 51) demonstrated significant activity against the fungal strains C. albicans and C. neoformans. With regard to the effects of the halido ligand X, no clear structure-activity relationships between 55, 57 and 58 were observed. Importantly, the novel Ir complexes demonstrated high levels of selectivity towards microbial organisms vs. mammalian cells, in particular complex 56 (selectivity factor (SF) values ranging between 8 and >256). Notably, the antimicrobial activity of the Ir(III) complexes was linked with a specific mechanism of action. It was shown that ROS generation, DNA binding or cell wall targeting were responsible for the observed antimicrobial effects. On the other hand, reaction with intracellular thiols, such as L-cysteine, resulted in the rapid release of the biguanide ligands and (arene)Ir(cysteine) species, possibly leading to the inhibition of protein biosynthesis. Overall, Ir(III) complexes might selectively deliver metformin and analogous biguanide species, which otherwise could not penetrate the microbial membrane, into the cells.
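The selectivity factor (SF) quoted for this series is, in essence, the ratio of the concentration that harms mammalian cells to the antimicrobial MIC; the exact mammalian-toxicity endpoint used in ref. [137] is not restated here, so the definition below is an assumption. Illustrative values:

# Sketch: SF logic behind the "8 to >256" values, assuming SF is taken as the
# cytotoxic concentration against mammalian cells divided by the microbial MIC.
# All numbers below are hypothetical.
def selectivity_factor(cc_mammalian_ug_ml: float, mic_ug_ml: float) -> float:
    return cc_mammalian_ug_ml / mic_ug_ml

print(selectivity_factor(32.0, 0.125))  # 256.0 - highly selective
print(selectivity_factor(32.0, 4.0))    # 8.0   - lower bound of the range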
This example demonstrates the importance of metal coordination of metformin and its analogues, leading to improved penetration, novel mechanisms of action and new biomolecular targets. A related series of half-sandwich Ir(III) complexes 63-65 with metformin was subsequently prepared (Scheme 20) [138]. The novel complexes were tested against a panel of cancer cell lines under normoxic and hypoxic conditions in comparison with the clinically used anticancer drug cisplatin and a structurally similar Ir(III) complex without a biguanide ligand. In general, complexes 63-65 were significantly more cytotoxic than cisplatin, in particular under hypoxic conditions. Under both normoxic and hypoxic conditions, the cytotoxicity decreased according to the following trend: 63 > 65 > 64. On the other hand, the Ir(III) complex without metformin was characterized by decreased anticancer activity under hypoxic conditions, indicating the role of the biguanide ligand. These differences were corroborated by the ability of 63, but not of the analogous Ir(III) complex, to reduce the expression of hypoxia-inducible factor-1α (HIF-1α). The mechanism of action of 63 was linked to ROS generation and interference with the mitochondrial respiration of cancer cells. In addition, complex 63 demonstrated promising anti-invasive and anti-inflammatory potential. Similar to the previously described study, complexes 63-65 readily reacted with glutathione (GSH), resulting in the displacement of the metformin ligand. Therefore, the observed effects might be attributed to the selective release of metformin into the intracellular cancer environment.

Scheme 20. Synthetic route toward Ir(III) complexes 63-65 with metformin.

Group X (Ni, Pd, Pt)

The majority of metformin complexes with group 10 transition metals have been investigated in the context of their antibacterial activity. In particular, various Ni(II) complexes demonstrated significant activity against a panel of bacterial strains [139-141]. The reaction of 2 equiv. of metformin with various Ni(II) salts (Scheme 21) resulted in the formation of complexes 66-69. In contrast to the structurally similar Mn complexes 27 and 28, Ni complexes 66-68 were obtained as square planar tetracoordinate complexes with perchlorate, acetate or chloride anions outside of the coordination sphere [103,104]. On the other hand, complex 69, which was obtained by the reaction of metformin and NiCl2·6H2O in water, was characterized by a hexacoordinate octahedral coordination sphere. It should be noted that the structure of 69 was not confirmed by X-ray diffraction.

Scheme 21. Synthetic routes toward Ni(II) complexes 66-69 with metformin.

Both 66 and 67 demonstrated some inhibitory activity against a panel of bacterial strains, including E. coli, P. aeruginosa and S. enteritidis, with MICs between 256-512 µg/mL, while the corresponding inorganic Ni(II) salts were devoid of activity.
In general, the tetracoordinate Ni-metformin complexes were less effective than the structurally similar Mn complexes; however, 67 demonstrated exceptionally high activity against the L. monocytogenes strain with an MIC = 4 µg/mL. Furthermore, the hexacoordinate complex 69 was more effective than the structurally similar Mn complex 29 [105]. It seems that coordination of the chlorido ligand to the Ni(II) center did not significantly affect the activity of 69 in comparison with 68, and both complexes were characterized by excellent inhibitory potential against several gram-positive and gram-negative bacterial strains [142]; however, a direct comparison cannot be performed due to differences in the experimental conditions. The reaction of NiCl2·6H2O with metformin in the presence of other ligands, such as DAB [71] or the tridentate chelating ligand iminodiacetic acid [142], yielded the penta- and hexacoordinate complexes 70 and 71 (Scheme 22). In contrast to the structurally similar Zr complex 9, 71 was not active against any of the tested bacterial and fungal strains except K. pneumoniae, while complex 70 demonstrated broad antibacterial activity, comparable to 68. It should be noted that the uncoordinated iminodiacetic acid and metformin ligands also demonstrated some antibacterial activity under the same experimental conditions; however, it was lower.
Since complexes 68 and 70 demonstrated promising antimicrobial activity, their anticancer activity against liver cancer HepG2 cells was investigated in comparison with metformin and iminodiacetic acid [142]. All compounds demonstrated marginal cytotoxicity in the mM range, yet the activity of 68 and 70 was at least 2-4 times higher than the activity of the ligands, indicating the importance of the Ni(II) center. The observed cytotoxicity might be related to the ability of complexes 68 and 70 to irreversibly bind blood proteins such as albumin.

Metformin can also be coordinated to a metal center as part of a macrocycle. For example, the macrocyclic Ni complexes 74 and 75 were obtained in two steps via the intermediate formation of the square planar complex 73 with two deprotonated metformin ligands (Scheme 23) [143]. Despite their relative structural similarity, 73-75 demonstrated differential selectivity towards various bacterial and fungal strains. While complexes 73 and 74 were equally active against S. aureus, E. faecalis, E. faecium, E. coli, P. aeruginosa, C. albicans and C. parapsilosis (MIC values ≈100-300 µg/mL), complex 75 demonstrated selectivity towards C. albicans and C. parapsilosis (MIC values <100 µg/mL). Importantly, these compounds inhibited bacterial biofilm formation, which is commonly associated with nosocomial infections. Similar to 68 and 70, Ni complexes 73-75 induced only marginal anticancer effects in human ileocecal adenocarcinoma (HCT8) and cervical cancer (HeLa) cell lines, as reflected by an insignificant induction of apoptosis and lack of cell cycle interference. In agreement, the structurally similar complex 72 was devoid of cytotoxicity against mouse muscle C2C12 cells and human liver carcinoma HepG2 cells [129]. Subsequently, the drug-likeness of 73-75 was assessed by various computational methods using pharmacokinetic bioinformatic databases. Complexes 73 and 74 presented good drug-like features, but only 74 displayed reasonable intestinal absorption and suitable blood-brain-barrier (but not central nervous system) permeability. Based on the computational predictions, none of the complexes were toxic to the liver; however, 73 could cause skin sensitization. In addition, the complexes with macrocyclic ligands were predicted to inhibit protease activity.
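The exact tools behind these drug-likeness predictions are not specified in the text; a minimal sketch of a rule-of-five-style screen is shown below using RDKit descriptors on the metformin ligand alone (metal complexes themselves are handled poorly by such organic-molecule filters):

from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen

# Sketch: rule-of-five style descriptors for the metformin ligand. This is an
# assumption about the kind of screen used, not the actual workflow of ref. [143].
metformin = Chem.MolFromSmiles("CN(C)C(=N)NC(=N)N")

print("MW   :", round(Descriptors.MolWt(metformin), 1))
print("LogP :", round(Crippen.MolLogP(metformin), 2))
print("HBD  :", Descriptors.NumHDonors(metformin))
print("HBA  :", Descriptors.NumHAcceptors(metformin))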
The DNA-binding activity of a series of heteroleptic octahedral Ni complexes 76-80 with metformin and En or other bidentate N-donor ligands (Scheme 24) has been investigated using various spectrochemical methods [132,144]. As expected, all complexes were able to bind DNA grooves, similar to Co complexes 45-48 and Cr complexes 24-26, suggesting that octahedral complexes with metformin and other bidentate N-donor ligands demonstrate similar DNA-binding properties, independent of the metal center. In addition, complex 80 demonstrated some antidiabetic properties, similar to Co complex 48.

Structurally related Pd(II) complexes 81-85 with metformin have also been prepared and evaluated [105]. Notably, the macrocyclic complexes 83 and 84 showed significantly higher antimicrobial activity (MIC values ≈16-62 µg/mL) than the structurally analogous Ni(II) complexes 74 and 75 [145]. In addition, 83 and 84 effectively induced apoptosis and necrosis in HeLa cells [145], while 74 and 75 were virtually inactive [109]. Complex 85 was equally active against E. faecalis and Shigella as Zr complex 9, but did not display any activity against K. pneumoniae and S. aureus [105].
It is known that cyclometalated complexes of Pd(II) and Pt(II) are often characterized by excellent anticancer activity [147-149]. A series of cyclopalladated metformin complexes with various substituents on the benzylamine moiety was prepared according to the synthetic route described in Scheme 25. The anticancer activity of complex 86 was tested against the HeLa, MCF7 and A549 cancer cell lines in comparison with complex 82, uncoordinated metformin and the clinically used anticancer drug cisplatin [146]. With the exception of A549, 86 was 2-5-fold more cytotoxic than 82, suggesting a beneficial role of the cyclometalated fragment. Both complexes displayed cytotoxicity in the high micromolar range and were less active than cisplatin but significantly more active than metformin, which is known to display cytotoxicity in the high millimolar concentration range. The anticancer activity of 82 and 86 was linked with their DNA intercalation properties, which were confirmed by UV-vis and fluorescence spectroscopy. The methylene blue displacement assay suggested that DNA intercalation occurred via the metformin moiety. In addition, 82, 86 and metformin were shown to effectively interact with BSA; however, competition experiments revealed differences in the binding sites between complexes 82 and 86 and metformin [146].

Scheme 25. Synthetic route toward Pd(II) complex 86 with metformin.
Since the discovery of cisplatin, Pt(II) complexes have been extensively investigated for their anticancer properties. In general, these complexes exert their anticancer activity as a result of DNA binding, which also leads to the damage of healthy cells and severe side effects [150,151]. Pt(IV) complexes are typically less toxic, since they can be selectively activated in cancer cells by various triggers [152,153]. The first synthesis of the Pt(II)-metformin complex 87 from cis-dichlorobis(dimethyl sulfoxide)platinum(II) and metformin hydrochloride was performed in 1995 (Scheme 26) [154]; however, no biological properties of this compound were investigated. Subsequently, Pt(IV) complex 90 was prepared from K2PtCl6 in a 27% yield, and its anticancer properties were studied on cisplatin-sensitive and cisplatin-resistant P388 leukemia cells in comparison with cisplatin [155]. Even though it was shown that both compounds caused similar cell cycle perturbations at equimolar concentrations, namely, equal levels of cellular accumulation at the G2/M phase, the lower resistance factor of 90 indicates differences in its mechanism of action in comparison with cisplatin. Inspired by the promising in vitro results, the in vivo effects of 90 (6.25-50 mg/kg, i.p. route) were investigated in B6D2F1 mice with P388 xenografts in comparison with cisplatin [155]. Complex 90 demonstrated a significant improvement of the mouse life span (an increase of 59%) at its maximum tolerated dose of 25 mg/kg, while cisplatin demonstrated a marked 192% improvement at 10 mg/kg. The marked differences between the in vitro and in vivo results suggest possible differences in the pharmacokinetic behavior of these compounds.
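The life span figures quoted above correspond to the standard increase-in-life-span (ILS) metric, computed from the median survival of treated versus control animals. The survival times in the sketch below are hypothetical and chosen only to reproduce the reported percentages:

# Sketch: % increase in life span (ILS) behind the reported 59% (complex 90)
# and 192% (cisplatin) figures. Survival times below are hypothetical.
def ils_percent(median_treated_days: float, median_control_days: float) -> float:
    """ILS (%) = (T/C - 1) * 100."""
    return (median_treated_days / median_control_days - 1.0) * 100.0

print(ils_percent(15.9, 10.0))  # ~59%
print(ils_percent(29.2, 10.0))  # ~192%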
Several Pt(II) and Pt(IV) complexes were investigated as potential antimicrobial agents. Structurally similar Pt(II) and Pd(II) complexes were prepared from the corresponding salts (Scheme 26) [124]. In general, both complexes did not show any prominent activity against a panel of bacterial strains (MIC 512-1024 µg/mL); however, drastic differences were observed for B. subtilis and S. aureus [124]. Pt(II) complex 88, as well as uncoordinated metformin, was devoid of activity against S. aureus, while Pd(II) complex 89 was moderately active (MIC 256 µg/mL). On the contrary, 89 was relatively inactive against B. subtilis, while 88 demonstrated strong inhibitory potential (MIC 64 µg/mL), which was 2-fold higher than the activity of metformin and similar to the activity of Co complex 36. In addition, the antimicrobial activity of Pt(IV) complex 91, bearing four monocoordinated deprotonated metformin ligands, was investigated using the disk diffusion method (Scheme 26) [105]. Complex 91 was moderately active against all tested bacterial and fungal strains, and its inhibitory potential was comparable to that of Pd(II) complex 81.
Group XI (Cu, Ag, Au)

The antibacterial properties of Cu have been known since ancient civilizations [156]. Cu surfaces and materials were shown to effectively inhibit bacterial biofilms, including methicillin-resistant S. aureus, resulting in a significant reduction in hospital-acquired infections [157,158]. Aiming to understand whether Cu(II)-metformin complexes might have therapeutic potential as antimicrobial agents, a large panel of Cu complexes has been tested against various bacterial and fungal strains and compared with structurally similar complexes with other metal centers (Scheme 27 and Figure 4). Complex 92 was prepared by the condensation of metformin and readily available 2-pyridinecarbaldehyde in the presence of Cu(ClO4)2·6H2O in a 76% yield (Scheme 27). Subsequent nucleophilic addition of methanol resulted in the formation of 93 in a 26% yield, whose structure was confirmed by X-ray diffraction [159].
Complexes 92 and 93 were equally, moderately potent against E. coli, while the other tested strains were more sensitive to the Cu(II) complexes than to the uncoordinated metformin-derived ligands. Overall, 92 demonstrated the strongest inhibitory potential; however, it was not compared with the respective inorganic Cu(II) salt or with clinically used antibiotics [159].

Olar et al. prepared mono- and dinuclear tetracoordinate Cu(II) complexes 94-99 (Figure 4) according to the synthetic procedures described earlier [103,104,143,160]. In contrast to the structurally similar Ni complexes 73-75, Cu(II) complexes 94-96 were not significantly active against E. faecium, P. aeruginosa and C. albicans and were completely devoid of activity against S. aureus, E. coli and C. parapsilosis [143]. Only 96 demonstrated stronger inhibitory activity than its Ni analogue against the E. faecalis strain. While the Ni complexes 73-75 did not induce appreciable cytotoxicity in the tested cancer cell lines, Cu(II) complexes 94-96 induced significant apoptosis and necrosis in the HCT8 cell line, which was associated with their ability to interfere with the cancer cell cycle and cause G2/M phase arrest [143]. Similar to 83 and 84, 94 and 95 presented good drug-like features, but only 95 displayed reasonable intestinal absorption and suitable blood-brain-barrier (but not central nervous system) permeability [143]. In addition, all tested complexes with macrocyclic ligands strongly inhibited protease activity.

Subsequently, compounds 97 and 98 were prepared according to previously published synthetic procedures [161] and tested against 82 gram-negative strains of E. coli, K. pneumoniae and E. cloacae, which were isolated from different surfaces in the hospital environment [160]. The dinuclear complex 97 demonstrated significantly higher antibacterial activity than 98, probably due to the presence of two active metal centers. The most pronounced activity was observed against the E. coli strains (MIC 18-1250 µg/mL), followed by K. pneumoniae and E. cloacae (MIC 312.5-1250 µg/mL) [160]. In another work, the cytotoxicity of 98 was tested against mouse muscle C2C12 cells and human liver carcinoma HepG2 cells [129]. It was shown that 98 was devoid of toxicity, indicating that it can be safely used as an antibacterial agent or for other purposes.
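When a compound is screened against a large panel of clinical isolates, as with the 82 hospital strains above, the MIC distribution is commonly summarized by its MIC50 and MIC90 values (the lowest concentrations inhibiting 50% and 90% of isolates). A minimal sketch with a hypothetical MIC list drawn from the reported 18-1250 µg/mL span:

import numpy as np

# Sketch: MIC50/MIC90 summary statistics over a strain panel.
# The MIC list below is hypothetical, not data from ref. [160].
mics = np.array([18, 37, 75, 75, 150, 150, 312.5, 312.5, 625, 1250])  # ug/mL

mics_sorted = np.sort(mics)
n = len(mics_sorted)
mic50 = mics_sorted[int(np.ceil(0.5 * n)) - 1]  # lowest MIC covering >=50% of isolates
mic90 = mics_sorted[int(np.ceil(0.9 * n)) - 1]  # lowest MIC covering >=90% of isolates
print(mic50, mic90)  # 150.0 625.0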
Subsequently, the ability of 99, 100 and the respective Cu(II) salts to inhibit colonization of eukaryotic cells by S. aureus and P. aeruginosa was investigated. It was shown that all compounds completely abolished the colonization by P. aeruginosa; however, only 99, but not the Cu salt, could abolish the colonization by S. aureus. These results indicate the potential of 99 to prevent bacterial biofilm formation on hospital-related surfaces and prosthetic devices. Structurally similar complexes 99-103 were prepared from Cu perchlorate hexahydrate and biguanide ligands, which were generated in situ via the nucleophilic addition of the corresponding amines to dicyandiamide [162]. X-ray diffraction of 99 and 101 confirmed that the perchlorate anions were not coordinated to the Cu(II) center but resided in the outer coordination sphere of the complexes. Complexes 99 and 100 showed considerable antibacterial activity against E. coli, S. typhimurium, S. aureus and B. cereus at 1.25 mg/mL concentrations, although they were less effective than the standard antibiotics amikacin and gentamicin [162]. The slightly bulkier complex 101 did not show any activity against E. coli and B. cereus even at 12.5 mg/mL. In addition, the DNA binding properties of all complexes were tested using UV spectroscopy, and it was suggested that all complexes can interact with DNA either via electrostatic or hydrogen bonding interactions [162]. Similar to the tetracoordinate complex 98 with two chloride anions in the outer coordination sphere, the hexacoordinate chlorido-complex 104 was moderately active against various bacterial and fungal strains but not active against the A. flavus fungal strain [105]. No significant differences were observed between Cu(II) complex 104 and Ni(II) complex 70, which were studied under identical experimental conditions [105]. In addition, 104 did not show strong antiproliferative effects against MCF-7 and HeLa cancer cell lines (IC50 > 50 µM) [163]. Another hexacoordinate complex, 105, with monodentate aqua and DAB ligands was moderately active against E. faecalis, K. pneumoniae and Shigella and not active against S. aureus, C. albicans and A. niger. In general, with the exception of Shigella, Cu(II) complex 105 was more active than the structurally similar Ni(II) and Co(II) complexes 71 and 37 and was similarly active as the Zr(IV) complex 9. Besides antibacterial properties, the DNA binding, antioxidative and antidiabetic properties of Cu(II)-metformin complexes were also investigated. Complex 106 demonstrated some antihyperglycemic activity in rats with STZ-induced diabetes, as well as DNA binding properties, which were comparable with the structurally similar Ni(II) complex 70 and Co(II) complex 48 [132]. Hexacoordinate heteroleptic Cu(II) complexes with metformin and amino acid chelating ligands 107-109 demonstrated quasi-reversible electrochemical behaviour; therefore, the DNA binding properties of 108 and 109 were studied using cyclic voltammetry [164]. Based on the pronounced decrease in peak currents, it was confirmed that 108 and 109 formed DNA-bound Cu(II) complexes at the electrode surface, probably via the metformin fragment. It was hypothesized that these redox-active complexes might be involved in the dismutation of superoxide and peroxide radicals. As expected, complexes 107-109 demonstrated the ability to inhibit superoxide dismutase and catalase [132]; however, these effects were achieved only at high millimolar concentrations, which is undesirable for potential anticancer drug candidates.
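Many comparisons in this section are fold-changes derived from MIC values, where a lower MIC means a stronger inhibitor. The minimal sketch below shows this arithmetic; note that the metformin MIC of 128 µg/mL is inferred from the "2-fold higher" statement for complex 88 (MIC 64 µg/mL) and is an assumption, not a directly quoted value.

```python
# Illustrative helper for comparing antimicrobial potency from MIC values.
# A lower MIC means stronger activity, so fold-activity is the ratio of the
# reference MIC to the test MIC. This is a generic sketch, not code from any
# cited study.

def fold_activity(mic_test_ug_ml: float, mic_reference_ug_ml: float) -> float:
    """Return how many times more active the test compound is than the reference."""
    return mic_reference_ug_ml / mic_test_ug_ml

# Complex 88 (MIC 64 ug/mL) vs. uncoordinated metformin (assumed MIC 128 ug/mL):
print(fold_activity(64, 128))  # 2.0 -> the "2-fold higher" activity cited above
```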
Interestingly, several Cu(II) complexes were investigated as potential herbicides for effective weed management [165,166]. The assumption was based on the ability of redox-active Cu(II) complexes to decrease the GSH/GSSG ratio, leading to the inhibition of protein synthesis and suppression of cell division [62,167]. A series of metformin-derived compounds was prepared by the condensation of 2-thiophene- or 2-imidazolecarboxaldehyde with 2-guanidinobenzimidazole or 2-benzothiazolyl-guanidine [165]. The subsequent coordination to a Cu(II) center, achieved by an in situ electrochemical method, resulted in the formation of tetracoordinate square-planar Cu(II) complexes 110-115 with bidentate metformin-derived ligands in 79-92% yields [165]. The effects of 110-115 (Figure 5) and their respective free ligands on the photosynthetic activity of photosystem II (PSII) were studied using photochemically active fragments from spinach leaves [131]. Photochemical changes were quantified based on the PSII chlorophyll fluorescence yield. In general, the Cu(II) complexes demonstrated stronger inhibitory effects than the respective ligands. The following major difference was observed with respect to the 5-membered thiazole ring: when NH was replaced with the S atom, the inhibitory activity of the Cu(II) complexes increased by more than 10-fold (e.g., 6.3% and 63.4% for 110 and 111, respectively). No marked differences were observed between 110, 112 and 114 or between 111, 114 and 115. Similar trends were observed with respect to the Cu(II) inhibition of PSII carbonic anhydrase (CA) activity and α-carbonic anhydrase (α-CA) activity in bovine erythrocytes. In particular, 100% inhibition of CA was observed for complexes 113 and 115 (100 µM). While the uncoordinated ligands did not induce marked photochemical changes, they demonstrated significant inhibition of CA and α-CA (39.4-78.9% and 5.6-50.9%, respectively). As expected, all complexes inhibited glutathione reductase (GR) from chloroplasts at the nanomolar level, and the highest inhibitory activity was observed for complex 112 (IC50 = 0.025 nM). In order to investigate the mechanism of GR inhibition, the activity of the reduced and oxidized forms of GR from S. cerevisiae in the presence of Cu(II) complexes 110-115 and the respective ligands was studied in a time-dependent manner. The oxidized form of GR was inhibited by both the complexes and the ligands, while the reduced form of GR was inhibited only by the complexes, indicating their different mechanisms of action. It was suggested that the Cu(II) ions and the ligands might act synergistically, where the Cu(II) ions could cause initial oxidation of the enzyme and the ligands subsequently induce irreversible enzyme destruction [166].
In recent years, Au(I) and Au(III) complexes have gained popularity as promising anticancer drug candidates due to their high affinity for intracellular enzymes [168-170]. According to Pearson's theory, Au atoms have a high propensity for "soft" ligands, such as thiols, and therefore Au complexes readily target thioredoxin reductase (TrxR), GR and other thiol-containing biomolecules that are overexpressed in cancer cells [168]. Besides human thiol-containing enzymes, Au complexes were also reported to target bacterial TrxR and glutathione-dependent enzymes, leading to the efficient inhibition of bacterial respiration [168,171-173]. Most Au-metformin complexes are discussed in the context of their anticancer activity; however, antimicrobial activity was also reported. Coordination of 3 equiv. of metformin to an Au(III) center yielded the mononuclear octahedral Au(III) complex 116 (Scheme 28) [105]. This complex demonstrated moderate inhibition of all tested bacterial and fungal strains (zone of inhibition of 9-20 mm/mg), which was 1.4-4 times lower than the activity of tetracycline. 116 was more than 1.4 times more active against the gram-positive B. subtilis strain than against the gram-negative E. coli and P. aeruginosa strains and more than two times more active than against another gram-positive strain, S. aureus [105]. The cytotoxicity of the cyclometalated Au(III) complexes 117 and 118 against cervical epithelial carcinoma HeLa cells and melanoma B16 cells was comparable to cisplatin [174]; however, 117 and 118 were 5-8 times more active than cisplatin when tested against hepatocellular carcinoma PLC cells and breast carcinoma MDA-MB-231 cells. It was shown that upon interaction with intracellular GSH, 117 and 118 formed GSH adducts, such as [(CˆN)Au(III)(GSH)n], where CˆN is a cyclometalated backbone and n = 1, 2. These adducts caused extensive cytoplasmic vacuolization and endoplasmic reticulum (ER) stress.
In addition, complex 117 exhibited prominent anti-angiogenic properties at sub-cytotoxic concentrations [174]. Based on these observations, Babak and Ang et al. hypothesized that fine-tuning of the cyclometalated fragment would allow for prodrug activation and release of the biguanide ligands, leading to complementary action with the [(CˆN)Au(GSH)n] fragments in cancer cells [50]. Since Au complexes readily target TrxR, leading to interference with mitochondrial function, and metformin is a well-known energy disruptor targeting mitochondrial complex I, we hypothesized that these two components might synergistically disrupt mitochondrial processes in metabolically dependent cancers, such as triple-negative breast cancers (TNBCs) [50]. Similar to the observations of Che et al., complexes 119-123 induced cytotoxicity in MDA-MB-231 cells in a low micromolar range and were at least three times more active than cisplatin and more than 100-1000 times more active than metformin. All complexes exhibited high selectivity towards cancer cells vs. healthy hepatocytes and cardiomyocytes. In particular, 123 demonstrated prominent anticancer activity (IC50 = 0.72 ± 0.08 µM) (Figure 6A). We showed that the anticancer activity of the complexes was at least partially dependent on their reactivity towards GSH [50]. The least active complex, 121, released the phenformin ligand without any activation by GSH, indicating its lower stability in aqueous media. In contrast, the most active complex, 123, demonstrated time-dependent release of metformin only in the presence of GSH, in agreement with our hypothesis (Figure 6B). Complex 122 released metformin only upon heating and was at least ten times less active than 123. The lead drug candidate 123 significantly inhibited mitochondrial respiration in TNBC cells and induced ER stress. We showed that the induction of integrative stress forced the cancer cells to activate various pro-survival responses, such as metabolic reprogramming, UPR and autophagy; however, 123 effectively shut down these pro-survival attempts, resulting in the induction of apoptosis. Subsequently, these observations were confirmed by an independent group of researchers [175]. Inspired by the promising in vitro results, we verified the efficacy of 123 in an orthotopic mammary fat pad animal model, which realistically recapitulates the TNBC environment in contrast to commonly used xenograft models (Figure 6C). A marked reduction of tumor burden (Figure 6D) and the formation of large areas of tumor necrosis were caused by 123 [50]. In addition, the tumors were characterized by infiltration of inflammatory cells, suggesting the activation of an immune response. To conclude, complex 123 might be efficient in TNBC patients with a high risk of metastasis and relapse, and it is currently undergoing advanced preclinical investigations [176].
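The cancer-vs-healthy-cell selectivity reported for 119-123 is commonly quantified as a selectivity index (SI). Below is a minimal sketch of that calculation; the MDA-MB-231 IC50 of complex 123 comes from the text, whereas the hepatocyte IC50 is a hypothetical placeholder chosen purely for illustration.

```python
# Minimal sketch of the selectivity index (SI) used to express cancer-vs-
# healthy-cell selectivity: SI = IC50(non-malignant cells) / IC50(cancer
# cells); SI >> 1 means preferential toxicity to cancer cells.

def selectivity_index(ic50_normal_um: float, ic50_cancer_um: float) -> float:
    """Ratio of IC50 in non-malignant cells to IC50 in cancer cells."""
    return ic50_normal_um / ic50_cancer_um

ic50_mda_mb_231 = 0.72   # uM, value for complex 123 quoted in the text
ic50_hepatocytes = 25.0  # uM, hypothetical placeholder, not a reported value

print(f"SI = {selectivity_index(ic50_hepatocytes, ic50_mda_mb_231):.1f}")
```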
Group XII (Zn, Cd, Hg)
Group 12 consists of Zn, Cd and Hg. While Cd and Hg do not play any physiological role and are highly toxic, the nutritional value of Zn has been known for a very long time [177]. Zn is considered an important chemical element that participates in various biological processes [178]. Zn plays an indispensable role in modulating the function of various enzymes and proteins and acts as an endogenous modulator of neuronal activity [179]. In addition, Zn-based compounds possess a broad range of antimicrobial activity and are commonly used as additives for dental and dermatological purposes [180]. Therefore, several Zn(II) complexes with metformin were prepared with the aim of investigating the role of the Zn metal center in their antimicrobial activity (Figure 7) [103-105,124]. The tetrahedral complex 124, where metformin acts as a monodentate ligand, was tested against E. coli, K. pneumoniae, P. aeruginosa, B. subtilis and S. aureus in comparison with the structurally similar Co(II) complex 36 and the square-planar Pt(II) and Pd(II) complexes 88 and 89 [124]. In agreement with the broad antimicrobial activity spectrum of Zn(II), 124 demonstrated strong inhibitory activity against all strains (MIC 32-128 µg/mL) and was 4-32 times more active than the other metal complexes [124]. Similarly, complex 125 with two bidentate metformin ligands demonstrated the highest inhibitory activity against various gram-positive and gram-negative strains, which was comparable to or slightly lower than the activity of tetracycline [105]. It should be noted that the antibacterial activity of 125 was higher than the activity of metformin complexes based on Mn(II), Fe(III), Ni(II), Cu(II), Mg(II), Pt(IV), Au(III) and Pd(II) metal centers. However, this complex was devoid of any activity against the A. flavus fungal strain [81]. The inhibitory activity of the octahedral complexes 126 and 127 was weaker than that of the tetracoordinate complexes, although the results cannot be directly compared due to the differences in experimental conditions [103,104].
In particular, complex 126 was characterized by marginal activity against all strains in the tested panel (MIC 128-1024 µg/mL) [104]. When the activity of 126 and 127 was compared with the respective inorganic Zn(II) salts under identical experimental conditions, the inorganic salts demonstrated higher inhibitory potential than the metformin complexes. In contrast to Zn, the therapeutic potential of Cd complexes is hindered by the severe adverse health effects associated with Cd exposure. Therefore, the investigation of Cd(II)-metformin complexes might be more interesting from a fundamental point of view. Heteroleptic octahedral Cd(II) complexes 128 and 129 with metformin and DAB or glimepiride ligands were prepared starting from CdCl2·H2O, and their antimicrobial properties were investigated in comparison with the uncoordinated ligands or the Cd(II) salt (Scheme 30) [71,181]. As expected, none of the ligands or the Cd(II) salt was active against any bacterial or fungal strain in the panels [71,181]. Conversely, complex 128 demonstrated excellent inhibitory potential against K. pneumoniae, S. aureus and Shigella bacterial strains, which was comparable to moxifloxacin, and similar inhibitory activity against A. niger fungal strains as nystatin [71]. It should be noted that structurally similar metformin complexes with Co(II), Ni(II), Cu(II), Zr(IV) and Pd(II) metal centers did not show any activity against A. niger [71]. Similarly, complex 129 strongly inhibited E. coli, K. pneumoniae, P. aeruginosa and P. vulgaris and was particularly active against S. aureus [181]. For certain strains, the activity of 129 even exceeded the activity of the antibiotic amikacin.
The Role of the Metal Center in the Biological Activity and Potential Toxicity of Pre-Formed Metal-Metformin Complexes
Taking into account the well-known role of metformin and other clinically used biguanides in the treatment of diabetes and various infections, as well as the epidemiological evidence linking metformin to reduced cancer risks, it is not surprising that the majority of metformin-metal complexes were investigated in the context of their antidiabetic, antimicrobial and anticancer properties. In Figure 8 and Table A1, we summarize the lead metal-metformin complexes with the most prominent antibacterial, antifungal, anticancer and antidiabetic activity. Since various structurally different V complexes generally exhibit antidiabetic properties [74,75], it was hypothesized that coordination of the antidiabetic drug metformin to the V center might result in their synergistic action. The activity of V-metformin complexes was investigated in rats with chemically- or HC diet-induced diabetes.
Although V-metformin complexes 10, 14 and 15 were able to reduce blood glucose levels more efficiently than uncoordinated metformin, no marked improvement in comparison with other V complexes was observed [76,77,79]. In contrast, when metformin was introduced into the structure of decavanadate [V10O28]6− as a counterion, the solubility of metforminium decavanadate 16 considerably improved due to the additional hydrogen bonding with the metforminium cation [81]. As a result of more favorable pharmacokinetic properties, metforminium decavanadate 16 was markedly more active than metformin, sodium decavanadate or V salts in various insulin-dependent and insulin-independent animal models [85,86]. Importantly, 16 was also able to improve diabetes- and obesity-associated memory deterioration [87]. Besides V, metformin complexes with other metal centers, such as Nd, Cr, Ni, Cu, Co and Ru, were also investigated in animals with induced diabetes [69,97,98,123,132]; however, only the Cr-metformin complexes 20 and 21 demonstrated significantly improved hypoglycemic effects in comparison with metformin and the respective Cr salts [97,98]. The observed profound differences in the hypoglycemic activity of various metal-metformin complexes indicate the unambiguous role of the V and Cr metal centers in their antidiabetic mechanism of action. It is believed that one of the mechanisms of V-mediated insulin signaling is based on the inhibition of protein tyrosine phosphatase 1B (PTP1B), a key enzyme that inactivates the insulin receptor [182]; however, insulin-independent mechanisms were also reported [183]. Cr complexes were also shown to affect insulin receptors, but independently of PTP1B regulation [184]. Importantly, despite the improved antidiabetic effects of V- and Cr-metformin complexes, their effects on metal metabolism should be considered with caution. At higher doses, V- and Cr-metformin complexes might cause unwanted toxicity as a result of the alteration of essential trace element homeostasis. For example, non-insulin-dependent diabetic patients were characterized by Cr and V imbalance [185,186]; hence, large doses of Cr- and V-based antidiabetic drugs might exacerbate the already compromised metal status and contribute to the development of insulin resistance.
Based on the analysis of the existing literature, the majority of reported metformin-metal complexes, including Y, La, Ce, Sm, Zr, V, Cr, Mn, Fe, Co, Ir, Ni, Pd, Pt, Cu, Au, Zn and Cd, were tested against various panels of bacterial and fungal strains using standard antibacterial assays, such as the disk agar diffusion method. Since these complexes were tested under different experimental conditions, their antimicrobial efficacy cannot be directly compared. However, most metal complexes demonstrated improved activity in comparison with uncoordinated metformin. In order to estimate whether the antibacterial and antifungal effects of the metal complexes originate solely from the metal or rather from additive/synergistic effects of the metal center and the biguanide ligands, some of the complexes were compared to the respective inorganic salts. It was shown that several complexes, such as Cr(III) complex 21, Ni(II) complexes 66 and 67, Cu(II) complex 100 and Cd(II) complex 128, were indeed more active than the uncoordinated ligands and the respective inorganic salts, indicating the importance of metformin coordination to the metal centers for the enhancement of their antimicrobial properties. Although the antimicrobial role of various metal surfaces has been recognized since ancient times, the clinical use of metal-based antimicrobial compounds is very limited. Recently, Frei et al. performed an antibacterial screening of 906 structurally different metal complexes with various metal centers [187], which also included some of the Ir(III) complexes with metformin and its derivatives (50-62) described earlier [137]. Surprisingly, more than 25% of the metal complexes were active against bacterial and fungal strains, and 9.7% of the 906 complexes were active and non-toxic to human cells. This hit rate is markedly higher than the hit rate for organic molecules, which does not typically exceed 2%, suggesting that metal complexes hold great potential as antibiotics. It should be noted that the most active and non-toxic metal complexes in the screen were characterized by the presence of Ru, Ag, Pd and Ir centers, while Cd and Pt complexes were active, but expectedly toxic [187]. While most of the tested Ir(III)-biguanide complexes were not active against gram-negative bacterial strains, they demonstrated some of the strongest inhibitory effects against gram-positive bacterial and fungal strains among all tested compounds in the library [187]. Similarly, in comparison with other metformin-metal complexes described in this review, Ir(III) complexes, such as 58 (Figure 8), and Cd(II)-biguanide complexes 128 and 129 demonstrated some of the strongest antimicrobial properties [71,137,181]; however, we do not foresee clinical success of Cd(II)-metformin complexes due to their high toxicity and poor selectivity towards bacterial cells. Complexes with endogenous metals, such as Cu and Zn, were considered "underperformers" in the large screen of 906 complexes, possibly due to the more accelerated ligand substitution in comparison with second- and third-row elements and premature decomposition of the complexes before reaching the desired biomolecular target [187]. On the contrary, Cu(II) and Zn(II) complexes with metformin and other biguanides, such as 99 or 125 (Figure 8), showed marked inhibition of bacterial and fungal growth [71,105]. This could be explained by the high stability of the metformin complexes in physiological solutions due to the chelating effect of the bidentate biguanide ligands.
In addition, the La(III)-metformin complex 2, Pt(IV) complex 91 and Au(III) complex 116 (Figure 8) strongly inhibited the growth of several bacterial and fungal strains [68,105]. The lipophilicity of the metal complexes was a determining factor in their antibacterial and antifungal activities, since more lipophilic complexes showed more efficient internalization into bacterial cells, resulting in stronger inhibitory activity [137]. Besides antimicrobial applications, various metal-metformin complexes, including Mn, Ru, Co, Ir, Ni, Pd, Pt, Cu and Au, were tested as potential anticancer agents. Even though the results of cytotoxicity assays cannot be directly compared due to different cancer cell lines and inconsistent drug treatment times, the analysis of literature data revealed that the most significant cytotoxicity was exhibited by Ir(III), Au(III), Pt(IV) and Ru(II)-metformin complexes [50,123,138,155,174]. In particular, complex 123 (Figure 8) exhibited marked cytotoxicity in a submicromolar concentration range (IC50 (72 h) ≈ 0.15-0.72 µM) [50]. In addition, 90 and 123 significantly reduced tumor burden in vivo [50,155]. Interestingly, some of the strongest antibacterial and anticancer activities were exhibited by structurally different biguanide complexes with Ir(III), Au(III), Cu(II), Pt(IV) and Ru(II) centers. It is possible that, despite certain differences between bacteria and cancer cells, the complexes might undergo similar mechanisms of action. It is known that metformin is devoid of strong antibacterial and anticancer in vitro activity due to its inability to efficiently internalize into bacterial or cancer cells. Conversely, the strong antimicrobial activity of redox-active Ir(III), Au(III), Cu(II) and Pt(IV) complexes with metformin and its derivatives might be linked to their ability to selectively deliver biguanides through bacterial or cancer cell walls and subsequently release the active fragments upon reduction or substitution reactions with intracellular thiols or other biomolecules. Subsequently, the uncoordinated metformin might exert its antibacterial action through binding with endogenous metal ions from metalloenzymes or DNA intercalation, while the liberated metal fragments might confer synergistic effects through interaction with other metal-binding biomolecules.
Conclusions and Future Outlook
Since the mechanism of action of metformin and other biguanides is at least partially dependent on intracellular metal binding, administration of pre-formed metal complexes with a pre-determined biguanide/metal ratio might significantly potentiate their on-target biological activity. We demonstrated that coordination of metformin and its analogues to metal centers often results in enhanced intracellular penetration, reduced drug resistance and synergistic action with biologically active metals, which therefore might be a viable strategy to improve the pharmacological activity of biguanides. However, it became apparent that despite extensive research on metformin-metal complexes, most of the studies lacked structure-activity relationships and an in-depth investigation of the mechanism of action of the metal complexes in relevant disease models. In addition, the clinical use of metals is often believed to be associated with unwanted toxicity and side effects; therefore, it is important to study the toxicity profile of novel metal-based drug candidates in order to understand their limitations for future therapeutic applications.
The Effect of Attitude, Subjective Norm and Perceived Behavioral Control (PBC) on Actual Purchasing Through Online Purchase Intention in the Online Retail Industry
Online shopping activities are currently growing, supported by the increasing number of Internet users. With the growing number of people who know the internet, and with the presence of Generation Z, who were born in the digital age, the habit of purchasing goods and services is slowly but surely shifting online. One of the factors that affect online shopping is the intention to purchase online. The Theory of Planned Behavior (TPB) explains that consumer behavior is shaped by attitudes, subjective norms, and perceived behavioral control (PBC), which form the intention of online purchasing. This study aims to analyze the effect of attitudes on online purchase intentions, the effect of subjective norms on online purchase intentions, the effect of PBC on online purchase intentions, and the effect of online purchase intentions on actual purchases. The research was conducted at Prama Sanur Beach Hotel. The data collection techniques used are observation, interview, documentation, and questionnaires. The data analysis technique is quantitative analysis with the PLS program. The results show that attitudes have a positive and significant influence on online purchase intentions. Subjective norms have a positive but insignificant effect on online purchase intentions. Perceived behavioral control (PBC) has a positive and significant effect on online purchase intentions, and online purchase intentions have a positive and significant effect on actual purchases.
INTRODUCTION
Indonesia's e-commerce market, especially in the retail industry, is experiencing a developmental phase. This is reflected in the significant increase in the value of Indonesian e-commerce transactions from year to year. Data released by the Ministry of Communications and Information of the Republic of Indonesia and the Ministry of Trade of the Republic of Indonesia stated that in 2014 the value of e-commerce transactions reached US$12 billion, rapidly increased to US$19 billion in 2015, and in 2016 was predicted to pass US$20 billion, or above Rp 250 trillion (Ministry of Trade, 2016). The rapid adoption of the Internet in trading activities is motivated by the benefits of internet technology for suppliers in business and for consumers in shopping. Benefits for suppliers include transaction efficiency, low-cost distribution channels, improved customer service through personalized services, reduced costs, shortened distribution channels, and increasingly easy and low-cost communications (Reino et al., 2011). The same thing happens on the consumer side, where the internet offers practicality in shopping, extensive choice, relatively cheap prices, and frequent attractive promotions (SWA, 2016). In Indonesia itself, online shopping activities are growing, as can be seen from the widespread online shopping activity; this is also supported by the growing number of internet users in Indonesia, which in 2016 reached 132.7 million, or about 51.5 percent of Indonesia's total population of 256.2 million (APJII, 2016). With the growing number of people who know the Internet, along with the birth of Generation Z in the digital age, the habit of shopping for goods and services has been shifting online.
The most consumptive community in the digital realm is the age group of 19-35 years. Approximately 58 percent of this group is dominated by employees, and the most searched products in the online context are fashion products, gadgets and accessories, household products, cosmetics and beauty products, and ticket purchases (ukm.com). Based on the formulation of the above problem, the aims of this study are to analyze the effect of attitudes on online purchase intentions, to analyze the effect of subjective norms on online purchase intentions, to analyze the effect of PBC on online purchase intentions, and to analyze the effect of online purchase intentions on actual purchases.
CONCEPT AND HYPOTHESIS
The Theory of Reasoned Action (TRA) was later extended by Ajzen into what was named the Theory of Planned Behavior (TPB). The Theory of Planned Behavior is described as a more complete construct than TRA. TRA explains that one's intention toward a behavior is formed by two main factors, attitude toward the behavior and subjective norm, whereas TPB adds one more factor, namely perceived behavioral control. According to the theory, a target individual has a great possibility of adopting a behavior if the individual has a positive attitude toward the behavior, obtains the consent of other individuals who are close and related to the behavior, and believes that the behavior can be performed well. The Theory of Planned Behavior (TPB) explains that consumer behavior is shaped by attitudes, subjective norms, and perceived behavioral control (PBC), which shape intentions. Intention then affects how a person behaves. This theory forms the basis of the current study, which analyzes the effect of intention on online buying behavior. This model was developed by Icek Ajzen to improve the predictive power of the Theory of Reasoned Action (TRA) by adding the PBC variable. The theory postulates that attitudes, subjective norms, and PBC together form intention and behavior. The three variables forming intention in the TPB are as follows: attitude is a person's positive or negative evaluation of a behavior, that is, the extent to which the behavior is judged positive or negative; subjective norm is a person's perception of a particular behavior, where this perception is influenced by the judgment of people perceived as influential, such as a parent, spouse, friend, or mentor; and perceived behavioral control (PBC) is the perception of how easy or difficult it is to perform a certain behavior. PBC is determined by the presence of factors that can facilitate or hinder a person's ability to perform the behavior. PBC is conceptually related to self-efficacy as developed by Bandura in social cognitive theory. TPB is one of the theories of behavior with high predictive power and is used to predict human behavior in many areas. Studies often use this theory in marketing (buying behavior, advertising, public relations), in behavior in new environments such as online, and in new issues such as eco-friendly products, health (public education), and entrepreneurial behavior. This study analyzes the influence of intention on online purchasing behavior, so TPB is a very important theory as the basis of this research. Based on the explanation above, the hypotheses are formulated as follows:
H1: Leadership positively and significantly affects employees' working spirit at Prama Sanur Beach Hotel.
H2: The work environment has a positive and significant effect on employee morale at Prama Sanur Beach Hotel.
H3: Leadership positively and significantly affects employee performance at Prama Sanur Beach Hotel.
H4: The work environment has a positive and significant impact on the performance of Prama Sanur Beach Hotel's employees.
H5: The spirit of work has a positive and significant impact on the performance of employees at Prama Sanur Beach Hotel.
H6: The spirit of work mediates the influence of leadership on the performance of employees at Prama Sanur Beach Hotel.
H7: The spirit of work mediates the influence of the work environment on the performance of employees at Prama Sanur Beach Hotel.
Location and Object of Research
The research was conducted at Prama Sanur Beach Hotel, a member of Aerowisata Hotels and Resorts established in 1974. This 5-star hotel is very environmentally friendly, with warm and always-smiling staff. The hotel is equipped with a variety of unique restaurants and bars, contemporary facilities, and activities for the whole family. It consists of more than 426 rooms spread over 7 hectares of lush tropical gardens, next to a sandy beach with spectacular scenery, located on Danau Tamblingan Street, Sanur.
Population
The population is a generalization area consisting of objects that have certain qualities and characteristics set by the researcher to be studied, from which conclusions are then drawn. The population in this study is all employees of Prama Sanur Beach Hotel, which amounts to 417 people.
Sample
The sample is part of the number and characteristics possessed by the population. If the population is large and the researcher is unlikely to study everything in it, for example due to limited funds, manpower, and time, the researcher can use a sample taken from that population. To determine the sample size from the population, this study used the Slovin method (Umar, 2013); a minimal sample-size calculation under this formula is sketched after the leadership indicators below.
Variable Identification
Based on the issues described before, the variables to be analyzed can be identified as follows. An endogenous variable is a variable that is influenced by other variables in the model; in this research, the dependent variable is employee performance (Y2). An intervening variable, according to Tuckman, is a variable that theoretically affects the relationship between the independent variable and the dependent variable, making it an indirect relationship; in this research, the intervening variable is working spirit (Y1). An exogenous variable is a variable that affects or has an influence on other variables in the model; in this study, the independent variables are leadership (X1) and work environment (X2).
Operational Definition of Variables
In order to clarify the variables analyzed, it is necessary to explain the operational definitions for each variable. The definition of operational variables is a step in linking theoretical concepts with empirical studies so that mistakes in interpreting the analyzed variables do not occur. The following describes the operational definitions of the leadership, work environment, working spirit, and performance variables.
Leadership
Leadership is an activity to influence the behavior of others, or the art of affecting human behavior, both of individuals and of groups. This study measures the leadership of Prama Sanur Beach Hotel based on the following indicators: Telling, the leader's ability to tell employees what to do; Selling, the leader's ability to sell or give ideas to employees; Participating, the leader's ability to participate with members or employees; and Delegating, the ability to delegate tasks and authority to employees.
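As referenced above, Slovin's formula gives the minimum sample size as n = N / (1 + N·e²), where N is the population size and e the tolerated margin of error. A minimal sketch follows; N = 417 comes from the Population subsection, while the 5% margin of error is an assumption, since the excerpt does not state which e was used.

```python
import math

# Slovin's formula for sample size: n = N / (1 + N * e^2).
# N = 417 is the stated population; e = 0.05 is an assumed margin of error.

def slovin_sample_size(population: int, margin_of_error: float) -> int:
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)  # round up so the sample is at least as large as required

print(slovin_sample_size(417, 0.05))  # -> 205 respondents under these assumptions
```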
Work Environment
According to Nitisemito, the work environment is everything that is around employees and can affect them in carrying out the tasks assigned to them, for example the presence of air conditioning (AC), adequate lighting, and so forth. This study measures the work environment with the following indicators:
a) Lighting in the workplace
b) Temperature in the workplace
c) Good humidity and air circulation
d) Air circulation in the workplace
e) Cooperation among departments in the workplace
f) Cleanliness at work
g) Decoration at work
h) Security at work
Spirit of Work
The spirit of work is doing the job more actively so that the work can be expected to be completed faster and better. The indicators of the employee working spirit variable used in this study are: willingness to work together; obedience to the provisions of work implementation; and timeliness in completing the work.
Performance
Performance is the work achievement or output, in both quality and quantity, achieved by human resources per unit time period in carrying out work duties in accordance with the responsibilities given. The performance indicators used in this study are based on the following general elements of employee performance:
a) The quantity of results, measured from the employee's perception of the number of activities assigned and their results.
b) The quality of results, measured by employee perceptions of the quality of work produced and the perfection of tasks relative to the skills and abilities of employees.
c) Timeliness of results, measured from employee perceptions of an activity completed from the beginning until it becomes output, finishing at the predefined time and maximizing the time available.
d) Attendance; the attendance rate of employees within the company can determine employee performance.
e) Ability to work together, measured from the ability of employees to cooperate with colleagues and the environment.
Procedure of Data Collection
This data collection is supported through interview and observation processes, which are then analyzed directly with statistical tools on the answers obtained from the respondents, collected based on the employees' assessments of, or answers to, each question in the questionnaire submitted. In addition to the primary data above, this study also utilizes existing data at Prama Sanur Beach Hotel in the form of secondary data, which is relevant as a support for data accuracy.
Methods of Data Collection
To obtain the data required in this study, the following data collection techniques were used. Observation: data collection by conducting direct observation of leadership, work discipline, and the work environment in relation to employee performance. Interview: data collection by means of direct interviews with the leadership of the company or with employees in the company to obtain information relating to the research. Documentation: collection by reading and recording documents in the company related to what is discussed in this study. Questionnaire: a method of collecting data by giving questions that have been prepared in advance in the form of questionnaires, whose answers are filled in directly by the respondents.
Instruments of Data Collection
The research instrument used to collect data in this research was a questionnaire distributed to employees of Prama Sanur Beach Hotel.
The questionnaire distributed to the respondents contains a number of written questions about the items of the research variables, to obtain perceptions of the research variables; the answers are measured using a Likert scale, where respondents are asked to state their perceptions by choosing one of the alternative answers, which are weighted as follows:
SD (Strongly Disagree), scored 1
D (Disagree), scored 2
LA (Less Agree), scored 3
A (Agree), scored 4
SA (Strongly Agree), scored 5
Methods of Data Analysis
Descriptive Analysis
Descriptive analysis serves to describe or give an idea of the object under study through sample or population data as it is, without making inferences or drawing conclusions that apply to the public. Descriptive analysis is intended to determine the characteristics of the respondents and their responses to the question items on the questionnaire. It also describes the variables in the research, such as leadership, work environment, employees' working spirit, and the performance of employees of Prama Sanur Beach Hotel.
Inferential Analysis
Partial Least Squares (PLS) is a more appropriate approach for predictive purposes, especially in conditions where indicators are formative. With the latent variable being a linear combination of its indicators, the predicted value of the latent variable can be easily obtained, so prediction of the latent variables it influences can also be easily done. In PLS, the structural model of the relationships between latent variables is called the inner model, whereas the measurement model (reflexive or formative) is called the outer model. The structural or inner model is evaluated by looking at the percentage of variance explained, that is, by looking at R² for the latent dependent constructs, using the Stone-Geisser Q-square test, and by looking at the magnitude of the structural path coefficients. The stability of these estimates is evaluated using the t-statistic obtained via the bootstrapping procedure. In contrast, in Structural Equation Modeling (SEM) the indicators are reflexive, so a change in the value of an indicator makes it very difficult to trace changes in the value of the latent variable, which makes prediction difficult.
RESULT AND DISCUSSION
The Influence of Leadership on the Performance of Employees at Prama Sanur Beach Hotel
Based on the test results, leadership has a positive but insignificant effect on employee performance. This result means that, looking at the description of the respondents' answers, good leadership is not followed by maximum performance. This insignificant influence can be explained by the characteristics of the respondents: the majority of respondents have worked for over ten years. The result of this analysis is consistent with prior research stating that there is a significant relationship between leadership initiatives to improve leadership style and their impact on employee performance. This finding is generally not aligned with previous research findings, e.g., (Makena, 2017) and (Eni, 2017), where leadership was found to be important in impacting employee performance.
The Influence of Leadership on Employee Spirit at Prama Sanur Beach Hotel
Based on the test results, leadership has a positive and significant effect on working spirit.
This result means that good leadership supports employees' working spirit. The result of this analysis is in accordance with the view that one of the factors affecting working spirit is leadership. Leadership has an important role in determining employees' working spirit, because the leader becomes an example for subordinates; with a good leadership example, the spirit of work will be good as well. This result is generally in line with previous research findings, e.g., (Mahajaya & Subudi, 2016), where leadership was found to be important in impacting employees' working spirit.
The Effect of the Work Environment on the Performance of Employees at Prama Sanur Beach Hotel
Based on the test results, the work environment has a positive and significant effect on performance. This result means that a good working environment supports employee performance. This is in accordance with the view that the performance of employees in an organization is strongly influenced by the work environment: if the surrounding environment is neglected, performance will certainly decrease, so a supportive work environment is needed to achieve high performance. This result is generally in line with previous research findings, e.g., (Mahajaya & Subudi, 2016), where the work environment was found to be important in impacting employee performance.
The Influence of the Work Environment on the Working Spirit of Employees at Prama Sanur Beach Hotel
Based on the test results, the work environment has a positive and significant effect on working spirit. This result means that a good working environment supports employees' working spirit. This is in accordance with the view that the work environment can affect working spirit; with a supportive work environment, employees are expected to continue striving to improve their working spirit. These results are generally consistent with previous research findings, e.g., (Permana, 2015) and (Timpe, 1992), which explained that the work environment has an important impact on employees' working spirit.
The Influence of Working Spirit on the Performance of Employees at Prama Sanur Beach Hotel
Based on the test results, working spirit has a positive but insignificant effect on performance. This result means that, looking at the description of the respondents' answers, good working spirit is not followed by maximum performance. This insignificant influence can be explained by the characteristics of the respondents: the majority of respondents have worked for over ten years, and although their working spirit is lacking, their performance is very good.
The results of this analysis are in accordance with the view that working spirit is an important operative function of human resources management, because the better the employees' working spirit in a company, the higher the work achievement that can be reached. Therefore, working spirit is an important means to achieve goals, and coaching on working spirit is a very important part of management. This finding is generally not in line with previous research findings, e.g., (Mahajaya & Subudi, 2016), which explained that the spirit of work has an important impact on employee performance.
The Role of Working Spirit in Mediating the Influence of Leadership on Employee Performance at Prama Sanur Beach Hotel
Based on the test results, working spirit does not mediate the relationship between leadership and performance: the direct relationship of leadership to performance is not significant, and the indirect path through working spirit is likewise insignificant (Solimun, 2011). This shows that working spirit is not able to explain the influence of leadership on employee performance.
The Role of Working Spirit in Mediating the Influence of the Work Environment on Employee Performance at Prama Sanur Beach Hotel
Based on the test results, working spirit does not mediate the relationship between the work environment and performance, because the coefficient of the indirect relationship through working spirit is not significant (Hair et al., 2010). This shows that working spirit is not able to explain the influence of the work environment on employee performance.
CONCLUSION
Based on the results and discussion above, the conclusions of this study are as follows. Leadership has a positive but insignificant influence on the performance of Prama Sanur Beach Hotel's employees; although leadership improves, the improvement in employee performance is not significant. Leadership has a positive and significant influence on the working spirit of Prama Sanur Beach Hotel's employees, meaning that good leadership strongly supports employee spirit. The work environment has a positive and significant influence on both the performance and the working spirit of the employees, meaning that a good working environment strongly supports both. The spirit of work has a positive but insignificant effect on employee performance; although spirit at work improves, the improvement in performance is not significant. The spirit of work does not mediate between leadership and performance, because the direct relationship of leadership to performance is insignificant and the indirect path through working spirit is also insignificant. Likewise, the spirit of work does not mediate between the work environment and performance, because the coefficient of the indirect relationship is not significant. The significance of these direct and indirect paths is judged via bootstrapped t-statistics, as sketched below.
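The paper's significance judgments rest on bootstrapped t-statistics for the PLS path coefficients. The sketch below illustrates that procedure under stated assumptions: paths are reduced to simple OLS slopes, the data are synthetic, and the variable names merely mirror the study's constructs; it is not the paper's actual estimation code.

```python
import numpy as np

# Resample respondents, re-estimate the path coefficients, and form a
# t-statistic from the bootstrap standard error, here for the indirect
# (mediation) path leadership -> spirit -> performance.

rng = np.random.default_rng(0)
n = 205  # illustrative sample size
leadership = rng.normal(size=n)
spirit = 0.5 * leadership + rng.normal(size=n)   # synthetic relations only
performance = 0.1 * spirit + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y on x (a stand-in for a PLS path coefficient)."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

indirect_hat = slope(leadership, spirit) * slope(spirit, performance)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)             # resample respondents
    a = slope(leadership[idx], spirit[idx])
    b = slope(spirit[idx], performance[idx])
    boot.append(a * b)

t_stat = indirect_hat / np.std(boot, ddof=1)
print(f"indirect effect = {indirect_hat:.3f}, bootstrap t = {t_stat:.2f}")
# |t| < 1.96 would be read as a non-significant mediation path, matching the
# paper's conclusion that working spirit does not mediate the effects.
```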
REFERENCES
Timpe, A. D. (1992). The Art and Science of Business Management Performance. Mumbai: Jaico Publishing House.
Umar, H. (2013). Desain Penelitian MSDM dan Perilaku Karyawan. Jakarta: PT Raja Grafindo Persada.
A Systematic Review on Federated Learning in Medical Image Analysis
Federated Learning (FL) has attracted considerable attention from academic and industrial stakeholders since its invention. The eye-catching feature of FL is that it handles data in a decentralized manner, creating a privacy-preserving environment for Artificial Intelligence (AI) applications. Medical data includes sensitive private information about patients, which demands strong protection against disclosure to unintended destinations. In this paper, we performed a Systematic Literature Review (SLR) of published research articles on FL-based medical image analysis. We first collected articles from different databases following the PRISMA guidelines, then synthesized data from the selected articles, and finally provide a comprehensive overview of the topic. To do so, we extracted from the articles the core information associated with implementing FL in medical imaging. In our findings we briefly present the characteristics of the federated data and models, the performance achieved by the models, and, exclusively, a comparison of results with traditional ML models. In addition, we discuss the open issues and challenges of implementing FL and give our recommendations for the future direction of this particular research field. We believe this SLR successfully summarizes the state-of-the-art FL methods for medical image analysis using deep learning.
I. INTRODUCTION
Image processing and image analysis are different tasks that often depend on each other when classifying image data. The history of image processing goes back at least to 1973, when an image of the Swedish model Lena became one of the first standard test images. Since then, image processing has been applied in dozens of research fields, medical imaging among them. An image is essentially a two-dimensional signal (vertical and horizontal) sampled into a grid of pixels [1]. Different types of images have different pixel parameters, and during analysis these parameters help extract the relevant information from the image. The task of the analysis part, in turn, is to understand the processed images through techniques such as Machine Learning (ML), which comprises a range of algorithms. In the beginning, classical ML algorithms (e.g., SVM, naive Bayes, decision trees) were used broadly in image processing research. Later, the field turned to neural-network-based modeling with the introduction of deep learning, which is now an integral part of any image analysis task, including medical imaging. The use of medical imaging for diagnostics increases worldwide every year. The image data mainly comprises various radiological images such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), as well as ophthalmology images and so on. Besides these, images of the eye, skin, and cells make significant contributions to clinical imaging for detecting, diagnosing, and treating diseases [2]. It is becoming increasingly important for medical images taken by different devices to be sent from one system to another, which requires a computer network. A large collection of such images forms a dataset, typically located and processed on cloud servers under an ML approach.
In the era of AI, collaborative learning, that is, sharing data among different institutions and multiple sources, can be very efficient for building robust AI models. Since models are trained at centralized individual locations in traditional ML, collaboration between models is quite difficult. At the same time, X-rays, CT, and MRI scans are all personal data pertaining to individual patients, which must be protected against the risk of this medical information being disclosed or revealed to any unauthorized third party. In addition, even where data sharing is possible, data storage, processing, and analysis remain difficult tasks in a centralized setting. In such a scenario, data encryption and decryption could be a potential solution for exchanging information between participants; however, the process can be complex, time consuming, and unsustainable [3]. So, instead of bringing the data to the location where the model is trained, why not bring the model to the data (the institutions and hospitals) and train directly there, in-house? This allows collaborative learning without centralizing the dataset itself, and is called FL. It was first introduced in 2016 [4] and has gained a lot of attention for the healthcare domain within the last couple of years. It addresses the privacy and data protection concerns that are currently an important problem in developing medical AI. In FL, participants train models locally and estimate the parameters of their respective models, then share the parameters with a centralized server that aggregates them. The focus is therefore not on which data are used or which algorithms are trained; the concept is managing the data in a different way, one in which data privacy is preserved.
A. OBJECTIVE AND CONTRIBUTION
Since medical images are sensitive data, they need to be protected, preserving users' rights over their personal information. As discussed, FL arrived to solve the data privacy issue in collaborative ML, and within a short time the concept has been applied in different fields, including medical imaging. Many articles have already been published on FL-oriented medical image analysis, successfully applying this distinctive data management technique. At this stage, it is time to look back and review what has been done so far and what the impacts of FL on medical imaging are. Some SLRs have been published on the topic, but they covered healthcare applications overall rather than the medical image analysis context in particular. An SLR presented in [5] considered all articles that used any form of medical data to train FL models. Similarly, the authors of [3], [9], and [6] surveyed and reviewed papers across the whole healthcare area. Some review articles covered specific medical domains; for example, Naeem et al. [10] worked particularly on brain tumor diagnosis using MRI images. Since FL is a comparatively new concept, most review articles emphasize design and implementation. Secondly, they discuss the privacy or security opportunity, which is the fundamental characteristic of FL. Some of them ([5], [7], and [10]) were formulated around different research questions; a common question concerned the state-of-the-art FL methods, and data properties, impact, gaps, and future research were also investigated. Alongside these, several survey articles have been published on FL for healthcare informatics.
Xu et al. [11] surveyed papers focusing on FL in the biomedical area; their effort was to summarize the privacy, statistical, and system challenges that exist in this specific domain. A well-known article in this field is [12], where the authors discussed the prime factors related to FL in digital health, together with challenges and solutions. This study is an SLR in which we exclusively investigated FL in medical image analysis and extensively examined every component of the considered articles, especially the performance analysis and the comparison with conventional ML, which is the main distinction of our study from previously published review papers. Our study is built around several research questions, and by answering them we illustrate the current research landscape in the field of medical image processing using FL. In addition, several observations are discussed according to the findings extracted from the literature. Table 1 shows a comparative analysis of our contribution against related review articles: our study explores the demographic data, FL architecture, privacy-preserving concerns, federated data management, and performance of FL models. We did not find any article that worked particularly on medical images. Consequently, this study can serve as an outline for future research on FL applications in medical imaging. The key contributions of our paper are the following:
• We survey the insights of FL solely in medical image research in a systematic way.
• We present the latest implementations, advancements, and tendencies in medical image analysis research using FL from different aspects.
• We present and compare the performance of the different FL architectures used in the reviewed articles with traditional ML models, which is the first comparison of its kind.
• For incoming contributors, we discuss open issues, challenges, and the future direction of the research field.
The rest of the article is structured in six sections. The basic FL concept is introduced in Section II. Section III describes the procedures of this review. The results of the investigation are presented through the different research questions in Section IV. Open issues and challenges are discussed in Section V, and Section VI covers the limitations of this study. Lastly, the conclusion and future directions are provided in Section VII.
II. FEDERATED LEARNING
In this section we give an overview of the FL architecture. The concept of FL is not directly tied to particular ML components; it is a data management process that lets multiple clients collaborate in a privacy-preserving manner without exchanging raw data. For a practical example, suppose a hospital produces some data and also has a model and some computing resources, and would like to tackle a specific problem with an AI system. However, the dataset at the institution is not sufficient to train a model able to address this problem. Another hospital dealing with a similar difficulty wants to collaborate: they have a common goal and can solve a common task. Both hospitals hold different data locally and need to benefit from each other's data without sharing the data directly. This collaborative model training without sharing the data is exactly the purpose of FL. In Fig. 1 we present FL on the left and the traditional ML framework on the right to illustrate the fundamentals of both in a hospital environment.
In association with that, as supplementary information, we list the necessary keywords and their explanations related to decentralized FL implementation in Table 2. Since FL involves multiple sources of data, we show four clients in the figure. Each client has a few common duties: it collects data from its hospital, trains a local ML model on that data, and estimates the model parameters. These parameters, not the data itself, are sent from every client to the central server. Once the central server has received all the local models' parameters, it aggregates them by taking a weighted average; the result is known as the global model and is sent back to all of the clients. This completes one learning round, which is then repeated. A well-known federated averaging algorithm is FedAvg [13], proposed by Google in 2016, which calculates a weighted average over the individual clients. The data quantity is not expected to be the same across clients: sources with larger datasets contribute correspondingly larger weighted losses, and the individual clients' losses are combined into an overall global loss, the weighted average. Under FedAvg, every client trains a model for a defined number of epochs using the Stochastic Gradient Descent (SGD) algorithm and transmits the learned parameters to the central server, which performs aggregation in the form of an average. The mathematical formulation of FedAvg is

$$\min_{w} F(w), \qquad F(w) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(w), \qquad n = \sum_{k=1}^{K} n_k,$$

where there are $K$ clients, each client $k$ has its own local loss function $F_k(w)$ computed on its private data, and each loss is weighted by the size $n_k$ of that client's dataset. Hence the overall objective is to minimize a global loss that is a weighted combination of local losses; the local losses are computed on private data that is never shared, and only model updates are exchanged. Apart from FedAvg, there are many other research directions and varieties of FL, such as SecAgg [54]. Though different combinations exist in FL implementations, two characteristics are expected to be maintained: the datasets are distributed and remain local rather than centralized, and a collaborative model works towards a common goal.
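As an illustration of the weighted aggregation just described, here is a minimal FedAvg sketch in NumPy. It is not taken from any of the reviewed implementations; the client datasets, the local training step, and the parameter shapes are hypothetical placeholders, and only the dataset-size-weighted averaging of the equation above is shown faithfully.

```python
import numpy as np

def local_train(weights, client_data, epochs=1, lr=0.01):
    # Placeholder for E epochs of SGD on the client's private data.
    # A real client would compute gradients of its local loss F_k(w);
    # here we just perturb the weights to stand in for local updates.
    rng = np.random.default_rng(0)
    return weights - lr * rng.normal(size=weights.shape)

def fedavg_round(global_weights, clients):
    """One FedAvg round: local training, then n_k/n weighted averaging.

    clients: list of (client_data, n_k) pairs; raw data never leaves a client,
    only the locally updated weights are returned to the server.
    """
    n_total = sum(n_k for _, n_k in clients)
    aggregate = np.zeros_like(global_weights)
    for client_data, n_k in clients:
        local_weights = local_train(global_weights.copy(), client_data)
        aggregate += (n_k / n_total) * local_weights  # weight by dataset size
    return aggregate  # new global model, broadcast back to all clients

# Hypothetical federation: four hospital clients with unequal dataset sizes.
w = np.zeros(10)
clients = [(None, 1200), (None, 800), (None, 300), (None, 150)]
for _ in range(5):  # five communication rounds
    w = fedavg_round(w, clients)
```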
III. RESEARCH METHOD
Several types of review articles are available in the literature for deeper research, such as narrative reviews and systematic reviews; as mentioned at the beginning, a systematic review was conducted for our investigation. Two SLR methods are popular in practice: PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and Kitchenham's guidelines, the second of which is mainly used in computer science and software engineering research [14]. To conduct this review we followed the PRISMA procedure, which is the most common way of performing an SLR in the healthcare sector [6]. For an SLR, we first need to identify relevant articles that focus on a very specific research area and question(s), second appraise the quality of the studies performed and the strength of the evidence in the papers, and lastly synthesize the findings to draw the respective conclusions. Fig. 2 shows, in sequence, all of the steps taken to conduct this review.
A. RESEARCH QUESTION
The first step of this review was to establish a group of questions that would describe the literature in the most effective way. Table 3 shows the five contexts and their associated 12 research questions. The first context is the overview, covering the application and problem solved by FL; next, a broad explanation of the datasets is presented; third, the ML framework; then the implementation of FL, including the privacy method and types of FL; and lastly the experimental substance, especially the performance comparison.
B. SEARCH PROCESS
Since FL was first presented in 2016, the search was limited to the period from 1 January 2017 to 30 June 2022. We covered all of the common databases considered by previous researchers: Science Direct, IEEE Xplore digital library, Springer Link, Wiley Online Library, SPIE digital library, ACM digital library, Multidisciplinary Digital Publishing Institute (MDPI), Nature Portfolio, Taylor & Francis, and Google Scholar. The search criteria differ across platforms, so we used the advanced options of each database to search for articles with Boolean ''AND'' and ''OR'' expressions. Our study focuses on the implementation of FL in healthcare image processing, so we carefully avoided other applications. The search phrases were matched against the titles, abstracts, and keywords in each of the databases. Fig. 3 depicts the PRISMA flow diagram, which presents the complete statistics of article consideration in this review. After the search operation, the initially collected articles went through a selection process, described in the next sections.
C. INCLUSION AND EXCLUSION CRITERIA
The literature search strategy is a major challenge when too many papers are found; these circumstances are handled by predefined inclusion and exclusion criteria in an SLR. This might include limiting the search to only certain types of studies. These processes ensure the task is completed properly, reduce the possibility of bias, and protect the selection process from irrelevant research documents. We applied the inclusion and exclusion criteria to the articles collected from the databases in order to reach exactly the material the readers are seeking. We emphasized the following points when including articles for the final analysis:
• The article studied medical image datasets.
• The ML model was developed within an FL environment.
• FL was the main focus of the findings (result analysis/comparison).
Since we performed keyword searches, articles were collected based on the words present in the paper, even if a word was mentioned only a single time. We therefore excluded articles that are not relevant and do not fit our scope, based on the following criteria:
• Articles that used private dataset(s) for the ML model.
• Studies not mainly focused on FL and medical image data.
• Hybridizations or modifications of the FL theme, e.g., federated reinforcement learning.
• Abstracts, short articles, pre-prints, and books or book chapters.
• Articles without a clear presentation of results using ML-based performance measures (e.g., [85], [86]).
The operation of the inclusion and exclusion criteria can be observed in Fig. 3. The number of initially collected articles from the different databases was 161. After removing duplicates, 138 articles were taken to the next steps. We then screened the articles twice under two different conditions: first we examined the title and abstract, which removed 96 articles; then we investigated the full text of the remaining 42, where another 25 papers were disqualified.
Finally, we selected 17 of the 161 articles for our review.
D. DATA EXTRACTION
Data collection was largely driven by the research questions of our study; we extracted information so as to cover the questions completely. First we created a spreadsheet with the respective information headers at the top. We worked through the 17 articles individually, each time gathering all of the information distinctly in the spreadsheet, and these entries were used as our findings. The following data were extracted from every article:
1) Document title, publication year, and journal/conference name.
2) Datasets used and their federated settings.
3) The security or privacy protocol used for FL.
4) The algorithms used to train the ML models.
5) Performance of the FL model.
IV. RESULTS
We organized this section around the research questions described in Section III. In the upcoming subsections, we first present the demographic analysis (also known as numerical analysis) along with the key contributions and limitations of each reference work in Table 4, and thereafter we answer the 12 questions in turn.
RQ1 What are possible applications of FL?
We found applications of FL in different research fields such as Diabetic Retinopathy (DR), MRI classification, cancer, pneumonia, and COVID-19 detection, among others. These topics are popular in medical image processing research with conventional ML; FL, in turn, creates new scope for research owing to its privacy-protection efficiency, which is essential for this kind of imaging research. In 2019, the coronavirus disease hit the whole world and created a crisis around the identification of COVID-19 samples. The RT-PCR test is the most reliable diagnosis method for the disease, but because of inadequate testing kits and some technical limitations, researchers explored alternative ways of COVID screening. Hundreds of ML-based, automated, and time-saving COVID-19 detection models were therefore presented within two years [33]. ML-based COVID analysis is mostly carried out on radiological chest images, i.e., X-ray and CT images. Among these contributions, FL was also discussed and several detection models were implemented, as data privacy was a big concern. In this study, we found that six articles out of 17 worked specifically on COVID-19 detection. Feki et al. [18] proposed a collaborative FL scheme for COVID-19 screening from chest X-ray images, in which multiple medical institutions cooperated without sharing their data. Similarly, Zhang et al. [24] and Yan et al. [29] used X-ray and CT image data with different Convolutional Neural Network (CNN) architectures in FL settings. References [21], [25], and [32] also contributed to COVID-19 infection detection in a multinational way. During the pandemic, however, such artificial intelligence tools were not used clinically to a significant degree for diagnosing COVID-19; all were experimental, and the contributions will hopefully help future initiatives. Millions of patients suffer from fatal diseases worldwide, and cancer is at the top of the list. Researchers have shown that early detection of cancer can save a large number of lives [34]. Consequently, deep learning has emerged as a potential tool for early cancer detection with the help of medical images: it extracts features from the raw images and provides decisions regarding cancer detection with notable performance. As part of the ML toolkit, FL has been considered in several cancer diagnosis techniques.
Fig. 4 shows that 29.4% of the articles in this review (five out of 17) concern cancer detection. Researcher Połap and their team published three research papers [17], [19], [22], all focused on skin cancer detection in an FL environment; they used seven different skin mark classes to train the detection models and successfully implemented privacy-protected FL. Moreover, Hashmani et al. [30] applied FL to a series of dermoscopy images to classify nine different skin diseases. Cancers of important internal organs of the human body, such as lung and breast cancer, are nowadays among the leading causes of cancer death. An FL-oriented lung cancer detection model was proposed by Adnan et al. [28], who demonstrated that their model achieved acceptable performance with a decentralized data configuration. Another domain is Diabetic Retinopathy (DR) analysis. Diabetes is a chronic disease that affects millions of people globally, and uncontrolled diabetes can lead to serious damage to the body's systems, including the eyes. DR is a common diabetic eye disease and the number one cause of vision loss and blindness in the world; it occurs when diabetes damages the small blood vessels of the retina. In a primary care clinic, retinal images can be transmitted to an eye care specialist who investigates the image and then provides a consultation; these days, however, deep learning algorithms can detect DR within seconds with high accuracy. Lo et al. [16] analyzed retinal images to classify DR-positive and non-DR samples using the FL approach. In another article, Zhou et al. [31] introduced an FL framework that classifies the five severity categories of DR, 0 to 4 (No DR to Proliferative DR). Linardos et al. [26] considered FL for diagnosing Hypertrophic Cardiomyopathy (HCM), i.e., whether subjects suffer from HCM or are normal. In addition, a multilabel cardiac disease classification was proposed by Chakravarty et al. [23], in which 14 classes were examined. Other applications include Autism Spectrum Disorder (ASD) detection: Li et al. [20] applied deep learning in an FL environment to classify MRI images, and their model identified ASD using MRI analysis. We also found FL used in pneumonia detection; Kaissis et al. [27] proposed a model able to detect different pneumonia samples.
RQ2 What problems were solved?
Almost all of the articles considered in our investigation solved one universal problem: ensuring the security of private data. Data is always a key factor when training an ML model, and it is a challenge to protect the data from potential security and privacy threats. These threats are even more crucial in Electronic Health Record (EHR) data analysis: shared EHR data includes patients' private information, and above all their identity could be at risk of public exposure. Similarly, in medical image analysis, maintaining the privacy of users' data such as X-ray, CT, and MRI images is difficult with a traditional ML layout. FL is a privacy-preserving way of training AI algorithms: it moves the model to the data rather than moving the data to the model, which makes it very useful in cases where sensitive data cannot be shared. Researchers have long worked on the application domains discussed in the previous section, and they have now applied the same phenomena with privacy-preserving FL in their experimental research.
B. DATASET
RQ3 What type of dataset was used?
We divided the datasets used in the 17 investigated articles into several categories based on image type. The data type varies from model to model and depends on the domain to which the model is applied; for example, skin images are used to detect skin cancer. Fig. 5 displays eight different types of medical images collected from the various datasets used in this research field.
Lung X-ray image: As mentioned, severe cases of some diseases affect particular organs of the body, the lung being one of them. The literature shows that COVID-19 and pneumonia complications include lung damage, which is the reason lung X-ray and CT images are used in detection models for these diseases. Likewise, smoking is dangerous to health, particularly affects the lungs, and is a key cause of lung cancer. According to our investigation, six of the 17 articles [18], [23], [25], [27], [29], [32] used chest X-ray images. The X-ray datasets considered in these articles are Cohen JP, TB x-ray, CheXpert, Mendeley data, COVIDx, Chest X-ray (CXR), and the COVID 2019 dataset. Moreover, Zhang et al. [24] proposed an FL-oriented COVID-19 detection model in which chest X-ray and CT images were taken from three datasets: Qatar-Dhaka data, COVID-CT, and the Figure 1 dataset.
Skin image data: MNIST: HAM10000 is one of the leading datasets used in skin cancer detection research with deep learning techniques; the repository contains 10,015 dermatoscopic images divided into seven different classes. Połap and colleagues used the dataset in a series of articles [17], [19], [22] in an FL environment, and similar data was used in [30].
Tissue and MRI image data: Adnan et al. [28] used tissue image data; specifically, they proposed a privacy-guaranteed ML model in which lung tissue images were used to classify cancer. In addition, Li et al. [20] and Linardos et al. [26] both used MRI images for their models, brain and heart MRI data respectively. All of the dataset names are included with their references for easy access in Table 5.
RQ4 Are the numbers of data samples sufficient?
It is well established in ML research that the more data we have for training, the better the predictions we obtain from the models; likewise, the chance of overfitting increases with a smaller dataset, so it is always advisable to use a larger one. For our study, we grouped the 17 articles by the range of data sample sizes used. First, consider the articles that used fewer than 1,000 samples. As Table 5 shows, four papers, [16], [18], [20], and [26], used very small amounts of data: 153, 216, 370, and 180 samples respectively. Since a larger dataset offers better potential for in-depth analysis, 153 samples are not technically sound. Next, within the 10,000-sample range, seven papers ([21], [24], [25], [27], [28], [31], and [32]) used between 2,109 and 6,284 samples, which is quite good. Finally, six papers ([17], [19], [22], [23], [29], [30]) each used more than 10,000 images. CheXpert, the largest dataset found in our investigation (223,414 CXR images overall), was used by Chakravarty et al. [23].
RQ5 Are non-IID data distributions considered?
Two forms of federated framework exist according to the data distribution: IID and non-IID. IID refers to independent and identically distributed.
This can be split into two parts, independence and identical distribution. Independence means that the value of one example does not affect the value of another. This scenario is commonly described by a coin-flipping experiment: when a coin is flipped repeatedly, the result of each flip does not depend on the previous ones. Identically distributed means that the probability of any specific outcome is the same each time; for example, every coin flip has a 50% chance of heads and a 50% chance of tails, and those probabilities do not change from flip to flip. Non-IID is technically the inverse on both counts. While under IID the data feature distribution is the same across clients, under non-IID the feature distributions differ. The problem is quite common in real life; for example, the appearance of medical image samples taken with different machines across different hospitals may not align because of differing imaging protocols. Non-IID data settings therefore mean that values depend on one another and that overall trends run between them. Generally in FL, local models are trained independently, the data distributions are hidden from one another, and as a result data types and features can vary from client to client [6], [53]; this variation makes the consideration of non-IID data important in FL research. In this study we investigated FL as used in medical image analysis and observed that the FL data structure becomes complicated, especially when the local clients' data differ significantly from each other. Our results show that only four papers considered the non-IID type alongside IID data (we did not find a sufficient explanation in [26] and [21]), while the remaining 13 did not address the matter. In [18], Feki et al. divided the collected dataset into four parts for the clients' data; for IID, they used an equal number of images per client and per class, while for non-IID data they allocated the samples among classes unequally, at a ratio of 66% and 44%. Likewise, Adnan et al. [28] performed FL with IID and non-IID data individually, where under the non-IID scenario the number of samples differed in each client. A sketch of the two partitioning schemes is given below.
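To make the distinction concrete, here is a minimal sketch, not drawn from any of the reviewed papers, of how a labeled image dataset might be partitioned across clients in an IID versus a non-IID fashion; the label array and client count are hypothetical, and the Dirichlet skew is a common simulation device rather than the scheme of [18] or [28].

```python
import numpy as np

rng = np.random.default_rng(42)
labels = rng.integers(0, 2, size=1000)  # hypothetical binary labels (e.g., COVID / normal)
n_clients = 4

# IID partition: shuffle, then give every client the same class mixture.
idx = rng.permutation(len(labels))
iid_parts = np.array_split(idx, n_clients)

# Non-IID partition: skew each client's class mixture by sampling per-class
# proportions from a Dirichlet distribution.
alpha = 0.5  # smaller alpha -> more skew between clients
non_iid_parts = [[] for _ in range(n_clients)]
for c in np.unique(labels):
    class_idx = rng.permutation(np.where(labels == c)[0])
    proportions = rng.dirichlet(alpha * np.ones(n_clients))
    cuts = (np.cumsum(proportions)[:-1] * len(class_idx)).astype(int)
    for part, chunk in zip(non_iid_parts, np.split(class_idx, cuts)):
        part.extend(chunk)

for i, part in enumerate(non_iid_parts):
    counts = np.bincount(labels[np.array(part)], minlength=2)
    print(f"client {i}: class counts = {counts}")  # uneven mixtures under non-IID
```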
RQ6 Which ML algorithms are used to train the local models?
Although FL is the main focus of this investigation, the ML techniques make the actual difference when it comes to the overall performance of the models. As usual in an FL framework, each client's data is trained on with ML algorithms. Our review is based on medical image data, and such image analysis or computer vision tasks are mostly conducted with CNN-oriented deep learning models. To answer the question we searched each of the considered articles and found a variety of established CNN models in use, such as VGG16, Inception, ResNet18, and many more. VGG16 is a widely used, reliable, pre-trained model; five of the 17 surveyed papers adopted it. The model consists of 16 layers: 13 convolutional and 3 fully connected. Likewise, VGG19 is a 19-layer CNN model and was used by Lo et al. [16]. Residual Networks (ResNet) are also commonly used and can be constructed with different numbers of layers, e.g., ResNet18 ([23], [26], [27], [29]), ResNet50 ([18], [24]), and ResNet101 ([24]). Other pre-trained CNN models are Inception ([17], [19], [22]) and AlexNet ([17], [19]). Besides these, CNN-based customised deep learning models were used in several articles, as listed in Table 6. Li et al. [20] used a multi-layer perceptron (MLP) classifier, a deep neural network constructed from input, hidden, and output layers. Adnan et al. [28] performed image segmentation using a supervised learning approach called Multiple-Instance Learning (MIL) to train the local models.
RQ7 Are any additional security methods implemented?
Data privacy and security are not the same in practice: privacy covers the use of data (control, access, and regulation), whereas security concerns the potential threats of unauthorized access and malicious attack. FL mainly addresses the privacy concern, since the stakeholders' trained models are shared instead of the data itself. Still, sharing models can be vulnerable while parameters are exchanged between clients and servers, and this is a possible threat to system security [28]. Several additional privacy-preserving methods are described in a systematic review article [83]. We found only a few articles that took additional security initiatives in FL-based medical imaging research. Most of them (three out of four) used Differential Privacy (DP), which allows organizations to collect information about their users without compromising any individual's privacy; the ultimate goal is to be able to share information about a dataset without revealing individuals' Personally Identifiable Information (PII) [9], [84]. Li et al. [20] used two different DP mechanisms, Gaussian and Laplace, defining a noise level α that varied from 0.001 to 1. Similarly, Kaissis et al. [27] applied both techniques, and Adnan et al. [28] used only Gaussian noise in their experiments. In addition, Połap et al. [17] used encryption and blockchain techniques to make their FL model more secure. They proposed three different learning agents, with the blockchain technique applied in the Data Management Agent (DMA). According to their description, all patient data (images) must have unique IDs: once an analysis request arrives, the system checks whether the ID exists in the database, and if not it creates a unique ID and a block on the blockchain, then transfers the ID to the database with the image. A small sketch of the DP noise mechanisms mentioned above follows.
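As an illustration of the Gaussian and Laplace mechanisms named in RQ7, the sketch below adds calibrated noise to a vector of model updates. It is a generic textbook-style example, not code from [20], [27], or [28], and the sensitivity and privacy parameters are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)

def laplace_mechanism(values, sensitivity, epsilon):
    # Laplace noise with scale sensitivity/epsilon gives epsilon-DP
    # for a query with the given L1 sensitivity.
    scale = sensitivity / epsilon
    return values + rng.laplace(0.0, scale, size=values.shape)

def gaussian_mechanism(values, sensitivity, epsilon, delta):
    # Classic analytic calibration: sigma >= sqrt(2 ln(1.25/delta)) * s / epsilon
    # gives (epsilon, delta)-DP for a query with L2 sensitivity s.
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / epsilon
    return values + rng.normal(0.0, sigma, size=values.shape)

# Hypothetical clipped model update from one client.
update = rng.normal(size=100)
update /= max(1.0, np.linalg.norm(update))  # clip to L2 norm 1 => sensitivity 1

noisy_lap = laplace_mechanism(update, sensitivity=1.0, epsilon=0.5)
noisy_gau = gaussian_mechanism(update, sensitivity=1.0, epsilon=0.5, delta=1e-5)
```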
RQ8 What types of federated data partitioning are used?
The previous literature mainly describes three categories of FL based on how the training data is distributed across the models. Among the three, Federated Transfer Learning (FTL) and Vertical FL (VFL) are rarely considered in medical research, while Horizontal FL (HFL) is used widely. In a horizontal partition, each client's database holds many different customers but collects the same type of data on them; in other words, ''same features, different samples''. In vertical FL, the parties serve overlapping customers but collect different features on them, i.e., ''different features, overlapping samples'' [3], [9], [84]. In this investigation we focused on medical image research and found that most of the articles were based on HFL. For example, Feki et al. [18] utilized HFL with a chest X-ray image dataset in which the features are the same for all clients but the samples differ. Interestingly, Kaissis et al. [27] used two different datasets for training and testing their FL models; both datasets contain X-ray images (same features) but different data. We classified only two articles as VFL: [24] took three datasets, two X-ray based and one CT based, and the authors combined both image types to train and test their models; in [25], the authors used X-ray and ultrasound images for their federated models. X-ray images differ technically from CT or ultrasound images, so their features also differ, and using these various data features in different clients creates a VFL scenario.
RQ9 What are the federated frameworks used?
Table 6 lists the respective deep learning architectures used for training the local models (discussed in RQ6) and, next to them, the federated framework, which is essentially the aggregation approach applied to the collected local models in the central server. We observed that federated mechanisms are executed in two ways: some articles were driven by previously proposed built-in FL algorithms, others by basic aggregation concepts. FedAvg (discussed in Section II) is a commonly used federated aggregation method; as Table 6 shows, six articles adopted it. Likewise, [20] and [27] used two other federated algorithms, named Fed and secure aggregation (SecAgg) respectively; SecAgg is a secure model aggregation scheme for FL, also proposed by Google in 2016. Połap and Woźniak [19] proposed a meta-heuristic-search-based federated model: they first calculated the average loss of all local models and then selected for server-side aggregation only the models that scored higher than the average loss. All of these pursue the fundamental concepts of FL but implement them in different ways. The federated aggregation process described above has no direct bearing on model performance; it is about engineering the data distribution in a decentralized and collaborative manner.
E. EXPERIMENTAL
RQ10 What are the performance measures used in the studies?
The final and crucial step of any ML setup is to assess how good the model is through performance evaluation. The basic idea is to develop an ML model on some training samples and test the trained model on other, unseen data. The training error is not very useful for actual evaluation, because it is easy to overfit the training data with complex models that do not generalize well to future samples. The testing error, by contrast, is the key metric, since it better approximates the true performance of the model on future samples. We therefore considered only testing performance throughout our review. As our investigation found, both classification and segmentation tasks were used, and their performance was accordingly evaluated in different ways. In Fig. 6 we present the number of articles using each performance metric. Most of the experiments (14 out of 17) were evaluated by accuracy. Recall was the second most commonly used criterion, considered by five articles; the Area Under the Curve (AUC) score was used three times and precision twice.
RQ11 How is the performance of the FL frameworks reported?
This question provides an overview of the performance achieved by the FL-based models in the 17 articles. Performance assessment is the ultimate part of any ML model, where the conducted experiment is evaluated with different metrics. Our investigation revealed that 14 articles worked on data classification (binary and multi-class), one article worked on data segmentation, and the remaining two considered both (listed in Table 4).
The performance of classification tasks is usually assessed by accuracy, which reports the proportion of correctly identified samples out of all the data [14]. We divided performance into three categories according to the accuracy achieved by the 17 studies: high (>= 90%), medium (80%-89%), and low (< 80%). Table 7 summarises the performance scores of all articles.
High: We found that eight articles reported an accuracy of 90% or more. Feki et al. [18] performed binary classification and obtained the highest accuracy scores: 94.4% for the FL+VGG16 model with data augmentation and 93.57% for FL+VGG16 alone. Połap and Woźniak [19] used the Inception classifier for their FL model and obtained an accuracy of 91%. The scores of [25], [30], and [24] are not precisely stated; they report accuracies between 90% and 95%. Articles [32] and [27] achieved accuracies of 90.61% and 90% respectively. Yan et al. [29] presented their results using sensitivity, with a highest score of 91.26%.
Medium: In [22], the authors classified images as diseased or not diseased, and their proposed VGG-based FL model achieved 89.82% accuracy. Lo et al. [16] performed both classification and segmentation on different datasets; the classification and segmentation accuracies for the SFU dataset were 88% and 85% respectively, and the classification accuracy on the OHSU dataset was 89%. In [26], Linardos et al. considered AUC, and the highest score achieved by their FL model was 89%. Adnan et al. [28] conducted binary classification with an accuracy of 85%.
RQ12 How does the FL approach perform compared to conventional models?
The last research question explores the comparative performance of FL versus traditional ML image processing research. This query matters when discussing the effect, contribution, and drawbacks of using FL in medical image analysis. To answer it, we intensively collected experimental results from both areas: the 17 FL articles and their relevant conventional counterparts. We have already described the performance of the FL models under the previous question; here we present the results of the usual ML models and then the comparative analysis. In Table 7 we summarize the performance of all articles in this review, placing the results of one or more similar articles opposite each one to form a comparison chart. To do so, we extensively investigated dozens of research papers that analyze medical images with traditional ML, looking for the best-matching options, which is essential for a reliable comparison. Several conditions based on structural and experimental similarity between the ML and FL papers were applied: we considered papers that used similar datasets, algorithms, and performance measures. We expect that maintaining these conditions ensures an accurate comparison between the two sides. As Table 7 shows, all of the ML models achieve better accuracy than their respective FL counterparts in the existing literature; more specifically, we found stronger ML results against every FL article. For instance, Połap et al. [17] achieved 70% accuracy with federated VGG16 and 67% with Inception, whereas on the ML side Jain et al. [56] achieved 79.23% accuracy with Inception and Liu et al. [57] 87% with ResNet50, all three using the MNIST: HAM10000 dataset. Similarly, Chakravarty et al. [23] report an AUC of 80% in an FL environment, but with the same dataset and ML algorithm, articles [67] and [68] report 86% and 87% AUC respectively. The metrics underlying these comparisons can be computed as in the sketch below.
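For readers reproducing such comparisons, here is a minimal sketch of computing the four metrics tallied in RQ10 (accuracy, recall, precision, and AUC) with scikit-learn; the predictions and labels are hypothetical stand-ins, not data from any reviewed study.

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, precision_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                            # hypothetical ground-truth labels
y_score = np.clip(y_true * 0.6 + rng.random(200) * 0.5, 0, 1)    # mock predicted probabilities
y_pred = (y_score >= 0.5).astype(int)                            # thresholded class predictions

print("accuracy :", accuracy_score(y_true, y_pred))   # share of correct predictions
print("recall   :", recall_score(y_true, y_pred))     # sensitivity, as reported in [29]
print("precision:", precision_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))   # uses scores, not hard labels
```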
V. OPEN ISSUES AND CHALLENGES
FL is still a young research field, so it is difficult to pass a verdict of rejection or acceptance. Here we discuss the issues and challenges found in the reviewed articles regarding the application of FL to medical images. FL was invented to address the privacy concerns around private data, but unfortunately it does not cover all potential privacy threats [93]. Below we describe the model performance, data heterogeneity, and federated model efficiency issues found in the review.
A. PRIVACY AND SECURITY
Medical image data embodies patients' personal information, and no one can share this data for AI applications without reliable data protection. FL enables data sharing between different institutions with some privacy guarantees through the advanced data management and model construction process described in Section II. FL differs from ML models in that the training process is exposed to multiple parties; we do not know the motive of every participant, so there is an issue of trust among them, and the additional communication increases the risk of data leakage via reverse engineering. We observed two further privacy measures used in federated medical image processing: differential privacy and secure aggregation. Differential privacy involves adding carefully calibrated noise to the outputs and can be performed either by the individual clients or at the server level; secure aggregation is a cryptographic technique (e.g., blockchain technology) ensuring that the server sees only the aggregate of many updates rather than individual model updates. The reality, though, is that every privacy mechanism imposes a significant computational cost on the federation.
B. DATA HETEROGENEITY
Our investigation shows that data heterogeneity can occur in two ways: the numbers of samples differ (non-IID data) or the data features differ (VFL) among the clients. Usually the amounts of data produced in different hospitals are not identical, and in FL, clients can have different data distributions; this uneven distribution on the client side can provide opposing gradient updates to the server, which is challenging to handle. Furthermore, in practice the features of federated datasets are often not the same; for instance, X-ray and CT image data may be used in two different clients, which causes trouble when aggregating the model parameters centrally in an FL setting.
C. OVERALL MODEL PERFORMANCE
The first impression of an AI model is its performance: how accurately the model accomplishes the task. High accuracy makes a model more acceptable than one with a lower score. We discussed federated model performance and compared it with traditional ML models earlier (RQ12). Our findings show that FL failed to outperform ML with similar model structures; this drawback obliges us to re-evaluate the usefulness of FL in medical imaging.
D. FEDERATED ARCHITECTURE
Training a personalized model on each client is not difficult in FL; problems emerge when all of the model outputs are transferred to the central server and passed through an aggregation process. We observed that the federated models presented in the reviewed articles are mostly theoretical and rarely implemented in practice, and few articles included open-source code. Since research in the field began only a couple of years ago, the research methods and materials need to become more easily accessible to future researchers.
Besides, research usually takes place in a very controlled setting, but questions arise when we aim for huge datasets that simulate a real-world scenario.
VI. LIMITATIONS
In this section we acknowledge the limitations of this study. First, we searched all of the prominent databases for article collection, and some journals and conference proceedings had subscription-only download policies; in some such cases we could not obtain the papers from the sources. We tried an alternative route, emailing the corresponding authors to request the full text of the required article, but we still failed to reach some of them ([94] and [95]), which limits the range of this survey. In addition, our inclusion and exclusion process removed articles from the initial pool, preprint articles were not included, and we could not explore every search database, so it is possible that we missed relevant article(s) on the topic. We did not run the models used in the 17 articles ourselves; for a precise review, that would have been more effective. Overall, it is difficult to conclude this study with strong, tested historical evidence, because our review was conducted in a very limited time and with limited resources, FL having been only recently introduced.
VII. CONCLUSION AND FUTURE DIRECTIONS
Imaging techniques are among the most popular and effective diagnosis methods in the medical sector. This practice grows day by day and produces tons of image data. AI has many opportunities in medical imaging using this data, but clinical use of AI and ML is very limited right now. On the research side, creating a publicly shareable image dataset is very difficult in the medical domain; the major hurdle to data sharing and collaboration is privacy, which is given little priority in typical centralized models. Federated or distributed learning departs from this concept: a data-driven learning model is shared rather than the data itself. In this study, we systematically reviewed the articles that employed FL in their ML-based medical image research. We discussed every perspective in detail, including demographic data, privacy aspects, datasets, FL characteristics, model implementation, and performance comparison. We noticed in one of our previous articles [33] that deep-learning-oriented COVID-19 detection using X-ray and CT images attains high accuracy, mostly above 95%. We observed a similar trend in this study: COVID-19 detection articles are the top scorers under the FL mechanism, although the scores under FL are comparatively lower than those of general models, as listed in Table 7. The performance of other application domains with FL models was likewise unremarkable. Moreover, previous articles point out that implementing federated models is relatively complex, requiring extra communication and maintenance effort. Nevertheless, it is encouraging that the research field has attracted so much attention and so many publications in so little time, which is why we can hope for promising progress of FL in medical image analysis in the future. At this stage, we summarize our findings below as future directions for researchers interested in contributing to the field:
• The privacy concern is not fully solved by FL; however, the importance of decentralized concepts cannot be denied. FL could be effective for collaborative ML in medical image research, so researchers should emphasize implementing additional privacy protection in a cost-effective way.
• The datasets in this research are collected from various sources and for various purposes, so experimental results can differ enormously. No particular benchmark dataset is available in federated medical imaging research; some standard datasets need to be built to avoid bias and data heterogeneity problems.
• Similarly, no benchmark FL model has yet been presented in this field; such an initiative would help in building robust AI models for further research.
• In truth, the data of collaborative models are prone to be heterogeneous, with various classes of data collaborating. Our results show that the accuracy of multi-class classification is very low (as described in RQ11), which needs to be addressed in future research.
• Federated models achieved satisfactory performance in some cases, but they cannot yet be described as an alternative to ML models in the accuracy race.
• Many weaknesses were observed in the current publications (the papers investigated in this review); we include the article quality checklist and results in Appendix A. Future research could use the quality analysis questionnaire for article quality improvement.
There is no doubt that FL may be on the future horizon, but some technical problems and challenges still need to be tackled before FL can be applied at scale. To the best of our knowledge this is the first SLR of its kind, and we believe this review reflects the state of FL research in the area of medical imaging. Table 8 shows the 12 Quality Questions (QQ) and scores, largely motivated by our previous article [14]. The goal of this inquiry was to check the basic quality of the articles published on FL-oriented medical imaging. Each question carries one score per article, for a total of 12 per article. We scored the QQ answers in three forms: ''Yes (1)'', ''Partially Yes (0)'', and ''No (−1)''. An article that clearly supports the question scores Yes; one that partially supports it, or where no clear answer is found, scores Partially Yes; and one that fully disagrees scores No. We investigated each article to find the answers and assigned the scores in the respective columns. As the table shows, most of the articles fail to fulfill the quality requirements. The highest score, 9 out of 12, was earned by [27], followed by six each for [20] and [28]. The scores indicate that in some areas quality has been maintained poorly in these research papers; one reason could be that the great attention drawn to FL created a rush among contributors.
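The scoring scheme just described maps directly to a tiny helper; the answer sheet below is hypothetical, not an actual row of Table 8.

```python
SCORES = {"Yes": 1, "Partially Yes": 0, "No": -1}

def qq_total(answers):
    """Total quality score for one article over the 12 Quality Questions."""
    assert len(answers) == 12
    return sum(SCORES[a] for a in answers)

# Hypothetical answer sheet for one article.
answers = ["Yes"] * 7 + ["Partially Yes"] * 3 + ["No"] * 2
print(qq_total(answers))  # 7*1 + 3*0 + 2*(-1) = 5, out of a maximum of 12
```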
Biosynthesis of poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) in Bacillus aryabhattai and cytotoxicity evaluation of PHBV/poly(ethylene glycol) blends
The study described poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) accumulation in Bacillus aryabhattai PHB10 for the first time and evaluated polymer-induced cytotoxicity in vitro with PHBV/poly(ethylene glycol) (PEG) blends. The B. aryabhattai strain produced 2.8 g/L PHBV, equivalent to 71.15% of cell dry mass, in a medium supplemented with propionic acid after 48 h of incubation. The optimum temperature and pH for copolymer accumulation were 31 °C and 7, respectively. Gas chromatography-mass spectrometry and nuclear magnetic resonance analyses confirmed the polymer obtained as PHBV. Differential scanning calorimetry revealed the melting point of the material as 90 °C and its thermal stability up to 220 °C. The average molecular weight (Mn) and polydispersity index of the sample, estimated by gel permeation chromatography, were 128.508 kDa and 2.82, respectively. The PHBV showed a tensile strength of 10.3 MPa and an elongation at break of 13.3%. The PHBV and its blends with PEG were tested for cytotoxicity on human keratinocytes (HaCaT cells); cells incubated with the PHBV/PEG2kDa blends were 99% viable, whereas PHBV alone showed comparatively higher cytotoxicity. The significant improvement in the cell viability of the PHBV/PEG2kDa blends indicates its potential as a candidate for skin graft applications.
Introduction
Polyhydroxyalkanoates (PHA) are polyesters accumulated in microorganisms as intracellular carbon and energy storage material under unbalanced growth conditions (Andreeben et al. 2010). Owing to their biodegradability and physical properties similar to most synthetic plastics, PHAs are considered a green substitute for petroleum-derived plastics. Poly-3-hydroxybutyrate (PHB) is the first discovered, most common, and simplest form of PHA found in bacteria (Lemoigne 1926). The inferior physical properties of PHB, such as brittleness, high crystallinity, and instability during the melting stage, hinder its widespread application. The copolymer of 3-hydroxybutyrate (3-HB) and 3-hydroxyvalerate (3-HV), namely poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV), is synthesized in bacteria especially when the growth medium contains organic acids. The superior properties of PHBV, such as better thermal behavior, plasticity, toughness, and biodegradability, make it more attractive in the bioplastic market (Shang et al. 2004). 3-HV and 3-HB, the monomer units of PHBV, are synthesized simultaneously in the bacterial cytoplasm via two parallel pathways: 3-HB is formed by the condensation of two acetyl-coenzyme A molecules, while 3-HV is formed by the condensation of acetyl-coenzyme A and propionyl-coenzyme A. Therefore, when propionic acid is supplemented in a culture medium constituted for PHB production, it acts as a direct precursor that triggers the formation of 3-HV in addition to 3-HB, leading to copolymer accumulation in the bacterial cells. PHBV production in the genus Bacillus has been reported previously by supplementing the medium with organic acids (Kumar et al. 2006; Güngörmedi et al. 2014; Moorkoth and Nampoothiri 2016).
PHBV with varying molar ratios of 3-HV is of great importance in the field of biomedical engineering, for example in the fabrication of cardiovascular stents, drug delivery systems, surgical sutures, and medical packaging (Avella et al. 2000; Riekes et al. 2013; Vilos et al. 2013; Wu et al. 2013; Smith and Lamprou 2014). It has also been reported that electrospun PHBV nanofibers are ideal for skin tissue engineering applications (Sundaramurthi et al. 2013). Reports are likewise available on the further improvement of PHBV by blending with other polymers such as poly(butylene succinate), poly(ethylene glycol) (PEG), and poly(lactic acid) (Modi et al. 2013; Hu et al. 2017). Among these polymers, PEG lacks toxicity and antigenicity and consequently does not induce any immunogenicity in human cells. PEG has therefore been commonly applied in a wide range of therapeutic formulations, and it is eliminated from the body through the kidneys (Catoni et al. 2013). Blending PHAs with PEG improves the surface hydrophilicity of the polymer, thereby increasing cell viability, and it has been reported to be effective in improving the biocompatibility of PHBV (Shabna et al. 2014; Monnier et al. 2016). Therefore, PHBV-PEG blends can have high application potential in biomedical fields. Bacillus aryabhattai was first reported from cryotubes used for collecting air from the upper stratosphere by Shivaji et al. (2009). Later, the bacterium was isolated from soil in many parts of the world (Pailan et al. 2015; Yan et al. 2016), and the PHB-accumulating property of this strain was first studied by Van-Thuoc et al. (2012). The B. aryabhattai strain PHB10 used in the present study is also an environmental bacterial strain and has previously been reported for high levels of PHB accumulation in its cytoplasm (Pillai et al. 2017a). The genome of this strain harbours a short-chain-length PHA-specific polymer synthase gene (phaC) responsible for its PHB biosynthesis (Pillai et al. 2017b). In the present study, the PHBV accumulation property of this bacterium was addressed. In vitro cytotoxicity studies were also conducted to evaluate the potential of these biopolymers, and of their PEG blends, with a view to skin graft applications.
Bacterial strain and culture conditions
The bacterial strain B. aryabhattai PHB10 used in this study was reported previously from our lab (Pillai et al. 2017a), and the strain is available at the Microbial Type Culture Collection (MTCC Accession No. 12561). The PHBV production experiment with this strain was conducted following the method described by Moorkoth and Nampoothiri (2016) with minor modifications. The basal medium (1.5 g of peptone, 1.5 g of yeast extract, 1 g of Na2HPO4, and 0.2 g of MgSO4·7H2O per litre) supplemented with 20 g/L glucose and 10 mM propionic acid was used for copolymer production. The pH of the medium was adjusted to 7.0 with 1 M NaOH solution. The medium was inoculated with 1% (v/v) seed culture of PHB10 and incubated in a rotary shaker at 31 °C and 180 rpm. For studying the effect of temperature and pH on polymer production, incubation temperatures between 28 and 40 °C and culture media of initial pH between 5 and 9 were used independently. The fermentation studies were conducted in 500 mL flasks with 200 mL culture medium for 48 h.
Extraction and quantification of the polymer
After the incubation, biomass was harvested by centrifugation at 5000 rpm for 3 min and lyophilized, and the cell dry mass (CDM) was calculated.
The biomass was suspended in sodium hypochlorite solution (available chlorine 5% w/v) and incubated for 1 h at 45 °C for cell lysis (Shi et al. 1997). The suspended particles were collected by centrifugation at 5000 rpm for 3 min, and the cell debris was removed by a series of washes with distilled water, acetone and absolute ethanol. The solid mass obtained was dissolved in boiling chloroform, filtered through glass wool and poured into a clean glass Petri plate; a PHA film was obtained after evaporation of the solvent. The estimation of the accumulated polymer was carried out directly from the dried cell mass. The PHA in the CDM was converted to crotonic acid by sulphuric acid treatment and quantified spectrophotometrically (Law and Slepecky 1961). The polymer yield percentage (PHA%) was calculated by multiplying the ratio of PHA mass to CDM by 100, as sketched below.
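As a worked illustration of the yield arithmetic described above, the following minimal Python sketch converts a crotonic acid absorbance reading into PHA mass via a standard curve and expresses it as a percentage of CDM. The standard-curve slope and all readings below are hypothetical placeholders, not values measured in this study.

```python
# A minimal sketch of the crotonic acid assay arithmetic (Law and Slepecky
# 1961). The standard-curve slope and all readings are hypothetical
# placeholders, not values from this study.

def pha_mass_mg(a235, slope, dilution, volume_ml):
    """Convert absorbance at 235 nm to PHA mass (mg) via a crotonic acid
    standard curve; slope is in absorbance units per (mg/mL)."""
    return (a235 / slope) * dilution * volume_ml

def pha_percent(pha_mg, cdm_mg):
    """Polymer yield as a percentage of cell dry mass: PHA% = PHA/CDM x 100."""
    return pha_mg / cdm_mg * 100.0

if __name__ == "__main__":
    pha = pha_mass_mg(a235=0.42, slope=1.5, dilution=10.0, volume_ml=5.0)
    print(f"PHA = {pha:.1f} mg, yield = {pha_percent(pha, cdm_mg=19.5):.1f}% of CDM")
```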
Gas chromatography–mass spectrometry (GC-MS) analysis

The sample preparation for the GC-MS analysis followed the methanolysis method described by Juengert et al. (2018). A mixture of 10 mg of polymer sample, 1 mL chloroform and 1 mL acidified methanol (15% v/v H2SO4) was taken in a screw-capped glass bottle (20 mL capacity) with a polytetrafluoroethylene stopper and heated at 100 °C for 2 h in an oil bath. After incubation, 1 mL chloroform containing an internal standard (0.2% v/v methyl benzoate) and 1 mL deionised water were added to the bottle for phase separation. The bottom organic phase was collected and dehydrated with anhydrous Na2SO4, and an aliquot of 1 μL was injected into a Shimadzu GC-MS QP2010S gas chromatograph fitted with an Rxi-5Sil MS (30 m × 0.25 mm × 0.25 μm) capillary column. The injection temperature was 280 °C and the helium carrier gas flow rate was set at 1 mL/min. The initial column temperature of 90 °C was maintained for 3 min, then increased to 190 °C at the rate of 7 °C/min and held for 5 min, and finally increased to 270 °C at the rate of 8 °C/min and held for 5 min. After a solvent cut time of 3.6 min, mass spectra were recorded in scan mode over the range of 50-500 m/z. The peaks were compared to the mass spectral libraries (NIST 17 and Wiley) for identification of the compounds (Pillai et al. 2018).

Nuclear magnetic resonance (NMR) spectroscopy

The NMR spectra of the polymer were recorded after dissolving the samples in high-purity deuterochloroform (CDCl3) (Salgaonkar et al. 2013). The ¹H NMR spectrum was obtained on a Bruker Avance II 500 NMR spectrometer at 500 MHz and a magnetic field strength of 11.7 T. The ¹³C NMR spectrum was recorded on a Bruker Avance III 400 NMR spectrometer at 400 MHz and 9.4 T (Bruker Corporation, Massachusetts, USA).

Differential scanning calorimetry (DSC)

DSC analysis of the PHA sample was carried out in a PerkinElmer DSC6000 Pyris Series instrument (PerkinElmer Inc., Massachusetts, USA) under a flowing nitrogen atmosphere at a heating rate of 10 °C per min (Gunaratne et al. 2004).

Thermogravimetric analysis (TGA)

TGA was carried out using a PerkinElmer STA 6000 thermal analyzer (PerkinElmer Inc., Massachusetts, USA) over a temperature range from 40 to 620 °C at a heating rate of 20 °C per min (Salgaonkar et al. 2013).

Gel permeation chromatography (GPC)

The molecular weight distribution and polydispersity index (PDI) of the polymer were determined by GPC using a Waters HPLC system with a 600 Series pump and Waters Styragel HR series HR5E/4E/2/0.5 columns, equipped with a 7725 Rheodyne injector and a 2414 refractive index detector (Waters Corporation, Massachusetts, USA) (Su 2013; Qi and Rehm 2001). Chloroform was used as the eluent (flow rate 1.0 mL/min), and polystyrene standards of molecular weight 1,865,000, 34,300 and 685 Da were used for relative calibration, as sketched below.
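To make the relative calibration concrete, the following minimal Python sketch fits the conventional log-linear GPC calibration to the three polystyrene standards and derives Mn, Mw and PDI from chromatogram slices. The retention times assigned to the standards and the detector slice data are hypothetical placeholders, not data from this study.

```python
# A minimal sketch of relative GPC calibration. The retention times of the
# polystyrene standards and the slice data are hypothetical placeholders.
import numpy as np

standards_mw = np.array([1_865_000.0, 34_300.0, 685.0])  # polystyrene standards (Da)
standards_rt = np.array([14.2, 18.9, 24.1])              # assumed retention times (min)

# Conventional calibration: log10(M) is approximately linear in retention time
slope, intercept = np.polyfit(standards_rt, np.log10(standards_mw), 1)

def mw_at(rt_min):
    """Polystyrene-equivalent molecular weight at a given retention time."""
    return 10.0 ** (slope * rt_min + intercept)

# Number- and weight-average molecular weights from chromatogram slices
rt = np.array([16.0, 16.5, 17.0, 17.5, 18.0])   # hypothetical slice times (min)
h = np.array([0.10, 0.45, 1.00, 0.60, 0.15])    # hypothetical RI detector signal
m = mw_at(rt)
mn = h.sum() / (h / m).sum()     # Mn = sum(h_i) / sum(h_i / M_i)
mw = (h * m).sum() / h.sum()     # Mw = sum(h_i * M_i) / sum(h_i)
print(f"Mn = {mn / 1000:.1f} kDa, Mw = {mw / 1000:.1f} kDa, PDI = {mw / mn:.2f}")
```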
Tensile properties

The tensile characteristics of the polymer were measured in a universal testing machine (Tinius Olsen, model 50ST) at room temperature with a 50 kN load cell at a fixed cross-head speed of 50 mm/min, following the American Society for Testing and Materials standard procedure (ASTM D882-12 2012). Average values from three independent tests were taken.

Formulation of polymer blends with polyethylene glycol

PHB and PHBV from B. aryabhattai PHB10, along with a commercial-grade PHB (Sigma-Aldrich, Missouri, USA), were selected for blend preparation. The PHB sample from the strain, prepared during our previous study, was taken for this experiment (Pillai et al. 2017a). PEG of molecular weight 2 kDa or 8 kDa was blended individually with the polymer samples. The preparations followed a solvent casting technique, combining the polymer and PEG at a ratio of 4:1 (w/w) (Rodrigues et al. 2005). Briefly, 200 mg of PEG was dissolved in 50 mL chloroform in a sealed round-bottom flask at 150 rpm and 50 °C, into which 800 mg of the respective PHA was added after complete dissolution of the PEG. Once the polymers were completely dissolved, the solution was allowed to cool to 25 °C and then poured into a clean glass Petri dish to cast the films. The film thickness was maintained at approximately 0.05 mm by adjusting the concentration and total volume of the polymer solution. The chloroform was evaporated overnight, and the films were kept at room temperature for one week for complete evaporation of the solvent.

Evaluation of polymer-induced cytotoxicity

Cytotoxicity of the polymer samples was evaluated according to Napathorn (2014) with minor modifications. The polymer films were cut into circular discs of 0.4 mm diameter and sterilized before use by immersion in 70% ethanol for 1 h followed by UV irradiation (wavelength: 254 nm; intensity: 1.4 mW/cm²; incubation period: 1 h). The CellTiter 96 AQueous One Solution Cell Proliferation Assay System (Promega Corporation, Wisconsin, USA), which contains a novel tetrazolium compound [3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium, inner salt; MTS] and an electron coupling reagent (phenazine ethosulfate; PES), was used for measuring cytotoxicity (Mosmann 1983). The MTS tetrazolium compound (Owen's reagent) is bioreduced by the mitochondrial succinate dehydrogenase enzyme into a coloured formazan product soluble in the tissue culture medium; cell viability was therefore assessed indirectly by measuring this enzyme activity. HaCaT cells were grown in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 10% fetal bovine serum, 2 mM l-glutamine, 0.1 mM MEM non-essential amino acids and 1 mM sodium pyruvate. The cells were seeded at a density of 1 × 10⁴ cells/well in flat-bottomed 96-well standard microplates. The polymer samples were added to the wells and incubated for 24 h at 37 °C in a humidified 5% CO2 incubator. Cells without polymer treatment were kept as the control. After incubation, 20 μL of MTS solution was added to each well and the plates were incubated for 4 h in the CO2 incubator. The absorbance was then measured with a microplate reader at a wavelength of 490 nm, and cell viability was calculated as the percentage viability of cells with respect to the control experiment.

Statistical analysis

The experiments for PHA production and cytotoxicity evaluation were performed in triplicate and the mean values were taken. The values were subjected to Student's t-test, and p ≤ 0.05 was taken as statistically significant (Parker 1979). The viability calculation and significance test are sketched below.
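The following minimal Python sketch illustrates the viability calculation and Student's t-test described above, using scipy. The absorbance readings (A490) are hypothetical placeholders; triplicate wells per condition, as in the study.

```python
# A minimal sketch of the viability calculation and Student's t-test.
# The A490 readings are hypothetical placeholders, not measured values.
import numpy as np
from scipy import stats

a490_control = np.array([1.32, 1.28, 1.35])      # untreated cells (hypothetical)
a490_treated = np.array([1.30, 1.27, 1.33])      # PHBV/PEG2kDa discs (hypothetical)

# Cell viability as a percentage of the untreated control
viability = a490_treated / a490_control.mean() * 100.0
print(f"Viability: {viability.mean():.1f} +/- {viability.std(ddof=1):.2f} %")

# Two-sample Student's t-test; p <= 0.05 taken as significant, as in the study
t_stat, p_value = stats.ttest_ind(a490_control, a490_treated)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```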
Poly(3-hydroxybutyrate-co-3-hydroxyvalerate) production in Bacillus aryabhattai PHB10

The strain PHB10 was tested for copolymer production in the presence of 10 mM propionic acid. The bacterium produced 3.9 (± 0.26) g/L of biomass containing 2.8 (± 0.31) g/L of PHBV after 48 h of incubation. The effect of incubation temperature on copolymer production was evaluated at different temperatures, and the results are presented in Fig. 1a. The optimum temperature for bacterial growth and PHBV accumulation was 31 °C, with a yield of 71.15% of CDM. Initial pH values between 5 and 9 were also tested to study the effect of the initial pH of the culture medium on PHBV accumulation, and the results are depicted in Fig. 1b. The optimum pH for copolymer production was 7, and the yield showed a decreasing trend on either side of this pH value.

Identification and characterization of the polymer

The monomer composition of the PHBV was identified by GC-MS analysis. The results were compared with those of standard PHBV (Sigma-Aldrich, MO, USA) (Fig. 2a). The chromatogram of the tested PHBV (Fig. 2b) showed three major peaks with retention times of 3.630 min, 19.224 min and 21.667 min. By comparison with the GC-MS spectral libraries, these were identified as 3-hydroxybutyric acid methyl ester, pentadecanoic acid methyl ester and hexadecanoic acid methyl ester, respectively. The 3-hydroxybutyric acid methyl ester and pentadecanoic acid methyl ester were also detected in the chromatogram of the PHBV standard.

The ¹H NMR spectrum (Fig. 3a) showed resonances at 1.26 ppm, 2.45-2.65 ppm and 5.22-5.28 ppm, representing a methyl group (-CH3), a methylene group (-CH2-) and a methine group (-CH-), respectively, from the 3-HB monomer. The spectrum also showed resonances at 0.87 ppm, 1.62 ppm and 5.13-5.18 ppm, representing a methyl group (-CH3), a methylene group (-CH2-) and a methine group (-CH-), respectively, from the 3-HV monomer. The ¹³C NMR spectrum (Fig. 3b) showed peaks at 19.75 ppm, 40.77-40.80 ppm and 67.61-67.69 ppm, representing a methyl (-CH3) group, a methylene (-CH2-) group and an ester (-O-CH-) group, respectively, from the 3-HB monomer. The peaks at 26.85 ppm and 38.80 ppm correspond to the two methylene (-CH2-) groups, and the peak at 71.90 ppm corresponds to an ester (-O-CH-) group from the 3-HV monomer. The resonance at 169.14 ppm represents the carbonyl carbon atom from both the 3-HB and 3-HV monomers.

The melting point of the material was determined by DSC analysis and was observed as 90 °C. Figure 4 shows the TGA thermogram of the PHA film. A 5% reduction in mass was observed around 90 °C; after that, the material remained stable up to about 220 °C, beyond which it degraded rapidly.

Evaluation of polymer-induced cytotoxicity

Polymer discs prepared from the different polymer samples, with and without PEG, were evaluated against HaCaT cells for cytotoxicity. Cells treated only with fresh medium were kept as the control. The percent viability of the cells in the test samples relative to the control is presented in Fig. 5. Normal cell growth was observed in the presence of all combinations of the polymer samples. Each set of experiments comprised a polymer without PEG, polymer + PEG2kDa and polymer + PEG8kDa. Among the polymer samples without PEG, PHBV showed the least cytotoxicity, with 88 (± 4.69)% cell viability, whereas the PHB standard showed higher cytotoxicity, with only 62 (± 5.92)% cell viability. When the samples blended with PEG of 2 kDa were tested, all the polymer samples showed a significant reduction in cytotoxicity; the cells incubated with the PHBV/PEG2kDa blends were 99% viable. However, the samples blended with PEG8kDa induced higher toxicity than those with PEG2kDa.

Discussion

The genus Bacillus is well known for its ability to accumulate short-chain-length (SCL) PHA such as PHB, as its genome harbours a gene for an SCL-specific PHA synthase (Pillai et al. 2017a, b). Reports are also available on the accumulation of copolymers of PHB in several Bacillus spp. when the culture medium was supplemented with propionic acid (Güngörmedi et al. 2014; Moorkoth and Nampoothiri 2016). B. aryabhattai PHB10 is an environmental bacterial strain accumulating high levels of PHB (Pillai et al. 2017a). We tested strain PHB10 for accumulation of the copolymer PHBV in a culture medium supplemented with propionate, and the yield obtained (2.8 g/L, 71% of CDM) was better than the previously reported PHBV contents of 1.9 g/L (54% of CDM) and 0.9 g/L (26% of CDM) from other Bacillus spp. reported by Kumar et al. (2006) and Moorkoth and Nampoothiri (2016), respectively. This study is the first report of PHBV accumulation in a B. aryabhattai strain.

The optimum temperature for growth and PHBV accumulation was 31 °C, the same as observed during the PHB production studies with this bacterium (Pillai et al. 2017a). Masood et al. (2012) and Güngörmedi et al. (2014) also observed optimum temperatures for growth and PHA accumulation in the range of 30-35 °C. The optimum pH for copolymer production was 7, and the yield showed a significant decreasing trend on either side of this value, in agreement with previous reports on PHBV accumulation in Bacillus spp. (Moorkoth and Nampoothiri 2016; Masood et al. 2012). The PHB accumulation study in this strain (Pillai et al. 2017a) and the reports on PHBV accumulation in other Bacillus spp. suggested an optimum incubation period of 48 h for maximum polymer yield (Güngörmedi et al. 2014; Moorkoth and Nampoothiri 2016), and hence the incubation period was set as 48 h in all the shake flask experiments.

The GC-MS analysis revealed the monomer composition of the obtained polymer. The main peaks corresponded to 3-hydroxybutyric acid methyl ester, pentadecanoic acid methyl ester and hexadecanoic acid methyl ester. The 3-hydroxybutyric acid methyl ester is the monomer methyl ester of 3-HB, whereas the pentadecanoic acid methyl ester and the hexadecanoic acid methyl ester are the trimer and tetramer methyl esters of 3-HV and 3-HB, respectively (Bhuwal et al. 2014).
These oligomers might have formed as a result of incomplete digestion of the polymer sample during methanolysis. The chromatogram of the PHBV standard also showed the methyl esters of 3-hydroxybutyric acid and pentadecanoic acid. These observations confirmed the obtained polymer as PHBV. The ¹H NMR analysis demonstrated the resonances for the methyl (-CH3), methylene (-CH2-) and methine (-CH-) groups from both the 3-HB and 3-HV monomer units. The ¹³C NMR spectrum provided the signals for the methyl (-CH3) groups, methylene (-CH2-) groups, ester (-O-CH-) groups and carbonyl carbon atoms from the 3-HB and 3-HV monomers. The chemical shift signals were in agreement with the previous findings of Abd-El-Haleem (2009) and Aramvash et al. (2016), which again confirmed that the polymer obtained from B. aryabhattai was PHBV.

DSC analysis showed that the melting point of the material was 90 °C, which was very low compared to the 170 °C of the homopolymer PHB produced by the strain (Pillai et al. 2017a). Generally, PHBV has a melting point of around 100-150 °C, which decreases with an increase in the amount of HV in the PHBV (Liu et al. 2014). The TGA revealed that the obtained polymer began to lose mass significantly as the temperature increased beyond 220 °C and degraded completely at around 255 °C. This thermal behaviour is comparable to the thermal properties of PHBV reported by Wang et al. (2013). The observations on the thermal behaviour of the copolymer can be attributed to the higher hydroxyvalerate content in the sample, which may improve its ductility and flexibility.

The molecular weight and PDI of the obtained PHBV were higher than those of the PHB produced by this strain (Pillai et al. 2017a) and of the PHBV obtained from the Bacillus sp. reported by Moorkoth and Nampoothiri (2016). Generally, higher values of PDI are observed for compounds of higher molecular weight, especially in polymers recovered by the sodium hypochlorite lysis method (Berger et al. 1989). The polymer showed an elongation at break of 13.30% and a tensile strength of 10.3 MPa. Previously, PHBV with a very low mol% of 3-HV was reported to have a tensile strength of 36.2 MPa and an elongation at break of 1% (Jost and Miesbauer 2018). The PHBV obtained from PHB10 had better mechanical properties than the PHBV reported by Kuciel et al. (2019), and the values are in a range suitable for tissue engineering applications (Little et al. 2011). Elongation at break represents the capability of a material to resist changes of shape without crack formation; the better elongation at break recorded here indicates the better elastomeric character of the obtained PHBV. The mechanical properties of the PHBV obtained in this study suggest that the polymer contained a high 3-HV content.

PHAs are ideal candidates for biomedical engineering because of their high immunotolerance, low toxicity and biodegradability (Lomas et al. 2013). They have been reported to be more angiogenic than other similar polymers and to have greater macrophage polarization properties; hence, PHA-based polymer scaffolds could be an attractive candidate for skin reconstruction procedures (Castellano et al. 2017). Reports are also available on the enhancement of the biocompatibility of PHAs when blended with PEG (Cheng et al. 2003; Chan et al. 2011; Li et al. 2016).
To obtain more reliable information on the cytotoxicity of polymer samples intended for skin graft applications, human keratinocytes, the predominant cell type in the epidermis, are the ideal target. HaCaT cells are immortalized human keratinocytes that have been widely applied in studies related to epidermal homeostasis and its pathophysiology (Seo et al. 2012). There are also reports of improvement in the physical properties and biodegradability of the polymer on blending with PEG (Xiang et al. 2013; Hu et al. 2017). The cytotoxicity values obtained from this study showed that blending with PEG of 2 kDa significantly reduced the polymer-induced cytotoxicity of both PHB and PHBV. In an earlier study, PHBV surfaces modified with PEG decreased the nonspecific adsorption of proteins from plasma and thereby improved the blood compatibility of implanted materials. A similar improvement in the biocompatibility of PHB blended with PEG was reported for Chinese Hamster Lung (CHL) fibroblasts (Cheng et al. 2003). Studies on neural-associated olfactory ensheathing cells (OECs) in the presence of PHB blended with PEG2kDa also reported improved cell viability, biocompatibility and proliferation (Chan et al. 2011). It is evident from these observations that PHBV blended with PEG2kDa did not affect normal cell growth and proliferation and hence can be effectively used for skin graft applications.

Conclusions

B. aryabhattai PHB10 accumulated the copolymer PHBV up to 2.8 g/L (71.80% of CDM) in the presence of glucose and propionic acid. This study is the first report of PHBV accumulation in this bacterium. The polymer yield could be improved by optimization of the fermentation conditions. The blending of PHBV with PEG considerably reduced the polymer-induced cytotoxicity on HaCaT cells, which is a promising result in terms of skin graft applications. Therefore, it will be worthwhile to study further the potential of PHBV/PEG blends for biomedical applications.