Novel vaccine strategies against emerging viruses Highlights ► Immune protection is a powerful tool to prevent and control emerging diseases. ► Classical vaccine strategies are not always suitable for emerging viruses. ► Novel vaccine strategies offer enhanced safety, immunogenicity or cross-protection. ► Well-known vaccine platforms can speed up vaccine development and validation. Novel vaccine strategies against emerging viruses Adolfo García-Sastre 1,2,3 and Ignacio Mena 1,2 One of the main public health concerns of emerging viruses is their potential introduction into, and sustained circulation among, populations of immunologically naïve, susceptible hosts. The induction of protective immunity through vaccination can be a powerful tool to address this concern by conferring protection to the population at risk. Conventional approaches to developing vaccines against emerging pathogens have significant limitations: lack of experimental tools for several emerging viruses of concern, poor immunogenicity, safety issues, or lack of cross-protection against antigenic variants. The unpredictability of the emergence of future virus threats demands the capability to rapidly develop safe, effective vaccines. We describe some recent advances in new vaccine strategies that are being explored as alternatives to classical attenuated and inactivated vaccines, and provide examples of potential novel vaccines for emerging viruses. These approaches might be applied to the control of many other emerging pathogens. Introduction Emerging diseases affecting livestock and humans represent an important threat to the world's economy and public health. Several factors, including increasing urbanization, international travel and commerce, and climate change, increase the likelihood that the threat of emerging pathogens will continue, if not worsen, in the future. When a virus emerges in an infection-free area, or jumps into a new species, the susceptible host population will likely have little or no pre-existing immunity to the pathogen. Lack of herd immunity can result in fast dissemination and in more virulent consequences of infection. Providing protective immunity through vaccination can be the most powerful and cost-effective strategy to prevent and control emerging infectious diseases. Developing a vaccine against an emerging virus might face several challenges [1,2] (summarized in Box 1). Although conventional vaccination strategies, based on inactivated virus or on the use of live attenuated strains, have been instrumental in the control and even eradication of some important animal and human infectious diseases, in many other cases they fail to deliver the required levels of immunogenicity, safety, or cross-protection across the pathogen's antigenic variability, or may even exacerbate disease. Therefore, new strategies have been explored to obtain safer, more effective vaccines (Table 1). Vaccines based on immunologically relevant viral antigens rather than on the whole virus could satisfy many of the challenges summarized in Box 1. However, individual antigens without the context of a viral infection are poorly immunogenic, and therefore expression/delivery methods, as well as adjuvants (reviewed in [3,4]), must be carefully designed to reach protection. In this article we discuss some recent advances in the use of novel vaccine strategies for the control of emerging viruses.
For simplicity, we focus on a few relevant examples, but these vaccine approaches or vaccine platforms might be applied to the development of safer and more effective vaccines against a number of emerging viruses. Recombinant proteins and synthetic peptides A safe strategy to induce immune responses is to deliver a viral antigen produced by recombinant methods or chemical synthesis. In addition to safety, recombinant protein vaccines can have additional advantages. First, production does not require the manipulation of the pathogen, avoiding the risk of accidental escape and the hurdles of high bio-safety and bio-containment requirements. Second, vaccine candidates can be designed even when there is limited information about the pathogen. Third, subunit vaccines can be used to overcome the natural immuno-dominance of highly variable epitopes and direct the immune responses against conserved and broadly protective epitopes. Fourth, since individual antigens elicit responses that are different from the response induced by natural infection, these vaccine strategies could be used as DIVA (Differentiating Infected from Vaccinated Animals) vaccines with the accompanying serological test. The main disadvantage of subunit vaccines is that isolated proteins or peptides are usually poor immunogens because they fail to be recognized as Pathogen-Associated Molecular Patterns (PAMPs) and to activate innate immune responses, which are required for the full development of acquired immunity. To increase the responses against conserved epitopes, they must be presented in an immunogenic conformation and/or accompanied by potent adjuvants. Recently, a vaccine candidate based on the envelope glycoprotein of the BSL-4 pathogen Hendra Virus (HeV) (family Paramyxoviridae, genus Henipavirus) has been shown to induce complete protection in a ferret model [5]. Recombinant protein immunization has also been used to induce broadly reactive antibodies against Influenza A Virus (IAV) conserved epitopes. Vaccines that provide long-lived protection across several IAV subtypes, also known as 'universal influenza vaccines', would avoid the need for annual vaccination and continuous re-formulation of the vaccine to match the circulating strains, and would protect against animal IAVs and future pandemic strains [6,7,8]. A recombinant protein vaccine (STF2.4xM2e) containing the highly conserved extracellular domain of the IAV M2 protein (M2e) has demonstrated safety and immunogenicity in a Phase I clinical trial. To increase immunogenicity, the M2e sequence was expressed in four tandem copies and fused to flagellin, a TLR5 ligand acting as adjuvant [9].
Table 1. Novel strategies applied to the development of vaccines against emerging viruses.
- Recombinant protein and synthetic peptides. Advantages: safe (no viral replication); can direct the response to conserved epitopes. Disadvantages: might require the use of potent adjuvants or boosts; production yields, cost and purification can be limiting.
- Recombinant viral vectors. Advantages: elicit humoral and T cell responses; high level of antigen expression; several vector platforms with different profiles available. Disadvantages: pre-existing immunity to the vector can decrease efficacy.
- Recombinant bacteria. Advantages: adjuvant effect of the vector; low cost, mass production. Disadvantages: limited experimental information, no clinical trials.
- Nucleic acid vaccines. Advantages: elicit humoral and T cell responses; replicons have increased immunogenicity.
Wang and colleagues
Disadvantages: poor immunogenicity (but can be enhanced by adjuvants and heterologous prime-boost strategies). Example: DNA vaccine against WNV in horses [47].
used a synthetic peptide from the conserved stalk region of the IAV hemagglutinin (HA) protein (Figure 1), coupled to the carrier protein keyhole limpet hemocyanin (KLH). Vaccination with two doses of peptide induced cross-reactive antibodies and protected mice against lethal challenge with different subtypes of IAV [10]. Other approaches include the fusion of the antigen to dendritic cell targeting/activating molecules [11]. While this strategy should result in delivery of the antigen to an antigen presenting cell in the appropriate stimulatory context, further investigation is required to identify the targeting sequences that result in high immunogenicity without reactogenicity. Virus-like particles and multimeric presentation of viral antigens In the virion, structural proteins are usually arranged in a tight and well-ordered conformation, which is believed to be recognized as a PAMP. Therefore, one way to increase the immunogenicity of viral antigens is to deliver them in multimeric conformation and as virus-like particles (VLPs) (reviewed in [12]). VLPs based on both enveloped and non-enveloped viruses can be used to immunize against the homologous virus or engineered to incorporate epitopes from a different pathogen. In addition to better immunogenicity, VLPs are considered very safe, because they contain no genetic material. VLP preparations do not require the use of inactivating agents (i.e. formalin) that might destroy immunologically relevant (conformational) epitopes. The VLP approach is being applied in the research toward a universal IAV vaccine (reviewed in [8]). Steel et al. prepared headless HA VLPs by co-expressing a deleted HA protein lacking the highly variable head domain (Figure 1) along with the HIV Gag protein. Mice immunized with these headless HA VLPs developed broadly reactive antibodies against the conserved stalk region of HA and were protected from challenge with a lethal dose of the homologous virus [13]. The influenza virus M2e antigen has been conjugated to the hepatitis B virus core protein, which self-assembles into VLPs (ACAM-FLU-A [14]), and demonstrated safety and immunogenicity in Phase I clinical trials. A similar approach was used to fuse the M2e sequence to the norovirus capsid protein, which self-assembles into VLPs. Interestingly, the chimeric VLPs induced responses against both IAV and norovirus [15]. M2-containing VLPs were also obtained by co-expressing the full-length M2 protein with the IAV M1 protein [16]. As an alternative to VLPs, several groups have described multimeric M2e strategies that increase its immunogenicity [17,18-20]. A VLP vaccine candidate against the Chikungunya Virus (CHIKV, family Togaviridae, genus Alphavirus [21]) was obtained by expressing the virus' structural proteins in a human cell line. Intramuscular inoculation induced neutralizing antibodies and completely protected against experimental infection in a nonhuman primate model [22]. Replication competent viral vectors Recombinant viruses have been used for several decades as vectors for protein expression and for vaccination. The list of virus families that are being explored as vectors for vaccination is too broad to be described in detail, and the topic has been reviewed recently elsewhere [23,24,25].
Viruses to be used as vaccine vectors can be manipulated to enhance their safety and immunogenicity by eliminating virulence factors; changing tropism by replacing envelope proteins; and increasing coding capacity by eliminating non-essential genes.
Box 1. Potential challenges in the development of vaccines against an emerging virus. Incomplete information: our knowledge about an emerging pathogen can be limited for several reasons: it is a recently discovered virus; outbreaks are uncommon and there are few reported natural outbreaks; or important immunological parameters, such as correlates of protection, antigenic variability or immunodominant antigens, are unknown. In addition, for highly variable viruses, the particular strain that will cause the next outbreak or pandemic might be unpredictable. Virulence and bio-safety requirements: some emerging viruses have high mortality rates, with no treatment or prophylaxis available, and must be manipulated under maximum (BSL-4) bio-safety conditions. For this type of virus, using live attenuated strains or inactivated vaccines (which require the growth of large amounts of virus) is not an acceptable option. Similar concerns apply to veterinary vaccines against highly contagious viruses to be used in disease-free countries. Lack of appropriate animal models: vaccine candidates need to be pre-clinically tested in animal models for safety, immunogenicity and efficacy (protection). The ideal animal model should have a well-known immune system and similar susceptibility and immune responses to the pathogen as the natural host. Other important considerations are price, size, the possibility to use large numbers, and ethical considerations. The possibility to Differentiate between Infected and Vaccinated Animals (DIVA or marker vaccines) is highly desired in veterinary vaccines against pathogens that are subject to trade restrictions. The time required to develop, validate, mass-produce and deliver a new vaccine should ideally be as short as possible. This could be facilitated by the use of well-known vaccine platforms that have already been tested and validated against similar pathogens. Vaccines to be used to control an ongoing outbreak should be stockpiled in advance or produced very rapidly, and should induce protection after a single administration and a short time after inoculation.
One general advantage of viral vectored vaccines is that the antigen is expressed in the context of an actual viral infection, which activates the innate immune responses required for the full development of adaptive humoral and T cell-mediated immunity [23]. Possible disadvantages are the competition of immuno-dominant antigens from the vector, or the loss of efficacy in the presence of pre-existing immunity against the vector. In some cases, safety issues derived from the pathogenesis of the vector itself must also be considered. Of interest for vaccination against emerging viruses, many characteristics of a virus-vectored vaccine, including the type and intensity of the immune response, safety considerations, or manufacturing techniques, are determined mainly by the vector and not by the pathogen. Therefore, developing and testing a vaccine against a newly discovered virus can be significantly shortened by the use of a viral vector platform with an extensive record of safety and efficacy. In addition to the virus families that have been historically used as vectors, such as poxviruses and adenoviruses, attractive new vector candidates are being developed.
Newcastle Disease Virus (NDV) is an avian paramyxovirus that does not infect mammals. Attenuated NDV strains are used for vaccination in poultry (including a dual vaccine against NDV and H5N1 avian IAV [26]). In mammals, NDV-vectored vaccines present two major advantages: first, there is no pre-existing immunity against the vector, and second, the virus is not able to block the innate immune response in mammalian cells, which results in increased safety and immunogenicity. Recently, recombinant NDVs expressing the Rift Valley Fever Virus (RVFV) envelope proteins (Gn and Gc) have been shown to induce complete protection in mice and sheep [27,28]. Viral vectors have also been used to develop vaccines against highly pathogenic emerging viruses. Ebola Viruses (EBOV) are zoonotic filoviruses that cause hemorrhagic syndromes with very high mortality rates. Several vaccine strategies have shown induction of specific immune responses and protection against lethal challenge in non-human primates [29]. In 2009 an experimental vaccine candidate was used as post-exposure emergency prophylaxis in a researcher, after an accidental puncture with a needle containing Zaire EBOV. After consultation with experts from several countries, the chosen vaccine was a recombinant Vesicular Stomatitis Virus (VSV) expressing the EBOV envelope glycoprotein (GP), and it was administered 48 hours after exposure. The person developed antibodies against the vector and the GP protein, but not against other EBOV proteins. In addition, vector but not EBOV RNA was detected in the patient's serum [30]. A replication-defective recombinant adenovirus (rAd5) expressing EBOV GP elicited complete protection after a single inoculation in nonhuman primates [31] and has shown safety and immunogenicity in humans [32]. A possible caveat of this strategy is that vaccine efficacy might be affected by pre-existing antibodies against the vector. The availability of reverse genetics techniques to directly manipulate the genome of many viruses, along with the increased knowledge about their molecular biology, has opened the opportunity to create a new generation of attenuated vaccine strains with increased safety and immunogenicity. Good examples are the ΔNSs RVFV vaccine candidates. The non-structural protein NSs is a major virulence factor that modulates the host's immune response, but is not required for replication in cell culture. Applying reverse genetics to the existing attenuated strains, several groups have obtained viruses lacking NSs and demonstrated safety and immunogenicity in mice and lambs [33-36]. Another promising strategy to create attenuated vaccines is the exchange of sequences from the pathogen into the genome of a less virulent, closely related virus. Examples of this strategy are the flavivirus vaccine candidates based on the backbone of the attenuated Yellow Fever Virus strain YF-17D containing the genes of the envelope proteins prM and E from Japanese Encephalitis Virus (ChimeriVax-JE [37], currently licensed in Australia), West Nile Virus (ChimeriVax-WN02 [38]) and Dengue Virus (CYD 1-4 [39]), the latter two having completed Phase II clinical trials; and the pestivirus chimeras combining the genomes of Classical Swine Fever Virus and Bovine Viral Diarrhea Virus [40]. Recombinant bacteria as vaccine vectors In addition to being extensively used to produce recombinant subunit vaccines, bacteria can also serve as vectors for the in vivo delivery of antigens or DNA.
Potential advantages of this platform are the low cost and easily scaled-up production, the availability of well-characterized attenuated strains, the activation of innate immunity by the vector, and the efficient delivery to antigen presenting cells. Several genera are being explored as vaccine vectors, including Listeria, Salmonella [41], Lactococcus [42] and Bordetella [43]. Recombinant bacteria can be used as live attenuated vaccines, inactivated, or even as cytoplasm-depleted bacterial ghosts. Recombinant Lactococcus lactis expressing the N protein of SARS-coronavirus has been shown to induce antibodies in mice [44]. Li et al. reported that a recombinant Bordetella pertussis expressing the influenza virus M2e induced high titers of specific antibodies, but failed to elicit protection in mice [45]. Nucleic acid vaccines Inoculation of cDNAs encoding viral antigens can lead to uptake and expression of the cDNA by antigen presenting cells and initiation of immune responses [46]. DNA vaccines have many potential advantages for vaccination against emerging viruses: plasmids expressing a viral antigen can be produced rapidly, even when only partial sequence information from the pathogen is available. Antigen is expressed in vivo and induces both humoral and cell-mediated immune responses. Large quantities of DNA can be produced in a short time at a reduced cost, and DNA preparations are more stable than other types of vaccines, both very desirable properties for a vaccine that must be used in remote areas. Furthermore, DNA vaccines are considered very safe, they are suitable for DIVA applications, and they are not affected by anti-vector immunity. The main limitation in the development of DNA vaccines is their intrinsic low immunogenicity. Therefore, great research effort has been invested in improving immunogenicity through more efficient delivery approaches, such as gene gun, skin tattooing, or electroporation; targeting to immune effector cells; and the use of potent adjuvants, either co-administered with the vaccine or encoded in the same plasmid. DNA vaccines are also frequently used in combination with other vaccine platforms in prime-boost strategies. Replicon vaccines are based on defective RNA genomes that are able to undergo replication and express encoded proteins, but cannot produce infectious viral particles. Viral RNA replication is a strong inducer of innate immunity and, therefore, replicon vaccines may have superior immunogenicity compared with the equivalent DNA vaccines [53]. Replicon vaccine candidates can be generated by removing essential structural genes from the genome of the pathogen, such as West Nile Virus [54] or RVFV [55,56], or by inserting into a replicon heterologous genes encoding antigens from a pathogen. By far, most heterologous replicon vaccines use alphavirus-derived replicons (reviewed in [57]). Replicon vaccines can be delivered as propagation-defective replicon particles, or as plasmids containing the whole replicon sequence under the control of an appropriate promoter. Conclusion Emerging infectious diseases can present many challenges for vaccine development. Several novel vaccination strategies that have been developed in recent years can specifically address these challenges. Subunit vaccines, containing only part of the pathogen's antigens, can elicit protective responses that are different from those induced in the infected animal.
Because they contain no infectious pathogen, there is no need for high biosafety measures, and there is no risk of accidental escape during production, residual pathogenesis, or reversion to virulence in the vaccinated individuals. The use of well-defined vaccine platforms with an extensive record of safety and efficacy against similar pathogens can speed up the process of development, validation and production of vaccines against new emerging and potentially emerging viruses. However, many challenges still lie ahead. Specifically, each vaccine platform has advantages and disadvantages, mainly related to the balance between safety and immunogenicity and to the ability to be used multiple times. Studies that compare multiple platforms in humans are still lacking. Future research will be needed to improve the safety and immunological characteristics of vaccination strategies. Conflict of interest The Icahn School of Medicine at Mount Sinai owns intellectual property in the influenza virus vaccine field, with AG-S being one of the inventors.
Time and frequency dependent changes in resting state EEG functional connectivity following lipopolysaccharide challenge in rats Research has shown that inflammatory processes affect brain function and behavior through several neuroimmune pathways. However, the higher order brain functions affected by inflammation largely remain to be defined. Resting state functional connectivity of synchronized oscillatory activity is a valid approach to understand network processing and higher order brain function under different experimental conditions. In the present study, multi-electrode EEG recording in awake, freely moving rats was used to study resting state connectivity after administration of lipopolysaccharides (LPS). Male Wistar rats were implanted with 10 cortical surface electrodes, administered LPS (2 mg/kg), and monitored for symptoms of sickness at 3, 6 and 24 h. Resting state connectivity and power were computed at baseline, 6 and 24 h. Three prominent connectivity bands were identified using a method resistant to spurious correlation: alpha (5–15 Hz), beta-gamma (20–80 Hz), and high frequency oscillation (150–200 Hz). The most prominent connectivity band, alpha, was strongly reduced 6 h after LPS administration, and returned to baseline at 24 h. Beta-gamma connectivity was also reduced at 6 h and remained reduced at 24 h. Interestingly, high frequency oscillation connectivity remained unchanged at 6 h and was impaired 24 h after LPS challenge. Expected elevations in delta and theta power were observed at 6 h after LPS administration, when behavioral symptoms of sickness were maximal. Notably, gamma and high frequency power were reduced 6 h after LPS and returned to baseline by 24 h, when the effects on connectivity were more evident. Finally, increases in cross-frequency coupling elicited by LPS were detected at 6 h for theta-gamma and at 24 h for theta-high frequency oscillations. These studies show that LPS challenge profoundly affects EEG connectivity across all identified bands in a time-dependent manner, indicating that inflammatory processes disrupt both bottom-up and top-down communication across the cortex during the peak and resolution of inflammation. Introduction Extensive research has shown that inflammatory processes affect brain function and behavior via several neuroimmune mechanisms that integrate the central nervous system (CNS) and the immune system in response to environmental challenges (reviewed in Dantzer et al., 2008) [1]. This communication is known to be bi-directional, with the CNS controlling immune function via neurotransmitters and hormones and the immune system modulating the CNS via cytokines and inflammatory mediators [2]. Some of the best characterized mechanisms by which the immune system influences brain function include direct actions of cytokines on neurons [3-5] as well as indirect cytokine actions on glial and perivascular cells [1,6,7]. Moreover, cytokines have been shown to induce the activation of the kynurenine pathway (KP) of tryptophan metabolism, resulting in the formation of several neuroactive metabolites that modulate glutamatergic and cholinergic neurotransmission [8,9]. Nevertheless, despite the vast information existing on the effects of inflammation and cytokines on neurotransmitter systems and behavior, defining their effects on higher order brain processes has proven challenging, as multiple mechanisms and possibly neurocircuitries are affected during an inflammatory reaction.
Resting state electroencephalography (EEG) is a useful tool to understand brain circuitries and fast time scale information transfer in normal and pathological conditions [10,11]. In humans, resting state EEG is a minimally invasive method to assess ongoing activity in the normal brain [12,13], across psychiatric and neurological conditions [14-17], and following pharmacological manipulations [18-20]. Of translational value, rodent EEG measures correspond, with a number of caveats, to human EEG measures. For example, the power spectra at the surface of the cortex for both rodents and humans follow a similar frequency power profile [21]. Collectively, synchronized EEG activity reflects the sum of spatially distributed neuronal oscillations that serve ongoing sensory processing, affective modulation, and higher-order thought processes of the brain. These coupled oscillations arrange into networks at multiple spatial and temporal scales to coordinate, or bind, neural activity between regions [14]. Therefore, resting-state EEG recordings in multiple regions of the cortex may provide a powerful tool to understand the effects of inflammatory processes on higher order brain function and network processing. Peripheral injection of subseptic doses of bacterial lipopolysaccharides (LPS) in rats and mice is an established model to elicit an inflammatory reaction and production of cytokines in the brain [22-28]. Depending on the dose and species, it produces a group of neurobehavioral symptoms known as sickness behavior, which persist from 2 to 24 hours [1,28]. After cessation of the inflammatory reaction, these symptoms are followed by deficits in emotional and cognitive behavior. Cytokines are known to mediate the symptoms of sickness, while emotional and cognitive processing appears to be mediated by downstream mechanisms involving the KP [1,29-32]. Thus, during an LPS challenge different inflammatory mediators interact with the CNS in a time-dependent manner, raising the possibility of differential effects on synchronized cortical oscillatory activity during this process. Therefore, the objective of the present study was to use multi-channel resting state EEG recordings from the rat cortical surface to study the effects of the neuroinflammatory process elicited by LPS on EEG connectivity and power spectra. Results from this study confirm the assumption of a time-dependent effect of LPS on different components of the cortical EEG during the progression and resolution of inflammation. Animals Rats were weaned at postnatal day (PND) 21 and one male offspring per dam (n = 12) was used in this study. Five days following weaning, animals were weighed and handled 3 times per week for three minutes per rat. Rats were implanted with surface electrodes at PND 54. Rats were maintained on a 12:12 L:D cycle (lights on at 07:00 AM) in plexiglass cages in groups of 2-3 per cage with food and water ad libitum. All animal procedures were approved by the Institutional Animal Care and Use Committee of the University of Maryland, Baltimore. Electrode implantation Rats were anesthetized with isoflurane, placed on a thermal pad to maintain temperature, and monitored with a rectal thermometer. After ensuring a deep level of anesthesia, the top of the skull was shaved and one 2-3 cm incision was made along the midline of the scalp from just behind the line of the eyes to just in front of the ears.
Ten burr holes not piercing the dura mater (5 on each side of the midline) were drilled into the skull at the following coordinates: frontal (2.0 mm lateral, 2.0 mm anterior to Bregma), fronto-central (2.0 mm lateral, 2.0 mm posterior to Bregma), mid-central (3.5 mm lateral, 4.0 mm posterior to Bregma), postero-central (5.0 mm lateral, 6.0 mm posterior to Bregma), and posterior (3.0 mm lateral, 3.0 mm posterior to lambda) (Fig 1A). Stainless steel jeweler's screws (1.2 mm diameter) with insulated wire leads soldered pre-surgically were used as electrodes and implanted in each of the burr hole locations. The posterior electrodes served as ground and reference, respectively. The free ends of the electrode leads were inserted, using gold pins soldered pre-surgically, into a 10-pin female-to-male connector. The entire assembly was secured to the skull using dental cement. The incision was closed around the head mount using wound clips (9 mm), which were removed 7 days post-surgery. Following surgery, animals were placed into their home cages with a thermal pad and monitored until recovery. Monitoring continued daily and included weight, overall appearance, fur condition, lack of grooming and signs of infection. All animals fully recovered without any signs of distress.
Fig 1 (caption fragment, panels C-F): Shown are the dwPLI pairs between all channels averaged over all baseline recordings at each frequency; the connectivity bands identified are denoted with top bars. D: Sickness behavior at baseline, 3, 6, 24 and 48 h after LPS administration. Kruskal-Wallis multiple comparison test with respect to baseline: * p = 0.02; ** p = 0.0011; **** p < 0.0001. E-F: Cytokine expression in the cerebral cortex and hippocampal formation of rats treated with saline or at 24 and 48 h after LPS administration. Rats receiving saline injections (n = 6) were not used in recording experiments. LPS resulted in a strong response for interleukin-1 beta (IL-1β) (E) and a modest change in interferon gamma (IFN-γ) (F). * p = 0.05; **** p < 0.0001.
LPS administration Adult rats (PND 63) were injected intraperitoneally (i.p.) with 2 mg/kg LPS (Sigma, St. Louis, MO, serotype 055:B5) between 09:00 and 10:00 AM and monitored at 3, 6, 24 and 48 h for temperature and sickness behavior using the 4-point scale previously described [28,33]. Briefly, rats were checked for lethargy (demonstrated by diminished locomotion after prompting and curled body posture), ptosis (drooping eyelids), and piloerection (ruffled, puffy fur), with each symptom equal to 1 point, resulting in a scale of 0 to 3 with 0 = no symptoms and 3 = all symptoms present. EEG recordings were acquired at baseline (48 h before injection), and at 6 and 24 h after LPS administration. Six rats were killed after the 24 h recording session; the other 6 underwent recording at 48 h after LPS administration and were killed after this session. At the completion of the recording sessions, the animals were brought in their home cages to a separate procedure room (one animal at a time) and killed by CO2 asphyxiation followed by decapitation, and the brains were removed and stored for cytokine determinations (Fig 1B). Recordings Animals were allowed to acclimatize to the recording room for 1 h the day before the first EEG recording session. On the day of recording, animals were weighed and left undisturbed for 1 h before the head stage was connected to the head mount and recording began. Rats were individually housed and all recordings were obtained in the rat's home cage.
Recording sessions included pairs or triplets of animals, with cages side-by-side and with random allocation to recording position on each day. Immediately preceding the recordings, an observer rated the animals for sickness behavior, and an experimenter remained in the room for the duration of the recording to ensure the integrity of the system. Data acquisition began immediately following application of the head stages to the animals. Resting state recordings were taken for 20 min. The EEG was obtained using a wireless telemetric 8-channel rodent electrophysiological recording system (ALA Scientific; Multichannel Systems W2100 system). Data were digitized at 2,000 Hz and the EEG was continuously monitored for stable connections. EEG processing EEG data were processed using functions from EEGLAB [34]. For pre-processing, data were segmented into epochs of 500 milliseconds. Electrodes with sections of poor quality recordings were interpolated using a simple linear function over all available channels. Poor quality recording for a channel, on an epoch-by-epoch basis, was defined as: a Spearman correlation coefficient with all other electrodes < 0.35 after applying a band pass filter between 1 and 210 Hz; a standard deviation of the channel's voltage > 350; or any point-to-point difference greater than 500 μV. Epochs with fewer than 6 usable channels were rejected. Further artifact rejection was applied to data with a standard deviation < 50, a standard deviation > 750, or a voltage exceeding ± 5000 μV. The data were then converted into 3-second epochs, before a final stringent artifact rejection threshold of ± 1000 μV was applied. To determine whether interpolation of the data altered the results, an independent sensitivity analysis without interpolation was also conducted and is presented as supplementary material. The results of both analyses are highly consistent, supporting an effect of LPS on power and connectivity. Power and connectivity Power spectral density was calculated using the 'spectopo' function from EEGLAB, with a 1.5-second Hanning window and 50% overlap per epoch. Power derived from each epoch for each channel was then averaged across epochs. Band power was calculated as the average power within 6 commonly used frequency bands: delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), beta (12-25 Hz), gamma (25-80 Hz), and high frequency oscillations (HFO; 150-200 Hz). Finally, band power was averaged across electrodes to provide a single number per rat per session for analysis. Connectivity was assessed using the debiased weighted phase lag index (dwPLI) using the available functions from FieldTrip [35]. The dwPLI is an extension of the phase synchronization methods described by Nolte et al. (2004) and Stam et al. (2007) [36,37], developed to minimize the effect of spurious connectivity arising from volume conduction and from the use of a common reference electrode. The PLI takes the expectancy of the sign of the imaginary part of coherency, thereby reducing the dependence of the synchronization metric on the phase of the two signals. The dwPLI improves on the PLI by weighting the PLI by the magnitude of the imaginary part of coherency, and is defined as follows:

$$\mathrm{dwPLI} = \frac{\sum_{j \neq k} \Im(X_j)\,\Im(X_k)}{\sum_{j \neq k} \left| \Im(X_j)\,\Im(X_k) \right|}$$

where $X_j$ and $X_k$ are the complex-valued cross-spectra of trials $j$ and $k$, respectively [38]. Based on the shape of the connectivity profile as a function of frequency, three connectivity bands were defined for further analysis (Fig 1C).
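To make the estimator above concrete, the following is a minimal numpy sketch of the debiased WPLI-square estimator (Vinck et al., 2011) for a single channel pair at a single frequency. The study itself used the FieldTrip implementation; this standalone version, including all names, is illustrative only.

```python
import numpy as np

def dwpli(cross_spectra):
    """Debiased WPLI-square estimate for one channel pair at one frequency.

    cross_spectra: complex array of shape (n_trials,), the per-trial
    cross-spectra X_j between the two channels.
    """
    im = np.imag(cross_spectra)
    # (sum_j Im X_j)^2 - sum_j (Im X_j)^2 equals sum_{j != k} Im(X_j) Im(X_k),
    # i.e. the debiased numerator; the same identity gives the denominator.
    numer = np.sum(im) ** 2 - np.sum(im ** 2)
    denom = np.sum(np.abs(im)) ** 2 - np.sum(im ** 2)
    return numer / denom if denom != 0 else 0.0

# Toy check: a consistent but noisy phase lag between two channels
# yields a dwPLI close to 1.
rng = np.random.default_rng(0)
xspec = np.exp(1j * (0.5 + 0.1 * rng.standard_normal(200)))
print(dwpli(xspec))
```

The three connectivity bands were then read off this dwPLI-versus-frequency profile, as described next.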
The first band had a peak around the alpha band as its central point, with the decay on either side into the theta and beta bands as limits, resulting in a connectivity frequency band of 5 to 15 Hz. Similarly, a connectivity band in the beta-gamma range was defined by a peak between 20 and 80 Hz, and a connectivity band with a peak in the HFO range between 150 and 200 Hz (Fig 1C). These connectivity bands were consistently identified in all experimental conditions. Connectivity was averaged within frequency bands across channels for representation in the figures. For analysis, connectivity was averaged over channel pairs within or across hemispheres. Phase-amplitude coupling The connectivity analysis was supplemented with an analysis of phase-amplitude coupling using the modulation index from Tort et al. (2010) [39]. In brief, low frequency carrier signals were band pass filtered between 4 and 16 Hz, using 2 Hz step sizes and 4 Hz bandwidths, with an additional 3 Hz carrier frequency included. The high frequency signals were filtered between 30 and 200 Hz using 5 Hz steps and 10 Hz bandwidths. The filtered signals were then Hilbert transformed, and the phase and amplitude of the low and high frequency signals were extracted. The phases were binned into 20 intervals and the mean amplitude of the high frequency signal over each bin was calculated. Lastly, to obtain the modulation index, the mean amplitude was normalized by dividing each bin value by the sum over the bins (see Tort et al., 2010 for more details) [39]. Real-time RT-PCR Brains were dissected and the cerebral cortex and hippocampus from one hemisphere were pooled and processed for RNA extraction as described previously [40]. Five hundred ng of total RNA per sample were reverse transcribed into cDNA in a 20 μl reaction volume using an iScript cDNA Synthesis Kit (Bio-Rad, Hercules, CA, USA) according to the manufacturer's instructions and then diluted 1:1 with ultrapure water. Real-time RT-PCR was conducted using the iQ SYBR Green Supermix (Bio-Rad) in a 25 μl reaction using the set of primers listed in S1 Table. Melting curves confirmed the generation of a single amplification product per gene (S1 Fig). Relative expression was determined using the 2^-ΔΔCt method [41]. Statistical analysis Data were prepared for analysis by importing the power and connectivity metrics obtained from Matlab into R (version 3.4.4; R Development Core Team: http://www.R-project.org). Power, connectivity, and phase-amplitude coupling were averaged into bands before being entered into statistical analysis. Hierarchical Bayesian repeated measures ANOVAs were employed to analyze resting state power and connectivity, using the "rstan" package in R for Hamiltonian Monte Carlo sampling, a form of Markov chain Monte Carlo sampling. Kernel density estimates of the averaged and difference scores between sessions were plotted to assess suitability for analysis. All analyses were also rendered robust to outliers departing from normality through the use of a t-distribution to model the residuals. These t-distributions are wider-tailed versions of the normal distribution, where the degrees of freedom parameter (df) controls the width of the tails. The analysis is also able to explicitly take into account heterogeneity of variance by allowing different variances to be fit to each condition. The approach possesses an excellent Type I error rate when used for multi-channel data, as shown using permutation testing [20,42,43].
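Returning to the phase-amplitude coupling metric described above, here is a minimal Python sketch of the Tort et al. (2010) modulation index for one carrier/amplitude band pair. The study used a Matlab pipeline; this version, and every name in it, is illustrative, and it assumes a signal long enough that every phase bin is populated.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def tort_mi(x, fs, carrier_band, amp_band, n_bins=20):
    """Tort et al. (2010) modulation index for phase-amplitude coupling.

    x: 1-D signal; fs: sampling rate in Hz.
    carrier_band / amp_band: (low, high) in Hz, e.g. (4, 8) and (150, 200).
    """
    def bandpass(sig, lo, hi):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, sig)

    phase = np.angle(hilbert(bandpass(x, *carrier_band)))  # low-frequency phase
    amp = np.abs(hilbert(bandpass(x, *amp_band)))          # high-frequency envelope

    # Bin the phases into n_bins intervals and average the envelope per bin.
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[idx == b].mean() for b in range(n_bins)])

    p = mean_amp / mean_amp.sum()  # normalized amplitude distribution over bins
    # MI = KL divergence from the uniform distribution, normalized by log(n_bins).
    return (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)
```

A uniform envelope over phase bins gives a modulation index of 0, while an envelope concentrated in a single bin gives 1, matching the normalization in [39].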
For the analysis of power and phase-amplitude coupling, treatment time (baseline, 6 h, 24 h) was treated as a within-subjects factor, while for connectivity, treatment time and hemisphere (left, right) were treated as within-subjects factors. All models were run using 4 chains of 4,000 samples each. The first 2,000 samples of each chain were discarded as burn-in and adaptation. Convergence was monitored using the Gelman-Rubin statistic, and the total number of effective samples per parameter of interest was > 2,000. Priors considered minimally informative were scaled according to the data (2.5 × SD) and centered on a mean difference of 0 across sessions. To assess differences between groups, 95% highest density intervals (HDI) derived from each model's posterior were used [44,45]. Sickness behavior and cortico-hippocampal cytokine expression Overt signs of sickness behavior were evident in all animals at 3 h after administration of LPS, with maximal signs at 6 h. Symptoms were beginning to resolve at 24 h and were completely absent by 48 h post LPS administration (Kruskal-Wallis test = 43.2, p < 0.0001) (Fig 1D). This response corresponds to the typical behavioral profile at this dose of LPS in rats [46], which is related to the inflammatory response elicited by i.p. LPS administration. Likewise, cytokine expression was dominated by a robust IL-1β response measured at 24 and 48 h post LPS administration and a modest IFN-γ response at 24 h (Fig 1E and 1F). No differences were observed in TNF-α expression at these time points. Spectral power: LPS induces a slowing of the resting-state EEG The power spectrum averaged over all electrodes at each time point following LPS administration is presented across all frequencies up to 200 Hz (Fig 2A) or constrained to 40 Hz (Fig 2B). Connectivity: LPS reduces connectivity Connectivity was analyzed using the dwPLI for intra- and inter-hemispheric channel pairs averaged within each of the 3 defined frequency bands (see Methods). The overall effects of LPS between all electrode pairs are shown in Fig 2D-2F, with robust effects in the alpha (5-15 Hz) and beta-gamma (20-80 Hz) connectivity bands at 6 h and in the beta-gamma and HFO (150-200 Hz) bands at 24 h after LPS. Furthermore, strength-weighted connectivity between each pair of recording electrodes is presented in Fig 3A and 3C. Baseline connectivity, along with deviations at 6 and 24 h (baseline subtracted), for intra- (Fig 3A) and inter-hemispheric (Fig 3C) pairs is represented by blue (decrease) and red (increase) connecting lines. Averaged overall intra- (Fig 3B) and inter-hemispheric (Fig 3D) connectivity is shown for individual animals. Both intra- and inter-hemispheric alpha connectivity were reduced 6 h following LPS (intra- and inter-hemispheric 6 h vs baseline contrast = -0.13). Phase-amplitude coupling: LPS increases theta-gamma coupling Phase-amplitude coupling was analyzed using the modulation index obtained from two cross-frequency interactions: theta-gamma (3-8 Hz carrier and 30-100 Hz amplitude) and theta-HFO (3-8 Hz carrier and 150-200 Hz amplitude), as is common in the literature [39]. The theta-gamma modulation index was increased 6 h following LPS administration (6 h vs baseline contrast = 0.00019, 95% HDI = [0.00003, 0.00034]) (Fig 4). While it appeared that theta-gamma coupling remained elevated at 24 h post-LPS, the 95% HDI did not reveal a difference (24 h vs baseline contrast = 0.00013, 95% HDI = [-0.00002, 0.00027]).
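As an aside on the decision rule used throughout these results, a contrast was considered reliable when its 95% HDI excluded zero. A minimal sketch of an HDI computed from posterior draws, using the standard narrowest-interval construction (all names here are illustrative, not the study's code):

```python
import numpy as np

def hdi(samples, cred_mass=0.95):
    """Narrowest interval containing `cred_mass` of the posterior samples."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.floor(cred_mass * n))   # number of samples spanned by the interval
    widths = s[k:] - s[:n - k]         # width of every candidate interval
    i = np.argmin(widths)              # index of the narrowest one
    return s[i], s[i + k]

# A contrast "differs from zero" when its 95% HDI excludes 0, e.g.:
# lo, hi = hdi(posterior_draws_of_contrast); differs = not (lo <= 0 <= hi)
```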
Interestingly, theta-HFO coupling was increased at 24 h post-LPS (24 h vs baseline contrast = 0.00037, 95% HDI = [0.0001, 0.0006]), but not at 6 h (6 h vs baseline contrast = -0.00002, 95% HDI = [-0.00022, 0.00015]) (Fig 4). Discussion The present studies revealed a time-dependent effect of a single dose of LPS (2 mg/kg) on the rat resting state EEG. LPS caused an overall slowing of the EEG 6 h following administration, increasing slow wave power (delta and theta bands) and reducing high frequency power (gamma and HFO bands), with a return to baseline 24 h after injection. LPS reduced intra- and inter-hemispheric connectivity across all frequency bands analyzed, in a time- and frequency-dependent manner, showing that the neuroinflammatory processes elicited by peripheral LPS disrupt both bottom-up and top-down communication across the cortex during the peak and resolution of inflammation. The present results also indicate that while power and connectivity are related measures of EEG activity, these processes are independently affected by LPS in a time-dependent manner.
Fig 3 (caption): Resting state connectivity reductions elicited by LPS. The debiased weighted phase lag index (dwPLI) was used as the measure of connectivity between channels. Connectivity topography graphs are shown for each of the 3 frequency bands analyzed (alpha, 5-15 Hz; beta-gamma, 20-80 Hz; and HFO, 150-200 Hz), denoting the connection 'strength' between electrodes for intra- (A) and inter-hemispheric (C) connections. Baseline connectivity is denoted in red, with the strength of each connection represented by line width. Changes in connectivity between 6 or 24 h (contrast) and baseline are represented in red (increases) and blue (decreases), with the width of the line denoting the magnitude of change. Note that baseline effects are expected to be positive due to properties of the dwPLI. For analysis, connectivity was averaged over all channel pairs either within (B) or between (D) hemispheres to yield an average measure of intra- vs inter-hemispheric connectivity. No hemisphere by time interactions were detected, indicating similar effects of LPS across hemispheres. Data and analyses are presented for both hemispheres for completeness. LPS reduced intra- and inter-hemispheric alpha connectivity at 6 h, with a return to baseline at 24 h. Beta-gamma connectivity was reduced at 6 h (intra- and inter-hemispheric) and remained reduced at 24 h (inter-hemispheric) after LPS. By contrast, HFO connectivity (intra- and inter-hemispheric) was reduced at 24 h after LPS administration.
LPS-induced reductions in resting state EEG functional connectivity are temporally sustained in the high frequency range Despite an increasing interest in the study of functional coupling between brain regions, the literature on the effects of inflammatory states on functional connectivity appears limited to a recent fMRI study [47]. Inferring connectivity from EEG has posed significant challenges, as EEG-related measures of connectivity are strongly confounded by volume conduction and the use of a common reference electrode [38]. Recent connectivity methods, such as the dwPLI used in the current study, greatly reduce the influence of these confounders [38]. To our knowledge, these methods have not yet been applied to rodent surface recordings, which are conceptually similar to human EEG recordings and may possess significant translational value.
Using this approach, 3 connectivity bands were identified and clearly defined across both experiments, which closely relate to the bands identified with local field potential recordings in the dentate gyrus and CA3 regions of the hippocampus [48]. The most prominent connectivity band was a well-defined peak in the 5 to 15 Hz range. Interestingly, a similar peak in connectivity is seen around the alpha band in human resting state EEG studies using the dwPLI [20,36]. Two other connectivity bands were identified from the dwPLI spectra, one covering the beta-gamma frequencies from 20 to 80 Hz and another, higher frequency band covering 150 to 200 Hz (Fig 1C). Administration of LPS substantially reduced alpha connectivity at 6 h, with a return to baseline 24 h later. It is important to note that the generation of alpha oscillations (reflected by the alpha power value) was not affected by LPS administration. This indicates that power and dwPLI connectivity are independent components that are affected differentially by inflammation. Alpha power modulation is observed during cognitive processes that require top-down control and inhibition of irrelevant sensory information [49,50]. Thus, LPS-induced reductions in alpha connectivity specifically (and not power) may reflect an inability to coordinate the inhibition of irrelevant information. Alternatively, the overall reduction in external sensory processing elicited during the acute phase of the inflammatory reaction [1] may render the functional significance of alpha coordination less necessary. These effects on alpha connectivity were robust 6 h after LPS, coincident with the full manifestation of the behavioral symptoms of sickness, including lethargy, curled posture, piloerection and ptosis, and are logically associated with this behavioral state. Alpha connectivity returned to baseline 24 h after LPS, at the time that these behavioral symptoms subsided, further suggesting a relationship between reduced sensory processing and alpha connectivity during the LPS challenge. Remarkable effects of LPS on EEG resting state connectivity were observed in the high frequency range covering the beta-gamma (20 to 80 Hz) and HFO (150 to 200 Hz) peaks. Reductions in beta-gamma connectivity were observed at 6 h after LPS, at the time that power was also reduced in this band. Furthermore, reductions in both beta-gamma and HFO connectivity were present 24 h after LPS, when power had returned to baseline levels, further supporting the notion that these two measures reflect closely related yet distinct phenomena. As mentioned before, at this time-point the behavioral symptoms of sickness have subsided and the animals recover most of the psychomotor functions affected during the acute phase. However, extensive studies in the literature, including work from our group, have shown that motivation and emotional processing remain affected during this period of recovery from sickness [1,28,30,32,46]. It is possible, then, that the persistent reductions in connectivity in these bands 24 h after LPS are directly associated with the reduced motivational and cognitive processing elicited by the LPS challenge. Under the framework of several computational theories [51-53], gamma and other high frequency oscillations are currently believed to drive feed-forward activations that most prominently, but not exclusively, originate from sensory cortices and travel "up" the cortical hierarchy.
Beta-gamma connectivity reductions tentatively represent a combined impairment of feedforward and feedback communication, with the beta band likely serving as a fast feedback prediction signal and the gamma band serving as a predominantly efferent/feedforward prediction error signal. Similarly, reduced HFO connectivity likely indicates a failure of the effectiveness of feedforward communication in the cortex, despite the ability to generate HFO at baseline strengths 24 h post-LPS, highlighting that reduced HFO connectivity is not merely a function of a reduction in signal strength. It would be relevant to test to what extent the transfer of information, rather than the generation of the oscillations, is responsible for the reduced motivational component during recovery from LPS challenge. Furthermore, future studies could examine the effects of neurotransmitter modulators co-administered at the time of the peak of the inflammatory reaction to assess the neurotransmitter systems affecting connectivity during LPS challenge. LPS increases theta-gamma and theta-HFO coupling Cross frequency coupling, and in particular theta-gamma coupling, has been proposed to represent a form of neural communication and computation that emerges during higher order brain functions [54,55]. In brief, the amplitude of high frequency oscillations can be modulated by the phase of lower 'carrier' frequencies during the encoding of complex behavioral processes. Phase-amplitude coupling is often studied in the context of memory operations, targeting hippocampal-frontocortical communication [39,56-58]. To our knowledge, theta-gamma coupling has not yet been studied in resting state EEG in animals undergoing an inflammatory challenge. Our data indicated increases in theta-gamma and theta-HFO coupling elicited by LPS that occurred in a time-dependent manner. That is, theta-gamma coupling was increased at 6 h and theta-HFO coupling at 24 h after LPS. While speculative at this point, these effects are consistent with the concept of an increased effort of neural networks to maintain cognitive processing. For example, Tamura et al. (2017) [58] elegantly demonstrated increases in theta-gamma coupling in a genetic mouse model of cognitive dysfunction during a spatial working memory task. The increase in coupling was suggested to reflect a compensatory mechanism to maintain behavioral performance, a notion further supported by optogenetic and behavioral manipulations showing increased theta-gamma coupling with task difficulty [58]. This "compensatory" effect of theta-gamma coupling is consistent with recent findings of increased resting-state cross-frequency coupling in people with Alzheimer's disease [59]. This was also suggested to reflect an increase in the neuronal resources required to maintain the resting brain state and, potentially, an attenuated complexity of the neuronal network [59]. Further studies may assess the possibility of a compensatory role for the increased cross-frequency coupling elicited by LPS through assessment of explicit spatial working memory tests. LPS induced expected changes in EEG spectral power LPS is known to cause an increase in low frequency power, in particular delta power, associated with sleep alterations [60-63]. Our study showed increases in delta and theta power 6 h after LPS, which coincides with the peak of sickness behavior. Moreover, our study further reports decreased power in the gamma and HFO bands at 6 h post LPS.
Although we did not measure sleep, visual scoring of sickness symptoms confirmed increased lethargy and curled posture typical of sleep in rats at 6 h post LPS. Moreover, the time course of EEG disruption over the session was analyzed under the assumption that the animals would be less likely to be asleep immediately after the head stage was attached. This analysis (S6 Fig) shows that the effects reported in this study are present within the first 10 minutes of recording, suggesting that they are unlikely to be driven primarily by sleep. Notably, power in all frequency bands was restored 24 h after LPS, supporting the notion that power and connectivity are affected by different mechanisms elicited by LPS. However, this assumption was not evaluated in the present experiments and will be a matter of future studies. Potential neuroimmune interactions during LPS relevant to the effects on cortical EEG Peripheral administration of LPS in rats causes a robust and widespread expression of the cytokine IL-1β across the entire CNS, which varies in its regional distribution and level of expression in a temporal manner [22-25,64-66]. Both the hippocampus and cortex respond with widespread expression of IL-1β, produced mainly by microglial cells [23,25]. This is accompanied by the expression of additional cytokines, including TNF-α, IL-6 and IFN-γ, and inflammatory mediators such as nitric oxide (NO) [67,68]. The modulatory actions of these cytokines on neuronal electrical activity have been documented by several studies [69-75]. Moreover, direct effects of IL-1β on GABAergic and glutamatergic neurotransmission have been described in a number of studies [69,72,74]. Furthermore, these cytokines, as well as LPS, increase kynurenine metabolism in the brain, resulting in activation of the KP [76,77] and producing several neuroactive metabolites that modulate glutamatergic neurotransmission [8,78]. Of note, kynurenic acid, a metabolite of the KP that acts as an endogenous NMDA antagonist [8], is increased by LPS challenge [79,80]. Thus, the effects of LPS on EEG spectra may primarily be the result of interference with glutamatergic and GABAergic neurotransmission in a temporally dependent manner. This may be reflected by the increases in low frequency power at earlier time-points followed by changes in higher frequency bands at later time-points. Moreover, we have recently shown that kynurenic acid producing astrocytes are concentrated in white matter tracts, including the corpus callosum [40]; therefore, effects of LPS on connectivity were expected and confirmed. Of interest, future studies could examine the effects of GABA and glutamate modulators co-administered at different times of the inflammatory reaction to assess the neurotransmitter systems affected during LPS challenge. Conclusions The present studies, using multi-electrode array recordings of resting state EEG in rats, identified several connectivity bands that were significantly impacted by the neuroinflammatory process triggered by peripheral administration of LPS. These effects were time-dependent and coincided with the different behavioral states associated with the emergence and resolution of the symptoms of sickness. These studies reveal specific effects of inflammation on brain EEG functional connectivity, thereby contributing to our understanding of the impact of neuroinflammation on mechanisms linking the immune response with higher order brain functions.
The data are presented as a 5-point moving average (i.e., each data point averages five 3 s epochs). Changes in low frequency power (delta, theta, alpha, and beta) were most evident 6 h following LPS administration, during the first 10 minutes of recording. By comparison, LPS-induced reductions in high frequency power bands (gamma and HFO) were persistent across the whole recording period. At 24 h following LPS administration, the power time-course in each frequency band was consistent with baseline, with the exception of the high frequency power bands, which showed gradual reductions over the 20 min recording session. (TIF)
2018-11-15T22:18:44.829Z
2018-11-12T00:00:00.000
{ "year": 2018, "sha1": "74d0d3722fe2d17f04f895e6c512dac3101eafc7", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0206985&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "74d0d3722fe2d17f04f895e6c512dac3101eafc7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
2687173
pes2o/s2orc
v3-fos-license
RAM-Efficient External Memory Sorting

In recent years a large number of problems have been considered in external memory models of computation, where the complexity measure is the number of blocks of data that are moved between slow external memory and fast internal memory (also called I/Os). In practice, however, internal memory time often dominates the total running time once I/O-efficiency has been obtained. In this paper we study algorithms for fundamental problems that are simultaneously I/O-efficient and internal memory efficient in the RAM model of computation.

Introduction

In the last two decades a large number of problems have been considered in the external memory model of computation, where the complexity measure is the number of blocks of elements that are moved between external and internal memory. Such movements are also called I/Os. The motivation behind the model is that random access to external memory, such as disks, is often many orders of magnitude slower than random access to internal memory; on the other hand, if external memory is accessed sequentially in large enough blocks, then the cost per element is small. In fact, disk systems are often constructed such that the time spent on a block access is comparable to the time needed to access each element in a block in internal memory. Although the goal of external memory algorithms is to minimize the number of costly blocked accesses to external memory when processing massive datasets, it is also clear from the above that if the internal processing time per element in a block is large, then the practical running time of an I/O-efficient algorithm is dominated by internal processing time. Often I/O-efficient algorithms are in fact not only efficient in terms of I/Os, but can also be shown to be internal memory efficient in the comparison model. Still, in many cases the practical running time of I/O-efficient algorithms is dominated by the internal computation time. Thus both from a practical and a theoretical point of view it is interesting to investigate how internal-memory efficient algorithms can be obtained while simultaneously ensuring that they are I/O-efficient. In this paper we consider algorithms that are both I/O-efficient and efficient in the RAM model in internal memory.

Previous results. We will be working in the standard external memory model of computation, where M is the number of elements that fit in main memory and an I/O is the process of moving a block of B consecutive elements between external and internal memory [1]. We assume that N ≥ 2M, M ≥ 2B and B ≥ 2. Computation can only be performed on elements in main memory, and we will assume that each element consists of one word. We will sometimes assume the comparison model in internal memory, that is, that the only computation we can do on elements is comparisons. However, most of the time we will assume the RAM model in internal memory. In particular, we will assume that we can use elements for addressing, e.g. trivially implementing permuting in linear time. Our algorithms will respect the standard so-called indivisibility assumption, which states that at any given time during an algorithm the original N input elements are stored somewhere in external or internal memory. Our internal memory time measure is simply the number of performed operations; note that this includes the number of elements transferred between internal and external memory.
Aggarwal and Vitter [1] proved that external merge- and distribution-sort are I/O-optimal when the comparison model is used in internal memory, and in the following we will use sort_E(N) to denote the number of I/Os per block of elements of these optimal algorithms, that is, sort_E(N) = O(log_{M/B}(N/B)) and external comparison model sort takes Θ((N/B)·sort_E(N)) I/Os. (As described below, the I/O-efficient algorithms we design will move O(N·sort_E(N)) elements between internal and external memory, so O(sort_E(N)) will also be the per-element internal memory cost of obtaining external efficiency.) When no assumptions other than the indivisibility assumption are made about internal memory computation (i.e. covering our definition of the use of the RAM model in internal memory), Aggarwal and Vitter [1] proved that permuting N elements according to a given permutation requires Ω(min{N, (N/B)·sort_E(N)}) I/Os. Thus this is also a lower bound for RAM model sorting. For all practical values of N, M and B the bound is Ω((N/B)·sort_E(N)). Subsequently, a large number of I/O-efficient algorithms have been developed. Of particular relevance for this paper, several priority queues have been developed where insert and deletemin operations can be performed in O((1/B)·sort_E(N)) I/Os amortized [2,4,8]. The structure by Arge [2] is based on the so-called buffer-tree technique, which uses O(M/B)-way splitting, whereas the other structures also use O(M/B)-way merging. In the RAM model the best known sorting algorithm uses O(N log log N) time [6]. Similar to the I/O-case, we use sort_I(N) = O(log log N) to denote the per-element cost of the best known sorting algorithm. If randomization is allowed then this can be improved to O(√(log log N)) expected time [7]. A priority queue can also be implemented so that the cost per operation is O(sort_I(N)) [9].

Our results. In Section 2 we first discuss how both external merge-sort and external distribution-sort can be implemented to use optimal O(N log N) time if the comparison model is used in internal memory, by using an O(N log N) sorting algorithm and (in the merge-sort case) an O(log N) priority queue. We also show how these algorithms can relatively easily be modified to use O(N·(sort_I(N) + sort_I(M/B)·sort_E(N))) time if the RAM model is used in internal memory, by using an O(N·sort_I(N)) sorting algorithm and an O(sort_I(N)) priority queue. The question is of course if the above RAM model sorting algorithms can be improved. In Section 2 we discuss how it seems hard to improve the running time of the merge-sort algorithm, since it uses a priority queue in the merging step. By using a linear-time internal-memory splitting algorithm, however, rather than an O(N·sort_I(N)) sorting algorithm, we manage to improve the running time of external distribution-sort to O(N·(sort_I(N) + sort_E(N))). Our new split-sort algorithm still uses O((N/B)·sort_E(N)) I/Os. Note that for small values of M/B the N·sort_E(N)-term, that is, the time spent on moving elements between internal and external memory, dominates the internal time. Given the conventional wisdom that merging is superior to splitting in external memory, it is also surprising that a distribution algorithm outperforms a merging algorithm. In Section 3 we develop an I/O-efficient RAM model priority queue by modifying the buffer-tree based structure of Arge [2].
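To make the asymptotic terms concrete, the following Python sketch (ours, with hypothetical machine parameters) evaluates sort_E(N) = Θ(log_{M/B}(N/B)) against sort_I(N) = O(log log N); for a small fan-out M/B, the transfer term N·sort_E(N) can indeed dominate the internal time, as the text notes.

```python
import math

def sort_E(N, M, B):
    """I/Os per block of an optimal external comparison sort:
    sort_E(N) = Theta(log_{M/B}(N/B))."""
    return math.ceil(math.log(N / B, M / B))

def sort_I(N):
    """Per-element cost of the best known deterministic RAM sort [6]:
    O(log log N)."""
    return math.log2(math.log2(N))

# Hypothetical machine: 2^17 elements of main memory, blocks of 2^13
# elements, an input of 2^37 elements; the fan-out M/B is only 16 here.
N, M, B = 2**37, 2**17, 2**13
print("sort_E(N) =", sort_E(N, M, B))        # -> 6
print("sort_I(N) ~", round(sort_I(N), 2))    # -> ~5.21
```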
The main modification consists of removing the need for sorting of O(M) elements every time a so-called buffer-emptying process is performed. The structure supports insert and deletemin operations in O((1/B)·sort_E(N)) I/Os and O(sort_I(M) + sort_E(N)) time amortized. Finally, in Section 4 we show that when (N/B)·sort_E(N) = o(N) (and our sorting algorithms are I/O-optimal), any I/O-optimal sorting algorithm must transfer a number of elements between internal and external memory equal to Θ(B) times the number of I/Os it performs, that is, it must transfer Ω(N·sort_E(N)) elements and thus also use Ω(N·sort_E(N)) internal time. In fact, we show a lower bound on the number of I/Os needed by an algorithm that transfers b ≤ B elements on the average per I/O, significantly extending the lower bound of Aggarwal and Vitter [1]. The result implies that (in the practically realistic case) when our split-sort and priority queue sorting algorithms are I/O-optimal, they are in fact also CPU optimal in the sense that their running time is the sum of an unavoidable term and the time used by the best known RAM sorting algorithm. As mentioned above, the lower bound also means that the time spent on moving elements between internal and external memory, resulting from the fact that we are considering I/O-efficient algorithms, can dominate the internal computation time; that is, considering I/O-efficient algorithms implies that less internal-memory efficient algorithms can be obtained than if not considering I/O-efficiency. Furthermore, we show that when B ≤ M^{1−ε} for some constant ε > 0 (the tall cache assumption) the same Ω(N·sort_E(N)) number of transfers is needed for any algorithm using less than εN/4 I/Os (even if it is not I/O-optimal). To summarize our contributions, we open up a new area of algorithms that are both RAM-efficient and I/O-efficient. The area is interesting from both a theoretical and practical point of view. We illustrate that existing algorithms, in particular multiway merging based algorithms, are not RAM-efficient, and develop a new sorting algorithm that is both efficient in terms of I/O and RAM time, as well as a priority queue that can be used in such an efficient algorithm. We prove a lower bound that shows that our algorithms are both I/O and internal-memory RAM model optimal. The lower bound significantly extends the Aggarwal and Vitter lower bound [1], and shows that considering I/O-efficient algorithms influences how efficient internal-memory algorithms can be obtained.

Sorting

External merge-sort. In external merge-sort, Θ(N/M) sorted runs are first formed by repeatedly loading M elements into main memory, sorting them, and writing them back to external memory. In the first merge phase these runs are merged Θ(M/B) at a time, and merging continues in this way until a single sorted run remains.

Split-sort. While it seems hard to improve the RAM running time of the external merge-sort algorithm, we can actually modify the external distribution-sort algorithm further and obtain an algorithm that in most cases is optimal both in terms of I/O and time. This split-sort algorithm basically works like the distribution-sort algorithm with the split algorithm modification described above. However, we need to modify the algorithm further in order to avoid the sort_I(M)-term in the time bound that appears due to the repeated sorting of O(M) elements in the split element finding algorithm, as well as in the actual split algorithm.
First of all, instead of sorting each batch of M/2 elements in the split algorithm to split them over s = M/B − 1 < M/2 split elements, we use a previous result showing that the split can actually be performed in linear time.

Lemma 1 (Han and Thorup [7]). In the RAM model, N elements can be split over N^{1−ε} split elements in linear time and space, for any constant ε > 0.

Secondly, in order to avoid the sorting in the split element finding algorithm of Aggarwal and Vitter [1], we design a new algorithm that finds the split elements on-line as part of the actual split algorithm; that is, we start the splitting with no split elements at all and gradually add at most s = M/B − 1 split elements, one at a time. An online split strategy was previously used by Frigo et al. [5] in a cache-oblivious algorithm setting. More precisely, our algorithm works as follows. To split N input elements we, as previously, repeatedly bring M/2 elements into main memory and distribute them to buffers using the current split elements and Lemma 1, outputting the B elements in a buffer when it runs full. However, during the process we keep track of how many elements are output to each subset. If the number of elements in a subset X_i becomes 2N/s we pause the split algorithm, compute the median of X_i, add it to the set of splitters, and split X_i at the median element into two sets of size N/s. Then we continue the splitting algorithm. It is easy to see that the above splitting process results in at most s + 1 subsets containing between N/s and 2N/s − 1 elements each, since a set is split when it has 2N/s elements and each new set (defined by a new split element) contains at least N/s elements. The actual median computation and the split of X_i can be performed in linear time and O(|X_i|/B) I/Os; a sketch of this online splitting strategy is given below.

Remarks. Since sort_I(M) + sort_E(N) ≥ sort_I(N), our split-sort algorithm uses Ω(N·sort_I(N)) time. In Section 4 we prove that the algorithm in some sense is optimal both in terms of I/O and time. Furthermore, we believe that the algorithm is simple enough to be of practical interest.

Priority queue

In this section we discuss how to implement an I/O- and RAM-efficient priority queue by modifying the I/O-efficient buffer tree priority queue [2]. To support insertions efficiently in a "lazy" manner, each internal node is augmented with a buffer of size M, and an insertion buffer of size at most B is maintained in internal memory. To support deletemin operations efficiently, a RAM-efficient priority queue [9] supporting both deletemin and deletemax, called the mini-queue, is maintained in main memory containing the up to M/2 smallest elements in the priority queue.

Insertion. To perform an insertion we first check if the element to be inserted is smaller than the maximal element in the mini-queue, in which case we insert the new element in the mini-queue and continue the insertion process with the currently maximal element in the mini-queue. Next we insert the element to be inserted in the insertion buffer. When we have collected B elements in the insertion buffer we insert them in the buffer of the root. If this buffer now contains more than M/2 elements we perform a buffer-emptying process on it, "pushing" elements in the buffer one level down to buffers on the next level of T: we load the M/2 oldest elements into main memory along with the fewer than M/B splitting elements, distribute the elements among the splitting elements, and finally output them to the buffers of the relevant children.
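The following small in-memory Python sketch (ours) models only the splitter bookkeeping of the online splitting strategy; the real algorithm additionally blocks output into buffers of B elements and uses the linear-time split of Lemma 1 where this sketch uses binary search.

```python
import statistics
from bisect import bisect_left, insort

def online_split(elements, s):
    """Start with no splitters; whenever a subset reaches 2n/s elements
    (n = total input size), split it at its median and record a new
    splitter, so at most s - 1 splitters are ever created."""
    n = len(elements)
    splitters = []            # current split elements, kept sorted
    subsets = [[]]            # subsets[i] holds elements between splitters
    for x in elements:
        i = bisect_left(splitters, x)        # Lemma 1 stand-in
        subsets[i].append(x)
        if len(subsets[i]) >= 2 * n // s:    # subset full: split at median
            m = statistics.median_low(subsets[i])
            insort(splitters, m)
            lo = [y for y in subsets[i] if y <= m]
            hi = [y for y in subsets[i] if y > m]
            subsets[i:i + 1] = [lo, hi]
    return splitters, subsets

splitters, subsets = online_split(list(range(1000, 0, -1)), s=8)
print(len(splitters), [len(sub) for sub in subsets])
```

Every subset ends up with between n/s and 2n/s − 1 elements, matching the size invariant argued above.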
Since the splitting and buffer elements fit in memory and the buffer elements are distributed to M/B buffers one level down, the buffer-emptying process is performed in O(M/B) I/Os. Since we distribute M/2 elements using M/B splitters, the process can be performed in O(M) time (Lemma 1). After emptying the buffer of the root, some of the nodes on the next level may contain more than M/2 elements; if they do, we perform recursive buffer-emptying processes on these nodes. Note that this way buffers will never contain more than M elements. When between 1 and M/2 elements are pushed down to a leaf (when performing a buffer-emptying process on its parent), resulting in the leaf containing more than M (and fewer than 3M/2) elements, we split it into two leaves containing between M/2 and 3M/4 elements each. We can easily do so in O(M/B) I/Os and O(M) time [1]. As a result of the split the parent node v gains a child, that is, a new leaf is inserted. If needed, T is then balanced using node splits as in a normal B-tree, that is, if the parent node now has M/B children it is split into two nodes.

Deletemin. To perform a deletemin operation we first check if the mini-queue contains any elements. If it does we simply perform a deletemin operation on it and return the retrieved element, using O(sort_I(M)) time and no I/Os. Otherwise we perform buffer-emptying processes on all nodes on the leftmost path in T, starting at the root and moving towards the leftmost leaf. After this the buffers on the leftmost path are all empty and the smallest elements in the structure are stored in the leftmost leaf. We load the between M/2 and M elements in the leaf into main memory, sort them, remove the smallest M/2 elements and insert them in the mini-queue in internal memory. If this results in the leaf having fewer than M/2 elements, we insert the remaining elements in a sibling and delete the leaf. If the sibling now has more than M elements we split it. As a result of this the parent node v may lose a child. If needed, T is then rebalanced using node fusions as in a normal B-tree, that is, if v now has (1/2)·M/B children it is fused with its sibling (possibly followed by a split). As with splits after insertion of a new leaf, the rebalancing may propagate up along the path to the root (when the root only has one leaf left it is removed). Note that no buffer merging is needed since the buffers on the leftmost path are all empty.

Remarks. Our priority queue can obviously be used in a simple O((N/B)·sort_E(N)) I/O and O(N·(sort_I(M) + sort_E(N))) time sorting algorithm. Note that it is essential that a buffer-emptying process does not require sorting of the elements in the buffer. In normal buffer-trees [2] such a sorting is indeed performed, mainly to be able to support deletions and (batched) rangesearch operations efficiently. Using a more elaborate buffer-emptying process we can also support deletions without the need for sorting of buffer elements.

Lower bound

Assume that (N/B)·sort_E(N) = o(N) and, for simplicity, also that B divides N. Recall that under the indivisibility assumption we assume the RAM model in internal memory but require that at any time during an algorithm the original N elements are stored somewhere in memory; we allow copying of the original elements. The internal memory contains at most M elements and the external memory is divided into N blocks of B elements each; we only need to consider N blocks, since we are considering algorithms doing less than N I/Os.
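A toy, fully in-memory Python model of the insert/deletemin flow described above follows (our illustration): a small "mini-queue" of the overall smallest elements sits in front of a lazily flushed insertion buffer, while the external buffer tree T, its blocked buffers and its rebalancing are all elided.

```python
from bisect import insort

class LazyPQ:
    """Toy model: mini-queue + insertion buffer; self.tree stands in for
    the external buffer tree (here just an unsorted pool)."""
    def __init__(self, mini_cap, buf_cap):
        self.mini = []                 # sorted: up to mini_cap smallest
        self.mini_cap = mini_cap
        self.buf, self.buf_cap = [], buf_cap
        self.tree = []                 # stand-in for the buffer tree

    def insert(self, x):
        # If x beats the mini-queue's maximum, swap: keep x among the
        # smallest and push the old maximum down instead.
        if self.mini and x < self.mini[-1]:
            insort(self.mini, x)
            x = self.mini.pop()
        self.buf.append(x)
        if len(self.buf) == self.buf_cap:   # flush B elements to the root
            self.tree.extend(self.buf)
            self.buf.clear()

    def deletemin(self):
        if not self.mini:                   # refill from "leftmost leaf"
            pool = sorted(self.tree + self.buf)
            self.mini = pool[:self.mini_cap]
            self.tree, self.buf = pool[self.mini_cap:], []
        return self.mini.pop(0)

pq = LazyPQ(mini_cap=4, buf_cap=3)
for v in [9, 2, 7, 1, 8, 3]:
    pq.insert(v)
print([pq.deletemin() for _ in range(6)])   # -> [1, 2, 3, 7, 8, 9]
```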
During an algorithm, we let X denote the set of original elements (including copies) in internal memory and Y_i the set of original elements (including copies) in the i'th block; an I/O transfers up to B elements between a Y_i and X. Note that in terms of CPU time, an I/O can cost anywhere between 1 and B (transfers). In the external memory permuting problem, we are given N elements in the first N/B blocks and want to rearrange them according to a given permutation; since we can always rearrange the elements within the N/B blocks in O(N/B) I/Os, a permutation is simply given as an assignment of elements to blocks (i.e. we ignore the order of the elements within a block). In other words, we start with a distribution of the N elements over X, Y_1, Y_2, ..., Y_N, and should produce another given distribution of the same elements.

To show that any permutation algorithm that performs O((N/B)·sort_E(N)) I/Os has to transfer Ω(N·sort_E(N)) elements between internal and external memory, we first note that at any given time during a permutation algorithm we can identify a distribution (or more) of the original N elements (or copies of them) over X, Y_1, Y_2, ..., Y_N. We then bound the number of distributions that can be created using T I/Os, given that the i'th I/O transfers b_i elements. The T I/Os can create at most (N·(2eM/b)^{2b})^T distributions, where b is the average of the b_i's; that this expression bounds the number of distributions for fixed b_1, ..., b_T can be seen by just considering two values b_1 and b_2 with average b. Next we consider the number of distributions that can be created using T I/Os for all possible values of b_i, 1 ≤ i ≤ T, with a given average b. This can trivially be bounded by multiplying the above bound by B^T (since this is a bound on the total number of possible sequences b_1, b_2, ..., b_T). Thus the number of distributions is bounded by B^T·(N·(2eM/b)^{2b})^T = ((BN)·(2eM/b)^{2b})^T. Since any permutation algorithm needs to be able to create Ω((N/B)^N) distributions, we get a lower bound on the number of I/Os T in terms of the average number b of transferred elements per I/O. Thus any algorithm performing the optimal O((N/B)·sort_E(N)) I/Os must transfer Ω(N·sort_E(N)) elements between internal and external memory.

Reconsider the above analysis under the tall cache assumption B ≤ M^{1−ε} for some constant ε > 0. In this case, the number of distributions any permutation algorithm needs to be able to create is Ω((N/B)^N) = Ω(N^{εN}). Above we proved that with T I/Os transferring an average number of b elements an algorithm can create at most ((BN)·(2eM/b)^{2b})^T < N^{2T}·M^{2bT} distributions. Thus we have M^{2bT} ≥ N^{εN−2T}. For T < εN/4, we get M^{2bT} ≥ N^{εN/2}, and thus that the number of transferred elements bT is Ω(N·log_M N). Since the tall cache assumption implies that log(N/B) = Θ(log N) and log(M/B) = Θ(log M), we have that N·log_M N = Θ(N·log_{M/B}(N/B)) = Θ(N·sort_E(N)). Thus, when B ≤ M^{1−ε} for some constant ε > 0, any permuting algorithm using less than εN/4 I/Os must transfer Ω(N·sort_E(N)) elements between internal and external memory under the indivisibility assumption.

Remark. The above means that in practice, where (N/B)·sort_E(N) = o(N), our O((N/B)·sort_E(N)) I/O and O(N·(sort_I(N) + sort_E(N))) time split-sort and priority queue sort algorithms are not only I/O-optimal but also CPU optimal, in the sense that their running time is the sum of an unavoidable term and the time used by the best known RAM sorting algorithm.
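For reference, the chain of inequalities in the tall-cache step (our rendering of the bounds already stated above) is:

```latex
M^{2bT} \ge N^{\varepsilon N - 2T},
\qquad T < \frac{\varepsilon N}{4}
\;\Longrightarrow\;
M^{2bT} \ge N^{\varepsilon N/2}
\;\Longrightarrow\;
2bT \log M \ge \frac{\varepsilon N}{2}\,\log N
\;\Longrightarrow\;
bT \ge \frac{\varepsilon}{4}\, N \log_M N = \Omega\!\left(N \log_M N\right).
```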
2013-12-10T09:43:13.000Z
2013-12-06T00:00:00.000
{ "year": 2015, "sha1": "b0fb112f95071478978a51987521b4ac62ae242e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1312.2018", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cb99391781a3bd92aca545f6aab92a9e34ac50fc", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
14659247
pes2o/s2orc
v3-fos-license
Gauge mediation with D terms

I propose implementing General Gauge Mediation using the class of SU(N) × U(1) SUSY breaking models. As an existence proof, I have utilized the 4-1 model in building multi-parameter gauge mediation. These hidden sectors are relatively easy to use and avoid several model building pitfalls such as runaway directions. In addition, the models require no special tuning and may produce as many parameters as general gauge mediation allows.

Introduction

Minimal Gauge Mediation (MGM) provides a simple and predictive mass scheme for supersymmetric models [1]. MGM is achieved by adding sets of vector-like multiplets to the MSSM which are charged under the normal gauge groups and couple to the hidden sector fields that participate in SUSY breaking. These 'messengers' acquire both a supersymmetric and a non-supersymmetric mass, and once they are integrated out, these terms give gauginos mass at one loop and scalars mass at two loops. In its simplest incarnation, the N messengers X are coupled to a hidden sector singlet field φ that acquires a scalar vev and an F term in the hidden sector. This yields one-loop gaugino masses, and the entire mass spectrum is determined by a single parameter F_φ/⟨φ⟩; the gaugino and scalar mass ratios are completely fixed. However, because gauge mediation predicts a spectrum where sparticle masses scale with powers of their gauge couplings, everything that is charged under QCD is very heavy. Thus even though this model is predictive and flavor blind, problems persist in the spectrum. In particular, the mass relationship M_1:M_2:M_3 ∼ 1:2:7 is predicted for gauginos. The chargino lower mass bound is 105 GeV [2]. We might infer from indirect signal searches, like tri-leptons, that the lightest chargino is even heavier, approaching 150 GeV [3]. In MGM, setting the chargino above the mass bound then requires a very heavy gluino due to the fixed gaugino mass ratio. Lower bounds on the lightest charged sparticles of 100 GeV also imply heavy squarks, if minimal gauge mediation holds. Squarks of 700 GeV would induce large corrections to the up-type Higgs mass parameter. The conditions for electroweak breaking are known, and Higgs sector parameters must cancel down to the Z mass. Therefore the amount of tuning needed in the Higgs sector is of order (m_{h_u}/m_Z)^2, or sub-percent. A compressed and lighter spectrum would alleviate tuning problems that exist in the Higgs sector and open up SUSY parameter space to new and interesting signals. This would require modifying the gauge mediated predictions for sparticles. Meade et al. have laid out the formalism of General Gauge Mediation (GGM), whereby the gauge mediated spectrum may be determined by up to 6 parameters, including 3 independent parameters in the gaugino sector [4]. Several recent models fall under the category of GGM model building, with weakly coupled renormalizable operators employing chiral fields only. For example, Extraordinary GM compresses the spectrum without altering the gaugino mass prediction of MGM [5]. Other proposals compress the spectrum and achieve the full range of GGM parameters [6]. Weakly coupled renormalizable models which change the gaugino mass ratio prediction require, at least, splitting the doublet and triplet messenger couplings and coupling a single set of messengers to multiple scalars.
Thus we would have a superpotential in which a single set of messengers couples to multiple scalars, with split doublet and triplet couplings. After suitable field redefinitions, the gaugino masses become proportional to two scales. We see that the gaugino mass ratio of minimal gauge mediation is not preserved and we have achieved a two-parameter model. Models may be complicated even further by adding multiple scalars and multiple messengers. However, models like this pose difficult model building challenges. For example, in models with multiple messengers, hypercharge D terms induce one-loop masses for scalars proportional to their hypercharge, unless an interchange symmetry of the messengers can be made to appear in the low energy theory. In models with multiple scalars which are built purely out of chiral fields, some care is required to make sure the theory is stabilized far from runaway directions, so that all fields acquire proper vevs. In addition there is a generic problem with phases. In minimal gauge mediation, gaugino and scalar masses all come from a single mass scale, so there are no relative phases between the gaugino masses. However, when model building with multiple scalars and messengers, splitting the gaugino mass requires the addition of many new couplings, and in general phases occur. Instead, I propose the introduction of a single new source of SUSY breaking from a hidden sector U(1). This generates a new operator in the theory, a non-supersymmetric mass term which can be added to alter the minimal gauge mediation prediction without multiple scalars or multiple messengers. The GGM parameter counting is distinctly different from models in [6]. In addition, the hidden sector dynamics can be implemented in simple and familiar SU(N) × U(1) models. In Section 2 I introduce the D-term operator and use it to build the simplest GGM model. In Section 3 I review the dynamics of the 4-1 hidden sector. In Section 4 I use other operators in the 4-1 model to build gauge mediation and make an attempt at a unified model without phases. Section 5 concludes.

SUSY Breaking D terms

In addition to F terms in the hidden sector, we may consider another source of SUSY breaking: a U(1) gauge field whose D term acquires a vev by some dynamical mechanism. Since we want a D term that is the same size as the overall SUSY breaking scale, we may deduce that the D term vev is itself closely connected to, even required for, supersymmetry breaking. The lowest dimension new operator that one may write down with all indices contracted couples the hidden U(1) to matter X in a vector-like representation. When the D term is set to its vev, this term becomes an additional B term, which is a source for non-supersymmetric masses. Such a term has been used as a source for SUSY breaking messenger masses, for example in [7]. The new operator only adds one more parameter to the low energy theory, the scale √(cD)/M, so we may maintain an economy of parameters. Scalar masses for squarks and sleptons cannot be generated through direct contact terms with the hidden sector gauge field: holomorphy prevents us from writing such a term in the superpotential. Instead, the lowest dimension mass term we may write is highly suppressed and not generated by any divergent diagrams.

A Simple Way To Use D-terms

Consider a messenger superpotential with a single scalar field Z that gets an F term and a scalar vev, a hidden sector U(1) field, and split couplings for the doublet and triplet messengers. Couplings between scalar fields and the extra gauge fields may be forbidden by R symmetry.
Z gets an F term and a scalar vev, Z = z + θ²F_z. Messengers get a SUSY breaking mass from the F term and the extra D term vev. Define B = D²/M; the gaugino masses then depend on both scales. The B term may be chosen to be of the same order as F_z. If the ratio of couplings λ_Q/y_Q is smaller than the ratio λ_L/y_L, we lower the mass ratio of gluinos to the other MSSM gauginos. Notice that there are three distinct parameters: F_z/z, λ_Q·B/(y_Q·z), and λ_L·B/(y_L·z).

The 4-1 Model

We now must address the best way to achieve a D-term vev. To get a D term of sufficient size, comparable to the overall scale of SUSY breaking, we may build a model in which the U(1) is required for supersymmetry breaking. The '4-1' model of Dine and Nelson is a simple and interesting example [8]. The model has an SU(4) × U(1) gauge group. The matter content is as follows (subscripts indicate U(1) charges): an antisymmetric tensor A_2, a fundamental F_{−3}, an anti-fundamental F̄_{−1} and a singlet S_4. There is only one allowed superpotential term. SU(4) then confines and the gauginos condense, generating a non-perturbative term in the superpotential. This model contains a non-anomalous R symmetry which is broken once the cosmological constant is tuned to zero, and hence a massive R axion [9]. The scale of SUSY breaking we will assume is high enough that the R axion is unobservable. After a convenient choice of field parametrization and the rescaling φ → Λλ^{1/5}φ, the D-term and F-term contributions to the scalar potential can be written down, and we may now minimize the potential. Notice that without the D-term there is a runaway direction: we may take b ∼ ε for ε arbitrarily small, while a ∼ 1/ε and c ∼ 1/ε². There we can solve all of the F term equations, and as we go out in the runaway direction SUSY is restored. However, as we turn on the coupling g_1 we find we can no longer satisfy the F and D term equations, and SUSY is broken everywhere. To avoid running away to a supersymmetric minimum, we must generate a D term. The term D² is quartic in the fields, and for very small g_1 the minimum is far from the origin. Because of this quartic behavior, as we turn g_1 up, the minimum moves in closer to zero and the D term becomes small compared to the F term. The F term is always larger than the D term, but regions of parameter space exist, for λ ∼ 10g_1, where they are of the same order. We will see later how the size of this ratio affects phenomenology. In addition to generating a D term for the U(1), the 4-1 model also gives an additional useful operator for model building: the gaugino condensate of the SU(4) gauge multiplet.

The Gaugino Condensate

We see that in the 4-1 model, in addition to having a U(1) D term, there is also a gaugino condensate. Gaugino condensates are useful for generating µ terms; see for example [10]. Proceeding in a way similar to the previous section, we see that we can couple messengers to the gaugino condensate as well as to the D terms. Writing the messenger superpotential accordingly, there is now a B-term for the scalar messengers as well as a µ term generated by gaugino condensation. We have built the operators needed for gauge mediation not out of F terms and vevs of chiral fields, but from gauge D terms and gaugino condensates. Gaugino masses are proportional to the ratio of B and µ and are not dependent on the scale M. This simple model does not break the gaugino mass ratio prediction of MGM, but instead reproduces the minimal gauge mediated phenomenology. Achieving the multiple parameters of GGM once again requires splitting the messenger couplings.
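As a purely illustrative numerical sketch (ours, not from the paper), one can see how independent doublet and triplet contributions break the MGM gaugino mass ratio. The schematic one-loop form M_i = (α_i/4π)·Λ_i, the hypercharge weights (2/5, 3/5) for 5 + 5̄ messengers, and all numerical inputs are our assumptions.

```python
# Hypothetical one-loop gaugino masses with split messenger scales:
#   Lambda_3 = F_z/z + (lam_Q/y_Q)*Bterm   (gluino, triplet messengers)
#   Lambda_2 = F_z/z + (lam_L/y_L)*Bterm   (wino, doublet messengers)
#   Lambda_1 = (2/5)*Lambda_3 + (3/5)*Lambda_2  (assumed hypercharge
#              weighting for 5+5bar messengers, GUT-normalized U(1))
from math import pi

alpha = {1: 0.017, 2: 0.034, 3: 0.118}  # rough low-scale gauge couplings
Fz_over_z = 1.0e5                        # GeV, hypothetical
Bterm = 0.5e5                            # B/z, hypothetical
lamQ_yQ, lamL_yL = 0.2, 1.0              # lam_Q/y_Q < lam_L/y_L

L3 = Fz_over_z + lamQ_yQ * Bterm
L2 = Fz_over_z + lamL_yL * Bterm
L1 = 0.4 * L3 + 0.6 * L2
M = {i: alpha[i] / (4 * pi) * L for i, L in [(1, L1), (2, L2), (3, L3)]}
print({i: round(m) for i, m in M.items()})               # masses in GeV
print("M1:M2:M3 =", [round(M[i] / M[1], 2) for i in (1, 2, 3)])
# -> roughly 1 : 2.2 : 5.7, i.e. a relatively lighter gluino than the
#    fixed MGM prediction of about 1 : 2 : 7.
```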
Below, the messenger sector consists of a single set of messengers in the 5, 5̄ representation; however, one may imagine repeating these steps for multiple sets of messengers in 5, 5̄ or 10, 10̄ representations. Writing everything in terms of Λ, we obtain a relation among the messenger couplings. In general y_1, y_4, l_1, l_4 may all be different from each other. What we need to break the MGM gaugino mass prediction is that y_1/y_4 not be equal to l_1/l_4. In order to avoid messenger vevs we must have B < µ². For the correct spectrum we may pick a point like Λ ∼ 10^8, M ∼ 20Λ, with couplings λ = 2.6 × 10^{−2} and g_1 = 6 × 10^{−1}, and y's and l's of order 10^{−1}. We get a spectrum with gauginos in the hundred-GeV range and no vevs for messengers. Since there are two independent parameters for gluinos and winos, we may expect a spectrum with light squarks without the need to tune couplings. This model achieves 2 parameters of the possible 6 of GGM. If we had chosen messengers in a 10, 10̄ representation we would have gotten a three-parameter spectrum. In fact, using the 4-1 models, the predictions for the number of parameters and the low energy spectrum follow from those in [6], where our scales are set by D terms and gaugino condensates rather than F terms and vevs of chiral fields.

Extra Operators

We may now attempt to write down potentially dangerous operators that get generated in the Kahler potential. The most important is a coupling of hidden sector fields and messenger fields. This operator does not break R symmetry and cannot contribute to gaugino masses. However, it will induce an operator which is another source for scalar masses, and which has been well studied in [11]. This is an extra mass term for messengers; and since messengers only couple to MSSM fields with SM gauge couplings, this new contribution will yield flavor-blind masses. However, these are not the standard mass terms of minimal gauge mediation. The scalar mass contribution from the new operators involves the Dynkin index S of the messengers and the Casimirs C_ai of the scalars. Notice that unlike the standard GM contribution, there is running from the scale of the cut-off M, presumably where we have integrated out some heavy fields to generate the operators WWXX, to the scale at which the 4-1 model gauginos condense, hence a log factor. This is scaled by powers of this operator's anomalous dimension. In addition, this operator will be down by a factor of α_4(M) compared to the standard GM scalar mass contribution, since it involves two insertions of the hidden sector F terms. As long as the F terms are of manageable size and appear with a reasonable coefficient, we expect this operator not to dominate or drastically alter the spectrum. However, if the F terms become large and the log does not scale away with large negative anomalous dimensions, this contribution can become as important as the standard GM contribution to scalar masses, or even dominant. If the sign of the operator is negative, the spectrum may even become tachyonic. The trick then is to stay in regions of parameter space where F terms are not too large. Another way around this constraint would be to forbid such operators altogether. For example, if the hidden sector fields were sequestered from MSSM fields using boundary conditions in 5-D, these extra contributions may be extremely suppressed.

A Unified Model

In minimal gauge mediation we avoid relative phases in the gaugino sector because all gaugino masses come from a single mass parameter.
However, models with split gaugino masses usually have a relative phase. We would like as few phases as possible. In addition, we would like to make a model that is as simple as possible. We might try to begin with a unified SU(5), then break it to the SU(4) × U(1) needed for 4-1 SUSY breaking. A 10 and one 5 of SU(5) provide all of the chiral fields needed for the 4-1 model. In general we would begin with different couplings of the W's to doublet and triplet messengers. The operators WWXX are generated by integrating out fields carrying quantum numbers under SU(4) and U(1). If the gauge fields are unified at some high energy, we expect that once SU(5) breaks, the W_1 and W_4 messenger couplings will split. However, it may be possible that the relative phases between terms, which start off the same when SU(5) is unified, remain the same. This depends on the dynamics at the high scale. What follows is an attempt to build a model where SU(4) and U(1) unify. After some numerical estimates, we find that we may get the correct order of magnitude for the MSSM field masses and avoid tachyonic messenger masses if the gaugino condensation scale Λ is only a few decades above the cut-off M. We may compute the scale at which SU(4) confines by finding the pole in its running coupling; running this coupling up to the unification scale fixes the value of the coupling g_1 at high energy, which will then run down. We see that if we run over two decades, our unified coupling is g(Λ) ∼ 1.4. The difficulty with this scenario is that, for running over only a few decades, the U(1) and SU(4) couplings do not split very much, and it is not possible to make g_1 small. Therefore, the minimum of the potential comes close to the origin and the F terms are generically much bigger than the D terms. In the non-unified model one is free to pick smaller values for g_1, and this was not as great a concern. The scalar masses are now dominated by the contribution mentioned in the previous section. Unless the hidden sector fields are sequestered with extra dimensions, or the extra operator has very large negative anomalous dimensions, the extra contribution to scalar masses will be of the same order as the standard gauge mediated contribution. If the sign of these contributions is negative, some scalars may become tachyonic. In addition, the extra contributions may reintroduce tuning by increasing scalar masses. This is not a concern if a suitable sequestering mechanism can be found. Even before we worry about finding suitably high energy dynamics, the viability of this model is in question. Thus model building without gaugino phases requires further study.

Conclusions

It is possible to build simple implementations of GGM by stepping outside the bounds of weakly coupled chiral models. Here I have demonstrated the viability of using the 4-1 SUSY breaking model; however, it is likely that a range of SU(N) × U(1) models may also yield good results. In addition, I have shown the existence of a mixture of F-term and 4-1 style SUSY breaking that has different parameter counting than previous attempts at GGM completions. Hence I have generated new models for which a compressed SUSY spectrum is possible. Attempts to build models without phases in the gaugino sector fare worse. The minimal unified attempt to build models leads to large contributions to scalar masses. The problem of phases therefore is not resolved unless a suitable sequestering mechanism can be implemented. This is a topic for further work.
2008-08-30T17:43:05.000Z
2008-08-30T00:00:00.000
{ "year": 2011, "sha1": "9ac496a4a15842f640a9b6278831a571f2a3fb87", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0809.0026", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9ac496a4a15842f640a9b6278831a571f2a3fb87", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Physics" ] }
6164031
pes2o/s2orc
v3-fos-license
Ensuring Query Compatibility with Evolving XML Schemas

During the life cycle of an XML application, both schemas and queries may change from one version to another. Schema evolutions may affect query results and potentially the validity of produced data. Nowadays, a challenge is to assess and accommodate the impact of these changes in rapidly evolving XML applications. This article proposes a logical framework and tool for verifying forward/backward compatibility issues involving schemas and queries. First, it allows analyzing relations between schemas. Second, it allows XML designers to identify queries that must be reformulated in order to produce the expected results across successive schema versions. Third, it allows examining more precisely the impact of schema changes over queries, therefore facilitating their reformulation.

Introduction

XML is now commonplace on the web and in many information systems where it is used for representing all kinds of information resources, ranging from simple text documents such as RSS or Atom feeds to highly structured databases. In these dynamic environments, not only are data changing steadily but their schemas also get modified to cope with the evolution of the real world entities they describe. Schema changes raise the issue of data consistency. Existing documents and data that were valid with a certain version of a schema may become invalid on a new version of the schema (forward incompatibility). Conversely, new documents created with the latest version of a schema may be invalid on some previous versions (backward incompatibility). In addition, schemas may be written in different languages, such as DTD, XML Schema, or Relax NG, to name only the most popular ones. And it is common practice to describe the same structure, or new versions of a structure, in different schema languages. Document formats developed by W3C provide a variety of examples: XHTML 1.0 has both DTDs and XML Schemas, while XHTML 2.0 has a Relax NG definition; the schema for SVG Tiny 1.1 is a DTD, while version 1.2 is written in Relax NG; MathML 1.01 has a DTD, MathML 2.0 has both a DTD and an XML Schema, and MathML 3.0 is developed with a Relax NG schema and is expected to also have a DTD and an XML Schema. An issue then is to make sure that schemas written in different languages are equivalent, i.e. they describe the same structure, possibly with some differences due to the expressivity of the language [14]. Another issue is to clearly identify the differences between two versions of the same schema expressed in different languages. Moreover, the issues of forward and backward compatibility of instances obviously remain when schema languages change from one version to another. Validation, and then compatibility, is not the only purpose of a schema. Validation is usually the first step for safe processing of documents and data: it makes sure that documents and data are structured as expected and can then be processed safely. The next step is to actually access and select the various parts to be handled in each phase of an application. For this, query languages play a key role. As an example, when transforming a document with XSL, XPath queries are paramount to locate in the original document the data to be produced in the transformed document. Queries are affected by schema evolutions. The structures they return may change depending on the version of the schema used by a document.
When changing schema, a query may return nothing, or something different from what was expected, and obviously further processing based on this query is at risk. These observations highlight the need for evaluating precisely and safely the impact of schema evolutions on existing and future instances of documents and data. They also show that it is important for software engineers to precisely know what parts of a processing chain have to be updated when schemas change. In this paper we focus on the XPath query language, which is used in many situations while processing XML documents and data. The XSL transformation language was already mentioned, but XPath is also present in XLink and XQuery, for instance.

Related Work

Schema evolution is an important topic and has been extensively explored in the context of relational, object-oriented, and XML databases. Most of the previous work on XML query reformulation is approached through reductions to relational problems [4]. This is because schema evolution was considered as a storage problem where the priority consists in ensuring data consistency across multiple relational schema versions. In such settings, two distinct schemas and an explicit description of the mapping between them are assumed as input. The problem then consists in reformulating a query expressed in terms of one schema into a semantically equivalent query in terms of the other schema: see [6,18] and more recently [12] with references thereof. In addition to the fundamental differences between XML and the relational data model, in the more general case of XML processing, schemas constantly evolve in a distributed, independent, and unpredictable environment. The relations between different schemas are not only unknown but hard to track. In this context, one priority is to help maintain query consistency during these evolutions, which is still considered a challenging problem [16]. The work found in [13] discusses the impact of evolving XML schemas on query reformulation. Based on a taxonomy of XML schema changes during their evolution, the authors provide informal (neither exact nor systematic) guidelines for writing queries which are less sensitive to schema evolution. In fact, studying query reformulation requires at least the ability to analyze the relationship between queries. For this reason, a closely related work is the problem of determining query containment and satisfiability under type constraints [1,9]. The work found in [1] studies the complexity of XPath emptiness and containment for various fragments (see [2] and references thereof for a survey). The main distinctive idea pursued in this paper is to develop a logical approach for guiding schema and query evolution. In contrast to the classical use of logics for proving properties such as query emptiness or equivalence [1,9], the goal here is different in that we seek to provide the necessary tools to produce relevant knowledge when such relations do not hold.

Outline

The rest of this paper is organized as follows: the next section introduces our framework, Section 3 presents its underlying logic, and Section 4 presents predicates for characterizing the impact of schema changes. We report on experiments on realistic scenarios in Section 5 before we conclude in Section 6.

Analysis Framework

Our framework allows the automatic verification of properties related to XML schema and query evolution.
In particular, it offers the possibility of checking fine-grained properties on the behavior of queries with respect to successive versions of a given schema. The system can be used for checking whether schema evolutions require a particular query to be updated. Whenever schema evolutions may induce query malfunctions, the system is able to generate annotated XML documents that exemplify bugs, with the goal of helping the programmer to understand and properly overcome undesired effects of schema evolutions. For these purposes, our framework relies on the combination and joint use of several contributions:
- an extension of the logic introduced in [9] to deal with XML attributes (Sections 2 and 3);
- a set of logical features and high-level predicates specifically designed for studying and characterizing schema and query compatibility issues when schemas evolve (Section 4);
- a range of applications and procedures to cope with schema and query evolution (Section 5);
- a full implementation of the whole system, including: a parser for reading the problem description (a text file), which in turn uses specific parsers for schemas (Section 2.2), queries (Section 2.3), logical formulas (Section 3.2), and predicates (Section 4); compilers for translating schemas and queries into their logical representations (Sections 3.3 and 3.4); an optimized solver, first described in [9,10], for checking satisfiability of logical formulas in time 2^O(n) where n is the formula size; and a counter-example XML tree generator (described in [10]).
Figure 1 illustrates how the previous software components are combined and used together, in a simplified overview of the global framework. We next introduce the data model we consider for XML documents, schemas and queries.

XML Trees with Attributes

An XML document is considered as a finite tree of unbounded depth and arity, with two kinds of nodes respectively named elements and attributes. In such a tree, an element may have any number of children elements, and may carry zero, one or more attributes. Attributes are leaves. Elements are ordered whereas attributes are not, as illustrated in Figure 4. In this paper, we focus on the nested structure of elements and attributes, and ignore XML data values.

Type Constraints

As an internal representation for tree grammars, we consider regular tree type expressions (in the manner of [11]), extended with constraints over attributes and defined over a set of variables ranged over by x. We impose a usual restriction on the recursive use of variables: we allow unguarded (i.e. not enclosed by a label) recursive uses of variables, but restrict them to tail positions. With that restriction, tree type expressions define regular tree languages. In addition, an element definition may involve simple attribute expressions that describe which attributes the defined element may (or may not) carry:

a ::= attribute expression
    ()           empty list
    list | a     disjunction
list ::= attribute list
    list, list   commutative concatenation
    l?           optional attribute
    l            required attribute
    ¬l           prohibited attribute

Our tree type expressions capture most of the schemas in use today [14,3]. In practice, our system provides parsers that convert DTDs, XML Schemas, and Relax NG schemas to this internal tree type representation. Users may thus define constraints over XML documents with the language of their choice, and, more importantly, they may refer to most existing schemas for use with the system. A small executable prototype of the attribute-expression semantics is sketched below.
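The sketch below is our own Python illustration, not the system's code: it checks a set of attribute names against a list of constraints written "l" (required), "l?" (optional), and "!l" (standing in for the prohibited attribute ¬l), with disjunctions modelled as alternative lists. It also applies the DTD-style closing convention discussed later, under which unmentioned attributes are not allowed.

```python
def satisfies(attrs, alist):
    """attrs: set of attribute names; alist: list of constraint strings."""
    mentioned = set()
    for c in alist:
        if c.startswith("!"):
            if c[1:] in attrs:
                return False          # prohibited attribute present
            mentioned.add(c[1:])
        elif c.endswith("?"):
            mentioned.add(c[:-1])     # optional: nothing to enforce
        else:
            if c not in attrs:
                return False          # required attribute missing
            mentioned.add(c)
    # Closing convention: attributes not mentioned at all are not allowed.
    return attrs <= mentioned

def satisfies_any(attrs, alternatives):
    """Disjunction 'list | a': at least one alternative must hold."""
    return any(satisfies(attrs, a) for a in alternatives)

# An element that must carry href and may carry an optional title:
print(satisfies({"href"}, ["href", "title?"]))             # True
print(satisfies({"href", "id"}, ["href", "title?"]))       # False (id closed out)
print(satisfies_any({"id"}, [["href"], ["id", "!href"]]))  # True
```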
Queries

The set of XPath expressions we consider is given by the following syntax (Figure 2):

query ::= /path                  absolute path
        | path                   relative path
        | query | query          union
        | query ∩ query          intersection
path ::= path/path               path composition
       | path[qualifier]         qualified path
       | axis::nt                step
qualifier ::= qualifier and qualifier   conjunction
            | qualifier or qualifier    disjunction
            | not(qualifier)            negation
            | path                      path
            | path/@nt                  attribute path
            | @nt                       attribute step
nt ::= σ                         node label
     | *                         any node label
axis ::= self | child | parent
       | descendant | ancestor
       | descendant-or-self | ancestor-or-self
       | following-sibling | preceding-sibling
       | following | preceding

The semantics of XPath expressions is described in [5], and more formally in [17]. We observed that, in practice, many XPath expressions contain syntactic sugar that also fits into this fragment. Figure 3 presents how our XPath parser rewrites some commonly found XPath patterns into the fragment of Figure 2, where the notation (axis::nt)^k stands for the composition of k successive path steps of the same form: axis::nt/.../axis::nt (k steps).

Logical Setting

Logical Data Model

It is well-known that there exist bijective encodings between unranked trees (trees of unbounded arity) and binary trees. Owing to these encodings, binary trees may be used instead of unranked trees without loss of generality. In the sequel, we rely on a simple "first-child & next-sibling" encoding of unranked trees. In this encoding, the first child of an element node is preserved in the binary tree representation, whereas siblings of this node are appended as right successors in the binary representation. Attributes are left unchanged by this encoding. For instance, Figure 5 presents how the sample tree of Figure 4 is mapped; a small executable version of this encoding is sketched below.

Logical Formulas

The concrete syntax of logical formulas is shown in Figure 6, where the metasyntax X̄ means one or more occurrences of X separated by commas. The reader can directly use this syntax for encoding formulas as text files to be used with the system described in Section 2 [8]. This concrete syntax is used as a single unifying notation throughout the paper. The semantics of logical formulas corresponds to the classical semantics of a µ-calculus interpreted over finite tree structures. A formula is satisfiable iff there exists a finite binary tree with attributes for which the formula holds at some node. This is formally defined in [9], and we review it informally below through a series of examples. There is a difference between an element name and an atomic proposition: an element has one and only one element name, whereas it can satisfy multiple atomic propositions. We use atomic propositions to attach specific information to tree nodes, not related to their XML labeling. For example, the start context (a reserved atomic proposition) is used to mark the starting context nodes for evaluating XPath expressions. The logic uses programs for navigating in binary trees: the program 1 allows navigating from a node down to its first successor, and the program 2 from a node down to its second successor. The logic also features converse programs -1 and -2 for navigating upward in binary trees, respectively from the first successor to its parent and from the second successor to its previous sibling. Table 1 gives some simple formulas using modalities for navigating in binary trees, together with sample satisfying trees, in binary and unranked tree representations.
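The "first-child & next-sibling" encoding is easy to prototype; the following Python sketch (our illustration, with hypothetical names) maps an unranked tree to its binary form.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:                       # unranked: a label and a list of children
    label: str
    children: list

@dataclass
class BNode:                      # binary: two optional successors
    label: str
    succ1: Optional["BNode"] = None   # program 1: first child
    succ2: Optional["BNode"] = None   # program 2: next sibling

def encode(n: Node, siblings=()) -> BNode:
    """First-child & next-sibling encoding of an unranked tree."""
    b = BNode(n.label)
    if n.children:
        b.succ1 = encode(n.children[0], n.children[1:])
    if siblings:
        b.succ2 = encode(siblings[0], siblings[1:])
    return b

t = Node("r", [Node("w", []), Node("w", []), Node("v", [])])
bt = encode(t)
print(bt.label, bt.succ1.label, bt.succ1.succ2.label)  # -> r w w
```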
The logic allows expressing recursion in trees through the recursive binder. For example, the recursive formula

let $X = b | <2>$X in $X

means that either the current node is named b or there is a sibling of the current node which is named b. For this purpose, the variable $X is bound to the subformula b | <2>$X, which contains an occurrence of $X (therefore defining the recursion). The scope of this binding is the subformula that follows the "in" symbol of the formula, that is, $X. The entire formula can thus be seen as a compact recursive notation for an infinitely nested formula of the form b | <2>(b | <2>(b | ...)). Recursion allows expressing global properties. For instance, a recursive formula of this kind can express the absence of nodes named a in the whole subtree of the current node (including the current node). Furthermore, the fixpoint operator makes it possible to bind several variables at a time, which is specifically useful for expressing mutual recursion. For example, a mutually recursive formula can assert that there is a node somewhere in the subtree such that this node is named a and it has at least one sibling which is named b. Binding several variables at a time provides a very expressive yet succinct notation for expressing mutually recursive structural patterns (which are common in XML Schemas, for instance). From a theoretical perspective, the recursive binder let $X = ϕ in ϕ corresponds to the fixpoint operators of the µ-calculus. It is shown in [9] that the least fixpoint and the greatest fixpoint operators of the µ-calculus coincide over finite tree structures for a restricted class of formulas called cycle-free formulas. Translations of XPath expressions and schemas presented in this paper always yield cycle-free formulas (see [10] for more details).

Compilation of Queries

The logic is expressive enough to capture the set of XPath expressions presented in Section 2.3. For example, Figure 7 illustrates how the sample XPath expression child::r[child::w/@att] is expressed in the logic. From a given context in an XML document, this expression selects all r child nodes which have at least one w child with an attribute att. Figure 7 shows how it is expressed in the logic, on the binary tree representation. The formula holds for the r nodes which are selected by the expression. The first part of the formula, ϕ, corresponds to the step child::r which selects candidate r nodes. The second part, ψ, navigates downward in the subtrees of these candidate nodes to verify that they have at least one immediate w child with an attribute att; the same selection semantics can be checked against an off-the-shelf XPath engine, as sketched below.

Figure 7: XPath Translation Example.

This example illustrates the need for converse programs inside modalities. The translated XPath expression only uses forward axes (child and attribute); nevertheless, both forward and backward modalities are required for its logical translation. Without converse programs we would have been unable to differentiate selected nodes from nodes whose existence is simply tested. More generally, properties must often be stated on both the ancestors and the descendants of the selected node. Equipping the logic with both forward and converse programs is therefore crucial. Logics without converse programs may only be used for solving XPath emptiness, but cannot be used for solving other decision problems such as containment efficiently. A systematic translation of XPath expressions into the logic is given in [9]. In this paper, we extended it to deal with attributes.
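For instance, the running example can be evaluated with lxml in Python (our illustration; the document is a made-up sample):

```python
from lxml import etree

doc = etree.fromstring(
    "<ctx>"
    "<r><w att='x'/></r>"       # selected: has a w child carrying @att
    "<r><w/></r>"               # not selected: its w child has no @att
    "<v><w att='x'/></v>"       # not selected: not an r node
    "</ctx>")

selected = doc.xpath("child::r[child::w/@att]")
print([etree.tostring(e, encoding="unicode") for e in selected])
# -> ['<r><w att="x"/></r>']
```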
We implemented a compiler that takes any expression of the fragment of Figure 2 and computes its logical translation. With the help of this compiler, we extend the syntax of logical formulas with a logical predicate select("query", ϕ). This predicate compiles the XPath expression query given as parameter into the logic, starting from a context that satisfies ϕ. The XPath expression given as parameter must match the syntax of the XPath fragment shown on Figure 2 (or Figure 3). In a similar manner, we introduce the predicate exists("query", ϕ), which tests the existence of query from a context satisfying ϕ, in a qualifier-like manner (without moving to its result). Additionally, the predicate select("query") is introduced as a shortcut for select("query", #), where # simply marks the initial context node of the XPath expression. The predicate exists("query") is a shortcut for exists("query", T). These syntactic extensions of the logic allow the user to easily embed XPath expressions and formulate decision problems out of them (such as containment or any other boolean combination). In the next sections we explain how the framework allows combining queries with schema information for formulating problems. Compilation of Tree Types Tree type expressions are compiled into the logic in two steps: the first stage translates them into binary tree type expressions, and the second step actually compiles this intermediate representation into the logic. The translation procedure from tree type expressions to binary tree type expressions is well-known and detailed in [7]. Attribute expressions are not concerned by this transformation to binary form: they are simply attached, unchanged, to the new (binary) element definitions. Finally, binary tree type expressions are compiled into the logic. The logical translation of an expression τ is given by a function tr(τ)_F^T, in which an auxiliary function sets the type frontier according to the predicate nullable(x), which indicates whether the type bound to x contains the empty tree (). A function tra(a) compiles the attribute expressions associated with element definitions. In usual schemas (e.g. DTDs, XML Schemas), when no attribute is specified for a given element, it simply means that no attribute is allowed for the defined element. This convention must be explicitly stated in the logic. This is the role of the function notothers(list), which returns the negated disjunction of all attributes not present in list. As a result, taking attributes into account comes at an extra cost: the above translation appends a (potentially very large) formula in which all attributes occur, for each element definition. In practice, a placeholder atomic proposition is inserted until the full set of attributes involved in the problem formulation is known. When the whole formula has been parsed, placeholders are replaced by the conjunction of negated attributes they denote. This extra cost can be observed in practice, and the system allows two modes of operation: with or without attributes. Nevertheless, the system is still capable of handling real-world DTDs (such as the DTD of XHTML 1.0 Strict) with attributes. This is due to (1) the limited expressive power of languages such as DTD, which do not allow disjunction over attribute expressions (like "list | a"); and, more importantly, (2) the satisfiability-testing algorithm, which is implemented using symbolic techniques [10].
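Returning to the select and exists predicates introduced above, a decision problem such as XPath containment can be formulated directly with them (a hedged sketch: the two queries are hypothetical and merely chosen so that one is contained in the other):

select("child::a/child::b") & ~select("descendant::b")

Since both select predicates implicitly start from the same context mark #, this formula is satisfiable exactly when some node can be selected by the first query but not by the second; if the solver reports it unsatisfiable, child::a/child::b is contained in descendant::b.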
Tree type expressions form the common internal representation for a variety of XML schema definition languages. In practice, the logical translation of a tree type expression τ is obtained directly from a variety of formalisms for defining schemas, including DTD, XML Schema, and Relax NG. For this purpose, the syntax of logical formulas is extended with a predicate type("·", ·). The logical translation of an existing schema is returned by type("f", l), where f is a file path to the schema file and l is the element name to be considered as the entry point (root) of the given schema. Any occurrence of this predicate will parse the given schema, extract its internal tree type representation τ, compile it into the logic and return the logical formula tr(τ)_F^T. Type Tagging A tag (or "color") is introduced in the compilation of schemas with the purpose of marking all node types of a specific schema. A tag is simply a fresh atomic proposition passed as a parameter to the translation of a tree type expression. For example, tr(τ)_F^xhtml is the logical translation of τ where each element definition is annotated with the atomic proposition "xhtml". With the help of tags, it becomes possible to refer to the element types in any context. For instance, one may formulate tr(τ)_F^xhtml | tr(τ′)_F^smil for denoting the union of all τ and τ′ documents, while keeping a way to distinguish element types, even if some element names are shared by the two type expressions. Tagging becomes even more useful for characterizing evolutions between successive versions of a single schema. In this setting, we need a way to distinguish nodes allowed by a newer schema version from nodes allowed by an older version. This distinction must not be based only on element names, but also on content models. Assume for instance that τ′ is a newer version of schema τ. If we are interested in the set of trees allowed by τ′ but not allowed by τ, then we may formulate tr(τ′)_F^T & ~tr(τ)_F^T. If we now want to check more fine-grained properties, we may rather be interested in a tagged variant of the same formulation, in which each translation carries its own fresh tag. In this manner, we can distinguish elements that were added in τ′ and whose names did not occur in τ from elements whose names already occurred in τ but whose content model changed in τ′, for instance. In practice, a type is tagged using the predicate type("f", l, ϕ, ϕ′), which parses the specified schema, converts it into its logical representation τ and returns the formula tr(τ)_ϕ^ϕ′. This kind of type tagging is useful for studying the consequences of schema updates over queries, as presented in the next sections.
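As a concrete illustration (a hedged sketch, using the schema files analysed later in this paper), the untagged formulation above can be written directly with the type predicate:

type("xhtml-basic11.dtd", "html") & ~type("xhtml-basic10.dtd", "html")

A tree satisfying this formula is an XHTML Basic 1.1 document that is not valid against XHTML Basic 1.0; this is exactly the formula to which the backward_incompatible predicate introduced in the next section compiles.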
We illustrate this principle with two simple predicates designed for checking backward compatibility of schemas and query satisfiability in the presence of a schema. The predicate backward_incompatible(τ, τ′) takes two type expressions as parameters and assumes τ′ is an altered version of τ. This predicate is unsatisfiable iff all instances of τ′ are also valid against τ. Any occurrence of this predicate in the input formula will automatically be compiled as tr(τ′)_F^T & ~tr(τ)_F^T. The predicate non_empty("query", τ) takes an XPath expression (with the syntax defined on Figure 2) and a type expression as parameters, and is unsatisfiable iff the query always returns an empty set of nodes when evaluated on an XML document valid against τ. This predicate compiles into select("query", tr(τ)_F^T & #), where the predicate select("query", ϕ) compiles the XPath expression query into the logic, starting from a context that satisfies ϕ, as explained in Section 3.3. This can be used to check that a modification of the schema does not contradict any part of the query. Notice that the predicate non_empty("query", τ) can be used for checking whether a query that is valid against a schema remains valid with an updated version of the schema. In other terms, this predicate allows determining whether a query that must always return a non-empty result (whatever the tree on which it is evaluated) keeps verifying the same property with a new version of a schema. A second, more elaborate, class of predicates allows formulating problems that combine both a query query and two type expressions τ, τ′ (where τ′ is assumed to be an evolved version of τ): new_element_name("query", τ, τ′) is satisfied iff the query query selects elements whose names did not occur at all in τ. This is especially useful for queries whose last navigation step contains a "*" node test and may thus select unexpected elements. Its compilation relies on an auxiliary predicate element(τ), which builds the disjunction of all element names occurring in τ; in a similar manner, the predicate attribute(ϕ) builds the logical disjunction of all attribute names used in ϕ. new_region("query", τ, τ′) is satisfied iff the query query selects elements whose names already occurred in τ, but such that these nodes now occur in a new context in τ′. In this setting, the path from the root of the document to a node selected by the XPath expression query contains a node whose type is defined in τ′ but not in τ; the document is valid against τ′ but not against τ. The logical definition of new_region("query", τ, τ′) heavily relies on the partition of tree nodes defined by XPath axes, as illustrated by Figure 8, and uses an auxiliary predicate added_element(τ, τ′) that builds the disjunction of all element names defined in τ′ but not in τ (in other terms, the elements that were added in τ′). In a similar manner, the predicate added_attribute(ϕ, ϕ′) builds the disjunction of all attribute names defined in ϕ′ but not in ϕ. The predicate new_region("query", τ, τ′) is useful for checking whether a query selects a different set of nodes with τ′ than with τ, because selected elements may occur in new regions of the document due to changes brought by τ′.
new content("query", τ, τ ) is satisfied iff the query query selects elements whose names were already defined in τ , but whose content model has changed due to evolutions brought by τ , as illustrated below: The predicate new content("query", τ, τ ) can be used for ensuring that XPath expressions will not return nodes with a possibly new content model that may cause problems. For instance, this allows checking whether an XPath expression whose resulting node set is converted to a string value (as in, e.g. XPath expressions used in XSLT "value-of" instructions) is affected by the changes from τ to τ . The previously defined predicates can be used to help the programmer identify precisely how type constraint evolutions affect queries. They can even be combined with usual logical connectives to formulate even more sophisticated problems. For example, let us define the predicate exclude(ϕ) which is satisfiable iff there is no node that satisfies ϕ in the whole tree. This predicate can be used for excluding specific element names or even nodes selected by a given XPath expression. It is defined as follows: This predicate can also be used for checking properties in an iterative manner, refining the property to be tested at each step. It can also be used for verifying fine-grained properties. For instance, one may check whether τ defines the same set of trees as τ modulo new element names that were added in τ with the following formulation: This allows identifying that, during the type evolution from τ to τ , the query results change has not been caused by the type extension but by new compositions of nodes from the older type. In practice, instead of taking internal tree type representations (as defined in Section 2.2) as parameters, most predicates do actually take any logical formula as parameter, or even schema paths as parameters. We believe this facilitates predicates usage and, most notably, how they can be composed together. Figure 9 gives the syntax of built-in predicates as they are implemented in the system, where f is a file path to a DTD (.dtd), XML Schema (.xsd), or Relax NG (.rng). In addition of aforementioned predicates, the predicate predicate ::= select("query") select("query", ϕ) exists("query") exists("query", ϕ) non empty("query", ϕ) new element name("query", "f ", "f ", l) new region("query", "f ", "f ", l) new content("query", "f ", "f ", l) predicate-name( ϕ ) Figure 9: Syntax of Predicates for XML Reasoning. descendant(ϕ) forces the existence of a node satisfying ϕ in the subtree, and predicate-name( ϕ ) is a call to a custom predicate, as explained in the next section. Custom Predicates Following the spirit of predicates presented in the previous section, users may also define their own custom predicates. The full syntax of XML logical specifications to be used with the system is defined on Figure 10, where the metasyntax X means one or more occurrence of X separated by commas. A global problem specification can be any formula (as defined on Figure 6), or a list of custom predicate definitions separated by semicolons and followed by a formula. A custom predicate may have parameters that are instanciated with actual formulas when the custom predicate is called (as shown on Figure 9). A formula bound to a custom predicate may include calls to other predicates, Schema Variables Table 2: Sizes of (Some) Considered Schemas. but not to the currently defined predicate (recursive definitions must be made through the let binder shown on Figure 6). 
Framework in Action We have implemented the whole software architecture described in Section 2 and illustrated on Figure 1 [8]. We have carried out extensive experiments of the system with real world schemas such as XHTML, MathML, SVG, SMIL ( Table 2 gives details related to their respective sizes) and queries found in transformations such MathML content to presentation [15]. We present two of them that show how the tool can be used to analyze different situations where schemas and queries evolve. Evolution of XHTML Basic The first test consists in analyzing the relationship (forward and backward compatibility) between XHTML basic 1.0 and XHTML basic 1.1 schemas. In particular, backward compatibility can be checked by the following command: backward_incompatible("xhtml-basic10.dtd", "xhtml-basic11.dtd", "html") The test immediately yields a counter example as the new schema contains new element names. The counter example (shown below) contains a style element occurring as a child of head, which is not permitted in XHTML basic 1.0: <html> <head> <title/> <style type="_otherV"/> </head> <body/> </html> The next step consists in focusing on the relationship between both schemas excluding these new elements. This can be formulated by the following command: backward_incompatible("xhtml-basic10.dtd", "xhtml-basic11.dtd", "html") & exclude(added_element( type("xhtml-basic10.dtd","html"), type("xhtml-basic11.dtd", "html"))) The result of the test shows a counter example document that proves that XHTML basic 1.1 is not backward compatible with XHTML basic 1.0 even if new elements are not considered. In particular, the content model of the label element cannot have an a element in XHTML basic 1.0 while it can in XHTML basic 1.1. The counter example produced by the solver is shown below: <html> <head> <object> <label> <a> <img/> </a> <img/> </label> <param/> </object> <meta/> <title/> <base/> </head> <body/> </html> XTML basic 1.0 validity error: element "a" is not declared in "label" list of possible children Notice that we observed similar forward and backward compatibility issues with several other W3C normative schemas (in particular for the different versions of SMIL and SVG). Such backward incompatibilities suggests that applications cannot simply ignore new elements from newer schemas, as the combination of older elements may evolve significantly from one version to another. MathML Content to Presentation Conversion MathML is an XML format for describing mathematical notations and capturing both its structure and graphical structure, also known as Content MathML and Presentation MathML respectively. The structure of a given equation is kept separate from the presentation and the rendering part can be generated from the structure description. This operation is usually carried out using an XSLT transformation that achieves the conversion. In this test series, we focus on the analysis of the queries contained in such a transformation sheet and evaluate the impact of the schema change from MathML 1.0 to MathML 2.0 on these queries. The first test is formulated by the following command: new_region("Q1","mathml.dtd","mathml2.dtd","math") The result of the test shows a counter example document that proves that the query may select nodes in new contexts in MathML 2.0 compared to MathML 1.0. 
In particular, the query Q1 selects apply elements whose ancestors can be declare elements, as indicated in the document produced by the solver: <math xmlns:solver="http://wam.inrialpes.fr/xml" solver:context="true"> <declare> <apply solver:target="true"> <eq/> </apply> <condition/> </declare> </math> Notice that the solver automatically annotates a pair of nodes related by the query: when the query is evaluated from a node marked with the attribute solver:context, the node marked with solver:target is selected. To evaluate the effect of this change, the counter example is filled with content and passed as an input parameter to the transformation. This immediately shows a bug in the transformation, as the resulting document is not a MathML 2.0 presentation document. Based on this analysis, we know that the XSLT template associated with the match pattern Q1 must be updated to cope with the MathML evolution from version 1.0 to version 2.0. All the previous tests were processed in less than 30 seconds on an ordinary laptop computer running Java under Mac OS X. Conclusion In this article, we present a logical framework and a tool for verifying forward/backward compatibility issues caused by schema and query evolution. The tool allows XML designers to identify queries that must be reformulated in order to produce the expected results across successive schema versions. With this tool, designers can examine precisely the impact of schema changes on queries, therefore facilitating their reformulation. We gave illustrations of how to use the tool for both schema and query evolution on realistic examples. In particular, we considered typical situations in applications involving the evolution of W3C schemas such as XHTML and MathML. The tool can be very useful for standard schema writers and maintainers in order to assist them in enforcing some level of quality assurance on compatibility between versions. There are a number of interesting extensions to the proposed system. First, the set of predicates can easily be enriched to detect more precisely the impact on queries. For example, one can extend the tagging to identify separately every navigation step and qualifier in a query expression. This will help greatly in the identification and reformulation of the navigation steps or qualifiers affected by schema evolution.
Metabolic and electrolyte abnormalities as risk factors in drug-induced long QT syndrome Drug-induced long QT syndrome (diLQTS) is the phenomenon by which the administration of drugs causes prolongation of cardiac repolarisation and leads to an increased risk of the ventricular tachycardia known as torsades de pointes (TdP). In most cases of diLQTS, the primary molecular target is the human ether-à-go-go-related gene protein (hERG) potassium channel, which carries the rapid delayed rectifier current (IKr) in the heart. However, the proarrhythmic risk associated with drugs that block hERG can be modified in patients by a range of environmental- and disease-related factors, such as febrile temperatures, alterations in pH, dyselectrolytaemias such as hypokalaemia and hypomagnesemia and coadministration with other drugs. In this review, we will discuss the clinical occurrence of drug-induced LQTS in the context of these modifying factors as well as the mechanisms by which they contribute to altered hERG potency and proarrhythmic risk. Drug-induced long QT syndrome or acquired long QT syndrome (aLQTS) is characterised by prolongation of the QT interval on the surface electrocardiogram (ECG) and is associated with a markedly increased risk of the potentially lethal ventricular arrhythmia known as torsades de pointes (TdP (Roden 2004)). A prospective study of hospital admission for drug-induced TdP reported 3.3 cases per million over the 4-week study period, translating to an annual incidence of 4/100,000 (Darpö 2001). However, this may be an underestimate for the broader population since TdP is often not reported in out-of-hospital cases (Birda et al. 2018;Lin et al. 2020;Yu et al. 2017). For hospitalised patients, the prevalence of severe diLQTS has been reported as between 1.6 and 3.3% of patients (Birda et al. 2018;Lin et al. 2020;Yu et al. 2017), with these patients having a higher all-cause mortality than their non-LQTS counterparts (Lin et al. 2020;Yu et al. 2017). Over the past 30 years, a range of cardiac (Kannankeril et al. 2011;Selzer and Wray 1964;Singh et al. 2000) and noncardiac (Schoonmaker et al. 1966) drugs have been shown to prolong the QT interval, with several being recalled from the market (Roden 2004). diLQTS can be caused by drugs that block any of the ion channel currents that contribute to normal cardiac repolarisation. In practice, however, the majority of drugs that cause diLQTS do so by inhibiting hERG/Kv11.1 potassium channels, encoded by the KCNH2 gene, which carries the rapid delayed rectifier K + current (I Kr ) in the heart (Vandenberg et al. 2012). This unintentional block of hERG is therefore a problem both for development of new therapeutic compounds, as well as management of patients prescribed such drugs (see Table 1 for a full list of compounds discussed in this review). Consequently, screening for potency of hERG channel block, as a surrogate for QT prolongation and repolarisation delay, is a mandated part of preclinical drug development ((ICH S7B 2005), Fig. 1). However, the link between a drug's potency to block hERG and the emergence of arrhythmia is complex. Of the majority of new chemical entities, up to 70% in some estimates (Shah 2005) can block hERG at some concentration, yet only a small percentage cause arrhythmia (Darpö 2001(Darpö , 2007. 
Fig. 1 Summary of environmental effects on drug potency. Many disease factors are known to shift the potency of drugs blocking hERG, such as fever, hypokalaemia and hypocalcaemia. a A theoretical hERG tail current, with scale indicated for current amplitude and time, as elicited by the protocol in the insert above. The black trace represents a control current evoked in drug-free conditions, with the blue trace representing 50% inhibition of the current by a theoretical drug. A condition leading to less potent drug inhibition is represented in green, showing only 25% inhibition, and a condition leading to greater potency, showing 75% inhibition, is depicted in red. b A theoretical concentration-response curve, with the main drug effect represented in blue. A condition producing lesser potency would lead to a rightward shift (green) and, on an ECG, less QT prolongation; conditions producing greater potency would shift the curve leftward (red) and, on an ECG, lead to more prolongation. Assets for the ECG traces were obtained from Servier Medical Art (Servier 2021)

Moreover, even for drugs that are demonstrably "high risk", the severity of adverse events across the patient population can be highly variable, ranging from minimal prolongation of cardiac repolarisation to the induction of lethal arrhythmia (Kannankeril et al. 2011; Singh et al. 2000). A number of factors likely contribute to this variable response, including pre-existing disease resulting in electrical or structural remodelling of the myocardium, sex differences and an individual's genetic background (Echt et al. 1991; Makkar et al. 1993; Roden and Viswanathan 2005). Aside from these patient-specific factors, a drug's proarrhythmic propensity can also be modified by other systemic/acquired factors in patients such as electrolyte disturbances, acidosis, febrile temperatures and coadministration with other drugs. The importance of such considerations has been highlighted recently in relation to repurposing of drugs for treatment of COVID-19.
Specifically, various combinations of drugs that are known to carry some degree of proarrhythmic risk, including chloroquine, hydroxychloroquine, azithromycin, erythromycin and lopinavir/ritanavir, have been proposed as potential therapies (Delaunois et al. 2021;Zequn et al. 2021) in COVID-19 patients where fever (Aslam et al. 2021;Pan et al. 2020;Zhou et al. 2020), acidosis (Zhou et al. 2020) and electrolyte disturbances (Alfano et al. 2021;Lippi et al. 2020;Stevens et al. 2021) were also reported. Here, we will review both the clinical occurrence of diLQTS in the context of fever, hypokalaemia, hypomagnesemia and other electrolyte disturbances and the mechanisms by which these factors contribute to altered potency of hERG block and proarrhythmic risk. Effect of kalaemic variation on drug-induced long QT syndrome Potassium is the most abundant intracellular cation, which in healthy patients exists within the range of 3.6-5.0 mM in the plasma (El-Sherif and Turitto 2011; Salzman 2018). In the case of altered serum potassium, hypokalaemia is the most common electrolyte abnormality, occurring in over 20% of hospitalised patients, and is defined as a plasma K + level of less than 3.6 mM. This occurs most frequently as a result of decreased intake, increased renal or gastrointestinal loss or via transcellular shift (El-Sherif and Turitto 2011; Salzman 2018). Hyperkalaemia (plasma K + > 5.0 mM) is less common, reported in 8% of hospitalised patients, and occurs as a result of potassium-sparing diuretic use, higher intake, decreased excretion due to renal failure or damage or transcellular shift of potassium into the extracellular environment (El-Sherif and Turitto 2011; Salzman 2018). In patients taking drugs with established proarrhythmic risk, changes in serum K + have been observed to drive further QT prolongation and incidence of TdP. For example, Ayad et al. reported the case of a patient taking quinidine for 15 years without any incidence of QT prolongation who developed TdP and syncope as a result of hypokalaemia by way of gastrointestinal loss (Ayad et al. 2010). Similarly, in a study of 24 individuals, patients administered hERG blockers such as quinidine while taking potassium-depleting diuretics were identified to be at higher risk for QT prolongation and development of TdP, although some of these patients also presented with several other risk factors such as hypertension, cardiomyopathy or were also taking additional QT prolonging drugs (Roden et al. 1986). However, hypokalaemia rarely presents alone, meaning other parallel factors can also contribute to QT prolongation. In a study of 11 patients in whom diLQTS was present, including 8 who exhibited severe hypokalaemia, additional factors such as hypomagnesaemia, hypertension and alcohol use were also present (Digby et al. 2011), while in a larger study of 804 chronic kidney disease patients, lower serum K + and Ca 2+ were each found to be significant contributors to QT prolongation, often against a background of chronic diseases such as hypertension or diabetes (Liu et al. 2019). Mechanism of kalaemia-dependent changes in hERG block and QT prolongation Understanding the relationship between kalaemic variation and drug-induced prolongation of repolarization is complex, since variation in extracellular potassium has direct effects on cardiac repolarization, via effects on potassium channel function and expression, as well as drug binding (Barrows et al. 2009;Guo et al. 2009Guo et al. , 2011Limberis et al. 2006;Melgari et al. 
2014;West et al. 1997;Yang et al. 2004Yang et al. , 1997. Here we will focus on studies that have specifically addressed potassium dependence of a drug's potency to block hERG. Across the literature, reports of the influence of K + on potency to block hERG across drugs is broadly consistent, with increasing extracellular potassium reducing the potency of block (Barrows et al. 2009;Busch et al. 1998;Lin et al. , 2008Lin et al. , 2005cLin and Papazian 2007;Mergenthaler et al. 2001;TeBay et al. 2021;Wang et al. 1997;West et al. 1997;Yang et al. 2004) and decreased potassium concentration increasing potency of block (Lin et al. 2005a;TeBay et al. 2021;Tschirhart and Zhang 2020). Two potential mechanisms have been proposed to explain this. First, it has been suggested that changes in the state or conformation of hERG as a function of K + might impact the potency of drugs that exhibit state-dependent binding. The hERG channel can exist in one of three states: closed, open or inactivated, with two voltage-dependent gates, a fast inactivation gate and a slow activation/deactivation gate (Vandenberg et al. 2012 . Supporting this idea, it has also been shown (in the absence of variation of external K + ) that hERG mutants with reduced inactivation could greatly attenuate the block of drugs with inactivated state preference such as cisapride and terfenadine (Perrin et al. 2008), while voltage protocols that drive occupancy of the inactivated state result in a higher observed potency for state-dependent drugs (Lee et al. 2016(Lee et al. , 2019. However, there is also evidence to counter the concept of state-dependent binding underlying the effect of potassium. Barrows et al. showed that despite significant reduction in hERG potency for cisapride and quinidine with increasing K + between 0 and 20 mM K + , there was little change in the fraction of channels existing in inactivated state at + 20 mV between these two potassium concentrations. Based on this evidence, they reasoned that state preference of block did not underpin the altered potency seen for these drugs (Barrows et al. 2009). Similarly, though again outside of a K + context, Thouta et al. used mutants that were constitutively open to explore the preference of terfenadine or cisapride for binding to the open or inactivated state and were able to show that degree of drug block did not change in accordance with the extent of inactivation, suggesting that these two drugs do not exhibit an inactivation state preference (Thouta et al. 2018). The second mechanism proposed to explain the potassium dependence of a drug's potency to block hERG is that electrostatic repulsion between the K + ion and the bound drug molecule induces a "knock-off" effect (Barrows et al. 2009;Wang et al. 1997). Wang et al. showed that an inactivationdeficient mutant (S631C, G628C) had near identical external K + sensitivity for E-4031 block as the wild-type channel (Wang et al. 1997) and proposed that since both potassium and E-4031 possess a single positive charge, an electrostatic repulsion mechanism could explain the effect of potassium on drug potency. The study found that with the differences in K + they had used (2 mM vs 98 mM), there would be sufficient free energy to account for the observed reduction in block (Wang et al. 1997). Further to this, it has been proposed that the ability of monovalent cations to "knock off" a drug from its binding site on the hERG channel depends on the ion's permeability (Barrows et al. 2009). 
Evidence for this includes a correlation between the observed degree of potency of block for cisapride and quinidine and ionic permeability when the permeant ion or chemical species is switched between potassium, rubidium, caesium and TEA, where the degree of block follows the ion's permeability through hERG of P K+ = P Rb+ > P Cs+ > > P TEA (Barrows et al. 2009). However, sensitivity of block to specific monovalent ions is also drug dependent, as the degree of block for quinidine was significantly different between 2 and 20 mM K + , as well as between K + and Cs + , whereas cisapride block was unchanged (Barrows et al. 2009). In reality, it is likely that both mechanisms may contribute, depending on the specific compound. In the current literature, mechanistic studies have generally sampled only small subsets of drugs, often because data has been generated using manual patch-clamp electrophysiology, which limits the throughput and scale of these investigations. To more confidently discern the mechanism by which altered K + affects drug potency, it is likely that studies of larger drug panels are required, which could be facilitated using high-throughput platforms such as automated patch-clamp or radioligand binding assays. For example, Diaz et al. used 3 H dofetilide binding assay to assess a panel of 56 compounds, showing that higher K + lead to reduced potency for some compounds, though increased potency for others (Diaz et al. 2004) -inconsistent with the broad trend reported in prior patch-clamp studies. However, in comparison with the gold standard of manual patch clamp, there was a greater than 5to sixfold difference between potencies measured in binding versus patch clamp for some compounds, with 6 of those compounds having greater than tenfold difference (Diaz et al. 2004). In resolving this question, the use of automated patch-clamp platforms, which combine throughput with gold standard electrophysiology, is likely the technology that will facilitate the scale and quality of information required for interpreting and predicting the clinical implications of K + on hERG drug block and proarrhythmic risk into the future. Clinical observations for altered serum divalent concentration Two divalent cations that are (i) present in human plasma at concentrations relevant for modification of hERG function and/or block, and (ii) have altered concentrations in pathophysiological states, are magnesium and calcium. In healthy patients, normal total plasma calcium concentration is in the range 2.2-2.55 mM, where concentrations outside of this range, typically lower, can contribute to QT prolongation and hence arrhythmic risk (Liu et al. 2019;Nijjer et al. 2010;Szymanski et al. 2013). However, free-or ionised-calcium concentrations are significantly lower (1.05-1.3 mM (Goldberg 2019)), due to binding to plasma proteins such as albumin (Labriola et al. 2009), making this the preferred clinical measurement in predicting prolongation of the QT interval (Kim et al. 2019) and a more suitable comparison for in vitro experiments than total Ca 2 + . Hypocalcaemia can be observed with renal insufficiency, parathyroid disease, reduced intake, acute pancreatitis, septic shock or other electrolyte disturbances, whereas hypercalcaemia is associated with hyperparathyroidism, vitamin D disturbances, endocrine disorders, 1 3 neoplastic disorders and many other malignancies (El-Sherif and Turitto 2011; Salzman 2018). 
For magnesium, the normal range is 0.7-0.95 mM, and while both hypomagnesemia and hypermagnesemia can result in QT interval prolongation (Topf and Murray 2003), their effects on electrophysiology are often hard to ascertain due to their frequent association with other electrolyte or electrophysiological abnormalities (Ayad et al. 2010;El-Sherif and Turitto 2011;Roden et al. 1986;Salzman 2018;Whang and Ryder 1990). Hypomagnesemia is common, especially in geriatric populations, and can occur due to decreased gastrointestinal uptake or renal loss, whereas hypermagnesemia is far rarer, especially outside of an obstetric population, given the large reserve of magnesium excretion potential the kidneys possess, often only occurring in the background of renal failure (El-Sherif and Turitto 2011; Topf and Murray 2003). Mechanism of divalent ion-dependent changes in hERG block While there is significant literature on the effect of divalent cations on cardiac electrophysiology and hERG channel 1 3 function, there are fewer comprehensive reports on divalent cation dependence of hERG drug block potency. Furthermore, the literature that does exist presents a somewhat inconsistent narrative. Increased extracellular Mg 2+ has been shown to increase the potency of hERG block for multiple compounds (Po et al. 1999;TeBay et al. 2021), whereas reduced internal Mg 2+ was found to reduce the potency of quinidine (Yang et al. 1997). Conversely, concentrations of extracellular Ca 2+ between 0.1 and 10 mM did not modify the block of either quinidine or cisapride (Barrows et al. 2009). Since there are suggestions that divalent ions could act as hERG/IKr blockers themselves, with binding sites identified within the hERG channel (Anumonwo et al. 1999;Ho et al. 1996Ho et al. , 1998Ho et al. , 1999, one potential mechanism could be that divalent ions together with hERG blocking drugs could result in an increased overall load of IKr inhibition (Po et al. 1999). Another potential explanation is that divalent ions regulate the deactivation kinetics of hERG, which could in turn affect drug dissociation and the degree to which some drugs exhibit "drug trapping" (Barrows et al. 2009). One factor that has confounded in vitro investigations in this area is the need for 1-2 mM concentrations of calcium in bath solutions for patch-clamp electrophysiology, which is critical for formation and maintenance of high-quality seals (Lin and Papazian 2007). As a result, investigations of the effects of variation in divalent ion concentrations in the physiological range are limited in these systems. This issue is particularly salient in automated high-throughput patch-clamp systems, where calcium fluoride seal enhancers are critical in establishing high-quality seals (Braun et al. 2021), meaning thorough investigation of the effects of divalent ions on drug block of hERG at large scale remains technically difficult. In addition to this practical challenge, there is also the issue of what is physiologically or clinically relevant. While observing the effects of wide ranges in concentration of divalent ions may be mechanistically interesting, calcium and particularly magnesium exist in narrow physiological ranges, meaning the clinical relevance of such studies are limited. Effect of acidosis and alkalosis on drug-induced QT prolongation Metabolic acidosis can increase the QT interval on the ECG (Yenigun et al. 2016) as well as lower the threshold for ventricular fibrillation. 
Such changes can become particularly problematic in the case of localised changes in pH surrounding ischemic regions of the heart, which produce heterogeneity in action potential duration and provide an electrical substrate for re-entry (Clayton and Holden 2005; Gebert et al. 1971; Podrid and Myerburg 2005; Surawicz 1985). Of specific relevance to this review, acidosis has also been reported as a comorbidity in cases of diLQTS (Riezzo et al. 2009). In relation to hERG channels, changes in pH can directly affect hERG function (Anumonwo et al. 1999; Jiang et al. 1999; Jo et al. 1999; Lin et al. 2005a; Shi et al. 2014; Van Slyke et al. 2012; Vereecke and Carmeliet 2000) as well as the molecular pharmacology of the drug-channel interaction. In the latter case, early experiments showed that a reduction in pH to 6.8 could significantly reduce hERG block by dofetilide (West et al. 1997). Across numerous subsequent reports, there is broad consensus that extracellular acidification reduces hERG block by a range of compounds (Du et al. 2011; Lin et al. 2005a, b, 2008; TeBay et al. 2021; Thurner et al. 2014; Tschirhart and Zhang 2020; Wang et al. 2016; Zhang et al. 1999), with alkalisation enhancing drug block (Lin et al. 2005a; Thurner et al. 2014; Tschirhart and Zhang 2020; Zhang et al. 1999). There is however some complexity to this relationship, since quite different results were seen when the extracellular solution was acidified using sodium acetate rather than hydrochloric acid. In this case, while lowered pH still reduced block by quinidine and azimilide, the potency of dofetilide was increased, with the authors suggesting this perhaps occurred because sodium acetate reduced the intracellular (as well as extracellular) pH. Furthermore, in experiments examining acidification of the intracellular space while extracellular pH was maintained in the physiological range, block by dofetilide, flecainide and amiodarone was not diminished when the drugs were applied extracellularly (Du et al. 2011), while for ibogaine, intracellular application of the drug in the presence of intracellular acidification greatly increased the extent of block (Thurner et al. 2014).

Fig. 2 Mechanisms of environmental effects on hERG and drug interactions. Some pathophysiological changes can have effects on the molecular mechanisms of hERG. a A schematic showing hERG gating, starting in the closed state (left), transitioning to the open state (middle) upon depolarisation and transitioning again to an inactivated state (right) with further depolarisation; the reverse transitions are driven by repolarisation. Conditions that can increase deactivation (from open to closed state) include acidosis and high concentrations of divalent ions, whereas conditions that can increase the drive to inactivation include low potassium ion concentration. Finally, raising temperature increases the threshold for hERG to exist in the open state. Beneath are drugs with state preference, with arrows indicating the hERG state towards which they preferentially bind, including dofetilide, able to bind the open or inactivated state with greater preference for the latter (Perrin et al. 2008; Wang et al. 2016; Yang et al. 2004), flecainide with open-state preference (Paul et al. 2002) and erythromycin with open- or closed-state preference (Guo et al. 2005). b The effect of acidosis on drug diffusion across the lipid bilayer. The site of binding is often located such that drug molecules require access from the intracellular side of the membrane and so must be able to cross the cell membrane. The left panel shows drug administered extracellularly in the presence of extracellular acidosis. Where the local pH is far below the pKa of the drug molecule, a significant proportion of the drug will become charged (D+) and hence unable to cross the lipid bilayer and reach the site of drug binding, whereas when the pH is only slightly below (or above) the molecule's pKa, a greater proportion is available in the neutral or uncharged state (Dn), which can cross the cell membrane and reach its site of action (indicated by the closed green circle). The right panel shows similar conditions for intracellular drug application with intracellular acidosis. Here, a greater amount of neutral drug molecule leads to greater diffusion out of the cell, and hence less drug is available for channel block, whereas with a local pH far below the drug molecule's pKa, the drug molecule becomes charged and is trapped within the cell, so a greater amount is available to block the channel. All channel and lipid bilayer assets were obtained from Servier Medical Art (Servier 2021)

Mechanism of pH effects on hERG block Despite differences in the drug class and chemical structure of compounds that block hERG, a common explanation for the effect of pH on drug potency has emerged, based on how the charge on the functional groups of a drug molecule affects its partition coefficient and hence its ability to cross the cell membrane. For example, antimalarial drugs such as quinine and chloroquine are weak bases and can gain or lose protons from their amino groups depending on pH (Warhurst 1986). In their neutral form, these compounds are lipophilic, with a high partition coefficient (logP), and hence are able to cross the membrane to access their intracellular binding site. However, in more acidic environments, these molecules become protonated, more hydrophilic/lipophobic and less membrane permeable, limiting access to their intracellular binding site and reducing the observed degree of block (Warhurst 1986; Warhurst et al. 2003) (Fig. 1b). Consistent with this, it has been seen that a drug's potency to block hERG increases with lipophilicity, as measured by logP, or basicity, as measured by pKa (Kawai et al. 2011), while several studies of individual compounds also support this mechanism. For example, Zhang et al. calculated that for verapamil, with a pKa around 8.8, 4% of molecules would exist in a neutral form at pH 7.4, compared to 28% at pH 8.4 and 0.4% at pH 6.4, and observed a corresponding reduction in the potency of block as pH was decreased in vitro (Zhang et al. 1999). The authors also demonstrated that block by N-methyl-verapamil, a permanently charged analogue of verapamil, was not sensitive to changes in pH, confirming that the effect on block was specifically due to the charge on the drug molecule (Zhang et al. 1999). Similar explanations have also been posed for other drugs such as flecainide (Du et al. 2011), ibogaine (Thurner et al. 2014), fentanyl (Tschirhart and Zhang 2020) and hydroxychloroquine (TeBay et al. 2021), supporting the case that this is a common mechanism for the effect of pH on a drug's potency to block hERG.
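To make the relationship between pH, pKa and the membrane-permeant drug fraction concrete, the neutral fraction of a monobasic drug can be estimated from the Henderson-Hasselbalch relationship (a worked sketch; the percentages below simply reproduce the verapamil figures quoted above):

fraction neutral = 1 / (1 + 10^(pKa − pH))

For verapamil (pKa ≈ 8.8) this gives 1 / (1 + 10^1.4) ≈ 0.04 (about 4%) at pH 7.4, 1 / (1 + 10^0.4) ≈ 0.28 (about 28%) at pH 8.4 and 1 / (1 + 10^2.4) ≈ 0.004 (about 0.4%) at pH 6.4, so a one-unit fall in pH reduces the membrane-permeant, neutral species roughly tenfold.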
Dofetilide has multiple functional groups with different pKa values, including two methanesulfonamide groups, with pKa values of 9.0 and 9.6, as well as a nitrogen atom with a pKa of 7, making it a zwitterion (Du et al. 2011). At a pH of 7.4, 2.5 and 0.6% of the methanesulfonamide moieties are charged, compared with 28.5% of amine groups (Du et al. 2011), while at pH 6.3, 0.2% and 0.06% of the methanesulfonamide and 84% of the amine groups would be charged. Thus, the overall effect of acidic pH is a more charged, membrane impermeant molecule that shows reduced block of hERG at lower pH (Du et al. 2011). Other drugs have pKa values outside of the physiological/ pathophysiological range but can also exhibit modified potency of hERG block with respect to pH. For example, flecainide, with a pKa of 9.3, exists in 1.2% and 0.1% neutral form at pH 7.4 and 6.3, respectively (a 12-fold difference), so still exhibits significant changes in observed potency between these pH values. Conversely, at the other extreme, amiodarone has a pKa of 5.6 (98% neutral at pH 7.4 and 83% at pH 6.3) and is not sensitive to pH changes in the same range (Du et al. 2011). Finally, for some drugs such as ibogaine, this same mechanism can also result in internal accumulation of a drug molecule, where under low intracellular pH the drug molecule becomes ionised, and hence trapped within the cell, thus increasing the apparent potency of the drug (Fig. 2b) (Thurner et al. 2014). In addition to the effect of pH via charge on the drug molecule, a further layer of nuance exists in understanding how environmental pH can alter a drug's potency to block hERG. In a similar manner to extracellular potassium, pH can also affect hERG channel function and hence influences state-specific drug-channel interactions. Specifically, acidosis is known to accelerate hERG deactivation, affecting the occupation of the open state at a given voltage (Anumonwo et al. 1999;Jiang et al. 1999;Jo et al. 1999;Vereecke and Carmeliet 2000) (Fig. 2a). In relation to this, the neutral form of dofetilide has been reported to preferentially bind to the open state of the hERG channel, while the cationic form preferentially binds to the inactivated state (Wang et al. 2016). Using molecular docking simulations, Wang et al. showed that as the channel transitions between open and inactivated states, there is reorientation of the key residues F656 and Y652 that form the drug binding site. Concomitant with this, cationic dofetilide can change confirmation, bringing its benzene rings closer in an event known as π-π stacking, which allows the dofetilide molecule to bind to the channel and stabilise hERG in the inactivated state (Wang et al. 2016). Therefore, overall, a range of factors including the pKa of the compound, the pH of the extracellular versus intracellular environment, passage to the compounds intracellularly accessed binding site and the compound's state preference all contribute to the pH effect on hERG block in a compound-specific manner. Furthermore, in the physiological/pathophysiological range of pH, significant changes in hERG block, and hence QT prolongation, can occur, making this an important factor for consideration in relation to diLQTS. Effect of febrile temperature on hERG block and drug-induced long QT syndrome Elevated/febrile body temperature, as a result of illness and infection, is known to alter or exacerbate diLQTS phenotypes in patients. 
Perhaps, most commonly, this occurs in association with the use of antibiotics such as vancomycin and gentamicin (Varriale and Ramaprasad 1995), or antifungals such as posaconazole (Panos et al. 2016), to treat infection. However, febrile temperatures are also associated with other pathophysiological conditions such as hypertension and diabetes mellitus in patients who may also be prescribed drugs with potential to prolong the QT interval such as enalapril and glyburide, respectively (Varriale and Ramaprasad 1995). In vitro studies that are specific to febrile versus physiological temperature are limited, with inconsistent reports across different drugs. Erythromycin, for example, has been shown to be a more potent hERG blocker at physiological (37 °C) as opposed to ambient (22 °C) temperature, with further increased potency observed at febrile temperatures (42 °C) (Guo et al. 2005). In contrast, for moxifloxacin, no significant change in potency was observed between physiological temperature and 42 °C (Alexandrou et al. 2006). Similarly, our investigations showed that febrile temperature significantly increased the potency of azithromycin as compared to physiological temperatures, while for chloroquine and hydroxychloroquine, potency was significantly reduced (TeBay et al. 2021). Further insights into the effect of temperature on hERG block can be gleaned from experiments performed at subfebrile temperatures, which are far more common in the literature. Lacerda et al. reported that physiological temperatures (35 °C) evoked only a slight change in potency for terfenadine and loratadine (increase or decrease respectively), with no significant changes observed for cisapride and erythromycin when compared to ambient temperature (Lacerda et al. 2001). Contrary to this, other studies report significant effects of temperature on block of hERG by erythromycin (~ sevenfold increase in potency) (Kirsch et al. 2004) -a difference perhaps is a result of the different voltage protocols used between the two studies. In relation to this, Kirsch noted that at 22 °C, erythromycin did not reach steady state of block when employing a 2-s step pulse protocol with a 10-s interval, leading to an inaccurate estimate of IC 50 , while at physiological temperature, the true steady state was reached, because of the faster onset of block. This raises an important point that is equally applicable to any studies assessing hERG potency -that there is no "gold standard" protocol and the observed degree of block can be protocol specific. As a result, this potentially confounding factor should be considered in any comparison between studies, such as those described in this review. Overall, then it is clear from the literature that the effect of temperature on potency is compound specific, meaning consideration of the proarrhythmic risk associated with administration of potentially QT prolonging drugs to patients with fever needs to be made on a drug-by-drug basis. How does temperature modify potency of block? In a similar manner to kalaemic variation, experiments examining the temperature dependence of the potency of hERG block have suggested two potential mechanisms to explain temperature sensitivity: first, through modification of hERG channel function, particularly in relation to binding of state-dependent drugs, and, second, through direct effect on drug interaction with its binding site on the channel protein. 
In relation to the first of these, hERG electrophysiology displays complex temperature dependence, with increasing temperature causing a negative shift in the voltage dependence of activation, in concert with a positive shift in the voltage dependence of inactivation ( Vandenberg et al. 2006), resulting in an overall increased occupancy of the open state at physiological voltages (Fig. 2a). For compounds that exhibit state-dependent binding, these temperature-dependent shifts in state occupancy therefore have potential to affect the measured potency of block. In this regard, Yao et al. investigated the effects of temperature on hERG block by probing state-dependent inhibition with various voltage protocols and temperatures. For astemizole, overall decreased potency was observed at higher temperature, with the greatest degree of block observed with a nonstate selective protocol, suggesting that astemizole is able to block multiple states of hERG (Yao et al. 2005). Both terfenadine and ketoconazole similarly showed little preference between protocols optimised for close-or open-state occupancy and, consistent with that, showed little change in potency at higher temperatures. Finally, while E-4031 exhibited open-state preference during ambient temperature recordings, no change in potency was observed at higher temperatures (Yao et al. 2005). The relationship between channel state occupancy, temperature and channel block is therefore complex and requires further experiments across a wider selection of compounds to fully resolve. The second possible explanation for the effect of temperature on hERG potency -a direct impact of temperature on drug binding kinetics -has been probed using combinations of fast perfusion systems, voltage protocols and in silico modelling. Using ultra-fast solution exchange systems, Windley et al. were able to directly measure both the onset 1 3 of block and washout of cisapride, showing that the kinetics of both drug binding and dissociation were temperature sensitive and that complex characteristics of kinetics at higher temperatures could be explained by an accumulation of drug in an intermediate, non-blocking state (termed an encounter complex). Furthermore, they showed that in the context of the cardiac action potential, these temperature-dependent effects on drug binding kinetics were important in predicting the degree of prolongation associated with hERG block . Following this, a study of a broader range of drugs including verapamil, cisapride, bepridil and terfenadine found that while increasing temperature accelerated the observed onset of block (τ on ) for all drugs, the temperature dependence of association and dissociation rates was compound specific (Windley et al. 2018). Furthermore, while there was no significant effect of temperature on measured potency in steady-state block assays, the alterations to the kinetic parameters alone still resulted in variable temperature dependence of the predicted degree of action potential prolongation for each of the drugs (Windley et al. 2018). Overall, this data therefore supports the need to consider the influence of temperature on the kinetics of drug block, even in the absence of changes to potency, in relation to diLQTS. Furthermore, since the effects of temperature appear to be compound specific, pharmacological screening data for use for risk prediction in diLQTS should where possible be acquired at physiological temperatures. 
Drug coadministration

While most in vitro studies focus on the effect of a single environmental factor on hERG potency, the reality in relation to QT prolongation in the clinical setting is more complex. Patients are often administered multiple drugs with potential to prolong repolarisation, in the background of combinations of electrolyte disturbances and/or chronic disease states (Ayad et al. 2010; Digby et al. 2010). For example, in a study by Digby et al., subjects were prescribed on average 2.8 QT prolonging drugs in the background of diseases including hypertension and dilated cardiomyopathy (Digby et al. 2011). Similarly, in a study of 48 patients hospitalised for TdP, the mean number of QT prolonging medications per patient was 1.1, with electrolyte imbalances seen in 79% of patients (Lazzerini et al. 2018). These data therefore highlight the importance of considering how drugs might interact with each other, either directly or indirectly, in understanding QT prolongation in patients. Regarding direct drug effects, the simplest consideration is that of an additive effect on hERG block. Most drugs that block hERG are thought to share a common binding site formed by a network of aromatic residues in the vestibule of the channel (Kamiya et al. 2008; Stansfeld et al. 2006). Given this common binding site, a patient taking multiple QT prolonging agents can simply be considered to carry an increased load of hERG channel block - so increasing their potential for QT prolongation and TdP. In patients, these additive effects have most often been reported in association with coadministration of antipsychotic drugs. Lin et al. reported a patient presenting with schizophrenia who was prescribed risperidone, amisulpride and haloperidol, leading to sudden cardiac arrest, where discontinuation of amisulpride led to a gradually normalised QTc interval (Lin et al. 2009). In the same study, the authors also described a second patient who developed a QTc interval of 510 ms when co-administered amisulpride and flupenthixol, with neither agent alone producing concerning QT prolongation (Lin et al. 2009). Aside from additive effects on hERG block, coadministration of drugs can also result in increased torsadogenicity via effects on drug metabolism. Increasing concentrations of berberine or clarithromycin have been shown to significantly inhibit the activity of cytochrome P450 enzymes of the CYP3A family in vitro. Since CYP3A enzymes are major metabolisers of many QT prolonging drugs, this reduction in CYP3A activity can lead to altered pharmacokinetics and hence a greater plasma concentration of the co-administered drug (Zhi et al. 2015). This link between inhibition of drug metabolism and proarrhythmia has been observed across multiple studies, including reports that ketoconazole, erythromycin, diltiazem, itraconazole and grapefruit juice - all inhibitors of cytochrome P450 enzymes - have resulted in increased serum concentrations of terfenadine, halofantrine and cisapride, leading to QT prolongation and TdP (Charbit et al. 2002; Paris et al. 1994; Pohjola-Sintonen et al. 1993; Rajput et al. 2010; Thomas et al. 1998). This phenomenon has also been detected in larger cohorts, where coadministration of ketoconazole with domperidone was found to triple the plasma concentration of domperidone, exacerbating QTc prolongation to clinically significant levels, over and above that observed for either agent alone (Boyce et al. 2012).
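As a minimal quantitative sketch of the "increased load" idea, the snippet below combines two drugs competing for a single shared binding site; each drug contributes its concentration normalised to its IC50. The concentrations, IC50 values and the single-site competitive model itself are illustrative assumptions - real drug pairs can deviate from simple single-site competition.

def combined_block(drugs):
    # Fractional block when several drugs compete for one binding site:
    # each contributes a dimensionless "load" of concentration / IC50.
    load = sum(conc / ic50 for conc, ic50 in drugs)
    return load / (1.0 + load)

drug_a = (5.0, 10.0)  # (concentration, IC50) in uM, hypothetical
drug_b = (1.0, 2.0)
print(f"A alone: {100 * combined_block([drug_a]):.0f}% block")
print(f"B alone: {100 * combined_block([drug_b]):.0f}% block")
print(f"A + B:   {100 * combined_block([drug_a, drug_b]):.0f}% block")

The same function also caricatures the pharmacokinetic interactions discussed next: tripling the effective concentration of one drug, as reported for domperidone co-administered with ketoconazole, simply triples its load term.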
Systemic effects induced by other drugs have also been seen to modify the risk profile of QT prolonging compounds. For example, Roden et al. described cases where hypokalaemia caused by potassium-depleting diuretics was found to exacerbate quinidine-induced QT prolongation (Roden et al. 1986), while incidences of hypomagnesemia caused by proton pump inhibitor usage, in combination with QT prolonging medications such as ceftriaxone or disopyramide, were shown to trigger TdP (Lazzerini et al. 2018). Finally, another case described a patient whose treatment with prednisolone for myasthenia gravis precipitated atrial fibrillation, which was in turn treated with disopyramide. The disopyramide administration resulted in worsening myasthenia gravis, leading to respiratory failure and serum disturbances including alkalosis and hypokalaemia, which together precipitated TdP (Hirose et al. 2008). Together, these cases demonstrate that regardless of the mechanism of their interaction, the simultaneous presence of multiple hERG blocking agents, and their interaction with systemic factors such as electrolytes, have clear potential to increase proarrhythmic risk, and patients should be monitored appropriately when QT prolonging medicines are co-administered.

Conclusions

In order to understand or predict the occurrence of drug-induced QT prolongation and TdP in patients, it is clear that risk allocation is far more complicated than a static label assigned to individual drugs. Rather, a range of pathophysiological factors associated with disease states, as well as coadministration with other drugs, need to be considered when prescribing and managing the risk of therapeutics with potential to prolong the QT interval. While significant literature exists describing how factors such as pH, fever and kalaemic variation affect potency to block hERG, there are still gaps in our knowledge regarding the mechanisms of these effects, which may be better addressed via studies on more extensive drug libraries that are now feasible as a result of the increased use of high-throughput automated patch-clamp screening platforms. Furthermore, incorporation of data from these large-scale screens into population models of cardiac electrophysiology (TeBay et al. 2021; Varshneya et al. 2021) will help us better understand the relationships between a drug's ion channel blocking potency, the effect of environmental modifiers, genetic background and risk of TdP.

Author contribution Clifford TeBay and Monique Windley contributed to the conception and design. Clifford TeBay performed the literature review and wrote the first draft. All authors critically revised the manuscript. All authors approved the final manuscript.

Funding Open Access funding enabled and organized by CAUL and its Member Institutions. Clifford TeBay is supported through an Australian Government Research Training Program Scholarship. Monique Windley is supported by an Australian National Health and Medical Research Council project grant to Adam Hill (GNT1164518).

Declarations
Ethics approval and consent to participate: Not applicable.
Conflict of interest The authors declare no competing interests.
Non-Abelian Flux Tubes in SQCD: Supersizing World-Sheet Supersymmetry

We consider non-Abelian 1/2 BPS flux tubes (strings) in a deformed N=2 supersymmetric gauge theory, with mass terms mu_{1,2} of the adjoint fields breaking N=2 down to N=1. The main feature of the non-Abelian strings is the occurrence of orientational moduli associated with the possibility of rotations of their color fluxes inside a global SU(N) group. The bulk four-dimensional theory has four supercharges; half-criticality of the non-Abelian strings would then imply N=1 supersymmetry on the world sheet, i.e. two supercharges. In fact, the superalgebra of the reduced moduli space has four supercharges. The internal dynamics of the orientational moduli are described by the two-dimensional CP(N-1) model on the string world sheet. We focus mainly on the SU(2) case, i.e. the CP(1) world-sheet theory. We show that non-Abelian BPS strings exist for all values of mu_{1,2}. The low-energy theory of moduli is indeed CP(1), with four supercharges, in a wide region of the breaking parameters mu_{1,2}. Only in the limit of very large mu_{1,2}, above some critical value, does the N=2 world-sheet supersymmetry break down to N=1. We observe "supersymmetry emergence" for the flux-tube junction (confined monopole): the "kink-monopole" is half-critical considered from the standpoint of the world-sheet CP(1) model (i.e. two supercharges conserved), while in the bulk N=1 theory there is no monopole central charge at all.
In Eq. (1.1), M_SUSY refers to the sector associated with the bosonic generators in the superalgebra which are broken by the given soliton, by virtue of the introduction of central charges [9], plus their fermionic counterparts. In the case at hand, magnetic flux tubes, two translations are spontaneously broken. The realization of supersymmetry in this sector, associated with the unbroken generators (half of the translations and supertranslations are unbroken in the problem to be considered below), is fully fixed by flat geometry. At the same time, M in Eq. (1.1), the reduced moduli space, associated with internal symmetries and the corresponding moduli, can have more contrived realizations of supersymmetry. A phenomenon of this type - supersymmetry enhancement - was discovered in Ref. [10] in the domain wall problem. The world-sheet dynamics on M, at the level of two derivatives, were described [10] by a three-dimensional model which had twice as many supercharges as one could have a priori expected. In the present work we report similar results for the non-Abelian strings which emerge as topological defects in some N = 1 four-dimensional super-Yang-Mills models with matter. The bulk model has four supercharges; the strings under consideration are 1/2 BPS. One could expect two supercharges in the world-sheet algebra. At the same time, the low-energy theory of moduli on the string world sheet - the CP(N − 1) model - has four supercharges. Two extra (or "supernumerary") supercharges which are realized on M cannot be lifted to supercharges of the bulk theory. Thus, the phenomenon of supersymmetry enhancement, or supersizing of the world-sheet supersymmetry, is of a rather general nature and is not rare. It has a geometric origin and can be traced back to the Kähler structure of the reduced moduli space. A particular bulk theory we will deal with is a deformed N = 2 supersymmetric SU(N)×U(1) theory. This model has already been heavily exploited [4] in the context of non-Abelian strings.
The deformation discussed in [4] was a superpotential term linear in A, where A is the adjoint superfield of the U(1) factor. This deformation is known to be N = 2 preserving. Now, instead, we introduce mass terms µ_1,2 for the adjoint superfields A^a and A, which certainly break N = 2 down to N = 1. Thus, the bulk four-dimensional theory has four supercharges. Concentrating mainly on the simplest case of SU(2)×U(1), we construct the 1/2 BPS non-Abelian string solution exploiting techniques worked out previously. Because of the half-criticality of our solution, a priori we could expect two supercharges on the reduced moduli space, i.e. an N = 1 low-energy theory of moduli. This is not what actually happens. We show, by performing an explicit analysis of the zero modes, that the world-sheet theory on the reduced moduli space is the supersymmetric CP(1) model (at the level of two derivatives). This model has N = 2, i.e. four supercharges (for a review see e.g. [12]). The (real) dimension of the bosonic part of M is two. The necessary condition for the enhancement of supersymmetry is the occurrence of four fermion zero modes. Thus, the most crucial and most technically involved part of the analysis of the zero modes is that of the fermion zero modes. Their construction is carried out explicitly, including the two extra modes. Once we obtain four fermion zero modes and introduce the corresponding (four) fermion moduli, combining this with the knowledge that N = 1 supersymmetry on the world sheet is automatic, the Kähler structure of M immediately implies the full-blown N = 2. Then we address the issue of the evolution in the mass deformation. Indeed, as µ_1,2 → ∞ (in fact, we only need µ ≫ √ξ, where ξ is a Fayet-Iliopoulos parameter), the adjoint fields A^a and A become very heavy and decouple from the bulk theory altogether, leading to N = 1 SQCD with the gauge group SU(2)×U(1). It is known [13] that N = 1 SQCD admits only Abelian BPS strings. The question is what happens with our non-Abelian 1/2 BPS strings as the parameters µ_1,2 grow. This question turns out to be subtle. It turns out that the parameter ξ/µ plays the role of an infrared regulator. Physically, at µ ≫ √ξ the adjoint fields do decouple. However, in the limit µ → ∞, after the decoupling, the emerging N = 1 SQCD develops a Higgs branch, which is absent for any finite µ. At any finite µ the vacuum manifold is an isolated point, which makes the string solution, as well as the zero modes, well-defined. If µ is large but finite, the mass of the would-be moduli corresponding to the "motion" along the Higgs branch is ∼ ξ/µ. Thus, there is a seemingly irreconcilable contradiction. On the one hand, it is clear that at µ ≫ √ξ we must recover N = 1 SQCD. On the other hand, in the BPS string analysis the limit µ → ∞ seemingly cannot be taken. A way out was in fact suggested in the literature in the context of a similar problem [14]. In Ref. [14] Abrikosov-Nielsen-Olesen (ANO) strings [15] were considered on the Higgs branch of an N = 2 gauge theory (with massive fundamental matter). Common wisdom says [16] that there are no ANO strings in this case (it would be more accurate to say that they inflate and become infinitely thick), because of the same infrared problem. It was discovered, however, that strings of finite length L are perfectly well-defined, no matter how large L is. The role of L is to provide an infrared regularization. The string thickness was found [14] to be proportional to ln L, while the string mass scales as L/ln L rather than pure L as in the classical ANO case.
If we do the same thing in our problem - i.e. consider a finite-length string - the limit µ → ∞ becomes perfectly well-defined. The parameter µ/ξ is replaced by L, which provides the infrared regularization. Unlike the problem considered in [14], in the present case the infrared divergence does not appear in the bosonic string solution per se; it is only the "extra" fermion zero mode normalization that is plagued by a logarithmic divergence. There is a price one has to pay for the finite-length regularization - the loss of "BPS-ness." Since "BPS-ness" is a convenient feature, we find the finite-µ regularization more appropriate, even though it requires the inclusion of the adjoint fields in the bulk Lagrangian. This seems to be a smaller price. Once we stick to the finite-µ regularization and normalizability of the four fermion zero modes is achieved, the low-energy theory of moduli exhibits supersymmetry enhancement. The normalizing parameter, which depends on µ logarithmically, can be absorbed in the definition of the moduli fields and does not show up explicitly. Thus, if the N = 1 bulk theory has an isolated vacuum (no Higgs branch), we can state with certainty that the low-energy moduli theory on the world sheet of the non-Abelian BPS string is indeed CP(1), with four supercharges, as long as we limit ourselves to two-derivative terms in the world-sheet Lagrangian. It is necessary to stress that although many features of the analysis reported here are parallel to those of the domain-wall problem [10], some important features are rather different. In particular, a Kähler structure for the moduli space, which appears automatically, is not sufficient now, generally speaking, for enhanced SUSY, since Lorentz invariance in 1+1 dimensions imposes no useful constraints (as opposed to the situation [10] in 1+2 dimensions). Indeed, in the pure N = 1 limit (i.e. with µ_1,2 = ∞), the Kähler structure of the bosonic moduli space persists. Then the minimal N = 1 world-sheet SUSY will be realized in the chiral (0,2) form consistent with the complex structure. In the flux-tube problem it is not Lorentz invariance which ensures the one-to-two matching of bosonic versus fermionic zero modes but, rather, the possibility of embedding the system within N = 2 SQCD. This possibility was not available in the domain-wall case [10]. As a warm-up exercise we will also consider a seemingly well-studied problem, that of the ANO strings in N = 1 SQED. Of course, in this case the internal moduli space M is absent. However, following the same line of reasoning as in the case of the non-Abelian strings above, we can start from N = 2 SQED [17] (eight supercharges), construct the Abelian half-critical string, which has four fermion moduli in M_SUSY, and then make the adjoint mass deformation term very large, effectively returning to N = 1 SQED. For arbitrarily large but finite µ we keep all four fermion zero modes: two natural and two "extra." Correspondingly, we keep the N = 2 theory of moduli from M_SUSY. Of course, in this methodical example it is a trivial free field theory (in 1+1 dimensions).

2 The bulk theory

In this section we briefly describe the bulk theories we will deal with. N = 2 SQED is discussed in detail in Ref. [17], while the version of SQCD we will focus on is thoroughly discussed in Refs. [3,4].

2.1 Abelian bulk theory

Let us denote the scalar and fermion fields in the "quark" hypermultiplets as q, q̃ and ψ, ψ̃, respectively.
Note that the scalars form a doublet under the action of the global SU(2)_R group, q^f = (q, q̃). In terms of these fields the action of N = 2 SQED deformed by the (N = 2)-breaking mass term µ of the adjoint field a takes the form (2.1). Here and below we use a formally Euclidean notation, e.g. F²_µν = 2F²_0i + F²_ij, (∂_µ a)² = (∂_0 a)² + (∂_i a)², etc. This is appropriate since we are going to study static (time-independent) field configurations, with A_0 = 0; then the Euclidean action is nothing but the energy functional. Furthermore, we define σ_αα̇ = (1, −iτ) and σ̄^α̇α = (1, iτ). Lowering and raising of the spinor indices is performed by virtue of the antisymmetric tensor defined as ε^12 = ε^1̇2̇ = 1, ε_12 = ε_1̇2̇ = −1. The same raising and lowering convention applies to the flavor SU(2) indices f, g, etc., see [4]. Finally, ξ is the Fayet-Iliopoulos (FI) parameter. The vacuum in this theory is determined (up to gauge transformations) by the vacuum expectation values (VEV's) given in (2.2). The nonvanishing VEV of the squark field breaks the U(1) gauge symmetry, giving mass to the photon. The mass spectrum of the theory in the vacuum (2.2) was studied in Ref. [17]; see also [18]. At non-zero µ, the extended N = 2 supersymmetry in (2.1) is broken down to N = 1, and the states come in N = 1 supermultiplets. The massive vector multiplet has the mass (2.3), while the two chiral multiplets acquire the masses (2.4), where λ± are the two roots of the quadratic equation (2.5) and ω is the N = 2 breaking parameter (2.6). At µ = 0 one gets λ± = 1, and all the states listed above form the bosonic part of one long N = 2 massive vector multiplet [17]. As we switch the parameter µ on, this N = 2 vector multiplet splits into one vector and two chiral multiplets of the N = 1 supersymmetric theory. In the limit µ → ∞ the heavy neutral field a and its superpartners can be integrated out [19,20,17], leading to N = 1 SQED, Eq. (2.7). This theory has a two-dimensional Higgs branch of hyperbolic form. As we increase µ in (2.1) we arrive, in the limit µ → ∞, at a base point of this Higgs branch with q̃ = 0.

2.2 Non-Abelian bulk theory

The content of this section is a direct non-Abelian generalization of Sect. 2.1. The gauge symmetry of the model we will use is SU(2)×U(1). Besides the gauge bosons, gauginos and their superpartners, it has a matter sector consisting of two "quark" hypermultiplets with degenerate masses. In addition, we introduce a Fayet-Iliopoulos D-term for the U(1) gauge field, which triggers quark condensation. Let us first discuss the undeformed theory with N = 2. The superpotential has the form (2.8), where A^a and A are chiral superfields, the N = 2 superpartners of the gauge bosons of SU(2) and U(1), respectively. Furthermore, q^A and q̃_A (A = 1, 2) represent the two matter hypermultiplets; the flavor index is denoted by A. Thus, in our model the number of colors equals the number of flavors. Next we add a superpotential mass term (2.9), which breaks supersymmetry down to N = 1, where µ_1 and µ_2 are mass parameters for the chiral superfields in the N = 2 gauge supermultiplets, U(1) and SU(2) respectively. Clearly, the mass term (2.9) splits these supermultiplets, breaking N = 2 supersymmetry down to N = 1. The bosonic part of our SU(2)×U(1) theory has the form (2.10). Here D_µ is the covariant derivative in the adjoint representation of SU(2), and τ^a are the SU(2) Pauli matrices. The coupling constants g_1 and g_2 correspond to the U(1) and SU(2) sectors, respectively. With our conventions, the U(1) charges of the fundamental matter fields are ±1/2.
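For orientation, the superpotentials referred to as (2.8) and (2.9) take the following standard form in this class of models; the normalizations below are a sketch in the usual conventions of Refs. [3,4], not a verbatim transcription of the original equations:

% N = 2 superpotential coupling the adjoint superfields to the quark
% hypermultiplets (the role played by Eq. (2.8)):
\mathcal{W}_{\mathcal{N}=2} \sim \sqrt{2}\,\tilde q_A
  \left( \frac{A}{2} + A^a\,\frac{\tau^a}{2} \right) q^A ,
% mass deformation breaking N = 2 down to N = 1 (the role of Eq. (2.9)):
\mathcal{W}_{\rm br} = \frac{\mu_1}{2}\, A^2 + \frac{\mu_2}{2}\, \left(A^a\right)^2 .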
The potential V(q^A, q̃_A, a^a, a) in the Lagrangian (2.10) is a sum of various D and F terms, Eq. (2.12), where the sum over repeated flavor indices A is implied. The first and second lines there represent the D terms, the third line the F_A terms, while the fourth line represents the squark F terms. We also introduced the Fayet-Iliopoulos D-term for the U(1) field, with the FI parameter ξ in (2.12), much in the same way as in Sect. 2.1. Note that the Fayet-Iliopoulos term does not break N = 2 supersymmetry [21,17]. The parameters which do break N = 2 down to N = 1 are µ_1 and µ_2. The Fayet-Iliopoulos term triggers the spontaneous breaking of the gauge symmetry. The vacuum expectation values (VEV's) of the squark fields can be chosen, up to gauge rotations, as in (2.13), while the VEV's of the adjoint fields are given by a^a = 0, a = 0 (2.14). Here we write q as a 2×2 matrix; the first superscript (k = 1, 2) refers to SU(2) color, while the second (A = 1, 2) to flavor. The color-flavor locked form of the quark VEV's in Eq. (2.13) and the absence of a VEV of the adjoint scalar a^a in Eq. (2.14) result in the fact that, while the theory is fully Higgsed, a diagonal SU(2)_C+F survives as a global symmetry. This is a particular case of the Bardakci-Halpern mechanism [22]. The presence of this symmetry leads to the emergence of orientational zero modes of Z_2 strings in the model (2.10) [3]. Note that the VEV's (2.13) and (2.14) do not depend on the supersymmetry breaking parameters µ_1 and µ_2. This is because our choice of parameters in (2.10) ensures the vanishing of the adjoint VEV's, see (2.14). In particular, we have the same pattern of symmetry breaking all the way up to very large µ_1 and µ_2, where the adjoint fields decouple. With two matter hypermultiplets, the SU(2) part of the gauge group is asymptotically free, implying generation of a dynamical scale Λ. If the descent to Λ were uninterrupted, the gauge coupling g_2^2 would explode at this scale. Moreover, strong coupling effects in the SU(2) subsector at the scale Λ would break the SU(2) subgroup through the Seiberg-Witten mechanism [23]. Since we want to stay at weak coupling we assume that √ξ ≫ Λ, so that the running of the SU(2) coupling is frozen by the squark condensation at a small value, Eq. (2.15). Now let us discuss the mass spectrum in the theory (2.10). Since both the U(1) and SU(2) gauge groups are broken by the squark condensation, all gauge bosons become massive. From (2.10) we get for the U(1) gauge boson m_U(1) = g_1 √ξ (2.16), while the three gauge bosons of the SU(2) group acquire the same mass, m_SU(2) = g_2 √ξ (2.17). To get the masses of the scalar bosons we expand the potential (2.12) near the vacuum (2.13), (2.14) and diagonalize the corresponding mass matrix. Four components of the eight-component scalar q^kA are eaten by the Higgs mechanism for the U(1) and SU(2) gauge groups. The remaining four components are split as follows: one component acquires the mass (2.16) and becomes the scalar component of the massive N = 1 vector U(1) gauge multiplet; the other three components acquire masses (2.17) and become scalar superpartners of the SU(2) gauge bosons in the N = 1 massive gauge supermultiplet. The other 16 real scalar components of the fields q̃_Ak, a^a and a produce the following states: two states acquire the mass (2.18), while the mass of the other two states is given by (2.19), where λ±_1 are the two roots of the quadratic equation (2.20) for i = 1.
Here we introduced the two N = 2 supersymmetry breaking parameters ω_1 and ω_2, Eq. (2.21), associated with the U(1) and SU(2) gauge groups, respectively. Furthermore, 2×3 = 6 states acquire the mass (2.22), while the remaining 2×3 = 6 states also become massive, with mass (2.23). Here λ±_2 are the two roots of the quadratic equation (2.20) for i = 2. Note that all states come either as singlets or triplets of the unbroken SU(2)_C+F. When the supersymmetry breaking parameters ω_i vanish, the masses (2.18) and (2.19) coincide with the U(1) gauge boson mass (2.16). The corresponding states form the bosonic part of a long massive N = 2 U(1) vector supermultiplet [17]. At non-zero ω_1 this supermultiplet splits into a massive N = 1 vector multiplet with mass (2.16) and two chiral multiplets with masses (2.18) and (2.19). The same happens to the states with masses (2.22) and (2.23): if the ω's vanish they combine into the bosonic parts of three N = 2 massive vector supermultiplets with mass (2.17), while at non-zero ω's these multiplets split into three N = 1 vector multiplets (for the SU(2) group) with mass (2.17) and 2×3 chiral multiplets with masses (2.22) and (2.23). Note that essentially the same pattern of splitting was found in [17] for the Abelian case, see Sect. 2.1. Now let us take a closer look at the spectrum obtained above in the limit of large N = 2 supersymmetry breaking parameters, ω_i ≫ 1 (2.24). In this limit the larger masses m+_U(1) and m+_SU(2) grow; clearly, in the limit µ_i → ∞ these are the masses of the heavy adjoint scalars a and a^a. At ω_i ≫ 1 these fields decouple and can be integrated out. The low-energy bulk theory in this limit contains the massive gauge N = 1 multiplets and the chiral multiplets with the lower masses m−; Equation (2.20) gives for these masses the expressions (2.25). In the limit of infinite µ_i these masses tend to zero. This fact reflects the emergence of a Higgs branch in N = 1 SQCD, see also Eq. (2.7). To observe the Higgs branch it is instructive to inspect the transition to µ = ∞ in (2.10). Equation (2.10) flows to N = 1 SQCD with the gauge group SU(2)×U(1) and the Fayet-Iliopoulos D-term, Eq. (2.26); see [24] for a review. The composite operators built from the quark superfields are subject to a classical constraint (2.29), which gets modified by instanton effects and becomes (2.30) in the quantum theory [24]. Here Λ_N=1 is the scale of N = 1 SQCD; expressed in terms of the scale Λ of the deformed N = 2 theory (2.10), Λ_N=1 has the form (2.31). In order to keep the bulk theory in the weak coupling regime in the limit of large µ_i, we assume the condition (2.32). Note that the presence of the FI term cannot modify (2.30), because ξ is not a holomorphic parameter. The vacuum (2.13) corresponds to the base point of this Higgs branch with q̃ = 0. In other words, flowing from the N = 2 theory (2.10) we do not recover the whole Higgs branch of N = 1 SQCD (2.26). Instead, we arrive only at an isolated vacuum, the base point of the Higgs branch, no matter how large µ is. What else is there to say? A question to be discussed is as follows: how can our solution, in which q̃ = 0, be compatible with the quantum constraint (2.30)? It seems apparent that the classical vacuum with q̃ = 0 at the base of the Higgs branch no longer exists at the quantum level. Our analysis is quasiclassical. We start with q̃ = 0, so that the corresponding light moduli are not excited. Next we consider quantum corrections. What enters in the constraint (2.30) is the quantum average of the composite operator q̃q. This VEV does not factorize, and Eq. (2.30) can still hold in our solution. In fact, we expect it to hold.
While the light modes fluctuate along the Higgs branch, the massive modes fluctuate in the "orthogonal" directions. Accounting for these latter fluctuations must modify the classical constraint (2.29), transforming it into (2.30). Certainly, it would be instructive to check this explicitly; we leave this exercise for future studies. This issue is of conceptual importance. Practically, though, it is rather unimportant, since we work in the regime (2.32), so that the quantum deformation is parametrically small.

3 Non-Abelian strings

Recently, non-Abelian strings were shown to emerge at weak coupling [3,4,6,7] in N = 2 and deformed N = 4 supersymmetric gauge theories (similar results in three dimensions were obtained in [2]). The main feature of the non-Abelian strings is the presence of orientational zero modes associated with rotation of their color flux in the non-Abelian gauge group, which makes such strings genuinely non-Abelian. Since the solution for the non-Abelian string suggested in [3,4] for N = 2 SQCD does not depend on the adjoint fields, it can be easily generalized to our model (2.10) with broken N = 2 supersymmetry. We will carry out this program in Sect. 3.1.

3.1 The non-Abelian string solution

Here we generalize the string solutions found in [3,4] to the model (2.10). Since this model includes a spontaneously broken gauge U(1), it supports conventional Abrikosov-Nielsen-Olesen (ANO) strings [15], in which one can discard the SU(2) gauge part of the action. The topological stability of the ANO string is due to the fact that π_1(U(1)) = Z. These are not the strings we are interested in. At first sight the triviality of the homotopy group, π_1(SU(2)) = 0, implies that there are no other topologically stable strings. This impression is false. One can combine the Z_2 center of SU(2) with the elements exp(iπ) ∈ U(1) to get a topologically stable string solution possessing both windings, in SU(2) and U(1); in other words, the relevant first homotopy group is non-trivial (3.1). It is easy to see that this non-trivial topology amounts to winding of just one element of the matrix q_vac, say q^11 or q^22, see (3.2). Such strings can be called elementary; their tension is 1/2 of that of the ANO string. The ANO string can be viewed as a bound state of two elementary strings. More concretely, the Z_2 string solution (a progenitor of the non-Abelian string) can be written as in Eq. (3.3) [3], where i = 1, 2 labels the coordinates in the plane orthogonal to the string axis, while r and α are the polar coordinates in this plane. The profile functions φ_1(r) and φ_2(r) determine the profiles of the scalar fields, while f_3(r) and f(r) determine the SU(2) and U(1) gauge fields of the string solution, respectively. These functions satisfy the first-order equations (3.4) [3]. Furthermore, one needs to specify the boundary conditions which determine the profile functions in these equations: f(0) = f_3(0) = 1 and f(∞) = f_3(∞) = 0 for the gauge profile functions (3.5), while the squark profile functions obey φ_1(0) = 0 and φ_1(∞) = φ_2(∞) = √ξ (3.6). Note that since the field φ_2 does not wind, it need not vanish at the origin, and, in fact, it does not. Numerical solutions of the Bogomolny equations (3.4) for the Z_2 strings were found in Ref. [3]; see, e.g., Figs. 1 and 2 of that paper. The tension of this elementary string is T = 2πξ, to be compared with the tension T_ANO = 4πξ of the ANO string, in our normalization. The elementary strings are bona fide non-Abelian.
This means that, besides the trivial translational moduli, they give rise to moduli corresponding to the spontaneous breaking of a non-Abelian symmetry. Indeed, while the "flat" vacuum (2.13) is SU(2)_C+F symmetric, the solution (3.3) breaks this symmetry down to U(1). Hence the world-sheet (two-dimensional) theory of the elementary string moduli is the SU(2)/U(1) sigma model, also known as the CP(1) model. To obtain the non-Abelian string solution from the Z_2 string (3.3) we apply the diagonal color-flavor rotation preserving the vacuum (2.13). To this end it is convenient to pass to the singular gauge, where the scalar fields have no winding at infinity while the string flux comes from the vicinity of the origin. In this gauge the solution takes the form (3.9), where U is a matrix in SU(2) and n^a is a moduli vector, defined in (3.10) and subject to the constraint n² = 1. The vector n^a parametrizes the orientational zero modes of the string associated with the flux rotation in SU(2). The presence of these modes makes the string genuinely non-Abelian. We stress that the orientational moduli encoded in the vector n^a, first observed in [2,3], are not gauge artifacts.

3.2 World-sheet effective theory

In this subsection we briefly review the derivation of the effective world-sheet theory for the orientational collective coordinates n^a of the non-Abelian string. We follow Refs. [3,4]. (The generalization to the case of the SU(N)×U(1) gauge group is done in [8].) As was already mentioned, this macroscopic theory is the CP(1) model (the CP(N − 1) model for the general case of the SU(N)×U(1) gauge group) [2,3,4,6,8]. Assume that the orientational collective coordinates n^a are slowly varying functions of the string world-sheet coordinates x_k, k = 0, 3. Then the moduli n^a become fields of a (1+1)-dimensional sigma model on the world sheet. Since the vector n^a parametrizes the string zero modes, there is no potential term in this sigma model. We begin with the kinetic term [3]. To obtain it we substitute our solution, which depends on the moduli n^a, into the action (2.10), assuming that the fields acquire a dependence on the coordinates x_k via n^a(x_k). Then we arrive at the O(3) sigma model (3.12), where the coupling constant β is given by a normalizing integral (3.13). Using the first-order equations (3.4) for the string profile functions, one can see that this integral reduces to a total derivative and is given by the flux of the string, determined by f_3(0) = 1. This allows us to conclude that the sigma-model coupling β does not depend on the ratio of the U(1) and SU(2) coupling constants and is expressed in terms of g_2 alone, Eq. (3.14). The two-dimensional coupling constant is thus determined by the four-dimensional non-Abelian coupling. In summary, the effective world-sheet theory describing the dynamics of the string orientational moduli is the celebrated O(3) sigma model (which is the same as CP(1)). The symmetry of this model reflects the presence of the global SU(2)_C+F symmetry in the bulk theory. The relation (3.14) between the four-dimensional and two-dimensional coupling constants is obtained at the classical level. In quantum theory both couplings run, so we have to specify the scale at which the relation (3.14) holds. The two-dimensional CP(1) model (3.12) is an effective low-energy theory appropriate for the description of the internal string dynamics at energies much lower than the inverse thickness of the string, which, in turn, is given by √ξ. Thus, √ξ plays the role of a physical ultraviolet (UV) cutoff in (3.12). This is the scale at which Eq. (3.14) holds.
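In the conventions of Refs. [3,4], the world-sheet action and the coupling relation just referred to take the following form; the overall normalization of β is convention dependent, so this should be read as an orientation sketch of Eqs. (3.12) and (3.14) rather than a verbatim reproduction:

% O(3) (= CP(1)) sigma model on the string world sheet, Eq. (3.12):
S^{(1+1)} = \frac{\beta}{2} \int dt\, dz\, \left(\partial_k n^a\right)^2 ,
  \qquad n^a n^a = 1 , \quad k = 0, 3 ,
% classical matching of the two-dimensional coupling to the bulk SU(2)
% coupling at the UV cutoff scale \sqrt{\xi}, Eq. (3.14):
\beta = \frac{2\pi}{g_2^2}\Big|_{E=\sqrt{\xi}} .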
Below this scale the coupling β runs according to its two-dimensional renormalization-group flow. The sigma model (3.12) is asymptotically free [25]; at large distances (low energies) it gets into the strong coupling regime. The running coupling constant as a function of the energy scale E at one loop is given by Eq. (3.15), where Λ_CP(1) is the dynamical scale of the CP(1) model. As was mentioned above, the ultraviolet cutoff of the sigma model at hand is determined by √ξ; hence Λ_CP(1) is given by Eq. (3.16). Note that in the bulk theory, due to the VEV's of the squark fields, the coupling constant is frozen at √ξ; there are no logarithms in the bulk theory below this scale. Below √ξ the logarithms of the world-sheet theory take over. At small values of the deformation parameter µ_2, the coupling constant g_2 of the four-dimensional bulk theory is determined by the scale Λ of the N = 2 theory. Then Eq. (3.16) gives [4] Λ_CP(1) = Λ (3.17), where we take into account that the first coefficient of the β function equals 2 both in the N = 2 limit of the four-dimensional bulk theory and in the two-dimensional CP(1) model. Instead, in the limit of large µ_2, the coupling constant g_2 of the bulk theory is determined by the scale Λ_N=1 of N = 1 SQCD (2.26), as shown in Eq. (2.31). In this limit Eq. (3.16) gives the result (3.18), where we take into account that the first coefficient of the β function in N = 1 SQCD equals four. The renormalization group flow in our theory at µ_2 ≫ √ξ is schematically presented in Fig. 1.

4 Fermion zero modes

Technically, this is a key section of the present work. Let us start from the N = 2 theory (2.10) with the breaking parameters set to zero, µ_i = 0. Our string solution is 1/2 BPS-saturated. This means that four supercharges, out of the eight of the four-dimensional theory (2.10), act trivially on the string solution (3.9). The remaining four supercharges generate four fermion zero modes, which we call supertranslational because they are superpartners of the two translational zero modes. The corresponding four fermionic moduli are superpartners of the coordinates x_0 and y_0 of the string center. The supertranslational fermion zero modes were found in Ref. [17]. As a matter of fact, they were found for the U(1) ANO string in the N = 2 theory, but the transition to the model at hand is absolutely straightforward. We will not dwell on this procedure here. Instead, we will focus below on four additional fermion zero modes which arise only for non-Abelian strings. They are superpartners of the bosonic orientational moduli n^a; therefore, we will refer to these modes as superorientational. In the N = 2 limit these modes were obtained in [4]. If we switch on the supersymmetry (SUSY) breaking parameters µ_i, the number of supercharges in the four-dimensional bulk theory drops to four. A 1/2 BPS string would have two superorientational fermion zero modes in this theory. However, our string is a descendant of the N = 2 theory, where it has four superorientational zero modes. Clearly, the number of zero modes cannot jump as we switch on the parameters µ_i, at least at small µ; this number is determined by index theorems. Thus, it is clear that (at least at small µ) our string has a set of superorientational fermion zero modes twice as large as the algebra tells us. In this section we elaborate on the issue of the four zero modes explicitly at small and large µ, while in Sect. 5 we will study the impact of their presence on the CP(1) model on the string world sheet. To begin with, in Sect.
4.1 we review these modes in the N = 2 limit and then examine what happens to them in the deformed bulk theory.

4.1 N = 2 limit

The fermionic part of the action of the model (2.10) is given in Eq. (4.1); schematically, it comprises the gaugino and matter kinetic terms, λ̄_f i∂̸ λ^f + Tr ψ̄ i∇̸ ψ + Tr ψ̃ i∇̸ ψ̃̄ + …, together with Yukawa interactions and the µ_i-induced terms. Here the matrix color-flavor notation is used for the matter fermions (ψ_α)^kA and (ψ̃_α)_Ak, and the traces are performed over the color-flavor indices. Contraction of the spinor indices is assumed inside all parentheses. We write the squark fields in (4.1) as doublets of the SU(2)_R group which is present in the N = 2 theory, q^f = (q, q̃). Here f = 1, 2 is the SU(2)_R index which labels the two supersymmetries of the bulk theory in the N = 2 limit. Moreover, λ^αf and (λ^αf)^a stand for the gauginos of the U(1) and SU(2) groups, respectively. Note that the last two terms in (4.1) are N = 1 deformations of the fermion sector of the theory induced by the breaking parameters µ_i. They involve only the f = 2 components of the λ's, explicitly breaking the SU(2)_R invariance. Next, we put µ_i = 0 and apply the general method designed in [4] to generate the superorientational fermion zero modes of the non-Abelian string in the N = 2 case. In Ref. [17] it was shown that the four supercharges selected by the conditions ǫ_11 = 0, ǫ_22 = 0 (4.2) act trivially on the BPS string in the theory with the Fayet-Iliopoulos D term. Here ǫ_αf are the parameters of the SUSY transformation. Now, to generate the superorientational fermion zero modes, the following method was used in [4]. Assume that the orientational moduli n^a in the string solution (3.9) have a slow dependence on the world-sheet coordinates x_0 and x_3 (or t and z). Then the four supercharges selected by the conditions (4.2) (namely, ǫ_12, ǫ_21 and their complex conjugates) no longer act trivially. Instead, their action now gives fermion fields proportional to the x_0 and x_3 derivatives of n^a. This is exactly what one expects from the residual N = 2 supersymmetry in the world-sheet theory. The above four supercharges generate the world-sheet supersymmetry of the N = 2 two-dimensional CP(1) model, with the transformation laws (4.3), where χ^a_α (α = 1, 2 is the spinor index) are real two-dimensional fermions of the CP(1) model. They are superpartners of n^a, subject to the orthogonality condition n^a χ^a_α = 0. The real parameters ε_α and η_α of the N = 2 two-dimensional SUSY transformation are identified with the parameters of the four-dimensional SUSY transformations (with the constraint (4.2)) according to Eq. (4.4). The world-sheet supersymmetry was used to re-express the fermion fields obtained upon the action of these four supercharges in terms of the (1+1)-dimensional fermions. This procedure gives us the superorientational fermion zero modes, Eq. (4.5) [4], where the dependence on x_i is encoded in the string profile functions, see (3.9). Now we will directly verify that the zero modes (4.5) satisfy the Dirac equations of motion. From the fermion action of the model (4.1) we get the relevant Dirac equations: Eq. (4.6) for the λ's and Eq. (4.7) for the matter fermions. Next, we substitute the orientational fermion zero modes (4.5) into these equations and take the limit µ_2 = 0. After some algebra one can check that the modes (4.5) do satisfy the Dirac equations (4.6) and (4.7) provided the first-order equations (3.4) for the string profile functions are fulfilled. It is instructive to check that the zero modes (4.5) do produce the fermion part of the N = 2 two-dimensional CP(1) model. To this end we return to the usual assumption that the fermion collective coordinates χ^a_α in Eq.
(4.5) have an adiabatic dependence on the world-sheet coordinates x_k (k = 0, 3). This is quite similar to the procedure of Sect. 3.2. Substituting Eq. (4.5) into the fermion kinetic terms of the bulk theory (4.1), and taking into account the derivatives of χ^a_α with respect to the world-sheet coordinates, we arrive at the kinetic term (4.8), where β is given by the same integral (3.13) as for the bosonic kinetic term, see Eq. (3.12). We can use the world-sheet N = 2 supersymmetry to reconstruct the four-fermion interactions inherent to CP(1). The SUSY transformations in the CP(1) model have the form (4.9) (see [12] for a review), where for simplicity we put η_α = 0. Imposing this supersymmetry leads to the effective theory (4.10) on the string world sheet. This is indeed the action of the N = 2 CP(1) sigma model.

4.2 Breaking N = 2 supersymmetry

Now let us switch on the breaking parameters µ_i. As was discussed in Sect. 3, the bosonic solution for the non-Abelian string does not change at all; it is still given by Eq. (3.9). However, the fermion zero modes do change. Now only four supercharges survive in the four-dimensional bulk theory; they are associated with the parameters ǫ_α1, i.e. f = 1. Nevertheless, we can still use the method of Ref. [4], reviewed in Sect. 4.1, to generate the superorientational fermion zero modes. Condition (4.2) tells us that we now have only one complex parameter, ǫ_21, of SUSY transformations unbroken by the string. This leads to the presence of two supercharges, associated with the two real parameters ε_1 and η_1 according to the identification (4.4), in the world-sheet theory. Following the same steps which led us to (4.5), and taking into account that the bosonic string solution (3.9) does not depend on µ_i, we then obtain the modes (4.11). We see that the reduced supersymmetry generates only two superorientational fermion modes, parametrized by the two-dimensional fermion field χ^a_2. This was expected, of course. The modes proportional to χ^a_1 do not appear, because χ^a_1 is related to the SUSY transformations generated by ǫ_12 (see (4.3) and (4.4)), which is no longer present in the deformed bulk theory. One can easily check that the zero modes (4.11) still satisfy the Dirac equations of motion (4.6), (4.7), simply because the parameter µ_2 does not enter the equations for λ^α1 and ψ̄. It is clear, however, that the other two fermion zero modes, proportional to χ_1, do not disappear. They are just modified and can no longer be obtained by supersymmetry. To find them we have to actually solve the Dirac equations (4.6), (4.7). In this section we consider small µ_2 and develop perturbation theory for (4.6), (4.7); in Sect. 4.3 we treat the large-µ_2 limit. We can solve (4.6), (4.7) order by order in µ_2. Say, if we take (4.5) as the zeroth-order approximation and substitute λ_22 from (4.5) into the last term in Eq. (4.6), we generate the fermion zero modes to first order in µ_2. Let us actually do this. First we note that the components ψ̄^kȦ_2 and λ^a12 vanish: they vanish at zeroth order (see (4.5)) and, as follows from Eqs. (4.6) and (4.7), are not generated at any order in µ_2. It is also easy to check that the remaining fermion fields have the form (4.13). Here we introduced four profile functions, λ± and ψ±, parametrizing the fermion fields λ_22 and ψ̄_1̇. The functions λ+ and ψ+ are expandable in even powers of µ_2, while the functions λ− and ψ− are expandable in odd powers of µ_2.
Substituting (4.13) into the Dirac equations (4.6), (4.7), we get the equations (4.14) for the fermion profile functions. The leading contribution to the µ-even solutions of these equations is given by (4.15), where we express the zeroth-order fermion modes λ_22 and ψ̄_1̇ of (4.5) in terms of the fermion profile functions. Substituting (4.15) into the last equation in (4.14), we can solve for the leading contributions to the µ-odd profile functions. They can be expressed in terms of the string profile functions as in Eq. (4.16). Using the boundary conditions (3.5) and (3.6) for the string profile functions, it is easy to check that these solutions vanish at r → ∞ and are non-singular at r = 0. We conclude that the number of superorientational zero modes of the non-Abelian string does not jump as we switch on the deformation parameters µ_i. We keep all four zero modes, parametrized by χ^a_1 and χ^a_2. The modes proportional to χ^a_1 are now modified; still, we can find them order by order in µ_2 by solving the Dirac equations (4.14). As was mentioned in Sect. 1 (and will be explained in detail in Sect. 5), the four fermion zero modes imply N = 2 supersymmetry of the two-dimensional world-sheet sigma model (four supercharges). On the other hand, N = 2 supersymmetry in the bulk theory is broken down to N = 1 (four supercharges). Thus, we do observe enhancement of supersymmetry on the string world sheet. On general grounds one might expect a breaking of the enhanced world-sheet supersymmetry at some critical value µ*_i. What could happen is that the fermion zero modes associated with χ^a_1 become non-normalizable at some value of µ_2. Clearly, one would not be able to see the loss of normalizability in perturbation theory in µ_2. In Sect. 4.3 we will examine the limit of large µ_i and show that the fermion modes (4.13) become non-normalizable only at µ_2 → ∞.

4.3 The large-µ limit

Let us dwell on the limit of large µ_2; more explicitly, we assume the condition (4.17). Integrating out the heavy fields can be carried out in the superpotentials (2.8), (2.9), as in [19,20,17], or directly in the component Lagrangian: one just drops the kinetic terms for the heavy fields and solves the algebraic equations for these fields. We do this in the fermion sector of the theory, in the Dirac equations (4.6) for λ^aα2. More exactly, from the first and the third equations in (4.14) we get the expressions (4.20) for the λ-profile functions in terms of the ψ-profile functions. Dropping the kinetic term for the λ's in the second and the fourth equations in (4.14) and substituting (4.20) into these equations, we arrive at Eqs. (4.21), where m_L is the light mass given in Eq. (4.19). Now observe that the long-range tails of the solutions to these equations are determined by the small mass m_L, while the string profile functions f and f_3 are important at much smaller distances, R ∼ 1/m_0. This key observation allows us to solve Eqs. (4.21) analytically. We will treat separately two domains: (i) large r, r ≫ 1/m_0, and (ii) intermediate r, r ≤ 1/m_0.

Large-r domain, r ≫ 1/m_0

In this domain we can drop the terms in (4.21) containing f and f_3 and use the first equation to express ψ− in terms of ψ+, Eq. (4.22). Substituting this into the second equation in (4.21), we obtain Eq. (4.23). This is a well-known equation for a free field with mass m_L in radial coordinates. Its solution is well known too: it is proportional to K_0(m_L r), where K_0(x) is the imaginary-argument Bessel function.
At infinity it falls off exponentially, K_0(x) ∼ √(π/2x) e^{−x} (4.25), while at x → 0 it has the logarithmic behavior K_0(x) ∼ ln(1/x) (4.26). Taking into account (4.22), we get the solutions (4.27) for the fermion profile functions at r ≫ 1/m_0; in particular, at r ≪ 1/m_L we have the logarithmic asymptotics (4.28).

Intermediate-r domain, r ≤ 1/m_0

In this domain we neglect the small mass terms in (4.21) and arrive at Eqs. (4.29). These equations are identical to those for the string profile functions, see (3.4). Therefore, their solutions are known, up to normalization constants c_1,2, Eq. (4.30). To fix these constants we match the long-distance behavior in (4.30) with the short-distance behavior of the solutions in the domain r ≫ 1/m_0 given in (4.28). This gives the fermion profile functions (4.31) at intermediate r. Equations (4.27) and (4.31) present our final result for the fermion profile functions in the limit of large µ_2. They determine the two fermion superorientational zero modes proportional to χ^a_1 via Eq. (4.13). The main feature of these modes is the presence of long-range tails determined by the small mass m_L. Neither the bosonic string solution (3.9) nor the two other superorientational fermion zero modes (4.11), determined by the N = 1 supersymmetry, have these logarithmic long-range tails. (The large-distance behavior of the long-range tails is such that the corresponding normalization factors diverge logarithmically; this divergence is cut off at m_L^{−1}.)

5 Effective world-sheet theory in the large-µ limit

To fully specify the fermion sector of the world-sheet sigma model we substitute the fermion zero modes (4.27), (4.31) and (4.11) into the fermion action (4.1), much in the same way as we did in Sect. 4.1 in the N = 2 limit. Then, instead of Eq. (4.10), we get the action (5.1). Here I_f is the normalization integral for the deformed fermion zero modes (4.27) and (4.31). Its leading behavior at large µ_2 comes from ψ−; substituting the mass values from (4.18) and (4.19), we then obtain Eq. (5.3). Note that the calculation actually gives us only the bilinear fermion terms in (5.1). We fix the coefficient in front of the quartic term using the N = 1 supersymmetry on the world sheet generated by the parameters ε_1 and η_1, see (4.9). This supersymmetry is necessarily present in our world-sheet theory; in particular, it relates the coefficient in front of the kinetic term for χ^a_1 to the one in front of the quartic term. Next, we absorb the normalization integral I_f into the definition of the fermion fields χ^a_1. As a result, we arrive at the CP(1) model (4.10). This model has N = 2 supersymmetry in two dimensions (four supercharges). We thus confirm the enhancement of supersymmetry in our effective theory on the string world sheet. As was explained in Sect. 1, this result could be expected on general grounds. The target space of the CP(1) model (the sphere S²) is a Kähler manifold, and supersymmetry on Kähler manifolds requires four supercharges. If our string is BPS and the world-sheet theory is local, the world-sheet supersymmetry must be enhanced. The same reasoning was recently used [10] to prove enhanced supersymmetry on the world volume of domain walls. If we had started directly from N = 1 SQCD (2.26), we would never have obtained enhanced supersymmetry on the world sheet of the non-Abelian string: we would find only two fermion zero modes (4.11), while the other two are non-normalizable. The reason for this is the presence of the Higgs branch in (2.26). Embedding (2.26) into the deformed N = 2 theory (2.10) lifts the Higgs branch and makes the second pair of fermion zero modes normalizable at any finite µ_2.
This infrared (IR) regularization allows us to obtain the N = 2 supersymmetric CP(1) model (4.10) as the effective theory on the world sheet of the non-Abelian string.

6 Limits of applicability

Although the two-derivative term we derived above is N = 2 supersymmetric for any finite µ, one should expect the enhanced N = 2 supersymmetry to be broken at some (large) value of µ_2 by induced terms with four or more derivatives. Let us determine this critical value. To this end, note that the higher derivative corrections run in powers of ∆ ∂_k, where ∆ is the string transverse size. At small µ_2, ∆ ∼ 1/√ξ. The typical energy scale on the string world sheet is given by the scale Λ_CP(1) of the CP(1) model, which is given by (3.17) at small µ_2. Thus, ∂ → Λ, and the higher derivative corrections in fact run in powers of Λ/√ξ. At small µ_2 the higher derivative corrections are suppressed, since Λ/√ξ ≪ 1, and we can ignore them. However, as we increase µ_2 the fermion zero modes (4.27), (4.31) acquire long-range tails. This means that the effective "fermion" thickness of the string grows and becomes ∆ ∼ 1/m_L. Higher derivative terms are small if ∆ Λ_CP(1) ≪ 1. Substituting here the scale of the CP(1) model given by (3.18) at large µ_2 and the scale of N = 1 SQCD (2.31), we arrive at the condition (6.3), where the critical value of µ_2 is given by Eq. (6.4).
Figure 2: A spectrum of relevant scales in the limit µ_2 ≫ √ξ. If the condition (6.3) is met, the N = 2 CP(1) model gives a good description of the world-sheet physics.
A spectrum of relevant scales in our theory is shown in Fig. 2. If we increase µ_2 above the critical value (6.4), the non-Abelian strings become effectively thick and their world-sheet dynamics are no longer described by the N = 2 CP(1) sigma model: the higher derivative corrections on the world sheet explode. Since the higher derivative sector does not respect the enhanced N = 2 supersymmetry, the latter gets broken down to N = 1 (two supercharges). Note that the physical reason for the growth of the string thickness ∆ is the presence of the Higgs branch in N = 1 SQCD (2.26). Although the classical string solution (3.9) stays compact, the presence of the Higgs branch shows up at the quantum level; in particular, the fermion zero modes feel its presence and acquire long-range logarithmic tails. Summarizing, the N = 2 CP(1) model with enhanced supersymmetry is a valid description of the world-sheet physics of the non-Abelian string if the condition (6.3) is met. Otherwise the N = 2 world-sheet supersymmetry is broken down to N = 1 by the higher derivative terms. Simultaneously, the string at hand becomes "thick." By thick we mean that its transverse dimension is determined by the large parameter µ_2/ξ → ∞ rather than by ξ^{−1/2}.

7 Non-Abelian monopoles in N = 1

Since the N = 2 CP(1) model is the effective low-energy theory describing the world-sheet physics of the non-Abelian string, all consequences of this model ensue; in particular, two degenerate vacua and a kink which interpolates between them - the same kink that we had in N = 2 [4] and interpreted as a (confined) non-Abelian monopole, the descendant of the 't Hooft-Polyakov monopole [26]. Let us briefly review the reason for this interpretation [5,4]. We first set to zero the N = 2 breaking parameters µ_i in (2.10) and introduce a mass difference ∆m for the two quark supermultiplets, see [4] for details. Let us start from the vanishing FI parameter ξ (i.e. start from the Coulomb branch). At ∆m ≠ 0 the gauge group SU(2) is broken down to U(1) by the VEV of the SU(2) adjoint scalar, a³ ∼ ∆m.
Thus, there are 't Hooft-Polyakov monopoles of the broken gauge SU(2). Classically, on the Coulomb branch their mass is proportional to |∆m|/g_2². In the limit ∆m → 0 they become massless, formally, in the classical approximation; simultaneously their size becomes infinite [27]. The mass and size are stabilized by confinement effects, which are highly quantum. The confinement of monopoles occurs in the Higgs phase, at ξ ≠ 0. A qualitative evolution of the monopoles under consideration as a function of the relevant parameters is presented in Fig. 3. We begin with the limit ξ → 0 while ∆m is kept fixed. Then the corresponding microscopic theory supports the conventional (unconfined) 't Hooft-Polyakov monopoles [26], due to the spontaneous breaking of the gauge SU(2) down to U(1) (the upper left corner of Fig. 3). If we allow ξ to be non-vanishing but |∆m| ≫ √ξ (7.1), then the effect which comes into play first is the above spontaneous breaking of the gauge SU(2). Further gauge symmetry breaking, due to ξ ≠ 0, which leads to complete Higgsing of the model and string formation (confinement of monopoles), is much weaker. Thus, we deal here with the formation of "almost" 't Hooft-Polyakov monopoles, with a typical size ∼ |∆m|^{−1}. Only at much larger distances, ∼ ξ^{−1/2}, does the charge condensation enter the game and force the magnetic flux, rather than spreading evenly à la Coulomb, to form flux tubes (the upper right corner of Fig. 3). There will be two such flux tubes, with distinct orientations of the color-magnetic flux (the Z_2 strings discussed in Sect. 3.1). The monopoles, albeit confined, are weakly confined. Now, if we further reduce |∆m|, the size of the monopole (∼ |∆m|^{−1}) becomes larger than the transverse size of the attached strings. The monopole gets squeezed in earnest by the strings - it becomes a bona fide confined monopole (the lower left corner of Fig. 3). A macroscopic description of such monopoles is provided by the twisted-mass CP(1) model on the string world sheet [5,4]. Namely, the two Z_2 strings are interpreted as two vacua of the CP(1) model, while the monopole (the string junction of the two Z_2 strings) is interpreted as a kink interpolating between these two vacua. The value of the twisted mass equals ∆m, while the size of the twisted-mass sigma-model kink/confined monopole is of order |∆m|^{−1}. As we further diminish |∆m|, approaching Λ_CP(1) and then getting below it, the size of the monopole grows and, classically, it would explode. This is where quantum effects in the world-sheet theory take over. It is natural to refer to this domain of parameters as the "regime of highly quantum dynamics." While the thickness of the string (in the transverse direction) is ∼ ξ^{−1/2}, the z-direction size of the kink representing the confined monopole in the highly quantum regime is much larger, ∼ Λ_CP(1)^{−1}, see the lower right corner of Fig. 3. In [4] the first-order equations for the 1/4 BPS string junction of two Z_2 strings were explicitly solved, and the solution was shown to correspond to a kink solution of the two-dimensional CP(1) model. Moreover, it was shown that the mass of the monopole matches the mass of the CP(1)-model kink both in the quasiclassical (∆m ≫ Λ_CP(1)) and quantum (∆m ≪ Λ_CP(1)) limits. Thus, at zero ∆m we still have a confined "monopole" stabilized by quantum effects in the world-sheet CP(1) model (and interpreted as a kink). Now we can switch on the N = 2 breaking parameters µ_i.
Now we can switch on the N = 2 breaking parameters µ_i. If we keep µ_2 below the critical value (6.4), the effective world-sheet description of the non-Abelian string is still given by the N = 2 CP(1) model. This model obviously still has two vacua, which should be interpreted as two elementary non-Abelian strings in the quantum regime, and a BPS kink can interpolate between these vacua. This kink should still be interpreted as a non-Abelian confined monopole/string junction. Its mass and inverse size are determined by Λ_CP(1), which in the limit of large µ_2 is given by Eq. (3.18). This kink-monopole is half-critical considered from the standpoint of the CP(1) model (i.e. two supercharges are conserved). Thus, we observe supersymmetry enhancement at the next level too. In fact, this is "supersymmetry emergence" rather than enhancement, since in the bulk N = 1 theory there is no such thing as the monopole central charge!

Indeed, in the N = 2 model [23] there exists a "monopole" central charge [28] which implies, in turn, the critical nature of the 't Hooft-Polyakov monopole. By appropriately varying the parameters of the model one can trace the continuous evolution of the conventional (unconfined) 't Hooft-Polyakov monopole into a weakly confined monopole and then into a 1/2-BPS non-Abelian confined kink-monopole in a highly quantum regime. In the N = 1 model at hand the monopole central charge cannot exist for symmetry reasons, and one cannot expect BPS-saturated 't Hooft-Polyakov monopoles. On the other hand, the kink central charge certainly exists in the two-dimensional superalgebra [29] pertinent to the CP(1) model. Here we encounter the notion of a central charge that exists in the low-energy moduli theory but cannot be lifted to the bulk theory as a matter of principle.

A similar phenomenon does actually occur in the domain-wall system [10]. In the model discussed in [10] two central charges, of the domain-wall and domain-line types, are allowed [30]. But what we focus on now is a different central charge. The relevant world-volume central charge in the domain-wall case corresponds to CP(1) "lumps." Although the existence of such states was not explicitly verified in Ref. [10], and the corresponding solution was not found due to strong-coupling issues, the very fact that composites carrying this charge exist in the domain-wall problem is beyond doubt. Indeed, since the 1/4-BPS (bulk quarter-criticality) wall junctions (domain lines) correspond to CP(1) kinks, of which there are two inequivalent kinds, one could in principle construct a system with two domain lines (on the wall) joined at a single point in 1+3 dimensions. This single point is a "junction of junctions." There is no central charge for this localized junction of junctions in the 1+3 dimensional bulk theory, but it should nonetheless be a 1/4-BPS state on the wall world volume, saturating both the kink and lump central charges.

Non-Abelian strings in N = 1 SQCD

The IR problems we encounter in N = 1 SQCD emerging at µ → ∞ are quite similar to those discussed in [14,18], where strings on Higgs branches were studied. In particular, in [18] Abelian strings in N = 1 SQED were considered. This theory has a Higgs branch which can be lifted by embedding the theory in the deformed N = 2 SQED (2.1). In Ref. [18] strings at an arbitrary point on the (lifted) Higgs branch were considered, with both q and q̃ nonvanishing (cf. Eq. (2.13), where q̃ = 0). In this case the string appears to be non-BPS.
The string solution consists of a "BPS core" plus a long-range logarithmic tail of size ∼ 1/m_L. To take the limit µ → ∞ one can proceed as follows [14,18]. Consider a string of a large but finite length L. Then at very large r, r ∼ L, the problem is no longer two-dimensional and the logarithmic tails are cut off. In other words, the scale 1/L plays the role of the IR cut-off instead of m_L^{−1}. Now one can safely take the limit µ → ∞.

Let us follow a similar approach for the problem at hand. Consider a string of finite length L. Then the scale 1/L will play the role of an IR regularization for the fermion zero modes (4.28), and the normalization integral I_f becomes finite. (Unfortunately, taking the length of the string to be finite destroys its BPS nature.) Now we can safely take the limit µ_2 → ∞. The normalization integral for the fermion zero modes (5.3) stays finite; it can still be absorbed into the definition of the field χ_1. The world-sheet theory becomes non-local, containing powers of higher derivative corrections, all of the same order. The non-locality arises because the string becomes thick. Note that this effect does not affect the string tension.

Abelian strings

In this section we briefly review the Abelian BPS string solutions and their fermion zero modes in N = 2 SQED obtained in [17], and then elaborate on the issue of fermion zero modes in the U(1) theory (2.1) with broken N = 2 supersymmetry. In particular, we will focus on the large-µ limit, in which the theory (2.1) reduces to N = 1 SQED.

The Abelian string solution with the minimal winding number in the model (2.1) has the form q(x) = e^{iα} φ(r), together with a gauge-field profile parametrized by f(r); here f(r) and φ(r) are the profile functions for the gauge and scalar fields, respectively. These functions satisfy the first-order equations (9.2), subject to the standard boundary conditions for unit winding, f(0) = 1, f(∞) = 0 for the gauge field and φ(0) = 0, φ(∞) = √ξ for the squark field. Equations (9.2) can be solved numerically. The tension of the string with the minimal winding is T = 2πξ. Note that the string solution does not depend on the deformation parameter µ, much in the same way as in the non-Abelian case. This is because the neutral scalar field a vanishes on the solution.
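The radial system (9.2) descends from the standard Bogomolny completion of the energy functional. The normalization and sign conventions below are an assumption, since the text's own Eq. (9.2) was not reproduced in this extraction, but the structure is generic for a U(1) theory with an FI term ξ at the BPS point:

\[
F_{12} = \frac{g^{2}}{2}\left(|q|^{2} - \xi\right), \qquad
\left(D_{1} + i\,D_{2}\right) q = 0,
\]

whose saturation yields the minimal tension $T = 2\pi\xi$ quoted above; substituting the ansatz $q = e^{i\alpha}\phi(r)$ reduces these to the first-order equations for $\phi(r)$ and $f(r)$.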
Consider first the N = 2 limit, µ = 0. The string is half-critical, so half of the supercharges (those related to the SUSY transformation parameters ε_12 and ε_21, see Sect. 4.1) act trivially on the string solution. The remaining four (real) supercharges, parametrized by ε_11 and ε_22, generate four supertranslational fermion zero modes. They have the form given in [17], where the modes proportional to the complex Grassmann parameters ζ_1 and ζ_2 are generated by the ε_22 and ε_11 transformations, respectively. It is quite straightforward to check that these modes satisfy the Dirac equations for the U(1) model (2.1) at µ = 0.

Now we switch on the breaking parameter µ. The number of supercharges in the bulk theory drops to four, which means that only two supercharges, associated with the complex parameter ε_11, act nontrivially on the string solution. If we apply these supercharges to the string solution (9.1), we generate only the half of the modes in (9.6) proportional to ζ_2. As in the non-Abelian case, the other two zero modes, proportional to ζ_1, do not disappear. They just get modified and can no longer be obtained by a SUSY transformation. We derive them below by explicitly solving the Dirac equations (9.7), following the same steps as in Sects. 4.2 and 4.3. Repeating the steps which led us to Eq. (4.27), we get

ψ_+ = m_L ξ K_0(m_L r),    ψ_− = −ξ (d/dr) K_0(m_L r).    (9.16)

Equation (9.16) shows that the supertranslational fermion zero modes of the Abelian string in the model (2.1) acquire long-range tails too. In particular, in the limit µ → ∞ they become logarithmically non-normalizable: since (d/dr) K_0(m_L r) ≈ −1/r for r ≪ m_L^{−1}, the normalization integral ∫ r dr |ψ_−|^2 grows logarithmically as m_L → 0. Still, at any finite µ we can absorb the normalization integral into the definition of the two-dimensional fermion fields ζ_1, exactly in the same way as was done for the superorientational modes in Sect. 5. This leads us to the effective theory (9.19) on the world sheet of the Abelian string, where x_{0i} (i = 1, 2) denote the coordinates of the string position in the (1, 2)-plane. This is a free theory with two real bosonic and four fermionic fields depending on t and z. Counting the number of degrees of freedom, we observe the enhanced N = 2 supersymmetry in two dimensions (four supercharges): the fields at hand form a supermultiplet of N = 2.

We see that the phenomenon of enhanced world-sheet supersymmetry is quite general and occurs both for Abelian and non-Abelian strings. It can be traced back to the strings in the N = 2 supersymmetric bulk theory from which our strings are descendants.

The two-dimensional theory (9.19) is a trivial free-field theory and does not generate its own scale. Therefore, in this case we cannot estimate the critical value of µ at which the enhanced N = 2 supersymmetry breaks down to N = 1. The theory (9.19) is a low-energy effective theory which describes the string at small energies E ≪ m_L. At larger energies, higher derivative corrections to (9.19) become important. The higher derivative sector does not respect N = 2 supersymmetry, and at large energies supersymmetry-breaking effects take over. As we increase µ, the region of validity of (9.19) becomes increasingly narrow. In the N = 1 SQED limit µ → ∞ the string becomes thick, and the effective theory on the string world sheet becomes non-local. It is worth stressing again that this happens due to the presence of the Higgs branch in N = 1 SQED.

There is one more thing we must emphasize. The translational sector of the U(1) gauge theory (2.1) is discussed in this section just for the sake of simplicity. The generalization to the translational sector of the non-Abelian string in the theory (2.10) is absolutely straightforward: we get the same results for the translational sector of the non-Abelian string.

Conclusions

This concluding section could have been entitled "How extended supersymmetry dynamically emerges from Kählerian geometry." After the phenomenon is identified, it seems rather trivial and transparent. Indeed, if we start from a bulk theory with ν supercharges and obtain half-critical solitons with a nontrivial moduli space, a linear realization of ν/2 supercharges in the low-energy world-sheet theory of moduli is guaranteed. If, in addition, the geometry of the moduli space is Kählerian, and the numbers of the boson and fermion zero modes appropriately match, ν/2 extra "supernumerary" supercharges emerge of necessity. Apparently this is not a rare occurrence, since we encounter the same situation, enhancement of supersymmetry, in the two most widely discussed problems: domain walls in N = 1 SQCD with N_f = N_c [10], and the present problem of non-Abelian strings. It is worth stressing, however, that the reasons behind the enhancement of supersymmetry in these two problems are not quite the same, as was explained in Sect. 1.
We also observe "supersymmetry emergence" for the flux-tube junctions (confined monopoles): our kink-monopole is half-critical considered from the standpoint of the world-sheet CP (1) model (i.e. two supercharges conserved), while in the bulk N = 1 theory there is no monopole central charge at all. A similar phenomenon was also noted in Ref. [10]. A number of interesting questions remains unanswered or not answered in full. Let us list some of them. (i) In Sect. III.B3 of Ref. [10] it was shown that a mass deformation removing the continuous moduli space of the world-volume theory leaves the enhanced N = 2 supersymmetry intact, at least for small mass deformations. The lifting of the moduli space occurred through a generation of a Killing vector potential. More precisely, it was verified that, at leading order in the unequal mass deformation, the effect of the mass deformation reduced to a potential which is the norm-squared of a U(1) Killing vector on CP (1) (the so-called real mass deformation), see [31]. Such a potential preserves N = 2 as it maintains the complex structure. It was unclear what symmetry ensures this form for the potential. It was also unclear whether this particular form holds beyond the leading order in the deformation. It would be extremely interesting to explore whether or not a similar structure persists in the flux-tube case. (ii) It seems imperative to understand the necessary (rather than just sufficient) conditions for supersymmetry enhancement more precisely. In the context of this question it would be nice to find a symmetry argument which would explain why turning on a (finite) adjoint mass has no impact (up to field rescalings) on the fluxtube world-sheet theory. (iii) Another interesting question is: what happens with our "kink-monopole" state in N = 1 theory when we vary parameters moving towards weaker confinement? In other words a challenging and illuminating problem is: what happens when two scales in Eq. (2.32) are of the same order? We are not aware of any discussion of this regime in the literature. (iv) The issue of supersymmetry emergence seems intriguing. Is it promising from the standpoint of applications? We hope to return to the above issues elsewhere.
CAD/CAM Milling versus Rapid Prototyping Surgical Guide Techniques in Dental Implant Placement

This study was done to compare the accuracy of surgical guides for implant placement fabricated by two different construction techniques: CAD/CAM milling and rapid prototyping (3D printing). Twenty-eight implants were divided equally into two groups: in group I, implants were inserted using CAD/CAM-milled surgical guides, while in group II, implants were inserted using 3D-printed (rapid prototyping) surgical guides. A pre-operative CBCT was taken to determine the virtual implant location with respect to the coronal, apical and angular positions. After implant placement, a post-operative CBCT was taken, and the Blue Sky Plan computer software was used to match the pre- and post-operative CBCT images and to compare the angular, coronal and apical deviations between the virtual and actual implant positions by superimposition. There were statistically significantly higher coronal, apical and angular deviation mean values between the virtual and actual implant placements in group II than in group I. The CAD/CAM-milled surgical guides gave superior results to the 3D-printed surgical guides.

INTRODUCTION

Osseo-integrated implants are a practical substitute for conventional prosthodontics; nevertheless, designing an implant-supported prosthesis with proper function and esthetics is a challenge. Precise accuracy in the planning and conduct of the surgical steps is vital for assuring a high probability of success without iatrogenic damage. The success of implant placement relies primarily on well-organized treatment planning and correctly performed surgery. A disorderly placed implant is a common problem that regularly complicates not only the clinical but also the laboratory procedures for the superstructures. This dictates close teamwork between prosthodontists and surgeons, working conjointly as a single unit, which facilitates the accurate construction of the surgical stent or surgical guide. A surgical stent is an appliance utilized for the radiographic assessment of the available bone height and width, pre-operatively or intra-operatively, to identify the ideal site for implant placement [1]. Surgical templates not only aid in diagnosis and treatment planning, but also ease proper positioning and correct angulation of the implant body in the bone. Furthermore, restoration-driven implant placement accomplished with a surgical guide template decreases the clinical and laboratory complications. Thus, the increasing demand for dental implant placement using surgical guides has resulted in more advanced techniques for the fabrication of these templates [2,3].

Guides should be constructed of transparent material and be stable and firm when in position. They should cover sufficient teeth to stabilize their location and, when teeth are absent, should extend onto the unreflected soft-tissue regions [3]. A surgical guide is supported by the teeth, mucosa or bone and is usually made of polymer. It has pre-drilled holes and, during the dental implant surgery, the surgeon uses these holes to guide the osteotomy at the anticipated locations and angulations in the patient's implantation site [4]. Several computer-guided surgical stent fabrication methods have been advocated over the past several years, including design-related processing and milling based on coordinate synchronization.
In design-related processing, a template is designed on a computer, which is then used to construct a surgical stent by either a subtractive or an additive method. This study was therefore prompted to evaluate which 3D surgical stent technique is more accurate in implant placement.

Surgical procedures and implant placement:

Premedication included an antibiotic (clavulanate-potentiated amoxicillin, 1 g every 12 hours) starting the day before surgery and continued for five days afterwards, an anti-inflammatory (diclofenac sodium, 50 mg), and a mouthwash (chlorhexidine) prescribed three times daily prior to surgery. After checking the local anaesthesia, the surgical stent was disinfected, inserted into the patient's mouth and supported by the remaining teeth. The osteotomy was performed up to the final drill of the simple guide, with 2.8 mm diameter and 11.5 mm length. After stent removal, the implant (Neobiotech dental implant, Neobiotech Co., Ltd., Guro-gu, Seoul, Korea; diameter 4 mm, length 11.5 mm) was inserted through the osteotomy manually and then seated using a ratchet.

Post-operative care:

A cold soft diet was recommended. The antibiotic, analgesic and anti-inflammatory prescribed before surgery were continued for the following five days. The patient was instructed to return the next day for a check-up.

Post-operative imaging and image superimposition:

Patients were recalled three days after implant insertion for another CBCT scan. This post-operative CBCT was performed using the same parameters and the same machine as the pre-operative CBCT. The Blue Sky Plan computer software was used to match the pre- and post-operative CBCT images and to compare the angular, coronal and apical deviations between the virtual and actual implant positions by superimposition after implant insertion.

Evaluation of accuracy:

The difference between the planned and actual implants in the three-dimensional view was calculated in two parts, the coronal and the apical, reported as the coronal and apical deviations. In addition, the angular deviation, i.e. the three-dimensional angle between the long axes of the planned and actual implants, was calculated, tabulated and statistically analyzed; an illustrative computation is sketched below.

RESULTS AND DISCUSSION

Statistical analysis was performed using IBM SPSS Statistics Version 21 for Windows. Data are presented as mean and standard deviation (SD). The significance level was set at P ≤ 0.05. Kolmogorov-Smirnov and Shapiro-Wilk tests were used to assess data normality. Since all data showed a normal distribution, the independent Student t-test (two independent samples) was performed to compare angular, coronal and apical deviations between the 3D-printed and CAD/CAM surgical guides. The independent Student t-test (Table 1) showed that 3D printing (group II) had a statistically significantly higher angular deviation than milling (group I) (P < 0.0001). The independent Student t-test (Table 3) showed that 3D printing (group II) had a statistically significantly higher apical deviation than milling (group I) (P = 0.002). Computer-assisted implant planning and subsequent template-guided implant placement must be highly accurate for optimal preoperative diagnostics and planning and, consequently, for developing a predictable procedure for implantation and prosthetic rehabilitation [5].
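The following sketch illustrates the kind of computation described above; it is not the study's actual software pipeline. Given the planned and actual implant platform (coronal) and apex (apical) points exported after CBCT superimposition, it computes the three deviations and then compares two groups with an independent t-test. All coordinates, group values and names are hypothetical.

```python
import numpy as np
from scipy import stats

def implant_deviations(planned_coronal, planned_apical, actual_coronal, actual_apical):
    """Return (coronal, apical, angular) deviation between planned and actual implants.

    Positions are 3D points in mm; the angular deviation is the 3D angle (degrees)
    between the long axes of the planned and actual implants.
    """
    pc, pa = np.asarray(planned_coronal, float), np.asarray(planned_apical, float)
    ac, aa = np.asarray(actual_coronal, float), np.asarray(actual_apical, float)
    coronal = np.linalg.norm(ac - pc)          # 3D distance at the implant platform
    apical = np.linalg.norm(aa - pa)           # 3D distance at the implant apex
    v1 = (pa - pc) / np.linalg.norm(pa - pc)   # planned long axis (unit vector)
    v2 = (aa - ac) / np.linalg.norm(aa - ac)   # actual long axis (unit vector)
    angular = np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))
    return coronal, apical, angular

# Example with made-up coordinates (mm):
print(implant_deviations([0, 0, 0], [0.5, 0.3, -11.5], [0.4, 0.1, 0.1], [1.2, 0.5, -11.3]))

# Hypothetical per-implant angular deviations (degrees) for the two groups:
milled = [1.1, 0.9, 1.4, 1.2, 0.8, 1.3, 1.0, 1.2, 0.9, 1.1, 1.3, 1.0, 1.2, 1.1]
printed = [2.3, 2.7, 2.1, 2.9, 2.5, 2.4, 2.8, 2.2, 2.6, 2.7, 2.3, 2.5, 2.9, 2.4]
t, p = stats.ttest_ind(milled, printed)        # independent-samples Student t-test
print(f"t = {t:.2f}, P = {p:.4g}")
```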
Comparison of deviations between 3D-printed and CAD/CAM surgical guides:

The results of this study revealed statistically significant differences between the virtually planned and the actual implant positions in all aspects, with higher values in group II (3D printing). The significant deviations in implant position may be due to some instability of the surgical guides during surgery, or to misplacement of the radiographic templates during scanning. Moreover, the technique used was only a partially limiting procedure, as the implant insertion itself was made free-hand, which may have introduced some deviation; another possible source of variation is deformation of the surgical guide during prototyping. Several factors were controlled to limit confounding: the same implant system was used, and implant installation was carried out by one operator, to exclude human variation in experience. Accuracy is also of great concern, especially in the case of immediate delivery of a prefabricated prosthesis [7].

Moreover, the inaccuracies of 3D printing compared to milling are attributed to the need for supporting structures, whereas the milling protocol for PMMA was very sensitive owing to the excessive hardness of the material, which in turn increased the cutting pressure and might lead to thermal stresses and distortion of the material. Cutting conditions might also cause excessive vibration that could exert additional thermal and mechanical stresses on the workpiece, especially in thin areas, during the procedure [8]. The results of this study also agree with previous studies that reported deformation during SLA prototyping [9,10].

It is worth noting that the length, angle and proper position of an implant play critical roles in the placement procedure. The present study found that the coronal, apical and angular deviations between the planned and actual implant positions were less than the well-known standard "safety zone" of 2 mm away from vital structures, with smaller deviations in the CAD/CAM group [11,12]. This study showed that the angular, coronal and apical deviations are small enough to avoid damage to major anatomical structures during the procedure.

In a study evaluating the accuracy of a 3D-printed surgical guide by CBCT and model analysis, the deviation measured by CBCT was similar to that of other studies, but the angular deviation was somewhat higher. The reason is that previous studies used the surgical kit and implant fixture of the company that makes the surgical guide, while a universal surgical guide kit was used in that study. Also, compared with previous studies, more of the rearmost molars were included. A reference marker was not used when taking the CBCT, so a higher error occurred when overlapping the pre-operative and post-operative CBCT scans [13]. On the other hand, an in vitro study comparing the accuracy of implant placement using 3D-printed and machine-milled surgical guides concluded that no significant differences were found between the two groups for any of the measurements [14].

The present use of CAD/CAM-processed surgical guides has provided a high degree of simplicity in morphologic diagnosis, in determining the surgical procedure and in establishing the subsequent prognosis. Moreover, CAD/CAM technology has facilitated flapless surgeries by improving pre-surgical planning.
They have also facilitated restoration-driven surgeries by integrating the restorative determinants into the surgical planning [15]. Stereolithographic surgical guides derived from CT-scan planning data were found to be highly accurate and easy to use in bone-supported, tooth-supported or mucosa-supported configurations, minimizing the possibility of post-operative peri-implant tissue loss and overcoming the challenge of soft-tissue management during or after surgery [16,17,18]. The clinical importance of these results may be greatest in situations where multiple parallel but distant implants are placed, and where the degree of accuracy is critical for the prosthetic restoration. Re-angulation or replacement of removable wearing parts could be reduced by more accurate surgical implant placement [19].
Osteonecrosis of Femoral Head (ONFH) After Renal Transplantation

Approximately 25,000 patients undergo renal transplantation every year worldwide due to end-stage renal disease (ESRD). Renal transplantation is expected to lead to a progressive correction of the established renal bone disease, yet osteonecrosis of the femoral head (ONFH) is a common and severe complication in these patients. It induces deformity of the hip joint and reduces quality of life, especially in the young population aged 20 to 50 years. Total hip replacement is not reasonable in this population because of the finite lifespan of implants. Clinical results suggest that free vascularized fibular grafting (FVFG) can slow or potentially halt the progression of osteonecrosis; it offers an alternative method for preserving the femoral head in younger renal transplant recipients.

Incidence

In one cohort of 48 patients, three of 96 hips had ONFH within 6 months after renal transplantation. In the study of Lee, ONFH developed in 6.3% of 237 patients and 4.9% of 473 femoral heads, from 8 to 16 months after renal transplantation. In the report on children by Nishiyama et al., 141 renal transplants were performed in 129 children (72 boys and 57 girls) aged 2 to 17 years. Osteonecrosis occurred in seven patients at the following sites: the femoral head in four children, including two bilateral cases; and the femoral condyle in three children, with two bilateral cases. The mean period from transplantation to the diagnosis of ONFH was 18 months; 75% appeared more than 9 months after transplantation.

Use of corticosteroids

High doses of corticosteroids are used after renal transplantation to reduce rejection and improve graft survival. They have also been implicated as the major predisposing factor for post-transplant bone loss and osteonecrosis, and it typically takes a few years for osteonecrosis to develop after the start of corticosteroid therapy. Although corticosteroid therapy represents a key pathogenetic factor, other immunosuppressive drugs such as cyclosporine, tacrolimus, azathioprine and rapamycin clearly contribute to its prevalence and expression through their pleiotropic pharmacological effects. These drugs have been shown to increase overall bone turnover and/or to stimulate loss of bone mass independently. Long-term glucocorticoid administration, and possibly cyclosporine treatment, may chronically activate osteoclasts in spongy and/or cortical bone while osteoblast activity is inhibited, and the highest tertiles of cumulative glucocorticoid dose were significantly associated with BMD loss and osteonecrosis. A prospective study using MRI in renal transplant patients showed the occurrence of ONFH within several months after the initiation of steroid treatment, and a time discrepancy between the occurrence of ONFH and the onset of symptoms.

Dose-related risk of osteonecrosis

Furthermore, an appreciable dose-related risk of osteonecrosis was found in patients receiving long-term steroid therapy. Hirota et al. reported a relationship between ONFH and the daily steroid dosage (≥16.6 mg in terms of prednisolone) or the highest daily dose (≥80 mg in terms of prednisolone), and concluded that a higher daily steroid dose contributed to an increased frequency of ONFH in renal transplant patients.
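To make the cumulative prednisolone-equivalent figures cited in the studies below concrete, here is a minimal illustrative calculator. The equivalence factors are the standard published glucocorticoid potency ratios; the taper schedule and function names are hypothetical and are not drawn from any of the cohorts discussed here.

```python
# Prednisolone-equivalent factor = (prednisolone-equivalent mg) per mg of drug.
EQUIVALENCE = {
    "hydrocortisone": 5 / 20,      # 20 mg hydrocortisone ~ 5 mg prednisolone
    "prednisone": 1.0,             # prednisone and prednisolone are equipotent
    "prednisolone": 1.0,
    "methylprednisolone": 5 / 4,   # 4 mg methylprednisolone ~ 5 mg prednisolone
    "dexamethasone": 5 / 0.75,     # 0.75 mg dexamethasone ~ 5 mg prednisolone
}

def cumulative_prednisolone_mg(schedule):
    """schedule: list of (drug, daily_dose_mg, days) tuples."""
    return sum(EQUIVALENCE[drug] * dose * days for drug, dose, days in schedule)

# Hypothetical taper over the first post-transplant year:
taper = [
    ("methylprednisolone", 500, 3),   # pulse dosing immediately after surgery
    ("prednisolone", 60, 14),
    ("prednisolone", 30, 60),
    ("prednisolone", 10, 288),
]
total_mg = cumulative_prednisolone_mg(taper)
print(f"Cumulative prednisolone-equivalent dose: {total_mg / 1000:.1f} g")  # ~7.4 g
```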
In a retrospective study of the medical records of 750 patients who had received a renal transplant during 1968-1995, Lausten showed an 11.2% incidence of symptomatic osteonecrosis with high-dose glucocorticoids (cumulative mean prednisolone dose of 12.5 g at 1 year post-transplant) and 5.1% with low-dose treatment (cumulative mean dose of 6.5 g). This difference in the number of femoral head necroses was highly significant (p < 0.005). The cohort of Kopecky et al., which showed an osteonecrosis incidence of 22%, had a prednisolone-equivalent dose of 7 ± 3.25 g during the first 90 days after transplantation. The cohort of Lopez-Ben et al., which showed an osteonecrosis incidence of 4%, had a slightly lower cumulative steroid dosage (2.1 g prednisolone at 3 months after transplantation) compared with previously reported cohorts. Lee et al. also found a low incidence of ONFH in renal transplantation patients at 1 year post-transplantation, which appears to be related to a low cumulative steroid dosage. There is no known threshold dose: in clinical practice, some patients develop bone complications at daily doses as low as 2.5-7.5 mg of prednisone, whereas daily doses above 7.5 mg will induce osteoporosis and osteonecrosis in the majority of patients.

Main mechanism

The mechanisms of ONFH after renal transplantation include:

1. Thrombus formation due to the steroid-induced hypercoagulable state (hypercoagulability of plasma was found after 3 months of steroid treatment in previous studies).
2. Reduction of blood flow in bone (femoral head blood flow was 2-3-fold lower after 2 weeks of high-dose steroid treatment; decreased arterial inflow or increased venous outflow resistance can reduce intraosseous blood flow).
3. Osteoporosis of the femur (steroids decrease the absorption of calcium from the intestine and increase its elimination via the kidneys. Direct and indirect effects on PTH secretion, changes in the bone protein matrix, increased osteoclastic activity, and decreased protein synthesis all lead to a reduction in bone mass after renal transplantation; inhaled corticosteroids in doses above 1.5 mg/d may be associated with a significant reduction in bone density).
4. Metabolic disorders.
5. Fat embolism of the femoral head.
6. Rise of intraosseous pressure (in the rigid intraosseous compartment, growth of fat cells may cause a rise in intraosseous pressure and thereby compress the thin-walled sinusoids, with a subsequent decrease in bone blood flow).
7. Degenerative changes of the hip capsule (degenerative changes in the arteries and arterioles of the capsule of the hip and the femoral head have been found in cadavers of renal transplant patients without clinical hip symptoms, with thickening of the intima, gross diminution in the number and calibre of the vessels in the arteries of the femoral head, and infarcts of subchondral bone).

Other risk factors

The type of donor, dialysis duration, acute rejection rate, and post-operative weight gain. The age of the graft recipient also matters: an age of less than forty years is a risk factor for osteonecrosis of the femoral head.

Contradictions during treatment

ONFH continues to be a difficult problem to manage, especially in renal transplant recipients. Both patients and surgeons are concerned about changes in renal function and survival of the graft, and pay little attention to the hip joint even when early signs of osteonecrosis are present.
High doses of steroids were used in a continuous manner, neglecting the abnormal joint function, which hastened the deterioration of the femoral head. ONFH was diagnosed at a mean of 3.5 years after transplantation; it can progress to severe osteoarthritis and seriously impair the quality of life of transplant recipients. Early hip-joint symptoms, including progressive hip pain and joint dysfunction, usually appear 9 to 19 months after transplantation, but in most clinical cases severe joint pain and irreversible collapse of the femoral head have already developed by the time the diagnosis is established. Furthermore, steroid-induced ONFH following transplantation tends to have larger necrotic areas, and bilateral involvement is more common than unilateral involvement. The natural history of femoral head osteonecrosis shows that a large majority of clinically diagnosed cases will progress to femoral head collapse. The treatment of ONFH depends on the staging and the severity of the clinical symptoms.

Core decompression and THA

Joint-preserving operations like core decompression cannot effectively arrest the progression of the disease. Total hip replacement (THA) is also unsuitable for younger patients because of their higher activity level and longer remaining lifespan. Osteonecrosis of the femoral head therefore continues to be a difficult problem to manage, especially in patients with various kinds of renal disease (such as IgA nephropathy, focal segmental glomerular sclerosis, membranous nephropathy, mesangial proliferative glomerulonephritis, crescentic glomerulonephritis, lupus nephritis, minimal change nephropathy, and renal transplantation after end-stage renal disease). Many patients with renal diseases inevitably lose the ability to live independently due to advanced stages of osteoarthritis.

FVFG

FVFG has shown favorable outcomes. Compared with core decompression of the femoral head, FVFG had a significantly lower conversion rate to total hip arthroplasty (stage II and III hips), because it can enhance the revascularization of bone tissue and arrest the progression of the necrosis. It is also an alternative method for younger patients without severe osteoarthritis of the hip joint.

Advantages of FVFG

The advantage of FVFG lies in the combination of femoral head decompression (extensive decompression of the femoral head along with removal of necrotic bone theoretically interrupts the cycle of increased intraosseous pressure and ischemia and allows revascularization of the femoral head), removal of necrotic bone, introduction of osteoinductive cancellous bone (filling the defect with fresh cancellous bone provides both osteoinductive and osteoconductive stimulation of healing), and vascularized cortical bone support of the subchondral surface (the vascularized fibula provides a viable cortical bone strut to support the subchondral bone against collapse and further enhances the revascularization process). This procedure may benefit young patients with more advanced osteonecrosis of the femoral head by halting the progression of collapse, prolonging the reduction of symptoms, and postponing total hip replacement.

Clinical application of FVFG

With the emergence of microsurgical techniques, Judet et al. first treated ONFH with FVFG in the late 1970s. Their long-term results cover 68 hips in 60 patients, with 18 of these classified as early failures requiring conversion to THA. The remaining 50 hips were followed for an average of 18 years.
Thirty-five hips scored good or very good, corresponding to a 52% success rate; the data clearly show an increase in good results for patients younger than 40 years and an increase in the rate of failure for patients older than 40 years. Specifically, of the patients younger than 40 years, 80% had good or very good results, whereas of the patients between 40 and 50 years, only 57% had good or very good results.

After systematic research and long-term clinical study, Urbaniak et al. improved the surgical technique. The results for 103 consecutive hips (eighty-nine patients) treated with FVFG for symptomatic osteonecrosis of the femoral head were reviewed in a prospective study. Total arthroplasty had been performed in thirty-one hips: five (23 per cent) of the twenty-two that were in stage III; seventeen (43 per cent) of the forty that were in stage IV; and seven (32 per cent) of the twenty-two that were in stage V. Harris hip scores improved at the latest follow-up evaluation compared with the preoperative values (p < 0.001). For the stage-II hips, the average score improved from 56 to 80 points; for the stage-III hips, from 52 to 85 points; for the stage-IV hips, from 41 to 76 points; and for the stage-V hips, from 36 to 75 points. Fifty-nine per cent of the hips did not limit, or only slightly limited, the patient's ability to carry out daily activities, and 62 per cent did not limit, or only slightly limited, the patient's ability to work. FVFG decreased the need for pain medication for 86 per cent of the hips that had not subsequently been treated with arthroplasty. Regardless of whether or not a subsequent arthroplasty was done, 81 per cent of the patients (81 per cent of the hips) were satisfied with their decision to have fibular grafting. FVFG subsequently became widely performed in clinical practice.

Other clinical research has also shown good results. Zhang et al. treated 56 hips in 48 patients with FVFG and followed the patients for a mean duration of 16 months. Roughly 69.6% of femoral heads showed improvement on radiographs, and the Harris hip scores improved by 11-13 points. Most patients had full weight-bearing ability and took part in their daily activities. Aldridge et al. reported an 88% success rate associated with FVFG in femoral heads without collapse and a success rate of 78% with subchondral collapse. These results show that FVFG is a promising technique with satisfactory mid-term and long-term outcomes.

Fibular grafts have also proven to be an alternative for the post-collapse stage of ONFH and a worthwhile procedure in patients with post-collapse osteonecrosis. One series comprised 188 patients (224 hips) who had undergone free vascularized fibular grafting between 1989 and 1999 for the treatment of osteonecrosis of the hip that had led to collapse of the femoral head but not to arthrosis. The mean preoperative Harris hip score was 54.5 points, and it increased to 81 points for the patients in whom the surgery succeeded; 63% of the patients in that group had a good or excellent result. Patients with post-collapse, pre-degenerative osteonecrosis of the femoral head appear to benefit from FVFG, with good overall survival of the joint and significant improvement in the Harris hip score.
FVFG is a well-accepted treatment option for all symptomatic stages of the disease; with proper patient selection, middle- and long-term outcomes appear promising. FVFG continues to be a primary treatment option for relieving symptoms and preserving bone stock, especially in the younger patient population.

Attempts in renal transplant recipients

FVFG has rarely been systematically reported in renal transplant recipients, although ONFH after renal transplantation is not rare in clinical work. Recipients with renal insufficiency and an unstable general condition cannot withstand the excessive blood loss and traumatic stress of a hip operation. Post-operative renal graft dysfunction, severe anemia, electrolyte disorders, and infection are life-threatening complications. Therefore, laboratory indices, including Hb, WBC, ESR, BUN, SCr, UA, electrolytes, 24-hour urine volume, and urine protein quantity, are indispensable. In addition, CRP and ESR should be included as non-specific markers of activation of the immune system, in order to provide early signs of post-operative graft dysfunction and infection. Furthermore, the surgery should be rapid and minimally invasive, and the use of nephrotoxic drugs should be avoided.

Guo reported three renal transplant recipients with ONFH who underwent FVFG in the orthopedics department of Shanghai Sixth People's Hospital. Of the three cases, two showed radiographic improvement and one remained radiographically unchanged. All three patients have been living in good health and are satisfied with their joint function. The hip joint pain was significantly relieved and joint motion improved; the Harris hip score rose by 22 points on average, and the Visual Analogue Scale (VAS) score decreased by 37.3 points. Their quality of life was greatly improved, and their gait returned to normal after positive rehabilitation training. The patients were able to walk without aid and even engage in sports. After the operation, the patients returned to full activities with a better quality of life owing to normal joint and kidney function. The follow-up results demonstrated that FVFG is safe, effective and feasible for transplant recipients without serious renal graft dysfunction, anemia, or other systemic diseases. However, the safety of the operation should also be attributed to proficient surgical technique, meticulous laboratory monitoring, and deliberate post-operative supportive treatment.

Indication discussion

According to our previous experience, indications for using FVFG to treat ONFH in patients after renal transplantation include:

1) A patient younger than 50 years who is not suitable for total hip replacement;
2) Severe hip pain that greatly impairs daily activity;
3) Necrosis of the femoral head less than Steinberg stage V (osteoarthritis stage);
4) A stable general physical condition, without renal graft dysfunction, serious anemia, metabolic disorder, or any other systemic disease;
5) An active recipient in need of a high quality of life;
6) For the safety of the operation, FVFG ought to be performed at least one year after transplantation.

Clinical example

A 39-year-old man was diagnosed with IgA nephropathy by renal biopsy and histopathological examination in January 1998. The disease developed into chronic renal failure six years later. The patient was maintained on hemodialysis for 10 months until unilateral renal transplantation in May 2005.
He received steroid therapy for 18 months after the operation; the cumulative dose of corticosteroids was 7.5 g (converted to the prednisone dose). Tacrolimus (FK-506) 15 mg per day and MMF 2.0 g per day were also taken at the same time. He visited our hospital in November 2006 because of severe hip joint pain on the left side. The symptoms worsened quickly, and the patient had to take 0.6 g of ibuprofen per day to maintain his daily activities. He was diagnosed with left-side ONFH on hip X-ray and MRI (classified as Steinberg stage III; Fig. 1a, b). A physical exam revealed a gait abnormality, deep inguinal-region pain, and positive Thomas and Trendelenburg signs on the affected side. A decreased range of motion occurred in abduction and flexion. The Harris hip score was 72 points, and the VAS pain score was 80 points. All routine laboratory examination results were normal upon admission (shown in Table 1).

FVFG on the left side was performed uneventfully one week later. The laboratory exams showed no significant change, except for a slight elevation of the WBC to 11.2 × 10⁹/L on post-operative day 1. The body temperature rose to 37.9 °C. The WBC returned to 8.6 × 10⁹/L on day 3 and 7.4 × 10⁹/L on day 7 after antibiotic treatment (intravenous cefuroxime 3.0 g twice daily for 3 days), and the patient's temperature also returned to normal. The patient was discharged within 2 weeks in good health. No signs of infection or renal graft dysfunction were discovered during the 1 year and 8 months of follow-up. The latest radiograph showed improvement (Fig. 1c). The left joint pain and stiffness were significantly relieved, and a daily pain-killer was no longer needed. The patient's gait also returned to normal after positive rehabilitation training. He returned to his full activities with a better quality of life owing to normal joint and kidney function. The Harris hip score rose to 89 points, and the VAS pain score decreased to 28 points.
Progress in ambient assisted systems for independent living by the elderly

One of the challenges of the ageing population in many countries is the efficient delivery of health and care services, which is further complicated by the increase in neurological conditions among the elderly due to rising life expectancy. Personal care of the elderly is of concern to their relatives, in case they are alone in their homes and unforeseen circumstances occur, affecting their wellbeing. The alternative, i.e. care in nursing homes or hospitals, is costly, and the cost increases further if specialized care is mobilized to the patients' place of residence. Enabling technologies for independent living by the elderly, such as ambient assisted living systems (AALS), are seen as essential to enhancing care in a cost-effective manner. In light of significant advances in telecommunication, computing and sensor miniaturization, as well as the ubiquity of mobile and connected devices embodying the concept of the Internet of Things (IoT), end-to-end solutions for ambient assisted living have become a reality. The premise of such applications is the continuous and most often real-time monitoring of the environment and occupant behavior using an event-driven intelligent system, thereby providing a facility for monitoring and assessment, and triggering assistance as and when needed. As a growing area of research, it is essential to investigate the approaches for developing AALS in the literature to identify current practices and directions for future research. This paper is, therefore, aimed at a comprehensive and critical review of the frameworks and sensor systems used in various ambient assisted living systems, as well as their objectives and relationships with care and clinical systems. Findings from our work suggest that most frameworks focused on activity monitoring for assessing immediate risks, while the opportunities for integrating environmental factors for analytics and decision-making, in particular for long-term care, were often overlooked. The potential of wearable devices and sensors, as well as distributed storage and access (e.g. the cloud), is yet to be fully appreciated. There is a distinct lack of strong supporting clinical evidence for the implemented technologies. Socio-cultural aspects, such as divergence among groups and the acceptability and usability of AALS, were also overlooked. Future systems need to look into the issues of privacy and cyber security.

Background

The elderly population in the world is increasing as a result of advancements in technology, public health, nutrition and medicine (Beard et al. 2012; Aytac et al. 1999). Rising life expectancy, declining birth rates and falling infant mortality will continue to drive this significant shift in demographics around the world, although at varying degrees and paces (United Nations 2013). People aged sixty or over made up more than 11.5% of the global population in 2012. By 2050, this number is expected to double to two billion, and around thirty-three countries will each have more than ten million people aged sixty or over (Haub 2012). The Organisation for Economic Co-operation and Development (2005) forecast that during the first half of the twenty-first century its member countries would experience a drastic increase in the elderly population, as well as a steep decline in the working-age population.
For example, the percentage of the population aged 65 or over in the UK had increased to 16% of the total by 2009, while forecasts suggest that 40% of the country's population will be aged fifty or over by 2026 (Winkler et al. 2007). The demographic shift is not evenly distributed. Figure 1 illustrates the number of persons aged 65 years or over per hundred children under 15 years in different regions between 1950 and 2050. The ageing of the population in the developed Americas and Europe is steep compared to Africa. Ageing in the Middle East is expected to rise rapidly over the next 35 years. In Asia, the Chinese population is ageing rapidly, due to the one-child policy that the government enforces and the country's lower mortality rate (Zhang and Goza 2006). In the Middle East, the proportion of elderly to young people is low compared with western countries. However, the percentage of the aged population will increase throughout the region, with sharp increases in countries with declining fertility and extensive development. By 2050, around 22% of the forecast 1.1 billion people in the Middle East are expected to be aged 60 or over (United Nations Population Fund 2012). Although this share will still be low, the pace of ageing will increase rapidly, and by 2030 it is forecast to be around 7% (Hayutin 2009).

Independent living by the elderly

Many older people want to spend time in their home environment. Nearly 40% of the world's elderly population live independently (United Nations 2012), almost half of whom are women, while only a minority of older men live alone (Dwyer et al. 2000; Mba 2013). There are significant differences in the percentage of elderly people living independently in developed and developing countries: elderly people who live independently represent around 75% in developed countries (United Nations 2012). It is important to note that living alone or just with a spouse may be regarded as economic independence in developed countries, while it may be an indication of vulnerability in developing countries (World Health Organization 2011), where social norms expect the older offspring to look after their parents in old age.

A nationwide survey in the UK by the Disabled Living Foundation (DLF) (2009) revealed that more people worry about losing their independence (49%) than about dying (29%) as they grow older. Similar results have been found in a survey in the USA by Home Instead Senior Care, the largest elderly care organization in the world (Mangoni 2014). The DLF survey findings also suggest that losing independence, or becoming dependent on others, was a bigger concern than financial worries, despite the survey being conducted during financially stringent times in 2009. The home is, therefore, a focal point for ensuring independent, healthy and socially inclusive living, and should be designed and equipped with the right infrastructure to support and host the variety of services that older people may require to meet their needs (Shikder et al. 2010). Moreover, easy access to social environments (e.g. healthcare facilities, care support, supermarkets, cultural centers and places to socialize), either from homes or integrated into homes through adapted digital technologies, is essential to offset potential challenges such as isolation, loneliness, and the associated physical and mental decline. Access to useful, high-quality information is also vital for making informed choices about elderly care, particularly for those suffering from gradual cognitive impairment.
Studies have shown that most elderly people living with neurological conditions give priority to living independently in their homes, even though they may depend on others for the management of their daily life (Chan et al. 2009).

Ambient assisted living

The ageing population will bring about significant challenges for society (United Nations Population Fund 2012; Kwan 2012). Prolonged ageing has also resulted in an increase in neurological conditions, such as age-related cognitive decline, and chronic conditions among the elderly (World Health Organization 2006). Their quality of life is also affected by constraints related to physical activity, hearing and vision, and ultimately by the loss of independence (Shikder et al. 2012). One of the key healthcare challenges is thus the provision of sustainable care to the growing number of elderly, either in their homes or in assisted living environments, by providing them with personalized care based on their profile and the surrounding context, commonly referred to as an ambient assisted living system (AALS). Continuous, and often real-time, monitoring of the environment and of occupant behavior and health is the basis of AALS, which provide a facility for triggering assistance through an event-based system. These enabling technologies, along with preventative measures and care for healthy and active ageing, are considered the way forward from the perspectives of health and social care providers and professionals alike. The rationale is that healthy, active ageing can support independence, enabling the elderly to live well with simple or stable long-term conditions, as well as with complex comorbidities, dementia and frailty (Oliver et al. 2014).

AALS provide user-specific support within the home environment, including the automated operation of equipment for maintaining comfort (e.g. heating, ventilation and air-conditioning (HVAC) systems), safety (e.g. lights) and warning (e.g. alarms for medicine) (Van Hoof et al. 2010). The systems also enable support for tedious work; e.g. mobile and home robots offer assistance with moving objects or presenting food (Urdiales et al. 2013). For the elderly with cognitive impairments, the support for tedious tasks is often responsive; i.e. the subject's daily activities are monitored first to identify activities, and support is then provided for the identified task. Some systems perform specific tasks that require interaction with outside agents or systems, e.g. paying bills or ordering groceries (van den Broek et al. 2010).

Significant advances in telecommunication, computing and sensor miniaturization, and the ubiquity of mobile and connected devices are influencing the development of AALS. Despite recent progress and demonstrations of positive effects on elderly people's daily living (Bharucha et al. 2009), several limitations of the research and practice of ambient assisted systems have been identified. First, most studies lack satisfactory clinical evidence in support of the enhancement of quality of life achieved by introducing AALS, a concern shared by Blaschke et al. (2009) and Demiris and Hensel (2008). Second, the level of end-users' acceptance of the technology, in terms of usability, eligibility of implementation, and ethical and privacy issues, has not been explored in detail (Or and Karsh 2009). Third, the needs and demands of end-users, such as the elderly and carers, are not specifically addressed, and many projects are designed based on the assumptions of researchers (Or and Karsh 2009; Chan et al. 2008).
Fourth, Blaschke et al. (2009) point out that the health and care workers responsible for the elderly have not always been well informed about the implemented AALS, in particular about the aspects that affect their work practices.

Study contents

As a growing area of research, it is essential to investigate the approaches for developing AALS in the literature to identify common practices, limitations and directions for future research. This paper is, therefore, aimed at a comprehensive and critical review of the frameworks and sensor systems used in various ambient assisted living systems, as well as their objectives and relationships with care and clinical systems.

Methods

Published literature from the past 15 years in relevant electronic and non-electronic resources of peer-reviewed journal and scientific articles was searched to identify sources dealing directly with support for independent living by the elderly, with a particular focus on ambient assisted living systems. The aim was to conduct a comprehensive assessment of the technology and to classify recent developments so that any gaps could be identified. Electronic resources related to the research-topic keywords were searched through ScienceDirect, IEEE Xplore, Web of Science, PubMed and Google search engines, including Google Scholar. The keywords used were: elderly people, daily activities, environmental monitoring, assisted living technology, smart homes, behavior monitoring, activity recognition, and distributed sensing, with both 'OR' and 'AND' connectives between search words. The search was cross-matched between the keywords to cover all possible combinations in all of the research databases. The obtained results were filtered into organizational websites, specialized books, and scientific articles. Abstracts of the selected articles and books were then screened to identify potential literature directly related to the research topic. Moreover, a further search was conducted to follow relevant authors' literature related to the topic under consideration. The criteria for selecting and retaining highly linked articles for detailed review were:

• Coverage of the concept and philosophy of behavior recognition of the elderly;
• Coverage of the details of monitoring systems for environmental and vital signs, especially with the elderly as test subjects;
• Studies with significant contributions in smart home design and implementation;
• Studies that investigated the effectiveness of assisted living technologies;
• Studies covering detailed implementations and research projects related to independent living of the elderly; and
• Key review papers and reports from established authors and health organizations.

Finally, around 133 papers related to the research topic were retained for further investigation. The results were clustered, for structuring and organizing the discussion, into groups: activity modelling techniques; personal and environmental sensing and monitoring systems; home environment characteristics; and recent research projects that addressed independent living by the elderly.

Developments in ambient assisted living systems

A typical AALS is illustrated in Fig. 2, where user behavior is monitored through a distributed home sensor system, which links caregivers and friends/family to the elderly person's home through an assurance system; a minimal sketch of such an assurance rule is given below. In some applications, relatives and emergency services are also linked to the system for instant alerting in specific situations.
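The following is a minimal sketch of the event-driven assurance loop just described: motion-sensor events update a last-activity timestamp, and prolonged inactivity during waking hours triggers an alert to caregivers. The thresholds, sensor names and alert transport are hypothetical placeholders, not taken from any specific system reviewed here.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AssuranceMonitor:
    inactivity_limit: timedelta = timedelta(hours=4)
    last_activity: datetime = field(default_factory=datetime.now)
    alerted: bool = False

    def on_sensor_event(self, sensor_id: str, timestamp: datetime) -> None:
        """Any motion/door/pressure event counts as evidence of activity."""
        self.last_activity = timestamp
        self.alerted = False   # occupant is active again; re-arm the alert

    def check(self, now: datetime) -> None:
        """Called periodically (e.g., every minute) by the monitoring system."""
        daytime = 8 <= now.hour < 22          # only alert during waking hours
        if daytime and not self.alerted and now - self.last_activity > self.inactivity_limit:
            self.alert_caregiver(now)
            self.alerted = True

    def alert_caregiver(self, now: datetime) -> None:
        # Placeholder: a real system would notify via SMS, app push or a call centre.
        print(f"[{now:%H:%M}] ALERT: no activity for over {self.inactivity_limit}.")

# Example usage:
monitor = AssuranceMonitor()
monitor.on_sensor_event("PIR-kitchen", datetime(2016, 5, 1, 9, 0))
monitor.check(datetime(2016, 5, 1, 14, 30))   # >4 h without events -> alert
```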
Table 1 provides a list of commonly used sensor types in AALS, along with their usage, signal type, installation difficulty, generated noise and cost. Most sensors do not generate noise when operated, provided they are not connected to alarms. Sensors that generate binary signals are typically easier to install and require less calibration than the continuous type. In an AAL home, intelligence is gathered through a sensor network and fused with other data, and information and communication technologies and equipment are introduced to assist inhabitants' daily living activities such as moving furniture, timed medication, eating, dressing, and communicating. The early stage of AAL home projects focused on safety, in particular on alarms or notifications in emergency situations such as falls, and the requirements of the system were generated primarily by users, as listed in Table 2. The acronym AALS describes the ICT-augmented living environment in which ambient conditions are monitored via a sensor network and the collected data are often fused with information gathered from health and activity monitoring systems to (a) control the living environment for occupant safety and comfort, (b) provide family, friends and caregivers with up-to-date information on occupant status, and (c) inform short- and long-term health and care management. AALS is typically targeted at the elderly, but the principles are similar to the paradigm of smart or intelligent homes; hence, it is applicable in a host of other relevant scenarios.

Initial developments

AALS centered around home automation, in which distributed sensor systems were used to collect information about the state of the environment, decide certain actions in response, activate specific actuators to operate home devices, and interchange data with outside domains.

Fig. 2 Architecture of a typical ambient assisted living system. Adapted from (Pollack 2005)

The AAL home name derives from this main idea of home automation. An AAL home may also be known as a smart space, aware-house, or collaborative ambient intelligence. AAL homes with these capabilities can provide elderly people with various types of home assistance, controlled medication, fall prevention, and security features. Such systems generate a feeling of security for the elderly within the home domain. Moreover, they help relatives observe their elderly loved ones from anywhere with an internet connection (Cheek et al. 2005). Various laboratory trials, projects, and industrial showcases concerning AAL homes exist around the world, many of which share features. Looking into the objectives these projects aim to achieve, they differ in their technological innovation, information selection, validation method, and confirmation of results. In this respect, current AAL home technologies can be grouped into three categories:

1. Daily activities and social connectedness: those targeting the facilitation of social activities and social networking, and the identification of social efficiencies.
2. Safety enhancement: those targeting fall detection, personal emergency, and medication management systems.
3. Health monitoring: those targeting the management of chronic diseases.
It also includes active tele-health, allowing remote interaction with patients and continuous collection of health records. There are various projects worldwide, some of which are described briefly in this section.

Daily activities and social connectedness

The Assisted Cognition Environment (ACE) is aimed at the use of artificial intelligence (AI) techniques to enhance and support the daily life of elderly people suffering from cognitive disorders, by sensing the surrounding environment and the patient's location and interpreting these data to identify the patient's behavioral patterns (Kautz et al. 2002). Support is then offered to the patient through verbal and physical interventions, with the option to alert caregivers. Innovations in ACE can be divided into two: (a) the ability to create an activity supervision model to reduce patients' spatial disorientation, and (b) the structured prompter that supports patients in performing their everyday multi-step tasks (Qixin et al. 2006). The AWARE project, on the other hand, is aimed at conceptualizing the living context of the elderly by introducing ubiquitous computing to provide important information to family members who are concerned about them living alone. The key innovation is the ability to distinguish a particular individual from others by detecting the person's location using force-sensitive load tiles on the floor that record footstep patterns, called ground reaction force (GRF), to create a model of each individual's unique footstep pattern. The GRF model is then compared with new GRF input data using hidden Markov models (HMMs) and feature-vector average (FVA) techniques to identify individuals. In addition, AWARE used radio frequency tags to locate frequently lost objects such as keys and glasses, with a view to investigating, in a laboratory setting, how people lose their objects (Kidd et al. 1999). The successor to AWARE is the AWARE Smart Home project, which is aimed at improving social interactions between the elderly and their families, as well as the outside world (Kientz et al. 2008). Indoor position tracking was implemented using RFID sensors and computer-vision-based solutions to support an activity recognition system for identifying occupants' activities, e.g. watching TV, reading and preparing a meal. CASAS uses machine learning techniques to identify behavioral patterns of the elderly suffering from cognitive decline using data from motion sensors (Cook et al. 2003). Tests involving cognitively healthy and dementia subjects showed that the implemented learning algorithm was able to identify differences in activities; however, it could not distinguish the cause of the differences, i.e. whether they resulted from confusion due to dementia or from a simple mistake. The Managing an Intelligent Versatile Home (MavHome) project utilizes machine learning techniques to identify activities within a smart home environment, which are then used to actuate and control devices (home automation), with the overall aim of minimizing the cost of maintaining the home and maximizing the comfort of its inhabitants (Cook et al. 2003). Lotfi et al. (2012) expanded CASAS by using a sequence of monitoring signals from different locations to describe the flow of an occupant's activity, alongside the duration of these signals. The collected data are analyzed using clustering techniques and have been found to be more effective in distinguishing abnormalities in the activities of subjects with dementia.
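The per-identity HMM scoring pattern behind footstep-based identification such as AWARE's GRF matching can be sketched as below. The feature extraction and the use of the third-party hmmlearn package are assumptions for illustration; this is the general pattern, not AWARE's actual implementation.

```python
# Sketch: train one HMM per person on footstep feature sequences, then
# identify a new sequence by the model that scores it highest.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_identity_models(sequences_by_person):
    """sequences_by_person: dict mapping name -> list of (T_i, n_features)
    arrays of footstep features (hypothetical feature layout)."""
    models = {}
    for name, seqs in sequences_by_person.items():
        X = np.concatenate(seqs)            # stack all sequences
        lengths = [len(s) for s in seqs]    # hmmlearn multi-sequence API
        m = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def identify(models, sequence):
    """Return the identity whose HMM assigns the new sequence the highest
    log-likelihood, analogous to matching a new GRF pattern to a model."""
    return max(models, key=lambda name: models[name].score(sequence))
```

The same score-and-argmax pattern generalizes from identifying individuals to recognizing activities, which is one reason HMMs recur throughout this literature.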
With similar objectives, I-LivingTM, on the other hand, focused on secure communication between distributed wireless sensors using different communication protocols (Bal et al. 2011), with a user interface designed to help elderly people with disabilities enhance their independence. The integration of several wireless network protocols (e.g. Wi-Fi, Infrared, Bluetooth, and IEEE 802.11) with commercially available sensing technologies for localization and presence identification was based on an open system architecture (Qixin et al. 2006). CAALYX is a European project focusing on three areas of monitoring in a socially connected system: home, roaming and central care services (Rocha et al. 2013). The integration of lightweight wearable devices, such as data loggers in smart phones and watches, sets CAALYX apart from other AALS in the sense that a larger set of parameters can be taken into consideration. The SISARL project focuses broadly on the use of consumer electronics to enhance the quality of life of elderly people and provide them with the help necessary to achieve an active and independent life (Bal et al. 2011). The project investigated several everyday living applications, e.g. the location of objects, the use of medicine dispensers, the monitoring of personal vital signs, the detection of pattern irregularities, notifications, and the use of robotic platforms to enhance the dexterity and reach of elderly occupants. SOPRANO, a European project, is based on a combination of ontology-based techniques and a service-oriented device architecture (Müller et al. 2008). By separating system aspects such as sensors and actuators, context information, and system behavior, SOPRANO provides a contract-centric framework for different solutions utilizing semantic technologies (Wolf et al. 2008). The TeleCARE project presents a generic architecture for AALS (Whitten et al. 1998) through abstraction of both hardware and software, without specifying how third-party hardware drivers are handled.

Safety enhancement

Casattenta aimed to integrate ambient intelligence technologies, sensor fusion and wireless communication in the form of a set of fixed and wearable sensors distributed throughout the monitored environment and connected through a communication platform. The system was designed to support independent living by enabling the tracking and identification of critical situations, such as the danger of falls and immobility conditions (Farella et al. 2010). Gator-Tech was designed as an intelligent environment based on supportive features found in smart home devices, such as smart appliances, plug-and-play sensors, actuators, and smart floors for position tracking. The overall system is based on a generic design for a smart environment, containing service definitions for sensors and actuators distributed in the monitored environment, to support independent living by the elderly (Helal et al. 2005). CareWatch was developed for monitoring the sleeping patterns of cognitively declined elderly people and activating notification systems for care providers, with a view to preventing unsupervised home exits and relieving some of the burden on care providers, especially during the night. The system is designed to increase the quality of life of both the care recipient and the caregiver. The Gerontological smart home environment (GER-HOME) was intended to improve the feeling of independence for elderly people suffering from the loss of autonomy (CSTB 2011).
GERHOME implemented automatic recognition of human behaviors using real-time video surveillance combined with other types of sensor data. The project presented a communication infrastructure based on an intelligent agent architecture, allowing easy integration of different types of sensors within an existing system structure. The Technology Assisted Friendly Environment for the Third Age (TAFETA) project was built in the form of a smart apartment loaded with various types of sensors and actuators to detect and control environmental parameters. The system was tested to monitor movement continuously in the apartment and to assess occupants' sleep quality (TAFETA 2011). ORCATECH was devoted to the development of technologies supporting independent living for a wide range of requirements in elderly people's health monitoring and home care support. The system comprises intelligent bed sensors to track sleeping patterns and to prevent falls by turning on room lights automatically when the system detects that the person has awakened. ORCATECH also offers remotely controlled tele-presence to provide health support to the elderly living alone, and enables social interactions with remote family members and caregivers (Nehmer et al. 2006).

Health monitoring

The BioMOBIUS project was developed as a research platform comprising hardware, sensors, software, services and a graphical development environment (BioMobus 2011), leveraging existing platforms and libraries such as EyesWeb XMI (eXtended Multimodal Interaction), conceived to support research and development on expressive interfaces and interactive systems for gesture recognition and movement analysis (Camurri et al. 2007). The platform implementation comprised a sensing infrastructure to monitor physiological parameters, a processing platform for data integration and fusion, and an intelligent agent that converted measurements into useful, expressive information for clinicians. The aim was to monitor blood pressure, gait stability, risk alertness, and social activity. The system was designed to be adaptable to various hardware through its generic mixed wired and wireless interfaces. The MIT House_n focused on the design elements and associated technologies of a smart home implemented in a laboratory facility equipped with sensors in various locations. The platform was designed to be extensible for further development of innovative user interfaces while investigating the needs for environmental condition monitoring, proactive healthcare, biometric monitoring, indoor air quality, and new construction solutions for health and activity monitoring (Chan et al. 2008). AlarmNet, a wireless sensor-based AALS, was aimed at providing healthcare monitoring for independent living (Center for Wireless Health 2011) by using heterogeneous wearable and stationary wireless sensing devices combined with a user interface, a database and decision logic. Mobile body-worn sensors provide physiological sensing of blood pressure, pulse rate, and movement (accelerometer) data. Stationary sensors collect environmental data such as ambient temperature, air quality, light and user location. Collected data are filtered, aggregated, and analysed to adapt to residents' requirements. AlarmNet's flexibility allows the expansion of the system for integrating more sensing devices and for monitoring new parameters (Wood et al. 2008).
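The filter-and-aggregate step described for AlarmNet-style systems can be sketched as follows; the sampling rate, rolling-median filter, and temperature limits are illustrative assumptions rather than AlarmNet's actual pipeline.

```python
# Sketch: smooth, resample, and range-check raw environmental readings
# before analysis, as in the filter/aggregate stage described above.
import pandas as pd

def aggregate_readings(timestamps, values, low=15.0, high=30.0):
    """Resample per-minute ambient readings to hourly means and flag
    out-of-range hours that may warrant caregiver attention."""
    s = pd.Series(values, index=pd.DatetimeIndex(timestamps))
    s = s.rolling(window=5, min_periods=1).median()   # filter sensor spikes
    hourly = s.resample("1h").mean()                  # aggregate
    flags = (hourly < low) | (hourly > high)          # simple range check
    return hourly, flags
```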
Despite this intrinsic flexibility, AlarmNet is a closed architecture without the versatility to support third-party sensors and analytics. CodeBlue, on the other hand, was designed to examine the application of wireless sensor networks (WSN) to a range of medical applications, including stroke patients' rehabilitation and disaster response (CodeBlue 2011). The WSN comprised battery-powered sensor devices enhanced with sufficient computation and communication modules to collect and process vital signs, which were then integrated into the patient care record system for real-time medical use (Wood et al. 2008). The Smart Medical Home focused on advancing interactive technologies for home health care (Ricquebourg et al. 2006). The project developed technologies to improve early detection and anticipation of a patient's health and medical condition by using an interactive medical advisory system to interact with the patient. Using speech recognition and artificial intelligence techniques, together with the patient's available medical data, the interactive system advises residents on possible illnesses using structured interactive questions and answers in real time. WellAware provided an integrated structure comprising an unobtrusive sensor system and a user interface to enable professional caregivers, as well as relatives, to remotely monitor and deliver support to elderly people (Bal et al. 2011).

Activity recognition

An activity recognition system typically consists of two sub-systems: (a) a sensor system able to detect what happens in the environment, and (b) an intelligent model able to recognize activities from sensor information. The aim of ambient intelligence is to enrich the surroundings with modern sensor devices interconnected by a communication network to form an electronic servant, which senses changes in the surroundings, reasons about the causes of these changes, and selects proper actions to benefit the users of the environment. Direct sensing involves tracking parameters related to the subjects themselves, whereas indirect sensing focuses on identifying environmental conditions and spatial features. Both direct and indirect systems are employed in research and practice for capturing human behaviors. Direct sensing includes sound capture, video cameras, and motion sensors, as well as wearable body sensors. Raw data/signals from these sensors are transferred to the database. Sensed data are typically annotated and often combined with each other to identify human behaviours in later stages of analysis. Health-related AAL systems can be divided into six main categories:

• Physiological assessment: pulse rate, respiration, temperature, blood pressure, sugar level, bowel and bladder outputs, etc.
• Functional assessment: general activity level measurements, motion, gait identification, meal intake, etc.
• Safety monitoring: analysis of data to detect environmental hazards such as gas leakage. Safety assistance includes functions such as the automatic operation of bathroom/corridor lights, reducing trips and falls.
• Security monitoring: measurements that detect human threats, such as intruder alarm systems and responses to identified threats.
• Social interaction: video-based communication to support mediated connection with family, virtual participation in activities, etc.
• Cognitive monitoring systems: automatic reminders and other cognitive aids such as automated medication, key locators, etc.
These also include verbal task instruction technologies for appliance operation and sensor-assisted technologies that help users with deficits in sight, hearing, or touch (Demiris and Hensel 2008). Distributed computing enables wider deployment of technology in everyday life. Smart sensors, devices, and actuators have become more affordable, powerful and easy to install. Rapid developments in embedded systems, and in particular low-power system-on-chip (SoC) computing architectures such as ARM SoC (Furber 2000), have enabled the embedding of intelligence in everyday devices and equipment. Patients can now be observed and assisted in their own homes instead of being moved to hospitals, resulting in economical and secure care supervision (Dengler et al. 2007). Feature-rich smartphones can communicate bi-directionally with cloud infrastructure to offload compute-heavy tasks, offering opportunities for rich functionality. They can be used to draw elders' attention to certain actions, requirements, or guidance while going about their daily activities, as well as to communicate information to supporters and family members in critical situations. Consequently, these technologies can significantly reduce healthcare costs, as well as the physical burden on healthcare supporters and family members (Fahim et al. 2012). The challenges of the effect of AALS on users were investigated by Allameh et al. (2011), who identified that users' acceptance of personal space modifications depends on user needs and lifestyle preferences. Their work classified developments in AALS into three spaces: ambient intelligent space (AmI-S), physical space (PS), and virtual space (VS), integrated together to support independent life. Moreover, their model allows for changes in lifestyle due to changes in user activity. Currently, there is interest in more detailed investigations of the linkage between AALS and users' lifestyles. Eunju et al. (2010) investigated the principles of activity recognition and demonstrated that it can be expanded to achieve increased societal benefits, especially in human-centric applications such as elderly care. Their application focused on recognizing simple human activities; recognizing complex human activities is challenging and an active area of research. The nature of the problem, i.e. understanding human activities, requires an understanding of activity profiles or patterns. Of the various techniques, the first is activity recognition based on an initial personalized model: a conceptual activity model must exist in the first step, which is then utilized to build a pervasive identification system (Chen and Nugent 2009). The second technique utilizes probability-based algorithms to generate a model for activity recognition (Wu and Huang 1999). Two of the most common methods used for this purpose are Conditional Random Field (CRF) and Hidden Markov Model (HMM) techniques. Le et al. (2008) illustrated a method that enables activity recognition for elderly people who live alone. They studied the case of a subject living in a house equipped with non-invasive presence sensors, to detect and assess her loss of autonomy by studying the degree of activities performed. In their work, they first detected the subject's sequence of mobility states in different locations around the space. Then, from these states, they extracted descriptive rules to select the activities that most influence the subject's autonomy. Medjahed et al.
(2009) illustrated an activity recognition system using fuzzy logic in home environments, with the help of a set of physiological sensors such as cardiac frequency, posture, fall detection, sound, infrared, and state-change sensors. They validated their approach in a real environment and used this activity identification approach to build a model for anxiety, with confidence increasing or decreasing according to the state of each sensor used. They successfully captured the characteristics of the data provided by different sensors using fuzzy logic, which allowed the recognition of daily living activities for generic healthcare applications. The work reported by Helmi and AlModarresi (2009) is a fuzzy system for pattern recognition that was utilized for activity modelling using tri-axial accelerometers. The accelerometers were used to detect and classify human motion into four categories: moving forward, going upstairs, going downstairs, and jumping movements. Their identification system depends on three different features (standard deviation, peak amplitude, and correlation between different axes), which are used as inputs to a fuzzy identification system. Fuzzy rules and input/output membership functions were defined from the experimental measurements. Their results indicated that the fuzzy inference system (FIS) outperforms other types of classifiers. Papamatthaiakis et al. (2010) used data mining techniques to build a smart system able to recognize human activities. They studied the everyday indoor activities of a monitored subject. Their experimental results showed that, for some activities, the recognition accuracy exceeded that of other methods relying on data mining classifiers. They claim that this method is accurate enough for dynamic environments. Zhu and Sheng (2011) illustrated a method for indoor activity identification that links the subject's motion and position data together. They attached an inertial sensor that detects orientation in three dimensions to the subject's right thigh for motion data collection, and used an optical positioning system to obtain the subject's location data. The optical positioning system can be replaced by any other location detection system. This combination maintained high identification accuracy while being less invasive. They utilized two neural networks to identify basic activities. First, the Viterbi algorithm for finding the most likely sequence of hidden states (Zhu and Sheng 2011) was employed to recognize activities from motion data only, forming the coarse classification stage. Second, Bayes' theorem was applied to update the activities recognized from motion data in the first stage. They built a mock apartment to conduct their experiments. The obtained results showed the method to be effective, producing acceptable results for activity recognition. Chen et al. (2012) conducted a comprehensive survey examining developments in sensor-based activity identification systems. They presented a review of the major characteristics of video-based and sensor-based activity identification systems, highlighting the strengths and weaknesses of these techniques and comparing data-driven and vision-driven activity recognition. They categorized assisted living technologies into two categories based on the sensing method: direct (Muñoz et al. 2011) and indirect.

Implementation challenges

A survey of intelligent techniques used to support the elderly was presented in Pollack (2005).
Several challenges still exist in implementing effective technologies to support the elderly population, including the employment of artificial intelligence (AI) techniques for reasoning under uncertainty. Despite the number of applications of machine learning and natural language processing techniques, the consideration of uncertainty, and thereby the accuracy of recognition, has not been dealt with robustly. Moreover, additional challenges arise from integration with sensor networks, privacy, security, human-machine interaction, and cognitive impairment. For example, Dibley et al. (2012) illustrated a cost-effective, real-time distributed sensor system for environmental monitoring that integrates several types of devices, such as temperature, humidity, motion, light, and magnetic sensors. They used an ontology-based framework for pattern recognition, which eased the process of software development as well as system integration.

Data augmentation

Augmentation and annotation of data are necessary for effective data analysis and for the elimination of irregularities and errors, improving accuracy in a sensor network. Several categories of data can be augmented: temporal (e.g. date and time) and spatial (e.g. location) are the major categories of augmentation of raw data (Blaya et al. 2009; Franco et al. 2010; Virone 2009). Other approaches include the development of ontologies to include more detailed categorized information, with a view to explaining sensor events and occupant context (Muñoz et al. 2011). Other contextual information, such as messages and the design of the room, can also be added (Rowe et al. 2007). The addition of more specific information to raw data is suggested as the key to increasing the accuracy of activity recognition (Van Kasteren et al. 2010).

Data transfer and communication

Signals from monitoring devices and sensors are presented to the activity monitoring system as either binary values (ON or OFF) or continuous values (e.g. 21 °C in environmental temperature). There are several ways to transfer the signals to a database. Once devices are activated, the signal is transferred to local data storage, such as a personal computer, through wired or wireless communication. The signals (raw data) might be annotated with information such as the time of activation and the location of the monitoring devices. Systems based on structured communication cabling between devices, sensors and computers are important for reliable system performance. However, plug-and-play wireless systems can provide alternative communication means. Generally, integration with basic home services can still only be fully achieved by structured cabling. Moreover, modern buildings can have extremely poor radio transmission, as many wireless devices are already operating in the environment (Linskell 2011). The study of Jara et al. (2013) clarified that Ambient Assisted Living (AAL) technology developers are interested in real-time wireless transmission of human vital signs for personalized healthcare applications for elderly people. Currently, personalized healthcare is limited by the availability of the subject's vital signs, which are continuously changing; continuous monitoring of the subject's vital signs is therefore essential to provide a reliable assessment of health condition. Such continuous vital sign monitoring requires the integration of wireless communication capabilities and embedded processing systems into lightweight, wearable, portable, and reliable monitoring devices that can be attached easily to the subject.
Moreover, an interactive user interface system is also needed that is easy enough to be used by both the subject and the supporter. In their work, they proposed the Near Field Communication (NFC) protocol as the medium for personalized healthcare, following the concept of the Internet of Things. NFC is a technology that can be easily integrated into smartphones and portable devices, providing identification capability and the ability to construct communication channels among devices. NFC still has challenges regarding performance, efficiency, and reliability of data transmission as a result of constrained resources and latency. These challenges are inherent to NFC technology, as it was originally designed for simple identification purposes, not for the continuous data communication and processing required for personalized healthcare. Hence, the main novelty of their work lies in designing a set of continuous vital sign transmission devices communicating over an optimized NFC system. Their system was integrated with user interface applications to provide information for caregivers and patients, supporting the monitoring and management of the patient's health status using wireless communication. They also performed a technical assessment of the system's latency and usability for continuous vital sign monitoring through a practical implementation of the system with a group of elderly people and their caregivers. In a recent study, Arai (2013) demonstrated a system able to continuously monitor a subject's health condition, and proposed a correction algorithm to eliminate errors in physical health monitoring introduced by wearable devices. All types of wearable sensors monitoring body temperature, pulse rate, blood pressure, number of steps, calorie consumption, acceleration, EEG, and GPS information are considered in this study. Monitoring data were transferred from the patient's wearable devices to the patient's mobile device via Bluetooth. The mobile device is connected to the Internet through a wireless communication network; hence, the vital signs and psychological health data can be transmitted directly to the Information Collection Centre (ICC) for health condition monitoring, or for help from designated caregivers when needed. From the above, it can be seen that technologies have been demonstrated in many ways to form closed systems able to provide specific types of support. Within the UK, many categories of AALS have been developed to address specific needs or target groups with specific technology implementations. Figure 3 summarizes most of the UK activities in terms of target group, support provided, and technology demonstrated in the AALS environment (Linskell 2011). Sensor system design is the first step in assisted living technology design. The second step is sensor data fusion for the purpose of activity recognition. Sayuti et al. (2014) discussed the trade-offs between measurement delay and throughput in a case study utilizing a lightweight priority scheduling scheme for activity monitoring from a distributed sensor system. The findings showed that the proposed scheme presented a promising solution to support decision making for an Ambient Assisted Living (AAL) system in a real setting.
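A generic form of such lightweight priority scheduling is sketched below. The priority levels and queue design are assumptions for illustration; the actual scheme of Sayuti et al. (2014) is not reproduced here.

```python
# Sketch: serve alarm traffic before vital signs, and vital signs before
# routine environmental readings, trading routine throughput for alarm latency.
import heapq
import itertools

ALARM, VITAL, ROUTINE = 0, 1, 2   # lower number = served first

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserves FIFO order

    def submit(self, priority: int, reading) -> None:
        heapq.heappush(self._heap, (priority, next(self._seq), reading))

    def next_reading(self):
        """Pop the highest-priority (then oldest) pending reading."""
        if self._heap:
            return heapq.heappop(self._heap)[2]
        return None

sched = PriorityScheduler()
sched.submit(ROUTINE, ("temp", 21.4))
sched.submit(ALARM, ("fall_detected", True))
print(sched.next_reading())   # -> ('fall_detected', True)
```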
The creation of an AAL environment not only embeds sensors to acquire information; it also processes this information and interacts with the subject for an enhanced quality of life.

Sensor fusion for activity recognition

Several types of monitors can be used to gather data on physical activities. In medical practice, it is common to continuously monitor patients' biological status, such as heart rate and saturation level, using wearable monitors. However, as some researchers have suggested, it is not practical to apply these wearable monitors in a community setting to evaluate daily activities. Many remote monitoring projects have been developed in recent years in laboratory and community settings using non-wearable environment monitors. Xin and Herzog (2012) presented a wearable monitoring system designed to achieve continuous in-house and outdoor health monitoring to support elderly people's independence. The system acts as a health diagnosis assistant, using its on-board intelligence to generate reliable, real-time health condition diagnoses. The on-board decision support system continuously learns the subject's health characteristics at certain time intervals from the attached sensor system; hence, a dynamic decision model is continuously adapted to the subject's health profile. The system is also able to measure deviations from the normal state and to categorize whether a deviation is a definite critical situation or just a normal, uncritical one. Pirttikangas et al. (2006) studied activity identification using small, wearable sensor devices attached at four different locations on the subject's body. In their experiment, they collected data from 13 different subjects of both sexes performing 17 daily life activities. They extracted features from heart rate and tri-axial accelerometer sensors for different sampling times, and employed the forward-backward sequential search algorithm to select the important features. De Miguel-Bilbao et al. (2013) illustrated a non-invasive sensor system consisting of action sensors and presence sensors for monitoring daily life activities, as well as the configuration of the monitored homes and users. The post-processing stage for activity monitoring is independent of the home topology monitoring process. The parameters extracted by the system can be considered long-term monitoring data aimed at detecting and validating daily activities, enabling the early detection of physical and cognitive dysfunctions. This method of monitoring household activity can help improve global geriatric evaluation and enhance the possibility of better remote monitoring of elderly people in their homes. This knowledge can support the design and manufacture of biomedical sensors that are small, reliable, sensitive, and inexpensive (Agoulmine et al. 2011). Shuai et al. (2010) focused on including activity duration in the learning of inhabitants' daily living activities and behaviour patterns in a smart home environment. They applied a probabilistic learning algorithm to study multiple inhabitants in the same smart home environment. They predicted both the inhabitants and their model of activities of daily living (ADL), utilizing the activity carried out and the people performing it, through experiments in a smart kitchen laboratory.
The experimental results for activity identification demonstrated high accuracy, compared to the unreliable results obtained without activity duration information in the model. Their approach also provides a great opportunity for identifying drift in long-term activity monitoring, as early-stage detection of a deteriorating situation. A language-based programming and interaction approach supports developers in freely expressing the global behaviour of a smart home application as one logical entity. The high-level language eases implementation efforts for the application developer. By structuring application development into different high-level models, developers can simplify application maintenance and customization in response to changing user requirements or changes in the monitored living environment. In this way, people are directed to use rules to describe the required behaviour within a smart home environment. Consequently, by providing a rule-based modelling language, the gap between user-based application development and the actual system implementation can be reduced (Bischoff et al. 2007). Algase et al. (2003) investigated reliable measures suitable for identifying wandering behaviour. Most of the studies they reviewed relied on a simple classification of the subject's state as wandering or not wandering based on caregivers' personal judgments, which lack clear, consistent assessment. They found that unplanned ambulation is a key element across all methods used for wandering behaviour identification. They studied the different types of sensors used for wandering behaviour identification and found that the StepWatch device outperformed all others, as it was always able to identify wandering behaviour correctly. The StepWatch consistently produced the best estimate of the subject's time spent wandering, whereas the other devices tested in the study were oversensitive to normal movement and produced substantial overestimates. Wireless body-attached sensor devices and smartphones were utilized to monitor the health condition of elderly people in a recent study by Bose (2013). These body-attached devices offered remote sensing of the elderly person's vital signs for health condition assessment anytime and anywhere. Moreover, the system supported the creation of a customized solution for each subject according to their individual health condition requirements. If the system detects an emergency situation or deteriorating conditions, the smartphone alerts pre-assigned supervisors or the elderly person's family or neighbours through text messages, or by making a phone call with a predefined voice message describing the condition. In some cases, it even alerts the ambulance service with a detailed report of the subject's condition and location. Moreover, the system features some unique functions to support the elderly person's basic daily life requirements, such as regular medication reminders and medical guidance. However, Bose highlighted that innovations are still required in the Wireless Sensor Networks (WSN) field before such technologies can be applied reliably and with confidence in this domain.
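The tiered alerting described in Bose (2013) can be sketched as a simple escalation rule. The thresholds and contact routing below are illustrative assumptions, not values from the study.

```python
# Sketch: escalate vital-sign deviations from caregiver notification to an
# ambulance call, as in the smartphone-based alerting described above.
def escalate(pulse_bpm: float, systolic_mmhg: float) -> str:
    severe = pulse_bpm > 140 or pulse_bpm < 40 or systolic_mmhg > 200
    abnormal = pulse_bpm > 110 or pulse_bpm < 50 or systolic_mmhg > 160

    if severe:
        # Would place a call with a prerecorded condition report + location.
        return "call_ambulance"
    if abnormal:
        # Would send a text message to pre-assigned supervisors or family.
        return "text_caregiver"
    return "no_action"

assert escalate(pulse_bpm=72, systolic_mmhg=120) == "no_action"
assert escalate(pulse_bpm=115, systolic_mmhg=150) == "text_caregiver"
assert escalate(pulse_bpm=150, systolic_mmhg=130) == "call_ambulance"
```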
The work of Arai (2014) concerned the monitoring of vital signs such as blood pressure, body temperature, pulse rate, breathing, location/attitude, and consciousness using a wearable distributed sensor network, for the purpose of rescuing elderly people in vital need of support when evacuating from a disaster location. Experimental results showed that all of the vital signs, as well as the location and attitude of the elderly persons, were correctly monitored with the proposed sensor networks. Moreover, it was clear that there is no specific correlation between pulse rate and the subject's age; no specific calorie consumption can be linked to age; EEG signals can be linked to eye movements to predict psychological state; and there is a clear difference between healthy persons and patients with dementia. Finally, links were found between blood pressure and physical/psychological stress (Arai 2014). Phua et al. (2009) studied memory and problem-solving abilities to produce what they called Erroneous-Plan Recognition (EPR), aiming to identify imperfections or faults in the implementation of specific plans by patients with memory problems. The researchers faced several challenges related to the correct definition of a plan within daily living activities, the choice of activities to be monitored, the types of sensors required to recognize these activities, and the activity recognition technique to be used. In this study, they used independent sequential error detection layers to identify specific errors in plan implementation. Their results indicated that error data can be separated effectively. The study gave examples of how the suggested EPR system can work well with the Deterministic Finite-State Automata (DFSA) technique for identifying error probabilities. Lauriks et al. (2007) provided a detailed analysis of the state of the art in information and communication technologies (ICT) that can be applied to meet the unmet needs of elderly people. They categorized these needs as requirements for tailored information systems, customized disease support, social interaction, health condition monitoring, and observed safety. ICT solutions targeting memory problems demonstrate that people with memory diseases are able to use simple electronic equipment with sufficient confidence. Instrumental ICT-based systems targeting social activities could be implemented simply via mobile phones or entertaining robotic platforms. GPS-based tracking devices have proved their ability to enhance the feeling of safety. However, more studies of these ICT solutions in simulated daily life situations are required before moving to commercial implementations for elderly people's daily life support. The final step after sensor data fusion is the activity recognition algorithms used to characterize the activities performed by the elderly.

Activity recognition mechanisms

Application of probability theory

The datasets in the database must be statistically analysed using probability theory and regression analysis, which will reveal trends within the datasets. To eliminate noise from raw data and to detect patterns, probability distributions and cluster analysis must be employed, with annotated information as the key to efficient data cleaning. The analysis method must be adapted depending on the type of data, such as binary or continuous.
Based on the patterns in the data, behavioural models/algorithms can be constructed for use in machine learning or fuzzy decision-making systems. Once a behavioural model has been established, new input data are compared with the model/algorithm and evaluated as 'normal' or 'abnormal', representing the activity of the inhabitant in the environment. In some cases, an unusual day, consisting of irregular patterns and rhythms of behaviour, is identified through pattern mining using behaviour templates based on current/previous-day activities and the circadian activity rhythm (CAR) (Junker et al. 2008). Hence, it is essential to turn activities recognized from various sets of sensor data into usable information. Consequently, AAL simulators need realistic sensor data. The study presented by Chikhaoui et al. (2012) illustrated an autonomous system for activity identification in a controlled environment, linking activities with patterns extracted from sensor data. They used pattern mining techniques linked with probability theory to discover and recognize activities. In their work, they presented activity recognition as an optimization problem in which activities are modelled as probability distributions over sequential patterns. The experimental results were obtained from real sensor data collected in an AAL environment and demonstrated the effectiveness of the suggested system for activity identification. Helal et al. (2012) illustrated an automatic situation generation methodology to create faithful sensor systems for activity monitoring. Their system provides a 3D graphical user interface to achieve a virtual spatial projection of a simulated sensor network in a virtual reality environment, giving users simulation data that contribute to activity recognition directly linked to a certain space. Their work showed how a 3D simulator named Persim can be used for activity identification in a virtual reality domain, to fuse the datasets needed for real-time activity recognition applications. The system generates data on activities carried out by a virtual character in a virtual space using Persim 3D's intuitive graphical user interface (Helal et al. 2011).

Application of wearable systems

In the study of Lara et al. (2012), a system called Centinela was illustrated. This system combines the subject's body acceleration measurements with vital signs to produce a highly accurate activity identification system. The system targeted five main activities, namely walking, sitting, running, and descending and ascending stairs. The proposed design consists of an unobtrusive portable sensing device and a mobile phone. After testing three different time window sizes and eight different classifiers, the results showed that the Centinela platform can achieve around 95% accuracy, outperforming other techniques tested under the same conditions. Moreover, the results indicated that vital sign measurements are important for differentiating between types of activities. This finding strengthens the claim that vital signs combined with motion information form a more effective method for recognizing human activities than motion data alone. The position of the sensor was an important point in the study: the researchers identified that locating the motion sensor at the chest of the elderly person eliminates conflicts that may arise if it is attached to the wrist (Tzu-Ping et al.
2009). In addition to activity recognition, the system presented a real-time vital sign monitoring interface, adding easy health condition monitoring to the activity recognition target. Krishnan and Cook (2014) developed a wireless, non-intrusive sensor system able to capture the necessary activity information from sequences of sensor measurements. In this study, they proposed and evaluated a sliding time window approach to identify activities in a streaming fashion. To differentiate between activities, they incorporated so-called time-decay correlation weighting of sensor measurements within a time window. They concluded from their experiments that combining the joint information of weighted current sensor measurements and previous contextual information generates the best-performing streaming activity identification system. Chernbumroong et al. (2013) addressed the development of an activity identification system for assisted living technology from the point of view of user acceptance, personal privacy, and system cost. The main aim of the study was to design a system for recognizing nine different daily life activities of an elderly subject while taking these aspects into account. The study proposed an activity recognition system for an elderly person using non-intrusive, low-cost, wrist-worn sensor devices. Their experimental findings showed that the system can achieve classification accuracy exceeding 90%. They performed further statistical tests to support this claim, showing that combining accelerometer data with temperature sensor readings significantly improves activity classification accuracy.

Application of motion systems

In another study, Dinh and Struck (2009) presented a fall detection system able to monitor elderly people's daily activities and provide support in case of emergency. The fall detection function was performed using only one tri-axial accelerometer device. The motion measurements were used as inputs to a fuzzy logic inference system, followed by a neural network that classifies the orientation of the subject. If the basic stable position conditions change, the system can be modified easily through the fuzzy inference system's membership functions and rules. The obtained results indicated that a single tri-axial accelerometer is sufficient to form a robust fall detection system, and that a knowledge-based identification technique presents an effective replacement for standard pattern identification methods in this application. In another study, Xu et al. (2012) illustrated a sensing cushion that collects information about personal seating postures to support the generation of alerting signals when sitting for an unhealthily long time. The cushion is formed of two parts, a seat pan and a backrest surface, equipped with distributed pressure sensors; pressure distribution data are collected by a local microcontroller and then transmitted wirelessly to a personal computer via Bluetooth. The presented identification system was able to recognize nine different seating postures with very high accuracy, supporting advice about proper seating orientation over long periods of sitting. Virone (2009) illustrated a pattern recognition system for assessing behavioural rhythms in assistive ageing technologies.
The method was evaluated in an assisted living environment using motion sensors to establish motion-based behaviours of the elderly from their habitual activity displacements. The method was extended to study specific patterns of everyday living activities, assuming that activities could be pre-identified and adapted over the long term using an activity learning system. The system was successful in detecting behaviours emerging from patterns of movements elaborated from motion sensors; however, the method's feasibility was tested using semi-artificial data. Dalton and OLaighin (2012) studied the performance of two different classifiers for physical activity recognition, a base-level and a meta-level classifier. They utilized different wireless kinematic sensors dedicated to each individual in a group of twenty-five subjects performing certain fundamental physical activities inside a monitored environment. Participants were asked to perform these specific physical activities randomly in the environment. Features were extracted from sensor measurements based on frequency-domain and time-domain analysis, such as average magnitude, zero-crossing rate, auto-correlation, cross-correlation, central moments, spectral entropy, and dominant frequency. A wrapper subset evaluation technique was then used to reduce the size of the obtained feature vector for classifier comparison. The essential finding of this study was the importance of the wrist and ankle sensor devices in physical activity recognition applications. Junker et al. (2008) illustrated a method for identifying sporadically occurring gestures from continuous data streams collected from body-attached motion sensors. Their method was based on partitioning continuous sensor data signals in a two-stage identification approach for gesture recognition. In the first stage, a similarity search technique is employed to select data sections that contain specific, useful motion information. In the second stage, these signals are classified for gesture recognition using hidden Markov models. They claimed that this technique presents a solid strategy for identifying various gesture orientations from motion sensors, as illustrated by two different test cases in their study.

Application of vision systems

Image processing, on the other hand, has been used extensively for activity recognition in computer vision systems. Despite its popularity, its application in real-life scenarios has been limited, as it is not entirely automated and requires high computational resources for information processing. In some works, automatic video sequence segmentation is applied for activity spotting; the segmented parts of the video are passed to an activity recognition algorithm. Activity detection is achieved by localizing the time intervals of video sequences that contain potential information and events. Motion detection combined with trajectory extraction is used for spotting important intervals. In this way, unimportant parts of videos, such as motionless frames or long sequences with the same pattern, are ignored. Generally, regions of interest (RoI) are defined where motion undergoes changes; sample interest points are identified and tracked over time until the activity ends, resulting in the video sequence to be processed. This method separates moving pixels from static ones through inter-frame illumination differences. Activity recognition can then be performed using K-means or Chi-square kernel algorithms.
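The K-means step mentioned above can be sketched as follows: motion descriptors extracted from the spotted video intervals are clustered so that recurring activity patterns can be labelled. The feature layout and cluster count below are assumptions, and random data stands in for real descriptors.

```python
# Sketch: cluster motion descriptors from detected intervals of interest,
# then use cluster assignments as candidate activity labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Each row is a (hypothetical) motion descriptor for one interval,
# e.g. (mean speed, trajectory length, dominant direction).
features = rng.normal(size=(200, 3))

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)

# A practitioner then maps clusters to activities ("walking", "sitting", ...)
# from labelled examples; new intervals are assigned to the nearest cluster.
new_interval = rng.normal(size=(1, 3))
print(km.predict(new_interval))
```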
Nicquevert and Boujut (2013) used egocentric vision technology to capture the actions of subjects from their visual point of view using wearable camera sensors. They applied this paradigm to achieve activity monitoring for the clinical evaluation of the impact of the disease on persons with dementia. The identification of the patient's position is the most important factor. Location estimation from cameras distributed at home is not sufficiently precise, due to the presence of others, whereas location estimation from egocentric video adds value. The indoor environment is modelled by a set of known places and a 3D model that localizes the subject from the wearable camera. The approach combines interest points and structure from camera vision to recognize the activity, for analysis of the behaviour and the sequence of tasks executed by the subject. For this work, a dataset containing the eye locations of the person performing the motion is recorded, to compare with the gaze coordinates of people watching these videos; two points per frame are recorded (30 images per second). The subjects were asked to execute specific activities, such as "preparing a meal", using different items placed in front of them. In total, 17 videos of 4 min each were recorded from different participants. The relation between the Actor's and Viewer's saliency maps is based on the existence of a time shift of around 500 ms between the beginning of the activity and the gaze fixation on the target. Hence, given a saliency map of the Viewer, the Actor's saliency map can be predicted by a simple time shift.

Discussion

It is evident from the review that none of the presented projects provides solutions to all the aspects of AALS discussed in this paper. In most studies, the system has been designed on the assumption that the behaviour of the inhabitants will be consistent from day to day and will follow a general pattern. Behaviour models have been developed through deterministic models, probability analysis and other methods based on recorded observations spanning from a few days to weeks. Yet, reflecting on one's own daily activities, such as brushing one's teeth, one does not behave in exactly the same order and for the same duration as the day before, and is unlikely to do so the next day: one might brush longer, use floss, change the toothpaste, or rinse the mouth three times instead of four. Researchers find interesting trends within their datasets for building "simulation models" consisting of designed irregular patterns of daily activities, yet these designed irregular patterns are not performed by participants as intended. Hence, much of the conducted research touches the subject of support for the elderly only at the surface and lacks deeper investigation of dynamic irregular pattern identification for activity recognition. Consequently, a set of challenges can be set for future research to effectively address such irregularity in behavioural models. Besides health monitoring, one important aspect often ignored is addressing the entertainment needs of these people, which is equally important for their well-being (Alm et al. 2009). The elderly can improve their quality of life through the support of entertainment, making their lives more enjoyable (Alm et al. 2007). It has been reported that multimedia-enabled entertainment tools can promote effective treatment plans for the elderly with memory problems (Alm et al. 2009; Tamura et al. 2004).
However, further study is required to reach and prove a scientific conclusion. Such studies also need to identify the requirements of an elderly entertainment support system from the perspectives of both the elderly and the caregiver, which is a challenging task. There is substantial literature addressing elderly monitoring from different perspectives, such as providing robotic assistance (Montemerlo et al. 2002; Gross et al. 2011), supporting reminding services (Si et al. 2007), delivering information services (Fink et al. 1998; Chang et al. 2009), health monitoring systems (Ohta et al. 2002; Gupta et al. 2007) and health smart homes (Le et al. 2008). Only a few works discuss entertainment for elderly people (Matsuyama et al. 2009; Tamura et al. 2004); however, these works do not address, for example, the requirements of an entertainment support system, which is claimed to have a positive effect on elderly life, due to a lack of information on recreational activities for the elderly. Hence, there is a need to study everyday activities that include entertainment requirements for the elderly and to provide a system architecture supporting these requirements in conjunction with support for everyday living activities. The following sub-sections illustrate the remaining challenges in detail.

Commercial challenges

There are many barriers to technology uptake in the smart home environment, especially for elderly people with specific needs such as dementia or Alzheimer's disease. There is a lack of a suitable outcomes framework to validate installations, as well as to manage the whole process of assessing, "prescribing" and delivering technological solutions to meet specific needs. Limited experience with tele-care technology initiatives has demonstrated that pilot projects do not necessarily lead to wide-scale technology application. There is a lack of commercial providers of smart home solutions for people with special needs, since most of the required skills exist in academia and with others outside the commercial environment (Linskell 2011).

Technological challenges

One major challenge in home assisted technology relates to the continuous identification of the subject's vital signs and health conditions via wearable devices (Chan et al. 2008, 2009). The challenge concerns the acceptability, durability, ease of use, communication, and power requirements of these wearable devices. For instance, such devices need not only to provide vital sign measurements, but also to provide an assessment of the subject's condition close to a doctor's assessment when examining a patient. They also need to be versatile in design, with minimal weight, skin effects, and burden on the subject's everyday activities. Moreover, battery life and communication capability should allow operation for days or weeks without recharging. Additionally, they should be fault tolerant, with high resistance to impact, heat, cold, and water. Combining all these requirements in wearable devices is highly challenging for sensor technology developers; if achieved, it would boost home assisted technology systems to a new dimension. Moreover, standards specifying the elements of assistive living technology are almost unavailable to system developers.
Consequently, interoperability among system components, from sensors and communication protocols to decision support and the subject interaction method or language, is not maintained, and every system is tied only to its developer's initiatives. The availability of such standards would help system designers integrate their efforts and provide the market with the devices and systems needed to meet the requirements defined for the subject.

Social challenges

Elderly people in general are often consciously aware of their privacy and of possible intrusion. Acceptance of AALS by the elderly can, therefore, be challenging, as the system may be perceived as intrusive. Most of the reviewed research appears to ignore this and assumes that users will accept the system as designed. With limited literature and surveys available on user acceptance among monitored subjects, this assumption is not always well founded. Acceptability is culture dependent and will vary from one society to another. Gender and age have been found to influence people's perception of space (Mourshed and Zhao 2012), which may also affect the acceptability of a system, in particular where behavior is continuously monitored. A significant challenge for system developers is, therefore, identifying the level of user acceptance.

Conclusion

The ambient assisted living systems reviewed here were aimed at supporting the elderly in living an independent life, helping caregivers, friends and family, and avoiding harm to the patients. Findings from our work suggest that most frameworks focused primarily on activity monitoring for assessing immediate risks, while the opportunities for integrating environmental factors into analytics and decision-making, in particular for long-term care, were often overlooked. The potential of wearable devices and sensors, as well as distributed storage and access (e.g. cloud), is yet to be fully appreciated. Advances in low-cost embedded computing and the miniaturization of electronics have the potential to drive significant future developments in the area. There is a distinct lack of strong supporting clinical evidence from the implemented technologies. Socio-cultural aspects such as divergence among groups, and the acceptability and usability of AALS, were also overlooked. Future systems need to look into the issues of privacy and cyber security.
Conducting Systematic Outcome Assessment in Private Addictions Treatment Settings

Systematic outcome assessment is central to ascertaining the impact of treatment services and to informing future treatment initiatives. This project was designed to be conducted within the clinical operations of 4 private addictions treatment centers. A structured interview was used to assess patients' alcohol and other drug use and related variables (on treatment entry and at 1, 3, and 6 months following treatment discharge). The primary outcomes were percentage of days abstinent (PDA) from alcohol and drugs, PDA from alcohol, and PDA from other drugs. Collateral reports during follow-up also were gathered. A total of 280 patients (56% men) across the 4 programs participated. Percentage of days abstinent for each outcome increased significantly from baseline to the 1-month follow-up assessment, and this change was maintained at the 3- and 6-month follow-up assessments. Collateral reports mirrored the patient follow-up reports. Secondary outcomes of patient ratings of urges/cravings, depression, anxiety, and general life functioning all indicated significant improvement from baseline over the course of the follow-up. The results suggest the feasibility of conducting systematic outcome assessment in freestanding private addictions treatment environments.

Introduction

Systematic outcome assessment is central to determining the impact of clinical services provided and to informing future treatment activities and initiatives. 1,2 According to Filstead, 3(p249) "accountability is essential to the delivery of high-quality clinical care". In response, there has been an ongoing demand not only to demonstrate that treatment services are providing benefit but also to ensure that this information is being communicated to the community and to policymakers. 4 The desire for outcome data comes from a variety of sources. 5-7 Direct treatment providers, for example, desire feedback on outcomes during and following their clinical interventions with patients. Program managers desire such information to identify what program components provide benefit and which may not and to inform the development of new treatment initiatives. Insurers seek data provided through outcome assessments to help ensure that their health service expenditures are yielding benefits for their enrollees. Finally, individuals considering or seeking clinical services desire such information to inform their decision making on where to turn for clinical services. This article is a description of the development and implementation of an outcome assessment project at 4 private addictions treatment centers. The collection of these data, gathered at baseline (ie, treatment entry) and through 6 months postdischarge from treatment, was intended to serve multiple aims. First, it was intended to fully describe patients' pretreatment use of alcohol and other drugs and their functioning in a variety of domains (including urges/cravings, self-help group involvement, legal involvement, depression, anxiety, and overall life functioning). These data also were intended to be useful in the prediction of treatment involvement (ie, length of stay in treatment, regular versus irregular discharge) and treatment outcome. Second, the follow-up data were intended to permit assessment of patients' posttreatment alcohol and drug use and functioning in these same domains.
It was anticipated that the information gathered would be useful in documenting posttreatment functioning and, as warranted, shaping the nature of treatment provided to patients. The primary goal of this article is to provide data on the outcomes for patients receiving treatment in these more intensive treatment settings (residential, day hospital, intensive outpatient). A secondary goal is to disseminate the methods and procedures that we followed in completing this project to provide a foundation for others conducting addiction treatment evaluations in their own clinical settings. Methods The following sections describe (1) the process of developing the outcome assessment measure and (2) the implementation of the measure across multiple treatment sites. 2 Part 1-development of the outcome assessment measure The impetus for the development of this outcome assessment pursuit emanated through the clinical/administrative offices of a corporation operating a number of private addictions treatment programs throughout the United States. Arrangements were made with 2 external, experienced treatment outcome researchers to collaborate on the project. One of the researchers visited 2 of the treatment sites scheduled to participate in the project to become more familiar with the programs and meet the staff members. Subsequent interactions over a period of several months, predominantly involving the researchers and the organization's Chief Executive Officer/Chief Medical Officer, focused on identifying the domains of patient functioning to be assessed and the points in time for such assessments to occur. This effort was guided by the perspective that outcome assessment should go beyond the quantity and/or frequency of alcohol and other drug use. 8 The selection of patient functioning domains to be assessed beyond alcohol and other drug use was guided by existing literatures on the predominant comorbid conditions presented at treatment entry (eg, anxiety, depression) 9 and on factors associated with posttreatment relapses, such as urges/cravings, negative affect, and interpersonal stress. [10][11] In addition, an effort was made to capture an estimate of overall life functioning. An emphasis was placed on using assessments that were psychometrically sound. The final measure was administered as a structured interview. The measure was scheduled for administration at baseline (ie, treatment entry) and at 1, 3, and 6 months following discharge from treatment. The interview format was used because the follow-up interviews were to be conducted via phone contacts. The baseline interview (at the treatment site) was administered in a similar fashion to maintain consistency in the data collection approach. (A comparable version was developed for interviewing collateral informants, or patient-identified significant others, during follow-up.) Details on the final measure and the domains of assessment are provided below. Demographic variables 1. Basic demographic information (eg, age, sex, race/ethnicity, marital status, education, and employment status) was collected as part of the baseline assessment. Only information on current marital and employment status was gathered at the follow-up assessments. Alcohol and other drug use 1. Alcohol use-4 questions from the Quick Drinking Screen (QDS) 12,13 were used to assess alcohol consumption. For the baseline interview, the questions covered the 30-day period prior to the last use of alcohol or other substances before admission to treatment. 
For the follow-up interviews, the assessment window for the alcohol use questions was the 30 days prior to the targeted assessment point (months 1, 3, and 6). The questions covering any given time period assessed yielded several outcome variables, including percentage of days abstinent, drinks per drinking day (the average number of drinks consumed on days when drinking occurred), drinks per week, and percentage of days heavy drinking (defined as days when 5 or more standard drinks were consumed for men or 4 or more standard drinks for women). A "standard drink" was a beverage containing 0.6 oz alcohol, such as a 12-oz bottle/can of regular 5% alcohol beer, a 5-oz glass of regular (12%) wine, 1.5 oz of 80-proof hard liquor either straight or in a mixed drink, or a 12-oz wine cooler. When evaluated in relation to the psychometrically well-established Timeline Followback (TLFB), 14 which has excellent reliability and validity, the QDS was found to yield very similar summary drinking variables. 12,13 2. Other drug use-2 questions were used to assess frequency of drug use other than alcohol. Drug use was operationalized to include nonprescribed medication use; prescribed medication use was not assessed as it was not a focus of the treatment programs. These 2 questions yielded an index of percentage of days using drugs during the time periods covered in the respective follow-up assessments. As with the alcohol use questions, the baseline interview covered the 30-day period prior to the last use of alcohol or other drugs before admission to treatment and the follow-up interviews assessed 30 days prior to the targeted assessment point. In addition, participants who reported any drug use during the assessment window were asked to identify which drugs/drug types have been used ( ). Finally, a pair of questions were asked to ascertain the number of days in the 30-day period the participant was totally abstinent, that is, abstinent from alcohol as well as any other substance use. 3. Urges/cravings-2 questions were used to assess urges/ cravings for alcohol or other drugs. The questions selected were modified items from the Minnesota Cocaine Craving Scale. 15 The first question asked for an estimate of how frequently the participant experienced craving for alcohol or other drugs during that week (using a 7-point Likert scale rating). The second question, using a 5-point Likert scale, asked how strong, on average, were these urges or cravings for alcohol or other drugs during that week. The time frame for the questions at baseline was the week prior to the last use of alcohol or other drugs before admission to treatment, 3 and the time frame for the follow-up assessments was the 7-day period at the end of the targeted assessment point (months 1, 3, and 6). The coefficient α for the original measure was .83. 15 The coefficient α for the 2-item version used in this study, among participants responding to both items, was .71 for the baseline assessment. 4. Readiness to change-motivational readiness to change is thought to be central to the behavior change process. At the baseline assessment only, a "readiness ruler" measure was used to assess readiness to change. On this measure, the patient indicated on a figure the extent to which he or she was ready to change his or her substance userelated behavior. The figure represented a 10-point scale (1 = not ready to change; 5 = unsure about changing; 10 = ready to change). Self-help group involvement 1. 
Self-help group involvement-data on participants' frequency of attendance at self-help group meetings (eg, Alcoholics Anonymous, Cocaine Anonymous, and Narcotics Anonymous) were collected for each time frame assessed. The baseline assessment covered the 30-day period prior to the last use of alcohol or other substances before admission to treatment, and the follow-up assessments covered the 30 days prior to the end of the targeted assessment point. The participant was asked to indicate the number of days per week that such meetings were attended. Psychological status 1. Depression-2 items were included to assess depressed mood. On the first question, the participant indicated, on a 3-point Likert scale, the extent to which he or she felt sad, blue, or depressed during the period addressed. If any such feelings were reported, a follow-up question, adapted from Zimmerman et al, 16 was administered to obtain an estimate of the level of severity of the sad, blue, or depressed feelings (on a 4-point Likert scale). The latter item was found by Zimmerman et al to possess strong reliability and validity; the test-retest reliability of the item was high (.76) and it correlated significantly with the total scores and individual item scores of longer measures of the same constructs (P < .001). In this study, the time frame for the questions at baseline was the week prior to the last use of alcohol or other substances before admission to treatment, and the time frame for the follow-up assessments was the 7-day period at the end of the targeted assessment point (months 1, 3, and 6). The coefficient α for the 2-item assessment used in this study, among participants responding to both items, was .72 for the baseline assessment. 2. Anxiety-2 items from the Overall Anxiety Severity and Impairment Scale (OASIS) 17,18 were used to assess anxiety, tapping into frequency and severity of symptoms. The first addressed frequency of feeling anxious (on a 5-point Likert scale) and the second concerned how intense or severe was the anxiety experienced (on a 4-point Likert scale). The same past week time frame as described above for depression was used for this assessment. The OASIS has excellent test-retest reliability (.82) and excellent convergent validity with other measures of anxiety; coefficient α was .80. 19 The coefficient α for the 2-item assessment used in this study, among participants responding to both items, was .85 for the baseline assessment. Legal involvement 1. Arrests-2 questions were used to assess whether the participant had been arrested (aside from traffic tickets) in the period covered by the interview, and if so, whether the arrest was alcohol or drug related. For the baseline interview, the period was the past 90 days; for the 1-month interview, the period was the past 30 days (ie, the period since treatment discharge); for the 3-month interview, the period was the past 60 days; and for the 6-month interview, the period was the past 90 days. Overall life functioning 1. Overall life functioning-general quality of life was assessed using an item that captured the participant's rating of his or her overall quality of life during the past week time frames described above. The item, adapted from the work of Zimmerman et al, 16 yielded a rating of the participant's overall quality of life (using a 4-point Likert scale). This single-item self-report measure of overall quality of life has been found to be reliable and valid. 
In prior research, its test-retest reliability was high (.81) and scores on the item correlated with the total scores and individual item scores of longer measures of the quality of life construct (P < .001). 16 Part 2-implementation of the assessment measure Treatment sites. Four private addictions treatment programs located in the United States served as recruitment sites. Each of the sites offered multiple levels of care, including residential treatment, day treatment (5 hours per day, 5 days per week), and intensive outpatient treatment (3 hours per day, 3 to 4 days per week). Patients were enrolled into the outcome assessment project at the time of their first treatment contact with the programs, which generally occurred at the residential or day treatment levels of care. Most patients attending day treatment or intensive outpatient treatment resided in supportive housing. Two of the programs were located in California, the third in Florida, and the fourth in Tennessee. Patients frequently moved through more than one level of care while in the active treatment phase. The sites were selected because they differed Procedure. Potential participants were recruited from sequential admissions to each treatment site. They were approached at the time of admission or shortly thereafter by a staff member trained in the protocol. This individual described the project and answered any questions that arose. Patients interested in participating were provided with an Information Sheet on the project and were asked to read and sign the project Consent Form. The project procedures were reviewed and approved by an oversight independent review board (Aspire Institutional Review Board Protocol IRB-EBH-001). Participants were assessed on 4 occasions. The first assessment, called the baseline assessment, occurred in person on treatment entry (ideally within 3-5 days of entry to treatment). The next 3 assessments occurred by telephone at 1, 3, and 6 months following discharge from treatment. Treatment discharge was operationalized as program discharge or transition from intensive treatment (3 treatment days per week) to a lower intensity of treatment. The telephone follow-up assessments were performed by a research staff member not affiliated with any of the programs. Each assessment entailed administration of the structured interview. A $10 gift card (Amazon, Target, Starbucks) was provided for completing the 1-and 3-month telephone follow-up interviews after treatment discharge, and a $25 gift card was provided for completing the 6-month followup. (There was no compensation for completion of the baseline interview.) On occasions when a follow-up interview for the earlier time frame was not administered (eg, the participant was reached for the 3-month follow-up but missed the 1-month follow-up), the current interview was administered, followed by the alcohol and drug use, self-help group involvement, and legal involvement portions for the previous interview period. As part of the baseline assessment, participants were asked to complete a Locator Form, which included contact information for the participant. Participants also were asked to identify 2 individuals who could always get a message if contact with the participant be lost during the follow-up. In addition, as a condition of participation, each participant was required to identify a "collateral," such as a friend or family member, who would be able to provide another perspective on how things have been going for the participant. 
The questions asked of the collateral were similar to those asked of the participant so that an index of the validity of participant self-reports could be calculated. Collaterals also were asked how much contact they had with participants during the reporting period, their relationship to them, and their degree of confidence in the data they were providing. A given participant's collateral was scheduled to be contacted for 1 randomly determined follow-up assessment (ie, at the 1-, 3-, or 6-month follow-up assessment point). Efforts to reach the collateral continued for 1 month following the target date. If they were unsuccessful, then efforts were reinitiated at the next follow-up point (in the case of the 1- and 3-month contacts). Participants provided written permission before any given collateral was contacted. On discharge, a Medical Record Review Form on each participant was completed to collect information on the treatment period and type of discharge. Discharges were classified as regular (following successful completion of the program or transfer from intensive treatment [3 treatment days per week] to a lower intensity of treatment), administrative (discharge due to patient infraction of treatment program rules), or against medical advice (AMA). It should be noted that the administration of the evaluation measure required time, effort, and resources. For the baseline assessment, a clinical staff member typically devoted 30 to 45 minutes of time and effort. This included describing the project to the patient, answering any questions, obtaining written informed consent, completing the locator form and collateral contact information paperwork, and administering the measure itself. The telephone follow-up interviews (each took approximately 10-20 minutes to complete) were completed by a research staff member operating in the central offices of the operating company and thus external to the actual treatment sites. Depending on the number of follow-up interviews with the participants and their collaterals scheduled for a given week, which was somewhat variable over time, this staff member typically devoted 60% to 100% weekly effort to the project. This individual also was responsible for scanning the data forms so that the data could be entered into a spreadsheet. The subsequent data entry, data cleaning, and data analyses were performed under the supervision of the collaborating researchers, who developed analysis plans in consultation with the program administrators.

Study retention

Follow-up rates for the combined sample for the 1-, 3-, and 6-month assessments were 68%, 61%, and 60%, respectively. Most (78.9%, n = 221) of the participants completed at least 1 follow-up interview. Data for all 3 follow-up assessment points were available for 45.7% of the sample (n = 128). There were few differences at baseline among participants who completed varying numbers of follow-up assessments in demographics, physical/mental health, and substance use. In total, 85% of the treatment discharges were classified as regular, 6.4% were administrative discharges, and 8.6% were classified as AMA. Hierarchical regression analyses predicting treatment duration revealed 3 significant independent baseline predictors of longer treatment duration: being unemployed, lower ratings of readiness to change, and using both alcohol and other drugs (as opposed to using only alcohol or only other drugs).
Logistic regression analyses did not reveal any baseline variables that significantly predicted the type of discharge.

Alcohol and other drug use

The percentage of days abstinent for each primary outcome (percentage of days abstinent from alcohol, percentage of days abstinent from other drugs, and percentage of days abstinent from alcohol and other drugs) for the baseline and follow-up periods is displayed in Figure 1.

Figure 1. Percentage of days abstinent (PDA) from alcohol, drugs, and alcohol and drugs at baseline and at 1-, 3-, and 6-month follow-up assessments for the combined sample.

Other domains of functioning

Friedman tests were performed on the secondary outcome variables of urges/cravings (composite score), depression (composite score), anxiety (composite score), and overall life functioning. In each case, the Friedman test was significant (χ2(3) = 152.44, P < .001, for urges/cravings; χ2(3) = 89.62, P < .001, for depression; χ2(3) = 90.05, P < .001, for anxiety; and χ2(3) = 163.83, P < .001, for overall life functioning). Post hoc analyses with Wilcoxon signed rank tests were conducted with a Bonferroni correction applied and revealed that the report on each outcome at each follow-up was significantly different from baseline (all P's < .001). Regarding this, follow-up reports of urges/cravings, depression, and anxiety significantly decreased from baseline and reports of general life functioning significantly increased from baseline. Furthermore, improvements were maintained throughout the 6-month follow-up.

Self-help group involvement

A repeated-measures ANOVA (with a Greenhouse-Geisser correction for sphericity) of self-help group involvement over time was significant, F(2.55, 323.64) = 95.93, P < .001. Bonferroni-adjusted pairwise comparisons revealed that self-help group attendance at each follow-up was significantly higher in comparison with self-help attendance prior to baseline (all P's < .001).

Collateral reports

Collateral data were collected for 55% (n = 154) of the 280 participants. The relationship of the collateral to the participant was most frequently as a parent (46%) or spouse/partner (30%); 9% reported the relationship as a friend, 5% as a counselor, 5% as a sibling, and the remaining 5% as another relationship (ie, ex-partner, child, another family member). The breakdown of their frequency of contact with the participant was as follows: 57% daily, 6% 4 to 6 times a week, 22% 1 to 3 times a week, 4% 2 times a month, <1% monthly, and 10% less than monthly. The collaterals also provided a rating of their confidence in the accuracy of the drinking and drug use information they provided, on a 5-point scale ranging from 1 = a little confident/mostly guessing to 5 = very confident/very accurate. The mean confidence rating was 4.3; 78% provided a confidence rating of 4 or 5. As intended, the collection of collateral data was evenly distributed across the 1-, 3-, and 6-month follow-up points (32.4%, 34.3%, and 33.3%, respectively). For analytic purposes, data were collapsed across the 1-, 3-, and 6-month assessments, as each participant had only 1 collateral report. Participant and collateral reports were significantly correlated (see Table 2); the correlation was .544 (P < .001) for percentage of days abstinent from alcohol and other drugs, .617 (P < .001) for percentage of days abstinent from alcohol, and .276 (P < .01) for percentage of days abstinent from other drugs.
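To illustrate how this kind of participant-collateral agreement check can be computed, the following is a minimal, hypothetical Python sketch; the arrays, sample size, and values are placeholders rather than the study data. It calculates the Pearson correlation between paired percentage-of-days-abstinent reports and tallies how often each source reported the higher (more abstinent) value.

```python
import numpy as np
from scipy import stats

# Hypothetical paired reports of percentage of days abstinent (one collateral per participant)
participant_pda = np.array([100, 90, 100, 73, 100, 50, 97, 100, 80, 100], dtype=float)
collateral_pda = np.array([100, 93, 100, 70, 100, 60, 100, 100, 77, 100], dtype=float)

# Agreement expressed as a correlation between the two report sources
r, p = stats.pearsonr(participant_pda, collateral_pda)

# Direction of discrepancies: which source gave the larger (more positive) report
diff = participant_pda - collateral_pda
pct_same = 100 * np.mean(np.isclose(diff, 0))
pct_participant_higher = 100 * np.mean(diff > 0)
pct_collateral_higher = 100 * np.mean(diff < 0)

print(f"r = {r:.3f} (P = {p:.3f})")
print(f"same: {pct_same:.0f}%  participant higher: {pct_participant_higher:.0f}%  "
      f"collateral higher: {pct_collateral_higher:.0f}%")
```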
To examine the direction of discrepancies, participant and collateral reports were compared on the dimension of which source provided the larger (in the case of percentage of days abstinent, the more positive) report. These data, as shown in Table 3, revealed that collateral reports generally mirrored those provided by participants. For all substance use variables, most (>75%) of the reports were the same between collaterals and participants. When discrepancies did occur, there was no evidence of systematic participant underreporting or overreporting. Collateral confidence ratings in the accuracy of the information they were providing on the participant's alcohol and other drug use were not associated with the degree of discrepancy between collateral and participant reports.

Discussion

The primary goal of this project was to provide data on the outcomes for patients receiving treatment in intensive treatment settings (residential, day hospital, intensive outpatient). Regarding this, the participant and collateral interviews conducted as part of this outcome assessment effort showed significant increases in each percentage of days abstinent outcome variable from baseline to the 1-month follow-up, improvements that were maintained at the 3- and 6-month follow-up contacts. A corresponding pattern of findings emerged for a range of secondary outcome variables reflecting other dimensions of function, including urges/cravings, depression, anxiety, and overall life functioning. The presentation of data in this report covered core primary and secondary outcome variables but did not include all of the information gathered from participants at baseline and follow-up. Instead, the present analyses are representative of the larger array of variables potentially available for evaluation. Furthermore, we have highlighted outcomes overall, not specific to the individual treatment sites. This was a function not only of small sample sizes but also of a broader focus on the range of outcomes provided through use of the outcome assessment measure that was developed. With the continued administration of the measure at baseline and follow-up, it would be possible to look at program site-specific outcomes in similar detail. The collateral data suggested that the participants tended to provide accurate self-reports of their posttreatment alcohol and other drug use. These results also indicate that collateral informants are a good additional source of data regarding participants' posttreatment substance use. The finding that among these private treatment programs there was good correspondence between participant and collateral reports of alcohol and other drug use is consistent with the broader literature. 20 This degree of correlation was particularly strong for reports of abstinence from alcohol and abstinence from alcohol and other drugs combined. Although the relationship between participant and collateral reports was significant for abstinence from other drugs, the correlation was not as large. It is possible that alcohol consumption (which was a component of each of those 2 abstinence categories) was more visible and behaviorally salient to the collaterals, elevating the correspondence for those 2 variables. Furthermore, it is noteworthy that when discrepancies occurred in reports of participants' substance use, participants and collaterals were equally likely to provide larger (ie, more positive) reports.
Overall, these results indicate that collateral informants are a good additional source of data. A secondary aim of this article was to describe the process of developing this outcome assessment project for application in these private addictions treatment programs. The protocol implemented provided multiple types of information relevant to describing the population of individuals admitted for treatment, their treatment involvement (including days in treatment and type of discharge), and their posttreatment functioning. The primary outcome variables reflected alcohol and other drug use. Secondary outcome variables of interest included urges/cravings, depression, anxiety, self-help group involvement, and overall life functioning. The results suggest that it is feasible to implement an outcome assessment, including patient follow-up, within freestanding private addictions programs. As a result, the programs obtained detailed information on the posttreatment functioning of their program participants. Furthermore, there may also have been potential benefit to the participants, in that previous research has shown that patients benefit from the contact and feedback about their posttreatment efforts at sustaining abstinence and improving their overall life functioning. 21,22 An advantage of the measure used in this outcome assessment is that it is amenable to modification as a function of program needs or interests. For example, there might be a desire to obtain more detailed information on particular drugs of abuse, such as opiates, as a function of the drug use perceived by staff during recent program admissions. Another possibility is studying the extent to which particular personality characteristics might predict the duration of treatment stays and the type of discharge. In that context, the baseline assessment could be modified to include the measurement of such variables. There were several findings from this outcome assessment that might be pursued in future research. For example, longer treatment durations were predicted by being unemployed, reporting lower readiness to change, and use of both alcohol and other drugs. It would therefore seem worthwhile to explore how these characteristics contributed to longer stays in treatment. Also, participants not contacted at the 3-month follow-up, compared with those contacted, reported stronger urges/cravings at baseline, and participants not contacted at the 6-month follow-up, compared with those contacted, had greater depression at baseline. For purposes of further research, alternative follow-up strategies might need to be implemented with individuals with similar characteristics, such as having shorter intervals between follow-ups. As with any such project, there are limitations that should be noted. One limitation is that we did not track the participation rate. All consecutive admissions to the treatment programs were approached to participate. Although clinical staff anecdotally reported only rare declinations, we cannot empirically evaluate differences that may exist between the participants and the few who declined. Also, not tracked in this study was a classification of the regular discharges into those who were fully discharged from the program versus those who were discharged from the program into a lower level of care. Thus, it is not possible to ascertain if there were any differences in the outcomes for patients with these 2 types of regular discharges.
In terms of follow-up contacts with participants, data for all 3 follow-up points were available for 45.7% of the sample. Although 78.9% completed at least 1 follow-up interview, efforts might be devoted in future applications of this protocol to increasing the rate of data collection across each follow-up point. Furthermore, there was only a 58% contact rate with collaterals. It could be the case that a greater emphasis on collateral contacts could raise this figure, including having participants identify more than one potential collateral contact and providing gift card payments to collaterals for their participation. Finally, this study did not include biological verification of alcohol and drug use status, such as urine screens, breath tests, and blood tests. Finally, there are several considerations if one were to implement the protocol described in this project at another treatment site. Most central is the availability of resources, such as staffing for conducting the baseline assessment and the follow-up interviews. In addition, resources would be needed for data entry, data analyses, and report development, along with participant remuneration (if included). To the extent that resources are limited, which often will be the case, one possibility is to implement the protocol for a limited period of time, as was the case in this study, as opposed to ongoing. Also, it could be that a decision is made not to include follow-up contacts with the collaterals. Another possibility might be to collaborate with colleagues from a local college or university in the conduct of the project. This might benefit both parties, with the treatment program obtaining the resource of students conducting the interviews and managing the data collection and the students obtaining experience in the conduct of research. Taken together, the results of this study suggest that it is feasible to implement outcome assessment, including a baseline assessment and follow-up, within private freestanding addictions treatment programs. The continuing conduct of such evaluations ideally will benefit multiple constituencies, including treatment providers, treatment program managers, program funders, health care insurers, and the general public more broadly.
Paranasal sinuses and human identification

The characteristics of the paranasal sinuses (maxillary, frontal, sphenoid and ethmoid sinuses) are information of great relevance to Forensic Sciences, as their images can be used for human identification purposes. Due to their particularities, the paranasal sinuses provide valuable information for human identification, reducing the risk of errors during investigation by experts. Such structures are visualized in several imaging exams. This study evaluates the possibilities of human identification through the analysis of the paranasal sinuses, as well as the effectiveness of their analysis in estimating sex, age and ancestry. A comprehensive search was performed in the PubMed, SciELO, LILACS, and Web of Science databases. As inclusion criteria, texts that addressed the subject were selected. Imaging analysis of the frontal, maxillary and sphenoid sinuses is a useful tool for human identification, as well as for estimating sex, age and ancestry, and usually provides a high level of accuracy. Regarding the ethmoid sinus, research is indicated to verify its use in human identification, as no publications on this specific subject were found. Additional research must be carried out (especially three-dimensional analysis of the paranasal sinuses) to develop standardized protocols, improving the work of experts and helping justice and society.

Introduction

The identity of a person corresponds to a set of physical, functional or psychological, and normal or pathological characteristics, making the individual identical only to himself. The identification process is complex, systematic, and organized, and its main objective is to determine the identity of the individual in question. It is a comparative process, meaning that it necessarily needs to compare data known to come from an individual with the corresponding data from the subject to be identified (Neves et al., 2021; Andrade et al., 2021; Barros et al., 2021a; Gioster-Ramos et al., 2021; Kuhnen et al., 2021; Barros et al., 2021b; França, 2017; Xavier et al., 2015; Nikam et al., 2015). However, this is a challenging task, although one of great importance, because besides dealing with humanitarian issues it also directly impacts civil and criminal proceedings (Dostalova et al., 2012). Decomposing, skeletonized, or charred corpses, especially in large-scale accidents, usually need to be identified, requiring the use of anthropological methods (Singh et al., 2013). This occurs particularly when there is no suspicion about the identity of the deceased, which often requires assessing the biological profile of the subject. In the absence of ante-mortem data leading to personal identities, the biological profile helps to reduce the number of suspects investigated using identification methods (Passalacqua, 2009; Klales, Kenyhercz, 2015). Identification methods have a comparative character, matching ante-mortem and post-mortem data. According to Interpol (2018), the primary methods are fingerprint analysis, DNA verification, and Forensic Dentistry (Fernandes, 2010). However, for these methods to be valid they must fulfill some requirements such as immutability, uniqueness, practicability, permanence, and classifiability (França, 2017). The use of anthropological methods is necessary when bodies that require identification are found (Singh et al., 2013) but there is no suspicion about the identity of the subjects.
To determine individual characteristics that aid the identification process, it is useful to analyze the bones of the skull, pelvis, and femur, as well as the sella turcica, mastoid cells, and paranasal sinuses (Cox et al. 2009). Several imaging tests such as Waters and panoramic radiographs, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Cone-Beam Computed Tomography (CBCT) can provide visualization of the paranasal sinuses, but recently CBCT has become an adequate imaging method for analyzing sinuses due to its higher resolution, lower radiation, and lower cost compared with traditional CT (Scarfe et al., 2006). Furthermore, due to the characteristics of their contours and the comparison between the data obtained by imaging methods, paranasal sinuses can provide information for forensic identification (Lee et al., 2004). This study evaluates the possibilities of human identification through the analysis of paranasal sinuses, as well as the effectiveness of their analysis in estimating sex, age, and ancestry.

Methodology

This study was based on a descriptive literature review with a qualitative approach (Pereira et al., 2018), constituting an extensive study and allowing the inclusion of experimental and non-experimental research and theoretical and empirical literature to deepen the knowledge on the subject studied. A comprehensive search was performed in the PubMed, SciELO, LILACS, and Web of Science primary databases. The descriptors used were "paranasal sinus", "frontal sinus", "maxillary sinus", "ethmoid sinus", "sphenoid sinus", "human identification", "radiographic images", "cone-beam computed tomography", "three-dimensional images", "sex", "age", and "ancestry". Initially, the descriptors were searched individually, and then crossings were made between them. The sample was selected by including articles, dissertations, and theses that were available in full, published in Portuguese or English, at any period, and referred to the topic studied. Exclusion criteria were texts published in languages other than English and Portuguese and not related to the topic studied. The research was classified and evaluated, ending with the interpretation of results and synthesis of knowledge. This article also takes a theoretical and reflective approach to the biological profile characteristics that contribute, as an auxiliary forensic method, to human identification through the analysis of paranasal sinuses.

Anatomy and embryology of paranasal sinuses

Positioned in the bones that comprise the nasal cavity, the paranasal sinuses are named according to their anatomical relationships as the maxillary sinus, ethmoid sinus, frontal sinus, and sphenoid sinus (Keir, 2009; Gallup, Hack, 2011; Ebrahimnejad et al., 2016). As they are filled with air, they are classified as pneumatic cavities, lined by a mucoperiosteal membrane covered by ciliated pseudostratified columnar epithelium, determining a direct or indirect communication with the respiratory system. They help to ensure the harmony of facial growth and lighten the skull, protecting infraorbital and intracranial structures, protecting against injuries, and partially neutralizing impacts (Batista et al., 2011). For Orhan et al. (2017), due to their location, the sinuses contribute to the development of facial structures, jaws, and upper airways; the humidification and warming of inspired air; thermal insulation; increased voice resonance; reduction of cranial structure weight; and the expansion of olfactory surfaces.
Additionally, sensitive structures are insulated from the rapid fluctuations of temperature in the nose. The paranasal sinuses also help to improve nasal function and nitric oxide production, which contribute to nasal immune defense (Keir, 2009). The nasal cavity is a roughly cylindrical midline airway that extends forward from the wing of the nose to the posterior nostril. On each side, it is surrounded by the maxillary sinus and covered by the frontal, ethmoid, and sphenoid sinuses, from front to back. Although it seems simple, the nasal anatomy is composed of complex and subdivided airways connecting to the sinuses. The four paired paranasal sinuses, lined with pseudostratified columnar epithelium, are: 1. the maxillary sinus, the largest sinus, located in the maxilla; 2. the frontal sinus, located above the eyes, inside the frontal bone; 3. the ethmoid sinus, several discrete air cells in the ethmoid bone between the nose and eyes; and 4. the sphenoid sinus, located within the sphenoid bone (Cappello et al., 2020).

Maxillary sinus

In the embryonic period, the alveolar processes are close to the orbital edge, which contains tooth germs. The lateral wall of the nasal cavity is membranous, starting an invagination in the middle meatus region and forming a sinus sac toward this maxillary region, located medially to it. At birth, the maxillary sinus (MS) is small and located medially to the orbit. When children are born, they have a rudimentary slit-shaped ethmoidal labyrinth and maxillary sinus. The MS develops up to fifteen years of age. In the first year of life, the maxillary sinus descends from the orbit without reaching the infraorbital nerve canal. After sinus pneumatization, in the second year, it reaches the canal and surpasses it in the following two years. The entire development of the maxillary sinus depends on tooth eruption and ends when the permanent dentition descends, including the third molar. It is worth noting that the floor of the maxillary sinus in children is higher than the floor of the nasal cavity and as high as the middle meatus. Only at nine years of age are the floors of the maxillary sinus and the nasal cavity aligned (Physiology and Endoscopic Nasosinusal Anatomy, 2011). The dimensions of the MS vary individually, but on average, the base in an adult is 35 mm and the height is 25 mm (Raja, 2009). These measures vary according to age, ancestry, sex, and personal conditions. Maxillary sinuses occur in pairs, located in the body of the maxilla on both sides. They have a square pyramid shape with the base facing the lateral wall of the nasal cavity. The apex corresponds to the junction of the maxillary zygomatic process with the zygomatic bone, in some cases extending into it. The apex is usually located 25 mm from the base. The sides of this pyramid correspond to the following maxillary surfaces: the upper wall or roof of the maxillary sinus corresponds to the orbital aspect of the maxilla, on the floor of the orbit; the front wall corresponds to the anterior surface of the maxilla; the posterior wall corresponds to the infratemporal aspect of the maxilla and separates the sinus from the infratemporal fossa; and the inferior wall or floor of the maxillary sinus corresponds to the alveolar process of the maxilla (Batista et al., 2011). According to Pfaeffli et al. (2007), MS radiography used to recognize individuals can be considered a complementary examination, providing relevant data to professional knowledge when used with standardized methods.
For identification, the morphometric measurements of the maxillary sinus are compared with the shape, size, and contour of the available images. The morphology and metrics of the maxillary sinuses can be used to estimate sex and for human identification (Musse et al., 2009; Musse et al., 2011). The literature reports cases of identification performed by comparing the ante-mortem and post-mortem characteristics of the morphology of the maxillary sinuses (Musse et al., 2011).

Frontal sinus

The frontal sinus (FS) starts to develop in the fourth or fifth week of pregnancy, continuing through adolescence or early adulthood. The FS begins as an insignificant pneumatization in the newborn and is radiologically visible around four years of age. Craniofacial growth is synchronized with the frontal sinus (Duke, Cassiano, 2005). Frontal sinuses are bilaterally divided by a main septum, and there may be additional septa of varying sizes. They originate in the anterior part of the middle meatus (frontal recess), but they can also originate from an anterior ethmoidal cell that invades the frontal bone (Mafee, 1991; Harnsberger et al., 1991; Laine, 1992; Scuderi et al., 1993; Nasosinusal Physiology and Endoscopic Anatomy, 2011). The FS drains through the frontonasal canal to the frontal recess, the stenosis between the frontal sinus and the anterior middle meatus usually located in the anterosuperior portion of the infundibulum, and through the semilunar hiatus to the anterior part of the middle meatus, where it converges with the flow from the ipsilateral maxillary sinus. It can also drain directly into the middle meatus, above the infundibulum (Mafee, 1991; Harnsberger et al., 1991; Laine, 1992; Scuderi et al., 1993). The frontal sinus is undeveloped in 4% of people and its full development occurs between the ages of 10 and 12 (Hungary, 2000). The growth of the skull base and its anterior and middle fossae is centered in the sphenoid and ethmoid bones, and all neural and visceral bone architecture relates to it in anatomy and function. The interrelationships between these elements have been the basis for determining the constitutional type since childhood, and changes in this process can lead to craniofacial dysmorphisms (Mafee, 1991; Harnsberger et al., 1991; Laine, 1992; Scuderi et al., 1993). According to Nikam et al. (2015), both the height and the maximum width of the frontal sinus are unique characteristics of each individual. Silva et al. (2019) reported that the morphological information of frontal sinuses converges between ante-mortem and post-mortem radiographs for both metric and non-metric evidence, allowing subjects to be identified.

Sphenoid sinus

The sphenoid sinus (SS) develops from the third month of pregnancy. During this period, the nasal mucosa invades the posterior part of the cartilaginous nasal capsule, forming a pouch-like cavity. In the last few months of fetal development, the wall around the cartilage ossifies. In the second and third years of life, the cartilage is reabsorbed and the cavity is fixed to the sphenoid body. At around six or seven years of age, the pneumatization of the sphenoid sinus progresses and, by the age of 12, its pneumatization is completed, including pneumatization of the anterior clinoids and the pterygoid process (Cappello et al. 2020).
In the first years after birth, the SS originates from the invagination of the mucosa into the sphenoethmoidal recess. The sphenoethmoidal recess appears lateral to the nasal septum and can sometimes be seen on CT images in coronal slices, but is best seen in sagittal or axial slices (Harnsberger et al., 1991; Scuderi et al., 1993; Zinreich, 1998). Auffret et al. (2016) evaluated the validity of the anatomical visual comparison of CT images of sphenoid sinuses in forensic identification. According to the authors, this sinus is useful because identification accuracy was 100%. Thus, the anatomical individuality of this sinus can be validly applied in forensic contexts for human identification (Capella et al., 2019).

Ethmoid sinus

The ethmoid sinus (ES) originates from the invagination of the mucosa of the middle and superior meatus and is located between the maxillary sinus, the eyeball, and the brain. Its growth is allowed by the fovea ethmoidalis and, depending on the development of the cribriform plate and the roof of the ethmoidal bone at birth, the volume of all its cells is similar to that of the maxillary sinus. Anterior ethmoid cells reach the same height as the superior orbital ridges at five years of age (Endoscopic Nasosinusal Physiology and Anatomy, 2011). There are three to four cells at birth and 10 to 15 cells in adulthood, with a total volume of 2 to 3 ml, located between the eyes. The anterior ethmoids drain into the ethmoid infundibulum, in the middle meatus. The posterior ethmoid sinus drains into the sphenoethmoidal recess in the upper nasal passage. The ES is supplied by the anterior and posterior ethmoidal arteries, which are branches of the ophthalmic artery. The ophthalmic artery is a branch of the internal carotid artery (Cappello et al. 2020). The ethmoid sinus is usually the most complex of the paranasal sinuses, with a highly variable anatomical structure and a close relationship with the orbit and the base of the skull. The first ES cell found is the ethmoidal bulla, which is located behind the semilunar hiatus and in front of the basal lamella. It is a circular structure with its side attached to the lamina papyracea (Kuan, Palmer, 2021). The ethmoid sinus is composed of many thin-walled air cells, some of which may extend forward between the lacrimal sac and the nasal mucosa (Nerad, 2021).

Paranasal sinuses and human identification

Due to their characteristics and contours, paranasal sinuses can provide valuable information for human identification through detailed and clear analyses, reducing the risk of errors by the expert in the investigation process. The radiological assessment of the paranasal sinuses and related structures is designed to accurately describe the anatomical area, confirm any skeletal changes or changes in the sinus mucosa and fluid level, and determine the existence and extent of pathologies (Fatterpekar et al. 2008; Ritter et al., 2011). Several authors have evaluated the maxillary, frontal, and sphenoid sinuses, relating them to possibilities of human identification with extraoral radiographs (Riepert et al., 2001; Pfaeffli et al., 2007; Silva et al., 2009) and CT scans (Perella et al., 2003; Pfaeffli et al., 2007; Tatlisumak et al., 2008; Uthman et al., 2010). Gioster-Ramos et al. (2021) point out that the uniqueness of their structures individualizes people; thus, paranasal sinuses can be used for human identification.
Currently, the use of CT allows three-dimensional analyses, which greatly enriches and increases the resources available to the Forensic Sciences and helps justice and society to establish human identity. Ruder et al. (2012) assessed the reliability of radiological identification through the visual comparison of CT scans of the paranasal sinuses before and after death and confirmed that the visual comparison of CT scans of the skull is a robust and reliable method to identify unknown cadavers. For many years, traditional radiographs were used to study the sinuses (Cagici et al., 2005). However, due to overlapping, two-dimensional radiographic images are difficult to interpret (Liang et al., 2010). Conversely, with technological advances, CBCT has become a valuable method for evaluating the sinuses because interactive, three-dimensional images have a great impact on head imaging, improving the ability of professionals to accurately describe the condition and location of the sinuses (Cakli et al., 2012; Poorey, Gupta, 2014).

Paranasal sinuses for estimating the biological profile: sex, age, and ancestry

The study of anthropometry is essential to clarify issues related to human identification, as it provides linear and angular measurements and measurements of the area and volume of different parts of the body (Garcia, 2014). The anthropometric analysis of bones provides basic characteristics of the individual, such as sex, age at death, ancestry, and height (Fernandes, 2010). In the forensic context, the construction of the biological profile with individual characteristics is important. When unknown bodies are found and taken to the Forensic Medical Institute, forensic anthropologists initially estimate sex and age at death (Kim et al., 2006). Ancestry and stature are variables that, along with the previous two, constitute the so-called biological profile of an individual. A comprehensive description of sex based on skeletal characteristics is a basic step in forensic investigations (Franklin et al., 2006; Franklin et al., 2008). For Carvalho (2012), estimating the sex of an individual is one of the most important analyses in human identification processes. To determine sexual dimorphism across the skull structure, a visual comparison of bones or measurement of cranial components can be performed. Age estimation in forensic work is important, as it helps to identify corpses and classify individuals as capable or absolutely or relatively incapable, information that has an important forensic function (Schmidt, 2004). Ancestry estimation is performed through morphological characteristics and bone measurements, which facilitates matching with established ancestral groups (Ousley et al., 2009). However, the presence and preservation of bone elements are required for morphological or metric analyses, as well as for assessing the age and sex of individuals (Batista, Santos, 2018). Brazil has a mixed population, which makes it difficult to determine the biological characteristics (biological profile) of the ancestral pattern when comparing foreign populations with their anthropometric characteristics and differences. Therefore, to obtain reliable information, forensic anthropologists must use protocols and methods prudently (Almeida Júnior et al., 2010), especially those developed with parameters obtained from studies of other populations.
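As a purely illustrative aid to the volumetric and dimensional analyses mentioned above, the following is a minimal Python sketch; the segmentation mask, voxel spacing, and function name are hypothetical inputs and do not come from any of the cited studies. It shows how a sinus volume and its bounding-box dimensions could be derived from an already-segmented CT or CBCT volume by simple voxel counting.

```python
import numpy as np

def sinus_metrics(mask, voxel_spacing_mm):
    """Volume and bounding-box dimensions of a segmented sinus.

    mask: 3D boolean array, True inside the sinus (segmentation assumed to be given).
    voxel_spacing_mm: (dz, dy, dx) voxel size in millimetres.
    """
    spacing = np.asarray(voxel_spacing_mm, dtype=float)
    voxel_volume_mm3 = float(np.prod(spacing))

    # Volume: number of sinus voxels times the volume of a single voxel
    volume_cm3 = mask.sum() * voxel_volume_mm3 / 1000.0

    # Linear dimensions: extent of the axis-aligned bounding box of the sinus
    coords = np.argwhere(mask)
    extent_mm = (coords.max(axis=0) - coords.min(axis=0) + 1) * spacing

    return {"volume_cm3": volume_cm3, "extent_mm": extent_mm.tolist()}
```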
Among the parameters for estimating biological characteristics, ancestry is the most difficult, researched, and controversial (Vanrell, 2019). Due to the scarcity of research in this area, especially in highly mixed populations such as Brazil's, its estimation is a challenge (Batista, Santos, 2018). CT is described as one of the imaging methods with the greatest potential to help forensic professionals identify the human body, especially CBCT (Phothikhun et al., 2012; Hishmat et al., 2014; Silveira, 2015). Regarding paranasal sinuses, these methods allow accurate assessments and analyses (Pfaeffli et al., 2007; Tucunduva, Freitas, 2008). Several studies in the literature have used radiographic images to verify whether the frontal, maxillary, and sphenoid sinuses provide support for estimating sexual dimorphism, age, and ancestry. As for the ethmoid sinus, no data were found relating it to these variables. However, Robles et al. (2020) performed a study proposing a new method to determine biological characteristics with the three-dimensional (3D) reconstruction of paranasal sinuses to estimate age, sex, and ancestry. The findings of this study provide insights into the potential of using paranasal sinuses as an attribute to distinguish individuals and identify unknown human remains in forensic investigations. In 2017, Sherif et al. determined the accuracy of using paranasal sinus measurements as a method of estimating sex with multi-detector computed tomography. According to the results, the maxillary sinus presents the highest precision in estimating sexual dimorphism, followed by the frontal sinus and the sphenoid sinus, and the authors concluded that it is effective for human identification. Demiralp et al. (2019) analyzed CBCT images of a sample of ancient skulls to determine whether the dimensions and volume of paranasal sinuses could be useful to estimate sex and age. The results did not show statistically significant differences between measurements (p > 0.05), and the authors concluded that the measurement of sinus volume and dimensions from CBCT data can be a promising technique for determining sex and age. Likewise, Teixeira et al. (2020) evaluated with CBCT whether the maxillary sinus can be used to estimate sex and age. All measurements were greater in men and, regarding age, the youngest group had greater height measurements. According to the authors, the measurement of the maxillary sinus in CBCT images can be applied to sexual dimorphism in a complementary way, but for age estimation, the use of the maxillary sinus was less accurate. The study by Araújo et al. (2015), using CBCT images, also had important implications for sex estimation, as it presented a statistically significant difference (p = 0.0005) between the sexes, with a greater volume of MS observed in male individuals. However, regarding ancestry, it did not find a significant difference (p = 0.4535) between brown and white individuals. Similarly, Soares et al. (2020) analyzed CBCT images and investigated sexual dimorphism in the Brazilian population, as well as the possibility of human identification through the maxillary sinus. The results show significant differences between the sexes.
It was concluded that the evaluation of morphological and dimensional parameters of the MS is reproducible and thus valid for human identification. In the same sense, Faria Gomes et al. (2019) developed and validated a formula to estimate sex with measurements of maxillary sinuses, using CBCT in a Brazilian population. The measurements of maxillary sinuses were significantly greater in men, with height being the most dimorphic measure and presenting an accuracy of 77.7% for sex estimation, which can serve as a complementary method for human identification in the Brazilian population. Uthman et al. (2011) also found a relationship between sex and measurements of the maxillary sinus. These authors analyzed reconstructed helical CT images and studied the accuracy and reliability of MS measurements for sex estimation, obtaining positive outcomes for the use of MS in the study of sexual dimorphism, with an overall accuracy of 71.6%. The study by Najem et al. (2020) showed a divergent result. To estimate sex and age, these authors used CBCT images and performed linear measurements of the maxillary sinus. The results showed that the sample studied did not present statistically significant differences in the measurements, which therefore could not be used to determine such parameters. The same occurred in the study by Etemadi et al. (2017). These authors evaluated the volume of the maxillary sinus with CBCT and analyzed its association with sex and some craniofacial indices. The average volume of the MS was higher in men. However, this parameter could not be used for sex estimation because the area under the receiver operating characteristic (ROC) curve was 62.7%. The study by Gulec et al. (2020), performed with CBCT in a Turkish subpopulation, was also contradictory, as the authors did not find a relationship of MS volume with sex and age. Using magnetic resonance imaging, Rani et al. (2017) estimated age and sex with the size and volume of the maxillary sinus. The results showed sexual dimorphism with a statistically significant difference, but there were no statistically significant differences for the estimated age. Regarding the characteristics of the frontal sinus, Camargo (2000) concluded that the morphological analysis (area and perimeter) of this sinus can help to estimate sex. Likewise, Xavier et al. (2015) concluded that the FS is useful for human identification after conducting a literature review of 30 articles assessing the contribution of frontal and maxillary sinuses to human identification and sex estimation in the Forensic Sciences. In 2017, Moore and Ross found that the frontal sinus is valid for estimating age and accurately predicting the age of subadults. Scendoni et al. (2021) implemented a study on frontal sinuses that involved the development of a personal code using the method presented by Cameriere et al. (2020), together with the variables considered by Yoshino et al. (1987), for the personal identification of Italians, Kosovars, and Turks, in order to test the approach and compare results between the three populations. According to the authors, this method is suitable for different groups of people. When comparing frontal sinus radiographs taken before and after death, the possibility of identifying a person increases significantly. The results show that the model is more discriminative in identifying individuals of different nationalities.
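Formula-based sex estimation of the kind reviewed above can be illustrated schematically. The Python sketch below is a hypothetical illustration only: the simulated maxillary sinus measurements and the fitted coefficients are invented, and it is not the validated formula of Faria Gomes et al. (2019); it merely mirrors the general workflow of measuring, fitting a discriminant model, and checking accuracy.

```python
# Illustrative sketch: discriminant-style sex estimation from maxillary sinus
# measurements (height, width, length in mm). All data below are simulated;
# males are drawn with slightly larger means, matching the reviewed finding
# that sinus dimensions tend to be greater in men.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200
sex = rng.integers(0, 2, n)                      # 0 = female, 1 = male
means = np.where(sex[:, None] == 1, [38.0, 27.0, 36.0], [35.0, 25.0, 33.5])
X = means + rng.normal(0, 3.0, (n, 3))           # height, width, length (mm)

X_train, X_test, y_train, y_test = train_test_split(X, sex, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice such models must be derived and validated on population-specific samples, as the divergent results of the studies above indicate.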
Similarly, Abdalla (2021) measured the anteroposterior length, width, and height of the FS in different age groups of both sexes with axial, coronal, and sagittal tomography. The results showed statistical differences that can have considerable value in determining the sex, age, racial origin, and ethnic group of living or dead individuals. As for the sphenoid sinus, according to Oliveira et al. (2009), it shows great individual differences in area and volume, and it can be used to assess sexual dimorphism. The results of the study by Özer et al. (2018) showed that the SS can be used to estimate the sex and age of human remains, as well as to determine age-related changes and population differences. Considering the Brazilian population and the use of the sphenoid sinus to identify individuals, Ramos et al. (2021) performed linear and volumetric measurements of the SS with CBCT and concluded that these measurements are useful to characterize the sex of individuals. However, in the study by Oliveira et al. (2017), performed with helical CT, there were no significant correlations between age, sex, and sphenoid sinus volume.

Conclusion

The imaging analysis of frontal, maxillary, and sphenoid sinuses is a useful tool for human identification, as well as for estimating sex, age, and ancestry, as it commonly provides a high level of accuracy. Along with the two-dimensional analysis of sinuses, three-dimensional images are highly relevant for human identification. Although three-dimensional analysis is still not routine, its use in the Forensic Sciences is without a doubt an excellent means provided by new technologies. Additional research must be carried out to create standardized protocols using new technologies, especially the 3D analysis of paranasal sinuses, to improve the work of forensic professionals, assisting justice and society. Regarding the ethmoid sinus, studies must be performed to verify its usefulness for human identification, as no data have been published on the subject.
Architectural Tourism Development Model as Sustainable Tourism Concept in Bandung

Bandung is one of the most famous tourist destinations in Indonesia and experiences rapid tourism development. It is supported by the city's diverse tourism potential, including nature, culture, heritage buildings, culinary, fashion, recreation, and entertainment. Moreover, infrastructure and public facility improvements increase the tourism attractiveness of Bandung. Two major infrastructure projects, namely the Cipularang Toll road and the Jakarta-Bandung fast train, will support Bandung tourism development. These conditions are a benchmark of Bandung's tourist attractions. To maintain such potential, Bandung needs a programmed, structured, and controlled tourism development model. The current tourism model is inclined toward a sustainable industry concept that preserves the environment and local culture. The tourism programme is directed at generating income and green employment, with regard to conservation. This study aims to assist the planning and management of sustainable tourism development by increasing the level of understanding of how tourist destinations develop and change. The results of this study can be taken into consideration by stakeholders to develop a framework for strategic planning toward economically, environmentally, and culturally sustainable tourism.

Introduction

Nowadays, tourism [1] and the creative economy play an important role in Indonesia's development, as both make significant contributions to the Gross Domestic Product (GDP) and to employment, both directly and indirectly. Tourism contributes 11.8% of Indonesia's GDP [2] and the creative economy contributes 14.66% of total employment. The Ministry of Tourism and Creative Economy has established a vision of "the realization of welfare and life quality of Indonesians through tourism and creative economy" [3]. The ministry, supported by the International Labor Organization (ILO) and the Australian Government, develops the Strategic Plan for Sustainable Tourism and Green Jobs for Indonesia [3]. This strategic plan is a framework and reference for achieving sustainability and providing environmentally friendly employment in the tourism sector in Indonesia [3]. The Strategic Plan for Sustainable Tourism and Green Jobs is an answer to improving welfare and life quality within communities. The programme is supported by the ILO and the Australian Government, as the Indonesian government's partners in exchanging views and building consensus to build a strong tourism industry. The strategic plan is designed based on consultations among tourism stakeholders, including government, social partners, communities, industries, and citizens. The richness of natural and cultural resources has become a major potential in both domestic and international tourist markets. The government is challenged to cultivate this potential into a national asset for the prosperity of the communities. In addition, hospitality and human resources are strategic in achieving national development goals and increasing the nation's competitiveness. Indonesia's tourism potential occupies the second position after Brazil. Local culture, natural beauty, and heritage buildings are the pillars of Indonesian tourism. As the largest archipelagic nation in the world, Indonesia has 17,508 large and small islands, giving it extraordinary natural potential. With its abundance of natural resources and green environments, West Java has enormous potential in tourism. Tourism activities in West Java, especially Bandung, require supporting facilities.
Tourism activities have been Bandung's main economic sector since 1920. Nowadays, tourism is growing further with the support of the Cipularang toll road and the construction of the fast train line connecting Bandung to Jakarta. The tourism sector is significant in increasing the local revenue of Bandung: according to the Bandung Culture and Tourism Office, nearly 70% of Bandung's local revenue comes from the tourism sector. Bandung has experienced an increase in domestic and foreign tourist arrivals of almost 14% per year. Therefore, the tourism sector has become a critical subject of urban planning. Bandung is not only known as the capital of West Java Province, but also as a famous tourism city. The city preserves cultural heritage of superior value. Bandung is well known for its old architectural style, which is inseparable from the history of the city's development, and it owns several tangible and intangible relics. The long history of Bandung has left a number of historical buildings, and the city has been given the title of the most complete architectural laboratory because of its Art Deco treasures. Its sturdy old buildings carry beautiful architectural styles. As a favorite tourist destination, Bandung has potential for (1) heritage tourism; (2) shopping and culinary tourism; (3) educational tourism; (4) recreational and cultural tourism; and (5) Meeting, Incentive, Convention, Exhibition (MICE) tourism. Bandung has diverse tourist destinations, supported by its geographical location, nature, adequate urban planning, and good accessibility. Based on this background, this study reviews the development of tourist destinations in Bandung through a Quality Function Deployment (QFD) analysis. The analysis considers the development of holistic tourist destinations by taking all related elements into account. The result of this analysis is expected to provide input for increasing tourist visits and managing historical buildings as assets of Bandung.

Bandung Tourism Potential

Besides Bali and Yogyakarta, Bandung is also a place of interest that tourists visit. The city has unique, extraordinary beauty and industrial creativity. Bandung is especially crowded on weekends due to domestic and foreign visitation. Table 1 shows the number of tourists visiting Bandung within the last five years. Tourist visits in 2015 reached over 6 million and increased by almost 12% (around 4% per year) up to 2018. This increase is considered significant. This potential stimulates the accelerated growth of tourism businesses and other tourism-related businesses, which increases community welfare and local revenue. There are 14 (fourteen) potential tourism clusters in Bandung, including: (1) shopping and health tourism clusters on the Sukajadi-Setrasari-Pasteur roads; (2) traditional art clusters; ...; and (14) a shopping tourism cluster on Cihampelas. Cluster development is a consequence of urban development and planning along with market demand, as happened on Ir. Juanda Road, which was originally a non-commercial area and has now turned into a busy shopping tourism destination.

Bandung Tourism Destination Development

The tourism sector is vital for pushing economic growth in many countries [4,5,6,7,8].
Therefore, it is necessary to consider several approaches in planning and developing tourism, including: (1) a continuous, incremental, and flexible approach (planning as a continuous process based on needs and results); (2) a system approach (tourism as an integrated system which needs to be planned through system analysis); (3) a comprehensive approach (a tourism development approach that holistically considers institutional elements and the environment as well as socio-economic implications); (4) an integrated approach (tourism development as an integrated system of area planning and development); (5) an environmental and sustainable development approach (a tourism approach that starts from the planning process, continues through the development process, manages preserved natural and cultural resources, and performs environmental analysis); (6) a community approach (a tourism development approach that maximizes community involvement from planning through decision making on aspects that affect socio-economic conditions); (7) an implementable approach (tourism development should formulate objective plans and recommendations, as well as applicable techniques and strategies); and (8) a systematic planning approach (an approach applied in tourism planning based on logical activity).

Sustainable Tourism Concept

Butowski (2012) relates the concept of sustainable tourism [9] to the concept of sustainable development [10], which emphasizes the need for the rational management of natural resources [9]. This is in line with the UN Secretary-General's report on the need to change the general concept of economic development through clear natural resource management. Threats to the environment were the main issue of the 1972 Stockholm UN conference [9], where the term sustainable development was actually introduced. Moreover, the 1992 UN Conference in Rio de Janeiro agreed upon two important documents on the environment and development, known as the Rio Declaration. It contains 27 principles defining the rights and obligations of countries in the field of sustainable development, and AGENDA 21, a global action plan referring to the actions needed to achieve sustainable development and a high quality of life [9]. The concept of tourism development referring to the principle of sustainable development has actually been discussed since the 1980s [9]. Krippendorff (1986) developed the concept of alternative tourism [11], which treats small-scale tourism, as opposed to the system of industrial society, as the right choice [9]. Ceballos-Lascurain (1987) introduced the concept of ecotourism [12], and since then various terms for alternative tourism have emerged [9], including: green tourism, soft tourism, nature tourism, environmentally friendly/environmentally sensible tourism, responsible tourism, discreet tourism, appropriate tourism, and ecoethnotourism [11,13,14,16,17,18]. These tourism models are designed under an evaluative approach which juxtaposes new forms of tourism with the old mass tourism model. Butler (1980) states that sustainable tourism is the right answer to dealing with today's tourism problems [18]. Butler proposes two ideas in tourism: (1) based on a semantic approach, sustainability guarantees long-term survival in accordance with a changing market [19], and (2) based on the concept of sustainable development [9], tourism is treated as regional development that does not violate the principles of sustainable development.
This opinion is supported by Niezgoda (2006), who states that the conception of sustainable tourism [9] represents the relationship between tourism, environment, and development, as shown in Figure 1. Based on Figure 1, sustainable tourism is essential for tourism development itself. According to Farrell and Twining-Ward (2004), sustainable tourism must be based on an interdisciplinary approach due to the degree of complexity and the uncertainty of people's behaviour in the tourism system, which affect tourism itself, yet cannot guarantee satisfactory results [9]. The approach covers the fields of ecosystem ecology, ecological economics, global change science, and complexity theory [9]. Farrell and Twining-Ward (2004) convey a new concept of sustainable tourism [20] with the term "comprehensive tourism system and complex adaptive tourism systems (CATS)" [9]. The principle of sustainable tourism must consider the long-term needs of the natural environment, positively influence the economic sector, and be accepted in terms of the ethics and culture of the local community [9]. The 2008 World Conservation Congress in Barcelona agreed on the basis of the sustainable tourism concept (Table 2). Based on Table 2, sustainable tourism must consider natural, socio-cultural, and economic aspects [19] and maintain the balance of these aspects.

Research Method

This research applies qualitative studies using the Quality Function Deployment (QFD) method. This method is intended to plan and develop structured products and makes it possible to obtain specific, clear results about the target as desired by the customer. The main focus of QFD is to involve customers in the product development process as early as possible. QFD is divided into two parts, namely the customer table (which shows customer information) and the technical table (which describes technical terms in response to customer needs). In detail, QFD involves four matrices: (1) product planning (the House of Quality), (2) product design, (3) process planning, and (4) production planning; a schematic example of the first matrix is sketched below.
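As a schematic illustration of the first QFD matrix (the House of Quality) mentioned above, the following Python sketch weights hypothetical customer requirements against hypothetical technical responses; all names, weights, and relationship scores are placeholders for illustration, not data from this study.

```python
# Minimal House-of-Quality sketch for a QFD product-planning matrix.
# Customer requirements, importance weights, and relationship scores
# are hypothetical illustrations.
import numpy as np

requirements = ["unique attractions", "easy access", "good amenities", "good management"]
weights = np.array([5, 4, 3, 3])                 # customer importance (1-5)
responses = ["restore heritage buildings", "improve transport", "expand facilities"]

# Relationship matrix: rows = requirements, cols = technical responses
# (9 = strong, 3 = moderate, 1 = weak, 0 = none).
R = np.array([
    [9, 1, 3],
    [0, 9, 1],
    [1, 1, 9],
    [3, 3, 3],
])

scores = weights @ R                              # absolute importance per response
for resp, s in sorted(zip(responses, scores), key=lambda p: -p[1]):
    print(f"{resp}: {s}")
```

Ranking the technical responses by these weighted scores is what lets the subsequent matrices translate customer priorities into design and planning decisions.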
Result and Discussion

Tourist attraction and object potential are basic requirements for an area or city to develop into a tourist city. Based on 2018 Bandung profile data, Bandung owns potential in heritage buildings, the natural environment, and socio-cultural assets [17,21,22,23]. Planning and assisting the development of travel destinations are important factors in tourism. There are six important factors in tourism planning and development: (1) planning must be able to increase quality growth; it requires constructive change, in addition to the development of potential attractions/objects to be sold; (2) tourism policy has an important role in promotional activities (based on research results); (3) tourism planning requires public and private cooperation to realize the expectations of stakeholders; (4) regional and local policy planning must be able to strengthen and support tourism development; (5) regional and local policy planning must be able to stimulate business people to contribute to regional development; and (6) business planning policies should be supported by both business people and government to provide accommodation for all nature and culture attractions.

Heritage buildings: from architectural to educational tourist destination

A destination can be grouped as a developing tourism object when tourism activities exist from the start. To increase tourism potential, sustainable development is needed through ecological, socio-cultural, and economic stability. In 2011, the Bandung Cultural Heritage Conservation Society issued a list of 100 old buildings categorized as cultural heritage and preserved buildings, divided into 6 groups.

Object Analysis, Tourist Attraction, and Market Analysis

Based on the 2018-2023 Bandung Strategic Plan, the heritage buildings in the city are the object of investigation by the Bandung Heritage association. As many as 100 buildings are classified into six categories. Of the 100 cultural heritage buildings, 18 have the potential to become architectural tourism destinations and tourism attractions. The market segmentation of both domestic and foreign visitation covers natural and artificial recreation, shopping, culinary, recreation, entertainment, education, and religious travel. Current market conditions are based on: (a) Geographical aspect. Most tourists visit Bandung for shopping and culinary destinations (52.74%), followed by educational tourism (32.56%). Tourists visiting Bandung come mostly from West Java, Jakarta, and major cities in Indonesia. The motivations of tourists coming to Bandung include fun experiences, togetherness, escaping routine, authentic experience, learning, refreshing, a fresh physical environment, health motives, and pride. (b) Demographic aspect. Visitors who come to Bandung are mainly women (±54.25%), and the rest are men. Female tourists tend to enjoy shopping and culinary objects in Bandung. The age of visitors ranges from 20 to 35 years. Young tourists engage in various activities, such as shopping, culinary, knowledge/architecture sightseeing, and others. (c) Psychographic aspect. Besides taking joy in the beauty of nature and the heritage buildings of the city, tourists are spoiled by diverse factory outlets and culinary places offering traditional to modern foods. Numerous tourist attractions make Bandung a favourite destination in Indonesia. However, Bandung is still classified as a transit city due to an average hotel stay of 1-3 days. Hotels in Bandung are usually crowded only during weekends (Friday to Sunday) or school and religious holidays. For this reason, there is an opportunity for tourism service entrepreneurs to initiate tour and travel business packages combining the tourism potential in Bandung. So far, few tour and travel agencies have taken this opportunity. This potential should be considered a business opportunity beyond the existing standard tour packages. In order to meet customer demand, market players should incorporate these aspects into tour packages. The analysis for determining tour packages must meet four aspects of tourist needs, namely: (1) Attraction: the main product of a destination. It answers the question "What to see and what to do" during travel exploration. An attraction may take the form of natural beauty and its uniqueness, local community culture, historical buildings, or artificial attractions (games and entertainment). Attractions should be unique and different from other regions in order to have high value. (2) Accessibility: the infrastructure and means of transportation to get to a destination. Highway access and road guidance are important aspects, beside reliable public transportation. (3) Amenity: supporting facilities that can meet the needs and desires of tourists in a destination. This includes accommodation and restaurants, as well as other essential facilities for tourists, such as public toilets, parking lots, rest areas, praying rooms, and resting places. (4) Ancillary: the ability to manage a destination.
Even though a destination is attractive, accessible, and supported with enough amenities, it will be neglected if it is not well managed. Considering the potential market and tourist demand aspects, the planning strategy for the development of sustainable tourism in Bandung can be seen in Figure 6. (1) Policy strategy: clear travel guidelines and good tourism management will open private investment opportunities. It is also necessary to increase the promotion of all potential destinations, improve the quality of human resources, and socialize local regulations related to tourism development. (2) Tourist facility and activity strategy: needed to optimize the physical quality of buildings and services. Improved access to infrastructure and facilities and excellent service will support tourism development in accordance with applicable standards. (3) Marketing strategy: divided into four strategies, namely (a) product strategy (promoting tourism objects by adding unique tourist attractions and attracting broader segments); (b) price strategy (carried out through changes in market behaviour patterns by making price adjustments); (c) place/distribution strategy (renowned destinations as fundamental tourist attractions which need to be socialized continuously); and (d) promotion strategy (built through various promotional media, including optimizing the sub-variables of attraction, amenity, accessibility, and ancillary services by allocating more funds for tourism development).

Conclusion

Bandung is a popular travel destination among domestic (Nusantara) tourists, especially for its shopping and culinary attractions. Dutch heritage buildings have the potential to be promoted as excellent architectural and educational tourist destinations of Bandung in particular and West Java in general. Evaluation of tourism products and the actual market shows that heritage buildings have great potential for developing Bandung's tourism business, especially as architectural and educational tourism destinations. The improvement of supporting facilities is vital to the development of architectural and educational tourism. Besides, the government needs to allocate greater funds for tourism development as a regional leading sector. The management of business objects and community-based tourist attractions needs to be improved. The government also needs to establish stronger and sustainable cooperation with tourism stakeholders.
Cognitive Functions and Depression in Patients with Irritable Bowel Syndrome

Background. Irritable bowel syndrome (IBS) is associated with depression, and depression with impaired cognitive functions. The primary aim was to study associations between depression and cognitive functions in patients with IBS. Methods. IBS (according to the Rome III criteria), cognitive functions (evaluated with a set of neuropsychological tests), and depression (measured with the Beck Depression Inventory II and the Montgomery-Åsberg Depression Scale) were analysed in patients with idiopathic depression and in patients with unspecified neurological symptoms. Results. 18 and 48 patients with a mean age of 47 and 45 years were included in the "Neurological" and "Depression" group, respectively. In the "Depression" group, the degree of depression was significantly higher in patients with IBS than in those without. Depression was associated with impaired cognitive function in 6 out of 17 neuropsychological tests, indicating reduced set shifting, verbal fluency, attention, and psychomotor speed. IBS was statistically significantly associated with depression but not with any of the tests for cognitive functions. Conclusions. IBS was associated with depression but not with impaired cognitive functions. Since the idiopathic depression was associated with cognitive deficits, the findings could indicate that the depression in patients with IBS differs from an idiopathic depression.

Introduction

Irritable bowel syndrome (IBS) is a common disorder with a high prevalence of comorbidities such as musculoskeletal pain, anxiety and depression, and emotional disturbances, and it is common in patients with an idiopathic depression [1][2][3][4]. The two disorders, IBS and the idiopathic depression, are associated and have several pathophysiological abnormalities in common [5,6]. The interaction between the gut and the brain, called the "brain-gut axis", is mediated via humoral, immunological, and neuronal pathways. The axis is of importance for health and disease, including gastrointestinal and psychological functions [7]. Cognitive deficits are common in patients with an idiopathic depression but have been less well studied in patients with IBS. Reports on the associations between IBS and cognitive functions are contradictory, in part due to the different methods used for the evaluation of cognition [8][9][10][11][12][13]. The main reason for this study was the contradictory information about cognitive functions in patients with IBS, particularly the association between depression and cognitive functions. Knowledge about the association between depression and cognitive functions in patients with IBS is important for a better understanding of the "brain-gut axis" and a correct evaluation of the patients. Kennedy et al. pinpoint the lack of knowledge very well: ". . . it will be necessary to carefully control for psychiatric co-morbidity so that the relative contributions of anxiety and depression to deficits in cognitive functioning can be disentangled from the alterations associated with IBS alone" [14]. This study was designed to compare patients with an idiopathic depression and patients with unspecified neurological symptoms. The protocol specified the supplementary analyses related to IBS. The primary aim of these analyses was to study the associations between depression and cognitive functions in patients with IBS.

Design and Participants

Design. The design was cross-sectional studies in two groups of patients.
(i) Consecutive patients above 17 years of age with a diagnosis of idiopathic depression (according to ICD-10; the F32-34 spectrum, without triggering factors) referred to a psychiatric outpatient clinic were included in the study after the exclusion of organic diseases (the "Depression" group). (ii) Consecutive patients above 17 years of age admitted to an inpatient neurological clinic for thorough investigation of neurological symptoms were included after the exclusion of organic disorders (the "Neurological" group). These patients had no objective neurological signs, all laboratory tests were normal, and all supplementary investigations (CT, MRI, spinal fluid examination, etc.), which were performed at the clinicians' discretion, were normal. The patients presented with various symptoms such as headache, back pain, and vertigo. A medical history was recorded, a routine clinical examination was performed, and haematological and biochemical screening tests were taken in all patients. In order to exclude other diseases, further tests were performed at the doctors' discretion. All patients filled in validated questionnaires for the classification of gastrointestinal disorders and depression. A set of neuropsychological tests was carried out. An experienced psychiatric study nurse performed the practical work with the questionnaires and the neuropsychological testing. Among the neuropsychological variables were verbal fluency tests with letter, clothing, and animal material (number of animals; reference value 17, SD 5) [24][25][26]. These tests measure the ability to generate words beginning with a given letter or belonging to a given category within one minute; high scores are best.

Statistics. The characteristics of the patients were analysed with descriptive statistics and reported as mean (SD), median (range), and proportion (percentage). Comparisons between the groups were analysed with an exact unconditional test for 2 × 2 tables, the t-test, and the Mann-Whitney test, depending on the type of data and its normality. Predictors of depression and cognitive functions were studied with univariate general linear model analyses, with the scores for depression and cognitive functions (one at a time) as dependent variables (a schematic sketch follows below). Independent variables were the groups "Neurological"/"Depression", "no IBS"/"IBS", and gender (fixed factors), and age and education (covariates). For the calculation of estimated marginal means, the interaction between the groups "Neurological"/"Depression" and "IBS"/"no IBS" was added to the model. Except for the adjustments made in the multivariable analyses, no adjustments were made for multiple comparisons.

Patients. Out of 71 patients included in the study for comparisons between the "Neurological" and "Depression" groups, 5 in the "Neurological" group were excluded because of organic abdominal diseases or incompletely filled-in questionnaires. This left 66 patients for the analyses: 18 in the "Neurological" group and 48 in the "Depression" group.
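The univariate general linear model described under Statistics can be sketched with the formula interface of statsmodels. The dataframe below is simulated for illustration, with one hypothetical dependent variable (bdi_ii) standing in for the depression and cognitive scores analysed one at a time in the study.

```python
# Sketch of a univariate general linear model: one score as dependent
# variable, group/IBS/gender as fixed factors, age and education as
# covariates. All data here are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 66
df = pd.DataFrame({
    "group": rng.choice(["Neurological", "Depression"], n),
    "ibs": rng.choice(["no_IBS", "IBS"], n),
    "gender": rng.choice(["F", "M"], n),
    "age": rng.normal(46, 12, n),
    "education": rng.normal(13, 3, n),
})
# Hypothetical BDI-II scores, constructed so that group and IBS both matter.
df["bdi_ii"] = (10 + 12 * (df.group == "Depression") + 5 * (df.ibs == "IBS")
                + rng.normal(0, 4, n))

model = smf.ols("bdi_ii ~ C(group) + C(ibs) + C(gender) + age + education",
                data=df).fit()
print(model.summary())
```

Refitting the same formula with each cognitive test score in place of bdi_ii reproduces the one-dependent-variable-at-a-time scheme described above.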
Table 1 gives the characteristics of the participants with comparisons between the groups. Compared to the patients in the "Neurological" group, the patients in the "Depression" group had statistically significantly higher scores for depression and abdominal complaints (IBSSS), significantly impaired cognitive functions on 5 out of 17 tests, and a trend toward a higher prevalence of IBS. Attention, cognitive processing, and verbal fluency were the cognitive functions with the most marked differences between the groups. Tables 2 and 3 compare patients with and without IBS in the two groups. The cognitive functions did not differ significantly between the patients with and without IBS in either of the two groups. In the "Depression" group, depression and abdominal complaints were significantly more severe in patients with IBS.

Univariate Regression Analyses. IBS was an independent predictor of depression but was not associated with differences in any of the tests for cognitive functions. The "Depression" group was associated with significantly reduced cognitive performance in 6 of the 17 tests. The most marked differences were seen in the tests for attention and cognitive processing (Stroop 1 (Word)) and verbal fluency (COWAT (Animal)). Table 4 gives the details. Figure 1 visualises the associations between the groups with and without IBS and the "Neurological" and "Depression" groups for some selected variables (BDI-II, Trail Making Test B, and HVLT immediate total recall). Both the "IBS" group and the "Depression" group were significantly associated with BDI-II, only the "Depression" group was associated with Trail Making Test B, and neither of them was associated with HVLT immediate total recall.

Discussion

The main finding was that the cognitive functions, measured with a broad spectrum of reliable and validated tests, were unrelated to IBS. In accordance with other studies, this study showed associations between IBS and depression and between the idiopathic depression and cognitive functions. We are not aware of previous studies on the association between IBS and cognitive functions after adjusting for the idiopathic depression. These new findings are of importance for the clinical evaluation of patients with IBS. Since depression and other comorbidities are common in patients with IBS, physicians might wrongly suspect depression or cognitive impairment in these patients. The normal cognitive functioning in patients with IBS has, with some exceptions, also been reported in other studies [9,10,14]. Kennedy et al. used several tests including the Paired Associates Learning (PAL) test. They reported a subtle visuospatial memory deficit, which remained after correction for psychiatric comorbidity, in one out of 5 PAL subtests, but no change in any of the other tests [10]. Brain imaging (functional magnetic resonance imaging and positron emission tomography) and neurophysiological recordings (cerebral evoked potentials, magnetoencephalography, and spinal reflex responses) have shown abnormal findings in patients with IBS. The clinical relevance of these findings, such as their relation to affective and cognitive functions, has not been established [27,28]. The results have theoretical and practical implications. Theoretically, impaired cognitive performance was expected, since patients with IBS are often depressed and patients with depression have impaired cognitive performance.
The aetiology, pathogenesis, and pathophysiology of depression are complex and might differ between various forms of depression such as "idiopathic depression", "reactive depression", and "inflammation-associated depression" [29][30][31]. Depression might be several diseases or disorders with unequal associations with cognitive functioning and different associations with the brain-gut axis. The findings are also of importance for clinical practice. Patients with IBS are sometimes regarded as "nagging" persons since they present with a wide range of comorbidities including anxiety, depression, musculoskeletal pain, unexplained somatic symptoms, and poor social functioning. This study showed that their cognitive and intellectual functions were unaffected, which indirectly indicates that their comorbidity is "real". The findings should remind clinicians not to ascribe more comorbidities to patients with IBS than necessary and to take the symptoms they present seriously. The tight association between IBS and depression shown in this study is well known from other studies, as is the association between depression and cognitive performance [1-3, 8, 32-34]. This study showed that the cognitive functions in patients with depression were unequally affected. A significant impairment was related to visual scanning, motor speed, and set shifting (the Trail Making Tests and the Stroop Tests) and to fine motor control and tempo (the Grooved Pegboard Tests). The capacity for immediate and delayed recall (the HVLT and BVMT) was unaffected. The impaired WAIS-III Digit Symbol test, which has been evaluated as one of the most sensitive WAIS-III tests, could indicate intellectual impairment. Other functional differences were less clear. Overall, the results indicate that set shifting, verbal fluency, attention, and psychomotor speed were reduced in patients with depression, whereas other functions were normal. The gut and the brain interact through a bidirectional neuronal, humoral, and immunological communication referred to as the brain-gut axis, which affects both gastrointestinal and psychological functioning [7]. The system is only partly understood, but the influence of the gut microbiota and of the function of the blood-brain barrier on the system has been ascertained [35,36]. Both IBS and depression are influenced by the brain-gut axis and have some common pathophysiological abnormalities that could explain the associations between the two disorders [5,6,31,37]. The importance of the brain-gut axis for cognitive functioning is unknown. The finding that there were no associations between the gut and cognitive functions could indicate that the interaction between the gut and depression differs from the interaction between the gut and cognitive functions.

Strengths and Limitations. The use of a wide range of valid and reliable neuropsychological tools for the evaluation of cognitive functions is a significant strength of this study.

[Table 4: The associations between depression and cognitive functions and the groups "Neurological"/"Depression" and "no IBS"/"IBS"; results of linear regression analyses after adjusting for age, sex, and years of education.]
Some other studies have used tools measuring psycho-social-emotional thinking and not strict neuropsychological functioning, which could explain the contradictory results [12,13]. The case-control design of the study was planned for comparisons between patients with and without depression and was not ideal for the study of IBS. Nevertheless, the design made the planned comparisons between patients with and without IBS in the two groups possible, and the analyses were performed according to the protocol. The "Neurological" group was used as controls because no somatic or psychiatric disorder could explain their unspecific neurological symptoms. A completely healthy control group would have been preferable. In addition to their unspecific neurological symptoms, the "Neurological" group had a high prevalence of comorbidities such as IBS and abdominal complaints, and perhaps affective and cognitive disorders. The IBSSS has been validated for the scoring of symptoms in subjects with IBS and not for the scoring of all functional gastrointestinal disorders, as it was used in this study. This use of the IBSSS makes its results less reliable and explains the high scores in patients without IBS. The lack of any significant differences between patients with and without IBS in the "Neurological" group was probably a type II error due to the small sample size. Not even the IBSSS differed between the groups, which indicated a high prevalence of gastrointestinal symptoms also in subjects without IBS. There was no tendency toward cognitive deficits in patients with IBS despite their having significantly more depression. The total sample size was limited, and a type II error cannot be excluded. If an association between IBS and cognitive deficits has been missed, the association must be weaker than that between IBS and depression, which was highly significant.

Conclusions

There were no significant associations between IBS and cognitive functions. IBS was associated with depression, and the idiopathic depression was associated with cognitive deficits. The findings could indicate that depression in patients with IBS differs from an idiopathic depression and that the interaction between the gut and depression differs from that of the gut and cognitive functions.

Ethical Approval

The study was approved by the Norwegian Regional Committees for Medical and Health Research Ethics.

[Figure 1: BDI-II, Trail Making Test B, and Brief Visual Memory Test immediate total recall in the "Depression" and "Neurological" groups divided into patients with and without IBS, after adjusting for age, sex, and education.]
Finite-time System Identification and Adaptive Control in Autoregressive Exogenous Systems

Autoregressive exogenous (ARX) systems are the general class of input-output dynamical systems used for modeling stochastic linear dynamical systems (LDS), including partially observable LDS such as LQG systems. In this work, we study the problem of system identification and adaptive control of unknown ARX systems. We provide finite-time learning guarantees for the ARX systems under both open-loop and closed-loop data collection. Using these guarantees, we design adaptive control algorithms for unknown ARX systems with arbitrary strongly convex or convex quadratic regulating costs. Under strongly convex cost functions, we design an adaptive control algorithm based on online gradient descent to design and update the controllers that are constructed via a convex controller reparametrization. We show that our algorithm has $\tilde{\mathcal{O}}(\sqrt{T})$ regret via the explore and commit approach, and that if the model estimates are updated in epochs using closed-loop data collection, it attains the optimal regret of $\text{polylog}(T)$ after $T$ time-steps of interaction. For the case of convex quadratic cost functions, we propose an adaptive control algorithm that deploys the optimism in the face of uncertainty principle to design the controller. In this setting, we show that the explore and commit approach has a regret upper bound of $\tilde{\mathcal{O}}(T^{2/3})$, and that adaptive control with continuous model estimate updates attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret after $T$ time-steps.

Introduction

Autoregressive Exogenous (ARX) Systems: ARX systems are central dynamical systems in time-series modeling. They represent stochastic linear dynamical systems (LDS) in input-output form, which gives them a wide range of applicability to real dynamical systems and amenability to precise analysis. Due to their ability to approximate linear systems in a parametric model structure, ARX systems have been crucial in many areas including chemical engineering, power engineering, medicine, economics, and neuroscience (Norquay et al., 1998; Bacher et al., 2009; Fetics et al., 1999; Huang and Jane, 2009; Burke et al., 2005). ARX systems have corresponding linear time-invariant (LTI) state-space representations and, in their most general form, can be represented as follows:

$$x_{t+1} = A x_t + B u_t + F y_t, \qquad y_t = C x_t + e_t. \qquad (1)$$

The dynamics are governed by $\Theta = (A, B, C, F)$, where $x_t$ is the internal state, $y_t$ is the output, $u_t$ is the input, and $e_t$ is the measurement noise. Notice that by knowing the initial condition $x_0$ and $\Theta$, one can recover the state sequence. These models provide a general representation of LDS with arbitrary stochastic disturbances. In particular, via different distributions of $e_t$, they are able to model partially observed LDS (PO-LDS) with various process and measurement noises. For instance, LQG control systems, which are the canonical settings in control, can be modeled as ARX systems. In an LQG control system, the process and measurement noises have Gaussian distributions, which corresponds (in predictive form) to an ARX system where $e_t$ has a particular Gaussian distribution determined by the state-space parameters and noise distributions (Kailath et al., 2000).

System Identification and Adaptive Control: These are central problems in control theory and reinforcement learning (Lai et al., 1982).
System identification aims to learn the unknown dynamics of the system from the collected data, whereas adaptive control pursues the goal of minimizing the cumulative control cost of dynamical systems with unknown dynamics. Thus, adaptive control inherently includes the system identification process in order to design a favorable controller. The data collection for these tasks can be performed via independent control inputs, yielding open-loop data collection, or via feedback controllers, resulting in closed-loop data collection (Ljung, 1999).

Finite-time System Identification and Adaptive Control: In contrast to classical results in both of these problems, which analyze asymptotic performance, there has recently been a flurry of studies that consider finite-time performance and learning guarantees in both. In the finite-time system identification setting pioneered by Campi and Weyer (2002, 2005), the main focus has been on obtaining the optimal learning rate of $1/\sqrt{T}$ after $T$ samples. Using open-loop data collection to avoid correlations in the inputs and outputs, Oymak and Ozay (2018), among others, suggest methods that achieve this rate for stable LDS. However, due to the difficulty of handling the correlations caused by the feedback controller, closed-loop system identification guarantees are scarce. Recently, Lale et al. (2020b) proposed the first finite-time system identification algorithm that attains the optimal learning rate guarantee for both open- and closed-loop data collection. In finite-time adaptive control, the efforts have centered around achieving sub-linear regret, which measures the difference between the cumulative cost of the adaptive controller and that of the optimal controller that knows the system dynamics. Most of the prior works follow the explore and commit approach. This approach proposes to first use open-loop data collection to solely explore the system, then estimate the system dynamics and fix a policy to be applied for the remaining time-steps (Lale et al., 2020c; Mania et al., 2019; Simchowitz et al., 2020). The recent introduction of the first finite-time closed-loop system identification algorithm in Lale et al. (2020b) allowed the design of "truly" adaptive control algorithms that naturally use past experience to improve the model estimates and the controller continuously. Deploying closed-loop data collection, Lale et al. (2020b, 2021) provide adaptive control algorithms for PO-LDS that achieve optimal regret results.

Contributions: In this work, we study finite-time system identification and adaptive control problems in ARX-modeled systems with sub-Gaussian noise. First, we state the finite-time guarantees for learning ARX systems that hold for both open- and closed-loop data collection. Deploying the least-squares problem introduced in Lale et al. (2020b), we show that the estimation error of the model parameters decays at a $\tilde{\mathcal{O}}(1/\sqrt{T})$ rate after collecting $T$ samples with persistent excitation. Secondly, we study the adaptive control problem in ARX-modeled systems with sub-Gaussian noise. Leveraging the finite-time system identification results, we propose adaptive control frameworks for ARX systems with arbitrary strongly convex or convex quadratic cost functions: 1. ARX systems with strongly convex cost functions: For this cost function setting, which can possibly be time-varying, we provide an adaptive control algorithm framework that deploys online learning for controller design and exploits the strong convexity.
Using online gradient descent with a convex policy reparametrization of linear controllers, we show that the adaptive control problem turns into an online convex optimization problem and that optimal regret results can be achieved in this setting. To this end, we first show that the explore and commit approach, which fixes the model estimate after open-loop data collection, attains regret of $\tilde{\mathcal{O}}(\sqrt{T})$ after $T$ time-steps of interaction via the proposed framework. Here $\tilde{\mathcal{O}}(\cdot)$ denotes the order up to logarithmic terms. We then show that if the model estimates are updated in epochs using the data collected in closed loop, this adaptive control framework for ARX systems yields the optimal regret rate of $\text{polylog}(T)$. 2. ARX models with a fixed convex quadratic cost function: For this setting, we propose an adaptive control framework that deploys the principle of optimism in the face of uncertainty (OFU) (Auer, 2002) to balance the exploration vs. exploitation trade-off in the controller design. The OFU principle prescribes using the optimal policy of the model that has the lowest optimal cost, i.e., the optimistic model, within the set of systems that are plausible according to the system identification guarantees. We show that using this framework with the explore and commit approach yields regret of $\tilde{\mathcal{O}}(T^{2/3})$. Ultimately, we prove that adaptive control based on the OFU principle attains regret of $\tilde{\mathcal{O}}(\sqrt{T})$ if the model estimates are continuously updated using closed-loop data in ARX systems. These results subsume the prior works on PO-LDS and extend them to the general class of ARX systems with sub-Gaussian noise, which can be adopted in various real-world time-series modelings (Table 1).

Preliminaries

The Euclidean norm of a vector $x$ is denoted as $\|x\|_2$. For a given matrix $A$, $\|A\|_2$ denotes its spectral norm, $\|A\|_F$ is its Frobenius norm, $A^\top$ is its transpose, $A^\dagger$ is its Moore-Penrose inverse, and $\mathrm{Tr}(A)$ is its trace. $\rho(A)$ denotes the spectral radius of $A$, i.e., the largest absolute value of its eigenvalues. The $j$-th singular value of a rank-$n$ matrix $A$ is denoted by $\sigma_j(A)$, where $\sigma_{\max}(A) := \sigma_1(A) \geq \sigma_2(A) \geq \ldots \geq \sigma_n(A) := \sigma_{\min}(A) > 0$. $I$ is the identity matrix with appropriate dimensions. $\mathcal{N}(\mu, \Sigma)$ denotes a multivariate normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$. Consider the unknown ARX model $\Theta$ given in (1). At each time-step $t$, the system is at state $x_t$ and the agent observes $y_t$. Then, the agent applies a control input $u_t$, observes the loss function $\ell_t$, pays the cost $c_t = \ell_t(y_t, u_t)$, and the system evolves to a new state $x_{t+1}$ at time-step $t+1$.

Assumption 2.1 (Sub-Gaussian Noise) There exists a filtration $(\mathcal{F}_t)$ such that for all $t \geq 0$ and $j \in \{1, \ldots, m\}$, the components $e_{t,j}$ are $R^2$-sub-Gaussian, i.e., for any $\gamma \in \mathbb{R}$, $\mathbb{E}[\exp(\gamma e_{t,j}) \mid \mathcal{F}_{t-1}] \leq \exp(\gamma^2 R^2 / 2)$, and $\mathbb{E}[e_t e_t^\top \mid \mathcal{F}_{t-1}] = \Sigma_E \succ \sigma_e^2 I$ for some $\sigma_e^2 > 0$.

Following the general construction of ARX models, we assume that $A$ is stable, so that $\Phi(A) = \sup_{\tau \geq 0} \|A^\tau\|_2 / \rho(A)^\tau$ is finite. This is a mild assumption and captures an extensive number of systems, including detectable partially observable linear dynamical systems (Kailath et al., 2000).
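As a minimal illustration of model (1) and the interaction protocol above, the following Python sketch simulates an ARX system with Gaussian (hence sub-Gaussian) measurement noise; the system matrices are arbitrary stable choices for illustration and are not taken from the paper.

```python
# Minimal simulation of the ARX state-space model (1),
#   x_{t+1} = A x_t + B u_t + F y_t,   y_t = C x_t + e_t,
# under an open-loop Gaussian excitation input.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.8, 0.1], [0.0, 0.7]])   # stable: spectral radius < 1
B = np.array([[1.0], [0.5]])
F = np.array([[0.1], [0.2]])
C = np.array([[1.0, 0.0]])
sigma_e, sigma_u, T = 0.1, 1.0, 100

x = np.zeros(2)
ys, us = [], []
for t in range(T):
    e = sigma_e * rng.standard_normal(1)
    y = C @ x + e                         # y_t = C x_t + e_t
    u = sigma_u * rng.standard_normal(1)  # open-loop excitation input u_t
    x = A @ x + B @ u + F @ y             # x_{t+1} = A x_t + B u_t + F y_t
    ys.append(y); us.append(u)
ys, us = np.array(ys), np.array(us)       # trajectories for estimation
```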
System Identification

Using the dynamics in (1), for any positive integer $h$, the output of the system can be written as

$$y_t = C A^h x_{t-h} + \sum_{k=1}^{h} C A^{k-1} B \, u_{t-k} + \sum_{k=1}^{h} C A^{k-1} F \, y_{t-k} + e_t. \qquad (2)$$

The behavior of an ARX system is uniquely governed by its Markov parameters.

Definition 1 (Markov Parameters) The set of matrices that map the previous inputs to the output is called the input-to-output Markov parameters, and the ones that map the previous outputs to the output are denoted as the output-to-output Markov parameters of the system $\Theta$. In particular, the matrices that map inputs and outputs to the output in (2) are the first $h$ parameters of the Markov operator: $G^k_{u \to y} = C A^{k-1} B$ and $G^k_{y \to y} = C A^{k-1} F$ for $k \geq 1$.

Let $G_{u \to y}(h) = [G^1_{u \to y} \; G^2_{u \to y} \; \ldots \; G^h_{u \to y}] \in \mathbb{R}^{m \times hp}$ and $G_{y \to y}(h) = [G^1_{y \to y} \; G^2_{y \to y} \; \ldots \; G^h_{y \to y}] \in \mathbb{R}^{m \times hm}$ denote the $h$-length Markov parameter matrices. Consider the $h$-length operator $\mathcal{G} = [G_{u \to y}(h) \;\; G_{y \to y}(h)]$ and the subsequences of $h$ input-output pairs from the data collected, either open- or closed-loop or both, stacked as $\phi_t = [u_{t-1}^\top, \ldots, u_{t-h}^\top, y_{t-1}^\top, \ldots, y_{t-h}^\top]^\top$. Using $\mathcal{G}$, at each time-step $t$, the output of the system can be written as

$$y_t = \mathcal{G} \phi_t + C A^h x_{t-h} + e_t. \qquad (4)$$

Since $A$ is stable, for $h = c_h \log(T)$, for some problem-dependent constant $c_h$ and total execution duration $T$, the last term in (4) contributes a negligible bias of order $1/T^2$. Therefore, we solve the following regularized least-squares problem to estimate the Markov parameters of the system:

$$\hat{\mathcal{G}} = \arg\min_{X} \; \lambda \|X\|_F^2 + \sum_{t} \|y_t - X \phi_t\|_2^2. \qquad (5)$$

The problem in (5) admits a closed-form solution (a numerical sketch is given below). The resulting guarantee shows that, under persistence of excitation, the least-squares problem (5) provides consistent estimates and the estimation error decays at the optimal rate. Note that both the input-to-output and the output-to-output Markov parameters of the ARX system are submatrices of $\mathcal{G}$; therefore, the given bound trivially holds for each of them.

Adaptive Control of ARX Systems with Strongly Convex Cost

In this section, we will first introduce linear dynamic controllers (LDC) and provide a convex policy reparametrization, disturbance feedback controllers (DFC) (Simchowitz et al., 2020; Lale et al., 2020b), to approximate LDC controllers. We then provide the details of the setting of ARX systems regarding the loss and regret definitions. Finally, we consider two variants of an algorithm that uses DFC policies in adaptive control of an ARX system and provide their regret performances.

Linear Dynamic Controllers (LDC): An LDC $\pi$ is a linear controller with internal state dynamics $s^\pi_{t+1} = A_\pi s^\pi_t + B_\pi y_t$ and $u^\pi_t = C_\pi s^\pi_t + D_\pi y_t$, where $s^\pi_t \in \mathbb{R}^s$ is the state of the controller, $y_t$ is the input to the controller, i.e., the observation from the system, and $u^\pi_t$ is the output of the controller. $(A_\pi, B_\pi, C_\pi, D_\pi)$ govern the internal dynamics of the LDC. LDCs include a large number of controllers, including the $\mathcal{H}_2$ and $\mathcal{H}_\infty$ controllers of fully and partially observable LDS (Hassibi et al., 1999). The optimal control law for ARX models with quadratic cost is also an LDC (Section 5).

Output uncertainties $b_t(\mathcal{G})$: The output can be decomposed into its components via $\mathcal{G}$ as follows:

$$y_t = \sum_{k=0}^{h-1} \left( G^{k+1}_{u \to y} u_{t-k-1} + G^{k+1}_{y \to y} y_{t-k-1} \right) + b_t(\mathcal{G}),$$

so that the output uncertainty of the ARX system at time $t$ is

$$b_t(\mathcal{G}) = y_t - \sum_{k=0}^{h-1} \left( G^{k+1}_{u \to y} u_{t-k-1} + G^{k+1}_{y \to y} y_{t-k-1} \right).$$

This definition is similar to Nature's output adopted in Simchowitz et al. (2020) and Lale et al. (2020b). It represents the only unknown components in the output. Notice that one can identify the uncertainty in the output at any time-step uniquely using the history of inputs, the history of outputs, and the Markov parameters. This gives the ability of counterfactual reasoning, i.e., considering what the output would have been if the agent had taken a different sequence of inputs and observed different outputs.
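Returning to the estimator (5), the following NumPy sketch recovers the Markov parameters by ridge regression on stacked past inputs and outputs and then forms the approximate output uncertainties. This is a plain closed-form ridge solution under illustrative choices of system, horizon, and regularizer, not the full procedure of Lale et al. (2020b).

```python
# Sketch of the regularized least-squares estimator (5): regress y_t on the
# h most recent inputs and outputs to recover G = [G_{u->y}(h)  G_{y->y}(h)].
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.array([[1.0], [0.5]]); F = np.array([[0.1], [0.2]]); C = np.array([[1.0, 0.0]])
T, h, lam = 2000, 10, 1.0

x = np.zeros(2); ys = []; us = []
for t in range(T):                         # same open-loop simulation as above
    y = C @ x + 0.1 * rng.standard_normal(1)
    u = rng.standard_normal(1)
    x = A @ x + B @ u + F @ y
    ys.append(y); us.append(u)
ys, us = np.array(ys), np.array(us)

# Regressor phi_t stacks u_{t-1},...,u_{t-h}, y_{t-1},...,y_{t-h}.
Phi = np.array([np.concatenate([us[t-h:t][::-1].ravel(),
                                ys[t-h:t][::-1].ravel()]) for t in range(h, T)])
Y = ys[h:].reshape(-1, 1)

# Closed-form ridge solution of (5), transposed into the m x h(p+m) shape.
G_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ Y).T

# Sanity check against the true first Markov parameter G^1_{u->y} = C B.
print("estimate of C B:", G_hat[0, 0], " true:", (C @ B)[0, 0])

# Approximate output uncertainties b_t(G_hat) = y_t - G_hat phi_t (cf. the
# decomposition above); with a good estimate these are close to e_t.
b = Y - Phi @ G_hat.T
```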
Disturbance Response Controllers (DFC): For adaptive control of ARX systems with strongly convex cost functions, we adopt a convex policy parametrization called DFC. A DFC of length $h'$ is defined as a set of parameters $M = \{M^{[i]}\}_{i=0}^{h'-1}$ acting on the last $h'$ output uncertainties, i.e., the control input is given by (7),
$$u^M_t = \sum_{i=0}^{h'-1} M^{[i]} b_{t-i}(G).$$
This convex policy parameterization follows the classical Youla parameterization (Youla et al., 1976) and is used for adaptive control of PO-LDS in Simchowitz et al. (2020) and Lale et al. (2020b). DFC policies are truncated approximations of LDC policies, and for any LDC policy there exists a DFC policy that provides equivalent performance (see Appendix A). Define the closed, convex, and compact sets of DFCs, $\mathcal{M}$ and $\mathcal{M}_r$, such that the controllers in them have uniformly bounded norms ($\mathcal{M}_r$ is an $r$-expansion of $\mathcal{M}$; see Appendix B). Throughout the interaction with the system, the agent has access to $\mathcal{M}_r$.

Loss function: The loss function $\ell_t(\cdot,\cdot)$ is strongly convex, smooth, sub-quadratic, and Lipschitz with parameter $L$, such that for all $t$, $0 \prec \alpha_{\mathrm{loss}} I \preceq \nabla^2\ell_t(\cdot,\cdot) \preceq \bar{\alpha}_{\mathrm{loss}} I$ for finite constants, and for any $\Gamma$ with $\|u\|, \|u'\|, \|y\|, \|y'\| \leq \Gamma$, the Lipschitz condition (8) holds.

Regret definition: Let $M_\star$ be the optimal-in-hindsight DFC policy in the given set $\mathcal{M}$, i.e., $M_\star = \arg\min_{M\in\mathcal{M}} \sum_{t=1}^T \ell_t(y^M_t, u^M_t)$. For ARX systems with strongly convex loss functions, the adaptive control algorithm's performance is evaluated by its regret with respect to $M_\star$ after $T$ steps of interaction, denoted $\mathrm{REGRET}(T) = \sum_{t=1}^T c_t - \ell_t(y^{M_\star}_t, u^{M_\star}_t)$.

The proposed algorithm for ARX systems with strongly convex cost is given in Algorithm 1. It has two possible approaches depending on the persistence of excitation of the given DFC set $\mathcal{M}_r$: the explore and commit approach, or adaptive control with closed-loop estimate updates.

Adaptive Control via Explore and Commit Approach

In the explore and commit approach, Algorithm 1 has two phases: an exploration (warm-up) phase of duration $T_w = O(\sqrt{T})$ and an exploitation phase for the remaining $T - T_w$ time-steps.

Warm-up: During the warm-up period, Algorithm 1 applies $u_t \sim \mathcal{N}(0, \sigma_u^2 I)$ in order to recover the Markov parameters of the system. The duration $T_w$ is chosen to guarantee a reliable estimate of the Markov parameters of the ARX system and the stability of the DFC controllers in the exploitation phase. The exact duration of the warm-up is given in Appendix C.

Exploitation: At the end of the warm-up, Algorithm 1 estimates the Markov parameters of the ARX system, $\hat{G}$, using the data gathered in warm-up, by deploying the regularized least-squares estimation (5). At each time-step $t$, Algorithm 1 uses this estimate and the past inputs and outputs to approximate the output uncertainties, $b_t(\hat{G}) = y_t - \sum_{k=0}^{h-1}\big(\hat{G}^{k+1}_{u\to y} u_{t-k-1} + \hat{G}^{k+1}_{y\to y} y_{t-k-1}\big)$. These approximate output uncertainties are then used to execute a DFC policy $M_t \in \mathcal{M}_r$ as given in (7). Upon applying the control input, the algorithm observes the output of the system along with the loss function, and pays the cost $c_t = \ell_t(y_t, u^{M_t}_t)$.

Algorithm 1: Adaptive Control of ARX Systems with Strongly Convex Cost
Warm-up: deploy $u_t \sim \mathcal{N}(0, \sigma_u^2 I)$, store $D_{T_{\mathrm{warm}}} = \{y_t, u_t\}_{t=1}^{T_{\mathrm{warm}}}$, and set $M_t$ to any member of $\mathcal{M}_r$.
Adaptive control: for $i = 0, 1, \ldots$, for $t = 2^i T_{\mathrm{warm}}, \ldots, 2^{i+1} T_{\mathrm{warm}} - 1$: observe $y_t$ and pay the cost $\ell_t(y_t, u^{M_t}_t)$.

At each time-step, Algorithm 1 employs the counterfactual reasoning introduced in Simchowitz et al. (2020) to compute a counterfactual loss. Briefly, it considers what the loss would have been if the current DFC policy had been applied from the beginning. This provides a noisy metric to evaluate the performance of the current DFC policy. The details of the counterfactual reasoning are given in Appendix E.
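To make the exploitation step concrete, here is a schematic sketch of the DFC control law (7) on top of the previous blocks. The DFC parameters and the loss below are illustrative placeholders; in a real run, $M_t$ would be updated by the projected gradient scheme described next.

```python
# A sketch of one exploitation step of Algorithm 1 (illustrative): approximate
# b_t with G_hat, then play the DFC input (7),
#   u_t = sum_{i=0}^{h'-1} M[i] b_{t-i}(G_hat).
# The DFC parameters M below are placeholders, not a tuned policy.
h_prime = 5
M = [0.01 * rng.normal(size=(p, m)) for _ in range(h_prime)]  # M^[0..h'-1]

def dfc_input(t):
    """DFC control input acting on the last h' estimated output uncertainties."""
    return sum(M[i] @ b_t(t - i) for i in range(h_prime))

def loss(y, u):
    """An illustrative strongly convex loss, e.g. ell_t(y, u) = ||y||^2 + ||u||^2."""
    return float(y @ y + u @ u)
```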
Finally, Algorithm 1 deploys projected online gradient descent on the counterfactual loss to update the DFC policy and keep it within the given set $\mathcal{M}_r$ for the next time-step. This process is repeated for the remaining $T - T_w$ time-steps. Note that deploying DFC policies turns the adaptive control problem into an online convex optimization problem, which is computationally and statistically efficient. Moreover, using online gradient descent for the controller updates exploits the strong convexity of the losses and grants the following regret rate.

Theorem 3 Given $\mathcal{M}_r$, a closed, compact, and convex set of DFC policies, Algorithm 1 with the explore and commit approach attains $\mathrm{REGRET}(T) = \tilde{O}(\sqrt{T})$ with high probability.

The proof is in Appendix E. In the proof, we first show that the choice of $T_w$ guarantees that the open-loop data is persistently exciting and the Markov parameter estimates are refined. Then, we show that the estimates of the output uncertainties, the DFC policy inputs, and the outputs of the ARX system are bounded. Following the regret decomposition of Theorem 5 of Simchowitz et al. (2020), we show that, with the choice of $T_w$, the regret of running gradient descent on strongly convex losses scales quadratically with the Markov parameter estimation error. This roughly gives $\mathrm{REGRET}(T) \lesssim T_w + T\,\varepsilon_G^2(T_w) \approx T_w + T/T_w = \tilde{O}(\sqrt{T})$ for $T_w = \tilde{O}(\sqrt{T})$, where $\varepsilon_G(T_w) = \tilde{O}(1/\sqrt{T_w})$ is the estimation error, giving the advertised bound.

Adaptive Control with Closed-Loop Model Estimate Updates

Prior to describing Algorithm 1 with closed-loop model estimate updates, we need a further condition on the sets $\mathcal{M}$ and $\mathcal{M}_r$, such that the DFC policies in these sets persistently excite the underlying ARX system. The exact definition of the persistence of excitation is given in Appendix B. Note that this condition is mild: it essentially requires a full-row-rank condition on a significantly wide matrix that maps the past $e_t$ to the inputs and outputs. One can also show that if a controller satisfies this condition, then there exists a neighborhood around it consisting of persistently exciting controllers.

In the adaptive control with closed-loop model estimates approach, Algorithm 1 also has two phases: a fixed-length warm-up phase and an adaptive control phase in epochs.

Warm-up: Algorithm 1 applies $u_t \sim \mathcal{N}(0, \sigma_u^2 I)$ for a fixed duration $\tau$ that depends only on the underlying system. This phase guarantees access to a refined first estimate of the system, persistence of excitation, and the stability of the controllers during adaptive control.

Adaptive control in epochs: After warm-up, Algorithm 1 starts controlling the system and operates in epochs of doubling length, i.e., the $i$'th epoch has duration $2^{i-1}\tau$ for $i \geq 1$. Unlike the explore and commit approach, at the beginning of each epoch it uses all the data gathered so far to estimate the Markov parameters via (5). It then uses this estimate throughout the epoch to approximate the output uncertainties and implement the DFC policies. At each time step, the DFC policies are updated via projected online gradient descent on the computed counterfactual loss. The main difference from the explore and commit approach is that Algorithm 1 updates the model estimates during adaptive control, which further refines the estimates and improves the controllers.

Theorem 4 Given $\mathcal{M}_r$ with DFCs that persistently excite the underlying ARX system, Algorithm 1 with closed-loop model estimate updates attains $\mathrm{REGRET}(T) = \mathrm{polylog}(T)$ with high probability.

The proof is in Appendix E and follows similarly to that of Theorem 3.
One major difference that allows achieving the optimal regret rate is the use of data collected during adaptive control to improve the Markov parameter estimates. This approach gives a regret decomposition in which the estimation error decays at each epoch; unlike in the explore and commit approach, this per-epoch decay yields the advertised logarithmic regret.

Adaptive Control of ARX Systems with Convex Quadratic Cost

In this section, we present the setting of ARX systems with convex quadratic cost and the regret definition that competes against the optimal controller for this setting. Finally, we propose an optimism-based adaptive control algorithm with two variants and provide the regret guarantees.

Adaptive Control Setting

The unknown ARX system belongs to a set $\mathcal{S}$ consisting of systems that are $(A, B)$ and $(A, F)$ controllable and $(A, C)$ observable. The ARX system has a quadratic cost on $u_t$ and $y_t$, i.e., $c_t = y_t^\top Q y_t + u_t^\top R u_t$, where $Q \succeq 0$ and $R \succ 0$; hence the cost is convex but not strongly convex. For this ARX system, the minimum average expected cost problem is
$$J_\star = \min_{\pi} \limsup_{T\to\infty} \frac{1}{T}\,\mathbb{E}\Big[\sum_{t=1}^T c_t\Big].$$
Using the average cost optimality equation, one can derive the optimal control law for this problem (Appendix G). The optimal control law of ARX systems, $\pi^*$, is the linear feedback policy (9),
$$u_t = K^*_x x_t + K^*_y y_t, \quad K^*_x = -(R + B^\top P B)^{-1}B^\top P A, \quad K^*_y = -(R + B^\top P B)^{-1}B^\top P F,$$
where $P$ is the unique positive semidefinite solution to the discrete-time algebraic Riccati equation (10),
$$P = (A + FC)^\top\big(P - PB(R + B^\top P B)^{-1}B^\top P\big)(A + FC) + C^\top Q C.$$

Algorithm 2: Adaptive Control of ARX Systems with Convex Quadratic Cost. At each stage, it executes the optimal controller for the optimistic model $\tilde{\Theta}_i$.

Note that $\pi^*$ is an LDC policy with the optimal minimum average expected cost $J_\star$. We assume that the systems in the set $\mathcal{S}$ are contractible, i.e., the optimal controller produces contractive closed-loop system dynamics for the state and the output: $\|A + BK^*_x\| \leq \rho < 1$ and $\|F + BK^*_y\| \leq \upsilon < 1$. Finally, the regret measure in this setting is $\mathrm{REGRET}(T) = \sum_{t=1}^T (c_t - J_\star)$.

Optimism in the face of uncertainty (OFU) principle: The OFU principle has been widely adopted in sequential decision making tasks in order to balance exploration and exploitation. It suggests estimating the model up to a confidence interval and acting according to the optimal controller of the model that has the lowest optimal cost within the confidence set, i.e., the optimistic model. For adaptive control in this setting, we deploy the controllers designed via the OFU principle. The proposed algorithm for ARX systems with convex quadratic cost is given in Algorithm 2. It has two variants depending on the persistence of excitation of the optimal controller $\pi^*$: the explore and commit approach, or adaptive control with closed-loop estimate updates.

Adaptive Control via Explore and Commit Approach

Similar to the prior setting, in the explore and commit approach, Algorithm 2 has two phases: an exploration (warm-up) phase of duration $T_w = O(T^{2/3})$ and an exploitation phase.

Warm-up: Algorithm 2 uses $u_t \sim \mathcal{N}(0, \sigma_u^2 I)$ for exploration. The exact $T_w$ is given in Appendix D; it guarantees reliable estimation of the system parameters and the stability of the OFU-based controller.

Exploitation: At the end of the warm-up, Algorithm 2 estimates the Markov parameters of the ARX system via (5) and constructs confidence sets $(\mathcal{C}_A, \mathcal{C}_B, \mathcal{C}_C, \mathcal{C}_F)$ for the system parameters, up to a similarity transform, using SYSID-ARX, a variant of the Ho-Kalman realization algorithm (Ho and Kálmán, 1966). The procedure is similar to SYS-ID of Lale et al. (2021), and the details are given in Appendix F.
Algorithm 2 then deploys the OFU principle and chooses the optimistic system parameters $\tilde{\Theta}$ that lie in the intersection of the confidence sets and $\mathcal{S}$. Finally, Algorithm 2 constructs the optimal control law for $\tilde{\Theta}$ via (9) and (10) and executes it for the remaining $T - T_w$ time-steps.

Theorem 5 Given an unknown ARX system with convex quadratic cost, Algorithm 2 with the explore and commit approach attains $\mathrm{REGRET}(T) = \tilde{O}(T^{2/3})$ with high probability.

The proof is in Appendix F. In the proof, we first show that the choice of $T_w$ guarantees persistence of excitation in the open-loop data and the stability of the inputs and outputs. Then, we derive the Bellman optimality equation for ARX systems, which we use for decomposing the regret via the OFU principle. This roughly gives $\mathrm{REGRET}(T) \lesssim T_w + T\,\varepsilon(T_w) \approx T_w + T/\sqrt{T_w} = \tilde{O}(T^{2/3})$ for $T_w = \tilde{O}(T^{2/3})$.

Adaptive Control with Closed-Loop Model Estimate Updates

Before describing Algorithm 2 with closed-loop model estimate updates, we need a further condition such that the optimal controller for the underlying ARX system persistently excites the system. This is again a mild condition: it essentially requires that a significantly wide matrix, formed via the optimal controller, which maps the past $e_t$ to the inputs and outputs, is full row rank. The precise condition is given in Appendix B. Note that if the system parameter estimates are accurate enough, the controller designed with the estimated parameters also persistently excites the ARX system.

Similar to the strongly convex cost setting, in the adaptive control with closed-loop estimates approach, Algorithm 2 has two phases: a fixed-length warm-up phase and adaptive control in epochs.

Warm-up: Algorithm 2 uses $u_t \sim \mathcal{N}(0, \sigma_u^2 I)$ for a fixed warm-up duration $\tau$, which grants refined estimates of the system parameters, persistence of excitation, and stability for the adaptive control phase.

Adaptive control in epochs: After warm-up, Algorithm 2 starts adaptive control in epochs of doubling length, i.e., the $i$'th epoch has duration $2^{i-1}\tau$. At the beginning of the $i$'th epoch, it estimates the system parameters via (5), constructs the confidence sets, and deploys the OFU principle to recover an optimistic model $\tilde{\Theta}_i$. Finally, it executes the optimal control law for $\tilde{\Theta}_i$ until the end of epoch $i$. Thus, the main difference from the explore and commit approach is the use of closed-loop data to further refine the model estimates. This improves the regret performance; the proof is in Appendix F.

Theorem 6 Given an unknown ARX system with convex quadratic cost whose optimal controller persistently excites the system, Algorithm 2 with closed-loop model estimate updates attains $\mathrm{REGRET}(T) = \tilde{O}(\sqrt{T})$ with high probability.

Related Works

System Identification: The classical open- or closed-loop system identification methods mostly consider the asymptotic performance of the proposed algorithms or present positive and negative empirical studies (Verhaegen, 1994; Forssell and Ljung, 1999; Van Overschee and De Moor, 1997; Ljung, 1999). These works mostly consider LQR or LQG systems in their state-space form. However, Chiuso and Picci (2005) and Jansson (2003) provide asymptotic studies of closed-loop system identification of LQG systems in predictor form, which corresponds exactly to the ARX-system formulation of LQG. Moreover, ARX systems in particular have been studied extensively from the system identification perspective due to their input-output form (Diversi et al., 2010; Bercu and Vazquez, 2010; Sanandaji et al., 2011; Stojanovic et al., 2016).
In these works, the authors discuss the role of persistence of excitation in consistent asymptotic recovery of ARX system parameters. On the other hand, finite-time learning guarantees, which are the focus of this work, were not known.

Adaptive Control: The classical works in adaptive control also study the asymptotic performance of the designed controllers (Lai et al., 1982; Lai and Wei, 1987; Fiechter, 1997). In the ARX systems setting, Prandini and Campi (2000a,b) and Campi and Kumar (1998) study the asymptotic convergence to the optimal controller of ARX systems using an early interpretation of the OFU principle. The current paper is the finite-time counterpart of these studies and completes an important part of the picture in adaptive control of ARX systems by providing optimal regret guarantees. It also extends the prior efforts in adaptive control of LQR and LQG systems from the regret minimization perspective to the general ARX systems setting (Abbasi-Yadkori and Szepesvári, 2011; Dean et al., 2018; Abeille and Lazaric, 2018; Agarwal et al., 2019a,b; Cohen et al., 2019; Faradonbeh et al., 2018, 2020a; Lale et al., 2020a,b,c, 2021; Mania et al., 2019; Simchowitz and Foster, 2020; Simchowitz et al., 2020).

In Appendix A, after introducing some technical properties used in the proofs of the regret guarantees of Algorithm 1, we show that the performance of LDC policies can be well-approximated by DFC policies. We provide the precise definition of persistence of excitation for both the warm-up and adaptive control periods in Appendix B. In Appendices C and D, we give precise warm-up durations for Algorithms 1 and 2, respectively. The technical details of Algorithm 1, as well as the proofs of Theorems 3 and 4, are given in Appendix E. The details of Algorithm 2 and the proofs of Theorems 5 and 6 are given in Appendix F, where the proofs build on the Bellman optimality equation for ARX systems provided in Appendix G.

Appendix A. LDC Policies and DFC Policies

Recall that LDC policies have the following construction: $s^\pi_{t+1} = A_\pi s^\pi_t + B_\pi y_t$, $u^\pi_t = C_\pi s^\pi_t + D_\pi y_t$. Therefore, using the ARX system (1), we obtain the induced closed-loop system, whose parameters we denote $(A'_\pi, B'_\pi, C'_\pi, D'_\pi)$. The Markov operator $G'_\pi$ of the system $(A'_\pi, B'_\pi, C'_\pi, D'_\pi)$ is defined analogously.

Definition 7 (Proper Decay Function) $\psi : \mathbb{N} \to \mathbb{R}_{\geq 0}$ is a proper decay function if $\psi$ is nonincreasing and $\lim_{h'\to\infty}\psi(h') = 0$. For a Markov operator $G$, $\psi_G(h)$ denotes the induced decay function on $G$, i.e., $\psi_G(h) := \sum_{i\geq h}\|G^{[i]}\|$. This decay represents the effect of past system inputs on the system output. For stable (open- or closed-loop) systems, the Markov operator can be bounded trivially. This motivates the following policy class for ARX systems.

Definition 8 (LDC policies with proper decay function) $\Pi(\psi)$ denotes the class of LDC policies associated with a proper decay function $\psi$, such that for all $\pi \in \Pi(\psi)$ and all $h \geq 0$, $\sum_{i\geq h}\|G'^{[i]}_\pi\| \leq \psi(h)$.

In order to provide a clean analysis, for the policy class $\Pi(\psi)$, let $\kappa_\psi := \psi(0)$, so that $\sum_{i\geq 0}\|G'^{[i]}_\pi\| \leq \kappa_\psi$. This class corresponds to stabilizing LDC policies. Moreover, the analogous quantity for the open-loop system is finite as well; this follows from the assumption that $A + FC$ is stable. Thus, the output $u^\pi_t$ of the LDC policy admits an expansion in terms of the past output uncertainties. Using the definitions of $y^\pi_t$ and $y^M_t$ and subtracting the two resulting equations, one bounds their difference by the tail of the decay function. This shows that for any LDC policy, its DFC approximation provides comparable performance.
Therefore, one can deduce that any stabilizing LDC policy can be well approximated by a DFC belonging to a suitably bounded set of DFCs, indicating that using the class of DFC policies as an approximation to LDC policies is justified.

Appendix B. Persistence of Excitation

In this section, the precise persistence of excitation conditions on the inputs are provided. First, open-loop persistence of excitation is considered in Appendix B.1, following an analysis similar to Lale et al. (2020c). Then, persistence of excitation in adaptive control is analyzed. We assume that, throughout the interaction with the system, the agent has access to a convex compact set of DFCs $\mathcal{M}_r$, which is an $r$-expansion of $\mathcal{M}$, such that $\kappa_M = \kappa_\psi(1+r)$ and all controllers $M \in \mathcal{M}_r$ persistently excite the ARX system. The persistence of excitation condition for the given set $\mathcal{M}_r$ is formally defined in Appendix B.2, and in Appendix B.3 we show that persistence of excitation is achieved by the policies that Algorithm 1 and Algorithm 2 deploy. In the following, $\bar\phi_t = S\phi_t$ for a suitable permutation matrix $S$.

B.1. Persistence of Excitation in the Warm-up for Algorithms 1 & 2

The following guarantee holds for both Algorithm 1 and Algorithm 2, since their warm-ups share the same sub-routine. Recall the state-space form of the ARX system in (1). During the warm-up period, $t \leq T_{\mathrm{warm}}$, the input to the system is $u_t \sim \mathcal{N}(0, \sigma_u^2 I)$. From the evolution of the system with this input, $\bar\phi_t$ can be represented via a matrix $G^o$ acting on the recent noise and input vectors, plus a residual vector $r^o_t$ that represents the effect of $[e_i\; u_i]$ for $0 \leq i < t - h$, which are independent. Notice that $G^o$ is full row rank even for $h = 1$, due to its first identity block.

During the warm-up period, for all $1 \leq t \leq T_{\mathrm{warm}}$, $\Sigma(x_t) \preceq \Gamma_\infty$, where $\Gamma_\infty$ is the steady-state covariance matrix of $x_t$. From the stability of $A + FC$, we guarantee that the steady state is bounded. For simplicity, assume that for a finite $\Phi(A+FC)$, $\|(A+FC)^\tau\| \leq \Phi(A+FC)\rho(A+FC)^\tau$ for all $\tau \geq 0$. This assumption is mild and can be trivially replaced by the strong stability condition introduced in Cohen et al. (2018), which is merely a quantification of stability for the analysis. Using this, for all $1 \leq t \leq T_{\mathrm{warm}}$, with probability $1 - \delta/2$, we have a uniform bound on $\|x_t\|$, and we can conclude that during the warm-up phase $\max_{1 \leq t \leq T_{\mathrm{warm}}}\|\phi_t\| \leq \Upsilon_w\sqrt{h}$, where $\Upsilon_w = \|C\| X_w + E + U_w$. With this, we are ready to state the persistence of excitation guarantee for the inputs during the warm-up period. To this end, define
$$T_{wp} = \frac{32\,\Upsilon_w^4\, h^2\, \log\!\big(2h(m+p)/\delta\big)}{\sigma_{\min}^4(G^{ol})\,\min\{\sigma_e^4, \sigma_u^4\}}.$$

Lemma 9 If the warm-up duration satisfies $T_{\mathrm{warm}} \geq T_{wp}$, then for $T_{wp} \leq t \leq T_{\mathrm{warm}}$, with probability at least $1 - \delta$, the empirical covariance of the regressors is bounded below in terms of $\sigma_o := \sigma_{\min}(G^{ol})$.

Proof The proof follows similarly to Lale et al. (2020b). Using the fact that each block row of $G^{ol}$ is full row rank, via QR decomposition we obtain a factorization with positive numbers on the diagonal. The first matrix in the QR decomposition is full rank, and since all rows of the second matrix are in row echelon form, the second matrix is also full row rank. Therefore $G^{ol}$ is full row rank, which gives a lower bound on the population covariance in terms of $\Sigma_{e,u} \in \mathbb{R}^{2(m+p)h\times 2(m+p)h} = \mathrm{diag}(\sigma_e^2, \sigma_u^2, \ldots, \sigma_e^2, \sigma_u^2)$ for $t \geq T_{\mathrm{warm}}$. Using Theorem 21 and (15)-(18), we obtain concentration of the empirical covariance around its mean, which holds with probability $1 - \delta/2$. Using Weyl's inequality, during the warm-up period, with probability $1 - \delta$, the empirical covariance is bounded below. For the given choice of $T_{wp}$, we obtain the advertised lower bound.
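As a rough numerical illustration of what Lemma 9 asserts (this is not a step of the paper's analysis), one can inspect the smallest eigenvalue of the empirical covariance of the warm-up regressors from the earlier sketches:

```python
# Rough numerical illustration of persistence of excitation: under Gaussian
# warm-up inputs, the smallest eigenvalue of the empirical regressor covariance
# (1/t) sum_i phi_i phi_i^T should stay bounded away from zero, as in Lemma 9.
cov = sum(np.outer(ph, ph) for ph in Phi) / len(Phi)
print("lambda_min of empirical covariance:", np.linalg.eigvalsh(cov)[0])
```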
Recalling Theorem 2 and using Lemma 9, at the end of the warm-up the Markov parameter estimation error is of order $1/\sqrt{T_{\mathrm{warm}}}$, with probability at least $1 - 2\delta$.

B.2. Persistence of Excitation Condition for Algorithm 1: PE of $M \in \mathcal{M}_r$

In order to derive the persistence of excitation condition, assume that the underlying system is known. Then the inputs and outputs of the system can be written in terms of the noise via the closed-loop maps. Note that in this case $b_t = e_t$, whose covariance matrix satisfies $\Sigma_E \succ \sigma_e^2 I$ by Assumption 2.1. This yields a lower bound on the covariance of the covariates, and the following gives the persistence of excitation condition:

Persistence of Excitation of $M \in \mathcal{M}_r$ on ARX Systems. For the given ARX system $\Theta$, for $t \geq 2h + h'$, the matrix $T_G T_{M_t} + \bar{O}_t$ is full row rank for all $M \in \mathcal{M}_r$, i.e., its minimum singular value is bounded away from zero.

Note that, for simplicity of the analysis, the length of the estimated Markov operator $G$ in (5) and of the Markov operator used to recover the output uncertainties $b(G)$ are chosen to be the same in the main text. However, in practice, the length of the estimator could be increased to satisfy the persistence of excitation condition.

B.3. Persistence of Excitation in the Adaptive Control Period of Algorithm 1

In this section, we show that the Markov parameter estimates $(\hat{G}_t)$ throughout the adaptive control period of Algorithm 1 are close enough to the underlying parameters that the controllers designed via these estimates do not violate the persistence of excitation condition. Define $T_G$ such that, for an appropriate $\epsilon \leq 1$, the estimation error stays below $\epsilon$.

Lemma 10 After $T_c$ time steps of the adaptive control period of Algorithm 1, with probability $1 - 3\delta$, the covariance lower bound holds for the remainder of the adaptive control period.

Proof During the adaptive control period, at time $t$, the input of Algorithm 1 is given by (7). This gives the corresponding input and output at time $t$, where $r_t = \sum_{k=h+1}^{t-1}\big(G^k_{u\to y}u_{t-k} + G^k_{y\to y}y_{t-k}\big)$. Moreover, we have Assumption E.1 on the choice of $h$, which is satisfied under the minimal assumption of stability. From Lemma 13 and Lemma 12, we get $\|u^{\Delta b}(t)\| \leq \kappa_M\kappa_y\epsilon_G(1,\delta)$ for all $t$ in the adaptive control period, where $\kappa_M = \kappa_\psi(1+r)$. Using the definitions from Appendix B.2, $\phi_t$ can be written accordingly. Using Lemma 13, for all $t \geq T_{\mathrm{warm}}$, let $\Upsilon_c := \kappa_y + \kappa_u$; Lemma 13 gives $\|\phi_t\| \leq \Upsilon_c\sqrt{h}$ with probability at least $1 - 2\delta$. Therefore, for a chosen $M \in \mathcal{M}_r$, using Theorem 21, we have the corresponding covariance lower bound with probability $1 - 3\delta$. Finally, a standard covering argument is utilized to show that this holds for any chosen $M \in \mathcal{M}_r$. From Lemma 5.4 of Simchowitz et al. (2020), the Euclidean diameter of $\mathcal{M}_r$ is at most $2\kappa_M\min\{m,p\}$, i.e., $\|M_t\|_F \leq \kappa_M\min\{m,p\}$ for all $M_t \in \mathcal{M}_r$; thus we can upper bound the covering number accordingly. This gives the result for all centers of $\epsilon$-balls, for all $t \geq T_{\mathrm{warm}}$, with probability $1 - 3\delta$. Considering all $M$ in the $\epsilon$-balls, i.e., the effect of an $\epsilon$-perturbation in $\|M\|_F$, and using Weyl's inequality, we have the bound with probability at least $1 - 3\delta$ for $\epsilon \leq 1$. For the appropriate choice of $\epsilon$, and picking $T_{\mathrm{warm}} \geq T_c$, we can guarantee that after $T_c$ time steps in the first epoch of adaptive control, we obtain the lower bound.

Recalling Theorem 2 and using Lemma 10, for every adaptive control epoch $i$, the estimation error decays with the closed-loop sample size, with probability at least $1 - 4\delta$.

B.4. Persistence of Excitation Condition for Algorithm 2: PE of the optimal controller of the ARX system

After the warm-up phase, for $t \geq T_{\mathrm{warm}}$, Algorithm 2 executes the input $u_t = \hat{K}^x_t x_t + \hat{K}^y_t y_t$. Using the state-space representation of the ARX model, we obtain the closed-loop evolution of the state $x_t$ and the output $y_t$.
Thus for $f_t$ we obtain the corresponding expression; rolling it back for $h$ time steps, we get a representation in which $r^c_t$ is the residual vector that represents the effect of $e_i$ for $0 \leq i < t - h$, which are independent. Using this, one can write the full characterization of $\bar\phi_t$. If the underlying system were known, then the optimal control law for the ARX system could be applied to control the system. In the following, $G^{cl}$ is the closed-loop mapping of the noise process to the covariates $\bar\phi$ via the optimal policy, and $\bar{G}$ corresponds to the truncated closed-loop noise-to-covariate Markov operator. Notice that if $\bar{G}$ is full row rank then, following an approach similar to the proof of Lemma 9, $G^{cl}$ is also full row rank. Thus, we have the following persistence of excitation condition on the optimal control law for the ARX system:

Persistence of Excitation of the Optimal Control Policy on ARX Systems. The length of the Markov operator to be estimated is chosen such that $\bar{G}$, formed via the optimal control policy of the given ARX system, is full row rank. Thus $\sigma_{\min}(G^{cl}) > \bar\sigma_c > 0$.

B.5. Persistence of Excitation in the Adaptive Control Period of Algorithm 2

Finally, in this section we show that the Markov parameter estimates $(\hat{G}_t)$ throughout the adaptive control period of Algorithm 2 are close enough to the underlying parameters that the optimistic controllers designed via these estimates still persistently excite the ARX system. To this end, define $\bar{T}_c$ analogously, where $T_{\mathrm{param}}$ is the number of samples required to get estimation error below $1$ on the system parameters, as defined in Section F.1. Moreover, $\kappa_{K_x}$ and $\kappa_{K_y}$ are bounds on the optimistic controllers within $\mathcal{S}$, due to the boundedness of the set. Finally, let $\bar{G}_r$ denote an upper bound on $\|G^{cl}\|$ constructed via any ARX model parameters in the set $\mathcal{S}$.

Lemma 11 After $\bar{T}_c$ time steps of the adaptive control period of Algorithm 2, with probability $1 - 3\delta$, the covariance lower bound holds for the remainder of the adaptive control period.

Proof Let $\tilde{G}^{cl}$ be the closed-loop mapping of the noise process to the covariates via the optimal policy for the optimistically chosen ARX parameters. Recall that, via the persistence of excitation condition on the optimal controller, picking $T_{\mathrm{warm}} \geq T_{G^{cl}}$ guarantees that in the adaptive control period of Algorithm 2 we have $\|G^{cl}_t - G^{cl}\| \leq \bar\sigma_c/2$, which in turn gives $\sigma_{\min}(G^{cl}_t) \geq \bar\sigma_c/2$ via Weyl's inequality. Thus, for all $t \geq T_{\mathrm{warm}}$, we have a covariance lower bound with $\Sigma_e \in \mathbb{R}^{2mh\times 2mh} = \mathrm{diag}(\sigma_e^2, \ldots, \sigma_e^2)$, which lower-bounds $\sigma_{\min}(\mathbb{E}[\bar\phi_t\bar\phi_t^\top])$ (this holds with probability at least $1 - 2\delta$; see Section F for the bound on the inputs and outputs during the execution of Algorithm 2). Therefore, for a chosen optimistic model, using Theorem 21, we have the corresponding bound with probability $1 - 3\delta$. Finally, a standard covering argument is utilized to show that this holds for any chosen optimistic model. We know that $\|G^{cl}_t\|_F \leq \bar{G}_r$ for all $G^{cl}_t$; thus we can upper bound the covering number accordingly. This gives the result for all centers of $\epsilon$-balls in $\|G^{cl}_t\|_F$, for all $t \geq T_{\mathrm{warm}}$, with probability $1 - 3\delta$. Considering all $G^{cl}$ in the $\epsilon$-balls, i.e., the effect of an $\epsilon$-perturbation in $\|G^{cl}\|_F$, and using Weyl's inequality, we have the bound with probability at least $1 - 3\delta$. For the appropriate choice of $\epsilon$, and picking $T_{\mathrm{warm}} \geq \bar{T}_c$, we can guarantee that after $\bar{T}_c$ time steps in the first epoch of adaptive control, we obtain the lower bound.
The warm-up duration of Algorithm 2 with the explore and commit approach must guarantee:
• persistence of excitation during the warm-up period, to have reliable estimates for the exploitation phase: $T_{wp} = \frac{32\,\Upsilon_w^4 h^2\log(2h(m+p)/\delta)}{\sigma_{\min}^4(G^{ol})\min\{\sigma_e^4,\sigma_u^4\}}$, as in Section B.1;
• reliable ARX system parameter estimation: $T_N = T_G\,\frac{8h}{\sigma_n^2(H)}$, where $T_G$ is the warm-up duration needed to get at most unit-norm estimation error at the end of the warm-up phase and $\sigma_n(H)$ is the $n$-th singular value of the Hankel matrix constructed from the Markov parameters, as in Section F.1;
• stability of the inputs and outputs throughout Algorithm 2, via $T_u$ and $T_y$ in Section F.
Therefore, for the explore and commit approach of Algorithm 2, the warm-up duration is chosen as the maximum of these quantities.

D.2. Closed-Loop Model Estimate Updates

In the closed-loop model estimate variation of Algorithm 2, the warm-up duration does not depend on the time horizon. Instead, the warm-up phase should guarantee:
• persistence of excitation during the adaptive control period.

E.1. Counterfactual Reasoning of Algorithm 1

For $1 \leq j \leq h$, the algorithm forms what the inputs to the system would have been under the current policy and the current Markov parameter estimates. Then the counterfactual output is computed, where $\hat{G}^j_i = \hat{C}(\hat{A} + \hat{F}\hat{C})^j\hat{B}$ is obtained from the Markov parameter estimates. This is an estimate of the output of the system if the counterfactual inputs had been applied by the agent. Finally, Algorithm 1 computes the counterfactual loss $f_t$ using $\ell_t$, and deploys projected online gradient descent on $f_t$ to improve the controller at each time step.

E.2. Estimation and Boundedness Lemmas and Main Regret Results

In this section, we present the exact statements of Theorem 3 and Theorem 4; their proofs are of a similar nature. Before stating these results, we state the following assumption on the choice of $h$, which simplifies the presentation and can easily be satisfied due to the open-loop stability of the ARX system, where $\psi_G$ is the induced decay function on the Markov operator $G$, i.e., $\psi_G(h) := \sum_{i\geq h}\|G^{[i]}\|$. Note that, combining the choice of warm-up durations given in Appendix C with the results in Appendix B.1, we guarantee that the open-loop data is persistently excited and that the estimation error rate at the end of the warm-up phase is $O(1/\sqrt{T_{\mathrm{warm}}})$. In the following, we show that with this choice of $T_{\mathrm{warm}}$ the Markov operator estimates are well refined. First define $\alpha$ in terms of $\alpha_{\mathrm{loss}}$, $\sigma_e^2$, $\sigma_{\min}(C)$, and $\|A+FC\|_2$ as in (44). Combining Theorem 2, Lemma 22, and Lemma 13, we also have $\beta_t \leq \bar\beta$ for all $t \geq T_{\mathrm{warm}}$, for a suitable constant $\bar\beta$. In the following, we show that the sum of the Markov parameter estimation errors is well bounded for this particular choice of warm-up time; the following lemma (Lemma 12) is key in proving the regret results. Its first inequality follows from the estimation guarantees; the second inequality is numerical and follows from the choice of $T_{\mathrm{warm}} \geq \max\{T_{cx}, T_G, T_r\}$.

Finally, we show that the estimates of the output uncertainties $b_t(\hat{G}_i)$, the DFC policy inputs $u^{M_t}_t$, and the outputs $y_t$ of the ARX system are bounded. To this end, for some $\delta \in (0,1)$, define $\kappa_b = R\sqrt{2m\log(2mT/\delta)}$ and $\kappa_{u_b} = \sigma_u\sqrt{2p\log(2pT/\delta)}$.

Lemma 13 (Boundedness Lemma) Let $\delta \in (0,1)$. For the chosen warm-up duration of Algorithm 1 with the explore and commit approach, i.e. $T_{\mathrm{warm}} = T_w$, and for the chosen warm-up duration of Algorithm 1 with closed-loop model estimate updates, the bounds on $b_t(\hat{G}_i)$, $u^{M_t}_t$, and $y_t$ hold for all $t$ with probability at least $1 - 2\delta$. This lemma follows from the standard sub-Gaussian vector norm inequality and the second inequality in Lemma 12.
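A schematic of the controller update described above, projected online gradient descent on the counterfactual loss, might look as follows. The gradient argument and the Frobenius-ball model of $\mathcal{M}_r$ are simplifying assumptions; only the step size $\eta_t = 12/(\alpha t)$ is taken from the theorems below.

```python
import numpy as np

# Schematic of the controller update from Appendix E (illustrative): projected
# online gradient descent on a counterfactual loss f_t. The set M_r is modeled
# here as a Frobenius-norm ball, which is a simplifying assumption.
kappa_M = 1.0                          # placeholder radius of the DFC set M_r

def project(M_list, radius=kappa_M):
    """Project the stacked DFC parameters back onto a Frobenius-norm ball."""
    norm = np.sqrt(sum(np.sum(Mi**2) for Mi in M_list))
    scale = min(1.0, radius / max(norm, 1e-12))
    return [scale * Mi for Mi in M_list]

def ogd_step(M_list, grad_list, t, alpha=1.0):
    """One step of projected OGD with the step size eta_t = 12/(alpha t)
    used in Theorems 14 and 15 for strongly convex losses."""
    eta = 12.0 / (alpha * t)
    return project([Mi - eta * Gi for Mi, Gi in zip(M_list, grad_list)])
```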
In the following, the precise statements of Theorem 3 and Theorem 4 are given; since their proofs differ in only one place, only the proof of Theorem 4 is given, with an explanation of the difference.

Theorem 14 (Precise Statement of Theorem 3) If the loss function follows (8) for the given ARX system, then Algorithm 1 with step size $\eta_t = \frac{12}{\alpha t}$, using the explore and commit approach after a warm-up period $T_{\mathrm{warm}} = T_w$ given in (36), has its regret bounded as stated, with probability at least $1 - 5\delta$. The choice of $T_{\mathrm{warm}}$ guarantees that the regret is $\tilde{O}(\sqrt{T})$ with probability at least $1 - 5\delta$.

Theorem 15 (Precise Statement of Theorem 4) If the loss function follows (8) for the given ARX system, then Algorithm 1 with access to a persistently exciting $\mathcal{M}_r$, with step size $\eta_t = \frac{12}{\alpha t}$, using closed-loop model estimate updates via doubling epochs after a warm-up period $T_{\mathrm{warm}} = \tau$ given in (37), has its regret bounded as stated, with probability at least $1 - 5\delta$. The choice of $T_{\mathrm{warm}}$ guarantees that the regret is $O(\mathrm{polylog}(T))$ with probability at least $1 - 5\delta$.

Notice that both theorems have the same regret decomposition, but the variant with closed-loop estimates improves the estimation error during adaptive control, which leads to the significantly improved regret rate. The following gives the proof of both.

Proof The proof follows the regret decomposition of Simchowitz et al. (2020). We first study the error in the gradients of the counterfactual losses. Let $y^{\mathrm{pred}}_t$ denote the prediction of the output when the true system uncertainty and Markov parameters are known, i.e., the true counterfactual output of the system. Moreover, let $f^{\mathrm{pred}}_t(M)$ denote the true counterfactual loss calculated from the true counterfactual outputs and inputs, as defined in Definition 8.1 of Simchowitz et al. (2020). These only carry a truncation error due to the representation up to length $h$. Using Lemma 23, we have that for any epoch $i$ and any time step $t \in [t_i, \ldots, t_{i+1}-1]$, the gradient of $f^{\mathrm{pred}}_t(M)$ is close to the gradient of the loss function of Algorithm 1.

Pick a comparison controller $M_{\mathrm{comp}} \in \mathcal{M}_r(h', \kappa_M)$. For the competing set $\mathcal{M}(h'_0, \kappa_\psi)$, we have the following regret decomposition, whose terms are analyzed separately. Algorithm Truncation Error: from (8), we obtain the truncation bound.

Once the Markov parameters $\hat{G}_t$ are estimated, Algorithm 2 constructs confidence sets for the unknown ARX model parameters and chooses an optimistic controller within these confidence sets. Algorithm 2 uses SYSID-ARX to recover the model parameters. SYSID-ARX is similar to SYS-ID of Lale et al. (2020b) and internally follows a method similar to the Ho-Kalman method (Ho and Kálmán, 1966); SYSID-ARX is given in Algorithm 3.

Algorithm 3 SYSID-ARX
1: Input: $\hat{G}_t$, $h$, $n$, $d_1$, $d_2$ such that $d_1 + d_2 + 1 = h$
2: Form two $d_1 \times (d_2+1)$ Hankel matrices $\mathcal{H}_{G_{y\to y}}$ and $\mathcal{H}_{G_{u\to y}}$ from $\hat{G}_t$ and construct $\hat{\mathcal{H}}_t$
3: Obtain $\hat{\mathcal{H}}^-_t$ by discarding the $(d_2+1)$-th and $(2d_2+2)$-th block columns of $\hat{\mathcal{H}}_t$
4: Using the SVD, obtain the best rank-$n$ approximation of $\hat{\mathcal{H}}^-_t$, denoted $\hat{\mathcal{N}}_t \in \mathbb{R}^{md_1\times(m+p)d_2}$
8: Obtain $\hat{C}_t \in \mathbb{R}^{m\times n}$, the first $m$ rows of $\hat{\mathcal{O}}_t$
9: Obtain $\hat{B}_t \in \mathbb{R}^{n\times p}$, the first $p$ columns of $\hat{\mathcal{C}}^B_t$
10: Obtain $\hat{F}_t \in \mathbb{R}^{n\times m}$, the first $m$ columns of $\hat{\mathcal{C}}^F_t$
11: Obtain $\hat{\mathcal{H}}^+_t$ by discarding the 1st and $(d_2+2)$-th block columns of $\hat{\mathcal{H}}_t$

Using the analysis of Oymak and Ozay (2018), we get the guarantee that there exists a unitary transform $T$ such that the estimates recover the true parameters up to $T$. Using Lemma 5.2 of Oymak and Ozay (2018), we get perturbation bounds for the recovered parameters. Following an analysis similar to Lale et al. (2021),
we obtain the required parameter estimation bounds. Combining these with Theorem 2 gives the confidence sets required for Algorithm 2. Note that even though the estimated system parameters are recovered only up to a similarity transformation, the cost $J(\cdot)$ achieved by any set of parameters under the same similarity transformation is unchanged, which allows the search for the optimistic cost, with probability at least $1 - 3\delta$. Note that this shows the ARX system is estimated accurately enough that the system stays stable throughout the exploitation or adaptive control phase of Algorithm 2.

In the following, the precise statements of Theorem 5 and Theorem 6 are given; since their proofs differ in only one place, only the proof of Theorem 5 is given, with an explanation of the difference.

Theorem 18 (Precise Statement of Theorem 5) Let $\delta \in (0,1)$. Given an unknown ARX system $\Theta = (A, B, C, F)$ and regulating parameters $Q \succeq 0$ and $R \succ 0$, if Algorithm 2 with the explore and commit approach and warm-up duration $T_{\mathrm{warm}} \geq \max\{h, T_{wp}, T_N, T_u, T_y\}$ interacts with the system for $T$ time steps in total, with $T > T_{\mathrm{warm}}$, then with probability at least $1 - 5\delta$ the regret of Algorithm 2 is bounded as stated. The choice $T_{\mathrm{warm}} = T^{2/3}$, i.e. $T_{\mathrm{warm}} \geq T_w$ in (38), guarantees that the regret is $\tilde{O}(T^{2/3})$ with probability at least $1 - 5\delta$.

Theorem 19 (Precise Statement of Theorem 6) Let $\delta \in (0,1)$. Given an unknown ARX system $\Theta = (A, B, C, F)$ and regulating parameters $Q \succeq 0$ and $R \succ 0$ such that the optimal controller for $\Theta$ persistently excites the system, as defined in Section B.4, if Algorithm 2 with closed-loop model estimate updates and warm-up duration $T_{\mathrm{warm}} \geq \tau$ as given in (39) interacts with the system for $T$ time steps in total, with $T > T_{\mathrm{warm}}$, then with probability at least $1 - 5\delta$ the regret of Algorithm 2 is bounded as stated.

Proof For an average cost per stage problem over infinite state and control spaces, like the given control system $\Theta = (A, B, C, F)$ with regulating parameters $Q$ and $R$, using the optimal average cost per stage $J_\star(\Theta)$ and guessing the correct differential (relative) cost, where $(A + FC, B)$ is controllable, $(A, C)$ is observable, $Q$ is positive semidefinite, and $R$ is positive definite, one can verify that they satisfy the Bellman optimality equation (Bertsekas, 1995). The lemma below gives the Bellman optimality equation for the given system $\Theta$, which is critical in the regret analysis.

Lemma 20 (Bellman Optimality Equation for the ARX System) Given a state $x_t \in \mathbb{R}^n$ and an observation $y_t \in \mathbb{R}^m$ at time $t$, the Bellman optimality equation of the average cost per stage control of the system $\Theta = (A, B, C, F)$ with regulating parameters $Q$ and $R$ is
$$J_\star(\Theta) + (Ax_t + Fy_t)^\top\big(P - PB(R + B^\top PB)^{-1}B^\top P\big)(Ax_t + Fy_t) + y_t^\top Q y_t$$
$$= y_t^\top Q y_t + u_t^\top R u_t + \mathbb{E}\Big[(Ax_{t+1} + Fy_{t+1})^\top\big(P - PB(R + B^\top PB)^{-1}B^\top P\big)(Ax_{t+1} + Fy_{t+1}) + y_{t+1}^\top Q y_{t+1}\Big].$$

We give the proof and the expression for $J_\star(\Theta)$ in Section G. Using the Bellman optimality equation for the optimistic system $\tilde{\Theta}$, we derive a regret decomposition for applying the optimal policy of $\tilde{\Theta}$ on $\Theta$. Note that the similarity transformation in recovering the ARX system parameters does not affect the regret decomposition; thus, without loss of generality, we take the similarity transformation to be the identity.
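For intuition about the realization step inside SYSID-ARX, here is a compact Ho-Kalman-style sketch. It recovers $(A, B, C)$ from input-to-output Markov parameters only; the actual SYSID-ARX additionally processes the output-to-output block to recover $F$, as in Algorithm 3. The indexing conventions below are illustrative assumptions.

```python
import numpy as np

# A compact Ho-Kalman-style realization sketch (illustrative; SYSID-ARX adds
# the output-to-output block and confidence-set constructions on top of this).
# Given Markov parameters G_list[k-1] = C A^{k-1} B, recover (A, B, C) up to
# a similarity transformation.
def ho_kalman(G_list, n, d1, d2):
    m_dim, p_dim = G_list[0].shape
    # Hankel matrix with blocks H[i, j] = G^{i+j+1}, size (d1*m) x ((d2+1)*p).
    H = np.block([[G_list[i + j] for j in range(d2 + 1)] for i in range(d1)])
    H_minus = H[:, : d2 * p_dim]             # drop the last block column
    H_plus = H[:, p_dim:]                    # drop the first block column
    U, s, Vt = np.linalg.svd(H_minus, full_matrices=False)
    O = U[:, :n] * np.sqrt(s[:n])            # observability-like factor
    Ctrl = np.sqrt(s[:n])[:, None] * Vt[:n]  # controllability-like factor
    C_hat = O[:m_dim]                        # first m rows of O
    B_hat = Ctrl[:, :p_dim]                  # first p columns of the factor
    A_hat = np.linalg.pinv(O) @ H_plus @ np.linalg.pinv(Ctrl)  # shift relation
    return A_hat, B_hat, C_hat
```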
Appendix G. Optimal Control of the ARX System with Convex Quadratic Cost and Bellman Optimality

From first principles (Bertsekas, 1995), the value function of the given system is quadratic and, due to the stochasticity, has the form
$$V(x, y) = \begin{bmatrix}x\\y\end{bmatrix}^\top\begin{bmatrix}P_{11} & P_{12}\\P_{21} & P_{22}\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} + \lambda.$$
Using the average cost optimality equation, we can determine the value function for the given system $\Theta$ as follows:
$$\begin{bmatrix}x\\y\end{bmatrix}^\top\begin{bmatrix}P_{11} & P_{12}\\P_{21} & P_{22}\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} + \lambda = \min_u\; y^\top Qy + u^\top Ru + \mathbb{E}\begin{bmatrix}Ax + Bu + Fy\\CAx + CBu + CFy + e\end{bmatrix}^\top\begin{bmatrix}P_{11} & P_{12}\\P_{21} & P_{22}\end{bmatrix}\begin{bmatrix}Ax + Bu + Fy\\CAx + CBu + CFy + e\end{bmatrix}.$$
Expanding everything and minimizing over $u$ gives the optimal control
$$u = -(R + B^\top \mathbf{P}B)^{-1}\big(B^\top \mathbf{P}Ax + B^\top \mathbf{P}Fy\big), \quad \text{where } \mathbf{P} = P_{11} + P_{12}C + C^\top P_{21} + C^\top P_{22}C.$$
Inserting the expression for $u$ and writing $\Pi := \mathbf{P} - \mathbf{P}B(R + B^\top\mathbf{P}B)^{-1}B^\top\mathbf{P}$, we have
$$x^\top P_{11}x + x^\top P_{12}y + y^\top P_{21}x + y^\top P_{22}y + \lambda = (Ax + Fy)^\top\Pi\,(Ax + Fy) + y^\top Qy + \mathrm{Tr}(P_{22}\Sigma_E).$$
From this, we get $\lambda = \mathrm{Tr}(P_{22}\Sigma_E)$, and matching the quadratic terms,
$$\begin{bmatrix}x\\y\end{bmatrix}^\top\begin{bmatrix}P_{11} & P_{12}\\P_{21} & P_{22}\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}x\\y\end{bmatrix}^\top\begin{bmatrix}A^\top\Pi A & A^\top\Pi F\\F^\top\Pi A & Q + F^\top\Pi F\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix},$$
so that $G_1 = A^\top\Pi A$ and $G_2 = A^\top\Pi F$ satisfy all three equations. Thus, we obtain the Bellman optimality equation stated in Lemma 20.
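Under the Riccati equation as reconstructed above, an interpretation rather than the paper's verbatim display (10), the optimal ARX feedback can be computed with a standard DARE solver:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# A sketch of the optimal control law (9)-(10) under the assumptions of this
# rewrite: with predictor dynamics x_{t+1} = A x_t + B u_t + F y_t and
# y_t = C x_t + e_t, P solves the DARE for the pair (A + F C, B) with state
# cost C^T Q C and input cost R, and the feedback is
#   u_t = Kx x_t + Ky y_t,  Kx = -(R + B^T P B)^{-1} B^T P A,
#                           Ky = -(R + B^T P B)^{-1} B^T P F.
def optimal_arx_controller(A, B, C, F, Q, R):
    P = solve_discrete_are(A + F @ C, B, C.T @ Q @ C, R)
    gain = np.linalg.solve(R + B.T @ P @ B, B.T @ P)   # (R + B^T P B)^{-1} B^T P
    Kx, Ky = -gain @ A, -gain @ F
    return P, Kx, Ky
```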
Metrics of constant positive curvature with conical singularities, Hurwitz spaces, and ${\rm det}\, \Delta$

Let $f: X\to {\Bbb C}P^1$ be a meromorphic function of degree $N$ with simple poles and simple critical points on a compact Riemann surface $X$ of genus $g$ and let $\mathsf m$ be the standard round metric of curvature $1$ on the Riemann sphere ${\Bbb C}P^1$. Then the pullback $f^*\mathsf m$ of $\mathsf m$ under $f$ is a metric of curvature $1$ with conical singularities of conical angles $4\pi$ at the critical points of $f$. We study the $\zeta$-regularized determinant of the Laplace operator on $X$ corresponding to the metric $f^*\mathsf m$ as a functional on the moduli space of the pairs $(X, f)$ (i.e. on the Hurwitz space $H_{g, N}(1, \dots, 1)$) and derive an explicit formula for the functional.

Introduction

The determinants of Laplacians on Riemann surfaces often appear in the frameworks of Geometric Analysis (in connection with the Sarnak program [21]) and quantum field theory (in connection with various partition functions). An explicit computation of the determinant of the Laplacian corresponding to the metric of constant negative curvature ([4]; see also [7]) provides an example of a beautiful interplay between the spectral theory and the geometry of moduli spaces of Riemann surfaces.

Due to the Gauss-Bonnet theorem, metrics of constant positive curvature on compact Riemann surfaces are necessarily singular (unless the genus of the surface is equal to zero), and the same is true for metrics of zero curvature (unless the genus is equal to one). The determinants of the Laplacians in flat singular metrics have been intensively studied (see, e.g., [11], [1], [13], [16], [9]); the case of constant positive curvature attracted attention only recently (in particular, in connection with the quantum Hall effect). The only explicit computation of the determinant in the case of constant positive curvature (except for the classical result for the smooth round metric on the sphere [27]) has been done in the case of the sphere with two antipodal conical singularities ([24]; see also [25] for corrections and a relation of this result to quantum physics). According to a result of Troyanov [22], there are only two classes of genus zero surfaces with metrics of constant curvature $1$ with two conical points:

• Surfaces with two antipodal conical singularities (i.e. the distance between them is $\pi$ and they are conjugate points) of the same (arbitrary positive) conical angle.

• Surfaces with two conical points of the same angle $2\pi k$, $k = 2, 3, \dots$; the corresponding conical metric is the pullback $f^*\mathsf m$ of the standard metric $\mathsf m$ of curvature $1$ on ${\Bbb C}P^1$ under a meromorphic function $f: {\Bbb C}P^1 \to {\Bbb C}P^1$ with two critical points.

As we already mentioned, the determinant of the Laplacian on the surfaces of the first class was found in [24,25]. The motivation of this paper comes mainly from the need to compute the determinant of the Laplacian $\Delta^{f^*\mathsf m}$ for the surfaces of the second class. For this determinant we obtain the explicit formula (1.1), which is the most elementary consequence of our main result; here $f: {\Bbb C}P^1 \to {\Bbb C}P^1$ is a meromorphic function with two simple critical points and the corresponding critical values $z_1$ and $z_2$, the constant $C$ is independent of $z_1$ and $z_2$, and $\det'$ is the modified (i.e. with zero mode excluded) $\zeta$-regularized determinant.
The constant $C$ can be found by using the result of [24]: one has to consider a sphere with two antipodal singularities of conical angle $4\pi$ and compare formula (1.1) with the one given in [24]. Our main result generalizes (1.1) to the case of compact Riemann surfaces $X$ of arbitrary genus and arbitrary meromorphic functions $f: X \to {\Bbb C}P^1$ (for simplicity we consider only functions $f$ with simple critical values; the modifications required to consider the general case are insignificant and of no interest, and the result is essentially the same). Let $H_{g,N}(1, \dots, 1)$ be the Hurwitz moduli space of pairs $(X, f)$, where $X$ is a compact Riemann surface of genus $g$ and $f$ is a meromorphic function on $X$ of degree $N$ with $M = 2g - 2 + 2N$ simple critical points. We assume that all the critical values are finite, i.e. the poles of the function $f$ are not critical points and, therefore, are simple. The part $(1, \dots, 1)$ ($N$ times) of the symbol $H_{g,N}(1, \dots, 1)$ encodes the branching scheme over the point at infinity of the base of the ramified covering $f: X \to {\Bbb C}P^1$: the preimage of $\infty \in {\Bbb C}P^1$ consists of $N$ distinct points. The space $H_{g,N}(1, \dots, 1)$ is known to be a connected complex manifold of complex dimension $M$; the critical values $z_1, \dots, z_M$ of the function $f$ can be taken as local coordinates.

Let $\tau$ stand for the Bergman tau-function on the Hurwitz space $H_{g,N}(1, \dots, 1)$ (also known as the isomonodromic tau-function of the Hurwitz Frobenius manifold). Referring the reader to [17], [14], [18] for the definition and properties of this object, we would like to emphasize that explicit expressions for $\tau$ through holomorphic invariants of the Riemann surface (the prime form, theta functions, etc.) and the divisor of the meromorphic function $f$ are known; see [14,15] for genera $g = 0, 1$ and [17,18] for $g \geq 2$.

The metric $f^*\mathsf m$ on $X$ is a conical metric of curvature $1$ with conical singularities at the critical points $P_1, \dots, P_M$ of the function $f$; the conical angle at any critical point is $4\pi$. In the present paper we first show that the operator zeta-function $\zeta(s)$ of the Friedrichs extension of the Laplace operator $\Delta$ is regular at the point $s = 0$ and, therefore, one can define the (modified, i.e. with zero mode excluded) $\zeta$-regularized determinant $\det{}'\Delta$. Then we prove the explicit formula (1.2) for this determinant, where the constant $C$ is independent of the point $(X, f)$ of the space $H_{g,N}(1, \dots, 1)$ and $B$ is the matrix of $b$-periods of the Riemann surface $X$ (in the case $g = 0$ the factor $\det \Im B$ in (1.2) should be omitted). In the simplest case one has $g = 0$, $N = 2$, and $\tau = (z_1 - z_2)^{1/4}$; then (1.2) implies (1.1).

2 Heat kernel asymptotic and $\det'\Delta$

Let $\Delta$ stand for the Friedrichs extension of the Laplace-Beltrami operator on $(X, f^*\mathsf m)$. The asymptotic of $\mathrm{Tr}\, e^{-\Delta t}$ as $t \to 0+$ can be found by methods developed in [2,3,6]. We need some preliminaries before we can formulate the result.

Introduce the local geodesic polar coordinates $(r, \varphi)$ on $(X, f^*\mathsf m)$ with center at $P_k$, where $\varphi \in [0, 4\pi)$ and $r \in [0, \epsilon]$, with $\epsilon$ smaller than the distance from $P_k$ to any other conical singularity. In the coordinates $(r, \varphi)$ the metric $f^*\mathsf m$ takes the form $f^*\mathsf m(r, \varphi) = dr^2 + \sin^2 r\, d\varphi^2$. Let $h(r) = 2\sin r$ and $\psi = \varphi/2 \in S^1$. Consider the family of selfadjoint operators $A(r)$ in $L^2(S^1)$ with the domain $H^2(S^1)$, defined so that this family is related to $\Delta$ in the following way: in a small neighbourhood of $P_k$ the Laplacian can be written as
$$\Delta = h^{-1/2}\big(-\partial_r^2 + r^{-2}A(r)\big)\,h^{1/2},$$
acting in $L^2(h\, dr\, d\psi)$.
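As a quick sanity check of the statement that $f^*\mathsf m$ has conical angle $4\pi$ at a simple critical point (this is not a computation from the paper), one can compare the circumference and geodesic radius of small circles around $w = 0$ for $f(w) = w^2$:

```python
import numpy as np

# A small numerical illustration (not from the paper) that the pullback f*m has
# conical angle 4*pi at a simple critical point. For f(w) = w^2, the length
# element of f*m is 2|f'(w)| / (1 + |f(w)|^2) |dw|, and the ratio of the
# circumference to the geodesic radius of small circles tends to 4*pi.
def conformal_density(w):
    fw, dfw = w**2, 2*w
    return 2*abs(dfw) / (1 + abs(fw)**2)

for r in [1e-1, 1e-2, 1e-3]:
    thetas = np.linspace(0, 2*np.pi, 4000, endpoint=False)
    circumference = sum(conformal_density(r*np.exp(1j*t)) for t in thetas) * r * (2*np.pi/4000)
    radius = sum(conformal_density(t) for t in np.linspace(1e-9, r, 2000)) * (r/2000)
    print(r, circumference / radius / np.pi, "(should approach 4)")
```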
The operator $L = -\partial_r^2 + r^{-2}A(r)$ falls into the class of operators studied in [3], as $A(r)$ satisfies the requirements [3, (A1)-(A6), page 373]. Then [3, Thm 5.2 and Thm 7.1] imply that for any smooth cut-off function $\varrho$ supported sufficiently close to the singularity $P_k$ and such that $\varrho = 1$ in a small vicinity of $P_k$, the trace $\mathrm{Tr}\,\varrho\, e^{-\Delta t}$ admits an asymptotic expansion (2.2) with some coefficients $A_j$, $B_j$, and $C_j$ and a sequence $\{\alpha_j\}$ of complex numbers with $\Re\alpha_j \to -\infty$. Moreover, the coefficient before $t^0\log t$ in this asymptotic is given by $\frac{1}{4}\mathrm{Res}\,\zeta(-1)$, where $\zeta$ stands for the $\zeta$-function of $(A(0) + 1/4)^{1/2}$; see [3, f-la (7.24)]. Clearly, $A(0) = -2^{-2}\partial_\psi^2 - 1/4$, and the $\zeta$-function of $(A(0) + 1/4)^{1/2}$ is given by
$$\zeta(s) = 2^{s+1}\zeta_R(s),$$
where $\zeta_R$ is the Riemann zeta function. Thus $\mathrm{Res}\,\zeta(-1) = 0$ and the term with $t^0\log t$ in (2.2) is absent.

For a cut-off function $\rho$ supported outside of the conical points $P_1, \dots, P_M$, the short time asymptotic $\mathrm{Tr}\,(1-\rho)e^{-\Delta t} \sim \sum_{j\geq-2} a_j t^{j/2}$ can be obtained in the standard way from the formulas for the parametrix $B_N(\lambda)$ approximating $(\Delta - \lambda)^{-2}$ to the order $N$; see e.g. [6] or [5, Problem 5.1]. Hence the short time asymptotic of $\mathrm{Tr}\, e^{-\Delta t}$ is of the form (2.2), with the term $t^0\log t$ absent. As a consequence, the $\zeta$-function has no pole at zero and we can define the modified (i.e. with zero mode excluded) determinant $\det{}'\Delta = \exp\{-\zeta'(0)\}$.

3 Asymptotics of solutions near conical singularities

In a vicinity of $P_k$ we introduce the distinguished local parameter $x = \sqrt{z - z_k}$. Here and elsewhere we denote the Laplace-Beltrami operators by $\Delta^*$, reserving the notation $\Delta$ for their Friedrichs extensions. The complex plane ${\Bbb C}$ endowed with the metric $f^*\mathsf m(x,\bar x)$ has a "tangent cone" of angle $4\pi$ at $x = 0$.

Lemma 1 Let $u, F \in L^2(X)$ and $\Delta^* u = F$ (in the sense of distributions). Then in a small vicinity of $x = 0$, $u$ admits the expansion (3.3) with some coefficients $a_k$ and $b_k$ and a remainder $R$ satisfying the stated estimates with any $\epsilon > 0$ as $x \to 0$. Moreover, the equality can be differentiated, and the remainder estimates hold for the derivatives as well.

Proof The proof consists of standard steps based on the Mellin transform and a priori elliptic estimates; see e.g. [19, Chapter 6] for details. Let $\chi \in C_c^\infty(X)$ be a cut-off function supported in the neighbourhood $|x| < 2\delta$ of $P_k$ such that $\chi(|x|) = 1$ for $|x| < \delta$, where $\delta$ is small. Then $\chi u$ satisfies (3.4), where the right-hand side (extended from its support to $X$ by zero) is in $L^2(X)$. Indeed, for any cut-off function $\varrho \in C_c^\infty(X \setminus \{P_1, \dots, P_M\})$, the standard result on the smoothness of solutions to elliptic problems gives $\varrho u \in H^1(X)$, where the Sobolev space $H^1(X)$ is the domain of the closed densely defined quadratic form of $\Delta^*$ in $L^2(X)$. For a suitable $\varrho$ we obtain $[\Delta^*, \chi]u = [\Delta^*, \chi]\varrho u \in L^2(X)$, and hence the right-hand side of (3.4) is in $L^2(X)$. We rewrite (2.1) in the polar coordinates $(r, \varphi)$, where $r = |x|^2$ and $\varphi = \arg x$, multiply both sides by $r^2$, and then apply the Mellin transform $\tilde f(s) = \int_0^\infty r^{s-1}f(r)\,dr$, assuming that all functions are extended from their supports to $r \in [0, \infty)$ and $\varphi \in [0, 2\pi)$ by zero. As a result, (2.1) takes a form in which one side is analytic in the half-plane $\Re s > 1$ (resp. $\Re s > -1$) and square summable along any vertical line in the corresponding half-plane. In the intermediate strip there are simple poles at $s = \pm 1/2$ and a double pole at $s = 0$. We have a corresponding representation with $\epsilon \in (0, 1/2)$. The elliptic a priori estimate with parameter, in which the last term can be neglected for sufficiently large values of $|s|$, justifies the change of the contour of integration in the inverse Mellin transform from $\Re s = 1 - \epsilon$ to $\Re s = -1 + \epsilon$. We use the Cauchy theorem and arrive at the stated expansion, with a constant $C$ that does not depend on $s$.
The Parseval equality turns (3.6) into an estimate which, together with the Sobolev embedding theorem, implies the stated bounds on the remainder. The proof is complete.

Let $u, v \in L^2(X)$ be such that $\Delta^* u \in L^2(X)$ and $\Delta^* v \in L^2(X)$ (with differentiation understood in the sense of distributions) and bounded everywhere except possibly at $P_k$. Consider the form $q[u, v] = (\Delta^* u, v) - (u, \Delta^* v)$; here and elsewhere $(\cdot,\cdot)$ stands for the inner product in $L^2(X)$. By Lemma 1, $u$ and $v$ admit the expansions (3.3) and (3.7). The Stokes theorem then expresses $q[u, v]$ as a boundary integral around $P_k$, and a simple calculation of the right-hand side expresses $q[u, v]$ in terms of the coefficients in (3.3) and (3.7), as in (3.8).

Recall that $\Delta$ stands for the Friedrichs extension of the Laplace-Beltrami operator $\Delta^*$ on $(X, f^*\mathsf m)$. As is known, for the domain $\mathcal{D}$ of $\Delta$ we have $\mathcal{D} \subset H^1(X)$. The embedding $H^1(X) \hookrightarrow L^2(X)$ is compact and the spectrum of $\Delta$ is discrete. Thanks to the bound $|u(p)| \lesssim \|u; H^1(X)\|$, $p \in X$, the functions in the domain $\mathcal{D}$ are bounded, and thus for any $u \in \mathcal{D}$ the assertion of Lemma 1 is valid.

Let $\chi \in C_c^\infty(X)$ be a cut-off function supported in the neighbourhood $|x| < 2\delta$ of $P_k$ such that $\chi(|x|) = 1$ for $|x| < \delta$, where $\delta$ is small. We denote the spectrum of $\Delta$ by $\sigma(\Delta)$ and introduce the special solution
$$Y(\lambda) = \chi x^{-1} - (\Delta - \lambda)^{-1}(\Delta^* - \lambda)\chi x^{-1},$$
where the function $\chi x^{-1}$ is extended from the support of $\chi$ to $X$ by zero. It is clear that $(\Delta^* - \lambda)Y(\lambda) = 0$ and $Y(\lambda) - \chi x^{-1} \in \mathcal{D}$. In the remaining part of this section we prove some results that previously appeared in the context of flat conical metrics [8,10].

Lemma 2 The function $Y(\lambda)$ and the coefficient $b(\lambda)$ in (3.9) are analytic functions of $\lambda$ in ${\Bbb C}\setminus\sigma(\Delta)$ and in a neighbourhood of zero. Besides, we have the identity (3.10).

Proof Since $\ker\Delta = \mathrm{span}\{1\}$, in a neighbourhood of $\lambda = 0$ the resolvent admits the representation $(\Delta - \lambda)^{-1} = -\lambda^{-1}P_0 + R(\lambda)$, where $P_0$ is the orthogonal projection onto the constants and $R(\lambda)$ is a holomorphic operator function with values in the space of bounded operators in $L^2(X)$. We obtain the equality (3.10) by a direct computation. One can also show that the coefficients $c(\lambda)$ and $a(\lambda)$ in (3.9) are holomorphic in a neighbourhood of zero. Moreover, $4\pi\frac{d}{d\lambda}a(\lambda) = (Y(\lambda), Y(\lambda))$.

Lemma 3 Let $\{\Phi_j\}$ be a complete set of real normalized eigenfunctions of $\Delta$ and let $\lambda_j$ be the corresponding eigenvalues, i.e. $\Delta\Phi_j = \lambda_j\Phi_j$, $\Phi_j = \bar\Phi_j$, and $\|\Phi_j; L^2(X)\| = 1$. Then for the coefficients $a_j$ and $b_j = \bar a_j$ in the asymptotic (3.11), the identity (3.12) holds, where the series is absolutely convergent.

Proof The asymptotic (3.11) for $\Phi_j \in \mathcal{D}$ follows from Lemma 1. Starting from the eigenfunction expansion of $Y(\lambda)$, we obtain an expansion of $b(\lambda)$ over the eigenfunctions. This together with (3.8) and $b_j = \bar a_j$ gives the coefficient identities; as a consequence, the series in (3.12) is absolutely convergent. Finally, we obtain (3.12) by substituting the expression (3.13) and its conjugate into the inner product $(Y(\lambda), Y(\lambda))$.

Explicit calculation of $b(0)$ and $b(-\infty)$

In this subsection we study the behaviour of the coefficient $b(\lambda)$ from (3.9) as $\lambda \to -\infty$ and obtain explicit formulas for $b(-\infty) = \lim_{\lambda\to-\infty}b(\lambda)$ and $b(0)$. Let us emphasize that the choice of the local parameter $x$ in a vicinity of $P_k \in X$ is a part of the definition of the coefficients $a(\lambda)$, $b(\lambda)$, and $c(\lambda)$ in (3.9).

Proof Case 1. Consider the meromorphic function $f: X = {\Bbb C}P^1 \to {\Bbb C}P^1$ given by $z = f(w) = w^2$; the critical values of $f$ are $z_1 = 0$ and $z_2 = \infty$. Clearly, $w$ coincides with the distinguished parameter $x = \sqrt{z - z_1}$, and the metric $f^*\mathsf m$ and the Laplace-Beltrami operator $\Delta^*$ are given by (3.2) with $z_k = z_1 = 0$. Introduce the geodesic polar coordinates $(r, \varphi)$ on $({\Bbb C}P^1, f^*\mathsf m)$ with center at $\infty \in {\Bbb C}P^1$ by setting $\varphi = 2\arg w \in [0, 4\pi)$ and $\cot(r/2) = |w|^2$, $r \in [0, \pi]$. In the coordinates $(r, \varphi)$ the metric takes the standard form $dr^2 + \sin^2 r\,d\varphi^2$, and the function $Y$ with asymptotic (3.9) can be found by separation of variables.
Namely, we seek $Y$ of the form $Y(r, \varphi; \lambda) = R(\cos r)e^{-i\varphi/2}$. Since $w = x$ and
$$\frac{\cos\nu r}{\cos\nu\pi} = 1 - \nu\tan(\nu\pi)(r - \pi) + O\big((r-\pi)^2\big) = 1 + 2\nu\tan(\nu\pi)\,|x|^2 + o(|x|^2)$$
as $x \to 0$ (here $r - \pi = -2\cot(r/2) + O(\cot^3(r/2))$ and $\cot(r/2) = |x|^2$), we conclude that in the asymptotic (3.9) of (3.16) we have $b(\lambda) \equiv 0$ (and also $c(\lambda) \equiv 0$ and $a(\lambda) = (1 + 2\nu)\tan(\nu\pi)$).

Case 2. Consider $\tilde f: {\Bbb C}P^1 \to {\Bbb C}P^1$ given by $z = \tilde f(w) = \frac{w^2 + z_1}{1 - \bar z_1 w^2}$; the critical values of $\tilde f$ are $z_1$ and $-1/\bar z_1$. As in the first case, the metric $\tilde f^*\mathsf m$ has two antipodal $4\pi$-conical points (at $w = 0$ and $w = \infty$). However, the distinguished parameter $x = \sqrt{z - z_1}$ does not coincide with $w$ if $z_1 \neq 0$. As a consequence, the corresponding function $\tilde Y$ and the coefficient $b(\lambda)$ in its asymptotic (3.9) can be different from those obtained in Case 1. We notice that any isometry of the base $({\Bbb C}P^1, \mathsf m)$ of a ramified covering $f: X \to {\Bbb C}P^1$ can be lifted to a corresponding isometry of $(X, f^*\mathsf m)$, and the latter commutes with $\Delta^*$. Take the isometry $z \mapsto \frac{z - z_1}{\bar z_1 z + 1}$ of $({\Bbb C}P^1, \mathsf m)$ sending $z_1$ to $0$, and let $J$ be its lift to $({\Bbb C}P^1, \tilde f^*\mathsf m)$. We transform $Y$ from (3.16) by $J$ and renormalize. It is straightforward to check that $\tilde Y$ has the asymptotic (3.9) in the distinguished local parameter $x = \sqrt{z - z_1}$, and for the corresponding coefficient $b(\lambda)$ we obtain an explicit value depending only on $z_1$.

Case 3. Finally, consider the general case. Let $X$ be a compact Riemann surface and let $f: X \to {\Bbb C}P^1$ be a meromorphic function with simple poles and simple critical points $P_1, \dots, P_M$. Consider, for instance, the critical point $P_1$. The function $f$ has the same critical value $z_1$ as the function $\tilde f$ from Case 2. Small vicinities $U(P_1)$ and $U(\tilde P_1)$ of the corresponding critical points $P_1 \in X$ and $\tilde P_1 \in \tilde X = {\Bbb C}P^1$ are isometric. In the local parameter $x = \sqrt{z - z_1}$ (which is the distinguished one for both $X$ and $\tilde X$) the differential expressions $\Delta^*$ and $\tilde\Delta^*$ are the same. Let $\rho$ be a smooth cut-off function on $\tilde X$ such that $\rho$ is supported inside $U(\tilde P_1)$, $\rho \equiv 1$ in a vicinity of $\tilde P_1$, and $\rho$ depends only on the distance to $\tilde P_1$. We identify $P_1$ and $\tilde P_1$ as well as $U(P_1)$ and $U(\tilde P_1)$ and then extend the functions $\rho\tilde Y$ and $(\Delta^* - \lambda)\rho\tilde Y = [\tilde\Delta^*, \rho]\tilde Y$ from $U(P_1) \equiv U(\tilde P_1)$ to $X$ by zero; here $\tilde Y$ is the function (3.17) on $\tilde X = {\Bbb C}P^1$. Clearly, $[\tilde\Delta^*, \rho]\tilde Y \in L^2(X)$, and therefore $(\Delta - \lambda)^{-1}(\Delta^* - \lambda)\rho\tilde Y$ makes sense. Now we represent the function $Y$ on $X$ corresponding to $P_1$ in the form
$$Y = \rho\tilde Y - (\Delta - \lambda)^{-1}(\Delta^* - \lambda)\rho\tilde Y.$$
Let $b(\lambda)$ be the coefficient in the asymptotic (3.9) of $Y$. We have an estimate on the correction term that vanishes as $\lambda \to -\infty$; this together with (3.19) completes the proof.

In order to find the value $b(0)$ corresponding to a conical point $P_k$, we need to construct a (unique up to addition of a constant) harmonic function $Y$ bounded everywhere on $X$ except at the point $P_k$, where $Y(x, \bar x; 0) = \frac{1}{x} + O(1)$ in the distinguished local parameter $x = \sqrt{z - z_k}$ (cf. (3.9)). Such a function was explicitly constructed in [8,10] using the canonical meromorphic bidifferential $W(\cdot,\cdot)$ (also known as the Bergman bidifferential or the Bergman kernel) on $X$. This leads to an explicit expression for the coefficient $b(0)$ in the asymptotic expansion (3.9) of $Y$, which was obtained as a part of Proposition 6 in [10]. To formulate the result we need some preliminaries.

Choose a marking of the Riemann surface $X$, i.e. a canonical basis $a_1, b_1, \dots, a_g, b_g$ of $H_1(X, \mathbb{Z})$. Let $\{v_1, \dots, v_g\}$ be the basis of holomorphic differentials on $X$ normalized via $\oint_{a_\ell}v_m = \delta_{\ell m}$, where $\delta_{\ell m}$ is the Kronecker delta. Introduce the matrix $B = (B_{\ell m})$ of $b$-periods of the marked Riemann surface $X$ with entries $B_{\ell m} = \oint_{b_\ell}v_m$.
Let $W(\cdot,\cdot)$ be the canonical meromorphic bidifferential on $X \times X$, normalized so that all its $a$-periods vanish. The bidifferential $W$ has its only (double) pole along the diagonal $P = Q$: in any holomorphic local parameter $x(P)$ one has the asymptotics
$$W(P, Q) = \left(\frac{1}{(x(P) - x(Q))^2} + \frac{1}{6}S_B(x(P)) + o(1)\right)dx(P)\,dx(Q)$$
as $Q \to P$, where $S_B(\cdot)$ is the Bergman projective connection. Consider the Schiffer bidifferential
$$\Omega(P, Q) = W(P, Q) - \pi\sum_{\ell, m = 1}^g (\Im B)^{-1}_{\ell m}\, v_\ell(P)\, v_m(Q).$$
The Schiffer projective connection $S_{Sch}$ is defined via the analogous asymptotic expansion of $\Omega$ near the diagonal, and one has the equality
$$S_{Sch} = S_B - 6\pi\sum_{\ell, m = 1}^g (\Im B)^{-1}_{\ell m}\, v_\ell\, v_m.$$
In contrast to the canonical meromorphic bidifferential and the Bergman projective connection, the Schiffer bidifferential and the Schiffer projective connection are independent of the marking of the Riemann surface $X$. Let us also emphasize that the value of a projective connection at a point of a Riemann surface depends on the choice of the local holomorphic parameter at this point. Now we are in a position to formulate the needed result from [10, Prop. 6]: it gives an explicit expression for $b(0)$ in terms of the Schiffer projective connection evaluated in the distinguished local parameter $x = \sqrt{z - z_k}$ near the point $P_k$.

Perturbation of conical singularities

Pick a regular point $z_0 \in {\Bbb C}$ such that $z_1, \dots, z_M$ are (end points but) not internal points of the line segments $[z_0, z_k]$, $k = 1, \dots, M$. Consider the union $\mathcal{U} = \cup_{k=1}^M [z_0, z_k]$. The complement $X \setminus f^{-1}(\mathcal{U})$ of the preimage $f^{-1}(\mathcal{U})$ in $X$ has $N$ connected components ($N$ sheets of the covering), and $f$ is a biholomorphic isometry from each of these components equipped with the metric $f^*\mathsf m$ to ${\Bbb C}P^1\setminus\mathcal{U}$ equipped with the standard metric (3.1). Thus the Riemann manifold $(X, f^*\mathsf m)$ is isometric to the one obtained by gluing $N$ copies of the Riemann sphere $({\Bbb C}P^1, \mathsf m)$ along the cuts $\mathcal{U}$ in accordance with a certain gluing scheme.

By a perturbation of the conical singularity at $P_k$ we mean a small shift of the end $z_k$ of the cut $[z_0, z_k]$ on those two copies of the Riemann sphere $({\Bbb C}P^1, \mathsf m)$ that produce the $4\pi$-conical angle at $P_k$ after gluing along $[z_0, z_k]$. Let $\varrho \in C_0^\infty(\mathbb{R})$ be a cut-off function such that $\varrho(r) = 1$ for $r < \epsilon$ and $\varrho(r) = 0$ for $r > 2\epsilon$, where $\epsilon$ is small. Consider the selfdiffeomorphism $\phi_w(z, \bar z) = z + \varrho(|z - z_k|)w$ of the Riemann sphere ${\Bbb C}P^1$, where $w \in {\Bbb C}$ and $|w|$ is small. On two copies of the Riemann sphere (those two that produce the conical singularity at $P_k$ after gluing along $[z_0, z_k]$) we shift $z_k$ to $z_k + w$ by applying $\phi_w$. We assume that the support of $\varrho$ and the value $|w|$ are so small that only $[z_0, z_k]$ and no other cuts are affected by $\phi_w$. In this section we consider the perturbed manifold as $N$ copies of the Riemann sphere ${\Bbb C}P^1$ glued along the (unperturbed) cuts $\mathcal{U}$, where $N - 2$ copies are endowed with the metric $\mathsf m$ and the $2$ remaining copies (mutually glued along $[z_0, z_k]$) are endowed with the pullback $\phi_w^*\mathsf m$ of $\mathsf m$ by $\phi_w$. Let $(X, f_w^*\mathsf m)$ stand for the perturbed manifold, where $f_w: X \to {\Bbb C}P^1$ is the meromorphic function with critical values $z_1, \dots, z_{k-1}, z_k + w, z_{k+1}, \dots, z_M$. By $\Delta_w$ we denote the Friedrichs extension of the Laplace-Beltrami operator on $(X, f_w^*\mathsf m)$ and consider $\Delta_w$ as a perturbation of $\Delta_0$ on $(X, f^*\mathsf m)$.

The matrix representation of the pullback $\phi_w^*\mathsf m$ of the metric $\mathsf m$ in (3.1) by $\phi_w$ is expressed through the Jacobian matrix of $\phi_w$. Clearly, on ${\Bbb C}P^1$ we have $\Delta_0 = -\frac{(1+|z|^2)^2}{4}\,4\partial_{\bar z}\partial_z$. A straightforward calculation also gives an expansion of $\Delta_w$ in $w$ and $\bar w$, where $O(|w|^2)$ stands for a second order operator with smooth coefficients supported on $\mathrm{supp}\,\varrho'(|z - z_k|)$ and uniformly bounded by $C|w|^2$. Notice that the domain $\mathcal{D}$ of $\Delta_w$ does not depend on $w$. Consider $\mathcal{D}$ as a Hilbert space endowed with the graph norm of $\Delta_0$. Let $\lambda$ be an eigenvalue of $\Delta_0$ of multiplicity $m$.
Let Γ be a closed curve enclosing λ but no other eigenvalues of ∆₀. Then (∆_w − ξ)⁻¹ = (∆₀ − ξ)⁻¹ + O(|w|) as |w| → 0, uniformly in ξ ∈ Γ. Therefore the total projection P_w for the eigenvalues of ∆_w lying inside Γ is given by the Riesz integral P_w = −(2πi)⁻¹ ∮_Γ (∆_w − ξ)⁻¹ dξ. The continuity of P_w implies that dim P_w L² = dim P₀ L² = m, i.e. the sum of multiplicities of the eigenvalues of ∆_w lying inside Γ is equal to m (provided |w| is small); these eigenvalues are said to form the λ-group [12]. In the formula (4.2) for the coefficient A, the integration runs around the conical point at z_k through the two spheres CP¹ glued to each other along the cut [z₀, z_k], and Φ₁, …, Φ_m are (real) normalized eigenfunctions of ∆₀ corresponding to the eigenvalue λ; i.e. Φ̄_j = Φ_j, ⟨Φ_j, Φ_j⟩_{L²(X)} = 1, and span{Φ₁, …, Φ_m} = P₀L²(X).

Proof. We have p_n(w) = Tr(∆ⁿ_w P_w). Thus (here we applied the identity Tr AB = Tr BA) we obtain (4.3); here we integrated by parts two times and used (4.1) to estimate the remainder. Thanks to (4.1) we also obtain (4.4), where, thanks to ρ, the integrand is supported near z_k and the integration runs through the two spheres glued along the cut [z₀, z_k]. Finally, the Stokes theorem implies (4.5). Since |Φ_j(p)| ≤ C for p ∈ X, the last integrals in both formulas (4.5) tend to zero as ε → 0+. The assertion follows from (4.3), (4.4), and (4.5).

Proof. The proof by induction relies on Lemma 6 and the Newton identities relating the power sums p_n(w) to the elementary symmetric functions e_n(w), where e₀(w) = 1. We omit the details. Notice that the eigenvalues of the λ-group can be written as a quotient of such quantities; we differentiate the right-hand side and use Lemma 7 to derive the asymptotics of the resulting numerator and denominator as w → 0. This implies the assertion.

5 Variation of ln det′ ∆ due to perturbation of conical singularities

Proposition 1. Let w ∈ C correspond to the perturbation of the conical singularity at P_k obtained by shifting z_k to z_k + w (see Sec. 4 for details). Then the variation of ln det′ ∆ is expressed through b(λ), the coefficient in the asymptotics (3.9) of the special solution Y(λ) ∈ L²(X) to (∆* − λ)Y(λ) = 0 growing near P_k as x⁻¹, where x is the distinguished holomorphic parameter x = √(z − z_k).

Proof. First we recall that only λ-groups, but not single eigenvalues λ_j, can be differentiated with respect to w or w̄. Similarly, the series Tr(∆ − λ)⁻² = ∑_{j=0}^{∞} (λ_j − λ)⁻² cannot be differentiated term by term; however, thanks to Lemma 8 we can always differentiate the partial finite sums corresponding to the λ-groups. Thus, if the summation with respect to j runs through the m equal eigenvalues λ_k = λ_{k+1} = ⋯ = λ_{k+m−1} forming the λ_k-group, Lemma 8 applies and yields the derivative of the corresponding partial sum. Let us rewrite the formula (4.2) for the coefficient A in terms of the local parameter x; this gives (5.1). By Lemma 1 the asymptotics (3.11) of Φ_j can be differentiated: we have ∂_x Φ_j = b_j + O(|x|^{1−ε}) for any ε > 0 as x → 0. This together with (5.1) implies A = 2π ∑_j b_j². Now we are ready to compute the partial derivative of the zeta function with respect to w at w = 0. Let Γ_ξ be a contour running at a sufficiently small distance ε > 0 around the cut (−∞, ξ], starting at −∞ + iε and ending at −∞ − iε. Thanks to Lemma 3 we can integrate by parts; then, using the equality (3.12) from Lemma 3 together with Lemma 2, we arrive at the formula (5.3); see [18, Lemma 1]. Notice that under the SU(2) transformation z ↦ (dz − c)/(c̄z + d̄), |d|² + |c|² = 1, the factor F = ∏_{k=1}^{M} (1 + |z_k|²)^{−1/4} in (5.3) transforms as F ↦ F ∏_{k=1}^{M} |c̄z_k + d̄|^{1/2}. Thus we see that the right-hand side in (5.3) is SU(2)-invariant, as it should be due to the SU(2)-invariance of det′ ∆.
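The transformation rule for F invoked in the last paragraph follows from an elementary identity for the round sphere. The computation below assumes the SU(2) action is written as σ(z) = (dz − c)/(c̄z + d̄) with |c|² + |d|² = 1; the placement of the complex conjugates is an assumption about the intended normalization.

```latex
% Worked check of the SU(2) transformation of F, assuming the action
% \sigma(z) = (dz-c)/(\bar c z + \bar d) with |c|^2 + |d|^2 = 1.
% The cross terms in the numerator cancel, leaving (|c|^2+|d|^2)(1+|z|^2).
\[
  1+|\sigma(z)|^{2}
  =\frac{|dz-c|^{2}+|\bar c z+\bar d|^{2}}{|\bar c z+\bar d|^{2}}
  =\frac{(|c|^{2}+|d|^{2})\,(1+|z|^{2})}{|\bar c z+\bar d|^{2}}
  =\frac{1+|z|^{2}}{|\bar c z+\bar d|^{2}},
\]
\[
  \text{hence}\qquad
  F=\prod_{k=1}^{M}\bigl(1+|z_k|^{2}\bigr)^{-1/4}
  \;\longmapsto\;
  F\,\prod_{k=1}^{M}\bigl|\bar c\,z_k+\bar d\bigr|^{1/2}.
\]
```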
Clinical Patterns, Survival, Comorbidities, and Treatment Regimens in 149 Patients With Pemphigus in Tuscany (Italy): A 12-Year Hospital-Based Study Introduction Pemphigus encompasses a group of muco-cutaneous autoimmune bullous diseases characterized by the loss of adhesion between keratinocytes. The disease is associated with increased morbidity and mortality. Materials and Methods We characterized clinical patterns, survival, comorbidities, and drug prescriptions in patients with pemphigus referred to the Section of Dermatology of the University of Florence from January 2010 to December 2021. Results A total of 149 patients were identified (female/male sex ratio = 2.0). Mean age at diagnosis was 57.7 ± 17.2 years; 108 patients were diagnosed with pemphigus vulgaris (PV) (72.5%) and 35 (23.5%) with pemphigus foliaceus (PF). Paraneoplastic pemphigus (PNP) and IgA-pemphigus accounted for three patients each. The overall survival rate was 86.9%. Accordingly, 14 (9.4%) patients died during the study period. The average age at death was 77.8 ± 9.3 years. Age at diagnosis was a risk factor for death in patients with pemphigus. Average concentration of Dsg3-IgG and Dsg1-IgG was 85.6 ± 68.8 and 75.9 ± 68.4, respectively. The most serious comorbid diseases included cerebro- and cardiovascular accidents and malignancies. Regarding the treatment regimen, we found a substantially stable use of systemic steroids in the 2010–2018 period; the prevalence of use of mycophenolic acid increased, whereas that of azathioprine decreased. The use of rituximab showed the highest increase in the 2013–2018 period. Proton-pump inhibitors and antibiotics were the most frequently prescribed non-immunomodulating drugs. Conclusions In this large series, patients with pemphigus showed a high incidence of serious comorbid diseases, highlighting the importance of a multidisciplinary approach for proper management of these patients. Rituximab was the immunomodulating drug showing the highest increase in use over time, reflecting the growing evidence of its efficacy as a first-line treatment in pemphigus. The epidemiological characteristics of pemphigus vary according to the clinical variant, geographical regions, and ethnicities (3,4,8). PV is considered the most prevalent type of pemphigus, corresponding to 70% of all cases (3,4). In European countries, the average age at onset of PV varies from 50 to 60 years (9). Conversely, PV is extremely rare during childhood (10). PV seems to be more prevalent in female than male patients, with a female/male sex ratio (F/M SR) ranging from 1.1 in Finland to 5.0 in the USA (2,4). PF is divided into two different subtypes: sporadic and endemic. Sporadic PF is the second most common type of pemphigus, representing about 20% of pemphigus cases (2,4). The average age at onset is around 50 years, with no preference for sex or ethnicity (3). Endemic PF is a variant of PF with a high incidence rate in some rural areas of Brazil, Colombia, and Tunisia (11). Atypical pemphigus variants, including PNP, IgA pemphigus, and pemphigus herpetiformis (PH), are far less common. PNP accounts for 3%-5% of pemphigus cases (2,12). The exact incidence and prevalence of PNP are difficult to evaluate. PNP is almost always associated with an underlying malignancy, particularly hematological malignancies including non-Hodgkin's lymphomas, chronic lymphocytic leukemia, and Castleman's disease. Rarely, PNP can also arise in association with solid tumors (13).
The average age at onset ranges between 45 and 70 years (14), although it can also occur in children and adolescents, especially when associated with Castleman's disease (15). Regarding gender, different data have been reported in the literature: a French study reported a predominance of the male sex (58.5% of cases), whereas an international multicenter study including Asian patients reported a female predominance (56.7%) (16,17). Treatment of pemphigus relies largely on immunosuppression. High-dose systemic corticosteroids are considered frontline therapies and are necessary to achieve rapid clinical improvement. A variety of immunosuppressive treatments, including dapsone, azathioprine, mycophenolate, and cyclophosphamide, serve as steroid-sparing agents, allowing progressive tapering of systemic steroids, but they are less useful for the treatment of active disease (2). Rituximab (RTX), a monoclonal antibody targeting CD20, shows remarkable clinical efficacy, longer clinical remission, and significant steroid-sparing effects in patients with pemphigus and is now regarded as a first-line treatment in patients with moderate to severe disease (5,18). Despite a drastic reduction in pemphigus mortality since the advent of systemic corticosteroids and immunosuppressive treatments, pemphigus-associated mortality appears to be 1.7-3.6 times higher than that observed in the general population (19). One reason for the higher mortality of patients with pemphigus is treatment-related adverse effects, such as severe infections. Other reasons may be related to associated comorbidities, particularly cardiovascular diseases and malignancies (20,21). Among pemphigus variants, PNP seems to be associated with the highest mortality, which is related to either the associated malignancy or the severe clinical course, characterized by a lower responsiveness to immunosuppressive regimens and the increased risk of systemic complications, such as bronchiolitis obliterans (22). The purpose of this study is to characterize the clinical and epidemiological characteristics of patients with pemphigus who attended our dermatologic clinic over a period of 12 years. Patients We conducted a 12-year retrospective study including 149 patients diagnosed with pemphigus at the Rare Skin Diseases Unit of Azienda USL Toscana Centro, University of Florence from January 2010 to December 2021. All cases were included in the Registry of Rare Diseases of Tuscany. Inclusion and Exclusion Criteria All patients of any age who met the criteria for the diagnosis of pemphigus proposed by current guidelines (5,6) were eligible for the study. Briefly, the diagnosis and classification of pemphigus was based on the clinical presentation and histopathological and immunopathological criteria, including i) detection of IgG or IgA intercellular deposits at direct immunofluorescence microscopy from a perilesional tissue sample, ii) detection of circulating antibodies binding the inter-keratinocyte surface at indirect immunofluorescence, and/or iii) detection of IgG against desmosomal proteins, e.g., Dsg3 or Dsg1, by enzyme-linked immunosorbent assay or immunoblotting. Assessments of circulating anti-Dsg1 and anti-Dsg3 autoantibodies were performed using commercial kits (MBL MESACUP-2 TEST, Naka-Ku Nagoya Aichi, Japan). Patients whose diagnosis of pemphigus could not be confirmed by the abovementioned criteria or who were not living in Tuscany at the time of diagnosis were excluded from the analysis.
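To make the serologic side of this classification concrete, the sketch below encodes the typical correspondence between anti-Dsg ELISA profiles and pemphigus phenotypes (mucosal-dominant PV: anti-Dsg3 only; mucocutaneous PV: both; PF: anti-Dsg1 only). The function and the 20 IU/ml positivity cutoff are illustrative assumptions only; the study's actual diagnosis also requires the clinical, histological, and immunofluorescence criteria listed above.

```python
# Illustrative sketch only: maps anti-Dsg ELISA profiles to the pemphigus
# phenotypes discussed in the text. The cutoff is an assumed value; a real
# diagnosis also requires clinical, histological, and DIF/IIF criteria.
CUTOFF = 20.0  # assumed ELISA positivity threshold (IU/ml)

def serologic_profile(dsg1: float, dsg3: float) -> str:
    """Return the pemphigus phenotype suggested by anti-Dsg1/Dsg3 titers."""
    dsg1_pos, dsg3_pos = dsg1 >= CUTOFF, dsg3 >= CUTOFF
    if dsg3_pos and not dsg1_pos:
        return "PV, mucosal-dominant (oral)"
    if dsg3_pos and dsg1_pos:
        return "PV, mucocutaneous"
    if dsg1_pos:
        return "PF"
    return "not suggestive of PV/PF; review diagnosis"

# Demo inputs taken from the mean titers reported in the Results below
print(serologic_profile(dsg1=8.6, dsg3=95.3))    # -> PV, mucosal-dominant (oral)
print(serologic_profile(dsg1=136.8, dsg3=13.0))  # -> PF
```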
Demographic and Clinical Characteristics of the Patients Data regarding the clinical characteristics and phenotype of the disease (PV, PF, PNP, and other rare variants), the demographic characteristics (age and sex), and the autoantibody profile at diagnosis (anti-Dsg1 and Dsg3 IgG autoantibodies) and within 12 months after diagnosis were collected from all the patients with pemphigus identified from the registry. Comorbidities and Pharmaceutical Prescriptions A subset of 78 cases (out of 149) for which the regional unique anonymous identification number was available was linked to the regional hospital discharge records and the drug prescription database. Associated comorbidities were extrapolated from hospital admissions. We focused on various comorbidities that have been associated with pemphigus according to the literature. Associated comorbidities were identified using the International Classification of Diseases, Ninth Revision, Clinical Modification. The drug prescription database contains information on dispensed drugs reimbursed by the National Health Service. Only outpatient prescriptions were collected in the database. The prevalence of use of the most common classes of prescribed drugs in pemphigus was calculated for each year of the 2011-2018 period, by dividing the number of pemphigus cases with at least one dispensing of each pharmaceutical class by the number of prevalent cases at the beginning of each year. Drugs prescribed during 2010 were excluded to avoid underestimation related to patients diagnosed in the last months of 2010 who started treatment in 2011. The Anatomical Therapeutic Chemical (ATC) classification system was used to code drug information. Two macro-areas of drugs were identified: 1) those used for the treatment of pemphigus and 2) those used for the management of pemphigus-associated comorbidities. Statistical Analysis Differences in demographic (age and sex) and in anti-Dsg1 and anti-Dsg3 antibodies at T0 (baseline) and T1 were evaluated overall and by pemphigus variants using Student's t-test for continuous variables and Fisher's exact test for categorical variables. For continuous variables, mean values with standard deviation (SD) were reported in the text. Survival estimates were calculated by sex, age class (<40 years, 40-59 years, 60-74 years, and ≥ 75 years), and pemphigus variants (PV and PF) using the Kaplan-Meier method, with the log-rank test to assess statistically significant differences. The effects of sex, age at diagnosis, pemphigus variant, and levels of anti-Dsg1 and anti-Dsg3 at T0 were estimated using a Cox proportional hazards regression model and hazard ratios (HRs) with 95% confidence intervals (CI). Demographic and Clinical Characteristics Oral PV was detected in 53 out of 108 patients with PV (49.07%). Antibodies The mean titer of circulating autoantibodies (reported as UI/ml) at the time of diagnosis (T0) was 75.9 ± 68.4 for anti-Dsg1 and 85.6 ± 68.8 for anti-Dsg3 antibodies. In the PV group, anti-Dsg3 IgG antibodies were significantly higher in patients with mucocutaneous than oral PV (135.9 vs. 95.3, p = 0.0003), whereas the average value of anti-Dsg1 IgG antibodies was significantly lower in patients with oral PV than in patients with muco-cutaneous PV (8.6 vs. 111.6, p < 0.0001). As expected, there were significant differences in the mean value of anti-Dsg1 at T0 between PV and PF (59.1 vs. 136.8, p < 0.0001) and in the mean value of anti-Dsg3 at T0 between PV and PF (115.0 vs. 13.0, p < 0.0001).
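A minimal sketch of the survival workflow described in the Statistical Analysis subsection, using the Python lifelines library. The file and column names are hypothetical stand-ins; the registry data are not public.

```python
# Sketch of the Kaplan-Meier / log-rank / Cox workflow described above.
# File and column names are hypothetical; the registry data are not public.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("pemphigus_cohort.csv")  # hypothetical export
df["male"] = (df["sex"] == "M").astype(int)

# Kaplan-Meier survival estimates by sex
kmf = KaplanMeierFitter()
for sex, grp in df.groupby("sex"):
    kmf.fit(grp["time_years"], event_observed=grp["died"], label=str(sex))
    print(sex, kmf.survival_function_.iloc[-1, 0])  # survival at end of follow-up

# Log-rank test for the male/female difference
m, f = df[df["male"] == 1], df[df["male"] == 0]
res = logrank_test(m["time_years"], f["time_years"],
                   event_observed_A=m["died"], event_observed_B=f["died"])
print("log-rank p =", res.p_value)

# Cox proportional hazards: sex, age at diagnosis, baseline anti-Dsg titers
cph = CoxPHFitter()
cph.fit(df[["time_years", "died", "male", "age_dx", "dsg1_t0", "dsg3_t0"]],
        duration_col="time_years", event_col="died")
print(cph.summary["exp(coef)"])  # hazard ratios, e.g. ~1.09 per year of age
```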
At T1, corresponding to the interval between diagnosis and the first 12 months of follow-up, a decrease in anti-Dsg1 antibodies was recorded in 71 out of 84 patients, and a decrease in anti-Dsg3 antibodies was recorded in 72 out of 83 patients (corresponding to 84.5% and 86.7% of the patients, respectively). The mean value of decrease of anti-Dsg3 antibodies between T0 and T1 was −52.3 ± 48.8, also in this case with significant differences between PV and PF (−65.2 vs. −15.4, p < 0.001). We next evaluated whether immunosuppressive adjuvants induced different degrees of autoantibody reduction after at least 270 days following treatment. Interestingly, we observed that patients receiving rituximab experienced a higher decrease of anti-Dsg1 antibodies (92.8 ± 70.5) than patients who did not receive it (52.4 ± 68.4), with a difference approaching statistical significance (p = 0.07). On the contrary, we did not observe statistically significant differences in the decrease of anti-Dsg3 antibodies between the two groups. Survival During the study period, 14 (nine male and five female) out of 149 patients died (9.4%). The average age at death was 77.8 ± 9.3 years (range: 56.2-94.8); 77.9 and 77.6 years for male and female patients, respectively. No deaths were observed for patients below 40 years. The cause of death was retrieved in eight out of 14 patients. Five patients died due to complications related to an advanced cancer. One patient died due to an acute stroke, one died due to an acute cardiovascular event complicated by sepsis, and one died due to aspiration (ab ingestis) pneumonia. The Kaplan-Meier overall survival rate estimated during the study period was 86.9%. A significantly higher survival rate was observed in female than in male patients (94.2% vs. 76.4%, p = 0.03) (Figure 2A). Although the survival rate was greater in the PF subtype than in PV (91.4% vs. 85.4%), the difference was not statistically significant (p = 0.39) (Figure 2C). Cox proportional hazards regression showed that each additional year of age at diagnosis was associated with a 9% increase in the risk of death (HR per year ≈ 1.09; p < 0.001) and that male patients had a significantly increased risk of death compared with female patients (non-adjusted HR: 3.12; 95% CI: 1.04-9.33). However, after adjustment for age at diagnosis, the difference between male and female patients was not statistically significant. The levels of anti-Dsg1 and anti-Dsg3 at T0 did not appear to be a risk factor for survival, even after adjustment for sex and age. Comorbidities Cancer was found in nine out of 75 patients with pemphigus (12.0%); in detail, eight patients had received a diagnosis of a solid tumor (10.7%): among them, four cases (5.3%) occurred in patients prior to the diagnosis of pemphigus. The associated solid malignancies included the following: esophageal carcinoma (one patient); carcinoma of the hypopharynx (two patients); uterine leiomyoma (two patients); malignant neoplasm of the retroperitoneum and peritoneum (one patient); and bladder carcinoma (one patient). A hematological malignancy was detected in one patient (1.3%). Regarding autoimmune diseases, two cases of thyroiditis (2.7%) and four cases of arthritis (5.3%) were recorded, all in female patients and before the diagnosis of pemphigus. Regarding neurological and psychiatric disorders, we found one patient who had been hospitalized due to a hereditary degenerative disorder of the central nervous system; one patient was hospitalized for encephalitis, and two hospitalizations occurred for personality disorder.
On the other hand, we did not find any cases of hospitalization for multiple sclerosis, epilepsy, or organic psychotic conditions, such as dementia and alcohol- or drug-related mental disorders. Regarding cerebro- and cardiovascular diseases, we found six (8.0%) hospitalizations for heart attack, of which four (5.3%) occurred following the diagnosis of pemphigus; nine (12.0%) hospitalizations for non-ischemic heart disease, although none following the diagnosis of pemphigus; one hospitalization for pulmonary hypertension preceding the diagnosis of pemphigus was reported; finally, five (6.7%) hospitalizations for cerebral stroke were recorded, of which one occurred in the period following the diagnosis of pemphigus. Regarding the vascular system, three (4.0%) hospitalizations for arterial vascular system disorders, two of these after the diagnosis of pemphigus, were detected, whereas five (6.7%) for venous and/or lymphatic system disorders, of which four after pemphigus diagnosis, were recorded. Regarding the respiratory system, one case of acute infection of the upper respiratory tract and two hospitalizations for chronic obstructive pulmonary disease (COPD) were detected, both prior to the diagnosis of pemphigus; instead, two (2.7%) hospitalizations for pneumonia occurred following the diagnosis of pemphigus. None of them was taking either topical or systemic steroids at the time of hospitalization. No hospitalizations for pleural diseases were registered. Eleven diagnoses of diabetes mellitus were recorded (14.7%; five male and six female patients), of which eight (10.7%) occurred after the diagnosis of pemphigus. Regarding other metabolic disorders, overweight and obesity were found in a total of eight (10.7%) patients, all registered prior to the diagnosis of pemphigus. Regarding ocular diseases, three patients with glaucoma were registered, one of them after the diagnosis of pemphigus (4.0% and 1.3%, respectively); one patient was hospitalized due to cataracts (1.3%) diagnosed after the diagnosis of pemphigus. A total of 11 patients (14.7%) diagnosed with genitourinary tract diseases were registered, of which three were cases of nephritis registered after the diagnosis of pemphigus. Regarding the gastrointestinal (GI) system, three patients suffered from an inflammatory disease of the upper GI tract (including esophagitis, gastritis, peptic ulcers, and duodenitis) that occurred following the diagnosis of pemphigus. No cases of intestinal tract infections or inflammatory bowel diseases were reported. A. Pemphigus Management Briefly, regarding the drugs recommended by the latest guidelines for pemphigus management, it was found that no patient had received Dapsone or Cyclophosphamide as a steroid-sparing agent in the time period examined. The use of Mycophenolate Mofetil and Azathioprine as steroid-sparing therapies appeared to be almost stable throughout the years, albeit with a slight upward trend of Mycophenolate Mofetil compared to Azathioprine (Figure 3). The use of these drugs did not change considerably following the introduction of RTX. Specifically, in our study, RTX came into use in 2014, when it was administered to one patient (2.9%) out of a total of 34 cases under study that year. Since 2015, there has been an increase in the use of RTX. At the end of the study, 46.3% of patients had received at least one cycle of RTX.
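The annual prevalence-of-use figures reported in this section, and the per-user intensity metric (Pr/us) defined in the next paragraph, reduce to simple group-by arithmetic. A hedged sketch, assuming a hypothetical one-row-per-dispensing table and a table of prevalent case counts:

```python
# Sketch of the prescription metrics used in this study: annual prevalence
# of use and intensity of use (Pr/us). Table layouts are hypothetical.
import pandas as pd

rx = pd.read_csv("dispensings.csv")       # columns: patient_id, atc, year
prevalent = pd.read_csv("prevalent.csv")  # columns: year, n_cases (at Jan 1)

out = []
for year in range(2011, 2019):
    yr = rx[rx["year"] == year]
    n = int(prevalent.loc[prevalent["year"] == year, "n_cases"].iloc[0])
    users = yr.groupby("atc")["patient_id"].nunique()  # cases with >= 1 dispensing
    out.append(pd.DataFrame({
        "year": year,
        "prevalence_of_use": users / n,             # users / prevalent cases
        "pr_us": yr.groupby("atc").size() / users,  # dispensings per user
    }))

print(pd.concat(out).round(2))
```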
As expected, regarding the use of topical corticosteroids, a preference for high-potency over medium-potency agents was observed (in 2018, 13.4% and 3%, respectively). Regarding the use of opioids, recommended by the current guidelines for pain relief, we recorded a progressive decrease starting from 2011 (21.4%) until 2018 (9%) (Figure 3). The intensity of use of Azathioprine and Mycophenolate, expressed as prescriptions/users (Pr/us; number of prescriptions of each drug/total number of cases with at least one dispensing per year), was also evaluated. Both drugs showed a steady trend between 2011 and 2018, ranging between 3.4 and 5.6 Pr/us and between 4.0 and 4.5 Pr/us for Azathioprine and Mycophenolate, respectively. B. Therapies for the Management of Comorbidities in Patients With Pemphigus PPIs and antibiotics were shown to be the most frequently consumed drugs among patients with pemphigus, with their use remaining substantially stable over the years (43.3% and 52.2% in 2018 for PPI and antibiotics, respectively). Interestingly, the consumption of systemic antivirals started from 2016, whereas from 2011 to 2015, no patient was prescribed systemic antivirals. Notably, four of the eight cases with at least one prescription of antivirals between 2016 and 2018 had also received at least one prescription of RTX. Since 2015, the prevalence of use of antifungal drugs for systemic use was about 2%. Compared to PPI and antibiotics, there was a lower consumption (with a maximum of 12.2% in 2016) of insulin and non-insulin antidiabetic agents, anti-hypertensive drugs, such as beta blockers, calcium channel blockers, ACE inhibitors, and sartans (with a maximum of 16.4%, 11.1%, 16.3%, and 21.8%, respectively), and antithrombotic and anticoagulant agents (Figure 4). DISCUSSION The distribution of the different disease phenotypes within our cohort is in line with data reported in the literature (23), with a clear prevalence of PV over PF (4). Interestingly, the mean age at onset of pemphigus observed in our study was around 57, substantially overlapping with other epidemiological studies, such as a French series including 266 patients (24). Similar to that study, we did not observe substantial differences in terms of age at onset in relation to sex or pemphigus variant (24). In contrast, an English study including 138 patients with pemphigus demonstrated a significantly higher mean age at diagnosis (around 71 years) (25). Pemphigus-specific mortality was estimated to be around 5% of patients in previous studies (3). In our series, 9.4% of the patients died during the study period. Death occurred in 10.2% of patients with PV, in 5.7% of patients with PF, and in one patient (corresponding to 33.3%) with PNP. Of note, the cause of death was available only for eight out of 14 patients, and in all these cases, it was not directly related to pemphigus itself. In another large series by Kridin et al., including 237 patients with pemphigus, with a slightly lower age at onset than our study, death was reported in 19.8% of patients with PV and 23.3% of patients with PF (19). The differences between these two studies might be due to the different sample sizes and/or to the different follow-up periods. As expected, survival in our cohort was lower among patients with PV than PF. In addition, the survival rate decreased clearly with increasing age and was lower in male than female patients.
Collectively, these data are similar to those of previous studies in the literature (24). As expected, there was a significant association between PV and anti-Dsg3 IgG antibodies as well as PF and anti-Dsg1 antibodies. Further, a significant drop in autoantibody titers between pemphigus diagnosis and the first 12 months of follow-up occurred in the vast majority of patients with PV (83.6% and 87.1% for anti-Dsg1 and anti-Dsg3, respectively) and PF (94.4% and 83.3% for anti-Dsg1 and anti-Dsg3, respectively). After 1 year of treatment, anti-Dsg1 antibodies were shown to decrease more in patients receiving RTX compared to those treated with other immunosuppressive agents. Because circulating antibody titers closely reflect the clinical activity and severity of the disease, our results confirm the efficacy of the various immunosuppressive treatments in pemphigus. Concerning the treatment regimen, our study highlights the preferential use of Mycophenolate and Azathioprine as adjuvant steroid-sparing drugs for pemphigus management. Accordingly, none of the patients received treatment with other immunomodulatory drugs, such as dapsone, or immunosuppressive drugs, such as cyclophosphamide. Reasons include the fact that dapsone is not licensed for use in Italy, whereas cyclophosphamide has an unfavorable safety profile compared with other steroid-sparing agents and has not shown meaningfully superior efficacy in randomized clinical trials. The increasing trend of RTX use observed in our sample is related to the accumulating evidence of efficacy of RTX in pemphigus (18,26,27), leading current guidelines to recommend the use of the drug as a first-line option in pemphigus (5). Despite exciting results of RTX clinical trials (28), we observed that the trend of Mycophenolate and Azathioprine prescriptions remained substantially stable after the introduction of RTX in our cohort. Among drugs used for the management of disease- or treatment-related comorbidities, PPI and antibiotics were the most frequently prescribed. The high consumption of PPI can be explained by the long-term use of corticosteroids in patients with pemphigus. In this regard, our series confirms the usefulness of this class of drugs in preventing severe and/or chronic gastrointestinal toxicity of systemic corticosteroids, as suggested by the low incidence of diseases of the upper gastrointestinal tract. Infections represent one of the most frequent comorbidities among patients with pemphigus and account for one of the main causes of pemphigus-related mortality (21,29). The high consumption of antibiotics suggests that infections are also common in our patients' cohort. Notably, it is arguable that most of the infections experienced by patients were of mild severity, as, except for two cases of pneumonia (both occurring after pemphigus diagnosis), there were no hospitalizations due to serious infections during the study period. Notably, we recorded a very low consumption of antimycotic drugs, which suggests a low incidence of mycotic infections among patients. This partly contrasts with other studies in the literature, which demonstrated an increased susceptibility of patients with pemphigus to either dermatophyte or deep and systemic fungal infections (29). Several studies reported herpes virus infections, including herpes zoster infections, as highly frequent in patients with pemphigus, especially in those receiving lymphocyte-targeting or B-cell depleting therapies, including Mycophenolate and RTX (30,31).
Moreover, herpes virus infections can also serve as a trigger for sudden worsening of pemphigus during immunosuppressive treatments (32,33) and can be difficult to recognize due to clinical similarities between herpes and pemphigus lesions (34). In our sample, despite a generally low consumption of systemic antivirals, half of the patients who were prescribed this class of drugs had received treatment with RTX. In our study, about 12% of patients with pemphigus received a diagnosis of malignancy; in about half of them, cancer occurred after pemphigus diagnosis. These data are congruent with other studies in the literature, suggesting a significant association between cancer and pemphigus, including non-paraneoplastic variants. The latter cases are referred to as malignancy-induced or malignancy-exacerbated pemphigus (20,35-37). Collectively, these findings highlight the importance of increased awareness about the potential risk of malignancy in patients with pemphigus. Symptoms, including weight loss, fatigue, and chronic fever, should raise suspicion and require prompt recognition and appropriate diagnostic workup. Although a strong systemic activation of coagulation has been mostly observed in bullous pemphigoid (38), similar to previous studies (21,39,40), we observed a higher incidence of cardiovascular and cerebrovascular diseases as well as thrombo-embolic events in our cohort, the vast majority of which were reported after pemphigus diagnosis. Collectively, these findings suggest that patients with pemphigus may benefit from anti-hypertensive drugs, beta blockers, and anti-thrombotic drugs for preventing these potentially lethal adverse events. Comorbid diseases including diabetes mellitus, cataracts, and glaucoma, observed in our patients, are presumably related to toxicity of long-term use of systemic corticosteroids. Nutritional counseling may thus be important for patients to counterbalance the alterations of glucose metabolism due to systemic steroids. Major strengths of the study include the large sample size and the monocentric design. The main limitation of the study lies in its retrospective design. As a result, long-term observation varies significantly among patients. Other limitations include the fact that complete data on pemphigus-associated comorbidities and drug prescriptions were available for only 78 patients with pemphigus. For the same reason, we were not able to link the prescription database to the control population. A second limitation is that the regional prescription database does not collect data related to inpatient prescriptions. For this reason, for patients having experienced long hospitalization periods, the use of some drugs may have been underestimated. Finally, the causes of death were not available in the data exploited in this study and were retrieved only for some of the patients. CONCLUSIONS This study represents the largest series of patients with pemphigus from a single referral center in Italy. This epidemiological study confirms that, although pemphigus represents a prototype of organ-specific autoimmune diseases, it frequently occurs in association with several comorbid diseases. Moreover, in addition to immunosuppressive treatments, patients often require additional medications for managing these comorbidities. This finding highlights the importance of an integrated and multidisciplinary network for the correct management of patients and for optimizing costs.
Acute cerebro- and cardiovascular events and malignancies remain serious complications that dermatologists should keep in mind when managing patients with pemphigus. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Azienda USL Toscana Centro. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. AUTHOR CONTRIBUTIONS LQ, ACoi, RM, and MC contributed to the conception and design of the study. LQ and ACoi organized the database. ACoi performed the statistical analysis. LQ, RM, and MC wrote the first draft of the manuscript. ACoi wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
Toward a Mechanistic Understanding of Poly- and Perfluoroalkylated Substances and Cancer Simple Summary Poly- and perfluoroalkylated substances (PFAS) are industrial chemicals found in many household products that persist in the environment. While several excellent review articles exist on the potential harmful effects of PFAS, there are few focused on cancer. This concise and streamlined mini-review focuses on summarizing molecular mechanisms related to the potential cancer-promoting properties of PFAS. This review organizes and interprets the vast primary PFAS cancer biology literature and provides a coherent, unified, and digestible model of the molecular mechanisms that potentially explain PFAS cancer promotion. Abstract Poly- and perfluoroalkylated substances (PFAS) are chemicals that persist and bioaccumulate in the environment and are found in nearly all human populations through several routes of exposure. Human occupational and community exposure to PFAS has been associated with several cancers, including cancers of the kidney, testis, prostate, and liver. While evidence suggests that PFAS are not directly mutagenic, many diverse mechanisms of carcinogenicity have been proposed. In this mini-review, we organize these mechanisms into three major proposed pathways of PFAS action—metabolism, endocrine disruption, and epigenetic perturbation—and discuss how these distinct but interdependent pathways may explain many of the proposed pro-carcinogenic effects of the PFAS class of environmental contaminants. Notably, each of the pathways is predicted to be highly sensitive to the dose and window of exposure, which may, in part, explain the variable epidemiologic and experimental evidence linking PFAS and cancer. We highlight testicular and prostate cancer as models to validate this concept. Introduction Poly- and perfluoroalkylated substances (PFAS) are a class of chemicals used in many industrial and consumer products to resist heat, stains, water, and grease (Figure 1) [1]. Examples include Teflon, coatings on fast food wrappers and nonstick pans, floor polish, carpets, furniture fabrics, firefighting foams, clothing treatments, and many others [1,2]. The manufacture, application, and disposal of fluorochemicals, since the 1940s, have led to worldwide pollution of PFAS, which affects not only water sources, but also food production, soil, runoff, and groundwater sources [3,4]. Fire suppression activities are also a significant source of environmental PFAS contamination, reflecting the widespread use of PFAS-containing firefighting foams. Multiple epidemiologic studies have been supportive but not definitive in linking PFAS exposure to cancer, including cancers of the kidney, testis, prostate, liver, breast, pancreas, bladder, and non-Hodgkin's lymphoma [20][21][22][23][24][25][26][27][28]. However, other studies have shown inconsistent or negative correlations [29][30][31]. This may be due to differences in study design, difficulties in modeling PFAS exposures, and differences in the dosages and windows of exposure to PFAS, which may be critical for a variety of cancers. A scoping review of 16 cohort studies, 10 case-control studies, 1 cross-sectional study, and 1 ecological study concluded that the cancer sites with the most compelling evidence for an association with PFAS exposure across studies were kidney and testicular cancers, followed by prostate cancer [14]. A separate meta-analysis, focused on kidney and testicular cancer, indicated a significant increase in cancer risk per 10 ng/mL increase in serum PFOA for kidney and testicular cancer, and that these associations were most likely causal [32].
In addition, rodent studies have shown that PFOA, PFOS, and GenX can increase the rate of Leydig cell adenoma, pancreatic acinar cell adenoma, hepatocellular adenoma and carcinoma, and thyroid adenoma, although the human relevance of these findings has been called into question [33][34][35][36]. The health concerns related to PFAS have attracted much attention from the public and the scientific community. Despite past efforts, the mechanisms of action of PFAS, especially in relation to cancer, are poorly understood. Here, we review and synthesize the major proposed cancer mechanisms related to PFAS exposure. Potential Mechanisms of PFAS Carcinogenesis Unlike known carcinogens such as benzo(a)pyrene and UV light, which are genotoxic due to direct damage to DNA, there is little evidence that PFAS are direct mutagens or deregulators of DNA repair or genomic stability [37][38][39]. However, at high concentrations, PFAS have been demonstrated to damage DNA via reactive oxygen species generation [40,41]. It is unclear if this mechanism is relevant for typical levels of human PFAS exposure. In contrast, most of the evidence for PFAS-mediated effects has focused on epigenetics, transcription, cellular metabolism, and endocrine effects [11,12,37,42-44]. Metabolism Metabolic plasticity is one of the hallmarks of cancer [45]. PFAS exposure causes numerous metabolic alterations, through both PPAR-dependent and -independent mechanisms in the liver and other tissues [11,42]. Structurally, PFAS resemble fatty acids (FAs) and there is evidence that PFAS can act as ligands for peroxisome proliferator-activated receptors (PPARs) [46,47]. PPARs are transcription factors with many biological effects beyond their canonical role in controlling lipid and glucose metabolism [48]. Hence, activation of PPARs is an attractive mechanism to explain many of the biological effects of PFAS. The activation of PPARα has been extensively studied as a mechanism of PFAS-mediated liver toxicities, including fibrosis, cirrhosis, steatosis, non-alcoholic fatty liver disease, and liver cancer [49][50][51][52].
Similarly, the PFAS activation of PPARs has also been proposed to mediate dyslipidemia (especially high cholesterol), insulin resistance, adipogenesis, and several cancers, including colon, breast, and prostate cancer [11,42,53-57]. Likely related again to a structural similarity with FAs, PFAS are known to accumulate in the liver and have been proposed to alter FA metabolism by binding to FA transporters and metabolic enzymes [11,42]. In contrast to PFAS activation of PPARs, there is less evidence for direct activation by PFAS of other metabolic and xenobiotic nuclear receptors that respond to FAs, including liver X (LXR), farnesoid X (FXR), constitutive androstane (CAR), and pregnane X (PXR). Since altered metabolism is a key feature of the cancer phenotype, the alteration of metabolic regulators such as PPARs offers an attractive mechanism for the proposed pro-carcinogenic actions of PFAS [45]. Another mechanism related to FA mimicry is the proposed direct effect of PFAS on regulating cell membrane fluidity [58,59]. Published studies demonstrate a central role for PPARα signaling in PFOA/PFOS-induced liver and kidney carcinogenesis [21,60]. In addition, an important role for fatty acid metabolism has been proposed for other cancers including breast, prostate, and colon cancer [61-63]. PFOA has been proposed to increase the risk of metabolic syndrome in humans [57]. PFAS alter the hepatic metabolism, with alterations in amino acid biogenesis and the Krebs cycle [64]. In addition, the upregulation of enzymes involved in β-oxidation has been reported upon PFOS exposure [65]. PFOS also induced high levels of peroxisomal, endoplasmic reticulum, mitochondrial, and membrane proteins, and deregulated lipid and amino acid metabolism [66,67]. Prenatal exposure to PFAS can contribute to pediatric liver toxicity [68]. A study of 1105 mother-child pairs that assessed multiple PFAS in maternal blood found higher levels of the liver enzymes alanine aminotransferase, aspartate aminotransferase, and gamma-glutamyl transferase [68]. Furthermore, PFAS levels were associated with alterations in serum amino acid levels in children [69]. In a study of male Chinese subjects, six PFAS were associated with serum metabolic changes linked to oxidative stress [70]. Metabolic stress, as evidenced by metabolites of oxidative DNA damage and lipid peroxidation, has also been documented in both animal and cell line studies for a number of PFAS compounds [54,70]. An additional study of targeted metabolomics found perturbations in branched-chain and aromatic amino acid biosynthesis and glycerophospholipid metabolism and a link between PFAS and increased risk of non-alcoholic steatohepatitis in children [68]. Rodent experiments have shown that early-life and prenatal PFAS exposure is associated with liver injury in offspring [71,72]. In summary, the activation of PPARs and associated metabolic perturbations, especially in the liver, is one of the most studied mechanisms of PFAS actions. The recent appreciation that many cancers are driven and sustained by metabolic reprogramming underscores the potential importance of this pathway in studying the proposed pro-carcinogenic effects of PFAS. How metabolic reprogramming at the hepatic and cancer cell/cancer progenitor cell level cross-talks with epigenetic and endocrine reprogramming is a key area of future research for understanding the potential carcinogenicity of PFAS (Figure 2).
Figure 2. Proposed mechanisms of potential PFAS cancer promotion. PPAR-dependent and -independent reprogramming of metabolism, epigenetics, and endocrine disruption are represented as interconnecting, mutually reinforcing pathways of potential PFAS tumor promotion. The precise details of how PFAS influences these pathways are still uncertain, as is the impact of other proposed PFAS mechanisms, including immunosuppression and oxidative stress. Endocrine Disruption PFAS cross the placenta and concentrate in breast milk; thus, exposure to the developing fetus and infant occurs [73,74]. PFAS are known to have endocrine-disrupting properties [75,76]. There are reports of adverse reproductive health and decreased fecundity linked to PFAS exposure [77,78]. Human semen quality has decreased over the last several decades. This time period coincides with the rise in production of endocrine-disrupting chemicals (EDCs), and PFAS have been associated with infertility in male mice and subfertility in female mice [79,80]. In several studies, estrogenic and anti-androgen activities were observed for a number of PFAS compounds [81][82][83][84]. There is evidence that PFAS
Several cancers associated with PFAS are hormone-dependent, including prostate and breast cancer, or have an etiology closely associated with endocrine disruption, as in testicular cancer [22][23][24][25][26][27][28]. In addition, endometrial cancer has been associated with endocrine disruption [91]. There is evidence that PFAS can alter endocrine hormone levels, potentially leading to disrupted reproductive health, especially with neonatal or pubertal exposure [92][93][94]. A major proposed mechanism of EDCs, in general, is their binding to nuclear receptors [95]. While there is strong evidence supporting the direct activation of PPARs, there is less evidence that PFAS directly activate endocrine receptors, including estrogen (ER) and androgen receptors (AR). Hence, the mechanism of endocrine disruption mediated by PFAS remains unclear, suggesting that indirect mechanisms, including epigenetic and/or metabolic reprogramming, may play roles in disrupting the production and secretion of endocrine hormones during critical windows of exposure [44,96] (Figure 2). In turn, early-life exposure to EDCs has been associated with epigenetic reprogramming that manifests later in life [97]. In summary, epigenetics may play a key role in initiating and maintaining potential pro-cancerous states mediated by non-mutagenic PFAS chemicals. Despite this, very few mechanistic studies have been reported. We speculate that epigenetic reprogramming by PFAS may be driven, in part, by metabolomic alterations in substrates and cofactors of epigenetic enzymes and, reciprocally, that epigenetic-mediated, transcriptional reprogramming plays a key role in establishing and stabilizing the metabolic and hormonal states required for continued tumorigenesis [123][124][125][126][127] (Figure 2). This hypothesis is motivated by the above-mentioned association between PFAS and metabolic, epigenetic, and endocrine disruptions and the recent appreciation of mechanistic relationships between these three pathways. In the following section, we highlight these principles with two cancers possessing epidemiologic links to PFAS: prostate cancer, which is strongly associated with metabolic disruption, and testicular cancer, which is strongly associated with epigenetic reprogramming. The Case for Testicular Cancer There is mounting evidence that testicular germ cell tumors (TGCTs) are especially driven by epigenetics and environmental exposures, including estrogenic exposures. This, coupled with recent epidemiologic evidence linking testicular cancer to PFOA, suggests that TGCTs may be a cancer type especially sensitive to PFAS exposure. TGCTs are the most common solid cancers of males aged 15-35 [128]. Testicular cancer is a disease of developmental origin, with evidence suggesting that they arise from aberrant primordial germ cells in utero [129]. TGCTs may be especially driven by epigenetics since they have a very low mutational rate compared to other solid tumors, and most patients lack the so-called "driver" mutations found in almost all other solid tumors [130,131]. There is also a link between environmental exposures, for example, estrogenic exposures in utero and early development, and TGCT incidence [132][133][134]. Further, the incidence of TGCTs has greatly increased in industrial nations in the past 50 years, consistent with the premise that exposure to toxic chemicals has impacted TGCT incidence [128]. 
Epidemiologic studies have indicated that the fetal gonads may be especially sensitive to pro-estrogenic and anti-androgenic insults [132][133][134]. For example, a meta-analysis of 10 studies on EDCs and testicular cancer risk concluded that maternal exposure, but not adult exposure, to EDCs was associated with a >2-fold higher risk of testicular cancer in offspring [132]. This has led to the proposition that testicular cancer is an extreme case of a "testicular dysgenesis syndrome" that includes cryptorchidism, hypospadias, poor semen quality, and male subfertility due to environmental abnormalities, especially those associated with low androgen levels during gonadal development [135,136]. In fact, the above-mentioned conditions, along with congenital disorders of sex development, are known risk factors for TGCTs [134,137,138]. Hence, TGCT etiology matches well with some of the most-studied mechanisms of PFAS action, namely, epigenetics and endocrine disruption. Supporting the idea that TGCTs may be especially sensitive to epigenetic perturbations, we recently found that the polycomb pathway and DNA methylation are interconnected epigenetic drivers of cisplatin sensitivity, resistance, and tumorigenicity in TGCT cells [134,139,140]. Of all cancers, testicular cancer has one of the strongest epidemiological links to PFAS exposure, including in cohort and ecological/case-control studies [13,14,24,25,32]. In the C8 Health Project Dupont plant study of individuals in a community exposed from 1950 to 2004, the incidence of testicular cancer increased with increasing PFOA serum levels, with a 3-fold higher risk in the most-exposed group [24]. TGCTs are one of the eight cancers that PFAS-exposed firefighters contract more often than the general public [141]. In addition, several studies in mice and humans suggest an increase in male reproductive toxicities after prenatal, childhood, adolescent, and adult PFAS exposures [33][34][35]. These include adverse effects on semen quality and quantity, and reproductive hormone levels, which are known to be risk factors for human TGCTs [142][143][144]. While some epidemiological studies specifically concerning PFAS exposure and decreased testosterone levels are conflicting, findings are generally consistent for cohorts exposed in utero, suggesting that the window of exposure is especially critical for PFAS effects on male reproductive health [75,92-94]. The strong association between male subfertility and TGCT risk suggests the presence of common etiologic factors. Hence, the testis may be especially vulnerable to EDCs during certain, as yet undefined, windows of susceptibility. Studies in rats show that PFAS accumulate in the testis, and there are supportive data indicating testicular damage following PFAS exposure [145,146]. PFOS and PFOA exposure in mice and rats, including in utero exposure, leads to impaired Leydig cell function and, in some cases, Leydig cell tumors, both of which are associated with decreased testosterone levels [78,81,84,145-151]. While some data are also conflicting, as they pertain to PFAS and decreased testosterone in rodents [78,152,153], the data are again more consistent for in utero exposure [81,148,149]. This same trend is also apparent for decreased sperm counts and altered spermatogenesis for PFAS-exposed mice and rats [83,149]. There is also a connection between TGCTs and PPARα, another proposed mechanism of PFAS action.
In rodent models, PFAS exposure is known to increase liver expression of CYP19A1 through activation of PPARα, resulting in increased estrogen and decreased testosterone levels [43,154]. There is also evidence of a direct effect of PFAS on Leydig cells, leading to decreased production and secretion of testosterone [147]. In summary, epidemiologic and experimental evidence suggests that TGCTs may be a key tumor type with which to begin understanding the mechanistic details of epigenetic and endocrine-mediated carcinogenesis as potentially mediated by the PFAS class of environmental toxicants, which may also be relevant to other toxicants. The Case for Prostate Cancer There is evidence associating all three of the major outlined PFAS pathways with the potential promotion of prostate cancer. Prostate cancer and benign prostate cells are dependent on androgens and modulated by other hormones. Hence, it is possible that EDCs could modulate prostate cancer cell homeostasis, leading to prostate cancer progression. Several other EDCs, including cadmium, dioxin, polychlorinated biphenyls, and bisphenol A, have also been associated with prostate cancer progression [155]. PFAS exposure has been shown to potentially increase the risk of prostate cancer in some settings, including for men working in or living near chemical production plants, especially in individuals with a family history of prostate cancer [13,22-26,156]. In addition to environmental and occupational exposures, lifestyle factors, including diet and body weight that alter lipid metabolism, contribute to overall prostate cancer risk [157][158][159][160]. There is evidence from human prostate cell lines and transgenic mouse models that a high-fat diet contributes to prostate cancer progression by shifting the prostate metabolome to a pro-cancerous state [159,161]. Of note, these actions are mediated, in part, through PPARα, providing the potential for enhanced tumor promotion. We recently showed that PFOS exposure and a high-fat diet synergize to increase prostate cancer xenograft growth in mice [122,162]. PFOS treatment increased glucose metabolism and pyruvate production in prostate cancer cells [122]. In addition, we demonstrated enhanced glycine and serine metabolism and enhanced glucose metabolism, through the Warburg effect, in human prostate stem-progenitor cells in response to PFOA and PFOS exposures [162]. Prostate stem-progenitor cells also express PPARα and retinoid X receptor-α, which mediate PFAS effects in other tissues [162]. This suggests that PFAS exposure may synergize with a high-fat diet to activate PPARα, resulting in altered cell metabolism to potentially promote tumorigenesis in normal prostate and prostate cancer cells. The metabolic status of cancer cells determines phenotypic characteristics and drug responses of hormone-dependent cancers [163,164]. Published studies demonstrate that metabolic changes impact epigenetic marks during tumor progression [165][166][167]. Furthermore, PPARα itself is subject to control by epigenetic marks, providing another crosstalk mechanism between metabolism and epigenetics in regulating PFAS actions [21,60]. Metabolic alterations in cancer cells result in epigenetic reprogramming due to changes in the availability of substrates for epigenetic enzymes [123,165-169].
For example, local acetyl-CoA production, via recruitment of metabolic enzymes to chromatin, enables coordination of environmental cues with histone acetylation and gene transcription, which may increase the fitness and survival of cancer cells [168,169]. Reciprocally, epigenetic reprogramming is a common way for cancer cells to adapt to a hostile metabolic environment, mediating inheritable changes in cellular metabolism by altering levels and activity of metabolic regulators [123][124][125][126][127].

Conclusions

Exposure to PFAS may have adverse, cancer-related health effects, although data from animal models and epidemiology studies are not entirely consistent or conclusive, and many diverse mechanisms of carcinogenicity have been proposed. We contend that three major pathways or properties of PFAS underlie the majority of these mechanisms (Figure 2). Metabolic disruption due to PPAR-dependent and -independent FA mimicry could lead to downstream effects on endocrine homeostasis and epigenetic priming. In turn, epigenetics can provide inheritable and sustainable reprogramming of metabolism and gonadal signaling. Finally, endocrine disruption mediated by PFAS can potentially result in far-reaching, hormone-mediated modulations of both the epigenome and the metabolome. These three interconnected and mutually enforcing pathways may combine to establish a pro-tumorigenic environment for cancer promotion. Notably, each of these pathways is predicted to be highly sensitive to dose, with the potential to be biphasic, and also highly dependent on the window of exposure during the human life cycle, which may explain the sometimes inconsistent epidemiologic and experimental evidence linking PFAS and cancer. These challenges must be met to fully understand the impact of PFAS on cancer development.
Contrasting mitochondrial diversity of European starlings (Sturnus vulgaris) across three invasive continental distributions

Abstract

European starlings (Sturnus vulgaris) represent one of the most widespread and problematic avian invasive species in the world. Understanding their unique population history and current population dynamics can contribute to conservation efforts and clarify evolutionary processes over short timescales. European starlings were introduced to Central Park, New York in 1890, and from a founding group of about 100 birds, they have expanded across North America with a current population of approximately 200 million. There were also multiple introductions in Australia in the mid-19th century and at least one introduction in South Africa in the late 19th century. Independent introductions on these three continents provide a robust system to investigate invasion genetics. In this study, we compare mitochondrial diversity in European starlings from North America, Australia, and South Africa, and a portion of the native range in the United Kingdom. Of the three invasive ranges, the North American population shows the highest haplotype diversity and evidence of both sudden demographic and spatial expansion. Comparatively, the Australian population shows the lowest haplotype diversity, but also shows evidence for sudden demographic and spatial expansion. South Africa is intermediate to the other invasive populations in genetic diversity but does not show evidence of demographic expansion. In previous studies, population genetic structure was found in Australia, but not in South Africa. Here we find no evidence of population structure in North America. Although all invasive populations share haplotypes with the native range, only one haplotype is shared between invasive populations. This suggests these three invasive populations represent independent subsamples of the native range. The structure of the haplotype network implies that the native-range sampling does not comprehensively characterize the genetic diversity there. This study represents the most geographically widespread analysis of European starling population genetics to date.

| INTRODUCTION

Invasive populations are useful systems to investigate responses to novel environments, providing insight into mechanisms underlying invasion success and native species' capacity to adapt to a changing world (Moran & Alexander, 2014). Despite this opportunity, these studies often examine only one introduction, reducing their power to draw robust conclusions that are broadly applicable (Packer et al., 2017). For this reason, there is a growing interest in studying invasive species that have been introduced to multiple geographically and environmentally diverse localities (Kueffer, Pyšek, & Richardson, 2013; Packer et al., 2017). In this respect, the European starling (Sturnus vulgaris) is an excellent system to investigate evolutionary responses to a wide range of introduced environments, from tropical Fiji to temperate Argentina (Pinto, 2005). European starlings are native to the Palearctic but have been repeatedly introduced to novel environments, flourishing in their invasive ranges (Long, 1981). Starlings have now been introduced to every continent barring Antarctica (Rollins, Woolnough, & Sherwin, 2006, Figure 1). Their invasion success likely results from a suite of life-history and behavioral traits that may facilitate ecological flexibility.
For example, they are often classified as diet generalists, preferring insects, but they will eat most other foods depending on availability of resources (Cabe, 1993). Another feature that likely plays a role in European starlings' ability to persist in new localities is their flexibility in patterns of seasonal migration. Although not all starling populations are migratory (e.g., in Australia and New Zealand; Higgins, Peter, & Cowling, 2006), in populations that are migratory, there is a great deal of individual variation in migratory behavior (i.e., individuals can be differentially migratory from year to year; Blem, 1981; Feare, 1984). Some research suggests that seasonal migration may be an adaptive strategy in response to seasonality; therefore, migratory flexibility in starlings may allow them to persist in seasonal environments and facilitate range expansion (Winger, Auteri, Pegan, & Weeks, 2019). This trait may also contribute to differences in population structure across introductions.

European starlings were introduced to North America in 1890 as part of an American Acclimatization Society initiative to populate Central Park with the birds from Shakespeare's plays (Cooke, 1928; Phillips, 1928). The initial introduction consisted of approximately 60 individuals released in 1890 and 40 more in 1891, leading to a total of ~100 individuals released into Central Park in New York City (Cabe, 1993). From this founding population, starlings have expanded their range across all of North America, where their current population exceeds 200 million individuals, over one-third of the global population of this species (Feare, 1984). This range expansion has taken place in the last 130 years, demonstrating their ability to persist in a heterogeneous novel environment. Given the diverse environments colonized by starlings in North America, it is interesting that nuclear markers indicate that little population structure exists (allozymes, Cabe, 1998; single nucleotide polymorphisms, Hofmeister, Werner, & Lovette, 2019).

Other starling introductions from the 19th century have been previously studied, including the mid-19th century Australian introductions (Rollins et al., 2016; Rollins, Woolnough, Sinclair, Mooney, & Sherwin, 2011; Rollins, Woolnough, Wilton, Sinclair, & Sherwin, 2009) and the late 19th century South African introduction (Berthouly-Salazar et al., 2013). In Australia, up to sixteen different introduction attempts have been made with birds originating from the United Kingdom, from 1856 to 1881, with only two resulting in recorded established populations from ~165 original birds (Higgins et al., 2006; Long, 1981). Nuclear and mitochondrial markers identified concurrent population structure across the Australian range, and nuclear polymorphisms were associated with environmental variables in that population (e.g., aridity; Cardilini et al., 2020; Rollins et al., 2009, 2011). In contrast to the high levels of propagule pressure in Australia, only one introduction to South Africa of ~18 birds originating from Britain in or around 1897 has been recorded (Winterbottom & Liversidge, 1954). The South African introduction enables a powerful comparison with the North American introduction because of similarities in timing of these events (1897 and 1890, respectively). Both the Australian and South African introductions have reduced mitochondrial genetic diversity in comparison to the native source population in the UK (Berthouly-Salazar et al., 2013; Rollins et al., 2011).
Founding population sizes during introduction are often small, resulting in genetic bottlenecks and lower genetic diversity than in the native range (Baker & Stebbins, 1965; Nei, Maruyama, & Chakraborty, 1975). However, numerous insights from studies of other invasions suggest that decreased genetic diversity at introduction may not hinder these species' ability to become established in novel environments (Dlugosch, Anderson, Braasch, Cang, & Gillette, 2015; Frankham, 2005). Factors such as the number of introduction attempts, the timing of these attempts, dispersal patterns in the introduced range, and the rate of population expansion may play a larger role in shaping patterns of genetic diversity and ultimately contributing to successful colonization. A wide body of evidence suggests that adaptation in introduced ranges occurs rapidly, and this does not appear to be reliant on genetic diversity (Rollins et al., 2013).

Here, we use mitochondrial control region sequence data to examine starling population structure in North America and compare mitochondrial genetic diversity in populations from the native range and from three established invasions: North America, Australia, and South Africa. Although the limitations of using mitochondrial DNA in population genetic analyses have been well characterized (Ballard & Whitlock, 2004; Bazin, Glémin, & Galtier, 2006), there are several benefits associated with its use. First, previous studies of starlings in Australia, South Africa, and the UK used mitochondrial control region sequence data, so the comparative strength of our study is predicated on using the same marker. Second, Australian studies that have compared population structure using mitochondrial sequence data to that of microsatellite (Rollins et al., 2011) and single nucleotide polymorphism data (Cardilini et al., 2020) found similar patterns, supporting the validity of our approach. Third, mitochondrial DNA is still one of the most reliable sources of DNA that can be extracted from historical museum specimens (Guschanski et al., 2013; Mason, Li, Helgen, & Murphy, 2011; Ramakrishnan & Hadly, 2009), and population analyses using historical specimens rely on comparable datasets from modern birds, such as this one. Finally, although mitochondrial DNA cannot provide a complete evolutionary picture, it is especially useful as evidence to clarify recent changes in a population (Zink & Barrowclough, 2008). This is especially true of the noncoding control region, which has high nucleotide diversity (Saccone, Pesole, & Sbisà, 1991).

In this study, we use this unique biological system that features multiple, independent, and documented introductions to investigate how propagule pressure (e.g., the number of introductions), environmental factors, and the expansion rate in introduced ranges influence contemporary population structure and genetic diversity. Based on previous research using nuclear markers, we predict low levels of population structure within North America. We predict that the mitochondrial diversity of the North American population will be lower than that of Australia, where multiple introductions were made (Jenkins, 1959), and these occurred prior to and had a greater number of propagules than the New York introduction (Australian introductions started in 1854; Jenkins, 1959). Further, we predict similar levels of genetic diversity in South Africa and North America, due to similarities in timing of introductions and propagule pressure.
We discuss microevolutionary changes that have occurred since the introduction of these populations across the world.

| Amplification and sequencing

The primers used to amplify the mitochondrial control region in North American specimens were initially designed to analyze mitochondrial diversity of the Australian population (Rollins et al., 2011). Rollins et al. (2011) designed a series of overlapping primers to be utilized in the amplification of museum specimens or highly degraded samples (Table S1). We used these primers to sequence the control region of North American samples in four overlapping segments. Two of these primers (svCRL1 and svPheH3) amplify most of the mitochondrial control region and also were used to amplify DNA from the starling population in South Africa (Berthouly-Salazar et al., 2013). For the PCRs, PuReTaq Ready-To-Go PCR Beads were rehydrated with 13.5 µl of molecular grade water, 5 µl of 10 µM forward and reverse primers, and 1.5 µl of DNA. The thermocycling conditions used here were identical to those described in the original paper (Rollins et al., 2011).

| Population and expansion analysis

Overlapping sequences were aligned using the software Geneious 11.1.2 (Kearse et al., 2012) to generate a consensus sequence for each individual from North America. All subsequent alignments including the samples from other continents were generated on Geneious using the standard settings and the Geneious alignment algorithm (Kearse et al., 2012). Median joining haplotype networks were created using Network v10.1.0.0 (Bandelt, Forster, & Röhl, 1999) and postprocessed using the maximum parsimony calculation to remove unnecessary median vectors (Polzin & Daneshmand, 2003). Haplotype and nucleotide diversity were calculated in Arlequin, and haplotype richness was calculated using FSTAT v.2.9.4 (Goudet, 2003). Mismatch analyses were also conducted in Arlequin, but with the full dataset from each invasive population (see Table S5).

| RESULTS

The haplotype network constructed using only the North American specimens (1,181 bp sequence) included 20 haplotypes encompassing 53 polymorphic sites and did not indicate the presence of regional population structure (Figure S1). When we included samples from all continents (928 bp sequence; Figure 2), we identified 64 haplotypes encompassing a total of 46 polymorphic sites (Table S4). Haplotype diversity and richness were highest in the native range, followed by the North American population (Table 2).

| DISCUSSION

Starlings are a highly successful invasive species occupying a wide breadth of environments across the world, resulting from introductions of varying age and intensity. This system enables a unique opportunity to study molecular evolution and adaptation. Here we use mitochondrial sequence data to compare the population genetic structure and diversity of the three best-studied starling invasions: North America, Australia, and South Africa. Overall, our findings and those from data of other studies included here suggest that low genetic diversity is not an obstacle for this species' rapid adaptation (Dlugosch & Parker, 2008; Rollins et al., 2013). As expected, the invasive populations had lower genetic diversity than the population in the native range, likely caused by genetic bottlenecks at introduction. The highest haplotype richness (which accounts for differences in sample size) was found in the UK (R = 30.0); although only 45 individuals were sampled, we identified 30 haplotypes in this population.
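The diversity and richness statistics reported here follow standard population-genetic definitions. As a rough illustration only (the study itself used Arlequin and FSTAT, and the counts below are hypothetical), Nei's haplotype diversity and Hurlbert's rarefaction-based richness can be computed as:

```python
from collections import Counter
from math import comb

def haplotype_diversity(haplotypes):
    """Nei's (unbiased) haplotype diversity: h = n/(n-1) * (1 - sum p_i**2)."""
    n = len(haplotypes)
    sum_p2 = sum((c / n) ** 2 for c in Counter(haplotypes).values())
    return n / (n - 1) * (1 - sum_p2)

def rarefied_richness(counts, g):
    """Expected number of haplotypes in a random subsample of g individuals
    (Hurlbert rarefaction); this is what makes richness comparable across
    populations with different sample sizes."""
    n = sum(counts)
    return sum(1 - comb(n - c, g) / comb(n, g) for c in counts)

# Hypothetical counts shaped like the UK data: 45 birds, 30 haplotypes
uk_counts = [16] + [1] * 29
print(haplotype_diversity(["A"] * 16 + [f"h{i}" for i in range(29)]))
print(rarefied_richness(uk_counts, 20))  # expected haplotypes among 20 birds
```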
Surprisingly, despite higher propagule pressure in Australia as compared to that of North America or South Africa, Australia harbored the lowest haplotype richness (R = 7.7). The North American population, which was intermediate in terms of propagule pressure, has retained the most genetic diversity (R = 14.7). Given the timescales involved, this is unlikely to be caused by novel mutations arising in North America (but see Rollins et al., 2016). However, it could be caused by differences in genetic diversity of founders or by higher levels of differential survival between haplotypes in Australian or South African starlings as compared to those from North America. It may be that some haplotypes have been lost in the native range since founders were collected. Differences in population expansion rates in novel environments also could be responsible for the differences in genetic diversity we found, with faster expansion resulting in higher haplotype diversity and lower nucleotide diversity (Halliburton & Halliburton, 2004).

The haplotype network including all populations (Figure 2) revealed some interesting relationships among haplotypes. South African starlings are genetically distinct from those of North America and Australia, suggesting that the founders for this population may have been sourced from a different region of the UK. North American and Australian starlings are genetically similar (intermixed in the network), but only shared a single haplotype (H_25), suggesting that the founders for these populations may have been sourced from the same region of the UK, but were likely to have been genetically distinct. As expected, UK samples were well distributed across the network, but many of the invasive haplotypes were not found in UK samples, highlighting the paucity of information that exists about starlings in their native range and making it difficult to further interpret sources of founding populations. For this reason, and because European starling populations are in decline in their native range (Heldbjerg et al., 2019), it may be important to further characterize this population.

Previous studies have investigated population structure within introduced populations of starlings. Within Australia, genetically distinct groups of starlings have been characterized using nuclear and mitochondrial markers (Rollins et al., 2009, 2011), and evidence of local adaptation to the Australian environment has been described (Cardilini, Buchanan, Sherman, Cassey, & Symonds, 2016; Cardilini et al., 2020). However, in South Africa, no evidence of population structure was found (Berthouly-Salazar et al., 2013). The regional analysis conducted within North America in the present study also found little evidence of population structure in this invasive population. We did see a slight (F ST = 0.04) albeit statistically significant difference between Central and Western samples, but this may be due to the low sample size from the Central United States (N = 20). Overall, our findings are consistent with an earlier investigation of this population, which utilized allozyme data (Cabe, 1998), and a recent study using genome-wide SNPs (Hofmeister et al., 2019). However, the latter indicated that there are genotypes associated with specific environmental features such as precipitation and/or temperature. This may imply that over time, population structure could develop in this invasive population, despite apparent high levels of dispersal.
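The F ST values quoted here and below come from Arlequin's pairwise estimator. Purely to convey what is being measured (this sketch uses the simpler estimator of Hudson, Slatkin, & Maddison, 1992, applied to invented sequences, not the paper's actual computation):

```python
from itertools import combinations, product

def mean_pairwise_diff(pairs):
    """Average number of nucleotide differences over the given sequence pairs."""
    pairs = list(pairs)
    return sum(sum(a != b for a, b in zip(s, t)) for s, t in pairs) / len(pairs)

def hudson_fst(pop1, pop2):
    """F_ST = 1 - pi_within / pi_between (Hudson, Slatkin, & Maddison, 1992)."""
    pi_within = (mean_pairwise_diff(combinations(pop1, 2))
                 + mean_pairwise_diff(combinations(pop2, 2))) / 2
    pi_between = mean_pairwise_diff(product(pop1, pop2))
    return 1 - pi_within / pi_between

# Invented control-region fragments for two demes
deme_a = ["ACGTACGT", "ACGTACGA", "ACGTACGT"]
deme_b = ["TCGAACGT", "TCGAACGA", "TCGAACGT"]
print(hudson_fst(deme_a, deme_b))  # higher values = stronger differentiation
```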
Interestingly, migration rates between Central and Western sites differ (Hofmeister et al., 2019), and banding data in North America have shown that starlings migrate in unpredictable ways, not always in the north-south direction but also east-west (Brewer, 2000). Therefore, the genetic pattern we found may be due to high dispersal rates and these unpredictable, latitudinal migration patterns.

When we investigated genetic differentiation across continents, we found that invasive populations were genetically divergent (F ST ranged from 0.17 to 0.26, all statistically significant) and all significantly different from populations in the native range (F ST ranged from 0.06 to 0.17). North America was most similar to the UK and Australia was least similar. These differences are likely caused by a combination of discrete introduction sources and founder effects. However, this could also be due to differences in timing of introductions; the Australian introduction occurred earlier than the others (mid-19th century), so it is possible that these differences reflect shifts that occurred in the native range in the latter half of the 19th century.

Not surprisingly, we found genetic evidence of spatial expansion in all three invasive populations. While there was genetic support for demographic expansion in both North America and Australia, the mismatch analysis of South African data did not support the sudden (demographic) expansion model (Figure 4). This may mean that the South African starling population may still be in the "lag phase", which typically occurs following introduction (Sakai et al., 2001). Neither Tajima's D nor Fu's F s values supported the presence of population expansion in any of the invasive populations. However, Fu's F s was significantly negative in the native range, which suggests that this population may either be undergoing expansion or that it has an excess of recent mutations (Fu, 1997). Given observations of population decline in the native range (described above), this might be a signal of directional selection, which could be a response to novel environmental stressors resulting from land use changes in the UK (Heldbjerg et al., 2019).

It is also interesting to consider that differences in the environments of each of the three invasive ranges studied here may have influenced population expansion rates. The United Kingdom and surrounding parts of Europe (native range) are largely classified as temperate with a hot or warm summer (Beck et al., 2018). Temperate areas similar to the native range are the regions where most starling invasive range expansion has occurred. The starling population in North America occupies about the same latitudes as the native range, between 40°-55°N, whereas the invasive populations in Australia and South Africa occur at about 30°-35°S (Sullivan et al., 2009). In Australia and South Africa, starlings have not expanded to cover the same area that they have in a comparable amount of time in North America. In North America, starlings spread from New York to Alaska from 1890 to 1970, which represents 80 years and a rate of 90 km/year (Bitton & Graham, 2014). In Australia, starlings rapidly expanded their range into south-eastern Australia and were in Western Australia by the 1970s. However, starlings have not colonized the arid center of the continent (Higgins et al., 2006), where the highest temperatures and lowest rainfall occur (Jones, Wang, & Fawcett, 2009).
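The 90 km/year figure is Bitton and Graham's (2014) estimate. As a rough sanity check only, the great-circle distance from New York to Alaska over the same 80 years gives a lower bound of roughly 70 km/year; it is lower because the actual expansion front did not advance along a straight line, and the coordinates below are approximate:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    r_earth = 6371.0
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * r_earth * asin(sqrt(a))

# Approximate coordinates: Central Park, NYC -> Anchorage, Alaska
distance = haversine_km(40.78, -73.97, 61.22, -149.90)
print(distance / (1970 - 1890))  # ~68 km/year over the 80-year expansion
```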
Crossover from paramagnetic compressed flux regime to diamagnetic pinned vortex lattice in a single crystal of cubic Ca(3)Rh(4)Sn(13)

We report the observation of positive magnetization on field cooling (PMFC) in low applied magnetic fields (H<100 Oe) in a single crystal of Ca(3)Rh(4)Sn(13) near its superconducting transition temperature (T(c) approx 8.35 K). For 30 Oe<H<100 Oe, the PMFC response crosses over to a diamagnetic response as the temperature is lowered below 8 K. For 100 Oe<H<300 Oe, the diamagnetic response undergoes an unexpected reversal in its field dependence above a characteristic temperature (designated as T*(VL) = 7.9 K), where the field-cooled cool-down magnetization curves intersect. The in-phase and out-of-phase ac susceptibility data confirm the change in the superconducting state across T*(VL). We ascribe the PMFC response to a compression of magnetic flux caused by the nucleation of superconductivity at the surface of the sample. In very low fields (H<20 Oe), the PMFC response has an interesting oscillatory behaviour which persists down to about 7 K. The oscillatory nature underlines the interplay between competing responses contributing to the magnetization signal in the PMFC regime. We believe that (i) the counterintuitive field dependence of the diamagnetic response for H>100 Oe and above T*(VL) (lasting up to T(c)), (ii) the oscillatory character in the PMFC response at low fields, and (iii) the PMFC peaks near 8.2 K in 30 Oe<= H<= 100 Oe provide support in favour of a theoretical scenario based on the Ginzburg-Landau equations. The scenario predicts the possibility of complex magnetic fluctuations associated with transformation between different metastable giant vortex states prior to transforming into the conventional vortex state as the sample is cooled below T*(VL).

I. INTRODUCTION

Superconducting specimens of different genres and with varying pinning have been known [1][2][3][4][5][6][7][8][9][10] to display an anomalous paramagnetic response, instead of the usual diamagnetic Meissner effect, on field-cooling in small magnetic fields (H). Such a response has been designated as the Paramagnetic Meissner Effect (PME) or the Wohlleben effect 1 since the advent of superconductivity in cuprates. Originally this feature was found in the granular 2,3 form of the high T c superconductor (HTSC) Bi 2 Sr 2 CaCu 2 O 8 and in single crystals 4 of YBa 2 Cu 3 O 7 . Invoking the possible special d-wave symmetry of the superconducting order parameter in HTSC materials, different models, such as the presence of an odd number of π-junctions in a loop leading to spontaneous circulating currents producing a positive magnetization signal, the presence of Josephson junctions (π- or 0-), spontaneous supercurrents due to vortex fluctuations, or an orbital glass 5,6 , were proposed to explain the PME. However, the subsequent observation of positive magnetization even in conventional s-wave superconductors, like moderately pinned Nb discs 7,8 , nanostructured Al discs 9 and a weakly pinned spherical single crystal 10 of Nb, has indicated that the origin of positive magnetization on field cooling in these materials is perhaps related to flux trapping [11][12][13][14] and its possible subsequent compression 11,13,14 . Magnetic flux can get trapped in the bulk of a superconductor below T c , as the preferential flux expulsion from the superconducting boundaries can lead to a flux-free region near the sample edges, which would grow as the sample is further cooled 6,[10][11][12][13][14][15] .
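Much of what follows is phrased in the language of the Ginzburg-Landau (GL) description invoked in Refs. 13 and 14. For orientation only, the standard pair of GL equations (Gaussian units, textbook notation; the cited works solve these with specific boundary conditions that are not reproduced here) reads:

```latex
\alpha\psi+\beta|\psi|^{2}\psi+\frac{1}{2m^{*}}\left(-i\hbar\nabla-\frac{e^{*}}{c}\mathbf{A}\right)^{2}\psi=0,
\qquad
\mathbf{j}_{s}=\frac{e^{*}\hbar}{2m^{*}i}\left(\psi^{*}\nabla\psi-\psi\nabla\psi^{*}\right)-\frac{e^{*2}}{m^{*}c}\,|\psi|^{2}\mathbf{A}.
```

In a cylindrically symmetric sample, a giant vortex state of vorticity L corresponds to an order parameter of the form ψ = f(ρ)exp(iLφ), carrying a total flux Lφ0; the L states discussed below are of this type. With this notation in hand, we return to the trapped-flux configuration just described.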
In such a situation the magnetization response is governed by two counter-flowing currents 6,11 : a paramagnetic (pinning) current flowing in the interior of the sample, which is associated with the pinned compressed flux, and a diamagnetic shielding current flowing around the surface of the sample, which screens the flux-free region near the sample surface from the externally applied fields. Since these currents flow in opposite directions, the resultant magnetization can either be positive or negative 6,11,14 . Attempts to understand PME via the Ginzburg-Landau (GL) equations have shown 13,14 that a large compression of magnetic flux in the interior of the superconductor is energetically equivalent to the creation of giant vortex states, with multiple flux quanta Lφ 0 , where the orbital quantum number L > 1. Boundary effects in finite sized samples 13,14,16 show that the Meissner state (L = 0 state) need not be the lowest energy state; rather, a giant vortex state with L > 0 (in fact with L > 1) would have lower energy. Giant vortices are thus trapped inside the superconductor 13 below a temperature where surface superconductivity 15 is nucleated. Pinning may lead to a metastable giant vortex state with constant L (> 1) getting sustained without decay into L states with lower energy 13 , as the temperature is gradually reduced. On approaching the bulk superconductivity regime, it is proposed theoretically 14 that the transformation of a metastable giant vortex state into different lower L states can lead to a magnetization response having the tendency to fluctuate between diamagnetic and paramagnetic values.

In an earlier work 10 , some of the present authors reported the observation of surface superconductivity 15 concurrent with positive magnetization on field cooling (PMFC) (often designated as the Paramagnetic Meissner Effect, see Ref. 6) in a weak-pinning spherical single crystal (r 0 ≈ 1.1 mm) of Nb. However, there were no features in those experiments which could be ascribed to the metastable nature of giant vortex states in the temperature interval of the PMFC regime. In recent years, we have studied the ubiquitous Peak Effect (PE) phenomenon 17 in single crystals of a large variety of low T c and other novel superconductors [18][19][20][21][22] . Amongst these, the cubic stannide Ca 3 Rh 4 Sn 13 (T c ∼ 8.35 K) 21 has a κ ∼ 18. For this compound, we now present new and interesting results pertaining to the PMFC, emanating from the dc and ac magnetization measurements performed at low fields in close vicinity of T c . The peak value of the paramagnetic signal in the field-cooled cool down (FCC) magnetization curves (M FCC (T )) is inversely proportional to the magnetic field (10 Oe < H < 100 Oe) in which the sample is field-cooled. The paramagnetic signal close to T c at very low fields (H < 20 Oe) has a characteristic structure presenting a fluctuating response arising from competition between the paramagnetic and diamagnetic contributions. The ac susceptibility data also display interesting features, which appear consistent with the observations in dc magnetization measurements. A host of novel experimental findings reported here vividly illustrate the crossover from the compressed flux regime to the pinned conventional vortex lattice state, predicted and well documented by theorists in the literature 6,13,14 .

II. EXPERIMENTAL DETAILS

The single crystals of Ca 3 Rh 4 Sn 13 were grown by the tin flux method 21 .
Each growth cycle yielded a number of single crystals whose detailed pinning characteristics varied somewhat. The dc magnetization measurements were performed using a commercial SQUID-Vibrating Sample Magnetometer (Quantum Design (QD) Inc., USA, model S-VSM). In the S-VSM, the sample executes a small vibration around a mean position, where the magnetic field is uniform and maximum. This avoids the possibility of the sample moving in an inhomogeneous field during the dc magnetization measurements. The remnant field of the superconducting magnet of the S-VSM was estimated at different stages of the experiment, using a standard paramagnetic Palladium specimen. To ascertain the set value of the current supply energising the superconducting coil to yield nominal zero field at the sample position, we also relied on the identification of the change in sign of the z-component of the magnetic field on its gradual increase (1 to 2 Oe at a time) via independently examining the change in sign of the (field-cooled) magnetization values of the superconducting Sn specimen. The zero-field current-setting could thus be located to within ±1 Oe in a given cycle of gradual change (increase or decrease) of field values from a given remnant state (positive or negative) of the superconducting magnet. The isofield temperature-dependent magnetization curves were recorded by ramping the temperatures in the range of 0.1 K/min to 0.5 K/min. The ac susceptibility measurements were carried out using another SQUID magnetometer (Q.D. Inc., USA, Model MPMS-5). The ac measurements were made at a frequency of 211 Hz and an ac amplitude of 2.5 Oe (r.m.s.). The applied fields in dc and ac measurements were kept normal to the plane of the rectangular platelet (1 mm × 2 mm × 1.5 mm) shaped sample used in the present study.

The main panel of Fig. 1 shows a portion of the isothermal M vs H loop recorded at T = 4 K for a single crystal of Ca 3 Rh 4 Sn 13 . The upper critical field (H c2 ) and the onset field of the PE (H on p ) are marked in the main panel. An anomalous enhancement of the magnetization hysteresis below H c2 is a fingerprint of the peak effect (PE) phenomenon in Ca 3 Rh 4 Sn 13 19 . The inset (a) in Fig. 1 elucidates the deviation from linearity nucleating at the paramagnetic-superconductor boundary, taken as H c2 . The inset (b) in Fig. 1 shows the PE region in a portion of the M vs H loop at 6 K, with H c2 marked as well. The second magnetization peak feature 19 was not observed in the present sample. These data, comprising only the PE, attest to the high quality of the crystal 19,21 chosen for our present study.

B. Positive magnetization close to the onset of superconductivity in isofield scans at low fields

An inset in Fig. 2 displays one typical temperature dependence of the M FCC (T ) curves in low fields (viz., H = 30 Oe, here). The M FCC signal can be seen to saturate to its diamagnetic limit at low temperatures (T < 6 K). At the onset of the superconducting transition (T c = 8.35 K), the M FCC (T ) response is, in fact, paramagnetic, which is evident in the plots of the expanded M FCC (T ) curves for H = 30 Oe, 60 Oe and 90 Oe (see main panel of Fig. 2). The paramagnetic magnetization on field cooling (PMFC) in a given field (H ≤ 100 Oe) reaches a peak value before turning around to cross over towards diamagnetic values (near 8 K). The PMFC data for 30 Oe ≤ H ≤ 90 Oe in Fig. 2
reveal that (i) the height of the paramagnetic peak decreases monotonically as H increases and (ii) the competition between the positive signal and the diamagnetic shielding response gives rise to the turnaround behaviour in the PMFC signals near 8.15 K. No significant difference was noted between the PMFC response at T > 8.2 K for H < 100 Oe in the data recorded (not shown here) during the field-cooled warm-up (FCW) and FCC modes. This in turn implies that the positive magnetization signals above about 8.2 K do not depend on the thermomagnetic history of the applied magnetic field. This led us to explore closely the isothermal magnetization hysteresis loops in the temperature range 8 K < T < 8.35 K.

A comparison of the M-H plots (at 8.25 K and 8.35 K) in Fig. 3(c) reveals that even at 8.25 K, a diamagnetic response (as determined by the difference between the two plots) is clearly present at about 250 Oe. On lowering the field below about 40 Oe, a sharp upturn takes the magnetization from diamagnetic to paramagnetic values. The paramagnetic response reaches its peak value at the zero applied field (in the z-direction). The peak value of the paramagnetic signal in zero field is seen to decrease with enhancement in field on either side of the zero field. An inset in Fig. 3(c) shows a comparison of the field variation of the paramagnetic response at 8.25 K and 8.30 K on either side of the zero field on an expanded scale. Note the asymmetry in the field variation of the paramagnetic response at positive and negative fields. The observed asymmetry at 8.25 K and 8.3 K is independent of whether the sample is cooled first in +500 Oe or −500 Oe. We believe that the paramagnetic response at zero field (in the z-direction), which is superconducting in origin, reflects the magnetization signal due to compression of the field corresponding to the x- and y-components of the earth's field. The magnetization value at zero field (in M-H loops) is found to be larger at 8.25 K as compared to that at 8.3 K (cf. inset in Fig. 3(c)). Such an enhancement characteristic can be seen to continue at a further lower temperature of 8.2 K (see inset panel of Fig. 3(b)). The M-H loop in Fig. 3(a), therefore, appears to be a superposition of (i) a hysteretic M-H loop expected in a type-II superconductor and (ii) a PMFC signal decreasing with enhancement in field on either side of the nominal zero field.

The PMFC signal in the M(T ) measurements in Fig. 2 for Ca 3 Rh 4 Sn 13 is an important observation at H < 100 Oe and T > 8 K. Above 100 Oe, the magnetization response in the superconducting state (at T < 8.35 K) is largely diamagnetic; however, an important unexpected change is witnessed in the field dependence of the diamagnetic response in the neighbourhood of 8 K, as described ahead. The most striking feature of these data is the intersection of the M FCC (T ) curves at 7.9 K (identified as T * VL ). Below T * VL , the magnitude of the diamagnetic response decreases as the field increases, as expected for the vortex lattice (VL) in a type-II superconductor. However, for 7.9 K < T < 8.35 K, the magnitude of the diamagnetic response is enhanced as the field increases, which is unusual for a conventional low-T c type-II superconductor.
Such a behaviour, however, has been reported [23][24][25] in the context of a high-T c Josephson-coupled layered superconductor (JCLS), Bi 2 Sr 2 CaCu 2 O 8−δ (Bi2212), for H ∥ c, where a crossover happens at a corresponding T * value between the type-II response of a JCLS and the superconducting fluctuations-dominated response of the decoupled pancake vortices. In the case of Ca 3 Rh 4 Sn 13 , the crossover at T * VL is, however, between the pinned vortex lattice state (VL) and the compressed flux regime, giving rise to PMFC signals at H < 150 Oe in the neighbourhood of T c . We identify the region between T c and T * VL as the compressed flux regime (cf. Fig. 4). Below 7.95 K, the response of the normalized M FCC (T ) curves for different H is like that in a pinned type-II superconductor, and above 7.95 K, there exists the compressed flux regime 11,13 , accounting for the positive peaks in magnetization above 8 K and up to T c .

Figure 5 summarizes the M FCC data sequentially recorded from H = −16 Oe to +14 Oe in the single crystal of Ca 3 Rh 4 Sn 13 . The sample was initially cooled in the remnant field of the superconducting magnet, whose value was estimated by measuring the paramagnetic magnetization of the standard Pd sample. The current in the superconducting coil was then incremented step-wise so as to enhance the magnetic field by 2 Oe each time. The following characteristics are noteworthy in Fig. 5: (i) While in positive fields (H ≥ 2 Oe), the PME signal close to T c gives way to a diamagnetic Meissner response at lower temperatures, in negative fields, the same PME signal close to T c adds on to the positive Meissner response at lower temperatures. Thus, there is no change in the sign of the magnetization response as a function of temperature in negative fields. (ii) The anomalous PME peak feature prominent near T c can be rationalized from the data in Fig. 3. The fact that the positive signal at nominal zero field decays with field on either side of the zero field (cf. plots at 8.25 K and 8.30 K in the inset of Fig. 3(c)) implies that the signal would not change sign in M FCC (T ) curves measured for negative applied magnetic fields. For negative magnetic field, the diamagnetic shielding response emanating from a usual pinned type-II superconducting state would result in a positive signal in the magnetization measurements. Such a positive signal superposed on the PMFC magnetization signal (decaying with field) would rationalise the absence of a change in the sign of the PME signal in negative fields in temperature-dependent scans.

Figure 6 shows a comparison of the zero-field-cooled (ZFC) magnetization response, M ZFC (T ), in H = 8 Oe along with its M FCC (T ) run. To record the M ZFC (T ) run, the Ca 3 Rh 4 Sn 13 crystal was initially cooled down to 4 K in (estimated) zero field, the field was then incremented by +8 Oe, and the magnetization was measured while slowly increasing the temperature above T c . The crystal was then cooled down to 4 K to record the M FCC (T ) data, and thereafter the magnetization was once again measured in the warm-up mode, M FCW (T ), to temperatures above T c . The inset panel in Fig. 6 shows that, while an oscillatory characteristic is evident in the M FCC (T ) and M FCW (T ) runs, the M ZFC (T ) is devoid of any oscillatory modulation feature as the diamagnetic response (in +8 Oe) crosses over to yield the attribute of a PME peak between 8 K and 8.35 K. The three curves in the inset panel of Fig. 6 meet near 8.2 K, above which the path-independent paramagnetic response monotonically decreases.
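The ZFC discussion below invokes Bean's critical-state model. As a reminder (textbook relations for an infinite slab of thickness d in a parallel field, quoted for context rather than fitted to the present sample), the flux profile and the standard practical estimate of J c are:

```latex
\frac{dB}{dx}=\pm\frac{4\pi}{c}\,J_{c},
\qquad
J_{c}\,[\mathrm{A\,cm^{-2}}]\simeq\frac{20\,\Delta M\,[\mathrm{emu\,cm^{-3}}]}{d\,[\mathrm{cm}]},
```

where ΔM is the width of the magnetization hysteresis loop. The slope of B(x) is set everywhere by J c , so the history of field and temperature changes is imprinted in the internal flux profile, which is why the FCC, FCW, and ZFC runs need not coincide.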
It is reasonable to state that during the ZFC run in H = 8 Oe, the quantized vortices will enter the sample at a temperature at which the lower critical field H c1 (T ) becomes less than 8 Oe (ignoring the surface barrier effects). The quantized vortices will distribute inside the sample to yield Bean's Critical State 26 profile, and the macroscopic currents J c (B) will flow inside the sample. The onset of the sharp fall in M ZFC (T ) above 7 K reflects the decrease in J c (B) with T on approaching the superconducting transition temperature. It is tempting to associate the oscillatory responses in Fig. 5 with the notion of competition between the (Abrikosov) quantized vortices splitting out of the giant vortex state(s) in the form of compressed flux, and the tendency of a given giant vortex to retain (i.e., conserve) its angular momentum 13 due to pinning. The high κ of Ca 3 Rh 4 Sn 13 ordains that the different L states of the giant vortex are closely spaced in energy. By lowering the temperature, there is a tendency to transform from an L > 1 state to the L = 1 state (Abrikosov state). However, theoretical work 13,14 has shown that, due to pinning, the system can exhibit metastability, wherein there can be fluctuations in magnetization corresponding to the transformation between different metastable L states before the system attains the L = 1 state.

D. Oscillatory behaviour in field-cooled magnetization curves at low fields

In the framework of the GL equations yielding multi-flux quanta, the magnetization due to different L states follows different temperature dependences at different reduced fields (i.e., applied field normalized to the thermodynamic critical field, H c ). In very low reduced fields (h) (e.g., h ≈ 0.001, κ ≈ 10 and cylindrical geometry), it has been calculated 13 that all the L states will make paramagnetic contributions such that higher L values contribute more. In the case of Ca 3 Rh 4 Sn 13 , where H c ≈ 3 kOe 19 , the PMFC response is observed in the range of reduced fields 10 −3 to 10 −2 , where contributions from L ≥ 1 states slightly below T c could be paramagnetic. If the possible transitions between different high L states occur at the same temperature in the very low h range, one could rationalize the insensitivity of the oscillatory pattern to the applied fields in Fig. 5. We may also add here that in the GL scenario, the irreversibility temperature is argued 13 to correspond to a crossover between giant vortex states and the Abrikosov quantized vortices, consistent with the observations shown in Fig. 6.

The difference in the (diamagnetic) magnetization behaviour in FCC and FCW modes had been noted in samples of conventional low-T c 27 and high-T c 28 superconductors. Clem and Hao 29 had shown how it could be rationalized in the framework of the Critical State Model 26 . The spatial distribution of macroscopic currents (J c (B), where B is the local magnetic field) that are set up within an irreversible type-II superconductor while cooling down is different from that which emerges while warming up the sample in the same external field. The diamagnetic M FCW curve typically lies below the M FCC curve, and the two curves merge at the irreversibility temperature 29 , where J c (B) vanishes. In high-T c superconductors the irreversibility line lies well below the H c2 line. In strongly pinned samples of type-II superconductors, the irreversibility temperature T irr (H) approaches T c (H) 30 . In this context, the merger of the M FCW and M FCC curves in H = 8 Oe (cf. inset, Fig. 6)
could imply that the macroscopic J c (B = 8 Oe) approaches zero just above 8.2 K. We may further add that the overlap of the M FCC and M FCW curves at T > 8.2 K in Fig. 6 and the behaviour of M vs H at 8.25 K in Fig. 3(c) validate the theoretical prediction 13 that the PMFC signal first decays rapidly with field, followed by the emergence of a diamagnetic response at higher fields.

E. AC susceptibility measurements in Ca 3 Rh 4 Sn 13

Figures 7 and 8 summarize the in-phase (χ ′ ) and out-of-phase (χ ′′ ) ac susceptibility data recorded with an h ac of 2.5 Oe (r.m.s.) in iso-field and iso-thermal runs, respectively. The iso-field runs were made while cooling down from the normal state (T > 8.35 K). The isothermal data were recorded along four or five quadrants within the field limits of ±200 Oe, for the sample having been initially cooled in nominal zero field or +500 Oe, respectively. Figures 7(a) and 7(b) show the χ ′ (T ) and χ ′′ (T ) plots recorded while cooling down the Ca 3 Rh 4 Sn 13 crystal in dc fields of 0 Oe (nominal value), 10 Oe, 30 Oe, 60 Oe and 90 Oe, respectively. The T c and T * VL values stand marked appropriately in these two panels. The χ ′ response below as well as above T * VL remains diamagnetic. However, a conspicuous change in the temperature dependence of χ ′ can be noted to happen near T * VL . Such a change is often ascribed 10 to the crossover between the shielding response in the bulk and the shielding response from surface superconductivity. In the present case, where we witness the PMFC signal above 8 K in the dc magnetization data, it can be noted that ∆M/∆H is negative (cf. Fig. 4), which rationalizes the diamagnetic χ ′ response above T * VL in Fig. 7(a).

The χ ′′ (T ) data in Fig. 7(b) show a dissipation response measured with an ac amplitude of 2.5 Oe (r.m.s.) on either side of the T * VL of 7.9 K. The two peaks of the χ ′′ (T ) curve in nominal zero dc field in Fig. 7(b) support the notion of a crossover from superconductivity in the bulk (below 7.9 K) to the compressed flux regime (above it). The peak intensity of the higher temperature peak (above 7.9 K) diminishes as the field increases from 10 Oe to 60 Oe. This correlates with the decline in the paramagnetic response with enhancement in field in the temperature regime of compressed magnetic flux (cf. Fig. 2 and Fig. 3(c)). A comparison of the χ ′′ (T ) curves from H = 0 Oe to 90 Oe below 7.9 K reveals that the lower temperature dissipative peak progressively becomes more prominent and the peak temperature moves inwards with the enhancement in dc field. This is the usual behaviour expected for enhanced irreversibility on cooling due to macroscopic currents set up within the bulk of a pinned type-II superconductor. The field (H) dependence of the peak temperature (T b p ) of the dissipative peak below 7.9 K can easily be rationalized in terms of the field/temperature dependence of the macroscopic currents (J c (B, T )) flowing as per the Critical State Model 26 in the bulk of the sample.

Hysteretic behaviour in the isothermal χ ′ (H) and χ ′′ (H) was present at T > 7.5 K; however, the qualitative feature in the field dependence of χ ′ (H) and χ ′′ (H) during field ramp-up or ramp-down remained the same. Above 8 K, the χ ′ (H) and χ ′′ (H) data did not display significant hysteresis. To facilitate the comparison with the dc magnetization data in Fig. 3, Fig. 8 shows the χ ′ (H) and χ ′′ (H) responses at selected temperatures below and above the T * VL of 7.9 K for field ramp-down from +200 Oe to 0 Oe, for the sample having been cooled in +500 Oe. The χ ′ (H) response at 6.5 K in Fig. 8(a)
shows that the given h ac is almost completely shielded up to a dc field of 200 Oe. On raising the temperature to 7.0 K, the decline in | χ ′ | vs H in Fig. 8(a) reflects the field dependence of J c (B) at that temperature. The same trend continues on raising the temperature up to about 8.0 K. The χ ′′ vs H response at T = 6.5 K in Fig. 8(b) confirms that the h ac of 2.5 Oe is not able to yield appreciable dissipation inside the sample up to a dc field of 200 Oe. However, the χ ′′ vs H response at 7.0 K clearly reveals the presence of a dissipative peak at a dc field of about 50 Oe (marked as H b p ). Thereafter, the decrease in χ ′′ vs H reflects the field dependence of J c (B). A very interesting behaviour in χ ′′ vs H, however, emerges (see Fig. 8(d)) as the temperature is raised from 7.8 K up to 8.1 K and beyond. The χ ′ vs H response at T ≥ 8.0 K in Fig. 8(c) indicates that for the given h ac , χ ′ has a somewhat feeble field dependence at very low dc field (H < 5 Oe). The χ ′′ vs H curves in Fig. 8(d), however, reveal that a qualitative change in the very low field (H < 5 Oe) response occurs at temperatures above 7.9 K. Note that the χ ′′ vs H curves at 8.1 K and 8.2 K in Fig. 8(d) show that the dissipation is maximum at nominal zero field, and it decreases rapidly on enhancing the dc field. The χ ′′ vs H curve at 8.0 K in Fig. 8(d) can be seen to imbibe the feature of a rapid decline of dissipation (which is maximum at zero field) with field, followed by the surfacing of the dissipation peak (at H b p ) due to currents in the bulk of the sample. The data in Fig. 8(d), therefore, illustrate once again the crossover from a pinned type-II superconducting state to the compressed flux regime across the temperature region of about 8 K. The enhanced dissipation near zero field above 8.1 K perhaps indicates the dissipation from giant vortex cores with large L nucleated by surface superconductivity, whose evidence we have already shown in Fig. 7(b).

The inset of Fig. 9(a) shows a plot of χ ′′ vs T measured with an h ac of 2.5 Oe (r.m.s.) in a dc field of 190 Oe. The observation of a peak in χ ′′ (T ) at 7.1 K implies that the given h ac fully penetrates the bulk of the sample at this temperature in H dc = 190 Oe. The decrease in χ ′′ (T ) above 7.1 K reflects the usual decrease in J c with an increase in T . One can use this information to compute a relative dissipative response at H = 190 Oe with respect to the dissipation at the same field close to the normal state, i.e., at 8.3 K: [χ ′′ (T ) − χ ′′ (8.3 K)]/χ ′′ (7.1 K). This, in turn, amounts to computing the relative values of J c in a field of 190 Oe with respect to its value at 7.1 K. The main panel of Fig. 9(a) shows a plot of the above stated relative response as a function of temperature. Note a change in the slope of the plotted curve at about 7.9 K (the so-called T * VL value). We believe that the region beyond 7.9 K identifies the temperature dependence of surface pinning. We have also plotted the remnant magnetization (or peak magnetization in close vicinity of the nominal zero field) determined from the M-H loops (as in Fig. 3) as a function of temperature in Fig. 9(b). Such a remnant value (M rem ) could be taken as indicative of the overall pinning in the specimen. We have marked the location of T * VL (= 7.9 K) in the semi-log plot of M rem vs T in Fig. 9(b)
to focus attention on the onset of a more rapid decline in M rem (T ) on going across from the (irreversible) pinned vortex lattice to the paramagnetic compressed flux regime, where the remnant signal provides a measure of the dominance of the paramagnetic current.

IV. SUMMARY AND CONCLUSION

We have presented the results of dc and ac magnetization measurements at low fields in a weakly pinned single crystal of a low-T c superconductor, Ca 3 Rh 4 Sn 13 , which crystallizes in a cubic structure. This system had been in focus earlier 19 for the study of the order-disorder transformation in vortex matter (at H > 3 kOe) via the peak effect phenomenon. New results at very low fields and in close proximity of T c have revealed the presence of positive dc magnetization on field cooling. In H < 20 Oe, PMFC signals nucleating at 8.35 K can be seen to survive down to about 7 K. For 30 Oe < H < 100 Oe, the crossover from paramagnetic magnetization values to diamagnetic values is seen to occur near 8 K. For 100 Oe ≤ H ≤ 300 Oe, the field-cooled magnetization curves are observed to intersect at a temperature of 7.9 K, below which the diamagnetic response is akin to that expected for a pinned vortex lattice in a type-II superconductor. We have attributed the PMFC response to the notion of compressed flux trapped within the body of the superconductor. Below 20 Oe, the surfacing of a curious oscillatory structure in the PMFC response prompted us to invoke the possible notion of a conservation of angular momentum for the giant vortex state 13,14 to account for this behaviour. The iso-field and iso-thermal ac susceptibility (χ ′ and χ ′′ ) data also seem to register the occurrence of a crossover between the compressed flux regime and the pinned vortex lattice.

To conclude, we show in Fig. 10 the plot of H c2 values as a function of temperature in the form of a magnetic phase diagram in which the normal and superconducting regions are identified. Between 4 K and 7 K, H c2 versus T has a linear variation; on extrapolation, this linear behaviour fortuitously meets the T-axis (where H = 0) at the T * VL of 7.9 K. For H < 300 Oe, the fingerprints of a compressed flux regime in the form of PMFC and/or anomalous diamagnetic response (∆M/∆H < 0) can be observed between T c and the T * VL of 7.9 K. The region between the H c2 (T ) line and the dotted line which meets the temperature axis at T * VL in Fig. 10 is the regime where we have identified the presence of surface superconductivity and surface pinning (cf. Fig. 9). If this were so, then the portion of H c2 (T ) which deviates from the extrapolated dotted line in Fig. 10 should be identified as a portion of the H c3 (T ) line. At somewhat below T * VL (e.g., at T = 7.7 K), an estimate of the ratio of fields associated with the dotted portion of the line and that of the H c2 (T ) line gives a value of about 2, which is more like the ratio of H c3 (T )/H c2 (T ). In a spherical single crystal of elemental Nb, whose κ value (∼ 2) was just above the threshold for type-II response, some of us had reported 10 the observation of surface superconductivity concurrent with the PMFC response over a large (H, T ) domain, such that the H c3 (T ) line was distinctly different from the H c2 (T ) line in its phase diagram (Fig. 4 in Ref. 10). In the present case of Ca 3 Rh 4 Sn 13 , where κ is large (∼ 18), the PMFC signal, presumably sustained by the nucleation of superconductivity at the surface, is present only at low fields and in close proximity to T c .
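For an ideal sample surface parallel to the applied field, the surface-nucleation result of Saint-James and de Gennes gives

```latex
H_{c3}(T)\simeq 1.695\,H_{c2}(T),
```

so the ratio of about 2 estimated above is of the expected order for surface superconductivity (the standard value is quoted here for context; the actual ratio depends on surface quality and geometry).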
A sharp distinction between H c3 and H c2 is not discernible near T c ; the surface superconductivity could, however, be responsible for the slight concave curvature of the H c2 (T ) curve near T c in the magnetic phase diagram (cf. Fig. 10). We believe that the behaviour reported above in Ca 3 Rh 4 Sn 13 is generic. Similar features (in particular, the apparent absence of a PME-peak-like feature in negative applied fields and the associated asymmetry between the responses in positive and negative fields) would be present in other weak-pinning superconductors. Preliminary searches in single crystal samples of other superconducting compounds, like Yb 3 Rh 4 Sn 13 , NbS 2 , etc., have yielded positive indications 31 .

The single crystals grown at the University of Warwick form a part of a continuing programme supported by the EPSRC of the U.K. We thank Mahesh Chandran for fruitful discussions. SSB would like to acknowledge the funding from the Indo-Spain Joint Programme of co-operation in S & T, DST, India.
Combination of Modified Atmosphere and Irradiation for the Phytosanitary Disinfestation of Trogoderma granarium Everts (Coleoptera: Dermestidae)

Simple Summary

The khapra beetle is one of the most important quarantine pests globally, and fumigation with methyl bromide, one of the ozone-depleting substances under the Montreal Protocol, is a routine measure used for its phytosanitary treatment. To protect the ozone layer, environmentally friendly measures need to be developed. Middle- to late-stage larvae and adults were treated with irradiation, modified atmosphere (MA) alone, and their combinations at room temperature (24–26 °C). Late-stage larvae were determined to be the most tolerant stage. Ionizing radiation was used to enhance the effects of the 1% and 2% O2 MA treatments; clear synergistic effects were present in all combinations, saving as much as 60% of the estimated exposure times compared with MA treatment alone. A total of 111,366 late-stage larvae were exposed to a 1% O2 atmosphere for 14 or 15 days after a 200 Gy irradiation, with no survivors in the validating tests. Therefore, the MA-irradiation combination treatment can provide quarantine security at a very high level, and it may be combined with international transportation (train or sea container) to disinfest commodities infested by the khapra beetle and other stored-product insect pests.

Abstract

The khapra beetle, Trogoderma granarium Everts, is one of the most important quarantine pests globally, and fumigation with methyl bromide, an ozone-depleting substance, is a common phytosanitary measure currently used. Modified atmosphere (MA), irradiation, and combination treatments of T. granarium larvae and adults were performed at room temperature (24–26 °C) to develop an eco-friendly phytosanitary disinfestation measure, to shorten the exposure time, and to overcome the treatment disadvantages of irradiation. Late-stage larvae were determined to be the most tolerant stage, with large LT99.9968 values of 32.6 (29.2–37.5) and 38.0 (35.1–41.7) days under the 1% and 2% O2 (with N2 balance) atmospheres, respectively. Ionizing radiation was used to enhance the effect of MA, and the mortality was highly significantly affected by all the interaction effects, indicating that synergistic effects were present in all the combined treatments. The synergistic ratios, defined as the estimated lethal time for MA treatment (LT90, LT99, and LT99.9968) divided by that of the combined treatment, were between 1.47 and 2.47. In the confirmatory tests, no individuals recovered from a total of 111,366 late-stage larvae treated under a 1% O2 atmosphere for 14 or 15 d after 200 Gy irradiation, validating the probit estimations and demonstrating an efficacy of 99.9973% mortality at the 95% confidence level. Therefore, these treatment schedules are recommended for the phytosanitary disinfestation of T. granarium-infested commodities in warehouses, in MA packaging, or in combination with international transportation by train or sea container.

Introduction

The khapra beetle, Trogoderma granarium Everts (Coleoptera: Dermestidae), is endemic to India, but viable populations may survive in almost any country in a closed storage environment [1]. The larvae can cause heavy economic losses to stored grains and other food commodities. Damage can be severe, with weight losses of between 5 and 30% and, in extreme cases, 73% worldwide [2].
It is ranked as one of the 100 worst invasive species worldwide [3]. Like most stored-product insects, T. granarium was introduced to other continents in recent centuries through international trade, even though the adults do not fly. The khapra beetle is currently present in more than 40 countries of Asia, the Middle East, Africa, and Europe. It is listed as a quarantine species by the European and Mediterranean Plant Protection Organization (EPPO), China, the USA, and other countries [4][5][6]. The beetle has been intercepted many times at ports in Australia, China, the USA, and elsewhere, and the number of interceptions has increased steadily in recent decades [4,7,8]. As a result, phytosanitary measures, such as phytosanitary treatments, should be applied to infested commodities, and it is therefore necessary to develop disinfestation measures for the phytosanitary treatment of T. granarium and the commodities it infests. At present, phytosanitary treatments of T. granarium and infested commodities include fumigation (methyl bromide, phosphine) and temperature treatment [5,9,10]. Even though methyl bromide has been defined as an ozone-depleting substance under the Montreal Protocol and should be banned and replaced [11], it is still commonly used because the khapra beetle is highly resistant to pesticides, phosphine fumigation, and extreme low and high temperatures [3,[12][13][14]. To protect the ozone layer, environmentally friendly phytosanitary treatment measures, including ionizing radiation, modified atmosphere (MA), and low-pressure treatment, have been investigated as potential alternative modalities [5,15,16]. MA has been used for controlling stored-product arthropod pests by altering the concentration of oxygen (with N2 balance), carbon dioxide, or their combinations in the storage environment of products; an international standard for its phytosanitary use (ISPM No. 44: Requirements for the use of modified atmosphere treatments as phytosanitary measures) has just been approved by the International Plant Protection Convention (IPPC) [16][17][18]. Several studies with MA have been performed, and the results showed that mature (late-stage) larvae of T. granarium are the most tolerant stage to high-CO2 and low-oxygen atmospheres [3,5]. Thus far, no treatment schedules have been formulated for phytosanitary disinfestation. Ionizing radiation at a low dose has been used to prevent the development and reproduction of arthropod pests; a minimum absorbed dose of 200 Gy is required to prevent reproduction (failure of F1 egg hatch) of the khapra beetle adult, the most radiation-resistant stage [19,20]. However, both MA and irradiation treatment involve a decrease in aerobic metabolism in insects; they are slow-acting control methods that need long exposure times [21][22][23][24]. For example, Zhang found that the minimum times leading to 100% mortality of T. granarium mature larvae at 32 °C were 52, 27.5, and 13 d when irradiated at doses of 440, 880, and 1320 Gy, respectively [25]. Furthermore, late-stage larvae of T. granarium are stimulated into facultative diapause by unfavorable conditions, including extreme temperatures, humidity, food shortage, and crowded environments [5,26,27]. Diapausing larvae are highly resistant to desiccation, cold, heat, and starvation; in addition, they are markedly more tolerant to low oxygen tension than non-diapausing larvae [3,18,24].
The additive or synergistic effects of combining two or more disinfestation modalities have been exploited to develop disinfestation measures; examples include irradiation-cold storage combination treatment of the melon fly, Zeugodacus cucurbitae Coquillet, and the Mediterranean fruit fly, Ceratitis capitata Wiedemann, and an MA-irradiation combined treatment of the confused flour beetle, Tribolium confusum du Val., in which irradiation enhances the effect of MA or the effect of irradiation is improved by cold storage [28,29]. Moreover, the combination of MA with vapor heat treatment has also been used effectively to lower the treatment temperature and shorten the treatment time for the disinfestation of the codling moth, Cydia pomonella L., and the oriental fruit moth, Grapholita molesta Busck, in apples, peaches, and nectarines [30]; the effects of heat treatment are enhanced by the presence of MA. Treatment schedules based on heat-MA combinations have already been adopted by the USDA [31] and recommended to the IPPC for an international standard, an annex to ISPM 28 (Draft PT: Vapour heat-modified atmosphere treatment for C. pomonella and G. molesta on Malus pumila and Prunus persica (2017-037 and 2017-038)) [16,32]. Therefore, a combination of MA with other insect disinfestation measures, including temperature (especially heat treatment), irradiation, and chemicals, is a feasible means to fulfill the requirements for phytosanitary treatment [3,5,33]. In this research, an MA-irradiation combination treatment was conducted to achieve a high level of mortality (i.e., probit 9 mortality) of T. granarium in a shorter treatment time, to stop further damage to its host commodities, to determine the additive or synergistic effects of the combined treatment, and to develop chemical-free and environmentally friendly phytosanitary treatment schedules as an alternative to methyl bromide fumigation. Adults and middle- to late-stage larvae of T. granarium, which were determined to be the most tolerant stages to irradiation and MA, respectively [5,34,35], were treated with a low-oxygen atmosphere (1%, 2% O2 with N2 balance), ionizing radiation alone, or their combinations in the following tests: (i) testing the combined/synergistic effects and examining tolerance to each treatment; (ii) dose-response tests on single MA and combination treatments; and (iii) confirmatory tests on tens of thousands of the most tolerant stage(s) to validate the probit analysis and confirm the probit 9 treatment efficacy.

Insect Rearing
The khapra beetle progeny used in this study originated from an intercepted sample found on an Iranian commercial ship in 2013 at the port of Suzhou, China; it was thereafter reared for generations on pesticide-free groundnut cakes and peanut pieces in closed glass bottles. A constant temperature and humidity chamber (Chongqing Weir Experimental Equipment Co., Ltd., Chongqing, China) held the rearing bottles at 35 ± 1 °C and 65 ± 10% R.H. in continual darkness.
The adults (newly emerged) and larvae (middle-stage, late-stage, and their mixed stages) were picked out of the rearing bottles with a fine brush and placed in plastic cups (6 cm in diameter and 5 cm in height; ~120 individuals per cup were used as one treatment), then subjected to treatments and reared at room temperature (24-26 °C) in the Key Laboratory of Phytosanitary Treatment, Chinese Academy of Inspection and Quarantine, Beijing, China. During the experiments, strict biosecurity measures were taken to prevent the khapra beetle from escaping and spreading.

Experimental Design
Hallman et al. [35], based on an analysis of many studies, concluded that the most developed stage of an insect is invariably the most radiation-tolerant when a common measure of efficacy is used. Therefore, khapra beetle adults, the most developed stage, should be more tolerant to radiation than the other stages. However, mortality is rarely used for efficacy evaluation in phytosanitary irradiation treatment [22,36]. For the MA-irradiation treatment, irradiation was used to enhance the effects of the MA treatment, and mortality should therefore act as the efficacy criterion. Thus, the radiation tolerance of the stages, measured as mortality, was compared first.
Gamma radiation of adults and mixed-stage larvae. The recommended doses for the hygienic treatment of pulses and cereals are 200 and 400-600 Gy, respectively, according to the requirements of the Chinese national hygienic standard (GB14891.8-1997: Hygienic standard for irradiated beans, grains, and their products). To compare radiation tolerance measured as mortality, newly emerged adults and mixed-stage larvae (middle- to late-stage) of T. granarium were exposed to gamma radiation at doses of 200, 400, and 600 Gy, respectively. Each dose was replicated three times, and mortality was checked 7, 14, 21, and 28 d after treatment.
Gamma radiation in combination with MA treatments of adults and mixed-stage larvae. To compare tolerance to the MA-irradiation combined treatment, the adults and mixed-stage larvae were treated under a 1% O2 atmosphere for exposure times of 7, 14, and 21 d, respectively, after gamma irradiation at doses of 200, 400, and 600 Gy. Each time-dose combination was replicated three times. The results indicated that all the treated beetles died between days 7 and 14; therefore, shorter exposure times and intervals were tested in the following experiments.
MA in combination with X-ray radiation treatments of adults and middle- and late-stage larvae. The adults and middle- and late-stage larvae were first treated with X-rays at doses of 200, 400, and 600 Gy, respectively, and then subjected to 1% O2 atmosphere treatment for exposure times of 3, 6, and 9 d, respectively. Each dose-time combination was replicated three times.
Dose-response tests of MA alone or in combination with X-ray treatment of late-stage larvae. To estimate the lethal times LT90 (the minimum lethal time leading to 90% mortality at a specific confidence level (i.e., 90%, 95%, or 99%; the 95% confidence level was used for all the estimations in this research)), LT99, and LT99.9968 of the khapra beetle, middle- and late-stage larvae were respectively subjected to 1% or 2% O2 MA treatment alone or in combination with 200 Gy X-ray radiation.
The experimental design and exposure times for the dose-response tests are listed in Table 1; insects without any treatment were used as controls, and each exposure time was replicated four times.
Confirmatory tests. To validate the estimated minimum time for probit 9 mortality of T. granarium late-stage larvae, a preliminary validating test was first conducted to determine efficient exposure times for the following tests. A total of 30,000 late-stage larvae (batches of 10,000, counted before testing) were irradiated at 200 Gy and then exposed to 1% O2 MA treatment for 13, 14, and 15 d, respectively. Thereafter, the 15-d exposure time was used for the remaining confirmatory testing.

Treatments
Gamma radiations. All the gamma radiations were performed at the National Institute of Metrology Research Irradiator, Beijing, China, where the primary 1.5 × 10^15 Bq Cobalt-60 source was used for research. Irradiation reference standard and routine dosimetry were done with the Fricke system [37]. The plastic boxes containing khapra beetle samples were placed 50 cm from the center of the radiation source and rotated 180° at mid-exposure. The dose rates measured in the first and second treatments were 8.4 and 8.0 Gy/min, with dose uniformities of 1.15 and 1.13, respectively.
X-ray radiations. An RS-2000 Pro X-ray irradiator (Rad Source Technologies, Inc., Atlanta, GA, USA) was used for all the X-ray irradiations, operating at 220 KeV and 17.6 mA. Every ~120 adults or larvae (middle- or late-stage instars) in a plastic box were irradiated at doses of 200, 400, or 600 Gy. For the confirmatory tests, late-stage larvae (11,020-25,374 per batch, counted during mortality evaluation) were wrapped in a plastic bag for irradiation at 200 Gy. The dose rate monitored in all these irradiations was 9.0 Gy/min.
MA (low-oxygen atmosphere) treatment. All the MA treatments were conducted in four-liter gastight airbags (Dalian Delin Gas Packaging Co., Ltd., Dalian, China). For each treatment, three or four plastic boxes containing the insect samples (irradiated or not) were placed into one gastight airbag through the opening at the bottom, followed by sealing of the airbag, exhausting all the air with a diaphragm pump, and injecting 1% or 2% O2 (with N2 balance) (Beijing Green Oxygen Tiangang Technology Development Co., Ltd., Beijing, China) into the airbag and holding it for a few minutes [38]. The exhausting-injecting procedure was repeated at least three times to purify the gas in the airbag. All the airbags were then placed in one room at a temperature of 24-26 °C; the gases in the airbags were refreshed every two days until the exposure times were reached.

Insect Rearing after Treatments
The treated cups or boxes were taken out of the airbags and kept for another seven days at room temperature. Then, the numbers of larvae, pupae, and adults (dead or surviving) were counted. Mortality was evaluated based on the absence of movement when probed with a needle and/or color changes of the insect body.

Data Analyses
Mortality data for irradiation, MA treatment alone, or their combinations were corrected using Abbott's formula [39] and then subjected to two-way or three-way analysis of variance (ANOVA) to analyze the individual effects of the main factors and their interaction effects; means (±SD, for all mortalities) were compared by Tukey's multiple comparison tests; DPS software was used in the analysis [40].
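For readers who want to reproduce the correction step, the following Python snippet is a minimal sketch of Abbott's formula; it is not the DPS implementation used in the study, and the function name and example values are illustrative.

```python
# Sketch of Abbott's correction for treatment mortality, as applied before
# the ANOVA step above. Both arguments are percentages (0-100).

def abbott_corrected_mortality(treated_mortality: float,
                               control_mortality: float) -> float:
    """Return mortality (%) corrected for natural death in the control:
    corrected = (treated - control) / (100 - control) * 100."""
    if control_mortality >= 100.0:
        raise ValueError("control mortality must be below 100%")
    return (treated_mortality - control_mortality) / (100.0 - control_mortality) * 100.0

# Example: 95% mortality in a treated cup, 2.5% natural mortality in the control.
print(round(abbott_corrected_mortality(95.0, 2.5), 2))  # ~94.87
```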
The dose-response (time-mortality) data for MA treatment alone or in combination with irradiation were analyzed with the probit model using the PoloPlus 2.0 program to estimate the lethal exposure times (using non-transformed exposure times); any mortality data between 0 and 100%, together with the shortest exposure time causing 100% mortality, were used in the analysis [38,41]. Pair-wise comparison tests were performed by calculating the 95% confidence limits (CIs) of the lethal dose ratios at LT90, LT99, and LT99.9968 to compare the significance of the tolerance of the khapra beetle between larval stages and between treatments at different O2 levels. If the 95% CI excludes 1, then the LTx values are significantly different [38,42,43]. To determine the additive or synergistic effects in the combined treatments, the synergistic ratio (SR), defined by Hewlett and Plackett [44] and used by Chadwick [45], who calls it the factor of synergism, and by Lee et al. [46] for combinations of two pesticides or fumigants, was calculated from Equation (1):

SR = LTx(MA treatment alone) / LTx(combined treatment)    (1)

SR > 1 describes synergism. For the confirmatory tests, the mortality proportion (1 − Pu) associated with treating a number of khapra beetles with zero survivors is given by Equation (2) for a defined confidence level:

Pu = 1 − (1 − C)^(1/n)    (2)

where Pu is the maximum allowable infestation proportion, C is the confidence level, and n is the number of test insects. Furthermore, the number (n) treated in confirmatory tests should be adjusted based on control survivorship [47][48][49].
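As a minimal illustration (not the PoloPlus output), Equations (1) and (2) can be evaluated directly in Python; the function names are illustrative, and the example values are the ones quoted in this paper.

```python
def synergistic_ratio(lt_ma_alone: float, lt_combined: float) -> float:
    """Equation (1): SR = LTx(MA alone) / LTx(combined); SR > 1 means synergism."""
    return lt_ma_alone / lt_combined

def treatment_efficacy(n_treated: int, confidence: float = 0.95) -> float:
    """Equation (2): with zero survivors among n insects, the demonstrated
    mortality proportion is 1 - Pu with Pu = 1 - (1 - C)**(1/n)."""
    pu = 1.0 - (1.0 - confidence) ** (1.0 / n_treated)
    return 1.0 - pu

# Values quoted in this paper: LT99.9968 of 32.6 d (1% O2 MA alone) versus
# 13.2 d for the combination, and 111,366 larvae treated with zero survivors.
print(round(synergistic_ratio(32.6, 13.2), 2))       # ~2.47
print(round(100 * treatment_efficacy(111_366), 4))   # ~99.9973 (% mortality at 95% CL)
```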
Effects of Gamma Radiation
Mortality of T. granarium generally increased with increasing exposure time (from 7 to 28 d) and radiation dose (from 200 to 600 Gy), while complete mortality was not achieved in either adults or mixed-stage larvae (Figure 1). The differences in corrected mortality were significant for the main factors of stage (F1,71 = 2505.22, p ≤ 0.0001) and time (F3,71 = 82.08, p ≤ 0.0001), and for the two-way interaction effects of stage × time (F3,48 = 40.83, p ≤ 0.0001) and stage × dose (F2,48 = 3.37, p = 0.0427); mortality for both larvae and adults therefore increased significantly with increasing time, and the mean mortality (±SD) for adults (96.0 ± 3.56%) was significantly larger than that of larvae (37.9 ± 16.91%), indicating that the larvae are significantly more tolerant to irradiation than adults (mortality was used for evaluating treatment efficacy). However, the main effects of radiation dose (three-way: F1,71 = 1.67, p = 0.1995; two-way for larvae: F2,35 = 2.47, p = 0.1060; two-way for adults: F2,35 = 3.23, p = 0.0573) and the interaction effects of dose by stage and/or time (two-way for larvae: F6,24 = 0.24, p = 0.9586; two-way for adults: F6,24 = 0.64, p = 0.7011) were insignificant, indicating that there are no synergistic effects between dose and time after irradiation. As a result, mortality of T. granarium increased only slowly with increasing dose, and there is no significant difference among gamma radiation at 200, 400, and 600 Gy.

Effect of MA in Combination with Gamma Radiation
Most of the adults and mixed-stage larvae of T. granarium died within 7 d, and all of them died within 14 d, when they were exposed to the 1% O2 MA treatment after gamma radiation (Figure 2). Results of the three-way ANOVA showed that the difference in mortality was highly significant (p ≤ 0.0001) for the main factors of stage (adult > larvae), dose (600 Gy ≈ 400 Gy > 200 Gy), and exposure time (21 d = 14 d > 7 d), and for all the interaction effects. Therefore, larvae are also more tolerant to the MA-irradiation combined treatment than adults, just as for the irradiation treatment alone (Figure 1). For the two-way ANOVA, the interaction effects of dose × exposure time and the main effects of dose were significant for the mixed-stage larvae (interaction: F4,18 = 15.26, p ≤ 0.0001; dose: F2,26 = 15.26, p ≤ 0.0001) but insignificant for the adults (interaction: F4,18 = 2.00, p = 0.1378; dose: F2,26 = 2.00, p = 0.1639); accordingly, the larval mortality for 200 Gy + 1% O2 (93.5 ± 10.3%) was significantly less than that for 400 Gy + 1% O2 (98.7 ± 2.6%) and 600 Gy + 1% O2 (99.5 ± 0.8%), whereas there was no significant difference among the 200, 400, and 600 Gy irradiation-MA combined treatments for the adults. This puzzling result may be due to the long exposure times, which produced very high mean mortality levels (larvae: ≥80.5%; adults: ≥99.4%) (Figure 2). Shorter exposure times and intervals were therefore tested to determine the interaction and main effects of radiation dose.

Effect of Combination MA with X-Ray Radiation
For the MA and X-ray combination treatment of T. granarium, results derived from three- and two-way ANOVA showed that the effects were highly significant for all the main factors and their interactions (p ≤ 0.0001).
The mortality within a stage increased significantly with increasing radiation doses and exposure times; the lowest mortality, for late-stage larvae (77.4 ± 20.6%), indicates that this is the most tolerant stage, followed by middle-stage larvae (87.3 ± 17.4%), while the adult is the least tolerant stage, with the largest mortality of 93.5 ± 8.4% (Table 2). In comparison with the newly emerged adults, larvae (middle- to late-stage) were determined to be more tolerant to gamma radiation alone (Figure 1) or in combination with a low-oxygen atmosphere (Figure 2), while late-stage larvae were more tolerant to the combined treatment than middle-stage larvae (Table 2); therefore, late-stage larvae are the most tolerant stage and were used in the dose-response and confirmatory testing. Furthermore, the largest mortalities were obtained with the combinations of 600 Gy-9 d and 400 Gy-9 d, followed by 600 Gy-6 d and 200 Gy-9 d, for the treatment of late- and middle-stage larvae (under a 1% O2 atmosphere), suggesting four optimal combinations that could be used in control strategies. Because the effects of the radiation dose were insignificant (Figure 1) and irradiation is costly compared with MA treatment, the lowest dose of 200 Gy, which can provide quarantine security at the probit 9 level, is the optimum dose to use in the combination treatment [19,20]. The outcomes of a two-factor analysis can be complex; in a two-way ANOVA, the main effects need not be interpreted if the interaction effects are significant [40]. All the interaction effects of dose × time and the main effects of dose were highly significant for the MA-irradiation combination treatments (Table 2), but they were insignificant for gamma radiation alone (Figure 1), indicating that pronounced synergistic effects are present in all the MA-irradiation combined treatments and that the main effects of radiation are dominated by the interaction effects of dose × time.
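The interaction test used above can be sketched as follows; this is a toy example with invented mortality data that contain a built-in dose × time interaction, and it uses statsmodels rather than the DPS software employed in the study. Column names are assumptions.

```python
# Sketch of a two-way ANOVA with a dose x time interaction term, the
# criterion used above for a synergistic (non-additive) combined effect.
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for dose, time in itertools.product([200, 400, 600], [3, 6, 9]):
    base = 30 + 0.03 * dose + 2.5 * time + 0.003 * dose * time  # built-in interaction
    for _ in range(3):                                          # three replicates
        rows.append({"dose": dose, "time": time,
                     "mortality": min(100.0, base + rng.normal(0, 2))})
df = pd.DataFrame(rows)

model = ols("mortality ~ C(dose) * C(time)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # a significant C(dose):C(time) row
                                        # indicates a non-additive effect
```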
This is also the likely reason for the large SR values obtained in all four combination treatments (Tables 3 and 4).

Estimating Lethal Times
Parameters of the probit analysis for middle- and late-stage larvae of T. granarium treated under a 1% or 2% O2 atmosphere alone or in combination with 200 Gy X-ray irradiation are presented in Table 3. A smaller value of heterogeneity (chi-square divided by degrees of freedom) indicates a good fit to the data, and a lack of 100% mortality data in the dose-response tests may lead to an unsatisfactory estimation; good estimations were therefore achieved in all treatments except for the late-stage larvae treated under the 1% (mortality ≤ 94.6 ± 4.7%) or 2% O2 MA-irradiation combination, which had larger 95% confidence intervals (CIs). For middle-stage larvae of T. granarium, the positive slope in all treatments was larger than that for late-stage instars, and the estimated mean values were accordingly smaller than those for late-stage instars; in addition, both the lethal dose ratio tests and the 95% CI overlap tests indicated that the difference is significant (Table 3). Late-stage larvae are therefore significantly more resistant to MA alone or to the combination treatment than the middle-stage instars; furthermore, to reduce further damage, the shortest exposure time of 13.2 d (11.9-15.1), leading to probit 9 mortality of late-stage larvae under a 1% O2 atmosphere (Table 3), was used in the following confirmatory tests.

Synergistic Ratios
Equation (1) was used to calculate the synergistic ratios (SRs, Table 4) based on the estimated mean values of the lethal times in Table 3. The SR values based on LT90, LT99, and the extrapolated LT99.9968 were very close, ranging from 1.47 to 2.47, suggesting that the combination of MA and irradiation produced obvious synergistic effects, which may save about 32 to 60% of the exposure time compared with MA treatment alone. In addition, greater synergistic effects were achieved for late instars compared with middle-stage larvae; likewise, more efficient treatments were achieved under the 1% O2 atmosphere compared with the 2% O2 atmosphere. Therefore, late-stage larvae treated under the 1% O2 MA-irradiation combination, which was determined to be an optimal combination (Table 2) and obtained the largest mean (±SD) SR value of 2.43 ± 0.05 (Table 3), is the most optimal combination.

Confirmatory Tests
The exposure times of 13, 14, and 15 d, which were estimated by the probit model (Table 3), were used for the preliminary validation tests; however, one survivor was found in the 13-d exposure treatment (Table 5). Thereafter, only the 15-d exposure time was used in the remaining confirmatory tests. As a result, no survivors were found in a total of 91,366 treated late-instar larvae. Thus, the treatment efficacy (1 − Pu) calculated from Equation (2) is 99.9973% (counting the 20,000 larvae treated in the preliminary validating tests), assuming a confidence level of 95%; the estimation derived from the probit model was thereby validated. In addition, when the number treated in the confirmatory tests is adjusted to account for the percentage of survival in the controls (96.8-98.1%), the adjusted number is 108,621, and the efficacy is 99.9970% at the 95% confidence level. The uncertainty for the X-ray dose was 5%, and the monitored absorbed dose for the gamma radiation was 173.9-199.8 Gy.
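The probit lethal-time estimation underlying Table 3 can be sketched as follows. This is not the PoloPlus analysis: the mortality counts are invented, and a recent statsmodels version is assumed for the CamelCase Probit link class.

```python
# Sketch of a probit fit of mortality against non-transformed exposure time,
# inverted to give LT90, LT99, and LT99.9968 (probit 9).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

time_d = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 28.0])  # exposure (days)
n      = np.full(7, 120)                                      # insects per time
dead   = np.array([10, 35, 70, 100, 113, 118, 120])           # invented counts

X = sm.add_constant(time_d)
fit = sm.GLM(np.column_stack([dead, n - dead]), X,
             family=sm.families.Binomial(link=sm.families.links.Probit())).fit()
b0, b1 = fit.params

def lt(p):
    """Invert the probit line Phi(b0 + b1*t) = p for the lethal time t."""
    return (norm.ppf(p) - b0) / b1

for p in (0.90, 0.99, 0.999968):
    print(f"LT at {100 * p:g}% mortality: {lt(p):.1f} d")
```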
Discussion
Ionizing radiation and MA treatment are currently used for the disinfection and disinfestation of quarantine arthropod pests and microorganisms; both are environmentally friendly but slow-acting measures, and the presence of irradiated but still-living insects may be another obstacle to overcome in the application of phytosanitary irradiation treatment [15,22,23,33]. The present results indicate that the minimum exposure times for probit 9 mortality of T. granarium late-stage larvae were 32.6 (29.2-37.5) and 38.0 (35.1-41.7) days (Table 3) under 1% and 2% O2 atmospheres at room temperature, and that more than four weeks are needed for complete mortality when the beetle is irradiated at doses of 200 to 600 Gy (Figure 1). However, obvious synergistic effects of their combination have been demonstrated in the present study (Tables 3-5) and in other studies, and can be used in preservation treatments and insect disinfestation (e.g., of T. confusum) to improve effectiveness and to reduce costs, treatment time, and product damage [28,33,50]. For a combination of irradiation with other treatments, the desired response (the efficacy criterion) should be determined first, since irradiation differs from other treatment measures, and the most tolerant stage(s) and the additive/synergistic effects should then be investigated and confirmed [29,33]. The desired response for MA against stored-product insects is mortality, whereas the desired response for irradiation is typically the prevention of adult emergence or adult sterility [15,22]. For an MA-irradiation combination treatment, there are two choices: the use of MA to modify the response to irradiation, as Follett and Snook [29] chose cold storage to modify the response to irradiation treatment of two fruit flies, or the use of irradiation to modify the response to MA. For the present combined phytosanitary treatment of T. granarium, we chose to use irradiation to modify the response to MA, measuring mortality for efficacy evaluation; the advantages of this choice are that it helps to overcome the major obstacle to the application of phytosanitary irradiation, namely the presence of living insects, and that it prevents further damage to the stored products and foodstuffs [22,33,48,51]. Generally, irradiation with gamma rays or X-rays has the same effects on insects, and the most developed stage, the adult, should be the most tolerant, since radiation tolerance increases with developmental stage when a common efficacy criterion, such as prevention of development or reproduction of adults, is used [22,33,35]. However, when mortality is used as the treatment efficacy criterion, the tolerance sequence of T. granarium changes. Zhang [26] found that mature larvae are more resistant to gamma radiation than pupae and adults when treated at 32 °C with radiation doses of 440, 880, and 1320 Gy; similarly, in the present research, middle- to late-stage larvae were determined to be more tolerant than adults to gamma radiation alone or combined with MA treatment (Tables 2 and 3; Figures 1 and 2). The reason adults are more sensitive to radiation-induced mortality than late-stage larvae is possibly the slow-acting effects of radiation combined with the short life span (female: 14-15 d; male: 15-19 d, at 25 °C) and feeding habits (the adults rarely eat or drink) of the khapra beetle [1,22,52].
For late-stage larvae of T. granarium, previous results have shown that this is the most tolerant stage to low-oxygen or high-CO2 MA treatment, especially the diapausing larvae [1,3,26]. Fortunately, late-stage larvae were also determined to be more tolerant to low-oxygen MA alone or in combination with irradiation than middle-stage larvae (and than adults for the combinations) in our testing (Tables 2 and 3). Consequently, late-stage larvae were used for the dose-response tests of low-oxygen MA alone and in combination with irradiation. Finally, the estimated LT99.9968 of 13.2 d (11.9-15.1) for late-stage larvae was validated by treating a total of 111,366 late-stage larvae (Tables 4 and 5), giving a high treatment efficacy of 99.9973%, or 99.9970% when corrected for control mortality, at the 95% confidence level [47][48][49][50]. This treatment efficacy may fulfill the most stringent requirement for phytosanitary treatment, probit 9 mortality at the 95% confidence level, because the minimum requirements for approved treatment schedules should be the upper limit used in the confirmatory tests [48,51,53]. For the present MA-irradiation combined treatment of T. granarium, the treatment schedule can be described as a minimum exposure time of 15 d under an atmosphere with a maximum concentration of 1% O2 (with N2 balance) after irradiation at a minimum absorbed dose of 200 Gy. For phytosanitary application, both packaged (at normal atmosphere) and unpackaged grains and foodstuffs can be irradiated before export or at the port of entry, followed by packaging (including MA packaging, MAP) to prevent recontamination; the MA treatment may then be conducted in the warehouse, during transportation in a sea container or train cabin, or using MAP [15,17,33,54]. Synergistic coefficients (i.e., the co-toxicity coefficient, synergistic ratios, synergistic factors) are typically used to evaluate the additive or synergistic effects of the joint action of insecticidal compositions, in most cases a combination of two chemicals [46,[55][56][57]; interaction effects (two-way or three-way ANOVA) have instead been analyzed to test synergistic effects for combinations of chemical and physical conditions or of multiple physical treatments, for example ionizing radiation in combination with essential oils or cold storage [49,58]. In the present combined treatments, all the interaction effects among the treatment parameters (radiation dose, oxygen level, exposure time) were highly significant (Table 2, Figure 2), indicating that obvious synergistic effects were present in all the MA-irradiation combinations. Moreover, ANOVA can assist in determining the optimum combination and the importance and ordering of the main factors; for example, we could choose to apply radiation only to modify the MA response and take only the LT values of MA as the basis for calculating the SRs (Equation (1)), because the main factor of radiation dose and the dose × time interactions were non-significant in the irradiation treatment alone (Figure 1). In addition, we also used the SRs to test the synergistic effects between MA and irradiation, since there is no means to calculate the theoretical mortality induced by the two physical measures. As a result, all the SR values were ≥1.47; in particular, the mean SR values for late-stage larvae of T. granarium were 2.43 (1% O2) and 1.71 (2% O2) (Table 4). Incidentally, the SR used in this study is more like the toxicity index used by Sun and Johnson (1960) for a pesticide mixture [55].
The biological effect of irradiation is to damage DNA, preventing multiplication and randomly inhibiting cell functions, resulting in cell death [22,33], while the specific mechanisms by which insects are affected by, and adapt to, low-oxygen and high-CO2 atmospheres remain poorly understood [23]. However, both irradiation and MA treatment can protect the treated food without leaving toxic residues, and both cause a decrease in aerobic metabolism in insects, which may produce additive or synergistic effects (Tables 3 and 4) that accelerate the death of insect pests [22,23,33]. Moreover, the efficacies of the two treatments on the different stages complement each other, providing a high level of quarantine security against regulated pests, since the adult stage is more radiation-tolerant but more sensitive to MA than the larval stages [3,5,28]. Although a longer exposure time (32.6 d at 24-26 °C, Table 3) is needed to produce complete mortality of T. granarium under 1% O2 compared with pure nitrogen (6 d at 30 °C) or high CO2, the 1% O2 atmosphere is cheaper to use and more convenient to produce and implement in practice [3,17,59]; furthermore, a combined treatment time of 15 d at 24-26 °C (Table 5) may be acceptable, especially when phytosanitary treatment is combined with international transportation [28,54]. Both ionizing radiation and MA treatment are ecofriendly phytosanitary measures that are alternatives to methyl bromide fumigation for quarantine and pre-shipment (QPS) uses [16,22,59]. First, this combination has great significance for reducing damage to stored grains and foodstuffs during long-term transportation in international trade, because the khapra beetle is recognized as one of the 100 worst invasive species and causes extremely high infestation levels in a wide range of stored products [1,9,60,61]. Second, T. granarium diapausing larvae were determined to be the most tolerant to MA, but a 200 Gy irradiation can provide probit 9 treatment efficacy against the most radiation-tolerant stage, the adults; therefore, another advantage of this combination is that the low-oxygen MA-irradiation combination can provide quarantine security at a high level even if diapausing larvae are present in the commodities [19,20,22]. Third, elevated CO2 levels cause spiracles to open (remaining permanently open at ≥10% CO2), resulting in insect death from water loss, and CO2 has direct toxic effects on the nervous system; CO2 can also acidify the hemolymph, leading to membrane failure in some cases [18,59]. Despite the similarities in response, arthropod mortality is generally greater in response to high carbon dioxide than to low-oxygen atmospheres [62]. Furthermore, when elevated CO2 is added to low-oxygen atmospheres, additive or synergistic effects have been observed, depending on the concentrations and the insect species [62][63][64]. Therefore, a promising treatment may be established by combining low-dose irradiation (saving cost and time) with a low-oxygen and high-CO2 atmosphere to further shorten the exposure time of the MA treatment and to be accepted by all users (exporters, importers, regulators) as an alternative to the QPS uses of methyl bromide. MAP has been broadly used for controlling insect pests and maintaining the quality of stored and perishable products, and it is easy to combine MA, transportation, and storage after irradiation treatment [17,33,34].
There is potential for this treatment schedule to be used for the phytosanitary treatment of infested commodities in sea containers, warehouses, MAP, or railway cabins, for example on the China-Europe Railway Express, which takes two weeks or more [54]. A low-oxygen atmosphere may reduce the effects of radiation [16,22,35,38,65]; the procedure for applying the MA-irradiation combination treatment should therefore be to conduct the irradiation first, at normal atmosphere, followed by the MA (including controlled atmosphere) treatment. However, mortality of T. granarium late-stage larvae decreases significantly with decreasing treatment temperature (35 °C > 25 °C > 0 °C under a 1% low-oxygen atmosphere) [3,5,66]. Furthermore, other factors, such as insect stage and relative humidity, may affect the treatment efficacy; further research is still needed to compare tolerance differences among all the possible stages [22,29,48,59], to test the effects of temperature and radioprotection under a low-oxygen atmosphere, and to evaluate commodity quality under commercial conditions.

Conclusions
The combination treatment of low-oxygen MA and irradiation has been confirmed to be an effective measure to disinfest the khapra beetle, which is highly resistant to each of the treatments alone: a minimum of 32.6 (29.2-37.5) or 38.0 (35.1-41.7) days was required to achieve a mortality of 99.9968% at the 95% confidence level for late-stage larvae (the most tolerant stage to each treatment) treated under a 1% or 2% O2 atmosphere, respectively. A dose of 200 to 600 Gy radiation can be used to enhance the effect of the MA treatment, resulting in obvious synergism even though the main effects of radiation dose and the dose × time interaction effects were insignificant in the radiation treatment alone. The interaction effects from two-way ANOVA, as well as the SRs, were used effectively to analyze the synergistic effects of the combination treatments; all the SRs fell within 1.47 to 2.47, indicating that 32 to 60% of the exposure time is predicted to be saved compared with MA alone. In addition, the probit estimation and synergistic effects were validated by treating a total of 111,366 late-stage larvae without survivors; treatment schedules can thus be established for the phytosanitary disinfestation of the khapra beetle and other stored-product insects.
Institutional Review Board Statement: Not applicable.
Data Availability Statement: All data presented in this study are available in the article.
Estimation of Rainfall from Climatology Data Using Artificial Neural Networks in Palembang City South Sumatera

Estimation of climatological parameters, especially rainfall, is a data requirement for all regions of Indonesia. Rainfall data are used for early warning of flood or drought disasters. The study location is Palembang City, South Sumatra Province, where floods and droughts often occur and rainfall data availability is limited. This study aims to obtain the best model for estimating rainfall from climatological data. The analysis estimates rainfall from the climatological data using the Artificial Neural Networks method. The Artificial Neural Networks were applied, and the best calibration was obtained with 16 years of data using TRAINLM with 1500 epochs, with performances of NSE = 0.54, RMSE = 99.37, and R = 0.74, whereas the best validation was obtained with the 1-year validation period, with performances of NSE = 0.41, RMSE = 87.32, and R = 0.65.

Introduction
Precipitation plays an important role in the hydrologic cycle and is also a main focus of climatological studies. Studying precipitation is very important for (a) identifying precipitation characteristics, (b) statistical modeling and forecasting of precipitation, and (c) mitigating floods and droughts. In tropical areas, where snow is generally absent, the term precipitation has been replaced by rainfall, and the term rainfall is more commonly used than precipitation. The continuity and consistency of rainfall data are very significant in statistical analyses such as time series analysis. Both continuity and consistency can be disturbed by changes in observation practice and by incomplete records, which may vary in length from one or two days to many years [1]. Artificial Neural Networks (ANNs) are a powerful computational method that has been used mainly for pattern recognition, classification, and prediction. The major advantage of ANNs as an alternative to conventional and physical methods is that they do not require an explicit mathematical description of the complex processes of the system being modeled. Therefore, ANNs can generalize the robust nonlinear patterns of natural phenomena, including the aggregation and disaggregation of rainfall [2]. The objectives of this study are to determine the calibration of rainfall against climatological data based on the ANN models and to validate the rainfall estimated by the ANN models against field station data.

Stages of rainfall analysis:
a. Analysis of the quality of the hydrological data, based on the rainfall data obtained. The tests used include (i) a consistency test using the Rescaled Adjusted Partial Sums (RAPS) method, and (ii) stationarity tests with the F-test and t-test.
b. Rainfall analysis using the Artificial Neural Network models. In this study, the data are divided into input, target, and modeling output. The data were split into calibration and validation periods of 15-5, 16-4, 17-3, 18-2, and 19-1 years (see the sketch after this list); for example, in the 15-5 year split, the initial 15 years (1999-2013) were used for the calibration process and the remaining 5 years for the validation process. The network architectures were built with various layers and numbers of epochs using the backpropagation algorithm.
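As a small illustration of the splits listed in item b, the following Python sketch assumes the record covers 1999-2018 (the initial 15 years being 1999-2013); the variable names are illustrative.

```python
# Enumerate the five calibration/validation splits of a 20-year record.
import numpy as np

years = np.arange(1999, 2019)          # assumed 20-year record, 1999-2018
for n_cal in (15, 16, 17, 18, 19):
    cal, val = years[:n_cal], years[n_cal:]
    print(f"{n_cal}-{20 - n_cal} split: calibrate {cal[0]}-{cal[-1]}, "
          f"validate {val[0]}-{val[-1]}")
```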
Consistency test
Rescaled Adjusted Partial Sums (RAPS)
RAPS is a method in which data consistency is indicated by the cumulative deviation from the average value [3]:

Sk* = Σ (i = 1 to k) (Yi − Ȳ),  Dy² = Σ (i = 1 to n) (Yi − Ȳ)² / n,  Sk** = Sk* / Dy

where Yi is the observed data, Ȳ is the average of the observed data, and n is the total number of observations. The test statistic Q is taken from the maximum of |Sk**|.

Stationarity test
The stationarity test examines the stability of the variance and the average values of a time series, determining whether the variance (F-test) and average (t-test) values are homogeneous [4].
F-test:

F = (N1 · S1² · (N2 − 1)) / (N2 · S2² · (N1 − 1))

where N1 is the total number of samples in the 1st group, N2 is the total number of samples in the 2nd group, S1 is the standard deviation of the 1st group sample, and S2 is the standard deviation of the 2nd group sample.
t-test:

t = (X̄1 − X̄2) / (σ · sqrt(1/n1 + 1/n2)),  with σ = sqrt((n1 · S1² + n2 · S2²) / (n1 + n2 − 2))

where X̄1 is the average of the 1st group sample, X̄2 is the average of the 2nd group sample, n1 is the total number of samples in the 1st group, n2 is the total number of samples in the 2nd group, S1 is the standard deviation of the 1st group sample, and S2 is the standard deviation of the 2nd group sample.

Artificial Neural Networks (ANNs)
ANNs are inspired by the biological neurons in the human brain and consist of interacting computational units. ANNs build a relationship between input and output and produce a good response by following biological processes of human brain activity such as storing information, learning, and training. The structure of an ANN includes an input layer, hidden layer(s), and an output layer [5].

Calibration and Validation
Calibration of a model is the selection of a combination of parameters, an optimization process based on parameter values, to improve the coherence between the observed and simulated watershed hydrological response [6]. Validation measures the extent to which differences in scores reflect actual differences between individuals, groups, or situations in the characteristics being measured, or actual errors in the same individual or group from one situation to another [6].

Nash-Sutcliffe Efficiency
The Nash-Sutcliffe efficiency (NSE) is a normalized statistic that determines the relative magnitude of the residual variance ("noise") compared to the measured data variance ("information"). NSE indicates how well the plot of observed versus simulated data fits the 1:1 line [7]:

NSE = 1 − Σ (Oi − Si)² / Σ (Oi − Ō)²

where Oi is the ith observation for the constituent being evaluated, Si is the ith simulated value, Ō is the mean of the observed data, and n is the total number of observations [7].

Root Mean Squared Error
Several error indices are commonly used in model evaluation, including the root mean square error (RMSE). These indices are valuable because they report the error in the units (or squared units) of the constituent of interest, which aids in interpreting the results. An RMSE value of 0 indicates a perfect fit [7]:

RMSE = sqrt(Σ (Xi − Yi)² / n)

where Xi is the observed data, Yi is the simulated value, and n is the total number of observations.

Correlation coefficient
The correlation coefficient is an indicator of the strength of the relationship between observations and estimates. Higher positive coefficients indicate that estimates are high or low when the actual values are high or low, respectively, giving evidence of the suitability of the estimation method [7]:

R = Σ (Oi − Ō)(Si − S̄) / sqrt(Σ (Oi − Ō)² · Σ (Si − S̄)²)

where Oi is the observed data, Ō is the average of the observed data, Si is the simulated value, and S̄ is the average of the simulated values [8].
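The consistency statistic and the three evaluation metrics defined above translate directly into code; the following Python sketch uses illustrative arrays and is not the software used in the study.

```python
import numpy as np

def raps_q(y):
    """RAPS consistency statistic Q = max|Sk**| / sqrt(n), where Sk* is the
    cumulative sum of (Yi - Ybar) and Sk** = Sk*/Dy."""
    y = np.asarray(y, dtype=float)
    n = y.size
    dy = np.sqrt(np.sum((y - y.mean()) ** 2) / n)
    sk_star = np.cumsum(y - y.mean())
    return float(np.max(np.abs(sk_star / dy)) / np.sqrt(n))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency; 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean square error; 0 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def corr(obs, sim):
    """Pearson correlation coefficient R."""
    return float(np.corrcoef(obs, sim)[0, 1])

obs = [210.0, 305.0, 120.0, 15.0, 80.0, 260.0]   # invented monthly rainfall (mm)
sim = [190.0, 280.0, 150.0, 40.0, 95.0, 240.0]
print(f"Q={raps_q(obs):.3f}  NSE={nse(obs, sim):.2f}  "
      f"RMSE={rmse(obs, sim):.2f}  R={corr(obs, sim):.2f}")
# Q is compared against the tabulated critical values quoted in the Results.
```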
Consistency test
The consistency test with the RAPS method, at confidence degrees of 5% and 1% with n = 240, gave critical values of Qcritical = 1.36 for 5% and Qcritical = 1.75 for 1%, while Qcalculated = 0.588. Because Qcalculated (0.588) < Qcritical (1.75), the rainfall data are considered consistent.

Stationarity test
The stationarity tests showed stable climatological data with the F-test, except for wind speed, and stable climatological data with the t-test, except for air temperature.

Artificial Neural Networks (ANNs)
ANNs are an information processing system with characteristics resembling a human neural network, created as a mathematical abstraction of human understanding. The characteristics of an ANN are determined by the network architecture, the training, and the activation function. The feed-forward backpropagation network is an algorithm often used to solve complex problems, because a network with this algorithm is trained using a supervised learning method: the network is given patterns consisting of input patterns and desired output patterns, and the training is repeated until all patterns produce outputs that meet the desired patterns. In this study, the ANN models were built in Matlab R2017a using the available toolbox. An example configuration used a network architecture with 16 years of data as the calibration stage and four years as the validation stage, with 1500 epochs. The results of the ANN model analysis, in terms of performance, training state, and regression, are shown in Fig. 3, Fig. 4, and Fig. 5. The results obtained when the ANN models were developed and applied in Palembang City are given in Table 6. The summary statistics (Table 6) indicate satisfactory calibration and validation using 20 years of data, although the calibration results are better than the validation results. The best NSE value using 1500 epochs is 0.54 for calibration and 0.41 for validation; according to these statistics, the ANN-simulated rainfall is satisfactory during calibration but unsatisfactory during validation. The best RMSE values using 1500 epochs are 99.37 for calibration and 87.32 for validation, indicating that, in terms of squared-unit rainfall error, the model performance during validation was closer to a perfect fit than during calibration. The correlation coefficient values varied from 0.59 to 0.74 during calibration and from 0.11 to 0.65 during validation, indicating that the model's correlation performance is stronger for calibration than for validation.
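The study used the Matlab toolbox with the TRAINLM (Levenberg-Marquardt) training function, which has no exact scikit-learn equivalent; the following Python sketch only outlines an analogous workflow (a 16-4 year split and 1500 training iterations) on synthetic data, and all names and parameters are illustrative assumptions.

```python
# Rough Python analogue of the Matlab ANN workflow described above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.random((240, 4))                               # 20 years x 12 months, 4 predictors
y = 150.0 + 400.0 * X[:, 0] + rng.normal(0, 40, 240)   # synthetic rainfall target

split = 16 * 12                                        # 16-4 year calibration/validation
scaler = StandardScaler().fit(X[:split])
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=1500,  # 1500 iterations,
                     random_state=0)                           # cf. the 1500 epochs
model.fit(scaler.transform(X[:split]), y[:split])

sim_val = model.predict(scaler.transform(X[split:]))
# Score sim_val against y[split:] with the NSE / RMSE / R functions sketched earlier.
```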
Investigating fold-river interactions for major rivers using a scheme of remotely sensed characteristics of river and fold geomorphology There are frequently interactions between active folds and major rivers (mean annual water discharges > 70 m3s-1). The major river may incise across the fold, to produce a water gap across the fold, or a bevelling (or lateral planation) of the top of the fold. Alternatively, the major river may be defeated to produce a diversion of the river around the fold, with wind gaps forming across the fold in some cases, or ponding of the river behind the fold. Why a river incises or diverts is often unclear, though influential characteristics and processes have been identified. A new scheme for investigating fold-river interactions has been devised, involving a short description of the major river, climate, and structural geology, and 13 characteristics of river and fold geomorphology: 1) Channel width at location of fold axis, w, 2) Channel-belt width at location of fold axis, cbw, 3) Floodplain width at location of fold axis, fpw, 4) Channel sinuosity, Sc, 5) Braiding index, BI, 6) General river course direction, RCD, 7) Distance from fold core to location of river crossing, C-RC, 8) Distance from fold core to river basin margin, C-BM, 9) Width of geological structure at location of river crossing, Wgs, 10) Estimate of erosion resistance of surface sediments/rocks and deeper sediments/rocks in fold, ERs, ERd, 11) Channel water surface slope at location of fold axis, s, 12) Average channel migration rate, Rm, 13) Estimate of fold total uplift rate, TUR. The first 10 geomorphological characteristics should be readily determinable for nearly all major rivers using widely available satellite imagery and fine scale geological maps. The last 3 characteristics should be determinable for most major rivers where other data sources are available. This study demonstrates the methodology of this scheme, using the example of the major rivers Karun and Dez interacting with active folds in the foreland basin tectonic setting of lowland south-west Iran. For the rivers Karun and Dez (mean annual water discharges 575 m3s-1 and 230 m3s-1, respectively), it was found that geomorphological characteristics Nos. 2, 3 and 7 had statistically significant differences (p-value ≤ 0.05) between the categories of river incision across a fold and river diversion around a fold. For river incision, at the fold axis, channel-belt width was always < 2.7 km, and floodplain width was generally (80 % of cases) < 5.7 km; whereas for river diversion, at the projection of the fold axis, these two characteristics had a wide range of values. For river incision, the distance from the fold core to the location where the river channel crossed the fold axis, was generally (80 % of cases) ≤ 8.5 km; whereas for river diversion, this distance was always > 22 km. Since it is highly likely that different characteristics will be important for other major rivers interacting with other folds, it is recommended that this scheme is now used to investigate a variety of major rivers from across the globe. By comparing the same parameters for different major rivers, a better understanding of fold-river interactions should be achieved. 
Introduction
Interpreting the interactions between rivers and tectonics can be challenging. Principally, this is because rivers are inherently variable and complex, influenced by a wide range of both autogenic factors that include topography, hydrology and sedimentology, and allogenic factors that include structural geology and active tectonics, plus human activities, climate and relative sea-level (or base level) changes [1][2][3][4][5][6][7][8][9]. Disentangling the various internal and external factors and their influences on geomorphology can be difficult. However, for major rivers, with mean annual water discharges of 70 m3s-1 or more [10], interacting with active folds over horizontal spatial scales of metres to tens of kilometres (river channel dimensions to fold dimensions), the difficulties are lessened, especially at locations upstream of coastal plain-valleys [11][12][13]. This is because for a single major river at such scales, climate and rates of sediment supply from the basin hinterland are likely to be similar, as climate zones typically extend over scales of hundreds of kilometres [14][15][16], and upstream of the extent of the backwater length (typically a distance of more than 150 km from the shoreline) the influences of relative sea-level changes are likely to be minimal [12,17,18]. Hence, at these river reach scales, the significant allogenic factors will be limited to tectonics and human activities, with prominent human impacts being limited to the last few millennia [13,[19][20][21]. Major rivers frequently interact with active folds, particularly as transverse rivers in foreland basin systems, where folds oriented roughly parallel to the orogenic axis may form a succession of "obstacles" to river courses, particularly in the orogenic wedge and foredeep [22,23]. Conceptual models of the interactions between transverse rivers and growing folds have been constructed [22,[24][25][26][27][28]. Such models indicate that where rates of river aggradation exceed rates of structural uplift associated with the fold, a river will flow without impedance across the fold and may bevel off the top of the emerging fold with little or no topographic relief developing [27,28]. Where a fold does develop a surface topographic expression, a river will either flow across the fold by maintaining basinward-dipping channel slopes across the fold, or it will be defeated by the growing fold. To maintain a transverse course across a fold, a river needs sufficient stream power to erode and incise into the crest and across the axis of the fold at a rate greater than the difference between the rates of structural uplift and the rates of river aggradation [22,29]. Whilst the precise controls on river erosion are debated, due to factors such as bed armouring [30,31], it is likely that river erosion into bedrock and sediments will increase with stream power. If the river is defeated, then it will be diverted around the fold by channel migrations or avulsions to flow through structural low points, frequently flowing initially roughly parallel to the fold axis and thence around the nose of the fold. Alternatively, the river may be ponded in a basin upstream of the fold [22,25,27,32].
According to such conceptual models, the responses of rivers and major rivers should be fairly predictable. A river may incise across an active fold as a water gap (a river valley of a maintained river course), or it may be defeated by the fold and diverted to leave a wind gap (a dry valley of a previous river course), with the configuration of these water and wind gaps varying with a number of factors, such as the type of fold [22,27,33,34]. For instance, detachment folds would be expected to have a wind gap near the centre of the fold and a water gap near the propagating fold tip, whilst fault bend folds would be expected to have a number of wind gaps across the length of the fold, with the defeated rivers diverted parallel to the fold axis [33,34]. Whilst conceptually it is clear that a major river should incise across an active fold in some cases and divert around it in other cases, in practice it is often unclear as to how and why this occurs. For instance, there is a seemingly paradoxical tendency for a number of major rivers to transect many growing anticlines in the vicinity of their greatest structural and topographic relief [35-37]. By contrast, some rivers frequently cross a growing fold near to the laterally propagating tip or nose of the fold [27,38]. Alternatively, rivers may be diverted around the fold tips of laterally propagating anticlinal fold segments until these fold segments coalesce; after which the river may divert to feed a longitudinal river, or it may incise across the coalesced fold at the topographic low of the merger location [39].

These different responses are probably due to changes in the fold-river interactions with time and the variable and complex nature of river systems [8,13,40,41]. There may be different reaction, relaxation and recurrence times for events [42], multiple processes may act in combination to produce a specific phenomenon [42,43], different factors may result in similar effects [41], a river system may not adjust in a progressive and systematic fashion to modifications [44], and a river system may be dominated by autogenic processes and exhibit variability independent of external factors, due to systems of non-linearity or self-organised criticality [9,44,45]. Nevertheless, with such systems there may be characteristics of the river or the fold which act as thresholds which the river needs to cross for the dynamic equilibrium of river incision across an active fold to develop and be maintained [42].

The characteristics which may act as thresholds will probably include those associated with the main controlling variables for the persistence of an antecedent river across a growing fold, as shown in Table 1 [22,25,28].

Table 1. The main controlling variables for the persistence of antecedent rivers crossing growing folds (Modified from [22,25,28]).
Rate of sediment aggradation and rate of structural uplift: Lower rates of sediment aggradation and lower rates of structural uplift promote persistence of an antecedent river, due to less erosion of the fold hanging wall being required.
Erosion resistance of rocks and sediments within fold: Lower erosion resistances (thick alluvial strata, poor cementation and readily erodible bedrock) mean that lower stream power is required, thus promoting persistence of an antecedent river.
Water discharge of river: Higher water discharges and higher stream power promote persistence of an antecedent river.
Stream power, flow depth, channel width, channel water surface slope of river: Higher stream power promotes persistence of an antecedent river. Narrower channel widths and steeper channel water surface slopes promote persistence of the antecedent river, due to the associated increased stream power.
Sediment load: Increased sediment load decreases the proportion of stream power available for bed erosion, and mantling of the bed with sediment precludes erosion of the bed; thus, reduced sediment load may promote persistence of an antecedent river.
Width of geological structure: Widening of a geological structure causes reduced channel water surface slopes and stream power; thus, narrower geological structures promote persistence of an antecedent river.
Transverse structures: Transverse structures, such as faults, provide zones of less erosion resistant rocks that cut across structures, exploited by antecedent rivers.

The influences of some of these controlling variables are quite intricate, particularly those associated with river hydrology and sediment load [31]. For instance, a river crossing a fold will produce aggradation upstream and downstream of the fold in a dynamic equilibrium, in which sufficient foreland-dipping channel slopes for producing erosive stream power across the zone of greatest fold uplift are maintained [22,27,46,47]. If upstream or downstream aggradation is insufficient, as may be the case with reduced sediment load, then the river may be defeated and diverted around the fold [22,24]. If upstream aggradation is excessive, then the river may be defeated by producing slopes that promote channel migrations or avulsions to other upstream locations [27,38,48]. If downstream aggradation is excessive, then the river may also be defeated, by reducing channel slopes to such an extent that stream power is insufficient to maintain erosion into the fold and to transport away the eroded material [47,49]. Nevertheless, Table 1 still provides an adequate foundation for differentiating between river incision across a fold and river diversion around a fold. Some of the controlling variables, such as stream power, flow depth, and sediment load, involve characteristics which need to be determined by fieldwork; whereas other controlling variables, such as width of geological structure, involve characteristics which can be determined relatively easily from remote sensing imagery and fine scale geological maps.
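Since several of the variables in Table 1 act through stream power, a minimal Python sketch of the standard total and specific stream power relations (Ω = ρgQs and ω = Ω/w) may help fix ideas. The input values are hypothetical, chosen only to be of the order of the rivers discussed later, and the sketch is an illustration of the relations rather than part of the published scheme.

```python
# Minimal sketch: total and specific stream power from the standard
# definitions Omega = rho * g * Q * s and omega = Omega / w.
# The input values below are hypothetical, not measurements from this study.

RHO_WATER = 1000.0  # density of water, kg m^-3
G = 9.81            # gravitational acceleration, m s^-2

def total_stream_power(discharge_m3s: float, slope_m_per_m: float) -> float:
    """Total stream power Omega in W m^-1 (per unit channel length)."""
    return RHO_WATER * G * discharge_m3s * slope_m_per_m

def specific_stream_power(discharge_m3s: float, slope_m_per_m: float,
                          width_m: float) -> float:
    """Specific stream power omega in W m^-2 (per unit bed area).
    Narrower channels and steeper slopes raise omega, consistent with
    Table 1: both promote persistence of an antecedent river."""
    return total_stream_power(discharge_m3s, slope_m_per_m) / width_m

# Example with hypothetical values of the order of the River Dez:
# Q = 230 m^3 s^-1, s = 3.0e-4, w = 200 m
print(specific_stream_power(230.0, 3.0e-4, 200.0))  # ~3.4 W m^-2
```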
Aim of the Study - A Scheme for Investigating Fold-River Interactions Using Remote Sensing

The aim of this study is to demonstrate a new scheme which uses a short description of the major river and 13 remotely sensed characteristics of river and fold geomorphology to investigate fold-river interactions. The short description of the major river should include river measurements (including mean annual discharge) and short descriptions of the river course, climate, and structural geology. The first 10 geomorphological characteristics should be readily determinable from widely available remote sensing imagery and fine scale geological maps. This use of remote sensing allows a large number of major rivers to be investigated relatively easily, including those in remote or inaccessible areas, without recourse to expensive fieldwork. The last three geomorphological characteristics should be determinable where additional data sources are available. This study utilises the example of the major rivers Karun and Dez interacting with folds in lowland south-west Iran to show how to apply the scheme in practice.

Selection of 13 Remotely Sensed Characteristics of River and Fold Geomorphology

Remote sensing imagery and fine-scale geological maps have the advantage of being widely available data sources which only need processing for interpretation, rather than detailed fieldwork, but have the drawback that certain parameters, such as sediment grain size, sediment load, flow velocity, and channel depth, cannot be measured accurately from them. A number of the controlling variables in Table 1 involve geomorphological characteristics which are readily determinable from remote sensing and fine scale geological and topographical maps, as are other significant geomorphological characteristics, such as channel width, that are associated with other conceptual models [22,25-28].
Also, previous detailed studies on interactions between specific major rivers and tectonics, particularly those of Jorgensen [50] involving rivers in the western U.S.A., Lavé and Avouac [51,52] involving upland rivers in Nepal, and Woodbridge [13] involving lowland rivers in south-west Iran, have identified useful characteristics determinable using remote sensing and geological maps in their investigations of such interactions. All of these data sources have been used to compile a suite of 13 useful geomorphological characteristics to be determined in investigations of fold-river interactions:

(1) Channel width at location of fold axis, w
(2) Channel-belt width at location of fold axis, cbw
(3) Floodplain width at location of fold axis, fpw
(4) Channel sinuosity, Sc
(5) Braiding index, BI
(6) General river course direction, RCD
(7) Distance from fold core to location of river crossing, C-RC
(8) Distance from fold core to river basin margin, C-BM
(9) Width of geological structure at location of river crossing, Wgs
(10) Estimate of erosion resistance of surface sediments/rocks and deeper sediments/rocks in fold, ERs, ERd
(11) Channel water surface slope at location of fold axis, s
(12) Average channel migration rate, Rm
(13) Estimate of fold total uplift rate, TUR

In summary, channel width at the location of the fold axis should be a useful parameter, since the conceptual model of Amos and Burbank [25] and the studies of Lavé and Avouac [51,52] in Nepal indicate that channel width may act as a key characteristic of river responses, with channel narrowing to enhance incision rates apparently taking precedence over other changes for upland rivers crossing rapidly uplifting folds [27,52-54]. Channel-belt width and floodplain width at the location of the fold axis should be useful parameters, since narrowing of the channel-belt and narrowing of the floodplain will increase the proportion of stream power available for vertical erosion and thus promote the maintenance of a river incising across a fold. The study of Woodbridge [13] demonstrated the importance of channel-belt width, with a narrow average channel-belt width of less than c. 2.7 km being hypothesised as a threshold needed for the rivers Karun and Dez to produce and maintain river incision across a fold in lowland south-west Iran. Channel sinuosity and braiding index should both be useful parameters, since the study of Woodbridge [13] found trends for both reduced sinuosity and braiding index for river reaches incising across a fold; though, as with the studies of Jorgensen [50] in the U.S.A. and Zámolyi et al. [55] in Hungary, these trends did not always achieve statistical significance. General river course direction should be a useful parameter, as the study of Woodbridge [13] found a tendency for river incision across a fold to have a general river course direction orthogonal to the fold axis for the river reaches which crossed the fold, whereas river diversion had a general river course parallel to the fold axis upstream of the fold, followed by a change in river course bearing of about 20°-70° to flow around the fold.
Distance from the fold core to the location where the river crosses the fold axis should be a useful discriminative parameter since, naturally, there is a very strong tendency for river incision across a fold to occur between the fold core and the fold nose, and for river diversion to occur beyond the fold nose [13,20,56]. Distance from the fold core to the river basin margin should be a useful parameter if the timing of initial fold-river interactions is important, as hypothesised by Woodbridge [13]. Where a river incises across a fold due to it initially encountering the fold as a small, emerging fold, the fold core location is likely to be within the margins of the drainage basin of the river crossing the fold axis (positive measurement); whereas where a river diverts around a fold due to it initially encountering the fold as a larger, more developed fold, the fold core location is likely to be beyond the margins of the drainage basin of the river crossing the fold axis, or its projection (negative measurement) [13,20]. Width of geological structure should be a useful parameter since, as shown in Table 1, the conceptual model of Burbank et al. [22] indicates that narrow geological structures promote river incision across a fold by avoiding the reduced channel slopes, stream power, and vertical erosion associated with widening geological structures. The erosion resistances of sediments and rocks in a fold can be estimated from fine scale geological maps where details of the sediments and rock types are known, and should be useful parameters, since the conceptual models of Burbank et al. [22] and Bufe et al. [28] indicate that low erosion resistances promote river incision across a fold and river bevelling of the top of a fold. Also, some studies, such as that on the meandering of the River Dniester by Yeromenko and Ivanov [57], have found that variations in erosion resistances of rocks and sediments were significant in influencing river responses; though other studies, such as those on rivers and growing folds in northern Alaska reviewed by Burbank et al. [22], have found that variations in erosion resistances were not.

To determine the last three geomorphological characteristics precisely, data sources in addition to one set of remote sensing imagery and one set of fine scale geological maps are preferable. Whilst slope can be measured from a DEM, greater precision for channel water surface slope measurement will be obtained from other data sources, such as precise hydrological and topographical surveys. Channel water surface slope at the location of the fold axis should be a useful parameter, since it was found to be a key characteristic for upland rivers in studies by Lavé and Avouac [51,52] in Nepal, Yanites et al. [58] in Taiwan, and by Amos and Burbank [25] in New Zealand. Average channel migration rate over time intervals of about 20-40 years should be a useful parameter, since lateral migration rates have been found to be significant in studies of river incision across a fold [13] and river lateral planation of the top of a fold [28]. Fold total uplift rate, as estimated from additional data sources, should be a useful parameter since, in a number of conceptual models, low rates of structural uplift promote the maintenance of a river incising across a fold [22,24,28].
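Taken together, the 13 characteristics form a fixed record per fold-river crossing. The sketch below shows one possible container for such records; the class and field names are my own shorthand rather than part of the published scheme, and characteristics Nos. 11-13 are optional because their data sources may be unavailable.

```python
# Minimal sketch of a record holding the scheme's 13 characteristics for one
# fold-river crossing. The FoldRiverCrossing name and its fields are my own
# shorthand; optional fields cover characteristics Nos. 11-13, which need
# supplementary data sources.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FoldRiverCrossing:
    river: str
    fold: str
    interaction: str                 # "incision" or "diversion"
    w_m: float                       # 1) channel width at fold axis, m
    cbw_km: float                    # 2) channel-belt width, km
    fpw_km: float                    # 3) floodplain width, km
    sc: Tuple[float, float, float]   # 4) sinuosity (upstream, across, downstream)
    bi: Tuple[float, float, float]   # 5) braiding index (upstream, across, downstream)
    rcd_deg: Tuple[int, int, int]    # 6) river course direction, degrees
    c_rc_km: float                   # 7) fold core to river crossing, km
    c_bm_km: float                   # 8) fold core to basin margin, km (signed)
    wgs_km: float                    # 9) width of geological structure, km
    er_s: int                        # 10) surface erosion resistance, scale 1-8
    er_d: Optional[int] = None       # 10) deeper erosion resistance, scale 1-8
    s_m_per_m: Optional[float] = None                         # 11) water surface slope
    rm_m_per_yr: Optional[Tuple[float, float, float]] = None  # 12) migration rate
    tur: Optional[int] = None        # 13) total uplift rate class, 0-8
```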
Summary of Methods

There are three main elements to the application of the scheme to a specific major river or river system:

(1) A short description of the river, including its course, and the climate and structural geology of the region through which it flows
(2) Measurement of geomorphological characteristics Nos. 1 to 10
(3) Measurement of geomorphological characteristics Nos. 11 to 13

Short Description of River

The short description of the river introduces the major river and the context of the fold-river interactions. It should include data on river length, drainage basin area, mean annual water discharge, seasonality of discharge, and major direct human impacts on the river, and a short description of the river course. It should also include short descriptions of the regional climate and structural geology, with some details of the tectonic setting and the types of faults and folds. The short description of the river can be supplemented by maps of the river system and structural geology.

Measurement of Geomorphological Characteristics Nos. 1 to 10

The measurement of the first 10 characteristics of river and fold geomorphology provides the main data for investigating different fold-river interactions. The only data sources needed to determine these 10 remotely sensed characteristics are: high-resolution remote sensing images, fine scale geological maps (preferably at 1:100,000 scale or finer), and maps of oil and gas fields and seismic survey sections (in cases where there are sub-surface folds). Such widespread data sources should be available for most of the major rivers of the world.

For characteristics Nos. 1 to 3 (channel width, channel-belt width, and floodplain width), the measurements are made solely at the location of the fold axis or its projection. This is because these characteristics vary continuously along the length of the river and their measurements are not dependent on how the river is sub-divided into river reaches. By contrast, characteristics Nos. 4 and 5 (channel sinuosity and braiding index) are heavily dependent on how the river is sub-divided into river reaches. Hence, for these characteristics the measurements are made for river reaches immediately upstream of the fold, across the fold axis (or its projection), and immediately downstream of the fold. This is done so that changes in these characteristics associated with the fold can be more easily differentiated from changes due to the sub-division into river reaches and other variations. Similarly, for characteristic No. 6 (general river course direction), the measurements are made for river reaches immediately upstream of the fold, across the fold axis (or its projection), and immediately downstream of the fold. For general river course direction, it is changes relative to the fold axis which are more indicative of changes associated with the fold. Hence, these measurements are also made relative to the fold axis, and there is an emphasis on changes in river course direction between river reaches immediately upstream of the fold, across the fold axis, and immediately downstream of the fold. Characteristics Nos. 7 to 10 (distances from the fold core to the river crossing and river basin margin, width of geological structure, and estimate of erosion resistance) are mainly associated with the structural geology, rocks and sediments of the fold. Hence, for these characteristics, the measurements are made relative to structures of the fold, especially the fold core, the fold axis, and the fold limbs.
Measurement of Geomorphological Characteristics Nos. 11 to 13

The measurement of the last three geomorphological characteristics provides additional data for investigating different fold-river interactions. The data sources needed for these characteristics may not be available for all major rivers worldwide, hence they may be considered as supplementary characteristics. The additional data sources could be precise hydrological or topographical surveys of the river, databases superimposing two sets of high-resolution remote sensing imagery separated by about 20-40 years, and data relating to vertical Earth surface movements, such as dating of displaced geomorphic surfaces, e.g., [59], repeated precision GPS surveys, and precise levelling, e.g., [60].

For characteristic No. 11 (channel water surface slope) the measurements are made solely at the location of the fold axis or its projection. This is because channel water surface slope is highly variable and the fold axis is a key location where similar conditions can be compared. Characteristic No. 12 (average channel migration rate) is heavily dependent on how the river is sub-divided into reaches. Hence, measurements for this characteristic are made for river reaches immediately upstream of the fold, across the fold axis (or its projection), and immediately downstream of the fold. Characteristic No. 13 (fold total uplift rate) is mainly associated with the structural geology of the fold. Hence, the estimates or measurements for this characteristic are made for the crest of the fold relative to the surrounding region.

Details of Methods for the 13 Geomorphological Characteristics, as Applied to the Rivers Karun and Dez

To introduce and demonstrate the use of the new scheme in practice, it has been applied to the River Karun and River Dez in the province of Khuzestan in lowland south-west Iran, as an example. As shown in Figures 1 and 2, the major rivers Karun and Dez (mean annual water discharges c. 575 m³ s⁻¹ and 230 m³ s⁻¹, respectively) flow from the Zagros orogen in the N and NE across the Upper and Lower Khuzestan Plains into the Mesopotamian-Persian Gulf Foreland Basin to the S and SW [61]. Their interactions with folds within the Upper and Lower Khuzestan Plains have been subjected to detailed investigations, as described by Woodbridge [13], Woodbridge and Frostick [56] and Woodbridge et al. [20]. The data in these investigations was used to provide short descriptions of the rivers Karun and Dez, as given in Section 4.1. The data was also used to demonstrate the measurement of each of the 13 geomorphological characteristics, by using the example of the Sardarabad Anticline (SDA on Figure 2), to the north-west of Band-e Qir (Figure 1), and its interactions with the River Dez (river incision across the fold) and the River Karun (Shuteyt branch) (river diversion around the fold). The Sardarabad Anticline appears to be a doubly plunging, segmented, asymmetric detachment fold which is about 58 km long × 9 km wide, and which rises to more than 70 m above the surrounding plains. The fold axis is oriented roughly ESE-WNW, curving to SE-NW at the eastern end, where it apparently merges with a roughly N-S oriented oblique lateral ramp [13,62-65].

Measurement of Geomorphological Characteristics Nos. 1 to 10
For determining these 10 characteristics for the rivers Karun and Dez, the remote sensing images used were 30 m resolution false-colour Landsat Enhanced Thematic Mapper Plus (ETM+) images (28 July 2001 and 4 August 2001), with Band 4 (near-infrared, 750-900 nm) displayed red, Band 3 (red, 630-690 nm) displayed green, Band 2 (visible green, 525-605 nm) displayed blue, and pan-sharpened with pan-chromatic Band 8 [66]. The fine-scale geological maps used were mainly 1:100,000 scale geological maps, such as "Sheet 20824E Mulla Sani" of the Iranian Oil Operating Companies (IOOC) [67]. The maps of oil and gas fields and seismic survey sections were from a variety of sources [68-71]. The Landsat ETM+ images and detailed surveys of the rivers undertaken by the Dez Ab Engineering Company from 1997-2000 were used to sub-divide the main river courses of the Karun and Dez from the vicinity of Gotvand and Dezful to the Persian Gulf into a succession of straight-line river reaches. The average river reach length was 8.0 km, with an extreme range of 0.8-50.5 km. A river reach was defined as a length of river channel with a relatively homogeneous discharge and morphology [72]. Significant changes in general river course direction, river planform, and river morphology were used to demarcate the end of one reach and the start of the next. This sub-division into river reaches, whilst necessarily subjective, facilitated the measurement of characteristics associated with river reaches, such as channel sinuosity and general river course direction [13,20].

Channel Width at Location of Fold Axis (or its Projection)
Symbol: w
Units: m (quoted to two decimal places)
Measurement location: Where river channel crosses the fold axis (or its projection)

Channel width is defined as the maximum extent of the river channel water surface, as distinguished on remote sensing images (or survey records), measured orthogonal to the river thalweg. Since the channel-forming discharge is commonly taken as the bankfull discharge and channel width varies significantly with river discharge, the aim is to measure the width between the channel banks at bankfull discharge [42,73]. In practice, channel width also varies with distance along the channel, local irregularities and outcrops, vegetation, human impacts, and other factors, so it is recommended that the distance between the channel banks is measured from remote sensing images of a single date, preferably at a time of relatively high flows. Whilst variations could be reduced by determining average channel width over a distance of one or two meander wavelengths [74-76], this is not recommended, since subtle changes in channel width would be missed in the frequent cases where the zone of maximal uplift is considerably smaller than the meander wavelength of a major river. Instead, for a single-thread meandering channel pattern, the width of the channel at or very near to the fold axis should be measured, with care to avoid measuring at localised broadening or constriction of the channel. For a multi-thread braided channel pattern, the widths of all channels at the fold axis location should be measured, and the sum recorded. For anastomosing or anabranching channel patterns, the widths of all channels associated with the main branch of the river at the fold axis location should be measured, and the sum recorded [77].
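Before the worked example, a minimal sketch of how the multi-thread rule could be applied to binary water/land samples taken along a transect orthogonal to the thalweg. The transect values are hypothetical, and the run-counting helper also gives the per-cross-section anabranch count used later for the braiding index (characteristic No. 5).

```python
# Minimal sketch of the multi-thread width rule: given binary water/land
# samples taken at regular spacing along a transect orthogonal to the
# thalweg at the fold axis, sum the widths of all distinct wet runs.
# The sample values below are hypothetical.
from typing import Sequence

def summed_channel_width(is_water: Sequence[bool], spacing_m: float) -> float:
    """Total channel width (m) = number of wet samples * sample spacing.
    For a single-thread channel this is just the one wet run; for a braided
    pattern it sums every wet run, matching the scheme's rule."""
    return sum(is_water) * spacing_m

def channel_count(is_water: Sequence[bool]) -> int:
    """Number of distinct wet runs (anabranches) crossed by the transect."""
    count = 0
    previous = False
    for wet in is_water:
        if wet and not previous:
            count += 1
        previous = wet
    return count

transect = [False, True, True, False, False, True, True, True, False]
print(summed_channel_width(transect, 30.0))  # 150.0 m for five wet 30 m pixels
print(channel_count(transect))               # 2 distinct channels
```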
For the example of the River Karun (Shuteyt) diverting around the Sardarabad Anticline, channel width, w = 202.19 m, at the location where the projection of the fold axis intersects with the thalweg of the main river channel, as shown in Figure 3.

Channel-Belt Width at Location of Fold Axis (or its Projection)
Symbol: cbw
Units: km (quoted to three decimal places)
Measurement location: Where river channel crosses the fold axis (or its projection)

Channel-belt width is defined as the maximum extent of the channel-belt of the river, as distinguished on remote sensing images, measured orthogonal to the axis of the river reach. For single-thread meandering and straight channel patterns, the measurement is to the extremities of all channels, abandoned channels, meanders, levées, crevasse channels and splays, oxbows, and meander scars that are associated with the active river channel. For a multi-thread braided channel pattern, the measurement is to the extremities of all channels, bars, islands, and abandoned channels associated with the active river channel [73]. For anastomosing or anabranching channel patterns, the measurement is to the extremities of the main active river channels, with any anabranches clearly separated by floodplain areas being considered as discrete channel-belts not included in the measurement [77]. Where there is uncertainty, such as discriminating between extensive braided rivers and discrete channels of anastomosing rivers, the default is to use the larger channel-belt width measurement.

For the example of the River Karun (Shuteyt) diverting around the Sardarabad Anticline, channel-belt width, cbw = 2.051 km, as shown in Figure 3. In this figure, the channel-belt of the River Karun (Shuteyt) is highlighted in light red (that of the River Gargar is highlighted in yellow), and the channel-belt width measurement is indicated by the white and black checked straight line. The location of the measurement is the same as that for geomorphological characteristic No. 1.
Floodplain Width at Location of Fold Axis (or its Projection)
Symbol: fpw
Units: km (quoted to three decimal places)
Measurement location: Where river channel crosses the fold axis (or its projection)

Floodplain width is defined as the maximum extent of the floodplain of the river, as distinguished on remote sensing images, measured orthogonal to the axis of the river valley. The floodplain width can vary from the channel-belt width to many tens of channel-belt widths [73]. The margins of the floodplain are usually fairly clear due to a slight change in slope at the base of the enclosing valley walls. Interpretive difficulties with floodplain width may arise where two or more major rivers occupy a large plain, especially a large coastal plain, and, in these cases, the measurement is to the extremities of the floodplain of the streams and wetlands within the drainage basin of the major river in question [78].

For the example of the River Karun (Shuteyt) diverting around the Sardarabad Anticline, floodplain width, fpw = 17.603 km, as shown in Figure 3. In this figure, the floodplain width measurement is indicated by the light brown and black checked straight line. The location of the measurement is the same as that for geomorphological characteristic No. 1.

Channel Sinuosity
Symbol: Sc
No units (ratio quoted to three decimal places)
Measurement location: River reaches immediately upstream of fold, across the fold axis (or its projection), and immediately downstream of fold

Channel sinuosity is the ratio defined by the equation Sc = Lc/Lv, where Lc is channel length (m) and Lv is straight-line valley length (m) [42]. The channel length is the total distance between the two ends of the river reach measured along the thalweg of the main channel. For multi-thread braided, anastomosing, and anabranching channel patterns there can be interpretive difficulties regarding the main channel thalweg, though, generally, it should be interpreted as the course of the broadest channel. The straight-line valley length is the distance between the two ends of the river reach measured in a straight line along the axis of the river reach. Measurements are made for river reaches immediately upstream of the fold, across the fold axis (or its projection), and immediately downstream of the fold, to elucidate any changes in channel sinuosity associated with the fold.

For the example of the River Dez incising across the Sardarabad Anticline, channel sinuosity, Sc = 1.417 (immediately upstream of fold); 1.120 (across fold axis); 1.585 (immediately downstream of fold), as shown in Figure 4.
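A minimal sketch of the sinuosity calculation from digitised thalweg vertices in a projected (metre) coordinate system; the coordinates below are hypothetical.

```python
# Minimal sketch of channel sinuosity Sc = Lc / Lv for a river reach, from
# thalweg vertices in a projected coordinate system (metres). Coordinates
# here are hypothetical.
import math
from typing import List, Tuple

def polyline_length(points: List[Tuple[float, float]]) -> float:
    """Lc: total length measured along the digitised thalweg."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def sinuosity(thalweg: List[Tuple[float, float]]) -> float:
    """Sc = Lc / Lv, with Lv the straight-line distance between the two
    ends of the reach."""
    lc = polyline_length(thalweg)
    lv = math.dist(thalweg[0], thalweg[-1])
    return lc / lv

reach = [(0.0, 0.0), (800.0, 600.0), (1200.0, 200.0), (2000.0, 700.0)]
print(round(sinuosity(reach), 3))  # ~1.184 for this hypothetical reach
```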
Braiding Index
Symbol: BI
No units (index quoted to one decimal place)
Measurement location: River reaches immediately upstream of fold, across the fold axis (or its projection), and immediately downstream of fold

The braiding index is a measure of the intensity of braiding, and for a river reach it can be defined as the channel count index of the mean number of anabranches (or links) per river cross-section for that reach [79,80]. Since the intensity of braiding varies with flow stage [81], it is recommended that measurements are undertaken from remote sensing images of a single date, preferably at a time of relatively high flows for compatibility with other measurements, such as channel width. The river reach is sub-divided into river cross-sections orthogonal to the valley axis which are approximately 1 km apart. For each river cross-section, the number of distinct anabranches is counted, and the mean for the entire river reach is calculated. For single-thread meandering and straight channel patterns, the braiding index will be 1, or slightly greater than 1 where there are channel islands. For anastomosing or anabranching channel patterns, the braiding index is calculated for the main branch of the river. Measurements are made for river reaches immediately upstream of the fold, across the fold axis (or its projection), and immediately downstream of the fold, to elucidate any changes in braiding index associated with the fold.

For the example of the River Dez incising across the Sardarabad Anticline, braiding index, BI = 1.0 (immediately upstream of fold); 1.2 (across fold axis); 1.2 (immediately downstream of fold), as shown in Figure 4. In this figure, thin yellow lines indicate the sub-division of each river reach into river cross-sections orthogonal to the valley axis which are 1 km apart.

General River Course Direction
Symbol: RCD
Units: degrees (quoted to the nearest 5°, as a compass bearing in degrees relative to true north, and as a bearing in degrees relative to the fold axis)
Measurement location: River reaches immediately upstream of fold, across the fold axis (or its projection), and immediately downstream of fold

The general river course direction is the general overall direction towards which the river flows for the length of a river reach [13]. This can be gauged "by eye" by carefully viewing the remote sensing images and drawing a straight line of that orientation on the remote sensing image (the orientation of which will be similar to the river reach axes in the vicinity), and then measuring the bearing of that line to the nearest 5° to avoid false precision.
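Before the worked example, a minimal sketch of converting a compass bearing into a bearing relative to the fold axis, taken as the acute angle between the two. The axis bearing of 120° is a hypothetical stand-in for the roughly ESE-WNW Sardarabad axis.

```python
# Minimal sketch: express a general river course direction relative to the
# fold axis as the acute angle between the reach bearing and the axis
# bearing, both in degrees from true north. Values are hypothetical.
def bearing_to_fold_axis(course_deg: float, axis_deg: float) -> float:
    diff = abs(course_deg - axis_deg) % 180.0
    return min(diff, 180.0 - diff)

print(bearing_to_fold_axis(130.0, 120.0))  # 10 degrees, cf. the Dez upstream reach
print(bearing_to_fold_axis(230.0, 120.0))  # 70 degrees, cf. the Dez reach across the fold
```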
For the example of the River Dez incising across the Sardarabad Anticline, general river course direction, RCD = 130° (10° to fold axis) (immediately upstream of fold); 230° (70° to fold axis) (across fold axis); 135° (15° to fold axis) (immediately downstream of fold), as shown in Figure 5. In this figure, the general river course direction is indicated by white lines with black arrowheads.

Distance From Fold Core to Location of River Crossing
Symbol: C-RC
Units: km (quoted to one decimal place)
Measurement location: Along the fold axis, from the centre of the fold core to the location where the river channel crosses the fold axis (or its projection)

C-RC is defined as the horizontal distance from the centre of the fold core, measured along the fold axis (and along the projection of the fold axis, where appropriate), to the location where the river channel thalweg crosses the fold axis or its projection [13]. This is most easily measured on fine scale geological maps (typically 1:100,000 or 1:50,000 scale geological maps, depending on availability) on which the surface lithology, structural geology (including the surface extent and anticlinal axis of each fold), and river channels are accurately shown.

The river crossing location is determined simply from where the fold axis (or its projection) intersects with the thalweg of the main river channel, as indicated on the fine scale geological map or on the remote sensing image. Where the main river channel has more than one intersection with the fold axis, as may be the case with a sinuous river, the intersection that is nearest to the fold core will be considered the river crossing location. The location of the centre of the fold "core" (the centre of the main part of the fold which emerged first on the ground surface) is considerably more difficult to determine, since the detailed developmental history of a fold is usually not known. For ease of measurement, the centre of the fold core should be located on the fold axis. For sub-surface folds with little or no surface topographic expression, known principally from oil and gas field locations and seismic surveys, the centre of the fold core should be interpreted as being midway along the approximate location of the fold axis on the ground surface (with particular consideration of the dip of sub-surface structures and stratigraphy). This interpretation can be modified in cases where the sub-surface structural geology is well known. For young, emerging folds the centre of the fold core can be interpreted with more confidence and will usually be coincident with the centre of the surface topographic expression of the fold. For older, emerged folds the location of the centre of the fold core is much less certain. It can generally be interpreted to be in the vicinity of the structurally highest part of the present-day fold, which depending on the specific fold could be near its highest topographic expression, midway along the fold axis, or near to where it merges with an older, more developed fold [13,82].
For the example of the River Dez incising across the Sardarabad Anticline, distance from fold core to location of river crossing, C-RC = 1.3 km, as shown in Figure 6. For the example of the River Karun (Shuteyt) diverting around the Sardarabad Anticline, distance from fold core to location of river crossing, C-RC = 32.2 km, also as shown in Figure 6. In this figure, the centre of the fold core is indicated by the black and yellow circle, the C-RC measurement along the fold axis to the River Dez crossing is indicated by the solid dark green line with two black arrowheads, and the C-RC measurement along the fold axis to the River Karun crossing is indicated by the dashed dark green line with two black arrowheads.
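Where the fold axis, fold core, and thalweg have been digitised, the C-RC measurement reduces to a distance along a line. A minimal sketch using shapely (assuming it is available), with hypothetical geometries in a projected CRS:

```python
# Minimal sketch of C-RC using shapely: distance along the fold axis from
# the fold core to where the channel thalweg crosses the axis (or its
# projection). Geometries are hypothetical; real ones would be digitised
# from fine scale geological maps in a projected CRS (metres).
from shapely.geometry import LineString, Point

fold_axis = LineString([(0, 0), (30000, 0)])           # axis, incl. its projection
fold_core = Point(15000, 0)                            # centre of fold core, on the axis
thalweg = LineString([(16000, 5000), (16400, -5000)])  # river crossing the axis

crossing = fold_axis.intersection(thalweg)             # Point where they cross
# Where a sinuous river crosses more than once, intersection() returns a
# MultiPoint; the scheme then uses the crossing nearest the fold core.
c_rc_m = abs(fold_axis.project(crossing) - fold_axis.project(fold_core))
print(round(c_rc_m / 1000.0, 1), "km")                 # 1.2 km along the axis
```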
3.1.8. Distance From Fold Core to River Basin Margin
Symbol: C-BM
Units: km (quoted to one decimal place, indicating positive or negative)
Measurement location: Along the fold axis, from the centre of the fold core to the nearest margin of the drainage basin of the river interacting with the fold

C-BM is defined as the horizontal distance from the centre of the fold core, measured along the fold axis (and along the projection of the fold axis, where appropriate), to the nearest margin of the drainage basin of the river interacting with the fold [13]. The centre of the fold core is determined from fine scale geological maps, as described in Section 3.1.7. The drainage basin margins are demarcated from remote sensing images or topographical maps, by determining which river channels, wadis, lakes, streams and creeks are associated with each major river and by drawing a line midway between the extents of these. The zero point for measurements is at the centre of the fold core, with positive values where the fold core is located within the drainage basin of the river interacting with the fold, and negative values where the fold core is located outside of the drainage basin of the river interacting with the fold.

For the example of the River Dez incising across the Sardarabad Anticline, distance from fold core to river basin margin, C-BM = +3.8 km, as shown in Figure 7. For the example of the River Karun (Shuteyt) diverting around the Sardarabad Anticline, distance from fold core to river basin margin, C-BM = −25.7 km, also as shown in Figure 7. In this figure, the centre of the fold core is indicated by the black and yellow circle, drainage basin margins are indicated by dashed blue lines, the C-BM measurement along the fold axis to the nearest River Dez basin margin is indicated by the solid dark purple line with one black arrowhead, and the C-BM measurement along the fold axis to the nearest River Karun basin margin is indicated by the dashed dark purple line with one black arrowhead.
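The sign convention for C-BM is easy to get wrong when tabulating many crossings; a trivial sketch, using the two example values from this section:

```python
# Minimal sketch of the C-BM sign convention: positive where the fold core
# lies inside the drainage basin of the river interacting with the fold,
# negative where it lies outside.
def signed_c_bm(distance_km: float, core_inside_basin: bool) -> float:
    return distance_km if core_inside_basin else -distance_km

print(signed_c_bm(3.8, True))    # +3.8 km, cf. the River Dez example
print(signed_c_bm(25.7, False))  # -25.7 km, cf. the River Karun example
```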
Width of Geological Structure at Location of River Crossing
Symbol: Wgs
Units: km (quoted to one decimal place)
Measurement location: Where river channel crosses the fold axis (or its projection), measured orthogonal to the fold axis (or its projection)

Wgs is defined as the maximum horizontal surface extent of the geological structure at the location where the river channel thalweg crosses the fold axis (or its projection), measured orthogonal to the fold axis (or its projection) [13]. For sub-surface folds with little or no surface topographic expression, known principally from oil and gas field locations and seismic surveys, this measurement is necessarily approximate. For river incision across a fold, the measurement is made orthogonal to the interpreted fold axis, between the margins of the mapped oil or gas field. For river diversion around a fold, the measurement is made orthogonal to the projection of the interpreted fold axis, between the projected margins of the nose of the mapped oil or gas field; a measurement which is highly subjective. For emerged folds with significant surface topographic expression, this measurement is much more certain. For river incision across a fold, the measurement is made orthogonal to the fold axis, between the surface extent of the fold limbs as determined from fine scale geological maps and fine scale topography. For river diversion around a fold, the measurement is made orthogonal to the projection of the fold axis, between the projected surface extent of the fold limbs of the nose of the fold; a measurement which is moderately subjective.

For the example of the River Dez incising across the Sardarabad Anticline, width of geological structure, Wgs = 4.3 km, and for the River Karun (Shuteyt) diverting around the Sardarabad Anticline, Wgs = 4.1 km, as shown on Figure 8. In this figure, the Wgs measurement for the River Dez crossing is indicated by the thick blue-grey line, and the Wgs measurement for the River Karun crossing is indicated by the thick red line.
Estimate of Erosion Resistance of Surface Sediments/Rocks and Deeper Sediments/Rocks in Fold
Symbols: ERs (surface); ERd (deeper)
No units (estimate quoted on a relative scale from 1 to 8)
Measurement location: Where river channel crosses the fold axis (or its projection)

This characteristic is defined as the resistance of sediments and rocks in a fold to river erosion, a parameter which can be difficult to quantify. It depends upon a variety of characteristics including structural geology, rock type, sediment type, strength of intact rock (especially rock compressive strength, rock tensile strength, and rock mass strength), resistance to weathering, jointing and fracturing (especially width, spacing, orientation, continuity, and infilling of joints), degree of movement of water through the rock mass, porosity, grain size, and type and degree of cementation; as well as characteristics of the river, such as discharge, nature and frequency of floods, river sediment supply, suspended sediment concentration, and river bed roughness. Many of these characteristics are difficult to measure, and their relative importance in determining the general erosion resistance of a fold is not fully known [22,31,83-88].
Hence, for each case an estimate is made that is quoted as an integer on a scale, accompanied by a short description of the lithology or sedimentology, where known. The estimate of the erosion resistance of sediments and rocks is according to this scale:

1. Very low (Unlithified floodplain sediments, predominantly sands)
2. Low/Moderate (Mainly unlithified floodplain sediments, predominantly sands and silts; some quite poorly consolidated bedrock, such as Agha Jari Formation bedrock (quite poorly consolidated sandstones), and other similar rocks, such as mudstones, evaporites and poorly consolidated limestones)
4. Moderate (Mainly quite poorly consolidated bedrock, such as Agha Jari Formation bedrock (quite poorly consolidated sandstones), and other similar rocks, such as mudstones, evaporites and poorly consolidated limestones; some unlithified floodplain sediments)
5. Moderate/High (Mainly well consolidated bedrock, such as Bakhtyari Formation bedrock (very well consolidated conglomerates), and other similar rocks, such as well consolidated limestones, marbles, sandstones and schists; some unlithified floodplain sediments and rocks of relatively low erosion resistance)
6. High (Mainly well consolidated bedrock, such as Bakhtyari Formation bedrock (very well consolidated conglomerates), and other similar rocks, such as well consolidated limestones, marbles, sandstones and schists)
7. Very high (Very erosion resistant bedrock: basalts, gabbros, metasandstones and other very erosion resistant igneous and metamorphic rocks)
8. Extremely high (Extremely erosion resistant bedrock: quartzite, cherts, granites, andesites, gneisses and other extremely erosion resistant igneous and metamorphic rocks)

The position on this scale can be determined by careful interpretation of fine scale geological maps and remote sensing images, plus fieldwork and work on the properties of rocks and sediments, where available. The surface erosion resistance of the fold, ERs, is that of the surface lithology and sedimentology of the fold; especially that in the general vicinity of a river channel at the upstream location where the river first encounters the limb of the fold. The deeper erosion resistance of the fold, ERd, is that of the deeper lithology and sedimentology of the fold; especially that exposed in the general vicinity of an incising river channel at the location of the fold axis. With emerging folds, ERd may be unknown in some cases where the sub-surface geology is only poorly known.

For the example of the River Dez incising across the Sardarabad Anticline, ERs = 4, ERd = 5, and for the River Karun (Shuteyt) diverting around the Sardarabad Anticline, ERs = 4, ERd = 5, as shown on Figure 8. For the location of the River Dez crossing, ERs = Moderate (surface of unlithified floodplain sediments, with outcrops of Bakhtyari Formation bedrock (well consolidated conglomerates) and Agha Jari Formation bedrock (quite poorly cemented sandstones) at the SW, W and E edges of the floodplain), and ERd = Moderate/High (Bakhtyari Formation bedrock (well consolidated conglomerates) overlying Agha Jari Formation bedrock (quite poorly consolidated sandstones)). For the location of the River Karun (Shuteyt) crossing, ERs = Moderate (surface of unlithified floodplain sediments, with outcrops of Bakhtyari Formation bedrock (well cemented conglomerates) at the SW and W edges of the floodplain), and ERd = Moderate/High (assuming Bakhtyari Formation bedrock overlying Agha Jari Formation bedrock).

Measurement of Geomorphological Characteristics Nos. 11 to 13
For determining these three characteristics for the rivers Karun and Dez, the precise hydrological and topographical surveys used were those of the Dez Ab Engineering Company from 1997-2000, supplemented by geomorphological fieldwork [13]. This data facilitated the measurement of channel water surface slopes. The superimposed database used included false-colour Landsat ETM+ satellite images (dated 2001), fine-scale geological maps, and CORONA satellite images (dated 1966 and 1968) which had been geo-referenced, orthorectified and enhanced in a unified database using ArcGIS® software [85,89]. This database facilitated the measurement of channel migrations with time, and facilitated easier measurement of characteristics associated with the fold core and fold axis. The data relating to vertical Earth surface movements were of various types, including radiocarbon dating of marine terrace sediments and Optically Stimulated Luminescence (OSL) dating of river terrace sediments, and enabled estimates of fold total uplift rate to be made [13,56].

Channel Water Surface Slope at Location of Fold Axis (or its Projection)
Symbol: s
Units: m m⁻¹ (quoted in standard form to the nearest 1 × 10⁻⁷)
Measurement location: Where river channel crosses the fold axis (or its projection), measured for the river reach crossing the fold axis (or its projection)

Channel water surface slope is determined by the equation s = Hc/Lc, where Hc is the change in channel water surface elevation (m) and Lc is the channel length (m) over which that change is measured [90]. Vertical accuracy should be at least of the order of decimetres or better, ideally of the order of centimetres, or it will not be possible to discriminate the fine changes in slope associated with Earth surface movements, especially in lowland areas with very gentle slopes.

It is very difficult to determine sufficiently accurate channel water surface slopes from satellite remote sensing and fine scale geological maps, mainly due to their relatively poor vertical accuracy. Whilst Digital Elevation Models constructed from Shuttle Radar Topography Mission 30 m (SRTM-30m) data and Advanced Land Observing Satellite World 3D-30 m (AW3D30) data can be useful, they only have a vertical accuracy (Root Mean Square Error) of about 5.7 m and 8.3 m, respectively [91]. Greater accuracy may be obtainable from fine-scale topographical maps (especially maps of 1:25,000 scale and finer, with contour intervals of 5 m or less and frequent spot heights) [76], to determine river bank (and thus channel water surface) elevations. However, in general, more accurate additional data sources, such as precise hydrological or topographical surveys of the river, or hydrological and geomorphological fieldwork, will be needed to achieve sufficient vertical accuracy. Even with precise surveys, there will be a variety of factors influencing water surface elevation measurements, such as local vegetation and obstructions, human modifications, levées, pools and riffles, eddies, and daily variations in discharge. These factors frequently induce appreciable errors, particularly in lowland areas with very gentle slopes. Thus, to reduce the influence of these errors, the channel water surface slope should be measured for the entirety of the river reach crossing the fold axis, or its projection. With all channel pattern types, the water surface of the main channel thalweg is used for the measurement.
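A minimal sketch of the slope calculation from surveyed water surface elevations; the elevations and reach length below are hypothetical.

```python
# Minimal sketch of channel water surface slope s = Hc / Lc for the reach
# crossing the fold axis: the fall in water surface elevation divided by
# the channel (thalweg) length over which it occurs. Survey values are
# hypothetical; centimetre- to decimetre-level vertical accuracy is needed
# in lowland areas.
def water_surface_slope(upstream_elev_m: float, downstream_elev_m: float,
                        channel_length_m: float) -> float:
    return (upstream_elev_m - downstream_elev_m) / channel_length_m

# e.g. a 3.0 m fall over a 10 km reach:
print(f"{water_surface_slope(18.0, 15.0, 10000.0):.1e}")  # 3.0e-04 m/m
```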
For the example of the River Dez incising across the Sardarabad Anticline, channel water surface slope s = 2.999 × 10⁻⁴ m m⁻¹, and for the River Karun (Shuteyt) diverting around the Sardarabad Anticline, s = 3.5 × 10⁻⁶ m m⁻¹. Precise hydrological and topographical surveys undertaken by the Dez Ab Engineering Company from 1997-2000 were used for these measurements.

Average Channel Migration Rate

Symbol: Rm
Units: m yr⁻¹ (quoted to three decimal places)
Measurement location: River reaches immediately upstream of the fold, across the fold axis (or its projection), and immediately downstream of the fold

Rm, the average channel migration rate over a specified period, can be defined by the equation

Rm = (A / Lc) / yr

where A is the total area of "migration polygons" drawn as shape files in a river reach between corresponding points of a river bank on remote sensing images of different dates (m²); Lc is the channel length of the reach (m); and yr is the number of years between the remote sensing images [92].

To determine the average channel migration rate, it is necessary to have access to high-resolution remote sensing images separated by a time interval of c. 20-40 years and Geographic Information System (GIS) software (such as ArcGIS®) to orthorectify and superimpose the two sets of remote sensing images. A time interval of c. 20-40 years should be long enough for significant channel migration to have taken place, though not so long that a channel may have migrated back to its original location. Where possible, one set of images should be high resolution aerial photographs or satellite images from the 1960s or earlier, so that the time interval includes periods prior to major dam building and other major human impacts.

One of the river banks (the left bank when facing downstream) is manually digitised for each image set, the "migration polygons" created by their intersections are highlighted and saved as shape files, and the total area of these "migration polygons" for the river reach is calculated. For single-thread meandering and straight channel patterns, the left bank of the channel is used for the measurement. For multi-thread braided channel patterns, the left bank of the outermost braid channel is used for the measurement. For anastomosing or anabranching channel patterns, the left bank of the main active river channels is used, with any anabranches clearly separated by floodplain areas being considered as discrete channel-belts not included in the measurement. Using this value of A for the total area of the "migration polygons", the channel length of the reach, Lc, and the mean time interval in years, the average channel migration rate, Rm, can be calculated.
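Once the left bank has been digitised from each image set, the "migration polygon" measurement can be automated along the following lines. This is a minimal sketch using the shapely package; the bankline coordinates are invented, and in practice the digitised banklines would be read from GIS shape files in a projected, metre-based coordinate system.

```python
# Sketch: average channel migration rate, Rm = (A / Lc) / yr, from two
# digitised left-bank lines (e.g. CORONA 1966/68 vs Landsat 2001).
# Coordinates must be in a projected CRS so lengths/areas are in metres.
from shapely.geometry import LineString
from shapely.ops import polygonize, unary_union

bank_old = LineString([(0, 0), (500, 120), (1000, 40), (1500, 150)])
bank_new = LineString([(0, 30), (500, 60), (1000, 110), (1500, 140)])

# Noding the two banklines against each other and then polygonizing the
# result yields the closed "migration polygons" between intersections.
merged = unary_union([bank_old, bank_new])
polygons = list(polygonize(merged))
A = sum(p.area for p in polygons)       # total migrated area (m^2)

Lc = 1500.0                             # channel length of reach (m)
years = 34.2                            # mean interval between image sets
Rm = (A / Lc) / years
print(f"A = {A:.0f} m^2, Rm = {Rm:.3f} m/yr")
```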
For the example of the River Dez incising across the Sardarabad Anticline, average channel migration rate Rm = 11.129 m yr⁻¹ (immediately upstream of the fold); 1.578 m yr⁻¹ (across the fold axis); 4.502 m yr⁻¹ (immediately downstream of the fold), as shown in Figure 9.

Estimate of Fold Total Uplift Rate

Symbol: TUR
No units (estimate quoted on a relative scale from 0 to 8, roughly equivalent to ranges of rates of uplift in mm yr⁻¹)
Measurement location: At, or near, the fold crest

The fold total uplift rate is defined as the rate at which a fold is rising above the surrounding region; that is, the single fold uplift rate less the sum of the regional subsidence rate and the sediment aggradation rate [29]. Generally, it is estimated or measured at, or near to, the fold crest because, in most cases, that is the part of the fold undergoing the greatest uplift relative to the surrounding region [93].

Fold total uplift rate cannot be determined solely from remote sensing images, remote sensing data, topographical maps, and geological maps. Other data sources are needed, which may be precision topographic survey (recurrent surveys over several decades to determine vertical surface movements), e.g., [90,94], or precision GPS survey (recurrent measurements from GPS stations over several years to determine horizontal and vertical surface movements), e.g., [60,95,96]. Alternatively, the data sources may be the measurement and dating of uplifted geomorphic markers, especially marine terraces and river terraces, e.g., [51,56,59,97,98], the measurement and dating of archaeological structures, especially disused ancient canals, e.g., [13,99], and the measurement and dating of structural geology, especially the development and erosion of fold growth strata, e.g., [32,100]. Where such data are available for a fold, either by direct measurement or by careful interpretation, the estimated fold total uplift rate can be quoted as an integer on a relative scale that runs from:

0. Net subsidence (less than 0 mm yr⁻¹; the fold uplift rate is less than the sum of the regional subsidence rate and the sediment aggradation rate)

to:

8. Extremely high (more than 12.0 mm yr⁻¹)

For the example of the River Dez incising across the Sardarabad Anticline, TUR = 3, and for the River Karun (Shuteyt) diverting around the Sardarabad Anticline, TUR = 3. The TUR for the Sardarabad Anticline was estimated to be Low/Moderate (about 0.2-0.5 mm yr⁻¹) because OSL dating of river terrace sediments indicated uplift of the back-limb of the Sardarabad Anticline at a rate of 0.23-0.29 mm yr⁻¹ [13,20].
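The arithmetic behind a terrace-based estimate is straightforward, as the sketch below illustrates; the uplift and age values are hypothetical, chosen only to give a rate of the order of the OSL results cited above for the Sardarabad back-limb.

```python
# Sketch: fold total uplift rate from a dated geomorphic marker.
# An uplifted river terrace of known age gives a time-averaged rate:
# rate (mm/yr) = uplift (m) * 1000 / age (yr). Values are illustrative.
terrace_uplift_m = 21.0        # height of terrace above the modern river
terrace_age_yr = 80_000        # OSL age of the terrace sediments

rate_mm_per_yr = terrace_uplift_m * 1000.0 / terrace_age_yr
print(f"total uplift rate = {rate_mm_per_yr:.2f} mm/yr")
# 21 m / 80 kyr = 0.26 mm/yr, i.e. Low/Moderate (TUR = 3) on the scale.
```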
Results for the Rivers Karun and Dez

The results from applying the scheme to the River Karun and River Dez interacting with folds and emerging folds in lowland south-west Iran are given as a short description of the two rivers in Section 4.1, followed by tables of the results for the 13 geomorphological characteristics in Section 4.2. It is recommended that a similar format is used when applying the scheme to other major rivers in different parts of the world. The findings of Analysis of Variance (ANOVA) between river incision across a fold and river diversion around a fold, applied to the 13 geomorphological characteristics for the rivers Karun and Dez, are given in Section 4.3.

Short Description of River

River Karun (Iran)
Length: 890 km
Drainage basin area: 45,230 km²
Mean annual water discharge: 575 m³ s⁻¹ (at Ahvaz in the Khuzestan Plains)
Seasonality of discharge: Maximum in April (c. 850 m³ s⁻¹, or more than 2,000 m³ s⁻¹ prior to major dam construction); minimum in October (c. 280 m³ s⁻¹) (at Ahvaz in the Khuzestan Plains).

Short description of river course: Source in the central/eastern Zagros on the slopes of Zardeh Kuh, elevation c. 4200 m; very winding, roughly west course through the Zagros Mountains, often in accordance with the general NW-SE structural grain and folding; generally west course across the Zagros foothills; generally south course from Gotvand onwards across the Upper Khuzestan Plains, with bifurcation at Shushtar into the River Shuteyt (larger branch) to the west and the River Gargar to the east, which re-unite at Band-e Qir in the vicinity of the confluence with the River Dez; generally south-west course from Ahvaz across the Lower Khuzestan Plains; joins the Tigris-Euphrates-Karun delta at Khorramshahr and fans out in a south-east direction into the Persian Gulf [13,35,54,102].

Geomorphological Characteristics

The geomorphology of the study area is complex and was affected by tectonic processes over time. The results for the 13 geomorphological characteristics when applying the scheme to the River Karun and River Dez interacting with folds in lowland south-west Iran are given in Tables 2-6, sub-divided as follows: Table 2 for the Turkalaki, Shushtar and Qal'eh Surkheh Anticlines and the River Karun; Table 3 for the Sardarabad, Qal'eh Surkheh and Kupal Anticlines and the River Karun and River Gargar; Table 4 for the Dezful Uplift and the Sardarabad and Shahur Anticlines and the River Dez; Table 5 for the Ramin Oilfield, Ahvaz and Ab-e Teymur Oilfield Anticlines and the River Karun; and Table 6 for the Dorquain Oilfield Anticline and the River Karun.

Statistical Analysis of Geomorphological Characteristics

In tabulated form, the results for the geomorphological characteristics can readily be subjected to statistical analyses for investigating fold-river interactions. For the River Karun and River Dez results given in Tables 2-6, Analysis of Variance (ANOVA) was applied between the categories of river incision across a fold and river diversion around a fold for each of the 13 geomorphological characteristics [13]. The ANOVA findings are summarised in Table 7, in which F = obtained F value (mean sum of squares due to between-group differences/mean sum of squares due to within-group differences), F crit = critical F value needed to reject the null hypothesis, and p-value = level of significance of the F value. In Table 7, bold text and yellow shading are used to highlight statistical significance, that is, where p-value ≤ 0.05 (equivalent to a 5% significance level, or a 95% confidence level or better) [74,121,122].
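Such an ANOVA can be reproduced per characteristic with standard statistical software; the sketch below uses scipy, with placeholder sample values standing in for the tabulated measurements.

```python
# Sketch: one-way ANOVA between the incision and diversion categories
# for one geomorphological characteristic (here channel-belt width, km).
# The sample values are placeholders, not the Table 2-6 measurements.
from scipy.stats import f_oneway

cbw_incision = [1.1, 0.8, 2.3, 1.6, 0.9]     # folds crossed by incision
cbw_diversion = [3.8, 6.2, 2.9, 8.4, 5.1]    # folds with diversion

F, p = f_oneway(cbw_incision, cbw_diversion)
significant = p <= 0.05                       # 95% confidence level
print(f"F = {F:.2f}, p = {p:.4f}, significant: {significant}")
```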
Discussion

The new scheme for investigating fold-river interactions was successfully applied to the major rivers Karun and Dez in lowland south-west Iran. It was found to be relatively easy to use in practice, with the notable exception of determining geomorphological characteristics Nos. 11, 12 and 13. The acquisition and interpretation of precise hydrological and topographical surveys of the rivers, the creation of a superimposed database of satellite images and fine scale geological maps, the subdivision of the Karun and Dez into river reaches, the creation of river "migration polygons", and the acquisition of data relating to fold uplift rates, were time-consuming. Hence, especially since data for these last three geomorphological characteristics may not be available for all major rivers, they can be considered as supplementary characteristics.

Significant Geomorphological Characteristics in Fold-River Interactions for the Rivers Karun and Dez in Lowland South-West Iran

As is frequently the case in geomorphology, the measurements of the 13 geomorphological characteristics will vary according to where and how the observer takes the measurements. Thus it is necessary to follow the directions given with the descriptions of each of the 13 characteristics, so that the measurements are standardised. This is especially the case with the subdivision into river reaches and the determination of the location of the fold core, where a greater degree of subjectivity is involved. Hence, it may be useful to include error estimates with the measurements of the geomorphological characteristics.

Considering this, and the natural variability and complexity of major rivers, it might be expected that none of the 13 geomorphological characteristics would show statistically significant differences between the categories of river incision across a fold and river diversion around a fold. Nevertheless, the Analysis of Variance (ANOVA) findings in Table 7 for the rivers Karun and Dez interacting with folds in lowland south-west Iran show three geomorphological characteristics with statistically significant differences at the 95% confidence level (p-value ≤ 0.05): channel-belt width at the location of the fold axis, floodplain width at the location of the fold axis, and distance from the fold core to the location of the river crossing (geomorphological characteristics Nos. 2, 3 and 7).
Both channel-belt width and floodplain width at the location of the fold axis are significantly narrower for river incision across a fold compared with river diversion around a fold. In cases of river diversion, channel-belt width and floodplain width at the projection of the fold axis may have a wide range of values. By contrast, in cases of river incision, channel-belt width is always (100% of cases) less than 2.7 km, and floodplain width is generally (80% of cases) less than 5.7 km, at the location of the fold axis. A narrow channel-belt and a narrow floodplain at the location of the fold axis are indicative of a reduction in the lateral migration of the river at the fold axis to increase vertical incision of the river to keep pace with fold uplift. The general scenario is one of broader channel-belts and floodplains immediately upstream and downstream of the fold due to increased aggradation to maintain channel slopes across the fold, and narrow channel-belts and floodplains across the fold due to increased erosion and incision to keep pace with fold uplift [32,46,47]. A narrow channel-belt is present in all cases of river incision, probably because a channel-belt is a relatively small feature that typically develops over time intervals of several decades or more [123]. Indeed, an average channel-belt width of 2.7 km or less may be a threshold for the rivers Karun and Dez in the Khuzestan Plains that needs to be maintained if a major river is to incise across a fold in the long term [13]. By contrast, a narrow floodplain is not present in all cases of river incision, probably because a floodplain is a significantly larger feature that typically develops over time intervals of centuries [63]. Thus, in extensive, relatively flat areas, such as the Lower Khuzestan Plains, the streams and wetlands of the floodplain of a major river may extend far beyond the surface expression of a small fold and thus be unaffected by it.

It is not unexpected that the distance from the fold core to the location of the river crossing should discriminate between the two categories of fold-river interactions, since river incision occurs between the fold core and the fold nose, and river diversion occurs beyond the fold nose. Indeed, in all but one case of river incision the distance is less than 16 km (the exception of 43.6 km for the River Gargar incising across the Kupal Anticline is associated with pronounced human influences on the development of the River Gargar), whereas in all cases of river diversion the distance is greater than 22 km. Interestingly, there is a strong tendency for a river to incise across the fold at locations near to that of the fold core (8.5 km or less in 80% of cases). Since the folds in the Khuzestan Plains are relatively young folds, this suggests that river incision across a fold at, or near to, the fold core is initiated at a very early stage in fold development, probably when the fold is initially emerging on the ground surface [13].
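If these empirical cut-offs are taken at face value, they can be expressed as a simple rule-of-thumb check, as sketched below. The helper and its inputs are illustrative only: the thresholds are specific to the Karun and Dez dataset and are not proposed here as general laws.

```python
# Sketch: the empirical Karun/Dez thresholds expressed as a simple
# rule-of-thumb check for a fold crossing. These cut-offs (cbw < 2.7 km
# in all incision cases, C-RC < 16 km in all but one) are specific to
# this dataset.
def likely_incision(cbw_km: float, c_rc_km: float) -> bool:
    """Flag a crossing whose geometry resembles the incision cases."""
    return cbw_km < 2.7 and c_rc_km < 16.0

print(likely_incision(cbw_km=1.2, c_rc_km=4.0))   # True: Dez-like case
print(likely_incision(cbw_km=5.5, c_rc_km=28.0))  # False: diversion-like
```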
These findings can help to explain the seemingly paradoxical tendency of rivers to transect both young and old anticlines at or near to locations of their greatest structural and topographic relief. It can be considered that a fold initially emerges on the ground surface as a fold core, which in plan form may be an "oval", a "sausage", or another similar form, depending on the type of fold [33,82,122-125]. Where a major river initially encounters the fold as an emerging fold core, the river may flow across the uplifting fold for sufficient time (at least several decades [123]) for the development of a narrow channel-belt, thus producing an incising river course across the fold in the vicinity of the fold core. As the fold grows vertically and laterally, depending on the size and nature of the river, the incising river course may be maintained and become "fixed" to produce a water gap in the fold in the vicinity of the subsequent structural culmination, or the river may subsequently be defeated to produce a wind gap and a diverted river course [124]. By contrast, where a major river initially encounters a fold as a larger, emerged fold, the river may not flow across the uplifting fold for sufficient time for a narrow channel-belt to develop, due to repeated channel migration in response to lateral fold growth, thus producing a river course diverting around the fold nose [13,20].

Also, whilst the other geomorphological characteristics are not discriminatory at the 95% confidence level, they do show some trends which support this model. River incision across a fold frequently has a general river course direction across the fold orthogonal to the fold axis, and river reaches across the fold axis have low channel sinuosities (generally < 1.4), steep channel water surface slopes (generally > 1 × 10⁻⁴ m m⁻¹), and low average channel migration rates (generally < 2 m yr⁻¹). These trends might be related to the river initially encountering the fold as an emerging fold core, with reductions in lateral migration and increases in specific stream power as the river incises vertically in response to fold uplift. By contrast, river diversion around a fold frequently has a general river course direction upstream of the fold parallel to the fold axis and a course change of about 20°-70° to flow around the fold, and river reaches across the projection of the fold axis that have quite widely ranging channel sinuosities, gentle channel water surface slopes (generally < 1 × 10⁻⁴ m m⁻¹), and quite widely ranging average channel migration rates. These trends might be related to the river initially encountering the fold later in its development as a relatively large "obstacle", with a diverted river course and frequent lateral migration, in which there is only limited time for any increases in specific stream power to develop [13,20].

Significant Geomorphological Characteristics in Fold-River Interactions for Other Major Rivers

These observed changes apply for the major rivers Karun and Dez in lowland south-west Iran. To investigate whether similar or different changes apply with other major rivers and other folds, the scheme should now be applied to a variety of major rivers across the globe. For other fold-river interactions, it is highly likely that different changes will be found, and that other characteristics of river and fold geomorphology may discriminate between river incision across a fold and river diversion around a fold.
For instance, for the rivers Karun and Dez interacting with active folds in lowland south-west Iran, it was found that channel-belt width was a key discriminative characteristic, whereas channel width and channel water surface slope were not. By contrast, an investigation of two side-by-side upland rivers crossing rapidly uplifting folds (rates of uplift exceeding 10 mm yr⁻¹) in the Himalayan foreland of central Nepal found that both rivers exhibited a significant reduction in channel width across the zone of rock uplift [51,53,126]. The smaller Bakeya River became steeper across the zone of rapid uplift, whereas the larger Bagmati River showed no significant profile steepening across the same zone [51,52]. The research indicated that channel width acted as a key characteristic of river responses, and that if structural uplift should become sufficiently great, the channel width would reduce to less than a certain threshold width value to maintain an incising river course across a zone of uplift. Channel narrowing to enhance incision rates appeared to take precedence over other changes, such as channel steepening and reduced river profile concavity [27,52,53]; a scenario which has also been found with upland rivers elsewhere in the world. In central Taiwan, in response to increasing rates of differential uplift, upland rivers were found to have progressively narrower channel widths until a channel width:depth ratio of about 10 was reached, after which they also steepened [58]. In southern New Zealand, surveys of small upland channels indicated that 1-2 m of uplift resulted in a five- to ten-fold narrowing of river channels [25]. Such findings enabled Amos and Burbank (2007) [25] to produce a conceptual model for a given river discharge, in which decreased channel width produced sufficient increased erosion to keep pace with uplift for small folds, whereas decreased channel width to a minimum value followed by subsequent channel steepening was needed to keep pace with uplift for larger folds [27]. Hence, it is highly likely that there are significant differences between fold-river interactions in upland and lowland river catchments, with the geomorphological characteristics of channel width and channel water surface slope probably being more significant with upland rivers. This should be investigated by extending the database for the scheme to a variety of upland rivers.
Also, it has been hypothesised that the seemingly paradoxical tendency for the rivers Karun and Dez in lowland south-west Iran to transect anticlines near to locations of their greatest structural and topographic relief is primarily due to the nature and timing of the initial fold-river interactions [13,20]. However, there are other mechanisms that may account for this, which apply after the initial stages of fold development. It may arise by the drainage network being superimposed from above via a structurally conformable, more easily eroded horizon [35,36]. It may arise in areas where the crust is deforming plastically in response to regional compression, as a consequence of focussed rock uplift in response to significant differences between net erosion along major rivers and the surrounding regions [127], or in response to significant unloading of the crust by river erosion that amplifies the background deformation to produce a doubly plunging anticline with a river valley at its centre [128,129]. Alternatively, with continued crustal shortening and thickening, it may arise with amplification of a regional slope that produces higher erosion rates in transverse catchments than in longitudinal catchments, and which creates a new organisation of the drainage system following the regional slope [130]. It is likely that there will be notable differences in the relative significance of the geomorphological characteristics with each of these mechanisms, which should be investigated by extending the database for the scheme to a wide variety of rivers across the globe.

Conclusions

This study has introduced and demonstrated a new scheme using remote sensing for investigating fold-river interactions for major rivers. The scheme involves a short description of the river, climate, and structural geology, and 13 geomorphological characteristics.

The scheme was successfully applied to the major rivers Karun and Dez in lowland south-west Iran, using widely available satellite imagery and fine scale geological maps. It was relatively easy to use in practice, though geomorphological characteristics Nos. 11, 12 and 13 involved additional data sources and additional processing, which was more difficult and time-consuming. Since the data needed for these last three geomorphological characteristics may not be available for all major rivers, they can be considered as supplementary characteristics.

For the major rivers Karun and Dez (mean annual water discharges 575 m³ s⁻¹ and 230 m³ s⁻¹, respectively) interacting with folds in lowland south-west Iran, it was found that geomorphological characteristics Nos. 2, 3 and 7 (channel-belt width, floodplain width, and distance from fold core to location of river crossing) had statistically significant differences (p-value ≤ 0.05) between the categories of river incision across a fold and river diversion around a fold. These findings suggest that the nature and timing of initial fold-river interactions is important in determining whether a river incises across a fold or diverts around it, and that the formation and maintenance of a narrow channel-belt and a narrow floodplain are necessary for a major river to incise across a fold, with this incision frequently being in the vicinity of the fold core and subsequent structural culmination.
The scenario in the foreland basin tectonic setting of lowland south-west Iran involves major rivers (of which the Karun and Dez are the largest) interacting with relatively young, emerging, thrust-related folds, with gradual Earth surface movements predominating due to lubricated décollements on evaporite layers. In this scenario, the new scheme was found to be useful and identified channel-belt width, floodplain width, and distance from fold core to river crossing as important characteristics in the interactions between the major rivers and the folds. The scheme should now be applied to a wide variety of major rivers across the globe, to determine its usefulness in other scenarios and to improve our knowledge of fold-river interactions. By comparing the same parameters for different major rivers, a better understanding of fold-river interactions should be achieved.

Figure 1. The River Karun, River Dez, and other main rivers of Khuzestan province and its environs (modified from Heyvaert et al., 2013) [63]. Centred on 31°33′N 49°02′E. HM = Huwayzah marshes; SM = Shadegan marshes; international border; border of Khuzestan province. The Sardarabad Anticline is a 58 km long fold that is oriented roughly ESE-WNW and located to the north-west of the settlement of Band-e Qir.

Figure 3. The measurement of w, cbw and fpw (false-colour Landsat image (2001) of the River Karun (Shuteyt branch) diverting around the Sardarabad Anticline, centred on c. 31°52′N 48°53′E). The axis of the anticline is shown as a red line with cross-bar, and straight-line river reaches are shown as thin green lines, with roughly orthogonal thin green lines demarcating successive reaches.

Figure 4. The measurement of Sc and BI (false-colour Landsat image (2001) of the River Dez incising across the Sardarabad Anticline, centred on c. 31°57′N 48°36′E). The axis of the anticline and the river reaches are shown as in Figure 3.

Figure 5. The measurement of RCD (false-colour Landsat image (2001) of the River Dez incising across the Sardarabad Anticline, centred on c. 31°57′N 48°37′E). The axis of the anticline and the river reaches are shown as in Figure 3.

Figure 6. The measurement of C-RC (1:100,000 fine scale geological map (IOOC, 1969 [67]) of the rivers Dez and Karun (Shuteyt branch) interacting with the Sardarabad Anticline, centred on c. 31°54′N 48°42′E). The axis of the anticline and the river reaches are shown as in Figure 3. The centre of the fold core is indicated by the black and yellow circle, the C-RC measurement along the fold axis to the River Dez crossing is indicated by the solid dark green line with two black arrowheads, and the C-RC measurement along the fold axis to the River Karun crossing is indicated by the dashed dark green line with two black arrowheads.

In Figures 6 to 8: White = Quaternary Alluvium and Recent Deposits (c. 1 Ma-present; generally unconsolidated alluvial sands, muds, gravels, and marls). Yellow (Bk) = Bakhtyari Formation (Middle Pliocene to Pleistocene, c. 3 Ma-1 Ma; well-consolidated conglomerates, sandstones, and mudstones). Dark orange (Aj) = Agha Jari Formation (Middle Miocene to Middle Pliocene, c. 10 Ma-3 Ma; sandstones, marls, and mudstones). Light orange (Lbm) = Lahbari Member of Agha Jari Formation (Early to Middle Pliocene, c. 5.5 Ma-3 Ma; mudstones, marls, and sandstones) [69].

Figure 7. The measurement of C-BM (1:100,000 fine scale geological map (IOOC, 1969 [67]) of the rivers Dez and Karun (Shuteyt branch) interacting with the Sardarabad Anticline, centred on c. 31°53′N 48°43′E). The axis of the anticline and the river reaches are shown as in Figure 3.

Figure 8. The measurement of Wgs, ERs and ERd (1:100,000 fine scale geological map (IOOC, 1969 [67]) of the rivers Dez and Karun (Shuteyt branch) interacting with the Sardarabad Anticline, centred on c. 31°53′N 48°43′E). The axis of the anticline and the river reaches are shown as in Figure 3.

Figure 9. The measurement of Rm (false-colour Landsat image (2001) of the River Dez incising across the Sardarabad Anticline, centred on c. 31°57′N 48°36′E). The axis of the anticline and the river reaches are shown as in Figure 3. False-colour Landsat ETM+ images (28 July 2001 and 4 August 2001) make up the background; thin red lines indicate the location of the river channel banks on CORONA satellite images (23 September 1966 and 5 February 1968); yellow "migration polygons" indicate left bank channel migration inwards (or to the right) over a mean time interval of 34.2 years; green "migration polygons" indicate left bank channel migration outwards (or to the left) over 34.2 years.

Table 2. Results for 13 geomorphological characteristics in fold-river interactions for the Turkalaki, Shushtar and Qal'eh Surkheh Anticlines.

Table 3. Results for 13 geomorphological characteristics in fold-river interactions for the Sardarabad, Qal'eh Surkheh and Kupal Anticlines.

Table 4. Results for 13 geomorphological characteristics in fold-river interactions for the Dezful Uplift and the Sardarabad and Shahur Anticlines.

Table 5. Results for 13 geomorphological characteristics in fold-river interactions for the Ramin Oilfield Anticline, Ahvaz Anticline and Ab-e Teymur Oilfield Anticline.

Table 6. Results for 13 geomorphological characteristics in fold-river interactions for the Dorquain Oilfield Anticline.

Table 7. Analysis of Variance (ANOVA) between river incision across a fold and river diversion around a fold, applied to 13 geomorphological characteristics in fold-river interactions for the rivers Karun and Dez in lowland south-west Iran.
2019-07-31T02:01:31.315Z
2019-07-12T00:00:00.000
{ "year": 2019, "sha1": "6b67d05bbaa1e7e33698987874863e40a4a4528a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/11/17/2037/pdf?version=1567077130", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2aaf62f6b382e51aee62160629b699ac530ac661", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "extfieldsofstudy": [] }
122213183
pes2o/s2orc
v3-fos-license
estimation using a three-layer model of human perception: Most previous studies using the dimensional approach mainly focused on the direct relationship between acoustic features and emotion dimensions (valence, activation, and dominance). However, the acoustic features that correlate with the valence dimension are very few and very weak. As a result, the valence dimension has been particularly difficult to predict. The purpose of this research is to construct a speech emotion recognition system that has the ability to precisely estimate values of emotion dimensions, especially valence. This paper proposes a three-layer model to improve the estimation of values of emotion dimensions from acoustic features. The proposed model consists of three layers: emotion dimensions in the top layer, semantic primitives in the middle layer, and acoustic features in the bottom layer. First, a top-down acoustic feature selection method based on this model was conducted to select the most relevant acoustic features for each emotion dimension. Then, a bottom-up method was used to estimate values of emotion dimensions from acoustic features, by first using a fuzzy inference system (FIS) to estimate the degree of each semantic primitive from acoustic features, and then using another FIS to estimate values of emotion dimensions from the estimated degrees of semantic primitives. The experimental results reveal that the constructed emotion recognition system based on the proposed three-layer model outperforms the conventional system.

Introduction

Most previous techniques for automatic speech emotion recognition focus only on the classification of emotional states as discrete categories such as happy, sad, angry, fearful, surprised, and disgusted [1]. However, a single label or any small number of discrete categories may not accurately reflect the complexity of the emotional states conveyed in everyday interaction. In real life, an emotional state has different degrees of intensity and may change over time depending on the situation, from a low to a high degree. Therefore, an automatic speech emotion recognition system should be able to detect the degree or the level of the emotional state from the voice [2]. Hence, a number of researchers advocate the use of dimensional descriptions of human emotion, where emotional states are estimated as a point in a multi-dimensional space [3,4]. In this study, a three-dimensional continuous model is adopted in order to represent the emotional states using the emotion dimensions, i.e. valence, activation, and dominance. These dimensions are a suitable representation, because they are capable of representing low-intensity as well as high-intensity states [2]. However, although the conventional dimensional model for estimating emotions from speech signals allows the representation of the degree of emotional state, it has the following drawbacks: (i) we do not know what acoustic features are related to each emotion dimension, (ii) the acoustic features that correlate with the valence dimension are less numerous, less strong, and more inconsistent [4], and (iii) the values of emotion dimensions are difficult to estimate precisely only on the basis of acoustic information [5]. Due to these limitations, it has been difficult to directly predict the values of the valence dimension using the acoustic features.
The goal of this paper is to improve the conventional dimensional method in order to precisely predict values of the valence dimension, as well as to improve the prediction of those of activation and dominance. This will be achieved by constructing a speech emotion recognition system which has the ability to accurately estimate emotion dimensions based on the three-layer model of human perception. The aim of constructing this system is to prove the effectiveness of the proposed three-layer model. The following section introduces the proposed emotion recognition approach based on human perception.

Emotion Recognition Strategy

Conventional speech emotion recognition methods are mainly based on investigating the relationship between acoustic features and emotion dimensions as a two-layer model, i.e. an acoustic feature layer and an emotion dimension layer. For instance, Grimm et al. attempted to estimate the emotion dimensions (valence, activation, and dominance) from the acoustic features by using a fuzzy inference system (FIS) [6]. However, they found that activation and dominance were more accurately estimated than valence. Furthermore, many researchers have also tried to investigate the most relevant acoustic features for each emotion dimension by using the correlation between a set of acoustic features and emotion dimensions [3-5,7]. In all these studies, the valence dimension was found to be the most difficult dimension to estimate. Consequently, some other studies focused only on exploring acoustic features related to the valence dimension [8,9]. Some emotions related to valence were found to share similar acoustic features, such as happiness and anger, which were characterized by increased levels of fundamental frequency (F0) and intensity. This is one reason why acoustic discrimination on the valence dimension is still problematic, i.e. no strong discriminative acoustic features are available to discriminate between positive speech (e.g. happiness) and negative speech (e.g. anger) [7]. Therefore, a number of researchers tried to discriminate between positive and negative emotions by combining acoustic and linguistic features to improve valence estimation [7,10]. However, the results on valence estimation remained poor.

Human perception, as described by Scherer [12], who adopted a version of Brunswik's lens model originally proposed in 1956 [13], is a multi-layer process. Huang and Akagi adopted a three-layer model of human perception. They assumed that human perception of emotional speech does not come directly from a change in acoustic features, but rather is a composite of different types of smaller perceptions that are expressed by semantic primitives, or adjectives describing an emotional voice [14]. The two-layer model has limited ability to find the most relevant acoustic features for each emotion dimension, especially valence, or to improve the prediction of emotion dimensions from acoustic features. To overcome these limitations, this paper aims to identify the most relevant acoustic features describing each emotion dimension using a novel idea based on human perception. We attempt to use the above human perception model proposed by Huang and Akagi [14] to find the acoustic features most correlated with emotion dimensions through semantic primitives. We assume that the acoustic features that are highly correlated with semantic primitives will have a significant impact for predicting values of emotion dimensions, especially valence.
The findings can guide the selection of new acoustic features with better discrimination in the most difficult dimension. The feasibility of our three-layer model to improve the estimation of the emotion dimensions valence, activation, and dominance was investigated. The proposed model consists of three layers: the emotion dimensions (valence, activation, and dominance) constitute the top layer, the semantic primitives the middle layer, and the acoustic features the bottom layer. A semantic primitive layer is added between the two conventional layers, acoustic features and emotion dimensions, as shown in Fig. 1. Therefore, the approach we adopt to estimate values of emotion dimensions includes the following steps:

• Feature selection: The most relevant acoustic features were selected by using a top-down method. First, the semantic primitives which have high correlations with each emotion dimension were selected. Then, the acoustic features which have high correlations with the semantic primitives found in the first step were selected.

• Building a three-layer model for each emotion dimension: For example, in the case of the valence dimension, the three layers are: the valence dimension in the top layer, the semantic primitives highly correlated with the valence dimension in the middle layer, and all the acoustic features highly correlated with those semantic primitives in the bottom layer.

• Emotion dimension estimation: Using the constructed three-layer model, a bottom-up method was used to estimate values of emotion dimensions from acoustic features as follows. First, an FIS was used to estimate the degree of each semantic primitive from acoustic features, and then another FIS was used to estimate values of the emotion dimension from the degrees of semantic primitives estimated in the first step.

To achieve the aim of this paper, the following investigations are required: (1) whether selecting acoustic features based on the proposed three-layer model of human perception will help us to find the most related acoustic features for each emotion dimension, (2) whether using these selected acoustic features as inputs to an automatic emotion recognition system will improve the accuracy of all emotion dimensions, especially valence, and (3) whether the automatic emotion recognition system is effective in the following cases: speaker-dependent, multi-speaker, and multi-language.

Databases and Experimental Evaluation

To construct an emotion recognition system, the elements of the proposed model were collected as described in this section. The databases and acoustic features used in this study are introduced. Moreover, the semantic primitives and emotion dimensions were evaluated by conducting two listening tests using human subjects, as described in the subsections below.

Speech Material and Subjects

In this paper, our aim is to prove a new concept, not to construct a real-life application; consequently, acted emotions are quite adequate as testing data [15]. Therefore, in order to validate the proposed system, we used two acted databases of emotional speech: one in Japanese (single-speaker) and the other in German (multi-speaker). The Japanese database is the multi-emotion single-speaker Fujitsu database produced and recorded by Fujitsu Laboratories. A professional actress was asked to produce utterances using five emotional speech categories, i.e., neutral, joy, cold anger, sadness, and hot anger. In the database, there are 20 different Japanese sentences.
Each sentence has one utterance in neutral and two utterances in each of the other categories. Thus, there are nine utterances for each sentence and 180 utterances for all 20 sentences. However, one cold anger utterance is missing, so the total number of utterances for the Japanese database is 179. The Japanese database is inadequate for validating our emotion recognition system fully, because it is a single-speaker database, which is only suitable for speaker-specific tasks. To investigate the effectiveness of the proposed system for multiple speakers and different languages, the Berlin database [17] was selected. It comprises seven emotional states: anger, boredom, disgust, anxiety, happiness, sadness, and neutral speech. Ten professional German actors (five female and five male) spoke ten sentences with emotionally neutral content in the seven different emotions. These sentences were not equally distributed between the various emotional states: 69 frightened, 46 disgusted, 71 happy, 81 bored, 79 neutral, 62 sad, and 127 angry. This database was selected because: (1) it is an acted-speech database, the same as the Fujitsu database; (2) it contains four categories similar to those in the Fujitsu database (happy, angry, sad, and neutral); and (3) it is a multi-speaker and multi-gender database, which enables us to investigate the effect of speaker and gender variation in speech emotion recognition. To compare the results of the two databases, we used only the four similar categories. Furthermore, for training purposes, we used sentences equally distributed between the four emotional states: 50 happy, 50 angry, 50 sad, and 50 neutral. In total, 200 utterances were selected from the Berlin database: 100 utterances were uttered by five males and the other 100 by five females, divided equally between the four emotional states.

To evaluate semantic primitives and emotion dimensions, we used listening tests. The Fujitsu database was evaluated by 11 graduate students, all native Japanese speakers (nine male and two female), while the Berlin database was evaluated by nine graduate students, all native Japanese speakers (eight male and one female). No subjects had hearing impairments.

Acoustic Features

To construct a speech emotion recognition system, acoustic features need to be investigated. In this research, the acoustic features that have been most successful in related works, together with features used for other similar tasks, were selected. Therefore, 16 acoustic features that originate from F0, the power envelope, the power spectrum, and duration were selected from the work by Huang and Akagi [14]. In addition to these 16 acoustic features, five new parameters related to voice quality were added, because voice quality is one of the most important cues for the perception of expressive speech. Acoustic features related to duration are extracted by segmentation, and the rest are extracted by the high-quality speech analysis-synthesis system STRAIGHT [18], leading to a set of 21 acoustic features that can be grouped into several subgroups:

F0 related features: The F0 contour and power envelope varied greatly with the different expressive speech categories, both for the accentual phrases and for the overall utterance. For each utterance, the measurements made were: mean value of the rising slope of the F0 contour (F0 RS), highest F0 (F0 HP), average F0 (F0 AP), and rising slope of the F0 contour for the first accentual phrase (F0 RS1).
Power envelope related features: In a similar way to that for the F0 contour, for each utterance the measurements were: mean value of the power range in the accentual phrase (PW RAP), power range (PW R), rising slope of the power for the first accentual phrase (PW RS1), and the ratio between the average power in the high frequency portion (over 3 kHz) and the average power (PW RHT).

Power spectrum related features: For the spectrum we used formants, spectral tilt, and spectral balance:

- Formants: The measures were the mean values of the first formant frequency (SP F1), second formant frequency (SP F2), and third formant frequency (SP F3), taken approximately at the midpoint of the vowels /a/, /e/, /i/, /o/, and /u/. The formant frequencies were calculated with LPC of order 12.

- Spectral tilt (SP TL): This is used to measure voice quality, and it was calculated from the following equation:

SP TL = A1 − A3

where A1 is the level in dB of the first formant, and A3 is the level of the harmonic whose frequency is closest to the third formant [19].

- Spectral balance (SP SB): This parameter serves for the description of acoustic consonant reduction [20], and it was calculated according to the following equation:

SP SB = Σᵢ (fᵢ · Eᵢ) / Σᵢ Eᵢ

where fᵢ is the frequency in Hz, and Eᵢ is the spectral power as a function of the frequency [21].

Duration related features: total length (DU TL), consonant length (DU CL), and the ratio between consonant length and vowel length (DU RCV).

Voice quality: Voice quality conveys both linguistic and paralinguistic information, which can be distinguished by acoustic source characteristics. Current investigations into voice quality have focused on measures of breathiness, such as H1-H2, where H1 and H2 are the amplitudes (dB) of the fundamental frequency and the second harmonic, respectively. As indicated by Menezes et al. in [11], H1-H2 is related to glottal opening. In this study, the mean values of H1-H2 for the vowels /a/, /e/, /i/, /o/, and /u/ per utterance (MH A, MH E, MH I, MH O, and MH U) are used as an indication of voice quality.

All 21 acoustic features were extracted for both the Fujitsu and Berlin databases. In order to avoid speaker dependency in the acoustic features, we adopted an acoustic feature normalization method, in which all acoustic feature values are normalized by those of neutral speech. This was performed by dividing the values of the acoustic features by the mean value over the neutral utterances for each acoustic feature.
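To make two of the spectrum-related measures concrete, the sketch below computes an energy-weighted spectral balance and H1-H2 for a synthetic voiced frame. The signal and some formula details (e.g. reading SP SB as an energy-weighted mean frequency) are illustrative assumptions; the published system extracts these measures from STRAIGHT analysis frames of real speech.

```python
# Sketch: two of the spectrum-related measures from a short voiced frame.
# The signal here is synthetic (a decaying harmonic series); real use
# would take analysis frames from a vocoder front end such as STRAIGHT.
import numpy as np

fs = 16000
t = np.arange(0, 0.04, 1 / fs)
f0 = 200.0
# Synthetic vowel-like frame: harmonics with falling amplitude.
x = sum((0.8 ** k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 11))

spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# Spectral balance as an energy-weighted mean frequency (one reading of
# the definition above; the exact published formula may differ).
sp_sb = np.sum(freqs * spec) / np.sum(spec)

def harmonic_level_db(spec, freqs, f):
    """Level (dB) of the spectral bin nearest frequency f."""
    return 10 * np.log10(spec[np.argmin(np.abs(freqs - f))])

# H1 - H2: level difference of the first two harmonics.
h1_h2 = harmonic_level_db(spec, freqs, f0) - harmonic_level_db(spec, freqs, 2 * f0)
print(f"SP SB = {sp_sb:.1f} Hz, H1-H2 = {h1_h2:.2f} dB")
```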
Evaluations of Semantic Primitives

In this study, the model of human perception described by Scherer [12] is adopted. This model assumes that human perception is a multi-layer process. It was assumed that the acoustic features are perceived by a listener and internally represented by smaller perceptions, e.g. adjectives describing an emotional voice, as reported by Huang and Akagi [14]. In this study, 'smaller perception' means an earlier process of perception. These smaller percepts, or adjectives, are finally used to detect the emotional state of the speaker. These adjectives can be subjectively evaluated by human subjects. Therefore, the following set of adjectives describing emotional speech was selected as candidates for semantic primitives: bright, dark, high, low, strong, weak, calm, unstable, well-modulated, monotonous, heavy, clear, noisy, quiet, sharp, fast, and slow. These adjectives were selected because they reflect a balanced selection of widely used adjectives that describe emotional speech. They originally come from the work of Huang and Akagi [14]. For the evaluation, we used listening tests.

In these tests, the stimuli were presented randomly to each subject through binaural headphones at a comfortable sound pressure level in a soundproof room. Subjects were asked to rate each of the 17 semantic primitives on a five-point scale: "1 - Does not feel at all", "2 - Seldom feels", "3 - Feels a little", "4 - Feels", "5 - Feels very much". The 17 semantic primitives were evaluated for the two databases, and the individual subjects' ratings were then averaged for each semantic primitive per utterance. Inter-rater agreement was measured by means of pairwise Pearson's correlations between two subjects' ratings, separately for each semantic primitive. For the Japanese database, the average of Pearson's correlation over all pairs of subjects ranged between 0.68 and 0.85 across the semantic primitive evaluations; for the German database, the average correlations ranged between 0.66 and 0.86. This result suggests that all subjects agreed to a moderate to very high degree.

Emotion Dimensions Evaluation

Most existing emotional speech databases have been annotated using the categorical approach, while few databases have been annotated using the dimensional approach [22]. The Fujitsu and Berlin databases are categorical databases. Therefore, listening tests are required to annotate each utterance in the used databases using the dimensional approach. Thus, the two databases were evaluated by listening tests along three dimensions: valence, activation, and dominance. For the emotion dimension evaluation, a 5-point scale {-2, -1, 0, 1, 2} was used: valence (from -2, very negative, to +2, very positive), activation (from -2, very calm, to +2, very excited), and dominance (from -2, very weak, to +2, very strong). The subjects used a MATLAB GUI to evaluate the stimuli. Repetition was allowed. They were asked to evaluate one emotion dimension for the whole database in one session. There were three sessions, one for each emotion dimension. As done in the work of Mori et al. [23] for emotion dimension evaluation, the basic theory of emotion dimensions was explained to the subjects before the experiment started. They then took a training session to listen to an example set composed of 15 utterances, which covered the five-point scale used, with three utterances for each point on the scale. In the test, the stimuli were presented randomly. For each utterance, subjects were asked to evaluate their perceived impression from the way of speaking, not from the content itself, and then to choose a score on the five-point scale for each dimension individually. The average of the subjects' ratings for each emotion dimension was calculated per utterance. The averages of Pearson's correlation coefficients over all pairs of subjects were as follows: for the Japanese database, 0.90, 0.85, and 0.89 for valence, activation, and dominance, respectively; and for the German database, 0.83, 0.87, and 0.86 for valence, activation, and dominance, respectively. This indicates that all subjects agreed to a high degree for all emotion dimension evaluations.
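The inter-rater agreement figures quoted above correspond to a mean pairwise Pearson correlation, which can be computed as in the following sketch; the ratings matrix is invented.

```python
# Sketch: inter-rater agreement as the mean pairwise Pearson correlation
# between subjects' ratings. Rows = subjects, columns = utterances,
# values on the 5-point scale; the numbers are invented.
import itertools
import numpy as np

ratings = np.array([
    [1, 3, 5, 2, 4, 4, 1, 2],
    [2, 3, 5, 2, 5, 4, 1, 1],
    [1, 2, 4, 3, 4, 5, 2, 2],
])

pairwise = [np.corrcoef(ratings[i], ratings[j])[0, 1]
            for i, j in itertools.combinations(range(len(ratings)), 2)]
print(f"mean pairwise r = {np.mean(pairwise):.2f}")
```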
Selection of Acoustic Features and Semantic Primitives

This section describes the proposed acoustic feature selection method to identify the most relevant acoustic features for the emotion dimensions valence, activation, and dominance. For this purpose, we proposed a three-layer model that imitates human perception in order to understand the relationship between acoustic features and emotion dimensions.

Selection Procedures

Our selection method is based on the following assumptions: 1) semantic primitives which are highly correlated with an emotion dimension are given large impact in the estimation of that dimension, and 2) acoustic features which are highly correlated with a semantic primitive are given large impact in the estimation of that semantic primitive. In this study, we consider a correlation high if its absolute value is greater than or equal to 0.45. To accomplish this task, the top-down method shown in Fig. 2 was used as follows:

• the correlation coefficients between each emotion dimension (top layer) and each semantic primitive (middle layer) were calculated;
• the highly correlated semantic primitives were selected for each emotion dimension;
• the correlation coefficients between each semantic primitive selected in the second step (middle layer) and each acoustic feature (bottom layer) were calculated;
• the highly correlated acoustic features were selected for each semantic primitive.

For each emotion dimension, the acoustic features selected in the final step are considered the most relevant features to the dimension in the top layer.

Correlation between elements of the three-layer model

First, the correlations between the elements of the top layer and the middle layer were calculated. Let s_j = {s_{j,n}} (n = 1, 2, ..., N) be the sequence of averaged ratings of the j-th semantic primitive and x^(i) = {x^(i)_n} the corresponding values of the i-th emotion dimension over the N utterances. The correlation coefficient R^(i)_j between them is

R^(i)_j = Σ_n (s_{j,n} − s̄_j)(x^(i)_n − x̄^(i)) / sqrt( Σ_n (s_{j,n} − s̄_j)² · Σ_n (x^(i)_n − x̄^(i))² )

where s̄_j and x̄^(i) are the arithmetic means of the semantic primitive and the emotion dimension, respectively. Table 1 lists the correlation coefficients between all semantic primitives and all emotion dimensions for the German database. The numbers in bold represent the high correlations, i.e. those whose absolute value is ≥ 0.45. In addition, '#' in the last row and last column gives the number of high correlations. For example, the number 7 in the last column of the valence row indicates that there are seven semantic primitives highly correlated with valence.

Second, the correlation coefficients between elements of the middle layer (semantic primitives) and the bottom layer (acoustic features) were calculated. Let f_l = {f_{l,n}} (n = 1, 2, ..., N) be the sequence of values of the l-th acoustic feature, l = 1, 2, ..., L, where L is the number of extracted acoustic features (in this study L = 21). Then the correlation coefficient R^(j)_l between the acoustic feature f_l and the semantic primitive s^(j) is

R^(j)_l = Σ_n (f_{l,n} − f̄_l)(s^(j)_n − s̄^(j)) / sqrt( Σ_n (f_{l,n} − f̄_l)² · Σ_n (s^(j)_n − s̄^(j))² )

where f̄_l and s̄^(j) are the arithmetic means of the acoustic feature and the semantic primitive, respectively. Table 2 lists the correlation coefficients between all semantic primitives and the 11 acoustic features that have at least two high correlations with semantic primitives, for the German database. A similar analysis was done for the Japanese database in our previous work [16].
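A minimal sketch of this two-step selection, assuming the averaged ratings and normalized features are available as arrays (names are hypothetical):

```python
import numpy as np

THRESHOLD = 0.45  # |r| >= 0.45 counts as highly correlated

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def select_for_dimension(dim_values, primitives, features):
    """Top-down selection for one emotion dimension.

    dim_values : (N,) averaged dimension ratings over N utterances
    primitives : dict name -> (N,) averaged semantic primitive ratings
    features   : dict name -> (N,) normalized acoustic feature values
    """
    # Step 1: semantic primitives highly correlated with the dimension.
    selected_prims = {p for p, s in primitives.items()
                      if abs(corr(s, dim_values)) >= THRESHOLD}
    # Step 2: acoustic features highly correlated with any selected primitive.
    selected_feats = {f for f, v in features.items()
                      if any(abs(corr(v, primitives[p])) >= THRESHOLD
                             for p in selected_prims)}
    return selected_prims, selected_feats
```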
Selection Results

For each emotion dimension, a perceptual three-layer model was constructed as follows: the emotion dimension in the top layer, the most relevant semantic primitives for this dimension in the middle layer, and the most relevant acoustic features in the bottom layer. For example, Figs. 3(a) and 3(b) illustrate the valence perceptual model for the German and Japanese databases, respectively. The solid and dashed lines in these figures represent positive and negative correlations, respectively, and the thickness of each line indicates the strength of the correlation: the thicker the line, the higher the correlation. In the case of the valence dimension for the German database, shown in Fig. 3(a), seven semantic primitives were found to be highly correlated with valence (middle layer), and these seven semantic primitives are in turn highly correlated with nine acoustic features (bottom layer). The valence perceptual models for the German and Japanese languages compare as follows: for both languages, the valence dimension is positively correlated with the bright, high, and clear semantic primitives, and negatively correlated with the dark, low, and heavy semantic primitives. The two languages therefore not only share six semantic primitives but also show similar correlations between the emotion dimension and the corresponding semantic primitives. In addition, comparing the relationship between semantic primitives and acoustic features, the six semantic primitives shared by German and Japanese have similar correlations with six common acoustic features (MH A, MH E, MH O, F0 RS, F0 HP, and PW R). This finding suggests the possibility of some universality of acoustic cues associated with semantic primitives. Therefore, the proposed method can be used effectively to select the most relevant acoustic features for each emotion dimension regardless of the language used.

Discussion

Our model mimics the human perception process for understanding emotions on the basis of Brunswik's lens model [13], in which the speaker expresses his/her emotional state through some acoustic features. These acoustic features are interpreted by the listener into adjectives describing the speech signal, and from these adjectives the listener can judge the emotional state. For example, if the adjectives describing the voice are dark, slow, low, and heavy, the human listener perceives negative valence and very weak activation, which in the categorical approach would be detected as a very sad emotional state. The conventional acoustic feature selection method, in contrast, is based on the direct correlations between acoustic features and emotion dimensions, i.e. a two-layer model. To investigate the effectiveness of the proposed feature selection method, its results were compared with the conventional method. Table 3 lists the correlation coefficients between acoustic features and emotion dimensions directly. From this table, only one acoustic feature is highly correlated with the valence dimension (|correlation(SP F1, valence)| = 0.55 ≥ 0.45), while eight acoustic features are highly correlated with the activation and dominance dimensions. Valence therefore shows a smaller number of highly correlated acoustic features than activation and dominance. These results are similar to those of many previous studies [4]. Due to this drawback, most previous studies achieved very low performance for valence estimation using the conventional approach [6,24]. The most important result is that, using the proposed three-layer model for feature selection, the number of acoustic features relevant to the emotion dimensions increases. For example, the number of relevant features for the most difficult dimension, valence, increases from one to nine using the proposed method.
Moreover, the number of features increased from eight to nine for activation and from eight to ten for dominance. The selected acoustic features can be used to improve emotion dimension estimation, as described in detail in the next section.

Automatic Emotion Recognition System

The aim of a speech emotion recognition system based on the dimensional approach can be viewed as using an estimator to map the acoustic features to real-valued emotion dimensions (valence, activation, and dominance). The acoustic features selected in the previous section are used as input to the proposed system to predict the emotion dimensions. Emotion dimension values can be estimated using any estimator, such as K-nearest neighbors (KNN), Support Vector Regression (SVR), or a Fuzzy Inference System (FIS). For selecting the best estimator among KNN, SVR, and FIS, pre-experiments not included here indicated that our best results were achieved using an FIS estimator; therefore, FIS was used to connect the elements of the three-layer model. Most statistical methodologies are based mainly on a linear and precise relationship between the input and the output. However, the relationships among acoustic features, semantic primitives, and emotion dimensions are non-linear, so fuzzy logic is a more appropriate mathematical tool for describing them [6,14,25].

System Implementation

An Adaptive-Network-based Fuzzy Inference System (ANFIS) [25] was used to construct the FIS models that connect the elements of our recognition system. Each FIS has a structure of multiple inputs and one output. Having identified the best acoustic feature set, we constructed an individual estimator to predict the values (-2 to 2, as rated by the listening test) of each emotion dimension. As an example, for the German database, to estimate the valence dimension using the perceptual model in Fig. 3(a), a bottom-up method was used: the values (1 to 5, as rated by the listening test) of the seven semantic primitives in the middle layer are first estimated from the nine acoustic features in the bottom layer, as shown in Fig. 4. To accomplish this task, seven FISs were required, one to estimate each semantic primitive, plus one FIS to estimate the value of the valence dimension from the seven semantic primitives. Similarly, activation and dominance can each be estimated using one FIS per relevant semantic primitive and one FIS for the dimension itself.

Effectiveness of the selected features

This subsection investigates whether the acoustic features selected using the proposed method in Section 4 improve emotion dimension estimation. To accomplish this, the proposed automatic emotion recognition system was tested using three different groups of acoustic features for each emotion dimension: (1) highly correlated acoustic features (the absolute values of their correlations with semantic primitives are ≥ 0.45), (2) the remaining, lower correlated acoustic features, and (3) all acoustic features. To measure the performance of the proposed system, the mean absolute error (MAE) between the predicted values of the emotion dimensions and the corresponding average values given by human subjects is used as the discrimination metric for each group. The MAE is calculated as

MAE_j = (1/N) Σ_{i=1..N} | x^(j)_i − x̂^(j)_i |

where j ∈ {valence, activation, dominance}, x^(j)_i is the output of the emotion recognition system for utterance i, and x̂^(j)_i (−2 ≤ x̂^(j)_i ≤ 2) is the value evaluated by human subjects as described in Subsection 3.4.
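To make the bottom-up structure concrete, here is a minimal sketch of the two-stage estimator. The paper uses ANFIS; a generic scikit-learn regressor stands in here purely to illustrate the wiring, and all names are hypothetical:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # stand-in for an ANFIS/FIS model

class ThreeLayerEstimator:
    """Acoustic features -> semantic primitives -> one emotion dimension."""

    def __init__(self, num_primitives):
        self.primitive_models = [MLPRegressor(max_iter=2000)
                                 for _ in range(num_primitives)]
        self.dimension_model = MLPRegressor(max_iter=2000)

    def fit(self, X, primitive_targets, dim_targets):
        # X: (N, num_features); primitive_targets: (N, num_primitives), rated 1-5;
        # dim_targets: (N,), rated -2..2 by the listening tests.
        for k, model in enumerate(self.primitive_models):
            model.fit(X, primitive_targets[:, k])
        self.dimension_model.fit(primitive_targets, dim_targets)
        return self

    def predict(self, X):
        prims = np.column_stack([m.predict(X) for m in self.primitive_models])
        return self.dimension_model.predict(prims)

def mae(predicted, human):
    return float(np.mean(np.abs(predicted - human)))
```

One model per semantic primitive plus one per dimension mirrors the seven-plus-one arrangement described above for valence.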
The accuracy of the system, in terms of five-fold cross-validation, was calculated for the two databases. Figures 5(a) and 5(b) show the MAE for estimating valence, activation, and dominance for the Japanese and German databases, respectively, using the three groups of acoustic features (highly correlated, lower correlated, all); the error bars in these figures represent standard errors. Analysis of variance (ANOVA) was conducted to test whether the three groups of acoustic features differ statistically for emotion dimension estimation. For the Japanese database, at the 0.001 level, a significant difference among the three groups was observed for valence, activation, and dominance. For both databases, the results reveal that with the three-layer model, the MAEs obtained using the selected group (highly correlated acoustic features) are the smallest in comparison with those obtained using all the features. This means that our feature selection method is effective for improving emotion dimension estimation.

System Evaluation

In this paper, an automatic speech emotion recognition system based on a three-layer model was implemented. This section presents the evaluation results for the proposed system. To investigate how effectively our system improves emotion dimension estimation, the performance of the proposed system was compared with that of the conventional two-layer system using two different languages, Japanese and German, and two different tasks: (1) speaker-dependent and (2) multi-speaker. The most relevant acoustic features for each emotion dimension were selected using the proposed feature selection method for the two languages, as described in Section 4. These selected features were used as the input for both the conventional system and the proposed system. The desired output from these systems is the emotion dimensions as perceived by listeners, not the emotions intended by speakers.

Evaluation Results for Speaker-dependent Task

In the speaker-dependent task, the automatic emotion recognition system was trained and tested using utterances from one speaker. For the Japanese database, the two automatic systems (the conventional two-layer and the proposed three-layer systems) were used to estimate valence, activation, and dominance from the selected acoustic features for the 179 utterances in the database, and five-fold cross-validation was used for evaluation. The MAEs between the two systems' outputs and the human evaluations for the three emotion dimensions are shown in Fig. 6(a); the error bars represent standard errors. The German database contained ten speakers: five male and five female. Since each speaker produced few utterances, leave-one-out cross-validation (LOOCV) was used for evaluation. The proposed system and the conventional two-layer system were evaluated on each speaker individually, and the mean MAE over all speakers was calculated for each emotion dimension. The results are presented in Fig. 6(b).
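The evaluation protocol above can be sketched as follows: per-utterance absolute errors are collected under cross-validation and then compared between the two systems with a paired t-test (a sketch only; the model constructors are placeholders):

```python
import numpy as np
from scipy import stats
from sklearn.model_selection import KFold

def cv_absolute_errors(make_model, X, y, folds=5, seed=0):
    """Per-utterance absolute errors under k-fold cross-validation."""
    errors = np.empty(len(y), dtype=float)
    for train, test in KFold(folds, shuffle=True, random_state=seed).split(X):
        model = make_model().fit(X[train], y[train])
        errors[test] = np.abs(model.predict(X[test]) - y[test])
    return errors

# err2 = cv_absolute_errors(make_two_layer_system, X, valence)
# err3 = cv_absolute_errors(make_three_layer_system, X, valence)
# print(err2.mean(), err3.mean())      # the two systems' MAEs
# t, p = stats.ttest_rel(err2, err3)   # paired t-test between the systems
```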
Using a paired t-test at the 0.05 level, the results for the two databases are as follows. For the Japanese database: valence (t(178)=3.16, p ≤ 0.05), activation (t(178)=2.47, p ≤ 0.05), and dominance (t(178)=4.99, p ≤ 0.05); these results are statistically significant for all emotion dimensions. For the German database, the results are statistically significant for valence (t(199)=2.09, p ≤ 0.05) and dominance (t(199)=1.78, p ≤ 0.05), but there is no significant difference between the two-layer and three-layer systems for activation (t(199)=0.23, p-value=0.41). As can be seen from Figs. 6(a) and 6(b), the proposed three-layer system outperforms the conventional two-layer system for both languages in the speaker-dependent task.

Evaluation Results for Multi-Speaker Task

The German database was used to investigate the effect of multiple speakers on emotion dimension estimation. The proposed system was validated using the whole database: all 200 utterances were used to implement the system, and five-fold cross-validation was used for evaluation. The results for the multi-speaker evaluation are shown in Fig. 7; the error bars represent standard errors. The results of the paired t-test at the 0.05 significance level were as follows: valence (t(199)=2.83, p ≤ 0.05), activation (t(199)=1.93, p ≤ 0.05), and dominance (t(199)=3.38, p ≤ 0.05). These results are statistically significant for all emotion dimensions and reveal that the proposed system also outperforms the conventional one in the multi-speaker task.

Discussion

Using the acoustic feature selection method described in Section 4, the most relevant acoustic features were selected for each emotion dimension for the Japanese and German databases. To investigate the effectiveness of the selected acoustic features, the proposed system was tested using three different groups of acoustic features: selected, not selected, and all. The best performance for emotion dimension estimation was achieved using the selected group for each emotion dimension, as demonstrated by the smallest MAE values for both the German and Japanese databases. The MAEs for all dimensions, shown in Figs. 6(a), 6(b), and 7, clearly show that the proposed three-layer system is effective and gives the best results for all emotion dimensions (valence, activation, and dominance) in both the speaker-dependent and multi-speaker tasks. The MAEs for the multi-speaker task, however, were higher than those for the speaker-dependent task; for both the German and Japanese databases, the overall best results for all emotion dimensions were achieved in the speaker-dependent task. For both databases, all MAE values were very small: the maximum MAE was 0.28, for valence on the Japanese database, as shown in Fig. 6(a). This means that, on average, the difference between the human evaluation and the system output is 0.28, i.e. the output of the proposed system is very close to the human evaluation. From this discussion, it is evident that valence estimation in particular can be improved by the proposed model. The most important result of this study is therefore that the proposed automatic speech emotion recognition system based on the three-layer model of human perception is superior to the conventional two-layer system.

Mapping Values of Emotion Dimensions into Emotion Categories

The categorical and dimensional approaches are closely related: by detecting the emotional content using one of the two schemes, we can infer its equivalent in the other. For example, if an utterance is estimated to have positive valence and high activation, we can infer that it is happy, and vice versa. Therefore, any improvement in the dimensional approach will lead to an improvement in the categorical approach and vice versa.
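The dimension-to-category mapping evaluated next can be sketched as follows. The paper does not spell out its GMM configuration, so this shows one plausible arrangement: one Gaussian mixture fitted per emotion category in the (valence, activation, dominance) space, with maximum-likelihood assignment:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class DimensionToCategory:
    """Map (valence, activation, dominance) points to emotion categories."""

    def fit(self, dims, labels, components=1):
        # dims: (N, 3) estimated dimension values; labels: category per utterance
        labels = np.asarray(labels)
        self.models = {}
        for cat in np.unique(labels):
            gmm = GaussianMixture(n_components=components, random_state=0)
            self.models[cat] = gmm.fit(dims[labels == cat])
        return self

    def predict(self, dims):
        cats = list(self.models)
        loglik = np.column_stack([self.models[c].score_samples(dims)
                                  for c in cats])
        return [cats[k] for k in loglik.argmax(axis=1)]
```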
In this section, we strengthen our findings by demonstrating that the dimensional approach can indeed improve automatic emotion classification. The estimated values of the emotion dimensions (valence, activation, and dominance) were used as inputs to a Gaussian Mixture Model (GMM) to predict the corresponding emotion category. The classification results obtained using the acoustic features directly are compared with those obtained using the estimated values of the emotion dimensions, as shown in Tables 4 and 5 for the Japanese and German databases, respectively.

Classification for Japanese Database

For the Japanese database, the acoustic features were first used as input to train the GMM classifier to classify the database into five emotion categories: neutral, joy, hot anger, sadness, and cold anger. The estimated values of the emotion dimensions were then used as input to train a GMM that classifies every point in the valence-activation-dominance space into one emotion category. The confusion matrices of the results are shown in Table 4(a), for mapping acoustic features into categories, and in Table 4(b), for mapping values of emotion dimensions into emotion categories. In these tables, each number is the percentage of utterances of the emotion category in the left column that were recognized as the emotion category in the top line.

Classification for German Database

The results of classifying the German database into four emotion categories (neutral, happy, angry, and sad) are presented as confusion matrices as follows: Table 5(a) for mapping acoustic features into categories, Table 5(b) for mapping emotion dimensions into categories for multi-speaker estimation, and Table 5(c) for mapping emotion dimensions into categories for speaker-dependent estimation.

Discussion

Mapping emotion dimension values into the given emotion categories with a GMM classifier yields a remarkable improvement in the recognition rate. For the Japanese database, the overall recognition rate was 53.9% for direct classification using acoustic features and 94% using emotion dimensions. For the German database, the rate of direct classification using acoustic features was 60.0% (Table 5(a)), which increased to 75% and 95.5% using emotion dimensions for the multi-speaker and speaker-dependent tasks, respectively. The results also reveal that the recognition rate in the speaker-dependent task is higher than in the multi-speaker task, which corresponds with previous studies indicating that speaker-dependent training of the estimator achieves the most accurate emotion classification results [26]. The most important result is that classification using emotion dimensions instead of acoustic features improves the recognition rate.

Conclusion

The aim of this paper is to improve the conventional dimensional method in order to accurately estimate emotion dimensions, especially the valence dimension. We therefore first proposed a novel acoustic feature selection method, based on a three-layer model of human perception, for selecting the acoustic features most relevant to each emotion dimension.
This method was successfully applied to two databases in different languages (Japanese and German), and many acoustic features were found to be relevant to the valence dimension as well as to activation and dominance. We then proposed a speech emotion recognition system based on the three-layer model to estimate the emotion dimensions (valence, activation, and dominance) from the most relevant acoustic features. The proposed system was evaluated using the two languages (Japanese and German) in two different settings (speaker-dependent and multi-speaker). It was found that the proposed system outperforms the conventional two-layer system in both languages, for both the speaker-dependent and multi-speaker tasks. Finally, the estimated values of the emotion dimensions were mapped into the given emotion categories using a GMM classifier for the Japanese and German databases. For the Japanese database, the overall recognition rate using emotion dimensions was 94%; for the German database, the recognition rate was 95.5% for the speaker-dependent task. In the future, in order to obtain more reliable and richer annotations of emotion dimensions and semantic primitives from listening tests, we will study the effect of using a balanced number of subjects in terms of gender and age. Moreover, we will investigate the effectiveness of the three-layer model for constructing a cross-language emotion recognition system with the ability to detect emotion regardless of the language used for training.
nPoRe: n-polymer realigner for improved pileup-based variant calling

Despite recent improvements in nanopore basecalling accuracy, germline variant calling of small insertions and deletions (INDELs) remains poor. Although precision and recall for single nucleotide polymorphisms (SNPs) now exceed 99.5%, INDEL recall remains below 80% for standard R9.4.1 flow cells. We show that read phasing and realignment can recover a significant portion of false negative INDELs. In particular, we extend Needleman-Wunsch affine gap alignment by introducing new gap penalties for more accurately aligning repeated n-polymer sequences such as homopolymers (n = 1) and tandem repeats (2 ≤ n ≤ 6). At the same precision, haplotype phasing improves INDEL recall from 63.76% to 70.66%, and nPoRe realignment improves it further to 73.04%.

Most recent nanopore variant calling advances in this area have come from improvements in machine learning and data representation. For example, the move from the prior work Clairvoyante [10] to Clair [9] involved "an entirely different network architecture and learning tasks". Clair3 then split the model into a pileup caller to filter out the noise and a higher-dimensional full-alignment caller to make the more difficult decisions [7]. PEPPER examined sorting reads by haplotype and a new architecture, and DeepVariant explored numerous possible data representations for final calling [8,11]. Orthogonally, we show that improved INDEL calling performance can be achieved through better read alignment by introducing novel gap penalties for homopolymers and tandem repeats, or "n-polymers".

Nanopore read alignment

In order to maximize the accuracy of pileup-based variant calling, reads should be aligned such that actual mutations are always aligned to the same location, despite sequencing errors. We find that simply using traditional affine gap penalties is not ideal because the gap penalties G_open and G_extend are static, regardless of context [13]. For example, although our dataset consisted of only 0.8% INDEL errors, homopolymers of length 10 contained an INDEL error 41.8% of the time. Without lowering the INDEL penalty in the context of repetitive sequences, there is a mismatch between the likelihood and the alignment penalty of common sequencing errors. This has an outsized impact on fine-grained read alignment, often at the expense of consistently aligning actual mutations.
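To make the mismatch concrete, the context-agnostic affine gap cost has this shape (a minimal sketch; G_open = 5 and G_extend = 1 are the best-fit values reported later in the Methods):

```python
def affine_gap_penalty(gap_len, g_open=5, g_extend=1):
    """Static affine gap cost: a 3-base gap costs 5 + 2*1 = 7 whether it
    sits in unique sequence or inside a long homopolymer, even though
    the latter is far more likely to be a basecalling artifact."""
    return 0 if gap_len == 0 else g_open + (gap_len - 1) * g_extend
```

nPoRe's contribution is to replace this constant cost, inside annotated n-polymers only, with penalties derived from measured error likelihoods.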
Figure 1 demonstrates a specific example where static INDEL gap costs cause poor alignment concordancy in low-complexity regions. The reads are identical in the two pileups shown; only the alignments differ. In this example, two adjacent homopolymers are basecalled with inconsistent lengths. nPoRe recognizes that these two events were most likely independent, and separates them into two homopolymer length mis-calls/variants. In contrast, minimap2 merges the two INDELs whenever possible, or aligns homopolymer length differences as SNPs when one homopolymer is lengthened and the other is shortened, resulting in inconsistent alignment. According to the truth VCF, the first homopolymer of all As had a single deletion; looking at the third base in the coverage graphs in Fig. 1, we can see that nPoRe placed a deletion here for a much larger fraction of reads than minimap2.

Fig. 1 The same reads, aligned by minimap2 and nPoRe, viewed in IGV [12]. Colored lines represent substitutions, black lines represent deletions, and purple vertical bars indicate insertions. Note that nPoRe alignments contain more INDELs than substitutions, and the starts of these INDELs are more consistently placed.

The likelihood of incorrectly basecalling an INDEL within a homopolymer increases significantly as homopolymer length increases. Figure 2a shows the confusion matrix for actual and basecalled homopolymer lengths in our dataset. The same trend is visible for tandem repeats of longer unit length, though to a lesser extent (Fig. 2b).

Fig. 2 a 1-polymer and b 3-polymer confusion matrices of actual and predicted n-polymer lengths (in percent, by row). "Actual" n-polymer lengths are the reference n-polymer lengths to which a read is aligned, and "predicted" n-polymer lengths are each read's basecalled n-polymer lengths.

Our own evaluation confirms these findings, and furthermore attributes the loss of INDEL recall to the first, pileup-based variant calling step. Figure 3a and b show SNP and INDEL precision-recall curves, respectively, for both Clair3's pileup and full-alignment models. Note that although the more complex full-alignment model significantly improves precision, it cannot improve recall as dramatically; only variant calls and low-confidence reference calls from the previous pileup-based step are considered. Although substitutions comprise a majority (83.75%) of the actual small germline variants in our dataset, INDELs account for 92.36% of the pileup-based false negative errors and 80.79% of the false positive errors. Figure 4a shows that 92.29% of these errors occur within n-polymer regions, despite n-polymer regions covering just 37.07% of the evaluated regions. By improving the alignment of reads in these small n-polymer regions, we can have a significant impact on overall variant calling accuracy. Ground truth INDEL mutations are over-represented in n-polymer regions as well (79.64% of all INDELs). This is because Short Tandem Repeat (STR) variation is a common form of mutation due to strand slippage during DNA replication, resulting in one or more copies of a repeated unit being gained or lost. We define copy number INDELs as INDELs within n-polymers (3+ exact copies of the same repeat unit) that change the number of copies relative to the reference. For example, AAAA→AAAAA and ATATAT→ATAT meet this definition, but ATAT→ATATAT, AATAATAAAT→AATAAT, and ATATAT→ATATA do not. Despite our relatively strict definition of n-polymer copy number INDELs, 65.82% of all INDELs met this classification (Fig. 4b).
nPoRe's algorithm is directly designed to reduce alignment penalties for n-polymer copy number INDELs and improve alignment in low-complexity regions.

Related work

Variable gap penalties have been around for a long time. In 1995, Thompson first introduced per-position gap opening and extension penalties [15]. Since then, the sub-field of homologous protein sequence alignment has made extensive use of variable gap penalties (PIMA [16], FUGUE [17], and STRALIGN [18]) due to a high correlation between INDEL likelihood and the existence of protein secondary structures such as α-helices and β-strands. SSALN was the first to use empirically-determined penalty scores, an approach similar to our own [19], and SALIGN greatly increased the flexibility of the gap penalty function, although with a corresponding increase in computation [20]. MarginAlign similarly used expectation maximization to obtain robust maximum-likelihood estimates for substitution, insertion, and deletion error rates, and then realigned reads for more accurate single-nucleotide variant calling [21]. Unfortunately, none of the numerous extensions these earlier works made to traditional Needleman-Wunsch alignment are directly applicable to the observed problem of long-read n-polymer alignment. Affine gap penalties belong to a larger class of "convex" gap penalties, which also includes piecewise linear and logarithmic gap penalties [22]. These more complex alternatives solve a different problem: reliably grouping several medium-sized gaps into one larger gap. They do this by decreasing the penalty for gap extension as the length of the gap grows, and are commonly used for accurate alignment of large structural variants [23]. Such convex gap penalties do not solve the issue of fine-grained read alignment because they are still context-agnostic, and at short INDEL lengths they are highly similar to an affine gap penalty. One known strategy to mitigate the effect of homopolymer length basecalling errors is "homopolymer compression", in which repeated bases in a sequence are collapsed (GAAATCCT→GATCT) [24]. This method is commonly used by graph-based de novo assemblers in the earlier stages of graph construction to improve overlap detection between reads [3,25,26]. The recently-developed Verkko assembler goes even further and compresses n-polymers (ATCATCATC→ATC) [26]. Although n-polymer compression is useful for building a consensus graph, the original reads are generally used to generate the final sequence [25,26]. Read alignment following n-polymer compression is equivalent to running nPoRe with a null matrix N for the n-polymer shortening and lengthening penalties; by defining a non-zero matrix N, this work penalizes copy number changes according to their measured likelihood. Several existing works focus on the alignment of Short Tandem Repeats (STRs), although most function as INDEL variant callers rather than read realigners [27-29]. More recently, machine learning based approaches for final variant calling have outperformed these earlier statistical approaches [8,10,11,30]. Several newer works do focus on read realignment, however. ReviSTER is one such tool for revising mis-aligned/mis-mapped reads through reference reconstruction with local assembly, though this is primarily helpful for improving mapping, not alignment. The Broad Institute has incorporated an IndelRealigner into their standardized analysis pipeline (Genome Analysis ToolKit, or "GATK"), recognizing that INDELs are frequently mis-called as SNPs at read edges [31].
STR-realigner is most similar to our work. It flags STR regions and aligns them separately, allowing repeated traversal of STRs during alignment [32]. Its authors find that this approach improves the consistency of read alignment in and near repeated regions, improving downstream variant calling. STR-realigner was designed for short reads, however, and its Θ(n²) runtime, which is perfectly fine for short reads of length 101 bp, is unacceptable for long reads, which regularly reach lengths upwards of 100 kbp. Our work introduces a variable gap penalty for n-polymer copy number INDELs, as shown in Fig. 5. INDELs are more likely to occur in n-polymers, and so we provide a lower, context-specific gap penalty, allowing only copy number INDELs. The exact sequence is not considered in this work; all 2-polymers of length 3 are scored the same (e.g. ATATAT and TGTGTG). This work makes the following contributions:

• We show that context-agnostic affine/convex gap penalties do not accurately reflect the likelihood of nanopore sequencing errors in n-polymer regions
• We extend Needleman-Wunsch affine gap alignment to include context-dependent gap penalties for more accurately aligning n-polymers
• We identify that during germline small variant calling, most INDEL false negative errors occur during the pileup-based variant calling stage
• We introduce "follow-banding" for efficient read realignment
• We develop a VCF standardization method that ensures variants are reported in the same format as our nPoRe realigner
• We show that haplotype phasing and nPoRe realignment significantly improve pileup-based variant calling accuracy

Overview

This work focuses on improving the accuracy of germline small variant calling (heritable mutations < 50 bp in size). We do so by realigning mapped reads (inputting and outputting in standard BAM format) to improve fine-grained alignment and read concordance by adjusting each read's CIGAR string. Because we are concerned only with small variants, performing realignment within a ±50 bp window of the original mapping/alignment is sufficient.

Fig. 5 Gap penalties for various copy number deletions, compared to a static affine gap penalty. The penalty depends on the local repeat pattern's periodicity (n = 2) and length (l = 3, 6, 9).

This work is independent of the downstream variant caller, and nPoRe can be used in combination with either Clair3 or PEPPER. To evaluate nPoRe, we retrain Clair3 from scratch with minimap2- and nPoRe-realigned reads. We find that when retraining Clair3, it is beneficial to "standardize" the ground truth VCF to report variants in a manner similar to nPoRe-realigned reads (details in the "Methods" section). Realigning reads with nPoRe is relatively efficient and results in a significant increase in read concordance, which translates well to an improvement in final variant calling accuracy. Figure 6 reports the performance of all three evaluated Clair3 pipelines, with precision and recall for SNPs and INDELs given separately for each sub-region. Results are reported for both the original and standardized ground-truth VCFs (see the "Methods" section). Figure 6 shows that n-polymer regions are responsible for the majority of INDEL errors, since with these regions excluded, INDEL precision and recall both exceed 95%. Performance in tandem repeat regions alone is relatively good, and homopolymers account for the majority of the remaining errors. For a fixed INDEL precision of 2/3, sorting reads by haplotype (clair3→clair3-hap) improves INDEL recall from 63.76% to 70.66%.
Realigning reads with nPoRe (clair3-hap→clair3-npore-hap) further improves INDEL recall to 73.04%. We chose to perform evaluations using both VCFs because, although they contain the exact same information, the "standardized" VCF was more likely to report several INDELs instead of several SNPs (due to the lower n-polymer shortening/lengthening penalty), and occasionally broke an INDEL up into several smaller INDELs. As a result, the standardized VCF had 18.05% more INDELs (31,104) and 1.45% fewer SNPs (155,163) than the original VCF (25,500 INDELs and 157,454 SNPs). hap.py's vcfeval engine assigned partial credit for SNPs less frequently to nPoRe-aligned reads.

Read concordance

If sequenced reads contained no errors, they would all perfectly agree with one another and variant calling would be easy. We would like to maximize the extent to which reads agree with one another, which we term "concordance" and measure per-haplotype and per-position in terms of Gini purity. Gini purity is defined as

GP = Σ_{i=1..N} P(i)²

where N is the number of classes and P(i) is the probability of class i. Figure 7a (lower graph) shows the resulting Gini purity histogram with the classes A, C, G, T, - (deletion) on a logarithmic y-scale. If all reads agree, GP = 1. If 50% call C (and the rest call one other class), GP = 0.5. In the worst case, an even split between the five classes, GP = 0.2. Reference positions with low Gini purity scores are therefore difficult to call, and are a likely source of both false positive and false negative variants. The lower graph in Fig. 7 compares the Gini purity score distributions in the minimap2- and nPoRe-aligned BAMs. It shows a marked ≈50% decrease in positions with Gini purity less than 0.5 for the nPoRe-realigned BAM, demonstrating that nPoRe greatly improves alignment concordance across reads in difficult regions. Read concordance in the phased BAM pileup, evaluated by Gini purity computed per reference position, is shown in Fig. 7a. Insertion concordance was evaluated separately, in Fig. 7b, where the classes are all insertions between base k and k+1 (e.g. ε, A, AA, AAA, AT, ATT, ...). We plot insertions separately because the variable number of classes greatly affects the Gini purity score distribution. There appears to be an approximately 10% increase for all imperfect Gini purity scores, which we attribute to nPoRe's increased likelihood of calling INDELs (and, as a result, more classes on average and greater divergence).

Timing

We performed our evaluations on a system with 2× Intel Xeon E5 2697v3 2600 MHz CPUs and 64 GB total RAM. Timing results are shown in Table 1. From this evaluation, it is clear that any pipeline stages requiring computation on the full BAM file (marked with *) are considerably more expensive than working with just putative variants, a small fraction of the entire genome. Although our nPoRe realigner accounted for 79.6% of total CPU time, it only accounted for 27.8% of the real runtime, or just under twice as long as it took to index the BAM. nPoRe's CPU time was 51.6× its real time on our system with 56 total cores, demonstrating that we took full advantage of the available parallelism. Figure 8 shows the calculated score matrices for 1- and 3-polymers, corresponding to the confusion matrices in Fig. 2. In general, n-polymer INDELs are penalized less than the general-case affine gap INDEL penalty. Additionally, insertions are more common than deletions, and INDELs are more common in n-polymers of shorter repeat unit length (n).
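As a concrete reference for the concordance metric above, a minimal sketch of per-position Gini purity:

```python
from collections import Counter

def gini_purity(calls):
    """Concordance of per-read calls at one reference position on one
    haplotype; calls are drawn from {'A', 'C', 'G', 'T', '-'}.
    Returns GP = sum_i P(i)^2."""
    counts = Counter(calls)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

# gini_purity("AAAA")  -> 1.0   (all reads agree)
# gini_purity("AACC")  -> 0.5   (even two-way split)
# gini_purity("ACGT-") -> 0.2   (worst case: even five-way split)
```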
Discussion

The current nPoRe algorithm implementation was designed to demonstrate that there is a significant difference in INDEL rates between repetitive and non-repetitive sequences, due to the common occurrence of n-polymer copy number INDELs and sequencing errors. In order to do so, we decided upon a strict definition of n-polymers that requires at least three repetitions of the exact same repeat unit. We found that this strict definition includes around 65% of all INDELs in our dataset. Despite this, there are many repetitive regions in which sequencing errors are common but which do not meet our strict definition of an n-polymer. For example, the sequence AAATAAAATAAATAAAT is not an n-polymer because the second repetition of AAAT has an additional A. A more lenient definition of n-polymers would result in a broader application of reduced INDEL gap penalties for repetitive regions and may improve alignment results further. Additional leniency, however, would come at the cost of increased computation.

Table 1 Timing results for stages in the clair3-npore-hap pipeline (Table 3). Asterisks (*) denote steps for which the full BAM is required, rather than just the VCF.

We find alignment speed to be the greatest practical limitation of our nPoRe aligner, despite writing our alignment kernel in Cython and taking full advantage of the available parallelism. Genomics datasets are inherently large, and a hyper-optimized implementation with SIMD intrinsics and reduced data width may be necessary for large-scale applications. We have already explored reducing memory usage by shifting to a difference-based n-polymer cost matrix and only storing 2*n_max + 1 matrix rows in memory, resulting in roughly 10-20× lower memory usage. Replacing the n-polymer cost matrix with a best-fit surface or function would likely improve efficiency further by reducing irregular memory accesses. We consider the main contribution of this work to be identifying fine-grained alignment as a significant source of small variant calling INDEL errors and developing an algorithmic solution; speed can be improved through further engineering efforts. The astute reader may notice that our n-polymer copy number INDEL penalties were calculated based on the measured negative log likelihood of occurrence in the original BAM, which, as we have pointed out, has issues with fine-grained alignment. Even if this were to affect our estimate of n-polymer copy number INDEL likelihoods by 2×, however, the effect on the INDEL penalty is only log 2 ≈ 0.69. Our algorithm has already reduced the cost of a 3-base deletion within a 3-polymer from G_open + 2*G_extend = 7 to the range [3.0, 3.7], depending on the 3-polymer length. If necessary, a second iteration of INDEL likelihood estimation using the nPoRe-realigned BAM could be used to further improve score estimation.

Conclusions

We identify the main source of nanopore germline small variant calling errors to be copy number INDEL false negatives in n-polymer regions, and show that context-agnostic affine gap penalties do not accurately reflect the likelihood of nanopore sequencing errors. To improve nanopore pileup-based variant calling accuracy, we explore correcting fine-grained read alignment. This work extends Needleman-Wunsch affine gap alignment to include repeat-aware gap penalties for n-polymers. In doing so, we also develop "follow-banding" for efficient long read realignment and a method for standardizing ground-truth VCFs.
We demonstrate that read realignment improves read concordance and variant calling accuracy, and release nPoRe as an open source tool. Despite being located in low-complexity regions, calling the length of tandem repeats is clinically relevant: there is an entire class of neuropathological disorders associated with copy number variation known as "Tandem Repeat Disorders", or TRDs. Huntington's Disease is one such disorder, caused by 40 or more repeats of the CAG 3-polymer at the end of the gene HTT, instead of the normal 10-30 copies. Other TRDs include Fragile X Syndrome, Kennedy's Disease, myotonic dystrophy, and several spinocerebellar ataxias [33]. Since nPoRe significantly improves read alignment and variant calling in tandem repeat regions, it will lead directly to more accurate diagnoses of such disorders.

Methods

Overview

Because we have designed a read realignment algorithm, we trust the initial mapping of each read. Each read and its corresponding section of the reference genome are realigned, and a new traceback (alignment path) is computed. In other words, our solution simply adjusts the CIGAR string of each read within the input BAM file to better model the most likely mutations and sequencing errors, in an effort to achieve greater concordance between reads. Our realignment algorithm is an extension of the Needleman-Wunsch algorithm for global alignment [34]. In addition to including known improvements such as an affine gap penalty and a custom substitution penalty matrix [35], our algorithm allows the shortening and lengthening of homopolymers and tandem repeats (e.g. ACACAC→ACACACAC).

n-polymer repeats

The literature often categorizes sequences consisting of one repeated base as "homopolymers", and repeated sequences of at least two bases as "tandem repeats" or "copolymers" [8,36]. Short tandem repeats (STRs) are often defined as repeated units 2-6 bases in length, and are also known as "microsatellites" or "simple sequence repeats" (SSRs) [37]. Rather than treating these classifications separately, for nPoRe we define an n-polymer to consist of at least 3 exact repeats of the same repeated sequence, where the repeat unit is of length 1-6 bases (1 ≤ n ≤ 6, l ≥ 3). For example, homopolymers such as AAAAA (n = 1) and tandem repeats such as ACACACAC (n = 2) and TTGTTGTTG (n = 3) are n-polymers. Shorter or irregular repeated sequences such as AATTAATT and ACAACAAACAC are not. An upper threshold of n_max = 6 was selected because there is a marked decrease in the frequency of n-polymers for n > 6; tandem repeats are usually defined with the same upper bound on repeat unit length for the same reason, and Fig. 4 shows that 6-polymers are already uncommon. Furthermore, nanopore R9.4.1 sequencers fail to accurately call the length of n-polymers because the pore's effective sensing width is 5-6 bases, i.e. the measured signal depends upon 5-6 adjacent bases simultaneously [38]. An n-polymer with n ≤ 6 is usually observed as a nearly-constant signal due to the exact repetition, from which it is difficult to determine the repeat length. For n-polymers where n > 6, this is less of a problem, and fewer errors are observed. For a similar reason, a minimum n-polymer length of l = 3 was decided upon to classify a repeated sequence as an n-polymer: if l = 2, there is never a series of n bases bordered on both sides by another copy of the same n bases.
The two copies of n bases are each adjacent to a non-repeating region, and as a result the measured signal is non-constant and few basecalling errors occur.

Penalty functions

For each read, differences from the reference genome can be attributed to either sequencing errors or actual mutations. Regardless of origin (error or mutation), our goal is to align these reads to the reference in a manner that accurately captures the change that occurred. Existing aligners fail to do this by defining substitution and gap penalties based on estimated rather than measured rates of occurrence, and their algorithms do not account for common sequencing error modes such as tandem repeat length errors in nanopore sequencers. In contrast, we calculate penalties based on frequency measurements from the input BAM file. We define the penalty score for each difference (whether error or mutation) to be the negative log likelihood of that event occurring. As a result, finding the minimum-penalty alignment path is equivalent to finding the most likely set of errors and mutations that have occurred (assuming independence). Figure 9 shows the calculation of the substitution penalty matrix P from the confusion matrix C_P, using Eq. 1; ε = 0.01 was included for numerical stability in the case that certain events were never observed. If we consider the bases x = "ACGT", then P[i, j] is the negative log probability that base x[i] was observed as base x[j], either through a mutation or a sequencing error:

P[i, j] = −log( (C_P[i, j] + ε) / (Σ_k C_P[i, k] + ε) )    (1)

Affine gap penalties

Confusion matrices for insertions (C_I) and deletions (C_D) were first generated by measuring the occurrence of small INDELs in the input BAM. Both matrices are 1D, since the expected INDEL length is always zero. Penalties were then calculated by determining the negative log probability of each INDEL length i occurring: −log((C_I[i] + ε) / (sum(C_I) + ε)). From these penalties, a best-fit gap opening penalty G_open of 5 and gap extension penalty G_extend of 1 were selected for both insertions and deletions [35].

Tandem repeat penalty matrix

First, a confusion matrix C_N of shape 6 × 100 × 100 was generated by comparing expected and observed n-polymer lengths l (up to 100). For each n, or repeat unit size 1-6, a penalty matrix was calculated as

N[n, i, j] = −log( (C_N[n, i, j] + ε) / (Σ_j' C_N[n, i, j'] + ε) )

where i is the expected repeat length and j is the measured repeat length. To improve penalty regularity, particularly for longer n-polymers where few examples were observed, the following two properties were enforced for each possible combination of k > 0, n, i within the bounds of N:

• Shorter INDELs are more likely: N[n, i, i ± k] ≤ N[n, i, i ± (k+1)]
• Longer n-polymers are more likely to contain an INDEL of a given size: N[n, i, i ± k] ≥ N[n, i+1, (i+1) ± k]

Reference annotation

Reference annotations are used to track eligible n-polymers during alignment. For each possible n-polymer repeat unit length from n = 1 to n_max, each reference position is annotated with l, the length or number of consecutive repeat units, and idx, the 0-based index of the current repeat unit (0 ≤ idx < l). Table 2 shows example annotations for a short reference sequence (ATATATATTTTTAAAGCGCGC) for n = 1 and n = 2. Recall that in order for a sub-sequence to be considered an n-polymer, the pattern must repeat exactly at least three times. Annotations may overlap, and non-zero annotations are only placed at the start of each n-polymer repeat unit.
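A simplified sketch of the annotation pass (a greedy scan over maximal runs; the actual implementation may handle overlapping runs differently):

```python
def annotate_npolymers(ref, n_max=6, min_repeats=3):
    """annots[n][pos] = (l, idx) at the start of each repeat unit of an
    n-polymer with l >= min_repeats exact copies; (0, 0) elsewhere."""
    annots = {n: [(0, 0)] * len(ref) for n in range(1, n_max + 1)}
    for n in range(1, n_max + 1):
        pos = 0
        while pos + n <= len(ref):
            unit, l = ref[pos:pos + n], 1
            while ref[pos + l * n: pos + (l + 1) * n] == unit:
                l += 1                      # count exact consecutive copies
            if l >= min_repeats:
                for idx in range(l):        # mark the start of each unit
                    annots[n][pos + idx * n] = (l, idx)
                pos += l * n                # skip past this maximal run
            else:
                pos += 1
    return annots

# annotate_npolymers("ATATATATTTTTAAAGCGCGC")[2][0] -> (4, 0): ATATATAT = 4 x AT
```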
Alignment

Before aligning read r to reference R, the reference is annotated with n-polymer information as discussed previously. Then, the five matrices D, I, M, S, and L are computed in lockstep, one cell at a time, in that order. These matrices of size |r| × |R| represent the states Deleting, Inserting, Matching, Shortening n-polymers, and Lengthening n-polymers, respectively. For each cell, each matrix stores a tuple (val, pred, run) containing the accumulated penalty value, in addition to the predecessor matrix and the number of consecutive movements (run) within that matrix, for backtracking purposes. Figure 10 demonstrates the cell dependency patterns and penalties in greater detail. For example, when computing cell (i, j) in S, the reference annotations l and idx are first retrieved for each n at R[j]. All of S's dependencies in Fig. 10 are considered, and the minimum over these dependencies' cell values plus the associated penalties is calculated and stored in the result cell. In other words, when looking at S[i, j], for each n we take the minimum over two allowed movements: starting to shorten a tandem repeat (state M→S) and continuing to shorten a tandem repeat (state S→S). All movements into matrices S and L such as these are only allowed conditionally, based on the reference annotations (described in the following section). Note that if matrices S and L are omitted (along with all of their dependencies), this algorithm is equivalent to Needleman-Wunsch alignment with an affine gap penalty [34].

n-polymer INDEL conditions

Unlike matrices D, I, and M, the results for matrices S and L are stored several cells ahead of the current cell, and cell dependencies are only allowed conditionally, based on the reference annotations. This ensures that matrices S and L only allow INDELs which change the copy number of tandem repeats and homopolymers. Three conditions c1, c2, and c3, referenced in Fig. 10, determine when these movements are permitted, using the reference annotations l and idx. Matches, Insertions, and Deletions correspond to diagonal, vertical, and horizontal movements in the alignment matrix, respectively. An example alignment path is denoted in Fig. 11a. After computation, the computed CIGAR string is collapsed (MMMDMIMM→3M1D1M1I2M).

Follow-banding

In Fig. 11a, the banded region surrounding the original alignment path is shown with light gray cells. Computation proceeds one anti-diagonal row of width 2b + 1 at a time, centered on the alignment path; the computation of anti-diagonal rows shifts either right or downward at each step, governed by the previous CIGAR operation, D or I. These anti-diagonal rows can be stored efficiently in matrix format, as demonstrated in Fig. 11b. Transforming the banded |r| × |R| matrix A to a (2b + 1) × 2|r| matrix B saves significant space because nanopore sequencing read lengths |r| can be up to several million bases [4], while realignment works well with a band width of b = 30. Offset arrays INSs and DELs can be precomputed using the CIGAR (Fig. 11); given a cell in matrix B with indices (i, j), its position in matrix A can then be computed directly from these offset arrays.

Reference annotations

The worst-case time complexity for computing the reference annotations is O(|R| n²_max l_max), where |R| is the length of the reference R, n_max is the maximum n-polymer unit length considered, and l_max is the maximum n-polymer length. Since our n-polymer score matrix N is of size (6, 100, 100), n_max = 6 and l_max = 100; thus, the time complexity is effectively O(|R|). Furthermore, these annotations must only be computed once, and their cost can be amortized over all the reads that are aligned to the reference. We found the time required for reference annotations to be insignificant compared to alignment. These annotations require O(|R| n_max) space.
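As a small aside on the traceback step referenced above, collapsing per-base operations into a CIGAR string is plain run-length encoding:

```python
from itertools import groupby

def collapse_cigar(ops):
    """Run-length encode per-base alignment operations into a CIGAR,
    e.g. 'MMMDMIMM' -> '3M1D1M1I2M'."""
    return "".join(f"{sum(1 for _ in run)}{op}" for op, run in groupby(ops))

assert collapse_cigar("MMMDMIMM") == "3M1D1M1I2M"
```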
Read alignment

Once the reference annotations and score matrices have been computed, the nPoRe algorithm requires O(|R| |r|) time for each read r. The only additional overhead nPoRe incurs over Needleman-Wunsch with affine gaps is computing five Dynamic Programming (DP) matrices instead of three, as well as computing the new cell dependencies; all new penalties are conditional O(1) lookups. As discussed earlier, follow-banding further reduces both the time and space complexity of alignment from O(|R| |r|) to O(b |r|), where b is the band width. In total, the cost of aligning all m reads is

O( b Σ_{k=1..m} |r_k| )

or equivalently O(d b |R|), where d is the average depth of coverage. All our code is open source and readily available at https://github.com/TimD1/nPoRe.

Reference

We used the GRCh38 reference from the Genome-In-A-Bottle (GIAB) consortium [36].

Reads

We obtained our FASTQ files from ONT Open Datasets' May 2021 re-basecalling of HG002 PromethION R9.4.1 data using Guppy 5.0.6. Specifically, we used flow cell PAG07162, prepared using the Short Read Eliminator (SRE) protocol [40]. Depth of coverage was approximately 60×. For training we used chr1-chr19, and for testing we used chr20-chr22.

Stratification regions

Stratification BED regions were calculated for n = 1...n_max using the definition of n-polymers provided previously. Regions were extended by a single base on each side (slop=1) to include variants occurring at the edges of n-polymer regions. These BEDs were then merged and complemented as necessary to create stratification BEDs for all n-polymer regions and non-n-polymer regions.

Pipeline

The full training and evaluation pipelines for all three Clair3 configurations tested are shown in Table 3. We used minimap2 version 2.17-r954-dirty [41], clair3 version v0.1-r9 [7], whatshap version 1.0 [42], and hap.py version v0.3.14 [36]. All three variant callers were trained from scratch using our 60× HG002 dataset on minimap2-aligned reads for chr1-chr19, and tested on chr20-chr22. We first extended the retrained Clair3 baseline (clair3) by phasing the input reads by haplotype and training a phased pileup candidate caller (clair3-hap). This was done because it significantly improves read concordance, and leaving reads unphased when calling difficult single-haplotype variants might overshadow concordance improvements gained by nPoRe's alignment algorithm. This second baseline enables us to clearly delineate the gains from haplotype phasing and from our nPoRe alignment algorithm. The final configuration was clair3-npore-hap, in which we performed ordinary variant calling with Clair3, phased reads by haplotype, and then realigned them with nPoRe prior to variant calling.

Haplotype phasing

In order to add haplotype phasing information to Clair3, a single iteration of the ordinary pileup-based pipeline was first run. Proposed variants were then phased using whatshap phase, and reads were tagged by haplotype using whatshap haplotag. We then sorted reads by haplotype into three separate BAM files. When generating the input pileup tensor for training clair3-hap and clair3-npore-hap, a pileup tensor was generated at each position for the unphased reads, for reads from the first haplotype, and for reads from the second haplotype. These three pileup tensors were then concatenated to create a new input tensor for Clair3.
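A minimal sketch of that final concatenation step (the tensor shapes here are hypothetical; Clair3's real pileup encoding has its own fixed layout):

```python
import numpy as np

def phased_pileup_tensor(unphased, hap1, hap2):
    """Stack per-position pileup tensors for the unphased read set and the
    two haplotype-sorted read sets along the feature axis, producing one
    input tensor per position for the phased pileup caller.

    unphased, hap1, hap2 : arrays of shape (positions, features)
    """
    return np.concatenate([unphased, hap1, hap2], axis=-1)
```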
Truth VCF standardization
Figure 12 shows a simplified example of a typical minimap2 input BAM in comparison to the nPoRe-realigned output BAM. nPoRe is comparatively more likely to call n-polymer INDELs than SNPs, due to the reduced INDEL penalty. We found that clair3-npore-hap variant calling performance suffers if we train Clair3 with the original ground-truth VCF, since the realigned reads tend to report variants using an INDEL-heavy representation. To mitigate this, we altered the ground-truth VCF so that it reports variants using the same representation our aligner tends towards. An example of this "standardized" VCF is shown in Fig. 12. To achieve this, we copied our reference FASTA to create two haplotype FASTAs, and applied the phased ground-truth variants to each haplotype FASTA, storing the new CIGAR. We then treated these ground-truth haplotype references as reads and, using a mapping position of 0 and the stored CIGARs, aligned them to the original reference using nPoRe. Any substitutions, insertions, or deletions in the resulting alignment were then parsed into a new standardized ground-truth VCF file. This process ensures that the new "standardized" truth VCF contains the exact same ground-truth sequence as the original VCF when applied to the reference FASTA, but reports variants in a manner consistent with nPoRe.

Fig. 12 VCF standardization: the ground-truth VCF is modified to report variants in a manner similar to nPoRe-realigned reads. The resulting sequence is unchanged.
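The first step of this standardization, applying phased variants to the reference to obtain a haplotype sequence, can be sketched as below; VCF parsing, CIGAR bookkeeping, and the nPoRe realignment itself are omitted, and the variant representation is a simplification.

def apply_variants(ref: str, variants):
    # Apply (pos, ref_allele, alt_allele) tuples (0-based, sorted,
    # non-overlapping) to a reference string, returning the haplotype sequence.
    out, cur = [], 0
    for pos, ref_allele, alt_allele in sorted(variants):
        assert ref[pos:pos + len(ref_allele)] == ref_allele, "VCF/reference mismatch"
        out.append(ref[cur:pos])
        out.append(alt_allele)
        cur = pos + len(ref_allele)
    out.append(ref[cur:])
    return "".join(out)

# A SNP at position 2 and a 2 bp deletion at position 5 (toy coordinates):
print(apply_variants("ACGTACGTAC", [(2, "G", "T"), (5, "CGT", "C")]))  # ACTTACAC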
Characterization of Volatile Organic Compounds of Healthy and Huanglongbing-Infected Navel Orange and Pomelo Leaves by HS-GC-IMS

The Asian citrus psyllid (ACP), Diaphorina citri Kuwayama, is the only natural vector of the bacteria responsible for Huanglongbing (HLB), a worldwide destructive disease of citrus. ACP reproduces and develops only on the young leaves of its rutaceous host plants. Olfactory stimuli emitted by young leaves may therefore play an important role in ACP control and HLB detection. In this study, volatile organic compounds (VOCs) from healthy and HLB-infected young leaves of navel orange and pomelo were analyzed by headspace-gas chromatography-ion mobility spectrometry (HS-GC-IMS). A total of 36 compounds (including dimers or polymers) were identified and quantified from orange leaves and 10 from pomelo leaves. Some compounds showed significant differences in signal intensity between healthy and HLB-infected leaves and may constitute possible indicators of HLB infection. Principal component analysis (PCA) clearly discriminated healthy from HLB-infected leaves in both orange and pomelo. HS-GC-IMS was an effective method to identify VOCs from leaves. This study may help develop new methods for the detection of HLB or find new attractants or repellents of ACP for the prevention of HLB.

Introduction
Huanglongbing (HLB), also known as citrus greening disease, is a worldwide destructive disease of citrus [1]. HLB has caused several billion dollars in losses to the citrus industry in Florida, USA: citrus-bearing acres decreased from 679,000 in 2003-04 to 402,000 in 2017-18, and the number of citrus growers went down from 7389 in 2002 to 2775 in 2017 [2]. HLB-infected citrus trees yield fewer and poorer-quality fruits, which are less juicy, and bitter and metallic in taste [3,4]. The Asian citrus psyllid (ACP), Diaphorina citri Kuwayama, is the only vector of the bacteria responsible for HLB [5]. Novel and sustainable approaches to the control of ACP are urgently needed for successful HLB management programs. ACP mates, oviposits, and develops exclusively on new flush shoots [6]. Recent studies have shown that volatile organic compounds (VOCs) emitted by flushing shoots may play an important role in the detection, location, and evaluation of potential host plants by ACP [7]. Understanding the chemical composition of citrus leaf VOCs may clarify how ACP recognizes stimuli from its host plants and the interaction between them. Wenninger et al. demonstrated that ACP uses olfactory cues in orientation to host plants and suggested using plant VOCs to monitor and manage ACP. In this study, fingerprints of VOCs of healthy orange leaf (HEAO), pomelo leaf (HEAP), Huanglongbing-infected orange leaf (HLBO), and pomelo leaf (HLBP) samples were established by HS-GC-IMS. The use of HS-GC-IMS to analyze VOCs in orange and pomelo leaf samples and to distinguish healthy from HLB-infected leaves has not previously been reported. This work might provide a reference for developing a new method for the detection of HLB and for finding new attractants or repellents of ACP for the prevention of HLB.

HS-GC-IMS Topographic Plots of HEAO, HLBO, HEAP, and HLBP
The VOC information for HEAO, HLBO, HEAP, and HLBP was obtained via HS-GC-IMS analysis. A 3D spectrum was generated by a FlavorSpec® instrument, as shown in Figures 1 and 2. The X-axis denotes the ion drift time, the Y-axis denotes the retention time of the gas chromatograph, and the Z-axis denotes the peak intensity in the topographic map. The VOCs in different samples demonstrated varying peak intensities.
Numbers 1, 2, and 3 indicate triplicate experiments; for example, HEAO1, HEAO2, and HEAO3 were triplicate experiments of HEAO. HLBO had more VOC peak signals than HEAO, and HLBP had more VOC peak signals than HEAP. A study of the changes of metabolites in citrus leaves in response to ACP stress might be helpful for HLB detection and ACP control. For the convenience of comparison, a vertical view was used, as shown in Figure 3.
The background of HEAO1 is blue, and the red vertical line at horizontal coordinate 1.0 is the reactant ion peak (RIP, normalized drift time of 7.93). Each point on the right side of the RIP represents a VOC. The spectral diagram of HEAO1 was selected as the reference, and the spectral diagrams of the other samples were subtracted from the reference. If two VOCs were identical, the background after subtraction would be white. Peak intensities are indicated by different colors: red spots indicate a higher concentration of the VOC than the reference, whereas blue spots indicate a lower concentration. The data were displayed in the topographic plot zone with a retention time from 100 to 1000 s and a drift time (RIP relative) from 1.0 to 2.5. HLBO clearly had more VOC peak signals, and most VOCs had a higher concentration than in HEAO. A comparison of volatile organic compounds from HEAP and HLBP is shown in Figure 4; HLBP clearly had more VOC peak signals, and most VOCs had a higher concentration than in HEAP.

Differences in the Characteristic Volatile Fingerprints of HEAO, HLBO, HEAP, and HLBP
Based on the peak signals of the topographic plots, the fingerprints of HEAO and HLBO were generated using the Gallery Plot to accurately evaluate the VOCs, as shown in Figure 5.
The full fingerprint of VOCs from the orange leaves HEAO and HLBO was divided into two parts, A and B, for better comparison. The full fingerprint of VOCs from the pomelo leaves HEAP and HLBP is presented in part C. In the fingerprint, each row represents the entire signal peak of one sample, and each column represents the same VOC in different samples. The content of VOCs is distinguished by colors: the higher the content, the brighter the color. VOCs with the same name in the fingerprints are presented as monomers, dimers, or polymers. The drift time of dimers or polymers is increased due to their proton affinity and higher content [27]. The composition and contents of VOCs in HEAO and HLBO can be compared intuitively using the fingerprints. Unfortunately, some VOCs were not identified, due to the limited data library; a whole VOC profile should be obtained using GC-MS data and HS-GC-IMS data together. As shown in Figure 5A, ten peaks, including hexanal, 3-pentanone, and 2-butanone, were identified. The brightness of the fingerprint in part A was much stronger for HEAO than for HLBO, and the number of identified VOCs in HEAO was larger than in HLBO. Some VOCs, such as 2-hexanol and its dimer, appeared in HEAO, while their fingerprint information in HLBO was minimal. In addition, 3-pentanone was present in HLBO; however, the brightness of its fingerprint was much weaker than in HEAO. As shown in Figure 5B, twenty-six peaks, including terpenes (limonene, α-pinene, and β-ocimene) and other VOCs, were identified. Most peaks in part B showed a much brighter fingerprint in HLBO than in HEAO. The fingerprints of HEAP and HLBP were generated using the Gallery Plot to accurately evaluate the VOCs in pomelo leaves, as shown in Figure 5C. Ten peaks, including hexanal, 3-pentanone, 2-butanone, and limonene, were identified. Most peaks showed a much brighter fingerprint in HLBP than in HEAP.

Identification of Volatile Organic Compounds in HEAO, HLBO, HEAP, and HLBP
The qualitative analysis of VOCs in HEAO and HLBO is represented in Table 1 and Figure 6. Some VOCs presented multiple signals as monomers, dimers, and polymers, due to their varying concentrations and adduct formation while moving through the IMS drift tube [27]. These VOCs had the same GC retention times but different drift times. Table 1 lists all the VOCs identified from the GC-IMS library in orange leaf samples, including the compound name, retention index (RI), retention time (Rt), drift time (Dt, RIP relative), and signal intensity (SI). RI values were calculated using the homologous series of 2-ketones C4-C9 (2-butanone, 2-pentanone, 2-hexanone, 2-heptanone, 2-octanone, and 2-nonanone) as external standards on the FS-SE-54-CB capillary column. Acetone was excluded from further analysis because it might come from the cleaning agent.
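One common way to compute such retention indices is linear interpolation between the bracketing ketone standards, as in the Python sketch below; the retention times are invented for illustration, and the instrument software may use a different convention.

def retention_index(t, standards):
    # Linear retention index by interpolation between bracketing standards;
    # `standards` maps ketone carbon number (C4-C9 here) to retention time.
    carbons = sorted(standards)
    for lo, hi in zip(carbons, carbons[1:]):
        t_lo, t_hi = standards[lo], standards[hi]
        if t_lo <= t <= t_hi:
            return 100.0 * (lo + (t - t_lo) / (t_hi - t_lo))
    raise ValueError("retention time outside the standard series")

ketones = {4: 120.0, 5: 185.0, 6: 270.0, 7: 380.0, 8: 520.0, 9: 700.0}  # made-up times (s)
print(round(retention_index(310.0, ketones)))  # 636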
As shown in Figure 6, the two-dimensional topographic plots of VOCs in HEAO and HLBO were obtained at the retention time and the normalized drift time by HS-GC-IMS. Each marked dot represents an identified VOC with the same serial number as presented in Table 1. The higher the intensity of the red color, the higher the concentration of the VOC; the blue color has the opposite interpretation. These plots show some visual differences of VOCs by location and relative content between healthy and HLB-infected orange leaves. Highly significant differences (p < 0.001) in signal intensity between HEAO and HLBO were observed for 3-pentanone and its dimer, ethyl 2-methylbutanoate, limonene, α-pinene, ethyl acetate and its dimer, ethyl 2-methylpropanoate, benzaldehyde, and methyl 2-methylbutanoate. For each compound identified, the percent difference of the average signal intensity between HEAO and HLBO samples was compared (Table 1). The largest percent differences (higher than 300%) were for ethyl acetate dimer (5733.5%), 3-methylbutanol (684.9%), ethyl 2-methylbutanoate (611.6%), ethyl 2-methylpropanoate dimer (402.8%), ethyl 2-methylbutanoate dimer (380.3%), and ethyl propanoate dimer (317.6%). Of the compounds for which there was a highly significant difference, the HEAO signal intensity was higher for 3-pentanone, 3-pentanone dimer, and benzaldehyde. Conversely, the HLBO signal intensity was higher for ethyl 2-methylbutanoate, limonene, α-pinene, ethyl acetate and its dimer, ethyl 2-methylpropanoate, and methyl 2-methylbutanoate. VOCs showing a significant difference in signal intensity in both leaf types might be possible indicators for the detection of HLB. Representative VOCs showing a highly significant difference (p < 0.001) or the largest percent differences (>300%) of signal intensity between healthy and HLB-infected leaves are shown in Figure 7.

The qualitative analysis of VOCs in HEAP and HLBP is represented in Table 2 and Figure 8. Each marked dot in the two-dimensional topographic plot in Figure 8 represents an identified VOC with the same serial number as presented in Table 2. VOC content is indicated by the brightness of the color. These plots showed some visual differences in VOCs by location and relative content between healthy and HLB-infected pomelo leaves.
The number of characteristic peaks identified from the GC-IMS library in pomelo leaf samples (nine, excluding acetone) was smaller than in orange leaf samples (35 characteristic peaks, excluding acetone). The signal intensities of 3-pentanone, 3-pentanone dimer, and limonene polymer showed a highly significant difference (p < 0.001) between healthy and HLB-infected Shatian pomelo leaves. However, the signal intensities of 2-butanone, 2-butanone dimer, and hexanal did not show a significant difference between HEAP and HLBP. For each compound identified, the percent difference of the average signal intensity between HEAP and HLBP samples was compared. The largest percent differences (higher than 200%) were for the limonene polymers (283.7% and 239.7%). These differences can be visually compared in Figure 8, where compounds 9 and 10, which represent limonene polymers, have a brighter color in the HLBP plot than in the HEAP plot. The differences between healthy and HLB-infected Shatian pomelo leaves might provide possible indicators for the detection of HLB.

Similarity Analysis of Fingerprint Based on PCA
Principal component analysis (PCA) is a multivariate statistical analysis technique. By determining a few principal component factors to represent many complex variables in the samples, the regularity of and differences among samples can be evaluated according to the contributions of the principal component factors [30]. PCA was performed on the signal intensities to highlight the differences of VOCs in the HEAO and HLBO samples, as shown in Figure 9. The distribution map for the first two principal components, which describe 86% and 8% of the accumulative variance contribution rate, is displayed as a visualization map. The PCA results clearly show that HEAO (sample 1) and HLBO (sample 2) occupy completely independent spaces and are well-distinguished in the visualization map. HEAO can be distinguished by the positive score values of PC1, while HLBO is defined by the negative scores of PC1, and the difference between HEAO and HLBO can be further resolved by combining the score values of PC2.
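The PCA step can be reproduced in outline with a few lines of scikit-learn, as sketched below; the intensity matrix is random stand-in data rather than the measured signal intensities.

import numpy as np
from sklearn.decomposition import PCA

# Rows = samples (triplicates of healthy and infected leaves), columns =
# per-compound signal intensities; random stand-ins, not the real data.
rng = np.random.default_rng(0)
intensities = np.vstack([rng.normal(1.0, 0.1, (3, 35)),   # healthy
                         rng.normal(1.6, 0.1, (3, 35))])  # HLB-infected

pca = PCA(n_components=2)
scores = pca.fit_transform(intensities)
print(pca.explained_variance_ratio_)  # analogous to the 86%/8% reported above
print(scores[:, 0])                   # PC1 separates the two groups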
PCA of the VOCs in the HEAP and HLBP samples is shown in Figure 10. The distribution map for the first two principal components, which describe 69% and 13% of the accumulative variance contribution rate, is displayed. The PCA results clearly show that HEAP (sample 3) and HLBP (sample 4) occupy completely independent spaces and are well-distinguished in the visualization map. HEAP can be distinguished by the positive score values of PC1, while HLBP is defined by the negative scores of PC1, and the difference between HEAP and HLBP can be further resolved by combining the score values of PC2.

Materials
Gannan Newhall navel orange and Shatian pomelo (Citrus maxima (Burm.) Merr. cv. Shatian Yu) young leaves were used as the experimental material and were collected in November 2019 and January 2020, respectively, from the orchard of Gannan Normal University, Ganzhou City, Jiangxi Province, China. The HLB-infected leaves were confirmed by the polymerase chain reaction (PCR) method.
Apparatuses
Analyses of samples were performed on a combined device consisting of an Agilent 490 gas chromatograph (Agilent Technologies, Palo Alto, CA, USA) with an FS-SE-54-CB capillary column (15 m × 0.53 mm) and a FlavorSpec® IMS instrument (Gesellschaft für Analytische Sensorsysteme mbH, Dortmund, Germany), equipped with an autosampler unit (CTC Analytics AG, Zwingen, Switzerland).

HS-GC-IMS Analysis Methods
The analysis method was performed as described by Yang et al. [27]. Fresh leaf (1 g, without any pretreatment) was cut into small pieces, transferred to a 20 mL headspace vial, and incubated at 40 °C for 20 min. Then, 200 µL of headspace was injected into the heated injector using a syringe at 85 °C. Nitrogen (99.99% purity) was used as the carrier gas. The sample was driven into the FS-SE-54-CB capillary column (15 m × 0.53 mm) by nitrogen at the following programmed flow: 2 mL/min for 2 min, 10 mL/min for 10 min, 100 mL/min for 10 min, and 150 mL/min for 30 min. The analytes were separated at 40 °C in the column and then ionized in the IMS ionization chamber at 45 °C. Drift gas flow was set at a constant 150 mL/min. All analyses were performed in triplicate. VOCs were identified by comparing the retention index (RI) and the drift time (the time taken for ions to reach the collector through the drift tube, in milliseconds) against the standards in the GC-IMS library (Gesellschaft für Analytische Sensorsysteme mbH, Dortmund, Germany).

Statistical Analysis
The analytical software included the Laboratory Analytical Viewer (LAV, Dortmund, Germany), three plug-ins (G.A.S., Dortmund, Germany), and a GC-IMS library search. IMS data were acquired and processed using the LAV processing software and used to generate the analytical spectrum, where each point represents a VOC. The spectrogram differences were compared using the Reporter plug-in. The differences between fingerprints of different samples were compared via the Gallery Plot plug-in. Qualitative analysis of VOCs was based on the National Institute of Standards and Technology (NIST) and IMS databases from the software's built-in GC-IMS library. Statistical analyses of the differences between mean values obtained for the experimental groups were performed using IBM SPSS Statistics 23.0 (IBM Corp., Armonk, NY, USA). p values were calculated using a t-test between healthy and HLB-infected leaves for each compound. p values < 0.05 were regarded as significant, p values < 0.01 as very significant, and p values < 0.001 as highly significant.
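The per-compound significance testing described above amounts to a two-sample t-test; a minimal scipy sketch with invented triplicate intensities follows (the authors used SPSS).

from scipy.stats import ttest_ind

healthy = [512.3, 498.7, 505.1]    # illustrative signal intensities only
infected = [701.4, 688.9, 695.2]

t_stat, p = ttest_ind(healthy, infected)
for level, label in [(0.001, "highly significant"),
                     (0.01, "very significant"),
                     (0.05, "significant")]:
    if p < level:
        print(f"p = {p:.2g}: {label}")
        break
else:
    print(f"p = {p:.2g}: not significant")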
Conclusions
As plant leaves are a major source of VOCs emitted into the atmosphere, and plant foliar VOCs are very important in mediating plant-plant and plant-insect communication, many methods and analytical techniques have been developed for plant foliar VOC research [31]. Comparison of VOCs in healthy and HLB-infected young leaves of navel orange and pomelo helps to understand the role VOCs play in the host plant of ACP, which may be beneficial for designing ACP control strategies as well as for HLB detection. In this study, VOCs of HEAO, HLBO, HEAP, and HLBP were identified and analyzed from topographic plots using the HS-GC-IMS technique. The signal intensities of some VOCs in HLBO and HLBP showed highly significant differences compared to those in HEAO and HEAP, respectively. HLB-infected leaves emitted more VOCs than healthy leaves. These findings are in accordance with the phenomenon whereby plants tend to increase VOC emissions after herbivore attack [32-34]. The PCA results clearly showed that HEAO and HLBO, as well as HEAP and HLBP, occupied relatively independent spaces and were well-distinguished. A novel method was developed to evaluate the characteristic VOCs of orange leaf samples by establishing the fingerprint with HS-GC-IMS and PCA. To the best of our knowledge, the use of HS-GC-IMS to analyze healthy and HLB-infected orange and pomelo young leaves has not been reported by other research groups. Taken together, the information on VOCs identified by the HS-GC-IMS fingerprint and PCA could be a useful tool for the identification and classification of orange and pomelo leaf samples. Our study may help develop new strategies for the detection of HLB or find new attractants or repellents of ACP for the prevention of HLB. It may also help explore plant-insect and plant-pathogen communication under biotic stresses. Unfortunately, many VOCs were not identified, due to the limited data library, especially for pomelo leaf samples. The development of an HS-GC-IMS data library and more synergistic methods and approaches are expected for plant foliar VOC research in the future.
Improvement of Image Binarization Methods Using Image Preprocessing with Local Entropy Filtering for Alphanumerical Character Recognition Purposes

Automatic text recognition from natural images acquired in uncontrolled lighting conditions is a challenging task due to the presence of shadows hindering the shape analysis and classification of individual characters. Since optical character recognition methods require prior image binarization, the application of classical global thresholding methods in such cases makes it impossible to preserve the visibility of all characters. Nevertheless, the use of adaptive binarization does not always lead to satisfactory results for heavily unevenly illuminated document images. In this paper, an image preprocessing methodology using local image entropy filtering is proposed, allowing for the improvement of various commonly used image thresholding methods, which can also be useful for text recognition purposes. The proposed approach was verified using a dataset of 140 differently illuminated document images subjected to further text recognition. Experimental results, expressed as Levenshtein distances and F-Measure values for the obtained text strings, are promising and confirm the usefulness of the proposed approach.

Introduction
Image binarization is one of the most relevant preprocessing steps, leading to a significant decrease in the amount of information subjected to further analysis and allowing for an increase in its speed. Such an operation is typically applied in many systems which rely mainly on shape recognition methods and do not require colour or texture analysis. Good examples are robotic applications, including line followers and visual navigation in corridors and labyrinths, advanced driver-assistance systems (ADAS) and autonomous vehicles with lane tracking, as well as the widely used optical character recognition (OCR) methods. Binary image analysis may also be applied successfully in embedded systems with a limited amount of memory and low computational power. Nevertheless, appropriate results of binary image analysis, in particular text recognition, depend on correct prior binarization. In some applications where uniform illumination of the scene can be ensured, e.g., popular flatbed scanners or some non-destructive automated book scanners, even with additional infrared cameras allowing software straightening of the scanned book pages [1], the simplest global thresholding may be sufficient. However, in many other situations the illumination may be non-uniform, especially in natural images captured by cameras, and therefore more sophisticated adaptive methods should be applied. One of the most challenging problems related to the influence of image thresholding on further analysis is document image binarization, and therefore newly developed algorithms are typically validated using intentionally prepared document images containing various distortions. For this reason, the well-known document image binarization competition (DIBCO) datasets are typically used to verify the usefulness and validate the advantages of binarization methods.
These databases are prepared for the yearly document image binarization competitions organized during two leading conferences in this field: the International Conference on Document Analysis and Recognition (ICDAR) [2] and the International Conference on Frontiers in Handwriting Recognition (ICFHR) [3], where the H-DIBCO datasets are used, containing only handwritten document images without machine-printed samples. All DIBCO datasets contain not only the distorted document images but also "ground truth" binary images, and therefore the binarization results can be compared with them at the pixel level by analysing the numbers of correctly and improperly classified pixels [4,5]. Although image binarization is not a new topic, enhancements of algorithms are still being proposed, particularly for historical document image binarization, as well as for unevenly illuminated natural images. A proposal for such an improvement based on the image entropy filter, applicable to many commonly known binarization methods, is presented in this paper. The rest of the paper consists of a short overview of the most widely used image binarization methods, a description of the proposed approach based on the local entropy filter, the presentation and discussion of results, and final conclusions.

Brief Overview of Image Binarization Algorithms
Probably the most popular image thresholding method was proposed in 1979 by Nobuyuki Otsu [6], who delivered the idea of minimizing the sum of intra-class variances of two groups of pixels classified as foreground and background, assuming a bi-modal histogram of the image pixels' intensity. Hence, this approach leads to maximization of the inter-class variance, and therefore a good separation of the two classes of pixels, represented finally as black and white, is achieved. Due to its operation on histograms, this method is fast, although it works properly only for uniformly illuminated images with bi-modal histograms. A similar approach, utilizing the entropy of the histogram instead of variances, was proposed by Kapur et al. [7], whereas the idea of combining the global and local Otsu and Kapur methods was presented in the paper [8]. An extended adaptive version of Otsu's method, known as AdOtsu, proposed by Moghaddam and Cheriet [9], added operations such as multi-scale background estimation and the calculation of average stroke widths and line heights. Since images with unimodal histograms cannot be properly binarized using the above-mentioned histogram-based methods, another interesting idea was presented by Paul Rosin [10], who proposed to determine the threshold as the corner of the histogram curve. Since images containing shadows resulting from non-uniform illumination should not be binarized using a single global threshold, adaptive algorithms, which require the analysis of each pixel's neighbourhood, have been proposed as well. The most popular approach, developed by Wayne Niblack [11], determines the local threshold as the average local intensity lowered by the local standard deviation scaled by a constant parameter k. A further modification of this approach, adding normalization of the local standard deviation by dividing it by its maximum value in the image, is known as Sauvola's method [12]. Its multi-scale version was further developed by Lazzara and Géraud [13].
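To make the variance-based idea concrete, below is a minimal Python sketch of Otsu's threshold selection, formulated as maximizing the inter-class variance over the 256-bin histogram; this is the textbook formulation, not any particular paper's implementation.

import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    # Return the Otsu threshold of an 8-bit image by maximizing the
    # inter-class variance w0*w1*(mu0-mu1)^2 over all candidate thresholds.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

gray = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.uint8)
binary = gray > otsu_threshold(gray)  # pixels above the threshold become foreground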
A simple choice of the local threshold as the average of the minimum and the maximum intensity within the local window (the so-called midgray value) was proposed by John Bernsen [14], whereas Bradley and Roth [15] developed a method using the integral image for the calculation of the local mean intensity of the neighbourhood. The implementation of this method, also in modified versions utilising the local median and Gaussian weighted mean, is available as the MATLAB adaptthresh function. Some other adaptive binarization methods were proposed by Wolf and Jolion [16], who used a relatively simple contrast maximization approach as a modification of Niblack's method, as well as Feng and Tan [17], where a similar idea based on the maximization of local contrast was used, although significantly slower due to the application of additional median filtering and bilinear interpolation. Another method, proposed by Gatos et al. [18], utilizes low-pass Wiener filtering and background estimation, followed by the use of Sauvola's thresholding with additional interpolation and post-processing using so-called shrink and swell filters to remove noise and fill some foreground gaps and holes. More recent document image binarization methods include the idea of region-based thresholding using Otsu's method with the additional use of support vector machines (SVM), presented by Chou et al. [19], as well as faster region-based approaches [20,21]. Another method utilising an SVM-based approach with local features was presented recently by Xiong et al. [22]. The algorithm proposed by Howe [23] utilizes a Laplacian operator, Canny edge detection, and a graph cut method to find the threshold minimizing the energy. Erol et al. [24] proposed a more general approach related to the localization of text on a document captured by a mobile phone camera, using morphological operations for background estimation. Another background suppression method, although working properly mainly for evenly illuminated document images, was proposed by Lu et al. [25], whereas another attempt at the application of morphological operations was presented by Okamoto et al. [26]. Lelore and Bouchara [27] proposed the extended fast algorithm for document image restoration (FAIR), based on rough text localization and likelihood estimation, followed by simple thresholding of the obtained super-resolution likelihood image. A multi-scale adaptive-interpolative method, useful for faint characters, was proposed by Bag and Bhowmick [28]. A method proposed by Su et al. [29] exploited an adaptive image contrast map combined with the results of Canny edge detection, whereas an attempt to use multiple thresholding methods was presented by Yoon et al. [30]. Some faster ideas of image thresholding based on the Monte Carlo method have been proposed as well [31-33], where the simplified histogram of the image was approximated using a limited number of randomly chosen pixels. On the other hand, Khitas et al. [34] recently developed an algorithm based on median filtering used for the estimation of the background information. An application of local features with Gaussian mixtures was examined in the paper [35], whereas Chen and Wang [36] used an extended non-local means method followed by adaptive thresholding with additional postprocessing. Bataineh et al. [37] developed an algorithm inspired by Niblack's and Sauvola's methods with the additional application of dynamic windows. Further modifications of Niblack's method were proposed by Khurshid et al. [38], Kulyukin et al.
[39], and recently by Samorodova and Samorodov [40]. A direct binarization scheme for colour document images based on a multi-scale mean-shift algorithm with the use of a modified Niblack's method was recently proposed by Mysore et al. [41]. A review of many modifications of Niblack-inspired algorithms can be found in Saxena's paper [42], whereas many other approaches are discussed in other survey papers [43-45]. Some earlier methods can also be found in the BinarizationShop software developed by Deng et al. [46]. Some recent trends in image binarization are related to the use of variational models [47] and deep learning methods [48]. Recently, Vo et al. [49] proposed another supervised approach based on hierarchical deep neural networks. A comprehensive overview of many document image binarization algorithms can be found in the survey paper written by Sulaiman et al. [50]. An interesting method for the binarization of non-uniformly illuminated images, based on the Curvelet transform followed by Otsu's thresholding, was proposed by Wen et al. [51]. However, the application of this algorithm requires additional nonlinear enhancement functions and time-consuming multi-scale processing. Some binarization methods utilize the calculation of histogram entropy as well as image entropy. The most widely known approach, proposed by Kapur et al. [7], may be considered a modification of classical Otsu thresholding, based on earlier ideas presented by Thierry Pun [52,53]. Fan et al. [54] proposed a method maximizing the 2D temporal entropy, whereas Abutaleb [55] developed a method which uses a pixel's grey level as well as the average of its neighbourhood for the minimization of two-dimensional entropy. Brink and Pendock [56] used cross-entropy, instead of distance or similarity between the original image and the result of binarization, to optimize the threshold. Some similar multilevel methods have been further developed for image segmentation [57], also with the use of genetic methods [58]. A ternary entropy-based method [59], based on the classification of pixels into text, near-text, and non-text regions, was proposed as well, which utilized Shannon entropy, whereas Tsallis entropy was used by Tian and Hou [60]. Nevertheless, entropy-based methods are generally less popular than simple histogram-based thresholding or adaptive binarization methods. Apart from typical image binarization, one can find other applications of entropy related to the classification of signals or images obtained as the results of measurements or other experiments, e.g., in a gearbox testing system presented by Jiang et al. [61], where the Shannon entropy of the vibration signal is used to detect worn and cracked gears. The development of any new image processing algorithm usually requires reliable validation based on the comparison of the obtained results with other methods. Stathis et al. [62] proposed a method for the evaluation of binarization algorithms based on the comparison of individual pixels, using the pixel error rate (PERR), peak signal to noise ratio (PSNR), and similar metrics, whereas some other approaches were presented in the survey paper by Sezgin and Sankur [63]. A much more popular approach is the use of typical classification metrics based on precision, recall, sensitivity, specificity, or F-Measure [4,5], as well as the application of the misclassification penalty metric (MPM) [64] or distance reciprocal distortion (DRD) [65].
Another binarization assessment method was presented by Lins et al. [66], which utilizes a dataset of synthetic images for the comparison of various thresholding algorithms. Nevertheless, considering the final results of document image recognition as recognized text strings, a more useful approach is the application of metrics calculated for characters instead of individual pixels. Apart from F-Measure, metrics dedicated to text strings, such as the Levenshtein distance, defined as the number of character operations necessary to convert one string into another, may be applied as well.

Description of the Method
When analysing unevenly illuminated document images, important information can be obtained using the local image entropy, which may be calculated using the MATLAB entropyfilt function. Using its default parameters, a local measure of the randomness of the grey levels of the neighbourhood defined by a 9 × 9 pixels mask is computed and stored as the result for the central pixel. Such an approach may be useful for image forgery detection, for switching purposes in adaptive median filtering, and for image preprocessing followed by the comparison of properties of image regions. Hence, the local entropy filter was used in the proposed method as one of the preprocessing steps for adaptive image binarization of unevenly illuminated document images subjected to further optical text recognition. It is worth noting that most OCR engines use some "built-in" thresholding procedures, and therefore their results also depend on the quality of the input data. For example, the widely used freeware Tesseract OCR developed by Google utilizes global Otsu thresholding, whereas the commercial ABBYY FineReader software employs the adaptive Bradley method. Therefore, the application of other image binarization methods may improve or decrease the recognition accuracy, since the OCR "internal" thresholding does not change an input binary image. Hence, prior image thresholding may be considered a replacement for the default methods used in the OCR engines. The proposed method equalizes the illumination of an image and increases its contrast, making it easier to conduct proper binarization and further recognition of alphanumerical characters. It is based on the analysis of the local entropy, assuming noticeably higher values in the neighbourhood of characters. Hence, only the relatively high-entropy regions should be further analyzed as potentially containing characters, whereas low-entropy regions may be considered background. The proposed algorithm includes the following steps:
• entropy filter: calculation of the local entropy using the predefined mask (in our experiments the most appropriate size is 19 × 19 pixels), leading to the local entropy map;
• negative: simple negation leads to more readable dark characters on a bright background; assuming the maximum entropy value equal to eight (considering the eight bits necessary to store 256 grey levels), additional normalization can be applied with the formula Y = 1 - X/8, where X is the local entropy map and the final range of the output image Y is [0, 1].
The simplified flowchart of the method is shown in Figure 1, whereas the illustration of the results obtained after consecutive steps of the algorithm is presented in Figure 2.
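A Python sketch of the quoted steps using scikit-image is given below; the placement of the morphological dilation (whose structuring element size is discussed in the Results section) within the pipeline is our assumption, and the authors' remaining post-processing steps are not reproduced.

import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import dilation, square

def entropy_preprocess(gray_u8: np.ndarray) -> np.ndarray:
    # Steps quoted above: 19x19 local entropy filter, then the normalized
    # negation Y = 1 - X/8; the 20x20 dilation placement is an assumption.
    x = entropy(gray_u8, square(19))   # local entropy map, values in [0, 8]
    y = 1.0 - x / 8.0                  # dark characters on a bright background
    return dilation(y, square(20))     # grayscale dilation, 20x20 structuring element

A thresholding method such as Niblack's or Sauvola's can then be applied to the preprocessed image in place of the OCR engine's built-in binarization.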
Practical Verification
The verification of the proposed method was conducted using a database of document images prepared by applying various illuminations (uniform lighting and six types of non-uniform or directional shadows). The well-known quasi-Latin text Lorem ipsum, used as the basis for the generated sample pages containing 536 words, was printed using five font shapes (Arial, Times New Roman, Calibri, Verdana, and Courier) and their style modifications (normal, bold, italics, and bold+italics). The 20 printed sheets of paper were photographed under the 7 types of illumination mentioned above (six unevenly illuminated examples are shown in Figure 3). These 140 captured images were binarized in two scenarios: with and without the proposed preprocessing. In both cases, several binarization algorithms were applied to verify the proposed approach in practice. All the obtained binary images were used as the input data for the Google Tesseract OCR engine. For each of the images, the numbers of correctly and incorrectly recognized characters were determined, allowing for the calculation of some typical classification metrics, such as the F-Measure, defined as

F-Measure = 2 · PR · RC / (PR + RC),

where PR and RC stand for the precision (the ratio of true positives to the sum of all positives) and recall (the ratio of true positives to the sum of true positives and false negatives). Hence, they can be expressed as

PR = TP / (TP + FP) and RC = TP / (TP + FN),

where TP, FP, and FN are true positives, false positives, and false negatives, respectively. All positive and negative values are considered as the numbers of correctly and incorrectly recognized characters. An additional metric, which may be applied for the evaluation of text similarity, is the Levenshtein distance, representing the minimum number of text changes (insertions, deletions, or substitutions of individual characters) necessary to change the analyzed text into another. This metric was also applied for evaluation purposes, assuming knowledge of the original text string (Lorem ipsum-based in these experiments).
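The Levenshtein distance used here can be computed with the classic dynamic-programming recurrence, sketched below.

def levenshtein(a: str, b: str) -> int:
    # Minimum number of single-character insertions, deletions, and
    # substitutions needed to turn string a into string b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3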
Results and Discussion
The development of the final preprocessing algorithm allowing for an increase of the final OCR accuracy required an appropriate choice of the parameters mentioned earlier. The first of them is the size of the block used for the entropy filter, which significantly influences the obtained results. Too small a filter would not be efficient due to its sensitivity to small details and noise, whereas too big a window would be vulnerable to averaging effects. Since the default size of the filter in the MATLAB entropyfilt function is 9 × 9 pixels, the first experiments were conducted using various windows to verify the influence of their size on the OCR results. The obtained results are presented in Figure 4, where the best values can be observed for the 19 × 19 pixels filter. Therefore, the application of the default values would be inappropriate, particularly for series #5, containing the non-uniformly illuminated images with sharp shadow edges, as shown in Figure 3d. A similar difference may be observed in the choice of the most appropriate size of the structuring element applied during the morphological dilation, since the results obtained for series #5 differ significantly from the others. Nevertheless, in all cases the choice of a structuring element of similar size to the block in the entropy filter leads to the best results, as illustrated in Figure 5 (in our experiments a 20 × 20 pixels structuring element was chosen). An additional reason for this choice was the processing time, which increased noticeably for bigger structuring elements, as shown in Figure 6, where the values normalized to the computation time obtained with the selected 20 × 20 pixels structuring element are presented. Unfortunately, relatively shorter processing did not guarantee a good enough OCR accuracy, whereas increasing the structuring element's size and the computation time did not significantly enhance the obtained results. Since the experiments were conducted on a personal computer, some processes running in the background (including the Tesseract OCR engine) might have influenced the obtained results. Nonetheless, the relation between the size of the structuring element and the processing time can be considered nearly linear. Hence, the most reasonable choice was the smallest possible structuring element that did not reduce the OCR accuracy below an acceptable level. Having chosen the most appropriate parameters of the proposed preprocessing method, the obtained F-Measure values and Levenshtein distances for the whole dataset and each of the illumination types, as well as for individual font faces and style modifications, were compared with those of other methods applied without the proposed preprocessing. The comparison of the influence of the proposed preprocessing method on the F-Measure values is presented in Table 1, whereas the respective Levenshtein distances are shown in Table 2. Analysing the results, a significant decrease of the Levenshtein distance, as well as an increase of the F-Measure values, may be observed for all methods, proving the usefulness of the proposed approach. The best results were achieved for Niblack, Sauvola, and Wolf thresholding, as well as for the simple Meanthresh method, which was significantly improved by the use of the entropy filtering-based preprocessing. Some exemplary results obtained using the proposed preprocessing, as well as its application with Bradley binarization with a Gaussian kernel, are illustrated in Figure 7. An additional illustration of its advantages for three exemplary images with the use of the Niblack and Sauvola methods is shown in Figure 8, whereas another such comparison for the Bernsen and Meanthresh methods is presented in Figure 9. Since the properties of the proposed method may differ for various font shapes and styles, particularly for some of the thresholding algorithms, more detailed results are presented in Tables 3 and 4, where the F-Measure values can be compared for the same methods with and without the proposed entropy-based preprocessing. Comparing the influence of the proposed approach on the obtained OCR accuracy, expressed as the F-Measure values calculated for individual text characters, a relatively smaller enhancement may be observed for adaptive binarization methods which achieve good results even without the proposed preprocessing, such as Niblack or Sauvola. Nevertheless, improvements may be noticed in all cases, also for the binarization method proposed by Wolf, which achieved much worse results for Courier fonts without the presented preprocessing. A great improvement may also be observed for simple mean thresholding, as well as for the direct usage of the OCR engine's built-in binarization, whereas the proposed method caused a small decrease of recognition accuracy after Bernsen thresholding for some font shapes (Courier and Times New Roman).
It is worth noting that the proposed entropy-based preprocessing method always leads to better text recognition of bold fonts.

Conclusions
Binarization of unevenly illuminated and degraded document images is still an open and challenging field of research. Considering the necessity of fast image processing, many sophisticated methods, which cannot be effectively applied in many applications, may be replaced by simpler thresholding supported by less complicated preprocessing methods, without the necessity of shape analysis or training procedures. The approach proposed in this paper may be efficiently applied as the preprocessing step for many binarization methods in the presence of non-uniform illumination of document images, significantly increasing the accuracy of further text recognition, as shown in the experimental results. Since its potential applicability is not limited to the binarization of document images for OCR purposes, our further research may concentrate on the development of similar approaches for other applications related to the binarization of natural images and machine vision in robotics, particularly in unknown lighting conditions.

Conflicts of Interest: The authors declare no conflict of interest.
Evaluation of the Tp-e Interval and Tp-e/QT Ratio in Patients with Mitral Annular Calcification Objective: Mitral annular calcification (MAC) is regarded as a manifestation of cardiovascular disease. Recent studies have shown that prolongation of the interval between the peak and end of the T wave on electrocardiogram (Tp-e), which is accepted as an index of transmural dispersion of ventricular repolarization, and the Tp-e/QT ratio are associated with ventricular arrhythmias. In the present study, we aimed to evaluate ventricular repolarization using the Tp-e interval and Tp-e/QT ratio in patients with MAC. Methods: Fifty patients with MAC (27 females and 23 males; mean age 71.6 ± 8.0 years) and 50 patients without MAC (26 females and 24 males; mean age 69.3 ± 6.2 years) were included in this study. Maximum and minimum QT and Tp-e intervals, as well as values corrected for heart rate, were calculated using a 12-lead electrocardiogram. QT dispersion and Tp-e/QT ratios were calculated. All parameters were compared between the two groups. Results: Patients with MAC had significantly higher Tp-e (75.8 ± 11.6 vs. 62.1 ± 8.7; p<0.001) and cTp-e intervals (84.9 ± 14.3 vs. 67.5 ± 9.7; p<0.001), Tp-e/QT (0.19 ± 0.02 vs. 0.15 ± 0.02; p<0.001) and cTp-e/QT ratios (0.19 ± 0.03 vs. 0.15 ± 0.02; p<0.001), and cQT values (390.1 ± 31.5 vs. 373.8 ± 26.1; p=0.006) compared with the control subjects. There were positive correlations between the E/Em ratio and cTp-e interval (r=0.396; p=0.004) and between the E/Em ratio and cTp-e/QT ratio (r=0.535; p<0.001) in the MAC group. Conclusion: Our findings indicate that patients with MAC had higher Tp-e and cTp-e intervals and Tp-e/QT and cTp-e/QT ratios compared with subjects without MAC. Introduction Mitral annular calcification (MAC) is the fibrous, degenerative calcification of the annular ring that supports the mitral valve [1]. Its prevalence increases with ageing. It has previously been reported that patients with MAC have a higher prevalence of coronary artery disease, arrhythmias, aortic valve calcification, and cerebrovascular diseases [2][3][4]. MAC and atherosclerosis have similar risk factors, including old age, hypertension, diabetes, and obesity; it is hypothesized that the presence of MAC reflects the duration and intensity of exposure to these risk factors [5]. Myocardial repolarization abnormalities are correlated with ventricular arrhythmias and high cardiovascular mortality. Myocardial repolarization is assessed using superficial electrocardiography (ECG) based on QT dispersion (QTd) and corrected QT dispersion (cQTd). In recent years, it was suggested that the interval between the peak and endpoint of the T wave (Tp-e) could be used as a marker of dispersion of total repolarization. Indeed, it was found to be a significant predictor of increased mortality due to ventricular arrhythmia and cardiovascular factors [6]. Thus, the Tp-e/QT ratio was introduced as a new index to assess ventricular repolarization that is not affected by heart rate variability, rendering it more reliable than the QT interval, QTd, or Tp-e interval [7]. In this study, we aimed to analyze ventricular repolarization dispersion (VRD) using new indices, the Tp-e interval and Tp-e/QT ratio, calculated on 12-lead superficial ECG in patients with MAC. Study design This prospective cross-sectional study was performed after the study protocol had been approved by the institutional ethics committee.
The principles of the Declaration of Helsinki were followed throughout the study, and informed consent was obtained from all participants. Study population Fifty consecutive patients with MAC admitted to our clinic for routine follow-up and 50 volunteers without MAC were included in this study. Patients with heart failure, moderate or severe valve disease, primary cardiomyopathy, obesity, hypertension, anemia, renal failure, chronic lung disease, thyroid dysfunction, electrolyte imbalance, bundle branch block, or atrioventricular conduction abnormalities on ECG were excluded. Any ECGs without clearly measurable Tp-e or QT intervals were also excluded from the study. All patients were in sinus rhythm, and none were on medications that could affect the QT or Tp-e interval, including antibiotics, antiarrhythmics, tricyclic antidepressants, antihistamines, or antipsychotics. Study protocol Age, sex, and other cardiac risk factors were recorded in all patients. Hypertension was defined as an arterial blood pressure of >140/90 mmHg measured on three separate occasions during different weeks, or the use of antihypertensive medications for at least 3 months. The patients' heart rate and body mass index (BMI; kg/m 2 ) were calculated. Fasting blood glucose, creatinine, blood urea nitrogen, total cholesterol, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, and triglyceride levels were measured. Diabetes mellitus was defined as the use of antidiabetic agents or a fasting plasma glucose level >126 mg/dL. Coronary artery disease (CAD) was defined as the presence of an angiographic lesion occluding ≥50% of a coronary artery, a history of coronary bypass surgery, or percutaneous coronary intervention. All echocardiographic examinations (IE-33, Philips Medical Systems, Bothell, WA, USA) were performed by cardiologists blinded to the experimental design and patient histories. Measurements were performed according to the criteria of the American Society of Echocardiography [8], and three consecutive cycles were averaged for each parameter. Standard parasternal long-axis and apical 4-chamber, 2-chamber, and 5-chamber images were obtained. M-mode, 2-D, Doppler (color flow, pulsed-wave Doppler (PWD)), and tissue Doppler echocardiography images and measurements were evaluated. Left atrial (LA) diameter and end-systolic and end-diastolic dimensions were measured from the parasternal long-axis view. Ejection fraction was calculated using the modified Simpson method. Mitral early diastolic velocity (E), mitral late diastolic velocity (A), isovolumetric relaxation and contraction times (IVRT and IVCT, respectively), and deceleration time (DT) were recorded from the apical transducer position with the sample volume situated between the mitral leaflet tips, and the ratio of E to A (E/A ratio) was calculated. Myocardial early diastolic (Em) and late diastolic (Am) wave velocities were measured using PWD with the sample volume placed at the lateral mitral annulus. Diastolic dysfunction was classified by mitral inflow pattern according to guidelines [9]; normal diastolic function was defined as a lateral E/Em ratio of less than 10.
Mild diastolic dysfunction was defined as a lateral E/Em ratio of greater than 10, an E/A ratio of less than 0.8, and a DT of greater than 200 ms; moderate diastolic dysfunction was characterized by a lateral E/Em ratio of greater than 10, an E/A ratio of between 0.8 and 1.5, and a DT of between 160 ms and 200 ms; and severe diastolic dysfunction was defined as a lateral E/Em ratio of greater than 10, an E/A ratio of greater than 2, and a DT of less than 160 ms. MAC was defined as an intensely echo-producing structure, >3 mm in width, with highly reflective characteristics, located at the junction of the atrioventricular groove and the posterior or anterior mitral leaflet on the parasternal long-axis, apical 4-chamber or 2-chamber, or parasternal short-axis views [10]. The 12-lead ECG was performed at a paper speed of 50 mm/s with the subject at rest in the supine position. Resting heart rate was then measured from the ECG data. The QT and Tp-e intervals were calculated from the ECG data manually by two cardiologists using calipers and a magnifying glass to decrease measurement errors. Subjects with U waves or negative or biphasic T waves on their ECGs were excluded from the study. The average value from three examinations was calculated for each lead. The QT interval was measured from the beginning of the QRS complex to the end of the T wave and was corrected for heart rate using the Bazett formula. The QTd was defined as the difference between the maximum (QTmax) and minimum (QTmin) QT intervals of the 12 leads. The difference between the corrected QTmax (cQTmax) and corrected QTmin (cQTmin) was defined as the cQTd [11]. Measurement of the Tp-e interval was performed from the precordial leads [12]. The Tp-e interval was defined as the interval from the peak to the end of the T wave and was corrected for heart rate (cTp-e). Tp-e/QT and corrected Tp-e/QT (cTp-e/QT) ratios were then calculated from these values. The interobserver and intraobserver variation coefficients for the Tp-e/QT ratio were 2.8% and 3.1%, respectively, and those for the cTp-e/QT ratio were 3.1% and 3.0%, respectively. Statistical analysis All statistical analyses were performed using the SPSS 22.0 statistical program (SPSS Inc., Chicago, IL, USA). Continuous variables were presented as means ± standard deviation, and categorical variables were presented as numbers and percentages. Normality of distribution was assessed using the Shapiro-Wilk test. The groups were compared using the t-test and the chi-square test with Yates' correction for independent samples. Pearson's correlation coefficients were used to assess associations among continuous variables. A p-value <0.05 was considered to indicate statistical significance. Results The demographic characteristics of the participants are shown in Table 1. There were no significant differences between the patients and controls in terms of age, sex, smoking status, BMI, systolic or diastolic blood pressure, or fasting blood glucose, hemoglobin, creatinine, total cholesterol, LDL, HDL, or triglyceride levels. The incidence of CAD was higher in the MAC group, although the difference was not statistically significant (p=0.833). Comparisons of the standard 2D and Doppler echocardiographic measurements are shown in Table 2. There were no significant differences between the groups with regard to left ventricular (LV) end-diastolic diameter, LV end-systolic diameter, interventricular septum thickness, posterior wall thickness, aortic root, or ejection fraction.
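For readers less familiar with these indices, the heart-rate corrections and ratios described in the methods above can be sketched in a few lines. This is a minimal illustration of the Bazett correction; the variable names and example values are ours, not patient data from this study.

```python
# Minimal sketch of the heart-rate corrections described above (Bazett formula).
# All inputs in milliseconds; rr_ms = 60000 / heart_rate.
import math

def bazett_correct(interval_ms: float, rr_ms: float) -> float:
    """Correct an ECG interval for heart rate: c = interval / sqrt(RR in seconds)."""
    return interval_ms / math.sqrt(rr_ms / 1000.0)

qt, tpe, rr = 380.0, 76.0, 800.0        # example values (ms)
cqt = bazett_correct(qt, rr)            # corrected QT (cQT)
ctpe = bazett_correct(tpe, rr)          # corrected Tp-e (cTp-e)
print(tpe / qt, ctpe / cqt)             # Tp-e/QT and cTp-e/QT ratios
```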
The LA diameter was significantly greater in patients with MAC compared with the controls (p<0.001). Moreover, E (p<0.001) and the E/A ratio (p<0.001) were smaller, whereas Am (p=0.029) was greater, in patients with MAC compared with the controls. The ECG parameters of the groups are shown in Table 3. The cQTmin (p=0.006), Tp-e interval (p<0.001), cTp-e interval (p<0.001), Tp-e/QT ratio (p<0.001), and cTp-e/QT ratio (p<0.001) were significantly higher in patients with MAC compared with the control group. In the MAC group, positive correlations between the E/Em ratio and cTp-e interval (r=0.396; p=0.004) and between the E/Em ratio and cTp-e/QT ratio (r=0.535; p<0.001) were found (Figures 1 and 2). In addition, there were positive correlations between LA diameter and cTp-e interval (r=0.50; p=0.001) and between LA diameter and cTp-e/QT ratio (r=0.31; p=0.028) in the MAC group. Discussion In this study, we investigated ECG parameters of ventricular repolarization and depolarization in patients with MAC. The parameters that reflect VRD, such as the Tp-e interval and Tp-e/QT ratio, were higher in patients with MAC compared with the controls. In addition, strong associations were found between ECG parameters that indicate diastolic dysfunction and the aforementioned VRD parameters in the MAC group. MAC is a chronic, degenerative, non-inflammatory disease of the fibrous skeleton of the mitral valve. Its prevalence increases with age and is higher in females. In a multiethnic cohort study [10], MAC and the severity of MAC were found to be strong and independent predictors of cardiovascular events. After adjustment for other cardiovascular risk factors, MAC remained associated with an increased risk of myocardial infarction and vascular death. Also, a recent retrospective study that used a large echocardiographic database found that MAC was independently associated with all-cause mortality [13]. Some researchers have suggested that MAC, calcification of the aortic valve, and coronary atherosclerosis have the same etiology and are in fact different forms of the same disease [14]. MAC was proposed as an independent predictor of the extent of CAD [15]. Boon et al. reported that MAC and aortic valvular calcification should be regarded as signs of extensive atherosclerosis [16]. In addition, MAC is correlated with cardiovascular events including atrial fibrillation, atherosclerotic disease, stroke, and death [17]. Studies have shown that a large LA is associated with a higher incidence of atrial fibrillation development in patients with MAC [18]. A larger LA in patients with MAC may be related to the comorbid diseases that typically accompany MAC [19]. Since MAC is a sclerodegenerative disease, it results in inflammation of the LA and increased "stiffness". This results in diastolic dysfunction and enlargement of the LA. Through the same mechanisms, MAC interferes with inter- and intra-atrial conduction, resulting in conduction defects [20]. In the present study, we showed a significant increase in LA diameter. In addition, there were positive associations between the LA diameter and cTp-e interval and between the LA diameter and cTp-e/QT ratio in MAC patients. The common co-occurrence of MAC and diastolic dysfunction is well known. Indeed, insufficient relaxation and restricted motion of the posterior leaflet cause diastolic dysfunction in MAC [21].
In addition, old age and comorbid diseases result in increased fibrous tissue, and hence a defect in relaxation of the left ventricle occurs. In our study, diastolic dysfunction was significantly more apparent in the MAC group compared with the control group. Sauer et al. evaluated 84 patients and reported significant correlations of the Em with the E/Em ratio and Tp-e interval. They also reported that the Tp-e interval increased as the grade of diastolic dysfunction increased [22]. Similarly, in our study we found positive correlations between the E/Em ratio, a parameter of diastolic dysfunction, and the cTp-e interval, and between the E/Em ratio and the cTp-e/QT ratio. Ventricular myocardial repolarization can be analyzed using the QT interval and measurement of the T wave. Previous electrophysiological studies have demonstrated the association between repolarization heterogeneity and the induction of arrhythmias [23]. A number of clinical and experimental investigations have shown that primary or secondary elongation of the QT interval is a predisposing factor for ventricular arrhythmias [24]; QTd has been accepted as an index of ventricular arrhythmias [25][26][27][28][29]. An increased QTd was demonstrated in various disorders including acute coronary syndrome, ventricular arrhythmias, and cardiac autonomic neuropathy [26,27]. Priori et al. showed that QTd increased in idiopathic long QT syndrome [28], while Taşolar et al. [29] did not find significant differences in QTd and cQTd parameters between patients with MAC and controls. Similarly, in our study we did not find significant differences in QTd and cQTd between patients with MAC and the controls. On the other hand, some studies have suggested that QTd does not clearly demonstrate VRD. A previous study reported that QTd did not directly show the heterogeneity of ventricular repolarization [30]. Moreover, Somberg et al. noted that QTd did not clearly demonstrate ventricular heterogeneity, and that medications that increase the QT interval should be monitored during follow-up [31]. Yamaguchi et al. showed that the Tp-e interval was a more valuable parameter for predicting torsades de pointes (TdP) in acquired long QT syndrome than QTc or QTd, and that the use of QTd had some limitations in measuring the heterogeneity of ventricular repolarization [29]. In recent years, it has been reported that newer ECG parameters such as the Tp-e interval and Tp-e/QT ratio can be used for VRD analysis, and that these parameters are predictors of ventricular arrhythmias and cardiovascular death, similar to the QT interval and QTd [6]. To our knowledge, the effect of MAC on the Tp-e duration and Tp-e/QT ratio has not been evaluated before now. It was previously reported that QTd was calculated on the DII, V5, and V6 derivations; however, the derivations to be used for Tp-e interval measurement have not been clearly specified, and precordial derivations are typically used to perform measurements. We measured the Tp-e interval from the V6 derivation. An increased Tp-e interval is a useful parameter for predicting ventricular arrhythmias and cardiovascular events [34]. In addition, it was reported that parameters including the Tp-e and QT intervals are affected by body mass and heart rate, while the Tp-e/QT ratio is not, and that the Tp-e/QT ratio is more sensitive for the prediction of ventricular arrhythmias than the Tp-e interval or QTd [33]. Hevia et al.
showed that use of the Tp-e interval was beneficial for risk stratification in patients with Brugada syndrome. They included 29 patients in their prospective study and followed them for 11-108 months, during which time they demonstrated a significantly increased prevalence of recurrent cardiac events in patients with a high Tp-e interval [12]. A study showed that cQTmax, cQTd, and the Tp-e interval were significantly higher in diabetic patients with subclinical LV diastolic dysfunction than in diabetic patients with normal LV diastolic function [35]. TdP and sudden cardiac death were found to be correlated with the Tp-e interval in Brugada syndrome, hypertrophic cardiomyopathy, and acquired and congenital long QT syndrome [36]. A study performed on 338 STEMI patients with successful primary percutaneous coronary intervention revealed higher in-hospital mortality and cardiac event rates in patients with high Tp-e/QT ratios. The authors reported that a high Tp-e/QT ratio was an independent predictor of all-cause mortality after discharge from the hospital [7]. Shu et al. evaluated 120 patients with STEMI and demonstrated that the Tp-e/QT ratio was increased in patients who had malignant ventricular arrhythmia compared with patients who did not [37]. Taşolar et al. [38] reported that the Tp-e and cTp-e intervals and the Tp-e/QT ratio were increased significantly in healthy individuals who smoked compared with those who did not. In addition, they showed a significant correlation between these markers and the number of cigarettes smoked. Similarly, in our study we found that the Tp-e and cTp-e intervals and the Tp-e/QT ratio were increased in patients with MAC compared with the controls. However, our study is limited by its cross-sectional and observational design, as well as by the lack of clinical follow-up for arrhythmic events. Limitations of the study Our study has some limitations. The number of patients in our study is relatively small. The main limitations of our study were its cross-sectional design and lack of patient follow-up, since the study population could not be followed prospectively for ventricular arrhythmias. Further studies with larger sample sizes and longer follow-up periods are warranted to determine the correlation between the prevalence of ventricular arrhythmias and ventricular repolarization markers in patients with MAC. Conclusion The Tp-e and cTp-e intervals and the Tp-e/QT ratio, which are indices of ventricular arrhythmia, were higher in asymptomatic patients with MAC. Increased Tp-e and cTp-e intervals and Tp-e/QT ratios may be early predictors of an increased frequency of ventricular arrhythmias in patients with MAC.
Primary HIV infection: a medical and public health emergency requiring rapid specialist management Primary HIV infection (PHI) refers to the first six months following HIV acquisition and represents a unique opportunity for expedited diagnosis, and consideration of rapid antiretroviral therapy (ART) initiation to improve immune function, reduce the size of the viral reservoir and limit the risk of onward viral transmission. Failure to diagnose and rapidly treat individuals with PHI has significant individual and public health implications. The Strategic Timing of AntiRetroviral Treatment trial recently identified a clinical benefit of immediate ART over deferral of treatment according to CD4 count threshold, and has led to rapid changes in World Health Organization and specialist national guidelines. For all individuals living with HIV, the offer of immediate therapy irrespective of CD4 count is now recommended. This paper summarises the presentation and management of PHI, incorporating current research and guideline changes, and discusses the role of PHI in onward transmission. PHI represents the initial six months following HIV acquisition. This stage of infection is associated with high-level plasma viraemia, which is subsequently limited by host immune responses that confer symptomatic and partial immunological recovery for the majority of individuals. 8 In the absence of ART, a gradual decline in peripheral CD4 T cells is observed; on average between 50 and 70 cells/year. 9 Viral dissemination and the establishment of a viral reservoir throughout the body occur rapidly after acquisition 9 and are the reason why ART cannot cure HIV infection. 10,11 Despite years of successful suppressive ART, a latent pool of HIV-infected cells is thought to be the source of viral recrudescence on stopping therapy. 11 Challenges of PHI diagnosis The standard test for HIV infection is an HIV-specific antibody test, which can be performed as a point-of-care finger prick test or a laboratory fourth generation combination assay detecting the presence of HIV antibody and/or viral gag protein (p24). 12 Individuals presenting very soon after HIV acquisition, with acute infection, may present prior to the production of detectable levels of HIV-specific antibodies using routine point-of-care or laboratory assays. In these situations, confirmation of an HIV diagnosis can be made from venous blood samples sent to a virology laboratory, where the presence of viral proteins (p24) or viral HIV DNA or RNA can be detected. 13 Laboratory third generation tests do not detect viral gag protein (p24) and are therefore less able to detect early infection prior to the development of detectable levels of HIV antibody. The lack of detectable antibodies in the initial stages of infection makes this a difficult diagnosis to make using current standard tests. The window period (ie the time between transmission and production of HIV antibodies, when an HIV enzyme-linked immunosorbent assay result may be falsely negative) is 21 days for a third generation antibody test and 14 days after infection for a fourth generation combination assay (ie the presence of p24 antigen in the absence of antibody). 14 The recent HIV test algorithm (RITA), carried out by Public Health England, can also identify recent infection providing the clinic is part of the surveillance network. 15 Table 1 summarises the different tests available to diagnose HIV infection.
The majority of symptoms associated with PHI are nonspecific 15 and hence often misdiagnosed or overlooked. Table 2 highlights some of the symptoms, signs and recommended tests that can be associated with PHI. In view of the increased

Initial management and counselling Heightened awareness of PHI with urgent referral for ART discussion has the potential to enhance clinical outcome and reduce the risk of onward viral transmission. Initial counselling must be provided, as well as urgent contact tracing to enable urgent HIV testing of partners and post-exposure prophylaxis (PEP), 20 if exposure occurred within the preceding 72 hours. Rarely, PHI can present with opportunistic infections or severe neurological involvement requiring urgent management; otherwise, symptoms of PHI tend to be mild, temporary and self-limiting. Interventions to consider at this time include ART, screening and treatment of concomitant STI, and the promotion of immediate changes in sexual behaviour. 21 These include consistent condom use, limiting drug and alcohol intake (which may impair the ability to negotiate safe sex), and the incorporation of alternative sexual practices that do not involve the exchange of body fluids. Keeping individuals engaged in care and not passing HIV on during this period of hyper-infectiousness is of paramount importance. Initial counselling is essential, as well as rapid contact tracing to enable urgent HIV testing and provision of PEP (if exposure occurred within the preceding 72 hours) to partners.

Antiretroviral treatment The recent change in UK treatment guidelines 22 recommending initiation of ART for all people living with HIV irrespective of CD4 count is similarly pertinent to those diagnosed with PHI, and reflects new evidence from the Strategic Timing of AntiRetroviral Treatment 23 trial. The trial identified a significant reduction in the combined endpoint of AIDS events, serious non-AIDS events and death for immediate initiation of ART for all people living with HIV. PHI differs from chronic infection as there are reasons to fast-track individuals with PHI for immediate ART initiation, whereas those with asymptomatic chronic infection and CD4 counts >350 cells/mm 3 do not have the same level of urgency to start therapy:
> Preservation of CD4 T lymphocytes (total CD4 counts and ratio of CD4:CD8 T cells) correlates with all-cause mortality, and recovery is directly related to the timing of ART initiation. 24
> Viral remission on stopping ARV, so-called functional cure. 34,36
All of these benefits are observed the closer to PHI that ART is initiated, particularly in the first 12 weeks. 21,32,37,40

Duration of ART at PHI Three randomised controlled trials in PHI reported a modest benefit (delaying the decline in CD4 cell count, or the time from PHI to requiring lifelong ART) following a 48-week 38 or 60-week 39,40 course of ART. Interruption of therapy, even if started close to the time of PHI, is no longer recommended. 21 Therefore, once started, ART should be taken lifelong.

Key points
> Identification of primary HIV in any medical setting remains a priority.
> Encouragement of frequent testing across all settings.
> Rapid referral to an HIV specialist is essential.
> There is a time window of opportunity in ART within which immediate ART initiation confers enhanced clinical benefit.
KEYWORDS: Acute HIV infection, immediate clinical HIV management, antiretroviral therapy
ART regimen and PHI ART should be prescribed in accordance with BHIVA guidelines, 22 which recommend a three-drug regimen comprising a backbone of two nucleoside reverse transcriptase inhibitors and a third agent from a different class. Whilst any currently approved antiretroviral combination can be commenced at PHI, preference should be given to regimens including integrase inhibitor agents, which have been shown to rapidly control viral replication, especially among individuals with extremely high plasma viral load. There are no clinical, immunological or virological data supporting the initiation of more than three-drug combinations. The most important factors are expedited ART initiation and good adherence. UK observational cohort studies provide no evidence of increased rates of drug-related toxicities among individuals treated at high CD4 counts, and no increased risk of development of drug resistance among individuals starting early compared with later. Viral suppression should be anticipated by 24 weeks on therapy for the majority of individuals adhering to ART. The time period between ART initiation and viral suppression remains an important risk for onward transmission and must be clearly explained. Prevention of HIV transmission The very high plasma viraemia, often compounded by high rates of concomitant STI and continued high-risk sexual practices among individuals who are unaware of their changed HIV status, makes the short period of PHI highly infectious. Phylogenetic studies among MSM in Brighton and other cities showed that individuals with PHI contributed over 40% of all new infections. 31,32 Indeed, mathematical models suggest that PHI could be responsible for over half of all new HIV infections in focused epidemics such as the UK's. 7 The use of ART in HIV-positive individuals is the most effective tool to prevent onward viral transmission among HIV-serodifferent heterosexual and MSM couples. Recent PEP guidelines advise that six months of virological suppression is required before the transmission risk becomes too low to justify PEP provision. 41 At a population level, the critical barrier to prevention remains diagnosis. Summary Awareness of PHI among certain core high-risk groups in the UK, in particular MSM, is critical to avoid missed diagnoses, which impact on individual care and the prevention of onward HIV transmission. Increased and varied testing, and improved clinician and patient education to recognise symptoms of PHI, will improve PHI diagnosis. Prompt discussion of the benefits of initiating ART at HIV seroconversion (irrespective of CD4 count and viral load) should cover improved surrogate markers of disease progression and a marked reduction in the risk of onward viral transmission. Rapid diagnosis followed by immediate risk reduction interventions, screening and treatment of concomitant STIs, and the early initiation of ART to reduce viral load are critical goals to better control the HIV epidemic. Clinicians should be able to recognise the signs and symptoms of PHI, be confident to offer the appropriate HIV tests and be familiar with local pathways for prompt referral to specialist services. ■
Conjunctival melanoma metastatic to the breast: a case report Background Breast metastasis is fairly uncommon and its prognosis is dismal. Breast metastasis might be the first symptom or may occur during the course of other malignancies, dominantly arising from the contralateral breast. Leukemia, lung cancer and conjunctival melanoma may spread to the breast. Case presentation A 43-year-old female patient was operated on for conjunctival melanoma. After two years the disease progressed quickly and cutaneous nodes appeared on the back and paraumbilical region. Physical and radiological examination showed a breast mass. No palpable lymph nodes were noted. She underwent an open biopsy. Histopathologic examination and immunohistochemistry confirmed breast metastases from melanoma. During post-operative staging, multiple nasopharyngeal and oropharyngeal lesions were also observed. The patient was given palliative dacarbazine (250 mg/m2 per day for 4 days) for 4 cycles. She died 4 months after the diagnosis of breast metastases. Conclusion Histopathological evaluation should be mandatory in patients with a medical history of malignancies in order to differentiate new primary tumors, metastases, and benign tumors. Background Melanoma of the conjunctiva is a relatively rare and highly aggressive ocular malignancy [1]. These tumors may invade the orbit and the eye and metastasize to regional lymph nodes and systemically [2]. Breast metastases from extramammary cancers are rare, and melanoma is one of the malignancies that can metastasize to the breast. Benign and primary malignant breast tumors are quite common, but secondary tumors in the breast from metastatic malignancies are rare. Here we report the case of a young woman diagnosed with breast metastases from a conjunctival melanoma. Case presentation A 43-year-old female with no relevant personal or familial history and a negative history of systemic disease had been treated by enucleation two years earlier for a primary conjunctival melanoma (Figures 1 and 2), without adjuvant radiotherapy. During regular follow-up visits no residual or recurrent lesion occurred. Two years later, she presented with multiple cutaneous lesions. Physical examination demonstrated multiple cutaneous nodules on the back and paraumbilical region. Breast examination revealed a 2 cm hard lump in the left breast and no palpable lymph nodes. Physical examination of the contralateral breast was normal. Mammography and ultrasound showed a lobulated, contoured left breast mass reported as BI-RADS 4 (Breast Imaging-Reporting and Data System). An open biopsy was performed. Morphological examination showed a solid tumor suggesting melanoma involvement (Figures 3 and 4). Immunostaining was performed, showing negative staining for cytokeratin markers and hormonal receptors. However, it showed strong positivity for the melanoma marker S-100 protein, and patchy staining for Melan-A and HMB-45 (Human Melanoma Black) (Figure 5). These findings supported a diagnosis of breast metastatic disease from melanoma and eliminated a malignant or benign primary breast tumor. During post-operative staging with a whole body computed tomography (CT) scan, multiple nasopharyngeal and oropharyngeal metastases were also noted. Histopathological examination of these lesions confirmed features of metastatic disseminated disease from melanoma. She was given palliative dacarbazine (250 mg/m 2 per day for 4 days). The patient received 4 cycles. She died 4 months later.
Discussion Invasive conjunctival melanoma accounts for only 1-2% of all ocular melanomas [1]. Similar to cutaneous melanomas, conjunctival melanomas originate from melanocytes that are derived from the neural crest. It is a potentially lethal neoplasm with an average 10-year mortality rate of 30% [2]. Systemic metastases occur in 14% to 27% of cases. Breast involvement in melanoma is not an isolated finding; it is usually associated with disseminated disease. Subcutaneous tissue, lung, liver, and brain are common sites of secondary involvement in this disease [2]. Breast metastasis might be the first symptom or may occur during the course of other malignancies. Shetty et al. [3] reported a review of the literature, presenting data from 1855 to 1992, and found 431 cases of secondary extra-mammary breast tumors. The majority of them represented metastases from malignant melanoma (79 cases), followed by lung cancer metastases (78 cases), ovarian cancer (50 cases), prostate (39 cases), kidney (24 cases), and others (143 cases). The time between diagnosis of the primary melanoma and the occurrence of a breast metastasis ranged from 13 to 180 months (median 62). Clinically, a breast lesion may mimic primary breast carcinoma. It is necessary to differentiate primary from secondary extramammary tumors because the prognosis and treatment of these neoplasms differ. Diagnosis of breast disease involves the work of a multidisciplinary team of specialists. Radiologists perform the imaging necessary for establishing an optimal diagnosis. Core biopsy was done to obtain a histological diagnosis. An immunocytochemical panel should be used to confirm the diagnosis of secondary metastatic melanoma to the breast. Our diagnosis was suspected on comparative examination of the histological findings of the primary tumor and the metastases, and confirmed by a complete immunohistochemistry panel (negative staining for cytokeratin and positive for Melan-A and HMB-45). Breast metastases are a poor prognostic sign [4]. Radvel et al. reported a median survival time of 12.9 months after diagnosis of breast metastases [5]. In our case, the patient died 4 months after starting a treatment based on dacarbazine. A recently published randomized controlled trial has shown that BRAF and MEK kinase inhibitors improved rates of overall and progression-free survival in patients with previously untreated melanoma with the BRAF V600E mutation [6]. Conclusion Metastasis to the breast must be considered in any patient with a known history of a primary malignant tumor who presents with a breast lump. Histopathological evaluation should be mandatory in patients with a medical history of malignancies in order to differentiate new primary tumors, metastases, and benign tumors. Oncologists, surgeons, pathologists, and radiologists have to work together to reach the best possible therapy against this aggressive type of cancer. Consent Initial consent was obtained orally from the patient, and written informed consent was obtained from the patient's next of kin for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
A Lightweight Attention-Based Convolutional Neural Networks for Tomato Leaf Disease Classification : Plant diseases pose a significant challenge for food production and safety. Therefore, it is indispensable to correctly identify plant diseases for timely intervention to protect crops from massive losses. The application of computer vision technology in phytopathology has increased exponentially due to its automatic and accurate disease detection capability. However, a deep convolutional neural network (CNN) requires high computational resources, limiting its portability. In this study, a lightweight convolutional neural network was designed by incorporating different attention modules to improve the performance of the models. The models were trained, validated, and tested using tomato leaf disease datasets split into an 8:1:1 ratio. The efficacy of the various attention modules in plant disease classification was compared in terms of the performance and computational complexity of the models. The performance of the models was evaluated using the standard classification accuracy metrics (precision, recall, and F1 score). The results showed that a CNN with an attention mechanism improved the interclass precision and recall, thus increasing the overall accuracy (>1.1%). Moreover, the lightweight model significantly reduced the network parameters (~16 times) and complexity (~23 times) compared to the standard ResNet50 model. However, amongst the proposed lightweight models, the model with an attention mechanism only marginally increased the network complexity and parameters compared to the model without attention modules, while producing better detection accuracy. Although all the attention modules enhanced the performance of the CNN, the convolutional block attention module (CBAM) was the best (average accuracy 99.69%), followed by the self-attention (SA) mechanism (average accuracy 99.34%). Introduction Tomato is a ubiquitous crop with high nutritional value worldwide. More than 180 million tons of tomatoes were produced worldwide in 2018, and Asia is the biggest market and producer of tomatoes [1]. However, the crop is affected by many diseases and pests, and the precise identification of those diseases is a challenging task for agronomists [2]. Traditionally, farmers have relied on their experience and visual inspection to identify plant diseases, but this comes with serious cost, efficiency, and reliability issues [3]. Sometimes, even an experienced farmer or agronomist might fail to correctly identify a plant disease due to the large variety of species and similar disease symptoms. Furthermore, the increase in global temperature due to climate change has increased the chances of diseases occurring and spreading quickly [4]. Therefore, automatic detection of plant diseases is of utmost necessity for timely intervention in order to prevent massive losses. Convolutional neural network (CNN) is a powerful deep learning algorithm for image detection and classification that automatically extracts and analyzes image features; therefore, the application of CNNs is soaring in most domains. The present work set out to study the performance of lightweight CNNs with improved image recognition algorithms (attention modules) for the detection of a few classes of plant diseases. In this study, a lightweight CNN with 20 layers and reduced trainable parameters was designed using the ResNet topology [24].
Then, the commonly used attention modules, namely the convolutional block attention module (CBAM) [15], the self-attention module [16], the squeeze-and-excitation module [25], and the dual attention module [26], were integrated into the base network to observe the impact of different attention mechanisms on a conventional CNN. Moreover, the performance of the models with and without attention mechanisms was assessed by employing well-known classification metrics (accuracy, precision, recall, and F1 score). All the models were trained, validated, and tested using tomato disease datasets split at a ratio of 8:1:1 for training, validation, and testing. Furthermore, the most productive number of attention modules and their locations in the base network were comprehensively assessed through an ablation study. Finally, the computational complexity of the models, the training and testing time per image, and the network parameters and sizes were calculated and compared parametrically. Therefore, the main objectives of this study were to design a lightweight and computationally efficient network for the classification of a few classes of plant diseases, to improve the performance of a conventional CNN by amending it with an attention mechanism, and to identify an effective and efficient attention module for plant disease detection. Data Collection and Preprocessing Ten classes of tomato leaf images (nine diseased and one healthy) that were part of the PlantVillage public datasets [27] were collected. The Fusarium wilt diseased images were captured in the greenhouse located at Gyeongsang National University, South Korea. In this way, a total of 19,510 images from 10 distinct disease classes and one healthy class were used to train, validate, and test the models. Most of the field-captured images were taken nondestructively, but a few leaves were detached from the plant and captured on a white background. A sample of images from each class is presented in Figure 1. Similarly, Table 1 shows various information about the dataset, such as the class assignment, the common and scientific names of the tomato diseases [13], the number of images per class, and the source of data collection. As image data preparation is crucial for deep learning models, different image preprocessing functions were carried out before the images were applied to the model. The main preprocessing functions were labeling, resizing, rescaling, and augmentation of the raw images. Then, the images were split into training, validation, and testing sets at the ratio of 8:1:1 [12]. The larger the number of input images, the better the learning of the deep model; thus, the image augmentation technique was applied to the training dataset. Lightweight Attention-Based Network Design A lightweight attention-based CNN model was designed using the ResNet topology. It consisted of 20 layers, and the attention modules were embedded between residual blocks 3 and 4 (after the 16th layer) [16]. Figure 2 shows the block diagram of the proposed model, and the detailed parameters of the base model are given in Table 2. The number of kernel filters in each layer of the base network was four times lower than in the standard ResNet architecture, lowering the total network parameters to make the model lighter and more portable. In addition, the number of convolutional layers was limited to 20 to decrease the network complexity [28]. The Conv1 layer had 16 kernel filters of large patch size (7 × 7), followed by a batch normalization layer, a rectifier linear unit (ReLU) activation layer, and a maximum pooling layer (Max. pooling), which reduced the size of the feature maps to half of the input image size.
Residual blocks 1 and 4 comprise a convolutional block containing three convolutional layers (conv.), each accompanied by a batch normalization (BN) layer and an activation (ReLU) layer, whereas residual blocks 2 and 3 have a convolutional block followed by an identity block. The structure of the identity block was similar to that of the convolutional block except for the shortcut path. Finally, a global average pooling (Global Avg. Pooling) layer converted the 2D feature maps to 1D before a dense output layer. The various attention modules were inserted into the base network at the same location, as shown in Figure 2. The necessary zero padding and maximum pooling layers were added to adjust the spatial dimensions of the input and output feature maps. Convolutional Block Attention Module (CBAM) CBAM applies two attention modules in series: channel attention followed by spatial attention [15], as shown in Figure 3. The channel attention module was used to generate two feature maps using average and maximum pooling layers from the intermediate layer. Then, both feature maps were input to a shared multilayer perceptron (MLP), and the output feature maps were added before being normalized using the sigmoid function. The features of the channel attention module multiplied with those of the convolutional layer were applied to the spatial attention module to determine the position of the important features in the image. The final feature maps from the channel and spatial attention modules are given in Equations (1) and (2): CA(x) = σ(MLP(AvgPool(x)) + MLP(MaxPool(x))), (1) SA(x) = σ(f 7×7([F s avg; F s max])), (2) where CA(x) represents the channel attention feature maps, SA(x) is the spatial attention feature maps, σ represents the sigmoid function of the feature maps, f 7×7 represents the 7 × 7 convolutional operation, MLP is the multilayer perceptron, AvgPool(x) is the average pooling of input x, MaxPool(x) is the maximum pooling of input x, F s max is the feature maps obtained from the maximum pooling operation, and F s avg is the feature maps from the average pooling operation. Squeeze-and-Excitation (SE) Attention Module The dimension of the input feature map was squeezed to 1 × 1 × C by a global pooling operation, and two fully connected (FC) layers followed by rectifier linear unit (ReLU) and sigmoid activation layers were attached to build an excitation block [25], as shown in Figure 4. The squeeze-and-excitation (SE) feature maps were multiplied element-wise with the input feature maps before being forwarded to the next layer. The computational operation of the SE module is expressed mathematically in Equation (3): SE(x) = σ(W 2 δ(W 1 F ex)), (3) where SE(x) represents the squeeze-and-excitation feature maps, F ex is the squeezed (globally pooled) features, x is the input feature maps, W is the weight of the SE network, σ is the sigmoid operation, δ is the ReLU operation, and W 1 and W 2 are the weights of the first and second dense layers, respectively. Finally, an element-wise multiplication was carried out to incorporate the SE feature maps into the main network's feature maps. Self-Attention (SA) Module Figure 5 represents the embedding of a self-attention module into the network and its architecture [16]. It consists of three parallel convolutional and ReLU activation layers to extract the discriminating features from the input images. The output of two of the convolutional layers was multiplied and fed to a softmax layer to generate an attention map.
Then, the attention maps were multiplied by the transpose of the feature maps generated from the third convolutional branch to obtain the self-attention feature maps. Finally, the scaled attention maps were added to the input feature maps to generate the output feature maps, as shown in Equation (4): out(x) = μ · SA(x) + In(x), (4) where out(x) is the output feature maps after the self-attention (SA) module, SA(x) is the feature maps after the self-attention module, In(x) is the input feature maps, Soft is the softmax operation, μ is a scaling factor, P(x), Q(x), and R(x) are the feature maps generated from the three parallel convolutional paths of the SA module, S(x) is the feature maps after the softmax operation, and T(x) is the transposed feature maps of P(x) · Q(x). Dual Attention (DA) Module The authors of [26] proposed a dual attention mechanism with two attention networks, namely position attention (PA) and channel attention (CA) networks, for scene segmentation. The position attention network is similar to the self-attention module except for the activation layers and the use of some different strategies for attention map generation, as shown in Figure 6. The DA module also contains a channel attention network that performs two multiplication operations, a softmax, and an addition operation. Equation (5) shows the overall mathematical operation carried out in the dual attention module, and Equations (6) and (7) provide the mathematical operations performed in the PA and CA networks: DA(x) = PA(x) + CA(x), (5) where DA(x) is the dual attention feature maps, and PA(x) and CA(x) are the position attention and channel attention feature maps, respectively. Network Training and Evaluation The lightweight base network and all the models with the various attention modules were trained, validated, and tested using the same image datasets. Moreover, the same training hyperparameters and evaluation strategies were applied to fairly compare their performance. As deep CNN performance improves with a large number of training images, we used several data augmentation algorithms, as shown in Table 3. The data augmentation approaches were executed only on the training set, after splitting the whole image collection into training, validation, and testing sets. The increase in training image count due to the data augmentation process is provided in Table 4; the number of training images increased massively (8 times) after augmentation. Furthermore, the Adam optimizer with the default learning rate was chosen as a training hyperparameter to effectively converge the network [29]. Although the models were trained for a fixed 100 epochs, the model with the lowest validation loss was saved at every epoch for testing purposes. Then, all the trained models were evaluated using the same testing datasets. The performance of the models was quantified by adopting the standard classification metrics, as shown in Equations (8)-(11) [30]: Accuracy = (TP + TN)/(TP + TN + FP + FN), (8) Precision = TP/(TP + FP), (9) Recall = TP/(TP + FN), (10) F1 score = 2 × (Precision × Recall)/(Precision + Recall), (11) where TP stands for true positive, TN for true negative, FP is false positive, and FN is false negative of the predicted class. In addition, the size of the models was determined by counting the total number of network parameters and the memory space usage. On the other hand, the computational complexity was determined using floating point operations (FLOPs), the total mathematical operations required to complete a forward and backward propagation of an input image.
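As a concrete companion to the module descriptions above, the following is a hypothetical PyTorch sketch of CBAM following Equations (1) and (2): a shared MLP over average- and max-pooled channel descriptors, then a 7 × 7 convolution over pooled spatial maps. The reduction ratio r = 8 and the feature-map size are illustrative assumptions, not values taken from this paper.

```python
# Hypothetical CBAM sketch: channel attention (Eq. 1) then spatial attention (Eq. 2).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, r: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared MLP of Eq. (1)
            nn.Conv2d(channels, channels // r, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x):
        # Channel attention: sigmoid(MLP(AvgPool(x)) + MLP(MaxPool(x)))
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: sigmoid(conv7x7([AvgPool(x); MaxPool(x)])) over channels
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feats = torch.randn(1, 64, 28, 28)   # e.g., maps between residual blocks 3 and 4
out = CBAM(64)(feats)                # same shape, attention-refined
```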
Training, Validation, and Testing Accuracy of the Models All the models were trained and validated with the same dataset and the same training and validation parameters. Figure 7 shows the training and validation accuracy and loss plots of the different models. The base model without an attention module (lw_resnet20) trained relatively more slowly (indicated by a black line) than the models with attention modules. The model with the SE attention module (lw_resnet20_se, represented by a blue line) showed quick training ability, as shown in Figure 7a,b. The validation accuracy and loss of all the models fluctuated significantly from epoch to epoch. The best training accuracy and lowest training loss were obtained by the base model, followed by the lw_resnet20_se model. However, the best validation accuracy and loss were achieved by the lw_resnet20_cbam model, followed by the lw_resnet20_da model, as shown in Table 5. Network Parameters and Efficiency The deeper the network, the more network parameters there are, thus increasing the size and computational complexity of the network [31]. The network parameters, size, training and testing efficiency, FLOPs, and the average accuracy on the test dataset are presented in Table 7. The proposed models had almost 16 times fewer network parameters and were 23 times less complex than the standard ResNet50 model [15]. The base model was found to be comparatively efficient and lightweight due to its fewer network parameters but showed poor performance on the test dataset. The SE and CBAM modules are the lightest attention modules compared to SA and DA. Moreover, the channel attention of CBAM and the module structure of SE are somewhat similar except for the maximum pooling layers. The training time of the models was not significantly different amongst the various attention modules. The test time per image was calculated by averaging the time taken to detect 1960 test images. The SA and DA modules are heavier than CBAM and SE, increasing the computational complexity. Tomato Disease Detection The low interclass variance, high intra-class variance, and mixed symptoms of two or more diseases on the same leaf are some of the serious challenges for plant disease detection using computer vision techniques [9]. As all the images were of tomato leaves, the chances of producing false positives (FP) and false negatives (FN) were higher due to lower interclass variance. Moreover, some of the images of late blight and target spot disease were at a preliminary stage, so there was only a marginal difference between diseased and healthy images, or images were mistakenly labeled, as shown in Figure 9. Therefore, most of the models poorly detected early blight, healthy, late blight, and target spot leaf images. In contrast, all the models perfectly identified the Fusarium wilt diseased images because of the distinctiveness of that dataset. The majority of the Fusarium wilt images were captured directly on the plant (nondestructively), which made them unique with respect to background and leaf position (Figure 1). The precision, recall, and F1 score of the target spot class were the lowest for all models due to higher FPs and FNs. All models wrongly detected some healthy images as late blight, bacterial spot, and target spot diseases, except the lw_resnet20_cbam model, which falsely identified 1% of target spot images as healthy leaves because some diseased images at a very early stage were almost visually indistinguishable from healthy leaves.
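The training setup described above (8:1:1 split, augmentation applied only to training images, Adam with the default learning rate) could be sketched as follows. The particular transforms, magnitudes, and folder path are placeholders; the paper's actual augmentation operations are listed in its Table 3.

```python
# Illustrative torchvision pipeline: augmentation touches only the training subset.
import torch
from torchvision import datasets, transforms

aug_tf = transforms.Compose([              # training-time augmentation (placeholders)
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),                 # also rescales pixel values to [0, 1]
])
eval_tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

# Two views of the same folder so only the training subset is augmented.
aug_set = datasets.ImageFolder("tomato_leaves", transform=aug_tf)
plain_set = datasets.ImageFolder("tomato_leaves", transform=eval_tf)

n = len(plain_set)
idx = torch.randperm(n, generator=torch.Generator().manual_seed(0))
n_tr, n_va = int(0.8 * n), int(0.1 * n)    # 8:1:1 split
train = torch.utils.data.Subset(aug_set, idx[:n_tr].tolist())
val = torch.utils.data.Subset(plain_set, idx[n_tr:n_tr + n_va].tolist())
test = torch.utils.data.Subset(plain_set, idx[n_tr + n_va:].tolist())

model = torch.nn.Conv2d(3, 11, 1)          # stand-in for the 20-layer network (11 classes)
optimizer = torch.optim.Adam(model.parameters())   # Adam with the default learning rate
```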
Therefore, all the models failed to achieve 100% correct classification of healthy and diseased images. However, the lw_resnet20_cbam and lw_resnet20_sa models performed well except for giving 1% FPs and FNs, respectively. On the other hand, almost all the models precisely identified bacterial spot, leaf mold, mosaic virus, and yellow leaf curl virus diseased images. Performance Evaluation of the Models The attention modules allow the network to identify the discriminative features and their location in the input images, emphasizing key features during training. The channel attention module determines the salient features available in the input images, while the spatial or position attention reveals the spatial location of those key features. The number of attention modules and their place in the network was the same for all models. We fixed the position of the attention module between blocks 3 and 4 to permit the network to focus on specific high-level features, because the datasets were of the same plant (tomato). The lw_resnet20_cbam model outperformed the others in terms of classification accuracy and model lightness. The additional maximum pooling layer in the channel attention module of CBAM provides even minute details of the salient features to the network, boosting the network's performance. DA also uses two attention modules (channel and position), but it failed to perform as well as the CBAM model. One reason for the lower performance might be the parallel combination of channel and position attention; as [15] suggests, a series combination performs better than a parallel one. Moreover, the DA module structure is bulkier than the other attention modules due to the three parallel convolutional layers for position attention and the three branching matrix operations on the input feature maps for channel attention. In contrast, CBAM uses maximum pooling, average pooling, and convolutional layers in the spatial attention module, which is computationally more efficient than matrix operations. The performance of our proposed model (lw_resnet20_cbam) was compared with models previously studied by various researchers. Some studies utilized the same tomato disease datasets with different deep CNN architectures, and some also implemented attention-based CNNs to improve detection accuracy. Table 8 demonstrates the performance comparison of various CNN architectures used for the same tomato disease datasets. Only [2] used more tomato disease classes (12) than ours (11). In addition, most researchers applied a generic model designed for large image classification datasets, which is computationally inefficient for a small number of plant classes. Moreover, the majority of generic models were used with transfer learning. From the table, it can be seen that none of the previous studies achieved better detection results than ours on such a large number of tomato leaf images with such a lightweight model. Therefore, this study will be helpful for future researchers designing efficient and effective networks for portable devices. Amongst the various attention modules, the SA module also showed competitive results, although at the cost of more network complexity and size. Its architecture is almost identical to that of the position attention module of the DA network, except for an additional ReLU activation layer in each convolutional branch. In addition, the SA model's performance surpassed that of the DA model but could not match the CBAM model.
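For comparison with the CBAM sketch above, here is a hypothetical SAGAN-style rendering of the self-attention module of Equation (4). The channel-reduction factor of 8 is an assumption, and the per-branch ReLU layers mentioned in the text are omitted for brevity.

```python
# Hypothetical self-attention sketch: softmax(P·Q) reweights R; scaled residual add.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        self.p = nn.Conv2d(c, c // 8, 1)        # branch P (query-like)
        self.q = nn.Conv2d(c, c // 8, 1)        # branch Q (key-like)
        self.r = nn.Conv2d(c, c, 1)             # branch R (value-like)
        self.mu = nn.Parameter(torch.zeros(1))  # learnable scaling factor mu

    def forward(self, x):
        b, c, h, w = x.shape
        p = self.p(x).flatten(2).transpose(1, 2)      # (b, hw, c/8)
        q = self.q(x).flatten(2)                      # (b, c/8, hw)
        attn = F.softmax(p @ q, dim=-1)               # (b, hw, hw) attention map S(x)
        r = self.r(x).flatten(2)                      # (b, c, hw)
        out = (r @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.mu * out + x                      # Eq. (4): mu * SA(x) + In(x)
```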
The SE network follows a similar principle to CBAM's channel attention module, although it uses only a global average pooling operation, in contrast to the global average and maximum pooling operations in CBAM. Its performance was similar to that of the DA model; the SE module is, however, the lightest and most efficient attention module. Thus, it is equally important to identify the key features and their locations in the input images, and the channel and spatial attention modules should be arranged in series so that the model can detect the dominant features and their places in the input images.

Conclusions

This study experimented with various attention modules and analyzed their performance in tomato disease classification. Attention modules originally designed for different purposes were employed, and their network architectures, computational complexity, and performance were comprehensively compared. From the results, it can be concluded that determining the key features and their locations in the input images is crucial to enhancing classification performance. Moreover, identifying the regions containing key features proved more beneficial than identifying the features alone. The determination of critical features and their positions should be sequential, because merging them in parallel leads to a loss of crucial information. Our proposed model outperformed the prevailing generic models used for plant disease detection in terms of both accuracy and efficiency.
Misleading fruits: The non-monophyly of Pseudopiptadenia and Pityrocarpa supports generic re-circumscriptions and a new genus within mimosoid legumes

Abstract

Generic delimitation in Piptadenia and allies (mimosoid legumes) has been in a state of flux, particularly caused by over-reliance on fruit and seed morphology to segregate species out of Piptadenia into the genera Parapiptadenia, Pityrocarpa and Pseudopiptadenia. Although supporting their segregation from Piptadenia, previous phylogenetic analyses suggested that some of these segregated genera are not monophyletic. Here, we test the monophyly of Parapiptadenia, Pityrocarpa and Pseudopiptadenia with dense taxon sampling across these genera, including the type species of each genus. Our analysis recovers Parapiptadenia as monophyletic, but places Pseudopiptadenia species in two distinct lineages, one of which includes all three species of Pityrocarpa. Given that the type species of both Pseudopiptadenia and Pityrocarpa are nested in the same clade, we subsume Pseudopiptadenia under the older name Pityrocarpa. The remaining Pseudopiptadenia species are assigned to the new genus Marlimorimia. Alongside high molecular phylogenetic support, recognition of Parapiptadenia, Pityrocarpa and Marlimorimia as distinct genera is also supported by combinations of morphological traits, several of which were previously overlooked.

Introduction

Generic delimitation in the mimosoid legumes is being continually revised, notably across the informal Piptadenia group sensu Lewis and Elias (1981), which included Anadenanthera Speg., Microlobius C. Presl, Mimosa L., Parapiptadenia Brenan, Piptadenia Benth., Pityrocarpa (Benth.) Britton & Rose, Pseudopiptadenia Rauschert and Stryphnodendron Mart. Most of the proposed generic re-circumscriptions within the Piptadenia group have involved segregating species out of Piptadenia, which was morphologically poorly defined (Brenan 1955) and is known to be polyphyletic (Jobson and Luckow 2007; Simon et al. 2016; Ribeiro et al. 2018). While previous phylogenetic and phylogenomic analyses confirm the segregation of Parapiptadenia, Pityrocarpa and Pseudopiptadenia and place them together with Stryphnodendron and Microlobius in the Stryphnodendron clade sensu Koenen et al. (2020), the monophyly of these three genera is still uncertain because of incomplete taxon sampling in previous analyses (Simon et al. 2016; Koenen et al. 2020; Ringelberg et al. 2022). Species of Parapiptadenia, Pityrocarpa and Pseudopiptadenia are trees inhabiting Neotropical rain forests and seasonally dry tropical forests and woodlands (SDTFWs sensu Queiroz et al. 2017), with the majority of species in South America and just two taxa in North America (Pi. obliqua (Pers.) Brenan var. obliqua and Ps. psilostachya (DC.) G.P. Lewis & M.P. Lima) (Brenan 1955, 1963; Rauschert 1982; Lima and Lima 1984; Lewis and Lima 1991; Queiroz 2009). Their bipinnate leaves vary widely in the number of pinnae, as well as leaflet number, size and shape. Flowers are pentamerous, dialipetalous or gamopetalous and arranged in elongated spikes. The diverse fruits and seeds have been the most prominent traits used to define each genus (Brenan 1955; Lewis and Elias 1981). Parapiptadenia includes six species with plano-compressed fruits opening along both sutures (typical legumes) and flat, compressed, narrowly winged seeds lacking a pleurogram.
Eleven species with similar seeds, but with follicles (fruits splitting along the upper suture only), were placed in Pseudopiptadenia (Rauschert 1982; Lewis and Lima 1991). The three species in Pityrocarpa, which was first proposed as a section of Piptadenia (Bentham 1842), differ from the other two genera by their regularly constricted moniliform legumes and lentiform whitish seeds with a U-shaped pleurogram (Jobson and Luckow 2007). The first phylogenetic analysis including these three genera recovered each as monophyletic, with Pseudopiptadenia contorta (DC.) G.P. Lewis & M.P. Lima and Ps. psilostachya forming a clade sister to Pityrocarpa (three species sampled), while the relationship of Parapiptadenia (three species sampled) to other genera was uncertain (Fig. 1; Jobson and Luckow 2007). The relationships amongst these genera and the putative monophyly of Pseudopiptadenia were later questioned by analyses with larger DNA sequence datasets and increased taxon sampling (Simon et al. 2016; Ribeiro et al. 2018). In these analyses, Parapiptadenia (four species sampled) emerged as sister to a clade including all sampled species of Pseudopiptadenia (five species, including Ps. contorta and Ps. psilostachya), except Ps. brenanii G.P. Lewis & M.P. Lima, which was sister to Pityrocarpa (Fig. 1). This latter clade appeared more closely related to Stryphnodendron and Microlobius than to the group formed by Parapiptadenia and Pseudopiptadenia. Phylogenomic analyses with sparse taxonomic sampling recovered slightly different relationships between these three genera (Fig. 1), but reinforced the non-monophyly of Pseudopiptadenia (Lima et al. 2022; Ringelberg et al. 2022). While it is clear that the non-monophyly of Pseudopiptadenia means that taxonomic adjustments are needed, the type species of the genus, Ps. leptostachya, has not been included in any previous phylogenetic analyses, raising doubts about its placement and, hence, about which generic name should be applied to the clade containing that species. In this study, we infer the phylogenetic relationships between Parapiptadenia, Pityrocarpa and Pseudopiptadenia using near-complete taxon sampling, including the type species of all three genera, and re-evaluate the circumscriptions of these genera, based on the resulting phylogenetic hypothesis.

Phylogenetic inference

To further test the polyphyly of Pseudopiptadenia indicated by previous studies (Simon et al. 2016; Ribeiro et al. 2018; Ringelberg et al. 2022) and further investigate sister group relationships across the Stryphnodendron clade, we carried out phylogenetic analyses including near-complete sampling of species of Parapiptadenia, Pityrocarpa, Pseudopiptadenia and allies. Phylogenetic analyses were based on the nuclear ribosomal 5.8S subunit and internal transcribed spacer region (nrITS) and the plastid regions matK and trnD-trnT. We generated 60 new sequences (21 nrITS, 23 matK, 16 trnD-trnT), including two accessions of Ps. leptostachya, the type species of Pseudopiptadenia, sampled here for the first time. Published sequences of other members of the Stryphnodendron clade and other genera were obtained from GenBank (Hughes et al. 2003; Simon et al. 2009; Simon et al. 2016; LPWG 2017; Ribeiro et al. 2018). Sampling comprised 60 accessions, including nine species (18 accessions) of Pseudopiptadenia (only the poorly known Ps. colombiana and Ps.
pittieri were not sampled), all three species of Pityrocarpa (six accessions), all six known species of Parapiptadenia (11 accessions), plus representatives of the allied genera Microlobius (monospecific; two accessions) and Stryphnodendron (14 accessions, including members of the three major lineages of this non-monophyletic genus; see Lima et al. 2022). A selection of mimosoid lineages closely related to the Stryphnodendron clade (Jobson and Luckow 2007; Simon et al. 2016; Ribeiro et al. 2018; Ringelberg et al. 2022) were included as outgroups. Voucher details and GenBank accession numbers are provided in Table 1 and in Suppl. material 1. Total DNA was extracted from about 20 mg of silica gel-dried leaf material using a modified CTAB-based protocol (Inglis et al. 2018a). We checked DNA quality and integrity using agarose gel electrophoresis, and DNA quantity and purity were estimated by Nanodrop spectrophotometry (Thermo Scientific). Laboratory procedures, primer sequences and amplification protocols followed Inglis et al. (2018b) for nrITS and Simon et al. (2016) for matK and trnD-trnT. PCR products were prepared for direct Sanger sequencing using ExoSAP (ThermoFisher), and both DNA strands were sequenced with the BigDye v.3.1 kit (Applied Biosystems) using the amplification primers. Further sequences included in the analysis were obtained from GenBank (Table 1).

Data Resources

The data underpinning the analysis reported in this paper are deposited in GitHub at https://doi.org/10.5281/zenodo.6611789

Results and discussion

Our densely sampled phylogenetic analysis recovers Parapiptadenia as monophyletic, reinforces the non-monophyly of Pseudopiptadenia and shows that Pityrocarpa is also non-monophyletic (Fig. 2). Although the backbone of the phylogeny remains weakly supported, the placement of Parapiptadenia, Pityrocarpa and Pseudopiptadenia species in three distinct lineages and the robustly supported monophyly of Parapiptadenia agree with previous phylogenetic analyses (Fig. 1; Simon et al. 2016; Ribeiro et al. 2018; Lima et al. 2022; Ringelberg et al. 2022). However, the relationships amongst these three clades and other members of the Stryphnodendron clade remain unclear, because of the lack of support across the backbone of the clade (Figs 1 and 2) and disagreement with previous analyses. For example, although analyses of nuclear and plastid data (Simon et al. 2016; Ribeiro et al. 2018) also placed Pseudopiptadenia p.p. and Parapiptadenia in the same clade, this group could be sister to the remainder of the Stryphnodendron clade (Simon et al. 2016) or sister to the clade comprising Stryphnodendron and Microlobius (Ribeiro et al. 2018). Phylogenomic analyses based on 997 nuclear genes (Lima et al. 2022; Ringelberg et al. 2022) placed Pseudopiptadenia p.p. as sister to a group including Stryphnodendron duckeanum Occhioni f. plus a clade formed by Parapiptadenia and the Pityrocarpa clade. Furthermore, the nodes across the backbone of the Stryphnodendron clade show high gene tree conflict, coinciding with very short branches and weak support in both conventional and phylogenomic analyses, highlighting the difficulties of inferring relationships across this part of the mimosoid phylogeny. Despite uncertainties regarding generic relationships, our results provide an additional example of how over-reliance on particular traits, in this case fruits and seeds (Brenan 1955, 1963; Lewis and Elias 1981), may lead to unnatural taxonomies.
The follicles and the flat, winged seeds that were used to diagnose Pseudopiptadenia are, respectively, shared by most lineages within the Stryphnodendron clade or homoplastic between Pseudopiptadenia p.p. and members of the Pityrocarpa clade. All this is not to say that fruits have no taxonomic significance, as the vast majority of Parapiptadenia species have distinctive legumes with valves plicate above the seeds, not seen in any other member of the Stryphnodendron clade. Nonetheless, most species in the Pityrocarpa clade, even though variable in seed morphology (flat and winged vs. lentiform and wingless), share a number of similarities, including the position of the extrafloral nectaries between or just below the first pair of pinnae; few pinnae pairs; inflorescence spikes in general solitary and axillary to coeval leaves; and bifoliolate seedlings (Fig. 3). These features are not shared with most Pseudopiptadenia p.p. species, which have extrafloral nectaries on the lower half of the petiole; many pairs of pinnae; inflorescence spikes arranged in complex efoliate synflorescences; and pinnate or bipinnate seedlings (see Table 2). Although fairly homogeneous within the Pityrocarpa clade and Pseudopiptadenia p.p., the characters highlighted above sometimes vary amongst and within species, particularly in a context including Parapiptadenia. For example, solitary inflorescences occur in species of both Parapiptadenia and the Pityrocarpa clade, while Pseudopiptadenia p.p. species sometimes do not have spikes arranged in complex synflorescences (e.g. particular specimens of Ps. bahiana and Ps. contorta). Nonetheless, taken together, the traits highlighted here allow better recognition of these lineages as distinct genera than fruit morphology alone. These results from phylogenetic and morphological analyses provide robust support for re-circumscription of Pseudopiptadenia as it was traditionally conceived, and also of Pityrocarpa. Given that the type species of these two genera are nested in the same clade and that no morphological traits support the recognition of a narrow circumscription of Pseudopiptadenia, we subsume the name Pseudopiptadenia under Pityrocarpa, the oldest validly published generic name (Britton and Rose 1928; Lewis and Lima 1991; Turland et al. 2018).

Pityrocarpa (Benth.) Britton & Rose [Britton and Rose 1928]

Description. Unarmed trees or shrubs. Leaves bipinnate; petiole with an extrafloral nectary between or shortly below the first pair of pinnae; pinnae 1-4 (5) pairs, exceptionally to 10 pairs in Pi. leptostachya; leaflets 1-10 pairs per pinna, rarely to 20 pairs (Pi. brenanii and Pi. leptostachya), mostly rhomboid, sometimes also asymmetrically elliptical or lanceolate. Inflorescences spikes, solitary in the axils of coeval leaves, commonly pendulous. Flowers pentamerous; petals free (except possibly Pi. leucoxylon), glabrous; stamens 10, anther gland present; ovary shortly stipitate and included within or exserted from the corolla. Fruit a follicle, dehiscing along the lower suture, flat compressed, mostly moniliform, the margins deeply and regularly constricted, rarely with sinuous, shallowly constricted margins (Pi. brenanii and occasionally Pi. leucoxylon); valves stiffly coriaceous. Seeds mostly flat compressed with a coriaceous testa and a narrow marginal wing, lacking a pleurogram or, less frequently, ovoid or discoid with a hard, whitish testa, wingless and with a 'U'-shaped pleurogram (Pi. leucoxylon, Pi. moniliformis and Pi. obliqua); embryo with a rudimentary plumule (except Pi.
brenanii). Seedlings with bifoliolate eophylls.

Notes. As circumscribed here, Pityrocarpa includes seven species, all with a moniliform fruit, with the margins deeply constricted between the seeds (Fig. 4). This trait is shared by species formerly included in Pityrocarpa (sensu Jobson and Luckow 2007) and some species previously placed in the genus Pseudopiptadenia (sensu Lewis and Lima 1991). These two genera had been separated based on seed morphology, Pityrocarpa being characterised by ovoid or discoid seeds with a hard, whitish seed coat and a 'U'-shaped pleurogram, while Pseudopiptadenia included species with flat compressed, narrowly winged seeds with a coriaceous testa lacking a pleurogram. Pityrocarpa brenanii and Pi. leucoxylon have fruits with only shallowly sinuous margins, more similar to species of the genus Marlimorimia. Besides sharing these fruit traits, Pityrocarpa species also have leaves with few pinnae (1 to 4 [5] pairs, rarely up to 10 pairs in Pi. leptostachya) and relatively large rhomboid leaflets compared to species of Marlimorimia. One exception is the leaves of Pi. brenanii, which are similar to those of M. bahiana. All species of Pityrocarpa have an extrafloral nectary between or shortly below the first pair of pinnae, in contrast to species of Marlimorimia, which have the nectary below mid-petiole, frequently close to the pulvinus. Floral traits, although previously disregarded as being generically diagnostic in the group, provide further evidence for the distinction between Pityrocarpa and Marlimorimia. The solitary inflorescence spikes in the axils of coevally developing leaves in Pityrocarpa contrast with the more complex synflorescences of Marlimorimia (Fig. 3; see notes under Marlimorimia). All species of Pityrocarpa have free and glabrous petals, except for Pi. leucoxylon, in which the petals are connate for a little over 1 mm (Barneby and Grimes 1984). Lima (1985) and Lewis and Lima (1991) provided additional information on embryos and seedlings that is potentially useful for distinguishing Pityrocarpa from Marlimorimia. Embryos of Pityrocarpa species have a rudimentary plumule, while in Marlimorimia the plumule is developed and multifid. This seems to be correlated with seedling morphology, as the studied species of Pityrocarpa have bifoliolate eophylls and those of Marlimorimia have pinnate or bipinnate eophylls (Lewis and Lima 1991). Pityrocarpa brenanii, however, has embryo morphology more similar to that reported for species of Marlimorimia (Lewis and Lima 1991).

Note. Lewis and Lima (1991) unintentionally lectotypified this name by indicating the holotype to be at B and the isotype to be at K. However, the B specimen was destroyed and, hence, cannot serve as a lectotype. Moreover, K holds two duplicates of an un-numbered Sellow collection. Here, we chose the one previously belonging to Bentham's herbarium as the lectotype.

Diagnosis. Marlimorimia shares with Pityrocarpa the follicle, a fruit dehiscing along the lower suture only, and flat, compressed, winged seeds, which lack a pleurogram. It can be differentiated from Pityrocarpa by the position of the extrafloral nectary on the petiole (from the base to the mid-petiole in Marlimorimia vs. between or just below the first pair of pinnae in Pityrocarpa); inflorescence spikes clustered in terminal pseudoracemes or in fascicles at efoliate nodes, surpassed by mature leaves (vs. solitary spikes in the axils of coeval leaves); petals united into a gamopetalous corolla (vs.
petals free and glabrous); and fruits with margins straight to shallowly sinuous (vs. margins deeply constricted).

Type. Marlimorimia contorta (DC.) L.P. Queiroz & P.G. Ribeiro.

Description. Unarmed trees. Leaves bipinnate; petiole with an extrafloral nectary well below the first pair of pinnae, close to the pulvinus, always below mid-petiole; pinnae 5-10 to many pairs per leaf (2-3 pairs in M. colombiana and 3-5 in M. bahiana); leaflets mostly >10 pairs per pinna (6-8 in M. colombiana), mostly oblong to linear from an asymmetrical base, rarely rhomboid (M. bahiana). Inflorescences spikes, grouped in fascicles, these being arranged in terminal pseudoracemes or forming clusters below the coeval leaves. Flowers pentamerous; petals united into a gamopetalous corolla, pubescent; stamens 10, anther gland present; ovary shortly stipitate and included within or exserted from the corolla. Fruit a follicle, dehiscing along the lower suture, flat compressed, straight, curved or longitudinally twisted, the margins usually straight, rarely irregularly sinuous and only becoming constricted where the seeds fail to develop (M. bahiana and M. warmingii); valves coriaceous, thin or thick. Seeds flat compressed with a coriaceous testa, with a narrow or somewhat wider marginal wing, lacking a pleurogram; embryo with a developed, multifid plumule (unknown in M. colombiana and M. pittieri). Seedlings with pinnate or bipinnate eophylls (unknown in M. bahiana, M. colombiana and M. pittieri).

Basionym. Piptadenia inaequalis

Distribution. Marlimorimia comprises six species with a bicentric distribution in the two main areas of tropical humid forests in South America. Three species occur in eastern Brazil: two are restricted to the Atlantic wet forests (Marlimorimia bahiana and M. warmingii), while M. contorta extends to inland semideciduous forests. The three other species are distributed in northern South America. Marlimorimia psilostachya is widely distributed across Amazonia, sparsely extending to Central America (Costa Rica), and M. colombiana and M. pittieri have restricted ranges in Colombia and Venezuela, respectively.

Etymology. The genus Marlimorimia is named in honour of Dr. Marli Pires Morim, taxonomist at the Rio de Janeiro Botanical Garden, for her outstanding contribution to our knowledge of the diversity and taxonomy of Brazilian mimosoid legumes.

Notes. The new genus Marlimorimia is proposed to accommodate a monophyletic group of species previously classified in Pseudopiptadenia (sensu Lewis and Lima 1991; Luckow 2005), but which could not retain that genus name because its type species is now included in Pityrocarpa. Besides the molecular phylogenetic evidence, morphology also supports recognition of Marlimorimia as distinct from Pityrocarpa. Marlimorimia brings together most of the species formerly placed in Pseudopiptadenia that have multipinnate leaves, small oblong to linear leaflets and fruits with straight (or shallowly sinuous) margins. Marlimorimia bahiana and M. colombiana, however, have leaves with few pinnae and rhomboid leaflets. Species of Marlimorimia have more complex inflorescences than those of Pityrocarpa. While the spikes of Pityrocarpa are solitary in the axils of coevally developing leaves, Marlimorimia species have spikes in fascicles of 2-3, which are arranged in terminal efoliate pseudoracemes or clustered on nodes below mature leaves (Fig. 3). Sometimes, as leaves expand, Marlimorimia synflorescences may resemble those of Pityrocarpa and Parapiptadenia (e.g.
particular specimens of M. contorta such as Hatschbach 50149 [NY]). Nonetheless, flowers of Marlimorimia have pubescent petals united into a gamopetalous corolla (vs. free glabrous petals in the majority of Pityrocarpa species). Two types of fruits are found in Marlimorimia (Fig. 5). Some species have long linear fruits, frequently curved or longitudinally twisted with straight margins (M. colombiana, M. contorta, M. pittieri and M. psilostachya), while M. bahiana and M. warmingii have oblong fruits with shallowly sinuous margins. The valves of the fruits are woody, although usually thin, becoming thicker and harder in M. warmingii. The seeds of Marlimorimia, although superficially similar to those of most species of Pityrocarpa, have embryos with multifid plumules that result in seedlings with pinnate or bipinnate eophylls (Lima 1985; Lewis and Lima 1991).
A Novel Iterative Soft-Decision Decoding Algorithm for RS-SPC Product Codes

This paper presents a generalized construction of RS-SPC product codes. A low-complexity joint-decoding scheme is proposed for these codes, in which a BP-based iterative decoding is performed on the binary expansion of the whole parity-check matrix. Various powerful RS codes can be used as the component codes of RS-SPC product codes, which gives good performance for local decoding (decoding a single component codeword). The proposed BP-based iterative decoding is a global decoding, and it achieves an error-correcting capability comparable to that of codes of large blocklengths. This two-phase decoding scheme preserves the low decoding latency and complexity of the local decoding while achieving high reliability through the global decoding. The complexity of the proposed iterative decoding is discussed, and the simulation results show that the proposed scheme offers a good trade-off between complexity and error performance.

I. INTRODUCTION

Reed-Solomon (RS) codes [1]-[3] are among the most important maximum distance separable codes, and they are widely used in many communication and storage systems. In most existing systems, RS codes are decoded by algebraic hard-decision decoding (HDD) algorithms, such as the Berlekamp-Massey algorithm (denoted BM-HDD) [2]. However, HDD does not utilize soft information, so it usually incurs a significant performance loss compared to a suitable soft-decision decoding (SDD) algorithm. To make use of the soft information, the generalized minimum distance (GMD) decoding [4], Chase decoding [5], and algebraic soft-decision (ASD) decoding [6] algorithms were proposed. These SDD algorithms improve on the traditional HDD method; however, the performance gaps between them and maximum-likelihood decoding (MLD) remain noticeable, particularly for long RS codes.

Iterative decoding algorithms based on belief propagation (BP), e.g., the sum-product algorithm (SPA) and the min-sum algorithm (MSA), are widely used for decoding low-density parity-check (LDPC) codes. However, the parity-check matrices of RS codes are in general not sparse, so directly applying these iterative decoding algorithms to RS codes is difficult. To deal with such high-density parity-check (HDPC) matrices, Jiang and Narayanan proposed a BP algorithm that adapts the parity-check matrix (denoted ABP) [7], in which Gaussian elimination is applied before each iteration; its complexity may therefore be intolerable in practical applications. In [8], a BP algorithm based on stochastic and cyclic shifting (denoted SSID) was proposed to enhance the conventional BP algorithm for RS codes. This algorithm is easy to implement; however, it does not perform well for RS codes over large fields, e.g., GF(2^p) with p > 6.

Although there is no particularly effective SDD scheme for a single RS codeword, the SDD of an RS-based concatenated coding scheme can improve the error performance substantially, even relative to the MLD of a single RS codeword; examples are the turbo product codes (TPCs) [10] with RS component codes and the cascaded RS codes [11], which are jointly encoded through a Galois Fourier transform. However, for the cascaded RS codes in [11], decoding cannot be performed on a single component codeword, because of the interleaving and the Galois Fourier transform.
The decoding of a single component codeword and of the whole codeword are denoted the local decoding (LD) and the global decoding (GD), respectively. This two-phase decoding scheme preserves the low decoding latency of the LD, while the error-correcting capability of the GD is comparable to that of codes of large blocklengths.

In this paper, we present a low-complexity scheme for jointly encoding and decoding RS codes. The encoding is a direct product of RS codes and single-parity-check (SPC) codes, so a code of this type is called an RS-SPC product code. The conventional decoding scheme of product codes is a turbo process between the row and column decoders. We develop a BP-based iterative decoding scheme for RS-SPC product codes based on the binary expansion of the parity-check matrix. An important advantage of this code is that the error performance of the LD generally improves over the LD of TPCs: TPCs usually consist of 1- or 2-error-correcting component codes, while more powerful component codes are used for RS-SPC product codes. Moreover, the presented BP-based iterative decoding scheme achieves an error performance comparable to long-blocklength codes. The iterative decoding can be highly parallelized with a relatively low complexity. The simulation results show that RS-SPC product codes offer a good trade-off between the LD (i.e., complexity) and the GD (i.e., performance).

II. RS-SPC PRODUCT CODE

Consider a narrow-sense t-error-correcting (n, k, δ) RS code over GF(2^p) with n = 2^p − 1 and minimum distance δ = n − k + 1 = 2t + 1. Its parity-check matrix over GF(2^p) is the (n − k) × n matrix

H = [ 1  β        β^2         ···  β^{n−1}
      1  β^2      β^4         ···  β^{2(n−1)}
      ⋮                            ⋮
      1  β^{n−k}  β^{2(n−k)}  ···  β^{(n−k)(n−1)} ],   (1)

where β is a primitive element of GF(2^p). Let H_b denote the binary image of H over GF(2). The density of H_b is about 0.5 if it is directly expanded from the form given by (1); the density can be reduced to about 0.3 through the sparsification method in [9], and this relatively sparse binary parity-check matrix is denoted by H_b.

The two-dimensional product code is a simple combination of two short codes through row and column encoding. Let C_1 and C_2 be an (n, k, δ) RS code and a (k_2 + 1, k_2, 2) binary SPC code for the row and column encoding, respectively. The RS-SPC product codeword P is a rectangular array of the form

P = [ s_{0,0}    s_{0,1}    ···  s_{0,n−1}
      ⋮                          ⋮
      s_{L−1,0}  s_{L−1,1}  ···  s_{L−1,n−1}
      b_0        b_1        ···  b_{n−1} ],   (2)

where s_{i,j} is the j-th code symbol of the i-th RS codeword and b_j is the parity bit of the j-th SPC codeword. In the rectangular array (2), each row is an RS codeword in C_1 and each column is an SPC codeword in C_2. Let the set {β^0, β^1, ..., β^{p−1}} form a basis of GF(2^p); then s_{i,j} can be expressed as s_{i,j} = a^0_{i,j} β^0 + a^1_{i,j} β^1 + ··· + a^{p−1}_{i,j} β^{p−1}. The parity bit b_j of column j can be calculated as

b_j = Σ_{i ∈ [0 : L−1]} Σ_{k ∈ [0 : p−1]} a^k_{i,j}  (mod 2),   (3)

where [a : b] denotes the set of integers in the interval [a, b].

Consider P consisting of L RS codewords. For the array in (2), L is equal to k_2/p, so the rate R_s of the SPC component code is Lp/(Lp + 1). To enhance the error-correcting capability, we mildly modify the structure of P to lower the rate of the SPC component code. A code symbol can be split into several tuples of equal length w. Each column of P then consists of L w-tuples (partial symbols) t_{i,j} and one parity bit, where t_{i,j} is the j-th w-tuple of the binary image of the i-th RS codeword. This modified structure gives an RS-SPC product code with rate wLk/((wL + 1)n), denoted P(n, k, w, L).
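The construction of the product array and its parity bits in (2) and (3) is easy to sanity-check numerically. Below is a minimal NumPy sketch for the w = p case (whole symbols as column tuples); the random symbols merely stand in for actual RS codewords, which the sketch does not generate.

```python
import numpy as np

p, n, L = 4, 15, 3                     # GF(2^4), RS length n = 2^p - 1, L stacked rows
rng = np.random.default_rng(0)
symbols = rng.integers(0, 2**p, size=(L, n))   # placeholders for RS codewords

# binary expansion a_{i,j}^k over the polynomial basis -> shape (L, n, p)
bits = ((symbols[..., None] >> np.arange(p)) & 1).astype(np.uint8)

# SPC parity of column j per (3): modulo-2 sum of all L*p bits above it
b = bits.sum(axis=(0, 2)) % 2          # one parity bit per column, shape (n,)
```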
Let H(n, k, w, L) be the parity-check matrix of P(n, k, w, L) over GF(2). Let Δ(H, L) be an L × L diagonal array with L copies of H on its main diagonal and zeros elsewhere, and let Γ(H, L) be a 1 × L array with L copies of H. Then H(n, k, w, L) can be represented in the form

H(n, k, w, L) = [ Δ(H_b, L)  O
                  Γ(R, L)    I ],   (6)

where I and O are the identity and all-zero matrices of appropriate sizes, and R is a (pn/w) × np matrix of cyclic form.

III. ITERATIVE SOFT-DECISION DECODING

In this section, we present a novel iterative soft-decision decoding scheme that uses the whole binary parity-check matrix given by (6) for RS-SPC product codes; it is quite different from the well-known Chase-Pyndiah iterative decoding algorithm [10] for TPCs. In the Chase-Pyndiah algorithm, a Chase decoder is used for every component code and the soft information is generated from the hard-decision list; the extrinsic information is exchanged in a turbo fashion between the row and column decoders. In the first subsection, we present a criterion for validating decoded codewords and speeding up convergence. In the second subsection, a BP-based iterative decoding scheme for RS-SPC product codes is presented.

A. Solution for the Undetected Errors

Consider an RS code transmitted over a discrete memoryless channel. Let ε be the probability that a transmitted symbol is received in error, with equal probability ε/(q − 1) of changing into each of the other (q − 1) symbols, where q = 2^p. Let P_u(E, λ) denote the probability of undetected error after correcting λ or fewer symbol errors; an expression and an upper bound for P_u(E, λ) are given in [3, Ch. 7]. The P_u(E, t) of a high-rate RS code over a small field GF(2^p) is relatively high, whereas an RS code over a large field usually has a quite low P_u(E, t); for example, the P_u(E, t) of the (255, 223, 33) RS code is upper bounded by 2.6 × 10^−14.

Let y = (y_0, y_1, ..., y_{n_b−1}) and y^H = (y^H_0, y^H_1, ..., y^H_{n_b−1}) be a received vector and its hard-decision vector, respectively, where n_b is the bit-length of the RS code. Let ĉ = (ĉ_0, ĉ_1, ..., ĉ_{n_b−1}) be the decoded sequence from the hard-output decoder. The soft weight W of a decoded sequence is defined as a normalized ML metric of the form

W = Σ_{i=0}^{n_b−1} |y_i| (ĉ_i ⊕ y^H_i) / Σ_{i=0}^{n_b−1} |y_i|,   (9)

where ⊕ denotes addition modulo 2. From (9), we have 0 ≤ W ≤ 1. Let W_θ be a soft weight threshold, 0 ≤ W_θ ≤ 1. If W < W_θ, the decoded sequence has a high probability of being correct. W_θ should be carefully determined; otherwise, the probability of undetected error will be high, or correct codewords will be missed. The threshold W_θ can easily be optimized by simulation. For example, we use a genie-aided decoder to find W_θ for P(31, 15, 1, 31), applying the iterative decoding algorithm explained in the next subsection. The average and maximum soft weights shown in Fig. 1 were obtained by running 10^7 decoding trials at each value of signal-to-noise ratio (SNR). The maximum soft weight is much larger than the average soft weight, so we can first set the threshold W_θ slightly smaller than the maximum soft weight, e.g., W_θ = 0.06 for this product code, and then fine-tune W_θ in simulations of the normal iterative decoding scheme (without genie) to achieve better performance. For practical considerations, a cyclic redundancy check (CRC) is usually used for error detection, in which case the threshold-based criterion may not be necessary.
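As a concrete illustration of this freezing criterion, here is a small NumPy sketch of the soft weight and threshold test, assuming the normalized form reconstructed in (9) and the BPSK mapping x_i = 1 − 2c_i used in the next subsection; the function name is illustrative.

```python
import numpy as np

def soft_weight(y: np.ndarray, c_hat: np.ndarray) -> float:
    # Reliability-weighted fraction of bits where the decoder output
    # disagrees with the channel hard decisions; normalized to [0, 1].
    y_hard = (y < 0).astype(int)            # hard decisions under x = 1 - 2c
    flips = np.bitwise_xor(c_hat, y_hard)   # c_hat_i XOR y_i^H
    return float(np.sum(np.abs(y) * flips) / np.sum(np.abs(y)))

# freeze a decoded component codeword only if W < W_theta
W_theta = 0.06                              # value tuned for P(31, 15, 1, 31)
# accept = soft_weight(y, c_hat) < W_theta
```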
B. BP-Based Iterative Decoding

From the perspective of the LD and GD, the BM-HDD is first performed for each RS component codeword, and the proposed BP-based iterative decoding is then applied if necessary. Assume binary phase-shift keying (BPSK) transmission over an additive white Gaussian noise (AWGN) channel with two-sided power spectral density N_0/2. The binary image c = (c_0, c_1, ..., c_{N_P−1}) of an RS-SPC product codeword is mapped into a BPSK sequence x = (x_0, x_1, ..., x_{N_P−1}), where x_i = 1 − 2c_i and N_P = n_b L + n_b/w is the bit-length of the RS-SPC product code. Let ĉ = (ĉ_0, ĉ_1, ..., ĉ_{N_P−1}) be the estimate of the codeword after decoding. The received vector y = (y_0, y_1, ..., y_{N_P−1}) is given by y = x + w, where w is the noise vector with variance σ_n^2 = N_0/2. The log-likelihood ratio (LLR) of code bit c_i is L(c_i) = 2y_i/σ_n^2. Note that we use the underlined letter L for LLRs to distinguish them from the code parameter L. Let L = (L(c_0), L(c_1), ..., L(c_{N_P−1})) denote the LLR vector. In the κ_1-th iteration, let L^{κ_1} = (L^{κ_1}(c_0), L^{κ_1}(c_1), ..., L^{κ_1}(c_{N_P−1})) be the input LLR vector of the decoder. From the structure given by (6), the LLR vector can be divided into L sub-vectors L_l, 0 ≤ l < L, for the RS codewords, and one sub-vector L_s for the parity bits.

The parity-check matrix of the RS-SPC product code consists of an upper part and a lower part, and the proposed iterative decoding is a two-stage process that exploits their different characteristics. We divide the check nodes into two subsets, M_1 and M_2, corresponding to the upper and lower parts. Let Θ^{−1}(L, µ) be the inverse operation of (12). Let N(i) denote the set of VNs connected to CN i, and N(i)\j the subset of N(i) without VN j. Similarly, let M(j) denote the set of CNs connected to VN j, and M(j)\i the subset of M(j) without CN i. For the CN and VN updates, we follow [7] in applying an a posteriori probability (APP) based update scheme, which reduces the decoding complexity. Let ψ(·) denote one iteration of the MSA generating a sum vector of extrinsic LLRs, denoted L^{κ_1}_{ext,h} = (L^{κ_1}_{ext,h}(c_0), ..., L^{κ_1}_{ext,h}(c_{N_P−1})), for h = 1, 2; the sum of extrinsic LLRs for each bit is calculated using the MSA.

In this paper, we use the BM-HDD as the LD. Since the LD also serves as an initialization for the GD, we briefly present the LD process. Let N_err denote the number of erroneous RS component codewords; N_err is dynamically updated in the LD and GD. Let A_c be the active set indicating the active VNs. First, set N_err = L and A_c = [0 : N_P − 1]. Apply the BM-HDD to each RS component codeword C_{1,l}, for l ∈ [0 : L − 1], to obtain an estimate ĉ_l. If ĉ_l ∈ C_1 and the soft weight W_l for C_{1,l} is less than W_θ, record the result and freeze C_{1,l}. The freeze operation means that an RS component codeword is judged to be correctly decoded; it includes the following steps:

• Record the decoded sequence ĉ_l of C_{1,l}.
• Remove the positions of C_{1,l} from the active set A_c.

If N_err > 0 after the LD, perform the proposed global iterative decoding. The GD can be expressed as the following steps:

GD-1. Set the numbers of inner and outer iterations κ_1 = κ_2 = 0, the maximum numbers of inner and outer iterations N_1 and N_2, and the stage-1 and stage-2 damping factors α_1 and α_2. Since the LLR vector of an RS code has n shifted versions, N_2 can be set to n for best performance. For convenience, we only discuss the two typical values 1 and n for N_2: the iterative decoding scheme with N_2 = 1 is called the low-complexity scheme (LCS), and the scheme with N_2 = n the high-complexity scheme (HCS).
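A compact sketch of the LD phase just described might look as follows. Here bm_hdd stands for a Berlekamp-Massey hard-decision decoder returning a success flag and a codeword estimate, and soft_weight is the helper from the previous subsection; both names are illustrative, not part of the paper.

```python
import numpy as np

def local_decode(llr_blocks, bm_hdd, W_theta):
    """LD phase: BM-HDD each RS component word and freeze the reliable ones.

    llr_blocks -- list of L LLR sub-vectors (NumPy arrays), one per RS word
    bm_hdd     -- callable mapping hard bits to (success, codeword bits)
    """
    frozen, n_err = {}, 0
    for l, llr in enumerate(llr_blocks):
        hard = (llr < 0).astype(int)       # hard decisions from LLR signs
        ok, c_hat = bm_hdd(hard)
        if ok and soft_weight(llr, c_hat) < W_theta:
            frozen[l] = c_hat              # freeze: record and deactivate VNs
        else:
            n_err += 1                     # left for the global decoding
    return frozen, n_err                   # run the GD only if n_err > 0
```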
IV. DECODING COMPLEXITY

In the following, we compare the decoding complexity of RS-SPC product codes and TPCs [10]. We consider TPCs with (n, k, δ) RS component codes, denoted P_R(n, k). We count the numbers of real-number computations, including multiplications, comparisons, and additions. Each CN requires only two real-number multiplications by (14), and each VN requires only one real-number multiplication, which can be ignored. All modulo-2 computations are ignored, and the complexity of the BM-HDD is considered only roughly. The number of real-number comparisons required to update a degree-d_c CN is d_c + ⌈log_2 d_c⌉ − 2, while the number of real-number additions required to update a degree-d_v VN by the APP-based algorithm is d_v [12]. The upper part of H(n, k, w, L) has average row weight ρn_b and average column weight ρm_b, where ρ is the density of H_b. The lower part is a sparse matrix with constant row weight wL + 1 and column weight 1. The numbers of real-number computations required in one iteration for the CN and VN updates follow from these degrees. In each iteration, at most L soft weights are calculated, which requires ε′n_b L additions and L multiplications, where ε′ is the bit-error probability; compared to the CN and VN updates, the number of computations required for the soft weights is much smaller and can be ignored. Let I_max and I_avg be the maximum and average numbers of iterations, respectively, where I_max = N_1 N_2 for the proposed scheme. The maximum number of real-number computations required in the whole decoding process is then approximately 2(ρm_b + 1)n_b L I_max. However, some codewords may be correctly decoded early and frozen, so we can take 2(ρm_b + 1)n_b L I_avg as an upper bound on the average number of real-number computations. The average number of times the BM-HDD is performed for P(n, k, w, L) is upper bounded by L I_avg.

Next, we consider the decoding complexity of the main procedures of the Chase-Pyndiah algorithm. Let n_b be the bit-length of the RS component code in P_R(n, k). Suppose the Chase decoder finds the η least reliable bits and generates 2^η test patterns. Each Chase decoder performs (2n_b − η − 1)η/2 comparisons to generate the 2^η test patterns. Evaluating the minimum Euclidean distance from the received vector requires 2^η n_b additions, 2^η n_b multiplications, and 2^η − 1 comparisons. In the procedure of calculating the extrinsic information, at most (2^η − 1)n_b comparisons are required to find the competing codeword; 2n_b additions and 2n_b multiplications are required if the competing codeword exists, otherwise n_b additions and n_b multiplications are required. Assuming the competing codeword is found with probability 0.5, we can average the numbers of additions and multiplications to 1.5n_b each. Therefore, the number of real-number computations for one component code in one iteration is about 3 × 2^η n_b. Suppose that the decoding is terminated when all the row and column parity checks are satisfied, and let I_avg be the average number of iterations, I_avg ≥ 2. The average number of real-number computations for the Chase-Pyndiah algorithm is then 3 × 2^η n_b n I_avg.
In addition, the average number of times the BM-HDD is performed for TPCs is 2^η n I_avg. Because different component codes are used in P(n, k, w, L) and P_R(n, k), we normalize the complexity per bit. Let N_{1,C} and N_{2,C} (or N_{1,H} and N_{2,H}) be the numbers of real-number computations (or of BM-HDD invocations) per bit for the proposed scheme and the Chase-Pyndiah scheme, respectively. These four normalized complexities are

N_{1,C} = 2(ρm_b + 1)n_b L I_avg / N_P,  N_{1,H} = L I_avg / N_P,
N_{2,C} = 3 × 2^η I_avg,  N_{2,H} = 2^η I_avg / n_b.   (17)

Note that the expressions in (17) are loose upper bounds, and the actual numbers may be much smaller. N_{1,H} and N_{2,H} count only the number of times the BM-HDD is performed; the detailed complexity of the BM-HDD for different component codes deserves further consideration. We only give the detailed complexity of the BM-HDD for RS codes, evaluated by the number of multiplications over GF(2^p); an upper bound on this number may be found in [13]. Parity-check computations also play an important role in both decoding schemes: (n − k)n multiplications over GF(2^p) are required to check one codeword. For P(n, k, w, L), (n − k)nL I_avg multiplications in total, or ((n − k)/p) I_avg multiplications per bit, are required. For P_R(n, k), 2^η(n − k)n^2 I_avg multiplications in total, or (2^η(n − k)/p) I_avg multiplications per bit, are required. The above discussion of the decoding complexity is summarized in Table I.

V. EXAMPLES AND SIMULATION RESULTS

Two examples of RS-SPC product codes are given, and their simulation results and decoding complexity are discussed.

Example 1. In this example, we consider the RS-SPC product code P(255, 239, 4, 32) built from the (255, 239, 17) RS code over GF(2^8). The parameters α_1 = 0.32, α_2 = 0.8, W_θ = 0.0025, and N_1 = 10 are set for the GD. The TPC P_R(63, 61) with 1-error-correcting RS component codes, one of the capacity-approaching TPCs, is also considered; the Chase-Pyndiah algorithm with 16 test patterns and 8 turbo iterations is applied to this TPC. The rates of P(255, 239, 4, 32) and P_R(63, 61) are 0.9300 and 0.9375, respectively. The bit-error rates (BERs) are evaluated in Fig. 2. At a BER of 10^−7, the HCS and LCS for P(255, 239, 4, 32) perform about 0.4 dB and 0.9 dB away from P_R(63, 61), respectively. However, the trade-off between the LD and GD is an advantage of RS-SPC product codes: it is not surprising that the LD of P(255, 239, 4, 32) outperforms the LD of P_R(63, 61) by 1.9 dB at a BER of 10^−6, since the component code of this TPC can only correct 1 symbol error under BM-HDD. Consider the decoding complexity of these two codes. The average numbers I_avg of iterations for decoding P(255, 239, 4, 32) are shown in Fig. 3. Although the HCS performs many iterations at relatively low SNR, it needs only a very small average number of iterations at large SNR; from Fig. 3, the HCS and LCS have almost the same average complexity at large SNR. For example, at an SNR (E_b/N_0) of 5.8 dB, the HCS and LCS have the same I_avg of about 2. For the GD of P(255, 239, 4, 32), the binary parity-check matrix with a density of 0.35 is used. For the TPC, we use the smallest number of iterations, I_avg = 2, to estimate the lowest complexity. We thus obtain the normalized complexities N_{1,C} ≤ 183.2, N_{1,H} ≤ 9.8 × 10^−4, N_{2,C} ≥ 96, and N_{2,H} ≥ 0.085.
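These four figures follow directly from the normalized expressions above, and a few lines of Python reproduce the arithmetic. The formulas below use the reconstruction of (17) given earlier, so treat this as a sanity check rather than the authors' own script.

```python
# Example 1: P(255, 239, 4, 32) vs. the TPC P_R(63, 61), with eta = 4, I_avg = 2
p, n, k, w, L = 8, 255, 239, 4, 32
rho, I_avg, eta = 0.35, 2, 4
m_b, n_b = (n - k) * p, n * p            # 128 and 2040
N_P = n_b * L + n_b // w                 # product-code bit-length

N1_C = 2 * (rho * m_b + 1) * I_avg       # = 183.2 (approximating N_P by n_b * L)
N1_H = L * I_avg / N_P                   # ~= 9.7e-4 BM-HDD runs per bit

nb_tpc = 63 * 6                          # bit-length of the (63, 61) RS code
N2_C = 3 * 2**eta * I_avg                # = 96 real computations per bit
N2_H = 2**eta * I_avg / nb_tpc           # ~= 0.085 BM-HDD runs per bit
print(N1_C, N1_H, N2_C, N2_H)
```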
Recall that the expressions in (17) are loose upper bounds; in the simulation we find the accurate values N_{1,C} = 71 and N_{1,H} = 7.3 × 10^−4 including the BM-HDD in the LD (N_{1,H} = 2.5 × 10^−4 excluding it) at E_b/N_0 = 5.8 dB. The complexity of P(255, 239, 4, 32) is thus lower than that of the TPC P_R(63, 61) at large SNR, in terms of both real-number computations and hard-decision decoding. We should note that the complexity of the BM-HDD is counted here only by the number of invocations, although the BM-HDD for the (63, 61) RS code is much easier than decoding the (255, 239) RS code; a more accurate comparison may be derived from the BM-HDD bound of [13], and it also shows that the BM-HDD of P(255, 239, 4, 32) has lower complexity than that of P_R(63, 61).

Example 2. In this example, we consider the RS-SPC product code P(63, 51, 1, 15) built from the (63, 51, 13) RS code over GF(2^6). P(63, 51, 1, 15) has a blocklength of 6048 bits and rate 0.7589. The parameters α_1 = 0.32, α_2 = 0.8, W_θ = 0.02, and N_1 = 20 are set for the GD. We also consider the TPC P_R(31, 27) built from the (31, 27, 5) RS code over GF(2^5), which has a comparable blocklength of 4805 bits and rate 0.7586; the Chase-Pyndiah algorithm with 16 test patterns and 8 turbo iterations is applied to this TPC. The BERs are evaluated in Fig. 4. The LD of P(63, 51, 1, 15) outperforms the LD of P_R(31, 27) by 1.1 dB at a BER of 10^−6, because different component RS codes are chosen. The HCS achieves the same performance as the Chase-Pyndiah algorithm for P_R(31, 27), while the LCS performs about 0.7 dB away from P_R(31, 27). The HCS for this RS-SPC product code also has low average complexity at large SNR; e.g., it takes an average of 2 iterations to converge at E_b/N_0 = 4.5 dB.

VI. CONCLUSION

In this paper, we present a BP-based iterative decoding scheme for RS-SPC product codes. These codes can easily be constructed from various RS codes, whereas TPCs have difficulty employing many widely used RS codes, such as the length-255 RS codes, as component codes. The only problem for RS-SPC product codes is to control the probability of undetected errors in the hard-decision decoding. Better component codes can be used for RS-SPC product codes, which gives good LD performance. The two-phase decoding scheme preserves the low decoding latency of the LD, while the error-correcting capability of the GD is comparable to that of codes of large blocklengths. This flexible structure may meet the high-reliability, low-latency requirements of future communication systems.
Endoscopic Evaluation of Inflammatory Bowel Disease With High-Grade Dysplasia Should Not Be Delayed During the COVID-19 Pandemic: A Case Report

A 63-year-old woman with long-standing ulcerative colitis was found by routine, high-definition white light surveillance colonoscopy to have high-grade dysplasia (HGD) on a random biopsy of flat, erythematous mucosa of the rectosigmoid colon. Several previous surveillance colonoscopies had consistently shown mild-moderate pancolitis and pseudopolyposis without dysplasia. The patient's bowel complaints were moderately controlled by oral mesalamine, with multiple previous refusals to escalate therapy to achieve mucosal healing. After a second gastrointestinal pathologist review confirming HGD, the case was discussed at our institution's multidisciplinary inflammatory bowel disease (IBD) conference, after which the patient was offered colectomy. The patient, however, refused and was referred to the advanced endoscopy service for chromoendoscopy and possible endoscopic resection. Shortly after the referral, the novel coronavirus disease 2019 (COVID-19) pandemic arose, and the patient chose to postpone her colonoscopy by 3 months because of the fear of contracting the novel coronavirus. When it was finally performed, a new 3 cm sessile lesion (Paris class IIb) in the rectosigmoid colon was identified (Figure 1). It was removed via endoscopic mucosal resection with snare tip coagulation of the resection border. Pathology revealed moderately differentiated adenocarcinoma. The patient was referred to colorectal surgery for colectomy.

Dysplasia identified on random (nontargeted) biopsies of the colon mucosa without a visible lesion is defined as invisible dysplasia (1). The risk of undetected, synchronous colorectal cancer (CRC) in a patient with invisible HGD has been estimated to be as high as 42% (2). This high CRC risk prompted early guidelines and expert opinion to recommend that invisible HGD proceed immediately to colectomy. The 2015 Surveillance Colorectal Endoscopic Neoplasia Detection and Management in Inflammatory Bowel Disease Patients: International Consensus Recommendations (SCENIC), however, suggests that patients with IBD and invisible dysplasia may be referred to an expert endoscopist for image-enhanced colonoscopy using chromoendoscopy to better guide decision-making for treatment or surveillance plans (1). If a visible dysplastic lesion is found by chromoendoscopy at the site of previously invisible dysplasia, this may allow for endoscopic resection and further surveillance rather than colectomy.

The COVID-19 pandemic has halted the practice of endoscopy throughout the world. The concern for infection transmission during endoscopy, the need to conserve personal protective equipment, and the availability of endoscopy unit infrastructure had to be weighed against the ongoing need for endoscopic evaluation and therapy. The American gastroenterology professional associations released a joint statement that briefly defined elective vs urgent/emergent procedures and provided their recommendations for which types of procedures should be delayed (3). The statement recommends that evaluation of IBD with dysplasia should not be delayed. Despite these recommendations, many procedures are being delayed because of resources being diverted to the care of patients with COVID-19 or because of patient preference/fear, as occurred in this case. This case demonstrates that patients with IBD and HGD referred for possible endoscopic resection should not be delayed during the COVID-19 pandemic.
Compared with the dysplasia-to-cancer progression in sporadic CRC, HGD in IBD progresses much more rapidly (4). Proper patient education is required to inform patients of the risks of delaying endoscopic evaluation for IBD with HGD during the COVID-19 pandemic.

CONFLICTS OF INTEREST

Guarantor of the article: Keith S. Sultan, MD. Specific author contributions: All authors contributed to the writing and editing of this case report. Potential competing interests: None to report. Financial support: None to report. IRB approval: This falls under our retrospective IRB for endoscopic research. Consent statement: The patient provided informed consent to discuss her case in this correspondence.
Non invasive ventilation use for acute respiratory failure in general medical wards: a regional Italian survey

Non-invasive ventilation (NIV) has spread rapidly in recent years for the treatment of patients with acute respiratory failure (ARF), even outside intensive care units. Its use in general medical wards in Italy and Europe is still far from complete, and there are clear gaps in organization, training, patient selection and monitoring. When these gaps are filled, NIV has proven effective in general medical wards as well, especially those with a critical care area. This publication reports the data collected by an Italian regional survey on the use of NIV in internal medicine, highlighting positive and negative aspects.

NIV has also proven effective in palliative care, for symptom control. The question of the setting in which to deliver NIV has been debated in the literature for years. The etiology and severity of ARF certainly play an important role in the choice, but today the changed epidemiology of patients, their age and comorbidities, is also decisive. [7][8][9][10] Until recently there were no international consensus documents defining the minimum criteria necessary to activate an NIV service within a hospital; today, however, the UK guidelines 2,4 define this area as level 2 of intensive care: a facility with at least one dedicated nurse for every 2 patients on NIV (during daylight hours), in which patients with organ failure are managed with appropriate, non-invasive methods of support, within a valid local organization. These characteristics are well suited to the critical/semi-intensive areas created in recent years within some units of internal medicine in Italy according to organizational models based on intensity of care and care complexity; but even where such areas are not well structured, the literature supports the use of NIV in internal medicine departments provided organizational, logistical and training standards are respected. 5 Despite this, the current use of NIV during ARF in medical units in Italy is still very heterogeneous. 11 There are places where the method is well known and used within a good organization, and others where no ventilator is present and the management of these patients is always referred to other specialists; between these 2 extremes lie intermediate situations that differ greatly from one another despite being geographically close, sometimes even within the same region. Previous surveys [12][13][14][15][16] showed that the main limitations on the use of NIV in internal medicine relate to gaps in organization, training and resources. [17][18][19][20][21][22][23] FADOI (Federation of Associations of Hospital Internists) is a scientific society of Italian hospital internists, very active in training and research, organized into autonomous regional boards and a central national organization. In 2018, the Emilia-Romagna section decided to draw up a questionnaire on the use of NIV in ARF to be submitted to the internal medicine units of the region, to collect information on whether and how they managed NIV. The questionnaire, which included 23 multiple-choice questions (Appendix 1) and was easy to understand and quick to fill in (a text file with pre-filled form fields was used), was sent by e-mail to the directors of the internal medicine units of the Emilia-Romagna region.
Eighty-one units were contacted, and 33 responded by returning the completed questionnaire. The data were transferred to an electronic spreadsheet in which statistical processing was carried out. The units surveyed did not have a specifically pneumological vocation; in only 2 of 33 cases was there a pulmonologist on staff.

Results

The 33 units that responded to the survey were located in 30 different hospitals. Table 1 shows the percentage of units within hospitals of different sizes, based on the number of beds. Most of the units that responded to the questionnaire were therefore located in small hospitals. Table 2 shows the units that joined the survey according to the province they belong to. The estimated percentage of patients treated with NIV for ARF in one year was 7.36% of admissions, the average global value across all units using the method, with a range of 1 to 20%. Of the 33 centres that responded to the survey, only 1 centre stated that it did not use any NIV method: clearly this figure, in favour of centres using NIV, could be overestimated by the fact that these centres, already familiar with the method, were more motivated to respond. With regard to ventilation techniques and technologies: i) CPAP was used by 87% of the centres, and 92% owned a CPAP device in the unit, usually a simple Venturi flow generator or a disposable system; ii) double-pressure methods (bilevel or pressure support ventilation) were used in 83% of the centres. Fifty-one percent of the units said they had at least one ventilator (range 1-4), and 32% said they would borrow one from other units if necessary. In 88% of cases the ventilators were simple, home-derived machines or NIV-specific ventilators; no centre used complex intensive care ventilators. Table 3 shows the interfaces used by the different centres. The main forms of acute or acute-on-chronic respiratory failure treated with NIV in the different centres are shown in Table 4. COPD exacerbation with respiratory acidosis and acute cardiogenic pulmonary edema were treated in all centres; these are the forms of ARF for which the effectiveness of NIV, even outside intensive care, is best established. In terms of local organization, only 35% of units had a well-defined and structured critical/semi-intensive area within the ward where they could manage patients on NIV. In 65% of cases, therefore, patients were ventilated in a traditional general medical ward. A diagnostic and therapeutic pathway for the management of ARF was present in only 35% of the centres, and in these cases the internist was not involved in the design of the pathway, which instead involved other specialists (emergency physicians, intensivists, pulmonologists). In the medical staff of the units that implemented NIV, there was an "expert" internal physician in only 62% of cases. The prescription of ventilator treatment and its monitoring were carried out independently by the internists of the ward only in a minority of cases: more often the treatment was co-managed with other specialists (Table 5). Protocols within the medical unit or hospital for the management of patients with ARF on NIV were present in only 46% of cases; these protocols were complete in their essential parts, as defined by the guidelines, in only 20% of cases. Table 6 shows the single components of the protocols in relation to the percentage of centres that covered them.
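As a simple illustration of the survey arithmetic reported above (response rate and pooled NIV usage), the following minimal Python sketch reproduces the calculation; the per-unit percentages used here are hypothetical placeholders, not the actual survey data.

```python
# Minimal sketch of the survey aggregation described above.
# The per-unit values below are hypothetical placeholders, not the real data.

contacted = 81
responded = 33

response_rate = responded / contacted
print(f"Response rate: {response_rate:.1%}")  # ~40.7%

# Hypothetical per-unit percentages of admissions treated with NIV in one year
niv_pct_per_unit = [1.0, 3.5, 7.0, 12.0, 20.0]

mean_pct = sum(niv_pct_per_unit) / len(niv_pct_per_unit)
print(f"Average NIV usage: {mean_pct:.2f}% "
      f"(range {min(niv_pct_per_unit)}-{max(niv_pct_per_unit)}%)")
```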
With regard to patient monitoring during ventilator treatment, this was complete, according to the guidelines, in only 13% of cases; 40% of medical units reported the use of fewer than 4 parameters. Table 7 shows the different monitoring parameters indicated as essential by the literature, in relation to the percentages of the centres that used them. A NIV team involves all the specialists of a single hospital dedicated to the management of the patient with ARF on NIV, who must necessarily speak the same language and be trained in a homogeneous way, in order to give continuity to the treatment of the patient. Depending on the size of the hospital, it may involve emergency physicians, intensivists, pulmonologists, internists, cardiologists, and others. Its function, essential for the effectiveness of treatment, has been defined by the guidelines for years. In our survey, a NIV team was present in only 21% of the centres practising NIV, and no organization had defined network models, such as hub-and-spoke, for the rapid centralization of the most critical patients or, conversely, for referring stabilized patients back to the hospital of territorial competence. In most cases there was no therapeutic continuity even with regard to the technologies used (masks, circuits, different ventilators). The most common complications related to NIV treatment and their overall prevalence in our survey are shown in Table 8; 40% of medical units reported more than 4 frequent complications. As can be seen, this prevalence is higher than the data reported in the literature in other settings, which can be interpreted as reflecting a lack of local organization and staff training. Indeed, staff training has always been regarded in the literature as one of the secrets of NIV's success: it is essential for successful treatment, it is also important in terms of motivation, and the involvement of nursing staff is essential. In detail: i) training at the beginning of the centre's NIV experience: 15% of the centres did not complete initial training, 60% carried it out within their organization, and only 25% also attended external events; ii) periodic retraining on at least an annual basis was reported by only 36% of units; iii) the involvement of nursing staff was considered optimal by 75% of the centres surveyed; iv) the perception of the effectiveness of NIV remains high: 100% of the physicians who answered the questionnaire expressed this opinion.

Discussion

The data collected in this Italian regional survey are essentially in line with what is reported in the literature in similar surveys conducted in recent years both in Italy and abroad, [11-23] with the particularity that this study, unlike others, was conducted exclusively in medical units without a specific pneumological orientation: NIV is spreading rapidly in internal medicine, but still with a non-homogeneous distribution even within a limited geographical area. This is often due to the lack of structured pathways for the patient with ARF within healthcare organizations, and to the need to improve and refine patient monitoring during NIV, which must not be particularly aggressive or complex, but should respond to the philosophy of non-invasiveness and simplicity while at the same time being precise and timely.
Standalone bedside multi-parameter monitoring systems, with transmission of data and alarms to a control unit and to portable devices (tablets), are well suited to the structure of a general medical ward without necessarily forcing the design of a critical care area. In the perspective of limiting invasiveness as much as possible, the assessment of blood gases can also be carried out on capillary samples, and CO2 can be monitored transcutaneously (TcPCO2). 2 It is also important to facilitate the adaptation of the patient to treatment and synchrony with the ventilator, through the choice of a correct interface, proper nursing, the adoption of protocols and sedation techniques, and a correct setting of ventilation parameters, minimizing complications and side effects. Finally, another aspect to be refined among the operators who devote themselves to NIV in internal medicine is specific training, which, as evidenced by the data of this survey, is often lacking. Staff must be trained not only at the beginning of their experience with NIV but also periodically, with retraining sessions that cover practical aspects. Healthcare organizations should be encouraged to organize internal training and clinical audits on these issues, but they should also make use of high-quality external training events (scientific societies or other organizations with documented experience in the field). The recent and well-established turnover of medical and nursing staff in internal medicine demands further attention to continuous and periodic training. In conclusion, we believe some final considerations regarding the use of NIV in internal medicine are necessary, which emerge spontaneously from the data of this survey. The changed epidemiology of medical patients today, who are often elderly, complex, fragile and affected by multiple comorbidities, together with the lack of beds in intensive care units, has meant that internal medicine has, as needed, approached NIV: a large proportion of patients with ARF are managed in general medical wards, and in any case outside intensive care, because they would not be eligible for intubation and invasive ventilation (sometimes also by their express will), and this trend is likely to increase in the coming years, especially for those patients with ARF potentially responsive to NIV (e.g. COPD exacerbation with respiratory acidosis, acute cardiogenic pulmonary edema). While the large randomized controlled trials on the effectiveness of NIV in the different causes of ARF 29,30 have always enrolled relatively young patients without relevant comorbidities, trying to demonstrate important endpoints (such as a reduction in the need for intubation and in mortality), observational data from the use of NIV in the real world 4,31-33 tell us that today we often treat completely different patients (who, by their characteristics and severity, would have been excluded from the large trials) and with different goals and results: we must often be content to use NIV to reduce symptoms (breathlessness) in the patient with ARF, to correct gas exchange, and to overcome the acute event, well aware that the survival of the patient often depends on age and on other chronic clinical conditions with an unfavorable prognosis. Here too the role of internal medicine is unavoidable: palliative treatment of patients with end-stage diseases, not only cancer, is now an important part of the daily work of the internist, and the literature tells us that NIV can play an important role. 3
In the choice of respiratory support, non-invasive versus invasive, for the patient with ARF, what is crucial today is not only the severity and etiology of the ARF (probability of a favorable response to NIV) but also the characteristics of the patient (age, comorbidity, will) and the local organization and resources. For an effective NIV treatment, how it is done matters more than where. It is desirable to spread these methods further among internal medicine units in Italy, trying to make the different local realities more homogeneous.

Table 1. Percentage of internal medicine units that joined the study, based on the size (number of bed places, B.P.) of the hospital in which they were located.
Table 5. Prescriptive and NIV treatment monitoring responsibilities in the different centres involved in the survey, in relation to the different specialists involved.
Table 6. Aspects covered in NIV management protocols and their prevalence in the different medical units.
Arterial Stiffness in Patients With Renal Transplantation: Associations With Co-morbid Conditions, Evolution, and Prognostic Importance for Cardiovascular and Renal Outcomes

Patients with chronic kidney disease (CKD), particularly those with end-stage renal disease (ESRD), are at increased risk of cardiovascular events and mortality. The spectrum of arterial remodeling in CKD and ESRD includes atheromatosis of middle-sized conduit arteries and, most importantly, the process of arteriosclerosis, characterized by increased arterial stiffness of the aorta and the large arteries. Longitudinal studies showed that arterial stiffness and abnormal wave reflections are independent cardiovascular risk factors in several populations, including patients with CKD and ESRD. Kidney transplantation is the treatment of choice for patients with ESRD, associated with improved survival and better quality of life in relation to hemodialysis or peritoneal dialysis. However, cardiovascular mortality in transplanted patients remains much higher than that in the general population, a finding that is at least partly attributed to adverse lesions in the vascular tree of these patients, generated during the progression of CKD, which do not fully reverse after renal transplantation. This article attempts to provide an overview of the field of arterial stiffness in renal transplantation, discussing in detail available studies on the degree and the associations of arterial stiffness with other co-morbidities in renal transplant recipients, the prognostic significance of arterial stiffness for cardiovascular events, renal events and mortality in these individuals, as well as studies examining the changes in arterial stiffness following renal transplantation.

INTRODUCTION

Patients with chronic kidney disease (CKD), and particularly patients with end-stage renal disease (ESRD), are individuals with an early and marked increase in arterial stiffness, characterized by alterations in the viscoelastic properties of large arteries (1)(2)(3). Although the mechanisms for the development of increased arterial stiffness in CKD are complex and not yet fully clarified, data from experimental and clinical studies have shown that both classical and non-classical cardiovascular risk factors, i.e., factors related to CKD, play an important role in arterial remodeling. Classical risk factors include age, hypertension, diabetes mellitus, dyslipidemia, obesity, smoking, and others. Non-classical risk factors relate to a number of alterations relevant to CKD progression and the uremic milieu, such as vascular calcification generated by the disturbed metabolism of calcium and phosphate, excess stimulation of the renin-angiotensin-aldosterone system (RAAS), endothelial dysfunction and the chronic inflammation present in advanced CKD, all of which can play an important role in adverse arterial remodeling (4)(5)(6). Several epidemiological studies have shown that patients with reduced renal function are at increased risk for cardiovascular events and all-cause mortality (7). The association between CKD and cardiovascular events is present even in patients with a mild decrease in renal function which has not yet caused a noticeable increase in serum creatinine (8). As described above, traditional cardiovascular risk factors are highly prevalent in patients with CKD and greatly contribute to the risk of developing cardiovascular disease.
However, even when such factors are taken into account for risk prediction, they fail to accurately predict survival, and residual risk remains, possibly associated with specific alterations taking place in CKD. Inclusion of alterations such as arterial stiffness may help better assess cardiovascular risk in these patients (9). Prospective studies in different populations, including several in ESRD patients, showed that parameters reflecting arterial stiffness, such as pulse wave velocity (PWV), or the adverse morphology of the pulse wave, such as increased augmentation pressure and augmentation index, are strong and independent indicators of cardiovascular and total mortality (2, 10-12). Meta-analyses of such studies, including large numbers of patients, had similar results (13)(14)(15). At the same time, several studies have shown that blood pressure (BP) at the central aorta is generally more closely associated with the incidence of cardiovascular events and total mortality in several populations, including patients with CKD and ESRD, compared to peripheral BP recorded at the level of the brachial artery (11, 14-16), indicating that central BP is a better marker of cardiovascular risk. Kidney transplantation is the treatment of choice in patients with ESRD, as it is associated with at least 2-fold longer survival in relation to hemodialysis or peritoneal dialysis and significant benefits in the quality of life of patients. However, the risk of cardiovascular death in transplanted patients remains significantly higher than that in the general population (17,18). It was suggested that part of the high cardiovascular risk of these patients is attributed to irreversible lesions in the vascular tree, which are created during the period prior to renal transplantation and which do not fully reverse after it. Herein we discuss in detail available studies on arterial stiffness in patients with renal transplantation that examined the levels of arterial stiffness and its associations with co-morbidities and graft function, the prognostic significance of arterial stiffness for cardiovascular events, renal events, and mortality in this population, as well as the natural course of arterial stiffness following renal transplantation, aiming to provide an overview of this important field.

ARTERIAL STIFFNESS AND CENTRAL (AORTIC) BP: PHYSIOLOGY, PATHOPHYSIOLOGY, AND ASSESSMENT

Arterial stiffness is a term describing the loss of flexibility of elastic arteries, i.e., the aorta and the other major arteries. Elastic arteries (or conducting arteries) receive blood directly from the heart and are the largest arteries of the body, those closest to the heart. The walls of these arteries have abundant elastic fibers which, apart from solidity, offer the vessels the ability to dilate; thus the aorta and major arteries can, through dilatation, practically "store" part of the blood volume during cardiac systole and forward it to the periphery during diastole. This cushioning function is a fundamental part of the circulation, as it helps to avoid major changes in BP and keeps the flow of blood as smooth as possible in the time between two heart beats (5, 19-21). Arterial stiffness is the result of structural and functional changes in the arterial tree and increases progressively with age, regardless of BP levels (22). Thus, arterial stiffness is a complex result of the effect of aging and that of classic and non-classic cardiovascular risk factors on the arterial tree.
In cases of increased arterial stiffness, the normal functioning of the aorta and large vessels is disrupted, resulting in major changes in the circulatory system (Figure 1). Normally, the initial arterial wave is produced by the left ventricle during contraction and travels to the periphery through a low-resistance route, which keeps the mean blood pressure almost unchanged. When the pulse wave reaches the periphery, it is reflected and returns to the aorta. The points at which the reflection of the pulse wave occurs are the arterial branching points and the arterioles. In situations of increased arterial stiffness, the increased velocity of the pulse wave leads to a faster return of the reflected arterial waves. This results in an earlier return of the wave to the aorta, i.e., instead of returning during the phase of dilation (diastole), as is normally the case, it returns during the contraction phase (systole) and is therefore added to the forward wave, increasing the systolic BP (SBP) by an amount called the "augmentation pressure." Furthermore, the absence of the returning wave during diastole results in a reduction of the diastolic BP. Thus, the increase in arterial stiffness is associated with changes in BP, such as increased SBP, reduced diastolic blood pressure (DBP) and therefore increased pulse pressure (PP) (19, 22-25). This increase in arterial stiffness is considered to be the main mechanism for the development of isolated systolic hypertension, left ventricular hypertrophy, and heart failure with decreased cardiac output (26). Typically, increased arterial stiffness is an important factor in the non-response of SBP to antihypertensive therapy, which is a main characteristic of patients with resistant hypertension (27,28). Furthermore, arterial stiffness has been shown to be a strong and independent cardiovascular risk factor in the general population, but also in patients with diabetes mellitus, hypertension, dyslipidemia, coronary artery disease, heart failure and, typically, CKD and ESRD (2,4,10,11,15,29,30). The velocity of transmission of the pulse wave along the arterial wall increases as we move from central to peripheral parts of the arterial tree, due to the change in the properties of the arterial wall (progressive change from "elastic type" to "muscular type" arteries). Due to the different distances from the reflection points, the morphology of the final pulse wave (i.e., the synthesis of the forward and reflected waves) at each point of the arterial tree is different from the others. Thus, the maximum SBP, and therefore the PP, are different along the arterial tree.

FIGURE 1 | Pressure waveforms and characteristics in patients with high and low arterial stiffness. In increased arterial stiffness, the increased velocity of the pulse wave results in an earlier return of the wave to the aorta, i.e., during systole instead of diastole. Thus, it is added to the forward wave and increases SBP (augmentation pressure), while DBP is decreased. CKD, chronic kidney disease; Tr, arrival time of reflected waves at the central aorta from the onset of left ventricular ejection (T0) to inflection point A; AP = P2 − P1, the augmentation of aortic systolic pressure induced by the return of the reflected wave, where P1 is the pressure at the first inflection point A and P2 is the pressure at the second inflection point B; the augmentation index (AIx, %) is defined by the formula AIx = 100 × AP/PP, where PP is the aortic pulse pressure (systolic minus diastolic pressure); Ts, period from start to end of systole (ejection duration).
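The augmentation quantities defined in the Figure 1 legend translate directly into a few lines of code; the sketch below is a minimal illustration in Python (the function and variable names are ours), assuming that P1, P2, SBP and DBP have already been extracted from a calibrated aortic waveform.

```python
def augmentation_index(p1_mmHg: float, p2_mmHg: float,
                       sbp_mmHg: float, dbp_mmHg: float) -> float:
    """AIx (%) = 100 * AP / PP, with AP = P2 - P1 and PP = SBP - DBP (Figure 1)."""
    ap = p2_mmHg - p1_mmHg    # augmentation pressure
    pp = sbp_mmHg - dbp_mmHg  # aortic pulse pressure
    return 100.0 * ap / pp

# Illustrative values (not patient data): P1 = 120, P2 = 130, SBP = 130, DBP = 80 mmHg
print(augmentation_index(120, 130, 130, 80))  # -> 20.0 (%)
```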
In normal conditions, the SBP at the level of the brachial artery is higher than at the level of the central arteries, while the diastolic and mean BP differ much less between these points. Especially in healthy young people, central (aortic) systolic BP can be lower than peripheral SBP by up to 30 mmHg or more (31). This phenomenon of SBP and PP increase when moving from central to peripheral arteries is defined as "pulse pressure amplification." This difference between peripheral and central BP is not always stable. The difference between central and peripheral SBP or PP depends on many physiological factors (e.g., heart rate, geometry and mechanical properties of the arterial tree, sex), but also on pathological factors (e.g., metabolic, inflammatory), and on the use of drugs (32). Typically, with increasing age and the related increase of arterial stiffness in the aorta and the other elastic arteries, the difference between peripheral and central SBP and PP decreases significantly. The heart, the kidneys and the large vessels that feed the brain are anatomically closer to the aorta and more strongly influenced by the effects of central rather than brachial BP. Therefore, it is reasonable to assume that central BP relates more closely to target-organ damage and that it has an important predictive value as far as cardiovascular mortality is concerned. A plethora of data suggests that central BP is better correlated with target-organ damage, cardiovascular risk and events, and mortality than peripheral BP (16,30,33). Indeed, it appears that central BP is more closely related to the thickening of the tunica intima and tunica media of the carotids, as well as to hypertrophy of the left ventricle of the heart, compared to brachial BP (33)(34)(35). In addition, the regression of left ventricular hypertrophy and of carotid intima-media thickness was associated with the change of central, and not brachial, BP (36,37). Moreover, in subjects without established cardiovascular disease, central BP was superior to brachial BP in the prognosis of future cardiovascular events (33). In another study, including patients with ESRD, only central BP and the reduction of augmentation pressure were independent predictive factors of total (and cardiovascular) mortality (30). A meta-analysis that explored the predictive value of central pressures for the incidence of cardiovascular events and mortality highlighted the independent and stronger predictive value of central BP over peripheral BP (15). There are currently several methods available to determine arterial stiffness and central pressures in a non-invasive way. In clinical practice, pulse wave velocity (PWV) along the aorta is the main parameter used to determine arterial stiffness. PWV is defined as the speed of transmission of the pulse wave along the arterial wall and is calculated as the ratio of the distance between two points of the arterial tree to the time of transmission of the pulse wave between these two points (19), i.e.,

PWV = [distance between two points of the arterial tree (in meters)] / [transit time of the pulse wave between these points (in seconds)].

With the modern ways of determining PWV, the speed at which the pulse wave travels between two superficial points of the arterial tree can be calculated. Tonometric techniques (with devices such as the SphygmoCor), ultrasonographic techniques, plethysmography, and indirect identification techniques (with devices such as the Mobil-O-Graph) are used to determine PWV (11, 38-40).
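The PWV ratio itself is equally simple to evaluate; the following minimal Python sketch (the names are ours and the numbers are illustrative rather than patient data) computes PWV from a measured path length and transit time.

```python
def pulse_wave_velocity(distance_m: float, transit_time_s: float) -> float:
    """PWV (m/s) = distance between two arterial sites (m) / pulse transit time (s)."""
    if transit_time_s <= 0:
        raise ValueError("Transit time must be positive")
    return distance_m / transit_time_s

# Illustrative example: a carotid-femoral path of 0.60 m and a transit time of
# 60 ms give a PWV of 10 m/s, in the range reported for the cohorts reviewed here.
print(pulse_wave_velocity(0.60, 0.060))  # -> 10.0
```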
For example, with the tonometric method (which is the one used in most studies in the current field), the PWV over different segments of the arterial tree (most commonly carotid-femoral, carotid-radial or radial-femoral PWV) can be measured with simultaneous recording with two probes, or with two recordings referenced to a concurrently recorded ECG, with the pulse wave transit time between the subsequent recording sites calculated with special software (11,39,40). With the same technique, by measuring at the level of a superficially accessible artery, the pulse wave in the aorta can be determined and the augmentation pressure, augmentation index (AIx) and other parameters can be calculated. Initially, the waveform of the pulse wave is determined in a superficial artery (e.g., radial) and then, with the help of a mathematical function (generalized transfer function), the waveform in the aorta is estimated (40,41). The waveform of the aortic pulse wave is analyzed in order to calculate the augmentation pressure, the AIx and also the aortic (central) systolic and diastolic blood pressure, the duration of the ejection phase of the left ventricle and the time at which the reflected wave appears. In a similar manner, oscillometric devices record BP at the diastolic phase for ∼10 s, build the brachial pulse waveforms, and then generate the aortic pulse waveform with a generalized transfer function (40,42,43).

Cross-Sectional Studies Assessing the Levels of PWV and Its Associations With Co-existing Risk Factors and Co-morbidities

In previous years several cross-sectional studies aimed to assess the degree of arterial stiffness and explore its association with cardiovascular events and renal graft outcome in renal transplant recipients (Table 1). Bahous et al. conducted a study (44) in which aortic PWV was measured non-invasively in 101 living kidney donors and their 101 corresponding recipients and was compared to healthy volunteers (divided into 2 groups: one recipient-related through familial links and the other non-recipient-related). Aortic PWV was significantly higher in donors and recipients than in healthy volunteers, even after adjustment for age, gender, and MAP (9.5 ± 2.5 m/s in donors vs. 12.0 ± 2.0 m/s in recipients vs. 8.5 ± 1.5 m/s in non-recipient-related healthy volunteers vs. 8.9 ± 1.5 m/s in recipient-related healthy volunteers, with all comparisons between groups being statistically significant, p ≤ 0.01). The factors related to donor aortic PWV, evaluated at the end of follow-up, were donor age, MAP, plasma glucose, smoking, and time since nephrectomy. The factors related to recipient PWV were age, MAP, and smoking habit (as in donors), but also graft rejection. When examining recipients with chronic allograft nephropathy, plasma creatinine doubling was associated with 2 factors after adjusting for age: acute rejection (p = 0.004) and donor PWV (p = 0.03) (44). Several years after the above observations, Kolonko et al. (45) reported a cross-sectional study which included 142 stable renal transplant recipients, at an average of 8.4 ± 1.8 years after transplantation, in order to assess different markers of vascular injury (including PWV and IMT) and endothelial dysfunction, and to explore their association with traditional and novel risk factors. A high prevalence of traditional cardiovascular risk factors was noted in the population studied. Left ventricular hypertrophy was present in 50% of the patients and atherosclerotic plaques were found in 31%.
Mean IMT was 0.62 ± 0.13 mm and PWV 12.7 ± 4.4 m/s. Among the traditional risk factors, the only ones that were related to increased IMT and PWV were diabetes (IMT 0.67 ± 0.11 mm, PWV 14.5 ± 5.6 m/s, p < 0.01), LVH (IMT 0.67 ± 0.14 mm, PWV 13.5 ± 4.8 m/s, p < 0.001) and CVD (IMT 0.73 ± 0.13 mm, PWV 14.7 ± 5.6 m/s, p < 0.001). In multivariate regression analysis, PWV was associated with age (β 0.28, 95% CI: 0.125 to 0.435, p < 0.001) and the presence of pre-transplantation diabetes (β 0.242, 95% CI: 0.077 to 0.407, p < 0.01) (45). Another recently published and interesting cross-sectional study by the same group (46) tried to explore the levels of arterial stiffness and endothelial dysfunction in association with the effectiveness of antihypertensive treatment in renal transplant recipients. The study included 145 renal transplant recipients, on average 7.6 ± 2.7 years after transplantation, and measurements of PWV with the SphygmoCor device, flow-mediated dilation (FMD) and nitroglycerin-mediated dilation (NMD), along with 24-h ambulatory BP monitoring, were performed. Overall, there were only 29 patients (20%) with well-controlled BP and 33 (23%) with borderline BP control. Eighty-three patients (57%) failed to achieve the target blood pressure despite antihypertensive treatment. The study revealed a significantly higher PWV (median 9.6, interquartile range 3.9, vs. 8.0, interquartile range 3.3 m/s, p = 0.002) but borderline lower FMD (8.4% ± 5% vs. 9.9% ± 5.7%, p = 0.09) in patients that did not reach the therapeutic BP goal, as compared to those with good or borderline BP control. Further analysis of patients in subgroups based on the number of antihypertensive drugs revealed a significant trend for increased LVH prevalence and higher PWV values with an increasing number of antihypertensive drugs (8.7 ± 2.9 m/s in the untreated group vs. 8.9 ± 2.0 m/s in patients treated with 1 drug vs. 9.5 ± 3.0 m/s in patients treated with 2 drugs vs. 9.4 ± 2.0 m/s in those treated with 3 drugs vs. 11.1 ± 3.5 m/s in patients treated with 4 drugs, p = 0.02). Interestingly, there was no significant difference in FMD, NMD, and IMT between these subgroups (46). In contrast to the above stand the findings of a study by Azancot and colleagues. Finally, a small cross-sectional study (49), including 17 consecutive renal transplant recipients who underwent 24-h ABPM and PWV measurement in the early post-operative period (3 to 7 days after transplantation), and in whom anthropometric measurements and laboratory parameters were obtained, tried to explore the association of the above-mentioned parameters with the risk of cardiovascular disease in these patients. There was a significant correlation (r = 0.21, p < 0.05) between overweight, as defined by BMI, and the PWV measurements. On multivariate linear regression analysis, with PWV as the dependent variable, the factors independently associated with it were age, hemoglobin levels and 24-h SBP. An age increase of 10 years was correlated with a 0.47 m/s increase in PWV, while a 1 g/dl hemoglobin increase correlated with a 0.933 m/s decrease in PWV. Finally, a 24-h SBP increase of 10 mmHg was correlated with a PWV increase of 0.83 m/s (49).

Cross-Sectional Studies on the Association of Arterial Stiffness and Graft Function

The association between arterial stiffness and graft function was investigated in 2 recently published cross-sectional studies (Table 1). In the first one (50), PWV measurement with ultrasound recordings was performed in 96 stable renal transplant recipients. The aortic PWV of the patients ranged from 4 to 14.2 m/s.
The aortic PWV and the estimated GFR (using the MDRD equation) were inversely correlated (Pearson correlation coefficient −0.427), and this correlation was statistically significant, suggesting a probable effect of arterial stiffness on graft outcomes (50). In the second study (51), PWV was measured in 83 renal transplant recipients, aiming to assess the degree and associations of arterial stiffness. Multivariable linear regression analysis, with PWV as the dependent variable, retained the following parameters as independent predictors of PWV in the final regression model: red blood cell distribution width (β 0.323, 95% CI 0.319 to 1.591, p = 0.004), age (β 0.297, 95% CI 0.023 to 0.106, p = 0.005), tacrolimus immunosuppression therapy (β −0.286, 95% CI −2.616 to −0.554, p = 0.004) and central DBP (β 0.185, 95% CI 0.004 to 0.122, p = 0.041) (51).

Longitudinal Studies Assessing the Association of Arterial Stiffness With Cardiovascular Risk, Renal Outcomes, and Mortality in Renal Transplant Recipients

In addition to the aforementioned cross-sectional studies, a few retrospective and prospective cohort studies have tried to explore the association of arterial stiffness indices with cardiovascular risk, renal outcomes and mortality in renal transplant recipients (Table 2).

Retrospective Studies

A cohort study (52) by Kim et al. included 171 ESRD patients eligible for kidney transplantation, in 84 of whom follow-up brachial-ankle PWV (baPWV) was available. The study aimed to assess the utility of arterial stiffness measurements as a marker for predicting cardiovascular disease in renal transplant recipients. The mean value of pre-transplant baPWV was 15.08 ± 3 m/s in ESRD patients, and 93.4% had a higher baPWV value than healthy controls of the same age and sex. Pre-transplant baPWV was higher in patients with a history of CVD than in those without CVD (18 ± 4.4 vs. 14.91 ± 2.65 m/s, p < 0.05) and proved to be a strong predictor of CVD (OR 1.003, 95% CI: 1.001 to 1.005, p < 0.05). The optimal cut-off value of baPWV for the detection of CVD was 15.91 m/s, with a sensitivity of 72.7% and specificity of 71.6% (area under the curve 0.778, 95% CI 0.64 to 0.91, p < 0.05), and this value was an independent predictor of CVD in renal transplant recipients (OR 6.3, p < 0.05). Moreover, the occurrence rate of CVD was significantly higher in patients with a "high coronary calcium score" compared to those with a "low coronary calcium score", and baPWV was also significantly higher in the first of these two groups (16.27 ± 3.93 vs. 14.79 ± 2.65 m/s, p < 0.05). Further to the above, the association between arterial stiffness and all-cause mortality has been explored in two recent retrospective studies. The first one, conducted by Dahle et al. (53), included 1,040 renal transplant recipients in whom carotid-femoral PWV was measured 8 weeks after kidney transplantation. Approximate PWV quartiles were defined by cut-offs at 8, 10, and 12 m/s. During a median follow-up of 4.2 years, 82 patients died. The association of PWV and mortality showed a ceiling effect, and PWV was truncated at 12 m/s. Each 1 m/s increase in PWV up to 12 m/s was significantly associated with mortality (HR 1.36, 95% CI 1.14 to 1.62, p = 0.001) (Figure 2). An interquartile-range increase of 3.8 m/s in PWV tripled the risk of mortality (HR 3.21, 95% CI 1.63 to 6.31), an effect similar to that of 1 interquartile increase in age (21.6 years, with an estimated HR of 3.06, 95% CI 1.87 to 5.29) (53).
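The per-interquartile-range hazard ratio quoted for the Dahle et al. study follows from the per-unit estimate by exponentiation of the Cox coefficient; the short Python sketch below reproduces this standard rescaling (the function name is ours).

```python
import math

def rescale_hazard_ratio(hr_per_unit: float, units: float) -> float:
    """Rescale a Cox hazard ratio from a 1-unit to a `units`-sized increase:
    HR_units = exp(units * ln(HR_per_unit))."""
    return math.exp(units * math.log(hr_per_unit))

# HR 1.36 per 1 m/s of PWV, rescaled to the 3.8 m/s interquartile range:
print(round(rescale_hazard_ratio(1.36, 3.8), 2))
# -> 3.22, matching the reported HR of ~3.21 up to rounding
```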
Finally, in the most recent study assessing the association between arterial stiffness, mortality and graft survival in renal transplant recipients, Cheddani et al. (54) included 220 patients who were evaluated for PWV 3 months after transplantation. Among those, 169 repeated the evaluation at 12 months. During a median follow-up of 5.5 years, death and graft loss occurred in 10 and 12 patients, respectively, and c-f PWV 3 months after transplantation was an independent risk factor for mortality. In another study (55), the authors reported that the occurrence of renal and/or cardiovascular events following transplantation was influenced by two factors: heart rate (β, 7.16; p < 0.001) and PWV (β, 0.25; p < 0.006). When PP was multiplied by heart rate, this product was a significant (HR 3.7; p < 0.02) and independent factor influencing cardiovascular events in transplanted patients, in addition to a past history of cardiovascular events (HR 1.16; p < 0.04). The study is limited, however, by the lack of details on the methodology of the analysis. Claes et al. (56) conducted a prospective study in order to investigate the prognostic value of arterial stiffness and aortic calcifications in 253 renal transplant recipients. Carotid-femoral PWV was assessed in a subgroup of 115 patients, and aortic calcification (AC) was assessed by means of lumbar X-ray. AC was present in 61% of patients. The primary endpoint for this study was cardiovascular events. After a mean follow-up of 36 months, 32 CV events occurred in the overall group and 13 events in the PWV subgroup. When accounting for age, gender, and cardiovascular history, the aortic calcification score (HR 1.09 per 1-unit increase; 95% CI 1.02 to 1.17) and PWV (HR 1.45 per 1 m/s; 95% CI 1.16 to 1.8) remained independent predictors of cardiovascular events in Cox regression analyses. Using ROC analysis, the area under the curve for the prediction of CV events was 0.80 and 0.72 for the sum aortic calcification score and PWV, respectively. This study indicated that both arterial stiffness and aortic calcifications are strong and independent predictors of future cardiovascular events in non-selected renal transplant recipients and should be used in risk stratification (56). In a recent single-center observational prospective study (57), 37 kidney transplant recipients with no history of vascular events were evaluated in terms of vascular calcification. Among others, carotid-femoral PWV (cfPWV) and carotid-radial PWV (crPWV) were measured using applanation tonometry before and 1 year after transplantation. The study showed that pre-transplant CRP level (HR 1.660, p = 0.007) and the PWV ratio (cfPWV/crPWV) (HR 7.549, p = 0.045) predicted cardiovascular events (57). Bahous et al.
(58) conducted a study that included 95 recipients of living-donor kidneys and their corresponding donors, aiming to determine the contribution of donor characteristics, especially large-artery stiffness, in addition to recipient parameters, to late post-transplant cardiovascular and renal graft outcomes. The study revealed a borderline significant association of donor aortic PWV with the composite outcome (occurrence of a fatal or nonfatal cardiovascular event and/or doubling of serum creatinine or development of ESRD) (RR 1.8, 95% CI 1 to 3.4, p = 0.05) in renal transplant recipients. When renal and cardiovascular outcomes were analyzed separately, recipient eGFR and donor PWV were significant determinants of the renal outcome (HR 0.26, 95% CI 0.14 to 0.4, p < 0.0001 and HR 1.9, 95% CI 1.2 to 3.0, p = 0.02, respectively), and a previous history of cardiovascular events was the only significant determinant of the cardiovascular outcome (HR 3.5, 95% CI 2.1 to 8, p = 0.001) (58).

STUDIES EVALUATING ARTERIAL STIFFNESS BEFORE AND AFTER RENAL TRANSPLANTATION

Prospective observational studies evaluating the impact of successful transplantation on arterial stiffness of the recipients are presented in Table 3. Most of them are based on measurements performed before and after surgery. In the first study to explore the long-term effects of renal transplantation and hemodialysis on arterial stiffness, Keven et al. (59) prospectively assessed c-f PWV, using the SphygmoCor device, in 28 renal transplant patients before and 12 months after renal transplantation and in 23 patients on hemodialysis at baseline and 12 months later. In renal transplant recipients, PWV significantly decreased from 7.8 ± 1.8 to 6.2 ± 1.6 m/s 1 year after transplantation, which was a significant reduction compared to hemodialysis patients (p < 0.0001) (59). In a subsequent study, Ignace et al. (60) measured c-f PWV and heart-rate-adjusted AIx (AIx75) before and 3 months after transplantation in 52 renal transplant recipients, using the Complior device. After adjusting for the reduction in mean BP, c-f PWV decreased significantly from 12.1 ± 3.3 to 11.6 ± 2.3 m/s (p < 0.05). Moreover, in an analysis stratified by age, this improvement was only present in patients older than 50 years of age, as compared with patients younger than 50 years of age (−5.5 ± 2.2 vs. 2.1 ± 1.9%, p < 0.05). As far as AIx75 is concerned, it decreased from 22 ± 11 to 14 ± 13% (p < 0.01), but this reduction was not age-dependent (60).

STUDIES EVALUATING ARTERIAL STIFFNESS AT DIFFERENT TIME POINTS AFTER RENAL TRANSPLANTATION

In addition to the above, there are a few prospective observational studies that performed measurements at different time points after renal transplantation. Delahousse et al. (67) used the Complior device to measure c-f PWV in 74 renal transplant recipients. Another study (68) used the SphygmoCor to assess c-f PWV within 1 month of transplantation (baseline) and 12 months post-transplant in 66 renal transplant recipients. The median PWV was 9.25 vs. 8.97 m/s at baseline and month 12, respectively, but the change was not significant (median change of −0.07, p = 0.7) (68). In perhaps the most interesting of these studies, Karras et al. (69) evaluated arterial stiffness with c-f PWV in 161 renal transplant recipients, 3 and 12 months after transplantation. These recipients were separated into three different groups based on their donors, i.e., recipients from living donors, recipients from standard criteria donors, and recipients from extended criteria donors.
Mean PWV decreased from 10.8 m/s (95% confidence interval, 10.5-11.2 m/s) at month 3 to 10.1 m/s (95% confidence interval, 9.8-10.5 m/s) at month 12 (p < 0.001). The PWV reduction from month 3 to month 12 was significantly larger in patients with a living-donor allograft compared to those with a deceased-donor allograft (p < 0.001). When the extended criteria donor (ECD) group was compared to the standard criteria donor (SCD) group, the change in PWV also differed significantly: −0.7 (−1 to −0.4) m/s in ECD vs. +0.1 (−0.4 to +0.4) m/s in SCD (p < 0.01) (69). Finally, very recently, Saran et al. (70) measured c-f PWV using the Schiller BR-102 plus PWA device in 181 renal transplant patients in two different postoperative periods. The early postoperative period was between 2 and 7 postoperative days, and the late one was 6 to 27 years after transplantation. In contrast to most of the above findings, the authors noted no significant difference between the average PWV in the early period after renal transplantation (8.02 ± 2.21 m/s) and in the late period (8.09 ± 1.68 m/s) (p = 0.777) (70).

CONCLUSIONS

Kidney transplantation is the treatment of choice in patients suffering from ESRD, yet cardiovascular risk in renal transplant recipients remains significantly higher than that of the general population. This excess risk is not fully explained by the burden of traditional cardiovascular risk factors present in renal transplant recipients. As increased arterial stiffness is a prominent feature of vascular changes in patients with CKD and ESRD and has been repeatedly associated with increased risk of cardiovascular events and mortality in these conditions, different types of studies have been conducted in order to assess the degree of arterial stiffness in renal transplant recipients and its associations with other risk factors, but also with future cardiovascular risk, graft survival and overall mortality. Several cross-sectional studies discussed in detail herein showed that higher PWV in renal transplant recipients was associated with various risk factors, co-morbidities, and associated target-organ damage, including age, pre-transplant diabetes, increased ambulatory BP, increased waist circumference and visceral fat mass, smoking, coronary artery calcification, and left ventricular hypertrophy, but also with previous episodes of acute renal rejection, renal graft dysfunction and previous time on dialysis. Other studies have assessed the course of arterial stiffness before and after kidney transplantation. In most cases, arterial stiffness measured with PWV was markedly reduced after kidney transplantation. Of note, in some studies this improvement was shown to be age-dependent, suggesting an added cardiovascular risk reduction in older patients, and was also more marked in cases of transplantation from living donors.
In the subset of studies examining PWV at different time points after kidney transplantation (varying from 1 week to 2 years), mixed results were noted, with some studies showing that arterial stiffness was partly reversed during follow-up and that this improvement was dependent on donor age and greater in patients receiving a renal graft from a living donor, whereas other studies suggested that arterial stiffness levels were similar in the early and late post-transplantation periods. Finally, and perhaps most importantly, in almost all studies evaluating the role of arterial stiffness as a predictor of future adverse outcomes, PWV was shown to be an independent predictor of cardiovascular events, loss of renal function and overall mortality. Thus, existing evidence suggests that increased arterial stiffness is a major pathophysiological player involved in the adverse cardiovascular profile of renal transplant recipients. As of this writing, however, there are no clinical trials in transplant recipients (or in any other population) aiming to assess whether therapeutic interventions to reduce arterial stiffness would improve patient outcomes. To this end, mechanistic studies to identify the major mechanisms through which renal transplantation beneficially affects PWV are also required. Overall, further studies are urgently warranted to better define the associations of PWV with other prominent risk factors, the change and evolution of arterial stiffness after kidney transplantation, its long-term prognostic significance, and whether it could represent an additional therapeutic goal in order to improve patient and graft survival after renal transplantation.

AUTHOR CONTRIBUTIONS

MK and EX wrote the first draft of the manuscript. SM wrote sections of the manuscript and checked the Tables for intellectual content. PS and JB conceptualized the article, wrote sections of the manuscript, and checked the manuscript for intellectual content.
Fingerprinting the Type-Z three Higgs doublet models

There has been great interest in a model with three Higgs doublets in which fermions with a particular charge couple to a single and distinct Higgs field. We study the phenomenological differences between the two common incarnations of this so-called Type-Z 3HDM. We point out that the differences between the two models arise from the scalar potential only. Thus we focus on observables that involve the scalar self-couplings. We find it difficult to uncover features that can uniquely set apart the $Z_3$ variant of the model. However, by studying the dependence of the trilinear Higgs couplings on the nonstandard masses, we have been able to isolate some exclusive indicators for the $Z_2\times Z_2$ version of the Type-Z 3HDM. This highlights the importance of precision measurements of the trilinear Higgs couplings.

Introduction

The Standard Model (SM) of particle physics has been immensely successful in describing the electroweak interaction with great precision. However, issues like neutrino mass and dark matter serve as major motivators to look for physics beyond the SM (BSM). Very often, such BSM theories extend the minimal scalar sector of the SM, which consists of only one Higgs doublet. Therefore, quite naturally, scalar extensions of the SM are routinely investigated in the literature. Among these, multi-Higgs-doublet models might be the most ubiquitous, primarily because such extensions preserve the tree-level value of the electroweak ρ-parameter. The simplest extension in this category is the two Higgs-doublet model, which has been studied extensively [1]. Of late there has been a rise in interest in the study of three Higgs-doublet models (3HDMs) [2,3] where, as the name suggests, the scalar sector contains three Higgs doublets. In studies of multi-Higgs-doublet models it is very often assumed that fermions of a particular charge couple to a single scalar doublet. This makes the fermion mass matrices proportional to the corresponding Yukawa matrices, and diagonalization of the mass matrices automatically ensures the simultaneous diagonalization of the Yukawa matrices as well. As a result, the model will be free from scalar-mediated flavor changing neutral couplings (FCNCs) at the tree level. In Ref. [4] it was explicitly demonstrated that tree-level FCNCs are absent if and only if there is a basis for the Higgs doublets in which all the fermions of a given electric charge couple to only one Higgs doublet. Such an aspect of the model is quite desirable in view of the flavor data [5]. These types of constructions are usually referred to in the literature as models with natural flavor conservation (NFC) [6], of which there are five independent possibilities. Following the terminologies of Ref. [7], one can entertain four types of flavor-universal NFC models, namely Type-I, Type-II, Type-X, and Type-Y, within the 2HDM framework. All these Yukawa structures have been concisely summarized in Table 1. Beyond these four options, there is one more interesting possibility, where a particular scalar doublet is reserved exclusively for each type of massive fermion. This implies that the up-type quarks, the down-type quarks, and the charged leptons couple to separate scalar doublets. Evidently, such an arrangement of Yukawa couplings is impossible within a 2HDM framework, and one needs at least three scalar doublets to accommodate it.
In this paper, we will refer to this possibility as the 'Type-Z Yukawa' and, subsequently, the 3HDMs that feature a Type-Z Yukawa structure will be collectively called 'Type-Z 3HDMs'. These Type-Z 3HDMs have gained a lot of attention in the recent past. Theoretical constraints from unitarity and boundedness from below (BFB) have been studied in Refs. [8-10], the alignment limit is analyzed in Refs. [11,12], the custodial limit has been studied in Ref. [13], and, quite recently, phenomenological analyses involving the flavor and Higgs data have been performed in Refs. [14,15]. Other related studies appear in [16-19]. There are usually two different ways in which a Type-Z Yukawa structure is realized. The first method employs a Z3 symmetry [11] whereas the second option uses a Z2 × Z2 symmetry [17]. Our objective in this paper will be to point out observable features which can distinguish between the two avatars of Type-Z 3HDMs. Since the Yukawa sector in both versions of the Type-Z 3HDM is identical, we will turn our attention to the scalar potential with the hope that some distinguishing aspects can be uncovered. As we will see, only some of the quartic terms in the scalar potential mark the difference between the two variants of the Type-Z 3HDM. We will therefore focus on the theoretical constraints from unitarity and BFB, which concern the quartic parameters of the scalar potential. We hope that these constraints, in particular, will impact the parameter space in the scalar sector differently for the two Type-Z models. As a result, we expect to encounter some practical distinguishing features of these two models. Our article will be organized as follows. In Sec. 2 we will outline the two different options for obtaining a Type-Z Yukawa structure along with the corresponding implications for the scalar potential. In Sec. 3 we list the different constraints (both theoretical and phenomenological) faced by the scalar sectors of the 3HDMs under consideration. In Sec. 4 we spell out the details of our numerical analysis and highlight the important outcomes. We summarize our findings and draw our conclusions in Sec. 5.

The model

We have already presented the notion of NFC in the introduction. There are a few different ways of obtaining NFC in a 3HDM framework, which have been listed in a concise manner in Table 1, where φ1, φ2 and φ3 represent the three Higgs doublets that constitute the scalar sector of our model. Among these, we are particularly interested in the possibility of a Type-Z Yukawa structure, which requires a 3HDM scalar sector at the very least. There are two different ways to ensure a Type-Z Yukawa structure. The first option is to employ a Z3 symmetry, with the charge assignments of Eq. (1a),

[Table 1: Distinct possibilities for NFC in a 3HDM framework, listing for each fermion type (up quarks, down quarks, charged leptons) the doublet it couples to in the Type-I, Type-II, Type-X, Type-Y, and Type-Z setups. The first four types can also be obtained within 2HDMs but the Type-Z requires at least a 3HDM. In our convention, the scalar doublet coupling to the up-type quarks is always labeled as φ3.]

and the second option is to use a Z2 × Z2 symmetry, in the manner of Eq. (1b). In these equations, the down-type quark and charged-lepton right-handed fields are denoted as dR and ℓR, respectively. Since both the symmetries in Eq. (1) entail the same Type-Z Yukawa couplings, we must turn our attention to the scalar sector phenomenology for possible distinguishing features. The symmetries in Eq. (1) would obviously have their repercussions on the 3HDM scalar potential.
To this end we note that the scalar potentials in both these cases consist of a common part, given in Eq. (2). Note that in the expression for V2 we have allowed terms that softly break the symmetries defined in Eq. (1). These will be important if we wish to access arbitrarily heavy nonstandard scalars (decoupled from physics at the electroweak scale) without spoiling perturbative unitarity [20-22]. The differences between the symmetries in Eqs. (1a) and (1b) are captured by a set of additional quartic terms in the scalar potential. We therefore hope to find distinguishing aspects of these models by tracking the effects of these additional terms. In order to do this, it is important to conveniently parametrize our models in terms of the physical masses and mixings. We will closely follow the notations and conventions of some earlier works [13,15]. However, for the sake of completeness, we will give a brief summary of the important expressions which will be crucial for our numerical analysis later. To begin with, let us write the k-th scalar doublet, after spontaneous symmetry breaking, as in Eq. (4), where vk is the vacuum expectation value (VEV) of φk, assumed to be real. The three VEVs, v1, v2 and v3, are conveniently parametrized in terms of tan β1 and tan β2, where v, defined through v² = v1² + v2² + v3², is the total electroweak (EW) VEV. The component fields in Eq. (4) will mix together and give rise to two pairs of charged scalars (H±1,2), two physical pseudoscalars (A1,2) and three CP-even neutral scalars (h, H1,2). 1 For the charged and pseudoscalar sectors, the physical scalars can be obtained via the 3 × 3 rotations of Eq. (6), whose rotation matrices are parametrized by the mixing angles appearing in Eq. (7). In Eq. (6), ω± and ζ stand for the charged and the neutral Goldstone fields, respectively. For the CP-even sector, the physical scalars are obtained through a similar rotation, parametrized by the mixing angles of Eq. (10). Of course, these physical masses and mixings cannot be completely arbitrary, as they will have to negotiate a combination of theoretical and phenomenological constraints, which will be described in the next section.

Constraints

In this section we study the constraints that must be applied to the model parameters in order to ensure theoretical and phenomenological consistency. On the phenomenological side, we first need to guarantee the presence of a SM-like Higgs, which will be identified with the scalar boson discovered at the LHC. This can be easily accommodated by staying close to the 'alignment limit' [11] of the 3HDM, defined by an appropriate condition on the scalar mixing angles. In this limit, the lightest CP-even scalar, h, will possess exact SM-like couplings at the tree level, and constraints from the Higgs signal strengths will be trivially satisfied. However, we will be more interested in the extent of deviation from the exact alignment limit allowed by the current measurements of the Higgs signal strengths [23]. We define the Higgs signal strength as

μ_i^f = (σ_i × BR^f) / (σ_i^SM × BR_SM^f),

where the subscript 'i' denotes the production mode and the superscript 'f' denotes the decay channel of the SM-like Higgs scalar. Starting from the collision of two protons, the relevant production mechanisms include gluon fusion (ggF), vector boson fusion (VBF), associated production with a vector boson (VH, V = W or Z), and associated production with a pair of top quarks (ttH). The SM cross section for the gluon fusion process is calculated using HIGLU [24], and for the other production mechanisms we use the prescription of Ref. [25]. Next we need to satisfy the constraints arising from the electroweak S, T and U parameters.
We will use the analytic expressions derived in Ref. [26] and compare them with the corresponding fit values given in Ref. [27]. It is worth pointing out that, similar to the 2HDM case, one can easily leap over the T-parameter constraint by requiring the charged scalars to be degenerate in mass with the corresponding pseudoscalars, m_C1 = m_A1 and m_C2 = m_A2 [13].

We also take into consideration the bounds coming from flavor data. In the Type-Z 3HDM there are no FCNCs at the tree level. Therefore, the only new-physics contribution at one-loop order to observables such as b → sγ and the neutral meson mass differences will come from the charged-scalar Yukawa couplings. It was found in Ref. [14] that the constraints coming from the meson mass differences tend to exclude very low values of tan β_1,2. Therefore, we only consider sufficiently large values of tan β_1,2 (cf. the scan ranges in Sec. 4) to safeguard ourselves from the constraints coming from the neutral meson mass differences. To deal with the constraints stemming from b → sγ, we follow the procedure described in Refs. [15,28,29] and require the computed branching ratio to lie within the 3σ experimental limit. Additionally, we also take into account the bounds from the direct searches for the heavy nonstandard scalars. For this purpose, we use HiggsBounds-5.9.1 following Ref. [30], where a list of all the relevant experimental searches can be found. It should be noted that we have allowed for decays with off-shell scalar bosons, using the method explained in Ref. [31].

For the theoretical constraints, we first ensure the perturbativity of the Yukawa couplings. For the Type-Z Yukawa structure, the top, bottom, and tau Yukawa couplings are given by

y_t = √2 m_t / v_3 ,  y_b = √2 m_b / v_2 ,  y_τ = √2 m_τ / v_1 ,

which follow from our convention that φ_3, φ_2, and φ_1 couple to up-type quarks, down-type quarks, and charged leptons, respectively. To maintain the perturbativity of the Yukawa couplings, we impose |y_t|, |y_b|, |y_τ| < √(4π). Throughout our paper, we have used values of tan β_1,2 that are consistent with this perturbative region.
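As a quick numerical illustration of the perturbativity requirement, the sketch below evaluates the three Yukawa couplings from a set of VEVs; the fermion masses are indicative input values, not fit results from this analysis.

```python
import math

def yukawas(v1, v2, v3, m_t=172.5, m_b=4.18, m_tau=1.777):
    """Type-Z convention: phi_3 -> up quarks, phi_2 -> down quarks,
    phi_1 -> charged leptons, so y_f = sqrt(2)*m_f/v_k (all in GeV)."""
    s2 = math.sqrt(2.0)
    return s2 * m_t / v3, s2 * m_b / v2, s2 * m_tau / v1

def perturbative(v1, v2, v3):
    """Impose |y_t|, |y_b|, |y_tau| < sqrt(4*pi)."""
    return all(abs(y) < math.sqrt(4.0 * math.pi) for y in yukawas(v1, v2, v3))

# Democratic VEVs, v1 = v2 = v3 = 246/sqrt(3) GeV, sit safely inside the bound.
v = 246.0 / math.sqrt(3.0)
print(yukawas(v, v, v), perturbative(v, v, v))
```

Small v_1 or v_2 (i.e., large tan β_1,2 in the conventions above) drives y_τ or y_b toward the perturbativity bound, which is why the scan ranges below are restricted.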
However, we are mainly interested in the effects of the theoretical constraints from perturbative unitarity and the BFB conditions. These constraints directly affect the scalar potential and therefore can potentially have different implications for the Z_3 and Z_2 × Z_2 incarnations of the Type-Z 3HDM. For the unitarity constraints, we use the algorithm presented in Refs. [8,32]. For the BFB constraints we use only the sufficient conditions of Ref. [15] for the Z_3 model and the sufficient conditions of Ref. [9] for the Z_2 × Z_2 model.

Analysis and results

In both versions of the Type-Z 3HDM the scalar potential of Eq. (2) contains a total of 18 parameters (6 bilinear parameters and 12 quartic parameters), all of which we assume to be real. For our numerical analysis, we trade these 18 parameters for an equivalent but more convenient set of parameters with a more direct connection to physical reality. As a first step, we use the minimization conditions to replace three quadratic parameters, m_11², m_22², and m_33², by the three VEVs, v_1, v_2, and v_3, which, in turn, are further exchanged for v, tan β_1, and tan β_2. The 12 quartic parameters are purposefully interchanged with the 7 physical masses (two charged-scalar masses labeled m_C1 and m_C2, two pseudoscalar masses labeled m_A1 and m_A2, and three CP-even scalar masses labeled m_h, m_H1 and m_H2) and the 5 mixing angles appearing in Eqs. (7) and (10). For each of the symmetry-constrained 3HDMs we built a dedicated code, which is an extension of our previous codes [15,28,33]. We take v = 246 GeV and m_h = 125 GeV as experimental inputs.

The remaining parameters are randomly scanned within ranges that we collectively refer to as Eq. (17); more details about the range of Eq. (17d) are given after Eq. (23) below. The lower limits chosen for the nonstandard masses satisfy the constraints listed in Ref. [34], and the lower limit on tan β_1,2 enables us to easily evade the constraints from the meson mass differences.

When studying 3HDMs, it was noted [11,14,15] that, in order to generate good points in an easy way, one should not be far from alignment, defined as the situation where the lightest Higgs scalar has the SM couplings. It was shown in Ref. [11] that this corresponds to the case when

sin(α_1 − β_1) = sin(α_2 − β_2) = 0 ,   (18)

with the remaining parameters allowed to be free, although subject to the constraints below. It turns out that for the Z_3 3HDM [15] this constraint alone is not enough to generate a sufficiently large set of good points starting from a completely unconstrained scan as in Eq. (17d). In Ref. [14] it was noted that all the theoretical and experimental constraints on the scalar sector can be easily negotiated in the 'maximally symmetric limit' of the 3HDM [35]. As pointed out in Ref. [14], one can easily migrate to the maximally symmetric limit by imposing a set of degeneracy relations among the physical masses and mixing angles, Eq. (19). Additionally, the maximally symmetric limit also requires the soft-breaking parameters to be related through trigonometric combinations of β_1 and β_2, Eq. (20), where s_x and c_x are shorthands for sin x and cos x, respectively. Therefore, we can make our numerical study very efficient by strategically scanning in the 'neighborhood' of Eqs. (19) and (20). In a previous phenomenological study of the Z_3 version of the Type-Z 3HDM [15] we found that one can deviate from the exact relations of Eqs. (18), (19) and (20) by a given percentage (10%, 20%, 50%), thereby enhancing the possibility of new BSM signals while still being able to generate an adequate number of data points. To exemplify, we can ensure being within x% of the alignment condition of Eq. (18) by choosing to scan within the range

|sin(α_1 − β_1)| ≤ x/100 ,  |sin(α_2 − β_2)| ≤ x/100 .   (21)

Extending this prescription, we can simultaneously incorporate Eqs. (18) and (19) by allowing the corresponding mass and angle relations to deviate by up to 20% in the same manner, Eq. (22). The set of points obtained after scanning over this range will be labeled 'Al-20%' in the subsequent text and plots. In a similar manner we generate another data set, labeled 'Al-10%', which lies relatively closer to the conditions of Eqs. (18) and (19), by restricting the deviations to 10%, Eq. (23). In this context it should be noted that the soft-breaking parameters, whenever they are free, are scanned in a very similar manner in the vicinity of Eq. (20).

To explicitly demonstrate the efficiency of our scanning method, we display in Table 2 how restrictive the individual constraints of Sec. 3 can be. In these tables, N represents the number of initial input points and Y stands for the number of output points that successfully pass through a given constraint, labeled appropriately. Thus p = Y/N gives an estimate of the probability of successfully negotiating a particular constraint. The quantity δp represents the typical uncertainty associated with the estimate of p and is calculated using the formula for the propagation of errors.

Table 2: Impact of individual constraints for the two Type-Z models while the scanning is done following Eq. (23).

From Table 2 it should be evident that the BFB constraints have a very low acceptance ratio for the input points. We should point out that our choice of scanning around Eqs. (18) and (19) definitely increases the number of output points that pass through all the constraints; this is detailed in the Appendix, where we show equivalent numbers for a run generating points within 50% of the alignment limit.
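The acceptance bookkeeping of Table 2 can be reproduced in a few lines; here the uncertainty is taken as the binomial (counting) estimate, which is one natural reading of 'propagation of errors' for a pass/fail ratio — an assumption, since the exact formula is not spelled out above.

```python
import math

def acceptance(N, Y):
    """Pass probability p = Y/N for a constraint, with a binomial
    uncertainty delta_p = sqrt(p*(1-p)/N) (assumed form)."""
    p = Y / N
    return p, math.sqrt(p * (1.0 - p) / N)

# Hypothetical counts, for illustration only: 50,000 inputs, 1,200 survivors.
p, dp = acceptance(50_000, 1_200)
print(f"p = {p:.4f} +/- {dp:.4f}")
```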
Now that our scan strategy has been laid out clearly, we can proceed to describe the results of our numerical studies. We will do this in two stages. At first we will demonstrate the results from the general scans and point out features that may distinguish between the two variants of Type-Z 3HDMs. In the second part we will presume that some nonstandard scalars have been discovered and will therefore work with some illustrative benchmark points, in the hope of making the distinction between the two models more pronounced.

Figure 1: Output points that pass all the constraints, plotted in the μ_γγ vs μ_Zγ plane for the gluon fusion production channel. The scanning is done assuming the Al-10% condition of Eq. (23). The red and the green points correspond to the Z_3 and Z_2 × Z_2 models, respectively.

We always have to keep in mind that the difference between the two versions of Type-Z 3HDM is marked by the scalar potential. Therefore, we focus on measurements that involve the scalar self-couplings. Quite naturally, our first choice is to study μ_γγ and μ_Zγ (the Higgs signal strengths in the two-photon and Z-photon channels, respectively), which pick up extra contributions from charged-scalar loops that depend on couplings of the form hH_i⁺H_i⁻ (i = 1, 2). However, as displayed in Fig. 1, the points that pass through all the constraints span very similar regions in the μ_γγ vs μ_Zγ plane for both versions of Type-Z 3HDM. Thus no significant distinction between the two models can be made from μ_γγ and μ_Zγ.

Next we turn our attention to the trilinear Higgs self-coupling g_hhh, defined through the coefficient of the h³ term in the scalar potential, Eq. (24). In the SM we have g_hhh^SM = −m_h²/(2v). Thus we define the coupling modifier

κ_h = g_hhh / g_hhh^SM ,   (25)

which is already being measured experimentally, and some preliminary values have been reported in Refs. [36,37]. We have checked that for both Type-Z models κ_h = 1 in the alignment limit defined by Eq. (18), as expected. Therefore, we have to hope that the LHC Higgs data will eventually settle on some nonstandard values away from exact alignment, so that some distinguishing features can be found. To this end we recall that the quartic parameters of Eq. (3) mark the essential difference between the two models; in the limit where these extra quartic couplings vanish, the distinction disappears. In Fig. 2 we therefore plot the allowed points against κ_h. There we observe that values of κ_h in the ballpark of 0.8 or lower will definitely favor the Z_2 × Z_2 scenario over the Z_3 version of Type-Z 3HDM. To give these results a better physical context, in Fig. 3 we plot the same points in the κ_h vs pseudoscalar-mass planes. This figure clearly indicates that, unlike the Z_3 model, the Z_2 × Z_2 model can still allow κ_h values as low as 0.7. In passing, we also note that values of κ_h around 1.1 or higher will disfavor both versions of Type-Z 3HDM.
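For orientation, the SM reference value and the modifier of Eq. (25) amount to a two-line computation (a sketch; the sign convention for g_hhh follows the expression quoted above):

```python
def g_hhh_sm(m_h=125.0, v=246.0):
    """SM trilinear Higgs self-coupling, g_hhh^SM = -m_h**2/(2*v), in GeV."""
    return -m_h**2 / (2.0 * v)

def kappa_h(g_hhh, m_h=125.0, v=246.0):
    """Coupling modifier of Eq. (25): kappa_h = g_hhh / g_hhh^SM."""
    return g_hhh / g_hhh_sm(m_h, v)

print(g_hhh_sm())       # ~ -31.8 GeV
print(kappa_h(-25.4))   # ~ 0.8, the ballpark that favors the Z2 x Z2 model
```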
Table 3: Benchmark values for the nonstandard masses (in GeV) used in Fig. 4.

In a final and more optimistic effort, we presume that some nonstandard scalars have already been observed and we try to ascertain whether, in view of the set of nonstandard parameters, one of the Type-Z 3HDMs can be preferred over the other. Our benchmark values for the nonstandard masses appear in Table 3. The remaining parameters are scanned following Eq. (22). For these benchmark values we have plotted all the points that pass through the constraints in the sin(α_1 − β_1) vs sin(α_2 − β_2) plane. The results are displayed in Fig. 4, where we have also color coded the value of κ_h for each point. There we can see that the points span a relatively larger region for the Z_2 × Z_2 model. Therefore, if both sin(α_1 − β_1) and sin(α_2 − β_2) are measured to be close to 0.1, along with κ_h around 0.7, it would definitely point towards the Z_2 × Z_2 model. Thus, again, we have found that although we can find corners of the parameter space that isolate the Z_2 × Z_2 model, it seems very difficult to point out exclusive features characterizing the Z_3 version of the Type-Z 3HDM.

Figure 4: Allowed points in the sin(α_1 − β_1) vs sin(α_2 − β_2) plane for the benchmark masses of Table 3. The color bar associated with each plot marks the gradient of values taken by κ_h. The plots in the left panel correspond to the Z_2 × Z_2 model whereas the plots in the right panel correspond to the Z_3 model. Clearly, the distinguishability between the two models depends on the benchmark point (mass region) chosen.

Summary

To summarize, we have studied the two common incarnations of Type-Z 3HDMs. One of them employs a Z_2 × Z_2 symmetry while the other relies on a Z_3 symmetry. We point out that the difference between these two models is captured by certain quartic terms in the scalar potential, appearing in Eq. (3). We then proceed to uncover the effects of these quartic terms in creating distinctions between the two Type-Z models. In doing so we have performed exhaustive scans over the set of free parameters in these models. Wherever possible, we have conveniently traded the Lagrangian parameters for the physical masses and mixings. Even then, when all the relevant theoretical and experimental constraints are imposed, a completely random scan generates very few output points that successfully negotiate all the constraints. Therefore, we adopt a more strategic scanning procedure, which involves generating random points in a premeditated proximity of the 'maximally symmetric limit' defined by Eq. (19). In this way we have successfully generated a sufficient number of points to populate our plots. For the plots, we were mainly interested in observables that involve the Higgs self-couplings. We have found that although μ_γγ and μ_Zγ are not the best discriminators, the trilinear Higgs self-coupling modifier (κ_h) has the potential to distinguish between the two models. We have concluded that relatively lower values of κ_h will favor the Z_2 × Z_2 version of Type-Z 3HDM. We have also emphasized that some nonstandard physics needs to be discovered in the LHC Higgs data for us to be able to discriminate between the two Type-Z 3HDMs. Our study underscores the importance of the ongoing effort to measure the trilinear Higgs self-coupling with increased precision.

A. Impact of a wider search

In order to assess the need for a search of points close to the alignment limit of Eqs. (18) and (19), we redo Table 2 for a scan following Eq. (26), i.e., generating points within 50% of the alignment limit; the results appear in Table 4.

Table 4: Impact of individual constraints for the two Type-Z models while the scanning is done following Eq. (26).

Comparing Table 2 with Table 4, we notice that, away from alignment, the unitarity and μ constraints cut away most of the allowed parameter space.
Dynamics of COVID-19 transmission including indirect transmission mechanisms: a mathematical analysis

The outbreak of the novel coronavirus severe acute respiratory syndrome-coronavirus-2 has raised major health policy questions and dilemmas. Whilst respiratory droplets are believed to be the dominant transmission mechanism, indirect transmission may also occur through shared contact with contaminated common objects, which is not directly curtailed by a lockdown. The conditions under which contaminated common objects may lead to significant spread of coronavirus disease 2019 during lockdown and its easing are examined using the susceptible-exposed-infectious-removed (SEIR) model with a fomite term added. Modelling the weekly death rate in the UK, a maximum-likelihood analysis finds a statistically significant fomite contribution, with 0.009 ± 0.001 (95% CI) infection-inducing fomites introduced into the environment per day per infectious person. Post-lockdown, comparison with the prediction of a corresponding counterfactual model with no fomite transmission suggests fomites, through enhancing the overall transmission rate, may have contributed to as much as 25% of the deaths following lockdown. It is suggested that adding a fomite term to more complex simulations may assist in the understanding of the spread of the illness and in making policy decisions to control it.

Introduction

On 23 March 2020, the UK government introduced a partial lockdown in an attempt to curtail the spread of coronavirus disease 2019 (COVID-19) through the transmission of severe acute respiratory syndrome-coronavirus-2 (SARS-CoV-2). Leaving home was allowed only for essential reasons: food, health and work. Just over three weeks after the partial lockdown, the weekly death rate of registered COVID-19 deaths peaked at 9495 [1], but had fallen to 6680 two weeks later, and continued to decline through July. Allowing for the time from exposure to death, the decline is evidence that non-pharmaceutical intervention successfully suppressed the spread of the epidemic [2,3].

The main transmission mechanisms of COVID-19 are believed to be viral-loaded respiratory droplets and close contact [4], although fomites [4,5] and respiratory aerosols [4,5,6] are also suspected to be factors in the transmission. The restrictions on movement, whilst reducing person-to-person direct transmission, potentially continued to allow transmission through the indirect means of objects contaminated by an infectious person. Although viable amounts of the SARS-CoV-2 virus survive under laboratory conditions on contaminated surfaces [5], and articles in proximity to an infectious patient may show traces of the viral RNA [7], it has not been demonstrated that viable viruses survive in a natural environment in sufficient concentration to transmit the infection through this route. On the other hand, experiments suggest the lifetime of SARS-CoV-2 on fomites is prolonged in a protein-rich environment like airway secretions [8]. The relative importance of indirect transmission compared with direct is unknown, even under lockdown conditions. The World Health Organization (WHO) reports there is no conclusive evidence for fomite transmission, direct evidence for which is complicated by the frequent presence of infectious individuals with the fomites, making it difficult to establish which is the causative agent [4].
The report none the less cautions that the consistent presence of fomites in the environment of infected cases suggests fomite transmission is an active means of transmission of the SARS-CoV-2 virus, as it is for other coronaviruses. Epidemic stochastic models and simulations (e.g. [3,9,10,11]) generally do not include transmission by fomites, as the effective reproduction number may be adjusted for their effects to account for gross population statistics such as infection and death rates. As discussed below, direct estimates of the rate of fomite transmission are made difficult by the rarity of fomites in the general population. Yet the policy implications of direct and indirect transmission may differ. Given that a moderately high proportion of the infectious population is suspected to be asymptomatic [4], there is a potential for infectious individuals working in essential services, who have not yet had reason to self-isolate, to unwittingly contaminate material that reaches the public with respiratory droplets. Whilst a lockdown will curtail direct transmission, indirect transmission of the virus through essential services such as post deliveries or food supplies may be relatively unaffected. Additional policies may be required to mitigate their effects.

As an alternative to direct case studies for establishing the prevalence of fomite transmission of COVID-19, this note seeks to constrain the possible impact of indirect transmission through population modelling using the SEIR model with an added fomite term. As discussed in the next section, the constraint is nearly independent of the nature of the fomites, depending only weakly on the decay times of viruses on fomites. To focus the analysis, transmission within the UK is examined. An illustrative example is also presented of the possible implications for postal deliveries in the UK, although only upper limits may be determined for any particular source of fomite transmission, since all sources add together to form the net fomite contribution inferred from a global population analysis.

Model equations

The standard set of SEIR differential equations for a population follows the dynamics of four sub-populations: the fraction s of the population susceptible to infection, the fraction e exposed to infection, the fraction i of infectious individuals and the fraction r of removed or recovered individuals. It is assumed no removed individual becomes susceptible again. Sub-populations s and i are coupled through a term R_t s i / D_i, where R_t, the (time-dependent) effective reproduction number, is the average number of people an infectious person infects. The exposed and infectious periods are assumed to be exponentially distributed in time, with mean durations D_e and D_i, respectively. A fomite term f is added to represent the number of contaminated objects per capita. If C_f is the average number of potentially contaminated objects a person comes into contact with per day, then C_f i is the per capita number of objects contaminated per day. (The infectious fraction among individuals able to contaminate the objects is assumed to be the same as in the general population.) The possibility of inter-article contamination is not included. It is assumed a contaminated object transmits the infection to an average of T_f members of the susceptible population. The coupling term between the susceptible population and fomites is then T_f s f / D_f.
This represents the transmission rate per capita to an average of T_f members of the susceptible population per capita by a number f of contaminated objects per capita, for the duration D_f over which viruses survive on a contaminated object. The form corresponds to an exponential decay in the infectiousness of the fomites, where D_f is the mean duration. The epidemic is initiated by the introduction of exposed and infectious carriers at the respective rates c_e and c_i per capita (of the initial population). The model equations are

ds/dt = −R_t s i / D_i − T_f s f / D_f ,
de/dt = R_t s i / D_i + T_f s f / D_f − e / D_e + c_e ,
di/dt = e / D_e − i / D_i + c_i ,
dr/dt = i / D_i ,
df/dt = C_f i − f / D_f .

The susceptible, exposed and infectious fractions depend only on the product N_f = C_f T_f, the number of infection-inducing fomites introduced into the population per day per infectious person (as may be seen by rescaling f by T_f). Initially, R_t = R_0, where R_0 is the basic reproduction number when the epidemic starts.
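A minimal numerical sketch of this system is given below, written in the rescaled variable f̃ = T_f f so that only N_f enters, as noted above. The durations D_e and D_i are placeholder values (the SEIR parameters are taken from Refs. [3,9], but the numbers are not quoted here), and the carrier sources c_e, c_i are replaced by a small initial infectious fraction:

```python
from scipy.integrate import solve_ivp

De, Di, Df = 4.0, 5.0, 0.34   # mean exposed/infectious/fomite durations, days (De, Di assumed)
Rt, Nf = 0.79, 0.009          # post-lockdown maximum-likelihood values quoted in the text

def seir_fomite(t, y):
    s, e, i, r, f = y                       # f is the rescaled fomite variable T_f * f
    new_inf = Rt * s * i / Di + s * f / Df  # direct plus fomite-mediated exposure rate
    return [-new_inf,
            new_inf - e / De,
            e / De - i / Di,
            i / Di,
            Nf * i - f / Df]

y0 = [1.0 - 1e-4, 0.0, 1e-4, 0.0, 0.0]      # seed infection instead of the c_e, c_i sources
sol = solve_ivp(seir_fomite, (0.0, 120.0), y0, max_step=0.25)
```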
Input parameter values

The parameter ranges considered are summarised in Table 1. The estimates for the values of the SEIR parameters are taken from Davies et al. [9] and Flaxman et al. [3] for COVID-19 in the UK. Estimates for the mean duration D_f of SARS-CoV-2 on materials are 0.41 (0.34-0.49 95% CI) day on plastic, 0.34 (0.28-0.41 95% CI) day on stainless steel and 0.21 (0.14-0.30 95% CI) day on cardboard [5], although it is noted that the measurements were made under ideal laboratory conditions and may not be applicable in a real-world setting. The number of cases of COVID-19 introduced into the UK is unknown, but estimates suggest at least 1356 infected individuals entered the UK, and likely more, peaking in mid-March (day 77 in the year) at a rate of just under 70 per day with a full-width at half-maximum (FWHM) of about 8 days [12]. A normal distribution with this FWHM fails to capture the tails in the distribution. The source distribution is modelled instead as c(t) = c_0 / [1 + 4 (t − t_c0)² / FWHM²], and apportioned to the exposed and infectious carrier sources in proportion to the durations of their respective periods: c_e = D_e c / (D_e + D_i), c_i = D_i c / (D_e + D_i). Once normalised to the initial rise in death rates, the results after lockdown are found to be insensitive to these choices. Although R_t will not have changed to a new fixed value instantaneously after lockdown, for simplicity, lockdown conditions are modelled by taking R_t = R_0 before the lockdown and R_ld after. After lockdown easing, the reproduction number is taken to be R_lde.

Means for estimating transmission rates

The posterior parameter values and predicted death rates are based on a maximum-likelihood analysis, where the likelihood of a given model is given by the product of the Poisson probabilities of the reported weekly deaths compared with the mean weekly death rates predicted by the model. The intervals for the modelled parameters listed in Table 1 are sampled uniformly. The derived confidence intervals for a given parameter are obtained by marginalising the model likelihoods over the remaining parameters to obtain posterior distributions for each parameter. A mean infected fatality ratio of 0.0050 is adopted. This is based on the age-stratified case fatality ratio, adjusted for underestimates from limited case reporting [9], the projected age distribution in the UK for 2020 from the Office for National Statistics [13], and allowing for a factor-of-two smaller infected fatality ratio compared with the case fatality ratio [14], as summarised in Table 2.

The daily death rate per capita for all cases is estimated by scaling the per capita rate of new infections by the mean infected fatality ratio, where n_d is the total number of deaths per capita, allowing for a mean three-week delay from exposure to death [9]. The delay is slightly enlarged to four weeks during the initial rise to ensure the peak death rate is captured, necessary to provide representative infection rates leading into the post-lockdown period. All models assume the same value of R_0 before lockdown to provide a fair comparison. By mid-July, it was becoming apparent that the decrease in the incidence rate of COVID-19 in the general population in the UK had levelled off, but was on the rise again in August and September [2]. Rather than model the immediate impact of the initial lockdown and the rise in August and later, only data from weeks 18 to 34 (allowing for a mean three-week delay from onset to death) are used to solve for N_f, R_ld and R_lde. The data used are provided in Table 3.

Fit parameters

The rise in the number of weekly deaths before lockdown corresponds to R_0 = 3.072 ± 0.003 (95% CL) for the maximum-likelihood model, allowing for uniform sampling over 1.5 < R_0 < 5.5. This is consistent with the range R_0 = 2.68 ± 0.57 estimated by Davies et al. [9] from a meta-analysis of published studies. The results below for indirect transmission are based on the post-lockdown rates, with models assuming 0 ⩽ N_f < 0.05, sampled uniformly over this interval. The reproduction numbers and infection-inducing fomite rates found for fomite decay times of D_f = 0.21, 0.34 and 0.41 day are summarised in Table 4. They vary little for different values of D_f, as the decay times are very short compared with the evolutionary timescale of the epidemic. They all represent the data equally well. A weighted average of all three (allowing for small differences in variances and likelihoods) after lockdown gives R_ld = 0.79 ± 0.01 (95% CI) and N_f = 0.009 ± 0.001 (95% CI). The post-lockdown value of R_t < 1 reflects the reduction in the infection rate following lockdown [2,3].

The UK began to ease the lockdown on 4 July 2020. The decline in the fraction of the population in England testing positive for COVID-19 levelled off over the following week [2]. The average reproduction number found from a maximum-likelihood fit to the numbers of registered weekly deaths after easing is R_lde = 0.99 ± 0.03 (95% CI). Significantly, a value exceeding unity is included in this range, suggesting the epidemic may have already returned to a growing phase by August. Compared with a counterfactual model with the same values of R_ld and R_lde as for the best-fitting model with fomites, the model including fomites suggests the presence of fomites contributed to an increase in the total number of deaths by about 25%, as shown in Figure 1 (dashed cyan line). These deaths arise both through contamination by fomites and through the subsequent direct transmission by the consequent infectious cases to the susceptible population.
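The likelihood described above reduces to a sum of Poisson log-probabilities over the weekly death counts; a sketch, with placeholder arrays standing in for the Table 3 data and a model prediction:

```python
import numpy as np
from scipy.stats import poisson

def log_likelihood(observed, predicted_means):
    """Log of the product of Poisson probabilities of the reported weekly
    deaths, given the mean weekly death rates predicted by the model."""
    return float(np.sum(poisson.logpmf(observed, predicted_means)))

# Hypothetical weekly death counts vs. model means, for illustration only:
print(log_likelihood(np.array([900, 820, 760]), np.array([880.0, 830.0, 770.0])))
```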
Illustrative case: postal deliveries in the UK

To give the constraint on N_f some context, potential indirect transmission by delivered post in the UK is considered. The Royal Mail adheres to public health guidelines for its employees, and it has placed several further protective measures in place in the delivery of post to customers [15]. Potential points of further accidental contamination not readily eliminated are the distribution of post to post carriers and the sorting and final delivery to customers. Approximately 14 billion letters and parcels are delivered per year by the Royal Mail [16]. The number of objects delivered per day per capita for a UK population of 67 million is then C_f = 0.57 day⁻¹ capita⁻¹. The lifetime of SARS-CoV-2 on post is unknown; the value D_f = 0.2 day for cardboard is adopted. The maximum-likelihood model for N_f ⩾ 0 gives T_f = 0.015 ± 0.002 (95% CI). Thus, only an average of three in 200 contaminated articles transmits the illness. Since other fomites may be expected to be present, this should be regarded as an upper limit, T_f < 0.017 (98% CI). The corresponding transmission rate is shown in Figure 2. At its post-lockdown peak, the transmission rate by fomites is about 2 × 10⁻⁴ per day per susceptible person (Table 4). By the end of the lockdown period, it has declined to under 5 × 10⁻⁶. These are well below the direct transmission rates of about 4 × 10⁻³ per day per susceptible person at its post-lockdown peak, and 10⁻⁴ at the end of lockdown. None the less, the slowing down by fomite transmission of the reduction in the total infection rate during the lockdown may have been sufficient to increase the death rate by as much as 25% (Fig. 1).

Effect of fomites on epidemic evolution

Because of the practical difficulties involved in making direct measurements of the transmission rate of COVID-19 through fomites, a global population approach is adopted. It is found that adding a fomite term to the standard SEIR equations greatly improves the agreement of the model with the weekly death rate from COVID-19 reported in the UK. Compared with a best-fitting model with no fomites (N_f = 0), shown in Figure 1, with post-lockdown reproduction number R_ld = 0.84 (Table 4), a somewhat smaller reproduction number (R_ld = 0.79) is required to match the data when fomites are allowed for. The lower reproduction number is compensated for by the additional contributions from fomite transmission.

A less intuitive consequence of fomite transmission is the larger reproduction number after lockdown is eased when allowing for fomites, R_lde = 0.99, compared with the fit with no fomites, R_lde = 0.92, a value that the fit including fomites excludes with over 99.9% confidence. The value for the fit without fomites is smaller because the infection rate was declining less slowly in the model before lockdown was eased compared with the model including fomites, as shown in Figure 1. To match the relatively small death rates after the lockdown was eased requires a smaller reproduction number than in the model allowing for fomites. This shows that not allowing for fomites in a model may lead to an under-estimate of the reproduction number following a reduction phase in the epidemic. In the case modelled, the reproduction number found in the model with fomites includes R_lde > 1 within its 95% confidence interval, so the epidemic in the UK may have already re-entered a growing phase by August.

Direct verification of a fomite contribution would help validate the model, but this is made difficult by the low prevalence of infection-inducing fomites, as shown in Figure 2 and Table 4. The most direct means of ascertaining the contribution of indirect transmission may be through direct random testing for contaminated material. As illustrated for UK postal deliveries, however, at most only a few in a thousand letters and parcels delivered in a day would be contaminated. Post-lockdown easing, the numbers are even smaller, below one in 10,000. This would require the testing of tens of thousands of independent, randomly selected delivered articles, which is likely prohibitive.
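For reference, the delivery-rate arithmetic used above is short; the check below reproduces C_f from the delivery volume and recovers the implied T_f from the fitted N_f = C_f T_f:

```python
deliveries_per_year = 14e9   # Royal Mail letters and parcels per year [16]
population = 67e6            # UK population

Cf = deliveries_per_year / 365.0 / population  # ~0.57 items per day per capita
Nf = 0.009                                     # fitted infection-inducing fomite rate
Tf = Nf / Cf                                   # ~0.016, i.e. roughly 3 in 200 items
print(Cf, Tf)
```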
Another approach would be to search for a statistically significant increase in COVID-19 among recipients of post from infectious (pre-symptomatic) postal workers later verified by testing to have been ill, but the numbers again will be small. Studies similar to this one could be repeated for other countries to see if similar improvements in matching the data are found, particularly if similar values of N_f were found. Smaller, isolated environments may also be modelled, although small samples are increasingly prone to variations particular to each case. Cruise ships [18,19], and possibly large work spaces [20], may be especially helpful for establishing the production rate and prevalence of fomites. Surveys of potential fomites even in non-infected environments would help to assess how frequently fomites may be introduced into a given environment and could provide data for epidemic population modelling.

Limitations

Further measurements of the duration of SARS-CoV-2 on substances in real-world situations are required. Factors other than direct transmission and fomites may also contribute to the spread of the illness, such as aerosols, blood, urine and faeces, although transmission by any of these has not been demonstrated conclusively [4]. The differences found here from a model allowing only for direct transmission may partly, or even entirely, arise from other means of transmission such as these. Alternatively, they could reflect a continuously evolving reproduction number R_t. The relative simplicity with which the fomite term improves the fit to the data, however, would seem to argue in its favour. Both direct and indirect transmission rates may differ among sub-populations of different ages. Allowing for age-dependent transmission rates and transmission between age groups would further add to the uncertainty in the contribution by fomites.

The delivery rate is assumed to differ little from the mean for 2018-2019. Whilst the volume of letters delivered fell by 33% from April to May 2020, the volume of parcels increased by 37%. For the full year 2019-2020, there was little difference in the net volume of delivered letters and parcels from the previous year [17].

Another limitation of the SEIR model is that it implicitly assumes exponential distributions for the exposed and infectious phases. The actual distributions are still unknown [21]. Other statistical distributions may prove more accurate once more data become available.

A maximum-likelihood approach requires a probabilistic model for the data. In this study the weekly reports of the number of registered deaths in the UK resulting from COVID-19, as reported by the Office for National Statistics, were used. The numbers were modelled with the minimal assumption of Poisson fluctuations, as these depend only on the reported numbers. The determinations are based on a combination of testing and physician assessments. As such they are prone to testing limitations and possibly subjective judgement. Large day-to-day variations are found, suggestive of large correlations in time. Following ONS practice, weekly numbers were used to smooth these fluctuations and suppress their correlations (Table 3). Further understanding of the nature of the fluctuations and possible remaining week-to-week correlations would likely broaden the error estimates provided here.
These uncertainties are common to any population models of the epidemic.

Policy implications

The possibility of transmission from fomites may be especially relevant to policies designed to protect the more than two million clinically extremely vulnerable people in the UK, as self-shielding alone may not be adequate. Modelling differences in the infection rates between shielded and unshielded sub-populations may be a means of determining how great a risk factor indirect transmission is. If the risk of indirect transmission through postal deliveries is assessed to be a significant contributor to the spread of COVID-19, a possible means of mitigation is the effective use of face coverings, under appropriate guidance [22], by postal workers coming into direct contact with postal items within a day of delivery. A solution considered in the context of re-using PPE is heating used equipment or exposing it to UV radiation [23]. Such an approach could be considered for post, such as exposure to sunlight for periods of several minutes to a half hour [24], and for other articles that commonly come into contact with the public, such as food packages. The tests on PPE, however, were inconclusive in terms of the required dosages in realistic scenarios [23]. It is unknown how effective exposure to sunlight would be on post in a realistic environment; post is also often concealed until delivered for security reasons, so procedural adjustments would be required. Until improved assessments are made, or other means of removing or preventing contamination become available, perhaps the simplest advice to give the public is to isolate potentially contaminated articles for 24 h before handling them, or at least to wash their hands after doing so.

Conclusions

A maximum-likelihood analysis of a SEIR model with an added fomite term, applied to the COVID-19 epidemic in the UK, suggests a significant fomite contribution, with 0.009 ± 0.001 (95% CI) infection-inducing fomites introduced into the environment per day per infectious person. The fomite term significantly shifts the inferred values of R_t compared with best-fit non-fomite solutions. It is suggested that fomites be incorporated into more refined stochastic models and simulations to better assess the effectiveness of non-pharmaceutical interventions in curbing the epidemic.

Data availability statement

All the data used to support this study are available through the cited references.
Self-templated Synthesis of Nickel Silicate Hydroxide/Reduced Graphene Oxide Composite Hollow Microspheres as Highly Stable Supercapacitor Electrode Material

Nickel silicate hydroxide/reduced graphene oxide (Ni3Si2O5(OH)4/RGO) composite hollow microspheres were one-pot hydrothermally synthesized by employing graphene oxide (GO)-wrapped SiO2 microspheres as the template and silicon source, which were prepared through sonication-assisted interfacial self-assembly of tiny GO sheets on positively charged SiO2 substrate microspheres. The composition, morphology, structure, and phase of the Ni3Si2O5(OH)4/RGO microspheres, as well as their electrochemical properties, were carefully studied. It was found that the Ni3Si2O5(OH)4/RGO microspheres featured a distinct hierarchical porous morphology with hollow architecture and a large specific surface area as high as 67.6 m2 g−1. When utilized as a supercapacitor electrode material, the Ni3Si2O5(OH)4/RGO hollow microspheres released a maximum specific capacitance of 178.9 F g−1 at a current density of 1 A g−1, which was much higher than that of the contrastive bare Ni3Si2O5(OH)4 hollow microspheres and bare RGO material developed in this work, displaying enhanced supercapacitive behavior. Impressively, the Ni3Si2O5(OH)4/RGO microsphere electrode exhibited outstanding rate capability and long-term cycling stability and durability, with 97.6% retention of the initial capacitance after continuous charging/discharging for up to 5000 cycles at a current density of 6 A g−1, which is superior or comparable to that of most other reported nickel-based electrode materials, hence showing promising application potential in the energy storage area.

Electronic supplementary material: The online version of this article (doi:10.1186/s11671-017-2094-9) contains supplementary material, which is available to authorized users.

Background

To ease the energy crisis and environmental problems, there is an important and urgent need to develop clean and sustainable power sources as well as advanced energy conversion and storage devices [1]. Supercapacitors, usually known as electrochemical capacitors, have attracted tremendous attention owing to their higher energy density than traditional dielectric capacitors, higher power density than batteries, rapid charge/discharge rate, and quite long cycle life [2]. The exploration of high-performance electrode materials is a crucial challenge for the construction and application of supercapacitors. Up to now, a large number of supercapacitor electrode materials with different components, morphologies, and architectures, such as nanostructured carbonaceous matter (e.g., porous carbon, graphene networks, carbon nanotubes), metal sulfides (e.g., MoS2, Ni3S2, WS2), metal oxides (e.g., MnO2, RuO2, CeO2), metal hydroxides (e.g., Co(OH)2, Ni(OH)2), conducting polymers (e.g., polyaniline, polypyrrole), and their hybrid composites have been well fabricated [2-8]. Unfortunately, most of them suffer from one or more problems like high cost, complicated preparative processes, limited specific capacitance, unsatisfactory cycling stability, and low rate capability. Among these disadvantages, the inferior cycling stability is particularly acute, which severely restricts their further practical applications in the supercapacitor field [9]. Consequently, it remains a challenging task to develop highly stable electrode materials with excellent supercapacitive behavior through facile and cost-effective strategies.
As a typical member of the metal silicate hydroxides, nickel silicate hydroxide (Ni3Si2O5(OH)4) has a layered structure formed by outer octahedral Ni(II)O6 sheets and inner tetrahedral SiO4 sheets [10]. Thanks to its earth abundance and environmental friendliness, Ni3Si2O5(OH)4 has been widely utilized in adsorbents for heavy metal ions and organic dyes, carriers for drug release, molecular sieves, and catalyst supports [10-14]. However, its application as an electroactive material is quite limited because of its intrinsically poor electronic conductivity [10]. Despite this drawback, the layered structure of Ni3Si2O5(OH)4 still endows it with an appealing feature for electrochemical applications, since such a structure can provide numerous well-defined multichannels for fast mass transfer, which is a critical factor during electrochemical reactions [10]. To improve the conductivity of Ni3Si2O5(OH)4-based materials, hybridization of Ni3Si2O5(OH)4 with a conductive matrix including reduced graphene oxide and carbon nanotubes has recently been achieved, and the resulting composites were successfully used in electrocatalytic water oxidation and lithium-ion batteries [10,15-17]. Nevertheless, reports on the application of Ni3Si2O5(OH)4-based materials in supercapacitors remain rare.

Graphene, a single layer of graphite, has been regarded as one of the most promising materials due to its attractive physicochemical properties and functions like light weight, exceptional electronic conductivity, and splendid chemical stability [17]. Accordingly, integration of graphene or reduced graphene oxide (RGO) with other inorganic species to boost electrochemical behavior has become an effective strategy, and a variety of graphene- or RGO-containing hybrids (e.g., hollow-structured MoS2/RGO microspheres, RGO-wrapped polyaniline nanowires, nanocubic Co3O4/RGO composites) with reinforced supercapacitive performance have been explored as well [4,18,19]. Over the past few years, self-assembly of graphene oxide (GO) sheets on solid substrates via electrostatic interaction has been demonstrated to be a versatile way to prepare GO- and RGO-based composites [20]. By means of this methodology, we have pioneered the fabrication of highly water-dispersible GO-encapsulated SiO2 microspheres (Fig. 1). The excellent aqueous dispersibility of the resultant SiO2/GO composite microspheres enables them to be readily modified or treated for further functionalization [4,21,22]. Herein, we take advantage of this point and utilize them as the template and silicon source to prepare flower-like nickel silicate hydroxide/reduced graphene oxide (Ni3Si2O5(OH)4/RGO) composite hollow microspheres with a hierarchical porous structure in one pot. As illustrated in Fig. 1, the SiO2/GO microspheres underwent a hydrothermal process in the presence of polyvinylpyrrolidone (PVP), nickel nitrate, and urea, during which the SiO2 inner core reacted with nickel cations to produce Ni3Si2O5(OH)4 under alkaline conditions, and its deposition, growth, and crystallization on the substrate microspheres, together with the reduction of GO to RGO, were synchronously accomplished, giving rise to the final product of Ni3Si2O5(OH)4/RGO composite hollow microspheres.
When employed as a supercapacitor electrode material, the synthesized Ni3Si2O5(OH)4/RGO microspheres released a maximum specific capacitance of 178.9 F g−1 at a current density of 1 A g−1 in a three-electrode system and maintained 97.6% of the initial capacitance after repetitive charging/discharging at a current density of 6 A g−1 over 5000 cycles, exhibiting outstanding long-term cycling stability and durability.

Synthesis of Ni3Si2O5(OH)4/RGO Composite Hollow Microspheres

Monodisperse colloidal SiO2 microspheres with a diameter of ~300 nm were first prepared based on a modified Stöber method (see Additional file 1 for experimental details). Subsequently, GO-encapsulated SiO2 microspheres (i.e., SiO2/GO composite microspheres) were fabricated by sonication-assisted interfacial self-assembly of tiny GO sheets on cationic polyelectrolyte-decorated SiO2 microspheres (i.e., PDDA-modified SiO2 microspheres) through electrostatic interaction (see Additional file 1 for experimental details). Ni3Si2O5(OH)4/RGO hollow microspheres were one-step hydrothermally synthesized through a self-template route. Typically, 20 mg of SiO2/GO microspheres was dispersed in 12 mL of water, followed by the introduction of 8 mL of a mixed aqueous solution containing 80 mg of nickel nitrate hexahydrate, 0.6 g of PVP, and 1 g of urea under sonication. The resulting reaction mixture was then poured into a stainless-steel autoclave (50 mL capacity) and sealed, and was subsequently allowed to undergo a hydrothermal reaction at 180 °C for 12 h. During the hydrothermal process, SiO2 reacted with nickel ions and urea to yield Ni3Si2O5(OH)4, which grew on the substrate microspheres. At the same time, the GO component was hydrothermally reduced to RGO, hence resulting in the generation of flower-like Ni3Si2O5(OH)4/RGO composite hollow microspheres in one pot. After that, the product was separated and washed with abundant water, followed by drying and annealing at 600 °C for 2 h in an Ar atmosphere. To demonstrate the role of PVP in the hydrothermal synthetic system, a Ni3Si2O5(OH)4/RGO hybrid material was similarly synthesized according to the above procedure but without the introduction of PVP. As a control, contrastive bare Ni3Si2O5(OH)4 hollow microspheres were hydrothermally fabricated by using pure SiO2 microspheres as the template, followed by the same annealing treatment. In addition, bare RGO material was also prepared through hydrothermal reduction of tiny GO sheets at 180 °C for 12 h.

Characterizations

Powder X-ray diffraction (XRD) patterns with a scanning range from 10° to 70° were obtained on a Bruker D8 ADVANCE diffractometer. Field-emission scanning electron microscopy (FESEM) inspection was performed on a Hitachi SU8010 microscope working at an acceleration voltage of 3 kV. Transmission electron microscopy (TEM) observation was carried out on a JEOL JEM-2100F microscope operating at an acceleration voltage of 200 kV and equipped with an energy-dispersive spectroscopy (EDS) system. X-ray photoelectron spectra (XPS) were recorded on a VG ESCALAB MARK II instrument. Raman spectra were collected on a HORIBA Scientific Raman spectrometer with a 532-nm laser line as the excitation source. Nitrogen adsorption-desorption isotherms were recorded on a Micromeritics ASAP 2020 apparatus at −196 °C, and the specific surface area of the samples was calculated with the Brunauer-Emmett-Teller (BET) model.
Electrochemical Measurements

All the electrochemical tests were done on a CHI 760E electrochemical workstation (CH Instruments, Inc., Shanghai, China) with a three-electrode system employing an aqueous solution of 2 M KOH as the electrolyte. A Hg/HgO electrode, a platinum foil, and a nickel foam substrate coated with active material were used as the reference electrode, counter electrode, and working electrode, respectively. To fabricate the working electrode, the active material was mixed with acetylene black and polyvinylidene fluoride (PVDF) at a weight ratio of 80:10:10. Then, N-methyl-2-pyrrolidone (NMP) was added to the mixture, followed by gentle grinding to generate a homogeneous slurry. After that, the resulting slurry was pasted onto a nickel foam current collector with an area of 1 cm × 1 cm, followed by drying at 60 °C overnight in a vacuum oven; the loading amount of active material on the working electrode was ~2.5 mg. Cyclic voltammetry (CV) curves were recorded in the potential window between 0.15 and 0.65 V at various scanning rates. Galvanostatic charge/discharge (GCD) measurements were done in the potential range from 0.2 to 0.6 V at a series of current densities. Electrochemical impedance spectroscopy (EIS) was carried out in the frequency range from 0.01 to 100,000 Hz at open-circuit potential with an ac perturbation of 5 mV.

Additional file 1: Figure S1a, b presents the FESEM images of the pure monodisperse SiO2 microspheres with a diameter of ~300 nm and a perfectly smooth surface, displaying a white color (inset of Fig. 2a). Figure 2b and Additional file 1: Figure S1c, d present the FESEM images of the SiO2/GO microspheres, whose size seems unchanged compared with the SiO2 microspheres, while their apparent color becomes yellow-brown (inset of Fig. 2b). Also, the outer surface of the SiO2/GO microspheres appears slightly rougher, and some twisted crumples can be identified (Fig. 2b and Additional file 1: Figure S1d), which should arise from the encapsulation of tiny GO sheets on the substrate microspheres. These results confirm the successful sonication-assisted interfacial self-assembly of tiny GO sheets on the positively charged SiO2 microspheres by virtue of electrostatic interaction.

Ni3Si2O5(OH)4/RGO composite hollow microspheres were one-pot prepared by hydrothermal treatment of the SiO2/GO microsphere template in the presence of nickel nitrate, urea, and PVP, and their morphology was carefully inspected. Compared with the pristine SiO2 and SiO2/GO composite microspheres, the Ni3Si2O5(OH)4/RGO microspheres are bigger in size (~600 nm in diameter), and their external surface is composed of plenty of highly curved and wrinkled nanoflakes with thicknesses of tens of nanometers (Fig. 2c, d), which should originate from the homogeneous deposition, coverage, and growth of Ni3Si2O5(OH)4 on the substrate microspheres, leading to a hierarchical porous architecture with a flower-like shape. Meanwhile, differing from the apparent colors of the pristine SiO2 and SiO2/GO composite microspheres, the Ni3Si2O5(OH)4/RGO microspheres show a dark color (inset of Fig. 2c), and such a deep color is owing to the presence of the RGO component within the sample. Figure 2d exhibits the FESEM image of a typical Ni3Si2O5(OH)4/RGO microsphere with a broken shell, revealing the hollow structure, which was further verified by the following TEM and scanning TEM (STEM) examinations. As exhibited in Fig.
2e, f, an evident interior cavity with a uniform shell thickness of ~150 nm was found in each well-defined Ni3Si2O5(OH)4/RGO microsphere, which is indicative of the total removal of the SiO2 template without collapse of the hollow structure. The inset of Fig. 2e is a high-resolution TEM (HRTEM) image of an arbitrary Ni3Si2O5(OH)4 nanoflake anchored on a Ni3Si2O5(OH)4/RGO microsphere, where the lattice fringes are visible and the interplanar spacing is calculated to be 0.74 nm, agreeing well with the (002) crystal plane of Ni3Si2O5(OH)4 [9,16,17]. The elemental distribution of the Ni3Si2O5(OH)4/RGO microsphere presented in Fig. 2f was further analyzed by the corresponding EDS mappings. As can be clearly seen in Fig. 2g-j, the Ni, Si, O, and C signals were all detected and filled the microsphere area, demonstrating their homogeneous distribution in this sample.

Notably, PVP plays an important role in the hydrothermal synthetic system. In the absence of PVP, although a Ni3Si2O5(OH)4/RGO hybrid material can be fabricated as well, such a composite agglomerated seriously and its spherical morphology was rather inferior to that of the Ni3Si2O5(OH)4/RGO composite hollow microspheres (Additional file 1: Figure S2). It is assumed that PVP favored the dispersion of the substrate microspheres (i.e., the SiO2/GO microspheres) and effectively alleviated the agglomeration of products during the hydrothermal process, leading to well-defined Ni3Si2O5(OH)4/RGO composite hollow microspheres. Moreover, as a comparison, bare Ni3Si2O5(OH)4 hollow microspheres were hydrothermally fabricated by employing pure SiO2 microspheres as the template, with synthetic conditions identical to those for the preparation of the Ni3Si2O5(OH)4/RGO microspheres. Obviously, their sphere-like shape, hierarchical morphology, and hollow structure are similar to those of the counterpart (Fig. 2k, l), whereas their apparent color is bright green (inset of Fig. 2k).

Material Characterizations

The powder XRD technique was used to characterize the structure and phase information of the products. As shown in the XRD patterns of both the Ni3Si2O5(OH)4/RGO microspheres and the bare Ni3Si2O5(OH)4 microspheres (Fig. 3), the two samples display essentially the same set of diffraction peaks, consistent with layered Ni3Si2O5(OH)4 [15,17]. To verify the existence of the RGO component incorporated in the Ni3Si2O5(OH)4/RGO microspheres, the Raman spectra of the Ni3Si2O5(OH)4/RGO microspheres and tiny GO sheets were recorded and are depicted in Fig. 4. Obviously, there are a couple of bands at around 1350 and 1590 cm−1 in both curves, which are ascribed to the characteristic D and G bands of graphene-based species [14,15,17]. Generally, the D band arises from the structural defects and edges that damage the symmetry, while the G band refers to the first-order scattering of E2g phonons [4,14]. In particular, the peak intensity ratio of the D to the G band (I_D/I_G) is a useful measure to evaluate the graphitization degree of carbon matter [4,14]. The I_D/I_G value for the Ni3Si2O5(OH)4/RGO microspheres is 1.08, which is higher than that for the GO sheets (0.88), implying that the reduction of GO to RGO indeed occurred during the hydrothermal process, and RGO was undoubtedly incorporated in the final product of the Ni3Si2O5(OH)4/RGO microspheres [4,14,17].
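As a quick consistency check on the 0.74 nm (002) spacing found by HRTEM above, Bragg's law gives the 2θ position at which the corresponding XRD reflection would be expected; the Cu Kα wavelength is assumed here, since the diffractometer's source is not stated in the text.

```python
import math

d_002 = 0.74          # HRTEM interplanar spacing of the (002) plane, nm
wavelength = 0.15406  # Cu K-alpha wavelength, nm (assumed source)

# Bragg's law, n*lambda = 2*d*sin(theta), with n = 1:
theta = math.asin(wavelength / (2.0 * d_002))
print(f"expected (002) reflection near 2-theta = {math.degrees(2 * theta):.1f} deg")  # ~12.0
```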
X-ray photoelectron spectroscopy provides an effective tool for disclosing the surface composition and states of hybrid materials. Figure 5a gives the high-resolution XPS of C 1s of the GO sheets, while Figure 5b-e shows a set of high-resolution XPS of the Ni3Si2O5(OH)4/RGO microspheres for the C 1s, Ni 2p, Si 2p, and O 1s regions, respectively. As envisioned, the detected signals suggest the presence of the four elements in the sample. Both of the C 1s spectra can be resolved into three Gaussian fitted peaks. The peak located at 284.6 eV is attributed to the oxygen-free C=C and C-C bonding, whereas the other two peaks found around 285.6 and 287.1 eV are related to diverse oxygen-containing groups including C-OH, O=C, and C−O−C [23,24]. The relative intensity of the oxygen-containing groups in the C 1s spectrum of the Ni3Si2O5(OH)4/RGO microspheres significantly decreased compared with that in the C 1s spectrum of the tiny GO sheets, once again indicating that the immobilized GO sheets wrapping the substrate microspheres underwent a drastic loss of oxygen-containing groups during the hydrothermal reaction, leading to their reduction to the RGO component [4,22,23]. Figure 5c is the high-resolution XPS of Ni 2p of the Ni3Si2O5(OH)4/RGO microspheres, where a pair of predominant peaks appear at 856.0 and 873.5 eV, corresponding to the binding energies (BE) of Ni 2p3/2 and Ni 2p1/2, respectively [10,25]. Two shake-up satellite peaks (denoted as 'Sat.' in Fig. 5c) close to the spin-orbit doublets are also visible at 862.0 and 880.1 eV, with a BE separation of 18.1 eV [10,25]. All these data agree well with the reported ones and demonstrate the presence of Ni(II) in this sample [10,25]. Besides, the high-resolution XPS of Si 2p and O 1s reveal strong peaks at 102.3 and 531.3 eV, respectively, which are typical BE values for metal silicate hydroxides as well and mainly derive from the Ni−Si and Si−O bonding [9,10].

The porous features of the Ni3Si2O5(OH)4/RGO and bare Ni3Si2O5(OH)4 microspheres were surveyed by BET measurements. As shown in their nitrogen adsorption-desorption isotherms (Fig. 6a), both can be classified as type IV isotherms, with a typical hysteresis loop ranging from 0.5 to 0.9 P/P0 in each of them, suggesting the presence of mesopores in the two specimens [4,26]. Based on the isotherms, the pore-size distribution is deduced according to the Barrett-Joyner-Halenda model and the specific surface area by the BET method, and the corresponding plots are presented in Fig. 6b, which once again manifest the existence of well-developed porosity, with an average pore size centered around 20 nm and a wide distribution from micropores to macropores in both samples [2,26]. Such a result is consistent with their FESEM and TEM observations as well (Fig. 2), and the pores are possibly formed by the complex intertwining and stacking among the nanoflakes [27]. Thanks to the hierarchical porous architecture, the specific surface areas of the Ni3Si2O5(OH)4/RGO and bare Ni3Si2O5(OH)4 microspheres are as high as 67.6 and 61.6 m2 g−1, respectively. It is assumed that the larger specific surface area of the Ni3Si2O5(OH)4/RGO microspheres would increase the contact area between the electrolyte and the electrode material, facilitate the mass transport of charged ions, and provide more reactive sites during electrochemical reactions, thus bringing about preferable supercapacitive performance [4,28,29].
Electrochemical Investigation

The electrochemical properties of Ni3Si2O5(OH)4/RGO hollow microspheres, bare Ni3Si2O5(OH)4 hollow microspheres, and bare RGO material were evaluated by CV and GCD measurements in a three-electrode system employing an aqueous solution of 2 M KOH as the electrolyte. Figure 7a displays the CV curves of the three electrodes, among which the Ni3Si2O5(OH)4/RGO electrode exhibits the largest enclosed CV area, pointing to the highest capacitance [4,9]. Figure 7b shows the CV curves of Ni3Si2O5(OH)4/RGO hollow microspheres at varied scanning rates of 2-100 mV s−1. With elevating sweeping rates, the shape of the CV curves is not remarkably altered and the intensity of the redox peaks gradually goes up with only a slight shift toward higher potential, demonstrating that fast electrochemical reactions take place at the interface between the electrolyte and active material and that the Ni3Si2O5(OH)4/RGO hollow microsphere electrode possesses excellent rate capability [19,31]. Figure 7c depicts the GCD curves of the Ni3Si2O5(OH)4/RGO, bare Ni3Si2O5(OH)4, and bare RGO electrodes in the potential range of 0.2-0.6 V tested at a current density of 1 A g−1. It is readily seen that the discharge time of the Ni3Si2O5(OH)4/RGO microsphere electrode is the longest. This result is consistent with the CV measurements displayed in Fig. 7a and further confirms its superior supercapacitive behavior. The specific capacitance of a single electrode can be obtained on the basis of the equation C = i·t / (ΔV·m), where C (F g−1) stands for the specific capacitance, i (A) represents the constant current, t (s) is the discharge time, ΔV (V) is the potential window, and m (g) is the mass of active material [4,9,27]. Therefore, the C of Ni3Si2O5(OH)4/RGO microspheres at a current density of 1 A g−1 is deduced to be 178.9 F g−1, which is clearly higher than that of bare Ni3Si2O5(OH)4 microspheres (138.4 F g−1) and bare RGO material (12.2 F g−1). Figure 7d gives its GCD curves at a group of different current densities, based on which the C of the Ni3Si2O5(OH)4/RGO microsphere electrode is calculated to be 178.9, 166.5, 150.8, 138.9, 132.5, 126.8, 120.1, and 114.4 F g−1 at current densities of 1, 2, 3, 4, 5, 6, 8, and 10 A g−1, respectively. The change in C as a function of current density is also plotted in Fig. 7e. Obviously, the C of the Ni3Si2O5(OH)4/RGO microsphere electrode gradually drops with increasing current density. It is inferred that both the outer and inner pores and reactive sites contribute to the electrochemical reactions at low current densities, giving rise to high C values, whereas only the external surface of the electrode material is involved in the charge/discharge processes at high current densities, thus resulting in the diminishment of the C value [4,29]. Compared with its maximum C at a current density of 1 A g−1, its C at 5 and 10 A g−1 remains as high as 74.1 and 63.9% of the initial one, respectively, indicating prominent rate capability. However, for the bare Ni3Si2O5(OH)4 microsphere electrode, the C at 5 and 10 A g−1 decreases to only 57.2 and 47.2% of that at 1 A g−1, respectively, exhibiting inferior rate capability (Additional file 1: Figure S3).
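The capacitance arithmetic is easy to verify directly. In this minimal sketch, ΔV = 0.4 V follows from the 0.2-0.6 V window quoted above, and the discharge times are hypothetical values chosen to reproduce the reported figures.

```python
# Specific capacitance C = i*t/(dV*m) from a GCD discharge; working
# gravimetrically, "current_density" is i/m in A/g.
def specific_capacitance(current_density, discharge_time, potential_window=0.4):
    """Return C in F/g for a constant-current discharge."""
    return current_density * discharge_time / potential_window

# Hypothetical discharge times that reproduce the reported capacitances:
print(specific_capacitance(1.0, 71.56))   # ~178.9 F/g (Ni3Si2O5(OH)4/RGO)
print(specific_capacitance(1.0, 55.36))   # ~138.4 F/g (bare Ni3Si2O5(OH)4)

# Rate capability as retention relative to the value at 1 A/g:
c_1, c_10 = 178.9, 114.4
print(f"retention at 10 A/g: {100 * c_10 / c_1:.1f}%")  # ~63.9%
```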
It is assumed that two reasons are responsible for the capacitance enhancement and rate capability improvement of the Ni3Si2O5(OH)4/RGO microsphere electrode. On the one hand, Ni3Si2O5(OH)4/RGO microspheres feature a porous hollow structure with a high-level hierarchy and a larger specific surface area, which is quite favorable for the rapid transport and adsorption of electrolyte ions inside the electrode material. On the other hand, benefiting from the hybridization of Ni3Si2O5(OH)4 with RGO, the electronic conductivity is significantly improved, thus facilitating more effective electron transport within the electrode matrix. To further examine the electrode kinetics, the EIS Nyquist plots of the Ni3Si2O5(OH)4/RGO and bare Ni3Si2O5(OH)4 microsphere electrodes are presented in Fig. 7f. Both of them show a depressed semicircle in the high-frequency region together with a straight line in the low-frequency region. In the high-frequency region, the intercept at the real axis and the diameter of the semicircle represent the equivalent series resistance (Rs) of the electrode and the charge transfer resistance (Rct) at the electrode/electrolyte interface, respectively [32][33][34]. Apparently, compared with the bare Ni3Si2O5(OH)4 microsphere electrode, the Ni3Si2O5(OH)4/RGO microsphere electrode possesses much smaller Rs and Rct values, which are indeed indicative of its better electronic conductivity and allow for faster electron transport within the electrode matrix [32][33][34]. In the low-frequency region, the straight line reflects the Warburg impedance, which describes the diffusive resistance of electrolyte ions [32][33][34]. The Ni3Si2O5(OH)4/RGO microsphere electrode shows a higher slope than the bare Ni3Si2O5(OH)4 microsphere electrode in the linear part, suggesting more rapid ion diffusion inside it [32][33][34]. These EIS findings further support and verify the above analyses of the excellent electrochemical performance of the Ni3Si2O5(OH)4/RGO microsphere electrode. Cycle life plays a key role in the application of electrode materials in supercapacitors, since little change in capacitance allows a supercapacitor to work steadily and safely [27]. The cyclic performance of the Ni3Si2O5(OH)4/RGO microsphere electrode was determined by repetitive GCD measurements for up to 5000 cycles at a current density of 6 A g−1 (Fig. 8a, b). The capacitance decays quite slowly (Fig. 8a), and the shape of the GCD curve for the last 10 cycles remains well preserved (Fig. 8b). The capacitance retention of the electrode reaches up to 97.6% after the whole test, which is preferable to or comparable with a number of previously reported nickel-based supercapacitor electrode materials (Table 1). Besides, the Ni3Si2O5(OH)4/RGO microsphere electrode after such tests was subjected to FESEM examinations as well, which disclosed that the hierarchical porous architecture with spherical morphology was free from significant collapse and deformation (Fig. 8c, d). The structural integrity during the repetitive charging/discharging process largely contributes to the capacitance retention and convincingly demonstrates the splendid cycling stability, durability, and application potential in practical supercapacitors.
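A rough sketch of how Rs and Rct can be read off Nyquist data, following the interpretation above: Rs is the high-frequency real-axis intercept and Rct the semicircle diameter. The impedance curve below is synthetic, and the knee-finding heuristic is an assumption for illustration, not a procedure from the paper.

```python
# Estimate R_s and R_ct from a synthetic Nyquist curve (high -> low frequency).
import numpy as np

def estimate_rs_rct(z_real, minus_z_imag):
    z_real = np.asarray(z_real)
    minus_z_imag = np.asarray(minus_z_imag)
    r_s = z_real[0]                              # high-frequency real-axis intercept
    apex = minus_z_imag.argmax()                 # top of the semicircle
    knee = apex + minus_z_imag[apex:].argmin()   # dip before the Warburg tail
    r_ct = z_real[knee] - r_s                    # semicircle diameter
    return r_s, r_ct

# Synthetic data: series resistance + charge-transfer semicircle + Warburg tail.
w = np.logspace(5, 1, 400)                       # rad/s, high to low frequency
r_s_true, r_ct_true, c_dl, sigma = 0.5, 2.0, 1e-3, 0.8
z = r_s_true + r_ct_true / (1 + 1j * w * r_ct_true * c_dl) + sigma * w**-0.5 * (1 - 1j)
rs, rct = estimate_rs_rct(z.real, -z.imag)
print(f"R_s ~ {rs:.2f} ohm, R_ct ~ {rct:.2f} ohm")  # roughly 0.5 and 2
```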
Conclusions

In summary, GO-encapsulated SiO2 microspheres were prepared by sonication-assisted interfacial self-assembly of tiny GO sheets on positively charged SiO2 microspheres. By employing the resulting SiO2/GO composite microspheres as the template and silicon source, Ni3Si2O5(OH)4/RGO composite hollow microspheres were synthesized in a one-pot hydrothermal process, possessing a unique hierarchical porous architecture with a large surface area. When used as a supercapacitor electrode material, the Ni3Si2O5(OH)4/RGO composite hollow microspheres delivered a maximum specific capacitance of 178.9 F g−1 at a current density of 1 A g−1, which was higher than that of the contrastive bare Ni3Si2O5(OH)4 hollow microspheres and bare RGO material, exhibiting enhanced supercapacitive properties. Of note, the Ni3Si2O5(OH)4/RGO microspheres showed salient rate capability and long-term cycling stability, maintaining 97.6% of the initial capacitance after continuous charge/discharge for up to 5000 cycles and displaying a remarkable supercapacitive advantage over many other reported nickel-based materials. These results testify that Ni3Si2O5(OH)4/RGO composite hollow microspheres are a promising candidate for high-performance energy storage devices and systems. Moreover, we anticipate that the present self-template synthetic strategy can be adopted to develop other metal silicate-based materials with distinct morphologies and structures for important applications in various fields.

Additional file

Additional file 1: The experimental details for synthesis of SiO2 microspheres and SiO2/GO composite microspheres. Figure S1
Two Decades of Colorization and Decolorization for Images and Videos

Shiguang Liu is with the College of Intelligence and Computing, Tianjin University, Tianjin 300350, P.R. China (e-mail: lsg@tju.edu.cn).

Colorization is a computer-aided process, which aims to give color to a gray image or video. It can be used to enhance black-and-white images, including black-and-white photos, old-fashioned films, and scientific imaging results. On the contrary, decolorization is to convert a color image or video into a grayscale one. A grayscale image or video refers to an image or video with only brightness information and no color information. It is the basis of some downstream image processing applications such as pattern recognition, image segmentation, and image enhancement. Different from image decolorization, video decolorization should not only consider the contrast preservation in each video frame, but also respect the temporal and spatial consistency between video frames. Researchers have been devoted to developing decolorization methods that balance spatial-temporal consistency and algorithm efficiency. With the prevalence of digital cameras and mobile phones, image and video colorization and decolorization have received more and more attention from researchers. This paper gives an overview of the progress of image and video colorization and decolorization methods in the last two decades.

I. INTRODUCTION

There are a large number of gray-scale or black-and-white images and video materials in various film, television, picture archive, medical, and other fields. Coloring them can greatly enhance their detail features and help one better identify and use them. Traditional manual coloring consumes a lot of manpower and material resources, and may not produce satisfactory results. Given a source image or video, colorization methods aim to automatically colorize the target gray image or video reasonably and reliably, thereby greatly improving the efficiency of this work. Image or video decolorization, also known as grayscale transformation, converts a three-channel color image or video into a single-channel grayscale one. Decolorization is actually a process of dimension reduction, so that the resulting grayscale image or video often contains only the most important information, which greatly saves storage space. A grayscale image or video can better display the texture and contour of objects. Decolorization can also be widely applied in the fields of image compression, medical image visualization, and image or video art stylization. Black-and-white digital printing of images, with the advantages of low cost and fast printing, is common in daily life; one important step of this process is decolorization, i.e., a color image sent to a monochrome printer must undergo a color-to-grayscale transformation. Below we summarize various image and video colorization and decolorization methods of the last two decades.

II. IMAGE COLORIZATION

Colorization refers to adding colors to a grayscale image or video, which is an ill-posed task because it is ambiguous to assign colors to a grayscale image or video without any prior knowledge. Therefore, at the early stage, user intervention was usually involved in image colorization. Later, automatic image colorization methods and deep-learning based colorization methods emerged.

A. Semi-Automatic Colorization

Semi-automatic colorization methods require some amount of user interaction.
Among them, color transfer methods ([87], [73], [1]) and image analogy methods ([22]) are widely used. In this case, a source image is provided as an example for coloring a given grayscale image, i.e., the target image. When the source image and the grayscale image share similar contents, impressive colorization results can be achieved. Nevertheless, these methods are labor intensive, since the source image and the target image must be manually matched. A luminance keying based method for transferring color to a grayscale image is described in Gonzalez and Woods [17]. Color and grayscale values are matched with a pre-defined look-up table. When assigning different colors to the same gray level, a few luminance keys must be simultaneously manipulated by the user for different regions, making it a tiresome process. As an extension of the color transfer method between color images [73], Welsh et al. [87] proposed transferring color from a source color image to a target grayscale image. It matches color information between the two images with swatches. Levin et al. [38] presented an efficient colorization method which allows users to interact with a few scribbles. With the observation that neighboring pixels in space-time sharing similar intensities should have similar colors, they formulate colorization as an optimization problem with a quadratic cost function.

Fig. 1. Illustration of the colorization method using optimization [38]. From left to right: an input grayscale image marked with color scribbles by the user, the colorization result by [38] (middle), and the ground truth.

As shown in Fig. 1, with a few color scribbles by the user, the indicated colors can be automatically propagated in the grayscale image. Nie et al. [69] developed a colorization method based on a local correlation based optimization algorithm. This method depends on the color correlativity between pixels in different regions, which limits its practical application. Nie et al. [70] presented an efficient grayscale image colorization method that achieves colorization quality comparable to [38] with much less computation time via quadtree decomposition based non-uniform sampling. Furthermore, this method greatly reduces the problem of color diffusion among different regions by designing a weighting function to represent intensity similarity in the cost function. This is an interactive colorization method, where the user needs to provide color hints by scribbling or seed pixels. Irony et al. [29] presented a novel colorization method that transfers color from an exemplar image. This method uses the higher-level context of each pixel instead of independent pixel-level decisions in order to achieve better spatial consistency in the colorization result. Specifically, with a supervised classification scheme, they estimate the best example segment for each pixel to learn color from. Then, by combining a neighborhood matching metric and a spatial filter for high spatial coherence, each pixel is assigned a color from the corresponding region in the example image. It is reported that this approach requires considerably fewer scribbles than previous interactive colorization methods (e.g., [38]).
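To make the example-based transfer idea concrete, here is a much-simplified sketch in the spirit of Welsh et al. [87]: luminance remapping followed by per-pixel chrominance borrowing. The full method also matches neighborhood statistics and swatches, which this sketch omits; the YUV conversion constants are standard, everything else is an illustrative assumption.

```python
# Simplified example-based color transfer: remap target luminance to the
# source's statistics, then borrow U,V from source pixels of similar luminance.
import numpy as np

def rgb_to_yuv(img):
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.492 * (b - y), 0.877 * (r - y)

def transfer_color(source_rgb, target_gray):
    y_s, u_s, v_s = rgb_to_yuv(source_rgb.astype(np.float64))
    # Luminance remapping: match the target's mean/std to the source's.
    t = (target_gray - target_gray.mean()) / (target_gray.std() + 1e-8)
    t = t * y_s.std() + y_s.mean()
    # Borrow U,V from an approximately nearest source luminance (via sorting).
    ys_flat = y_s.ravel()
    order = np.argsort(ys_flat)
    idx = np.clip(np.searchsorted(ys_flat[order], t.ravel()), 0, ys_flat.size - 1)
    src = order[idx]
    u_t = u_s.ravel()[src].reshape(t.shape)
    v_t = v_s.ravel()[src].reshape(t.shape)
    # Reassemble RGB from the target's own luminance plus borrowed chrominance.
    y = target_gray.astype(np.float64)
    r = y + v_t / 0.877
    b = y + u_t / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```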
Yatziv and Sapiro [90] proposed an image colorization method via chrominance blending. The scheme is based on the concept of color blending derived from a weighted distance function computed from the luminance channel. This method is fast and allows the user to interactively colorize a grayscale image by providing a reduced set of chrominance scribbles. It can also be extended to recolorization and brightness change (Fig. 2). As shown in Fig. 2, given a target color image (a), the goal is to recolorize the yellow car into a darker one. Firstly, the blending medium is defined by simply marking areas to be changed and unchanged with scribbles (b). Then, the marks are propagated to form a grayscale matte (c). The brightness of the target image is changed by subtracting the grey-level matte from the intensity channel (d). (e) and (f) show further recolorization results obtained by adding the grey-level matte to the Cb and Cr channels, respectively. Image color can be viewed as a highly correlated vector space. Abadpour and Kasaei [1] realized grayscale colorization by applying a PCA (Principal Component Analysis) based transformation. They propose a category of colorizing methods that generate the color vector corresponding to a grayscale value as a function. This method is significantly faster than previous approaches while producing visually acceptable colorization results. It can also be extended to recolorization. Nevertheless, this method is restricted by complicated segmentation, which is tiresome when performed with the magic select tool in Adobe Photoshop. Luan et al. [63] proposed an interactive system for users to easily colorize natural images. The colorization procedure consists of two stages: color labeling and color mapping. In the first stage, pixels that should have similar colors are grouped into coherent regions, while in the second stage color mapping is applied to generate vivid colorization effects by assigning colors to a few pixels in each region. It is very tedious to colorize texture with previous methods, since each tiny region inside the texture needs a new color. In contrast, this method handles texture by grouping both neighboring pixels sharing similar intensity and remote pixels with similar texture (see Fig. 3). This method is effective for natural image colorization. However, the user usually has to provide multiple strokes on similar patterns with different orientations and scales in order to produce fine colorization results. Liu et al. [58] proposed an example-based colorization method that is aware of illumination differences between the target grayscale image and the source color image. Firstly, an illumination-independent intrinsic reflectance map of the target scene is recovered from multiple color references collected by web search. Then, the grayscale versions of the reference images are employed to decompose the target grayscale image into its intrinsic reflectance and illumination components. The color is transferred from the color reflectance map to the grayscale reflectance image. By relighting with the illumination component of the target image, the final colorization result is produced. This method needs to search suitable source images for reference via web search. Liu et al. [42] presented a gray-scale image colorization method controlled by a single parameter. The polynomial fitting models of the histograms of the source image and the grayscale image are computed by linear regression, respectively. With the user-assigned order of the polynomials, the source image and the grayscale image can be automatically matched. By transferring between the corresponding regions of the source image and the gray-scale image, colorization is finally achieved.
Quang et al. [72] proposed an image and video colorization method based on reproducing kernel Hilbert spaces (RKHS). This method can produce impressive colorization results. Nevertheless, it requires manual initialization for different regions, which is time-consuming if the grayscale image contains many different contents.

B. Automatic Colorization

The above colorization methods require the user to perform colorization manually, either by providing a source image or by using scribbles and color seeds for interaction. Since there is usually no suitable correspondence between color and local texture, automatic colorization is necessary. Li and Hao [39] proposed an automatic colorization approach based on locally linear embedding. Given a source color image and a target grayscale image, this method clips both of them into overlapping patches, which are supposed to be distributed on a manifold [13], [5]. For each patch, its neighborhood in the training patches is estimated and its chromatic information is predicted by manifold learning [74]. Accounting for multimodality, Charpiat et al. [7] predict the probability distribution of all possible colors for each pixel of the image to be colored, rather than selecting the most probable color locally. Then, the graph cut technique is employed to maximize the probability of the whole colored image globally. Morimoto et al. [66] proposed an automatic colorization method using multiple images collected from the web. Firstly, this method chooses images with a scene structure similar to the target grayscale image I_m as the source images. The gist scene descriptor, a feature vector expressing the global scene in a lower dimension, is used to aggregate oriented edge responses at multiple scales into coarse spatial bins. Then the distances between the gist of I_m and those of the images from the web are computed. The most similar images are chosen as source images, which are used for colorization. Here, the transfer method of [87] was used for colorization. However, this method is restricted by the search results among the images collected from the web, and may produce unnatural results when the source images are structurally similar but semantically different. To this end, Liu and Zhang [50] proposed an automatic grayscale image colorization method via histogram regression. Given a source image and a target grayscale image, locally weighted regression is performed on both images to obtain their feature distributions. Then, these features are automatically matched by aligning the zero-points of the histograms. Thus, the grayscale image is colorized in a weighted manner. Figure 4 shows a colorization result by this method. Although this method can achieve confident colorization results, it may fail for images with strong texture patterns or varied lighting effects (e.g., shadows and highlights). Liu and Zhang [51] further proposed a colorization method based on a texture map. Assuming that a source color image with content similar to the target grayscale image can be provided by the user, this method is aware of both the luminance and texture information of images, so that more convincing colorization results can be produced. Specifically, given a source color image and a target grayscale image, their respective spatial maps are computed. Note that the spatial map is a function of the original image, indicating the luminance spatial distribution for each pixel.
Fig. 5. An example of stroke-preserving manga colorization [71]: (a) the target manga drawing with user scribbles, (b) the colorization result, and (c) the enlarged views. Note that a color-bleeding algorithm is utilized here, so that even if the user provides careless scribbles, the leaf region can still be accurately separated from the tree branches.

Then, by performing locally weighted linear regression on the histogram of the quantized spatial map, a series of spatial segments is computed. Within each segment, the luminance of the target grayscale image is automatically mapped to color values. Finally, colorization results are yielded through local luminance-color correspondence and global luminance-color correspondence between the source color image and the target grayscale image. Beyond natural images, Visvanathan et al. [84] automatically colorized pseudocolor images by gradient-based value mapping. This method targets visualizing pixel values and their local differences for scientific analysis.

C. Cartoon Colorization

Some researchers have also extended colorization techniques to cartoon images. Sýkora et al. [81] proposed a semi-automatic, fast and accurate segmentation method for black-and-white cartoons. It allows the user to efficiently apply ink to aged black-and-white cartoons. The inking process is composed of four stages, namely segmentation, marker prediction, color luminance modulation, and final composition of foreground and background layers. Qu et al. [71] proposed a method for colorizing black-and-white manga (comic books in Japanese) containing a large number of strokes, hatching, halftoning, and screening. Given scribbles by the user on the target grayscale manga drawing, Gabor wavelet filters are employed to measure the pattern continuity, and thereby a local, statistics-based pattern feature can be estimated. Then, with the level set technique, the boundary is propagated to monitor the pattern continuity. In this way, areas with open boundaries or multiple disjointed regions with similar patterns can be well segmented. Once the segmented regions are obtained, conventional colorization methods can be applied for color replacement, color preservation, and pattern shading. Figure 5 shows an example of stroke-preserving manga colorization by this method.

D. Deep Colorization

Cheng et al. [8] proposed a deep neural network model to achieve fully automatic image colorization by leveraging a large set of source images from different categories (e.g., animal, outdoor, indoor) with various objects (e.g., tree, person, panda, and car). This method consists of two stages: (1) training a neural network, and (2) colorizing a target grayscale image with the learned neural network. Larsson et al. [35] trained a model to predict per-pixel color histograms for colorization. This method trains a neural architecture in an end-to-end manner by considering semantically meaningful features of varying complexity. Then, a color histogram prediction framework is developed to treat the uncertainty and ambiguities inherent in colorization so as to avoid jarring artifacts.

Fig. 6. The framework of the automatic colorization method via learning representations [35].

As shown in Fig. 6, given a grayscale image, spatially localized multilayer slices of a deep convolutional architecture (VGG) are used as per-pixel descriptors. The system then estimates hue and chroma distributions for each pixel p from its hypercolumn descriptor. Finally, at test time, the estimated distributions are used for color assignment. Zhang et al. [97] treat image colorization as a classification problem, considering the underlying uncertainty of this task. They leverage class rebalancing during training to increase the diversity of colors. At test time, this method is performed as a feed-forward pass in a CNN trained on a million color images. This method demonstrates that, with a deep CNN and a carefully tuned loss function, the colorization task can generate results close to real color photos. Iizuka et al. [28] proposed an automatic, CNN-based grayscale image colorization method combining both global priors and local image features. The proposed network architecture is able to jointly extract global and local features from an image and fuse them for colorization. Specifically, their model is composed of four parts, namely a low-level features network, a mid-level features network, a global features network, and a colorization network. Various evaluation experiments were performed to verify this method, including a user study and many historical, hundred-year-old black-and-white photographs. Figure 7 shows an example of this method. Zhang et al. [98] propose a CNN framework for user-assisted image colorization. Given a target grayscale image and sparse, local user edits, this method can automatically produce convincing colorization results. By training on a large amount of image data, this method learns to propagate user edits by merging both low-level cues and high-level semantic information. This method helps non-professionals to design colorful works, since it can achieve fine colorization results even with random user inputs. Deshpande et al. [11] learned a low-dimensional smooth embedding of color fields with a variational autoencoder (VAE) for grayscale image colorization. A multi-modal conditional model between the gray-level features and the low-dimensional embedding is learned to produce diverse colorization results. The loss functions are specially designed for the VAE decoder to avoid blurry colorization results and to respect the uneven distribution of pixel colors. This method has the potential to handle other ambiguous problems, since low-dimensional embeddings with multi-modal conditional models are able to predict diverse outputs. However, high spatial detail is not taken into account in this method.
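A minimal sketch of the classification view of colorization used by Zhang et al. [97] follows: a toy CNN (not their actual architecture) predicts per-pixel logits over a coarse, hypothetical quantization of the chrominance plane, trained with cross-entropy.

```python
# Toy colorization-as-classification model: L channel in, per-pixel
# softmax logits over quantized chrominance bins out.
import torch
import torch.nn as nn

N_BINS = 64  # hypothetical coarse quantization of the chrominance plane

class TinyColorizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, N_BINS, 1),       # per-pixel logits over color bins
        )

    def forward(self, luminance):           # (B, 1, H, W)
        return self.net(luminance)          # (B, N_BINS, H, W)

model = TinyColorizer()
logits = model(torch.randn(2, 1, 64, 64))
# Cross-entropy against quantized ground-truth color labels; class
# rebalancing (as in [97]) would reweight rare, saturated bins.
labels = torch.randint(0, N_BINS, (2, 64, 64))
print(nn.CrossEntropyLoss()(logits, labels).item())
```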
III. IMAGE DECOLORIZATION

Image decolorization is often used as a preprocessing step for downstream image processing tasks such as segmentation, recognition, and analysis. Recently, decolorization has attracted more and more attention from researchers. In the early stage, the three channels R, G, and B were represented by a single channel, or only the brightness channel was used to represent the grayscale image. However, these simple color removal methods suffer from contrast loss in the gray image. To this end, researchers have proposed local and global decolorization methods in order to preserve the contrast of color images in the resulting grayscale images.

A. Early Decolorization Methods

The early image decolorization methods are simple and directly process the (R, G, B) channels of a color image in the RGB color space. These methods include the component method, the maximum method, the average method, and the weighted average method. The component method uses one of the (R, G, B) channels of the color image as the corresponding pixel value of the grayscale image, written as G1(i, j) = R(i, j), G2(i, j) = G(i, j), or G3(i, j) = B(i, j), where (i, j) is the pixel coordinate in the image. Note that any one of G1, G2, G3 can be selected as needed. The maximum method takes the maximum value of (R, G, B) in the color image as the gray value of the grayscale image, Gray(i, j) = max(R(i, j), G(i, j), B(i, j)). The average method averages the three component values of (R, G, B) in the color image to obtain the gray value, Gray(i, j) = (R(i, j) + G(i, j) + B(i, j)) / 3. The weighted average method uses a weighted average of the three components with different weights as the grayscale image, Gray(i, j) = wR·R(i, j) + wG·G(i, j) + wB·B(i, j). In addition to using the color components of the RGB space, it is also common to employ the brightness channel of other color spaces to represent the gray value of a grayscale image. For example, Hunter [27] uses the L channel of the Lαβ space to represent a grayscale image, while Wyszecki and Stiles [88] adopt the Y component of the YUV color space. In the YUV color space, the Y component is the brightness of pixels, reflecting the brightness level of an image. According to the relationship between the RGB color space and the YUV color space, the mapping between the brightness y and the three color components can be established as y(i, j) = 0.299·R(i, j) + 0.587·G(i, j) + 0.114·B(i, j). The luminance value y is used to represent the gray value of the image. Based on this observation, Nayatani [67] proposed a color mapping model with independent inputs, i.e., the three components are input independently and the weights of the corresponding components are set as needed.
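The early conversions above are simple enough to write out directly; in the following minimal sketch, `img` is assumed to be an H x W x 3 RGB array, and the weighted-average weights are illustrative placeholders.

```python
# The early decolorization conversions written out explicitly.
import numpy as np

def component(img, channel=0):
    return img[..., channel].astype(np.float64)       # G1, G2 or G3

def maximum(img):
    return img.max(axis=-1).astype(np.float64)        # max(R, G, B)

def average(img):
    return img.mean(axis=-1)                          # (R + G + B) / 3

def weighted_average(img, w=(0.30, 0.59, 0.11)):      # example weights
    return img.astype(np.float64) @ np.asarray(w)     # wR*R + wG*G + wB*B

def yuv_luminance(img):
    return img.astype(np.float64) @ np.array([0.299, 0.587, 0.114])  # Y of YUV
```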
These early methods are easy to implement; however, they cause losses in image contrast, saturation, exposure, etc. To this end, researchers explored decolorization methods with higher accuracy and efficiency, including local decolorization methods, global decolorization methods, and deep learning based decolorization methods.

B. Local Decolorization Methods

Local decolorization methods usually use different strategies when solving the mapping model from a color image to a grayscale one. The strategy treats different pixels or color blocks differently, and increases the local contrast by strengthening local features. Bala and Eschbach [4] proposed a decolorization method that locally enhances the edges and contours between adjacent colors by adding high-frequency chrominance information into the luminance channel. Specifically, a spatial high-pass filter weighting the output with a luminance-dependent term is applied to the chrominance channels. The result is then added to the luminance channel. Figure 8 shows a flow chart of this method.

Fig. 8. The flow chart of the spatial color-to-grayscale transform method [4]. Here the Lab color space is taken as an example. Note that "HPF" represents a high-pass filter.

Neumann et al. [68] view the color and luminance contrasts of an image as a gradient field and resolve the inconsistency of the field. They chose locally consistent color gradients and performed 2D integration to produce the grayscale image. Since its complexity is linear in the number of pixels, this method is simple yet very efficient, which makes it suitable for handling high-resolution images. Smith et al. [76] proposed a perceptually accurate decolorization method for both images and videos. This approach consists of two steps: (1) globally assigning gray values and determining the color ordering, and (2) locally improving the grayscale to preserve the contrast of the input color image. The Helmholtz-Kohlrausch color appearance effect is introduced to estimate distinctions between isoluminant colors. They also designed a multiscale local contrast enhancement strategy to produce a faithful grayscale result.
Note that this method strikes a good balance between a fully automatic step (the first) and user assistance (the second), making it suitable for dealing with various images (e.g., natural images, photographs, artistic works, and business graphics). Figure 9 shows that, for a challenging image consisting of equiluminant colors, this method is able to predict the H-K effect that makes a more colorful blue appear lighter than a duller yellow. A limitation of this approach comes from the locality of the second step, which may fail to preserve chromatic contrast between non-adjacent regions and can lead to temporal inconsistencies. Lu et al. [60] proposed a decolorization method aiming to preserve the original color contrast as far as possible. A bimodal contrast-preserving function is designed to constrain local pixel differences, and a parametric optimization approach is employed to preserve the original contrast. Since a strict color ordering is only weakly justified, they relax the color order constraint and seek to better maintain color contrast and enhance the visual distinctiveness of edges. Nevertheless, this method cannot fully preserve the global contrast of the image. Moreover, since the gray image is produced by solving the energy equation in an iterative manner, the efficiency of this algorithm is relatively low. Zhang and Liu [96] presented an efficient image decolorization method via perceptual group difference (PGD) enhancement. They view perceptual groups instead of individual image pixels as the elements of human perception. Based on this observation, they perform decolorization for different groups in order to maximally maintain the contrast between different visual groups. A global color-to-gray mapping is employed to estimate the grayscale of the whole image. Experimental results showed that, with PGD enhancement, this approach is capable of achieving better visual contrast effects. Local decolorization methods may, however, distort the appearance of regions with constant colors and therefore lead to undesired haloing artifacts.

C. Global Decolorization Methods

Global decolorization methods perform decolorization on the whole image in a global manner, including linear decolorization and nonlinear decolorization techniques. Linear decolorization methods. Gooch et al. [18] proposed Color2Gray, a saliency-preserving decolorization method. This method operates in the CIE L*a*b* color space instead of the traditional RGB color space. Considering that the human visual system is sensitive to change, they preserve relationships between neighboring pixels rather than representing absolute pixel values. The chrominance and luminance changes in a source image are transferred to changes in the target grayscale image so as to produce images maintaining the salience of the source color images. Grundland and Dodgson [19] proposed an efficient, linear decolorization approach by adding a fixed amount of chrominance to lightness. To achieve a perceptually plausible decolorization result, Kuk et al. [34] proposed a color-to-grayscale conversion method that takes into account both local and global contrast. They encode both local and global contrast into an energy function via a target gradient field, which is constructed from two types of edges: (1) edges connecting each pixel to its neighboring pixels, and (2) edges connecting each pixel to predetermined landmark pixels. Finally, they formulate the decolorization problem as reconstructing a grayscale image from the gradient field, which is solved by a fast 2D Poisson solver.
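As a schematic of the linear recipe of Grundland and Dodgson [19] described above, the sketch below adds a fixed amount of chrominance, projected onto a dominant axis, to the lightness. Finding that axis by PCA over the chrominance plane is a simplification of their pairwise-sampling scheme, so this is only a sketch under stated assumptions, not their algorithm.

```python
# Schematic linear decolorization: lightness plus a fixed fraction of the
# chrominance projected onto its dominant (PCA) axis.
import numpy as np

def linear_decolorize(img, amount=0.3):
    rgb = img.astype(np.float64) / 255.0
    y = rgb @ np.array([0.299, 0.587, 0.114])        # lightness proxy
    u = 0.492 * (rgb[..., 2] - y)                    # chrominance plane (U, V)
    v = 0.877 * (rgb[..., 0] - y)
    chroma = np.stack([u.ravel(), v.ravel()], axis=1)
    chroma -= chroma.mean(axis=0)
    # Dominant chrominance axis = leading eigenvector of the covariance.
    _, vecs = np.linalg.eigh(np.cov(chroma.T))
    axis = vecs[:, -1]
    projection = (chroma @ axis).reshape(y.shape)
    gray = y + amount * projection
    gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-8)
    return (255 * gray).astype(np.uint8)
```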
Nonlinear decolorization methods. Kim et al. [33] presented a fast and robust decolorization algorithm via a global mapping that is a nonlinear function of the lightness, chroma, and hue of colors. Given a color image, the parameters of the function are optimized so that the resulting grayscale image respects the feature discriminability, lightness, and color ordering of the input color image. Ancuti et al. [2] introduced a fusion-based decolorization technique. The inputs of their method are the three independent RGB channels and an additional image that conserves the color contrast. The weights are based on three different forms of local contrast: a saliency map to preserve the saliency of the original color image, a second weight map taking advantage of well-exposed regions, and a chromatic weight map enhancing the color contrast. By enforcing a more consistent ordering of gray shades, this strategy can better preserve the global appearance of the image. Ancuti et al. [3] further presented a color-to-gray conversion method aiming to enhance the contrast of images while preserving the appearance and quality of the original color image. They intensify the monochromatic luminance with a mixture of the saturation and hue channels in order to respect the original saliency while enhancing the chromatic contrast. In this way, a novel spatial distribution can be produced which is capable of better discriminating the illuminated regions and color features. Liu et al. [43] developed a decolorization model based on gradient correlation similarity (Gcs) so as to reliably maintain the appearance of the source color image. The gradient correlation is employed as a criterion to design a nonlinear global mapping in the RGB color space. Figure 10 shows a comparison between this method and other image decolorization methods, including Smith et al. [76], Kim et al. [33], Lu et al. [60], and Lu et al. [61].

Fig. 10. From left to right: the input color image and the decolorization results of Smith et al. [76], Kim et al. [33], Lu et al. [60], Lu et al. [61], and Liu et al. [43].

It can be seen from the results that this method is able to better preserve features of the source color image, keeping them discriminable in the grayscale image; it also has a good ability to maintain a desired color ordering in the color-to-gray conversion. Liu et al. [44] further proposed a color-to-grayscale method by introducing the gradient magnitude [89].
Song et al. [77] regard decolorization as a labeling problem that maintains the visual cues of a color image in the resulting grayscale image. They define three types of visual cues, namely color spatial consistency, image structure information, and color channel perception priority, that can be extracted from a color image. Then, they cast color-to-gray conversion as a visual cue preservation process based on a probabilistic graphical model, solved via integral minimization. Most of the above image decolorization methods attempt to preserve as much visual appearance and color contrast as possible; however, little attention was devoted to the speed of decolorization. The efficiency of most methods is lower than that of the standard procedure (e.g., the Matlab built-in rgb2gray function). To this end, Lu et al. [61] proposed a real-time contrast-preserving decolorization method. They achieved this goal with three main ingredients: a simplified bimodal objective function with a linear parametric grayscale model, a fast non-iterative discrete optimization, and a sampling-based P-shrinking optimization strategy. The running time of this method is constant, O(1), independent of image resolution.

Fig. 11. A comparison of different decolorization results with running times [61]. From left to right: the input color image, the result of the built-in Matlab function rgb2gray (8 ms), the result of [60] (1,102 ms), and the result of [61] (30 ms). Note that all the methods were implemented in Matlab.

As shown in Fig. 11, this method takes only 30 ms (the rightmost result) to decolorize a one-megapixel color image, which is comparable with the built-in Matlab rgb2gray function (the second result from the left), while achieving a better color-to-gray conversion that is visually similar to a compelling contrast-preserving decolorization method [60] (the second result from the right). Lu et al. [62] further presented an optimization framework for image decolorization that preserves the color contrast of the original color image as much as possible. A bimodal objective function is used to relax the restrictive order constraint for color mapping. Then, they design a solver that automatically selects suitable grayscales via global contrast constraints. They also propose a quantitative perceptual-based metric, the E-score, to measure contrast loss and content preservation in the resulting grayscale images. The E-score jointly considers two measures, CCPR (Color Contrast Preserving Ratio) and CCFR (Color Content Fidelity Ratio), and is written as E-score = (2 · CCPR · CCFR) / (CCPR + CCFR). It is reported that this is among the first attempts in the color-to-gray field to quantitatively evaluate decolorization results.
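Given measured CCPR and CCFR values, the E-score above is a one-liner; the input values below are hypothetical.

```python
# E-score: harmonic mean of contrast preservation (CCPR) and content
# fidelity (CCFR), as defined above.
def e_score(ccpr, ccfr):
    return 2 * ccpr * ccfr / (ccpr + ccfr)

print(e_score(0.92, 0.88))  # ~0.90 for a hypothetical decolorization result
```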
While the above decolorization methods suffer from a robustness problem, i.e., they may fail to accurately convert iso-luminant regions of the original color image, the rgb2gray() function in Matlab works well in practical applications. Song et al. [78] therefore proposed a robust decolorization method by modifying the rgb2gray() function.

Fig. 12. The top row: the original color images. The middle row: failure results of current decolorization methods (from left to right: Gooch et al. [18], Grundland and Dodgson [19], Smith et al. [76], Kim et al. [33], Ancuti et al. [3], Lu et al. [60], and Lu et al. [61]). The bottom row: results by Song et al. [78], produced by modifying rgb2gray() with adjusted weights for the R, G, and B channels.

Figure 12 shows that this method is able to realize color-to-gray conversion for iso-luminant regions of an image, while previous methods, including Gooch et al. [18], Grundland and Dodgson [19], Smith et al. [76], Kim et al. [33], Ancuti et al. [3], Lu et al. [60], and Lu et al. [61], fail at this task. In this method, they avoid indiscrimination in iso-luminant regions by adaptively selecting channel weights for each specific image rather than using fixed channel weights for all cases. Therefore, this method is able to maintain multi-scale contrast in both the spatial and range domains. Sowmya et al. [80] presented a color-to-gray conversion algorithm with a weight matrix corresponding to the chrominance components. The weight matrix is obtained by reconstructing the chrominance data matrix through singular value decomposition (SVD). Ji et al. [31] presented a global image decolorization approach with a variant of the difference-of-Gaussians band-pass filter, called luminance filters. Typically, the filter has high responses in regions whose colors differ from their surroundings within a certain band. The grayscale value is then produced after the luminance passes through a series of band-pass filters. Because this approach is linear in the number of pixels, it is efficient and easy to implement.

D. Deep Learning Based Decolorization Methods

By training partial differential equations (PDEs) on 50 input/output image pairs, Lin et al. [40] constructed a mapping model for the task of color-to-gray conversion. It is reported that their learned PDEs can yield decolorization results similar to those of Gooch et al. [18]. Hou et al. [23] proposed the Deep Feature Consistent Deep Image Transformation (DFC-DIT) framework for one-to-many mapping image processing tasks (e.g., downscaling, decolorization, and tone mapping). DFC-DIT performs transformation between images with a CNN as a non-linear mapper respecting the deep feature consistency principle, which is enforced with another pretrained and fixed deep CNN. As shown in Fig. 13, this system is comprised of two networks, a transformation network and a loss network. The former converts an input to an output, and the latter serves to compute the feature perceptual loss for training the transformation network.

Fig. 14. The flow chart of the decolorization method of [99]. This framework is composed of four parts: a low-level features network, a local semantic feature network, a global feature network, and a decolorization network. The four components are tightly coupled so as to learn a complex color-to-gray mapping. The low-level features network uses four groups of convolution layers to extract low-level features from the input image. With the FCN (Fully Convolutional Networks) structure, the local semantic feature network acquires instance semantic information with the semantic tags of an image, such as dog and airplane. The global feature network serves to produce global image features by processing the low-level features with several convolution layers. Finally, the decolorization network with the Euclidean loss outputs the resulting grayscale image.

Considering that local decolorization methods are not accurate enough in processing local pixels, leading to local artifacts, while global methods may fail to treat local color blocks, Zhang and Liu [99] proposed a novel image color-to-gray conversion method combining local semantic features and global features. In order to preserve the color contrast between adjacent pixels, a global feature network is developed to learn the global features and spatial correlation of an image. On the other hand, in order to preserve the contrast between different object blocks, they take local semantic features of images and a fine classification of pixels into account while learning deep image features. Finally, by fusing both the local semantic features and the global features, this method performs better in terms of contrast preservation than state-of-the-art decolorization approaches. Figure 14 gives a flow chart of this method. According to the human visual mechanism, exposure plays a critical role in human visual perception, e.g., low-exposure and over-exposure areas usually easily catch the attention of an observer. However, exposure is missed in existing decolorization methods.
To this end, Liu and Zhang [54] proposed an image decolorization approach that fuses local features and exposure features within a CNN framework. This framework consists of a local feature network and a rough classifier. The local feature network aims to learn the local semantic features of the color image so as to maintain the contrast among different color blocks, while the rough classifier distinguishes three types of exposure states of an image: low-exposure, normal-exposure, and over-exposure. Figure 15 shows the ability of this method to treat images with different exposures.

Fig. 15. A comparison of the results with and without the exposure feature network [54]. From left to right are input images (a), results without (b) and with (c) the exposure feature network. The rows from top to bottom represent low-exposure, over-exposure, and normal-exposure.

IV. VIDEO COLORIZATION

People are willing to watch a colorful film instead of a grayscale one. Gone with the Wind in 1939 is one of the first colorized films [16], and it was popular with the audience. However, it is challenging to obtain a convincing video colorization because of the multimodality of the solution space and the requirement of global spatiotemporal consistency [36]. Video colorization is also inherently more challenging than image colorization: unlike single image colorization, video colorization should also satisfy temporal coherence. In view of this point, the above single image colorization methods cannot be directly used for video colorization. Current works [30], [85], [52], [65], [36] realize video colorization by propagating the color information either from a color reference frame or from sparse user scribbles to the whole target grayscale video. Vondrick et al. [85] regard video colorization as a self-supervised learning problem for visual tracking. To this end, they learn to colorize grayscale videos by copying colors from a reference frame, exploiting the temporal coherency of color, rather than predicting the color directly from the grayscale frame. Jampani et al. [30] proposed the Video Propagation Network (VPN), which processes video frames in an adaptive manner. The VPN consists of a temporal bilateral network (TBN) and a spatial network (SN). The TBN aims for dense and video-adaptive filtering, while the SN is used to refine features and increase flexibility. This method propagates information forward without accessing future frames. Experiments showed that, given the color image for the first video frame, this method can propagate the color to the entire target grayscale video. It can also be used for video processing tasks requiring the propagation of structured information (e.g., video object segmentation and semantic video segmentation).
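A deliberately simplified sketch of reference-frame color propagation in the spirit of [85] follows: chrominance is copied from the best-matching luminance patch of the previous colored frame, with plain block matching standing in for the learned pointer mechanism. The brute-force loop is meant to be read, not benchmarked; every name and parameter here is an illustrative assumption.

```python
# Simplified frame-to-frame chrominance propagation via block matching.
import numpy as np

def propagate_colors(prev_y, prev_u, prev_v, cur_y, patch=3, search=4):
    """Copy U,V from the previous frame's best-matching luminance patch."""
    h, w = cur_y.shape
    r = patch // 2
    py = np.pad(prev_y, r + search, mode="edge")
    cy = np.pad(cur_y, r, mode="edge")
    u_out = np.empty_like(prev_u)
    v_out = np.empty_like(prev_v)
    for i in range(h):
        for j in range(w):
            target = cy[i:i + patch, j:j + patch]
            best, best_cost = (i, j), np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    cand = py[i + search + di:i + search + di + patch,
                              j + search + dj:j + search + dj + patch]
                    cost = np.abs(cand - target).sum()   # SAD patch distance
                    if cost < best_cost:
                        best_cost, best = cost, (i + di, j + dj)
            bi = min(max(best[0], 0), h - 1)
            bj = min(max(best[1], 0), w - 1)
            u_out[i, j], v_out[i, j] = prev_u[bi, bj], prev_v[bi, bj]
    return u_out, v_out
```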
Meyer et al. [65] proposed a deep learning framework for video color propagation. This method consists of a short range propagation network (SRPN), a longer range propagation network (LRPN), and a fusion and refinement network (FRN). The SRPN propagates colors frame-by-frame, ensuring temporal stability; its input is two consecutive grayscale frames, and it outputs an estimated warping function used to transfer the colors of the previous frame to the next one. The LRPN introduces semantic information by matching deep features extracted from the frames, which are then used to sample colors from the first frame. Besides long-range color propagation, this strategy also helps restore colors that are missing because of occlusion. Finally, the FRN, a CNN, combines the above two stages for fusion and refinement. Figure 16 gives an overview of the framework of this method.

Fig. 16. An overview of the deep learning framework for video color propagation [65]. Both a short range network and a long range color propagation network are used to propagate colors in a video. The results of these two networks and the target grayscale image together constitute the input to the fusion and refinement network, which outputs the final color frame.

Lei and Chen [36] proposed a fully automatic, self-regularized approach to video colorization with diversity. As shown in Fig. 17, this method is comprised of a colorization network f for video frame colorization and a refinement network g for spatiotemporal color refinement. A diversity loss is designed to allow the network to generate colorful videos with diversity. Moreover, the diversity loss also makes the training process more stable.

Fig. 17. The framework of the automatic video colorization method with self-regularization and diversity [36]. This model consists of a colorization network f and a refinement network g. f is used to colorize each grayscale video frame and outputs candidate colorization images. By inputting the i-th colorized candidate images and two confidence maps, g produces a refined video frame.

V. VIDEO DECOLORIZATION

As for video decolorization, most works extend image decolorization methods to process video frames one by one, which easily leads to the flicker phenomenon due to spatiotemporal inconsistency. Video decolorization should take into account both the contrast preservation in each video frame and the temporal consistency between video frames. Since the method of Smith et al. [76] can preserve consistency by avoiding changes in color ordering, they extended their two-step image grayscale transformation method to video decolorization. Owing to its ability to maintain consistency over varying palettes, Ancuti et al. [2] applied their fusion-based decolorization technique to video as well. Given a video, Ancuti et al. [3] searched the entire sequence for the color palette that appears in each image (mostly identified with the static background). In this way, they extended their saliency-guided decolorization approach to video decolorization. For a video with a relatively constant color palette, they computed a single offset angle value for the middle frame of the video. Song et al. [79] proposed a real-time video decolorization method using bilateral filtering. Considering that the human visual system is more sensitive to luminance than to chromaticity values, they recover the color contrast/detail loss in the luminance channel. They represent the loss as a residual image computed by the bilateral filter.
The resulting grayscale image is the sum of the residual image and the luminance of the original color image. Since the residual image is robust to temporal variations, this method can preserve the temporal coherence between video frames. Moreover, as the kernel of the bilateral filter can be set as large as the input image, this method is efficient and can run in real time on a 3.4 GHz i7 CPU. Tao et al. [82], [83] defined the decolorization proximity to measure the similarity of adjacent frames and presented a temporally coherent video decolorization method using proximity optimization. They treat frames with low, medium, and high proximity separately in order to better preserve the quality of these three types of frames. Finally, with a decolorization Gaussian mixture model (DC-GMM), they classify the frames and assign appropriate decolorization strategies to them according to their decolorization proximity. Figure 18 shows an overview of this method.

Fig. 18. An overview of video decolorization using visual proximity coherence optimization [83]. Firstly, the decolorization proximity for each frame is estimated. The DC-GMM classifier is then used to select a specific decolorization strategy, and the frame is finally decolorized into grayscale using the selected strategy. With the DC-GMM, video frames are classified into three categories, i.e., high-proximity, median-proximity, and low-proximity. Finally, a salience C2G method is employed to maintain temporal coherence and alleviate flickering between frames.

Most of the existing video decolorization methods directly apply image decolorization algorithms to video frames, which easily causes temporal inconsistency and the flicker phenomenon. Moreover, there may be similar local content features between video frames, which can be used to avoid redundant information. To this end, Liu and Zhang [53] introduced deep learning into the field of video decolorization using a CNN and a long short-term memory (LSTM) neural network. To the best of our knowledge, this is among the first attempts to perform video decolorization using deep learning techniques. A local semantic content encoder was designed to learn the shared local content of a video. The local semantic features are further refined by a temporal feature controller via a bi-directional recurrent neural network with long short-term memory units. Figure 19 shows an overview of this method.

Fig. 19. The framework of the video decolorization method based on the CNN and LSTM neural network [53]. Given a video sequence {Ct | t = 1, 2, 3, ..., N}, it is processed into sequence images. Then the local semantic content encoder extracts deep features of these sequence images, adjusts the scale of the feature maps, and inputs them to the temporal features controller. After the output feature maps are fed into the deconvolution-based decoder, the resulting grayscale video sequence {Gt | t = 1, 2, 3, ..., N} is produced.
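As an illustrative way (not taken from the cited papers) to quantify the flickering discussed above, one can measure the mean absolute intensity change between consecutive grayscale frames; large values on static content indicate temporal inconsistency.

```python
# Illustrative temporal-flicker measure for a decolorized video.
import numpy as np

def mean_flicker(frames):
    """frames: sequence of H x W grayscale frames; returns mean |I_t+1 - I_t|."""
    frames = np.asarray(frames, dtype=np.float64)
    return np.abs(np.diff(frames, axis=0)).mean()

# Per-frame independent decolorization vs. a temporally coherent method
# would then be compared as: mean_flicker(independent) > mean_flicker(coherent).
```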
VI. CONCLUSION AND FUTURE WORK

This paper summarized the progress of colorization and decolorization methods for images and videos over the last two decades. According to whether user interaction is involved, we classified image colorization methods into two categories: semi-automatic and automatic. For image decolorization methods, we first discussed the early approaches, including the component method, the maximum method, the average method, and the weighted average method, and then summarized the existing methods from the perspectives of global and local decolorization. Finally, we introduced the latest deep learning based colorization and decolorization approaches for images and videos.

Although convincing results can be achieved by current colorization and decolorization methods, some challenges still remain. For example, a user-friendly image and video colorization and decolorization system is still needed. It is also necessary to further improve the computational efficiency of colorization and decolorization methods, especially for high-definition images and videos. Moreover, more objective metrics specific to colorization and decolorization assessment are required. Finally, large-scale datasets are needed for deep learning based image colorization and decolorization techniques. We believe researchers will pay increasing attention to this field in the future.
Evidence from a Randomized Trial That Simvastatin, but Not Ezetimibe, Upregulates Circulating PCSK9 Levels

Background Proprotein convertase subtilisin/kexin type 9 (PCSK9) is a secreted inhibitor of the low-density lipoprotein (LDL) receptor and an important regulator of LDL metabolism. Elevated PCSK9 levels have been associated with cardiovascular risk. The purpose of this study was to investigate how ezetimibe and simvastatin, alone and in combination, affect PCSK9 circulating concentrations. Methods A single-center, randomized, open-label parallel 3-group study in healthy men (mean age 32±9 years, body mass index 25.7±3.2 kg/m²) was performed. Each group of 24 subjects was treated for 14 days with either simvastatin 40 mg/d, ezetimibe 10 mg/d, or both drugs. Multivariate analysis was used to investigate parameters influencing the change in PCSK9 concentrations under treatment. Results The baseline plasma PCSK9 concentrations in the total cohort were 52±20 ng/mL with no statistically significant differences between the groups. They were increased by 68±85% by simvastatin (P = 0.0014), by 10±38% by ezetimibe (P = 0.51) and by 67±91% by simvastatin plus ezetimibe (P = 0.0013). The increase in PCSK9 was inversely correlated with baseline PCSK9 concentrations (Spearman's R = –0.47, P<0.0001) and with the percent change in LDL cholesterol concentrations (Spearman's R = –0.30, P<0.01). In multivariate analyses, only baseline PCSK9 concentrations (β = –1.68, t = –4.04, P<0.0001), percent change in LDL cholesterol from baseline (β = 1.94, t = 2.52, P = 0.014), and treatment with simvastatin (P = 0.016), but not ezetimibe (P = 0.42), significantly influenced changes in PCSK9 levels. Parameters without effect on PCSK9 concentration changes were age, body mass index, body composition, thyroid function, kidney function, glucose metabolism parameters, adipokines, markers of cholesterol synthesis and absorption, and molecular markers of cholesterol metabolism. Conclusions Ezetimibe does not increase circulating PCSK9 concentrations while simvastatin does. When added to simvastatin, ezetimibe does not cause an incremental increase in PCSK9 concentrations. Changes in PCSK9 concentrations are tightly regulated and mainly influenced by baseline PCSK9 levels and changes in LDL cholesterol. Trial Registration ClinicalTrials.gov NCT00317993

Introduction
Both statins, which are cholesterol synthesis inhibitors, and ezetimibe, a cholesterol absorption inhibitor, lower low-density lipoprotein (LDL) cholesterol, by 40-60% and 20%, respectively. These drugs are often administered together in order to achieve a further decrease in LDL cholesterol levels, when clinically necessary [1]. Proprotein convertase subtilisin/kexin type 9 (PCSK9) is a secreted protein produced mainly in the liver, which binds to the hepatic LDL receptor (LDLR) and targets it for degradation [2]. Gain-of-function mutations of PCSK9 are associated with familial hypercholesterolemia and premature cardiovascular disease [3], while PCSK9 deficiency leads to low LDL cholesterol concentrations and protection against cardiovascular disease [4]. PCSK9 concentrations have been associated with response to statins [5] and with major cardiovascular events [6]. Statins have been shown to upregulate both LDLR and PCSK9 [7]. In turn, increases in PCSK9 concentrations may limit the beneficial effects of statins [8,9], although this observation is not supported by all studies [10].
These data indicate that the function of circulating PCSK9 is physiologically and clinically significant. Therefore, it is of interest to investigate how lipid-modifying pharmacological agents affect PCSK9 concentrations. While statins have been shown to increase PCSK9 [10], there are very few data regarding the effects of ezetimibe (alone or combined with a statin) on PCSK9 concentrations [11]. Moreover, it is unknown what other parameters influence changes in PCSK9 concentrations under lipid-lowering therapy. The present randomized study examined the effect of ezetimibe, alone or in combination with simvastatin, on circulating PCSK9 levels.

Study Design and Subjects
The study design has been published before [12,13]. The protocol for this trial and the supporting CONSORT checklist are available as supporting information; see Checklist S1 and Protocol S1. In brief, 72 healthy male subjects were randomized in an open-label design, at an allocation ratio of 1:1:1, to receive for 14 days either ezetimibe 10 mg/d, simvastatin 40 mg/d, or both drugs. Randomization was performed according to a predetermined random list (balanced 6-block design) by use of sealed envelopes. Inclusion criteria were age between 18 and 60 years, body mass index (BMI) between 18.5 and 30 kg/m², fasting LDL cholesterol concentrations <190 mg/dL, fasting triglycerides <250 mg/dL and normal blood pressure (<140/90 mmHg). Excluded from the study were subjects who had received lipid-lowering drugs within 12 weeks prior to study entry, those with a history of excessive alcohol intake, liver disease, renal dysfunction (glomerular filtration rate <60 mL/min), rheumatologic disease, coronary heart disease, diabetes or other endocrine disorders, eating disorders, a history of recent substantial (>10%) weight change, a history of obesity (BMI >35 kg/m²), or those taking medications known to affect lipoprotein metabolism, glucose metabolism, or the immune system. All patients were advised to keep their usual dietary habits throughout the trial. Blood was drawn in the morning after a 12-h fast on days 1 (before the initiation of treatment) and 15 (at the end of the 2-week treatment period). The original trial has been registered at ClinicalTrials.gov (NCT00317993). The study was performed at the outpatient clinic of the University of Cologne in 2005; the protocol was approved by the Ethics Committee of the University of Cologne, and all subjects gave written informed consent. The sponsors had no influence on study design, analyses or interpretation of the data.

Analytical Measurements
Human PCSK9 concentrations were measured as previously described [14]. The polyclonal antibody used was prepared in rabbit and directed against affinity-purified proPCSK9 (aa 31-454) produced in bacteria [15]. The antibody recognizes both the prosegment (aa 31-152) and the catalytic subunit (aa 153-454) of PCSK9, but not its C-terminal CHRD (aa 455-672). In short, LumiNunc Maxisorp white assay plates (Nunc, Denmark) were used. Calibrators and plasma samples were incubated for 30 min in a water bath at 46 °C prior to plate addition (100 µL) in duplicate; we found that pre-incubation at 46 °C enhances antigen recognition by the antibody. The plates were incubated overnight at 37 °C with shaking. After washing, 100 µL of human PCSK9-Ab-HRP diluted 1:750 was added and incubation continued for 3 h at 37 °C with shaking. Finally, after washing, 100 µL of substrate (SuperSignal ELISA Femto Substrate, Pierce) was applied to each well. Chemiluminescence was quantitated on a Pherastar luminometer (BMG Labtech). A standard curve was established using a conditioned medium containing recombinant human PCSK9 as described previously [14]; the linear portion of the assay lies between 2.5 and 20.0 ng/mL of human PCSK9.
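To illustrate the calibration step, the snippet below fits a linear standard curve over the stated 2.5-20.0 ng/mL range and inverts it for unknown samples; the calibrator readings are made-up numbers for demonstration only, not data from the assay.

import numpy as np

# Hypothetical calibrator readings: concentration (ng/mL) vs. luminescence.
conc = np.array([2.5, 5.0, 10.0, 20.0])
rlu = np.array([1.1e4, 2.3e4, 4.4e4, 9.0e4])

slope, intercept = np.polyfit(conc, rlu, 1)   # linear standard curve

def rlu_to_concentration(sample_rlu):
    """Invert the fitted line; only valid inside the linear range,
    so over-range samples would be diluted and re-run."""
    est = (sample_rlu - intercept) / slope
    if not (2.5 <= est <= 20.0):
        raise ValueError("outside the assay's linear range")
    return est

print(rlu_to_concentration(5.0e4))   # ~11 ng/mL for the made-up numbers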
Analytical details for the determination of non-cholesterol sterols (as markers of cholesterol synthesis and absorption) and for quantitative real-time PCR of the mRNAs of HMG-CoA reductase, LDL receptor, Niemann-Pick C1-like protein 1 (NPC1L1), and PCSK9 from plasma mononuclear cells, as well as for flow cytometry of cell surface LDLR protein expression, have been described in detail previously [1]. All other biochemical analyses were made in the core laboratory of the Cologne University Medical Center using standard laboratory procedures [16].

Statistical Analyses
Statistical analyses were performed using Stata 12 (StataCorp LP, College Station, TX). Descriptive data are given as means ± SD or proportions (in percent), unless otherwise indicated. The primary outcome parameter of the parent trial was change in LDL cholesterol; the primary outcome parameter of the current post-hoc analysis is change in PCSK9 concentrations, and thus no sample size calculation was performed for this latter outcome. Associations between baseline PCSK9 concentrations and baseline clinical and biochemical parameters were examined using correlation analyses (Spearman's rank test). Bivariate regression models were used to investigate which parameters (baseline and on-treatment values) influence the change of PCSK9 concentrations from baseline. In a final model, multivariate regression analyses were performed; in these analyses, the baseline values and the effects of the 2 treatments, ezetimibe and simvastatin, were also modeled. Statistical significance was assumed at P<0.05 using 2-sided tests.

Results
Figure 1 shows the flow of participants through the trial. Within the total cohort, PCSK9 baseline concentrations were 51.7±19.9 ng/mL with no statistically significant difference between the 3 groups. Table 1 summarizes the baseline clinical and biochemical data in the three groups; there were no important differences between the groups. Table 2 shows the results of the correlation analyses between baseline parameters and PCSK9 concentrations. Mean baseline LDL cholesterol concentrations were 111±30 mg/dL with no significant difference between the 3 groups. As expected, LDL cholesterol decreased by 22, 41 and 60% in the ezetimibe, simvastatin and ezetimibe plus simvastatin groups, respectively (Figure 2). The changes in PCSK9 concentrations were +9.9±38% (n.s.), +67.8±85.2% (P = 0.0012) and +67.3±90.7% (P = 0.0013) in the 3 groups, respectively (Figure 2).
Baseline PCSK9 levels were not influenced by age, body mass index, percent body fat, estimated glomerular filtration rate, thyroid-stimulating hormone, or high-sensitivity CRP (Table 2). There was a significant positive correlation with HDL cholesterol and weak but non-significant positive correlations with total and LDL cholesterol, with parameters of glucose metabolism (fasting glucose and insulin, HOMA index) and with adipokines (leptin, high-molecular-weight adiponectin). The correlations with markers of endogenous cholesterol synthesis (lathosterol, desmosterol and cholestenol) were negative and significant, while the correlations with markers of cholesterol absorption (cholestanol, sitosterol and campesterol) were slightly positive but non-significant. There was a significant positive correlation with the overall ratio of campesterol to lathosterol (c/l); a high c/l ratio was shown previously to indicate a high rate of intestinal absorption of cholesterol, whereas a low ratio indicates low absorption [17]. There was a significant negative correlation with LDLR protein expression.

The increase in PCSK9 was strongly inversely correlated with baseline PCSK9 (Spearman's rho = -0.47, P<0.0001) and with the percent change in LDL cholesterol (Spearman's rho = -0.30, P<0.01) (Figure 3). The effects of the individual parameters shown in Table 2 on the percent change in PCSK9 levels were further investigated in multiple regression analyses. All analyses were performed using the baseline value and the change of the respective parameter during treatment, with adjustments for baseline PCSK9 concentrations and drug treatment. In the final model, only baseline PCSK9 concentrations (β = -1.68, t = -4.04, P<0.0001) and the percent change in LDL cholesterol from baseline (β = 1.94, t = 2.52, P = 0.014) had a significant influence on the change in PCSK9 concentrations. Moreover, simvastatin (P = 0.016), but not ezetimibe (P = 0.42), had a statistically significant effect. The parameters of the final model explained a substantial and significant proportion of the variance in the change of PCSK9 concentrations (R² = 0.31, F(4,67) = 7.62, P<0.00001). As shown in Figure 4, the strongest increase in plasma PCSK9 was observed in subjects with low baseline PCSK9 (<40 ng/mL) and a pronounced LDL cholesterol-lowering effect under treatment (≥50% from baseline). In subjects with high baseline PCSK9 (≥60 ng/mL), PCSK9 concentrations were hardly affected (-20 to +23%), even in the presence of pronounced LDL lowering. Vice versa, when PCSK9 was low at baseline (<40 ng/mL), even a moderate LDL cholesterol decrease (30 to 50% from baseline) led to robust upregulation of PCSK9 concentrations (up to 120%). Multivariate analyses indicated that significant changes in PCSK9 by lipid-lowering medication were seen only in subjects receiving simvastatin (either as monotherapy or in combination), but not in subjects receiving ezetimibe. Further, to test the hypothesis that individuals with higher baseline levels of PCSK9 respond less well to simvastatin [18] and vice versa [19], we examined the relationship between PCSK9 levels and the responses to ezetimibe, simvastatin and ezetimibe plus simvastatin (Figure 5). No significant correlations were observed between baseline PCSK9 levels and the response to LDL-lowering treatment.
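The structure of this final model (percent change in PCSK9 regressed on baseline PCSK9, percent LDL change, and two treatment indicators) can be written compactly with statsmodels. The data frame below is simulated with invented coefficients purely to show the specification; it does not reproduce the trial's data or results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 72
df = pd.DataFrame({
    "pcsk9_base": rng.normal(52, 20, n),          # baseline PCSK9 (ng/mL)
    "ldl_pct_change": rng.normal(-40, 15, n),     # % change in LDL cholesterol
    "simvastatin": rng.integers(0, 2, n),         # treatment indicators
    "ezetimibe": rng.integers(0, 2, n),
})
# Simulated outcome with made-up effect sizes, for illustration only.
df["pcsk9_pct_change"] = (
    100 - 1.7 * df["pcsk9_base"] + 1.9 * df["ldl_pct_change"]
    + 30 * df["simvastatin"] + rng.normal(0, 40, n)
)

model = smf.ols(
    "pcsk9_pct_change ~ pcsk9_base + ldl_pct_change + simvastatin + ezetimibe",
    data=df,
).fit()
print(model.summary())   # coefficients, t-values, R-squared, F-statistic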
Discussion
The current randomized trial investigated the effects of the 2 lipid-lowering drugs, simvastatin and ezetimibe, alone and in combination, on PCSK9 concentrations. A multitude of clinical, biochemical and molecular parameters were assessed as covariates. The average baseline PCSK9 concentrations were very similar to the levels observed by others using the same method of PCSK9 measurement [10,14]. The main finding of this study is that the change in PCSK9 concentrations induced by lipid lowering is influenced mostly by baseline PCSK9 and by the decrease in LDL cholesterol; other clinical and biochemical parameters had no effect. Of the 2 drugs tested, only simvastatin had significant effects on PCSK9 levels. Interestingly, the increase in PCSK9 levels observed with the standard dose of simvastatin, 40 mg/d, was not further enhanced by the addition of ezetimibe despite its incremental effect on LDL cholesterol lowering. Moreover, ezetimibe monotherapy had no significant effect on PCSK9 concentrations, in concordance with recent findings of Lakoski et al. [20], although it lowered LDL cholesterol levels by 20%. A possible explanation for these findings is that, since statins upregulate PCSK9 expression, treatment with 40 mg simvastatin daily for 2 weeks is sufficient to maximally increase circulating PCSK9 concentrations to a plateau, with no further increase possible by additional LDL cholesterol lowering. Huigen et al. also observed this plateau effect when comparing atorvastatin 10 mg and 80 mg [6]. On the other hand, a low dose of a statin, simvastatin 10 mg daily, has been found insufficient to increase PCSK9 levels [20]. Thus, PCSK9 seems to be tightly regulated within a certain range of statin-induced LDL cholesterol decrease. Ezetimibe may not increase PCSK9 concentrations because of its weak LDL-lowering effects, which seemingly are not strong enough to upregulate PCSK9 expression. Alternatively, a reason may be the absence of pleiotropic effects in comparison to statins: statins may stimulate PCSK9 expression independently of lipid lowering; e.g., they upregulate PPAR-α/β/γ/δ, which are involved in the regulation of PCSK9 expression in the liver [21,22]. The latter argument may also explain why further LDL lowering by ezetimibe, when added to a statin, does not result in a further PCSK9 increase. The correlation we observed between the change in plasma PCSK9 levels and the percent reduction of LDL cholesterol from baseline is in accordance with recent findings by others [14,23]. We surmise that this correlation is driven by parallel upregulation of PCSK9 and LDLR mRNAs in response to the intracellular LDL cholesterol-lowering effect of statins.

Our study has limitations. Firstly, no a priori power calculations were made for changes in PCSK9 concentrations because the primary outcome parameter of the parent trial was change in LDL cholesterol. Secondly, treatment duration was relatively short; however, longer treatment periods with ezetimibe have shown similar results [20], and the maximal LDL cholesterol-lowering effect of statins and ezetimibe is achieved within 2 weeks [24,25]. Furthermore, due to the relatively small size of our study, existing associations may have been underestimated or missed; our findings need to be confirmed in larger trials. The open-label design of the parent study may have introduced bias. Finally, PBMC might not accurately reflect hepatic PCSK9 gene and protein expression under all circumstances or with all forms of pharmacological intervention.
However, recent evidence strongly supports the use of PBMC for the study of genes related to hepatic cholesterol metabolism [26,27], and PBMC have been used for this purpose in many studies [1,28-30]. Moreover, their use has been advocated as a convenient means to provide organ-specific data without the organ tissue itself [31,32]. Strengths of the study include its randomized design, robust statistical methodology, blinded measurements of plasma PCSK9 concentrations, the use of a 'drug-naïve' population devoid of co-medications and co-morbidities that could potentially alter lipid metabolism, and excellent treatment adherence (pill count 99.1%). Moreover, this is the first randomized trial examining, in one cohort, multiple clinical and biochemical parameters possibly modulating PCSK9 concentrations, ranging from gene expression to markers of cholesterol absorption and synthesis, adipokines, glucose metabolism and other parameters.

Conclusions
In conclusion, the current data support and expand previous reports suggesting that ezetimibe, alone or combined with simvastatin, is not associated with an increase in PCSK9. These findings may help identify those individuals who would benefit most from treatment with PCSK9 antibodies, which are in clinical development. Finally, our results indicate that changes in PCSK9 concentrations during lipid-lowering treatment are tightly regulated and are mainly influenced by baseline PCSK9 levels and statin-induced changes in LDL cholesterol, underlining the relevance of genetic variations in PCSK9.

Supporting Information
Checklist S1: CONSORT Checklist.
Reasons for delayed spinal cord decompression in individuals with traumatic spinal cord injuries in Iran: A qualitative study from the perspective of neurosurgeons

Purpose The median time from the event leading to the spinal cord injury (SCI) to the time of decompressive surgery is estimated to be 6.9 days in Iran, which is much longer than the ideal time (less than 24 h) proposed in published guidelines. The current qualitative study aimed to determine the reasons for the observed decompression surgery delay in Iran from the perspective of neurosurgeons. Methods This qualitative study was designed to perform content analysis on data gathered from face-to-face semi-structured interviews with 12 Iranian neurosurgeons. Results The findings of the current study suggest that patient-related factors constitute more than half of the codes extracted from the interviews. Overall, the type of injury, presence of polytrauma, and surgeons' wrong attitude are the main factors causing delayed spinal cord decompression in Iranian patients from the perspective of neurosurgeons. Other notable factors include delay in transferring patients to the trauma center, delay in availability of necessary equipment, and scarce medical personnel. Conclusion From the perspective of neurosurgeons, the type of injury, presence of polytrauma, and surgeons' wrong attitude are the leading reasons for delayed decompressive surgery in individuals with SCI in Iran.

Introduction
Spinal cord injury (SCI) is a catastrophic condition that can lead to long-term and permanent disability, resulting in financial costs for both patients and the health care system.1 The gravity of this public health problem has led to multidisciplinary research and initiatives in Iran, including the designation of a national SCI registry, i.e., the National Spinal Cord Injury Registry of Iran (NSCIR-IR), to better understand SCI management in Iran.2 Global evidence suggests that timing is crucial to the neurologic prognosis of individuals with SCI,3,4 and advises that surgical decompression should be performed in less than 24 h5,6 or, if possible, in less than 8 h.3,7 Iranian national studies as well as international studies suggest that early surgical decompression (in less than 24 h) shortens the length of hospital stay and improves outcomes in individuals with SCI.8-11 However, NSCIR-IR data indicate that the median time from the traumatic event to the operating room is estimated to be 6.9 days in Iran (interquartile range not specified).2 The current study intends to identify the reasons, from the perspective of Iranian neurosurgeons, for the delay of surgical decompression in individuals with traumatic SCI in Iran, using semi-structured interviews as a qualitative deductive method.

Methods
Study methods were designed based on the qualitative content analysis method, a technique for decontextualizing text into codes and categories to achieve meaningful inferences.12,13 The rationale for choosing this method was to identify the most plausible factors responsible for the decompression surgery delay, in order to guide subsequent studies in higher hierarchies of evidence. Although the themes of this study were pre-determined to be healthcare system-related, patient-related, and surgeon-related, the interviews were based on the encounter context themes (ECT) methodological device14 to build a two-way road of knowledge and experience between researcher and participant and to avoid mechanistic inference of meanings from the participants' experience regarding the study questions.
This study was performed at four Iranian governmental hospitals in three cities (Tehran, Tabriz, and Rasht) that are endorsed by the NSCIR-IR. Given the distribution of NSCIR-IR-linked hospitals, the primary sampling strategy was the non-probability convenience method, as reaching out to all participants with the same effort would have been prohibitively difficult and time-consuming. A snowball strategy accompanied the convenience method, with previous participants recruiting some eligible participants. Participants (Table 1) fulfilled the following inclusion criteria: neurosurgeons working in a government-sponsored hospital that collaborates with the NSCIR-IR, with at least five years of SCI management experience. Interviews were conducted using a questionnaire and guideline document (Appendix 1). Data saturation occurred after 12 interviews; no new codes or novel themes were being articulated at this point. Two researchers were responsible for conducting the interviews; they recorded the interviews, transcribed them to text anonymously, and built a document for each interview. The analyzers received these documents, randomized them, coded the data manually, and assigned the codes to the pre-determined themes. An example of the process is shown in Table 2. Two different researchers performed this process independently. This paper examines the reasons for delay in spinal cord decompressive surgery from neurosurgeons' subjective perspective, and therefore best fits the interpretivism paradigm. This study is reported using the Standards for Reporting Qualitative Research guideline.15

Results
Analysis of the data by the two independent coders demonstrated a Cohen's kappa coefficient of 0.84. The articulated codes and their frequencies are shown in Table 3. As these results suggest, neurosurgeons in Iran believe that patient-related factors are the most frequent reasons for spinal cord decompressive surgery delay in individuals with SCI: about 57% of the codes extracted from the interviews fell within the patient theme. Roughly half of the codes belonging to the patient theme came from one single code, the type of SCI in each patient (e.g., complete versus incomplete SCI). The second most common code involved co-existence with polytrauma. Other, less frequent codes of this theme included patient socioeconomic status, patient consent for early surgery, past medical history, and the use of drugs or medications that resulted in postponement of surgery. The healthcare system-related theme was the next most common (30%). The most emphasized codes in this theme were delay in transfer to a trauma center capable of conducting decompressive surgery, delayed preparation of the required equipment for surgery, and unavailability of personnel. Delay in other departmental services (e.g., general surgery and internal medicine) and scarce diagnostic and treatment equipment were the least frequent codes of the healthcare system-related theme. Finally, 12.5% of codes belonged to the surgeon theme, which predominantly consisted of surgeons' wrong attitude (10%). Nearly all of the participating neurosurgeons were familiar with the latest guidelines, and only one code (2.5%) was attributed to a surgeon's inadequate knowledge of surgical timing. Overall, the most frequent codes extracted from the interviews were type of injury (27.5%), presence of polytrauma (15%), and surgeons' wrong attitude (10%), each belonging to the patient, healthcare system, and surgeon themes, respectively.
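For reference, inter-coder agreement of this kind can be computed with scikit-learn as below; the theme labels are hypothetical and do not come from the study's data.

from sklearn.metrics import cohen_kappa_score

# Hypothetical theme assignments by the two independent coders for the same
# extracted codes (the study itself reported kappa = 0.84).
coder_a = ["patient", "patient", "system", "surgeon", "patient", "system"]
coder_b = ["patient", "patient", "system", "patient", "patient", "system"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Inter-coder agreement (Cohen's kappa): {kappa:.2f}")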
Discussion
In a study performed in China, Zhu et al.16 showed that early neurosurgical interventions combined with rehabilitation improve motor outcomes in patients with complete SCI. Although it did not analyze SCI, a separate prospective observational study conducted in China, Africa, India, South Asia, East Asia, and Latin America proposed that delayed hospital admission is the most important reason for delayed orthopedic fracture management in low- and middle-income countries.17 As demonstrated by a retrospective analysis accompanied by a surgeon survey in Canada,18 most neurosurgeons recognize that the ideal time frame to perform decompressive surgery after traumatic SCI (TSCI) is within 24 h of injury. Despite these findings, a substantial number of patients with TSCI still receive operative management outside of the initial 24 h period. The authors of that study concluded that there is a need for strategies to increase knowledge translation and decrease administrative barriers. Furlan et al.19 proposed that in Canada, healthcare system-related factors are more prominent than patient-related factors in the wait time for decompressive surgery following SCI. Healthcare system-related factors are likely essential contributors to decompressive surgery delay in Iran, especially considering our study's context, a developing country under economic sanctions. However, from the perspective of Iranian neurosurgeons, healthcare system-related factors are less critical than patient-related factors in determining decompressive surgery timing. Iranian neurosurgeons mainly indicated that the type of injury is the underlying reason for the reported delay. They frequently mentioned that individuals suffering from complete SCI do not benefit from early decompressive surgery, and that it is reasonable to postpone surgery until the patient's clinical status is stabilized and other possible injuries are discovered. Individuals with incomplete SCI might experience clinical improvement after early decompressive surgery, and neurosurgeons therefore generally perform decompressive surgery for incomplete SCI as quickly as possible. Interestingly, this finding is similar to the results of another study in the Netherlands.20 We hope that these findings compel the NSCIR-IR to re-calculate the mean time from injury to decompressive surgery depending on the type of SCI. It is possible that after this re-calculation, healthcare system-related factors will receive more attention and play a more distinct part in decompression surgery delays in Iran, just as they did in the mentioned study in Canada.18 Polytrauma was also found to be a key determinant of decompressive surgery timing in our study. The presence of polytrauma complicates the decision-making process for early decompressive surgery because it leads to the involvement of other medical services in patient management; for instance, patients with cardiac tamponade in addition to SCI must be stabilized prior to spinal cord decompressive surgery. National policies and regulations also compel hospitals to manage individuals suffering from injuries secondary to road traffic crashes free of charge. Despite these policies, studies show that individuals suffering from TSCI as a result of road traffic accidents are still subject to costly expenses in Iran.8 Nevertheless, our study's findings suggest that, from the neurosurgeons' perspective, patients' socioeconomic status does not influence the decision-making process.
This finding might be related to more effective implementation of the aforementioned national policies than six years ago, when the cited study was conducted. Delay in healthcare system function was generally attributed to delayed patient transfer and delayed delivery of surgical equipment. Interviewed neurosurgeons indicated that patients in rural areas sometimes rely on their own family or acquaintances for hospital transportation, contributing to additional delay in patient arrival at a trauma center. Patients in urban areas are usually transferred in a timely fashion, and their arrival time does not typically play a substantial role in surgical timing. Overall, the surgeons interviewed in the current study do not believe that delayed hospitalization (composed of patient transfer time and hospital admission delay) plays a crucial role in delayed surgical intervention in patients with SCI in Iran. This is contrary to the findings of Pouramin et al.17 in some other low- and middle-income countries and can be attributed to the better function of pre-hospital emergency medical services in Iran. Surgical preparation requires coordination between hospitals and medical equipment companies, and in Iran it is common to encounter delay in equipment delivery by companies. Scarce medical personnel is often not due to a lack of human resources, but rather to the fact that other medical services, such as anesthesiology and operating room staff, do not recognize SCI as a medical emergency and do not prioritize SCI patients. SCI patients with polytrauma must be assessed and treated by other surgical services before decompressive surgery, and in some neurosurgeons' opinion, this delay may beget delay in spinal decompression; this delay is most likely not a defect, but part of the nature of their work. Similar to the study mentioned above,18 nearly all of the interviewed neurosurgeons were familiar with the latest evidence regarding time to surgical decompression, and nearly all mentioned that it is in the best interest of individuals with SCI to undergo decompressive surgery within 24 h. Our results demonstrate that neurosurgeons' knowledge should not be counted as a factor leading to delay in decompressive surgery. However, one of the most frequently encountered codes was the wrong attitude of the surgeons. This code was assigned to phrases implying that surgeons' decisions regarding surgical timing are made not on the basis of scientific evidence but rather in consideration of the surgeon's benefit. It is reasonable to conclude that some Iranian surgeons consider operating on an individual with SCI not cost-effective, preferring to spend their time performing other surgeries that they think would be more cost-effective. This wrong attitude does not serve the patients' best interest and should be corrected. Finally, Alice Eagly and Shelly Chaiken, in their remarkable text The Psychology of Attitudes, defined attitude as a psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor. Attitudes shape behavioral intentions, ultimately resulting in certain behaviors. The authors of this study suggest that the required steps must be undertaken to change surgeons' wrong attitudes regarding decompression surgery timing, as one of the most critical factors leading to decompression surgery delay in Iran.
These steps could include holding workshops and conferences for surgeons emphasizing the importance of early surgical decompression in individuals with SCI, and providing feedback and evaluations from the healthcare system informing each surgeon about his or her decompression surgery timing. Implementing regulations and laws compelling surgeons to perform decompression surgery in a timely fashion might also be beneficial in changing surgeons' attitudes and behavior.

Limitations of the study
A qualitative study might lack generalizability. This study was performed only in some NSCIR-IR-linked government hospitals, without any sampling from private hospitals, and may therefore not be generalizable to other settings. Additionally, this study's qualitative design was based on the intention to identify possible reasons for decompressive surgery delay in Iran, and subsequent quantitative studies should validate our findings. In addition, our sample size remained at 12 participants because data saturation had occurred; this might be considered a low sample size and a limitation of the current study as well.

Conclusion
Knowledge translation efforts and approaches to optimize medical systems are required to facilitate higher rates of early surgical intervention. Our results, generated in Iran, are representative of developing countries, which comprise almost half of the world. According to Iranian neurosurgeons, patient-related factors, such as type of injury and presence of polytrauma, are the leading reasons for delayed decompressive surgery for SCI. Other important reasons include healthcare system-related factors, such as delayed patient transfer to a trauma center, and surgeon-related factors, such as improper attitude. Finally, neurosurgeons' understanding of decompressive surgery timing is compatible with the latest evidence and should not be assumed to be a factor contributing to delayed decompressive surgery.

Funding
The current study was funded by Sina Trauma and Surgery Research Center, Tehran University of Medical Sciences (grant number 98-01-38-41413).

Ethical statement
The Ethics Committee of Tehran University of Medical Sciences approved this study; the reference number is IR.TUMS.MEDICINE.REC.1398.731. Participants of this study permitted the recording, transcription and analysis of the interviews.

Declaration of competing interest
The authors declare no competing interest.
Assisted Reproductive Technology without Embryo Discarding or Freezing in Women ≥40 Years: A 5-Year Retrospective Study at a Single Center in Italy

The protocols commonly used in assisted reproductive technology (ART) consist of long-term embryo culture up to the blastocyst stage after the insemination of all mature oocytes, the freezing of all the embryos produced, and their subsequent transfer one by one. These practices, along with preimplantation genetic testing, although developed to improve the live birth rate (LBR) and reduce the risk of multiple pregnancies, are drawing attention to a possible increase in obstetric and perinatal risks and adverse epigenetic consequences in offspring. Furthermore, ethical-legal concerns are growing regarding the increase in cryopreservation and storage of frozen embryos. In an attempt to reduce the risk associated with prolonged embryo culture and avoid embryo storage, we have chosen to inseminate a limited number of oocytes, not exceeding the number of embryos to be transferred, after two days or less of culture. We retrospectively analyzed 245 ICSI cycles performed in 184 infertile couples with a female partner aged ≥40 from January 2016 to July 2021. The results showed a fertilization rate of 95.7%, a miscarriage rate of 48.9%, and a LBR of 10%, with a twin pregnancy rate of 16.7%. The cumulative LBR in our group of couples was 13%. No embryos were frozen. In conclusion, these results suggest that oocyte selection and embryo transfer at the cleaving stage constitute a practice with a LBR comparable to that of the more commonly used protocols in older women who have reduced ovarian reserve.

Introduction
Louise Brown was born in 1978 after the transfer into the uterus of a single embryo obtained from one oocyte retrieved laparoscopically without ovarian stimulation and fertilized in vitro (IVF) [1]. In the following decade, controlled ovarian stimulation (COS) made it possible to collect a higher number of oocytes and to transfer more embryos into the uterus at once, increasing the success rate of assisted reproductive technology (ART). Subsequently, embryo freezing became a standard procedure in ART centers to avoid multiple pregnancies. With the development of the vitrification technique, oocyte freezing became possible and the efficiency of embryo freezing improved. Another relevant advance in ART is extended embryo culture up to the blastocyst stage [2]. This procedure has shown a higher implantation potential and a better live birth rate (LBR) in fresh transfers compared to cleavage-stage embryos [3]; however, other studies have shown similar cumulative LBRs between the two stages of embryo culture [4,5]. Nowadays, a constant trend is to freeze all embryos produced in ART cycles and transfer them one by one (the so-called single embryo transfer, SET) into a suitably prepared endometrium, to reduce multiple pregnancies and maximize the cumulative LBR [6]. Indeed, several studies have shown that SET reduces multiple pregnancies to a rate similar to that of spontaneous pregnancies (3%), whereas multiple embryo transfer increases it; such practices may nevertheless carry negative consequences for couples, such as a decline in ART birth rates [40], although personalized treatment should be a relevant aim for ART [41]. In this complex framework, which shows possible limits and risks related to ART practices, we report the results of a personalized clinical practice used in a subgroup of women over 40 in our ART center.
The protocol consisted of selecting a limited number of oocytes to be injected, no more than the number of embryos to be transferred, and of transferring the resulting embryos after a short embryo culture (two days or less). The primary aim of this retrospective study was to evaluate the success rate of this protocol in light of some of the more widespread problems and emerging risks of ART practices.

Patient Selection
This clinical study included 245 ICSI cycles performed from January 2016 to July 2021 in 184 infertile couples with a female partner aged ≥40 years, whose clinical charts were evaluated retrospectively. The mean age of the women was 42.4 ± 1.7 years (range 40-47) and the mean number of previous failed ICSI attempts was 1.5 ± 1.9. The assessment of ovarian reserve was performed through the antral follicle count (AFC), which was evaluated by the same experienced gynecologist (CM).

Controlled Ovarian Hyperstimulation
Controlled ovarian hyperstimulation protocols were performed using recombinant human follicle-stimulating hormone (rhFSH) (Gonal-F, Merck Serono, Geneva, Switzerland) dosed according to ovarian reserve, with a gonadotropin-releasing hormone (GnRH) antagonist (0.25 mg) from the day a follicle reached 15 mm in diameter. We also administered recombinant human luteinizing hormone (LH) at 75 IU (Luveris, Merck Serono, Geneva, Switzerland) every 12 h, along with rhFSH increased by 75 IU, during GnRH antagonist administration, according to data showing better outcomes in older female patients [42] and a dramatic decrease in serum LH as a consequence of GnRH antagonist administration [43]. Follicular development was monitored by real-time ultrasound scans from day 2 of the treatment cycle to the day of hCG administration, based on the patient's response to stimulation. The response was monitored by ultrasound and by measurements of serum levels of 17β-estradiol, progesterone, and FSH, including on weekends and holidays. When at least one ovarian follicle reached a diameter of 18-20 mm, ICSI was performed 36-38 h after administration of human chorionic gonadotropin (hCG, Gonasi, 10,000 IU; IBSA, Lodi, Italy).

Oocyte Retrieval
Oocyte retrieval was scheduled on a 7-day basis and performed with local analgesia or under sedation 36-38 h after hCG administration, based on the response to ovarian stimulation.

Sperm Preparation
The first semen collection was obtained approximately 5-6 h before the microinjection of the oocytes, which was scheduled approximately 40 h after hCG administration to the female partner. All male partners had 2-7 days of abstinence, as suggested by the WHO 2010 criteria [44]. All semen samples were collected by ejaculation within the Fertility Center to minimize conditions that could alter sperm parameters/function. All semen analyses were performed by the same expert embryologist according to WHO 2010 criteria. The assessment of sperm motility was performed on a 10 µL drop on a slide with a 22 × 22 mm coverslip and a stage heated to 37 °C, using a reticule lens. The slides were examined with phase-contrast optics at a magnification of 400×. We evaluated 400 spermatozoa per replicate for an accurate assessment of motility. We asked male partners with severe oligoasthenozoospermia (OA) to provide a second consecutive ejaculation 1 h after the first, after explaining the possibility of obtaining better sperm parameters in the second ejaculate [45,46].
Intracytoplasmic Sperm Injection Procedure
The ICSI procedure was performed with spermatozoa obtained by 'swim-up', using the first or second ejaculate according to the sperm parameters of the male partner as described in Section 2.4. The 'swim-up' technique was performed directly from the liquefied semen. For this purpose, several aliquots of semen were taken from each sample and placed in test tubes underneath an overlay of washing medium (Origio Italia Srl, Rome, Italy). Round-bottom tubes or four-well dishes were used to optimize the interface surface area between the semen layer and the culture medium. The samples were allowed to incubate at 37 °C in an incubator for 30-45 min, and spermatozoa with the best motility and ability to migrate were then collected. Collected cumulus-enclosed oocytes were maintained in 500 µL of Continuous Single Culture Medium-Complete (CSCM-C) (Irvine Scientific, FujiFilm, Tilburg, The Netherlands) in 4-well multi-dishes (Nunclon Surface, Roskilde, Denmark) under oil (oil for embryo culture, Fuji Film, Europe) and kept in the incubator for 2 h after retrieval. Afterward, they were decumulated in hyaluronidase drops (Hyaluronidase Solution, Fuji Film, Europe). The ICSI procedure was performed according to the standard technique.

Selection of Oocytes and Transfer Policy
The choice of the oocytes to be inseminated was made after decumulation. The best oocytes to inseminate were those with the following characteristics: a small perivitelline space and no granulation [47], an intact first polar body (PB) [48,49], and a smooth surface [50]. We discarded oocytes with vacuolar cytoplasm or central granulation, an ovoid shape [51], cytoplasmic inclusions [48], smooth endoplasmic reticulum (SER) aggregates [52], or refractive bodies [53]. Oocyte selection was also based on oolemma elasticity, a parameter that positively influences the outcome of ICSI. In particular, we distinguished three grades based on the elasticity of the oolemma: grade A refers to oocytes in which the oolemma was penetrated without the need for cytoplasmic aspiration (no elasticity); grade B, to oocytes in which oolemma penetration required mild or moderate cytoplasmic aspiration (average elasticity); and grade C, to oocytes in which oolemma penetration required strong cytoplasmic aspiration (excessive elasticity). If no oocyte reached the best grade, i.e., grade B, the closest grade was chosen based on the oolemma characteristics (grade C and, lastly, grade A). Embryo culture was performed in a standard incubator at 37 °C under 6% CO2 and 5% O2 in CSCM-C (Irvine Scientific, FujiFilm, Tilburg, The Netherlands). Embryo transfer was usually performed after 2 days of culture; in some cases, the transfer was performed at the pronuclear stage. After 36-44 h of culture, all embryos were carefully examined with both a dissecting and an inverted microscope. The classification of the embryos was carried out according to the system proposed by Puissant [54]. The number and size of the blastomeres, as well as the presence or absence of anucleated fragments, were carefully recorded, and embryos were scored as follows: 4 = embryos with clear and regular blastomeres and no fragmentation, or with small anucleated fragments occupying a maximum of 5% of the embryo surface; 3 = embryos with few or no fragments but with unequal blastomeres (>1/3 difference in size); 2 = embryos with more fragments, but covering less than 1/3 of the embryo surface; 1 = fragments on more than 1/3 of the embryo surface. Two points were added if the embryo had reached the 4-cell stage by 48 h after fertilization.
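As a compact restatement of this grading scheme, the function below encodes the scores as described; the numeric fragmentation thresholds separating the grades are our simplifying assumptions, since in practice the assessment is made visually by the embryologist.

def embryo_score(fragmentation_pct: float, equal_blastomeres: bool,
                 four_cells_by_48h: bool) -> int:
    """Cleavage-stage score per the Puissant-style rubric in the text.
    4: regular blastomeres, fragments on <=5% of the surface
    3: few/no fragments (here: <=5%) but unequal blastomeres
    2: fragments on less than 1/3 of the surface
    1: fragments on more than 1/3 of the surface
    +2 bonus points if the 4-cell stage is reached by 48 h."""
    if fragmentation_pct <= 5:
        base = 4 if equal_blastomeres else 3
    elif fragmentation_pct < 100 / 3:
        base = 2
    else:
        base = 1
    return base + (2 if four_cells_by_48h else 0)

# Example: a regular 4-cell embryo with 3% fragmentation at 48 h scores 6.
assert embryo_score(3, True, True) == 6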
With regard to the maximum number of embryos to be transferred, we followed the guidelines of the Practice Committee of the American Society for Reproductive Medicine and the Society for Assisted Reproductive Technology [9]. Therefore, in patients aged 40 years, three or four embryos could be transferred in the case of a particularly unfavorable prognosis; in patients aged 41-44 years, four embryos could be transferred, or even five when an unfavorable prognosis was present. A prognosis was considered unfavorable in the case of multiple previous ART cycle failures or no live births after an ART cycle. Informed consent was signed by the couples after an extensive discussion with the physicians on the maximum number of oocytes to be inseminated and, consequently, of embryos to be transferred. Eleven oocytes belonging to three couples were cryopreserved at their express request. In all cases, the selection of oocytes for the transfer of the resulting embryos was carried out according to the described criteria.

Ethical Approval
The study was conducted at the ART 'Biofertility IVF Center' (Rome, Italy) on infertile couples undergoing ICSI treatment. It was reviewed and approved by the Institutional Review Board at the 'Biofertility IVF Center', which indicated that ethical approval was not required for this study. Data collection followed the principles outlined in the Declaration of Helsinki. All patients provided their informed consent, agreeing to supply their anonymous information for this and future studies.

Statistical Analysis
Quantitative data are reported as mean ± SD throughout the study. The following rates were calculated: fertilization rate (FR = number of fertilized oocytes/number of oocytes inseminated), implantation rate (IR = number of gestational sacs/number of embryos transferred), clinical pregnancy rate (CPR = number of pregnancies with at least one fetal heartbeat/number of pick-up cycles with at least one oocyte retrieved), live birth delivery rate (LBR = number of deliveries with at least 1 live birth/number of pick-ups with at least 1 oocyte retrieved), miscarriage rate (MR = number of spontaneous abortions/total number of pregnancies), and cumulative live birth rate (CLBR = number of deliveries with at least 1 live birth/total number of women with aspirated oocyte(s)). Data were analyzed with SPSS 23.0 for Windows (SPSS Inc., Chicago, IL, USA).
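These rate definitions translate directly into code; the sketch below merely restates the formulas with hypothetical field names.

from dataclasses import dataclass

@dataclass
class ArtOutcomes:
    inseminated: int            # oocytes injected
    fertilized: int             # oocytes with normal fertilization
    transferred: int            # embryos transferred
    sacs: int                   # gestational sacs observed
    clinical_pregnancies: int   # pregnancies with >=1 fetal heartbeat
    pickups: int                # pick-up cycles with >=1 oocyte retrieved
    deliveries: int             # deliveries with >=1 live birth
    miscarriages: int           # spontaneous abortions
    pregnancies: int            # total pregnancies
    women: int                  # women with >=1 aspirated oocyte

    def rates(self) -> dict:
        return {
            "FR": self.fertilized / self.inseminated,
            "IR": self.sacs / self.transferred,
            "CPR": self.clinical_pregnancies / self.pickups,
            "LBR": self.deliveries / self.pickups,
            "MR": self.miscarriages / self.pregnancies,
            "CLBR": self.deliveries / self.women,
        }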
Results
A total of 245 ICSI cycles performed in 184 couples with female partners aged ≥40 years were considered. Among the 184 couples enrolled, 39 underwent two ICSI cycles, 7 underwent three, and 2 underwent four. Table 1 shows the clinical and demographic characteristics of the couples enrolled in this study and their previous failed ICSI attempts, which include attempts performed both in our center and in other ART centers. In six cycles, no oocytes were retrieved. Considering the repeated attempts for each couple, only two women had no transfer. Therefore, 182 women aged ≥40 years underwent 239 cycles with oocyte retrieval and embryo transfer. A total of 705 embryos were transferred, with a mean of 2.9 ± 1.4 embryos per transfer. In 35 cycles, the embryos were transferred at the pronuclear stage. Twenty-four women had at least one pregnancy; all pregnancies occurred in women between 40 and 44 years of age. As reported in the Materials and Methods section, following the aforementioned guidelines [9], we transferred a maximum of five embryos when an unfavorable prognosis was present; in total, five embryos were transferred in 48 cycles of our cohort. Interestingly, all pregnancies occurred when at least three embryos were transferred, except in five cases where two embryos were transferred. In detail, considering the 24 cycles with pregnancy leading to delivery and live birth, five embryos were transferred in half of them (12 cycles), four embryos in five cases, three in two cases, and two in five cases. Of the 24 pregnancies, four were twins; in three of those cases, five embryos had been transferred. No triplets occurred. Table 3 reports the number and grade of the embryos transferred and the related success rates. The outcomes of the ICSI cycles are shown in Table 2. Four pregnancies with delivery occurred among the 35 cycles with embryo transfer at the pronuclear stage, a LBR of 11.4% (4/35), and included the two cycles in women aged 44 years ending with normal delivery. Table 4 presents the LBR by the age of the women. The twenty-four women who achieved pregnancy had a mean of 1.9 ± 1.8 previous failures. The causes of infertility of the couples enrolled are reported in Figure 1.

Table 1. Demographic and clinical characteristics of the female and male partners of the couples enrolled in this study.
Women: age (years, mean ± SD), 42.4 ± 1.7; antral follicle count (mean ± SD), 8.4 ± 4.9; total gonadotropin dose administered (IU, mean ± SD), 3376.6 ± 1335.9.
Men: age (years, mean ± SD), 43.9 ± 5.9; sperm concentration (mil/mL, mean ± SD), 46.7 ± 40.6; total sperm motility (%, mean ± SD), 47.

Discussion
The results of the present study indicate that the selection of oocytes before ICSI, to obtain a predetermined number of fresh cleaved embryos to be transferred, is effective in terms of FR and LBR in a group of women ≥40 years. Our delivery rate (number of monitored deliveries/number of pick-ups, DR) was not lower than that of the Italian registry for the same age group (4.7% = 728/15,419) published in 2019, and not far from that of all age groups (11.2% = 5151/46,090) [55]. Furthermore, we report a cumulative delivery rate (CLBR) for fresh cycles of 13%, whereas the Italian registry reported a CLBR of 5.9% for women aged 40-42 years and 1.6% for women aged ≥43 years. With regard to CLBR per pick-up, the Italian registry listed 10.3% for women aged 40-42 years and 3.2% for women aged ≥43 years, whereas the U.S.
registry reported a cumulative transfer live birth delivery (LBD) rate per pick-up of 13% in 2019 [56]. It should be noted that the CLBR with frozen embryos excludes cycles that are unable to yield enough embryos for freezing procedures. With regard to the success rate of blastocyst transfer in older women, Tannus and colleagues reported a higher LBR than ours (21.6%), although in their cohort the mean AFC was 14, the mean number of previous failed cycles was 0.5, and the mean number of oocytes collected was 11 [57]. Our patient group exhibited a less favorable AFC (8.4) and a higher number of previous failed ART cycles (1.5 ± 1.9) (Table 1). In another study, conducted by De Croo and colleagues in women with a mean age of 35 years, the LBRs after transfer at the blastocyst stage (1 or 2 embryos) versus the cleavage stage were 21.1% and 19.1%, respectively [13]. A specific and usual reason for long-term embryo culture up to the blastocyst stage is to perform PGT-A. Apart from the reduced rate of blastocyst formation in patients over 40 years of age, several concerns have been raised about the PGT-A technique, such as the high genetic mosaicism rate, which interferes with the precise evaluation of the embryo's chromosomal arrangement, and the mismatch in the aneuploidy rate between the trophectoderm and the inner cell mass [29]. Furthermore, increased obstetric and perinatal risks are reported with PGT-A compared with non-PGT-A cycles, particularly the development of hypertension in pregnancy [58]. However, PGT-A has become the most widely utilized add-on procedure in ART practice [29] and is a reference for validating, or at least comparing, the results of many clinical trials in the U.S. One possible advantage of reconsidering embryo transfer at the cleavage stage concerns the epigenetic risk to fetal health after embryo exposure to a long culture environment. Many studies have shown that extended embryo culture significantly affects obstetric and perinatal outcomes [19,59-61]. Large-for-gestational-age/macrosomia, hypertensive disorders, and perinatal mortality appear to increase with frozen embryo transfer [62]. Vroman and colleagues demonstrated that embryo culture from the one-cell to the blastocyst stage results in placental overgrowth, reduced fetal weight, and lower placental DNA methylation in rats [63]. Surprisingly, a recent study demonstrated that human genome activation initiates at the one-cell stage [64]. There is evidence that the longer the in vitro culture lasts (i.e., blastocyst transfer in comparison to the cleavage stage), the more epigenetic changes occur [65,66]. We know that only a percentage of fertilized oocytes reach the blastocyst stage in vitro, and recent observations suggest that metabolic and epigenetic dysfunctions underlie the arrest of human ART embryos before compaction [67]. From a biological point of view, we cannot rule out the existence of better 'culture' conditions for embryos in utero rather than in vitro (temperature, pH, osmolarity, and numerous unknown factors). Interestingly, the LBR after transfer at the pronuclear stage (11.4%) does not seem negligible (at 44 years, two women delivered at term without obstetric or perinatal complications), supporting the idea that an artificial incubator environment might be more stressful than the uterus for embryos of older women.
Most of the studies on obstetric and perinatal risk in ART are linked to placental abnormalities, which are increased when ART is the chosen treatment for infertility, particularly under more stressful conditions for embryos such as long-term culture and PGT-A [68]. Concerns about the possible consequences of ICSI for the health of the offspring have been raised since its first introduction into clinical practice [69]. Although ICSI use was associated with a significantly higher risk of congenital malformations [70], other studies did not report a significant difference in terms of congenital malformations between children conceived with IVF/ICSI and those conceived naturally [71,72]. A recent systematic review and meta-analysis showed no differences in epigenetic effects in the offspring of couples treated with ICSI versus traditional IVF [73]. In summary, after many decades of ICSI practice and according to most reports, children born after ICSI have perinatal outcomes comparable to those conceived after standard IVF.

With regard to the possible success rate of oocyte selection in ART practice, in 2004 and for many years thereafter, Italian ART legislation limited the maximum number of oocytes to be fertilized during an ART cycle to three, and all resulting embryos had to be transferred at once because embryo cryopreservation was banned [74]. Ragni and colleagues, in a study including 1861 cycles performed in seven Italian fertility centers, showed that the pregnancy rate per oocyte retrieval and the rate of multiple pregnancies before and after the new law were 27% and 24.2% (p = 0.18), and 25.8% and 20.9% (p = 0.11), respectively [68]. It is worth noting that in countries such as Germany and Switzerland, embryo cryopreservation is not permitted and selection is based on oocytes at the pronuclear stage.

Regarding the problem of the twin pregnancy rate, our result (16.7%) appears acceptable, considering that the Italian registry showed a rate of 10.6% in 2019 and the European rate was 16.9% [75]. Recent reports criticize the SET policy in favor of double embryo transfer at the blastocyst stage [6]. However, our transfers were performed at the cleavage stage, at which a higher number of transferred embryos can be considered comparable to a lower number at the blastocyst stage. Considering the risk of twin or higher-order pregnancies and their resultant costs, the results of our study should be applied with caution in general clinical practice. Indeed, as reported in Table 3, in a considerable number of cycles we decided to transfer three or more embryos because of a poor prognosis. At present, insemination with the transfer of more than two embryos should not be a routinely offered practice, even though the latest U.S. guidelines allow the transfer of more than two embryos for older women, low-quality embryos, and repeated implantation failures. On the other hand, blastocyst transfer is associated with a higher risk of monozygotic twinning (MZT) [76], which has a more severe prognosis than dizygotic twinning because of the risk of twin-to-twin transfusion due to the shared placenta. After 8435 frozen-thawed single blastocyst transfers with hormone replacement treatment, MZT was observed in 2.32% of cases [77], while the natural prevalence is 0.4% [78]. However, the transfer of a very limited number of embryos at the cleavage stage after insemination of selected oocytes may represent a practical option in cases at high risk of twin or multiple pregnancies.
Clinical trials combining oocyte selection with embryo selection at the cleavage stage may be considered, even in couples with female partners under the age of 40. This could represent a kind of double selection aimed at simultaneously reducing the obstetric/epigenetic risks and the risk of multiple pregnancies. Regarding the efficiency of oocyte selection, we note that our FR (95.7%) was higher than commonly reported figures such as the ESHRE/Alpha consensus competence value (≥65%) [11], probably owing to a selection of oocytes based on multiple morphologic elements [48,49,79]. We recognize that oocyte selection in ART is still largely imperfect, mainly because it is subjective. However, with the introduction of artificial intelligence in ART, new tools may become available to promote more objective observations [80,81], as we have previously shown [82]. Nevertheless, we should not forget that current embryo selection is also a subjective laboratory procedure. The possible transition from embryo to oocyte selection, using more reliable methods, could provide valuable information on the relationship between oocyte quality and stimulation protocols and, consequently, embryo development.

In summary, our study reports the success rate and twin delivery rate in women over the age of 40 using a protocol with oocyte (rather than embryo) selection and transfer of embryos at the cleavage stage. A non-negligible LBR and a moderate multiple pregnancy rate were recorded. No embryos were frozen. The application of the results described in the present study in clinical practice may be relevant for geographical areas where embryo freezing is not possible for ethical reasons or legal restrictions, or for couples with a poor prognosis and a female partner aged ≥40 years who do not accept oocyte donation. Furthermore, the financial implications and cost/benefit profile of this protocol, i.e., strong personalization and drug use, should be considered. In this regard, milder stimulation for this group of patients, based on their residual ovarian reserve, could offer similar chances of success. The small size of this subgroup of patients undergoing ART and the lack of a control group are the main limitations of our study. However, we enrolled a particular group of couples with female partners aged ≥40 years, which made it difficult to establish a control group. Certainly, further prospective randomized controlled trials are needed to assess the relevance of our retrospective findings.

Conclusions

In conclusion, oocyte selection with embryo transfer at the cleavage stage appears to be a reasonable strategy in older women who have a reduced ovarian reserve and a high number of previous ART failures. As clinicians, we should consider current trends in reproductive medicine from a broad perspective, taking into account all possible consequences involving obstetricians, neonatologists, pediatricians, and all other professionals interested in the long-term health consequences of ART laboratory practice. Although further studies are needed to confirm these findings, our preliminary results suggest that a return to more natural steps in reproductive medicine may be safer if the obstetric risks and the epigenetic consequences for offspring linked to long-term culture protocols are confirmed in the future.
Does Materiality Motivate Management to Shorten Misstatement Detection Periods?

In this paper, I examine whether misstatement materiality motivates managers to shorten misstatement detection periods. Measuring gross detection periods as in the prior literature, I find that management shortens the gross detection period by about 116 days for material misstatements relative to immaterial misstatements. The impact of materiality is even more evident for the disclosure of serious (fraud/SEC-investigated) than non-serious (error-related) misstatements. Additional tests using net detection periods and alternative measures of materiality yield consistent results, alleviating the concern that my finding is mechanical due to the regulatory requirement on disclosure. Finally, I provide evidence, via a path analysis, that materiality raises litigation concerns, which motivate managers to shorten detection periods.

INTRODUCTION

This paper examines the impact of materiality on the length of misstatement detection periods (MDPs). Firms do not disclose the exact misstatement identification date in most cases due to the lack of regulations on this disclosure requirement. Therefore, the extant literature examines gross MDPs, the length of time from the end of the misstated period to the date of disclosure, in misstatement studies (Karpoff and Lou, 2010; Hirschey et al., 2015). A shorter period enhances the usefulness of information by providing capital market participants with more timely information about misstatements. 1 This timeliness in revelation is a critical means to protect investors' interest as well (Skinner 1994, 1997). Given that material misstatements result in severe negative market responses (Palmrose et al., 2004), and hence are more detrimental to investors' interest, it is desirable to disclose them even earlier. Therefore, the Securities and Exchange Commission (SEC) requires that firms disclose material misstatements by filing Form 8-K item 4.02 (big R) within four business days if management makes a non-reliance judgement by concluding that the correction of these misstatements would alter investors' decisions (SEC 2004, Release No. 33-8400). However, there is no conclusion on whether and to what extent materiality drives management to shorten MDPs.

The second motivation of this paper is to investigate the relation between litigation and the length of MDPs. 2 The disclosure-of-bad-news literature ignores misstatements when examining the association between timely disclosure and litigation, because timely disclosure of misstatements is less likely to decrease the incidence of litigation (Field et al., 2005; Donelson et al., 2012). However, misstatements can be either immaterial or material, and this difference in materiality merits more in-depth investigation. Even though material misstatements are less likely to avoid litigation than immaterial ones, it is unclear whether timely disclosure can reduce litigation for a material-misstatement firm in a within-group setting. This paper aims to examine all the issues above.

1 Chapter 3 of SFAC No. 8, Conceptual Framework for Financial Reporting, issued by the FASB, and the Framework for the Preparation and Presentation of Financial Statements, issued by the IASB, both emphasize that timeliness is one of the four characteristics that enhance the usefulness of accounting information.
2 The disclosure time for material misstatements is almost unified due to the SEC 2004 requirement. Earlier disclosure implies a short detection period. Therefore, these two terms are used interchangeably in this paper.
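As a concrete illustration of the gross MDP just defined, the following sketch computes the period between two dates. The dates and names are hypothetical, and the division by 90 anticipates the quarter-based scaling used later in the paper.

```python
from datetime import date

# Hypothetical dates, for illustration only: a gross misstatement detection
# period (MDP) runs from the end of the misstated period to the disclosure date.
misstated_period_end = date(2010, 12, 31)
disclosure_date = date(2011, 10, 16)

gross_mdp_days = (disclosure_date - misstated_period_end).days
print(gross_mdp_days)        # 289 days
print(gross_mdp_days / 90)   # ~3.2 when scaled to 90-day quarters
```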
Timely detection and disclosure of misstatements, especially material ones, play a critical role in protecting investor interest. The new millennium came with a wave of financial scandals, such as Enron, WorldCom, Tyco, HealthSouth, Freddie Mac, etc. 3 In the Enron scandal, management had to restate financials from 1997 through 2000. However, from the end of the restated period to the announcement of restatements on October 16, 2001, it took the public about 300 days to learn that Enron executives had committed accounting fraud. Investors suffered a disastrous $74 billion loss. Had managers revealed the materially misstated financials to the public earlier, a significant number of investors would have been able to avoid losses by refraining from investing in Enron. Then, on average, does materiality drive managers to discover and disclose misstatements earlier?

Litigation reduction theory suggests that a timely revelation of material misstatements lowers the costs of litigation by weakening plaintiffs' claim that management violated Rule 10b-5 by misstating a material fact with intent (element 3 of Rule 10b-5), and by reducing the size of the plaintiff class and the magnitude of the negative market response (Skinner 1994). In addition, management may choose to report material misstatements in a timely manner to demonstrate their competence, lower future costs of capital, and comply with SEC requirements. SEC 2004 mandates disclosure of material misstatements in Section 4.02 of an 8-K filing within four business days of management's initial non-reliance judgment, while giving management leeway to disclose immaterial errors through other less apparent channels, such as press releases or other SEC filings, with significant delays. Given that a material misstatement has a significantly more negative impact on stock returns (Palmrose et al., 2004), managers may exercise extra effort in scrutinizing material accounting errors. The above arguments suggest that management has incentives to shorten the length of MDPs.

On the other hand, agency theory suggests a delay in the detection and disclosure of material misstatements, or an underestimate of materiality. Material misstatements may cause significant negative market reactions, such as negative abnormal returns, more executive turnover, higher litigation risks, poorer reputation in the labor market, higher cost of equity capital, tighter loan contracting terms, and loss of credibility of financial reports (Hribar and Jenkins, 2004; Desai et al., 2006; Graham et al., 2008; Plumlee and Yohn, 2008; Wilson, 2008; Collins et al., 2009; Burks, 2010). To delay or reduce the above negative impact, management may prolong MDPs by putting off the recognition date of the non-reliance judgement, which triggers the mandatory disclosure. Alternatively, managers may choose to disclose material misstatements as immaterial, but still make timely disclosures. Because materiality is subject to management's evaluation using quantitative and qualitative criteria, Thompson (2017) provides empirical evidence that managers have the ability to exercise their discretion in the qualitative judgement to classify more restatements as immaterial and avoid Form 8-K filings.

In view of the conflicting predictions based on the above theories, I empirically examine the association between materiality and the length of MDPs. I use Form 8-K filings (BigR) provided by Audit Analytics to capture the materiality of misstatements.
This measure better reflects management's perception of the materiality of misstatements and is developed based on both qualitative and quantitative criteria, hence greatly reducing measurement error. The sample period spans from August 23, 2004, to December 31, 2015. This sample period provides an opportunity to measure gross MDPs and estimate net MDPs. After controlling for factors affecting the complexity of financial reporting, I find that, on average, firms detect and disclose material misstatements 116 days earlier than immaterial ones in a pooled sample of 2,566 observations. I observe similar results when using firms filing both material and immaterial misstatements in a given year as a natural experimental setting, and in a cross-sectional test. Additional tests show that management complies with SEC 2004 requirements by reporting material misstatements earlier, regardless of the external classification of severity. However, MDPs of material serious (fraud/SEC-investigated) misstatements are about 70 days shorter than those of material non-serious (accounting/clerical-error-caused) misstatements. Among the control variables, I notice that corporate social responsibility (CSR) also contributes significantly to the timeliness of disclosing misstatements in the pooled sample and cross-sectional subsamples.

One alternative explanation of my finding is that the observed difference in the length of detection periods between material and immaterial misstatements is mechanical, due to the four-business-day disclosure requirement in SEC 2004. I address this issue in three ways. First, assuming all material misstatements are disclosed on the fourth day after identification, I estimate net MDPs by subtracting four business days from gross MDPs for material misstatements, hence eliminating the mechanical impact of SEC 2004 on the disclosure time. Following Palmrose et al. (2004), I use the cumulative earnings impact of restatements as a measure of materiality for Form 8-K filing firms and conduct a within-sample test. Second, I use restatement severity as another proxy for materiality. Serious restatements either involve fraud or receive SEC investigations, hence must be material to investors and creditors. Even though the Spearman test shows a significant positive correlation between serious restatements and BigR, I do not eliminate BigR serious restatements from the test because doing so would result in a very small sample size. 4 Third, I use the cumulative earnings impact of restatements as a measure of materiality, but test on non-Form 8-K filers, because their disclosure time is not subject to the SEC 2004 requirement. All three tests generate results that are qualitatively consistent with my primary findings. They provide further evidence that materiality is the driving force behind management's timely detection and disclosure of misstatements.

Another confounding factor for the above results is the difficulty of detecting material misstatements: it is possible that the larger the scale of a misstatement, the easier it is to identify. I relieve this concern by testing whether qualitative components of materiality, i.e., factors not related to the magnitude of the cumulative earnings effect, affect timeliness. The result shows that these other aspects of materiality enhance the timely disclosure of misstatements as well.
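The net-MDP adjustment described above can be sketched as follows; the field names are illustrative (not an actual data schema), and the four-business-day step-back is the simplifying assumption stated in the text.

```python
import numpy as np
import pandas as pd

# Illustrative field names. For big R filers, assume disclosure came four
# business days after the non-reliance judgment and remove those days from
# the gross detection period.
df = pd.DataFrame({
    "misstated_end": pd.to_datetime(["2010-12-31", "2011-06-30"]),
    "disclosure":    pd.to_datetime(["2011-05-02", "2011-09-15"]),
    "BigR":          [1, 0],
})

df["gross_days"] = (df["disclosure"] - df["misstated_end"]).dt.days
judgment_date = df["disclosure"] - pd.offsets.BusinessDay(4)  # assumed 4-day lag
df["net_days"] = np.where(df["BigR"] == 1,
                          (judgment_date - df["misstated_end"]).dt.days,
                          df["gross_days"])
df["net_DetQtr"] = df["net_days"] / 90
print(df[["gross_days", "net_days", "net_DetQtr"]])
```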
Next, I examine the impact of the timely disclosure of material misstatements on the likelihood of litigation. Donelson et al. (2012) provide evidence that timely disclosure of bad earnings news lowers the probability of litigation. Files (2012) finds that cooperation with the SEC increases restatement firms' likelihood of sanction; however, the SEC may reward firms that cooperate and make forthright disclosures with lower monetary penalties. Based on these findings, I posit and find that timely disclosure of material misstatements decreases the probability of litigation within the group of material-misstating firms. A path analysis provides further statistical evidence that materiality raises litigation concerns, and these concerns drive managers to shorten detection periods.

My paper contributes to the literature in multiple ways. First, this paper extends Hirschey et al. (2015) by better gauging the impact of materiality on the length of detection periods. The sample period in Hirschey et al. (2015) spans from 1997 to 2006. Even though Hirschey et al. (2015) include Form 8-K filings as a control variable, most sample firms did not disclose materiality via Form 8-K from 1997 to August 23, 2004. Results reported in Table 4 of Hirschey et al. (2015) show that Form 8-K filings shorten detection periods by less than two days. 5 It is possible that the impact of materiality is underestimated due to the mixture of pre- and post-SEC 2004 data. Using mandatory filings after SEC 2004, I find a much more significant effect of materiality on shortening detection periods.

Second, this paper provides evidence that the impact of materiality on the detection period is not mechanically driven by SEC 2004 requirements. I use the cumulative earnings effect of restatements as an alternative proxy for materiality and conduct tests on the net MDPs of Form 8-K filers and the gross MDPs of non-Form 8-K filers, hence eliminating the impact of SEC 2004 on the length of detection periods. I also use misstatement severity from a third party's perspective as an alternative measure of materiality. All results show consistently that materiality lessens the length of MDPs. To my knowledge, this paper is the first in the literature to estimate net MDPs by using Form 8-K filings.

Third, I provide evidence that earnings impact is not the sole concern of materiality that pushes managers to detect and reveal misstatements earlier. An examination of material misstatements that are irrelevant to earnings shows that management discloses them significantly earlier than immaterial misstatements. However, due to the small sample size, I cannot pinpoint the specific qualitative factor(s) that plays a critical role in motivating timely disclosures.

Fourth, this paper contributes to the bad-news disclosure literature. Consistent with the assumption of higher litigation risk for material misstatements in that literature, I find that materially misstating firms are more likely to be sued than immaterially misstating firms. However, further tests show that, within the group of materially misstating firms, timely disclosure of misstatements relatively decreases the likelihood of litigation. To my knowledge, this finding is documented for the first time in the bad-news disclosure literature.

My research has significant implications for academia, investors, and market-regulating bodies. First, given that the disclosure period is somewhat fixed for material misstatements, management can manipulate only the discovery period. Future research may concentrate on the net detection period and identify motivations behind discovery-period manipulations.
Second, I provide evidence that most managers exercise due diligence in identifying and revealing material misstatements across time. Third, investors and regulators may use my findings to evaluate whether a firm is making a timely announcement of restatement(s). Significantly longer-than-expected detection periods may serve as a signal of managers' unwillingness or incompetence in discharging their due diligence. Investors should then be more cautious when considering including these firms in their portfolios, while regulators may launch investigations into whether management is complying with legal requirements.

The rest of the paper is organized as follows. Section 2 reviews the detection-period literature and develops hypotheses. Section 3 describes the data and sample selection. Section 4 discusses the model specification and empirical results, while Section 5 provides my conclusions.

LITERATURE REVIEW AND HYPOTHESES DEVELOPMENT

Timely disclosure of misstated financials is desirable for the protection of investors' interest. The SEC requires firms to restate financials due to misapplication of accounting standards, fraud, misrepresentation, or accounting errors. Several papers have investigated what factors shorten each stage of a restatement, including the period misstated, the misstatement identification and disclosure period, the quantitative detail revelation (dark) period, and the time thereafter (see Figure 1 for a timeline of restatement periods). For example, Singer and Zhang (2018) document that audit firm tenure is positively associated with the period misstated. Hirschey et al. (2015) examine a sample from 1997 to 2006 and find that stronger corporate governance, but not the characteristics of restatements, expedites the discovery and disclosure of misstatements. Schmidt and Wilkins (2013) focus on the "dark period", the number of days between restatement announcement dates and detailed quantitative information revelation dates. They report negative associations between the dark period and audit quality and audit committee expertise. Badertscher and Burks (2011) provide evidence that fraud investigations prolong the restatement disclosure period, hence delaying subsequent earnings announcements and SEC filings.

Studies focusing on materiality document that managers are more likely to waive qualitatively material misstatements, while auditors strive to maintain their independence and audit quality in the disclosure of material misstatements (Keune and Johnstone, 2012; Jadallah, 2017; Thompson, 2017). However, the relation between materiality and the length of MDPs is underexplored. Even though Hirschey et al. (2015) control for Form 8-K filings in their MDP model, they do so mainly out of concern for the mechanical impact of SEC 2004. It is unclear whether materiality motivates management to shorten the length of MDPs. Furthermore, the effect of materiality on the detection period is potentially underestimated, because the materiality of a majority of their sample misstatements is not captured by their measure. 6

I posit that materiality motivates management to disclose misstatements in a timely manner to lower litigation risk, demonstrate competence, and cooperate with auditors. Litigation against managers is one of the severe consequences of misstatements. Litigation reduction theory hypothesizes that timely disclosure of bad news could decrease litigation risk.
Plaintiffs who file a case against managers most often need to provide evidence of violations of Rule 10b-5 (Skinner 1994). They need to show five elements under Rule 10b-5: "(1) a misstatement or omission of (2) a material fact (3) made with intent (4) that the plaintiff justifiably relied on (5) causing injury in connection with the purchase or sale of securities" (Skinner 1994, page 4). By definition, material misstatements meet the first two elements. Chapter 3 of SFAC No. 8 defines materiality as follows: "Information is material if omitting it or misstating it could influence decisions that users make on the basis of the financial information of a specific reporting entity". If managers conclude that misstatements are material, the misstatements must have affected the plaintiff's decisions, thereby satisfying element (4). Palmrose et al. (2004) provide evidence that misstatements result in significant losses to investors. It is not hard for plaintiffs to cite this finding and the negative market returns around the misstatement announcement date to show element (5). The challenge is to prove that managers committed the misstatements intentionally. If managers detect and disclose misstatements in a timely manner, it is hard for plaintiffs to prove that managers misstated information with intent, and it is more likely that a judge will dismiss the lawsuit. Even if the lawsuit proceeds, timely detection and disclosure would minimize the number of investors affected by the misstatement, hence reducing the size of the plaintiff class and the amount of compensation owed to them.

Reputation theory suggests that the ability to detect and disclose material misstatements is also a demonstration of management's competence. The sooner management detects the misstatement, the higher the ability it proves to investors, hence maintaining or facilitating the re-establishment of the business's financial credibility (Desai et al., 2006; Hennes et al., 2008). Furthermore, Graham et al. (2015) point out that the forthright communication of bad news to investors may help build an image of transparency to outsiders, another critical reputational asset. Perceived higher transparency reduces the cost of capital and promotes future return on investment. Consequently, it may regain investor confidence in managers' ability to operate the business.

Benefits from long-term auditor-client relations may motivate management to cooperate with auditors in the joint detection of material misstatements as well. Ghosh and Moon (2005) provide evidence that investors perceive higher earnings quality for longer auditor tenure, and that earnings response coefficients from returns-earnings regressions are higher for extended tenure. This long-term relationship is at risk in the case of misstatements. The theory of reputation protection suggests that auditors are unwilling to agree to waive material misstatements if that move threatens their most valuable asset, their reputation, and brings in higher litigation risk. Even when there is an increase in audit fees, auditors are unwilling to tolerate manipulations, because the downside income risk and litigation risks are elevated (Larcker and Richardson, 2004; Keune and Johnstone, 2012). The theory of reputation protection also suggests that auditors will strive to identify any material misstatements in a timely manner so as to better protect their reputation. Failure to cooperate with auditors is likely to jeopardize the auditor-client relationship.
Therefore, a better choice for management is to cooperate with auditors. This cooperation may facilitate the joint identification of material misstatements and help sustain a longer relationship with auditors. Taken together, the above discussion suggests a negative relation between materiality and the length of MDPs, as stated in the first hypothesis:

H1: Materiality of misstatement is negatively associated with the length of the misstatement detection period.

On the other hand, competing theories suggest a delay in the detection and disclosure of material misstatements. Agency theory suggests that management has incentives to extend the length of detection periods, or even waive the materiality resolution, by exercising its control over the process. Misstatement detections are attributable to firms, auditors, and the SEC (Hribar and Jenkins, 2004; Palmrose et al., 2004; Keune and Johnstone, 2012). Palmrose et al. (2004) document that firms are the major force in identifying and announcing misstatements. Even if auditors are the ones who first identify misstated accounts, they need to inform client managers and audit committees of the misstatements and reach a resolution with them on restatement materiality (Keune and Johnstone, 2012). During the materiality judgement process, the role of managers, auditors, and audit committees in resolving detected misstatements is not publicly observable (Kinney and Libby 2002; Nelson et al. 2002). This opacity gives management latitude to control, to a certain extent, the length of MDPs.

Furthermore, agency theory predicts that management will behave in its own best interest when using its discretion in materiality resolutions. Career concerns might be the major driving force behind managers' decisions (Collins et al., 2009; Kothari et al., 2009). Desai et al. (2006) document an approximately 60% turnover rate of at least one of the three top positions (Chairman, CEO, or President) in restatement firms within 24 months of the announcement of a restatement. The likelihood of executive turnover increases with the severity of restatements (Hennes et al., 2008). Even if executives save their jobs, they might be obliged to pay back, under clawback policies, part of their compensation that is deemed inappropriate (Pyzoha, 2015). More transparent restatement announcements may also increase litigation risk. Files et al. (2009) examine where managers place their restatement announcement in a press release: headline, body, or footnote. They find that a more prominent disclosure is associated with more negative returns and a higher likelihood of lawsuits. Timely disclosure of misstatements is a form of transparency; therefore, it may potentially increase the likelihood of litigation. Both executive turnover and litigation risks thus give management incentives to underreport material misstatements as immaterial. In that case, when these underreported "immaterial" misstatements are detected and disclosed in a timely manner, one is less likely to observe any difference between material and immaterial MDPs.

The question, then, is whether management is able to manipulate the length of MDPs, or even materiality itself. Technically, it is feasible in multiple ways. One way is to use its discretion in materiality assessments to conceal restatements (big R) as revisions (little r), while still disclosing them in a timely manner. Thompson (2017) investigates management's manipulation of materiality judgements.
He provides evidence that managers are more likely to use qualitative criteria to waive restatements of material misstatements, which resulted in proportional increases in revisions of immaterial misstatements in the last decade. However, based on his analysis, 40% of those revised financials should have been restated. Alternatively, since SEC 2004 mandates disclosure of materiality within four business days only after the non-reliance judgement, managers can delay the recognition of non-reliance discoveries, hence putting off the Form 8-K filing dates. If either way prevails, no difference would be observed in the length of material and immaterial MDPs.

Auditors play an important role in materiality resolutions. Can auditors fully prevent management from manipulating the length of MDPs and/or materiality? It is arguable whether they can deter management from abusing its discretion. Auditors are economically dependent on clients, since their major source of income is audit fees. This dependence puts pressure on auditors to allow clients to waive material misstatements so as to achieve their financial goals. Using analyst forecast consensus as a setting, Libby and Kinney (2000), Ng (2007), and Ng and Tan (2007) provide evidence supporting the theory of economic dependence. They find that auditors are more likely to give management a free pass to waive material misstatements, especially quantitative misstatements, if the correction of these misstatements would result in missing analyst forecasts. In my setting, the theory of economic dependence suggests that auditors would allow managers either to make immaterial resolutions or to delay the non-reliance decisions. In summary, if agency theory and economic dependence theory prevail, the relation between materiality and the length of MDPs could be positive. Given the conflict between competing theories, I empirically test H1 using the archival data available in the Audit Analytics database.

The second purpose of this study is to examine the relation between the length of material MDPs and litigation. Even though I argue that litigation reduction theory may explain why management strives to shorten the length of MDPs, there is no empirical evidence to support this argument. Both Field et al. (2005) and Donelson et al. (2012) exclude misstating firms from their studies of the timely disclosure of bad news and litigation, the rationale being that timely disclosure of misstatements is less likely to deter litigation. Given that material misstatements are presumed to have misled plaintiffs into making wrong decisions, material-misstating managers are more likely to face litigation than immaterial-misstating managers. However, these managers may compete with other material-misstating managers in disclosing misstatements sooner, so that they can demonstrate that their misstatements are unlikely to be intentional. If this strategy affects the judge's perception of management's intention, the judge is more likely to dismiss lawsuits against the firm. Therefore, I propose my second and third hypotheses:

H2: Materiality of misstatement is positively associated with the likelihood of litigation.

H3: Among material-misstating firms, a shorter detection period is associated with a lower likelihood of litigation.

DATA AND SAMPLE SELECTION

I test the above hypotheses using a sample of restatements of misstated financials from August 23, 2004 (the effective date of SEC 2004) to 2015. I identify restatement firms using the Audit Analytics Non-reliance Restatement database.
Following Jadallah (2017), I define a restatement as a big R if it is disclosed through a Form 8-K item 4.02 filing, and as a little r otherwise. To be included in the sample, I require a sample firm to have financial accounting data in the Compustat database and corporate social responsibility data in the MSCI KLD database.

[insert Table 1]

Table 1 shows the sample distribution in each year. The screening process results in 2,566 misstatements. Among them, 1,203 material misstatements are disclosed as restatements (big R), while 1,363 immaterial misstatements are revisions (little r). Consistent with Thompson (2017), I find that the number of big R filings decreases almost monotonically from 2005 to 2015.

Model Specification

In H1, I posit that materiality motivates management to lessen the length of MDPs expressed in quarters (DetQtr), which is measured as the number of days from the end of the misstated period to the misstatement disclosure date, divided by 90. DetQtr is a measure of gross MDPs because it comprises both the detection and disclosure periods. I formulate the relation between materiality and the detection period length in the following OLS regression model:

DetQtr_i = α0 + α1 BigR_i + α2 Ethdum_i + α3 CGOV_i + α4 LogAT_i + α5 MB_i + α6 Lev_i + α7 Age_i + α8 Big4_i + α9 NegEff_1_i + Fixed Industry Effects + Fixed Year Effects + ε (1)
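A minimal sketch of how specification (1) could be estimated with statsmodels is shown below; the file and column names are hypothetical, and C(industry) and C(year) stand in for the fixed effects. The variables themselves are defined in the paragraphs that follow.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names, mirroring specification (1).
df = pd.read_csv("misstatements.csv")

model1 = smf.ols(
    "DetQtr ~ BigR + Ethdum + CGOV + LogAT + MB + Lev + Age + Big4 + NegEff_1"
    " + C(industry) + C(year)",
    data=df,
).fit()
print(model1.summary())

# DetQtr is in 90-day quarters, so a BigR coefficient near -1.29 corresponds
# to roughly 1.29 * 90, i.e. about 116 days.
```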
In the base model, the explanatory variable of prime interest is the materiality dummy (BigR), which is set to 1 for a Form 8-K item 4.02 filing after August 23, 2004, and 0 otherwise. In addition, I control for a number of variables that may affect the length of the detection period (DetQtr). The first control variable, Ethdum, is an indicator variable for corporate social responsibility (CSR). Stakeholder and stewardship theories suggest management is morally obligated to fulfill its social responsibilities. Even if doing so may sacrifice managers' own interests, they should tailor their policies to protect the interests of other stakeholders, such as investors, workers, customers, and suppliers. Some studies provide supporting evidence that more socially responsible firms are less likely to engage in earnings management and fraud/SEC-investigated restatements (Hong and Andersen, 2011; Kim et al., 2012; Wans, 2017). If managers weigh their moral obligation to society more heavily among their responsibilities, they will examine their accounting practices, choices, and estimates more carefully. Therefore, they are more likely to discover irregularities and errors and take actions to minimize the negative impact on stakeholders' interests. In the case of auditor- and SEC-initiated restatements, a socially responsible firm is more likely to cooperate in revealing irregularities and errors, hence promoting the disclosure of misstatements. Following Kim et al. (2012), I construct Ethdum from the MSCI KLD data.

In model (1), I also control for firm characteristics that may affect the length of detection periods (Hirschey et al. 2015). These control variables include size (LogAT), market-to-book ratio (MB), leverage (Lev), and age (Age). Myers et al. (2013) argue that size, the natural logarithm of total assets, may affect a firm's reporting environment. Bell and Carcello (2000) document that a rapidly growing firm has incentives to engage in fraudulent accounting to inflate reported sales; for this study, this implies that management may be unwilling to reveal misstatements that, if corrected, would result in a halt or reversal of the firm's growth trend. I calculate the market-to-book ratio (MB) as the ratio of the market value of equity to the accounting book value. Next, I use leverage (Lev), the ratio of a company's long-term debt to its total assets, and Age, the length of public listing on exchanges, to control for financial distress. Firms are more likely to experience financial distress if they rely more on external financing through debt instruments or are in the early stages of their development, and younger firms are more likely to have weak corporate governance structures; therefore, these firms are more likely to commit accounting fraud (Feroz et al., 1991; Beasley, 1996). However, if a firm relies more on external financing, management might be motivated to set up forthright communication policies that reveal misstatements as soon as possible to build long-term credibility (Hirschey et al. 2015); then Lev would be negatively associated with the detection period (DetQtr). Based on the above discussion, I expect that management in young firms may be unwilling to disclose misstatements or make timely disclosures, but I make no directional prediction on the association between the detection period and leverage.

Public auditors play a critical role in monitoring clients' accounting practices, identifying accounting errors and fraud, and correcting misstatements. However, the theories of economic dependence and reputation protection have opposite predictions on auditors' impact on materiality resolutions and disclosure policies (Libby and Kinney, 2000; Ng, 2007; Ng and Tan, 2007; Keune and Johnstone, 2012; Singer and Zhang, 2018). Therefore, the impact of auditors on detection periods remains an empirical question. I include Big4, set to 1 for Big 4 auditors and 0 otherwise, to control for audit quality. Palmrose et al. (2004) and Land (2010) argue that misstatements of a larger dollar amount raise more concerns among investors. Following Schmidt and Wilkins (2013), I use the signed cumulative income effect of the restatement (NegEff_1) to control for the magnitude of the misstatement.

In H2 and H3, I posit that material-misstating firms are more likely to face class actions than immaterial-misstating firms, but that litigation risk is lower for firms that disclose material misstatements in a timely manner in comparison to their peers. The associations between the likelihood of litigation and materiality and timely disclosure are specified in logistic regression Models (2) and (3), in which the dependent variable indicates whether a class action was filed; the explanatory variables of interest are BigR and BigRQtr (BigR interacted with the detection period) in Model (2), and the detection period within the material-misstating group in Model (3).

Descriptive Data

[insert Table 2]

The highest pairwise correlation among the regressors is between LogAT and Lev (correlation = 0.56). However, multicollinearity does not appear to be a concern: I test variance inflation factors (VIF) in the models, and the highest VIF value is 2.10, much lower than the traditional threshold of 10.00 (Belsley et al., 1980).

Empirical Results

[insert Table 3]

I report the test results of the impact of materiality on the detection period length in Table 3. Across specifications, the coefficient estimates on BigR are negative and significant, supporting H1. Another interesting finding is that corporate social responsibility contributes to shorter detection periods. The estimated coefficient on Ethdum is -0.155 (t = -2.712), significant at the five percent level, suggesting that more socially responsible firms detect and reveal material misstatements seven days earlier than others. However, none of the coefficients on the other control variables are significant. The R-squared increases to 0.137 after including all control variables in the base model.

One potential issue with the base model is omitted control variables: it is possible that some firm-specific latent factors other than materiality are driving the observed shorter MDPs. I alleviate this concern by comparing material-misstatement firms with themselves, using firms that report both material and immaterial misstatements in the same year as a quasi-experimental setting.
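A pandas sketch (with hypothetical column names) of constructing this within-firm comparison sample:

```python
import pandas as pd

# Hypothetical column names: keep firm-years that disclose both a material
# (BigR = 1) and an immaterial (BigR = 0) misstatement in the same year.
df = pd.read_csv("misstatements.csv")

both_types = df.groupby(["firm_id", "year"])["BigR"].transform("nunique") == 2
within_firm_sample = df[both_types]
print(within_firm_sample.shape)
```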
I rerun the tests of the base models on this subsample and report the results in Column 3. If materiality were not the driving force, the coefficient estimated on materiality would be expected to be insignificant. However, consistent with the results in Columns 1 and 2, I find that management discovers and discloses material misstatements much faster than immaterial misstatements, as evidenced by the coefficient estimate of -1.257 (t-value = -4.191) on BigR. When using immaterial misstatements revealed by the same firm as the control group, the coefficient estimate on Ethdum is not significant at any traditional level, which may imply that socially responsible firms do not discriminate in the timely disclosure of material and immaterial misstatements. Due to the small sample size (508 observations), the R-squared is only 0.103, slightly smaller than in the full sample.

The finding in Column 3 raises another question: is it possible that the observed impact of materiality on timely disclosure in the pooled-sample test is due to multiple disclosures, both material and immaterial, in the same year? I answer this question by removing the immaterial misstatements used in the Column 3 test from the pooled sample. This process generates 2,299 observations with a single filing for each firm in one year. Column 4 presents the test results, where I find that the coefficient estimates on BigR (-1.295, t-value = -12.141) and Ethdum (-0.164, t-value = -2.191) are almost identical to those in the full-sample test. However, I observe no significant relationship between DetQtr and CGOV or Big4, nor the other control variables, in any of the above tests. The overall results in Table 3 provide supporting evidence for H1, namely that materiality is negatively associated with the length of MDPs.

[insert Table 4]

Furthermore, I hold the misstatement severity level constant by splitting the sample into serious and non-serious restatement subsamples and report the test results of Model (1) in Table 4. Hennes et al. (2008) and Dechow et al. (2010) advocate that research on restatements of misstated financials distinguish between serious (fraud or SEC-investigated) and non-serious (clerical or accounting error) restatements so as to provide more insights into the determinants and consequences of variation in restatements. Palmrose et al. (2004) document that market reactions are more negative for restatements involving fraud. It should therefore be of interest to the literature to learn the impact of interactions between self-assessed and outsider-perceived importance of misstatements on the detection period.

Even though the above tests consistently show that materiality shortens the length of MDPs, a remaining concern is that all the above findings are mechanical, due to the four-business-day disclosure requirement in SEC 2004. I address this concern by conducting three additional tests. In the first, the variable of interest is the absolute value of the cumulative earnings impact (AbsNegEff), which captures the magnitude of misstated earnings. In addition, I use the signed value of the cumulative earnings impact (NegEff_1) in Model (1a), because misstatements that have a negative impact on restated earnings are more concerning to investors.

[insert Table 5]

In the second test, FSECrst equals 1 if a firm's misstatement is fraud-related or receives an SEC investigation, and 0 otherwise. Both fraud and an SEC investigation are strong indicators that the misstatement is material to plaintiffs.
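Constructing the FSECrst indicator is straightforward once fraud and SEC-investigation flags are available; the flag names below are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical flag names: FSECrst = 1 for fraud-related or SEC-investigated
# restatements, 0 otherwise.
df = pd.read_csv("misstatements.csv")

df["FSECrst"] = np.where((df["fraud"] == 1) | (df["sec_investigation"] == 1), 1, 0)
print(df["FSECrst"].mean())  # share of serious restatements in the sample
```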
However, more than 50% of serious restatements are disclosed through Form 8-K filings and are hence inevitably highly correlated with BigR. To lessen this correlation, I also include restatements from the pre-SEC 2004 period, when no BigR exists, in the test, obtaining a sample of 2,866 observations. If firms had made non-serious restatements of material misstatements in a timely manner in the pre-SEC 2004 period, I would expect to observe no or less impact of FSECrst on the length of detection periods.

Third, I test the relation between materiality and gross MDPs using non-Form 8-K misstatements. There is no mandated disclosure requirement for these misstatements under SEC 2004; therefore, they are not subject to its mechanical influence. For these misstatements, I use the cumulative income impact as a proxy for materiality. Because these misstatements have a much smaller impact on earnings than material misstatements, I use a cumulative income impact dummy (NegDum) in Model (1c) to capture the extreme effect:

DetQtr_i = α0 + α1 NegDum_i + α2 Ethdum_i + α3 CGOV_i + α4 LogAT_i + α5 MB_i + α6 Lev_i + α7 Age_i + α8 Big4_i + Fixed Industry Effects + Fixed Year Effects + ε (1c)

where NegDum is set to 1 if the correction of the misstatement decreases net income by over $20,979, and 0 otherwise. Even though the cumulative income effect of these misstatements is not high enough to qualify as material, it is still rational to expect that a higher income effect raises greater concern among investors. This research design decreases the sample size to 675 observations. Due to the control of NegDum, I remove NegEff_1 from Model (1c).

Even though the above three research designs alleviate the mechanical-impact concern, they do not exclude another possible explanation of the main finding: the observed negative association between materiality and gross MDPs could be due to the possibility that a greater cumulative income impact is easier to detect, even absent management incentives. I address this concern by examining whether non-income-related components of materiality affect the length of MDPs as well. I use a non-income-related material misstatement indicator (Non-NI BigR) as a substitute for BigR in Model (1d):

DetQtr_i = α0 + α1 Non-NI BigR_i + α2 Ethdum_i + α3 CGOV_i + α4 LogAT_i + α5 MB_i + α6 Lev_i + α7 Age_i + α8 Big4_i + Fixed Industry Effects + Fixed Year Effects + ε (1d)

where Non-NI BigR is equal to 1 for a material misstatement, and 0 otherwise. I conduct an OLS regression test of Model (1d) using material misstatements that have no cumulative income impact. This requirement results in a sample size of 132 observations.

[insert Table 6]

Columns 1, 2, and 3 of Table 6 show the test results using Models (1b), (1c), and (1d), respectively. The coefficient estimates on FSECrst, NegDum, and Non-NI BigR are -0.267 (t-value = -1.726), -0.567 (t-value = -2.639), and -2.257 (t-value = -3.197), respectively. All coefficients are significant at least at the 10% level. Consistent with the findings in Table 4, the alternative measures of materiality are negatively associated with the length of MDPs, hence providing further support for H1 while alleviating both the mechanical-impact and easy-detection concerns. The economic implication of the findings in Columns 1 and 2 is that, on average, serious misstatements and more income-decreasing restatements shorten the detection period by 24 and 51 days, respectively. The weaker impact of these two alternative materiality measures on the length of detection periods might be attributable to the changes in sample and materiality level. The result in Column 3 indicates that non-earnings-related components of materiality decrease the length of MDPs as well. Due to the small sample size, however, I am cautious about the interpretation of its economic implications.
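A small sketch of the NegDum indicator from Model (1c), with an illustrative column name; the final comment verifies the day conversions reported above.

```python
import numpy as np
import pandas as pd

# Illustrative column name: NegDum = 1 when correcting the misstatement lowers
# net income by more than $20,979, and 0 otherwise.
df = pd.DataFrame({"cum_income_effect": [-150000.0, -21000.0, -5000.0, 12000.0]})
df["NegDum"] = np.where(df["cum_income_effect"] < -20979, 1, 0)
print(df)  # NegDum = 1, 1, 0, 0

# Converting DetQtr coefficients to days: -0.267 * 90 ~ -24 and -0.567 * 90 ~ -51,
# matching the 24- and 51-day magnitudes reported above.
```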
[insert Table 7]

In H2, I posit that material-misstating firms are more likely to face class actions than immaterial-misstating firms, due to the severe impact of their misstated financials. Table 7 reports the results using Model (2). The coefficient estimate on BigR is 0.640 (Wald chi-square = 13.424) and significant at the 1% level, indicating that the materiality of a misstatement increases the log-odds of litigation by 0.640. This finding provides supporting evidence for H2. However, the insignificant coefficient on BigRQtr suggests that when both material- and immaterial-misstating firms make timely detection and disclosure of misstatements, their litigation risks are indistinguishable.

[insert Table 8]

However, within-group tests show that a shorter detection period decreases a material-misstating firm's litigation risk, as reported in Columns (1) and (2) of Table 8. These results are in line with the conjecture in H3 that materiality motivates misstating firms to accelerate detection and disclosure so as to lower their relative litigation risk.

[insert Table 9]

I conduct a path analysis to provide more solid evidence that materiality channels its impact on the length of detection periods through litigation considerations. Because litigation concerns are unobservable when managers screen for material misstatements, I use class actions filed against misstating firms as an ex-post proxy for litigation concerns. Table 9 shows that materiality shortens detection periods through both the direct path (BigR -> DetQtr) and the indirect path (BigR -> LTGT Concern -> DetQtr). Through the indirect path, materiality raises managers' litigation concerns, which consequently motivate managers to reduce the length of detection periods. The coefficient estimates are significant at the one percent level in both the direct and indirect path tests.

Hirschey et al. (2015) argue that it is more appropriate to use negative binomial regressions in detection period tests. Following Hirschey et al. (2015), I also test my base model using negative binomial regressions. The untabulated results are qualitatively the same as those reported in Table 3.
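The two-equation structure of such a path analysis can be sketched as follows; the column names are hypothetical, and for simplicity the binary litigation proxy is modelled with a linear equation rather than the estimator used in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names; LTGT is the ex-post litigation-concern proxy.
# The indirect effect of BigR on DetQtr through litigation concern is a1 * b2.
df = pd.read_csv("misstatements.csv")

path_a = smf.ols("LTGT ~ BigR", data=df).fit()           # BigR -> LTGT
path_b = smf.ols("DetQtr ~ BigR + LTGT", data=df).fit()  # direct and mediated

a1, b2 = path_a.params["BigR"], path_b.params["LTGT"]
print("direct effect:", path_b.params["BigR"])
print("indirect effect via litigation concern:", a1 * b2)
```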
CONCLUSIONS

In this paper, I examine whether materiality motivates management to shorten the length of MDPs. Using Form 8-K item 4.02 filings (BigR) as a proxy for misstatement materiality, I find that material-misstating firms detect and disclose misstatements 116 days earlier than immaterial-misstating firms do. The effect of materiality is more evident in the discovery and disclosure of serious restatements. The biggest concern with using BigR to capture materiality is the mechanical effect of the mandated four-business-day disclosure requirement in SEC 2004. I alleviate this concern by using misstatement severity and the cumulative earnings impact of restatements as alternative measures of materiality. The corporate governance proxy is subject to well-known measurement issues (Larcker et al., 2007; Dechow et al., 2010). Even though I find no association between the detection period length and corporate governance, I cannot rule out the possibility of measurement errors in the proxy; therefore, I would suggest a cautious interpretation of those test results. It might be worthwhile for future studies to develop better proxies for governance and reexamine, in the presence of materiality, the impact of governance on the length of detection periods.

Figure 1. This figure depicts the timeline over which the misstated period, the detection period, and the dark period are measured. We acknowledge that some firms choose to disclose details of misstatements on the initial misstatement disclosure dates; thus, the dark period does not exist for every misstatement.

Table 2 notes. Pearson correlations (partial; the first two rows and the label of variable (3) are not recoverable):
            (1)    (2)    (3)    (4)    (5)    (6)    (7)    (8)    (9)
(3)       -0.03  -0.08   1.00
CGOV (4)   0.03   0.03  -0.02   1.00
LogAT (5) -0.03  -0.14   0.27  -0.20   1.00
MB (6)    -0.04   0.09   0.06   0.01  -0.15   1.00
Lev (7)   -0.01   0.01   0.04  -0.04   0.56   0.02   1.00
Age (8)   -0.01  -0.14   0.14  -0.08   0.30  -0.08   0.10   1.00
Big4 (9)   0.01  -0.08   0.09  -0.18   0.23   0.04   0.04   0.10   1.00
NegEff_1 (10): values not recovered.
This table provides summary statistics for variables used to examine the relationship between the detection period length and materiality. All variables are defined in Appendix A.

Table 3 notes. Columns 1, 2, 3, and 4 show results without controls, with controls using the pooled sample, using firm-years with both material and immaterial misstatements, and with one type of misstatement, respectively. T-values are reported in parentheses. All variables are defined in Appendix A. The symbols *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively.

Table 4 notes. Columns 1 and 2 show results using serious and non-serious restatements, respectively. T-values are reported in parentheses. All variables are defined in Appendix A. The symbols *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively.

Table 5 notes. This table reports coefficient estimates on variables used in examining the relationship between the length of net detection periods and the absolute value of the cumulative earnings impact (AbsNegEff) in Column 1 and the signed value of the cumulative earnings impact multiplied by negative one (NegEff_1) in Column 2. The sample used in these tests comprises Form 8-K filers.

Net DetQtr_i = α0 + α1 AbsNegEff_i/NegEff_1_i + α2 Ethdum_i + α3 CGOV_i + α4 LogAT_i + α5 MB_i + α6 Lev_i + α7 Age_i + α8 Big4_i + Fixed Industry Effects + Fixed Year Effects + ε (1a)

T-values are reported in parentheses. All variables are defined in Appendix A. The symbols *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively.

Table 6 notes (Models 1b, 1c, 1d). T-values are reported in parentheses. All variables are defined in Appendix A. The symbols *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively.

Table 7 notes (Model (2); only the tail of the printed specification, "... + Fixed Year Effects + ε (2)", is recoverable). All variables are defined in Appendix A. Wald chi-square values are reported in parentheses. The symbols *, **, and *** indicate significance at the 10%, 5%, and 1% levels in a two-tailed test, respectively. The symbol # denotes significance at the 10% level in a one-tailed test.

Table 8 notes. All variables are defined in Appendix A. Wald chi-square values are reported in parentheses. The symbols *, **, and *** indicate significance at the 10%, 5%, and 1% levels in a two-tailed test, respectively. The symbol # denotes significance at the 10% level in a one-tailed test.

Table 9 notes. This table reports coefficient estimates on variables used in a path analysis of the relations between materiality, litigation concerns, and the length of detection periods. All variables are defined in Appendix A. The symbols *, **, and *** indicate significance at the 10%, 5%, and 1% levels in a two-tailed test, respectively.
Psychometric validation of the hyperglycaemia avoidance scale UK (HAS-UK)

Hyperglycaemia aversion in type 1 diabetes can be associated with severe hypoglycaemia and impaired awareness of hypoglycaemia but is not routinely assessed clinically. This study aimed to undertake the first psychometric validation of the UK version of the Hyperglycaemia Avoidance Scale (HAS-UK).

1 | INTRODUCTION

Hyper- and hypoglycaemia are frequent occurrences for those living with type 1 diabetes, and both are associated with unpleasant symptoms and adverse health outcomes, 1 as well as cost implications related to healthcare utilisation. Hypoglycaemia symptoms include irritability, dizziness, and sweating, as well as more serious consequences such as confusion, loss of consciousness, seizure, and risk of death. Micro- and macrovascular complications arise from persistent hyperglycaemia. These adverse outcomes are reduced when glucose is maintained within the target range, 2 but achieving this is challenging.

From the time of diagnosis onward, hyperglycaemia is frequently discussed during diabetes consultations, including informing individuals of the risk of serious complications that can occur. While substantial work has been conducted investigating fear of hypoglycaemia, 3-5 there is less literature on fear of, or aversion to, hyperglycaemia.

Distress related to hyperglycaemia is common, 6 and in some individuals this distress includes hyperglycaemia aversion, 7,8 which is characterised by concerns related to hyperglycaemia and a detail-focused self-management approach to avoid or alleviate hyperglycaemia, often running blood glucose below the recommended levels. 7,8 Anecdotally, hyperglycaemia aversion is frequently seen clinically but is not routinely assessed formally. Hyperglycaemia aversion is important to identify as it may be associated with a preference for low glucose and increase the risk of hypoglycaemia. 7,8 Exposure to frequent hypoglycaemia is a risk factor for the development of impaired awareness of hypoglycaemia, 9 itself a recognised risk factor for severe hypoglycaemic episodes. Hyperglycaemia aversion and consequent avoidance have the potential to lead to greater acceptance of hypoglycaemia, which in turn may lead to increased frequency and severity of hypoglycaemia. 10 The ability to identify and support individuals at risk requires validated tools that assess the extent, emotional experience, and behavioural manifestations of hyperglycaemia aversion.

The Hyperglycaemia Avoidance Scale (HAS) was developed and validated in the USA, aiming to quantify the extent and impact of hyperglycaemia-related concerns. 11 The scale includes 22 items distributed over four subscales: immediate action, worry, low blood glucose preference, and avoid extremes. The scale was found to have excellent reliability across all factors. The validation study data found that the HAS subscales were predictive of prospective severe hypoglycaemia as well as adverse mishaps during driving.

The HAS-UK is a modified version of the HAS. 11 Content and face validity were previously assessed by the investigators of the HypoCOMPaSS trial. 12
Fourteen adults living with type 1 diabetes completed the HAS and were interviewed before and after doing so, including cognitive debriefing. Both participant and specialist clinician input identified areas of change needed. The areas identified included linguistic adaptations for UK English (e.g. changing 'feeling mad at yourself' to 'feeling annoyed at yourself'), adjustments needed to reflect relevance for users of insulin pumps and multiple daily injection users, and changes to the blood glucose measurement units from mg/dL to mmol/L. The response format was also altered from a numerical Likert scale (comprising end and midpoint anchors of 'never', 'sometimes' and 'always') to a tick-box scoring grid with five frequency ratings ('never', 'rarely', 'sometimes', 'often' and 'always'). The full HAS-UK questionnaire and the original HAS questions can be found in Appendix S1.

The HAS-UK has a number of differences when compared to the original HAS and has not been subject to formal psychometric evaluation. This study aimed to validate the HAS-UK via exploratory factor analysis in the adult type 1 diabetes population, as well as examine the internal consistency and convergent validity of the measure in order to assess its clinical utility.

What's new?
• Hyperglycaemia aversion is often seen clinically in people with type 1 diabetes and can be associated with severe hypoglycaemia and impaired awareness of hypoglycaemia.

| Study populations

A validation study was conducted using the HAS-UK. This questionnaire was completed by participants recruited from three studies, all of whom lived with type 1 diabetes. Data were aggregated to create a larger individual participant sample size to increase power of the current analysis. The three studies were as follows.

2.1.1 | HYPE (Avoidance of hyperglycaemia in people with type 1 diabetes)

The study was about hyperglycaemia aversion in type 1 diabetes. People living with type 1 diabetes who attended …

| COBrAware

This was a study which recruited people living with type 1 diabetes, attending the specialist clinics of the UK sites of the HARPdoc RCT, who were matched for sex and diabetes duration with people recruited into the HARPdoc RCT, but who did not have impaired awareness of hypoglycaemia (Gold score ≤3) or recurrent severe hypoglycaemia, as a comparator group to the HARPdoc RCT participants.13 Participants were not paid for their time. This study was granted ethical approval by the London Dulwich and Wales Research Ethics Committees (IRAS numbers 216381 and 271164) and the Institutional Review Board of the Joslin Diabetes Center. All participants gave written informed consent prior to any study procedure. Data were collected from 2019 to 2020.

| HAS-UK

The HAS-UK is a 24-item questionnaire which asks respondents about behaviours engaged in to avoid high blood glucose levels, and feelings around high blood glucose levels. Responses are selected on a five-point scale ('Never', 'Rarely', 'Sometimes', 'Often' and 'Always'), and total scores are summated, resulting in a range from 0 to 96 points. Higher total scores indicate greater hyperglycaemia aversion. The questionnaire contains two additional items asking respondents the highest blood glucose level that they would feel comfortable with on a given day, and the highest HbA1c that they would feel comfortable with. When participants complete the HAS-UK, it is titled 'The high blood sugar survey'.
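As a concrete illustration of the scoring rule just described, a minimal sketch (ours; the actual item wording is in Appendix S1, and the response handling here is a hypothetical simplification):

```python
# HAS-UK total scoring: 24 items on a five-point frequency scale, summed to a
# 0-96 total, with higher totals indicating greater hyperglycaemia aversion.
RATINGS = {"never": 0, "rarely": 1, "sometimes": 2, "often": 3, "always": 4}

def has_uk_total(responses: list[str]) -> int:
    """Sum 24 frequency ratings; raise if the form is incomplete."""
    if len(responses) != 24:
        raise ValueError("HAS-UK requires responses to all 24 items")
    return sum(RATINGS[r.lower()] for r in responses)

print(has_uk_total(["often"] * 10 + ["sometimes"] * 14))  # -> 58
```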
| Additional questionnaires

Along with the HAS-UK, HYPE participants were invited to complete additional measures related to diabetes and well-being to assess convergent validity and the HAS-UK's clinical utility.

2.3.1 | Gold score14

This is a single item which asks participants 'Do you know when your hypos are commencing?'. Participants respond on a seven-point Likert scale from 1 (always aware) to 7 (never aware). Scores of ≥4 indicate impaired awareness of hypoglycaemia.

2.3.2 | Hypoglycaemia fear survey II (HFS-II)15

This is a 33-item questionnaire, comprising the 15-item behaviour subscale (scored 0-60) and the 18-item worry subscale (scored 0-76; in both cases higher scores represent greater fear of hypoglycaemia).

2.3.3 | Problem areas in diabetes 5 (PAID-5)16

This measure asks participants to select on a five-point Likert scale how much each of five areas of diabetes is a problem for them at present, ranging from 'not a problem' to a 'serious problem'. The measure yields scores of 0-20. Higher scores suggest greater diabetes-related distress, and a score of ≥8 suggests high levels of distress.

2.3.4 | General anxiety disorder 7 (GAD-7)17

A seven-item measure of anxiety, where respondents are asked the frequency with which they have experienced certain symptoms within the past 2 weeks, ranging from 'not at all' to 'nearly every day'. The measure is scored 0-21, with higher scores suggesting greater levels of anxiety.

2.3.5 | Patient health questionnaire 9 (PHQ-9)18

A nine-item measure of depression, where respondents are asked the frequency with which they have experienced certain symptoms within the past 2 weeks, from 'not at all' to 'nearly every day'. The measure is scored 0-27, with higher scores suggesting greater levels of depression.

2.3.6 | State-trait anxiety inventory, trait subscale (STAI-T)19

The STAI-T measures trait anxiety. Individuals answer 20 questions about how they generally feel, and each item is on a four-point Likert scale from 'almost never' to 'almost always'. The measure is scored 20-80, with higher scores suggesting greater trait anxiety.

PAID-5 and Gold score data were available for HARPdoc and COBrAware participants, and HFS-II data were available for HARPdoc participants. These were therefore also included in analyses.

| Data analysis

Statistical analyses were carried out using SPSS (version 26).

First, exploratory factor analysis was carried out using combined data from all three studies (HYPE, HARPdoc, COBrAware). Individuals with missing data on any HAS-UK items were excluded from analyses. Sensitivity analyses including only the HYPE study, as the cohort recruited comprised a general type 1 diabetes population with no specific requirements for additional characteristics such as severe hypoglycaemia or preserved hypoglycaemic awareness, were also performed. Principal component analysis (PCA) was performed, with orthogonal rotation (varimax) used due to the exploratory nature of the analysis. The factor structure of the HAS-UK was informed by considering both the eigenvalues of factors (above 1.0) and also by observing the elbow in the scree plot.20 Items with loading ≤0.3 were removed from analysis, given concerns about the stability of items with loadings below this threshold.21 Once the optimum factor structure was ascertained, factor scores were calculated for each individual across studies to use in subsequent analyses, by adding together the items from that factor to create subscales.
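Two workhorse computations in this analysis plan, the eigenvalue inspection and the Cronbach's alpha used in the next step, can be sketched as follows; the data below are simulated stand-ins, not study data.

```python
# Kaiser's eigenvalue criterion on the item correlation matrix (for the
# scree/elbow inspection) and Cronbach's alpha for internal consistency.
import numpy as np

def kaiser_eigenvalues(items: np.ndarray) -> np.ndarray:
    corr = np.corrcoef(items, rowvar=False)
    return np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending eigenvalues

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(426, 1))  # one shared factor, hypothetical n = 426
items = np.clip(np.round(2 + latent + rng.normal(scale=0.8, size=(426, 8))), 0, 4)
print("factors with eigenvalue > 1:", (kaiser_eigenvalues(items) > 1.0).sum())
print("alpha:", round(cronbach_alpha(items), 3))
```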
The next stage of validation comprised evaluating internal consistency. This step assessed the correlation of questions loading onto a common factor and measured reliability regarding the consistency of responses. This study calculated Cronbach's Alpha (α), considering α ≥ 0.7 to represent acceptable internal consistency.

Convergent validity was assessed with data from all three study cohorts, using Pearson's correlation between the HAS-UK and the PAID-5, which measures diabetes-related distress.

Associations with psychological and clinical factors were then considered to assess the HAS-UK's clinical utility, using independent samples t-tests and Pearson's correlations. Results were considered statistically significant when p < 0.05.

| Characteristics of participants

3.1.1 | All participants (total)

Of the 431 participants in the three studies, the mean age was 49.5 years and 58.0% were women. Mean duration of diabetes was 29 years, with 192 (44.5%) participants using multiple daily injections and 229 (53.1%) using an insulin pump (Table 1). Four HARPdoc participants and one COBrAware participant had missing HAS-UK data and were subsequently excluded from the analyses.

3.1.2 | HYPE

465 potential participants were contacted by email, and 253 complete survey responses were received following this, comprising a 54.4% response rate. Of the 253 responses, 252 were completed online and one was returned on paper. The mean age was 48.7 years and 58.9% were women. Mean duration of diabetes was 26.6 years, with 107 (42.3%) participants using multiple daily injections and 146 (57.7%) using an insulin pump.

3.1.3 | HARPdoc22

As published, 626 people were assessed, including a large US cohort identified and 'cold-called' from research-permitted medical records. Of these, 123 consented, 118 completed a baseline assessment, and 99 were recruited. The mean age was 54 years, and 55.6% were women. Mean duration of diabetes was 35.8 ± 15.4 years, with 31 of 97 participants with data (32%) using an insulin pump. HAS-UK data were available for 95 participants.

3.1.4 | COBrAware

Also as published, 106 people consented to the COBrAware study and 81 returned questionnaire data. Three participants did not include useable HAS-UK scores, leaving 78. Their mean age was 47 years, and 58% were women. Their mean diabetes duration was 29 years, with 34.8% on pump.

Further population characteristics for all three study subgroups are summarised in Table 1. The three groups were roughly comparable in terms of age, ethnicity, and sex, with long mean diabetes duration, which was shortest in the HYPE group.

TABLE 1 Participant characteristics.

| Scale structure and internal consistency

A total of n = 426 were included in the factor analysis (four observations from HARPdoc and one from COBrAware had missing HAS-UK data and were excluded). The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was 0.851, with Bartlett's test of sphericity statistically significant (p < 0.001), indicating that the sample was adequate for factor analysis.21 PCA identified six factors with eigenvalues >1.0; however, from observing the scree plot (Figure 1), there was a clear elbow after
the third factor, with scores levelling off after this factor. As a result, the three-factor solution was retained, explaining 42.26% of the variance. These factors were named F1: 'Worry', F2: 'Blood glucose decisions', and F3: 'Lifestyle decisions'. Any item that loaded onto two factors was placed under a single factor, determined by strength of factor loading and face validity. There were no items with factor loadings <0.3, and no concerning levels of cross-loading were observed. Sensitivity analyses including only the HYPE sample observed an identical factor solution (see Appendices S2 and S3). Table 2 shows all scale items with factor loadings of greater than 0.3, together with their respective factor weights. All factors were retained. Table 2 also shows mean HAS-UK item scores.

FIGURE 1 Scree plot from exploratory factor analysis.

TABLE 2 Mean item scores and exploratory factor analysis of the HAS-UK for HYPE, HARPdoc, and COBrAware cohorts (factor columns: Worry, Blood glucose decisions, Lifestyle decisions; item 1: 'Try to lower your blood glucose when it is higher than 10 mmol/L').

To evaluate internal consistency, Cronbach's alphas were calculated: worry: α = 0.866; blood glucose decisions: α = 0.761; lifestyle decisions: α = 0.539, indicating acceptable internal consistency for the worry and blood glucose decisions factors, but not for the lifestyle decisions factor.

| Convergent validity

Correlations between the HAS-UK total score and both the worry and blood glucose decisions factors were strong, and the correlation between the HAS-UK total score and the lifestyle decisions factor was moderate (Table 3). The combined data also showed a moderate correlation between the HAS-UK total score and the PAID-5 (r = 0.550, p < 0.001), and convergent validity was therefore supported.

| Associations with other variables

Analyses were carried out to compute the correlations between total HAS-UK scores, the three factor scores, and the total scores of other psychometric questionnaires, as well as the Gold scores and HbA1c measures. Pearson correlation coefficients are shown in Table 3, which indicates which study cohorts are included in each correlation.

Independent samples t-tests were conducted to investigate associations with clinical variables. Pairwise comparisons were made for HAS-UK total score, worry, blood glucose decisions, and lifestyle decisions factors with: age at diagnosis (<18 or ≥18 years) (HYPE cohort), glucose sensor monitoring (all three study cohorts), insulin modality (pump or MDI) (all three study cohorts), presence of impaired awareness of hypoglycaemia (all three study cohorts), and occurrence of severe hypoglycaemia over the last year (HYPE and HARPdoc). A pairwise comparison was also made for participants' highest comfortable blood glucose level and occurrence of severe hypoglycaemia over the past year (HYPE and HARPdoc).

HAS-UK total score was greater in participants using insulin pumps compared to MDI users (pump mean score 46.19 (SD 12.97) vs. MDI mean score 42.42 (SD 11.70), p = 0.002), as was worry score (pump mean score 23.31 (SD 7.70) vs. MDI score 21.25 (SD 7.96), p = 0.008) and blood glucose decisions score (pump mean score 18.32 (SD 5.86) vs. MDI score 16.97 (SD 5.29), p = 0.015). Blood glucose decisions score was higher in those using a continuous sensor than a meter for self-monitoring of blood glucose (sensor mean score 18.10 (SD 5.68) vs. meter mean score 16.67 (SD 5.47), p = 0.02). Those with IAH had higher …
| DISCUSSION

This study confirms the validity of the HAS-UK following its adaptation from a US version to reflect cultural and practice differences in adults with type 1 diabetes living in the UK, and to update terminology to reflect changing methods of insulin delivery. The study found excellent internal consistency for the worry and blood glucose decisions factors, although the internal consistency for lifestyle decisions was not considered acceptable. Convergent validity was supported by a moderate correlation between the HAS-UK total score and the PAID-5. The HAS-UK total score was greater in insulin pump users than MDI users; blood glucose decisions score was higher in those using a continuous blood glucose sensor compared to a meter.

The HAS-UK total score and worry subscale were both positively associated with all self-report questionnaires around emotional and psychological health and hypoglycaemia fear. Blood glucose decisions and lifestyle decisions factors also showed a positive correlation with these questionnaires, apart from the HFS-II worry subscale in both cases, but less strongly. This aligns with expectations, as 'worry' represents an emotional construct and these questionnaires are designed to measure distress, whereas the other two factors relate more to behaviours and preferences, and some individuals with hyperglycaemia aversion may not express associated distress.8

Blood glucose decisions and the single question about highest comfortable blood glucose level were both moderately negatively correlated with HbA1c, suggesting that adults living with type 1 diabetes are able to enact this preference for lower blood glucose levels effectively.

TABLE 3 HAS-UK total and factor correlations.

The items comprising the blood glucose decisions factor have generally good face validity for self-management decisions that may be indicative of hyperglycaemia aversion8 and be associated with clinical risks such as severe hypoglycaemia and impaired awareness of hypoglycaemia9 (e.g. item 17: 'Feel comfortable about being hypo if that is what it takes to avoid high glucose'), and thus are likely to have utility in supporting clinicians to identify individuals who may be at risk.

It is noteworthy that total HAS-UK score was greater in those using insulin pumps than MDI. Problematic hypoglycaemia is one of the clinical indications for insulin pump usage.23 There is, however, a risk that transitioning an individual with problematic hyperglycaemia aversion, as indicated by severe hypoglycaemia, onto an insulin pump may in fact inadvertently further enable them to run their blood glucose at a lower level, especially if combined with a continuous glucose monitor,8 which may further increase the risk of hypoglycaemia. It is also likely that those who are more motivated to avoid hyperglycaemia may choose to use an insulin pump to support them in enacting this preference.
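As a sanity check on the shape of these comparisons, Welch's t-test can be reproduced from the reported summary statistics alone. The group sizes below are the Table 1 counts (229 pump, 192 MDI), which may differ slightly from the ns actually used after exclusions:

```python
# Recompute the pump-vs-MDI comparison of HAS-UK totals from summary stats.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=46.19, std1=12.97, nobs1=229,   # pump users
                            mean2=42.42, std2=11.70, nobs2=192,   # MDI users
                            equal_var=False)                      # Welch's test
print(f"t = {t:.2f}, p = {p:.4f}")  # close to the reported p = 0.002
```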
Hypoglycaemia frequency, severity, awareness, and fear are routinely assessed in clinical practice and in research. Fear of hypoglycaemia is often quoted as a contributor to higher HbA1c, but the role of hyperglycaemia aversion as a risk factor for problematic hypoglycaemia is less well established. Validation of the HAS-UK instrument and demonstration of associations with outcomes suggest that this may be a useful adjunct to understand both the risk of problematic hypoglycaemia and a potential for intervention. Despite improvements in diabetes treatment technology, current data suggest that nearly 9% of people using automated insulin delivery systems still report recurrent severe hypoglycaemia in a year,24 with evidence that cognitions around hypoglycaemia, including prioritisation of hyperglycaemia avoidance, may contribute to the residual problem.10 It is likely that the HAS-UK may be a valuable tool to identify people who need additional support to avoid hypoglycaemia even with technology. The two standalone questions about highest comfortable blood glucose level and highest comfortable HbA1c may also prove useful guides to understanding whether the person's concerns are 'excessive' or clinically concerning. Given that the questionnaire assesses both active avoidance of hyperglycaemia and affective concerns around hyperglycaemia, which may not associate with behavioural responses, it may be prudent to consider whether the measure may more accurately be called the 'Hyperglycaemia Aversion Scale' as opposed to the 'Hyperglycaemia Avoidance Scale'.

Although the HYPE cohort was a general type 1 diabetes population, there may have been some selection bias in terms of those who showed interest and participated in the study, which may have implications for generalisability. The HARPdoc and COBrAware cohorts were biased by intention, being enriched with participants with and without problematic hypoglycaemia, respectively. For all three studies, it is not possible to determine if there were any differences between those who chose to participate and those who did not.

The present analyses sought to undertake psychometric evaluation of the existing HAS-UK. The variance explained by the final solution was lower than recommended in the general literature, indicating that further approaches to refine the structure of the measure may enhance the properties of the HAS-UK. Additional psychometric validation might further the clinical utility of the measure, including identifying individual items that might be contributing to poorer reliability and arguably be less clinically valuable in the assessment of hyperglycaemia aversion (e.g. the 'lifestyle decisions' factor). This should include examining the HAS-UK for test-retest reliability, as well as measurement invariance and differential item functioning across demographic and clinical subgroups.
A functional dissociation of the left frontal regions that contribute to single word production tasks

Controversy surrounds the interpretation of higher activation for pseudoword compared to word reading in the left precentral gyrus and pars opercularis. Specifically, does activation in these regions reflect: (1) the demands on sublexical assembly of articulatory codes, or (2) retrieval effort because the combinations of articulatory codes are unfamiliar? Using fMRI, in 84 neurologically intact participants, we addressed this issue by comparing reading and repetition of words (W) and pseudowords (P) to naming objects (O) from pictures or sounds. As objects do not provide sublexical articulatory cues, we hypothesise that retrieval effort will be greater for object naming than word repetition/reading (which benefits from both lexical and sublexical cues), while the demands on sublexical assembly will be higher for pseudoword production than object naming. We found that activation was: (i) highest for pseudoword reading [P>O&W in the visual modality] in the anterior part of the ventral precentral gyrus bordering the precentral sulcus (vPCg/vPCs), consistent with the sublexical assembly of articulatory codes; but (ii) as high for object naming as pseudoword production [P&O>W] in the dorsal precentral gyrus (dPCg) and the left inferior frontal junction (IFJ), consistent with retrieval demands and cognitive control. In addition, we dissociate the response properties of vPCg/vPCs, dPCg and IFJ from other left frontal lobe regions that are activated during single word speech production. Specifically, in both auditory and visual modalities: a central part of vPCg (head and face area) was more activated for verbal than nonverbal stimuli [P&W>O]; and the pars orbitalis and inferior frontal sulcus were most activated during object naming [O>W&P]. Our findings help to resolve a previous discrepancy in the literature, dissociate three functionally distinct parts of the precentral gyrus, and refine our knowledge of the functional anatomy of speech production in the left frontal lobe.

Introduction

The left frontal lobe plays a well-researched role in speech production (Basilakos et al., 2018; Flinker et al., 2015; Long et al., 2016; Mugler et al., 2018). However, there is controversy as to the specific roles that distinct left frontal regions play in the generation of a speech plan. For example, as detailed below, some studies have associated the assembly of sublexical articulatory codes (e.g. phonemes and syllables) with activation in the left dorsal precentral gyrus, whereas others have claimed that sublexical assembly is supported by a more ventral region of the precentral gyrus (see Table 1). Here we consider the challenges of assigning specific functions to discrete regions and tackle this problem by using a multi-factorial design that enables us to tease apart the demands on articulatory planning from more general, non-linguistic processes such as working memory, attention and cognitive control.

Several reading studies have, for example, reported common increases in activation for pseudowords and for words with "irregular" spelling-to-sound correspondences (e.g. "yacht", pronounced "yot" not "yatched") compared to "regular" spelling-to-sound correspondences that are "consistent" with most other words in the same language (e.g. mint, hint, tint, flint, stint, print, splint). A plausible explanation is that this common activation reflects the demands on executive control (Fiez et al., 1999) because, in both cases, there is a conflict between lexical and sublexical processing, and the reader therefore has to attend to one and inhibit the other.
For example, when reading the word "yacht", the sublexical spelling-to-sound association ("yatched") is inconsistent with the lexical spelling-to-sound association ("yot"). The output from sublexical assembly ("yatched") therefore needs to be inhibited. Conversely, when reading the pseudoword "chiden", the reader must inhibit the production of real words that look alike (e.g. children and chicken). For regularly spelled words, the demands on executive control are less because lexical and sublexical codes are, by definition, consistent.

Several studies have attempted to dissociate processing related to sublexical assembly and generic processing demands during speech production, but the conclusions have been inconsistent. For example, Fiez et al. (1999) and Mechelli et al. (2005) found that, compared to regular words, reading pseudowords and irregularly spelled words increased activation in the vicinity of the pars opercularis (Table 1), consistent with generic demands on mapping orthography-to-phonology, as opposed to sublexical assembly. In contrast, Mei et al. (2014) and Twomey et al. (2015) showed that activation at the same site (in standard space) is involved in sublexical assembly even when response times (reflective of general processing demands) are controlled. The role of the left dorsal precentral gyrus is also unclear. While Mechelli et al. (2005) and Twomey et al. (2015) associated it with sublexical processing, Binder et al. (2005) reported increased activation in this region for irregular than regular word reading, which is more consistent with generic demands. Further investigation is therefore required to understand these inconsistent conclusions.

In the current study, we considered how areas that were more activated for pseudoword than word production responded during object naming. Considering their response to object naming provides three advantages. First, object naming relies on lexical retrieval of articulatory codes and can be compared to reading and repeating the same object names, thereby controlling for speech output. Second, it is slower and more attention demanding than word reading (Glaser and Glaser, 1989), allowing us to segregate activation related to: (i) generic processing demands (object naming and pseudoword reading > word reading); (ii) sublexical assembly (pseudoword reading > object naming); (iii) lexical retrieval (object naming > pseudoword reading); and (iv) phonological-to-articulatory recoding (words and pseudowords > object naming). Third, the perceptual parts of pictures or sounds of objects do not provide any sublexical cues as to how the name is pronounced. This contrasts with irregular word reading, where high activation may reflect automatic but unsuccessful attempts at sublexical assembly. Finally, by including the corresponding conditions in the auditory modality (repetition of heard words and pseudowords, and naming objects from their sounds), we can dissociate activation related to articulatory planning from activation related to modality-specific processing (e.g. that related to mapping orthography onto phonology).

In summary, our literature review (Table 1) highlights a lack of clarity in how activation in and around the dorsal versus ventral left precentral gyrus contributes to speech production.
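The segregation logic in (i)-(iv) above can be written down compactly as zero-sum weight vectors over the three production conditions; the weights below are illustrative, not the SPM contrasts used later in the paper:

```python
# Which conditions (W = word reading, P = pseudoword reading, O = object
# naming) each hypothesised process should drive, as illustrative contrasts.
CONTRASTS = {
    "generic processing demands": {"W": -2, "P": 1, "O": 1},   # O & P > W
    "sublexical assembly":        {"W": 0, "P": 1, "O": -1},   # P > O
    "lexical retrieval":          {"W": 0, "P": -1, "O": 1},   # O > P
    "phon-to-artic recoding":     {"W": 1, "P": 1, "O": -2},   # W & P > O
}
for process, weights in CONTRASTS.items():
    assert sum(weights.values()) == 0, "contrast weights must sum to zero"
    print(f"{process:28s} -> {weights}")
```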
Using a multi-factorial fMRI design, we investigated which parts of the left precentral gyrus were most consistent with: (1) the demands on sublexical assembly of articulatory codes (assumed to be higher for pseudoword reading than object naming) or (2) retrieval effort (assumed to be higher for object naming and pseudoword production than word production). Although our questions concern regions in the left frontal lobe, we also examined whole brain activation to delineate the neural networks in which different left frontal regions participate.

Methods

The data used in this paper have previously been reported in Oberhuber et al. (2016), where the goal was to dissociate the function of different parts of the left supramarginal gyrus. Here we focused on teasing apart how distinct left frontal lobe regions contribute to speech production.

Experimental design

There were 8 conditions that comprised a 2 × 2 × 2 factorial design (Table 2). Factor I was stimulus modality (auditory versus visual); Factor II was verbal versus nonverbal stimuli (words and pseudowords versus objects and baseline stimuli); Factor III was the presence or absence of semantic content (familiar words and object names versus unfamiliar pseudowords and baseline stimuli). Examples of the visual stimuli are shown in Fig. 1. Each condition was presented in a separate run, with blocks of stimuli alternating with rest. Full details of the experiment (e.g. regarding stimulus selection) can be found in Oberhuber et al. (2016).

Participant groups

There were two non-overlapping participant groups (n = 25 and 59) that both performed the same 8 tasks of interest embedded within one of two different experimental paradigms. In addition to the 8 speech production conditions examined in the current analysis, Group 1 completed 1-back matching tasks on the same 8 stimulus sets, while Group 2 completed 5 tasks that involved sentence production, verb production, noun production and semantic decisions on pictures of objects or their heard object names. These additional tasks were presented in separate scanning sessions and were not examined in the current analysis. Although the presentation parameters in the two paradigms were not exactly the same (see Table 3), our focus is on results that were observed across both datasets. Direct comparison of the same effects in Group 1 and Group 2 did not reveal any significant differences.

Counterbalancing

In Paradigm 1 (n = 25), the same object concepts were rotated across the 4 semantic conditions - either as written object names, heard object names, pictures of objects or sounds of objects. In addition, written pseudowords were matched to spoken pseudowords. This ensured that the speech being produced was the same for the matched conditions (across subjects). The order of conditions was counterbalanced across participants in Group 1. In Group 2 (n = 59), we used a fixed condition order so that inter-subject variability could not be attributed to differences in condition order. The figures illustrating our results demonstrate that our effects of interest were observed in both groups - which further strengthens our conclusions. Table 3 provides participant, experimental and scanner details for each group of subjects.

fMRI data preprocessing

Data preprocessing and statistical analysis were performed in SPM12 (Wellcome Centre for Human Neuroimaging, University College London, UK), running on MATLAB 2012a.
Functional volumes were spatially realigned to the first EPI volume and unwarped to compensate for non-linear distortions caused by head movement or magnetic field inhomogeneity. The unwarping procedure was used in preference to including the realignment parameters as linear regressors in the first-level analysis because unwarping accounts for non-linear movement effects by modelling the interaction between movement and any inhomogeneity in the T2* signal. After realignment and unwarping, the realignment parameters were checked to ensure that participants moved less than one voxel (3 mm) within each scanning run. The anatomical T1w image was co-registered to the mean EPI image generated during the realignment step and then spatially normalised to MNI space using the unified normalisation-segmentation routine in SPM12. To spatially normalise all EPI scans to MNI space, the deformation field parameters that were obtained during the normalisation of the anatomical T1w image were applied. The original resolution of the different images was maintained during normalisation (voxel size 1 × 1 × 1 mm³ for structural T1w and 3 × 3 × 3 mm³ for EPI images). After normalisation, functional images were spatially smoothed with a 6 mm full-width-half-maximum isotropic Gaussian kernel to compensate for residual anatomical variability and to permit application of Gaussian random-field theory for statistical inference (Friston et al., 1995).

First level statistical analyses

Each preprocessed functional volume was entered into a subject-specific fixed effect analysis using the general linear model. Stimulus onset times were modelled as single events. For Paradigm 1 (Group 1), we used 2 regressors per task, one modelling instructions, and the other modelling each stimulus. For Paradigm 2 (Group 2), the stimulus regressor was replaced with three different regressors for correct, incorrect, and delayed/no responses, resulting in a total of 4 regressors per task. This is because Paradigm 2 was designed for patients who were expected to make errors. Importantly, the current study (with neurotypical participants) did not find significant differences between effects of interest in Paradigm 1 (activation across trials of the same stimulus type) and Paradigm 2 (activation related to correct trials only). This is not unexpected given the very low number of incorrect/no response trials in both groups. Stimulus functions were convolved with a canonical haemodynamic response function and high-pass filtered with a cut-off period of 128 s.

For each scanning session/run (that alternated one condition of interest with fixation), we generated a single contrast that compared activation in response to the stimuli and task of interest to resting with fixation. This resulted in 16 different contrasts (one per condition) for each participant. Each contrast for each individual was inspected to ensure that there were no visible artefacts (e.g. edge effects, activation in ventricles) that might have been caused by within-scan head movements.

Second level statistical analysis

The first level analysis for each participant yielded 8 separate contrasts (one per condition > fixation), i.e. words (W), pseudowords (P), objects (O) and baseline (B) in the visual and auditory modality (see Table 2). The second level analysis modelled 16 conditions; 8 for each group of participants. Contrasts were computed across groups, and the consistency across groups is demonstrated in the figures illustrating the results.
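A minimal numpy sketch of the first-level design logic described above (event sticks convolved with a canonical double-gamma HRF, plus a discrete-cosine drift set approximating SPM's 128 s high-pass filter); all timing values are illustrative, not the study's acquisition parameters:

```python
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 200
t = np.arange(0, 32, TR)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # canonical double-gamma shape
hrf /= hrf.sum()

onsets = np.arange(10, 380, 24)                  # hypothetical event onsets (s)
stick = np.zeros(n_scans)
stick[(onsets / TR).astype(int)] = 1.0
regressor = np.convolve(stick, hrf)[:n_scans]    # HRF-convolved event regressor

# Discrete-cosine drift terms for a ~128 s high-pass cutoff.
n_basis = int(2 * n_scans * TR / 128)
k = np.arange(1, n_basis + 1)
drift = np.cos(np.pi * np.outer(np.arange(n_scans) + 0.5, k) / n_scans)

X = np.column_stack([regressor, drift, np.ones(n_scans)])  # design matrix
contrast = np.zeros(X.shape[1]); contrast[0] = 1.0         # task > fixation
print(X.shape, contrast[:3])
```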
The effects of interest were: (1) the main effect of verbal compared to nonverbal stimuli (W&P > O&B); and (2) the interaction of verbal/nonverbal and semantic/nonsemantic (i.e. P&O > W&B). Post hoc tests were then used to segregate three different effects driving the interaction:

Contrast A [P > W&O] segregated activation that was higher for pseudoword reading/repetition compared to word reading/repetition and object naming (i.e. consistent with the demands on sublexical assembly). We also expected that activation related to sublexical assembly would be higher for words than objects (i.e. P > W > O).

Contrast B [P&O > W] segregated activation that was higher for object naming and pseudoword reading/repetition compared to word reading/repetition (consistent with generic retrieval demands).

Contrast C [O > W&P] segregated activation that was higher for object naming compared to word reading/repetition and pseudoword reading/repetition.

We did not include the baselines in these contrasts, as this is less conservative (baselines put lower processing demands on sublexical processing and executive control) and our goal was to distinguish processing for P&W&O. Each of these contrasts was repeated three times: once across modality, once in the visual modality and once in the auditory modality. If an effect was observed in one modality only, we checked and reported the interaction of that effect with the main effect of stimulus modality (visual versus auditory).

We report all results when the main contrast (see Table 2 and above) was significant at p < 0.05 after family-wise error correction in height. To ensure that the activation fitted the effect of interest, we used the inclusive masking option in SPM (thresholded at p < 0.05 uncorrected); see Table 4A for details. The type of processing that we expected to be probed for each effect is provided in Table 4B and rationalised in the Discussion.

Behavioural results

Details of the in-scanner behavioural performance for our participants are illustrated in Fig. 2 and reported in Oberhuber et al. (2016). Accuracy scores for Experiment 2 were computed after two outliers (subjects with less than 50% accuracy) had been removed. In brief, the average in-scanner accuracy was 95% for Group 1 and 98% for Group 2. Response times (RTs) were only available for Group 2 (due to technical failure in Group 1) and were computed after two participants were excluded due to missing RT data. Across modality, RTs were slower for auditory than visual speech production stimuli due to the sequential delivery of each auditory stimulus, in contrast to the simultaneous delivery of all parts of each visual stimulus.

[Table note: W = words, P = pseudowords, O = objects, Int. = interaction of semantics and verbal input, Vx = number of contiguous voxels at p < 0.001 uncorrected. All effects were significant after voxel-level correction for multiple comparisons across the whole brain.]

Fig. 2. In-scanner behavioural scores. Task-specific accuracy for Group 1 (grey plots) and Group 2 (black plots, n = 58 following removal of 1 outlier) and response times (RTs) for Group 2 only (n = 57 following exclusion of 2 subjects with missing RT data due to technical failure). Plots show mean scores with standard deviation (SD) as red bars. W = words, P = pseudowords, O = objects, C = colours (visual baseline), H = humming sounds (auditory baseline).

Within modality, participants
were slower on more demanding tasks, specifically: (a) object naming than word repetition or reading, consistent with object naming being more demanding; (b) object naming than pseudoword production; and (c) pseudowords than words, with this effect trading with less accurate pseudoword production than object naming.

fMRI results

Left frontal activation (in front of the central sulcus) was highly significant for the main effect of verbal > nonverbal stimuli (W&P > O&B) across stimulus modality. Peak activation [-54, +3, 27; Z-score = 6.2] was located in the left ventral precentral gyrus (head and face area; see Fig. 3). The interaction (P&O > W&B) between verbal/nonverbal and semantic/nonsemantic also yielded highly significant frontal activation that we segregated, with post hoc tests, into three different effects (A, B and C), as described below.

Sublexical assembly (P > W&O)

Activation that was highest for pseudowords (P > W&O) was observed for visual stimuli only, in the anterior part of the left ventral precentral gyrus, bordering the ventral precentral sulcus (red in Fig. 3).

[Fig. 3 caption fragment: Baseline conditions (B) in the visual (columns 1-4 and 9-12) and auditory modalities (columns 5-8 and 13-16). Columns 1-8 are from Group 1. Columns 9-16 are from Group 2. The coloured bars highlight the activation conditions. The error bars are standard error. Although each effect of interest was highly significant, these plots show that there is high selectivity without specificity (i.e. all regions were activated across conditions). dPCg/vPCg/vPCs = dorsal/ventral precentral gyrus/sulcus; IFJ/IFS = inferior frontal junction/sulcus. Regions associated with sublexical assembly (P > W&O) are shown in red; naming (O > W&P) in magenta; generic retrieval demands (P&O > W) in blue; verbal > nonverbal (W&P > O&B) in green.]

The same pattern of effects was also observed in the left anterior putamen (as reported in Oberhuber et al., 2013) and the left postcentral sulcus.

Generic demands on articulatory planning (P&O > W)

Activation was higher for pseudowords and objects than words deep in the inferior frontal junction, extending laterally through the precentral sulcus to the dorsal precentral gyrus (Table 4; blue in Fig. 3), with no significant difference between the visual and auditory modalities (p > 0.05 uncorrected). The same response pattern (P&O > W) was also observed in the bilateral anterior insula/frontal operculum and pre-SMA.

Highest for naming (O > W&P)

Activation was higher for objects than pseudowords and words in the left inferior frontal sulcus and left pars orbitalis (Table 4; magenta in Fig. 3), with no significant difference between the visual and auditory modalities (p > 0.05 uncorrected). The same response pattern (O > W&P) was also observed in the left middle temporal sulcus, left fusiform, bilateral visual cortices and bilateral cerebellum.

Other left frontal lobe activation

No activation was detected in the precentral gyrus, precentral sulcus or pars opercularis for: the main effects of semantic > nonsemantic; nonsemantic > semantic; nonverbal > verbal; or auditory > visual. However, the main effect of visual > auditory stimuli identified left precentral activation [peak at -42, 3, 30] that was highest for reading pseudowords (effect A) and least for repeating words or gender naming.

Discussion

Prior studies have reported that increased demands on sublexical assembly of articulatory codes (e.g.
phonemes and syllables) increases activation in either dorsal (Mechelli et al., 2005) or ventral (Mei et al., 2014; Twomey et al., 2015) parts of the left precentral gyrus (Table 1). However, possible confounds in the experimental designs of previous studies make it difficult to determine the type of processing that engages each region. To further dissociate the functional contribution of distinct left frontal regions to speech production, we compared activation for word and pseudoword production to that observed during object naming, which exerts high demands on the retrieval of whole-word articulatory plans.

Our results indicate that the response in the left ventral precentral gyrus (head and face area), bordering the ventral precentral sulcus (vPCg/vPCs), is most consistent with sublexical assembly of articulatory codes, because activation was higher for pseudoword reading than object naming and word reading. In contrast, we found that the response in the left dorsal precentral gyrus (dPCg) extending into the left inferior frontal junction (IFJ) is most consistent with retrieval demands, because activation was higher for object naming and pseudoword reading/repetition than word reading/repetition. This functional dissociation between ventral and dorsal parts of the precentral gyrus is consistent with the heterogeneity evidenced by multimodal connectivity-based parcellation (Genon et al., 2018).

Our multi-task approach also allowed us to dissociate other functionally distinct regions in the left frontal lobe that are differentially engaged during single-word speech production. Below, we discuss how each of our findings confirms, extends and challenges the results of previous studies, and their relevance for refining our understanding of the functional anatomy of speech production. A summary of the findings, and interpretation related to prior literature, can be found in Table 6.

Sublexical assembly (P > W&O in the visual modality)

Left frontal activation associated with sublexical processing was identified on the anterior surface of the left ventral precentral gyrus (vPCg), bordering the ventral precentral sulcus. The MNI co-ordinates of peak activation in this area ([-57, 9, 18] and [-54, 6, 27]) correspond to those associated with sublexical assembly in Mei et al. (2014) and Twomey et al. (2015) using completely different experimental designs. In Mei et al. (2014), native English speakers were trained to read words presented in unfamiliar Korean Hangul characters by either recognising the words as a whole or by relying on the sublexical spelling-to-sound relationships. When reading the same words in the scanner, those using a sublexical assembly strategy increased activation at MNI coordinates [-56, 6, 24] compared to those who read the words lexically. In Twomey et al. (2015), a very similar area (MNI co-ordinates [-51, 8, 22]) was more activated when words emerged on the screen sequentially compared to when they emerged as a whole.

Other reading studies (Binder et al., 2005; Mechelli et al., 2005) did not associate the vPCg with sublexical assembly because activation increased for words with irregular compared to regular spellings (see Table 1), and irregular spellings cannot be read successfully using sublexical assembly. Our alternative interpretation of the enhanced vPCg/vPCs response during irregular reading is that skilled readers will automatically engage sublexical assembly when presented with familiar orthography.
Moreover, unsuccessful sublexical processing may persist for irregular word reading until the correct pronunciation is retrieved via lexico-semantics.

The vPCg activation we associate with sublexical processing was on the anterior surface of vPCg, bordering the ventral precentral sulcus. Here, cortical activity has been related to the motor planning of vocal tract actions required to produce speech sounds (articulatory gestures) at discrete times (Mugler et al., 2018). In this context, enhanced activation for pseudoword reading compared to word reading and object naming can be explained by enhanced demands on encoding novel sequences of articulatory gestures. Although vPCg/vPCs activation was not enhanced for pseudoword repetition compared to word repetition and auditory naming, it was not specific to reading. Specifically, we also found highly significant vPCg/vPCs activation (p < 0.05 corrected) for repeating words and for repeating pseudowords (Fig. 3), consistent with demands on articulatory planning that are independent of stimulus modality. The increased demands that pseudoword reading places on articulatory planning can be explained by the absence of facilitation from (i) an auditory short-term representation of the intended speech output (Strand et al., 2008) that is available during auditory repetition; and (ii) the lexical/semantic familiarity associated with word reading.

Generic demands on articulatory planning (P&O > W)

The area associated with generic retrieval demands was located deep in the left frontal lobe, with one peak falling in the left inferior frontal junction (located at the junction of the inferior precentral sulcus and inferior frontal sulcus) and a second peak in the left dorsal precentral gyrus (dPCg). The inferior frontal junction (IFJ) is part of a network associated with attention, cognitive control and working memory (Roth et al., 2006; Cole and Schneider, 2007; Muhle-Karbe et al., 2016; Tamber-Rosenau et al., 2018; Zhang et al., 2018) that also includes the dorsolateral prefrontal cortex, anterior insula, and pre-SMA (Sundermann and Pfleiderer, 2012) - all regions that were co-activated with the IFJ in the current study (blue areas in Fig. 3).

The dPCg has previously been associated with sublexical assembly because it was more activated for reading pseudowords compared to reading irregularly and regularly spelled words (Mechelli et al., 2005), and for reading text delivered sequentially rather than simultaneously (Twomey et al., 2015). Our finding that activation was higher for object naming than word reading is not consistent with this claim. Instead, our findings are more consistent with prior studies that demonstrated a role for the left dPCg in retrieving fine-grained motor plans and anticipating rhythms (Chen et al., 2008) during speech articulation and finger movements (Meister et al., 2009); particularly when people watch/listen to material for which they have been highly trained to generate very specific action responses, including dance movements (Calvo-Merino et al., 2005), piano music (Lahav et al., 2007) and violin music (Dick et al., 2011). According to this hypothesis, left dorsal precentral activation should be lower when retrieval demands are lower (i.e. for reading and repeating words), as observed in the current study.

Highest activation for object naming (O > W&P)

In contrast, retrieving articulatory plans from semantic stimuli (i.e.
semantic-to-articulatory recoding) enhanced activation in (i) the left pars orbitalis (pOrb), a region already associated with controlled semantic retrieval (Sabb et al., 2007), and (ii) the left inferior frontal sulcus, a region already associated with word retrieval (Arya et al., 2019; Price, 2012). The left inferior frontal sulcus has also been associated with the integration of bottom-up and top-down multi-sensory information (semantic, nonsemantic and nonverbal) prior to response selection (Adam and Noppeney, 2010; Gau and Noppeney, 2016; Noppeney et al., 2010).

The main effect of verbal > nonverbal stimuli (W&P > O&B)

In a central part of vPCg, we found that activation was higher for verbal stimuli (words and pseudowords) than nonverbal stimuli (object, colour and gender naming) in both auditory and visual modalities (green in Fig. 3). As activation in this part of vPCg was not higher for pseudowords than words, it is not consistent with the expected demands on sublexical assembly of articulatory plans. We therefore propose that enhanced activation in the central part of vPCg for verbal more than nonverbal stimuli reflects the association of articulatory codes with phonological representations of the stimuli (as opposed to the subsequent assembly of these codes). Although further studies are required to investigate this hypothesis, we speculate that phonological-to-articulatory recoding may be evoked faster and sustained longer when processing verbal stimuli compared to nonverbal stimuli, because (i) we are highly trained to link verbal stimuli to their speech sounds and articulatory codes and (ii) nonverbal stimuli may rely more heavily on perceptual and semantic processing.

Summary and conclusions

Our literature review (Table 1) highlighted inconsistency in the brain regions associated with the demands on sublexical assembly of articulatory plans. Some studies have proposed that the left dorsal precentral gyrus (dPCg) is involved in sublexical assembly, whereas others have claimed that sublexical assembly is supported by more ventral regions. Using a multi-factorial design that included object naming conditions as well as word and pseudoword reading and repetition, we associated the demands on sublexical assembly with activation in the anterior part of the left ventral precentral gyrus (vPCg), bordering the left ventral precentral sulcus (vPCs). In contrast, we show that the response in a more dorsal part of the precentral gyrus (dPCg) is more consistent with retrieval effort and demands on executive functioning.

We have also described the contrasting response properties of other left frontal lobe regions that contribute to speech production and compared our interpretation with that of previous studies (Table 6). Of particular interest is the dissociation of two parts of the ventral precentral gyrus: the anterior part associated with sublexical assembly and a more central part that was activated by verbal (words and pseudowords) compared to nonverbal (objects, patterns and humming) stimuli. This motivates future studies using techniques that provide higher spatial resolution (e.g. single-subject data from 7T fMRI) to further investigate the contribution of different vPCg regions to speech production. Overall, our findings resolve a previous discrepancy in the literature, dissociate three functionally distinct parts of the left precentral gyrus, and refine our understanding of the functional anatomy of speech production.
Declaration of Competing Interest

The authors declare no competing financial interests.

Data availability

The data that support the findings of this study are available upon request from the senior author (C.J.P.).
Two-photon dynamics in coherent Rydberg atomic ensemble

We study the interaction of two photons in a Rydberg atomic ensemble under the condition of electromagnetically induced transparency, combining a semi-classical approach for pulse propagation and a complete quantum treatment for quantum state evolution. We find that the blockade regime is not suitable for implementing photon-photon cross-phase modulation due to pulse absorption and dispersion. However, approximately ideal cross-phase modulation can be realized based on relatively weak interactions, with counter-propagating and transversely separated pulses.

Strong nonlinearity at the single-photon level is desirable for the realization of all-optical quantum devices. Ensembles of highly excited Rydberg atoms under the electromagnetically induced transparency (EIT) condition combine the advantages of strong atom-field coupling without significant absorption and non-local atomic interaction, and have attracted intensive experimental [1-7] and theoretical [8-17] studies recently. The strong correlation directly between single photons inside a Rydberg atomic ensemble was observed [7], and the formation of a Wigner crystal of individual photons is also predicted [15]. When such interaction is applied to implement the cross-phase modulation (XPM) between two individual photons with a non-zero relative velocity as in Fig. 1 [18-21], a main difference from single probe beam propagation in a Rydberg EIT medium [1-17] is that no steady state exists for the pulses, because their interaction varies with the relative distance, the pulse velocity that changes pulse sizes, as well as the absorption in the medium. The realistic time-dependence in the inherent nonlinear dynamics makes a complete solution of the problem rather challenging. With the combination of a semi-classical approach for pulse propagation and a complete quantum approach for pulse quantum state evolution, we find a realistic picture for the dynamical process by showing the concerned figures of merit. We show that our proposed setup clearly outperforms the previously considered Rydberg blockade regime [20] in terms of much lower photon absorption and negligible group velocity dispersion.

The detailed two-photon XPM via Rydberg EIT is as follows. One respectively couples the far-away input photons to cold Rydberg atoms under the EIT condition to form the light-matter quasi-particle called dark-state polariton (DSP) [22]. The spatial distribution of the pulses necessitates a quantum many-body description of the process. The prepared DSPs are in the state $|1\rangle_l = \int d^3x\, f_l(\mathbf{x})\,\hat{\Psi}_l^\dagger(\mathbf{x})|0\rangle$ for $l = 1, 2$, where $f_l(\mathbf{x})$ are their snapshots with $\int d^3x\, |f_l(\mathbf{x})|^2 = 1$, and $\hat{\Psi}(\mathbf{x}) = \cos\theta\,\hat{E}(\mathbf{x}) - \sin\theta\,\hat{S}(\mathbf{x})$, the superposition of the electromagnetic field operator $\hat{E}(\mathbf{x})$ and the Rydberg spin-wave field operator $\hat{S}(\mathbf{x})$, is the DSP field operator. The many-body version of the atom-field Hamiltonian for the excited level $|e\rangle$ can be diagonalized in terms of two bright-state polariton (BSP) fields $\hat{\Phi}_+(\mathbf{x}) = \sin\theta\sin\phi\,\hat{E}(\mathbf{x}) + \cos\phi\,\hat{P}(\mathbf{x}) + \cos\theta\sin\phi\,\hat{S}(\mathbf{x})$ and $\hat{\Phi}_-(\mathbf{x}) = \sin\theta\cos\phi\,\hat{E}(\mathbf{x}) - \sin\phi\,\hat{P}(\mathbf{x}) + \cos\theta\cos\phi\,\hat{S}(\mathbf{x})$, where their spectrum $\omega_\pm = \frac{1}{2}\left(\Delta_1 \pm \sqrt{\Delta_1^2 + g^2 N + \Omega_c^2}\right)$ is a function of the input photon detuning $\Delta_1$, the pump beam Rabi frequency $\Omega_c$ and the atom density $N$.
The combination coefficients for the polariton field operators satisfy the relations $\tan\theta = g\sqrt{N}/\Omega_c$ and $\tan 2\phi = \sqrt{g^2 N + \Omega_c^2}/\Delta_1$, with $g$ the atom-field coupling constant. When the DSPs get close to each other, the interaction between the pulses takes effect. Here we consider the Van der Waals (VdW) potential $\Delta(\mathbf{x}) = -C_6/|\mathbf{x}|^6$ in the Rydberg atomic ensemble. Such interaction, however, also causes the transition of DSPs to BSPs containing $\hat{P}_l(\mathbf{x})$ components decaying at the rate $\gamma$. The decay of the $\hat{P}_l(\mathbf{x})$ field is described by a quantum Langevin equation [23], with the white-noise operators of the reservoirs satisfying the standard delta-correlated relations. The evolved pulse quantum state under all the above-mentioned factors should be close to the ideal output $e^{i\varphi}|1\rangle_1|1\rangle_2$ ($\varphi$ is a uniform phase) for realizing a photon-photon XPM.

Before studying the input's quantum state evolution, one needs to ascertain the pulses' propagation in the medium, so that their interaction time can be known. The absorption and dispersion of the pulses can be found in a semi-classical approach [24,25] that treats the input pulses as the classical fields $E_l(\mathbf{x})$, which are equivalent to the averages $\langle\hat{E}_l(\mathbf{x})\rangle$ of the quantum fields (up to a constant). In this framework the atom-field coupling is described by the equations of motion (4a)-(4b) for the atomic density matrix elements [24], where $\mu_{ij}$ are the transition dipole matrix elements and $\gamma_{ij}$ the decay rates of the relevant levels. The interaction with another pulse shifts the energy level of $|r\rangle$ and hence adds an extra term $\Delta_R$, calculated from the relative distance and transmission of the pulses, to the detuning $\Delta_2$ of the pump beam, where $T(t)$ is the time-dependent transmission rate. This practice of reducing the interaction effect to a c-number detuning $\Delta_R$ is equivalent to a mean-field treatment for the spin-wave fields in (A-2). One has the time-dependent solution (5) to (4a)-(4b) under the weak-drive approximation [24,25] for single photons. It is straightforward to obtain the time-dependent refractive index and decay rate from the susceptibility $\chi^{(1)}(t) = -2N\mu_{eg}\rho_{eg}(t)/(\epsilon_0 E_l)$ based on (5).

When two pulses approach each other, one phenomenon that could happen is known as Rydberg blockade. For red-detuned photons ($\Delta_1 > 0$), the rising magnitude of negative $\Delta_R$ constantly shifts the refractive index curve going through the EIT point at a certain detuning $\Delta_1$ toward that of the corresponding two-level system. In the limit $|\Delta_R| \gg \gamma$ the system will virtually turn into a two-level one; see Fig. 2(a). One signature of Rydberg blockade is a plateau of nearly unchanging group velocity, shown in Fig. 2(b). In the blockade regime the pulse group velocity asymptotically tends to that of the corresponding two-level system; only those with $\Delta_1 \leq 0.5\gamma$ in Fig. 2 can reach the speed of light $c$ with growing negative $\Delta_R$. The pulses will enter the superluminal regime characterized by anomalous dispersion [25], which is accompanied by huge dissipation, if an interaction-induced detuning $\Delta_R$ of positive sign is gradually added to the pump beam of the system in Fig. 2. Equivalently, this phenomenon happens to blue-detuned single-photon pulses in the presence of the attractive VdW potential. This danger of completely damping the input photons should be avoided in practice. We therefore focus on red-detuned photons coupled to the ensemble and propagating toward each other under the attractive interaction.
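An illustrative numerical sketch of this blockade mechanism, using the textbook steady-state weak-probe susceptibility of a ladder EIT system (our convention and normalisation, not the paper's Eq. (5)): the interaction shift enters the two-photon detuning, and a growing |Delta_R| pushes the response from EIT toward the two-level form.

```python
def chi(D1, Delta_R, Omega_c=0.5, gamma_ge=1.0, gamma_gr=0.001, D2=0.0):
    """Weak-probe susceptibility (arbitrary units, detunings in gamma_ge)."""
    return 1j / (gamma_ge - 1j * D1
                 + Omega_c**2 / (4 * (gamma_gr - 1j * (D1 + D2 + Delta_R))))

for dR in (0.0, -0.05, -0.5, -5.0):
    d = 1e-3  # finite-difference step for the dispersion slope at D1 = 0
    slope = (chi(d, dR).real - chi(-d, dR).real) / (2 * d)
    print(f"Delta_R = {dR:6.2f}: absorption Im[chi] = {chi(0.0, dR).imag:.3f}, "
          f"dispersion slope = {slope:6.2f}  (steep slope = slow light)")
```

As the printout shows, the on-resonance absorption grows toward the two-level value while the dispersion slope (and hence the group delay) collapses, which is the blockade signature discussed in Fig. 2.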
As the pulses get closer, they will expand spatially because the characteristic size of their distributions Ψ†_l Ψ_l (z, t) is proportional to the average of the distributed group velocity v g (z, t) over the pulses. This modifies ∆ R , which is calculated with the relative distance and absorption of the pulses and which therefore keeps changing as well. We use a numerical algorithm to simulate this dynamical process. From the coordinate origin Z = 0 situated on the center of one pulse, the longitudinal relative distance −L ≤ Z ≤ L to the other pulse's center throughout their motion is divided into n d grids. The detuning ∆ R at the i-th (0 < i ≤ n d − 1) position is calculated with the pulse size and transmission rate at the (i − 1)-th position. Together with the obtained numerical values of ∆ R at the previous positions 0 ≤ k ≤ i − 1, it is plugged into (5) for the numerical integral to find the susceptibility χ (1) . In the same way, the updated group velocity and transmission rate from the susceptibility at position i are used to calculate ∆ R at the (i + 1)-th position. Running the iterative procedure with a sufficiently large grid number n d approaches the real pulse motion. Figure 3 illustrates an example of pulse motion found by this numerical method. As shown in Figs. 3(a) and 3(b), the greater interaction between more transversely adjacent pulses is inseparable from more significant pulse losses. In the regime where Rydberg blockade starts to manifest, the accumulated pulse absorption becomes harmful to the survival of the interacting photons (see Figs. 3(b) and 3(c)). The pulse absorption rate and group velocity in the blockade regime tend to those of a two-level system with the corresponding system parameters, so the only way to reduce the pulse loss in the blockade regime is to use a higher photon detuning ∆ 1 . However, one trade-off of doing so is the need for a narrower pulse bandwidth (correspondingly a longer pulse size) to fit into the smaller EIT window, incurring a more prominent effect measured by the ratio δv shown in Fig. 3(d) (for the same Z, more extended pulses have a higher δv because the interaction across them is more spatially inhomogeneous). Here v g (Z, σ(Z)) is the group velocity at the location of the characteristic longitudinal size σ(Z) from the pulse center, and v g (Z, 0) is that at the pulse center. The non-uniform group velocity distribution over the pulses (in the coordinate co-moving with the pulse centers) indicated by this ratio is equivalent to a group velocity dispersion that could make the pulses totally disappear even without absorption. Another disadvantage of a large pulse size σ is that the detuning value ∆ R from the spatially distributed pulses (proportional to 1/σ 6 for the VdW potential) will be below the magnitude needed for a significant XPM. Our results thus show that in the blockade regime considered in [20] the imperfections due to absorption and other effects are actually much more problematic. The next target is to understand the real-time evolution of the DSP state |1 1 |1 2 given before (1). Under the perfect EIT condition, there is the approximation σ gr = −µ eg E l /Ω c (σ gr = |g⟩⟨r|) or its quantum many-body version Ŝ l (z) = −(g √ N /Ω c ) Ê l (z) after neglecting the non-adiabatic corrections for the narrow-band pulses, implying the identical propagation of the quantized DSP field with the electromagnetic field treated as classical in (4a)-(4b) [25]. 
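Before turning to the quantum-state evolution, the iterative update just described can be summarized in schematic form. The sketch below is not the authors' code: chi1_from_history stands in for the numerical integral of Eq. (5), v_g_from_chi and alpha_from_chi for the extraction of group velocity and absorption from the susceptibility, and the regularized expression used for the van der Waals shift is purely illustrative.

import numpy as np

def propagate(L, n_d, sigma0, C6, chi1_from_history, v_g_from_chi, alpha_from_chi):
    """Schematic version of the iterative pulse-propagation update described in the text."""
    Z = np.linspace(-L, L, n_d)      # longitudinal relative distance between pulse centers
    sigma, T = sigma0, 1.0           # pulse size and transmission at the starting grid point
    history = [0.0]                  # Delta_R at grid point 0
    v_ref = None
    for i in range(1, n_d):
        # Delta_R at grid point i is built from the size and transmission at point i-1
        delta_R = -T * C6 / (abs(Z[i])**6 + sigma**6)       # regularized VdW shift (illustrative form)
        history.append(delta_R)
        chi1 = chi1_from_history(history)                   # numerical integral of Eq. (5) (placeholder)
        v_g = v_g_from_chi(chi1)                            # updated group velocity from Re(chi)
        T *= np.exp(-alpha_from_chi(chi1) * (2 * L / n_d))  # updated transmission from Im(chi)
        v_ref = v_g if v_ref is None else v_ref
        sigma = sigma0 * v_g / v_ref                        # size follows the averaged group velocity
    return T, sigma

With the trajectory, the residual transmission and hence the interaction time fixed in this way, the adiabatic dark-state approximation quoted above can then be checked along the motion.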
In the suitable weak interaction regime we find for the two-photon process, such as the most transversely separated pulses in Fig. 3 (corresponding to the refractive curves close to that of ∆ R = 0 in Fig. 2(a)), this approximation still holds with a small ratio ∆ R /Ω c . The kinetic Hamiltonian for the slowly moving DSPs in the weak interaction regime can, therefore, be constructed as Meanwhile, for a slow light with cos θ ≪ 1, the BSPs interact very slightly with the DSPs and among themselves because they contain negligible Rydberg excitation. Their quick decoupling from the system and decaying into the environment allow one to treat the BSPs as motionless oscillations, though their group velocities can be read from their spectrum in (1). Our method for pulse state evolution is to adopt the joint evolution U (t, 0) as the time-ordered exponential Te −i t 0 dτ {H(τ )+HD(τ )} on the initial state |ψ in = |1 1 |1 2 |0 c as the product of the input pulse state and the reservoir vacuum state |0 c . Tracing out the reservoir degrees of freedom in the evolved state U (t, 0)|ψ in gives the evolved system state. We have three noncommutative items (H K , H AF and H I ) in H(t), as well as the dissipation Hamiltonian H D (t) of (A-4), for the joint evolution operator U (t, 0). Directly applying U (t, 0) on the DSP operators in |ψ in is impossible, as it is equivalent to analytically solving a nonlinear Langevin equation. One technique to circumvent the difficulty is the factorization of an evolution operator into the relatively tractable ones [27]. For our problem we have U (t, 0) = U K (t, 0)U AF (t, 0)U I (t, 0)U D (t, 0) [28]. Among the factorized processes U X (t, 0) = T exp{−i t 0 dτH X (τ )}, for X = K, AF, I and D,H K andH D are indifferent to their original form H K and H D , respectively. The operator U D (t, 0) takes no effect on |ψ in , but the noncommutativity of H D with H AF makes the BSP field operators in H AF become those inH AF as follows: where φ +(−) = cos φ(sin φ). A sufficiently large γ approximates the commutator [Ξ ±,l (z, τ 1 ),Ξ † ±,l (z ′ , τ 2 )] = e −γφ±|τ1−τ2| δ(z − z ′ ) as vanishing for τ 1 = τ 2 . Under this approximation the BSP operators in U I (t, 0) also take the forms in (6), hence the evolved stateÛ I (t, 0)|ψ in (unnormalized) to the first order of cos θ, where the notations c 1 = cos θ sin φ, c 2 = cos θ cos φ, c 3 = sin θ, z τ = z + τ 0 dτ ′ v g,l (τ ′ ), and |0 t = |0 |0 c are used to simplify the result. The detailed procedure for deriving the evolved state is given in [28]. The succeeding operation U AF only affects the BSP components in (7), while U K displaces the coordinate ofΨ † l (z l ). The interaction potential ∆(z 1 − z 2 ) renders the DSP part in (7) no longer factorizable with respect to z 1 and z 2 . This entangled piece deviates from the ideal output state e iϕ |1 1 |1 2 with a uniform phase ϕ. We measure the degrees of such deviation by comparing the real output |ψ out = U (t, 0)|ψ in with a reference state |ψ 0 out = U K (t, 0)U AF (t, 0)U D (t, 0)|ψ in . In the absence of U I (t, 0) this reference keeps to be in the product state |1 1 |1 2 |0 c , even if the amplitude f l (z l ) in the output photon state |1 l could be lowered due to any residual absorption. The output's fidelity F with the ideal one and the associate cross phase ϕ can thus be found from the overlap √ F e iϕ = ψ 0 out |ψ out , where the two output states are normalized. Similar definitions for F and ϕ can be found in [29,30]. Fig. 3. L is the medium size. 
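As a numerical illustration of how the figures of merit are extracted from this overlap, the sketch below evaluates F and ϕ for an assumed two-photon amplitude of the form f 1 (z 1 )f 2 (z 2 )e^{iθ(z 1 ,z 2 )}. This collapses the DSP part of Eq. (7) to a single phased amplitude and ignores the BSP and reservoir components; the envelopes and the toy phase are placeholders, not the values behind Fig. 4.

import numpy as np

z = np.linspace(-10.0, 10.0, 400)
dz = z[1] - z[0]
Z1, Z2 = np.meshgrid(z, z, indexing="ij")

# Assumed single-photon envelopes after propagation (normalized Gaussians)
f1 = np.exp(-z**2 / 2.0); f1 /= np.sqrt(np.sum(np.abs(f1)**2) * dz)
f2 = np.exp(-z**2 / 2.0); f2 /= np.sqrt(np.sum(np.abs(f2)**2) * dz)

# Toy accumulated interaction phase; a perfectly uniform theta would be ideal XPM
theta = -0.5 / ((Z1 - Z2)**2 + 4.0)**3

psi_out = np.outer(f1, f2) * np.exp(1j * theta)   # interacting output (DSP part only)
psi_ref = np.outer(f1, f2)                        # reference state without U_I

overlap = np.sum(np.conj(psi_ref) * psi_out) * dz**2
norm = np.sqrt(np.sum(np.abs(psi_ref)**2) * dz**2 * np.sum(np.abs(psi_out)**2) * dz**2)
overlap /= norm

F, phi = np.abs(overlap)**2, np.angle(overlap)
print("fidelity F = %.4f, cross phase phi = %.4f rad" % (F, phi))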
The system parameters are the same as in Fig. 3. The insertion describes an imagined situation by reducing the initial pulse velocity to 10 −2 m/s. (b) Fidelity and cross phase for two pulses propagating together along two tracks separated by a = 1.5 σ. Due to pulse absorption, their group velocity is not stable in such co-propagation (for example, it drops from 11.007 m/s to 11.002 m/s from L = 2σ to 5σ). In Fig. 4 we plot the fidelity and cross phase for the most transversely separated pulses in Fig. 3. Due to the steep decay of the VdW potential at long distances, both fidelity and cross phase for the counter-propagation in Fig. 4(a) quickly converge to fixed values with increasing medium size. A cross phase of π rad that still keeps close to unit F could be achieved if the VdW coefficient |C 6 |, for example, is lifted by about nine times with a different Rydberg level. Contrary to a widely held notion, counter-propagation does not automatically ensure high fidelity; see [30]. The insertion of Fig. 4(a) shows the fidelity for an imagined motion of two pulses passing each other very slowly. The same propagation geometry indicates that the degrading fidelity in the slow motion comes from the growing pulse entanglement over a longer interaction time. In comparison we also study the copropagating pulses in Fig. 4(b). The co-propagation exhibits considerable trade-off between F and ϕ, and would be unfavorable for making large phases of good quality. In summary, we have studied the process of two-photon interaction via a Rydberg atomic ensemble. Our approach based on the complete dynamics for both single atoms and ensemble enables a more realistic description of the situation without steady state. The previously considered regime near Rydberg blockade is found to be short of the favorable figures of merit for photonphoton XPM. We also prove that approximately ideal XPM creating considerable nonlinear phase can be realized with counter-propagating and transversely separated pulses that weakly interact with each other. The photonphoton XPM we have discussed can be the basis for an all-optical deterministic quantum phase gate. Supplementary Information for "Two-photon dynamics in coherent Rydberg atomic ensemble" A. Decomposition of Joint Evolution Operator The system Hamiltonian in the concerned problem consists of three parts. The atom-field coupling Hamiltonian for the ensemble is where ω ± = 1 2 (∆ 1 ± ∆ 2 1 + g 2 N + Ω 2 c ), and the bright-state polariton (BSP) fields are defined aŝ Φ + (z) = sin θ sin φÊ(z) + cos φP (z) + cos θ sin φŜ(z), The polarization fieldP (x) in the above is the continuous average i∈∆V |g i e|/ √ ∆N of the flip operators |g i e| for the atoms inside a small volume ∆V around x, which contains ∆N ≫ 1 atoms. So is the definition i∈∆V |g i r|/ √ ∆N for the spin-wave fieldŜ(x). The second part that describes the pulse interaction process is Here we use the notations c 1 = cos θ sin φ, c 2 = cos θ cos φ, and c 3 = sin θ from the main text. The third part is the kinetic Hamilton for the DSPs, where v g,l (t) is found with a semi-classical treatment of the atom-field coupling in the main text. Similarly the BSP kinetic Hamiltonian can be constructed with their group velocities 1/2c(1 ± ∆ 1 / ∆ 2 1 + g 2 N + Ω 2 c ) from the spectrum in (A-1). In a slow light regime considered in the main text, the BSPs go much faster than and interact very slightly with the DSPs, while they decay into the environment. 
Such quick decoupling of the BSPs from the system allows one to approximate them as motionless oscillations, and this simplifies the coordinates for the BSP field operators in most equations below. In addition, the coupling between the polarization fieldsP l and reservoir that leads to the dissipation is described by where the random-variable noise operators satisfy [ξ l (z, t),ξ † l (z ′ , t ′ )] = δ(z − z ′ )δ(t − t ′ ). The infinitesimal action of the joint evolution U (t, 0) = Te −i t 0 dτ (H(τ )+HD(τ ) , where H = H K + H AF + H I , on the field operators and the joint quantum state of the system and reservoir gives rise to the exact Langevin equation about the system operators and the exact master equation about the system state, respectively [23]. The solution to these equations are difficult to find in the presence of the nonlinear term in (A-2). Here we present a different approach to find the transformation U (t, 0)Ψ l (z)U † (t, 0) by factorizing the joint evolution operator U (t, 0) into relatively tractable processes. First, we separate the kinetic part out of the total evolution operator as follows: where U K (τ, 0) = T exp{−i τ 0 dt ′ H K (t ′ )}. The proof for this exact factorization can be found in [27]. The interaction Hamiltonian in the second time-ordered exponential of the above becomes The effect of the above transformation is the displacement of the coordinates for the DSP field operators. The other terms in the second time-ordered exponential of (A-5) are not changed. Secondly, the system-reservoir coupling process in the second time-ordered exponential of (A-5) is separated out to the right side as follows: In the first time-ordered exponential of the above, the BSP fields will be transformed to The transformed BSP operators therefore satisfy the following commutation relations: Then the first time-ordered exponential in (A-8) takes the form The action of the second term inside the time-ordered exponential in (A-12) can be further separated out as in Eq. (A-5), and the accompanying effect is to transform the BSP operators inside the other time-ordered exponential as follows: Here we have neglected the mixing of the two transformed BSP fields due to their coupling to the same reservoir. The pulse interaction process after the factorization now takes the form . For a sufficiently large damping rate γ, the commutators in (A-11) can be regarded as vanishing for τ 1 = τ 2 , and then the BSP field operators in the above equation can be approximated asΞ ±,l with g 1(2) (t 1 , t 2 ) ≈ 0 for any pair of t 1 and t 2 . So far the joint evolution operator has been decomposed as Evolution of Pulse Quantum State Now we study the evolution of the joint state for two identical pulses, where |0 c is the reservoir vacuum state, under U (t, 0). It is equivalent to finding the transformation We will apply the decomposed form in (A-15) for the purpose. The operation U D (t, 0) does not change |ψ in . The transformation byÛ I (t, 0) is found through This is an exact form obtained by expressingÛ I (t, 0) as an infinite product of the small elements around each moment, which transform the DSP operator as follows: The operation byÛ I (t, τ ) inside the time-ordered exponential and integral of (B-3) can be further performed to obtain a form of this exact transform in terms of an infinite series. There is the following commutator ≈ {ic 2 1 e −γ sin 2 φ|τ −τ ′ |/2 sin g 1 (τ, τ ′ ) + ic 2 2 e −γ cos 2 φ|τ −τ ′ |/2 sin g 2 (τ, forŴ l (z, τ ). 
Together with the fact e −γ sin 2 φ|τ −τ ′ |/2 ≪ 1, e −γ cos 2 φ|τ −τ ′ |/2 ≪ 1 for a sufficiently large damping rate γ (this means a negligible correlation time window for the colored noisesn ±,l introduced in Eqs. (A-9) and (A-10)), the above commutator can be approximated as vanishing for τ = τ ′ in a slow light regime with |c 1 | ≪ 1 and |c 2 | ≪ 1, which is created for the input photons under the EIT condition. Meanwhile one has sin g 1(2) (τ, τ ′ ) = 0 for τ = τ ′ . Then there is the relationÛ I (t, τ )Ŵ l (z −τ l , τ )Û † I (t, τ ) ≈Ŵ l (z −τ l , τ ) from the approximation of the vanishing commutator in (B-5), and the non-Abelian phases in (B-3) can be reduced to the Abelian ones due to such approximate commutativity ofŴ l (z −t l , t) at the different time. Moreover, in a slow light regime where the BSPs containing negligible Rydberg excitation quickly decouple from the system through decaying to the environment and escaping from the medium, the DSP components transformed back from the BSP components through the transformation U I (t, τ ) c 1Ξ ′ +,l (z l ) + c 2Ξ ′ −,l (z l ) U † I (t, τ ) in the second term of (B-3) is negligible. Therefore, the DSP operator transformation in (B-3) can be finally approximated as in the regime considered in the main text. In Eq. (8) of the main text we express this DSP operator evolution with a further approximated form, considering the vanishing commutators in (A-11) for different time due to a sufficiently large damping rate γ. To find the evolution for the state |ψ in , one needs the following operation based on (B-6), where the relationŴ 2 z −τ 2 |0 t = 0 has been considered. Similar to Eq. (B-6), the phase operator e −ic 2 Putting all these together one will obtain the entangled state (unnormalized and to the first order of c 1(2) ) due to the evolution under U I (t, 0). After finding the above U I (t, 0)|ψ in , it will be straightforward to do the further transformations under U AF (t, 0), which transforms the BSP components as in (A-13), and by U K (t, 0), which displaces the DSP coordinates. Tracing out the reservoir degrees of freedom makes no difference to the DSP part for the output state of the system. As we explain in the main text, the cross phase for the output state |ψ out = U (t, 0)|ψ in and its fidelity with an ideal output state from XPM are found through its overlap with the reference state |ψ 0 out = U K (t, 0)U AF (t, 0)U D (t, 0)|ψ in . In the absence of the pulse interaction process, there is no BSP components in the reference state |ψ 0 out . The approximations we make for the simplification of the evolved state in the main text, therefore, do not affect the values of the cross phase and fidelity in Fig. 4 of the main text.
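As a rough consistency check on the counter-propagating geometry favored in the main text, one can estimate the mean-field cross phase accumulated by two point-like pulses passing each other on tracks separated by a transverse offset. The sketch below ignores absorption, pulse spreading and the bright-polariton and decay channels treated above, and all parameter values are placeholders rather than the ones used for Fig. 4.

import numpy as np

C6 = 1.0e3        # VdW coefficient (arbitrary units)
v  = 10.0         # group speed of each pulse (illustrative)
b  = 2.0          # transverse separation between the two tracks
t  = np.linspace(-5.0, 5.0, 20001)

# Longitudinal separation closes and reopens as the counter-propagating pulses pass each other
dz = -2.0 * v * t
r6 = (dz**2 + b**2)**3                 # |x1 - x2|^6 including the transverse offset
phi = -np.trapz(-C6 / r6, t)           # phi = -∫ Delta(r(t)) dt with Delta = -C6/r^6 (hbar = 1)

print("accumulated cross phase ~ %.3f rad" % phi)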
2014-03-12T16:59:14.000Z
2014-01-07T00:00:00.000
{ "year": 2014, "sha1": "f68c155a109e387148d21b31b8460f5e7688fe35", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1401.1540", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f68c155a109e387148d21b31b8460f5e7688fe35", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
227295697
pes2o/s2orc
v3-fos-license
CCL2 is associated with microglia and macrophage recruitment in chronic traumatic encephalopathy Background Neuroinflammation has been implicated in the pathogenesis of chronic traumatic encephalopathy (CTE), a progressive neurodegenerative disease association with exposure to repetitive head impacts (RHI) received though playing contact sports such as American football. Past work has implicated early and sustained activation of microglia as a potential driver of tau pathology within the frontal cortex in CTE. However, the RHI induced signals required to recruit microglia to areas of damage and pathology are unknown. Methods Postmortem brain tissue was obtained from 261 individuals across multiple brain banks. Comparisons were made using cases with CTE, cases with Alzheimer’s disease (AD), and cases with no neurodegenerative disease and lacked exposure to RHI (controls). Recruitment of Iba1+ cells around the CTE perivascular lesion was compared to non-lesion vessels. TMEM119 staining was used to characterize microglia or macrophage involvement. The potent chemoattractant CCL2 was analyzed using frozen tissue from the dorsolateral frontal cortex (DLFC) and the calcarine cortex. Finally, the amounts of hyperphosphorylated tau (pTau) and Aβ42 were compared to CCL2 levels to examine possible mechanistic pathways. Results An increase in Iba1+ cells was found around blood vessels with perivascular tau pathology compared to non-affected vessels in individuals with RHI. TMEM119 staining revealed the majority of the Iba1+ cells were microglia. CCL2 protein levels in the DLFC were found to correlate with greater years of playing American football, the density of Iba1+ cells, the density of CD68+ cells, and increased CTE severity. When comparing across multiple brain regions, CCL2 increases were more pronounced in the DLFC than the calcarine cortex in cases with RHI but not in AD. When examining the individual contribution of pathogenic proteins to CCL2 changes, pTau correlated with CCL2, independent of age at death and Aβ42 in AD and CTE. Although levels of Aβ42 were not correlated with CCL2 in cases with CTE, in males in the AD group, Aβ42 trended toward an inverse relationship with CCL2 suggesting possible gender associations. Conclusion Overall, CCL2 is implicated in the pathways recruiting microglia and the development of pTau pathology after exposure to RHI, and may represent a future therapeutic target in CTE. Background Immune cell trafficking, gliosis, and neuroinflammation are fundamental immune responses designed to protect the brain from harm [1]. Uncontrolled or unregulated neuroinflammation, however, has been implicated as a causative event in many neurodegenerative diseases [2,3]. One important facet of the inflammatory response is the signaling cascades used to bring inflammatory cells to the areas of damage or pathology (i.e., chemokines). Interestingly, similar inflammatory cell recruitment responses can be observed across distinct injuries. After significant damage to the brain, brain derived microglia and peripheral derived macrophages are recruited to areas of tissue damage in efforts to reduce pathologic protein accumulation and repair the damage [4]. Additionally, microglia recruitment around Aβ plaques in Alzheimer's disease (AD) are commonly observed. 
In both CTE and AD, chronic signaling through repetitive injuries or failure to remove toxic protein products is hypothesized to result in constant recruitment of inflammatory microglia/macrophage and may perpetuate a chronic neuroinflammatory response and disease propagation. Recently, it has been observed that the neuroinflammatory response may be involved in the pathogenesis and disease progression of the neurodegenerative disease chronic traumatic encephalopathy (CTE) [2]. CTE is a progressive tauopathy found in individuals with a history of repetitive head impacts (RHI) typically obtained through playing contact sports such as American football, hockey, soccer, or rugby, in addition to injuries sustained during military service [5,6]. Evidence from biomechanical computation, helmet sensor data, and neuropathologic autopsy suggests that blood vessels found in the frontal cortex at the depth of the cortical sulcus are observed to be affected the earliest and most severely in CTE, while other regions, including medial and occipital regions such as the calcarine cortex, were relatively spared [6][7][8][9]. The amount of neuroinflammation and severity of pathology has been found to be proportional to the time spent playing contact sports and has been suggested to be an important mechanism of pathogenesis [2,10]. Although it is unclear which specific neuroinflammatory factors are involved, there is strong evidence that microglia are highly involved at all levels of disease severity [2,11,12]. Therefore, it would be of interest to better study the signals required to recruit microglia to regions of damage and pathology. One of the most potent microglia/macrophage chemokines is monocyte chemoattractant protein 1 (MCP1) or more commonly referred to as CCL2 (chemokine (C-C motif) ligand 2) [13]. CCL2 is produced by many CNS resident cells such as astrocytes, neurons, oligodendrocytes, endothelial cells, and also, microglia themselves. Altered expression of CCL2 or its receptor, CCR2, have been found to play mechanistic roles in a variety of brain pathologies. Loss of CCR2 was found to reduce microglia recruitment and increase Aβ in murine models of AD [14]. Elevated levels of CCL2 have been found in acute and chronic multiple sclerosis plaques [15]. Overexpression of CCL2 in experimental stroke models observed increased infarct volume and greater ischemia [16], while CCL2 deficient mice had less tissue damage after permanent middle cerebral artery occlusion [17]. Taken together, CCL2 plays an important role in propagating pathology through recruitment of peripheral and central immune cells to the area of injury and initiates an inflammatory response that is often prolonged and harmful. Overall, we hypothesize that CCL2 will be positively associated with exposure to RHI and be part of the signaling cascade recruiting microglia to regions of damage and neuropathology in CTE. Herein, we investigate CCL2 protein levels across multiple brain regions to determine if there is a regional specific increase that relates to initial tau deposition. Additionally, we test the hypothesis that although CCL2 might be elevated in other tauopathies such as AD, the regional increased observed in CTE will be distinct. Finally, we explore whether there is a differential effect of pathologic proteins, such as hyperphosphorylated tau or Aβ, on CCL2 expression. 
The work presented here seeks to identify connections between CCL2 and neuropathology that could become future targets for novel therapeutic strategies aimed at preventing pathology before it begins. Subjects Post-mortem human brain tissue was obtained from 261 subjects from different study groups using previously described procedures [18][19][20]. Different sets of cases were used for histology and immunoassay experiments based on the availability of frozen or formalin-fixed paraffin-embedded (FFPE) tissue. Overall, a total of 224 cases with frozen samples were used for the immunoassay experiments and 53 cases with FFPE tissue were used for histology experiments. There were 16 cases that overlapped between histology and immunoassay analysis and were used for both. Cases for both histology and immunoassay experiments were drawn from multiple brain bank sources. First, 124 individuals with a history of exposure to American football at either the professional or amateur level were selected from the Understanding Neurological Injury and Traumatic Encephalopathy (UNITE) study. A second group consisted of 18 control cases obtained from the national PTSD brain bank; control cases lacked a diagnosis of a neurodegenerative disease and did not carry a diagnosis of PTSD. The next group consisted of 119 subjects from clinic- and community-based aging brain banks: the Boston University Alzheimer's Disease Center (BU ADC) and the Framingham Heart Study (FHS). In the FHS, an athletic history assessment identical to UNITE was performed with the donor's next of kin [21]. Athletic history was not available for BU ADC participants. In all groups, cases were excluded from the study if they carried a neuropathologic diagnosis of frontotemporal lobe degeneration, neocortical Lewy bodies, or motor neuron disease. A complete description of the neuropathologic analysis is found in the "Neuropathologic examination" section of the methods. Next of kin provided written consent for participation and donation. Institutional review board approval for brain donation was obtained through the Boston University Alzheimer's Disease and CTE center, the Human Subjects Institutional Review Board of the Boston University School of Medicine, and the Edith Nourse Rogers Memorial Veterans Hospital (Bedford, MA). Demographics, athletic history (type of sports played, level, position, age of first exposure to sports, and years playing contact sports), military history (branch, location of service, and duration of combat exposure), and traumatic brain injury (TBI) history (including number of concussions) were queried during a telephone interview as detailed previously [22]. Sample data, including mean age, gender, and years of playing American football, are presented in Table 1. A histogram showing the distribution of age at death across all groups is presented in Supplemental figure 1. Neuropathologic examination Pathological processing and evaluation were conducted using previously published methodology [5,6]. All brain tissue was processed identically by fixation in periodate-lysine-paraformaldehyde and stored at 4°C. Brain volume and macroscopic features were recorded during initial processing. Twenty-two sections of paraffin-embedded tissue were stained for Luxol fast blue, hematoxylin and eosin, Bielschowsky's silver, phosphorylated tau (pTau) (AT8), alpha-synuclein, amyloid-β (Aβ), and phosphorylated TDP-43 using methods described previously [23]. A neuropathological diagnosis of CTE was made using the NINDS criteria [6]. 
Neuropathological criteria for CTE require at least one perivascular pTau lesion consisting of aggregates in neurons, astrocytes, and cell processes around a small vessel; these pathognomonic lesions (referred to as the CTE lesion) are most often distributed at the depths of the sulci in the cerebral cortex and are distinct from the lesions of aging-related tau astrogliopathy by the presence of neuronal tau pathology [24]. Neuropathological evaluation occurred blinded to the clinical evaluation and was reviewed by four neuropathologists (VA, BH, TS, AM); discrepancies in the neuropathological diagnosis were resolved by consensus conference. Cases were only included in the RHI group if they received a negative neuropathologic diagnosis for AD, neocortical Lewy body disease, frontotemporal lobar degeneration, or motor neuron disease. Cases that received a neuropathologic diagnosis of CTE stage 1 or 2 were grouped together as "Low CTE" (n = 27), while cases that were CTE stage 3 or 4 were grouped as "High CTE" (n = 47). Cases that had a history of playing American football but were not found to have CTE were labeled "RHI without CTE" (n = 20). Individuals in the AD cohorts were grouped for AD using the NIA-Reagan criteria [25]. The criteria were as follows: for "High," there needed to be neuritic plaques and neurofibrillary tangles in the neocortex (CERAD frequent, Braak stage V/VI) (n = 24); "Intermediate" required moderate neocortical neuritic plaques and neurofibrillary tangles in limbic regions (CERAD moderate, Braak stage III/IV) (n = 28); and "Low" was denoted when there were sparse neuritic plaques and/or neurofibrillary tangles in a more limited distribution and/or severity (CERAD infrequent and/or Braak I/II) (n = 60). Cases in the AD group were excluded from the study if they carried a neuropathologic diagnosis of CTE, neocortical Lewy bodies, frontotemporal lobar degeneration, or motor neuron disease. Immunoassay for CCL2 and Aβ 42 Flash-frozen brain tissue was obtained from the dorsolateral frontal cortex (DLFC) and the calcarine cortex, weighed, and placed on dry ice. Cases were used based on the presence of tissue within the Brain Bank. Not all cases had available calcarine cortex. Freshly prepared, ice cold 5 M guanidine hydrochloride in Tris-buffered saline (20 mM Tris-HCl, 150 mM NaCl, pH 7.4 TBS) containing 1:100 Halt protease inhibitor cocktail (Thermo Scientific) and 1:100 phosphatase inhibitor cocktail 2 & 3 (Sigma) was added to the brain tissue at a 5:1 dilution (5 M guanidine hydrochloride volume (ml) : brain wet weight (g)) and homogenized with a Qiagen TissueLyser LT at 50 Hz for 5 min. The homogenate was then incubated while rocking overnight at room temperature. Lysate was diluted according to the manufacturer's protocol and spun down at 17,000 g, 4°C, for 15 min. The supernatant was then applied to the Meso Scale Discovery (MSD) Chemokine Panel 1 (human) Kit V-PLEX Plus (Thermo Scientific) and the Aβ 42 ELISA (CAT K15200E-2), following the manufacturers' protocols. Guanidine hydrochloride extraction methods result in the extraction of both soluble and insoluble forms of proteins for analysis of total levels. Final concentrations were expressed as pg/g. 
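For illustration only, the grouping rules described above can be restated as a small helper; the function names and inputs are placeholders, and the actual group assignments in the study come from the consensus neuropathologic review.

def cte_group(cte_stage, played_football):
    """Map a neuropathologic CTE stage onto the analysis groups described in the text."""
    if cte_stage in (1, 2):
        return "Low CTE"
    if cte_stage in (3, 4):
        return "High CTE"
    return "RHI without CTE" if played_football else "Control"

def ad_likelihood(cerad, braak):
    """NIA-Reagan-style grouping as described in the text (CERAD score x Braak stage)."""
    if cerad == "frequent" and braak in ("V", "VI"):
        return "High"
    if cerad == "moderate" and braak in ("III", "IV"):
        return "Intermediate"
    return "Low"

print(cte_group(3, True), ad_likelihood("moderate", "IV"))   # High CTE, Intermediate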
Histologic and immunofluorescence staining Histological staining and analysis of total AT8 pTau density, Iba1+, and CD68+ cell count in the DLFC at the depth of the cortical sulcus was performed using the Aperio ScanScope (Leica) as previously described [26]. For immunofluorescence staining, tissue was extracted from the DLFC, embedded in paraffin, and cut at 20 μm. Immunofluorescence staining was performed using Akoya Bioscience Opal Polaris 7 color manual IHC detection kit as per the manufactures instructions as previously described (tau isoform paper when published). Sections were incubated with antibodies to anti-Iba1 (1: 500, Wako), anti-PHF-tau (AT8) (1:1000, Pierce Endogen), anti-TMEM119 (1:100, Abcam), and DAPI. Stained sections were digitized using an Axio Scan.Z1 slide scanner (Zeiss) and visualized using Zen Blue (Zeiss). Microglia and macrophage vessel quantitation Quantification of Iba1+ cell density around CTE lesions as well as microglia vs macrophages density was carried out using Indica Laboratory HALO through manual counts by a blinded observed. To determine the total Iba1+ cell density around CTE lesion blood vessels, three blood vessels that were surrounded by tau and identified as a CTE neuropathologic lesion and three blood vessels not surrounded by tau were selected in each case and used for immunofluorescent analysis. All blood vessels selected were present at the depth of the cortical sulcus in the DLFC. Blood vessels were identified by DAPI stain marking a distinct circular structure and verified with neuropathologists. The number of Iba1+ cells directly contacting the vessel were counted and averaged together. To determine the abundance of brain derived microglia vs peripherally derived macrophages, dual Iba1 and TMEM119 staining was utilized. As both microglia and macrophages label with Iba1, the presence or absence of TME119 was used to identify cell types [27]. Iba1+ TMEM119+ cells were identified as possible microglia, while Iba1+ TMEM119− cells were identified as possible macrophages. Similar to the total Iba1+ density analysis, blood vessels at the depth of the sulcus in the DLFC were examined to determine if there were changes specific to tau accumulation. Three blood vessels surrounded by tau consistent with a CTE lesion and three blood vessels that lacked tau were selected in each case. The number of Iba1+ TMEM119+ and Iba1+ TMEM119− cells that directly contacted the vessel were counted and averaged together. The number of TMEM119+ and TMEM119− cells were divided by the total Iba1+ cell count to establish percentages of each cell type. Percentages of microglia and macrophages were compared around non-tau and tau containing vessels (control vs lesion vessels). Control cases with no tau pathology were also included. As no vessels contain tau in control cases, only non-tau associated vessels were examined. Statistics Statistical analysis was performed using SPSS (v24, IBM) and Prism (v8, Graphpad Software). Separate two-way ANOVAs were used to examine Iba1+ cell density and the microglia/macrophage ratio around CTE lesioned blood vessel compared to blood vessels that lacked perivascular tau in low and high CTE. Shapiro-Wilk testing revealed CCL2, Aβ42, and AT8 tau density did not have a normal distribution. Since linear regression analysis requires normally distributed data, CCL2, Aβ 42 , and AT8 tau density were transformed using a rank-based method as previously described [28][29][30]. 
Briefly, the technique transforms the non-normal variable into a percentile rank for each value, then applies the inverse-normal transformation to the ranks to form a variable which consists of normally distributed Z scores. Further Shapiro-Wilk testing demonstrated transformed values had a normal distribution and were sufficient for linear regression analysis. Separate linear regression analyses were run to compare CCL2 in the DLFC to the total years of playing American football, Iba1+ cell density, and CD68+ cell density. One-way ANOVAs with a Kruskal-Wallis post-test was performed to test differences between CCL2 in the DLFC, calcarine cortex, and the DLFC/calcarine cortex ratio. Ordinal regression was used to examine if the DLFC/calcarine cortex ratio increased according to pathologic disease stage. Multiple linear regressions were used to determine which pathologic protein best correlated with CCL2 levels in the DLFC with age at death, Aβ 42 , and AT8 density as independent predictor variables and CCL2 as the dependent variable across the different analysis groups. Sensitivity analyses run separately within male AD cases and female AD cases were also performed. The CTE pathognomonic lesion recruits Iba1+ cells To determine if the pathology found in CTE was directly related to increased glial cell recruitment, the Iba1+ cell density directly around blood vessels that presented with pTau pathology (i.e., the CTE pathognomonic lesion) was compared to neighboring blood vessels in the same tissue without pTau pathology (i.e., control vessels) (Fig. 1). Overall, an increase in Iba1+ cells was observed around lesion vessels compared to control vessels in the same individual (Fig. 1a, b). Increases were observed in both low and high stage CTE. To determine if the increase in Iba1+ cells was the result of CNS derived microglia or infiltration of peripheral macrophages, dual Iba1/ TMEM119 staining and quantitation was performed. TMEM119 has been previously found to only label CNS derived microglia and not peripheral macrophages [27]. When counting the number of Iba1+/ TMEM119+ and Iba1+/TMEM119−, it was observed that the majority of cells around all the blood vessels in control, low, and high stage CTE cases were TMEM119+ (Fig. 1c, d). However, a significant increase in TMEM119− cells was found around the lesion vessel in both low and high stage CTE, where they accounted for 15.8% and 19.0% respectively of the total Iba1+ cells when compared to control vessels that had an average of 5.3% and 6.8% respectively (Fig. 1c). Glial recruitment signals correlate with exposure to American football, Iba1+, and CD68+ cell density The mechanism behind the Iba1+ recruitment was investigated next. CCL2 is a potent glial chemokine that that is upregulated after impacts, potentially linking RHI to glial recruitment. Analysis of the number of years playing American football (a correlate to the amount of total RHI received) demonstrated a significant correlation between playing longer and increased protein levels of CCL2 within the DLFC (Fig. 2a). Furthermore, CCL2 levels significantly correlated with the overall Iba1+ cell density (Fig. 2b) and CD68+ inflammatory cell density (Fig. 2c). When examining the slops of each linear regression present in Fig. 2a- CCL2 is elevated with disease severity To further examine if CCL2 levels were related to neuropathology, CCL2 protein levels were investigated across CTE stages and through multiple brain regions. 
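The rank-based transformation used throughout the statistical analysis above can be sketched as follows; scipy is assumed to be available, and the offset (rank − 0.5)/n is one common convention for converting ranks to percentiles (related variants such as Blom's formula behave similarly).

import numpy as np
from scipy.stats import rankdata, norm

def rank_based_inverse_normal(x):
    """Rank-based inverse normal transformation: percentile ranks mapped through the
    standard normal quantile function, yielding approximately normal Z scores."""
    x = np.asarray(x, dtype=float)
    ranks = rankdata(x)                 # ties receive average ranks
    p = (ranks - 0.5) / len(x)          # percentile for each observation
    return norm.ppf(p)

# Example on a skewed, non-normal variable such as a cytokine concentration
ccl2 = np.random.lognormal(mean=2.0, sigma=1.0, size=50)
z = rank_based_inverse_normal(ccl2)
print(round(z.mean(), 3), round(z.std(), 3))   # roughly 0 and 1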
In the DLFC, a brain region where CTE pathology can be observed early in disease, CCL2 was found to be increased in both low and high stage CTE when compared to control cases (Fig. 3a). When looking at the calcarine cortex, a brain region that is relatively spared from CTE pathology in all but the most severe cases, only differences between control and high stage CTE were observed (Fig. 3b). In order to determine if the observed CCL2 changes were representative of a brain wide increase of CCL2 or a region-specific increase, CCL2 in the DLFC was standardized to CCL2 levels in the calcarine cortex. When comparing across disease groups using an ANOVA, only high stage CTE had a significantly elevated DLFC/calcarine ratio when compared to control cases (Fig. 3c). However, adjusting for age using an ordinal retrogression analysis, the CCL2 ratio was observed to correlate with the step-by-step increase across control, RHI without CTE, low CTE, and high CTE (estimate = 1.622, p < 0.001), independent of age at death (estimate = 0.055, p < 0.001). We next wanted to compare CCL2 levels and the differential brain region response, to a similar neurodegenerative disease, Alzheimer's disease (AD). Using the NIA-Reagan criteria for AD, no significant difference was observed in the DLFC (Fig. 3d) or the calcarine cortex (Fig. 3e). When standardizing DLFC values to calcarine cortex values, no overall difference was observed (Fig. 3f). Tau, not Aβ, is best correlated with CCL2 expression Although AD and CTE are related neurodegenerative diseases that can present with both pTau and Aβ pathology, the pathologic protein most commonly believed to drive disease (pTau for CTE and Aβ for AD) differs. This presents an opportunity to examine differential effects of various pathologic proteins on CCL2 levels and explore possible novel interaction pathways. Multiple linear regression analysis was performed to determine if pTau or Aβ was related to CCL2 levels in the DLFC. Overall, the amount of pTau, as measured by AT8 staining, was significantly correlated with CCL2 independent of age at death in both CTE and AD (Table 2). Aβ 42 levels, measured by immunoassay, were found to not correlate with CCL2 when grouping all cases together. However, when segregating male and female AD cases, a trend toward a negative correlation between Aβ 42 and CCL2 in men was observed (Table 3). Discussion Here, we have shown that there is increased Iba1+ cell recruitment around pTau containing blood vessels in CTE. When investigating possible recruitment factors, the chemokine CCL2, was observed to correlate with years of playing American football, number of Iba1+ cells, and number of CD68+ neuroinflammatory cells. Further analysis demonstrated protein levels of CCL2 were elevated preferentially in the frontal cortex, a region where CTE pathology can first be observed. CCL2 did not correlate with the NIA-Reagan criteria for AD likelihood in a separate group of cases that lacked a significant history of exposure to head impacts. Analysis of the specific effect of pTau and Aβ 42 demonstrated that pTau correlated with CCL2 in both AD and CTE cases. Aβ 42 did not have any correlation on CCL2 in CTE cases; however, there was a negative correlation found between Aβ 42 and CCL2 in AD males. Overall, the present study expands on previous work demonstrating neuroinflammation and glial recruitment is a consequence of RHI and might be implicated in CTE pathogenesis [2]. (See figure on previous page.) Fig. 
1 Greater microglia and macrophages are recruited to the CTE lesion blood vessels. The Iba1+ cell density specific to the CTE pathognomonic lesion was investigated to determine if tau specific glial recruitment occurs. a Representative image of Iba1+ cells found around control and CTE lesion blood vessels at the depth of the cortical sulcus in the DLFC. Left panel is a low power image of the depth of the cortical sulcus. Right panels are high power images of control and CTE lesion blood vessels. White arrows denote Iba1+ cells with processes contacting blood vessel. Asterisk denotes a blood vessel. Scale bar = 50 μm. b Quantitation of the average number of Iba1+ cells found around lesion and control vessels in low and high stage CTE. Each dot represents a single person. c Quantitation of the percentage of TMEM119+/Iba1+ and TMEM119−/Iba1+ cells found around the lesion and control vessels in control, mild, and severe CTE. Each dot represents the Iba1+/TMEM119+ (black circles) or Iba1+/TMEM119− (white squares) percentage from a single person. d Representative image of Iba1+/TMEM119+ and Iba1+/ TMEM119− cells around lesion and control vessels. Increased macrophage recruitment was observed around lesion vessels. Asterisk denotes a blood vessel. Scale bar = 100 μm. Error bars are expressed as mean ± SEM. Statistics between mild and severe CTE generated with a two-way ANOVA. *p < 0.05, **p < 0.01 Fig. 2 CCL2 levels correlate with the years spent playing American football, the number of Iba1+ and CD68+ cells in cases with RHI. Levels of DLFC CCL2 were compared against a the number of years spent playing American football, b the density of Iba1+ microglia/macrophages, c and the density of CD68+ inflammatory cells found in the DLFC at the depth of the cortical sulcus. All cases had a history of playing American football. Each dot represents a single person. Significance and slope of the line was calculated using linear regression analysis. As CCL2 was found to have a non-normal distribution, a rank bank transformation technique was used to achieve the required normal distribution needed for linear regression analysis. The transformation resulted in normally distributed Z scores which are plotted on the y axis . c, f To determine if CCL2 was specifically elevated in the DLFC, CCL2 in the DLFC was divided by CCL2 values in the calcarine cortex to obtain a ratio (control n = 13, RHI without CTE n = 16, low CTE n = 25, high CTE n = 42, low AD n = 37, intermediate AD n = 16, high AD n = 12). Values over 1 represent more CCL2 in the DLFC compared to the CC. Statistics were generated via a one-way ANOVA with a Kruskal-Wallis post-test comparing differences to the control cases. Each dot represents a single case. Error bars show median and interquartile range. *p < 0.05, **p < 0.01, ***p < 0.001 relative to control cases The current results suggest CCL2 is part of the neuroinflammatory signaling cascade after RHI. The initial mechanisms behind CCL2 elevation after head impacts may be protective. Recruiting microglia and monocytes to areas of damage is critical to remove dead tissue, prevent infection, and promote recovery. However, prolonged, chronic, or intense signaling turns the initial protective response into a damaging one. Consistent with chronic signaling through RHI, CCL2 was found to be correlated with the years spent playing American football. 
Additionally, CCL2 trended higher in the RHI without CTE group compared to controls suggesting the chronic exposure and the damage associated with playing American football is potentially sufficient to induce CCL2 in the brain, independent of pTau. After injury, brain derived microglia and peripheral-derived macrophages are commonly observed to be recruited to the region of damage. In CTE, the area of most concentrated RHI damage are blood vessels at the depths of the cortical sulcus in the frontal cortex [6]. In agreement, elevated Iba1+ cells were seen to accumulate in correlation with perivascular deposits of pTau (i.e., the CTE pathognomonic lesion [6]). Although the majority of accumulating cells were Iba1+/TMEM119+, a subset were TMEM119− and believed to be infiltrating peripheral macrophages, which has been previously reported after RHI [8]. Future work will be needed to verify the peripheral macrophages involvement, as it is possible the TMEM119− population are microglia that downregulate the TMEM119 gene expression during inflammation. Although CCL2 was observed to correlate with both increased number of microglia and increased inflammatory activity, it is likely that CCL2 is only involved in the recruitment of cell and the glial inflammatory response occurs via secondary factors (i.e., proximity to damage neurons, pTau, or Aβ). To that end, multiple linear regression modeling demonstrated that when including Iba1, CD68, and age at death into the same model, only the Iba1+ cell density correlated with CCL2. This suggests that in the current study, CCL2 is only recruiting microglia to areas of damage. Although the results suggest CCL2 and the neuroinflammatory response could be elevated prior to pTau accumulation, it is difficult to determine causality and the order of events due to the cross-sectional nature of studies using postmortem human tissue. However, regardless of which occurs first, our previous work suggests that once there is an increase in neuroinflammatory microglia, a feedback loop occurs where pTau causes inflammation which further induces pTau deposition [2]. In addition to enhanced pTau deposition, the inflammatory response results in tissue damage [31]. In experimental models, loss of CCR2 has been observed to block recruitment and reduce the area damaged suggesting a beneficial effect of limiting microglia and macrophages [31]. Moreover, several studies have confirmed that blocking the CCL2 signaling pathway through genetic means or small molecules have potentially protective effects after head impacts [32]. Considering these studies, future research should examine the possible beneficial effect of blocking microglia recruitment and its outcome on local neuroinflammation and pTau deposition. In addition to blood vessels, the frontal cortex is where CTE pathology is typically first observed and the calcarine cortex is relatively spared [6]. It is not entirely clear why the frontal cortex as opposed to the parietal or temporal cortex exhibits pathology first, but it hypothesized to be related to the physical area of contact (i.e., helmet to helmet hits) in addition to kinetics and physics of head impacts [7,33]. This would suggest that changes related to CTE pathology would initially be restricted to the region where pathology occurs first. Several studies have shown that CCL2 was elevated in the CSF after TBI [34,35]. However, tissue regional specificity of CCL2 in the brain has not been examined before. 
Here, we show that in low and high stage CTE, when standardizing frontal cortex CCL2 values to those in the calcarine cortex, more CCL2 was found in the DLFC. Ordinal regression also demonstrated a step-by-step increase from control cases, RHI without CTE, low CTE, and high CTE when controlling for age of death. This comparison provides compelling evidence that glial recruitment signals are directed to the regions of greatest injury and CTE pathology as opposed to a non-specific TBI-related brain-wide increase. CCL2 can be produced as a consequence of a variety of stimuli. Although the present study focused on RHI, increases in CCL2 has been found in various neurodegenerative diseases and other injuries. To further explore how similar or different the CCL2 dynamics found after RHI were to other stimuli, subjects with AD without a history of RHI were examined. The inclusion of cases with AD allowed investigation into how factors such as aging or specific pathogenic proteins like pTau and Aβ could factor into CCL2 production independent of head impacts. Using multiple linear regression modeling, it was observed that pTau did correlate with CCL2 in both AD and CTE. Additionally, age was observed to correlate with CCL2 as well, demonstrating exposure to head impacts is not the only driving factor in CCL2 expression. This represents the diverse nature of the immune response with multiple stimuli converging on specific pathways. Neurons and microglia have significant crosstalk with a diverse range of receptors designed to maintain a homeostatic environment [36]. Disruption of this crosstalk will lead to neuroinflammation. Head impacts and the subsequent neuronal damage represent one way to disrupt this normal homeostatic relationship. Additionally, pTau aggregation in neurons also induces neuronal dysfunction and damage, independent of RHI. This shows that although the stimulus is different, similar CCL2 increases are observed across multiple diseases. Although this limits CCL2s ability to be used as a specific biomarker for disease, it does suggest that CCL2 could be a general target for therapeutics that possibly might be effective across multiple disorders. Surprisingly, Aβ levels were not found to correlate with CCL2. Aβ plaques are associated with a microgliosis and are a consistent feature of AD and variably present in CTE [37]. However, our results demonstrated that CCL2 was significantly associated with pTau and not Aβ 42 in the overall group as well as in both CTE and AD groups examined separately. It is unclear why the Aβ results in the current study does not agree with reports, mainly using AD transgenic mouse models, suggesting CCL2 facilitates Aβ deposition [38,39]. One explanation is that many of the Aβ related studies are performed in an in vitro setting or murine models of disease and do not fully recapitulate the human in vivo system. An important distinction is that many transgenic mouse lines only express Aβ or pTau pathology in isolation, when the true human disease environment is much more complex. Similarly, the current study did not examine separate effects of soluble vs insoluble versions of both pTau and Aβ which likely drive different aspects of disease. Additionally, the current study only examines the neurodegenerative environment at the time of death. This snapshot in time does not capture early pathologic changes at the beginning of disease. 
It is possible early CCL2 activity does drive initial Aβ deposition, however, this effect subsides after several years when pathology is more severe. Interestingly, when separating the AD group by gender, a trend toward an inverse relationship was observed in males. This is consistent with the role microglia play in phagocytosing Aβ [40]. In cases with a higher CCL2 signal, more microglia might be recruited to phagocytose plaques resulting in less Aβ. However, as previously mentioned, it is difficult to fully observe this effect when other pathogenic proteins, head impacts, and age also affect CCL2 levels. Further investigation into the possible unique effects of Aβ will be needed. The current study is not the first examination into chemokines in CTE. Previous work has demonstrated the chemokine CCL11 was elevated in CTE [26]. Furthermore, CCL11 was able to differentiate CTE and AD. An important distinction is that CCL11 was believed to be produced by the choroid plexus and not locally in areas of damage [41]. Additionally, CCL11 has a broader range of action and can affect more diverse immune cell types, although it can also induce a glial inflammatory response [42][43][44][45]. This represents the complex nature and interplay of the immune response and highlights the idea that no chemokine exists in isolation. Therefore, it will be important to examine how other chemokines act in concert with CCL2 and regulate glial recruitment and neuroinflammation in order to get a clear understanding of the neuroinflammatory cascade that occurs after head impacts. Conclusion In conclusion, these results begin to reveal a possible mechanistic pathway for the association of CCL2 and neuroinflammation with CTE. Initially, exposure to RHI leads to tissue damage in the frontal cortex at the depth of the cortical sulcus and around blood vessels. To repair the damage, microglia and macrophages are recruited via targeted CCL2 signaling. After years of sustained RHI and chronic neuroinflammation, pTau deposition begins. The comparison across multiple neurodegenerative diseases suggest that convergent mechanisms between AD and CTE such as pathologic protein deposition and normal aging can also increase CCL2 levels through mechanisms distinct from RHI. It is feasible that the presence of pTau and neuronal dysfunction results in even further CCL2 production and glial recruitment contributing to a proposed vicious circle that might drive CTE pathology. Overall, we suggest that CCL2 might be a possible mechanism of early immune cell recruitment to areas of RHI damage and could be a novel target for future therapeutics to abate or reduce neuroinflammation.
2020-12-06T14:10:22.978Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "528b7e484f10828759c4b40da40efb4a99245009", "oa_license": "CCBY", "oa_url": "https://jneuroinflammation.biomedcentral.com/track/pdf/10.1186/s12974-020-02036-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1367397443c25798614ebd4df8daf067fa744a64", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
2934416
pes2o/s2orc
v3-fos-license
Symmetry Scheme for Amino Acid Codons Group theoretical concepts are invoked in a specific model to explain how only twenty amino acids occur in nature out of a possible sixty four. The methods we use enable us to justify the occurrence of the recently discovered twenty first amino acid selenocysteine, and also enables us to predict the possible existence of two more, as yet undiscovered amino acids. Introduction The genetic code uses four "letters" or the bases adenine(A), thymine(T), guanine(G) and cytosine(C) in the four nucleotides constituting the DNA (or uracil(U) in the corresponding RNA template) by reading them in groups of three. A and G are purine bases while C and T are pyrimidines. Like T, U pairs with A. During protein synthesis, these triplets of three bases (or codons) encode for specific amino acids. The genetic code however, is degenerate, and even though there are 64 possible codons, only 20 amino acids relevant to mammalian proteins actually occur in nature. It has remained intriguing that despite the redundancy of the codons, the genetic code did not expand any further and stopped at the number 20. It is therefore of interest to find out if the genetic code has any mathematical property which gets optimised when the number of codons becomes nearly thrice the number of the amino acids. We attempt here to answer this question by adapting some standard group theoretical methods of particle physics to molecular biology. The genetic code is nearly the same for all organisms -non-canonical genetic codes are used in mitochondria and some protozoa [1]. Here, we consider only the universal genetic code. Out of the 64 possible codons, it is now known that 61 code for the known 20 amino acids -the remaining three (UAG, UGA, and UAA) code for termination or "stop" codons. The codon AUG for methionine also codes for the initiation of the translation process, and is therefore also called the "start" codon. It was discovered some years back that one of the stop codons, UGA, translates under certain circumstances to a twenty first amino acid selenocysteine [2]. It is certainly conceivable, that the other two stop codons UAG and UAA similarly code also for some as yet undiscovered amino acids. Our approach is a semi-empirical one, but it enables us to not only justify the occurrence of selenocysteine, but it allows us also to predict the possible existence of two more, new, as yet undiscovered amino acids. We look at the hydrophilic and hydrophobic tendencies of the amino acid residues constituting the proteins, as they play a key role in determining the conformation of a protein and the way it folds. The idea of using group theoretical techniques in studying the genetic code is not new -references [4] deal in length with searches for symmetries among Lie groups for trying to explain codon degeneracy in the genetic code. In their very interesting papers, the authors of [4] view the universal genetic code as having evolved through a chain of successively broken symmetry events, from a primordial amino acid having a particular symmetry (which they assume to be Sp (6)). Their approach, however, does not presently account for the twenty first amino acid selenocysteine having properties similar to cysteine. We approach the problem from a different point of view : we have tried to show that it is possible to classify the 64 possible codons into well-defined multiplets -the hydropathic properties of the amino acids they code for are determined by the multiplet they belong to. 
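The codon bookkeeping quoted above (64 triplets, 3 stop codons, 20 standard amino acids, with degeneracies ranging from one to six codons per residue) can be checked directly against the standard genetic code table; the sketch below assumes Biopython is available to supply the table itself.

from itertools import product
from Bio.Data import CodonTable   # assumes Biopython is installed

bases = "UCAG"
codons = ["".join(c) for c in product(bases, repeat=3)]
table = CodonTable.unambiguous_rna_by_id[1]          # the standard ("universal") code

amino_acids = {table.forward_table[c] for c in codons if c not in table.stop_codons}
degeneracy = {aa: sum(1 for c, a in table.forward_table.items() if a == aa)
              for aa in amino_acids}

print(len(codons), "codons,", len(table.stop_codons), "stop codons,",
      len(amino_acids), "amino acids")               # 64, 3, 20
print("codons per amino acid:", sorted(degeneracy.values()))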
Our approach also has a predictive power (presently lacking in the papers in [4]), enabling one to approximately predict certain properties of two other possible, as yet undiscovered amino acids.

The role of codons in protein synthesis

Protein synthesis is initiated by a process called transcription, in which the cell makes a copy of the gene, a messenger RNA (mRNA) template, from the DNA with the help of an enzyme called RNA polymerase [1,5]. Transcription stops when the enzyme reaches a "stop" sequence at the end of the gene, upon which the mRNA dissociates from the DNA, moves to the cytoplasm and becomes attached at its Shine-Dalgarno sequence to a ribosomal RNA (rRNA) located within the ribosome. In the cytoplasm, transfer RNA (tRNA) molecules form complexes called aminoacyl-tRNAs with their respective amino acids, in a process driven by the enzymes aminoacyl-tRNA synthetases. An aminoacylated tRNA moves to the ribosome, where its anticodon recognizes the corresponding complementary codon on the mRNA and incorporates the amino acid residue at the correct position specified by the mRNA codons into the growing peptide chain of the protein; this process is called translation. The folds governing the conformation of a protein are thus determined by its primary structure, the sequence of the amino acids in the peptide chain, which in turn depends upon the sequence of codons in the functionally mature mRNA and the exon sequences in the DNA.

Group theoretical methods for Codons

Keeping all of the above complex dynamics in mind, one could still try to look for any possible symmetries in the system, and see whether a much-simplified, minimal mathematical model for the codons could capture any of the physics and biochemistry of the actual biological system. To begin with, we first recall the basic chemical structure of an amino acid. It has a central carbon, called the α-carbon, which is attached to four groups: a hydrogen atom, an acidic carboxyl (-COOH) group, a basic amino (-NH₂) group and a distinct side chain (-R) group; this last group essentially determines its chemical properties. The amino acids we know can be classified into two broad categories on the basis of their solubilities in water: hydrophobic and hydrophilic. At pH 7.0, the residues with hydrophobic (non-polar) -R groups are alanine, valine, leucine, isoleucine, proline, phenylalanine, tryptophan, methionine, cysteine and glycine. Hydrophilic side chains are polar, so they can be further classified as acidic, basic or neutral, depending upon their charges: at pH 7.0, lysine, arginine and histidine have basic side chains, aspartate and glutamate are acidic, and asparagine, glutamine, serine, threonine and tyrosine have polar but neutral side chains. Yet another consideration for classifying amino acids could be whether the side chains are aliphatic or aromatic.

The categories or multiplets into which the amino acids fall appear to reflect certain underlying internal symmetries. We know that while the base triplets (codons) do not constitute the amino acids, the base sequence within each codon dictates the identification of, and translation to, a particular amino acid. We can therefore hypothesise that the bases possess certain basic symmetries. We look for the properties of the system which do not change, or change only approximately, in time, and the symmetries associated with the conserved quantities.
Let T(a) represent a group of transformations which leave the Hamiltonian H of the physical system invariant. We assume that the transformations are represented by unitary operators U(a) ('a' denoting a parameter of the transformation) operating on a complex vector space [6]. The eigenvalue equation for the system would be

$$H\,\varphi_n = E_n\,\varphi_n, \tag{1}$$

where $\varphi_n$ is an eigenfunction of H with energy eigenvalue $E_n$. Operating on this with U(a), one obtains

$$\left(U(a)\,H\,U^{-1}(a)\right)\varphi'_n = E_n\,\varphi'_n, \tag{2}$$

where we have let $\varphi'_n = U(a)\,\varphi_n$. Since U leaves the Hamiltonian invariant, $U H U^{-1} = H$, and therefore

$$H\,\varphi'_n = E_n\,\varphi'_n, \tag{3}$$

so that $\varphi'_n$ is also an eigenfunction of H with the same energy eigenvalue. These basic techniques can be used to develop a symmetry scheme for the nucleotides and the codons. A and G can be regarded as different states of the same object, the purine, described by a state vector in an abstract, complex vector space, and similarly, C and T/U as different states of a pyrimidine. The purine and the pyrimidine state vectors are then each two-component matrices:

$$\psi_R = \begin{pmatrix} \psi_A \\ \psi_G \end{pmatrix}, \qquad \psi_Y = \begin{pmatrix} \psi_C \\ \psi_{T/U} \end{pmatrix},$$

where $\psi_R$ and $\psi_Y$ denote the purine and pyrimidine state vectors respectively. A unitary transformation $U(\Lambda)$, which involves a rearrangement of the components but leaves the magnitude $(\psi_i^{\dagger}\psi_i)^{1/2}$, $(i = R, Y)$, invariant, can be written as

$$\psi'_i = U(\Lambda)\,\psi_i. \tag{4}$$

Transition mutations, involving replacement of one purine by another purine, or one pyrimidine by another pyrimidine, can be represented by (4). The states representing A and G could be taken to correspond respectively to 'up' and 'down' states of $\psi_R$, with respect to a chosen axis in the internal vector space. A state intermediary between these two states could be regarded as a superposition of the two, with the state having the larger probability measure having the higher possibility of becoming the final state of the mutation. Proceeding similarly, we can define the full system of all four bases by a four-component vector $\varphi_i$ $(i = 1, \ldots, 4)$:

$$\varphi = \begin{pmatrix} \varphi_A \\ \varphi_G \\ \varphi_C \\ \varphi_{T/U} \end{pmatrix}.$$

A rotation through an angle $\Lambda$ in this internal space, which transforms $\varphi$ to $\varphi'$,

$$\varphi' = \exp\!\left(i\,\Lambda_k\,I_k\right)\varphi, \tag{5}$$

where the $I_k$ are 'k' 4 × 4 matrices which are representations of the generators of the transformation group, changes the state of the nucleotide system, but not the total number of nucleotides. Transversion mutations, in which a purine is replaced by a pyrimidine or vice versa, are also covered by the transformation (5). Since there are four different kinds of bases, of which three together code for one amino acid, we view the amino acids as arising out of "3-base" representations of the group SU(4):

$$4 \otimes 4 \otimes 4 = 20_S \oplus 20_M \oplus 20_M \oplus \bar{4},$$

where 4 is the fundamental representation of SU(4), $\bar{4}$ is the conjugate representation, and the subscripts S and M denote states which are formed from the symmetric combinations and the mixed-symmetry combinations, respectively, of the product tensors. Each multiplet is the realization of an irreducible representation of SU(4), and because the members of each have masses which are not exactly, but only nearly, degenerate, the SU(4) symmetry is only an approximate symmetry. Notice that the total codon count of 64 is respected, but now the codons are grouped in separate multiplets. Each multiplet has a characteristic property which is shared by all its members. Since the bases do not themselves constitute the amino acids, it follows that though the $J_i$ $(i = 1, \ldots, 3)$ are conserved numbers for the codons, they need not necessarily be additively conserved for the Kronecker product, since the permutations of the bases within a triplet code to different amino acids.
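The dimension bookkeeping behind this decomposition can be checked mechanically: for SU(n), the totally symmetric part of $n \otimes n \otimes n$ has dimension C(n+2, 3), the totally antisymmetric part C(n, 3), and the remainder splits equally between the two mixed-symmetry pieces. The short Python sketch below is a plausibility check of this counting, not a derivation of the $J_i$ assignments introduced next; it verifies that n = 4 reproduces 20 + 20 + 20 + 4 = 64.

```python
from math import comb

def triple_product_dims(n: int):
    """Dimensions of the irreducible pieces of n (x) n (x) n for SU(n)."""
    sym = comb(n + 2, 3)                  # totally symmetric part: C(n+2, 3)
    antisym = comb(n, 3)                  # totally antisymmetric part: C(n, 3)
    mixed = (n**3 - sym - antisym) // 2   # two equal mixed-symmetry copies
    return sym, mixed, antisym

sym, mixed, antisym = triple_product_dims(4)
print(sym, mixed, antisym)               # 20 20 4
assert sym + 2 * mixed + antisym == 64   # 20 + 20 + 20 + 4 = 64 codons
```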
We find that when we group the amino acids into four categories (hydrophobic, weakly hydrophobic, hydrophilic and imino) and then try to adapt the SU(4) quantum numbers [7,8] for the quark triplets (baryons) to the codons, they fall beautifully into well-defined categories, as follows. The numbers $J_i$ we have assigned to the base triplets are shown in Tables 1-3. The three-dimensional plots of $J_1$, $J_2$ and $J_3$ for all the codons are shown in Figures 1-4. Except for the case of proline (P), all the codons coding for a particular amino acid share the same $J_3$ number. All the amino acids in Fig. 1 are hydrophobic, and all in Fig. 3 are polar and hydrophilic. It is of course well known that there exist several different hydrophobicity scales, and there is no unique assignment of clear-cut hydrophobicity values for amino acids [9]. The amino acids in Fig. 1 are, in general, widely accepted to be more hydrophobic than the others. We have classified proline (P) separately, as a realization of the conjugate $\bar{4}$ representation of SU(4); although it is very hydrophobic, it is technically an imino (-NH) acid rather than an amino (-NH₂) acid, as its side chain is bonded to the nitrogen as well as to the central α-carbon.

It is extremely interesting that, with the assignment of the $J_i$ numbers as in Tables 1-3, the codon UGA, which usually codes for "stop", falls in the multiplet of weakly hydrophobic amino acids and has the same $J_3$ value of 2 as cysteine (C). UGA also codes for the newly discovered twenty-first amino acid selenocysteine (SeC): the sulfur atom in C is replaced by selenium in SeC. On the basis of these observations, one could similarly predict the existence of two more as yet undiscovered amino acids from the codons UAG and UAA, which are presently known to code only for the "stop" signal. One could hypothesise that if UAG were to code for a new (twenty-second) amino acid, then that amino acid would have properties similar to H, and similarly, if UAA coded for a twenty-third amino acid, it would have properties similar to K or R, even though both of these codons differ from the codons for Y only at the wobble position.

In our symmetry scheme for codons, we have not yet found it possible to assign $J_i$ numbers to each base individually so as to give, additively and in a consistent way, the total $J_i$ numbers for each base triplet coding for the amino acids. This is not unreasonable since, as emphasised before, the bases do not constitute the amino acids. Synonymous codons occur at different frequencies even though they all code for the same amino acid. Correspondingly, the tRNA molecule for the more frequently used codon occurs in larger amounts than its isoacceptors. This fact is reflected in the differing $J_1$ and $J_2$ values for synonymous codons. The probability of occurrence of a particular synonymous codon would be weighted by $J_i$-dependent factors in the corresponding partition function and its free energy. Thus, the occurrence of only twenty-one amino acids from a possible sixty-four codons can be explained in a satisfactory manner within our scheme.

Discussion

We have classified the 64 codons within a semi-empirical model which very closely resembles the decomposition of the Kronecker product $4 \otimes 4 \otimes 4$ of SU(4), after assuming that the bases A, C, G and T/U can be regarded as different states of a vector in an abstract, complex vector space. We represent transition and transversion mutations, involving replacement of purines and pyrimidines, by rotations through an angle in this internal space.
Our model explains the existence of synonymous codons, and thus how only twenty-one amino acids (including selenocysteine) have been found to occur so far out of a possible sixty-four. It also enables us to predict the possible existence of two more, as yet undiscovered amino acids. The stability of a fully folded native protein structure is a consequence of a balance between the hydropathies of the constituent amino acid residues in its primary structure, electrostatic interactions and hydrogen bonding. It would thus be a useful exercise to incorporate the ideas in this paper in an analytical manner to approach the protein folding problem, since incorporation of all the internal symmetries into the partition function is essential to obtain the correct form of the free energy of the system. Our ideas and methods could also be very useful in providing a rigorous mathematical basis for studying DNA replication and protein synthesis using quantum algorithms.
2003-08-25T16:17:24.000Z
2003-08-25T00:00:00.000
{ "year": 2003, "sha1": "84668cdd5d73c756ba1c0a02b620209f2416b694", "oa_license": null, "oa_url": "http://arxiv.org/pdf/physics/0308091", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "84668cdd5d73c756ba1c0a02b620209f2416b694", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Physics", "Biology" ] }
246335486
pes2o/s2orc
v3-fos-license
The Advantages and Disadvantages of Learning a Second Language Early

With the worldwide enthusiasm for learning a second language accelerating, this article illustrates the advantages and disadvantages of learning a second language early by analysing relevant research and articles. The advantages of learning a second language early are rooted in the prefrontal cortex of the brain, whose development helps promote cognitive capacities such as attentional ability, memory and creativity. The acquisition of a second language can in fact benefit both cognitive development and academic development. In contrast, learning a second language too early can lead to a loss of the native language, and inadequate mastery of the native language makes it more difficult to learn a foreign language. Academic burden and external factors, such as the lack of proper education, can hinder the second-language learning process as well.

INTRODUCTION

As time goes on, globalization will continue to accelerate the intensification of worldwide social relations which link distant localities in such a way that local happenings are shaped by events occurring many miles away and vice versa [1]. Language is important in globalization because it serves as a channel for interactions between people and places, while it is also shaped by global influences. Hence, to integrate into the globalization process, there is a boom today in learning a new language to communicate with people in different countries and areas. There are many advantages for people in being bilingual or multilingual:

1) Switching between languages is essentially a form of brain training. Mentally regulating two or more languages gives the brain cognitive benefits and leads to a higher degree of metalinguistic awareness, similar to how regular physical training improves the body medically [2].

2) Language learning can also make you a better person. According to social studies, language learning improves one's ability to empathize, or see a situation from another's point of view. When you learn a new language, you not only pick up new words and sounds, but you also pick up new thoughts. It is as if you were looking at the world through various lenses [3].

3) Language abilities might also help you get work and increase your international trade opportunities. Languages are an essential part of the 21st-century skill set in an increasingly multicultural and multilingual employment environment [4]. Language abilities may not guarantee employment in themselves, but they can give you an edge over a monolingual candidate.

4) If you are looking to buy something, English is good, but it might not be the best language to use if you want to sell anything [5]. According to government statistics, the UK loses roughly 3.5 percent of its GDP each year due to a lack of language skills in the workforce. Other countries, on the other hand, can use their multilingualism as a resource with exchange value in a globalized economy. The economic worth of multilingualism in Switzerland, for example, is estimated at 10% of GDP, because many Swiss enterprises can readily operate in multiple languages [6].

With so many advantages, learning a second language has become a popular option for many people and families. Recently, however, the question of whether students should learn a second language at a young age has attracted attention.
For example, there are numerous advantages and disadvantages for children learning English as a foreign language in order to become bilinguals [7]. Some people think that a significant advantage of learning a second language is that it teaches youngsters to focus their attention on the important variables in a context, including ambiguous or contradictory information. Increased cognitive capacities may aid in the development of the skills thought to be involved in effective communication in youngsters. Knowing two words that describe the same notion, such as "good" and "great", can help children realize that an object or event can be described in multiple ways, which can aid their understanding of other people's viewpoints. "Research has proven that the brain interprets language differently after 10 or 12 years old since it is constantly building neural connections till then. The frontal lobe of the brain is where we process language as children. When we acquire a language as a teenager or adult, however, the brain has to 'scramble' to locate new storage space. In basic terms, when you learn a language as a youngster, your brain absorbs it quickly; after that, it takes a lot more effort" [8]. In addition, learning a second language early can help students achieve better academic development. Students who speak a second language perform statistically better on standardized tests administered in English. The College Entrance Examination Board reported in its 1992 report, College Bound Seniors: The 1992 Profile of SAT and Achievement Test Takers, that students who had studied a foreign language for four years or more scored higher on the verbal section of the SAT than students who had studied four years or more in any other subject area. Furthermore, the average mathematics score for those who had studied a foreign language for four years or more was the same as for those who had studied mathematics for the same number of years.

In contrast, others think that learning a second language has disadvantages for children. Bilingual children face several disadvantages when learning English as a foreign language. If children have been exposed to different languages since birth, they may begin speaking three to six months later than children raised in a monolingual environment, and they will temporarily mix languages [9]. Rehman says that "You can expect your bilingual child to begin speaking about 3-6 months later than his/her monolingual peers." Another significant disadvantage of learning a foreign language at a young age is that children will mix languages for a period of time: "It is normal for bi-/multilingual children to mix up languages until about the age of 4. If children are lacking the right word in language A, they will borrow it from language B to communicate their message". According to Rehman, this implies that individuals may mix languages at any time, influencing how they connect with others and transmit their ideas or messages. The impact of age on learning a second language is thus still not clear and conclusive. To make the picture more complete and comprehensive, the purpose of this article is to analyse the advantages and disadvantages of learning a second language early by reviewing relevant articles and research from recent years.
Brain and cognitive development

There are many theories holding that second language learning is closely related to the brain, especially at an early stage, and the prefrontal cortex plays an important role in it. Precisely, the development of the prefrontal cortex is related to cognitive development, which is fundamental to the acquisition of language and other skills. According to Piaget [10], this ability develops in a predictable pattern through a sequence of well-defined phases and milestones. The child, between the ages of 2 and 7, enters a representational stage of extended verbal symbolism after a first stage of rudimentary sensory-motor integration and primitive symbolization. External feedback, such as language from other people, gradually becomes more complex and regulated. The child develops the ability to postpone gratification. From the ages of 7 to 11, language and behavior become more structured, less reliant on external stimuli, and more inventive. Enter games, sports, erector sets, and problem-solving. These two stages in the stage theory show us how language learning and other abilities improve with the development of cognition.

Additional control ability

Moreover, cognitive development can also be affected by language acquisition; children's frontal lobe functioning for controlling attention is affected by it too. The development of children's cognitive and neurological systems is influenced in part by their daily learning experiences, which include language acquisition. Children face a variety of linguistic and socio-linguistic circumstances during the course of language acquisition, all of which necessitate some form of conflict resolution, for example, adjudicating between the meanings of similar-sounding words like "I" and "eye" [11]. The doubling of these conflicting contexts that is typical of bilingual language acquisition (e.g., increasing the number of possible homophones), and the unique need to selectively attend to one language while suppressing the other, may, according to theories of bilingual cognitive development, alter bilinguals' attentional control mechanisms [12][13][14][15]. The ability to deliberately focus and shift attention is known as attentional control [16]. In a standard word-image matching test, for example, participants take longer to choose a picture when they encounter images with similar initial sounds, such as "card" and "cart", versus "card" and "lion" [17]. Participants face verbal interference throughout this exercise, which forces them to ignore the competing distractor. Importantly, bilingual participants' performance in this task can be influenced by both within-language and cross-language distractors. These findings demonstrate not only the attentional difficulties of language processing, but also the broader assumption that bilinguals' languages are frequently co-active [18][19]. The higher requirement for attentional control across various contexts of bilingual language usage, from word recognition to discourse, is hypothesized to result from such continual co-activation of bilinguals' two languages [15]. Thus, theories of bilingual development propose that early childhood bilingual exposure, during periods of rapid brain development, may result in early-emerging and lifelong modifications in children's attentional control abilities [20].
Other cognitive performances

Besides the improvement in attentional control abilities, learning a second language can help children strengthen their memory, creativity and other cognitive capacities. Contrary to concerns about mixing the native and the second language, children can differentiate two different languages within the first weeks of life. "Learning another language actually enhances a child's overall verbal development," says Roberta Michnick Golinkoff, author of How Babies Talk. Research goes on to show that acquiring a second language at a young age has a number of other cognitive benefits. Children who study a foreign language outperform their peers in terms of overall basic skills in elementary school. They go on to score higher on SATs, according to the College Entrance Examination Board. Children who learn a foreign language at an early age have greater problem-solving abilities, improved spatial relationships, and increased creativity. Learning a second language at a young age promotes flexible thinking and communication skills, allowing youngsters to approach challenges from multiple perspectives. Furthermore, studies demonstrate that multilinguals have better memory, planning, and multitasking abilities. When learning several languages as a child, the brain is trained to pay attention to key information and ignore irrelevant information, a skill that subsequently enables improved focus, memory, planning, and multitasking. According to research, multilinguals employ more of their brains than monolinguals and outperform monolinguals in creativity tests [21].

Academic development

Another advantage of learning a second language early is that it can help kids achieve academic development. A recent study of the reading abilities of 134 four- and five-year-old children, for example, discovered that bilingual children grasped the broad symbolic representation of print better than monolingual children [22]. Another study examined achievement test results from kids in Fairfax County, Virginia, who had engaged in immersion, the most intensive sort of foreign language program, for five years. The study concluded that those students outperformed all comparison groups on achievement assessments and remained strong academic performers throughout their schooling [23]. Finally, a study conducted in Louisiana in the 1980s found that, regardless of race, gender, or academic level, students who received daily instruction in a foreign language (taught as a separate subject rather than through immersion) outperformed those who did not on the third-, fourth-, and fifth-grade language arts sections of Louisiana's Basic Skills tests [24]. All of these results suggest that second language study helps enhance English (native language) and other academic skills. According to several studies, students who acquire foreign languages perform statistically better on standardized college entrance examinations than those who do not. For example, the College Entrance Examination Board reported that students who had averaged four or more years of foreign language study outperformed those who had studied four or more years of any other topic on the verbal component of the Scholastic Aptitude Test (SAT) [25].

The influence of the native language on second language learning

It can be learned from the above that age plays a key role in second language learning and that younger language learners derive many benefits from the acquisition of a second language.
But whether learning a second language early always facilitates the learning process remains open to question. Students as old as primary and middle school age who have mastered their first language can do better in the acquisition of a second language. For example, Lightbown and Spada cite research conducted by Snow and Hoefnagel-Hohle on a group of English speakers learning Dutch as a second language [26]. This research was particularly enlightening because it included students of different ages, ranging from six to sixty years old. Surprisingly, this research found that teenagers, not children or adults, were by far the most successful learners. Snow and Hoefnagel-Hohle discovered that young learners struggled with activities that were beyond their cognitive maturity, but adolescents acquired more quickly in the early phases of second language development. The study concludes that when adults and adolescents used their original language on a daily basis in social, professional, and academic interactions, they were able to make significant progress in second language learning [27]. According to the research, learners who have excellent academic skills in their home language will learn a second language faster than those who do not have similar skills in their native language. In other words, effective first-language acquisition is critical for learning a second language.

Learning a second language means impairing the use of the first one

"Subtractive bilingualism" is the name given to this problem by Wallace Lambert, who first discussed it in relation to French-Canadian and Canadian immigrant children whose acquisition of English in school resulted not in bilingualism, but in the erosion or loss of their primary languages [28][29][30]. The phenomenon is well known in the United States. It is the narrative of numerous American immigrant and native children and adults who have lost their ethnic languages as a result of linguistic assimilation into the English-speaking environment of school and culture. Even if it was the only language they spoke when they first started school, few American-born children of immigrant parents are fully skilled in the ethnic language. Once young children have learned English, they are less likely to retain or develop the language spoken at home, even if it is the only language their parents know. This has been the story of previous immigrant groups, and it is the story of today. The only difference is that the process appears to be moving much faster today [31].

Learning a second language exerts pressure on kids and parents

Another disadvantage is that bilingual children have to deal with the additional academic load that comes with learning to read and write in another language on top of the first; this means that they have to work twice as hard. If parents want their children not only to speak another language but also to read and write it, they will need to provide extra instruction outside of regular school hours. Silke Rehman believes: "Organizing language lessons requires considerable effort, both financially and in terms of time. However, all parents would agree that the advantages outweigh the effort." An additional academic burden or supplementary tuition, on the other hand, becomes boring and difficult for children.
As a result, they prefer to engage in other types of activities, such as sports, and decide to discontinue their bilingual education.

External factors impede the process of learning a foreign language

Furthermore, a variety of factors, including the circumstances in which the languages are learned, might influence the outcome of multilingual development. The context or environment is crucial in children's learning process, and children should use every resource in their environment as a tool from which to benefit and readily learn the language. Parents (family) and school are the most essential factors within the child's setting for success in becoming bilingual. However, a number of variables at school and at home impede the process of learning a foreign language, such as ineffective attempts at integration into society, a shortage of teachers, a lack of classrooms, parents' and teachers' fluency in the foreign language, and so on (www.everythingesl.net).

1) Unsuccessful attempts at integration into society are one of the most serious issues confronting bilingual education in the United States. "Bilingual education was thought vital since it was meant to better integrate the children of immigrants and minorities into society," writes Aparna Iyer. "The bilingual education system required distinct teachers and classrooms and believed in gradual integration into society by allowing children to receive instruction in their home language for three or more years."

2) The unavailability of teachers is also a factor which blocks children's second language learning. In El Salvador, for example, public schools have only one English teacher for the entire student body. Furthermore, some teachers are not assigned to their area of specialization: an English instructor, for example, may be appointed to teach science or another topic, or vice versa. As a result, children are not receiving enough English education to become fluent in the language. It means that public schools are not invested in helping their children become multilingual, which is a significant disadvantage. As Aparna Iyer says, "Bilingual education requires a number of trained teachers who are proficient in both English and their native language, assuming that Spanish is one of the mediums of instruction" [32].

CONCLUSION

In this article, the advantages and disadvantages of learning a second language early have been discussed systematically. First of all, among the advantages, the development of the prefrontal cortex is related to cognitive development, which can be beneficial to the acquisition of a second language. Secondly, cognitive development is itself affected by the process of learning a second language: attentional ability, memory and creativity increase as the second-language learning process continues. Thirdly, learning a second language helps kids achieve academic development and outperform other students at the same stage. In contrast, there are disadvantages to learning a second language early. To begin with, the assumption that "the earlier one learns a second language, the better" is wrong; in fact, effective native-language acquisition is crucial for learning a second one, which means that it is not efficient for a kid who has not mastered his mother tongue to learn a second language.
Moreover, as the analysis of the situation in the United States shows, learning a second language can impair the use of the first one: immigrant kids can lose their own languages after assimilating into the English-speaking environment. In addition, the burden on the family and on the kids themselves can lead them to lose motivation and interest in acquiring a second language. On balance, however, the advantages outweigh the disadvantages. Given their natural edge, kids should seize the opportunity to learn, and even master, a second language at a young age. The burdens and the education can be handled correctly with the help of parents and professional teachers. The native language and the second language should be treated equally as kids grow up to become bilingual. Not only because of the natural edge which kids have, but also because of the benefits that learning a second language brings, kids should learn a foreign language as early as possible.
2022-01-28T16:02:15.633Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "2f20ee8c4b7fca7128cb62a8f36375a00fdc4c2e", "oa_license": "CCBYNC", "oa_url": "https://www.atlantis-press.com/article/125968656.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "39c45371dfb65212e4803a79b9e3f3753c1e7a0d", "s2fieldsofstudy": [ "Linguistics", "Education" ], "extfieldsofstudy": [] }
218983410
pes2o/s2orc
v3-fos-license
80-year-old man with dyspnoea and bilateral ground-glass infiltrates: an elusive case of COVID-19

SUMMARY

COVID-19 is a novel viral infection caused by the severe acute respiratory syndrome-coronavirus-2 virus, first identified in Wuhan, China in December 2019. COVID-19 has spread rapidly and is now considered a global pandemic. We present a case of a patient with minimal respiratory symptoms but prominent bilateral ground-glass opacities in a 'crazy paving' pattern on chest CT imaging and a negative initial infectious workup. However, given persistent dyspnoea and labs suggestive of COVID-19 infection, the patient remained hospitalised for further monitoring. Forty-eight hours after initial testing, the PCR test was repeated and returned positive for COVID-19. This case illustrates the importance of clinical vigilance to retest patients for COVID-19, particularly in the absence of another compelling aetiology. As COVID-19 testing improves to rapidly generate results, selective retesting of patients may uncover additional COVID-19 cases and strengthen measures to minimise the spread of COVID-19.

BACKGROUND

In December 2019, a novel virus, severe acute respiratory syndrome-coronavirus-2 (SARS-CoV-2), was identified in Wuhan, China. Initially thought to be comparable to influenza, our understanding of COVID-19, the disease caused by SARS-CoV-2, is evolving daily. It has caused a global disturbance due to its high transmission rate. WHO officially labelled COVID-19 a pandemic on 11 March 2020, with the disease having spread to >190 countries. As of 07 April 2020, there were more than 1 400 000 confirmed cases with over 80 000 deaths. 1 SARS-CoV-2 is a non-segmented, positive-sense RNA virus that was first isolated from people who had visited the Huanan seafood market in Wuhan, China. 2 Coronaviruses are naturally found in bats, which were postulated to be the primary reservoir for zoonotic transmission to humans in prior cases of coronavirus infection. 3 4 This is, expectedly, also true for SARS-CoV-2, as genetic studies have identified more than 96% similarity between the whole genome sequences of SARS-CoV-2 and a bat SARS-related coronavirus (RaTG13) in China. 5 In addition, pangolins have also been identified as potential reservoirs of coronavirus. 6 SARS-CoV-2 binds the ACE2 receptor located on type II alveolar cells and intestinal epithelia. This is the same receptor used by the severe acute respiratory syndrome coronavirus-1 (SARS-CoV-1), hence the technical name SARS-CoV-2 for the virus causing COVID-19.
7 8 The clinical presentation of SARS-CoV-2 infection varies from asymptomatic carriage to mild upper respiratory tract infection to severe pneumonia resulting in acute respiratory distress syndrome. 9 This has posed challenges in halting droplet transmission, due to asymptomatic carriers, as well as in identifying patients who may decompensate later in the clinical course. As we learn more about COVID-19, we need to adapt and identify the means of early diagnosis, its management and, most importantly, its prevention. We present the case of an 80-year-old man who posed a diagnostic dilemma, and the thoughts behind our decision-making process, which could be useful to other clinicians managing patients with COVID-19.

CASE PRESENTATION

An 80-year-old man presented to the emergency department with dyspnoea and nausea. His comorbidities included atrial fibrillation requiring cardioversion, currently treated with apixaban anticoagulation; non-ischaemic cardiomyopathy causing biventricular heart failure and left bundle branch block requiring cardiac resynchronisation therapy-defibrillator placement, with a most recent ejection fraction of 54%; hyperlipidaemia; gastro-oesophageal reflux disease; and pseudogout. He was a remote smoker, having quit more than 50 years ago, worked as a financial planner and denied any concerning exposures. His dyspnoea was primarily exertional and had gradually progressed over the preceding 6-8 weeks. His primary residence was in Tennessee, but he travelled extensively for work, most recently to New York almost 4 weeks prior to his presentation. He did not have any known sick contacts. In the week prior to presentation, the patient had an acute change in exertional dyspnoea that resulted in difficulty climbing a flight of stairs. This acute change correlated with new-onset nausea and loss of appetite. Notably, he did not have fevers, cough, sputum production, haemoptysis, chest pain, orthopnoea, lower extremity oedema or weight gain. He initially sought recommendations from his local primary care provider a week prior to presentation, who temporarily increased the patient's dose of furosemide. However, this did not alleviate his dyspnoea.

INVESTIGATIONS

In the emergency department, he was afebrile and normotensive with mild tachypnoea and an oxygen saturation of 92% on room air. On examination, he was well appearing and had bibasilar rales with a systolic murmur, likely from known tricuspid regurgitation, but without significant jugular venous distension or lower extremity oedema. The remainder of his physical examination was unremarkable. Laboratory workup revealed normocytic anaemia with haemoglobin 119 g/L, a normal white cell count of 7.5×10⁹/L with a reduced absolute lymphocyte count of 0.76×10⁹/L, N-terminal pro-brain natriuretic peptide (NT-proBNP) 1722 pg/mL (normal 5-128 pg/mL), D-dimer 1392 ng/mL (normal ≤500 ng/mL), C-reactive protein (CRP) 82.6 mg/dL (normal ≤8 mg/dL), high-sensitivity troponin T 26 ng/L (normal ≤15 ng/L) without significant change after 2 hours, aspartate aminotransferase 61 U/L (normal 8-48 U/L) with otherwise unremarkable liver function tests, and a normal renal function panel. A 12-lead ECG showed a paced rhythm without significant changes from prior readings. Chest radiograph revealed new patchy airspace opacities bilaterally.

Figure 1. Chest X-ray with bilateral patchy airspace opacities (left); CT chest with bilateral ground-glass opacities and crazy-paving pattern.
Due to an elevated D-dimer and progressive dyspnoea, a chest CT scan with pulmonary angiogram was obtained. The CT chest with pulmonary angiogram was negative for pulmonary embolism but demonstrated diffuse bilateral patchy ground-glass opacities, predominantly in the mid to lower lung zones, consistent with a crazy-paving pattern (figure 1). These findings were new compared with a scan obtained 12 months prior, which showed unremarkable pulmonary parenchyma. He was admitted to the inpatient medicine service for further workup under modified contact and droplet isolation (use of gown, gloves, surgical mask and eye shield). Influenza and respiratory syncytial virus PCR were negative. Due to the COVID-19 pandemic, his travel history and reports of community transmission within the USA, a nasopharyngeal swab for SARS-CoV-2 PCR was obtained, which returned negative. The pulmonary medicine team was consulted for consideration of bronchoscopy for further diagnostic workup. Due to high suspicion of infection, haemodynamic stability and immunocompetent status, testing with an extended respiratory pathogen panel and a repeat SARS-CoV-2 PCR was recommended. Both tests were negative 24 hours after the initial SARS-CoV-2 PCR. The case was reviewed with the institutional infection prevention and control team, who recommended repeating the SARS-CoV-2 PCR 48 hours from the initial test. This was subsequently obtained and was positive, consistent with COVID-19 infection. Importantly, due to high clinical suspicion, modified contact and droplet precautions were maintained while the SARS-CoV-2 PCR tests were pending.

DIFFERENTIAL DIAGNOSIS

The differential diagnosis of his clinical presentation was broad and included viral or atypical infection, including pneumocystis pneumonia; inflammatory/interstitial lung disease such as eosinophilic pneumonia, non-specific interstitial pneumonitis or hypersensitivity pneumonitis; and heart failure exacerbation. Heart failure exacerbation was less likely due to a stable echocardiogram, normal cardiac device interrogation a week prior to presentation, stable weight and the absence of volume overload on examination or imaging.

TREATMENT

The patient was subsequently transferred to a dedicated medicine service caring for patients positive for COVID-19. Due to reports of sudden acute decompensation in older patients with COVID-19, 10 he was observed in the hospital for a longer duration despite being haemodynamically stable.

OUTCOME AND FOLLOW-UP

His inflammatory markers down-trended (table 1), which correlated with symptomatic improvement, and he was discharged in stable condition after a total of 8 days of hospitalisation.

DISCUSSION

This case illustrates the importance of clinical suspicion and supplemental diagnostics, including chest CT imaging and laboratory data, in diagnosing COVID-19. The primary symptoms in patients hospitalised with COVID-19 infection are fever (88.7%), cough (67.8%), fatigue (38.1%), dyspnoea (18.7%), myalgia (14.9%) and chills (11.5%). Nausea or vomiting (5.0%) and diarrhoea (3.8%) were less common. Common radiological findings included ground-glass opacities (56.4%) and bilateral patchy shadowing (51.8%). No radiological or CT findings were found in 17.9% of patients with non-severe disease and in 2.9% with severe disease. On admission, lymphocytopenia (83.2%), thrombocytopenia (36.2%) and leucopenia (33.7%) were noted. Elevations in serum CRP, D-dimer, creatine kinase, alanine aminotransferase and aspartate aminotransferase were reported in some cases.
11 A recent study in China retrospectively reviewed the initial chest CT of patients with COVID-19 and found ground-glass opacity (61.3%), ground-glass opacity with consolidation (35.5%), crazy-paving pattern (25.8%), rounded opacities (25.8%) and air bronchograms (22.6%). 12 'Crazy paving' is a non-specific chest CT finding produced by the amplified density of lung parenchyma, manifesting as a ground-glass appearance superimposed on reticular thickening of the inter- and intralobular septa. 13 This can be seen in sarcoidosis, drug-induced pneumonitis, Pneumocystis jirovecii pneumonia, pulmonary alveolar proteinosis, interstitial lung disease, pulmonary adenocarcinoma, pulmonary haemorrhage, cryptogenic organising pneumonia and bacterial pneumonia. 13 To provide care for patients with 'crazy paving' on chest CT, a thorough investigation into the different causes should be undertaken, but COVID-19 should remain high on the differential due to its increasing prevalence. Nasopharyngeal swabs remain the primary confirmatory test for COVID-19. As suggested by the US Centers for Disease Control and Prevention, negative results should not be the sole determinant in ruling out COVID-19 infection. The optimum specimen type and peak viral levels have not been determined, and multiple specimens at different time points may be required to detect the virus. False negatives may also occur if a specimen is improperly collected or processed or if an inadequate number of organisms is present. Ultimately, the positive and negative predictive values of the test depend on the prevalence of the disease. 14 There have been three published case reports of initially negative COVID-19 PCR tests in patients subsequently determined to have COVID-19 infection. 15 16 Other sites of collection were recently tested in confirmed cases of COVID-19, with bronchoalveolar lavage specimens showing the highest positive rate (93%), followed by sputum (72%), nasal swabs (63%), faeces (29%), blood (1%) and urine (0%). 17 Even though bronchoalveolar lavage and sputum have higher positive rates, these should be avoided due to the possibility of aerosolisation of the virus and potential exposure of healthcare workers in the setting of limited healthcare resources. This patient had several laboratory abnormalities that have been associated with worse outcomes, including a serum neutrophil/lymphocyte ratio >3, 18 D-dimer >1000 ng/mL 19 and total lymphocyte count <0.8×10⁹/L (table 1). 19 It is important to draw labs on presentation and monitor them periodically throughout hospitalisation to project a patient's trajectory. When a patient is ultimately able to return home, quarantine is essential to prevent further spread of the virus. A test-based strategy is currently recommended to clear the patient from isolation, which involves fulfilling all criteria including resolution of fever without antipyretics, improvement in respiratory symptoms and two negative COVID-19 PCRs at least 24 hours apart. 20 This strategy may change based on the effectiveness of contact tracing and on transmission of COVID-19 prior to the onset of symptoms or isolation. 21 At present, quarantine and negative COVID-19 PCR confirmation remain the cornerstone of preventing transmission. Finally, this case raises important public health considerations, including how to allocate scarce critical care resources in a public health emergency. Previously straightforward conversations with patients regarding their resuscitation status will change in a public health emergency.
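As a small illustration of how the published risk thresholds cited above could be screened against admission labs, the Python sketch below encodes the three markers and applies them to this patient's presenting values. The function name and data layout are hypothetical, the neutrophil count needed for the neutrophil/lymphocyte ratio was not reported in this case (so that check is optional), and such a script is of course illustrative only, not a clinical tool.

```python
# Illustrative only: poor-prognosis markers reported in the literature cited
# above, applied to this patient's admission labs. Not a clinical tool.

def covid_risk_flags(labs: dict) -> list:
    flags = []
    if labs.get("d_dimer_ng_ml", 0) > 1000:
        flags.append("D-dimer > 1000 ng/mL")
    if labs.get("lymphocytes_1e9_l", float("inf")) < 0.8:
        flags.append("lymphocytes < 0.8 x 10^9/L")
    # The neutrophil/lymphocyte ratio needs a neutrophil count, which was
    # not reported for this patient; compute it only when available.
    if "neutrophils_1e9_l" in labs and labs.get("lymphocytes_1e9_l", 0) > 0:
        nlr = labs["neutrophils_1e9_l"] / labs["lymphocytes_1e9_l"]
        if nlr > 3:
            flags.append(f"neutrophil/lymphocyte ratio {nlr:.1f} > 3")
    return flags

admission = {"d_dimer_ng_ml": 1392, "lymphocytes_1e9_l": 0.76}
print(covid_risk_flags(admission))
# ['D-dimer > 1000 ng/mL', 'lymphocytes < 0.8 x 10^9/L']
```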
The act of performing cardiopulmonary resuscitation (CPR) on a patient with COVID-19 potentially increases viral transmission to healthcare providers and requires the use of scarce personal protective equipment that could be used for patients with a higher chance of recovery. It has been suggested that, during this COVID-19 pandemic and public health emergency, attending physicians may withhold CPR from patients with or without COVID-19 if they deem CPR not to be medically appropriate, even over the dissent of the patient or their representative. 'Medically appropriate' is a term that takes into account the risk to healthcare workers performing CPR, the patient's prognosis if CPR were successful, and whether the patient would remain a priority to continue receiving critical care resources following CPR. 22 Creation of an independent triage team, with allocation criteria for intensive care admission and ventilation based on the likelihood of long-term survival, may also become important. This type of framework provides the greatest amount of help to the greatest number of people. 23 Ultimately, if we are faced with a public health emergency and triage of scarce resources, we will need to employ effective crisis leadership skills. These skills include being adaptable, empathetic, prepared, resilient, transparent and trustworthy. Leadership during a crisis includes making decisions based not on reputation but on the values of the group, organisation and community that the provider represents. 24
2020-05-30T13:02:11.009Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "1f4608155170ae58ac5bf90db04de25c7340d976", "oa_license": null, "oa_url": "https://casereports.bmj.com/content/bmjcr/13/5/e236069.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "44a6e3be832e539506952a0cdd63c3aa7ebe7fa0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236594734
pes2o/s2orc
v3-fos-license
Analysis of learning difficulties in vertebrate zoology during the COVID-19 pandemic based on student learning styles

Article Information: Submitted 2020-12-28; Accepted 2021-05-02; Published 2021-05-04

ABSTRACT

Vertebrate zoology lectures during the COVID-19 pandemic brought about a transformation from face-to-face lecture systems to online self-learning. Students with various learning styles experience difficulties in studying vertebrate zoology courses. The purpose of this study was to analyze learning difficulties in the vertebrate zoology course during the COVID-19 pandemic, both overall and by student learning style. This research was conducted at Universitas PGRI Madiun and IKIP Budi Utomo in May 2020. The respondents were 140 students of the Biology Education Department. This is a qualitative study using a survey method. The instrument was a questionnaire on vertebrate zoology learning difficulties administered via Google Forms. Data were analyzed descriptively and qualitatively. The results showed that students' learning difficulties in general were related to the understanding of scientific names (71%) and the fulfillment of teaching materials (51%). The learning difficulties of students with a visual learning style consisted of understanding scientific names (69%) and the fulfillment of teaching materials (57%). The learning difficulties of students with an auditory learning style consisted of understanding scientific names (81%). The learning difficulties of students with a kinesthetic learning style consisted of understanding scientific names (67%). This study concludes that students' difficulties are dominated by the understanding of scientific names and the availability of teaching materials.

INTRODUCTION

Vertebrate zoology is a course covering the identification, classification and taxonomy of vertebrate animals. It is a compulsory subject for students of the biology education study program and is presented through delivery of material, practicums and field visits. The integrated field lectures provide opportunities for students to explore objects in a full and authentic way (Ibrahim et al., 2018). The material discussed in this course includes basic taxonomy, nomenclature and classification of vertebrates, pisces, amphibians, reptiles, aves, and mammals (Faizah et al., 2013). Students taking this course are expected to develop theoretical abilities and apply them in everyday life (Yuhanna & Retno, 2018). Students who take the vertebrate zoology course this year are also expected to be able to adapt to the lecture patterns imposed by the COVID-19 pandemic.

The COVID-19 pandemic, since March 2020, has brought significant changes to all aspects of life. It has had adverse effects on education, including learning disruptions, decreased access to education and research facilities, job losses and increased student debt (Onyema, 2020). The educational paradigm, originally implemented face-to-face, has been transformed into fully online learning. This has had a significant impact on the learning process and student learning outcomes in the vertebrate zoology course. Changes that are sudden and not matched by readiness in terms of infrastructure, mindset and study habits bring challenges for lecturers and students (Kurniawan et al., 2020; Yustina et al., 2020).
The learning process in the vertebrate zoology course has also undergone a transformation in practicum methods and procedures. Students are directed to self-study at home under a pattern of adapting to new habits. The problem that arises in learning vertebrate zoology during the COVID-19 pandemic is that students are neither ready for, nor used to, learning independently and completely online. Students' digital literacy skills are still low, especially in accessing activities, searching for digital literature and creating digital content as a form of task fulfillment and evaluation (Yustina et al., 2020). Students do not understand the vertebrate zoology material because the learning system is not optimal. In addition, anxiety about COVID-19 results in students being passive, less productive and less motivated in learning. Each student has their own weaknesses and challenges according to their abilities and learning styles.

Every student has different characteristics and learning styles for understanding material (Kaur et al., 2018). Student learning styles reflect the character that is built. Different learning styles show the fastest and best way for each individual to absorb information from outside himself (Papilaya & Huliselan, 2016). These differences in learning styles also affect students' responses to the transformation of the learning model during the pandemic. Learning styles based on the senses used in learning activities fall into three groups: visual, auditory and kinesthetic (Wahyuni, 2017). An understanding of student learning styles can help lecturers and educators to facilitate students in learning (Kholid et al., 2016).

The visual learning style is a style of learning by seeing, observing or looking at objects as learning resources (Rijal & Bachtiar, 2015; Wahyuni, 2017). The most dominant sense is sight. Learners of this type are characterized by a liking for neatness and orderliness; they tend to speak quickly, make careful long-term plans, and are very attentive to detail. Students with a visual learning style remember more easily by seeing the visualization of an object. Students of this type are not easily distracted by the conditions of the environment in which they are studying, because their main focus is observing objects (Ningrum et al., 2018).

The auditory learning style is a learning style dominated by the sense of hearing (Rijal & Bachtiar, 2015; Wahyuni, 2017). This learning style focuses on what is heard and obtained from discussion and brainstorming. Students with an auditory learning style are very comfortable with storytelling, reading aloud, and speaking. What makes them uncomfortable is disturbance or disruption of the learning environment (Wahyuni, 2017). Students with this learning style do not like writing or looking at real objects.

The third learning style is kinesthetic. The kinesthetic learning style absorbs information more easily by moving, doing, and touching something that provides information (Papilaya & Huliselan, 2016; Wahyuni, 2017). Learners of this style really like being involved in real processes, for example in product design practicums and field visits. Hands-on implementation is needed to support students' long-term memory.
The characteristics of this learning style are that students need real experiences, memorize while moving, and are dynamic, with lots of physical activity (Rijal & Bachtiar, 2015).

Learning difficulties in the vertebrate zoology course need to be described in detail according to students' learning styles. Each learning style involves different activities in learning a concept, so the types of difficulties and learning-support needs also differ (Nursasono et al., 2020). Lecturers must understand this so as not to generalize solutions for dealing with student learning difficulties. The urgency of this study is that an analysis of student learning difficulties is indispensable for determining the conditions and needs of students studying vertebrate zoology during the COVID-19 pandemic. Lecturers need to identify, record, and analyze learning difficulties as material for the development of lectures during the pandemic. Based on interviews and observations in the previous semester, the majority of student learning difficulties in the vertebrate zoology course concern the availability of teaching materials, mastery of the material, understanding scientific names, carrying out the practicum and writing practicum reports. The objectives of this study were 1) to analyze students' general learning difficulties in the vertebrate zoology course during the COVID-19 pandemic, and 2) to analyze students' learning difficulties in the vertebrate zoology course based on learning styles during the COVID-19 pandemic. These data can then serve as material for lecturers in improving the quality of vertebrate zoology lectures in the next semester.

RESEARCH METHODS

This is a qualitative study using a survey method, conducted at Universitas PGRI Madiun and IKIP Budi Utomo in May 2020. The respondents were 140 students of the Biology education study program. The survey method was carried out in stages: 1) formulating the research problems and determining the objectives of the survey; 2) determining concepts and hypotheses and reviewing the literature; 3) selecting the population and sample; 4) constructing the questionnaire and instruments; 5) collecting data using Google Forms; 6) processing the data; 7) analysis and conclusion. The instrument was a questionnaire on the learning difficulties of vertebrate zoology developed by the authors, who determined the aspects of learning difficulty from observation sheets and interviews in the previous semester. The questionnaire contains 7 questions related to students' learning difficulties while taking the vertebrate zoology course during the COVID-19 pandemic. The aspects measured are learning difficulties in terms of the availability of teaching materials, mastery of the material, understanding scientific names, implementing the practicum and writing practicum reports. An aspect is classified as a high difficulty level if its percentage exceeds 50%, and as a low (easy) difficulty level if it is below 50%. The collected data were analysed descriptively and qualitatively to produce relevant conclusions.

FINDINGS AND DISCUSSION

The study of vertebrate zoology during the COVID-19 pandemic was carried out online and independently, with students studying the material using digital sources. The data show that 140 respondents were involved in this study by filling out a questionnaire via Google Forms.
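A minimal Python sketch of the tallying step in stage 6 of the survey method is shown below, assuming a hypothetical list of per-respondent answers in which each student marks the aspects they found difficult. The aspect names and the data layout are illustrative (the actual questionnaire and raw responses are not reproduced here), but the 50% cut-off mirrors the classification rule stated above.

```python
from collections import Counter

ASPECTS = ["teaching materials", "mastery of material",
           "scientific names", "practicum", "practicum report"]

# Hypothetical responses: each respondent lists the aspects found difficult.
responses = [
    {"scientific names", "teaching materials"},
    {"scientific names"},
    {"practicum", "scientific names", "teaching materials"},
    # ... one entry per each of the 140 respondents
]

counts = Counter(aspect for r in responses for aspect in r)
n = len(responses)
for aspect in ASPECTS:
    pct = 100 * counts[aspect] / n
    level = "high" if pct > 50 else "low"
    print(f"{aspect}: {pct:.0f}% -> {level} difficulty")
```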
FINDINGS AND DISCUSSION

The study of vertebrate zoology during the COVID-19 pandemic was carried out online and independently, with students understanding the material using digital sources. In total, 140 respondents were involved in this study by filling out a questionnaire via Google Form. The researchers did not observe the vertebrate zoology learning activities directly. The composition of respondents based on learning styles is stated in Figure 1. Learning styles are related to the success of the learning process (Abidoye & Olorundare, 2020). Figure 1 shows that 67% of the students have a visual learning style, 22% auditory and 11% kinesthetic. These data show that most students have a tendency toward visual learning styles.

The analysis of student learning difficulties in taking vertebrate zoology courses during a pandemic aims to evaluate the lectures. These data are also used to determine follow-up efforts to improve the learning process. The survey data on student learning difficulties cover five aspects, namely teaching materials, mastery of the material, understanding scientific names, implementing practicums and making practicum reports. The ability of students to learn is unique. The basic considerations for determining these five aspects were reference studies, unstructured interviews and discussions with students. The way students understand the material also depends on their learning tendencies. The learning process involves the interaction between the human senses, learning resources and the surrounding environment. Learning styles also have an effect on students' cognitive abilities (Kaur et al., 2018; Rijal & Bachtiar, 2015). Figure 1. Composition of the Respondents' Learning Styles.

The percentages of learning difficulties for vertebrate zoology are shown in Figure 2. Based on the graph in Figure 2, the highest levels of difficulty among all respondents are the understanding of scientific names at 71% and teaching materials at 51%. The difficulty levels of the material aspect (16%), practicum (25%) and practicum report (21%) are not considered high because they are below 50%. The scientific name is an integral component of the discussion of vertebrate taxonomy. The binomial system of nomenclature was coined by taxonomists to standardize perceptions around the world. Memorizing and understanding scientific names is difficult for some students. In addition to the use of foreign terms that are difficult for students to understand, scientific names also have their own rules (Kurniawan et al., 2015). Understanding scientific names is very necessary to maintain scientific communication when describing a species. These findings have implications for lecturers to make formulations that make it easier for students to learn the scientific names of vertebrate animals. The second difficulty is in the aspect of the availability of teaching materials. Teaching materials are an important component to support learning. Students need teaching materials as learning resources that contain the materials and learning outcomes of the vertebrate zoology course. The fulfillment of current teaching materials is very necessary for understanding concepts and increasing student knowledge (Yuhanna & Retno, 2018). Figure 2. Percentage of Student Learning Difficulties in the Vertebrate Zoology Course during the COVID-19 Pandemic.

Students with a visual learning style have a distinctive character. Visual learning styles emphasize learning abilities through seeing and observing learning resources. In obtaining information, these students tend to be interested in looking at writing, pictures, posters, diagrams, graphics, and so on (Kanadlı, 2016). The learning difficulties of students with a visual learning style in the vertebrate zoology course are presented in Figure 3.
The highest difficulty levels for students with a visual learning style are learning scientific names at 69% and teaching materials at 57%. The percentages for scientific names and teaching materials are high, dominating at more than 50% of students. The practicum aspect is at 27%, the practicum report at 23% and mastery of the material at 14%, which shows that their difficulty level is low. The scientific name is something that students must understand when taking the vertebrate zoology course (Faizah et al., 2013; Kurniawan et al., 2015). Students with a visual learning style need to read scientific names in order to remember and understand them. If a scientific name is only conveyed briefly, it is not surprising that students have difficulty. Scientific names can also be understood by recording them and seeing the visual manifestation of the species in more detail (Kurniawan et al., 2015; Yuhanna & Retno, 2018). Lecturers must respond to this learning difficulty by formulating integrated scientific name learning in teaching materials that are read by students with a visual learning style.

The availability of teaching materials during a pandemic is important to support independent learning and student competence. Students with visual learning styles usually learn optimally by observing objects directly. Students with this learning style tend to need to see learning resources in real terms. This type of student really likes to read material and observe existing learning resources. The availability of teaching materials is therefore needed for students with a visual learning style. Teaching materials serve to help students remember concepts, shapes, colors and artistic understanding. Teaching materials are also used for a deeper understanding of concepts by reading them over and over again. The more frequently students read, the more the material enters their long-term memory. The need for teaching materials requires a response from the lecturer in providing independent learning facilities for students. The development of teaching materials can also be directed at local potential.

Auditory learning styles have a tendency to use the sense of hearing to absorb information. Students with auditory learning styles are more comfortable listening to explanations from other people to learn something. Students of the auditory learning type also prefer to talk, discuss and explain in detail the concepts discussed (Fetalvero, 2017). Figure 4 shows the percentage of learning difficulties among students with auditory learning styles. The highest level of difficulty is in understanding scientific names, at 81%. The other four difficulties consisted of teaching materials at 39%, practicum at 19%, practicum reports at 19% and mastery of the material at 16%. These four types are not considered real difficulties, because their percentages are below 50%. Based on the data in Figure 4, the main focus is directed at the difficulty of understanding scientific names. This percentage indicates that for most students with auditory learning styles the existing method of reading scientific names is not suitable. Students with auditory learning styles need innovation in learning scientific names. So far, scientific names are not specifically presented in particular chapters but are implied in the material on each class of vertebrates, consisting of pisces, amphibians, reptiles, mammals and aves.
Students with auditory learning styles do have weaknesses in finding, understanding and studying information implied in a learning resource. Students of the auditory learning type rely on listening and discussion activities to remember concepts, including scientific names. Relevant methods for students with auditory learning styles are cooperative learning, discussions, presentations and group assignments (Harie, 2016). In addition, scientific names can also be transformed into digital content that students can listen to. The use of multimedia, web and Android-based information technology makes it very possible to support the understanding of scientific names for students with auditory learning styles (Kurniawan et al., 2015).

Kinesthetic learning styles are characterized by the process of absorbing information through physical movements (Permana, 2016). Students with this learning style really like movement and physical activity when understanding a concept. These students make many movements and actively use the available media and learning resources. Practical learning is very popular with students with this learning style. Figure 5 shows that among students with a kinesthetic learning style, the greatest difficulty experienced is understanding scientific names, at 67%. The other difficulties are, successively, teaching materials at 44%, material at 28%, practicum at 22% and practicum reports at 17%. The four aspects other than scientific names are not counted as learning difficulties, because their percentages are below 50%. Students with kinesthetic learning styles do not experience difficulties in practicums or in making practicum reports, because students with this learning style are fundamentally well suited to practicum activities. This learning style needs the support of relevant practicum instructions (Karmila & Khaerati, 2016; Permana, 2016). Students with this learning style were able to adapt during the pandemic to the use of virtual laboratories and independent practicums. The independent practicum is an alternative for fulfilling science process skills competencies during the COVID-19 pandemic (Putri et al., 2020).

Each student with a different learning style has their own difficulties. As for students with auditory and visual learning styles, Latin names are also a scourge for students with kinesthetic learning styles. Latin names are a problem for many kinds of students, because Latin names cannot be interpreted as a movement. Other field data show that lecturers in the vertebrate zoology course present the scientific names in writing in books and teaching materials. This makes it difficult for students with a kinesthetic learning style to understand the Latin names of many animals. Understanding Latin names in vertebrate zoology should not be limited to exposure to the material, but should also be reflected in the learning activities that must be carried out. In this way, students can explore learning resources by building their knowledge through direct interaction with their experiences and environment.

CONCLUSION

Based on the data presented, it can be concluded that the learning difficulties of students in general in the vertebrate zoology course during the COVID-19 pandemic were related to the understanding of scientific names (71%) and the fulfillment of teaching materials (51%).
The learning difficulties of students with a visual learning style consisted of understanding scientific names (69%) and the fulfillment of teaching materials (57%). The learning difficulties of students with the auditory learning style consisted of understanding scientific names (81%), and the learning difficulties of students with the kinesthetic learning style consisted of understanding scientific names (67%). The impact of these results is that they provide a basis for determining the strategy for developing vertebrate zoology lectures and for meeting the learning needs of students based on the analysis of learning difficulties. Further research is suggested to develop solutions to these student learning difficulties.
Conversational Recommender System for Impromptu Tourists to Recommend Tourist Routes Using Haversine Formula

− In this paper, we use two terms to describe tourists, i.e. planned tourists and impromptu tourists. Planned tourists are tourists who intentionally travel. Meanwhile, impromptu tourists are those who accidentally become tourists because they are in a new area for an activity. Previously, tourists who were going to travel usually relied on the services of travel agents to get recommendations for tourist attractions; impromptu tourists, in contrast, do not do this beforehand. Impromptu tourists sometimes do not have much time to carry out tourism activities, so they only visit the tourist attractions closest to their location. Lack of experience in a new area and reliance only on information from the internet make it difficult for tourists to find tourist attractions that suit their needs, as well as to plan travel itineraries. As our method, we use the Haversine Formula to calculate distances. The result of this study is a web application that recommends tourist attractions and routes to several tourist attractions that can be visited at one time. Based on the evaluation of the time complexity of the route search, linear complexity is obtained, which shows good performance under optimal conditions.

INTRODUCTION

Tourism is a sector that can be developed as a source of regional income. On February 1, 2012, the Central Statistics Agency (BPS) report No. 09/02/TH. XV showed that the total number of foreign visitors was around 7.65 million people. Indonesia is recorded as receiving foreign exchange in the range of 8.6 billion US dollars from tourism. This contributes to the increase in economic growth and Gross Domestic Product (GDP). The government carries out effective promotion in the tourism sector to encourage regional income [1], [2]. The target of government tourism promotion is tourists. Tourists are people who travel to visit certain places for recreation, self-development, or learning about the uniqueness of tourist attractions for a temporary period. Travel activities cannot be separated from human life [3], [4]; they have become routine things done by many people [5], [6]. Promoting tourist attractions by providing complete facilities is an attraction for tourists to visit a tourist spot. Planning and organizing a tourist place is very important for increasing development in a country, and especially in the city of Bandung. Since 1941, the city of Bandung has held the title of the city with the most tourist destinations in Indonesia. From time to time, the development of tourism in the city of Bandung has always presented interesting attractions, with the latest tourist attractions appearing every year; this is evidenced by the many categories of tourism, such as natural attractions, shopping tours, cultural tours and religious tourism sites. Moreover, the city of Bandung, located in the highlands with a mountainous atmosphere, feels comfortable and cool to visit [7]. Adi Kurniawan et al. [8] stated that there are two kinds of people who travel, i.e. planned tourists and impromptu tourists. Planned tourists are tourists who deliberately travel and have already determined when to go and the destinations to be visited.
Meanwhile, impromptu tourists are those who accidentally become tourists because they did not previously plan to travel. This happens to those who are attending an event somewhere and then have free time, so that they want to visit tourist attractions around the event. Information about tourist attractions is an important part of making a tour. Previously, tourists who were going on a tour usually relied on the services of a travel agent to obtain information related to tourist objects; travel agents would then recommend tour packages for the tourists to visit [9]. However, for impromptu tourists this is not planned in advance, but rather an activity that they suddenly want to do. This raises a problem, i.e. that no one has recommended tourist trips for impromptu tourists. Currently, information about tourist attractions is very easy to obtain in various social media and print media. However, much of the available information on tourist attractions is based solely on popularity, so travelers will only visit tourist attractions that are trending, rather than ones that are interesting and meet their demands [10], [11]. In addition, impromptu tourists who take a tour do not have much time, so they usually visit the places closest to their current location. Lack of experience in a new area and reliance only on information from the internet make it difficult for tourists to find tourist attractions that suit their needs, as well as to plan travel itineraries [6]. One solution to this problem is a system that can recommend tourist attractions in terms of distance while considering tourist preferences. A recommender system is a system that can recommend an item. Recommendations really help someone in making a choice, for example recommendations regarding tourist attractions to be visited [4]. In this study, we developed a conversational recommender system (CRS) that can interact with users to obtain their preferences for a tour. This system can recommend the tourist spots nearest to the user's location using the Haversine Formula method. The Haversine formula is used to calculate the distance between two points on the earth based on the length of a straight line (longitude) without ignoring the curvature (latitude) of the earth [12]. As a result, the system can calculate the user's distance to tourist attractions, and it can also display detailed tourist information, mileage and routes to several tourist attractions that can be visited at one time. This route can guide tourists to tourist spots near their location. This research aims to help impromptu tourists get recommendations for tourist attractions in the city of Bandung by considering the preferences and needs of the tourists. We developed a conversational recommender system (CRS) that uses a navigation-by-asking (NBA) strategy to find out the needs of the user [9]. As our method, we implement the Haversine Formula algorithm to measure the user's distance from the locations of tourist attractions so as to produce a list of the closest tourist attractions and travel routes from the user's location.

System Design

In this study, we use CRS interaction with several dialogue questions to the user regarding user data and preferences. The questions consist of 7 input nodes, namely: religion, gender, age, location, weather, motivation, and activity. Activity is the main node that determines the recommendations for tourist attractions to be visited.
There are six activities that serve as tourist objectives, namely: shopping, culture, worship, sports, picnics and recreation. The selected activity is determined based on the motivation of the user. The forms of motivation are: having fun, learning new things, health, looking for activities and looking for goods. Furthermore, the system searches for the nearest tourist attractions based on the user's location with the Haversine Formula method; we set the maximum distance to 15 km from the user's location point to the tourist attractions. The system then displays a list of nearby tourist attractions based on the tourist activity preferences and user location. Finally, the user can choose which tourist attractions to visit and the system displays the travel route to the selected tourist spots. Figure 1 is a block diagram of the designed system. The Haversine-based system consists of 3 blocks, as follows: a. The input block consists of the user's location, set as the starting point, and the location of the Bandung city tourist destination as the second point, measured using latitude and longitude values taken from Google Maps. b. The process block is the application system with the previously built Bandung city destination dataset, which calculates mileage using the Haversine method. c. The output block is a recommended list of the tourist attractions closest to the user, with a maximum distance of 15 km from the user's position, displaying the distance traveled and travel routes to the tourist attractions.

Dataset

In this study, we use a dataset from research conducted in 2019 [6], with data on tourist attractions in the city of Bandung, which is the focus of this research. The details of the data include the name of a tourist spot, address, telephone number, latitude, longitude, category, opening hours and closing hours. The latitude and longitude values of a tourist spot on Google Maps are used in the Haversine Formula calculation to obtain the distance between two points, i.e. the user point and the tourist point. Some of the tourist attraction data from the dataset used are shown in Table 1.

Conversational Recommender System (CRS)

The preferences of users are very diverse, and it is sometimes difficult to understand the intent of a user when using a service. By using a conversational recommender system (CRS), it is hoped that such needs will be fulfilled; this system develops repeated question dialogues to find out the needs of its users. In this study, the CRS interacts with the user to obtain user data and preferences about what activities the user wants to do while traveling. In this way, the system can filter the information and recommend places according to user preferences. The conversational interaction step is carried out by asking the user several questions. The system displays a dialogue of questions related to the user's data, asking about the motivation and tourism activities to be carried out. This system interaction model uses a navigation-by-asking (NBA) approach in the CRS [13], [14]. The questions asked relate to the tourist attractions that will be recommended later.
a. Questions about motivation. In this study, we classify four motivations that drive a person to carry out tourism activities, that is: (1) having fun (related to shopping and recreation); (2) learning new things (related to culture and worship); (3) health (related to sports); (4) looking for activities (related to shopping, culture, sports, recreation, worship and picnics). The motivational input filters what tourist activities the user will do. b. Questions about tourist activities. The priority level of tourism activities has a value from one to six; the higher the value of a priority, the more it is used as the first priority for weighting preferences. Below is the form of the activity tables.

Haversine Formula Method

The Haversine Formula method is derived from calculations of latitude and longitude and is used to estimate distances between coordinates on the planetary surface [14], [15], [16]. The Haversine formula is useful in navigation because it determines the distance between two points on the earth's surface based on GPS coordinates, assuming the earth is a sphere with a given radius, using two spherical coordinates, i.e. latitude and longitude, with lon1, lat1 and lon2, lat2 [16]. The Haversine formula is as follows: d = 2R · arcsin(√(sin²((lat2 − lat1)/2) + cos(lat1) · cos(lat2) · sin²((lon2 − lon1)/2))), where R is the radius of the earth and d is the resulting distance. The calculation process with the Haversine method that produces the distance from the starting point to the tourist destination uses the above formula. Table 8 shows the result of the Haversine calculation with the origin set to the latitude and longitude values of the Bandung Horizon hotel.

Implementation Results

The result of this research is a web application that can recommend tourist attractions in the city of Bandung by considering the preferences and the distance of the user's location to the tourist attractions, using the Haversine Formula as the method for measuring distance. As a result, the system can recommend the tourist attractions in the city of Bandung closest to the user's location. The following example walks through an experiment from the moment the user enters the website until a tourist spot is recommended. a. Main Page View. Figure 2 shows the main page when the user opens the web application; if the user wants to find travel recommendations based on preferences, they can start by clicking the explore button. b. Data and preferences. Figure 3 shows that after clicking the explore button, the user is directed to fill in several questions from the system related to religion, gender and age. Furthermore, in Figures 4 and 5, the user inputs the motivation and tourist activities they want to carry out when traveling.

A. Requirements Validity Test

This test uses the Requirement Traceability Matrix (RTM) with Black Box Testing, which focuses on testing system functionality. Table 9 shows that all user needs have been met and the system functions run according to user needs [17], [18]. The algorithm can be seen in Table 9. Algorithm 1 is carried out when the user clicks on several tourist attractions from the system recommendations; the system then calculates the distance and looks for routes for the tourist trips. The calculation depends on the number of tourist attractions that the user selects, and the execution time is measured in milliseconds. The results of the calculations can be seen in Table 10. Figure 9 shows a graph of the trend of the time complexity and efficiency testing of the algorithm.
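The route search that is timed here reduces, at its core, to a linear scan that computes the Haversine distance from the user to every candidate spot and keeps those within the 15 km cut-off. Below is a minimal sketch in Python, assuming hypothetical dictionary field names for the dataset records:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius assumed for the haversine relation
MAX_DISTANCE_KM = 15.0    # cut-off used by the recommender

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def nearby_spots(user_lat, user_lon, spots):
    """Linear scan over the dataset: score, filter by the cut-off, sort nearest first."""
    scored = [(haversine_km(user_lat, user_lon, s["lat"], s["lon"]), s) for s in spots]
    reachable = [pair for pair in scored if pair[0] <= MAX_DISTANCE_KM]
    return sorted(reachable, key=lambda pair: pair[0])
```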
The results of this test show good performance under optimal conditions; the growth of the data indicates that the complexity is linear, i.e. the running time grows proportionally with the size of the data.

CONCLUSION

Based on the system design, implementation and system testing that we have carried out, it can be said that the system can recommend tourist attractions for impromptu tourists in Bandung City. Tourist attractions are recommended based on user preferences by developing a CRS that can interact with users, using the Haversine formula, which plays the role of measuring the distance from the user's location to the tourist attractions. The system recommends several nearby tourist attractions and provides travel routes to several selected tourist attractions that can be visited at one time. Based on testing using the RTM, it was found that the functionality of the system meets user needs.
Size and resin fractionations of dissolved organic matter and characteristics of disinfection by-product precursors in a pilot-scale constructed wetland

Controlling the formation of disinfection by-products (DBPs) is a major issue in the drinking water industry, and understanding the characteristics of DBP precursors in treatment processes for micro-polluted raw water is key to improving water quality. In this study, a sampling program was undertaken to investigate the fate of dissolved organic matter (DOM) and the characteristics of DBP precursors in a pilot constructed wetland imitating the Yanlong Lake ecological project. Using XAD resin adsorption and ultrafiltration techniques, the dissolved organic carbon, UV254, and DBP formation potential (DBPFP) were measured in different DOM fractions in raw water and wetland effluents. After the constructed wetland treatment, the low molecular weight fraction (<3 kDa) of DOM and DBPFP generally showed a decreasing trend along the water path, while the high molecular weight fraction (>3 kDa) of DOM increased. The specific DBPFP (SDBPFP) was much higher in the <1 kDa fraction than in the other fractions. Although the hydrophobic fraction of DOM was the most abundant in all stages of the wetland treatment, the SDBPFP of the hydrophilic fraction was higher than that of the hydrophobic fraction. Furthermore, compared with raw water, the DOC, UV254 and DBPFP in the treated wetland effluents increased; however, all of the chemical DOM fractions exhibited decreased SDBPFP, in accordance with a decrease in the specific ultraviolet absorbance during wetland treatment. These conclusions indicate that the DOM produced by the wetland system may generate DBPs less readily compared with the DOM of raw water.

INTRODUCTION

In China, due to the rapid development of industry and agriculture, approximately 60% of urban drinking water sources are polluted to varying degrees (Wang & Wang). To ensure the safety and quality of drinking water and respond to sudden pollution incidents, the construction of storage reservoirs incorporating artificial wetlands and other ecological mitigation measures has become an economically favorable alternative to energy-intensive engineered treatment plant approaches to improving the quality of raw water (Haynes). It has also been demonstrated that aromatic compounds react easily with chlorine to form DBPs such as THMs and HAAs. Yang et al. used a vertical subsurface flow constructed wetland and surface wetland tandem system to treat micro-polluted raw water in the Yangtze River and found that the THM formation potential (THMFP) of the system effluent increased by 20.52% compared with the influent water. DOM in the roots and leaves of plants also has an effect on DBP production, and soluble microbial products and aromatic proteinaceous substances (polyphenols) can enhance the THMFP (Wei et al.). In addition, aquatic animals often breed prolifically in ecological engineering sites, and the amino acids, proteins, and fats contained in their metabolites can also act as precursors for DBPs (Sun et al.). However, although previous studies have indicated that the DOM produced by plants and animals in wetlands may act as precursors for DBPs, field investigations and characterization of DOM in constructed wetlands have received only limited research attention.
DOM from wetlands is a heterogeneous mixture of complex organic materials including humic substances, hydrophilic acids, proteins, lipids, carboxylic acids, polysaccharides, amino acids, and hydrocarbons (Leenheer & Croué; Cheng et al.). It is impossible to identify and investigate these compounds individually; therefore, the preferred method for evaluating DBP precursors is classification of the DOM in water bodies based on a given characteristic and measurement of the reaction behavior of the DOM with this characteristic. Resin adsorption (RA) and ultrafiltration have been widely and successfully applied in characterizing the chemical and physical properties of DOM from natural waters (Wei et al.). In particular, the XAD-8 and XAD-4 resins have been widely used to separate DOM into hydrophobic (HPO) and hydrophilic (HPI) components (Leenheer). The purpose of this work is to evaluate the behaviors and characteristics of DBP precursors in three types of wetlands that form an ecological engineering system for a water source with micro-polluted raw water, by analyzing the molecular weight distribution and chemical fractionation of DOM. Dissolved organic carbon (DOC) and UV254 values were measured to investigate the variation patterns of the different molecular weight and chemical fractions. Additional experiments were conducted to evaluate the DBP formation potential (DBPFP) of each weight and chemical fraction in raw water and wetland effluent.

Constructed wetlands

Experiments were carried out in three different types of constructed wetlands: a surface wetland, a submerged plant pond, and an ecological pond, hereafter named wetlands A, B, and C. The constructed wetlands (Figure 1) were located within a pilot plant on one of the inflow streams of a drinking water reservoir in Yancheng City, Jiangsu Province, China, designed to imitate the Yanlong Lake ecological project (a replicated field-scale study). In this pilot plant, raw water from the Viper River is driven into a high water tank by a lifting pump and then passed through wetlands A, B, and C successively under gravity. A schematic diagram of the composite constructed wetland is presented in Figure 1. The plants listed in Table 2 were transplanted from a local natural field to the wetlands. In addition, a non-classical biological manipulation technique was adopted in wetland C: silver carp and bighead carp were added to control the algae density in the raw water and prevent eruptions of cyanobacteria, which are detrimental to water quality. The total density of carp was 30 g m⁻³, and the quantity ratio of silver carp to bighead carp was 2:1. Some of the wetland parameters and installation compositions are listed in Table 2. After planting and enriching with fish, the influent flow was gradually increased from 0.3 to 0.6 m³ day⁻¹ over three months, and the plants were allowed to grow freely. When the plants were well established and the constructed wetlands had stabilized, the investigation commenced. The influent flow to the wetland system was set to 0.6 m³ day⁻¹, corresponding to a theoretical hydraulic retention time of 21 days, based on the actual hydraulic retention time of Yanlong Lake of approximately 21 days and 10 hours (Xu et al.). There were four sampling points at the entrance and exit of each wetland, as shown in Figure 1. All data in this study were collected in April 2018. After isolation of the HPO fraction, the effluent from the XAD-8 resin containing HPI acids, bases, and neutral compounds was then passed through the XAD-4 resin.
The TPI fraction was obtained by eluting the XAD-4 resin with the same eluent used to wash the XAD-8 resin. The soluble organics passing through the XAD-4 resin formed the HPI fraction, containing HPI bases and neutral compounds (not retained on either the XAD-8 or XAD-4 resin). All separated water samples were adjusted to pH 7.0 (±0.2) using HCl or NaOH.

Procedure of DOM fractionation

Another set of water samples was fractionated into different molecular weight classes using a stirred ultrafiltration cell (Millipore, 8400) with YM disc membranes (Amicon; nominal molecular weight cut-offs of 0.5, 1, 3 and 10 kDa). All membranes were rinsed with ultrapure water to ensure a residual DOC concentration of <0.2 mg·L⁻¹. The DOC mass balance of our size fractionation was controlled to within (100 ± 5)%, following the cleaning procedure used by Guo & Santschi and Wei et al. A schematic of the specific separation steps is presented in Figure 2(b). Then, DOC, UV254, and DBPFP were measured for all fractionated samples.

DOM and DBPFP measurements

A total of five water samples were collected in precleaned 250 mL glass bottles at each sampling location on the sampling date in April 2018. The samples were then mixed in a larger bottle and cooled immediately in an ice cooler. After filtration through a 0.45 μm cellulose membrane filter, the raw water and wetland effluent samples were stored in a dark room at 4 °C to minimize changes in the constituents. When ready for analysis, the samples were processed following the methods summarized in Table 3.

RESULTS AND DISCUSSION

Fraction distributions of the DOM

The molecular weight distributions of DOM in raw water and wetland effluents are presented in Figure 3(a). The low molecular weight fraction (<3 kDa) of DOM generally showed a decreasing trend along the water path, while the high molecular weight fraction (>3 kDa) increased. It has been reported that the aromatic structure present in HPO substances is the main precursor of THMs (Rook). Therefore, the SUVA results showed that the wetland treatments increased the quantity of aromatic materials but decreased the aromaticity, which may reduce the DBP level produced per unit mass of carbon.

Size and chemical fractions of DBPFP

The THMFP and HAA formation potential (HAAFP) results indicated that the DBPFP of the DOM generated in the wetlands is smaller than that of the raw water. In other words, our findings implied that the DOM produced by the constructed wetland system may not readily generate DBPs during chlorination.
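The DOC-normalized quantities discussed above are simple ratios. Below is a minimal sketch with purely illustrative numbers, assuming SUVA is expressed as UV254/DOC × 100 (L·mg⁻¹·m⁻¹) and SDBPFP as DBPFP/DOC (µg·mg⁻¹):

```python
def suva(uv254_per_cm, doc_mg_per_l):
    """Specific UV absorbance, L·mg^-1·m^-1 (UV254 measured in a 1 cm cell)."""
    return uv254_per_cm / doc_mg_per_l * 100

def sdbpfp(dbpfp_ug_per_l, doc_mg_per_l):
    """Specific DBP formation potential, µg of DBPs formed per mg of carbon."""
    return dbpfp_ug_per_l / doc_mg_per_l

# Hypothetical raw-water values, for illustration only.
print(suva(0.085, 3.2))    # -> 2.66 L·mg^-1·m^-1
print(sdbpfp(110.0, 3.2))  # -> 34.4 µg·mg^-1
```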
Table Organization Optimization in Schools for Preserving the Social Distance during the COVID-19 Pandemic

Featured Application: The development of a methodology for maximizing social distancing by increasing the distance among school desks in classrooms during the coronavirus pandemic through a Genetic Algorithm optimization.

Abstract: The COVID-19 pandemic has posed a challenge for education. The school closures during the initial coronavirus outbreak, aimed at reducing infections, have had negative effects on children, such as the interruption of their normal social relationships or of their necessary physical activity. Thus, most countries worldwide have considered the reopening of schools a priority, while imposing some rules for keeping school lessons safe, such as social distancing, wearing facemasks, hydroalcoholic gels or reducing the capacity of indoor rooms. In Spain, the government has fixed a minimum distance of 1.5 m among the students' desks for preserving social distancing, and schools have followed orthogonal and triangular mesh patterns to achieve valid table dispositions that meet the requirements. However, these patterns may not attain the best results for maximizing the distances among the tables. Therefore, in this paper, we introduce, for the first time in the authors' best knowledge, a Genetic Algorithm (GA) for optimizing the disposition of the tables at schools during the coronavirus pandemic. We apply this GA in two real-application scenarios in which we find table dispositions that increase the distances among the tables by 19.33% and 10%, respectively, with regard to the regular government patterns in these classrooms, thus fulfilling the main objectives of the paper.

Introduction

COVID-19 was declared a pandemic public health menace on 11 March 2020 by the World Health Organization (WHO) [1]. This meant that this severe acute respiratory syndrome (SARS-CoV-2) became a challenge for humanity: dealing with a virus with the potential to condition our normal coexistence. In this pandemic, more than 190 countries have been affected by the outbreak and more than 1.23 M deaths have accumulated globally according to Johns Hopkins University as of 5 November 2020 [2]. People with COVID-19 commonly show fever, cough, musculoskeletal symptoms, gastrointestinal symptoms, dyspnea or anosmia/dysgeusia, with the most severe infections even causing death [3][4][5]. These symptoms can persist for months after overcoming the infection, and novel manifestations such as hair loss or cutaneous spots have been reported [6]. Symptomatic patients have been treated in hospitals or isolated at home with their close contacts in order to control the propagation of COVID-19 [7]. However, a challenge arises with undiagnosed asymptomatic coronavirus patients, who propagate the virus without restricting their mobility [8]. Even symptomatic patients have an initial asymptomatic phase, since alveolar macrophages, which are likely the first immune cells to encounter SARS-CoV-2 during the infection, are incapable of sensing the virus in the first stages [9]. As a consequence, countries around the world have opted for severe lockdowns, frontier restrictions or contact tracing for controlling the virus propagation [10], but causing severe impacts on the economy [11] or other health problems such as mental health issues or sleep disturbances [12,13].
In this context, education has been severely affected by the effects of the lockdowns, which have forced new educational models in which online learning has taken on special relevance [14]. These efforts have partially mitigated the effects of school closure and home confinement. However, especially for young children, evidence from studies performed during school holidays has traditionally shown that children during these periods are physically less active, modify their sleep patterns, increase their screen time or follow less favorable diets [15]. These effects have been even more pronounced during this outbreak due to the impossibility for children of socializing with their classmates or playing outside their homes [16]. Therefore, the reopening of schools has been a priority for many governments worldwide [17]. Furthermore, schools are called to remain open during the second national lockdowns in some countries such as France, the United Kingdom, Germany and Italy [18]. However, some studies argue that a strict control of precautionary measures must be maintained in order to control possible COVID-19 outbreaks at school [19]. Therefore, several restrictions and rules have been imposed for preserving a safe return to school, such as the imposition of social distancing, wearing facemasks, reducing the number of students in the classrooms or providing hydroalcoholic gels for hand cleaning in every classroom. In Spain, schools reopened in September facing all these measures and fixing a social distance of 1.5 m for reducing the exposure of the children to the virus in the school centers. This led to reorganizing the student desks to guarantee the social distancing inside the classrooms. However, this is a complex problem to address, which forces the reduction of the number of students in traditional classrooms. Many schools tried to find the most appropriate distribution of the tables for fulfilling the government social distancing requirements, but they found problems in particularly irregular classrooms, where regular patterns in the table disposition (i.e., rectangular or zig-zag grids) do not reach the best achievable results. Moreover, the beneficial effect of social distancing for reducing the contagion probability in the COVID-19 pandemic [20] recommends finding the table organization that maximizes the distance among the student desks, even in classrooms in which a valid table disposition can be more easily attained. However, finding the optimal table disposition is a combinatorial problem, similar to the technological Node Location Problem (NLP) [21], which has been assigned as NP-Hard [22,23]. Therefore, a heuristic solution to this problem is recommended. Simulated annealing [24], the firefly algorithm [25] or the elephant herding optimization [26] have traditionally been used for the NLP, even though Genetic Algorithms (GA) [27][28][29] and memetic algorithms [30] are the most recommended for this problem for their trade-off between diversification and intensification of the space of solutions, which is essential for finding optimal results. As a consequence, in this paper, we propose, for the first time in the authors' best knowledge, a GA optimization for the Table Location Problem (TLP) for finding optimal table dispositions at school that maximize the distance among the student desks, increasing the social distancing in the classrooms and thus reducing the children's exposure to COVID-19 in their daily lessons.
The remainder of the paper is organized as follows: we analyze the TLP and its similarities to the NLP together with a complexity analysis in Section 2, the problem definition and the real scenarios in which we applied our algorithm are described in Section 3, the GA for the TLP is introduced in Section 4, the results achieved are presented in Section 5, Section 6 presents the discussion for the TLP, and we conclude the manuscript in Section 7.

Analysis and Complexity Studies on the Table Location Problem

The TLP entails the definition of the two-dimensional Cartesian coordinates for the location of each student in the classroom. This determines the associated table distribution in the plane, allowing the lessons to be followed while keeping the necessary social distancing, thus reducing the probability of the students being infected by the coronavirus during the pandemic. Therefore, the TLP is a combinatorial problem in which the number of possible table distributions (P) grows factorially [30]: P = n_PLT! / (n_s! · (n_PLT − n_s)!), where n_PLT is the number of possible locations where a table can be placed in the optimization process and n_s is the number of students. This analysis shows the dimensions of the space of solutions, whose size increases when considering a larger number of students and when a larger number of possible table locations is defined. Since the number of students is commonly predetermined, an adequate selection of n_PLT must be performed in order to achieve optimal results. This number should be, on the one hand, high enough to grant sufficient resolution in the table location and, on the other hand, reduced enough to perform time-effective optimizations [29]. In this paper, we have tested different configurations, selecting the final hyperparameter based on the achievement of only slight modifications in the principal statistical variables when reducing the spatial resolution. Each one of these considerations makes the TLP factorial in complexity [31] and very similar to the technological NLP, which has been assigned as NP-Hard [22,23]. As a consequence, finding an optimal solution for the TLP recommends a heuristic approach, similarly to the NLP, and the TLP can also be categorized as NP-Hard. Many different metaheuristics have been used for the NLP, which could be applied to the TLP, such as simulated annealing [24], the firefly algorithm [25], the elephant herding optimization [26], the dolphin swarm [32], the bat algorithm [33], the grey wolf optimization [34], the bacterial foraging algorithm [35] or diversified local search [36]. However, GA have been the most extended in the literature [27][28][29][37] since they provide an optimal balance between diversification and intensification of the space of solutions for the NLP [38]. In addition, their flexibility and adaptability to any problem make the GA the best candidate to address the TLP.
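A quick numeric illustration of this growth, assuming the combinatorial form given above and hypothetical grid resolutions for the candidate table locations:

```python
from math import comb

# Hypothetical numbers of candidate table locations (n_PLT) for the two
# classrooms (16 and 21 students) at increasing spatial resolutions.
for n_plt, n_s in [(100, 16), (400, 16), (400, 21)]:
    print(f"n_PLT = {n_plt:3d}, n_s = {n_s:2d} -> P = {comb(n_plt, n_s):.3e}")
```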
Problem Definition and Scenario of Application

A mathematical characterization of this problem is required for correctly defining the methods followed for its solution. In this section, we define the problem and the characteristics of the real-world scenarios in which we perform the table localization optimizations.

Problem Definition

Let t_i = (x_i, y_i) be the spatial coordinates of the table i considered during the optimization process, n_s the number of students in the classroom and consequently the number of tables to be located, T the set including every possible combination of tables in the classroom, T_i a subset containing a possible disposition of the tables in the classroom, T_j a subset containing any possible combination of T except T_i, and f_Ti(t_i) the evaluation of the quality of the location of the table i belonging to the set T_i. The TLP is then defined as finding the optimal T_i fulfilling the following relation: Σ_{t_i ∈ T_i} f_Ti(t_i) ≥ Σ_{t_j ∈ T_j} f_Tj(t_j), ∀ T_j ⊂ T. Consequently, the TLP is defined as finding the optimal table disposition in the classroom for maximizing a fitness function whose main purpose is the maximization of the distances among tables while preserving the legislation of each country, which restricts the table disposition to having a minimum required distance among every pair of tables.

Scenario of Application

The TLP is a real-application problem that has been addressed in collaboration with the Marist Brothers School San José in the city of León, Spain. The government of the autonomous community of Castilla and León has promoted the disposition of the students in two kinds of grids for addressing the TLP (i.e., a rectangular and a zig-zag grid) in order to satisfy the 1.5 m separation defined by the Spanish legislation. However, this kind of table disposition is suboptimal, and regular patterns do not necessarily obtain the best results in this type of combinatorial problem [39], since better results can be achieved by using metaheuristics for solving this complex problem. Therefore, we have analyzed two different classrooms of this school (i.e., Class A and Class B) with different characteristics, which force different optimization goals. The first class consists of 16 tables while the second scenario needs to allocate 21 students; although the first scenario is smaller, its student density is considerably lower than that of the second class. Therefore, the Class A scenario is less restrictive than the second classroom with respect to the 1.5 m minimum separation protocol. In order to model these classes and the table allocation area, we have defined three different regions for each class. The first area, the biggest one, represents the class limits; both classes were approximately rectangular, so both scenarios are considered to be of such shape. The second area is defined as the Table Location Environment (TLE) and is the region within the class limits where the tables can be positioned. For this optimization, the positions of the tables are referenced from the students' coordinates; thus, each point of the TLE is a possible location for a student. Therefore, a series of table measurements, shown in Table 1, have been considered in order to properly define the TLE region. Table 1. List of parameters measured from the studied classrooms, implemented into the scenario modeling and table distribution optimization. Moreover, although the major obstacles of the classrooms were considered in the TLE limit definition, some prohibited areas remained inside the TLE region; thus, the codification of these areas is required. The third region in the scenario consists of the prohibited areas or obstacles, which are defined as areas where a student cannot be allocated. In the simulations presented in this paper, we have displayed the limits of these obstacles with respect to the student's table for visual clarification. Thus, in the following figures, neither the student nor the table can be positioned inside these obstacle regions.
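A minimal sketch of this region codification, assuming axis-aligned rectangles for the TLE and the obstacles (all dimensions hypothetical):

```python
# Rectangles as (x_min, y_min, x_max, y_max), in metres; values hypothetical.
TLE = (0.5, 0.5, 7.5, 6.5)           # region where a student may be placed
OBSTACLES = [(3.0, 0.5, 4.0, 1.5)]   # e.g., a column or the teacher's desk

def inside(rect, x, y):
    x_min, y_min, x_max, y_max = rect
    return x_min <= x <= x_max and y_min <= y <= y_max

def valid_position(x, y):
    """A student position must lie inside the TLE and outside every obstacle."""
    return inside(TLE, x, y) and not any(inside(o, x, y) for o in OBSTACLES)
```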
As for the original student distribution, the first class, shown in Figure 1a, displays an orthogonal mesh distribution of tables, this scenario being less restrictive in terms of student allocation. On the other hand, Class B, shown in Figure 1b, requires a greater density of students, thus utilizing a triangular mesh distribution, which generally produces more efficient results in the table optimization. Figure 2a,b shows the classroom models used in this study.

Genetic Algorithm Optimization for the TLP

Genetic Algorithms have proven to offer, among other metaheuristic techniques, an excellent trade-off between diversification and intensification of the space of solutions in the optimization procedure. GA were initially proposed by Holland [40] and refined by Goldberg [41] afterward. These algorithms are based on the theory of evolution and rely on the fitness adaptation of a population of individuals to the problem-specific scenario. These individuals contain the problem's distinct variables, on which the solution depends. Through the optimization, the population is exposed to a certain selection pressure, so that the best-adapted individuals prevail and pass on their remarkable genes to the following generation of individuals. This cycle of evolution, carried out through a sufficient number of iterations, may achieve a state where an individual or a group of individuals contains a good enough solution to the problem, thus concluding the optimization. In the following sections, we discuss the codification and implementation of the GA proposed for the TLP.

Codification of the Individuals

The population of a GA must be coded in a way that contains all the essential information for the optimization procedure. For this particular case, the main variable of the optimization is the spatial distribution of the tables within the specified region (i.e., the coordinates of each table). Therefore, each individual of the population contains a particular table distribution, carrying the coordinates of each table along the TLE. Figure 3 shows the codification scheme followed in this paper, executed in the Python programming language, where we create a population list composed of n individuals; each individual represents a distinct table distribution. The binary codification is particularly useful when undergoing the genetic operators, such as crossover or mutation. Moreover, the binary structure grants a superior degree of flexibility when facing irregular scenarios, through the application of a binary scaling [29].
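A minimal sketch of this codification, assuming each coordinate is coded as a fixed-width binary chunk that is scaled back onto the TLE when decoded (the bit width and TLE dimensions are hypothetical):

```python
import random

BITS = 8                 # bits per coordinate (resolution hyperparameter, assumed)
TLE_W, TLE_H = 7.0, 6.0  # hypothetical TLE dimensions in metres

def random_individual(n_tables):
    """One table distribution: a flat bit list, 2 * BITS bits per table."""
    return [random.randint(0, 1) for _ in range(2 * BITS * n_tables)]

def decode(individual):
    """Binary scaling: map each BITS-bit chunk to a coordinate inside the TLE."""
    def fraction(bits):
        return int("".join(map(str, bits)), 2) / (2 ** BITS - 1)
    coords = []
    for i in range(0, len(individual), 2 * BITS):
        x = fraction(individual[i:i + BITS]) * TLE_W
        y = fraction(individual[i + BITS:i + 2 * BITS]) * TLE_H
        coords.append((x, y))
    return coords

population = [random_individual(16) for _ in range(100)]  # Class A: 16 tables
```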
Evaluation of the Individuals

Once the population is coded and generated, the next step in the GA structure is to determine the value of each individual through a specially designed fitness function. For our particular case of study, the objective of the optimization is the distribution of the students' tables in a way that attains a minimum separation of 1.5 m, thus fulfilling the Spanish legislation. However, it is desirable in multiple ways for the distance between each pair of tables to be maximized, thus reducing the contagion probability and increasing student and professor mobility through the class. Hence, the fitness function proposed must guarantee the minimum separation among every pair of tables while also seeking to increase the separation even beyond the 1.5 m distance. Although it is possible to compute both elements of the optimization simultaneously (i.e., the 1.5 m minimum separation and the mean distance maximization), we have concluded that it is more fruitful to implement a two-step fitness function evaluation that foremost aims to guarantee the safety distance, this being an imperative requisite for the optimization. Once the 1.5 m separation has been obtained, the second phase of the fitness function takes place, seeking to optimize the mean distance between tables. This is similar to the process of predefining a non-randomized initial population of the GA containing optimal potential solutions in particularly difficult evolutionary optimization scenarios [42]. For the first phase of the evaluation, we have considered the severity of the 1.5 m infraction in the evaluation of the individuals. In GA optimizations, it is crucial to preserve the convergence through the generations to the final solution. By introducing some progression into the evaluation, even though the individuals may not have achieved the minimum separation, we assure a rewarding mechanism for those individuals that represent a more scattered distribution. Therefore, a fitness function that penalizes each individual in proportion to the severity of its violations of the 1.5 m threshold has been implemented for the first phase of the evaluation. This progressive evaluation grants a steadier convergence of the population to a state where most individuals respect the 1.5 m separation. Once this evaluation has taken place, and only if a certain individual or group of individuals has achieved a value of 0, the evaluation proceeds to the following phase. In this second phase, we seek to optimize the mean distance between tables; thus, a fitness function combining the mean separation with a dispersion penalty is proposed, where µ is the mean distance between pairs of tables, σ is the standard deviation of the distribution of the distances, λ is a weight hyperparameter, and ρ is the hyperparameter that determines whether or not a pair of tables is considered to be close to each other. In this second phase of the fitness evaluation, we measure both the mean distance and the standard deviation of the table distancing. The introduction of the standard deviation into the optimization aims to obtain a more uniform table distribution. The uniformity of the table distancing plays a vital role in the contagion probability, the scenario where some tables are considerably closer than the mean distance being undesirable. Moreover, we have found experimentally that this scenario (i.e., where a reduced number of tables are rather separated from the rest while the rest are substantially close to each other) is particularly common in the different optimization stages. Therefore, the implementation of the standard deviation in our fitness evaluation addresses this phenomenon by penalizing these imbalanced distributions. Figure 4 shows the two-step fitness evaluation procedure proposed for the TLP. Furthermore, the main goal in this paper is to obtain a table distribution that minimizes the COVID-19 contagion probability; thus, maximizing the separation for every pair of tables is crucial. The implementation of the mean distance between tables as the primary value estimator results in the introduction of the selection pressure required for the GA convergence to the desired solution.
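A minimal sketch of the two-step evaluation, assuming a penalty linear in the severity of each sub-1.5 m violation for the first phase and the form µ − λ·σ for the second (this exact weighting, including the role of ρ, is an assumption here):

```python
import itertools
import math
import statistics

MIN_SEPARATION = 1.5  # metres, fixed by the Spanish legislation

def pairwise_distances(tables):
    """Euclidean distances between every pair of (x, y) table coordinates."""
    return [math.dist(a, b) for a, b in itertools.combinations(tables, 2)]

def fitness(tables, lam=0.5):
    """Phase one: negative penalty proportional to the severity of the 1.5 m
    violations; phase two (reached at zero violations): mean separation minus
    a weighted standard deviation that penalizes imbalanced layouts."""
    dists = pairwise_distances(tables)
    violation = sum(MIN_SEPARATION - d for d in dists if d < MIN_SEPARATION)
    if violation > 0:
        return -violation              # rewards more scattered distributions
    mu = statistics.mean(dists)
    sigma = statistics.pstdev(dists)   # population stdev; lam is assumed
    return mu - lam * sigma
```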
Selection and Elitism

The selection operator aims to arrange the individuals of the population in a way that enhances the optimization performance. This arrangement is based on the fitness value assigned to each individual in the previous step. However, multiple selection methodologies can be found throughout the literature, differing in the selection pressure that they introduce into the optimization. GA optimization is mainly driven by two core aspects, the intensification and the diversification of the solution [43]. Diversification introduces entropy into the optimization process; this randomness slows the convergence to the final solution, allowing a greater exploration of the solution space and enhancing the quality of the solutions obtained. On the other hand, intensification boosts the convergence to a solution, directing the genetic evolution along the most promising path within a reduced space of solutions [44]. The balance between these two factors is key in any GA optimization: an excessive focus on intensification may end in premature convergence to a local maximum due to the lack of exploration, while a heavy diversification focus may compromise convergence to any solution at all. Moreover, this balance depends not only on the problem studied but may also differ between initial conditions or application scenarios. Therefore, we must study the performance of different genetic operators for each particular case of study [45]. Hence, we analyze the performance of the Tournament 2 (T2), Tournament 3 (T3) and Roulette (R) selection techniques, these being among the most widespread selection methodologies in the literature [46].

Furthermore, in addition to the selection methodologies, we can introduce selection pressure into the optimization through the use of elitism. This technique aims to preserve the best-adapted individuals across the generations, seeking to steer the convergence of the optimization along the optimal path. Through elitism we therefore preserve a certain percentage of individuals while discarding those with a lower fitness value [43]. The percentage used is another hyperparameter that needs to be adjusted for each particular application: an excessive value may lead to premature convergence, while an insufficient percentage may not achieve the desired results.

Crossover and Mutation

The crossover operator creates the new generation of individuals based on the genetic characteristics of their predecessors. With the pairing arrangement decided by the selection operator, combining parents with different genetic properties makes it possible for their offspring to outperform both parents. However, multiple crossover techniques coexist and, as with the selection operators, their performance depends on the characteristics of the problem scenario. Therefore, we must study the behavior of multiple crossover methodologies in search of the most appropriate one for every particular problem [47]. We analyze the implementation of the Single-Point crossover (SP), MultiPoint crossover with 2 and 3 cross-points (MP2 and MP3, respectively) and Uniform crossover (U), these techniques being among the most widespread methodologies in the literature [48]. Implementations of these selection and crossover operators are sketched below.
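The operators named above are standard, so a compact sketch suffices. The genome is assumed to be the binary codification described earlier, and the 5% elitism rate is an illustrative default, not the value tuned later in this paper.

```python
import random

def tournament(pop, fitness, k=2):
    """Tournament-k: the fittest of k randomly drawn individuals wins.
    k=2 gives T2, k=3 gives T3; larger k means more selection pressure."""
    return max(random.sample(pop, k), key=fitness)

def roulette(pop, fitness):
    """Fitness-proportional selection (assumes non-negative fitness values)."""
    weights = [fitness(ind) for ind in pop]
    return random.choices(pop, weights=weights, k=1)[0]

def multipoint_crossover(a, b, points=2):
    """Cut both binary genomes at `points` random loci and alternate segments.
    points=1 gives Single-Point (SP); 2 and 3 give MP2 and MP3."""
    cuts = sorted(random.sample(range(1, len(a)), points))
    child, parents, start = [], (a, b), 0
    for i, cut in enumerate(cuts + [len(a)]):
        child.extend(parents[i % 2][start:cut])
        start = cut
    return child

def uniform_crossover(a, b):
    """Uniform (U): each gene is drawn independently from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def elite(pop, fitness, rate=0.05):
    """Elitism: the best `rate` fraction of individuals is copied unchanged."""
    n = max(1, int(rate * len(pop)))
    return sorted(pop, key=fitness, reverse=True)[:n]
```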
Moreover, it is possible to introduce a higher degree of entropy into the optimization through the application of mutation. In this operator, we randomly modify certain genes of a given number of individuals. The purpose of this alteration is to explore the solution space further, since a random mutation may induce a markedly superior performance in an individual, thereby influencing the direction of the optimization [43,49]. This operator plays a key role in the GA and, although its effect may appear disruptive, its adequate application allows a better solution to be reached over the whole GA optimization. However, it is vital to balance the degree of entropy generated, since an excessive amount of mutation is detrimental to convergence; we must therefore study the appropriate amount for the intended application scenario [43,50].

Stop Criteria

Once the new generation is created, the iteration of the GA ends, giving way to the successor generation to be evaluated in the following iteration. This cycle of evaluation, selection and crossover continues through multiple iterations until the stop criterion of the GA is fulfilled. Heuristic methodologies like GA are frequently applied to optimization problems where the optimal solution cannot be easily obtained; furthermore, in these problems it is common that a solution, once obtained, cannot be verified to be optimal. For this particular problem, although we can evaluate a given distribution by measuring its mean distance and verifying that the 1.5 m separation is met, we are unable to determine whether a given distribution of tables is optimal. Therefore, it is necessary to define a stop criterion for the GA optimization. In this paper, a double stop condition based on two logic parameters is proposed, sketched together with the mutation operator below. Firstly, the GA optimization shall stop if a certain percentage of the population is identical, thus concluding the optimization when the algorithm has converged to a particular solution. In this state, any deviation from this solution presents a lower fitness value; however, this phenomenon is not limited to the optimal solution, as the converged solution may be a local maximum [51]. Secondly, the GA optimization shall also stop once a certain number of iterations has been completed. This parameter is crucial in any GA optimization, since the GA may fail to converge to any solution at all, which would otherwise leave it running indefinitely.
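A minimal sketch of the mutation operator and the double stop condition follows. The mutation rate, convergence threshold and iteration cap are illustrative defaults, and `step` stands for one full generation (selection, crossover, mutation and elitism) assembled from the operators above.

```python
import random
from collections import Counter

def mutate(genome, rate=0.01):
    """Bit-flip mutation: each binary gene flips with probability `rate`."""
    return [g ^ 1 if random.random() < rate else g for g in genome]

def converged(pop, threshold=0.9):
    """First stop condition: a `threshold` fraction of identical individuals."""
    most_common = Counter(tuple(ind) for ind in pop).most_common(1)[0][1]
    return most_common / len(pop) >= threshold

def run_ga(pop, step, max_iters=500, threshold=0.9):
    """Double stop criterion: population convergence OR an iteration cap
    that prevents the GA from looping indefinitely."""
    for generation in range(max_iters):
        if converged(pop, threshold):
            break
        pop = step(pop)  # selection + crossover + mutation + elitism
    return pop
```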
Results

In this section, we present the results of the previously detailed GA in both classroom scenarios. All algorithms were coded and executed in the Python software environment, and every test was performed on an Intel(R) i7 2.4 GHz CPU with 16 GB of RAM. We have previously discussed the significant impact of the different genetic operators on the global performance of the GA: their effect on the balance between intensification and diversification is key to obtaining an adequate solution through the GA optimization. Therefore, in order to identify the most appropriate genetic operators for each particular scenario, we must first study the performance of each combination of genetic operators over multiple simulations. All simulations were executed with the parameters shown in Table 2. Hence, in search of the optimal combination of genetic operators, we have analyzed the performance of every possible combination of these functions, resulting in the comparison shown in Table 3 (comparison of the fitness values obtained by the selection and crossover genetic operators previously proposed).

Table 3 shows how the optimization performance of the GA depends on the combination of genetic operators selected and on the scenario of application. The first classroom offers great scope for table location optimization; nevertheless, the results differ across the combinations of selection and crossover techniques employed. The second scenario is far more restrictive than the first and, for some combinations, the GA did not reach a valid solution, which underlines the importance of testing the performance of multiple operators. For both scenarios, Roulette selection and Uniform crossover performed worse than the other techniques studied. Generally, R selection and U crossover are heavily focused on intensification and diversification, respectively, so their introduction into this problem may prove inadequate. Among the remaining combinations, Tournament-2 with three-point crossover (T2-MP3) and with two-point crossover (T2-MP2) showed satisfactory results for the first and second scenarios, respectively, these combinations exhibiting an adequate balance between diversification and intensification in the maximum and mean fitness values obtained throughout the simulations. Furthermore, once both methodologies were selected, we evaluated the most advantageous elitism and mutation values for each application scenario, as shown in Table 4. The resulting GA configurations of Table 4, executed for both scenarios, yield the table distributions shown in Figure 5; the evolution of the algorithms and their convergence to the final solution are shown in Figure 6.

The distributions obtained by the GA in both scenarios make excellent use of the available space, achieving mean separations, shown in Table 5, that are unreachable with regular meshes. Furthermore, owing to the introduction of the standard deviation into the fitness evaluation, the resulting distributions, although presenting irregular patterns, exhibit a degree of uniformity. This property is essential when minimizing COVID-19 contagion in our scenarios. The results in Table 5 show an improvement of up to 19.33% in the mean distance between tables from the GA optimization, demonstrating the viability of these algorithms across application scenarios; a 10% increase in mean distance was also achieved in the second, most restrictive scenario. Moreover, the distribution obtained for the first scenario was later implemented in the original classroom studied, proving the viability of this methodology and resulting in the distribution shown in Figure 7. The GA proposed in this paper therefore successfully enhances the table distribution in multiple scenarios with respect to the COVID-19 contagion probability, thus fulfilling the main objective of this research.
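A Table-3-style operator comparison can be reproduced with a simple benchmarking harness of the following form; `run_once` is an assumed hook that executes one complete GA run with the given operators and returns its best fitness.

```python
import itertools
import statistics

def compare_operators(selections, crossovers, run_once, runs=10):
    """Build a grid of results: for every selection x crossover pair,
    average the best fitness over several independent GA runs."""
    grid = {}
    for sel, cx in itertools.product(selections, crossovers):
        scores = [run_once(sel, cx) for _ in range(runs)]
        grid[(sel, cx)] = (statistics.mean(scores), max(scores))
    return grid

# e.g. compare_operators(["T2", "T3", "R"], ["SP", "MP2", "MP3", "U"], run_once)
```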
Discussion

This paper presents a technological solution to the TLP in schools during the COVID-19 pandemic. The distance achieved between the students' desks reduces the probability of contagion of this emerging infection, complementing other rules such as wearing facemasks or using hydroalcoholic gels, and thereby helping to create safe places for children to follow their daily lessons. Our approach addresses a novel problem, introduced for the first time in this paper. The location of the tables has been shown in the manuscript to be a combinatorial NP-Hard problem, which recommends a heuristic approach to finding acceptable results. The huge dimension of the space of solutions, dependent on the resolution of the TLE and the number of tables to be located, introduces difficulties during the optimization process; this motivates the definition of a two-step optimization procedure in which we first ensure a placement of the tables that meets the government's 1.5 m minimum separation between all tables, and we later expand this distance to minimize the contagion risks in the classrooms. We have implemented a GA optimization for this purpose, owing to the flexibility with which this metaheuristic adapts to similar technological problems [27][28][29]37,[52][53][54] and to the trade-off it achieves between diversification and intensification of the space of solutions. The results obtained in the manuscript demonstrate the suitability of applying metaheuristics to this kind of problem, which emerged during the pandemic, improving on the solutions proposed by the government. In future work, we will extend the analysis of the TLP to other metaheuristics, such as simulated annealing, diversified local search, or the combination of the GA with a local search procedure, to explore improvements in the optimization results. Furthermore, we will consider novel optimization scenarios, such as school canteens, restaurants, or pubs, in which new challenges for the optimization can arise, including multi-objective optimization over different criteria leading to algorithms such as NSGA-II, NSGA-III or MOEA.

Conclusions

The COVID-19 pandemic has posed a challenge for humanity: dealing with a virus that has changed normal coexistence. As a consequence, many restrictions have been imposed globally to reduce the probability of contagion of a virus that can even cause, in the most severe cases, death. Social distancing, wearing facemasks, hydroalcoholic gels for hand cleaning, and reduced capacity in indoor spaces have been some of the rules adopted to contain the propagation of the virus. Education has been one of the most affected sectors. Most countries decided to move to online learning during the first lockdowns imposed to face the emergency of the initial coronavirus outbreak. This has complicated children's normal learning, their social relationships, and their physical activity. Consequently, most countries have treated the reopening of schools as a priority, even keeping schools open during the second coronavirus outbreak that Europe is currently facing. This has promoted the implementation of restrictions and rules at schools to reduce the infectious potential of the virus. One of them consists of smart dispositions of the tables in the classrooms to maintain social distancing. In Spain, the government has fixed a minimum of 1.5 m between the students' desks, which has led to reducing the number of children in each classroom and to designing regular patterns for the disposition of the tables, such as orthogonal or triangular mesh configurations. However, the problem of the disposition of the tables is NP-Hard, and a metaheuristic solution is recommended to obtain improved results through irregular table disposition patterns that maximize the distances between the tables, thus minimizing the probability of catching the coronavirus at school.
In this paper we introduce, to the best of the authors' knowledge, the first Genetic Algorithm optimization of the Table Location Problem for addressing the disposition of tables during the COVID-19 pandemic. We analyze the definition and complexity of the problem and propose a methodology for its resolution. This methodology is applied to two different real-application scenarios (i.e., Class A and Class B), in which we prove the suitability of optimizing the table disposition for obtaining improved results. Results show that increases in the mean distance between tables of 19.33% in Class A and 10% in Class B can be attained following the proposed methodology, thus fulfilling the main objectives of this paper.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

GA Genetic Algorithm
MP2 Two-Point MultiPoint crossover
MP3 Three-Point MultiPoint crossover
NLP Node Location Problem
R Roulette selection
SARS-CoV-2 Severe Acute Respiratory Syndrome Coronavirus 2
SP Single-Point crossover
TLE Table Location Environment
TLP Table Location Problem
T2 Tournament-2 selection
T3 Tournament-3 selection
U Uniform crossover
WHO World Health Organization
A rare case of toxic epidermal necrolysis in pregnancy

Stevens-Johnson syndrome (SJS) belongs to a group of toxic necrolytic disorders of the skin and mucous membranes with significant morbidity and mortality. SJS and toxic epidermal necrolysis (TEN) are considered a spectrum of the same disease: in SJS there is <10% epidermal detachment and in TEN there is >30% epidermal detachment, with SJS/TEN overlap lying between these two extremes. It is a serious allergic reaction to medications affecting the skin and mucous membranes, characterized by severe purulent conjunctivitis, stomatitis with extensive mucosal necrosis, and purpuric macules. The pathogenesis is imperfectly understood and includes genetic factors. Pregnant women with SJS or TEN are a unique subset, as both conditions can simultaneously affect the mother and fetus. The reaction is believed to be immune complex mediated and is triggered most commonly by certain drugs such as antibiotics and antiviral agents; the common culprits are sulphonamide antimicrobials, followed by nonsteroidal anti-inflammatory drugs (NSAIDs), anticonvulsant drugs, and anti-gout drugs. It is a rare condition, with a reported incidence of one case per million people per year, and to date few cases of SJS/TEN in pregnancy have been reported. We report a case of a 20-year-old primigravida at 31+3 weeks of gestation presenting with extensive toxic epidermal necrolysis. The patient was managed in our institute with the involvement of a multidisciplinary team and had a successful pregnancy outcome; the perinatal outcome was also good.

CASE REPORT

A 20-year-old primigravida at 31 weeks 4 days period of gestation (POG) presented to our obstetric emergency with a complaint of rashes all over the body. She was a booked case at a private clinic with regular antenatal visits and an uneventful antenatal period until 29 weeks of gestation.
She had consulted her private practitioner at 29 weeks POG for high-grade fever and burning micturition, for which she was investigated and, in view of a positive Widal test, advised tablet Cefixime along with other supportive medications. Although the fever subsided after initiation of treatment, she continued oral Cefixime for 14 days as advised. Two days after completion of therapy, she again developed fever with rashes all over the body, with gradual development of a pruritic maculopapular eruption over the trunk, limbs and face, involving the eyes and oral cavity. With this history she presented to the obstetric emergency at Lady Hardinge Medical College. There was no complaint of abdominal pain, leaking or bleeding per vaginum, and no history of allergy to drugs or food products. There was no other significant past medical or family history. Physical examination was significant for poor general condition, dehydration, fever (101.2°F), and a pruritic morbilliform eruption all over the body with involvement of the hands, feet, and oral mucosa. Pedal edema was present and there was inflammation around the lesions (Figure 1).

Figure 1: Morbilliform eruptions all over the body.

CNS and cardiorespiratory examinations were normal. Abdominal examination revealed a 32-weeks-sized uterus with cephalic presentation and a regular fetal heart rate of 132 beats/min. After dermatology referral, a provisional diagnosis of viral fever with rashes or a drug-induced reaction was made. The patient was admitted, and fever investigations, cultures and a skin biopsy were sent. The patient was started on oral acyclovir and oral azithromycin, and calamine lotion was advised for local application. Initial investigations showed blood group B+, haemoglobin of 10.4 g/dL, total leucocyte count of 12,800, platelet count of 1.6 lakh, and fasting blood sugar of 78 mg%; kidney and liver function tests were normal, as were the coagulation profile and urine routine microscopy. The obstetric ultrasound showed a single live intrauterine fetus of 31+2 weeks, cephalic presentation, anterior placenta (not low-lying), adequate liquor, and an estimated fetal weight of 1533 g, with a normal Doppler study. Fever investigations showed a positive Widal test. Over a period of one week the patient's condition deteriorated: new and old mucocutaneous eruptions with large blisters covered more than 75% of the body surface area (Figure 2), the patient gradually developed difficulty in opening the eyes and mouth, had multiple spikes of fever, looked toxic and was dehydrated. There were maternal tachycardia, hematuria, increased pallor, and edema and swelling all over the body. A multidisciplinary approach (medicine, dermatology, ophthalmology, ENT) was sought; the patient was provisionally diagnosed with TEN with secondary infection and was put on intravenous acyclovir, azithromycin, metronidazole and linezolid. Systemic steroid therapy was started, and injection paracetamol was continued for analgesia. Along with intravenous fluids, the patient was allowed oral intake. For eye care, tobramycin and ciprofloxacin eye drops and artificial tears were given 3 to 4 times a day. For oral care, clotrimazole mouth paint, betadine gargles and Mucaine gel were used. For blister care, blister drainage with the skin left intact was performed, and paraffin gauze dressing with 1% clotrimazole and 2% mupirocin cream was applied twice a day. The skin biopsy report showed TEN, confirming the diagnosis (Figure 3), and the patient was started on cyclosporine 50 mg TDS.
Over the following week, new blister eruption decreased and the old blisters started healing, but superimposed secondary infection of the blisters with sepsis developed, as successive blister pus culture and blood culture sensitivity reports showed Acinetobacter infection; the antibiotics were therefore changed to intravenous colistin and clindamycin. After 8 days the patient went into spontaneous labour and delivered a live male baby of 1.7 kg vaginally. Her intrapartum and immediate postpartum periods were uneventful. Over the following month the patient's condition gradually improved, and she was discharged in satisfactory condition. The only remaining sequela on follow-up was hypo- and hyperpigmentation of the skin (Figure 4); there were no residual oral, ocular or genitourinary complications. The baby also had no complications on follow-up.

DISCUSSION

TEN is a rare, life-threatening adverse cutaneous reaction with epidermolysis of more than 30% of the total body surface area. SJS/TEN is one of the dermatologic conditions that can be potentially fatal. 5 Although the exact etiology of SJS/TEN is not fully understood, it is believed to be an immune-mediated hypersensitivity reaction in which cytotoxic T-lymphocytes play a role in the pathogenesis. 6 Patel et al. reported that penicillins are among the antimicrobials frequently causing severe cutaneous adverse drug reactions (CADRs) in the Indian population. 7 Suchi et al. reported a case of TEN/SJS in pregnancy precipitated by oral Cefixime. 8 Similarly, our patient also had a history of intake of oral Cefixime before the development of TEN. SJS is marked by the rapid onset of fever, skin lesions and sores on the mucous membranes of the eyes, mouth, nasal passages, lips and genitals. Crops of lesions last for about 2-4 weeks. The diagnosis is often obvious from the appearance of the lesions and the rapid progression of symptoms; histologic examination of sloughed skin shows necrotic epithelium, a characteristic feature. The condition can be fatal, with death resulting from pneumonia, septicaemia, myocarditis or renal failure. Severe scarring of the genital tract may also occur occasionally. There has been one case report of vaginal stenosis following SJS in pregnancy, discovered 6 weeks after cesarean section for breech presentation. 9 Our patient, however, delivered vaginally and had no problems on follow-up. Management includes prompt withdrawal of all potentially causative drugs and intravenous fluid replacement. There have been previous reports of SJS/TEN manifesting in both the mother and the foetus when the disease occurs during pregnancy. 10 In our case the baby was not affected. It is difficult to prevent an initial attack of Stevens-Johnson syndrome because its trigger is not known in advance. However, if an episode of Stevens-Johnson syndrome caused by a medication has occurred once, that drug is to be avoided to prevent another attack, since a recurrence is usually more severe than the first episode and may be fatal. An attack of SJS developing in pregnancy can be fatal because immunity is compromised; in our case, early diagnosis and prompt management saved both the mother and the child.
Ring-like oligomers of Synaptotagmins and related C2 domain proteins We recently reported that the C2AB portion of Synaptotagmin 1 (Syt1) could self-assemble into Ca2+-sensitive ring-like oligomers on membranes, which could potentially regulate neurotransmitter release. Here we report that analogous ring-like oligomers assemble from the C2AB domains of other Syt isoforms (Syt2, Syt7, Syt9) as well as related C2 domain containing protein, Doc2B and extended Synaptotagmins (E-Syts). Evidently, circular oligomerization is a general and conserved structural aspect of many C2 domain proteins, including Synaptotagmins. Further, using electron microscopy combined with targeted mutations, we show that under physiologically relevant conditions, both the Syt1 ring assembly and its rapid disruption by Ca2+ involve the well-established functional surfaces on the C2B domain that are important for synaptic transmission. Our data suggests that ring formation may be triggered at an early step in synaptic vesicle docking and positions Syt1 to synchronize neurotransmitter release to Ca2+ influx. DOI: http://dx.doi.org/10.7554/eLife.17262.001 Introduction Synchronized rapid release of neurotransmitters at the synapse is a highly orchestrated cellular process. This involves maintaining a pool of synaptic vesicles (SV) containing neurotransmitters docked at the pre-synaptic membrane, ready to fuse and release their contents upon the influx of calcium ions (Ca 2+ ) following an action potential, while also preventing the spontaneous fusion of SVs in absence of the appropriate cue (Südhof and Rothman, 2009;Jahn and Fasshauer, 2012;Südhof, 2013;Rizo and Xu, 2015). The core machinery required for the Ca 2+ triggered neurotransmitter release are the SNARE proteins (VAMP2, Syntaxin, and SNAP25) as well as Munc13, Munc18, Complexin and Synaptotagmin (Südhof and Rothman, 2009;Jahn and Fasshauer, 2012;Südhof, 2013;Rizo and Xu, 2015). A combination of biochemical, genetic and physiological results have pinpointed Synaptotagmin as a central component involved in every step of this coordinated process Jahn and Fasshauer, 2012;Südhof, 2013;Rizo and Xu, 2015). The principal neuronal isoform, Synaptotagmin 1 (Syt1), is a SV-associated protein, with a cytosolic domain consisting of tandem Ca 2+ -binding C2 domains (C2A and C2B) attached to the membrane via a juxtamembrane 'linker' domain (Brose et al., 1992;Takamori et al., 2006). Accordingly, Syt1 acts as the immediate and principal Ca 2+ sensor that triggers the rapid and synchronous release of neurotransmitters following an action potential (Brose et al., 1992;Geppert et al., 1994;Fernández-Chacó n et al., 2001). Upon Ca 2+ binding, the adjacent aliphatic surface loops on each of the C2 domains partially insert into the membrane and this enables the SNAREs to complete membrane fusion by mechanisms that are still uncertain Rhee et al., 2005;Hui et al., 2006;Paddock et al., 2011). Syt1 is also needed for the initial stage of close docking of SVs to the plasma membrane (PM), requiring in particular the interaction of the polybasic region on C2B domain with the anionic lipid, phosphatidylinositol 4, 5-bisphosphate (PIP2) at the PM (Bai et al., 2004;Wang et al., 2011;Parisotto et al., 2012;Park et al., 2012;Honigmann et al., 2013;Lai et al., 2015). 
The C2B domain also binds to the neuronal t-SNAREs (Syntaxin/SNAP25) on the PM, which positions the Syt1 on the pre-fusion SNARE complexes and contributes to the docking of the SV but is by itself insufficient (de Wit et al., 2009;Parisotto et al., 2012;Mohrmann et al., 2013;Kedar et al., 2015;Park et al., 2015;Zhou et al., 2015). Despite a wealth of information on Syt1 function and underlying molecular mechanism, critical questions remain. Deletion (or mutations) of Syt1 eliminates fast synchronous release and increases the normally small rate of asynchronous/spontaneous release (Geppert et al., 1994;Littleton et al., 1994;Bacaj et al., 2013). Reciprocally, removing Complexin increases the spontaneous release amount and the remaining Syt1 is only capable of mounting asynchronous release, though this release is still Ca 2+ -dependent (Huntwork and Littleton, 2007;Hobson et al., 2011;Jorquera et al., 2012;Cho et al., 2014;Trimbuch and Rosenmund, 2016). This suggests that Syt1, acting in concert with Complexin, also functions as a clamp to both restrain and energize membrane fusion to permit rapid and synchronous release (Giraudo et al., 2006;Krishnakumar et al., 2011;Kümmel et al., 2011). How this clamping is accomplished still remains a mystery. In addition, fast neurotransmitter release exhibits a steep cooperative dependency on Ca 2+ concentration, which implies that several Ca 2+ ions need to be bound to one or more Syt1 molecules to trigger release (Schneggenburger and Neher, 2000, 2005;Matveev et al., 2011). Further, reduced Ca 2+ binding affinity does not change this Ca 2+ cooperativity (Striegel et al., 2012), suggesting multiple copies of Syt1 molecules might be involved in gating release. However, the exact mechanism of the cooperative triggering of SV fusion is unclear. We have recently shown that Syt1 C2AB domains can form Ca 2+ -sensitive ring-like oligomers on phosphatidylcholine (PC)/phosphatidylserine (PS) lipid surfaces (Wang et al., 2014). This finding suggests a simple and elegant mechanism: if these Syt1 ring-like oligomers were to form at the interface between SVs and the plasma membrane, they could act sterically to prevent fusion until this barrier is removed when Ca 2+ enters and triggers ring disassembly, i.e. the Syt1 ring would synchronize fusion to Ca 2+ influx. In addition, the oligomeric nature of Syt1 could explain the observed Ca 2+ cooperativity of neurotransmitter release. Here we show that the ring-like oligomer is a common structural feature of C2 domain-containing proteins and describe the physiological correlates of the Syt1 ring oligomer, which argue for a functional role for the Syt1 ring in orchestrating synchronous neurotransmitter release.

eLife digest

Reliable communication between neurons is essential for the brain to work properly. This is accomplished by tightly controlling how chemical messengers, called neurotransmitters, move between neurons. Neurotransmitters are typically packaged into bubble-like structures called synaptic vesicles and are released only when the neuron receives an input electrical signal. A set of proteins orchestrates the release of the neurotransmitters from the neuron, which happens after the synaptic vesicles fuse with the cell membrane. Synaptotagmin, a protein found on the surface of the synaptic vesicle, plays many roles in neurotransmitter release. It helps to attach the synaptic vesicle to the cell membrane and also prevents the vesicles from fusing to the membrane in the absence of an appropriate input signal. Most importantly, it detects when the electrical signal arrives at the neuron by binding to calcium ions that flood the cell following the input signal. This triggers the rapid fusion of the vesicles to the cell membrane. It is not clear how Synaptotagmin is able to carry out its different roles and, in particular, how it controls the release of neurotransmitters as calcium ions enter the cell. Zanetti et al.
have now used a technique called negative stain electron microscopy to investigate how Synaptotagmin molecules taken from mammals arrange themselves on the surface of a membrane. In this technique, individual Synaptotagmin proteins on the surface of a synthetic membrane are chemically marked and their structure is imaged using an electron beam. Using this approach under conditions resembling those in cells, Zanetti et al. found that 15-20 copies of Synaptotagmin came together and formed ring-like structures on the membrane surface. These ring structures were rapidly broken apart when calcium ions were added to them. Further investigations suggest that the ring structures form when synaptic vesicles first attach to a membrane. Overall, it appears that the Synaptotagmin rings act as washers or spacers to prevent the vesicle from fusing to the cell membrane until the rings are disrupted by the arrival of calcium ions. Future studies are now needed to investigate whether the ring structures form inside cells and whether they act together with other proteins involved in neurotransmitter release.

Results

Circular oligomeric assembly is a common feature of C2 domain proteins

We had previously described the formation of Ca 2+ -sensitive ring-like oligomers on lipid monolayers with the C2AB domain of Syt1 (Wang et al., 2014). To explore this further, we analyzed the organization of membrane-bound C2AB domains of other neuronal isoforms of Synaptotagmin (Syt2, Syt7 and Syt9) on lipid surfaces under Ca 2+ -free conditions by negative stain electron microscopy (EM). Syt2 and Syt9 act as Ca 2+ sensors for synchronous SV exocytosis but are expressed in only a subset of neurons (Xu et al., 2007), while Syt7 has been posited to mediate the Ca 2+ -dependent asynchronous neurotransmitter release (Bacaj et al., 2013). EM analysis on lipid monolayers was carried out as described previously (Wang et al., 2014). Briefly, the lipid monolayer formed at the air/water interface was recovered on a carbon-coated EM grid, and protein solution was added to the lipid monolayer under Ca 2+ -free conditions (1 mM EDTA) and incubated for 1 min at 37°C. Negative-stain analysis revealed the presence of ring-like oligomers for all the Syt isoforms tested (Figure 1). Despite the variability in the number of ring-like structures between different isoforms, the size of the ring oligomers was remarkably similar, with an average outer diameter of ~30 nm (Figure 1). In all cases, each ring was composed of an outer protein band with a width of ~55 Å, which is consistent with the dimensions of a single C2AB domain (Fuson et al., 2007). These data show that the ability to form the circular oligomers is not unique to Syt1 but conserved among the Syt isoforms, and further suggest that it might be an intrinsic property of the C2 domains. Therefore, we next tested the C2AB domains of Doc2B, the C2ABCDE domains of extended Synaptotagmin 1 (E-Syt1) and the C2ABC domains of E-Syt2.
Doc2B is a C2 domain protein expressed in the pre-synaptic terminals and a putative Ca 2+ sensor that regulates both spontaneous (Groffen et al., 2010) and asynchronous release (Yao et al., 2011). E-Syts are endoplasmic reticulum (ER) resident proteins, which contain multiple C2 domains and have been implicated in ER-PM tethering, the formation of membrane contact sites, and in lipid transport and Ca 2+ signaling (Giordano et al., 2013;Reinisch and De Camilli, 2016;Fernandez-Busnadiego, 2016;Herdman and Moss, 2016). Doc2B and E-Syt2 formed circular oligomeric structures on lipid monolayers analogous to those seen with Syt isoforms (Figure 1). However, we observed very few and unstable ring-like oligomers with E-Syt1 ( Figure 1). The lack of ring-like oligomers for E-Syt1 might be due to the insufficient concentration of this protein on the membrane surface as E-Syt1 has very weak affinity to the membrane under Ca 2+ -free conditions (Idevall-Hagren et al., 2015). The uniform dimensions of the ring oligomers of the multi-C2 domain proteins suggested that the ring is formed by a single C2 domain, with the other C2 domain(s) projecting away radially (Figure 1). This implies that the ring oligomerization is not a general property of all C2 domains, but only a select few. Consistent with this, we find that the Syt1 C2B domain alone can form the ring-like oligomers albeit a bit smaller in size, but the Syt1 C2A cannot (Figure 1-figure supplement 1). Brief treatment of the pre-formed ring oligomers with 1 mM Ca 2+ (Figure 1-figure supplement 2) revealed that all of the Syt isoforms (Syt1, Syt2, Syt7, and Syt9) and Doc2B were sensitive to Ca 2+ and are rapidly disrupted, but E-Syt were either un-affected (E-Syt2) or even stabilized (E-Syt1). Altogether, our data suggests that ring-like oligomers are a common structural feature of C2 domain containing proteins, but their sensitivity to Ca 2+ is divergent (discussed below in detail). Complete cytoplasmic domain of Syt1 forms rings under physiologically relevant conditions To assess the functional relevance of the Syt1 ring oligomers, we sought to understand the molecular aspects of the oligomer assembly and the Ca 2+ susceptibility under physiologically-relevant conditions. The ring oligomers assembled with the minimal C2AB domain of Syt1 were highly sensitive to the ionic strength of the buffer and the anionic lipid content on the monolayer. A minimum of 35% PS in the monolayer and buffers containing <50 mM KCl were required to obtain stable ring structures (Wang et al., 2014). We reasoned that the inclusion of conserved N-terminal juxtamembrane region (~60 residues) that connects the C2AB domains to the membrane anchor, might help stabilize the ring oligomers. The juxtamembrane linker domain has been shown to be vital for Syt1 role in activating synchronous release and in clamping the spontaneous release (Caccin et al., 2015;Lee and Littleton, 2015). It also has the ability to interact with the membrane and has been shown to self-oligomerize (Fukuda et al., 2001;Lai et al., 2013;Lu et al., 2014). We purified the entire cytoplasmic domain of Syt1 (Syt1 CD , residues 83-421) using a stringent purification protocol (Seven et al., 2013;Wang et al., 2014) to remove all polyacidic contaminants, which could promote non-specific aggregation of the protein (Seven et al., 2013) and this is confirmed by a single peak in the size-exclusion chromatography (Figure 2-figure supplement 1A). 
As expected, lipid binding analysis showed that the juxtamembrane domain enhances and stabilizes the membrane interaction of Syt1 under physiologically-relevant experimental conditions (Figure 2-figure supplement 1B). To visualize the organization of the Syt1 CD on lipid monolayers under Ca 2+ -free conditions, we adapted the conditions used previously to obtain Syt1 C2AB rings (Wang et al., 2014). Negative stain EM analysis showed that Syt1 CD can form stable ring-like oligomers (Figure 2A) on monolayers under physiologically-relevant lipid (PC/PS at 3:1 molar ratio) and buffer (100 mM KCl, 1 mM free magnesium, Mg 2+ ) compositions. The outer diameter of these Syt1 CD rings ranged from 19-42 nm, with an average size of 30 ± 4.5 nm (Figure 2B), analogous to the Syt1 C2AB rings (Wang et al., 2014). Based on the helical indexing of the Syt1 C2AB tubes (Wang et al., 2014), we estimate that this corresponds to 12-25 copies of the Syt1 molecule, with an average of ~17 copies of Syt1. The Syt1 CD rings were robust, as we did not observe many collapsed ring structures, like the 'clams' or 'volcanos' routinely seen with C2AB rings (Wang et al., 2014), and they were stable under a wide range of ionic strengths and anionic lipid contents (Figure 2C). Therefore, we used the Syt1 CD to delineate the mechanistic details of the Syt1 ring oligomer assembly and its Ca 2+ -sensitivity in a physiologically relevant environment.

Syt1 C2B interaction with PIP2 is required for ring formation

The assembly of the Syt1 CD ring oligomers strictly required the presence of anionic lipid (PS) in the monolayer (Figure 2-figure supplement 2), and the amount of negative charge in the monolayer and the ionic strength of the buffer affected the number and integrity of the Syt1 CD rings (Figure 2C). Therefore, to identify which parts of Syt1 are involved in positioning the Syt1 on the membrane to promote the ring assembly, we focused on the conserved polybasic regions of Syt1. Disrupting the polylysine motif on the C2A (K190A, K191A) or the arginine cluster on the C2B (R398A, R399A) did not affect the ring formation (Figure 2D), but mutations of key lysine residues (K326A, K327A) within the polybasic patch on the C2B drastically reduced (~90%) the number of the Syt1 CD rings, even when 25% PS was included in the monolayer (Figure 2D). This suggests that the electrostatic interaction between the polylysine motif on C2B and the anionic lipids on the membrane surface is required for the ring formation. Consequently, we tested the effect of PIP2 on the ring assembly, as the polylysine motif on C2B has been shown to preferentially bind PIP2 with high affinity (Bai et al., 2004;Parisotto et al., 2012;Park et al., 2012;Honigmann et al., 2013;Krishnakumar et al., 2013;Lai et al., 2015). Syt1 CD ring formation did not require PIP2, but inclusion of PIP2 in the lipid monolayer (25% PS, 3% PIP2, 72% PC) improved the number and the integrity of the Syt1 CD rings (Figure 3A and E). However, PIP2 was essential to obtain stable Syt1 CD ring oligomers when ATP at physiological concentrations (1 mM Mg-ATP) was included (Figure 3B,C and E). ATP is a critical co-factor that modulates Syt1 function, as it reverses the inactivating cis-interaction of Syt1 with its own membrane while preserving the functional trans-association to the plasma membrane (Park et al., 2012;Vennekate et al., 2012).
This is because ATP effectively screens the interaction of Syt1 with weakly anionic PS, but not with the strong negative charges on the PIP2 head group found exclusively on the PM (Park et al., 2012, 2015). Correspondingly, lipid binding assays showed that ATP blocks the binding of Syt1 CD to PS-containing vesicles, but not to PS/PIP2 membranes (Figure 3-figure supplement 1). Corroborating this, 6% PIP2 as the sole anionic lipid (6% PIP2, 94% PC) in the lipid monolayer was found to be sufficient to form ring oligomers, even in the presence of 1 mM ATP (Figure 3D and E). Taken together, our data show that under physiological ionic conditions, the Ca 2+ -independent interaction of the C2B domain with PIP2 on the PM, which has been implicated in vesicle docking both in vitro and in vivo (Parisotto et al., 2012;Park et al., 2012;Honigmann et al., 2013;Lai et al., 2015), is key to assembling the Syt1 ring-like oligomers.

Ca 2+ -triggered membrane insertion of Syt1 C2B disrupts the ring oligomers

Similar to Syt1 C2AB , Syt1 CD rings were sensitive to Ca 2+ , and brief treatment (~10 s) with Ca 2+ drastically disrupted the integrity of the preformed Syt1 CD ring oligomers (Figure 4A). Calcium ions at concentrations in the range measured in the intra-terminal region during synaptic transmission (Schneggenburger and Neher, 2000, 2005;Neher and Sakaba, 2008) fragmented and disassembled the rings in a Ca 2+ concentration-dependent fashion (Figure 4A). PIP2 had little or no effect on the Ca 2+ sensitivity of the Syt1 CD , as we observed a very similar reduction in Syt1 CD rings with or without 3% PIP2 across all Ca 2+ concentrations tested (Figure 4-figure supplement 1). To verify that the Ca 2+ sensitivity of the Syt1 CD rings is indeed due to specific Ca 2+ binding to Syt1, and to map this sensitivity, we generated and tested Syt1 CD mutants that disrupt Ca 2+ binding to the C2A and C2B domains, respectively (Shao et al., 1996). As shown in Figure 4B, disrupting Ca 2+ binding to C2B (Syt1 CD D309A, D363A, D365A; C2B 3A ) rendered the ring oligomers insensitive to calcium ions, while blocking Ca 2+ binding to the C2A domain (Syt1 CD D178A, D230A, D232A; C2A 3A ) did not alter the effect of Ca 2+ on the Syt1 CD rings (Figure 4-figure supplement 2). Likewise, mutations of aliphatic loop residues in the C2B domain (Syt1 CD V304N, Y364N, I367N; C2B 3N ), which insert into the membrane following Ca 2+ binding, made the Syt1 CD ring oligomers insensitive to Ca 2+ wash, but corresponding mutations in the C2A calcium loops (Syt1 CD F231N, F234N, S235N; C2A 3N ) had no effect (Figure 4C, Figure 4-figure supplement 3). The mutation analysis shows that the rapid disruption of the Syt1 rings requires Ca 2+ binding to the C2B and the subsequent reorientation of the C2B domain into the membrane. In other words, the dissociation of the Syt1 ring oligomers is coupled to the conformational changes in the C2B domain, which is involved in Ca 2+ activation and is physiologically required for triggering synaptic transmission.

Discussion

In support of a functional role for the Syt1 ring-oligomers, we find that the molecular basis of the Syt1 ring oligomer assembly and its reversal are coupled to well-established mechanisms of Syt1 action.
The interaction of the conserved lysine residues in the polybasic region of the C2B domain with PIP2 on the inner leaflet of the pre-synaptic plasma membrane is a key determinant in both ring assembly and in synaptic vesicle docking in vivo (Martin, 2012;Honigmann et al., 2013), suggesting these processes are mechanistically linked. In addition, Syntaxin clusters PIP2 (by binding via its basic juxtamembrane region), and it has been suggested that it is these clusters that recruit the SVs (Honigmann et al., 2013). Given the high local concentration of both PIP2 (estimated to be up to ~80 mol% in such micro-domains [Honigmann et al., 2013]) and Syt1 (anchored in the synaptic vesicles), it is easy to imagine how the ring-like oligomers could form at the docking site between the synaptic vesicle and the PM. There are ~16-22 copies of Syt1 on a synaptic vesicle (Takamori et al., 2006;Wilhelm et al., 2014), enough to form a ring oligomer of ~27-37 nm in diameter, assuming no contribution from the plasma membrane pool of Syt1. This is consistent with the Syt1 ring diameters observed on the lipid monolayers (Figure 2B). Several studies have shown that the Syt1-PIP2 docking interaction precedes the engagement of the v- with t-SNAREs (van den Bogaart et al., 2011;Parisotto et al., 2012). The prior formation of a Syt1 ring would thus position it ideally to prevent the complete zippering of the SNAREs, in addition to acting as a washer (or spacer) to separate the two membranes. The height of the ring, ~4 nm (Wang et al., 2014), would allow the N-terminal domain of the SNARE complex to assemble, but such a gap would impede complete zippering. In effect, the Syt1 rings would block SNARE-mediated fusion and hold the SNAREs in a pre-fusion half-zippered state (Figure 5). This is consistent with the earlier observation that docked vesicles appear to be 3-4 nm away from the plasma membrane (Fernandez-Busnadiego et al., 2011). Besides positioning the Syt1 to promote the ring assembly, the binding of the polybasic region to the PIP2 clusters on the PM would also hold back the Ca 2+ binding loops from the membrane (Figure 5). In fact, modeling of the C2AB domain onto the EM density map of the tubular structures of the Syt1 C2AB suggests that the C2B calcium loops locate at the interface of the Syt1 oligomer (Figure 5-figure supplement 1). Such an arrangement would explain how the Syt1 ring could synchronize SV fusion to Ca 2+ influx. Ca 2+ binding to the C2B domain and the subsequent conformational change, which incidentally is required to trigger neurotransmitter release (Fernández-Chacón et al., 2001;Rhee et al., 2005;Paddock et al., 2011), would induce reorientation of the C2B domain from the ring geometry and thus break the ring oligomers. As such, this would remove the steric barrier and permit the stalled SNAREpins to complete zippering and trigger SV fusion to release neurotransmitters (Figure 5). This is congruent with the recent report (Bai et al., 2016) showing that the switch between the functional states (clamped vs. activated) of Syt1 involves a large conformational change in the C2 domains. Besides membranes, Syt1 also binds to t-SNAREs, and this interaction is functionally relevant for fast neurotransmitter release (de Wit et al., 2009;Mohrmann et al., 2013;Zhou et al., 2015;Wang et al., 2016).
Recent reports have mapped the key t-SNARE binding interface to the C2B domain (Zhou et al., 2015), which is believed to form before the influx of Ca 2+ and to be maintained during the Ca 2+ activation process (Krishnakumar et al., 2013;Zhou et al., 2015;Wang et al., 2016). We note that in our Syt1 ring oligomer model, this binding interface on the C2B (Figure 5-figure supplement 1) is accessible and free to interact with the SNAREs. However, the occupancy and positioning of the SNARE complexes on the Syt1 ring oligomer are not known and, as such, are the focus of our ongoing research. Nevertheless, it is easy to imagine that such an interaction would allow the Syt1 ring to act as a primer to organize the core components of the fusion machinery to allow for a rapid and synchronous neurotransmitter release. Further, the oligomeric structure could provide a mechanistic basis for the observed Ca 2+ -cooperativity in triggering SV fusion. Obviously, the 'washer' model is speculative, and functional and physiological studies are required to ascertain its relevance. Based on our data, the key principles of the ring oligomer assembly and its Ca 2+ sensitivity can be summarized as follows: the ring-oligomer formation is mediated by a single C2 domain (within a multi-C2 domain protein), which binds the anionic lipids on the membrane surface via its polybasic region (Figures 1 and 2), and Ca 2+ -induced re-orientation of the same C2 domain away from the ring geometry disrupts the ring oligomers (Figure 4).

Figure 5. 'Washer' model for the regulation of neurotransmitter release by Syt1. (A) The SV docking interaction of the Syt1 polylysine motif (blue dots) with the PIP2 (yellow dots) on the plasma membrane positions the Syt1 on the membrane to promote the ring-oligomer formation. The ring assembly might precede the engagement of the SNARE proteins. (B) Syt1 ring-oligomers assembled at the SV-PM interface act as a spacer or 'washer' to separate the two membranes. The height of the ring (~4 nm) would allow the partial assembly of the SNARE complex, but prevent complete zippering and thus block fusion. NOTE: The positioning and occupancy of SNAREs on the Syt1 ring are not known and are shown for illustrative purposes only. (C) Upon binding calcium ions (red dots), the Ca 2+ loops that locate to the oligomeric interface re-orient and insert into the membrane, thus disrupting the ring oligomer to trigger fusion and release neurotransmitters. Thereby, the Syt1 ring oligomers synchronize the release of neurotransmitters to the influx of calcium ions. DOI: 10.7554/eLife.17262.015

In other words, the Ca 2+ sensitivity of the ring oligomers requires the same C2 domain to have the capacity to bind both anionic lipids and Ca 2+ . This is true for the C2AB domains of the Syt isoforms and Doc2B, and hence these ring oligomers are Ca 2+ sensitive (Figure 1-figure supplement 2). However, in the case of the E-Syts, the C-terminal C2 domains (C2E for E-Syt1 and C2C for E-Syt2) that are involved in anionic lipid-dependent membrane tethering (and thereby the ring formation) lack the putative Ca 2+ binding loops, with the N-terminal C2 domains mediating the Ca 2+ -dependent membrane interaction (Giordano et al., 2013;Reinisch and De Camilli, 2016). Hence, the E-Syt rings are insensitive to Ca 2+ (Figure 1-figure supplement 2). Further, E-Syt1 exhibits very weak membrane binding under Ca 2+ -free conditions, which is enhanced upon Ca 2+ addition (Idevall-Hagren et al., 2015).
The increased surface concentration of the E-Syt1 in the presence of Ca 2+ could explain the improvement in the number of E-Syt1 rings observed under these conditions (Figure 1-figure supplement 2). In summary, we find that ring-like oligomers are a common structural feature of C2 domain containing proteins, not all of which are regulators of exocytosis. Particularly interesting are the E-Syts, which function to enable the ER and plasma membrane to come into intimate contact, close enough for lipids to be transferred. Our results suggest this might be achieved by bridging two membranes with an intervening structure, most probably based on ring oligomers. Such an organization could stabilize the contact sites and also enhance the lipid transfer function of E-Syts. However, more research is required to understand this better. Interestingly, yeast cells have both E-Syts (for membrane adhesion) and SNAREs (for membrane fusion) but do not contain a vesicle-associated Syt protein and do not carry out calcium-regulated exocytosis. Perhaps this set the stage for exocytosis to evolve when the C2 domains combined with a vesicle-associated protein to form ring-like oligomers, i.e. washers that reversibly impede SNAREpins.

Protein expression and purification

The Syt1 CD wild-type and mutant proteins were expressed and purified as His 6 -tagged proteins using a pET28 vector, while the Syt C2AB isoforms and Doc2B were expressed and purified as GST-constructs. The proteins were purified as described previously (Seven et al., 2013;Wang et al., 2014), with a few modifications. Briefly, Escherichia coli BL21 (DE3) expressing Syt constructs were grown to an OD 600 of ~0.7-0.8 and induced with 0.5 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). The cells were harvested after 3 hr at 37°C and suspended in lysis buffer (25 mM HEPES, pH 7.4, 400 mM KCl, 1 mM MgCl 2 , 0.5 mM TCEP, 4% Triton X-100, protease inhibitors). The samples were lysed using a cell disrupter, and the lysate was supplemented with 0.1% polyethylenimine before being clarified by centrifugation (100,000 × g for 30 min). The supernatant was loaded onto Ni-NTA (Qiagen, Valencia, CA) or Glutathione-Sepharose (Thermo Fisher Scientific, Grand Island, NY) beads (3 hr or overnight at 4°C), and the beads were washed with 20 ml of lysis buffer, followed by 20 ml of 25 mM HEPES, 400 mM KCl buffer containing 2 mM ATP, 10 mM MgSO 4 , 0.5 mM TCEP. Subsequently, the beads were resuspended in 5 ml of lysis buffer supplemented with 10 mg/mL DNaseI, 10 mg/mL RNaseA, and 10 ml of benzonase (2000 units) and incubated at room temperature for 1 hr, followed by a quick rinse with 10 ml of high-salt buffer (25 mM HEPES, 1.1 M KCl, 0.5 mM TCEP) to remove the nucleotide contamination. The beads were then washed with 20 ml of HEPES, 400 mM KCl buffer containing 0.5 mM EGTA to remove any trace calcium ions. The proteins were eluted off the affinity beads in 25 mM HEPES, 100 mM KCl, 0.5 mM TCEP buffer, either with 250 mM Imidazole (His-tagged proteins) or using PreScission protease for GST-tagged constructs, and further purified by ion exchange (Mono-S) chromatography. Size-exclusion chromatography (Superdex75 10/300 GL) showed a single elution peak (~12 mL) consistent with a pure protein, devoid of any contaminants. The coding sequence of the C2A-E domains from human E-Syt1 was cloned into a pCMV6-AN-His vector (OriGene). The plasmid was transfected into Expi293 cells (Thermo Fisher Scientific, Grand Island, NY) for protein expression.
After three days of transfection, cells were collected and lysed by three cycles of freezing and thawing (liquid N 2 and a 37°C water bath). His-tagged E-Syt1 C2ABCDE was then purified on His60 Nickel Resin (Clontech, Mountain View, CA), with Imidazole elution. For E-Syt2 ABC production, the coding sequence was cloned into a modified pCDFDuet-1 vector (Novagen, Danvers, MA), which has an N-terminal GST tag and a PreScission protease cleavage site, and transformed into BL21(DE3). The cells were grown at 37°C to an OD 600 of ~0.6-0.8 and then shifted to 22°C before induction with 0.5 mM IPTG. Cells were harvested 18 hr after induction. The proteins were purified by Glutathione Sepharose 4B chromatography, and the GST tags were removed by treatment with PreScission protease. Both E-Syt proteins were further purified by gel filtration on a Superdex200 column. The gel filtration buffer contained 20 mM HEPES at pH 8.0, 150 mM NaCl, and 0.5 mM TCEP. All chromatography was carried out using an AKTA system (GE Healthcare, Marlborough, MA). In all cases, the protein concentration was estimated using the Bradford assay with BSA as standard, and nucleotide contamination was tracked using the 260 nm/280 nm ratio. The protein was flash frozen and stored at −80°C with 10% glycerol (20% glycerol for Syt1 CD ) without significant loss of ring-forming activity.

Lipid monolayer assay

To form the lipid monolayer, degassed ultrapure H 2 O was injected through a side port to fill up wells (4 mm diameter, 0.3 mm depth) in a Teflon block. The surface of the droplet was coated with 0.5 ml of phospholipid mixture (0.5 mM total lipids). The lipid mixtures, DOPC/DOPS and DOPC/DOPS/PIP2, were pre-mixed as required, dried under N 2 gas and then re-suspended in chloroform to the requisite concentration before being added to the water droplet. The Teflon block was then sealed in a humidity chamber for 1 hr at room temperature to allow the chloroform to evaporate. Continuous carbon-coated EM grids (400 mesh; Ted Pella Inc., Redding, CA) were baked at 70°C for 1 hr and washed with hexane to improve hydrophobicity. Lipid monolayers formed at the air/water interface were then recovered by placing the pre-treated EM grid, carbon side down, on top of each water droplet for 1 min. The grid was raised above the surface of the Teflon block by injecting ultrapure H 2 O into the side port and was then lifted off the droplet immediately. Proteins were rapidly diluted to 5 mM in 20 mM MOPS, pH 7.5, 5 mM KCl, 1 mM EDTA, 2 mM MgAc 2 , 1 mM DTT, 5% (wt/vol) trehalose buffer and then added to the lipid monolayer on the grid and incubated in a 37°C humidity chamber for 1 min. The final KCl concentration in the buffer was adjusted to 100 mM or 140 mM as required. To facilitate structural analysis of the rings, we further optimized the incubation conditions by using an annealing procedure: rings were nucleated at 37°C for 1 min followed by a 30-min annealing step at 4°C. The grids were rinsed briefly (~10 s) with incubation buffer alone, or with buffer supplemented with CaCl 2 (0.1, 0.5 and 1 mM free) for the Ca 2+ treatment studies. The free [Ca 2+ ] was calculated with Maxchelator (maxchelator.stanford.edu). Subsequently, the grids were blotted with Whatman #1 filter paper (Sigma-Aldrich, St. Louis, MO), negatively stained with uranyl acetate solution (1% wt/vol), and air dried. The negatively stained specimens were examined on an FEI Tecnai T12 operated at 120 kV. The defocus range used for our data was 0.6-2.0 μm.
Subsequently, the grids were blotted with Whatman #1 filter paper (Sigma-Aldrich, St. Louis, MO), negatively stained with uranyl acetate solution (1% wt/vol), and air dried. The negatively stained specimens were examined on an FEI Tecnai T12 operated at 120 kV. The defocus range used for our data was 0.6-2.0 μm. Images were recorded under low-dose conditions (~20 e⁻/Å²) on a 4K × 4K CCD camera (UltraScan 4000; Gatan, Inc., Pleasanton, CA) at a nominal magnification of 42,000×. Micrographs were binned by a factor of 2, for a final sampling of 5.6 Å per pixel on the object scale. Image analysis, including the size distribution measurements, was carried out using ImageJ software.
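The size-distribution measurement was performed in ImageJ; an equivalent automated analysis could be scripted, for example with scikit-image, as sketched below. This is an illustrative substitute for the authors' pipeline, not a reproduction of it: the thresholding and size cutoff are assumptions, and only the 5.6 Å/pixel sampling is taken from the text.

```python
import numpy as np
from skimage import io, filters, measure, morphology

ANGSTROM_PER_PIXEL = 5.6  # final sampling after 2x binning (see text)

def ring_diameters(path, min_area_px=20):
    """Measure equivalent diameters (in Angstrom) of ring-like particles
    in a negative-stain EM micrograph. Illustrative only: real data would
    need careful preprocessing and manual curation of picks."""
    img = io.imread(path, as_gray=True)
    # In negative stain the protein is bright against dark stain after
    # inversion; invert so particles form the high-intensity phase.
    img = img.max() - img
    mask = img > filters.threshold_otsu(img)
    mask = morphology.remove_small_objects(mask, min_size=min_area_px)
    labels = measure.label(mask)
    # Note: newer scikit-image versions name this equivalent_diameter_area.
    return np.array([p.equivalent_diameter * ANGSTROM_PER_PIXEL
                     for p in measure.regionprops(labels)])

# diam = ring_diameters("micrograph.tif")
# print(diam.mean(), diam.std())
```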
Mortality Risk from PM2.5: A Comparison of Modeling Approaches to Identify Disparities across Racial/Ethnic Groups in Policy Outcomes

Background: Regulatory analyses of air pollution policies require the use of concentration-response functions and underlying health data to estimate the mortality and morbidity effects, as well as the resulting benefits, associated with policy-related changes in fine particulate matter [particulate matter with an aerodynamic diameter ≤2.5 μm (PM2.5)]. Common practice by U.S. federal agencies involves using underlying health data and concentration-response functions that are not differentiated by racial/ethnic group.

Objectives: We aim to explore the policy implications of using race/ethnicity-specific concentration-response functions and mortality data, in comparison to standard approaches, when estimating the impact of air pollution on non-White racial/ethnic subgroups.

Methods: Using new estimates from the epidemiological literature on race/ethnicity-specific concentration-response functions, paired with race/ethnicity-specific mortality rates, we estimated the mortality impacts of air pollution from all sources, from a uniform increase in concentrations, and from the regulations imposed by the Mercury and Air Toxics Standards.

Results: Use of race/ethnicity-specific information increased PM2.5-related premature mortality estimates in older populations by 9% overall, and among older Black Americans by 150%, for all-source pollution exposure. Under a uniform degradation of air quality and race/ethnicity-specific information, older Black Americans were found to have approximately 3 times higher mortality relative to White Americans, a disparity that is obscured under a non-race/ethnicity-specific modeling approach. Standard approaches using non-race/ethnicity-specific information underestimate the benefits of the Mercury and Air Toxics Standards to older Black Americans by almost 60% and overestimate the benefits to older White Americans by 14%, relative to a race/ethnicity-specific modeling approach.

Discussion: Policy analyses that do not incorporate race/ethnicity-specific concentration-response functions and mortality data underestimate the overall magnitude of the PM2.5 mortality burden and the disparity in impacts on older Black American populations. Based on our results, we recommend that the best available race/ethnicity-specific inputs be used in regulatory assessments to understand and reduce environmental injustices. https://doi.org/10.1289/EHP9001

Introduction

Among the air pollutants regulated by the Clean Air Act of 1970 (CAA), prior studies find that fine particulate matter [particulate matter with an aerodynamic diameter of ≤2.5 μm (PM2.5)] is responsible for the largest share of estimated costs of air pollution (U.S. EPA 1999, 2011a; Muller et al. 2011). The bulk of PM2.5 costs arise through premature deaths (U.S. EPA 2011b); in 2011, an estimated 107,000 premature deaths in the United States were attributed to air pollution. The U.S. Environmental Protection Agency (U.S. EPA) is required to carry out benefit-cost and regulatory impact analyses to assess the effects of the CAA and associated administrative rules (U.S. EPA 2012). Reductions in PM2.5-induced mortality are a major contributor to the benefits of air pollution policies. Most analyses used by the U.S. EPA to quantify lowered mortality risks associated with PM2.5 reductions employ a log-linear concentration-response function (CRF) between PM2.5 exposure and mortality.
Two such CRFs typically used are from the American Cancer Society (ACS) study (Krewski et al. 2009) and the Harvard Six Cities analysis (Lepeule et al. 2012). However, these studies evaluated populations of higher socioeconomic status (SES) than the national average, predominantly White populations in well-monitored urban areas (for example, Black Americans constitute only 4% of the population in the ACS CPS-II study) (Pope et al. 1995). Thus, these estimates provide limited information on the health effects of air pollution in rural areas, among racial/ethnic minorities, or in low-SES populations. Recent evidence from a study of all Medicare beneficiaries (60,925,443 Americans) indicates that the impact of PM2.5 on mortality among older populations is 3 times higher for Black Americans than for White Americans, including at exposures below the current National Ambient Air Quality Standard (NAAQS) for PM2.5 (annual average 12 μg/m³) (Di et al. 2017). Although multiple studies have demonstrated that people of color and other disadvantaged populations have disproportionately higher exposures to air pollution than White Americans (Hajat et al. 2015; Tessum et al. 2019), we incorporate and document the additional impact that higher baseline mortality rates and higher pollution susceptibility have on pollution-caused health outcomes across these racial/ethnic groups. Standard practice in most health risk assessments involves applying CRFs to all adult populations, assuming no differences in pollution-related risk of death across racial/ethnic groups and concentration levels, combined with population-weighted average mortality rates. We hypothesize that the use of CRFs and underlying health data that are not differentiated by racial/ethnic subgroup leads to an underestimation of the health impacts of air pollution, especially for racial/ethnic minority communities. This study contributes to the extant literature in several ways. First, we estimate air pollution deaths among older populations (65+ y of age) using the 2014 National Emissions Inventory (NEI), a comprehensive (economy-wide) inventory available for the United States (U.S. EPA 2016). Second, we capitalize on recent innovations in the epidemiological literature that report both nonlinear relationships between ambient PM2.5 concentrations and the mortality risk faced by older individuals, as well as race/ethnicity-specific concentration-response functions (CRFs). Because this new epidemiological study relied on the Medicare population, we estimate deaths only for the relevant age groups (65+). Ours is the first study to use these new results in a national assessment of the health impact from PM2.5 among older populations. Furthermore, we employed baseline health data specific to the subgroups of interest (specifically, the five largest racial/ethnic groups: White, Black American, Hispanic American, Asian American, and Native American). Third, partially motivated by increases in PM2.5 ambient concentrations between the years 2016 and 2018, we analyzed and quantified the mortality impacts in older populations, and the associated costs, from a simulated uniform 1-μg/m³ increase in PM2.5, broken down by racial/ethnic group. Finally, we also explored the public health benefits of the Mercury and Air Toxics Standards (MATS). This aspect of the paper developed a "no-MATS" counterfactual and then assessed, on a county-level basis, the distribution of mortality risks and costs that MATS avoided.
PM2.5-Attributable Premature Mortality

We first estimated premature mortality attributable to PM2.5 exposure. To do so, we employed the Air Pollution Emissions Experiments and Policy Analysis (APEEP) integrated assessment model [version 3 (AP3)] (Muller 2014). AP3 is an updated version of the second version of APEEP, AP2 (Holland et al. 2016; Jaramillo and Muller 2016). The model, run in MATLAB®, uses emissions of local air pollutants to estimate ambient concentrations of PM2.5 and the resulting exposures, mortality risks, and monetary costs. The model encompasses emissions of five pollutants: sulfur dioxide (SO2), nitrogen oxides (NOx), ammonia (NH3), primary PM2.5, and volatile organic compounds (VOC). All emissions of these pollutants reported by the U.S. EPA in the 2014 NEI as released in the contiguous United States are included. AP3 differentiates emissions by source type and location. It models nearly 700 individual point sources and attributes all remaining point-source emissions reported by the U.S. EPA to the county in which the facility exists. The U.S. EPA reports ground-level, area-source emissions (cars, trucks, trains, households, small businesses, and agriculture, among others) as aggregated county emissions; AP3 attributes these discharges to the county in which the U.S. EPA reports the release. AP3 employs an air-quality model to link emissions to concentrations. Fundamentally, the model relies on Gaussian dispersion modeling. However, it employs simplified representations of the atmospheric chemical processes that link SO2, NOx, NH3, and VOC emissions to ambient concentrations of secondary PM2.5. The model uses rate constants along with a module, applied in every receptor location, that translates predicted ambient concentrations into ambient sulfate, ammonium nitrate, and ammonium; each of these species is an important constituent of total PM2.5. The predictions of total ambient PM2.5 produced by AP3 have been evaluated in previous analyses against both monitoring data provided publicly by the U.S. EPA and predicted concentrations produced by chemical transport models (Gilmore et al. 2019). The comparison of predicted annual means from an earlier version of AP3 with ambient monitoring data revealed a correlation coefficient for total PM2.5 of about 0.60, on par with that from a chemical transport model included in the analysis.
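AP3's full source-receptor matrices are beyond a short example, but the Gaussian dispersion idea at their core can be sketched. The snippet below implements the textbook steady-state Gaussian plume solution for a ground-level receptor with full ground reflection; the power-law dispersion coefficients and all parameter values are illustrative assumptions, not AP3's calibration.

```python
import math

def plume_conc(q, u, x, y, h, a_y=0.08, a_z=0.06, b=0.9):
    """Ground-level concentration (g/m^3) from a continuous point source.

    Textbook Gaussian plume with full ground reflection at z = 0:
      C = Q / (pi * u * sy * sz) * exp(-y^2 / (2 sy^2)) * exp(-H^2 / (2 sz^2))
    q: emission rate (g/s); u: wind speed (m/s); x, y: downwind and
    crosswind receptor distances (m); h: effective stack height (m).
    sy and sz follow an assumed power law sigma = a * x**b; in practice
    these depend on atmospheric stability class.
    """
    sy, sz = a_y * x**b, a_z * x**b
    return (q / (math.pi * u * sy * sz)
            * math.exp(-y**2 / (2 * sy**2))
            * math.exp(-h**2 / (2 * sz**2)))

# Example: 100 g/s source, 5 m/s wind, receptor 10 km downwind on the axis.
print(plume_conc(q=100.0, u=5.0, x=10_000.0, y=0.0, h=150.0))
```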
AP3 is also equipped with detailed county-level vital statistics, including population and mortality rate data for 19 age groups and 5 major racial/ethnic groups: Native Americans, Asian Americans, Black Americans, Hispanic Americans, and White Americans. We sourced the data on population and mortality rates for these five racial/ethnic groups from the U.S. Census Bureau and the Centers for Disease Control and Prevention (CDC) WONDER databases (https://wonder.cdc.gov/, updated 22 December 2020; https://www.census.gov, updated 8 October 2021). The CDC WONDER database classifies racial groups as follows: American Indian or Alaska Native, Asian or Pacific Islander, Black or African American, and White (corresponding to "Native Americans," "Asian Americans," "Black Americans," and "White Americans," respectively, in this paper). Furthermore, the database separates the results by Hispanic ethnicity. Thus, to create the non-Hispanic racial/ethnic groups, we chose the "non-Hispanic" designation for each of the four above-listed racial groups, and to classify "Hispanic Americans" as such, we used the "All Races" Hispanic or Latino classification. This approach created the five distinct racial/ethnic groups we present in this paper. The U.S. Census Bureau's classification of racial/ethnic groups (as described at https://www.census.gov/topics/population/race/about.html) is as follows: for non-Hispanic racial groups, we gathered data on populations of non-Hispanic "White alone," "Black or African American alone," "Asian alone," and "American Indian and Alaska Native alone or in combination with other races." For Hispanic or Latino ethnicities, the U.S. Census Bureau reports Hispanic populations, which can correspond to any of the races listed above. Given that the four racial groups described above were labeled "non-Hispanic," these five racial/ethnic categories are mutually exclusive. We downloaded the 2014 data on county-level populations by age and racial/ethnic group from the Census (https://www2.census.gov/programs-surveys/popest/datasets/2010-2017/counties/asrh/cc-est2017-alldata.csv) and the 10-y average mortality for all causes of death at the county level, ending in 2014, from CDC WONDER. Table 1 reports the national average mortality rates by age and racial/ethnic group. Because we used publicly available data, we did not require informed consent protocols or internal review board or ethics approvals. To estimate the premature mortality risk faced by individuals 65 y of age or older attributable to exposure to PM2.5, AP3 uses a health impact function of the following form:

M_{a,i,t} = (1 − e^{−β × PM2.5_{i,t}}) × MR_{a,i,t} × Pop_{a,i,t}

where M_{a,i,t} equals premature deaths attributable to PM2.5 exposure for individuals 65 y of age or older, county (i), age cohort (a), at time (t); β equals a statistically estimated coefficient from the epidemiological literature; Pop_{a,i,t} equals the population count for individuals 65 y of age or older, county (i), age cohort (a), at time (t); PM2.5_{i,t} equals the PM2.5 concentration, county (i), time (t); and MR_{a,i,t} equals the baseline mortality rate for age cohort (a), county (i), time (t). The AP3 model concludes by attributing a monetary value to mortality risk from PM2.5 by employing a Value of Statistical Life (VSL) approach (Viscusi and Aldy 2003). The VSL is the marginal rate of substitution between money (typically wage income) and mortality risk. It is not intended to reflect or capture the value of the prevention of certain death; it is a rate of exchange between money and small changes in risks of death. The VSL parameter used in AP3 is the U.S. EPA's preferred value: about $8 million in 2014 U.S. dollars. The VSL is applied uniformly across persons of different incomes, ages, and racial/ethnic groups. We calculated per capita mortality costs by dividing the total mortality costs for each racial/ethnic group at the county level by the relevant population. We then mapped the county aggregate and per capita costs to demonstrate the geographic variability in pollution-related premature mortality impacts. Our maps divided the distribution of per capita costs into five classes, as defined by the Jenks classification method and portrayed in the map legends. The highest class shows the per capita cost for outlier counties and reflects the areas with the highest mortality costs per racial/ethnic group.
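A minimal sketch of the health impact function and the VSL costing step described above (the variable names and illustrative inputs are ours, not AP3's):

```python
import numpy as np

VSL = 8.0e6  # U.S. EPA-preferred value, ~2014 USD (see text)

def attributable_deaths(beta, pm25, mort_rate, population):
    """Premature deaths attributable to PM2.5 for one cohort/county/year:
        M = (1 - exp(-beta * PM2.5)) * MR * Pop
    beta: CRF coefficient per ug/m^3; pm25: annual mean (ug/m^3);
    mort_rate: baseline all-cause mortality rate; population: cohort size.
    """
    return (1.0 - np.exp(-beta * pm25)) * mort_rate * population

# Illustrative county: 10,000 residents aged 65+, 5% baseline mortality,
# and 7.1 ug/m^3 PM2.5 (the average baseline concentration reported below).
# beta ~ 0.007 corresponds to an HR of roughly 1.07 per 10 ug/m^3.
deaths = attributable_deaths(beta=0.007, pm25=7.1,
                             mort_rate=0.05, population=10_000)
print(deaths, deaths * VSL)  # deaths and monetized mortality cost
```

The per capita mapping step then divides these costs by group population at the county level; the five-class Jenks binning used for the map legends is available in standard GIS tools (or, e.g., the jenkspy Python package) and is omitted here.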
The focus of this analysis is to identify the influence of different CRFs and vital statistics on mortality estimates across different exposure levels and racial/ethnic groups. We used PM2.5 mortality CRFs from the recent cohort study of 60,925,443 Medicare beneficiaries across the United States followed over 13 y (2000 through 2012) (Di et al. 2017). The study used zip code annual average PM2.5 concentrations, predicted through the use of an artificial neural network that incorporated information such as satellite-based measurements, simulation outputs from a chemical transport model, land-use terms, and meteorological data; these predictions were trained and validated against regulatory monitor data. The CRFs for the risk of death associated with a 10-μg/m³ increase in PM2.5 were estimated using a two-pollutant Cox proportional-hazards model that controlled for ozone, sex, racial/ethnic group, Medicaid eligibility, 5-y categories of age at study entry, 15 zip code-level or county-level variables from various sources, and a regional dummy variable to account for compositional differences in PM2.5 across the United States. From Di et al. (2017), we obtained two sets of estimates: a single linear (β) coefficient for the full cohort population, as well as race/ethnicity-specific subgroup (β) coefficients. We coupled the use of these alternative CRF forms with different ways of including baseline mortality rates. In total, we considered the following four epidemiological strategies: the Di et al. (2017) linear, non-race/ethnicity-specific CRF with both population-weighted mortality rates and race/ethnicity-specific mortality rates, and the Di et al. (2017) race/ethnicity-specific linear CRFs with both population-weighted and race/ethnicity-specific mortality rates. All hazard ratios (HR) and their associated confidence intervals (CI) are listed in Table 2.
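Because Table 2 reports hazard ratios per 10 μg/m³, applying them in the health impact function requires converting each HR to a per-unit coefficient via β = ln(HR)/10. A one-line sketch (the HR shown is a placeholder, not a value from Table 2):

```python
import math

def hr_to_beta(hr, delta=10.0):
    """Convert a hazard ratio reported per `delta` ug/m^3 into the
    log-linear CRF coefficient beta (per 1 ug/m^3): HR = exp(beta*delta)."""
    return math.log(hr) / delta

# e.g. a hypothetical race/ethnicity-specific HR of 1.20 per 10 ug/m^3:
print(hr_to_beta(1.20))  # ~0.0182 per ug/m^3
```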
Policy Scenario 1: Uniform Increase in Underlying Concentrations

Using the same model, CRFs, and data as in our analysis of PM2.5-attributable premature mortality, we first modeled a scenario of air-quality degradation in which all counties experience an increase of 1 μg/m³ PM2.5. To do so, we ran AP3 twice: once with the pollution baseline and once with an extra 1 μg/m³ of pollution. The results are the differences in outcomes across the two scenarios. The average baseline concentration across counties (calculated at the population-weighted centroid of each county) is 7.1 μg/m³; this scenario therefore represents an 18% change in air pollution on average, although given the variation in pollution across counties, the percentage increase can exceed 100%. The intent of this simulation is to isolate differences in modeling strategies across racial/ethnic groups and age groups. That is, by standardizing the PM2.5 change, we can clearly attribute differences in the resulting mortality effects to the CRFs and vital statistics used.

Policy Scenario 2: MATS Abatement Technology

Our second scenario modeled the health effects of the MATS policy using AP3. Specifically, we used data on abatement technology adopted by generators in direct response to the MATS policy. The U.S. Department of Energy's Energy Information Administration Form 860 provides information at the power plant level on the first year in which generators used MATS abatement technology for compliance with the policy. We used central engineering estimates of emissions reduction rates for the compliance technology (Kaminski 2003; U.S. EPA n.d.; Ake and Licata 2011) to infer what emissions of SO2 and PM2.5 would have been had firms not elected to use the technology. For the MATS scenario, we employed only the race/ethnicity-specific mortality rates, paired with both the race/ethnicity-nonspecific linear CRF and the race/ethnicity-specific CRFs from Di et al. (2017).

Results

PM2.5-Attributable Premature Mortality

Total premature mortality attributable to PM2.5 exposure among people over the age of 65 y in 2014 across the United States ranged from 121,331 to 132,696 deaths (Table 3). Table 3 reports attributable mortality estimates for all racial/ethnic groups, using the six different CRF approaches discussed in the "Materials and Methods" section. The Di et al. (2017) linear CRF with mortality rates not differentiated by racial/ethnic group yielded an estimate of 121,331 deaths among persons over 65 y of age in 2014 (see Table 3, column 1). We first examined the effect of imposing race/ethnicity-specific mortality rates relative to population-weighted average mortality rates. As Table 1 demonstrates, among people younger than 85 y, Black Americans have the highest mortality rates, and among those older than 85 y, White Americans have the highest mortality rates. The use of race/ethnicity-specific mortality rates did not significantly affect the total number of deaths; rather, it distributed them differently across racial/ethnic groups by altering the race/ethnicity-specific distribution of premature mortality risk. Specifically, we found that using population-weighted average mortality rates underestimated the health effects of air quality on Black Americans by approximately 11%, regardless of the CRF employed (comparing Table 3, column 1 to column 2, and column 3 to column 4). This approach also underestimated the health benefits of air-quality improvements for White Americans, although the effect was much smaller (2%), and it overestimated the impacts on other racial/ethnic groups. Our next set of results demonstrates the effect of employing the race/ethnicity-specific linear CRFs (Table 3, columns 3 and 4) relative to the linear, non-race/ethnicity-specific CRF (Table 3, columns 1 and 2). Using the race/ethnicity-specific CRFs increased total deaths by 9% relative to the results under the linear CRF (comparing columns 1 to 3 and 2 to 4). However, the difference across racial/ethnic groups was more pronounced. Figure 1 graphically shows how much the non-race/ethnicity-specific CRF from Di et al. (2017) misestimates the impacts across all racial/ethnic groups: it shows the percentage difference in PM2.5-attributable mortality from using a race/ethnicity-specific CRF relative to the non-race/ethnicity-specific CRF. Using the race/ethnicity-specific linear CRFs resulted in significantly greater premature mortality for racial/ethnic minorities, with Black Americans having over 150% more premature deaths than predicted using the non-subgroup-specific CRF. Similarly, Hispanic Americans had 52% more premature deaths, and Native Americans and Asian Americans had approximately 30% more premature deaths than predicted under the non-subgroup-specific CRF. However, White Americans had 13% fewer deaths under this approach.
Further, under the race/ethnicity-specific CRF, the pollution-attributed premature mortality among older Black Americans accounted for as much as 25% of all PM2.5-attributable deaths in populations over 65 y of age, although nationally, older Black Americans make up only 9% of the total population (this can be seen in Table 3, comparing the percentages in column 5 to the percentages in the other columns). In contrast, PM2.5-associated mortality was proportional to the population share for older White Americans in the scenarios without race/ethnicity-specific CRFs (see Table 3, columns 1-2), but this changed when we employed race/ethnicity-specific CRFs. As can be seen in Table 3, columns 3 and 4, the race/ethnicity-specific CRF for White Americans resulted in a PM2.5-associated mortality burden of only 65%, even though the population share of this group was roughly 80%. Table 2 provides the mortality risk HRs associated with a 10-μg/m³ increase in PM2.5 (and corresponding 95% CIs) for the different CRFs.

[Table 3 caption: Total estimated race/ethnicity-specific PM2.5-attributable deaths among populations age 65 y and older in the United States, using different concentration-response functions and baseline mortality rates, with the national share of older populations by racial/ethnic group as a percentage of the total population above 65 y of age.]

[Figure 1 caption: The figure represents the difference in PM2.5-attributable mortality between using the non-race/ethnicity-specific CRF and using race/ethnicity-specific CRFs. The baseline in the difference is the non-race/ethnicity-specific CRF; thus the percentage represents how much higher the mortality estimate is when using a race/ethnicity-specific CRF. The relevant CRFs are from Di et al. (2017); see Table 2. Note: CRF, concentration-response function.]

Table 4 presents the results from our analyses using the upper and lower bounds of the CRF CIs presented in Table 2. Some degree of overlap in CIs occurred in the case of Native Americans, given the large uncertainty around the race/ethnicity-specific CRF for this racial group. As shown in Table 4, the estimates for Native Americans had similar central estimates across the two CRFs (846 vs. 850), with the race/ethnicity-specific CRFs resulting in a CI of 526 to 1,145. All other racial/ethnic groups had clear differentiation between the results for the different CRFs employed. We next estimated the cost of PM2.5-attributable premature mortality in older populations by applying a uniform VSL to all premature deaths caused by pollution. Multiplying the total number of deaths across all racial/ethnic subgroups listed in Table 3 by our VSL of approximately $8 million, we found that total mortality costs are between $1,059 billion and $1,155 billion (in 2014 U.S. dollars). To demonstrate the geographic distribution of costs associated with PM2.5-attributable premature mortality, we created maps depicting per capita mortality costs for each racial/ethnic group (Figures 2-7), using the race/ethnicity-specific Di et al. (2017) CRFs. Maps depicting aggregate mortality costs by race/ethnicity can be found in the supplemental material, Figures S1-S5. On a per capita basis, costs exhibited a somewhat homogeneous distribution across the country and across racial/ethnic groups, though the tails of the distributions differed significantly across race/ethnicity.
Specifically, the highest per capita mortality costs for non-White racial/ethnic groups ranged between 4 and 6 times higher than those for White Americans, as can be seen in the map legends. For example, the highest class of per capita mortality costs for Black Americans (Figure 3) and Hispanic Americans (Figure 5) was above $20,000, whereas the highest class of per capita costs for White Americans (Figure 7) was only above $3,500.

Policy Scenario 1: Uniform Increase in Baseline Concentrations

Table 5 reports the aggregate PM2.5-related exposure deaths among populations above 65 y of age in 2014 according to the different CRFs and then compares these to the deaths produced by a simulation in which 1 μg/m³ is added to all county-level concentrations. Across all CRFs and underlying health data employed, we found that PM2.5-related deaths would increase by approximately 10% nationally under this uniform air-quality degradation. Applying a uniform VSL to this increase in PM2.5-attributable premature mortality resulted in approximately $100 billion in mortality costs, although this amount can increase to $113 billion, depending on the CRF and mortality rates employed (see Table 5, final column). Table 6 continues to explore the mortality incidence of the uniform 1-μg/m³ PM2.5 increase. In this table, the focus is on incidence by both racial/ethnic group and age group for populations over 65 y of age. The numbers in the table report the total mortality burden due to the uniform PM2.5 increase relative to White American populations, by age group (thus, all values for White American population mortality equal 1). Mortality burden was here defined as the PM2.5-attributed mortality risk from the change in pollution, specific to each demographic. These results demonstrate that under the linear [non-race/ethnicity-specific; Di et al. (2017)] CRF, there is little differentiation in mortality burden across racial/ethnic groups. Native Americans of all age groups had a slightly larger burden than White Americans of the corresponding ages. Black Americans under 75 y of age also incurred a slightly larger mortality burden than White American populations of the same age (though the results appear to equalize for older age groups). Furthermore, using this CRF resulted in Hispanic Americans and Asian Americans having a lower mortality burden than White Americans (with ratios less than 1 for all racial/ethnic group and age group combinations). However, when we used race/ethnicity-specific CRFs, we found significantly different outcomes from a uniform 1-μg/m³ increase in PM2.5 for all people of color except Asian Americans (a group that consistently had a lower pollution-related mortality burden than White Americans, in part due to lower underlying mortality rates; see Table 1). Black Americans of all ages above 65 y incurred mortality burdens up to 3.5 times greater than those of White Americans of the same age (the race/ethnicity-specific burden relative to White Americans, presented in Table 6 for Black Americans, is 2.4-3.6). Similarly, Hispanic Americans sustained a 20%-28% greater mortality burden than White Americans (the race/ethnicity-specific burden relative to White Americans, presented in Table 6 for Hispanic Americans, is 1.205-1.284). Native American populations under the age of 85 y experienced a 36%-55% higher mortality burden than White Americans.
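For clarity, the relative-burden entries of Table 6 are simply per capita attributable deaths for each group divided by the White American per capita value; a sketch with made-up numbers:

```python
# Mortality burden relative to White Americans, as in Table 6:
#   burden_g = deaths_g / pop_g;  ratio_g = burden_g / burden_white.
# All inputs below are invented placeholders, not values from the paper.
deaths = {"White": 800.0, "Black": 260.0, "Hispanic": 120.0}
pop = {"White": 80_000, "Black": 9_000, "Hispanic": 11_000}

per_capita = {g: deaths[g] / pop[g] for g in deaths}
ratios = {g: per_capita[g] / per_capita["White"] for g in per_capita}
print(ratios)  # e.g. {'White': 1.0, 'Black': ~2.9, 'Hispanic': ~1.1}
```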
Policy Scenario 2: MATS Abatement Technology

In this scenario, we modeled how the abatement technology adopted due to MATS affected pollution and, in turn, premature mortality. Figure 8 shows that, had these technologies not been adopted, PM2.5 concentrations would have been on average 0.5 μg/m³ higher across all counties, reflecting roughly a 5% increase. However, we report significant variation in this increase, with changes of up to 3 μg/m³ in counties near the Ohio River. Table 7 presents the changes in ambient concentrations under the MATS scenario and compares them to the uniform change in ambient concentration. On average, the increase is much smaller under MATS than under a uniform 1-μg/m³ change in concentrations, though the increase reaches as high as 50% for some counties. We next calculated the benefits across racial/ethnic groups, in terms of avoided premature mortality, of MATS abatement technology. Table 8 shows the premature deaths avoided and the monetary benefits of MATS abatement technology under the linear and the race/ethnicity-specific CRFs from Di et al. (2017) (both of these results employ race/ethnicity-specific underlying mortality rates). We found that, across all racial/ethnic groups, abatement technology led to 2,631-2,763 avoided premature deaths in older populations in 2015 and 4,079-4,261 in 2016, depending on the CRF employed (see Table 8). [Table 8 note: All estimates use race/ethnicity-specific mortality rate estimates and CRFs from Di et al. (2017), as reported in Table 2. CRF, concentration-response function.] As can be seen in Table 8, the vast majority (72%-85%) of estimated avoided deaths were for White American populations, followed by 10%-23% of avoided deaths for Black American populations (the range depends on the CRF employed). Importantly, the use of the linear, non-race/ethnicity-specific Di et al. (2017) CRF underestimated total avoided deaths due to the MATS policy by about 4.4% (for 2015 and 2016 combined), although this effect was much larger for certain racial/ethnic groups. Specifically, we found that using a linear, non-race/ethnicity-specific approach (as reported in Table 6, columns 1 and 3) would underestimate (relative to using race/ethnicity-specific data and CRFs) the benefits to older (above 65 y of age) Black American communities by almost 60%, whereas the benefits to older White American communities are overestimated by about 14%. Applying a VSL of $8 million to these lives saved, the linear, non-race/ethnicity-specific CRF approach underestimates the benefits of MATS abatement technology by at least $2.7 billion from 2015 to 2016.

Summary of Results

Our analysis demonstrated that the use of non-race/ethnicity-stratified CRF estimates in air pollution policy assessments (an approach that ignores differences in vulnerability to impacts across racial/ethnic subgroups) resulted in underestimates of the health impacts overall and, in particular, of the impact on Black American and Hispanic populations. By using race/ethnicity-specific CRFs, we highlight in this paper the inequities of air pollution-related mortality and demonstrate that the common practice of not differentiating these CRFs by racial/ethnic group undervalues air-quality improvements to racial/ethnic minority communities.
Specifically, we found that using race/ethnicity-specific CRFs and underlying health (mortality) data, rather than the prevalent approach used in most federal policy analyses, a) increased the baseline estimates of aggregate deaths (by almost 10%); b) attributed a greater share of these baseline pollution-related deaths to Black American, Hispanic American, and Asian American populations; and c) implied that, in a reverse scenario of a uniform 1-μg/m³ PM2.5 pollution reduction, benefits would accrue more to Black Americans (~3×), Native Americans (~1.4×), and Hispanic Americans (~1.3×) than to White Americans (see Table 6). For Black American populations in particular, the effect of using race/ethnicity-specific exposure-response functions produces the largest changes in policy impacts: we found that the mortality burden for Black Americans can reach 25% of all deaths, although this group accounts for only 9% of the total older population (see Table 3, column 5). Furthermore, using a real-world example of the MATS policy, we demonstrated how federal policy to improve air quality can produce measurable environmental justice improvements by reducing health disparities from air pollution exposure. It also showed that we will continue to significantly underpredict the environmental justice benefits of energy policies if we do not employ the most up-to-date assessments of CRFs that are specific to different racial/ethnic subgroups. An assessment of the monetary cost of mortality in older populations attributed to a 1-μg/m³ increase in PM2.5 (approximately $113 billion) underlined the general conclusion of other studies that PM2.5 pollution represents an enormous cost to society (Tschofen et al. 2019). For context, this cost amounts to approximately double the estimated cost of all major federal rules issued by the U.S. EPA from 2006 to 2016 (Office of Management and Budget 2017). This research thus highlights important inequities associated with current federal policy approaches concerning pollution exposure for people of color, in particular for Black Americans.

External Validity of Results

Our estimate of approximately 100,000 premature deaths from PM2.5 exposure for populations over the age of 65 is in line with previous estimates. Tessum et al. (2019) estimated 131,000 deaths in 2015 using a different integrated assessment model and the Krewski et al. (2009) CRF. Although this estimate included all persons over the age of 25, older individuals incur the majority of premature mortality risk. Further, Burnett et al. (2018) report a range of premature deaths from PM2.5 in the United States of between 121,000 and 213,000, inclusive of all age groups. Regarding the MATS analysis, our results are similar to the U.S. EPA's estimates at the lower bound of the 2011 MATS Regulatory Impact Analysis (4,400 avoided deaths for 2016, for ages above 30 y) (U.S. EPA 2011a). We would expect to find estimated avoided deaths at the lower bound of the U.S. EPA's estimates for two reasons. First, we estimated only the effects on age groups above 65 y, whereas the U.S. EPA estimated effects for all ages above 30 y. Second, we modeled only the effect of abatement technology on reduced mortality and ignored any air-quality improvements that would arise from the exit of coal plants, which likely contributed to an even greater reduction in deaths.

Strengths and Limitations of Our Work

A strength of this study is that the Di et al.
(2017) CRFs that we employed are from a large, nationally representative longitudinal cohort study including racial/ethnic minority and rural populations, in contrast to Krewski et al. (2009), which studied a predominantly urban White American population. The Di et al. (2017) study also used fine-scale air pollution exposure estimates and may be less likely to be biased by exposure misclassification and more likely to be representative of exposures and effects across the exposure range relevant to the United States than the CRFs presented in Krewski et al. (2009). An additional strength of the present paper is our reliance on multiyear aggregated baseline mortality rates from the CDC. Our use of comprehensive, publicly available mortality data ensures that the spatial distribution of mortality rates reflects robust patterns in risk and is not an artifact of events during a single year. We note several limitations to our work. First, whereas the U.S. EPA's NEI is a comprehensive source of national air pollution emissions (U.S. EPA 2016), our reliance on it introduced uncertainty into the baseline estimates of PM2.5 concentrations. As with any model, the manner in which the AP3 integrated assessment model (see the "Materials and Methods" section) links emissions to concentrations is not perfect. We note, however, that its performance against the U.S. EPA's air-quality system monitoring network for PM2.5 and against chemical transport models has been evaluated in prior work and found to be satisfactory for policy analysis (Gilmore et al. 2019). Another limitation to our work is the geographic aggregation at the county level, given the structure of AP3. Intracounty PM2.5 concentrations may vary significantly, and Black Americans are generally more likely to live near highways and other major emissions sources within a county (Tian et al. 2013; Perlin et al. 1999). Furthermore, county-level mortality rates may overlook significant variation in mortality rates within the county itself, especially in larger counties (Southerland et al. 2021). Overall, this points to the need for investigation of these impacts at a scale finer in resolution than the county, and further research on identifying subcounty variation in pollution exposure is warranted (such as that provided by the U.S. EPA's Downscaler Model; U.S. EPA 2017). However, that research is outside the scope of this paper, and our current estimates are likely a lower bound on the actual health impacts of air pollution on Black American communities, because county averages will smooth over these hyperlocal differences in exposure and susceptibility.

Implications for Policy

Studies have shown that historically racist policies such as redlining and the siting of highways and polluting facilities have resulted in racial/ethnic minority and other disadvantaged populations living in areas with a disproportionately higher number of emitting facilities (Mikati et al. 2018; Banzhaf et al. 2019) and facing a higher PM2.5 exposure burden in comparison with White American populations. In addition, policies and actions to reduce air pollution are generally concentrated in wealthier and less diverse populations, resulting in widening disparities (Jbaily et al. 2020; Richmond-Bryant et al. 2020). Yet the issues of air pollution-related health impact inequities extend beyond exposure alone.
Many of the same racist policies, institutional practices, and poor cultural representations have caused disinvestment in racial/ethnic minority communities, resulting in differential quality and distribution of housing, transportation, economic opportunity, education, food, access to health care, and beyond. [Figure note: The "Linear CRF" corresponds to the "All Racial/Ethnic Groups" CRF in Di et al. (2017); see Table 2. CRF, concentration-response function.] All of these inequities manifest in health disparities, higher underlying mortality rates, and greater susceptibility to pollution-caused disease (Morello-Frosch et al. 2011; Payne-Sturges et al. 2021). During the COVID-19 pandemic of 2020-2021, these same pathways contributed to Black Americans facing an inequitably larger mortality burden from SARS-CoV-2 (2 times larger than for White Americans; APM Research Lab 2021) (Afifi et al. 2020; Doumas et al. 2020; Persico and Johnson 2021), a disparity that was laid bare when race/ethnicity-specific data were collected and used. Though these issues are outside the scope of our study, our findings demonstrate the importance of collecting and using the most up-to-date race/ethnicity-specific data when making policy decisions. In fact, the U.S. EPA's own integrated science assessment concludes that "the evidence is adequate to conclude that non-Whites, particularly Blacks, are at increased risk for PM2.5-related health effects based on studies examining differential exposure and health effects" (U.S. EPA 2019). Our results emphasize the importance of conducting health impact assessments of air quality-related policies with a recognition of the underlying differences across racial/ethnic groups. An understanding that the marginal effect of air-quality changes will not affect all racial/ethnic groups in the same way is critical, particularly given the nation's pervasive and systemic racial/ethnic injustices. Assuming that all racial/ethnic groups are equally affected by air pollution will continue to contribute to the injustices faced by people of color, especially in light of policies that can alter the magnitude or distribution of pollution across racial/ethnic groups. Fundamentally, the choice of which vital statistics and CRFs to use when estimating the effect of any air quality-related policy will directly affect its calculated benefits and the resulting environmental justice implications. Several large-scale and notable federal benefit-cost analyses that assess PM2.5 mortality impacts generally use mortality rates averaged across all racial/ethnic groups and pair these with linear, non-race/ethnicity-specific CRFs (see, for example, U.S. EPA 2011a, 2011b, 2012), leading to a misrepresentation of the race/ethnicity-specific outcomes of policies. Our results have particularly important implications for the NAAQS program of the CAA, which regulates particulate pollution. The law explicitly mandates that these standards must be set at a level that protects public health with an "adequate margin of safety" (National Primary and Secondary Ambient Air Quality Standards). In particular, the U.S. EPA must also consider the impacts on the health of vulnerable populations (Executive Order 12898; Executive Order 14008). Our study demonstrates the policy importance of recognizing that air pollution risks differ by racial/ethnic group; thus, the parameterization of overall national standards should be tailored to protect the most vulnerable subgroup.
In addition, the NAAQS program is structured around air-quality criteria that are to be based on the best available science. Until recently, epidemiological studies were unable to reliably disaggregate CRFs specific to vulnerable subgroups. Given the recent innovations and statistical power of Di et al. (2017), and in light of the associated racial/ethnic disparities and underestimated aggregate mortality impacts, the U.S. EPA should take these disparate impacts into consideration when determining what constitutes an adequately protective "margin of safety" in future NAAQS reviews. This paper demonstrated the importance of choosing the most accurate specification when estimating the health effects of air-quality policy changes, because underlying mortality rates, pollution exposure, and pollution susceptibility differ significantly across racial/ethnic groups. Using generalized health data to estimate the benefits of air-quality policies will lead to incorrect estimates and will likely underestimate the benefits of these policies to most racial/ethnic minorities. Thus, it is essential that federal agencies perform regulatory impact analyses with data that are as granular as possible, both in terms of CRFs and mortality rates. Methods should adequately represent differences in health outcomes by demographic group and across ambient pollution levels, as demonstrated herein. Wherever such breakdowns are not possible, gap-filling using prior methods could be employed. These changes may significantly improve future air-quality policy outcomes and help reduce environmental justice disparities.
Opposing Signaling of ROCK1 and ROCK2 Determines the Switching of Substrate Specificity and the Mode of Migration of Glioblastoma Cells

Despite current advances in therapy, the prognosis of patients with glioblastoma has not improved sufficiently in recent decades. This is due mainly to the highly invasive capacity of glioma cells, and little is known about the mechanisms underlying this particular characteristic. While the Rho-kinase (ROCK)-dependent signaling pathways involved in glioma migration have yet to be determined, they show promise as candidates for targeted glioblastoma therapy. There are two ROCK isoforms: ROCK1, which is upregulated in glioblastoma tissue compared to normal brain tissue, and ROCK2, which is also expressed in normal brain tissue. Blockade of both ROCK isoforms with pharmacologic inhibitors regulates the migration process. We examined the activities of ROCK1 and ROCK2 using knockdown cell lines and the newly developed stripe assay. Selective knockdown of either ROCK1 or ROCK2 exerted antidromic effects on glioma migration: ROCK1 deletion altered substrate-dependent migration, whereas ROCK2 deletion did not. Furthermore, ROCK1 knockdown reduced cell proliferation, whereas ROCK2 knockdown enhanced it. Along the signaling pathways, key regulators of the ROCK pathway are differentially affected by ROCK1 and ROCK2. These data suggest that the balanced activation of the ROCKs is responsible for the substrate-specific migration and the proliferation of glioblastoma cells.

Introduction

Glioblastoma multiforme (World Health Organization grade IV), the most common brain tumor in humans, has a median survival time of only 12-14 months. One reason for this poor prognosis is the ability of single tumor cells to invade diffusely into the neighboring brain parenchyma. After tumor resection followed by adjuvant radiation and chemotherapy, 90 % of patients suffer recurrences within months, usually in the tissue adjacent to the resection area [1]. Therefore, any efficient therapeutic approach must reach cells that have invaded far beyond the radiologically and intraoperatively visible borders, because cells migrate along the white matter tracts and the basement membranes of blood vessels [2-6]. Key players in the process of glioma migration appear to be the Ras homolog gene family (Rho)-associated protein kinases (ROCKs) [7-9]. The two ROCK isoforms, ROCK1 and ROCK2, act downstream of the small GTPase Rho member A (RhoA). The ROCK1 and ROCK2 peptides display similarities in the kinase activity domain at the N-terminus, the coiled-coil domain, and a pleckstrin homology (PH) domain at the C-terminus. GTP-bound RhoA activates the Rho kinases by displacement of the PH domain, thus enabling different substrates to bind to the kinase domain [10,11]. LIM kinase (LIMK) is activated by phosphorylation through the ROCKs. This activation leads to phosphorylation of cofilin, which in turn inhibits actin depolymerization, leading to a consequent increase in actin polymerization. Furthermore, the ROCKs also phosphorylate the myosin-binding subunit of myosin phosphatase, leading to inactivation of the phosphatase activity [12]. The ROCKs also mediate, at least in part, a translocation of Rho signaling to the nucleus, where Rho regulates the functions of various transcription factors, including the four and a half LIM domains protein (FHL2) and the estrogen receptor (ER) [13,14].
While integrins and matrix metalloproteases are typically involved in the mesenchymal type of migration, the ROCKs are involved in amoeboid movement and migration [15]. Recently, we established a co-culture migration assay that allows cells to migrate along myelinated axons, with a view to examining the molecular mechanisms underlying tumor cell migration along white matter tracts using pharmacological ROCK inhibition [16]. Furthermore, we established a modified stripe assay to determine the substrate specificity changes of glioma cells under ROCK inhibition using the unselective inhibitor Y27632 [17]. In the present study, we investigated whether shRNA-induced inhibition of either ROCK1 or ROCK2 alone influences glioma migration and proliferation, and we elucidated the differences between these two ROCKs in glioblastoma cell migration, substrate preference, and cell proliferation. We were able to show that ROCK1 alone is capable of inducing the migratory effects and that ROCK2 exerts effects opposing those of ROCK1.

Materials and Methods

Stable Transfection

Four different shRNA oligonucleotides for ROCK1 inhibition [SureSilencing shRNA plasmid ROCK1 hygromycin (KH01966H)], including one negative control vector, and four different shRNA oligonucleotides for ROCK2 knockdown [SureSilencing shRNA plasmid ROCK2 hygromycin (KH09606H)] were purchased from SA Biosciences (Hilden, Germany). Cells were seeded at a density of 0.8×10⁵ in 24-well plates with 500 μl of culture medium the day before shRNA transfection. Cells were transfected by adding Attractene Transfection Reagent (Qiagen) according to the manufacturer's recommendations and maintained in culture medium for 24 h before selection with 600 μg/ml hygromycin. After colony formation, at least 60 independent clones were chosen for each vector sequence. The extent of RNA knockdown under these conditions was determined by quantitative real-time polymerase chain reaction (qRT-PCR) analyses, Western blotting, and immunostaining.

Immunofluorescence

After permeabilizing the cells with 0.1 % Triton X-100 (Sigma) in phosphate-buffered saline (PBS), they were blocked with 0.5 % bovine serum albumin in PBS. The rabbit ROCK1 and ROCK2 antibodies were applied to separate groups of cells at a dilution of 1:100 and incubated at 4°C overnight. A tetramethylrhodamine isothiocyanate-conjugated anti-mouse antibody (T1689, Sigma, 1:200 dilution) was used as the secondary antibody. Actin filaments were visualized by incubating the cells for 30 min with fluorescein isothiocyanate (FITC)-phalloidin. The cell nuclei were counterstained with 4′,6-diamidino-2-phenylindole (DAPI) and then mounted in the anti-fading medium Mowiol (Merck, Darmstadt, Germany). Fluorescence was documented using an Axiophot microscope (Zeiss) with AxioVision software (Zeiss).

qRT-PCR Analyses

Total RNA was isolated from sub-confluent cultured cells using an RNeasy Plus Mini kit (Qiagen). Total RNA (1 μg) was transcribed into cDNA with the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA, USA) in a reaction volume of 20 μl. After cDNA synthesis, 1 μl of the reaction volume was used for qRT-PCR measurements with the following SYBR Green primers:

AAAAATGGACAACCTGCTGC (ROCK1, forward)
GGCAGGAAAATCCAAATCAT (ROCK1, reverse)
CGCTGATCCGAGACCCT (ROCK2, forward)
TTGTTTTTCCTCAAAGCAGGA (ROCK2, reverse)

Relative RNA levels were calculated and compared between shRNA- and control-transfected cells.
The data were normalized relative to those for GAPDH using the following primers:

The relative expressions were calculated using the 2^−ΔΔCT method. All measurements were conducted in duplicate, and the experiments were repeated at least three times.

Wound-Healing Assay

Cell migration was analyzed using a wound-healing assay. Briefly, cells were seeded at a density of 0.8×10⁵ per well in a 24-well plate. After 24 h, the cell monolayers were scratched using the back side of a standard 100-μl pipette tip. After being washed three times with PBS, photomicrographs were taken of the scratches, including the flanking front lines of cells (at ×40 magnification), and the cultures were then incubated under standard conditions. Migration into the scratched area was documented at 24 and 48 h after wounding. Scratch closure by migrating cells was compared between ROCK1/ROCK2 knockdown cells and control-transfected cells. Wound closure was evaluated relative to the total area of wounding by counting the migrating cells using a light microscope (Zeiss) and TScratch software (CSE Lab, Zurich, Switzerland). Experiments were performed independently three times, with four to eight scratches being evaluated for each experimental condition.

Monolayer Migration and Proliferation Assay

Permanox LabTek eight-well chamber slides (Nunc, Langenselbold, Germany) were coated with poly-L-lysine and Matrigel solution (BD Biosciences) 24 h before use. The chambers were filled with a 200-μl volume of pre-warmed DMEM, after which sterile glass sedimentation cylinders were placed in each chamber. Cells in DMEM were seeded at a density of 2×10³ into the lumen of the cylinders in a volume of 2 μl. The cylinders were removed after the cells had been allowed to adhere to the substrate (16-24 h after seeding). The resulting colonies were photographed immediately after removal of the cylinders and again at intervals of 24 and 48 h thereafter. Cell migration was evaluated by measuring the increase in the area of the colonies using ZEN software (Zeiss). The change in the area of each colony at each time point was standardized against the colony area measured from the photograph taken immediately after removal of the cylinders. Cellular proliferation of ROCK1 and ROCK2 knockdown cells was assessed with the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay [18].

Stripe Assay

The principle of the stripe assay is that the cells can choose between different surfaces during migration, so that any differences in affinity, motility, and cell proliferation can be observed [17]. The cells were seeded onto the striped membranes at a concentration of 0.8×10⁵ cells/ml and allowed to migrate for 48 h. They were then fixed in 4 % paraformaldehyde and transferred to glass coverslips. The stripes were visualized by staining the substrates with FluoSpheres before they were applied to the membranes; the cells were stained with DAPI after fixation. Cell quantification was achieved by first photographing the membranes (at least 15-20 photographs per membrane) and then counting the cells on the different stripes using ImageJ software. The chosen stripe components were homogenized rat retina free of myelin and myelin-containing perinatal rat brain. Biomatrix (BM), which is commercially available, was chosen as the extracellular matrix (Serva, Mannheim, Germany).

Statistics

The results are presented as the mean ± SEM percentage values. Statistical significance was analyzed using the paired Student's t test; the level of statistical significance was set at p < 0.05.

Results

Expression Analyses and Stable Knockdown of the ROCKs

Seven glioma cell lines were first evaluated to screen for the best cell line for stable knockdown of both ROCKs. At the mRNA level, the 86HG39 and D54MG cell lines displayed the strongest expression of ROCK1, while the strongest expression of ROCK2 was found in the U373MG and 86HG39 cell lines (Fig. 1a, b). The expression levels of the ROCK1 and ROCK2 proteins differ from the mRNA results; here, we found the highest expression of ROCK1 in the cell lines D54MG and U373MG and of ROCK2 in D54MG, 86HG39, and U343MG (Fig. 1c, d). Because of the expression levels and the genetic characteristics of the cell lines [19,20], we decided to use the D54MG and 86HG39 human glioma cell lines for further investigations. To reveal the cellular location of ROCK1 and ROCK2 in both cell lines, we performed immunofluorescence staining (Fig. 1e, f). Both proteins show a cytoplasmic and membrane-associated location in human glioblastoma cell lines. To avoid off-target effects, we used two different shRNA vector sequences (referred to as seq1 and seq3 for ROCK1 and seq2 and seq4 for ROCK2) and a vector control to induce the knockdown, and at least 60 different clones were screened for each vector construct and each cell line. The reduction of both ROCK1 and ROCK2 expression in the selected clones was verified using qRT-PCR analyses and Western blotting. ROCK1 mRNA expression in the D54MG cell line was reduced to 16.6 % for sequence 1 clone 4 (D54MG seq1) and to 14.4 % for sequence 3 clone 13 (D54MG seq3). The knockdown of ROCK1 was more efficient in the 86HG39 cell line, with an expression level of 7.0 % for sequence 1 clone 12 (86HG39 seq1) and of 9.4 % for sequence 3 clone 10 (86HG39 seq3; Fig. 2a). ROCK2 mRNA expression in the D54MG cell line was reduced to 13.4 % for sequence 2 clone 2 (D54MG seq2) and to 5.7 % for sequence 4 clone 39 (D54MG seq4); that in the 86HG39 cell line was 2.7 % for sequence 2 clone 52 (86HG39 seq2) and 3.9 % for sequence 4 clone 3 (86HG39 seq4; Fig. 2b). A distinct reduction in the protein expression of ROCK1 (Fig. 2c) and ROCK2 (Fig. 2d) was also found in all four clones. D54MG seq1 had a ROCK1 protein level of 34.0 % and seq3 of 74.4 %; in the cell line 86HG39, we found ROCK1 protein levels of 79.7 % (seq1) and 47.0 % (seq3; Fig. 2f). The ROCK2 protein level was also affected by ROCK1 knockdown (Fig. 2g): we found reduced ROCK2 expression in D54MG seq1 (79.5 %) and in 86HG39 seq3 (39.8 %). The knockdown of ROCK2 led to ROCK2 protein levels of 51.7 and 87.8 % for D54MG seq2 and seq4 and of 53.9 and 33.2 % for 86HG39 seq2 and seq4. Analysis of ROCK1 protein expression in the ROCK2 knockdown clones showed no changes. The inhibitor Y27632 affects both kinases, ROCK1 and ROCK2. ROCK1 protein expression was reduced to 63.3 % in the D54MG cell line and to 61.5 % in the 86HG39 cell line. The ROCK2 protein level showed a reduction to 58.1 % for D54MG and only a slight effect for 86HG39, to 98.2 % (Fig. 2e-g).

ROCK1 and ROCK2 Influence Cell Proliferation and Change Cell Morphology

Next, we determined whether the effects of ROCK1/ROCK2 knockdown on cell migration are based on changes in cell proliferation. A significant decrease in cellular growth was observed in ROCK1-deficient cells relative to the vector controls (set as 100 %; Fig. 3b).
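For illustration, the 2^−ΔΔCT quantification and the paired t test described above can be sketched as follows (all Ct values and migration percentages in the example are invented placeholders, not data from this study):

```python
import numpy as np
from scipy import stats

def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCT method:
    dCT  = Ct(target) - Ct(reference gene, e.g. GAPDH);
    ddCT = dCT(knockdown sample) - dCT(vector control)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical duplicate Ct values (ROCK1 vs. GAPDH), knockdown vs. control:
kd = [rel_expression(27.8, 18.1, 25.0, 18.0),
      rel_expression(27.5, 18.0, 25.1, 18.2)]
print(np.mean(kd))  # fraction of control expression (control = 1.0)

# Paired Student's t test on matched measurements, e.g. wound closure of
# control vs. knockdown scratches from the same three experiments:
control = [100.0, 98.0, 102.0]
knockdown = [145.0, 151.0, 139.0]
t, p = stats.ttest_rel(control, knockdown)
print(t, p)  # significant if p < 0.05
```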
Immunofluorescence staining for ROCK1/ROCK2 and FITC-phalloidin was used to determine whether knockdown of ROCK1 and ROCK2 alters the cellular phenotype. All of the ROCK1 knockdown clones displayed changes in cell morphology and developed a mesenchymal-like phenotype. Inhibition of ROCK1 and ROCK2 led to several cytoskeletal and morphologic changes, including inhibition of stress fibers, an increase in the number and length of cell processes, and an increase in the degree of membrane ruffling (Fig. 2h, i). The knockdown cells displayed a stellate appearance with an increased number and length of actin-positive membrane ruffles. These data show that there is no distinct difference in the change in cell phenotype between ROCK1 and ROCK2 knockdown cells.

Differential Effects on Cell Migration of ROCK1 and ROCK2

Functional analysis of the effects of ROCK1 and ROCK2 knockdown on cell migration was conducted with uncoated wound-healing migration assays and with laminin-coated radial monolayer assays. Simultaneous inhibition of both ROCK1 and ROCK2 was performed in a monolayer migration assay on a laminin-coated surface with D54MG and 86HG39 cells using the ROCK inhibitor Y27632, and a significant reduction in cell migration was found. In the D54MG cell line, the addition of 100 μM Y27632 resulted in migration rates of 46.8±6.4 and 51.3±8.1 % at 24 and 48 h, respectively. In the 86HG39 cell line, simultaneous inhibition of both ROCK1 and ROCK2 reduced the cell migration rates to 38.7±3.5 and 68.2±2.3 % at 24 and 48 h, respectively (Fig. 4a).

ROCK1 and ROCK2 knockdown cells were then subjected separately to repeated migration assays in order to establish whether the reduction in migration observed when both ROCK1 and ROCK2 are inhibited depends on the reduction of both kinases or whether reduction of either kinase is sufficient to account for the effect. When ROCK1 protein synthesis was inhibited in the D54MG cell line, the cells migrated faster on an uncoated surface, at rates of 168.9±2.5 % (seq1) and 160.8±2.4 % (seq3) at 24 h and of 167.0±1.9 % (seq1) and 156.8±1.1 % (seq3) at 48 h relative to the migration rate of the control cells (set as 100 %; ±1.0 % at 24 h, ±3.1 % at 48 h). Similar results were observed with the 86HG39 cell line, for which the wound closing rates were 136.2±0.6 % (seq1) and 145.7±0.4 % (seq3) at 24 h and 156.7±0.9 % (seq1) and 144.6±0.6 % (seq3) at 48 h (Fig. 4b). Thus, knockdown of ROCK1 leads to a highly significant increase in cell migration on an uncoated surface.

In contrast, ROCK1 knockdown cells displayed a significant decrease in cell migration on the laminin-coated radial monolayer. With the D54MG cell line, the migration rate was reduced to 68.0±3.7 % (24 h) and 53.2±4.8 % (48 h) for seq1 and to 58.5±6.3 % (24 h) and 47.7±5.2 % (48 h) for seq3 (relative to the control, set as 100 % for all time points). The 86HG39 cell line exhibited a comparable inhibition of cell migration on the laminin-coated surface under ROCK1 knockdown, whereby the migration rates were reduced (relative to control, set as 100 %) to 52.7±2.6 % (24 h) and 51.9±5.3 % (48 h) for seq1 and to 44.3±2.3 % (24 h) and 58.5±6.2 % (48 h) for seq3 (Fig. 4d). Together, these findings show that ROCK1 knockdown leads to a substrate-dependent migration effect, with enhanced migration on an uncoated surface and reduced migration on a laminin-coated surface (Fig. 4b, d).
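The migration percentages above come from comparing wound or colony areas over time; a small sketch of that computation (relative migration expressed as percent of the control, from hypothetical area measurements) follows.

```python
def relative_migration(area_t0, area_t, ctrl_area_t0, ctrl_area_t):
    """Migration of a clone expressed as % of the control migration.

    Migration is measured as the reduction of the open wound area
    between t0 and a later time t, for the clone and for the control.
    """
    closed = area_t0 - area_t                 # area closed by the clone
    ctrl_closed = ctrl_area_t0 - ctrl_area_t  # area closed by the control
    return 100.0 * closed / ctrl_closed

# Hypothetical wound areas in pixels from TScratch-like measurements.
print(f"{relative_migration(52000, 18000, 51000, 31000):.1f} % of control")
```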
Fig. 1 (caption excerpt): … increased ROCK1 protein expression in the cell lines U373MG and D54MG and increased ROCK2 expression in the cell lines 86HG39, D54MG, and U343MG. Immunofluorescence staining of 86HG39 and D54MG glioma cell lines using antibodies raised against ROCK1 (e) and ROCK2 (f) and FITC-phalloidin (green; to stain the cytoskeleton) shows the cytoskeleton- and membrane-associated localization of ROCK1 and ROCK2 in both cell lines used.

In contrast to ROCK1 knockdown, ROCK2 knockdown increased the migration rate in both assays. Migration on the uncoated surface was significantly increased (relative to the control of 100±2.1 %) for both cell lines and for all tested clones, at 141.0±16.8 % for 86HG39 seq4 at 48 h and 273.0±16.8 % for 86HG39 seq2 at 24 h (Fig. 4c). In the radial monolayer migration assay on a laminin-coated surface, the migration rate relative to the control was 123.6±11.8 % for D54MG seq4 at 48 h and 213.6±11.8 % for 86HG39 seq4 at 24 h. These data show that the reduction in ROCK2 expression in both glioblastoma cell lines led to a substrate-independent increase in migration (Fig. 4e). The cell migration effects of the two ROCKs thus differ in that those of ROCK1 are substrate-dependent whereas those of ROCK2 are independent of the substrate.

Knockdown of ROCK1, But Not ROCK2, Leads to a Significant Change in the Substrate Specificity of Tumor Cells

The stripe assay, which allows cells to be confronted simultaneously with two different substrates [21], was used to further examine the effects of the ROCKs on cell migration. In a previous study, we showed that glioma cells have a distinct preference for the extracellular matrix compared to all other substrates [17]. To reflect the components of an intact 3D brain environment, membrane fractions from unmyelinated rat retina were used to represent gray matter, along with purified myelin and BM, and all three substrates were tested against each other using the ROCK1 and ROCK2 knockdown cell lines and the control cell lines. Furthermore, the ROCK inhibitor Y27632 was applied to the ROCK1 knockdown cells and the controls to elucidate whether Y27632 has an additive effect. The untreated control cells of both cell lines used (D54MG and 86HG39) again exhibited a distinct preference for BM, followed by myelin. All of the ROCK1 knockdown clones of these cell lines, with two different shRNA vector sequences, changed their preference for BM when tested against myelin. BM and myelin contained 70.4±1.6 and 29.6±1.63 % of the D54MG control cells, respectively. The ROCK1 knockdown clone seq1 seemed to lose its preference for BM, since the cells were distributed similarly on the two types of stripe (56.3±2.51 % on BM vs. 43.7±2.51 % on myelin). Comparable results were obtained for clone seq3, with 53.3±2.44 % of the cells located on BM and 46.7±2.44 % on myelin. The application of Y27632 to ROCK1 knockdown cells resulted in a complete reversal of their preference. Even in the Y27632-treated control cells (i.e., without ROCK1 knockdown), only 39.7±1.93 % of the cells were located on BM while 60.3±1.93 % were on myelin. This change in substrate preference was even more marked in the Y27632-treated ROCK1 knockdown cells (seq1, 36.9±2.44 % on BM vs. 63.1±2.44 % on myelin; seq3, 35.5±2.02 % on BM vs. 64.5±2.02 % on myelin). This indicates that while ROCK1 alone can influence the substrate specificity, the inhibition of both ROCKs not only changes the specificity from approximately 70/30 to 50/50 but also completely switches the cells' preference toward myelin. The 86HG39 cell line yielded comparable results (Fig. 5), with a switch of the cells' preference from BM to myelin for the ROCK inhibitor and the ROCK1 knockdown clones compared to untreated 86HG39 control cells.

Fig. 2: Verification of ROCK1 and ROCK2 knockdown in the human glioblastoma cell lines 86HG39 and D54MG at the mRNA and protein levels. Knockdown of ROCK1 using two different shRNA sequences (seq1 and seq3) and a vector control (set as 100 %) in D54MG and 86HG39 cell lines at the mRNA level (a) shows a reduction of the ROCK1 mRNA expression level in D54MG cells to 16.6 and 14.4 % and in 86HG39 cells to 7.0 and 9.4 %. At the protein level (c), densitometric measurement (f) also reveals a reduction in ROCK1 protein expression, to 34.0 % (seq1) and 74.4 % (seq3) for D54MG and to 79.7 % (seq1) and 47.0 % (seq3) for 86HG39. ROCK2 expression in ROCK1 knockdown clones is also affected, as D54MG seq1 shows 79.5 % ROCK2 expression and 86HG39 seq3 39.8 %. Quantification of ROCK2 knockdown with two different shRNA sequences (seq2 and seq4) and a control (set as 100 %) at the mRNA level (b) reveals reduced ROCK2 expression levels of 13.4 and 5.7 % for D54MG cells and 2.7 and 3.9 % for 86HG39 cells. At the protein level (d, g), we found reductions of ROCK2 protein expression to 51.7 % (seq2) and 87.8 % (seq4) for D54MG and 53.9 % (seq2) and 33.2 % (seq4) for 86HG39. ROCK1 protein expression was not influenced by ROCK2 knockdown. Using the inhibitor Y27632 (e), we could verify a knockdown of both ROCK1 and ROCK2 at the protein level in both cell lines. Immunofluorescence staining of the 86HG39 and D54MG cell lines with ROCK1 (h) and ROCK2 (i) knockdown shows changes in cell morphology (white arrows) in ROCK knockdown cells relative to control cells with normal ROCK expression.

Fig. 4: Different migration assays using the D54MG and 86HG39 glioblastoma cell lines with the ROCK inhibitor Y27632 and ROCK1/ROCK2 knockdown. In a coated radial monolayer migration assay, the cells displayed a reduction in cell migration when treated with the ROCK inhibitor Y27632 (a). Cells with a stable ROCK1 knockdown exhibited enhanced cell migration in a wound-healing assay on an uncoated surface (b), but a significant decrease in cell migration in a radial monolayer assay on a laminin-coated surface (d). Glioma cell lines with a stable ROCK2 knockdown exhibited an increase in migration on both the uncoated surface (c) and the laminin-coated surface (e) (*p < 0.05; **p < 0.001, n = 3, mean±SEM).

To clarify the preference of the cells toward myelin, in a third step, the retina was compared with purified myelin. In this approach, ROCK1 inhibition also led to a switch of the substrate specificity of all tested ROCK1 clones in both cell lines (Fig. 5i, j). An additional dose of 100 μM Y27632 led to a further increase in the change in substrate specificity of the ROCK1 knockdown clones (Fig. 5i, j). Whether these additive effects of the ROCK inhibitor Y27632 are attributable to a combination of ROCK1 and ROCK2 inhibition or are purely the effect of a nearly 100 % inhibition of ROCK1 warrants further investigation. The cell lines with a stable ROCK2 knockdown exhibited only slight changes in substrate preference compared to the control cells.
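The stripe-assay percentages quoted above are simply per-stripe cell counts expressed as shares of the total; a short sketch of that computation, using hypothetical ImageJ counts, is shown below.

```python
import statistics

def stripe_preference(counts_a, counts_b):
    """Per-photograph preference for substrate A vs. B, as percentages.

    counts_a / counts_b are DAPI-stained cell counts on the two stripe
    types in each photograph (15-20 photographs per membrane).
    """
    shares = [100.0 * a / (a + b) for a, b in zip(counts_a, counts_b)]
    mean = statistics.mean(shares)
    sem = statistics.stdev(shares) / len(shares) ** 0.5
    return mean, sem

# Hypothetical counts on BM vs. myelin stripes across five photographs.
bm = [141, 150, 138, 160, 149]
myelin = [60, 58, 66, 61, 72]
mean, sem = stripe_preference(bm, myelin)
print(f"BM: {mean:.1f}±{sem:.2f} %  myelin: {100 - mean:.1f}±{sem:.2f} %")
```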
Comparison of the BM and myelin substrates for the ROCK2 knockdown D54MG cells revealed cell contributions (BM vs. myelin) of 72.3±1.74 % vs. 27.7±1.74 % (Fig. 6a). Testing BM against the retina revealed a preference for BM in control cells (BM vs. retina: 72.1±1.74 % vs. 27.9±1.74 % and 72.7±1.86 % vs. 27.3±1.86 %) and a slight shift, but no significant change in preference, for all four ROCK2 knockdown clones, to 57.10±0.53 % vs. 42.90±0.53 % (BM vs. retina). Furthermore, ROCK2 knockdown enhanced the preference toward the retina but did not induce the same change in substrate preference as found for ROCK1 knockdown (Fig. 6b).

Signaling of ROCK Knockdown

The pathways involved in the effects of ROCK knockdown in glioblastoma cells were analyzed using Western blot analysis to determine the protein expression levels of phosphoLIMK, phosphoRac1/Cdc42, cyclin D1, Akt1, phosphoAkt, β1-integrin, β-catenin, phosphoERK1/2, and RhoA in cells with a stable ROCK1 or ROCK2 knockdown (Fig. 7a, b), as well as in glioma cells treated with 100 μM of the Rho kinase inhibitor (Fig. 7c).

Fig. 8: Second part of the signaling pathway analyses of cells with altered ROCK1 and ROCK2 expression. Here, we analyzed the protein expression of RhoA (d), β1-integrin (e), β-catenin (f), pERK1 (g), and pERK2 (h) using densitometric measurement in cells with ROCK1 knockdown (a), ROCK2 knockdown (b), and cells treated with the Rho kinase inhibitor Y27632 (c).

Although a reduced level of ROCK2 was observed in cells with a stable ROCK1 knockdown, indicating a relationship between the expression levels of ROCK1 and ROCK2, downregulation of ROCK2 did not affect the expression level of ROCK1 (Fig. 2c, d). ROCK1 and ROCK2 affected the regulation of phosphoRac1/Cdc42 (Ser71) differently: a reduction of ROCK1 led to a decrease in the phosphorylation status of Rac1/Cdc42 in three out of four clones, whereas a reduction of ROCK2 had the opposite effect in all four clones. Only 86HG39 seq1 cells yielded an enhancement of Rac1/Cdc42 phosphorylation under ROCK1 knockdown. ROCK1 knockdown reduced cyclin D1 expression, except in line 86HG39 seq1, and ROCK2 knockdown increased cyclin D1, except in 86HG39 seq4. Using pERK1/2, we could show that ROCK1 knockdown leads to an inhibition of ERK activity, whereas ROCK2 knockdown has the opposite effect and enhances the phosphorylation of ERK1/2 in all tested clones (Fig. 8g, h). ROCK1 knockdown slightly affects Akt1 expression, while phosphorylated Akt1 normalized to Akt1 expression displayed a reduction in the ROCK1 knockdown clones D54MG seq1 and seq3, but an increase in all four ROCK2 knockdown clones (Fig. 7i). With the inhibitor Y27632, both the D54MG and 86HG39 cell lines show a reduction in phosphoAkt1 expression compared to the control cells. ROCK1 and ROCK2 knockdown decreased the expression of RhoA in the D54MG cell line; the 86HG39 cell line showed inconsistent results. Both ROCK1 and ROCK2 influenced β1-integrin and β-catenin expression: the reduction of ROCK1 or ROCK2 expression led, in three out of four tested clones, to a reduction in β1-integrin and β-catenin expression as well. When both kinases were inhibited, β1-integrin protein expression was reduced in the D54MG and 86HG39 cell lines, but β-catenin expression was reduced only in the 86HG39 cell line, not in D54MG (Fig. 7d–f).
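As an illustration of the densitometric readouts quoted throughout, the sketch below normalizes a band intensity to a loading control and expresses it relative to the vector control (set as 100 %); all intensity values are hypothetical.

```python
def relative_density(target, loading, target_ctrl, loading_ctrl):
    """Densitometric protein level as % of the vector control.

    Each band intensity is first normalized to its lane's loading
    control, then expressed relative to the control lane (= 100 %).
    """
    return 100.0 * (target / loading) / (target_ctrl / loading_ctrl)

# Hypothetical band intensities (arbitrary densitometry units):
# ROCK1 band and loading control in a knockdown lane vs. the control lane.
print(f"{relative_density(1200, 3500, 3600, 3550):.1f} % of control")
```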
In the glioma cell lines D54MG and 86HG39 treated with 100 μM Rho kinase inhibitor, nearly all tested proteins displayed the same regulation as in the cells with ROCK1 knockdown. Only for β-catenin did we find no significant change in expression in the D54MG cell line treated with Y27632.

Discussion

This study has shown that inhibition of ROCK1 and ROCK2 has a significant and differential influence on cell migration and proliferation. shRNA was used to downregulate ROCK1 and ROCK2 expression in the D54MG and 86HG39 glioblastoma cell lines, and the effects on cell migration, proliferation, substrate-dependent migration, and signaling pathways were compared. After verifying the knockdown at both the mRNA and protein levels in all cell lines and clones used for ROCK1 and ROCK2 knockdown, we examined changes in cell shape and morphology. There was no phenotypic difference between ROCK1 and ROCK2 knockdown cells with respect to morphology, indicating that the two ROCKs exert comparable influences on the cell cytoskeleton.

Inhibition of ROCK1 resulted in decreased proliferation, whereas inhibition of ROCK2 had the opposite effect, significantly enhancing proliferation relative to the control cells and regulating cyclin D1. A comparable role of the ROCKs in regulating cyclin D1 is apparent in fibroblasts [22], corneal epithelial cells, and hepatic stellate cells [23,24], where it mediates the canonical Wnt/TCF pathways involving β-catenin [25,26]. In contrast to the opposing effects of the ROCKs described here, only ROCK2 was involved in changes of cell proliferation in SH-SY5Y cells [27], indicating different pathways among cell lines. The influence of both Rho kinases on proliferation was additionally shown by analyzing ERK1/2 phosphorylation: ROCK1 knockdown reduces the level of phosphorylation, whereas ROCK2 knockdown leads to an increase in ERK activity.

Expanding on the effects on cell proliferation, we analyzed the effects of ROCK1 and ROCK2 knockdown on cell migration using different migration substrates that are natural partners of migrating cells within the brain. Strikingly, a significant increase in cell migration was shown for both ROCK1 and ROCK2 knockdown cells in the wound-healing scratch assay, contrasting with previous reports using different inhibitors [28,29]. ROCK1 knockdown cells plus the ROCK inhibitor Y27632 migrated more slowly than the control cells on laminin, whereas cells with a ROCK2 knockdown migrated faster than the control and ROCK1 knockdown cells. These findings indicate that the migration effect of ROCK1 is substrate-dependent, while that of ROCK2 is not.

This substrate dependency of glioma cells with altered ROCK expression was further scrutinized with a stripe assay [21,30,31]. Untreated cells exhibited a significant substrate preference for BM [17]. ROCK1 knockdown cells changed their preference and migrated preferentially toward unmyelinated retina and myelin compared to BM. This ROCK knockdown-induced preference toward myelin is not surprising, since a hallmark of glioma migration is the long-distance movement of these cells along myelin-rich white matter tracts [3,5,32]. This behavior was confirmed in the stripe assay, which showed that reduced ROCK1 also changed the substrate preference of the cells.
Addition of the ROCK inhibitor Y27632 to ROCK knockdown cells led to an almost complete switch in substrate preference toward the myelinated substrate, indicating an additive pharmacological effect, while the results of the same setup with ROCK2 knockdown cells were not as distinct. Indeed, comparison of BM and myelin as migration substrates revealed that only one shRNA sequence out of several tested in both cell lines changed the specificity from a 70/30 to a 55/45 distribution of cells. This finding indicates that the substrate dependence of glioma cells is mediated by ROCK1, not by ROCK2. Alternatively, a balanced and simultaneous regulation of both kinases may be at play, given that ROCK1 knockdown also influences ROCK2 expression (but not conversely) and that cells with ROCK1 knockdown displayed the same regulatory effects on migration and proliferation as those treated with Y27632. Furthermore, when analyzing downstream pathways in cells treated with the Rho kinase inhibitor Y27632, we found for nearly all examined proteins the same regulation as in cells with ROCK1 knockdown. Although the two ROCK isoforms display 80 % homology, ROCK1 is likely operating upstream of ROCK2 and is the key regulator of the activity of both kinases.

A driving force of cell movement is essential for cell migration, and this force is provided mostly by reorganization of the cytoskeleton, with directed protrusions at the front of the cell and cell detachment at the rear [9,33,34]. This reorganization of the cytoskeleton is mediated by members of the Rho family of GTPases, such as Rho, Rac, and Cdc42 [35]. Comparison of Cdc42/Rac expression in cells with ROCK1 and ROCK2 knockdown revealed that a reduction of ROCK1 also reduces the phosphorylation of Cdc42/Rac, whereas a reduction of ROCK2 increases it. Rac and Cdc42 regulate the polymerization of actin through the activation of Scar/WAVE and WASP/N-WASP complexes [36,37], while phosphorylation of Cdc42/Rac1 on Ser71 is assumed to attenuate actin-driven motility. A decreased phosphorylation was observed in ROCK1 knockdown clones in the present study, together with reduced migration and proliferation, indicating a drastic remodeling of the cell morphology rather than changes in migration capability. As anticipated, inhibition of ROCK2 expression led to the opposite effect on Cdc42/Rac, with increased phosphorylation of Cdc42/Rac associated with increased migration.

Interestingly, both ROCK1 and ROCK2 knockdown resulted in decreased RhoA protein. This suggests the presence of a feedback loop of ROCK expression on RhoA, since ROCK1 and ROCK2 are downstream effectors of RhoA in glioma cells. This pathway may be regulated by phosphorylation, and thereby inactivation, of p190 Rho GTPase-activating protein (p190A RhoGAP), since it has been shown that in smooth muscle cells, ROCKs are involved in the induction of RhoA activity through the phosphorylation of p190A RhoGAP [38].

Dynamic regulation of the actin cytoskeleton is the main factor in cell motility and cell division, involving the phosphorylation of LIMK. The activation of ROCK by Rho leads to the LIMK-mediated inactivation of cofilin and results in the accumulation of actin and the formation of lamellipodia by inhibiting the actin-depolymerizing function of cofilin [39]. Interestingly, inhibition of ROCK1 and ROCK2 in the present study led to enhanced phosphorylation of LIMK in both cell lines and in all of the tested shRNA sequences.
Rac also activates LIMK, thereby affecting the phosphorylation of cofilin, but it reduces actomyosin-based cell contractility [40,41]. The enhancement of LIMK phosphorylation in the ROCK1 knockdown cells might therefore occur through enhanced activation of Rac. Since ROCK2 knockdown leads to the inactivation of Cdc42/Rac, the enhanced LIMK phosphorylation there might be based on a different pathway and may be the result of reduced ROCK1 expression, which is unaffected in ROCK2 knockdown cells.

The upstream factors involved in the effects of ROCK knockdown were identified by further analyzing the expression of β1-integrin. Integrin clustering and focal complex formation require the activity of Rho family members [42], such as p190A RhoGAP, which disrupts integrin clustering [43]. Interestingly, the influence of integrins on various cell activities, such as actin remodeling and proliferation, is not limited to one direction: integrin ligand-binding activity can also be regulated by signals from inside the cell [44,45]. This inside-out signaling to integrins is thought to be mediated by phosphatidylinositol 3-kinase [46], which is regulated by ROCK [47][48][49]. This regulation is confirmed in the present study, which shows that ROCK1 knockdown leads to a decrease in β1-integrin expression and a slight increase in Akt expression, without significantly affecting Akt phosphorylation. We observed similar changes in β1-integrin expression in ROCK2 knockdown cells, but also a significant increase in Akt phosphorylation mediated by ROCK2, but not by ROCK1.
Effect of using different levels of cassava meal in a concentrate cassava peel diet on chemical composition, in vitro gas production, and rumen fermentation

This study was designed to evaluate the effects of using different levels of cassava meal in a concentrate cassava peel diet on chemical composition, in vitro gas production (IVGP) and rumen fermentation. The treatments applied were: A = cassava peel (20%) + cassava meal (70%) + cassava leaves (5%) + moringa leaves (5%); B = cassava peel (20%) + cassava meal (60%) + cassava leaves (10%) + moringa leaves (10%); C = cassava peel (20%) + cassava meal (50%) + cassava leaves (15%) + moringa leaves (15%); D = cassava peel (20%) + cassava meal (40%) + cassava leaves (20%) + moringa leaves (20%); E = cassava peel (20%) + cassava meal (30%) + cassava leaves (25%) + moringa leaves (25%), with 3 replications arranged in a Randomized Block Design (RBD). The results showed that increasing levels of cassava meal in the ration significantly increased organic matter (OM) and nitrogen-free extract (NFE) contents (P<0.05) but reduced crude protein (CP), ether extract (EE), crude fiber (CF), neutral detergent fiber (NDF), and acid detergent fiber (ADF) contents. Similarly, significant increases (P<0.01) were found in the values of cumulative in vitro gas production and of dry matter and organic matter digestibility, while NH3 concentration decreased (P<0.05) with increasing cassava meal. It is concluded that increasing the level of cassava meal in the concentrate led to a higher OM content of the ration, which became available for rumen fermentation.

Introduction

Cassava (Manihot utilissima Cranz) is one of the potential feeds for ruminants in some tropical regions, and the parts of the cassava plant commonly used for feed are the leaves, peels, tubers and cassava meal. Previous studies reported that the DM, CP, CF and OM contents of cassava leaves were 27.2%, 31.4%, 12.8%, and 94.6%, respectively [1]. Cassava peels contained CP (…). Cassava leaves have been reported to have high potential as a protein supplement on a fibrous basal diet, and a higher daily gain of growing sheep (112 g/d) was recorded compared to gliricidia leaves supplementation (97.1 g/d) [4]. The use of cassava peels as an energy source to supplement a maize stover basal diet gave the highest OM digestibility and daily weight gain of crossbred Limousin cattle (81.6% and 1.35 kg/d, respectively) at a 30% inclusion level [3]. Similar results were obtained when cassava meal was used to supplement maize stover [5]. The present study aimed to evaluate the effects of using different levels of cassava meal in a concentrate cassava peel diet on chemical composition, in vitro gas production, and rumen fermentation.

Location and time

This study was conducted in the Nutrition and Animal Feed Laboratory, Faculty of Animal Science, Brawijaya University, and an abattoir in Malang from September to November 2020.

Materials

The materials used were feedstuffs consisting of cassava peels, cassava meal, cassava leaves, and Moringa oleifera leaves.

Chemical analysis

Proximate analysis was carried out according to the procedure of AOAC [6] to determine DM, OM, CP, EE, and CF. Determination of the fiber fractions (ADF and NDF) was carried out according to the procedure of [7]. In vitro gas production measurement and the DMD and OMD values were determined following the methods of [8,9].

Statistical analysis

Data obtained were analyzed by analysis of variance (ANOVA), followed by Duncan's Multiple Range Test if the treatments gave a significant effect on the variables measured.
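As an illustration of the analysis described above, a minimal sketch of a two-way ANOVA for a randomized block design (treatment plus block) is shown below using statsmodels; the data frame layout and the IVGP values are hypothetical, and Duncan's Multiple Range Test is omitted since it is not available in the standard Python statistics libraries.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical IVGP measurements (ml/500 mg DM): 5 treatments x 3 blocks.
data = pd.DataFrame({
    "treatment": ["A", "B", "C", "D", "E"] * 3,
    "block": ["1"] * 5 + ["2"] * 5 + ["3"] * 5,
    "ivgp": [164.8, 150.2, 141.5, 133.9, 120.3,
             162.1, 152.7, 139.8, 130.2, 123.5,
             166.3, 148.9, 143.0, 135.5, 118.7],
})

# Randomized block design: treatment and block as categorical factors.
model = ols("ivgp ~ C(treatment) + C(block)", data=data).fit()
print(anova_lm(model))
```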
Table 2 shows that a higher level of cassava meal in the ration resulted in higher OM and NFE values, whereas the CP, EE, CF, NDF, and ADF contents decreased with increasing cassava meal. The highest CP content was found in treatment E (19.28% DM), which seems to be related to the lowest proportion of cassava meal (30% of the ration) and the higher proportions of cassava leaves (25%) and moringa leaves (25%) in the diet; it has been reported that the CP content of cassava leaves is 21% [11] and that of moringa leaves 33.9% [12]. The contents of CF, ADF, and NDF were strongly influenced by the presence of cassava meal: the higher the proportion of cassava meal, the lower the fiber content in all treatments.

IVGP, digestibility, and NH3

The in vitro gas production (IVGP), dry matter digestibility (DMD), and organic matter digestibility (OMD) of the rations are presented in Table 3. Statistical analysis showed that the treatments significantly affected the values of IVGP, DMD and OMD at P<0.01, and the NH3 concentration at P<0.05. The highest IVGP value was recorded in treatment A (164.76 ml/500 mg DM), followed by treatments B, C, D, and E. Similar trends were observed for the DMD and OMD values, which likewise increased with the proportion of cassava meal. Meanwhile, the NH3 concentration in the rumen decreased with higher levels of cassava meal. The presence of cassava and moringa leaves as protein sources must have contributed to the differences in NH3 values. The ammonia concentrations of all treatments were adequate to support rumen microbial activity, especially for degrading nutrients; this was indicated by the higher values of both DMD and OMD as the proportions of cassava leaves and moringa leaves increased.

Note: different superscripts in the same row indicate significant effects at P<0.05 (*) and P<0.01 (**).

Conclusions

In conclusion, increased levels of cassava meal in the ration increased its OM content and rumen fermentation, as indicated by increasing values of IVGP, DMD, and OMD, whereas the rumen NH3 concentration moved in the opposite direction.
Data on anti-insulation detection via Point of Thermal Inflexion (PTI) in 1248 cases; 13 climates, four occupancy profiles, six wall configurations and four insulation levels

Abstract

The data in this article are the simulation results of 1248 cases that were carried out to detect anti-insulation behaviour in the article titled "Anti-insulation mitigation by altering the envelope layers' configuration" (Idris and Mae, 2017) [1]. These cases are generated by a matrix of 13 climates, 6 envelope layer configurations, 4 occupancy profiles and 4 levels of insulation thickness. The data concern the annual cooling and heating loads of these cases. In addition, the data include the Point of Thermal Inflexion (PTI) values and their anti-insulation pattern, when a PTI is found. The PTI values are compiled in a single summary file, which is supplied as well. All these data are shared via this article, where they can be reused in different ways, but mainly to serve researchers who intend to approach anti-insulation behaviour from different points of view.

Specifications table

Subject area: Energy in Buildings
More specific subject area: Cooling energy conservation
Type of data: Excel files
How data was acquired: The data are the annual cooling and heating load simulation results. The cooling loads were further organised to derive the anti-insulation patterns and find their Points of Thermal Inflexion (PTI) values.
Data format: .xls
Experimental factors: The PTI value is sometimes modified based upon the anti-insulation pattern of the case.
Experimental features: Data were produced by simulating the thermal loads of a windowless single-cell room using EnergyPlus over a matrix of 1248 cases.
Data source location:
Data accessibility: Data is within this article

Value of the data

- The data of such a vast number of simulation cases would promote a better understanding of anti-insulation behaviour by allowing it to be observed under various groups of parameters (variable clusters).
- These data can also be of great importance, and time saving, for studies aimed at developing the governing correlation equations or at finding the weights of the anti-insulation influencing factors.
- The data involve six layer configurations, so studies concerned with the dynamic thermal behaviour of wall configurations would find them useful, specifically in studying the configurations' performance over two principal parameters, i.e. over 13 climates and the two AC operation profiles (continuous and intermittent).
- By providing the heating loads, further studies can utilise the annual total loads (sum of cooling and heating) to systematically develop a novel insulation optimisation approach based solely on anti-insulation.

Data

This article's dataset comprises 53 Excel files. The first file, named "PTI Summary of All 1248 cases", contains the PTI values of all 1248 cases, colour-coded based on their anti-insulation patterns. This file has six tabs; four of these hold the data summaries for the four occupancy profiles. The other two tabs are the grand summaries of the cases, i.e. the PTI values compilation table and the PTI patterns summary table. The subsequent 52 files cover the specific environmental conditions under which the six layer configurations are examined; these 52 files are the product of 13 climates and four occupancy schedule profiles. The file naming convention is [Serial Number_Climate Representative City_Occupancy Profile.xls], and the naming abbreviations are provided in Table 1. Each of these 52 files has 8 tabs; six are the primary PTI graphs for the 6 layer configurations. The remaining two tabs contain, first, the raw cooling and heating load simulation results and, second, the layer configurations' performance ranking.

Experimental design, materials and methods

The EnergyPlus standalone version (8.4) was employed as the simulation software. The "IdealLoadsAirSystem" object was used to calculate the annual cooling and heating loads; it calculates the energy consumed to maintain the desired set-points. A windowless single-zone room of 6 × 6 × 3 m served as the case study, and it was assumed that all its surfaces (walls, roof and floor) have the same construction. The six construction configurations were produced by rearranging brick and insulation board layers. The total brick width was fixed at 20 cm across the configurations, whereas the insulation was varied to generate the PTI graphs. For each of the 1248 cases, the PTI values were obtained by plotting the cooling loads of 25 permutations, i.e. five insulation thicknesses and five cooling set-points, with the 20 cm bare-brick cases serving as the energy saving/loss benchmarks. Finally, the PTI values were sorted, statistically processed, and transferred into the spreadsheets supplied with this article. Further information on the adopted methodology is presented in [1].
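To illustrate the stated naming convention, the following sketch parses the dataset's file names into their three fields; the example file name is hypothetical, since the actual abbreviations live in Table 1 of the source article.

```python
from pathlib import Path

def parse_case_filename(name: str) -> dict:
    """Split a dataset file name of the form
    [Serial Number_Climate Representative City_Occupancy Profile.xls]
    into its three fields."""
    stem = Path(name).stem               # drop the .xls extension
    serial, city, profile = stem.split("_", maxsplit=2)
    return {"serial": serial, "city": city, "occupancy_profile": profile}

# Hypothetical example; real abbreviations are listed in Table 1.
print(parse_case_filename("07_Tokyo_ContinuousAC.xls"))
# {'serial': '07', 'city': 'Tokyo', 'occupancy_profile': 'ContinuousAC'}
```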
Associations between an Invasive Plant (Taeniatherum caput-medusae, Medusahead) and Soil Microbial Communities

Understanding plant-microbe relationships can be important for developing management strategies for invasive plants, particularly when these relationships interact with underlying variables, such as habitat type and seedbank density, to mediate control efforts. In a field study located in California, USA, we investigated how soil microbial communities differ across the invasion front of Taeniatherum caput-medusae (medusahead), an annual grass that has rapidly invaded most of the western USA. Plots were installed in habitats where medusahead invasion is typically successful (open grassland) and typically not successful (oak woodland). Medusahead was seeded into plots at a range of densities (from 0–50,000 seeds/m2) to simulate different levels of invasion. We found that bacterial and fungal soil community composition were significantly different between oak woodland and open grassland habitats. Specifically, ectomycorrhizal fungi were more abundant in oak woodlands, while arbuscular mycorrhizal fungi and plant pathogens were more abundant in open grasslands. We did not find a direct effect of medusahead density on soil microbial communities across the simulated invasion front two seasons after medusahead was seeded into the plots. Our results suggest that future medusahead management initiatives might consider plant-microbe interactions.

Introduction

Plant communities are typically composed of a combination of native and non-native species. The majority of these non-native species are benign, demonstrating little to no negative effect on neighboring organisms. However, a small fraction of these non-native plants are characterized as invasive because they are able to profoundly modify local plant and animal communities, nutrient cycling, hydrological regimes and fire frequency [1][2]. Not only do these impacts erode biodiversity and devalue ecosystem services, but they can also enhance further invasion by con- and heterospecific exotics (e.g. [3]).

Soil microbial communities might mediate relationships between invasive plant species and their ecosystem impacts [4][5][6]. Soil microbial communities, which are typically dominated by fungi and bacteria, can be altered by invasive plants directly, through growth facilitation or inhibition near the root zone [7], and indirectly, through changes in abiotic conditions (e.g. pH or nutrient availability) that occur in tandem with weed establishment [8]. For example, species-specific effects of non-native grasses on soil nutrients have been shown to subsequently modify soil microbial community composition, biomass, and bacterial:fungal ratios [9]. In addition to being vulnerable to impacts from aboveground plant dynamics, soil microbial communities may also play an important role in mediating the success of plant invasions [4]. For example, extant soil biota have been shown to enhance the invasion success of some of the world's most noxious invasive plants, such as the exotic knotweeds (Fallopia spp.) [10]. In general, it has been concluded that invasive species may be differentially affected by soil bacterial or fungal pathogens as compared to native plant species [8,11] (but see [12]), which could be important for developing reliable control strategies for invasive plants that demonstrate resistance to current management efforts. The invasive annual winter grass medusahead (Taeniatherum caput-medusae [L.]
Nevski) has invaded much of the western USA and has been shown to decrease soil carbon stocks, reduce native plant diversity, and enhance fire frequency [13]. A recent meta-analysis of medusahead control outcomes in annual grassland and intermountain regions identified large variance in the effectiveness of conventional approaches for managing medusahead [14], suggesting that underlying variables, such as habitat type and seedbank density, might mediate control efforts. Despite increasing recognition that bacterial and fungal communities can influence plant invasion dynamics, only two published studies have investigated the direct relationship between medusahead and soil microbial communities [15][16]. These studies have conflicting results, suggesting that the interaction between medusahead and soil microorganisms might or might not enhance its own invasion. Understanding medusahead effects on the soil microbial community is critical for enhancing predictions of invasion effects and for developing effective management strategies.

We investigated how soil microbial communities differ across the invasion front of medusahead in experimental plots in open grassland and oak woodland habitat in the Sierra Foothill region of California, USA. We attempted to understand (1) whether medusahead modifies soil microbial communities across the invasion front (simulated by differences in seed density) within systems; and (2) how soil microbial communities differ between areas where medusahead invasion is successful (open grassland habitat) and not successful (oak woodland habitat), and the factors that could be responsible for these differences. We hypothesized that medusahead would modify the soil microbial communities within each habitat. We expected this for two reasons. First, early work on this species [15], as well as more recent work on other invasive annual grasses with invasion dynamics similar to medusahead, has demonstrated linkages between soil microbes and invasion success [17]. Second, plant-soil interactions are common in the savannah/oak woodlands of California [18], so we would expect strong effects from the extant soil microorganisms. We also hypothesized that soil microbial communities (in particular symbiotic and pathogenic fungi) would differ between areas where medusahead invasion is typically successful (open grasslands) and typically not successful (oak woodlands). In California, invasion by winter annual grasses can be strongly limited within oak canopies [19], possibly due to the different microbial communities associated with oak trees compared to adjacent open grasslands [20][21]. Through shading, litter input, and hydraulic lift, Mediterranean oak trees can also modify a wide variety of soil edaphic factors, such as pH and organic matter concentrations, which directly influence soil microbial communities (e.g. [22]). At present, we do not know what role, if any, soil microbial communities play in mediating the likelihood of successful medusahead invasions.

Study site and experiment

The study site was located on a research reserve in Yuba County, California (39°14′N, 121°18′W), which experiences a Mediterranean climate of hot, dry summers and cool, wet winters. Permission to perform this experiment was granted by the owner of the property, the University of California Division of Agriculture and Natural Resources. The field study did not involve endangered or protected species. Mean annual precipitation is 75 cm and mean annual temperature is 17.8°C.
Soils at the site are fine-loamy, mixed, superactive Ultic Haploxeralfs and fine, mixed, superactive Typic Rhodoxeralfs. Soil pH ranges between 5.7 and 6.2. The experiment is located in an annual grassland system that is irregularly interrupted by small patches of winter-deciduous blue oak (Quercus douglasii) and evergreen interior live oak (Q. wislizeni) that provide approximately 40% shade. The area has experienced seasonal low-intensity grazing by livestock since the 1960s.

The experimental set-up is described fully in [19,23]. Briefly, plots (1 m2, separated by 2 m) were installed in open grassland habitat and paired oak woodland habitat. The two habitats differ in the identity of the dominant herbaceous species and the presence of leaf litter [19]. Average soil temperature and soil moisture in the top 5 cm of soil were also lower in oak woodland plots (18.4°C and 1.6%, respectively) compared to open grassland plots (22.2°C and 6.4%, respectively). The experimental site was mowed, solarized to enhance seedbank germination, and then sprayed with glyphosate herbicide to kill existing and newly germinated plants. In September 2013, fully replicated (n = 4) plots were hand-seeded with one of five densities of field-collected medusahead (0, 100, 1000, 10000, and 50000 seeds/m2), mixed in with 500 grams of medusahead thatch. Immediately following the addition of medusahead seed, 6,000 seeds each of the neighboring grass species (annual rye and Blando brome) and 4,000 seeds of a clover mix were added (for a total of 16,000 neighbor seeds) to maintain a realistic competitive environment. Medusahead tiller density the following season reflected the seeding rate. This treatment was expected to simulate differences in medusahead invasion intensity from low to high infestation [24]. A defoliation treatment was applied to half the plots in April 2014. This treatment was intended to simulate a mowing or grazing regime included in a typical management program. Defoliation was applied when 75% of the medusahead tillers within a plot were in the 'boot' stage. All standing biomass in treatment plots was clipped using electronic shears positioned approximately 15 cm above the soil surface.

To understand how factors associated with the oak woodland habitat might contribute to medusahead invasion, an additional set of replicated 1 m2 plots was installed in the open grassland sites. Treatments were deployed to simulate environmental factors associated with the oak woodland habitat, and included the presence of shading, the presence of oak litter, and the presence of both shading and litter. Shading was applied via 50% shade cloth suspended over the plots, and litter was applied by collecting 500 g of litter from under paired oak canopies and distributing it evenly over treatment plots. Medusahead seeds (at a density of 50000 seeds/m2) were then introduced to these plots.

Soil sampling

In April 2015, we collected surface soil cores (7 cm depth), where the majority of fungal and bacterial biomass is present [25], in four random locations in each plot. Soil from each plot was mixed together in the field, sieved, and placed in a plastic bag. Soil samples were frozen in the field and transported at -20°C to the University of Colorado, Boulder for microbial analyses.

Molecular analyses

Microbial diversity was assessed using high-throughput sequencing methods to describe the composition of taxonomic marker gene sequences.
For the bacterial analyses, we sequenced the V4 hypervariable region of the 16S rRNA gene using the 515-F (GTGCCAGCMGCCGCGGTAA) and 806-R (GGACTACHVGGGTWTCTAAT) primer pair [26]. For the fungal analyses, we sequenced the first internal transcribed spacer (ITS1) region of the rRNA operon using the ITS1-F (CTTGGTCATTTAGAGGAAGTAA) and ITS2 (GCTGCGTTCTTCATCGATGC) primer pair [27]. The primers included Illumina adapters and an error-correcting 12-bp barcode unique to each sample. PCR products were quantified using the PicoGreen dsDNA assay and pooled together in equimolar concentrations for sequencing on an Illumina MiSeq instrument. All sequencing runs were conducted at the University of Colorado Next Generation Sequencing Facility.

Reads were demultiplexed using a custom Python script (https://github.com/leffj/helper-code-for-uparse), with quality filtering and phylotype clustering conducted using UPARSE [28]. For quality filtering, we used a maxee value of 0.5 (that is, a maximum of 0.5 nucleotides incorrectly assigned in every sequence). Singleton sequences were removed prior to phylotype clustering. Quality-filtered sequence reads were then mapped to phylotypes at the 97% similarity threshold. Phylotype taxonomy was assigned using the Ribosomal Database Project (RDP) classifier with a confidence threshold of 0.5 [29], trained on the 16S rRNA Greengenes database [30] or the ITS rRNA UNITE database [31] for bacteria and fungi, respectively. Sequences representing any phylotypes unclassified at the domain level or classified as mitochondria, chloroplasts, archaea or protists were removed. Subsequently, we removed potential contaminants (i.e. phylotypes with abundances greater than 1% in the blanks and no-template controls [32]), and we normalized the sequence counts using a cumulative-sum scaling approach [33]. We used FUNGuild to identify fungal functional guilds [34]. Soil sample information, phylotype abundance tables, and bacterial and fungal representative sequences are publicly available in FigShare (10.6084/m9.figshare.3113125).

Statistical analyses

Soil microbial community similarity patterns were represented by non-metric multidimensional scaling (NMDS) using the Bray-Curtis distance metric. We used nested analysis of variance (ANOVA) and permutational multivariate analysis of variance (PERMANOVA) based on 1,000 permutations [35] to assess the explanatory power of the different treatments on soil microbial richness and community similarity patterns, respectively. Differences in the proportions of taxa and fungal functional guilds were tested using non-parametric Wilcoxon tests after false discovery rate (FDR) correction [36]. All multivariate statistical analyses were implemented in the R environment (www.r-project.org) using the vegan package (vegan.r-forge.r-project.org).

Results

The total number of phylotypes across all soil samples was 5604 for bacteria and 4349 for fungi (S1 Fig). The average number of phylotypes per soil sample was 1781 for bacteria and 687 for fungi. Oak woodland soil samples tended to harbor more bacterial and fungal phylotypes than open grassland samples, although the differences were not statistically significant (ANOVA P > 0.05; Fig 1A and 1B, respectively; see S2 Fig for Shannon diversity). Both bacterial and fungal soil community composition were significantly different between open grasslands and oak woodlands (PERMANOVA R2 = 0.24, P < 0.001 and R2 = 0.28, P < 0.001, respectively; Fig 1C and 1D).
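The ordination and permutation test described above were run in R with vegan; as a language-agnostic illustration, the sketch below reproduces the same workflow (Bray-Curtis distances, NMDS, PERMANOVA) in Python with scipy, scikit-learn and scikit-bio, on a hypothetical phylotype abundance matrix.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from skbio.stats.distance import DistanceMatrix, permanova

rng = np.random.default_rng(0)

# Hypothetical phylotype abundance table: 8 samples x 50 phylotypes,
# 4 oak-woodland and 4 open-grassland samples.
counts = rng.poisson(lam=5, size=(8, 50)).astype(float)
habitat = ["oak"] * 4 + ["grassland"] * 4
ids = [f"s{i}" for i in range(8)]

# Bray-Curtis distance matrix between samples.
bc = squareform(pdist(counts, metric="braycurtis"))

# Non-metric MDS (NMDS) on the precomputed distances, two ordination axes.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0)
scores = nmds.fit_transform(bc)

# PERMANOVA testing habitat as the grouping factor (999 permutations).
result = permanova(DistanceMatrix(bc, ids), habitat, permutations=999)
print(result)
```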
Bacteria from the proteobacterial classes alpha, beta, gamma and delta and Acidobacteria subgroup 6, as well as fungi from the Pezizomycetes, Agaricomycetes and Eurotiomycetes classes, were more abundant in oak soils than in grassland soils (Wilcoxon test P < 0.01 after FDR; S3 Fig; see S1 Table for results at the genus level). Acidobacteria from the classes Solibacteres and Acidobacteriia, as well as Spartobacteria, were more abundant in grassland soils (Fig 2).

We found no significant effects of the seed density treatments or clipping treatments on soil microbial richness (ANOVA P > 0.05 for both bacteria and fungi; Fig 1A and 1B) or on microbial community composition (PERMANOVA P > 0.05 for both bacteria and fungi; Fig 1C and 1D) within each habitat type. Likewise, for the plots where we simulated the oak environment, we observed no significant effects of the litter or shade treatments on the richness of soil bacterial communities (Wilcoxon test P > 0.05; Fig 3A) or on bacterial community composition (PERMANOVA P > 0.05; Fig 3C). For fungi, we detected weak but significant effects of litter on richness (Wilcoxon test P < 0.01; Fig 3B) and on fungal community composition (PERMANOVA R2 = 0.09, P < 0.01; Fig 3D).

Discussion

Soil biota has been implicated in the facilitation of invasive plant dominance [5,37]. However, not all invasive species support plant-soil microbe feedbacks as a driver of invasion [38][39]. We attempted to identify how relationships between the weedy annual grass medusahead and soil microorganisms might mediate invasion success. Unexpectedly, we did not find evidence for medusahead density effects on the soil microbial communities across a simulated invasion front in either habitat. This supports other studies that have documented instances where soil communities are unresponsive to the presence of invasive weeds in arid grasslands [40] and in other systems [41].

However, several aspects of our experiment could hinder our ability to capture an existing relationship between medusahead seed density and soil microbial communities. First, we assessed the relationship between medusahead seed density and bulk soil microbial communities, not rhizosphere communities. Rhizosphere microbial communities are different from bulk soil communities [42], so the potential effects of medusahead invasion intensity on microbial communities may be observable at smaller spatial scales in the rhizosphere (but see [43]). Second, although this study was conducted across two growing seasons, there may have been insufficient time for the soil microbial communities to respond to the different seed densities of medusahead [44]. Although microbial communities have been shown to respond to changes in aboveground plant communities in as little as a month [45], these communities could be especially slow to respond to the presence of weeds in environments where soil edaphic factors are slow to change in response to invasion. Moreover, in California grasslands, soil appears to be particularly buffered from aboveground changes [40]. Third, extracellular microbial DNA and DNA from dead cells can persist in soils for years and can thus obscure DNA-based estimates of the present soil microbial composition [46]. Finally, as this study only assessed the composition of the microbial communities, we cannot eliminate the possibility that medusahead seed density influences the activity and function of belowground soil communities.
Using reciprocal soil transplant experiments, [47] reported higher medusahead biomass in introduced soil than in native soil, which suggests that medusahead success is partially due to release from native soil pathogens [48][49]. In addition to escaping from soil pathogens, certain invasive plant species have been shown to accumulate local pathogens [11,49]. Although previous studies have reported that medusahead is sensitive to antagonistic fungi [50][51], we observed important differences in bacterial and fungal community composition between open grassland sites, where medusahead is typically found at high densities, and oak woodland sites, where medusahead is typically found at low densities. Specifically, we found significantly higher abundances of fungal pathogens in open grasslands compared to oak woodland habitats. Environmental conditions in the grassland habitat are likely more favorable for both soil and foliar fungal pathogens, which can be important drivers of aboveground plant dynamics (e.g. [52]). Indeed, these pathogens have been documented in grasslands in other studies (e.g. [53]). These results collectively highlight the potential contribution of microbial mechanisms (e.g., via pathogen accumulation) to medusahead dominance in California grasslands. Because the differences in bacterial and fungal communities exist in the absence of medusahead, it is more likely that favorable grassland soil microbial communities facilitate medusahead establishment rather than result from the invasion itself.

In addition to negative interactions, a large number of plant species establish symbiotic associations with soil microorganisms, in particular with mycorrhizal fungi and nitrogen-fixing bacteria [6,54]. In this experiment we detected a higher proportion of ectomycorrhizal fungi in soil samples from oak woodland habitats. This result is expected, as ectomycorrhizal fungi are important to oak trees for acquiring nutrients and for increasing root absorptive area [18,55]. Our results also show that oak litter (rather than shade) influences overall soil fungal community composition and richness, but not soil bacterial community composition and richness. Although plant litter inputs can change environmental conditions important for soil bacteria, such as pH and base cation content [42], soil fungi are key decomposers of plant necromass and depend more directly on leaf litter than bacteria do [56]. We also observed a significantly higher proportion of arbuscular mycorrhizal fungi in open grasslands. Given the generally non-specific interactions with arbuscular mycorrhizal fungi, it has been proposed that several invasive plants make use of these fungi to enhance their success [57][58][59][60]; but see [61]. Other invasive plants (for example, the garlic mustard Alliaria petiolata) inhibit the mycorrhizal fungi on which natives depend [62]. Collectively, this work suggests that biocontrol and management initiatives should consider potentially beneficial plant-microbe interactions rather than focusing only on antagonistic relationships.

The context dependency associated with invasion success and weed management efficiency is well documented both for medusahead and for other weedy species (e.g. [63]). The presence and abundance of bacterial and fungal groups potentially underlie this context dependency in several instances. Environmental changes, such as exacerbated drought conditions, might modify the suitability of oak woodland habitat and perhaps enhance the invasibility of previously resistant systems.
Therefore, given the complex relationships between aboveground and belowground biota [64], understanding the potential mechanisms mediating the association between invasive plant species and soil microorganisms could provide practical information for developing effective management strategies, as well as insight into the ecology of plant-soil food webs and diversity.
Influence of the Hybrid Sewage Treatment Plant's Exploitation on Its Operation Effectiveness in Rural Areas

The article evaluates the effectiveness of the removal of organic pollutants, as well as nitrogen and phosphorus, from household sewage in a hybrid bioreactor with a submerged fixed bed. The experiment was carried out in two exploitation variants, both conducted in a laboratory model of the hybrid bioreactor: (I) cycles of 120 min of aeration and 60 min of no aeration with a constant sewage dosage, and (II) cycles of 60 min of aeration and 60 min of no aeration, with a periodic sewage dosage in the no-aeration phase. The experiment was carried out on real sewage primarily treated in a septic tank. The degree of pollution removal was calculated and compared with the mandatory standards according to Polish law. Moreover, the susceptibility of the sewage to biological treatment and the nitrification and denitrification activity were determined. The research shows a higher effectiveness for the 60/60 variant in comparison to the 120/60 variant. High operational efficiency was observed regarding the removal of organic pollution and nitrate nitrogen. The tested structure showed very low nitrification activity combined with intense denitrification; these processes were observed in the 60/60 variant. The structure was often overloaded with nitrate nitrogen, which was considered to act as an inhibitor of the nitrification process. It was suggested that phosphorus was also removed by denitrifying bacteria.

Introduction

A significant number of rural settlements in Poland, as well as small urban centers, still lack satisfactory sewage management, in contrast to their common and widespread water supply systems [1]. The observed dynamic development of sewerage systems and sewage treatment infrastructure in rural areas is in most cases due to European Union (EU) directives and funding. Regardless of the significant increase in the length of the sewerage systems in rural areas, the needs are only partially fulfilled. The local sanitation system, which drains the sewage into septic tanks, is by far the most important. Despite the dynamic development of the sewerage system in the villages, it covers only 19% of the area served by the water supply system. The disproportion between the water supply systems and the sanitation systems causes serious threats to the sanitary conditions and the environment in these areas. The character of rural settlement, which consists of dispersed buildings, should impose on designers and investors the use of effective, reliable, and relatively cheap technologies to treat the sewage of individual households [2]. With such technologies, it is possible for Polish villages to attain the needed ecological goals in an environment-friendly way, whereas plans for building collective sewerage systems and local sewage treatment plants are very often placed in the indefinite future. In Polish conditions, small sewage treatment technologies that use filtering drainages, sand filters, or hydrobotanic beds are the most frequent. These systems are also environmentally friendly and promote the sustainable development of rural areas. Currently, small sewage treatment systems operating in artificial conditions, using biological beds or activated sludge technologies, are becoming more and more popular.
In practice, the classic systems of the household sewage treatment plants with the active sludge are "miniatures" of the large systems.However, regarding the different sewage characteristics in the areas with the dispersed buildings, contrary to the areas within the collective sewerage system, very often exploitation problems occur in such small systems, which is related to the interruptions in the aeration due to the lack of electricity supply or the instability of active sludge biogenesis development related to a high heterogeneity of sewage supply. Systems with microorganisms' biofilm are considered to be much more beneficial in comparison to the traditional systems, where the biomass is suspended in the bioreactor [3][4][5].These systems are characterized by the stable sewage treatment process, a long period of biomass retention, and a much higher resistance to the toxic substances and sudden changes in the external conditions.Moreover, bioreactors with the biomass attached to the filling material are much smaller than the classic systems with the suspended microorganisms; therefore, they are easier to use in small sewage treatment plants. In the cases with existing mini-sewage treatment plants that operate on the active sludge technology, it is possible to increase their operation stability, which influences their reliability, by introducing flexible or stable microorganisms immobilized on the artificial carriers [6,7].Such hybrid systems are the combination of suspended and immobilized biomass, and are more resistant to the quantitative and qualitative changes of sewage than the classic active sludge; they also lack some of the biological beds' defects [8,9].The introduction of plastic blocks into the classic bioreactor with the active sludge may increase the effectiveness of organic matter removal from the sewage without the need to expand the treatment plant.Both the ecology and exploitation of such a procedure is arguable, and it is much cheaper than adding new objects to the system [10].However, exploitation bioreactors require considerable costs for the aeration of a chamber.Thus, analysis solutions that decrease the exploitation costs of the bioreactor are necessary; they permit removal pollutions such as other technologies.Such solutions can include periodic aeration or a sewage dosage bioreactor.Vertical flow filters with filling in the form of polyurethane foam can be such an example [11,12].Tomei et al. [13] observed very high organic removal (99% after just 5 h of treatment), and the effective biodegradation of the organic fraction of the wastewater (>90% at the end of the test) was observed during an experiment with the use of a hybrid system consisting of a biological reactor containing spiral-coiled polymeric tubing through which the mixed sewage was pumped.A hybrid DMBR-IVCW system, which combines a dynamic membrane bioreactor (DMBR) unit and an integrated vertical-flow constructed wetland (IVCW) unit, was applied by Kong et al. [14] to treat domestic sewage.Zhu et al. [15] used a hybrid membrane bioreactor (HMBR) for ship domestic sewage treatment.The degradation of organics in an aeration tank during this experiment follows the first-order reaction. 
The technology of hybrid reactors belongs to the biological methods of wastewater disposal.It uses the process of the natural immobilization of biomass, which consists of the formation of a biological membrane on the surface of carriers, and in consequence increases the efficiency of wastewater treatment.[16].In addition, this technology eliminates the common problem of clogging deposits.Moreover, regarding the activated sludge method, it was shown that immobilized biomass on supports in technological systems solves the problem of sludge swelling and allows resignation from the recirculation of activated sludge [17,18].In the biogenesis of hybrid reactors, regarding activated sludge, there is no development of filamentous bacteria [19].According to Chan et al. [20], hybrid reactor technology, in contrast to the conventional activated sludge system, is distinguished by better oxygen permeability, a shorter sewage retention time, higher charges of organic compounds, a higher degree of nitrification, and a larger contact area with wastewater.Sindhi and Shah [21] outlined the disadvantages of hybrid reactors, including the limited possibility of process control and the lower popularity of this technology.Hybrid reactor technology brings a number of technological possibilities. An important advantage of this system is the possibility of accepting a large pollutant load.The process of the natural immobilization of biomass on supports, and thus the prolonged age of microorganisms, independent of the hydraulic time of sewage retention, allows for highly effective nitrification [22]. The main premise behind conducting the research was that in Poland, technologies based on activated sludge or biological beds for removing pollution from a single building are widely used.However, these systems work with low efficiency due to insufficient reductions in nutrients as well as organic compounds.In addition, these systems are very sensitive to changes in sewage inflow.However, both activated sludge and biological biofilms have their advantages, so it was decided to combine both solutions and investigate the hybrid system.To intensify the work of simultaneous denitrification/nitrification, it was decided that the reactor would work under changing conditions (anoxic/oxygen), which in turn could increase the reduction of nutrients.Such cyclic aeration may in effect be less costly for the operator compared to aerated solutions with recruiters, which was another reason to undertake such research. The aim of the research is to evaluate the influence of various exploitation conditions (changeable aeration versus the lack of aeration, constant versus periodic sewage dosage) on the effectiveness of organic and nutrient removal as well as on the efficiency of the elementary processes in a hybrid bioreactor with a submerged fixed bed. 
The Research Station

The tests were carried out on a laboratory model of the hybrid bioreactor. The model is made of a cuboid container with a built-in secondary settlement tank. The container is divided into two parts: the first is the aeration tank, and the second is the settlement tank. The container walls are made of metal, except for one that is made of a plexiglass slab. The aeration tank dimensions are: length 700 mm, width 300 mm, and height 700 mm; thus, the total volume is Vtot = 147 dm3. The secondary settlement tank's total volume is Vtott = 58.5 dm3, as shown in Figure 1. The plastic bed block, submerged in the sewage, was placed inside the aeration tank. The block's dimensions are: length 400 mm, width 300 mm, and height 300 mm. The specific surface of the carriers was 150 m2·m−3. Sewage was evenly distributed in the aeration tank owing to the installed overflow. The treated sewage was drained from the secondary settlement tank over a sawtooth overflow weir. The inflow and outflow of the sewage were positioned so as to enable plug (piston) flow through the bioreactor. The bioreactor's content was subsurface-aerated with a disc diffuser covered with a PTFE membrane produced by Stamford Scientific International Inc. (Poughkeepsie, NY, USA). The air was pumped into the system with a HIBFLOW HP-60 compressor. A rotameter was placed between the compressor and the diffuser to enable regulation of the supplied air flow. The excess sludge was drained out of the settlement tank by a PER-R0601 peristaltic pump. Part of the sludge from the secondary tank was recirculated into the aeration tank by the same type of peristaltic pump.

Research Procedures

The research analyzed real sewage sampled from the septic tank of a single-family household. The transported sewage was dosed into the bioreactor using a peristaltic pump. The aeration tank was inoculated with activated sludge from a RetroFAST hybrid sewage treatment plant treating domestic wastewater. The experiment was carried out in two variants in order to simulate the bioreactor's performance under different exploitation conditions: I, a constant sewage dosage 24 h a day, with alternating aeration cycles of 120 min with aeration/60 min with no aeration; II, a sewage dosage during the periods with no aeration, with aeration cycles of 60 min with aeration/60 min with no aeration. The average quantity of air supplied to the bioreactor was 55 dm3·h−1. After taking samples of the raw sewage and the treated sewage, physicochemical analyses of the following pollution indexes were performed: temperature, total suspended solids, BOD5, CODCr, ammonium, nitrite and nitrate nitrogen, and total phosphorus. The sewage temperature was measured using a digital thermometer, and total suspended solids were measured with the gravimetric method; the remaining indicators were determined as follows: BOD5 according to the standards PN-EN 1899-1:2002 and PN-EN 1899-2:2002; CODCr according to the standard PN-ISO 15705:2005; ammonium, nitrite, and nitrate nitrogen using the photocolorimetric method according to PN-ISO 7150-1:2002, PN-87/C-04576.07, and PN-EN 26777:1999; and total phosphorus using the photocolorimetric method according to the standard PN-EN 1189-2000.
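As a quick arithmetic check of the reactor geometry quoted above, the short sketch below computes the carrier surface offered by the submerged bed block and its share of the aeration-tank volume. It uses only the dimensions given in this subsection; the variable names are illustrative and do not come from the paper.

```python
# Carrier surface and fill ratio from the dimensions given in "The Research Station".
block_l, block_w, block_h = 0.40, 0.30, 0.30      # bed block dimensions, m
specific_surface = 150.0                          # m2 of carrier per m3 of block
tank_volume_dm3 = 147.0                           # aeration tank total volume, dm3

block_volume = block_l * block_w * block_h        # m3
carrier_area = block_volume * specific_surface    # m2 available for biofilm growth
fill_ratio = block_volume * 1000.0 / tank_volume_dm3

print(f"bed block volume : {block_volume * 1000:.1f} dm3")
print(f"carrier surface  : {carrier_area:.2f} m2")
print(f"share of tank    : {fill_ratio * 100:.0f} % of the aeration tank volume")
```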
Once a day, the following parameters were monitored in the aeration tank during the process: sewage temperature, dissolved oxygen concentration, oxygen saturation, and pH. The measurements were made with a CPX-type pH/oximeter equipped with the appropriate probes: a COG-1 oxygen sensor and a PEPS-1 electrode. Nitrification and denitrification efficiency was evaluated on the basis of the formulas of Carrera et al. [23], where: L(N-NH4) is the bioreactor loading with ammonium nitrogen, gN-NH4•gsmo−1•d−1; M is the biomass concentration in the bioreactor, gsmo•m−3; (N-NH4)i is the ammonium nitrogen concentration in the bioreactor inflow, gN-NH4•m−3; (N-NH4)o is the ammonium nitrogen concentration in the bioreactor outflow, gN-NH4•m−3; (N-NOx)i is the oxidized nitrogen concentration in the bioreactor inflow, gN-NOx•m−3; and (N-NOx)o is the oxidized nitrogen concentration in the bioreactor outflow, gN-NOx•m−3.
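The rate expressions themselves are not reproduced in the text above; only their symbol list is. As a hedged sketch, biomass-normalized loading and rate expressions of this kind are commonly written in the following form, which is consistent with the symbols defined above but is not necessarily the exact notation of Carrera et al. [23]. Here Q (sewage flow, m3·d−1) and V (aeration tank volume, m3) are assumed symbols that do not appear in the surviving text.

```latex
% Hedged sketch only: generic biomass-normalized forms consistent with the symbol
% list above. Q and V are assumed symbols (sewage flow and reactor volume).
% Requires amsmath for \text.
\[
L(\text{N-NH}_4) = \frac{Q\,(\text{N-NH}_4)_i}{V\,M},
\qquad
r_{\text{nitrification}} = \frac{Q\left[(\text{N-NH}_4)_i-(\text{N-NH}_4)_o\right]}{V\,M}
\]
\[
% Denitrified nitrogen taken as the oxidized nitrogen formed by nitrification
% plus the oxidized nitrogen entering, minus that leaving with the outflow.
r_{\text{denitrification}} = \frac{Q\left[(\text{N-NH}_4)_i-(\text{N-NH}_4)_o
  +(\text{N-NO}_x)_i-(\text{N-NO}_x)_o\right]}{V\,M}
\]
```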
Results

Figure 2 presents how BOD5 changed in the raw and treated sewage during the experiment. High BOD5 variation is characteristic of the raw sewage, because the raw sewage treated in this experiment was taken from a septic tank that was part of the sewage drainage and treatment system of a single household. Such high variability of the pollutant load is typical of sewage from single households in the rural parts of Poland. The average BOD5 value in the raw sewage during the experiment was 289.0 ± 120.3 mgO2•dm−3, and the averages were similar in both experiment variants: 284.1 mgO2•dm−3 in variant I and 295.6 mgO2•dm−3 in variant II. In the treated sewage, the BOD5 value changed at the beginning of the process and after the operating conditions were changed; this was caused, on the one hand, by the start-up of the bioreactor and the development of the biomass, both suspended and immobilized on the bed, and, on the other hand, by the acclimatization of the existing biomass to the changed conditions in the bioreactor. The acclimatization period was much shorter after the change of conditions than at the beginning of the experiment. This is because the biomass had already formed and developed when the experimental variant was changed, so adaptation to the new conditions was much faster than in the initial situation, when there was an insufficient amount of microorganisms in the bioreactor to mineralize the organic matter. The relatively short adaptation period is also the result of biofilm formation; the biofilm is much more resistant to sudden changes in conditions than the suspended biomass. During the experiment, except for the mentioned start-up and acclimatization periods, the BOD5 values in the bioreactor outflow varied only slightly. The average BOD5 value in the treated sewage was 38.9 ± 19.9 mgO2•dm−3 in variant I and 24.0 ± 18.1 mgO2•dm−3 in variant II. The average BOD5 reduction was 86.3% in variant I and 91.9% in variant II. These results confirm that replacing the constant sewage dosage to the bioreactor with the periodic dosage, together with shortening the aeration period from 120 min to 60 min, improved BOD5 removal from the sewage. According to the mandatory Polish legislation on the quality of discharged treated sewage, the tested change of the bioreactor's exploitation conditions meant that the average BOD5 values fulfilled the quality standards for treated sewage from treatment plants of 15,000 equivalent population (EP), and in the case of the reduction, for plants larger than 100,000 EP. In the case of variant I, the treatment plant met the standards for smaller objects, up to 2000 EP.
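The reduction values quoted throughout this section follow directly from the average influent and effluent concentrations. A minimal sketch of that calculation, using the BOD5 averages reported above and illustrative variable names, is shown below.

```python
# Minimal sketch: removal efficiency as used throughout the Results section.
# Concentrations are the BOD5 averages quoted above (mgO2/dm3); the variable
# names are illustrative, not taken from the paper.

def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percent reduction between influent and effluent concentration.
    A negative value means the concentration increased across the reactor."""
    return (c_in - c_out) / c_in * 100.0

bod5 = {
    "variant I":  {"in": 284.1, "out": 38.9},
    "variant II": {"in": 295.6, "out": 24.0},
}

for variant, c in bod5.items():
    print(f"{variant}: BOD5 reduction = {removal_efficiency(c['in'], c['out']):.1f} %")
```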
COD changes during the experiment were similar to those of BOD5, as shown in Figure 2. The average COD value in the raw sewage over the whole tested period was 651.49 ± 213.79 mgO2•dm−3; in variant I it was 706.49 ± 221.43 mgO2•dm−3, and in variant II it was 568.89 ± 189.10 mgO2•dm−3. In this case, the lower COD values in the raw sewage observed in the second variant might have contributed to the higher reduction of this index, owing to the smaller COD loading of the sludge. The average COD value in the bioreactor outflow was 219.9 ± 131.2 mgO2•dm−3 in variant I, corresponding to a 68.9% reduction, and 87.5 ± 30.4 mgO2•dm−3 in variant II, with a reduction of 84.6%. In variant I, the average COD value in the treated sewage and the reduction exceeded the admissible levels stated in the regulation cited above for all treatment plant sizes, whereas variant II met the standards for all treatment plant sizes. With respect to COD, better performance of the bioreactor was therefore observed in variant II.

Similarly, for total suspended solids, a high variability in the raw sewage can be seen, as well as a clearly higher treatment capacity in variant II, as shown in Figure 2. In the raw sewage, the total suspended solids concentration was 284.8 ± 144.4 mg•dm−3 in variant I and 206.5 ± 58.2 mg•dm−3 in variant II. Due to the start-up of the system, a high variability of the total suspended solids concentration in the outflow can be noticed during variant I. At the beginning of variant II, due to the changes in exploitation and in the other conditions in the bioreactor, an increase of more than 100.0 mg•dm−3 in the total suspended solids concentration was observed in the outflow, which decreased after seven days to below 50.0 mg•dm−3. The average outflow concentration of this indicator was 89.5 ± 59.1 mg•dm−3 in variant I, while in variant II it was significantly lower, at 39.8 ± 35.8 mg•dm−3. The total suspended solids reduction was 68.6% in variant I and 80.1% in variant II. In variant I, the total suspended solids concentration in the outflow exceeded the admissible standards of the Polish regulations for all treatment plant sizes, and in variant II the requirements were met for the smallest objects, up to 2000 EP.
Nitrogen compound transformations are greatly influenced by the exploitation conditions. For ammonium nitrogen in the raw sewage, as shown in Figure 3, the average concentration was 67.0 ± 18.1 mg•dm−3 in variant I and 63.2 ± 16.9 mg•dm−3 in variant II. It can therefore be assumed that during the experiment the concentration of this nitrogen form remained at a similar level. However, important differences occurred in the treated sewage. The average ammonium nitrogen concentration in the treated sewage was 78.9 ± 19.0 mg•dm−3 in variant I and 46.9 ± 12.2 mg•dm−3 in variant II, which corresponds to reductions of −17.8% (that is, a concentration increase in the outflow compared with the inflow) and 25.8%, respectively.
In variant I, the ammonium nitrogen concentration in the outflow was higher than in the inflow, despite an oxygen concentration much higher than is necessary for the nitrification process (the average oxygen concentration in the bioreactor in variant I was 3.5 mgO2•dm−3). As for the previously discussed pollution indicators, this state is attributed to the lack of specialized microorganisms capable of carrying out nitrification. The situation changed in variant II, where this indicator was reduced. During variant II, the average oxygen concentration in the bioreactor was 1.8 mgO2•dm−3. Importantly, it was not only the change of the exploitation conditions that influenced the transformations of the nitrogen compounds. Because nitrifiers require a high sludge age, the effectiveness of ammonium nitrogen removal from the sewage increased as the concentration of biomass (both suspended and immobilized on the filling) increased over time. This influenced the nitrification process, which is presented in Figure 4. During the system start-up, a lack of nitrification was observed, which is confirmed by the negative values of the nitrification rate. After the start-up period, when a sufficient amount of nitrifier biomass had been produced, the nitrification rate increased. During variant II, in which the reduction of ammonium nitrogen was observed, the average nitrification rate was 0.08 ± 0.09 mgN-NH4•gsmo−1•d−1.
If the sludge loading with ammonium nitrogen is much higher than the nitrification rate, N-NH4 removal is less efficient, and the ammonium form accumulates in the bioreactor. It seems that the loading with ammonium nitrogen and the hydraulic retention time did not play any significant role in the transformation of this nitrogen form, because in both variants the amounts were similar, as shown in Table 1. However, the reduction of ammonium nitrogen was observed at an average hydraulic retention time lower than 7 d. It is possible that the nitrification rate was influenced by the sewage temperature in the bioreactor, as shown in Figure 5, which indicates an increase of the process rate with increasing temperature, a common regularity. The presented relationship is statistically significant at α = 0.05.

From the beginning of the experiment until the end of variant I, a tendency of nitrate nitrogen reduction was observed, as shown in Figure 3. It is correlated with the lack of nitrification in this period. The average nitrate nitrogen concentration during variant I was 1.7 ± 0.8 mgN-NO3•dm−3 in the raw sewage and 0.77 ± 0.3 mgN-NO3•dm−3 in the treated sewage. In this case, the N-NO3 reduction was most likely caused by heterotrophic bacteria, which are able to develop much more quickly than the chemoautotrophic nitrifiers. The average reduction of this nitrogen form was 54.7%. A significant nitrate reduction can be observed in variant II of the process, where the average concentration in the outflow was 0.01 mgN-NO3•dm−3 against an inflow concentration of 1.4 mgN-NO3•dm−3, which means that the reduction was 99.3%. This is an example of intense denitrification occurring together with nitrification. Periodic feeding of the bioreactor with sewage (during the non-aeration phase) creates anoxic conditions in the activated sludge flocs and in the biofilm, mostly in its deeper parts, which enables the denitrification of the N-NO3 produced by nitrification in the outer layers of the biofilm or at the flocs' peripheries.
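The comparison invoked at the start of this paragraph, between the sludge loading with ammonium nitrogen and the measured nitrification rate, can be made explicit with the small sketch below. The flow rate is a placeholder value (the actual loadings are in Table 1, which is not reproduced in the text shown here); the biomass concentration is the average suspended-biomass value reported later in the paper.

```python
# Illustrative sketch (not from the paper): specific ammonium-nitrogen loading of
# the biomass versus the measured specific nitrification rate.
V = 0.147        # aeration tank volume, m3 (147 dm3, Methods section)
Q = 0.020        # sewage flow, m3/d -- hypothetical value for illustration only
M = 86.8         # suspended biomass concentration, g/m3 (average reported in the Discussion)
NH4_in = 63.2    # ammonium N in the inflow, gN/m3 (variant II average)
NH4_out = 46.9   # ammonium N in the outflow, gN/m3 (variant II average)

loading = Q * NH4_in / (V * M)                        # gN-NH4 per g biomass per day
nitrification_rate = Q * (NH4_in - NH4_out) / (V * M)

print(f"ammonium loading   : {loading:.3f} gN-NH4/(g*d)")
print(f"nitrification rate : {nitrification_rate:.3f} gN-NH4/(g*d)")
if loading > nitrification_rate:
    print("loading exceeds the nitrification capacity -> ammonium accumulates")
```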
The average rate of the denitrification process was 3.47 ± 5.13 gN-NOx•gsmo−1•d−1 in variant I and 7.56 ± 5.71 gN-NOx•gsmo−1•d−1 in variant II. Periodic feeding of the bioreactor with sewage and the shorter aeration time, which lowered the oxygen concentration in the bioreactor, increased the denitrification rate. The high denitrification rate, and therefore the high reduction of the oxidized form of nitrogen, resulted from the treated sewage being a source of easily decomposable substrates for the heterotrophic microbes. Figure 6 shows that when the COD/BOD5 ratio did not exceed 2.6, the denitrification rate was between 1.9 and 14.1 gN-NOx•gsmo−1•d−1. With an increasing share of hardly decomposable COD (a higher COD/BOD5 ratio), the denitrification rate decreased, and when the COD/BOD5 ratio reached 6, the process stopped.

The average concentration of nitrite nitrogen in the raw sewage during variants I and II was 0.05 ± 0.04 mgN-NO2•dm−3. Because N-NO2 is a transient form, the nitrite nitrogen concentration fluctuated during the whole process. In the treated sewage during variant I, the average concentration of N-NO2 was 0.041 ± 0.038 mgN-NO2•dm−3. A significant increase of the N-NO2 concentration was observed in variant II, when the average concentration was 0.32 ± 0.28 mgN-NO2•dm−3. The change of exploitation conditions (the decrease of the oxygen concentration in the bioreactor) is believed to inhibit the second step of the nitrification process, which causes the observed accumulation of this indicator in the treated sewage.

The reduction of total phosphorus from the sewage was insignificant during the whole experiment. The total phosphorus concentration in the raw sewage was 18.4 ± 4.9 mgPog•dm−3 in variant I and 19.8 mgPog•dm−3 in variant II. It can therefore be stated that over the whole tested period the concentration of this indicator was constant in time, as shown in Figure 3. For the treated sewage in variant I, no reduction of this index could be observed except at the very beginning and at the end of the experiment (the average concentration was 20.1 ± 9.3 mgPog•dm−3). This can be explained by the lack of developed suspended biomass in the bioreactor at the beginning of the experiment, so the microorganisms did not consume the phosphorus from the sewage. An improvement could be noticed during variant II: except for one case, the total phosphorus concentration in the bioreactor decreased during this period. Its average concentration was 16.2 ± 6.8 mgPog•dm−3, so the reduction level was 18.2%. Phosphorus was consumed by the suspended bacteria in an amount that covered their basic needs; however, an excessive uptake of this biogenic compound was not observed. This was most probably due to the lack of an anaerobic zone in the system, which resulted in a lack of PAO bacteria. It may also be assumed that, since a high denitrification rate was observed in variant II, simultaneous phosphorus uptake could occur as the result of denitrifying dephosphatation, as shown in Figure 7. The relationship between the denitrification rate (rdenitrification) and the total phosphorus concentration in the treated sewage is evidence of this: the calculated correlation coefficient r = 0.62 was statistically significant at the α = 0.05 level. The result of denitrifying dephosphatation, apart from the high N-NO3 reduction, was phosphorus removal from the sewage. This process was more intense during variant II of the experiment, which included the no-aeration feeding phases.
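The significance statement quoted above (r = 0.62 at α = 0.05) is the usual Pearson-correlation check; a sketch of how such a test is typically run is given below. The paired values are synthetic placeholders, not the data points of Figure 7, so they will not reproduce the paper's coefficient.

```python
# Sketch of a Pearson correlation test with a significance check at alpha = 0.05.
# The arrays below are synthetic placeholders for illustration only.
from scipy import stats

denitrification_rate = [1.9, 3.2, 4.8, 6.1, 7.5, 9.0, 11.2, 14.1]   # gN-NOx/(gsmo*d)
total_P_outflow = [21.0, 19.4, 18.5, 18.9, 16.8, 15.5, 16.1, 12.9]  # mgP/dm3

r, p_value = stats.pearsonr(denitrification_rate, total_P_outflow)
alpha = 0.05
print(f"r = {r:.2f}, p = {p_value:.4f}, significant at alpha={alpha}: {p_value < alpha}")
```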
The influence of the BOD5/Pgen ratio in the raw sewage on the total phosphorus concentration in the bioreactor outflow was also noticed. The lowest concentration of this indicator in the treated sewage was observed when BOD5/Pgen > 23, as shown in Figure 8. When the BOD5/Pgen ratio was lower, there was a lack of the easily decomposable substrate used by the poly-P bacteria in the dephosphatation process.

Discussion

The quantity and quality of sewage are highly determined by the characteristics of rural households. Tenants who have not had access to sewage collection and treatment systems are accustomed to saving water, which is the reason for higher pollutant concentrations in a smaller volume of water. Moreover, water usage patterns in the households cause a high heterogeneity of the sewage discharge, which exposes the treatment systems to unstable performance. In the case of systems operating on classic activated sludge, which have become more and more popular in Poland for treating small amounts of sewage, severe exploitation problems may occur, such as the rinsing out of sludge during significant hydraulic surges, which in consequence affects the treated sewage quality. A high irregularity of the pollutant concentrations in household sewage is also shown in the research carried out by Wąsik and Chmielowski [24], Kaczor et al. [25], Bugajski et al. [26], Nowak et al. [27], and Chmielowski et al.
[28].These authors, while testing the quality of sewage drained from the rural areas, noticed significant fluctuations in the BOD 5 values, COD, and/or total nitrogen.The analyses of the new sewage treatment methods show that the introduction of the biomass immobilized on the carriers as the submerged or mobile bed into the classic bioreactor with the active sludge increases the effectiveness of pollutant removal from the sewage.In the analyzed system, the obtained organic pollutants' removal depended on the variant and ranged from 68.9% to 91.9%: the removal of total suspended solids ranged from 68.6% to 80.1%, the removal of ammonium nitrogen ranged from 17.8% to 25.8%, the removal of N-NO 3 ranged from 54.7% to 99.3%.However, the removal of total phosphorus was maintained at about 18.2%.The presented results indicate the high effectiveness of the organic matter and N-NO 3 removal, and significantly better effectiveness was observed in the system that operated in the sequence of 60 min of no aeration/60 min of aeration with the periodic sewage dosage in the no-aeration phase.This situation is probably because heterotrophic bacteria that was capable of organic matter removal and denitrification dominated in the tested system, whereas the denitrification was the aerobic process.The average loading of the bioreactor with the COD loading was 2.43 gCOD•gsmo −1 •d −1 and 0.63 kgCOD•m −3 •d −1 , respectively.The average organic matter removal rate was 0.46 kgCOD•m −3 •d −1 , which equaled 1.19 gCOD•gsmo −1 •d −1 in variant I, and 0.49 kgCOD•m −3 •d −1 and 1.80 gCOD•gsmo −1 •d −1 in variant II.The same results described other researchers.Guo et al. [29] obtained similar results concerning the removal of sewage pollutants in a hybrid bioreactor filled with a bed made of the silk fibers.The results of the research showed a 90% reduction of COD and a 50%-80% reduction of total nitrogen.The cited researchers showed, similarly to the authors of the present paper, that the introduction of the raw sewage into the bioreactor during aeration decreases the effectiveness of nitrogen removal in comparison to the situation when the bioreactor is fed during the anoxic phase.Low denitrification effectiveness is caused by the lack of organic carbon, which acts as the electron donor for the denitrifiers, because the aerobic microorganisms oxidize a lot of the available carbon source. Helmer and Kunst [30] and Helmer et al. [31] tested the possibility of the removal of nitrogen compounds using the rotating biological contractor.The authors found that the concentration of nitrogen compounds decreased with an oxygen concentration of 1.0 mgO 2 •dm −3 and without the external organic carbon source.Ammonium nitrogen was transformed into gas nitrogen in autotrophic conditions.About 40% of inorganic and organic nitrogen was found to be transformed into the final gas products.Therefore, it was suggested that heterotrophic microorganisms are capable of nitrification and denitrification at the same time in aerobic conditions.Microbiological analyses showed that Thiosphaera pantotropha and Nitrosomonas sp. were able to nitrify/denitrify simultaneously.Similar results were obtained by Menoud et al. 
[32] in the analyses of sewage nitrogen transformations in the bioreactor with SIPORAX™ blocks.It was ascertained that as the result of the oxygen concentration gradient, nitrifiers develop in the external parts of the carrier, whereas denitrifiers develop in the internal parts.The performed research showed that the maximum capacity of the simultaneous nitrification/denitrification was 0.61/0.83kgN•m −3 •d −1 , and the oxygen concentration was above 1.0 mgO 2 •dm −3 .Rodgers [33], while testing the system with the plastic bed periodically submerged in the bioreactor and a constant sewage dosage, gained an organic matter removal rate as high as 3.8 kgCOD•m −3 •d −1 with a 92% efficiency of COD reduction. Research regarding a moving bed sequencing batch reactor has also been conducted by other authors.Cao et al. [34] analyzed the effect of dissolved oxygen concentration on oxygen diffusion and the bacterial community structure in a moving bed sequencing batch reactor.Sytek-Szmeichel et al. [35] tested the efficiency of wastewater treatment in MBSBBR systems in specified technological conditions with a sequence MBSBBR bioreactor.Dulkadiroglu et al. [36] modeled nitrate concentrations in a moving bed sequencing batch biofilm reactor using an artificial neural network technique.In the work Gilbert et al. [37], the low temperature partial nitritation/anammox in a moving bed biofilm reactor treating low strength wastewater was evaluated.Koupaie et al. [38] evaluated an integrated anaerobic/aerobic fixed-bed sequencing batch biofilm reactor for the decolorization and biodegradation of azo dye Acid Red 18, where the comparison of using two types of packing media was carried out.Bassin et al. [39] focused on effect of different operational conditions on biofilm development, nitrification, and nitrifying microbial population in moving-bed biofilm reactors.The research carried out by Lim et al. [40] concerned the enhancement of nitrogen removal in a moving bed sequencing batch reactor with intermittent aeration during an aerobic REACT period.Persson et al. [41] studied the structure and composition of biofilm communities in a moving bed biofilm reactor for nitritation/anammox at low temperatures.Jaroszy ński et al. [42] analyzed the impact of free ammonia on anammox rates in a moving bed biofilm reactor.The studies carried out by Zekker et al. [43] focused on the effects of anammox enrichment from reject water on blank biofilm carriers and carriers containing nitrifying biomass on the operation of two moving bed biofilm reactors. In the nitrifying beds loaded with a high amount of ammonium nitrogen, the nitrification rate ranged between 0.18-0.35gN-NH 4 •gsmo −1 •d −1 [44,45], and in the one found by Helmer et al. [31] in the rotating disk filters, the nitrification rate remained on the level of 7.2 mgN-NH 4 •gsmo −1 •h −1 with an oxygen concentration of 1.0 mgO 2 •dm −3 .The presented analyses show the nitrification rate at the level of 0.08 gN-NH 4 The increase of the denitrification rate in variant II of the experiment could have been caused by the periodic sewage dosage during the no-aeration phase.Such a type of sewage dosage, especially with a high concentration of dissolved COD fraction, significantly increased the denitrification process [46]. 
The high level of nitrates (III) reduction in the sewage, which was observed in the presented research regardless of the oxygen dissolved in the bioreactor, suggests the presence of microorganisms that are capable of denitrification in the aerobic conditions.This phenomenon is often observed in the systems with biomass immobilized on the carrier (e.g., circular, moving, membrane, and fluidal beds) or in the so-called granulated active sludge in the periodically operating systems [47][48][49][50][51][52][53][54][55][56][57].This is caused by the occurrence of the oxygen concentration gradients in the biological membrane or in the granulated active sludge.The occurrence of external aerobic layers together with the anoxic conditions in the deeper membrane layers is possible in this situation [58,59].Podedworna and Żubrowska-Sudoł [60] in the research of the moving-bed sequencing batch biofilm reactor, MBSBBR, obtained a similar reduction of organic pollution with the loading between 0.227-0.684kgCOD•m −3 •d −1 .The quoted authors gained nearly 100% nitrification of the ammonium nitrogen with the sludge age of 1-2 d, which resulted from the nitrifiers' immobilization on the bed.However, the high denitrification efficiency resulted from the increase of the bioreactor loading up to 0.528-0.687kgCOD•m −3 •d −1 . In the discussed experiment, the increase in the phosphorus removal effectiveness was observed together with the increase of the BOD 5 /P ratio.A similar phenomenon was observed by Vaboliene et al. [61] in the experiment in the active sludge bioreactor with the simultaneous nitrification/denitrification option.The lowest total phosphorus concentration was found with the BOD 7 /P ratio higher than 15.Although a relationship between the total phosphorus concentration in the outflow and the denitrification rate was observed, the denitrification caused only partial phosphorus removal from the sewage.In the anaerobic conditions, the efficacy of the energy production from the denitrifying dephosphatation amounted to about 40% in relation to the energy produced in the aerobic conditions.The denitrifying dephosphatation could have been limited in the tested system due to the low N-NO 3 concentration, which is the electron acceptor for the Poli-P bacteria [62].Mishima et al. [63], in their research of a flow bioreactor with the anoxic, anaerobic, and aerobic zones filled with the hygroscopic gel, found an increased phosphates intake during the denitrification process.A similar observation was found in the presented research. It seems that similarly to ammonium nitrogen, phosphorus was removed mainly due to the heterotrophic bacteria intake.Partly, this was caused by a low suspended biomass' concentration in the bioreactor, which enables the possibility of the increased phosphorus intake from the sewage.A relatively small amount of the suspended biomass in the bioreactor was observed, which was 86.8 mg•dm −3 on average.Similar observations were found by Hamoda and Al-Ghusain [64], where in the tested bioreactor with the submerged bed (the ceramic vertical elements), the suspended biomass constituted about 5% of the total biomass in the bioreactor, which corresponded with the 120 mg•dm −3 concentration in the beginning to 25 mg•dm −3 at the end of the bioreactor.The main suspension mass was attached to the bed. 
Conclusions

The work presents the effectiveness of the removal of organic pollutants, nitrogen, and phosphorus from household sewage in a hybrid bioreactor with a submerged fixed bed. The experiment was carried out in a laboratory model of the hybrid bioreactor in two exploitation variants: variant I, 120 min of aeration/60 min of no aeration with a constant sewage dosage, and variant II, 60 min of aeration/60 min of no aeration with a periodic sewage dosage in the no-aeration phase. The experiment was carried out on real sewage primarily treated in a septic tank. On the basis of the research, it can be concluded that the tested hybrid system provided a high reduction of organic pollutants and nitrate nitrogen from the sewage. Regarding the removal of organic pollutants and nutrients, the variant with 60 min of aeration and 60 min of no-aeration phases and a periodic sewage dosage during the no-aeration phase was more favorable; such operation intensifies the denitrification process. In the tested system, a low nitrification rate combined with high denitrification was observed, mostly in variant II. The frequent overloading of the system with ammonium nitrogen caused the inhibition of the nitrification process. Ammonium nitrogen as well as total phosphorus were removed mainly through uptake by heterotrophic microorganisms. The denitrifying dephosphatation process could have influenced total phosphorus removal, which is confirmed by the statistically significant relationship between the total phosphorus concentration in the treated sewage and the denitrification rate. Generally, on the basis of the performed study, the following conclusions can be formed:

(1) Pollutant removal from the sewage was higher and more stable once a biofilm had formed; the biofilm is much more resistant to sudden changes in conditions than the suspended biomass.

(2) In variant II (60 min of aeration and 60 min of no-aeration phases, with a periodic sewage dosage during the no-aeration phase), there was a greater removal of organic pollutants and nutrients than in variant I (constant sewage dosage 24 h a day, with changeable aeration cycles of 120 min with aeration/60 min with no aeration). The mean removal of organic compounds was 86.3% for BOD5 and 68.9% for COD in variant I, whereas for variant II these rates were 91.9% for BOD5 and 84.6% for COD. The ammonium nitrogen concentration in the outflow increased by 17.8% during variant I, whereas in variant II it was reduced by 25.8%.

(3) The rate of nitrification during variant II was equal to 0.08 ± 0.09 mgN-NH4•gsmo−1•d−1. If the sludge loading with ammonium nitrogen is much higher than the nitrification rate, then N-NH4 removal is less efficient, and the ammonium form accumulates in the bioreactor. The rate of nitrification was positively correlated with the sewage temperature in the bioreactor.

(4) The highest rate of denitrification was observed during variant II and was equal to 7.56 ± 5.71 gN-NOx•gsmo−1•d−1, in comparison to 3.47 ± 5.13 gN-NOx•gsmo−1•d−1 in variant I.

Figure and table captions:
Figure 1. Model of the hybrid bioreactor.
Figure 2. Changes of BOD5, CODCr, and total suspended solids values during the experiment.
Figure 3. Changes of ammonium nitrogen, nitrites, nitrates, and total phosphorus concentration during the experiment.
Figure 4. Nitrification rate and sludge loading with ammonium nitrogen during the experiment.
Figure 5. Relationship of the nitrification rate and temperature in the bioreactor.
Figure 6. Relationship between the COD/BOD5 ratio in the raw sewage and the nitrification rate.
Figure 7. Relationship of denitrification ratio and total phosphorus concentration in the treated sewage.
Figure 8. Relationship of the total phosphorus concentration in the treated sewage and the BOD5/Ptot ratio in the raw sewage.
Table 1. Sludge loading with ammonium nitrogen and hydraulic retention time in the bioreactor in the analyzed variants.
2019-03-12T05:35:39.753Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "502c842db2f92e7e5293407f3c0a60f85eb2bb2a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/10/8/2689/pdf?version=1533108809", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "502c842db2f92e7e5293407f3c0a60f85eb2bb2a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Economics" ] }
272200388
pes2o/s2orc
v3-fos-license
Hantavirus in Northern Short-tailed Shrew, United States Phylogenetic analyses, based on partial medium- and large-segment sequences, support an ancient evolutionary origin of a genetically distinct hantavirus detected by reverse transcription–PCR in tissues of northern short-tailed shrews (Blarina brevicauda) captured in Minnesota in August 1998. To our knowledge, this is the first evidence of hantaviruses harbored by shrews in the Americas.

Rodents and their ectoparasites serve as reservoirs and vectors of myriad viruses and other pathogenic microbes. In contrast, the role of insectivores (or soricomorphs) in the transmission and ecology of zoonoses is largely unknown. Because some soricomorphs share habitats with rodents, shrews might also be involved in the maintenance of the enzootic cycle and contribute to the evolutionary history and genetic diversity of hantaviruses. Hantavirus antigens have been detected in the Eurasian common shrew (Sorex araneus), alpine shrew (Sorex alpinus), Eurasian water shrew (Neomys fodiens), and common mole (Talpa europea) in Russia and the former Yugoslavia (1-3). More than 20 years ago, when Prospect Hill virus was discovered in meadow voles (Microtus pennsylvanicus) captured in Frederick, Maryland, USA, serologic evidence suggestive of hantavirus infection was found in the northern short-tailed shrew (Blarina brevicauda) (4). However, virus isolation attempts were unsuccessful, and molecular tools such as PCR were unavailable. Empowered by robust gene-amplification techniques and the complete genome of Thottapalayam virus (TPMV) isolated from the Asian house shrew (Suncus murinus) (5,6), we have identified a genetically distinct hantavirus in the northern short-tailed shrew.

Of the 30 northern short-tailed shrews tested, hantavirus M-segment sequences were amplified from lung tissues of 3 of 12 animals captured in Camp Ripley (46.185°N, 94.4337°W), a 53,000-acre, state-owned military and civilian training center near Little Falls, in Morrison County, Minnesota, USA, in August 1998 (Table 1). Pairwise alignment and comparison of the 1,390-nt region (463 aa) spanning the Gn and Gc glycoprotein-encoding M segment indicated differences of 33.6%-41.9% and 32.7%-47.4% at the nucleotide and amino acid levels, respectively, from representative hantaviruses harbored by Murinae, Arvicolinae, Neotominae, and Sigmodontinae rodents (Table 2). In a partial L-segment sequence, this newly identified virus, designated Camp Ripley virus (RPLV), differed from other hantaviruses, including Tanganya virus (TGNV), recently detected in shrews in Guinea (7), by 27.3%-28.8% and 24.0%-25.0% at the nucleotide and amino acid levels, respectively. The higher degree of sequence similarity in the L segment between RPLV and other hantaviruses probably signifies the limits of functional preservation of the RNA-dependent RNA polymerase. Repeated and exhaustive phylogenetic analyses based on nucleotide and deduced amino acid sequences of the M and L genomic segments generated by the maximum-likelihood method indicated that RPLV was distinct from rodent-borne hantaviruses (with high bootstrap support based on 100 maximum-likelihood replicates) (Figure). Similar topologies were consistently derived by using various algorithms and different taxa (including La Crosse virus) and combinations of taxa, which suggested an ancient evolutionary origin. However, definitive conclusions about the molecular phylogeny of RPLV and its relationship to TGNV and other soricid-borne hantaviruses must await complete-genome sequence analyses.

Conclusions

As we had previously encountered in sequencing the entire genome of TPMV (J.-W. Song, R. Yanagihara, unpub.
As we had previously encountered in sequencing the entire genome of TPMV (J.-W. Song, R. Yanagihara, unpub. data), the divergent genome of RPLV presented challenges in designing suitable primers for RT-PCR. We were also constrained by the limited availability of tissues from the 3 infected shrews and the need to retain small portions of tissues for future virus isolation attempts. Consequently, we have been hitherto unable to obtain the full-length sequence of RPLV.

The northern short-tailed shrew (family Soricidae, subfamily Soricinae), 1 of 2 poisonous mammals in North America (8), inhabits forests and grasslands within the central and eastern half of the United States, extending north to Canada, west to Montana, and south to Tennessee and Georgia. Cytochrome b mitochondrial DNA and 16S rRNA sequence analyses support a monophyletic origin for the genus Blarina, with phylogeographic structuring of northern short-tailed shrews into well-defined groups to the east and west of the Mississippi River basin (9). Current studies will examine whether RPLV is harbored by the eastern haplogroup of northern short-tailed shrews and by the southern short-tailed shrew (Blarina carolinensis), a closely related species, which inhabits the southeastern United States, extending as far north as southern Illinois and south-central Virginia and as far south as central Florida.

Given the sympatric and synchronistic coexistence of northern short-tailed shrews with Neotominae and Arvicolinae rodents (such as Peromyscus leucopus and Microtus pennsylvanicus) and their ferocious territorial behavior, hantavirus spillover may be possible. Viruses closely related antigenically to Hantaan virus have been isolated from the Asian house shrew (Suncus murinus), greater white-toothed shrew (Crocidura russula), and Chinese mole shrew (Anourosorex squamipes) (10-12), which suggests that shrews are capable of serving as incidental hosts for hantaviruses typically harbored by rodents.

Shrews that harbor genetically distinct hantaviruses pose a compelling conceptual framework that challenges the long-accepted dogma that rodents are the sole reservoirs of hantaviruses. Viewed within the context of the recent detection of TGNV in the Therese shrew in Guinea, the identification of RPLV in the northern short-tailed shrew in the United States indicates that renewed efforts, facilitated by the rapidly expanding sequence database of shrew-borne hantaviruses, will lead to the discovery of additional hantaviruses in soricids throughout Eurasia, Africa, and the Americas. Our preliminary studies indicate 3 other novel soricid-borne hantaviruses in the Republic of Korea, Vietnam, and Switzerland. To establish whether one or more of these newly identified hantaviruses is pathogenic for humans will require development of robust serologic assays (13) and application of other sensitive technologies, such as microarray analysis (14,15), for rapid detection of shrew-borne hantavirus RNA in human tissues and bodily fluids.
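The pairwise comparisons above reduce to simple arithmetic over aligned sequences. As a minimal sketch (not the authors' actual pipeline, which relied on standard alignment and maximum-likelihood phylogenetics software), the toy Python function below computes the percent nucleotide difference between two pre-aligned sequences; the example fragments are hypothetical.

```python
# Minimal sketch: percent difference between two pre-aligned sequences.
# Hypothetical toy data; the study's real comparisons used 1,390-nt
# M-segment alignments against reference hantavirus sequences.

def percent_difference(seq_a: str, seq_b: str) -> float:
    """Pairwise percent difference, ignoring alignment gap columns."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    compared = mismatches = 0
    for a, b in zip(seq_a, seq_b):
        if a == "-" or b == "-":       # skip gap columns
            continue
        compared += 1
        mismatches += a != b
    return 100.0 * mismatches / compared

# Hypothetical fragments standing in for RPLV vs. a rodent-borne virus:
rplv_fragment = "ATGGCTAGCTTAGGA-CCA"
ref_fragment  = "ATGACTTGCTTGGGAACCA"
print(f"{percent_difference(rplv_fragment, ref_fragment):.1f}% nt difference")
```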
Research on Teaching Reform of Civil Engineering Materials Based on Individuation and Interactivity

Teachers should take the initiative to develop students' individual strengths and drive the progress of the curriculum. The traditional civil engineering materials course has some obvious problems, such as redundant course content, an unreasonable scoring and assessment mechanism, and low output from post-class practice. As a result, students are less motivated in the classroom and do not gain real insight into the course. To overcome these limitations, this paper aims to optimize the traditional teaching system in three respects: (1) the teaching methods used in course delivery, (2) the course evaluation system, and (3) extracurricular practice output. In addition, it helps teachers carry out teaching activities for students from the perspective of individuality and interaction, and to cultivate applied talents. This paper also provides a reference for innovative teaching models.

Introduction

Civil Engineering Materials is a basic professional course in civil engineering education in colleges and universities, and its contents cover the various materials commonly used in the field of engineering [1,2]. The course is mainly offered in civil engineering, road and bridge engineering, river-crossing engineering, and architecture programs, and its contents are adapted to the different majors. Most universities in China offer this course in the first semester of the second year of undergraduate study. It plays a major role in the civil engineering curriculum and conforms to the national teaching reform orientation of 'building energy saving' [3-5]. The course faces some existing challenges, including complicated knowledge points, weak connections between chapters, and poor planning of the practical components and teaching sessions, owing to the special nature of the course itself [6]. Students therefore often lack autonomy and motivation in their learning and passively accept the traditional "cramming" teaching model, which makes it difficult to truly develop their practical ability and innovative thinking. As a result, graduates struggle to deal with practical engineering problems, which conflicts with the aims of national education at the present stage [7]. Many innovative teaching methods have been adopted by researchers in our country to improve teaching effectiveness, drawing on state-of-the-art education models from other countries [8-11]. These mainly include the CDIO teaching method [12,13], the flipped-classroom teaching model [14], interactive teaching methods [15,16], and the case-analysis method [17]. The methods listed above attempt to improve students' learning enthusiasm and innovation, and can enrich students' extracurricular knowledge to some extent. The main purpose of this paper is therefore to propose a new teaching method that addresses the limitations of existing teaching mentioned above. More specifically, the reform covers three key areas: reforming the interaction between teachers and students within the existing traditional teaching method; introducing university students' innovation and entrepreneurship training programs into practical teaching experiment classes such as civil engineering materials; and adjusting the assessment criteria and mechanism for experimental operation.
Problems in Traditional Teaching Programs

Teaching is a key step in helping students understand theoretical knowledge, practice scientific thinking, improve their application ability, and develop an innovative consciousness. However, the existing teaching model restricts students' thinking to the classroom, and students learn little outside it. Basically, the current teaching model comprises five steps: (a) teachers arrange the course content before teaching; (b) students get a first look at civil building materials through images; (c) teachers check during class whether students fully understand the knowledge; (d) students join a team to run some experiments and analyze the results; and (e) teachers use the final test to examine how well students have learned.

Lectures Are Mainly Demonstrations, and Students Have Little Access to Physical Objects

It is well known that the main thread of the civil engineering materials course is materials science. Taking the Civil Engineering Materials textbook published by China Building Industry Press as an example, its chapters are ordered as follows: construction steel, inorganic cementitious materials, cement concrete and mortar, masonry materials, asphalt materials, etc. For the construction materials described in each chapter, teachers often lecture using the following methods:

Text + multimedia (PPT)
Text + multimedia (PPT) + animation (mainly Flash)
Text + multimedia (PPT) + video (short videos within 5 minutes)
Text + multimedia (PPT) + multiple combinations

The main characteristic of these teaching methods is that the teacher plays the leading role in the classroom while students are only participants. This approach pays too much attention to the explanation of theoretical knowledge but ignores the importance of practical courses for students' learning, which makes it difficult for students to encounter the defects of civil materials or gain engineering experience. The reasons for this situation are as follows: (1) practical teaching resources are relatively scarce; and (2) the scheduling of theoretical and practical courses still needs further improvement.

Unreasonable Practice Sites and Practice Teaching Planning

The experimental practice of civil engineering materials at the current stage follows this procedure [18,19]. At the beginning, the teacher generally demonstrates how to operate the experimental equipment and how to record the experimental data. After that, students handle the experimental tests by themselves. Finally, the teacher evaluates and scores students' classroom performance according to the experimental report. Table 1 shows the basic experimental courses offered at this stage. Practical classes are not limited to the above courses, and also include decoration material experiments, masonry material experiments, plastic pipe performance experiments, etc. A traditional experimental practice class runs for 3 periods at one time, each period lasting 45-50 minutes. However, the teacher's explanation or video demonstration occupies much of the class, leaving students only 1-1.5 hours to practice. Students often need to stay past the end of class or add sessions after class to complete the experiment.
Narrow experimental sites and limited experimental funds are also major reasons that hinder the teaching effect of this experimental course. These factors lead students to lose interest in the later stages of the experiment. At the same time, the practice course in civil engineering materials accounts for only 20%-30% of the total course score, and the practice-course score itself is composed of: experimental report 50% + classroom performance 30% + classroom attendance 20%. This evaluation system often makes students pay too much attention to the examination component and lose enthusiasm for practical operation. In addition, the criteria for classroom performance are not transparent: some team members can still get high scores even if they do not work hard, which is likely to cause dissatisfaction among students who do work hard and a perceived loss of fairness.

Little Student-Driven Output After Class

Innovative teaching methods and skills are key to the evolution of engineering courses. In the next few years, new teaching models will require more interactive, intelligent, and personalized teaching methods and techniques [24]. The civil engineering materials course should focus on classroom teaching and practice. Since students have little contact with experiments in class, theory and practice are seriously disconnected. The 'output-oriented method' starts from output and ends at output, paying special attention to the effective evaluation of students' output [25]. The traditional teaching mode mainly focuses on input; not only teachers but even students themselves ignore output after class. Student participation in after-class practice activities was statistically analyzed for about 130 civil engineering students. Table 2 indicates that students who did not participate in after-class practice activities have poorer professional instrument-operation skills and personal development, leading to low classroom evaluations. This also shows that teachers should effectively guide students in relieving 'learning fatigue' through practical teaching.

Teaching Reform Measures for the Civil Engineering Materials Course

Given the teaching problems mentioned above, how to 'give a good class' and how to 'cultivate a good person' have become urgent problems to be solved. The authors therefore propose some curriculum reform measures based on their teaching experience, aimed at cultivating students' individuality and increasing their interaction.

Adjusting and Innovating the Subject Teaching Method

Building on traditional teaching, students should take the initiative to understand how to learn civil engineering materials, and a research-oriented teaching practice model can be developed. First, the demonstrative part of the class should be improved. On the one hand, special sample rooms can be set up in the laboratory, which allows teachers to move teaching facilities into the laboratory and show geotechnical materials such as cement, concrete, and lime to students as physical objects. On the other hand, teachers should also make or collect animations and videos of experiments, tests, and engineering applications of civil engineering materials. When practical conditions or practical class hours are limited, these multimedia materials can be played in class.
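As a side note on the scoring scheme described above, the sketch below works through the arithmetic of the pre-reform grade weighting. The student record is hypothetical; the weights (50/30/20 within the practice course, which itself counts for 20%-30% of the total) are taken from the text.

```python
# Minimal sketch of the pre-reform grading arithmetic described in the text.
# Weights come from the article; the student scores are hypothetical.

PRACTICE_WEIGHTS = {"report": 0.50, "performance": 0.30, "attendance": 0.20}

def practice_score(report, performance, attendance):
    """Weighted practice-course score on a 0-100 scale."""
    return (PRACTICE_WEIGHTS["report"] * report
            + PRACTICE_WEIGHTS["performance"] * performance
            + PRACTICE_WEIGHTS["attendance"] * attendance)

def total_score(practice, final_exam, practice_share=0.25):
    """Total course score; practice_share is 20%-30% per the article."""
    return practice_share * practice + (1 - practice_share) * final_exam

p = practice_score(report=85, performance=70, attendance=100)
print(f"practice: {p:.1f}, total: {total_score(p, final_exam=78):.1f}")
```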
In addition, some research-oriented teaching activities should be arranged according to the course schedule, with teams of 4-5 students attending these activities. The team leader collects the information of all members and submits it to the teacher, and is responsible for assigning jobs to the members and evaluating each member's contribution, which becomes the basis for the teacher's final grading. Teachers can set relevant research topics according to the teaching content of the civil engineering materials course and the arrangement of the teaching schedule, as in Table 3. Based on the discussion content, students are required to complete data collection, group communication, PPT production, an on-site report, etc., and to incorporate current domestic and foreign research on new technologies and new materials. The teacher answers questions and reserves 10 minutes at the end of each class for each group to report on its work. This research-based learning method can effectively mobilize students' learning enthusiasm and can be used as a basis for teachers to grade classroom performance.

Improving the Teaching Evaluation System of Practical Courses

The traditional evaluation system is characterized by its emphasis on the final examination. Taking the Ningbo Institute of Technology, Zhejiang University as an example, the final examination papers are produced by the teacher, and the academic affairs office decides which paper is used as the final examination; the remaining paper is reserved for the make-up examination. Based on an analysis of students' review for the final exam, nearly 84.2% of students believe that they can achieve an ideal result in the final exam of civil engineering materials by rote learning alone. This is related to the key knowledge points highlighted by the teacher in class; the high proportion of this key knowledge in the final test can produce students who achieve high scores but have little practical ability. Table 4 shows the structure of the final exam for civil engineering materials: the proportion of questions on experimental operation was increased in the post-reform exam paper. Examples include: listing 3-4 instruments in the civil engineering laboratory and explaining their functions; evaluating the laboratory curriculum and its teaching; and correcting mistakes made by students in the lab. These methods are conducive to cultivating application-oriented talent in the civil engineering specialty and enhancing students' enthusiasm for the practical course, thus playing a positive role [26,27].

Enhancing Extracurricular Practice Output

At present, the SRT research platform, which is generally set up by domestic universities on the basis of college scientific research projects and capped by the national innovation and entrepreneurship training program for college students, is a good platform for after-class practice (Figure 1). On the one hand, its results can effectively reinforce the classroom knowledge of civil engineering materials and further expand it. On the other hand, it can significantly improve students' comprehensive quality and capacity for scientific research and innovation.
Table 5 lists the university-level innovation training programs and national college students' innovation programs applied for in 2018 by civil engineering students of the Ningbo Institute of Technology, Zhejiang University; civil engineering materials topics were the most popular projects among the undergraduates. Apart from differences in the professional guidance offered by teachers in the sample schools, the subject of civil engineering materials makes it easier for undergraduates to begin experimental work, mainly because they have already been exposed to the civil engineering materials practice course and have a preliminary understanding of how to operate experiments. Teachers should work together to carry out scientific research training for students according to their interests and knowledge base.

Conclusion

In the contemporary education and teaching system, the key to educational reform is to cultivate innovative students and stimulate their enthusiasm for study [28]. Through the teaching reform of the civil engineering materials curriculum, students can give full play to their central role and teachers' work can gain more recognition. In view of the problems in the traditional teaching mode of civil engineering materials, Figure 2 proposes improvements to the traditional model in three areas: course teaching methods, the course evaluation system, and extracurricular practice output. We hope that this mode can serve as a reference for efficient, innovative teaching. Under the new practical teaching mode of civil engineering materials, students can apply the knowledge learned in class to their future careers. At the same time, with the advent of the information technology era, more and more multimedia and remote teaching technologies are being introduced into the classroom [29,30]. How to apply these technologies to the teaching of civil engineering materials will also be a major point for future consideration.
Phenotypic Selection on Flower Color and Floral Display Size by Three Bee Species

Plants exhibit a wide array of floral forms, and pollinators can act as agents of selection on floral traits. Two trends have emerged from recent reviews of pollinator-mediated selection in plants. First, pollinator-mediated selection on plant-level attractants such as floral display size is stronger than on flower-level attractants such as flower color. Second, when comparing plant species, distinct pollinators can exert different selection patterns on floral traits. In addition, many plant species are visited by a diverse array of pollinators, but very few studies have examined selection by distinct pollinators. In the current study, we examined phenotypic selection on flower color and floral display size by three distinct bee species, the European honey bee, Apis mellifera, the common eastern bumble bee, Bombus impatiens, and the alfalfa leafcutting bee, Megachile rotundata, foraging on Medicago sativa. To estimate phenotypic selection by each bee species, and by all bees combined, simultaneously and on the same group of plants, we introduce a new method that combines pollinator visitation data with the seed set and floral trait measurements typical of phenotypic selection studies. When comparing floral traits, all bee species selected on the number of racemes per stem and the number of stems per plant, two components of floral display size. However, only leafcutting bees selected on hue, or flower color, and only bumble bees selected on chroma, or darkness of flowers. Selection on chroma occurred via correlational selection between chroma and the number of open flowers per raceme, and we examine how correlational selection may facilitate the evolution of flower color in plant populations. When comparing bee species, the three bee species exerted similar selection patterns on some floral traits but different patterns on others, and differences in selection patterns were observed between flower-level and plant-level attractants. The trends detected were consistent with previous studies, and we advocate the approach introduced here for future studies examining the impact of distinct pollinators on floral trait evolution.

INTRODUCTION

Plants exhibit a high level of floral trait diversity. Flower size, flower color, flower shape, and various aspects of floral display size can vary among plants in a population, among populations of a plant species, or among plant species (Brunet, 2009; Dart et al., 2012). The role of pollinators in shaping such floral diversity has been of great interest to evolutionary biologists (Galen, 1996; Fishman and Willis, 2008; Harder and Johnson, 2009; Sletvold et al., 2017). In the last three decades, attention has focused on identifying the role of pollinators, as opposed to other biotic or abiotic factors, as agents of selection on floral traits (Strauss and Whittall, 2006; Parachnowitsch and Kessler, 2010; Caruso et al., 2019). Two literature reviews of phenotypic selection in plants have indicated that selection on floral traits by pollinators tends to be greater than by herbivores (Parachnowitsch and Kessler, 2010; Caruso et al., 2019) but can be of similar strength as selection by abiotic factors (Caruso et al., 2019).
To isolate the impact of pollinators on selection of floral traits, it has been suggested to measure phenotypic selection in two groups of plants, one hand-pollinated and one open-pollinated (Fishman and Willis, 2008; Sandring and Ågren, 2009; Parachnowitsch and Kessler, 2010; Sletvold et al., 2017). Selection gradients are estimated for each group, and the difference in the selection gradients between the hand-pollinated and the open-pollinated treatments is attributed to pollinator-mediated selection on the floral traits of interest. When concentrating on directional selection in studies that compared hand-pollinated and open-pollinated treatments, two patterns emerged. First, pollinators differentially selected on distinct categories of floral traits. Selection was strongest on floral traits associated with pollinator efficiency, such as the length of the corolla tube, followed by plant-level traits associated with pollinator attraction, such as floral display size; selection was weakest for flower-level traits associated with pollinator attraction, such as flower size and flower color (Caruso et al., 2019). Second, distinct pollinators had different impacts on the selection of floral traits and, among plant species, long-tongued flies or birds tended to exert the strongest selection on floral traits and Lepidoptera the weakest (Caruso et al., 2019).

Few studies have compared selection by distinct pollinators within a plant species (Sahli and Conner, 2011; Kulbaba and Worley, 2012, 2013). Conflicting selection among pollinators was identified for some floral traits, while for other traits distinct pollinators exerted similar patterns of selection (Sahli and Conner, 2011). For example, in Polemonium brandegeei, hummingbirds selected for stigmas exserted beyond the anthers and for longer and wider corolla tubes, while hawkmoths selected for stigmas recessed below the anthers and for narrower corolla tubes (Kulbaba and Worley, 2012, 2013). These studies examined each pollinator separately and on different sets of plants (Kulbaba and Worley, 2013) or examined pollinators separately and combined in cages (Sahli and Conner, 2011). But plants in natural populations are differentially visited by distinct pollinators whose abundance and efficiency vary, and it would be useful to quantify the impact of the major pollinators on the floral traits of interest simultaneously and on the same group of plants.

Pollinator visitation has been used as a proxy for reproductive success in some phenotypic selection studies (Campbell et al., 1997; Zhao and Wang, 2015). Here, we propose to combine pollinator observations with measurements of seed set and floral traits of plants to examine phenotypic selection on floral traits by distinct pollinators. Each plant in the population is expected to receive differential visits by the distinct pollinator species, and a different proportion of its flowers will be visited by each species. Such proportions can be used to differentially attribute the seeds set on a plant to the distinct pollinators. In addition, data on pollinator efficiency can be combined with flower-visit data to proportionally attribute seeds to each pollinator species. Relative fitness (RF) and phenotypic selection can then be measured within each bee species. Phenotypic selection by all pollinators combined can be measured using total seed set per plant to calculate RF.
This approach is developed here to illustrate how phenotypic selection by all pollinators combined can be differentially attributed to each pollinator species, as we study phenotypic selection on flower color and floral display size by three bee species in Medicago sativa. We determine whether selection by pollinators is stronger for plant-level attractants like floral display size than for flower-level attractants such as flower color (Caruso et al., 2019). We also examine whether the three bee species exert similar or different patterns of selection on these floral traits and how this translates into the overall pattern of selection on the plants. Measuring selection on floral traits by different bee species on the same group of plants provides a more realistic depiction of pollinator-mediated selection on floral traits in plant populations.

Study Species

Medicago sativa L. is an open-pollinated perennial legume that requires bees for seed production. Flowers are clustered into racemes, and plants exhibit variation in the number of open flowers per raceme, the number of racemes per stem (inflorescence), and the number of stems per plant; flower color can also vary, ranging from shades of purple, to white, to yellow (Bauer et al., 2017). Medicago sativa flowers require tripping for pollination, whereby pollinators apply pressure to the keel of the flower, which releases its anthers and stigma. Flowers remain open following tripping, but there is little evidence of further pollen deposition by pollinators on already tripped flowers (J. Brunet, pers. obs.). The tripping rate, the proportion of visited flowers that are tripped by a pollinator, varies among bee species (Cane, 2009; Brunet and Stewart, 2010; Pitts-Singer and Cane, 2011). Typically, alfalfa leafcutting bees have the highest tripping rate, followed by bumble bees and finally honey bees (Pitts-Singer and Cane, 2011; Brunet et al., 2019). Honey bees (Apis mellifera) and alfalfa leafcutting bees (Megachile rotundata) are used as managed pollinators in alfalfa seed production fields. In addition, many wild bee species, including the common eastern bumble bee (Bombus impatiens), are known to visit and effectively pollinate alfalfa (Brookes et al., 1994; Brunet and Stewart, 2010).

Experimental Set Up

Five patches of M. sativa with 81 plants per patch, initially planted 0.3 m apart, were set up in a linear arrangement at the West Madison Agricultural Research Station in Madison, WI. One bumble bee hive was set up at the center edge, one honey bee hive 30 m away, and a leafcutting bee domicile was set up at the northwest corner facing southeast. About 1.2 lbs of leafcutting bees were released prior to M. sativa peak bloom. Floral trait and fitness measurements were obtained from all flowering plants in the two center patches, where each plant was numbered.

Floral Traits

The floral traits examined in this study included components of floral display size and flower color. For floral display size, we recorded the number of stems per plant, racemes per stem, and open flowers per raceme. For each flowering plant, we counted the number of stems and the number of racemes per stem on ten randomly selected stems, or on all stems if a plant had fewer than ten stems. The number of open flowers per raceme was recorded on ten randomly selected racemes per plant. The average number of racemes per stem and open flowers per raceme were tabulated for each plant.
Flower color was determined from spectral measurements of the banner petal for three flowers per plant using a USB 4000 spectrophotometer (Ocean Optics, Orlando, FL; 350-1,000 nm). Reflectance data were analyzed using SpectraSuite v.10.7.1 software (Ocean Optics). Flowers of M. sativa do not reflect in the UV range (Bauer et al., 2017), and spectral measurements were taken in the visible light range (400-700 nm). We used equations from Endler (1990), as modified by Smith (2014), to calculate three components of flower color: chroma (darkness or saturation), hue (color), and reflectivity (brightness). Details of these calculations can be found in Bauer et al. (2017). A plant value represented the average of the three flowers. An alternative would have been to use a hexagon color vision model, a method that considers bee photoreceptors when quantifying color (Chittka, 1992; Chittka and Kevan, 2005). We have used such models to examine how flower color affected the choice of plants by bees (Bauer et al., 2017). However, in this study, while bees may be doing the selecting, they are selecting on the plant traits and not on their perception of those traits. We thus chose hue, chroma, and reflectivity to describe flower color. While the best method to quantify flower color when pollinators are selecting on the trait may deserve further attention, such discussion is beyond the scope of this study.

Female Reproductive Success

We used the total number of seeds produced per plant as a measure of female reproductive success. On each plant, on ten randomly selected stems or all stems if a plant had fewer than ten stems, we counted the number of pods per stem. A pod is a fruit developing from one flower on a raceme. We collected ten randomly selected fruiting racemes per plant and placed each one in an individually marked paper coin envelope. In the laboratory, the number of pods per raceme was recorded, and pods were shredded to obtain the number of seeds per raceme. For each plant, we obtained the average number of mature seeds per pod per raceme and, using the 10 fruiting racemes per plant, we calculated the average number of mature seeds per pod on a plant. To obtain the total number of seeds set per plant, we multiplied the average number of pods per stem by the average number of seeds per pod, and multiplied this value by the number of stems produced on the plant (a worked sketch of this arithmetic follows below).

Proportion of Seeds Attributable to Each Bee Species

To estimate the proportion of seeds on each plant attributable to a given bee species, we used available data on the number of flower visits to a plant by each of the three bee species. Pollinator visitation data were collected on these plants during a two-week observation period at peak bloom for M. sativa in the year of the study (Bauer et al., 2017). To determine the number of pollinator visits to a plant, we followed bees in a patch, and two observers recorded each plant visited by a bee, the number of racemes visited on a plant, and the flowers visited per raceme on each plant, until the bee left the patch or was lost to the observers. This provided floral visits by at least one of the three pollinators for most plants in the two patches (Supplementary Data). The pattern of visitation in the patches was typical for the major bee species visiting M. sativa throughout its flowering period.
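As flagged above, here is a minimal sketch of the per-plant seed-set arithmetic (average pods per stem x average seeds per pod x number of stems, as given in the text); all measured values below are hypothetical.

```python
# Minimal sketch of the total-seed-set arithmetic described in the text.
# The sampled values below are hypothetical.

from statistics import mean

pods_per_stem = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]    # ten sampled stems
seeds_per_pod = [3.2, 2.8, 3.5, 3.0, 2.9, 3.1,
                 3.3, 2.7, 3.0, 3.4]               # ten fruiting racemes
n_stems = 23                                       # stems on the plant

total_seeds = mean(pods_per_stem) * mean(seeds_per_pod) * n_stems
print(f"estimated total seed set: {total_seeds:.0f}")
```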
To attribute the number of seeds to a bee species based on the number of flower visits, for each plant we multiplied the proportion of flowers visited by each of the three bee species by the number of seeds set on that plant. This approach links, for each plant, the pollinator visitation data to its seed set during that period, as seeds were collected about four weeks after the pollinator observations, the period it takes for fruits and seeds to mature in this plant species. The number of flowers visited by a bee species is a useful measure of pollinator visits, but to better link floral visits to seed set we also integrated the tripping rate of each bee species into the floral visitation data. In M. sativa, flowers must be tripped before they can produce seeds, and tripping rate varies among bee species (Cane, 2009; Pitts-Singer and Cane, 2011). We therefore obtained a second measure of pollinator visits, which combined floral visits with the tripping rate of a bee species. Previous observations in the area indicated a tripping rate of 55% for bumble bees, 25% for honey bees (Brunet and Stewart, 2010), and 80% for leafcutting bees under warm temperatures typical of alfalfa seed-production fields (Brunet et al., 2019). For each plant, the number of flowers visited by a bee species on that plant was multiplied by the bee-species-specific tripping rate. We call this measure the number of flowers tripped by a bee species. For each plant, we calculated the number of flowers tripped by each bee species and the proportion of flowers tripped by each bee species. We multiplied these proportions by the number of seeds set on the plant to assign seeds to each of the three bee species based on the number of tripped flowers.

Plant Relative Fitness

Plant relative fitness (RF) was estimated by dividing the absolute fitness of a plant by the mean absolute fitness of the group of plants under consideration (Lande and Arnold, 1983). The absolute fitness of a plant was quantified as the number of seeds set on the plant. RF was obtained for all plants for which floral trait measurements were available (N = 153). We calculated the RF of a plant over all bees, based on the total number of seeds it produced, and within each bee species. Within a bee species, RF was the number of seeds on a plant attributable to that bee species, based either on the proportion of flowers visited or the proportion of flowers tripped by the bee species, divided by the mean for that bee species. Using this approach, the mean RF was 1.0 within each bee species, and potential differences in seed production across pollinators were removed. We also calculated the opportunity for selection for overall RF and for RF by bee species based on the proportion of visits or the proportion of tripped flowers. Opportunity for selection was measured as the variance in RF. (A minimal sketch of the seed-attribution and RF calculations appears below.)

Phenotypic Selection

To measure phenotypic selection, we examined the relationship between the trait value of a plant and its RF (Lande and Arnold, 1983). Each floral trait examined was scaled such that its mean was 0 and its variance was 1: (trait value - trait mean)/trait standard deviation. We performed phenotypic selection analyses on the number of stems per plant, the number of racemes per stem, the number of open flowers per raceme, and hue, chroma, and reflectivity. We first performed phenotypic selection analyses using RF calculated over the total seed set of a plant.
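As flagged above, the following sketch works through the seed-attribution, relative-fitness, and opportunity-for-selection calculations just described. The per-plant visit counts and seed totals are hypothetical; the tripping rates (0.55, 0.25, 0.80) come from the text.

```python
# Minimal sketch: attribute seeds to bee species via tripped-flower
# proportions, then compute relative fitness (RF) and the opportunity
# for selection (variance in RF). Visit counts are hypothetical.
import numpy as np

TRIP_RATE = {"bumble": 0.55, "honey": 0.25, "leafcutter": 0.80}

# rows = plants; columns = flowers visited by each bee species
visits = np.array([[12, 8, 2],
                   [20, 3, 0],
                   [5, 10, 4]], dtype=float)
seeds = np.array([700.0, 450.0, 300.0])   # total seeds per plant

# flowers tripped = visits * species-specific tripping rate
tripped = visits * np.array([TRIP_RATE["bumble"],
                             TRIP_RATE["honey"],
                             TRIP_RATE["leafcutter"]])
props = tripped / tripped.sum(axis=1, keepdims=True)

# seeds attributable to each bee species on each plant
seeds_by_bee = props * seeds[:, None]

# RF within a bee species: attributed seeds / species mean (mean RF = 1)
rf = seeds_by_bee / seeds_by_bee.mean(axis=0)
opportunity = rf.var(axis=0)               # variance in RF per species
print(seeds_by_bee.round(1), rf.round(2), opportunity.round(2), sep="\n")
```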
We also examined phenotypic selection within each bee species, where RF was calculated as explained earlier, based either on the proportion of flowers visited or the proportion of flowers tripped by a bee species on each plant. RF was relativized and traits were standardized within each bee species, which eliminated any potential differences in traits or fitness across bee species. The number of plants was the same for overall fitness and within each bee species, and represented plants with both floral trait and seed set data (N = 153). The number of plants that received no visits by a specific bee species did vary, with a greater number of plants not visited by leafcutting bees (N = 107 plants), followed by honey bees (N = 46) and lastly bumble bees (N = 16).

We used regression analyses, examining linear and non-linear regressions, to estimate the various selection parameters, following the methods suggested by Lande and Arnold (1983). Untransformed variables were used to obtain the values of the selection coefficients. To obtain the statistical significance of the selection coefficients, RF values were log transformed in order to improve the model's residuals. This procedure was followed because selection coefficients are not known to be affected by a poorly fit model while the probability values are (Lande and Arnold, 1983; Mitchell-Olds and Shaw, 1987; Brodie and Janzen, 1996). In addition, due to the large number of zeros, the model's residuals for leafcutting bee still indicated a poor fit to the data after transforming RF. We therefore used bootstrapping to estimate the 95% confidence intervals around the selection coefficients and determine whether they were statistically significant (Davison and Hinkley, 1997). We used bootstrapping in all cases for comparison purposes. We performed 1,000 bootstraps using the bootstrap function in the package "boot" (Canty and Ripley, 2020) in R (version 3.6.1).

For directional selection, we estimated the selection differential (Si), which represents the change in the population mean of trait (i) after selection (Arnold and Wade, 1984). The selection differential can be obtained from the slope of a linear regression between the standardized value of a trait and the corresponding plant RF. This coefficient includes both direct and indirect selection, and multiple regression analyses were performed to isolate direct selection. The partial regression coefficient for a trait represents the selection gradient (βi) for that trait (i) and illustrates direct selection on a trait after removing indirect selection from all other traits present in the analysis. When traits are correlated, a trait that appears to respond to selection may simply be correlated with the trait under selection, hence the need to isolate direct selection. The coefficients S and β both represent directional selection; a positive value indicates that the phenotypic mean of a trait (i) increases under selection, while it decreases when values of Si or βi are negative. Because selection can also be non-linear and act on the shape of the trait distribution, we first added a quadratic term to the single regression and obtained the non-linear (quadratic) selection differential Cii (Table 1), where C22 is the quadratic term of the single regression.
We then performed multiple regressions with linear, quadratic, and cross-product terms to obtain the non-linear or quadratic selection gradient γii, represented by the partial regression coefficient for the quadratic term, and to detect correlational selection γij using the partial regression coefficients for the cross-product terms (Table 1; Brodie, 1992; Roff and Fairbairn, 2012). The quadratic coefficient gradients were estimated as double the quadratic regression coefficients (Stinchcombe et al., 2008; Sahli and Conner, 2011). We graphically illustrated the statistically significant cross-product terms representing correlational selection gradients using the function "persp" in R (R Core Team, 2019). To represent the non-linear selection for the statistically significant quadratic selection gradients, we used generalized additive models (GAMs) with the "mgcv" package in R (Wood, 2011). These models automatically fit a spline regression (Wood, 2011). Results from the GAMs were plotted using "ggplot2" (Wickham, 2016) and "gridExtra" (Auguie, 2017).

Besides using regression analyses, we also examined the distributional selection gradient on the floral traits (Henshaw and Zemel, 2017). This measures total selection on a trait and can be broken down into a directional component (dD), illustrating selection on a trait mean, and a non-directional component (dN) that reflects selection on the shape of the trait distribution. This approach permits estimation of the general selection differential (S) and selection gradients (β). We used the R code available from GitHub to run distributional selection differential analyses on our data, following Henshaw and Zemel (2017).

TABLE 1 | Selection parameters obtained based on different regression analyses using plant relative fitness and standardized floral traits.
- Linear model, single regression: S, the selection differential; the slope; includes both direct and indirect selection.
- Linear model, multiple regression: β, the selection gradient; a partial regression coefficient; direct selection.
- Non-linear model (linear and quadratic terms), single regression: Cii, the non-linear or quadratic selection differential (C22 is the quadratic term of the single regression).
- Non-linear model (linear, quadratic and cross-product terms), multiple regression: γii, the non-linear or quadratic selection gradient; the partial regression coefficient of the quadratic term.
- Non-linear model (linear, quadratic and cross-product terms), multiple regression: γij, the correlational selection gradient; the partial regression coefficient of the cross-product term.
The subscript i indicates trait(i) and j indicates a separate trait.

RESULTS

The opportunity for selection was 1.74 for overall fitness, calculated using total seed set per plant. When RF was based on the proportion of flower visits, the opportunity for selection was 1.24 for bumble bee, 2.53 for honey bee, and 46.48 for leafcutting bee. The average seed set attributable to each bee species, based on the proportion of flower visits, was 396.27 seeds per plant for bumble bee, 261.03 for honey bee, and 61.68 for leafcutting bee. When RF was based on the proportion of tripped flowers, the opportunity for selection was 1.15 for bumble bee, 3.16 for honey bee, and 35.54 for leafcutting bee. The average seed set of a plant attributable to each bee species was 453.32 seeds for bumble bee, 187.24 seeds for honey bee, and 84.25 seeds for leafcutting bee.
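The study's analyses were run in R (packages "boot" and "mgcv"); the hypothetical Python sketch below only illustrates the core of the Lande and Arnold (1983) procedure described above: the selection differential S as the OLS slope of relative fitness on a standardized trait, with a bootstrap percentile confidence interval. All data in the sketch are simulated, not from the study.

```python
# Illustrative sketch (not the authors' R code): selection differential S
# as the OLS slope of relative fitness on a standardized trait, with a
# bootstrap percentile CI. Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 153
trait = rng.normal(50, 10, n)                  # e.g., racemes per stem
fitness = 5 * trait + rng.normal(0, 80, n)     # seed set, with noise
rf = fitness / fitness.mean()                  # relative fitness
z = (trait - trait.mean()) / trait.std()       # standardized trait

def slope(z, rf):
    """OLS slope of rf on z; equals S for a standardized trait."""
    return np.polyfit(z, rf, 1)[0]

S = slope(z, rf)

# 1,000 bootstrap replicates, resampling plants with replacement
boot = np.array([slope(z[idx], rf[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(1000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"S = {S:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
# For quadratic gradients (gamma_ii), the fitted quadratic coefficient
# would be doubled, per Stinchcombe et al. (2008).
```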
All Bees Combined

Over all pollinators combined, we observed a positive directional selection differential S and selection gradient β on the number of stems per plant, indicating selection to increase the number of stems per plant (Table 2). For the number of racemes per stem, there was a statistically significant positive directional selection differential S and selection gradient β, indicative of selection for an increase in the number of racemes. However, we also detected a statistically significant negative quadratic selection differential C22, although the quadratic selection gradient γii was not statistically significant, suggesting indirect non-linear selection on the number of racemes per stem (Table 2).

Bumble Bee

For bumble bees, the positive directional selection differential S and gradient β were both statistically significant for the number of racemes per stem and for the number of stems per plant, suggesting selection to increase both traits (Table 3). In addition, we observed a statistically significant positive correlational selection gradient between the number of open flowers on a raceme and the darkness of a flower (chroma) (γFlrChr) in all cases except when RF was based on the number of tripped flowers and the statistical significance of the selection coefficients was tested using bootstrapping (Table 3). Bumble bees favored plants with more open flowers per raceme and with darker flowers (Figure 1B). There was also a positive correlational selection gradient between the number of open flowers and flower reflectivity (γFlrRef), but it was only statistically significant when RF was based on the proportion of flowers visited and when the log-transformed RF regression model was used to detect the significance of the selection coefficients (Table 3).

Honey Bee

For honey bee, there was a statistically significant negative quadratic selection differential (C22) and quadratic selection gradient (γii) for the number of racemes per stem, indicative of non-linear selection (Table 4). Results of the spline regression analysis indicate that honey bees exert some stabilizing selection on the number of racemes per stem (Figure 2A). For the number of stems per plant, both the positive directional selection differential S and selection gradient β were statistically significant, but we also detected non-linear positive selection, suggestive of disruptive selection, with a statistically significant quadratic selection differential (C22) and gradient (γii), but only when bootstrapping was used to determine the statistical significance of the selection coefficients (Table 4). The coefficient of directional selection was much larger than the non-linear selection coefficient (Table 4), which translated into mostly directional selection for an increased number of stems per plant, as indicated by the spline regression analysis (Figure 2B).

[Footnote to Tables 2-5, not shown: selection coefficients were obtained as summarized in Table 1, and their statistical significance was determined using either regression models with log-transformed relative fitness or bootstrapping; for leafcutting bee (Table 5), only bootstrapping was used, due to the high number of plants with no leafcutting bee visits. P, probability; NS, not statistically significant; Flr, open flowers per raceme; Chr, flower chroma; Rcps, racemes per stem; stpp, stems per plant; hue, flower hue.]
Patterns were similar whether RF was based on the proportion of visited or tripped flowers (Table 4).

Leafcutting Bee

For leafcutting bee, only bootstrapping was used to determine the statistical significance of the selection coefficients. For the number of racemes per stem, we detected both directional and non-linear selection (Table 5). There was a statistically significant positive selection differential S and gradient β, but also a statistically significant negative quadratic selection differential C22 and gradient γii, at least when RF was based on the proportion of visited flowers (Table 5). The spline analysis indicated that leafcutting bees exerted some stabilizing selection on the number of racemes per stem (Figure 3A). For the number of stems per plant, we detected a positive directional selection differential S and gradient β favoring plants with more stems. Finally, we observed a statistically significant negative quadratic selection gradient γii for flower color, or hue, indicating some stabilizing selection on hue by leafcutting bees (Table 5 and Figure 3B).

Distributional Selection Differential

When performing distributional selection differential (DSD) analyses, we detected positive directional selection for the number of racemes per stem and the number of stems per plant for all bees combined and for each bee species (Table 6). We did not detect non-linear selection on any component of floral display size, and did not detect selection on any component of flower color, for either all bees combined or any of the bee species (Table 6). [Footnote to Table 6, not shown: relative fitness was based on the proportion of floral visits; total selection is measured by DSD, while dD represents the directional and dN the non-directional component of selection; S is the general selection differential and β the selection gradient; statistically significant values are bolded.] We present the DSD results to contrast with the results obtained using the Lande and Arnold (1983) approach. We will leave other studies to discuss discrepancies between the approaches, and below we concentrate on the results obtained using the more traditional method originally proposed by Lande and Arnold (1983).

DISCUSSION

Selection on Flower Color Relative to Floral Display Size

The number of stems per plant and the number of racemes per stem were selected on by all three bee species. In contrast, each component of flower color was selected on by at most one bee species, in line with earlier evidence of weaker selection on flower-level attractants (Sahli and Conner, 2011). The plants used in this study exhibited a high level of phenotypic variation in both flower color and floral display size, and the variation in flower color was greater than typically occurs in wild M. sativa populations. We also observed strong opportunity for selection overall and within each pollinator species.

The foraging behavior of pollinators may help explain the difference between selection on plant-level and flower-level attractants. Pollinators forage for rewards, and their goal is to collect pollen and nectar to provide for their young and feed themselves. Components of floral display size, such as the number of racemes per stem and the number of stems per plant, are both indicative of the amount of resources available on a plant. Bumble bees can determine whether a flower offers pollen and can detect the number of pollen-producing flowers on a plant. They are attracted to inflorescences, a plant-level attractant, based on the number of pollen-producing flowers. Similarly, bumble bees may be able to detect the number of nectar-producing flowers on inflorescences (Makino and Sakai, 2007). Bumble bees, on the other hand, cannot distinguish between flowers presenting distinct amounts of pollen, a flower-level attractant, unless it is linked to another trait such as flower size or flower color (Thairu and Brunet, 2015).
Similarly, while bees have innate preferences for flower color (Simonds and Plowright, 2004; Raine and Chittka, 2007), they learn to associate a flower color with a reward and can switch their preference to the color providing the most reward (Ings et al., 2009; Thairu and Brunet, 2015). The fact that plant-level attractants such as floral display size directly advertise resource availability to pollinators may help explain why, relative to flower-level attractants, they are more likely to be selected on by pollinators within plant populations.

An association between a reward and flower color is more likely to occur among plant populations or plant species of distinct colors than within a population, where the association between a color and a reward can be broken down by recombination. This may help explain why flower color polymorphisms are more common among than within plant populations (Narbona et al., 2018). Pollinators have been suggested as the selective agents responsible for flower color polymorphisms among populations (Streisfeld and Kohn, 2007), and in some cases the genes responsible for the change in flower color associated with each pollinator have been elucidated (Streisfeld et al., 2013). Similarly, the genetic basis of flower color differences has been elucidated for some plant species and shown to be responsible for pollinator preference (Bradshaw and Schemske, 2003; Hoballah et al., 2007). However, it remains unclear whether the pollinator preference created the flower color diversification or whether the association between flower color and pollinator arose following the fixation of the flower color in the population or species due to a different factor.

Within plant populations, correlational selection between flower color and floral display size may facilitate the evolution of flower color via pollinators. Correlational selection, as observed in the current study between the number of open flowers per raceme and flower chroma, provides a mechanism to associate a flower-level attractant like flower color with a plant-level attractant that advertises resource availability to a pollinator. Moreover, correlational selection leads to the development of genetic correlations between traits (Roff and Fairbairn, 2012), and correlational selection between flower color and floral display size has been shown to increase the frequency of a color morph within a population even in the absence of differences between color morphs in seedling germination or survival (Gomez, 2000). Correlational selection by pollinators, between flower color and a plant-level attractant, may facilitate the maintenance of flower color polymorphism within plant populations. The role of correlational selection in the evolution of flower color in plant populations deserves more attention.
Distinct Pollinators and Selection of Floral Traits

Distinct pollinators can exert different or conflicting selection on floral traits (Galen, 1989; Sahli and Conner, 2011; Kulbaba and Worley, 2013), and in the current study we found different patterns of selection on some floral traits by the distinct bee species. Overall, there is directional selection on the number of racemes per stem and the number of stems per plant, with indirect non-linear selection on racemes per stem. There is also correlational selection between the number of open flowers per raceme and flower chroma. Clearly, bumble bees are solely responsible for the correlational selection, while all three bee species exert directional selection on the number of stems per plant. While both honey bees and leafcutting bees exert some stabilizing selection on the number of racemes per stem, the overall selection is mostly directional. Bumble bees were the most abundant pollinators and better trippers than honey bees, the second most frequent visitors. The differential influence of the three bee species on floral traits indicates that the overall pattern of selection in a population will vary with the abundance and efficiency of its pollinators. We therefore expect temporal or spatial variation in pollinators (Brunet, 2009; Narbona et al., 2018) to influence the temporal or spatial pattern of selection on floral traits (Kelly, 1992; Siepielski et al., 2009, 2013; Narbona et al., 2018). However, environmental factors may also vary among populations, or temporally within populations, and affect floral trait evolution (Schemske and Bierzychudek, 2001; Strauss and Whittall, 2006; Caruso et al., 2019). Interestingly, yearly variation in abiotic factors can modify the pattern of correlational selection (Maad, 2000). Both pollinators and abiotic factors should be considered when examining phenotypic selection on floral traits over time or space (Narbona et al., 2018; Sletvold, 2019).

RF and Phenotypic Selection Within Bee Species

The methodology introduced in this study permits evaluation of phenotypic selection by distinct pollinators simultaneously, using the same set of plants. It more realistically describes the process of pollinator-mediated selection in natural populations. Sample sizes remain the same over all bees and within each bee species, although the proportion of flowers not visited may vary among bee species. Of interest is the fact that the pattern of selection obtained over combined pollinators could be explained by the patterns observed for each bee species. Moreover, some selection patterns were only significant at the level of a single bee species. For example, selection on flower color by leafcutting bees was not expressed at the whole-plant level, likely because leafcutting bees were not as common in the study and were responsible for a lesser proportion of the seeds produced by the plants in the population.

We assigned the selection patterns observed in this study to pollinators rather than to other biotic or abiotic factors. This approach was followed because the number of pollinator visits increases seed set in this plant species (Bauer et al., 2017);
sativa plants set few seeds in the absence of pollinators (Bohart, 1957); plants were grown in a common environment, minimizing variation in resource availability; and herbivory was not observed. Gathering pollination data in phenotypic selection studies will provide useful information on pollinator-mediated selection by distinct pollinators. We will further argue in a separate manuscript that comparing selection gradients between hand-pollinated and open-pollinated plants may not be the most efficient method to assign selection to pollinators (Brunet, in preparation). The approach introduced in this study relies on good-quality pollinator data and a link between visitation and seed set. The pollinator visitation data should be representative of the plant species under study over its flowering season. The plants used to collect pollinator data should represent the variation in floral display size that occurs spatially and temporally in the population. If pollinator types vary throughout the day or the flowering season, one should sample to reflect such variation. To link floral visits to seed set, it is best to sample seeds on visited plants after a period that reflects the time it takes for seeds to reach maturity. Finally, while applied here to female reproductive success, the methodology could be extended to male reproductive success. In this case, the proportion of floral visits to plant(i) is used for proportional visits by the distinct pollinators, as it reflects the pollen leaving plant(i). The total seeds assignable to plant(i), on plant(i) if selfing occurs and on other plants in the population, represent the seed set for male function for plant(i). Results of this study illustrate how the proposed approach can attribute overall phenotypic selection patterns to individual pollinators, and we therefore advocate the approach introduced here for future studies examining the impact of distinct pollinators on floral trait evolution.

CONCLUSION

The methodology introduced to isolate and combine the phenotypic selection patterns of distinct bee species on floral traits provides patterns of selection similar to what has been observed in previous studies. The selection patterns observed over all bees could be assigned to specific bee species. All three bee species selected for components of floral display size, but not all bees favored components of flower color, although the selection coefficients were strong. This difference between plant-level and flower-level attractants could be explained by the fact that floral display size, but not flower color, directly advertises resource availability to pollinators. Spatial and temporal variation in the abundance of the distinct pollinators is expected to affect patterns of selection on flower traits, particularly for traits differentially selected by the distinct pollinators. Correlational selection between floral display size, a plant-level attractant, and flower color, a flower-level attractant, is expected to facilitate the evolution of flower color by pollinators within plant populations. Studies of pollinator-mediated selection would benefit from combining data on pollinator visitation rates together with seed set and measurements of floral traits when examining the impact of distinct pollinators on floral trait evolution.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.
AUTHOR CONTRIBUTIONS JB conceived the study and wrote the manuscript. AB collected the data and samples from the field, processed the samples, and did preliminary data analyses. AF performed the data analyses for the manuscript, prepared the Figures, and provided comments. All authors contributed to the article and approved the submitted version.
Interferometric CO Observations of Submillimeter-Faint, Radio-Selected Starburst Galaxies at z ~ 2

High-redshift, dust-obscured galaxies, selected to be luminous in the radio but relatively faint at 850 µm, appear to represent a different population from the ultraluminous submillimeter- (submm-) bright population. They may be star-forming galaxies with hotter dust temperatures, or they may have lower far-infrared luminosities and larger contributions from obscured active galactic nuclei (AGN). Here we present observations of three z ~ 2 examples of this population, which we term submm-faint radio galaxies (SFRGs), in CO(3-2) using the IRAM Plateau de Bure Interferometer to study their gas and dynamical properties. We estimate the molecular gas mass in each of the three SFRGs (8.3 x 10^9 M_sun, <5.6 x 10^9 M_sun and 15.4 x 10^9 M_sun, respectively) and, in the case of RG J163655, a dynamical mass by measurement of the width of the CO(3-2) line (8 x 10^10 csc^2(i) M_sun). While these gas masses are substantial, on average they are 4x lower than those of submm-selected galaxies (SMGs). Radio-inferred star formation rates (average 970 M_sun yr^-1) suggest much higher star-formation efficiencies than are found for SMGs, and shorter gas depletion time scales (~11 Myr), much shorter than the time required to form their current stellar masses (~160 Myr; ~10^11 M_sun). By contrast, the SFRs may be overestimated by factors of a few, bringing the efficiencies in line with those typically measured for other ultraluminous star-forming galaxies and suggesting SFRGs are more like ultraviolet- (UV-)selected star-forming galaxies with enhanced radio emission. A tentative detection of RG J163655 at 350 µm suggests hotter dust temperatures, and thus similar gas-to-dust mass fractions, as the SMGs. We conclude that SFRGs' radio luminosities are larger than would naturally scale from local ULIRGs given their gas masses or gas fractions.

Submm surveys have provided an efficient probe of star-formation activity in ultraluminous infrared (IR) galaxies (ULIRGs, L > 10^12 L_sun) in the distant Universe (e.g., Smail, Ivison & Blain 1997; Hughes et al. 1998; Barger et al. 1998), with bright submm emission providing unambiguous evidence of massive quantities of dust, heated predominantly by young stars rather than AGN (e.g., Chapman et al. 2003a; Alexander et al. 2005; Menendez-Delmestre et al. 2007; Valiante et al. 2007; Pope et al. 2008). Until the Atacama Large Millimeter Array (ALMA) becomes available, confusion will continue to limit the sensitivity of submm surveys. As a result, many ULIRGs fall below the detection limits due to variations in their spectral energy distributions (SEDs), usually parameterized in terms of dust temperature (T_d), meaning that entire populations of star-forming galaxies may have been missed by submm surveys. For a fixed far-infrared (FIR) luminosity, a galaxy with a higher T_d will be weaker in the submm at 850 µm than a galaxy with a lower T_d. Specifically, raising T_d from the canonical ~35 K for SMGs to 45 K will result in a factor ~10x drop in 850-µm flux density (Blain 1999). These galaxies should, though, be accessible in the radio waveband, regardless of their specific SEDs, since the radio correlates with the integrated FIR emission (Helou et al. 1985) with a small ~0.2 dex dispersion and no observable dependence on SED type. However, there is potential for large AGN contamination in the radio, as has often been the case with mid-IR selection of z > 1 ULIRGs (e.g., Houck et al. 2005; Yan et al.
2005, 2007; Sajina et al. 2007; Weedman et al. 2006a, 2006b; Desai et al. 2006), and the facilities required to provide the high-resolution, multi-frequency radio data needed to decontaminate the samples (e.g., Ivison et al. 2007a) are not yet available. Substantial populations of apparently star-forming galaxies at z ~ 2 have been uncovered through deep 1.4-GHz radio continuum observations, many of which are not detected at submm wavelengths with the current generation of instruments (Barger et al. 2000; Chapman et al. 2001, 2003b, 2004a). These galaxies are luminous in the radio, and spectroscopy suggests that star formation is powering their bolometric output (there is little or no sign of high-ionization emission lines, characteristic of AGN, in their UV/optical spectra). These galaxies could, in principle, span a range in properties from deeply obscured AGN to far-IR-luminous starbursts. In the latter case, one would expect a different SED from a typical SMG, a higher T_d for instance. These submm-faint radio sources have a large volume density at z ~ 2, even larger than the SMGs (Haarsma et al. 1998; Richards et al. 1999; Chapman et al. 2003a, C04; Barger et al. 2007). There are ρ = 2 x 10^-5 Mpc^-3 radio sources with L_1.4GHz > 10^31 ergs s^-1 Hz^-1 at z ~ 2, compared with ρ = (6.2 ± 2.3) x 10^-6 Mpc^-3 for SMGs brighter than 5 mJy at 850 µm at the same epoch (Chapman et al. 2003b). As essentially all of these SMGs form a subset of these radio sources (Pope et al. 2006; cf. Ivison et al. 2002), this implies ~14 x 10^-6 Mpc^-3 luminous radio sources remain undetected at submm wavelengths. Understanding the exact properties of these galaxies is therefore of great importance. If they are all forming stars at the rates implied by their radio luminosities, they would triple the observed SFR density (SFRD) at z ~ 2. By contrast, if their radio luminosity comes from a mix of star formation and AGN, they have less impact on the global SFRD, but they increase the highly obscured AGN fraction at these epochs (e.g., Daddi et al. 2007a; Casey et al. 2008) and contribute substantially to black hole growth. Together with other observations, the redshifted cooling emission lines of CO allow us to assess and compare the energy source of SFRGs with that of SMGs and other distant star-forming galaxies via measurements of their gas and dynamical masses. In this paper, we present the results of a pilot study with the IRAM Plateau de Bure Interferometer (PdBI) to detect molecular gas in SFRGs through the rotational CO(3-2) line emission. In § 2 we describe the sample properties and observations, both with PdBI and other facilities. Section 3 presents the CO(3-2) detections and limits obtained from the PdBI observations, § 4.1 estimates gas properties, star-formation rates and efficiencies, and § 4.2 compares the SFRGs to other galaxy populations. Finally, § 5 discusses the results and places them in a broader galaxy evolution context. Throughout we assume a cosmology with h = 0.7, Ω_Λ = 0.72, Ω_M = 0.28 (e.g., Hinshaw et al. 2008).

SAMPLE PROPERTIES AND OBSERVATIONS

Our sample is drawn from an expansion of the C04 submm-faint, radio-selected galaxy (SFRG) program, with galaxies drawn from several deep radio survey fields with typical sensitivity limits of σ = 4-8 µJy (e.g., Biggs & Ivison 2006). In the submm, the survey fields are imaged to a typical depth of σ_850µm ~ 1-2 mJy (e.g., Scott et al. 2002).
We selected sources with redshifts, radio luminosities, and submm limits typical of the population (<z> = 2.1, <L_1.4GHz> = 2 x 10^31 ergs s^-1 Hz^-1, L_850µm < 1 x 10^31 ergs s^-1 Hz^-1 (<2 mJy for z ~ 2) at the ~2σ level), lying within ±1σ of the median SFRG in C04, and observing RG J163655 and RG J131236 based on suitability of RA and confidence in the optical spectroscopic redshifts. A third source from our sample, RG J123711, was observed previously with PdBI in the same field as an SMG in the program described by Neri et al. (2003) and Greve et al. (2005), and we include this object here.

[Fig. 1 caption: Top panels: CO(3-2) spectra for the two candidate detections. The spectra are shown smoothed with a 50 km s^-1 boxcar filter, and with respect to the zero velocity offsets defined from the Hα emission line redshift (red dashed line). The best-fit Gaussian profile is shown for the emission line in RG J163655, along with the UV-inferred redshift from interstellar absorption lines (blue dashed line). Bottom panels: velocity-averaged spatial maps of CO emission, from -1500 to -800 km s^-1 (RG J163655), where contours are from -1 to 5σ in steps of σ (0.05 mJy beam^-1). RG J123711 does not represent a formal CO detection; we measure a significance of 3.2σ integrating over the full band. The field of view, on a side, is 35", with the size of the beam shown to the lower left. Both CO emitters lie exactly at the radio source position to within 1".]

We note that while the neighboring SMG (SMM J123712.0+621326) lies only 8" to the south-east, we are confident that RG J123711 is not a luminous submm emitter. Firstly, SMM J123712 has a strong CO line detection (Smail et al., in preparation), with a ~5σ detection of S_CO(3-2) = 1.2 Jy km s^-1, comparable to typical SMGs from Greve et al. (2005). Secondly, the 850-µm emission peak (with a R = 7" beam) in our SCUBA map is centered on the radio source position, whereas no significant peak is observed at the position of RG J123711. Removing a 850-µm point source from the position of SMM J123712 reveals an even lower 850-µm flux density (0.1 ± 1.2 mJy) at the position of RG J123711 than the 2.2 ± 1.2 mJy conservatively adopted for our calculations (Table 2). Importantly, the small 850-µm/1.4-GHz flux ratio for this source is clearly comparable to SFRGs and not to the typical radio-detected SMGs in Chapman et al. (2005) or Ivison et al. (2007b). The properties of these SFRGs are listed in Tables 1 and 2, and displayed in Figs 1 and 2.

2.1. PdBI Observations

RG J163655 and RG J131236 were observed in their redshifted CO(3-2) lines and in the continuum at ~108 GHz using the newly refurbished PdBI receivers for 11.5 and 4.8 hr, respectively. Observations were made in D configuration on 2007 January 24, April 28, May 08 and June 03, with good atmospheric phase stability (seeing 0.7-1.4") and reasonable transparency (0.5 mm of precipitable water vapor). For RG J163655, the Hα redshift showed a considerable offset from that inferred from UV absorption and emission lines. While this was conceivably due to a large-velocity starburst wind (~1300 km s^-1), we considered the possibility that one redshift might have calibration or resolution problems. We therefore observed RG J163655 split over two slightly offset frequency settings to span all measured redshifts. The overall flux scale for each observing epoch was calibrated using a variety of sources; in each observing epoch, between three and six sources were used.
The visibilities were resampled to a velocity resolution of 55 km s^-1 (20 MHz), providing 1σ line sensitivities of 1.6 mJy beam^-1. The corresponding synthesized beam, adopting natural weighting, was similar for both sources, 5.0" by 4.0" at PA ~80 degrees, east of north. Observations of RG J123711 proceeded similarly to those described by Greve et al. (2005). The PdBI data for all three SFRGs were calibrated, mapped and analyzed using the GILDAS software package. The CO(3-2) spectra and images of the two candidate SFRG detections, RG J163655 and RG J123711, are shown in Fig. 1.

RG J163655: This SFRG was observed at 350 µm with SHARC-2 (Kovacs et al. 2006), where observational details can be found. A near-IR spectrum from UKIRT/UIST covering the Hα/[N II] region (Swinbank et al. 2006) finds a large Hα/[N II] ratio, suggestive of a relatively low metallicity, [Fe/H] ~ -0.9, and no strong AGN component. While the [O III]λ500.7 region was also covered with the instrument, we did not clearly detect any emission lines, setting a limit on the [O III]λ500.7 line of 3.75 x 10^-17 W m^-2, again suggesting that an AGN does not dominate the energetics of this galaxy. The line width of the Hα emission (FWHM_rest) is 420 ± 100 km s^-1, and the integrated line flux of 5.0 x 10^-19 W m^-2 then suggests a SFR, uncorrected for dust extinction, of 150 ± 50 M_sun yr^-1 (Kennicutt 1998). The 1.4-GHz radio emission was unresolved by the Very Large Array (VLA) in its A configuration (Biggs & Ivison 2006). A relatively compact UV morphology is observed in the Hubble Space Telescope imaging, with R_1/2 = 0.25" (Swinbank et al. 2006).

RG J131236: In C04, the optical (rest-frame UV) spectrum was presented, showing Lyα in emission but all other detectable lines in absorption, and was classified as a pure starburst. The near-IR spectrum of RG J131236 from Keck/NIRSPEC covering the Hα/[N II] region (Swinbank et al. 2004) again finds a large Hα/[N II] ratio, suggestive of a low metallicity, [Fe/H] ~ -0.9. The line width of the Hα emission is 450 ± 220 km s^-1, and the integrated line flux of 2.3 ± 1.2 x 10^-19 W m^-2 then suggests an SFR uncorrected for dust extinction of 110 ± 40 M_sun yr^-1. The radio emission was unresolved by the VLA. Only ground-based imaging exists for this SFRG, showing a faint, unresolved source in 0.8" seeing.

RG J123711: This SFRG has an X-ray detection and an obscured AGN classification, due to its X-ray luminosity, by Alexander et al. (2005). Its Hα emission suggests an SFR uncorrected for dust extinction of 16 ± 9 M_sun yr^-1. Radio imaging of this SFRG with MERLIN (0.3" synthesized beam; Casey et al. 2008b) reveals a double source structure, with a ~1" elongated feature and a relatively compact R_1/2 = 0.4" component.

RESULTS

The velocity-integrated line fluxes or limits for all three SFRGs are listed in Table 1. For RG J163655, inspection of the data cube shows a significant 4.9σ detection of CO(3-2) line emission at the phase center, integrated over the velocity channels at ~ -1000 km s^-1 with a velocity width of 400 km s^-1 FWHM. Fitting a Gaussian profile to the CO spectrum, we derive a best-fit redshift for the CO(3-2) emission of z = 2.1859 ± 0.0002, and estimate the CO flux by summing the channels from -2σ to +2σ of the Gaussian fit to the line. We note that no significant continuum emission is detected from the line-free region (~650 MHz of bandwidth) down to a 1σ sensitivity of 0.07 mJy beam^-1, consistent with the submm limit, assuming a dust spectral index of ν^+3.5 for a modified blackbody with emissivity β = +1.5.
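As a consistency check on the Hα-based SFRs quoted above, the sketch below converts the RG J163655 line flux into a luminosity using the adopted cosmology and applies the Kennicutt (1998) Hα calibration, SFR = 7.9 x 10^-42 L(Hα) with L(Hα) in erg s^-1. This is a back-of-envelope illustration (astropy is assumed to be available); it returns ~140 M_sun yr^-1, consistent with the quoted 150 ± 50 M_sun yr^-1.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology adopted in the paper: h = 0.7, Omega_M = 0.28 (flat)
cosmo = FlatLambdaCDM(H0=70, Om0=0.28)

z = 2.186                          # CO redshift of RG J163655
F_Ha = 5.0e-19 * u.W / u.m**2      # integrated Halpha line flux (text)

d_L = cosmo.luminosity_distance(z)
L_Ha = (4 * np.pi * d_L**2 * F_Ha).to(u.erg / u.s)

SFR = 7.9e-42 * L_Ha.value         # Kennicutt (1998) Halpha calibration
print(f"SFR(Halpha) ~ {SFR:.0f} Msun/yr (uncorrected for extinction)")
```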
For RG J131236, no significant emission is observed at the phase center, although the limit on the CO gas mass is still of great interest relative to the SMGs. For RG J123711, we tentatively detected (3.2σ) a positive signal integrated over the full band, from -600 to +600 km s^-1, centered on the Hα-determined redshift of z = 1.996. A precise CO redshift cannot be determined for RG J123711, as the line shape cannot be determined in the low-S/N spectrum. To assess the possibility that we have simply detected continuum in this source, we analyze the radio spectral index, measured to be steep from the 8.4-GHz/1.4-GHz flux ratio (Muxlow et al. 2005), S_ν ∝ ν^-0.69; the synchrotron contribution at ~3 mm is negligible. This does not, however, preclude a contribution from an AGN component with the opposite spectral slope emerging at higher frequencies. It is clear that the full radio spectrum is needed to explore this issue, as well as the possibility of an obscured AGN. In RG J163655, the CO-inferred redshift is close to the redshift measured from various interstellar absorption lines in the Keck/LRIS UV spectrum, but is blueshifted by 1100 km s^-1 from the Hα line detected by Swinbank et al. (2006). Re-analysis of the near-IR spectrum does not significantly change the result, as the sky line calibrations appear to be as presented in Swinbank et al. (2006). If the detected line were dominated by [N II] with Hα/[N II] < 1 (and we stress that there is no evidence in the spectrum of this), then the implied velocity offset would be ~800 km s^-1, somewhat closer to the average CO velocity and consistent with the higher velocity peak in the detected CO profile. We cannot attribute the CO emission to an offset companion (as was the case for SMG J09431 in Tacconi et al. 2006), as the CO centroid is exactly at the near-IR and radio position to within the 1" centroiding uncertainty (~beam size x (S/N)^-1). The Hα-inferred redshift has not always been representative of the CO redshift in SMG surveys (the average CO-Hα offset is 150 km s^-1), presumably because the luminous core is so deeply dust enshrouded that wind outflows or satellite H II regions are more strongly detected in Hα. We therefore put forward the hypothesis that we are detecting a highly dust-obscured gas-rich galaxy in CO(3-2), either a companion seen in projection or else one not well sampled by the Hα observations. The rest-frame FWHM of the CO line is 376 ± 40 km s^-1, close to that found in Hα by Swinbank et al. (2006), but likely a coincidence given that the CO and Hα redshifts are discrepant. The 350-µm SHARC-2 imaging of RG J163655 shows a tentative continuum detection, S_350µm = 2.4 ± 6.5 mJy, at the radio position; however, given the telescope pointing errors and the low signal-to-noise (S/N) of any expected emission, we search a region comparable to the beam size (9"). A 2σ peak (S_350µm = 13.7 ± 6.9 mJy) lies 7" from the radio position, at 16h36m54.6s, +41°04'28" (J2000). Within this area there are only ~3 SHARC-2 beams, and the chance of a spurious 2σ peak is only ~6%. There is a high likelihood (~90%) that this peak is related to RG J163655, and this flux range is completely consistent with our SED fit to the radio photometry for RG J163655 (Fig. 2). We assume both a line luminosity ratio of r_32 = L'_CO(3-2)/L'_CO(1-0) = 1 (i.e., a constant brightness temperature) and a CO-to-H2 conversion factor of α = 0.8 M_sun (K km s^-1 pc^2)^-1.
These values are appropriate for local galaxy populations exhibiting similar levels of star-formation activity to our SFRGs (e.g., local ULIRGs; Solomon et al. 1997), and this choice also facilitates comparison with SMGs modeled with the same values (Greve et al. 2005). We discuss later the effect of adopting typical Milky Way values of α_CO and r_32. All three SFRGs show a peaked SED in the mid-IR (Fig. 2), suggesting that this spectral region is dominated by stars rather than AGN. These properties are used to derive the rest-frame K-band (~2.2 µm) flux and convert to a stellar mass, in a manner similar to Borys et al. (2005), adopting their L_K/M_sun = 3.2, characteristic of a burst with an age of ~250 Myr. We interpolated between IRAC bands to estimate S_2.2µm (Table 3).

Derived Properties

We then proceed to estimate various derived properties (listed in Table 3). Starting with the gas surface density: for RG J123711, we assume the CO emission traces the same large, extended morphology (>1" FWHM diameter) traced by resolved MERLIN radio imaging (see Casey et al. 2008b for details), suggesting a low gas surface density. For RG J163655 and RG J131236, neither the CO emission nor the VLA radio emission is resolved. Without further information beyond the optical imaging described previously, we assume the gas in these two SFRGs is distributed in a disk with a similar radius to SMGs (e.g., Tacconi et al. 2008) of R_1/2 = 1.7 kpc (0.25"), resulting in higher inferred gas surface densities. The CO luminosities for the three SFRGs are plotted as a function of FIR luminosity and compared to other high-redshift galaxies detected in CO in Fig. 3. A dynamical mass for the well-detected RG J163655 can be estimated by analyzing the CO line profile. We base our analysis on a single Gaussian fit to the line. CO emission is comparatively immune to the effects of obscuration and outflows, and therefore provides an unbiased measurement of dynamics within the CO-emitting region. The line width of the CO emission (410 ± 40 km s^-1) implies a dynamical mass of (8.4 ± 2.1) x 10^10 csc^2(i) M_sun, assuming the gas lies in a disk with inclination i and a radius of 0.25" (1.7 kpc). Based on this, we calculate a gas-to-dynamical-mass fraction of f = M_gas/M_dyn ~ 0.10 sin^2(i). We note that the mean angle of randomly oriented disks with respect to the sky plane in three dimensions is i = 30° (Carilli & Wang 2006), resulting in an average inclination correction of csc^2(i) = 4.

Star-Formation Rate and Efficiency

The radio luminosity of the SFRGs forms our baseline estimate for the far-IR luminosity and SFRs (listed in Table 3), since we have only upper limits at 450 and 850 µm. We caution, however, that our flux-limited radio selection biases our sample to find objects of the same radio luminosity as SMGs, regardless of the origin of the radio power. There are clear examples within the wider SFRG sample where AGN dominate the radio power despite an apparent starburst spectrum in the rest-frame UV (e.g., Casey et al. 2008a). The average <SFR_radio> = 970 M_sun yr^-1, assuming the radio/FIR relation q = log[(FIR/3.75 x 10^12 Hz)/S_1.4GHz] (Helou et al. 1985), with q = 2.34 (Yun et al. 2001), a correction factor of 2.3 to total IR luminosity (TIR) appropriate for hotter dust SEDs (Dale & Helou 2002), and the conversion from Kennicutt (1998), SFR (M_sun yr^-1) = 1.8 x 10^-10 L_8-1000µm (L_sun).
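To make the chain of conversions explicit, a minimal sketch follows. The constants (q = 2.34, the 2.3x FIR-to-TIR correction, the Kennicutt coefficient) are those adopted above; the input 1.4-GHz luminosity is illustrative, roughly the level implied by the average SFR, not one of the tabulated source values.

```python
L_SUN = 3.846e26          # W
Q = 2.34                  # radio/FIR ratio, Yun et al. (2001)
FIR_TO_TIR = 2.3          # Dale & Helou (2002), hotter dust SEDs

def sfr_from_radio(L_14GHz_cgs):
    """SFR in Msun/yr from a rest-frame 1.4-GHz luminosity in erg/s/Hz."""
    L_14 = L_14GHz_cgs * 1e-7            # erg/s/Hz -> W/Hz
    L_FIR = 3.75e12 * 10**Q * L_14       # W, 40-120 um (Helou et al. 1985)
    L_TIR = FIR_TO_TIR * L_FIR / L_SUN   # Lsun, 8-1000 um
    return 1.8e-10 * L_TIR               # Kennicutt (1998)

# ~880 Msun/yr for 1e31 erg/s/Hz, of order the quoted <SFR_radio> = 970
print(f"{sfr_from_radio(1.0e31):.0f} Msun/yr")
```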
The SFRs from their dust-corrected rest-frame 1500-Å continuum flux are factors ~50x less (as described in C04), and the UV emission is clearly not probing the true luminosities of these systems (Table 3). The Hα emission line suggests SFRs (Table 3) ten times less than the radio (<SFR_Hα> = 92 M_sun yr^-1), although with the average extinction factor of A_V ~ 2.9 ± 0.5 proposed for SMGs in Takata et al. (2006), this becomes <SFR_Hα,corr> = 1300 M_sun yr^-1. This is consistent, on average, with the radio-inferred SFRs, although the individual radio/Hα_corr ratios show very poor correspondence. We have, of course, argued in the case of RG J163655 that the CO(3-2) and the Hα line emission may be coming from distinct regions, so these arguments do not obviously apply in every case, and the average correction factor applied to the L_Hα may not be appropriate either individually or for the population. The 24-µm fluxes (Table 2) would represent strong supporting evidence for large SFRs. We calculate SFR_24µm as in Pope et al. (2006) for consistency with SMGs (although strong 24-µm luminosities could also reveal a dominant hot AGN dust torus). In a large sample of SFRGs, the 24-µm luminosity distribution is indistinguishable from that of SMGs (Casey et al. 2008b). Two of our present three SFRGs have 24-µm observations, and only one is detected, the limit in the second case not being particularly constraining relative to the radio. The SFR_24µm (Table 3) could be consistent with the SFR_radio given uncertainties in calibrating the SFR indicators. For a reference point, which can be scaled through by uncertainties in the SFR, we adopt the SFR_radio and calculate surface densities Σ_SFR and star-formation efficiencies (SFE = L_FIR/M_H2) for the SFRGs, using the assumed sizes described previously, as listed in Table 3. We find large SFEs (4x larger than the SMGs on average), although the observational constraints on the SFRs for the SFRGs are consistent with having been over-estimated by a factor ~2-3x, which would bring them into reasonable agreement with the envelope of star-formation efficiencies for SMGs, LBGs and local ULIRGs. We can also roughly estimate a lower limit to the gas-to-dust mass ratio where CO is detected (Table 3). We estimate similar dust mass limits for the SFRGs, assuming κ_ν ∝ ν^β with β = +1.5 and B_ν(T_d) ~ ν^2 in the Rayleigh-Jeans regime, finding M_dust < 1.6 x 10^8 M_sun from the 2σ limit on the 850-µm flux of ~2.2 mJy and assuming a dust mass absorption coefficient of κ_850µm = 0.15 m^2 kg^-1. We find ratios of 50 and 100, with at least a factor of ~6 uncertainty, accounting for our uncertainty in the dust temperature (ΔT_d ≃ ±5 K), dust emissivity coefficient (Δβ ≃ ±0.5), and mass absorption coefficient (about a factor of ~3; e.g., Seaquist et al. 2004). Assuming the molecular gas reservoirs we detect are fueling the star formation within these galaxies, there is enough gas to sustain the current star formation for τ_depletion ~ M(H2)/SFR, ranging from less than 9 to 13 Myr. Since we have also estimated stellar masses, we can compare the gas depletion time with the time to form the current stellar mass of the system. At the current SFRs, τ_formation ~ M_stars/SFR ranges from 80-240 Myr, which is comparable to the assumed ages of the stellar populations.
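A sketch pulling these conversions together: a CO(3-2) line flux is turned into L'_CO via the standard relation of Solomon & Vanden Bout (2005), into M(H2) with the α = 0.8 and r_32 = 1 adopted above, and then into the depletion and formation timescales. The line flux is a placeholder (a value of ~0.4 Jy km s^-1 happens to reproduce the ~8 x 10^9 M_sun quoted for RG J163655), and the SFR and stellar mass are the averages quoted in the text.

```python
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.28)

def gas_mass(S_dv, z, nu_rest=345.796, r32=1.0, alpha=0.8):
    """M(H2) in Msun from a CO(3-2) line flux S_dv in Jy km/s."""
    nu_obs = nu_rest / (1.0 + z)                  # observed frequency, GHz
    d_L = cosmo.luminosity_distance(z).value      # Mpc
    # L'_CO in K km/s pc^2 (Solomon & Vanden Bout 2005)
    Lp_32 = 3.25e7 * S_dv * d_L**2 / (nu_obs**2 * (1.0 + z)**3)
    return alpha * Lp_32 / r32

M_H2 = gas_mass(0.4, 2.186)                       # placeholder line flux
SFR, M_stars = 970.0, 1.8e11                      # averages from the text

print(f"M(H2)    ~ {M_H2:.1e} Msun")              # ~8e9 Msun
print(f"tau_dep  ~ {M_H2 / SFR / 1e6:.0f} Myr")   # ~10 Myr
print(f"tau_form ~ {M_stars / SFR / 1e6:.0f} Myr")# ~190 Myr
```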
Comparison to Other Populations

Comparison of the SFRGs to the SMGs is of primary importance, since a major goal of the observations is to understand the degree to which SFRGs should be treated on a similar footing to SMGs in models and evolutionary calculations. Taking the average CO line luminosity and gas mass, we find intrinsic line luminosities a factor of ~4 lower than the median for SMGs (cf. <L'_CO> = (3.8 ± 2.3) x 10^10 K km s^-1 pc^2 and <M_gas> = (3.0 ± 1.6) x 10^10 M_sun; Greve et al. 2005). The CO line width of RG J163655 is also much lower than the median of SMGs (<FWHM> = 780 km s^-1; Greve et al. 2005). Given that the CO-inferred gas masses are somewhat low compared to SMGs, we find, not surprisingly, that the gas depletion timescales are short compared to SMGs (which have τ_depletion ~ 40 Myr; Greve et al. 2005), though the SFRGs are still within a physically plausible range. We only have a useful constraint on the size of the emitting region for RG J123711 to compare with higher resolution images of SMGs, although in general the SFRGs exhibit similar radio sizes and morphologies to SMGs (compare Chapman et al. 2004b and Biggs & Ivison 2008 to Casey et al. 2008b). The typical gas-to-dynamical-mass fraction in SMGs is estimated to be ~0.3, assuming a merger model (Greve et al. 2005), while they have SFEs (Greve et al. 2005), gas-to-dust mass ratios of ~200 (with a factor of a few uncertainty in the dust mass alone) and gas surface densities of σ_gas ~ 3000 M_sun pc^-2 (Tacconi et al. 2006).

[Table 1 note: a) For RG J131236, we set a limit for a 500 km s^-1 FWHM line centered at the Hα redshift.]

[Fig. 3 caption: A comparison of the CO and radio luminosities for the three SFRGs described here, the SMGs from Greve et al. (2005) and Tacconi et al. (2006), the lensed LBGs detected in CO (Baker et al. 2004; Coppin et al. 2007; Kneib et al. 2005), the luminous z ~ 2 BX/BzK galaxies undetected in CO (Tacconi et al. 2008), and the two z ~ 1.4 BzK galaxies detected in CO by Daddi et al. (2008). The solid line is the best-fitting relation, of the form log L'_CO = α log L_FIR + β, to the local LIRGs and ULIRGs and the high-redshift SMGs from Greve et al. (2005) (however, it is no longer quite the best fit when radio luminosity is considered consistently across the populations). With gas conversions fixed, the SFRGs appear to have lower CO gas masses than SMGs, although if the radio-inferred SFRs are overestimated then the SFRGs could still lie on the plotted gas/SFR relation.]

Borys et al. (2005) estimate the average stellar mass for SMGs at z > 1.5 to be 3.2 (+3.4/-1.6) x 10^11 M_sun, slightly larger than the average 1.8 x 10^11 M_sun for our three SFRGs. Overall, this comparison suggests that SFRGs may be somewhat smaller mass objects (lower stellar mass, lower CO mass and lower dynamical mass) than SMGs, but share with them a large radio luminosity. The SFRs of both SMGs and SFRGs are subject to sizable uncertainties, not least of which is the initial mass function (e.g., Baugh et al. 2005). Pope et al. (2006) have pointed out that SMG SFRs estimated from the 24-µm Spitzer-MIPS observations are lower than those estimated from the 850-µm or radio wavebands, although this could represent an issue of relative calibrations of these indicators in this luminosity regime rather than intrinsic properties. We can also compare the SFRGs to local populations and less luminous star-forming galaxies at z ~ 2. Locally, L'_CO increases with L_FIR for (U)LIRGs, with the Greve et al.
(2005) sample of SMGs extending this trend out to the highest far-IR luminosities (~10^13 L_sun). For comparison, in Fig. 3 we have plotted the SFRGs on the L'_CO-L_FIR diagram, along with SMGs lying on the local relation, as well as three LBGs from the literature (within the considerable uncertainties in their far-IR luminosities), three undetected BX/BzK galaxies (Tacconi et al. 2008), and two CO-detected BzK galaxies (Daddi et al. 2008; these sources lie above the relation). The SFRGs lie somewhat below this relation.

DISCUSSION

As our calculations above have shown, it is very difficult to estimate the precise gas masses for SFRGs due to various uncertainties. However, a potentially important result emerges from these CO observations: compared to SMGs, SFRGs appear to be significantly more efficient at producing stars from a given molecular gas mass. If this is strictly true, then SFRGs cannot be interpreted as scaled-up versions of local ULIRGs, as Tacconi et al. (2006, 2008) have argued is the case for SMGs, since their gas masses appear to be lower than expected for their radio luminosities. There are two considerations to be taken into account here. Firstly, the far-IR luminosities, and thus SFRs, may be over-estimated from the radio, for instance if buried AGN were present. While none of these sources have AGN signatures in their UV or optical spectra, a deeply obscured AGN could still be driving a significant portion of the radio luminosity (e.g., Daddi et al. 2007b; Casey et al. 2008a). A further possible complication comes again from the radio selection of these objects. There is a ~0.25 dex scatter in the radio-FIR relation (Yun et al. 2001), and while locally there is no apparent correlation between SED shape and radio-FIR scaling, it is possible that in our SFRGs we are selecting galaxies which are amongst the lower 0.25 dex (weaker FIR per unit radio). If the SFRs in our galaxies were several times lower, then the efficiencies would be similar to the average SMG (Fig. 3). Secondly, the conversion from CO(3-2) to molecular gas mass may not be the same as for SMGs. If α_CO were greater than the ~1 estimated for local ULIRGs (and as inferred to be correct for SMGs), these SFRGs could make up the shortfall in molecular gas mass from the average SMG (~4x), although the conversion would have to approach that typically adopted for the Milky Way, α_CO = 4.6 (Solomon & Vanden Bout 2005). This is unlikely, given that the SFRGs often show clear evidence in high-resolution radio observations for merger-driven starbursts (Casey et al. 2008b).

[Table 3 caption: Derived properties of the SFRGs. a) SFR from 24-µm luminosity, assuming the average SED in Dale & Helou (2002). b) Star-formation efficiency (SFE) is the SFR divided by the molecular gas mass. c) Stellar mass calculated as in Borys et al. (2005), adopting their L_K/M_sun = 3.2. d) No limit is possible for RG J131236 since both gas and dust are limits.]

It is noteworthy that while our observations have highlighted an ultraluminous z ~ 2 population which may build stars in an extremely efficient mode (or at least as efficient as SMGs, if their SFRs are overestimated by factors of several), recent observations (Daddi et al. 2008) have identified a population of z ~ 1.5 galaxies, detected in CO(2-1), exhibiting the opposite property: low-efficiency star formation. These Daddi et al.
galaxies are selected as large, massive, disk-like galaxies, and it is perhaps not surprising that they form stars in an apparently quiescent "spiral-galaxy" mode. Nonetheless, it is intriguing that galaxies in the high-redshift Universe have been discovered with such a wide range of star-forming efficiencies (from poor to extreme), all lying in the 10^12-13 L_sun regime. Massive galaxies are being built in a variety of modes in the z = 1-3 peak star-formation period. The short depletion times, compared to the long times needed to form the stellar masses, suggest we may be seeing SFRGs in the last phase of their current star-formation episodes. However, one would expect on average to find SFRGs as a population half-way through their gas consumption lifetimes. In this context, the small ages imply either a very high duty cycle or that the SFRs are over-estimated from the radio luminosity. We reiterate that the large FIR luminosity estimates for our SFRGs are based mainly on the radio/FIR relation, which apparently applies for SMGs (Kovacs et al. 2006) but is only marginally supported for SFRGs through the 24-µm luminosity and dust-corrected Hα measurements. It is possible that we are over-estimating their star-formation activity, which would lead us to under-estimate the duration timescales above. We also note that the observed CO may be closely associated with the star formation, and thus is probably warm and highly visible (i.e., with low α_CO). This does not rule out the existence of a cooler, less visible component (with high α_CO), not intimately associated with the current zone of star formation (e.g., in the inner disk), which may still become available within its own dynamical timescale to fuel star formation. The result above may therefore be affected by the selection for the highly visible component of the molecular gas. However, the cold gas would need to be fairly widely distributed (to ensure that it does not violate the dynamical limits on the total gas+stellar mass in the central regions), yet it must also be able to flow into the central regions on a timescale of <100 Myr (near the limit of the gas sound speed, ~100 km s^-1). In order to evaluate the implications of these results for massive galaxy formation, we recall that these three galaxies have typical characteristics for the SFRG population at all wavelengths measured. Our results underline the major role of gas consumption over short timescales and with high efficiencies, characterizing rapid and strong merger-driven bursts as a major growth mode for both stellar mass and black holes in the distant Universe. Even if the SFRs in these SFRGs were overestimated by a factor of a few, they would remain ULIRG-class galaxies. If we assume that ~50% of the submm-faint SFRGs at z ~ 2 are dominated by star formation at levels comparable to SMGs, we arrive at a density of ~5 x 10^-6 Mpc^-3, similar to that observed for SMGs. Together, the SMGs and SFRGs represent a volume density 10x smaller than measured for galaxies inferred to be forming stars at low efficiencies by Daddi et al. (2007b), which have space densities of order 10^-4 Mpc^-3. With SFRs a few to 10x larger in the SMGs and SFRGs, the net effect is roughly equal numbers of stars being formed in both high- and low-efficiency modes at z ~ 2. Further study should ascertain whether other SFRGs follow a similar pattern to the galaxies studied in this paper.
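A back-of-envelope check of that last claim, multiplying space density by a representative SFR for each mode; the SFR values below are illustrative mid-range numbers consistent with the text, not tabulated measurements.

```python
# high-efficiency mode (SMGs + SFRGs) vs low-efficiency mode (Daddi et al.)
rho_eff, sfr_eff = 1.0e-5, 1000.0   # Mpc^-3, Msun/yr (illustrative)
rho_qui, sfr_qui = 1.0e-4, 100.0    # Mpc^-3, Msun/yr (illustrative)

# Comparable SFR densities: roughly equal stellar mass growth per mode
print(f"high-efficiency: {rho_eff * sfr_eff:.2e} Msun/yr/Mpc^3")
print(f"low-efficiency:  {rho_qui * sfr_qui:.2e} Msun/yr/Mpc^3")
```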
A substantial sample will allow the gas-to-dynamical-mass ratio to be determined accurately for the population, since we currently have only one well-detected line profile, and the dynamical mass determination is limited by the unknown inclination.

CONCLUSIONS

• We conclude that the radio luminosities of these SFRGs are higher for their overall mass (gas plus stellar) than for the SMGs (given that if the radio-SFRs are over-estimated for one class, they could well be for both).

• We note that SMGs in general, and also these SFRGs, are outliers of the stellar mass-SFR correlation (Daddi et al. 2007a), probably due to the higher efficiency in forming stars for a similar stellar mass and CO luminosity. Lower gas masses in the SFRGs would imply even higher SFEs than the SMGs. By contrast, if the SFRs are significantly over-estimated by the radio, or the CO-to-H2 conversion were significantly different from SMGs, then SFRGs could have similar efficiencies to typical ULIRGs. Together with the apparent low-efficiency star-forming (U)LIRGs from Daddi et al. (2008), the SMGs and SFRGs, with SFRs several to 10x larger than the Daddi et al. galaxies, suggest roughly equal numbers of stars being formed in both high- and low-efficiency modes at z ~ 2. Massive galaxies are being built in an impressive variety of modes in the z = 1-3 peak star-formation period.

• If the radio-inferred SFRs are correct, then these SFRGs are more efficient star formers than SMGs, and cannot obviously be interpreted as scaled-up versions of local ULIRGs, as Tacconi et al. (2006, 2008) have argued is the case for SMGs. The SFRGs' radio luminosities are larger than would naturally scale from local ULIRGs given the gas masses or gas fractions. These observed gas masses and star-formation properties may be typical of the SFRG population, and further work is justified to explore this population with improved statistics.

• Our results underscore the fact that ultraluminous galaxies in the high-redshift Universe have been discovered with a wide range of star-forming efficiencies, the SFRGs apparently being one extreme. Massive galaxies are likely being built in a variety of modes in the z = 1-3 peak star-formation period.

ACKNOWLEDGEMENTS

We thank an anonymous referee for a very careful reading and helpful comments. This work is based on observations carried out with the IRAM Plateau de Bure Interferometer. SCC acknowledges a fellowship from the Canadian Space Agency and an NSERC Discovery Grant. IRS acknowledges support from the Royal Society. AMS acknowledges support from STFC. We acknowledge the use of the GILDAS software (http://www.iram.fr/IRAMFR/GILDAS).
Effective Engagement of Adolescent Asthma Patients With Mobile Health-Supporting Medication Adherence

Background: Mobile health (mHealth) apps have the potential to support patients' medication use and are therefore increasingly used. Apps with broad functionality are suggested to be more effective; however, not much is known about the actual use of different functionalities and the effective engagement.

Objective: The aim of this study was to explore the use and the effective engagement of adolescents (aged 12 to 18 years) with the Adolescent Adherence Patient Tool (ADAPT).

Methods: The ADAPT intervention consisted of an app for patients, which was connected to a management system for their pharmacist. The aim of the ADAPT intervention was to improve medication adherence and, therefore, the app contained multiple functionalities: questionnaires to monitor symptoms and adherence, medication reminders, short movies, pharmacist chat, and peer chat. For this study, data of the ADAPT study, a cluster randomized controlled trial, were used. Adolescents with asthma had 6 months' access to the ADAPT intervention, and all app usage was securely registered in a log file.

Results: In total, 86 adolescents (mean age 15.0, SD 2.0 years) used the ADAPT app 17 times (range 1-113) per person. Females used the app more often than males (P=.01) and for a longer period of time (P=.03). On average, 3 different functionalities were used, and 13% of the adolescents used all functionalities of the app. The questionnaires to monitor symptoms and adherence were used by most adolescents. The total app use did not affect adherence; however, activity in the pharmacist chat positively affected medication adherence (P=.03), in particular if patients sent messages to their pharmacist (P=.01).

Conclusions: mHealth apps for adolescents with asthma should contain different functionalities to serve the diverging needs and preferences of individual patients. Suggested key functionalities to promote use and effectiveness in adolescents with asthma are questionnaires to monitor symptoms and a health care provider chat.

Introduction

Mobile health (mHealth) interventions have the potential to support patients with their medication use and are therefore increasingly used [1-4]. Patients highly appreciate these types of interventions, mainly because of the high usability, feasibility, and acceptability of mHealth [5]. However, the evidence for efficacy of mHealth for chronic patients is limited, except for moderate-quality evidence of improvement in asthma patients [3]. mHealth seems particularly promising for specific patient groups such as adolescents, because almost all adolescents own a smartphone (95%), they widely use their phones for social networking, and they are generally poor adherents [6,7]. However, until now, not many mHealth interventions have been developed for adolescents, although mHealth interventions for adolescents were rated as feasible and acceptable, with modest evidence for their efficacy in improving adherence [8-10]. Therefore, we developed the Adolescent Adherence Patient Tool (ADAPT), an interactive mHealth intervention to improve medication adherence in adolescents with asthma. A patient-centered approach and a theoretical framework were used to develop this intervention [11]. As a result, the intervention consisted of a smartphone app for patients, which was connected to a desktop application for pharmacists, enabling communication between patients and health care providers.
Previous studies showed that multifaceted mHealth interventions are more effective in improving medication adherence than interventions targeting only 1 aspect of nonadherent behavior [4,12-14], because medication adherence is a complex behavior affected by many factors [15]. Accordingly, the ADAPT intervention contained multiple functionalities to support medication adherence: questionnaires to monitor symptoms and adherence, medication reminders, short movies, pharmacist chat, and peer chat [11]. We evaluated the ADAPT intervention in a cluster randomized controlled trial, and adherence improved significantly in adolescents with asthma having poor adherence rates [16].

Besides the efficacy of mHealth, it is important to study the actual use of mHealth interventions. Currently, little is known about the actual use of mHealth apps by adolescents with asthma. Moreover, it is important to identify the association between the use of different mHealth functionalities and the effect on the intended outcome, also known as effective engagement. This will provide directions for other mHealth interventions aiming to improve adherence, as there is still limited evidence for the efficacy of mHealth [17,18]. Therefore, the aim of this study was to explore the use of the ADAPT app, a complex adherence mHealth intervention, by adolescents with asthma and to study the effective engagement of patients with the ADAPT app.

Data Collection

Data of the ADAPT study, a cluster randomized controlled trial, were used. The aim of the ADAPT study was to evaluate the effect of the ADAPT intervention on adherence, measured with the Medication Adherence Report Scale (MARS) [19]. The complete ADAPT study protocol and the effectiveness of the mHealth intervention have been described elsewhere [11,16]. Briefly, adolescents with asthma (aged 12 to 18 years) who were in possession of a smartphone were eligible for participation. In total, 638 patients were invited for the intervention group and 103 (16.1%) signed the informed consent. There was a 16% dropout rate (n=8 withdrew consent, n=7 did not download the app, and n=1 was lost to follow-up), resulting in 87 patients and 27 pharmacists who had 6 months' access to the ADAPT intervention. The control group consisted of 147 patients and 27 pharmacists (data not shown).

We asked patients in the intervention group (N=87) to use the app at least once a week; they received a weekly push notification. After 6 months, upon completing the study, patients received a gift card (regardless of their app usage). All ADAPT app use was securely registered in a log file, that is, a document with automatically produced and timestamped documentation of events.

Adolescent Adherence Patient Tool Intervention

The ADAPT app (Figure 1) was connected to a desktop application of the patient's own community pharmacist [11]. The different functionalities of the app are described below.
Questionnaire to Monitor Symptoms

Patients received a weekly push notification (26 times in total) to complete the Control of Allergic Rhinitis and Asthma Test (CARAT) to monitor their symptoms [20]. This validated questionnaire consisted of 10 questions, with a total score between 0 and 30 (>24 indicated good disease control). The total score could be divided into 2 subscores: an allergic rhinitis score (items 1 to 4; score >8 indicated good control) and an asthma score (items 5 to 10; score ≥16 indicated good control). Patients had access to their obtained CARAT scores in the ADAPT app and received textual feedback about their results. The CARAT scores were also sent to the pharmacist's desktop application, and pharmacists received email notifications when patients had no disease control (CARAT score ≤24).

Medication Alarm

Patients could set a medication alarm to prevent forgetting. The alarm was adjustable to the patient's preferences, that is, patients could set the alarm once or twice a day at their preferred time. The alarm was not connected to their inhaler medication; thus, it did not register whether medication was already taken. Unfortunately, use of the medication alarm was not registered in the log file, as the alarm settings were saved locally.

Short Movies

Almost every week, a short movie about an asthma-related topic (eg, lifestyle, medication use, and friends) became visible in the app to educate and motivate the patient. Patients did not receive a push notification, although in the app a notification was visible when a new movie became available. In total, 21 movies became available during the 6-month study period. Pharmacists had access to the movie database and could send additional movies based on the patient's needs, for example, about inhaler techniques.

Peer Chat

The peer chat gave patients the opportunity to share experiences and discuss asthma-related topics with other participants. This was an age-specific functionality, as peers are important during adolescence [21]. Adolescents recommended this functionality during the developmental phase. The messages were divided over 6 topics: asthma, general, going out, pets, sports, and other. There was no moderator involved, as we did not want to disrupt the interaction between adolescent peers.

Pharmacist Chat

The pharmacist chat facilitated direct contact between the adolescents and their pharmacist, which is important because adolescents are not often seen in the pharmacy [22]. Pharmacists voluntarily signed up for the ADAPT study and were randomized to the intervention group. Pharmacists could contact their own patients via the intervention, as in the Netherlands every patient is registered at 1 pharmacy and mostly fills all their prescriptions there. Pharmacists received email notifications when patients sent a message. The aim of this functionality was to educate and motivate patients.

Adherence Questions

Once every 2 weeks (14 times in total), 2 questions concerning adherence appeared in the app. The questions were based on items of the MARS. The first question was related to unintentional nonadherence ("How often did you forget to take your medication in the previous week?") and the other to intentional nonadherence ("How often did you decide to miss out a dose in the previous week?"). Patients could answer these questions using a 5-point Likert scale, ranging from 1 (always) to 5 (never).
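As an illustration of the CARAT scoring rules just described, the snippet below sketches a scorer. It is a hypothetical helper, not part of the ADAPT code base, and it assumes item responses have already been coded as integers.

```python
from typing import List

def score_carat(items: List[int]) -> dict:
    """Score a 10-item CARAT response following the cut-offs above."""
    assert len(items) == 10, "CARAT has exactly 10 items"
    rhinitis = sum(items[0:4])    # items 1-4: allergic rhinitis subscore
    asthma = sum(items[4:10])     # items 5-10: asthma subscore
    total = rhinitis + asthma     # total score, 0-30
    return {
        "total": total,
        "controlled": total > 24,     # >24 = good overall control
        "rhinitis_ok": rhinitis > 8,  # >8  = good rhinitis control
        "asthma_ok": asthma >= 16,    # >=16 = good asthma control
    }

# Example: a fully controlled patient scoring 3 on every item
print(score_carat([3] * 10))
```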
Data Analysis

Descriptive statistics of all variables were calculated. For skewed data, the median with interquartile range (IQR) is shown instead of the mean with standard deviation (SD). We divided the adolescents into 3 groups based on the frequency of app usage during the 6-month study period: low (≤10), average (>10 and ≤25), and high (>25) frequency users. All log file data were converted to Excel, and thereafter statistical analyses were performed using R (R Foundation for Statistical Computing, version 3.4.3), packages nlme and lme4. P values less than .05 were considered statistically significant.

Effective Engagement

We used (generalized) linear mixed-effects models to evaluate the effective engagement of adolescents and to compare groups. The 27 pharmacies of the ADAPT study (clusters) were used as random effects in the models.

Ethics and Confidentiality

The ADAPT study was approved by the Medical Review Ethics Committee of the University Medical Centre Utrecht (NL50997.041.14) and by the Institutional Review Board of the Utrecht Pharmacy Practice network for Education and Research, Department of Pharmaceutical Sciences, Utrecht University [23]. All participants had to sign informed consent before the start of the study; for patients younger than 16 years, both parents also had to sign. The trial is registered in the Dutch Trial Register (NTR5061). All (personal) app data were encrypted using 128-bit Advanced Encryption Standard and were securely saved using Hypertext Transfer Protocol with a Secure Sockets Layer certificate (HTTPS).

Results

The exact use per functionality is described in Table 1; the CARAT questionnaire, adherence questions, and short movies were used by most adolescents. There were differences in characteristics and functionalities used between the 3 user groups: low, average, and high frequency users (Table 1). The low frequency app users had lower self-reported adherence rates compared with the average group (MARS 19.3 vs 21.4; P=.04), and the high frequency group contained more females compared with the low frequency group (73% (19/26) vs 44% (12/27); P=.04). Almost all low frequency users (93%; 25/27) completed the CARAT questionnaire, and more than half (56%; 15/27) completed the adherence questions at least once. No one in this group sent a message in the peer chat. The majority of high frequency users sent a message to their pharmacist (81%; 21/26) and watched a movie (77%; 20/26), which differed significantly from the other groups (Table 1).

Adolescents used, on average, 3 different functionalities of the app (IQR 3-4; range 1-5). An overview of the combinations of different functionalities used is presented in Figure 2, showing a wide variety in app functionality use. All 5 functionalities were used by 13% (11/87) of the adolescents. Examples of the total app usage per person are shown in Multimedia Appendix 1.

Questionnaire to Monitor Symptoms

The CARAT questionnaire was the most frequently used functionality of the app; in total, 1047 questionnaires were completed by 85 (98%) adolescents (Multimedia Appendix 2). Adolescents received 26 weekly reminders during the study period (6 months) to complete the CARAT; however, they individually completed the CARAT on average 10 times (IQR 4-17). There was a lot of variation between patients (range 1-84).
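To make the modeling approach concrete, a minimal sketch of a comparable random-intercept model is shown below. The original analyses were run in R (nlme/lme4); here an equivalent model is written with Python's statsmodels, and the data file and column names are hypothetical stand-ins for the log-file export.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-patient export of the ADAPT log file: one row per
# adolescent, with baseline/follow-up MARS scores, app-use counts, and
# the pharmacy each patient belongs to.
df = pd.read_csv("adapt_log.csv")

# Linear mixed-effects model: follow-up adherence regressed on chat
# activity, adjusting for baseline adherence, with the 27 pharmacies
# (the trial clusters) as random intercepts.
model = smf.mixedlm(
    "mars_followup ~ chat_messages_sent + mars_baseline",
    data=df,
    groups=df["pharmacy_id"],
)
result = model.fit()
print(result.summary())
```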
Adherence Questions

The majority of adolescents (83%; 72/87) completed the adherence questions at least once, with a total of 221 completed questionnaires. The median number of completed adherence questions per person was 2 (IQR 1-4; range 1-11), whereas the adherence questions appeared 14 times during the study period.

Short Movies

Half of the adolescents (51%; 44/87) watched at least one movie. More females (n=29) than males (n=15) watched movies (P=.04). In total, 21 short movies appeared in the app; however, on average, 4 different movies were watched per person (IQR 2-6; range 1-20), and each movie was seen once (IQR 1-1; range 1-4). The movies that appeared first in the app were seen most. In addition, 1 pharmacist sent an additional movie with inhaler instructions to support a patient; this movie was seen twice.

[Table note: a) Frequency of app use: low = used the app ≤10 times; average = used the app >10 and ≤25 times; high = used the app >25 times. b) P values derived from (generalized) linear mixed-effects models.]

Pharmacist Chat

Of the 12 adolescents who started the conversation, one-third (n=4) did not receive a message back (Table 3; reasons unknown). In total, 38 adolescents (44%) sent on average 2 messages (IQR 1-5; range 1-17) to their pharmacist, and more females (n=28) than males (n=10) sent messages to their pharmacist (P=.004). In total, 34 conversations were held in which both pharmacists and patients sent at least 1 message; examples are shown in Multimedia Appendix 3.

Peer Chat

The peer chat was used by 21% (18/87) of the adolescents; in total, they sent 150 chat messages. Per adolescent, 4.5 messages (IQR 3-11; range 2-29) were sent. Most messages were sent within the topics sports (67 messages by 8 adolescents), other (34 messages about age, school, and residence by 6 adolescents), and general (24 messages about participating in the study and the app by 8 adolescents). The 18 adolescents participated on average in 2 topics (IQR 1-2; range 1-5). Examples of peer chat messages are shown in Multimedia Appendix 3.

Effective Engagement

The total app use was not associated with a difference in self-reported adherence (P=.12). Use of the CARAT questionnaire (P=.26), adherence questions (P=.65), short movies (P=.80), or peer chat (P=.21) also did not affect the adherence outcome. However, logged activity in the pharmacist chat positively affected self-reported adherence (the MARS score increased by 0.1 points per message; P=.03). Data showed that messages sent by pharmacists were not related to the outcome (P=.06), whereas activity of patients in the pharmacist chat did positively affect the outcome (P=.01); that is, if patients sent messages to their pharmacist, it positively affected adherence (the MARS score increased by 0.3 points per chat message).

Principal Findings

Adolescents have different preferences when using an mHealth app, as there was a wide variety in app usage per person. This supports the need for multifaceted mHealth interventions. The questionnaire to monitor symptoms was the most frequently used functionality, for which patients received weekly reminders. Females seemed to be more active in the ADAPT app; they used the app more often and for a longer duration, and more females sent messages to their pharmacists and watched movies. Total app use was not associated with the outcome; however, sending a chat message to the pharmacist positively affected medication adherence. On the basis of our results, we recommend a health care provider chat as a key functionality for mHealth interventions to improve adherence in adolescents with asthma.
The ADAPT intervention contained a unique combination of functionalities to improve adherence and targeted a specific patient population: adolescents with asthma. We showed that the adolescents who used the app 10 to 25 times (average users) had the highest adherence score at the start (MARS 21.4). One would expect the highest adherence score among the low-frequency users, because if patients are highly adherent, they do not need the intervention; or among the high-frequency users, as they are also likely to be highly adherent to the intervention use. However, we did not find this, although there was no difference between adherence rates among average and high users; thus, higher adherence rates might be related to more frequent app use, that is, being more adherent to the intervention. The most used functionality was the questionnaire to monitor symptoms (Table 1), which was also shown in a study with adult asthma patients [24]. The symptom questionnaire provides patients (and their health care providers) insight into their disease symptoms over time, which should support self-management [25,26]. Surprisingly, we did not find an effect of questionnaire use on adherence. Patients received a weekly push notification to complete the questionnaire, which might explain why this was the most used functionality. However, the adherence questionnaire was the second most used functionality (Table 1), for which patients did not receive a push notification. The reason why most patients completed the questionnaires is unknown; among other factors, curiosity might play a role. On the basis of all these questionnaire data (adherence and symptom control), health care providers could deliver personalized care to support patients, which is suggested to be more effective than usual care [27][28][29][30]. Therefore, we recommend questionnaires as a useful functionality for mHealth aimed at adolescents. The peer chat was an age-specific functionality based on the preferences of adolescents [11,31] because peers are important to them [32]. Previous studies showed positive effects of peer-led interventions for asthma patients in improving attitudes and quality of life [33,34], and online peer support groups increased self-confidence [35]. In our study, no effects of the peer chat were found on adherence. Only 21% of the adolescents (18/87) used the peer chat, suggesting that it was not appropriate for everyone. However, the adolescents who used it sent quite a lot of messages (8 per person). Therefore, more research is needed on a peer chat functionality in a larger population, as more interaction is expected when more patients participate, which in turn might support the use of the peer chat.
The pharmacist chat is a new communication method for both patients and pharmacists. It provided pharmacists with a tool to personally reach patients, which is particularly relevant for adolescent patients, as their adherence is low and they are not often seen in the pharmacy [22]. This electronic consult might overcome patients' barriers to approaching a health care provider. However, this study showed that not all adolescents and pharmacists were comfortable with using this new tool, because only 44% of the adolescents (38/87) and 82% of the pharmacists (22/27) used the ADAPT pharmacist chat. Moreover, 4 adolescents (with different pharmacists) did not receive an answer to their question or comment (Table 3). For further implementation of mHealth, it is important that patients always receive an answer; otherwise, it will hinder further implementation [36]. Health care providers should therefore be stimulated and motivated to actively engage in mHealth, and we suggest a back-up plan, for example, automatically sending personalized short message service text messages to patients who did not receive an answer within 24 hours, or an urgent email notification for pharmacists. For further implementation of mHealth in clinical practice, it is important to study the cost-effectiveness of the ADAPT intervention. Most mHealth interventions are cost-effective [37]; however, the active involvement of health care providers, in our case pharmacists, might negatively affect the cost-effectiveness. Thus, comprehensive economic evaluations are needed [38] to study the cost-effectiveness of the ADAPT intervention and to identify the optimal involvement of pharmacists (from an economic perspective). Limitations We used log data to analyze the ADAPT app usage, which is a reliable method; however, there are some limitations. Data used in this study are derived from a cluster randomized controlled trial; thus, there might be a response bias, that is, the participants were probably more motivated to use the intervention than the general population. However, use of the intervention still varied per person, suggesting that mHealth use depends on patients' needs and preferences. Another limitation is that patients received a weekly reminder to complete the CARAT questionnaire, which might be a reason why the CARAT was used most. Moreover, many researchers are using electronic monitors to measure adherence of youth with asthma; thus, further research should focus on effective engagement using electronic adherence measurements instead of self-reports. In addition, we studied the physical engagement of adolescents with the app (number of times used), although there is also psychological engagement with the intervention [17,39], which we did not measure. The psychological engagement might also explain why patients use certain functionalities. Moreover, the generalizability of our results is limited because our findings are based on a study among adolescents with asthma in the Netherlands. Therefore, more research is needed to confirm our findings in other countries and populations. However, these results suggest that the possibility to chat with a health care provider is an important functionality for mHealth interventions aiming to increase adherence.
Conclusions This study showed that a complex mHealth intervention to support adherence is used differently by adolescents with asthma. The questionnaires to monitor asthma symptoms and adherence were used by most adolescents, which provided valuable data for health care providers and patients. Moreover, the use of the pharmacist chat positively affected adherence. These findings suggest that mHealth apps should contain different functionalities to serve the diverging needs and preferences of individual patients. A questionnaire to monitor symptoms and adherence and a chat with the health care provider are recommended key functionalities for mHealth apps for adolescents with asthma. Figure 1. The Adolescent Adherence Patient Tool with the different functionalities. CARAT: Control of Allergic Rhinitis and Asthma Test. Figure 2. Overview of the combinations of functionalities used by 86 adolescents. Adh: adherence questions; CARAT: Control of Allergic Rhinitis and Asthma Test; peer: peer chat; pharm: pharmacist chat. Table 1. Descriptives of the adolescent app users and the differences between the frequency groups. (c MARS: Medication Adherence Report Scale. d CARAT: Control of Allergic Rhinitis and Asthma Test. e Used at least once. f Not applicable.) Table 2. Descriptives of pharmacists using the pharmacist chat. Table 3. Descriptives of patients using the pharmacist chat.
2019-03-18T14:06:49.298Z
2019-03-01T00:00:00.000
{ "year": 2019, "sha1": "6b802a01ad1f587525292249e3b5787e0ec00c2f", "oa_license": "CCBY", "oa_url": "https://s3.ca-central-1.amazonaws.com/assets.jmir.org/assets/preprints/preprint-12411-accepted.pdf", "oa_status": "BRONZE", "pdf_src": "ScienceParsePlus", "pdf_hash": "854f7e9634ad1cd4049d5ae755d8ed2e44b255fb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245876202
pes2o/s2orc
v3-fos-license
Integrin-β4 regulates the dynamic changes of phenotypic characteristics in association with epithelial-mesenchymal transition (EMT) and RhoA activity in airway epithelial cells during injury and repair Background: In airway diseases such as asthma, a hyperactive cellular event, epithelial-mesenchymal transition (EMT), is considered the mechanism of pathological airway tissue remodeling after injury to the airway epithelium. The initiation of EMT in the airways depends on epithelial disruption involving dissolution and/or destabilization of the adhesive structures between the cells and the ECM. Previously, we have shown that integrin-β4, an epithelial adhesion molecule in the bronchial epithelium, is an important regulator of cell proliferation and wound repair in human airway epithelial cells. Therefore, in this study we aimed to investigate whether integrin-β4 also regulates EMT phenotypes during injury and repair in airway epithelial cells, both in wild type/integrin-β4-/- mice in vivo and in cultured cells treated with integrin-β4/nonsense siRNA in vitro. Methods: We induced injury to the airway epithelial cells by repeated exposure to ozone or by mechanical scratch wounding, and subsequently examined the EMT-related phenotypic features of the airway epithelial cells, including biomarker expression, adhesion and cytoskeleton reorganization, and cell stiffness. Results: The results show that in response to injury (ozone exposure/scratch wound) and subsequent spontaneous repair (ozone withdrawal/wound healing), both in vivo and in vitro, the airway epithelial cells underwent dynamic changes in epithelial and mesenchymal biomarker expression, adhesion and cytoskeleton structures, and cell stiffness, altogether exhibiting enhanced EMT phenotypic features after injury and reversal of the injury-induced effects during repair. Importantly, these injury/repair-associated EMT phenotypic changes in airway epithelial cells appeared to be dependent on integrin-β4 expression. More specifically, when integrin-β4 was deficient in mice (integrin-β4-/-), the repair of the ozone-injured airway epithelium was impaired and the recovery of the ozone-enhanced EMT biomarker expression in the airway epithelium was delayed. Similarly, in scratch-wounded airway epithelial cells with integrin-β4 knockdown, the cells were impaired in all aspects related to EMT during wounding and repair, including cell proliferation, wound closure rate, adhesion and cytoskeleton protein expression (vinculin and vimentin), mesenchymal-like F-actin reorganization, cell stiffness and RhoA activation. Conclusion: Taken together, these results suggest that integrin-β4 may be essential in regulating the effects of injury and repair on EMT in airway epithelial cells via influencing both the cell adhesion to the ECM and the cells' physical phenotypes through the RhoA signaling pathway. Introduction The airway epithelium is a critical interface between the environment and the host; it is continuously exposed to environmental hazards and oxidative stress-mediated injury, which have been implicated in allergic diseases including chronic obstructive pulmonary disease (COPD) and asthma [1,2]. To maintain physiological function in the airway, cells can modulate their physical features, including shape and stiffness, in response to various signals from the cells' microenvironment [3][4][5][6]. Additionally, it is known that cells sense mechanical signals from their microenvironment via integrins that connect the cells to the extracellular matrix (ECM).
As heterodimers composed of noncovalently linked α and β subunits, at least eight different subtypes of integrins are known to be expressed in human airway epithelial cells, including α2β1, α3β1, α6β4, α5β1, α9β1, αvβ5, αvβ6 and αvβ8 [7]. These integrins interact with other signaling molecules to regulate cellular processes including differentiation, proliferation and migration [8,9]. Moreover, although in some non-airway cell types specific integrins (e.g., ITGβ8, ITGαvβ3) have been proven to contribute to cell migration via the RhoA signaling pathway [10,11], the contribution of specific integrins to the regulation of airway epithelial cell behaviors in relation to airway epithelial repair is still unclear. Previous evidence showed that epithelial and interstitial repair has been attributed to a hyperactive cellular behavior of epithelial-mesenchymal transition (EMT) [12]. Following disruption of epithelial integrity, airway epithelial cells at the wound edge acquire an EMT-like phenotype to facilitate cell migration [13]. In addition, EMT has also been reported to mediate the dysregulation of airway epithelial repair caused by inflammation and elevated TGF-beta1 via a primarily Smad 2/3 dependent mechanism [14]. In our previous studies, we have shown that integrin-β4 is a key regulator of cell proliferation and wound repair in human bronchial epithelial cells (16HBE14o-) [15,16]. Other studies have shown that integrin-β4 is essential for the structural organization of vimentin filaments and actin dynamics in lung epithelial cells [17,18]. Considering these functions of integrin-β4, and that the actin cytoskeleton is a major determinant of cell mechanical properties/behavior such as cell stiffness/cell migration [19,20], it is reasonable to assume that integrin-β4 may regulate airway EMT by regulating the mechanical properties/behavior of airway epithelial cells. Therefore, in this study, we sought to investigate whether integrin-β4 is implicated in the dynamic changes of EMT-related biological and physical phenotypic features of the airway epithelial cells, and to identify the underlying mechanism. Ethics All protocols and methods described in this study were performed in accordance with the principles and regulations described by relevant guidelines (Grundy, 2015) [21]. All procedures involving mice were conducted in accordance with governmental and international guidelines. Ethics approval was acquired from the Ethics Committee of Xiangya Hospital of Central South University (Approval number: 201803246). Generation of transgenic mice CCSP-rtTAtg/-/TetO-Cretg/-/ITGB4fl/fl triple transgenic mice were generated in-house as described previously [22][23][24][25]. The mice were held under specific pathogen-free conditions in groups of 4-8 mice per cage. Natural dark and light cycles (12 h) were maintained in each cage, along with standard feed and water ad libitum. Only male mice were used for the study. To induce Cre expression in the respiratory epithelial cells and produce ITGB4-/- mice, 1% doxycycline (Dox) in drinking water was administered to 8-week-old mice and continued throughout the entire experiment. The Dox-treated ITGB4fl/fl male littermates lacking CCSP-rtTA, TetO-Cre or both transgenes were used as control ITGB4+/+ mice. Ozone treatment Ozone treatment of cells or mice was performed as previously described [26,27]. Briefly, cells or mice in the ozone groups were exposed to 1.5 ppm ozone for 30 min/d for 1-4 consecutive days.
Cells and mice in the repair groups were then maintained under normal (ozone-free) conditions for another 24-96 h. Ozone was generated by a commercial ozonator (Model LT-100, Ltian, Beijing, China). Assessment of airway responsiveness For all groups, airway responsiveness was assessed 2 h after the end of ozone exposure. Mice were anesthetized with chloral hydrate (300 ml/100 g) by intraperitoneal injection. Gradually increasing doses of methacholine (0.32-3.12 mg/ml) were delivered intravenously, and the RL data were measured by a direct plethysmography system (Buxco Electronics, Biosystems XA, USA). Cell culture and primary cell preparation The HBEC cell line 16HBE14o- was a kind gift from Professor Gruenert at the San Francisco Branch Campus of the University of California [28]. We obtained human pulmonary fibroblasts (HPF) from the Cell Resource Center at Peking Union Medical College. Primary mouse airway epithelial cells were prepared according to a previously published procedure [29]. These cells were cultured at 37°C in 5% CO2 in high-glucose DMEM containing 100 U/ml penicillin, 100 U/ml streptomycin, and 10% fetal bovine serum (FBS). Cell culture reagents were purchased from Gibco (Invitrogen, Grand Island, NY, USA). Small interfering RNA synthesis and transfection The effective ITGB4 siRNA [30] (5'-CAGAAGAUGUGGAUGAGUU-3') and a nonsense siRNA were designed and synthesized by Guangzhou RiboBio (RiboBio Inc., Guangzhou, China). HBE cells were transfected with the negative control siRNA and the effective silencing siRNA, respectively. Transfections were performed using Lipofectamine 3000 (Invitrogen) according to the manufacturer's instructions. The efficiency of gene silencing after siRNA transfection was verified using real-time PCR and Western blot analysis. Live imaging and wound-healing assay For the wound-healing assay, primary mouse airway epithelial cells were allowed to reach 100% confluence followed by 24 h of starvation. A mechanical scrape injury was induced by creating a wound with a p200 pipette tip across the wells, which were then washed and replenished with starvation medium (DMEM containing 1% FBS). The border migratory cells at the wound edge were observed with real-time tracking and examined by an automated time-lapse microscope (Cell Observer System, Zeiss, Göttingen, Germany) equipped with a temperature and CO2 control chamber. Phase contrast images (5X objective) of six representative areas per well were captured every 30 min by matching the wounded region until the wound had completely closed (usually about 24 h). Real-time PCR analysis Real-time PCR was carried out using iTaq Universal SYBR Green Supermix (Bio-Rad Laboratories, CA, USA) with the CFX96 Touch Real-Time PCR machine (Bio-Rad). The primers used for real-time PCR were synthesized as described in Table 1. Target gene expression was normalized against GAPDH/HPRT and calculated using the 2^-ΔΔCt method (a minimal sketch of this calculation is given after this section). Immunocytochemistry Lungs of each mouse and cultured cells were fixed in cold 4% paraformaldehyde. For immunohistochemistry staining, the sections and cells were soaked in 3% H2O2 (Sigma-Aldrich) in order to inhibit endogenous peroxidase. Optical magnetic twisting cytometry The stiffness of cells was probed using optical magnetic twisting cytometry (OMTC). The details of this method have been described elsewhere [31]. Ferrimagnetic beads (4.5 μm in diameter) were fabricated in Dr. Jeffery Fredberg's lab at the Harvard School of Public Health.
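As a side note to the real-time PCR analysis above, the 2^-ΔΔCt calculation can be sketched in a few lines. This is a generic illustration of the method, not the authors' code, and the Ct values in the example are invented.

```python
# Generic 2^-ddCt relative-expression calculation (see the real-time PCR
# section above). Ct inputs are illustrative placeholders, not study data.
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Target-gene fold change vs. a reference gene (e.g., GAPDH or HPRT),
    relative to the control condition, by the 2^-ddCt method."""
    d_ct_sample = ct_target - ct_ref              # normalize the sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize the control
    return 2.0 ** (-(d_ct_sample - d_ct_control))

# Example: target Ct 24.1 with GAPDH Ct 18.0 in treated cells, vs. target
# Ct 26.5 with GAPDH Ct 18.2 in untreated controls -> ~4.6-fold increase.
print(fold_change(24.1, 18.0, 26.5, 18.2))
```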
RGD-coated magnetic beads were incubated with the cells for 20 min and then washed twice with PBS to remove unbound beads. During each experiment, beads were magnetized horizontally and then twisted in an oscillatory magnetic field at a fixed frequency of 0.3 Hz for 60 cycles. This sinusoidal torque caused the beads to twist in a trajectory with back-and-forth horizontal translation (Fig. 1Ea). The stiffness of F-actin (G′) was calculated from the ratio of the applied magnetic torque to the measured lateral bead displacement, and for each experimental condition the measurement of G′ was repeated 6-12 times. Baseline cellular stiffness was denoted G′0. For comparability of the stiffness among different experimental batches and groups, G′ was normalized to G′0 in each experiment. In addition, bead exclusion criteria were applied according to the amplitude, stability, angle and direction during bead oscillation. Atomic force microscopy (AFM) For atomic force microscopy (AFM), cells were seeded onto conventional glass slides. AFM images (100 µm × 100 µm) and force measurements were recorded using the NanoWizard 3 (JPK Instruments AG, Berlin, Germany) AFM system. The system was equipped with a fluid-heating chamber (CellHesion, JPK Instruments AG, Berlin, Germany) that ensured the culture medium was maintained at 37°C. Soft silicon nitride cantilevers (MLCT, Bruker, Karlsruhe, Germany) were used, with a nominal spring constant of 0.01 N/m. The loading rate of the probe was 1 µm/s. Imaging was done in contact mode exclusively. The stiffness of cells, reflected through the Young's modulus (E, Pa), was measured using the force curve during the extension of the Z-piezo, obtained by calculating the amount of cantilever deflection. For cell measurements, the force curves were collected from the perinuclear region and the peripheral region of each cell, measured at more than 6 sites per cell and 10-20 times per site. Using the JPK data processing software, all data were processed by curve-fitting with the Hertz contact model to obtain the Young's modulus (a brief fitting sketch is given after the corresponding results below). Cell proliferation assay The proliferation of 16HBE14o- cells was evaluated by an MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide) assay as previously described [32]. Briefly, cells were inoculated into a 96-well assay plate at a density of 10^4 cells per well (0.1 ml/well), followed by starvation for 24 h to synchronize cell growth. Subsequently, the supernatant was removed, and dimethyl sulfoxide (AR, Yonghua Chemical Technology, China) was added to each well. The mixture was shaken for 10 min to dissolve the crystals. The absorbance was acquired using an automatic microplate reader at 570 nm (Elx100, Thermo Fisher Scientific, Inc, Waltham, MA, USA). FRET microscopy of 16HBE14o- cells The FRET biosensor for RhoA has been described previously [33]. The RhoA FRET biosensor is a gift from Professor Klaus Hahn at the University of North Carolina. Briefly, the RhoA biosensor comprises a Rho-binding domain of the effector rhotekin (RBD, amino acids 7-89), followed by a cyan fluorescent protein (CFP), an unstructured linker of optimized length, a pH-insensitive variant of yellow fluorescent protein (YFP), and full-length RhoA. Upon activation by GTP loading, the RBD specifically binds to Rho, which brings YFP and CFP into proximity and thereby increases FRET.
Because the fluorescent proteins are attached to one another, RhoA activation can be approximated simply as being proportional to the FRET/CFP emission ratio at a given subcellular location. After co-transfection with integrin-β4 siRNA and the RhoA biosensor for 36-48 h, 16HBE14o- cells were detached with 4 mM EDTA (pH 7.4) in phosphate-buffered saline and seeded on fibronectin-coated 15-mm diameter glass-bottom cell culture dishes (801002, NEST, China) for 4-6 h before image acquisition. During the imaging process, the cells were maintained in serum-free medium at 37°C in 5% CO2. The images were collected with a Cell Observer System (Zeiss) equipped with the following filters (excitation; dichroic; emission): CFP (424/24 nm; 455; 460/40 nm), YFP (426/20 nm; 455; 520/30 nm). Emission ratios of YFP/CFP were generated and computed by the Metafluor software to represent the FRET efficiency before being subjected to quantification and statistical analysis. Statistical analysis Data are presented as means ± standard deviation (SD) from 3-6 representative experiments. The number of replicate experiments is specified in each figure legend. Statistical significance was determined by one-way analysis of variance (ANOVA) followed by Dunnett's t test. All data were checked for normal distribution, and the Pearson correlation test was performed to evaluate the relationship between adhesion molecules and EMT phenotypes. Statistical analysis was performed with the SPSS 21.0 statistical software package (SPSS 21.0, Inc., Chicago, IL, USA) and GraphPad Prism v5.01 software (GraphPad Software, USA). P < 0.05 was assumed to denote statistical significance. Results Airway epithelial cells exhibited dynamic changes in EMT biomarker expression, cytoskeletal structure and stiffness in response to ozone exposure and withdrawal To study the EMT phenotypic features of airway epithelial cells in response to ozone exposure, we cultured 16HBE14o- cells (a human bronchial epithelial cell line) and treated the cells with ozone (1.5 ppm) for 2 consecutive days at 30 min/d. As shown in Fig. 1A-B, repeated exposure to ozone over 2 days induced a dramatic increase in the expression level of the mesenchymal biomarkers (α-SMA and Vim), and a moderate yet still significant decrease in that of the epithelial biomarkers (E-cad and CK-19) in the cells. This confirmed that the ozone treatment did induce molecular EMT features in the airway epithelial cells (16HBE14o- cells) in culture. However, these changes in both the mesenchymal and the epithelial biomarkers recovered or even reversed in a time-dependent manner after the ozone exposure was withdrawn for up to 48 h. In order to assess the cytoskeletal structure of the 16HBE14o- cells, we analyzed fluorescence microscopy images of the cells and quantified the fluorescence intensity of F-actin labeling across the cells. Typically, the 16HBE14o- cells showed a highly concentrated F-actin structure around the cell periphery (Fig. 1Ca-b, arrows and triangles), in contrast to the human pulmonary fibroblasts (HPF), which showed extensive F-actin structure both around the periphery and throughout the body of the cells (Fig. 1Ce-f, arrows and triangles).
Such differences in F-actin distribution between these two cell types were further highlighted by quantitative comparison of the fluorescence intensity profiles and the corresponding mean intensity of individual cells at a linear region of interest, obtained by cross-sectioning through the cell perpendicular to the cell's long axis (thick/thin white line in Fig. 1Cb and f, and Fig. 1Cc-d and g-h, respectively). Although in both cell types the mean intensity at the periphery was significantly greater than that in the central region, the 16HBE14o- cells exhibited a higher ratio of peripheral to central mean intensity compared to the HPFs, suggesting a highly heterogeneous cytoskeletal structure of the airway epithelial cells (p<0.001, Fig. 1Ci). As shown in Fig. 1D, compared to controls, the cells repeatedly exposed to ozone for 2 d exhibited significantly disrupted F-actin fibers in the peripheral region but markedly thickened F-actin fibers in the central region, which was quantitatively confirmed by the significant increase in the ratio of central/peripheral F-actin mean intensity. These morphological changes suggest that repeated exposure to ozone rendered the 16HBE14o- cells a cytoskeletal structure morphologically similar to that of HPF, indicating that the cells underwent a mesenchymal-like cytoskeletal reorganization (Fig. 1Ce-h). After the ozone exposure was withdrawn for 24 h, the cells started to change back to their original epithelial cytoskeletal structure, restoring continuous F-actin fibers at the periphery and reducing F-actin fibers in the central region (indicated by arrows and triangles). In the meantime, the ozone-induced high ratio of central/peripheral F-actin intensity also gradually decreased, and eventually returned to the pre-ozone-exposure baseline level after 48 h of ozone withdrawal (Fig. 1D). Since the cytoskeletal structure is linked to cellular mechanical properties, we further assessed the ozone-induced changes in stiffness of the 16HBE14o- cells using optical magnetic twisting cytometry (OMTC), which is well established for studying the mechanical behaviors of collective adherent cells in culture, as shown in Fig. 1Ea. We found that 2 d of repeated exposure to ozone resulted in a 57% decrease in the normalized stiffness (G'/G0') of the 16HBE14o- cells. Upon withdrawal of the ozone exposure, the cell stiffness first rapidly reversed within 24 h, and then fully recovered at 48 h (Fig. 1Eb). In addition to the ozone effect on collective cells, we also quantitatively evaluated the ozone effect on individually separated cells in terms of topology and stiffness using atomic force microscopy (AFM, Fig. 1F). The AFM deflection images clearly showed that the ozone exposure caused topographic changes to the cells, resulting in the formation of extensive lamellipodia protrusions (Fig. 1Fa-f, see the arrow-pointed areas). Compared to the controls, ozone exposure markedly decreased the perinuclear stiffness (from 0.946 ± 0.130 kPa to 0.699 ± 0.178 kPa) and increased the peripheral stiffness (from 1.083 ± 0.250 kPa to 2.967 ± 1.103 kPa) of the cells, and the ozone-induced changes in stiffness at both the perinuclear and peripheral regions rapidly vanished after ozone withdrawal (Fig. 1G).
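For readers unfamiliar with the Hertz-model fitting mentioned in the AFM methods, the sketch below shows how a Young's modulus in the ~1 kPa range reported above could be extracted from a force-indentation curve. It is a simplified illustration under stated assumptions, not a reproduction of the JPK software's processing: the conical tip geometry, the 17.5° half-angle and the Poisson ratio of 0.5 are our assumptions rather than values given in the text, and the force data are synthetic.

```python
# Sketch of Hertz-model fitting of an AFM force-indentation curve to obtain
# Young's modulus (E). Assumptions (not stated in the text): conical tip with
# half-angle 17.5 degrees, Poisson ratio 0.5; the "data" below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

ALPHA = np.deg2rad(17.5)  # assumed tip half-opening angle
NU = 0.5                  # Poisson ratio commonly assumed for soft cells

def hertz_cone(delta, E):
    """Force (N) vs. indentation delta (m) for a conical indenter."""
    return (2.0 / np.pi) * (E / (1.0 - NU ** 2)) * np.tan(ALPHA) * delta ** 2

# Synthetic approach curve around E ~ 1 kPa, comparable to Fig. 1G.
delta = np.linspace(0.0, 1e-6, 50)
force = hertz_cone(delta, 1000.0) + np.random.normal(0.0, 2e-11, delta.size)

(E_fit,), _ = curve_fit(hertz_cone, delta, force, p0=[500.0])
print(f"Young's modulus ~ {E_fit:.0f} Pa")
```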
Airway epithelial cells also exhibited dynamic changes in the expression of key adhesion molecules such as integrin-β4 in response to ozone exposure and withdrawal Much evidence indicates that the expression of EMT biomarkers is closely associated with dynamic and efficient remodeling of cell adhesive contacts [34][35][36]. Accordingly, we examined the expression of multiple key epithelial adhesion molecules (occludin, claudin-1, ICAM-1, integrin-β1, integrin-β4, ZO-1 and CTNNAL-1) in cultured 16HBE14o- cells during ozone exposure and withdrawal. As shown in Fig. 2, repeated exposure to ozone for 2 d resulted in upregulation of the mRNA expression of occludin, claudin-1, ICAM-1, integrin-β1 and integrin-β4, but downregulation of that of ZO-1 and CTNNAL-1. These changes in adhesion molecule expression were also abolished after the ozone exposure was withdrawn for 24-48 h. As shown in Tables 2 and 3, we further analyzed the correlation between the adhesion molecule gene expression, the EMT biomarker expression, the F-actin structure (F-actin ratio) and the cell stiffness (G'/G0' and Young's modulus) of the cultured 16HBE14o- cells during ozone exposure and withdrawal. We found that integrin-β4 expression was specifically highly correlated with the F-actin ratio and the cell stiffness of the cultured 16HBE14o- cells during ozone exposure and withdrawal. Table 2. Correlation between epithelial adhesion molecules expression and EMT phenotypes in 16HBE14o- cells in response to ozone exposure. Integrin-β4 deficiency in vivo not only enhanced airway resistance but also prolonged the changes in EMT biomarker expression in the airway tissue during ozone exposure and withdrawal By using a conditional integrin-β4-deficient mouse model (CCSP-rtTAtg/-/TetO-Cretg/-/ITGB4fl/fl) together with repeated ozone exposure [22], we observed in vivo the influence of integrin-β4 on the pathological consequences in the airway epithelium in response to ozone exposure and withdrawal. In this case, integrin-β4 was deleted only in the airway epithelial cells, so that no lethal effect was caused to the integrin-β4-null mice. The efficiency of integrin-β4 deletion was validated by both real-time PCR and immunohistochemistry staining (Fig. 3A, B). The mice were then assessed in terms of airway resistance induced by aerosolized methacholine (at 1.56 mg/ml) and EMT biomarker expression in the airway tissue. Our results showed that compared to the wild type (integrin-β4+/+) mice, the integrin-β4-deficient (integrin-β4-/-) mice exhibited a significant increase in the methacholine-induced airway resistance (RL, % above baseline) during repeated exposure to ozone (1.5 ppm, 30 min per day, for 4 consecutive days). After the ozone exposure was withdrawn, the RL in both wild type and integrin-β4-/- mice continued to increase and peaked at 48 h, and then returned to the pre-ozone-exposure level at about 96 h (Fig. 3C). The wild type and integrin-β4-/- mice also showed a remarkable difference in the profile of EMT biomarker expression in response to ozone exposure and withdrawal. Compared to the wild type (integrin-β4+/+) mice, the integrin-β4-/- mice exhibited markedly enhanced EMT features (increase in α-SMA and vimentin, decrease in E-cadherin and CK-19) in response to repeated ozone exposure for 4 d.
More importantly, after ozone withdrawal the ozone-induced EMT features in the integrin-β4-/- mice largely persisted for up to 96 h, while in the wild type mice the ozone-induced EMT features quickly peaked at 48 h and then began to decrease at 48-96 h (Fig. 3D and E). Integrin-β4 silencing delayed cell stiffening recovery and impaired wound healing ability in ozone-stressed airway epithelial cells To determine whether integrin-β4 silencing affects the ability of airway epithelial cells to repair following injury caused by environmental hazards, we examined the dynamic changes of cytoskeletal reorganization and cell stiffening in 16HBE14o- cells pre-treated with integrin-β4-specific small interfering RNA (siRNA). The efficiency of integrin-β4 silencing was validated in terms of the mRNA and protein expression of integrin-β4 in the 16HBE14o- cells after transfection with siRNA for 48-72 h, by real-time PCR and western blotting, as shown in Fig. 4A-B. Since the expression of integrin-β4 was specifically highly correlated with the F-actin ratio and cell stiffness of 16HBE14o- cells, we further used optical magnetic twisting cytometry (OMTC), a well-established method for studying the F-actin cytoskeleton mechanics of collective adherent cells cultured in monolayer, to investigate whether integrin-β4 deficiency would impact the cytoskeletal stiffness of ozone-stressed 16HBE14o- cells. As shown in Fig. 4C, the normalized stiffness (G'/G0') of 16HBE14o- cells with integrin-β4 silencing (integrin-β4 KD) markedly increased compared to the control (NC). After ozone exposure for 2 days, the cytoskeletal stiffness decreased in both groups: by 38% in the NC group and by 46% in the integrin-β4 KD group. The depressed cytoskeletal stiffness in the NC group rapidly reversed within the first 24 h after ozone withdrawal, and fully recovered at 48 h after ozone withdrawal. However, the cytoskeletal stiffness of 16HBE14o- cells in the integrin-β4 KD group remained depressed at 48 h after ozone withdrawal. Using integrin-β4 siRNA and a classical scratch-wound assay, we further investigated whether integrin-β4 plays a role in the wound healing ability of airway epithelial cells. Compared to the controls (NC group), 16HBE14o- cells subjected to integrin-β4 siRNA (integrin-β4 KD) appeared to be impaired in the ability to repair the scratch wound (i.e., with a larger remaining wound area) during the period of up to 24 h (Fig. 4D), and also showed markedly reduced cell proliferation (Fig. 4E). Airway epithelial cells exhibited dynamic changes of fibroblast-like morphology and EMT features during scratch wound healing In addition to ozone treatment, we also subjected the cultured airway epithelial (16HBE14o-) cells to scratch wounding and then examined the changes of EMT features, not only as described above but also by live cell imaging. The time-lapse microscopy video images revealed that at 6 h after scratch wounding, the originally epithelial 16HBE14o- cells at the border region of each side of the scratched wound started to extend protrusions and then migrate into the cell-free area. During this process the cells changed from a cuboidal shape to a fibroblast-like elongated spindle shape (mesenchymal phenotype), then detached from one side and migrated to the other side of the wound. At 24 h after scratching, the wounded area was almost completely recovered by the cells, which eventually changed back into the typical cuboidal shape (Supplementary Video S1).
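As a simple illustration of how the wound-closure readout above can be quantified, the sketch below computes percent closure from wound areas measured at successive time points. The area values are invented placeholders, since the actual measurements came from the time-lapse images.

```python
# Percent wound closure over time from measured wound areas (placeholder
# values; in the study these would be measured from the time-lapse images).
wound_area_um2 = {0: 500_000, 6: 320_000, 12: 160_000, 24: 15_000}

initial_area = wound_area_um2[0]
for t, area in sorted(wound_area_um2.items()):
    closure = 100.0 * (initial_area - area) / initial_area
    print(f"t = {t:2d} h: remaining {area:>7,} um^2, closure {closure:5.1f}%")
```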
In association with migration and morphological change, the cells also exhibited a marked shift in EMT biomarker expression. Specifically, the cells displayed decreased E-cadherin expression and increased α-SMA expression as they migrated from the border region toward the other side of the wound, whereas the cells in the middle of the wound displayed the highest level of α-SMA expression and then lower α-SMA expression as they moved close to the other side of the wound (Fig. 5A, red arrows). Similarly, the cells displayed transient cytoskeleton reorganization, such as developing filopodia/lamellipodia and assembling either mesenchymal- or epithelial-like F-actin fiber structures, as they migrated across the scratch wound (Fig. 5B). The normalized stiffness (G'/G0') of the cells as measured by OMTC first decreased due to the scratch wound, and then progressively recovered as the cells migrated to heal the wound within 24 h (Fig. 5C). Young's modulus measured by AFM in individual migrating cells indicated that during scratch wounding and the subsequent early stage of wound healing (up to 6 h), the stiffness at the nuclear region gradually increased while that at the periphery decreased. Such changes in stiffness began to reverse during the later stage of wound healing (6-24 h, Fig. 5D). Integrin-β4 silencing in vitro impaired airway epithelial cells in wound healing, cytoskeletal reorganization, cell stiffening and RhoA activation Furthermore, integrin-β4 deficiency was found to influence the cytoskeletal reorganization and cell stiffness potentially via the RhoA activation pathway in the airway epithelial cells. Regarding epithelial cytoskeleton reorganization, we assayed the time course of changes in the expression of vimentin and F-actin fibers and the corresponding cell stiffness in the cells at the beginning of, and at regular intervals during, cell migration to close the wound. As shown in Fig. 6A, compared to the control, the integrin-β4 KD cells exhibited markedly decreased expression of vimentin and F-actin fibers both at the edge of the wound (a) and while migrating in the middle of the wound (b). Moreover, the integrin-β4 KD cells showed markedly disrupted intracellular F-actin connections and reduced formation of filopodia and lamellipodia, and thus inhibited spreading, while migrating in the middle of the wound. In the meantime, the integrin-β4 KD cells remained significantly stiffer (i.e., greater normalized stiffness, G'/G0', as measured by OMTC) throughout the period of the wound healing assay (0-24 h), compared to the control (NC) (Fig. 6B). Fig. 6Ca shows that integrin-β4 deficiency did not cause significant changes in the height topology of the airway epithelial cells as visualized by AFM deflection images. However, AFM force measurements indicated that, compared to the control (NC), the integrin-β4 KD cells were similar in stiffness at the cell periphery but became significantly softer at the perinuclear region as the cells were migrating in the middle of the wound (Fig. 6Cb-c). These cells also exhibited markedly decreased expression of vinculin, as shown in Fig. 6D, demonstrating an impaired linkage between the actin cytoskeleton and the focal adhesion due to integrin deficiency, as suggested by a previous report [37].
Since integrin-mediated adhesion has been shown to involve members of the Rho family of small GTPases [38], we examined RhoA activity in the 16HBE14o- cells with integrin-β4 silencing (integrin-β4 KD) using fluorescence resonance energy transfer (FRET)-based biosensors. As shown in Fig. 7, the integrin-β4 KD cells indeed exhibited a reduced level of RhoA activity compared to 16HBE14o- cells either untreated (Control) or treated with nonsense siRNA (Nonsense siRNA). Discussion In this study, we demonstrated both in vivo and in vitro that airway epithelial cells responded to environmental stresses (either exposure to airborne pollutants such as ozone or injury by mechanical scratch) with a dynamic yet intuitive presentation of EMT features in the airway epithelial tissue/cells, indicating a phenotypic transition during the stress-induced injury and the subsequent spontaneous wound healing/repair after the stress was removed/stopped. Specifically, the stress generally promoted mesenchymal phenotypic features in the airway epithelial cells, including a particular expression profile of specific molecules known to be associated with either the epithelial or the mesenchymal phenotype, as well as cytoskeletal remodeling and cell stiffness variation. More importantly, for the first time we showed that the transition of phenotypic features, especially the changes in mechanical properties of the airway epithelial cells during the processes of injury and spontaneous repair, was mediated by integrin-β4 as an epithelial adhesion molecule. We also found, at least in vitro, that integrin-β4 deficiency impaired the ability of airway epithelial cells to spontaneously recover from injury by inhibiting EMT-associated physical and chemical activities such as cytoskeletal reorganization, cell stiffening and RhoA activation. The idea of EMT was first proposed by Elizabeth Hay in the early 1980s to describe the phenomenon that, in response to stress, epithelial cells usually lose epithelial characteristics while gaining mesenchymal features [39]. Early studies identified E-cadherin and α-SMA as the major players of molecular phenotypes in EMT [40,41]. Nevertheless, it has also been found that the genes and proteins that are involved in cytoskeletal structural remodeling and the cellular polarity complex contribute to EMT as well [42,43]. For instance, cytokeratin and vimentin are either repressed or activated to acquire the potential for cell motility in EMT [44], whereas FSP-1, a highly specific protein marker for fibroblasts, regulates the synthesis or assembly of cytoskeletal proteins via altering internal morphogenic cues [45,46]. In the present study, we found that in response to either repeated ozone exposure or mechanical scratch, the airway epithelial cells always exhibited immediate suppression of E-cadherin and CK-19 expression but induction of α-SMA and vimentin expression, together with F-actin cytoskeleton reorganization that conferred more mesenchymal-like features, as characterized by an increased ratio of central/peripheral F-actin fluorescence intensity in the cells. Such cytoskeletal reorganization, disrupting the peripheral F-actin bundles and developing thick perinuclear stress fibers in airway epithelial cells, is known to be closely associated with increasing the cells' ability to elongate and contract [47].
Therefore, it is highly likely that the airway epithelial cells underwent such spatiotemporal reorganization of F-actin fibers in order to gain a greater potential to deform and thus escape the harmful microenvironment. In addition to the conventional evaluation of molecular marker expression and F-actin structural organization, here for the first time we quantitatively evaluated whether the cells changed their mechanical properties, such as stiffness, in accordance with the changes in the molecular/F-actin features during EMT, as indicated previously [35]. We measured the stiffness of either collective or individual airway epithelial cells using OMTC or AFM, respectively. We found that the collective cells responded to injuries (repeated exposure to ozone) with a marked reduction of cell stiffness, which is consistent with the mesenchymal phenotype requirement in EMT for soft cells to enable motility-driven fundamental cell behaviors such as migration and invasion [48]. In individual cells, we found that repeated exposure to ozone resulted in stiffening in the peripheral region but softening in the perinuclear region of the cell. These results suggest inhomogeneous remodeling of the cytoskeletal structure and correlated variation of the cell stiffness in the ozone-stressed 16HBE14o- cells, which together would ultimately benefit cell spreading or invasion by decreasing cell adhesion to the substrate and increasing cell contraction for deformation and protrusion. This not only provides further evidence that the organization of actin filaments is the overriding determinant of cell stiffness [49], but also demonstrates that during EMT the cells are required to soften their nuclei in order to change from a defensive epithelial state to an invasive mesenchymal state in favor of cell migration. Furthermore, it is important to note that bronchial epithelial cells exist with fundamental physiological integrity characterized by well-developed apical-basal polarity and intercellular contacts. Thus, the key event in EMT initiation is dependent on the disruption of epithelial integrity by dissolving and/or destabilizing the cell-cell/ECM adhesive structures [42]. In our previous work, we have shown that the airway epithelial defect in asthma is closely associated with abnormal expression of epithelial adhesion molecules such as CTNNAL-1, ICAM-1 and integrin-β4 [50][51][52]. In this study, we expanded the analysis to a panel of epithelial adhesion molecules whose genes are known not only to interact with each other but also to be involved in airway wound repair [50,51], EMT modulation [53][54][55][56], and actin cytoskeleton reorganization [57][58][59]. Indeed, we found that all of these molecules in the airway epithelial cells changed their mRNA expression in a similar time-dependent fashion to their EMT features during the repeated exposure to ozone and subsequent spontaneous recovery, and among them integrin-β4 showed the most specific, highly correlated changes between mRNA expression, peripheral/perinuclear F-actin ratio and cell stiffness (Fig. 1&2, Tables 2&3). Simvastatin, an HMG-CoA reductase inhibitor, has been shown to protect mice against lipopolysaccharide-induced acute lung injury [60], which was associated with upregulation of integrin-β4 [61]. In addition, integrin-β4 can also act as a mediator of endothelial cell protection in the setting of excessive mechanical stretch at levels relevant to ventilator-induced lung injury [62].
On the other hand, as a heterodimeric transmembrane receptor that is essential for maintaining the structural adhesion of epithelial cells, integrin-β4 is known to be significantly upregulated in the event of epithelial injury by a variety of environmental hazards, including ozone exposure and mechanical scratch [15,16]. In this study, we showed that in mice with conditional deficiency of integrin-β4 in the bronchial epithelium, ozone exposure induced significantly enhanced EMT molecular marker expression in the airway tissue, and this ozone-induced EMT biological phenotype was delayed in recovering after withdrawal of the ozone exposure (Fig. 3). This suggests that integrin-β4 may play a positive role in the homeostasis of physiological phenotypes in the airway epithelium in response to environmental stress. In fact, integrins are widely recognized to play very important roles in mediating various cellular behaviors, including establishing cell polarity to attach to the ECM and reorganizing the actin cytoskeleton to generate the intracellular forces that control proliferation, differentiation and migration [9,63]. For example, integrin α6β4 has been shown to associate with laminin-1 and thus modulate the formation and stabilization of actin-containing motility structures in carcinoma cell migration, indicating that integrins can link to the actin microfilament cytoskeleton via the formation of focal adhesions [64]. In addition to its role as a mechanical anchor, the focal adhesion is also an important messenger that transmits chemical signals from the ECM to the cytoskeleton and then modulates the mechanical properties of the cell, allowing it to adapt to a complicated, dynamic microenvironment [65][66][67]. It has been shown that during the processes of focal adhesion formation and maturation, cell polarity maintenance and cell migration promotion, the cytoskeletal network undergoes a dynamic reorganization of its major components, including actin and vimentin [68]. Among them, actin undergoes continuous directional de/polymerization to reorganize the structure of the thin filaments in the cytoskeleton, which facilitates quick generation of contractile forces in the cells to modulate cell shape and motility as a response to chemical and/or mechanical signals [69]. Vimentin, on the other hand, forms intermediate filaments that connect to paxillin/focal adhesions to support the cells in overcoming tremendous elastic stress, and plays an important role in pseudopodia formation and cell migration [70]. In tumor cells, it has been shown that vimentin knockout reduces the ability of the cells to adhere to the ECM, migrate and invade, while increasing the expression of integrin β4 [71]. Despite the extensive knowledge of the general roles of integrins in mediating cell behavior, the specific effect of integrin-β4 on the mechanical properties of migrating airway epithelial cells had not been studied before. By using cells treated with integrin-β4-specific siRNA and the classic scratch-wound healing model, here we found a close link between integrin-β4 and EMT physical features in the migrating airway epithelial cells during the process of wounding and repair (Fig. 4).
Specifically, knockdown of integrin-β4 in the airway epithelial cells led to a disrupted F-actin cytoskeleton and decreased vinculin expression in all cells, but decreased vimentin expression only in the cells either located in the border region or migrating in the middle region of the scratched wound, plus depressed development of filopodia/lamellipodia in the migrating cells (Fig. 6). While confirming the positive role of integrin-β4 in mediating bronchial epithelial repair via promoting cell proliferation, cell spreading and cell migration, these findings verified that integrin-β4 also directly affected the structural remodeling of the cytoskeleton in the airway epithelial cells during the process of injury and repair. Using OMTC and AFM to measure cell stiffness, we obtained the first evidence that integrin-β4 directly influenced the variation of stiffness in the airway epithelial cells during and after repeated exposure to ozone. Specifically, during the time course of ozone exposure and removal, the cells treated with integrin-β4 siRNA (integrin-β4 KD) appeared to be delayed in the recovery of their cytoskeleton and adhesion structures, and collectively remained much stiffer compared to their counterparts treated with nonsense siRNA (NC). Individually, and in the absence of ozone exposure, however, the cells treated with integrin-β4 siRNA were much softer at the perinuclear but not the peripheral regions, which might be due to suppressed vimentin expression. This is reasonable because a cell's mechanical properties/behaviors, such as stiffness and contraction, are largely determined by its focal adhesion and cytoskeleton structures [72][73][74]. In particular, vimentin is highly expressed around the cell's nucleus, and vimentin filaments are connected with microtubules to maintain the cell's mechanical strength; therefore, the suppression of vimentin expression would tend to decrease the stiffness at the perinuclear region, which is beneficial for promoting cell migration [71,75]. Since Rho GTPases are known as the principal molecular regulators of integrin-mediated actin cytoskeleton remodeling [76,77], we also studied RhoA activity in live cells with or without suppression of integrin-β4 expression. It turned out that suppression of integrin-β4 expression with siRNA indeed attenuated RhoA activity in the airway epithelial cells (Fig. 7). This may elucidate a dual regulatory effect of integrin-β4 on EMT in airway epithelial cells during injury and subsequent spontaneous repair in response to repeated ozone exposure/mechanical disruption; that is, on the one hand, integrin-β4 may affect the cell adhesion to the ECM, and on the other hand, it may regulate the RhoA-mediated force generation in the cells [55,78]. Taken together, the findings in this study demonstrate that integrin-β4 was associated with not only the dynamic reorganization of adhesion and cytoskeleton structures but also the modulation of cell mechanical properties in the airway epithelial cells in response to injury caused by repeated ozone exposure/mechanical scratch and subsequent spontaneous repair. All these dynamic changes in the adhesion/cytoskeleton structures and mechanical properties seemed to indicate enhanced EMT phenotypic characteristics during injury and subsequent reversal of the injury-induced EMT features during repair in the airway epithelial cells.
This has not only significantly extended our knowledge regarding the biological functions of integrin-β4 but may also provide insight into novel pathological mechanisms of fibrotic airway remodeling associated with obstructive airway diseases such as asthma and COPD.
2022-01-11T14:44:16.092Z
2022-01-09T00:00:00.000
{ "year": 2022, "sha1": "b7cc93c5fa95c3e7bde6634f8e675f36c9fce066", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "b7cc93c5fa95c3e7bde6634f8e675f36c9fce066", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
79426767
pes2o/s2orc
v3-fos-license
Prevalence of Tuberculosis Infection in a Cohort of Cattle that Enters the Food Chain in Accra, Ghana using Bovigam Tuberculosis (TB) continues to be an important public health problem worldwide. Currently there are 286 TB cases per 100,000 people in Ghana. This figure is three times higher than the TB burden estimated by the World Health Organization. The numerical contribution of bovine TB to the general TB burden is unknown. Herdsmen, livestock workers, veterinarians and the general public are at high risk of contracting bovine TB. Zoonotic TB caused by Mycobacterium bovis is present in animals in most developing countries, including Ghana. Unfortunately, activities to check and control cattle with TB infection from entering the food chain are often inadequate or unavailable. This study therefore aimed to determine the prevalence of TB infection in a cohort of cattle that enters the food chain in Accra, Ghana. A cross-sectional study involving five major abattoirs was conducted in the Greater Accra region between September 2012 and June 2013. After routine inspection of live cattle by veterinary officials, 10 mL of blood was drawn from 94 cattle before slaughter and tested for TB infection using BOVIGAM. Six (6.4%) of the 94 cattle screened were positive for TB infection. All except one abattoir had at least one animal testing positive. Although the study recorded a low prevalence of 6.4%, all animals tested had been deemed fit for slaughter by veterinary officials and were to enter the food chain. The low sensitivity of routine abattoir inspection for infected organs and the negative results of post-mortem examination reinforce the need for a more sensitive screening tool such as BOVIGAM. Introduction Tuberculosis (TB) is caused by a group of bacteria collectively known as the Mycobacterium tuberculosis complex [1]. The commonest strain of Mycobacterium found among TB patients in Ghana is M. tuberculosis, followed by Mycobacterium africanum and Mycobacterium bovis [2]. M. bovis is virulent for cattle but can infect other animals and humans, causing disease and pathology similar to M. tuberculosis, which is naturally pathogenic for man [3,4]. It has been established that 3% of pulmonary TB in Accra, Ghana is caused by M. bovis [2], which raises concern about possible aerosol transmission between cattle and human populations or within the human population. Despite rigorous control efforts, current global estimates indicate that one-third of the world's population has TB infection, and 5-10% of these individuals, if HIV negative, will develop active TB during their lifetime, contributing to a global annual incidence of approximately 9.2 million cases [5]. A study in the Ho district of the Volta Region of Ghana revealed a prevalence rate of 3.1% bovine TB infection in cattle and 5.9% within a cluster [6], whilst others have indicated transmission from humans to animals and vice versa [7]. Abattoirs provide ideal settings for screening of cattle because they are the last point of inspection before slaughter, and it is critical that TB is detected at this time to prevent TB-infected cattle from entering our food chain. Over the years, the in vivo intradermal comparative tuberculin skin test has been used as the standard test for diagnosis of TB in cattle worldwide, in spite of its lack of sensitivity and specificity [8]. Monitoring bovine TB in cattle by bacteriological assay is not feasible, as it is costly and time-consuming, and most laboratories are ill equipped [9].
In recent years, the gamma interferon (γ-IFN) assay has been used for detection of bovine TB; it detects the cytokine γ-interferon, which is predominantly released by T cells after in vitro stimulation with bovine purified protein derivative (BvPPD) and avian purified protein derivative (AvPPD) [10,11]. BOVIGAM® is an example of a gamma interferon (γ-IFN) assay used for the diagnosis of bovine TB infection in cattle. Animals infected with M. bovis can be identified by measuring the cytokine interferon gamma (IFN-γ) produced against tuberculin, an antigen used to aid in the diagnosis of TB infection. Tuberculin purified protein derivative (PPD) antigens are presented to lymphocytes in whole blood cultures, and the production of IFN-γ from the stimulated T cells is detected using a monoclonal antibody-based sandwich enzyme immunoassay (EIA). Lymphocytes from uninfected cattle do not produce IFN-γ to tuberculin PPD antigens, and hence IFN-γ detection correlates with infection. This study aimed to determine the prevalence of TB infection in a cohort of cattle that enter the food chain in Accra, Ghana, using Bovigam. Study design This was a cross-sectional study involving five major abattoirs, namely the Tema abattoir (GIHOC), Madina abattoir, University of Ghana farms (Legon abattoir), Accra abattoir and Amasaman abattoir, in the Greater Accra region of Ghana. Abattoir practices, including the extent of examination of live animals (ante mortem) and carcasses (post mortem), were also observed. Sample size Ninety-four cattle slaughtered for consumption were involved. Because slaughtering of cattle was not regular in the selected abattoirs, only 94 animals were available for sampling during the period of sample collection. Sample collection Sample collection was done shortly after the animals had been inspected and declared fit for slaughter. 10 mL of blood was drawn from the jugular vein for the Bovigam test between September 2012 and June 2013. Handling of blood from cattle and the laboratory analysis were conducted at the Noguchi Memorial Institute for Medical Research. Bovigam test Stage one: whole blood culture A minimum volume of 5 mL of blood from each animal was collected into a heparinised blood collection tube and mixed evenly. Three 1.5 mL aliquots of heparinised blood from each animal were aseptically dispensed into wells of 24-well tissue culture trays. The wells were labeled PBS (phosphate buffered saline), AvPPD and BvPPD. 100 µL of PBS (nil antigen control), avian PPD and bovine PPD were aseptically added to the appropriate wells containing previously dispensed blood. The culture tray was swirled ten times both clockwise and counterclockwise on a flat smooth surface to mix for 1 minute. The culture tray containing blood and antigens was incubated for 16-24 h at 37°C in a humidified atmosphere. 500 µL of plasma was harvested into sterile Eppendorf tubes and stored at -20°C prior to ELISA. Stage two: bovine IFN-γ EIA All test plasmas and reagents, except the Conjugate 100X concentrate, were brought to room temperature (22 ± 3°C) before use. The freeze-dried component was reconstituted according to the manufacturer's instructions. 50 µL of Green Diluent was added to the required wells. 50 µL of test and control samples were then added to the appropriate wells containing the Green Diluent; the control samples were added last. The plates were placed on a microplate shaker to mix the contents thoroughly for 1 min, after which they were incubated at room temperature on the microplate shaker at a setting of 600 rpm for 1 h.
The contents were poured out and the wells washed six times with freshly prepared wash buffer. After the six washes, the plates were placed face down on clean filter paper, allowed to drain, and flicked several times over absorbent paper to remove as much excess wash buffer as possible. 100 µL of freshly prepared conjugate reagent was added to the wells and incubated on a microplate shaker set at 600 rpm for 1 h (the conjugate reagent was prepared according to the manufacturer's instructions). The contents were again poured out and the wells washed six times with wash buffer. 100 µL of freshly prepared enzyme substrate solution was added to the wells and mixed thoroughly on a microplate shaker. The plates were covered and incubated on the microplate shaker, set at 600 rpm, in a dark area for 30 min. 50 µL of the enzyme stopping solution was then added to each well and agitated gently, taking care not to transfer chromogen from well to well. The absorbance of each well was measured within 5 min of terminating the reaction using a microplate reader fitted with a 450 nm filter. The absorbance values were used to calculate the results: plasma from an animal was considered indicative of M. bovis infection when the optical density (OD) of the bovine PPD well exceeded the ODs of both the avian PPD and nil (PBS) antigen wells by more than 0.100 (see the illustrative sketch below).

Results

Of the 94 cattle screened for TB infection using the Bovigam test, 6 (6.4%) were positive for TB infection. All abattoirs except the Legon abattoir had at least one animal that tested positive (Table 1: Bovigam results by facility).

Discussion

More than a decade ago, the TB prevalence in cattle around the Accra plains was estimated at 11-19% using the intradermal tuberculin (ITT) skin test [12]. The present study, using the Bovigam test, recorded a much lower prevalence of 6.4%. This figure does not suggest a reduction in TB cases in cattle, but is rather a reflection of the test used, which is more specific than the ITT used in the previous study. This assertion is supported by several studies that have reported that the ITT is less sensitive and specific than the Bovigam [10,13-15]. In our study, all the animals tested with the Bovigam had been deemed fit for slaughter after routine examination by veterinary officials. The examination involved evaluation of the general body condition, scored as 1 ("good"), 2 ("bad") or 3 ("very bad"), as well as inspection of the inguinal and prescapular lymph nodes, judged as normal or enlarged. In spite of the animals being declared free of detectable signs of bovine TB, cellular immune responses will be detectable earlier than the pathological changes caused by the disease (e.g., visible lesions), and before bacterial loads exceed the numbers necessary for the detection of M. bovis in tissue samples by culture [16]. This is one major advantage of Bovigam over other tests that rely on detection of the bacillus or signs of active infection. Bovigam essentially detects infection rather than disease, and so can be used as a screening tool in large herds to prevent the spread of infection through the early detection and subsequent quarantine of infected cattle, before visible signs of TB appear. Compared with studies in Turkey [17] and Cameroon [18], which recorded TB prevalences of 49.2% and 60%, respectively, in cattle bound for slaughter, the prevalence of 6.4% recorded in our study is on the lower side. The low prevalence notwithstanding, it is still a cause for concern, as these cattle had been adjudged fit for slaughter and were to enter the food chain.
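To make the positivity rule described in the Methods concrete, the following is a minimal sketch of how per-animal Bovigam OD readings could be classified. The function and variable names, the example readings, and the application of the 0.100 cut-off to both comparisons are illustrative assumptions based on the criterion stated above, not material or data from the original study.

```python
# Minimal sketch of the Bovigam interpretation rule described above:
# an animal is called positive when the bovine PPD OD exceeds both the
# nil (PBS) and avian PPD ODs by more than 0.100. The readings below
# are hypothetical, not data from this study.

from dataclasses import dataclass

CUTOFF = 0.100  # OD difference required for a positive call

@dataclass
class OdReading:
    animal_id: str
    od_pbs: float    # nil antigen (PBS) control well
    od_avppd: float  # avian PPD-stimulated well
    od_bvppd: float  # bovine PPD-stimulated well

def is_positive(r: OdReading) -> bool:
    """Apply the dual-threshold rule from the Methods section."""
    return (r.od_bvppd - r.od_pbs > CUTOFF
            and r.od_bvppd - r.od_avppd > CUTOFF)

# Hypothetical batch: one bovine reactor, one avian reactor, one non-reactor.
batch = [
    OdReading("GH-001", od_pbs=0.08, od_avppd=0.15, od_bvppd=0.62),
    OdReading("GH-002", od_pbs=0.07, od_avppd=0.31, od_bvppd=0.36),
    OdReading("GH-003", od_pbs=0.09, od_avppd=0.10, od_bvppd=0.12),
]

positives = [r.animal_id for r in batch if is_positive(r)]
prevalence = 100.0 * len(positives) / len(batch)
print(f"Positive animals: {positives}")           # ['GH-001']
print(f"Apparent prevalence: {prevalence:.1f}%")  # 33.3%
```

Note that the second animal in this hypothetical batch reacts more strongly to avian than to bovine PPD and is therefore not called positive; this comparative step is what distinguishes M. bovis infection from environmental mycobacterial sensitisation. Applying the same counting step to the study's 94 animals and 6 positive calls reproduces the reported apparent prevalence of 6.4%.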
Our study also highlights the low sensitivity of routine abattoir inspection for infected organs and reinforces the need for more sensitive screening tools such as the Bovigam. A recent study in Ethiopia revealed that the probability of missing an animal with a TB lesion during routine abattoir inspection is 95.24% [19]. Additionally, in many such cases even post-mortem investigation does not yield positive results because, owing to the low bacterial load or the latent stage of the disease, the organs appear normal. Consumption of undercooked meat from such cattle could potentially lead to infection. Moreover, considering that many animals are slaughtered outside the abattoir system, where there are no clinical examinations, the prevalence of TB in cattle entering the food chain will be much higher.
Moving from trust to trustworthiness: Experiences of public engagement in the Scottish Health Informatics Programme

The Scottish Health Informatics Programme (SHIP) was a Scotland-wide research programme exploring ways of collecting, managing and analysing electronic patient records for health research. As part of the SHIP public engagement work stream, a series of eight focus groups and a stakeholder workshop were conducted to explore perceptions of the role, relevance and functions of trust (or trustworthiness) in relation to research practices. The findings demonstrate that the public's relationships of trust and/or mistrust in science and research are not straightforward. This paper aims to move beyond simple descriptions of whether publics trust researchers, or in whom members of the public place their trust, and to explore more fully the bases of public trust/mistrust in science, what trust implies and equally what it means for research/researchers to be trustworthy. This has important implications for public engagement in interdisciplinary projects.

Introduction

Public trust in science is a subject of much academic and policy discussion, with many international bodies seeking to 'improve' public trust in order to address a perceived threat to the public authority of science (Bates et al. 2010; Wynne 2006). At least since the UK House of Lords Science and Technology Committee's landmark statement that there was a 'crisis of trust in science' (House of Lords Science and Technology Committee 2000), it has routinely been suggested that a series of high-profile scientific controversies and scandals (e.g. BSE, thalidomide and the MMR triple vaccine), together with the rapid pace of scientific progress, have resulted in an erosion of public trust in science. This is considered significant since:

. . . science and technology demand assenting publics to maintain their hold on the collective imagination, not to mention the purse-strings. (Jasanoff 2005: 248)

As such, considerable attention has been paid to 'improving' public trust in science. Most notably, this has been pursued through efforts to increase public understanding of science, on the assumption that, where the public is sceptical or mistrusting, this can be explained by ignorance or lack of understanding, and as such can be 'corrected' through better dissemination of scientific knowledge or 'facts' (Jasanoff 2005; Wakeford 2010). Public understanding of science (PUS) explanations of public mistrust have come to be widely criticised and discredited on a number of grounds. First, they present the public as 'passive recipients of scientific knowledge' (Cunningham-Burley 2006: 206) and imply that science inevitably has the 'right' answers (Yearley 2005). Secondly, the underlying assumption that greater knowledge or understanding of science results in greater acceptance remains unproven. Understanding is not a simple process through which individuals straightforwardly receive the 'correct' knowledge; rather, 'lay' publics actively deconstruct, question and evaluate claims to scientific knowledge (Hagendijk and Irwin 2006). Thirdly, PUS has been criticised for its lack of reflexivity. While it has sought to explain a perceived lack of public trust in science, it has overlooked considerations of what it means for science to be trustworthy (Wynne 2006).
As Wynne (2008: 21) contends:

We cannot properly conduct relevant research on publics in relation to science, unless we also critically examine the elephant in the room - what is the 'science' which we are supposing that people experience and sense in each of these situations?

For these and other reasons, since the turn of the century there has been a move away from top-down efforts at 'improving' public trust through education or awareness raising and a:

. . . general recognition of the need for two-way dialogue between the public, on the one hand, and scientists and policy-makers on the other. (Wakeford 2010: 87)

Public engagement with science has now largely replaced PUS as the key mechanism for addressing the 'crisis of public trust' (Wynne 2006). Yet the underlying justification remains unchanged, and the goal continues to be 'improving' public trust in science. Thus, such approaches remain largely unreflexive and overlook important considerations of what it means for science to be trustworthy, or what institutional arrangements lead to public trust/mistrust/distrust (Wynne 2006). As Wynne (2006) contends, while PUS suggested a deficit of public understanding, public engagement with science has replaced this with a deficit of public trust to be addressed through more information or more transparency. This remains a top-down approach, and important considerations of institutional arrangements, or of what the publics are mistrustful of, are overlooked. Moreover, the presumption that a 'crisis of trust' has come about as a result of a series of controversies or scandals which rocked public confidence in science is largely unquestioned. However, Wynne (2006) suggests that there may never have been a time when the public unquestioningly trusted science. Rather, public trust in science has always been conditional and ambivalent. Indeed, there is much evidence to suggest that general public support for science coexists alongside ambivalence and scepticism (Haddow and Cunningham-Burley 2008; Cunningham-Burley 2006; Wynne 1996). As Wynne (2006: 212) notes:

. . . there is lots of enthusiasm for [science] - but this is discriminating enthusiasm.

The public's relationship with science is too sophisticated to be characterised by a simple trust/distrust binary relationship. Rather, in many cases publics adopt an ambivalent form of trust, described by Wynne (1992, 1996, 2006) as an:

. . . 'as-if' trust - in response to their 'knowingly inevitable, and relentlessly growing, dependency upon expert institutions'. (Wynne 2006: 212)

Given the central role of scientific knowledge within society, publics have little choice but to trust in science. But this trust remains conditional and does not mean that they will inevitably have confidence in the scientists or scientific institutions conducting research. Public trust in science remains a valid subject of research, but the aim should not be:

. . . to 'fix' the lack of automatic trust (of the public in science) that may concern scientific institutions. (Marks 2011: 544)

Rather, there is a need for more symmetrical and reflexive considerations of what it means for publics to trust science, and equally of what it means for science to be trustworthy. There continues to be a need to explore the relationships of public trust/mistrust in science in order to understand what this means for scientific research/researchers and for the position of science in society.
In particular, research ought to reflect on what it means for research and/or researchers to be trustworthy and on what bases public trust is founded. This requires a more nuanced understanding of the nature of public trust and greater critical reflection on the institutional arrangements of science. To a certain extent this more nuanced approach is reflected in calls for responsible research and innovation (RRI), as highlighted by the European Commission's Science in Society programme (Owen et al. 2012). There is no singular understanding of what it means for research and innovation to be responsible, though Owen et al. (2012) have identified several core discourses in this area. However, there is a shift in emphasis from engendering public trust in science to ensuring that science is trustworthy. This shift has very significant implications for managing science-public relations.

The SHIP

Given such a shift, challenging questions are raised around the implications for research and researchers, particularly regarding how research can ensure public trust - or its own trustworthiness. This paper therefore reports on the findings of a public engagement project related to a large science programme and the ways in which trust was understood by both associated researchers and members of the public. The SHIP was a Scotland-wide research programme exploring ways of collecting, managing and analysing electronic patient records (EPRs) for health research. SHIP researchers developed systems to work across institutional boundaries, allowing both health and non-health data to be easily linked on a national scale while protecting patient confidentiality. This was intended to be a powerful tool for understanding patterns of health and disease in the population and for assessing the effectiveness of interventions in delivering public benefit (for more information see <www.scotship.ac.uk> accessed 20 Apr 2016). Given its ambitious nature, the project inevitably raised a range of social and ethical considerations (e.g. around the ways in which personal data are stored, accessed and used, or relating to processes for safeguarding confidentiality and respecting individuals' autonomy). A programme of public engagement activities was therefore included as one of the core projects within SHIP. This had a number of aims, including:

• To understand the Scottish public's preferences, interests and concerns relating to the sharing of health data for research.
• To explore the extent to which the public supported SHIP's aims.
• To ensure that SHIP operated transparently and in the public interest.

The initiation of SHIP reflects significant broad interest in secondary uses of health data (i.e. uses other than those for which the data were initially collected). With the expansion of research uses of data there has been a growing interest in public acceptability. In part, this relates to recognition of the importance of ensuring that data are shared and used in ways which are seen to be in line with public interests or preferences. The recent highly publicised controversy surrounding care.data in England has highlighted the importance of ensuring that the uses of the data are publicly acceptable. Similarly, the failed introduction of Australia's National Electronic Health Record Systems (NEHRS) demonstrates the importance of fully engaging with and addressing public concerns, taking account of how such programmes reflect, or jar with, public values (Garretty et al. 2014a,b).
Thus, increasing attention is paid to the public acceptability of secondary uses of data and to ensuring that these uses are understood and supported by the wider public (from whom the data originate). Public acceptability - and public trust - is crucial for ensuring the legitimacy of current practices and systems of governance. As Bradwell and Gallagher (2007: 18-9) have suggested:

. . . personal information use needs to be far more democratic, open and transparent.

And this means:

. . . giving people the opportunity to negotiate how others use their personal information in the various and many contexts in which this happens.

Until recently the literature in this area has been dominated by practitioner perspectives, and public views have been underrepresented or underreported. For example, Robling et al. (2004: 104) stated that:

The acceptability to patients of access to medical records without their consent has frequently been assumed. However, the lack of any evidence about the acceptability of such activities from the potential research subjects - members of the UK public - is striking.

Where studies have explored public attitudes towards secondary uses of their data, they have typically focused on issues relating to the anonymisation of data or (lack of) consent mechanisms (Damschroder et al. 2007; Saxena et al. 2006; Trinidad et al. 2012; McGuire et al. 2008; Willison et al. 2003; Medical Research Council and Ipsos MORI 2007). Broader issues around how programmes such as SHIP are perceived, and the extent to which they are trusted, have until now received less consideration. However, the literature is increasingly pointing to the centrality of trust in shaping public attitudes and responses to the secondary uses of data (Damschroder et al. 2007; Davidson et al. 2013; Ipsos MORI 2014; Trinidad et al. 2010).

Early scoping work conducted by members of the public engagement team involved interviews with members of SHIP's Scientific Management Group (SMG) to explore expectations and understandings of the role of public engagement within SHIP. These indicated that one of the key objectives that SMG members considered public engagement should aim to fulfil was ensuring public trust in SHIP. This highlighted the salience of public trust to research. However, among the members of the SMG there were subtle divergences between those who suggested that public engagement might increase public trust through increasing awareness and understanding (i.e. through information provision) and others who suggested that public engagement might provide insights which would allow them to adapt aspects of SHIP in order to reflect public preferences and/or address concerns, and as a result ensure public trust. This is an important difference in perspective, which leads to different expectations of public engagement. It also reflects the different positions on trust outlined above. Some SMG members demonstrated a deficit-model approach in suggesting that public engagement be used instrumentally to create public trust through awareness raising, whilst others advocated a more reflective approach and viewed public engagement as an opportunity to reflect public values, interests or concerns within SHIP - thus ensuring public trust through making SHIP trustworthy. This illustrates some of the challenges encountered in conducting public engagement effectively within interdisciplinary projects.
It also highlights a lack of clarity or consensus relating to the value or relevance of public trust and the public's relationship with science. Thus, the interviews with SMG members raised important questions regarding the relationships of trust between publics and SHIP, and regarding how SHIP might ensure high levels of trustworthiness. The growing body of literature on public attitudes to uses of data in health research has also drawn attention to the centrality of trust in shaping or informing public responses (Asai et al. 2002; Damschroder et al. 2007; Davidson et al. 2013; Ipsos MORI 2014; Trinidad et al. 2010). SHIP public engagement activities therefore sought to explore these issues further and, through deliberation with associated researchers and members of the public, examined both existing relationships of trust and opportunities for building trustworthiness into the design and operation of SHIP.

Focus groups

The first empirical stage of public engagement activities was a series of eight focus groups (conducted between October 2010 and February 2011) which explored public awareness of, and attitudes towards, uses and potential uses of EPRs for health research. These involved a wide range of public groups across Scotland and a total of 50 participants from a diverse range of backgrounds. Participants were recruited through pre-established groups such as patient support groups (relating to diabetes and mental health), a youth centre (with both young people and youth workers), an organisation representing people from black and ethnic minority backgrounds, a group of nursing researchers, and friendship groups from a variety of professional backgrounds (including law, social work and social science research). Focus group participants were selected through purposive sampling focused on maximising diversity across the focus groups in order to access a broad range of viewpoints and perspectives. The aim was to have a diverse, rather than statistically representative, sample (Barbour 2007). It was important that individuals within each of the groups shared common traits or interests and, in most cases, were pre-acquainted, as this meant that they felt comfortable and able to discuss the issues freely (Barbour 2007). In two of the focus groups one or more participants were acquaintances of the researcher/moderator and acted as gatekeepers for recruiting other participants. As has been found in previous studies using focus groups (Munday 2006), this enabled a level of understanding of group dynamics and viewpoints which would not otherwise have been possible. The first focus group was run as a pilot (with a group of social science researchers). However, given that this was successful in eliciting a range of valuable viewpoints, and since little was changed in the topic guide after this pilot, it was decided to include the findings from this focus group. The participants from the pilot focus group were all social science researchers and hence had an atypical awareness and understanding of research processes, which they reflected on in discussing their personal experiences and their own position as individual subjects of data. Indeed, most focus groups included individuals with some experience of research (whether in a professional, voluntary or personal capacity). Including a wide range of perspectives, levels of experience and understanding within the focus groups is a strength of this study and enables reflection on the range of viewpoints expressed across diverse groups.
The groups took place across Scotland (in Edinburgh, Glasgow, North Lanarkshire, West Lothian, Aberdeen, Inverness and Moray) and included a diverse range of age groups (the youngest participants being 16 and the oldest in their 70s). A roughly even split of genders was achieved (27 female and 23 male participants). A semi-structured approach was taken. A topic guide was developed to ensure a level of consistency between the focus groups. However, this was very flexible and allowed participants to raise issues and/or concerns which they considered to be relevant. The semi-structured design also meant that topics of discussion did not always arise in a pre-determined order, and that the focus groups were able to explore unanticipated areas of interest. As is recognised to be an advantage of focus group research, this approach allowed for a responsive, conversational style, resulting in open and frank discussions, and enabled individuals to engage with a topic which was previously unfamiliar to them (Barbour 2007).

Stakeholder workshop

As will be illustrated below, the focus group findings indicated, among other things, that trust was a highly salient factor influencing responses to SHIP. Given the relevance of trust in shaping public responses, it was felt that it would also be important to understand how trust is perceived and experienced by the range of actors who may use or benefit from SHIP (e.g. researchers, analysts, data controllers). Therefore, in collaboration with colleagues in the Information Governance work stream of SHIP, a workshop was held with a range of stakeholders during the SHIP biannual conference (9-11 September 2011). This explored stakeholders' perceptions of the role, relevance and functions of trust (or trustworthiness) in relation to research practices. A total of 28 conference delegates participated in the workshop. The range of perspectives included researchers, social scientists, government analysts, data controllers and lay representatives. Participants came from across the UK (England, Wales, Scotland and Northern Ireland) as well as Australia, Canada and the Netherlands. After two short presentations summarising work carried out by the Public Engagement and Information Governance work streams of SHIP, workshop participants took part in small group discussions which focused on the following key questions:

• What does trust mean to you?
• What do you think makes a researcher trustworthy?
• Do you think enhancing trust (or procedures for enhancing trust) hinders or enables researchers in any way?

The discussions were facilitated and recorded, and lasted around 35 minutes, after which key findings from each of the groups were fed back to the whole group and closing reflections were offered. This paper presents findings from the focus groups with members of the public and from the stakeholder discussions at the workshop in order to illustrate the various ways in which trust and trustworthiness were understood in relation to SHIP and to research more broadly. Throughout the paper the different parts of this research project are referred to as 'focus groups' and 'stakeholder workshop'. Using the term 'stakeholder' to refer to the participants of the 'stakeholder workshop' is not intended to suggest that focus group participants are not also stakeholders. Clearly, as data subjects and potential beneficiaries of data-linkage research, focus group participants are also stakeholders in SHIP.
There are also some overlapping interests between participants in the stakeholder workshop and the focus groups, as the workshop also involved lay representatives. Inevitably, regardless of their professional roles, all participants are also members of the public and data subjects. As such, although the stakeholder workshop enabled exploration of professional and informed perspectives, the distinction between the characteristics and interests of workshop and focus group participants is not altogether clear-cut. However, for the sake of clarity in discussing the two components of the research, the terms 'focus group participants' and 'stakeholders' are used throughout the paper. The analysis of discussions from the focus groups and stakeholder workshop followed an inductive approach to identifying themes within and across the discussions. This aimed to identify areas of agreement among the participants but also to highlight the diversity of views, interests and concerns which were expressed. Accordingly, the following discussion engages with the range of attitudes and responses articulated, and does not seek to make generalised statements about public opinion or preferences.

Findings

Across the focus groups and the discussions at the stakeholder workshop it was evident that trust was a highly salient concept. As will be illustrated below, judgements of actors' or institutions' trustworthiness were often central to focus group participants' responses and attitudes. In particular, there were sharp contrasts between participants who trusted that research would operate in 'the public interest' and those who were generally more sceptical of the intentions and interests of researchers or research institutions. Such judgements of trustworthiness strongly influenced the extent to which individuals supported the aims of SHIP. For example, some focus group participants were generally supportive of SHIP since they trusted that data would be used for appropriate and necessary purposes, and that research would (at least probably) ultimately lead to benefits for healthcare:

I've got a very simple and naive answer for this. And it's this. The state knows most things about you anyhow. So if . . . the more information the state has and the apparatus of state, the better they can handle the people and make . . . from a medical point of view, it's to make them better. I know that sounds terribly naive, there are lots of other issues that probably people will bring up. But that's the way I see it. The more information they get about the populous, the better it will be for them in the long run, and that's the way I see it.

Conversely, more sceptical focus group participants questioned whether research would necessarily or straightforwardly translate into benefits for healthcare. Moreover, some participants questioned the underlying justification of SHIP. For example, one participant suggested that there may be a hidden agenda:

My concern is that, I think at the end of the day this will in no way benefit any individual or any patient, I think there is a bigger agenda somewhere and I think having access to the kind of information that they are seeking to find is quite frightening and I'm not happy at all with any of it either, because what's it for, what is it really for?

Other focus group participants expressed more ambivalent positions.
For example, the following quote illustrates one participant's 'as-if' trust (Wynne 1992, 1996, 2006), in that she acknowledges that data could be used for good or bad purposes but chooses to 'put her belief in the system':

. . . I suppose it's back to that whole thing about using it for the power of good rather than the power of evil, isn't it? [. . .] I suppose [. . .] you put your belief in the system that universities are there to try and sort of safeguard that this will be used for the correct reason. There's all these things in place. And if we can use the information to benefit people, however that is, whether it's their social care, their healthcare, their living circumstance, their longevity, then we would all be saying we see it as a good thing. But we always have that kind of wee devil on the other shoulder saying . . .

Within the stakeholder workshop at the SHIP conference, participants demonstrated a great deal of enthusiasm to engage on the topic of trust in research and highlighted that this was perceived as relevant in a number of ways. It was noted that there is no universal understanding of trust, and no way of ensuring that a project, activity or institution will be considered trustworthy by all parties, since (as was evident in the focus groups) some people are likely to be more suspicious whilst others will be more trusting. However, workshop participants widely agreed that trust is crucially important to research processes and institutions, and that if it is lacking it 'could derail what we [as researchers] are doing'. There was therefore widespread acknowledgement that public trust is necessary and important for research.

Trust in who?

The focus group discussions indicated clear patterns in who was generally trusted by participants - or, more specifically, who was trusted to handle or manage individuals' personal medical data. When asked who they felt should be responsible for the management of such data, the majority of participants initially responded that this should be an individual's healthcare provider (typically their GP). Participants routinely expressed high levels of confidence in healthcare providers' competence at handling personal data appropriately and sensitively. For example, it was widely asserted that medical records should be shared between health practitioners, and many participants noted that they were happy for their information to be shared within the NHS. However, typically they were not sure how extensive data sharing currently was, or which parts of the NHS would have access to what information, and some difficulty was experienced in trying to define for whom access to data would be relevant and/or necessary:

. . . There's obviously thousands of people working in the NHS, at what point do you say, "Well, you can have access to it, but you can't." Is it people who've got daily contact with patients, is it researchers, is it consultants and doctors, paramedics, how do you define who's going to get access or not?

Although there were a few exceptions, in general participants were happy for their medical records to be shared with the doctors and specialists involved in their healthcare. However, most participants (at least initially) contended that information from their records should not be shared with anyone who is not directly involved in their healthcare. Varying levels of trust were expressed in non-clinical NHS employees.
In particular, concerns were frequently raised about receptionists having access to medical records and confidential patient information. For example, there were concerns that receptionists might misuse information or look up the records of people they know. Nevertheless, for many focus group participants the NHS, as an institution, was highly trusted, and many participants stated that they were content for data-management and data-sharing processes to be governed from within the NHS. In particular, several participants recounted that NHS computers have high levels of security and that members of NHS staff must abide by strict codes of conduct. This demonstrated a certain level of confidence in the NHS to oversee how data are protected. However, there was some concern about data being passed outside of the NHS:

But then how well are the people, I mean there's quite a lot of screening goes on to the employment of people in the NHS, you know, to professional people and does the same standards apply to other agencies that would have access to your records.

Nevertheless, it should be noted that many participants did not share this level of trust in the NHS. For example, participants acknowledged the potential for people within the NHS to misuse personal data:

What if someone got a hold of it, that's what I think [. . .] But by someone in the NHS for example being unprofessional, getting a hold of anyone's information, I just think that's a concern. Unless they were using it for a health purpose then fine, but it just feels that someone could just go on and, right, I know their name and date of birth, address, I can find out everything about them.

A number of focus group participants demonstrated significantly different perceptions of primary healthcare providers (e.g. GPs) and other professionals within the NHS, or of the NHS as an institution. Given that many individuals have existing relationships with particular primary healthcare providers, in some cases built up over many years, and that these are the professionals in the NHS with whom individuals are likely to be most familiar, this may suggest that a familiar relationship with an identifiable individual is important for securing public trust. This reflects the observation that relationships of trust are, at least partly, based on emotional ties between individuals and affective judgements of the trustworthiness of individuals (Rowe and Calnan 2006). The importance of relationships to trust was also a recurring theme throughout discussions at the stakeholder workshop. Stakeholders suggested that relationships of trust must be built up over time. Some workshop participants felt that it was easiest to build up relationships of trust when a research project had an individual in contact with research subjects. It was suggested that the human element of this was important and that trust could be facilitated through such things as being friendly, polite and considerate. By contrast, it was felt that where there is no individual relationship between members of a research team and research subjects, it can be more difficult to engender such trust. For research projects this level of familiarity between the research team and subjects may be easiest within a primary research context but may be more difficult to foster in secondary research.
Trust and altruism/commercialisation

Focus group participants who had been supportive of data sharing between healthcare professionals were typically more hesitant when asked to think about the ways in which personal data might be used for research. However, there was generally a preference for research to be conducted by academic researchers, and participants frequently expressed high levels of trust in universities. For example, it was suggested that the involvement of universities gave participants greater confidence in the systems in place:

. . . you put your belief in the system that universities are there to try and sort of safeguard that this will be used for the correct reason. (Mental Health Support Group 1 - Female 3)

I think the very fact that [universities]'re involved in it speaks volumes for me. That helps me to accept it or otherwise these great places wouldn't be involved. They're institutions full of great academics.

Of course, caution is needed here since the focus groups were being run by an academic researcher and this may have influenced participants' responses. Yet it should be noted that the participants were not asked directly how they felt about university or academic involvement in research, but rather raised this themselves. Through discussions it was clear that an important factor influencing positive perceptions of academic researchers was a perception that they were more altruistic than non-academic researchers. In particular, focus group participants often noted that they felt most comfortable with academic researchers as they were not expected to be motivated by profit. Focus group participants were generally uncomfortable with the idea of organisations or private companies making profits out of access to personal medical data. In particular, there was concern that drug companies might exploit the NHS by using information from medical records to develop new drugs which they would then sell back to the NHS:

Because they're the ones who make the profit in the end, because if they get all this information free and then they sell back the drugs to the NHS at exorbitant prices. (Diabetes Support Group - Female 5)

Many focus group participants were concerned about the possibility of private companies (most notably pharmaceutical companies) having access to information in medical records or being involved in research which would access this information. This reflects a widely held concern that:

. . . an awareness of any profit motive underlying scientific research will eventually lead to significant erosion in trust, and a devaluing of science by the community. (Critchley 2008: 310)

It is frequently asserted that the public is less trusting of research which is conducted by private companies/organisations, and that the creation of profit from research is a key factor influencing this mistrust (Critchley 2008; Critchley and Turney 2004; Hargreaves et al. 2002). Previous research has highlighted that commercial involvement is an area of public concern (Grant et al. 2013; Hill et al. 2013; Nair et al. 2004). It has also often been assumed that members of the public have little or no understanding of the current role of commercialisation in research (Millstone and van Zwanenberg 2000; Van Gend 2002).
Yet qualitative and deliberative research which has engaged with this topic has found that while members of the public have concerns about the commercialisation of research, they are often aware of its role and acknowledge the relevance of private company involvement in research (Davidson et al. 2013; Grant et al. 2013; Haddow et al. 2007). Moreover, Haddow et al. (2007) found that members of the public may be accepting of commercialisation so long as appropriate conditions are met (e.g. through benefit sharing). Similarly, while participants in our focus groups had concerns, in general they were not entirely opposed to commercialisation and often acknowledged the relevance of pharmaceutical company access to, or involvement in, research. In line with the findings of Haddow et al. (2007), many focus group participants accepted that pharmaceutical companies had a role to play in public health research and would support this so long as there were sufficient safeguards in place to protect against inappropriate use of the data:

The ones who are, you know, of course for the purpose of advancement of science and treatment of the patients and diseases they should be involved, but how, again it comes to the mechanisms and the ways of controlling them, not any pharmaceutical company should have access, but those who are involved in the research, a particular research, they should certainly have access to certain information which are required by them, but again, that should be certainly safeguarded and controlled.

Confidentiality was considered to be of particular importance when records might be accessed by private companies such as pharmaceutical companies. In such instances anonymisation of data was generally viewed as being of greater importance than it otherwise would be. However, some focus group participants also questioned whether pharmaceutical companies would be interested in individuals. For example, in a focus group with a mental health support group several participants expressed doubt that drug companies would want individual-level information. It was suggested that in most cases they would only be interested in aggregate data or statistics, and this was not viewed as a major concern. However, in several other focus groups there were significant concerns that identifiable information could be misused by private companies (such as pharmaceutical companies) for marketing purposes. As such, although there was an evident pattern of higher levels of trust in academic researchers and healthcare professionals, and lower trust in private companies, these relationships were not straightforward or static. Rather, participants indicated that the extent to which they would trust particular researchers depended on a range of institutional factors and assurances about necessary and appropriate safeguards being in place.

Trust and transparency

A key theme to emerge through discussions at the stakeholder workshop was the connection between trust and transparency. Many workshop participants considered openness about research practices and outcomes to be crucial for ensuring public trust. For example, one workshop participant described a responsibility to inform data subjects of how their data were being used and to provide feedback on the outcomes of this use. Most workshop participants agreed that public engagement should be focused on communicating positive messages about how data are used: 'promoting the success stories'.
As such, while it was noted that public engagement should not involve manipulating or 'spinning' information, it was felt that researchers or data controllers should be more proactive in communicating the positive aspects of research and data use. It was also suggested that there would be some benefit in raising public awareness of the complex legal environment surrounding data sharing, and that this might demonstrate the legitimacy of researchers' access to data. Similarly, it was contended that there is a lack of understanding of what researchers actually do, or of how they use data. One workshop participant suggested that most people imagine researchers to be based in laboratories and are not aware of types of research involving data analysis. It was said that members of the public have no experiential knowledge of this type of research and that this can lead to low understanding and a lack of trust. Raising public awareness was therefore considered key to ensuring public trust. In these ways transparency and public engagement were largely viewed as opportunities for awareness raising or information provision (or even public relations). Focus group participants also pointed to transparency as being important for ensuring public trust. In many cases focus group participants' concerns or reservations about SHIP stemmed from a perceived lack of openness about the ways in which data are currently collected and used, or how these processes are governed. For example, participants described a sense of inequity, in that they felt that they were expected to allow more and more people to have access to their information but were not expected to want access to information about how it was being used, for what purposes or by whom:

This is what I feel they want to know all about us, but we're, we're not supposed to know all about them sort of thing whose doing it. (Diabetes Support Group - Female 5)

Some focus group participants suggested that this lack of openness may be a deliberate effort to withhold information from the public and pointed to an awareness of previous instances where public information had been used and/or disseminated without public knowledge:

And Governments are also . . . our Government let's not generalise too widely, our Government is not very good at transparency with things like data, at saying what it's going to do, what it currently plans and then saying it changes its mind somewhere along the line.

In particular, and to a certain extent corroborating the discussion at the stakeholder workshop, focus group participants wanted greater information about how processes to manage requests for data access would be overseen and about who would be accountable for any breaches of privacy and/or misuse of data. One focus group participant (a nursing researcher) stated:

I do research, and my research is really important to me that I keep this data sensitive and there is no tracing to it. And it's almost like who's going to do that? You know, who's going to look after this? Who's going to ensure that there isn't a breach of these aspects when they're going to people who don't maybe have such ethical governance. (Nursing and Midwifery)

As such there were calls for greater openness and transparency in relation to how data are currently used, and how requests for data access are managed:

It is important, I think the public should definitely be more informed and well informed and quite clearly explain to people why the data has been collected and what purpose and how it is used.
I think they have a right to know.

However, in contrast to the position advanced in the stakeholder workshop, focus group participants emphasised that it was important that the information provided should be accurate, impartial and uncensored. Some focus group participants contended that any initiatives to raise awareness should be run or overseen by an independent body in order to avoid biased or inaccurate information:

I'd like to see an NGO definitely working on the, just constantly thinking about that education campaign and, you know, how that represents itself but somebody like a liberty, I mean, a civil liberty group involved, making sure, you know, because you can . . . the way you present something on a television advert in terms of, okay, we're now doing this and it will help us cure cancer, or help us deal with this, oh by the way, we'll . . . it will also make sure that, you know, people know exactly where you live! You know, but don't worry about that, we're curing cancer! You know, it would definitely need regulation in terms of how that gets presented.

The different interpretations of transparency demonstrated at the stakeholder workshop and within the focus groups illustrate differing understandings of the relationship between science and the public, and of the role of public engagement. While stakeholder workshop participants referred primarily to 'informational transparency', implying openness about how data are used and the value of data-linkage research, focus group participants were largely more concerned about 'participatory transparency' and 'accountability transparency', calling for openness about governance and decision-making practices (Brown and Michael 2002). In this way, much of the discussion at the stakeholder workshop could be viewed as exemplifying a deficit model of public engagement, whereby public trust can be 'improved' through the provision of appropriate (and selective) information. Conversely, focus group participants indicated that they would appreciate a more open exchange of information and greater equity in the science-public relationship. Whilst stakeholders at the workshop discussed public engagement as a means of generating public trust in research/researchers, focus group participants viewed public engagement as a potential indicator of the trustworthiness of the research and/or researchers.

Trust and trustworthiness

The emphasis on the trustworthiness of research/researchers, as opposed to public trust in research/researchers, is an important theme which emerged from the focus groups. The extent to which focus group participants considered SHIP to be trustworthy strongly influenced their responses. In particular, reflecting the emphasis on 'accountability transparency' noted above, this led to calls for more information about how SHIP would operate. During the focus groups participants asked many questions about the ways by which processes within SHIP would be governed and how access to personal medical information would be controlled at an institutional level. For example, it was asked:

I wonder who the captain of this ship is really then? You know, like the gatekeeper?

Similarly, it was stated:

Also I think there's always a danger of leakage as well, I think it can get everywhere, I think you need to be aware of that as well whether the health services control or whether pharmaceutical research companies, etc.
I think would be the main thing, who controls it, who is responsible for it, and how much information is out there or how much information they can access.

Focus group participants acknowledged that as individuals they had little control over how data-sharing processes were governed:

It's also a bit like pension funds in the sense that it's a big complicated set up that as one person, we don't really have much control over what happens [. . .] So, I might like my pension fund [. . .] not to invest in the arms industry, for example, or tobacco industry, but it's actually quite hard to change something that big as one person.

As such, who is in control of these processes and decisions was an important consideration influencing focus group participants' attitudes and responses, and this was one area about which participants indicated they would like further information. The focus group participants also suggested that members of the public should have a role in overseeing processes within SHIP and that lay representatives could play an important role in ensuring accountability and the protection of public interests. However, regardless of who is in control of the processes and mechanisms governing access to medical records data, the majority of focus group participants contended that misuse of data or breaches of privacy would inevitably occur from time to time. An important consideration therefore related to accountability and what would happen in instances of misuse of data. One participant noted:

Do you think that perhaps the reason we're not happy with many people having that level of power over our data, partly because we don't believe that the penalties for misusing data are severe enough. I mean, for me, that's a crucial point, I actually think there would be less mismanagement of data if the penalties of knowingly selling or giving away personalised information carry far greater criminal penalties [. . . Currently] They don't, I mean, you're not going to go to jail for it! Whereas, perhaps if you did people would be less likely to, you know, purposely sell personalised information.

Moreover, it was contended that there may be powerful interests preventing such cases resulting in penalties or prosecution:

It's not only the penalties you have to recognise that the people, the companies that are going to be using the information are going to have an awful lot of money so, like, there's the question of whether you even got as far as a penalty. Make it a whacking big penalty. No, the way you . . . you're looking for safeguards and someone, like, this will actually be applied fairly and then you've got your distrust of some legal system and various biases within it and, sort of, individual versus corporate power. You tend to assume that the corporate is going to win.

Participants in many of the focus groups demonstrated scepticism about the existing governance mechanisms or oversight procedures. There was some concern that committees of oversight bodies would operate with a presumption in favour of allowing data sharing for research to go ahead, and that a range of commercial or political interests might have influence in preventing or impeding robust accountability procedures. Participants at the stakeholder workshop also stated that breaches were inevitable and suggested that public trust was important for avoiding negative responses to such breaches.
Simultaneously, stakeholders also contended that how researchers or institutions respond to breaches is important, as such responses can either foster or damage public trust, again emphasising the importance of 'accountability transparency'. Stakeholders at the workshop expressed varying, and at times conflicting, views on the role of governance mechanisms in relation to trust in research and/or researchers. For some, governance systems were crucial for ensuring and maintaining trust. However, for others the existence of complex governance mechanisms and safeguards was itself potentially a source of mistrust in researchers or research institutions. For example, one workshop participant suggested that members of the public might respond to governance mechanisms/systems by asking: 'Why does research require all this? What are you trying to protect us from?'. As such there was concern that an awareness of governance mechanisms could lead to suspicion. Nevertheless, for many workshop participants compliance with standards set through governance systems was considered crucial for ensuring trust in research and/or researchers. It was argued that people trust researchers because they assume that there are oversight and governance processes in place and that researchers will comply with these. Compliance was therefore viewed as crucial for trust; however, one workshop participant commented that 'whether compliance is sufficient is another question'.

Discussion and conclusions

Public trust in science continues to be a topic of much academic and policy debate. In our research it has been very clear that trust is also perceived as a salient issue for both researchers and members of the public. However, reflecting recent work in the science and technology studies literature, it is clear that the public's relationships of trust and/or mistrust in science and research are not straightforward. Such relationships have been shown to be characterised by ambivalence, and public trust has been demonstrated to be highly conditional and variable. Thus, this paper has aimed to move beyond simple descriptions of whether publics trust researchers, or in whom members of the public place their trust, and to explore more fully the bases of public trust/mistrust in science, what trust implies and equally what it means for research and/or researchers to be trustworthy. The research methods also represent an example of the increasingly frequent public engagement activities associated with large science projects, which - it should not be denied - are themselves an effort to increase public trust and to ensure RRI. Within the focus groups there were clear patterns in which actors were generally considered to be more or less trustworthy (i.e. primary healthcare providers and academic researchers were generally considered more trustworthy than commercial actors such as pharmaceutical companies). However, this pattern did not straightforwardly translate into support for research conducted by healthcare professionals or academics and opposition to research conducted by pharmaceutical companies. Focus group participants demonstrated an awareness of the realities and practicalities of research and, in particular, noted that it may be relevant or necessary for pharmaceutical companies to be involved in or conduct research using personal medical data. Equally, participants' generally higher levels of trust in academic researchers did not mean that they were happy for academic researchers to have unfettered access to personal medical data.
Rather, participants' responses and their levels of support for data sharing and researcher access to personal medical information depended on a range of factors, such as the institutional arrangements for data-sharing processes, the transparency of processes and the existence of robust accountability procedures. The extent to which individuals perceived research institutions or data controllers - whether public or private, academic or commercial - to be transparent and to ensure high levels of accountability was crucial in informing their responses. Moreover, the extent to which individuals anticipated that members of the public could have control over their personal medical data, or could play a role in overseeing data-sharing processes, also influenced perceptions of trustworthiness. Members of the SHIP SMG emphasised that public engagement should aim to foster high levels of public trust in SHIP. As noted above, for some members of the SMG this trust was expected to come about through information provision and awareness raising. Similarly, stakeholders within the workshop at the SHIP conference suggested a need for greater transparency and public engagement in order to ensure public trust in research. For many workshop participants this transparency should focus on communicating positive messages about the value, importance and benefits of data sharing for health research. However, although awareness raising has a role to play, the focus groups have demonstrated that transparency must go much further than the selective communication of positive messages if it is to secure public trust. Instead, public participants in the focus groups emphasised the importance of trustworthiness within research and data-sharing processes. Transparency may be one indicator of trustworthiness, but it requires open communication of uncensored information. Therefore, research/researchers will be more likely to be perceived as trustworthy if transparency and public engagement involve open dialogue with members of the public and opportunities for deliberation, rather than controlled dissemination of information. This emphasis on transparency reflects broader attention to this area over recent years. There has been increasing emphasis on transparency as a mechanism for addressing a lack of public trust in science and scientific institutions. As Brown and Michael (2002: 260) have noted:

. . . in seeking to resolve the problems of trust and credibility, transparency has become ever more central to the revalidation of otherwise increasingly circumspect professions, institutions and commercial organisations.

However, this emphasis on transparency can conceal its problematic nature. For example, as illustrated by the various ways in which the stakeholder workshop and focus group participants discussed transparency, it can be pursued and achieved in different ways. Transparency might take the form of: informational transparency, requiring disclosure of the information on which decisions are based; participatory transparency, enabling public participation in decision-making processes; or accountability transparency, whereby decision-makers are held accountable (see Balkin (1999), discussed in Brown and Michael 2002). Moreover, as noted by Brown and Michael (2002), ensuring transparency does not represent a simple solution to low levels of public trust, since trust may itself be a necessary precondition for transparency being perceived as adequate or genuine.
Low levels of public trust lead to public scepticism of participatory or consultative events and of the individuals or organisations facilitating them. Where trust is not already present, participants or observers are likely to be sceptical of the level of transparency enacted. Therefore, transparency alone may be inadequate to build trust; instead, a circular conundrum emerges whereby transparency is necessary to build trust, but trust is required in order for the transparency to be recognised as adequate (Brown and Michael 2002). Brown and Michael (2002) argue that in order to break this circle what is needed is not more openness but rather more 'authenticity'. This authenticity can be signalled through emotional engagement and demonstration of the pain or suffering endured through decision-making processes. They contend that demonstrations that decision-makers have attempted to engage, incorporate and address disparate views to such an extent that it has caused them distress or suffering give them authenticity which, in turn, builds trust and lends confidence in the transparency of decision-making. This has important implications for public engagement in interdisciplinary projects. While public engagement is routinely conceptualised as a mechanism for ensuring public trust, such approaches may be of limited value. Public engagement can more appropriately be viewed as a mechanism for ensuring the trustworthiness of research. Yet such exercises are also performances of authenticity; that is, they represent attempts to demonstrate to wider publics that institutions or programmes such as SHIP are meaningfully grappling with the challenges of addressing disparate viewpoints. Such public engagement exercises can have many benefits: providing insights into how research or researchers are perceived, what concerns or preferences exist, and to what extent practices and aims reflect public values; and providing opportunities for researchers to reflexively address their own trustworthiness and seek to build high levels of trustworthiness into research practices and institutional arrangements. However, as Brown and Michael (2002) argue, such processes may not be adequate to build trust where this is not already present (at least at low levels). As such, public engagement exercises can aim to build relationships with publics to engender trust through demonstrating personal and emotional commitments to transparency and participation. While there is no simple toolkit for building public trust, such open, human processes are likely to be a more fruitful means of ensuring sustainable public trust compared to more traditional approaches to awareness raising or consultation. However, that is not to deny an important role for awareness raising and the provision of information. Within the focus groups there were clear examples of areas about which members of the public would like more information (e.g. how personal medical data is currently used and what safeguards are in place to protect confidentiality), but this information provision is likely to be most effective when it responds to public questions or concerns rather than preemptively selecting what the public should (and should not) know. As such, public engagement is likely to be most effective when it incorporates dialogic and deliberative forms of communication. Thus, efforts to improve transparency should focus on 'informational transparency', 'participatory transparency' and 'accountability transparency' simultaneously.
In such a way it is not simply an opportunity for publics to learn about science or research, but also for scientists or researchers to learn about the ways in which their work is perceived by, and impacts on, publics, and to what extent it reflects public values. Thus, public engagement should not be aimed at 'improving' public trust in science, but rather at improving the trustworthiness of science. As the international interest in secondary uses of routinely collected data grows, this becomes an ever more salient topic for research institutions, funders and governments. In the light of recent controversies surrounding care.data in England and NEHRS in Australia (to give but two examples), if the ambitious plans for 'optimising data sharing for research' in order to 'derive maximal societal benefit' (Medical Research Council and Wellcome Trust 2006) are to be realised, ensuring that such programmes have public support and are widely viewed as operating in the public interest will be essential. This necessitates considerable attention being paid to ensuring that such programmes are regarded as trustworthy by members of the public, and requires significant efforts not only to maximise transparency in data sharing and governance processes, but also to build and maintain relationships with wider publics to foster trust in open and meaningful ways.
2018-04-03T04:48:28.040Z
2016-05-11T00:00:00.000
{ "year": 2016, "sha1": "55e498478b6dc5ffbb152ee978ec25a31821c6e1", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/spp/article-pdf/43/5/713/8784018/scv075.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "0c262b3fcb33463c0c0629fe06c7f60334c8ed2b", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Sociology", "Medicine" ] }
261541081
pes2o/s2orc
v3-fos-license
Diet in hirudotherapy to increase therapeutic effectiveness in hypertensive disease. This article describes the experience of combining hirudotherapy with a special diet. Patients were divided into sex and age groups, and the effect of this combined therapy was monitored.

Introduction
In the modern world, hirudotherapy is a highly relevant method of treating many diseases. On the one hand, this is due to the wide range of biotherapy methods available; on the other, to the high risk of complications from the use of synthetic drugs. Treatment with medicinal leeches, or hirudotherapy, is one of the most ancient examples of the use of the healing powers of wildlife in medical practice. Avicenna, in his treatise 'The Canon of Medicine', paid great attention to medicinal leeches. Even then, medical leeches were actively used to treat concussion, kidney, liver and joint diseases, tuberculosis, epilepsy, hysteria and many other conditions [1][2][3]. Hirudotherapy is one of the most ancient methods of treating various pathological conditions, affecting the rheological properties of blood, lipid metabolism and the human immune system. Treatment with leeches produces diverse and versatile effects, the main ones being anti-ischemic, neuroprotective, anesthetic, anti-inflammatory and bactericidal.

Methods
The study observed 418 hypertensive patients. All patients underwent general clinical and instrumental examinations.

Results and discussion
Analyzing the experience of using medicinal leeches for therapeutic purposes, we found the results of numerous studies indicating the high efficiency of hirudotherapy in the treatment of various diseases. According to studies using hirudotherapy in various fields of medicine (neurology, neurosurgery, cardiology), hirudotherapy has minimal contraindications and side effects, which is very important for elderly and senile patients. In our review of the literature, however, we found no mention of a diet during treatment with medicinal leeches. In our study, diet therapy was strictly coordinated with hirudotherapy. Medical nutrition was prescribed in the form of special diets according to the nosological unit of the disease. We specifically identified patients prone to obesity, with a history of hypertension, coronary heart disease and metabolic disorders, and selected a cohort of patients aged 35-45 years. Sometimes diet therapy is the main method of treatment; sometimes it serves as an obligatory medical background against which all other treatments, including specific therapy and hirudotherapy, are applied. For patients with atherosclerosis, we limited the intake of animal fat, cholesterol-containing substances, simple carbohydrates (glucose, fructose), table salt, vitamin D and extractive substances, while providing an abundance of lipotropic factors (cottage cheese, oatmeal, soy, etc.), vitamins C, B1, B6, P and PP, plant cell membranes (fruits, vegetables), sitosterols, phosphatides (vegetable oils) and seafood. In the treatment of patients with hypertension and chronic cardiovascular insufficiency, a diet is used that contains no more than 2-3 g of table salt, is enriched with potassium, magnesium and vitamins, and provides physiological norms of proteins, fats and carbohydrates. Against the background of this diet, a magnesium diet is periodically prescribed for a short time, designed to exploit the depressant effect of magnesium salts [4][5][6].
Also noteworthy is the vegetable diet proposed by Caldwell Esselstyn, which has achieved a significant improvement in the condition of many patients with coronary insufficiency. Patients with metabolic disorders were recommended a diet of measured consumption of a variety of foods, with a daily abundance of fiber-rich foods: fruits, vegetables, legumes and cereals. All these products are low in calories and rich in vitamins and minerals. Sweets, muffins and fried foods were excluded from the menu. All patients were forbidden to drink alcohol, as alcoholic drinks bring extra calories and no nutrients. Patients were also advised to drink more water: pure water contains no calories, and a glass of water 30 minutes before a meal brings on satiety earlier. When a leech bites a patient, it releases hirudin and a number of other beneficial secretions into the blood, which stabilize the blood coagulation system, have a beneficial effect on the vascular wall and improve microcirculation. The use of leeches is the only form of bloodletting at the level of the microcirculation, where the main metabolic processes of cellular activity take place. The medicinal leech is considered a unique healing agent; its therapeutic effect results from the finely coordinated and rapid work of the whole organ complex of this intricately built animal. Proper care of leeches is therefore important, so that sufficient numbers are available to cover the everyday needs of medical institutions. Hirudotherapy leads to improved blood circulation and oxygen supply to all internal organs. In the treatment of hypertension with leeches, the effect comes down to a decrease in circulating blood volume and, since the secretion of the leech's salivary glands has a very significant hypotensive effect, a lowering of blood pressure. It has been noted that hirudotherapy changes the body's reactivity, resulting in increased sensitivity to ongoing drug therapy. Because of this, it is often possible to reduce the doses of drugs used, and sometimes to abandon their use completely [7][8][9][10]. In patients aged 35-39 years (132 men and 60 women) with ischemic disease, hypertension and metabolic disorders, after 5-9 sessions using 5-15 leeches over a treatment course of 10 days to 1 month with a strictly prescribed diet, the general condition improved, the blood coagulation system stabilized, chest pain subsided, and shortness of breath and abnormal heartbeat disappeared. Blood pressure decreased, mood improved, and headaches and dizziness ceased to trouble the patients (Fig. 1). In patients aged 40-44 years (150 men and 76 women) with ischemic disease, hypertension and metabolic disorders, who received 5-9 sessions using 5-15 leeches over 10 days to 1 month, the general condition improved markedly, and pain or discomfort in the arms, left shoulder, elbows, jaw or back disappeared. Difficulty in breathing and shortness of breath ceased. Nausea, vomiting and dizziness were not observed. The skin regained a pink colour and the patients became more cheerful. A few patients did not follow the proposed diet. Their clinical improvement was less pronounced and took longer to appear, on average 25-30 days longer than in the group that followed the diet.

Conclusion
Such a diet, together with the hirudin secreted by the medicinal leech, normalizes lipid metabolism, the state of the vascular wall, the coagulation and anticoagulation systems of the blood, the functions of the circulatory apparatus and other systems.
All groups of patients showed positive dynamics in the clinical picture; the therapeutic effect appeared after 5-9 sessions and lasted 3 months. When studying long-term results on an outpatient basis, it was found that a more stable hypotensive effect, and accordingly a good subjective state, was achieved in patients with borderline arterial hypertension. As the results of this study showed, treatment with leeches in compliance with a strict diet improves the general condition of the patient, correcting some pathological processes (inflammation, microcirculation disorders, hypoxia, etc.). Hirudotherapy in compliance with a strict diet interferes with the basic mechanisms of development of the pathological process and controls the set of reactions arising at the different structural and functional levels at which the disease forms. Hirudotherapy with a strict diet has a normalizing effect on the vasomotor center and the higher centers of the autonomic nervous system (a reflex effect), which, along with improving the adaptive capabilities of the cardiovascular system, leads to positive changes in peripheral and central hemodynamics. The use of hirudotherapy in various areas of medicine has demonstrated its effectiveness, and even after many centuries it has not lost its relevance. Hirudotherapy for hypertension has proven effective. In particular, the hypotensive effect observed in the 6 months following hirudotherapy in the complex treatment of hypertension leaves no doubt and indicates the need for its wider use in practice.
2023-09-06T15:20:24.015Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "d977d54ecb8936d0eabd1de8523511ab69ef3a54", "oa_license": "CCBY", "oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2023/10/bioconf_ebwff2023_05031.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a32ddeafcb067838bee46011742ea410df0c917c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
231940743
pes2o/s2orc
v3-fos-license
Functional patient-derived cellular models for neuropsychiatric drug discovery
Mental health disorders are a leading cause of disability worldwide. Challenges such as disease heterogeneity, incomplete characterization of the targets of existing drugs and a limited understanding of functional interactions of complex genetic risk loci and environmental factors have compromised the identification of novel drug candidates. There is a pressing clinical need for drugs with new mechanisms of action which address the lack of efficacy and debilitating side effects of current medications. Here we discuss a novel strategy for neuropsychiatric drug discovery which aims to address these limitations by identifying disease-related functional responses ('functional cellular endophenotypes') in a variety of patient-derived cells, such as induced pluripotent stem cell (iPSC)-derived neurons and organoids or peripheral blood mononuclear cells (PBMCs). Disease-specific alterations in cellular responses can subsequently yield novel drug screening targets and drug candidates. We discuss the potential of this approach in the context of recent advances in patient-derived cellular models, high-content single-cell screening of cellular networks and changes in the diagnostic framework of neuropsychiatric disorders.

Current bottleneck in neuropsychiatric drug discovery
Major neuropsychiatric disorders represent a substantial burden on worldwide health, accounting for 31% of years lived with disability (YLD) 1 , with a lifetime prevalence of over 20% in the global population (approximately 17% for major depressive disorder, 2.4% for bipolar disorder and 1-2% for schizophrenia and autism, depending on geographic region [2][3][4] ). They are associated with significant comorbidities including cardiovascular disease, suicide, substance abuse, immune disorders, obesity and diabetes 1,3 . Current treatments are effective in only 40-60% of individuals 5,6 , providing symptomatic relief as opposed to a cure. Other limitations include debilitating side effects, such as oversedation, and delayed onset of therapeutic efficacy 3,6 . Despite this urgent medical need, no drugs with fundamentally new mechanisms of action have emerged for over two decades 6,7 and many pharmaceutical companies have abandoned their neuropsychiatric R&D initiatives altogether 7 . This is largely because there is a fundamental lack of understanding with regard to the pathophysiology of neuropsychiatric disorders, which has compromised the identification of novel drug targets 7 . The major neuropsychiatric medications share mechanisms of action, including effects on monoaminergic neurotransmission 7 , with compounds that were discovered serendipitously in the 1950s and 1960s 6,7 . Since then the pharmaceutical industry has focused on the development of a vast array of monoaminergic drug derivatives with improved efficacy, safety or administration profiles [6][7][8] . However, because the fundamental mechanisms of drug action have remained similar, specific patient subgroups and symptom spectra (such as negative symptoms in schizophrenia) which were refractory to first-generation drugs have not been addressed by newer-generation monoaminergic drugs 3,7 . Likewise, the tenuous relationship between behavioral traits in preclinical animal models and neuropsychiatric symptoms in humans is often validated using existing monoaminergic drugs 6,7 , further precluding any mechanistically novel pharmacophores.
Finally, the full mechanisms of action of many of the monoaminergic drugs, and their non-specific binding to off-target receptors, are yet to be characterized 6,7,9 . Only recently have primary targets of existing neuropsychiatric drugs, such as the dopamine 2 receptor (DRD2) or glutamate receptor subunits (GRM3, GRIN2A, GRIA1) in schizophrenia, been linked to genetic risk of disease at the population level through large-scale genome-wide association studies (GWAS; see Glossary) 10 . However, polygenic risk scores explain only a fraction of genetic disease liability, for example 7% in schizophrenia 3 relative to 64-81% heritability derived from family and twin studies 11 . Moreover, putative individual GWAS risk alleles account for only a marginal increase in disease risk, with odds ratios typically under 1.1 and differences in allele frequencies between cases and controls often less than 2% 10,12 . The concept that each neuropsychiatric patient presents with a different combination of multiple common but weak, or in some cases rare but penetrant, risk alleles 3 has led to the use of in silico pathway analyses to identify cellular pathways which may represent convergent drug targets at the population level 13 . However, this approach is hindered by the fact that expression quantitative trait loci (eQTL), protein function and pathway analysis databases are insufficiently annotated to provide meaningful functional analyses relative to the molecular and cellular complexity of the human brain. Moreover, these resources often implicate non-specific pathophysiological alterations such as cell motility, glycolysis, synaptic plasticity or differentiation 13 , which are too general to represent 'druggable' targets. This is compounded by a limited understanding of how complex environmental risk factors, such as childhood social adversity, maternal infection, urbanicity, migration status or substance abuse, interact with genetic risk loci (gene-environment interactions) to impact disease etiology, onset and progression 3,14,15 . Thus, despite the wealth of molecular profiling data accrued in recent years, it is very hard to translate these insights into functional target-based drug discovery (Fig. 1). A final limitation is that the patient profiling strategies applied to date lack the dimensionality of a true systems biology approach, in that they do not measure the strength of interactions between molecular risk factors and how they change over time to impact integrated disease phenotypes at the cellular or physiological level. The dynamic nature of disease processes and loss of homeostatic coping mechanisms can only be assessed empirically if individual patient-derived samples are subjected to multiple system perturbations or functional challenges with kinetic resolution 16 .

Patient-derived cellular models of neuropsychiatric disorders
Drug target discovery in neuropsychiatric disorders has historically focused on the pathophysiology of the central nervous system (CNS) using post-mortem brain tissue, neuroimaging or animal model paradigms. While these approaches have added to our understanding of the disorders, they lack the vital feature of being able to assess dynamic cellular changes in relevant human tissue.
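To make concrete how such weak individual effects are aggregated, the sketch below computes a polygenic risk score as the conventional dosage-weighted sum of log-odds effect sizes. The odds ratios and allele dosages shown are hypothetical placeholders, not values from any cited study.

```python
# Minimal sketch of a polygenic risk score (PRS), assuming hypothetical
# GWAS effect sizes and one individual's effect-allele dosages.
import numpy as np

# Per-allele odds ratios (illustrative, typical of weak common variants)
odds_ratios = np.array([1.08, 1.05, 1.10, 1.03])
# Effect-allele counts for one individual (0, 1 or 2 copies per variant)
dosages = np.array([2, 0, 1, 1])

# PRS is conventionally the dosage-weighted sum of log-odds effect sizes.
prs = np.sum(dosages * np.log(odds_ratios))
print(f"Polygenic risk score: {prs:.4f}")
```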
However, the emerging concept that neuropsychiatric disorders are systemic disorders with corresponding manifestations in the brain and peripheral tissues [17][18][19][20] suggests that different cellular models derived from peripheral cells could offer an unprecedented opportunity to screen for functional drug targets in relevant patient-derived tissue. Induced pluripotent stem cells (iPSCs), created by introducing key pluripotency genes into adult somatic cells, have received considerable attention in recent years as a potential source of patient-derived cellular models for neuropsychiatric disorders, including schizophrenia, bipolar disorder, autism spectrum condition, Timothy syndrome, Fragile X syndrome and major depressive disorder 21,22 . iPSCs have been reprogrammed into a variety of different brain cell lineages, including cortical excitatory, hippocampal and inhibitory neurons, microglia, oligodendrocytes and astrocytes 21,22 . Importantly, they have demonstrated putative disease hallmarks, such as altered neuronal connectivity in schizophrenia or neuronal hyperexcitability in bipolar disorder, which were reversed by antipsychotic and mood-stabilizing medications, respectively, suggesting that they could potentially predict clinical drug efficacy. Recent developments in this field have concentrated on scaling up iPSC-derived cultures to form more complex multi-dimensional cell networks which enable spatial interactions between different cell types to be explored. These include co-cultures modeling microglia-mediated synaptic pruning 23 , microfluidic hippocampal synapses 24 , neural spheroids and brain organoids. Brain organoids have furthermore displayed a diversity of brain cell types, photosensitivity and complex cortical-like features 25,26 and have been used to study complex developmental processes such as neuronal progenitor proliferation, interneuron migration and cortical layer formation 21 . The use of brain organoids is still in early stages for neuropsychiatric disorders. For example, one single-cell RNA sequencing study reported altered GABAergic specification and Wnt signaling in brain organoids derived from monozygotic twins discordant for schizophrenia 27 . Another study reported downregulation of pathways involved in synaptic biology, neurodevelopment and cell adhesion, concurrent with reduced stimulation and depolarization responses, in brain organoids from individuals with bipolar disorder 28 . Nevertheless, organoids have been successfully employed in a number of other disease indications, such as drug repurposing screens against Zika virus, SARS-CoV-2 infection modeling and precision medicine for cystic fibrosis and a range of cancers 29 . While these iPSC-derived models represent an unprecedented opportunity to explore neuropsychiatric cellular alterations in relevant CNS tissue with the genetic background of patients, they continue to face several limitations. These include difficulties in selection of the iPSC colonies, specificity of end fate differentiation, intra-patient variability of iPSC clones, karyotypic instability across passages and differential power requirements for idiopathic versus monogenic gene variants 21,22 . These are compounded in the case of organoids by differences in intrinsic versus directed patterning and the inability to mature to postnatal stages, potentially due to lack of vascularization 21 . Together, these features have meant that this approach remains relatively high-cost, variable and low-throughput.
Cells which share many of the characteristics of brain cell lineages can also be induced directly from primary patient tissue without the need for reprogramming, including neuronal-like cells from fibroblasts 30 , microglial-like cells from peripheral monocytes 31 and olfactory neurosphere-derived cells 32 . Finally, CNS cell lines or cells from control donors can be cultured in patient-specific body fluids, for example using patient-derived serum or cerebrospinal fluid, to investigate the effects of disease-associated secreted factors 33 .

[Fig. 1 Translation gap in neuropsychiatric drug discovery. The figure summarizes the major obstacles and pending questions in neuropsychiatric drug discovery (boxes right) at the drug, patient, environmental risk factor, protein and gene levels. Disease heterogeneity, diagnostic uncertainty and incomplete characterization of the molecular targets of current neuropsychiatric medications have led to many patients who either do not respond to treatment, present with treatment-refractory symptom domains or suffer from debilitating side effects. On the other hand, the genomic complexity of neuropsychiatric disorders, in terms of multiple common but weak or rare but penetrant risk alleles (shows schematic distribution of allele frequencies vs. odds ratios for GWAS risk loci and copy number variants, adapted from ref. 3 ), unknown susceptibility loci (missing heritability) and uncharacterized interactions between genetic and environmental risk factors, in addition to incomplete functional annotation of protein interaction databases, has made it difficult to accurately prioritize potential drug targets at the cellular level based on molecular profiling data. Functional testing of patient and control cells using ligand libraries and high-content screening provides a means to summarize the integrated effects of multiple molecular and environmental risk factors (red) as convergent abnormalities in cellular response ('functional endophenotypes') which may represent more physiologically relevant drug targets for specific patient subgroups.]

Peripheral blood mononuclear cells (PBMCs) are possibly the best example of this application. They are both accessible for sampling and amenable to high-content screening in suspension 34 . Consequently, they represent a scalable model with the potential to satisfy the power requirements of neuropsychiatric disease investigations whilst facilitating the depth of cellular exploration necessary to reveal complex disease processes in their native state.
The majority of investigations using PBMCs in neuropsychiatry have focused on determining the relative proportions of different cell subsets, their activation status or their cytokine secretion profiles 35,36 , consistent with hypotheses of immunological dysfunction in these disorders, and more recently on interactions with the human microbiome 19 . However, recent data suggests that PBMCs can also provide a surrogate model for exploring systemic alterations in a subset of CNS drug targets. Subtypes of CNS receptors (e.g., dopamine and 5HT receptor subtypes) and their cell signaling substrates (e.g., Akt1 and GSK-3β) have been shown to be altered in the brain, as well as in PBMC subsets of neuropsychiatric patients, and correlated with therapeutic efficacy or disease severity 17,[37][38][39] . GWAS data also suggests the enrichment of single nucleotide polymorphisms associated with neuropsychiatric (schizophrenia) risk loci within PBMC subtype-specific gene expression enhancers 10 . Moreover, PBMCs have shown preliminary evidence of epigenetic changes paralleling those observed in the brain following exposure to environmental stressors, such as early life social adversity 15,40 , raising the possibility of exploring drug-target interactions which are specific to environmental risk factors. Although many of the pathways which are shared between PBMCs and CNS cells are likely to respond differently, and the degree of functional overlap between lineages remains to be fully determined, recent evidence suggests that subsets of pathways (e.g., calcium signaling via PLC-γ) or even individual protein-protein interactions which do overlap might serve as a proxy for clinically relevant targets which are otherwise inaccessible in primary patient samples 41 . Likewise, it is possible that, at least in a subpopulation of patients, targeting proteins which mitigate immune dysfunction may contribute to symptom remission, as exemplified by the modest efficacy of celecoxib in clinical trials involving first-episode schizophrenia patients with predominantly positive symptoms 42 .

The functional cellular endophenotype strategy for neuropsychiatric drug discovery
The functional cellular endophenotype (see Glossary) strategy aims to directly identify abnormal functional responses in patient-derived live cells, relative to healthy individuals, and subsequently use these responses as novel drug screening targets (Fig. 2) 41 . First, live cells (e.g., iPSC-derived neurons, PBMCs; Fig. 2a) from patients and controls are incubated with mechanistically diverse ligand libraries (e.g., CNS receptor agonists, cytokines, hormones, growth factors, antigens or intracellular signaling modulators; Fig. 2b). Second, responses for each ligand treatment relative to the vehicle are assessed across multiple functional readouts (e.g., phosphorylation of cell signaling proteins or mRNA expression) in parallel using single-cell high-content screening (e.g., flow cytometry, mass cytometry, high-content microscopy or single-cell RNA sequencing; Fig. 2c). Third, immunophenotyping is used to resolve responses across different cell subpopulations (e.g., PBMC subsets or iPSC-derived cell subtypes; Fig. 2d) within the heterogeneous cell sample. This creates a combinatorial expansion of the number of functional assays performed in each cell sample (Fig. 2e). Each ligand-readout-cell subtype combination represents a cellular response 'node'. All nodes together provide a profile of the functional repertoire of the cells from each donor.
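As a rough illustration of this combinatorial expansion, the sketch below assembles one donor's node profile from hypothetical median signal intensities, expressing each response as an arcsinh-ratio of stimulated versus vehicle signal, a transform commonly applied to cytometry data. All ligand, readout and subtype names, cofactors and values are illustrative only, not the assay panel of any cited study.

```python
# Minimal sketch of building a ligand x readout x subtype "node" profile
# for one donor, with simulated median intensities.
import itertools
import numpy as np

ligands = ["thapsigargin", "LPS", "IFN-gamma"]
readouts = ["p-PLCg1", "p-Akt1", "p-S6"]
subtypes = ["CD4_T", "CD8_T", "monocyte"]

rng = np.random.default_rng(0)
# Hypothetical median intensities for vehicle and each ligand treatment.
vehicle = {(r, s): rng.uniform(50, 200)
           for r, s in itertools.product(readouts, subtypes)}
treated = {(l, r, s): rng.uniform(50, 400)
           for l, r, s in itertools.product(ligands, readouts, subtypes)}

# One response value per node: arcsinh-ratio vs. vehicle (cofactor 5).
profile = {
    (l, r, s): np.arcsinh(treated[(l, r, s)] / 5) - np.arcsinh(vehicle[(r, s)] / 5)
    for l, r, s in itertools.product(ligands, readouts, subtypes)
}
print(f"{len(profile)} nodes per donor")  # 3 x 3 x 3 = 27 nodes
```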
In addition, the same matrix can be applied at different time points or with different ligand doses to provide kinetic resolution or functional titration of the cellular responses. Comparison of these node profiles between donors in different clinical groups (Fig. 2f), for example neuropsychiatric patients vs. healthy controls, allows the identification of cellular responses which are altered in the disease state. Crucially, the disease-associated cellular responses can then be targeted through phenotypic drug library screening to derive novel drug candidates capable of normalizing these responses (Fig. 2g). Finally, clinically relevant disease mechanisms linked to drug responses can be elucidated by follow-up genomic or proteomic experiments (Fig. 2h). The application of this strategy is particularly relevant for tackling complex disorders, such as neuropsychiatric conditions. The use of patient-derived cells provides a unique opportunity to model the genomic and epigenomic complexity of neuropsychiatric disorders in a physiologically relevant context. Recent data suggests that the genetic architecture of neuropsychiatric disorders consists of multiple common but weak, or rare but penetrant, genetic risk factors, some of which are inherited while others may be sporadic (or 'de novo') 3,10,43 . Moreover, each patient likely has a different combination of these risk factors. It is therefore plausible that drug targets are best represented at the pathway level, where the integrated effects of these diverse risk factors are likely to converge 44 . These distinct downstream abnormalities in pathway responses (functional endophenotypes), which are shared by subgroups of patients despite divergent genetic backgrounds, represent an opportunity to summarize genetic heterogeneity, in addition to environmental risk factors, at a time when functional interactions between risk variants are too complicated to model or even unknown. Examples of functional endophenotypes include altered calcium responses in T cells at PLC-γ1 linked to ATP2A2 polymorphisms in schizophrenia 41 or spontaneous calcium hyperexcitability in dentate gyrus-like neurons derived from iPSCs in bipolar disorder 45 . Moreover, the use of functional testing in live cells allows the elucidation of relevant disease-specific alterations in cellular networks (or pathways) which are not reflected by quantitative changes in mRNA or protein levels in their basal state, as demonstrated by glycolytic pathway alterations following antigenic stimulation in schizophrenia patient PBMCs 46 . This includes perturbations in homeostatic and regulatory mechanisms consistent with the concept of altered 'cellular coping' 47 .

High-content single-cell functional screening
High-content screening technologies, such as flow cytometry 48 , mass cytometry 34 , high-throughput microscopy 49 and single-cell RNA sequencing 50 , enable the depth of functional exploration necessary to identify endophenotypes in neuropsychiatric patient-derived cells 41 . The simultaneous detection of multiple readouts (e.g., signaling protein phosphorylation 34 or mRNA expression 50 ) allows readouts to be correlated across thousands of single-cell measurements in each sample.
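A minimal sketch of such a patient vs. control comparison follows, assuming each group is represented as a donors-by-nodes matrix of response values (simulated here). Each node is tested with a non-parametric test and the p-values are adjusted for multiple testing; this is an illustrative workflow, not the statistical pipeline of any cited study.

```python
# Minimal sketch of comparing node profiles between clinical groups,
# with one simulated disease-associated node among 100.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_nodes = 100
patients = rng.normal(0.0, 1.0, size=(30, n_nodes))
controls = rng.normal(0.0, 1.0, size=(30, n_nodes))
patients[:, 0] += 1.5  # simulated endophenotype at node 0

# Non-parametric test per node, then Benjamini-Hochberg FDR correction.
pvals = np.array([
    mannwhitneyu(patients[:, j], controls[:, j], alternative="two-sided").pvalue
    for j in range(n_nodes)
])
rejected, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("candidate endophenotype nodes:", np.where(rejected)[0])
```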
This can serve to generate hypotheses as to causative signaling relationships and alterations in network connectivity associated with disease at the target discovery stage, for example increased negative regulation within the Akt1 pathway in CD4+ T cells from autism spectrum condition and schizophrenia patients 51 . Moreover, changes in the phosphorylation activation status of key therapeutic targets can be normalized relative to total protein abundance or mRNA expression, a feature that has recently revealed novel mechanisms of action for the mood stabilizer lithium in iPSC-derived neurons from patients with bipolar disorder 52 . The ability to measure multiple markers at the single-cell level also affords the statistical power necessary to identify clinically relevant functional phenotypes in minority cell sub-populations within a heterogeneous patient-derived cell sample and to define functional overlap between cells from divergent lineages (e.g., PBMCs and neurons). In this respect, computational approaches (e.g., SPADE 53 , viSNE 54 , or CITRUS 55 ) which provide high-dimensional representations of deep lineage phenotyping combined with multiple functional measurements represent a valuable means for extracting disease-associated cellular phenotypes from high-content data without relying on prior knowledge. This has been applied to identify cellular phenotypes relevant to prognosis in other disease indications, including acute myeloid leukemia (AML) 56 . Such an approach has particular potential, although it is as yet unapplied, for neuropsychiatric disorders, as it is unclear which cell subtypes represent the best functional surrogates for different aspects of CNS pathology or drug discovery indications. An essential feature of the high-content functional screening approach is the ability to tailor the ligands and cellular readouts used for high-content exploration of patient samples to increase the likelihood of relevant drug target identification. Collectively, G-protein-coupled receptors (GPCRs), ion channels and protein kinases and phosphatases represent the targets for the vast majority of currently approved medications 57 , especially for neuropsychiatric disorders, consistent with their roles as key cellular functional executioners. Thus, targeting these proteins in the drug target discovery phase represents a heuristic means for screening the most 'druggable' part of the genome. Importantly, while many of these highly functional cellular proteins, for example GPCRs, are not easily detectable by traditional proteomic screens, an amplified signaling event downstream of these low-abundance proteins can be accurately measured using fluorescence flow cytometry or mass cytometry 34 . Furthermore, technologies such as cellular barcoding 58 , which permit multiplexing of the ligand treatments, can be employed to increase the number of functional conditions analyzed in a limited clinical sample, for example 64 concurrent ligand conditions applied to schizophrenia PBMCs 41 . Finally, at the drug discovery stage, candidate compounds can be screened to identify multi-target efficacies, a feature common to existing neuropsychiatric drugs, or potentially toxic off-target interactions directly in patient samples at early stages in the drug development pipeline.
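For orientation, a viSNE-style analysis essentially amounts to running t-SNE on transformed single-cell marker intensities so that cell subpopulations can be inspected in two dimensions. The sketch below does this with simulated data and scikit-learn; it is a generic illustration, not the published viSNE implementation or the authors' pipeline.

```python
# Minimal sketch of a viSNE-style visualization: t-SNE on
# arcsinh-transformed single-cell marker intensities (simulated).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(2)
# Hypothetical 1000 cells x 20 markers from two simulated cell subsets.
cells = np.vstack([
    rng.lognormal(mean=3.0, sigma=0.5, size=(500, 20)),
    rng.lognormal(mean=4.0, sigma=0.5, size=(500, 20)),
])
transformed = np.arcsinh(cells / 5.0)  # standard cytometry transform

embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(transformed)
print(embedding.shape)  # (1000, 2): one 2-D coordinate per cell
```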
The importance of characterizing neuropsychiatric drug interactions outside of conventional targets is poignantly illustrated by the association of TREK-2 potassium channel binding with antidepressant efficacy 59 or histamine H1 receptor affinity with the side effects of antipsychotic-induced weight gain and sedation 9 . High-content resolution of cellular responses can also be used to explore synergistic interactions between highly specific ligands acting at different sites in the cellular network, a strategy which has shown potential for overcoming treatment resistance related to genetic heterogeneity in other disease indications such as oncology 60 .

Drug target prioritization and lead compound validation
One of the major challenges, having identified relevant functional endophenotypes in neuropsychiatric patient samples, is the prioritization of pathway responses with potentially causal disease influences for subsequent drug screening. In this respect, a multi-tiered approach may be useful. First, given the possibility of multiple hits arising from high-content screening (described below), it is important to statistically adjust for false discoveries and extensively cross-validate the findings using techniques which take into account the structure of the data, such as non-parametric permutation procedures and nested cross-validation, as well as to consider primarily functional nodes with exceptional significance in drug-naïve patient vs. control comparisons. Second, target nodes for which activity is correlated with disease severity at baseline (before treatment), or with improvements in symptomatology over the course of efficacious treatment if longitudinal follow-up samples are available, are more likely to be related to active psychopathology. Third, if genotyping data is available for the same samples, nodes which correlate with polygenic risk scores summarizing known genetic risk, or with individual risk variants, might be suggestive of targets which are supported by parallel genetic evidence, at least in subgroups of patients, and could offer mechanistic insights underlying the endophenotype. Fourth, expression of the target node in brain tissue and/or recapitulation of the target response in brain cell lineages, although not essential, can serve to prioritize targets with CNS activity. This can be further supported by evidence of behavioral abnormalities in animal models in which the target node has been knocked out or knocked in, or developmental changes in transgenic model organisms such as zebrafish 61 . While correlation does not necessarily imply causation, these criteria can serve to prioritize nodes which are more likely to represent causative variants and, thus, potentially relevant therapeutic targets. As a final consideration, nodes of comparable significance across these criteria may be chosen based on their amenability to high-throughput drug screening. For example, this may include nodes with a higher signal-to-noise ratio (Z-prime test), expression in cell types which are more easily scaled up in a cost-effective manner, and more specific readouts (e.g., protein-epitope phosphorylation) relative to generalized responses (e.g., inflammatory cell proliferation). A recent study using this approach for drug target discovery in schizophrenia assessed 3696 cell signaling responses in PBMCs from individuals with schizophrenia and matched controls with a six-week longitudinal follow-up 41 .
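The Z-prime statistic mentioned above is a standard plate-based screening quality metric computed from positive- and negative-control wells, with values above roughly 0.5 conventionally taken to indicate an excellent assay. A minimal sketch of its calculation follows; all well values are simulated.

```python
# Minimal sketch of the Z-prime (Z') assay-quality statistic:
# Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
import numpy as np

rng = np.random.default_rng(3)
positive = rng.normal(100.0, 5.0, size=32)  # e.g., maximal-response wells
negative = rng.normal(20.0, 4.0, size=32)   # e.g., vehicle wells

z_prime = 1.0 - 3.0 * (positive.std(ddof=1) + negative.std(ddof=1)) / abs(
    positive.mean() - negative.mean()
)
print(f"Z' = {z_prime:.2f}")
```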
This study prioritized an abnormal response to thapsigargin at PLC-γ1 as the most relevant drug target, based on its being the most significant node in the drug-naïve patient vs. control comparison, its normalization over the course of efficacious clinical antipsychotic therapy, its correlation with schizophrenia risk allele loading at the sarcoplasmic/endoplasmic reticulum calcium ATPase 2 (ATP2A2) risk locus 10,62 , concurrent activity in neuronal SH-SY5Y cells and parallel evidence of schizophrenia-like behavioral changes in animal models following forebrain-specific ablation of PLC-γ1 63 . Having prioritized the relevant drug targets from patient-derived cellular models, phenotypic drug screening can be used to identify compounds which normalize these pathway responses and could serve as potential novel drug candidates. This provides a means to identify novel drug candidates even before the full spectrum and functional interactions of putative risk alleles and environmental stressors are defined. For example, one study of Timothy syndrome 64 , a disorder caused by a missense mutation in L-type CaV1.2 calcium channels and associated with developmental delay and autism spectrum condition, showed abnormalities in action potential firing and calcium signaling using patch clamp recording and calcium imaging in iPSC-derived neurons from patients relative to controls. This was further characterized to show differences in calcium-dependent gene expression following depolarization, including tyrosine hydroxylase, with concurrent increases in dopamine and noradrenaline secretion. The authors then screened different L-type calcium channel blockers to show that the tyrosine hydroxylase endophenotype could be improved using roscovitine, a cyclin-dependent kinase inhibitor and atypical L-type channel blocker. Interestingly, in the aforementioned study relating to functional endophenotypes in schizophrenia PBMCs, screening of an FDA-approved compound library (n = 786) identified different subsets of L-type calcium channel blockers (e.g., nicardipine, nisoldipine and nimodipine) capable of reversing calcium signaling deficits in response to thapsigargin at PLC-γ1 41 . This highlights this compound class as potentially worthy of follow-up across different neuropsychiatric indications, a feature supported by the genetic association of L-type calcium channel subunits (e.g., CACNA1C and CACNB2) across several major neuropsychiatric disorders 65 . While this strategy represents a means to rapidly generate early-stage candidates, several subsequent steps are relevant when translating these findings towards potential clinical trials. First, novel drug candidates can be directly compared within the same cellular model to established treatments, or to each other, to identify lead compounds which show putative enhanced target specificity, cellular potency or brain penetrance. For example, this has been demonstrated for subtypes of 1,4-dihydropyridines within the L-type calcium channel blocker class in phenotypic screening of functional cellular endophenotypes in schizophrenia 41 . Second, functional endophenotype strategies to date have been modest in terms of sample numbers (discussed below), and validation in larger patient cohorts is necessary to determine whether the target response and drug candidates are reproducible and whether there might be heterogeneity in terms of drug response in the target population.
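As one generic way to rank hits in such a phenotypic screen, the sketch below scores each compound by the fraction of the patient-control response gap it closes. The compound names, response values and scoring scheme are hypothetical illustrations, not data or methods from the cited screens.

```python
# Minimal sketch of ranking screened compounds by how far they shift a
# deficient patient node response back toward the healthy-control mean.
import numpy as np

rng = np.random.default_rng(4)
control_mean = 1.0       # mean node response in healthy-control cells
patient_vehicle = 0.2    # deficient patient response without drug
compound_responses = {f"compound_{i}": rng.uniform(0.1, 1.2) for i in range(5)}

def normalization_score(treated: float) -> float:
    """Fraction of the patient-control gap closed (1.0 = full rescue)."""
    return (treated - patient_vehicle) / (control_mean - patient_vehicle)

ranked = sorted(compound_responses.items(),
                key=lambda kv: normalization_score(kv[1]), reverse=True)
for name, resp in ranked:
    print(name, f"{normalization_score(resp):.2f}")
```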
Third, given the overlap in genetic risk factors between different neuropsychiatric disorders, it is important to determine target specificity by comparing target activity across different neuropsychiatric disorders. Previous studies have shown that subsets of abnormalities in cell signaling responses can be shared between different neuropsychiatric disorders while others are unique 51 . Furthermore, this heterogeneity manifests at the individual level, whereby individuals with different diagnoses can have partially overlapping signaling profiles. Given the changing diagnostic landscape of neuropsychiatric disorders, it is plausible that targets related to symptom subtypes which extend across diagnostic boundaries could find utility in multiple indications. For example, one study reported that alterations in phosphorylation responses at the proinflammatory proteins NF-κB p65 (pS529) and Stat3 (pS727) were shared between conditions with negative symptomatology (schizophrenia and major depression), while aberrant responses to the phosphatase inhibitor calyculin A at S6 (pS235/pS236) were shared between conditions with potential psychotic symptomatology (schizophrenia and bipolar disorder) 51 . Conversely, disorders which do not share the same targets can represent relevant exclusion criteria for future clinical trials. Fourth, novel compounds still need to undergo preclinical trials to determine efficacy, toxicity and pharmacokinetics. Despite the limitations of current preclinical models in terms of equating behavioral changes to complex psychiatric symptoms, and the reliance on existing treatments as gold standards, functional endophenotypes at least offer the alternative of genetically engineering the target response instead of using acute pharmacological interventions to precipitate symptom-like behaviors. An alternative to the reliance on animal models is the screening of approved medications (drug repurposing), whereby the well-documented toxicology, pharmacokinetic, dosing and medicinal chemistry profiles of these compounds could serve to expedite their clinical application to neuropsychiatric indications at a lower cost relative to new chemical entities 66,67 . Finally, in terms of clinical trial design, the same functional endophenotypes used for drug discovery have the potential to serve as ex vivo treatment response predictors, which could stratify patients during clinical drug development to overcome the heterogeneous results of previous clinical trials. Examples include ex vivo calcium responses at PLC-γ1 in T cells 41 , glucocorticoid sensitivity in whole blood 68 , or CRMP2 phosphorylation in iPSC-derived neurons 52 , correlated with in vivo clinical efficacy in schizophrenia, major depression and bipolar disorder, respectively. In this regard, an increase in the proportion of clinical trials which focus on drug-naïve or recent-onset patients relative to chronic treatment-resistant patients would help to improve the development of effective early intervention strategies. Moreover, where the functional target is sensitive to clinically approved drugs ex vivo, response prediction can be used to validate the target and support the potential in vivo efficacy of novel drugs 41 .

Limitations and perspective
The functional cellular endophenotype strategy in patient-derived cellular models represents a reverse engineering approach.
Traditional target-based, or 'rational', drug discovery aims to quantify pathologically linked gene products and propose a mechanistic drug target using in silico pathway analysis, followed by screening for new drugs in a purpose-built reporter system (e.g., a transfected cell line) and inferring clinical relevance. In contrast, the functional endophenotype strategy proposed here aims to identify compounds with differential activity directly in physiologically relevant patient-derived cells, relative to healthy individuals, and subsequently dissect their mechanisms of action and underlying genetic targets. Despite the progress made, there are several limitations and key features worth considering to optimize its future utility. First, obtaining large sample numbers of clinically well-characterized neuropsychiatric patients and sufficient volumes of viable patient-derived cells is a major challenge, logistically and in terms of cost. Functional endophenotype studies to date have used relatively few samples, generally fewer than ten for iPSC-based studies 45,69,70 and up to several dozen using PBMCs 41,51,68 , suggesting that they are likely underpowered relative to the complexity of neuropsychiatric phenotypes. The power requirements for target definition using this approach therefore remain to be accurately determined. However, the fact that relatively small endophenotype studies in schizophrenia PBMCs (n = 12 patients for discovery, n = 30 patients for validation) 41 have identified similar lead compounds (L-type calcium channel blockers) to those suggested by much larger GWAS studies (n = 36,989 patients) 10,66 raises the possibility that they might have lower power requirements as a result of summarizing genetic risk at the pathway level, a feature echoed by studies using patient iPSCs and cerebral organoids 28,69 . Nevertheless, the increased cost of functional studies on live cells and the possibility of expectancy bias mean that it is important to cross-reference cellular responses with large-scale genetic and proteomic studies, such that emerging functional targets might be interpreted in light of better-powered existing studies as the field develops. The effect of cost in limiting sample size is particularly relevant for iPSC-based and organoid studies, where extended culture protocols are needed to reprogram and differentiate cells towards neuronal lineages. In these studies the trade-off between increasing the total number of donors and increasing the number of independent iPSC clones per donor is critical to determining statistical power 69 . Although independent iPSC clones from the same donor are vital to quantifying intra-patient variability (derived from the transformation and differentiation processes), it has been suggested that the use of single iPSC lines for each donor, while maximizing the number of donors, may be the most efficient strategy to maximize statistical power in light of false discovery constraints 69 . Moreover, it is recognized that decreasing inter-patient heterogeneity by focusing on more genetically homogeneous patient and control groups might further improve statistical power. This can take the form of selecting patients with highly penetrant rare genetic variants with a large effect size, patients with high polygenic risk scores based on common variants, or gene editing (e.g., CRISPR-Cas9) to introduce specific risk alleles in isogenic iPSC lines 69,71 .
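To illustrate the non-parametric permutation procedure referred to above, the following sketch estimates a p-value for one node by repeatedly shuffling group labels. Sample sizes mirror the small discovery cohorts discussed, and all data are simulated; a real analysis would also respect batch structure when permuting.

```python
# Minimal sketch of a label-permutation test for one node's
# patient-vs-control difference in mean response.
import numpy as np

rng = np.random.default_rng(5)
patients = rng.normal(0.8, 1.0, size=12)
controls = rng.normal(0.0, 1.0, size=12)

observed = patients.mean() - controls.mean()
pooled = np.concatenate([patients, controls])

n_perm = 10_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)  # random relabeling of donors
    diff = pooled[:12].mean() - pooled[12:].mean()
    if abs(diff) >= abs(observed):
        count += 1
p_value = (count + 1) / (n_perm + 1)  # add-one correction avoids p = 0
print(f"permutation p = {p_value:.4f}")
```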
A final consideration in terms of cost is that, while cellular assays may initially be more expensive than genotyping or steady-state protein profiling, the resulting functional endophenotypes are more directly amenable to drug screening. In contrast, the interpretation and engineering of genomic or proteomic targets into cellular systems can represent a considerable additional cost beyond the initial target identification. While sample numbers remain low for patient-derived cellular models in neuropsychiatry, initiatives like the NextGen Genetic Association Studies Consortium, which integrated data from over 2000 iPSC lines with GWAS and QTL data to identify functional cellular phenotypes for cardiovascular disease 72 , suggest that the same upscaling may be possible in the field of mental health. In this respect, composite workflows starting with more accessible cell types (e.g., PBMCs) followed by more resource-intensive cellular systems (e.g., brain organoids), or vice versa, may prove efficient and cost effective. This will likely be complemented by recent efforts to scale up iPSC-derived cell types for high-throughput compound screening 22 , inclusion of a greater number of iPSCs from complex idiopathic vs. monogenic disorders and direct comparisons of target overlap between different cellular models from the same individuals. Greater numbers of valuable drug-naïve samples might also be facilitated by including high-risk individuals (e.g., with a family history of neuropsychiatric disease) or patient groups where the disease often remains undiagnosed (e.g., major depressive disorder in the context of chronic stress). In line with increasing the power of functional endophenotype strategies, it will also be crucial to leverage available data to control for false discoveries and expectancy bias using statistical methods, such as non-parametric permutation procedures and nested cross-validation, which take into account the data structure. Second, comparing cohorts with high and low polygenic risk profile scores, or with and without rare penetrant risk variants, across key environmental risk factors is an essential step in understanding disease heterogeneity and targeting treatments to specific disease aetiologies. Third, as the diagnostic framework of neuropsychiatric disorders evolves beyond DSM-5 and ICD-10, it will be important to incorporate cellular responses, in addition to other biomarker strategies, to help predict response to clinical treatment on an individual basis and define diagnostic categories which align more closely with therapeutic indications. Fourth, while cellular responses to existing neuropsychiatric treatments can be helpful to validate functional endophenotypes, establish relevant drug discovery workflows and provide clinical correlates for predicted efficacy, the field must eventually depart from the reliance on existing medications in order to identify mechanistically novel drugs which target resistant symptom spectra and avoid the 'catch-22 scenario' which has limited the scope of animal models to date. Finally, disease mechanisms underlying functional cellular endophenotypes require further dissection.
Complementary screening technologies such as siRNA, CRISPR-Cas9 genome editing, or protein-specific inhibitors provide opportunities to systematically knock out or knock in the function of network proteins to gauge their influence on the target response, as demonstrated in DISC1 iPSC-neuronal models of schizophrenia 73 or GSK-3β animal models of bipolar disorder 74 . Fluorescence-activated cell sorting of cells from the same patient and cell subtype which differentially exhibit the putative pathological response can also enable characterization of genomic or proteomic readouts whilst controlling for molecular variation between sample donors and cell lineages. Lastly, the combination of technologies such as single-cell RNA sequencing with multiplexed ion beam imaging 75 in patient-derived brain organoids could provide spatial resolution for understanding the functional interactions between cells which drive neuropsychiatric disease in a physiologically relevant context.

Conclusion
In conclusion, the presented approach is not the sole solution for addressing the paucity of novel therapeutic options for neuropsychiatric disorders. Its wider applicability, including the pharmacokinetic, brain penetrance and safety profiles of the candidate compounds, remains to be determined, in addition to better understanding which neuropsychiatric conditions are likely to be best served by this approach. However, in a field where primary disease tissue is scarcely accessible and genetic complexity is daunting, relative to the magnitude of the public health burden, this approach could offer a complementary strategy to expedite the identification of relevant drug candidates and personalized treatment response predictors.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Glossary
Cellular coping: The ability of a cell to regulate the effects of a cellular insult or stressor using homeostatic mechanisms.
Drug repurposing: The identification of novel therapeutic indications for drugs which are already approved by regulatory agencies for the treatment of other diseases/disorders.
Functional cellular endophenotype: An abnormal cellular response to a functional ligand in a specific cell subtype, which is shared by subgroups of patients relative to controls and serves to summarize the effect of complex genetic or environmental risk.
Gene-environment interaction: A different effect of a genotype on disease risk in persons with different environmental exposures.
Genome-wide association study (GWAS): A study which examines the association between a set of genetic variants distributed across the genome (usually single-nucleotide polymorphisms) and the manifestation of different behavioral or biological traits across individuals in a population.
High-content screening: A method used in biological research and drug discovery to identify substances such as small molecules, peptides or RNAi that alter the phenotype of a cell across multiple parameters.
Induced pluripotent stem cells (iPSCs): A type of pluripotent stem cell that can be generated by reprogramming of adult cells.
Peripheral blood mononuclear cell (PBMC): A type of circulating blood cell with a round nucleus, including lymphocytes (T cells, B cells, NK cells) and monocytes.
ANALYSIS OF THE TRUTH IN ADVERTISING ON THE EFFICACY PROVIDED BY ASSISTED REPRODUCTION CLINICS

This paper analyses the efficacy data from assisted reproduction clinics, obtained both from scientific society reports and from studies published in specialised journals, in order to compare them with the information published by Spanish assisted reproduction clinics on their websites. It aims to verify whether this information matches the reality of the findings in the sources analysed or, in contrast, differs from the scientific evidence. Our study shows marked discrepancies between the existing statistical data and the figures published by most of the clinics on their websites, which could constitute false advertising.

Introduction
A major tool used by assisted human reproduction clinics (AHRCs) to attract clients is to show, through their websites, their efficacy in helping women who attend them to have a much-desired child. One ethical problem that may arise, however, is whether this advertising is based on proven scientific data; if it is not, it might be thought that the clinics were using "false advertising" to achieve their ends. The aim of this study was to evaluate this. To this end, the pregnancy and live birth rates provided by various scientific associations were reviewed and compared with those presented by the aforementioned clinics on their websites and brochures, in order to check whether or not there are discrepancies between the two data sources. We also studied the cumulative pregnancy and live birth rates, with particular emphasis on the latter, since the cumulative live birth rate is what should, in our opinion, be most consistent with the percentages presented by clinics regarding the possibility of having a child.

Materials and methods
In this study, we looked at the pregnancy and live birth rates in three geographical areas using various sources: in Spain, for which we used European Society of Human Reproduction and Embryology (ESHRE) and Spanish Fertility Society (SEF) annual reports; in Europe, for which we used ESHRE annual reports from 1997 to 2010; and globally, for which we used articles from scientific journals such as Human Reproduction, Reproductive BioMedicine, Fertility and Sterility, The New England Journal of Medicine, The Lancet and the British Journal of Obstetrics and Gynaecology. The information presented by Spanish AHRCs to their clients was obtained from a review of their websites between 19th and 29th May, 2015.

One interesting aspect of our study was to compare the trend in the European IVF indices, the pregnancy rate (PR) and live birth rate (LBR), over the 14 years evaluated. With the exception of the 1997 LBR (which was 13.07 %), the indices varied very little over the years: the variation did not exceed 6.92 percentage points for the PR and 5.8 points for the LBR (if the 1997 rate is excluded). It should also be highlighted that, if only data from the last five years are considered, the PR varied by 0.7 percentage points and the LBR by 1.8 points. These figures show the scant improvement achieved in recent years in both indices, despite advances in technical procedures over the same period.

Intracytoplasmic sperm injection
Making a similar assessment to the one carried out for IVF, it was found that when intracytoplasmic sperm injection (ICSI) was used (Table 1), the PR varied between 23.37 % and 29.9 %, with a mean rate of 27.22 %, while the LBR varied between 12.68 % and 21.10 %, with a mean rate of 18.31 %.
As with IVF, the PR and LBR obtained between 1998 and 2010 when ICSI was used had maximum variations of 5.1 and 4.93 percentage points, respectively; even if only data from the period 2006-2010 are considered, the PR decreased by 1.1 % and the LBR increased by 2.7 %. In other words, as with IVF, both rates (PR and LBR) scarcely varied over the years evaluated, despite possible technical improvements in both IVF and ICSI.

Pregnancy and live birth rates in Spain
Both rates were obtained using ESHRE (14) and SEF (15) data.

In-vitro fertilisation
IVF data for Spain provided by the ESHRE for the period 1997 to 2010 are shown in Table 2. One fact to highlight is that in Spain the LBRs are markedly lower than the PRs. We have no reasonable explanation for this, although it has no bearing on our paper, since it is not the specific aim of this study.

Intracytoplasmic sperm injection
When the same data were analysed for ICSI, the PR corresponding to those two years was 31.7 % and 31.0 %, while the LBR was 18.8 % and 18.1 %, respectively.

Cumulative pregnancy and live birth rates
The efficacy of assisted human reproduction techniques using IVF and ICSI has until now generally been evaluated in terms of the pregnancy and live birth rate per stimulation cycle. However, these rates do not appear to be the most appropriate, because when a couple attend an assisted procreation clinic, they basically want to know the likelihood of having a child after one or several stimulation cycles, which technically corresponds to the cumulative live birth rate (CLBR). The cumulative pregnancy rate (CPR) can also be used, i.e. the likelihood, expressed as a percentage, that a woman will become pregnant after several stimulation cycles. However, since obviously not all pregnancies go to full term ending with the birth of a child, we consider the CLBR to be the most appropriate index for evaluating the efficacy of these methods. To that end, we evaluated the CLBR obtained in various studies and under different circumstances, such as the woman's age, whether fresh or frozen oocytes were used, whether these were autologous or donor oocytes, the cause of the infertility, and other circumstances.

As shown in Table 3, several studies providing objective data on the CLBR have been conducted since the beginning of the 1990s. Several of these found that the woman's age decisively affects the rates when autologous oocytes are used; other studies not included in this table confirm this (16,17). We believe that this should be taken into consideration, since increasingly older women are now attending AHRCs to have a child. Accordingly, the CPR and CLBR obtained in women over 38 years could be more relevant to the standard practice of these techniques. In 2010, Gelbaya performed a systematic review and meta-analysis that evaluated the CLBR in relation to whether one or two embryos had been transferred (31), finding that, in a study by Thurin et al. (32), the CPR after three stimulation cycles was 38.8 % when one embryo was transferred and 42.9 % when two embryos were transferred.

Outcomes provided by public AHRCs
Of the 46 Spanish public clinics that offer assisted human reproduction techniques, we were able to obtain success rates for only three. The most reliable statistics were presented by Hospital Universitario Virgen de las Nieves, Granada, with a PR following IVF of 32.8 % in 2014 (37). The mean PR in the three clinics was 35.3 %.
Remarkably, although all of these public clinics have participated in the SEF data register in recent years, none has publicly reported its success rates (30).

Outcomes provided by private AHRCs
The data obtained from private AHRCs are very different. Of the 123 private clinics that we analysed, 48.78 % published some information on their PRs, but interestingly, none published information on the live birth rate per cycle or the CLBR, which is very surprising. All the information is shown in Table 5, where it can be seen that, as regards the private clinics, the CPR with autologous oocytes following one stimulation cycle varies between 28 % and 72.2 %, with a mean rate of 47.2 %. These rates for women under 35 years range from 39.0 % to 82.4 %, with a mean rate of 59.0 %; for those aged 35 to 39 years, they range from 27.0 % to 77.8 %, with a mean rate of 47.4 %; and for those older than 40 years, the rates range from 12.0 % to 48.6 %, with a mean rate of 30.7 %. When donor oocytes are used, the rates, as can be seen in Table 5, are better, with a mean of 65.0 %, as would be expected when the woman's age does not affect the outcomes. What is surprising, though, is the high CPR, which ranges from 75 % to 98 %, with a mean of 85.3 %.

Pregnancy and live birth rate per cycle of IVF or ICSI
If we consider European data (Table 1), the mean PR obtained using IVF after one stimulation cycle is 26.41 %, with a LBR of 18.81 %. The latter is the more significant figure, since in reality it is the one that translates into the likelihood that a European woman has of achieving a live birth after a single stimulation cycle. When the number of live births using ICSI was evaluated (also after one stimulation cycle; Table 1), the mean PR was 27.22 % and the LBR was 18.31 %. When we looked at ESHRE data for IVF in Spain for the 14 years between 1997 and 2010 (Table 2), the PR after a single stimulation cycle was found to range between 23.2 % and 35.0 %, with a mean of 30.55 %, while the LBR, also after a single stimulation cycle, ranged between 10.4 % and 30.6 %, with a mean rate of 18.65 %. As already mentioned, though, the LBR per stimulation cycle is not the same as the outcomes provided by the clinics for the likelihood that a woman will have a child (since this may be achieved after three or more stimulation cycles), but it is a figure that brings us closer to that reality. Nevertheless, if the likelihood of having a child following one stimulation cycle were, for example, 35 %, one might think that after three stimulation cycles this likelihood could reach or exceed 75 %. This is not borne out in reality, because as the number of stimulation cycles increases, the number of live births achieved in the second and third (or later) cycles increases very little with respect to that achieved in the first cycle (24); the sketch below illustrates why even an optimistic compounding assumption falls short of such figures. We therefore consider that the index providing information closest to reality is the LBR after three stimulation cycles; moreover, this is the practice generally followed in most clinics. In this respect, according to SEF data, the CLBR in Spanish AHRCs after three stimulation cycles rarely exceeds 50 % (15).
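A back-of-the-envelope calculation makes this point concrete. The sketch below is illustrative only: the 35 % per-cycle figure is the hypothetical value used in the text, and the independence assumption is deliberately optimistic.

```python
# Illustrative sketch: cumulative probability of at least one live birth after n
# cycles, assuming (optimistically) independent cycles with a constant success rate p.
def cumulative_success(p: float, n_cycles: int) -> float:
    """P(at least one live birth in n independent cycles with per-cycle rate p)."""
    return 1.0 - (1.0 - p) ** n_cycles

p = 0.35  # hypothetical per-cycle live birth rate used in the text
for n in (1, 2, 3):
    print(f"{n} cycle(s): {cumulative_success(p, n):.1%}")
# Output: 35.0%, 57.8%, 72.5% -- already below the naive "75% or more" expectation;
# the reported Spanish CLBR after three cycles (rarely above 50%) is lower still,
# because per-cycle success declines among couples who need further cycles.
```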
All these data are those presented explicitly by the AHRCs on their websites. In our opinion, however, the most startling claims are those comments or statements that could be labelled as more commercial. Here we refer to statements from clinics that guarantee that the couple will have a child (Table 5, column 4), i.e. that they are 100 % effective, and other comments stating that they can provide solutions to any infertility issue (Table 5, column 5).

Discussion
To compare the success rates that private AHRCs in Spain claim on their websites and brochures with the data that these same clinics provide to the different scientific entities and professional associations, we carried out three types of analysis. Firstly, we evaluated the PR and LBR per stimulation cycle following IVF or ICSI for Spain and Europe, from data provided by scientific associations such as the SEF and ESHRE. Secondly, we obtained the cumulative data for these same rates in Spain, Europe and the rest of the world. Thirdly, we reviewed the outcomes that Spanish AHRCs profess to their clients on their websites and brochures, with special emphasis on their success rates, i.e. the likelihood that women attending them will have a child, since this is the main means of attracting clients (offering them a high probability of having a child). In our opinion, this success rate is technically equivalent to the CLBR; accordingly, it is this rate on which most emphasis is placed in this study.

When the data obtained using ICSI were evaluated, the outcomes were very similar to those achieved with IVF (Table 2). When donor oocytes are used, the rates were higher, varying between 44.4 % and 85.7 %, with a mean rate of 65.0 %. The mean CPR will be higher than the mean LBR, as usually occurs and as can be seen in the ESHRE data shown in Tables 1 and 2. It is particularly striking that the CLBR is not publicised by any of the private clinics reviewed.

It should also be highlighted that the overall PR reported by these private clinics on their websites for one stimulation cycle ranges between 28.0 % and 72.2 %, with a mean value of 47.2 % when autologous oocytes are used and 65.0 % when donor oocytes are used (Table 5). When this rate, for the same clinics, was evaluated from SEF data, the PR was 30.55 % for IVF and 32.59 % for ICSI (Table 2), meaning that the mean efficacy outcomes reported on the websites of the Spanish AHRCs analysed are 49.5 % higher than those of the SEF and ESHRE for one stimulation cycle with autologous oocytes, and up to 108.9 % higher when donor oocytes are used. This finding certainly supports our thesis that many Spanish AHRCs present statistics on their websites that are very far removed from those provided by official sources.

Another notable aspect is that 16 of these clinics expressly state on their websites and brochures that they guarantee that women attending them will become pregnant, which would mean a 100 % success rate. Some of these clinics also state that they can resolve all their clients' fertility issues, and at the same time that they have been audited by companies or institutions of proven good standing. Finally, many of them also state that they use the most advanced technology.

Conclusion
Many Spanish AHRCs present data on their websites that are not consistent with those obtained from official ESHRE or SEF reports. This is first of all because they do not report data on live births, which is the rate that best matches the real likelihood that assisted reproduction treatments will eventually lead to the goal of parenthood; this, of course, is what clients seek in these clinics.
When data for Spain were analysed using the information provided by the SEF, the outcomes were akin to those in the ESHRE reports, and very similar comments can be made in both cases. Most interestingly with respect to Spain, we found that in the last years analysed (2011 and 2012) neither rate varied much (in fact they even fell), meaning that the efficacy of the techniques has not improved recently (15). Outcomes using ICSI were very similar to those for IVF, so the same comments are equally applicable.

Cumulative live birth rates
The CLBRs (Tables 3 and 4), together with the evaluation of the data provided on the AHRC websites (Table 5), are of particular interest, since these rates are the ones that will undoubtedly translate into the real likelihood that a woman attending an AHRC will have a child; moreover, the latter are the figures that clinic websites show their potential clients. Although there is a large variation in the CLBR of the different countries for which data are presented in the study by Ishihara et al. (30), ranging from 18.3 % for Italy to 41.8 % for the United States, the figures in Table 3 (which provides data from 13 different studies) show an overall mean LBR of 56.3 % if outcomes from the four studies that provide this information are included. This value is much higher than the percentage obtained for Spain: 22.9 % in one of the studies (30), and 18.65 % (IVF) and 19.40 % (ICSI) according to the ESHRE, as the mean value for the period 1997-2010 (Table 2).

Data provided by Spanish AHRCs on their websites
A total of 169 websites were evaluated out of a total of 278 AHRCs (232 private and 46 public clinics). Table 5 includes the PR per stimulation cycle when autologous oocytes were used, which ranges between 28 % and 72.2 %, with an overall mean rate of 47.2 %. Statements guaranteeing that 90 % of women will reach their objective of having a child are particularly noteworthy. This, in our opinion, can constitute false advertising, something that merits a very negative ethical rating.

With respect to the PR published by Spanish AHRCs on their websites, the data differ openly from those reported by the ESHRE and SEF, professing mean efficacy outcomes 49.5 % higher than official reports for a stimulation cycle with autologous oocytes, and up to 108.9 % higher when donor oocytes are used.

Table 1: Pregnancy and live birth rates with IVF and ICSI in Europe from 1997 to 2010 after one stimulation cycle (ESHRE annual reports).
Table 2: Pregnancy and live birth rates with IVF in Spain from 1997 to 2010 after one stimulation cycle (ESHRE annual reports). Note: the LBR varied between 17.4 % and 30.6 %, with a mean rate of 18.65 %. As can be seen, the mean PR in Spain (30.55 %) is notably higher than the European mean (27.22 %), which certainly indicates the technical quality of Spanish AHRCs.
Table 3: CLBR according to various authors; across the period covered by these studies, the efficacy of the technique did not improve. In 2013, Stern et al. studied the CLBR in relation to the cause of the infertility that led the patient to consult the AHRC, with special reference to diminished ovarian reserve. They found that in women with diminished reserve the CLBR was only 28.3 %, whereas other causes of infertility barely affected this rate, which in that group of women was 62.1 % (28).
The most important point in the study by Ishihara et al. is that the overall CLBR was 28.5 %; this is the percentage that expresses, across all the clinics evaluated, the likelihood that a woman can have the desired child. When evaluating AHRCs in Spain, it is interesting to differentiate between public and private clinics. In order to elicit the outcomes provided by AHRCs on their websites, 169 web pages were analysed; these represent 278 clinics, 27.22 % of which were public and 72.78 % private.

Table 4: CLBR in several developed countries (2007).
Table 5: Information presented on the websites of Spanish private AHRCs. Note: a total of 169 websites representing 278 AHRCs were reviewed, 232 of which were private and 46 public. Only those clinics that provide information on any of the indices shown in the table are listed. Data consulted between 19th and 29th May, 2015.
Pharmacognostical and phytochemical studies on leaves of Tagetes erecta Linn

Background: Tagetes erecta Linn. (Asteraceae) is a plant well known for its antihypertensive, antioxidant, antidiabetic, aphrodisiac, and hepatoprotective properties. The aim was to investigate the pharmacognostic, physicochemical, and phytochemical characteristics of this plant's leaves. Materials and Methods: The macroscopic characteristics of the leaves, such as size, colour, surface characteristics, texture, fracture characteristics, and odour, were studied pharmacognostically. The cellular characteristics of the drug were studied under the microscope on both the intact leaves and the powdered drug. According to WHO guidelines, extractive values, loss on drying (LOD), total ash, water-soluble and acid-insoluble ash, and moisture content of Tagetes erecta leaf powder were determined. Preliminary phytochemical screening and qualitative chemical examination were conducted for the various phytoconstituents. Results: TLC analysis revealed the presence of alkaloids, glycosides, flavonoids, steroids, saponins, and tannins. Microscopic examination revealed the presence of xylem vessels, vascular bundles, and phloem fibres. Conclusion: Pharmacognostical and preliminary phytochemical screening of Tagetes erecta leaves will be beneficial for authenticating and standardising the raw material and avoiding adulteration. The diagnostic microscopic characteristics and physicochemical data will aid in the creation of a monograph, and the chromatographic fingerprinting profile can be used to standardise Tagetes erecta leaf extracts and formulations.

INTRODUCTION
Natural plant products have been used for a variety of purposes throughout human history, and many of these natural compounds exhibit biological activity that could be useful in drug development. To treat different disorders, including cancer, the Indian school of medicine known as "Ayurveda" uses mostly plant-based medicines or formulations. Herbal medicines have considerable room for expansion in the worldwide market; they have been the subject of research in natural product chemistry, pharmacognosy, pharmaceutics, pharmacology, and clinical therapeutics, and most major pharmaceutical companies have updated their strategies to favour natural products. Many herbal remedies have been recommended for the treatment of various ailments, either separately or in combination, in various medical treatises. [1] Medicinal plants and derived medicines are used by traditional civilisations all over the world, and they are becoming increasingly popular in modern society as natural alternatives to synthetic chemicals. [2]

The plant Tagetes erecta, also known as marigold, is a member of the Asteraceae family (Compositae). It is a sturdy, branching herb native to Mexico and other warmer parts of America, and naturalised in the tropics and subtropics, such as Bangladesh and India. It is a popular garden plant that produces a highly scented essential oil (Tagetes oil), which is mostly utilised in the formulation of high-end perfumes. Various parts of the plant have been employed as nutraceuticals, food supplements, traditional remedies, and significant constituents of contemporary medicines and Ayurveda since the Rig-Veda. The therapeutic properties of bioactive constituents in plants, such as alkaloids, tannins, flavonoids, and phenolic compounds, have a clear physiological effect on the human body. [3]
Although studies on the phytochemical screening of Tagetes erecta have been published, only some of the plant's secondary metabolites have been characterised. Thus, pharmacognostic research and proximate analysis of Tagetes erecta leaves were attempted in the current work, together with extraction of the leaves into various extracts followed by chemical tests and TLC analyses as an initial phytochemical screening. [4]

Collection and authentication of plants
The leaves of Tagetes erecta Linn. were collected from Amravati (Maharashtra). The plant was identified and authenticated by Dr. Indrapratap S. Thakare, Department of Agriculture Botany, P. R. Pote Patil College of Agriculture, Amravati, and the leaves were dried in the shade at room temperature. The dried leaves were powdered in a grinder, and the powdered material was kept in an airtight container for further study.

Macroscopic evaluation
According to WHO guidelines, the size, colour, surface characteristics, texture, fracture characteristics, and odour of the leaves were investigated.

Microscopic evaluation
The cellular characteristics of the drug were studied under the microscope on both the intact leaves and the powdered drug.

Study of transverse section
The leaves were placed in a test tube, and 5 % potassium hydroxide in methanol was added to keep the sample submerged; the samples were boiled for a few minutes. Transverse sections of the drug were transferred with a brush to a watch glass filled with water, then to a watch glass containing a 1:1 solution of phloroglucinol-hydrochloric acid, and stained for 2-3 minutes. The sections were returned to water-filled watch glasses to wash away the excess stain, placed on clean glass micro-slides with a brush, covered with a few drops of water and a clean cover-slip, and examined under the microscope. Transverse sections were examined with and without staining; phloroglucinol-HCl and Sudan red were used to detect lignified elements, starch, mucilage, fats, and fixed oils in the sections. [5]

Physicochemical evaluation
Determination of loss on drying: The loss on drying is the weight loss in percent w/w caused by the loss of water and any volatile matter that can be driven off under specified conditions. Procedure: 2 g of air-dried drug, reduced to powder, was placed in a previously cleaned, dried and weighed silica crucible and spread in a thin, even layer. The crucible was then placed in an oven at 105 °C, the powder dried for 4 hours and cooled to room temperature in a desiccator, and the weight of the cooled crucible with powder recorded. With weight of empty crucible = x g, weight of dried leaf powder = y g, weight of crucible + leaf powder = (x + y) g, and weight of crucible + leaf powder after drying at 105 °C = z g, the loss in weight due to removal of moisture is L = (x + y) − z, and % LOD = (L / y) × 100, i.e. loss in weight relative to the initial powder weight. [6]

Determination of total ash value: 2 g of the leaf powder was weighed into a previously ignited and tared crucible and incinerated in a muffle furnace at a temperature not exceeding 450 °C. The crucible was cooled in a desiccator, and the procedure repeated until the ash was white and of constant weight. The percentage of ash was calculated with reference to the air-dried drug: % total ash = (y / Y) × 100, where Y = weight of powder taken (g) and y = weight of ash (g). [7]
Determination of acid-insoluble ash: The total ash obtained in the previous step was boiled for 5 minutes with 25 mL of dilute HCl in a 100 mL beaker. The insoluble matter was collected, washed with hot water, and ignited to constant weight. % acid-insoluble ash = (y / Y) × 100, where Y = weight of powder taken (g) and y = weight of acid-insoluble ash (g). [7]

Determination of water-soluble ash: The total ash was boiled for 5 minutes with 25 mL of water in a 100 mL beaker. The insoluble matter was collected, washed with hot water, and ignited to constant weight. The percentage of water-soluble ash was calculated with reference to the air-dried drug. [7]

Determination of moisture content: A 2 g sample was placed in a tared Petri dish and dried at 105 °C until the weight remained constant. % moisture content = ((Y − y) / Y) × 100, where Y = weight of powder taken (g) and y = weight of powder after drying to constant weight (g). [8]

Determination of petroleum ether-, chloroform-, methanol- and water-soluble extractive values: In a closed flask, 20 g of air-dried, coarsely powdered Tagetes erecta leaves was macerated with 100 mL of petroleum ether for 24 hours, shaking frequently during the first 6 hours and then standing for 18 hours. The mixture was filtered quickly, with precautions against loss of petroleum ether. 25 mL of the filtrate was evaporated to dryness in a Petri dish, dried at 105 °C, and weighed. The percentage of petroleum ether-soluble extractive was calculated with reference to the air-dried sample. The procedure was repeated with chloroform, methanol, and water in place of petroleum ether. [9] (A worked numerical sketch of these index calculations follows this subsection.)

Preparation of extracts
The collected plant material was dried under shade and ground into a coarse powder, which was subjected to Soxhlet extraction in order to prepare the whole extract as well as successive solvent extracts. [10]

Preparation of successive extracts of leaves and flowers
The solvents were used one after the other in order of increasing polarity: petroleum ether, chloroform, ethyl acetate, methanol, ethanol, water. 100 g of dried coarse leaf powder was weighed and packed loosely into a Soxhlet thimble, with a thin layer of cotton at the bottom to ensure the powder did not enter the distillation path. Porcelain chips were placed in the round-bottomed flask to avoid bumping of the solvent, and the thimble was inserted into the mouth of the flask, which was fitted with a condenser. The solvent was heated to reflux between 60 and 80 °C, the boiling range of petroleum ether. The solvent vapour rises through the distillation arm into the chamber containing the thimble; the condenser ensures that the solvent vapour cools and drips back into the chamber, which gradually fills with warm solvent in which some of the desired constituents dissolve. When the Soxhlet chamber is nearly full, a siphon side arm automatically empties the chamber, returning the solvent to the distillation flask. This cycle was repeated for a total of about 5 hours. The drug was then removed from the thimble and set aside to dry. To obtain the dry extract, the solvent was collected and evaporated on a heating mantle. The other leaf extracts were prepared by the same process, using the successive solvents in the order listed above.
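The percentage formulas above are simple ratios; as a quick illustration, the sketch below (with made-up weights, not measurements from this study) computes LOD, a generic ash percentage, moisture content, and an extractive value.

```python
# Illustrative calculator for the physicochemical indices defined above.
# All input weights are hypothetical examples, not data from this study.

def percent_lod(initial_powder_g: float, loss_g: float) -> float:
    """% loss on drying = (loss in weight / initial powder weight) x 100."""
    return loss_g / initial_powder_g * 100

def percent_of_sample(part_g: float, sample_g: float) -> float:
    """Generic % w/w: used for total ash, acid-insoluble ash and water-soluble ash."""
    return part_g / sample_g * 100

def percent_moisture(initial_g: float, dried_g: float) -> float:
    """% moisture = ((initial - constant dried weight) / initial) x 100."""
    return (initial_g - dried_g) / initial_g * 100

def percent_extractive(residue_g: float, aliquot_ml: float,
                       total_solvent_ml: float, sample_g: float) -> float:
    """Extractive value, scaling the evaporated aliquot up to the full extract."""
    return residue_g * (total_solvent_ml / aliquot_ml) / sample_g * 100

print(f"LOD:        {percent_lod(2.0, 0.16):.1f} %")             # 2 g powder, 0.16 g loss
print(f"Total ash:  {percent_of_sample(0.22, 2.0):.1f} %")       # 0.22 g ash from 2 g
print(f"Moisture:   {percent_moisture(2.0, 1.85):.1f} %")        # dried to 1.85 g
print(f"Extractive: {percent_extractive(0.35, 25, 100, 20):.1f} %")  # 25 mL of 100 mL
```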
Preliminary phytochemical screening
Test for carbohydrates (Molisch's test, general test): to 2-3 mL of aqueous extract, add a few drops of alpha-naphthol solution in alcohol, shake, and add concentrated H2SO4 down the side of the test tube; a violet ring forms at the junction of the two liquids.
Test for alkaloids: evaporate the aqueous, alcoholic and chloroform extracts separately. To the residue, add dilute hydrochloric acid, shake well and filter. With the filtrate, perform Wagner's test: 2-3 mL of filtrate with a few drops of Wagner's reagent gives a reddish-brown precipitate.
Keller-Killiani test: to 2 mL of extract, add glacial acetic acid, one drop of 5 % FeCl3 and concentrated H2SO4; a reddish-brown colour appears at the junction of the two liquid layers and the upper layer appears bluish green.
Test for flavonoids: to a small quantity of residue, add lead acetate solution; a yellow precipitate is formed.
Test for tannins: to 2-3 mL of aqueous or alcoholic extract, add a few drops of dilute iodine solution; a transient red colour appears.
Test for steroids (Salkowski reaction): to 2 mL of extract, add 2 mL of chloroform and 2 mL of concentrated H2SO4 and shake well; the chloroform layer appears red and the acid layer shows greenish-yellow fluorescence.
Test for fats and oils: press the powdered crude drug between two filter papers; the filter paper is permanently stained by oil.
Test for saponins (foam test): shake the drug extract or dry powder vigorously with water; persistent, stable foam is observed.
Test for proteins (Biuret test): to 3 mL of test solution add 4 % NaOH and a few drops of 1 % CuSO4 solution; a violet or pink colour appears.

Qualitative analysis for different chemical constituents [12]
Chromatography techniques [13]: chromatography is a method of separating molecules based on their size, shape, and charge. During chromatography, analytes are dissolved in a solvent and passed through a solid phase that serves as a sieving medium, separating the molecules as they pass through. Paper and thin-layer chromatography are chromatographic procedures that readily provide qualitative information while also allowing quantitative data to be obtained.

Thin Layer Chromatography (TLC) [14]: TLC has a number of advantages over paper chromatography, including adaptability, speed, and sensitivity. TLC is an adsorption chromatography technique in which materials are separated by their interaction with a thin layer of adsorbent on a plate. The approach is mostly used to separate low-molecular-weight molecules.

Macroscopic evaluation
Morphological studies revealed the shape of Tagetes erecta Linn. leaves: the leaves occur entire, are lanceolate in shape, and dark green in colour. Table 2 summarises all of the organoleptic features investigated.

Microscopic evaluation
The upper and lower epidermis of the leaf consist of a single layer of ovate and oblong-ovate cells with mean thicknesses of 16.32 µm adaxially and 17.68 µm abaxially. The mesophyll is not homogeneous, with 1-2 rows of palisade tissue (108.8 µm) adaxially and 5-7 rows of spongy tissue (115.5 µm) abaxially. The midrib contains a single circular-elliptic collateral vascular bundle with 7-9 rows of tracheary elements.

Physicochemical determinations
Physicochemical parameters such as extractive values, LOD (loss on drying), total ash, acid-insoluble ash, water-soluble ash, and moisture content of Tagetes erecta leaf powder were determined as per WHO guidelines.
The results are as follows.

Extraction of plant material
Successive solvent extraction values in the various organic solvents were observed ('+' = present and significant; '-' = absent).

Qualitative evaluation by Thin Layer Chromatography
Solvent extraction was performed on a dried, powdered leaf sample of Tagetes erecta Linn., and thin-layer chromatography on silica was performed on approximately 20 g of the extract. The systematic order of solvent selection demonstrates the effect of polarity on the extraction and on the extracted phytochemicals. During the thin-layer chromatography procedure, fractions with different Rf values were separated. Table 8 shows the results of the phytochemical analysis of Tagetes erecta L. obtained by observing the spots on the TLC plates. A convenient way for chemists to report the results of a TLC plate is the "retention factor" or Rf value, which quantifies a compound's movement: the distance from the compound's original location to its location after elution is measured and divided by the distance travelled by the solvent front, i.e. Rf = distance travelled by the compound / distance travelled by the solvent front (see the short computational sketch below).
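As a small companion to the TLC discussion, the sketch below computes Rf values from spot and solvent-front distances; the distances shown are hypothetical, not measurements from this study.

```python
# Hypothetical sketch: computing TLC retention factors (Rf) for several spots.
# Rf = distance travelled by the compound / distance travelled by the solvent front.
def retention_factor(spot_distance_cm: float, solvent_front_cm: float) -> float:
    if not 0 <= spot_distance_cm <= solvent_front_cm:
        raise ValueError("spot distance must lie between origin and solvent front")
    return spot_distance_cm / solvent_front_cm

solvent_front_cm = 8.0                      # example plate development distance
spots_cm = {"spot A": 2.4, "spot B": 5.2}   # example spot migration distances
for name, d in spots_cm.items():
    print(f"{name}: Rf = {retention_factor(d, solvent_front_cm):.2f}")
# spot A: Rf = 0.30, spot B: Rf = 0.65
```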
CONCLUSION
The current study provides information on the preliminary phytochemical and pharmacognostic screening of Tagetes erecta leaves, which may be helpful in standardising and authenticating the raw material and preventing adulteration. The construction of a monograph will benefit from the diagnostic microscopical characteristics and physicochemical information presented in this work. The present phytochemical screening of Tagetes erecta revealed the presence of bioactive compounds such as flavonoids, steroids, alkaloids, glycosides, and tannins that have medicinal value and a distinct physiological action on the human body. Additionally, Tagetes erecta leaf extracts can be subjected to pharmacological screening owing to the presence of several phytochemicals that may have therapeutic activity.
Surgical treatment of isolated right ventricular metastasis from renal cell carcinoma

Background: Cardiac metastasis from renal cell carcinoma is an exceptional event, particularly in the absence of inferior vena cava involvement; only a few cases have been reported worldwide so far. Case presentation: We present the case of a 58-year-old man diagnosed with an isolated right ventricular metastasis of renal cell carcinoma, without direct inferior vena cava extension, who underwent surgical tumor resection using cardiopulmonary bypass. Conclusions: Surgical resection of the cardiac mass, guided by an understanding of the pathology, is needed to prevent sudden death from acute heart failure or tumor embolism and to improve the patient's quality of life.

Background
Renal cell carcinoma (RCC) represents 3% of all malignant tumors, and approximately 30% of patients diagnosed with RCC develop metastasis [1]. The most common metastatic sites include the lung, bones, soft tissues, liver, and central nervous system. While cardiac metastases from RCC are unusual, isolated right ventricular (RV) metastasis without vena cava involvement is exceedingly rare [2]. Discussion of multidisciplinary therapies and follow-up strategies for cardiac metastasis of RCC is therefore essential to prevent the risk of sudden death. In this report, we present a case of surgical treatment of isolated RV metastasis from RCC in the absence of vena cava extension.

Case presentation
A 58-year-old man presented to our hospital with progressive dyspnea and atypical chest pain. Clinical examination found no signs of congestion or angina pectoris. At the age of 51, he had undergone partial right nephrectomy for right RCC. Two years after nephrectomy, multiple lung metastases from RCC were detected, and the patient was treated with targeted molecular therapy (sorafenib); this treatment was successful, and the metastatic lesions shrank and disappeared. Four years after nephrectomy, however, a right adrenal gland metastasis was detected. He was treated with another targeted molecular therapy (sunitinib), but only a limited effect was observed and the lesion enlarged, so he eventually underwent right adrenalectomy. After these treatments, the lung and adrenal gland metastases were well controlled with chemotherapy. At the time of admission, transthoracic echocardiography showed a 53 × 32 mm mobile mass in the RV, without extension into the outflow tract or involvement of the inferior vena cava (IVC) (Fig. 1a). Cardiac magnetic resonance imaging (MRI) confirmed a mobile mass with hypervascular tissue characteristics infiltrating the free wall of the RV myocardium (Fig. 1b). A fluorodeoxyglucose positron emission tomography (FDG-PET) examination showed a mildly FDG-avid mass in the RV free wall and no other organ metastases (Fig. 1c). Contrast-enhanced cardiac computed tomography (CT) displayed an intramyocardial mass in the RV wall supplied by the RV branch of the right coronary artery (Fig. 1d). After consultation with urology and oncology, the differential diagnosis included cardiac metastasis from RCC based on his medical history. Surgical tumor resection could both prevent tumor embolism-related sudden death and identify an appropriate anticancer agent through pathological diagnosis of the metastatic lesion. A multidisciplinary treatment was therefore planned with the expectation of improving the prognosis.
Under general anesthesia and median sternotomy, cardiopulmonary bypass was established between the ascending aorta and bicaval drainage. After cardiac arrest, the inside of the RV was inspected through the tricuspid valve via a right atriotomy. Part of the valve leaflet tissue between the anterior and posterior leaflets was incised near the annulus, and the tumor was delivered into the right atrium by manual compression of the outside of the RV wall (Fig. 2). The tumor attachment layer on the RV free wall was thin, while the muscular layer remained intact, and the RV branch ran over the center of the lesion. For these reasons, the tumor was resected so as to preserve the RV wall as much as possible. Considering the possibility of residual tumor cells, cryoablation was applied to the wall, followed by tricuspid annuloplasty with a 26-mm Physio tricuspid ring (Carpentier-Edwards, Irvine, California) after leaflet suture repair. The operation proceeded uneventfully, and weaning from cardiopulmonary bypass was smooth. Histopathological examination led to a final diagnosis of metastatic clear cell RCC (Fig. 3). Postoperative echocardiography showed disappearance of the RV tumor and normal tricuspid valve function. The patient was discharged on postoperative day 16 after an uneventful hospitalization. One year after surgery, he remains asymptomatic with a stable course, without recurrence of cardiac tumors or metastasis to other organs, under careful oncological follow-up.

Discussion
Approximately 45% of patients with RCC present with localized tumors, 25% with locally advanced disease, and approximately 30% with metastases at the time of diagnosis [1]. Cardiac metastases of RCC occur through two mechanisms. The first is a lymphatic pathway through the lymphatic vessels of the thorax, which collect drainage from the posterior wall of the heart; reports have noted that drainage from the left heart wall passes through these lymph vessels and that lymphatic flow can be reversed by metastasis to the nodes [3]. The second mechanism is a venous hematogenous pathway through the renal vein to the right heart. In cases of isolated, delayed progression to the right heart without IVC involvement, venous hematogenous microdissemination remains the most probable mode of metastasis [2]; this mechanism is more compatible with the present case, given the isolated right-heart lesion. Patients with cardiac metastases present nonspecific symptoms such as palpitations, chest pain, shortness of breath, and syncope. Coronary occlusion or compression by tumor masses can lead to myocardial infarction, eventual heart failure, and even death [4].

Fig. 1: (a) Transthoracic echocardiography shows a 52 × 31 mm right ventricular mass (arrow) moving without extension into the outflow tract. (b) Cardiac MRI shows an anterior right ventricular free-wall mass of the RV myocardium. (c) Axial fused FDG-PET/CT image demonstrates a mildly FDG-avid mass within the right ventricular wall (arrow). (d) Cardiac CT angiogram (segmented three-dimensional volume-rendered image) shows the RV branches entering the mass (arrow).
Fig. 2: Intraoperative image. The giant tumor adhering to the free wall was delivered into the right atrium by manual compression of the outside of the RV wall via the tricuspid valve.
A high index of suspicion is required to make a timely diagnosis of cardiac metastases because of the nonspecific clinical symptoms. Although various diagnostic imaging modalities have been used in prior reports, cardiac MRI is recommended as a reliable tool for evaluating cardiac masses given its excellent contrast resolution and tissue characterization, which can exclude lipomas, fibromas, and hemangiomas as well as thrombus or lipomatous hypertrophy [5]. Cardiac CT also provides high-quality images with superior spatial resolution for evaluating the relationship between the tumor and the coronary arteries when planning surgical resection of the mass [6]. Unlike most other neoplasms, metastatic RCC is relatively resistant to conventional chemotherapy; cytokine-based therapy, including interferon, was formerly the mainstay of treatment for advanced RCC. The development of receptor tyrosine kinase inhibitors, including sorafenib and sunitinib, has created a paradigm shift in the treatment of RCC. Some reports have described sudden death due to malignant cardiac metastases [7], but there is no consensus regarding surgical treatment for this disease. Since patients with isolated cardiac metastasis of RCC generally have obstructive symptoms, surgical resection may provide effective and favorable outcomes by preventing tumor embolism [8]. In this case, the free wall at the tumor adhesion site was thinned but the muscular layer remained, and the RV branch ran over the center of the tumor; transmural wall resection was therefore not performed, to avoid postoperative RV dysfunction from overly invasive surgery. In addition, cryoablation was applied to the RV wall to address possible residual tumor cells, using a pen-type freeze-coagulation device of the kind frequently used in the maze procedure for atrial fibrillation. This approach was adopted with reference to multidisciplinary treatments combining hepatectomy, microwave coagulo-necrotic therapy (MCN), and postoperative chemotherapy, which have been reported to provide long-term survival for patients with unresectable metastatic hepatocellular carcinoma [9,10]; however, there are no reports on the effectiveness of MCN for metastases of renal cell carcinoma. Surgical resection acts as palliative therapy for malignant cardiac metastasis; thus, multidisciplinary therapy combining surgical treatment and targeted molecular therapy, with the cooperation of multiple experts, is essential. For carefully selected patients, surgical resection of cardiac metastases to provide symptom palliation, improved quality of life, and prolonged survival may be acceptable.

Conclusions
We report surgical tumor resection in a patient with an RV mass caused by metastatic RCC. Since there is a risk of tumor recurrence from residual tumor cells or ineffective chemotherapy, close observation is mandatory. Our experience highlights the value of excision of the cardiac mass and close surveillance in improving quality of life.
Rethinking and Reweighting the Univariate Losses for Multi-Label Ranking: Consistency and Generalization

(Partial) ranking loss is a commonly used evaluation measure for multi-label classification, which is usually optimized with convex surrogates for computational efficiency. Prior theoretical work on multi-label ranking mainly focuses on (Fisher) consistency analyses. However, there is a gap between existing theory and practice -- some pairwise losses can lead to promising performance but lack consistency, while some univariate losses are consistent but usually have no clear superiority in practice. In this paper, we attempt to fill this gap through a systematic study from two complementary perspectives: the consistency and the generalization error bounds of learning algorithms. Our results show that learning algorithms with the consistent univariate loss have an error bound of $O(c)$ ($c$ is the number of labels), while algorithms with the inconsistent pairwise loss depend on $O(\sqrt{c})$ as shown in prior work. This explains why the latter can achieve better performance than the former in practice. Moreover, we present an inconsistent reweighted univariate loss-based learning algorithm that enjoys an error bound of $O(\sqrt{c})$ for promising performance as well as the computational efficiency of univariate losses. Finally, experimental results validate our theoretical analyses.

Introduction
Multi-Label Classification (MLC) [1] is an important task in which each instance is associated with multiple labels simultaneously. It has a wide range of applications, such as text categorization [2], bioinformatics [3], multimedia annotation [4], and information retrieval [5]. To evaluate the performance of different methods in MLC, various measures [6,7] have been developed from diverse aspects owing to the complexity of MLC. Among them, the (partial) ranking loss [2,8] is a widely used measure in practice (and in theory). Formally, the ranking loss calculates the fraction of pairs in which a positive label does not precede a negative label according to the ranking given by a score function (or predictor). Accordingly, minimizing such a loss is usually referred to as Multi-Label Ranking (MLR) [9], which is our consideration in this paper. Since the (partial) ranking loss is non-convex and discontinuous, existing methods [6] seek to optimize certain convex surrogate losses for computational efficiency. These surrogate losses can be divided into two main categories, pairwise ones [8] and univariate ones [9], which have their own advantages and limitations in terms of computational cost, theory and empirical performance. (Notes to Table 1: (a) this is in terms of the partial ranking loss; besides, these surrogate losses are all inconsistent w.r.t. the ranking loss. (b) This holds when the base loss is the exponential, logistic, least-squares or squared hinge loss.)

Computationally, the pairwise losses, defined over pairs of positive and negative labels, lead to a complexity depending on $O(c^2)$ ($c$ is the number of labels), while the univariate losses enjoy a complexity depending on $O(c)$. Thus the latter are preferable, especially in cases with a large-scale label space. Theoretically, the pairwise losses are not (Fisher) consistent w.r.t. either the ranking loss or the partial ranking loss [8], while, remarkably, certain univariate losses are consistent w.r.t. the partial ranking loss [9,8].
Empirically, however, the consistent univariate losses usually have no significant superiority in comparison with the inconsistent pairwise losses [9]. In fact, we observed that the former under-perform the latter on 10 MLR benchmarks (see the results in Table 4). Such a gap between existing theory and practice is worth further study, and it would be appealing to further improve the performance of the univariate losses, especially when c is large, given their computational efficiency.

A natural explanation of the gap is that although (Fisher) consistency [10,11] provides valuable insights in asymptotic settings, it cannot fully characterize the behaviour of a surrogate loss when the number of training samples is not sufficiently large and the hypothesis space is not realizable. To address this issue, this paper presents a systematic study from the complementary perspective of generalization error bounds [12] in addition to consistency. In fact, we prove that algorithms based on the existing consistent univariate losses lead to an error bound depending on $O(c)$, while those based on the pairwise losses enjoy an error bound depending on $O(\sqrt{c})$ [13], which explains the empirical behaviour better (see Table 4). Further, we present two reweighted surrogate univariate losses that employ carefully designed penalties for positive and negative labels. Such losses strictly upper bound the (partial) ranking loss, which is crucial in the generalization analysis (see Section 5.1). Moreover, we analyze their consistency and the generalization bounds of the corresponding algorithms. Surprisingly, though not consistent, one of them enjoys an error bound depending on $O(\sqrt{c})$, nearly the same as the pairwise loss, while retaining computational efficiency. See Table 1 for a summary of our main theoretical results. Experimental results validate our theoretical findings.

Technically, focusing on the widely used kernel-based algorithms [14,15,13], we present generalization analyses based on Rademacher complexity [16] and the vector-contraction inequality [17], following recent work [13]. For Fisher consistency, we consider more general reweighted univariate losses, naturally extending the results in prior work [9,8]. Considering different base losses (e.g., the logistic loss), we present simple conditions that involve only the penalties to characterize consistency w.r.t. the (partial) ranking loss, which may be of independent interest.

This paper is organized as follows. In Section 2, we review the related work in MLC and MLR. Section 3 introduces the problem setting, evaluation measures, risk and regret for MLR. Section 4 lists various surrogate losses, including the reweighted univariate ones, and their associated learning algorithms. In Section 5, we present the generalization analyses of the algorithms and the consistency analyses of the corresponding surrogate losses. Section 6 presents and analyzes the experimental results. Section 7 concludes this paper and discusses future work.

Related Work
Here we mainly review the theoretical work relevant to this paper in MLC and MLR. Consistency. [8] studied the consistency of various surrogate losses w.r.t. Hamming and (partial) ranking loss. Remarkably, [9] presented an explicit regret bound w.r.t. the partial ranking loss for certain consistent univariate losses. Extensive work has investigated consistency w.r.t. other measures, especially the F-measure. For instance, [18] provided justifications and connections w.r.t.
the F-measure using the empirical utility maximization (EUM) framework and the decision-theoretic approach (DTA) in binary classification, which were applied to the optimization of the macro-F measure in MLC. Further, [19] studied connections and differences between these two frameworks and clarified the notions of consistency w.r.t. many complex measures (e.g., the F-measure and Jaccard measure) in binary classification. Besides, prior work [20,21] studied the consistency of the F-measure in MLC from the DTA perspective via different approaches to estimating the conditional distribution P(y|x). [22] was devoted to the study of consistent multi-label classifiers w.r.t. various measures under the EUM framework. [23] investigated the multi-label consistency of various reduction methods w.r.t. the precision@k and recall@k measures. Generalization analysis. [13] studied the generalization bounds of the algorithms based on the pairwise surrogate loss (L_pa) and the (variant) univariate surrogate loss (L_u1) w.r.t. the ranking loss. We mention that a specific form of Eq. (13) with the base hinge loss has been used as a part of prior work [24], which achieves excellent empirical results in MLC. In comparison, this paper considers a more general form of such reweighted surrogate losses and provides formal consistency and generalization analyses, which, to our knowledge, have not been investigated in the literature.

Preliminaries
In this section, we first introduce the problem setting of MLC and MLR. Then, we present the evaluation measures, risk, and regret of MLR.

Notations. Let boldface lower-case letters denote vectors (e.g., a) and boldface capital letters denote matrices (e.g., A). For a matrix A, $\mathbf{a}_i$, $\mathbf{a}^j$ and $a_{ij}$ denote its i-th row, j-th column, and (i, j)-th element, respectively. For a vector a, $a_i$ denotes its i-th element. For a square matrix, Tr(·) denotes the trace operator. For a set, |·| denotes the cardinality. $[[\pi]]$ denotes the indicator function: it returns 1 when the proposition π holds and 0 otherwise. sgn(x) returns 1 when x > 0 and −1 otherwise. [n] denotes the set {1, ..., n}. For a function $g : \mathbb{R} \to \mathbb{R}$ and a matrix $A \in \mathbb{R}^{m\times n}$, define $g(A) : \mathbb{R}^{m\times n} \to \mathbb{R}^{m\times n}$ by $g(A)_{ij} = g(a_{ij})$.

Problem Setting
Let $\mathbf{x} \in \mathcal{X} \subset \mathbb{R}^d$ and $\mathbf{y} \in \mathcal{Y} \subset \{-1,+1\}^c$ denote the input and output, respectively, where d is the feature dimension, c is the number of labels, and the value $y_j = 1$ (or −1) indicates that the j-th label is relevant (or irrelevant). Given a training set $S = \{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^{n}$ sampled i.i.d. from the distribution P over $\mathcal{X}\times\mathcal{Y}$, the original goal of MLC is to learn a multi-label classifier $H : \mathcal{X} \to \mathcal{Y}$. To solve MLC, a common approach is to first learn a vector-valued score function (or predictor) $f = [f_1, \ldots, f_c] : \mathbb{R}^d \to \mathbb{R}^c$ and then obtain the classifier via a thresholding function. Multi-Label Ranking (MLR) aims to learn the best predictor from the finite training data in terms of ranking-based measures, which is our consideration in this paper.

Evaluation Measures
To evaluate the performance of different approaches for MLR, many measures have been developed. Here we focus on two widely used measures in practice (and theory), defined below.

Ranking Loss:
$$L^{r}(f(\mathbf{x}), \mathbf{y}) = \frac{1}{|S^{+}_{\mathbf{y}}||S^{-}_{\mathbf{y}}|} \sum_{(p,q)\in S^{+}_{\mathbf{y}}\times S^{-}_{\mathbf{y}}} \Big( [[f_p(\mathbf{x}) < f_q(\mathbf{x})]] + [[f_p(\mathbf{x}) = f_q(\mathbf{x})]] \Big), \qquad (1)$$
where $S^{+}_{\mathbf{y}}$ (or $S^{-}_{\mathbf{y}}$) denotes the relevant (or irrelevant) label index set induced by $\mathbf{y}$.

Partial Ranking Loss (minimizing the partial ranking loss is equivalent to maximizing the instance-AUC):
$$L^{pr}(f(\mathbf{x}), \mathbf{y}) = \frac{1}{|S^{+}_{\mathbf{y}}||S^{-}_{\mathbf{y}}|} \sum_{(p,q)\in S^{+}_{\mathbf{y}}\times S^{-}_{\mathbf{y}}} \Big( [[f_p(\mathbf{x}) < f_q(\mathbf{x})]] + \tfrac{1}{2}[[f_p(\mathbf{x}) = f_q(\mathbf{x})]] \Big). \qquad (2)$$

From the above definitions, we can observe that the only difference between these two measures is the penalty applied when $f_p(\mathbf{x}) = f_q(\mathbf{x})$ holds. Besides, it is easy to verify that the ranking loss upper bounds the partial ranking loss, i.e., $L^{pr}(f(\mathbf{x}),\mathbf{y}) \le L^{r}(f(\mathbf{x}),\mathbf{y})$. Although these two measures are almost the same in practice for the evaluation of an algorithm, they have different consistency properties for some surrogate losses theoretically [8]. (A small computational sketch of both measures follows.)
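To make the two measures concrete, here is a small sketch (ours, not code from the paper) that evaluates both losses for a single instance, using the normalized definitions in Eqs. (1)-(2).

```python
import numpy as np

def ranking_losses(scores, y):
    """Ranking loss and partial ranking loss for one instance.

    scores: real-valued predictions f(x), shape (c,)
    y: labels in {-1, +1}, shape (c,)
    Ties cost 1 under the ranking loss and 1/2 under the partial ranking loss.
    """
    pos, neg = np.where(y == 1)[0], np.where(y == -1)[0]
    diffs = scores[pos][:, None] - scores[neg][None, :]  # f_p - f_q over all pairs
    n_pairs = diffs.size
    mis = (diffs < 0).sum()    # positive ranked strictly below a negative
    ties = (diffs == 0).sum()  # positive tied with a negative
    return (mis + ties) / n_pairs, (mis + 0.5 * ties) / n_pairs

scores = np.array([0.9, 0.2, 0.2, -0.5])
y = np.array([1, 1, -1, -1])
r, pr = ranking_losses(scores, y)
print(f"ranking loss = {r:.3f}, partial ranking loss = {pr:.3f}")
# pairs: (0,2),(0,3),(1,2),(1,3); one tie (1,2) -> r = 0.25, pr = 0.125,
# illustrating that the ranking loss upper-bounds the partial ranking loss.
```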
Risk and Regret
Since the (partial) ranking loss is non-convex and discontinuous, often leading to NP-hard problems [25], extensive methods optimize it in practice with convex surrogate losses for computational efficiency. Define a surrogate loss $L_\phi : \mathbb{R}^c \times \{-1,+1\}^c \to \mathbb{R}_+$, where φ indicates the specific surrogate loss and will be detailed in the next section. Besides, define a vector-valued predictor class $\mathcal{F} = \{f : \mathcal{X} \to \mathbb{R}^c\}$. For a predictor $f \in \mathcal{F}$, its true (0/1) expected risk, surrogate expected risk, and surrogate empirical risk are defined as
$$R_{0/1}(f) = \mathbb{E}_{(\mathbf{x},\mathbf{y})\sim P}\big[L^{0/1}(f(\mathbf{x}),\mathbf{y})\big], \quad R_{\phi}(f) = \mathbb{E}_{(\mathbf{x},\mathbf{y})\sim P}\big[L_{\phi}(f(\mathbf{x}),\mathbf{y})\big], \quad \widehat R_{\phi,S}(f) = \frac{1}{n}\sum_{i=1}^{n} L_{\phi}(f(\mathbf{x}_i),\mathbf{y}_i).$$
Besides, we use a superscript (pr or r) to distinguish the risks for specific measures; for instance, $\widehat R^{pr}_{S}(f)$ and $\widehat R^{r}_{S}(f)$ denote the empirical partial ranking risk and the empirical ranking risk, respectively. Moreover, for convenience, the expected risk conditioned on an instance x (i.e., the conditional risk) can be expressed as
$$R(f \mid \mathbf{x}) = \sum_{\mathbf{y}\in\mathcal{Y}} L(f(\mathbf{x}),\mathbf{y})\, P(\mathbf{y}\mid\mathbf{x}),$$
where L denotes the true (0/1) or surrogate loss. Thus, the expected risk of f is $R(f) = \mathbb{E}_{\mathbf{x}}[R(f\mid\mathbf{x})]$. For each x, given the conditional distribution $P(\mathbf{y}\mid\mathbf{x})$, we can obtain the optimal predictions
$$f^{*}(\mathbf{x}) = \arg\min_{\mathbf{a}\in\mathbb{R}^c} \sum_{\mathbf{y}} L(\mathbf{a},\mathbf{y})\,P(\mathbf{y}\mid\mathbf{x}),$$
where $f^{*}$ is called the Bayes predictor w.r.t. the loss L. (Notably, the optimal predictions need not be a single value: there can be a set of predictions sharing the same minimal conditional risk.) The expected risk of $f^{*}$, i.e., $R(f^{*})$, is called the Bayes risk, which is the minimal expected risk w.r.t. the loss L and is denoted by $R^{*}$ for convenience. Then, we can define the regret (a.k.a. excess risk) of a predictor f w.r.t. the true and surrogate loss as
$$\mathrm{Reg}_{0/1}(f) = R_{0/1}(f) - R^{*}_{0/1}, \qquad \mathrm{Reg}_{\phi}(f) = R_{\phi}(f) - R^{*}_{\phi}.$$
Again, we use a superscript (pr or r) to distinguish the regrets for specific measures. Moreover, we denote the predictor learned from the finite training set S by $\hat f_n$. Note that our goal is to find a predictor $\hat f_n$ that achieves as small a true regret $\mathrm{Reg}_{0/1}(\hat f_n)$ as possible.

Methods
In this section, we first introduce several specific surrogate losses. Then, we present their associated learning algorithms.

Surrogate losses
To optimize the (partial) ranking loss, it is natural to employ the convex surrogate pairwise loss [2,3,26,27]:
$$L^{pa}(f(\mathbf{x}),\mathbf{y}) = \frac{1}{|S^{+}_{\mathbf{y}}||S^{-}_{\mathbf{y}}|} \sum_{(p,q)\in S^{+}_{\mathbf{y}}\times S^{-}_{\mathbf{y}}} \ell\big(f_p(\mathbf{x}) - f_q(\mathbf{x})\big),$$
where the base (margin-based) convex loss $\ell(z)$ can take various popular forms, such as the exponential loss $\ell(z) = e^{-z}$, the logistic loss $\ell(z) = \ln(1 + e^{-z})$, the hinge loss $\ell(z) = \max\{0, 1 - z\}$, and the squared hinge loss $\ell(z) = (\max\{0, 1 - z\})^2$. A common property is that the base convex surrogate loss upper bounds the original 0/1 loss $[[z \le 0]]$.

Besides, the surrogate univariate loss $L^{u1}$, which primarily aims to optimize the Hamming loss [28,13], can also be viewed as a surrogate loss for the (partial) ranking loss; it applies the base loss to the per-label margins $y_j f_j(\mathbf{x})$, $j \in [c]$ (Eq. (11)). Note that $L^{u1}$ cannot strictly upper bound the (partial) ranking loss. Remarkably, previous work presents a consistent surrogate univariate loss $L^{u2}$ [9,8] w.r.t. the partial ranking loss (Eq. (12)), which reweights the per-label losses according to the relevant and irrelevant label sets. Again, the consistent surrogate loss $L^{u2}$ cannot strictly upper bound the (partial) ranking loss either.
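For intuition on the computational contrast between the two families, the following sketch is ours, using the standard unweighted forms with a logistic base loss; the paper's reweighted variants in Eqs. (12)-(14) additionally weight the per-label terms, and the exact normalization of Eq. (11) may differ.

```python
import numpy as np

def logistic(z):
    # base convex loss ell(z) = ln(1 + exp(-z)), computed stably
    return np.logaddexp(0.0, -z)

def pairwise_surrogate(scores, y):
    """L_pa-style: base loss on score differences over all (pos, neg) pairs -> O(c^2)."""
    pos, neg = scores[y == 1], scores[y == -1]
    diffs = pos[:, None] - neg[None, :]
    return logistic(diffs).mean()

def univariate_surrogate(scores, y):
    """L_u1-style: base loss on per-label margins y_j * f_j -> O(c)."""
    return logistic(y * scores).sum()

scores = np.array([0.9, 0.2, 0.2, -0.5])
y = np.array([1, 1, -1, -1])
print(f"pairwise  : {pairwise_surrogate(scores, y):.3f}")
print(f"univariate: {univariate_surrogate(scores, y):.3f}")
```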
Notably, when the surrogate loss strictly upper bounds the 0/1 loss, the true (0/1) risk can be upper bounded by the surrogate risk too, which is crucial for the generalization analysis. Thus, we present two reweighted convex surrogate univariate losses, which strictly upper bound the (partial) ranking loss, defined as below:

$$L_{u3}(f(x), y) = \frac{1}{|S_y^+|} \sum_{j \in S_y^+} \ell(f_j(x)) + \frac{1}{|S_y^-|} \sum_{j \in S_y^-} \ell(-f_j(x)), \qquad (13)$$

$$L_{u4}(f(x), y) = \max\Big\{ \frac{1}{|S_y^+|}, \frac{1}{|S_y^-|} \Big\} \Big( \sum_{j \in S_y^+} \ell(f_j(x)) + \sum_{j \in S_y^-} \ell(-f_j(x)) \Big). \qquad (14)$$

For a clear presentation, we formally discuss the relationships among these surrogate losses in the next section.

Learning Algorithms

In the following, we consider the kernel-based learning algorithms which have been widely used in practice [3,28,29,14,15] and in theory [13] in MLC. Besides, our following analyses can be extended to other forms of hypothesis class, such as neural networks [30]. Let H be a reproducing kernel Hilbert space (RKHS) induced by the kernel function κ, where κ : X × X → R is a Positive Definite Symmetric (PSD) kernel. Let Φ : X → H be a feature mapping associated with κ. The kernel-based hypothesis class can be defined as follows:

$$\mathcal{F} = \big\{ x \mapsto \big( \langle w_1, \Phi(x) \rangle_{\mathcal{H}}, \ldots, \langle w_c, \Phi(x) \rangle_{\mathcal{H}} \big) : \|W\| \le \Lambda \big\},$$

where ‖W‖ denotes ‖W‖_{H,2} = (Σ_{j=1}^c ‖w_j‖_H²)^{1/2} for convenience. Here we consider five learning algorithms, A_pa, A_u1, A_u2, A_u3 and A_u4, which minimize the empirical risk R̂_φ^S(f) over F with the surrogate loss L_φ set to L_pa, L_u1, cL_u2, L_u3 and L_u4, respectively.
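As a concrete illustration of one such algorithm, the following sketch trains A_u3 with a linear model and the logistic base loss by plain SGD on the regularized empirical risk; it assumes every training label vector is nontrivial (at least one relevant and one irrelevant label), and all names are ours rather than the paper's:

```python
import numpy as np

def L_u3(f, y):
    """Reweighted univariate surrogate of Eq. (13) with logistic base loss."""
    pos, neg = f[y == 1], f[y == -1]
    return np.mean(np.log1p(np.exp(-pos))) + np.mean(np.log1p(np.exp(neg)))

def grad_L_u3(f, y):
    """Gradient of L_u3 with respect to the score vector f."""
    g = np.zeros_like(f)
    pos, neg = y == 1, y == -1
    g[pos] = -1.0 / (1.0 + np.exp(f[pos])) / pos.sum()   # d/df log(1+e^{-f})
    g[neg] = 1.0 / (1.0 + np.exp(-f[neg])) / neg.sum()   # d/df log(1+e^{f})
    return g

def fit_A_u3(X, Y, lam=1e-3, lr=0.1, epochs=200, rng=np.random.default_rng(0)):
    """Minimize the empirical L_u3 risk plus lam * ||W||^2 with plain SGD."""
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    for _ in range(epochs):
        for i in rng.permutation(n):
            f = X[i] @ W
            W -= lr * (np.outer(X[i], grad_L_u3(f, Y[i])) + 2 * lam * W)
    return W
```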
Theoretical Analyses

In this section, we present generalization error bounds for the learning algorithms presented above and consistency analyses of the corresponding surrogate losses. Firstly, we want to highlight the complementary roles of the two perspectives. Recall that our goal is to find a predictor f̂_n learned from finite training data that achieves the minimal true regret Reg_{0/1}(f̂_n). In the following, we decompose the regret appropriately for a clear discussion. For generalization analyses, the true regret can be decomposed into the following terms w.r.t. the 0/1 loss:

$$\mathrm{Reg}_{0/1}(\hat f_n) = \Big( R_{0/1}(\hat f_n) - \inf_{g \in \mathcal{F}} R_{0/1}(g) \Big) + \Big( \inf_{g \in \mathcal{F}} R_{0/1}(g) - R^*_{0/1} \Big),$$

where F is the constrained function class that real learning algorithms utilize. For a given distribution P(x, y) and a specific measure, R*_{0/1} is fixed. Besides, inf_{g∈F} R_{0/1}(g) depends on the size of F and is fixed for a given F. Thus, in this case, the original goal becomes to minimize R_{0/1}(f̂_n) as much as possible. In Section 5.1, we present the generalization error bounds of the learning algorithms to provide learning guarantees for R_{0/1}(f̂_n) through bounding the surrogate risk R_φ(f̂_n). However, these error bounds cannot exactly tell the size of the gap between R_{0/1}(f̂_n) and R_φ(f̂_n).

Consistency analyses aim to answer the question whether the (0/1) expected risk of the learned function converges to the Bayes risk [11,8], i.e., whether R_{0/1}(f̂_n) → R*_{0/1} when n → ∞. If a loss is consistent, a regret bound [11,9] of the following form is preferable: for all measurable functions f (including f̂_n) and all valid joint distributions P(x, y),

$$\mathrm{Reg}_{0/1}(f) \le \psi\big(\mathrm{Reg}_\phi(f)\big),$$

where ψ is an invertible function such that ψ(δ_n) → 0 for any sequence δ_n → 0. Prior work [9] establishes such regret bounds for L_u2 with the logistic and exponential base losses in MLR. Besides, when learning in the real setting (with finite data), the surrogate regret of f̂_n can be decomposed into the following two terms w.r.t. the surrogate loss:

$$\mathrm{Reg}_\phi(\hat f_n) = \Big( R_\phi(\hat f_n) - \inf_{g \in \mathcal{F}} R_\phi(g) \Big) + \Big( \inf_{g \in \mathcal{F}} R_\phi(g) - R^*_\phi \Big),$$

where the estimation error (the first term) is due to the finite data size, and the approximation error (the second term) is due to the choice of F. Notably, the consistency analysis [11] neglects these two errors, since it allows P(y|x) to be known in the infinite-data setting and assumes that the hypothesis class F ranges over all measurable functions. In summary, consistency can provide valuable insights for learning from infinite data (or data of relatively large n w.r.t. c) with an unconstrained hypothesis class, while generalization bounds can offer more insights for learning from finite data with a constrained hypothesis class.

Generalization Analyses

For generalization analyses, we mainly follow the recent theoretical work [13]. First, we introduce the common assumption for the subsequent analyses.

Assumption 1. The kernel is bounded, i.e., κ(x, x) ≤ r² for all x ∈ X, and the base (convex) loss ℓ(z) is ρ-Lipschitz continuous and bounded by B on the relevant domain.

Note that the widely-used hinge and logistic losses are both 1-Lipschitz continuous. Then we provide the properties of the surrogate losses in the following lemma. Notably, the Lipschitz constants of the surrogate losses characterize the relationship between the Rademacher complexities [16] of the loss class and of the hypothesis class through the vector-contraction inequality [17], which plays a central role in the generalization analysis.

Lemma 1 (The properties of surrogate losses; full proof in Appendix A.1). Assume that the base (convex) loss ℓ(z) is ρ-Lipschitz continuous and bounded by B. Then, the following holds: (1) the scaled loss cL_u2(f(x), y) in Eq. (12) is 2√c ρ-Lipschitz w.r.t. the first argument and bounded by 2cB (for c ≥ 2); (2) the surrogate loss L_u3(f(x), y) in Eq. (13) is 2ρ-Lipschitz w.r.t. the first argument and bounded by 2B; (3) the surrogate loss L_u4(f(x), y) in Eq. (14) is √c ρ-Lipschitz w.r.t. the first argument and bounded by cB.

Next, we analyze the relationship between the true and surrogate losses, which is used in the proofs of the learning guarantees of the algorithms.

Lemma 2 (The relationship between true and surrogate losses). For the ranking loss and its surrogate losses, the following inequalities hold:

$$L_r^{0/1}(f(x), y) \le L_{u4}(f(x), y) \le cL_{u2}(f(x), y), \qquad (20)$$

$$L_r^{0/1}(f(x), y) \le L_{u3}(f(x), y). \qquad (21)$$

The full proof is in Appendix A.2. From this lemma, we can observe that when a learning algorithm minimizes L_u2, it also optimizes an upper bound of L_r^{0/1} (and hence of L_pr^{0/1}).

First, we analyze the learning guarantee of A_u2, as follows.

Theorem 1 (Learning guarantee of A_u2). Assume the loss L_φ = cL_u2, where L_u2 is defined in Eq. (12). Besides, Assumption 1 is satisfied. Then, for any δ > 0, with probability at least 1 − δ over S, the following generalization bound holds for all f ∈ F:

$$R_r^{0/1}(f) \le c\hat{R}^S_{u2}(f) + C_1 \rho \Lambda r \cdot \frac{c}{\sqrt{n}} + C_2 \, cB \sqrt{\frac{\log(1/\delta)}{n}},$$

where C₁, C₂ > 0 are universal constants. The full proof is in Appendix A.3.1. From this theorem, we can see that the learning algorithm A_u2 has a learning guarantee in terms of the (partial) ranking loss which depends on O(c). Then, we provide the learning guarantee of A_u3 in the following theorem.

Theorem 2 (Learning guarantee of A_u3). Assume the loss L_φ = L_u3, where L_u3 is defined in Eq. (13). Besides, Assumption 1 is satisfied. Then, for any δ > 0, with probability at least 1 − δ over S, the following generalization bound holds for all f ∈ F:

$$R_r^{0/1}(f) \le \hat{R}^S_{u3}(f) + C_1 \rho \Lambda r \sqrt{\frac{c}{n}} + C_2 \, B \sqrt{\frac{\log(1/\delta)}{n}}.$$

The full proof is in Appendix A.3.2. From this theorem, remarkably, we can see that the learning algorithm A_u3 has a learning guarantee in terms of the (partial) ranking loss which depends on O(√c), enjoying the same order as the algorithm A_pa [13]. Finally, we give the learning guarantee of A_u4 as follows.

Theorem 3 (Learning guarantee of A_u4). Assume the loss L_φ = L_u4, where L_u4 is defined in Eq. (14). Besides, Assumption 1 is satisfied. Then, for any δ > 0, with probability at least 1 − δ over S, the following generalization bound holds for all f ∈ F:

$$R_r^{0/1}(f) \le \hat{R}^S_{u4}(f) + C_1 \rho \Lambda r \cdot \frac{c}{\sqrt{n}} + C_2 \, cB \sqrt{\frac{\log(1/\delta)}{n}}.$$

The full proof is in Appendix A.3.3. The above theorem indicates that A_u4 has a learning guarantee w.r.t. the (partial) ranking loss depending on O(c), which is the same as A_u2.

Consistency Analyses

For consistency, following [9,8], we consider the general ranking loss and the general partial ranking loss as follows:

$$\bar{L}_r(f(x), y) = \alpha_y \sum_{(p,q) \in S_y^+ \times S_y^-} \Big( [[f_p(x) < f_q(x)]] + [[f_p(x) = f_q(x)]] \Big), \qquad (26)$$

and

$$\bar{L}_{pr}(f(x), y) = \alpha_y \sum_{(p,q) \in S_y^+ \times S_y^-} \Big( [[f_p(x) < f_q(x)]] + \frac{1}{2}[[f_p(x) = f_q(x)]] \Big), \qquad (27)$$

where α_y is a positive penalty. The losses in Eq. (1) and Eq. (2) are the special cases with α_y = 1/(|S_y^+||S_y^-|). Similarly, we consider the general reweighted univariate surrogate loss

$$\bar{L}_u(f(x), y) = \beta_y^+ \sum_{j \in S_y^+} \ell(f_j(x)) + \beta_y^- \sum_{j \in S_y^-} \ell(-f_j(x)), \qquad (28)$$

where β_y^+ and β_y^- are penalties for the positive and negative labels respectively. We assume β_y^+ β_y^- > 0 for convenience in our analyses. Note that the penalties can be different across y, and all univariate surrogate losses presented in Section 4.1 are special cases of Eq. (28) (see Table 2 for details).
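The family in Eq. (28) is easy to instantiate in code. The sketch below encodes β penalties that recover L_u1 through L_u4 (the β choices follow the definitions in Section 4.1 and Appendix B, and are our reconstruction of Table 2) and spot-checks on random data that L_u3 and L_u4 upper bound the ranking loss with the hinge base, as claimed:

```python
import numpy as np

def L_reweighted(f, y, beta_pos, beta_neg, base=lambda z: np.maximum(0, 1 - z)):
    """General reweighted univariate loss of Eq. (28) with a hinge base loss."""
    sp, sn = np.sum(y == 1), np.sum(y == -1)
    return (beta_pos(sp, sn) * np.sum(base(f[y == 1]))
            + beta_neg(sp, sn) * np.sum(base(-f[y == -1])))

# Penalty pairs (beta_pos, beta_neg) recovering L_u1 .. L_u4.
losses = {
    "u1": (lambda sp, sn: 1 / (sp + sn),   lambda sp, sn: 1 / (sp + sn)),
    "u2": (lambda sp, sn: 1 / (sp * sn),   lambda sp, sn: 1 / (sp * sn)),
    "u3": (lambda sp, sn: 1 / sp,          lambda sp, sn: 1 / sn),
    "u4": (lambda sp, sn: max(1/sp, 1/sn), lambda sp, sn: max(1/sp, 1/sn)),
}

def ranking_loss(f, y):   # ties penalized by 1, as in Eq. (1)
    d = f[y == 1][:, None] - f[y == -1][None, :]
    return np.mean(d <= 0)

rng = np.random.default_rng(0)
for _ in range(10000):    # random spot-check of Lemma 2's upper bounds
    c = rng.integers(4, 10)
    y = np.where(rng.random(c) < 0.4, 1, -1)
    if abs(y.sum()) == c:                 # skip trivial label vectors
        continue
    f = rng.normal(size=c)
    lr = ranking_loss(f, y)
    assert lr <= L_reweighted(f, y, *losses["u3"]) + 1e-9
    assert lr <= L_reweighted(f, y, *losses["u4"]) + 1e-9
```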
Let B_L(x, P(y|x)) denote the set of the Bayes predictors of a loss L given a data point x and a conditional distribution P(y|x). Remarkably, a sufficient and necessary condition (called multi-label consistency [8]) for a surrogate loss to be (Fisher) consistent w.r.t. the (partial) ranking loss is that B_L(x, P(y|x)) ⊂ B_{L^{0/1}}(x, P(y|x)) for all x and P(y|x); see Lemma 3 in Appendix B.

Theorem 4 (full proof in Appendix B). The general reweighted univariate surrogate loss in Eq. (28), with ℓ(z) = e^{−z} or ℓ(z) = ln(1 + e^{−z}), is consistent w.r.t. the general partial ranking loss in Eq. (27) if and only if there exists a universal constant τ > 0 such that β_y^+ β_y^- = τ α_y² for all y.

Proposition 1 (Bayes predictor of Eq. (28); full proof in Appendix B). For all x and P(y|x), the Bayes predictor w.r.t. the general reweighted univariate surrogate loss in Eq. (28) with ℓ(z) = e^{−z} or ℓ(z) = ln(1 + e^{−z}) is given by f_j^*(x) = C ln(φ_j^+/φ_j^-) for all j ∈ [c], where φ_j^+ = Σ_{y: y_j = +1} β_y^+ P(y|x), φ_j^- = Σ_{y: y_j = −1} β_y^- P(y|x), and C = 1/2 for the exponential loss and C = 1 for the logistic loss.

Note that, when c ≤ 3, the penalties of L_u1, L_u3 and L_u4 may coincide with those of L_u2 up to a multiplicative constant. When c ≥ 4, it is straightforward to construct counterexamples that violate the necessary condition in Theorem 4 and obtain the following Corollary 1.

Corollary 1. When c ≥ 4, L_u1, L_u3 and L_u4 are inconsistent w.r.t. the partial ranking loss in Eq. (2).

An immediate conclusion from Corollary 1 and Proposition 1 is that L_u1, L_u3 and L_u4 are also inconsistent w.r.t. the ranking loss in Eq. (1), because consistency w.r.t. the ranking loss is even more demanding than consistency w.r.t. the partial ranking loss [8]. Compared to existing work [9,8], although Theorem 4 and Proposition 1 are negative, these results consider surrogate losses in the more general reweighted form of Eq. (28), which may be of independent interest.

Experiments

To validate our theoretical findings, we evaluate all algorithms presented in Section 4 on 10 widely-used benchmark datasets with various domains and sizes of label space and data. We summarize their statistics in Table 3. Since the first four datasets are not properly preprocessed, we normalize the inputs to zero mean and unit variance following [13]. For all the learning algorithms, we utilize linear models with the base logistic loss for simplicity and a fair comparison. Besides, we use the same efficient stochastic algorithm (i.e., SVRG-BB [31]) to solve these convex optimization problems. Moreover, for fairness, we perform 3-fold cross-validation on each dataset, where the hyper-parameter λ is searched in the wide range {10^{-8}, 10^{-7}, ..., 10^{2}} for all algorithms. We use the ranking loss as the evaluation measure.

Table 4: Ranking loss of all five algorithms on benchmark datasets. On each dataset, the top two algorithms are highlighted in bold and the top one is labeled with †.

The experimental results are summarized in Table 4, and we refer the readers to Appendix C for complete results with standard deviations. First, we observe that A_pa and A_u3 outperform the others, especially A_u2, on almost all benchmarks. This agrees with our generalization analyses: A_pa and A_u3 enjoy a generalization error bound of O(√c), while the others have a bound of O(c). Besides, we would like to emphasize that these results do not contradict the consistency results. In fact, two assumptions of the consistency analyses are violated in real settings: first, the Bayes predictor may not be linear; second, the number of samples may not be sufficient to reach the Bayes predictor. This explains the relatively weak results of A_u2 despite its consistency. In this sense, the generalization error bounds may provide more insights than consistency when the number of training samples is finite (or not sufficiently large) and the hypothesis space is not realizable. Further, we also note that A_u3 outperforms A_pa on the last four datasets, which have a relatively large c. The underlying mechanism is not clear yet. Our hypothesis is that our univariate loss is easier to optimize than the pairwise loss, which may provide additional benefits beyond the scope of the generalization analyses. A deeper analysis is left as future work. Moreover, as for computational efficiency, the pairwise loss is much slower than all the univariate ones, including A_u3. Indeed, A_pa takes more than a week on a 48-core CPU server on the delicious dataset with c = 983, and we did not finish it. We provide quantitative results on the running time in Appendix C.
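The efficiency gap has a simple source: per example, the pairwise loss touches |S_y^+||S_y^-| = O(c²) terms while a univariate loss touches O(c). A rough micro-benchmark sketch (timings will vary by machine; the setup below is illustrative only):

```python
import numpy as np
import timeit

c = 983                                      # label count of the delicious dataset
rng = np.random.default_rng(0)
f = rng.normal(size=c)
y = np.where(rng.random(c) < 0.25, 1.0, -1.0)

def pairwise():                              # O(|S+| * |S-|) ~ O(c^2) terms
    d = f[y == 1][:, None] - f[y == -1][None, :]
    return np.mean(np.log1p(np.exp(-d)))

def univariate():                            # O(c) terms
    return np.mean(np.log1p(np.exp(-y * f)))

print(timeit.timeit(pairwise, number=1000))
print(timeit.timeit(univariate, number=1000))
```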
In conclusion, the benchmark results show the promise of our L_u3 in terms of both efficiency and effectiveness.

Conclusion and Discussion

This paper presents a systematic study of multi-label ranking from the two complementary perspectives of consistency and generalization error bounds. In particular, the existing consistent univariate loss leads to an error bound depending on O(c), while the inconsistent pairwise loss enjoys an error bound of O(√c) [13]. Inspired by the generalization analyses, we present two reweighted surrogate univariate losses that strictly upper bound the (partial) ranking loss. Surprisingly, though not consistent, one of them enjoys an error bound depending on O(√c), which is nearly the same as that of the pairwise loss while retaining computational efficiency. Empirical results validate our theoretical findings. Some problems in MLR and MLC are still open and may inspire future work. Theoretically, consistency provides valuable insights when learning from large-scale samples with an unconstrained function space. However, our empirical results show that generalization explains the behaviour of a loss more accurately than consistency when these assumptions are violated. Generally, a deeper understanding of the complementary roles of the two perspectives is intriguing. Besides, it is also attractive to investigate whether one can design a loss that is consistent and whose corresponding learning algorithm has a tight generalization bound.

A.1 Proof of Lemma 1

Proof (sketch). (1) Since the inequality c − 1 ≤ |S_y^+||S_y^-| ≤ c²/4 holds, it is easy to check that L_u2 is bounded by cB/(c − 1), so cL_u2 is bounded by c²B/(c − 1) ≤ 2cB for c ≥ 2; the Lipschitz constant follows analogously. (2) For the surrogate univariate loss L_u3(f(x), y), for all f¹, f² ∈ R^c, by the ρ-Lipschitz continuity of ℓ and the Cauchy-Schwarz inequality the following holds:

$$|L_{u3}(f^1, y) - L_{u3}(f^2, y)| \le \frac{\rho}{|S_y^+|} \sum_{j \in S_y^+} |f_j^1 - f_j^2| + \frac{\rho}{|S_y^-|} \sum_{j \in S_y^-} |f_j^1 - f_j^2| \le \rho \Big( \frac{1}{\sqrt{|S_y^+|}} + \frac{1}{\sqrt{|S_y^-|}} \Big) \|f^1 - f^2\|_2 \le 2\rho \|f^1 - f^2\|_2.$$

It is easy to check that L_u3 is bounded by 2B. (3) The case of L_u4 is analogous.

A.2 Proof of Lemma 2

Lemma 2 (The relationship between true and surrogate losses). For the ranking loss and its surrogate losses, the inequalities (20) and (21) hold.

Proof. Note that [[f_p(x) ≤ f_q(x)]] ≤ [[f_p(x) ≤ 0]] + [[f_q(x) ≥ 0]], since f_p(x) > 0 and f_q(x) < 0 together imply f_p(x) > f_q(x). Hence

$$L_r^{0/1}(f(x), y) \le \frac{1}{|S_y^+|} \sum_{p \in S_y^+} [[f_p(x) \le 0]] + \frac{1}{|S_y^-|} \sum_{q \in S_y^-} [[f_q(x) \ge 0]].$$

For the first inequality (InEq. (20)), this middle expression is at most max{1/|S_y^+|, 1/|S_y^-|} (Σ_{p∈S_y^+} [[f_p(x) ≤ 0]] + Σ_{q∈S_y^-} [[f_q(x) ≥ 0]]) ≤ L_u4(f(x), y), using [[z ≤ 0]] ≤ ℓ(z); and L_u4 ≤ cL_u2 follows from max{1/|S_y^+|, 1/|S_y^-|} ≤ c/(|S_y^+||S_y^-|). For the second inequality (InEq. (21)), the same middle expression is directly bounded by L_u3(f(x), y), again using [[z ≤ 0]] ≤ ℓ(z).

A.3 Proof of Theorems 1, 2 and 3

Following [13], we also give the base theorem used in the subsequent generalization analyses, as follows.

Theorem A.1 (Base generalization bound, following [13]). Assume L_φ is ρ_φ-Lipschitz w.r.t. its first argument and bounded by B_φ, and Assumption 1 holds. Then, for any δ > 0, with probability at least 1 − δ over S, for all f ∈ F:

$$R_\phi(f) \le \hat{R}^S_\phi(f) + C_1 \rho_\phi \Lambda r \sqrt{\frac{c}{n}} + C_2 B_\phi \sqrt{\frac{\log(1/\delta)}{n}},$$

where C₁, C₂ > 0 are universal constants arising from the vector-contraction inequality [17].

A.3.1 Proof of Theorem 1

Proof. Since L_φ = cL_u2, we can get its Lipschitz constant (i.e., 2√c ρ) and bounded value (i.e., 2cB) from (1) in Lemma 1. Then, applying Theorem A.1 and the inequality R_r^{0/1}(f) ≤ cR_{u2}(f) implied by InEq. (20), we can get this theorem.

A.3.2 Proof of Theorem 2

Theorem 2 (Learning guarantee of A_u3). Assume the loss function L_φ = L_u3, where L_u3 is defined in Eq. (13). Besides, Assumption 1 is satisfied. Then, for any δ > 0, with probability at least 1 − δ over S, the generalization bound stated in the main text holds for all f ∈ F.

Proof. Since L_φ = L_u3, we can get its Lipschitz constant (i.e., 2ρ) and bounded value (i.e., 2B) from (2) in Lemma 1. Then, applying Theorem A.1 and the inequality R_r^{0/1}(f) ≤ R_{u3}(f) implied by InEq. (21), we can get this theorem.

A.3.3 Proof of Theorem 3

Theorem 3 (Learning guarantee of A_u4). Assume the loss function L_φ = L_u4, where L_u4 is defined in Eq. (14). Besides, Assumption 1 is satisfied. Then, for any δ > 0, with probability at least 1 − δ over S, the generalization bound stated in the main text holds for all f ∈ F.

Proof. Since L_φ = L_u4, we can get its Lipschitz constant (i.e., √c ρ) and bounded value (i.e., cB) from (3) in Lemma 1. Then, applying Theorem A.1 and the inequality R_r^{0/1}(f) ≤ R_{u4}(f) implied by InEq. (20), we can get this theorem.

B Consistency Analyses

Recall that the ranking loss and the partial ranking loss are defined as

$$L_r(f(x), y) = \frac{1}{|S_y^+||S_y^-|} \sum_{(p,q) \in S_y^+ \times S_y^-} \Big( [[f_p(x) < f_q(x)]] + [[f_p(x) = f_q(x)]] \Big) \qquad (36)$$

and

$$L_{pr}(f(x), y) = \frac{1}{|S_y^+||S_y^-|} \sum_{(p,q) \in S_y^+ \times S_y^-} \Big( [[f_p(x) < f_q(x)]] + \frac{1}{2}[[f_p(x) = f_q(x)]] \Big), \qquad (37)$$

respectively. For generality, following [9,8], we do not specify the penalties in the losses at the beginning.
Recall that the general ranking loss is defined as

$$\bar{L}_r(f(x), y) = \alpha_y \sum_{(p,q) \in S_y^+ \times S_y^-} \Big( [[f_p(x) < f_q(x)]] + [[f_p(x) = f_q(x)]] \Big), \qquad (38)$$

where α_y is a positive penalty, and the general partial ranking loss is of the similar form

$$\bar{L}_{pr}(f(x), y) = \alpha_y \sum_{(p,q) \in S_y^+ \times S_y^-} \Big( [[f_p(x) < f_q(x)]] + \frac{1}{2}[[f_p(x) = f_q(x)]] \Big). \qquad (39)$$

The commonly used ranking loss and partial ranking loss are the special cases of Eq. (38) and Eq. (39) with α_y = 1/(|S_y^+||S_y^-|), respectively. Also, recall that the general reweighted univariate surrogate loss is defined as follows:

$$\bar{L}_u(f(x), y) = \beta_y^+ \sum_{j \in S_y^+} \ell(f_j(x)) + \beta_y^- \sum_{j \in S_y^-} \ell(-f_j(x)), \qquad (40)$$

where β_y^+ and β_y^- are positive penalties. All univariate surrogate losses mentioned in the main text are special cases of Eq. (40), respectively. Let B_L(x, P(y|x)) denote the set of the Bayes predictors of a loss L given a data point x and a conditional distribution P(y|x). Remarkably, a sufficient and necessary condition (called multi-label consistency [8]) for a surrogate loss to be (Fisher) consistent w.r.t. the (partial) ranking loss is presented in the following Lemma 3.

Lemma 3 (Multi-label consistency [8]). A surrogate loss L is consistent w.r.t. a 0/1 loss L^{0/1}, including the general ranking loss in Eq. (38) and the general partial ranking loss in Eq. (39), if and only if ∀x and P(y|x), B_L(x, P(y|x)) ⊂ B_{L^{0/1}}(x, P(y|x)).

For convenience, we define

$$\Delta_{pq}^{rk} = \sum_{y: y_p = s_r, y_q = s_k} \alpha_y P(y|x) \quad \text{and} \quad \Delta_p^r = \sum_{y: y_p = s_r} \alpha_y P(y|x), \qquad (41)$$

where r, k ∈ {+, −}, s_+ = +1 and s_- = −1. The following Lemma B.1 characterizes the set of the Bayes predictors w.r.t. the general ranking loss in Eq. (38) and the general partial ranking loss in Eq. (39).

Lemma B.1 (Bayes predictor of (partial) ranking loss [8]). For all x and P(y|x), the set of Bayes predictors w.r.t. the general ranking loss in Eq. (38) is given by

$$B = \big\{ f : f_p(x) > f_q(x) \text{ if } \Delta_{pq}^{+-} > \Delta_{pq}^{-+}, \text{ and } f_p(x) \ne f_q(x) \text{ if } \Delta_{pq}^{+-} = \Delta_{pq}^{-+} > 0 \big\},$$

and the set of Bayes predictors w.r.t. the general partial ranking loss in Eq. (39) is given by

$$B = \big\{ f : f_p(x) > f_q(x) \text{ if } \Delta_{pq}^{+-} > \Delta_{pq}^{-+} \big\}.$$

Similarly to Eq. (41), we define φ_j^+ = Σ_{y: y_j = +1} β_y^+ P(y|x) and φ_j^- = Σ_{y: y_j = −1} β_y^- P(y|x).

Lemma B.2 (Bayes predictor of Eq. (40) with exponential and logistic loss). For all x and P(y|x), the Bayes predictor w.r.t. the general reweighted univariate surrogate loss in Eq. (40) with ℓ(z) = e^{−z} or ℓ(z) = ln(1 + e^{−z}) is given by f_j^*(x) = C ln(φ_j^+/φ_j^-) for all j ∈ [c], where C = 1/2 if ℓ(z) = e^{−z} and C = 1 if ℓ(z) = ln(1 + e^{−z}).

Proof. Because Σ_y P(y|x) = 1 for any x and we assume that the penalties are positive, then ∀1 ≤ j ≤ c, φ_j^+ + φ_j^- > 0. Note that both the exponential loss and the logistic loss are strictly monotonically decreasing functions. We now discuss the case where φ_j^+ φ_j^- > 0. For the exponential loss ℓ(z) = e^{−z}, we consider g(z) = ae^{−z} + be^{z} for a > 0 and b > 0. It achieves its minimum at z* = (1/2) ln(a/b). To see this, take the gradient up to the second order and get g'(z) = −ae^{−z} + be^{z} and g''(z) = ae^{−z} + be^{z}. Since g''(z) > 0 for all z, g(z) is strictly convex; setting g'(z*) = 0 yields z* = (1/2) ln(a/b). For the logistic loss ℓ(z) = ln(1 + e^{−z}), we consider g(z) = a ln(1 + e^{−z}) + b ln(1 + e^{z}) for a > 0 and b > 0. It achieves its minimum at z* = ln(a/b). To see this, take the gradient up to the second order and get g'(z) = −a/(1 + e^{z}) + b/(1 + e^{−z}), which is strictly increasing in z; hence g(z) is convex, and setting g'(z*) = 0 yields z* = ln(a/b). Combining all cases together completes the proof.

Lemma B.4 (Bayes predictor of Eq. (40) with hinge loss). For all x and P(y|x), the set of Bayes predictors w.r.t. the general reweighted univariate surrogate loss in Eq. (40) with ℓ(z) = max(0, 1 − z) is given coordinatewise by f_j^*(x) = sgn(φ_j^+ − φ_j^-) whenever φ_j^+ ≠ φ_j^-, and f_j^*(x) may take any value in [−1, 1] when φ_j^+ = φ_j^-. (Note that, because Σ_y P(y|x) = 1 for any x and we assume that the penalties are positive, then ∀1 ≤ j ≤ c, φ_j^+ + φ_j^- > 0.)

Combining Case 1.1 and Case 1.2 together, for all 1 ≤ p < q ≤ c, there exists τ > 0 such that β_y^+ β_y^- = τ α_y² for all y such that y_p y_q = −1.

Step 2: Note that the values of τ in Step 1 may depend on p and q. Now we prove that there exists a universal τ for all 1 ≤ p < q ≤ c. For any nontrivial y ≠ y', we can find 1 ≤ p < q ≤ c and 1 ≤ p' < q' ≤ c such that y_p y_q = −1 and y'_{p'} y'_{q'} = −1. We consider four cases. Case 2.1: Two pairs of indices match, namely, p = p' and q = q'. This case has been proven in Step 1.
Case 2.2: No index matches, for c ≥ 4; namely, p ≠ p', q ≠ q', p ≠ q' and p' ≠ q. We can construct y'' such that y''_p = y_p, y''_q = y_q, y''_{p'} = y'_{p'} and y''_{q'} = y'_{q'}, and chain the equalities obtained in Step 1 through y''. Case 2.3 and Case 2.4, where exactly one pair of indices matches, follow by similar constructions.

In L_u1, according to the definition, we have β_y^+ = β_{y'}^+ = β_y^- = β_{y'}^- = 1/c. It is easy to check that β_y^+ β_y^- / α_y² = (c − 1)²/c², while β_{y'}^+ β_{y'}^- / α_{y'}² = 4(c − 2)²/c²; these differ for c ≥ 4, so no universal τ exists. In L_u3, according to the definition, we have β_y^+ = 1, β_{y'}^+ = 1/2, β_y^- = 1/(c − 1), and β_{y'}^- = 1/(c − 2). It is easy to check that β_y^+ β_y^- / α_y² = c − 1, while β_{y'}^+ β_{y'}^- / α_{y'}² = 2(c − 2); these differ for c ≥ 4. In L_u4, according to the definition, we have β_y^+ = 1, β_{y'}^+ = 1/2, β_y^- = 1, and β_{y'}^- = 1/2, for all c ≥ 4. It is easy to check that β_y^+ β_y^- / α_y² = (c − 1)², while β_{y'}^+ β_{y'}^- / α_{y'}² = (c − 2)²; these always differ. According to Theorem 4 and Proposition B.1, the above surrogate losses are not consistent w.r.t. the partial ranking loss in Eq. (37).

C Additional Experimental Results

The complete experimental results (with standard deviations) are summarized in Table 5. Besides, the computational costs of all five algorithms on the benchmark datasets are shown in Figure 1. From Figure 1, we can observe that A_pa with the pairwise loss is much slower than the other four algorithms with univariate losses, especially when the label space is large. Note that the CPU time is plotted in log scale in Figure 1.
GM-CSF Protects Macrophages from DNA Damage by Inducing Differentiation

At inflammatory loci, pro-inflammatory activation of macrophages produces large amounts of reactive oxygen species (ROS) that induce DNA breaks and apoptosis. Given that M-CSF and GM-CSF induce two different pathways in macrophages, one for proliferation and the other for survival, in this study we wanted to determine whether these growth factors are able to protect against the DNA damage produced during macrophage activation. In macrophages treated with DNA-damaging agents, we found that GM-CSF protects against DNA damage better than M-CSF. Treatment with GM-CSF resulted in faster recovery from DNA damage than treatment with M-CSF. The number of apoptotic cells induced after DNA damage was higher in the presence of M-CSF. Protection against DNA damage by GM-CSF is not related to a higher capacity to induce proliferation. GM-CSF induces differentiation markers such as CD11c and MHCII, as well as the pro-survival Bcl-2A1 protein, which make macrophages more resistant to DNA damage.

Introduction

Macrophages play a central role in immune response and tissue homeostasis. These cells have two functions in inflammation, initially being pro-inflammatory, destroying infectious agents and infected tissues, and then being anti-inflammatory, repairing the damaged tissues [1]. In the early stages of inflammation, after interaction with inflammatory activators such as lipopolysaccharide (LPS) and interferon γ (IFN-γ), macrophages produce reactive oxygen species (ROS) through adaptation of mitochondrial respiration by mitofusin 2 [2]. ROS are a variety of molecules and free radicals derived from molecular oxygen that are essential mediators of a variety of macrophage functions [3]. ROS are a useful tool for eliminating infectious agents, but they are also DNA-damaging agents, putting the survival of monocytes and macrophages at risk [4,5] and therefore blocking the tissue-reconstruction phase, which can lead to chronic inflammation. We showed that macrophages have systems to repair DNA breaks when they become activated or injured [5]. Following DNA damage, a number of molecules are specifically induced in macrophages by pro- but not by anti-inflammatory activators, such as Trex1 [6], NBS1 [5] and SAMHD1 [7]. In fact, the malfunction of genes responsible for clearing free nucleic acid fragments inside cells leads to the accumulation of intracellular nucleic acids and activation of sensors of the innate immune system. These free nucleic acids in turn induce the production of type I interferons associated with pathological conditions included under the term type I interferonopathies.

RNA Extraction and Real-Time RT-PCR

Total RNA was extracted, purified and treated with DNase using the ReliaPrep RNA System kit (Promega, Madison, WI, USA), as recommended by the manufacturer. 400 ng of RNA was retrotranscribed to cDNA using the Moloney murine leukemia virus (MMLV) reverse transcriptase RNase H Minus (Promega), following the manufacturer's specifications. Quantitative PCR (qPCR) was performed using SYBR Green Master Mix (Applied Biosystems, Waltham, MA, USA), as recommended by the manufacturer. Non-retrotranscribed RNA samples were used as negative controls for each gene. When signal was detected in these negative controls (<32 Ct), the primer pairs used were discarded and replaced with alternative primers for the same gene. Furthermore, the amplification efficiency for each pair of primers was calculated by making a standard curve of serially diluted cDNA samples.
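For illustration, the standard-curve calculation just described can be reproduced with a linear fit of Ct against the log10 input amount: a slope of about −3.32 corresponds to 100% efficiency via E = 10^(−1/slope) − 1. A minimal sketch with invented Ct values (not data from this study):

```python
import numpy as np

def amplification_efficiency(dilutions, ct_values):
    """Slope of Ct vs log10(input amount) gives E = 10^(-1/slope) - 1."""
    slope, _intercept = np.polyfit(np.log10(dilutions), np.asarray(ct_values), 1)
    return (10 ** (-1.0 / slope) - 1.0) * 100.0   # percent efficiency

# Hypothetical 10-fold serial dilution of cDNA and measured Ct values.
dil = [1, 0.1, 0.01, 0.001]
ct = [18.1, 21.5, 24.8, 28.2]
print(f"E = {amplification_efficiency(dil, ct):.1f}%")   # ~98%: within 100 +/- 10%
```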
Only the pairs of primers with an amplification efficiency of 100 ± 10% were used. Real-time monitoring of PCR amplification was performed in the ABI Prism 7900 Sequence Detection System (Applied Biosystems). Data were analyzed by the ∆∆Ct method [22] using Biogazelle qbase+ software (Biogazelle, Gent, Belgium). Gene expression was normalized to three reference genes (i.e., housekeeping genes): Hprt1, L14 and Sdha. The stability of these reference genes was confirmed by checking that their geNorm M value was lower than 0.5 [23]. The primers used are described in Supplementary Materials Table S2.

Proliferation Assay

Macrophage proliferation was measured by [³H]-thymidine incorporation as described previously [24]. Cells were deprived of M-CSF for 16-18 h, and then 10⁵ cells were incubated in 24-well plates (Costar, Washington, DC, USA) for 24 h in DMEM and 20% FBS in the presence of the growth factor. After this period, the medium was replaced with medium containing [³H]-thymidine. After 6 additional hours of incubation, the medium was removed and the cells were fixed in ice-cold 70% methanol. After three washes, the cells were solubilized and their radioactivity was measured. Each experiment was performed in triplicate, and the results are expressed as the mean ± SD. For cell counting we used a hemocytometer.

Cytometry

The phenotypic analysis of macrophages was conducted by direct immunofluorescence using flow cytometry. Cell recovery from culture plates was facilitated by treatment with trypsin (Biological Industries, Cromwell, CO, USA). Cells were resuspended in PBS and incubated with rat anti-CD16/32 (BD Pharmingen, San Diego, CA, USA) at 4 °C for 30 min to block nonspecific binding. Then, antibodies against the different markers were used in conjunction with their respective isotype controls. The antibodies used for cytometry are detailed in Supplementary Materials Table S1. Samples were analyzed using a Gallios multi-color flow cytometer (Beckman Coulter, Brea, CA, USA) set up with the standard three-laser, 10-color configuration. Excitation of DAPI was conducted using a violet (405 nm) laser. Forward scatter (FS), side scatter (SS) and FL9 (450/40 nm) fluorescence emitted by DAPI were collected. Aggregates were excluded by gating single cells according to their area vs. peak fluorescence signal. DNA (ploidy) analysis on single-fluorescence histograms was performed using Multicycle software (Phoenix Flow Systems, San Diego, CA, USA).

Apoptosis

Apoptosis was determined by incubating BMDMs with the Annexin V-FITC Apoptosis Detection Kit, following the manufacturer's instructions. Live or viable (double-negative), necrotic (DAPI-positive), early apoptotic (annexin V-positive) and late apoptotic (double-positive) cell populations were detected by flow cytometry (Supplementary Materials Figure S5). Dead cells included those in early and late apoptosis and necrotic cells. Viability was quantified by staining the cells with Crystal Violet staining solution (0.5%) (Sigma-Aldrich) [25].

Cell Cycle Analysis

The cell cycle was analyzed as described previously [26]. BMDMs (10⁶) were cultured in DMEM + 10% FBS in 12-well plates for 16 h. They were then left unstimulated or treated as specified for 24 h and then fixed with 95% ethanol. Next, cells were incubated with propidium iodide (PI) (Sigma-Aldrich) and RNase A (Sigma-Aldrich). Cell cycle distributions (G1, S and G2) were analyzed on the basis of PI staining.
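The relative-quantification step described in the RT-PCR subsection above can be sketched as follows; normalizing to several reference genes corresponds, in Ct space, to subtracting the mean of their Ct values (the geometric mean of the linear quantities). The Ct numbers below are invented for illustration and assume ~100% amplification efficiency:

```python
import numpy as np

def relative_expression(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    """Delta-delta-Ct with a multi-reference normalization factor.

    ct_refs / ct_refs_cal: Ct values of the reference genes (e.g., Hprt1,
    L14, Sdha) in the sample and in the calibrator condition. Assumes a
    doubling of product per cycle (~100% efficiency).
    """
    # The arithmetic mean of reference Cts equals the geometric mean of
    # the corresponding linear quantities.
    d_ct = ct_target - np.mean(ct_refs)
    d_ct_cal = ct_target_cal - np.mean(ct_refs_cal)
    return 2.0 ** -(d_ct - d_ct_cal)          # fold change vs. calibrator

# Hypothetical Ct values: treated sample vs. untreated calibrator.
print(relative_expression(24.0, [20.1, 19.8, 21.0], 26.5, [20.0, 19.9, 21.1]))
```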
The gating strategy for cell cycle analysis is shown in Supplementary Figure S2.

Western Blot Protein Analysis

To obtain total protein lysates, BMDMs (at least 10⁶ cells) were washed in cold PBS and lysed with TGH-NaCl (1% Triton X-100, 10% glycerol, 50 mM HEPES and 250 mM NaCl) plus protease inhibitors, as indicated previously [5,27]. Lysates were centrifuged to remove cellular debris. The protein concentration was determined using the Bradford Protein Assay (Bio-Rad Laboratories, Hercules, CA, USA). Total protein lysates (50 µg) were separated by SDS-PAGE and transferred to polyvinylidene difluoride (PVDF) membranes using the iBlot2 system (Thermo Fisher, Waltham, MA, USA), following the manufacturer's instructions. Membranes were blocked for 1 h at room temperature in blocking buffer (5% dry milk in TBS-0.1% Tween 20) and then incubated with primary antibody in blocking buffer for 16 h at 4 °C. The antibody concentrations are given in Supplementary Materials Table S1. The membranes were then washed three times for 5 min with TBS-Tween and incubated for 1 h at room temperature with the corresponding HRP-conjugated secondary antibody diluted 1:1000 in blocking buffer. After washing as before, ECL detection was performed, and the membranes were exposed to X-ray films. When necessary, band intensity was quantified using the open-source image analysis software Fiji [28]. For histone H2AX western blots, an acid extraction of proteins was performed as described by the anti-phospho-H2AX antibody manufacturer. Briefly, cells were washed in cold PBS and lysed with lysis buffer (10 mM HEPES pH 7.9, 1.5 mM MgCl₂, 10 mM KCl, 0.5 mM DTT and 1.5 mM PMSF). Hydrochloric acid was added to a final concentration of 0.2 N and the cell lysate was incubated on ice for 30 min. After centrifugation at 11,000× g for 10 min at 4 °C, supernatants were dialyzed twice against 200 mL of 0.1 M acetic acid for 1-2 h and three times against 200 mL of H₂O for 1 h, 3 h and overnight, respectively. For NBS1 western blots, a chromatin acid extraction was performed as described [5], with some modifications. After cell lysis, nuclei were collected by centrifugation at 2000× g for 5 min at 4 °C. The supernatant was discarded and the pellet was resuspended in 2-5 volumes of ice-cold 0.2 N HCl and incubated on ice for 20 min. After centrifugation at 2000× g for 10 min at 4 °C, the supernatant was neutralized with the same volume of 1 M Tris-HCl pH 8.

DNA Damage Susceptibility Assay

To analyze the susceptibility of macrophages to DNA damage, we treated the cells with etoposide (Tocris, Ellisville, MO, USA) or hydrogen peroxide (Sigma-Aldrich). Unless specified otherwise, the concentrations of etoposide and hydrogen peroxide used were 50 µM and 250 µM, respectively. Cells were washed and left in complete medium (DMEM + 10% FCS + M-CSF or GM-CSF at the indicated concentrations) for different periods of time, as indicated.

Fraction of Activity Released (FAR) Assay

The number of double-strand breaks (DSBs) in cells treated with etoposide or hydrogen peroxide was monitored using the FAR assay [29]. After exposure to the drug, the cells (10⁵) were kept on ice at all times, and all solutions added to the cells were ice-cold. The cells were centrifuged, resuspended in PBS supplemented with 0.2 mg/mL sheared herring sperm DNA and 56 mM β-mercaptoethanol to inactivate excess calicheamicin γ1, and incubated on ice for 5 min.
The cells were then washed with PBS, and 150,000 cells were mixed with melted agarose (1.25% type VII in PBS with 5 mM EDTA) and transferred to a plug mold. The cells in the plug were then lysed at 4 °C for a minimum of 24 h in lysis buffer (25 mM EDTA, pH 8.5, 0.5% SDS, 3 mg/mL proteinase K added just prior to lysis). Longer incubation times did not alter the quality of the data. The DNA was then resolved using agarose gel electrophoresis (0.7%) in 1× TAE (0.04 M Tris acetate, 1 mM EDTA, pH 8) at 4 °C for 17 h at 2 V/cm. The relative amount of cellular DNA migrating into the gel (FAR) was quantified using laser scanning equipment to calculate the number of DSBs.

Senescence-Associated β-Galactosidase Staining

The cells (10⁵) were washed in phosphate-buffered saline (pH 7.4) and fixed with 2% formaldehyde and 0.2% glutaraldehyde for 10 min at room temperature. After being washed twice, the cells were incubated at 37 °C for 4 h in a humidified chamber with freshly prepared staining solution (1 mg/mL X-Gal (Sigma-Aldrich) in dimethylformamide, 40 mM citric acid/phosphate buffer, pH 6.0, 5 mM potassium ferrocyanide, 5 mM potassium ferricyanide, 150 mM sodium chloride and 2 mM magnesium chloride). At the end of the incubation, the senescence-associated β-galactosidase staining rate was calculated by counting four random fields per dish and assessing the percentage of senescence-associated β-galactosidase-positive cells among 100 cells per field [30].

Quantification and Statistical Analysis

Data were analyzed using the unpaired Student's t-test, as indicated in each figure legend. When two or more variables were compared, a one-way ANOVA test followed by a Bonferroni correction was used, as indicated in the figure legends. Center, dispersion and n are defined in each figure legend. For all analyses, significance was set at p < 0.05. Statistical analyses were performed using GraphPad Prism 9.0 software.

GM-CSF Induced Increased Protection against DNA Damage in Relation to M-CSF

Bone marrow-derived macrophages are a homogeneous population of non-transformed cells that require growth factors for proliferation and survival [17]. M-CSF is the most potent and specific growth factor for these cells [16]. However, GM-CSF also promotes macrophage proliferation and survival [17]. To determine the effect of these growth factors on the macrophage response to DNA-damaging agents, we first studied their susceptibility to treatment with etoposide, a topoisomerase II inhibitor used as an anticancer drug that causes DSBs [31]. Macrophages obtained from bone marrow cultures with M-CSF were starved of growth factor for 18 h and grown for 24 h in the presence of M-CSF or GM-CSF (Figure 1A). Then, macrophages were treated with the DNA-damaging agent etoposide for 1 h. After that, cells were washed, M-CSF or GM-CSF was added to the media, and 3 h later we analyzed the DSBs using the FAR test (Figure 1A). When we treated macrophages growing in the presence of M-CSF, etoposide induced DSBs in the DNA that increased in a dose-dependent manner as the amount of the drug was increased (Figure 1A,B). However, when we cultured macrophages in medium containing GM-CSF, there was a significant reduction in susceptibility to this DNA-damaging agent (Figure 1A,B).
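The FAR readout described in the Methods reduces to a simple ratio: the densitometric signal that migrated into the gel divided by the total lane signal. A minimal sketch with invented lane profiles (not data from this study):

```python
import numpy as np

def fraction_released(lane_profile, well_end_index):
    """FAR = signal that migrated into the gel / total lane signal.

    lane_profile: 1-D densitometry trace of one lane (well at index 0);
    well_end_index: first position considered 'inside the gel'. Both the
    trace and the cut-off are assumptions of this sketch.
    """
    profile = np.asarray(lane_profile, dtype=float)
    return profile[well_end_index:].sum() / profile.sum()

# Hypothetical traces: more DNA enters the gel after etoposide with M-CSF.
untreated = [900, 40, 20, 10, 5]
etoposide_mcsf = [400, 180, 150, 120, 100]
print(fraction_released(untreated, 1), fraction_released(etoposide_mcsf, 1))
```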
(Figure 1 legend, in part: each experiment was performed in triplicate, and the results are shown as the mean ± SD; ** p < 0.01 and *** p < 0.001 in relation to the corresponding treatments with M-CSF or GM-CSF when all the independent experiments were compared; data were analyzed using Student's t-test, with the exception of (D), which was analyzed using an ANOVA test.)

We also used hydrogen peroxide to increase the oxidative stress in the cell, leading to base damage and DNA breaks [32]. This reagent also produced DSBs in macrophages, although in lower quantities than etoposide. Interestingly, GM-CSF again gave significantly better protection than M-CSF (Supplementary Materials Figure S3A,B). To explore the effect of etoposide on single-strand breaks (SSBs), we determined the expression of common markers such as phosphorylated replication protein A (RPA) [33]. While etoposide affected the macrophages incubated with M-CSF in a dose-dependent manner, no effect was observed in those incubated with GM-CSF (Figure 1C,D). Similar results were obtained when we used hydrogen peroxide instead of etoposide (Supplementary Figure S3C). Thus, the protective effect of GM-CSF seemed to be independent of the type of DNA damage.

GM-CSF Induced More Rapid Recovery from DNA Damage Than M-CSF

Next, we wanted to determine the influence of GM-CSF and M-CSF on the capacity to repair DSBs. The effect of etoposide on DNA breaks was assessed by detecting the induction of γ-H2AX, the S139-phosphorylated form of the H2A histone family member X (H2AX, encoded by the H2AFX gene), which forms when DSBs appear [34,35]. After treatment with etoposide for 1 h, cells were allowed to recover from the DNA damage. After 3 h in the presence of GM-CSF, there was a significant decrease in γ-H2AX (Figure 2A,B). When we used hydrogen peroxide instead of etoposide, similar results were observed (Supplementary Materials Figure S3D). Macrophages were first treated with M-CSF for 24 h, then treated with etoposide for 1 h, washed, and incubated with either M-CSF or GM-CSF (Figure 2C,D). Under these conditions, the damage caused by etoposide is the same for all cells that will later be incubated with M-CSF or GM-CSF. When we analyzed the recovery from DNA damage after 24 h, there was also a significant difference between GM-CSF and M-CSF (Figure 2C,D). However, after 48 h in the presence of either GM-CSF or M-CSF, macrophages showed no increase in γ-H2AX, suggesting that in both cases the DNA was repaired. This experiment demonstrates that the ability to repair damaged DNA in cells incubated with GM-CSF is greater than in those incubated with M-CSF.
(Supplementary figure legend, in part: after 24 h of treatment with M-CSF or GM-CSF, the percentages of CD11b- and F4/80-positive cells, as well as of MHCII- and CD11c-positive cells, were similar before and after treatment with etoposide or hydrogen peroxide.)

These data were confirmed when we measured the induction of phosphorylated p53, another marker associated with DSBs, after etoposide treatment [36]. There was a significant decrease in phosphorylated p53 in macrophages incubated with GM-CSF 3, 6 or 24 h after etoposide treatment, in relation to those incubated with M-CSF (Figure 2E,F). These results were confirmed when we used hydrogen peroxide instead of etoposide (Supplementary Materials Figure S3E). To confirm our results, we determined the expression of the nuclear Nijmegen breakage syndrome (NBS1) protein. Together with the meiotic recombination 11 homolog (MRE11) and RAD50, NBS1 forms the MRE11 complex, a DSB sensor that regulates the DNA damage response (DDR) and the repair of DSBs [37]. Break detection by the MRE11 complex activates the ataxia telangiectasia mutated (ATM) kinase, which promotes a robust DDR that includes the activation of checkpoint kinase 2 (CHK2) and the tumor suppressor p53 [38]. In macrophages treated with etoposide or hydrogen peroxide, NBS1 was obtained from the nucleus using chromatin acid extraction. The levels of phosphorylated NBS1 after treatment with etoposide or H₂O₂ were lower in cells incubated with GM-CSF than in those incubated with M-CSF (Figure 2G).

DNA Damage Induced Cell Cycle Impairment

Exposure of eukaryotic cells to different DNA-damaging agents, such as ionizing radiation, UV light or reactive oxygen species (ROS), causes several types of DNA damage ranging from base modifications to DNA backbone breaks [10]. The cell responds to this damage by activating the DNA damage response pathway, leading to cell cycle arrest, which provides enough time for efficient DNA repair [11]. Both M-CSF and GM-CSF induce macrophage proliferation, but M-CSF is more powerful than GM-CSF. Thus, within 24 h under saturating concentrations, M-CSF almost doubles the number of cells, whereas GM-CSF induces lower amounts of proliferation (Figure 3A) [17]. However, when cells were treated with etoposide and subsequently cultured with growth factors, there were fewer cells. Interestingly, under these conditions the number of cells treated with GM-CSF was higher than that of those treated with M-CSF (Figure 3A). Similar results were found when hydrogen peroxide was used instead of etoposide (Supplementary Materials Figure S3F). These results are likely related to the capacity to repair DNA after genotoxin treatment.

(Figure 3 legend, in part: each experiment was performed in triplicate, and the results are shown as the mean ± SD; * p < 0.05, ** p < 0.01 and *** p < 0.001 in relation to the corresponding treatments with M-CSF or GM-CSF when all the independent experiments were compared; data were analyzed using Student's t-test, with the exception of (C), which was analyzed using an ANOVA test.)
Data were analyzed using Student's t-test, with the exception of (C), which was performed using ANOVA test. . Each experiment was performed in triplicate, and the results are shown as the mean ± SD. * p < 0.05, ** p < 0.01 and *** p < 0.001 in relation to the corresponding treatments with M-CSF or GM-CSF, when all the independent experiments had been compared. Data were analyzed using Student's t-test, with the exception of (C), which was performed using ANOVA test. When there is abnormal processing of DNA, the replication checkpoints act as a brake and stop cycling, allowing DNA repair systems to correct replication errors [39]. In fact, if there is significant DNA damage that cannot be repaired successfully, cells will undergo apoptosis. However, if the DNA breaks are repaired, checkpoint signals will be attenuated and the cell cycle will be restarted and cells re-enter the cell cycle. The percentage of macrophages at the G2/M stage of the cell cycle increased after etoposide treatment, but there was a significant difference after this treatment when cells were incubated with M-CSF or with GM-CSF ( Figure 3B). As shown herein, the damaging effect of etoposide prompts the expression of phosphorylated p53, a key regulator of the G1/S checkpoint [40], and induces the expression of stress sensors that modulate the response of mammalian cells to genotoxic stress such as growth arrest and DNA damage (e.g., Gadd45), which interact with other proteins implicated in stress responses [41]. The incubation of cells with etoposide induces the expression of the cyclin-dependent kinases (CDK) inhibitor p21 waf−1 and reduces that of Gadd45 and cyclin B1, which explains the arrest of the cell cycle due to DNA damage ( Figure 3C) [42]. GM-CSF Induced the Expression of Anti-Apoptotic B-Cell Lymphoma 2 (Bcl-2) A1 Following the induction of DNA damage, an important route of cell inactivation is apoptosis. A large number of specific DNA lesions that trigger apoptosis have been identified [39]. These mechanisms are crucial for preventing the replication and propagation of potentially deleterious mutations. Because of DNA breaks after etoposide treatment, the number of apoptotic macrophages increased [43], and after 24 h of growth factor treatment, there was a significant difference between M-CSF and GM-CSF ( Figure 4A). Similar results were obtained when hydrogen peroxide was used as a DNA-damaging agent (Supplementary Materials Figure S3G). To determine the mechanism of apoptosis protection, we analyzed the levels of phosphorylated Akt, which is induced by M-CSF and GM-CSF for macrophage survival and is independent of the proliferation pathway [17]. With both treatments, phosphorylated Akt was induced after etoposide treatment ( Figure 4B) or hydrogen peroxide (Supplementary Materials Figure S3H). These results prompted us to measure the expression of genes whose products are involved in apoptosis after treating the cells with etoposide or hydrogen peroxyde. No differences were obtained between M-CSF or GM-CSF treatments when Bcl-2, Bcl-X L or Bax were measured; however, Bcl-2A1 was notably induced by GM-CSF ( Figure 4C). Bcl-2A1 is a highly regulated NF-κB target gene that has important pro-survival functions [44]. In fact, Bcl-2A1 was induced in macrophages that had been incubated with GM-CSF previously in response to etoposide damage ( Figure 4D) and hydroxide treatment (Supplementary Materials Figure S4). 
DNA Damage Was Independent of Proliferation

Although M-CSF and GM-CSF are growth factors, macrophage proliferation was higher in the presence of M-CSF (Figure 3A). Because highly proliferative cells are more sensitive to DNA damage [45], we wanted to rule out the possibility that the protective effect of GM-CSF versus M-CSF was related to the decreased proliferative capacity elicited by macrophages in response to this growth factor. In a first attempt, we compared the dose-dependent induction of macrophage proliferation by M-CSF and GM-CSF. We observed similar proliferation when we incubated the cells with 2 ng/mL of M-CSF or GM-CSF (Figure 5A). Then, we treated the cells with the same amount of growth factors and added etoposide. Under these conditions, the viability of macrophages was impaired in the presence of M-CSF in relation to GM-CSF, independently of the amount of growth factor used (Figure 5B). The doses of growth factors did not affect the viability of cells treated with hydrogen peroxide (Figure 5B). Moreover, we treated macrophages growing in the presence of M-CSF with etoposide or hydrogen peroxide, and after washing we added either M-CSF or GM-CSF. Under these experimental conditions, we again observed a greater protective effect of GM-CSF over M-CSF (Figure 5C). Together, these results indicate that GM-CSF gives greater protection than M-CSF to bone marrow-derived macrophages against DNA-damaging agents, independently of the proliferative status of the cell.

(Figure 5 legend, in part: … ng/mL) or GM-CSF (2 ng/mL) for 24 h, after which viability was quantified using Crystal Violet (n = 3); each experiment was performed in triplicate, and the results are shown as the mean ± SD; ** p < 0.01 and *** p < 0.001 in relation to the corresponding treatments with M-CSF or GM-CSF when all the independent experiments were compared; data were analyzed using Student's t-test, with the exception of (B,C), which were analyzed using an ANOVA test.)

GM-CSF Induced CD11c+ MHC II+ Cells That Were Resistant to the Effect of Etoposide

In the presence of GM-CSF, macrophages not only become activated but also express several molecules related to differentiation. GM-CSF activates the transcription factor PU.1, and a series of target genes, such as CD11b, mannose receptor, TLR4 and TLR2, are induced and their products expressed on the surface of macrophages [19,46]. After 72 h, more than 22% of macrophages incubated with GM-CSF, but not with M-CSF, expressed CD11c and MHC II, which are markers of dendritic cell differentiation (Figure 6A and Supplementary Materials Figure S5) [47]. GM-CSF and M-CSF differentially induce the expression of other markers: GM-CSF induces the expression of Il-1β, Mannose receptor and Bcl2-A1, while M-CSF induces Tnf-α and Macrophage scavenger receptor 1 (Figure 6B). These opposing effects of M-CSF and GM-CSF on macrophages have been linked to "M2 and M1 macrophages" [48]. Finally, other macrophage surface markers, such as CD11b and F4/80, were expressed on almost 100% of macrophages, regardless of whether they were cultured with M-CSF or GM-CSF (Supplementary Materials Figure S6A,B). The initially reduced percentage of F4/80-positive macrophages before the addition of M-CSF or GM-CSF is due to the 18-h deprivation of M-CSF used to render the cells quiescent.

By staining with DAPI and annexin V, we were able to differentiate necrotic, apoptotic and live cells (Supplementary Materials Figure S7). After 24 h of treatment with GM-CSF, the population that expressed CD11c and MHC II included significantly fewer necrotic and apoptotic cells in relation to the CD11c- and MHC II-negative population, both before and after etoposide treatment (Figure 7A). Accordingly, the more differentiated population contained more living cells. To confirm the GM-CSF-induced differentiation of macrophages, we stained for senescence-associated β-galactosidase, a marker of aging and differentiation (Figure 7B) [49]. Among cells treated with GM-CSF, there was a significant increase in cells expressing this lysosomal enzyme in relation to those treated with M-CSF (Figure 7B). Finally, macrophages treated with GM-CSF expressed significantly higher levels of the cyclin-dependent kinase inhibitor p21waf1, which is associated with cellular aging [50] (Figure 7C).
Cellular proliferation is associated with DNA damage, being the basis for many chemotherapeutic agents used in the treatment of cancer or autoimmune diseases [60]. GM-CSF is an important factor that induces differentiation [61]. Our results showed that in macrophages incubated with GM-CSF for 24 h a population of cells emerged with the cell surface markers CD 11c and MHC class II, suggesting that they had become differentiated [19,46]. It has recently been shown that the exposure of bone marrow macrophages to GM-CSF induces the expression of CD11c and MHC class II molecules in two groups of cells that comprise conventional dendritic cells and monocyte-derived macrophages that behave as separable entities [47]. The data presented in that study do not invalidate our results since in both cases we demonstrated the differentiation produced by GM-CSF, which translated into an increase in CD11c and MHC II. We also showed that GM-CSF induces other differentiation markers, such as lysosomal senescence-associated β-galactosidase staining, a marker of aging and differentiation [49]. Interestingly, the more differentiated macrophages were the more resistant to induction of apoptosis/necrosis. After etoposide treatment and 24 h incubation with GM-CSF, fewer apoptotic cells were observed in relation to those incubated with M-CSF. Fascinatingly, when we tested different anti-apoptotic genes, only Bcl-2A1 was induced by GM-CSF and not by M-CSF, confirming previous results [62]. Bcl2-A1 is a hematopoietic-specific protein and protects cells from apoptosis induced by a variety of apoptotic stimuli, such as DNAdamaging agents [63]. In response to GM-CSF, the BCL2-A1 gene is induced and is a direct transcriptional target of nuclear factor NF-κB [44]. The transcription factor Spi-B regulates human plasmacytoid dendritic cell survival through direct induction of the antiapoptotic gene BCL2-A1 [64]. BCL2A1 is able to reduce the release of pro-apoptotic cytochrome c from mitochondria and block caspase activation. The interaction of BCL2A1 with pro-apoptotic factors BAX and BAK has been described with conflicting results; however, it seems clear that BCL2A1 interacts with and blocks most of the BH3-only proteins that induce apoptosis through activation of BAX and BAK [65]. Our results could have implications for our understanding of the inflammatory process. Ly6C high monocytes in an inflammatory locus differentiate into macrophages with a proinflammatory profile, releasing a number of molecules (IL-1α, IL-1β, TNF-α and IL-12) that activate helper T cells and induce the production of GM-CSF [66,67]. This cross-talk between macrophages and helper T cells, as well as the autocrine production of GM-CSF by monocytes [68], are crucial in inflammatory loci. On the one hand, GM-CSF induces macrophage differentiation of monocytes to inflammatory dendritic cells that play an important role in innate and adaptive immunity [69] and monocyte-derived macrophages that are double positive for CD11c and MHC class II. On the other hand, GM-CSF protects these two types of cell from DNA damage produced by the high levels of ROS. Although both macrophages and dendritic cells require mechanisms to repair DNA damage such as Trex1 [27,70], they also require the presence of GM-CSF to enhance their survival. 
Indeed, macrophages need to survive the anti-inflammatory phase to repair the tissue damage, and dendritic cells need to move from the inflammatory loci to the lymph nodes, carrying the antigens and initiating the acquired immune response. It has been reported that GM-CSF blockade in murine models of inflammation decreased the severity of the disease [71][72][73]. DNA breaks in the cells of the immune system and, particularly in macrophages, play a key role in immune-senescence [74]. It is interesting to note that macrophages from aged mice showed increased susceptibility to oxidants and an accumulation of intracellular reactive oxygen species [32], which could be responsible for DNA breaks [4]. Finally, telomeres shorten with age in macrophages, leading, as a result of decreased phosphorylation of STAT5a, to a reduced GM-CSF response [32]. Therefore, the increased susceptibility in aged macrophages to DNA breaks together with the reduced capacity of GM-CSF to repair the breaks may contribute to the reduced functional capacity of these cells in aging. These observations, together with our results, emphasize the critical role of GM-CSF in protecting against DNA damage and ensuring the survival of macrophages in the homeostasis of the immune response. Supplementary Materials: The following supporting information can be downloaded at https:// www.mdpi.com/article/10.3390/cells11060935/s1, Figure S1: Hydrogen peroxide induced more double strand breaks (DSBs) in macrophages incubated with M-CSF than in those incubated with GM-CSF. Figure S2: Different CD11c and MHCII phenotypes induced in macrophages by M-CSF and GM-CSF. Figure S3: Similar CD11b and F4/80 phenotypes induced in macrophages by M-CSF and GM-CSF. Figure S4: Etoposide did not induce apoptosis or necrosis in CD11c-or MHCII-positive macrophages. Figure S5: Different CD11c and MHCII phenotypes induced in macrophages by M-CSF or GM-CSF. Figure S6: Similar CD11b and F4/80 phenotypes induced in macrophages by M-CSF or GM-CSF. Figure S7: Etoposide did not induce apoptosis or necrosis in CD11c-and MHCII-positive macrophages. Table S1: Primers used for real-time RT-PCR of mRNA. Table S2: Antibodies: identification, source, application and dilution used. Author Contributions: T.V., C.Y. and F.Z. performed the experiments and interpreted the results. M.C., C.S., J.L. and A.C. designed the study, interpreted the data, supervised the work and wrote the manuscript. All authors have read and agreed to the published version of the manuscript. Data Availability Statement: All relevant data is available from the authors upon reasonable request.
A Case of Esophago-Respiratory Fistula due to Inhalation Smoke Injury Diagnosed by Upper Endoscopy

Esophago-respiratory fistula (ERF) refers to the formation of a pathological connection between the esophagus and respiratory tract. Acquired ERF is a rare but life-threatening diagnosis in adults. We describe a 79-year-old male who was admitted with an inhalation smoke injury. He was diagnosed with ERF by endoscopic visualization and sampling of the hyaline cartilage within the wall of the esophagus. Percutaneous endoscopic gastrostomy placement and conservative measures were effective in the management of ERF.

Introduction
Esophago-respiratory fistula (ERF) refers to the formation of a pathological connection between the esophagus and respiratory tract. Although rare in adults, it can cause significant morbidity and mortality in patients [1,2]. Malignancy, trauma, and infections are the most common causes of the formation of these fistulas [1][2][3][4][5]. Tissue damage from intubation and endoscopic interventions, foreign body ingestions such as taco shells, and blunt chest injuries have been reported in the literature as traumatic events that cause ERF [4,6,7]. However, inhalation injury in burn patients is rarely reported as the underlying cause of ERF [8,9]. Here, we describe a patient who was admitted with an inhalation smoke injury and was found to have an ERF.

Case Report
A 79-year-old male was initially admitted for cardiac arrest after being rescued from a house fire. The hospital course was notable for intubation for smoke inhalation injury, confirmed by bronchoscopic findings of inflammation including moderate erythema, carbonaceous deposits, and bronchorrhea [10]. On the 30th day of admission, the patient developed acute melena and anemia while on a heparin drip for atrial fibrillation. Vitals were remarkable for hypotension and tachycardia, and his hemoglobin level was 6.5 g/dL. Esophagogastroduodenoscopy (EGD) was remarkable for a midesophageal ulcer and associated mucosal tear surrounded by granulation tissue. On the anterior wall of the esophagus, there was a yellow-tan foreign body that appeared to be embedded in the esophageal wall (Figure 1). The initial impression of the foreign body was a hard food material, such as a taco shell, that had been ingested before the house fire and had become firmly embedded within the esophageal wall. The midesophageal ulcer and associated mucosal tear due to the embedded foreign body were likely the sources of the patient's anemia and melena. Two small biopsy samples of the foreign body were sent for pathological analysis. Given the midesophageal ulcer and projected need for prolonged nasogastric tube feeding due to dysphagia, a percutaneous endoscopic gastrostomy (PEG) tube was placed the following day. Eventually, the pathology of the foreign body samples revealed fragments of hyaline cartilage, similar to that found in the tracheobronchial tree, which was concerning for ERF. Chest CT scan without contrast showed a narrowed left mainstem bronchus abutting the esophagus with foci of suspected extraluminal air along the left lateral margin of the esophagus (Figure 2). This was concerning for an esophageal injury or tear with localized perforation and possible bronchial injury. Therefore, the suspicion was high that the patient had an ERF, likely from the initial inhalation smoke injury. A confirmatory esophagogram was not performed due to the patient's dysphagia and given the high likelihood of an ERF with the EGD and imaging findings.
Bronchoscopy was performed 2 weeks after the initial EGD to assess for a true fistulous connection between the left bronchus and esophagus. Bronchoscopy revealed extrinsic compression of the left mainstem bronchus without mucosal defects or an obvious fistulous connection to the esophagus. However, there was diffusely inflamed left-sided mucosa and extensive purulent distal mucous plugging, suggestive of a prior fistulous connection. This was followed by the placement of a stent in the left mainstem bronchus. The fistula had either resolved by the time of bronchoscopy or was too small to be visible during the procedure. The follow-up chest CT scan 11 days after stent placement no longer visualized the previously seen foci of air between the esophagus and left mainstem bronchus.

Discussion
Acquired ERF is a rare but life-threatening diagnosis in adults [4]. A study of patients with ERF who were diagnosed between 2001 and 2011 showed that those with benign ERF had a median survival of 74 months [11]. ERF can present as a late complication of thermal inhalation injury. The incidence of inhalational injury in burn patients who required hospitalization ranged from 20% to 30%, and the risk of mortality was increased 24-fold in this population [12,13]. Mucosal edema, along with hypotension and shock in burn patients, may compromise the perfusion of the upper airway mucosa, leading to ERF formation. Prolonged intubation and tracheostomy can also be contributing factors for the development of ERF in patients with inhalation injury, especially in those with high cuff pressure, infection, hypotension, steroid use, diabetes, and the use of a nasogastric tube [14]. Although the initial symptoms of ERF can be insignificant, a delayed diagnosis can lead to severe complications such as pneumonia, life-threatening hemoptysis, and respiratory failure [4,6,15]. In intubated patients, increased secretions, pneumonia, and evidence of aspiration of gastric contents are concerning for ERF formation [15]. Bronchoscopy and esophageal endoscopy are first-line diagnostic and therapeutic modalities [16]. Although EGD or bronchoscopy may not identify the fistula orifice, as in our case, they may reveal inflammatory changes in the luminal mucosa suggestive of a prior or current fistula [17]. Plain radiography and CT scan of the chest can be helpful with diagnosis if there is evidence of pneumomediastinum or an obvious mucosal tear [17]. An esophagogram with ingestion of barium contrast can be used to confirm the diagnosis [4,6,16,17]. Gastrografin is not recommended due to the risk of acute pulmonary edema and respiratory failure associated with its aspiration. The goal of treatment of ERF is to prevent severe complications and optimize nutritional status [16]. Surgical interventions for the closure of fistulas have been recommended for certain cases, especially in large and traumatic ERF [14,16]. There was no significant difference in survival between surgical and nonsurgical treatment, such as airway or esophageal stent placement, in those with nonmalignant ERF [11]. Conservative treatment options, such as removing nasogastric tubes and placing gastrostomy or jejunostomy feeding tubes, have also been shown to aid the healing of ERF [14]. In summary, we present a case of ERF due to inhalation burn injury, which was diagnosed by endoscopic visualization and sampling of hyaline cartilage within the wall of the esophagus.
The most likely cause of ERF was inflammation and irritation of the bronchial mucosa secondary to smoke inhalation. PEG placement and conservative measures were effective in the management and healing of ERF. As in our case, the collaboration between pulmonology and gastroenterology services is essential for the diagnosis and management of this condition.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon reasonable request, for protecting the patient's identity.

Conflicts of Interest
The authors declare that there are no conflicts of interest.
Predicting the role of coping factors on pandemic-related anxiety The year 2020 saw the emergence of a worldwide pandemic caused by the novel coronavirus COVID-19. Measures against further spread of the virus were taken nearly everywhere in the world. Many countries also imposed social distancing rules and lockdowns on their population. This situation has caused a lot of fear and insecurity, along with reactance and even unrest in some countries. In this study, we measured the psychological concepts of resilience, reactance, positive schemas, social solidarity, and anxiety among psychiatric patients and in how far these factors influence their psychopathological anxiety during the pandemic. The aim was to better understand in what ways these factors influence pandemic anxiety to be able to reduce its negative psychological effects. Findings show a significant effect of positive schemas and social solidarity on the level of pandemic anxiety in a sample of psychiatric patients, but no correlation between resilience or reactance and pandemic anxiety. Based on these insights, the inclusion of positive schemas and social solidarity for therapy should be considered. Looking deeper into the relation between positive schemas and pandemic anxiety could provide insight into the different ways that schemas influence people’s anxiety and determine whether some of them are particularly important. Introduction In 2019, the novel coronavirus COVID-19 was discovered. Due to its high contagiousness, it quickly spread, causing a worldwide pandemic of the disease SARS-CoV-2. Practically all countries in the world were forced to take measures against the virus, often including a temporary shutdown of schools and universities, bars and restaurants, retail shops, as well as the cancellation of concerts, sports matches, etc. People were asked or forced to stay at home, socially isolate themselves from people of other households, and limit contact to others to an absolute minimum (WHO, 2020). The pandemic attacked one of the strongest factors in maintaining mental health -social exchange (BPtK, 2020). From a societal perspective, the regulations executed under the term "lockdown", including the shutdown of many places for social meetings, lead to economic consequences that keep persisting. On the individual level, all regulations were based on the compliance with strict "social distancing" rules, forcing people to keep distance of at least 1.5 m to others and preventing all physical contact. Strategies were inconsistently discussed and executed in all sixteen federal states of Germany over a period of twelve months, leading to confusion and insecurity in the German population. The situation was slightly relieved over summer, and most shutdowns and restrictions were facilitated (BMG, 2020). However, starting in autumn 2020, a second wave of COVID-19 infections arose almost all over the world. In Germany, the numbers of new infections began to increase drastically again in early October (RKI, 2020), and the German government reacted with a series of new regulations for a lighter lockdown (BMG, 2020). The reapplied measures which were taken to reduce social events and shut down gastronomy and entertainment were not accepted by some parts of the population, who shortly after organized demonstrations without abiding by the rules of keeping distance to others and wearing a mask in crowds (Wieser et al., 2020). 
Apart from confusion, insecurity about the future, and fear of infection, many people were already suffering from anxiety due to the social isolation, often accompanied by frustration, boredom, financial loss, stigma, and even post-traumatic stress symptoms (Brooks et al., 2020). The German cohort study NAKO "COVID-19 Pandemic: psychosocial effects on the general population" (NAKO Gesundheitsstudie, 2020) carried out by the Federal Ministry of Education and Research (BMBF) investigated 159,562 subjects over the period of four weeks in May 2020 and found that the psychological strain caused by the pandemic, the long isolation and continuous insecurity about the future led to higher numbers of stress and anxiety symptoms in Germany during the first lockdown until May 2020. Depressive and anxiety symptoms especially increased within participants below the age of 60, particularly in young women. Stress symptoms increased likewise for all genders, particularly in participants within the age group of 30 to 50 (Schwetje, 2020). Similar association of gender and increased anxiety were previously reported elsewhere (Yalçın et al., 2022). Rationale of this Study As stated above, the impact of the pandemic and the following lockdowns and limitations heavily influence people's mental wellbeing. Therefore, learning more about the underlying mechanisms, predicting factors (such as solidarity and resilience) and consequences of pandemic anxiety on mental health is prioritized in research to date (Holingue et al., 2020;Hofmeyer & Taylor, 2021;Brooks et al., 2020). Recent studies have investigated the impact of mental wellbeing during the pandemic, among them studies which looked at the levels of mental distress in healthy adults without a mental disorder (Holingue et al., 2020). For example, Hofmeyer and Taylor (2021) investigated anxiety due to the COVID-19 pandemic in a healthcare setting looking at anxiety levels of healthcare workers and thus, suggested optimal behavior of healthcare workers in leading positions to deal with anxiety levels. However, these articles looked at mentally healthy samples. The pandemic's impact on people that already suffer from mental disorder will most likely be even more severe. For instance, Brown et al. (2020) described the pandemic as "a protracted communal stressor that is expected to affect the content, incidence, and severity of psychotic symptoms, both among those who have and those who are at risk of developing a psychotic disorder " (Kopelovich & Turkington, 2021, p.32). The COVID-19 pandemic has had a noticeable impact on the mental wellbeing of the German population in all areas of the society. Therefore, we must develop preventive measures against social distress in the most diverse areas of society. The purpose of this study is to identify clinical and psychological predictors that influence pandemic anxiety. Four clinical-psychological dimensions were used as predictors, which are briefly explained below: Resilience, Reactance, Positive Internal Schemas and Capacity for Solidarity. This study aims to identify psychological predictors with an influence on the anxiety caused by the pandemic. This way, we hope to determine and clarify which factors influence pandemic-related anxiety the most, and it will open opportunities to reduce it by dealing with the predicting factors, e. g. in therapy. Each of the four examined factors will be introduced hereinafter. 
Reactance
Psychological reactance is often defined as an "unpleasant motivational arousal that emerges when people experience a threat to or loss of their free behaviors" (Steindl et al., 2015, p. 205). Free behavior usually describes the freedom to decide where you want to go, what you want to buy, or simply the freedom to say "No" to something (Steindl et al., 2015). Reactance depends on the significance of the freedom under threat (Steindl et al., 2015). In the current situation, many countries were forced to impose lockdowns and cancellations and to close shops, bars, and restaurants. Usually, reactance leads to actions to restore one's freedom (Rains, 2013). Thus, in many countries (e.g., Germany), the sudden, prolonged limitation of personal freedom led to public unrest and demonstrations (Frei et al., 2021); other countries, such as France, even saw violent riots. Therefore, looking at people's tendency to act reactantly may provide valuable information when attempting to understand the psychological impact of the pandemic. High reactance might reflect a strong perceived threat to one's freedom during the pandemic, which in turn could lead to pandemic anxiety.

Social Solidarity
The term social solidarity describes a phenomenon that can often be observed after critical events such as terror attacks or other criminal events, but also after natural hazards like earthquakes (Hawdon & Ryan, 2011). People often come together at the event's location, lay down candles or flowers, and comfort each other. This solidarity is believed to benefit survivors and victims, as it focuses attention on the damage that the event inflicted on the whole community, while also showing the affected people that they are not alone (Collins, 2004; Hawdon & Ryan, 2011). The pandemic confronted society with a challenge that can only be mastered by cohesion within a country, asking people to act in solidarity and responsibly on behalf of the whole country. Restrictions were aimed at protecting risk groups and spreading COVID-19 infections over a longer period (Güner et al., 2020). Thus, people needed to rely on others' sense of responsibility, and the measures asked people to restrict themselves without receiving an immediate benefit from doing so (Güner et al., 2020). People who are willing to act in solidarity might develop less fear due to the pandemic.

Resilience
The concept of psychological resilience is described as the ability to recover quickly from the psychological effects of an adverse event (Bonanno et al., 2010). It is also described as the ability to remain psychologically healthy or stable despite witnessing or experiencing an adverse event (Bonanno, 2004). Being resilient has been shown to be negatively correlated with anxiety (Yildirim, 2019). This correlation can especially be observed in times when negative life events occur. Therefore, one could well regard the massive changes due to the pandemic as a drastic life experience that requires adaptation (Yildirim, 2019). Thus, being resilient may be an important predictor of developing less fear during the pandemic and of higher levels of psychological wellbeing.

Positive Schemas
The concept of positive schemas developed within the approach of Schema Therapy. It assumes that people acquire schemas over the course of their life, most of them in early childhood. These schemas are classified as adaptive (positive) and maladaptive (negative) and mainly serve the purpose of fulfilling an individual's basic needs (Young, 1999).
Positive schemas can be understood as a set of beliefs, memories, cognitions and bodily reactions about oneself or the relationship one has with others (Louis et al., 2018). Positive schemas are developed as a reaction to the fulfilment of our core emotional needs. The concept is based on the idea that each person possesses four core emotional needs (autonomy, connection and acceptance, realistic limits and self-control, and spontaneity and play) that need to be met in order to develop positive schemas. These schemas reflect sets that were built up through life experiences and the personal interpretations of these experiences. Thus, they highly influence how we behave, think, or feel in certain situations (Videler et al., 2020). As positive schemas influence the way individuals think and how they react, it is assumed that they could also influence the way people react during the pandemic and how they deal with pandemic-related anxiety.

Hypotheses
As explained above, it is expected that each of the predictors influences anxiety in its own way. The following hypotheses were derived: 1. Higher scores in reactance lead to higher pandemic anxiety. 2. Higher scores in social solidarity lead to less pandemic anxiety. 3. Higher scores in resilience lead to less pandemic anxiety. 4. Higher scores in positive schemas lead to less pandemic anxiety. 5. There is a difference between age groups in pandemic anxiety and coping factors. 6. There is a difference between males and females in pandemic anxiety and coping factors.

Participants
Originally, 94 subjects participated in the study. Inclusion criteria comprised sufficient knowledge of German, age between 18 and 65 years, participation in inpatient or outpatient clinical treatment, and the ability to give informed consent. In total, seven participants had to be excluded from the sample. Data from 87 subjects (56 female, 31 male) from the inpatient and outpatient population of LVR-Klinikum Düsseldorf (a clinic for psychiatry and psychotherapy) were included in the sample. Eighty-five participants had an educational qualification in the form of a school diploma, a university degree, or vocational training. The participants were diagnosed according to ICD-10 during their treatment at LVR-Klinikum Düsseldorf by experienced psychiatrists and psychotherapists (see Tables 1, 2). The recruitment process of this field study started on 9 November 2020, during the second wave of the pandemic, and ended on 5 March 2021, during the second lockdown in Germany. All patients who met the inclusion criteria were asked to voluntarily participate in the study by filling out the survey. Patients who were already in treatment at the starting point of the recruitment process were invited to participate in the study; new patients were recruited during the intake of their treatment.

Material
Following a short demographic questionnaire, five questionnaires were used for data collection in this study. First, the Overall Anxiety Severity and Impairment Scale (OASIS; Norman et al., 2006; Hiller et al., 2018), modified to measure pandemic-related anxiety and rated on a 4-point Likert scale (minimum score 0 and maximum 20), was used. In this case, the participants were asked to report pandemic-related anxiety, for example: "How much did your anxiety interfere with your ability to do the things you needed to do at work, at school, or at home?". The analyses showed good internal consistency and adequate convergent and discriminant validity, as well as sensitivity to change (González-Robles et al., 2018).
Exploratory and confirmatory factor analyses supported a unidimensional structure. The five OASIS items displayed strong loadings on the single factor and had a high degree of internal consistency. OASIS scores demonstrated robust correlations with global and disorder-specific measures of anxiety (Campbell-Sills et al., 2009). Second, the newer version of the Questionnaire of Reactance (Merz, 1983), modified by Hong and Faedda (1996), measures psychological reactance (e.g., "I become angry when my freedom of choice is restricted.", "Advice and recommendations usually induce me to do just the opposite."). The scale is composed of 11 items rated on a 5-point Likert scale (minimum score 11 and maximum 55) covering the "generalized" motivation to produce and experience psychological reactance. The test-theoretical values meet the requirements demanded of a psychological measure. Third, social solidarity was measured with the Social Solidarity Scale (Hawdon & Ryan, 2011), which comprises six items (e.g., "I am proud to be a member of my community.", "People work together to get things done for this community.") rated on a five-point Likert scale (minimum score 6 and maximum 30). Regarding psychometric properties, it exhibited good construct validity and reliability (Hawdon & Ryan, 2011). Fourth, the Resilience Scale (Resilienzskala RS-11) (Schumacher et al., 2005) measures resilience with 11 items rated on a 7-point Likert scale (minimum score 11 and maximum 77). Example items include "Usually, I manage everything somehow." or "I have enough energy to do everything I have to do". The newly developed RS-11, conceptualized as a unidimensional scale, has been shown to be a reliable and valid instrument that allows an economic assessment of resilience (Schumacher et al., 2005). Lastly, the German version of the Young Positive Schema Questionnaire (YPSQ; Louis et al., 2018; German validation by Paetsch et al., 2021) measures positive schemas of oneself over the last year. Identical to the original version (Louis et al., 2018), the German YPSQ is a 56-item self-report measure of 14 early adaptive schemas (EAS). Items are rated on a 6-point scale ranging from "Completely untrue of me" to "Describes me perfectly" (minimum score 56 and maximum 336). Regarding psychometric properties, the German YPSQ exhibited satisfying factorial validity, construct and incremental validity, and internal consistency (Paetsch et al., 2021).

Data Analysis
Data were analyzed using SPSS 27. Pearson correlations were computed between all four independent variables (positive schemas, reactance, solidarity, and resilience) and the dependent variable (pandemic anxiety). An independent t-test was calculated to investigate whether gender was to be considered a confounding variable. To assess differences between age groups (18-35, 36-52, 53-70) regarding pandemic anxiety, a one-way ANOVA was conducted. Based on significant correlations between all factors, simple linear regressions were conducted to investigate whether the independent variables (social solidarity, positive schemas) can predict the dependent variable (pandemic anxiety). To investigate the effects of the interaction of multiple predictors (positive schemas, reactance, solidarity, and resilience) on the outcome variable (pandemic anxiety), a stepwise multiple linear regression was performed. Due to the exploratory nature of this study, no alpha adjustments were applied; the level of significance was defined as p < 0.05.
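As a purely illustrative sketch of this analysis pipeline (the study itself used SPSS, not this code), the following Python snippet mirrors the described steps: Pearson correlations, an independent-samples t-test for gender, a one-way ANOVA across age groups, and linear regressions on OASIS scores. The data file and the column names (oasis, ypsq, reactance, solidarity, resilience, gender, age_group) are assumptions, and the stepwise selection implemented in SPSS is not reproduced.

```python
# Illustrative re-implementation of the reported analyses on a hypothetical export.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("pandemic_anxiety.csv")  # hypothetical file with one row per patient

predictors = ["ypsq", "reactance", "solidarity", "resilience"]

# Pearson correlations between each predictor and pandemic anxiety (OASIS score)
for p in predictors:
    r, pval = stats.pearsonr(df[p], df["oasis"])
    print(f"{p}: r = {r:.2f}, p = {pval:.3f}")

# Independent t-test: pandemic anxiety by gender
female = df.loc[df["gender"] == "female", "oasis"]
male = df.loc[df["gender"] == "male", "oasis"]
print(stats.ttest_ind(female, male))

# One-way ANOVA: pandemic anxiety across the three age groups (18-35, 36-52, 53-70)
groups = [g["oasis"].to_numpy() for _, g in df.groupby("age_group")]
print(stats.f_oneway(*groups))

# Simple and multiple linear regressions predicting pandemic anxiety
print(smf.ols("oasis ~ ypsq", data=df).fit().summary())
print(smf.ols("oasis ~ ypsq + reactance + solidarity + resilience", data=df).fit().summary())
```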
Results
The analyses showed five significant correlations between the variables of interest. Pandemic anxiety was significantly negatively correlated with positive schemas (r = −.22, p < .05; see Table 2) and with social solidarity (r = −.22, p < .05). There was no significant correlation between pandemic anxiety and the other factors, reactance and resilience. Between positive schemas and social solidarity, the correlation was moderate and positive (r = .31, p < .01). Between positive schemas and resilience, there was a high positive correlation (r = .54, p < .01). Both positive correlations suggest a relationship of positive schemas with solidarity and with resilience. Lastly, there was a significant negative correlation between reactance and solidarity (r = −.25, p < .05). The independent t-test between female participants (n = 56) and male participants (n = 31) showed no significant difference in pandemic anxiety, t(85) = −.755, p = .452. The one-way ANOVA yielded a significant difference in positive schemas between age groups, F(2/84) = 5.43, p < .01. Tukey post-hoc analysis revealed a significant difference between the youngest age group (18-35 years) and the middle age group (36-52 years) (p < .05), as well as between the youngest age group and the oldest age group (53-70 years) (p < .05). Between the middle age group and the oldest group, no significant difference in positive schemas was found (p = .805). For all other factors, no significant difference was found (see Tables 3, 4). The regression analyses yielded significant results for positive schemas as a predictor of pandemic anxiety (F(1/84) = 4.88, p < .05) and for social solidarity as a predictor of pandemic anxiety (F(1/84) = 4.45, p < .05; see Table 5).

Discussion
The aim of the study was to investigate the influence of predicting factors, such as positive schemas, resilience, reactance and social solidarity, on pandemic anxiety in a sample of psychiatric patients. The hypothesis that higher scores in reactance lead to higher pandemic anxiety had to be rejected. As hypothesized, the study showed a negative relationship between social solidarity and pandemic anxiety. Higher scores on the positive schemas questionnaire also led to less pandemic anxiety. Although resilience was hypothesized to negatively influence pandemic anxiety, this hypothesis could not be supported. Interestingly, people with more positive schemas possessed higher social solidarity and were more resilient, and people aged 36 to 70 had significantly more positive schemas compared with younger patients (18-35). Finally, no gender effect could be found for any of the constructs. As revealed, positive schemas seem to have an influence on the level and development of the anxiety that is caused by the pandemic. Given that people with more positive schemas showed less pandemic anxiety, it can be assumed that possessing a set of positive schemas can serve as a protective factor against developing anxiety. In accordance with this finding, Keyfitz et al. (2013) argue that a low level of positive schemas can lead to a higher vulnerability to anxiety. This is also in line with O'Byrne et al. (2021), who found that PSQ scores predicted measures of anxiety and depression driven by the pandemic in a sample of university students. Building on the finding that positive schemas have a protective function against anxiety during the pandemic, the question arises of how to help people build positive schemas.
As schemas can be understood as patterns of memories, beliefs, and reactions to oneself or to others that lead us to a certain behavior, positive schemas help us develop adaptive behavioral patterns (Paetsch et al., 2021). Therefore, it is evident that possessing a set of positive schemas contributes to having less anxiety. Still, most research and therapy focus on negative, maladaptive schemas. However, negative schemas and positive schemas are considered two separate and distinct constructs, not lying on the same spectrum (Videler et al., 2020). Thus, the current findings recommend a shift of focus towards the work on and establishment of positive schemas during therapy, as a valuable activation of resources that helps people build up protection against developing anxiety during the pandemic. Insights from the 2003 SARS epidemic still show negative long-term effects on people's mental health years after the outbreak (Canet-Juric et al., 2020). Thus, it is to be expected that the impact of the COVID-19 pandemic will be present over the following years and that people will either seek psychiatric help on their own or be admitted to a clinic. This underlines the importance of promoting positive schemas within clinical treatment to create a buffer against the anxiety and persistent changes that come along with the COVID-19 pandemic.

The second main finding is that people who possess more social solidarity show less pandemic-related anxiety. Possible explanations arise when considering the Social Solidarity Scale. As most items focus on being part of a community, people who feel more strongly that they are members of society may experience the pandemic as a circumstance that affects the whole society. This might influence their perception of the anxiety that is caused by the pandemic. A possible reason could be that people tend to develop greater fear if they feel that they are alone in a situation. Similar results were obtained by Liekefett and Becker (2021), in whose study existential needs, a newly created measure of future anxiety and worries, were related to perceived threat and engagement in self-protecting behavior. Conversely, identification, group efficacy and concern for risk groups induced group-protecting behavior, emphasizing the importance of social affiliation. It should be mentioned here that, although social solidarity and pandemic anxiety were correlated, the regression only revealed positive schemas to be a predictor of pandemic anxiety. However, there was a strong relationship between positive schemas and social solidarity, showing that people who possess more positive schemas are also higher in social solidarity. Therefore, future therapy that focuses on positive schemas can indirectly influence social solidarity. Looking more closely at the positive schemas, it turns out that especially the schemas of social belongingness could automatically influence social solidarity, which underlines the assumption that social solidarity has an indirect influence on pandemic anxiety. Nevertheless, these are plausible explanations that still require further exploration in future research. While the results could not demonstrate resilience to be a predictor of pandemic anxiety in a psychiatric sample, a study by Mosheva et al. (2020) found different results, with resilience being a protective factor against anxiety. Furthermore, the importance of resilience in the context of the impact of the COVID-19 pandemic is underlined by Vinkers et al. (2020).
As they considered resilience in general as well as in psychiatric patients, they argue that the controllability of a situation highly influences whether, and how, people can handle the effects of crisis situations. Thus, a possible explanation for the findings of the current study could be that the loneliness promoted by quarantine regulations, fear of contamination, and health-related threats are even more challenging for people with a psychiatric disorder (Vinkers et al., 2020). It can therefore be assumed that the impact of the pandemic is occupying people to such a high extent that a moderate level of resilience, as demonstrated in this study, cannot protect against these effects in the population of psychiatric patients. Still, all of the aforementioned research argues in favor of promoting resilience training in patients, especially considering that the effects of the pandemic will certainly burden us over the upcoming years.

Limitations
First, the gathered data relied on self-report questionnaires, thus running the risk of being inaccurate or prone to bias (Van de Mortel, 2008; Benítez-Silva et al., 2004). With regard to the study sample of people diagnosed with a mental disorder, the risk of an exaggeration bias is expected to be higher than in a comparable healthy sample. There might have been participants who found themselves in an unfavorable phase or mood at the moment of completing the questionnaires. Furthermore, the questionnaire instructions for measuring anxiety asked only about anxiety that was specifically related to the pandemic. Looking at the participants' condition and current state, it could be assumed that some participants might have had difficulties differentiating between pandemic-related anxiety and anxiety related to their current phase of life. However, the data can still be considered reliable because the perception of anxiety relies on subjective feeling.

Future Research
Considering the unexpected outbreak of the pandemic and the massive consequences it has had on the whole world, the COVID-19 pandemic has become an emerging subject in a great number of studies covering several target groups and investigated factors. Thus, placing the findings into the context of previous studies dealing with COVID-19 or similar pandemics gives rise to the following recommendations. First, several studies underline that the effects of the pandemic on people's mental health are considerably more severe for people who already live with a mental health condition (Liu et al., 2020; Hao et al., 2020). Moreover, Nasrallah (2020) discussed whether and how to integrate treatment of stress and anxiety caused by the pandemic into already established treatment for mental disorders in psychiatric patients. The authors suggest that strategies for stress management be introduced to patients along with treatment as usual. For example, mindfulness-based interventions (Yalçın et al., 2022) might improve resilience, since mindfulness is related to reduced anxiety/depression scores. Similarly, Behan et al. (2020) suggested that such interventions could decrease pandemic-triggered anxiety and distress. However, to date, there is no specific recommendation on how to combine both demands. For future research, it is highly recommended to investigate the combination of treating mental health effects caused by the pandemic with usual treatment. Looking at the current findings, a promising opportunity could lie in the investigation of promoting positive schemas, social solidarity, and resilience training.
Conclusion
To conclude, the findings show an influence of positive schemas and social solidarity on the level of pandemic anxiety in a sample of psychiatric in- and outpatients. The inclusion of positive schemas and social solidarity in individual and group therapy should be considered to implement these findings. Contrary to expectations, the study could not detect any influence of people's reactance or resilience on their pandemic anxiety. Looking deeper into the relation between positive schemas and pandemic anxiety could clarify how different schemas influence people's anxiety and determine whether some of them are particularly important. Deeper knowledge of this relation could be used to implement treatment modules on positive schemas and how to promote them in the care of psychiatric in- and outpatients, with the aim of lowering pandemic anxiety.

Data Availability
The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
Ergodic measures and infinite matrices of finite rank

Let $O(\infty)$ and $U(\infty)$ be the inductively compact infinite orthogonal group and infinite unitary group respectively. The classifications of ergodic probability measures with respect to the natural group action of $O(\infty)\times O(m)$ on $\mathrm{Mat}(\mathbb{N}\times m, \mathbb{R})$ and that of $U(\infty)\times U(m)$ on $\mathrm{Mat}(\mathbb{N}\times m, \mathbb{C})$ are due to Olshanski. The original proofs for these results are based on asymptotic representation theory. In this note, by applying the Vershik-Kerov method, we propose a simple method for obtaining these two classifications, making them accessible to pure probabilists.

Main results
Let G := [g ij ] i∈N,1≤j≤m be an infinite Gaussian random matrix on M such that the g ij 's are independent standard real Gaussian random variables. Let O be a random matrix sampled uniformly from O(m) and independent of G. Define ∆ := {s = (s 1 , · · · , s m ) | s 1 ≥ · · · ≥ s m ≥ 0}. For any s ∈ ∆, define µ s as the probability distribution of the random matrix G · diag(s 1 , · · · , s m ) · O.

Theorem 1.1 (Olshanski [4,5]). The map s → µ s defines a homeomorphism between ∆ and P erg (M).

Let G C = [g C ij ] i∈N,1≤j≤m be an infinite Gaussian random matrix on M C such that the g C ij 's are independent standard complex Gaussian random variables. Let U be a random matrix sampled uniformly from U(m) and independently of G C . For any s ∈ ∆, define µ C s as the probability distribution of the random matrix G C · diag(s 1 , · · · , s m ) · U.

Theorem 1.3 (Olshanski [4,5]). The map s → µ C s defines a homeomorphism between ∆ and P erg (M C ).

Remark 1.4. The reader is also referred to [6] for a recent related work on Olshanski spherical functions for infinite dimensional motion groups of fixed rank.

Comments on the proof of Theorems 1.1 and 1.3. The proof of Theorem 1.3 is similar to that of Theorem 1.1. Only the proof of Theorem 1.1 will be detailed in this note. In the case of bi-orthogonally or bi-unitarily invariant measures on the space Mat(N×N, R) or Mat(N×N, C), the ergodicity of an invariant measure is equivalent to the so-called Ismagilov-Olshanski multiplicativity of its Fourier transform; in particular, the ergodicity can be derived from the Ismagilov-Olshanski multiplicativity of its Fourier transform using the classical De Finetti Theorem, see [3] and a recent application of this method in [2] in the non-Archimedean setting. However, in our situation, there does not seem to be an analogue of Ismagilov-Olshanski multiplicativity for the Fourier transforms of the ergodic measures µ s or µ C s . The proofs of the ergodicity of the measures µ s or µ C s require a new method. Two main ingredients for proving the ergodicity of µ s are: the mutual singularity between all the measures µ s (derived from the strong law of large numbers) and an a priori ergodic decomposition formula due to Bufetov for invariant Borel probability measures with respect to a fixed action of an inductively compact group. Our method can also be applied to give a probably simpler proof, by avoiding the Harish-Chandra-Itzykson-Zuber orbital integrals, of Olshanski and Vershik's approach to Pickrell's classification of unitarily ergodic Borel probability measures on the space of infinite Hermitian matrices. This part of the work will be detailed elsewhere. This research is supported by the grant IDEX UNITI-ANR-11-IDEX-0002-02, financed by the Programme "Investissements d'Avenir" of the Government of the French Republic and managed by the French National Research Agency.
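The construction in Theorem 1.1 admits a simple numerical illustration (the sketch below is not part of the original note; the choice of s, m and the sample size are arbitrary assumptions). One samples the top n x m corner X of a matrix with law µ s and recovers s from the eigenvalues of (1/n) X^T X: since (1/n) G^T G converges almost surely to the identity, these eigenvalues converge to s_1^2, ..., s_m^2, which is essentially the law-of-large-numbers observation used below to separate the measures µ s from one another.

```python
# Simulation sketch (illustration only, not from the paper) of mu_s = law of G * diag(s) * O.
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(0)
m, n = 3, 200_000
s = np.array([2.0, 1.0, 0.5])            # a point of Delta: s1 >= s2 >= s3 >= 0

G = rng.standard_normal((n, m))          # i.i.d. standard real Gaussian entries
O = ortho_group.rvs(m, random_state=0)   # Haar-distributed element of O(m)
X = G @ np.diag(s) @ O                   # top n x m corner of a sample from mu_s

# Law of large numbers: (1/n) X^T X -> O^T diag(s^2) O, so its eigenvalues recover s^2
eig = np.linalg.eigvalsh(X.T @ X / n)
print(np.sqrt(np.sort(eig)[::-1]))       # approximately [2.0, 1.0, 0.5]
```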
Preliminaries 2.1. Notation. Let X be a Polish space. Denote by P(X ) the set of Borel probability measures on X . Let G be a topological group and let G acts on X by homeomorphisms. Let P G inv (X ) denote the set of G-invariant Borel probability measures on X . Recall that a measure µ ∈ P G inv (X ) is called ergodic, if for any G-invariant Borel subset A ⊂ X , either µ(A) = 0 or µ(X \ A) = 0. Let P G erg (X ) denote the set of ergodic G-invariant Borel probability measures on X . If the group action is clear from the context, we also use the simplified notation P inv (X ) and P erg (X ). A sequence (µ n ) n∈N in P(X ) is said to converge weakly to µ ∈ P(X ), and denoted by µ n =⇒ µ, if for any bounded continuous function f on X , we have Given any random variable Y , we denote by L(Y ) its distribution. Let M(∞) be the subset of M consisting of matrices X ∈ M whose all but a finite number of entries vanish. Let µ ∈ P(M), its Fourier transform is defined on M(∞) by In what follows, for simplifying notation, for λ = (λ 1 , · · · , λ m ) ∈ ∆, we denote When it is necessary, we also identify D λ with an element of M(∞) by adding infintely many 0's to make it a matrix in M(∞) Remark 2.1. Any B ∈ M(∞) can be written in the form: The proof of the following lemma is elementary and is omitted here. Given a sequence (µ n ) n∈N in P inv (M) and an element µ ∞ ∈ P inv (M). The weak convergence µ n =⇒ µ ∞ is equivalent to the uniform convergence µ n (D λ ) → µ ∞ (D λ ) on compact subsets of ∆. Let m K(n) denote the normalized Haar measure on K(n). Given Theorem 2.4 (Vershik [7, Theorem 1]). The following inclusion holds: We will also need an a priori ergodic decomposition formula due to Bufetov. Theorem 2.5 (Bufetov [1, Theorem 1]). The set P inv (X ) is a Borel subset of P(X ). For any ν ∈ P inv (X ), there exists a Borel probability ν on P erg (X ) such that Remark 2.6. Here the equality (2.3) means that for any Borel subset A ⊂ X , we have 3. Haar random matrices from O(N) or U(N). How to sample Haar random matrices from O(N) or U(N)? We need the following well-known simple results. Let N ∈ N be a fixed positive integer. Let G N = (g ij ) 1≤i,j≤N be a random N × N real matrix such that the entries g ij 's are i.i.d standard real Gaussian random variables. Similarly, denote G C N the complex random matrix with i.i.d standard complex Gaussian random variables. For any N × N square real or complex matrix A, let GS(A) be the matrix obtained from A by doing the Gram-Schmidt orthogonalization procedure with respect to the columns of A. Note that GS(G N ) ∈ O(N) almost surely. Moreover, for any given 1≤i,j≤S . where g ij (resp. g C ij ) are independent standard normal real (resp. complex) random variables. The following well-known result will be useful. Proposition 2.8 (Borel Theorem). As N goes to infinity, the following weak convergences hold: 3. Classification of P erg (M) 3.1. Singularity between µ s 's. Recall that two Borel probability measures σ 1 and σ 2 on X are called singular to each other, if there exists a Borel subset A ⊂ X such that σ 1 (A) = 1 − σ 2 (A) = 1. Then, the sequence ( m i=2 s 2k i ) k∈N is known and so is s 2 . Continue this procedure, we see that the sequence ( m i=1 s 2k i ) k∈N determines s uniquely. In particular, we have C n (XY ) = C n (XY ) once XY is well-defined. Proposition 3.3. For any s ∈ ∆ and any k ∈ N, we have The random matrix C n (G) * C n (G) is of size n × n. 
Let 1 ≤ i, j ≤ n, then the (i, j)-entry of C n (G) * C n (G) is By the strong law of large numbers, we have It follows that As a consequence, we have By the definition of µ s , (3.6) implies the desired assertion (3.5). 1 < ∞. Proof. Note that for any s, λ ∈ ∆, we have Proof of Proposition 3.1. For any s ∈ ∆, we define a subset A s ⊂ M by From this, it is clear that s → µ s is continuous. Thus the compactness of the set {s ∈ ∆|s 1 ≤ sup n∈N s (n) 1 } implies the tightness of the sequence (µ s (n) ) n∈N . Conversely, let (µ s (n) ) n∈N be a tight sequence. Assume by contradiction that sup n∈N s (n) 1 = ∞. Then we may find a sequence n 1 < n 2 < · · · of positive integers, such that lim k→∞ s (n k ) 1 = ∞ and there exists µ ∈ P inv (M) with µ s (n k ) =⇒ µ. By independence between C m (G) and O, we have Since O 11 = 0 a.s. and lim k→∞ s Proof. From above, the map s → µ s is a continous bijection between ∆ and {µ s : s ∈ ∆}. We only need to show the converse map is also continuous. Assume that (s (n) ) n∈N is a sequence in ∆ and s (∞) ∈ ∆ such that µ s (n) =⇒ µ s (∞) . By Proposition 3.4, we have sup n∈N s (n) 1 < ∞. Since the set {s ∈ ∆|s 1 ≤ sup n∈N s (n) 1 } is compact, we only need to show that the sequence (s (n) ) n∈N has a unique accumulation point. Let s ′ be any accumulation point of the sequence (s (n) ) n∈N . Then there exists a subsequence (s (n k ) ) n∈N that converges to s ′ . By continuity of the map s → µ s , we have µ s (n k ) =⇒ µ s ′ . It follows that µ s ′ = µ s (∞) and hence s ′ = s (∞) . Thus s (∞) is the unique accumulation point of the sequence (s (n) ) n∈N , as desired. It follows that ν s 0 = δ s 0 , where δ s 0 is the Dirac measure on the point s 0 . Since ν s 0 is a probability measure on ∆ erg , we must have s 0 ∈ ∆ erg . Hence we get the desired relation µ s 0 ∈ P erg (M). The proof of Theorem 1.1 is completed. Limit orbital measures are µ s 's The following lemma will be used. Proof. This is an immediate consequence of the following inequalities: For simplifying notation, in what follows, given n ∈ N and X ∈ M, we denote ) k∈N such that lim k→∞ s (n k ) 1 = ∞. Using the truncation notation Z (n) [m] introduced in §2.3.2, we have Take now λ = (λ 1 , 0, · · · , 0). We may assume that the transposition of Z (n) is produced as in §2.3.1, that is, Z (n) is the random matrix obtained by the Gram-Schimidt operation with respect to rows from a Gaussian random matrix G n = [g lj ] 1≤l,j≤n . Then The uniform convergence (4.10) on any compact subsets implies that the limit exists and the convergence is uniform when λ 1 ranges over any compact subsets of [0, ∞) and hence by symmetry of the Gaussian distribution, on any compact subset of R. It follows that the following sequence In particular, for any λ 1 ∈ R, we have Since O 11 = 0 a.s. and by assumption lim k→∞ s (n k ) 1 = ∞, we may apply bounded convergence theorem to conclude that σ(λ 1 ) = 0, for all λ 1 ∈ R. This contradicts to the fact that σ is a probability measure on R. Hence we must have sup n∈N s (n) 1 < ∞. Now since {s ∈ ∆|s 1 ≤ sup n∈N s (n) 1 } is compact, we may assume that there exists a subsequence (s (n k ) ) k∈N converges to a point s (∞) ∈ ∆. Taking Proposition 2.8 into account, the equalities (4.10) and (4.11) now imply By definition of the probability measure µ s (∞) , we get µ(D λ ) = µ s (∞) (D λ ), for all λ ∈ ∆. Hence the proof of Proposition 3.6 is completed.
In-depth proteomics analysis of sentinel lymph nodes from individuals with endometrial cancer Summary Endometrial cancer (EC) is one of the most common gynecological cancers worldwide. Sentinel lymph node (SLN) status could be a major prognostic factor in evaluation of EC, but several prospective studies need to be performed. Here we report an in-depth proteomics analysis showing significant variations in the SLN protein landscape in EC. We show that SLNs are correlated to each tumor grade, which strengthens evidence of SLN involvement in EC. A few proteins are overexpressed specifically at each EC tumor grade and in the corresponding SLN. These proteins, which are significantly variable in both locations, should be considered potential markers of overall survival. Five major proteins for EC and SLN (PRSS3, PTX3, ASS1, ALDH2, and ANXA1) were identified in large-scale proteomics and validated by immunohistochemistry. This study improves stratification and diagnosis of individuals with EC as a result of proteomics profiling of SLNs. In brief The standard of care for endometrial cancer (EC) does not currently include sentinel lymph node (SLN) mapping. Aboulouard et al. report an in-depth proteomic analysis of SLNs from individuals with grade I, II, or III EC and identify potential biomarkers for tumor grade and overall survival. INTRODUCTION Sentinel lymph node (SLN) mapping is used as a surgical strategy to perform a complete lymphadenectomy in individuals with endometrial cancer. 1 The concept of ''sentinel nodes'' appeared in 1960 2 and is linked to the fact that if the SLNs are negative for metastasis, then nodes distal from the SLNs should also be negative. 3 In 1977, Cabanas 4 used lymphography to describe SLNs in individuals with penile carcinoma. SLN mapping enables affected individuals to avoid the side effects associated with complete lymphadenectomy and guides surgeons in decisionmaking. The use of pathologic ''ultrastaging'' and surgeon experience are key factors for successful SLN mapping, especially with breast cancer and melanoma. 1,5 The approach is based on simply identifying the anatomical location of the SLNs. 6 In the case of gynecological malignancies, the reliability of the SLN detection procedure has been investigated extensively in vulvar and cervical cancer. 7 SLN mapping in endometrial cancer (EC) was introduced by Burke et al. 8 and gained credibility in recent years 9 but has not yet been incorporated as a standard-of-care procedure in EC. 10 There are several factors that can explain the low use of SLN mapping, including complex uterine drainage, the various modalities of tracer injection, and lack of large prospective series. It has been reported that SLN mapping achieved a detection rate of 81.7%, a metastatic SLN involvement rate of 10.9%, and a false negative rate of 12.3% in main clinical trials. 10 In 2017, 55 eligible studies that included 4,915 women, were published. 11,12 The overall detection rate of SLN mapping was 81%, with a bilateral pelvic node detection rate of 50% and para-aortic detection rate of 17% with a metastasis detection sensitivity of 96%. Thus, SLN mapping accurately predicts nodal status in women with EC. 11 Despite these promising clinical data and a positive view of the role of SLN mapping in detecting EC, the exact underlying molecular mechanisms relating SLNs and EC grades have not been fully identified. 
Obtaining molecular information can highlight EC pathological mechanisms and serve to identify potential prognostic and therapeutic targets. Here we present a state-of-the-art proteomics study of SLNs from individuals with EC to look at differences in protein expression and mutations that are associated with tumor grade and to identify affected pathways possibly involved in EC. In addition to the mutations, the unreferenced proteins translated from mRNA regions described as non-coding, such as the 5' and 3' UTRs and frameshifts, or from non-coding RNA (ncRNA), forming the ''ghost proteome,'' were also investigated. Our work identifies a correlation between SLNs and EC grades based on significant protein abundance variation. Furthermore, we identify and validate five key protein biomarkers that link EC and SLN cancer grading and that could later be used as diagnostic tools, pending validation in larger, independent, multicenter cohorts.

RESULTS
The aim of this study was to identify comprehensive molecular proteomics signatures from SLNs and compare them with early-stage endometrial carcinoma proteomes at intermediate and high risk of recurrence, in a randomized study (Table S1). All sentinel nodes were examined by standard staining. When negative, serial sections were performed for standard staining and pancytokeratin immunohistochemistry (IHC). For the endometrial tissues, P53 immunostaining was performed. Figure 1A presents the IHC results for the healthy and grade I-III sentinel nodes (Figures 1Aa-1Ad') and for the corresponding healthy and grade I-III endometrial tissues (Figures 1Ae-1Ah'). Based on these IHC results, we performed spatially resolved shotgun proteomics on regions of interest (ROIs) selected by a pathologist (Figure 1B). The stained slides were unmounted and the resin was removed, and trypsin was then deposited on the ROI using a piezoelectric chemical inkjet printer. The digested ROI was then subjected to liquid junction peptide extraction before separation by nanoliquid chromatography (LC) coupled with high-resolution mass spectrometry for tandem mass spectrometry (MS/MS) analysis. We first analyzed proteins from sentinel node samples and endometrioid tissue samples separately, and then compared their protein profiles to assess common signatures that might exist between them. In this way, we expected to determine whether SLNs could provide a more sensitive method of assessing the spread of apparent early-stage EC than lymph node dissection, which would enable a focused, targeted adjuvant therapy decision, such as performing radiotherapy or chemotherapy.

(A) Histopathological data obtained from immunocytochemistry performed with anti-P53 on tissue sections of sentinel nodes (SNs), either healthy (Aa) or cancerous (Ab, Ac, and Ad), from grade I (Ab) with magnification (Ab' and Ab''), grade II (Ac) with magnification (Ac'), and grade III (Ad) with magnification (Ad'). Similarly, results were obtained with the corresponding endometrial tissue, either healthy (Ae) or cancerous (Af, Ag, and Ah), at grade I (Af) with magnification (Af' and Af''), grade II (Ag) with magnification (Ag'), and grade III (Ah) with magnification (Ah'). See also the IHC images in the Supplemental information.
(B) Workflow for spatially resolved proteomics using IHC tissue sections. ROIs were subjected to enzymatic microdigestion using trypsin followed by liquid junction microextraction, and then subjected to shotgun proteomics analyses.
Shotgun proteomics of sentinel nodes From the 24 samples (normal and grades I-III), 1,291 proteins showed a significant difference in expression based on MSbased relative quantification. After filtering proteins based on a minimum number of values in at least one of the four defined groups (2 of 3 valid values), 1,005 proteins were obtained (Figure 1B; Data S1). Sixty-two specific proteins were identified with the following repartition: 3 in the normal sentinel node, 21 in grade I, 5 in grade II, and 33 in grade III ( Figure 1B; Table S2). Comparison of the different sentinel node cancerous states showed differences in terms of cellular components and molecular functions. Grade I tumors contained the highest level of proteins related to the nucleus and cytoplasmic constituents compared with grade II and grade III tumors ( Figure S1A), whereas the three grades presented the same level of proteins as detected in exosomes ( Figure S1A). For the molecular functions, proteins identified in grade I tumors were more related to an immune response, which was confirmed by STRING analysis ( Figure S1B). Serpins, PTX3 (pentraxin-related protein 3), CHI3L1 (chitinase-3-like protein 1), PROM1 (prominin-1), ORM2 (alpha-1-acid glycoprotein 2), MNDA (myeloid cell nuclear differentiation antigen), AZU1 (azurocidin), RNASE3 (eosinophil cationic protein), ASRGL1 (isoaspartyl peptidase/L-asparaginase), and MMP8 (neutrophil collagenase) are proteins involved in the immune response, especially the innate immune response ( Figure S1B). Grade II tumors contained only 5 specific proteins: PLEK (Pleckstrin), PLIN1 (Perilipin-1), PLIN4 (Perilipin-4), ECM1 (extracellular matrix protein 1), and GIMAP1 (GTPase, IMAP family member 1). PLIN1 and PLIN4 are involved in the PPAR (peroxisome proliferator-activated receptors) signaling pathway. ECM1 and PLECK are implicated in platelet degranulation and immunity. ECM1 is also involved in angiogenesis ( Figure S1C). Grade III tumors contained proteins involved in DNA and RNA binding, nucleic acid metabolism, transcription regulator activity, and metabolism ( Figure S1D). Several interesting proteins can be pointed out, in particular CDC42 (cell division control protein 42 homolog), which plays a role in extension and maintenance of formation of thin, actin-rich surface projections called filopodia. SNW1 (NW domain-containing protein 1) is known to be implicated in epigenetics and is involved in NOTCH1-mediated transcriptional activation. Metastasis-associated protein (MTA2) is associated with the estrogen receptor in breast cancer and predicts proliferation in non-small cell lung cancer. 13 MTA2 also targets P53. MTA1, but not MTA2, has already been identified in sentinel nodes of head and neck cancer 14 and breast cancer. 15 MTA1 expression was correlated positively with lymph node metastasis and poor survival rate in EC. 16 According to the TCGA (The Cancer Genome Atlas) 17 , a list of 786 unfavorable genes has been characterized, and among them, 20 are considered to be associated with lower overall survival when overexpressed. 18 From this list, PTX3 protein was identified as the only unfavorable factor in grade I. To better understand the modulation registered across the different lesions, a multiple-sample test ANOVA with p < 0.05 was performed. A total of 336 proteins showed a significant difference in expression among the 4 groups, as shown in a heatmap ( Figure 2A). 
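The filtering and multiple-sample testing described above (proteins kept only if quantified in at least 2 of 3 replicates in one of the four groups, then compared across groups by ANOVA with p < 0.05) follow a standard label-free quantification workflow. A minimal sketch of this logic is shown below; the file name, column layout, and group labels are hypothetical placeholders, the authors performed these steps in Perseus rather than in code, and the permutation-based FDR correction applied in Perseus is not reproduced here.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical LFQ intensity matrix: rows = proteins, columns = samples.
# Group labels mirror the four groups compared in the text
# (normal sentinel node and grades I-III), three replicates each.
lfq = pd.read_csv("lfq_intensities.csv", index_col="protein")  # placeholder file
groups = {
    "normal":    ["N_1", "N_2", "N_3"],
    "grade_I":   ["GI_1", "GI_2", "GI_3"],
    "grade_II":  ["GII_1", "GII_2", "GII_3"],
    "grade_III": ["GIII_1", "GIII_2", "GIII_3"],
}

# log2-transform; zero intensities are treated as missing values.
log_lfq = np.log2(lfq.replace(0, np.nan))

# Keep proteins with at least 2 of 3 valid values in at least one group.
valid_in_any_group = pd.concat(
    [log_lfq[cols].notna().sum(axis=1) >= 2 for cols in groups.values()], axis=1
).any(axis=1)
filtered = log_lfq[valid_in_any_group]

# Multiple-sample (one-way) ANOVA across the four groups, protein by protein.
def anova_p(row):
    samples = [row[cols].dropna().values for cols in groups.values()]
    if any(len(s) < 2 for s in samples):
        return np.nan
    return stats.f_oneway(*samples).pvalue

pvals = filtered.apply(anova_p, axis=1)
significant = filtered[pvals < 0.05]
print(f"{len(filtered)} proteins retained, {len(significant)} significant by ANOVA")
```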
Proteomes in samples from the same group were similar (mean Pearson correlation, 0.92) compared with inter-group variation. The main differences were observed between normal sentinel nodes and grade I SLNs ( Figure 2B). A volcano plot was made, based on combination of grades I, II, and II together, which were compared with normal sentinel nodes; the data revealed 91 differentially regulated proteins, with 44 proteins more represented in tumor tissue and 47 in normal tissue ( Figure 2C; Table S3). Among the 44 identified proteins more expressed in cancerous sentinel nodes, 12 are involved in the innate immune response (protein SGT1 homolog, nucleoside diphosphate kinase B, Drebrin-like protein, alpha-1-acid glycoprotein 2, Clusterin, DNA-dependent protein kinase catalytic subunit, Ras-related protein Rap-1b, vesicle-associated membrane protein 8, Ras-related protein Rap-1A, major vault protein, interleukin enhancer-binding factor 2, and Cystatin-B), 4 are involved in necroptosis and cellular senescence (ADP/ATP translocase 1, ADP/ATP translocase 2, ADP/ATP translocase 3, ADP/ATP translocase 4, and charged multivesicular body protein 4a), and 20 are involved in mediated transport ( Figure 2D). Functional enrichment analysis established that tumors are connected to nucleotide metabolism, whereas proteins involved in normal tissue are highly related to signal transduction and cell communication ( Figure 2E). The hierarchical clustering between normal grades and the different grades showed a separation into 2 branches, i.e., one separating grade III and another separating the other grades and normal grade. The second branch separated grade I SLNs to normal and grade II SLNs, and then the last subbranch separated normal from grade II SLNs (Figure 2A). Cluster 1, representing overexpressed proteins in grade I SLNs, contains 47 proteins (Data S2). Nine are antimicrobial peptides, i.e., dermicidin, S100-A8, S100-A9, eosinophil cationic protein, lysozyme C, lactotransferrin, neutrophil elastase, cathepsin G, and neutrophil gelatinase-associated lipocalin. Twenty-four proteins are involved in the innate immune response or neutrophil degranulation ( Figure S2A). The other proteins are involved in energy pathways and metabolism ( Figure S2A). Cluster 2, corresponding to proteins overexpressed in grade II, contains 37 proteins that are involved in metabolism involving mitochondrial enzymes, such as isocitrate dehydrogenase 1 (IDH1), serine hydroxymethyltransferase (SHMT2), 2-oxoglutarate dehydrogenase (OGDH), ADP/ATP (translocase 1-4), glycanic enzymes such as UDP-glucose 6-dehydrogenase (UGDH), sialic acid synthase (NANS, N-Acetylneuraminate Synthase), 78-kDa glucoseregulated protein (HSPA5), hypoxia upregulated protein 1 (HYOU1), endoplasmin (HSP90A1), and protein disulfide-isomerase A3 ( Figure S2B). Other proteins are also involved in cell growth ( Figure S2D). Cluster 3 contains 117 proteins more expressed in grade III SLNs (Data S2). ClueGo analyses established that 32 are involved in the ribonucleoprotein complex (Figure S2C, red balls), ribosomes ( Figure S2C, blue balls), and translation ( Figure S2C, green balls). Among the identified proteins, ICAM-3 is known to mediate inflammatory signaling to promote cancer cell stemness. 19,20 Catenin beta-1 and Stathmin are poor prognosis markers in EC. 21,22 TP53BP1 is also detected and is known to interact with p53 and MFN1; these two genes encode a mitochondrial membrane protein and are considered to have tumor suppressor gene functions. 
However, its mutation is also considered a poor prognosis marker in cancer. 23 Functional enrichment analyses demonstrated that proteins more represented in grade III SLNs are involved in metabolism of nucleotides, protein metabolism, and energy pathways (Figure S2D). Cluster 4 corresponds to proteins that are more represented in normal sentinel nodes; 26 cytosolic proteins have been identified ( Figure S2E) and are involved in oxygen carrier activity, oxygen binding, and myosin binding. Gene set enrichment analysis (GSEA) 24 associated with Cytoscape 25 analyses of the 4 clusters is in line with the precedent analyses ( Figure 2F). Comparison of the volcano plot of the grade I, grade II, and grade III sentinel nodes ( Figures 3A-3C) established a molecular transition between these grades. The volcano plot confirmed the presence of a high number of differentially regulated proteins between grade I and normal SLNs ( Figure 3A). No differentially regulated proteins were detected between grade II and normal SLNs ( Figure 3B), whereas few proteins were differentially regulated between grade III and normal SLNs ( Figure 3C). Thus, we observed a clear shift from normal to grade I sentinel nodes that keeps immune activity through antimicrobial peptides and immune factors involved in the innate immune response. The transition from grade I to grade II can be explained by the switch from an immune profile to cytoskeleton modifications and proliferation. We then refined the analysis with a false discovery rate (FDR) of less than 0.01 to compare grade II with grade III SLNs ( Figure S2F). Nineteen proteins were identified to be in common between grade II and grade III; most of these proteins are involved in metabolism and in the Warburg effect ( Figure S2F, inset STRING analysis). Cancer cell evolution to epithelialmesenchymal transition (EMT) is suggested, as reflected by the presence of hypoxia upregulated protein 1. However, resistance of the immune response to cancer cells is still present in grade III. In fact, regulators of self-antigen presentation are still present, such as Tap1, HLA-CW12 (human leukocyte antigen), and class II HLA-DRB1. Shotgun proteomics of EC Proteins from the 12 samples (normal and grades I-III) were extracted and subjected to shotgun proteomics analyses. 1,280 proteins showed a significant difference in expression level. After filtering proteins based on a minimum number of values in at least one of the four defined groups (2/3 of valid values), 913 proteins were obtained (Data S3). According to the Venn diagram, 11 proteins were specific to healthy endometrial tissue, 24 to grade I EC, 14 to grade II, and 19 for grade III ( Figure 3D; Table S3). Moreover, principal-component analyses (PCAs) revealed a clear separation between the different grades and healthy endometrial tissue ( Figure 3E). The volcano plot associated with the heatmap confirmed the presence of 2 clusters, i.e., a cluster representative of normal endometrial tissue and cluster 2 related to EC ( Figure 3F). GSEA associated with Cytoscape confirmed that cluster 2, corresponding to proteins overexpressed in EC, are involved in translation, transcription, and nucleotide metabolism ( Figure 3F). Specific proteins identified in grade I EC tissue are related to the immune response with the presence of antimicrobial peptides (neutrophil elastase, neutrophil gelatinase-associated lipocalin, bactericidal permeability-increasing protein, and azurocidin). 
Among the immune factors, we identified interleukin-16 (IL-16); receptor-type tyrosine-protein phosphatase C, required for T cell activation; Integrin alpha-M; and Integrin ITGAM/ITGB2, known to be implicated in various adhesive interactions of monocytes, macrophages, and granulocytes. We also identified CD74, which is known to play a role in major histocompatibility complex (MHC) class II antigen presentation. However, CD74 is a poor prognostic marker in breast cancer 26 and PTX3 in EC, according to the TCGA (Table S3). For grade II, 4 proteins involved in the Wnt pathway have been identified (i.e., SMARCA4, GNB2, GNB4, and 26S proteasome subunit 10 [PSMC10]) (Table S3) and in grade III, NOTUM and PSMC6. Moreover, endosialin is known to play a role in tumor angiogenesis. 27 The host cell factor c1 (HCFC1) is an immunomodulator that plays a role in limiting the anti-cancer immune response and production of cytokines such as IL-6 or IL-8, which can contribute to neovascularization or tumor growth. 28 Volcano plots confirmed the high number of specific proteins identified in the 3 stages of EC development ( Figures 4A-4C). To better understand the modulation registered across the different lesions, a multiple-sample ANOVA with p < 0.01 was performed ( Figure 4D). A total of 384 proteins showed a significant difference in expression between the 4 groups. Hierarchical clustering and heatmap representation established a good separation between the three EC grades ( Figure 4D; Data S3). Cluster 1 contained proteins overexpressed in grade I tumors. Among the identified proteins, 30 are involved in immune response ( Figure S3A), such as gamma interferon-inducible protein 16 (IFI16), lysozyme C, myeloperoxidase, and lactotransferrin. The other ones are related to the metabolism of RNA. For grade II, 42 proteins implicated in cytoskeleton protein binding, the ribosome, and actin-binding were detected in cluster 2 (Figure S3B). A network implicating TPM4, TPM2, TPM1, MYH10, and MYH11 was identified, similar to what we found recently in glioma. 29 For grade III, some of the proteins are involved in cell adhesion (Cadherin-13, b-catenin, tenascin, vitronectin, emilin 1, and collagen alpha-1 chains) and the extracellular matrix, including ApoE and HSPG2 (cluster 3; Figure S3C). ApoE is important for proliferation and survival of ovarian cancer. 30 An analysis with a FDR of 0.01 is presented in Table S4. Interestingly, we identified PTX3 and ASS1 to be overexpressed in grade I and grade II tumors, respectively. These two proteins are considered to be among the top 20 most unfavorable prognostic factors according to TCGA data 17 and antibody-based protein data 18 ( Figure 4G). Comparison of sentinel node and EC proteomes A comparison analysis was performed based on the 24 samples used previously after ANOVA with a FDR of 0.01. 659 significant proteins were identified among 1,053. After hierarchical clustering and heatmap representation, healthy endometrial tissue is separated from the sentinel nodes and endometrial tumor grades. Interestingly, one branch regroups sentinel node grade I and endometrial grade I samples. Endometrial grade II and III tumors are also regrouped and separated from sentinel node grades II and III (Table S5). Cluster 1 corresponds to the common proteins overexpressed in sentinel and endometrial grade I tumor tissue (Table S5), and cluster 2 represents the common overexpressed proteins between SLN and EC grade III tissue (Table S5). 
Cluster 1 contains 22 proteins involved in immune response ( Figure 5A) with 10 antimicrobial peptides (neutrophil gelatinase-associated lipocalin, lactotransferrin, lysozyme C, neutrophil elastase, cathepsin G, RNAS3, myeloperoxidase, azurocidin, S100A9, and S100A). Interestingly, cluster 1 also contains the MNDA, which acts as a transcriptional activator/ repressor in the myeloid lineage, as well as SerpinB1 and integrins (ITGB2 and ITGA6), also involved in immunity. Besides these immune factors, which are considered favorable prognosis factors for EC, ANXA1 is also overexpressed in grade I SLNs and EC and is considered a favorable prognosis gene. However, ANXA3, ANXA11, LGALS3, FTH1, CP (ceruloplasmin), SERPINB1, FLOT1, and FLOT2 are unfavorable markers for the overall (F) Volcano plot and Hierarchical clustering of the most variable proteins between normal tissue and grade I-III EC (n = 3 for each category, ANOVA with permutation-based FDR < 0.05) and GSEA analyses of normal versus the 3 grades of the EC associated together. The GSEA was performed on the two clusters identified. Figure S4A). Cluster 2 contains ALDH2, a favorable prognosis gene for EC, and PRSS3, an unfavorable prognosis marker. YBX1, SLC1A5, ALDOA (aldolase, fructose-bisphosphate A), ATP1A1, and UBA2 are unfavorable OS (overall survival) markers ( Figure S4A) and were validated in immunocytochemistry and quantified based on pathological atlas data ( Figures S4B-S4B 00 ). Moreover, correlation studies between the different grades of SLNs ( Figure 5B) or EC ( Figure 5C) and between sentinel nodes and EC grades ( Figure 5D) established a positive correlation between grades when the datasets are taken individually. Comparison of sentinel nodes and EC grades points out a positive correlation between grade I of both tissues. Using correlation matrices ( Figure 5E), we found that PTMA (prothymosin alpha), ACTL6A, SHMT2, RBM25, and RBM4 were detected in grade I, II, and III sentinel nodes as well as in grade I, II, and III of EC. SUB1 and ETHE1 are expressed in grade II and III SLNs and grade II and III EC. DCD is present in grade I and II SLNs and grade I and II EC. YBX2, NOTUM, and RANDBP1 are found specifically in grade III SLNs and grade III EC. NUP210 is specific to grade II SLNs and grade II EC. PADI4, MUC5B, GOLM1, MNDA, CHI3L1, PTX3, SP100, MMP8, AZU1, and SLC9A3R2 are specific to grade I SLNs and grade I EC. We established the presence of markers common between SLNs and EC tissue that are grade dependent. Among the identified markers, PRSS3, PTX3, and ASS1 are considered poor outcome gene marker, whereas ALDH2 and ANXA1 are positive outcome markers. Validation was performed on 13 samples (Table S1; corresponding to 8 patients) by immunofluorescence ( Figure 6). In EC, PRSS3 is more expressed in grade III, ASS1 in grade II, and PTX3 and ANXA1 in grade I, whereas ALDH2 is present in grade II and grade III of EC. In SLNs, PRSS3, PTX3, and ASS1 are highly expressed compared with other markers, such as ANXA1 and ALDH2, which are close to undetectable. ASS1, PTX3, and PRSS3 are highly detectable in grade I, whereas their expression is lower in grades II and III. PRSS3 is slightly less detectable in grade II, and its detection is increased in grade III. For PTX3, the level of detection in grades II and III is lower than in grade I. None of them are detected in healthy pa-tients ( Figures 6A and 6B). 
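The grade-to-grade correlation analyses mentioned above (Figures 5B-5E) amount to computing Pearson correlations between averaged protein profiles of SLN and EC samples at each grade. A minimal illustration of such a correlation matrix is sketched below; the input tables and the "tissue_grade" annotation column are hypothetical, and the published analysis was run in R with the corrplot package rather than in Python.

```python
import pandas as pd

# Hypothetical matrix of log2 LFQ intensities: rows = proteins,
# columns = samples labelled by tissue (SLN or EC) and grade.
expr = pd.read_csv("log2_lfq_matrix.csv", index_col="protein")      # placeholder file
annot = pd.read_csv("sample_annotation.csv", index_col="sample")    # placeholder file

# Average replicates into one profile per tissue/grade combination,
# e.g. "SLN_gradeI" or "EC_gradeIII".
profiles = expr.T.groupby(annot["tissue_grade"]).mean().T

# Pairwise Pearson correlation between profiles
# (missing values are handled pairwise by pandas).
corr = profiles.corr(method="pearson")
print(corr.round(2))
```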
The other markers correlated with poor outcome (YBX1, SLC1A5, ALDOA, ATP1A1, and UBA2) have already been tested on transcription-mediated amplification (TMA), and we confirmed their presence at elevated levels ( Figures S4B and S4B 00 ). We also detected some protein markers that can be correlated with a positive outcome, associated with longer overall survival; these included the CD74, GOLM1, SLC9A3R2, and SP100 proteins, according to the TCGA. Mutations Several studies have evaluated genetic mutation frequencies in individuals with EC of different tumor grades. 31 In this context, we sought to determine whether these mutations can be translated and detected in proteins. For that purpose, we used a human database combined with the XMan v.2 database. 32 This database contains information about mutated peptides that can be found in some cancers, extracted from the COSMIC database. Sixty-six mutated peptides were identified (Data S4). Among the proteins from which mutated peptides were identified, we retrieved collagen alpha 1, histones, laminin 1, myosin 9, HDGF, CAP-1, fibrinogen beta unit, tenascin, PTBP1, actin, hemoglobin subunits, integrin beta 2, ACTBL, SRP-14, annexin 6, TMP2, HCLS1, and lactotransferrin (Data S4). Laminin subunit alpha-1 mutated peptide ( Figure 7A) is only detected in EC grade II tissue, and HDGF protein is identified in SLN grade I (Data S4). Correlation between SLN grades and EC grades has shown that CO1A1, histone H1A, histone 2A2C, myosin 9, ACTBL1, and HDGF are in common with CO1A1 mutated peptides, which were found in all samples (Data S4; Figure 7). Ghost proteins We have previously established the presence of proteins derived from non-canonical human open reading frames promoting cancer metastasis in high-grade serous carcinoma (HGSC) and glioma. [33][34][35][36][37] From our proteomics data, 36 alternative proteins were identified, with more than 80% derived from non-coding RNA (ncRNA), and 10% of the mRNA coding for RefProt is derived from the 5 0 UTR, 11% from the 3 0 UTR, and 3% from a shift in the CDS (coding sequence) (Data S5). Eleven are common to all tissues (Data S5). Six are common to all grades of sentinel nodes and grade II of EC tissues (Data S5). Only DISCUSSION In this work, based on spatially resolved proteomics analysis of grade I-III clinical samples derived from individuals with EC and SLNs, we establish the presence of common proteomics markers that are grade dependent. Compared with recent studies using SWATH-MS (Sequential Window Acquisition of all Theoretical Mass Spectra) proteomics on grade II and grade I EC, we identified all of their validated markers; i.e., CAPS, PKM, AZU1, CNN1, S100A8, STMN1, and CTSG proteins. 38 These proteins are known to have interactions with drugs; some of these are US Food and Drug Administration (FDA)approved drugs, such as SMARCA4. 39 However, these markers are not used from SLN analyses to predict EC status. In our study, we identify markers that cross-correlate nodal status with EC grade. Among the markers identified in EC and SLNs, 3 markers (PRSS3, PTX3, and ASS1) are considered poor outcome gene markers in EC, and 2 (ALDH2 and ANX1) are positive outcome markers. These markers are linked to tumor cell motility. In fact, PTX3 contributes to melanoma cell invasion 40 as well as in other cancers, 41 through a Toll-like receptor 4 (TLR4)/nuclear factor kB (NF-kB) signaling pathway. ASS1 protein is required for cancer cell migration. 42,43 PRSS3 is also known to be involved in tumor metastasis. 
44 PRSS3 upregulates VEGF (vascular endothelial growth factor) expression via the PAR1-mediated ERK (extracellular signal-regulated kinases) pathway and promote tumors progression and metastasis. 45 The two positive outcome markers (ANXA1 and ALDH2) are also involved in tumor motility, but in its modulation. ALDH2 is known as one of the key regulators in tumor metastasis, especially in the lungs. 46 ALDH2 functions as a mitogen-activated protein kinase (MAPK) upstream to inhibit cell proliferation and migration, promote cell apoptosis, and alter EMT by elevating E-cadherin and attenuating vimentin. The role of ANXA1 in tumor motility and metastasis is still unclear. 47 Loss of annexin A1 expression has been observed in breast, gastric, esophageal, prostate, bladder, head and neck, laryngeal, and oral cancer and correlates with tumorigenesis and malignant tendency. However, its expression has also been linked to advanced stages of specific cancers as well as metastatic tendency and degree of differentiation. In this case, ANXA1 expression increased inversely to epithelial markers such as E-cadherin and cytokeratin (CK) 8 and 18 and proportionally to mesenchymal ones such as vimentin, ezrin, and moesin. 48 ANXA1 seems to regulate metastasis by favoring cell migration/invasion intracellularly as a cytoskeleton remodeling factor and extracellularly as a ligand of formyl peptide receptor. 49,50 Most of these markers have not been reported previously in EC at the protein level, based on IHC data, except the ASS1 protein. We validate these markers using immunofluorescence and confirmed their specific presence in EC, offering new tools for pathologists for diagnosis. Other markers identified as poor outcome markers (YBX1, SLC1A5, ALDOA, ATP1A1, and UBA2) have already been tested on TMA, and their presence in high abundance was confirmed, as we quantified ( Figure S4B). We also detected some positive outcome markers (CD74, GOLM1, SLC9A3R2, and SP100) according to overall survival (from the TCGA and pathological protein atlas). 17,18 These results provide a clear answer regarding the disagreement among cancer centers regarding the value of lymph node dissection. It is clear that lymph node metastasis is one of the most important prognostic factors in EC. EC and SLN data identified key proteins that can be used to establish this correlation. In fact, PTMA, ACTL6A, SHMT2, RBM25, and RBM4 are detected in SLN and EC grades I and II and in EC grade III, whereas SUB1 and ETHE1 are expressed in SLN and EC grades II and III. Of interest, the DCD protein is found in SLN and EC grades I and II. The YBX2, NOTUM, and RANDBP1 proteins are found specifically in SLN and EC grade III. NUP210 is specific to SLN and EC grade II. PADI4, MUC5B, GOLM1, MNDA, CHI3L1, PTX3, SP100, MMP8, AZU1, and SLC9A3R2 are specific to grade I SLNs and EC. We can also include proteins with specific mutations as signature markers. Col1A1, histone H1A, histone 2A2C, myoglobin 9, ACTBL1, and HDGF were detected in all cancer tissue, whatever the grade, but lamin1 mutated peptides are only detected in grade II EC tissue and HDGF only in SLN grade I. In addition to these data, another family of proteins has also been identified (i.e., Alt-CALCUL1, Alt-HMGN2P3, and Alt-RP11-279O17) that is overexpressed only in SLN grades II and III. Based on these specific markers identified in EC and SLNs and ones that are specific to grade I, II, or III, stratification of affected individuals can be established, which will guide treatment decisions. 
As we detected the 5 markers which were analyzed blind in SLN tissues and validated in EC tissues, SLN mapping is feasible and can accurately predict nodal status in women with EC based on our markers. In this way, the exact underlying molecular mechanisms relating SLN and EC grades have now been highlighted by systemic biology-proteomicsbased study, which facilitates accurate EC detection and can be used as a therapeutic endpoint target. Limitations of study The study was performed on a cohort of 41 samples from 15 individuals. A larger cohort is needed to definitively establish the markers we identified and validated for routine molecular pathology. Such a large validation cohort is under construction at the national level and will be tested as a multicentric assay though a national PHCR (Programme hospitalier de recherche clinique) from the National Institute of Cancer. STAR+METHODS Detailed methods are provided in the online version of this paper and include the following: DECLARATION OF INTERESTS The authors declare no competing interests. Materials availability This study did not generate new unique reagents. Data and code availability The raw data and result files used for analysis were deposited at the ProteomeXchange Consortium 54 (http://proteomecentral. proteomexchange.org) via the PRIDE partner repository with the dataset identifier PXD020410. EXPERIMENTAL MODEL AND SUBJECT DETAILS A cohort of 41 samples was selected for this study including 6 healthy Endometrium, 4 Grade I endometrioid, 8 Grade II endometrioid, 4 Grade III endometrioid, 3 Normal sentinel nodes, 4 Grade I sentinel nodes, 8 Grade II sentinel nodes, 4 Grade III sentinel nodes. The age, type of tumors and patient information has been described in Table S1. The normal tissue was analyzed from a healthy normal tissue section (presenting no abnormalities on IHC) from a patient with p53 signature lesion. Prior to the experiments, patients were asked to sign an informed consent, authorization form describing the experimental protocol and instrument and exposure to the hazards. No personal information and data, such as the name of the individuals and identifiers were used in these experiments. A randomized number was assigned to everyone. This clinical study (Sentirad-1502, EudraCT: 2015-001732-38): Randomized study comparing sentinel node (SN) policy to current French initial staging protocols in early-stage endometrial carcinomas at intermediate and high risk of recurrence is supported by the French cancer clinical research projects funding program 2014 (National Cancer Institute, INCa). The endometrioid and sentinel nodes biopsies were obtained from patients of the Centre Oscar Lambret (Lille, France). All experiments were approved by the local Ethics Committee (CPP Nord-Ouest I on July 20th, 2015, CPP 03/008/2015) in accordance with the French and European legislation on this topic. The study complies with the MR004 reference methodology adopted by the French Data Protection Authority (Paris, France), and we checked that patients did not object to the use of their data and biological samples for research purposes. Serial sections of sentinel nodes were realized every 2-3 mm and then formalin-fixed and paraffinembedded. On these tissues, HPS and immunostaining of pancytokeratins (CKAE1/3, CKAE1/4, CKAE1/5), P53, L1CAM were performed. Twenty-four samples were selected for spatially resolved shotgun microproteomic analyses (Table S1). 
This sample selection has been performed by taking into account the ability to find in the same patient the same Grade in endometrial carcinoma and in sentinel nodes tissues. This cohort is considered a diverse representative of the population as each sample is biologically investigated in triplicate. METHOD DETAILS Antigen retrieval All the slides were unmounted, and the resin was removed by soaking them overnight in xylene and rinsing them with xylene and ethanol baths. The tissues are rehydrated using 5 0 each successive bath of decreasing ethanol degree (2x95 , 1x30 ) and two baths of 10mM NH 4 HCO 3 buffer. Then, antigen retrieval was performed to relax the tissue and increase the trypsin access to biomolecules. Slides are dipped in 90 C pH9 20mM Tris for 30 minutes, rinsed in two baths of 10mM NH 4 HCO 3 for 2 minutes each, and dried under vacuum at room temperature. Microproteomic analysis Areas of interest have been selected on the tissue. These areas were digested, extracted and then analyzed in nanoLC-MS. Trypsin digestion on tissue Tryptic digestion was performed using a Chemical Inkjet Printer (CHIP-100, Shimadzu, Kyoto, Japan). The regions were selected from the tissue scanned on the software, and then the trypsin solution (40mg/mL, 50mM NH 4 HCO 3 buffer) was deposited on these region defined to 1mm 2 for 2h. During this time, the trypsin was changed every half-hour to avoid the autolytic digestion. With 350 cycles and 450pl per spot, a total of 6.3mg was deposited. To stop the digestion, TFA 0.1% was spotted during 25 cycles. Liquid extraction After microdigestion, the content of the spot was collected by liquid microjunction using the TriVersa Nanomate (Advion Biosciences Inc., Ithaca, NY, USA) using the parameters of liquid extraction and surface analysis (LESA). 3 Mixtures of different extraction solvents have been prepared and are composed of 0.1% TFA, ACN / 0.1% TFA (8: 2, v / v) and MeOH / 0.1% TFA (7: 3, v / v). A complete LESA sequence run 2 cycles for each mixture solvent. The first step was to aspirate 2ml of solvent into a tip, 0.6 ml was deposited on the tissue to create a liquid microjunction with 10 aspirate-dispense cycles to perform the extraction, and the extracted solution was collected in 0.2 mL weak binding tubes. For each interesting spot, 2 sequences are grouped together in the same bottle. NanoLC-ESI-MS 2 After liquid extraction, samples were freeze-dried in a SpeedVac concentrator (SPD131DPA, ThermoScientific, Waltham, Massachusetts, USA), reconstituted with 10mL 0.1% TFA and subjected to solid-phase extraction to remove salts and concentrate the peptides. This was done using a C-18 ZipTip protocol (Millipore, Saint-Quentin-en-Yvelines, France). The pipettor was set to 10mL and the ZipTip pipette tip was washing by performing 5 aspirate-dispense cycles in ACN, and equilibrated by 5 aspirate-dispense in TFA 0.1%. To bind peptides, 20 aspirate-dispense was performed in the sample and 10 times in TFA 0.1% to remove salts. Peptides were eluted with 20ml of ACN/0.1% TFA (8:2, v/v) by realizing 20 aspirate-dispense, and then the samples were dried for storage. Before analysis, samples were suspended in 20mL ACN/0.1% FA (2:98, v/v), deposited in nanoLC vials and 10mL were injected for analysis. 
The separation prior to the MS used online reversed-phase chromatography coupled with a Proxeon Easy-nLC-1000 system (Thermo Scientific) equipped with an Acclaim PepMap trap column (75 mm ID x 2 cm, Thermo Scientific) and C18 packed tip Acclaim PepMap RSLC column (75 mm ID x 50 cm, Thermo Scientific). Peptides were separated using an increasing amount of acetonitrile (5%-40% over 145 minutes) and a flow rate of 300 nL/min. The separation column was kept at 50 C. The LC eluent was electrosprayed directly from the analytical column and a voltage of 2 kV was applied via the liquid junction of the nanospray source. The chromatography system was coupled to a Thermo Scientific Q-Exactive Orbitrap mass spectrometer. The mass spectrometer was programmed to acquire in a data-dependent mode defined to analyze the 10 most intense ions of MS analysis (Top 10). The survey scans were acquired in the Orbitrap mass analyzer operated at 70,000 (FWHM) resolving power. The MS analysis was performed with an m/z mass range between 300 to 1600, an AGC of 3e6 ions and a maximum injection time of 120 ms. The MS/MS analysis was performed with an m/z mass range between 200 to 2000, an AGC of 50000 ions, a maximum injection time of 60 ms and the resolution was set at 17,500 FWHM. Higher Energy Collision Dissociation (HCD) was set to 30%. Precursors ions with charges states > +1 and < +8 were kept for the fragmentation, with a dynamic exclusion time of 20 s. Data interrogation and analyses All MS data were processed with MaxQuant 51,52 (Version 1.5.6.5) using the Andromeda 55 search engine. The proteins were identified by searching MS and MS/MS data against the Decoy version of the complete proteome for Homo sapiens in the UniProt database (Release March 2017, 70941 entries) combined with 262 commonly detected contaminants. Trypsin specificity was used for digestion mode, with N-terminal acetylation and methionine oxidation selected as a variable modification. We allowed up to two missed cleavages. Initial mass accuracy of 6 ppm was selected for MS spectra, and the MS/MS tolerance was set to 20 ppm for the HCD data. For the identification parameters, FDR at the peptide spectrum matches (PSM) and protein level was set to 1%, and a minimum of 2 peptides per protein in which 1 was unique. Relative label-free quantification of the proteins was conducted into Max-Quant using the MaxLFQ algorithm 56 with default parameters. Analysis of the identified proteins was performed using Perseus software (http://maxquant.net/perseus/) (version 1.6.12.0). The file containing the information from the identification (proteinGroup.txt) was used. Briefly, the LFQ intensity of each sample were downloaded in Perseus and the data matrix was filtered by removing the potential contaminants, reverse and only identified by site. The LFQ intensity was logarithmized (log2[x]). Categorical annotation of the rows was used to define the different group. Venn diagram and principal component analysis (PCA) were done to compare the protein content of each sample. Statistical multiple-sample tests were performed using ANOVA with a p value of 1%. Normalization was achieved using a Z-score with matrix access by rows. Only proteins that were significant by ANOVA were used. The hierarchical clustering and profile plot of only the statistically significant proteins were all performed and visualized by Perseus. Each protein cluster were selected and extracted for biological analysis. 
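The Perseus steps described above (removing potential contaminants, reverse hits, and "only identified by site" entries from proteinGroups.txt, log2-transforming LFQ intensities, and z-scoring rows before hierarchical clustering) can be approximated outside Perseus. The sketch below assumes MaxQuant's standard proteinGroups.txt flag columns and LFQ intensity column naming; it illustrates the same preprocessing logic and is not the authors' actual script.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage

# Load the MaxQuant protein-level output (tab-separated).
pg = pd.read_csv("proteinGroups.txt", sep="\t", low_memory=False)

# Remove potential contaminants, reverse (decoy) hits and
# 'only identified by site' entries, as done in Perseus.
for flag in ["Potential contaminant", "Reverse", "Only identified by site"]:
    if flag in pg.columns:
        pg = pg[pg[flag] != "+"]

# Collect LFQ intensity columns and log2-transform (0 -> missing).
lfq_cols = [c for c in pg.columns if c.startswith("LFQ intensity ")]
log_lfq = np.log2(pg[lfq_cols].replace(0, np.nan))
log_lfq.index = pg["Majority protein IDs"]

# Row-wise z-score, matching the "Z-score by rows" normalization in Perseus.
z = log_lfq.sub(log_lfq.mean(axis=1), axis=0).div(log_lfq.std(axis=1), axis=0)

# Hierarchical clustering of proteins with complete rows (heatmap input).
complete = z.dropna()
tree = linkage(complete.values, method="average", metric="euclidean")
print(f"{len(complete)} proteins clustered")
```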
Functional annotation and characterization of the identified proteins were performed using FunRich software (version 3) and STRING (version 9.1, http://stringdb.org). 57 Pearson's correlation coefficient and matrix representation were generated in R software using corrplot package. Gene Set Enrichment Analysis (GSEA) and Cytoscape software (version 3.6.1) were used for the biological process analysis of the clusters selected from the heatmap. Subnetwork Enrichment Pathway Analyses and statistical Testing The Elsevier's Pathway Studio version 10.0 (Ariadne Genomics/Elsevier) was used to deduce relationships among differentially expressed proteomics protein candidates using the Ariadne ResNet database. 58,59 ''Subnetwork Enrichment Analysis'' (SNEA) algorithm was selected to extract statistically significant altered biological and functional pathways pertaining to each identified set of protein hits among the different groups. SNEA utilizes Fisher's statistical test set to determine if there are nonrandom associations between two categorical variables organized by specific relationships. Integrated Venn diagram analysis was performed using ''the InteractiVenn'': a web-based tool for the analysis of complex datasets. 60 Mutation identification Proteins identification was also performed using the mutation-specific database. 32 XMan v2 database contains 2 539 031 mutated peptide sequences from 17 599 Homo sapiens proteins (2 377 103 are missense and 161 928 are nonsense mutations). The interrogation was performed by Proteome Discoverer 2.3 software and Sequest HT package, using an iterative method. The precursor mass tolerance was set to 15 ppm and the fragment mass tolerance was set to 0.02 Da. For high confidence result, the false discovery rate (FDR) values were specified to 1%. A filter with a minimum Xcorr of 2 was applied. The generated result file was filtered using a Python script to remove unmutated peptides. All mutations were then manually checked based on MSMS spectra profile. Cell Reports Medicine 2, 100318, June 15, 2021 e3 Article ll OPEN ACCESS Ghost proteins identification RAW data obtained by nanoLC-MS/MS analysis were analyzed using Proteome Discoverer V2.2 (Thermo Scientific) with the following parameters: Trypsin as an enzyme, 2 missed cleavages, methionine oxidation as a variable modification and carbamidomethylation of cysteines as static modification, Precursor Mass Tolerance: 10 ppm and Fragment mass tolerance: 0.6 Da. The validation was performed using Percolator with an FDR set to 1%. A consensus workflow was then applied for the statistical arrangement, using the high confidence protein identification. The protein database was uploaded from Openprot (https://openprot.org/) and included RefProt, novel isoforms, and AltProts predicted from both Ensembl and RefSeq annotations (GRCh38.83, GRCh38.p7). 35 Confirmatory immunohistochemistry analyses Grading group validation was performed using antibodies directed against ALDH2, ANXA1, PRSS3, ASS1 and PTX3. After dewaxing and antigen retrieval with citrate buffer, the tissues were incubated with a primary antibody at 4 C overnight, followed by application of a secondary antibody (Alexa fluor conjugated antibody, 1/1 000 dilutions) for 1 hour at RT. We used the following primary antibodies: ALDH2 (Invitrogen; 1/500 dilution), ANXA1 (OriGene, 1/50 dilution), PRSS3 (Invitrogen, 1/100 dilution), ASS1 (Abcam; 1/100 dilution) and PTX3 (Abcam, 1/100 dilution). All slides were imaged on the Zeiss LSM700 confocal microscope. 
Three pictures were taken for each tumor section.
2021-07-01T05:14:43.162Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "c79c4c831a3ccb7f95a4e5469e7eb7b8a655b156", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2666379121001610/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c79c4c831a3ccb7f95a4e5469e7eb7b8a655b156", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
247967640
pes2o/s2orc
v3-fos-license
A Study to Compare the Effect of Conventional Knee Exercises & Macquarie Injury Management Group (Mimg) Protocol on Pain & Functional Mobility in Patients with Primary Osteoarthritis Knee- An Interventional Study Introduction: Osteoarthritis is a disorder of the diarthrodial joint, a slow degenerative disease clinically characterized by pain, loss of range of motion. On x-ray seen as reduced joint space, formation of osteophytes and deformity. Manual therapy has been proved to be an effective treatment method in knee osteoarthritis (OA), but there is a need to investigate the effectiveness of MIMG manual therapy technique. There is limited evidence on the effect of MIMG protocol in knee OA. Aim: To find the effectiveness of MIMG protocol on pain and range of motion in knee osteoarthritis. Methodology38 subjects with the diagnosis of knee OA stage II and III on Kellegren Lawrence classification were recruited from Shree K.K. Sheth physiotherapy Centre, Rajkot, Gujarat. An interventional study was conducted on 38 subjects. Result: The outcome measures MIMG protocol is effective as conventional exercises in treating OA knee. Conclusion: MIMG can be suggested as one of the treatment protocol. INTRODUCTION Osteoarthritis (OA) is a degenerative joint disease and most common form of chronic disorder of synovial joints 1 in ageing population. 2 It is the most frequent joint disease with a prevalence of 22% to 39% in India. 2 Knee Osteoarthrosis (OA) is one of the most prevalent musculoskeletal complaints worldwide, affecting 30-40% of the population by the age of 65 years. Primary osteoarthritis of the knee is associated in 90% of cases with varus deformity. Worldwide estimates indicate that 9.6% of men and 18% of women above 60 years of age have symptomatic OA 2 but knee involvement is seen equally in both genders from 55-64 years of age. 4 The pathophysiology states that osteoarthritic changes are due to an imbalance between degradation and synthesis process of the articular cartilage. The most widely used classification scheme for the diagnosis OA is based on the radiological appearance of the joint, which is known as Kellgren and Lawrence classification of Osteoarthritis. Grade I and II according to Kellgren-Lawrence Grading Scale 4 or the participants fulfilling the following criteria of the American College of Rheumatology (ACR). (ACR): ACR clinical and radiological criteria: 1) Knee pain for most days of the prior month. 2) Osteophytes at joint margins on X-ray. 3) Synovial fluid typical of osteoarthritis (laboratory). 4) Age 40 years. 5) Morning stiffness 30 min. 6) Crepitus on active joint motion. OA present if items 1, 2 or 1, 3, 5, 6 or 1, 4, 5, 6 are present. 3 Radiographs add little to the accuracy of the clinical diagnosis. But in Osteoarthritis of the knee muscle strength and pain are more explanatory of functional loss than radiograph findings. According to a study done on by Roddy et al., (2005), aerobic walking and quadriceps strengthening exercises helped patients to decrease pain and improve functional activities. However, there are few studies regarding the effects of exercise on postural stability and balance in OA patients. 5 The Macquarie Injury Management Group (MIMG) knee protocol is a new technique in manual therapy developed by Dr. Henry Pollard, a practicing sports clinical scientist based in Sydney. MIMG knee protocol is an approach which includes two techniques myofascial mobilization and myofascial manipulation. It was introduced by the MIMG group, Australia. 
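The ACR decision rule quoted above (OA is present if items 1 and 2, or items 1, 3, 5, and 6, or items 1, 4, 5, and 6 are present) is easy to misread in prose, so a small sketch of the same logic is given below; the function name and the example patients are purely illustrative and not part of the study protocol.

```python
def acr_knee_oa(items: set[int]) -> bool:
    """Return True if the ACR clinical/radiological rule for knee OA is met.

    Items follow the numbering used in the text:
    1 knee pain, 2 osteophytes, 3 typical synovial fluid,
    4 age criterion, 5 morning stiffness criterion, 6 crepitus.
    """
    qualifying_combinations = [
        {1, 2},
        {1, 3, 5, 6},
        {1, 4, 5, 6},
    ]
    return any(combo <= items for combo in qualifying_combinations)

# Hypothetical examples:
print(acr_knee_oa({1, 4, 5, 6}))  # True: pain + age + stiffness + crepitus
print(acr_knee_oa({2, 5, 6}))     # False: knee pain (item 1) is missing
```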
The techniques involved are myofascial mobilization technique and myofascial manipulation technique. 6 Short-wave diathermy (SWD) is a high frequency current generated by an oscillator circuit that allows electrons to oscillate at a frequency of 27.12 MHz. 7 Pain can be measured for severity on a visual analog scale. It is one of the most basic pain measurement tools. The reliability of VAS is 0.60 to 0.77 and validity is 0.64 to 0.84. 8 The WOMAC is a disease-specific self-report multidimensional questionnaire assessing pain, stiffness, and physical functional disability. The original WOMAC is available in two formats, visual analog scales (VAS) and five Likert boxes, with similar metric properties. 9 METHOD OF DATA COLLECTION A total 38 patients were selected for study by giving consideration to inclusion and exclusion criteria. All the subjects were explained about the purpose and the test procedures & written consent was obtained. Ethical clearance was given by Saurashtra University panel, Rajkot, Gujarat, India. SELECTION CRITERIA: Inclusion criteria: • age-40 to 70 years of age. 5 • Gender-both males and females. • Subjects who are clinically diagnosed with primary OA knee according to American College of Rheumatology (ACR). Exclusion criteria:- • Patients with history of hip and/or back injury and lower-limb joint replacement. • Participants who had a joint replacement surgery, history of meniscal or other knee surgery in past 6 months. • fractures at knee and hip joint, deformity at lower limb, osteoporosis, neurological deficits, systemic illness & metabolic disorder. Materials: • Pen. • Record and data collection sheet. Measurement procedure The A and B group subjects took part in identical pre-and post-test protocols. Group A: conventional Q-drills and short wave diathermy. • Static quadriceps. GROUP B-MAQUARIE INJURY MANAGEMENT GROUP PROTOCOL (MIMG) 6 The intervention group received a MIMG (Macquarie Injury Management Group) chiropractic knee protocol. It consists of a non-invasive myofascial mobilization procedure and an impulse thrust procedure performed on the symptomatic knee of participants. In cases where OA was bilateral; mobilization was performed on both knees. Myofascial mobilization technique: The patient laid supine near the homolateral edge of the couch. The practitioner sat on the homolateral side of the couch with the cephalad thigh under the leg of patient's involved limb and superior to the patient's knee. The patient's lower hamstring area rested on the practitioner's thigh with their knee able to rest in 90 0 of flexion. The practitioner had two choice of contacts:1) a pincer contact with the thumb and index either side of the medial and lateral superior poles of the patella. 2) A reinforced web contact supporting the medial and lateral superior poles of the patella. The second position is recommended for those practitioners who have a hypermobile thumb. The patient was then instructed to begin actively extending their knee through the pain free range of motion while the practitioner maintained contact at the patella. The force through the patella is in a plane applied at a tangent to the angle of the knee to avoid a compressive load. The patient extended the knee as far as possible in a pain free manner from the initial starting position. The practitioner maintained contact at the patella during this movement. This was repeated upto 10 times. 
Myofascial manipulation technique: the patient laid supine and the therapist stood on the same side of the plinth with the patient's leg grasped between the thighs to apply a distractive force to produce traction over the tibio-femoral joint. The practitioner contacted the knee with hands either side. Both thumbs contact on the tibial tuberosity and the fingers wrap around the knee to the distal end of the popliteal space. A thrust was then delivered, in the caudal direction in order to mobilize the joint in a near full extension position. Treatment given in both the groups: Both the groups were given shortwave diathermy for 15min. 11 STATISTICAL ANALYSIS Statistical software: All statistical analysis was done by SPSS statistics version 20.0 for windows software. Microsoft excel was used to calculate mean and Standard Deviation (SD), and to generate graphs and tables. Statistical test: Means and Standard Deviation (SD) were calculated as a measure of central tendency and measure of dispersion respectively. Within group comparison of WOMAC and VAS value was analyzed by Wilcoxon signed rank test and between group comparison of WOMAC and VAS value was analyzed by Wilcoxon sum rank test or Mann-Whitney U test. Pretreatment and post-treatment data of active knee flexion extension range of motion and visual analog scale was analyzed by Paired t-test and Wilcoxon signed rank test respectively and comparison between two groups of active knee flexion extension range of motion and visual analog scale was analyzed by unpaired t-test and Mann-Whitney U test (Wilcoxon sum rank test). Level of significance (p-value) was set to 0.05. RESULT Thirty eight subjects were randomly divided into two Groups:-Group A conventional Q-Drills treatment (n=19) and Group B MIMG protocol (n=19). Outcome measures WOMAC, ROM and VAS for pain were taken before and after completion of twelve sessions of treatment (6 times/ week). The below findings suggest that there is statistically significant difference for pre-treatment and post-treatment comparison for WOMAC in Group-A (Conventional Q-Drills) and Group-B (MIMG Protocol). There is a statistically significant difference in pre-treatment and post-treatment comparison of VAS in both the groups. There is no significant difference for between group comparisons of WOMAC & VAS. Hence, null hypothesis was accepted and experimental hypothesis was rejected. DISCUSSION In present study, when the values of pre-treatment and posttreatment VAS and WOMAC were analysed, it was statistically significant in both the groups but when comparison was done between them, both the techniques were equally effective in reducing pain and improving functional mobility. Conventional exercise therapy is regarded as the cornerstone of conservative management of the OA knee. 2,3 Exercise regimens containing repetitive movements increase the ability of the person's control over joint movements in all positions. Dynamic stability may help to control abnormal joint translation that occurs during daily movements. This shows as an improvement in WOMAC score. Knee-extension exercises provide additional vastus medialis obliques (VMO) activity and thus have a greater impact on strengthening the VMO. The VMO functions to control patella alignment by pulling the patella medially during extension and under normal knee function acts as a dynamic medial stabilizer of the patella once the knee reaches terminal extension 12 The other objective of this study was to find out the effectiveness of MIMG protocol. 
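The within-group and between-group comparisons described in the statistical analysis above (Wilcoxon signed-rank test for pre- versus post-treatment scores, Mann-Whitney U test between Group A and Group B, with significance set at 0.05) can be reproduced with standard library calls. The sketch below uses made-up WOMAC scores purely for illustration; these are not the study's data.

```python
from scipy import stats

alpha = 0.05

# Hypothetical WOMAC scores (lower = better) before and after treatment.
group_a_pre, group_a_post = [62, 58, 70, 65, 55], [48, 41, 60, 50, 42]
group_b_pre, group_b_post = [60, 66, 57, 63, 59], [45, 52, 44, 49, 47]

# Within-group comparison: Wilcoxon signed-rank test (paired, non-parametric).
w_a = stats.wilcoxon(group_a_pre, group_a_post)
w_b = stats.wilcoxon(group_b_pre, group_b_post)
print(f"Group A pre vs post: p = {w_a.pvalue:.3f}")
print(f"Group B pre vs post: p = {w_b.pvalue:.3f}")

# Between-group comparison of post-treatment scores: Mann-Whitney U test.
u = stats.mannwhitneyu(group_a_post, group_b_post, alternative="two-sided")
verdict = "significant" if u.pvalue < alpha else "not significant"
print(f"Group A vs Group B (post): p = {u.pvalue:.3f}, {verdict}")
```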
Gail Deyle (2000) concluded from their interventional study that a combination of manual therapy and supervised exercise is effective in improving walking distance and decreasing pain, dysfunction, and stiffness in patients with OA knee. The MIMG protocol used for the intervention consisted of a non-invasive myofascial mobilization procedure and an impulsive thrust procedure specific to the patello-femoral articulation. The patient is able to actively articulate through knee flexion without excessively tightening the quadriceps, which would otherwise create a vector that compresses the patella onto the femur. The mobilization procedure stretches the joint capsule in the sagittal plane, gently mobilizes any restriction to normal movement within the limits of patient tolerance, and likely loosens adhesions at the patello-femoral articulation. Together these effects allow the knee greater mobility with less effort, restriction, and pain. The second part of the procedure utilizes a manual therapy technique that is not under the voluntary control of the patient. It involves the application of longitudinal traction to the tibio-femoral joint in a manner designed to distract the knee and mobilize the joint in a near-full extension position. An impulsive thrust directed in a caudal direction is delivered to the patient's knee. The object of this procedure is not to produce joint cavitation but rather to mobilize the joint. In cases of tibial rotational restriction, the pre-manipulative setup could include a rotated tibia as a starting point. CONCLUSION The results of the present study showed that patients in both groups, conventional knee exercises and the MIMG protocol, had relief from pain and other symptoms, increased activities of daily living, and improved knee-related quality of life. Hence, it is concluded that both techniques were effective for osteoarthritis of the knee joint. It can further be recommended that the two techniques be included together in the OA treatment regimen for better patient outcomes. Table legends: the Wilcoxon signed-rank test was used for the pre-treatment versus post-treatment comparison of WOMAC and VAS within Group A and Group B, and the Mann-Whitney U test was used for the between-group comparisons; the tables and graphs report the mean and SD of WOMAC and VAS for both groups.
2022-04-06T15:18:59.975Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "4c606f62186391d155b40d55fcc0da51bf4191cc", "oa_license": null, "oa_url": "https://doi.org/10.31782/ijcrr.2022.14703", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "9a7fe146ff6696eda216ca8368cc21de6c74c177", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
268758885
pes2o/s2orc
v3-fos-license
Two new species of the Cnemaspisgalaxia complex (Squamata, Gekkonidae) from the eastern slopes of the southern Western Ghats Abstract Two new species allied to Cnemaspisgalaxia are described from the eastern slopes of the south Western Ghats, Tamil Nadu, India. Both new species are members of the ornata subclade within the beddomei clade. The two new species can be easily distinguished from all other members of the beddomei clade and each other by a combination of nonoverlapping morphological characters such as small body size, distinct colouration of both sexes, the number of dorsal tubercles around the body, the number or arrangement of paravertebral tubercles, the number of midventral scales across the belly and longitudinal ventral scales from mental to cloaca, besides uncorrected pairwise ND2 and 16S sequence divergence of ≥ 7.4% and ≥ 2.7%. The two new species are distributed from low elevation, deciduous forests of Srivilliputhur, and add to the five previously known endemic vertebrates from Srivilliputhur-Megamalai Tiger Reserve. The most diverse of the three subclades of the beddomei clade is the ornata subclade which includes nine valid species distributed from low elevations on the eastern slopes to high elevations (~ 200-1000+ m a.s.l.) in the Western Ghats south of Srivilliputhur (Fig. 1).Most species are low to mid elevation (~ 200-700 m a.s.l.) and are distributed on the eastern slopes as well as through some low passes onto the western slopes, and only C. ornata and the recently described C. rashidi are high elevation species found at elevations > 1,000 m a.s.l.(Sayyed et al. 2023a, b).Members of the ornata subclade are all strongly sexually dichromatic, diurnal, and scansorial, found on rocks, buildings and occasionally trees (Sayyed et al. 2019(Sayyed et al. , 2023a, b;, b;Pal et al. 2021;Khandekar et al. 2022).Seven of these species have been described since 2019, suggesting the diversity of this subclade is still incompletely known (Sayyed et al. 2019(Sayyed et al. , 2023a, b;, b;Pal et al. 2021;Khandekar et al. 2022). As part of a project on the lizards of Tamil Nadu, we surveyed the southern Western Ghats from 2018-2022, specifically targeting known species of Cnemaspis as well as potential habitats that had not been previously sampled.We were able to collect most described species of the ornata subclade as well as multiple unnamed divergent lineages, two of which were subsequently described as C. rashidi and C. sundara (Sayyed et al. 2023a, b).In this paper, we provide molecular data from new localities for C. galaxia, C. nairi, C. nigriventris, C. rashidi, and C. sundara and describe two new species allied to C. galaxia.We also provide a brief note on the Code of Ethics and how it is rarely followed in the Indian context, and call for more collaborative research. 
Taxon sampling Surveys were conducted in the early morning until a few hours after dark, specimens were observed on rocks, tree trunks, and collected by hand, followed by euthanasia using isoflurane after taking colour photos in life.Liver or tail tissues of at least two individuals of each new species/per locality were collected in molecular grade ethanol and subsequently stored at -20 °C for genetic analysis.Specimens were fixed in 8% formalin for ~ 12-24 h, washed and kept in tap water for ~ 24 h, and transferred to 70% ethanol for long-term storage.Collection permit was issued by the Tamil Nadu Forest Department (see acknowledgements), and collection protocols cleared by an inhouse ethics committee.Specimens are deposited in the Museum and Research Collection Facility at National Centre for Biological Sciences, Bengaluru (NRC-AA). Molecular data and analyses We generated new sequences for 25 individuals representing five known species and three divergent lineages of the ornata subclade from ~ 18 localities (Fig. 1, Table 1).We targeted two mitochondrial genes that have been used in Indian Cnemaspis phylogenies, the protein coding ND2 and the large ribosomal subunit (16S).We extracted DNA from liver or tail-tips using the Qiagen DNeasy Blood and Tissue Extraction kit.We used the Macey et al. (1997) primers L4437 and H5934 to PCR amplify ND2 with L4437 and H5540 used for sequencing, and 16SA and 16SB (Palumbi et al. 1991) to amplify and sequence 16S; with PCR and sequencing outsourced to Barcode Biosciences, Bangalore.We combined the new sequences with published sequences for the beddomei clade using members of the wynadensis clade as outgroups ( 2021; Khandekar et al. 2022;Sayyed et al. 2023a, b).Sequences were aligned in MEGA 5.2 (Tamura et al. 2011) using CLUSTALW (Thompson et al. 1994) with default settings.The ND2 sequences were translated to amino acids to check for erroneous stop codons, which were absent, confirming we had sequenced the targeted mitochondrial protein coding gene.Pairwise uncorrected sequence divergence was calculated in MEGA 5.2 using the pairwise deletion option for each marker.We reconstructed phylogenetic relationships for the ND2 and 16S data separately (not shown as both mitochondrial markers were largely congruent) as well as in a concatenated analysis, using Maximum Likelihood (ML) in RaXML HPC 8.2.12 (Stamatakis 2014) and Bayesian Inference (BI) in MrBayes 3.2.7 (Ronquist and Huelsenbeck 2003).The best-fit models of sequence evolution and partitioning scheme were selected using the Bayesian Inference Criteria in PartitionFinder 2 (Lanfear et al. 2016) with the greedy algorithm (Lanfear et al. 2012) and RaxML (Stamatakis 2014).Three partitions were selected for each codon position of ND2 and one for 16S with the GTR+I+G model for codon position 1 and 16S and GTR+G for the other two codon positions.ML analyses employed 10 independent runs and 1000 non-parametric bootstraps (BS) to assess support.Partitioned BI analyses had parameters unlinked across partitions, four chains each (one cold and three hot) with two parallel runs with 1,000,000 generations sampled every 100 generations and convergence determined based on standard deviation of split frequencies (<< 0.01) and examination of ESS scores (> 200).The sumt function was used to build a consensus tree after removing the first 25% of trees as burn-in, with support assessed using posterior probability (PP) of each node. 
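The pairwise uncorrected sequence divergences reported in the results were calculated in MEGA with the pairwise-deletion option, as described above. The same quantity (the proportion of differing sites, ignoring positions where either sequence has a gap or ambiguous base) can be sketched as follows; the alignment and sequence labels are placeholders, not the actual sequences or GenBank accessions.

```python
# Uncorrected p-distance with pairwise deletion: for each pair of aligned
# sequences, ignore sites where either sequence has a gap/ambiguity and
# report the fraction of remaining sites that differ.
from itertools import combinations

def p_distance(seq1: str, seq2: str) -> float:
    valid = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]          # pairwise deletion
    if not valid:
        return float("nan")
    diffs = sum(a != b for a, b in valid)
    return diffs / len(valid)

# Hypothetical aligned ND2 fragments (illustrative only).
alignment = {
    "lineage_1": "ATGCTAAACCCAT-ACATTT",
    "lineage_2": "ATGTTAAATCCATGACATCT",
    "C_galaxia": "ATGCTGAACCCATGACATTT",
}

for (name1, s1), (name2, s2) in combinations(alignment.items(), 2):
    print(f"{name1} vs {name2}: {100 * p_distance(s1, s2):.1f}%")
```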
Phylogenetic relationships We recovered the three subclades of the beddomei clade, anamudiensis, beddomei, and ornata, each of which received high support (Fig. 2; BS > 95, PP 1; Fig. 1).The two undescribed lineages fall within the ornata subclade and form a well-supported clade (BS 100, PP 1) together with C. galaxia.The two lineages have an uncorrected ND2 p-distance of 10.7% between each other (2.7% on 16S), 7.4-10.1% (3.1-3.4% 16S) from C. galaxia, and ≥ 15.6% (≥ 7.8% 16S) from all other members of the clade (Table 2).The lowest uncorrected ND2 p-distance between previously described species of the ornata subclade is 6.3% (2.2% 16S) between C. nairi and C. nigriventris.We describe the two genetically divergent lineages as new species below.Figs 2-6, Tables 3-5 Type material examined.Diagnosis.A small-sized Cnemaspis, snout to vent length ≤ 34 mm (n = 7).Dorsal pholidosis heterogeneous; smooth to weakly keeled granular scales intermixed with fairly regularly arranged rows of enlarged, weakly keeled, conical tubercles; 10 rows of dorsal tubercles at midbody, 7-14 tubercles in paravertebral rows; ventral scales subequal from chest to vent, smooth, subcircular and subimbricate with rounded end; 29-31 midventral scales across belly, 125-140 longitudinal ventral scales from mental to cloaca; subdigital scansors smooth, unnotched, some divided and others entire, a distinct enlarged metacarpal scale below digit I; 11-14 lamellae under digit I of manus and 11-13 under digit I of pes, 19-22 lamellae under digit IV of manus and 18-25 lamellae under digit IV of pes; males with continuous series of six or seven precloacal pores (n = 6); scales on non-regenerated tail dorsum heterogeneous; small, smooth, subcircular, flattened, subimbricate scales intermixed on anterior one third portion with enlarged, weakly keeled, and weakly conical tubercles forming seven whorls; six tubercles on first three whorl, four tubercles on fourth to seventh whorls, only a pair of paravertebral tubercles each on eighth to 11 th whorls; rest of the tail lacking enlarged tubercles; median row of subcaudals smooth, roughly rectangular, distinctly enlarged, with condition of two enlarged scales alternating with a divided scale.Males with ochre anterior 1/2 of body, single central black dorsal ocellus on neck, a white ocellus on ventrolateral side of neck and one on throat posterior to jaw, venter off-white with dark throat, tail unbanded, females and juveniles brown, juveniles with indistinct mid-dorsal streak. Comparisons with members of beddomei clade.Cnemaspis vangoghi sp.nov.can be easily distinguished from all 16 members of the beddomei clade as well as from C. boiei by a combination of the following differing or non-overlapping characters: A small-sized Cnemaspis, snout to vent length ≤ 34 mm (vs medium-sized Cnemaspis, snout to vent length [40][41][42][43][44][45][46][47][48][49]C. nimbus,C. ornata,C. rashidi,C. rubraoculus,and C. wallaceii;snout to vent length > 50 mm in C. anamudiensis,C. beddomei,C. maculicollis,and C. smaug;snout to vent length ≤ 38 mm in C. azhagu,C. boiei,and C. nigriventris); ten rows of dorsal tubercles at midbody (vs only a few enlarged scattered tubercles at midbody dorsum in C. anamudiensis, two or three rows of . azhagu, eight in C. galaxia, 16-18 in C. nairi, 13 or 14 in C. nigriventris, 12-14 in C. nimbus and C. ornata, 7-9 in C. regalis, 19-22 in C. smaug, six in C. sundara, 14 or 15 in C. 
wallaceii); 125-140 longitudinal ventral scales from mental to cloaca (vs 151-171 longitudinal ventral scales from mental to cloaca in C. azhagu, 154-161 in C. beddomei, 153-159 in C. galaxia, 143-147 in C. nairi, 154-159 in C. nigriventris, 157-165 in C. ornata, 170-172 in C. rashidi, 148-154 in C. regalis, 142-150 in C. smaug, 156-160 Description of the holotype.Adult male in good state of preservation except tail marginally bent towards left and tip is missing, hemipenis partially everted on right and fully on left side, and a 3.1 mm long incision in sternal region for tissue collection (Fig. 2A, B); SVL 32.1 mm, head short (HL/SVL 0.25), wide (HW/ HL 0.68), not strongly depressed (HD/HL 0.40), distinct from neck.Loreal region marginally inflated, canthus rostralis indistinct.Snout 1/2 head length (ES/HL 0.48), 2.5× eye diameter (ES/ED 2.5); scales on snout and canthus rostralis subcircular to elongate, subequal, smooth, weakly conical, much larger than those on forehead and interorbital region; scales on forehead similar to those on snout and canthus rostralis except almost 2× smaller and elongate; scales on interorbital region, occipital, and temporal region even smaller, granular (Fig. 3A).Eye small (ED/HL 0.19); with round pupil; supraciliaries short, larger anteriorly; eight interorbital scale rows across narrowest point of frontal bone; 27 scale rows between left and right supraciliaries at mid-orbit level (Fig. 3A).Ear-opening deep, oval, small (EL/ HL 0.06); eye to ear distance much greater than diameter of eye (EE/ED 1.60) (Fig. 3C).Rostral slightly > 2× as wide (1.5 mm) as high (0.7 mm), incompletely divided dorsally by a strongly developed rostral groove for > 1/2 of its height; a single enlarged, roughly rectangular supranasal on each side, almost 3× larger than upper postnasal, and strongly in contact with each other on snout; a pair of enlarged scales on snout behind internasals, separated from each other by two much smaller, granular scales; rostral in contact with supralabial I, nostril, and supranasal on either side; nostrils oval, surrounded by four postnasals, supranasal, rostral and supralabial I on either side; four roughly circular postnasals on either side, the one touching supranasal largest, gradually decreasing in side posteriorly; two single row of scales separate orbit from supralabials (Fig. 3C).Mental enlarged, subtriangular, marginally wider (2.0 mm) than high (1.6 mm); two pairs of postmentals, inner pair roughly rectangular, shorter (0.9 mm) than mental, separated from each other below mental by a single enlarged median chin shield; inner pair bordered by mental, infralabial I, outer postmental, median chin shield and a single enlarged chin shields on either side; outer postmentals roughly rectangular, slightly smaller (0.6 mm) than inner pair, bordered by inner postmentals, infralabial I and II, and four enlarged chin shields on either side; three enlarged gular scales between left and right outer postmentals; all chin scales bordering postmentals more or less flattened, subcircular, smooth, and smaller than outermost postmentals; scales on rest of throat, much smaller, smooth, subcircular, and subimbricate (Fig. 
3B).Infralabials bordered below by a row or two of slightly enlarged, much elongated scales, decreasing in size posteriorly.Nine supralabials up to angle of jaw and five at midorbital position on each side; supralabial I largest, gradually decreasing in size posteriorly; eight infralabials on left and seven on right side up to angle of jaw, four at midorbital position on left and five on right side; infralabial I largest, gradually decreasing in size posteriorly (Fig. 3C). Body relatively slender (BW/AGL 0.37), trunk < 1/2 of SVL (AGL/SVL 0.42) without spine-like tubercles on flank (Fig. 4A-C).Dorsal pholidosis heterogeneous; smooth to weakly keeled granular scales intermixed with a fairly regularly arranged rows of enlarged, weakly keeled, conical tubercles; granular scales gradually increasing in size towards each flank, largest on mid-flank; granular scales on occiput and nape slightly smaller than paravertebral granules; enlarged tubercles in approximately 10 longitudinal rows at midbody; 12 (left) and 14 (right) tubercles in paravertebral rows (Fig. 4A, C).Ventral scales much larger than granular scales on dorsum, subequal from chest to vent, smooth, subcircular and subimbricate with rounded end; scales on precloacal region and four or five rows on femur distinctly enlarged; midventral scale rows across belly 31; 138 ventral scales from mental to anterior border of cloaca (Fig. 4B).A continuous series of six precloacal pores, femoral pores absent (Fig. 3D). Colouration in life (Fig. 5A).Dorsal ground colour of body, limbs and tail light grey; neck to mid-body ochre, fading slightly at mid-body.Light blue-grey preorbital streak runs from nostril to orbit; three light postorbital streaks, uppermost on either side meeting in parietal region forming an inverted chevron enclosing a single large elongate black ocellus on occiput, middle terminating on neck and lowermost continuing until ear opening.Head finely reticulated with pale blue-grey, a white ocellus on a black patch of scales on each side of ventrolateral aspect of neck just anterior to forelimb insertions; a fine yellow collar at anterior edge of forelimb insertions, just divided by indistinct continuation of chevron on neck, two small black spots anterior to the division.No distinct dorsal spots or bands, tubercles and a few adjacent scales at mid-body and posterior 1/2 of body pale blue-grey; similar spots on femur and bands on tibia; forelimbs with some ochre near insertions, otherwise whitish-grey with dark outlines of scales; digits with white and dark markings.Original tail without bands, blue-grey with dark outlines of scales.Ventral ground colouration grey-white; throat strongly marked with black up to forelimb insertions except for a fine pale border just below infralabials; a white spot on either side of the throat posterior to jaw; belly with dark markings and blue-grey scales toward the lateral margins; underside of limbs and tail with few dark markings; precloacal, femoral and tibial regions with almost no dark markings.Pupil black, iris reddish with a pale orange ring lining pupil. Variation and additional information from type series (Figs 5B, C, 6).Mensural, meristic and additional character state data for the type series is given in Tables 3-5, respectively.There are four adult males, a single subadult male, and a single adult female ranging in size from 28.6-33.6mm (Fig. 
6A, B).All paratypes resemble the holotype except as follows: three postnasals on either side in NRC-AA-8344, NRC-AA-8346, and NRC-AA-8348.Inner postmentals bordered by mental, infralabial I, outer postmental, enlarged median chin shield in all paratypes, additionally, bordered by two small chin scales on either side in NRC-AA-8343, single chin scale on left and two on right side in NRC-AA-8344.Outer postmentals bordered by inner pair, infralabial I and II in all paratypes, additionally, bordered by five chin scales on left and four on right side in NRC-AA-8344, NRC-AA-8345, NRC-AA-8347; four on left and five on right side in NRC-AA-8348; outer postmental separated from each other by five chin scales including median chin shield in NRC-AA-8343, four chin scales in NRC-AA-8344.NRC-AA-8348 with original and complete tail, slightly longer than body (TL/SVL 1.23); three paratypes, NRC-AA-8344, NRC-AA-8346, and NRC-AA-8347, with original partially broken tails; NRC-AA-8343 with small and partially regenerated tail, and NRC-AA-8345 with complete regenerated tail, detached from the body (Fig. 6A, B).NRC-AA-8347 with damaged skink on the snout; NRC-AA-8343 with fully everted hemipenis on either side, NRC-AA-8347 with fully everted hemipenis only on left side.The new species is strongly sexually dimorphic and also shows ontogenetic colour variation (Fig. 5A-C): females brown with numerous black and pale blotches, collar pale brown, flanked anteriorly by thick black, divided by an extension of the neck chevron; distinct black ocellus on occiput; white ocelli on side of neck absent; forelimbs brown, hindlimbs with scattered dark and pale markings, digits banded.Regenerated tail grey, without bands.Ventral ground colouration of gular, body and tail grey-white; underside of limbs with few dark markings.Subadult male brown with an indistinct, cream mid-dorsal streak formed by the extension of the neck chevron, five or six spots in the streak; black ocellus on occiput and white ocelli on side of neck distinct; forelimbs brown, hindlimbs with scattered dark and pale markings, digits banded.Original tail without bands, grey with dark outlines of scales, regenerated portion brown.Ventral ground colouration of gular, body and tail grey-white; a white spot on either side of the throat posterior to jaw; belly without dark markings; underside of limbs and tail with few dark markings. Etymology.The specific epithet is a patronym for Dutch painter Vincent Van Gogh (1853-1890).The colouration of the new species is reminiscent of one of Van Gogh's most iconic paintings, The Starry Night.Suggested common name is Van Gogh's starry dwarf gecko. Distribution and natural history.Cnemaspis vangoghi sp.nov. is known only from two closely spaced localities (Ayyanar Kovil and Settur Reserve Forest, both in Meghamalai-Srivilliputhur Tiger Reserve, Tamil Nadu) within < 15 km straight line distance (Fig. 1).The new species was recorded in seasonally dry tropical forest with a mix of evergreen and deciduous species between elevations of 250-400 m a.s.l. on eastern slopes of the Western Ghats (Fig. 7A).Individuals of the new species were observed active during the daytime (0830-1400 hrs) on rocks and tree trunks < 2 m high from the base (Fig. 7B).A large number of individuals (n ≥ 25/hr) were observed at both the locations indicating high abundance.At Ayyanar Kovil, a few individuals were observed inactive, resting on rocks during evening and night time (1800-2030 hrs).We also observed Giant wood spider (Nephila sp.) 
feeding on an adult female individual of the new species.Diagnosis.A small-sized Cnemaspis, snout to vent length ≤ 33 mm (n = 5).Dorsal pholidosis heterogeneous; smooth to weakly keeled granular scales intermixed with irregularly arranged rows of enlarged, weakly keeled, conical tubercles; 6-8 rows of dorsal tubercles at midbody, paravertebral tubercles either absent or irregular; ventral scales subequal from chest to vent, smooth, subcircular and subimbricate with rounded end; 28-30 midventral scales across belly, 130-137 longitudinal ventral scales from mental to cloaca; subdigital scansors smooth, unnotched, some divided and others entire, a distinct enlarged metacarpal scale below digit I; 11-13 lamellae under digit I of manus and 11 or 12 under digit I of pes, 18-21 lamellae under digit IV of manus and 23 or 24 lamellae under digit IV of pes; males with continuous series of seven or eight precloacal pores (n = 4); scales on non-regenerated tail dorsum heterogeneous; small, smooth, subcircular, flattened, subimbricate scales intermixed on anterior one third portion with enlarged, weakly keeled, and weakly conical tubercles forming eight whorls; six tubercles on first whorl, four tubercles on second to fourth whorls, only a pair of paravertebral tubercles each on fifth to eighth whorls; rest of the tail lacking enlarged tubercles; median row of subcaudals smooth, roughly subcircular, distinctly enlarged than rest, with condition of two enlarged scales alternating with a divided scale.Males with ochre dorsum, single central black dorsal ocellus on neck, a white ocellus on ventrolateral side of neck and one on throat posterior to jaw, venter off-white with dark throat, tail unbanded, females and juveniles brown with a prominent mid-dorsal streak. Body relatively slender (BW/AGL 0.50), trunk < 1/2 of SVL (AGL/SVL 0.39) without spine-like tubercles on flank (Fig. 10A-C).Dorsal pholidosis heterogeneous; smooth to weakly keeled granular scales intermixed with irregularly arranged rows of enlarged, weakly keeled, conical tubercles; granular scales gradually increasing in size towards each flank, largest on mid-flank; granular scales on occiput and nape slightly smaller than paravertebral granules; enlarged tubercles in approximately six longitudinal rows at midbody; enlarged tubercles in paravertebral rows absent, (Fig. 10A, C).Ventral scales much larger than granular scales on dorsum, subequal from chest to vent, smooth, subcircular and subimbricate with rounded end; scales on precloacal region and four or five rows on femur distinctly enlarged; midventral scale rows across belly 30; 132 ventral scales from mental to anterior border of cloaca (Fig. 10B).A continuous series of seven precloacal pores, femoral pores absent (Fig. 9D). Colouration in life (Fig. 
11A).Dorsal ground colour of body, limbs, and tail pale grey; neck and trunk ochre, fading slightly near hindlimb insertions.Pale blue-grey preorbital streak runs from nostril to orbit; three pale postorbital streaks, uppermost on either side meeting in parietal region forming an inverted chevron enclosing a single large elongate black ocellus on occiput, middle terminating on neck and lowermost continuing until ear opening.Head finely reticulated with pale blue-grey, a white ocellus on a black patch of scales on each side of ventrolateral aspect of neck just anterior to forelimb insertions; a fine yellow collar at anterior edge of forelimb insertions, broken in the centre, two fine black spots anterior to and a yellow spot on the division.Fine black spots and paler blotches on dorsum, tubercles and a few adjacent scales around hindlimb insertions and on tail pale blue-grey; similar spots on posterior flank, femur and bands on tibia; upper 1/2 of upper arm ochre, otherwise whitish-grey with dark outlines of scales; digits with white and dark markings.Original tail without bands, blue with dark outlines of scales and darker markings.Ventral ground colouration grey-white; throat fairly strongly marked with black up to forelimb insertions except for a fine pale border just below infralabials, a white spot on either side of the throat posterior to jaw; belly with scattered dark markings and blue-grey scales toward the lateral margins; underside of limbs and tail with few dark markings; precloacal and femoral region with almost no dark markings.Pupil black, iris dark red with a pale orange ring lining pupil. Holotype Lateral caudal furrows present (0) or absent (1) Variation and additional information from type series (Figs 11B, C, 12).Mensural, meristic, and additional character state data for the type series is given in Tables 3-5, respectively.There are three adult males, and a single subadult female ranging in size from 26.7-33.0mm (Fig. 12).All paratypes resemble the holotype except as follows: inner postmentals bordered by mental, infralabial I, outer postmental, enlarged median chin shield in all paratypes, additionally, bordered by two small chin scales on left and a single scale on right side in NRC-AA-8351.Outer postmentals bordered by inner pair, infralabial I & II in all paratypes, additionally, bordered by four chin scales on left and five on right side in NRC-AA-8350, four on left and five on right side in NRC-AA-8351 and NRC-AA-8353, four on either side in NRC-AA-8352; outer postmental separated from each other by four chin scales including median chin shield in NRC-AA-8351.NRC-AA-8350 with almost original tail with tip regenerated, marginally longer than body (TL/SVL 1.17), NRC-AA-8351 with original tail with missing tail tip, equal to body (TL/SVL 1.02); Two paratypes, NRC-AA-8352 and NRC-AA-8353 with completely missing tails; NRC-AA-8351 with damaged skink on the tail base; NRC-AA-8350 with fully everted hemipenis only on left side. The new species is strongly sexually dichromatic and shows ontogenetic colour variation (Fig. 
11A-C): subadult female pale brown with a cream mid-dorsal streak that continues onto tail formed by the extension of the neck chevron, dorsum with scattered black and pale blotches, collar pale brown, flanked anteriorly by a few black spots; distinct black central ocellus on occiput, white ocelli on side of neck absent; forelimbs brown, hindlimbs with scattered dark and pale markings, digits banded.Original tail grey, without bands, regenerated portion blue in male paratypes.Ventral ground colouration of gular, body and tail grey-white; underside of limbs with few dark markings.Juveniles brown with a cream mid-dorsal streak that continues onto tail where it is orange formed by the extension of the neck chevron; distinct black central ocellus on occiput, white ocelli on side of neck absent; forelimbs brown, hindlimbs with scattered dark and pale markings, digits banded.Original tail grey, without bands, regen- erated portion blue in male paratypes (Fig. 12A, B).Ventral ground colouration of gular, body and tail grey-white; underside of limbs with few dark markings. Etymology.The specific epithet is a toponym for the type locality of the new species, Sathuragiri mountain in Srivilliputhur-Megamalai Tiger Reserve (SMTR), Virudhunagar District, Tamil Nadu.Suggested Common name is Sathuragiri dwarf gecko. Discussion The ornata subclade now has 11 known valid species (including the two new species described in this paper) in a small geographic area spanning < 1° longitude and 1.5° latitude.At the southern extreme of the Western Ghats, the region is incredibly heterogeneous, with altitudinal variation from close to sea level to > 1,500 m a.s.l. and strong east-west gradients in total annual precipitation and seasonality.Habitats range from thorny scrub forest on the lower eastern slopes of the mountains to evergreen forest at higher elevations and on the western slopes.This subclade is distributed across the Shencottah Gap (SG), a relatively low elevation pass through the Western Ghats.All 11 members of the clade are strongly sexually dichromatic, and sexual selection may at least in part be a driver of the high diversity in this clade, as has been speculated for members of the C. gracilis clade (Agarwal et al. 2022).The two new species add to the five previously known endemic vertebrates from Srivilliputhur-Megamalai Tiger Reserve -the geckos Cnemaspis galaxia, C. rashidi, Hemidactylus vanam; the skink Dravidoseps srivilliputhurensis and the anuran Nasikabatrachus bhupathyi Janani, Vasudevan, Prendini, Dutta & Aggarwal, 2017(Janani et al. 2017;Chaitanya et al. 2018;Pal et al. 2021;Sayyed et al. 2023a, b;Agarwal et al. 2024). Though sampling of the ornata subclade likely remains incomplete as this vast mountainous landscape has a number of higher elevations we could not access in our rapid surveys, there are some geographic patterns that emerge based on available data.The only two high elevation species are the sister pair C. ornata + C. 
rashidi that are distributed north of the SG, forming

This last section is a note on violations of Principle 2 of the Code of Ethics prescribed by The Code (Appendix A; Anonymous 1999), which states: "A zoologist should not publish a new name if he or she has reason to believe that another person has already recognized the same taxon and intends to establish a name for it (or that the taxon is to be named in a posthumous work). A zoologist in such a position should communicate with the other person (or their representatives) and only feel free to establish a new name if that person has failed to do so in a reasonable period (not less than a year)." One of the authors of Cnemaspis rashidi accompanied us in the field in 2022 when we collected the then unnamed and distinctively coloured species, and multiple co-authors including the first author were aware that we were working in Tamil Nadu on Cnemaspis among other lizards (Sayyed et al. 2023a, b). While it is not unexpected that multiple workers may find the same undescribed species, what happens next is important. This is a matter of concern for the scientific community at large, and the Indian herpetological community in particular. In two other cases, even after we (AK, IA) explicitly initiated discussions with two groups whom we knew had collected the same species we were in the process of describing, the other teams went ahead with their descriptions without consultation, of Eublepharis pictus Mirza & Gnaneswar, 2022 and Cyrtodactylus (Geckoella) aravindi Narayanan et al., 2022 (Mirza and Gnaneswar 2022; Narayanan et al. 2022). While there are more than enough species to go around, it is in contravention of the Code of Ethics, besides being a waste of time, effort, and resources, when teams compete against one another instead of coming together to collaborate and increase the amount of data available in a species description.

Acknowledgements. Fieldwork assistance was provided by Swapnil Pawar, Vaibhav Patil, Satpal Gangalmale, Vivek Waghe, and Satheesh Kumar. We are thankful to Tarun Karmakar (NCBS field station and museum facility, Bengaluru) for help with specimen registration, Uma Ramakrishnan for lab support at NCBS, Navendu Page for help with forest classification, R. Chaitanya for edits in the discussion, and Azhar Hotel Devaki for sustaining us on a high-protein diet through our time at Srivilliputhur.

Figure 1. Maximum likelihood phylogeny of the beddomei clade (ND2 + 16S concatenated, 1610 base pairs) with photographs of the new species and C. galaxia (not to scale); numbers at nodes represent bootstrap support/posterior probability > 70/0.99 (not shown close to terminal nodes). Inset, elevation map of the southern Western Ghats showing type and sampled localities for the ornata subclade.

Figure 2. Cnemaspis vangoghi sp. nov. (holotype, NRC-AA-8342) A dorsal view of body B ventral view of body C dorsal view of tail D ventral view of tail E lateral view of tail. Photos by Akshay Khandekar. Scale bars: 10 mm.

Figure 3. Cnemaspis vangoghi sp. nov. (holotype, NRC-AA-8342) A dorsal view of head B ventral view of head C lateral view of head on right D view of femoral region showing femoral pores E ventral view of left manus F ventral view of left pes. Photos by Akshay Khandekar. Scale bars: 5 mm.

Table 4. Meristic data for the new species. Abbreviations are listed in Materials and methods, * = lamellae damaged, L&R = left & right, A = absent.

Figure 8.
Cnemaspis sathuragiriensis sp.nov.(holotype, NRC-AA-8349) A dorsal view of body B ventral view of body C dorsal view of tail D ventral view of tail E lateral view of tail.Photos by Akshay Khandekar.Scale bars: 10 mm. Figure 9 . Figure 9. Cnemaspis sathuragiriensis sp.nov.(holotype, NRC-AA-8349) A dorsal view of head B ventral view of head C lateral view of head on right D view of femoral region showing femoral pores E ventral view of left manus F ventral view of left pes.Photos by Akshay Khandekar.Scale bars: 5 mm. Figure 13 . Figure 13.Habitat of Cnemaspis sathuragiriensis sp.nov. at the type locality A general view, and B microhabitat from where types were collected.Photos by Akshay Khandekar. Table 1 . Sequences used in this study.Museum abbreviations are as follows: BNHS, Bombay Natural History Society, Mumbai; CESL, Centre for Ecological Sciences, Bangalore; NRCAA, National Centre for Biological Sciences, Bangalore; AK/ AK-R, Akshay Khandekar field series; AS, Amit Sayyed field series.All from India; KL = Kerala, MH = Maharashtra, TN = Tamil Nadu.We restricted morphological comparisons to the beddomei clade (see Results).Morphological data were collected from 12 specimens of the two new species and from 42 specimens of the beddomei clade including type material of C. azhagu, C. nimbus, C. smaug, and C. wallaceii; type as well as topotypic and/ or additional materials for C. galaxia, C. nigriventris, C. regalis, and C. rubraoculus; and additional materials for C. beddomei, C. nairi, C. rashidi, and C. sundara (all listed in Appendix 1).Data for remaining four species-C.aaronbaueri, C. anamudiensis, C. maculicollis, and C. ornata (as well as C. boiei which is Table 3 . Mensural (mm)data for the new species.Abbreviations are listed in Materials and methods, * = tail incomplete. Table 5 . Additional morphological characters of the new species.A = absent, / = data unavailable.
Ulva rigida in the future ocean: potential for carbon capture, bioremediation and biomethane production Ulva species have been considered as ideal candidates for carbon capture, bioremediation and biofuel production. However, little is known regarding the effects of simultaneous ocean warming, acidification and eutrophication on these capacities. In this study, Ulva rigida was cultivated under two levels of: temperature (14 °C (LT) and 18 °C (HT)); pH (8.10 and 7.70) by controlling pCO2 (LC, HC respectively); and nutrients (low (LN) – 50 μm N and 2.5 μm P and high (HN) – 1000 μm N and 50 μm P) for 6 weeks. During the first week of cultivation, HT, HC and HN increased biomass by 38.1%, 17.1% and 20.8%, respectively, while the higher temperature led to negative growth in weeks 2, 4 and 6 due to reproductive events. By the end of the cultivation, biomass under HTHCHN was 130.4% higher than the control (LTLCLN), contributing to a higher carbon capture capacity. Although the thalli at HT released nutrients to seawater in weeks 2, 4 and 6, the HTHCHN treatment increased the overall nitrate uptake rate over the cultivation period by 489.0%. The HTHCHN treatment also had an increased biochemical methane potential and methane yield (47.3% and 254.6%, respectively). Our findings demonstrate that the capacities for carbon and nutrient capture, and biomethane production of U. rigida in the future ocean may be enhanced, providing important insight into the interactions between global change and seaweeds. Introduction The burning of fossil fuels combined with changes in land use have been the main drivers behind increased atmospheric and oceanic CO 2 levels, which having risen by more than 40% since the industrial revolution (270-400 ppm) are currently increasing by approximately 2 ppm per year (Moreira & Pires, 2016). Our CO 2enriched world contributes to global warming and ocean acidification, both of which directly and indirectly impact a wide range of organisms within the biosphere (Doney et al., 2009;Joos, 2015). The global ocean will continue to warm during the 21st century (IPCC, 2013) with the global mean sea surface temperatures for the months of February and August projected to increase by 1.9°C by the end of the 21st century. The maximum summer warming of around 4°C is predicted for high northern latitudes (Bartsch et al., 2012). In addition, sea surface pH is predicted to decrease by 0.4 units based on Representative Concentration Pathway (RCP) 8.5 (Gattuso et al., 2015). To meet the Paris Agreement goal of maintaining the global temperature rise to below 2°C above preindustrial levels (Schreurs, 2016) will necessitate a reduction in CO 2 emissions and a sweeping programme of CO 2 capture and storage (Archer et al., 2009). In addition to chemical absorption, biological approaches to CO 2 capture remain research and policy priorities. Added to the immense challenges posed by rising CO 2 levels, eutrophication presents another macroscale problem that has yet to be effectively mitigated, particularly in rapidly developing nations with increasing coastal populations (Fleming-Lehtinen et al., 2015;Liu & Wang, 2016). Excessive nutrient enrichment drives cycles of algal blooms and crashes (so-called red and green tides) that risk the proliferation of potentially toxic species, and also drive destructive hypoxia events over vast spatial scales (Smith et al., 1999;Smetacek & Zingone, 2013;Farmaki et al., 2014). 
Despite algae proliferation (both micro-and macroalgae) being a symptom of the eutrophication dilemma, they are also gaining attraction as CO 2 and nutrient biocapture candidates. Due to their large biomass and relatively long turnover time, marine macrophytes (e.g. seagrasses and macroalgae) are more effective carbon sinks than microalgae (Smith, 1981). Species of Ulva (Chlorophyta) exhibit high growth rates with strong CO 2 capture capacities and a high affinity for nutrients. For instance, U. fasciata has the highest CO 2 capture rate among marine or freshwater macrophytes (Alwis & Jayaweera, 2011). Ocean warming, acidification and eutrophication usually promote Ulva's growth and hence increase the overall carbon capture capacity. For instance, the growth rate of U. fasciata increased as temperature rose from 15 to 25°C (Mohsen et al., 1973). The growth of U. prolifera was also enhanced when cultured under ocean acidification conditions compared with controls . In addition, the growth rate of U. rigida was positively related to dissolved inorganic nitrogen (DIN) levels when DIN varied from 3 to 75 lmol L À1 (Viaroli et al., 1996). Ocean warming, acidification and eutrophication can also enhance the nutrient uptake rate of Ulva species (Gordillo et al., 2001;Fan et al., 2014). Increasingly, the carbon capture community has divided into two camps; advocates of carbon capture and storage (i.e. long-term burial of the captured carbon) and advocates of carbon capture and reuse (i.e. recycling the captured carbon into a valorised form, e.g. biofuel; Rao & Rubin, 2002;Brune et al., 2009). An exposition of the relative merits of both approaches is beyond the scope of this study. However, we have investigated the response of a major green tide-forming alga, U. rigida to simulated climate change and eutrophication conditions, with a focus on growth rate, reproductive response and carbon and nutrient capture; feeding into the potential to further exploit Ulva species as a biofuel source. Macroalgae are a promising biofuel feedstock owing to their high carbohydrate content that can readily be converted by microbes to biomethane or bioethanol (Kraan, 2010;Hinks et al., 2013). Ulva species are an attractive feedstock due to their wide distribution, rapid growth rate and high carbohydrate content (Bruhn et al., 2011). Married to their strong propensity for nutrient capture (particularly nitrogen and phosphorous), Ulva species are increasingly at the forefront of applied phycology (Bolton et al., 2009;Cruz-Su arez et al., 2010;Lawton et al., 2013;Korzen et al., 2016); however, uncertainties remain over Ulva's responses to an ever more carbon and nitrogen impacted future ocean. Until now, most studies regarding the effects of ocean warming, acidification and eutrophication on Ulva species have been either single-or two-factor trials. The combined effects of global change factors need to be examined simultaneously as they are co-occurring (Koch et al., 2013). To the best of our knowledge, no study has yet investigated the interactive effects of ocean warming, acidification and eutrophication on Ulva species from the perspective of their potential for carbon and nutrient capture for biofuel production. In this study, U. rigida was cultured at current and simulated future ocean conditions to test the hypothesis that the combined influence of ocean warming, acidification and eutrophication would enhance Ulva's biocapture propensity and utility as a biofuel feedstock, resulting in a negative feedback (Fig. 
1).

Fig. 1 Interactions between global change variables (ocean warming, acidification and eutrophication) and Ulva rigida in terms of carbon and nutrient capture, and biomethane production. The plus sign represents strengthening and the minus sign represents weakening.

Sample collection and culture conditions

Vegetative U. rigida of 25-30 mm in length were collected in July 2015 from the low intertidal of Cullercoats Bay, UK (55.03°N, 1.43°W) after a spring tide. The thalli were placed in a zip-lock plastic bag and transported to the laboratory within one hour, where they were gently rinsed in one-micron-filtered natural seawater to remove any contaminating sediment, epiphytes and small grazers. The 720 healthy individual Ulva plants were randomly assigned to 24 Perspex® tanks of 13.5 L in volume, each containing 10 L of natural seawater. The interactive effects of ocean warming, acidification and eutrophication were investigated using a fully crossed factorial design, wherein the thalli were incubated under combinations of two pH levels (8.10, 7.70; coded as low pCO2, LC, and high pCO2, HC, respectively), two temperatures (14, 18 °C; coded as LT and HT, respectively) and two nutrient conditions (50 µM N as nitrate and 2.5 µM P as phosphate; 1000 µM N and 50 µM P; coded as LN and HN, respectively). Three replicate tanks (30 plants per tank) were set up for each treatment. The summer average surface seawater temperature in the coastal waters of the central North Sea (14 °C; Mathis et al., 2015) and the ambient pH of natural seawater (8.10) were set as LTLC. The reduced pH and elevated temperature represent the predicted levels by the year 2100 (Baede et al., 2001). The low (50 µM N and 2.5 µM P) and high (1000 µM N and 50 µM P) nutrient levels were chosen as they could represent nutrient-limited and nutrient-replete conditions based on a preliminary experiment. Nutrient concentrations were maintained daily by adding the consumed amount after direct measurement. The culture temperatures were controlled using research-grade laboratory incubators, while pH was maintained using a custom-built, Aqua-medic™ computer-controlled pH system (Loveland, Colorado) to add CO2 into an air stream via solenoid valves. The thalli received a light intensity of 80 µmol photons m⁻² s⁻¹, with a photoperiod of 16L:8D. The culture lasted for 6 weeks and seawater media were fully renewed every 3-4 days.

Carbonate chemistry

Total alkalinity (TA) was measured by titration prior to seawater changes. Carbonate system parameters that were not directly measured were calculated using CO2SYS (Pierrot et al., 2006), using the equilibrium constants K1 and K2 for carbonic acid dissociation from Roy et al. (1993) and the KSO4⁻ dissociation constant from Dickson (1990).

Biomass and growth

Biomass of U. rigida was determined each week by weighing fresh thalli. Ulva thalli were blotted gently with tissue paper to remove surface water before weighing. Specific growth rate (SGR, % day⁻¹) was calculated by the formula:

SGR = 100 × ln(Wt / W0) / t

where Wt is the weight after t days, W0 is the initial weight and t is the number of culture days. The mean SGR over 6 weeks of cultivation was based on the initial weight and the final weight at the end of the cultivation period.
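As a worked illustration of the growth-rate calculation above, the short Python sketch below computes weekly SGRs and the 42-day mean SGR from a series of fresh weights. The weights are invented for illustration and do not correspond to any tank in this experiment.

```python
import math

def specific_growth_rate(w0: float, wt: float, days: float) -> float:
    """SGR (% day^-1) = 100 * ln(Wt / W0) / t, as defined above."""
    return 100.0 * math.log(wt / w0) / days

# Illustrative weekly fresh weights (g) for a single tank; invented numbers.
weekly_weights = [15.0, 21.0, 26.7, 24.5, 33.0, 41.0, 38.5]

# Weekly SGRs over consecutive 7-day intervals
for week, (w_start, w_end) in enumerate(zip(weekly_weights, weekly_weights[1:]), start=1):
    sgr = specific_growth_rate(w_start, w_end, 7)
    print(f"week {week}: {sgr:+.2f} % day^-1")

# Mean SGR over the whole 42-day run uses only the initial and final biomass
mean_sgr = specific_growth_rate(weekly_weights[0], weekly_weights[-1], 42)
print(f"mean SGR over 6 weeks: {mean_sgr:+.2f} % day^-1")
```

Negative weekly values, such as the drop from week 2 to week 3 in this invented series, correspond to the biomass losses that accompany reproductive events described in the Results.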
Carbon capture

Carbon capture rate (CCR) over 42 days of culture was determined by the following equation:

CCR = (Wt × Ct − W0 × C0) / t

where Wt and Ct are, respectively, the algal weight and carbon content after t days of culture, W0 and C0 are, respectively, the initial weight and carbon content, and t is culture time (42 days). Total carbon content was measured by a CHN elemental analyzer (PerkinElmer 2400, Shelton, CT, USA).

Reproduction

Fertile thalli were recognized by their colour. The formation of reproductive cells in U. rigida is accompanied by a change in thallus colour from green (vegetative state) to yellowish (reproductive state) and then to white (after the release of swarmers; Gao et al., 2017a). This was verified by microscope observation. The reproduction rate at each week was expressed as the ratio of fertile thalli to all thalli in a tank.

Nitrate and phosphate uptake determination

The nitrate or phosphate uptake rate (NUR) was estimated from the change in NO3⁻ or PO4³⁻ concentration in the culture medium over a given time interval (24 h) using the following equation:

NUR = (N0 − Nt) × V / (W × t)

where N0 is the initial concentration of NO3⁻ or PO4³⁻, Nt is the concentration after 24 h, V is the volume of the culture medium, W is the fresh weight of the thalli in culture and t is the time interval (1 day). Nitrate concentration was measured by a rapid spectrophotometric method (Collos et al., 1999), and phosphate was determined by the phosphomolybdenum blue colorimetric method (Murphy & Riley, 1962). The overall nitrate uptake rate (ONR) over 42 days of culture was determined by the following equation:

ONR = (Wt × Nt − W0 × N0) / t

where Wt and Nt are, respectively, the algal weight and nitrogen content after t days of culture, W0 and N0 are, respectively, the initial weight and nitrogen content, and t is culture time (42 days). Total nitrogen content was measured by a CHN elemental analyzer (PerkinElmer 2400, USA).

Biochemical composition

At the end of 42 days of culture, dry weight was obtained by oven-drying fresh thalli at 50 °C until a consistent weight was attained (for 24 h). Ash content was measured by burning dried seaweed samples at 550 °C for 24 h, and volatile solids (VS) were calculated as the ash-free dry weight. Total protein content was estimated by the Kjeldahl method, using nitrogen contents multiplied by 5.45 based on the mean value for three species of Ulva (Shuuluka et al., 2013). Lipid was extracted according to a modified Folch method (Gao et al., 2017b), and carbohydrate was estimated by approximation, subtracting the content of protein, lipid and ash from the total content.

Biochemical methane potential and biomethane yield

At the end of 42 days of culture, the biochemical methane potential (BMP) of thalli was determined according to the modified method of Jard et al. (2013). Dry thalli (~2 g DW) from each treatment were placed in a 500 mL Duran bottle. Each bottle was inoculated with 40 g of effluent from a 5.0 L laboratory-scale reactor treating cattle manure (5.5% VS). Bottles were filled to 400 mL with distilled water. Blanks (without thalli) were run simultaneously to account for the biogas produced from the inoculum alone. The bottles were rapidly sealed with butyl-rubber stoppers and held using clamped aluminium collars. Pure nitrogen gas (99.999%) was flushed into the headspace to create anaerobic conditions. Afterwards, the bottles were incubated at 35 °C and shaken throughout the 45-day incubation period.
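The capture and uptake metrics defined above reduce to simple arithmetic on weights, elemental contents and medium concentrations. The following Python sketch shows one possible implementation of the carbon capture and nutrient uptake calculations as given above; all input values are illustrative placeholders rather than measurements from this study.

```python
def carbon_capture_rate(w0_g, c0, wt_g, ct, days=42):
    """CCR (g C day^-1) = (Wt*Ct - W0*C0) / t, with carbon content as a fraction."""
    return (wt_g * ct - w0_g * c0) / days

def nutrient_uptake_rate(n0_um, nt_um, volume_l, biomass_g, days=1):
    """Uptake (umol g^-1 day^-1) from the drop in medium concentration over one day."""
    return (n0_um - nt_um) * volume_l / (biomass_g * days)

# Illustrative values only: 15 g of thalli at 30% C growing to 40 g at 33% C over
# 42 days, and nitrate in a 10 L tank falling from 1000 to 850 uM over 24 h.
print(f"CCR: {carbon_capture_rate(15.0, 0.30, 40.0, 0.33):.3f} g C day^-1")
print(f"NUR: {nutrient_uptake_rate(1000.0, 850.0, 10.0, 12.0):.1f} umol g^-1 day^-1")
```

A negative value returned by nutrient_uptake_rate would indicate that the medium concentration increased over the interval, i.e. that the thalli released rather than absorbed nutrients, which is how the negative uptake rates reported during reproductive events should be read.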
The butyl-rubber stopper in each Duran bottle was perforated with a needle attached to a pressure transducer (Type 453A, Bailey and Mackey Ltd, Birmingham, UK), which recorded the gas pressure within the bottle. Methane production was measured with a gas chromatograph equipped with a flame ionization detector (6890 N, Agilent Technologies, Santa Clara, CA, USA). The background methane production from the blank was subtracted from the sample methane production obtained in the substrate assays. The biochemical methane potential was calculated by dividing the corrected methane volume (at standard temperature and pressure) by the weight of sample VS added to each bottle. The methane yield (MY) was estimated by the following equation:

MY = (Wt × VS × BMP − W0 × VS × BMP) / t

where Wt is the weight after t days of culture, VS is the volatile solid per cent, BMP is the biochemical methane potential, W0 is the initial weight and t is the culture time (42 days).

Statistical analysis

The results were expressed as means of replicates ± standard deviation. Data were analysed using SPSS v.21 (IBM, Armonk, NY, USA). The data under every treatment conformed to a normal distribution (Shapiro-Wilk, P > 0.05) and the variances could be considered equal (Levene's test, P > 0.05). Repeated-measures ANOVAs (RM-ANOVAs) were conducted to assess the effects of temperature, pCO2 and nutrients on biomass, specific growth rate, reproduction rate and nitrate and phosphate uptake rate of U. rigida over cultivation time. Three-way ANOVAs were conducted to assess the effects of temperature, pCO2 and nutrients on mean growth rate, carbon content, carbon capture rate, overall nitrate uptake rate, BMP and methane yield. Three-way multivariate ANOVAs (MANOVAs) were conducted to assess the effects of temperature, pCO2 and nutrients on seawater carbonate parameters and biochemical composition (protein, lipid, carbohydrate and ash). Tukey's honest significant difference test was conducted for post hoc investigation. A confidence interval of 95% was set for all tests.

Biomass and growth

The biomass and specific growth rate varied with cultivation time, and the variation patterns under different treatments were heterogeneous (P < 0.001; Fig. 2). For example, the biomass in the low temperature treatment had two peaks: 26.7 ± 4.3 g tank⁻¹ by week 2 and 41.0 ± 10.2 g tank⁻¹ by week 5. The high temperature treatment, while also having two peaks, was slightly out of phase with the low temperature treatment, with peaks of 37.8 ± 8.5 g tank⁻¹ by week 3 and 47.5 ± 12.3 g tank⁻¹ by week 5 (Fig. 2a). The fluctuation in biomass with cultivation time was a function of growth rate. The specific growth rate at the lower temperature was negative in weeks 3 and 6, whereas negative growth occurred in weeks 2, 4 and 6 in the high temperature treatment (Fig. 2b). In terms of the effects of temperature, pCO2 and nutrients on biomass in each week, high temperature, high pCO2 and high nutrients increased biomass by 38.1%, 17.1% and 20.8%, respectively, by week 1 (P < 0.001). By week 2, the higher temperature had reduced the biomass by 42.3% (P < 0.001). High pCO2 and nutrients, respectively, increased biomass by 18.4% and 26.3% at the lower temperature (P < 0.01) but not at the higher temperature. By week 3, high temperature, high pCO2 and high nutrient treatments had increased biomass by 79.1%, 15.0% and 38.9%, respectively (P < 0.001). By week 4, temperature did not affect biomass, while high pCO2 and nutrients increased it by 16.4% and 36.3%, respectively (P < 0.001).
By week 5, high temperature, high pCO 2 and high nutrients increased biomass by 15.9%, 24.1% and 55.0%, respectively (P < 0.001). The stimulating effects of high temperature (27.1%), high pCO 2 (22.1%) and high nutrient (45.3%) on biomass continued into week 6 (P < 0.001). The effects of these three factors on specific growth rate were similar to the biomass yield (Fig. 2b). There were two trends that are worthy of note. First, growth rate generally decreased with cultivation time. Secondly, the negative growth effect of high temperature decreased with cultivation time and the negative specific growth rate at the lower temperature did not change with cultivation time. To evaluate the growth effects of temperature, pCO 2 and nutrients over the whole cultivation period, the mean growth rate over 6 weeks of cultivation was calculated (Fig. 2c). All three factors positively affected the mean growth rate. High Fig. 2 Biomass (a), specific growth rate (b) and mean growth rate (c) of Ulva rigida cultured under the experimental conditions for 6 weeks. The mean growth rate is based on the initial biomass and final biomass at the end of the cultivation. The error bars indicate the standard deviations (n = 3). LT = lower temperature (14°C); HT = higher temperature (18°C); LC = lower pCO 2 (pH 8.10); and HC = higher pCO 2 (pH 7.70); LN = lower nutrients (50 lmol L À1 N and 2.5 lmol L À1 P); HN = higher nutrients (1000 lmol L À1 N and 50 lmol L À1 P). temperature, pCO 2 and nutrients increased it by 17.1%, 11.1% and 23.5%, respectively (P < 0.001). Reproduction The reproduction rate of thalli during cultivation was observed to investigate the reasons behind the periodic decrease in biomass and growth (Fig. 3). The thalli grown at the higher temperature had reproductive events in weeks 2, 4 and 6, whereas those grown at the lower temperature were reproductive in weeks 3 and 6, indicating that the higher temperature shortened the reproductive period from three to two weeks. Another temperature trend was that the reproduction rate at the lower temperature did not change with cultivation time (43.9 AE 5.8% in week 3 and 42.5 AE 6.8% in week 6), while it decreased from 63.9 AE 18.5% (week 2) to 42.2 AE 9.0% (week 6; P = 0.003) at the higher temperature; although the differences between weeks 2 and 4 (P = 0.086) or weeks 4 and 6 (P = 0.328) were not significant. pCO 2 and nutrients also affected reproduction. High pCO 2 increased reproduction by 25.4%, 17.8% and 15.0% in weeks 2, 4 and 6, respectively. High nutrients increased reproduction by 64.4%, 16.5%, 65.3% and 29.5% in weeks 2, 3, 4 and 6, respectively. Carbon capture The thalli carbon content is presented in Fig. 4a. By the end of the cultivation period, temperature (P < 0.001) and pCO 2 (P = 0.029) had main effects, and pCO 2 had interactive effects with temperature (P = 0.001) or nutrients (P = 0.005). Additionally, these three factors interacted (P = 0.004) to affect the thallus carbon content. The higher temperature increased the carbon content by 10.9-23.0% (P < 0.01). High pCO 2 increased the carbon content by 4.1-4.3% at the higher temperature (P < 0.05), but did not affect it in the LTHN treatment and decreased it by 5.98% in the LTLN treatment (P = 0.027). In terms of the carbon capture rate (Fig. 4b), all three factors had positive effects (P < 0.001). High temperature, pCO 2 and nutrients increased carbon capture by 61.8%, 31.3% and 60.6%, respectively. In addition, pCO 2 interacted with temperature (P = 0.007) or nutrients (P = 0.002). 
For instance, high pCO 2 increased the carbon capture rate by 29.2% at the lower temperature and by 32.7% at the higher temperature, and by 26.7% in the low nutrient treatment and by 34.3% in the higher nutrient treatment. Due to the main and interactive effects of temperature, pCO 2 and nutrients, the carbon capture rate at HTHCHN was 245.1% higher than that at LTLCLN. Uptake of nitrate and phosphate Nitrate and phosphate uptake rates were measured to investigate bioremediation capacity (Fig. 5). A RM-ANOVA showed that nitrate uptake varied with cultivation time, and variation patterns were different under each treatment (P < 0.001). For instance, the nitrate uptake rate at the lower temperature decreased from 211.4 AE 65.9 lM g DW À1 day À1 (week 1) to 134.3 AE 33.2 lM g DW À1 day À1 (week 2) further to À47.7 AE 4.2 lM g DW À1 day À1 (week 3), then increased to 119.0 AE 25.7 lM g DW À1 day À1 (week 4), did not change in week 5 (101.3 AE 29.7 lM g DW À1 day À1 ) and finally decreased to À37.4 AE 6.1 lM g DW À1 day À1 (week 6; Fig. 5a). In contrast, the nitrate uptake rate at the higher temperature decreased from 267.5 AE 143.0 lM g DW À1 day À1 (week 1) to À50.6 AE 33.9 lM g DW À1 day À1 (week 2), increased to 279.4 AE 44.5 lM g DW À1 day À1 (week 3), decreased to À52.9 AE 8.8 lM g DW À1 day À1 (week 4), increased to 150.9 AE 83.6 lM g DW À1 day À1 (week 5), and finally decreased to À27.3 AE 17.1 lM g DW À1 day À1 (week 6). The effects of temperature, pCO 2 and nitrate on the nitrate uptake in each week were analysed by a MANOVA. By week 1, high temperature and high pCO 2 increased nitrate uptake in thalli grown in the high nutrient treatment by 51.1% (P < 0.001) and 37.6% (P = 0.002), respectively, with high nutrient having a larger promoting effect of 118.0% (P < 0.001). By week 2, thalli grown at the higher temperature had negative nitrate uptake rates, suggesting nitrate release from thalli to the Fig. 3 Reproduction rate of Ulva rigida cultured under the experimental conditions for 6 weeks. The error bars indicate the standard deviations (n = 3). LT = lower temperature (14°C); HT = higher temperature (18°C); LC = lower pCO 2 (pH 8.10); and HC = higher pCO 2 (pH 7.70); LN = lower nutrients (50 lmol L À1 N and 2.5 lmol L À1 P); HN = higher nutrients (1000 lmol L À1 N and 50 lmol L À1 P). Fig. 4 Carbon content (a) and carbon capture rate (b) of Ulva rigida grown at the experimental conditions by the end of 6 weeks of cultivation. The error bars indicate the standard deviations (n = 3). LT = lower temperature (14°C); HT = higher temperature (18°C); LC = lower pCO 2 (pH 8.10); and HC = higher pCO 2 (pH 7.70); LN = lower nutrients (50 lmol L À1 N and 2.5 lmol L À1 P); HN = higher nutrients (1000 lmol L À1 N and 50 lmol L À1 P). seawater. High nutrients increased nitrate uptake by 54.7% at low temperature (P < 0.001) but it led to a quicker nitrate release of 253.4% at the higher temperature (P < 0.001). By week 3, thalli grown at the lower temperature released nitrate to the seawater. High nutrients increased nitrate uptake by 214.3% at the higher temperature (P < 0.001) but did not affect it at the lower temperature. By week 4, the high temperature led to a negative nitrate uptake. High nutrients increased nitrate uptake by 38.7% at the lower temperature (P = 0.003) but lead to a quicker nitrate release of 147.2% at the higher temperature (P < 0.001). By week 5, the higher temperature increased nitrate uptake in thalli grown with high nutrients by 79.6% (P < 0.001). 
High nutrient levels increased nitrate uptake rate by 143.6% (P < 0.001). By week 6, the higher temperature reduced nitrate release by 27.0% (P = 0.002). The high nutrients treatment increased nitrate releases of thalli grown at the higher temperature by 236.8% (P < 0.001). The pattern of phosphate uptake (Fig. 5b) was the same as the nitrate uptake. When the effects of these three factors were normalized to 6 weeks (Fig. 5c), all of them showed positive effects on nitrate uptake. High temperature, pCO 2 and nutrients increased the overall nitrate uptake rate by 104.2%, 17.6% and 108.3%, respectively (P < 0.001), making the overall nitrate uptake rate at HTHCHN (7.1 AE 0.4 lM g DW À1 day À1 ) almost six times greater than that at LTLCLN. Biochemical methane potential and methane production rate The biochemical methane potential (Fig. 7a) and methane yield (Fig. 7b) of U. rigida grown at different conditions were investigated by the end of 6 weeks of Fig. 6 Content of protein (a), lipid (b), carbohydrate (c) and ash (d) in Ulva rigida grown at the experimental conditions by the end of 6 weeks of cultivation. The error bars indicate the standard deviations (n = 3). LT = lower temperature (14°C); HT = higher temperature (18°C); LC = lower pCO 2 (pH 8.10); and HC = higher pCO 2 (pH 7.70); LN = lower nutrients (50 lmol L À1 N and 2.5 lmol L À1 P); HN = higher nutrients (1000 lmol L À1 N and 50 lmol L À1 P). culture. A three-way ANOVA showed that temperature (P < 0.001), pCO 2 (P = 0.014) and nutrients (P < 0.001) had main effects on the biochemical methane potential, and pCO 2 had an interactive effect with nutrients (P = 0.002). The higher temperature increased the biochemical methane potential by 19.9-34.3% (P < 0.001). The higher nutrient treatment did not affect the biochemical methane potential at LC, but increased it by 15.3-22.2% at HC (P < 0.001). In terms of methane yield (Fig. 7b), all three factors had main effects (P < 0.001) and any two of them had an interactive effect (P < 0.01). For instance, high pCO 2 did not affect methane yield at LN but increased it by 42.7% at HN. HN increased methane yield at LC by 44.4%, while it was 73.4% at HC. The stimulating effects of these three factors made the methane yield (35.5 AE 1.5 mL g DW À1 day À1 ) at HTHCHN 254.5% higher than that at LTLCLN. Growth and reproduction In the present study, the higher temperature enhanced the specific growth rate in U. rigida in weeks 1, 3 and 5. It has been widely reported that high temperatures could promote the growth of Ulva. For instance, a 5°C increase (from 20 to 25°C) more than doubled the growth rate of U. fasciata when salinity was 25 (Mantri et al., 2011). The results in the present study indicate that the lower temperature at high latitudes limits the growth of U. rigida even in summer, and therefore, Ulva species may benefit from future ocean warming. In terms of the effect of CO 2 , although it has been reported that Ulva species have efficient carbon concentrating mechanisms (CCMs) and their photosynthesis could be saturated at the current CO 2 level (Axelsson et al., 1999;Gao et al., 2016), growth could still be promoted by elevated CO 2 (Gao et al., 2016(Gao et al., , 2017b. Our results were consistent with these studies. Such an increase in growth may be due to enhanced nitrogen assimilation at the elevated CO 2 concentration (Gordillo et al., 2003;Xu et al., 2017). 
In addition, nitrogen and phosphorus, two key nutrients supporting algal growth, are generally thought to be limiting in marine systems (M€ uller & Mitrovic, 2015). Accordingly, their enrichment can promote algal growth (Gao et al., 2017b;Xu et al., 2017). In the present study, adding nitrogen and phosphorus increased the growth of U. rigida. A similar result was also documented in nitrogen and phosphorus enrichment experiments for Ulva spp. conducted in the field (Teichberg et al., 2010). On the other side, the effect of temperature on the specific growth rate was reversed, the positive effect turning to negative in weeks 2, 4 and 6. Decreased growth at the higher temperature could be attributed to induced reproductive events at the higher temperature. Reproduction can stop vegetative growth, and the Fig. 7 Biochemical methane potential (a) and methane yield (b) of Ulva rigida grown at the experimental conditions by the end of 6 weeks of cultivation. The error bars indicate the standard deviations (n = 3). LT = lower temperature (14°C); HT = higher temperature (18°C); LC = lower pCO 2 (pH 8.10); and HC = higher pCO 2 (pH 7.70); LN = lower nutrients (50 lmol L À1 N and 2.5 lmol L À1 P); HN = higher nutrients (1000 lmol L À1 N and 50 lmol L À1 P). release of spores leads to a loss of thallus mass (Gao et al., 2017b). Moderate temperatures can accelerate reproductive processes by increasing the metabolic activity to produce essential materials, such as nucleotides and proteins (Iken, 2012). For instance, the reproductive period of U. fenestrata in the laboratory decreased from 30 to 5 days when temperature increased from 10 to 20°C (Kalita & Titlyanov, 2011). The effect of temperature on rhythms of Ulva reproduction was also found in the field. U. pseudocurvata in the North Sea was reported to have biweekly peaks of gametophytic reproduction during the colder seasons and approximately weekly peaks during summer (L€ uning et al., 2008). The high temperature also reduced the reproductive period of U. rigida from three to two weeks compared to the low temperature in the present study. Taken together, these findings indicate that higher temperature could commonly shorten the reproductive period in Ulva species. It is noteworthy that the reproduction rate at high temperature declined with cultivation time, indicating acclimation of reproduction to ocean warming. This also led to the decreasing negative effect of high temperature on the specific growth rate with cultivation time. Most studies on algal acclimation to temperature rise are confined to photosynthesis, growth and respiration (Eggert et al., 2006;Zou & Gao, 2013;Graiff et al., 2015;Al-Janabi et al., 2016;Gao et al., 2017c). Our study suggests algal reproduction could also acclimatize to global warming. In addition to temperature, high pCO 2 and nitrate also induced more reproduction in U. rigida. This finding was consistent with our previous study based on a short-term cultivation (Gao et al., 2017b). The promoting effects of higher CO 2 and nutrients on U. rigida may be due to increased carbon and nitrogen assimilation and thus the necessary materials for reproduction. Compared to pCO 2 , temperature and nutrients seem to play a more important independent role in controlling both growth and reproduction of U. rigida. This finding is consistent with field studies (Keesing et al., 2011;Liu et al., 2013;Smetacek & Zingone, 2013). Eutrophication is deemed as the primary reason for green tides (Smetacek & Zingone, 2013). 
In addition, the coverage of green tides formed by Ulva species extends with the rise of seawater temperature but begins to shrink and then disappears when seawater temperature increases further and nutrients are exhausted (Keesing et al., 2011;Liu et al., 2013). Carbon capture capacity The carbon capture rate depends on the growth rate and carbon content. In the present study, high temperature, pCO 2 and nitrate induced more reproductive events, leading to reduced growth rate in the shorter term. This phenomenon was also found in our previous study (Gao et al., 2017b). But in a longer-term cultivation, the quicker growth at the high temperature, pCO 2 and nutrients could offset the negative effect of reproduction on growth. Accordingly, high temperature, pCO 2 and nutrients increased the mean growth rate over 7 weeks of cultivation. In addition, high temperature and pCO 2 also increased the carbon content in U. rigida. These resulted in the highest carbon capture rate at HTHCHN, which is more than three times higher than at LTLCLN. Chung et al. (2011) showed that Ulva species had the highest carbon capture capacity compared to seaweeds belonging to the Chlorophyta, Phaeophyta and Rhodophyta. Our findings suggest that the future ocean environment may strengthen the carbon capture capacity of Ulva species. Bioremediation capacity The nitrate and phosphate uptake of algae commonly increases with temperature (Pedersen et al., 2004;Smith et al., 2009;Fan et al., 2014). In the present study, the higher temperature also increased nitrate and phosphate uptake in U. rigida when thalli were vegetative. However, the higher temperature led to a negative nutrient uptake when reproductive events occurred, indicating that thalli were releasing rather than absorbing nutrients from the seawater. The reasons for this may be twofold. When thalli release spores, the nutrients in the cell could also be discharged to the seawater. Meanwhile, the decomposition of debris after spore release also contributes to the increase in nutrients in the seawater. This is supported by studies in which the seawater was enriched with nitrate and phosphate when macrophytes were decomposing (Hanisak, 1993;Gao et al., 2013). Furthermore, the higher temperature could increase the decomposition rate and thus the nutrient release rate (Hanisak, 1993;Da et al., 2014), which may explain the negative nutrient uptake rate at the higher temperature in the present study. This suggests that Ulva species can actually be a source of nutrients when they are reproducing, which has implications for the biofilter efficiency of Ulva in the future ocean. Maintaining a long-term vegetative state seems to be critical for promoting the biofiltering efficiency of Ulva species. The higher pCO 2 in the present study also increased the nitrate and phosphate uptake during the first week. Xu et al. (2017) demonstrated that a higher pCO 2 level (1017 latm) increased the nitrate reductase activity and thus the nitrate uptake in Sargassum muticum during a 13-day cultivation. Therefore, the increased nutrient uptake at elevated pCO 2 in the present study may be due to the activation of nitrate reductase at the higher pCO 2 levels (Gordillo et al., 2001;Xu et al., 2017). Biomethane production In the present study, U. rigida cultivated at the conditions of high temperature and high nutrients had a higher biochemical methane potential. 
When measuring the biochemical composition, it was found that the culture conditions of higher temperature and nutrients resulted in increased lipid and protein content and decreased carbohydrate content. The theoretical methane production for lipid, protein and carbohydrate is 1014, 496 and 415 mL CH 4 g VS À1 , respectively (Møller et al., 2004). Therefore, the greater proportion of lipid and protein in thalli grown under the conditions of high temperature and high nutrient will produce a higher biochemical methane potential. The biochemical methane potential (286.3 AE 8.9 mL CH 4 g VS À1 ) of thalli cultured under the conditions of high temperature, high pCO 2 and high nutrients was 47.3% higher than the control and was also above the range of 120-271 mL CH 4 g VS À1 reported in Ulva species using fresh or ground thalli (Briand & Morand, 1997;Peu et al., 2011;Allen et al., 2013;Jard et al., 2013). The findings above indicate that the future ocean could increase the biochemical methane potential of U. rigida by altering its biochemical composition. The present study is the first to document the impacts of global change factors on biochemical methane potential in seaweeds. In addition to biochemical methane potential, methane yield was also determined by growth rate. High temperature, pCO 2 and nutrients increased mean growth rate over 6 weeks of cultivation. Combined with the positive effect of these three factors on biochemical methane potential, methane yield at HTLCHN was 2.5 times higher than at LTLCLN. Ignoring the high levels of sulphur, Ulva species are considered as ideal seaweeds for biomethane production due to their abundant availability, quick growth and high biochemical methane potential (Bruhn et al., 2011;Amosu, 2016;Karray et al., 2017). Our finding indicates that the future ocean may further improve Ulva's advantages as a biofuel feedstock. Interactions between climate change and seaweeds It has been suggested that CO 2 -induced global warming would lead to increased stratification of the water column, resulting in decreasing nutrient transport from the deep ocean to the upper ocean and increasing light exposure (Falkowski et al., 2000;Gao et al., 2012). This can reduce the carbon fixation of phytoplankton and thereby the rate of oceanic uptake of anthropogenic CO 2 (Gao et al., , 2017c. On the other hand, most seaweeds inhabit the tide zones and are not affected by stratification of the water column. Our results demonstrate that ocean warming, acidification and eutrophication significantly enhanced the capacities for carbon and nutrient capture and biomethane production in U. rigida. The increased capacity for carbon and nutrient capture seems to be a negative feedback in response to the environmental changes caused by human activity and thus could alleviate some aspects of global climate change. Although seaweeds are restricted to a narrow zone of the oceans, they contribute to about 10% of the total world marine productivity (Israel et al., 2010). Further investigations into other seaweeds are needed to understand whether this negative feedback applies to the whole seaweed community and to have a more comprehensive view of the interactions of climate change and marine primary producers.
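The role of composition in the biochemical methane potential discussed above can be made concrete with a small worked calculation based on the per-component theoretical yields of Møller et al. (2004) quoted in the text (1014, 496 and 415 mL CH4 per g VS for lipid, protein and carbohydrate). The composition fractions below are illustrative assumptions, not measurements from the study.

```python
# Worked example: theoretical biochemical methane potential (BMP) from
# proximate composition, using the per-component yields quoted above
# (Moller et al., 2004). Composition fractions (g per g volatile solids)
# are illustrative assumptions, not data from the study.
YIELDS_ML_CH4_PER_G_VS = {"lipid": 1014.0, "protein": 496.0, "carbohydrate": 415.0}

def theoretical_bmp(composition: dict) -> float:
    """Return theoretical BMP (mL CH4 per g VS) for a composition dict."""
    return sum(YIELDS_ML_CH4_PER_G_VS[k] * frac for k, frac in composition.items())

# Shifting volatile solids from carbohydrate toward lipid and protein, as
# reported for thalli grown at high temperature and high nutrients, raises
# the theoretical BMP.
control_like = {"lipid": 0.04, "protein": 0.18, "carbohydrate": 0.60}
enriched     = {"lipid": 0.08, "protein": 0.26, "carbohydrate": 0.48}
print(f"control-like composition: {theoretical_bmp(control_like):.0f} mL CH4 g VS^-1")
print(f"lipid/protein-enriched:   {theoretical_bmp(enriched):.0f} mL CH4 g VS^-1")
```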
2019-01-06T03:01:44.405Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "ada11ce9143d6c18b8c01dedbbf29e80b3417eb6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1111/gcbb.12465", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "33d201363306fae13ec8800bef99184a501feaef", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
270706825
pes2o/s2orc
v3-fos-license
Hemospray® (hemostatic powder TC-325) as monotherapy for acute gastrointestinal bleeding: a multicenter prospective study Background Hemostatic powders are used as second-line treatment in acute gastrointestinal (GI) bleeding (AGIB). Increasing evidence supports the use of TC-325 as monotherapy in specific scenarios. This prospective, multicenter study evaluated the performance of TC-325 as monotherapy for AGIB. Methods Eighteen centers across Europe and USA contributed to a registry between 2016 and 2022. Adults with AGIB were eligible, unless TC-325 was part of combined hemostasis. The primary endpoint was immediate hemostasis. Secondary outcomes were rebleeding and mortality. Associations with risk factors were investigated (statistical significance at P≤0.05). Results One hundred ninety patients were included (age 51-81 years, male: female 2:1), with peptic ulcer (n=48), upper GI malignancy (n=79), post-endoscopic treatment hemorrhage (n=37), and lower GI lesions (n=26). The primary outcome was recorded in 96.3% (95% confidence interval [CI]: 92.6-98.5) with rebleeding in 17.4% (95%CI 11.9-24.1); 9.9% (95%CI 5.8-15.6) died within 7 days, and 21.7% (95%CI 15.6-28.9) within 30 days. Regarding peptic ulcer, immediate hemostasis was achieved in 88% (95%CI 75-95), while 26% (95%CI 13-43) rebled. Higher ASA score was associated with mortality (OR 23.5, 95%CI 1.60-345; P=0.02). Immediate hemostasis was achieved in 100% of cases with malignancy and post-intervention bleeding, with rebleeding in 17% and 3.1%, respectively. Twenty-six patients received TC-325 for lower GI bleeding, and in all but one the primary outcome was achieved. Conclusions TC-325 monotherapy is safe and effective, especially in malignancy or post-endoscopic intervention bleeding. In patients with peptic ulcer, it could be helpful when the primary treatment is unfeasible, as bridge to definite therapy. Introduction Acute gastrointestinal (GI) bleeding (AGIB) is a common medical emergency, especially in an era when antithrombotic agents are widely used [1,2].Depending on the origin of the bleeding, AGIB is defined as upper GIB (UGIB), when located proximally to the ligament of Treitz, and lower GIB (LGIB) when it occurs elsewhere in the alimentary tract.The frequency of UGIB has followed a reducing trend over the last 2 decades, probably due to the eradication of Helicobacter pylori and the widespread prescription of proton pump inhibitors (PPIs) [3].More specifically, UGIB is recorded at a rate of 67 cases per 100,000 population in the United States of America [4], 134 per 100,000 population in the UK [5], and 47 per 100,000 in Spain [3].Similarly, the incidence of UGIBrelated deaths has reduced, as indicated by a database study of peptic ulcer bleeding from the US, conducted between 1989 and 2009, which found that the mortality rate had halved, falling from 4.5-2.1% [6].Although LGIB is more common than UGIB, limited data exist in the literature regarding its prevalence in the general population.Interestingly, the rate of diverticular disease and angiodysplasia-related bleeding has increased, probably reflecting the use of antiplatelets and oral anticoagulants [1,2]. 
Endoscopic hemostasis represents the mainstay treatment, alongside optimization of medical care.This is supported by studies revealing a reduction in overall mortality caused by GI bleeding.GI endoscopy societies have published thorough guidelines on the management of AGIB, favoring dual hemostasis as the optimal approach in cases of active hemorrhage [7][8][9].Mechanical treatment, including a variety of endoscopic clips and bands, provides a reliable and lasting effect, especially when applied to focal lesions and vessels.Similarly, thermal ablation techniques target actively bleeding or highrisk spots with equivalent efficacy.Injection with adrenaline solution provides a combined tamponade and vasoconstrictive effect; however, it is limited by its short duration and needs to be accompanied by another technique [9].These techniques require fine movements to target the bleeding site, which may be challenging in difficult positions, or when there is a large abnormal surface, as in the case of malignancies. Combination therapy, including at least 2 of the aforementioned modalities, is strongly recommended by current guidelines and supported by high-quality evidence [8,9].Although the available modalities offer an adequate effect on hemostasis, single treatment with epinephrine injection is inferior to combination therapies with thermal or mechanical hemostasis.At least in cases with active bleeding, epinephrine injection in the bleeding site, followed by cauterization or clipping, provides lower rates of rebleeding and need for emergency surgery [10,11].However, in cases with a difficult and unstable endoscopic position, unavailability of sophisticated devices such as over-the-scope clips, and inadequate endoscopic experience, combined hemostasis can be challenging. Topical hemostatic powders offer a treatment modality that is easy to use, with a minimal learning curve.Therefore, they provide a promising alternative, especially when a targeted treatment cannot be provided.Additional benefits include the ability to treat a large surface area and their non-contact nature.TC-325 (Hemospray®; Cook Medical, Winston-Salem, North Carolina, USA) is a mineral-based hygroscopic powder that is deployed using a pressurized carbon dioxide canister (Fig. 1).When Hemospray® comes into direct contact with blood it triggers a clotting cascade that results in the formation of a coagulum.This leads to a tamponade effect over the bleeding foci, forming an adhesive seal that results in hemostasis.The powder then sloughs off the mucosa over the following 24-72 h [12].Although these hemostatic agents seem to yield an acceptable rate of bleeding cessation, they are currently recommended as rescue therapy, rather than primary therapy.The aim of this single-arm, prospective, multicenter international registry study was to evaluate hemostasis outcomes and adverse events in consecutive patients who received Hemospray® as endoscopic monotherapy for AGIB, in various locations and with different underlying causes. 
Study design A prospective international multicenter study, in form of a registry, was conducted to investigate the efficacy of Hemospray® on AGIB as monotherapy.The Hemospray® Registry was presented to the local research ethics committee (London -South East Research Ethics Committee) and received ethical approval in October 2016 (ISRCTN29594250).A total of 18 centers across Europe and the USA contributed to the registry between January 2016 and February 2022.The study protocol conformed to the ethical guidelines of the last revision of the Declaration of Helsinki and complied with Good Clinical Practice Guidelines [13,14].Patients' anonymity was ensured and all recruited subjects provided written informed consent to their participation in this trial. Inclusion criteria Adult patients with evidence of AGIB were considered as eligible to undergo endoscopic hemostasis with TC-325.UGIB was suspected in patients with melena, hematemesis or Glasgow-Blatchford score ≥1.Cases with hematochezia and abnormal Oakland score were treated as LGIB, unless evidence of UGIB existed (e.g., increased urea, hemodynamic instability).The final decision for enrolment was at the endoscopists' discretion during the endoscopy.Regarding peptic ulcers, only cases with active bleeding in endoscopy were recruited (Forrest Type 1a and Type 1b). Patients were excluded if they did not consent to participate in the study, had prior failed attempts for hemostasis during the same or a previous session, or when TC-325 was used as part of combined hemostasis (adjunctive to clips or thermocautery). Procedure Following resuscitation with intravenous fluids and personalized medical treatment, where needed (e.g., PPIs, red blood cell transfusion), upper or lower GI endoscopy was offered, depending on the suspected area of bleeding.Upon identification of the bleeding site, TC-325 was sprayed on the lesion, using a commercially available system (Hemospray®; Cook Medical, Winston-Salem, North Carolina, USA).This system includes a canister filled with the powder, a 7-or 10-Fr delivery catheter, and a CO 2 pump incorporated in a handle that controls the expulsion of the powder.Once a clear field had been obtained in front of the bleeding site, the working channel of the endoscope was dried with air inflation, followed by the catheter insertion at 1-2 cm from the bleeding lesion.Short bursts were delivered to release the powder under direct vision, until the area was completely covered by the powder.The site was then observed for at least 5 min to assess for immediate hemostasis or the need for complementary treatment. Data collection A predefined online platform was used to enter and maintain the records of the enrolled cases, including the variables that were analyzed.Only the primary investigators (NA, RJH) had access to the patients' records across centers. 
Outcomes and definitions

Given the different behavior and impact of the potential bleeding causes and the challenges raised by the location, the outcomes were measured depending on the cause (e.g., peptic ulcer, malignancy, iatrogenic bleeding) and the bleeding site (upper or lower GI) in order to identify any potential benefit from TC-325 related to these variables. The primary endpoint was defined as the rate of immediate endoscopic hemostasis using the Hemospray® device. This was defined as the intraprocedural observation of bleeding cessation within the first 5 min post monotherapy with TC-325, without recurrence on the same session. The 5-min threshold was also used in previous studies, and thus represented a reasonable comparator [15]. Rebleeding rates, diagnosed when clinical hemorrhage (new hematemesis or melena associated with hemodynamic change following index treatment) or a drop in hemoglobin >2 g/L was observed, were considered as a secondary outcome [16,17]. In addition, 7- and 30-day all-cause mortality rates were calculated. As for any interventional procedure, the frequency and the severity of adverse events were also evaluated.

Figure 1 The Hemospray® (TC-325) device

Follow up

A 30-day follow up was agreed, either with a face-to-face clinic review or via telephone consultation, to assess for recurrence or adverse events.

Statistical analysis

Data analysis was performed using the Statistical Package for Social Science Software for Windows (IBM SPSS Statistics, Version 28.0; Armonk, NY: IBM Corp). Continuous variables are presented as mean ± standard deviation, and categorical variables are shown as percentages. We examined the association between the recorded independent variables and the outcomes. Logistic regression was performed in 2 stages. First, the association between each factor and the outcomes was examined separately using a univariable analysis. If several factors showed a statistically significant association with the primary outcomes, we then examined the joint association between the factors as part of a multivariable analysis. Where appropriate we adopted a backwards stepwise selection procedure to omit non-significant variables from the final model. Odds ratios (OR) and their 95% confidence intervals (CIs) were derived from each variable coefficient in the final model. Statistical significance was defined as a P-value ≤0.05 (2-tailed).

Results

One hundred ninety patients were finally included in our cohort and received TC-325 as monotherapy between January 2016 and February 2022. The age ranged between 51 and 81 years, with the median being 66-71 years among subgroups, and the male-to-female ratio was 2:1. In terms of antithrombotics, 15 patients were under aspirin, 8 under clopidogrel, 1 of them on dual antiplatelet therapy, and 17 on anticoagulation, either warfarin or direct oral anticoagulant. Forty patients (21.1%) presented as hemodynamically unstable and underwent endoscopy after initial resuscitation. Immediate hemostasis was achieved in 96.3% (95%CI 92.6-98.5; 183/190) of patients, with an overall recurrence rate of 17.4% (95%CI 11.9-24.1; 28/161), occurring within 14 days from the initial hemostasis. Data on blood units transfused post-hemostasis were available for 52 patients, with a mean number of 0.56 units per patient (range 0-8). Sixteen of 161 patients (9.9%, 95%CI 5.8-15.6) died within 7 days post-hemostasis, and deaths rose to 21.7% (95%CI 15.6-28.9; 35/161) after 1 month.
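The two-stage regression strategy described in the statistical analysis section (univariable screening, then a multivariable logistic model pruned by backwards elimination and reported as odds ratios with 95% CIs) could be reproduced roughly as follows. The study itself used SPSS; this Python sketch is only illustrative, and the data file and column names are hypothetical.

```python
# Illustrative sketch of the two-stage modelling described above (the study
# used SPSS; this re-implementation and all column/file names are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("registry.csv")  # e.g. columns: rebleed, age, asa, blatchford, unstable
candidates = ["age", "asa", "blatchford", "unstable"]

# Stage 1: univariable screening of each candidate predictor at P <= 0.05.
keep = []
for var in candidates:
    fit = smf.logit(f"rebleed ~ {var}", data=df).fit(disp=0)
    if fit.pvalues[var] <= 0.05:
        keep.append(var)
assert keep, "no predictor passed univariable screening"

# Stage 2: multivariable model; drop the least significant term until all
# remaining terms satisfy P <= 0.05 or only one term remains.
while True:
    fit = smf.logit("rebleed ~ " + " + ".join(keep), data=df).fit(disp=0)
    pvals = fit.pvalues.drop("Intercept")
    if pvals.max() <= 0.05 or len(keep) == 1:
        break
    keep.remove(pvals.idxmax())

# Odds ratios and 95% confidence intervals from the final model coefficients.
ci = fit.conf_int()
summary = pd.DataFrame({"OR": np.exp(fit.params),
                        "CI_low": np.exp(ci[0]),
                        "CI_high": np.exp(ci[1])}).drop("Intercept")
print(summary.round(2))
```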
Peptic ulcer-related bleeding Forty-eight patients with Forrest Ia (2/48) or Ib ulcer (46/48) were included, of a total 74 cases with ulcer-related bleeding (Fig. 2).The rationale for Hemospray in this setting is that once it comes into contact with blood it forms a cohesive and adhesive barrier that tamponades the bleeding lesion.This subsequently promotes the concentration of clotting factors and cellular elements that may activate the clotting cascade [18].In our cohort, immediate hemostasis was achieved in 42/48 patients, equating to a rate of 88% (95%CI 75-95) (Table 2).The Blatchford score was borderline associated with failed hemostasis; every 5-unit increase in the score resulted in a 5-fold increase in the odds of failure (P=0.05). Upper GI malignancy Seventy-nine patients with an upper GI cancer were recruited into this subgroup (19 esophagus, 6 esophagogastric junction, 51 gastric, 3 duodenal).The primary outcome was achieved in 100% (79/79) of upper GI malignancy cases, regardless of the location or lesion size. Post-upper GI endoscopic therapy Post-procedure bleeding was diagnosed and treated with TC-325 after various procedures, as presented in Fig. 3.An optimal rate of immediate hemostasis was achieved (100%, 95%CI 91-100; 37/37), for all of the different procedures.Only 1 case of endoscopic mucosal resection (EMR) (3.1%, 95%CI 0-16; 1/32) presented with rebleeding; the defect was 50 mm and the resected lesion 22.5 mm.One patient died within the first thirty days (3.1%, 95%CI 0-16; 1/32]. LGIB A total of 26 patients received Hemospray® for LGIB, with 12 of them (46.2%)having an underlying lower GI malignancy as the cause of bleeding, and in all but one the primary outcome was achieved (96%, 95%CI 80-100; 25/26).Follow-up information was available in 22 cases with a rebleeding rate of 23% (95%CI 8-45; 5/22).The univariable analysis revealed that age and hemodynamic status were significantly associated with rebleeding.More specifically, for every 10-year increase in age the risk of rebleeding was reduced by one fifth (P=0.03), while it was 18 times higher in patients who were hemodynamically unstable compared to those who were hemodynamically stable (P=0.04).Post-hemostasis, mortality was 14% (95%CI 3-35; 3/22) within the first 7 days and 32% (95%CI 14-55%; 7/22) within the first 30 days; none of the factors included in our regression models was linked with 30-day mortality; however, all but 1 had underlying malignancy and only 2 of them rebled.A single complication was reported in the registry, with the endoscopist reporting catheter blockage during the treatment of a duodenal ulcer.Despite this, immediate hemostasis was achieved and there were no reports of rebleeding. 
Discussion This prospective multicenter registry assessed the efficacy of Hemospray® as monotherapy.Immediate hemostasis was achieved in 88-100% across a range of GI bleeding scenarios.The highest rates were recorded in bleeding related to malignancy and post-endoscopic intervention, where TC-325 was universally successful.Interestingly, these 2 subgroups were associated with the lowest rates of recurrent hemorrhage (17% and 3.1%, respectively), whereas one fourth of peptic ulcers and LGI lesions rebled.A recent meta-analysis assessed the pooled rates of 19 studies, including 212 cases where Hemospray® was used as monotherapy.Their outcomes were similar to ours, with an immediate hemostasis rate of 91% (95%CI 79-96), regardless of the combined use with other modalities, the intensity of bleeding, and its cause.The early rebleeding rate was 21% (95%CI 14-31), which is higher than the 17.4% (95%CI 11.9-24.1)observed in our registry across all scenarios [19].Within the first month after hemostasis, the mortality among patients treated for a peptic ulcer or upper GI malignancy was 25%, which was higher among those with an advanced ASA score or hemodynamic instability.Only 1 patient died post-EMR, whereas the higher mortality rates were detected among patients with LGIB; however, none of the evaluated variables was associated with this outcome.Finally, TC-325 monotherapy was an extremely safe treatment, with only once adverse event reported secondary to catheter blockage.In 2023, a Field Safety Notice was released regarding adherence of the endoscope to the hemostatic powder while deployed in a retroflexed position, but this was not seen in our registry. Treating active peptic ulcer-related bleeding requires at least 2 hemostatic techniques, and hemostatic powders, such as TC-325, are considered for refractory or recurrent cases [9].Hemospray® monotherapy yielded bleeding cessation in 88% (95%CI 75-95) of cases; however, the recurrence rate was considerable (26%, 95%CI 13-43), accompanied by a similarly high mortality rate within the first month (26%, 95%CI 13-43).Interestingly, a high ASA score, reflecting the patients' comorbidities and perioperative risk, was an independent predictor of mortality, with an OR of 23.5.We have previously shown, in a study of 202 patients who received Hemospray® monotherapy (25%), combination therapy (75%) or Hemospray® rescue therapy (25%), that the overall rate of hemostasis was 88%, with no difference among subgroups.Similarly, there was no difference in rebleeding rates (17%) and early mortality (12%); however, the 1-month mortality rates were significantly lower when a combined hemostasis approach was applied, compared to monotherapy (P<0.001)[15].Despite the theoretical risk of failure and rebleeding in cases with spurting hemorrhage (Forrest Ia), it is not uniformly supported by the literature [15,20].The high rates of immediate hemostasis and the non-inferiority for this outcome compared to the combined approach, reveal a significant role for TC-325 in achieving a direct effect on the active bleeding site.This is especially true when combined hemostasis cannot be achieved, as in the case of a difficult position, a marginally stable patient or an unclear field.Hemospray® could be used in these cases as a bridge therapy, to gain time with primary control before a second-look endoscopy, especially when resources are limited, or when the patient needs to be transferred to another center for definitive treatment.However, the significantly higher rates of 
mortality in monotherapy cases with comorbidities imply a need for confirmation of hemostasis with a second endoscopy and complementary treatment where needed.Potential causes associated with these rates need to be assessed by future studies, thereby evaluating the clinical approach policies post-hemospray monotherapy for peptic ulcer, including restarting feeding, transfusion policy and continuation of antithrombotics. Malignancy-related bleeding is notoriously difficult to treat, given the lack of a direct target for endotherapy, the tumour tissue's friability, the diffuse bleeding and the absence of a single bleeding vessel [19,21].The wide field of treatment during the application of Hemospray makes it a helpful endoscopic option for this indication [21], and we have shown that immediate hemostasis can be achieved in 100% of cases.Similar studies provide equivalent results regarding immediate efficacy [22][23][24].Additionally, TC-325 significantly reduces the required transfusions in this patient group [24].A recent randomized controlled trial randomized 106 patients with GI malignancy bleeding to receive monotherapy with Hemospray® or the standard treatment (thermal or mechanical modalities or adrenaline injection alone or in combination).Immediate hemostasis rate was significantly higher using Hemospray® compared to the conventional techniques (100% vs. 68.6%;P<0.001) and, more interestingly, the Hemospray® group also had lower recurrence rates (2.1% vs. 21.3%;P=0.003).However, we should note that up to 20% of the standard treatment cohort were managed with epinephrine therapy alone [25].In our cohort, rebleeding occurred in 17% of cases with lesions larger than 4 cm, presenting a non-significant tendency for recurrence; however, data on variables affecting this outcome (e.g., morphology, location of the lesion, coagulation status) need to be elucidated by future studies.Considering mortality, a small number of patients (7%) died during the first week, though this rate increased over a month, especially among patients who presented with hemodynamic instability.This outcome shows heterogeneity in the literature, ranging between 18.9% and 44.9% within 30 days post bleeding, with active bleeding during the endoscopy increasing the risk of death by 2.24 [26,27]. 
Another area where Hemospray® could represent a reliable choice as monotherapy is post-endoscopic intervention bleeding. In our cohort, immediate hemostasis was achieved in all bleeding cases, while only 1 patient exhibited recurrence. This single incident occurred following colonic EMR, where the lesion was 50 mm in size [28]. Similar results of optimal hemostasis were also presented by our group in a related study, with recurrences occurring in 2 of 57 post-EMR patients (4%) [29]. Data on the performance of Hemospray® in LGIB are limited; however, it appears equivalent to UGIB [19]. Although immediate bleeding cessation was achieved in almost all of our patients, the recurrence rate was relatively high (23%, 95%CI 8-45%), probably reflecting a persistent LGIB etiology in most cases, such as diverticular disease. Finally, TC-325 is already established in terms of safety, with the most common adverse event being catheter blockage.

The most significant limitation of this multicenter prospective registry study is its non-randomized design with no comparator, which does not allow the evaluation of TC-325 against the current standard of care. In specific subgroups, such as post-intervention bleeding, the sample size was too small to identify potential confounders related to the type of intervention, whereas its effect on variceal bleeding was not assessed. Patient selection for monotherapy use was at the discretion of the endoscopist, as opposed to a set of criteria or a protocol, which potentially introduced an element of selection bias. Furthermore, excluding patients who underwent combination therapy with other endoscopic modalities could obscure the true efficacy of Hemospray® monotherapy. This is because initial use of a hemostatic powder may have required salvage intervention during the same procedure; salvage treatment following recurrence is also under-reported. Moreover, detailed aspects regarding the macroscopic features of bleeding lesions or histological diagnosis regarding malignancy were not extracted, which could have impacted our outcome measures. A significant drawback is the fact that the exact cause of death for patients was not documented, meaning that we cannot directly associate rebleeding or immediate hemostasis with mortality.

Endoscopic hemostasis using the TC-325 powder as monotherapy is safe and effective, especially in hemorrhage due to malignant lesions or post-endoscopic intervention (Fig. 4). In peptic ulcer-related bleeding it could achieve immediate results when the standard-of-care combined treatment is not feasible, allowing more time to optimize a patient's condition and make a definite plan. In these cases, a second-look endoscopy could be considered to confirm the outcome and intervene when necessary; however, this approach needs to be evaluated further.

Figure 4 Proposed algorithm for Hemospray® use in gastrointestinal (GI) bleeding

Table 1 Main characteristics of the recruited sample

Table 2 Study outcomes stratified per cause of bleeding. GI, gastrointestinal; CI, confidence interval
2024-06-25T15:04:50.546Z
2024-06-20T00:00:00.000
{ "year": 2024, "sha1": "87ed215d90a06f28551c68ea551b487c4dad9e46", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.20524/aog.2024.0897", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "48a1ba1841c83d6d5a4f71999457caf260849433", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
228067057
pes2o/s2orc
v3-fos-license
A new approach to interspecific synchrony in population ecology using tail association Abstract Standard methods for studying the association between two ecologically important variables provide only a small slice of the information content of the association, but statistical approaches are available that provide comprehensive information. In particular, available approaches can reveal tail associations, that is, accentuated or reduced associations between the more extreme values of variables. We here study the nature and causes of tail associations between phenological or population‐density variables of co‐located species, and their ecological importance. We employ a simple method of measuring tail associations which we call the partial Spearman correlation. Using multidecadal, multi‐species spatiotemporal datasets on aphid first flights and marine phytoplankton population densities, we assess the potential for tail association to illuminate two major topics of study in community ecology: the stability or instability of aggregate community measures such as total community biomass and its relationship with the synchronous or compensatory dynamics of the community's constituent species; and the potential for fluctuations and trends in species phenology to result in trophic mismatches. We find that positively associated fluctuations in the population densities of co‐located species commonly show asymmetric tail associations; that is, it is common for two species’ densities to be more correlated when large than when small, or vice versa. Ordinary measures of association such as correlation do not take this asymmetry into account. Likewise, positively associated fluctuations in the phenology of co‐located species also commonly show asymmetric tail associations. We provide evidence that tail associations between two or more species’ population‐density or phenology time series can be inherited from mutual tail associations of these quantities with an environmental driver. We argue that our understanding of community dynamics and stability, and of phenologies of interacting species, can be meaningfully improved in future work by taking into account tail associations. | INTRODUC TI ON All ecologists study relationships between biological and environmental variables and among biological variables. But standard methods for studying the association between two variables provide only a small slice of the information content of the association. For instance, the two pairs of variables in Figure 1a,b have identical Pearson correlation coefficients, and also have identical Spearman correlation coefficients, but nonetheless display very different patterns of association (Ghosh, Sheppard, Holder, et al., 2020;. Correlations are not the only way to study associations, but they are very commonly used, and other standard methods in ecology provide a similarly limited amount of information that neglects patterns of association (Anderson, de Valpine, Punnett, & Miller, 2018;Genest & Favre, 2007;Joe, 2014;Mai & Scherer, 2017;Nelsen, 2006) that seem likely to be ecologically important (Ghosh, Sheppard, Holder, et al., 2020;. The variables of Figure 1a (respectively, Figure 1b) are more strongly related in the left (respectively, right) portions of their distributions, thereby displaying asymmetric associations of the distribution tails, henceforth called asymmetric tail association. 
For two positively associated variables, stronger association between values in the left or lower portions of the distributions of the variables is henceforth referred to as left-tail association (Figure 1a), whereas stronger association between values in the right or upper portions of the distributions of the variables is henceforth referred to as right-tail association (Figure 1b). The word "distribution" is sometimes omitted from the terminology, but implied. Tail association is a potentially important pattern of association that is not captured by standard correlation coefficients. Statistical approaches exist, however, that provide a complete description of the relationship between variables; these approaches are based on the idea of the copula. Tail associations are an important aspect of a copula approach to dependence, and tail association will be a focus of this paper. We here give a conceptual flavor of copulas before subsequently focusing on tail association. We introduce copulas instead of proceeding directly to tail associations, for three reasons: to properly credit the copula ideas at the root of our tail association tools, and the researchers who developed them; to indicate the origin of our tail association tools, so that future researchers seeking to generalize our approach will have a place to start; and to introduce ideas (normalized rank plots; see below) that are necessary to define our measures of tail association. Copulas can be used to separate the information content of a bivariate dataset, (x t , y t ) for t = 1, …, T, into two nonoverlapping parts: the information in the marginal distributions (which is not about the association between the variables) and the rest of the information (which is solely about the association). Following Ghosh, Sheppard, Holder, et al. (2020) and Genest and Favre (2007), the isolated information about the association between x t and y t is revealed by the plot of u t against v t , where u t is the rank of x t in the set {x 1 , x 2 , . . . , x T }, divided by T + 1; and v t is the rank of y t in the set {y 1 , y 2 , . . . , y T }, also divided by T + 1. Here the rank of the smallest element of a set is understood to be 1. We refer to the u t and v t as normalized ranks of the x t and y t . We refer to the plot of v t against u t as the normalized rank plot for y t and x t . For instance, the normalized rank plots for Figure 1a,b are in Figure 1c,d and show the asymmetric associations in the tails. The normalized rank plot reflects the copula structure of (x t , y t ) (Genest & Favre, 2007; Ghosh, Sheppard, Holder, et al., 2020). Ranking makes the marginal distributions uniform, isolating only the information on association between the variables. Genest and Favre (2007) state that inferences about dependence structures should always be based on ranks. It is likewise the purpose of copula approaches to separate association information from information on marginals. We emphasize that we have not here provided a formal definition of copulas, instead only introducing the fundamental copula idea of separating dependence information from information on marginals. Brief (Anderson et al., 2018; Genest & Favre, 2007; Ghosh, Sheppard, Holder, et al., 2020) and comprehensive (Joe, 2014; Mai & Scherer, 2017; Nelsen, 2006) introductions to copulas are available elsewhere. Copulas can also be used to study multivariate data. Copula approaches are applied widely and to great effect in fields such as finance and neuroscience (Emura & Chen, 2016; Goswami, Hazra, & Goyal, 2018; Kim et al., 2008; Li, 2000; Li, Xie, & Hu, 2013; Onken, Grünwälder, Munk, & Obermayer, 2009; Serinaldi, 2008; She & Xia, 2018), but only rarely, so far, in ecology (Anderson et al., 2018; Ghosh, Sheppard, Holder, et al., 2020; Popovic, Warton, Thomson, Hui, & Moles, 2019; Valpine, Scranton, Knape, Ram, & Mills, 2014).

FIGURE 1 Pedagogical figure for introducing tail association and partial Spearman correlation. (a, b) Two pairs of variables that have identical Pearson (P) correlation, and also identical Spearman (S) correlation, but that differ markedly in the nature of the association. Panel a shows stronger left- than right-tail association and panel b shows the reverse. (c, d) Normalized rank plots (see Section 1) for panels a and b, respectively. (e, f) Graphics supporting the definitions of partial Spearman correlation and our statistic measuring asymmetry of tail association (see Section 2). This figure is similar in some respects to figs 1 and 7 of Ghosh, Sheppard, Holder, et al. (2020).

The potential of copulas for improving ecological understanding was argued by Ghosh, Sheppard, Holder, et al. (2020), and those authors also introduced tail association as an important aspect of copula structure and elaborated the relationship between tail association and copulas. The study of Ghosh, Sheppard, Holder, et al. (2020) was a wide-ranging study of the importance, causes, and consequences of copula structures in associations between ecological variables. One of the main foci of that paper was associations between fluctuations through time of population-density or phenological measurements of the same species in different locations. This study instead focuses on population-density and phenological measurements of different species in the same location. Ghosh, Sheppard, Holder, et al. (2020) studied, for instance, associations between first flight time series, for a given species of aphid, measured at different locations in the United Kingdom (UK); and associations between plankton density time series, for a given plankton taxon, measured at different locations in seas around the UK. We instead study associations between first flight or population-density time series measured in the same location for different (sympatric) species. Thus, in contrast with the study of Ghosh, Sheppard, Holder, et al. (2020), this study is more part of community ecology than of spatial ecology. Our reasons for this shift are as follows. First, synchronous (positively correlated) and compensatory (negatively correlated) population-density dynamics of different species occupying the same area are longstanding topics of concern in community ecology, with important ramifications for the stability or instability of aggregate community or ecosystem properties (Gonzalez & Loreau, 2009; Jochimsen, Kümmerlin, & Straile, 2013; Kent, Yannarell, Rusak, Triplett, & McMahon, 2007; Loreau & Mazancourt, 2008; Raimondo, Turcáni, Patoèka, & Liebhold, 2004); there are reasons to believe tail associations in this context will play an important but unstudied role in understanding these topics. A major past insight into community dynamics (Gonzalez & Loreau, 2009) was that an aggregate property of a community, such as its total biomass, can be relatively stable through time although its constituent parts (population biomasses of individual species) are highly variable, if the parts show compensatory dynamics (Hallett et al., 2014).
Likewise, synchrony amplifies community biomass variability because the concordant variations of species biomass time series reinforce each other in the total (Ma et al., 2017). Second, studies of the phenology of species interacting in one area have also played a central role in community ecology, with important ramifications for whether and to what extent interactions will be modified by climate change (Durant, Hjermann, Ottersen, & Stenseth, 2007; Yang & Rudolf, 2010); there are reasons to believe tail associations between variables in this context may play an important role, as well. As climate changes and phenologies shift, there is the potential for phenologies of interacting species to shift differently, disrupting the interaction (Thackery et al., 2010). This idea is referred to as the match-mismatch hypothesis. Even if, for instance, year-to-year fluctuations in the emergence times of two interacting species are highly correlated, if this correlation is principally in the right (respectively, left) tails of the distributions of possible emergence times, so that early (respectively, late) emergences of the species are actually uncorrelated, then mismatched years are likely to occur, impacting the species. Such mismatches will occur, in this conceptual example, when emergence is early (respectively, late). Essentially, even with substantial correlation between emergence dates of species, if this correlation is principally in one of the tails, then uncorrelated emergences, and therefore mismatches, can occur under some conditions. One potential mechanism by which early emergences, for example, may be uncorrelated between species while later emergences remain correlated is if both species follow the same environmental cue for their emergence, but physiological limitations of only one of the species prevent emergence before a certain date. Advancing emergence dates of myriad species make this scenario more plausible. We here begin exploring whether tail associations may be important for studies of synchrony and compensatory dynamics, and for studies of phenology and the match-mismatch hypothesis. In addition to examining whether tail association in our data is asymmetric, we also test for possible causes of such patterns. One possible mechanism, similar to some of the mechanisms explored by Ghosh, Sheppard, Holder, et al. (2020), is explained for the Ceratium example as follows. Earlier work showed that average sea surface temperature is an important correlate of phytoplankton abundance in our data (e.g., Defriez, Sheppard, Reid, & Reuman, 2016; Sheppard, Defriez, Reid, & Reuman, 2019a; Sheppard, Reid, & Reuman, 2017): cold water is associated with more phytoplankton, likely because upwelling and mixing of the surface and deeper ocean layers bring both nutrients and cold water to the photic zone. However, if it is the case for a given location that very cold water is associated with no more Ceratium, on average, than is moderately cold water, then that corresponds to a positive relationship and a left-tail association between the "coldness" of the surface water (measured, for instance, by how many degrees colder the water is than average) and Ceratium abundance. If such tail association is strong and consistent across Ceratium species, it should produce positive relationships with left-tail association between the abundance time series of the species.
Likewise, in locations for which the winter coldness-Ceratium abundance association shows less left-tail association, one should see less left-tail association between different Ceratium species. So tail association between two species may be inherited from joint tail association of both species on a common environmental driver. Phytoplankton are also strongly influenced by the abundant generalist copepod consumer Calanus finmarchicus, so our actual investigation of the mechanism proposed here will take into account this influence as well as the association with sea surface temperature. For aphid first flight, we examine the same potential mechanism, but the relevant driver in that case is winter temperature. Thus this paper focuses on whether population-density and phenological time series of co-located species commonly show asymmetric tail association (Q1), and on whether such tail association can be inherited from common environmental drivers (Q2); the Discussion has additional thoughts on next steps toward this goal. Our results and the conceptual considerations introduced above are good evidence, in our view, of the potential for tail association to make a crucial difference in how ecologists understand these important topics.

| Data

Our population dataset comprised average annual abundance estimates for plankton of the genus Ceratium in seas around the UK; we used this genus because four species were available from it (Table 1), and we chose closely related species because they may be influenced in similar ways by environmental variables. The 15 locations we used were selected from the 26 locations of the larger dataset (Figure S1) as follows. First, to reduce the effects of sampling variation on statistical results, we chose the subset of locations for which more than 35 years of data were available for all species. Second, for a given location, we excluded Ceratium species that were undetected for more than 10% of sampled years at that location. Finally, we considered only those locations for which at least two Ceratium species remained. We also had data on average growing season sea surface temperature for each grid cell and year (Sheppard et al., 2017, 2019a). Earlier analyses (e.g., Sheppard et al., 2019a) demonstrated that sea surface temperature and C. finmarchicus abundance are important covariates of phytoplankton dynamics in UK seas, though associations between temperature and phytoplankton are probably due to relationships both these variables have with nutrient abundance in surface ocean layers. Sea surface temperature data preprocessing was the same as used by Sheppard et al. (2017). Our phenology dataset comprised annual first flight dates for 20 aphid species (Table 1) from 10 locations across the UK (Figure S2), spanning the 35 years 1976-2010. These data were a subset of a larger dataset covering 11 locations, analyzed previously by Sheppard, Bell, Harrington, and Reuman (2016) and Ghosh, Sheppard, Holder, et al. (2020). The data were originally obtained from the Rothamsted Insect Survey suction-trap dataset (Bell et al., 2015; Harrington, 2014).

| Statistical methods

Given bivariate data (x t , y t ) for a set of years, t, of size T, and after computing normalized ranks (u t , v t ) as described in the Introduction, tail association and asymmetry of tail association were measured using the partial Spearman correlation of Ghosh, Sheppard, Holder, et al. (2020), which we here reintroduce.
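Normalized ranks, on which all of the statistics below operate, can be computed as in the following sketch. The helper name and the synthetic example series are ours, not from the study.

```python
# Sketch: normalized ranks (u_t, v_t) as defined in the Introduction; the rank
# of the smallest value is 1, and ranks are divided by T + 1. Plotting v
# against u gives the normalized rank plot, which isolates association
# structure from the marginal distributions. Example data are synthetic.
import numpy as np
from scipy.stats import rankdata

def normalized_ranks(x):
    x = np.asarray(x, dtype=float)
    return rankdata(x) / (len(x) + 1.0)

rng = np.random.default_rng(0)
x = rng.normal(size=40)                    # e.g. one species' time series
y = x + rng.normal(scale=0.5, size=40)     # a positively associated partner

u, v = normalized_ranks(x), normalized_ranks(y)   # points of the normalized rank plot
```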
The standard Spearman correlation itself measures association between the variables x t and y t (or between u t and v t ; recall the Spearman correlation is based on ranks, so is the same for both sets of variables); but Spearman correlation measures only the overall association of the samples and cannot tell us how association varies across the distributions of the variables. Given two bounds 0 ≤ l b < u b ≤ 1, we define the boundary lines u + v = 2 l b and u + v = 2 u b (Figure 1e), which intersect the unit square on which normalized ranks are plotted. The partial Spearman correlation associated with the bounds l b and u b will be the portion of the Spearman correlation attributable to the points that fall between these boundary lines. The partial Spearman correlation for the band between these boundaries and within the unit square is cor lb,ub (u, v) = ∑ t (u t − mean(u))(v t − mean(v)) / ((T − 1) s u s v ). Here, the sample means, mean(u) and mean(v), and the sample standard deviations, s u and s v , are computed using all T data points, but the sum, ∑, is over only the indices t for which l b < (u t + v t )/2 ≤ u b , that is, the points lying between the boundary lines. The sum of the partial Spearman correlations over any choice of bands (l b k , u b k ) that partition (0, 1) equals the standard Spearman correlation, as long as no points happen to lie exactly on the bounds. Notation is summarized in Table S1. For each sampling location, n, we computed a matrix, C n , which we call the community tail association matrix, which quantifies asymmetry of tail association between pairs of aphid species or pairs of Ceratium species at n. Denote by s n i (t) the aphid first flight date or the Ceratium population-density for sampling location n, for the ith species that was present in the cleaned data for location n, and for year t. We then defined the matrix C n by defining C n (i, j) for two aphid or Ceratium species i, j, as follows. First, C n (i, j) was not defined, or was defined to equal the missing-data place holder "NA", if one of three conditions held true: (a) i = j; or if (b) the hypothesis that s n i (t) and s n j (t) were independent could not be rejected (5% level, using a test described by Genest and Favre (2007), implemented in the function BiCopIndTest in the VineCopula package in R); or if (c) independence was rejected but the Spearman correlation of s n i (t) and s n j (t) was negative. Otherwise, C n (i, j) = cor 0,b (u, v) − cor 1−b,1 (u, v), where u and v are the normalized ranks of s n i (t) and s n j (t), and the partial Spearman correlations in this expression were computed over the times, t, for which data were available for location n; positive values of C n (i, j) therefore indicate stronger left-tail association and negative values indicate stronger right-tail association. The entry C n (i, j) was set to NA if independence of s n i (t) and s n j (t) could not be rejected because attempting to quantify tail association (or anything else about association) for independent variables is pointless. C n (i, j) was set to NA for negatively associated s n i (t) and s n j (t) because negative association occurred for only one pair of species in one location in our data (plankton sampling location 18, species C. furca and C. macroceros, see Section 3). Tail association for negatively associated variables should be studied, and this topic is revisited in the Discussion, but negative associations were too rare in our data to study them. The community tail association matrix C n is symmetric. The value b = 1/3 was used for plankton locations, whereas b = 1/2 was used for aphid locations because aphid time series were shorter, and larger b reduces sampling variation for our statistics (Ghosh, Sheppard, Holder, et al., 2020). See Appendix S1 for more information on the choice of b.

TABLE 1 Names of 4 plankton and 20 aphid species for which data were used
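Read literally, the definitions above translate into a few lines of code. The sketch below is our reading of them, with band membership taken as l b < (u t + v t )/2 ≤ u b and synthetic example data; it omits the independence screening that the study performed with BiCopIndTest in R.

```python
# Sketch of the partial Spearman correlation cor_{lb,ub}(u, v) and of the
# community tail association entry C_n(i, j) = cor_{0,b} - cor_{1-b,1},
# following the definitions above. Means and standard deviations use all T
# points; only points with lb < (u_t + v_t)/2 <= ub enter the sum.
import numpy as np
from scipy.stats import rankdata, spearmanr

def normalized_ranks(x):
    return rankdata(np.asarray(x, dtype=float)) / (len(x) + 1.0)

def partial_spearman(u, v, lb, ub):
    u, v = np.asarray(u), np.asarray(v)
    T = len(u)
    band = (lb < (u + v) / 2.0) & ((u + v) / 2.0 <= ub)
    num = np.sum((u[band] - u.mean()) * (v[band] - v.mean()))
    den = (T - 1) * u.std(ddof=1) * v.std(ddof=1)
    return num / den

def tail_association(x, y, b):
    """C_n(i, j)-style statistic: left-tail minus right-tail partial Spearman."""
    u, v = normalized_ranks(x), normalized_ranks(y)
    return partial_spearman(u, v, 0.0, b) - partial_spearman(u, v, 1.0 - b, 1.0)

# Sanity check: partial Spearman correlations over bands that partition (0, 1)
# add up to the ordinary Spearman correlation.
rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = x + rng.normal(scale=0.7, size=50)
u, v = normalized_ranks(x), normalized_ranks(y)
total = partial_spearman(u, v, 0.0, 0.5) + partial_spearman(u, v, 0.5, 1.0)
rho, _ = spearmanr(x, y)
assert np.isclose(total, rho)
print(tail_association(x, y, b=1 / 3))
```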
We also computed a matrix D n , which we call the community-driver tail association matrix, which quantifies tail association between aphid or plankton time series and their covariates. Denote by d n k (t) the value of the kth covariate that operated at sampling location n in year t (winter temperature for an aphid sampling location, sea surface temperature or C. finmarchicus density for a Ceratium location). We then defined D n by defining D n (i, k) for an aphid or Ceratium species i and a covariate k, as follows. First, D n (i, k) was not defined, or was set to NA, if the hypothesis that s n i (t) and d n k (t) were independent could not be rejected (5% level, BiCopIndTest). Otherwise, we either: (a) set D n (i, k) = cor 0,b (u, v) − cor 1−b,1 (u, v), with u and v the normalized ranks of s n i (t) and d n k (t), if s n i (t) and d n k (t) were positively associated (positive Spearman correlation); or (b) set D n (i, k) to the same quantity computed instead from the normalized ranks of s n i (t) and −d n k (t), if s n i (t) and d n k (t) were negatively associated (negative Spearman correlation). For aphid first flight time series, for which k was always 1 and d n k (t) was winter temperature in location n, associations between s n i (t) and d n k (t) were always negative when they were significant (see Section 3). The same was true for Ceratium density time series and sea surface temperature. Thus our practice of using −d n k (t) was equivalent, in the case of temperature variables, to using a "coldness" index such as the number of degrees colder than an average or typical reference temperature, in place of temperature. Aphid and Ceratium data were always positively associated with the coldness index when they were significantly associated with it. Although C. finmarchicus abundance was positively associated with Ceratium time series in some sampling locations and negatively associated in others, it always showed the same sign of association with all Ceratium species within a location. Using −d n k (t) in place of d n k (t) when negative associations with aphid or Ceratium data occurred allowed us to study asymmetry of tail association using methods developed with positively associated variables in mind. We again used b = 1/3 for plankton data and covariates, and b = 1/2 for aphid data and winter temperature. For display, we horizontally concatenated the matrices C n and D n and displayed matrix values using color. We used the community tail association matrix C n for each sampling location n to answer Q1 from the Introduction, as follows. First, we counted the number, N n L , of entries of C n which were not NA and which were greater than 0. These were the "left-tail dominant" species pairs, that is, pairs of species for which association was stronger in the left rather than in the right tails of the species distributions. We also counted the number, N n R , of right-tail dominant pairs, for which the corresponding entries of C n were negative. If N n L was substantially greater than (respectively, substantially less than) N n R for a location n, it suggested that left-tail association (respectively, right-tail association) between species in that location was dominant, answering Q1 in the affirmative. We also calculated A n C,L , the sum of all positive, non-NA entries of C n ; A n C,R , the sum of all negative, non-NA entries of C n ; and A n C = A n C,L + A n C,R , a general measure of asymmetry of tail association in location n. We refer to A n C as the total community tail association. We additionally calculated the normalized quantities F n C,L = A n C,L / (A n C,L + |A n C,R |) and F n C,R = A n C,R / (A n C,L + |A n C,R |).
Because 0 ≤ F n C,L ≤ 1, 0 ≤ |F n C,R | ≤ 1, and F n C,L + |F n C,R | = 1, the relative sizes of F n C,L and |F n C,R | indicate the relative dominance of left-and right-tail association between species at location n. Together, all these statistics provide an answer to Q1. We used the community tail association matrix, C n , and the community-driver tail association matrix, D n , to answer Q2 from the Introduction for the Ceratium and aphid data, as follows. First, we calculated A n D , the sum of all non-NA entries of D n . This was analogous to A n C , but calculated using the matrix D n instead of the matrix C n . We refer to A n D as the total community-driver tail association. We then examined whether the values A n C and A n D were correlated across locations, n. This tests the causal hypothesis in the Introduction because it tests whether Ceratium or aphid time series having stronger right-tail (respectively, left-tail) association with environmental covariates in a given location also had stronger right-tail (respectively, left-tail) association with each other at that location. Recall that an environmental covariate was reversed (its negative was used) when it was negatively associated with a Ceratium or aphid species, and that no covariate was ever significantly positively associated with some Ceratium or aphid species and significantly negatively associated with another such species in the same location (see Section 3). We also answered Q2 for the aphid data as follows. Within a location, n, for each species, i, we computed the mean n C (i) of all non-NA entries C n (i, j), for j ranging across all species for which we had data. This quantity measures an average tail association of species i with other species in the same location, with positive values for greater left-tail association and negative ones for greater right-tail association. We refer to n C (i) as the species-community tail association for species i. We then defined n D (i) as the sum of all non-NA entries D n (i, k), for k ranging across all covariates for which we had data. We refer to this as the species-driver tail association for species i. For aphids we only had one covariate, winter temperature, so n D (i) = D n (i, k) for k = 1 corresponding to winter temperature. We provide the more general definition of n D (i) that applies when more covariates were available so the definition can also be considered (briefly, see below) for Ceratium data. We then examined, for each location, n, whether n C (i) and n D (i) were correlated across species, i. This tests the causal hypothesis in the Introduction because it tests whether aphid species which were more right-tail (respectively, left-tail) associated with environmental covariates (winter temperature) also had time series that were more right-tail (respectively, left-tail) associated with the time series of other species in the location. Recall that winter temperature was always negatively associated with aphid first flight when it was significantly associated (see Section 3), and negative temperature (a coldness index) was used in computing D n (i, k). Testing whether n C (i) and n D (i) were correlated across species, i, within a location, n, was not practical for Ceratium, because we only had data for at most four Ceratium species per sampling location, an insufficient number to provide much statistical power in testing for a correlation. 
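The location-level summaries defined above (N L and N R , the totals A C,L , A C,R and A C , and the normalized fractions F C,L and F C,R ) amount to simple bookkeeping over C n . A minimal sketch follows, assuming C n is stored as a symmetric numpy array with NaN marking the NA entries; the example matrix is hypothetical.

```python
# Sketch: location-level summaries of a community tail association matrix
# C_n, stored as a square numpy array with NaN for NA entries. The matrix
# is symmetric, so each unordered species pair is taken once (upper triangle).
import numpy as np

def community_summaries(C):
    C = np.asarray(C, dtype=float)
    iu = np.triu_indices_from(C, k=1)         # one entry per species pair
    vals = C[iu]
    vals = vals[~np.isnan(vals)]              # drop NA (independent/negative) pairs
    N_L = int(np.sum(vals > 0))               # left-tail dominant pairs
    N_R = int(np.sum(vals < 0))               # right-tail dominant pairs
    A_CL = float(vals[vals > 0].sum())        # total positive (left-tail) association
    A_CR = float(vals[vals < 0].sum())        # total negative (right-tail) association
    A_C = A_CL + A_CR                         # total community tail association
    denom = A_CL + abs(A_CR)
    F_CL = A_CL / denom if denom > 0 else np.nan
    F_CR = A_CR / denom if denom > 0 else np.nan
    return {"N_L": N_L, "N_R": N_R, "A_C": A_C, "F_CL": F_CL, "F_CR": F_CR}

# Example: a hypothetical 3-species location with one NA pair.
C_example = np.array([[np.nan, 0.12, -0.05],
                      [0.12, np.nan, np.nan],
                      [-0.05, np.nan, np.nan]])
print(community_summaries(C_example))
```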
RESULTS

Associations between Ceratium species were always positive when they were significant, except for one pair of species in one location (plankton sampling location 18, species C. furca and C. macroceros). Asymmetric tail association was very common between Ceratium population-density time series from the same location, answering Q1 in the affirmative for Ceratium; for some locations, left-tail association between Ceratium species was dominant, and for other locations right-tail association was dominant. To show this, note that for some locations the community tail association matrix, C n , was composed entirely or mainly of positive (left-tail dominant) entries, while for others it was composed mainly of negative entries (Figure 2).

Associations between aphid time series were always positive when they were significant. Asymmetric tail association was also very common between aphid first flight time series from the same location, answering Q1 in the affirmative for aphids; left-tail association was more common for some sampling locations and right-tail association dominated for others, but for most sites right-tail association dominated. To show this, note that for some locations the community tail association matrix, C n , was comprised of a slight majority of positive entries, while for others it was largely negative (Figure 3).

FIGURE 2 Either right- or left-tail association between population-density time series of Ceratium species could dominate, depending on the sampling location. (a, b) The community tail association matrix, C n , and the community-driver tail association matrix, D n (Statistical methods), horizontally concatenated, for example locations n = 12 (a) and n = 26 (b). See Table 1 for species names. All the non-NA values in C n were positive (red) for location 12 (a), indicating left-tail association dominated in that location; but values were largely negative (blue) for location 26 (b), indicating right-tail association dominated there. Matrix entries which were NA because time series were independent are displayed in yellow. The counts N n L and N n R (see Section 2.2) also reflect the distinct tail association characteristics of the two locations. C. fin. = C. finmarchicus; Temp. = temperature. Green dots in D n represent variables which were originally negatively associated, so the negative of the environmental covariate was used for calculating tail association. See Figure S3 for analogous figures for the other sampling locations. (c) The summary statistics F C,L and F C,R (see Section 2.2) for each site show that association between Ceratium species was either substantially dominated by the left or right tails of Ceratium distributions, with the exceptions of a few locations for which tail association was closer to symmetric. Site codes are colored red or blue depending on which of F C,L or F C,R had higher magnitude. Values are not plotted for site 3 because the hypothesis could not be rejected for that site that dynamics of distinct Ceratium species were independent.

For the Ceratium data, the total community tail association, A n C , and the total community-driver tail association, A n D , were significantly correlated across locations, n, validating our hypothesis from the Introduction for a cause of tail association between co-located species, and helping to answer Q2. In other words, tail association between co-located species time series was apparently inherited from common tail association of the species on environmental drivers. Across our 15 locations, A n C and A n D were significantly positively correlated (Pearson correlation, two-tailed test, Figure 4a).
Thus locations for which Ceratium density time series showed greater left-tail (respectively, right-tail) association with environmental covariates (measured with A n D ) also exhibited greater left-tail (respectively, right-tail) association between density time series for distinct species (measured with A n C ).

For the aphid data, the total community tail association, A n C , and the total community-driver tail association, A n D , were positively but nonsignificantly correlated across our 10 sampling locations (Figure 4b). Thus locations for which aphid first flight time series showed greater left-tail (respectively, right-tail) association with winter temperature also showed a nonsignificant tendency toward greater left-tail (respectively, right-tail) association between the time series of distinct species. The correlation was close to significant for the aphid data, and may have been nonsignificant simply because there were slightly fewer aphid sampling locations than there were plankton locations. See also the subsequent results for aphids, which were significant and which support the same overall conclusions.

Our second analysis using aphids, based on the species-community tail associations, α n C (i), and the species-driver tail associations, α n D (i) (Statistical methods), provided further evidence supporting our hypothesis for a cause of tail association between co-located species (Introduction). For 8 of 10 sampling locations, α n C (i) and α n D (i) were significantly correlated across species, i (Figure 5). In other words, for 8 of 10 locations, aphid species with greater left-tail (respectively, right-tail) association with winter temperature also had greater left-tail (respectively, right-tail) association with other aphid species.

DISCUSSION

Our results show that synchronous population-density or phenological time series of co-located species can very commonly show asymmetric tail association. For some sampling locations and species, tail association was predominantly in the left tails, and for others it was predominantly in the right tails of time series distributions, showing a new kind of ecologically meaningful variation among ecosystems. The partial Spearman correlation presented by Ghosh, Sheppard, Holder, et al. (2020) is a simple and effective way to measure tail association for ecological applications. Our results also demonstrate a mechanism by which asymmetric tail association between species can arise: it can be inherited from joint tail association of the two species on the same environmental variables. This mechanism seems likely to apply commonly when co-located species are influenced by the same external factors. Our results convincingly show that standard correlation approaches omit phenomena that seem likely to be important for at least two major topics of interest in ecology: synchronous/compensatory dynamics of species within a community and their influence on community stability; and shifting phenologies and the match-mismatch hypothesis.

The distinct tail association characteristics of Ceratium in different sampling areas around the UK may have consequences for the stability through time of total Ceratium abundance, which may relate to harmful algal blooms because Ceratium species can have a role in such blooms (Baek et al., 2009). For locations in which left-tail association between Ceratium density time series is dominant, Ceratium species are scarce simultaneously, potentially producing years of very low total Ceratium biomass.
In contrast, for locations in which right-tail association is dominant, Ceratium species are highly abundant simultaneously, which may produce years of very high Ceratium biomass, which may sometimes correspond to harmful algal blooms. Our results show that the distinction between these two types of location relates to the tail association of Ceratium species with their environmental covariates, sea surface temperature and C. finmarchicus density. It may be useful to study in future work why some locations principally have left-tail association with these drivers and some principally have right-tail association.

First flight time series for populations of co-located aphid species were principally right-tail associated; that is, more strongly correlated when first flights were later in the season. Our results show this was probably because cold winters delay aphid first flights, but warm winters do not lead to first flights that are any earlier, on average, than those following moderate winters, producing right-tail association between first flights and winter coldness across multiple species; this common association leads to right-tail association between aphids. Thus winter temperature fluctuations lead to temporally dispersed early but temporally coordinated late arrival times of aphid species on summer hosts (many of which are crops, for the species we studied), a fact that may have pest-control significance. Winter temperature is known to influence the first flight dates of virtually all the aphid species for which we had data. Overwintering aphids are sensitive to frost conditions, and so cold winters probably reduce early spring populations on winter host plants. This then lengthens the time required for populations to reach sufficient densities to stimulate the production of winged morphs for flight to summer host plants.

FIGURE 3 Either right-tail association between first flight time series of aphid species could dominate, or left-tail association could be more common, depending on the sampling location. (a, b) The community tail association matrix, C n , and the community-driver tail association matrix, D n (Statistical methods), horizontally concatenated, for example locations n = 2 (a) and n = 5 (b). See Table 1 for species names. A slight majority of non-NA values in C n were positive (red) for location 2 (a; see the N n L and N n R counts displayed), indicating left-tail association was slightly more common than right-tail association in that location. But values were largely negative (blue) for location 5 (b), indicating right-tail association dominated there. Matrix entries which were NA because time series were independent are displayed in yellow. Temp. = temperature. Green dots in D n represent variables which were originally negatively associated, so the negative of winter temperature was used for calculating tail association (Statistical methods); this happened in all cases for which temperature and first flight were significantly associated. See Figure S4 for analogous figures for the other sampling locations. (c) The summary statistics F C,L and F C,R (see Section 2.2) for each site show that association was either dominated by the right tails, or, for a few locations, showed slightly more left-tail association. Site codes are colored red or blue depending on which of F C,L or F C,R had higher magnitude.
FIGURE 4 (panel b, aphid data) Pearson correlation = 0.598, p = 0.0678.

FIGURE 5 For 8 out of 10 sites, the Pearson correlation (P) between the species-community tail association, α n C (i), and the species-driver tail association, α n D (i), across i = 1, 2, …, 20, was significantly positive (p < .05, one-tailed test); for example, one panel shows P = 0.52, p = .013, n = 11. This supports the hypothesis that tail association between species may be inherited from joint tail association of both species on a common environmental driver. See Table 1 for species IDs.

We now expand on the consequences, mentioned above, of tail association for the ecological context of this study. Ghosh, Sheppard, Holder, et al. (2020) showed that the skewness, through time, of the spatial-total time series ∑ l x s,l (t) is sensitive to the nature of tail association between the x s,l (t) (l = 1, . . . , L), if these time series are positively associated with each other. Right-tail (respectively, left-tail) association tended to produce right (respectively, left) skew in the total. Right skew corresponds to a spatial-total time series with exceptionally large values, that is, to "spiky", unstable dynamics of the total population. Left skew corresponds to a spatial-total time series with low values, that is, to dynamics of the total population with a tendency to "crash". The total population can be regarded as a landscape-level measure of the stability or variability of species s, and is important, for instance, if species s is a pest or an exploited species. For the same reasons, the skewness, through time, of the community-total time series ∑ s x s,l (t) is sensitive to the tail association between the x s,l (t) (s = 1, . . . , S), which we have here studied. Right-tail (respectively, left-tail) association again tends to produce right (respectively, left) skew in the total time series (a brief simulation below illustrates this point numerically). In this community context, the total is an aggregate property of the community, and the variability of this total has been used in an extensive literature (e.g., Hallett et al., 2014).

Although our results are sufficient to show that tail associations are likely to be important for studies of community dynamics and stability, many communities show not only synchronous dynamics between some species pairs x si,l (t) and x sj,l (t), but also compensatory dynamics between other pairs. Our Ceratium time series were almost entirely synchronous, so we could not study the importance of tail association for compensatory dynamics. Next research steps should include the study of tail association between compensatory species within a local community. Furthermore, Ceratium is only part of the phytoplankton community in UK seas. It may be advantageous for future work to use data characterizing an entire competitive community. For instance, the data of Hallett et al. (2014) constitute annual abundances of all species of plant in an area. In that dataset, some species pairs show synchronous and some show compensatory dynamics. Studying asymmetry of tail association for negatively correlated species density time series will require slightly modified methods. The only negative association between aphid or Ceratium time series that occurred in our system was not analyzed.
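Returning briefly to the skewness point made above: the following quick simulation (ours, not from the cited paper) illustrates it with a Clayton copula, whose standard form yields left-tail-associated uniform variables and whose survival form yields right-tail association; both pairs are positively associated overall.

```python
# Numerical illustration of the skewness claim; the Clayton conditional
# sampler below is standard, and theta > 0 gives lower (left) tail dependence.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
n, theta = 200_000, 2.0
u, w = rng.uniform(size=n), rng.uniform(size=n)
v = ((w ** (-theta / (theta + 1.0)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)

print(skew(u + v))              # left-tail associated pair: skew < 0 ("crashes")
print(skew((1 - u) + (1 - v)))  # right-tail associated pair: skew > 0 ("spikes")
```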
Negative associations between species time series and the environmental covariates we considered were handled statistically by considering the positive association between the species time series and a "reversed" covariate; this corresponds to a positive association with a reconceptualized covariate, for example, a "coldness" index. But that approach would make no sense for negatively associated time series of two aphid or Ceratium species: there is no canonical choice of which variable to reverse. Asymmetry of tail association could still be considered, however, for negatively associated variables, u, v, in an unsigned approach, via the index |cor 0,b (u, 1 − v) − cor 1−b,1 (u, 1 − v)|. Because |cor 0,b (u, 1 − v) − cor 1−b,1 (u, 1 − v)| = |cor 0,b (1 − u, v) − cor 1−b,1 (1 − u, v)|, no choice need be made on which variable to "reverse." A large value of this index indicates that tail association between u and v is asymmetric, though it does not provide information on whether association is stronger between the left tail of u and the right tail of v or between the right tail of u and the left tail of v.

Measures of tail association may also reveal useful information about freshwater plankton ecosystems and harmful algal blooms, in addition to information about marine harmful algal blooms (discussed above). Because blooms are extreme phenomena involving multiple species, monitoring the associations of phytoplankton species with each other and their associations with temperature and nutrient data in the extremes (this is tail association) could help us to better understand harmful blooms. Considering tail association may even produce improvements in statistics that have been developed to serve as early warning signals of impending major changes (so-called "tipping points") in plankton communities and the lakes they inhabit (Butitta, Carpenter, Loken, Pace, & Stanley, 2017; Carpenter et al., 2011), since some established early warning statistics make use of skewness of population distributions (Guttal & Jayaprakash, 2008). Tail association between phytoplankton species is related to skewness of the total phytoplankton biomass time series, as described in an earlier Discussion paragraph.

Although our aphid results were sufficient to demonstrate that tail association can be an important factor in the phenology of co-located species, it will be necessary in future work to apply tail association ideas to different datasets to assess whether these ideas can improve our understanding of the consequences of changing phenology for trophic phenological matching. The aphid species we studied have different host plants, so they do not directly interact. Shifts and fluctuations in the phenology of one species probably do not directly influence other species in our dataset. Future research should apply tail association to time series of phenologies of interacting species, such as the data on tree budburst dates, caterpillar abundance, and breeding phenology of great tits (Parus major) and blue tits (P. caeruleus) collected in Wytham Woods, Oxford, and other locations in Europe (e.g., Cole & Sheldon, 2017; Nilsson & Källander, 2006; Savill, Perrins, Kirby, & Fisher, 2011), or the extensive data collection from multiple trophic levels of Thackeray et al. (2010).

One final idea for potentially valuable future research has to do with combining our approach, based on tail associations, with other recent approaches which emphasize other statistical aspects of synchrony.
For instance, research has now shown that synchrony and compensatory dynamics in communities have "timescale structure"; that is, the dynamics of two or more species can be synchronous on some timescales of analysis and compensatory on others (Keitt & Fisher, 2006; Vasseur et al., 2015; Zhao et al., 2020). How timescale specificity and tail associations interact is unknown, but potentially interesting. Multivariate copula approaches (Czado, 2019; Joe, 2014) may be useful in this and other future extensions of the work we have begun here.

Our results extend the results of Ghosh, Sheppard, Holder, et al. (2020). Those authors argued that considering copulas and tail associations can provide insights across the field of ecology. But Ghosh, Sheppard, Holder, et al. (2020) did not consider co-located species, a context important for community ecology which we considered here.

ACKNOWLEDGMENTS

We thank the many contributors to the large datasets we used; D. Stevens and P. Verrier for data extraction; and Joel E. Cohen, Lauren Hallett, and Jonathan Walter for helpful suggestions.

CONFLICT OF INTEREST

The authors declare no conflict of interest.
2020-11-12T09:09:37.716Z
2020-11-10T00:00:00.000
{ "year": 2020, "sha1": "431156383a6439b27f09c5999ec404e8a6ef551f", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.6732", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3b479a5889e7654c19cb7b9c37cacc0694b650c3", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
203284829
pes2o/s2orc
v3-fos-license
Introverted and Extroverted Students' Learning Attitude in Chinese Speaking Class

In communicating, personality is divided into two major types: introvert and extrovert. Each type of personality has its own learning attitude. A speaking class requires each student to participate actively in class activities so that each student dares to speak. However, because learners' personalities differ, there will be differences in learning attitudes and learning outcomes. Therefore, in order to understand introverted and extroverted Chinese learners in the speaking class, we studied the Chinese speaking class of Universitas Sebelas Maret. The data were analyzed using MBTI theory. This paper introduces the situation of introverted and extroverted students in the Chinese speaking class, the problems of each personality, learning outcomes, and teaching suggestions. At present, there are relatively few research results on introverted and extroverted students in speaking classes. This study is expected to give some inspiration to future researchers, and to help teachers of spoken Chinese understand the introverted and extroverted personalities.

Keywords— introvert; extrovert; oral class; Chinese language; learning attitude

I. INTRODUCTION

The Myers-Briggs Type Indicator (MBTI) is an introspective self-report questionnaire with the purpose of indicating different psychological preferences in how people perceive the world around them and make decisions. The MBTI was constructed by Katharine Cook Briggs and her daughter, Isabel Briggs Myers. It is based on the conceptual theory proposed by Carl Jung, who had speculated that humans experience the world using four principal psychological functions (sensation, intuition, feeling, and thinking) and that one of these four functions is dominant in a person for most of the time. The MBTI was constructed for normal populations, and emphasizes the value of naturally occurring differences. "The underlying assumption of the MBTI is that we all have specific preferences in the way we construe our experiences. Therefore these preferences underlie our interests, needs, values, and motivation." Although the MBTI has been popular in the business sector, it exhibits significant scientific (psychometric) deficiencies, notably including: 1) poor validity (i.e., it does not measure what it purports to measure, and it does not have predictive power or items that can be generalized), 2) poor reliability (giving different results for the same person on different occasions), 3) measuring categories that are not independent (some dichotomous traits have been noted to correlate with each other), and 4) not being comprehensive (due to missing neuroticism). The four scales used in the MBTI have some correlation with four of the Big Five personality traits, which are a more commonly accepted framework.

In a communication process, there are two human personality types: extrovert and introvert. Those personalities might influence the way students get involved in certain classes. A speaking class demands that the students participate actively. The different traits of extroverted and introverted students affect how they absorb knowledge during learning, and their learning results. Thus, the ways to deliver the materials should be varied according to the personalities, so that both extroverted and introverted students are able to understand the materials well.
This research aims to describe how extroverted and introverted students took part in a Chinese speaking class. Furthermore, the author also explains how the materials were delivered in the classroom. In the future, it is hoped that this study can contribute as one of the standards in teaching Chinese speaking classes.

II. METHOD

This research employed a case study method. A case study is a research method with the purpose of intensively studying the background of current situations and the interactions of the objects involved. Guided by the theory above, the author directly observed what happened in the field, which included observing the students' personalities in the classroom and classifying them as extroverted or introverted. Then, the author observed their manners during the class. After getting sufficient data, the author used a descriptive research method. It is a method that intends to create accurate, factual, and systematic descriptions according to the facts. By giving clear explanations of the situations in the field, it is hoped that the readers will be able to understand well the ways extroverted and introverted students learn in the classroom.

A. Personalities and Classroom Condition

1) Personalities

Personality is the culmination of many inherited and learned factors, such as agendas, culture, human nature, and knowledge. These combinations create a personality that is relatively specific to each of us. In the communication process, there are two human personality types: extrovert and introvert. According to the MBTI (Myers-Briggs Type Indicator), Extraversion refers to extroverted people who mainly direct energy toward the outer world of people and things. Furthermore, they appear to be energized by interacting with other people. In the MBTI framework, the single letter 'E' is used to signify this type. On the other hand, Introversion is the opposite of Extraversion. People with an introverted personality tend to direct their energy toward the inner world of experiences and ideas. These people often pursue solitary activities. However, this does not mean that they do not like to be around people. It simply means that they tend to lose energy from social interactions. Interacting with other people can somehow tire them out. The single letter 'I' denotes Introversion.

Moreover, each personality has its own characteristics. The main issue for introverted students is that they do not have enough courage to express themselves. Unlike the introverted students, extroverted students usually make the classroom situation livelier, dominate the classroom, and follow the teacher's instructions better. In summary, it can be concluded that extroverted students usually excel more. Referring to the characteristics mentioned above, it is argued that extroverted students generally comprehend the materials better, which leads them to be better learners. Students' personalities might affect the way they learn, so their results might differ.
Wang Xuemei (2000) highlighted that extroverted and introverted students in English courses were slightly different in terms of their characters and comprehension skills. However, when it came to dictation activities, introverted students had a tendency to achieve higher scores. Meanwhile, extroverted students are more comfortable in speaking classes, as they can make the situation livelier.

2) Classroom Condition

Speaking courses demand that students train their speaking ability, encouraging them to be brave while speaking, especially in expressing themselves and uttering sentences. The majority of the students were actively involved, yet some of them were passive during the learning. These active and passive students might indicate that they belong to the extroverted and introverted groups, respectively. Due to the lively classroom environment, we believe that most of the students were extroverted. By analyzing interviews between the researcher and the participants, together with the students' gestures and reactions during the learning process, we concluded that there were nine introverted students and 20 extroverted students.

The result showed that the extroverted students could cooperate well with their teacher. They actively answered questions on their own initiative and raised their hands. They also liked to read texts and vocabulary out loud. On the contrary, during the learning process, the introverted students did not cooperate with the teacher as well as the extroverted students did. They tended to be passive, in the sense that they preferred to only listen to the teacher's explanation and read their handouts. When the teacher asked questions, the introverted students did not take the initiative to answer. They simply answered the questions right after the teacher called their names, as if they were waiting to be pointed out.

Because there were more extroverted than introverted students, the classroom situation was livelier, since the extroverted students were confident enough to talk. This is actually the kind of classroom condition which can support the students in learning a foreign language better. The extroverted students freely expressed their opinions in the classroom. This leads to a livelier classroom situation, which promotes a natural setting to practice the foreign language. As a result, the students do not merely rely on their handouts for practicing. Swain (1985) argued that students need opportunities to channel their foreign language practice naturally by using correct grammar.

3) Correlation between students' personalities and classroom situation

Students' characters might influence their learning attitude in the classroom. The teacher of the Chinese speaking class said that a lively classroom atmosphere is the best situation for practicing speaking. Therefore, teachers should encourage students to be active in any speaking activities.
Due to the necessity of actively speaking in the classroom, teachers will likely think that the extroverted students are better than the introverted ones. However, the teacher should not discriminate between the students; they still have to ask questions of all students. Whenever the teacher raised a question, the introverted students did not answer it directly. Even if they answered it, they would do so with little facial expression, as if they were not confident.

Each personality contributes differently to teaching and learning activities. In the Chinese speaking course, in order to enhance the students' speaking ability, the teacher usually raised a question like: "Has he/she ever visited a zoo?" Various responses came from the extroverted and introverted students. The introverted students did not simply answer the question, but pretended to read the textbook to avoid giving an answer. In contrast, the extroverted students spoke out loud and directly said, "Yes, he/she has." They immediately answered the questions without any doubt. The same thing happened repeatedly. Because the extroverted students participated actively, the learning process could be livelier.

Furthermore, the students' speaking test scores were also analyzed. The average test score was 81. The introverted students' scores were mostly below 81. Meanwhile, an extroverted student achieved the highest score, which was 91. This high score might be triggered by their activeness in taking part in speaking activities. Those habits led them to be confident when it came to a speaking test. As a consequence, they could achieve better scores.

The extroverted students prefer livelier learning activities and tend to dislike learning only from handouts and lectures. To gain their interest, teachers should use interesting teaching methods. By doing so, the researcher hopes that the students' speaking ability can improve. However, while the extroverted students might want the activities to be livelier, the introverts may feel that such activities do not suit them well. They might also feel uncomfortable during the learning process. The huge gap between their characteristics should inform teachers' considerations in designing activities, in order to achieve the best results for all students.

B. Extroverted and Introverted Students' Attitude inside the Classroom

This research was conducted in the Diploma Program of Chinese Language, Faculty of Humanities, Universitas Sebelas Maret. The participants were 29 students from the third semester, in the 2017/2018 academic year. Their Chinese language skill was intermediate, as they had already learned Chinese for more than one year. After analyzing the 29 students, it can be concluded that the ratio of extroverted to introverted students was roughly 2:1: there were nine introverted students, while the extroverted ones numbered 20. Because there were more extroverted than introverted students, the learning atmosphere was livelier. The analysis of these two personality types' learning attitudes was as follows:

1) Outside Classroom Activities

Baranov (a Soviet scientist) argued that outside classroom activities are all teaching and learning activities conducted outside the classroom. Extroverted students usually adore this kind of activity.
In order to analyze the students' traits in these activities, two projects were given: writing a conversation, and making a video in groups. The first part examined the students' writing task. It was found that the introverted students had better writing skills. They used good diction and grammar. Furthermore, the task was written neatly. It seemed that they took the task more seriously. However, the opposite was true for the majority of the extroverted students: their tasks were not finished as well.

The second part checked the video project. It was crystal clear that the introverted students did not perform well in the videos, in terms of their confidence, inaudible voices, and pronunciation. On the other hand, the extroverted students performed better. They could play their roles very well, with audible voices, natural body movements, and rich facial expressions. Based on the videos, it can be assumed that the introverted students were not as interested in outside classroom activities as the extroverted students were. For foreign language learners, this sort of activity can actually increase their interest in expressing themselves. To strengthen this argument, we conducted another survey. The survey revealed that, out of nine introverted students, six disliked the video project. Meanwhile, all 20 extroverted students enjoyed making the video.

2) Answering Questions

The teacher frequently asked the students questions in an attempt to test their understanding of the material. Every time the teacher raised a question, the students' enthusiasm varied. Extroverted students would be active, while students with an introverted personality were only willing to answer after the teacher called out their names. The teaching method applied was that the teacher used lots of questions, which could help the students dig deeper into their knowledge, invite them to be active, and stimulate their critical thinking. Through this method, students could improve their problem-solving skills by actively answering questions.

Extroverted students were passionate about working with their classmates and the teacher. When there was a question such as "Has he/she ever visited a zoo?", they readily answered by saying, "Yes, he/she has." Because the answer was incomplete, the teacher asked them to repeat it as a complete sentence: "He/she has visited a zoo." Despite having to revise their mistake, they still enjoyed the learning, laughing at their mistake and then answering the question together in loud voices. Unlike the energetic extroverted students, introverted students used low voices to answer the questions.

3) Interests

Extroverted students have broad interests, such as swimming, dancing, and singing. Students with this characteristic usually get bored easily if they consider the lesson unattractive. They usually cannot cope with only reading books and merely listening to the teacher. In contrast, introverted students usually do not have as many interests as extroverted ones. They enjoy their own world. On the one hand, when the teacher employed lots of lecturing activities, the extroverted students found it less comfortable to engage in the learning process. This could be seen from the students' faces and gestures. On the other hand, the introverted students were more comfortable and less stressed.
The teacher also prepared other activities, such as dancing and singing to a song taken from the textbook. Based on the researcher's observation, the extroverted students tended to enjoy them. However, these activities did not work well for the introverted students, as they felt less comfortable. In conclusion, it can be inferred that introverted students tended to enjoy lecturing activities, whereas extroverted students demanded more varied learning activities which could accommodate their interests well.

4) Self-Confidence

Making mistakes in learning a language is unavoidable. One sure way to improve Chinese proficiency is through oral practice using the target language. The teaching method applied in the context of this study was questioning. This method requires students' understanding of the material that has been taught and learned. For this reason, students should be encouraged to answer the questions delivered by the teacher. Students who put more effort into learning should not worry about making mistakes in expressing their ideas, even if they do not answer correctly.

Extroverted students had positive learning manners. Mostly, when the teacher asked questions, they actively engaged in answering them. Furthermore, when the teacher asked the students to share certain opinions and/or experiences, the extroverted students showed confidence and willingness to try. Despite the fact that they had problems speaking fluently and did not have advanced vocabulary and grammatical mastery, they made efforts to practice. On top of that, it is the teacher's role to provide opportunities for practicing the target language so that the students can have the self-esteem to answer the questions delivered during the teaching and learning process.

Meanwhile, introverted students waited for the teacher's instructions and answered questions only when he or she directly called their names. In answering the questions, they tended to speak in a quiet, gentle voice and made poor eye contact. The current study revealed that these students experienced fear of the teacher and fear of expressing ideas or delivering answers. Yet, there were several students with introverted personalities who performed well. Thus, teachers should encourage them for their improvement in learning.

5) Initiative Aspect

Students with extroverted preferences were actively engaged and showed high self-esteem and willingness to try during their learning process. They were always willing to answer questions without being called on by the teacher. Their positive attitudes created an encouraging environment. Conversely, introverted students did not seem to show initiative in answering questions actively, since they always waited for the teacher to call their names.

With regard to the speaking course, students are required to take every opportunity to practice the target language, particularly Chinese. Therefore, the teacher gives tasks and manages the classroom so that the students have opportunities to speak in the classroom activities. The teacher asked, "Do you often go to Paragon Square?" Most of the students answered, "Yes, I do!", while some said, "No, I don't." Then the teacher gave a follow-up question to a student: "Why do you often go there? Are the items sold there cheap?" By giving such questions that stimulated their memory, the classroom activities went well. In this situation, extroverted students voluntarily shared their experiences of going to Paragon.
On the contrary, the introverted ones showed low initiative and interest in participating in the classroom activities.

C. How to deal with extroverted and introverted students in Chinese class

Teachers always hope for a fun teaching and learning process in a speaking course. In order to improve students' speaking skills, students need to have enough opportunities to practice Chinese orally. For those with extroverted characters, speaking is a course that suits their personality, as it requires high self-esteem to try and to express ideas actively. These students do not have any difficulties participating in classroom activities. They are willing to be active in front of their teacher and peers. In contrast, introverted students are uncomfortable with such an active course, since they feel anxious when the teacher asks them to speak. For this reason, teachers' methods for dealing with extroverted and introverted students become an essential factor affecting the learning and teaching process. Thus, teachers should apply effective teaching approaches, methods, and techniques to make both introverts and extroverts thrive in the classroom.

1) Selecting effective learning activities

Concerning the students' improvement in Chinese skills, teachers could implement more varied learning activities. There are plenty of classroom activities, such as collecting data on a case and retelling the details, visiting and promoting tourist attractions, creating news report videos, and/or other outside class assignments. With these activities, the researcher expected that teachers could encourage students and enliven the teaching and learning process. Judging from the students' work in completing their assignments, it could be concluded that they were well prepared and did the assignments thoroughly. Further, it appeared that the students were highly interested in these kinds of learning activities.

Aiming to understand the students' opinions of the activities mentioned above in more detail, the researcher conducted interviews. The data obtained from the interview sessions showed that most of the students had a high interest in those outside class activities. In addition, having more variety in outdoor activities is expected to provide students with more opportunities to showcase their practical use of the language. As students usually write Chinese characters, sentences, and dialogues in class, the interview participants conveyed that doing those extracurricular activities was a favorite new learning strategy for them.

Regarding activities outside the class, introverted and extroverted students had different points of view. The extroverts did not have any problems carrying out the given assignments. However, the students with introverted personalities did not find it exciting to work in groups. Instead, they enjoyed working individually. Thus, in order to observe the introverted students' performance, students were required to film themselves individually to complete the news report assignment. The observation appeared to show that they performed better in these videos. This occurred because, when the introverts worked in groups, they tended to feel uncomfortable and showed reluctance to express themselves. Learning a foreign language in one's native country may result in a lack of language exposure and an unsupportive environment.
Owing to that, cooperative teaching and learning are needed within this context to create a language-rich environment. In terms of language knowledge, activities outside the classroom could reinforce students' creativity. By doing the activities, they could create a new, positive environment for their language learning. It is possible that the students can be encouraged to practice and develop the target language skills outside the classroom naturally. It could be said that the extracurricular activities had a positive impact on the students' language learning process. With reference to the observation of the outdoor activities, it was concluded that this sort of activity was effective in stimulating and supporting the students' learning interest. Yet, it is necessary for teachers to take the students' personalities into special consideration when selecting the activities.

2) Applying a questioning approach

Extroverted students are likely to have high enthusiasm and to dominate the class. They are very active and responsive to the teacher's instructions and questions. Through teachers' extensive questioning, it was expected that the students would show positive attitudes by participating in the activity and practicing their communication skills. Students with an extroverted personality seemed to stand out in the class because of their ability to express themselves and create a positive atmosphere in the classroom. Having students with such a personality helped the teacher to encourage all students to learn with a higher level of interest. In contrast, introverted students were considered passive students who easily became uncomfortable when asked questions by the teacher. They were likely to speak in a very gentle voice when answering questions.

Teachers apply a question-and-answer method in their teaching and learning process. After reading a text, students were required to answer several questions delivered by the teacher. It was expected that, if many questions arose, students would answer them actively and could develop their speaking skills. Because of this, a student with an extroverted personality made a positive statement suggesting that the teacher maintain this teaching method in the classroom. Concerning the introverted students' low performance, teachers should modify their way of giving questions, for example by calling students' names, asking students to give written answers, or holding group discussions. After implementing different methods of posing questions, the teacher hoped to make the introverted students more relaxed and comfortable in participating in the learning activities. Forming group discussions by taking the students' personalities into account would encourage students to participate in conversations more actively.

3) Using various media to enhance teaching and learning

To increase students' interest, the teacher can utilize various media to enhance teaching and learning activities. Extroverted students have many interests, which cannot be covered by relying only on a textbook. Teaching by using technology might be a solution. However, introverted students have stable interests; they can cope with a textbook as a learning source. It is therefore necessary for a teacher to use various media in teaching to capture more of the students' interest.
In order to analyze the students' attitudes toward a learning process using various media, the researcher gave some treatments: 1) in the first meeting, the teacher used demonstration while teaching; 2) in the second meeting, the teacher employed other teaching materials; 3) in the third meeting, videos, songs, and PowerPoint slides were also used. From the students' facial expressions, the researcher concluded that a learning process without varied media could not gain the students' interest. Even though the teacher delivered the lessons well, the students were not comfortable, due to the monotonous activities. When the teacher used videos, all the students watched them, even dancing and singing together. This made the classroom more active. Judging by the students' responses, these teaching methods seemed to be interesting for extroverted students. On the contrary, students with an introverted personality could not really enjoy those activities. In the singing activity, only extroverted students sang the song loudly; in the dancing activity, the introverts did not really follow along.

Besides analyzing the students' attitudes and expressions, we also interviewed the participants. Both extroverted and introverted students enjoyed learning with technology, although the introverted students sometimes worried about the dancing and singing activities. The researcher recommends doing the dancing and singing activities at the same time as showing the videos. The technology could create a more relaxed situation. Furthermore, the researcher believes that it is a fascinating way to learn the Chinese language.

4) Appreciating students' performance

The Chinese speaking class is inseparable from tasks and class activities, such as questions raised in the classroom, homework, asking about events or things, etc. The researcher wishes that those tasks can improve the students' speaking ability, because they give the students more space to train their speaking. The main characteristics of extroverted students are that they are brave enough to try something new and to answer the teacher's questions. Although they have limitations in terms of speaking fluency, they are still confident enough to practice speaking. This is absolutely a good attitude. Thus, a teacher should appreciate the students' results. Even if they do not answer questions correctly and precisely, it is better to encourage them by praising their performance. As for the introverts, although they are not actively engaged in the activities, a teacher should motivate them to be more confident and appreciate their courage. The researcher also hopes that compliments can boost the students' confidence to speak more during the speaking classes; for example, they may become more confident in answering the teacher's questions. Giving praise not only contributes to a more comfortable learning atmosphere, but also improves the students' spoken Chinese.

IV. CONCLUSION

The present study aims to investigate and compare the attitudes of extroverted and introverted students in a Chinese speaking course. The major findings are that students with an extroverted personality actively participated in the course activities. The second finding to emerge from this study is that students with an introverted personality showed a constant inhibition problem in communicating. The speaking course in this department requires students to be able to participate actively in every classroom activity, to be brave in making sentences, and to be confident, so that they can enhance and develop their competence in speaking Chinese.
The researcher analyzed the condition of students with two different personalities who enrolled in the Chinese speaking course, and concluded that having outdoor activities was the main factor affecting the students' interest, participation, and language performance in answering questions. Students considered extroverts were actively involved in the course activities and boldly expressed themselves. In order to achieve effective language teaching for introverts and extroverts, this study suggests some pedagogical implications. Teachers should provide classroom activities which engage both personality types, and use various methods of posing questions. It is also important for teachers to vary their teaching media to enhance the teaching and learning process. Additionally, teachers should always appreciate students' work and efforts. Considering the language teaching and learning aspects, this study hopes to offer insights for language teachers and students in increasing learning interest and students' classroom participation. Thus, the whole class will perform better in the speaking course, improve their ability in oral communication, and gain academic achievement as a result.
2019-09-17T01:08:31.375Z
2019-08-01T00:00:00.000
{ "year": 2019, "sha1": "dfe71c347615bff4404545b7b04dff6a3572ba06", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/prasasti-19.2019.16", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e73f160f2669fa7f9d475d139fb3245c1299053b", "s2fieldsofstudy": [ "Education", "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
239029103
pes2o/s2orc
v3-fos-license
AutoCovNet: Unsupervised feature learning using autoencoder and feature merging for detection of COVID-19 from chest X-ray images

With the onset of the COVID-19 pandemic, automated diagnosis has become one of the most trending topics of research for faster mass screening. Deep learning-based approaches have been established as the most promising methods in this regard. However, the limited amount of labeled data is the main bottleneck for data-hungry deep learning methods. In this paper, a two-stage deep CNN based scheme is proposed to detect COVID-19 from chest X-ray images, achieving optimum performance with limited training images. In the first stage, an encoder-decoder based autoencoder network is proposed and trained on chest X-ray images in an unsupervised manner, so that the network learns to reconstruct the X-ray images. An encoder-merging network is proposed for the second stage that consists of different layers of the encoder model followed by a merging network. Here the encoder model is initialized with the weights learned in the first stage, and the outputs from different layers of the encoder model are used effectively by being connected to a proposed merging network. An intelligent feature merging scheme is introduced in the proposed merging network. Finally, the encoder-merging network is trained for feature extraction of the X-ray images in a supervised manner, and the resulting features are used in the classification layers of the proposed architecture. Considering the final classification task, an EfficientNet-B4 network is utilized in both stages. End-to-end training is performed for datasets containing the classes COVID-19, Normal, Bacterial Pneumonia, and Viral Pneumonia. The proposed method offers very satisfactory performance compared to state-of-the-art methods and achieves an accuracy of 90.13% on the 4-class, 96.45% on a 3-class, and 99.39% on 2-class classification.

Introduction

The novel Coronavirus disease 2019, also known as COVID-19, first appeared in Wuhan, Hubei, China in December 2019 [1], and from then on it turned into a global pandemic affecting millions of lives worldwide. COVID-19 is a novel severe acute respiratory syndrome coronavirus which mostly affects the lungs in the human body [2]. Researchers have found ground-glass opacities, consolidation, and lower zone predominance [3] in chest X-rays of COVID-19 patients. Because of these features in the lung scans, it has been shown that chest X-rays can be used to detect the virus [4] in patients. Even though CT images can also be used for the task of detecting COVID-19, in comparison to the CT imaging technique, the X-ray imaging technique is less expensive and more widely available [10]. X-ray imaging can also be used for mass testing at a faster rate, and that is where machine learning technologies can truly contribute. Moreover, X-ray imaging offers ease of interpretation for various chest related problems. As such, X-ray is used instead of CT images in this study.

One of the main challenges currently with detecting COVID-19 from chest X-ray images using deep learning is the relatively small size of the labeled data available. In the deep learning literature, it has been observed that in these cases unsupervised learning can first be used to learn representations, which later makes it possible for the supervised learning to converge and generalize even on a small labeled dataset.
It was shown by [11] that using a deep convolutional autoencoder for unsupervised image feature learning made it possible to detect lung nodules with only a small amount of labeled data. It was proposed by [12] that using a multiscale representation learning method via sparse autoencoder networks to capture the intrinsic scales in medical images leads to better performance in the classification task. In pathology detection, a conditional variational autoencoder was used by [13] to learn the reconstruction and encoding distribution of healthy images, and the encoder part used these learned features later on for the classification task. Autoencoder-based reconstruction techniques are already being used on chest CT images for COVID-19 detection. Researchers have successfully used U-Net-based architectures [14] to segment multiple COVID-19 infection regions in chest CT images. Some studies [15,16] have shown the use of encoder networks in their systems to classify COVID-19 infection from CT images. The method proposed in [17] utilizes contrastive domain invariance enhancement techniques on the output of the feature extractor to further boost classification performance and to make the system more generalized for detecting COVID-19 in CT images.

Given the success of deep learning-based methods in chest X-ray image-related tasks, it is only natural to use them for classifying COVID-19 from chest X-ray images, and a lot of research is being done in this field. COVID-Net [18], a deep convolutional neural network trained for classifying COVID-19 in chest X-ray images on a dataset containing 3 classes (normal, pneumonia, and COVID), achieved a 93.3% accuracy across the classes. DarkCovidNet, another CNN model for this task developed by [19], was trained on both 3 classes and 2 classes (COVID and Non-COVID) and attained accuracies of 87.02% and 98.08%, respectively. Another CNN model based on the Xception [20] architecture, named CoroNet [21], was trained on 4 classes (normal, COVID, bacterial pneumonia and viral pneumonia), 3 classes and 2 classes, and its accuracy for each of these cases was 89.6%, 95% and 99%. A method of segmenting lungs from a chest X-ray image and using random patches from that segmented image to train a pre-trained ResNet-18 [22] to classify COVID-19 was proposed by [23]. Using a small dataset of images from 50 normal and 50 COVID-19 patients, [24] trained InceptionV3, ResNet-50 and Inception-ResnetV2 models and got accuracies of 97%, 98% and 87%, respectively, for 2 classes. Obtaining very satisfactory COVID-19 image detection performance from the relatively small number of available training images is still a difficult, open-ended challenge.

In this study, in order to overcome the problem of obtaining a very accurate trained model from the given small dataset of patients' chest X-ray images, a two-stage training scheme is developed for detecting COVID-19. First, an encoder-decoder based autoencoder network is designed and trained in an unsupervised manner using the X-ray images. Here, the autoencoder network learns to reconstruct the given image, and due to the use of an overall optimization scheme, it is expected that the encoder part can preserve detailed information of the image in its different levels and learn relevant features for our dataset. Then the different levels of the encoder part of this autoencoder are connected to the proposed merging network to form an encoder-merging network, where the encoder network part is initialized with the weights learned in the first stage.
The proposed merging network is developed with unique Merging-blocks (M-blocks) that receive inputs from two different levels of the encoder and merge them in an intelligent way. These M-blocks are arranged in a tree pattern. The encoder-merging network is then trained in a supervised manner for feature extraction from the X-ray images. The features obtained at the end of this encoder-merging network are passed through densely connected classification layers, and these layers make the final prediction. The unique methodology of the proposed method is presented in Fig. 1. The proposed two-stage training scheme differs from the traditional approach: instead of initializing our encoder model with random weights or with transfer learning from an unrelated dataset, the encoder model first learns about the features of the dataset in the unsupervised training stage and is then initialized with these learned weights in the second stage. The unsupervised learning using the autoencoder and the initialization of our encoder-merging classification network with the learned weights enable the model to converge and generalize on a small dataset of labeled chest X-ray images containing the classes: Normal, COVID-19, Bacterial Pneumonia, Viral Pneumonia. End-to-end training of the classification network is performed on a balanced dataset of these classes. The addition of this unsupervised learning at the beginning and the use of our uniquely designed encoder-merging classification blocks result in improved performance across all the traditional metrics.

Fig. 1 - The novel approach of the proposed method is presented. In the traditional approach, images are passed through a randomly initialized neural network model and the model learns to classify the images; in the proposed method there are two phases of training. In the first phase an autoencoder model learns to reconstruct the input X-ray images. In the second phase the encoder portion of the autoencoder is initialized with the weights learned in phase one and connected to a proposed merging block network, and this combined model is trained for the classification task.

Methodology

In the proposed method, for the purpose of COVID-19 image classification, both unsupervised and supervised deep neural network architectures are utilized in an effective way. The major blocks involved in the proposed scheme are shown in Fig. 2. First, a deep convolutional autoencoder network is designed to perform unsupervised feature extraction from a given chest X-ray image. Next, a supervised deep CNN architecture is designed that utilizes the features extracted in the first phase; supervised learning is then performed on this classification network. The classification network is made up of uniquely designed smaller blocks that are arranged in a tree-style architecture. Both networks are trained on chest X-ray images. One major challenge in this work is to handle the classification task with a limited number of training data, especially for the COVID-19 cases. Hence, in order to obtain a better trained model, an efficient feature extraction stage is incorporated prior to the network training stage. With a view to extracting the spatial characteristics of the input image, we propose to utilize an unsupervised feature extraction stage based on the encoder-decoder structure.
The motivation behind introducing such an additional encoder-decoder step prior to the conventional classification stage is its capability of preserving the detailed information of the given image at its different levels. Since in an encoder-decoder structure a given image needs to be reconstructed at the output stage by using an overall optimization scheme, it is expected that at the encoder stage the spatial characteristics of the input image are precisely captured. Hence, if features are extracted from various levels of the encoder, the extracted features can precisely represent a particular class with better inter-class separation. In the proposed scheme, in order to effectively use the extracted features from various levels of the autoencoder, an efficient merging scheme with unique Merging-blocks (M-blocks) is also developed. Use of these merged features in the classification network helps in obtaining better training even with a small dataset of labeled chest X-ray images. The basic steps of this methodology are shown in Fig. 2. Fig. 2 represents the major blocks used in the proposed method, where the first block corresponds to the proposed unsupervised feature extraction stage. In this stage, an architecture with an EfficientNet-B4 model backbone is used to design an encoder-decoder model that performs optimization for each given input image and produces the decoded image. In this process, the encoder extracts different kinds of information from various perspectives, which are then encoded. These different levels of the encoder, containing different kinds of information, are treated as useful features to be used in the next stage. In Fig. 2, the next stage represents the feature merging block, where features taken from different levels of the encoder are efficiently merged using the proposed merging blocks. As a result, features collected from different levels of the encoder are merged into a single feature vector, which is then finally used in the classification layer, as shown in Fig. 2. These different stages of training are described in the following sections.

Preprocessing

Prior to using the X-ray images in the deep neural models, a two-step pre-processing is performed on the images: resizing and normalizing. The input images are resized to 256×256 square images containing three channels. Then a min-max normalization is applied to the resized input images. This makes the training process faster and helps the model converge more easily.

2.2. Proposed unsupervised feature learning architecture

The first phase of the system is an autoencoder that is trained on the unlabeled chest X-ray images and learns to reconstruct the input images. Autoencoder algorithms are able to use unsupervised learning to automatically learn features from unlabeled data [25], and they are especially useful in the medical image analysis domain where there is a scarcity of labeled data [11]. An autoencoder consists of two parts: an encoder and a decoder. The encoder learns a representation of a set of data in order to efficiently compress and encode it, and the decoder learns to take that encoded data and reconstruct a representation that is as close to the original input data as possible. While selecting a model for this, it is important to note that at the first stage it will be used for encoder-decoder based feature extraction, and in the last stage the same architecture will serve as the fundamental classification network.
One of the goals is to select the classification architecture in such a way that two separate architectures are not needed for these two different stages, which would unnecessarily increase the computational burden. Hence, the target was to select one classification network that could serve both purposes. In deep convolutional autoencoders, the encoder part is made by stacking convolution layers followed by pooling layers. As a result, the resolution of the input image is gradually decreased while the number of channels is increased. This property is similar to the conventional CNN architectures used for classification tasks. Because of this similarity, a conventional classification architecture can be used to implement the encoder block. Among different types of deep convolutional neural networks, the EfficientNet proposed in [26] carefully balances network depth, width, and resolution to obtain better classification performance. In the EfficientNet architecture, a compound scaling scheme is proposed that uniformly scales all dimensions of depth/width/resolution. Such compound scaling offers the advantage of focusing on more relevant regions with greater object detail and can enhance classification performance by a significant margin in comparison to single-dimension scaling methods [26]. The compound scaling is defined as:

depth: d = a^u,  width: w = b^u,  resolution: r = c^u,  subject to a · b² · c² ≈ 2,  a ≥ 1, b ≥ 1, c ≥ 1.

Here, u is a user-specified compound coefficient that controls how many more resources are available for model scaling, and a, b, c are constants that can be determined by a small grid search [26]. Depending on different scaling operations, there exist various versions of the EfficientNet model, namely B0 to B7. In the proposed study, various EfficientNet models, as well as other available deep convolutional neural network architectures, were tested, and the EfficientNet-B4 architecture was finally chosen because of its consistently better performance considering the size of the input images and the dataset. In the results and discussion section, it is studied how the performance would have varied if another state-of-the-art architecture, such as InceptionV3, ResNet-50, or VGG11, were used in the proposed scheme. At the first stage, the EfficientNet-B4 model is used as the encoder of the encoder-decoder block to obtain optimum weights from the training images. Once the weights of this encoder-decoder block are optimized, they are used as initial weights at the later stage where the classification task is performed, and the same encoder network is then trained in a supervised classification manner. For this reason, using a high-accuracy model like EfficientNet-B4 reduces computational complexity, since it is used both as the encoder block and later as the classification block; the EfficientNet-B4 network balances both tasks without compromising performance accuracy. The EfficientNet-B4 model was initialized using pre-trained ImageNet [27] weights, because the dataset used here is too small to be used without them. The fully connected layers at the bottom of the network were omitted, and the output was taken from the last convolutional block so that it can be used as the encoded data for the autoencoder. For an input image of size (256, 256, 3), the encoder network produces encoded data of shape (8, 8, 1792).
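To make the shape bookkeeping above concrete, the following is a minimal Keras sketch, not the authors' released code, of instantiating such an encoder; the builder name is ours, while the input shape, the ImageNet initialization, the removed top layers, and the (8, 8, 1792) encoding follow the description above.

```python
import tensorflow as tf

def build_encoder():
    # EfficientNet-B4 backbone: ImageNet weights, classification head removed,
    # so the last convolutional block's output serves as the encoding.
    base = tf.keras.applications.EfficientNetB4(
        include_top=False,
        weights="imagenet",
        input_shape=(256, 256, 3),
    )
    return tf.keras.Model(base.input, base.output, name="encoder")

encoder = build_encoder()
print(encoder.output_shape)  # (None, 8, 8, 1792) for a (256, 256, 3) input
```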
The next part of the autoencoder is the decoder. A decoder module was designed to reconstruct the original input image of size (256, 256, 3) from the encoded data of shape (8, 8, 1792). This is the opposite operation of the encoder model. A conventional CNN architecture does not perform this kind of operation, and as a result a decoder is designed in the proposed scheme to reconstruct the input image from the encoded data produced by the EfficientNet-B4 encoder model. Further analytical details of the decoder can be found in [28]. The decoder model consists of 5 blocks, where each block starts with a transposed convolutional layer that upsamples the image by a factor of 2, followed by a convolutional layer with the same number of filters as the transposed convolutional layer. The detailed architecture is presented in Fig. 3, and all the layers used in the decoder model are presented in Table 1 with their corresponding output shapes. At the end of the decoder model, there is a convolutional layer with the same number of channels as the input image. This layer has a SELU activation function. SELU is the scaled exponential linear unit activation function, defined as:

SELU(x) = scale · x, if x > 0;  SELU(x) = scale · alpha · (eˣ − 1), if x ≤ 0.

Here, scale and alpha are predefined constants with the values alpha = 1.67326324 and scale = 1.05070098 [29]. Even though the reconstructed X-ray image from this network is not directly used, it is an important by-product of the proposed architecture. It would not be possible to train the encoder network on a small dataset to learn relevant features without this reconstructed X-ray image. Also, the quality of the reconstructed X-ray indicates how well the autoencoder network is converging. If the autoencoder network is trained properly, the encoder will preserve detailed information of the images in its different layers, which can later be used for the classification task. As the autoencoder model learns to reconstruct the input image, it does not require a label; the entire pixel space of the input image works as the label. So even with a small amount of data, and also with unlabeled data from other chest X-ray images, this network can be trained and converged. In the process of generating encoded data that is useful for reconstruction, the encoder model manages to preserve information from various perspectives by learning unique features of the images in the dataset, and these features can then be used for classification purposes.
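Closing out the first training stage, a hedged sketch of the decoder and the unsupervised reconstruction training might look as follows. The five 2x-upsampling blocks, the SELU-activated reconstruction layer, and the MSE/Adam configuration follow the description above and the experimental setup reported later; the per-block filter counts are our assumptions (the paper's exact values are in its Table 1), and `encoder` refers to the sketch in the previous subsection.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_decoder(filters=(512, 256, 128, 64, 32)):  # filter counts assumed
    # Five upsampling blocks: each transposed convolution doubles the spatial
    # size (8 -> 16 -> 32 -> 64 -> 128 -> 256), followed by a convolution with
    # the same number of filters, as described for the decoder.
    inp = layers.Input(shape=(8, 8, 1792))
    x = inp
    for f in filters:
        x = layers.Conv2DTranspose(f, 3, strides=2, padding="same",
                                   activation="relu")(x)
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    # Final reconstruction layer: 3 channels with SELU activation.
    out = layers.Conv2D(3, 3, padding="same", activation="selu")(x)
    return tf.keras.Model(inp, out, name="decoder")

# Stage 1: the autoencoder learns to reconstruct unlabeled X-ray images.
autoencoder = tf.keras.Sequential([encoder, build_decoder()])
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_unlabeled, x_unlabeled, epochs=50, batch_size=32)
```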
Proposed classification architecture

The next part of the study was to develop a convolutional neural network architecture for the supervised learning scheme of detecting COVID-19 patients from chest X-ray images. At this stage, a classification network is required, where the problems to be dealt with are 2-class, 3-class, or 4-class. For this task, as mentioned before, outputs from the different levels of the encoder part were extracted from our autoencoder with the weights learned in the previous step. The features from this encoder network were then passed through the classification network. The different parts of the classification network are specified below.

Feature extraction stage

The encoder was the EfficientNet-B4 model that had been trained in the previous step. Since this model had already learned the relevant features of the dataset during the unsupervised stage, the outputs from its different levels serve as rich feature maps for the merging network described next.

Merging blocks (M-blocks)

For our classification network, information from different layers of the encoder is taken and merged into a single representation so that the classification task can be performed. One major task here is to reduce these five levels of information to one, and to serve this purpose a unique block called the M-block is developed, which merges features from two layers of the encoder in an intelligent way and later merges features from other M-blocks as well. The block takes two 3D tensors as inputs, with the first one having double the height and width of the second. The first input is passed through a pooling layer that uses a filter size of (2,2) and averages the values in each window. The output tensor from this step has the same height and width as the second input, and these two tensors are then concatenated along the channel axis. A convolutional operation is then performed on this concatenated tensor with a window size of (1,1) and with the number of filters equal to the second input's channel number. As a result, the output shape of each M-block is the same as the shape of the second input, but it contains features from both inputs. The structure of this block is presented in detail in the corresponding figure. The final merged tensor has an output shape of (8, 8, 1792). A global average pooling is then performed on this tensor, producing a feature vector of size 1792. These features are passed through two fully connected dense layers, the first with 1024 neurons and the second with 512 neurons. Finally, a softmax activation produces the prediction and completes the classification.

Result and discussion

In this section, the performance of the proposed method is demonstrated considering different classification cases and various performance measure criteria. Results obtained by the proposed method are compared with those obtained by some state-of-the-art methods. In what follows, first the dataset and then the results with detailed analysis and comments are presented.

Dataset

Due to its very recent spread, there is a huge scarcity of publicly available chest X-ray images corresponding to COVID-19 patients. However, datasets for various other types of pneumonia and for normal cases are available. Hence, in this research, a combination of two publicly available datasets is used to analyze the performance of the proposed method. Pneumonia (both viral and bacterial) and normal chest X-ray images were collected from [30], an open-source dataset released on the Kaggle platform. The dataset contains 5863 chest X-ray images, with 4273 pneumonia images and 1590 normal images. Out of the 4273 pneumonia class images, there were 2530 images of bacterial pneumonia and 1345 images of viral pneumonia. The COVID-19 chest X-ray images were collected from Dr. Cohen's [31] open-source GitHub repository. The repository contains an open database of COVID-19 cases with chest X-ray or CT images and is being updated regularly. The chest X-ray images are largely compiled from websites such as Radiopaedia.org, the Italian Society of Medical and Interventional Radiology, and Figure1.com [31]. At the time this research was conducted, the repository contained 408 COVID-19 chest X-ray images. As the dataset was unbalanced, the classes containing a higher number of images were downsampled to make the dataset balanced. Thus, the final dataset consists of 408 bacterial pneumonia, 408 viral pneumonia, 408 normal, and 408 COVID-19 chest X-ray images.
After that, these images were randomly distributed into train and test sub-folders, and five different folds of the dataset were generated for cross-validation. The training set consists of 1306 images of the four different classes and the test set contains 326 images, also distributed over the four classes. The train and test set images were completely independent. All the images were resized to 256 × 256 pixels with a resolution of 96 dpi. Two different views of the chest X-ray images in this repository are considered in this study: (1) standard frontal PA (posteroanterior) views and (2) standard AP (anteroposterior) views; AP Supine (anteroposterior lying down) views are avoided due to their confounding image artifacts. The X-ray images of a patient acquired from different views are found to be significantly different. The information available in the repository was the total number of people from whom the X-ray images were collected. At the time we conducted the research, the X-ray images originated from 408 people from various hospitals across 26 different countries. Unfortunately, the person-level labels were not available, and thus we could not split the images at the patient level. It is to be noted that while some X-rays of different views might originate from a single patient, their number is not significant enough to impose heavy data leakage and cause an overfitting problem. In order to minimize data leakage effects and address over-fitting, each block of the merging network uses a Batch Normalization layer, which has a regularization effect, and even in the feature extractor there is a batch normalization layer in each block. Hence, in this study, the problem of over-fitting is addressed with the unique learning method proposed in the architecture and the use of the regularization layers. It is expected that the proposed method will also be suitable for larger datasets where patient-level splitting can be addressed; this is left as future work, pending the availability of such larger datasets. In Fig. 6 some samples of chest X-ray images from the prepared dataset are shown.

Experimental setup

In the first phase of training using the autoencoder, the encoder part was initialized with weights trained on the ImageNet dataset and the decoder part was randomly initialized. This network was optimized on the mean square error loss function with the Adam optimization algorithm. It converged within 50 epochs of training with a batch size of 32. In the second phase, the encoder was initialized with the weights learned from the first phase and the M-blocks were randomly initialized. The hyperparameter values used while training the classification model were: learning rate = 0.0001, epochs = 40, batch size = 32. To avoid getting stuck at saddle points in the loss plane, a learning rate reduction technique is used in both training phases. The proposed models were implemented with the Keras library using the TensorFlow 2.0 backend. The entire training and testing process was performed on the Google Colaboratory server.
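A compact sketch of the two training phases as configured above is given below; `autoencoder` refers to the earlier sketch, `classifier` is a placeholder for the encoder-merging network (not sketched here), and the loss choice and the ReduceLROnPlateau settings are our assumptions, since the text only states that a learning rate reduction technique was used.

```python
import tensorflow as tf

# Learning rate reduction, used in both phases; factor/patience are assumed.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=3)

# Phase 1: unsupervised reconstruction (MSE loss, Adam, 50 epochs).
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=50, batch_size=32,
#                 callbacks=[reduce_lr])

# Phase 2: supervised classification (learning rate 0.0001, 40 epochs);
# the cross-entropy loss is an assumption.
classifier.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# classifier.fit(x_train, y_train, epochs=40, batch_size=32,
#                callbacks=[reduce_lr])
```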
Performance evaluation

The proposed model is trained and tested on 5-fold cross-validation data containing 3 classes. For each test set, Precision, Sensitivity, F1-score, and Accuracy are calculated as the performance metrics, as can be seen in Table 2. From Table 2 it can be seen that the model achieved its highest accuracy of 97.97% on fold 1, and the average accuracy over all 5 folds is 96.41%. The model's accuracy was in the range of 95.47% to 97.97% across all folds of data; even the lowest accuracy of 95.47% is still quite high. The same performance metrics are also generated on a class-wise basis for all of the folds. The class-wise result for fold 1 can be seen in Table 3. As evident from Table 3, the model performed exceptionally well on the COVID-19 and Pneumonia classes, achieving an accuracy of 100% for both, while for the Normal class it achieved an accuracy of 93.90%. These claims are further supported by the confusion matrix generated for each of the folds. The confusion matrices for fold 1 and fold 2 are presented in Fig. 7. From Fig. 7 it can be observed that the model accurately predicted all the COVID-19 class images, although some Normal class images were misclassified.

The model is also trained on a 4-class dataset to separately classify bacterial pneumonia and viral pneumonia. The same performance metrics from the 3-class setup are used in this case as well. The results of the cross-validation testing are presented in Table 4 and the class-wise results in Table 5. From these tables it can be seen that the model gave consistent performance in all of the folds, and from the class-wise results it can be seen that the model performed exceptionally well for the COVID-19 class and reasonably well for the Normal class. The performance dropped a bit when differentiating between the bacterial pneumonia and viral pneumonia classes. On average, for this 4-class dataset, the model achieved a classification accuracy of 90.13% on the five-fold cross-validated data. This experimentation was done to see whether the model can generalize for all kinds of small datasets irrespective of the data source, even when the data are very similar. Even under these conditions, the model acquired an average accuracy of 90.13%, which is a relatively good performance.

The model is trained on a 2-class dataset as well. This dataset was derived from the 3-class dataset, where the Normal and Pneumonia classes were labeled as Non-COVID-19. The evaluation metrics are the same for this task. The detailed results are presented in Table 6. From Table 6 it can be seen that the proposed method performed well on both classes, with an average accuracy of 99.39%. The performances on both the 4-class and 2-class datasets can be further inspected with the confusion matrices presented in Fig. 8. As can be observed from the confusion matrices in Fig. 8, in the case of the two-class dataset almost all the test images were classified correctly, except for two Non-COVID images.

As mentioned in the methodology section, the EfficientNet-B4 model was used as the encoder network in this study. Other classification networks, such as ResNet-50, InceptionV3, and the other variants of EfficientNet, were also tried as the encoder network, and their results on the 4-class and 3-class datasets are compared in Table 7. From Table 7 it can be observed that even though EfficientNet-B4 performed the best, the other models also provide similar performance, which is further proof of the credibility and robustness of the proposed scheme. Nevertheless, in order to report the results in all tables of the results section, EfficientNet-B4 is used in the proposed method as the encoder network.
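The fold-wise metrics and confusion matrices reported above can be reproduced along the following lines with scikit-learn; `y_true` and `y_pred` are placeholder arrays of ground-truth and predicted class indices for one test fold, and macro-averaging is our assumption about how the per-class scores are aggregated.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

# Per-fold metrics from ground-truth and predicted class indices.
precision, sensitivity, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro")  # macro-averaging is an assumption
accuracy = accuracy_score(y_true, y_pred)
print(precision, sensitivity, f1, accuracy)

# Rows are true classes, columns are predicted classes (cf. Figs. 7 and 8).
print(confusion_matrix(y_true, y_pred))
```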
To further justify the use of the EfficientNet-B4 model as the feature extractor, Cohen's Kappa score and the Matthews Correlation Coefficient were evaluated for the models in the 4-class classification scheme using the proposed method. The detailed results of this analysis are presented in Table 8. To evaluate the effectiveness of the proposed method, its results were compared with a simple EfficientNet-B4 classification network pretrained on ImageNet weights. This comparison is presented in Table 9. From the results it can be observed that the use of the autoencoder network coupled with the merging blocks resulted in a performance improvement, and this improvement is especially visible in the case of the four-class dataset, where the classification task becomes much more difficult. To further evaluate the performance of the proposed methodology, statistical significance tests were performed on the two methods mentioned in Table 9. McNemar's test [32] and the Wilcoxon signed-rank test [33] were the two statistical tests performed for this purpose. The statistical significance tests are performed on the predictions of the two methods mentioned in Table 9. The prediction of each model on the 326 test set images is compared to the ground-truth label of each image, and a binary correct/incorrect decision is generated based on this comparison. This yields two distributions of this binary variable for the two models, and the disagreement between the two methods is used as the variable for these statistical significance tests. Each test tries to determine whether it is possible to reject the null hypothesis, which states that there is no difference in the disagreement between the two methods. The results of these tests are presented in Table 10. It can be observed from the results that the P-value of McNemar's test for the 4-class classification scheme was 0.83825, and for the 3-class classification scheme it was 0.68309. The P-value of the Wilcoxon signed-rank test for the 4-class classification scheme was 0.638, and for the 3-class classification scheme it was 0.084. As the P-values of both tests in both classification schemes were close to 0.5, it can be inferred that the proposed methodology produced some degree of statistically significant results.

As previously mentioned in Section 1, a lot of research work is currently being done on classifying COVID-19 patients from chest X-ray images. These studies are being conducted on both 3-class and 2-class datasets, with variation in the number of images in the dataset and the model architecture. A comparison of the proposed system with the existing literature is presented in Table 11. It is to be noted that, in these different reported methods, the number of COVID-19 X-ray images differs in each case. In order to further demonstrate the superiority of the proposed method, a comprehensive comparative analysis has been performed with the existing literature that has publicly available implementations, on the common evaluation protocol of: 408 COVID-19 + 408 Normal + 408 Viral Pneumonia + 408 Bacterial Pneumonia; 408 COVID-19 + 408 Normal + 408 Pneumonia; and 408 COVID-19 + 816 Non-COVID. The results of this analysis are presented in Table 12. From the analysis it can be observed that the proposed method outperforms the other methods in all three classification schemes.
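Returning to the statistical comparison above, a minimal sketch of both tests is given below; `pred_a`, `pred_b`, and `y_true` are placeholder arrays for the predictions of the two models and the ground-truth labels of the 326 test images.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

correct_a = (pred_a == y_true)  # proposed method, correct/incorrect per image
correct_b = (pred_b == y_true)  # plain EfficientNet-B4 baseline

# 2x2 agreement table over the same test images for McNemar's test.
table = np.array([
    [np.sum( correct_a &  correct_b), np.sum( correct_a & ~correct_b)],
    [np.sum(~correct_a &  correct_b), np.sum(~correct_a & ~correct_b)],
])
print(mcnemar(table, exact=False).pvalue)

# Wilcoxon signed-rank test on the paired binary correctness vectors.
print(wilcoxon(correct_a.astype(int), correct_b.astype(int)).pvalue)
```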
Discussion

From the results presented in the previous section on different classification tasks and various performance metrics, the noteworthy observations are presented below. In the 2-class and 3-class setups, as well as in the 4-class setup, the model can always detect the COVID-19 class with very high accuracy. The model performance is relatively poor when differentiating between the two classes of bacterial and viral pneumonia, as they look almost identical even to the human eye. Table 11 compares our proposed method with the existing literature; it can be seen there that the model manages to perform better than the other methods presented in that table. An average accuracy of 96.45% is found for the 3-class setup and 90.13% for the 4-class setup, while for the 2-class setup it is 99.39%. It can be observed that most of the other studies evaluated their systems on 3-class and 2-class datasets only. Another point to note is that more COVID-19 class data is used in this study compared to the other studies mentioned. A VGG-19 based model [34] acquired an accuracy of 93.48% for the 3-class setup, but used only 224 COVID, 700 Pneumonia, and 504 Normal class images; DarkCovidNet [19] likewise used fewer COVID-19 class images. In this study only four classes were considered, but if more respiratory diseases are included in the dataset, the number of classes to be handled by the deep learning network will increase. In that case, the overall accuracy may depend on some well-known factors, such as the intra-class and inter-class feature characteristics of the members belonging to the new class, and the availability of training data for the new class. In the case of the proposed scheme, it is observed that even when more classes are added to the dataset, the accuracy for the COVID-19 class does not drop significantly. It achieves good generalization and offers very satisfactory COVID-19 detection performance in comparison to some existing methods, even when more classes are included in the dataset. The proposed classification model is developed with the aim of being used in clinical conditions for detecting COVID-19 patients from their chest X-ray images. In such a case, only patients showing the known symptoms will undergo this process to verify whether they have a COVID-19 infection or not. As a result, the purpose of this model is to classify the COVID-19 cases from the other classes with high accuracy. Since pneumonia patients are known to show symptoms similar to those of COVID-19 patients, and their chest X-ray images look very similar, the proposed system was trained to differentiate between these two classes and also to recognize normal chest X-ray images, similar to the state-of-the-art literature [21,19]. However, the proposed system has the potential to be used for detecting multiple respiratory diseases from chest X-ray images, similar to [41], and this can be further explored in future research. It is to be noted that in recent times several researchers have tested their methods on this dataset, as it is publicly available. Apart from the competitive classification performance of the proposed method, it is expected that users will find the intelligent methodology of this study useful in real-life applications.

Gradient based localization

To further inspect the results from our system, the Gradient-weighted Class Activation Mapping (Grad-CAM) [42] algorithm was integrated with our system, and the regions of interest in the X-ray images that are used for the classification purpose were identified.
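A minimal sketch of such a Grad-CAM integration is given below; it follows the standard formulation of [42] rather than the authors' exact code, and the convolutional layer name and class index are placeholders.

```python
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_idx):
    # Assumes a functional Keras model; maps the input to the chosen
    # convolutional feature map and the final predictions.
    grad_model = tf.keras.Model(
        model.input,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)        # d(score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # global-average the gradients
    cam = tf.nn.relu(
        tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))
    cam = cam[0] / (tf.reduce_max(cam) + 1e-8)    # normalize to [0, 1]
    return cam.numpy()  # upsample and overlay on the X-ray for a heatmap
```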
The results of this integration on our test dataset are presented in Fig. 9. It can be observed from the heatmap examples of each class that the classification model mostly looked at the lung regions in the chest X-ray images. Cloudy regions in parts of the lungs indicate ground-glass opacity (GGO) and consolidation. For the bacterial and viral pneumonia cases, as well as the COVID-19 cases, the model concentrated on the opacities present in those images, with the red and yellow patches indicating the severity of the abnormalities present in those regions. For the normal class images, the heatmaps did not have any red patches in the lungs, indicating that they did not contain any opacities or abnormalities, and as such they were classified as normal lungs. As mentioned in [3], the presence of GGOs in those regions is an indication of pneumonia and COVID-19. In [43,44], expert radiologists found GGOs to be the predominant feature in chest X-rays of COVID-19 patients, with the regions of abnormality mostly in the lower lobe and bilateral regions. Similar findings are obtained in most of the Grad-CAM representations for the pneumonia, COVID-19, and normal cases handled by the proposed method. The misclassified instances of the test set and their corresponding Grad-CAM representations are shown in Fig. 10. It can be observed from the Grad-CAM images that the proposed model makes a wrong classification mostly when the test image of a particular class exhibits close resemblance to the images of another class. It is to be noted that in many cases the weights shown in the Grad-CAM images cannot be well justified according to the relevant class of the sample image. No feedback from the Grad-CAM images was incorporated into the proposed methodology to improve the classification performance, which could be a potential future work.

Conclusion

It has been more than 6 months since the advent of the COVID-19 pandemic, and an automatic system for detecting COVID-19 from chest X-ray images is now a necessity. This research was conducted with the aim of developing a deep learning-based system that can generalize even on a small dataset. It is shown that the proposed training scheme, utilizing an unsupervised image reconstruction stage for weight initialization of the encoder model and the proposed encoder-merging network, which extracts features from different layers of the encoder network and learns to effectively merge them in a supervised training phase, has the capability to give very satisfactory and consistent results even with a very small dataset. It can handle both binary and multi-class problems in an efficient way. For this reason, it is expected that when a large dataset for this task becomes publicly available, this model will be able to generalize even better. Moreover, the network was designed in such a way that both the feature extraction and the classification stages used the same EfficientNet-B4 backbone network, resulting in more efficient computation and faster convergence.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2021-10-20T13:09:57.279Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "f46d33024c244d584d2068ad7fd425bdaa87a889", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.bbe.2021.09.004", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "551f1f5aca60bda422dc5a4195a2ae230845a1dc", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
799852
pes2o/s2orc
v3-fos-license
NeuroD1 regulation of migration accompanies the differential sensitivity of neuroendocrine carcinomas to TrkB inhibition

The developmental transcription factor NeuroD1 is anomalously expressed in a subset of aggressive neuroendocrine tumors. Previously, we demonstrated that TrkB and neural cell adhesion molecule (NCAM) are downstream targets of neurogenic differentiation 1 (NeuroD1) that contribute to its actions in neuroendocrine lung. We found that several malignant melanoma and prostate cell lines express NeuroD1 and TrkB. Inhibition of TrkB activity decreased invasion in several neuroendocrine, pigmented melanoma cell lines but not in prostate cell lines. We also found that loss of the tumor suppressor p53 increased NeuroD1 expression in normal human bronchial epithelial cells and in cancer cells with neuroendocrine features. Although we found that a major mechanism of action of NeuroD1 is the regulation of TrkB, effective targeting of TrkB to inhibit invasion may depend on the cell of origin. These findings suggest that NeuroD1 is a lineage-dependent oncogene acting through its downstream target, TrkB, across multiple cancer types, which may provide new insights into the pathogenesis of neuroendocrine cancers.

INTRODUCTION

Aberrant expression of basic helix-loop-helix transcription factors such as neurogenic differentiation 1 (NeuroD1) and achaete-scute homolog 1 has been observed in aggressive small cell lung cancer (SCLC) and in neural and neuroendocrine lung carcinomas. [1][2][3][4] Although the developmental roles of these basic helix-loop-helix proteins are well established, their possible causative roles in the pathogenesis of neuroendocrine carcinomas are less understood. Neuroendocrine tumors can initiate from almost any organ system. Although they are described based on the organ of origin, this feature is not limiting, as many of these tumors share pathological characteristics such as expression of the neuroendocrine markers synaptophysin and chromogranin A, and the neural cell adhesion molecule (NCAM, also known as CD56). [5][6][7] Neuroendocrine tumors are thought to originate from neuroendocrine cells, or to undergo an epithelial-to-neuroendocrine differentiation that leads to more aggressive carcinomas, as observed in melanoma and cancers of the gastrointestinal tract and prostate. 2,[8][9][10][11][12][13][14] To investigate the role of NeuroD1 in tumorigenesis, we examined several tissue types. We report that several pigmented melanoma cell lines express high amounts of NeuroD1, and we confirmed previous findings in prostate cell lines. 2 We find that regulation of TrkB is conserved across multiple tissue types. Additionally, downregulation of both NeuroD1 and TrkB reduced viability and migration of several carcinomas; however, inhibition of TrkB activity only had an effect in cell lines with neuroendocrine features, as defined by the presence of the neuroendocrine markers synaptophysin and NCAM. We also determined that loss of p53 is permissive for increased expression of NeuroD1, possibly in a lineage-dependent manner.

RESULTS

NeuroD1 is highly expressed in aggressive neuroendocrine cancers

To investigate the clinical significance, expression of NeuroD1 was examined in a data set including more than 5400 patient samples taken from tumor and normal tissues. Elevated NeuroD1 expression was observed in several malignant tumors, including those from the neuroendocrine tissues, pancreas, brain, and lung, the lung tumors all being SCLC (Figure 1a, Supplementary Tables S1 and S2).
Previously, we have demonstrated that NeuroD1 promotes tumor cell survival and metastasis in aggressive neuroendocrine lung tumors through regulation of the tyrosine kinase receptor TrkB. 15 To complement studies in the lung, we examined the role of NeuroD1 in cell lines from non-neural or non-neuroendocrine cells that undergo neuroendocrine differentiation. 2,16 The consequences of NeuroD1 expression have not been investigated in malignant melanoma, even though it has been suggested to induce neuroendocrine differentiation in conjunction with oncogenic B-RAF V600E under certain circumstances. 17,18 NeuroD1 was observed in prostate cancer; however, its expression in several of the commonly used prostate cancer cell lines was only noted upon in vitro differentiation with cAMP. 2 We observed that NeuroD1, TrkB, and NCAM expression was greater in melanoma cell lines that were reported to have higher pigmentation and metastatic potential 19,20 (Figure 1b and Supplementary Figure 1a). The tendency of increasing expression of the three factors with increasing pigmentation appeared to be independent of the mutational status of B-RAF, as all cell lines with the exception of WM3211 have V600 mutations. [21][22][23] As in SCLC, increased expression of the neuroendocrine marker synaptophysin was also detected in melanomas with high NeuroD1 (Figures 1b and c). NeuroD1 was also expressed in undifferentiated malignant prostate cell lines; however, neither of the neuroendocrine markers, synaptophysin or NCAM, was detected (Figure 1d).

Loss of p53 increased NeuroD1 expression

We next sought to investigate specific onco-genotypes possibly responsible for expression of NeuroD1. To do this, we utilized human bronchial epithelial cell (HBEC) lines that were assigned a number to distinguish lines from different individuals and immortalized by overexpression of cyclin-dependent kinase 4 and human telomerase reverse transcriptase (for example, HBEC3KT). 24 The immortalized HBEC3KT cell line was sequentially transformed by knockdown of the tumor suppressor p53 and expression of K-RasV12 (HBEC3KTRL53) 25 (also Sato et al. 26). NeuroD1 expression was increased in HBEC3KT53 cells, a nontumorigenic derivative with stable knockdown of p53 (Figure 2a). Additionally, isogenic derivatives of HBEC3KT that were transformed from normal to tumorigenic cells followed by clonal selection (HBEC3KTRL53-Clone 5, hereafter called Clone 5) exhibited spontaneous expression of NeuroD1 (Figure 2a). Sustained inactivation of p53 is suggested to enhance tumorigenesis at multiple stages, including initiation and progression. 13,24,[27][28][29][30] To test a possible relationship between p53 and NeuroD1, we re-expressed p53 in the tumorigenic cell line Clone 5 and found a substantial decrease in NeuroD1 mRNA (Figure 2b). Next, we utilized a luciferase construct driven by the mouse Neurod1 proximal promoter to determine whether p53 expression affected promoter activity. A 100-fold increase in Neurod1 promoter activity was observed in immortalized HBEC3KT53 compared with the parental HBEC3KT (Supplementary Figure 1A). Furthermore, re-expression of p53 in HBEC3KT53 and Clone 5 led to a dramatic reduction in Neurod1 promoter reporter activity (Figure 2c). From this HBEC model we concluded that loss of p53 induced NeuroD1 expression, suggesting that p53 may regulate NeuroD1 early in the pathogenesis of neuroendocrine lung cancer.
To evaluate p53 as a determinant of NeuroD1 expression in neuroendocrine cancers, we analyzed its expression in lung, prostate and melanoma cells with loss of (H358, PC3, and YUMAC) or mutation in (H1155, M14, SK-MEL-2 and SK-MEL-28) p53 31,32 (Figure 2d). Overexpression of p53 only suppressed NeuroD1 in cells that also had neuroendocrine features, not in the three non-neuroendocrine cell lines (Figure 2e). Together, these results suggest a role for loss of p53 being permissive for NeuroD1 expression, not only in neuroendocrine lung cancers but also, as recently suggested, in melanoma pathogenesis. 33

NeuroD1 and TrkB regulate viability and migration of prostate and melanoma cell lines

Previously, we have demonstrated that knockdown of NeuroD1 and its downstream target TrkB led to a decrease in survival and migration of neuroendocrine lung cancers. 15 We observed that loss of NeuroD1 resulted in loss of TrkB in all lines tested, indicating a conserved connection between NeuroD1 and TrkB across multiple cancer types (Supplementary Figure 2a). NeuroD1 has the ability to regulate the promoter of TrkB in neural and neuroendocrine lung cancers. 15,34 We sought to investigate whether NeuroD1 bound the promoter of TrkB in the melanoma and the prostate cell lines using chromatin immunoprecipitation.

[Figure 2 caption fragments recovered from the page layout: immunoblot shown from one of three independent experiments. (c) HBEC3KT53 and Clone 5 were transfected with pGL3-NeuroD1-luciferase with and without p53; p53 was immunoblotted and luciferase activity was measured; one of six experiments shown. (d) Melanoma, prostate and lung cancer cell lines were lysed; 50 μg total protein was immunoblotted for p53, NeuroD1 and GAPDH (as loading control); the dashed line indicates a discontinuity in the gel; the asterisk represents a loss-of-function mutation in p53. 43 (e) Cell lines with loss of or mutation in p53 were transfected with control vector or vector encoding p53; cells were lysed and immunoblotted for p53; overexpression was quantified using Odyssey software.]

The TrkB inhibitor lestaurtinib did decrease viability and migration of LNCaP; however, these effects were not through a decrease in phosphorylated TrkB and may be due to another target of the drug (Figures 4b and d). We hypothesized that even if NeuroD1 was highly expressed, its mechanisms of action through TrkB may be dependent on the cell of origin. Neuroendocrine differentiation, defined by the presence of synaptophysin, was observed in the melanoma and neuroendocrine lung cancer cell lines, but not in the prostate cell lines (Figures 1b and d). We suggest that NeuroD1 and TrkB regulate migration in neuroendocrine and non-neuroendocrine cancer cells, but the efficacy of targeting TrkB may depend on the cell of origin.

DISCUSSION

Neurogenic basic helix-loop-helix transcription factors, including NeuroD1, are found to have increased expression in neural and neuroendocrine tumors. Whether their expression was causative or solely a consequence of disease had not been determined. 1,4,9,14,35 Recently, NeuroD1 was implicated in the tumorigenesis of neuroblastoma. 36 Our data reveal a novel function for NeuroD1 in the induction and coordination of signal transduction pathways that regulate survival and migration of non-neural/neuroendocrine cancers. We now demonstrate that NeuroD1 promotes survival and migration in neuroendocrine lung and other carcinomas at least in part through TrkB. HBEC models provided a useful system to explore NeuroD1 function.
Our studies suggest that p53 negatively regulates NeuroD1 expression not only in HBEC but also in carcinomas with neuroendocrine features. Loss of p53 did not unilaterally result in an increase in NeuroD1 expression, as observed in the non-neuroendocrine lung and prostate cell lines. Unlike cells of the prostate, melanocytes derive from neural crest cells migrating from the dorsal neural tube to the dermis, making them neuroectodermal in origin. 37,38 Furthermore, p53 is not only a potent tumor suppressor, but also suppresses self-renewal of adult neural stem cells. 39 NeuroD1 has also been shown to enhance proliferation of committed neuronal progenitor cells. 40 Perhaps p53-mediated inhibition of NeuroD1 in neuroendocrine cells present in non-neural tissues may in some respects parallel its effect on the determination of neuronal cell fate. Surprisingly, inhibition of TrkB kinase activity is apparently not equally significant in the three cancer types examined: lung, melanoma, and prostate. TrkB activity is important for neither migration nor viability in prostate cancer cell lines. We speculate that this may be because prostate, unlike melanoma or SCLC, expresses smaller forms of TrkB, possibly TrkB splice variants thought to be inhibitory due to lack of the kinase domain, 41 or because the effects of NeuroD1 on prostate viability and migration may be mediated by pathways independent of TrkB. Ultimately, additional characteristics must be identified to distinguish between TrkB inhibitor-sensitive and insensitive tumor types. The findings here suggest that NeuroD1 acts as a lineage-specific regulator of survival and migration in cells with neuroendocrine features, mainly through TrkB, which may be potentiated by loss of p53 (Figure 5). The discovery of downstream targets of NeuroD1 in non-neuroendocrine/non-neural tumors is ongoing. The actions of the NeuroD1/TrkB axis may differ outside of cancers with neuroendocrine features. The development of drugs that act as inhibitors of transcription factors has proven extremely difficult. Cell surface proteins offer greater opportunities for therapeutic intervention. In particular, TrkB, a receptor and enzyme, has gained attention as a potential target of drug development for neuronal and non-neuronal metastatic carcinomas. NeuroD1-expressing neuroendocrine carcinomas should now also be considered for sensitivity to TrkB inhibitors.

Reagents, antibodies, immunoblotting

Immunoblot analyses were done as previously described using equal amounts of protein from each sample. 42 The following antibodies were used for blotting, immunoprecipitation and chromatin immunoprecipitation: goat NeuroD1 (N-19), rabbit pan-phospho-Trk (E-6), and synaptophysin.

Chromatin immunoprecipitation

Chromatin immunoprecipitation was performed as previously described. 42 Twenty-five nanograms of total DNA was used for the quantitative RT-PCR reactions. TrkB primers were as described previously. 34

Quantitative real-time PCR

Total RNA from the xenograft tumors and cell lines was isolated with TRI Reagent (Sigma-Aldrich, St Louis, MO, USA). RNA from tumor samples was from MD Anderson Cancer Center (Houston, TX, USA). Complementary DNA was synthesized using the iScript cDNA Synthesis Kit (Bio-Rad Laboratories, Hercules, CA, USA). RNAs for mouse and human NeuroD1, TrkB, NCAM and 18S ribosomal RNA were quantified by RT-PCR with iTaq (Bio-Rad) master mix using TaqMan probes (Applied Biosystems, Life Technologies/Invitrogen, Grand Island, NY, USA) on an ABI 7500 thermocycler.
Relative transcript levels were normalized to 18S rRNA. Transcript amounts in knockdown cells were plotted as fold change relative to control. Data were analyzed using ABI 7500 system software (Life Technologies/Invitrogen).

Cell viability and proliferation assay

Cells were plated at a density of 10^5 per well and reverse transfected with small interfering RNA for 3 days. Viability after knockdown or drug treatment was assayed using CellTiter-Blue Reagent according to the manufacturer's protocol, measuring fluorescence as the readout. Proliferation was measured by incorporation of BrdU into cells for 24 h following reverse transfection with the indicated small interfering RNA for 3 days. Incorporation was measured using the Cell Signaling Assay Kit #6813S.

Cell culture

SCLC and non-small-cell lung carcinoma lines were obtained from the Hamon Cancer Center Collection (UT Southwestern). SCLC, non-small-cell lung cancers with neuroendocrine differentiation, HBEC3KTRL53-Clone 5 (Sato et al., submitted) and prostate cell lines were cultured in RPMI 1640 medium with 10% fetal bovine serum. Melanoma cell lines were cultured in DMEM with 10% fetal bovine serum. Immortalized HBECs and RWPE (normal immortalized prostate cells) (except HBEC3KTRL53-Clone 5) 25 were cultured in KSFM (Life Technologies/Invitrogen) with 5 ng/ml epidermal growth factor and 50 μg/ml bovine pituitary extract. The lung cancer cell lines were DNA fingerprinted using the PowerPlex 1.2 kit (Promega, Madison, WI, USA) and confirmed to match the DNA fingerprint library maintained either by ATCC or by the Hamon Cancer Center. The lines were also tested to be free of mycoplasma with the e-Myco kit (Boca Scientific, Boca Raton, FL, USA).

Migration assays

For migration assays, cells were seeded 48 h following knockdown of NeuroD1 or TrkB. Transwell migration was assayed in Transwell permeable supports (Corning #3422, Corning, NY, USA). Cells were seeded in the top chamber in RPMI with 1% fetal bovine serum and allowed to migrate along a concentration gradient through a polycarbonate membrane with 8 μm pores to the bottom chamber containing medium with 10% fetal bovine serum. After 24 h, cells were fixed, stained (with hematoxylin and eosin), and counted. For invasion assays, 1.5 × 10^5 cells were embedded in growth factor reduced Matrigel in the presence or absence of 100 nM lestaurtinib in Transwell permeable supports. Cells were allowed to migrate for 48 h across membranes with a gradient of 10% serum in the bottom chamber.

Microarray analysis

Five micrograms of total RNA was labeled and hybridized to Affymetrix GeneChips HG-U133A and B according to the manufacturer's protocol (http://www.affymetrix.com), while 0.5 micrograms of total RNA was used for the Illumina BeadChip HumanWG-6 V3 (http://www.illumina.com). These data are available in GEO (accession nos. GSE4824 and GSE32036). Array data were pre-processed with MAS5 (Affymetrix algorithm for probe summarization) or MBCB (Illumina algorithm for background subtraction (Ding et al., NAR 36 (10), 2008)), quantile-normalized and log-transformed. Microarray expression data of NeuroD1 mRNA were also compared across diverse benign (N = 3879; black dots) and malignant tissues (N = 1605; red dots) using the Affymetrix HG-U133 Plus v2 GeneChip. These data were obtained from Gene Logic, Inc. (Gaithersburg, MD, USA). The analysis shown is for probe set ID 206282_at. The microarray data were normalized using the RMA method.
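As an illustration of the array preprocessing described above, a minimal sketch of quantile normalization is given below; it is a generic implementation (ties are handled crudely), not the exact MAS5/MBCB pipeline used for these arrays, and the toy matrix is synthetic.

```python
import numpy as np

def quantile_normalize(x):
    # Rank each column, then replace every value by the mean, across columns,
    # of the values sharing its rank (ties are handled crudely here).
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    means = np.sort(x, axis=0).mean(axis=1)  # mean reference distribution
    return means[ranks]

expr = np.random.lognormal(size=(1000, 6))   # toy probe-by-sample matrix
norm = np.log2(quantile_normalize(expr))     # quantile-normalize, then log
```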
Statistical analyses

Student's t test, one-way analysis of variance (ANOVA), Pearson's test and linear regression were used to determine statistical significance. Statistical significance for all tests was assessed by calculating the P values and was defined as P < 0.05.
2016-05-12T22:15:10.714Z
2013-08-01T00:00:00.000
{ "year": 2013, "sha1": "1e8a282902417f8b720772b4d64695206b2152fc", "oa_license": "CCBYNCND", "oa_url": "https://www.nature.com/articles/oncsis201324.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1e8a282902417f8b720772b4d64695206b2152fc", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
257505045
pes2o/s2orc
v3-fos-license
Real trees

We survey the definition and some elementary properties of real trees. There are no new results, as far as we know. One purpose is to give a number of different definitions and show the equivalence between them. We discuss also, for example, the four-point inequality, the length measure and the connection to the theory of Gromov hyperbolic spaces. Several examples are given.

Introduction

This is a survey of various equivalent definitions of real trees and some properties of them, mostly with proofs. We do not think that there are any new results. Most of the paper considers deterministic real trees, but we include also some brief comments on random real trees. For further results, see for example [7], [11], [16], and the references there.

1.1. Definition. There are several different but equivalent definitions of real trees (also called R-trees). We collect several of them as follows. We define below conditions (T1) and (T2a)-(T2j) on a metric space (T, d); we will show that assuming (T1), the conditions (T2a)-(T2j) are all equivalent. We then make the following definition. (Which we state already here, although it is not yet justified.)

Definition 1.1. A real tree (or R-tree) is a non-empty metric space T = (T, d) that satisfies condition (T1) and one (and thus all) of (T2a)-(T2j).

Remark 1.2. Some authors assume also that the metric space T is complete. We will not do so. See further Remark 6.6 below. Note also that in many applications, T is assumed to be compact; again we do not assume this. Another equivalent, and related, definition is given in [11, Definition 3.15]. A characterization of a different kind of real trees is given in Theorem 6.1.

1.2. Some notation. Throughout, T = (T, d) is a (non-empty) metric space. We often write d_{x,y} for d(x, y). B(x, r) := {y : d(x, y) < r} denotes the open ball with centre x ∈ T and radius r > 0.

The conditions

In this section we state the conditions on a metric space T = (T, d), beginning with the central (T1).

(T1) For any x, y ∈ T, there exists a unique isometric embedding ϕ_{x,y} of the closed interval [0, d_{x,y}] ⊂ R into T such that ϕ_{x,y}(0) = x and ϕ_{x,y}(d_{x,y}) = y.

Assume that (T1) holds. We then denote the image ϕ_{x,y}([0, d_{x,y}]) by [x, y]. Obviously, ϕ_{y,x}(t) = ϕ_{x,y}(d_{x,y} − t) and [y, x] = [x, y]. Furthermore, still assuming (T1), let x, y, z ∈ T. Since ϕ_{x,y} and ϕ_{x,z} are isometries,

d(ϕ_{x,y}(s), ϕ_{x,z}(t)) ≥ |s − t|.   (2.1)

We define, noting that the maximum exists (i.e., the supremum is attained) by continuity,

∆(x, y, z) := max{ t ≤ d_{x,y} ∧ d_{x,z} : ϕ_{x,y}(s) = ϕ_{x,z}(s) for all s ∈ [0, t] },   (2.2)
γ(x, y, z) := ϕ_{x,y}(∆(x, y, z)).   (2.3)

Remark 2.2. Condition (T1) alone is not sufficient. Examples of spaces satisfying (T1) without being real trees are the Euclidean space R^d, d ≥ 2, and any convex subset of R^d of dimension ≥ 2; for example the unit disc.

3. Consequences of (T1)

In this section we assume (T1) (and sometimes further conditions), and show some lemmas used in the proof of Theorem 2.1. In particular, if z ∈ T, then the components of T \ {z} are open and pathwise connected. These (path) components are called the branches at z; see also Section 8.

Lemma 3.6. Suppose that (T1) and (T2d) hold. Then, for any x, y, z ∈ T,

∆(x, y, z) = ½ (d_{x,y} + d_{x,z} − d_{y,z}).   (3.14)

In particular, ∆ is a continuous function T^3 → R, and γ(x, y, z) is a symmetric function of x, y, z.

Subtrees

We have, as a simple consequence of the definition and Theorem 2.1, a simple result for subsets of a real tree.

Theorem 5.1. Let T be a real tree, and let S ⊆ T be a nonempty subset of T, regarded as a metric space with the induced metric. Then the following are equivalent.

(i) S is a real tree.
(ii) S is connected.
(iii) S is pathwise connected.
(iv) [x, y] ⊆ S for all x, y ∈ S.

Proof. (iv) =⇒ (i): If x, y ∈ S, then [x, y] = ϕ_{x,y}([0, d_{x,y}]) ⊆ S, where ϕ_{x,y} is the mapping in (T1) for the real tree T.
Hence, ϕ_{x,y} : [0, d_{x,y}] → S, and thus (T1) holds for S too; uniqueness follows because ϕ_{x,y} obviously is unique in S if it is unique in T. Finally, (T2e) holds in S since it holds in T. (In fact, we could here argue with any of (T2a)-(T2j).) Hence, S is a real tree.

(ii) =⇒ (iv): Suppose that S is connected. Let x, y ∈ S, and consider [x, y] (in the real tree T). Let z ∈ (x, y), and suppose that z ∉ S. By (T2a) and Lemma 3.1, the components of T \ {z} are disjoint open sets, with x and y in different components. Let U be the component containing x, and V the union of all other components; then S = (S ∩ U) ∪ (S ∩ V), where S ∩ U and S ∩ V are two nonempty disjoint open subsets of S. This contradicts the assumption that S is connected, and this contradiction shows that z ∈ S; hence [x, y] ⊆ S.

A subtree of a real tree is thus a connected nonempty subset.

Theorem 5.2. The intersection of any family {T_α} of subtrees of a real tree T is a subtree of T, provided it is nonempty.

Proof. This is an immediate consequence of Theorem 5.1. Let S := ⋂_α T_α. If x, y ∈ S, then Theorem 5.1(iv) shows that [x, y] ⊆ T_α for every α, and thus [x, y] ⊆ S. Hence, another application of Theorem 5.1 shows that S is a subtree.

In particular, it follows that if T is a real tree, then for any nonempty set U ⊆ T, there exists a smallest subtree S ⊆ T with U ⊆ S; we say that S is the subtree spanned by U. This subtree can be described as follows.

Theorem 5.3. Let T be a real tree and let S be the subtree spanned by a nonempty set U ⊆ T. Then

S = ⋃_{y,z∈U} [y, z].   (5.1)

Furthermore, for every x ∈ U, we also have

S = ⋃_{y∈U} [x, y].   (5.2)

Proof. Denote the unions in (5.1) and (5.2) by S′ and S″_x, respectively. Then S″_x ⊆ S′ ⊆ S. On the other hand, S″_x is pathwise connected, since every interval [x, y] is and they contain a common point x. Thus Theorem 5.1 shows that S″_x is a subtree. Since S″_x ⊇ U, it follows that S″_x ⊇ S, and the result follows.

The four-point inequality

A different type of characterization of real trees is given by the following theorem, see e.g. [11, Theorem 3.40] or [7] and the references there. (This characterization is less intuitive, but technically very useful.) The condition (6.1) is called the four-point inequality or four-point condition; an equivalent condition is 0-hyperbolicity, see Definition A.1 and Lemma 6.7.

Theorem 6.1. A metric space T is a real tree if and only if T is connected and for any four points x, y, z, w ∈ T,

d(x, y) + d(z, w) ≤ max{ d(x, z) + d(y, w), d(x, w) + d(y, z) }.   (6.1)

It is easily verified that (6.1) is trivial if two or more of x, y, z, w coincide; hence it does not matter whether we require x, y, z, w to be distinct or not.

Remark 6.3. By considering all permutations of x, y, z, w, it follows that (6.1) is equivalent to the condition that (for any x, y, z, w), among the three sums

d(x, y) + d(z, w),  d(x, z) + d(y, w),  d(x, w) + d(y, z),

two are equal and the third is equal or less than the other two.

The following theorem is from [7]; see also [8].

Theorem 6.4. Let X be a metric space. Then X can be isometrically embedded into a real tree if and only if the four-point inequality (6.1) holds for any four points x, y, z, w ∈ X.

Proof of Theorem 6.1 from Theorem 6.4. Suppose that T is connected and that (6.1) holds. Then, by Theorem 6.4, T ⊆ T′ for some real tree T′. Since T is connected, T is a real tree by Theorem 5.1.

Among the consequences we mention the following.

Theorem 6.5. If T is a real tree, then so is its completion.

Proof. By continuity, if the four-point inequality (6.1) holds in T, which is a dense subset of its completion, then it holds in the completion. Furthermore, since T is connected, so is its completion. Hence, the completion is a real tree by Theorem 6.1.
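As a quick numerical illustration of the four-point condition in Theorem 6.1, the following sketch builds the shortest-path metric of a small weighted tree (an arbitrary toy example) and verifies (6.1) for every quadruple of vertices.

```python
from itertools import combinations

# Toy weighted tree on vertices 0..4; edges and lengths are arbitrary.
edges = {(0, 1): 1.0, (1, 2): 0.5, (1, 3): 2.0, (3, 4): 1.5}
n = 5
INF = float("inf")
d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
for (i, j), w in edges.items():
    d[i][j] = d[j][i] = w
for k in range(n):          # Floyd-Warshall shortest paths
    for i in range(n):
        for j in range(n):
            d[i][j] = min(d[i][j], d[i][k] + d[k][j])

# Verify the four-point inequality (6.1) for every quadruple.
for x, y, z, w in combinations(range(n), 4):
    assert d[x][y] + d[z][w] <= max(d[x][z] + d[y][w],
                                    d[x][w] + d[y][z]) + 1e-9
print("four-point condition holds")
```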
By Theorem 6.5, we may in many situations assume without loss of generality that real trees are complete, since we can replace an arbitrary real tree by its completion. The four-point inequality (6.1) can be rewritten in several ways. Define, for three points x, y, z in a general metric space (X, d), the Gromov product Note that (x, y) z 0 by the triangle inequality, and that (6.3) meaures how far the triangle inequality is from being an equality. Note also that in a real tree, Lemma 3.6 shows that (x, y) z = ∆(z, x, y), which equals the distance from z to [x, y]. Lemma 6.7. The four-point inequality (6.1) is equivalent to Proof. By the definition (6.3), the inequality (6.4) holds if and only if at least one of the following holds: These are equivalent to, respectively, 8) and thus at least one of them holds if and only iff (6.1) holds. We note also that, in fact, it suffices to verify the four-point inequality for a fixed choice of one of the four points. Lemma 6.8. Let T be a metric space and let o ∈ T be fixed. If the fourpoint inequality (6.1) holds for w = o and all x, y, z ∈ T , then it holds in general, i.e., for all x, y, z, w ∈ T . Proof. By Lemma 6.7, this is the special case δ = 0 of Lemma A.5. Rooted real trees In a rooted real tree (T, ρ), we may define a partial order by x, y ∈ T. (7.1) Theorem 7.2. Let (T, ρ) be a rooted real tree. Then (7.1) defines a partial order in T , with ρ as the minimum element. Moreover, any two points x, y ∈ T have a greatest common lower bound, which we denote by x ∧ y. Recalling the notation of (2.3) and Lemma 3.6, we have Proof. It is easily seen, using Lemmas 3.2 and 3.4, that (7.1) defines a partial order. It is obvious from (7.1) that ρ x for every x ∈ T . For any x, y ∈ T , by the definition (7.1) which shows that γ(ρ, x, y) is a greatest lower bound x ∧ y. For any x, y ∈ T , the path [x, y] from x to y is a combination of the paths [x, x ∧ y] and [x ∧ y, y] (where one or both parts may reduce to a single point). Hence, we have We note also that for any subset {x α } α∈A ⊆ T , it follows from Theorem 5.3 that the subtree spanned by these points and the root ρ is α [ρ, x α ]; see further Examples 8.5 and 10.3. We ignore here the trivial case when T consists of a single point. (In this case, the point is defined to be a leaf, by a modification of the definition above, and T o = ∅.) Leaves and branch points We note that In a rooted real tree, the root is often not regarded as a leaf, even if its degree is 1. We note also that the branches at a point can be characterized as follows. Lemma 8.4. Let T be a real tree, and let z ∈ T . Then the following are equivalent, for any x, y ∈ T \ {z}: (i) x and y belong to different branches of T at z. Proof. A metric space of compact real trees Consider the set T of all compact real trees, or rather the set of all equivalence classes under isometry of compact real trees (so that two isometric real trees are regarded as the same). (The set theoretic difficulties with "all compact real trees" are handled in the standard way: since a compact real tree, as any compact metric space, has cardinality at most c, it suffices to consider real trees that as sets are subsets of, for example, R.) The set T can be equipped with a metric, the Gromov-Hausdorff distance, which makes T a complete separable metric space. Similarly, the set T 1 of rooted compact real trees is a complete separable metric space, equipped with (a rooted version of) the Gromov-Hausdorff distance. 
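For orientation, one common formulation of the Gromov-Hausdorff distance referred to in this section is the following; the ambient space Z and the embeddings below are the usual auxiliary objects of this definition, and conventions in the literature differ slightly (for example, whether the rooted version combines the two terms by a sum or by a maximum), so this should be read as a sketch rather than as the exact definition of [12]:
\[
  d_{\mathrm{GH}}(X,Y) \;=\; \inf_{Z,\,\varphi,\,\psi} d_H^{Z}\bigl(\varphi(X),\psi(Y)\bigr),
\]
where the infimum is taken over all metric spaces Z and all isometric embeddings \(\varphi: X \to Z\), \(\psi: Y \to Z\), and \(d_H^{Z}\) denotes the Hausdorff distance in Z. For rooted trees one also keeps track of the roots, e.g.
\[
  d_{\mathrm{GH}}^{\bullet}\bigl((X,\rho_X),(Y,\rho_Y)\bigr) \;=\; \inf_{Z,\,\varphi,\,\psi} \Bigl( d_H^{Z}\bigl(\varphi(X),\psi(Y)\bigr) + d_Z\bigl(\varphi(\rho_X),\psi(\rho_Y)\bigr) \Bigr).
\]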
See [12] for definitions and proofs; see also [6,Section 7.3] for the Gromov-Hausdorff distance for general metric spaces. The fact that T and T 1 thus are complete separable metric spaces (and thus Polish topological spaces) makes it possible to define random compact real trees as random elements of one of these spaces, and a lot of standard machinery then is available. For noncompact real trees, one can similarly use the version of Gromov-Hausdorff convergence in [6, Section 8.1]. We may regard a combinatorial tree as a real tree T , by regarding each edge as a copy of [0, 1], with the endpoints identified with the corresponding vertices in V . Equivalently, we may define T as the disjoint union of V and one copy of (0, 1) for each edge in E, with a suitably defined metric. (We omit the details, and the verification that T is a real tree.) In any case, we regard V as a subset of T . Some examples Note that for v, w ∈ V , the distance d(v, w) equals the usual distance in a graph, i.e., the number of edges in a shortest path from v to w. The degree δ T (z) of a vertex z ∈ V equals the degree of z in the graph (V, E); the degree of any vertex in T \ V is 2. In particular, the leaves of T are precisely the leaves of the tree (V, E) (i.e., the vertices in V adjacent to a single edge in E), and the branch points are the vertices in V that have degree 3. It is easy to see that T always is complete, that T is separable if and only if V (and thus also E) is countable, and that T is compact if and only if V (and thus also E) is finite. Example 10.2. More generally, suppose as in Example 10.1 that (V, E) is a combinatorial tree, and assume also that for every edge e ∈ E, we are given a real number ℓ e , called the length e. We may construct a real tree T as in Example 10.1, but now for each edge e taking an interval of length ℓ e . (In particular, ℓ e = 1 for all e gives back the real tree in Example 10.1. ) We see again that T is separable if and only if V is countable. (In one direction, note that if D is a countable dense subset of T , then every edge contains, in its interior, an element of D; hence E is countable.) Moreover, T is compact if V is finite, but the converse does not hold. One counterexample is an infinite star which is compact for some (but not all) choices of edge lengths: let V = {0, 1, . . . } and E = {0i : i 1}, with length ℓ 0i = 2 −i . No vertex in V is a leaf, and thus, see Example 10.1, the real tree T has no leaf, so T L = ∅ and T o = T . The real tree T is always separable, and never compact. It is easy to see that T is complete if n ℓ n = ∞, but not if n ℓ n < ∞, since in the latter case, the sequence (0 n ) ∞ 0 = ∅, 0, 00, 000, . . . is a Cauchy sequence without a limit. See further the next example. Example 10.5. Let T 0 be the infinite binary tree in Example 10.4 and assume that L := n ℓ n < ∞. Note that d(∅, z) < L for every z ∈ T 0 . For every s < L, the set {z ∈ T 0 : d(∅, z) s} is closed and contained in a finite number of edges, and thus it is compact. Let (z n ) ∞ 1 be a Cauchy sequence in T 0 . Then the sequence d(∅, z n ) is a Cauchy sequence, so it converges to some limit d ∞ L. If the limit d ∞ < L, then we see that the Cauchy sequence (z n ) belongs to a compact subset of T 0 , and thus it converges. On the other hand, if d(∅, z n ) → L, then the Cauchy sequence cannot converge, since a limit z would have to satisfy d(∅, z) = lim n→∞ d(∅, z n ) = L, but no such z exists in T 0 . 
Consider now the completion T of T 0 ; T is a real tree by Theorem 6.5, and we call T a complete infinite binary tree. We claim that T \ T 0 may be identified with the set {0, 1} ∞ of infinite strings from {0, 1}. In fact, if v = ξ 1 ξ 2 · · · ∈ {0, 1} ∞ , then let v n := ξ 1 · · · ξ n ∈ V for each n 0; we have d(v n , v m ) = n<i m ℓ i when n m, and thus (v n ) is a Cauchy sequence in T 0 so it has a limit in T which we represent by v. We have d(∅, v) = L, and Further, let L + n := i n ℓ i ; thus L + 1 = L, and L + n ց 0 as n → ∞. It is then easy to see that for any v, v ′ ∈ {0, 1} ∞ , regarded as elements of T , we have In particular, this shows that two different strings in {0, 1} ∞ represent different points in T , so we may regard {0, 1} ∞ as a subset of T . Note also that, since L + n → 0 as n → ∞, the metric (10.1) induces the product topology on {0, 1} ∞ ; thus {0, 1} ∞ is a compact subset of T , homeomorphic to the Cantor set. Finally, if (z n ) is any Cauchy sequence in T 0 without limit in T 0 , we have seen that d(∅, z n ) → L, and since ℓ k → 0 as k → ∞, it follows easily that we may approximate each z n by z ′ n ∈ V such that d(z n , z ′ n ) → 0 as n → ∞. Then z ′ n is a finite string; we extend it (arbitrarily) to an infinite string z ′′ n ∈ {0, 1} ∞ and note that d(z ′ n , z ′′ n ) = L + |z ′ n |+1 → 0. The complete infinite binary tree T is compact; this follows either by using a modification of the argument above to show that an arbitrary sequence (z n ) in T has a subsequence that converges, or by noting that for every ε > 0, there is a finite ε-net in T , since {0, 1} ∞ is compact, and so is the set {z ∈ T : d(z, {0, 1} ∞ ) ε}; we omit the details. Note that the complete infinite binary tree T = T 0 ∪ {0, 1} ∞ regarded as a set is the same for every sequence (ℓ n ) ∞ 1 satisfying the assumptions ℓ n > 0 and n ℓ n < ∞; furthermore, it is easily seen that the topology of T is the same for all such (ℓ n ) ∞ 1 . However, the metric on T depends on (ℓ n ) ∞ 1 , as is seen e.g. by (10.1). It is easily seen that the set of leaves T L = {0, 1} ∞ , and thus the skeleton It is easily verified that this is a semimetric; thus, if we define an equivalence relation on [0, ℓ] by s ≡ t if d(s, t) = 0, then the quotient space T g := [0, ℓ]/ ≡ is a metric space; moreover, it is not difficult to show that T g is connected and satisfies the 4-point inequality; thus T g is a real tree by Theorem 6.1. The quotient map [0, ℓ] → T g is continuous, and thus T g is a compact real tree. Note that if s t, then s ≡ t ⇐⇒ g(u) g(s) = g(t) for every u ∈ [s, t]. (Informally, we may think of obtaining T g by putting glue on the downside of the graph of g, and then compressing the x-axis.) As a simple example, a finite combinatorial tree as in Example 10.1 or 10.2 can be constructed in this way by taking g(t) to be the contour function of the tree, defined as the height (distance to the root) of a particle that moves with unit speed along the "outside" of the tree, starting and ending at the root. In fact, every compact rooted real tree may be constructed in this way (up to isometry) using a suitable function g : In applications, as in the following two examples, g(t) is usually a random function, and then T g is a random real tree. If we for simplicity let ℓ be fixed (for example, ℓ = 1), then the map g → T g is a continuous map from C[0, ℓ] to the set T 1 of rooted compact real trees with the Gromov-Hausdorff metric in Section 9, see [10]. 
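For the reader's convenience, the semimetric on [0, ℓ] underlying Example 10.6 is the standard one used when coding real trees by continuous functions; in the notation above it reads
\[
  d_g(s,t) \;:=\; g(s) + g(t) - 2\min_{u \in [s\wedge t,\; s\vee t]} g(u), \qquad s,t \in [0,\ell],
\]
so that \(s \equiv t\) if and only if \(d_g(s,t)=0\), the quotient \(T_g := [0,\ell]/\!\equiv\) carries the induced metric, and the root is the equivalence class of 0. This is consistent with the characterization stated above: for \(s \le t\), one has \(s \equiv t\) exactly when \(g(u) \ge g(s) = g(t)\) for every \(u \in [s,t]\).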
In particular, this map is (Borel) measurable, so if g is a random element of C[0, ℓ], then T g is a well-defined random element of the Polish space T 1 of rooted compact real trees. Example 10.7. The Brownian continuum random tree, originally constructed (in several different ways) by Aldous [1,2,3], is the random real tree T e obtained by the construction in Example 10.6 letting g(t) be a random (normalized) Brownian excursion e : [0, 1] → [0, ∞); see [3,Corollary 22]. (Actually, Aldous defined the Brownian continuum random tree to be T 2e in our notation, but the convention has later changed to T e ; of course, the results differ only by a scaling.) See e.g. [1; 2; 3], [11] and [16] for properties of this random real tree. In particular, T e has almost surely a countably infinite number of branch points, all of degree 3, and an uncountable number of leaves. Example 10.8. More generally, a Lévy tree is a random real tree constructed as in Example 10.6 letting g be a random continuous fuction known as the height process of a Lévy process (with certain conditions), see [9; 10]. In the special case when the Lévy process is Brownian motion, this height process is a Brownian excursion and we obtain the Brownian continuum random tree as in Example 10.7. Other special cases are the stable trees, see [16]. Example 10.9. Let T be a partially ordered set such that (i) Any two elements x, y ∈ T have a greatest lower bound x ∧ y. (ii) For every x ∈ T , the set L x := {y ∈ T : y x} is linearly ordered. (iii) There is a height function h : T → R such that for every x ∈ T , the restriction h : L x → R is an order-preserving bijection onto an interval It is easily seen from (i) and (ii) that a in (iii) cannot depend on x. Moreover, either h(L x ) = [a, h(x)] for all x, and then T has a smallest element o with h(o) = a, or h(L x ) = (a, h(x)] for every x, and then T has no minimum (or minimal) element. Define It is easily seen that d is a metric on T , which makes T a real tree. The path [x, y] between two points x, y ∈ T consist of the two parts [x, x ∧ y] and [x ∧ y, y], which are subsets of L x and L y , respectively. If T is has a minimum element o, we choose o as a root, and then the partial order defined in (7.1) is the original order. Moreover, h(x) = d(x, o)+ a with a := h(o). Conversely, if (T, ρ) is a rooted real tree, the partial order defined in (7.1) satisfies (i)-(iii) above with the height function h(x) := d(x, ρ), and the construction above returns the original metric on T . It is easily verified that the trees constructed in Example 10.6 are of this type, with height function g (after identifying equivalent points). It is easily verified that is a partial order, and that it satisfies (i)-(iii) in Example 10.9 with the height function h (with a = −∞). Hence, (10.4) defines a metric that makes T inte a real tree. Note that this is a very large tree. Its cardinality is 2 c , and every point in T has uncountable degree (more precisely, also of cardinality 2 c ). In particular, T is not separable. See [11,Examples 3.18 and 3.45] for further properties of this real tree. The length measure Every real tree has a natural measure on it, defined as follows. (See e.g. Note that, by definition, λ(T L ) = 0. In the definition (11.1), if A is a Borel set in T , then A ∩ T o is a Borel subset of T o , and thus λ(A) is well defined. 
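For convenience, the definition referred to as (11.1) is, in the form commonly used (and consistent with the remarks that follow): writing \(T^o = T \setminus T_L\) for the skeleton and \(\mathcal{H}^1\) for the one-dimensional Hausdorff measure on the metric space \(T^o\),
\[
  \lambda(A) \;:=\; \mathcal{H}^1\bigl(A \cap T^o\bigr), \qquad A \subseteq T \text{ Borel}.
\]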
If T o is a Borel subset of T , or more generally a H 1 -measurable subset of T , then we can also define the length measure by (11.1) interpreting H 1 as the Hausdorff measure on T . In general, T o is not measurable (see Example 11.6 below), but it is in most cases of interest (and in particular for all compact T ) by Theorem 11.4 below. Alternatively, we can always (even if T o is not measurable) define λ by (11.1) interpreting H 1 as the outer Hausdorff meaure on T . Remark 11.2. We have here defined λ as a Borel measure. Alternatively, we may more generally define it by (11.1) We note some elementary properties of λ, which justify the name length measure. Note that for every x, y ∈ T , the set [x, y] is isometric to the interval Proof. For every x ∈ T , we have λ{x} = 0 by (11.1). Hence, for every x, y ∈ T , λ([x, y]) = λ((x, y)). Furthermore, (x, y) is a subset of T o isometric to the interval (0, d(x, y)) ⊂ R, and thus H 1 ((x, y)) = H 1 ((0, d(x, y)) = d(x, y), since the Hausdorff measure H 1 on R equals the Lebesgue measure. To show uniqueness, suppose that λ ′ is another Borel measure on T with λ ′ ([x, y]) = d(x, y) for all x, y ∈ T , and λ ′ (T L ) = 0. Note first that then λ ′ {x} = 0 for every x ∈ T , and thus for all x, y ∈ T , we have, by the assumption and (11.3), (11.5) We may assume that T o = ∅. Let x 0 , x 1 , . . . be a dense subset of T o , and let, for n 1, T n is a pathwise connected subset of T , and thus T n is a real tree by Theorem 5.1. (In fact, by Theorem 5.3, T n is the subtree spanned by x 0 , . . . , x n .) We consider the restrictions of λ and λ ′ to the compact (and thus Borel) subset T n . First, for every n 2 we have [x 0 , x n ]∩T n−1 = [x 0 , y n ] for some y n ∈ T n−1 , and then T n = T n−1 ∪ (y n , x n ] is a partition into two disjoint Borel subsets; hence, induction and (11.5) yield . We see also that H 1 (T L ) = H 1 ({0, 1} ∞ ) > 0 when γ 1 (and ∞ when γ < 1); this shows that in general, the length measure is not equal to the Hausdorff measure H 1 on T . We see also that the total length measure λ(T ) = ∞ 1 2 n−γn is finite for γ > 1 but infinite for γ 1. Example 11.6. Consider the complete infinite binary tree T in Example 10.5, for some sequence (ℓ n ) ∞ 1 , and let A ⊆ T L be an arbitrary subset of T L = {0, 1} ∞ . Define the real tree T A by attaching an interval [x, x ′ ] of length 1 to every x ∈ A in the obvious way. Then T A is a real tree with Leaf measure In some applications, a real tree is equipped with a different (Borel) measure, which, in contrast to the length measure in Section 11, is supported on the set of leaves T L . For simplicity, suppose that T is a separable real tree, so that T L is a Borel set by Theorem 11.4. We then may call any Borel measure supported on T L a leaf measure. Note that a leaf measure thus has to be specified, and is not automatically determined by T , unlike the length measure in Section 11. Usually, one considers leaf measures that are probability measures; they thus give a meaning to "a random leaf". Aldous [3] defines a continuum tree as a rooted real tree (with some extra conditions) equipped with a nonatomic probability measure that is supported on the set of leaves (and thus a leaf measure in our sense), and furthermore has full support in the sense that for any x in the skeleton of the tree, the set {y : y > x} (recall (7.1)) has positive measure. Example 12.1. 
If T is constructed from a finite combinatorial tree as in Example 10.1 or 10.2, then T has a finite number of leaves, and a natural leaf measure is given by the uniform distribution on T L . Example 12.2. If T is constructed from a continuous function g : [0, ℓ] → [0, ∞) as in Example 10.6, then the natural (quotient) mapping [0, ℓ] → T is continuous, and thus measurable, so it maps the Lebesgue measure on [0, ℓ] to a measure µ on T . This is in general not a leaf measure (one counterexample is when g is the contour function of a finite combinatorial tree as in There are several alternatives to Definition A.1, with conditions that are equivalent in the sense that if one holds, then so do the others, but with δ replaced by Cδ for some (small) constant C. (The conditions are in general not equivalent with the same δ.) Some of these alternative conditions are given (or implicit) in the following lemmas. See further e.g. [14], [6] and [18]. Proof. As the proof of Lemma 6.7, adding 2δ to the left-hand sides of (6.5)-(6.8). Lemma A.5 ( [14]). Let X be a metric space and let o ∈ X be fixed. Let δ 0. If (A.1) holds for w = o and all x, y, z ∈ X, then it holds for all x, y, z, w ∈ X with δ replaced by 2δ. Proof. We first show that the assumption implies that, for any x, y, z, w, To see this, we first note that both sides are symmetric in x and y, and also in z and w; hence, by interchanging (x, y) and/or (z, w) if necessary, we may assume that (x, z) o is the largest of the four numbers (x, z) o , (x, w) o , (y, z) o , (y, w) o . In this case, the assumption implies In particular, we can choose o = w in (A.3). Since (w, v) w = 0 for every v by (6.3), we obtain (x, y) w (x, z) w ∧ (y, z) w − 2δ, (A.7) as asserted. A geodesic in a metric space X is an isometric curve, i.e., an isometric mapping of an interval I ⊆ R into X. If a geodesic ϕ is defined on an interval I = [a, b] that is closed and finite, we say that the geodesic has the endpoints ϕ(a) and ϕ(b), and that it joins ϕ(a) and ϕ(b). (Then, the geodesic necessarily has length d x,y , and we may choose I = [0, d x,y ].) Note that condition (T1) says that for every x, y ∈ X, there is a unique geodesic from x to y (provided we normalize I = [0, d x,y ]). More generally, a metric space X = (X, d) is geodesic, if every pair x, y ∈ X is joined by a geodesic. In other words, we assume the existence part of (T1), but uniqueness is not assumed. In particular, every real tree is geodesic. Gromov hyperbolicity is often studied for geodesic metric spaces. This can be done without real loss of generality by the following result by Bonk and Schramm [5], to which we refer for a proof. Theorem A.6 (Bonk and Schramm [5,Theorem 4.1]). Let δ 0. A metric space is δ-hyperbolic if and only if it can be isometrically embedded into a δ-hyperbolic complete geodesic metric space. For geodetic metric spaces, there are further conditions equivalent to Gromov hyperbolicity. We note first an extension of Lemma 3.3 to general geodesic spaces. Lemma A.7. Let (X, d) be a geodesic metric space, and let x, y, z ∈ X. Then (x, y) z = 0 if and only if there exists a geodesic from x to y that contains z. Proof. As for Lemma 3.3. In a geodesic metric space, it is enough to assume the hyperbolicity condition (A.1) when the left-hand side is 0, i.e., by Lemma A.7, when w is on a geodesic joining x and y. (i) If X is δ-hyperbolic, then every triangle in X is 2δ-slim. (ii) If every triangle in X is δ-slim, then X is 3δ-hyperbolic. Remark A. 15. 
If every triangle in X is δ-slim (or δ-thin), then any two geodesics with the same endpoints are within distance 2δ of each other. (I.e., their Hausdorff distance is 2δ.) This follows by considering the triangle obtained by subdividing one of the geodesics at an arbitrary interior point. (Or, more bravely, by considering the geodesics as two sides of a degenerate triangle with two vertices coinciding.) Remark A. 16. In particular, using Theorem A.2, a geodesic metric space is a real tree if and only if every triangle is 0-slim (or 0-thin). In fact, if this holds, then by Remark A.15, geodesics are unique, and thus (T1) holds. Then, a triangle xyz is 0-slim if and only if (T2b) holds for all permutations of x, y, z. Conversely, each triangle in a real tree is 0-thin as a consequence of Lemmas 3.5 and 3.6.
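As a recap of the quantities compared in Section 6 and in this appendix, their standard forms (with the normalizations used above for (6.3) and (A.1)) are:
\[
  (x,y)_z \;:=\; \tfrac12\bigl(d(x,z)+d(y,z)-d(x,y)\bigr) \qquad \text{(Gromov product, cf. (6.3))},
\]
\[
  d(x,y)+d(z,w) \;\le\; \max\bigl\{\,d(x,z)+d(y,w),\; d(x,w)+d(y,z)\,\bigr\} \qquad \text{(four-point inequality (6.1))},
\]
\[
  (x,y)_w \;\ge\; \min\bigl\{(x,z)_w,\,(y,z)_w\bigr\} - \delta \qquad \text{($\delta$-hyperbolicity, cf. (A.1))};
\]
taking \(\delta = 0\) in the last display recovers condition (6.4), which by Lemma 6.7 is equivalent to the four-point inequality.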
Chitosan/Cyclodextrin/TPP Nanoparticles Loaded with Quercetin as Novel Bacterial Quorum Sensing Inhibitors The widespread emergence of antibiotic-resistant bacteria has highlighted the urgent need of alternative therapeutic approaches for human and animal health. Targeting virulence factors that are controlled by bacterial quorum sensing (QS), seems a promising approach. The aims of this study were to generate novel nanoparticles (NPs) composed of chitosan (CS), sulfo-butyl-ether-β-cyclodextrin (Captisol®) and/or pentasodium tripolyphosphate using ionotropic gelation technique, and to evaluate their potential capacity to arrest QS in bacteria. The resulting NPs were in the size range of 250–400 nm with CS70/5 and 330–600 nm with CS70/20, had low polydispersity index (<0.25) and highly positive zeta potential ranging from ζ ~+31 to +40 mV. Quercetin, a hydrophobic model flavonoid, could be incorporated proportionally with increasing amounts of Captisol® in the NPs formualtion, without altering significantly its physicochemical properties. Elemental analysis and FTIR studies revealed that Captisol® and quercetin were effectively integrated into the NPs. These NPs were stable in M9 bacterial medium for 7 h at 37 °C. Further, NPs containing Captisol® seem to prolong the release of associated drug. Bioassays against an E. coli Top 10 QS biosensor revealed that CS70/5 NPs could inhibit QS up to 61.12%, while CS70/20 NPs exhibited high antibacterial effects up to 88.32%. These results suggested that the interaction between NPs and the bacterial membrane could enhance either anti-QS or anti-bacterial activities. Introduction The current poor efficacy of antibiotics to treat bacterial disease, due to the increasing widespread emergence of resistance, highlights the urgent need for alternative therapeutic strategies. Rather than focusing on targeting bacteria either by bactericidal or bacteriostatic agents, targeting their virulence and associated factors, seems a more promising alternative approach. Such virulence factors are required for infection (e.g., toxin function and delivery, regulation of virulence expression and bacterial adhesion); they seem to be preserving the endogenous host microbiome and impose less selective pressure on pathogenic bacteria and in theory, decrease resistance [1]. Many bacteria use a cell-cell communication process termed quorum sensing (QS) to communicate, coordinately regulate their gene expression and synchronise their collective social behaviours, such as biofilm formation, bioluminescence and secretion of virulence factors [2,3]. QS involves the production, detection of, and response to extracellular signalling molecules known as autoinducers [4]. QS is not essential for carriers that transport quercetin into different tissues, such as vascular, where β-glucuronidase will deconjugate it into the aglycone form which has the most biological effect [57]. It may be that the entrapment of quercetin in chitosan-cyclodextrin nanoparticles might help to improve the solubility, potentiate the biological effects and improve the bioavailability of quercetin in a controlled manner. Chitosan/cyclodextrin nanoparticles have been reported as potential carriers for the oral delivery of small peptides [63] as well as for the gene delivery to the airway epithelium [64]. 
In the present work, we aimed to design a novel anti-QS formulation that combine the virtues of chitosan and Captisol ® nanoparticles in terms of association for quercetin, modulating the release profile and enhancing the anti-QS efficacy using a E. coli Top 10 AHL-regulated biosensor. Preparation of Unloaded Nanoparticles Nanoparticles composed of chitosan, and either SBEβCD or mixtures of SBEβCD/TPP, were obtained via the ionotropic gelation technique [65]. This method is based on the ionic interaction between the positively charged CS and the negatively charged TPP and/or SBEβCD, and the ability of CS to form inter-and intra-molecular linkages with poly-anions thus resulting in the formation of colloidal particles. By contrast with macroscopic gelation, this process occurs in dilute conditions. The process is extremely mild as it only involves the mixture of two aqueous phases at room temperature. Previous studies have reported the use of the neutral hydroxypropyl β-cyclodextrin derivative in association with CS to form nanoparticles [66,67]. In this study we decided to choose a negatively charged cyclodextrin derivative-SBEβCD, which allegedly could be incorporated more effectively into nanoparticles due to stronger ionic interactions with the positively charged CS. Nanoparticles could be prepared either in the presence or absence of TPP by mixing CS with different amounts of SBEβCD (Tables 1 and 2). The resulting NPs prepared with CS 70/5 and CS 70/20 were in the size range of 250-400 and 330-600 nm, PDI 0.03-0.19 and 0.13-0.25, respectively, and invariably high positive zeta potential ranging from +31 to +40 mV. Generally, if the contents of initial anionic charged species (SBEβCD and/or TPP) was too low (e.g., CS/CD/TPP mass ratio 4/1/0 and 4/2/0 in Tables S1 and S2), NPs either did not form or their yields were too low for characterization. On the other hand, too much of initial anionic charged species, resulted in either aggregation or the NPs could not be re-suspended after isolation (Figures 1 and 2). We reasoned that when a fixed amount of CS was used, the amount of cyclodextrin that is adequate for NPs formation varied with the proportion of Captisol ® which carries more than six sulfate charges per mol (SBEβCD, D.S. ≈ 6.4), as well as the presence of TPP cross-linker, which is supposed to compete with SBEβCD for the positively charged amino group of CS. If the net charge ratio ( + / − ) ranges from 0.75 to 1.25 (near the isoelectric point), precipitation occurred immediately (e.g., CS/CD/TPP mass ratio 4/1.5/1, 4/2/0.75, 4/3/0.5 and 4/4/0.25 in Tables S1 and S2). Around this point, NPs of greater size were obtained (e.g., at CS 70/20 /CD/TPP mass ratio 4/4/0 charge ratio ≈ 1.5). Our results are consistent with previous works [66], that report when the SBEβCD/TPP ratio decreased, the size, zeta potential and production yield of NPs increased (cf. mass ratio 4/1/0.5 vs. 4/2/0.25 in Table 1; and mass ratio 4/1/0.75 vs. 4/2/0.5 in Table 2). The lower zeta potential with increasing SBEβCD amounts in these formulations could be explained by an increased masking of free positively charged amino groups of CS. It also might be noted that TPP incorporated in the formulation helps to increase the production yield. Molecules 2017, 22,1975 5 of 23 charge ratio ≈ 1.5). Our results are consistent with previous works [66], that report when the SBEβCD/TPP ratio decreased, the size, zeta potential and production yield of NPs increased (cf. mass ratio 4/1/0.5 vs. 
4/2/0.25 in Table 1; and mass ratio 4/1/0.75 vs. 4/2/0.5 in Table 2). The lower zeta potential with increasing SBEβCD amounts in these formulations could be explained by an increased masking of free positively charged amino groups of CS. It also might be noted that TPP incorporated in the formulation helps to increase the production yield. Preparation and Characterization of Quercetin-Loaded Cyclodextrin-Containing CS Nanoparticles NPs loaded with quercetin were prepared. To achieve a comprehensive picture of the encapsulation process of this compound in the NPs, phase-solubility studies with increasing SBEβCD concentrations were performed ( Figure 3). As expected, quercetin showed a marked increase in their solubility as the SBEβCD concentration increased. In fact, a 325-fold increase in quercetin solubility was achieved using 40 mM SBEβCD solutions [66]. Tables 3 and 4 show the size, PDI, zeta potential and production yield of quercetin-loaded NPs of CS70/5 and CS70/20, respectively. In all formulations, positive zeta potential values were detected, suggesting that CS is mainly located on the surface of the particles. The addition of quercetin did not change significantly the physicochemical properties of the NPs, except for the PDI of the CS70/20-SBEβCD NP formulations that increased slightly. Preparation and Characterization of Quercetin-Loaded Cyclodextrin-Containing CS Nanoparticles NPs loaded with quercetin were prepared. To achieve a comprehensive picture of the encapsulation process of this compound in the NPs, phase-solubility studies with increasing SBEβCD concentrations were performed ( Figure 3). As expected, quercetin showed a marked increase in their solubility as the SBEβCD concentration increased. In fact, a 325-fold increase in quercetin solubility was achieved using 40 mM SBEβCD solutions [66]. Tables 3 and 4 show the size, PDI, zeta potential and production yield of quercetin-loaded NPs of CS 70/5 and CS 70/20 , respectively. In all formulations, positive zeta potential values were detected, suggesting that CS is mainly located on the surface of the particles. The addition of quercetin did not change significantly the physicochemical properties of the NPs, except for the PDI of the CS 70/20 -SBEβCD NP formulations that increased slightly. In previous studies, it has been shown that at least 99% of the maximum drug solubility was already reached in 24 h [66]. This result allowed to reduce drug/SBEβCD incubation time to 24 h for the solutions intended for loaded NP preparation. In the next step, we investigated how the solubilization of quercetin by its inclusion on the CD cavity could facilitate the association of the complexed flavonoid to CS NPs (Figure 4a Specifically, with NP formulations 4/3/0 and 4/4/0, the LE increase up to 7.33-and 8.1-fold, respectively, when compared with the control formulation, thus suggesting that the LE increased in proportion with the increase of the amount of Captisol ® . As can observed from Tables S3 and S4, formulations without Captisol ® , the LE achieved was very low. By contrast, when the amount of TPP decreased, the encapsulation efficiency (EE) and LE were elevated, thus effectively suggesting that TPP might compete with quercetin during the association with CS in these formulations ( In previous studies, it has been shown that at least 99% of the maximum drug solubility was already reached in 24 h [66]. This result allowed to reduce drug/SBEβCD incubation time to 24 h for the solutions intended for loaded NP preparation. 
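As an illustration of the Higuchi-Connors treatment applied to such linear (A_L-type) phase-solubility diagrams, the apparent 1:1 stability constant is obtained from the straight-line portion as K = slope/(S0·(1 − slope)), where S0 (the intercept) is the intrinsic drug solubility. The Python sketch below shows the calculation only; the numerical values are placeholders, not the measured quercetin/SBEβCD data.

import numpy as np

def higuchi_connors_k(cd_conc_mM, drug_conc_mM):
    """Apparent 1:1 stability constant (M^-1) from a linear (A_L-type)
    phase-solubility diagram: K = slope / (S0 * (1 - slope)),
    where S0 (the intercept) is the intrinsic drug solubility."""
    slope, intercept_mM = np.polyfit(cd_conc_mM, drug_conc_mM, 1)
    s0_M = intercept_mM / 1000.0            # convert mM to M
    k = slope / (s0_M * (1.0 - slope))      # slope is dimensionless
    return slope, s0_M, k

# Placeholder data (mM); in practice these come from the UV assay at 374 nm.
cd = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
quercetin = np.array([0.004, 0.35, 0.68, 1.30, 2.60])
slope, s0, k = higuchi_connors_k(cd, quercetin)
print(f"slope = {slope:.3f}, S0 = {s0:.2e} M, K(1:1) about {k:.0f} M^-1")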
In the next step, we investigated how the solubilization of quercetin by its inclusion on the CD cavity could facilitate the association of the complexed flavonoid to CS NPs (Figure 4a Figure 4b). Specifically, with NP formulations 4/3/0 and 4/4/0, the LE increase up to 7.33-and 8.1-fold, respectively, when compared with the control formulation, thus suggesting that the LE increased in proportion with the increase of the amount of Captisol ® . As can observed from Tables S3 and S4, formulations without Captisol ® , the LE achieved was very low. By contrast, when the amount of TPP decreased, the encapsulation efficiency (EE) and LE were elevated, thus effectively suggesting that TPP might compete with quercetin during the association with CS in these formulations (CS 70 Elemental Analysis of Selected NPs Many approaches have been proposed for the quantification of SBEβCD in nanoparticle carrier systems. Most of them rely on colourimetric reactions of the cyclodextrin with an appropriate reagent (e.g., fading of phenolphthalein reaction) [63,68]. These methods are useful, however, the need for either lyophilized of the supernatant of NPs or using the enzymatic reaction at 40 °C for 60 min in 2% starch, have limited their application. In this study, elemental analysis was performed to determine the composition of the different nanoparticle formulations ( Figure 5). Using this technique, the composition of the NPs could be determined by comparing the C-N mass ratios (or the C-N-S mass ratios) of CS and SBEβCD with those of the NPs. CS/SBEβCD/TPP (CS70/5 4/0/0.75 and CS70/20 4/0/1) NPs were analyzed and taken as the references for SBEβCD-containing NP formulations. Their compositions were 68.93% CS, 31.07% TPP and 72.73% CS, 27.27% TPP for CS70/5 and CS70/20 NPs, respectively. These values are close to the expected ones from theoretical ratios at which the materials were incorporated. As expected, the anionic SBEβCD could be incorporated into the NPs with considerable high efficiency: 41.03 and 34.5% (w/w) of the final composition NPs corresponding to the respective SBEβCD incorporated (CS70/5 4/2/0.25 and CS70/20 4/2/0.5, respectively). Particularly, SBEβCD was effectively entrapped into the CS70/20 4/3/0 NPs, representing up to 52.7% of the total components of the nanoparticles. Stability Studies As the final intended application of these nanoparticles is their use for anti-QS in gram negative bacteria, we determined their stability in M9 medium (pH 6.8 and 37 °C). The results showed that both loaded and unloaded nanoparticles did not suffer a significant change in their size following Elemental Analysis of Selected NPs Many approaches have been proposed for the quantification of SBEβCD in nanoparticle carrier systems. Most of them rely on colourimetric reactions of the cyclodextrin with an appropriate reagent (e.g., fading of phenolphthalein reaction) [63,68]. These methods are useful, however, the need for either lyophilized of the supernatant of NPs or using the enzymatic reaction at 40 • C for 60 min in 2% starch, have limited their application. In this study, elemental analysis was performed to determine the composition of the different nanoparticle formulations ( Figure 5). Using this technique, the composition of the NPs could be determined by comparing the C-N mass ratios (or the C-N-S mass ratios) of CS and SBEβCD with those of the NPs. CS/SBEβCD/TPP (CS 70/5 4/0/0.75 and CS 70/20 4/0/1) NPs were analyzed and taken as the references for SBEβCD-containing NP formulations. 
Their compositions were 68.93% CS, 31.07% TPP and 72.73% CS, 27.27% TPP for CS 70/5 and CS 70/20 NPs, respectively. These values are close to the expected ones from theoretical ratios at which the materials were incorporated. As expected, the anionic SBEβCD could be incorporated into the NPs with considerable high efficiency: 41.03 and 34.5% (w/w) of the final composition NPs corresponding to the respective SBEβCD incorporated (CS 70/5 4/2/0.25 and CS 70/20 4/2/0.5, respectively). Particularly, SBEβCD was effectively entrapped into the CS 70/20 4/3/0 NPs, representing up to 52.7% of the total components of the nanoparticles. Elemental Analysis of Selected NPs Many approaches have been proposed for the quantification of SBEβCD in nanoparticle carrier systems. Most of them rely on colourimetric reactions of the cyclodextrin with an appropriate reagent (e.g., fading of phenolphthalein reaction) [63,68]. These methods are useful, however, the need for either lyophilized of the supernatant of NPs or using the enzymatic reaction at 40 °C for 60 min in 2% starch, have limited their application. In this study, elemental analysis was performed to determine the composition of the different nanoparticle formulations ( Figure 5). Using this technique, the composition of the NPs could be determined by comparing the C-N mass ratios (or the C-N-S mass ratios) of CS and SBEβCD with those of the NPs. CS/SBEβCD/TPP (CS70/5 4/0/0.75 and CS70/20 4/0/1) NPs were analyzed and taken as the references for SBEβCD-containing NP formulations. Their compositions were 68.93% CS, 31.07% TPP and 72.73% CS, 27.27% TPP for CS70/5 and CS70/20 NPs, respectively. These values are close to the expected ones from theoretical ratios at which the materials were incorporated. As expected, the anionic SBEβCD could be incorporated into the NPs with considerable high efficiency: 41.03 and 34.5% (w/w) of the final composition NPs corresponding to the respective SBEβCD incorporated (CS70/5 4/2/0.25 and CS70/20 4/2/0.5, respectively). Particularly, SBEβCD was effectively entrapped into the CS70/20 4/3/0 NPs, representing up to 52.7% of the total components of the nanoparticles. Stability Studies As the final intended application of these nanoparticles is their use for anti-QS in gram negative bacteria, we determined their stability in M9 medium (pH 6.8 and 37 °C). The results showed that both loaded and unloaded nanoparticles did not suffer a significant change in their size following Stability Studies As the final intended application of these nanoparticles is their use for anti-QS in gram negative bacteria, we determined their stability in M9 medium (pH 6.8 and 37 • C). The results showed that both loaded and unloaded nanoparticles did not suffer a significant change in their size following incubation for 7 h (Figure 6a-d). The size varied within a small range 300-500 and 200-400 nm for unloaded and quercetin-loaded NPs, respectively. However, upon contact with M9 medium some formulations exhibited a size increase which could be attributed to a swelling effect. There was a slight variation in PDI of these NPs during the first 3 h, after that the PDI tended to stabilize at~0.5 and~0.3 for unloaded (Figure 6a,c) and quercetin-loaded (Figure 6b,d) formulations, respectively. Previous studies have also shown the possible role of cyclodexrtins in particle stabilization [66,67,69]. 
The colloidal stability is very important, since it maximizes the number of NPs covering the surface of bacteria as well as maintaining the inherent surface effect to volume ratio of these NPs. Molecules 2017, 22,1975 8 of 23 incubation for 7 h (Figure 6a-d). The size varied within a small range 300-500 and 200-400 nm for unloaded and quercetin-loaded NPs, respectively. However, upon contact with M9 medium some formulations exhibited a size increase which could be attributed to a swelling effect. There was a slight variation in PDI of these NPs during the first 3 h, after that the PDI tended to stabilize at ~0.5 and ~0.3 for unloaded (Figure 6a,c) and quercetin-loaded (Figure 6b,d) formulations, respectively. Previous studies have also shown the possible role of cyclodexrtins in particle stabilization [66,67,69]. The colloidal stability is very important, since it maximizes the number of NPs covering the surface of bacteria as well as maintaining the inherent surface effect to volume ratio of these NPs. In Vitro Release of Quercetin As can be appreciated from Figure 7, in formulations without Captisol ® , the encapsulated quercetin was released up to 90% within 60 min. The fast release of quercetin from the nanometric matrix could be explained because of its weak interaction with chitosan. This result was in accordance with previous studies where almost all the payload was released from CS/TPP NPs in 15 min [66,70]. In contrast, nanoformulations containing Captisol ® seem to prolong the release of the loaded-drug. The different composition of these NPs regarding different amounts of TPP and Captisol ® has a negligible influence on the release profile of quercetin when around 40% of quercetin was released after 6 h incubated in M9 medium at 37 °C. The slow release of quercetin in these formulations could be understood as the expected consequence of the inclusion complexes formed by the hydrophobic cavity of Captisol ® and quercetin. The strong interaction between drug and Captisol ® might have an impact in controlling the drug release. Previous studies have reported the ability of Captisol ® to form inclusion complexes with auto-inducers, especially with AHL with acyl tail from C4 to C8 [71][72][73][74], hence we speculated that the drug release rate might be increased significantly when AHL is added to the bacterial medium leading to the competition of AHL and quercetin to occupy the cavity of Captisol ® . The simultaneously burst release of loaded-drugs (vancomycin and hamamelitannin) In Vitro Release of Quercetin As can be appreciated from Figure 7, in formulations without Captisol ® , the encapsulated quercetin was released up to 90% within 60 min. The fast release of quercetin from the nanometric matrix could be explained because of its weak interaction with chitosan. This result was in accordance with previous studies where almost all the payload was released from CS/TPP NPs in 15 min [66,70]. In contrast, nanoformulations containing Captisol ® seem to prolong the release of the loaded-drug. The different composition of these NPs regarding different amounts of TPP and Captisol ® has a negligible influence on the release profile of quercetin when around 40% of quercetin was released after 6 h incubated in M9 medium at 37 • C. The slow release of quercetin in these formulations could be understood as the expected consequence of the inclusion complexes formed by the hydrophobic cavity of Captisol ® and quercetin. 
The strong interaction between drug and Captisol ® might have an impact in controlling the drug release. Previous studies have reported the ability of Captisol ® to form inclusion complexes with auto-inducers, especially with AHL with acyl tail from C 4 to C 8 [71][72][73][74], hence we speculated that the drug release rate might be increased significantly when AHL is added to the bacterial medium leading to the competition of AHL and quercetin to occupy the cavity of Captisol ® . The simultaneously burst release of loaded-drugs (vancomycin and hamamelitannin) within 1 h and the uptake auto-inducers (either C 6 HSL or 3-oxo-C 12 HSL) was reported elsewhere [75]. FTIR Analysis of Selected NPs Fourier transform infrared spectroscopy (FTIR) analyses were performed on freeze-dried samples of selected loaded NPs to identify the infrared absorption peaks of quercetin, chitosan, Captisol ® , unloaded, quercetin-loaded NPs and to investigate a possible reaction between quercetin and NPs ( Figure 8). FTIR Analysis of Selected NPs Fourier transform infrared spectroscopy (FTIR) analyses were performed on freeze-dried samples of selected loaded NPs to identify the infrared absorption peaks of quercetin, chitosan, Captisol ® , unloaded, quercetin-loaded NPs and to investigate a possible reaction between quercetin and NPs ( Figure 8). FTIR Analysis of Selected NPs Fourier transform infrared spectroscopy (FTIR) analyses were performed on freeze-dried samples of selected loaded NPs to identify the infrared absorption peaks of quercetin, chitosan, Captisol ® , unloaded, quercetin-loaded NPs and to investigate a possible reaction between quercetin and NPs ( Figure 8). indicated that quercetin might be entrapped inside the cavity of Captisol ® rather than present on the surface of nanoparticles. A new peak appeared centered at 794.5 cm −1 (attributed to a peak at 795.92 cm −1 of free quercetin) in loaded 4/2/0.5 NPs when compared with unloaded formulation indicated that quercetin was efficiently associated in the NPs. The disappearance of typical peaks of quercetin after nanoencapsulation has been reported elsewhere [55,76]. FTIR results have confirmed the conjugation between quercetin and the NPs matrix. Bioassay against E. coli Top 10 of Selected NPs We have investigated the influences of free quercetin, chitosan, Captisol ® , unloaded and quercetin-loaded nanoparticles at different concentrations to the responses of AHL-regulated biosensor strain, E. coli Top 10, regarding the evolution of the fluorescence intensity and the bacterial growth (proportional to OD 600 ). The ratio between fluorescence intensity and OD 600 was also calculated and be defined as relative light unit (RLU). To establishing quantitative comparisons, we have selected measurement of the last RLU and OD 600 (i.e., endpoint measurement after 7 h when the growth rate is assumed to enter the stationary phase). The QS in the positive control was set as 100%, and the relative QS of a given treatment is defined as the ratio of its RLU at 7 h with respect to that of the control. Therefore, theoretically, if the relative QS values are equal to one, it means that the evaluated compounds do not have any anti-QS effect. In turn, relative QS values lower than one, are diagnostic of QS inhibition as the OD 600 does not decrease. The recorded results are shown in Figure 9. 
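A minimal sketch of the endpoint metric just described is given below: RLU = GFP fluorescence / OD600, and relative QS = 100 × RLU of the treatment divided by RLU of the positive control at 7 h. The helper names and the numbers are illustrative placeholders rather than the readings behind Figure 9; a treatment is read as anti-QS when relative QS drops while OD600 is roughly unchanged, and as antibacterial when OD600 itself falls strongly.

def relative_qs(fluorescence, od600, ctrl_fluorescence, ctrl_od600):
    """Relative QS (%) of a treatment versus the positive control.
    RLU = fluorescence / OD600; values near 100 indicate no anti-QS effect."""
    rlu = fluorescence / od600
    rlu_ctrl = ctrl_fluorescence / ctrl_od600
    return 100.0 * rlu / rlu_ctrl

def growth_inhibition(od600, ctrl_od600):
    """Percent reduction of bacterial growth relative to the control."""
    return 100.0 * (1.0 - od600 / ctrl_od600)

# Illustrative 7 h endpoint readings (arbitrary units)
print(relative_qs(1200.0, 0.48, 3100.0, 0.52))   # about 42% relative QS
print(growth_inhibition(0.48, 0.52))             # about 7.7% growth inhibition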
There was no inhibition effect to bacterial growth at different concentrations of Captisol ® (Figure 9a,d) namely, 0.1875, 0.375 and 0.75 mg/mL, which are equivalent to the amount of Captisol ® in 4/1/-, 4/2/-, 4/4/-NP formulations, respectively, thus suggesting that Captisol ® is non-toxic to bacteria. However, GFP has reduced significantly in a dose-dependent manner in a range of Captisol ® from 0.1875 to 0.75 mg/mL (Figure 9b,e). Altogether, increasing amounts of free form Captisol ® decreased proportionally relative QS activity from 8.43% to 20.86% (Figure 9h,j). Since the final concentrations of quercetin of loaded-NPs in the bioassays ranging from 0.0028 mg/mL (lowest in CS 70/20 4/0/1) to 0.0373 mg/mL (highest in CS 70/20 4/4/0 NPs), quercetin existing in free form at three different concentrations namely 0.0125, 0.025 and 0.0375 mg/mL was also tested. CS 70/5 and CS 70/20 at the same final concentration in the bioassay (0.75 mg/mL) were also tested. Interestingly, quercetin existing in free form exerted inhibition effect to bacterial growth as evidenced in Figure 9a,d, but the reduction of GFP expression is negligible and the differences between treatments are not very clear. Thus, the free form quercetin exhibited slightly anti-bacterial effect rather than anti-QS effect (Figure 9h,j). In CS 70/5 NPs, chitosan in free form and unloaded NPs showed negligible inhibition on bacterial growth around 20% (Figure 9a,g). When compared with unloaded NPs, quercetin-loaded ones revealed a minor decrease on bacterial growth that might attributed to the final amount of quercetin-loaded in these formulations, ranging from 0.0046 mg/mL in 4/0/0.75 to 0.0178 mg/mL in 4/2/0.25 (Table S3). Of note, when GFP intensities of loaded-NPs are the lowest (Figure 9b), the bacterial growth seems to be affected very little by CS 70/5 NPs, thus displaying an anti-QS effect. In fact, when applied in free form, the highest anti-QS effect observed for Captisol ® and CS 70/5 were 20.86% and 27.0%, respectively (Figure 9h). The anti-QS effect of CS 70/5 NPs both quercetin-loaded and unloaded increased significantly when compared with the single components. Interestingly, unloaded 4/2/0.25 containing a double amount of Captisol ® exhibited equivalent anti-QS effect when compared with unloaded 4/1/0.5. Higher surface charge of unloaded 4/1/0.5 than unloaded 4/2/0.25 nanoparticles (cf. ζ~+38 vs. +36.3 mV, respectively) could be an explanation for this phenomenon since electrostatic interaction between oppositely surface charged of NPs and bacteria favor anti-QS efficiency. It should be noted that in each formulation, loaded NPs showed stronger anti-QS effect than the unloaded ones, thus suggesting that quercetin might act synergistically with chitosan and Captisol ® in increasing the anti-QS effect of these NPs system. As we expected, the highest anti-QS effect, up to 62%, was observed in loaded 4/2/0.25 formulation comprising the greatest amount of Captisol ® , thus the concomitant greatest amount of associated quercetin. With CS70/20 NPs, free form of CS70/20 exhibited significantly inhibitory effect to bacterial growth when 88.67% OD600 reduction was observed in this treatment (Figure 9i). The lower OD600 reduction observed in unloaded NPs (from 30.1% in unloaded 4/0/1 to 71.33% in unloaded 4/2/0.5) revealed that upon nanoencapsulation, the toxicity of CS70/20 was significantly reduced (Figure 9d,i). 
The same trends were observed when in all formulations loaded NPs showed stronger inhibition effect to bacterial growth (from 39.48% in loaded 4/0/1 to 88.32% in loaded 4/4/0 formulations) than the unloaded ones (Figure 9d,i). Since almost both unloaded and loaded formulations of CS70/20 NPs, caused strong reduction in OD600 (up to 88.32% in loaded 4/4/0 NPs), these NPs exhibited antibacterial effect rather than anti-QS effect. The reduction in GFP intensity might stem from the death bacteria that cannot generate GFP, rather than the inhibition to survival bacteria expressing GFP. The antibacterial effect in these NPs could be divided into two main groups. The first group which caused slight [ With CS 70/20 NPs, free form of CS 70/20 exhibited significantly inhibitory effect to bacterial growth when 88.67% OD 600 reduction was observed in this treatment (Figure 9i). The lower OD 600 reduction observed in unloaded NPs (from 30.1% in unloaded 4/0/1 to 71.33% in unloaded 4/2/0.5) revealed that upon nanoencapsulation, the toxicity of CS 70/20 was significantly reduced (Figure 9d,i). The same trends were observed when in all formulations loaded NPs showed stronger inhibition effect to bacterial growth (from 39.48% in loaded 4/0/1 to 88.32% in loaded 4/4/0 formulations) than the unloaded ones (Figure 9d,i). Since almost both unloaded and loaded formulations of CS 70/20 NPs, caused strong reduction in OD 600 (up to 88.32% in loaded 4/4/0 NPs), these NPs exhibited antibacterial effect rather than anti-QS effect. The reduction in GFP intensity might stem from the death bacteria that cannot generate GFP, rather than the inhibition to survival bacteria expressing GFP. The antibacterial effect in these NPs could be divided into two main groups. The first group which caused slight [ Table 2). With the loaded NPs, the higher amount of quercetin loaded in the NPs matrix, the higher antibacterial activities were observed with 4/3/0 and 4/4/0 formulations. The higher activity of loaded 4/2/0.5 NPs than loaded 4/3/0 NPs suggesting the synergistic effect between highly positive surface charged of 4/2/0.5 NPs with the natural Captisol ® 's cavity and the antibacterial effect of quercetin as well. It should be noted that free quercetin and 4/0/1 NPs caused slightly antibacterial effect on bacteria. However, both unloaded and loaded 4/0/1 NPs could reduce the GPF expression more efficiently when compared with free quercetin of different concentrations. Quercetin-loaded NPs showed less toxicity to bacterial than free chitosan, but stronger reduction of GFP were observed with loaded 4/2/0.5 and loaded 4/4/0 NPs suggesting that encapsulation process could reduce the toxicity of NP's component as well as enhance GFP inhibitory activities of these NPs. The lowest florescence intensity observed in loaded 4/4/0 formulation suggests that at this concentration (drug releases up to~40% from the nanoparticle matrix after 6 h, was equivalent to 0.0298 mg/mL) might be too high and caused the toxic for this biosensor bacteria. Discussion CS/SBEβCD nanoparticles were prepared by ionic gelation either in the presence or absence of TPP. Nanosystems were formed by the combination of the electrostatic interaction between CS and SBEβCD, which are oppositely charged, and the ability of CS to experience a liquid-gel conversion due to its ionic interaction with TPP. 
The initial experiments were aimed at screening the best NPs formulations using a derivative of β-CD with degree of substitution of 6.4 (SBEβCD) and two kinds of chitosan namely CS 70/5 and CS 70/20 . As can be seen from the results (Tables 1-4; Figures 1 and 2 This important result allowed shortening the screening processes for the best formulations. Charge ratio can also influence the physicochemical properties of the NPs in terms of size, PDI, zeta potential and production yield. The resulting NPs were in the size range of 250-400 nm with CS 70/5 and 330-600 nm with CS 70/20 , low polydispersity index (<0.25) and always exhibit high zeta potential (ranging from ζ +31 to +40 mV), thus suggesting that CS is mainly located on the surface of the particles. It could be noted that production yield of formulations containing TPP increased significantly. Quercetin, a poorly soluble flavonoid, was chosen for testing the ability to load hydrophobic drug of the best NPs formulations in previous part. Quercetin-loaded NPs were characterized in terms of size, PDI, zeta potential and production yield. The results show that the addition of quercetin did not alter significantly the physicochemical properties of the NPs, suggesting that quercetin was fully entrapped in the cavity of cyclodextrin. The noncovalent inclusion complexes formed by Captisol ® 's cavity and guest molecules both in solution and the solid state can lead to alter the physical, chemical and biological properties of guest molecules. The inclusion complexes in which guest molecule was surrounded by hydrophobic environment of Captisol ® 's cavity is ideal for delivering low solubility drug. Solid inclusion complexes between quercetin and Captisol ® have also been studies before in order to enhance the solubility, dissolution rate, as well as improve significantly anti-cancer activity at lower quercetin concentration [77]. Quercetin was released sustainable but higher antioxidant activity and photostability was obtained upon inclusion complexed with β cyclodextrin [78]. The toxicity of quercetin has also been demonstrated to be reduced upon complexing with hydroxypropyl β-cyclodextrin elsewhere [79]. Results from Tables 3 and 4 indicate that Captisol ® could facilitate the association of complexed drug into the CS NPs. The association efficiency in all formulations containing Captisol ® was higher than 85% and the loading efficiency increased linearly with Captisol ® amount. When compared with the controls (4/0/0.75 in CS 70/5 NPs and 4/0/1 in CS 70/20 NPs), the LE of the best formulation increased up to 2.98 and 8.1 times, respectively. Interestingly, the LE of the controls (without Captisol ® ) were negligible and when the amount of TPP decreased, the AE and LE increase suggesting that TPP might compete with quercetin in association with CS in these formulations. Elemental analysis was performed to identify the compositions of selected NP formulations. The values obtained by this technique are close to the theoretical mass ratios at which the materials were incorporated. As expected, Captisol ® was effectively incorporated into the NPs and representing up to 52.7% of the total mass in CS 70/20 4/3/0 NPs. The preparation of nanoparticles containing more than 50% mass of SBEβCD is very crucial since SBEβCD is low toxicity and possess special features in terms of enhancing permeability and protecting drug molecules [80]. 
The strong interaction between the SBEβCD and CS is afforded by the presence of negatively charged sulfate groups in the SBEβCD that ionically interacts with the positively charged CS molecules. FTIR results indicated that quercetin was effectively entrapped inside the cavity of Captisol ® . This result is very important since quercetin will be released gradually from the nanosytems that was driven by the exchange between quercetin and autoinducer (3OC 6 HSL) to occupy the cavity of Captisol ® . The gradualy release has allowed bacteria to have enough time to adapt to the drug and therefore, help to reduce the toxicity of the nanosystems as well as prolong their anti-QS effect. Bioassays against E. coli Top10 biosensor have been carried out to evaluate the bioactivities of NPs derived from two kinds of chitosan with different DA. CS 70/5 NPs exhibited highly anti-QS effect while CS 70/20 NPs showed strongly antibacterial effect. This suggests that CS's DA is a crucial factor that determine the pathway in which NPs might interfere with bacteria. In fact, both anti-QS and anti-bacterial activities increased significantly upon nanoencapsulation, thus highlighting the benefit of unique physicochemical properties and high surface area to volume ratio of NPs that facilitate their attachment to bacteria's membrane and enhance the bioactivities effect of the systems as well as minoring their toxicity. As evidenced in Figure 9, the best anti-QS and anti-bacterial effects were attained in loaded NPs, suggesting that the synergistic effect of chitosan, Captisol ® and quercetin will optimize the bioactivities of nanosystems. Formulations containing Captisol ® (both loaded and unloaded) showed higher either anti QS or antibacterial effects than the control without Captisol ® (4/0/0.75 and 4/0/1, respectively). This suggests that the exchange between the release of quercetin outside Captisol ® 's cavity and the simultaneously uptake of 3OC 6 HSL inside this cavity could be the reason of enhancing bioactivities of these NPs. The uptake of autoinducer inside the cavity making autoinducer cannot reach adequate threshold to activate the fully QS in E. coli Top10 biosensor. Our result is in accordance with previous works [73,74,81] that suggested autoinducers, especially AHLs possessing an acyl chain from C 4 to C 8 (in our case is 3-oxo-C 6 -HSL), could be trapped inside the cavity of Captisol ® , thus leading to the reduction in QS activity. Quercetin at concentration of 16 µg/mL has been reported sofar as an effective inhibitor of QS, biofilm formation and QS-regulated virulence factors in P. aeruginosa PAO1 [82]. The two main QS systems in P. aeruginosa PAO1 are lasI/R and rhlI/R in which lasI and rhlI are involved in autoinducer synthesis, while lasR and rhlR served as receptors. E. coli Top 10, biosensor used in our studies, possesses a cassette luxR transformed from Vibrio fischeri that can only respond to 3OC 6 HSL but cannot produce autoinducer due to lack of luxI cassette. Since QS circuits in P. aeruginosa PAO1 and E. coli Top 10 respond to different type of autoinducers and quercetin exerted anti-QS effect on both systems, we can excluded that quercetin compete for the binding site of the involved receptors with the cognate autoinducers and different mechanisms have been hypothesized. One possibility could be that quercetin inhibited autoinducer synthase enzymes that could be eliminated from our studies, as E. coli Top 10 does not synthesize autoinducer. 
Alternatively, quercetin can bind to domains of the LuxR receptor other than the autoinducer-binding site, which would affect the binding affinity of LuxR for the luxI DNA. A further explanation is that quercetin might accumulate rapidly at the bacterial lipid membrane, blocking the diffusion of AHL into the cytosol [83]. The last interpretation seems the most plausible, as it can explain why free quercetin is highly toxic to bacterial growth in a dose-dependent manner while CS 70/5-loaded NPs are not. On the one hand, quercetin was released in a sustained and controlled manner from the NPs, permitting bacteria enough time to adapt to and metabolize the drug. On the other hand, the aggregation effect due to the high positive charge of these NPs could enhance the anti-QS activity of the system, since they can deliver their payloads locally at the bacterial cell wall, maintain a lower dose over the timespan of the experiment and hence prolong the anti-QS effect of the nanosystems. In addition, our previous study showed that blank nanocapsules could bind completely (100%) to E. coli Top 10 at low concentrations, below the optimal "stoichiometric" nanocapsule/bacterium binding point [84]. However, the precise mechanisms underlying these effects remain to be fully elucidated, and the results need to be confirmed in an in vivo experiment.

As a natural QS inhibitor, quercetin has several advantages. Firstly, quercetin is low in cost and abundant in nature. Secondly, according to the available literature, quercetin has no adverse health effects in humans following oral administration at doses of up to 1000 mg per day for up to 12 weeks [85]. Thirdly, the anti-QS effect of quercetin is observed at very low concentrations compared with most previously reported plant extracts and substances [86,87]. To this end, nanoencapsulation could help to enhance both the anti-QS and the antibacterial effects of the nanosystems. The DA of chitosan plays an important role in determining the pathway by which the NPs interfere with bacteria.

Materials

Chitosan samples were of high-purity research grade from Heppe Medical Chitosan GmbH.

Phase-Solubility Studies

To gain insight into the kinetics and dynamics of quercetin's solubility, phase-solubility studies were performed by adding an excess of the drug to 5 mL of solution containing increasing amounts of SBEβCD (from 0 to 40 mM) in sealed glass containers, stirred at 37 °C until equilibrium (after 3 days). The suspension was then filtered (pore size 0.45 µm), and the quercetin concentration was determined spectrophotometrically (λ = 374 nm) (Jasco V-630 spectrophotometer, Labor und Datentechnik, 64319 Pfungstadt, Germany). In keeping with Higuchi and Connors [88], the apparent 1:1 stability constants were calculated from the straight-line portion of the phase-solubility diagrams.
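In the Higuchi-Connors treatment, the apparent 1:1 stability constant follows from the slope of the linear (A_L-type) portion of the phase-solubility diagram as K(1:1) = slope / (S0 · (1 − slope)), where S0 is the intrinsic solubility of the drug (the y-intercept of the fit). A minimal Python sketch of this calculation; the numerical values below are placeholders, not measured data:

```python
import numpy as np

def stability_constant(cd_conc_mM, drug_conc_mM):
    """Apparent 1:1 stability constant (Higuchi-Connors) from a
    phase-solubility diagram: K = slope / (S0 * (1 - slope))."""
    slope, intercept = np.polyfit(cd_conc_mM, drug_conc_mM, 1)
    s0 = intercept  # intrinsic solubility = y-intercept of the linear fit
    if not (0 < slope < 1):
        raise ValueError("A_L-type diagram expected (0 < slope < 1)")
    k11 = slope / (s0 * (1 - slope))  # in mM^-1; multiply by 1000 for M^-1
    return slope, s0, k11

# Placeholder data: SBEbetaCD 0-40 mM vs. dissolved quercetin (mM)
cd = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 40.0])
que = np.array([0.004, 0.050, 0.095, 0.190, 0.280, 0.375])
slope, s0, k11 = stability_constant(cd, que)
print(f"slope={slope:.4f}, S0={s0:.4f} mM, K(1:1)={k11:.1f} mM^-1")
```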
Preparation of Nanoparticles (NPs)

NPs composed of CS and SBEβCD, or of SBEβCD/TPP mixtures, were obtained via the ionotropic gelation technique [65] with slight modifications. The methods are briefly detailed below. (a) NPs without CD: these formed spontaneously at room temperature upon addition of 1 mL of TPP aqueous solution (0.15% w/v, polyanionic phase) to 3 mL of the CS solution (0.20% w/v, pH 4.95, polycationic phase) under stirring (850 rpm, 10 min). The solution was then left undisturbed for at least 50 min to allow complete stabilization of the system. (b) NPs containing CD: the volumes of the two phases were the same as for NPs without CD. CS/SBEβCD/TPP NPs were prepared by mixing the CS solution (0.2% w/v) with a polyanionic phase containing SBEβCD (0.15-0.9% w/v) or both SBEβCD and TPP (0.075-0.3% w/v). (c) Preparation of quercetin-loaded nanoparticles: for the association of quercetin into the NP system, an excess of quercetin was incubated under magnetic stirring (500 rpm, 24 h) with either a water solution containing different amounts of SBEβCD (from 3.0 to 6.0 mg/mL) or the CS solution (0.2% w/v, pH 4.95). After incubation, the drug suspensions were filtered through a 0.45-µm membrane and the resulting solutions were analyzed spectrophotometrically for quercetin content. This inclusion-complexed solution was then used for NP formation by the ionotropic gelation technique as described above. The resulting NPs were isolated by ultracentrifugation on a glycerol bed (10,000× g, 40 min, 15 °C; Mikro 220 R, Hettich GmbH & Co. KG, Tuttlingen, Germany). Supernatants were collected for determination of the amount of unbound quercetin. The NPs were then re-suspended in 100 µL of 85 mM NaCl. Glycerol was used to improve the re-suspendability of the centrifuged nanoparticles. The production yield of the nanoparticles was obtained by centrifuging fixed volumes of the freshly prepared nanoparticle suspensions (16,000× g, 40 min, room temperature) without a glycerol bed. The supernatants were then discarded, and the pellets were lyophilized at −50 °C until constant weight (after 2 days). The production yield was calculated by comparing the actual weight with the theoretical weight of the total components of the nanoparticles.

Physicochemical Characterization of Nanoparticles

The Z-average particle size (hydrodynamic diameter) and size distribution of the NPs were determined by dynamic light scattering with non-invasive back scattering (DLS-NIBS) at 25 °C, detected at an angle of 173° with a red laser light output (λ = 632.8 nm), using a Malvern Zetasizer Nano ZS instrument (ZEN3600, Malvern Instruments Ltd., Malvern, UK). The ζ-potential was measured by phase analysis light scattering and mixed laser Doppler velocimetry (M3-PALS) at 25 °C. The samples were diluted 1:20 in 1 mM KCl before measurement.

Elemental Analysis of the NPs

Unloaded nanoparticles were prepared as described above, without using a glycerol bed during centrifugation, and finally lyophilized (Telstar Cryodos, Terrassa, Spain). Elemental analysis of the starting materials (i.e., pure CS and pure SBEβCD) and of the lyophilized nanoparticles was performed with an elemental analyzer (Telstar Industrial SL, Terrassa, Spain). For all samples, the elemental composition of C, H and N was determined. For SBEβCD (pure component) and the CS/SBEβCD NPs, the corresponding composition of S was also determined. The CS content of the NP samples was determined by comparing the N content between the samples and pure CS. For the determination of the SBEβCD content, the C content of the samples arising from CS was calculated and subtracted from the total C amount; the remaining C was used to calculate the amount of SBEβCD by comparison with the C content of pure SBEβCD. In the case of the CS/SBEβCD/TPP NPs, the S content was also used for SBEβCD quantification; hence, the reported data represent an average of the two results. The remaining fraction of the NP composition was attributed to TPP.
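A minimal sketch of the composition back-calculation described above, assuming that TPP contributes no C or N and that chitosan contributes no S; the elemental mass percentages below are placeholders, not the measured values:

```python
def np_composition(sample, pure_cs, pure_cd):
    """Back-calculate CS/SBEbetaCD/TPP mass fractions of a nanoparticle
    sample from elemental (C, N, S) mass percentages.
    sample, pure_cs, pure_cd: dicts with keys 'C', 'N' (and 'S' for CD)."""
    # 1) CS fraction from nitrogen (TPP and SBEbetaCD contribute no N)
    f_cs = sample["N"] / pure_cs["N"]
    # 2) CD fraction from the carbon left after subtracting the CS carbon
    c_from_cs = f_cs * pure_cs["C"]
    f_cd_from_c = (sample["C"] - c_from_cs) / pure_cd["C"]
    # 3) Independent CD estimate from sulfur, then average the two
    f_cd_from_s = sample["S"] / pure_cd["S"]
    f_cd = 0.5 * (f_cd_from_c + f_cd_from_s)
    # 4) Remainder attributed to TPP
    f_tpp = max(0.0, 1.0 - f_cs - f_cd)
    return f_cs, f_cd, f_tpp

# Placeholder elemental data (mass %), not measured values
pure_cs = {"C": 41.0, "N": 7.5}
pure_cd = {"C": 36.0, "S": 9.0}
sample = {"C": 38.0, "N": 3.2, "S": 4.6}
f_cs, f_cd, f_tpp = np_composition(sample, pure_cs, pure_cd)
print(f"CS {f_cs:.1%}, SBEbetaCD {f_cd:.1%}, TPP {f_tpp:.1%}")
```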
Loading and Association Efficiency of Nanoparticles

The association efficiencies of the nanoparticle formulations were determined after isolation of the nanoparticles by centrifugation as described in Section 4.2.2. Supernatants were collected for determination of the amount of unbound quercetin by spectrophotometry. The loading efficiency (LE%) and the association efficiency (AE%) of quercetin were calculated according to Equations (1) and (2); the standard definitions assumed for these equations are sketched in the code example at the end of this section.

Stability Studies

The stability study was conducted following the protocol used in our previous studies [89]. Briefly, selected nanoparticle formulations were prepared and centrifuged in the presence of a glycerol bed. Unloaded nanoparticles and quercetin-loaded nanoparticles were tested for their stability in M9 medium in terms of changes in nanoparticle size and polydispersity index (PDI) and possible precipitation. Nanoparticles were incubated in M9 medium at 37 °C with agitation at 100 rpm. The size distribution of the nanoparticles and the PDI were measured by photon correlation spectroscopy at time points of 0, 30, 60, 120, 240 and 420 min. Each experiment was performed in triplicate.

In Vitro Release Studies

Release studies were performed according to the method developed by Kaiser et al., with slight modifications [49]. Briefly, quercetin-loaded NPs were isolated and re-suspended in 85 mM NaCl. The release studies were performed by incubating 800 µL of the quercetin-loaded NP suspension in 25 mL of M9 medium at 37 °C, stirred at 100 rpm. At appropriate time points, samples were withdrawn, replaced by fresh M9 medium and centrifuged at 16,000× g for 30 min. The drug released from the NPs, present in the supernatant, was determined by UV/Vis spectrophotometry at 374 nm and calculated by interpolation using a calibration curve.

FTIR Spectroscopy Studies

Fourier transform infrared (FTIR) spectroscopy was used to analyze molecular bonding between QUE and the chitosan nanoparticles. The FTIR spectra of pure QUE and QUE-loaded chitosan nanoparticles were recorded using a Perkin Elmer Nicolet 520 spectrophotometer (Perkin Elmer, Boston, MA, USA). The lyophilized samples were ground with spectroscopic-grade potassium bromide (KBr) powder and pressed into 1-mm pellets for FTIR measurement in the range of 450-4000 cm−1 with 4 cm−1 resolution, using 16 scans. All samples were analyzed and recorded in triplicate.

QS Inhibition Studies

(a) E. coli Top 10 Biosensor Strain

The bacterial strain used for all experiments was a fluorescence biosensor constructed from an E. coli Top 10 (Invitrogen, Life Technologies Co., Paisley, UK), which had been chemically transformed by Celina Vila of our laboratory to contain the standard biological part BioBrick_T9002 on the plasmid BBa pSB1A3 (http://partsregistry.org/Part:BBa_T9002), kindly donated by Prof. Anderson's lab (UC Berkeley, Berkeley, CA, USA). The BBa_T9002 sequence comprises the luxR gene, coding for the transcriptional factor LuxR, expressed constitutively under the control of the pTetR promoter. Upon external addition of 3OC6HSL, dimerization of two monomeric LuxR species, each bound to one AHL molecule, drives the activation of gfp expression through binding of the LuxR-AHL dimer complex to the lux pR promoter from Vibrio fischeri, initiating the production of green fluorescent protein. This strain has been used in previous studies in our laboratory [36,84].
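Equations (1) and (2), referenced earlier in this section, are not reproduced in the extracted text; the sketch below therefore assumes the standard definitions, with AE% expressing the associated drug relative to the total drug used and LE% relative to the total nanoparticle mass (all numbers are placeholders):

```python
def association_efficiency(total_drug_mg, free_drug_mg):
    """AE% = (total drug - unbound drug in supernatant) / total drug * 100."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

def loading_efficiency(total_drug_mg, free_drug_mg, np_mass_mg):
    """LE% = associated drug / total nanoparticle mass * 100."""
    return (total_drug_mg - free_drug_mg) / np_mass_mg * 100.0

# Placeholder values, not measured data
total, free, np_mass = 1.00, 0.12, 9.5  # mg
print(f"AE = {association_efficiency(total, free):.1f}%")
print(f"LE = {loading_efficiency(total, free, np_mass):.1f}%")
```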
The bacterial strain was cultivated in Luria-Bertani (LB) medium supplemented with 200 µg/mL ampicillin for 18 h at 37 °C with shaking at 100 rpm, and then stored at −80 °C in 30% sterile glycerol for future use. Before the biosensor assay, the bacterial working solution was prepared by inoculating 40 µL of the bacteria stored at −80 °C into 20 mL of M9 medium plus 20 µL of ampicillin (200 µg/mL), and incubating at 37 °C and 100 rpm until the OD600 reached 0.04-0.07 (~4 h). 3OC6HSL was dissolved in acetonitrile to a stock concentration of 100 mM and stored at −20 °C. A 5-µL aliquot of the 100 mM 3OC6HSL stock solution was diluted with sterile milli-Q water to a working concentration of 10 nM.

QS inhibition activity was tested in 96-well microplates, to which 10 µL of 10 nM 3OC6HSL, 10 µL of the treatment nanoparticle formulations and 180-µL aliquots of the bacterial culture at OD600 0.04-0.07 were added. Two kinds of blanks were set up. Blank 1 (OD blank) contained 180 µL of M9 medium and 20 µL of milli-Q water. Blank 2 (fluorescence blank) contained 180 µL of bacterial culture and 20 µL of milli-Q water, to measure the autofluorescence of the bacteria themselves. A positive control, containing 180 µL of bacterial culture, 10 µL of milli-Q water and 10 µL of AHL, was also set up to compare the anti-QS effects of the different formulations. The plates were incubated in a SpectraMax M2 microplate reader (Molecular Devices, Sunnyvale, CA, USA) at 37 °C. Fluorescence measurements were recorded automatically using a repeating procedure (λexcitation = 480 nm and λemission = 510 nm, 40 µs, 10 flashes, gain 100, top fluorescence), along with growth measurements (OD600; λ = 600 nm absorbance filter, 10 flashes) and shaking (5 s, orbital shaking, high speed). The interval between measurements was 60 min. For each experiment, the fluorescence intensity (FI) and OD600 values were obtained by subtracting the fluorescence blank and OD blank values, respectively, from the raw readings. All measurements were taken in triplicate.

Conclusions

In this study, we have developed novel nanocarrier formulations consisting of chitosan and a negatively charged cyclodextrin, Captisol®, via the very mild ionotropic gelation technique. The charge ratio [+]/[−] determined the final physicochemical characteristics of the resulting NPs. The nanoparticles exhibited a small size, a positive zeta potential and a great capacity for the association of quercetin. Quercetin-loaded NPs proved to be stable in bacterial M9 medium for 7 h. The presence of Captisol® in the NPs plays an important role in controlling the release rate of quercetin. Chitosan-based NPs can be used as an effective vehicle to deliver hydrophobic bioactive compounds locally to the bacterial surface, as well as to enhance both their anti-QS and their antibacterial activities. The exact mechanism by which the NPs interact with the E. coli Top 10 biosensor remains to be elucidated and is currently being addressed in our laboratory.

Conflicts of Interest: The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Treatment-Refractory Mania with Psychosis in a Post-Transplant Patient on Tacrolimus: A Case Report

Bipolar affective disorder type I imparts significant morbidity and disease burden in the population. It is characterized by the occurrence of one or more manic episodes, which may be preceded or followed by a depressive or hypomanic phase. About half of these manic episodes are characterized by the presence of psychotic features. The condition is further complicated when the patient has multiple comorbid conditions. We report here the case of a Caucasian woman, aged 66 years, previously diagnosed with bipolar disorder, who developed treatment-refractory mania with psychotic features while on the immunosuppressive agent tacrolimus after kidney transplantation.

A Caucasian woman, aged 66 years, with a past psychiatric history of bipolar type 1 disorder and a medical history significant for cadaveric renal transplant, aortic stenosis, end-stage renal disease, hypertension, diabetes mellitus type 2, thyroid dysfunction and a right adrenal adenoma, presented with acute mental status change and was referred to the psychiatric facility for disorganized thought process, pressured speech and delusions 1.5 years post-transplant. The patient was a poor historian, and collateral information was obtained from the family, who informed us that she had not been sleeping well for the last few weeks. She would call them over the phone at night and was very talkative, with pressured speech. She would talk about the male violinist "Joshua Bell", stating that "she was Joshua" or that "Joshua is a family member". The family reported that her symptoms gradually worsened over the previous few days, during which she became disoriented and confused, so they brought her to the hospital. The family also mentioned her history of end-stage renal disease and the renal transplant 1.5 years prior. After the transplant, she was placed on maintenance immunosuppression with tacrolimus 3 mg b.i.d., mycophenolic acid 360 mg b.i.d. and prednisone 5 mg daily. She had post-transplant mania with psychosis one month after the transplant. She was stabilized at that time, but then had this sudden deterioration. She had recently been switched from divalproex sodium to quetiapine in outpatient care because of thrombocytopenia.

During the hospitalization, quetiapine was optimized to 800 mg and a complete work-up was done. However, she continued to be severely manic, and her delusions appeared to worsen; she now believed she was in a World War II scenario and that the staff were Nazis. Her thought flow continued to show loose associations, with recent-onset intermittent agitation. Since she was not responding to quetiapine, it was tapered off after 2 weeks, and divalproex sodium and risperidone were restarted with close monitoring of blood counts. Divalproex sodium was increased up to 1500 mg in divided doses, and risperidone was optimized to 6 mg over the next 2 weeks. When her psychosis did not improve, she was switched to oral olanzapine at 5 mg in divided doses, with a plan to increase up to 20 mg for the psychosis. At the same time, other causes of psychosis were explored, including the immunosuppressant medications and her concurrent adrenal adenoma. In a multidisciplinary approach, the psychiatry and nephrology services discussed the likelihood of tacrolimus-associated psychosis.
The patient's most recent tacrolimus level was 5.8 µg/L one month prior to admission and 2.3 µg/L during admission; however, 5 days after admission it had increased to about 8.4 µg/L. At that time, she was on prednisone 5 mg daily, tacrolimus 1 mg b.i.d. and Myfortic 360 mg orally daily. Although the most recent tacrolimus level was within the therapeutic range, it was still higher than her baseline levels. There had been only a couple of case reports of psychosis associated with tacrolimus,1,2 and keeping that in mind during discussion with the nephrology service, we planned a cross-taper of tacrolimus and initiation of cyclosporine. Tacrolimus was decreased to 1 mg daily and cyclosporine was started at 100 mg b.i.d. Once cyclosporine peak and trough levels were obtained and found adequate, tacrolimus was tapered off completely. Finally, based on blood levels, cyclosporine was dosed at 125 mg in the morning and 100 mg in the evening.

Significant improvement was observed in the patient's mental status as soon as the tacrolimus taper was started. By the time of discharge, she had better insight into her delusions and did not demonstrate bizarre behavior. She was ultimately discharged on olanzapine 20 mg at night, divalproex sodium 1500 mg at night, lorazepam 0.5 mg at night, propranolol 30 mg t.i.d. for tremors and hypertension, cyclosporine 125 mg in the morning and 100 mg at night, Myfortic 360 mg daily and prednisone 5 mg daily. She was continued on her home medications of amlodipine 10 mg daily, alendronate 35 mg weekly, rosuvastatin 5 mg daily, eplerenone 50 mg daily, cinacalcet 30 mg and 60 mg every other day, and levothyroxine 175 µg daily. The patient was discharged with a recommendation for close follow-up with psychiatry and nephrology.

Discussion

Bipolar affective disorder type I imparts significant morbidity and disease burden in the population. It is characterized by the occurrence of one or more manic episodes, which may be preceded or followed by a depressive or hypomanic phase. About half of these manic episodes are characterized by the presence of psychotic features.3 The condition is further complicated when the patient has multiple comorbid conditions, as was observed in the case presented here.

Our patient had a very complicated presentation. At the age of 33 years, she was diagnosed with bipolar disorder type I, and she had one known manic episode 3 years prior to the renal transplant. Her first post-transplant episode of mania occurred a month after the transplant. The episode reported here started almost a year after she had been on maintenance immunosuppression therapy with tacrolimus, and despite treatment with mood stabilizers and antipsychotics, the patient improved only after tacrolimus was tapered off and switched to cyclosporine. Although neurologic symptoms such as seizures and tremors have been reported, we have seen only a couple of case reports of psychosis as a consequence of tacrolimus maintenance therapy. To our knowledge, however, this is the first case report of treatment-resistant mania in a previously diagnosed case of bipolar disorder. Management of mental health issues in the post-transplant setting can be difficult, given the potential for medication-related neurotoxicity and the lack of established guidelines.4 The majority of the available literature focuses on the assessment of depression pre- and post-transplantation; however, no specific guidelines exist for the management of patients with bipolar affective disorder.4 Our patient was stable on divalproex sodium 500 mg b.i.d.
for about 2 years, but this was discontinued 4 months prior to the episode due to thrombocytopenia, and the patient was started on quetiapine 200 mg. It is probable that the emergence of the mania with psychosis was precipitated by the discontinuation of divalproex sodium. However, we believe the treatment-resistant mania was due to the tacrolimus therapy. Psychosis and delirium are listed as possible adverse effects in the manufacturer's package insert. Severe neurotoxic symptoms are reported to affect up to 5% of patients on calcineurin inhibitors and include psychoses, hallucinations, blindness, seizures, cerebellar ataxia, motor weakness and leukoencephalopathy.5 Tacrolimus is associated with similar neurotoxic adverse events. Factors that may promote the development of serious complications include advanced liver failure, hypertension, hypocholesterolemia, elevated cyclosporine or tacrolimus blood levels, hypomagnesemia and methylprednisolone. Our patient had underlying hypertension, diabetes mellitus type 2, thyroid dysfunction and a right adrenal adenoma. Interestingly, her tacrolimus levels were within the therapeutic range, yet severe neurotoxicity did not appear to be related to tacrolimus levels, similar to the findings of Veroux et al.6 Based on the FK506 consensus reports by Jusko et al7 and Wong,8 tacrolimus therapeutic ranges in kidney-transplanted patients should be 10-15 µg/L in the first 6 months of treatment, 8-12 µg/L in the following 6 months, and 5-10 µg/L as maintenance therapy after one year. Bottiger et al9 reported neurotoxicity in 10% of patients with drug levels between 10 and 30 µg/L, while this incidence rises to more than 20% in patients with levels >30 µg/L. The molecule can cross the intact blood-brain barrier and probably attaches to myelin, which is rich in lipids, permitting tacrolimus to exert a direct toxic effect through nitric oxide production.10 Tacrolimus also has downstream regulatory effects on both the dopaminergic and the N-methyl-D-aspartate receptor systems,2,11 which may be another factor. Another postulated hypothesis relates to hypertensive encephalopathy, which corresponds to white matter changes observed in the parietal and occipital lobes.10

There is virtually no literature on the treatment of tacrolimus-induced psychosis. Future studies should question whether the type of donor, cadaveric vs. living, contributes to tacrolimus-induced psychosis. There also seems to be no consensus on the duration of tacrolimus therapy required for psychosis to develop. It is also paradoxical that, on the one hand, medications that reduce inflammation are beneficial in acute psychotic episodes,12 whereas tacrolimus, an immunosuppressive agent that inhibits T-cell receptor-mediated activation of interleukins, appears to have precipitated the psychotic episode.

This case illustrates the challenges and complexity of treating patients with multiple medical comorbidities and a major psychiatric disorder. It reinforces the importance of collecting detailed information and communicating with other specialties. Our patient needed close follow-up from nephrology, as well as a detailed work-up and literature search by the psychiatrist for the possible causes of her psychosis and its further treatment. After a long course, a detailed chart review, collateral information and multiple medication trials, we concluded that the clinical manifestations and the sudden deterioration in the patient's condition were possibly secondary to tacrolimus.
Exploring the functional connectivity between the Kis-Balaton Water Protection System and Lake Balaton using satellite data

Lake Balaton, a shallow polymictic freshwater lake in Central Europe, became eutrophic in the 1970s. To retain the inorganic nutrients from the main tributary River Zala, a semi-artificial system called the Kis-Balaton Water Protection System (KBWPS) was constructed in the early 1980s. In 2015, the system was reconstructed and modernised, thus offering the opportunity to evaluate the effectiveness of the functional connection between the KBWPS and Lake Balaton over the past 20 years and to compare its impact before and after the reconstruction. To this end, time series data of algal biomass in Lake Balaton between 1999 and 2019, based on Landsat 7 satellite data, were analysed. Over the last 20 years, the algal biomass in Lake Balaton showed an increasing trend (0.009 ± 0.011% increase per year), with spatial specificities also observed: no change was noted in the western part, while an increase was recorded in the eastern part of the lake. A significant difference in the rate of algal biomass accumulation was noticed before (annual increase of 0.008 ± 0.019%) and after (0.240 ± 0.306% per year) the KBWPS reconstruction. Given that the largest increase in algal biomass after reconstruction was observed in the basin of Lake Balaton farthest from the KBWPS, it appears that mesoscale environmental, water balance, or other factors affecting the lake are playing a role in this increase, rather than the KBWPS reconstruction. This research highlights the potential of studying aquatic ecosystems using Earth observation techniques, and shows how mesoscale factors such as changes in the local climate regime or shifts in lake management can greatly impact the trophic state of a large shallow lake. Effectively identifying these factors is crucial for maintaining the proper status of aquatic ecosystems.

Introduction

Aquatic ecosystems are facing significant threats globally (Revenga et al. 2000; Wang et al. 2018), with anthropogenic eutrophication being a significant contributor. While eutrophic lakes are a natural occurrence, human activities resulted in an increased rate of lake enrichment in the latter half of the twentieth century, causing severe problems in many countries (Heisler et al. 2008; O'Neil et al. 2012). This has led to the rapid accumulation of algae in lakes, disrupting the natural processes and phytoplankton communities of the aquatic ecosystem (Vollenweider 1968; Padisák and Reynolds 1998). Efforts have been made globally to reverse eutrophication (Moss et al. 1986; Jeppesen et al. 2005; Fastner et al. 2016), and Lake Balaton serves as a positive example.
Lake Balaton is a large (596 km2), shallow (approx. 3.7 m) recreational freshwater lake. In the early 1970s, a surge in nutrient levels caused the lake's water to shift from mesotrophic to eutrophic (Herodek 1984). The primary source of nutrients was the Zala River, which flowed into the westernmost basin of Lake Balaton and increased primary production five-fold within a decade (Herodek 1984). To combat eutrophication, the Kis-Balaton Water Protection System (KBWPS) was initiated in the early 1980s. The KBWPS includes an 18-km2, shallow (less than 1.3 m) artificial lake system (Lake Hídvégi) and a 16-km2 wetland, which were constructed at the site of the former Kis-Balaton wetland by the mid-1980s. Lake Hídvégi was designed to retain nutrients and provide ideal conditions for algal growth, while the wetland component of the KBWPS prevented these algae from entering Lake Balaton (Pomogyi 1993; Padisák and Istvánovics 1997; Hatvani et al. 2011). To ensure optimal functioning of the KBWPS, a hydrological retention time of 30 days in Lake Hídvégi and 90 days in the wetland is maintained (Pomogyi 1993; Tátrai et al. 2000).

Since its opening in 1984, the KBWPS has consistently been retaining increasing levels of phosphorus (P) each year (Pomogyi 1993; Tátrai et al. 2000; Istvánovics et al. 2007), eventually reversing the eutrophication process in Lake Balaton by 1995 (Padisák and Istvánovics 1997; Istvánovics et al. 2007). Following this, a steady oligotrophication of the lake took place, leading to a decrease in algal biomass and the return of the algal species present in the 1950s (Pálffy and Vörös 2019). In 2013, the KBWPS underwent modernisation and redesign, with a focus on increasing its flexibility during different water regimes. New water pathways and regulatory structures were added, along with new elements in the monitoring system. Additionally, the wetland area was expanded by 35 km2 to ensure maximum efficiency. The redesigned KBWPS became fully operational in the spring of 2015.

Since 1995, Lake Balaton has been meso-oligotrophic, with occasional, spatially and temporally limited spikes of eutrophication (Palmer et al. 2015a). Between 2014 and 2019, excluding some localised and short-lived algal blooms (Pálmai et al. 2016), the eastern half of the lake was oligotrophic throughout the year. However, starting in June 2019, an unprecedented hypertrophic algal bloom developed in the western half of the lake, peaking in early September of that year (Fig. 1). The effectiveness of the KBWPS in protecting Lake Balaton was questioned following this bloom, although an alternative hypothesis on the background of this hypertrophic event is supported by most experts (Istvánovics et al. 2022). Nevertheless, for the sake of clarity, the performance of the KBWPS was analysed by comparing the accumulation of algal biomass in Lake Balaton before 2013 and after 2015 using satellite data.

The use of remote sensing to study large areas is advantageous, since long-term and large-scale monitoring of open waters would otherwise be time- and resource-consuming (Palmer 2015a). Synoptic and regular monitoring of key water quality indicators in lakes can enhance our understanding of the processes involved and improve the management of these bodies of water.

Lake Balaton is nearly an ideal water body for Earth observation and remote sensing studies of aquatic ecosystems.

Fig. 1 Change of algal biomass (chl-a, µg l−1) at the end of summer and beginning of autumn in Lake Balaton. Chlorophyll-a maps were created from Sentinel-2 MSI satellite data using the Case-2 Regional processor.
It offers a variety of optical water types, including the eutrophic westernmost and oligotrophic easternmost basins, high dissolved organic matter at the inflows, and differences in turbidity between the northern and southern shores. Moreover, the size of the lake allows the study of mesoscale processes, while the infrastructure around the lake makes it accessible from all 240 km of the shore. In addition, the relatively large macrophyte cover (~ 12 km2) makes it an excellent target not only for water quality studies, but also for remote sensing of macrophytes. Consequently, numerous articles have focused on remote sensing of water quality in Lake Balaton, dating back to the early days of remote sensing (Büttner et al. 1987; Gitelson 1993), with some recent results (Tyler et al. 2006; Hunter et al. 2008; Palmer et al. 2015b). Furthermore, several studies have focused on remote sensing of its macrovegetation (Stratoulias et al. 2015; Stratoulias and Tóth 2020).

One of the crucial management concerns for inland lakes is the amount and biomass of algae (trophic state). This study evaluates the efficacy of the KBWPS by analysing the spatial and temporal distribution of algal biomass in Lake Balaton from 1999 to 2019. The main hypothesis of the study is that the reconstruction of the KBWPS had no impact on the accumulation of algal biomass before (1999-2013) and after (2015-2019) its renovation.

Study areas

Lake Balaton and the Kis-Balaton Water Protection System

Lake Balaton (Fig. 2a) is a freshwater lake located in Central Europe (46.83° N, 17.71° E). It is the 38th largest lake in Europe, covering an area of 596 km2. The lake is 78 km long, with a width ranging from 1.4 to 15.3 km. The average depth of the lake is 3.7 m, with its deepest point reaching 11.8 m. Lake Balaton is divided into four basins, with the Keszthely basin (basin 1 in this work) located on the western side and the Siófok basin (basin 4) on the eastern side. The catchment of Lake Balaton covers an area of 5765 km2 and includes many small inflows and the main tributary, the Zala River, which flows into basin 1. Industry in the catchment is concentrated along the Zala River, which supplies its waters to the Kis-Balaton Water Protection System (KBWPS; 46.63° N, 17.15° E), consisting of an artificial lake in its western part and a wetland in its eastern part (Fig. 2b).

Satellite data acquisition and analysis

Freely available satellite remote sensing data were used to investigate the effects of the reconstruction of the KBWPS on its nutrient retention and its role in the control of eutrophication in Lake Balaton. A comparative analysis of the effects before and after the reconstruction was considered essential; thus, the study included data from 4 years after the reconstruction (2015-2019), which required a corresponding time frame of at least 4 years before the reconstruction (2009 to 2012). The Landsat 7 satellite was chosen for its higher resolution.

To assess the water chlorophyll-a content over a long period, Landsat 7 ETM+ data collected between June 1999 and July 2019 were analysed (Fig. 3). The data were obtained from Google Earth Engine (https://earthengine.google.com/) and preprocessed to eliminate systematic geometric and radiometric distortions. The database was created by targeting clear Landsat images (with cloud cover < 5%) of the open water area of Lake Balaton. A total of 602 Landsat images were acquired and processed over the period, although owing to clouds and other errors, as few as 340 images were used for certain areas of the lake. A chlorophyll index was calculated for pixels with normalised difference water index (NDWI) values over a threshold of 0.6, using the band ratio between ETM+ 1 and ETM+ 3 (Gitelson et al. 1996) (Fig. 3).
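A minimal sketch of this water-masking and band-ratio step, assuming per-scene reflectance arrays for ETM+ bands 1-4; the green/NIR NDWI formulation used below is an assumption, since the paper does not state which band pair it used for the NDWI:

```python
import numpy as np

def chlorophyll_index(b1, b2, b3, b4, ndwi_threshold=0.6):
    """Mask water pixels with NDWI > threshold, then compute the
    ETM+3/ETM+1 band-ratio chlorophyll index (Gitelson et al. 1996).
    b1..b4: reflectance arrays for Landsat 7 ETM+ bands 1-4.
    NDWI = (green - NIR) / (green + NIR) is assumed here."""
    ndwi = (b2 - b4) / (b2 + b4 + 1e-12)   # small epsilon avoids 0/0
    water = ndwi > ndwi_threshold
    index = np.full(b1.shape, np.nan)       # non-water pixels stay NaN
    index[water] = b3[water] / np.clip(b1[water], 1e-12, None)
    return index
```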
Owing to the rarity of Lake Balaton water with a chlorophyll-a content below 3 µg l−1, only the ETM+ 3/ETM+ 1 ratio was used for the analysis. In the regression models established, the logarithmically transformed chlorophyll-a concentration was used as the independent variable, while the dependent variable was the logarithmically transformed ETM+ 1 and ETM+ 3 band ratio; from the best regressions (highest R2 values) (R Development Core Team 2012), maps were generated. These data were downscaled by overlaying a 1000 m × 1000 m grid to reduce errors from the Landsat 7 ETM+ Scan Line Corrector failure (erroneous data were omitted), and the average water chlorophyll-a content was calculated for each cell (Fig. 3).

A trend analysis of the algal biomass accumulation in each 1000 m × 1000 m cell of Lake Balaton was performed to determine the direction of change during the study period (sketched in the code example below). Trend analysis was performed separately on annual (all data) and summer (June-July-August) chlorophyll-a data. The average slope of the trend lines was also calculated for 3-year rolling windows of the water chlorophyll values in each cell.

Water sampling

Water samples from Lake Balaton were collected on a near-monthly basis from 1999 to 2019. Sampling was done at five locations in the middle of each basin of the lake (1: 46° 44.095′ N, 17° 16.583′ E; 2: 46° 45.112′ N, 17° 25.145′ E; …). Chlorophyll-a was determined on the same day as the sample was collected, using a whole-water-column integrated water sample. Water (1000 ml) was filtered using Whatman GF-5 filter papers, and the pigments were analysed after extraction in 60 °C methanol, followed by clarification through centrifugation at 10,000 rpm for 10 min (Iwamura et al. 1970).

To validate the satellite data, their consistency with in situ data was tested. The validity of all pixels in the 3 × 3 pixel box surrounding the in situ sampling location was checked, with a focus on low covariance between all nine pixels. A total of 33 same-day matches were found for the 1999-2019 timespan for Lake Balaton. The chlorophyll index obtained from satellite data was correlated with the in situ measured chlorophyll-a data (Fig. 4). The maps built from chlorophyll indices were then corrected using the following formula: chl-a = chl-index/1.7514. Using this correction, chlorophyll data were calculated for each available pixel (Fig. 3) and then averaged for each cell.

Total phosphorus data were obtained from the West-Transdanubian Water Directorate.
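A minimal sketch of the per-cell trend step, applying the empirical correction (chl-a = chl-index/1.7514) and fitting an OLS slope per 1 km cell. The paper reports slopes as % per year, while this simplified version returns µg l−1 per year, and all variable names are illustrative:

```python
import numpy as np

def cell_trend(dates_years, chl_index):
    """OLS trend of algal biomass in one grid cell.
    dates_years: decimal years of the scenes; chl_index: band-ratio values.
    Returns the slope of corrected chl-a (ug/L) per year."""
    chl_a = np.asarray(chl_index) / 1.7514   # empirical correction (Fig. 4)
    ok = np.isfinite(chl_a)                  # drop cloud/SLC-off gaps
    slope, intercept = np.polyfit(np.asarray(dates_years)[ok], chl_a[ok], 1)
    return slope

def rolling_trends(dates_years, chl_index, window=3.0):
    """Average slope over 3-year rolling windows, as described in the text."""
    t = np.asarray(dates_years)
    slopes = []
    for start in np.arange(t.min(), t.max() - window, 1.0):
        sel = (t >= start) & (t < start + window)
        if sel.sum() >= 3:                   # need at least 3 points to fit
            slopes.append(cell_trend(t[sel], np.asarray(chl_index)[sel]))
    return float(np.mean(slopes)) if slopes else np.nan
```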
Statistical analysis of the data

Empirical regression analyses were performed to assess the relationship between the chlorophyll content measured in situ and the corresponding chlorophyll index estimated from remote sensing data. Ordinary least squares (OLS) regression was used for trend analysis. To avoid seasonal bias, additional trend analyses were performed using only the summer (June, July, August) data sets. The strength of the association between two variables was examined using Pearson product moment correlation analysis (a sketch of these tests follows the results below). Statistical analyses, including the calculation of averages, standard deviations, regressions and correlations, were performed using R (R Development Core Team 2012).

Trophic trends in Lake Balaton

The water chlorophyll content exhibits characteristic spatial and temporal patterns in Lake Balaton (Fig. 5). The trophic gradient of Lake Balaton determines the specific algal biomass of its basins: the westernmost basin 1 of Lake Balaton had water chlorophyll-a levels ranging from 3.2 to 92.4 µg l−1, with an average of 20.8 ± 17.8 µg l−1, making it meso-eutrophic according to the satellite data, with short periods of local hypertrophic conditions (Fig. 5). Conversely, the easternmost basin 4 of Lake Balaton was oligo-mesotrophic, with an average water chlorophyll-a content of 4.9 ± 2.8 µg l−1 (Fig. 5). Temporal specificities were also observed in Lake Balaton, including a regular, annual, large phytoplankton bloom in late summer and early fall, and occasional smaller blooms in spring, which were more prominent in basins 1 and 2, i.e., the western half of Lake Balaton (Fig. 5).

Since the construction of the KBWPS, the efficiency of the system has decreased significantly, as the total phosphorus retention decreased from 40.3 ± 27.4% in 1986-1999 to −1.4 ± 30.5% in the study period (t-test, t = 8.130, P = 1.7 × 10−13). Nevertheless, the phosphorus retention capacity of the KBWPS between 1999 and 2019 showed no significant trend (Pearson product moment correlation r = −0.171, P = 0.093, Supplementary Fig. 1).

The analysis of algal biomass accumulation in Lake Balaton was conducted for the period between 1999 and 2019. During this period, the average trend of algal biomass accumulation was positive, with an increase of 0.009% per year. The highest increase was recorded in basin 4 (0.016% per year), while basins 1 and 2 showed no signs of change or even a negative slope (−0.0045% in basin 2). The summer data sets showed similar trends (Supplementary Table 1, Supplementary Fig. 2).

The efficiency of the KBWPS was evaluated by comparing the algal biomass accumulation trends before (prior to 2013) and after (post 2014) the system reconstruction. Before the KBWPS reconstruction, the trend of algal biomass change in Lake Balaton was slightly positive, except for basin 2, which showed a negative trend. After the KBWPS reconstruction, the overall trend increased by 3273%, from 0.007% to 0.24% per year biomass accumulation (Table 1). Very similar results were obtained when only the summer data were analysed (Supplementary Fig. 3) (Figs. 6, 7).
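A sketch of the two reported tests (two-sample t-test between retention periods, and Pearson correlation of retention against time). The study used R; an equivalent in Python with SciPy is shown here, with placeholder retention series rather than the measured data:

```python
import numpy as np
from scipy import stats

# Placeholder annual total-phosphorus retention values (%); the real
# series is shown in Supplementary Fig. 1 and is not reproduced here.
retention_early = np.array([35.0, 62.0, 18.0, 55.0, 41.0, 30.0, 48.0])
retention_study = np.array([-20.0, 15.0, -5.0, 30.0, -12.0, 4.0, -22.0])

# Two-sample t-test between the two periods, as reported in the text
t_stat, p_val = stats.ttest_ind(retention_early, retention_study)
print(f"t = {t_stat:.3f}, P = {p_val:.3g}")

# Pearson correlation of retention against year (test for a trend)
years = np.arange(1999, 1999 + retention_study.size)
r, p = stats.pearsonr(years, retention_study)
print(f"r = {r:.3f}, P = {p:.3f}")
```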
Further temporal analysis showed that, in most cases (10 out of 18 years), short-term trends were close to zero, especially before 2014 (Fig. 8). Occasionally (e.g. in basins 1 and 2 in 2010 and 2011, and in basin 4 in 2008), trends exceeded the standard deviation of the 20-year dataset, but these observed changes were not long-term.

Table 1 Lake-wide and basin-specific averages of algal biomass change (% per year) in Lake Balaton during the study period

Discussion

Assessment of temporal patterns and spatial zonation within lakes is crucial for a better understanding of the processes and functions of aquatic ecosystems, and for their effective management. In response to this need, an increasing number of large-scale process studies have been initiated to assess the spatial and temporal aspects of abiotic and biotic elements of ecosystems (Blowes et al. 2019; Ho et al. 2019; Pilotto et al. 2020). However, these studies require consistent and adequate data collection, which is not always feasible with traditional sampling methods. Earth observation, despite its limitations, provides a good estimate with a synoptic view of water quality parameters (Giardino et al. 2001; Hestir et al. 2015; Palmer et al. 2015a) and has become an essential tool in the study of Lake Balaton (Büttner et al. 1987; Tyler et al. 2006; Palmer et al. 2015b).

This study used Landsat 7 ETM+ data to investigate the long-term effects of a water protection system on a lake. Several challenges were encountered. The reduced radiometric and spectral resolution of Landsat 7 limited its suitability for assessing water quality. In addition, the failure of the scan line corrector further hampered the usability of the data. A further complication arose from the use of an algorithm originally developed for dinoflagellate algae, whereas scum-forming cyanobacteria were the dominant algae in Lake Balaton during the study period. Despite all this, the Earth observation data of this study, supported by in situ measurements, showed that the phytoplankton phenology of Lake Balaton was characterised by two main blooms during the study period. Chronologically, the first bloom, which occurs almost annually in spring, was a relatively small phytoplankton bloom (up to 16.1 ± 12.9 µg l−1 in basin 1) dominated by diatoms. The second, larger eutrophication event (up to 49.3 ± 21.5 µg l−1 in basin 1) occurred in late summer and early autumn and was dominated by Cyanobacteria. These phenological events fit well with the data of previously discussed studies (Mózes et al. 2006; Hajnal and Padisák 2007; Présing et al. 2008; Palmer et al. 2015b). Both blooms, especially the latter, were more pronounced in the western, more eutrophic half of the lake. During the time between the blooms, clear-water stages with an average chlorophyll-a content of 6.5 ± 4.5 µg l−1 were recorded.
The long-term trend of algal biomass change derived from the Landsat 7 data showed no significant change in the trophic state of Lake Balaton between 1999 and 2019, with an annual accumulation of 0.009% of algal biomass throughout the lake. However, a clear distinction between the western and eastern parts of Lake Balaton was observed: a slight decrease in algal biomass (−0.005%) in the western part and a slight increase (0.016%) in phytoplankton abundance in the eastern basins. This may seem to contradict the established trophic gradient of the lake, but the increase in chl-a content in the eastern parts of the lake was not substantial (0.010% and 0.016% per year in basins 3 and 4, respectively), and until 2019 the chlorophyll-a content in basin 4 remained below 7 µg l−1. The results of the current study go beyond the traditional spatial division of Lake Balaton into eutrophic western and mesotrophic eastern parts (Somlyódi and Van Straten 1986; Herodek et al. 1988) and show a slow but steady shift away from this internal zonation.

Phytoplankton is crucial to the survival of lake ecosystems, but excessive growth due to factors such as human-caused nutrient pollution can severely degrade water quality, reduce biodiversity, and alter the species composition and food chain of the system. In the 1970s, eutrophication of Lake Balaton was caused by increasing pollution from local agriculture and industry, combined with inadequate wastewater treatment. Intense algal growth, dominated by Raphidiopsis (Cylindrospermopsis) raciborskii, continued until 1994 (Padisák 1997, 1998; Sprőber et al. 2003). Proper management of nutrient sources has been shown to alleviate eutrophication problems and restore the ecological status of the lake (Jeppesen et al. 2005; Istvánovics et al. 2007; Hatvani et al. 2011), but positive changes can easily be reversed by local, regional or global processes.

The KBWPS was established with the aim of protecting Lake Balaton through phosphorus retention. During the first 13 years of its operation, the system did its job and proved to be successful, capturing about 40% of the total phosphorus from the Zala River. The KBWPS had a significant impact on the water quality of Lake Balaton in the past, especially in the 1980s (Tátrai et al. 2000; Hatvani et al. 2011). It significantly reduced the algal biomass in Lake Balaton within a decade of its construction (Padisák and Reynolds 1998; Istvánovics et al. 2007; Fastner et al. 2016). However, between 1999 and 2019, the KBWPS was not always able to achieve the projected phosphorus retention, and sometimes, especially during low-water periods (2000-2004 and 2011-2012), it was even a source of phosphorus. This study also showed that, between 1999 and 2013, before the reconstruction of the KBWPS, there was only a slight change in the algal biomass in the lake (between −0.002% and 0.01% per year). However, this stability was relative, and larger interannual variations were observed (−0.342% to 0.233% per year), probably due to the interannual variability of micro- and mesoclimatic parameters or the water balance of the lake. This led to a small (0.055% per year) increase in phytoplankton abundance in the lake as a whole.
From the point of view of phosphorus retention, the reconstruction of the KBWPS had no effect, as the amount of total phosphorus entering Lake Balaton remained practically unchanged between 1999 and 2019. Nevertheless, after the reconstruction of the KBWPS in 2015 and later, a noticeable increase in algal biomass was recorded, with an average of 0.240% per year at the whole-lake level. However, the trends at the basin level varied between −0.015% and 0.378% per year, and the 3-year rolling trend varied between −0.056% and 0.151% per year. The basin of Lake Balaton farthest from the KBWPS (basin 4) showed the highest increase in algal biomass, but owing to the 3-year retention time of the lake and the large distance, the direct impact of the KBWPS reconstruction is considered negligible. Without further analysis, it is possible only to speculate on the causes behind this phenomenon. The long-term changes in phytoplankton abundance, or shifts in any community type, could be the result of global or regional changes (Blowes et al. 2019; Ho et al. 2019; Pilotto et al. 2020), such as the slow warming of the lake due to climate change (Pálffy and Vörös 2019), or the combined effects of rising temperatures (Joehnk et al. 2008; Paerl and Huisman 2008; Kosten et al. 2012; Taranu et al. 2012), intensified run-off (Paerl and Huisman 2009; Michalak et al. 2013) and increased atmospheric CO2 (Hutchins et al. 2013; Sandrini et al. 2016; Visser et al. 2016) on algal growth and biodiversity.

Knowledge of the spatial patterns and temporal dynamics of lakes is crucial for their effective management, especially in the case of large, heterogeneous water bodies. This study showed that the functionality of the Kis-Balaton Water Protection System remained largely unchanged over time. Its reconstruction coincided with autochthonous changes in the lake resulting from persistent regional climate forcing. Furthermore, this long-term data set is of even greater importance as it provides an opportunity to analyse biotic processes within the lake, to reveal previously undetected trends and to help identify evolving characteristics of the Lake Balaton ecosystem. This study has demonstrated the potential of using Earth observation data for more than just the extrapolation of existing in situ data sets, by attempting to establish additional key limnological relationships within a large shallow freshwater lake.
The 2015 reconstruction did not improve the operational efficiency of the KBWPS, as neither the long-term nor the short-term trends in the western basins of Lake Balaton showed any significant improvement. Nevertheless, the KBWPS should remain an important system for the protection of Lake Balaton, but its performance needs to be continuously evaluated and improved, while new science-based strategies should also be explored. In general, management of the KBWPS will not be possible without assessing and understanding its actual functioning in this changing environment. The large areas of the KBWPS and Lake Balaton make it impossible to carry out effective monitoring without automated high-frequency monitoring systems or remote sensing techniques, as the detection of nutrient surges outside and inside the KBWPS and the early warning of phytoplankton blooms could only be successfully implemented in this way. This proactive, data-driven approach will allow water authority managers to implement timely, automated and targeted interventions to mitigate the effects of nutrient surges, or the lack thereof. In addition to early warning systems that can provide real-time alerts, the development of forecasts is necessary to support strategic decision-making processes for both KBWPS and Lake Balaton management and water resource protection. This low-maintenance, high-throughput, multi-platform approach will provide researchers, lake managers and policy-makers with a comprehensive view of aquatic ecosystem functioning, enabling informed decision-making and timely responses to environmental challenges.

Fig. 2 A Situation of Lake Balaton (a) and the Kis-Balaton Water Protection System (b) within Hungary. B The division of Lake Balaton into basins as used in this study.

Fig. 3 Flowchart of the data processing used in the study.

Fig. 4 Correlation of same-day matches between in situ chlorophyll-a measurements in the pelagic zone of Lake Balaton and the chlorophyll index (chl-index) calculated from Landsat 7 satellite imagery. The regression line (chl-a = chl-index/1.7514) is shown as a solid line, while the dotted red lines are 95% confidence intervals. Pearson product moment correlation coefficient r = 0.639, P ≤ 0.001 (n = 33).

Fig. 6 Change of algal biomass (% per year) in Lake Balaton between 1999 and 2019, based on annual data sets. The map shows the slope of the trend line from the linear regression fitted to the sequences of water chlorophyll-a content from each 1000 m × 1000 m cell in Lake Balaton, calculated from Landsat 7 imagery acquired between June 1999 and July 2019.

Fig. 7 Change of algal biomass (% per year) in Lake Balaton between 1999 and 2013 (A) and between 2015 and 2019 (B), based on annual data sets; maps of the trend line slopes from the fitted linear regressions.

Fig. 8 Change of algal biomass (% per year) in the basins of Lake Balaton between 2000 and 2018. The dotted line is the 20-year average, while the dashed lines are the standard error lines for the 20-year data set.
Expression of Longevity Genes Induced by a Low-Dose Fluvastatin and Valsartan Combination with the Potential to Prevent/Treat "Aging-Related Disorders"

The incidence of aging-related disorders may be decreased through strategies influencing the expression of longevity genes. Although numerous approaches have been suggested, no effective, safe, and easily applicable approach is yet available. The efficacy of low-dose fluvastatin and valsartan, separately or in combination, on the expression of longevity genes in middle-aged males was assessed. Stored blood samples from 130 apparently healthy middle-aged males treated with fluvastatin (10 mg daily), valsartan (20 mg daily), a fluvastatin-valsartan combination (10 and 20 mg, respectively), or placebo (control) were analyzed. They were taken before and after 30 days of treatment and, additionally, five months after treatment discontinuation. The expression of the following longevity genes was assessed: SIRT1, PRKAA, KLOTHO, NFE2L2, mTOR, and NF-κB. Treatment with fluvastatin and valsartan in combination significantly increased the expression of SIRT1 (1.8-fold; p < 0.0001), PRKAA (1.5-fold; p = 0.0262) and KLOTHO (1.7-fold; p < 0.0001), but not NFE2L2, mTOR and NF-κB. Both fluvastatin and valsartan alone significantly, but to a lesser extent, increased the expression of SIRT1 and did not influence the expression of the other genes. Five months after treatment discontinuation, gene expression decreased to basal levels. In addition, analysis together with previously obtained results revealed a significant correlation between SIRT1 and both increased telomerase activity and improved arterial wall characteristics. We showed that low-dose fluvastatin and valsartan, separately and in combination, substantially increase the expression of the SIRT1, PRKAA and KLOTHO genes, which may be attributed to their so far unreported pleiotropic beneficial effects. This approach could be used for the prevention of aging (and longevity gene)-related disorders.

Introduction

An aging population, along with increased life expectancy and the prevalence of associated chronic diseases, has become an important medical and economic issue. Consequently, the burden of so-called "aging-related disorders", such as associated cardiovascular diseases, degenerative diseases of the central nervous system, and malignant diseases, is also increasing [1,2]. These diseases represent one of the leading problems of healthcare systems in developed countries around the globe. Effective strategies for their prevention are therefore needed.

Aging-related disorders have been found to be causally associated with the altered expression of aging-related or so-called longevity genes [3]. In addition, telomere length per se [4], as well as telomerase expression [5], are also associated with aging-related disorders. In a narrower context, it seems logical that these genes represent mechanistic intracellular targets that could be altered to change the intracellular pro-aging milieu. It is expected that with the induction of expression of protective genes and the suppression of expression of harmful genes, a new rejuvenated cellular phenotype could be reached that could influence the occurrence and course of aging-related disorders. In summary, so-called rejuvenating strategies, focused on modifying the expression of such genes, could have an important impact on the prevalence of aging-related disorders [3].
In our prior studies, we explored the functional and structural characteristics of the arterial wall, some of which are also characteristic of arterial aging, and were particularly interested in the improvement of the altered characteristics of the arterial wall by low doses of fluvastatin and valsartan [6]. We found a significant improvement of arterial wall characteristics after 30 days of treatment in middle-aged males, with the beneficial effect slowly declining within nine months of treatment discontinuation [7-9]. Importantly for age-related changes, the improvement of arterial wall characteristics was associated with increased telomerase expression and reduced inflammation and oxidative stress parameters [10,11]. Decreased telomerase expression, along with decreased telomere length and increased activation of inflammation and oxidative stress, is characteristic of aging, arterial aging, and aging-related disorders. Therefore, we hypothesized that "anti-aging" or "rejuvenation" effects could be achieved through treatment impacting the longevity genes. This impact should have the capacity to influence "aging-related" disorders. Considering our previous results, this approach could be particularly effective in decreasing aging-related changes of the arterial system.

In the present study, we explored the efficacy of an approach consisting of (short-term) treatment with low-dose fluvastatin and valsartan, alone or in combination, on the expression of longevity genes in middle-aged males who already had (sub-clinically) impaired functional and structural arterial wall characteristics, which could at least partially be attributed to the initial processes of aging.

Expression of Longevity Genes

We analyzed the expression of longevity genes in the treatment and control groups. At the beginning of the study, there was no difference in longevity gene expression among the four groups (low-dose fluvastatin, low-dose valsartan, low-dose fluvastatin and valsartan combination, and the control group). Differences between the groups were observed only after 30 days of treatment. Five months after treatment discontinuation, the expression of longevity genes in all treatment groups decreased almost to initial values. In the control group, the expression of longevity genes did not change during the study (Figure 1A-F).

Figure 1. "0" represents the time before treatment, "30" represents the end of treatment, i.e., after 30 days, and "FU" represents five months after treatment discontinuation. Values are presented as means ± SEM. The p values presented are after Benjamini-Hochberg false discovery rate (FDR) correction, with the significance threshold set at p < 0.05. * signifies p < 0.05 and *** p < 0.001 vs. the control group. SIRT1, sirtuin 1 gene; PRKAA, 5′-AMP-activated protein kinase catalytic subunit α-2 gene; NFE2L2, nuclear factor (erythroid-derived 2)-like 2 gene; mTOR, mechanistic target of rapamycin gene; NF-κB1, nuclear factor κB gene.

Sirtuin 1 (SIRT1) Gene Expression

Both low-dose fluvastatin and low-dose valsartan separately increased the expression of the SIRT1 gene after 30 days of treatment, up to 1.4-fold (p = 0.0165 and p = 0.0229, respectively), while their low-dose combination increased its expression up to 1.8-fold (p < 0.0001) compared to the control group. Five months after treatment discontinuation, no significant effects of the treatment on SIRT1 gene expression were observed (Figure 1A).
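The figure legend above states that the p values were adjusted with the Benjamini-Hochberg false discovery rate procedure; a minimal sketch of that adjustment (the raw p values below are placeholders, not the study's results):

```python
import numpy as np

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg FDR adjustment: sort the p values, scale the
    i-th smallest by m/i, then enforce monotonicity from the largest down."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # running minimum from the right keeps the adjusted values monotone
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.empty(m)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

# Placeholder raw p values for six genes (not the study's values)
raw = [0.0001, 0.004, 0.02, 0.30, 0.55, 0.80]
print(benjamini_hochberg(raw))
```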
5′-AMP-Activated Protein Kinase Catalytic Subunit α-2 (PRKAA) Gene Expression

After 30 days of treatment, only the low-dose fluvastatin and valsartan combination significantly increased the expression of the PRKAA gene, up to 1.5-fold (p = 0.0262) compared to the control group. The drugs given separately had no influence on PRKAA gene expression (Figure 1B).

KLOTHO Gene Expression

After 30 days of treatment, only the low-dose fluvastatin and valsartan combination significantly increased the expression of the KLOTHO gene, up to 1.7-fold (p < 0.0001) compared to the control group. The drugs given separately had no influence on KLOTHO gene expression (Figure 1C).

Nuclear Factor (Erythroid-Derived 2)-Like 2 (NFE2L2) Gene Expression

No significant changes of NFE2L2 gene expression were observed in any of the study groups (Figure 1D).

Mechanistic Target of Rapamycin (mTOR) Gene Expression

No significant changes were observed in mTOR gene expression in any of the study groups (Figure 1E).

Nuclear Factor κB (NF-κB) Gene Expression

No statistically significant changes in the expression of the NF-κB gene were observed either (Figure 1F).

Correlations between the Expression of Longevity Genes, Telomerase Activity and Arterial Wall Properties

The correlations between longevity gene expression and the previously described telomerase activity and arterial wall properties [7-10], all measured after 30 days of treatment, were calculated. In the separate low-dose fluvastatin and valsartan groups, the expression of the SIRT1 gene positively correlated with telomerase activity (r = 0.42; p = 0.04 and r = 0.39; p = 0.03, respectively). Importantly, in the low-dose combination group, the expression of the SIRT1 gene positively correlated with telomerase activity (r = 0.62; p = 0.01) and with brachial artery flow-mediated dilation (FMD) (r = 0.52; p = 0.05), while it negatively correlated with carotid artery beta stiffness (r = −0.45; p = 0.02) and pulse wave velocity (PWV) (r = −0.56; p = 0.05).

Discussion

In the present study, we showed that low-dose fluvastatin and valsartan in combination significantly increased the expression of several important longevity genes (SIRT1, PRKAA, KLOTHO). Moreover, these changes correlated with the improvement of functional and structural arterial wall characteristics as well as with telomerase activity, both assessed previously [7-10]. Overall, the results revealed increased expression of several longevity genes that seems to be causally associated with increased telomerase activity and improvement of arterial wall (initial aging-related) characteristics. The results are very promising and indicate that our relatively simple but innovative approach could have potential efficacy as a "rejuvenating agent," particularly in the efforts to decrease the occurrence of aging-related disorders.

The present study was designed as a two-part study: the first part comprised the measurement of longevity genes in relation to treatment with low-dose fluvastatin and valsartan, and the second part comprised a correlation analysis with previously measured relevant parameters (telomerase activity and functional/structural arterial wall characteristics). The studied population was a group of middle-aged males with already present aging-related changes. Since we found that a low-dose combination of fluvastatin and valsartan improved arterial wall characteristics, we aimed to further explore the mechanism behind these beneficial effects.
Thus, from the previously obtained samples, we assessed the expression of longevity genes to explore the potential rejuvenating effect of our approach. We found that the low-dose fluvastatin and valsartan combination increased the expression of the SIRT1 (1.8-fold; p < 0.0001), PRKAA (1.5-fold; p = 0.0262) and KLOTHO (1.7-fold; p < 0.0001) genes after 30 days, whereas no differences in the expression of the NFE2L2, mTOR, and NF-κB genes were observed. Fluvastatin and valsartan alone were less effective, increasing only the expression of the SIRT1 gene, and to a lesser extent. Moreover, the expression of the SIRT1 gene in the combination group positively correlated with telomerase activity and improvement of arterial wall characteristics.

Importantly, the FDA recently approved the first interventional "anti-aging" study (MILES: Metformin In Longevity Study). Metformin, repositioned for this purpose from a solely antidiabetic drug to an "anti-aging" drug, is the interventional agent. This is based on a wealth of data indicating that metformin could influence the aging process or, more importantly, aging-related disorders. Interestingly, one of the most prominent of the several hypotheses underlying the "anti-aging" effects of metformin is the activation of longevity genes, with consequent effects on energy metabolism, inflammation and oxidative stress. Several other currently ongoing studies with metformin, such as VA-IMPACT, TAME and ePREDICE, are focusing on its "anti-aging" effects. In any event, a new period in which specific treatments of "aging-related disorders" are being studied has already begun.

Sirtuins are a family of nicotinamide adenine dinucleotide (NAD)-dependent deacetylases and, according to some studies, are among the key molecules involved in the regulation of aging and aging-related disorders [12]. SIRT1 regulates DNA transcription and repair as well as cell survival, thus also inducing longevity. It was shown to have an important role in aging-related disorders of the cardiovascular [5] and nervous systems [13]. Some statins in therapeutic doses were shown to induce SIRT1 expression, for example, simvastatin in endothelial progenitor cells [14]. On the other hand, atorvastatin and rosuvastatin reduced its expression in patients with coronary artery disease [15]. Until now, the effect of fluvastatin in therapeutic or low doses on SIRT1 expression had not been assessed. To the best of our knowledge and according to the literature, no studies have assessed the effect of valsartan on the SIRT1 gene in humans, either. A few studies were performed on rats or mice, in which valsartan and other sartans increased the expression of the SIRT1 gene [16-18].

The PRKAA gene encodes the catalytic subunit of AMPK. AMPK is the primary regulator of cellular responses and acts as a sensor to maintain energy balance within the cell [19]. In animal and cell culture studies, several statins were shown to induce the AMPK and eNOS pathway, acting in a vasoprotective manner [20-22]. Valsartan was also shown to act protectively through activation of the AMPK pathway in diabetic rats [23]; similar effects have been shown for telmisartan in human coronary artery cells [24]. The consequences of AMPK activation are divergent: acute activation causes cell protection, whereas chronic activation might activate pro-aging pathways and progressive degeneration during cellular senescence. There are various interactions between the sirtuin and AMPK pathways [19].
The expression of KLOTHO is decreased in aging-related disorders [25]. Valsartan in therapeutic doses increased the amount of plasma-soluble KLOTHO and consequently induced cardiorenal beneficial effects in patients with diabetes mellitus and diabetic kidney disease [26]. The effects of valsartan in low doses have not yet been studied in such a setting. The potential beneficial effects of statins on KLOTHO expression have only been shown in animal studies [27].

The NFE2L2 gene encodes a transcription factor which regulates the proteins involved in responding to injury and inflammation. According to some studies, enhancing NFE2L2 activity may be beneficial in diabetic cardiomyopathy and mitochondrial dysfunction, and as an anti-aging agent, but further studies are needed [28]. Fluvastatin was shown to induce NFE2L2 in vascular smooth muscle cells [29,30].

The mammalian target of rapamycin (mTOR) was shown to have an important role in cardiovascular diseases, oxidative stress and longevity [31]. In animal or cell studies, statins influenced mTOR, as shown for fluvastatin in rats [32] and for lovastatin in vascular smooth muscle cells [33]. In one study on rats, valsartan induced cardioprotection against ischemia-reperfusion injury through mTOR [34].

NF-κB gene expression mediates vascular and myocardial inflammation and is additionally associated with impaired endothelial function [35]. There is evidence that both statins and sartans could reduce NF-κB gene activation in various animal and cell line models [36,37].

To the best of our knowledge, no study like the present one has been published. In our review of the literature, we found several different studies that assessed the effects of statins or sartans on longevity genes, but most of those studies were performed either on cell lines or animals. Therefore, the present study is the first to show that our new preventive cardiovascular approach, which was proven to induce the improvement of arterial wall functional and structural characteristics and consequently decrease arterial age, additionally acted through the expression of several longevity genes. This could be one of the mechanisms lying behind the beneficial effects observed in our previous clinical studies [7-9]. Nevertheless, one of the limitations of the present study is that we used only qPCR; validation by Western blotting would be of added value.

The results of the present study indicate that our innovative approach using short-term low-dose fluvastatin and valsartan has potential to induce the expression of certain longevity genes. These effects could be anti-aging or rejuvenating, as well as act as a potential specific prevention/treatment for "aging-related" disorders. It can be speculated that cycling, intermittent treatment with the low-dose combination (every 6-12 years) starting at middle age could postpone the occurrence of aging-related disorders. On the other hand, this approach could be used in the same population and with the same aim as metformin in the MILES trial. One of the major advantages of this approach is its cyclic, intermittent character which, as previously described, could potentially activate the beneficial longevity genes for a time short enough not to also activate their counter-regulatory mechanisms. With intermittently repeating cycles, this could lead to repetitive beneficial activations of the protective longevity genes.
Thus, it could be speculated that the cumulative effect of these repeating cycles of treatment might eventually lead to successful specific prevention/treatment of "aging-related" disorders, most likely cardiovascular "aging-related" disorders.

Participants and Study Design

The stored blood samples from our three prior studies were used together for the present longevity gene expression study. Overall, 130 middle-aged, apparently healthy male participants were recruited and treated for one month (30 days): 25 persons with fluvastatin 10 mg daily, 20 persons with valsartan 20 mg daily, and 20 persons with a low-dose combination of fluvastatin (10 mg daily) and valsartan (20 mg daily). Accordingly, 65 participants received placebo. All the participants were blindly randomized into the relevant group, as in our previous studies, which are described in more detail elsewhere [7-9]. Blood samples were collected and ultrasound measurements of arterial wall properties (endothelial function, arterial stiffness) were performed at the beginning and at the end of the treatment period (day 0 and day 30). The measurements were also repeated five months after treatment discontinuation. The National Medical Ethics Committee of Slovenia approved the studies (approval date 3 July 2009, approval No. 21k/05/09) and informed consent was obtained from all participants. Inclusion criteria were: age between 30 and 50 years, non-smoking status, normal blood pressure values, body mass index below 30 kg/m², no clinical cardiovascular disease, no history of any other chronic disease, and no regular medication therapy. The characteristics of the subjects have already been extensively described in our previous publications [7-9].

Blood Sampling

Three samples of whole peripheral blood were collected from each participant: before treatment (day 0), after treatment conclusion (day 30), and five months after treatment discontinuation (follow-up). The whole blood samples were collected in 10 mL EDTA tubes and stored at −80 °C. Prior to RNA extraction, samples were centrifuged at 4000 rpm for 25 min to obtain the pellet of cells and cell debris. The pellets were then used for RNA extraction.

RNA Isolation

Total RNA was isolated using a miRNeasy Mini kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. RNA was quantified using the NanoDrop, and cDNA was synthesized from 300 ng of total RNA using the High-Capacity cDNA Reverse Transcription Kit with RNase Inhibitor (Applied Biosystems, Foster City, CA, USA) according to the manufacturer's protocol.

Quantitative Real-Time PCR (qPCR) for Longevity Gene Expression

The expression of target genes in the tested samples was measured using TaqMan Gene Expression Assays (Applied Biosystems) according to the manufacturer's instructions: assay Hs00183100_m1 for the KLOTHO gene; assay Hs01009006_m1 for the sirtuin 1 (SIRT1) gene; assay Hs01562315_m1 for the 5′-AMP-activated protein kinase (AMPK) gene; assay Hs00765730_m1 for the nuclear factor κB (NF-κB1) gene; assay Hs00234522_m1 for the mechanistic target of rapamycin (mTOR) gene; and assay Hs00975961_g1 for the nuclear factor (erythroid-derived 2)-like 2 (NFE2L2) gene. The housekeeping gene glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was used as an endogenous control. Briefly, qPCR was performed using the ABI 7900 instrument (Applied Biosystems).
Individual qPCR reactions were carried out in a 10 µL reaction mix with 2× TaqMan Universal PCR Master Mix (Applied Biosystems), 1× TaqMan Gene Expression Assay (Applied Biosystems) and 200 ng cDNA. Each sample was analyzed in triplicate. RNA isolated from healthy volunteers (n = 5) was used as a positive control for target gene expression. In each run, dilutions of the control RNA (a pool of RNA from healthy volunteers) were included. The data were analyzed with SDS 2.4 software and Ct values were extracted. Fold-differences in target gene expression were calculated using the comparative Ct method as described previously [38], with data normalized to day 0 for each participant.

Data Analysis

All values were expressed as means ± SEM. Differences between values were assessed by one-way analysis of variance (ANOVA). When a significant interaction was present, the Bonferroni post-test was performed. The Benjamini-Hochberg correction method was used to control the false discovery rate (FDR), with the significance threshold set at p < 0.05. Correlations between the previously described arterial wall properties and telomerase activity [7-9,11] and the longevity gene expression assessed in the present study were calculated for the 30-day treatment period using Pearson correlation coefficients; a minimal computational sketch of these calculations is given after the Conclusions. A p < 0.05 was considered significant. All statistical analyses were performed using GraphPad Prism 5.0 software.

Conclusions

In conclusion, the present study has shown that low-dose fluvastatin and valsartan treatment increased the expression of beneficial longevity genes (SIRT1, PRKAA, and KLOTHO) and could therefore represent a promising new treatment approach for "aging-related" disorders. Additional population-based research is needed to further substantiate the proposed concept.

Conflicts of Interest: The authors declare no conflict of interest.
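As referenced in the Data Analysis section, the following is a minimal sketch (Python) of the two computations described in the Methods: the comparative Ct (2^-ΔΔCt) normalization to GAPDH and to day 0, and a Pearson correlation. It is a generic illustration of these well-known methods, not the authors' actual analysis code, and all Ct values and measurements below are hypothetical placeholders.

import numpy as np
from scipy.stats import pearsonr

def fold_change(ct_target_d30, ct_gapdh_d30, ct_target_d0, ct_gapdh_d0):
    """Comparative Ct method: expression normalized to GAPDH and to day 0."""
    d_ct_30 = ct_target_d30 - ct_gapdh_d30   # delta Ct at day 30
    d_ct_0 = ct_target_d0 - ct_gapdh_d0      # delta Ct at day 0
    dd_ct = d_ct_30 - d_ct_0                 # delta-delta Ct
    return 2.0 ** (-dd_ct)                   # relative expression

# Hypothetical SIRT1 Ct values for one participant (triplicate means)
print(fold_change(24.1, 18.0, 25.0, 18.1))   # ~1.7-fold induction

# Hypothetical paired per-participant measurements
sirt1_expression = np.array([1.2, 1.8, 1.5, 2.1, 1.9])
telomerase_activity = np.array([0.9, 1.6, 1.2, 1.9, 1.5])
r, p = pearsonr(sirt1_expression, telomerase_activity)
print(f"r = {r:.2f}, p = {p:.3f}")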
PABLO-QA: A sensitive assay for quantifying monophosphorylated RNA 5′ ends

Summary

Generated by RNA deprotection or cleavage, 5′ monophosphates trigger RNA degradation in all organisms. Here we describe PABLO-QA (Phosphorylation Assay By Ligation of Oligonucleotides and Quantitative Amplification), a sensitive, low-cost procedure for determining the percentage of specific RNA 5′ ends that are monophosphorylated from their ability to undergo ligation to an oligonucleotide. Comparison to a cognate internal standard and a fully monophosphorylated control allows precise quantification of monophosphorylated 5′ termini by RT-PCR, enabling the analysis of transcripts undetectable by blotting. For complete details on the use and execution of this protocol, please refer to Richards and Belasco (2021).

BEFORE YOU BEGIN

In both prokaryotic and eukaryotic organisms, rates of RNA processing and degradation are often governed by 5′-terminal deprotection to generate a 5′ monophosphate that triggers subsequent ribonucleolytic attack (Muhlrad et al., 1994; Deana et al., 2008; Richards et al., 2011; Cahová et al., 2015). To enable regulatory pathways of this kind to be dissected, we previously devised a quantitative procedure known as PABLO (Phosphorylation Assay By Ligation of Oligonucleotides) for determining the percentage of a particular RNA 5′ terminus that is monophosphorylated on the basis of the unique ability of such ends to undergo splinted ligation to a synthetic oligonucleotide (Celesnik et al., 2007; Luciano and Belasco, 2019). A limitation of that method was its reliance on Northern blotting, as many cellular transcripts are present at a concentration that is insufficient for detection on a blot. It also required prior knowledge of the precise location of the 5′ end of interest.

We recently modified that assay to improve its sensitivity. Because the new procedure, PABLO-QA (Phosphorylation Assay By Ligation of Oligonucleotides and Quantitative Amplification), involves reverse transcription and PCR amplification, quantification requires spiking in a cognate internal standard whose RT-PCR yield is compared to that of the cellular RNA under investigation (Richards and Belasco, 2021). In addition, to correct for transcript-dependent differences in ligation efficiency, RT-PCR yields are compared before and after treating the RNA sample with an excess of the RNA pyrophosphohydrolase RppH to fully convert each 5′ end to a monophosphate. A benefit of using T4 RNA ligase 1 instead of T4 DNA ligase for PABLO-QA is that it obviates the need for a splint, thereby enabling the simultaneous analysis of heterogeneous RNA 5′ termini, whose exact locations are conveniently determined by sequencing the RT-PCR products.
PABLO-QA also has significant advantages over other methods for measuring the percentage of 5′ ends that are monophosphorylated. Compared to those based on RNA-seq (German et al., 2009; Bischler et al., 2017), it is less costly, generates data that are easier to analyze, and enables accurate measurements even for low-abundance RNAs. In addition, it is quantitatively more reliable than methods based on sensitivity to degradation by 5′-monophosphate-dependent exonucleases such as XRN1 and Terminator (Bandyra et al., 2012), especially when less than half of the 5′ termini under investigation are monophosphorylated.

PABLO-QA involves two stages of analysis. The preliminary stage (i) verifies the efficacy and selectivity of two transcript-specific primers designed by the investigator, (ii) identifies the precise location of the RNA 5′ end(s), and (iii) suggests a suitable location for the 5′ end of a cognate internal standard, which is then synthesized in vitro as a fully monophosphorylated transcript and spiked into the cellular RNA. In the final stage, the percentage of 5′ termini that are monophosphorylated is determined by comparing the band intensities of the RT-PCR products obtained from the RNA 5′ end(s) of interest and the internal standard before and after exhaustive treatment with RppH to convert all of the cellular 5′ ends to monophosphates.

Oligonucleotides

Timing: 1 h

PABLO-QA requires several desalted oligonucleotides, most of which are universal but two of which (primer X and primer Y) are specific for the RNA of interest (see Figure 1 and the key resources table for a diagram and sequences). The RNA-specific primers are designed as follows.

1. Design a DNA primer (primer X) for target-specific reverse transcription of the RNA of interest and the first round of PCR.
a. This primer needs to anneal far enough downstream of the transcription start site for the product of the first round of PCR to be >100 nt long. This allows for purification of the first-round PCR products on a Qiagen QIAquick column. A primer length of 18-25 nt, depending on its GC content, is sufficient.
b. A possible pitfall in positioning a reverse transcription primer too far from the RNA 5′ end is that it may increase the likelihood of intervening RNA structure that could hinder reverse transcriptase. For that reason, a primer that generates an extension product longer than 250 nt should generally be avoided.
2. Design a nested DNA primer (primer Y) for the second round of PCR. A primer length of 18-25 nt, depending on its GC content, is sufficient. This primer should be designed to anneal 15-50 nt downstream of the 5′ end of the transcript of interest so as to generate one or more PCR products that are sufficiently short (<100 bp) to enable resolution of closely spaced 5′ ends.

RNA extraction

Timing: 1 day

200-400 µg of total RNA is required to complete the analysis. It can be isolated from bacterial or eukaryotic cells by a number of methods, such as extraction with hot acidic phenol (Luciano et al., 2017), extraction with a phenol/guanidine isothiocyanate reagent (e.g., Invitrogen TRIzol), or elution from a silica matrix (e.g., QIAGEN RNeasy or New England Biolabs Monarch), and must be DNase-treated before use.

Figure 1 legend: (1) Ligation of the chimeric oligonucleotide (blue and red rectangle) to monophosphorylated RNA (gray line preceded by the letter p). (2) Reverse transcription by extension of primer X (broad white arrow).
(3) First round of PCR amplification by extension of primers A (broad blue arrow) and X. (4) Second round of PCR amplification by extension of nested primers B (broad red arrow) and Y (broad black arrow). The products of the second round of PCR amplification are then either examined by gel electrophoresis to compare band intensities and calculate the percentage of 5′ ends that are monophosphorylated (% monoP) or gel purified, individually PCR amplified a third time by extension of primers C (broad green and red arrow) and Y, gel purified again, and sequenced with a primer (primer D, not shown) matching the 5′-terminal segment of primer C to map RNA 5′ ends.

Exhaustive pretreatment of the cellular RNA with RppH generates a control sample in which the 5′ end of the transcript of interest is fully monophosphorylated. In the preliminary stage of analysis, this allows the visualization of transcripts that are predominantly triphosphorylated or capped. Additionally, in the final, quantitative stage of analysis, this control enables the percentage of RNA 5′ ends that are monophosphorylated to be determined by making it possible to correct mathematically for transcript-dependent differences in ligation efficiency.

2. Incubate at 37 °C for 2 h.
3. Add 130 µL of 3 mM EDTA (pH 8.0), extract with phenol/chloroform (pH 4.3), and ethanol precipitate the RNA.
a. Add an equal volume of phenol/chloroform (pH 4.3) and mix by shaking.
b. Centrifuge at 15,000 × g for 5 min at room temperature. Collect the aqueous (upper) layer and transfer it to a new microcentrifuge tube.
c. Add one-tenth volume of 3 M sodium acetate (pH 5.2) and 3 volumes of ethanol. Vortex and incubate at −20 °C for 15 min.

Pause point: At this point the samples can be stored indefinitely at −20 °C.

d. Centrifuge at 15,000 × g for 30 min at 4 °C and discard the supernatant.
e. Add 800 µL of 70% ethanol and centrifuge at 15,000 × g for 10 min at 4 °C.
f. Discard the supernatant and air dry the RNA pellets until no liquid is visible.
4. Dissolve the RNA pellets in 10 µL of sterile water.

Pause point: At this point the samples can be stored indefinitely at −20 °C.

Ligation of a chimeric oligonucleotide

Timing: 4 h

This step uses T4 RNA ligase 1 to ligate a chimeric oligonucleotide to the 5′ end of transcripts that are monophosphorylated (Figure 1). Comprising 41 deoxyribonucleotides and three 3′-terminal ribonucleotides, this chimeric oligonucleotide is long enough to encompass the sequences of two nested forward primers (primers A and B) (key resources table). The three ribonucleotides at its 3′ end are required for T4 RNA ligase 1 to efficiently ligate the oligonucleotide to 5′-monophosphorylated RNA, while the inclusion of deoxyribonucleotides lowers the cost of synthesis.

5. Assemble the ligation mixtures for the mock-treated and RppH-treated cellular RNA samples.
6. Incubate at 65 °C for 5 min and then transfer to ice for 1 min.
7. Add 14 µL of ligase master mix to each tube.
8. Mix thoroughly and incubate at 37 °C for 2 h.
9. Add 125 µL of 3 mM EDTA (pH 8.0), extract with phenol/chloroform (pH 4.3), and ethanol precipitate as described in step 3. Dissolve the RNA pellets in 11.5 µL of sterile water.

Pause point: At this point the samples can be stored indefinitely at −20 °C.

Reverse transcription

Timing: 2 h

Reverse transcription with a transcript-specific primer generates cDNA complementary to a 5′-terminal segment of the RNA of interest and the chimeric oligonucleotide to which the RNA has been ligated (Figure 1).
10. Transfer the ligated RNA sample to a 0.2-mL thin-walled PCR tube and mix with primer X and the dNTP mix. Incubate at 65 °C for 5 min and transfer to ice for 1 min.
11. Prepare a master mix containing reverse transcriptase buffer, DTT, RNasin, and reverse transcriptase, and combine it with the nucleic acid mixture.
12. Transfer the tube to a thermocycler with a heated lid. Incubate at 55 °C for 1 h and then at 70 °C for 15 min.

Pause point: At this point the samples can be stored indefinitely at −20 °C.

Amplification

Timing: 6 h

Two rounds of PCR with nested primers (Figure 1) allow the detection of low-abundance cellular RNAs while achieving a high level of transcript specificity.

b. Use a thermocycler with a heated lid programmed as follows.
14. Remove the first-round primers by purifying the PCR reaction products with a Qiagen QIAquick PCR purification kit according to the manufacturer's instructions and eluting the DNA from the column with sterile water.
15. Perform the second round of PCR.
a. Assemble the PCR mixtures.
b. Use the same thermal cycling conditions as for the first-round PCR, as described in step 13.

Pause point: Samples can be stored indefinitely at −20 °C after either of the PCR steps.

Electrophoresis

Timing: 5 h

Electrophoresis on a non-denaturing 12% polyacrylamide gel allows the PCR products to be separated even if derived from closely spaced transcription start sites or RNA cleavage sites (Figure 1).

Note: We routinely use a V16 vertical gel system with a 19-well comb and a gel thickness of 1.5 mm.

17. Use 1× TBE as the gel running buffer and pre-run the polyacrylamide gel for 30 min at 100 V.
18. Add 5 µL of non-denaturing sample buffer containing bromophenol blue to each PCR reaction product.

PCR is used to amplify the small amount of DNA extracted from the polyacrylamide gel and to lengthen the segment upstream of the junction between the ligated oligonucleotide and the RNA (Figure 1). This allows sequencing of the RNA 5′ end that was ligated to the chimeric oligonucleotide, which otherwise would be too close to the end of the PCR product.

23. Extract DNA from the excised polyacrylamide gel slice(s).
a. Place the excised gel slice into a 1.5-mL microcentrifuge tube and crush the slice with a 1-mL pipette tip. Rolling the tip around the wall of the microcentrifuge tube breaks the polyacrylamide into small pieces.
b. Add 300 µL or more of 1 mM EDTA (pH 8.0) to the tube, enough to fully submerge the gel pieces. Vortex and incubate for 16 h at 4 °C.
c. Pellet the gel fragments by centrifugation at 15,000 × g at 4 °C for 30 min. Use a pipette to carefully transfer the supernatant to another microcentrifuge tube. If the supernatant still contains small gel pieces, repeat the centrifugation and transfer.
d. Add an equal volume of phenol/chloroform (pH 7.6) to the supernatant and mix by shaking.
e. Centrifuge at 15,000 × g for 5 min at room temperature. Transfer the aqueous layer to another microcentrifuge tube.
f. Add one-tenth volume of 3 M sodium acetate (pH 5.2) and 3 volumes of ethanol. Vortex briefly and incubate at −20 °C for 15 min.

Pause point: At this point the samples can be stored indefinitely at −20 °C.

g. Centrifuge at 15,000 × g for 30 min at 4 °C and discard the supernatant.
h. Add 800 µL of 70% ethanol and centrifuge at 15,000 × g for 10 min at 4 °C.
i. Discard the supernatant and air dry the precipitate until no liquid is visible.
j. Dissolve the DNA precipitate in 10 µL of water.
24. Amplify and extend each purified PCR product by performing another round of PCR with universal primer C (key resources table) and transcript-specific primer Y.
a. Assemble the PCR mixtures.
b. Use the same thermal cycling conditions as for the first-round PCR, as described in step 13.
25. Resolve the PCR products on a horizontal 1.8% agarose gel in 1× TBE alongside an appropriate DNA size ladder. Stain the gel with ethidium bromide, visualize the PCR products with a UV transilluminator, and excise the bands of interest.
26. Extract the PCR products from the gel slices by using a Qiagen QIAquick gel purification kit according to the manufacturer's instructions.
27. Using primer D (key resources table), sequence the PCR products to identify the 5′ end of each of the original RNAs.

Preparation of an internal standard

Timing: 2 days

Determining the percentage of the 5′ ends of interest that are monophosphorylated requires quantitative comparison to a monophosphorylated internal standard that can be reverse transcribed and amplified with the same set of primers. The RT-PCR product of this cognate internal standard must be well resolved from the other RT-PCR products. Visualization of the products of the preliminary round of PABLO-QA on a polyacrylamide gel makes it possible to identify a clear zone on the gel above the RT-PCR products arising from cellular RNA. An internal standard whose 5′ terminus is located 10-50 nucleotides upstream of the 5′ end of the longest cellular transcript under investigation should generate an RT-PCR product that migrates there.

28. Use PCR to generate a DNA template for synthesizing the internal standard by in vitro transcription (Figure 2).
a. The internal standard should be cognate to the cellular transcript under investigation, but with a 5′ extension.
b. The template for PCR can be a cell suspension or a cloned DNA fragment encoding the RNA of interest.
c. The forward PCR primer (primer E) should incorporate a T7 promoter (TAATACGACTCACTATAG, underlined) upstream of the intended transcription start site (boldface G). This promoter should be preceded by 5 nucleotides and followed by about 20 transcribed nucleotides complementary to the PCR template.
d. Use primer X as the reverse PCR primer.
e. Assemble the PCR mixture.
f. Use a thermocycler with a heated lid programmed as follows.
g. Resolve the PCR products by electrophoresis on a 1.2%-1.8% agarose gel in 1× TBE alongside an appropriate DNA size ladder. Stain the gel with ethidium bromide, visualize the PCR products with a UV transilluminator, and excise the band of interest. Extract the PCR product from the gel slice by using a Qiagen QIAquick gel purification kit according to the manufacturer's instructions.
h. For use as a template for in vitro transcription, the concentration of the purified PCR product should be >20 ng/µL. If necessary, its concentration can be increased by evaporation (e.g., on a SpeedVac concentrator).
29. Synthesize the monophosphorylated internal standard by in vitro transcription with T7 polymerase in the presence of a 30-fold molar excess of GMP over GTP (Figure 2).
a. Assemble the reaction mixture.
b. Incubate the reaction mixture at 37 °C for 4-12 h.
c. Degrade the DNA template by adding 11 µL of 10× Turbo DNase buffer and 4 µL (8 U) of Turbo DNase to the RNA synthesis reaction. Mix well and incubate at 37 °C for 1-2 h.
30. Purify the in vitro transcript on a 6% denaturing polyacrylamide gel.
a. Cast a denaturing 6% polyacrylamide-urea gel.
Note: We routinely use a V16 vertical gel system with a 19-well comb and a gel thickness of 1.5 mm.

Note: This method of purification involves loading the products of in vitro transcription in two wells of unequal width. If a comb that creates a broad well for preparative electrophoresis is not available, then two or three narrower teeth of a regular comb can be taped together for that purpose.

b. Use 1× TBE as the running buffer for electrophoresis and pre-run the polyacrylamide gel for 30 min at 100 V.
c. Add 220 µL of formamide sample buffer to the in vitro transcription reaction, heat at 95 °C for 5 min, and transfer to ice.
d. Wash the unpolymerized acrylamide and urea out of the wells of the gel and load the denatured RNA sample into two wells: 20 µL in a narrow well (the marker lane) and the remainder (~315 µL) in a broad neighboring well (the preparative lane).
e. Run the gel at 100 V for 15 min or until the dye has entered the gel, and then increase the voltage to 180 V. Stop electrophoresis when the bromophenol blue has reached the bottom of the gel.
f. Cut the gel vertically between the two lanes and stain the marker lane with 0.5 µg/mL ethidium bromide in 1× TBE for 30 min with gentle agitation. Cover the remainder of the gel with plastic wrap or a plastic sheet protector to prevent it from drying out.
g. Place the stained part of the gel on a sheet of clear plastic such as a sheet protector. Visualize the RNA band with a UV transilluminator and use a pen to mark the upper and lower boundaries of the band on the sheet protector.
h. Place the unstained portion of the gel on the marked sheet protector so as to align it with the marker lane. Use the outline of the marker band as a guide for excising the RNA band from the unstained part of the gel. This method allows the RNA product to be gel purified without exposing it to ethidium bromide or UV light.
i. Extract the RNA from the polyacrylamide gel slice by the method described in step 23, substituting phenol/chloroform (pH 4.3) for phenol/chloroform (pH 7.6).
j. After ethanol precipitating and drying the RNA, dissolve it in 40 µL of sterile water and determine its concentration by measuring the absorbance at 260 nm.

Final analysis

Timing: 21 h

Using RT-PCR to measure the percentage of a cellular transcript that is monophosphorylated requires comparison to a cognate internal standard that itself is monophosphorylated and can be reverse transcribed and replicated with the same set of primers. To be informative, the internal standard must be added at a concentration that is comparable to that of the transcript(s) under investigation. This concentration is determined empirically by serially diluting the internal standard and spiking equal amounts (0.001-10 ng) into samples of cellular RNA (7.5 µg) that have or have not been treated with RppH. The resulting pairs of mixtures are then ligated to the chimeric oligonucleotide, reverse transcribed, PCR amplified, and examined by electrophoresis as described for the preliminary analysis.

RppH treatment

Perform as described for the preliminary analysis.

Ligation of a chimeric oligonucleotide

Perform as described for the preliminary analysis (Figure 1), but include the monophosphorylated internal standard.

Note: An additional nucleic acid mixture should contain only the monophosphorylated internal standard and the chimeric oligonucleotide, with 7.5 µg of tRNA, poly(A), or total RNA from another species substituted for the cellular RNA listed above.
Parallel analysis of this additional mixture enables the bands on the final polyacrylamide gel to be identified correctly.

31. Incubate the nucleic acid mixture at 65 °C for 5 min and then transfer it to ice for 1 min.
32. Add 9 µL of ligase master mix to each tube.
33. Mix thoroughly and incubate at 37 °C for 2 h.
34. Add 125 µL of 3 mM EDTA (pH 8.0), extract with phenol/chloroform (pH 4.3), and ethanol precipitate as described in step 3. Dissolve the RNA pellets in 11.5 µL of sterile water.

Reverse transcription, amplification, and electrophoresis

Perform as described for the preliminary analysis (Figure 1). Quantify as described below.

Quantitative analysis of PABLO-QA data

The final stage of PABLO-QA concludes with quantifying band intensities and calculating the percentage of 5′ ends that are monophosphorylated. The two gel lanes (±RppH, spiked with identical amounts of the internal standard) in which the intensity of the DNA band representing the monophosphorylated internal standard is most similar to that for the transcript(s) of interest are scanned and quantified with ImageJ or commercial software to determine the relative intensities of those bands. Because ethidium bromide staining can result in substantial fluorescence between DNA bands, background correction is essential. Finally, the percentage of each cellular transcript that is monophosphorylated is calculated by dividing the background-corrected intensity ratio of transcript to internal standard in the mock-treated sample by the corresponding ratio in the sample pretreated with excess RppH (a minimal computational sketch of this calculation is given at the end of this protocol). The experiments (±RppH) with the optimal concentration of the internal standard should be performed in triplicate to allow mean values and standard deviations to be calculated.

EXPECTED OUTCOMES

PABLO-QA is a sensitive method for accurately measuring the percentage of any particular RNA 5′ end that is monophosphorylated, even when that percentage is small. It is particularly useful for examining the phosphorylation state of RNAs whose cellular concentration is low or that have multiple closely spaced 5′ termini. It can be used to differentiate RNA processing sites from transcription initiation sites, as the former will generally be 100% monophosphorylated whereas the latter will typically be more heterogeneous, potentially comprising a mixture of triphosphorylated, diphosphorylated, monophosphorylated, and/or capped 5′ ends caught at various stages of maturation or deprotection. For a 5′ terminus generated by transcription initiation, the percentage that is monophosphorylated reflects the relative rates of formation and decay of the monophosphorylated intermediate. In combination with genetic mutations, this information can be used to identify the RNA features and proteins that govern these processes.

We have used PABLO-QA to examine the phosphorylation state of the three principal 5′ ends of sugE mRNA in Legionella pneumophila (Richards and Belasco, 2021). A preliminary analysis of the kind described above identified the location of these termini and led us to design a cognate internal standard whose 5′ end preceded that of the longest sugE transcript by 19 nt (Figure 3). After spiking this fully monophosphorylated standard into total RNA from Legionella, a final PABLO-QA analysis was performed to determine the percentage of each sugE 5′ end that was monophosphorylated in vivo (Figure 4, Table 1).

Figure 3 legend: Preliminary analysis of Legionella pneumophila sugE 5′ ends by PABLO-QA. Total RNA extracted from L.
pneumophila by the hot phenol method (Richards and Belasco, 2021) was analyzed by PABLO-QA with sugE-specific primers in the absence of an internal standard, with or without prior treatment of the RNA with excess RppH. The sugE-specific primers X (AATCAGTGTGGTGGCTGTAA) and Y (TGTCTTAACCGGGTACGG) used for this analysis annealed 0.18 kb or 15-22 nt, respectively, downstream of the heterogeneous 5′ ends of sugE mRNA. Above the bands representing the sugE 5′ ends is a clear zone where no bands are visible. For comparison, a cognate internal standard whose principal PABLO-QA product migrates in the clear zone was analyzed in parallel. This internal standard comprised sugE mRNA bearing a 5′-terminal 19-nt extension derived from the sugE promoter region. It was synthesized by in vitro transcription of a DNA template prepared by PCR amplification of L. pneumophila DNA with sugE-specific primers, one of which contained a T7 promoter (underlined) followed by 22 nt of the sugE promoter and transcription unit (lowercase): CCAAAAGAATTCCAAATTAATACGACTCACTATTagtgatatgctataaaataatc.

In addition, the quantitative reliability of PABLO-QA was verified by analyzing a set of in vitro transcribed sugE RNA mixtures in which the 5′ phosphorylation state of the RNA was known in advance (Figure 5).

LIMITATIONS

Transcripts whose abundance is exceptionally low

Even though the use of PCR amplification makes PABLO-QA very sensitive, some transcripts may be so scarce that PCR cannot amplify their signal above that of non-specific amplification products.

Stable secondary structure that masks the 5′ end

In principle, sequestration of an RNA 5′ end in thermodynamically stable base pairing might impair its reactivity with RppH and/or T4 RNA ligase 1. Nevertheless, we have successfully used PABLO-QA to determine the phosphorylation state of a transcript with only one unpaired nucleotide at the 5′ end.

Figure 4 legend (partial): ...Figure 3, with or without prior treatment of the cellular RNA with excess RppH. (Right) The intensities of four bands representing the internal standard and three distinct sugE 5′ termini (P1, P2, and P3) were quantified and used to calculate the percentage of each sugE 5′ end that was monophosphorylated. Each value is the average of three biological replicates. Error bars correspond to standard deviations. Modified from Figure 5D of Richards and Belasco (2021).

TROUBLESHOOTING

Problem 1
Low yield of cellular RNA.

Potential solution
Thanks to PCR amplification, the mRNA of interest may be sufficiently abundant to allow less cellular RNA to be used in each reaction.

Problem 2
Structural obstacle to reverse transcription.

Potential solution
Additional PCR cycles may compensate for a low yield of full-length reverse transcription products caused by premature termination.

Problem 3
Non-specific amplification products.

Potential solution
This problem can be solved by redesigning primers X and Y or by increasing the PCR annealing temperature.

Problem 4
The DNA sequencing electropherogram obtained during 5′ end mapping contains overlaid peaks due to a mixed population of PCR products.
Potential solution
Overlaid DNA sequences indicate inadequate separation of the PCR products generated by amplification with primers B and Y. The electrophoretic resolution of these PCR products can be improved by changing the percentage of polyacrylamide or by allowing the PCR products to migrate further.

Figure 5 legend: The percentage of sugE model RNA that was monophosphorylated was calculated from the relative band intensities in each lane and compared to the actual percentage of monophosphorylated sugE model RNA in the original mixture. In these calculations, the ratio of band intensities in the lane representing fully monophosphorylated sugE model RNA (100% monoP) served as a surrogate for the ratio obtained with an RppH-treated sample. Each value is the average of three independent measurements. Error bars (some too small to see) correspond to standard deviations. The line represents an ideal experimental outcome. Reproduced from Figure 5C of Richards and Belasco (2021).

Problem 5
High background fluorescence on polyacrylamide gels stained with ethidium bromide.

Potential solution
Background fluorescence can be reduced by destaining the gel in 1× TBE for 10 min. If this does not resolve the problem, then use of a fluorescently labeled primer in the second round of PCR would obviate the need for ethidium staining.

RESOURCE AVAILABILITY

Lead contact
Inquiries about the protocol should be addressed to the lead contact, Joel Belasco (joel.belasco@med.nyu.edu).

Materials availability
The materials and reagents needed for this protocol are commercially available.

Data and code availability
All of the pertinent data are presented in Table 1 and Figures 3, 4, and 5. No computer code was generated in the course of this study.
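As referenced in the quantitative analysis section, the calculation divides the background-corrected transcript-to-standard intensity ratio in the mock-treated lane by the corresponding ratio in the RppH-treated lane, in which every 5′ end has been converted to a monophosphate. A minimal sketch of that arithmetic (Python), assuming background-corrected band intensities have already been measured (e.g., in ImageJ); the function name and the example numbers are illustrative and not part of the published protocol:

def percent_monophosphorylated(transcript_mock, standard_mock,
                               transcript_rpph, standard_rpph):
    """Percentage of 5' ends that are monophosphorylated in vivo.

    Inputs are background-corrected band intensities for the transcript of
    interest and the spiked internal standard, in the mock-treated (-RppH)
    and RppH-treated (+RppH) lanes.
    """
    ratio_mock = transcript_mock / standard_mock    # ligatable ends, -RppH
    ratio_rpph = transcript_rpph / standard_rpph    # all ends, +RppH
    return 100.0 * ratio_mock / ratio_rpph

# Hypothetical intensities for one 5' end and the internal standard
print(percent_monophosphorylated(1200.0, 4000.0, 5100.0, 4250.0))  # 25.0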
Subjection, Social Work and Social Theory

Reflecting on Judith Butler's conception of 'performativity', this paper argues that the notion has important implications for contemporary debates over agency, subjection and 'resistance' in social work. Using wider social theory drawn from the post-structuralist Butler, the paper makes sense of complex professional-service user relations. The article explores the possibilities and problems for resisting dominant power relationships in micro and meso settings.

INTRODUCTION

Before we problematize the ingenuity of the theoretical insights of Judith Butler (1990; 1995) on professionalisation and the relationship to service users such as older people, let us begin the article by stating that "old age" is not of itself a 'problem' or 'pathology'. 'Older people' are not a homogeneous group, and categorisation as a distinct service user group is, arguably, contentious (Phillipson 2013; Chen and Powell, 2010). Furthermore, since the advent of personalisation in England, conceptualising support by user groups is considered by many as obsolete (Poll and Duffy 2008). People do not receive social services by virtue of being 'older'. Rather, they are in need of a service - for example, because of ill health, physical impairment, mental health difficulties, addiction or offending.

This article looks in more detail at the incidence and consequence of social policies for service users such as older people through the distinctly American post-structuralist concept of performativity (Butler, 1990; 1995). This will enable us to consider the implications of the re-figuring of the relationship between the state and professional social work. This re-figuring constructs an ambiguous place for service users: they feature either as a resource - captured in the idea of the 'active citizen', as affluent consumers, volunteers or providers of child care - or as a problem in the context of poverty, vulnerability and risk. In many ways, policy provides three trajectories for older people: first, as independent self-managing consumers with private means and resources; second, as people in need of some support to enable them to continue to self-manage; and third, as dependent and unable to commit to performance management (Butler 1998a). Butler's (1995) notion of performativity provides the theoretical framework through which to view policy and practice that is largely governed by discourses of personalisation, safeguarding, capability and risk.

DEMOGRAPHICS, POVERTY AND AGEISM

Before moving on to assess Butler's (1990; 1998a) performativity and the construction of professionalisation and old age, it is highly pertinent to explore and problematise the notion of old age through consideration of demographics, poverty and ageism, because these issues are intertwined with the way social policy targets both older people and those who work with them.

(i) Demographics

First we will consider demographics and some of the contradictions that lie within the figures. Much of the anxiety that surrounds the debate about old age concerns the proportion of the population that is older, non-economically productive and in some way dependent. In addition, changes in intergenerational family relations provoke concerns and anxiety over who has responsibility for supporting older people: the family or the state.
Media hype fuels such concerns with suggestions that the costs of supporting an 'explosion' [sic] of older dependent people will overwhelm the ability of the reducing proportion of the population that is economically active and paying tax to fund the provision of care (Kemshall, 2002). In addition, a parallel argument suggests that the state is committing future generations to an unaffordable financial burden via pension payments and state-funded support. Such beliefs work to construct an image of older people as dependent and a burden on their children and the taxpayer, and do much to fuel discrimination and ageism (Gilleard and Higgs, 2005).

It is correct that demographic changes are occurring, with a reduction in the birth rate and an extension of life expectancy: projections suggest that there will be over 10 million people aged 65 and over by 2021 or, alternatively, that the over-65s will make up 17.2 per cent of the population (Phillipson, 2008). It is also the case that the over-65s are in percentage terms the highest users of health and social care services (Kemshall, 2002). Nevertheless, it is a cause for celebration that the last 25 years or so have seen progressive increases in life expectancy. In 2008, approximately 8.3 per cent of the population were between 65 and 74, 5.8 per cent were aged 75-84 and 2.2 per cent were 85 or older. 410,000 people were over 90 and 10,000 over 100 (Bayliss and Sly 2010). But despite the headline costs, only a small proportion of people in the older age bands require personal social services (Johnson, 1999). Many of us can look forward to an active and relatively healthy old age.

It is clear that predicting the future needs for support for specific individuals is more difficult in old age than in other periods of life. Nevertheless, the influence of major social variables such as class, race and gender continues to show a differential impact on morbidity and acquired limiting conditions, as well as on overall life expectancy. In particular, class-based differences show the influence of external factors from earlier parts of the life-course, particularly the pre- and post-natal periods and childhood (Kuh and Shlomo 2004) - a feature that Philp (2008) refers to as extrinsic ageing. This contrasts with intrinsic ageing, which relates to the limitations of cells and other biological factors.

At the same time, gender imbalances increase with age: there are 50 per cent more women than men aged 65 and over (Phillipson, 2008). Race and ethnicity are factors in the differential impact of ageing on particular individuals. Again the links here are with earlier life experiences and extrinsic or environmental factors such as manual labour in risky settings, poverty, poor housing and racism (Phillipson, 2008). In contrast, for some individuals and groups, the limitations associated with ageing come about at an earlier age, highlighting the problem of taking chronological age as the key determining factor. People with life-long disability tend to experience the 'effects' of ageing at an earlier part of the life-course. It is also well documented that some individuals - such as people with Down's Syndrome - have a higher risk of early onset Alzheimer-type conditions (Bigby, 2004). There is also a growing recognition of early onset dementia and other organic cognitive impairments such as those linked to Creutzfeldt-Jakob Disease (CJD) or, in certain cases, HIV/AIDS.
Estimates suggest that there are some 16,000 people below the age of 65 with early onset dementia, with approximately 33 per cent having Alzheimer's Disease (Alzheimer's Society, 2011). In addition to an awareness of these demographics, Kerr et al. (2005) suggest three contextual elements essential to effective social work with older people - poverty, ageism and the integration of services. We will consider the first two elements here and return to the issue of services later.

(ii) Poverty

Carroll Estes (1979) claims that poverty in old age is best understood in the relationship between ageing and the economic structure: that is, how the state decides and dictates who is allocated resources and who is not. This impinges upon social policy in relation to retirement and subsequent pension schemes. As Phillipson (1982) points out, the retirement experience is linked to the reduction of wages and enforced withdrawal from work; together, these place many older people in the UK in a financially insecure position.

Looking at the contemporary issue of poverty and older people, we have something of a mixed picture. Hoff (2008) notes the preference of policy makers from the late 1980s onwards to refer to the effects of poverty and social exclusion rather than just poverty. Walker and Walker (1997) highlight the need to take account of the multi-dimensional effects of low income and the impact of barriers to social integration experienced by older people. Nevertheless, there are contradictory patterns in income levels. These demonstrate that, despite a steep decline in pensioner poverty over the last decade of the 20th century, at the turn of the 21st century nearly 25 per cent of British pensioners remained in poverty (DWP, 2005). In addition, early life experiences such as engagement in the labour market and decisions about investments and pensions impact on material resources in older age (Burholt and Windle, 2006). Burholt and Windle (2006) emphasise the vulnerability of particular groups in older age: women, the socially disadvantaged, those from deprived neighbourhoods, people with ill health or disability, and people living alone, divorced or widowed. They also note that, while individuals in younger generations may move in and out of poverty, in later life there is little people can do about their position.

(iii) Ageism

Hughes and Mtejuka (1992) identify personal, structural and cultural dimensions to ageism, which they describe as the negative images and attitudes towards older people that are based solely on the characteristics of old age. Dominelli (2004) notes the complexity of the impact of social dimensions such as gender, race, disability, mental health and sexual orientation in social work with older people. Thompson (2001) suggests that one manifestation of institutional ageism is the tendency for work with older people to be seen as routine and uninteresting, more suited to unqualified workers and social work assistants than to qualified social workers or nurses. MacDonald (2004) describes a four-year research programme about the priorities which older people as service users defined as important for 'living well in later life'. The older people involved in the projects did not commonly refer specifically to 'ageism', but the projects reported 'strong' evidence of its existence 'in a number of spheres'. These included poverty and the lack of opportunities that arise because much policy and practice identifies older people as a problem to be solved.
She argues that, while older people continue to be viewed as a burden, the denial of rights and opportunities to the ordinary things in life will continue.

PERFORMATIVITY, PROFESSIONALISATION AND THE CONSTRUCTION OF SERVICE USERS

Exploring the role that social theory plays in shaping the social context of service users through Butler's (1998a; 1998b) performativity is to adopt a specific approach to the analysis of this phenomenon. The use of such an analysis reflects the way that neo-liberal forms of government - such as those that have existed in the UK and most of the western world since the late 20th century - shape how professional workers manage populations. Our interest is in the subtle mechanisms through which the behaviour of individuals is shaped, guided and directed without recourse to both coercion and subjection (Butler 1995). Central to this process is the concept of the self-managing citizen-consumer engaged in an endless process of decision-making in consumer-based markets. The process is supported by an array of discourses of self-management and associated social practices that are disseminated through social institutions such as factories and workplaces, the media, banks and retail outlets, health and welfare services, schools and universities, churches, and leisure and community organisations. These discourses penetrate deep into family life and personal relationships, regulating behaviour by locating individuals in a network of obligations towards themselves and others. Simultaneously, a 'felt' responsibility for a particular locality or an imagined community is produced (Butler 1998b), whereby identity is affirmed. Examples of this process can be identified in the commitments to promoting social capital of the Blair/Brown Labour administrations or the 'Big Society' idea of the Cameron/Clegg Coalition government. Citizenship is avowed by participating in consumer-based activities and the maintenance of an accredited life-style. The process has been described as an 'ethic of the self' (Davidson, 1994) and is supported by an ever increasing array of professionalised experts embedded in a range of subjective relationships involving social workers.

Parallel to this process, the state is concerned with gathering statistics that help define the population and maintain a level of surveillance that affords the management of performance (Butler 1995). Affluent persons are identified, measured, and then grouped with similar persons. Once described, the characteristics of this group are disseminated via a range of media that suggest personality, aspirations and life chances. This produces the three trajectories referred to earlier, where those individuals who are willing and able to commit to the market and to self-manage experience a particular combination of options and opportunities, while those who, for whatever reason, fail to meet this commitment experience a different and more limited set of options that are often oppressive and impersonal (Butler 1990). The consequence of this for the professionalisation of the social (cf. Butler 1995) is that its role is clearly circumscribed. It must set out to ensure that basic freedoms are respected, but acknowledge the importance of the family and the market for the professional management of care.
SOCIAL THEORY AND NEO-LIBERALISM

Analysing the impact of neo-liberalism, citizens and the state are faced with the task of navigating themselves through a changing world in which global forces have transformed personal relations and the relationship between state and the individual (Butler 1990). In the period since 1979, both Conservative and Labour Governments have adopted a neo-liberal stance characterised by an increasing distancing of the state from the direct provision of services. Instead, government operates through a set of relationships where the state sets standards and budgets for particular services but then contracts delivery to private, voluntary or third sector organisations. The underpinning rationale is that this reconfiguration of the state retains a strong core to formulate public policy alongside the dissemination of responsibility for policy implementation to a wide range of often localised modes such as the professionalisation of social work. Neo-liberal governance emphasises enterprise as an individual and corporate strategy, supported by its concomitant discourse of marketisation and the role of consumers (Butler 1995). The strategy increasingly relies on individuals to make their own arrangements with respect to welfare and support, accompanied by the rhetoric of choice, self-management, responsibility and performance management. Neo-liberalism is perhaps the dominant contemporary means through which boundary adjustments are being made and rationalised, with far-reaching consequences for both states and markets (Butler 1990). The project of neo-liberalism is evolving and changing, while the task of mapping out the moving terrain of boundaries for professional social work and service experiences is only just beginning. In this context, the territorial state defined by geographical space is not so much withering away as being increasingly enmeshed in webs of economic interdependencies, social connections and power (Butler 1995). This, in turn, leads to the development of a denser and more complex set of virtual, economic, cultural and political spaces that cut across traditional distinctions between inside and outside, public and private, left and right (Beck, 2005). In this sense, possibly the most influential piece of contemporary neo-liberal social policy came with the implementation of the National Health Service and Community Care Act 1990. This brought with it the purchaser/provider split and professional performativity and subjection. In the second decade of the 21st century, we have entered an accelerated phase of retraction by the UK state in relation to its role in the provision of welfare, with actual levels of support being reduced. Rhetorically, the Conservative/Liberal Democrat coalition is committed to the idea of the 'Big Society', which translates into a vision of individuals and communities coming together to work to resolve common concerns, as this Cabinet Office statement confirms:

We want to give citizens, communities and local government the power and information they need to come together, solve the problems they face and build the Britain they want. We want society - the families, networks, neighbourhoods and communities that form the fabric of so much of our everyday lives - to be bigger and stronger than ever before. Only when people and communities are given more power and take more responsibility can we achieve fairness and opportunity for all.
(The Cabinet Office 2010, www.cabinetoffice.gov.uk/news/building-big-society accessed 08/04/2011)

In the process, the disciplinary effect of the self-managing individual is reproduced at neighbourhood and community levels through subjection. The third sector is crucial in such a scenario, playing a key role by inter-connecting a new partnership between government and civil society. Promoting this relationship is core to the functions of the new Office of Civil Society established by the coalition government in 2010, whose role is to enable people to develop social enterprises, voluntary and charitable organisations while promoting the independence and performance of the sector. Evidence of public intervention to support the renewal of community through local initiatives not only advances the status of professional social work organisations but fetishises the day-to-day operations of social work. Equality, mutual respect, autonomy and decision-making through communication with socially disadvantaged and/or dependent older people come to be seen as integral to the sector and provide an opportunity to encourage socially excluded groups and communities to participate as active citizens in, rather than be seen as a potential burden to, community engagement (Gilleard and Higgs, 2005). Performativity is bound up with neo-liberalism, which is especially concerned with inculcating a new set of values and objectives orientated towards incorporating citizens as both players and partners in a marketised system. As such, social workers are exhorted to become entrepreneurs in all spheres and to accept responsibility for the management of civic life (Beck, 2005). There is also an apparent dispersal of power (Butler, 1990) achieved through establishing structures in which professional social workers are co-opted into or coproduce governance through their own subjective choices. This is directly connected with the political rationality that assigns primacy to the autonomisation of society, in which the paradigm of enterprise culture comes to dominate forms of conduct including that of social work with service users. The very significance of Butler's (1990) notion of performativity is that there is a strategic aim to diffuse the public sector's monolithic power to encourage diversity and fragmentation of the provision of care to private and voluntary sectors, facilitating professional-service user practice. Such a strategy constitutes a fundamental transformation in the mechanisms for governing social life. It has combined two interlinked developments: a stress on the necessity for enterprising subjects, and the devolution of central state control over work with older people, which articulates with a desire to promote organisational social work autonomy through service provision. Each of these has redefined previous patterns of subjective relationships (cf. Butler 1995) within and between those agencies and their clients. The important point to note is that there is great contingence and variation in such relationships, with unevenness across time and space. These relationships involve the development of new forms of statecraft - some concerned with extensions of the neo-liberal market-building project itself (for example, trade policy and financial regulation), some concerned with managing the consequences and contradictions of marketisation (for example, professionalisation).
It also implies that the boundaries of the state and the market are blurred and that they are constantly being renegotiated through performativity (Butler, 1998b). Theoretically, we identify the need to engage with key social debates about the future of welfare and individual relationships to, and expectations of, the state. One of the central debates has been on neo-liberalism and its impingement on the re-positioning of professional performativity.

INTEGRATING SERVICES

The previous sections of this article have sought to identify the changing relationship between the state and older people by exploring Butler's (1990) notion of performativity. The discussion now moves on to consider more specifically how policy shapes the subjectivity of service users. Here we need to take account of the social and economic backdrop that frames service users' experiences of support and care. In the process, we identify key developments in social policy, such as performativity and risk, and their congruence with the neo-liberal project. The neo-liberal project constructs as its core subject the self-managing citizen-consumer who is actively making choices within markets. In the context of welfare this involves individuals making choices about the type of support they want and who will provide that support, as the range of providers is expanded in two broad ways. First, new providers enter the market providing new services or providing services in new ways. Second, and of key importance, people seeking support move outside of the segregated confines of welfare services to obtain services from mainstream providers (Dickinson and Glasby, 2010). In many ways, the 'Personalisation Agenda', as it is set out in 'Putting People First' (2007), represents the high point of the neo-liberal project with respect to welfare. This approach is largely constructed through a framework of earlier policy which includes the Community Care (Direct Payments) Act (1996), Independence, Wellbeing and Choice (DH, 2005) and Our Health, Our Care, Our Say (DH, 2006). This was then supplemented by the Coalition Government with the publication of Capable Communities and Active Citizens (DH, 2010) and Think Local, Act Personal (2011), which aim to tie the shift to self-directed support outlined by the 'Personalisation Agenda' more closely to the notion of the Big Society. The discourses that articulate within this policy framework are those familiar to neo-liberalism: independence, choice, freedom, responsibility, quality, empowerment, active citizenship, partnership, the enabling state, coproduction and community action. Alongside this policy framework are constructed a number of specific techniques that target individuals, families and communities. These include an alternative method of allocating cash to individuals in the form of individual budgets, on-line self-assessment to augment local authority assessment processes, and community-based advocacy to support life-style choices. In addition, commissioning models and approaches are being developed that aim to promote opportunities by responding proactively to the aspirations of people receiving services. Self-directed support is significant as it breaks with the tradition where state support is mediated by professionals who undertake assessments and by organisations that are funded to provide places.
Even in more recent times, when individuals might be afforded a choice between two or more places or opportunities, the organisations received funding from the state. Under personalisation, assessment takes place to identify the overall budget a person is entitled to receive, but the money is allocated to the individual either through a direct payment or by establishing an individual budget. In terms of performativity, the 'Personalisation Agenda' effectively shifts the responsibility for organising support from the state to the individual needing support via a form of cash transfer - something that Ferguson (2007) describes as the privatisation of risk onto professionals and service users. The advance of the 'Personalisation Agenda' has drawn support from a number of sources, including specific groups of service users (Glendinning et al. 2008), politicians from across the spectrum (Ferguson 2007), and professional social workers (Samuel, 2009). One possible reason for this is that personalisation is conceptually ambiguous, making it difficult to disagree with its basic premise while it retains a number of contradictory ideas (Ferguson, 2007). However, it has also drawn criticism, particularly from older people, who have reported lower psychological wellbeing due, possibly, to the added anxiety and stress of the burden of organising their own care (Glendinning et al. 2008). There are also concerns expressed regarding the impact of personalisation on the integration and stability of adult social care; this includes unease with the emphasis on individualistic solutions, which may undermine democratic and collective approaches to transforming existing services or developing new services (Newman et al. 2008). Doubts have also been expressed over the readiness of the third sector to take on the demands of providing support. At the same time, while the disaggregation of budgets might suit some small innovative niche organisations, the disruption of funding streams may be perceived as a threat and bring instability to larger, more mainstream third sector organisations (Dickinson and Glasby, 2010). Other issues arise due to the somewhat fragmented process of implementation and the differences that occur in service provision between urban and rural areas (Manthorpe and Stevens, 2010). Ferguson (2007), drawing on the Canadian experience, suggests that personalisation favours the better educated, may provide a cover for cost-cutting and further privatisation and marketisation of services, while the employment conditions of personal assistants may give rise to concern. Performativity enables the identification of the parallel concerns of neo-liberalism and subjection - the promotion of the self-managing individual and the management of risk. So far we have explored self-management in social care through the promotion of self-directed care as part of the 'Personalisation Agenda'. We now turn to the management of risk. This can be seen to take two forms, each dealt with by different elements of social policy. Protection from the risks posed by others is managed through safeguarding and policy such as No Secrets (DH and HO, 2000). In Capable Communities and Active Citizens (2010) the government clearly states that safeguarding is central to personalisation.
Risks posed by the individual to their own person are contained by the Mental Capacity Act (2005) and its powers to override individual choice or replace autonomy by measures such as Enduring or Lasting Powers of Attorney or the Court of Protection. No Secrets has provided the basis of policy towards safeguarding for over a decade. It defined abuse in the context of an abuse of trust and the Human Rights Act (1998), and set out a model for inter-agency working that has been adopted by local authorities in England and Northern Ireland. In Wales the corresponding policy is 'In Safe Hands'. No Secrets drew from experience in relation to safeguarding children and described a number of categories of abuse, including physical, sexual, neglect and financial abuse. However, it lacked the legal imperative to share information that is included in safeguarding children. Furthermore, the environment within which 'No Secrets' operates has seen considerable change since implementation. One key change was the discursive shift from vulnerable adult to safeguarding, which took account of the dangers of victim blaming implied in the notion of vulnerable adults, while the concept of safeguarding suggests the focus should be on the environment within which people find themselves. However, this rhetorical shift has not removed abuse. A recent prevalence survey suggests levels of abuse of between 2.6 per cent and 4 per cent, depending on how the estimates are constructed (O'Keeffe et al. 2007). Action on Elder Abuse, one of the organisations that sponsored the study, uses evidence of under-reporting to reinterpret this estimate as 9 per cent (Gary Fitzgerald, personal communication). In 2008, the UK Department of Health set up a consultation over the review of No Secrets where a number of organisations, including the Association of Directors of Adult Social Care and Action on Elder Abuse, campaigned for a legislative framework to put adult protection on the same footing as child protection (Samuel, 2008). However, no significant changes in guidance or legal status occurred, as the Coalition government maintained that safeguarding was an issue for local communities, thus maintaining the distance between the state and individuals. Discourses of safeguarding operate and produce their effects via the multiple interactions of institutions embedded in local communities. Furthermore, the advent of personalisation has seen an increasing focus on financial abuse, as direct payments and rules about eligibility for state support for care costs increase opportunities for financial exploitation, fraud and theft. No Secrets treats financial abuse as an artefact of other, apparently more serious, forms of abuse. However, in 2004, the House of Commons Select Committee identified financial abuse as possibly the second most commonly occurring form of abuse experienced by service users.

CONCLUSIONS

This article has explored the place that professionalisation and subjection play in shaping the social context of service users. To achieve this we have drawn on the concept of performativity from Judith Butler to identify how neo-liberal forms of government construct older people as active consumers within welfare markets, shifting the responsibility for organising support from the state to the individual. The contemporary context for working with older people who need some form of support is formed by the relationship between professionalisation and safeguarding.
These set out the twin pillars of neo-liberal governance: professionalised self-management through self-directed support, and the management of risk through safeguarding. Individuals are constructed as citizen-consumers actively making choices about what their needs are and identifying appropriate services, sometimes with the support of advocates or workers such as professional social workers, in a process of performativity.
2019-05-06T14:06:13.902Z
2014-02-01T00:00:00.000
{ "year": 2014, "sha1": "95c68236e3e67e3584819275688a4a04d831e3ee", "oa_license": "CCBY", "oa_url": "https://www.scipress.com/ILSHS.21.107.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "cd46837836b6147ce3cddafcc5b33cc5d93988e1", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
118412783
pes2o/s2orc
v3-fos-license
Electric dipole polarizabilities of alkali metal ions from perturbed relativistic coupled-cluster theory

We use the perturbed relativistic coupled-cluster theory to compute the static electric dipole polarizabilities of the singly ionized alkali atoms, namely, Na$^+$, K$^+$, Rb$^+$, Cs$^+$ and Fr$^+$. The computations use the Dirac-Coulomb-Breit Hamiltonian with the no-virtual-pair approximation and we also estimate the correction to the static electric dipole polarizability arising from the Breit interaction.

I. INTRODUCTION

The electric dipole polarizabilities, α, of ions are important to determine the optical properties of ionic crystals. In addition, for closed-shell ions like the singly ionized alkali atoms, α is a measure of the core-polarization effects in the neutral species. It is, however, nontrivial to measure α of ions. For the singly charged alkali ions, an indirect method to determine α is through the measurement of the transition energy between the nonpenetrating Rydberg states of the neutral species [1], and it has been used to determine the α of Cs+ [2,3]. In the absence of experimental data, there is a need for accurate theoretical calculations. In the case of neutral atoms, accurate values of polarizabilities are essential in studies related to parity non-conservation in atoms [4], optical atomic clocks [5,6] and the physics of condensates of dilute atomic gases [7][8][9], all of which are of current interest. Theoretically, methods based on a wide range of atomic many-body theories have been used to calculate α. In this regard, the recent review [10] provides a description of the various theoretical methods used to calculate α. In the present work we use the perturbed relativistic coupled-cluster (PRCC) theory, which was earlier applied to the noble gas atoms [11,12], to compute the α of the singly charged alkali ions. The PRCC theory is an extension of the standard relativistic coupled-cluster (RCC) theory to include an additional perturbation and, for this, we introduce a new set of cluster operators. The formulation is, however, general enough to incorporate any perturbation Hamiltonian. It must be emphasized that, compared to other many-body methods, the use of PRCC is an attractive option as it is based on coupled-cluster theory (CCT) [13,14]: an all-order many-body theory considered to be reliable and powerful. The recent review [15] provides an overview of CCT, and of the variants of CCT developed for structure and properties calculations. The theory has been widely used for atomic [16][17][18][19], molecular [20], nuclear [21] and condensed matter physics [22] calculations. Coming back to the PRCC theory, it is different from the other RCC based theories in a number of ways, but the most important one is the representation of the cluster operators. In the PRCC theory, the cluster operators can be scalar or rank one tensor operators, and this is decided based on the nature of the perturbation in the electronic sector. Consequently, the theory is suitable to incorporate multiple perturbations of different ranks in the electronic sector. One basic advantage of the PRCC theory is that it does away with the summation over intermediate states of first order time-independent perturbation theory. The summation is subsumed in the perturbed cluster amplitudes, and this offers significant advantages in computing properties like α which involve a summation over a complete set of intermediate states; a toy illustration of this equivalence is sketched below.
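To make the last point concrete, here is a minimal Python sketch, not the PRCC method itself but a toy matrix model with hypothetical numbers, showing that the explicit sum over intermediate states and the solution of a single linear system for the first-order correction give the same static polarizability.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Toy Hermitian "Hamiltonian" and "dipole" matrices (purely illustrative numbers).
A = rng.standard_normal((n, n)); H0 = 0.5 * (A + A.T)
B = rng.standard_normal((n, n)); D = 0.5 * (B + B.T)

E, V = np.linalg.eigh(H0)
psi0, E0 = V[:, 0], E[0]

# (1) Explicit sum over intermediate states:
#     alpha = 2 * sum_{I>0} |<Psi_I|D|Psi_0>|^2 / (E_I - E_0)
dI0 = V.T @ (D @ psi0)
alpha_sos = 2.0 * np.sum(dI0[1:] ** 2 / (E[1:] - E0))

# (2) First-order response without any explicit sum: solve
#     (H0 - E0)|psi1> = Q D |psi0>, with Q = 1 - |psi0><psi0|,
#     then alpha = 2 <psi0|D|psi1>. The sum is subsumed in |psi1>.
rhs = D @ psi0
rhs -= psi0 * (psi0 @ rhs)                  # project out the ground state
psi1 = np.linalg.lstsq(H0 - E0 * np.eye(n), rhs, rcond=None)[0]
alpha_lr = 2.0 * (psi0 @ (D @ psi1))

print(np.isclose(alpha_sos, alpha_lr))      # True: the two routes agree
```

In PRCC the role of the solved-for first-order correction is played by the perturbed cluster amplitudes, which is why no complete set of intermediate states is ever needed.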
The paper is organized as follows. In Sec. II, for completeness and easy reference, we briefly describe the RCC and PRCC theories with the Breit interaction. In Sec. III we introduce the formal expression of the dipole polarizability and its representation in the PRCC theory. In the subsequent sections we describe the calculational part, and present the results and discussions. We then end with conclusions. All the results presented in this work and related calculations are in atomic units (ℏ = m_e = e = 4πε_0 = 1). In this system of units the velocity of light is α^(-1), the inverse of the fine structure constant, for which we use the value α^(-1) = 137.035 999 074 [23].

II. OVERVIEW OF THE COUPLED-CLUSTER THEORY

The detailed description of the RCC and PRCC theories is given in our previous works. However, for completeness and easy reference we provide a brief overview in this section.

A. RCC theory

The Dirac-Coulomb-Breit Hamiltonian, denoted by H^DCB, is an appropriate choice to include the relativistic effects in the structure and property calculations of high-Z atoms and ions. There are, however, complications associated with the negative energy continuum states of H^DCB. These lead to variational collapse and continuum dissolution [24]. One remedy to avoid these complications is to use the no-virtual-pair approximation. In this approximation, for a singly charged ion of N electrons [25],

  H^DCB = Σ_{i=1}^{N} [ c α_i·p_i + (β_i − 1)c² − V_N(r_i) ] + Σ_{i<j} Λ_++ [ 1/r_ij + g^B(r_ij) ] Λ_++ ,

where α and β are the Dirac matrices, Λ_++ is an operator which projects to the positive energy solutions and V_N(r_i) is the electrostatic potential arising from the Z = (N + 1) nucleus. Projecting the Hamiltonian with Λ_++ ensures that the effects of the negative energy continuum states are removed from the calculations. The last two terms in H^DCB, 1/r_ij and g^B(r_ij), are the Coulomb and Breit interactions, respectively. The latter, the Breit interaction, represents the transverse photon interaction and is given by

  g^B(r_12) = −(1/(2 r_12)) [ α_1·α_2 + (α_1·r̂_12)(α_2·r̂_12) ] .

The Hamiltonian satisfies the eigenvalue equation

  H^DCB |Ψ_i⟩ = E_i |Ψ_i⟩ ,

where |Ψ_i⟩ is the exact atomic state. In CCT the exact atomic state is defined as

  |Ψ_i⟩ = e^{T^(0)} |Φ_i⟩ ,

where |Φ_i⟩ is the reference state wave-function and T^(0) is the unperturbed cluster operator, which incorporates the residual Coulomb interaction to all orders. We have introduced the superscript to distinguish it from the second set of cluster operators, the perturbed cluster operators, to be introduced later. In the case of a closed-shell ion, the model space of the ground state consists of a single Slater determinant, |Φ_0⟩, and T^(0) = Σ_i T_i^(0), where i is the order of excitation. However, in actual computations, incorporating T_i^(0) with i ≥ 4 is difficult with the existing computational resources. A simplified, but quite accurate, approximation is the coupled-cluster single and double (CCSD) excitation approximation, in which

  T^(0) = T_1^(0) + T_2^(0) .

This is an approximation which embodies all the important electron correlation effects, and is a good starting point for structure and properties calculations of closed-shell ions. In second quantized notation,

  T_1^(0) = Σ_{a,p} t_a^p a_p† a_a ,   T_2^(0) = (1/4) Σ_{a,b,p,q} t_ab^pq a_p† a_q† a_b a_a ,

where the t's are cluster amplitudes, a_i† (a_i) are single particle creation (annihilation) operators and abc... (pqr...) represent core (virtual) states; a schematic illustration of how such amplitudes enter the correlation energy is sketched at the end of this subsection. For the present work, the ground state is the required atomic state |Ψ_0⟩ = e^{T^(0)}|Φ_0⟩ and satisfies the eigenvalue equation

  H^DCB e^{T^(0)} |Φ_0⟩ = E_0 e^{T^(0)} |Φ_0⟩ ,

where E_0 and |Φ_0⟩ are the energy and reference state of the ground state, respectively. Following a similar procedure, the CC eigenvalue equations of one-valence [26] and two-valence [27] systems may be defined.
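As a schematic illustration of the CCSD parametrization, the short Python sketch below evaluates the standard CCSD correlation-energy expression with antisymmetrized two-electron integrals, E_corr = Σ f_ia t_i^a + (1/4) Σ ⟨ij||ab⟩ t_ij^ab + (1/2) Σ ⟨ij||ab⟩ t_i^a t_j^b, for randomly generated, purely illustrative amplitudes and integrals; the index structure, not the numbers, is the point.

```python
import numpy as np

no, nv = 4, 10            # illustrative numbers of core and virtual spin-orbitals
rng = np.random.default_rng(1)

f_ov = rng.standard_normal((no, nv)) * 1e-2            # Fock block f_ia
v_oovv = rng.standard_normal((no, no, nv, nv)) * 1e-2  # <ij||ab>, illustrative
t1 = rng.standard_normal((no, nv)) * 1e-2              # singles amplitudes t_i^a
t2 = rng.standard_normal((no, no, nv, nv)) * 1e-2      # doubles amplitudes t_ij^ab

# Standard CCSD correlation energy:
# E = sum f_ia t_i^a + 1/4 <ij||ab> t_ij^ab + 1/2 <ij||ab> t_i^a t_j^b
e_corr = (np.einsum('ia,ia->', f_ov, t1)
          + 0.25 * np.einsum('ijab,ijab->', v_oovv, t2)
          + 0.5 * np.einsum('ijab,ia,jb->', v_oovv, t1, t1))
print(e_corr)
```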
B. PRCC theory

In the PRCC theory we introduce a new set of cluster operators, T^(1), to incorporate an interaction Hamiltonian, H_int, perturbatively. For a general representation, we consider T^(1) as tensor operators of arbitrary rank, which depends on the multipole structure of H_int. The new cluster operators follow the selection rules associated with H_int, and the modified ground state eigenvalue equation, after including the perturbation, is

  (H^DCB + λ H_int) |Ψ̃_0⟩ = Ẽ_0 |Ψ̃_0⟩ ,

where λ is the perturbation parameter, |Ψ̃_0⟩ is the perturbed ground state and Ẽ_0 is the corresponding eigen energy. To calculate the electric dipole polarizability, α, consider the perturbation as the interaction with an electrostatic field E. The interaction Hamiltonian is then

  H_int = −D·E ,

where D is the many-electron electric dipole operator. Here, H_int is odd in parity; to be more precise, D, the operator in the electronic space, is odd in parity and a rank one operator. Hence, the cluster operators T^(1) are also rank one tensor operators and odd in parity, meaning they connect states of different parities. Furthermore, the first order energy correction ⟨Ψ_0|H_int|Ψ_0⟩ = 0 and therefore Ẽ_0 = E_0. We can then write, using the PRCC theory, the perturbed ground state as

  |Ψ̃_0⟩ = e^{T^(0) + λ T^(1)·E} |Φ_0⟩ ,

where we have introduced the scalar product between T^(1) and E for a consistent representation of the states and operators. The advantage of introducing T^(1) and using |Ψ̃_0⟩ is that it allows a systematic consolidation of the correlation effects arising from multiple perturbations. Based on the analysis of the low-order many-body perturbation theory diagrams, the single and double excitation operators of the PRCC theory are represented in terms of cluster amplitudes τ and C-tensors C_i(r̂) of rank i. To represent T_1^(1), a rank one operator, we use the C-tensor of the same rank, C_1(r̂). The key difference of T_1^(1) from T_1^(0) is that l_a + l_p must be odd, in other words (−1)^(l_a+l_p) = −1, where l_a (l_p) is the orbital angular momentum of the core (virtual) state a (p). Coming to T_2^(1), to represent it two C-tensor operators of rank l and k are coupled to a rank one tensor operator. In terms of selection rules, the angular momenta of the orbitals and the multipoles in T_2^(1) must therefore couple to a total rank of one. The other selection rule follows from the parity of H_int: the orbital angular momenta must satisfy the condition (−1)^(l_a+l_p) = −(−1)^(l_b+l_q).

C. PRCC equations

The ground state eigenvalue equation is now expressed in terms of the PRCC state. In the CCSD approximation we define the perturbed cluster operator as

  T^(1) = T_1^(1) + T_2^(1) .

Using this, the PRCC equations are derived from Eq. (11). The derivation involves several operator contractions, and these are more transparent with the normal ordered Hamiltonian

  H_N = H^DCB − ⟨Φ_0|H^DCB|Φ_0⟩ ,

in terms of which the eigenvalue equation involves ΔE_0 = E_0 − ⟨Φ_0|H^DCB|Φ_0⟩, the ground state correlation energy. Following the definition in Eq. (9), the PRCC eigenvalue equation is written with H̄^DCB = e^{−T^(0)} H^DCB e^{T^(0)}, the similarity transformed Hamiltonian. Multiplying Eq. (16) from the left by e^{−λT^(1)} and considering the terms linear in λ, we get the PRCC equation. Here, the similarity transformed interaction Hamiltonian H̄_int terminates at second order, as H_int is a one-body interaction Hamiltonian. Expanding H̄_int and dropping E for simplicity, the equations of T_1^(1) are obtained after projecting the PRCC equation on the singly excited states ⟨Φ_a^p|. These excitation states, however, must be opposite in parity to |Φ_0⟩; the parity constraints quoted above are tabulated in the sketch below.
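As an aside, the parity selection rules quoted in this subsection are simple to encode; a minimal Python sketch follows, with hypothetical orbital angular momentum labels (s, p, d, f mapped to l = 0, 1, 2, 3).

```python
def t1_allowed(l_a: int, l_p: int) -> bool:
    """T1(1) excitation a -> p is parity-allowed when l_a + l_p is odd,
    i.e. (-1)**(l_a + l_p) == -1."""
    return (l_a + l_p) % 2 == 1

def t2_allowed(l_a: int, l_p: int, l_b: int, l_q: int) -> bool:
    """T2(1): the two excitations must carry opposite combined parities,
    i.e. (-1)**(l_a + l_p) == -(-1)**(l_b + l_q)."""
    return (-1) ** (l_a + l_p) == -((-1) ** (l_b + l_q))

# Example: a p -> s single excitation (l = 1 -> 0) is allowed,
# while p -> p (l = 1 -> 1) is not.
print(t1_allowed(1, 0), t1_allowed(1, 1))   # True False
print(t2_allowed(1, 0, 1, 1))               # True
```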
The T_2^(1) equations are obtained in a similar way, after projecting on the doubly excited states ⟨Φ_ab^pq|. The equations form a set of coupled nonlinear algebraic equations. The equations and a description of the different terms, along with a diagrammatic analysis, are given in our previous works [11,12]. An approximation which incorporates all the important many-body effects, like core-polarization, is the linearized PRCC (LPRCC). In this approximation, only the terms linear in T^(0) are retained.

III. DIPOLE POLARIZABILITY

In the PRCC theory we can write the α of the ground state of a closed-shell atom as [11,12]

  α = ⟨Ψ̃_0| D |Ψ̃_0⟩ / ⟨Ψ̃_0|Ψ̃_0⟩ ,

where D̃ = e^{T^(0)†} D e^{T^(0)} represents the unitary transformed electric dipole operator. Retaining only the dominant terms, we obtain

  α ≈ ⟨Φ_0| T^(1)†·D̃ + D̃·T^(1) |Φ_0⟩ / N ,

where N = ⟨Φ_0| e^{T^(0)†} e^{T^(0)} |Φ_0⟩ is the normalization factor, which involves a non-terminating series of contractions between T^(0)† and T^(0). In the present work, however, we truncate it as N ≈ ⟨Φ_0| 1 + T_1^(0)† T_1^(0) + T_2^(0)† T_2^(0) |Φ_0⟩. An evident advantage of computing α with the PRCC theory is the absence of a summation over the intermediate states |Ψ_I⟩. The summation is subsumed in the evaluation of T^(1) in a natural way and eliminates the need for a complete set of intermediate states. For further analysis and evaluation of the different terms in Eq. (20), we use many-body diagrams or Goldstone diagrams. To evaluate the diagrams we follow the notations and conventions given in Ref. [28]. There is, however, an additional feature in the diagrams of α: we employ a wavy interaction line to represent the diagrams of T^(1). Here d_ab = ⟨a|d|b⟩, and τ̃_ab^pq = τ_ab^pq − τ_ab^qp and t̃_ab^pq = t_ab^pq − t_ab^qp are the antisymmetrized cluster amplitudes. In the figure, the first two diagrams, Fig. 1(a) and 1(b), are the most important ones. These represent T_1^(1)†D and its Hermitian conjugate, respectively, and subsume the Dirac-Fock (DF) contribution and the effects of the random phase approximation (RPA). The next two diagrams in the figure, Fig. 1(c) and Fig. 1(d), arise from the terms second order in the cluster operators. Each of these terms generates four diagrams, which are given in Fig. 1(o-r) and Fig. 1(s-v).

IV. CALCULATIONAL DETAILS

The first step of our computations, which is also true of any atomic and molecular computation, is to generate a spin-orbital basis set. For the present work, the basis set consists of even-tempered Gaussian type orbitals (GTOs) [29] generated with the Dirac-Hartree-Fock Hamiltonian. This means that the radial parts of the spin-orbitals are linear combinations of Gaussian type functions. The Gaussian type functions which constitute the large components are of the form

  g^L_κp(r) = C^L_κp r^{n_κ} e^{−α_p r²} ,

where p = 0, 1, ..., m is the GTO index and m is the number of Gaussian type functions. The exponents follow the even-tempered sequence α_p = α_0 β^(p−1), where α_0 and β are two independent parameters; a small sketch of this construction is given at the end of this section. Similarly, the small components of the spin-orbitals are linear combinations of g^S_κp(r), which are generated from g^L_κp(r) through the kinetic balance condition [30]. The GTOs are calculated on a grid [31] with optimized values of α_0 and β. The optimization is done for individual atoms to match the spin-orbital energies and the self consistent field (SCF) energy of GRASP92 [32]. For the current work, the optimized α_0 and β are listed in Table I. For comparison, the spin-orbital energies of Cs+ obtained from the GTOs and from GRASP92 are listed in Table II. In the table, the deviation of the GTO results from GRASP92 is ∼10^(-3), which is quite small. We obtain similar levels of deviation for the other ions as well. The next step, related to the spin-orbital basis set, is the choice of an ideal basis set size. For this, we examine the convergence of α using the LPRCC theory.
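The even-tempered construction described above is straightforward to reproduce. The short Python sketch below generates the exponents α_p = α_0 β^(p−1) and evaluates unnormalized large-component radial functions; note that the α_0 and β values here are placeholders of a typical order of magnitude, not the fitted Table I parameters, which are not reproduced in this text.

```python
import numpy as np

def even_tempered_exponents(alpha0: float, beta: float, m: int) -> np.ndarray:
    """Exponents alpha_p = alpha0 * beta**(p-1) for p = 1..m."""
    return alpha0 * beta ** np.arange(m)

def gto_large_component(r: np.ndarray, alpha_p: float, n_kappa: int) -> np.ndarray:
    """Unnormalized large-component radial GTO: r**n_kappa * exp(-alpha_p r^2)."""
    return r ** n_kappa * np.exp(-alpha_p * r ** 2)

# Hypothetical even-tempered parameters (fitted per atom in the paper; these
# stand-in values only illustrate the geometric progression of exponents).
alpha0, beta, m = 0.0025, 2.07, 33
exps = even_tempered_exponents(alpha0, beta, m)

r = np.linspace(1e-6, 20.0, 400)     # radial grid in atomic units
basis = np.array([gto_large_component(r, a, n_kappa=1) for a in exps])
print(exps[:5])
print(basis.shape)                   # (m, number of grid points)
```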
We calculate α starting with a basis set of 50 GTOs and increase the basis set size in steps through a series of calculations. The results of such a series of calculations are listed in Table III, which shows the convergence of α of Cs+ as a function of the basis set size. In the present work we have considered a finite size Fermi density distribution of the nucleus,

  ρ_nuc(r) = ρ_0 / [ 1 + e^{(r−c)/a} ] ,

where a = t/(4 ln 3). The parameter c is the half charge radius, so that ρ_nuc(c) = ρ_0/2, and t is the skin thickness. Coming to the PRCC equations, these are solved iteratively using the Jacobi method; we have chosen this method as it is parallelizable. The method, however, is slow to converge, so to accelerate the convergence we use direct inversion in the iterated subspace (DIIS) [33]. The PRCC diagrams corresponding to the nonlinear terms are numerous and topologically complex. Furthermore, in these diagrams the number of spin-orbitals involved is large and, in general, the diagrams with the largest number of spin-orbitals are associated with terms of the form H_N T^(0) T^(1). All of these terms have a common feature: the presence of the Coulomb integral ⟨ab|1/r_12|pq⟩. Returning to the number of spin-orbitals, the T_2^(0) diagrams arising from any of the three terms mentioned earlier consist of four core and four virtual spin-orbitals each. Accordingly, the number of times a diagram is evaluated, N_d, scales as n_o^4 n_v^4 and sets the computational requirements. Here, n_o and n_v are the numbers of core and virtual spin-orbitals, respectively. In the present work, n_o ∼ 10 and n_v ∼ 100 for lighter atoms and moderate sized basis sets; even then N_d ∼ 10^12. This is a large number and puts a huge constraint on the computational resources. To mitigate the computational constraints arising from the n_o^4 n_v^4 scaling, we separate the diagrams into two parts. One of the parts scales at most as n_o^2 n_v^4, and the total diagram is equivalent to the product of the parts. The part of the diagram which is calculated first is referred to as the intermediate diagram. During computations, all the intermediate diagrams are calculated first and stored. Later, these are combined with the remaining part of the RCC diagram and the total diagram is calculated. Compared to the n_o^4 n_v^4 scaling, this improves the performance by several orders of magnitude. To examine this in more detail, consider a term of the form H_N T_2^(0) T_2^(1), which is diagrammatically equivalent to Fig. 2(a). While evaluating the diagram, the part within the dashed round rectangle, the intermediate diagram, can be separated and computed first. Eq. (24) can then be written in terms of

  η_a^d = Σ_{c,r,s} t_ac^rs v_rs^cd ,

the amplitude of the effective one-body operator corresponding to the intermediate diagram, where v_rs^cd is a Coulomb integral. It scales as n_o^3 n_v^2 and, when contracted with T_2^(1), the computation still scales as n_o^3 n_v^2. This is much less than the n_o^4 n_v^4 scaling; the sketch below mimics this factorization with plain array contractions. Another term can be treated in the same way and is diagrammatically equivalent to Fig. 2(b).
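The gain from precomputing intermediates is easy to demonstrate with array contractions. The Python sketch below uses hypothetical, randomly filled tensors and a schematic index pattern (not the paper's exact Eq. (24)/(25) term): it first performs one monolithic eight-index contraction, then the same contraction in two stages via the stored one-body intermediate η, and checks that the results agree.

```python
import numpy as np

no, nv = 4, 8                     # tiny illustrative dimensions (real runs: ~10, ~100)
rng = np.random.default_rng(2)

# Hypothetical tensors standing in for amplitudes and Coulomb integrals.
t2  = rng.standard_normal((no, no, nv, nv))   # t_{ac}^{rs}
v   = rng.standard_normal((nv, nv, no, no))   # v_{rs}^{cd}
tau = rng.standard_normal((no, no, nv, nv))   # perturbed doubles tau_{db}^{pq}

# Naive single contraction over all eight indices: ~ n_o^4 n_v^4 operations.
slow = np.einsum('acrs,rscd,dbpq->abpq', t2, v, tau, optimize=False)

# Two-stage evaluation via the stored intermediate eta_a^d = sum t_{ac}^{rs} v_{rs}^{cd}.
eta  = np.einsum('acrs,rscd->ad', t2, v)      # ~ n_o^3 n_v^2 operations
fast = np.einsum('ad,dbpq->abpq', eta, tau)   # cheap second stage

assert np.allclose(slow, fast)                # identical result, far cheaper
```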
V. RESULTS AND DISCUSSIONS

To compute α using the PRCC theory, as described earlier, we consider terms up to second order in the cluster operators. We have, however, studied terms which are third order in the cluster operators and examined the contributions from the leading order ones. These contributions are negligible, and this validates our choice of considering terms only up to second order in the cluster operators. To begin with, we compute α using the cluster amplitudes obtained from the LPRCC theory; the results are presented in Table IV. In the table we have listed, for systematic comparison, the experimental data and results from previous theoretical computations. For Na+ and K+, our values of α are higher than the experimental values by 1% and 0.9%, respectively. However, for Rb+ and Cs+ our results are lower than the experimental values by 0.15% and 4.8%, respectively. In terms of theoretical results, our results for Na+ and K+ are in excellent agreement with the previous work which used the RCCSDT method for computation. But for Rb+ and Cs+, as with the experimental data, our results are lower than the RCCSDT results. One possible reason for these deviations in the heavier ions could be the exclusion of triple excitation cluster operators in the present work. Our result for Fr+ seems to bear out this reasoning, as the same trend is observed (our result is 4.4% lower than the RCCSDT result) in this case as well. However, in the absence of experimental data for Fr+, it is difficult to arrive at a definite conclusion. To investigate the importance of the Breit interaction, H_B, in computing α of the alkali ions, we exclude H_B from the atomic Hamiltonian and do a set of systematic calculations. Our results for the values of α are then 1.008, 5.514, 8.973 and 14.908 for Na+, K+, Rb+ and Cs+, respectively. These values are 0.001, 0.007, 0.013 and 0.016 a.u. lower than the results computed using the Dirac-Coulomb-Breit Hamiltonian (the small sketch below tabulates these shifts). This indicates that the correction from the Breit interaction is larger in heavier ions, which is as expected since the stronger nuclear potential in heavier ions translates into a larger Breit correction.
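As a quick bookkeeping aid, the sketch below reconstructs the Dirac-Coulomb-Breit values implied by the numbers just quoted (Dirac-Coulomb value plus the stated Breit shift) and prints the shift in relative terms; all input numbers are taken directly from the text.

```python
# Dirac-Coulomb polarizabilities (a.u.) and Breit shifts quoted in the text.
alpha_dc = {'Na+': 1.008, 'K+': 5.514, 'Rb+': 8.973, 'Cs+': 14.908}
breit_shift = {'Na+': 0.001, 'K+': 0.007, 'Rb+': 0.013, 'Cs+': 0.016}

for ion, a_dc in alpha_dc.items():
    a_dcb = a_dc + breit_shift[ion]          # DCB value = DC value + Breit correction
    rel = 100.0 * breit_shift[ion] / a_dcb   # relative size of the Breit correction
    print(f'{ion}: alpha(DCB) = {a_dcb:.3f} a.u., Breit correction = {rel:.3f}%')
```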
For a more detailed study, we examine the contributions from each of the terms in Eq. (20); these are listed in Table V. The leading order contribution arises from T_1^(1)†D + h.c. and, diagrammatically, it corresponds to the first two diagrams in Fig. 1. These are also the lowest order terms and are the dominant ones, since they subsume the contributions from the Dirac-Fock and RPA effects. For all the ions, the results from the dominant terms exceed the final results. Here it must be mentioned that a similar trend is observed in the results of the noble gas atoms as well [11,12]. The next to leading order (NLO) contributions arise from T_1^(1)†D̃T_2^(0) + h.c. The contributions from these terms are an order of magnitude smaller, as T_2^(1) and T_2^(0) are the cluster operators with smaller amplitudes in the PRCC and RCC theories, respectively. Collecting the results, the net contributions from the second order terms are 0.016, -0.117, -0.223, -0.456 and -0.517 for Na+, K+, Rb+, Cs+ and Fr+, respectively. Next, we consider all the terms in the PRCC theory, including the terms which are non-linear in the cluster operators. The results for α are presented in Table VI. For Na+ the result for α is 2.6% higher than the experimental value. To investigate the RPA effects in detail, we isolate the contributions from each of the core spin-orbitals to T_1^(1)†D + h.c. The dominant contributions are presented in Table VII. It is to be noted that α has a quadratic dependence on the radial distance, so the orbitals with larger spatial extension contribute dominantly. The effect of this is discernible in the results: for all the alkali ions the leading contribution to α comes from the outermost np_3/2 orbital, which is the occupied orbital with the largest radial extent. The next leading contribution arises from the np_1/2 orbital. An important observation is that, as we proceed from lower Z to higher Z, the ratio of the contribution of np_3/2 to that of np_1/2 increases. It is 1.8, 2.1, 2.3, 2.6 and 4.5 for Na+, K+, Rb+, Cs+ and Fr+, respectively. The ratio is much larger in the case of Fr+ and, without any ambiguity, it can be attributed to the relativistic contraction of the np_1/2 orbital. The third leading contribution for Na+, K+ and Rb+ arises from the 2s_1/2, 3s_1/2 and 4s_1/2 orbitals, respectively. But for Cs+ and Fr+ the third leading contribution arises from the 4d_5/2 and 5d_5/2 orbitals, respectively. This is because the 5s_1/2 and 6s_1/2 orbitals are contracted due to large relativistic effects. From the above analysis of the RPA effects, the trend in the contributions demonstrates the importance of relativistic corrections in Cs+ and Fr+. To study the pair-correlation effects, we identify the pairs of core spin-orbitals in the next to leading order terms T_1^(1)†D̃T_2^(0) + h.c. The four leading order pairs for Na+ and K+, and for Rb+, Cs+ and Fr+, are listed in Tables VIII and IX, respectively. The dominant contribution, for all the ions, arises from the (np_3/2, np_3/2) orbital pairing. To illustrate the relative values, the contributions from the pairs of the five outermost core spin-orbitals of Rb+ are shown as a bar chart. Comparing the results for Rb+, Cs+ and Fr+, there is a major difference in the results of Fr+. For Fr+ the fourth largest contribution is from the (6p_3/2, 5d_5/2) pair, whereas for the other ions it is of the form (np_1/2, np_1/2). This is again a consequence of the contraction of the 6s_1/2 spin-orbital in Fr+ due to relativistic effects. In the present calculations we have identified the following possible sources of uncertainty. The truncation of the spin-orbital basis set is one possible source. For all the ions we start the computations with 9 symmetries and increase up to 13 symmetries. Along with this, we also vary the number of spin-orbitals until α converges to ≈10^(-4), so we can safely neglect this uncertainty in our calculations. Another source of uncertainty is the truncation of the CC theory at the single and double excitations, for both the unperturbed and the perturbed RCC theories. Based on our previous theoretical results [11,12], the contribution from triple and higher order excitations is at most ≈3.3%. The truncation of e^{T^(1)†} D e^{T^(0)} + e^{T^(0)†} D e^{T^(1)} at second order in the cluster operators is also a source of uncertainty. From our earlier studies [26] with CC theory, and in the present work, we have examined the contribution from the third order in the cluster operators, but the contribution is negligibly small. The quantum electrodynamical (QED) corrections are another source of uncertainty in our calculations and, based on our previous studies, we estimate them at 0.1%. In total, we estimate the uncertainty in our results as ≈3.4%; a summary of this budget is sketched below.
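A minimal sketch of the uncertainty budget just described, assuming (as the quoted total of ≈3.4% suggests) that the individual estimates are simply added; a quadrature combination is shown alongside purely for comparison.

```python
# Uncertainty sources quoted in the text (per cent). Basis-set truncation and
# higher-order dressed-operator terms were found negligible and are omitted.
budget = {
    'triples and higher excitations': 3.3,
    'QED corrections': 0.1,
}

linear_total = sum(budget.values())                         # 3.4%, as quoted
quadrature_total = sum(v ** 2 for v in budget.values()) ** 0.5

print(f'linear sum:  {linear_total:.1f}%')
print(f'quadrature:  {quadrature_total:.2f}%')
```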
VI. CONCLUSION

We have computed the static electric dipole polarizabilities of the alkali ions using the PRCC theory. The PRCC theory is a coupled-cluster based theory and can be easily modified to incorporate other perturbations in atomic many-body calculations. In the present work, we have explored the use of the PRCC theory to calculate the electric dipole polarizability of closed-shell ions and find that the results are in good agreement with the experimental results and previous theoretical results. On a closer examination of the results, the pattern of the contributions from the individual, and from pairs of, spin-orbitals establishes the importance of relativistic corrections in higher Z ions. The results further indicate that it is essential to obtain the outermost p_3/2 spin-orbitals of the ions accurately, since these are associated with the dominant contributions from the Dirac-Fock, RPA and pair-correlation effects.
2012-12-24T13:05:57.000Z
2012-12-24T00:00:00.000
{ "year": 2012, "sha1": "7550990cb246be566bbd33025b7161404e5b0278", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1212.5910", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7550990cb246be566bbd33025b7161404e5b0278", "s2fieldsofstudy": [ "Physics", "Chemistry" ], "extfieldsofstudy": [ "Physics" ] }
118473668
pes2o/s2orc
v3-fos-license
Primordial magnetogenesis

Magnetic fields appear everywhere in the universe. From stars and galaxies, all the way to galaxy clusters and remote protogalactic clouds, magnetic fields of considerable strength and size have been repeatedly observed. Despite their widespread presence, however, the origin of cosmic magnetic fields is still a mystery. The galactic dynamo is believed capable of amplifying weak magnetic seeds to strengths like those measured in ours and other galaxies, but the question is where do these seed fields come from? Are they a product of late, post-recombination, physics or are they truly cosmological in origin? The idea of primordial magnetism is attractive because it makes the large-scale magnetic fields, especially those found in early protogalactic systems, easier to explain. As a result, a host of different scenarios have appeared in the literature. Nevertheless, early magnetogenesis is not problem free, with a number of issues remaining open and a matter of debate. We review the question of primordial magnetic fields and consider the limits set on their strength by the current observational data. The various mechanisms of pre-recombination magnetogenesis are presented and their advantages and shortcomings are debated. We consider both classical and quantum scenarios, operating within as well as outside the standard model, and also discuss how future observations could be used to decide whether the large-scale magnetic fields we see in the universe today are truly primordial or not.

Introduction

Observations have well established the widespread presence of magnetic fields in the universe [1,2,3,4,5,6,7]. In fact, as the technology and the detection methods improve, it seems that magnetic fields are everywhere. The Milky Way, for example, possesses a coherent B-field of µG strength over the plane of its disc. These fields are a very important component of the interstellar medium, since they govern the gas-cloud dynamics, determine the energy of cosmic rays and affect star formation. Similar magnetic fields have also been detected in other spiral and barred galaxies. Cosmic magnetism is not confined to galaxies only, however. Observations have repeatedly verified B-fields of µG-order strength in galaxy clusters and also in high redshift protogalactic structures. Recently, in particular, Kronberg et al and Bernet et al reported organized, strong B-fields in galaxies with redshifts close to 1.3 [8,9]. Also, Wolfe et al have detected a coherent magnetic field of approximately 100 µG in a galaxy at z ≃ 0.7 [10]. All these seem to suggest that magnetic fields similar to that of the Milky Way are common in remote, high-redshift galaxies. This could imply that the time needed by the galactic dynamo to build up a coherent B-field is considerably less than what is usually anticipated. On the other hand, the widespread presence of magnetic fields at high redshifts may simply mean that they are cosmological (pre-recombination) in origin. Although it is still too early to reach a conclusion, the idea of primordial magnetism gains ground as more fields of micro-Gauss strength are detected in remote protogalaxies. Further support may also come from very recent reports indicating the presence of coherent magnetic fields in the low density intergalactic space, where typical dynamo mechanisms cannot operate, with strengths close to 10^(-15) G [11,12,13,14,15].
The measurements of [12], in particular, are based on halos detected around Active Galactic Nuclei (AGN) observed by the Fermi Gamma-Ray Space Telescope. Complementary studies seem to limit the strength of these B-fields between ∼10^(-17) and ∼10^(-14) Gauss [15]. Analogous lower limits were also reported by [11,14] and [13], after measuring radiation in the GeV band (γ-rays) produced by the interaction of TeV photons from distant blazars with those of the Cosmic Microwave Background (CMB). If supported by future surveys, these measurements will lend considerable credence to the idea of primordial magnetism. It is possible, however, that the matter will not settle unless magnetic imprints are found in the CMB spectrum [16]. Among the attractive aspects of cosmological magnetic fields is that they can in principle explain all the large-scale fields seen in the universe today [17,18,19,20,21,22]. Nevertheless, early magnetogenesis is not problem free. The galactic dynamo needs an initial magnetic field in order to operate. Such seed fields must satisfy two basic requirements, related to their coherence scale and strength [23,24,25,26,27,28,29]. The former should not drop below 10 kpc, otherwise it will destabilise the dynamo. The latter typically varies between 10^(-12) and 10^(-22) Gauss. It is conceivable, however, that in open, or dark-energy dominated, Friedmann-Robertson-Walker (FRW) cosmologies the minimum required magnetic strength could be pushed down to ∼10^(-30) G [30]. Producing magnetic seeds that comply with the above mentioned specifications, however, has so far proved a rather difficult theoretical exercise. There are problems with both the scale and the strength of the initial field. Roughly speaking, magnetic seeds generated between inflation and recombination have too small coherence lengths. The reason is causality, which confines the scale of the B-field within the size of the horizon at the time of magnetogenesis. This is typically well below the dynamo requirements. If we generate the seed at the electroweak phase transition, for example, the size of the horizon is close to that of the Solar System. Assuming that some degree of turbulence existed in the pre-recombination plasma, one can increase the coherence scale of the initial field by appealing to a mechanism that in hydrodynamics is known as 'inverse cascade'. In magnetohydrodynamics (MHD), the process results from the conservation of magnetic helicity and effectively transfers magnetic energy from small to successively larger scales. The drawback is that inverse cascade seems to require rather large amounts of magnetic helicity in order to operate efficiently [31,32,33,34]. Inflation can solve the scale problem, since it naturally creates superhorizon-sized correlations. There, however, we have a serious strength issue. Magnetic fields that were generated during a period of typical de Sitter-type inflation are thought to be too weak to seed the galactic dynamo [35]. The solution to the strength problem is usually sought outside the realm of classical electromagnetic theory, or of conventional FRW cosmology. There is a plethora of articles that do exactly that, although mechanisms operating within standard electromagnetism and the Friedmann models have also been reported in the literature. The aim of this review is to present the various mechanisms of early magnetogenesis, outline their basic features and discuss their advantages and weaknesses.
The next section starts with a brief overview of the observation techniques that have established the ubiquitous presence of large-scale magnetic fields in the universe. We then provide the limits on cosmological B-fields imposed by primordial nucleosynthesis and the isotropy of the CMB. Section three sets the mathematical framework for the study of large-scale magnetic fields in relativistic cosmological models. There, for completeness, we also outline the typical magnetic effects on structure formation and how the latter could have backreacted on the B-field itself. In section four we discuss cosmic magnetogenesis within the realm of standard electrodynamics and within the limits of conventional Friedmannian cosmology. After a brief review of the FRW dynamics, we explain why it is theoretically difficult to generate and sustain astrophysically relevant B-fields in these models. More specifically, why inflationary magnetic fields in FRW cosmologies are generally expected to have residual strengths less than 10^(-50) G (far below the current galactic dynamo requirements) at the epoch of galaxy formation. At the same time, it is also pointed out that the standard picture can change when certain general relativistic aspects of the magnetic evolution are accounted for. More specifically, it is shown that curvature effects can in principle slow down the standard 'adiabatic' magnetic decay and thus lead to B-fields with residual strengths much stronger than previously anticipated. In section five we describe the generation of magnetic fields by nonlinear and out-of-equilibrium processes which are believed to have taken place in the early universe. We begin by analyzing several mechanisms of magnetogenesis that could have operated during the reheating epoch of the universe, namely parametric resonance, the generation of stochastic electric currents and the breaking of the conformal invariance of the electromagnetic field by cosmological perturbations. Then we address the generation of magnetic fields during cosmological phase transitions. It is believed that at least two such phase transitions have occurred in the early universe: the EW (Electroweak) and the QCD (Quantum Chromodynamical). In general, the problem with (post-inflationary) early-universe magnetogenesis is that the generated B-fields have high intensity but a very short coherence scale (in contrast to what happens during inflation), which requires performing certain line averages to obtain the desired large-scale intensities. This procedure generally results in weak magnetic fields. To a certain extent, the uncertainty in the obtained residual magnetic values reflects our limited knowledge of the dissipative processes operating at those times. Thus, a better understanding of the reheating physics is required if we are to make more precise predictions. Phase transitions, on the other hand, are better understood. Note, however, that despite the fact that the EW phase transition in the standard model is a second order process, extensions to other particle physics models treat it as first order. The QCD phase transition, on the other hand, was recently established to be a smooth crossover [36]. To the best of our knowledge, however, no work on primordial magnetogenesis in this scenario has been reported in the literature. Section six provides an overview of magnetic generation mechanisms operating outside the standard model. In all scenarios the magnetic fields are created in the very early universe, during inflation.
Then, subhorizon quantum fluctuations in the electromagnetic field become classical superhorizon perturbations, manifesting themselves as current-supported magnetic fields during the subsequent epochs of standard cosmology. In order to overcome the problem of not creating strong enough magnetic seeds, which is known to plague standard electrodynamics, different theories are explored. There are two basic classes of models, depending on whether electrodynamics is linear or nonlinear. In the first case, magnetic fields of astrophysically relevant strengths are usually achieved after breaking the conformal invariance of electromagnetism. This can be achieved by coupling the electromagnetic field with a scalar field (as naturally happens with the dilaton in string cosmology), by introducing dynamical extra dimensions, through quantum corrections leading to a coupling with the curvature tensor, by inducing symmetry breaking, or by means of the trace anomaly. When dealing with nonlinear electrodynamics, on the other hand, the conformal invariance of Maxwell's equations is naturally broken in four dimensions. Recall that the concept of nonlinear electrodynamics was first introduced by Born in his search for a classical, singularity-free theory of the electron. Another example is provided by the description of virtual electron pair creation, which induces a self-coupling of the electromagnetic field. Linear and nonlinear models of electrodynamics are discussed in detail and the parameter space for which strong enough magnetic seeds are generated is determined. Finally, in section seven, we briefly summarise the current state of research on primordial magnetogenesis and take a look at future expectations.

2. Magnetic fields in the universe

Magnetic fields have long established their ubiquitous presence in the universe. They are a major component of the interstellar medium, contributing to the total pressure, affecting the gas dynamics, the distribution of cosmic rays and star formation. It also seems very likely that large-scale magnetic fields have played a fundamental role during the formation of galaxy clusters. Despite our increasing knowledge, however, many key questions related to the origin and the role of these fields remain as yet unanswered.

2.1 Large-scale magnetic fields in the universe

Most galaxies, including the Milky Way, carry coherent large-scale magnetic fields of µG-order strength. Analogous fields have also been detected in galaxy clusters and in young, high-redshift protogalactic structures. In short, the deeper we look for magnetic fields in the universe, the more widespread we find them to be.

2.1.1 Detection and measuring methods

The key to magnetic detection is polarized emission at the optical, infrared, submillimeter and radio wavelengths. Optical polarization is due to extinction along the line of sight, caused by elongated dust grains aligned by the interstellar magnetic field. The net result is that the electromagnetic signal has a polarisation direction parallel to the intervening B-field. This physical mechanism is sometimes referred to as the Davis-Greenstein effect [37]. Although optical polarization is of limited value, it has unveiled the magnetic structure in the spiral arms of the Milky Way and in other nearby galaxies [38,39,40,41]. Most of our knowledge about galactic and intergalactic magnetic fields comes from radio-wave signals. The intensity of synchrotron emission is a measure of the strength of the total magnetic field component in the sky plane.
Note that polarized emission is due to ordered B-fields and unpolarized comes from turbulent ones. The Zeeman splitting of radio spectral lines is the best method to directly measure the field strength in gas clouds of our galaxy [42], in OH masers in starburst galaxies [43] and in dense HI clouds in distant galaxies on the line of sight towards bright quasars [10]. The drawback is that the Zeeman effect is very weak and can mainly be used for detecting interstellar magnetic fields. This is due to the small line shift, which is given by

∆ν/ν = 1.4 g B/ν , (2.1.1)

and is extremely difficult to observe at large distances. Note that in the above the B-field is measured in µG and the line frequency in Hz. Also, the parameter g represents the Landé factor that relates the angular momentum of an atom to its magnetic moment. When polarized electromagnetic radiation crosses a magnetized plasma its orientation is changed by Faraday rotation. The latter is caused by the left and right circular polarisation states traveling with different phase velocities. For linearly polarised radiation, the rotation measure (RM) associated with a source at redshift z_s is (cf., e.g., [44])

RM(z_s) ≃ 8 × 10^5 ∫₀^{z_s} n_e B(z) (1 + z)^-2 dL(z) (rad/m²) ,

where n_e is the electron density of the intervening plasma (in cm^-3), B is the magnetic intensity along the line of sight (in µG) and dL is the incremental distance traveled by the radio signal. The latter is determined by the background cosmology, with H_0 and Ω_0 representing the present values of the Hubble constant and of the density parameter respectively. 1 As the rotation angle is sensitive to the sign of the field direction, only ordered B-fields can give rise to Faraday rotation. Multi-wavelength observations determine the strength and the direction of the line-of-sight magnetic component. Then, the total intensity and the polarization vectors yield the three-dimensional picture of the field and allow us to distinguish between its regular, anisotropic and random components. Some novel detection methods try to exploit the effects that an intervening magnetic field can have upon the highly energetic photons emitted by distant active sources [45,46,47,48]. Using such techniques, together with data from state-of-the-art instruments (like the Fermi Gamma-Ray Space Telescope for example), three independent groups have recently reported the detection of intergalactic magnetic fields with strengths close to 10^-15 G (see § 2.1.2 next).

Galactic and extragalactic magnetic fields

The strength of the total magnetic field in galaxies can be determined from the intensity of the total synchrotron emission, assuming equipartition between the magnetic energy density and that of the cosmic rays 2 . This seems to hold on large scales (both in space and time), though deviations occur locally. Typical equipartition strengths in spiral galaxies are around 10µG. Radio-faint galaxies, like the M31 and M33, have weaker total fields (with B ∼ 5µG), while gas-rich galaxies with high star-formation rates, such as the M51, M83 and NGC6946, have magnetic strengths of approximately 15µG. The strongest fields, with values between 50 µG and 100 µG, are found in starburst and merging galaxies, like the M82 and NGC4038/39 respectively [51].

1 Conventionally, positive RM values indicate magnetic fields directed towards the observer and negative ones correspond to those pointing away.
2 Determining the magnetic strength from the synchrotron intensity requires information about the number density of the cosmic-ray electrons.
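To get a feel for the numbers behind the two formulae above, the following Python sketch (our own illustration, not part of the review; the toy density and field profiles are assumptions) evaluates the Zeeman shift (2.1.1) and a crude rotation-measure estimate:

import numpy as np

# Fractional Zeeman shift (2.1.1): dnu/nu = 1.4 g B / nu, with B in muG, nu in Hz.
def zeeman_shift(nu_hz, B_muG, g=1.0):
    return 1.4 * g * B_muG / nu_hz

# A 5 muG field acting on the 1.4 GHz (21 cm) line: a shift of only ~5e-9,
# which is why the effect is hard to detect beyond nearby gas clouds.
print(zeeman_shift(1.4e9, 5.0))

# Toy rotation measure: RM ~ 8e5 * int n_e B_par (1+z)^-2 dL, with n_e in cm^-3
# and B_par in muG; we assume constant n_e, B_par and a schematic path element.
def toy_rm(n_e, B_par, z_s, steps=1000):
    z = np.linspace(0.0, z_s, steps)
    dL = np.gradient(z)  # placeholder: a real estimate needs dL(z) from cosmology
    return 8.0e5 * np.sum(n_e * B_par * (1.0 + z)**-2 * dL)

print(toy_rm(n_e=1e-5, B_par=1e-3, z_s=0.5))

Only the (1+z)^-2 weighting and the unit conventions are taken from the text; everything else in the sketch is schematic.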
The cosmic-ray electron density can be obtained via X-ray emission, by inverse-Compton scattering, or through γ-ray bremsstrahlung. When such data is unavailable, an assumption must be made about the relation between cosmic-ray electrons and magnetic fields. This is usually the aforementioned principle of energy equipartition [49,50].

Spiral galaxies observed in total radio emission appear very similar to those seen in the far-infrared. The equipartition magnetic strength in the arms can be up to 30µG and shows a low degree of polarization. The latter indicates that the fields are randomly oriented there. On the other hand, synchrotron radio-emission from the inter-arm regions has a higher degree of polarization. This is due to stronger (10µG-15µG) and more regular B-fields, oriented parallel to the adjacent optical arm. The ordered fields form spiral patterns in almost every galaxy, even in ringed and flocculent galaxies. Therefore, the magnetic lines do not generally follow the gas flow (which is typically almost circular) and dynamo action is needed to explain the observed radial magnetic component. In galaxies with massive bars, however, the field lines appear to follow the gas flow. As the gas rotates faster than the bar pattern of the galaxy, a shock occurs in the cold gas. At the same time, the warm gas is only slightly compressed. Given that the observed magnetic compression in the spiral arms and the bars is also small, it seems that the ordered field is coupled to the warm diffuse gas and is strong enough to affect its flow [7]. Spiral dynamo modes can be identified from the pattern of polarization angles and Faraday rotation measures from multi-wavelength radio observations of galaxy disks [52], or from RM data of polarized background sources [53]. The disks of some spiral galaxies show large-scale RM patterns, but many galaxy disks possess no clear patterns of Faraday rotation. Faraday rotation in the direction of QSOs helps to determine the field pattern along the line of sight of an intervening galaxy [53,54]. Recently, high resolution spectra have unambiguously associated quasars showing strong MgII absorption lines with large Faraday rotation measures. As MgII absorption occurs in the haloes of normal galaxies lying along the line of sight to the quasars, this implies that organized strong B-fields are also present in high-redshift galaxies [8,9,10]. Magnetic fields have also been detected within clusters of galaxies, where X-ray observations have revealed the presence of hot gas [5]. There are several indications that favour the existence of cluster magnetic fields. In particular, galaxy clusters are known to have radio halos that trace the spatial distribution of the intra-cluster gas found in the X-ray observations. The radio signals are due to synchrotron emission from relativistic electrons spiralling along the field lines. In addition, there have been reports of Faraday rotation measurements of linearly polarized emissions crossing the intracluster medium. The first detection of a cluster magnetic field was made in the Coma cluster [55]. The Very Large Array (VLA) was used to compare Faraday rotation measures of radio sources within and directly behind the Coma cluster with radiation not crossing the cluster. Since then, there have been more analogous detections. It turns out that the observed cluster-field strengths vary slightly with the type of cluster. In particular, the magnetic field strength depends on whether we are dealing with cooling flow or non-cooling flow clusters.
Faraday observations indicate turbulent field strengths of µG-order in non-cooling flow clusters, such as the Coma. For cooling flow clusters, like the Hydra for example, the B-fields are of the order of a few tens of µG [56]. In fact, the cool core region of the Hydra A cluster is associated with a magnetic field of 7µG with a correlation length of 3 Kpc. Non-cooling flow clusters like the Coma, on the other hand, have weaker fields of the order of 3µG but with larger correlation lengths (between 10 Kpc and 30 Kpc) [57]. In general, the magnetic structure is not homogeneous but is rather patchy on small scales (5 Kpc - 20 Kpc), indicating the presence of tangled magnetic fields [56]. An alternative way of determining the strength of cluster B-fields is to compare the radio synchrotron emission with inverse-Compton X-ray emission [5]. The former comes from spiraling electrons along the cluster magnetic field. The latter is mainly due to CMB photons being up-scattered by the relativistic electrons of the intracluster gas. In view of the accumulating observational evidence for magnetic presence on all scales up to that of a galaxy cluster, the idea of a truly cosmological origin for cosmic magnetism gains ground. The potential detection of such primordial B-fields in the intergalactic medium may also change our understanding of the way structure formation has progressed. Note that an intergalactic magnetic field ordered on very large scales would pick out a preferred direction, which should then manifest itself in Faraday rotation measurements from distant radio sources. This puts an upper limit on any cosmological intergalactic magnetic field of B_IGM ≲ 10^-11 G [58]. Assuming that such a field has a characteristic scale, galaxy rotation measures suggest a size of 1 Mpc and an upper limit of the order of 1 nG [44]. Indications of intergalactic magnetic fields have come from observations of radio-galaxy groupings near the Coma cluster, suggesting the presence of B-fields with strengths between 0.2 µG and 0.4 µG and a coherence scale close to 4 Mpc [59]. There is also evidence for an intergalactic magnetic field around 0.3 µG on scales of the order of 500 Kpc, from excess rotation measures towards the Hercules and the Perseus-Pisces superclusters [60,61]. In addition, intergalactic B-fields close to 30 nG and spanning scales of approximately 1h^-1 Mpc were recently suggested after cross-correlating the galaxy density field, obtained from the 6th Data Release of the Sloan Digital Sky Survey, with a large sample of Faraday rotation measures supplied by the NRAO-VLA Sky Survey [62]. Additional reports of intergalactic magnetic fields have appeared within the last year, using techniques that exploit the magnetic effects on the highly energetic photons emitted by distant sources (e.g. see [45,46,47,48]). More specifically, TeV-energy photons from distant AGNs interact with the low frequency photons of the extragalactic background and lead to electron-positron pair creation. These pairs produce (GeV-level) γ-rays through the inverse-Compton scattering of CMB photons. Observation-wise, the key point is that a magnetic presence, even a very weak one, can affect the profile of the resulting γ-ray spectra. For instance, the B-field can cause the formation of an extended halo around the γ-ray images of distant AGNs. Such halos were first reported by Ando and Kusenko, using combined data from the Atmospheric Cherenkov Telescopes and Fermi Gamma-Ray Space Telescope [12].
Subsequent complementary analyses indicated the presence of an intergalactic magnetic field with strength between 10^-17 and 10^-14 G [15]. In addition to halo formation, the B-field can also reduce the observed flux of the secondary GeV-photons by deflecting them into larger solid angles. Using observations of the Fermi/Large Area Telescope and assuming that the original TeV photons were strongly beamed, a lower limit of ∼ 10^-15 G was imposed on the intergalactic magnetic field [11,14]. A similar lower limit of ∼ 10^-16 G was also obtained assuming that the blazar source radiated isotropically [13].

Limits on primordial magnetic fields

Any primordial magnetic field must comply with a number of astrophysical constraints, the most significant of which come from Big Bang Nucleosynthesis (BBN) and the isotropy of the CMB. The latter probes B-fields with coherence scales larger than the particle horizon during nucleosynthesis, while the BBN limits apply in principle to all scales.

Nucleosynthesis limits

The main effects of a magnetic presence on the output of primordial nucleosynthesis relate to: (a) the proton-to-neutron conversion rate; (b) the expansion and cooling of the universe; and (c) the electron thermodynamics. Here we will only provide a very brief summary of these effects. For a detailed review, the reader is referred to [20]. (a) In the early universe, the weak interaction is responsible for maintaining chemical equilibrium between protons and neutrons. The main effect of a strong magnetic presence at the time of nucleosynthesis is to enhance the conversion rate of neutrons into protons. As a result, the neutron-to-proton ratio would freeze out at a lower temperature. This in turn would lead to a less efficient production of 4He and of heavier elements [63,64]. In fact, the magnetic effect would be catastrophic if B ≫ m_p²/e ∼ 10^17 G at the time of nucleosynthesis. (b) The temperature at which the proton-to-neutron ratio freezes out is determined by the balance between the timescale of the weak interaction and the expansion rate of the universe [65]. Equilibrium is attained when Γ_n→p ∼ H, where Γ_n→p is the rate of the weak interactions and H is the Hubble parameter at the time. The latter is determined by the total energy density of the universe, to which the B-field also contributes. Thus, a strong magnetic presence will increase the value of the Hubble parameter. This would cause an earlier freeze-out of the proton-to-neutron ratio and result in larger residual amounts of 4He [66,67]. (c) The magnetic presence will also change the phase-space volume of electrons and positrons, since their momentum component normal to the B-field will become discrete (Landau levels). Therefore, the energy density, the number density and the pressure of the electron gas increase relative to their magnetic-free values [64]. The rise happens at the expense of the background photons, which transfer energy to the lowest Landau level. This delays the electron-positron annihilation, which in turn increases the photon-to-baryon ratio and finally leads to lower 3He and D abundances [68]. All of the above need to be accounted for when calculating the BBN limits on primordial magnetic fields. This is done by means of numerical methods, which seem to conclude that the main magnetic effect on the light-element abundances comes from the field's contribution to the expansion rate of the universe (i.e. case (b)).
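Before quoting the resulting bound, it is worth noting how such a limit propagates to later epochs: a frozen-in field dilutes adiabatically (B ∝ a^-2 ∝ T²), so the conversion is elementary. A minimal numerical sketch, with both epoch temperatures assumed purely for illustration:

T_bbn = 1e10   # K, rough temperature around weak-interaction freeze-out (assumed)
T_gf  = 27.0   # K, CMB temperature at z ~ 9, taken as the galaxy-formation epoch (assumed)
B_bbn = 1e11   # G, the nucleosynthesis bound quoted just below

B_gf = B_bbn * (T_gf / T_bbn)**2
print(f"BBN bound scaled to galaxy formation: {B_gf:.1e} G")   # ~7e-7 G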
The overall constraint on the magnetic strength is B ≲ 10^11 G at the time of nucleosynthesis, which (roughly) translates into B ≲ 7 × 10^-7 G at the time of galaxy formation [20].

Cosmic microwave limits

Observations of the CMB temperature anisotropies and polarization provide valuable tools to constrain cosmological models. As such, they also play an important role in the diagnostic of early universe magnetic fields. In comparison to the data of the angular power spectra of polarization, C^EE_ℓ, and temperature-polarization, C^TE_ℓ, the temperature angular power spectrum, C^TT_ℓ, is known at higher precision. For example, the 7-year WMAP (Wilkinson Microwave Anisotropy Probe) power spectrum is limited only by cosmic variance up to ℓ ≈ 548 [69]. Moreover, on smaller scales, observations from the CBI (Cosmic Background Imager) [70], VSA (Very Small Array) [71] and ACBAR (Arcminute Cosmology Bolometer Array Receiver) [72] experiments, together with the forthcoming SPT (South Pole Telescope) [73] mission, will determine C^TT_ℓ to even higher accuracy. The PLANCK satellite is expected to extend the region limited only by cosmic variance to ℓ ≈ 1500. At the moment, the high isotropy of the Cosmic Microwave Background appears to exclude homogeneous cosmological magnetic fields much stronger than ∼ 10^-9 G [74]. A similar limit is found for stochastic magnetic fields as well [75,76,77,78]. It has been shown that the temperature angular power spectrum, C^TT_ℓ, acquires a small additional contribution from magnetically-induced vector and scalar perturbations across all angular scales. The extra pressure that the B-field adds into the system can change the position and the magnitude of the acoustic peaks, thus producing a potentially observable effect [79,80]. The presence of small-scale magnetic fields appears to leave undamped features on small angular scales and may also lead to distinctive polarisation structures [81,82]. In addition, large-scale primordial fields could be related to the low quadrupole-moment problem [83]. Nevertheless, the magnetic signal remains subdominant to that from standard scalar perturbations until around ℓ ≈ 2000, depending on the field strength and spectral index [84,85,86,87,88]. Magnetic fields also source tensor modes, which however are of relatively low amplitude. The signal is similar to that of inflationary gravitational waves, but probably weaker in strength [89]. Therefore, the direct magnetic impact on the CMB TT correlation does not generally provide an ideal probe of primordial magnetism. However, the CBI mission observed a weak increase of power on small scales, as compared to the concordance model [71]. Provided this is real and not a statistical or systematic artifact, it could be partly explained by the presence of a cosmological B-field [84,90]. Nucleosynthesis bounds, however, imply that a primordial field is unlikely to account for all the increase. Besides contributing to the CMB temperature fluctuations, a primordial magnetic field also produces E-mode polarisation that can significantly change the angular power spectrum of the standard ΛCDM model [85]. However, the polarisation limits are not as strong as those coming from the temperature anisotropy. Due to the presence of both vector and tensor perturbations, the magnetic field also leads to B-mode polarisation. Moreover, B-modes are also induced in the scalar sector by Faraday rotation if a magnetic field is present at decoupling [91,92,93,94,95,96].
Taking into account that, in the standard picture, B-modes are produced only by lensed E-modes and by inflationary gravitational waves, the observation of a distinct B-mode power spectrum would in principle be the clearest indication of a primordial magnetic field. However, the CMB polarisation maps are poorly known, compared to the temperature ones. While we currently possess a power spectrum C^TE_ℓ, this is by no means cosmic-variance limited on any scale. The observations of the B-modes yield bounds consistent with zero [97,98]. These are on relatively small scales, directly observing the region at which magnetic effects may come to dominate. Nevertheless, we are far from the required accuracy, particularly for the B-modes. Given the limitations of the power spectra, the non-Gaussianity of the temperature map is a reasonable place to look for further constraints on primordial magnetic fields [99,100,101,102,103,104,105]. Although up to now the observations are entirely consistent with Gaussian initial conditions, there are non-Gaussian features in the WMAP maps [106]. Also, the number of non-Gaussian features could well increase with the next generation of CMB experiments.

Limits from gravitational waves

A strong limit on stochastic magnetic fields produced before nucleosynthesis has been derived in [107]. The anisotropic stress of the magnetic field acts as a source term in the evolution equation of gravity waves. This causes the conversion of magnetic field energy into gravity waves above a certain critical value of the magnetic field strength. In particular, the strength (smoothed over a scale λ) of magnetic fields generated during inflation must be smaller than B_λ ∼ 10^-20 G for spectral indices n_B > −2, where n_B = −3 corresponds to a scale-invariant magnetic energy spectrum. If the magnetic field is produced by a causal mechanism, for example during the electroweak phase transition (in which case n_B > 2), its strength has to be below 10^-27 G in order not to lose all of its energy density to gravitational waves. The magnetic-strength limits asserted in [107] are the strongest reported in the literature, far more restrictive than those coming from nucleosynthesis or the CMB. However, analogous studies of magnetically produced gravity waves have reached different conclusions. It has been claimed, in particular, that the limits on cosmological magnetic fields set by the latest LIGO S5 data lie close to those obtained by BBN and the CMB [108]. Finally, we should also note the possibility of constraining primordial B-fields using the ionisation history of the post-recombination universe and, in particular, the observed re-ionisation depth. Thus, based on the 5-year WMAP data, upper limits of nG order have been reported in the literature [109].

Relativistic magnetised cosmologies

Although the study of large-scale magnetic fields goes a long way back into the past, the first systematic attempts to incorporate magnetism into cosmology appeared in the late 60s and the early 70s [110,111,112,113]. Next, we will provide the basic background for the relativistic study of cosmological B-fields. For details and recent reviews the reader is referred to [114,115,116,117,118,119].

The gravitational field

In the geometrical framework of general relativity, gravity is a manifestation of the non-Euclidean geometry of the spacetime.
The gravitational field is therefore described by the Riemann curvature tensor (R_abcd), which satisfies the Ricci identities

2∇_[a ∇_b] v_c = R_abcd v^d , (3.1.1)

applied here to an arbitrary vector v_a (with ∇_a representing the familiar covariant derivative operator). The Riemann tensor also assumes the invariant decomposition

R_abcd = C_abcd + (1/2)(g_ac R_bd + g_bd R_ac − g_bc R_ad − g_ad R_bc) − (R/6)(g_ac g_bd − g_ad g_bc)

and obeys the symmetries R_abcd = R_cdab and R_abcd = R_[ab][cd]. Note that g_ab is the spacetime metric, R_ab = R^c_acb is the Ricci tensor, R = R^a_a is the associated Ricci scalar and C_abcd is the Weyl (or conformal curvature) tensor. 3 The Ricci component of the Riemann tensor determines the local gravitational field through the Einstein field equations

R_ab − (1/2)R g_ab + Λ g_ab = κ T_ab ,

where T_ab is the energy-momentum tensor of all the matter fields involved, κ = 8πG and Λ the cosmological constant. 4 The Weyl tensor, on the other hand, has to do with the long-range component of the gravitational field (i.e. tidal forces and gravity waves), shares the same symmetries with R_abcd and is also trace-free (i.e. C^c_acb = 0).

3 Unless stated otherwise, we consider a general 4-dimensional (pseudo) Riemannian spacetime with a Lorentzian metric of signature (−, +, +, +). Also, throughout this review, Latin indices take values between 0 and 3, while their Greek counterparts vary from 1 to 3.
4 We use the Heaviside-Lorentz units for the electromagnetic field in this section. Natural units, with c = 1 = k_B = ħ and energy as the fundamental dimension, are used throughout this review.

Kinematics

We introduce a family of observers with worldlines tangent to the timelike 4-velocity field u_a (i.e. u_a u^a = −1). These are the fundamental observers that define the direction of time. Then, the tensor h_ab = g_ab + u_a u_b projects orthogonal to u_a and into the observers' instantaneous 3-dimensional rest-space. 5 Together, u_a and h_ab introduce a 1+3 'threading' of the spacetime into time and space, which decomposes physical quantities, operators and equations into their irreducible timelike and spacelike parts (see [120,121] for further details). For example, splitting the covariant derivative of the observers' 4-velocity leads to the irreducible kinematic variables of the motion. In particular, we arrive at

∇_b u_a = (Θ/3)h_ab + σ_ab + ω_ab − A_a u_b ,

where Θ = D^a u_a is the volume (expansion) scalar, σ_ab = D_⟨b u_a⟩ is the shear tensor, ω_ab = D_[b u_a] is the vorticity tensor and A_a = u̇_a = u^b ∇_b u_a is the 4-acceleration vector. 6 The volume scalar describes changes in the average separation between neighbouring observers. When Θ is positive this separation increases, implying that the associated fluid element expands. In the opposite case we have contraction. The volume scalar also defines a representative length scale (a) by means of ȧ/a = Θ/3. In cosmological studies, a is commonly referred to as the 'scale factor'. We use the shear to monitor changes in the shape of the moving fluid under constant volume, while the vorticity traces its rotational behaviour. Note that we can replace the vorticity tensor with the vorticity vector ω_a = ε_abc ω^bc/2, where ε_abc represents the 3-D Levi-Civita tensor. Finally, the 4-acceleration reflects the presence of non-gravitational forces and vanishes when the observers' worldlines are timelike geodesics. The time evolution of the volume scalar, the vorticity vector and the shear tensor is determined by a set of three propagation equations, supplemented by an equal number of constraints. Both sets are obtained after applying the Ricci identities (see (3.1.1) in § 3.1.1) to the fundamental 4-velocity field [120,121].

Matter fields

Analogous decompositions apply to the rest of the kinematical and dynamical variables.
Thus, relative to the u_a-frame, the energy-momentum tensor of a general (imperfect) fluid splits as

T_ab = ρ u_a u_b + p h_ab + 2q_(a u_b) + π_ab . (3.1.5)

5 By construction, h_ab is a symmetric spacelike tensor, with h^a_a = 3 and h_ab h^b_c = h_ac. The projector coincides with the metric of the observers' 3-dimensional space in non-rotating spacetimes.
6 Overdots indicate (proper) time derivatives along the u_a-field, while the gradient D_a = h_a^b ∇_b defines the 3-dimensional covariant derivative operator. Round brackets denote symmetrisation, square ones antisymmetrisation and angled ones indicate the symmetric and traceless part of projected tensors and vectors.

Here, ρ = T_ab u^a u^b represents the energy density, p = T_ab h^ab/3 the isotropic pressure, q_a = −h_a^b T_bc u^c the total energy flux and π_ab = h_⟨a^c h_b⟩^d T_cd the anisotropic pressure of the matter, as measured by the fundamental observers [120,121]. When dealing with a perfect fluid, both q_a and π_ab vanish and the above reduces to T_ab = ρ u_a u_b + p h_ab. The remaining degrees of freedom are determined by the equation of state, which for a barotropic medium takes the simple p = p(ρ) form.

Electromagnetic fields

Magnetic and electromagnetic fields introduce new features to any cosmological model through their energy density and pressure contributions and due to their generically anisotropic nature. The Maxwell field is invariantly described by the antisymmetric Faraday tensor. Relative to the fundamental observers introduced in § 3.1.2, the latter decomposes as

F_ab = 2u_[a E_b] + ε_abc B^c ,

where E_a = F_ab u^b and B_a = ε_abc F^bc/2 are respectively the electric and magnetic components. The inherent anisotropy of the electromagnetic field is reflected in the form of its energy-momentum tensor. The latter has the invariant form T^(em)_ab = −F_ac F^c_b + (F_cd F^cd/4)g_ab, which relative to the u_a-frame recasts to

T^(em)_ab = (1/2)(E² + B²) u_a u_b + (1/6)(E² + B²) h_ab + 2P_(a u_b) + Π_ab . (3.1.8)

Comparing the above to (3.1.5) in § 3.1.3, we conclude that the Maxwell field corresponds to an imperfect fluid with energy density (E² + B²)/2, isotropic pressure (E² + B²)/6, energy flux given by the Poynting vector P_a = ε_abc E^b B^c and anisotropic stresses represented by the symmetric and trace-free tensor Π_ab = −E_⟨a E_b⟩ − B_⟨a B_b⟩. This fluid-like description of the Maxwell field has proved particularly helpful in many applications [114,115,116,117,118,119].

Conservation laws

In the case of charged matter, the total energy-momentum tensor is T_ab = T^(m)_ab + T^(em)_ab, with ∇^b T^(em)_ab = −F_ab J^b and F_ab J^b representing the Lorentz 4-force. Combining the two, we obtain the conservation laws of the total energy and momentum. These are given by the continuity equation (3.1.10) and by the Navier-Stokes equation (3.1.11) respectively [119]. Note that µ = −J_a u^a is the electric charge density and J_a = h_a^b J_b is the associated 3-current, so that J_a = µu_a + J_a. An additional conservation law is that of the 4-current, which satisfies the invariant constraint ∇_a J^a = 0. The latter translates into the conservation law for the charge density, given by [119]

µ̇ = −Θµ − D^a J_a − A^a J_a . (3.1.12)

Evolution of the electromagnetic field

The vector nature of the electromagnetic components and the geometrical approach to gravity that general relativity introduces mean that the Maxwell field is the only known energy source that couples directly to the spacetime curvature through the Ricci identities as well as the Einstein field equations. Both sets are therefore necessary for the full relativistic treatment of electromagnetic fields.

Maxwell's equations

We monitor the evolution of the electromagnetic field using Maxwell's formulae.
In their invariant form these read

∇_[c F_ab] = 0 and ∇_b F^ab = J^a , (3.2.1)

with the first manifesting the existence of a 4-potential. Relative to the u_a-frame, the above set splits into two pairs of propagation and constraint equations. The former consists of

Ė_⟨a⟩ = (σ_ab + ε_abc ω^c)E^b − (2/3)ΘE_a + ε_abc A^b B^c + curlB_a − J_a (3.2.2)

and

Ḃ_⟨a⟩ = (σ_ab + ε_abc ω^c)B^b − (2/3)ΘB_a − ε_abc A^b E^c − curlE_a , (3.2.3)

while the latter reads

D^a E_a = µ − 2ω_a B^a and D^a B_a = 2ω_a E^a . (3.2.4)

These provide the 1+3 forms of Coulomb's and Gauss' laws respectively. Note that Eqs. (3.2.2)-(3.2.4) contain relative motion effects, in addition to the standard 'curl' and 'divergence' terms of their more traditional versions. This is an essentially built-in property of the 1+3 formalism, which should always be kept in mind when applying expressions like the above.

The electromagnetic wave equations

Maxwell's equations also provide the wave equation of the electromagnetic tensor. This can be obtained by applying the Ricci identities to the Faraday tensor and takes the invariant form of Eq. (3.2.5) (e.g. see [118,122]), in which the d'Alembertian ∇² = ∇_a∇^a acts on F_ab and the source terms involve the Riemann and Ricci tensors, R_abcd and R_ab respectively (see § 3.1.1). The above results from the vector nature of the electromagnetic components and from the geometrical interpretation of gravity that general relativity advocates. The two guarantee that the Maxwell field is the only known source of energy that couples directly to gravity through both the Einstein equations and the Ricci identities. Expression (3.2.5), which clearly shows how spacetime curvature drives electromagnetic disturbances, can also provide the wave-equations of the individual components of the Maxwell field. Alternatively, one may obtain these relations using the 1+3 set (3.2.2)-(3.2.4).

The ideal MHD limit

In the frame of the fluid, Ohm's law takes the form J_a = ςE_a, with ς representing the electrical conductivity of the medium (3.3.1). At the ideal MHD limit, where ς → ∞ and the electric field vanishes in the fluid frame, Maxwell's formulae reduce to one propagation equation and three constraints. The former comes from (3.2.3) and is the familiar magnetic induction equation

Ḃ_⟨a⟩ = (σ_ab + ε_abc ω^c)B^b − (2/3)ΘB_a . (3.3.2)

The constraints, on the other hand, are obtained from Eqs. (3.2.2) and (3.2.4). In particular, eliminating the electric field from these relations, we arrive at

J_a = curlB_a + ε_abc A^b B^c and µ = 2ω_a B^a , (3.3.3a,b)

respectively, with the Gauss constraint reducing to D^a B_a = 0. Following (3.3.3b), the inner product ω_a B^a corresponds to an effective charge density, triggered by the relative motion of the B-field. In the absence of the electric field, the electromagnetic energy-momentum tensor simplifies as well. To be precise, expression (3.1.8) reduces to

T^(B)_ab = (1/2)B² u_a u_b + (1/6)B² h_ab + Π_ab .

This means that the magnetic field can be seen as an imperfect fluid with energy density given by B²/2, isotropic pressure equal to B²/6, zero energy flux and anisotropic stresses given by Π_ab = −B_⟨a B_b⟩. At the MHD limit, the matter energy and that of the residual magnetic field are conserved separately, with the induction equation (see formula (3.3.2) above) providing the conservation law of the magnetic energy. At the same time, the momentum conservation law takes the form of Eq. (3.3.6). When dealing with a perfect fluid with zero pressure, we may set p = 0 = q_a = π_ab. Then, starting from (3.3.6), one can show that A_a B^a = 0 and recast the latter into the form

ρ A_a = ε_abc J^b B^c = −(1/2)D_a B² + B^b D_b B_a ,

where in the right-hand side we have two expressions for the Lorentz force. Note that the first term in the last equality is due to the field's pressure, while the second carries the effects of the magnetic tension. The former reflects the tendency of the field lines to push each other apart and the latter their elasticity and tendency to remain 'straight'. As we will explain below, the majority of studies analysing the magnetic effects on structure formation do not account for the tension contribution to the Lorentz force.
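The pressure-tension split of the Lorentz force quoted above is at heart a vector-calculus identity, which can be checked symbolically. The following sketch (our own verification, flat 3-dimensional space and Cartesian components, not part of the review) confirms that (curlB) × B = (B·∇)B − ∇(B²/2):

import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
B = [sp.Function(f'B{i}')(x, y, z) for i in range(3)]

def curl(V):
    return [sp.diff(V[2], y) - sp.diff(V[1], z),
            sp.diff(V[0], z) - sp.diff(V[2], x),
            sp.diff(V[1], x) - sp.diff(V[0], y)]

def cross(U, V):
    return [U[1]*V[2] - U[2]*V[1],
            U[2]*V[0] - U[0]*V[2],
            U[0]*V[1] - U[1]*V[0]]

J = curl(B)                 # Ampere's law with vanishing electric field: J = curl B
lorentz = cross(J, B)       # the Lorentz force J x B

B2 = sum(b**2 for b in B)
tension  = [sum(B[j]*sp.diff(B[i], coords[j]) for j in range(3)) for i in range(3)]
pressure = [sp.diff(B2/2, c) for c in coords]

# Verify (curl B) x B = (B.grad)B - grad(B^2/2) component by component.
for i in range(3):
    assert sp.simplify(lorentz[i] - tension[i] + pressure[i]) == 0
print("Lorentz force = magnetic tension - magnetic pressure gradient: verified")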
Magnetism and structure formation

Despite the widespread presence of large-scale magnetic fields in the universe, their role and implications during structure formation are still not well understood. Here, we will briefly summarise the way B-fields could have altered the linear and the mildly nonlinear stages of galaxy formation and also how the latter might have backreacted on the magnetic evolution.

The linear regime

Scenarios of magnetised structure formation typically work within the ideal-MHD approximation, looking at the effects of the magnetic Lorentz force on density inhomogeneities. The bulk of the available inhomogeneous treatments are Newtonian, with the relativistic approaches being a relatively recent addition to the literature. Although the role of magnetism as a source of density and vorticity perturbations was established early on [128,129,130], the complicated action of the B-field did not allow for analytic solutions (with the exception of [131] for the case of dust). Full solutions were provided by means of covariant techniques, which considerably simplified the mathematics [114,115,116,117,118,119]. Magnetic fields generate and affect all three types of density inhomogeneities, namely scalar, vector and (trace-free) tensor inhomogeneities. The former are those commonly referred to as density perturbations and represent overdensities or underdensities in the matter distribution. Vector inhomogeneities describe rotational (i.e. vortex-like) density perturbations. Finally, tensor-type inhomogeneities correspond to shape distortions. Following [114,115,116,117,118,119], the scalar ∆ = (a²/ρ)D²ρ describes linear perturbations in the density (ρ) of the matter and corresponds to the more familiar density contrast δρ/ρ. Note that positive values of ∆ indicate overdensities and negative ones underdensities. In a perturbed, weakly magnetised and spatially flat Friedmann-Robertson-Walker (FRW) universe, the above defined scalar evolves according to Eq. (3.4.2), 7 where Z = a²D²Θ and B = (a²/B²)D²B². The first of the last two variables describes linear inhomogeneities in the smooth Hubble expansion and the second represents perturbations in the magnetic energy density. Then, to linear order, Z and B obey the propagation formulae (3.4.3) and (3.4.4) respectively. In the above w = p/ρ is the (constant) barotropic index of the matter, H = ȧ/a is the background Hubble parameter, c_s² = ṗ/ρ̇ is the square of the adiabatic sound speed and c_a² = B²/[ρ(1 + w)] is that of the Alfvén speed. We have also assumed that B² ≪ ρ, given the relative weakness of the magnetic field. Expression (3.4.2) verifies that B-fields are generic sources of linear density perturbations. Indeed, even when ∆ and Z are zero initially, ∆ will subsequently take nonzero values due to the magnetic presence. Also, Eq. (3.4.4) shows that linear perturbations in the magnetic energy density evolve in tune with those in the density of the matter (i.e. B ∝ ∆). This means that a B-field residing in an overdense region of an Einstein-de Sitter universe will grow by approximately two to three orders of magnitude (see solution (3.4.7) below). Note that the aforementioned growth occurs during the linear regime of structure formation and is independent of the (nonlinear) increase in the field's strength due to the adiabatic compression of a protogalactic cloud (see § 3.4.2 for more details). Finally, we should emphasise that only the pressure part of the Lorentz force contributes to Eqs. (3.4.2) and (3.4.3).
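The two-to-three orders of magnitude quoted above can be recovered with very little arithmetic. In an Einstein-de Sitter background the growing mode has ∆ ∝ t^(2/3) ∝ a, and B ∝ ∆; a short sketch (taking that scaling at face value, with assumed, purely illustrative redshifts):

# Linear growth of the magnetic energy-density perturbation, with B ∝ Δ ∝ a,
# between matter-radiation equality and a nominal galaxy-formation epoch.
z_eq, z_gf = 3200, 10          # assumed redshifts (illustrative)
growth = (1 + z_eq) / (1 + z_gf)
print(f"linear amplification factor: ~{growth:.0f}")   # ~300, i.e. 2-3 orders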
To account for the tension effects at the linear level, one needs to allow for FRW backgrounds with non-Euclidean spatial geometry. 8 The system (3.4.2)-(3.4.4) has analytical solutions in the radiation and the dust eras. Before matter-radiation equality, when w = 1/3 = c_s², H = 1/2t, ρ = 3/4t² and c_a² = 3B²/4ρ = constant, large-scale magnetised density perturbations obey a power-law solution. In particular, keeping only the dominant growing and decaying modes one arrives at [116,119]

∆ = C₁ t^(−1/2+10c_a²/9) + C₂ t^(1−4c_a²/9) . (3.4.5)

In the absence of the B-field, we recover the standard growing and decaying modes of ∆ ∝ t and ∆ ∝ t^(−1/2) respectively. So, the magnetic presence has reduced the growth rate of the density contrast by 4c_a²/9. Also, since B ∝ ∆ - see Eq. (3.4.4) - the above describes the linear evolution of the magnetic energy-density perturbations as well. Well inside the horizon we can no longer ignore the role of the pressure gradients. There, the k-mode oscillates like a magneto-sonic wave, with an effective sound speed given by c_s² + c_a², where λ_k = a/k is the perturbed scale and λ_H = 1/H the Hubble horizon [116,119]. Here, the magnetic pressure increases the effective sound speed and therefore the oscillation frequency. The former makes the Jeans length larger than in non-magnetised models. The latter brings the peaks of short-wavelength oscillations in the radiation density closer, leaving a potentially observable signature in the CMB spectrum [79].

When dust dominates, w = 0 = c_s², H = 2/3t, ρ = 4/3t² and c_a² = B²/ρ ∝ t^(−2/3). Then, on superhorizon scales, the main growing and decaying modes of the density contrast are [114,119]

∆ = C₁ t^(α₁) + C₂ t^(α₂) , (3.4.7)

with α_{1,2} = −[1 ∓ 5√(1 − (32/75)(c_a λ_H/λ_k)₀²)]/6. In the absence of the B-field we recover again the standard solution with α₁ = 2/3 and α₂ = −1. Thus, as with the radiation era before, the magnetic presence slows down the growth rate of density perturbations. In addition, the field's pressure leads to a magnetically induced Jeans length, below which density perturbations cannot grow. As a fraction of the Hubble radius, this purely magnetic Jeans scale is λ_J ≃ √(32/75) c_a λ_H, the threshold at which the square root in α_{1,2} above vanishes. Setting B ∼ 10^-9 G, which is the maximum homogeneous field allowed by the CMB [74], we find that λ_J ∼ 10 Kpc. Alternatively, magnetic fields close to 10^-7 G, like those found in galaxies and galaxy clusters, give λ_J ∼ 1 Mpc. The latter lies intriguingly close to the size of a cluster of galaxies.

Overall, the magnetic effect on density perturbations is rather negative. Although B-fields generate this type of distortions, they do not help them grow. Instead, the magnetic presence either suppresses the growth rate of density perturbations, or increases the effective Jeans length and therefore the domain where these inhomogeneities cannot grow. The negative role of the B-field, which was also observed in the Newtonian treatment of [131], reflects the fact that only the pressure part of the Lorentz force has been incorporated into the equations. When the tension component (i.e. the elasticity of the field lines) is also accounted for, the overall magnetic effect can change and in some cases it could even reverse [117].

Magnetic fields also induce and affect rotational, vortex-like, density inhomogeneities. To linear order, these are described by the vector W_a = −(a²/2ρ)ε_abc D^b D^c ρ. Then, on a spatially flat FRW background, W_a evolves according to the wave-like formula (3.4.9) after matter-radiation equality [116,119]. Defining λ_a = c_a λ_H as the 'Alfvén horizon', we may write the associated solution in the form

W = C₁ t^(α₁) + C₂ t^(α₂) , (3.4.10)

where α_{1,2} = −[5 ± √(1 − (48/9)(λ_a/λ_k)₀²)]/6.
On scales well outside the Alfvén horizon, namely for λ_a ≪ λ_k, the perturbed mode decays as W ∝ t^(−2/3). This rate is considerably slower than W ∝ t^(−1), the decay rate associated with magnetic-free dust cosmologies. In other words, the B-field has reduced the standard depletion rate of the vortex mode. An analogous effect is also observed on ω_a, namely on the vorticity proper [116,119]. Hence, magnetised cosmologies rotate faster than their magnetic-free counterparts. In contrast to density perturbations, the field seems to favour the presence of vorticity. This qualitative difference should probably be attributed to the fact that the tension part of the Lorentz force also contributes to Eq. (3.4.9). In addition to scalar and vector perturbations, magnetic fields also generate and affect tensor-type inhomogeneities that describe shape-distortions in the density distribution [116]. An initially spherically symmetric inhomogeneity, for example, will change shape due to the magnetically induced anisotropy. All these are the effects of the Lorentz force. Even when the latter is removed from the system, however, the B-field remains active. Due to its energy density and anisotropic nature, for example, magnetism affects both the local and the long-range gravitational field. The anisotropic magnetic pressure, in particular, leads to shear distortions and subsequently to gravitational-wave production. Overall, magnetic fields are a very versatile source. They are also rather unique in nature, since B-fields are the only known vector source of energy. An additional unique magnetic feature, which remains relatively unexplored, is the field's tension. When we add to all these the widespread presence of magnetic fields, it makes sense to say that no realistic structure-formation scenario should a priori exclude them.

Aspects of the nonlinear regime

The evolution of large-scale magnetic fields during the nonlinear stages of structure formation has been addressed primarily by means of numerical methods. The reason is the high complexity of the nonlinear MHD equations, which makes analytical studies effectively impossible, unless certain simplifying assumptions are imposed. The simplest approximation is to assume spherically symmetric compression. Realistic collapse, however, is not isotropic. In fact, when a magnetic field is present, its generically anisotropic nature makes the need to go beyond spherical symmetry greater. Anisotropic contraction can be analytically studied within the Zeldovich approximation [132,133]. The latter is based on a simple ansatz, which extrapolates a well known linear result to the nonlinear regime. The assumption is that the irrotational and acceleration-free linear motion of the dust component also holds during the early nonlinear stages of galaxy formation. This approximation allows for the analytical treatment of the nonlinear equations, the solutions of which describe anisotropic (one-dimensional) collapse and lead to the formation of the well-known Zeldovich 'pancakes'. Suppose that a magnetic field is frozen into a highly conductive protogalactic cloud that is falling into the (Newtonian) potential wells formed by the Cold Dark Matter (CDM) sector. 9 Relative to the physical coordinate system {r_α}, the velocity of the fluid is u_α = Hr_α + v_α, where H = ȧ/a is the Hubble parameter of the unperturbed FRW background and v_α is the peculiar velocity of the fluid (with α = 1, 2, 3).
Then, the magnetic induction equation reads [134]

Ḃ_α = −2H B_α − (2/3)ϑ B_α + σ_αβ B^β , (3.4.11)

where overdots now indicate convective derivatives (i.e. ˙ = ∂_t + u^β ∂_β). Also, ϑ = ∂^α v_α and σ_αβ = ∂_⟨β v_α⟩ are the peculiar volume scalar and the (trace-free) peculiar shear tensor respectively. 10 The former takes negative values (i.e. ϑ < 0), since we are dealing with a protogalactic cloud that has started to 'turn around' and collapse. Note that the first term on the right-hand side of (3.4.11) represents the background expansion, the second is due to the peculiar contraction and the last reflects the anisotropy of the collapse. Introducing the rescaled magnetic field B̃_α = a²B_α, the above expression recasts into

B̃′_α = −(2/3)θ̃ B̃_α + σ̃_αβ B̃^β , (3.4.12)

with primes indicating differentiation with respect to the scale factor. Also ϑ = aHθ̃ and σ_αβ = aHσ̃_αβ, where θ̃ = ∂^α ṽ_α and σ̃_αβ = ∂_⟨β ṽ_α⟩ (with ṽ_α = ax′_α and v_α = aHṽ_α). Relative to the shear eigenframe, σ̃_αβ = diag(σ̃₁₁, σ̃₂₂, σ̃₃₃) and expression (3.4.12) splits into

B̃′_α = [σ̃_αα − (2/3)θ̃] B̃_α , (3.4.13), (3.4.14)

with α = 1, 2, 3 and no summation over repeated indices. This system describes the second-order evolution of a magnetic field, which is frozen in with the highly conductive matter of a collapsing protogalaxy, within the limits of the Zeldovich approximation. In order to solve the set of Eqs. (3.4.13), (3.4.14), we recall that in the absence of rotation and acceleration, the peculiar volume scalar is given by

θ̃ = Σ_β λ_β/(1 + λ_β a) . (3.4.15)

Similarly, the shear eigenvalues are

σ̃_αα = λ_α/(1 + λ_α a) − θ̃/3 , (3.4.16)

where λ₁, λ₂ and λ₃ are the eigenvalues of the initial tidal field and determine the nature of the collapse [135,136]. In particular, one-dimensional collapse along, say, the third eigen-direction is characterised by λ₁ = 0 = λ₂ and by λ₃ < 0. In that case, the pancake singularity is reached as a → −1/λ₃. Spherically symmetric collapse, on the other hand, has λ₁ = λ₂ = λ₃ = λ < 0. Then, we have a point-like singularity when a → −1/λ.

9 The Zeldovich approximation holds during the early stages of the nonlinear regime, when the effects of the fluid pressure are negligible. Assuming that the contraction is driven by non-baryonic CDM means that we can (in principle) extend the domain of the Zeldovich approximation beyond the above mentioned mildly nonlinear stage.

Substituting the above expressions into the right-hand side of Eqs. (3.4.13) and (3.4.14), we obtain the solution

B_α = B_(α)₀ (a₀/a)² [(1 + λ_α a)/(1 + λ_α a₀)] Π_β [(1 + λ_β a₀)/(1 + λ_β a)] ,

with the zero suffix corresponding to a given time during the protogalactic collapse. Note that the ratio in parentheses reflects the magnetic dilution due to the background expansion, while the terms in brackets monitor the increase in the field's strength caused by the collapse of the protogalactic cloud. According to the above solution, when dealing with pancake collapse along the third eigen-direction, the B₃-component decays as a^(−2), while the other two increase arbitrarily. Alternatively, during a spherically symmetric contraction the B-field evolves as

B_α = B_(α)₀ (a₀/a)² [(1 + λa₀)/(1 + λa)]² .

Here, all the magnetic components diverge as we approach the point singularity (i.e. for a → −1/λ). Comparing the two results, we deduce that the anisotropic (pancake) collapse leads to a stronger increase as long as λ₃ < λ. The latter is always satisfied, provided that the initial conditions are the same for both types of collapse, given that λ₃ = θ̃₀/(1 − a₀θ̃₀) and λ = θ̃₀/(3 − a₀θ̃₀) - see expression (3.4.15) above. The above qualitative analysis indicates that a magnetic field trapped in an anisotropically contracting protogalactic cloud will increase beyond the limits of the idealised spherically symmetric scenario. Note that this type of amplification mechanism appears to be the only alternative left if the galactic dynamo (see § 4.2.1 below) fails to operate.
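A quick numerical illustration of the two solutions above (all initial data assumed purely for illustration), comparing how far pancake and spherical collapse amplify the field by the time the pancake approaches its singularity:

import numpy as np

a0, theta0 = 1.0, -1.0                    # assumed initial expansion data
lam3 = theta0/(1 - a0*theta0)             # pancake tidal eigenvalue  (-0.5)
lam  = theta0/(3 - a0*theta0)             # spherical eigenvalue      (-0.25)

a = np.linspace(a0, 0.99/abs(lam3), 400)  # approach the pancake singularity
dilution = (a0/a)**2                      # background-expansion dilution

pancake = dilution * (1 + lam3*a0)/(1 + lam3*a)      # B_1,2 (B_3 just dilutes as a^-2)
sphere  = dilution * ((1 + lam*a0)/(1 + lam*a))**2   # all components, spherical case

print(f"pancake:   B/B_0 ~ {pancake[-1]:.1f}")
print(f"spherical: B/B_0 ~ {sphere[-1]:.2f}")
# With these numbers the anisotropic collapse amplifies the field well beyond
# the spherically symmetric estimate, as argued in the text.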
Quantitatively, the achieved final strength depends on when exactly the backreaction of the B-field becomes strong enough to halt the collapse [25]. Thus, the longer the anisotropic collapse lasts, the stronger the residual B-field. The analytical study of [134], in particular, showed that (realistically speaking) the anisotropy could add one or two orders of magnitude to the magnetic strengths achieved through conventional isotropic compression. These results are in very good agreement with numerical studies simulating shear and tidal effects on the magnetic evolution in galaxies and galaxy clusters [137,138,139].

Magnetogenesis in conventional FRW models

In order to operate successfully, the galactic dynamo needs magnetic seeds that satisfy two specific requirements. The first refers to the (comoving) coherence length of the initial B-field and the second is related to its strength. The scale must not drop below ∼ 10 Kpc. The strength typically varies between ∼ 10^-12 G and ∼ 10^-22 G, depending on the efficiency of the dynamo amplification. At first, these requirements may seem relatively straightforward to fulfill. Nevertheless, within classical electromagnetism and conventional FRW cosmology, magnetic seeds with the aforementioned desired properties are very difficult to produce.

The Friedmann models

Current observations, primarily the isotropy of the CMB, strongly support a universe that is homogeneous and isotropic on cosmological scales. In other words, our universe seems to be described by the simplest cosmological solution of the Einstein field equations, the FRW models. Before proceeding to discuss the magnetic evolution on FRW backgrounds, it helps to summarise some basic features of these models.

The FRW dynamics

The high symmetry of the Friedmann models means that all kinematical and dynamical variables are functions of time only, while every quantity that represents anisotropy or inhomogeneity vanishes identically. Thus, in covariant terms, an FRW model has Θ = 3H(t), σ_ab = 0 = ω_a = A_a and E_ab = 0 = H_ab, where H = ȧ/a is the familiar Hubble parameter. The isotropy of the Friedmann models also constrains their matter content, which can only have the perfect-fluid form (with ρ = ρ(t) and p = p(t)). In addition, due to the spatial homogeneity, all orthogonally projected gradients (e.g. D_aρ, D_ap, etc - see § 3) are by definition zero. This means that the only nontrivial equations left are the FRW version of Raychaudhuri's formula, the equation of continuity and the Friedmann equation. These are given by [121]

Ḣ = −H² − (κ/6)(ρ + 3p) + Λ/3 , (4.1.1a)

ρ̇ = −3H(ρ + p) (4.1.1b)

and

H² = (κ/3)ρ − K/a² + Λ/3 , (4.1.2)

respectively. Note that K = 0, ±1 is the 3-curvature index. The latter is associated to the Ricci scalar (R) of the spatial sections by means of the relation R = 6K/a² [121]. In FRW spacetimes with non-Euclidean spatial geometry, the scale factor also defines the curvature scale (λ_K = a) of the model. This marks the threshold at which the effects of spatial curvature start becoming important (e.g. see [140]). Lengths smaller than the curvature scale are termed subcurvature, while those exceeding λ_K are referred to as supercurvature. The former are essentially immune to the effects of spatial geometry, which instead dominate on supercurvature lengths. The relation between the curvature scale and the Hubble radius is determined by Eq. (4.1.2). In the absence of a cosmological constant, the latter reads

λ_K = λ_H/√|1 − Ω_ρ| , (4.1.3)

where λ_H = H^-1 and Ω_ρ = κρ/3H² are the Hubble radius and the density parameter respectively. Hence, hyperbolic 3-geometry (i.e.
K = −1) ensures that λ_K > λ_H always, with λ_K → ∞ as Ω_ρ → 1 and λ_K → λ_H for Ω_ρ → 0. In practice, this means that supercurvature scales in spatially open FRW cosmologies are always outside the Hubble radius. 11 This is not the case in closed models, where λ_K > λ_H when Ω_ρ < 2 and λ_K ≤ λ_H if Ω_ρ ≥ 2. Finally, we note that, since the curvature scale simply redshifts with the expansion, the importance of spatial geometry within a comoving region does not change with time.

Scale-factor evolution in FRW models

In order to close the system (4.1.1), one needs to introduce an equation of state for the matter. Here, we will only consider barotropic perfect fluids, mainly in the form of nonrelativistic 'dust' or isotropic radiation (with p = 0 and p = ρ/3 respectively). When w = p/ρ is the (constant) barotropic index of the medium, the continuity equation (see (4.1.1b)) gives ρ ∝ a^(−3(1+w)). Then, setting K = 0 = Λ and normalising so that a(t = 0) = 0, we obtain

a ∝ t^(2/3(1+w)) . (4.1.4)

For non-relativistic matter with w = 0 (e.g. baryonic dust or non-baryonic cold dark matter), we have the Einstein-de Sitter universe with a ∝ t^(2/3). Alternatively, a ∝ t^(1/2) in the case of relativistic species (e.g. isotropic radiation) and a ∝ t^(1/3) for a stiff medium with w = 1. When w = −1/3, which corresponds to matter with zero gravitational mass, the above leads to 'coasting' expansion with a ∝ t. Solution (4.1.4) does not apply to the w = −1 case. There, both ρ and H are constant to ensure de Sitter-type inflation with a ∝ e^(H₀(t−t₀)). When the FRW spacetime has non-Euclidean spatial geometry it helps to use conformal rather than proper time, in terms of which Eqs. (4.1.1) and (4.1.2) can be solved analytically for K = +1, Λ = 0 and w = −1/3.

Late vs early-time magnetogenesis

The various mechanisms of magnetogenesis have been traditionally classified into those operating at late times, that is after recombination, and the ones that advocate an early (pre-recombination) origin for the B-field. In either case, the aim of the proposed scenarios is to produce the initial magnetic fields that will successfully seed the galactic dynamo.

The galactic dynamo paradigm

The belief that some kind of nonlinear dynamo action is responsible for amplifying and sustaining the galactic magnetic fields has long roots in the astrophysical community [23,24,25]. Dynamos provide the means of converting kinetic energy into magnetic energy and the reader is referred to [29] for a recent extended review. Nevertheless, one can get a quick insight into how the mechanism works in principle by looking at the magnetic induction equation. In the Newtonian limit and assuming resistive MHD, the latter takes the form of Eq. (4.2.1), where overdots indicate convective derivatives (see also § 3.4.2) and α, β = 1, 2, 3. Contracting (4.2.1) along the magnetic field vector, and recalling that curlB_α = J_α, leads to the magnetic-energy evolution formula (4.2.2), with F_α = ε_αβµ B^β J^µ representing the magnetic Lorentz force. The latter contributes to the kinetic energy of the fluid via the Navier-Stokes equation (see expression (3.1.11) in § 3.1.5). Following (4.2.2), the action of the Lorentz force can in principle enhance the magnetic energy at the expense of the fluid's kinetic energy. The amplification can happen provided that the dissipative effects, carried by the last term on the right-hand side of Eq. (4.2.2), are subdominant. The first term, on the other hand, gives the magnetic increase caused by the adiabatic galactic collapse (typically ϑ < 0 in gravitationally bound systems), while the second conveys the effects of the shearing stresses (see § 3.4.2).
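To see what this energy balance implies in practice, here is a back-of-the-envelope sketch (all values are our own assumptions; the seed and saturation strengths anticipate the figures quoted below):

import numpy as np

B_seed, B_sat = 1e-20, 1e-6   # G, assumed seed and saturation strengths
t_rot = 2.5e8                 # yr, assumed galactic rotation period
rate = 1.0                    # assumed e-folds of amplification per rotation

efolds = np.log(B_sat / B_seed)           # ~32 e-folds of growth needed
t_needed = efolds * t_rot / rate / 1e9    # in Gyr
print(f"{efolds:.0f} e-folds, ~{t_needed:.0f} Gyr of dynamo action")

At one e-fold per rotation this takes roughly 8 Gyr: comfortable for the Milky Way, but tight for the µG-strength fields reported at high redshift, which is one of the questions raised below.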
Dynamos are typically powered by the differential rotation of the galaxy. The latter combines with the small-scale turbulent motion of the ionised gas, causing the exponential increase of the large-scale mean B-field in the plane of the galactic disc. The growth continues until it reaches saturation, which typically occurs when B ∼ 10^-6 G and the backreaction of the magnetic stresses suppresses any further increase. The amplification factor, however, is quite sensitive to the specific parameters of the dynamo model. This sensitivity leads to serious uncertainties regarding the total amount of magnetic amplification and has been the subject of ongoing discussions. The pattern and the orientation of the galactic magnetic fields, especially of those seen in spiral galaxies, seem to support the dynamo idea. On the other hand, the detection of strong magnetic fields in high-redshift protogalactic structures has raised a number of questions regarding the role of dynamos [8,9]. In any case, the galactic dynamo needs the presence of an initial magnetic seed in order to operate. These seeds must satisfy certain requirements regarding their coherence length and strength. The minimum required scale for the magnetic seed is comparable to the size of the smallest turbulent eddy. This lies close to 100 pc at the time the host galaxy is formed, which translates to a comoving length of approximately 10 Kpc before the collapse of the protogalactic cloud. The strength of the seed-field, on the other hand, varies, depending on the efficiency of the dynamo amplification and on the cosmological model it operates in [26,27,28,29]. Typically, the required values range between 10^-12 G and 10^-22 G. It is conceivable, however, that the lower limit could be brought down to 10^-30 G in spatially open or dark-energy dominated FRW universes [30]. Note that, in the absence of the dynamo, protogalactic collapse (spherically symmetric or anisotropic - see § 3.4.2) seems the only alternative means of magnetic amplification. Then, B-seeds as strong as 10^-9 G may be needed in order to meet the observations. So, provided that galactic dynamos work, the question is where do the initial magnetic seeds come from?

Late-time magnetogenesis

Post-recombination mechanisms of magnetic generation appeal to astrophysical processes and battery-type effects. It has been proposed, in particular, that the Biermann-battery mechanism can produce seed B-fields, which the dynamo could subsequently amplify on galactic scales and to the observed strengths. The Biermann effect [143], which was originally discussed in the stellar context, exploits differences between the electron and the ion acceleration that are triggered by pressure gradients. These will first give rise to electric currents and subsequently lead to magnetic fields by induction. The literature contains several alternative scenarios using battery-type mechanisms to generate magnetic seed-fields in the post-recombination era. Supernovae explosions of the first stars, for example, could eject into the interstellar medium B-fields that could seed the galactic dynamo [144,145]. Active galaxies and Active Galactic Nuclei (AGN) can also channel away jets of magnetised plasma [146,147]. Thermal-battery processes operating in (re)ionisation fronts can also lead to magnetic seeds that can sustain the dynamo [148,149]. Analogous results could be achieved through turbulent motions or shocks developed in collapsing protogalactic clouds [150,151].
Nevertheless, while Biermann battery effects can produce the seed fields that the dynamo will subsequently amplify to the observed strengths, the whole process operates on galactic scales. For this reason, it is less straightforward to invoke the Biermann mechanism when trying to explain the magnetic fields found in galaxy clusters. To a certain extent, this also weakens the overall position of the Biermann battery as a likely candidate for generating the galactic magnetic fields. Indeed, the possibility that the galactic and the cluster B-fields have a different origin seems rather unlikely, in view of their similarities.

Early-time magnetogenesis

The idea that cosmic magnetism might have a pre-recombination origin is attractive because it makes the widespread presence of magnetic fields in the universe easier to explain, especially the origin of the fields observed in high-redshift proto-galactic condensations. However, generating cosmological B-fields that will also successfully seed the galactic dynamo is not a problem-free exercise. In the early 1970s, Harrison proposed that battery-type effects, operating during the radiation era, could generate B-fields with strengths capable of sustaining the galactic dynamo [152]. The mechanism is based on conventional physics and does not need any new postulates. The disadvantage of Harrison's idea is that it requires significant amounts of primordial vorticity, which is essentially absent from the standard cosmological model. Note that the possibility of simultaneously generating both vorticity and magnetic fields in the late radiation era and around recombination (when the tight coupling between photons and baryons is relaxed) was recently investigated in [153,154]. An alternative approach is to generate the magnetic seeds during phase transitions early in the radiation era (see § 5 below). There are problems, however, primarily related to the coherence length of the initial B-field. The difficulties arise because the size of the post-inflationary magnetic seeds, namely those created between inflation and (roughly) recombination, is typically too small and will destabilise the dynamo. The reason is causality, which confines the scale of the field within that of the horizon at the time of magnetogenesis. For example, B-fields produced during the electroweak phase transition have coherence lengths of the order of the astronomical unit. The size of the magnetic field can increase if the host plasma has some degree of MHD turbulence. In such environments 'cascade' processes are known to occur, whereby certain ideally conserved quantities flow from larger towards smaller scales (direct cascade) or the other way around (inverse cascade). In three-dimensional MHD turbulence, the total (kinetic plus magnetic) energy cascades toward smaller scales, where it is dissipated by viscosity and resistivity. However, the other important ideal invariant, the magnetic helicity, inverse-cascades towards larger scales. The magnetic helicity is defined by the integral [155]

H_M = ∫_V A_a B^a dV ,

where A_a is the electromagnetic vector potential (recall that B_a = curlA_a), and is equivalent to the Chern-Simons number of particle physics [32]. Besides being an ideal invariant, the magnetic helicity is also asymptotically conserved within the resistive MHD approximation. Physically, H_M describes the topology of the field lines, that is their degree of writhing and twisting [156].
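As a concrete (and entirely our own) illustration of maximal helicity, the classic ABC (Arnold-Beltrami-Childress) field satisfies curlB = B, so its vector potential can be chosen equal to the field itself and the helicity density becomes A_aB^a = B². The property is easy to verify symbolically:

import sympy as sp

x, y, z = sp.symbols('x y z')
A_, B_, C_ = sp.symbols('A B C')    # assumed ABC amplitudes

Bfield = [A_*sp.sin(z) + C_*sp.cos(y),
          B_*sp.sin(x) + A_*sp.cos(z),
          C_*sp.sin(y) + B_*sp.cos(x)]

curlB = [sp.diff(Bfield[2], y) - sp.diff(Bfield[1], z),
         sp.diff(Bfield[0], z) - sp.diff(Bfield[2], x),
         sp.diff(Bfield[1], x) - sp.diff(Bfield[0], y)]

assert all(sp.simplify(c - b) == 0 for c, b in zip(curlB, Bfield))
print("curl B = B: the ABC field is force-free and maximally helical")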
As mentioned before, magnetic helicity inverse-cascades and evolves from smaller to larger scales [155], while its conservation has profound effects in the operation of MHD dynamos [157]. The aforementioned inverse-cascade effect makes primordial helicity very important, because it allows the magnetic energy to shift from smaller to larger scales, as the system tries to minimize its energy while conserving magnetic helicity. For example, Pouquet et al carried out a study in which nonhelical kinetic energy and maximally helical magnetic energy were injected into the plasma at a constant rate [158]. The outcome was a well defined wave of magnetic energy and helicity, propagating from smaller to larger scales. Similar results were also obtained in the case of steady turbulence [31,32,33,34,159,160,161] and for freely decaying MHD turbulence [162]. Although helical magnetic fields can enhance their original length, inflation seems to be the only effective solution to the scale problem faced by fields generated during the early radiation era. The reason is that inflation can naturally generate correlations on superhorizon lengths. There are still problems, however, this time with the magnetic strength. In particular, B-fields that have survived a period of standard de Sitter inflation are typically too weak to sustain the galactic dynamo. Typical inflation-produced magnetic fields Inflation is known to produce long wavelength effects from microphysical processes that operate well inside the Hubble radius. For this reason, inflation has long been seen as the best candidate for producing large-scale, cosmological magnetic fields. Here, we will look at scenarios operating within standard electromagnetic theory and conventional FRW models. Alternative approaches are given in § 6. Quantum-mechanically produced magnetic seeds The inflationary paradigm provides the dynamical means of producing long-wavelength electromagnetic fluctuations, by stretching subhorizon-sized quantum mechanical fluctuations to superhorizon scales. Roughly speaking, quantum fluctuations in the Maxwell field are excited inside the horizon and cross the Hubble horizon approximately N(λ) e-folds before the end of the de Sitter phase, where [163] N(λ) ≃ 45 + ln λ + (2/3) ln(M/10^{14}) + (1/3) ln(T_RH/10^{10}) . (4.3.1) In the above λ is the comoving scale of the mode (measured in Mpc and normalised to coincide with the mode's current physical length), M is the scale of inflation and T_RH is the reheat temperature (both measured in GeV). Assuming that ρ is the energy density of the electromagnetic mode, then ρ ≃ H^4 at the first horizon crossing. Once outside the Hubble radius, the aforementioned quantum-mechanically excited modes are expected to freeze out as classical electromagnetic waves. The latter, which initially appear like static electric and magnetic fields, can subsequently lead to current-supported magnetic fields. This happens after the modes have re-entered the horizon in the radiation era, or later during the dust-dominated epoch. Note that, after the second horizon crossing, the currents of the highly conductive plasma will also eliminate the electric component of the Maxwell field, leaving the universe permeated by a large-scale B-field of primordial origin. The fast expansion of the de Sitter phase means that, by the end of inflation, the initial electromagnetic quantum fluctuations have achieved correlation lengths much larger than the current size of the observable universe. Thus, inflation-produced B-fields have no scale problem whatsoever.
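As a quick illustration of Eq. (4.3.1), the sketch below evaluates the number of e-folds before the end of inflation at which various comoving scales exit the Hubble radius. The fiducial values of M and T_RH are assumptions, and the formula itself is the standard Turner-Widrow-type estimate quoted above.

```python
import numpy as np

# E-folds before the end of inflation at which a comoving scale
# lambda (in Mpc today) crosses outside the Hubble radius, following
# Eq. (4.3.1) above; M and T_RH in GeV (assumed fiducial values).
def efolds_before_end(lam_Mpc, M=1e14, T_RH=1e10):
    return 45.0 + np.log(lam_Mpc) + (2.0/3.0)*np.log(M/1e14) \
           + (1.0/3.0)*np.log(T_RH/1e10)

for lam in [1e-2, 1.0, 3e3]:   # 10 kpc, 1 Mpc, roughly the present horizon
    print(f"lambda = {lam:8.0e} Mpc  ->  N = {efolds_before_end(lam):5.1f}")
# Roughly 40, 45 and 53 e-folds respectively: even the largest
# observable scales exit the horizon only some 50 e-folds before the
# end of inflation, so their correlations are easily stretched far
# beyond the present Hubble radius.
```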
Nevertheless, magnetic seeds that have survived a period of de Sitter expansion are generally too weak to sustain the dynamo. In particular, the typical strength of the residual B-field (in today's values) is less than 10 −50 G [35]. To understand why and how this happens, we first need to consider the linear magnetic evolution on FRW backgrounds. The adiabatic magnetic decay The evolution of large-scale electromagnetic fields on FRW backgrounds depends on the electric properties of the medium that fills the universe. Here, we will consider the two limiting cases of poorly and highly conductive matter. For any intermediate case, one needs a model for the electrical conductivity of the cosmic medium. In poorly conductive environments, ς → 0 and the electric currents vanish despite the presence of nonzero electric fields (see Ohm's law (3.3.1) in § 3.3). Then, the wave equation (3.2.7) linearises toB 3.3) where H =ȧ/a is the Hubble parameter of the unperturbed model. To simplify the above we introduce the rescaled magnetic field B a = a 2 B a and employ conformal, rather than proper, time [35]. Then, on introducing the harmonic splitting B a = n B (n) Q (n) a -with D a B (n) = 0, expression (4.3.3) takes the compact form 12 with the primes denoting conformal-time derivatives and K = 0, ±1 [118]. The above describes the linear evolution of the rescaled magnetic field on a Friedmannian background with any type of spatial curvature. Note the magneto-curvature term on the right-hand side of (4.3.4), which results form the direct coupling between the B-field and the geometry of the 3-space. The interaction is monitored by the Ricci identities and reflects the fact that we are dealing with an energy source of vector nature within a geometrical theory of gravity. We will discuss the implications of this interaction, which is largely bypassed in the literature, for the evolution of large-scale magnetic fields in § 4.4. When the FRW host has Euclidean spatial hypersurfaces, the 3-curvature index is zero (i.e. K = 0) and expression (4.3.4) assumes the Minkowski-like form The latter accepts the oscillatory solution with the integration constants depending on the initial conditions. Then, recalling that B (n) = a 2 B (n) , the above given solution translates into This guarantees an adiabatic (B (n) ∝ a −2 ) depletion for the magnetic field, irrespective of the equation of state of the matter, as long as the background spacetime is a spatially flat FRW model and the electrical conductivity remains very poor. The adiabatic magnetic decay is also guaranteed in highly conductive environments, namely at the ideal-MHD limit. There, ς → ∞ and, according to Ohm's law -see Eq. (3.3.1) in § 3.3, the electric field vanishes in the frame of the fluid. As a result, when linearised around a FRW 12 We use pure-vector harmonics that satisfy the constrainsQ (n) a and the associated Laplace-Beltrami equation, namely D 2 Q (n) a . Following [164], the (comoving) eigenvalues depend on the background spatial curvature according to n 2 = ν 2 + 2K, where ν represents the associated wavenumber. Also, n has a continuous spectrum, with n 2 ≥ 0, when K = 0, −1 and a discrete one, with The latter guarantees that B a ∝ a −2 on all scales, regardless of the equation of state of the matter and of the background 3-curvature. The universe is believed to have been a very good electrical conductor throughout its classical Big-Bang evolution, at least on subhorizon scales. 
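Schematically, writing 𝓑_(n) = a²B_(n) for the rescaled harmonics, the flat-space (K = 0) relations referred to above take the form 𝓑''_(n) + n²𝓑_(n) = 0, with the oscillatory solution 𝓑_(n) = C₁ cos(nη) + C₂ sin(nη) and therefore B_(n) = a^{-2}[C₁ cos(nη) + C₂ sin(nη)]; this is a sketch consistent with the limits quoted above rather than a quotation of the original equations, but it reproduces the adiabatic a^{-2} dilution. The ideal-MHD (highly conductive) limit leads to the same a^{-2} law, as also noted above.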
During inflation, on the other hand, the conductivity is expected to be very low. However, as the universe leaves the inflationary phase and starts reheating, its conductivity grows. So, by the time we have entered the radiation era, the currents have eliminated the electric fields and frozen their magnetic counterparts in with the matter. 13 These arguments essentially guarantee that the set (4.3.5) and (4.3.8) monitors the evolution of cosmological magnetic fields throughout the universe's lifetime. This in turn has led to the widespread belief that the adiabatic decay-rate of cosmological magnetic fields is ensured at all times, unless classical electromagnetism is modified or the FRW symmetries are abandoned. As we will see in § 4.4, however, this is not necessarily the case. The residual magnetic field The immediate implication of (4.3.7) is that magnetic fields that survived a period of typical de Sitter-type inflation have been drastically diluted by the accelerated expansion of the universe. Together with (4.3.8), this means that B-fields of primordial origin are too weak to be of astrophysical relevance today. To demonstrate the dramatic magnetic depletion during the de Sitter phase we follow [35]. As a first step, recall that the relative energy density stored in the n-th magnetic mode at the (first) horizon crossing is (ρ B /ρ) HC ≃ (H/M P l ) 2 . Here, ρ B = B 2 (n) , ρ is the energy density of the background universe and M P l is the Planck mass. During inflation the total energy density is dominated by that of the vacuum (i.e. ρ ≃ M 4 , with M representing the energy scale of the adopted inflationary scenario). Consequently, the relative strength of the n-th magnetic mode at horizon crossing is (ρ B /ρ) HC ≃ (M/M P l ) 4 . Also, throughout inflation the universe is believed to be a very poor electrical conductor. This means that any magnetic field that may be present at the time decays adiabatically (see solution (4.3.7) in § 4.3.2). As a result, B 2 (n) = (B 2 (n) ) HC e −4N by the end of inflation, with N = ln(a IN /a HC ) representing the number of e-folds between horizon crossing and the end of the de Sitter era. This number depends on the scale of the mode in question and, in typical inflationary scenarios, is given by Eq. (4.3.1). Using the latter and recalling that (ρ B /ρ) RH = (ρ B /ρ) IN (T RH /M) 4/3 is the relative change of the magnetic energy density between the end of inflation proper and that of reheating, we find that [35] at the onset of the radiation era. Note that ρ γ ≃ ρ RH ≃ T 4 RH represents the energy density of the relativistic species and λ is the current (comoving) scale of the B-field. Also, the above ratio is independent of the energy scale of the adopted inflationary scenario and of the associated reheat temperature. Moreover, given that ρ B , ρ γ ∝ a −4 throughout the subsequent evolution of the universe, the same ratio remains unchanged until the time of galaxy formation. Once the scale of the magnetic mode is given, we can use (4.3.9) to evaluate the residual strength of any primordial B-field that underwent an era of (typical) de Sitter inflation. For example, in order to operate successfully, the galactic dynamo requires magnetic seeds with a minimum coherence scale of approximately 10 Kpc. Substituting this scale into Eq. (4.3.9) and recalling that ρ γ ≃ 10 −51 GeV today, we find that the corresponding magnetic field has strength of ∼ 10 −53 G [35]. 
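For orientation, the ratio left implicit in Eq. (4.3.9) is, in Turner-Widrow-type estimates, of order r ≈ 10^{-104}(λ/Mpc)^{-4}; the sketch below treats that normalisation as an assumption and simply converts such a ratio into a present-day field strength using today's photon energy density.

```python
import numpy as np

# Convert a magnetic-to-photon energy density ratio r into a field
# strength in Gauss: 1 G ~ 1.95e-20 GeV^2 in natural units, and
# rho_gamma ~ 1e-51 GeV^4 today (value quoted in the text).
GAUSS_IN_GEV2 = 1.95e-20
RHO_GAMMA_GEV4 = 1e-51

def B_today_from_ratio(r):
    rho_B = r * RHO_GAMMA_GEV4                     # GeV^4
    return np.sqrt(2.0 * rho_B) / GAUSS_IN_GEV2    # Gauss, with rho_B = B^2/2

lam_Mpc = 1e-2                                     # 10 kpc comoving
r = 1e-104 * lam_Mpc**-4                           # assumed normalisation
print(f"r ~ {r:.1e}  ->  B_0 ~ {B_today_from_ratio(r):.1e} G")
# A few times 1e-54 G: the ~1e-53 G order of magnitude quoted above,
# with the exact prefactor depending on conventions.
```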
This value is well below the galactic dynamo requirements, which leads to the conclusion that magnetic fields that have survived a period of standard, de Sitter-type inflationary expansion are (for all practical purposes) astrophysically irrelevant. Magnetic amplification in conventional FRW models The "negative" results of the previous section have been widely attributed to the conformal invariance of Maxwell's equations and to the conformal flatness of the Friedmannian spacetimes. The two have been thought to guarantee an adiabatic decay-rate for all large-scale magnetic fields at all times. This, in turn, has led to the widespread perception that inflation-produced B-fields are astrophysically unimportant, unless standard electromagnetism is modified or the FRW symmetries are abandoned. Superadiabatic amplification Strictly speaking, the adiabatic magnetic depletion seen in solution (4.3.7) of § 4.3 has only been proved on Friedmannian backgrounds with Euclidean spatial sections. Although it is true that all three FRW universes are conformally flat, they are not the same. There are differences in their 3-curvature, which mean that only the spatially flat model is globally conformal to Minkowski space. For the rest, the conformal mappings are local [165,166]. Another way of putting it is that, when dealing with spatially curved Friedmann universes, the conformal factor is no longer the cosmological scale factor but has an additional spatial dependence. All these imply that the wave equation of the rescaled magnetic field (𝓑_a = a²B_a) takes the simple Minkowski-like form (4.3.5), which guarantees an adiabatic decay for the actual B-field, only on FRW backgrounds with zero 3-curvature. In any other case, there is an additional curvature-related term (see expressions (4.3.3) and (4.3.4) in § 4.3), reflecting the non-Euclidean spatial geometry of the host spacetime. As a result, when linearised around an FRW background with nonzero spatial curvature, the magnetic wave equation reads [118] 𝓑''_(n) + (n² ± 2)𝓑_(n) = 0 , (4.4.1) with the plus and the minus signs indicating the spatially closed and the spatially open model respectively. Recall that in the former case the eigenvalue is discrete (with n² ≥ 2), while in the latter it is continuous (with n² ≥ 0). In either case, the curvature-related effects fade away as we move down to successively smaller scales (i.e. for n² ≫ 2). According to (4.4.1), on FRW backgrounds with spherical spatial hypersurfaces, the B-field still decays adiabatically. The curvature term only modifies the frequency of the magnetic oscillation, in accord with the solution B_(n) = a^{-2}[C₁ cos(√(n²+2) η) + C₂ sin(√(n²+2) η)] [167]. Overall, the adiabatic decay-rate of the B-field remains. Also, as expected, the smaller the scale the less important the role of the background 3-geometry. The standard picture, and the adiabatic-decay law, change when the background FRW model has open spatial sections. In particular, the hyperbolic geometry of the 3-D hypersurfaces alters the nature of the magnetic wave equation on large enough scales (i.e. when 0 < n² < 2). These wavelengths include what we may regard as the largest subcurvature modes (i.e. those with 1 ≤ n² < 2) and the supercurvature lengths (having 0 < n² < 1). Recall that eigenvalues with n² = 1 correspond to the curvature scale, with physical wavelength equal to the curvature radius of the model. Following [167,168,169], we introduce the scale-parameter k² = 2 − n², with 0 < k² < 2.
Then, k 2 = 1 indicates the curvature scale, the range 0 < k 2 < 1 corresponds to the largest subcurvature modes and their supercurvature counterparts are contained within the 1 < k 2 < 2 interval. In the new notation and with K = −1, Eq. (4.4.1) reads while its solution leads to Written with respect to the actual magnetic field, the above takes the form Magnetic fields that obey the above evolution laws can experience superadiabatic amplification without modifying conventional electromagnetism and despite the conformal flatness of the FRW host. 14 For instance, throughout the radiation and the dust eras (as well as during reheating), the scale factor of a FRW universe with hyperbolic spatial geometry evolves as a ∝ sinh(η) and a ∝ sinh 2 (η/2) respectively (see solution (4.1.6) in § 4.1.2). Focusing on the curvature scale, for simplicity, we may set |k| = 1 in (4.4.4). It is then clear that, on that scale, the magnetic mode never decays faster than B (1) ∝ a −1 [168]. In other words, large-scale Bfields are superadiabatically amplified throughout the post-inflationary evolution of an open Friedmann universe. Although in the above examples we only considered the cases of radiation and dust, the amplification effect is essentially independent of the type of matter that fills the universe. In particular, B-fields in open FRW models containing a barotropic medium with p/ρ = −1/3 are superadiabatically amplified on large scales [169]. 15 This means that the mechanism also operates during reheating (when p = 0) and also throughout a phase of slow-roll inflation, namely in spatially open FRW models with a false-vacuum equation of state (i.e. when p = −ρ). In the latter case, the background scale factor evolves as [121] where η, η 0 < 0. Substituting the above into Eq. (4.4.5), we find that near the curvature scale (i.e. for |k| → 1) the magnetic evolution is given by with C 3 , C 4 depending on the initial conditions [167,168]. This result also implies a superadiabatic type of amplification for the B-field, since the dominant magnetic mode never decays faster than B (1) ∝ a −1 . The adiabatic decay rate is only recovered at the end of inflation, as η → 0 − . It should be noted that the magnetogeometrical interaction triggering the above described effect is possible because, when applied to spatially curved FRW models, inflation does not lead to a globally flat de Sitter space. Although the inflationary expansion dramatically increases the curvature radius of the universe, it does not change its spatial geometry. Unless the universe was perfectly flat from the beginning, there is always a scale where the 3-curvature effects are important. It is near and beyond these lengths that primordial B-fields are superadiabatically amplified. 15 Mathematically, the easiest way of demonstrating the amplification effect is by adopting the Milne universe as our background spacetime. The latter corresponds to an empty spacetime with hyperbolic spatial geometry (see § 4.1.2) and can be used to describe a low density open universe. There, the scale factor evolves as a ∝ e η , which substituted into solution (4.4.5) leads to [168] Consequently, all magnetic modes spanning scales with 0 < k 2 < 2 are superadiabatically amplified. Close to the curvature scale, that is for k 2 → 1, the dominant magnetic mode is B (1) ∝ a −1 ; a rate considerably slower than the adiabatic a −2 -law. The latter is only restored at the k 2 = 0 limit, namely on small enough scales where the curvature effects are no longer important. 
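In the open (K = −1) case, writing 𝓑_(n) = a²B_(n) and k² = 2 − n² as above, the evolution just described takes, schematically, the form 𝓑''_(n) − k²𝓑_(n) = 0, so that 𝓑_(n) = C₁ e^{|k|η} + C₂ e^{−|k|η} and hence B_(n) = a^{-2}(C₁ e^{|k|η} + C₂ e^{−|k|η}); this is a sketch consistent with the limits quoted in the text rather than the original expressions. At the curvature scale (|k| = 1) and with a ∝ e^{η}, the dominant mode then decays only as a^{-1}, which is the superadiabatic rate described above.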
Stronger amplification is achieved on supercurvature lengths, with B (k) ∝ a √ 2−2 at the homogeneous limit (i.e. as k 2 → 2). The magnitude of the residual magnetic field is calculated in a way analogous to that given in § 4.3.3. Now, however, there is an additional parameter due to the non-Euclidean geometry of the FRW background. In particular, near the curvature scale -where B ∝ a −1 , we find that 4.9) at the beginning of the radiation era [169]. Note that, in deriving the above, we also used the auxiliary relation (ρ B /ρ) RH = (ρ B /ρ) IN (M/T RH ) 4/3 . The latter provides the relative change in the energy density of the (superadiabatically amplified) magnetic field between the end of inflation and that of reheating. Comparing (4.4.9) to expression (4.3.9), one can see that the (superadiabatic) magnetic amplification is already substantial by the end of reheating, despite its dependence on the energy scale of the inflationary model and of the corresponding reheat temperature. Moreover, large-scale B-fields are superadiabatically amplified during the subsequent evolution of the universe. This means that on scales close to the curvature radius of our background model, the ratio r = ρ B /ρ γ is no longer constant but increases as r ∝ a 2 ∝ T −2 . Consequently, recalling that λ K = λ H / √ 1 − Ω is the curvature scale of a spatially open FRW cosmology, we obtain for the present strength of the residual B-field [169]. Therefore, the higher the energy scale of inflation the stronger the superadiabatic amplification. On the other hand, the closer the density parameter to unity, the weaker the final field. Currently, the WMAP reports indicate that 1 − Ω 0 10 −2 [172,173,174]. On these grounds, and provided that the universe is spatially open, expression (4.4.10) gives 4.11) when M ∼ 10 14 GeV and 1 − Ω 0 ∼ 10 −2 [169]. The last parameter choice implies a curvature radius of the order of 10 4 Mpc at present. These lengths are far larger than 10 Kpc, which is the minimum magnetic size required for the dynamo to work. Nevertheless, once the galaxy formation starts, the field lines should break up and reconnect on scales similar to that of the collapsing protogalaxy. According to (4.4.10), the above quoted magnetic strength will increase if the energy scale of inflation is greater than 10 14 GeV. On the other hand, the magnitude of the residual B-field will drop if the current curvature scale of the universe is much larger than the Hubble horizon (i.e. for 1 − Ω 0 ≪ 1). Nevertheless, the Ω-dependence in Eq. (4.4.10) is relatively weak, which means that B-fields capable of seeding the galactic dynamo (i.e. with B 0 10 −22 G) are possible even when 1 − Ω 0 ∼ 10 −18 (or lower -when the scale of inflation is higher than 10 14 GeV). Overall, FRW universes with hyperbolic spatial geometry seem capable of sustaining astrophysically important magnetic fields under a fairly broad range of initial conditions. 16 Large-scale, primordial magnetic fields with residual magnitudes like those quoted above are far stronger than any other of their conventionally produced counterparts. Such strengths are usually achieved outside classical electromagnetism or beyond the standard model (see § 5 and § 6 below). 17 Moreover, although magnetic seeds with strengths 10 −10 G cannot affect primordial nucleosynthesis or the CMB spectrum, their strength lies within the galactic dynamo requirements (see § 4.2.1) and are therefore of astrophysical interest. 
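The r ∝ a² ∝ T^{-2} growth quoted above accumulates into a very large boost between the end of reheating and today; a rough sketch, with the reheat temperature as an assumed fiducial value:

```python
# Cumulative growth of r = rho_B / rho_gamma for a curvature-scale mode
# in an open FRW model, using the r ∝ T^-2 behaviour quoted above.
# T_RH is an assumed fiducial reheat temperature.
T_RH_GeV = 1e10
T_today_GeV = 2.35e-13          # ~2.7 K expressed in GeV

boost = (T_RH_GeV / T_today_GeV) ** 2
print(f"r grows by ~{boost:.1e} between reheating and today")
# A boost of order 1e45, which is why open-FRW (superadiabatically
# amplified) fields can end up tens of orders of magnitude above the
# ~1e-53 G adiabatic residual of section 4.3.
```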
Finally, we should also note very recent reports indicating the presence of coherent magnetic fields in empty intergalactic space with strengths intriguingly close to those quoted here [12,13,14,15]. In fact, the density parameter can lie even closer to unity than the values quoted above and still produce magnetic fields able to sustain the galactic dynamo (i.e. with B_0 ≳ 10^{-30} G). We also note that to these magnitudes one should add the magnetic amplification during the linear and the nonlinear regime of structure formation - see § 3.4.1 and § 3.4.2 respectively. Primordial magnetic fields can also be superadiabatically, or even resonantly, amplified through their interaction with cosmological gravitational waves (see [175] and references therein). The former type of amplification is typically associated with highly conductive environments, but requires rather large amounts of shear anisotropy to operate efficiently. Resonant amplification, on the other hand, occurs in media of poor electrical conductivity and can lead to substantially strong B-fields with relatively minimal shear anisotropy. Both mechanisms are essentially nonlinear in nature and their detailed discussion lies beyond the limits of this review. Magnetogenesis in the standard model In this section we review mechanisms of primordial magnetic field generation in the framework of the standard model of particle physics and of non-linear and out-of-equilibrium processes that may have happened in the very early universe. In the first subsection we address magnetogenesis during reheating and in the second subsection we review magnetogenesis due to the EW (electroweak) and QCD (quantum chromodynamical) phase transitions. Although the EW phase transition is likely to have taken place during reheating, due to its importance and to the fact that it, like the QCD transition, is framed within particle physics, we treat the two together in a specific subsection. Magnetogenesis during reheating Accepting the inflationary paradigm, the reheating phase of the Universe was one of the richest epochs in its evolution. It is usually treated as the intermediate phase between the exponential expansion and the radiation dominated expansion, during which almost all the matter that constitutes the Universe was produced. This period can be divided roughly into two or three stages: preheating, heating and thermalisation. Of these the most interesting ones are the first and the third. During the first stage, the dominant effect is parametric particle creation. The importance of this process for reheating was first realized in 1990 by Traschen and Brandenberger [176] and also by Dolgov and Kirilova [177], and later developed in Refs. [178,179,180,181]. The thermalization process is a difficult and complex one. The interested reader is referred to specific works, such as, e.g., [182], with references therein. In a few words, the process of reheating the Universe is due to the profuse creation of particles, caused by inflaton oscillations around the minimum of the effective potential. Those particles self-interact and ultimately reach a state of thermal equilibrium, when all (or almost all) the inflaton energy has been transformed into thermal energy of the created elementary particles, with temperature T_RH, the so-called reheating temperature. Being a strongly out-of-equilibrium process (and also turbulent, according to theoretical and numerical studies [183,184,185,186,187,188]), the reheating period is therefore a suitable scenario for primordial magnetogenesis.
It is important to observe that, irrespective of the mechanism that may have generated the magnetic field during inflation, the quantity r = ρ B /ρ γ can be diluted during reheating because, during that phase, the radiation density increases by a factor of at least e 4N , with N being the number of e-foldings. So, unless the gauge field is also amplified by the same amount, r is likely to decrease during reheating. Parametric resonance Although the conformal invariance of the gauge fields is the main drawback for their amplification by the expansion of the universe, it also opens up the possibility of amplification by parametric resonance, if the conditions in the early universe are favourable. In this sense, the preheating stage of reheating offers a suitable scenario of magnetic amplification, through its parametric resonance with a scalar field [189]. From the Lagrangian density of scalar electrodynamics conformally coupled to gravity, where A a is the gauge potential, where δj is a source term that can be non null when statistical correlations of the electric currents are considered [190,191]. In Fourier space and in terms od conformal time, η with dη = dt/a (t), the homogeneous part of eq. (5.1.2) is with ω 2 k (η) = k 2 + 4πe 2 a 2 (η) ρ 2 (η), and where the primes denote derivatives with respect to η. Expression (5.1.3) describes a harmonic oscillator with time dependent frequency, which during the oscillation of the complex scalar field can be rewritten as a Mathieu-like equation. The solutions of that equation exhibit exponential increase, i.e., they are proportional to e µ k η (µ k is the Floquet exponent) for some frequency intervals known as resonant bands. The features of the parametric resonance depend strongly on the time evolution of the homogeneous scalar field, which in turn depends on the form of V Φ † Φ . For potentials of the form V = λ n Φ † Φ 2n , Finelli and Gruppuso found that, for a quadratic potential, parametric resonance is efficient when 4πe 2 ρ 2 ≫ λ 1 and it is stochastic and broad, with the largest µ k occurring for small k [178]. For a quartic potential, eq. (5.1.3) becomes a Lame equation [192]. The resonance features and µ k depend on e 2 /λ 2 [192]. In particular modes with k 2 ≪ λ 2ρ 2 0 are resonant for 1/2π < e 2 /λ 2 < 3/2π (ρ 0 being the initial value ofρ). For a symmetry-breaking potential, i.e., V = m 2 Φ † Φ + λ Φ † Φ 2 and in the case of m 2 < 0, the gauge field acquires an effective mass propotional to m 2 /λ, a fact that can completely inhibit the resonance in an expanding universe. For a general value of m 2 , the gauge coupling affects the resonance structure of the scalar field and it is not possible to determine the resonant bands for the imaginary part of the scalar field, as it would be the case if the charged scalar fields were not coupled to the gauge field. Another possible coupling is the one described by the Lagrangian density where φ may represent an axion or a general pseudo-Goldstone boson. If the scalar field performs coherent oscillations, the evolution equation of the transverse circularly polarized photons,Ā ±k , is again given by a Mathieu-like equation In this case the resonance occurs when k ∼ 4πgφω, with ω being the oscillation frequency of φ (which is very small). For V = λφ 4 /4 and 4πgf = 1 (with f being the Peccei-Quinn symmetry scale),Ā ±k grows linearly for k/ω ≪ 4πgf [189]. All the previous description did not take into account dissipation due to the presence of other charged fields, i.e., plasma effects. 
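As a minimal numerical illustration of the resonance behaviour described above, the toy model below integrates a Mathieu-type oscillator A'' + [k² + q cos(2η)]A = 0 inside and outside a resonance band. The parameters are arbitrary and the oscillator is generic (it is not the specific scalar-electrodynamics system of [178,192]); plasma dissipation is ignored.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mathieu_rhs(eta, y, k2, q):
    # y = [A, A'];  A'' = -(k2 + q*cos(2*eta)) * A
    return [y[1], -(k2 + q * np.cos(2.0 * eta)) * y[0]]

def max_amplitude(k2, q, eta_max=200.0):
    sol = solve_ivp(mathieu_rhs, (0.0, eta_max), [1.0, 0.0],
                    args=(k2, q), max_step=0.05, rtol=1e-8, atol=1e-10)
    return np.max(np.abs(sol.y[0]))

# The first resonance band of the Mathieu equation sits around k^2 ~ 1.
print("inside band  (k^2=1):", max_amplitude(1.0, 0.4))   # exponential growth
print("outside band (k^2=3):", max_amplitude(3.0, 0.4))   # stays of order one
# Adding a damping term proportional to the conductivity suppresses
# the resonant growth, as discussed next.
```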
When they are taken into account (basically in the form of electric conductivity), their effect on the magnetic field depends on the wavelengths considered. For wavelengths larger than the plasma collision length, the equations acquire a damping term proportional to the conductivity, 4πaςδA ′ k , with a = a(η) being the scale factor of the universe and ς the conductivity. If aς is constant (as is the case considered in the literature, where ς ∝ T ∝ 1/a [191]) and larger than the Floquet exponent, the parametric resonance could be completely suppressed. For wavelengths shorter than the plasma collision length, the plasma frequency changes as ω 2 k (η) → ω 2 k (η) + 4πe 2 n (η) /m, with n (η) being the number density and m the mass of the plasma particles. This term plays the role of an effective mass that decays as a −3 , thus allowing for resonance, especially for large coupling constants. Whether or not primordial magnetic fields are amplified by parametric resonance during preheating, depends on the existence of an oscillating scalar or axion field, which the e.m. field is coupled to. If, for example, that charged scalar field is the inflaton, the maximum amplifying factor obtained for the gauge field is ∼ 10 12 , which is not enough to give the minimum seed fields that can sustain a dynamo action [189]. Exponential growth of large scale magnetic fields could also be achieved by considering the Lagrangian [35] with R the scalar curvature and β a real constant. From the action S = d 4 xL, it is obtained that the evolution equation for the Fourier components of the magnetic field is given by [193], When the inflaton enters the oscillatory regime, the scalar curvature, given by Defining B k = a 1/2 B k , eq. (5.1.7) recasts into [193] when β > 0, and 1.14) for β < 0. Parametric resonance occurs when q 1 initially [193], but to get relevant growth of B k one needs |β| 1. However, for |β| ≫ 1 the growth of the magnetic fluctuations is suppressed [194]. In the former case, as the super-horizon δB k modes are exponentially suppressed during inflation, we do not expected to obtain high intensities from parametric amplification. On the other hand, for sub-Hubble scales, the suppression is weaker and consequently magnetic fields can be amplified during preheating. For the latter case, namely when β < 0, the amplification is mainly due to inflation rather than parametric resonance. When finite electric conductivity is taken into account, a term of the form −ς e aδB ′ k is added to the r.h.s of eq. (5.1.7), which counteracts the parametric amplification of the fields. In summary, despite the exponential growth of the magnetic fluctuations due to parametric resonance, the main amplification occurs during inflation [193]. The possibility of further amplification by parametric resonance during reheating of a seed hypermagnetic field generated during inflation, was investigated by Dimopoulos et al [195]. However the authors also concluded that the fields do not grow substantially during preheating. Magnetogenesis by stochastic electric currents Another possibility to induce magnetic fields during reheating is tied to the fact that abrupt changes in the metric at that stage may result in the abundant creation of charged particles. This could generate stochastic currents, which would eventually decay into the Maxwell field [190,191]. 
As the inflaton is a gauge singlet, it will not decay directly into charged species, so this mechanism assumes the existence of another field, a charged one, that is in its vacuum state during inflation. It becomes a particle state by the gravitational field, due to the changes in the equation of state of the inflaton [196]. Spin 1/2 particles, such as the electrons, would be conformally invariant at the high energies prevailing during inflation, and consequently are not created in large numbers. Therefore, we must seek for a minimally coupled charged scalar field, of which none is included in the standard model but only in its supersymmetric extensions [197]. The scalar field can be decomposed into its real and imaginary parts as Φ = (φ 1 + φ 2 ) / √ 2, and the associated current as For a crude estimate of the field we can neglect J 2k . Assuming that Ohm's law holds, is the electric conductivity. Then, the equation of the magnetic field is Because the current is stochastic, the induced field must be evaluated through its two-point correlation function, where the "noise kernel" N ii ′ is the Hadamard two-point function 1.20) and D ret (x a , y a ) is the retarded propagator of eq. (5.1.18). We are interested in fields coherent over a scale λ, so the spatial integral in eq. (5.1.19) must be weighed by a window function, W (λ), that filters out frequencies higher than the one associated to the field's scale of coherence, λ −1 . After weighing, the magnetic energy density stored today in a region of size k −1 can be directly inferred, giving where m 0 is the bare mass of the scalar field, T RH the reheating temperature of the universe (i.e., the temperature at the beginning of radiation dominance), ς 0 = e 2 T RH the electric conductivity at the beginning of radiation dominance, τ 1/2 the mean lifetime of the current and H the Hubble parameter, treated as time independent during inflation. Observe that the field intensity depends very weakly on τ 1/2 . Assuming instantaneous reheating, T RH = √ HM P l an estimate of B λ tod on a comoving galactic scale λ gal ≃ 1 Mpc is which is about 15 orders of magnitude weaker than the minimum required to feed the galactic dynamo. Calzetta et al have considered the effect of the "London current", eq. (5.1.17) [198]. In this case the evolution equation of the magnetic two-point function shows two kernels, a local and a non-local one. Of these, the local (non-dissipative) one dominates over the non-local (dissipative) one by several orders of magnitude throughout reheating, which means that dissipation in this system is not due to ordinary electric conductivity. The equations for the magnetic field can be recast in the form a Langevin equation, which due to the local kernel looks like the London equation for a superconducting medium: ] ln (∆/Υ), γ being a parameter that determines the temperature evolution during reheating, ∆ = g 1/2 T RH /H, (with 0 ≤ g ≤ 1 being a coupling constant of the mass to the thermal bath and T RH being the reheating temperature), and Υ is the dimensionless wavenumber corresponding to the original inflationary patch. Finally, J 3/2γ (z) is a spherical Bessel function and Γ 2 (· · · ) a Gamma function. Due to this current, the heavily amplified long-wavelength modes of the scalar field act as a Landau-Ginzburg order parameter in a superconductor, and as in the Meissner effect, the photon acquires a time-dependent mass. This allows for an exponential growth of the Maxwell field during reheating. 
The obtained intensities, however, are too weak (∼ 10 −53 G) to seed the galactic dynamo. Besides, in this model the amplification is very sensitive to the details of the reheating scenario, so it is not possible to obtain generally valid estimates for the resulting magnetic intensity. Primordial magnetic fields from metric perturbations Amplification of electromagnetic vacuum fluctuations can also be achieved by scalar perturbations in the metric during the transition inflation-reheating, i.e., by breaking the conformal flatness of the background geometry, rather than the conformal invariance of the Maxwell field equations [193,199]. The main effect is due to super-horizon scalar perturbations, specially by those modes that reenter the horizon right after the end of inflation. These fluctuations create an inhomogeneous background, in which the magnetic field evolves in a non conformally invariant way: the mode-mode coupling between electromagnetic and metric fluctuations mixes positive and negative frequency modes of the former field, thus breaking its conformal invariance. The line element of a flat FLRW model with scalar metric perturbations, in the longitudinal gauge, reads [200,199] with Φ (η, x i ) representing the gauge invariant gravitational potential. To first order in the cosmological and e.m. perturbations, δA i , the evolution equation of the Fourier transform of the latter is [199] with J i k , η being a source term that depends only on the Fourier transform of the metric perturbations and on their time derivatives [199]. The resulting field strength depends on the power spectrum for super-Hubble metric perturbations, which is given by [199] where A S ≃ 5.10 −5 sets the normalization at the COBE scale (λ C ≃ 3000 Mpc). At decoupling, the field strength on a coherence scale corresponding to a galaxy, k G , is where k max is a cut-off that must be introduced in the case of a blue spectrum (n > 1) to avoid excessive primordial black hole production, and that for negative tilt, is related to the minimum size of the horizon (k max ≤ a I H I , I denoting the end of inflation). Observe that eq. (5.1.27) is a function of k max , i.e., of the mechanism that generated the perturbations. The resulting magnetic field spectrum is thermal (B k ∼ k 3/2 ) in the low-momentum tail. The relation between the energy densities in magnetic field and photons, for a suitable wavenumber k G , at decoupling turns out to be The obtained intensities are upper limits, as dissipative effects were not taken into account when deriving expression (5.1.27). Scalar metric perturbations can grow exponentially during preheating [201,202], thus inducing strong enhancement of magnetic fields. Let us consider the Lagrangian [193] with V (φ) = (1/4) λφ 4 , φ being the inflaton and χ the scalar field it is coupled to. In this case metric perturbations are expected to grow due to enhancement of the scalar field perturbations, and in turn the former would stimulate the growth of magnetic field perturbations through gravitational scattering. 
Assuming that on super-Hubble scales Φ depends only on time, and adopting the Coulomb gauge (A 0 = 0, ∂ i A i = 0), the Fourier component A i (k) of the gauge potential reads 1.31) with its solution given in integral form bỹ Decomposing the scalar fields as ϕ → ϕ + δϕ (where ϕ denotes either φ or χ), the evolution equation for the Fourier transformed metric perturbations readṡ When the fluctuations δχ (k, η), with low k, are excited during preheating, the corresponding metric and inflaton perturbations, Φ (k, η) and δφ (k, η) respectively, grow on large scales and thus enhance the magnetic fluctuations [193]. However, with the increase of g 2 /λ, the long wavelength modes δχ (k, η) are suppressed during inflation. Sub-Hubble fluctuations, on the contrary, do not suffer from suppression and exhibit parametric amplification during reheating [203]. Therefore, the mode-mode coupling between small-scale metric perturbations and largescale magnetic fields in eq. (5.1.30) can enhance the latter. This model, however, has many uncertainties and complexities, which require further research because they make it difficult to obtain reliable estimates for the final magnetic intensities. Magnetogenesis in phase transitions The actual state of the particles in our universe is the result of phase transitions that occured in the early phases of the expansion. At least two phase transitions are believed to have taken place in that epoch: the electroweak (EW -at T EW ∼ 100 GeV) and the quantum chromodynamical (QCD -at T QCD ∼ 200 MeV). In the former case, the transition is from a symmetric, high temperature phase with massless gauge bosons to the Higgs phase, in which the SU (2) × U (1) gauge symmetry is spontaneously broken and all the masses of the model are generated. In the QCD case, the transition is from a quark-gluon plasma to a confinement phase with no free quarks and gluons. At the same energy scale, it is expected that the global chiral symmetry of QCD with massless fermions is spontaneously broken by the formation of a quark pair condensate. First order phase transitions occur via bubble nucleation. Domains of new phase of broken symmetry form, whose sizes are at most of the order of the horizon at that time. As the horizons grow, different domains come into causal contact and bubble walls collide with each other. Magnetogenesis occurs through violent processes that take place during these collisions: reconnection of magnetic field lines carried by the walls, MHD dynamos induced by the turbulent flows produced by the collisions, etc. In every case, the question is whether the generated fields can explain the intensity and morphology of the observed magnetic fields, or to seed further amplification mechanisms, such as turbulent dynamos that could operate within galaxies. In general, the magnetic fields that are produced during phase transitions can be very strong, but typically have very small coherence lengths. Second order transitions occur in a smooth and regular way, with approximate local thermal equilibrium being maintained throughout the process. In spite of this, magnetogenesis can also be possible as shown below. It was recently proved [36] that the QCD transition in the hot universe was an analytic crossover rather than a phase transition. In this sense, the results on magnetogenesis obtained by considering the QCD transition as first order are invalid and new research needs to be done. 
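Before turning to the EW case, it is worth quantifying the coherence-length problem mentioned above: redshifting the Hubble radius at the electroweak epoch to the present day gives a comoving scale of only tens of astronomical units. The sketch below uses standard radiation-era relations; all numbers are rough.

```python
import numpy as np

# Comoving (present-day) size of the Hubble radius at the electroweak
# transition, using H ~ 1.66 g*^(1/2) T^2 / Mpl in the radiation era.
GEV_INV_IN_CM = 1.97e-14
T_EW_GeV = 100.0
T0_GeV = 2.35e-13
g_star = 100.0
M_pl_GeV = 1.22e19

H_EW_GeV = 1.66 * np.sqrt(g_star) * T_EW_GeV**2 / M_pl_GeV
hubble_radius_cm = GEV_INV_IN_CM / H_EW_GeV                 # physical, at T_EW
comoving_today_cm = hubble_radius_cm * (T_EW_GeV / T0_GeV)  # stretched to today

AU_CM = 1.496e13
print(f"Hubble radius at T_EW : {hubble_radius_cm:.1f} cm")
print(f"comoving size today   : {comoving_today_cm / AU_CM:.0f} AU")
# Tens of AU: far below the ~10 kpc dynamo requirement, unless
# inverse-cascade processes transfer power to larger scales.
```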
Since the QCD transition now appears to have been a crossover, we shall review in this section only the EW phase transition, which also seems to provide a very suitable scenario for magnetogenesis, since it facilitates the separation between electric and magnetic fields as classical fields. Besides, while the Standard Model predicts a smooth cross-over for this transition, its extensions can give a strong first-order phase transition, which is a fundamental ingredient for electroweak baryogenesis and the generation of primordial magnetic fields. Supersymmetric extensions of the Standard Model have been the most intensively studied [208,209,210,211,212,213], but it is also possible to get a strong transition from more generic two-Higgs-doublet models [214,215], from technicolor theories [216], etc. [217,218,219,220]. Phase transitions in the early universe lead to another class of mechanism for generating primordial magnetic fields, based on the Kibble mechanism [221], i.e., on the generation of cosmic strings. If the vacuum manifold M of the broken gauge theory that exhibits a phase transition has a nontrivial first homotopy group [222,223], then a cosmic string network will form generically [221]. This network has a characteristic length scale ξ(t), which expands with the expansion of the universe. Infinitely long strings and loops are formed, the smallest of the latter decaying away via gravitational radiation. The result is a scaling solution, in which string properties such as ξ(t) are proportional to the time elapsed [224,225,226,227,228]. This means that, if cosmic strings can produce randomly oriented magnetic fields, these could be coherent, via the Vachaspati mechanism [229], over galactic scales at the time of galaxy formation, as required by the dynamo paradigm. In the last subsection we review some works done on this mechanism. Recently, a new mechanism of early magnetogenesis was proposed by Dolgov et al [230], whereby ferromagnetic domains of condensed W bosons would form in the broken phase of the standard electroweak theory. These domains could create large-scale magnetic fields that would survive after the decay of the domains due to flux-freezing. Although the authors do not give estimates for the produced fields, their work points towards a new direction that should be explored further. Magnetogenesis in the electroweak phase transition In his seminal work of 1983 [231], C. Hogan was the first to propose a mechanism of magnetogenesis based on a small-scale dynamo induced by a first order phase transition in the early universe. His aim, however, was not to explain the fields observed in galaxies, but to study the effect of the induced fields on structure formation. The dynamo that Hogan proposed would be induced in the walls of the bubbles by the ordered release of free energy during the transition. Each bubble would be an independent dynamo, producing fields correlated only on the scale of the bubble radius. The result is a pattern of randomly oriented field lines that, properly averaged, would produce a large-scale field spanning regions that are not causally connected, i.e. coherent on larger scales, whose spectrum is of the form B_l ∝ l^{-3/2}, i.e., a dipole field, but of weaker intensity. Since then, several mechanisms for magnetic field generation by first order phase transitions have been proposed in the literature.
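The B_l ∝ l^{-3/2} behaviour quoted above is essentially the statistics of averaging randomly oriented cells: over a box containing N = (l/l_0)³ independent domains the mean field is suppressed by 1/√N = (l_0/l)^{3/2}. A quick Monte-Carlo check of that exponent (purely illustrative, not a simulation of bubble collisions):

```python
import numpy as np

# Randomly oriented unit fields on a cubic grid of "bubbles"; averaging
# over blocks of side `box` cells suppresses the rms field roughly as
# box**(-3/2), i.e. the l^(-3/2) scaling quoted in the text.
rng = np.random.default_rng(0)
n = 64
field = rng.normal(size=(n, n, n, 3))
field /= np.linalg.norm(field, axis=-1, keepdims=True)   # unit vectors

for box in [1, 2, 4, 8, 16]:
    m = n // box
    blocks = field.reshape(m, box, m, box, m, box, 3)
    avg = blocks.mean(axis=(1, 3, 5))                     # block-averaged field
    rms = np.sqrt((np.linalg.norm(avg, axis=-1) ** 2).mean())
    print(f"l/l0 = {box:2d}   rms = {rms:.3f}   expected ~ {box**-1.5:.3f}")
```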
In 1995 Baym, Bödeker and McLerran [232] proposed an also dynamo-based mechanism, whereby seed fields (produced by thermal fluctuations) are amplified by a turbulent dynamo induced by the collision of supersonic shock waves created by the expansion of the walls of the broken symmetry bubbles. Their work was framed within the standard model, since at the time it was still believed that a first order phase transition was possible in it. In this sense, the resulting magnetic intensities would be incorrect. Concretely, when the expansion of the universe supercooled the cosmos below a critical temperature, of the order T cr ∼ 100 GeV, then (at random locations) the Higgs fields tunnels from the unbroken SU (2) × U (1) Y phase to the broken U (1) em phase, forming bubbles that expand and convert the false vacuum energy into kinetic energy. As the shock fronts collide, turbulence is developed in the cones associated to the bubble intersection, with Reynolds number where v w is the wall velocity, R b is the size of a bubble at collision time and λ is the scattering length of fluctuations in the electroweak plasma. Baym et al found that for scales smaller than R b , Re ∼ 10 12 . Assuming v w ∼ v f luid ∼ 10 −1 , that the typical bubble radius after the completion of the phase transition is is the Planck mass and g * ∼ 10 2 is the number of massless degrees of freedom in the matter) and that λ ∼ T g EW α 2 |log α| (with α the fine structure constant and g EW ∼ g * the number of degrees of freedom that scatter by EW processes), Baym et al obtained Such a large Reynolds number means that turbulence is fully developed on scales smaller than R b . Assuming that the electric conductivity is large at that epoch [49], strong magnetic turbulence should exist and in that situation kinetic and magnetic energies are in equipartition, allowing us to estimate that To obtain the intensity of the large-scale field, Baym et al assumed that the small-scale field formed a pattern of continuously distributed dipoles, with distribution being a Gaussian. Therefore the correlation function of the dipole density is 2.4) while the one of the magnetic for r where · · · R means averaging on a scale R. The present-time estimate for this magnetic field on a galactic scale, l gal ∼ 10 9 AU, is B (l gal ) ∼ 10 −17 − 10 −20 . In 1991 T. Vachaspati [229] proposed a mechanism of magnetogenesis based on second order cosmological phase transition. These transitions would produce domains of different vacuum expectation values for the Higgs field, with these differences amounting to gradients in the field. The latter would ultimately lead to electromagnetic fields after the completion of the transition. This mechanism can produce fields associated to other (unbroken) symmetries (like SU(3)) as well. When applied to the electroweak transition, and assuming that the initial correlation scale, χ i ∼ 2 (gT i ) −1 , is of the order of the inverse mass of the W boson, with T i ≃ 10 2 GeV, an initial intensity of B ∼ gT 2 i /2 ≃ 10 23 G is obtained for that correlation length. The initial energy density of the field is comparable to that of the universe, Ω B (t i ). For a region of size ℓ i = Nχ i , with N ≫ 1, the Higgs field is randomly oriented. Consequently the initial magnetic intensity on that scale would be B N ∼ gT 2 i /4N, which at the electroweak scale gives B N (t EW ) ∼ 10 23 N −1 G. At the QCD scale, we finds B N (t QCD ) ∼ 10 18 N −1 G and today B N (t today ) ∼ 10 −6 N −1 G (with N > 10 13 in all cases). 
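As a check on these numbers, one can count how many EW-epoch correlation domains span a given comoving scale today and apply the B ∼ 10^{-6} N^{-1} G relation just quoted; the domain size and its redshifting below are rough estimates.

```python
import numpy as np

# Number of EW-epoch Higgs-correlation domains across a comoving scale,
# and the line-averaged field B ~ 1e-6/N G (relation quoted in the text).
# chi_i ~ 2/(g T_i) with g ~ 0.65 is a rough estimate of the domain size.
GEV_INV_IN_CM = 1.97e-14
T_i_GeV = 1e2
T0_GeV = 2.35e-13
g = 0.65

chi_i_cm = (2.0 / (g * T_i_GeV)) * GEV_INV_IN_CM       # physical, at T_i
chi_today_cm = chi_i_cm * (T_i_GeV / T0_GeV)           # comoving, today

scale_cm = 100.0 * 3.086e21                            # 100 kpc in cm
N = scale_cm / chi_today_cm
print(f"domain size today : {chi_today_cm:.2f} cm")
print(f"N over 100 kpc    : {N:.1e}")
print(f"B ~ 1e-6/N        : {1e-6 / N:.1e} G")
# N ~ 1e24 and B ~ 1e-30 G, in line with the estimate that follows.
```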
For a scale of 100 Kpc today, N = 10 24 and thus B ∼ 10 −30 G. The work of Vachaspati was questioned by Davidson in Ref. [233]. She computed the electric current due to the dynamics of the Higgs field and showed that it vanishes during the EW phase transition. Her conclusion was that no large-scale magnetic fields are generated by the classical rolling of the Higgs vacuum expectation value during the electroweak phase transition. Later, in 1998, Grasso and Riotto [234] reanalysed the generation of magnetic fields during the EW phase transition and found that the Vachaspati mechanism was plausible. Grasso and Riotto analyzed the two possibilities for the phase transition: first order and second order. They showed that the magnetic induction is connected to some semiclassical configurations of the gauge fields, such as electroweak Z-strings and W -condensates. The initial Higgs field configuration is with τ a representing the Pauli matrices, n a a unit vector in the SU (2) isospace, θ (X) the U(1) Higgs field phase and ρ (X) the modulus of the Higgs field. The equation of motion for the SU (2) gauge field in the adjoint representation is where it is assumed that the initial gauge fields W a µ and their derivatives are zero at t = 0. Also,φ ≡ Φ † τ a Φ/Φ † Φ = cos θφ 0 + sin θn ×φ 0 + 2 sin 2 (θ/2) n ·φ 0 n, withφ T 0 ≡ − (0, 0, 1). Asn does not depend on the space coordinates, it is always possible to assume that it is perpendicular toφ 0 . In other words,φ can be always obtained by rotatingφ 0 at an angle θ in the (φ 0 ,n)-plane. Then, eq. (5.2.8) becomes This clearly shows that only the gauge field component alongn, namely A a = n b W ab is created by a nonvanishing gradient of the phase between the two domains. When the full SU (2) × U (1) Y group is considered, it is no longer possible to choosen arbitrarily, because the different orientations ofn, with respect toφ 0 , correspond to different physical situations. Settingn parallel toφ 0 and assuming that the charged gauge field does not evolve significantly, Grasso and Riotto found the following complete set of evolution equations, which is valid for a finite (though short) time after the bubbles first contact: and d a d a ρ (X) e iϕ/2 + 2λ ρ 2 (X) − 1 2 η 2 ρ (X) e iϕ/2 = 0 . Here d a = ∂ a + i g 2 cos θ W Z a , with η being the vacuum expectation value of Φ and λ the quartic coupling. Expressions (5.2.10) and (5.2.11) are the Nielsen-Olesen equations of motion [235]. Their solution describes a Z-vortex with ρ = 0 at its core [236,237]. The geometry of the system implies that the vortex is closed, forming a ring whose axis coincides with the conjunction of the bubble centers. To determine the magnetic field produced during the process described above, it is necessary to give a gauge-invariant definition of the electromagnetic field in the presence of a non-trivial Higgs background. Grasso and Riotto chose An attempt to predict the strength of the magnetic field at the end of the EW phase transition was done by Ahonen and Enqvist [239] and by Enqvist [240], who analyzed the formation of ring-like magnetic fields in collisions of bubles of broken phase in an Abelian Higgs model. Under the assumption that magnetic fields are induced by a process similar to the Kibble and Vilenkin mechanism [241], it was concluded that a field of the order of B ≃ 2 × 10 20 G, with a coherence length of about 10 2 GeV −1 , could be induced. 
In addition, assuming that the plasma was endowed with MHD turbulence, Ahonen and Enqvist found that the coherence scale could be enhanced by the inverse cascade of the magnetic helicity, and so a field of B rms ≃ 10 −21 G on a comoving scale of 10 Mpc could be present today. As stated earlier, however, the problem with first order phase transitions in the standard model is that they are incompatible with the experimental lower limit for the Higgs mass. Grasso and Riotto also analyzed the creation of magnetic fields when the EW phase transition is of second order. In this case domains where the Higgs field is physically correlated are formed near the critical temperature. The formation of topological and non-topological vortices, is a common phenomenon in second order phase transitions via the Kibble mechanism. It is also known that the non-topological vortices share many common features with the electroweak strings [242]. In this sense, Grasso and Riotto argued that electroweak strings are formed during the second order EW phase transition. To estimate the density of vortices (and consequently the mean magnetic field), it is necessary to know the Ginzburg temperature, T G . This sets the threshold at which the thermal fluctuations of the Higgs field, inside a given domain of broken symmetry, are no longer able to restore the symmetry. The Ginzburg temperature was computed by the authors of Ref. [234], after comparing the expansion rate of the Universe with the nucleation rate per unit volume of sub-critical bubbles of symmetric phase with size equal to the correlation length of the broken phase. The latter is given by 2.14) where ℓ b is the correlation length in the broken phase and S ub 3 is the high temperature limit of the Euclidean action [243]. For the EW phase transition, T G ≃ T C , and the corresponding size of a broken phase domain is determined by the correlation length at T = T G , i.e., where V (φ, T ) is the effective Higgs potential. Using the fact that ℓ b (T G ) 2 depends weakly on M H , Grasso and Riotto estimated the magnetic field strength, on a scale ℓ b (T G ) at the end of the EW phase transition, to be B ℓ ∼ 4e −1 sin 2 θ W ℓ 2 b (T G ) ∼ 10 22 G. To obtain the intensity on cosmologically interesting scales, the authors of Ref. [234] followed the procedure of line averaging sugested by Enqvist and Olesen, i.e., B rms,L ≡ B ℓ / √ N , where N is the number of domains crossed by line, obtaining that a field coherent on a scale of 1 Mpc today would have an intensity of B 0 (1 Mpc) ∼ 10 −21 G. It must be pointed out, however, that all these studies do not take into account the dissipative effects of the primordial plasma. Consequently, the corresponding numerical results should be treated as upper limits. The mechanism proposed by Vachaspati [229] and later analyzed by Grasso and Riotto [234] (see also Cornwall [32]) was recently numerically confirmed and improved by Diaz-Gil et al. [244,245]. The authors considered the full SU (2) ⊗ U (1) model in the framework of hybrid inflation. After a short period of hybrid inflation that ends at the EW scale, where nonlinearities in the Higgs and gauge fields can be neglected, tachyonic preheating develops and non-linearities in the fields cannot be neglected anymore. During this period the Vachaspati mechanism operates, and magnetic string-like configurations appear due to the gradients in the orientation of the Higgs field. 
The important feature of the induced magnetic fields is that they are helical, i.e., they posses a non null r.m.s. magnetic helicity. During the subsequent phase of (first order) EW symmetry breaking, the magnetic fields are squeezed in string-like structures localized in the regions between bubbles, where the gradients of Higgs fields are still large. The evolution of the coherence scale of these fields can be tracked for a short period of time after the end of the phase transition. At that time it is important to track the evolution of the low momentum part of the spectrum, which is the one that can seed the fields for galaxies and clusters of galaxies. It is seen that it carries a fraction of ∼ 10 −2 of the total energy density, which would be enough to explain the magnetic fields observed in clusters. The correlation length grows as fast as the particle horizon (i.e., linearly in time) and this behaviour is interpreted as an indication that an inverse cascade of magnetic helicity is in operation. However, it is not possible to extrapolate this behaviour to later times, due to our limited knowledge on the primordial plasma features. Stevens and Johnson [246,247] analyzed the possibility of magnetogenesis by a first order EW phase transition, possible for some choices of parameters in the minimal supersymmetric Standard Model (see also [248]). They considered the Lagrangian L = L 1 + L 2 + ( leptonic, quark and supersymmetric partner interactions) , (5.2.16) with (5.2.17) and Also, T represents the temperature, while W i µν and B µν are given by 2.20) respectively. In the previous equations, W i (with i = +, −) are the W + , W − fields, Φ is the Higgs field and τ i is the SU (2) generator (fermions are not considered in this model). In the framework of the MSSM the bubbles that consist of a region of space filled by the Higgs field with a cloud of the other constituents of the MSSM in the broken phase. From this Lagrangian, one obtains the linearized equations of motion with O (3) symmetry, which are suitable to study collisions where the Higgs field is relatively unperturbed from its mean value within the collision volume. Stevens and Johnson [247] found that the coherent evolution of the charged W fields within the bubbles is the main source of the electric current that generates the magnetic field. In their model, fermions are taken into account as a background that provides dissipation through electric conductivity. They numerically integrated the equations of motion of the model, paying special attention to the role of the surface thickness of the bubbles, finding that the main sensitivity is due to the steepness of the bubble surface: the steeper the transition, the more enhanced the seed field becomes. Despite this, the authors of Ref. [247] did not attempt to give the present-day value of the generated magnetic field, because of uncertainties in the properties of the host plasma. Magnetogenesis from cosmic strings The interaction between cosmic strings and magnetic fields was first discussed in 1986 by Ostriker et al [249], while their connection with primordial magnetogenesis was first suggested by Vachaspati in 1991 [229]. Later the mechanism was further developed by Brandenberger et al [250], also for superconducting cosmic strings, who showed that these models are severely constrained by cosmological arguments: the only stable confirgurations for those strings are springs and vortons, which produce matter overdensities in the same manner primordial magnetic monopoles do. 
So, these models had to be ruled out. In 1999 Brandenberger and Zhang [251] studied magnetogenesis by anomalous global strings and discussed for the first time the importance of the coherence length in these models. The authors proposed a mechanism based on the realization that anomalous global strings couple to electromagnetism [252] through an induced F µνF µν term in the low-energy effective Lagrangian and therefore magnetic fields can be generated. The major advantage of the mechanism is that the coherence scale of the induced magnetic fields is basically the curvature radius of the inducing string. The mechanism is realized within QCD, namely there exists a class of stringlike classical solutions of the linear sigma model, that describes strong interactions below the confinement scale, called pion strings [253]. At low temperatures those strings are not topologically stable, decaying at a temperature T d ∼ 1 MeV, but within the plasma they can stabilize because the plasma interactions break the degeneracy among the three pions. Since a pion string is made of σ and π 0 fields, it is neutral under the U em (1) symmetry. However, the π 0 couples to photons via the Adler-Bell-Jackiw anomaly. In the linear sigma model, the effective coupling of π 0 to photons is obtained from the contribution of the quark triangle diagram [254]. At low energies only pions and photons are important, hence the effective Lagrangian to leading order reads 2.21) where N c = 3, Σ = exp (iτ ·π/f π ),τ are the Pauli matrices and α is the electromagnetic fine structure constant. From this Lagrangian one also obtains the classical equation of the electromagnetic field The key effect is due to the anomaly term in eq. (5.2.22). Charged zero modes on the string will induce a magnetic field circling the string that falls off less rapidly, as a function of the distance from the string, than it is classically expected. Zero mode currents are automatically set up by the analog of the Kibble mechanism [221] at the time of the phase transition and therefore magnetic fields coherent in a scale of the string size are automatically generated. The coherent magnetic field, as a function of the distance r from the string, can be expressed as with n being the number density of charge carriers on the string, r 0 giving the width of the string and α ≪ 1. By dimensional analysis, Brandenberger and Zhang obtained that at the time t c = t c , when the strings form, r 0 ∼ T −1 c and n ∼ T c . The initial correlation length of the string network, ξ (T c ) increases rapidly, approaching a scaling solution of the form ξ (t) ∼ t. During this evolution the charge density is diluted as the strings stretch, while at the same time the merger of small strings into larger ones leads to an increase of charge. Assuming that the initial separation of the strings is microscopic, and that they decay during radiation dominance, Brandenberger and Zhang obtained 2.24) with p = 5/4 or 3/2 [251]. Also, the corresponding magnetic field at t d is Brandenberger and Zhang assumed that between t d and the present time, t 0 , the field propagates through a perfectly conducting plasma and found that today the magnetic intensity should be Note that T 0 is the present time temperature, r Kpc is the present distance from the original comoving location of the string expressed in Kpc and r the physical distance at T d . 
Considering t as an estimate of the string separation, T_c ∼ 1 GeV, T_d ∼ 1 MeV and p = 1, Brandenberger and Zhang obtained

B(t_0) ∼ 10^−26 (rT_c)^{α/π} Gauss , (5.2.27)

arguing that, if r is of cosmological order and T_c^{−1} is microscopic, the factor (rT_c)^{α/π} exceeds unity. To analyze the coherence scale of the fields, the authors assumed that after T_d the field lines are frozen in comoving coordinates; the resulting comoving coherence scale is set by ξ(t_d) and the redshift z(t_d) at t_d, with β ∼ 1 for scaling strings. On scales larger than ξ(t_d)c the fields have random orientation, yielding a line-averaged field B̄ that depends on the scale d over which the coherent field is calculated. The authors found that B̄ ≃ 10^−2 B(t_0). Obviously, if the resistivity of the host plasma is accounted for, the suppression will be larger. Recently, Gwyn et al [255] extended the mechanism to heterotic cosmic strings arising in M theory. Those strings, being stable, would produce even stronger fields. This work is reviewed in the following section. Another possible way in which cosmic strings could produce primordial magnetic fields was proposed by Dimopoulos [256], by Dimopoulos and Davies [257] and by Battefeld et al [258]. In these scenarios, the magnetic fields are induced by vortices produced by cosmic strings via the Harrison-Rees [152] effect. This mechanism, however, was recently strongly criticized by Hollenstein et al [259], who showed that the Harrison-Rees effect is quite inefficient in producing cosmologically interesting magnetic fields.

Magnetogenesis beyond the standard model

In this section we review different types of magnetic field generation mechanisms involving theories beyond the standard model; in these mechanisms the primordial magnetic fields are generated during inflation. As was shown in § 4, on a non-flat background perturbations in the electromagnetic field can be efficiently amplified during inflation within the standard model, that is within the standard linear theory of electrodynamics. On the contrary, on a flat background the amplification during inflation is not sufficient to be cosmologically relevant. Following [35] (see also § 4.3 earlier), the ratio r ≡ ρ_B/ρ_γ of the energy density in the magnetic field over the energy density in the background radiation is introduced. In the case of linear electrodynamics on a flat background the magnetic energy density decays as a^−4, where a is the scale factor, and hence the ratio r is constant as the universe evolves. This is also true in the radiation dominated era, when the universe is dominated by a highly conducting plasma. The interstellar magnetic field in our Galaxy is of the order of a few µG. Assuming that a galactic dynamo, contributing an exponential amplification factor in time, has been operating since the formation of the Galaxy requires an initial seed magnetic field at the time of galaxy formation of at least B_s ≃ 10^−20 G [260], which corresponds to a minimum magnetic to photon energy density ratio of r ≃ 10^−37. There is some controversy about the efficiency of such a galactic dynamo; working under the hypothesis that there is no efficient dynamo amplification of the initial seed field, but only the amplification due to the collapsing protogalactic cloud, requires r to be at least of the order of 10^−8 [35]. These lower bounds on r were derived assuming no cosmological constant. In a flat universe with a large positive cosmological constant, and assuming a galactic dynamo is operating, these bounds can be lowered significantly.
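These thresholds can be translated into energy-density ratios with a few lines of arithmetic. The sketch below assumes that the seed value quoted at galaxy formation already includes a flux-freezing amplification of order (ρ_gal/ρ̄)^{2/3} ∼ 10^4 from the protogalactic collapse; that amplification factor is an illustrative assumption made here, not a number taken from [260].

import math

# Translate the dynamo threshold B_s ~ 1e-20 G (at galaxy formation) into r = rho_B/rho_gamma.
a_rad     = 7.5657e-15            # erg cm^-3 K^-4, radiation constant
T_cmb     = 2.725                 # K
rho_gamma = a_rad * T_cmb**4      # photon energy density today

B_at_collapse = 1e-20                     # G, seed quoted in the text
collapse_amp  = (1e6)**(2.0/3.0)          # assumed overdensity ~1e6 -> amplification ~1e4
B_comoving    = B_at_collapse / collapse_amp

rho_B = B_comoving**2 / (8.0 * math.pi)
print(f"r ~ {rho_B / rho_gamma:.1e}")     # ~1e-37, matching the value quoted above

The same exercise, applied to the weaker seed value discussed next, reproduces r ≃ 10^−57.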
In particular for reasonable cosmological parameters an initial seed magnetic field of at least B s ≃ 10 −30 G is enough to explain the present day galactic magnetic field strength [30]. This corresponds to r = 10 −57 . In typical inflationary scenarios on a galactic scale of 1 Mpc r ≃ 10 −104 at the beginning of the radiation dominated era (cf. equation (4.3.9)) which is much below the required minimal value even in the presence of a cosmological constant. Therefore, in the case of a flat background, it is necessary to go beyond the standard model. There are different possibilities of modifying the standard four dimensional electromagnetic Lagrangian. Currently models of modified gravity enjoy an intense activity due to the fact that they can be used to describe the late time evolution of the universe at a global scale as well as, say, the observed rotation curves of galaxies. Thus these models combine the effects of dark energy, which is used to model the accelerated expansion of the present universe, and dark matter, which is postulated to exist in the form of halos around most galaxies. The gravitational sector in theories of modified gravity is usually described by a Lagrangian of the form (e.g., [261]) where f (R) is a function of the Ricci scalar R, most often chosen to be of the form f (R) ≃ R + αR n , where α and n are constants. It should also be noted that one of the original realizations of inflation is given by f (R) = R + R 2 which can be shown to be equivalent to a conformally coupled scalar field [262]. Usually it is argued that modified gravity theories are some kind of effective description resulting from taking into account quantum corrections to the classical Einstein-Hilbert action. In order to study the generation of primordial magnetic fields in this type of theories the electromagnetic field has to be included in the Lagrangian [263]. Considering flat space the conformal invariance of the Maxwell Lagrangian in four dimensions has to be broken in order to generate magnetic fields strong enough to seed the galactic magnetic field. In the following a survey of models will be given which are used in this respect. Gravitational coupling of the gauge potential Models involving the gravitational coupling of the gauge potential are described by Lagrangians of the form RA m A m and R mn A m A n where R mn is the Ricci tensor. Gauge invariance of the Lagrangian is broken explicitly and for that matter it does not seem very appealing. The term RA m A m describes a massive photon with its mass given by m γ ∼ R 1/2 . Electromagnetism is then described by Proca theory and A m is the Proca field. The strongest bound on the photon mass within our galaxy is obtained by assuming a Proca regime on all scales. The Proca field contributes to the magnetic pressure of the intergalactic medium which has to be counterbalanced by the thermal pressure of the plasma. Observations assuming standard electrodynamics indicate that within our galaxy the interstellar medium is approximately in equipartition. From this it can be concluded that the magnetic pressure due to the Proca field has to be subleading with respect to the standard magnetic pressure [264]. This implies the bound m γ < 10 −26 eV [265]. Furthermore, in [265] it is pointed out that these limits depend on the mechanism on how the photon aquires mass. If it is via the Higgs mechanism then it is possible that large scale magnetic fields are effectively described by Maxwell's theory. 
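For orientation, the Hubble rate today expressed as a mass is many orders of magnitude below the equipartition bound just quoted; the conversion (with an assumed H_0 = 70 km s^−1 Mpc^−1) is a one-liner.

# Express H_0 as an equivalent photon mass and compare with the equipartition bound above.
hbar_eV_s = 6.582e-16                  # eV s
H0        = 70.0 * 1e5 / 3.086e24      # s^-1  (70 km/s/Mpc: 70e5 cm/s per 3.086e24 cm)
m_gamma   = hbar_eV_s * H0             # eV
print(f"H_0 ~ {m_gamma:.1e} eV")       # ~1.5e-33 eV
print(m_gamma / 1e-26)                 # ~1e-7 of the quoted bound m_gamma < 1e-26 eV

This is the estimate m_γ ∼ H_0 ∼ 10^−33 eV used below.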
If the photon acquires its mass via the Higgs mechanism, the strongest bound comes instead from the validity of Coulomb's law and is given by m_γ < 10^−14 eV [266,265]. Using this type of Lagrangian on cosmological scales, the typical curvature scale is set by the value of the Hubble parameter today, H_0. Since m_γ ∼ R^{1/2} and R^{1/2} ∼ H, the present-day photon mass is m_γ ∼ H_0 ∼ 10^−33 eV, which is well below the above-mentioned limits on the photon mass [35]. The idea is that the initial magnetic seed field is created from the amplification of perturbations in the electromagnetic field during inflation. In [35] the resulting magnetic field at the end of inflation in this type of theories was calculated using the assumption that, at the time of horizon crossing during inflation, the energy density in the relevant mode is determined by the Gibbons-Hawking temperature. This was critically reconsidered in [267], where it was found, by calculating the spectral energy density from first principles and quantizing the corresponding canonical field, that this assumption actually over-estimates the energy density. However, here we follow the original calculation of [35]. The equations of motion are derived from a Lagrangian containing, in addition to the Maxwell term, the couplings bRA^2 and cR_{mn}A^mA^n, where A^2 ≡ A_mA^m and b and c are constants [35]. Together with the parametrization of the Maxwell tensor in terms of the electric and magnetic fields Ê_α and B̂_α in the "lab" frame, the equations of motion yield evolution equations for the fields, with η the conformal time; the line element is ds^2 = a^2(−dη^2 + dx^2). Expanding in Fourier modes, F_α(k, η) ≡ a^2 ∫ d^3x e^{i k·x} B̂_α(x, η), one finds that the additional terms in the Lagrangian can act as pump terms, that is, they can amplify the spectral energy density of the magnetic field. The averaged magnetic field energy density is ρ_mag(η) = ⟨(B̂_α B̂^α)(x, η)⟩/(8π); using the corresponding correlation function, the spectral energy density follows from ρ_B(k, η) = k dρ_mag/dk (equation (6.1.10)). Therefore, solving equation (6.1.7) for different types of scale factor, the magnetic energy density after the end of inflation is estimated using ρ_B ∝ |F_µF^µ|/a^4 [35]. Note that for standard electrodynamics, n = 0, the magnetic energy density simply scales at the usual a^−4 rate of a frozen-in magnetic field. As can be seen from equation (6.1.7), modes well inside the horizon, that is modes with comoving wave number |kη| ≫ 1, simply oscillate, since the last term can be neglected and equation (6.1.7) reduces to the equation of a harmonic oscillator. In the opposite case, for modes well outside the horizon, satisfying |kη| ≪ 1, the second term in equation (6.1.7) becomes subleading and the resulting equation can easily be solved. The solutions are power laws in conformal time, with an exponent p during the de Sitter stage and q ≡ m_+ = ½[1 + √(1 − 48b − 24c)] in the radiation dominated era [35]. Thus, at the time of galaxy formation, the ratio of magnetic over photon energy density, r, is found to be a function of the exponents p and q and of T_*, the temperature at which plasma effects become important during reheating; in [35] T_* is estimated as the minimum of two scales built from the reheating temperature T_RH and the energy scale M of inflation. Taking typical values for the physical parameters, there is a wide range for the exponents p and q such that r is larger than the minimal value required to seed the galactic magnetic field with a galactic dynamo operating, r > 10^−57, or without one, r > 10^−8. In figure 1, log r is shown for typical values of the parameters.
Figure 1. The logarithm of the ratio r of magnetic field energy density over photon energy density, for different values of p and q, in a model including RA^2 and R_{µν}A^µA^ν terms [35]. The numbers in the graph refer to the values of log_10 r along the closest contour line.

Quantum corrections in QED in a curved background

The QED one-loop vacuum polarization of the photon in a general curved background gives rise to terms coupling the Maxwell tensor to the curvature [268]. Vacuum polarization describes the effect of virtual electron-positron pair creation, which gives the photon an effective size of the order of the electron Compton wavelength that interacts with the curvature. This leads to an interesting space-time structure, including the phenomenon of gravitational birefringence, where the photon propagation depends on its polarization and can be faster than the speed of light [268,269,270,271]. In general the Lagrangian contains, besides the Maxwell term, curvature couplings of the form bRF_{mn}F^{mn}, cR_{mn}F^{ma}F^n{}_a and dR_{mnab}F^{mn}F^{ab}, together with a derivative term with coefficient f, where b, c, d and f are constants [268]. The last term can be neglected with respect to the other terms, since it leads to higher order derivative terms in the equations of motion [272]. In order to proceed, a different approach from the one used in the previous section will be followed here. This makes it possible to determine the spectrum of the resulting magnetic field. Instead of assuming that the energy density of the magnetic field at the time of horizon crossing corresponds to the one calculated using the Gibbons-Hawking temperature, the spectrum of the resulting primordial field will be calculated by determining the Bogoliubov coefficients explicitly. These give the particle production and thus the spectral energy density of the primordial magnetic field. The background model is described by two stages: an inflationary stage during which the correction terms coupled to curvature are important, and a radiation dominated stage governed by standard Maxwell electrodynamics. Since the resulting field strength will be compared with observational values at the galactic scale, corresponding to 1 Mpc today, it is not necessary to include the evolution during the matter dominated period. Galactic length scales re-enter the horizon during the radiation dominated stage, as can be seen from the temperature T at which a scale λ < λ_eq ∼ 14 Ω_m^−1 h^−2 Mpc crosses back into the horizon. Thus a galactic scale of order 1 Mpc enters the horizon when the universe is at a temperature of about 78 eV, and hence inside the radiation dominated era, since radiation-matter equality occurs at T_eq = 5.6 Ω_m h^2 eV. The background cosmology is chosen such that a stage of inflation with a(η) ∝ |η|^β is matched to the radiation dominated era at η = η_1; in the following a_1 ≡ 1. De Sitter inflation corresponds to β = −1, and for β < −1 power-law inflation takes place. The Maxwell tensor is written in terms of the gauge potential A_m, that is F_{mn} = ∂_mA_n − ∂_nA_m. Furthermore, the radiation gauge, A_0 = 0, ∂_λA^λ = 0, will be used, and the gauge potential is expanded in Fourier modes. However, since the Lagrangian has additional terms coupling the electromagnetic field to the curvature, the commutation relations between the operator A_j and its canonical momentum π^{A_j} = ∂L/∂(∂_0A_j) are not the canonical ones. It is, however, possible to define a canonical field which does satisfy the standard commutation relations, as will be done explicitly below.
It is that field that is used to quantize the theory and calculate the production of particles. The mode functions satisfy where a dot indicates d dη and In the case where the additional terms in the lagrangian are absent, that is b = c = d = 0, the mode equation for A k reduces to a simple harmonic oscillator equation. In this case A k itself can be used to implement the standard quantization scheme. In general, however, it is necessary to use the canonical field Ψ µ (η, x) and its Fourier amplitude Ψ(η, k) defined by, respectively, With this the mode equation for Ψ(η, k) is given by where a new dimensionless variable z ≡ −kη has been defined and ′ ≡ d dz . Moreover, 2.10) and Here the maximally amplified (comoving) wavenumber k 1 has been defined by k 1 ≡ 1 |η 1 | . Furthermore H 1 is the value of the Hubble paramter at the beginning of the radiation dominated stage at η 1 . It is related to k 1 by k 1 ∼ H 1 . Thus the canonical field satisfies the equation of a harmonic oscillator. This is also the case of a free scalar field in flat space-time. Therefore the canonical quantization procedure will be applied to the canonical field Ψ, which will be written as Hence it will be required that the mode functions f k (x) ≡ Ψe i k· x /(2π) 3 4 and f * k form an orthonormal set, that is satisfying [273], Furthermore, the scalar product is defined by 2.14) where dΣ m = n m dΣ and n m is a future-directed unit vector orthogonal to the space-like hypersurface Σ which is taken to be a Cauchy surface. Moreover, dΣ is the volume element in Σ. Also the notation f k (x) Since Ψ is quantized in flat space-time, dΣ m = δ m 0 d 3 x and the normalization condition on the mode functions f k reduces to which is the Wronskian of the solutions of the differential equation (6.2.9). The field equation in real space for the Fourier transform Ψ(η, x), assuming it to be real, can be derived from the Lagrangian The effective mass can be determined by going back to k-space. Using equations (6.2.9) and (6.2.10) It can be verified that Ψ µ (η, x) and its canonical momentum satisfy the standard commutation relations. The time-dependent effective mass m eff (η) reflects the dynamics of the cosmological background. It also indicates that there is no unique vacuum state. Having one set of orthonormal functions f k another orthonormal set of mode functionsf k can be found. Then the canonical field Ψ µ has the expansion in terms of the annihilation and creation operators a for all k and λ and a new Fock space. Since both sets of mode functions are complete, they are related by the Bogolubov transformation [273], where α k q and β k q are the Bogolubov coefficients satisfying k (α q k α * r k − β q k β * r k ) = δ q r . Moreover, it is found that, suppressing the index λ, [273], Thus, equation (6.2.21) implies that the vacuum state |0 is in general not annihilated by a k , but rather gives This means that the expectation value of the number operator N k = a † k a k of f k -mode particles in the state |0 is given by (6.2.23) In order to determine the particle production due to the time-dependent cosmological background the mode functions are matched at the transition time η = η 1 . Furthermore, on subhorizon scales corresponding to z ≫ 1, the mode equation (6.2.9) reduces to the equation for a free harmonic oscillators and therefore does not give any important contribution. 
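The matching procedure just described can be illustrated with a deliberately simple toy case: a Bunch-Davies mode of a massless, minimally coupled test field in de Sitter space is matched to plane waves at η_1, and the Bogoliubov coefficients are read off. This is not the photon mode function of the model at hand (whose superhorizon behaviour is governed by the index ν introduced below); it is only a sketch of the machinery.

import numpy as np

# Toy Bogoliubov matching (illustrative only): Bunch-Davies mode -> plane waves at eta_1.
k, eta1 = 1.0, -1e-3           # mode and conformal matching time, with k|eta_1| << 1

def f(eta):                    # Bunch-Davies mode function
    return np.exp(-1j*k*eta) * (1 - 1j/(k*eta)) / np.sqrt(2*k)

def fprime(eta):               # its conformal-time derivative
    return np.exp(-1j*k*eta) * (-1j*k - 1/eta + 1j/(k*eta**2)) / np.sqrt(2*k)

# match f and f' to (c_+ e^{-ik eta} + c_- e^{+ik eta})/sqrt(2k) at eta_1
c_plus  = np.sqrt(k/2) * (f(eta1) + 1j*fprime(eta1)/k) * np.exp( 1j*k*eta1)
c_minus = np.sqrt(k/2) * (f(eta1) - 1j*fprime(eta1)/k) * np.exp(-1j*k*eta1)

print(abs(c_plus)**2 - abs(c_minus)**2)        # ~1   (Bogoliubov normalisation)
print(abs(c_minus)**2, 1/(4*(k*eta1)**4))      # particle number ~ 1/(4 (k eta_1)^4)

The first printed number confirms |c_+|^2 − |c_−|^2 = 1, and the second shows the familiar growth of the particle number as |kη_1| → 0; in the model of the text the corresponding exponent is set instead by ν.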
Only modes on superhorizon scales are relevant, since in that case, for z ≪ 1, the mode equation can be approximated by The particular choice β = −1 describes de Sitter inflation, and in this case ξ 1 = 0 and ξ 2 = 1 leading to a plane wave solution which was also noted in [268,35]. Furthermore, β = −2 implies ξ 1 = 0, but ξ 2 = 18b+5c+4d 18b+4c+2d . Equation (6.2.24) solved during power law inflation, β < −1 and β = −2, results in the following solution in terms of the Hankel function of the second kind, H (2) ν (x), which gives the correctly normalized incoming wave function for η → −∞ for ξ 2 > 0 . This means that the incoming vacuum solution at infinity is a plane wave solution and moreover approaches the positive frequency solution in Minkowski space-time. It is assumed that electrodynamics becomes standard Maxwell electrodynamics at the beginning of the radiation dominated stage at η = η 1 . Thus the terms due to the interaction between curvature and the electromagnetic field in the mode equation (6.2.24) can be neglected which leads to a free harmonic oscillator equation which is solved by the superposition of plane waves, 2.27) where z 1 ≡ k|η 1 | and c ± are the Bogoliubov coefficients, corresponding to α k q = c + δ k q and β * k q = c − δ k q . Therefore to determine the magnetic field energy spectrum, the Bogolubov coefficients are calculated by matching the solutions of the gauge potential and its first derivative at η = η 1 on superhorizon scales. Using the small argument limit of the Hankel functions [274], this leads to β = − 3 2 [272] |c where it was used that in the approximation used here, In the case β = − 3 2 the limiting behavior of the mode function on superhorizon scales leads to a a divergent factor ln 2 √ ξ 2 k k 1 in |c − | 2 . Thus, we will not pursue this case any further. Including both polarization states the total spectral energy density of the photons is given by (cf., e.g., [275]) Since the electric field decays rapidly due to the high conductivity of the radiation dominated universe, the spectral energy density (6.2.29) gives a measure of the magnetic field energy density, ρ B . Using the density parameter of radiation, Ω γ = H 1 H 2 a 1 a 4 , the ratio of magnetic over background radiation energy density r is given for β = −2, − 3 2 , −1, by [272] r where M P l is the Planck mass. The magnetic field energy density can also be calculated using the two point function of the magnetic field, B µ ( k)B * ν ( k ′ ) . This leads to an expression similar to (6.2.30). Furthermore, the form of the magnetic field spectrum (6.2.30) imposes the constraint ν ≤ 3 2 . This implies the range for β given by −3 < β < −1 taking into account the constraint from power law inflation. Using the constraint which was used to derive the mode equation ( the maximal value of r which can be achieved within this model can be estimated. It is found to be, for β = −2, − 3 2 , −1, at ω G = 10 −14 Hz corresponding to a galactic scale of 1 Mpc, and using the maximal amplified frequency evaluated today, ω 1 (η 0 ) = 6 × 10 11 H 1 M P l 1 2 Hz, [272] In figure (2) log 10 r max is shown for the case that the parameters determining the contributions due to the quantum corrections are all of the same order, b ∼ c ∼ d. 
In this case the constraint on µ 1 leads to a lower bound on the parameter b, given by b min ≡ 10 −45 β(7β − 10) (6.2.33) As can be appreciated from figure (2) inflation, corresponding to β = −1, there is no significant magnetic field generation since the mode functions during inflation as well as during the radiation dominated stage are plane waves. Furthermore, it can be checked that the resulting maximum magnetic field strength satisfies the bound due to gravitational wave production [107]. It was shown in [107] that for magnetic fields created before nucleosynthesis conversion of magnetic field energy into gravitational wave energy takes place. This leads to a maximal value of r GW given by, [272] r GW ≃ 2 × 10 −61+52ν 2 2.34) at the galactic scale used here, λ = 1 Mpc. Thus, the requirement r max ≤ r GW leads to an upper limit on H 1 M P l , that is, Hence the allowed range is given by . This is always satisfied since H 1 < M P l . It is also important to check that the fluctuation in the electromagnetic field during inflation are within the perturbative regime and thus there is no strong backreaction on the dynamics of inflation. This effect can be estimated by calculating the energy density in the electromagnetic field and comparing it with the total energy density during inflation given by, The average value of the electromagnetic field energy density is found to be [272] where k * is the wave number corresponding to the scale which becomes superhorizon at the time η during inflation. Thus with k * ∼ −η −1 the ratio ρ (em) (η)/ρ ∼ ρ/M 4 P which is always smaller than one in the classical domain. Therefore, no backreaction effects have to be taken into account. In other models of magnetic field generation during inflation backreaction does play a role [276,277]. Trace anomaly Linear electrodynamics is scale invariant at classical level. Taking into account quantum corrections it is known that this classical symmetry is broken. Defining the energy momentum tensor by (see for example, [278]) then if the classical theory is scale invariant there will be a conserved current C a = Θ ab x b such that 3.2) A scale transformation is equivalent to a conformal transformation of the metric, such as g mn (x) → e 2σ g mn (x). (6.3.3) Since the trace vanishes of the electromagnetic field in linear electrodynamics, the current C m is conserved at the classical level. However, when quantum corrections are included scale transformations are no longer a symmetry, since the renormalized coupling constant depend on the scale. Namely, the renormalized coupling constant changes under the conformal transformation (6.3.3) as [278] g → g + σβ(g), (6.3.4) where β(g) is the beta function. The Lagrangian changes as σβ(g) ∂ ∂g L. Thus the current satisfies, In massless QED the trace of the energy-momentum tensor can be found explicitly as [278] Θ A similar expression for the trace of the energy-momentum is also found in QCD and other gauge theories. The trace anomaly was used in [279,280] (see also [281]) to study the generation of primordial magnetic fields during inflation. It induces a new term in Maxwell's equations, namely, in a flat Friedmann-Robertson-Walker background with scale factor a, [279,280], The constant κ depends on the theory which is used. For example, for the SU(N) gauge theory with N f generations of fermions in the fundamental representation [279,280], where α is the fine structure constant taken at the time of horizon crossing of the scale k −1 during inflation. 
Quantizing the gauge potential and finding the spectrum of the electromagnetic field in de Sitter inflation it is found that [280] Thus for large values of κ corresponding to a large number of light fermions during inflation the trace anomaly could provide an efficient mechanism to generate large magnetic fields during inflation to serve as seed magnetic fields for a subsequent amplification by a galactic dynamo. Coupling to other fields and varying couplings In [35] the coupling of a pseudoscalar axion to electrodynamics was proposed which for energy scales below the Peccei-Quinn symmetry breaking scale f a can be described by the effective Lagrangian, where g a is a coupling constant and the vacuum angle θ = φ a /f a , where φ a is the axion field. In [282] a similar model has been considered in detail, namely the coupling of a pseudo Goldstone boson to electrodynamics. It is interesting to note that in these models the created magnetic fields have non zero helicity [282,283,284,285]. In [282] the Lagrangian is assumed to be of the form where g = α/(2πf ) and f is the coupling constant and α the fine structure constant. The field equations in Fourier space are found to be [282] d 2 F ± dη 2 + k 2 ± gk dφ dη = 0, (6.4.3) where F ± = a 2 (B y ± iB z ) are the two circular polarization modes. The electric field satisfies the equation, [282] 4.4) where G ± = a 2 (E y ± iE z ). Using a potential of the form V (φ) = Λ 4 [1 − cos(φ/f )] for the scalar field, the resulting amplification of the magnetic field during inflation is too weak in order to provide a seed field for the galactic dynamo [282]. In [286] a model with N pseudo Goldstein bosons has been investigated. It has been found that even in the case of one pseudo Goldstein boson due to the helical nature of the generated magnetic field the process of the inverse cascade will result in a strong enough seed field at the time of galaxy formation. A time-dependent coupling of the electromagnetic field provides another possibility of amplification of perturbations in the electromagnetic field during inflation. This was first studied in [287] (see also, [288,289,290,291,292]) considering a lagrangian of the form [287] L ∼ e αφ F mn F mn , (6.4.5) where α is a constant. Perturbations in the electromagnetic field are amplified during de Sitter inflation. The resulting magnetic field for the choice of α = 20 is found to be as large as 6.5 × 10 −10 G today. In [293] (see also [294]) the generation of cosmologically relevant magnetic fields and their subsequent signature in the CMB has been discussed in a model where instead of the usual U(1) em gauge field, the photon, a hypercharge field Y m [295], which is associated with the U(1) Y hypercharge group before the electroweak phase transition when the SU(2)×U(1) Y symmetry is still unbroken, is coupled to a spectator field during inflation. After the electroweak phase transition the photon field is determined by the hypercharge field by A m = Y m cos θ W which gives rise to a primordial magnetic field in the postinflationary universe. Magnetogenesis in string theory A natural candidate for a scalar field coupled to the electromagnetic field is provided within the low energy limit of string theory. In the low energy limit string theory leads to Einstein gravity coupled to additional fields, such as the dilaton, which is a scalar field, and the antisymmetric tensor field which in four dimensions can be related to a pseudo-scalar field, the axion. 
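Before turning to the string-theoretic actions, a brief numerical aside on the helicity dependence of the axion-type coupling in equation (6.4.3): with an assumed constant g dφ/dη, chosen here purely for illustration, one circular polarization oscillates while the other grows exponentially, which is the origin of the nonzero helicity mentioned above.

import numpy as np
from scipy.integrate import solve_ivp

# F''_± + (k^2 ± g k dphi/deta) F_± = 0 with an assumed constant dphi/deta (arbitrary units).
k, g_dphi = 1.0, 10.0                   # wavenumber and assumed value of g * dphi/deta

def rhs(eta, y, sign):
    F, dF = y
    return [dF, -(k**2 + sign * g_dphi * k) * F]

for sign, label in [(+1, "F_+"), (-1, "F_-")]:
    sol = solve_ivp(lambda eta, y: rhs(eta, y, sign), (0.0, 5.0), [1.0, 0.0], max_step=0.01)
    print(label, f"|F(5)| = {abs(sol.y[0, -1]):.3e}")
# F_+ stays of order unity; F_- grows roughly as cosh(sqrt(g k dphi/deta - k^2) * eta).

For g k dφ/dη > k^2 the growth rate is √(g k dφ/dη − k^2); in the full model dφ/dη is of course time dependent and the amplification has to compete with the expansion.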
To lowest order in the inverse string tension α ′ and in the loop expansion controlled by the string coupling g s the action in the so-called string frame is given by where G D is Newton's constant in D dimensions, R D the Ricci scalar in D dimensions and φ is the dilaton. Indices take values between 0 and D − 1. The string coupling is given by g s = e φ . Superstring theory can be consistently quantized only in D = 10 and M-theory predicts 11 space-time dimensions. In order to reduce the resulting model to the four observed space-time dimensions, the extra space-time dimensions can either be treated as compactified to small extra dimensions, following the paradigm of Kaluza-Klein compactification, or one could model the observable universe as a four dimensional hypersurface embedded in a higher dimensional background space-time, which is the procedure followed in the models of brane cosmology [296]. Here the simplest model is used where the extra dimensions are compactified on static tori with small, constant radii. Thus the action (6.5.1) is used in D = 4 dimensions [297,298,299]. It is difficult to implement the standard slow roll inflation paradigm in string cosmology, derived from the low energy limit of superstring theory. The reason for that is that the dilaton does not have an appropriate potential. The potential resulting from supersymmetry breaking is far too steep to allow for a slow roll phase in the evolution of the dilaton. Inflation driven by the kinetic energy of the dilaton, however, can be realized. This is the pre-big-bang model [297,298,299] where inflation takes place for negative times (pre-big-bang phase) and is matched to the standard radiation dominated stage for positive times (post-big-bang phase). Since in the low energy limit of superstring theory to lowest order Einstein gravity is recovered at cosmic time t = 0 there is a space-time singularity, which follows from the theorems of Penrose and Hawking. Thus higher order corrections have to be included in order to regularize the transition between the pre-and post-big-bang era. In general, when calculating perturbations in pre-big-bang inflation it is assumed that the background evolves from an asymptotically flat initial state at t → −∞ to a high curvature phase at around t = 0 which, however, never reaches a singular state. Only a few explicit, non-singular solutions are known and it seems difficult to determine the generic behaviour of pre-big-bang cosmologies [300]. During pre-big-bang inflation in four dimensions the scale factor behaves in the string frame in which the universe is expanding and accelerating, assuming the end of inflation at η 1 . The dilaton φ behaves in the low energy phase, η < η s , as [301] φ = − √ 3 ln |η| + const. (6.5.3) After some time η s higher order corrections in the inverse string length α ′ become important and the universe enters into a string phase which lasts until the end of inflation at η 1 . During the string phase the dilaton evolves as [301] φ = −2β ln |η| + const. where z s ≡ a 1 /a s and a s and a 1 are the scale factors at the beginning of the string phase and at the end of inflation, respectively. Maxwell's equations derived from the action (6.5.1) imply in the radiation gauge in Fourier space the mode equation for the gauge potential, [301,302] A where a prime denotes the derivative with respect to conformal time η. 
Matching the stage of pre-big-bang inflation to the radiation dominated era, quantizing the gauge potential and calculating the appropriate Bogoliubov coefficient results in the following values of the ratio of magnetic energy density and background radiation energy density [301], 5.7) where g 1 is the value of the string coupling at the beginning of the radiation era, ∆φ s = φ s −φ 1 and ω s ≡ ω 1 /z. In figure 3 the ratio r(ω) is plotted at galactic scale 1Mpc which corresponds to ω G = 10 −14 Mpc. Imposing that r(ω) < 1 for all frequencies, leads to the condition [301,302] z −2 s < g s /g 1 . This implies a lower bound on the value of the coupling at the beginning of the string phase, g s . From figure 3 it can be appreciated that for a duration of the string phase determined by z s > 10 20 and a string coupling g s less than 10 −42 the resulting magnetic field is strong enough to seed the galactic magnetic field directly. Furthermore, figure 3 shows that even for a very short string phase the resulting magnetic fields can be as strong as 10 −30 G which is the limiting value in case of action of a galactic dynamo in a universe with non vanishing cosmological constant. Without taking neither the string phase nor the cosmological constant into account, corresponding to a minimal required value of 10 −37 it was concluded in [303] that it is not possible to generate cosmologically relevant magnetic fields during pre-big-bang inflation. Even though in this section we focus on mechanisms which rely on the amplification of electromagnetic perturbations during inflation, we will briefly comment on a different model of magnetogenesis in string theory. In [255] the generation of primordial magnetic fields from heterotic cosmic strings is studied. Heterotic fundamental cosmic strings were ruled out by Witten for stability reasons [304]. However, as was shown in [305] the presence of branes offers a solution to the stability problem. In [306] heterotic cosmic strings are constructed by wrapping M5 branes around the 4-cycles of the Calabi-Yau manifold present in heterotic string theory. In [255] it was found that in a generalisation of the model of [306] the resulting heterotic strings are superconducting and as such can generate strong magnetic seed fields (see also section 5). Magnetogenesis from extra dimensions Extra dimensions played a role in gravity ever since the proposal by Kaluza [307] to explain gauge fields geometrically. Postulating a fifth dimension the components of the metric involving the fifth coordinate can be interpreted as the components of the gauge potential A m of electrodynamics and a scalar field φ, whose effective coupling to electrodynamics in four dimensions is that of a dilaton (see, e.g., [308]). Einstein's equations in vacuum in five dimensions imply Einstein's equations in four dimensions as well as Maxwell's equation for the gauge potential A m and the massless Klein-Gordon equation for the scalar field φ, if the dependence on the fifth coordinate is suppressed. However, despite its successful unification of gravity and electrodynamics there is still something missing in this picture. The point is how to explain that we have not observed the fifth dimension and why there is no dependence on the extra dimension. These problems were solved by Klein [309] assuming that the extra dimension is a circle of such a small radius that it is beyond observational limits. 
Using a Fourier expansion in the extra coordinate at each point in the four dimensional space-time there is an infinite number of four-dimensional fields. The zero mode results in the original theory of Kaluza where the fields have no dependence on the extra coordinate. The remaining part of the spectrum corresponds to massive modes. Extra dimensions appear naturally in models of string/M-theory which also admits solutions with large extra dimensions [310]. Contrary to the Kaluza-Klein picture in the case of large extra dimensions inspired by string theory, our observable four dimensional universe is described by a four-dimensional hyper surface (brane) embedded in a higher dimensional background space-time. The cosmological solutions on the brane are influenced by the curvature of the higher dimensional space-time projected onto the brane. This leads for example to additional terms in the Friedmann equation [296]. In models derived from higher dimensional gravity the four dimensional Planck mass, which in this section will be denoted by M 4 , is no longer a fundamental parameter, but the Ddimensional Planck mass M D . Assuming for simplicity that all extra dimensions are of the same characteristic size R, the four-dimensional and the D-dimensional Planck masses are related by, where α, β = 1, .., 3 and A, B = 4, .., 3 + n, n ≥ 1. a(η) and b(η) are the scale factor of the external, 3-dimensional space and the internal, n-dimensional space, respectively. Assuming that before a time η = −η 1 inflation takes place in the external dimensions while the extra dimensions are collapsing. After this time the universe enters the standard radiation dominated era with the extra dimensions frozen to a small size. The first stage is described by a generalized vacuum Kasner solution. Thus the behaviour of the scale factors is determined by, In the following we set a 1 = 1 = b 1 . The Kasner exponents σ and λ are given in terms of the number of extra dimensions by [315], . (6.6.5) In D dimensions Maxwell's equations are given by ∇ÃFÃB = 0 with FÃB = ∇ [Ã AB ] ,Ã,B = 0, .., n + 3. Assuming that A µ = A µ (x α , y B , η) , A B = 0 and using the radiation gauge A 0 = 0, where ∂ 0 ≡ ∂ ∂η , ∂ µ ≡ ∂ ∂x µ and ∂ B ≡ ∂ ∂y B . Defining the canonical field Ψ µ = b n 2 A µ the following expansion is used where l m is a (3 + n)−vector with components l µ ≡ k µ , l A ≡ q A . Moreover, l · X = k · x + q · y. α runs over the polarizations. During inflation for η < −η 1 the mode equation is given by The solution for one extra dimension, n = 1, is given by [316] Ψ l = √ π 2 e π 2 qη 1 (−kη) where H (2) ν (z) is the Hankel function of the second kind. The approximate solution for n > 1 is found by solving the mode equation (6.6.8) in two regimes determined by whether the term due to the modes q in the extra dimensions, (−η/η 1 ) 2β q 2 is larger or smaller than k 2 , where k are the comoving wave numbers in the observable three dimensional space. When the contribution due to the wave numbers q in the extra dimensions is subdominant − η η 1 2β q 2 < k 2 , or ω q < ω k in terms of the physical frequencies ω k = k/a(η) and ω q = q/b(η), the canonical field is approximately given by [316], where H (2) µ is the Hankel function of the second kind and µ 2 ≡ 1 4 + N ⇒ µ = 1 2 (nλ − 1). In the other case, that is for − η η 1 2β q 2 > k 2 , or ω q > ω k , it is found that [316] where κ ≡ 1 β+1 and µ = 1 2 (nλ − 1). 
The total magnetic energy density is given by [317] |c − | 2 dV, (6.6.14) where, assuming that the volume consists of two spheres, dV = 1 Γ( n 2 ) q n−1 dq. At η = −η 1 the comoving wavenumbers k and q are equal to the physical momenta, since a 1 = 1 = b 1 . The spectral energy density ρ(ω k ) = dρ/dlogω k is then given by where Y ≡ ωq ω k , and ω k = k a , ω q = q b . To calculate the ratio of energy density in the magnetic field over the background radiation energy density it will be assumed that ω 1 = k 1 a and k 1 ∼ H 1 is the maximal wave number, leaving the horizon at the end of inflation and thus at end of the dynamical higher dimensional phase η = −η 1 . Equally it is required that there is a maximal wave number q max in q-space corresponding to the modes in the extra dimensions. These assumptions are justified by the sudden transition approximation used here. At the time of transition η = −η 1 the metric is continuous but not its first derivative. For modes with periods much larger than the duration of the transition the transition can be treated as instantaneous. However, in order to avoid an ultraviolet divergency an upper cut-off has to be imposed [318,319,320]. The following expressions for r(ω k ) are obtained [316]: (1) For q = 0, n = 1 (6.6.16) (2) For q > 0 and n = 1 where ω qmax (η) = qmax b and it was assumed that ω qmax > ω k . (3) For q = 0 and n > 1 (6.6.18) Furthermore, nλ = 3n n+2 . Since nλ < 4 the resulting spectrum for r(ω k ) is increasing in frequency. (4) For q > 0 and n > 1 where subleading terms have been omitted and ω qmax > ω k was assumed. The resulting spectrum is growing in frequency. The resulting spectrum of the primordial magnetic field is characterized by the Hubble parameter at the beginning of the radiation dominated era H 1 , the D-dimensional Planck mass M D and the number of extra dimensions, n. In addition, in the case where the modes lying in the extra dimensions are taken into account, there is the maximal physical frequency ω qmax which is estimated assuming q max ∼ k 1 . The spectrum is constrained by r(ω) < 1 for all frequencies. The ratio of the D-dimensional over the four dimensional Planck mass is limited by the observation that Newtonian gravity is valid at least down to scales of the order of 1 mm [314]. This leads to the lower bound M D M 4 ≥ (1.616 × 10 −32 ) n n+2 . Furthermore, with T 1 the temperature at the beginning of the radiation epoch, big bang nucleosynthesis requires that T 1 > 10 MeV. This imposes a bound on H 1 by using H 1 M 4 = 1.66g where for T 1 > 300 GeV the number of effective degrees of freedom is given by g * (T 1 ) = 106.75 (see, e.g., [163]), namely, log H 1 M 4 > −40.94. Imposing the various constraints leads to upper limits on r(ω k ) calculated at the galactic scale corresponding to 1 Mpc, that is, ω G = 10 −14 Hz and taking the maximally amplified frequency evaluated today to be ω 1 ∼ 6 × 10 11 Hz H 1 M P l 1 2 [316]. Not taking into account modes in the extra dimensions leads for one extra dimension to magnetic field strengths B s < 10 −39 G. However, taking into account these modes substantially increases the upper value of the magnetic field to upto 10 −8 G. Imposing the constraint T 1 ∼ M 5 leads to magnetic seed fields B s < 10 −23 G. In models with more than one extra dimension, n > 1, strong magnetic seed fields can be created if the internal momenta are taken into account. 
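Both constraints just quoted are easy to reproduce numerically, assuming the usual Kaluza-Klein relation M_4^2 = M_D^{n+2} R^n between the four- and D-dimensional Planck masses (the text above only states the bounds derived from it).

import math

# Lower bound on M_D/M_4 from R < 1 mm, and BBN bound on H_1/M_4.
M4_GeV   = 1.22e19            # four-dimensional Planck mass
l_Pl_cm  = 1.616e-33          # Planck length
R_max_cm = 0.1                # Newtonian gravity tested down to ~1 mm

for n in (1, 2, 3):
    bound = (l_Pl_cm / R_max_cm) ** (n / (n + 2))       # (1.616e-32)^(n/(n+2))
    print(f"n={n}:  M_D/M_4 >= {bound:.1e}")

# BBN bound: H_1/M_4 = 1.66 sqrt(g_*) (T_1/M_4)^2 with T_1 = 10 MeV and g_* = 106.75
g_star, T1_GeV = 106.75, 1e-2
H1_over_M4 = 1.66 * math.sqrt(g_star) * (T1_GeV / M4_GeV) ** 2
print(f"log10(H_1/M_4) > {math.log10(H1_over_M4):.2f}")  # ~ -40.94

The last line reproduces the bound log_10(H_1/M_4) > −40.94 quoted above.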
For n > 1, dropping the assumption that the temperature at the beginning of the radiation epoch is of the order of the D-dimensional Planck scale allows for the creation of seed magnetic fields with strengths of up to 10^−10 G. For more than three extra dimensions this also holds when T_1 ∼ M_D is assumed. With this assumption, two and three extra dimensions result in weaker magnetic seed fields, with maximal field strengths B_s < 10^−18 G for two extra dimensions and B_s < 10^−13 G for three extra dimensions.

Magnetogenesis in theories with broken Lorentz symmetry

The spontaneous breaking of Lorentz invariance is present in certain solutions of string field theory; it leads to a non-vanishing photon mass, described by a Lagrangian in which m_L is a light mass scale compared with the typical string energy scale and 2ℓ is a positive integer [321]. In [321] cosmologically interesting magnetic field strengths are found for a diverse choice of parameters of the model. In [322] an extension of the standard model is presented in which Lorentz symmetry is broken spontaneously due to new physics at the Planck scale. In the pure photon sector of the extended QED the Lagrangian (6.7.2) contains, in addition to the Maxwell term, a term with coupling (k_F)_{plmn}, which is real and dimensionless, and a term with coupling (k_{AF})_p, which is real and has dimensions of mass [322]. In the context of the generation of primordial magnetic fields during inflation, the Lagrangian (6.7.2) has been investigated in [323,324]. Analyzing the model that results from taking into account only the first two terms in (6.7.2), it has been shown in [323] that magnetic fields of nanogauss strength on a megaparsec scale at present can be generated for a wide range of parameters. In [324] (see also [325]) primordial magnetogenesis during inflation has been discussed in a model resulting from considering only the first and third term in (6.7.2). In this case the generated magnetic field is found to be maximally helical at the end of de Sitter inflation. The subsequent inverse cascade of the magnetic field spectrum, taking place in the turbulent plasma during the radiation dominated era, results in a magnetic field with an interesting field strength and correlation length at the time of the protogalactic collapse. Noncommutativity in space provides a different possibility of breaking Lorentz invariance; in this case the symmetry is explicitly broken, so that all amplitudes are frame dependent. In the context of the generation of primordial magnetic fields this was first discussed in [326]. Noncommutative spaces occur in string theory in the Seiberg-Witten limit [327] and are described by the commutation relation for the coordinate operators x̂^m, [x̂^m, x̂^n] = iθ^{mn}, where θ^{mn} is a constant with the dimension of a length squared, conveniently parametrized in terms of the noncommutativity scale Λ_NC as θ^{mn} ≡ c^{mn}/Λ_NC^2, with c^{mn} an antisymmetric tensor with components of order unity [326]. Moreover, in order to avoid problems with unitarity and causality, θ^{0µ} = 0 is chosen, so that only space is noncommutative. In [328] it was shown that the magnetic dipole moment of a charged massive particle, such as the electron, receives quantum corrections at one loop which are spin independent and proportional to θ^µ ≡ ε^{µνκ}θ_{νκ}. This leads to a non-vanishing magnetic field proportional to θ^µ when summing over all possible states. However, choosing the noncommutativity scale Λ_NC ≃ 10^3 GeV, the authors find the resulting magnetic field to be too weak to successfully seed the galactic dynamo.
In [329] noncommutative quantum field theory was used for the U(1) gauge field leading to a modified Lagrangian describing the photon which is of the form of the Lagrangian (6.7.2) including the first and the third term. Moreover, in this case (k AF ) m is nonzero only for the spatial components and given by the noncommutativity parameter θ µ . Using the approach of [330] to implement the stringy spacetime uncertainty relation which leads to an effective noncommutative space-time [331] investigate primordial magnetogenesis in dilaton electromagnetism. In [332] the generation of primordial magnetic fields in inflation with a cut-off is investigated. The effect of the cut-off is to add extra terms to the action which in the model under consideration describes a photon with a mass term during inflation. The free parameter of the model can be chosen such that cosmologically relevant magnetic fields are obtained. Magnetogenesis and nonlinear electrodynamics So far the models in this section proposed to generate primordial magnetic fields are all situated within linear electrodynamics. In order to amplify perturbations in the electromagnetic field during inflation the electromagnetic field is coupled to a scalar field or curvature terms, quantum corrections resulting in the trace anomaly, symmetries are broken or dynamical extra dimensions are taken into account. All of these leading to the breaking of conformal invariance of Maxwell's equations in four dimensions. Nonlinear electrodynamics provides yet another possibility of breaking conformal invariance of the electromagnetic field. On large scales present day observations confirm the linearity in the electric and magnetic fields of Maxwell's equations in vacuum. However, as smaller and smaller scales are approached one might expect deviations from linearity due to the fact that charges become more localized (see e.g. [127]) and hence increases the energy density. This led to the hypothesis that there is some upper bound on the field strengths avoiding thus an infinite self-energy of a charged particle. A first example of a classical singularity-free theory of the electron was proposed by Born and later by Born and Infeld [333,334,335,336]. The modified field equations can be derived from the Lagrangian of the form [334] where b is a maximal field strength. In this section vector notation will be used to make expressions easier to read. The electromagnetic field is modified at short distances and its energy density is finite. One of the problems with this type of theory is its quantization [127]. Nonlinear electromagnetism had been considered before by Mie [337]. However, it was discarded since it depended on the absolute values of the gauge potential [334]. Another place where nonlinear electrodynamics arises is in quantum electrodynamics. Virtual electron pair creation induces a self-coupling of the electromagnetic field. Heisenberg and Euler calculated the self-interaction energy for slowly varying, but arbitrarily strong electromagnetic fields [338,339,340]. It is described by the lagrangian [338] and M 2 ≡ 2X − 2iY . Expanding the lagrangian leads to [338,339,340] L = X + κ 0 X 2 + κ 1 Y 2 . , where α is the fine structure constant and m e the electron mass. Furthermore, the propagation of a photon in an external electromagnetic field can be described effectively by the Heisenberg-Euler langrangian. Moreover, the transition amplitude for photon splitting in quantum electrodynamics is nonvanishing in this case. 
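For orientation, the scale at which these QED nonlinearities become of order unity is the critical field B_cr = m_e^2 c^3/(eħ). The short script below (Gaussian units, illustrative field values) shows why such effects matter for strongly magnetized neutron stars but are utterly negligible for any putative large-scale seed field.

# Critical field of Euler-Heisenberg electrodynamics and the rough size of the
# leading nonlinear correction, of order alpha*(B/B_cr)^2, relative to the Maxwell term.
m_e_c2 = 8.187e-7      # erg
e_esu  = 4.803e-10     # statC
hbar   = 1.055e-27     # erg s
c      = 2.998e10      # cm/s
alpha  = 1/137.0

B_cr = m_e_c2**2 / (e_esu * hbar * c)
print(f"B_cr ~ {B_cr:.2e} G")                              # ~4.4e13 G
for B in (1e15, 1e-20):                                    # magnetar vs. a putative seed field
    print(f"B = {B:.0e} G  ->  alpha*(B/B_cr)^2 ~ {alpha*(B/B_cr)**2:.1e}")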
Photon splitting is a process in which an electron-positron pair is created and one of the particles emits a photon before annihilating with the other particle to generate the second photon. Thus an initial one-photon state transforms into a two-photon final state. In principle, this might lead to observational effects, e.g., on the electromagnetic radiation coming from neutron stars, which are known to have strong magnetic fields [340,341,342]. In particular, certain features in the spectra of pulsars can be explained by photon splitting [343,344]. Finally, Born-Infeld type actions also appear as a low energy effective action of open strings [345,346,347,348]. As was shown in [349], the low energy dynamics of D-branes is described by the Dirac-Born-Infeld action. To test whether nonlinear electrodynamics can lead to the generation of cosmologically relevant primordial magnetic fields, the following model will be considered. A stage of de Sitter inflation followed by reheating is matched to a standard radiation dominated era. During inflation quantum fluctuations are excited within the horizon; upon leaving the causal domain they become classical perturbations. It is assumed that electrodynamics is nonlinear during inflation and becomes linear once the universe enters reheating and, subsequently, the radiation dominated stage. This latter assumption ensures that the evolution during the radiation dominated era and the subsequent stages of the universe is described by the standard model of cosmology. The study of nonlinear electrodynamics in this setting was put forward in [350] and independently in [351]. The field equations take the form ∇_m P^{mn} = 0 (6.8.6), where P^{mn} = −(L_X F^{mn} + L_Y {}^*F^{mn}) and the dual bi-vector {}^*F^{mn} is given by {}^*F^{mn} = (1/(2√−g)) ǫ^{mnab}F_{ab}, with ǫ_{mnab} the Levi-Civita tensor and ǫ_{0123} = +1. Furthermore, L_A denotes L_A = ∂L/∂A, and

∇_m {}^*F^{mn} = 0, (6.8.7)

which implies that F_{mn} = ∂_mA_n − ∂_nA_m. The background metric is assumed to be of the form ds^2 = a^2(η)(−dη^2 + dx^2) (6.8.8). Writing the Maxwell tensor in terms of electric and magnetic fields in the "lab" frame (cf. equation (6.1.2)), equations (6.8.6) and (6.8.7) imply a set of Maxwell-like equations (6.8.9)–(6.8.12) [350]; in particular,

∇ · B̂ = 0, (6.8.11)
(1/a^2) ∂_η(a^2 B̂) + curl Ê = 0. (6.8.12)

From these equations two wave-type equations can be derived which, however, contrary to the case of linear electrodynamics, do not decouple the electric and magnetic fields. Taking the curl of equation (6.8.10) and using equations (6.8.11) and (6.8.12), a wave-type equation (6.8.13) for the magnetic field B̂ can be found [350]. Similarly, taking the time derivative of equation (6.8.10) and using the remaining equations results in a wave-type equation (6.8.14) for the electric field Ê [350]. In the long wavelength approximation spatial gradients can be neglected [353]. Thus, neglecting spatial derivatives, equation (6.8.13) implies equation (6.8.15), where B_k ≡ a^2 B̂_k, E_k ≡ a^2 Ê_k and a prime denotes the derivative with respect to conformal time η, that is ′ ≡ d/dη. Assuming that the Lagrangian depends only on X, that is L_Y = 0, equation (6.8.15) implies equation (6.8.16), where K_k is a constant vector and L_X ≠ 0. For K_k ≡ 0 linear electrodynamics is recovered, for which B_k = const. In the long wavelength limit, the wave-like equation for the electric field, equation (6.8.14), reduces to equation (6.8.17); integrating equation (6.8.17) results in equation (6.8.18), where P_k is a constant vector. The equations determining the magnetic and electric fields, (6.8.15) and (6.8.17), are coupled nontrivially for L_Y ≠ 0.
Thus in order to find solutions, the Lagrangian will be considered to be only a function of X, L = L(X). Furthermore, since X = 1 2 ( B 2 − Ê 2 ) it is useful to find equations for E 2 k and B 2 k which are given by, for P 2 k > 0, (6.8.20) Assuming that the constant vector in equation (6.8.18) vanishes, P k = 0, leads to a significant simplification. In this case, equation (6.8.18) for L = L(X) can be solved immediately, giving for the electric field where M k is a constant vector. Thus for P k = 0 equation (6.8.20) leads to an equation only involving X and L X , namely, A particular model In order to find explicit solutions of equation (6.8.22) a particular lagrangian has to be chosen. For simplicity the lagrangian is chosen to be of the form where δ is a dimensionless parameter and Λ a dimensional constant. This is the abelian Pagels-Tomboulis model [354,355]. An effective model of low energy QCD is provided by its nonabelian version [356]. Clearly, linear electrodynamics is recovered for the choice δ = 1. The lagrangian (6.8.23) is chosen since it leads to a significant simplification of the equations, but still allows to study the effects of a strongly nonlinear theory of electrodynamics on the generation of primordial magnetic fields. In general, the energy-momentum tensor derived from a lagrangian L(X) is given by T mn = 1 4π L X g ab F ma F bn + g mn L . (6.8.24) Furthermore, for the lagrangian (6.8.23) the trace of the energy-momentum tensor is given by T = 1 − δ π L, (6.8. 25) which vanishes only in the case δ = 1 that is for linear electrodynamics. The energy-momentum tensor is calculated explicitly to check whether there are any constraints on the parameter δ. Decomposing the Maxwell tensor with respect to a fundamental observer with 4-velocity u m into an electric field E and a magnetic field B, implies [357,114,115,119], F mn = 2E [m u n] − η mnks u k B s , (6.8.26) where η mnks = √ −gǫ mnks and u m u m = −1. Thus the electric and magnetic field are given, respectively, by E m = F nm u n and B m = 1 2 η mnkl u n F kl . The lab frame is defined by the proper lab coordinates (t, r) determined by dt = adη, d r = ad x. Applying a coordinate transformation then gives the relation between the fields measured by a fundamental observer and the lab frame. Using the four velocity of the fluid u m = (a −1 , 0, 0, 0) results in the relation [358] E µ = aE µ ,B µ = aB µ . (6.8.27) As shown in [357,114,115,119] the energy-momentum tensor of an electromagnetic field can be cast into the form of an imperfect fluid. The energy-momentum tensor of an imperfect fluid is of the form (see for example, [357,114,115,119]), T mn = ρu m u n + ph mn + 2q (m u n) + π mn , (6.8.28) where ρ is the energy density, p the pressure, q m the heat flux vector and π mn an anisotropic pressure contribution of the fluid. h mn = g mn + u m u n is the metric on the space-like hypersurfaces orthogonal to u m . With q m u m = 0 and π mn u m = 0, ρ = T mn u m u n q a = −T mn u m h n a Q ab ≡ T mn h m a h n b Q ab = ph ab + π ab . (6.8.29) Therefore using equations (6.8.24) and (6.8.26) the energy density and the heat flux vector for the Pagels-Tomboulis model (6.8.23) are found to be Imposing the condition that π ab is trace-free then the pressure and π ab are given by Thus considering ρ (cf. equation (6.8.30)) in general there is a constraint on δ which is δ ≥ 1 2 required by the positivity of the energy density ρ. 
Estimating the primordial magnetic field strength

During de Sitter inflation electrodynamics is nonlinear and described by the Pagels-Tomboulis lagrangian (6.8.23). Thus in the very early universe electrodynamics is highly nonlinear and very different from standard Maxwell electrodynamics. At the end of inflation electrodynamics is assumed to become linear, so that the description of reheating and the subsequent radiation dominated stage is unaltered. The end of inflation is assumed to be at $\eta = \eta_1$. The equations determining the electric and magnetic field in the long wavelength limit, (6.8.19) and (6.8.20), are coupled, since X depends on $\hat{\vec{E}}^2$ and $\hat{\vec{B}}^2$; in particular, the invariant X reads $2a^4 X \simeq B_k^2 - E_k^2$. Therefore, to find approximate solutions, three different regimes will be considered. Following [35], it is assumed that quantum fluctuations in the electromagnetic field lead to initial electric and magnetic fields. The energy density at the time of first horizon crossing during inflation, say at a time $\eta_2$, is estimated to be of the order of the energy density of a thermal bath at the Gibbons-Hawking temperature of de Sitter space. Furthermore, it is useful to recall the energy density in the magnetic field at the time of first horizon crossing, corresponding to a value of the scale factor $a_2$, given by $\rho_B(a_2) \simeq H^4$, with H determined by the constant energy density $M^4$ during inflation. Assuming that initially the magnetic and electric energy densities are of the same order, there is an equivalent expression for the electric energy density at the first horizon crossing. While after the end of inflation, during the radiation dominated epoch, the electric field rapidly decays due to plasma effects, the magnetic field remains frozen-in.

(1) $B_k^2 \simeq \mathcal{O}(E_k^2)$: In this case equation (6.8.22) can be approximately solved and leads to the magnetic field strength at the end of inflation, corresponding to the value of the scale factor $a_1$, determined by equation (6.8.34) [350], where $N(\lambda)$ is the number of e-folds before the end of inflation at which λ left the horizon, that is, $e^{N(\lambda)} = a_1/a_2$. Moreover, $m \equiv |\vec{K}_k|/(M_{Pl}|\vec{M}_k|)$ and x denotes conformal time η in units of $M_{Pl}^{-1}$. Furthermore, the constant $C_1$ is chosen such that $(\delta - 1)C_1 = -x_2$. Using that during de Sitter inflation $a = a_1(\eta_1/\eta)$, together with the number of e-folds, results in the magnetic energy density $\rho_B$ at the end of inflation,
$\rho_B(a_1) \simeq \rho_B(a_2)\, e^{-4N(\lambda)} \cosh^2\!\left[-m x_1\left(e^{N(\lambda)} - 1\right)\right],$ (6.8.35)
where $\rho_B = \hat{B}^2/(8\pi)$. Together with the expression for the number of e-folds remaining until the end of inflation after the comoving scale λ has crossed the horizon during inflation (cf. equation (4.3.1)), this results in the ratio of magnetic to background radiation energy density r at the end of inflation [350], which contains the strong suppression factor $10^{-104}$ together with a power of $(\lambda/\mathrm{Mpc})$. There are bounds on the parameter m coming from the requirement that r should be larger than a lower value in order to be strong enough to seed the galactic magnetic field (cf. equation (6.8.39)). For $\vec{P}_k^2 > 0$ the resulting magnetic fields are very weak, and $r(a_1) \ll 10^{-37}$ for typical values of the cosmological parameters. However, in the case $\vec{P}_k^2 = 0$, for δ > 19.5 and a reheat temperature $T_{RH} = 10^9$ GeV, primordial magnetic fields result which could successfully act as seed fields for the galactic dynamo.
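A minimal numerical reading of equation (6.8.35) is given below: the sketch evaluates the ratio $\rho_B(a_1)/\rho_B(a_2)$ in log-space for a few values of the combination $m\,x_1$. Both N and the chosen values of $m\,x_1$ are illustrative assumptions, picked only to exhibit how the $\cosh^2$ factor can offset part of the $e^{-4N}$ dilution.

```python
import numpy as np

# Minimal numerical sketch of equation (6.8.35):
#   rho_B(a_1) / rho_B(a_2) ≈ exp(-4N) * cosh^2[ m x_1 (e^N - 1) ]
# (cosh is even, so the sign of the argument is irrelevant).
# The values of N and of the combination m*x_1 below are illustrative assumptions.

def log10_ratio(N, m_x1):
    """log10 of rho_B(a_1)/rho_B(a_2), evaluated in log-space to avoid overflow."""
    y = abs(m_x1) * (np.exp(N) - 1.0)
    # ln cosh(y) = y - ln 2 + ln(1 + e^{-2y}); use the asymptotic form for large y
    ln_cosh = y - np.log(2.0) + np.log1p(np.exp(-2.0 * y)) if y < 30 else y - np.log(2.0)
    return (-4.0 * N + 2.0 * ln_cosh) / np.log(10.0)

N = 50.0
for m_x1 in (0.0, 1.0e-22, 1.0e-20):
    print(f"m*x_1 = {m_x1:.1e}:  log10[rho_B(a_1)/rho_B(a_2)] ≈ {log10_ratio(N, m_x1):.1f}")
```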
In summary, using the Pagels-Tomboulis model an example of a theory of nonlinear electrodynamics has been provided which can lead during inflation to sufficient amplification of perturbations in the electromagnetic field in order to seed the galactic magnetic field. In [351] the resulting magnetic field generated during inflation is estimated for Lagrangians of the form L = L(X). The magnetic field strength is obtained by neglecting the magnetic field contribution to X on superhorizon scales, since equation (6.8.12) implies $B_\mu \sim k\eta E_\mu$ on these scales. Moreover, neglecting the spatial gradient terms in equation (6.8.10), [351] find the scaling
$(L_X)^2 X \propto a^{-4}.$ (6.8.40)
Using this relation for Lagrangians of the form $L = -X + \sum_{i=2}^{n} c_i X^i$ and $L = -X\exp(-cX)$, where $c_i$ and c are constants, and the Born-Infeld Lagrangian (6.8.1), the resulting magnetic field strength at present is estimated. There is a range of parameters for which magnetic fields strong enough to directly seed the galactic magnetic field can be generated. In [359] the generation and evolution of primordial magnetic fields has been discussed during de Sitter inflation, reheating and the radiation dominated era in theories of nonlinear electrodynamics described by Lagrangian densities $L \sim X + \gamma X^{\delta}$ and $L \sim X + \mu^8/X$, where γ, δ and µ are constants. It was found that not only primordial magnetic fields of interesting field strengths can be generated but also the baryon asymmetry, by gravitational coupling between the baryon current and the curvature of the background. In [360] primordial magnetic field generation has been discussed in DBI inflation.

Summary and outlook

Although the origin of cosmic magnetism is still the subject of debate, the ubiquitous presence of large-scale B-fields with similar (of µG order) strengths in galaxies, galaxy clusters and high-redshift protogalactic structures seems to suggest a common, primordial origin for them. Very recent reports indicating the presence of extragalactic magnetic fields close to $10^{-15}$ G in low density regions of the universe also point towards the same direction. The possibility of a cosmological (pre-recombination) origin for all the large-scale magnetic fields is a relatively old suggestion and there have been numerous studies looking at the generation, the evolution and the potential implications of such primeval fields. There are still serious difficulties to overcome, however, especially when trying to produce the initial B-fields that will seed the galactic dynamo. Primordial magnetogenesis is still not a problem-free exercise, which probably explains the plethora of mechanisms proposed in the literature. Roughly speaking, magnetic seeds produced between inflation and recombination are too small in size, while those generated during inflation are generally too weak in strength. In either case, the galactic dynamo will not be able to operate successfully. The former of the aforementioned two problems is essentially due to causality, which severely constrains the coherence length of almost every B-field produced during the radiation era. The latter problem is attributed to the dramatic depletion suffered by typical inflationary magnetic fields. Primordial turbulence and magnetic-helicity conservation can in principle increase the initial coherence scale of magnetic seeds, especially of those generated during phase-transitions in the early radiation era.
Considerable effort has also been invested in the search for viable physical mechanisms that could amplify weak inflationary magnetic fields. Solutions to the magnetic strength problem are typically sought outside the realm of classical electromagnetism and/or that of standard cosmology, although conventional amplification mechanisms can also be found in the literature. The aim of this review is to provide an up-to-date and as inclusive as possible discussion on the current state of primordial magnetogenesis. Deciding whether the large-scale magnetic fields that we observe in the universe today are of cosmological origin, or not, would be a step of major importance for cosmology. If confirmed, such primordial fields could have affected in a variety of ways a number of physical processes that took place during the early, as well as the subsequent, evolution of the cosmos. Although the argument in favour of cosmological B-fields may not settle unless an unequivocal magnetic signature is detected in the CMB, their case gets stronger as more reports of magnetic fields at high redshifts and in empty intergalactic space appear in the literature. Upcoming observations may also help in this respect. A new generation of radio telescopes, like the Expanded Very Large Array (EVLA), the Low Frequency Array (LOFAR), the Long Wavelength Array (LWA) and the Square Kilometre Array (SKA), have large-scale magnetic fields in their lists of main targets. If nothing else, the expected influx of new data should put extra constraints that may allow us to distinguish between the various scenarios of magnetic generation and evolution. Information of a different type, but of analogous importance, may also come from CMB observations, like those associated with the ESA PLANCK satellite. At the same time, structure-formation simulations are becoming more sophisticated by the day, and a number of research groups have started systematically incorporating magnetic fields into them. This in turn should help us understand and interpret better the non-thermal regime of galaxy formation. So, hopefully, we will soon have cosmological and structure formation models with fewer free parameters and more physics.
2011-03-03T17:19:22.000Z
2010-07-22T00:00:00.000
{ "year": 2011, "sha1": "4d9040b302c42068e81728e27964c8ab384ec5c6", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1007.3891", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4d9040b302c42068e81728e27964c8ab384ec5c6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
252195920
pes2o/s2orc
v3-fos-license
Validation of the Functional Assessment of Cancer Therapy/Gynecologic Oncology Group Neurotoxicity Questionnaire for the Latin American Population

Background Chemotherapy-induced peripheral neuropathy is a common adverse effect of chemotherapeutic treatment and is associated with decreased quality of life. The aim of this study was to evaluate the validity and reliability of the neurotoxicity subscale of the Functional Assessment of Cancer Therapy/Gynecologic Oncology Group-Neurotoxicity (FACT/GOG-Ntx) for the Chilean population. Methods A cross-sectional study in which 101 participants with haematologic, colorectal, breast, gastric, gynaecological, and other types of cancer completed the FACT/GOG-Ntx. Content validity was assessed by 14 health professionals, who evaluated the subscale in four categories; test-retest reliability (n = 20 patients), dimensionality, internal consistency, concurrent validity, and discriminant validity were also evaluated. In all analyses, p < 0.05 was considered significant. Results There was an agreement among the evaluators for all categories of the subscale (Kendall's coefficient, W = 0.4, p < 0.01) and moderate to high intrarater reliability (intraclass correlation coefficient = 0.7–0.9). Of the 11 original items that make up the subscale, none was eliminated. The factor analysis generated four factors that represented 72.2% of the total variance. Cronbach's α was 0.8 for the 11 items. Women showed greater compromise in emotional well-being and neurotoxicity symptoms compared with men, and age was directly correlated with the questions 'I have difficulty hearing' (r = 0.2, p = 0.019) and 'I feel a noise or buzzing in my ears' (r = 0.2, p = 0.03). Conclusion The Chilean version of the FACT/GOG-Ntx neurotoxicity subscale is a valid and reliable scale for evaluating neurotoxicity symptoms in adult cancer survivors in Latin America. The scale also adequately distinguishes well-being between the sexes in the afflicted population.

Introduction

There has been increasing consensus on the higher prevalence of cancer-related morbidity and mortality. There is also an increased amount of stress on medical facilities in Latin American countries to counter cancer and its related morbidity. Chemotherapy-induced peripheral neuropathy (CIPN) is one of the common adverse effects of chemotherapy; it can lead to a significant burden of symptoms after treatment [1]. The main classes of chemotherapy drugs that cause neuropathy include platinum-based cancer therapies (oxaliplatin and cisplatin), vinca alkaloids (vincristine and vinblastine), taxanes (paclitaxel and docetaxel), proteasome inhibitors (bortezomib), and immunomodulatory drugs (thalidomide) [2]. Among the factors that have contributed to the increasing prevalence of CIPN are the increase in the number of patients who are candidates for chemotherapy and the increase in survival due to the greater efficacy of new drugs and therapeutic regimens [3]. In general, it is estimated that 30%-40% of all patients treated with chemotherapeutic agents will develop peripheral neurotoxicity. In breast cancer survivors, one study showed that 74% of patients reported CIPN that persisted long after their diagnosis, with a median of 6.5 years [4]. CIPN has been reported in up to 60% of patients treated with cisplatin, paclitaxel, docetaxel, vincristine, oxaliplatin, or bortezomib, the latter two of which have recently been introduced in first-line treatment regimens [5].
CIPN mainly affects the hands and feet and predominantly involves sensory symptoms such as numbness, tingling, and pain, including cold-stimulated neuropathic pain [6]. It can also present autonomic symptoms associated with orthostatic hypotension and motor symptoms such as cramps and impaired balance and gait [6,7]. Likewise, those who suffer from it may have difficulty performing daily activities such as buttoning clothes, brushing teeth, handling small objects, and writing, which can impact their performance and quality of life [7-12]. It is suggested that the lack of optimal methods for evaluating CIPN is a key barrier to effective symptom management [12]. The European Organization for Research and Treatment of Cancer (EORTC) created the EORTC QLQ-CIPN20 questionnaire to screen for CIPN [13]. In parallel, the Functional Assessment of Cancer Therapy/Gynecologic Oncology Group (FACT/GOG) created a neurotoxicity scale (FACT/GOG-Ntx), which has been shown to have good psychometric properties and has been validated in multiple countries and different languages [1,3,13-15]. The first psychometric studies were carried out in a population with gynaecological cancers [3,15]. Recently, it has been used in patients with different cancer diagnoses in the Chinese and English languages [1]. Currently, in Latin American countries like Chile, there is a lack of an instrument that allows easy evaluation of CIPN symptoms in the context of quality of life, information that could also be recorded by clinicians and used to monitor these patients. The objective of this study was to evaluate the psychometric properties of the FACT/GOG-Ntx subscale in a cross-sectional study in patients with different cancer diagnoses treated with chemotherapy in two Chilean public hospitals.

Material and Methods

2.1. Study Design. This cross-sectional study of scale validation followed the recommendations of the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement [16]. 2.2. Subjects. The population of this study corresponds to adults diagnosed with cancer who received or were receiving systemic chemotherapy between December 2019 and March 2020 and who were users of two Chilean public hospitals belonging to two regions of the country. The inclusion criteria were adults diagnosed with cancer who had received at least the first chemotherapy cycle or were receiving systemic chemotherapy. The exclusion criteria were adults with cognitive deficits, illiteracy, and other disabilities that limited their ability to answer questionnaires. The sample size was estimated considering that a factor analysis requires a minimum of 9 participants for each questionnaire item [17]. The FACT/GOG-Ntx subscale has 11 items, so a minimum of 100 people was considered. The study was approved by the Human Research Ethics Committee at the Metropolitan Health Service (memo 147, November 5th, 2019). For the execution of this study, the Functional Assessment of Chronic Illness Therapy (FACIT) quality of life questionnaires group authorised the use of the questionnaire, providing a linguistically validated version in Spanish. Materials. The FACT/GOG-Ntx questionnaire was used to assess the impact of CIPN on quality of life after chemotherapy for cancer. It consists of questions for dimensions related to physical, social, emotional, and functional well-being [2]. It is a patient-reported outcome measure that contains 11 items designed to capture the symptoms of CIPN.
Each item is scored on a 5-point scale (0 = not at all, 4 = a lot), and a higher score reflects worse CIPN. The scores are added together to generate a total score that ranges from 0 to 44. The questions correspond to sensory and motor problems in the extremities, hearing problems, body weakness, and mobility [3]. The questionnaire has been validated with women diagnosed with gynaecological cancer who had received taxane and platinum, two recognised neurotoxic agents [3]. The English version of the questionnaire was previously administered in the United Kingdom, Singapore, Hong Kong [1], and China [13]. Procedures. Eligible patients from the two public hospitals were recruited by the investigators and received information on the objective of the study and on their participation in it. The psychometric analyses included content validity, test-retest intrarater reliability, internal consistency, and discriminant validity. Content Validity. The expert judgment technique was used to assess the content validity of the neurotoxicity subscale. Fourteen health professionals were contacted (physiatrists, chemotherapy specialists, and physiotherapists) who met the following inclusion criteria: a minimum of 5 years of professional experience, experience in the application of functional scales, and recognition in the field of clinical oncology. An individual approach was employed for this technique: each judge completed a written survey and had no contact with the other judges. Participants received instructions regarding their participation as an expert in the validation process and the voluntary nature of their participation. The professionals received a document in which the subscale was presented; the objective of the scale and the construct that it evaluates were made explicit. Subsequently, with the purpose of evaluating the content of the scale, the experts expressed their opinions by answering a survey that evaluated the 11 items of the FACT/GOG-Ntx. The experts evaluated the content in the categories of clarity, coherence, relevance, and sufficiency [18]. The responses were given with a Likert-type scale with five choices: 'Strongly agree', 'Agree', 'Neither agree nor disagree', 'Disagree', and 'Strongly disagree'. There was also a free-response section to request additional information that the evaluating experts considered relevant. 2.6. Test-Retest Intrarater Reliability. The within-day and intrarater reliability was assessed in a sample of 20 adults who were undergoing chemotherapy treatment. The relative reliability was determined by calculating the intraclass correlation coefficient (ICC(2,1)). The absolute reliability was determined by calculating the standard error of measurement (SEM) and the minimal detectable change (MDC). Internal Consistency. The internal consistency of the FACT/GOG-Ntx was evaluated with Cronbach's α analysis for all items and by dimension. Discriminant Validity. Discriminant validity was evaluated based on the results of the functional evaluation of cancer therapy compared between the sexes. Furthermore, correlations between the questions of the neurotoxicity subscale and age were also considered. 2.9. Data Analysis. The data were analysed with SPSS Statistics version 25.0. Descriptive analyses were used. Internal consistency was determined with Cronbach's α coefficient, and dimensionality was evaluated with exploratory factor analysis (principal component analysis with Varimax rotation).
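The internal-consistency analysis mentioned above relies on Cronbach's α. A minimal sketch of the standard coefficient is given below; the item scores are simulated for illustration (the study's analysis was carried out in SPSS on the real 11-item responses), so the printed value is not a result of the paper.

```python
import numpy as np

# Cronbach's alpha for a k-item scale:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
# 'scores' is a made-up (respondents x items) matrix of 0-4 ratings; the real
# analysis in the paper was done in SPSS on the 11 FACT/GOG-Ntx items.

rng = np.random.default_rng(0)
n_respondents, n_items = 101, 11
latent = rng.normal(size=(n_respondents, 1))        # shared trait driving all items
noise = rng.normal(scale=0.8, size=(n_respondents, n_items))
scores = np.clip(np.round(2 + latent + noise), 0, 4)

def cronbach_alpha(x):
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha (simulated data) = {cronbach_alpha(scores):.2f}")
```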
For this type of instrument, values of α ≥ 0.7 are considered adequate [19]. For discriminant validity, the dimensions of the functional evaluation results of cancer therapy were compared between the sexes by using the Mann-Whitney U test. Absolute reliability considered the SEM [20] and the MDC [21], using a confidence interval of 90% for each variable, computed with the corresponding mathematical equations. The SEM evaluates the mean error of the measurement for any trial (reliability between trials) and for any test situation (reliability between days) [20]. The MDC is an estimate of the smallest amount of change that can be objectively detected as a true change outside of measurement error [21]. Kendall's W coefficient was used to evaluate the agreement among the evaluators regarding the domains of the FACT/GOG-Ntx. In all analyses, p < 0.05 was considered significant.

Results

A total of 101 participants completed the study (Supplementary Figure 1). The average age of the participants was 58.9 (12.9) years, including 56 (55.4%) women and 45 (44.6%) men. In relation to sociodemographic characteristics, there was heterogeneity in the participants' education. Fifty people were married (49.5%), the majority lived with relatives (84.6%), and there was representation of different professions and occupations. A notable percentage of the women were housewives (27.7%) (Supplementary Table 1). The clinical characteristics of the patients are presented in Supplementary Table 2. The main cancers among the participants were haematological (43.6%), colorectal (22.8%), and breast (13.9%). The most common type of treatment was chemotherapy alone (66.3%) or associated with radiotherapy (24.8%) or surgery (8.9%). The most frequent comorbidities were musculoskeletal problems (56.4%), mood disorders (46.5%), and arterial hypertension (36.6%). Of note, 77.2% of the people did not report smoking and 45.6% drank occasionally. Only 10% of the people interviewed presented some degree of dependence when performing activities of daily living. All 14 health professionals who participated (nine physiotherapists, four physiatrists, and one haematologist) were women and had a mean (standard deviation) age of 42.7 (6.4) years and 16.3 (6.3) years of professional experience. They evaluated the eight dimensions of the scale for sufficiency, clarity, coherence, and relevance. There was agreement among the evaluators for all the evaluated aspects (Kendall's W = 0.4, p < 0.01). Factor analysis generated four factors that represented 72.2% of the total variance (Table 2). Of these, the first factor (sensitive) included four items that represented 35.3% of the total variance. The second factor (hearing) included two items that represented 15.1% of the total variance, the third factor (motor) loaded three items responsible for 12.3% of the variation, and the fourth factor (dysfunction) had two items that explained 9.6% of the variance. The FACT/GOG-Ntx had a Cronbach's α of 0.8 for the 11 items and a Cronbach's α of 0.8 for factors 1-4, namely, sensitivity, hearing, motor, and dysfunction. For discriminant validity analysis, when comparing the results of the functional evaluation of cancer therapy for people undergoing chemotherapy by sex, women presented greater impairment in emotional well-being and symptoms of peripheral neuropathy compared with men (Table 3; p < 0.05).
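The SEM and MDC referenced above ([20,21]) are commonly computed as SEM = SD·√(1 − ICC) and MDC90 = 1.65·√2·SEM. Assuming these standard formulas are the ones intended, the sketch below shows the calculation for illustrative SD and ICC values (the ICC range 0.7–0.9 matches the reported test-retest results, but the SD is an assumption, not a value from the paper).

```python
import numpy as np

# Standard formulas for absolute reliability (assumed here to match those cited in [20,21]):
#   SEM   = SD * sqrt(1 - ICC)          (standard error of measurement)
#   MDC90 = 1.65 * sqrt(2) * SEM        (minimal detectable change, 90% confidence)
# The SD below is an illustrative assumption; the ICC values span the reported 0.7-0.9 range.

def sem_mdc90(sd, icc):
    sem = sd * np.sqrt(1.0 - icc)
    mdc90 = 1.65 * np.sqrt(2.0) * sem
    return sem, mdc90

sd_total_score = 6.0          # assumed SD of the total FACT/GOG-Ntx score, for illustration
for icc in (0.7, 0.8, 0.9):
    sem, mdc = sem_mdc90(sd_total_score, icc)
    print(f"ICC = {icc:.1f}: SEM ≈ {sem:.2f} points, MDC90 ≈ {mdc:.2f} points")
```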
Additionally, when comparing the functional evaluation of cancer therapy based on the types of neoplasms (Table 4), survivors of gynaecological and haematological cancer presented greater impairment in the various domains of the FACT/GOG-Ntx. Regarding the representative total score of the FACT/GOG-Ntx, patients with gynaecological cancer presented the greatest impairment (Table 4). (Table note: FACT/Ntx total score = PWB + SWB + EWB + FWB + NtxS scores; FACT-G total score = PWB + SWB + EWB + FWB scores; FACT/GOG-Ntx TOI = PWB + FWB + NtxS scores; U test: value of the Mann-Whitney test; values are expressed as median (first quartile; third quartile) (minimum; maximum).) Regarding the general scores of the functional evaluation of cancer therapy, on average, the patients presented scores around 40% in all the dimensions evaluated (Figure 1).

Discussion

The emerging forms of cancer in Latin American countries are lung, cervix, breast, prostate, and stomach cancers, all with high mortality and morbidity [22]. Hence, there is an urgent need to identify health care resources and to improve understanding of the complexities of neurotoxicity in the affected population. This validation study of the FACT/GOG-Ntx subscale was performed on a sample of patients from two public hospitals in Chile. It produced adequate results for all the psychometric properties evaluated. In the sample evaluated, the most frequent comorbidities were musculoskeletal problems (56.4%), mood disorders (46%), arterial hypertension (37%), and diabetes mellitus (20%). These factors could influence the increased presence of symptoms reported by the participants, consistent with the findings of other studies [23,24]. In relation to the validation of the FACT/GOG-Ntx subscale content, there was agreement among the 14 professional evaluators for its 11 items in all the categories evaluated. The professionals highlighted the 'relevance' of having a clinical evaluation instrument and also of considering adequate time for its application. Other studies have mentioned that the FACT/GOG-Ntx is one of the most appropriate evaluation instruments due to its 'practical' characteristics that allow it to be applied in the clinic [3,14]. The internal consistency of the four domains (sensitivity, hearing, motor, and dysfunction), measured with Cronbach's α, was similar to the results reported in other studies [1]. Women showed greater CIPN symptoms and impaired emotional well-being compared with men, findings consistent with a previous report [25]. Researchers have mentioned that treatment of ovarian and breast cancer can contribute to the development of peripheral neuropathy [26,27]. Indeed, an incidence of 11%-87% of CIPN has been reported after treatment with taxanes. In addition, CIPN develops in 70%-100% of patients treated with platinum-based chemotherapeutics [27]. In this study, about 46.6% of the participants were receiving chemotherapy for gynaecologic, breast, colorectal, gastric, and digestive cancers, which use taxane- and platinum-based treatment schemes. In the sample studied, haematological and gynaecological cancers presented the greatest involvement in the various domains of the FACT/GOG-Ntx. The treatment for these two types of cancers often involves taxanes, alkylating agents, proteasome inhibitors, and epothilone B analogues, all of which are associated with CIPN [25,28,29].
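The group comparisons summarised in Tables 3 and 4 rest on the Mann-Whitney U test. The sketch below illustrates the sex comparison on fabricated neurotoxicity-subscale scores (the sample sizes match the study, but the scores and the resulting statistics are illustrative only and are not the paper's results).

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Sketch of the discriminant-validity comparison by sex (Mann-Whitney U test, as in Table 3).
# The neurotoxicity-subscale scores below are fabricated for illustration; the direction
# (higher scores = worse CIPN in women) mirrors the pattern reported in the paper.

rng = np.random.default_rng(1)
ntx_women = np.clip(np.round(rng.normal(loc=14, scale=6, size=56)), 0, 44)
ntx_men = np.clip(np.round(rng.normal(loc=10, scale=6, size=45)), 0, 44)

u_stat, p_value = mannwhitneyu(ntx_women, ntx_men, alternative='two-sided')
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```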
Regarding the neurotoxicity subscale questions, age was directly correlated with the questions 'I have difficulty hearing' and 'I feel a noise or buzzing in my ears'. These results are consistent with previous studies that indicated elderly patients are at particular risk of developing CIPN due to comorbid conditions affecting peripheral nerve health [23,30], mainly associated with objective measurements related to tactile stimulation, sensitivity to cold, and vibratory threshold [31]. Although this study included a representative sample of patients with various types of cancers that are treated with chemotherapeutics that produce CIPN, it is not without limitations. The chemotherapeutic agents were not recorded, nor was the time since the last administration of chemotherapy at the time of evaluation. This information could influence the extent of symptoms present in the patients. Despite advances in cancer treatment and the increased rate of people surviving with a significant number of unwanted side effects, there is still a lot to learn about CIPN. It is important to recognise that CIPN symptoms are very heterogeneous: they may be acute, such as the neuropathy commonly experienced with oxaliplatin, or chronic, which can persist long after treatment has been completed [2]. Therefore, having a validated evaluation scale will make it possible to recognise CIPN early, to provide preventive measures and timely treatment, and to improve the quality of life of those who experience it.

Conclusions

The Chilean version of the FACT/GOG-Ntx is a valid and reliable scale for evaluating neurotoxicity symptoms in adult cancer survivors in Latin America. The scale also adequately distinguishes between the sexes regarding well-being in the afflicted population.

Data Availability

All data are included in the main text. The original data sheet of subjects can be made available on request to the first author (ileao@ucm.cl).

Conflicts of Interest

The authors declare that there is no conflict of interest.

Authors' Contributions

IL Ribeiro helped in the conceptualization, formal analysis, methodology, and administration of the project; gathered the resources; supervised, validated, and visualized the project; wrote the original draft; and wrote, reviewed, and edited the study. LA Lorca contributed to the formal analysis, validation, visualization, writing of the original draft, and writing, reviewing, and editing of this study. CR Cid performed the data curation, formal analysis, and methodology of this study; gathered the resources; and wrote the original draft. Snehil Dixit contributed to the formal analysis and validated, visualized, and wrote the original draft. NY Benavides carried out the formal analysis, administered the project, and validated, visualized, and wrote the original draft. FO Gonzales developed the formal analysis, administered the project, and validated, visualized, and wrote the original draft.
2022-09-12T15:41:06.493Z
2022-09-10T00:00:00.000
{ "year": 2022, "sha1": "49bd331f79b15c5454e07d1094caa63d9f9d9046", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ijbc/2022/6533797.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "78cbf71063f22e36d4490741df1f6004db18c5e0", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }