Clinical Decision Support Systems for Antibiotic Prescribing: An Inventory of Current French Language Tools

Clinical decision support systems (CDSSs) are increasingly used by clinicians to support antibiotic decision making in infection management. However, coexisting CDSSs often target different types of physicians, infectious situations, and patient profiles. The objective of this study was to perform an up-to-date inventory of French language CDSSs currently used in community and hospital settings for antimicrobial prescribing and to describe their main characteristics. A literature search, a search of smartphone application stores, and an open discussion with antimicrobial stewardship (AMS) experts were conducted to identify available French language CDSSs. Any clinical decision support tool that provides a personalized recommendation based on a clinical situation and/or a patient profile was included. Eleven CDSSs were identified through this search strategy. Only 2 of the 11 had been the subject of published studies; the other 9 were identified through smartphone application stores and expert knowledge. The majority of the CDSSs were available free of charge (n = 8/11, 73%). Most were accessible via smartphone applications (n = 9/11, 82%) and online websites (n = 8/11, 73%). Recommendations for antibiotic prescribing in urinary tract infections, upper and lower respiratory tract infections, and digestive tract infections were provided by over 90% of the CDSSs. More than 90% of the CDSSs displayed recommendations for antibiotic selection, prioritization, dosage, duration, route of administration, and alternative antibiotics in case of allergy. Information about antibiotic side effects, prescription recommendations for specific patient profiles, and adaptation to local epidemiology were often missing or incomplete. There is a significant but heterogeneous offer of antibiotic prescribing decision support in the French language. Standardized evaluation of these systems is needed to assess their impact on antimicrobial prescribing and antimicrobial resistance.

Introduction

Antimicrobial resistance (AMR) is a major public health concern worldwide [1,2]. It is associated with high morbidity and mortality as well as significant healthcare costs [2]. In response to the global threat of AMR, antimicrobial stewardship programs (ASPs) have been introduced to optimize antibiotic use and improve the quality of infection care [3,4]. ASPs have proven effective in tackling AMR in both hospital and community settings [5,6]. Moreover, ASPs based on physician education and increased availability of guidelines through decision support tools such as clinical decision support systems (CDSSs) have shown significant results in improving appropriate antibiotic prescribing [7,8]. CDSSs are computerized tools designed to support diagnostic or therapeutic decision-making in order to improve clinical practice and quality of care [9,10]. After entering information about a given clinical context and patient characteristics, clinicians are offered quick and easy access to up-to-date clinical practice guidelines (CPGs) at the point of care [9,10]. In the infectious diseases (ID) field, CDSSs have been increasingly used to assist physicians' decision-making in antibiotic management in both community and hospital settings [11-14].
With a few clicks, CDSSs provide expert or evidence-based recommendations to promote the appropriate choice of antibiotic, dosage, route of administration, and duration of treatment. One of the first CDSSs developed in medicine was MYCIN, an expert system designed in the 1970s for both the diagnosis and treatment of infectious diseases [15]. Then, with the emergence of evidence-based medicine in the 1990s [16], new CDSSs were developed with the purpose of implementing CPGs. CDSSs have since shown many benefits, such as improvement in antibiotic selection [17,18], reduction in antibiotic usage [19-22], reduction in broad-spectrum antibiotic use [22,23], shorter length of hospital stay [17,19], reduction in adverse events [19,20], decreased mortality [20], increase in pharmacy interventions [19], and decreased healthcare costs [17,19,21]. A systematic review has been performed on studies assessing CDSSs for antimicrobial management, but it was limited by publication bias and targeted a broad range of clinical tasks, such as alert systems for pharmacists or tools for antimicrobial stewardship (AMS) teams to review prescriptions [13]. Moreover, newer systems are now available, including innovative tools [24,25] and smartphone applications. Therefore, this study aims to provide an up-to-date inventory of French language CDSSs currently used in community and hospital settings by expert and non-expert physicians, to describe existing CDSSs, including those not cited in the scientific literature, and to report their main characteristics and usage data.

Search Strategy

A literature search was carried out in February 2021 to identify published articles about the design, implementation, or evaluation of French language CDSSs for antimicrobial prescribing. The Pubmed/MEDLINE database was searched using MeSH terms and text words for antimicrobials and CDSSs, including synonyms; the Pubmed search strategy can be found in the Supplementary Materials (Table S1). Additional search terms such as "France" and "French" were included to restrict the search to French-language CDSSs. The reference lists of related reviews and systematic reviews were also searched to identify any relevant study that might have been missed by the search strategy. Additionally, an open discussion was conducted with AMS experts from the Antimicrobial Stewardship study group of the French Infectious Diseases Society to identify CDSSs used in common practice by French-speaking primary care and hospital physicians, including those that have not been the subject of published research. They were asked to report any CDSS that can support physicians in community or hospital settings in the prescription of empirical or targeted antimicrobial therapy. We also searched smartphone application stores, namely the App Store (iOS) and Play Store (Android), using French keywords such as "antibiotique" (antibiotic), "antibiothérapie" (antibiotic therapy), or "prescription".

CDSS Selection

Any French-language clinical decision support tool that provides a personalized recommendation based on a clinical situation and/or a patient profile was included. Electronic tools available as smartphone applications, stand-alone software, and online websites were all included.
Tools that exclusively provide a list of official practice guidelines or information about a single clinical situation were not included. Applications or websites that only offer teleconsultation services, drug monographs, or veterinary prescribing guidelines were also excluded.

Data Collection

A data collection form was developed and reviewed by ID specialists and AMS physicians using 5 randomly selected CDSSs. Collected data included the CDSS characteristics regarding administration, access, targeted healthcare providers and patients, search criteria, types of infection, and types of information provided. After the available decision support tools were identified, testing was carried out by two researchers using the standardized form. Testing was performed after installation on a server in order to ensure a reproducible testing procedure. Data from the included CDSSs were recorded by two reviewers independently and were then subjected to further critical appraisal during a narrative synthesis.

Results

Figure 1 describes the CDSS selection process. Through the Pubmed search strategy, we identified and screened 35 articles. After assessment of eligibility and exclusion of duplicates, only 2 CDSSs were included in the inventory from the literature search [24,25]. Seven other CDSSs were identified and included through the open discussion with AMS experts. Two additional CDSSs were found by the search of application stores, after exclusion of one CDSS intended for antibiotic prescribing in veterinary medicine. A total of 11 CDSSs were thus included in the inventory: Antibioclic, Antibiogarde, Antibiogilar, antibioGUIDE (Perpignan), Antibioguide (Basse-Normandie), AntibioEst, APPLIBIOTIC, ePOPI, Prescriptor, Antibiothérapie Pédiatrique, and AntibioHelp®.

The included CDSSs were then tested using the standardized form. The collected data are described in Table 1 and detailed for each CDSS in the Supplementary Materials (Tables S2 to S12). One CDSS was unavailable for testing, so we contacted its main administrator to obtain its characteristics. Of the 11 decision support systems included, 10 were designed by French AMS teams, whereas 1 was developed by Canadian physicians and was intended for pediatric use only. Most of the CDSSs were less than 10 years old and were developed on a regional scale by multidisciplinary teams including general practitioners (GPs), ID specialists, emergency physicians, intensive care physicians, pediatricians, geriatricians, microbiologists, pharmacists, and medical informatics specialists. Nine support systems were accessible via smartphone applications and eight via online websites. Two of the smartphone applications could only be accessed through a single mobile operating system (Android or iOS). Eight CDSSs could be used offline on smartphones, and eight were available free of charge.
Table 1 footnotes. Abbreviations: CVC, central venous catheter; GFR, glomerular filtration rate. 1 No information was found about the funding of the CDSS. 2 Mandatory information provided by users before accessing prescription recommendations. 3 Antibiotics are listed in order of preference. 4 Choice of antibiotics adapted to the local epidemiology. 5 Context and reminders include information about infection epidemiology, clinical presentation, diagnosis, and other treatments.

The individual characteristics of each CDSS regarding targeted users, patients, and infections are presented in Table 2. All the CDSSs offered prescription recommendations for the ambulatory treatment of community-acquired infections. In addition, all CDSSs except two (Antibioclic and AntibioHelp®) were also intended for the treatment of inpatients in hospital settings. Urinary tract infections (UTIs) were the only type of infection for which all decision support tools provided prescription recommendations. Furthermore, UTIs, upper respiratory tract infections (URTIs), lower respiratory tract infections (LRTIs), and digestive tract infections were the only types of infection for which over 90% of the decision support systems provided recommendations. In contrast, recommendations for the treatment of cardiovascular infections, bloodstream infections, central venous catheter (CVC)-related infections, eye infections, and dental infections were the least frequently provided, with less than half of the CDSSs covering these conditions.

Table 3 describes the individual characteristics of the included CDSSs regarding the types of information provided. All the decision support systems provided recommendations on the decision to initiate antibiotic therapy for a given infection and on the selection of appropriate antibiotics, as well as their preferred order according to guidelines. All included tools also displayed alternatives in case of allergy. All but one of the CDSSs also provided decision support on the appropriate dosage and duration of treatment. However, less than 30% of the CDSSs displayed information about antibiotic side effects. Two CDSSs required clinicians to systematically provide the patient profile (i.e., adult, child, pregnant woman) prior to displaying prescription recommendations. Recommendations for specific patient profiles such as children or pregnant women were frequently provided, but the information supplied was often incomplete. Recommendations for antibiotic selection and dosage in patients with chronic kidney disease (CKD) were displayed by about half of the CDSSs. Moreover, less than 30% of the CDSSs displayed antibiotic prescription recommendations adapted to the local epidemiology.
In addition to prescription recommendations, the majority of CDSSs displayed additional information about infection epidemiology, clinical presentation, diagnosis, and other treatments. Nine CDSSs displayed the sources of their recommendations, primarily national and international guidelines from scientific societies.

Discussion

This study provides an overview of available French language CDSSs and their characteristics. Although this inventory might not be exhaustive, our main objective was to identify and describe the CDSSs that are used by clinicians for antibiotic prescribing. To the best of our knowledge, there is no published research describing and comparing CDSSs for antibiotic prescribing in a similar way. We found that two CDSSs (Antibioclic and AntibioHelp®) were particularly suitable for use in primary care settings. Indeed, these two CDSSs focused on the infectious situations most frequently encountered in general and emergency medicine and displayed comprehensive prescription recommendations for different patient profiles (i.e., adults, children, pregnant women). One CDSS (ePOPI) met all the predefined criteria regarding targeted users, patients, infectious situations, and recommendations, although it should be noted that this CDSS was not free of charge. Another CDSS (Antibiothérapie Pédiatrique) focused solely on pediatrics and offered comprehensive recommendations for a range of infectious situations in this area. Two other CDSSs (APPLIBIOTIC and AntibioEst) targeted a variety of infections in both general and specialized medicine and thus seemed appropriate for decision support in both inpatient and outpatient settings. It is reasonable to infer from these results that the appropriateness of a CDSS for a physician likely depends on his or her scope of practice and patient profile.

Despite the growing use of CDSSs, only 2 of the 11 CDSSs included in the inventory appear to have been the subject of published studies. This lack of published research on existing tools highlights current gaps in the evaluation of CDSSs and their potential impact on antibiotic prescribing behavior and clinical outcomes. In a study published in 2020 [26], Delory et al. described the architecture of Antibioclic and its use, reporting its growing numbers of users and queries as well as the nature of these queries, which mostly concerned URTIs and UTIs. They also reported the findings of two cross-sectional online surveys of Antibioclic users conducted five years apart [25]. Among the 1848 and 3621 survey participants (in 2014 and 2019, respectively), 93% were physicians and 81% were GPs. The vast majority of GPs (93%) reported following the CDSS recommendations, while the proportion of users departing from the CDSS recommendations (on whether to initiate an antibiotic, which antibiotic to select, or extending treatment beyond the recommended duration) decreased between the two surveys. A substantial number of GPs declared using the CDSS to update their knowledge of antibiotic therapy, with a decrease over time (83% in 2014 versus 43% in 2019), suggesting an increase in users' knowledge of antibiotic prescribing guidelines over time. However, the authors reported that no formal assessment of the CDSS's impact on improving antibiotic prescribing practices has been carried out. Another CDSS included in this inventory has been the subject of a small-scale evaluation [25].
AntibioHelp® aims to help GPs extrapolate guideline recommendations to clinical situations and patients for which there are no explicit recommendations [25]. By displaying antibiotic properties weighted by degree of importance, in addition to the recommended and non-recommended antibiotics according to guidelines, this CDSS promoted a better understanding of the recommendations and encouraged clinicians to weigh the pros and cons of each antibiotic in their decision making. The use of AntibioHelp® by GPs resulted in a significant improvement in antibiotic prescribing in situations where no explicit recommendation existed, as well as a significant increase in GP confidence in guideline recommendations [25]. The provision of flexible and comprehensible recommendations therefore appears to be an important factor in increasing the uptake of CDSSs by clinicians. Indeed, several studies have reported a correlation between CDSS adoption and positive impact on antibiotic prescribing [11-13], which highlights the need to assess not only the effects of CDSSs on antibiotic prescribing but also their implementation process and utilization. Given the link between the uptake of CDSSs by clinicians and their effectiveness in improving antibiotic prescribing behavior, it is crucial to understand the limits of CDSSs and the characteristics that influence clinician adoption, so that new research methods can be developed to overcome these limits. To optimize current CDSSs and improve their sustainability, current gaps in the evaluation of CDSS utilization, user satisfaction, and impact on clinician adherence to guidelines should be addressed by CDSS administrators.

Despite the paucity of CDSSs described in the scientific literature, we found a substantial offer of French language CDSSs, showing strong interest from multidisciplinary physician teams in improving antibiotic use. The included CDSSs were simple to use, with most support systems requiring users to provide only the site and nature of the infection, making them easy to use for non-expert physicians. Most support systems offered prescription recommendations for a variety of infectious diseases, making them valuable to different types of physicians, both generalist and specialist, and useful in both primary care and hospital settings. We found that UTIs, URTIs, LRTIs, and digestive tract infections were the infectious situations for which the most CDSSs provided recommendations, whereas bloodstream infections and CVC-related infections were covered by only a few CDSSs. This may reflect the priority given to the most frequent indications for antibiotic prescribing or to the most frequent causes of antibiotic misuse. It also highlights the fact that CDSSs are probably easier to develop for the treatment of simple community-acquired infections, given their narrower spectrum of causative pathogens and infrequent multidrug resistance. Indeed, guidance for the treatment of healthcare-associated infections requires, in most cases, more detailed information about patient history, clinical presentation, previous antibiotic exposure, and proper examination of microbiological test results. The development of knowledge-based CDSSs for these difficult clinical situations would hence require a large volume of rules to capture expert knowledge.
To this day, the use of CDSSs for antibiotic management seems more appropriate in general and emergency medicine practice or for the inpatient treatment of simple community-acquired infections. In contrast, therapeutic decision-making for the management of severe infections in hospital settings requires individualized expert guidance and follow-up from hospital AMS teams.

Three CDSSs, namely Antibiogarde, ePOPI, and Antibiothérapie Pédiatrique, were not free of charge and were available on an annual subscription basis, ranging from 7.99 to 33 EUR per year. Although fee-based access may significantly limit the uptake of these CDSSs given the free coexisting options, it is worth mentioning that all three had specific features, including prescribing guidelines for fungal and parasitic infections, guidelines for the diagnosis of infections, and comprehensive guidelines for the management of neonatal and pediatric infections.

Recommendation updates were infrequent in some decision support systems, and recommendations for specific patient profiles such as children, pregnant women, or patients with CKD were missing or incomplete in others. The lack of explicit recommendations for some clinical situations and populations is likely a barrier to widespread adoption by clinicians and could potentially contribute to delayed or inappropriate prescribing in these situations. Moreover, we found that information on the use and administration of the CDSSs was sometimes missing or incomplete; greater overall transparency could promote better prescriber adherence to decision support systems. It is also worth noting that many of the included CDSSs appear to overlap, providing the same type of recommendations for the same patient profiles. It may therefore be worthwhile to centralize the process of computerizing antibiotic therapy recommendations in order to pool the resources invested in the development and sustainability of CDSSs, whether for antimicrobial prescribing or other clinical decisions.

All the studied support systems were accessible through online websites, stand-alone applications, or computer software and were not integrated into the electronic health record (EHR) workflow, meaning that every query must be initiated by the clinician. The development of automated clinical decision support delivered through EHRs may increase user adoption. Furthermore, all the CDSSs presented in this study were knowledge-based systems, i.e., they provide recommendations based on expert medical knowledge. None of them used machine learning algorithms to recognize patterns in clinical data and predict patient outcomes, which is likely related to the lack of clinical data warehouses in primary care [27]. One narrative review investigating the use of machine learning decision support systems (ML-CDSSs) in infectious diseases found only three ML-CDSSs intended for decision making in antibiotic therapy selection, while most existing ML-CDSSs focused on the diagnosis of infection and the prediction, early detection, or stratification of sepsis [27]. Combining expert knowledge and machine learning algorithms could allow for personalized and predictive recommendations tailored to patient profiles and could thus have a positive impact on the quality of antibiotic prescribing. All the CDSSs identified in this article were intended for medical prescribers.
However, other healthcare providers such as pharmacists have been playing a growing role in ASPs [28-30] and could potentially rely on CDSSs for reviewing antibiotic prescriptions [31,32]. In addition, a few CDSSs offered the possibility of being parameterized locally to fit the local epidemiology, which could further optimize antibiotic prescribing and positively impact local antimicrobial resistance patterns.

Conclusions

This inventory shows a significant but heterogeneous offer of antibiotic prescribing decision support. Based on these results, a physician's choice of CDSS should presumably depend on his or her scope of practice and patient profile. Most CDSSs provided recommendations for a range of infections, although few offered comprehensive recommendations for antibiotic prescribing in specific patient profiles, which may limit adoption by clinicians. Frequent updates, free use, comprehensive recommendations, and automated clinical decision support are important factors for increasing the uptake of CDSSs by clinicians and thus their effectiveness in improving the quality of antibiotic prescribing and clinician adherence to guidelines. Moreover, the findings from this study highlight current gaps in the evaluation of CDSS use and impact on antimicrobial prescribing and antimicrobial resistance. Standardized evaluation of current CDSSs is needed to optimize current tools and to improve the adoption and sustainability of CDSSs for antibiotic prescribing.

Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/antibiotics11030384/s1: Table S1: Search strategy for Pubmed (docx); Tables S2 to S12: Detailed characteristics of the CDSSs.

Funding: This work was funded by ANRS Maladies infectieuses émergentes as part of the project ANRS COV03 Antibioclic Afrique, with the financial support of L'initiative, a facility implemented by Expertise France.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: All data related to this study are available on request from the corresponding author.
Research on the Preparation of Graphene Quantum Dots/SBS Composite-Modified Asphalt and Its Application Performance

This study aims to prepare a graphene quantum dots (GQDs)/styrene-butadiene segmented copolymer composite (GQDs/SBS) as an asphalt modifier using the Pickering emulsion polymerization method. The physicochemical properties of the GQDs/SBS modifier and their effects on asphalt modification were investigated, and the GQDs/SBS modifier was compared with the pure SBS modifier. The results demonstrate that GQDs can be evenly dispersed into the SBS phase to form a uniform composite. Adding GQDs brings more oxygen-containing functional groups into the GQDs/SBS modifier, strengthening its polarity and allowing it to disperse better into the asphalt. Compared with the SBS modifier, the GQDs/SBS modifier presents better thermostability. Moreover, GQDs/SBS composite-modified asphalt achieves better high-temperature performance than SBS-modified asphalt, manifested by increased softening points, complex shear modulus, and rutting factors. However, the low-temperature performance decreases, manifested by reductions in cone penetration, viscosity, and ductility, as well as an increased ratio of creep stiffness (S) to creep rate (m), that is, S/m. Furthermore, adding GQDs can improve the high-temperature performance of the asphalt mixture while only slightly influencing its low-temperature performance and water stability. GQDs/SBS also offers the advantages of a simple preparation technique, low cost, and environmental friendliness, making it an attractive choice as a modifier for asphalt cementing materials.

Introduction

Among all pavement types, asphalt pavement accounts for a very high proportion of road engineering around the world, owing to remarkable advantages such as high riding comfort and convenient maintenance. Transportation industries around the world are booming, and increasing traffic loads, especially the growth in heavy-loaded and overloaded vehicles, intensify the damage to existing roads. As a result, asphalt pavements may develop different types of early distress soon after opening to traffic, such as rutting, subsidence, and upheaval [1,2]. These distresses significantly affect the performance and service life of pavements. Improving the durability and prolonging the service life of asphalt pavement is therefore a key problem to be solved in the road field. Furthermore, developers often choose superior materials for asphalt pavements, since excellent pavement structural performance is closely related to material performance. In particular, it is very important to choose a well-performing asphalt binder, because its quality is directly related to the performance of the asphalt pavement [3]. To obtain ideal performance from asphalt under various climatic and traffic conditions […]

Nevertheless, carbon nanomaterials must overcome considerable surface tension in order to disperse into asphalt, since they have a large specific surface area [4]. As a result, the dispersion problem of carbon nanomaterials in SBS-modified asphalt is a key constraint on their development at present. With abundant carboxyl and hydroxyl functional groups on their surface, GQDs show some surface activity and can be used as nano-surfactants to prepare Pickering emulsions [28]. Further, Pickering emulsion polymerization is expected to be a new route for preparing GQDs/SBS composite-modified materials [4,29].
This study aims to discuss the application of GQDs as an asphalt modifier. For this purpose, asphalt-based GQDs and SBS were used as the main raw materials, and a new GQDs/SBS composite material prepared by the Pickering emulsion polymerization method was used as the asphalt modifier. A series of chemical analyzers was used to analyze the functional group structures and thermostability of the GQDs/SBS and SBS modifiers. On this basis, the conventional physical properties and rheological properties of GQDs/SBS composite-modified asphalt and SBS-modified asphalt were compared. Moreover, the pavement performances of asphalt mixtures prepared using GQDs/SBS composite-modified asphalt and SBS-modified asphalt as binders were characterized.

Materials

The GQDs/SBS modifier was prepared using asphalt-based GQDs and linear SBS as the raw materials. Modified asphalt was prepared using Qinhuangdao AH-70 asphalt (PetroChina Fuel Asphalt Co., Ltd., Qinhuangdao, China) and the GQDs/SBS modifier as raw materials. The conventional performances and four-component compositions of Qinhuangdao AH-70 asphalt are listed in Table 1. SBS, a white fluffy rod-like solid, was provided by Yueyang Baling Petrochemical Corp. (Sinopec, Yueyang, China); it is a linear molecule with an average molecular weight of 100,000 g/mol. De-oiled asphalt (DOA, asphaltene content 20%) from the SINOPEC Jiujiang company, Jiujiang, China, was used as the raw material, and GQDs with asphaltene polycyclic aromatic hydrocarbon nuclei were prepared by nitric acid oxidation. The specific procedure was as follows: 10 g of DOA powder was added to a 250 mL flask, and 150 mL of 65% concentrated nitric acid was added slowly under continuous stirring. Under strong stirring, the temperature increased gradually and the mixture was heated under reflux; the reaction lasted 4 h at 90 °C. After the reaction, the mixture was cooled to room temperature and then diluted with distilled water. It was filtered directly through a 0.2 μm Millipore filter rather than neutralized with sodium hydroxide. Residual nitric acid in the filtrate was removed by reduced-pressure distillation, and the product was then dried, yielding nitric-acid-oxidized GQDs.

Preparation of GQDs/SBS Modifier

The GQDs/SBS modifier was prepared by the Pickering emulsion polymerization method; the preparation process is shown in Figure 1. Firstly, a certain mass of GQDs was dispersed into pure water, and a 5% (mass concentration) GQDs solution was obtained through ultrasonic dispersion for 30 min. Meanwhile, SBS particles were dissolved in methylbenzene to prepare a 20 wt% SBS methylbenzene solution. The SBS methylbenzene solution was added to the GQDs solution at a mass ratio of 1:1, and a Pickering emulsion was obtained through 5 min of high-speed shearing with a BME shearing machine at 4000 r/min. The Pickering emulsion was poured into a clean flat-bottomed glass tray, which was then placed in a vacuum drying box at 80 °C for 12 h. Under these conditions the Pickering emulsion polymerized spontaneously, yielding the GQDs/SBS modifier. As shown in Figure 2, the GQDs/SBS modifier is a black solid at room temperature.
FT-IR Spectral Analysis

The functional groups and material structures of the SBS modifier and the GQDs/SBS modifier were characterized using a Fourier-transform infrared (FT-IR) spectrometer, and their chemical compositions were analyzed. A Nicolet iS5-type infrared spectrometer (Thermo Scientific, Waltham, MA, USA) was used in the experiment. All tests were performed at room temperature. The resolution was 4 cm−1, the scanning frequency was 32 scans/min, and the spectral wavenumbers ranged from 4000 to 500 cm−1. The samples were prepared by casting a film onto a potassium bromide (KBr) window from a 5 wt% solution in carbon tetrachloride (CCl4).
Thermogravimetric Analysis (TGA)

A TGA-100 thermal gravimetric analyzer (Shanghai All Instrument Equipment Co., Ltd., Shanghai, China) was applied for the TGA of the SBS modifier and the GQDs/SBS modifier. Under a nitrogen atmosphere, samples of about 7 mg were heated from 30 to 600 °C at a constant heating rate of 10 °C/min. The thermostability of the two modifiers was evaluated from the TG and DTG curves. To ensure accuracy and reduce errors, all experiments were performed three times.

Preparation of Modified Asphalt

This study prepared GQDs/SBS composite-modified asphalt and SBS-modified asphalt (control group) using the melting-thawing mixing method. The melted AH-70 asphalt was poured into a cylindrical container, which was then heated to 180 °C. Subsequently, 3 wt% (of asphalt mass) compatibilizer (extract oil) and 4 wt% modifier were added successively. Next, the mixture was processed by high-speed shearing for 30 min at 4000 r/min. The temperature was then lowered to 170 °C and the stirring rate to 750 r/min; 0.25 wt% stabilizer was added, and stirring continued for 3 h. After full development, modified asphalt with stable performance was obtained.

Rheological Test

The rheological properties of the modified asphalt samples were characterized using a dynamic shear rheometer (DSR, TA Instruments, New Castle, DE, USA) with parallel plates of 8 mm and 25 mm diameter. Firstly, the linear viscoelastic interval of the samples was determined through a stress and strain sweep. Secondly, a small-angle oscillatory shear test was carried out within the determined linear viscoelastic interval, and isothermal frequency sweeps (0.1-50 rad/s) were acquired at 30, 45, 60, and 75 °C. The specific procedure was as follows: first, about 0.1 g of sample was placed on the lower plate; second, the parallel plates were installed on the rheometer and the initial temperature was set; after the sample softened, the upper plate was lowered to squeeze out excess sample; finally, the gap between the plates was set to 1 mm (25 mm plate) or 1.5 mm (8 mm plate). Temperature sweeps ranged from 58 to 95 °C at a heating rate of 1 °C/min and a frequency of 10 rad/s. The multi-stress repeated creep test was carried out at 100 and 3200 Pa; each stress level comprised 10 cycles, each with 1 s of loading and 9 s of relaxation.

A bending beam rheometer (BBR, ATS, Butler, PA, USA) was used to measure the creep properties of the asphalt at low temperatures; the combination of BBR and DSR gives relatively comprehensive rheological information on asphalt over the service temperature range. The BBR uses the simply supported beam principle from engineering to characterize the cracking tendency of asphalt as the temperature drops, yielding two indexes: the creep stiffness (S) and the rate of variation of stiffness with time (the m-value).
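To make these two BBR indexes concrete, here is a minimal numerical sketch (not the authors' code): it computes S(t) from elementary beam theory using the standard AASHTO T313 load and beam dimensions, and the m-value as the slope of log S versus log t at 60 s. The deflection readings are invented for illustration, and the pass/fail check at the end anticipates the PG criteria quoted in the next paragraph.

```python
# Sketch: compute BBR creep stiffness S(t) and m-value from deflection data.
# Load and beam geometry are the standard AASHTO T313 values; the deflection
# series below is fabricated for illustration only.
import numpy as np

P = 980.0    # applied load, mN
L = 102.0    # support span, mm
b = 12.7     # beam width, mm
h = 6.35     # beam thickness, mm

t = np.array([8.0, 15.0, 30.0, 60.0, 120.0, 240.0])      # loading times, s
delta = np.array([0.18, 0.22, 0.28, 0.36, 0.47, 0.62])   # deflection, mm (illustrative)

# Elementary beam theory: S(t) = P L^3 / (4 b h^3 delta(t)).
# With these units the result is in kPa (mN/mm^2), so divide by 1000 for MPa.
S = P * L**3 / (4.0 * b * h**3 * delta) / 1000.0

# m-value: slope of log S vs log t, from a quadratic fit evaluated at t = 60 s
coef = np.polyfit(np.log10(t), np.log10(S), 2)
m60 = abs(2.0 * coef[0] * np.log10(60.0) + coef[1])
S60 = S[t == 60.0][0]

print(f"S(60 s) = {S60:.0f} MPa, m(60 s) = {m60:.3f}")
# PG low-temperature criteria: S(60 s) <= 300 MPa and m(60 s) >= 0.3
print("passes PG criteria:", S60 <= 300.0 and m60 >= 0.3)
```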
To avoid cracking of the asphalt at low temperatures, the Performance Grade (PG) classification norms require that S after 60 s of BBR loading be no higher than 300 MPa and that the m-value be no smaller than 0.3. The BBR test temperature ranged between −18 and −24 °C.

In this study, the viscoelasticity over a wide frequency and temperature range was obtained via the time-temperature equivalence principle, since such viscoelasticity, spanning many orders of magnitude, can hardly be measured directly. The time-temperature equivalence principle states that the influence of extended time (or decreased frequency) on the mechanical properties of a material is equivalent to that of a temperature rise. Under conditions satisfying this principle, viscoelastic parameters measured at different temperatures can be synthesized into master curves using shift (translocation) factors.

Performance Characterization of Asphalt Mixture

The SBS-modified asphalt (control group) and the prepared GQDs/SBS composite-modified asphalt were used as binders. The AC-20 asphalt mixture, commonly used in asphalt pavement surface courses, was designed by the Marshall design method according to China's Construction Technological Norms on Highway Asphalt Pavement (JTG F40-2004). The grading curve is shown in Figure 3. Combined with engineering experience, the optimal oil-stone ratio was determined to be 4.5 based on the target voidage of 4.0%. All evaluated asphalt mixtures had the same grading and optimal asphalt content.

Specimens of the two asphalt mixtures were molded according to the Test Regulations on Highway Engineering and Asphalt Mixture (JTG E20-2011) of China, and their properties, including high-temperature performance, low-temperature performance, and water stability, were analyzed. Since the grading and asphalt contents of the two mixtures were the same, their volume indexes were similar, and differences in their performance indexes were mainly determined by the different performances of the asphalt cements.
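As an aside on the master-curve synthesis described above, the sketch below shows one generic way to merge frequency sweeps measured at several temperatures: for each temperature, the horizontal shift factor log10(aT) is chosen to minimize the mismatch with the master curve accumulated so far. The data are synthetic and the routine is illustrative, not the authors' actual processing.

```python
# Sketch: merge frequency sweeps measured at several temperatures into one
# master curve by time-temperature superposition. For each temperature, the
# horizontal shift log10(aT) minimizes the mismatch with the master curve
# built so far. All data are synthetic.
import numpy as np
from scipy.optimize import minimize_scalar

def find_shift(log_w_ref, log_g_ref, log_w, log_g):
    """Return log10(aT) aligning (log_w, log_g) with the reference curve."""
    def mismatch(log_at):
        shifted = log_w + log_at
        lo = max(shifted.min(), log_w_ref.min())
        hi = min(shifted.max(), log_w_ref.max())
        mask = (shifted >= lo) & (shifted <= hi)
        if mask.sum() < 2:
            return 1e9                       # curves do not overlap
        pred = np.interp(shifted[mask], log_w_ref, log_g_ref)
        return float(np.mean((log_g[mask] - pred) ** 2))
    return minimize_scalar(mismatch, bounds=(-6, 6), method="bounded").x

# Synthetic sweeps: log10|G*| ~ 4 + 0.6*(log10(omega) + shift); the shifts
# are arbitrary illustrative values, one per test temperature.
omega = np.logspace(-1, np.log10(50), 15)     # 0.1-50 rad/s, as in the test
true_shift = {30: 0.0, 45: -1.2, 60: -2.3, 75: -3.2}
sweeps = {T: 4.0 + 0.6 * (np.log10(omega) + s) for T, s in true_shift.items()}

master_w = np.log10(omega)                    # reference temperature: 30 °C
master_g = sweeps[30]
for T in (45, 60, 75):
    log_at = find_shift(master_w, master_g, np.log10(omega), sweeps[T])
    master_w = np.concatenate([master_w, np.log10(omega) + log_at])
    master_g = np.concatenate([master_g, sweeps[T]])
    order = np.argsort(master_w)              # keep the curve sorted in omega
    master_w, master_g = master_w[order], master_g[order]
    print(f"T = {T} °C: log10(aT) = {log_at:.2f}")
```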
FTIR Functional Group Analysis

The FT-IR spectra of the SBS modifier and the GQDs/SBS composite modifier are shown in Figure 4. The IR region (wavenumbers from 4000 to 400 cm−1) is divided into a functional group zone (4000 to 1330 cm−1) and a fingerprint zone (1330 to 400 cm−1) [30]. It can be seen from Figure 4 that SBS shows obvious methylene C-H asymmetric and symmetric stretching vibration peaks at 2917 and 2848 cm−1, as well as multiple absorption peaks between 3100 and 2950 cm−1, which are stretching vibration absorption peaks of unsaturated hydrocarbons. The absorption peaks occurring simultaneously at 1630, 1600, 1560, and 1422 cm−1 correspond to the stretching vibration of the aromatic ring skeleton (-CH2-). The vibrations within 1390-1000 cm−1 are the stretching vibration of the C-O bond and the single-bond skeleton vibration of C-C. In addition, the absorption peaks near 697, 730, and 749 cm−1 are caused by the vibration absorption of monosubstituted benzene. The absorption peak near 972 cm−1 is caused by the twisting vibration of the C=C bond, while the peak near 915 cm−1 is the infrared characteristic absorption peak of polybutadiene, caused by the out-of-plane wagging vibration of =CH2. When the petroleum asphalt-based GQDs are added, the peaks of the GQDs/SBS composite modifier at these positions are all strengthened. Moreover, a broad absorption peak appears at 3307 cm−1, a combined peak of the hydroxyl and amino stretching vibrations of the petroleum asphalt-based GQDs. Meanwhile, there are obvious shoulder peaks at 1650-1580 cm−1, which are stretching vibration peaks of the benzene ring. This indicates that GQDs and SBS undergo polymerization reactions to form stable covalent bonds, which are sufficient to avoid the phase separation that might occur during the simple physical mixing of nanocomposites. In addition, the GQDs/SBS modifier contains more oxygen-containing functional groups (e.g., -C=O and -C-O). These functional groups mainly come from the GQDs, and the increased oxygen content also improves the polarity of the GQDs/SBS composite modifier, thus increasing its compatibility with asphalt [31].
TGA

The thermostability of the modifier is an important property to consider when analyzing the structural characteristics of asphalt binders. In this study, the thermal stability of the GQDs/SBS composite modifier and the SBS modifier was examined by TGA. It can be seen from Figure 5 and Table 2 that the TGA curves of the GQDs/SBS composite modifier and SBS modifier present the same trend, both experiencing two major stages of mass loss; however, their thermodynamic behaviors are significantly different. Both modifiers enter the first stage of mass loss before 340 °C. In this stage, mass loss is mainly attributed to the volatilization of crystal water adsorbed onto the sample surface as well as the decomposition of some oxygen-containing functional groups in the molecules (-OH and -COOH). Since the GQD surface carries many oxygen-containing functional groups, the mass-loss rate of the GQDs/SBS composite modifier is far higher than that of the SBS modifier in the first stage.
The second stage of mass loss occurs in the temperature range of 340-490 °C. The mass loss of the modifiers in this stage is mainly attributed to the decomposition of SBS into small molecules and their volatilization; this is the major stage of mass loss. It can be seen from the TG curves that the initial decomposition temperatures of the GQDs/SBS composite modifier and SBS modifier are both at about 416 °C (the tangent initial point of the TG curve), and the pyrolysis termination temperature is about 480 °C. The pyrolysis termination temperature of the GQDs/SBS composite modifier (478.8 °C) is slightly lower than that of the SBS modifier (479.9 °C), a very small difference. After pyrolysis, the residual mass of the GQDs/SBS composite modifier is 1.78% (a mass loss of 98.22%), while the residual mass of the SBS modifier is only 0.05% (a mass loss of 99.95%). This demonstrates that, within this temperature range, the SBS modifier loses almost all of its mass under the N2 atmosphere: it is decomposed into small molecules and volatilized without producing any residual carbon. The GQDs/SBS composite modifier, however, is decomposed incompletely under the N2 atmosphere and produces some residual carbon. This is because the carbon nucleus is the major structural unit of the GQDs in the GQDs/SBS composite modifier: after the surface oxygen-containing functional groups are lost in the first stage of mass loss, the residual carbon nuclei have very good thermostability and are not decomposed further, resulting in a high residual mass. The maximum mass-loss-rate points (DTG peak values) of the GQDs/SBS composite modifier and SBS modifier are both at 460 °C. The mass-loss rate of the SBS modifier (17.08%/min) is 0.85%/min higher than that of the GQDs/SBS composite modifier (16.23%/min). To sum up, the GQDs/SBS composite modifier has better thermostability than the SBS modifier; in other words, adding GQDs improves the thermostability of the SBS modifier.
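To illustrate how the TG/DTG quantities quoted above are obtained, the sketch below numerically differentiates a TG trace to get the DTG curve (mass-loss rate in %/min at the 10 °C/min heating rate) and reports the DTG peak and residual mass. The TG trace itself is synthetic, shaped loosely like the curves described, not the measured data.

```python
# Sketch: derive a DTG curve from TG data by numerical differentiation and
# report the peak mass-loss rate, its temperature, and the residual mass.
# The TG trace below is synthetic and only loosely mimics the described curves.
import numpy as np

rate = 10.0                                  # heating rate, °C/min
T = np.arange(30.0, 601.0, 1.0)              # temperature, °C

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic TG trace: a small loss before ~340 °C (functional groups, water)
# and a major loss centered near 460 °C (SBS decomposition).
mass = 100.0 - 4.0 * sigmoid((T - 250.0) / 40.0) - 94.2 * sigmoid((T - 460.0) / 12.0)

# DTG in %/min: dm/dT (%/°C) times the heating rate (°C/min)
dtg = -np.gradient(mass, T) * rate

i_peak = np.argmax(dtg)
print(f"DTG peak: {dtg[i_peak]:.1f} %/min at {T[i_peak]:.0f} °C")
print(f"residual mass at 600 °C: {mass[-1]:.2f} %")
```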
Conventional Physical Properties of Modified Asphalt

The physical properties of the GQDs/SBS composite-modified asphalt and SBS-modified asphalt are listed in Table 3. Clearly, the cone penetration and ductility of the GQDs/SBS composite-modified asphalt decrease, while the softening point increases, compared with those of the SBS-modified asphalt. This implies that adding GQDs can improve the high-temperature performance of SBS-modified asphalt but decreases the low-temperature performance to some extent.

In addition, temperature influences the high-temperature flow characteristics of asphalt. The flow characteristics of different samples show different degrees of sensitivity to temperature changes; in other words, asphalts have different temperature sensitivities, with distinct high-temperature and low-temperature zones. The temperature sensitivity of the high-temperature zone is closely related to construction characteristics such as the mixing of the asphalt mixture and the pumping of asphalt. In this study, a Brookfield rotary viscometer with a 27# rotor was used to measure the viscosity of the GQDs/SBS composite-modified asphalt and SBS-modified asphalt within the temperature range of 110-175 °C. The variation curves of viscosity with temperature are shown in Figure 6a. With increasing temperature, the viscosity of both the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt declines sharply at first and then levels off, because the modified asphalt changes gradually from a non-Newtonian to a Newtonian fluid at high temperatures. At the same temperature, the viscosity of the GQDs/SBS composite-modified asphalt is lower than that of the SBS-modified asphalt, and the difference between the two decreases with increasing temperature. This reflects that chemical crosslinking between GQDs and SBS is disadvantageous to the strength of the polymer.
The Saal model (Equation (1)), proposed in ASTM D2493, was further applied to process the viscosity-temperature curves; it can characterize the temperature sensitivities of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt.

lg(lg(η × 1000)) = n + m · lg(T + 273.13)    (1)

where m is the slope of the regression line; n is the intercept of the regression line on the lg(lg(η × 1000)) axis; η is the viscosity (Pa·s); and T is the temperature (°C). The Saal fits of the viscosity-temperature curves of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt are shown in Figure 6b, and the parameters of the corresponding Saal models are listed in Table 4. The slope m in the Saal model is defined as the viscosity-temperature sensitivity (VTS); a smaller absolute value of VTS indicates that the viscosity changes more slowly with temperature, and the temperature sensitivity is better. It can be seen from Figure 6 and Table 4 that the absolute value of the VTS of the GQDs/SBS composite-modified asphalt is higher than that of the SBS-modified asphalt, indicating that adding GQDs is disadvantageous for the temperature sensitivity of SBS-modified asphalt.

Rheological Properties of Modified Asphalt

The usability of asphalt pavement is determined, to a very large extent, by the viscoelastic properties of the modified asphalt binder. The linear viscoelasticity of modified asphalt is very sensitive to the motion and interaction of the polymer molecular chains. Moreover, the complexity of different polymer modification systems may influence the internal structure of the modified asphalt and hence its rheological characteristics. Rheological parameters in the linear viscoelastic interval are independent of changes in stress and strain and are related only to the properties of the materials [32]. Therefore, linear viscoelasticity and dynamic rheological tests are very effective methods for elaborating the influences of modifiers on the performance of modified asphalt and for studying the influences of polymers on the viscoelasticity of asphalt.
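Before turning to the dynamic rheology results, the Saal fit of Equation (1) above can be illustrated concretely. The following minimal Python sketch regresses lg(lg(η × 1000)) against lg(T + 273.13) to obtain the VTS; the viscosity readings are placeholders, not the measured data behind Table 4.

```python
import numpy as np

# Placeholder Brookfield readings: temperature (deg C), viscosity (Pa*s).
T = np.array([110.0, 125.0, 135.0, 150.0, 160.0, 175.0])
eta = np.array([3.2, 1.9, 1.4, 0.85, 0.62, 0.40])

# Equation (1): lg(lg(eta * 1000)) = n + m * lg(T + 273.13)
x = np.log10(T + 273.13)
y = np.log10(np.log10(eta * 1000.0))

m, n = np.polyfit(x, y, 1)   # slope m is the VTS, n the intercept
print(f"VTS m = {m:.3f}, intercept n = {n:.3f}")
# A smaller |m| means the viscosity changes more slowly with temperature.
```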
Frequency Scanning under Middle and High Temperatures

The master curve of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt at 30 °C is shown in Figure 7; it was obtained by shifting the frequency scanning curves measured at 30, 45, 60, and 75 °C. It can be seen from Figure 7 that over the whole frequency scanning range, at the same frequency, the complex modulus (G*) of the GQDs/SBS composite-modified asphalt is higher than that of the SBS-modified asphalt. Moreover, the master curves of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt differ significantly in the low-ω zone, and this difference decreases with increasing frequency. According to the time-temperature equivalence principle, the low-frequency zone corresponds to the high-temperature zone. Hence, the GQDs/SBS composite-modified asphalt has better high-temperature performance than the SBS-modified asphalt; in other words, adding GQDs improves the rutting resistance of the SBS-modified asphalt. Additionally, the G* values of the GQDs/SBS composite-modified and SBS-modified asphalts in the high-ω zone are close to the same value, indicating that GQDs have little influence on the viscoelastic performance of the SBS-modified asphalt in the high-ω zone. It can also be seen from the master curves in Figure 7a that the time-temperature equivalence principle is highly applicable to both the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt. Variations of the shift factor with temperature are shown in Figure 7b; the shift factors of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt are clearly different. The variations of the shift factor with temperature were fitted using an Arrhenius-like equation (Figure 8). Differences in the shift factors of different samples can be distinguished quantitatively by the activation energy of the Arrhenius-like equation, which is related to the temperature sensitivity of the materials. This further proves that adding GQDs improves the high-temperature performance of the SBS-modified asphalt.
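The Arrhenius-like fit of the shift factors can be sketched as follows. This is a minimal Python illustration, assuming shift factors lg(aT) referenced to the 30 °C master curve; the numerical values are placeholders, and the resulting activation energy is only indicative.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314                  # gas constant, J/(mol*K)
T_ref = 30.0 + 273.15      # reference temperature of the master curve, K

# Placeholder shift factors lg(aT) at the scan temperatures:
T = np.array([30.0, 45.0, 60.0, 75.0]) + 273.15
log_aT = np.array([0.0, -1.8, -3.3, -4.6])

def arrhenius_log_aT(T, Ea):
    # lg(aT) = Ea / (2.303 * R) * (1/T - 1/T_ref)
    return Ea / (2.303 * R) * (1.0 / T - 1.0 / T_ref)

popt, _ = curve_fit(arrhenius_log_aT, T, log_aT, p0=[2.0e5])
print(f"activation energy: {popt[0] / 1000.0:.0f} kJ/mol")
```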
Temperature Scanning

In this study, asphalt samples were scanned over a wide temperature range (58-95 °C). The variations of the storage modulus (G′) and loss modulus (G″) with temperature are shown in Figure 9. Both G′ and G″ decrease dramatically with increasing temperature; the reduction rates of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt differ and finally tend to level off. Over this wide temperature range, the reduction rates of the G′ and G″ of the GQDs/SBS composite-modified asphalt with rising temperature are lower than those of the SBS-modified asphalt. This reflects that, compared to the SBS-modified asphalt, the GQDs/SBS composite-modified asphalt has better temperature sensitivity over a wide range.

The rutting resistance of asphalt can be characterized by the rutting factor G*/sinδ and by the failure temperature, defined as the temperature at which G*/sinδ = 1.0 kPa. The higher the G*/sinδ and the failure temperature, the better the high-temperature stability of the asphalt. It can be seen from Figure 10 that in the middle- and high-temperature intervals, the asphalt is mainly in the viscous flow state, and the elasticity and strength of the system are provided by the polymers. The variation of the G*/sinδ of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt with temperature is consistent with the variation of G′ and G″. Their temperatures at G*/sinδ = 1.0 kPa are 84.82 and 86.20 °C, respectively. This further demonstrates that GQDs increase the hardness of the SBS-modified asphalt, so that it presents better mechanical properties and better resistance to deformation.

MSCR

After the reciprocal action of vehicle loads over a long period, asphalt pavement may develop shear creep deformation and form ruts.
The multiple stress creep recovery (MSCR) test has been used in recent years to evaluate the high-temperature performance of modified asphalt. MSCR applies 10 loading cycles to a sample; in each cycle, the load is applied for 1 s and then removed for 9 s of recovery. In this study, MSCR tests were carried out at 60 °C under two stress levels (100 Pa and 3200 Pa). In the MSCR test, the recovery rate (R) and the non-recoverable creep compliance (Jnr) are calculated from the recoverable and non-recoverable strains, respectively. The strain responses of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt after 10 cycles at 60 °C and the two stress levels (100 Pa and 3200 Pa) are shown in Figure 11. For one creep-recovery cycle, the strain of the GQDs/SBS composite-modified asphalt at the end of the creep stage and its strain at the end of the recovery stage are both smaller than those of the SBS-modified asphalt, which is attributed to the added GQDs. For a quantitative comparison of the high-temperature performance of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt, the R and Jnr of the two samples at 60 °C under the two stress levels (100 Pa and 3200 Pa) are shown in Figure 12. With the addition of GQDs, R increases while Jnr decreases: under the same conditions, the R of the GQDs/SBS composite-modified asphalt is higher than that of the SBS-modified asphalt, while its Jnr is smaller. This implies that adding GQDs increases the high-temperature rutting resistance of the asphalt. In addition, the R values of both the SBS-modified asphalt and the GQDs/SBS composite-modified asphalt decrease with increasing stress, while the Jnr of the two samples increases to some extent. In a word, increasing vehicle loads may significantly weaken the recovery capacity of asphalt pavement under high summer temperatures, thus causing rutting damage.

Low-Temperature Creep Properties

Asphalt pavement may crack at low temperatures. Since there is a binding force between the asphalt mixture layer and the layer below it, shrinkage is hindered and displacements are produced, generating tensile stress. Cracks occur when the tensile stress exceeds the tensile strength of the asphalt mixture. This requires the asphalt to have a high creep rate to release the stresses generated at low temperatures or during cooling. Since the DSR cannot test asphalt, which is very hard at low temperatures, the BBR is usually applied to measure the creep properties of asphalt at very low temperatures.
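Before turning to the BBR results, the MSCR indices described above can be made concrete. The sketch below, in Python, computes per-cycle R and Jnr from the strain at the start of loading, at the end of the 1 s creep stage, and at the end of the 9 s recovery stage, averaged over the 10 cycles as in the test description; all strain values are placeholders rather than measured data.

```python
import numpy as np

def mscr_parameters(eps_start, eps_creep_end, eps_recovery_end, stress_kpa):
    """Average recovery R (%) and non-recoverable compliance Jnr (1/kPa)
    over the 10 creep-recovery cycles of an MSCR test.

    eps_start        -- strain at the start of each 1 s loading stage
    eps_creep_end    -- strain at the end of each 1 s loading stage
    eps_recovery_end -- strain at the end of each 9 s recovery stage
    stress_kpa       -- creep stress (0.1 or 3.2 kPa here)
    """
    eps1 = eps_creep_end - eps_start        # strain accumulated while loaded
    eps10 = eps_recovery_end - eps_start    # strain remaining after recovery
    R = np.mean((eps1 - eps10) / eps1) * 100.0
    Jnr = np.mean(eps10) / stress_kpa
    return R, Jnr

# Placeholder strains for 10 cycles at 100 Pa (= 0.1 kPa):
eps_start = np.linspace(0.00, 0.09, 10)
R, Jnr = mscr_parameters(eps_start, eps_start + 0.020, eps_start + 0.010, 0.1)
print(f"R = {R:.1f} %, Jnr = {Jnr:.3f} 1/kPa")
```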
BBR uses the small-beam principle to characterize the cracking tendency of asphalt as the temperature declines. Two indices can be obtained from BBR: the creep stiffness (S) and the creep rate (m). These two indices characterize the load resistance and relaxation ability of the asphalt. If S is too large, the possibility of cracking is high. If m is relatively low, the relaxation ability is insufficient to release the stress produced by the drop in temperature, and the probability of cracking increases. Variations of the S and m of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt at −18 and −24 °C, measured by BBR over time, are shown in Figure 13. S drops quickly with increasing loading time, while m increases significantly; the rates of change of S and m differ between the samples. At the same loading time, the m of the GQDs/SBS composite-modified asphalt at −18 °C is smaller than that of the SBS-modified asphalt, whereas at −24 °C it is higher; S presents the opposite trend. To further compare the low-temperature performance of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt, S/m at 60 s was used to characterize the low-temperature crack resistance of the asphalt: the lower the S/m, the stronger the crack resistance and the better the low-temperature performance. It can be seen from Figure 14 that at the same temperature, the S/m of the GQDs/SBS composite-modified asphalt is higher than that of the SBS-modified asphalt, indicating that the GQDs/SBS composite-modified asphalt has poorer low-temperature crack resistance.

High-Temperature Stability

In the present study, the high-temperature stabilities of the SBS-modified asphalt mixture and the GQDs/SBS composite-modified asphalt mixture were evaluated by the dynamic stability in the high-temperature (60 °C) rutting test.
The high-temperature stability test results are shown in Figure 15. It can be seen from Figure 15 that the dynamic stability of the GQDs/SBS composite-modified asphalt mixture is significantly higher than that of the SBS-modified asphalt mixture, indicating that adding GQDs increases the cohesive force of the asphalt. Moreover, more compact structures are formed by adjusting the skeleton of the asphalt mixture during compaction, thus increasing the internal friction angle. With the increase in cohesive force and internal friction angle, the shear strength of the asphalt mixture increases, giving it good high-temperature stability.

Low-Temperature Crack Resistance

The resistance of the asphalt mixture to low-temperature cracking was evaluated through a low-temperature small-beam bending test. Small beam specimens (250 mm (length) × 30 mm (width) × 35 mm (height)) were used, and the loading rate and temperature were set to 50 mm/min and −10 °C, respectively. The low-temperature crack resistance test results are shown in Figure 16. It can be seen from Figure 16 that the maximum bending strain of the GQDs/SBS composite-modified asphalt mixture at low-temperature failure is 14.5% lower than that of the SBS-modified asphalt mixture. However, all test results meet the standard requirements, indicating that adding GQDs decreases the tenacity and temperature sensitivity of the asphalt mixture in the low-temperature state; as a result, the low-temperature crack resistance declines accordingly.

Water Stability

The water stabilities of the two modified asphalt mixtures were evaluated by the freeze-thaw splitting test.
After freeze-thaw cycles of the specimens based on the Marshall test, the freeze-thaw splitting residual strength was tested, enabling us to analyze the resistance of the asphalt to water damage under harsh environments. The water stability test results are shown in Figure 17. It can be seen from Figure 17 that the residual strengths of the GQDs/SBS composite-modified asphalt mixture before and after freezing and thawing decrease compared with those of the SBS-modified asphalt mixture. This reveals that adding GQDs decreases the adhesion between the asphalt and the aggregate, thus decreasing the resistance of the asphalt mixture to water damage. However, the residual strength still meets the requirements of the technical specifications, implying that the prepared GQDs/SBS composite modifier influences the water stability of the mixture only slightly.

Conclusions

In this study, the GQDs/SBS composite modifier was prepared using the Pickering emulsion polymerization method. In addition, the physical and chemical properties of the GQDs/SBS composite modifier, the physical and rheological properties of the binders, and the pavement performance of the GQDs/SBS composite-modified asphalt mixture were investigated. From the results and discussion, the following conclusions can be drawn: (1) The GQDs/SBS composite modifier is prepared by the simple Pickering emulsion polymerization method. GQDs can disperse evenly into the SBS modifier to form a uniform composite. The GQDs/SBS composite modifier contains more oxygen-containing functional groups than the SBS modifier. Furthermore, the pyrolysis rate of the GQDs/SBS composite modifier is lower than that of the SBS modifier and its residual mass is higher, showing better thermostability. (2) The conventional physical properties and rheological properties of the GQDs/SBS composite-modified asphalt and the SBS-modified asphalt are compared. The GQDs/SBS composite-modified asphalt shows a higher softening point, complex shear modulus, activation energy, rutting factor, and recovery rate than the SBS-modified asphalt, thus showing better high-temperature performance. However, the cone penetration and ductility of the GQDs/SBS composite-modified asphalt decrease while S/m increases, indicating that its low-temperature performance is worsened.
(3) The pavement performances of the GQDs/SBS composite-modified asphalt mixture and the SBS-modified asphalt mixture are compared. The high-temperature stability of the GQDs/SBS composite-modified asphalt mixture is improved to some extent compared to that of the SBS-modified asphalt mixture, while its water stability changes only slightly and its low-temperature performance declines to some extent.
2022-04-14T15:12:25.779Z
2022-04-11T00:00:00.000
{ "year": 2022, "sha1": "74f7d85b36076089e61cc4edebfbc482a4a23205", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-6412/12/4/515/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "45204f1c4ba17df597cdcd994feadc4b7732b197", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
252570771
pes2o/s2orc
v3-fos-license
Leveraging innovation, education, and technology for prevention and health equity: Proceedings from the cardiology oncology innovation ThinkTank 2021

Introduction

Cardio-oncology has emerged as a distinct cardiology subspecialty over the last decade. While many still consider this field to be limited to anthracyclines and heart failure, cardio-oncology has expanded to include the entire spectrum of cardiology (ranging from rhythm disturbances to vascular toxicities) with regard to the adverse effects of cancer therapies. Modern cancer therapies continuously produce an uncharted territory of cardiovascular toxicities, such as recent developments and insights in cardiotoxicity associated with immune checkpoint inhibition. Thus, cardiologists, oncologists, and other specialists and consultants have come together to pursue innovation in the Cardiology Oncology Innovation Network (COIN). A range of specialists in the network gathered in August 2021 for the first ever COIN ThinkTank. The agenda items for the inaugural COIN ThinkTank 2021 meeting were determined by network members at previous COIN gatherings. The primary objective of the ThinkTank was to investigate collaborative knowledge gaps and provide a platform to facilitate the development and implementation of various forms of innovation and cross-platform communication. Our goal was to contribute to an international initiative that will propel cardio-oncology forward into the era of digital transformation and health equity. Each topic discussed at the ThinkTank was therefore considered in the context of exploring prevention, addressing inequity, and strengthening the involvement of oncologists in cardio-oncology. The following were the predetermined topics that anchored discussions in breakout rooms during the ThinkTank:

• Artificial intelligence (AI) and digital health in cardio-oncology,
• The role of informatics in the global cardio-oncology registry (G-COR),
• Education on innovation and the development of innovative educational techniques in cardio-oncology.

Specific emphasis was placed on how to advance prevention efforts, eliminate racial and ethnic disparities, and increase collaboration among cardiologists and oncologists in open discussion with ample room for ideas and innovations (Figure 1). In addition to these predetermined discussion topics, participants provided additional input on future ThinkTank items, including translational research and building a consortium. Here, we introduce the format, content, and participants of the ThinkTank, and we summarize these discussions and their implications for the future of cardio-oncology in the context of the tripartite COIN mission: innovation, collaboration, and education.
Format and structure of ThinkTank

The 2021 COIN ThinkTank was held on August 7, 2021 and was designed to facilitate meaningful discussion and the dissemination of information. An interactive meeting on Zoom was convened for 3 h. Dr. Sherry-Ann Brown welcomed the attendees and provided an introduction. The majority of the remaining time was devoted to three 40-min breakout room sessions, separated by 5-min virtual exhibit engagement breaks and room transfers. Small working groups discussed the following topics in the context of prevention and disparities and of increasing the involvement of colleagues in oncology: AI and digital health in cardio-oncology (innovation); the role of informatics in G-COR, hosted in Research Electronic Data Capture (REDCap) Cloud (collaboration); and education on innovation and innovative delivery methods for education in cardio-oncology (education). After introductions in the main room, the breakout rooms were opened, and individuals who were not serving as breakout room leaders selected their desired destination breakout rooms. Three breakout room sessions, each lasting 40 min, were facilitated in order to offer discussants the opportunity to rotate among all three breakout rooms before the end of the ThinkTank. Consequently, breakout room leaders remained within their designated breakout rooms while participants rotated among the rooms as desired. This allowed the accumulation of summary points while new ideas were incorporated by different individuals in each room. The breakout room leader roles included a group facilitator, a scribe, a timekeeper, and technical support. The group facilitator led the group discussions. The scribe took notes and reported out for the group toward the end of the ThinkTank. The timekeeper let the group facilitator know when 40 min had passed during discussions. Finally, the individual in the technical support role checked in with the main room and breakout rooms to ensure all participants were in the rooms they desired.

The individuals in breakout room 1, focused on AI and digital health, sought to answer the following questions.

• How can we collaborate on AI and digital health to advance prevention efforts in cardiology, oncology, and cardio-oncology?
• How can we collaborate on AI and digital health to advance efforts at eliminating ethnic and racial disparities in cardiology, oncology, and cardio-oncology?
• How can we increase collaborations among cardiologists and oncologists on topics relevant to AI and digital health?

The participants in breakout room 2, focused on the role of informatics in G-COR, sought to answer the following questions.

• How can we collaborate on the role of informatics in G-COR to advance prevention efforts in cardiology, oncology, and cardio-oncology?
• How can we collaborate on the role of informatics in G-COR to advance efforts at eliminating ethnic and racial disparities in cardiology, oncology, and cardio-oncology?
• How can we increase collaborations among cardiologists and oncologists on topics relevant to the role of informatics in G-COR?

The attendees in breakout room 3, focused on education on innovation and innovative delivery methods for education in cardio-oncology, sought to answer the following questions.

• How can we collaborate on education on innovation and innovative delivery methods for education to advance prevention efforts in cardiology, oncology, and cardio-oncology?
• How can we collaborate on education on innovation and innovative delivery methods for education to advance efforts at eliminating ethnic and racial disparities in cardiology, oncology, and cardio-oncology?
• How can we increase collaborations among cardiologists and oncologists on topics relevant to education on innovation and innovative delivery methods for education?

Figure 1. ThinkTank priority topics selected and discussed. The Cardiology Oncology Innovation Network ThinkTank focused on priority topics preselected by network members and leaders at preceding gatherings, such as the virtual receptions at the end of daytime sessions of national or international cardiology, oncology, or cardio-oncology meetings. The priority topics selected and discussed were artificial intelligence and digital health, the role of informatics in the collaborative global cardio-oncology registry, and education on innovation coupled with innovative methods of education in cardio-oncology. These priority topics were considered in the setting of prevention of cardiovascular adverse effects from cancer therapies, addressing health disparities and equity, and increasing the presence and involvement of hematologists/oncologists in collaborations in cardio-oncology.

Following the breakout room discussions, each group was designated 5 min to report out what had been discussed in the small groups. Then cardio-oncology poetry and closing remarks were shared to conclude the event.

Expert participants

Breakout room participants were cardiology, oncology, and industry leaders, as well as patient advocates, spanning geographic locations and institutions across the United States and the world. Attendees had a range of interests within cardio-oncology, as well as diversity of career and training stages. Here we provide excerpts from some of our participants' introductions to illustrate the spectrum of expertise. A specialist with a dual role in cardio-oncology and primary care at the Royal Brompton Hospital in London described frequently following patients from primary care to oncology to cardio-oncology and back to primary care; she therefore sees the patient's entire treatment cycle, which she finds fascinating. She mentioned her interest in AI and digital technology, as evidenced by the availability of her presentation on the fundamentals of AI hosted online by the European Society of Cardiology. With Microsoft and Hitachi, her group is attempting to create a prototype cardio-oncology work management system in England; their goal is to automate many of their processes, which currently require substantial human input, in order to increase efficiency and throughput. The chief executive and innovation officer of a digital health company, co-founded with the assistance of several doctors, engineers, and financial professionals, also described her work with an insurance company, concentrated on elder care and the management of chronic conditions, with a focus on healthcare finance payments. Their projects on AI in healthcare, and previously with IBM Watson, were also discussed, guided by a patient-centered approach. A nurse case manager earlier in her career, she described her clinical background and her natural aptitude for technology as she delves into quantum computing. A faculty member at the University of Pennsylvania in Philadelphia described her work with multiple cardio-oncology trials utilizing echocardiography and, more recently, cardiac MRI.
She described her experience with the role of AI in cardiac imaging and cardio-oncology, contributing to a very fruitful discussion. Our patient advocate discussed his work creating digital media, producing webcasts, and a variety of other tasks, such as video production. He described his status as a cardio-oncology patient, having participated in the very first imatinib clinical trials; he is now no longer on cancer therapies. A director of program grants and strategic partnerships at an academic institution was also present. He reported learning a great deal in the AI breakout room and described his professional role locating funding sources for the cancer center's work. A bioengineering lecturer at Santa Clara University had previously spent many years as an engineer and executive, typically working with cardiology devices; consequently, medical device development was her area of expertise. She also provides consulting services to medical technology startup companies. She described contact with the COIN founder, who had given a guest lecture on cardio-oncology, which inspired her greatly. Therefore, she joined the network and participated in the ThinkTank to listen in this space and determine the unmet needs of physicians and patients that may benefit from her background. A cardiologist and echocardiographer from the National Institute of Cardiology in Mexico City described being extremely focused on the notion that precision medicine and AI are the means by which we can treat more patients with cardiovascular disease and cancer. He discussed the involvement of members of the Mexican Society of Cardiology in the COIN ThinkTank, along with colleagues attending from Colombia, Argentina, and elsewhere in South America. A pediatric cardiologist, echocardiography specialist, and bariatric interventionalist from Mexico City, who focuses on the evaluation of children undergoing cancer treatment and subsequent childhood cancer survivors, described working to disseminate information from related fields, beginning with raising awareness of cardio-oncology in Mexico. All of these experts and several others gathered to discuss these innovative topics and how to navigate them together in cardio-oncology.

Key insights and professional guidance

Innovation

Advances in artificial intelligence and digital health

In parallel, progress has occurred in oncology, cardiology, and other fields invested in AI and digital transformation. We can learn from these accomplishments to understand innovations in AI. We can apply these discoveries to the interrogation of existing databases such as SEER (Surveillance, Epidemiology, and End Results) and Medicare, in collaboration with each other and with statisticians. Appropriate resources and funding, in the form of National Institutes of Health (NIH) and institutional grants and beyond, are needed to facilitate these collaborations, which provide valuable information to drive innovation forward in cardio-oncology. AI could be applied to the development of a curated repository for cardio-oncology imaging, including tests such as MRI, echocardiography (including at the point of care), CT, ECG, and PET/CT. These imaging tests are frequently obtained for patients as part of their cancer surveillance and staging. Cardiologists can partner with oncologists and radiologists to streamline the gathering of these studies for AI work in cardio-oncology. It is imperative that we work together to channel these potential opportunities and maximize opportunities for AI.
These efforts require substantial work and time, but the results are worth it. Interrogation of existing databases, such as Medicare and SEER, can be challenging, and AI may help simplify and interpret some of this output. A statistician can also be key in these studies. These are all reasonable opportunities to pursue. Based on the results, an app could be created in the future to incorporate additional validation studies. Individuals from cardiology, oncology, and other specialty areas participate in COIN. As various areas of medicine, healthcare, and industry have already begun using AI, we can leverage this pre-existing expertise and develop creative solutions. We can explore pre-existing AI work, learn from it, and invite various experts in AI to serve as mentors as we collectively pursue these studies funded by governmental and non-governmental organizations. In the future, beyond AI alone, we will also incorporate efficiency through quantum computing, as well as remote health monitoring, telehealth, and digital health. We anticipate ongoing partnership with companies and with computer programming and biomedical engineering programs that develop software as well as hardware. Therefore, we will have the ability to co-create and add components, whether to watches with ECG capabilities or to other wearable devices with a variety of medical tools.

Present and future artificial intelligence applications

AI applied to cardiac MRI can be used to obtain information regarding myocardial structure and mapping, to gain insight into global and regional heart function and muscle contractility. These can give insight into tissue composition, which can be coupled with more advanced information about metabolomics and other -omics in precision imaging. By using MRI for early markers, separating patients based on treatment and cancer type, and utilizing AI algorithms to calculate strain and ejection fraction, the cardio-oncology clinic can be transformed. What training is required, and what is the potential of this technology? What will the future hold? It is possible to have individuals without echocardiography training perform echocardiograms, augmented by AI in real time. AI guides the person obtaining the echocardiogram to acquire the appropriate images and can automatically calculate the ejection fraction. This work is also being pursued for automatic point-of-care assessment of strain. In such ways, AI might be able to transform the cardio-oncology clinic. As echocardiographers in cardio-oncology, we of course maintain that individuals should generally be trained formally in echocardiography. Nevertheless, where this is not readily available, point-of-care AI-guided echocardiography may become key. Some companies are currently researching myocardial strain for early detection of subclinical myocardial dysfunction, to improve options for disease prevention. As a result, we are attempting to determine the best parameters for predicting post-treatment ejection fraction decline. AI can be used in this way, applied to cardiac imaging as well as to various studies examining the effects of radiation, chemotherapy, and other forms of cancer treatment for breast, lung, and other cancers on the cardiovascular system.
Future applications of AI and digital health innovation in cardio-oncology will also involve using the Substitutable Medical Applications and Reusable Technologies (SMART) on Fast Healthcare Interoperability Resources (FHIR) architecture to incorporate apps into Epic via the Epic App Orchard, working in and with virtual reality, and working routinely with data scientists and multidisciplinary scholars on various research efforts. Innovative methods for file sharing and data storage will be needed, in addition to applications for large multi-institutional grants to kickstart funding. Working with Nvidia's GTX, which has quantum computing capabilities, should also be considered.

Collaboration

Informatics in G-COR

The global cardio-oncology registry (G-COR) is a recently launched multi-center, global registry (1). Informatics can play a variety of roles in G-COR, including data abstraction and curation, risk calculator generation and output, and tracking the evolution of the cardiovascular health of participants in the registry. G-COR is based in over 20 countries and has over 120 academic and community centers as participants. Prospective collection of clinical data from patients with breast cancer has begun in the pilot phase of the program. Subsequent stages will incorporate data for patients with hematological malignancies and those treated with immune checkpoint inhibitors for various cancers. This global registry will generate vast quantities of clinical data, providing a unique opportunity to place a strong emphasis on disparities in access to cardio-oncology, barriers to care access, and regional disparities. New developments in automated data extraction from electronic health record systems (e.g., Epic) will be investigated to further facilitate the entry of accurate and comprehensive data into this registry. Leveraging data for transformation in cardio-oncology will be a focus of the use of informatics in the registry. The ThinkTank identified two additional potential benefits of applying informatics in G-COR: (1) assessing valuable feedback from the participating centers, which are typically led by experienced cardio-oncologists; and (2) providing feedback to these centers, allowing them to compare their numbers and data to the global registry in order to assess their own strengths and weaknesses. The latter would be anticipated to have a direct effect on local healthcare policies and protocols.

Facilitating prevention through collaboration in G-COR

Preventive informatics can be advanced in G-COR, with the development of robust risk calculators validated in prospective cohorts in the registry. In addition, practice variations in different centers, regions, and countries should be studied. By analyzing the different approaches, it would be possible to identify the most effective pathways for preventing cardiovascular toxicity. Advanced informatics and analytics algorithms are needed to facilitate cardio-oncology referral patterns for patients at the highest risk of cardiovascular disease, in order to facilitate preventive efforts. Informatics can be used to identify cancer survivors who are being undertreated for their cardiovascular risk or disease. Informatics projects can be devised to intervene in these cases and incorporate AI and predictive analytics into the electronic health record.
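As a purely illustrative sketch of the SMART on FHIR integration pattern mentioned above, the snippet below uses the open-source Python fhirclient package to read a patient record from a FHIR server. The app id is hypothetical, and the endpoint shown is the public SMART Health IT sandbox, not Epic or any G-COR infrastructure.

```python
from fhirclient import client
import fhirclient.models.patient as pat

# Hypothetical SMART app settings; the endpoint is a public sandbox.
settings = {
    'app_id': 'coin_cardiooncology_demo',   # hypothetical app id
    'api_base': 'https://launch.smarthealthit.org/v/r4/fhir',
}
smart = client.FHIRClient(settings=settings)

# Read a sandbox Patient resource and print a human-readable name.
patient = pat.Patient.read('example', smart.server)
print(smart.human_name(patient.name[0]))
```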
For cardio-oncology programs in the United States, some groups may be able to study social determinants of health such as zip codes (see https://www.ahrq.gov/sdoh/dataanalytics/sdoh-data.html) and access to cardiologists/cardio-oncologists, with regard to their association with and ability to predict cardiovascular toxicities. Patients can potentially be supported to enter their own information, such as social determinants of health as well as adverse events (see https://healthcaredelivery.cancer.gov/pro-ctcae), if this can be pursued securely and with informed consent. State-level social determinants of health information can be collected without consent, but granularity is sacrificed. However, zip codes will not be available for collection or analysis in G-COR, since G-COR will not be collecting identifiable patient information. A consideration of bias might arise if we automate Epic data abstraction, i.e., the G-COR cohort could become skewed toward larger academic centers readily able to pursue this automation and provide large amounts of real-world data. This concern will need to be addressed by supporting data gathering from smaller centers. The same applies to Epic-based algorithms for modeling outcomes to target for prediction and prevention based on socioeconomic factors, as data need to be captured especially from centers whose patient populations are underrepresented. In many cardio-oncology centers, referrals are routed through patient/nurse navigators. These navigators can be trained in Epic predictive analytics, and patient advocates from underrepresented groups can help guide these informatics efforts to maximize their impact for these populations.

Education

Patient and clinician partnership in education on innovation

It is important to educate both doctors and patients as partners in innovation. Patients and clinicians can learn from each other and become better informed by engaging in meaningful conversation. For patients, the focus is on information that educates and empowers them to be informed participants in their health care. Thus, health education materials should be designed to reflect the context of patients' lives, explain the inter-relationships between choices, and offer practical approaches to making those choices. Additionally, input from patient advocates can be very informative for educating clinicians in cardio-oncology. Consequently, we engage patient advocates in our network for bidirectional support and education.

Innovative delivery methods for education

Patient journey map

As we innovate for our patients, it may be helpful for us to map out the patient journey, beginning with the initial cancer diagnosis and including touchpoints with family, friends, physicians, nurses, and other health care professionals. Fears can be addressed along the journey, such as limited awareness about cancer radiation and drugs, particularly regarding the assessment and comprehension of cardiovascular risk vs. benefit. An example patient journey map is one created for chronic obstructive pulmonary disease (COPD) based on social listening (2). Sometimes patients develop frustration from not knowing or understanding their disease or risk, and express fear about the unexpected. Mapping out the patient journey and the touchpoints for disease or risk assessment and management could be helpful in cardio-oncology, and such a map could be hosted collaboratively on the COIN website. It may help with educating each other and others, so that we better understand the patient journey.
As we share a map draft with colleagues and with patient advocates, we can offer others the opportunity to chime in and identify additional touchpoints. The patient journey may need to vary by country or region. Nevertheless, a detailed conceptual map can assist physicians, researchers, trainees, and entrepreneurs in identifying patients' touchpoints along the journey, with opportunities for education and intervention, as well as innovation. The COPD map was developed based on social listening, or observations from social media (2). Thus, social media and other opportunities for listening in on patient needs and frustrations can be very helpful for building the conceptual map of the patient journey. Observing which websites patients visit, where else they search for information, and what they seek to learn can help us understand patient needs and the patient journey. This information can help us all understand and consider how to address the unmet needs of our patients. It can also help entrepreneurs consider how to support physicians as we meet patient needs along this journey. New innovations and technologies that would aid this patient journey could be devised collaboratively, in academia-industry partnerships (3).

Digital collaboration

An education emphasis working group has been established on the COIN website to provide space for the group to connect and collaborate on ideas such as the patient journey concept map. Output from the group discussions can be posted on social media for others to view, discuss, and build on with additional ideas. This would serve as a great guide for much of our work in the network.

Infographics

It is important to engage all patients, and infographics can facilitate this for some patients. This could be simplified with the creation of a digital plan, similar to a drawing or painting, which can serve as a map to chart a course for raising awareness among patients, patient groups, cardiologists, oncologists, and other partners in this work. Many great patient-facing infographics are available, for example, on www.cardiosmart.org, providing an excellent approach to patient education. CardioSmart presents the infographics alongside a collection of basics about the diseases, frequently asked questions, and additional resources.

Knowledge dissemination

We encourage early exposure to cardio-oncology in health professional training environments. Social media efforts can help globalize this effort. These methods can include disseminating brief 10-15 min interview sessions with a host and a single presenter on the latest breakthroughs in innovation in cardio-oncology, incorporating patient advocacy on various topics. These videos would also be displayed on our centralized website (cardioonccoin.org), and links to the videos would be placed on social media for both patients and clinicians. This will supplement COIN continuing medical education (CME) in the future, facilitated by stable funding sources. It is essential for patients, physicians, and educational institutions to identify champions for this work. A group effort will be needed to determine who these champions are and to provide the tools to make this happen.

Patient videos

Physicians should display customized educational videos while patients are in waiting rooms. This would enhance patients' awareness of cardio-oncology and complement what they gain during their time spent directly with physicians.
In these videos, it would be beneficial for patients to hear other patients' dialogue with cardio-oncology doctors and other patients sharing information about their journeys. Such videos could become extremely valuable.

Special topics

Engagement with oncology

COIN draws on experiences of collaboration between cardiologists and oncologists at the local, state, national, and international levels, with a goal to further increase these collaborations. In Florida especially, these collaborations are between the local chapters of the American College of Cardiology and the American Society of Clinical Oncology. Similar collaborations are being forged in cardio-oncology in Illinois and California, and throughout the country and the world. Colleagues in both academic and non-academic centers are engaged, across professional societies within cardiology and oncology, to advance innovation and education. More cardiologists than oncologists have traditionally been involved, in large part because cardiologists noted the adverse effects and traced them back to cancer therapy. Thus, the desire for more collaboration with oncologists has been borne out of these clinical and research observations. Indeed, we need more oncologists to join us. Oncology involvement varies by institution, and we hope to expand it throughout the network. The ThinkTank examined next steps for enhancing collaboration between cardiologists and oncologists. Since the majority of cardio-oncology programs are administered by cardiologists, the current strategy is to implement practice changes within the cardiology community. It will be essential to engage more oncology colleagues in both academic and non-academic hospitals to have a significant impact on cardio-oncological outcomes. One way to accomplish this is to elucidate the significance of cardiovascular care, which extends beyond survival and falls within our shared mission to provide the best long-term care for our patients.

Eliminating racial disparities in G-COR

An objective was to investigate the potential for advancing efforts to eliminate racial and ethnic disparities. One of the primary emphases of the aforementioned G-COR is to investigate how cardio-oncology patients are treated in various regions of the world. This also includes investigating the factors that influence or restrict access to care, such as socioeconomic status, race and ethnicity, access to insurance, transportation, and internet access. It would be beneficial to learn how these various socioeconomic groups, ethnic groups, and geographic locations affect cardio-oncology care. Collaboration and the collection of massive amounts of data would be required to draw meaningful conclusions. According to the ThinkTank, this is something to strive for in order to influence policymaking and reduce existing inequalities and disparities. It is also crucial to achieve racial and ethnic health equity by eliminating barriers to care and eradicating health disparities. This is one of our key concepts and focus areas in G-COR. How do we gather data in order to study disparities in cardio-oncology? Across the globe, how do we recognize and address health equity in cardio-oncology patients? What factors impact or limit access to care, including geographical, socioeconomic, racial, and ethnic factors? In what ways has global cardio-oncology care benefited this population? How have these patterns been affected by the pandemic?
In order to answer these questions, we will investigate ethnicity, income, insurance, transportation, and internet access, as well as how different socioeconomic groups, ethnic groups, and geographic locations impact access to care. We hope to generate sufficient data to analyze and to chart a course toward impact. We anticipate contributing to policymaking for the reduction of inequalities and disparities, and we will additionally attempt to accomplish these goals in our individual cardio-oncology environments.

Creative expression and humanism

A cardio-oncology poem capturing the experience of the patient was then shared prior to concluding comments. As part of processing thoughts in medicine and science, some of us write poetry about the things we ponder. Some of the poems are composed after we visit with patients. Even if many of us may not directly share our individual patients' experiences, we can empathize and capture their stories. The poems we create can describe our frame of reference regarding how we experience our patients' journeys. These poems are typically straightforward and simple to comprehend. In this case, we interpret a particular patient's journey as analogous to juggling four balls at once. The poem "Juggling Four Balls in the Air" (4) was therefore shared to illustrate this principle and ground us as we closed out the ThinkTank.

Juggling Four Balls in the Air

It's not enough
That I am living
These are the four balls
I will juggle in the air.

The poem was inspired by the patient's attitude and strength in facing all of these challenges. Our patients continue to inspire us and motivate us in everything we do. As a result, they remind us of why we are doing what we are doing and what we need to do to assist them.

Discussion

The Cardiology Oncology Innovation Network (COIN) has gained momentum since being founded in 2018 (1). The very first COIN ThinkTank brought together cardiologists, oncologists, and other specialists in August 2021 to facilitate meaningful discourse and information dissemination. ThinkTank agenda items were determined by network members at previous COIN meetings. The following topics were discussed by small working groups: AI and digital health in cardio-oncology (innovation); the role of informatics in G-COR (collaboration); and education and innovative methods of education delivery in cardio-oncology (education). Cardiology, oncology, and industry leaders, as well as patient advocates, participated in the breakout sessions. Cardiologists can collaborate with oncologists and radiologists to expedite the collection of imaging studies for cardio-oncology AI research. Recent reviews have cataloged various ways in which AI is being applied to cardiovascular imaging in cardio-oncology (5-8). Future app development may facilitate patient enrollment and engagement for prospective validation studies. Future AI and digital health innovations in cardio-oncology will also incorporate the SMART on FHIR architecture for app integration into electronic health records (1). The global cardio-oncology registry (G-COR) was recently established to examine disparities and variation in cardio-oncology care worldwide (1). COIN will collaborate on using informatics in the registry, especially to identify cancer survivors who are undertreated for their cardiovascular risk or disease. In cardio-oncology, mapping the patient journey and the touchpoints for disease or risk assessment and management could be beneficial.
The patient journey can be determined in part from social listening to patients' needs and experiences on social media (9-11) and from discussions with patient advocates (12). The patient journey map and other innovative methods of education can become instrumental in cardio-oncology. Additionally, the COIN Annual Summit in December each year (December 10th in 2022; cardioonccoin.org) provides live continuing professional development (12), with subsequent use of the summit presentation content as enduring online CME content via the COIN website. As we share a draft of the map with colleagues and patient advocates, we can invite others to suggest additional touchpoints. On the COIN website, an education emphasis working group has been created to connect and collaborate on ideas such as the patient journey concept map. The outcomes of these discussions can be shared on social media for others to view, discuss, and use to generate new ideas.

The ThinkTank examined next steps for enhancing cardiologists' and oncologists' collaboration, such as in G-COR, to address health disparities. Eliminating barriers to care is essential for achieving racial and ethnic health equity. In G-COR, ethnicity, income, insurance, transportation, and internet access, as well as the impact of socioeconomic status, ethnicity, and location on access to medical care, will be investigated, with potential solutions proposed and tested. This will facilitate taking the next steps in the pursuit of health equity in cardio-oncology (1, 13-17).

Cardio-oncology is a relatively new subspecialty of cardiology. The very first COIN ThinkTank brought together cardiologists, oncologists, and other specialists, along with patient advocates, in August 2021. In small working groups, AI and digital health in cardio-oncology (innovation), the role of informatics in G-COR, and innovative education methods were discussed. Emphasis was placed on recruiting more oncologists, addressing health equity, and advancing prevention in cardio-oncology via innovation and collaboration. Mapping the patient journey, engaging patients and social media, and appreciating resilience were all topics of interest. Together, in the face of all obstacles, alongside our patients who are all very brave (4, 18-20), we can continue to motivate and inspire each other in all that we do in the Cardiology Oncology Innovation Network. All are welcome to join us at the next COIN ThinkTank on August 7, 2022.

Disclosure

Our authors work closely with several health technology companies, none of which inappropriately restricts or limits our analyses or publications. Industry and pharmaceutical companies sponsored the inaugural COIN ThinkTank 2021. Our sponsors did not exert any restrictions on our discussion, work, or publications.

Author contributions

Conception and design: S-AB. All authors contributed to drafting of the manuscript, interpretation of data, critical revision, and final approval of the manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
A survey of patient behaviours and beliefs regarding antibiotic self-medication for respiratory tract infections in Poland

Introduction: Self-medication can contribute to the inappropriate use of antibiotics in respiratory tract infections (RTI). This phenomenon has not been well described, particularly in Poland. The aim of our study was to describe the prevalence of antibiotic self-medication for RTI, to explore factors influencing antibiotic use without prescription, and to determine the available sources of such antibiotics.

Material and methods: A self-administered questionnaire completed by patients presenting to family medicine clinics in Lodz and Wroclaw from 1st March to 15th May 2010.

Results: A total of 891 patients in ten clinics completed the survey (response rate, 89.1%). Overall, 41.4% (n = 369) of patients reported self-medication with an antibiotic for RTI. The most common reason for antibiotic self-medication was a belief that antibiotics treat the majority of infections, including influenza and influenza-like illnesses (43.9%; n = 162). The predominant sources of antibiotics for self-medication were antibiotics from previous prescriptions stored by the patient at home (73.7%; n = 272), those received from a pharmacy without prescription (13.5%; n = 50), or from family members and friends (12.7%; n = 47).

Conclusions: Antibiotic self-medication for RTI was common in this population. This may be due to the belief that antibiotics treat the majority of infections. A recommendation to either ask patients to return unused antibiotics to the physician's office or to dispense antibiotics in the exact amount necessary for an individual course, as well as the targeted education of pharmacy personnel and the general population, appears to be justified.

Introduction

The increase in resistance to antimicrobial drugs represents an important clinical and social problem [1,2]. Many patients presenting to family medicine clinics have already started self-medication with antimicrobial agents [3]. Self-medication, defined as the administration of a therapeutic agent without a physician's prescription, can contribute to the inappropriate use of antibiotics without clinical indication. The most common reasons for self-medication in Europe are 'sore throat' and bronchitis [4,5]. Symptoms of the common cold usually resolve within 7 to 10 days (with some symptoms possibly lasting for up to 3 weeks) without treatment [6,7]. Nevertheless, coryzal symptoms, cough or fever may prompt patients to make therapeutic decisions without a consultation with a health care professional [3]. Self-medication is related to the overuse of antimicrobial drugs [8,9]. Due to possible complications as well as growing bacterial resistance, antimicrobial therapy should be used only upon a physician's recommendation [10,11] and, if feasible, following microbiology tests, particularly when streptococcal pharyngitis is suspected [12,13]. Previous studies have revealed that self-medication with antibiotics is commonly encountered both in the United States (US) and in Europe, predominantly in cases of common cold and upper respiratory tract infections [4,14]. Using unnecessary or inappropriate antibiotics can cause adverse effects and lead to increasing numbers of drug-resistant microorganisms [15]. An estimated 142,505 visits were made each year to US emergency departments for drug-related adverse events attributable to systemic antibiotics.
Antibiotics were responsible for 19.3% of all emergency department visits for drug-related adverse events, particularly allergic reactions [16]. The prevalence of self-medication is high in eastern and southern Europe and low in northern and western Europe [4,5,17]. The aim of our study was to describe the prevalence of antibiotic self-medication for respiratory tract infections, to explore factors influencing antibiotic use without prescription, and to determine the available sources of such antibiotics.

Material and methods

The study included data from 891 adults (304 men/587 women) presenting to 5 family medicine clinics in Lodz and 5 in Wroclaw and its surroundings between 1st March and 15th May 2010. All physicians collaborated with local vocational training units. Nurses asked each consecutive patient presenting to the family physicians' office and able to give voluntary informed consent to fill in an anonymous questionnaire and return it to the collection box; 100 study questionnaires were distributed per clinic. The questionnaire included 8 questions relating to demographic characteristics, self-treatment methods for managing infections, patterns of taking antibiotics and available sources of antibiotics without a physician's prescription.

Statistical analysis

Data are shown as number (n) and proportion (%). A χ2 test was used to evaluate the statistical significance of differences between the particular subgroups. Statistical analysis was performed using the SPSS statistical package (SPSS 16.0, Chicago, USA).

Results

A total of 369 participants (41.4%) reported taking antibiotics without consulting a physician. This was more common in rural areas, at 171 (62.2%) of 275 respondents, in comparison to 198 (32.1%) of 616 from urban areas (p < 0.001; Figure 1). In both groups, the great majority of respondents acquired the antibiotic as tablets left over from previously used packs kept in their home medical kits or, less often, from friends and family members. In addition, respondents from rural areas more frequently purchased antibiotics from a pharmacy without a prescription (Table I). The majority of persons who used antibiotics without consulting a physician agreed that antibiotics were not effective against all respiratory tract infections. However, as many as 43.9% (n = 162) of respondents from this group believed that antibiotics were effective in the treatment of influenza and influenza-like illnesses. Conversely, the majority of the 522 persons who reported not taking antibiotics without a physician's recommendation believed that antibiotics were not effective in the treatment of either influenza or other infections. In addition, a significantly higher percentage of participants in this group held this belief (Table II).

Discussion

In Poland, antibiotics are prescribed very often for respiratory tract infections. Previous studies have revealed that in Lodz, approximately 72% of adult patients who presented with symptoms typical of a lower respiratory tract infection received an antibiotic [18,19]. Even more frequent use of antibiotics for acute RTI has been reported in south-eastern Poland, including some rural areas [20,21]. Data from our study confirmed frequent use of antibiotics without a physician's recommendation, especially in the rural environment. This latter finding may be related to a lower level of education in the rural population of Poland, a factor that has been associated with more frequent antibiotic self-medication [17].
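The urban-rural comparison reported in the Results above can be checked from the published counts alone. Below is a minimal sketch in Python; this is an illustration rather than the authors' SPSS workflow, and the 2 x 2 table is reconstructed from the reported figures (171 of 275 rural and 198 of 616 urban respondents self-medicating).

from scipy.stats import chi2_contingency

# Rows: rural, urban; columns: self-medicated, did not self-medicate.
# Counts reconstructed from the reported proportions (171/275 and 198/616).
table = [[171, 275 - 171],
         [198, 616 - 198]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p < 0.001, as reported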
Our findings differ from those of a recent multicenter European study, in which no difference was noted in the use of antibiotics without a physician's consultation between inhabitants of urban and rural areas [4]. In contrast, the results of a study from Lithuania indicated that the use of antibiotics without a prescription was 1.61-fold more frequent in rural than urban populations [9]. Published medical research on this topic relates mostly to the general population. According to a Spanish survey, 41% of participants had taken antibiotics (over the past 6 months) which had been acquired from a pharmacy without a physician's prescription [22]. Also, the findings of a US study indicated that the main source of self-medication with antibiotics was antibiotics kept at home, which were subsequently used by as many as 17% of patients [14]. In the population of Malta, as many as 19% of persons reported self-treatment with antibiotics [23]. In contrast, according to data from the northern part of Israel, almost 25% of survey respondents kept antibiotics at home, but only 17% would use them without a physician's consultation [24]. In our study we found that 4.1% of respondents from urban areas acquired their antibiotics from a pharmacy without a prescription, while as many as 23.7% of rural inhabitants did so (in a Spanish study conducted in a general population, about 32.1% of participants did so) [25].

It should be emphasized that the respondents' knowledge about antibiotic therapy was incomplete. Similarly, the results of a European survey [26] indicated that as many as 53% of respondents gave an incorrect answer to the question 'do antibiotics kill viruses?', and 47% of them expressed the opinion that antibiotics were effective against cold and flu. In our study, 43.9% of patients who used antibiotics without a physician's recommendation believed that antibiotics treated influenza and influenza-like infections. Data from our study and most other surveys indicate the need for further studies in this field, as well as for educating patients about the adverse effects and harmful consequences of inappropriately applied antimicrobial therapy.

In conclusion, many patients use antibiotic self-medication for RTI. This may be due to the common belief that antibiotics treat the majority of infections. A recommendation to either ask patients to return unused antibiotics to the physician's office or to dispense antibiotics in the exact amount necessary for an individual course, as well as the targeted education of pharmacy personnel and the general population, appears to be justified.

[Table I column headings, recovered from the garbled extraction (table data not recoverable): Urban area, n (%); Rural area, n (%); Self-treatment with antibiotics, n (%); Treatment with antibiotics with GP's recommendation, n (%).]
Shared GABA transmission pathology in dopamine agonist- and antagonist-induced dyskinesia

Summary

Dyskinesia is involuntary movement caused by long-term medication with dopamine-related agents: the dopamine agonist 3,4-dihydroxy-L-phenylalanine (L-DOPA) to treat Parkinson's disease (L-DOPA-induced dyskinesia [LID]) or dopamine antagonists to treat schizophrenia (tardive dyskinesia [TD]). However, it remains unknown why distinct types of medications for distinct neuropsychiatric disorders induce similar involuntary movements. Here, we search for a shared structural footprint using magnetic resonance imaging-based macroscopic screening and super-resolution microscopy-based microscopic identification. We identify the enlarged axon terminals of striatal medium spiny neurons in LID and TD model mice. Striatal overexpression of the vesicular gamma-aminobutyric acid transporter (VGAT) is necessary and sufficient for modeling these structural changes; VGAT levels gate the functional and behavioral alterations in dyskinesia models. Our findings indicate that lowered type 2 dopamine receptor signaling with repetitive dopamine fluctuations is a common cause of VGAT overexpression and late-onset dyskinesia formation and that reducing dopamine fluctuation rescues dyskinesia pathology via VGAT downregulation.

INTRODUCTION

Dopamine-modulating medications often affect the motor system.2,3 Conventional antipsychotics (e.g., haloperidol) block D2 receptors and induce akinesia and rigidity, which are regarded as hypokinetic symptoms.4,5 Opposite acute pharmacological effects occur for dopamine receptor agonism versus antagonism. Besides these acute effects, long-term use of either D2 agonists or antagonists can induce late-onset erratic movements known as dyskinesia.

Dyskinesia refers to involuntary movements of the face, arms, legs, or trunk and is a major side effect during drug treatment.6,7 Most dyskinesias are associated with therapeutic drug treatment; there are two major types of drug-induced dyskinesia. The dopamine agonist 3,4-dihydroxy-L-phenylalanine (L-DOPA) is the first-line therapy for Parkinson's disease (PD), and L-DOPA-induced dyskinesia (LID) often appears in PD patients after a few years of successful L-DOPA treatment.8 The other type of dyskinesia is tardive dyskinesia (TD), which is induced by D2-blocking agents (mainly antipsychotics).9,10 Long-term use of antipsychotics (first and second generation) over several months can induce late-onset TD.11 Although LID and TD are caused by long-term medication, it remains unknown why distinct types of medications for distinct neuropsychiatric disorders can induce similar involuntary movements. Regarding this question, we hypothesized that the development of both kinds of dyskinesia might share a common mechanism.

The genesis of dyskinesia can be understood as an irreversible brain shift from a no-dyskinesia state to a dyskinesia state after drug treatment. Such brain state changes likely involve unidentified cellular and circuitry plastic changes as well as structural changes; theoretical functional plasticity should be accompanied by identifiable structural plasticity. Pioneering anatomical studies have demonstrated structural changes in a rat model of LID; terminals of striatal medium spiny neurons (MSNs) are enlarged, increasing the volume of the internal segment of the globus pallidus (GPi) and substantia nigra pars reticulata (SNr).12
These findings provide us with clues regarding the shared pathology of LID and TD, in line with functional and structural plastic changes in the brain.

RESULTS

Increased volume of the external segment of the globus pallidus (GPe) and SNr in LID model mice

To better understand structural plastic changes in dyskinesias, we developed a comprehensive panel of brain anatomical investigations (Figure 1A; Table S1). The panel consisted of brain-wide structural magnetic resonance imaging (MRI) screening of brain regions as well as light microscopy-, super-resolution microscopy (SRM)-, and electron microscopy (EM)-assisted identification of cellular/subcellular volume changes. We applied the panel to a well-established mouse model of LID in which hemiparkinsonism is induced by 6-hydroxydopamine hydrobromide (6-OHDA)-mediated dopaminergic neuronal ablation, and mice are then treated with L-DOPA daily for 2 weeks (Figure 1B). To validate the LID model,7,13 we observed abnormal involuntary movements on the final day of L-DOPA administration. All 6-OHDA- and L-DOPA-treated mice had increased numbers of contralateral rotations and contralateral dystonic postures, confirming a successful LID model (Figures S1A and S1B). We also confirmed ipsilateral ablation of striatal dopaminergic terminals using dopamine transporter (DAT) staining post hoc (Figure S1C). With this LID model, brain volume changes were compared using region of interest (ROI)-based volume comparisons between the hemispheres ipsilateral and contralateral to the 6-OHDA injection (Figure 1D). The ipsilateral volume increased in 15 loci and decreased in nine loci (Table S2). On the basis of previous studies,12,14,15 we selected the basal ganglia for further anatomical analyses and revealed significantly increased brain volumes in the GPe, GPi, and SNr, where striatal MSNs terminate.

A previous microscopic study reported enlargement of striatonigral MSN terminals in the GPi and SNr of LID model rats,12 supporting the macroscopic GPi and SNr volume increases. Our MRI data also indicated engagement of the striatopallidal pathway, with the GPe volume increase. To clarify this finding, we conducted immunohistochemistry for the vesicular gamma-aminobutyric acid transporter (VGAT), which strongly labeled γ-aminobutyric acid (GABA)ergic MSN terminals and thus delineated these nuclei (Figure 1E). The VGAT+ areas of the ipsilateral GPe/SNr were significantly larger than their contralateral counterparts (Figure S1D), indicating that the striatopallidal and striatonigral pathways are involved in the volume increases of the GPe/SNr.
Increased size of inhibitory presynaptic and postsynaptic structures in the GPe/SNr of LID mice

Striatopallidal and striatonigral MSNs are unmyelinated16 and terminate in the GPe and SNr, respectively. Regardless of MSN type, MSN target nuclei share a similar structure under healthy conditions. The GPe and SNr consist of gray and white matter; myelinated axons of cortical pyramidal neurons form bundles, corresponding to the white matter, and the rest of each nucleus is composed of VGAT+ presynaptic MSN terminals and neuronal nuclei (NeuN)+ postsynaptic principal cell somata, corresponding to the gray matter (Figures 1F, S1G, and S1H). SRM of GPe gray matter demonstrated that VGAT+ puncta surrounded parvalbumin (PV)+ dendrites and somata of GPe principal cells (Figure 1F, SRM image). EM demonstrated vesicle-rich presynaptic terminals surrounding dendrites and somata and thin, unmyelinated axons occupying the neuropil of GPe gray matter (Figure 1F, EM image).

To address whether gray or white matter volume increases contributed to the GPe/SNr volume increase in the LID model, we conducted proteolipid protein (PLP) immunohistochemistry and quantified the areas of gray (PLP-, neuropil area) and white (PLP+, myelinated axon area) matter in the ipsilateral and contralateral nuclei (Figure S1I). In the ipsilateral GPe and SNr, gray and white matter areas were larger than those in the contralateral hemisphere (Figure S1J). The ipsilateral proportion of gray matter was larger than that of the contralateral hemisphere, indicating that increased gray matter volume contributes more to the total nucleus size (Figure S1K).

We then explored what types of cells and subcellular compartments were responsible for the GPe/SNr volume increases. We used a comprehensive histological panel to assess the number and size of constituent cells in the GPe/SNr and subcellular neuronal structures in gray and white matter. To examine the neuronal elements that contributed to increased gray matter volume in the GPe and SNr, we labeled presynaptic MSN terminals, unmyelinated MSN axons, and the somata and dendrites of principal neurons using immunohistochemistry.

Signals were detected using SRM. The size and density of VGAT+ puncta were increased significantly in the ipsilateral GPe and SNr compared with those in the contralateral hemisphere (Figure 1G), indicating that the volume and number of MSN presynaptic terminals are increased in LID. In contrast, these features were comparable in control mice (Figure S2A). The EM analyses strengthened the SRM findings; presynaptic terminals associated with a dendrite increased in size in LID but not control mice (Figures S2B and S2C). Consistent with the increased density of MSN terminals, the percentage area of unmyelinated axons (βIII tubulin [Tubb3]+, microtubule-associated protein 2 [MAP2]-, and PLP-) was increased in the ipsilateral GPe and SNr (Figures S2E and S2F).

Principal neurons in the GPe and SNr are divided into PV+ and PV- populations.17
We conducted NeuN and MAP2 immunohistochemistry to identify somata and dendrites, respectively, and evaluated their sizes in each population. The soma areas of PV+ neurons were significantly increased in the ipsilateral GPe and SNr, whereas those of PV- neurons were significantly increased in the GPe but not the SNr (Figures 1G and S2H). The dendrite diameters of PV+ and PV- neurons were significantly increased in the ipsilateral GPe (Figure S2I); this finding was confirmed by MAP2 staining (Figure S2G) and EM analyses (Figure S2D). In addition, we conducted VGAT and gephyrin (a postsynaptic marker of inhibitory synapses) immunohistochemistry to examine the sizes of VGAT+ and gephyrin+ puncta. Their sizes were significantly increased in the ipsilateral GPe and SNr and were positively correlated (Figures S2J and S2K). These results indicate that enlargement of the somata and dendrites of principal neurons (postsynaptic structures) also contributes to the increased GPe/SNr volume.

Principal neurons in the GPe/SNr receive glutamatergic input from the cortex and the subthalamic nucleus (STN); these presynaptic terminals contain vesicular glutamate transporter (VGluT) 1 and 2, respectively.18,19 In LID model mice, the density and area of VGluT1+ puncta were comparable between the ipsilateral and contralateral GPe and SNr (Figure S3A). In contrast, the density of VGluT2+ puncta was significantly decreased and their area was increased in the ipsilateral GPe and SNr (Figure S3B). Nonetheless, the density of VGluT+ puncta was less than one-tenth that of VGAT+ puncta; thus, volume changes in glutamatergic terminals may have a negligible impact on GPe/SNr volume.

Regarding the observed white matter increase in the ipsilateral GPe and SNr, we evaluated myelinated axon diameter and myelin thickness using PLP staining and SRM.20,21 Both indices were significantly increased in the ipsilateral GPe and SNr (Figure S3C) and likely account for the increased white matter volume.
Comprehensive histological analyses emphasize MSN terminal changes in LID mice

To fully address factors that might explain the GPe/SNr volume increases, we quantified the numbers (density) and volumes (percentage area) of cells, including neurons, astrocytes, oligodendrocytes, oligodendrocyte precursor cells (OPCs), microglia, and vascular cells. The number of principal neurons (NeuN+) per nucleus was unchanged; however, their density was decreased in the ipsilateral GPe and SNr (Figure S3D), indicating an increased-volume-associated reduction in cell density and no neuronal proliferation. The number of glial cells was determined after in situ hybridization (ISH) with Gja1 (an astrocyte marker), Plp1 (an oligodendrocyte marker), Pdgfra (an OPC marker), or Csf1r (a microglial marker). GPe astrocytes (Figure S3E) and GPe/SNr oligodendrocytes (Figure S3F) showed a neuron-like pattern of cell number change, with an increased-volume-associated reduction in cell density. SNr astrocytes (Figure S3E) and GPe OPCs (Figure S3G) exhibited increased cell numbers with sustained cell density, indicating an adaptive response to the GPe/SNr volume increase. SNr OPCs (Figure S3G) and GPe/SNr microglia (Figure S3H) showed increases in number and density, which is likely relevant to the LID volume increase. Immunohistochemistry for glutamate transporter 1 (GLT1; an astrocyte marker that labels membranes) and ionized calcium binding adapter molecule 1 (Iba1; a microglia marker that labels cytoplasm) demonstrated that the percentage area of astrocytic processes was comparable between the contralateral and ipsilateral GPe and SNr, whereas that of microglia was increased in the ipsilateral GPe or SNr, consistent with the cell density data (Figures S3J and S3K). Immunohistochemistry of laminin α2 (a vasculature marker) showed an increased vasculature area in the ipsilateral GPe or SNr; however, the vasculature areas normalized by VGAT areas were comparable between the contralateral and ipsilateral GPe and SNr (Figure S3I).

These results are summarized in Table S3 and indicate that there are significant increases in SNr OPCs and GPe/SNr microglia in LID. However, the populations of these cells are much smaller than those of neurons and astrocytes within the GPe and SNr.17 We therefore suppose that OPC/microglia-mediated cell volume changes contributed relatively little to the regional volume increases. Increases in total cell number without increased cell densities (e.g., in astrocytes and vasculature) may adaptively support an LID-induced nuclear volume increase. Together, these findings suggest that inhibitory presynaptic structures (VGAT+ MSN terminals) and postsynaptic structures (dendrites and somata of GPe/SNr principal neurons), as well as cortical myelinated axons, are the main contributors to the increased GPe/SNr volumes in LID; it is possible that these anatomical changes are associated with dyskinesia development.

Enlargement of inhibitory presynaptic structures in the GPe and SNr is a shared pathological change in dyskinesias

We next evaluated whether TD, which has a distinct etiology from LID but exhibits similar involuntary movements, shared the structural changes observed in LID. We generated a haloperidol (D2 receptor antagonist)-induced TD model by intramuscular administration of the long-acting injectable haloperidol decanoate (Figure 1C).22 Typical involuntary movements of the rodent TD model are known as vacuous chewing movements (VCMs).22,23
We quantified these using two methods: visual identification of VCMs from video data and measurement of orofacial muscle activity by electromyogram (EMG) (Figures S1E and S1F). Both methods captured a gradual increase in VCMs after long-acting injectable haloperidol treatment (Figure 1C).

We then examined the microscopic structural changes in the GPe/SNr of TD model mice. We looked separately at the changes in the corresponding orofacial, trunk, and limbic regions of the GPe/SNr24 (Figures S2L and S2M). Inhibitory presynaptic structures in the orofacial region of the GPe/SNr and inhibitory postsynaptic structures in the orofacial region of the GPe were enlarged in TD mice compared with those in controls (Figures 1H and S2O). Interestingly, such structural changes were not observed in the limbic region of the SNr (Figure S2M). These data suggest a somatotopy of MSN terminal changes within the basal ganglia in TD, which is not the case in LID (Figure S2N). White matter integrity in TD differed from that in LID; the diameters of penetrating myelinated axons were significantly decreased in the GPe/SNr, and myelin thickness was decreased in the GPe (Figure S3C). Together, these inhibitory presynaptic and postsynaptic structural changes suggest a shared pathology between LID and TD and correlate with the development of dyskinesia.

Increased GABA content in the GPe/SNr of dyskinesia models

Previous studies have consistently reported increased mRNA expression of the 65- and 67-kDa isoforms of glutamate decarboxylase (GAD65 and GAD67, respectively; both are GABA-synthesizing enzymes) in the dopamine neuron-depleted ipsilateral striatum of LID models.25,26 In addition, an imaging mass spectrometry (IMS) study revealed increased GABA content in the ipsilateral striatum and ventral pallidum of LID model mice.27 Because MSNs in the ventral striatum terminate in the ventral pallidum, ipsilateral MSNs in the dorsal striatum (or caudate putamen [CPu]) that terminate in the GPe/SNr may also contain high GABA levels in LID. To evaluate this concept, we conducted GABA IMS in the LID mouse model. As expected, GABA content was significantly increased in the ipsilateral CPu and GPe/SNr compared with that in their contralateral counterparts (Figure 2A). Similarly, the expression of GABA-related genes, including Slc32a1 (encoding VGAT), Gad1 (GAD67), and Gad2 (GAD65), was significantly increased in the ipsilateral striatum (Figures S4A and S4B). Together, these observations suggest that increased MSN terminal volume is associated with striatal overexpression of GABA-related genes and increased GABA content in the GPe/SNr in LID.

Because enlarged MSN terminals are a common structural change in dyskinesias, we next investigated whether increased MSN terminal volume coincided with increased GABA-related gene expression in the striatum and increased GABA content in the GPe/SNr in TD. Although Gad2 mRNA levels were significantly decreased in TD mice compared with those in controls, Gad1 and Slc32a1 mRNA levels were comparable (Figure S4B). Moreover, IMS revealed increased GABA content in the GPe of TD mice (Figure 2B). These data suggest that the size and GABA content of MSN terminals are governed by local VGAT protein levels rather than by the mRNA expression levels of GABA-synthesizing enzymes.
Striatal VGAT overexpression causes the increase in MSN terminal size and GABA content

The molecular and biochemical findings from TD mice suggested that the packaging of GABA into presynaptic vesicles, rather than its synthesis, is key to the structural changes in dyskinesias. To address whether VGAT overexpression per se recapitulated the shared pathology of LID and TD, we compared the effect of VGAT overexpression alone (mimicking TD) with that of VGAT/GAD67/GAD65 overexpression (mimicking LID) on MSN terminal size. We generated adeno-associated virus (AAV) vectors expressing VGAT, GAD67, or GAD65. A single AAV or a mixture of all three AAV vectors was injected into the right dorsal striatum of wild-type (WT) mice. After AAV-mediated ALFA-tagged VGAT overexpression in MSNs, ALFA+/VGAT+ puncta were significantly enlarged in the GPe and SNr compared with ALFA-/VGAT+ puncta (Figures 2C and 2D). Furthermore, the presynaptic active zone (ALFA+/Bassoon+ puncta) was significantly enlarged compared with ALFA-/Bassoon+ puncta (Figures 2D and 2E). Striatal VGAT overexpression induced increases in the soma area of PV+ principal neurons (Figure 2F) as well as in the axon diameter and myelin thickness of cortical myelinated axons in the GPe (Figure S4C). With triple GABA-related gene overexpression, similar structural changes were observed (Figures S4E-S4G), indicating that striatal VGAT overexpression is sufficient to induce dyskinesia-relevant MSN terminal volume increases.

We then evaluated whether striatal VGAT overexpression per se (rather than striatal GAD overexpression) increased GABA content in the GPe/SNr. GABA content was significantly increased in the GPe, SNr, and CPu after AAV-mediated striatal VGAT overexpression compared with that in the uninjected hemisphere (Figure 2G). In contrast, GABA content was not upregulated in the GPe/SNr after striatal GAD65/67 overexpression, although it was upregulated in the CPu (Figure 2H). These data indicate that the packaging of GABA into presynaptic vesicles, rather than its synthesis, increases the GABA content of MSN terminals.
We next conducted a striatal loss-of-function study using short hairpin RNA (shRNA) targeting Slc32a1 (VGAT) mRNA. We generated an AAV carrying enhanced green fluorescent protein (EGFP)-miR30-VGAT shRNA28 and investigated whether shRNA-mediated striatal VGAT inhibition decreased MSN terminal size in WT mice. As expected, GFP+/VGAT+ and GFP+/Bassoon+ punctum sizes in the GPe/SNr were decreased by VGAT loss of function compared with GFP-/VGAT+ or GFP-/Bassoon+ punctum sizes (Figures 2I-2K). Together with the gain-of-function data, this suggests that VGAT levels determine presynaptic structure size in MSNs. In contrast, striatal VGAT loss of function did not induce structural changes in GPe/SNr principal neurons (Figure 2L) or penetrating myelinated axons (Figure S4D) and did not reduce GABA content (Figures 2M and 2N) in WT mice.

[Figure 2 legend fragment, recovered from text spliced into the paragraph above: (E) Areas of VGAT+/Bassoon+ puncta were plotted in VGAT overexpression mice. (F) Soma areas of PV+ neurons in the GPe and SNr were compared between the AAV injection (AAV Inj) and non-injection (AAV Un-inj) hemispheres (n = 5). (G and H) IMS of GABA content in VGAT (G) and GAD67/GAD65 (H) overexpression mice. Fold changes in GABA content (relative to AAV Un-inj) were plotted for AAV Inj in VGAT (n = 5) and GAD67/GAD65 (n = 5) overexpression mice. (I) SRM images of VGAT/Bassoon/GFP in the GPe of VGAT inhibition mice. (J) Areas of VGAT+ and Bassoon+ puncta in MSN terminals were compared between GFP+ and GFP- puncta in the GPe and SNr (n = 5). (K) Areas of VGAT+/Bassoon+ puncta were plotted in VGAT inhibition mice. (L) Soma areas of PV+ neurons in the GPe and SNr were compared between the AAV Inj and AAV Un-inj hemispheres (n = 5). (M and N) IMS of GABA content in VGAT inhibition mice. Fold changes in GABA content (relative to AAV Un-inj) were plotted in VGAT inhibition mice (n = 5). *p < 0.05, **p < 0.01, ***p < 0.001 (Student's or paired t test, p values corrected by Bonferroni's method). Each value and the mean ± SEM are plotted.]

Striatal VGAT overexpression enhances GABA transmission from MSNs

Given that we have previously demonstrated enhanced GABA transmission from MSNs in the GPe and SNr in LID,29 we examined whether GABA transmission was also enhanced in TD and whether VGAT overexpression enhanced GABA transmission. In our experimental setup, we recorded single-unit activity in the orofacial and forelimb regions of the GPe and SNr and examined their responses to electrical stimulation of the cerebral cortex (Cx). The typical response pattern of GPe and SNr neurons is a triphasic response composed of early excitation (i), inhibition (ii), and late excitation (iii) (Figure 3A).29 Each component in the GPe is mediated by the Cx-STN-GPe (i), Cx-D2 MSN (striatopallidal)-GPe (ii), and Cx-D2 MSN-GPe-STN-GPe (iii) pathways. Each component in the SNr is mediated by the Cx-STN-SNr (i), Cx-D1 MSN (striatonigral)-SNr (ii), and Cx-D2 MSN-GPe-STN-SNr (iii) pathways. Alterations in Cx-evoked triphasic responses can therefore highlight neurotransmission changes through each basal ganglia pathway.
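The figure legend recovered above notes that punctum-area comparisons used Student's or paired t tests with Bonferroni correction. A minimal Python sketch of that procedure follows, with invented per-animal values (the actual measurements are not reproduced here, and n = 5 per group mirrors the legend):

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-animal mean VGAT+ punctum areas (um^2), n = 5 mice,
# measured in the contralateral and ipsilateral GPe and SNr.
contra = {"GPe": np.array([0.21, 0.19, 0.22, 0.20, 0.18]),
          "SNr": np.array([0.23, 0.21, 0.24, 0.22, 0.20])}
ipsi   = {"GPe": np.array([0.27, 0.25, 0.29, 0.26, 0.24]),
          "SNr": np.array([0.30, 0.27, 0.31, 0.28, 0.26])}

n_tests = len(contra)  # Bonferroni: multiply each p value by the number of tests
for region in contra:
    t, p = ttest_rel(ipsi[region], contra[region])  # paired comparison per mouse
    p_corr = min(p * n_tests, 1.0)
    print(f"{region}: t = {t:.2f}, Bonferroni-corrected p = {p_corr:.4f}")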
Averaged peristimulus time histograms (PSTHs) of GPe and SNr neurons in control and TD mice, together with their algebraic differences, were plotted (Figure 3B). The duration and amplitude of inhibition (ii) were significantly increased in the SNr of TD mice (Figure 3D), indicating increased GABA release from striatonigral MSNs. The duration of late excitation (iii) was significantly increased in the GPe, and its amplitude was increased in the SNr (Figure 3E), suggesting increased GABA release from striatopallidal MSNs. Early excitation (i), which is irrelevant to MSN GABA release, was also significantly increased in the SNr (Figure 3C), possibly reflecting the structural changes in VGluT2+ excitatory terminals (Figure S3B). These results indicate increased GABA release from striatonigral and striatopallidal MSNs in TD mice.

We next assessed the effects of striatal VGAT overexpression per se on GABA release from MSN terminals. Averaged PSTHs of GPe and SNr responses were compared before and after AAV injection (Figure 3F). The duration and amplitude of inhibitory responses were increased in the SNr (Figure 3H), indicating increased GABA release from striatonigral MSNs. Although the duration of late excitation was comparable before and after AAV injection, the amplitude of late excitation tended to be increased in the GPe (p < 0.1) (Figure 3I), suggesting increased GABA release from striatopallidal MSNs. These data indicate that striatal VGAT overexpression results in structural and functional augmentation of GABA release from presynaptic MSN terminals.

Striatal VGAT expression levels gate the severity of dyskinesia

Striatal VGAT levels determine MSN terminal size and GABA transmission, and striatal VGAT overexpression may be a common mechanism of both types of dyskinesia. To better understand this phenomenon, we conducted VGAT gain- and loss-of-function studies in the two dyskinesia models and evaluated the degree of involuntary movement. In the LID model, a VGAT- or GFP-expressing AAV vector was injected into the right side of the dorsal striatum; 6-OHDA was injected into the same side 2 weeks later (Figure 4A). VGAT overexpression did not induce rotational behavior or dystonic posture before L-DOPA administration; however, abnormal involuntary movements were observed even on the first day of L-DOPA administration and remained significantly more common on the last day (Figure 4B). In the TD model, VGAT- or GFP-expressing AAV vectors were injected into the bilateral striatum 3 weeks before administration of haloperidol decanoate (Figure 4C). VGAT overexpression unexpectedly induced VCMs even without haloperidol administration (Figure 4D) and enhanced VCMs with haloperidol administration. In contrast, GAD65/67 overexpression did not enhance VCMs (Figure S5A). In both dyskinesia models, increased VGAT gene expression exacerbated involuntary movements.
We next attempted to mitigate dyskinesia by targeting the overexpressed striatal VGAT. In the LID model, prior to LID induction, an AAV carrying VGAT shRNA or GFP was injected into the ipsilateral dorsal striatum. After induction of hemiparkinsonism, the impairments in daily activities, including feeding and locomotion, were comparable between the VGAT shRNA and GFP groups, and these impairments were equally restored by L-DOPA in both groups, indicating that the beneficial effect of L-DOPA was not abolished by the VGAT shRNA intervention. Moreover, LID behaviors were markedly decreased during LID induction (Figure 4E). Notably, ipsilateral VGAT+ MSN terminal size was decreased rather than increased (Figures S5B and S5C), and the PV+ soma areas of GPe and SNr neurons were not enlarged (Figure S5D). These changes were consistent with the loss-of-function study in WT mice (Figures 2I-2L). In the TD model, AAVs were injected bilaterally into the dorsal striatum before haloperidol treatment. Visually identified VCMs were decreased 3 weeks after haloperidol injection (Figure 4F), and suppression of the increased orofacial EMG activity was identified before 3 weeks (Figure S5J). Suppression of enlarged VGAT+ MSN terminals and enlarged PV+ somata in GPe neurons was also identified (Figures S5F-S5I). These results indicate that striatal VGAT levels govern the pathology and pathophysiology of TD and LID.

Two-hit model of dyskinesia formation

We next speculated regarding the events that might lead to a shared structural footprint in LID and TD. Clinical observations suggest that continuous L-DOPA treatment, such as Duodopa intestinal infusion, reduces LID risk compared with conventional oral L-DOPA administration,30,31 which may induce dopamine surges. We therefore investigated the constructive role of pulsatile dopamine fluctuation in dyskinesia formation. As demonstrated previously in the LID rodent model,32 we continuously administered L-DOPA after 6-OHDA treatment using a subcutaneous slow-release L-DOPA pellet33 and found no dyskinesia (Figure 5A). To rule out the possibility that insufficient L-DOPA treatment failed to produce dyskinesia, we examined whether hemiparkinsonism was treated with this system. We trained mice to perform a single-forelimb reaching task and evaluated forelimb movements.34 Continuous L-DOPA administration markedly improved the 6-OHDA-mediated impairment in reaching behavior, confirming successful L-DOPA treatment of hemiparkinsonism (Figures S6A and S6B). Under this condition, there were no increases in VGAT+ MSN terminal size or in the PV+ soma area of GPe and SNr principal neurons (Figures 5B and 5C), indicating that continuous dopamine supplementation does not cause LID-associated structural changes or abnormal involuntary movements.
Because ambient striatal dopamine concentrations are sufficient to occupy high-affinity D2 receptors but not low-affinity D1 receptors,35,36 ablation of dopamine neurons in PD leads to long-term vacant D2 receptors and a resultant continuous loss of D2 signaling. Pharmacological D2 receptor antagonism decreases D2 signaling; long-term D2 blockade, as occurs with long-term antipsychotic use, can therefore also be regarded as a continuous loss of D2 signaling. Accordingly, LID and TD likely share a common etiology in terms of D2 signaling. In the case of LID, in addition to the initial hit (a lasting loss of D2 signaling), L-DOPA-mediated dopamine fluctuation (a second hit) leads to dyskinesia. In the case of TD, physiological dopamine fluctuation (that is, a dopamine surge or dip in response to salient stimuli in daily life) may function as the second hit (Figure 5J).

To experimentally evaluate the relevance of the second hit in TD, we artificially enhanced dopamine fluctuations via pulsatile L-DOPA administration under continuous haloperidol treatment. After L-DOPA administration, the number of VCMs (Figure 5D) and the sizes of VGAT+ MSN terminals and PV+ somata in GPe/SNr neurons (Figures 5E and 5F) were increased significantly. In contrast, 2 weeks of pulsatile L-DOPA administration without loss of D2 signaling did not increase the size of VGAT+ MSN terminals (Figure S2A). Together, these results support the two-hit model of TD.

We then tested valbenazine, a vesicular monoamine transporter 2 (VMAT2) inhibitor approved for TD.37-39 After VCM acquisition, mice were administered valbenazine for 2 weeks (Figure 5G). VCMs were dose-dependently reduced, and VGAT+ punctum size was normalized (Figure 5H), suggesting that the therapeutic mechanism of valbenazine is mediated by a reduction in striatal VGAT. Because this VMAT2 inhibitor also downregulates dopamine release,40,41 it may also lower the amplitude of dopamine fluctuations triggered by daily salient stimuli, reducing the impact of the second hit.

DISCUSSION

Motor learning relies on changes in brain function. This functional plasticity is likely accompanied by brain structural changes.42,43 It is therefore expected that the acquisition of involuntary movements, such as in LID and TD, also relies on functional and structural plasticity. We aimed to identify structural plasticity and address the molecular mechanisms leading to structural changes, with a focus on structure-function correlations. The current study revealed enlargement of MSN presynaptic terminals and GPe/SNr principal neurons in LID and TD. These structural changes correlated with increased GABA content and enhanced GABA transmission at MSN terminals.

The pathogenesis of dyskinesia takes a relatively long time, and it is challenging to follow its development. We believe that structural changes reflect functional changes at every moment. In turn, functional changes during the disease process can be identified by visualizing the trajectory of structural plastic changes. Our results suggest that MSN terminal size mirrors dopamine fluctuation during dyskinesia progression. MSN terminal enlargement was induced by pulsatile but not continuous administration of L-DOPA in the LID model and by L-DOPA-induced enhancement of dopamine fluctuation in the TD model. In addition, MSN terminal enlargement was reduced by administration of valbenazine, the only currently approved drug for TD, in the TD model. Together, these findings suggest that structural changes in MSN terminals may be markers of the dyskinesia developmental process.
Two steps are likely necessary to develop dyskinesia (the two-hit model of dyskinesia; Figure 5J). This model is easy to apply to LID because dopaminergic neuronal loss (first hit) and pulsatile L-DOPA administration (second hit) are clearly identifiable. Application of the two-hit model to TD is more challenging; however, the following two clinical observations support this model. Patients with TD receive dopamine D2 receptor-blocking agents, such as antipsychotics (first hit). Given that substance abuse/dependence is a risk factor for TD,44 the second hit may be dopamine fluctuation induced by substance use. Use of addictive substances such as psychostimulants (e.g., cocaine and amphetamine) evokes an increase in dopamine release, leading to a hedonic response. Psychological stress, the main cause of substance use, may also increase dopamine level fluctuations. Indeed, positron emission tomography studies have reported increased dopamine after psychological stress45 and a positive association between psychological stress and dopamine release in healthy subjects.46 Taken together, the two-hit dyskinesia model may apply to LID, TD, and beyond.

What corresponds to structural plasticity in inhibitory synapses? In the case of excitatory postsynapses, structural plasticity corresponds to newly formed dendritic spines and the increased volume of existing spines; in both cases, the surface area of the postsynaptic neuron increases. In contrast, inhibitory synapses do not form postsynaptic dendritic spines. Thus, if the number and size of presynaptic active zones increase, then postsynaptic neurons would have to gain receptive sites paired with the active zones. We demonstrated that AAV-mediated VGAT overexpression in MSNs increased Bassoon+ presynaptic structures. We also found increased gephyrin+ postsynaptic structures in GPe/SNr neurons; expanding dendritic and somal volumes coincided with the increased postsynaptic structures. Accordingly, we hypothesize that structural plasticity at MSN-GPe/SNr inhibitory synapses occurs in the following order: (1) MSN axon terminal volume increases in a VGAT-dependent manner, (2) the presynaptic active zone increases, (3) postsynaptic sites are enlarged, and (4) proximal dendrites and somata, where inhibitory synapses form, are enlarged to accommodate the expanding presynaptic sites on GPe/SNr neurons. Given that the structural basis of inhibitory synaptic plasticity is largely unknown, our discovery and proposal shed light on possible mechanisms of inhibitory synaptic plasticity.

The present study has the following clinical significance. First, we clearly showed volume increases in the GPe and SNr of LID and TD model mice. Similar changes may be observed in the GPe, SNr, and probably the GPi of LID and TD patients. These volume changes may therefore be biomarkers of dyskinesia. Second, suppression of striatal VGAT activity ameliorated the morphological changes and dyskinesia. Striatal VGAT may thus be a therapeutic target (e.g., striatal VGAT suppression by viral vectors or drugs).

In summary, we found a shared pathology between LID and TD: increased volumes of presynaptic MSN terminals and postsynaptic somata in GPe/SNr neurons. Striatal overexpression of VGAT was necessary and sufficient to induce these structural signatures, which correlated with functional changes such as increased GABA content and GABA transmission at MSN terminals. We propose that a long-term reduction of MSN D2 signaling combined with dopamine fluctuation initiates VGAT-dependent dyskinesia development.
Limitations of the study

In this study, we used D2 antagonist-mediated VCMs as a model of TD.16 Currently, long-term administration of D2 antagonists is the best available way to induce a TD-like phenotype in rodents, but the drawback of this model is that VCMs do not persist after cessation of D2 antagonist administration. The persistence of TD even after discontinuation of D2 antagonists is the most serious condition in the clinic, and the underlying mechanism of sustained TD is still unknown. We do not know whether hypertrophy of MSN terminals and striatal overexpression of VGAT are involved in persistent TD at this stage.

STAR+METHODS

Detailed methods are provided in the online version of this paper and include the following:

Lead contact

Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Kenji F. Tanaka (kftanaka@keio.jp).

Cell counts of ISH images

To evaluate the cell density of astrocytes, microglia, OPCs, and oligodendrocytes, cell counts were performed on the ISH images using ImageJ. Cell density was calculated by dividing the number of cells expressing the marker RNA in an ROI by the area of the ROI. The ROI was defined as a region that included the GPe and SNr.

Quantitative reverse transcription PCR

Mice were sacrificed by cervical dislocation, and CPu, GPe, and SNr tissue was collected. Total RNA was isolated from these regions using TRIzol (Thermo Fisher Scientific) and reverse transcribed into cDNA using ReverTra Ace qPCR RT Master Mix (Toyobo, Osaka, Japan). The qRT-PCR was performed using TaqMan probes (Thermo Fisher Scientific) on the StepOnePlus real-time PCR monitoring system (Thermo Fisher Scientific). The TaqMan assay IDs for Slc32a1 (VGAT), Gad2 (GAD65), and Gad1 (GAD67) were Mm00494138_m1, Mm00484623_m1, and Mm00725661_s1, respectively. The expression of these mRNA transcripts was measured; mRNA levels were normalized to those of glyceraldehyde 3-phosphate dehydrogenase (Gapdh, Mm99999915_g1).
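The text above specifies normalization to Gapdh but not the quantification method; the 2^(-ΔΔCt) method is a standard choice and is sketched below under that assumption, with invented Ct values for illustration.

# Relative expression by the 2^(-ddCt) method (an assumed, standard approach;
# the paper specifies only normalization to Gapdh). Ct values are invented.
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    dct_sample = ct_target - ct_gapdh              # normalize sample to Gapdh
    dct_control = ct_target_ctrl - ct_gapdh_ctrl   # normalize control to Gapdh
    ddct = dct_sample - dct_control
    return 2 ** (-ddct)

# Example: Slc32a1 (VGAT) in the ipsilateral vs. contralateral striatum.
fold = relative_expression(ct_target=24.1, ct_gapdh=18.0,
                           ct_target_ctrl=25.0, ct_gapdh_ctrl=18.2)
print(f"fold change = {fold:.2f}")  # > 1 indicates upregulation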
Magnetic resonance imaging (MRI)

An ex vivo MRI study of the LID model mice was performed using a 9.4 T BioSpec 94/30 unit (Bruker BioSpin GmbH, Ettlingen, Germany) and a solenoid-type coil with a 28-mm inner diameter for transmitting and receiving. After the final L-DOPA administration, mice were deeply anesthetized with ketamine (100 mg/kg) and xylazine (10 mg/kg) and perfused with a 4% paraformaldehyde phosphate buffer solution. The brains were removed with the skull and postfixed in the same fixative for 24 h. The fixed brains were then stored in PBS for 1 week. The duration of paraformaldehyde and PBS immersion was the same for all samples to avoid differences in brain volume caused by post-perfusion immersion fixation and storage (Guzman et al., 2016). Four brains (with their skulls) were firmly fixed using fitted sponges into an acrylic tube (30-mm diameter) filled with Fluorinert (Sumitomo 3M Limited, Tokyo, Japan) to minimize the signal intensity attributed to the embedding medium. Additionally, vacuum degassing was performed to reduce air bubble-derived artifacts in the structural images. For volume analysis, structural images were acquired using T2-weighted multi-slice rapid acquisition with relaxation enhancement (RARE) with the following parameters: repetition time = 20,000 ms, echo time = 15 ms, spatial resolution = 100 × 100 × 100 μm, RARE factor = 4, and 24 averages. Brain volumes of LID model mice were compared between the ipsilateral and contralateral sides using ROI-based analysis. For each hemisphere, ROIs of 64 brain loci (128 loci in total) were defined using the Allen Brain Atlas-based flexible annotation atlas of the mouse brain (Table S1).54 When we compared the voxel size of each ROI, there were small differences in voxel size between hemispheres, indicating a risk of false positives for brain volume changes. We therefore chose ROI-based analysis for the brain volume analysis (instead of voxel-based analysis). We made tissue probability maps (TPMs) of gray matter, white matter, and cerebrospinal fluid from the flexible annotation atlas for the tissue segmentation preprocessing. In the commonly used TPMs, the GPe and GPi are classified as white matter; we therefore made TPMs in which the GPe and GPi were classified as gray matter.
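As an illustration of the ROI-based statistics detailed in the following paragraph (a two-tailed t test per ROI, followed by false discovery rate correction across ROIs), here is a minimal Python sketch on simulated data. The actual pipeline used SPM12 and in-house MATLAB code; the Benjamini-Hochberg procedure ("fdr_bh") is an assumed choice of FDR method, and all values below are simulated.

import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_roi, n_mice = 64, 8  # 64 ROI pairs; per-mouse ROI-averaged modulated volumes

# Simulated ROI-averaged modulated gray matter values (arbitrary units).
contra = rng.normal(1.00, 0.05, size=(n_roi, n_mice))
ipsi = rng.normal(1.02, 0.05, size=(n_roi, n_mice))

# Two-tailed two-sample t test per ROI, as in the described pipeline.
pvals = np.array([ttest_ind(ipsi[r], contra[r]).pvalue for r in range(n_roi)])

# Benjamini-Hochberg false discovery rate correction across ROIs.
reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} of {n_roi} ROIs survive FDR correction")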
Preprocessing and statistics for ROI-based brain volume analysis were performed using SPM12 (Wellcome Trust Center for Neuroimaging, London, UK) and in-house software written in MATLAB (MathWorks, Natick, MA, USA). First, each T2-weighted image was resized by a factor of 10 to account for the whole-brain volume difference between humans and rodents. It was then resampled into 1-mm isotropic voxels and aligned to the same space by registering each image to the TPM. Next, each image was segmented into TPMs of gray matter, white matter, and cerebrospinal fluid using the unified segmentation approach, which enables image registration, tissue classification, and bias correction. The segmented images were spatially normalized to the population template, which was created using the Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra (DARTEL) algorithm. Modulated gray matter images were then obtained for each animal by multiplying by the determinant of the Jacobian of the transformation, to account for the expansion and/or contraction of brain regions. These images were smoothed with a 3-mm full-width-at-half-maximum Gaussian kernel. The modulated values, reflecting brain volumes, were extracted from each ROI and averaged within each ROI. Two-tailed two-sample t-tests were performed to compare the averaged values at each ROI between the ipsilateral and contralateral sides. The p values were corrected using false discovery rate correction.

Electron microscopy (EM)

Mice were deeply anesthetized with ketamine (100 mg/kg, i.p.) and xylazine (10 mg/kg, i.p.) and perfused with 4% paraformaldehyde and 2.5% glutaraldehyde in 0.1 M phosphate buffer (pH 7.4). The brains were removed and postfixed in the same fixative overnight before being cut at 1-mm thickness on a vibratome. Slices were then treated with 2% OsO4 (Nisshin EM, Tokyo, Japan) in 0.1 M cacodylate buffer containing 0.15% K4[Fe(CN)6] (Nacalai Tesque), washed four times with cacodylate buffer, and incubated with 0.1% thiocarbohydrazide (Sigma-Aldrich) for 20 min and 2% OsO4 for 30 min at room temperature. The slices were then treated with 2% uranyl acetate at 4°C overnight and stained with Walton's lead aspartate at 50°C for 2 h. The slices were dehydrated through a graded ethanol series (60%, 80%, 90%, 95%, and 100%) at 4°C; infiltrated sequentially with acetone dehydrated with a molecular sieve, a 1:1 mixture of resin and acetone, and 100% resin; and embedded with Aclar film (Nisshin EM) in Durcupan resin with carbon (Ketjen black) (Sigma-Aldrich). The specimen-embedded resin was polymerized at 40°C for 6 h, 50°C for 12 h, 60°C for 24 h, and 70°C for 2 days. After trimming the region containing the GPe from the brain, the samples were serially imaged with a Merlin (Carl Zeiss) electron microscope equipped with the 3View system and an OnPoint backscattered electron detector (Gatan, Pleasanton, CA, USA). The Merlin is a field emission-type scanning electron microscope with a single electron beam, which was set to a 1.2-1.5 kV acceleration voltage and 130 pA beam current.
For EM image analysis, serial images from the serial block-face scanning EM were handled with Fiji/ImageJ and segmented using Microscopy Image Browser (http://mib.helsinki.fi/). The EM images were acquired in the ipsilateral GPe of the LID model mouse and its control mouse. The area of terminals making inhibitory synaptic contacts with dendrites or somata, the diameter of dendrites surrounded by the terminal, and the diameter of unmyelinated axons were measured. Inhibitory terminals were defined as those with vesicles present and a symmetrical postsynaptic density.

Mass spectrometry imaging of GABA and dopamine

Mice were deeply anesthetized with ketamine (100 mg/kg, i.p.) and xylazine (10 mg/kg, i.p.), and the brains were rapidly removed from the skull and frozen in liquid N2. Next, 10-μm-thick sections of fresh-frozen brains were prepared with a cryostat and thaw-mounted on conductive indium-tin-oxide-coated glass slides (Matsunami Glass).

A pyrylium-based derivatization method was applied for tissue localization imaging of neurotransmitters.55,56 TMPy solution (4.8 mg/200 μL; Taiyo Nippon Sanso Co., Tokyo, Japan) was applied to brain sections using an airbrush (Procon Boy FWA Platinum 0.2-mm caliber airbrush, Mr. Hobby, Tokyo, Japan). To enhance the reaction efficiency of TMPy, TMPy-sprayed sections were placed into a dedicated container and allowed to react at 60°C for 10 min. The container contained two channels in the central partition to wick moisture from the wet filter paper region to the sample section region. The filter paper was soaked with 1 mL methanol/water (70/30 volume/volume) and placed next to the section inside the container, which was then completely sealed to maintain humidity levels. The TMPy-labeled brain sections were sprayed with matrix (CHCA-methanol/water/TFA = 70/29.9/0.1 volume/volume) using an automated pneumatic sprayer (TM-Sprayer, HTX Tech., Chapel Hill, NC, USA). Ten passes were sprayed according to the following conditions: flow rate, 120 μL/min; airflow, 10 psi; nozzle speed, 1100 mm/min.

To detect the laser spot area, the sections were scanned and laser spot areas (200 shots) were detected with a spot-to-spot center distance of 80 μm. Signals between m/z 100-650 were collected. The section surface was irradiated with yttrium aluminum garnet laser shots in the positive ion detection mode using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS; timsTOF fleX, Bruker Daltonics, Bremen, Germany). The laser power was optimized to minimize in-source decay of targets. The obtained mass spectra were reconstructed into mass spectrometry images using SCiLS Lab software (Bruker Daltonics). Optical images of brain sections were obtained using a scanner (GT-X830, Epson, Tokyo, Japan), followed by MALDI-TOF MS of the sections. The detected mass of TMPy-labeled standard GABA (m/z 208.163) was increased by 105.0 Da compared with the original mass (molecular weight 103.0 Da). Tandem mass spectrometry confirmed the fragmentation ions of TMPy from the standard sample. A fragmented ion of the pyridine ring moiety (m/z 122.1) was regularly cleaved and observed for all TMPy-modified target molecules.
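As a quick consistency check of the reported derivatization mass shift (our own arithmetic, not from the paper), the monoisotopic mass of GABA can be compared with the observed m/z of the TMPy-labelled standard:

```python
# GABA is C4H9NO2; its monoisotopic mass plus the TMPy adduct shift should land
# near the observed m/z 208.163 for the labelled standard.
GABA = {"C": 4, "H": 9, "N": 1, "O": 2}
MONO = {"C": 12.0000, "H": 1.00783, "N": 14.00307, "O": 15.99491}  # monoisotopic masses

gaba_mass = sum(MONO[el] * n for el, n in GABA.items())   # ~103.063 Da
observed_mz = 208.163                                      # TMPy-labelled GABA
shift = observed_mz - gaba_mass
print(f"GABA monoisotopic mass: {gaba_mass:.3f} Da, TMPy shift: {shift:.1f} Da")
# -> shift ~ 105.1 Da, consistent with the ~105 Da increase reported above
```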
Neuronal activity recording

Single-unit recording was performed before and after the administration of haloperidol-decanoate for TD or AAV for striatal VGAT overexpression, respectively. The surgical operation to mount a head holder onto the head of each mouse was performed as described previously (Chiken et al., 2015; Sano et al., 2013; Wahyu et al., 2021). Each mouse was anesthetized with ketamine (100 mg/kg, i.p.) and xylazine (5 mg/kg, i.p.) and held in a stereotaxic apparatus (SR-6M, Narishige Scientific Instrument). The skull was widely exposed and covered with bone adhesive resin (ESTECEM II, Tokuyama Dental, Tokyo, Japan). A small U-shaped head holder made of acetal resin was attached to the skull with acrylic resin (Unifast II, GC, Tokyo, Japan). The mouse was thus held in the stereotaxic apparatus with its head restrained using the U-shaped head holder. For the TD model mice, part of the skull over the right hemisphere was removed to access the motor Cx, CPu, GPe, and SNr. Somatotopy of the motor Cx was confirmed by intracortical microstimulation (a train of 10 pulses at 333 Hz, 200-μs pulse duration, <20 μA). Two pairs of bipolar stimulating electrodes (50-μm-diameter Teflon-coated tungsten wires, tip distance 300-400 μm) were chronically implanted into the orofacial and forelimb regions of the primary motor Cx and fixed using acrylic resin.29,57,58 For mice that received AAV injections, part of the skull over the right hemisphere (ipsilateral to the AAV injection) was removed to access the motor Cx, GPe, and SNr. Somatotopy of these regions was confirmed by intracortical microstimulation (a train of 10 pulses at 333 Hz, 200-μs pulse duration, <20 μA).

After recovering from the surgery, each awake mouse was positioned painlessly in a stereotaxic apparatus (SR-6M) using the U-shaped head holder.29,57,58 For single-unit recording, a glass-coated tungsten microelectrode (0.5 or 1.0 MΩ at 1 kHz; Alpha Omega, Nazareth, Israel) was inserted vertically into the right GPe (target area: posterior 0.3-0.6 mm and lateral 2.2-2.6 mm from bregma) or SNr (posterior 2.6-3.0 mm and lateral 1.6-2.2 mm from bregma) through the dura mater using a hydraulic microdrive. Signals from the microelectrode were amplified and filtered (0.3-5.0 kHz). Unit activity was isolated, converted to digital data with a homemade time-amplitude window discriminator, and sampled at 2.0 kHz using a computer with LabVIEW 2013 software (National Instruments, Austin, TX, USA) for offline data analysis. The responses to Cx electrical stimulation (200-μs duration, single monophasic pulse at 0.7 Hz, 50-μA strength) through the stimulating electrodes implanted in the motor Cx were examined by constructing PSTHs (bin width, 1 ms; prestimulus, 100 ms; poststimulus, 800 ms) for 100 stimulation trials. These Cx-evoked responses were recorded in the GPe and SNr and were considered control data.
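The PSTH construction described above (1-ms bins over a 100-ms pre-stimulus and 800-ms post-stimulus window, accumulated across 100 trials) reduces to a few lines; the sketch below uses hypothetical spike and stimulus timestamps rather than the recorded data:

```python
# Minimal PSTH sketch: align spikes to each stimulus and accumulate 1-ms bins.
import numpy as np

def build_psth(spike_times_ms, stim_times_ms, pre=100.0, post=800.0, bin_ms=1.0):
    edges = np.arange(-pre, post + bin_ms, bin_ms)    # bins from -100 ms to +800 ms
    counts = np.zeros(edges.size - 1)
    for t0 in stim_times_ms:                          # one pass per stimulation trial
        rel = spike_times_ms - t0
        counts += np.histogram(rel[(rel >= -pre) & (rel < post)], bins=edges)[0]
    return edges[:-1], counts                         # bin start times, spike counts

rng = np.random.default_rng(1)
stims = np.arange(100) * 1430.0                       # 100 trials at ~0.7 Hz
spikes = np.sort(rng.uniform(0.0, stims[-1] + 900.0, 50_000))
bin_starts, psth = build_psth(spikes, stims)
```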
After completing the recordings in control conditions, haloperidol-decanoate or AAV vector was injected. For the TD experiments, haloperidol-decanoate was injected in the hindlimb (83 mg/kg, intramuscular) as described in the section on generating TD model mice. Recordings from the GPe and SNr in TD conditions were started 3 weeks after the injection. For VGAT overexpression, AAV-hSyn-ALFA-VGAT was injected into the right striatum, ipsilateral to the recording side. Neuronal responses to Cx stimulation were examined. The striatal region with motor cortical inputs from the orofacial and/or forelimb regions was identified, and AAV vector was injected (0.3 μL/site, 1 site). Approximately 3-4 weeks after the injection, when VGAT was overexpressed, recording from the GPe and SNr in VGAT overexpression conditions was started.

Responses to Cx stimulation were analyzed using PSTHs. Cx stimulation typically induced a triphasic response, composed of early excitation, inhibition, and late excitation, in GPe and SNr neurons. The mean value (m_baseline) and standard deviation (SD_baseline) of the discharge rate during the 100 ms preceding the onset of stimulation were considered the baseline discharge rate; the significance level was set as m_baseline ± 1.65 SD_baseline (corresponding to p = 0.1, two-tailed t-test). If at least two consecutive bins (2 ms) exceeded the significance level, the response was judged significant, as described previously.57,58 The initial point was determined as the time of the first bin exceeding the significance level. The responses were judged to end when two consecutive bins fell below the significance level. The endpoint was determined as the time of the last bin exceeding this level. The duration (from the initial point to the endpoint) and the amplitude (the area of the response; the number of spikes during the significant changes minus the number of spikes during baseline) of each response were calculated and compared.57,58 If there was no significant early excitation, inhibition, or late excitation, the duration and amplitude were set to zero. For averaged PSTHs, the PSTH of each neuron with a significant response to cortical stimulation was averaged within the same conditions and smoothed using a binomial filter (σ = 2.0 ms).

Forelimb reaching task

Detailed methods are provided in a previous study.34
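Our reading of the response-detection rule above can be sketched as follows (hypothetical data, not the authors' code; sign=+1 detects excitation, sign=-1 inhibition): baseline mean and SD come from the 100 pre-stimulus bins, a response opens on two consecutive bins beyond m_baseline ± 1.65 SD_baseline, closes on two consecutive bins back within it, and its amplitude is the spike count during the response minus the baseline-expected count.

```python
# Sketch of significance detection on a PSTH (counts per 1-ms bin, 100 baseline bins).
import numpy as np

def detect_response(counts, pre=100, sign=+1):
    base = counts[:pre]
    m, sd = base.mean(), base.std(ddof=1)
    level = m + sign * 1.65 * sd                       # corresponds to p = 0.1
    exceeds = sign * (counts[pre:] - level) > 0
    onset = None
    for i in range(exceeds.size - 1):
        if exceeds[i] and exceeds[i + 1]:              # two consecutive significant bins
            onset = i
            break
    if onset is None:
        return 0.0, 0.0                                # duration and amplitude set to zero
    end = exceeds.size - 1
    for j in range(onset + 1, exceeds.size - 1):
        if not exceeds[j] and not exceeds[j + 1]:      # two consecutive sub-level bins
            end = j - 1                                # last bin before the response closes
            break
    duration = end - onset + 1                         # in ms, with 1-ms bins
    amplitude = counts[pre + onset:pre + end + 1].sum() - m * duration
    return duration, amplitude
```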
Before 6-OHDA injections, mice were food-restricted and trained. Their body weights were maintained at 85% of their initial body weight. The training chamber was a clear Plexiglas box (20 cm tall, 15 cm deep, and 8.5 cm wide) into which each mouse was placed. There was one vertical slit (0.5 cm wide and 13 cm high) in the center of the front wall of the box. A 1.25-cm-tall exterior shelf was affixed to the wall in front of the slit to hold food pellets (10 mg each: Dustless Precision Pellets, Bio-Serv, Prospect, CT, USA) as a food reward. The training period (1-5 days in duration) was used to familiarize mice with the training chamber and task requirements and to determine their preferred limbs. Food pellets were placed in front of the center slit, and mice used both paws to reach for them. Training was finished when 50 reach attempts were achieved within 30 min and the mouse showed 70% limb preference. After training, the pre-period reaching task was conducted for 7 days; each day consisted of one session of 100 trials with the preferred limb or 30 min. Food pellets were presented individually in front of the slit. Next, 6-OHDA was injected into the medial forebrain bundle contralateral to the preferred limb. Two days after the injection, the 6-OHDA-period reaching task (after injection of 6-OHDA) was conducted for 6 days. Mice then underwent surgery to implant one L-DOPA pellet (hormone and drug pellets of the matrix-driven delivery system; 15 mg/pellet, Innovative Research of America, Sarasota, FL, USA) to continuously administer L-DOPA, and the L-DOPA-period reaching task (after the implantation of the L-DOPA pellet) was conducted for 14 days.

To evaluate the task, mice displayed three reach attempt types: fail, drop, or success. A fail was scored as a reach in which the mouse failed to touch the food pellet or knocked it away. A drop was a reach in which the mouse retrieved the food pellet but dropped it before putting it into its mouth. A success was a reach in which the mouse retrieved the pellet and put it into its mouth. Occasionally a mouse used the non-preferred limb; this was categorized as a fail. Success rates were calculated as the percentage of successful reaches relative to the total reach attempts.

QUANTIFICATION AND STATISTICAL ANALYSIS

Statistical processing was performed using MATLAB and Excel (Microsoft, Redmond, WA, USA) software. Paired t-tests were performed to compare the contralateral and ipsilateral sides. Two-tailed Student's t-tests were performed to compare TD and control mice. Two-way repeated-measures analysis of variance (ANOVA) was performed to compare the dyskinesia time course of LID mice. Bonferroni corrections were applied to correct p values for multiple comparisons. Values are shown as the mean and standard error of the mean (SEM) and are plotted as scatter diagrams.
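Scoring of the reaching task is a simple tally; a minimal sketch with a hypothetical trial log, counting non-preferred-limb reaches as fails as stated above:

```python
# Sketch: compute the success rate from a list of (outcome, limb) trial records.
def success_rate(trials):
    """trials: list of (outcome, limb), outcome in {'fail', 'drop', 'success'}."""
    scored = [("fail" if limb != "preferred" else outcome) for outcome, limb in trials]
    return 100.0 * scored.count("success") / len(scored)

log = [("success", "preferred"), ("drop", "preferred"),
       ("success", "non-preferred"), ("fail", "preferred")]
print(f"success rate: {success_rate(log):.1f}%")   # -> 25.0%
```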
Figure 1. Enlargement of inhibitory presynaptic structure in the GPe and SNr is a shared pathological change in dyskinesias. (A) Schematic of our research strategy. N, neuron; G, glia; BV, blood vessel. (B) Time course of LID model mouse generation. Contralateral rotations and contralateral dystonic postures were counted every 5 min in LID model mice (n = 6). (C) Time course of TD model mouse generation. The number of VCMs per 20 min was plotted every week in TD (n = 6) and control (n = 6) mice. (D) ROI-based brain volume changes were compared between the ipsilateral (Ipsi) and contralateral (Contra) hemispheres of LID model mice (n = 8). Colored brain ROIs, showing significant changes in brain volume, were plotted (false discovery rate [FDR]-corrected p < 0.05). (E) VGAT immunostaining in the GPe and SNr of LID mice. The arrows show increases in Ipsi brain volumes. (F) Low- and high-magnification images of myelin proteolipid protein (PLP)/VGAT/NeuN in the GPe of control mice (ic, internal capsule), SRM images of VGAT/PV staining, and an EM image of the GPe of control mice. Green indicates the somata (Ss) and dendrites (Ds) of GPe principal Ns; purple indicates MSN terminals. (G and H) VGAT+ density and area and S areas of PV+ Ns were compared between the Contra and Ipsi hemispheres of LID mice (n = 8) (G) and between TD (n = 5) and control (n = 5) mice (H). *p < 0.05, **p < 0.01, ***p < 0.001 (Student's or paired t test, p values corrected by Bonferroni correction). Values are plotted as the mean ± standard error of the mean (SEM).

Figure 2. Striatal VGAT expression levels determine axon terminal size and GABA content. (A) MSI showing the optical image, GABA, dopamine, and their overlay in LID mice. Fold changes in GABA content relative to the Contra hemisphere are plotted (n = 5). (B) MSI of GABA content in TD mice. Fold changes in GABA content (based on the average GABA content of each region in control mice) were plotted in control (n = 4) and TD (n = 4) mice. (C) SRM images of VGAT/Bassoon/ALFA in the GPe of VGAT overexpression mice. (D) Areas of VGAT+ and Bassoon+ puncta in MSN terminals were compared between ALFA+ and ALFA− puncta in the GPe and SNr (n = 5).
Figure 3. Striatal VGAT overexpression enhances GABA transmission from MSNs. (A) Stimulation (Cx) and recording (GPe or SNr) sites are depicted with basal ganglia circuitry. Red and blue lines represent glutamatergic excitatory and GABAergic inhibitory projections, respectively. In the striatum (Str), D1 and D2 represent D1-MSN and D2-MSN, respectively. Cx-evoked responses in the GPe or SNr are typically composed of early excitation (i), inhibition (ii), and late excitation (iii). Purple lines represent the duration of the three responses (a, c, and e), and green areas represent the amplitudes of the three responses (b, d, and f). (B) Averaged PSTHs of Cx-evoked responses in the GPe (control, n = 30 Ns, gray; TD, n = 38 Ns, blue; from four mice) and SNr (control, n = 52 Ns; TD, n = 89 Ns; from four mice). Algebraic differences between TD and controls are also indicated (black). (C-E) Duration and amplitude of early excitation, inhibition, and late excitation responses were compared between control and TD mice. (F) Single-unit recording was performed before and after VGAT overexpression. Averaged PSTHs of Cx-evoked responses in the GPe (before, n = 52 Ns; after, n = 123 Ns; from four mice) or SNr (before, n = 37 Ns; after, n = 61 Ns; from four mice) before and after striatal VGAT overexpression in WT mice. Differences in averaged PSTHs before and after overexpression were calculated, as were algebraic differences before and after overexpression. (G-I) Duration and amplitude of early excitation, inhibition, and late excitation responses were compared before and after VGAT overexpression. *p < 0.05, **p < 0.01 (Student's t test, p values corrected by Bonferroni correction). Values of each mouse and the mean ± SEM are plotted.

Figure 4. Striatal VGAT expression levels gate the severity of dyskinesia. (A) AAV vectors were injected into the right dorsal Str (Ipsi to the 6-OHDA injection) of LID mice 2 weeks before 6-OHDA injection; L-DOPA was then injected for 2 weeks. Mouse behavior was observed on the first and final days of L-DOPA injection. (B) Numbers of Contra rotations and Contra dystonic postures were compared between LID with VGAT overexpression (n = 4) and LID with control AAV (n = 7) mice. (C) AAV vectors were injected into the bilateral dorsal Str of TD mice 3 weeks before haloperidol-decanoate injection. (D) The number of VCMs was compared between TD with control AAV (n = 6) and TD with VGAT overexpression (n = 6) mice. (E) Numbers of Contra rotations and Contra dystonic postures were compared between LID with VGAT shRNA (n = 6) and LID with control AAV (n = 7) mice. (F) Numbers of VCMs were compared between TD with VGAT shRNA (n = 10) and TD with control AAV (n = 11) mice. #p < 0.05, ##p < 0.01, ###p < 0.001 (two-way repeated-measures analysis of variance [ANOVA]). *p < 0.05, **p < 0.01 (Student's t test, p values corrected by Bonferroni correction). Values of each mouse and the mean ± SEM are plotted.

Figure 5. Lowered dopamine receptor type 2 signaling with repetitive dopamine fluctuations induces VGAT overexpression and dyskinesia. (A) Experimental time course of continuous L-DOPA administration. The numbers of Contra rotations and Contra dystonic postures were counted in L-DOPA-treated (n = 4) and sham (n = 3) mice.
(B and C) The areas of VGAT+ puncta of MSN terminals and S areas of PV+ Ns were compared between the Ipsi and Contra hemispheres of mice with continuous L-DOPA (n = 4). (D) Experimental time course for pulsatile administration of L-DOPA (daily, intraperitoneal) in TD mice. Numbers of VCMs were plotted every week and for the 4-week period in TD mice with saline (n = 12) and L-DOPA (n = 12) treatment. (E and F) The areas of VGAT+ puncta of MSN terminals and S areas of PV+ Ns were compared among control mice with saline (n = 3), TD mice with saline (n = 6), and TD mice with L-DOPA (n = 6). (G) Experimental time course of daily valbenazine administration (0.5 or 1.5 mg/kg, oral administration) in TD mice. Numbers of VCMs were plotted every week and for the 6-week period in TD mice with saline (n = 8) and valbenazine (n = 8 for each dose) treatment. (H and I) The areas of VGAT+ puncta of MSN terminals and S areas of PV+ Ns were compared among control mice (n = 4), TD mice with saline (n = 7), and TD mice with valbenazine (n = 7 for each dose). (J) Schematic of the proposed dyskinesia pathology. In LID, ablation of dopaminergic Ns (first hit) and dopamine (DA) surges evoked by L-DOPA administration (second hit) increase striatal VGAT expression, resulting in volume increases in MSN terminals and Ss of GPe/SNr Ns and increased GABA content and transmission. In TD, blocking of D2 receptors (D2R; first hit) and physiological dopamine fluctuations (second hit) induce pathophysiology similar to LID. The increased and decreased amplitudes of dopamine fluctuations induced by L-DOPA and valbenazine, respectively, lead to exacerbated (with L-DOPA) and ameliorated (with valbenazine) dyskinesia pathology. *p < 0.05, **p < 0.01, ***p < 0.001 (Student's t test, p values corrected by Bonferroni correction). Values are plotted as the mean ± SEM.
2023-08-01T13:10:31.061Z
2023-07-28T00:00:00.000
{ "year": 2023, "sha1": "ac21555b749169242a3381a1c3578a55783f253b", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2666379123003750/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "18066a3d90ffa093adb46dcdb748de1e22eee1f9", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119334242
pes2o/s2orc
v3-fos-license
Global oscillations of a fluid torus as a modulation mechanism for black-hole high-frequency QPOs

Correspondence to: bursa@astro.cas.cz

We study strong-gravity effects on the modulation of radiation emerging from accreting compact objects as a possible mechanism for flux modulation in QPOs. We construct a toy model of an oscillating torus in the slender approximation, assuming thermal bremsstrahlung for the intrinsic emissivity of the medium, and we compute the observed (predicted) radiation signal including the contribution of indirect (higher-order) images and caustics in the Schwarzschild spacetime. We show that the simplest oscillation mode in an accretion flow, axisymmetric up-and-down motion at the meridional epicyclic frequency, may be directly observable when it occurs in the inner parts of the accretion flow around black holes. Together with the second oscillation mode, an in-and-out motion at the radial epicyclic frequency, it may then be responsible for the high-frequency modulations of the X-ray flux observed at two distinct frequencies (twin HF-QPOs) in micro-quasars.

Introduction

X-ray radiation coming from accreting black hole binary sources can show quasi-periodic modulations at two distinct high frequencies (> 30 Hz), which appear in the 3:2 ratio (McClintock & Remillard, 2005). Observations show that the sole presence of a thin accretion disk is not sufficient to produce these HF-QPO modulations, because they are exclusively connected to the spectral state where the energy spectrum is dominated by a steep power law with some weak thermal disk component. We have shown recently (Bursa et al., 2004) that significant temporal variations in the observed flux can be accomplished by oscillations in geometrically thick flows, fluid tori, even if they are axially symmetric. Here we propose that the QPO variations in the energetic part of the spectrum may come from such a very hot and optically thin torus terminating the accretion flow, which exhibits two basic oscillating modes.

Relativistic tori will generally oscillate in a mixture of internal and global modes. Internal modes cause oscillations of the pressure and density profiles within the torus. The outgoing flux is therefore directly modulated by changes in the thermodynamical properties of the gas, while the shape of the torus is nearly unchanged; these modes are not of interest here. Global modes, on the other hand, alter mainly the spatial distribution of the material. Because light rays do not follow straight lines in a curved spacetime, these changes can be displayed by effects of gravitational lensing and light bending.

In this paper we summarize extended results of numerical calculations and show how simple global oscillation modes of a gaseous torus affect the outgoing flux received by a static distant observer in the asymptotically flat spacetime, and how the flux modulation depends on the geometry and various parameters of the torus. In Section 2 we briefly summarise the idea of the slender torus model and the equations which are used to construct the torus and to set its radiative properties. In Section 3 we let the torus execute global oscillations and, using numerical ray-tracing, we inspect how these oscillations modulate the observed flux. If not stated otherwise, we use geometrical units c = G = 1 throughout this paper.

Slender torus model

The idea of a slender torus was initially introduced by Madej & Paczynski (1977) in their model of the accretion disk of U Geminorum. They noticed that in the slender limit (i.e.
when the torus is small compared with its distance) and in the Newtonian potential, the equipotential surfaces are concentric circles. This additional symmetry induced by a Newtonian potential allowed Blaes (1985) to find a complete set of normal mode solutions for the linear perturbations of polytropic tori with constant specific angular momentum. He extended calculations done for a 'thin isothermal ring' by Papaloizou & Pringle (1984) and showed how to find eigenfunctions and eigenfrequencies of all internal modes. Abramowicz et al. (2005) have recently considered global modes of a slender torus and showed that among possible solutions of the relativistic Papaloizou-Pringle equation there exist also rigid and axisymmetric (m = 0) modes. These modes represent the simplest global and always-present oscillations in an accretion flow: axisymmetric up-down and in-out motion at the meridional and radial epicyclic frequencies.

Metric

Most, if not all, stellar and super-massive black holes have a considerable amount of angular momentum, so that the Kerr metric has to be used to accurately describe their exterior spacetime. However, here we intend to study the basic effects of general relativity on the appearance of a moving axisymmetric body. We are mainly interested in how light bending and gravitational lensing can modulate the observed flux from sources. For this purpose we press for maximum simplicity, to be able to isolate and recognise the essential effects of strong gravity on light. Therefore, instead of the appropriate Kerr metric, we make use of the static Schwarzschild metric for calculations, and where we compare with the non-relativistic case, the Minkowski flat spacetime metric is also used.

Equipotential structure

The equipotential structure of a real torus is given by the Euler equation,

$a_\mu = -\frac{\nabla_\mu p}{p + \epsilon}$,    (1)

where $a_\mu \equiv u^\nu \nabla_\nu u_\mu$ is the 4-acceleration of the fluid and $\epsilon$, $p$ are respectively the proper energy density and the isotropic pressure. The fluid rotates in the azimuthal direction with the angular velocity $\Omega$ and has the 4-velocity of the form

$u^\mu = u^t \, (1,\ 0,\ 0,\ \Omega)$.    (2)

After the substitution of (2), the Euler equation reads

$-\frac{\nabla_\mu p}{p + \epsilon} = \nabla_\mu U - \frac{\Omega\,\nabla_\mu \ell}{1 - \Omega\ell}$,    (3)

where $U = -\tfrac{1}{2}\ln\left(g^{tt} + \ell^2 g^{\phi\phi}\right)$ is the effective potential and $\ell$ is the specific angular momentum. For a barotropic fluid, i.e. a fluid described by a one-parametric equation of state p = p(ε), the surfaces of constant pressure and constant total energy density coincide, and it is possible to find a potential W such that

$W = -\int_0^p \frac{\mathrm{d}p'}{p' + \epsilon}$,

which simplifies the problem enormously (Abramowicz, Jaroszyński & Sikora, 1978). The shape of the 'equipotential' surfaces W(r, z) = const is then given by specification of the rotation law ℓ = ℓ(Ω) and of the gravitational field.

We assume the fluid to have uniform specific angular momentum,

$\ell(r, z) = \ell_K(r_0) = \mathrm{const}$,

where $r_0$ represents the centre of the torus. At this point, gravitational and centrifugal forces are just balanced and the fluid moves freely with the rotational velocity and the specific angular momentum having their Keplerian values $\Omega_K(r_0)$ and $\ell_K(r_0)$. The shape of the torus is given by the solution of equation (3), which in the case of constant ℓ has the simple form

$W(r, z) = U(r, z) - U_{\rm s}$,

with $U_{\rm s}$ the value of the effective potential at the torus surface. In the slender approximation, the solution can be expressed in terms of second derivatives of the effective potential, and it turns out that the torus has an elliptical cross-section with semi-axes in the ratio of the epicyclic frequencies (Abramowicz et al. 2005; see also [Šrámková] in this proceedings). In the model used here, we make an even greater simplification.
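Before introducing that simplification, the constant-ℓ potential defined above can be evaluated directly. The sketch below (ours, not from the paper) assumes a Schwarzschild spacetime in the (+,−,−,−) signature with G = c = M = 1 and the standard Keplerian value ℓ_K(r_0) = (M r_0³)^{1/2}/(r_0 − 2M); closed W = const contours around (r_0, 0) trace the torus cross-sections:

```python
# Sketch: effective potential U(R, z) for a constant-ell Schwarzschild torus.
import numpy as np

def eff_potential(R, z, ell):
    s = np.sqrt(R**2 + z**2)                  # Schwarzschild radial coordinate
    g_tt_inv = 1.0 / (1.0 - 2.0 / s)          # g^tt (signature +---, M = 1)
    g_pp_inv = -1.0 / R**2                    # g^phiphi, with R = s sin(theta)
    return -0.5 * np.log(g_tt_inv + ell**2 * g_pp_inv)

r0 = 10.8
ell = np.sqrt(r0**3) / (r0 - 2.0)             # Keplerian specific angular momentum at r0
R, z = np.meshgrid(np.linspace(8, 14, 200), np.linspace(-3, 3, 200))
U = eff_potential(R, z, ell)
W = U - U.min()                               # potential relative to the torus centre
# closed contours of small W > 0 around (r0, 0) give the torus cross-sections
```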
Introducing the cylindrical coordinates (t, r, z, φ), we use only the expansion at r = r_0 in the z-direction to obtain a slender torus with a circular cross-section of equipotential surfaces,

$W(r, z) \simeq \tfrac{1}{2}\,\frac{\partial^2 U}{\partial z^2}\bigg|_{(r_0,\,0)} \left[(r - r_0)^2 + z^2\right]$.

The profiles of the equipotential structure of a relativistic torus and of our model are illustrated in Fig. 1.

Thermodynamics

An equation of state of polytropic type,

$p = K\rho^\gamma$,

is assumed to complete the thermodynamical description of the fluid. Here, γ is the adiabatic index, which has a value of 5/3 for an adiabatic mono-atomic gas, and K is the polytropic constant determining the specific adiabatic process. Now, we can directly integrate the right-hand side of the Euler equation (1) and obtain an expression for the potential W in terms of the fluid density,

$W = -\frac{\gamma K}{\gamma - 1}\,\rho^{\gamma - 1}$,

where we have fixed the integration constant by the requirement W(ρ = 0) = 0. The density and temperature profiles are therefore

$\rho = \left[\frac{(\gamma - 1)(-W)}{\gamma K}\right]^{1/(\gamma - 1)}, \qquad T = \frac{\mu_w m_u}{k_B}\,K\rho^{\gamma - 1}$,

where μ_w, k_B and m_u are the molecular weight, the Boltzmann constant and the atomic mass unit, respectively (Fig. 2).

Bremsstrahlung cooling

We assume the torus to be filled with an optically thin gas radiating by bremsstrahlung cooling. The emission includes radiation from both electron-ion and electron-electron collisions (Stepney & Guilbert, 1983; Narayan & Yi, 1995):

$f = f_{ei} + f_{ee}$.

The contributions of the two types are given by

$f_{ei} = n_e \bar{n}\,\sigma_T c\,\alpha_f\,m_e c^2\,F_{ei}(\theta_e), \qquad f_{ee} = n_e^2\,c\,r_e^2\,\alpha_f\,m_e c^2\,F_{ee}(\theta_e)$,

where $n_e$ and $\bar{n}$ are the number densities of electrons and ions, $\sigma_T$ is the Thomson cross-section, $m_e$ and $r_e = e^2/m_e c^2$ denote the mass of the electron and its classical radius, $\alpha_f$ is the fine structure constant, $F_{ee}(\theta_e)$ and $F_{ei}(\theta_e)$ are radiation rate functions and $\theta_e = k_B T_e/m_e c^2$ is the dimensionless electron temperature. $F_{ee}(\theta_e)$ and $F_{ei}(\theta_e)$ are of about the same order, so that the ratio of electron-electron to electron-ion bremsstrahlung is of order $n_e r_e^2/(\bar{n}\,\sigma_T) = 3 n_e/(8\pi\bar{n})$, and we can neglect the contribution from electron-electron collisions. For the function $F_{ei}(\theta_e)$, Narayan & Yi (1995) give the following expression:

$F_{ei}(\theta_e) = 4\left(\frac{2\theta_e}{\pi^3}\right)^{1/2}\left(1 + 1.781\,\theta_e^{1.34}\right), \quad \theta_e < 1$;
$F_{ei}(\theta_e) = \frac{9\theta_e}{2\pi}\left[\ln(1.123\,\theta_e + 0.48) + 1.5\right], \quad \theta_e > 1$.

[Fig. 4 caption: A schematic illustration of the displacement. The centre T of the torus is displaced radially by δr and vertically by δz from its equilibrium position E, which is at the distance r_0 from the centre of gravity G.]

In the case of a multi-component plasma, the density $\bar{n}$ is calculated as a sum over individual ion species, $\bar{n} = \sum_j Z_j^2 n_j$, where $Z_j$ is the charge of the j-th species and $n_j$ is its number density. For a hydrogen-helium composition with abundances X : Y, the number densities follow from the mass fractions, where $A_{rj}$ is the relative atomic weight of the j-th species, $m_u$ denotes the atomic mass unit, and we define μ ≡ (X + 4Y)/(X + Y). The total emissivity then follows from the expressions above, which for the non-relativistic limit (θ_e ≪ 1) and Population I abundances (X = 0.7 and Y = 0.28) reduces to the scaling

$f \propto \rho^2\,T^{1/2}$.    (21)

Torus oscillations

In the following, we excite in the torus rigid and axisymmetric (m = 0) sinusoidal oscillations in the vertical direction, i.e. parallel to its axis, as well as in the perpendicular radial direction. Such an assumption will serve to model the possible basic global modes found by Abramowicz et al. (2005). In our model, the torus is rigidly displaced from its equilibrium (Fig. 4), so that the position of the central circle varies as

$r_c(t) = r_0 + \delta r\,\sin(\omega_r t), \qquad z_c(t) = \delta z\,\sin(\omega_z t)$.

Here, $\omega_z = \Omega_K = (M/r_0^3)^{1/2}$ is the vertical epicyclic frequency, in Schwarzschild geometry equal to the Keplerian orbital frequency, and $\omega_r = \Omega_K\,(1 - 6M/r_0)^{1/2}$ is the radial epicyclic frequency. The torus is placed at the distance r_0 = 10.8 M so that the oscillation frequency ratio ω_z : ω_r is 3 : 2, but the choice is arbitrary.
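A quick numerical check (ours) of the quoted tuning: with the epicyclic frequencies above, r_0 = 10.8 M gives ω_z/ω_r = (1 − 6M/r_0)^{−1/2} = 1.5 exactly, i.e. the 3:2 ratio:

```python
# Verify that r0 = 10.8 M puts the two epicyclic frequencies in a 3:2 ratio.
M, r0 = 1.0, 10.8
omega_z = (M / r0**3) ** 0.5                      # vertical = Keplerian frequency
omega_r = omega_z * (1.0 - 6.0 * M / r0) ** 0.5   # radial epicyclic frequency
print(omega_z / omega_r)                          # -> 1.5 exactly
```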
If not stated otherwise, the cross-section radius is R_0 = 2.0 M and the amplitudes of both the vertical and radial motion are set to δz = δr = 0.1 R_0. We initially assume the 'incompressible' mode, where the equipotential structure and the thermodynamical quantities describing the torus are fixed and do not vary in time as the torus moves. Later in this Section we also describe the 'compressible' mode and discuss how changes in the torus properties affect the powers in the different oscillations.

The radial motion results in a periodic change of the volume of the torus. Because the optically thin torus is assumed to be filled with a polytropic gas radiating by bremsstrahlung cooling and we fix the density and temperature profiles, there is a corresponding change of luminosity $L \propto \int f\,\mathrm{d}V$, with a clear periodicity at 2π/ω_r. On the contrary, the vertical motion does not change the properties of the torus or its overall luminosity. We find that in spite of this, and although the torus is perfectly axisymmetric, the flux observed at infinity clearly varies at the oscillation frequency ω_z. This is caused by relativistic effects at the source (lensing, beaming and time delay), and no other cause needs to be invoked to explain in principle the highest-frequency modulation of X-rays in luminous black-hole binary sources.

Effect of spacetime geometry

In the Newtonian limit and when the speed of light c → ∞, the only observable periodicity is the radial oscillation. There is no sign of the ω_z frequency in the power spectrum, although the torus is moving vertically. This is clear and easy to understand, because the c → ∞ limit suppresses the time delay effects and causes photons from all parts of the torus to reach an observer at the same instant of time, so it is really seen as rigidly moving up and down, giving no reason for modulation at the vertical frequency.

When the condition of infinite light speed is relaxed, the torus is no longer seen as a rigid body. The delay between photons which originate at the opposite sides of the torus at the same coordinate time is $\Delta t \simeq (2 r_0/c)\sin i$, where i is the viewing angle (i.e. the inclination of the observer). It is maximal for an edge-on view (i = π/2), and compared to the Keplerian orbital period it is $\Delta t/T_K \simeq (2\pi^2 r_0/r_g)^{-1/2}$. It makes about 10% at r_0 = 10.8 M. The torus is seen from a distance as an elastic ring, which modulates its brightness also at the vertical oscillation frequency ω_z due to the time delay effect and the apparent volume change.

Curved spacetime adds the effect of light bending. Photons are focused by the central mass's gravity, which leads to a magnification of any vertical movement. A black hole is not a perfect lens, so the parallel rays do not cross in a single point, but rather form a narrow focal furrow behind it. When the torus passes through the furrow (at high viewing angles), its oscillations are greatly amplified by the lensing effect. This is especially significant in the case of the vertical oscillation, as the bright centre of the torus periodically passes through the focal line. Figure 3 illustrates the geometry effect on three Fourier power density spectra of an oscillating torus. The spectra are calculated for the same parameters and only the metric is changed. The appearance of the vertical oscillation peak in the 'finite light speed' case and its power amplification in the relativistic case are clearly visible.
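The ~10% time-delay estimate can be verified directly. In the sketch below (ours), we take r_g to denote the Schwarzschild radius 2M, under which assumption the quoted formula agrees with the explicit Δt/T_K ratio:

```python
# Check the edge-on light-crossing delay against the Keplerian orbital period.
import math

M, r0, rg = 1.0, 10.8, 2.0                     # geometrical units, r_g = 2M assumed
T_K = 2.0 * math.pi * (r0**3 / M) ** 0.5       # Keplerian orbital period at r0
dt = 2.0 * r0                                  # Delta_t = (2 r0 / c) sin(i), i = pi/2, c = 1
print(dt / T_K, (2 * math.pi**2 * r0 / rg) ** -0.5)   # both ~0.097, i.e. about 10%
```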
Effect of inclination

In the previous paragraphs we found that both the time delay and the lensing effects are most pronounced when the viewing angle is rather high. Now we will show how much the observed flux is modulated when the torus is seen from different directions. The effect of inclination is probably the most prominent, although it is difficult to observe directly. Changing the line of sight mixes the powers in the amplitudes, because different effects are important at different angles. When the torus is viewed face-on (i.e. from the top), we expect the amplitude of ω_r to be dominant, as the radial pulsations of the torus can be nicely seen and light rays passing through the gas are not yet strongly bent. When viewed almost edge-on, the Doppler effect damps the power of ω_r and gravitational lensing amplifies the power in ω_z. Thus we expect the vertical oscillation to overpower the radial one.

Figure 5 shows the inclination dependence of the oscillation powers in the flat Minkowski spacetime (top) and in the curved Schwarzschild spacetime (bottom). We see that in the flat spacetime the power of the radial oscillation gradually decreases, which is caused by the Doppler effect (cf. the red dotted line in the graph). The vertical oscillation decreases as well, but it is independent of the g-factor. At inclinations i > 75° it has a significant excess caused by the obscuration of part of the torus behind an opaque sphere of radius 2M representing the central black hole.

When gravity is added, the situation at low inclinations (up to i ≃ 25°) is very similar to the Minkowski case. The power of gravitational lensing is clearly visible from the progression of the blue line, i.e. the vertical oscillation. It rises slowly for inclinations i > 45°, then shows a steeper increase for i > 75°, reaches its maximum at i = 85°, and finally drops down to zero. At the maximum it overpowers the radial oscillation by a factor of 40, while it is 20× weaker if the torus is viewed face-on. The rapid decrease at the end is caused by the equatorial plane symmetry. If the line of sight is in the θ = π/2 plane, the situation is the same above and below the plane, thus the periodicity is 2ω_z. The power in the base frequency drops abruptly and moves to overtones.

Effect of the torus size

The effect of the size of the torus is very important to study, because it can be directly tested against observational data. Other free model parameters tend to be fixed for a given source (e.g. inclination), but the torus size may well vary for a single source as a response to temporal changes in the accretion rate. The power in the radial oscillation is correlated with its amplitude, which is set to δr = 0.1 R_0 and grows with the torus size. It is therefore evident that the radial power will be proportional to R_0 squared. If the amplitude were constant, or at least independent of R_0, the ω_r power would be independent of R_0 too. Thus the non-trivial part of the torus size dependence will be incurred by the vertical movements of the torus. Figure 6 shows the PSD power profiles of both the radial and vertical oscillations for several different inclinations. Indeed, the radial power has a quadratic profile and is more dominant for lower viewing angles, which follows from the previous paragraph. The power in the vertical oscillation is also quadratic at low inclinations and similar to the radial one, but the reason is different.
The time delay effect causes apparent deformations from the circular cross-section as the torus moves up and down, i.e. towards and away from the observer in the case of a face-on view. The torus is squeezed along the line of sight at the turning points and stretched when passing the equatorial plane. The deformations are proportional to its size, which is the reason for the observed profile. At high inclinations the appearance of strong relativistic images boosts the vertical oscillation power even more. But, as can be clearly seen from the 85° line and partially also from the 80° line, there is a size threshold beyond which the oscillation power decreases even though the torus still grows. This corresponds to the state where the torus is so big that the relativistic images are saturated. A further increase of the torus size only entails an increase of the total luminosity, while the variability amplitude remains about the same, hence leading to the downturn in the fractional rms amplitude.

Effect of the torus distance

The distance of the torus also affects the intensity of modulations in the observed lightcurves (Fig. 7). The power in the radial oscillation is either increasing or decreasing, depending on the inclination. Looking face-on, the g-factor is dominated by the redshift component and the power in ω_r increases with the torus distance, being less damped. When the view is more inclined, the Doppler component starts to be important and the oscillation loses power with the torus distance. The critical inclination is about 70°. The power of the vertical oscillation generally decreases with the torus distance. It is made visible mainly by the time delay effect, and because the oscillation period increases with the distance of the torus, the effect loses importance. An exception is when the inclination is very high. The large portion of visible relativistic images causes the vertical power first to increase up to some radius, beyond which it then decays. Neither small nor large tori have many visible secondary images, because they are either too compact or too far away. The ideal distance is about 11 M; this is the radius where the torus has the largest portion of higher-order images, corresponding to the maximum of the vertical power in Fig. 7. Generally, the relative power of the vertical oscillation gets weaker as the torus is farther away from the gravitating centre. This is most significant for higher viewing angles, where the drop between 8M and 16M can be more than one order of magnitude. On the other hand, for low inclinations the effect is less dramatic, and if viewed face-on the power ratio is nearly independent of the distance of the fluid ring.

Effect of radial luminosity variations

As already mentioned above, the volume of the torus changes periodically as the torus moves in and out. In the incompressible torus, which we have considered so far, this results in a corresponding variation of the luminosity, linearly proportional to the actual distance of the torus r(t) from the centre,

$L(t) \propto \frac{r(t)}{r_0}$.    (23)

Because we do not change the thermodynamical properties, it also means that the total mass $M = \int \rho\,\mathrm{d}V$ contained within the torus is not conserved during its radial movements, which is the major disadvantage. In this paragraph we relax this constraint and explore the compressible, mass-conserving mode. A compressed torus heats up, which results in an increase of its luminosity and size.
These two effects go hand-in-hand; however, to keep things simple, we isolate them and only show how the powers are affected if we only scale the density and temperature without changing the torus cross-section. We allow the torus to change the pressure and density profiles in a way that keeps its total mass constant. The volume element dV is proportional to r, so that in order to satisfy this condition the density must be scaled as

$\rho(t) = \rho_\bullet\,\frac{r_0}{r(t)}$,    (24)

where ρ_• refers to the density profile of a steady non-oscillating torus with the central ring at radius r_0. If we substitute for the emissivity from (21), we find that the luminosity now goes with r as

$L(t) \propto \left[\frac{r(t)}{r_0}\right]^{-4/3}$.    (25)

The negative sign of the exponent causes the luminosity to increase when the torus moves in and compresses. Moreover, the luminosity variation is stronger than in the incompressible case, because of the greater absolute value of the exponent.

Figure 8 shows the inclination dependence of the oscillation powers in the compressible case. Compared to Fig. 5, we see that the signal modulation at the vertical frequency is not affected, but the slope of the radial oscillation power is reversed. A key role in this reversal is played by the g-factor, which combines the effects of Doppler boosting and gravitational redshift. The Doppler effect brightens the part of the torus where the gas moves towards the observer and darkens the receding part. This effect is maximal for inclinations approaching π/2, i.e. for an edge-on view. On average, i.e. integrated over the torus volume, the brightened part wins and the torus appears more luminous when viewed edge-on (see Fig. 9).

[Fig. 9 caption: The total observed bolometric luminosity of a steady (non-oscillating) torus as a function of inclination. In a flat spacetime (orange) with only special relativistic effects, the total luminosity is increased by a factor of two if the view is changed from face-on to edge-on. It is even more in a curved spacetime (blue), where the relativistic images make a significant contribution. For comparison, calculations with the g-factor switched off (g set to unity) are also shown (dashed lines).]

The redshift effect adds a dependence on the radial distance from the centre of gravity, which is an important fact to explain the qualitative difference between Figs. 5 and 8. In the incompressible mode, the luminosity has a minimum when the torus moves in and a maximum when it moves out of its equilibrium position. The g-factor goes the same way and consequently amplifies the amplitude of the luminosity variability. The situation is exactly opposite in the compressible mode, where the luminosity has a maximum when the torus moves in and a minimum when it moves out. The g-factor goes with the opposite phase and damps the luminosity amplitude. Because the difference in the g-factor value is more pronounced with inclination, this results in an increasing or decreasing dependence of the radial power on inclination in the compressible or incompressible case, respectively.

Discussion and Conclusions

We have found that intrinsic variations of the radiation emitted from the inner parts of an accretion flow may be significantly modified by effects of a strong gravitational field. Above all, we have shown that the orientation of the system with respect to the observer is an important factor which may alter the distribution of powers in different modes. However this effect, although strong, cannot be directly observed, because the inclination of a given source is fixed and mostly uncertain.
Within the model there are other parameters which may be used for predictions of the powers at different frequencies. We have shown that the size of the torus affects the power of the vertical oscillation. In this model this corresponds to an emission of harder photons from a hotter torus and provides a link between the model and observations. From those we know (Remillard et al., 2002) that the higher HF-QPO peak is usually more powerful than the lower one in harder spectral states, which is consistent with the model, but the exact correlation depends on the amplitudes of both oscillations.

The power in the radial oscillation depends very much on the thermodynamical properties of the torus and on its behaviour under the influence of radial movements. We have shown that different parametrizations of the intrinsic luminosity in the in-and-out motion (i.e. the compressible and incompressible modes) change the power of the radial oscillation. On the other hand, the power of the vertical oscillation remains unaffected. This is an important fact, and it means that the flux modulation at the vertical frequency is independent of the torus properties, driven by relativistic effects only.

Another model parameter is the distance of the thin accretion disk. The Shakura-Sunyaev disk is optically thick and blocks the propagation of photons which cross the equatorial plane at radii beyond its moving inner edge. Most of the stopped photons are strongly lensed and carry information predominantly about the vertical mode; thus the presence or absence of an opaque disk may be important for the power distribution in QPO modes. However, this effect is beyond the scope of this article and will be described in a separate paper.
2019-04-14T01:44:20.726Z
2005-10-15T00:00:00.000
{ "year": 2005, "sha1": "d1537ce3d03ee9f5e771aaa69956071a5fd83f64", "oa_license": null, "oa_url": "http://arxiv.org/pdf/astro-ph/0510460", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ac7d57cd25fb075430ef6b43010a24bb766b39c3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
54819141
pes2o/s2orc
v3-fos-license
Modeling sensitivity study of the possible impact of snow and glaciers developing over Tibetan Plateau on Holocene African-Asian summer monsoon climate

L. Jin, Y. Peng, F. Chen, and A. Ganopolski

Key Laboratory of Western China's Environmental Systems (Ministry of Education), Lanzhou University, Lanzhou 730000, China; Potsdam Institute for Climate Impact Research, Potsdam, Germany

Received: 13 October 2008 – Accepted: 14 November 2008 – Published: 11 December 2008

Correspondence to: L. Jin (jinly@lzu.edu.cn)

Published by Copernicus Publications on behalf of the European Geosciences Union.

Introduction

Holocene climate change is one of the focus themes in both the paleoclimate modeling and proxy data reconstruction communities. By examining 50 globally distributed paleoclimate records, Mayewski et al. (2004) revealed as many as six periods of significant rapid climate change during the Holocene which were synchronous across the whole Earth. It is suggested that changes in insolation, related both to Earth's orbital variations and to solar variability, played a central role in the global-scale changes in climate of the last 11.5 cal kyr (Mayewski et al., 2004). This insolation driving mechanism in Holocene climate change is supported by climate modeling experiments of African-Asian monsoon climate (e.g. Kutzbach and Otto-Bliesner, 1982; Kutzbach and Guetter, 1986; COHMAP Members, 1988; Joussaume et al., 1999; Otto-Bliesner, 1999; Weber et al., 2004). However, external forcing, e.g. the Earth's orbital variations and solar variability, can be amplified and modified through a number of feedbacks within the climate system, leading to marked climate variations in the Holocene (Foley et al., 1994; TEMPO Members, 1996; Claussen and Gayler, 1997; Ganopolski et al., 1998b; Wang, 1999). The atmosphere-vegetation feedback is an important amplifying factor of North Africa's abrupt climate transition from a wet phase to a dry phase starting at around 6 kyr BP (Texier et al., 1997; Claussen et al., 1999). The positive oceanic feedback is another factor for the enhanced African summer monsoon in the early Holocene (Kutzbach and Liu, 1997; Liu et al., 2003). Studies from lake sediment pollen and carbonate records showed that the arid phase in South Asia probably started around 5 kyr BP (Maxwell, 2001; Singh, 2002), coinciding with a stepwise weakening of the southwest monsoon (Gupta and Anderson, 2005), which was closely linked to North Atlantic cold spells, e.g. the millennial-scale cold events during 4.6-4.2 kyr BP (Gupta et al., 2003). In addition, it is indicated that the so-called "Megathermal" or "Holocene Optimum" of 8.5-3.0 kyr BP often mentioned in the Chinese Quaternary and paleoclimate community (Huang, 1998; Qin, 2002), as defined by peak precipitation or effective moisture, is asynchronous in eastern Asian monsoon regions, which is related to a general weakening and southward retreat of the East Asian summer monsoon since about 9 kyr BP (An et al., 2000).
The Tibetan Plateau, with a mean elevation of 4.5 km above sea level and an area of 2.5×10^6 km², is one of the most imposing topographic features in Central Asia and among the greatest glaciated areas outside the Polar Regions. The Tibetan Plateau has a profound influence on regional and global atmospheric circulation and is therefore important for our understanding of the dynamics of global environmental and climatic change (Ruddiman and Kutzbach, 1989; Molnar and England, 1990; Prell and Kutzbach, 1992; Yanai et al., 1992). Studies of glacier changes during the late Quaternary were conducted over the Tibetan Plateau, and significant progress has been made in recent years (Lehmkuhl and Owen, 2005). An earlier study revealed that there were intervals of glacier advances over the Tibetan Plateau during the Holocene at about 8.2-7.2 kyr BP, 5.8-4.9 kyr BP, 3 kyr BP and 300-450 yr BP, respectively (Lehmkuhl, 1997). Observation from the Dasuopu glacier (28°23' N, 85°43' E), located in the southwestern region of the Tibetan Plateau, shows a gradual increase of snow accumulation from AD 1600 to 1817, followed by a significant increase that persisted until AD 1880 (Thompson et al., 2000). Another glacier site, the Guliya ice cap (35°17' N, 81°29' E), which is located in the northwestern region of the Tibetan Plateau and whose record is believed to cover more than 100 000 years, spanning the whole Holocene to the last interglacial (Christner et al., 2003), showed a rapid decrease in temperature of up to 3-4°C during the period of 7-5 kyr BP (Thompson et al., 1997; Yao et al., 2000). Glaciers at Nanga Parbat (35°14'15" N, 74°35'21" E), northwestern Himalaya Mountains, expanded during the early to middle Holocene, about 9.0-5.5 kyr BP (Phillips et al., 2000). Evidence from oxygen isotope records in the Bay of Bengal shows that during the early-middle Holocene the Himalayas experienced at least two significant episodes of aridity and intensified glaciation, at 5-4.3 kyr BP and about 2 kyr BP (Chauhan, 2003). Recently, evidence based on radiocarbon ages of fossil wood buried in moraines, lichen-dated moraines and tree rings identified three main periods of glacier advances in the southeastern Tibetan Plateau during the late Holocene: around AD 200-600, AD 800-1150, and AD 1400-1920, respectively (Yang et al., 2007). By using a set of data from a fully coupled ocean-atmosphere model (FOAM), Casal et al. (2004) calculated the ice-sheet mass balance for the Tibetan Plateau for the present day and for several different time slices of the Holocene, under the insolation forcings at 3 kyr BP, 6 kyr BP, 8 kyr BP and 11 kyr BP. Their results show that the area with positive ice-sheet mass balance expands from 11 kyr BP to 0 kyr BP (Casal et al., 2004, Fig. 11). Owen (2009) reviewed the latest Holocene glacier fluctuations in the Himalaya and Tibet, suggesting that "notable glacier advances occurred during the Late-glacial and the early Holocene, with minor advances in some regions during the mid-Holocene" and that "there is abundant evidence for multiple glacial advances throughout the latter part of the Holocene". Both observation and modeling of glacier development over the Tibetan Plateau suggest that the Tibetan Plateau may have experienced glacier fluctuations during the Holocene. There are different suggestions concerning the controlling factors of glacier expansion over the Tibetan Plateau during the Holocene. Thompson et al.
(2006) suggest that glacier expansion on the southern and central Tibetan Plateau is driven mainly by variations in monsoonal precipitation that are modulated by precession-driven insolation changes, while Yang et al. (2007) argued that it is the temperature change, rather than the precipitation change caused by variations of the South Asian summer monsoon, that is the controlling factor for glacier fluctuation during the late Holocene.

In a previous model study using CLIMBER-2, Jin et al. (2005) studied the impacts of ice and snow cover over the Tibetan Plateau on Holocene climate change, and the simulation results suggest that the snow and glacier environment over the Tibetan Plateau is an important factor for the Holocene African-Asian monsoon retreat and an amplifier for monsoon regional climate variability. In their transient modeling experiments (Jin et al., 2005), the change of snow and glaciers over the Tibetan Plateau was set to be a simple linear increase from 9 kyr BP to present. However, the assumption that snow and glaciers developed on the Tibetan Plateau in a linear way may be unrealistic due to the complexity of snow and glacier development. Because different scenarios of snow and glaciers developing on the Tibetan Plateau may have different effects on climate change, here, as follow-up research to Jin et al. (2005), we conduct a series of sensitivity experiments using CLIMBER-2 focusing on the impacts of different scenarios of snow and glaciers developing over the Tibetan Plateau on Holocene climate changes in the African-Asian monsoon region and other regions.

Table 1. Prescribed scenarios of the fractional snow and glacier cover f over the Tibetan Plateau (cf. Fig. 3a).

ICE0: f is fixed at zero for all simulations AO, AV, AOV (grey line in Fig. 3a).

ICE1: linearly increasing over the 9 kyr transient simulation, starting with 0 at 9 kyr BP and ending with 0.2 at 0 kyr BP (green line in Fig. 3a).

ICE2: linearly increasing by 0.05 (1/4 of the maximum fractional ice cover at 0 kyr BP as in ICE1) during four periods, 7.2-8.2 kyr BP, 4.9-5.8 kyr BP, 2.4-3.3 kyr BP and 0.30-0.45 kyr BP, and held fixed during the other periods at the maximum value reached in the previous period; mainly based on the results of Lehmkuhl (1997) (purple line in Fig. 3a).

ICE3: linearly and slightly increasing in the early Holocene (9-7 kyr BP), starting with 0 at 9 kyr BP and reaching 0.017 at 7 kyr BP; then prescribed as linearly and rapidly increasing in the mid-Holocene (7-5 kyr BP), reaching 0.117 at 5 kyr BP; finally prescribed as linearly and slightly increasing between 5 and 0 kyr BP, ending with 0.2 at 0 kyr BP, which mimics the Nigardsbreen Glacier (62 ...) variation (red line in Fig. 3a).

ICE4: linearly increasing between 9 kyr BP and 6 kyr BP, starting with 0 at 9 kyr BP and reaching the maximum 0.2 at 6 kyr BP, then fixed at 0.2 between 6 kyr BP and 0 kyr BP, which mimics the Abramov glacier (40° N, 72° E) variation during the Holocene resulting from ECBilt (Weber et al., 2003) (blue line in Fig. 3a).

The model

The Earth system model of intermediate complexity, CLIMBER-2, used in this study was developed at the Potsdam Institute for Climate Impact Research (PIK) in Germany to perform long-term simulations. The model consists of modules describing the atmosphere, ocean, sea ice, land surface and terrestrial vegetation. The atmosphere module is a statistical-dynamical atmosphere model with a low spatial resolution of 10° in latitude and 51° in longitude. It
explicitly resolves the large-scale circulation patterns such as the subtropical jet streams, the Hadley, Ferrel and polar cells, the monsoon, and the centers of action of the Siberian high-pressure area and the Aleutian low-pressure area. It does not resolve individual synoptic weather systems but rather predicts their statistical characteristics, including the fluxes of heat, moisture, and momentum associated with ensembles of synoptic systems. The vertical structure includes a planetary boundary layer, a free troposphere (including cumulus and stratiform clouds) and a stratosphere. Radiative fluxes are computed on 16 vertical levels. In short, the model works like most coupled general circulation models (GCMs) except that synoptic-scale activity is parametrized. The ocean module is a zonally averaged model with three separate basins (Atlantic, Indian and Pacific oceans), similar to the one used by Stocker et al. (1992), including a model of sea-ice thickness, concentration, and advection, which operates with a latitudinal resolution of 2.5°. Vertically the ocean is subdivided into 20 uneven layers. Parameterizations for its vorticity balance and Ekman transport are employed. The model of terrestrial vegetation (Brovkin et al., 2002) describes the dynamics of the vegetation cover, i.e. the fractional coverage of a grid cell by trees, grass, and desert (bare soil), as well as net primary productivity, leaf area index (LAI), biomass, and the soil carbon pool.

In CLIMBER-2, the vegetation model interacts with the atmosphere model in such a way that, at the end of the simulation year, output of the atmospheric model (temperature and precipitation fields) is used to simulate the temporal dynamics of the vegetation cover; in turn, the vegetation cover and the maximum LAI are accounted for in calculating the surface albedo, roughness, and evapotranspiration during the following simulation year. Hence, CLIMBER-2 is able to describe changes in vegetation cover that can be interpreted as shifts in vegetation zones smaller than the spatial resolution of the model. Atmosphere and ocean interact through the surface fluxes of heat, fresh water and momentum. The model does not employ flux adjustments. The CLIMBER-2 model has been validated against the present-day climate (Petoukhov et al., 2000; Ganopolski et al., 2001) and has been used successfully for a variety of paleoclimate studies (Ganopolski et al., 1998a; Claussen et al., 1999; Jin et al., 2005, 2007).

The experimental set-up

To mimic various scenarios of snow and glaciers developing over the Tibetan Plateau, five scenarios of snow and glacier area growth over the Tibetan Plateau were set up for the 9 kyr transient simulation, namely ICE0, ICE1, ICE2, ICE3 and ICE4, respectively (see Table 1 for detail). In all experiments except ICE0, the fraction of snow and glaciers in the grid cell which contains the Tibetan Plateau (30 ...) in the model is prescribed starting with 0 at 9 kyr BP and ending with 0.2 at 0 kyr BP, with different scenarios of the variation of the fraction of snow and glaciers over the Tibetan Plateau from 9 kyr BP to 0 kyr BP (see Fig. 3a and Table 1 for detail).
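For concreteness, the five prescribed ice-cover histories of Table 1 can be generated as piecewise-linear time series. The sketch below is ours, not the authors' code; the breakpoints are taken from Table 1, and the exact ICE2 plateau behaviour between advance periods is our assumption:

```python
# Sketch: piecewise-linear f(t) for the five ice scenarios, t in kyr BP (9 -> 0).
import numpy as np

t = np.linspace(9.0, 0.0, 9001)   # annual steps from 9 kyr BP to present

def piecewise(t, knots):
    """Linear interpolation through (kyr BP, f) knots; t decreases toward present."""
    kt, kf = zip(*knots)
    return np.interp(-t, [-k for k in kt], kf)   # negate so x is increasing

scenarios = {
    "ICE0": piecewise(t, [(9, 0.0), (0, 0.0)]),
    "ICE1": piecewise(t, [(9, 0.0), (0, 0.2)]),
    "ICE2": piecewise(t, [(9, 0.0), (8.2, 0.0), (7.2, 0.05), (5.8, 0.05),
                          (4.9, 0.10), (3.3, 0.10), (2.4, 0.15), (0.45, 0.15),
                          (0.30, 0.20), (0, 0.20)]),   # +0.05 per advance period
    "ICE3": piecewise(t, [(9, 0.0), (7, 0.017), (5, 0.117), (0, 0.2)]),
    "ICE4": piecewise(t, [(9, 0.0), (6, 0.2), (0, 0.2)]),
}
```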
Three simulations with CLIMBER-2 were conducted for each scenario of snow and glacier area over the Tibetan Plateau for the Holocene. First, the fully coupled atmosphere-ocean-terrestrial vegetation model (AOV) was employed for the transient simulation of the past 9000 years. Second, the coupled atmosphere-ocean model (AO) was run with the vegetation cover fixed at its 9 kyr BP state from the equilibrium run. Finally, the simulation AV was run with interactive vegetation (coupled atmosphere-vegetation model), while the ocean characteristics were fixed as in the equilibrium run at 9 kyr BP. The AO and AV simulations are intended to isolate the interactive effects of the ocean and of the terrestrial vegetation cover, respectively. In all transient simulations (AOV, AO, AV), CLIMBER-2 was started from an equilibrium state with orbital forcing at 9 kyr BP and run for 9000 years until the present day, driven by changes in orbital parameters and by the different scenarios of imposed ice forcing. The global and seasonal change of the orbital insolation is computed with the algorithm of Berger (1978). No flux corrections between the atmospheric and oceanic modules are applied in any simulation. The atmospheric CO2 concentration is kept constant at 280 ppmv, and the solar constant is fixed at 1365 W m−2.

CLIMBER-2 captures most major features of the observed climatology, as shown in detail by Petoukhov et al. (2000), who carefully compared the modeled and observed present-day climatology. Figure 1 shows that the model reproduces the large-scale patterns of sea-level pressure (SLP) for both seasons (June-July-August, JJA, and December-January-February, DJF), including the positions and absolute values of the stationary high- and low-pressure systems in the subtropics and mid-latitudes. The summer Asian low-pressure system is accompanied by substantial monsoon winds and heavy precipitation, especially in Southern and Eastern Asia (left panel in Fig. 1). The intense winter Siberian high drives a strong southward Asian winter monsoon wind, which veers southeastward after crossing the equator (right panel in Fig. 1). CLIMBER-2 is able to simulate the basic global patterns of the present-day potential vegetation cover: a boreal forest belt and tropical forests (see Fig. 2a), and subtropical deserts in Africa and Eurasia (Fig. 2c). Grasses occupy a significant part of the high-latitude regions as well as subtropical areas (Fig. 2b). In the mid-latitudes of North America, the model overestimates the tree fraction because of the coarse model resolution: the strong W-E gradient in precipitation is not represented by the mean values across the continent.
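The design above therefore amounts to three coupling configurations crossed with five ice scenarios, i.e. fifteen transient runs sharing the same boundary conditions. A minimal Python sketch of this run matrix is given below; the dictionary keys and run names are illustrative bookkeeping of what the text states (constant CO2, fixed solar constant, Berger (1978) orbital forcing), not CLIMBER-2 input syntax.

```python
from itertools import product

# Boundary conditions shared by all transient runs, as stated in the text
BASE = {
    "co2_ppmv": 280,               # constant atmospheric CO2
    "solar_constant_wm2": 1365.0,  # fixed solar constant
    "orbital_forcing": "Berger1978",
    "start_kyr_bp": 9.0,
    "end_kyr_bp": 0.0,
}

COUPLINGS = {
    "AOV": {"vegetation": "interactive",  "ocean": "interactive"},
    "AO":  {"vegetation": "fixed_9kyrBP", "ocean": "interactive"},
    "AV":  {"vegetation": "interactive",  "ocean": "fixed_9kyrBP"},
}
ICE_SCENARIOS = ["ICE0", "ICE1", "ICE2", "ICE3", "ICE4"]

# 3 coupling configurations x 5 ice scenarios = 15 transient simulations
runs = [dict(BASE, name=f"{c}_{ice}", ice_scenario=ice, **COUPLINGS[c])
        for c, ice in product(COUPLINGS, ICE_SCENARIOS)]
print(len(runs))  # 15
```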
Temperature, precipitation and vegetation changes in the last 9 kyr

The transient AOV simulation revealed pronounced responses of CLIMBER-2 to the changes in orbital parameters, as well as impacts of the snow and glacier cover over the Tibetan Plateau (the different imposed ice scenarios) on climate. Figure 3e illustrates that the changes in boreal summer temperature in South Asia are quite different from those in Southeast Asia (Fig. 3d) and North Africa (Fig. 3f). Since 6 kyr BP, the summer temperature in South Asia has increased as the fraction of snow and glacier cover over the Tibetan Plateau increased in all imposed ice scenarios (except ICE0). A quicker increase of snow and glacier cover over the Tibetan Plateau causes a faster increase in summer temperature in South Asia, as can be seen by comparing scenario ICE4 with ICE1, ICE2, and ICE3 in Fig. 3e. In Southeast Asia and North Africa, a faster increase of snow and glacier cover over the Tibetan Plateau leads to an earlier decrease in summer surface air temperature between 8 kyr BP and 5 kyr BP; after 5 kyr BP, even though the snow and glaciers keep growing in all ice scenarios, the summer temperature changes little (Fig. 3d, f). The summer precipitation in Southeast Asia (Fig. 3g) increases rapidly during the early to mid-Holocene (9-6 kyr BP) with the gradually increasing snow and glacier cover over the Tibetan Plateau: the faster the snow and glaciers grow (scenario ICE4, blue line in Fig. 3a), the more the summer precipitation increases. After 6 kyr BP, even though the snow and glaciers keep growing (scenarios ICE1, ICE2, and ICE3), the summer precipitation in Southeast Asia no longer increases, and it even begins to decrease when the fraction of snow and glacier cover over the Tibetan Plateau is held constant (scenario ICE4). In North Africa, the strongest decrease in summer precipitation appears around 5.5-6.5 kyr BP in the ICE0 (no-ice) simulation (grey line in Fig. 3i), while in scenarios ICE1 (green), ICE2 (purple), ICE3 (red), and ICE4 (blue) it occurs around 6.5-7.5 kyr BP, 6.5-7.5 kyr BP, 6-7 kyr BP, and 7-8 kyr BP, respectively. In South Asia, including the southern part of the Tibetan Plateau, the rapid and strong reduction in summer precipitation during the early to mid-Holocene is similar to that in North Africa. Compared to scenario ICE0 (no-ice), the ice scenarios (ICE1, ICE2, ICE3, and ICE4) show a pronounced increase in summer precipitation in Southeast Asia (Fig. 3g) and a decrease in South Asia (Fig. 3h) and North Africa (Fig. 3i).

Figure 4 shows the global distribution of boreal summer surface air temperature and precipitation anomalies at 6 kyr BP relative to the present day (0 kyr BP), simulated in AOV with imposed ice scenarios ICE0 and ICE4; these correspond to the differences between the AOV transient simulations (with ICE0 and ICE4) at 6 kyr BP and at 0 kyr BP (see Fig. 3a, vertical red dashed line). In the ICE0 (no-ice) scenario, the simulated boreal summer air temperature is up to 2 °C higher than at present in the northern parts of Europe, Asia, and North America, and the summer precipitation is greater than today's in North Africa and South Asia, exceeding 1.6 mm/day at its maximum (Fig. 4, upper panel). When snow and glaciers are imposed over the Tibetan Plateau (scenario ICE4), a cooling exceeding 2 °C in summer surface air temperature appears over High Asia (Fig. 4, left middle panel), and accordingly the boreal summer precipitation decreases greatly in North Africa and South Asia (Fig. 4, right middle panel) compared to scenario ICE0 (no-ice). The most prominent changes in boreal summer precipitation between ICE4 and ICE0 at 6 kyr BP are in North Africa and South Asia, with a decrease of at most 1 mm/day, and in Southeast Asia, with an increase of at most 0.4 mm/day (Fig. 4, right lower panel).
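The maps in Fig. 4 are built from two differencing steps: each run's 6 kyr BP state minus its 0 kyr BP state, and then the ICE4 anomaly minus the ICE0 anomaly. A small numpy sketch of that bookkeeping is shown below; the array shapes and random placeholder fields are hypothetical, with the 18 x 7 grid merely echoing the model's 10° x ~51° resolution.

```python
import numpy as np

def anomaly_6k(field):
    """6 kyr BP minus present-day (0 kyr BP) state for one run.

    `field` is a hypothetical (time, lat, lon) array sampled every
    1 kyr from 9 kyr BP (index 0) to 0 kyr BP (index 9).
    """
    i_6k, i_0k = 9 - 6, 9 - 0
    return field[i_6k] - field[i_0k]

rng = np.random.default_rng(0)
# Placeholder JJA precipitation fields on an 18 x 7 (10-deg x ~51-deg) grid
precip_ice0 = rng.random((10, 18, 7))
precip_ice4 = rng.random((10, 18, 7))

# Effect of the imposed ice at 6 kyr BP (cf. Fig. 4, lower row)
ice_effect = anomaly_6k(precip_ice4) - anomaly_6k(precip_ice0)
print(ice_effect.shape)  # (18, 7)
```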
The pattern of rapidly increased precipitation in Southeast Asia (Fig. 3g and Fig. 4, right lower panel) and decreased precipitation in South Asia and North Africa (Fig. 4, right lower panel) during the early to mid-Holocene reflects a large-scale weakening of the African-Asian summer monsoon circulation and a southeastward shift of the locus of the monsoon rains (Fig. 4). The weakening and shift of the monsoon result from the adjustment of the atmospheric circulation to the changes in the thermal contrast between the African and Eurasian continents and the Pacific and Indian Oceans during boreal summer, caused by the gradually weakening seasonal cycle of solar insolation in the Northern Hemisphere. The weakening of the Asian summer monsoon circulation is supported by paleoclimate records (Morrill et al., 2003; Tang et al., 2000).

Compared to the changes in near-surface air temperature and precipitation, the fraction f of vegetation cover (trees plus grasses) changed differently. Figure 3b, c shows the responses of the fraction f of vegetation cover in South Asia and North Africa to the different scenarios of gradually increasing snow and glaciers over the Tibetan Plateau. In North Africa (Fig. 3c), the vegetation cover decreases earlier and more rapidly than in South Asia (Fig. 3b) in all imposed ice scenarios. Complete desertification in North Africa appears around 3.5 kyr BP with no ice imposed over the Tibetan Plateau (scenario ICE0), but by 5.0 kyr BP in scenarios ICE1, ICE2, and ICE3 and by 6 kyr BP in scenario ICE4 (Fig. 3c), showing a strong effect of the snow and glacier cover over the Tibetan Plateau on North African vegetation development. In South Asia, the response of the vegetation cover to the gradually increasing snow and glaciers over the Tibetan Plateau lags behind North Africa by a few thousand years: the fraction of vegetation cover in South Asia is reduced by only 10% (Δf ≈ −0.1) during the first 5 kyr in scenarios ICE1, ICE2, and ICE3 (Fig. 3b), while the fraction of vegetation cover in North Africa reaches almost zero by 5 kyr BP (Fig. 3c). The snow and glacier influence on the vegetation cover in South Asia begins after 5 kyr BP, with a rather rapid decrease in vegetation cover from 3.5 to 2 kyr BP (Δf ≈ −0.35) in scenarios ICE1, ICE2, and ICE3. In scenario ICE4, even though the snow and glacier cover no longer increases after 6 kyr BP (Fig. 3a, blue line), its influence lasts for the rest of the Holocene vegetation evolution, as a comparison with scenario ICE0 shows (Fig. 3b). The simulated fraction of vegetation cover in North Africa agrees generally with proxy records of changes in vegetation cover evidenced by ocean temperature and terrigenous dust in marine sediment records off western Africa (deMenocal et al., 2000). The percentage of terrigenous dust indicates a dramatic increase in the amount of dust, relating directly to the changes in vegetation as the Sahara expanded across North Africa during 5.7-5.0 kyr BP (deMenocal et al., 2000).
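The desertification dates quoted above are read off the simulated vegetation-fraction curves in Fig. 3c. One possible way to extract such a date from a time series is sketched below in Python; the threshold and the synthetic linear decline are purely illustrative.

```python
import numpy as np

def desertification_time(t_kyr_bp, f_veg, threshold=0.01):
    """First time (kyr BP) at which the vegetation fraction drops below
    `threshold` and stays there; returns None if it never does."""
    below = f_veg < threshold
    for i in range(below.size):
        if below[i:].all():
            return float(t_kyr_bp[i])
    return None

# Illustrative series: linear decline reaching zero around 3.5 kyr BP,
# loosely mimicking the ICE0 result for North Africa
t = np.linspace(9.0, 0.0, 91)                    # 9 kyr BP -> present
f_veg = np.clip((t - 3.5) / (9.0 - 3.5), 0.0, 1.0)
print(desertification_time(t, f_veg))            # ~3.5
```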
Effects of imposed ice-albedo on North African and South Asian climate

To investigate the synergy between the imposed ice-albedo change over the Tibetan Plateau and the vegetation feedbacks, in addition to the fully coupled experiments (AOV) discussed above, we performed two sets of transient simulations with fixed vegetation cover (AO) and fixed ocean characteristics (AV). Figure 5 shows the simulated results in North Africa for the last 9000 years from simulations AO, AV, and AOV with the no-ice scenario ICE0 and with the ice scenarios ICE1 and ICE4. A rapid decrease in the near-surface air temperature during the mid-Holocene occurs in scenario ICE4 about 1 kyr earlier than in scenario ICE1, and 2 kyr earlier than in scenario ICE0, in both simulations AV and AOV (Fig. 5a, b, c). Similar behavior is seen in the changes in summer precipitation (Fig. 5d, e, f) and vegetation cover in North Africa (Fig. 5g, h, i). The results of experiments AV and AOV for North Africa show a similar response of CLIMBER-2 to the changes in orbital parameters and to the imposed ice forcing over the Tibetan Plateau, while experiment AO shows weaker climate variations in all ice-imposed scenarios. In simulation AO with no ice imposed (and without vegetation interaction) (Fig. 5a, long dashed line), the summer near-surface air temperature decreases slowly and smoothly during the early to middle Holocene owing to the slow decrease of the surface absorbed solar radiation (Fig. 6a, long dashed line) and of the latent heat flux (evapotranspiration) (Fig. 6d). But in simulations AV and AOV, the summer near-surface air temperature and precipitation in North Africa change more strongly than in simulation AO during 7-5 kyr BP (Fig. 5). A parallel rapid decrease in the surface absorbed solar radiation (Fig. 6a) and latent heat flux (evapotranspiration) (Fig. 6d), and a rapid increase in the surface albedo (Fig. 6g), are caused by a strong positive feedback between subtropical vegetation and precipitation. This feedback emerges from an interaction between the high albedo of the Saharan sand deserts and the atmospheric circulation, as hypothesized by Charney et al. (1975), and from subsequent changes in the hydrological cycle (Claussen, 1997, 1998). If the snow and glaciers over the Tibetan Plateau (scenarios ICE1, ICE4) are imposed in simulations AV and AOV, the abrupt regional changes in summer precipitation (Fig. 5e, f) and vegetation cover (Fig. 5h, i) occur earlier than in the no-ice (ICE0) experiments. The faster the ice cover increases over the Tibetan Plateau (Fig. 3a), the earlier and the more rapidly the decrease in summer precipitation and vegetation cover in North Africa appears (Fig. 5). In effect, the ice over the Tibetan Plateau "accelerates" the abrupt changes in summer precipitation and vegetation cover in North Africa, while the ocean (AO experiment) plays only a minor role in North African climate change during the Holocene, consistent with the previous modeling study of Claussen et al. (1999).

Fig. 6. Changes in surface absorbed solar radiation (a, b, c), latent heat flux (evapotranspiration) (d, e, f), and surface albedo (g, h, i) in North Africa (15° W-40° E, 20° N-30° N) for the last 9 kyr in simulations AO (long dashed line), AV (long-short dashed line), and AOV (solid line) with imposed ice scenarios ICE0 (no-ice) (a, d, g), ICE1 (b, e, h), and ICE4 (c, f, i).
In South Asia, whereas the summer precipitation decreases rapidly during the early to middle Holocene (Fig. 7d, e, f), similarly to North Africa, the summer near-surface air temperature (Fig. 7b, c) and vegetation cover (Fig. 7h, i) evolve differently from North Africa (Fig. 5). With a linear increase in snow and glacier cover over the Tibetan Plateau in scenario ICE1 (Fig. 3), the summer near-surface air temperature in South Asia begins to increase after around 6 kyr BP in simulations AO, AV, and AOV (Fig. 7b), while in North Africa it still decreases after 6 kyr BP (Fig. 5b). In scenario ICE4 (Fig. 7c), the summer near-surface air temperature in South Asia begins to increase almost in step with the increase of snow and glacier cover over the Tibetan Plateau in the early Holocene. The rapidly increasing summer near-surface air temperature in South Asia is closely related to the rapid decreases in summer precipitation (Fig. 7e, f) and vegetation cover (Fig. 7h, i) in this region. Figure 7e, f shows that the summer precipitation in South Asia decreases more strongly in the experiments with imposed snow and glacier cover over the Tibetan Plateau (scenarios ICE1, ICE4) than in the no-ice scenario (ICE0) (Fig. 7d) in the AO, AV, and AOV simulations. Similarly, the faster increase in snow and glacier cover over the Tibetan Plateau (scenarios ICE1, ICE4) causes an earlier (by about 1000-2000 years) and more rapid decrease in vegetation cover in South Asia (Fig. 7h, i) compared to the no-ice scenario (ICE0) (Fig. 7g). The rapid decrease in summer precipitation in South Asia during the early to middle Holocene (9-6 kyr BP) in the ice-imposed scenarios (ICE1, ICE4) (Fig. 7e, f) is closely related to the weakening of the South Asian summer monsoon (Fig. 8), as measured by the South Asian summer monsoon index (SAMI), defined here as the difference between the zonal winds averaged at 850 hPa and at 200 hPa over the South Asian monsoon area (40° E-95° E, 0°-20° N), following the Webster-Yang index (Webster and Yang, 1992) and adapted to the spatial resolution of CLIMBER-2. The SAMI is reduced much more rapidly during 9-6 kyr BP in the ICE4 scenario than in the ICE0 scenario (Fig. 8). But the vegetation cover changes more slowly (Fig. 7g, h, i) than in North Africa (Fig. 5g, h, i), and the surface albedo remains relatively low (Fig. 9g, h, i) compared to North Africa (Fig. 6g, h, i) during 9-6 kyr BP. These correspond to a smooth decrease of the absorbed solar radiation in this region before 5 kyr BP in the ICE0 experiment (Fig. 9a), but to an increase of the absorbed solar radiation in the ICE1 and ICE4 experiments, as can also be seen from the summer near-surface air temperature changes during the early to middle Holocene in this region (Fig. 7a, b, c). The increased summer near-surface air temperature in the experiments with imposed ice scenarios (ICE1, ICE4) corresponds to the increases in absorbed solar radiation (Fig. 9b, c). Although the surface albedo increases (Fig. 9h, i), the planetary albedo decreases with time (Fig. 9m, n) owing to a strong reduction of cloud cover (Fig. 9p, q), which is, in turn, a result of the weakening of moisture convergence caused by the weakening summer monsoon.
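As a concrete illustration, the SAMI defined above can be evaluated on a coarse grid as the cosine-latitude-weighted mean of U850 minus that of U200 over 40° E-95° E, 0°-20° N. The Python sketch below does this on a grid that mimics the model's 10° x ~51° resolution; the wind fields are random placeholders, and the weighting is our illustrative choice.

```python
import numpy as np

def sami(u850, u200, lat, lon):
    """South Asian summer monsoon index: area-weighted mean zonal wind at
    850 hPa minus that at 200 hPa over 40E-95E, 0N-20N
    (after Webster and Yang, 1992)."""
    lat_mask = (lat >= 0.0) & (lat <= 20.0)
    lon_mask = (lon >= 40.0) & (lon <= 95.0)
    sub = np.ix_(lat_mask, lon_mask)
    # Cosine-latitude weights for a proper area average
    w = np.cos(np.deg2rad(lat[lat_mask]))[:, None] * np.ones(lon_mask.sum())

    def wmean(u):
        return float((u[sub] * w).sum() / w.sum())

    return wmean(u850) - wmean(u200)

lat = np.arange(-85.0, 86.0, 10.0)   # 10-degree latitude bands
lon = np.arange(25.5, 360.0, 51.0)   # ~51-degree longitude sectors
rng = np.random.default_rng(1)
u850 = rng.normal(5.0, 2.0, (lat.size, lon.size))   # placeholder winds
u200 = rng.normal(-5.0, 2.0, (lat.size, lon.size))
print(round(sami(u850, u200, lat, lon), 2))
```

On actual model output, one would pass the simulated zonal wind fields at the two pressure levels for each year of the transient run to obtain curves like those in Fig. 8.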
In boreal summer, the latent heat released from the North Indian Ocean surface is transported to the air over the South Asian continent via the Indian monsoon circulation. Figure 10 shows the simulated changes in sea surface temperature (SST) between 9 kyr BP and 0 kyr BP in the North Indian Ocean (40° E-95° E, 0° N-20° N) in experiment AOV for the no-ice scenario (ICE0). Although a gradually increasing SST (Fig. 10c) favors an increase in the latent heat flux to the air over the South Asian continent, the increase of evaporation over the ocean and the stronger moisture transport from the ocean also increase the cloudiness, which may compensate the release of latent heat at the sea surface. Our modeling experiments suggest that the rapid decrease in precipitation and vegetation cover after 5 kyr BP (Fig. 7e, h) could reduce the planetary albedo, increasing the absorption of solar radiation and hence the summer air temperature in this region. The small changes in SST (Fig. 10) in the neighboring North Indian Ocean (about 0.3 °C between 9 kyr BP and 0 kyr BP) can play only a very minor role in the South Asian continental warming.

Comparison with paleoclimate records and within different ice scenarios

Climate proxy records reveal a wetter climate in the North African and Asian monsoon regions in the early Holocene (Hoelzmann et al., 1998; Jolly et al., 1998; Kohfeld and Harrison, 2000; Yu et al., 1998, 2001). After 8 kyr BP, boreal summer insolation gradually declined, and it appears that the monsoon dynamics became more sensitive to other factors, such as transient climatic perturbations and terrestrial feedbacks involving vegetation. In a previous modeling study using CLIMBER-2, Claussen et al. (1999) simulated a decrease of the North African monsoon from the early Holocene toward the present that is consistent with the generally decreasing wet conditions in North Africa; the simulated rapid collapse of the monsoon within several centuries at the end of the middle Holocene is supported by the ocean temperature and terrigenous dust in marine sediment records off western Africa (deMenocal et al., 2000). The abrupt African climate response was attributed to highly non-linear feedbacks linking progressive decreases in regional precipitation, vegetation cover loss, and increasing surface albedo (Claussen et al., 1999). That simulation is similar to our AOV simulation in the ICE0 (no-ice) scenario (Fig. 3c). The sensitivity experiments of simulation AOV in the ice-imposed scenarios (ICE1, ICE2, ICE3, and ICE4) show an earlier and more abrupt termination of the African humid period during the mid-Holocene (see Fig. 3c, i and Fig. 5e, f, h, i for the changes in vegetation cover and summer precipitation), where the effect of the imposed ice resembles the cold event at about 8 kyr BP, when a reduction in rainfall over regions such as North Africa caused a period of centennial-scale aridity (Gasse and Van Campo, 1994; Alley et al., 1997).
The drought conditions (a gradually increasing summer near-surface air temperature and rapidly decreasing summer precipitation and vegetation cover in South Asia, with rapid changes after 5 kyr BP in scenario ICE1 (Fig. 7b, e, h) and 1000-2000 years earlier in scenario ICE4 (Fig. 7c, f, i)) are consistent with paleoclimate evidence of climatic and environmental desiccation during 8-4 kyr BP. A record from Lake Lunkaransar (within the Thar desert, South Asia) shows that a major environmental change led to an abrupt fall in the lake level around 6.4 kyr BP, and the lake was completely dry by around 5.5 kyr BP (Enzel et al., 1999). In the Ganga plain, which is controlled largely by the climatic variability associated with the monsoon rains, river channels were converted into ponds between 8 kyr BP and 6 kyr BP, with fluvial activity in the region ceasing sometime between 7 kyr BP and 5 kyr BP (Srivastava et al., 2003). The arid phase might have intensified around 4-3.5 kyr BP, as has been observed in terrestrial records in the Himalayas (Phadtare, 2000; Chauhan and Sharma, 1996), the western peninsula (Cratini et al., 1994), and northwestern India (Singh et al., 1990). The Indian summer monsoon, as indicated by the percentages of fossil shells of the planktic foraminifer Globigerina bulloides in an upwelling record from the Arabian Sea (Gupta et al., 2005), shows a gradual weakening over the past 8 kyr, with a more or less stable dry phase beginning about 5 kyr BP that coincides with the onset of an arid phase in India (Sharma et al., 2004) and the termination of the Indus Valley civilization (Staubwasser et al., 2003; Gupta, 2004). As in North Africa, the snow and glaciers over the Tibetan Plateau "accelerate" the decreases in summer precipitation and vegetation cover and the increase in summer near-surface air temperature in South Asia, as can be seen by comparing scenarios ICE1 and ICE4 with ICE0 (Fig. 7; Fig. 3e, h).

The faster the ice cover increases over the Tibetan Plateau, the earlier and the more rapidly the decrease in vegetation cover appears (Fig. 7h, i). Comparatively, the simulated mid-Holocene climate change in South Asia (the onset of dry conditions) in the ICE1, ICE2, and ICE3 scenarios is closer to the proxy data; the temporal evolution patterns of these three scenarios are similar (Fig. 3e). In the ICE0 scenario, only small climate changes (gradual decreases in summer near-surface air temperature, precipitation, and vegetation cover) appear (Fig. 7a, d, g). In the ICE4 scenario, the effect of the imposed ice seems too strong, resulting in a much earlier rapid climate change (Fig. 7c, f, i) than in ICE1, ICE2, and ICE3, which is less consistent with the paleoclimatic records.
Summary and concluding remarks

Using the Earth system model of intermediate complexity CLIMBER-2, Holocene climate changes were simulated, forced by variations of the Earth's orbital parameters and by different scenarios of snow and glacier development over the Tibetan Plateau. The simulations show an additional decrease in boreal summer temperature in the mid-Holocene (6 kyr BP) when snow and glaciers are imposed over the Tibetan Plateau, especially in the northern parts of Europe, Asia, and North America. An increase in snow and glaciers over the Tibetan Plateau during the last 9000 years leads to earlier and more rapid climate change in the African-Asian monsoon region, as well as to changes in summer temperature in South Asia and summer precipitation in Southeast Asia. The faster the snow and glaciers increase, the earlier the rapid decreases in boreal summer temperature, precipitation, and vegetation cover in North Africa occur. In contrast, a faster increase in snow and glaciers over the Tibetan Plateau causes an increase in the summer near-surface air temperature in South Asia. The rapid decrease in vegetation in South Asia lags behind North Africa by about 1500 to 2000 years.

The model results suggest that the development of snow and ice cover over the Tibetan Plateau represents an additional important climate feedback, which amplifies the orbital forcing and produces a significant synergy with the positive vegetation feedback. In North Africa, an enhanced snow and glacier fraction over the Tibetan Plateau tends to cause an earlier and more rapid decrease in summer near-surface air temperature, precipitation, and vegetation cover during the early to mid-Holocene, with a parallel decrease in surface absorbed solar radiation and latent heat flux (evapotranspiration) and a rapid increase in surface albedo; these are caused by a strong positive feedback between subtropical vegetation and precipitation that emerges from an interaction between the high albedo of the Saharan sand deserts and the atmospheric circulation and from subsequent changes in the hydrological cycle. In South Asia, the summer near-surface air temperature and vegetation cover evolve differently during the Holocene than in North Africa. The summer near-surface air temperature in South Asia begins to increase almost in step with the increase of snow and glacier cover over the Tibetan Plateau in the early Holocene. This rapid temperature increase is closely related to the rapid decreases in summer precipitation and vegetation cover in the region. The summer precipitation in South Asia decreases more strongly in the experiments with imposed snow and glacier cover over the Tibetan Plateau than in the no-ice simulations during the early to mid-Holocene. Similarly, an earlier (by about 1000-2000 years) and more rapid decrease in vegetation cover in South Asia appears in the experiments with a faster increase in snow and glacier cover over the Tibetan Plateau. The rapid decrease in summer precipitation in South Asia during the early to middle Holocene (9-6 kyr BP) in the ice-imposed scenarios is closely related to the weakening of the South Asian summer monsoon. But the vegetation cover changes more slowly, and the surface albedo remains relatively low, compared to North Africa during 9-6 kyr BP. These correspond to a smooth decrease of the absorbed solar radiation in this region before 5 kyr BP with no ice imposed over the Tibetan Plateau, but to an increase of the absorbed solar radiation in the scenarios with ice imposed over the Tibetan Plateau.
The increased summer near-surface air temperature in the experiments with imposed ice scenarios corresponds to the increases in absorbed solar radiation. Although the surface albedo increases, the planetary albedo decreases with time owing to a strong reduction of cloud cover, which is, in turn, a result of the weakening of moisture convergence caused by the weakening summer monsoon.

Although our modeling experiments have focused on the cooling effect induced by different scenarios of snow and glacier development over the Tibetan Plateau on Holocene climate change in the African and Asian summer monsoon regions, it must be emphasized that, owing to the very coarse spatial resolution of the CLIMBER-2 model (10° in latitude and 51° in longitude in the atmosphere module), the simulated changes in the intensity of the individual monsoon systems should be treated with caution and regarded as qualitative only. For example, in the African summer monsoon region, the simulated 850 hPa winds for boreal summer (JJA) at the present day (0 kyr BP, ICE0) seem to underestimate the intensity of the African summer southwesterly winds (Fig. 1, left panel, lower level), which probably introduces an uncertainty in the summer precipitation in this region. To capture the effects of land surface changes on climate accurately, a more detailed feedback analysis with additional experiments is warranted, and more sensitivity experiments with high-resolution models are necessary.

Fig. 8. Changes in the South Asian summer monsoon index (SAMI) (m/s) in the transient simulation AOV for two scenarios (ICE0, ICE4). The SAMI is defined as the difference between the westerlies averaged at 850 hPa and at 200 hPa over the South Asian monsoon area (40° E-95° E, 0°-20° N), following the Webster-Yang index (Webster and Yang, 1992) and modified according to the spatial resolution of CLIMBER-2.
The synergistic effects of polyphenols and intestinal microbiota on osteoporosis

Osteoporosis is a common metabolic disease in middle-aged and elderly people. It is characterized by a reduction in bone mass, a compromised bone microstructure, heightened bone fragility, and an increased susceptibility to fractures. The dynamic imbalance between osteoblast and osteoclast populations is a decisive factor in the occurrence of osteoporosis. With the aging of the population, the incidence of osteoporosis and the associated disability and mortality have gradually increased. Polyphenols are a fascinating class of compounds that are found in both food and medicine and exhibit a variety of biological activities with significant health benefits. As components of food, polyphenols not only provide color, flavor, and aroma but also act as potent antioxidants, protecting our cells from oxidative stress and reducing the risk of chronic disease. Moreover, these natural compounds exhibit anti-inflammatory properties, which aid in regulating the immune response and may alleviate the symptoms of diverse ailments. The gut microbiota can degrade polyphenols into more absorbable metabolites, thereby increasing their bioavailability. Polyphenols can in turn shape the gut microbiota and increase its abundance. Therefore, studying the synergistic effect between the gut microbiota and polyphenols may help in the treatment and prevention of osteoporosis. By examining how the gut microbiota can enhance the bioavailability of polyphenols and how polyphenols can shape the gut microbiota and increase its abundance, this review offers information and references for the treatment and prevention of osteoporosis.

Introduction

With the continuing aging of the population, osteoporosis (OP) has become one of the top three chronic diseases (1). Osteoporosis is a chronic disorder characterized by the deterioration of the bone tissue microstructure and loss of bone mass, primarily attributed to the up-regulation of osteoclasts (2). Osteoblasts are essential cells for bone growth and maintenance, as they form bone tissue (3). Studies have shown that hormonal imbalance and local oxidative inflammation in vivo can affect the dynamic balance between osteoclasts and osteogenesis (4), manifested mainly as degradation of the bone microstructure, reduction in bone mass, and decrease in bone strength (5). The decline in the osteogenic differentiation of bone marrow mesenchymal stem cells and in intraosseous angiogenesis occurs simultaneously, leading to increased bone fragility and susceptibility to fractures (6, 7).

Dietary phenols and polyphenols are remarkably abundant and occur across a wide range of plants in nature. At present, more than 8,000 phenolic structures are known, of which more than 4,000 are flavonoids (8). Chemically, polyphenols are compounds possessing phenolic structural features; this diverse class of natural products encompasses various subgroups of phenolic compounds. Rich sources of polyphenols include fruits, vegetables, and whole grains, as well as other foods and drinks such as tea, chocolate, and wine.
In addition, most plant polyphenols occur as glycosides, with sugar units attached at different positions of the polyphenol skeleton and with variable acylation of the sugars. Based on their chemical structure, polyphenolic aglycones can be categorized into phenolic acids, flavonoids, and polyphenolic amides (9). For example, quercetin is a well-known flavonol that can be found in a variety of food sources, and it occurs mainly as glycosides (10).

Turmeric has been used throughout history as a spice, herb, and dye, and it is widely used worldwide as an ingredient in curry powder. In recent decades, numerous studies have demonstrated the extensive array of advantageous properties associated with curcumin, including anti-inflammatory, antioxidant, hypoglycemic, wound-healing, antibacterial, and antitumor activities (11). Curcumin is an important bioactive substance that occurs mainly in the rhizome of turmeric (12).

The phenolic hydroxyl structure of plant polyphenols confers antioxidant activity, comprising both direct and indirect antioxidant effects (13). In addition, polyphenols also counter osteoporosis through mechanisms such as anti-inflammatory action and the promotion of bone formation (14). Most natural polyphenols must be absorbed and utilized under the action of specific gut microbiota, and the phenolic metabolites may have activities that are not present in the original compounds.

Research has revealed that polyphenols interact with the gut microbiota, thereby enhancing the functionality of the intestinal mucosal mechanical barrier (15). Polyphenols can change the composition of the gut microbiota, which can improve the function of this barrier. Studies have revealed that resveratrol, a natural polyphenol found in plants, may affect the intestinal barrier by inhibiting the growth of harmful bacteria and fungi, regulating the expression of tight junction proteins, and balancing pro-inflammatory and anti-inflammatory T cells; these mechanisms help to control the growth of pathogens, maintain the integrity of the cellular barrier, and thereby preserve proper intestinal barrier function (16, 17). The activation of the PI3K/Akt-mediated Nrf2 signaling pathway by resveratrol protects IPEC-J2 cells from oxidative stress, preventing damage to the intestinal barrier (18, 19).
On the other hand, recent studies have revealed that tea polyphenols can prevent disturbance of the gut microbiota by regulating its composition (20, 21). Evidence suggests that epigallocatechin-3-gallate, the principal active component of green tea, has the potential to alleviate inflammatory bowel disease primarily by targeting bacteria responsible for producing short-chain fatty acids (SCFAs), including Akkermansia (22, 23). These bacteria produce functional SCFAs, which contribute to beneficial changes in the gut microbiome. These changes lead to increased production of protective SCFAs, such as butyrate, which trigger significant antioxidant, anti-inflammatory, and barrier-strengthening responses, ultimately reducing inflammation and damage in the gut (23, 24). Additionally, polyphenols play a "prebiotic" role in the gut, supporting the growth of beneficial bacteria. While the impacts of various plant polyphenols on the gut microbiota may vary, the majority of them stimulate the proliferation of beneficial bacteria (25, 26). It is worth noting that the health benefits of most plant polyphenols are achieved through a "two-way interaction" with intestinal microorganisms.

The gut microbiota is the body's "second largest gene pool" and comprises the symbiotic, commensal, and disease-causing microbes that live in our gut (27). The gut microbiome contains approximately 1,200 bacterial species, with the main representative groups being Bacteroidetes, Firmicutes, Actinobacteria, Proteobacteria, and Myxococcus (28). The metabolites of the gut microbiota can act on the human gut, thereby helping to regulate and prevent many diseases. Among these factors, intestinal microbes crucially influence the balance of bone health by exerting effects on host metabolism, immune function, hormone secretion, and the gut-brain axis (29, 30). These interactions can contribute to the development of osteoporosis.

The intestinal barrier function is significantly influenced by the interplay between the gut microbiota and the immune system. The gut microbiota (GM) forms various symbiotic relationships with the host, including parasitic, commensal, and mutualistic relationships. Under normal physiological conditions, the gut microbiota contributes to food digestion, combats pathogens, and aids in the development of the host immune system, particularly during the early post-natal period. Throughout the lifespan, the gut microbiota interacts with the host, playing a role in modulating both gut and systemic immunity (31, 32).
The interplay between immune cells and bone cells is closely intertwined, with the gut microbiota playing a vital role in maintaining bone health through its influence on bone turnover and density (33). By producing metabolites, intestinal microorganisms influence and regulate intestinal barrier function. A normal intestinal barrier is important for isolating harmful substances, facilitating nutrient absorption, and providing immune protection. Impaired intestinal barrier function is considered one of the pathogenic factors contributing to osteoporosis. The intestinal mucosal barrier is made up of four components: a mechanical barrier, a chemical barrier, an immune barrier, and a biological barrier. Together, these barriers prevent harmful substances such as toxins and bacteria from entering the body through the intestinal mucosa (34). When the intestinal mucosal barrier is compromised, intestinal permeability increases; this can lead to bacterial and endotoxin translocation, which can trigger or worsen systemic inflammation and multiple organ dysfunction. Intestinal epithelial cells are closely arranged via cell junctions composed of tight junctions, adherens junctions, and desmosomes, which effectively block the entry of bacteria, viruses, and endotoxins and are essential for nutrient absorption and immune function (35). The chemical barrier consists of gastric acid, bile, a wide range of digestive enzymes, lysozyme, mucin, and bacteriostatic substances produced by commensal bacteria residing in the intestinal cavity; it inactivates pathogenic microorganisms (36). The immune barrier is composed of intestinal mucosal lymphoid tissue, immune cells, and antibodies secreted onto the surface of the intestinal mucosa, which induce local and systemic immune responses and protect the intestinal tract from damage by foreign antigens and abnormal immune responses (37). The biological barrier is mainly composed of the normal gut microbiota, the intestinal resident flora that provides colonization resistance against foreign strains. When the stability of this microflora is disrupted, intestinal colonization resistance is significantly diminished, increasing the risk that potential pathogens, including opportunistic pathogens, colonize and invade the gut (38). Dysfunction of the gut microbiota can lead to impaired intestinal barrier function, causing the absorption of harmful substances and inflammation and ultimately resulting in bone loss, inhibited osteoblast growth, and increased osteoclast activity (39, 40). Chronic inflammatory diseases and immune dysfunction have been associated with a higher incidence of osteoporosis, primarily attributed to the excessive production of pro-inflammatory cytokines that stimulate osteoclastic activity. Consequently, GM disorders weaken intestinal barrier function, and enhanced immune system reactivity contributes to the entry of harmful substances into the body (39), thereby promoting the production of factors that activate osteoclasts and lead to bone resorption, ultimately causing osteoporosis. Therefore, the gut microbiota can affect both bone formation and bone resorption (Figure 1).

Figure 1. Gut microbiota act on bone, leading to bone resorption or inhibiting osteoporosis mechanisms. Bone formation and bone resorption are key factors in the pathogenesis of osteoporosis. Intestinal microbes can affect bone growth by regulating intestinal homeostasis, for example by reducing oxidative stress, increasing antimutagenic activity, enhancing intestinal barrier function, and regulating the immune response. Intestinal microbes maintain the homeostasis of the intestinal environment and play a role in the prevention and treatment of osteoporosis.
Through the interaction of polyphenols with the gut microbiota, intestinal barrier function can be enhanced, while the richness and activity of the gut microbiota increase (41, 42). The gut microbiota converts food polyphenols into more bioavailable microbial metabolites. Therefore, under the synergistic effect of the two, the effect of each on the treatment of osteoporosis is maximized (43).

2 Interactions between polyphenols and gut microbiota

2.1 Effects of GM on foodborne polyphenols

Polyphenols are renowned for their antioxidant properties and are frequently used in the treatment of diverse diseases. Their metabolic degradation in the body is influenced by the GM (43).

Polyphenols, when consumed through food, are present in the form of glycosides and complex oligomeric structures. In the human body, these complex structures undergo sequential metabolism. After ingestion, some polyphenols are minimally absorbed in the stomach, primarily as phenolic acids (44). Only a small fraction (5-10%) of polyphenols is absorbed in the small intestine, primarily as free polyphenols (45). Under the influence of the intestinal microbial flora, the polyphenols that remain unabsorbed, especially the bound ones, are transported to the colon, where they undergo decomposition, release, and subsequent absorption. Figure 2 shows the absorption and metabolism of foodborne polyphenols.

Figure 2. Absorption and metabolism of foodborne polyphenols. Intestinal enzymes and the gut microbiota are involved in the metabolism and absorption of polyphenols in the intestine. Once converted, the polyphenols travel to the liver through the portal vein, where they undergo two metabolic stages, resulting in different metabolic compounds. These compounds then enter phase II metabolism in the circulatory system, where sulfate, glucuronide, and methyl conjugates are produced. These conjugates can be detected in urine several days after ingestion.

Polyphenols exhibit a range of structural variations that influence their bioavailability. Upon ingestion, these compounds tend to accumulate in the large intestine, where they undergo extensive metabolism by the gut microbiota. The microbiota transforms polyphenols into metabolites, making them bioactive. The extended retention of polyphenols in the intestines can in turn benefit the gut microbiota; conversely, the gut microbiota plays a crucial role in enhancing the biological activity of polyphenols by converting them into active phenolic metabolites (46). Polyphenols and other compounds are biotransformed by various bacterial species, including Bifidobacterium, Lactobacillus, Escherichia coli, Bacteroides, and Eubacterium, resulting in the production of short-chain fatty acids (SCFAs) and other metabolites (47). SCFAs play a significant role in reducing the pH of the intestines, suppressing the growth of harmful pathogens, and facilitating the optimal absorption of minerals and vitamins. The metabolism of polyphenols in the intestines is carried out by microorganisms, which break the polyphenols down through hydrolysis, cleavage, and reduction mechanisms (48). Phenolic compounds in food and herbal products exist as conjugates and require hydrolysis for absorption (49). Phenolic compounds like hesperetin, daidzein, ellagic acid, caffeic acid, and secoisolariciresinol must undergo hydrolysis to produce phenolic aglycones that can be absorbed (50). The resulting metabolites can follow two routes: further metabolism in the gut or direct absorption. During the cleavage process, the carbon ring is opened and C-C bonds are broken, while methyl ethers are removed through demethylation. Hydrolases release the aglycones, which are subsequently broken down through the cleavage of the flavonoid carbon ring, the removal of ellagic acid esterification through lactone ring opening and decarboxylation, and the cleavage of the quinic acid ring from chlorogenic acid (50). C-ring cleavage converts the isoflavone daidzein to O-desmethylangolensin and the flavonoid hesperidin to 3-(3′-hydroxy-4′-methoxyphenyl)hydroxyacrylic acid.
Collectively, these reactions transform non-absorbable oligomeric procyanidins into easily assimilable phenolic acid molecules, such as derivatives of hydroxyphenylacetic acid, hydroxyphenylpropionic acid, and hydroxyphenylvaleric acid. During the reduction process, the gut microbiota also catalyzes various reduction reactions of polyphenols (51). The transformation of caffeic acid into 3,4-dihydroxyphenylpropionic acid is a typical hydrogenation reaction (52). Targeted dehydroxylation can yield monohydroxy derivatives, and conversion of the aliphatic side chain results in the formation of phenylacetic acid, benzoic acid, and decarboxylated metabolites.

Studies have also shown that gut bacteria can metabolize resveratrol precursors, such as piceid, into resveratrol, thereby increasing its bioavailability; Bifidobacterium and Lactobacillus acidophilus are two specific bacteria responsible for producing resveratrol from piceid (53). Resveratrol can also undergo glycosylation in the gut, resulting in its transformation into piceid, which can then be absorbed in both its free and conjugated forms, the latter referred to as piceid glucuronide. It is evident that the polyphenol metabolites produced by the gut microbiota exhibit a higher level of activity and are more efficiently absorbed.

2.2 Effects of polyphenols and their metabolites on GM

Polyphenols can impact the gut microbiota in two ways: by promoting the proliferation of beneficial bacteria and by increasing their abundance. Polyphenols can mimic prebiotics and change the composition of the human gut microbiota, as shown by numerous studies, both in vitro investigations using human gut microbiota and in vivo clinical trials. Foods rich in polyphenols have consistently been proven to modify the gut microbiota effectively. They achieve this by promoting the
proliferation of beneficial bacteria such as Lactobacillus and Bifidobacterium. For example, cocoa polyphenols have been shown to regulate the composition of the gut microbiota through a probiotic mechanism (54): they can stimulate the growth and proliferation of beneficial gut bacteria, including Lactobacillus and Bifidobacterium, while diminishing the population of harmful bacteria such as Clostridium perfringens.

Some tannin catabolites of the gut microbiota may have "prebiotic" activity; for example, the urolithins produced from pomegranate ellagitannins actively modulated lactic acid bacteria, bifidobacteria, and enterobacteria in a rat model of intestinal inflammation in preclinical studies (55).

In a human study, a particular subset of the population (16 of 20 subjects) exhibited a higher abundance of Akkermansia muciniphila in their gut microbiota both before and after the intervention; notably, these individuals were capable of producing urolithin A (56). Another study by the same research group further concluded that the consumption of pomegranate extract promotes the abundance of A. muciniphila (57).

In addition to the probiotic effects described above, polyphenols can modulate the gut microbiota in a way that promotes the growth of beneficial strains, thereby positively affecting the overall health of the host, and the metabolites derived from polyphenols can enhance gut health and exhibit anti-inflammatory properties. For example, the bioactive metabolites of cocoa can enhance gut health, show anti-inflammatory effects, act positively on the immune system, and reduce the risk of various diseases (58). The intake of polyphenols may thus improve the health effects of the gut microbiota by promoting the production of short-chain fatty acids, enhancing intestinal immune function, and supporting other physiological processes.
A majority of studies have consistently demonstrated that polyphenols can induce favorable alterations in the composition of the gut microbiota. Specifically, when individuals consume a diet rich in polyphenols, notable changes occur in the human gut microbiota: the numbers of Lactobacillus, Bifidobacterium, Akkermansia, Enterococcus, and Bacteroides increase, while the ratio of Enterococcus, Clostridium, and Firmicutes to Bacteroidetes significantly decreases. As an example, red wine, which is rich in polyphenols, has been found to stimulate the growth of certain bacterial species, such as Bacteroides and Roseburia intestinalis, in the gut microbiota (59). Some polyphenols found in fermented papaya juice, such as gallic acid and caffeic acid, can affect the composition of the intestinal microorganisms: studies have shown that these polyphenols can decrease the numbers of harmful bacteria, like Enterococcus, Clostridium perfringens, and Clostridium difficile, while promoting the growth of beneficial bacteria like Bifidobacterium. Moreover, certain polyphenols can also encourage the growth of fungi in the gut microbiota (60). Consuming grape seeds rich in procyanidins can increase the numbers of Lactobacillus, Clostridium, and Ruminococcus in the gut (61). Vegetables, like fruits, are rich in polyphenols and prebiotic fiber. Dietary polyphenol intake from carrots can increase Bacteroides and Lactobacillus and decrease the numbers of bacteria such as Clostridium perfringens, Clostridium coccoides, Bacteroides coccoides, and Enterobacterium ecium (62). In a study by Xu Song et al., the impact of resveratrol on the intestinal biological barrier was investigated using 16S rRNA and metagenomic sequencing analyses. The findings revealed that resveratrol had a positive effect on the diversity and structure of the gut microbiota: specifically, it increased the abundance of probiotic bacteria and regulated the function of the gut microbiota to counteract immunosuppression (63). Monica Maurer Sost et al. evaluated the impact of citrus fruit extracts containing the polyphenols hesperidin and naringin on the composition and activity of the gut microbiota using an in vitro dynamic colon model. Their findings revealed that hesperidin led to a dose-dependent increase in the abundance of Roseburia, Eubacterium ramulus, and Bacteroides eggerthii (64).
Tart cherry polyphenols underwent an in vitro bacterial fermentation assay and were subsequently assessed using 16S rRNA gene sequencing and metabolomics. In vitro, tart cherries were found to stimulate a significant rise in Bacteroides, possibly attributable to the presence of polysaccharides (65). In the human study, the consumption of tart cherries was linked to two distinct and contrasting responses, which were associated with the initial levels of Bacteroides (65). Individuals with a high initial abundance of Bacteroides in their gut microbiota tended to exhibit a specific response to tart cherry juice consumption: a decrease in Bacteroides populations and an increase in fermentative Firmicutes, along with an increase in Collinsella, which has the potential to metabolize polyphenols. Individuals with a low initial abundance of Bacteroides exhibited a different response: an increase in Bacteroides or Prevotella as well as in Bifidobacterium, and a decrease in the abundance of Lachnospiraceae, Ruminococcus, and Collinsella, as indicated by the 16S rRNA gene sequencing and metabolomics analyses (65, 66).

Polyphenols likewise represent one of the primary bioactive compounds of legumes. In vitro research has demonstrated that germinated lentil seeds harbor potent antimicrobial compounds, including cysteine-rich peptides, which are active against detrimental microorganisms such as E. coli and Staphylococcus aureus. In a study focused on mung bean coats, researchers performed simulated digestion and colonic fermentation in vitro to investigate the liberation of polyphenols from the mung bean coat and to assess their bioactive properties, aiming to understand how these polyphenols are digested and fermented within the gastrointestinal tract and how they may affect human health. During colonic fermentation, a noteworthy enhancement in the relative abundance of beneficial bacteria, particularly Lactococcus and Bacteroides, was observed, implying that the polyphenols released during the fermentation of mung bean coats may favor the growth and multiplication of these beneficial bacterial populations within the colon (67).

Studies exploring the effects of red wine polyphenols on the intestinal microbiota have observed an increase in the concentration of specific bacterial genera in the intestines; in particular, the genera Clostridium, Bacteroides, Enterococcus, and Bifidobacterium are positively influenced. The findings indicate that the intake of red wine polyphenols could have advantageous effects on the composition of the intestinal microbiota (68), that is, accelerating the growth of "phoenixes", Klebsiella, Bacillus, Bordetella, and Staphylococcus, while reducing the growth of Bacteroides, Clostridium, anaerobic cocci, and Bifidobacterium.
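Many of the findings cited in this section are expressed as relative abundances and as ratios such as Firmicutes to Bacteroidetes derived from 16S rRNA count tables. A minimal pandas sketch of that computation is shown below; the phylum counts and sample names are invented for illustration only.

```python
import pandas as pd

# Hypothetical phylum-level 16S rRNA read counts (rows are samples)
counts = pd.DataFrame(
    {"Firmicutes":     [5200, 3100],
     "Bacteroidetes":  [2600, 3900],
     "Actinobacteria": [700, 800],
     "Proteobacteria": [500, 600]},
    index=["control", "coffee_grounds"])

rel = counts.div(counts.sum(axis=1), axis=0)         # relative abundances
fb_ratio = rel["Firmicutes"] / rel["Bacteroidetes"]  # F/B ratio per sample
print(fb_ratio.round(2))  # a drop in the ratio mirrors the reported shift
```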
The incorporation of tea polyphenols, specifically catechins, into a culture medium containing human fecal bacteria was observed to lead to a reduction in the levels of harmful bacteria, including E. coli, Clostridium perfringens, and Bacteroides. This finding suggests that the introduction of tea polyphenols into the gut environment may help to combat the proliferation of these pathogenic bacteria, and it highlights the positive influence of tea polyphenols on the balance of the gut microbiota. Ma et al. researched the impact of green tea polyphenols on the redox status of the intestine and its correlation with the gut microbiota (69); Spirochaetaceae and Bacteroides were identified as biomarkers of intestinal redox status, revealing the benefits of tea polyphenols. The polyphenol compounds in oolong tea are mainly catechins, which can increase the numbers of the genera Bacteroides, Bifidobacterium, and Lactobacillus. In a 2015 study, researchers examined the potential impact of saponins found in herbal teas on the gut microbiota of mice (25, 70). In the treatment groups, the administration of ginseng, red ginseng, Panax San Qi, and ginsenosides led to noticeable increases in Enterococcus, Lactobacillus, and Bifidobacterium. In addition, a significant increase in the Firmicutes/Bacteroidetes ratio was observed after consumption of Asarum and San Qi. Consumption of Hypericum tea also increased the growth of aromatic cocci. The main phenolic substances in coffee are flavanols and chlorogenic acids (71). When male Wistar rats were fed coffee grounds, the numbers of Ruminococcaceae, Muribaculaceae, and Lachnospiraceae increased, while the ratio of Firmicutes to Bacteroidetes decreased (72). Nuts are rich in polyphenols, mainly persimmonic acid and procyanidins (73). Increasing the intake of nut polyphenols can enhance the probiotic effect and benefit the gut microbiota. Ellagitannins are metabolized into urolithins, which circulate in the plasma, thereby increasing the numbers of Bifidobacterium and Lactobacillus (74). Polyphenols thus play a probiotic role in the gut, shaping the gut microbiota and interacting with it. Table 1 summarizes recent research findings on the effects of specific polyphenols and/or polyphenol-containing dietary sources on gut microbiome composition.

Table 1. Effects of specific polyphenols and polyphenol-containing dietary sources (such as green tea polyphenols: catechins, flavonoids, and flavonols) on gut microbiome composition, listing for each study the experimental method, the changes in the microbiota, and the reference.

3 Synergistic effect of polyphenols and GM to treat OP

3.1 The role of polyphenols in the treatment of osteoporosis

In addition to short-chain fatty acids, polyphenols can be metabolized into different substances, such as phenolic acids, glucuronides, and sulfates. Apigenin, a flavone commonly found in fruits and vegetables, can be metabolized into p-coumaric acid by the gut microbiota. The metabolism of apigenin by the gut microbiota highlights the role of these microorganisms in breaking down dietary compounds and generating metabolites with potential health benefits. p-Coumaric acid itself possesses antioxidant and anti-inflammatory properties, and its production through apigenin metabolism adds to the overall beneficial effects of flavonoid consumption on human health (82). In cell line studies, 4-hydroxycinnamic acid exhibited notable anti-inflammatory activity in LPS-stimulated macrophage cells; specifically, it was observed to inhibit the activity of inducible nitric oxide synthase (iNOS), an enzyme involved in the production of nitric oxide, which plays a role in inflammation. This suggests that 4-hydroxycinnamic acid may have potential as an anti-inflammatory agent by modulating the iNOS pathway in immune cells.
Nevertheless, additional research is required to corroborate these findings and to investigate possible therapeutic uses of 4-hydroxycinnamic acid. Research conducted on rat femoral tissue showed that 4-hydroxycinnamic acid increased calcium content and affected bone metabolism in vitro, suggesting that it holds potential benefits for osteoporosis and overall bone health. However, in vitro findings offer only preliminary evidence, and further research in animal models and human clinical trials is necessary to confirm these effects and to fully understand the impact of 4-hydroxycinnamic acid on bone health in humans (83).
Research has shown that daidzein, a key soy isoflavone in our diet, can be converted to equol by certain gut microorganisms, and this conversion has been linked to positive health effects in individuals who produce equol. In women with osteopenia, taking red clover extract (RCE) with probiotics twice daily for a year has been found to effectively reduce the bone mineral density loss caused by estrogen deficiency. Additionally, phlorizin, a natural compound found in several fruit trees, is a dietary component (84); its metabolites are phloretin and, further downstream, phloretic acid and phloroglucinol. Phloretin and its derivatives, primarily in glycosyl forms, are naturally occurring dihydrochalcones found in fruits such as apples, kumquat, pear, and strawberry, and in various vegetables (85).
The osteoprotective effects of phloretin, a dihydrochalcone present in apple tree leaves, were examined in ovariectomized (OVX) C57BL/6 female mice to assess its potential for preventing bone loss (86). The researchers discovered that phloretin modulated the ASK-1-MAPK signal transduction pathway, resulting in the transcription of apoptotic genes; this mechanism effectively prevented the osteoclastic resorption induced by estrogen deficiency, highlighting the potential of phloretin in mitigating bone loss (87). In conclusion, phlorizin metabolites play an important role in regulating bone dynamics and increasing bone mineral density and content.
Genistein is a secondary metabolite commonly found in leguminous plants, seeds, fruits, and vegetables. It belongs to the class of compounds known as isoflavones and exhibits phytoestrogenic activity: as a phytoestrogen, genistein can mimic the structure or function of 17β-estradiol, a naturally occurring estrogen in mammals. Several cross-sectional studies have indicated that higher dietary intake of phytoestrogens such as genistein is associated with increased bone mineral density (BMD) in postmenopausal women, although it is worth noting that these effects were primarily observed in postmenopausal Chinese women and not in premenopausal women. The exact mechanisms through which genistein influences bone health are still being investigated; it is believed that genistein may modulate the estrogen receptor pathway and exert estrogen-like effects on bone tissue, leading to potential benefits for bone density. Additional research, including prospective studies and clinical trials, is necessary to gain a more comprehensive understanding of the association between genistein consumption and its effects on bone health in diverse populations (88).
In studies conducted on ovariectomized (OVX) rats, genistein administered orally at a dose of 10 mg/kg for 12 weeks has been shown to stimulate bone formation and to inhibit bone resorption (89). Most of the intestinal metabolites of polyphenols have anti-inflammatory and antioxidant effects, and they therefore have an important role in the treatment of osteoporosis.
Anti-oxidative stress
Oxidative stress occurs when there is an imbalance between the production and elimination of reactive oxygen species (90, 91). Excessive reactive oxygen species can cause cell damage and apoptosis, affect cell function, and trigger disease (74). Oxidative stress can impair the functioning of bone marrow-derived mesenchymal stem cells, thereby influencing both bone growth and the osteogenic differentiation of mesenchymal stem cells. Consequently, this can result in impaired osteoblast function and accelerated formation and differentiation of osteoclasts (92, 93).
However, the presence of antioxidants can protect cells against damage induced by reactive oxygen species. Polyphenolic compounds contain a large number of phenolic hydroxyl groups that act as hydrogen donors to reduce singlet oxygen to the less active triplet oxygen, thereby reducing the probability of oxygen radical generation and terminating the chain reactions triggered by free radicals (94). In addition, they can scavenge free radicals and protect biological macromolecules from free radical damage (95). There is research evidence that intake of natural berries rich in dietary polyphenolic compounds, such as cranberries and blueberries, can combat oxidative stress by scavenging free radicals, and can help prevent and treat osteoporosis (96).
Anti-inflammatory effects
Polyphenolic compounds exert anti-inflammatory effects by negatively regulating inflammatory pathways, especially through their regulation of the key NF-κB transcription factor (TF) (97, 98). Estrogen receptors can engage in protein interactions with NF-κB, leading to the formation of complexes and subsequent binding of NF-κB to specific response elements. These response elements regulate the transcription of NF-κB-dependent genes in a cell type-specific manner and are crucial in modulating inflammatory processes (99).
For example, tea polyphenols (TP) can inhibit lipid peroxidation and combat oxidative stress by regulating the transcription factor NF-κB and acting on estrogen receptor and ERK signaling in HMC-1 cells. Impaired expression of inducible nitric oxide synthase reduces the production and release of the inflammatory factors TNF-α, IL-6, IL-8, and NO. This process, in turn, brings about anti-inflammatory effects and helps mitigate bone loss (100).
In addition, prune and its polyphenolic compounds have been shown to inhibit bone resorption by down-regulating receptor activator of NF-κB ligand (RANKL), and to directly inhibit the generation of osteoclasts by down-regulating NFATc1 and inflammatory mediators, thereby reducing osteoclastic activity (101, 102). Under lipopolysaccharide (LPS)-induced inflammatory conditions, polyphenols extracted from plums at concentrations of 10, 20, and 30 mg/mL inhibited the expression of cyclooxygenase and the production of nitric oxide (NO) in osteoblastic progenitors; the inhibition was achieved by down-regulating the expression of inducible nitric oxide synthase (103). In the presence of RANKL, these polyphenolic compounds simultaneously stimulated bone formation and suppressed the generation of NO and tumor necrosis factor (TNF)-α (104, 105). TNF-α production increased over time in response to oxidative stress stimulation, and dried plum polyphenols were able to reduce the differentiation of bone-resorbing cells under normal conditions as well as under inflammatory and oxidative stress conditions (106).
Activation of the Wnt/β-catenin pathway
The Wnt signaling pathway plays a critical role in both bone development and the maintenance of metabolic homeostasis (107). Wnt-related proteins or factors can bind to the Frizzled receptor (Fzd) and initiate downstream intracellular cascades, thereby regulating the transcription or expression of target genes such as β-catenin, peroxisome proliferator-activated receptor γ (PPARγ), and RUNX2 (Runt-related transcription factor 2) (108, 109), which in turn regulate the physiological processes of osteoblast formation, differentiation, and maturation. β-Catenin is a pivotal factor in the classical pathway, serving as the central regulator of Wnt/β-catenin signaling; it can enhance the activity of alkaline phosphatase (ALP) while promoting bone formation. Runx2 is an osteoblast-specific transcription factor closely related to the proliferation and differentiation of osteoblasts.
For example, icariin and resveratrol have been widely used in the prevention and treatment of OP (110), with potential applications in regulating the osteogenic differentiation of BMSCs, preventing bone loss, and promoting bone regeneration. In a study by Wei et al., icariin intervention in rat bone marrow stromal cells increased total β-catenin and its nuclear translocation by stimulating β-catenin activation, and the expression of Wnt signaling members (β-catenin, Lef1, TCF7, c-jun, c-myc, and cyclin D) was significantly upregulated (111). Moreover, activation of ERα enhanced the expression of osteogenic genes, thereby promoting both the proliferation and the osteogenic differentiation of BMSCs. Similarly, resveratrol, one of the active components of Polygonum cuspidatum (knotweed) and Veratrum species, has estrogen-like effects. Researchers have intervened in OVX rats through the resveratrol-mediated Wnt/β-catenin pathway (112): by stimulating the expression of Runx2, resveratrol down-regulates GSK-3β, preventing effective formation of the β-catenin degradation complex and ensuring the stable accumulation of β-catenin in the cytoplasm and its translocation to the nucleus. This activates the Wnt/β-catenin pathway, promotes osteogenic differentiation, and enhances bone density, thereby helping to prevent and treat OP.
Inhibition of the NF-κB pathway
The NF-κB signaling pathway has a significant role in bone metabolism and can also interact with other signaling pathways to affect the progression of osteoporosis (113, 114). The pathway is mainly composed of IκB proteins, the core IκB kinase (IKK) complex, and NF-κB. Proteasomal degradation of IκBα exposes the nuclear translocation signal of the NF-κB/p65 subunit, allowing NF-κB to enter the nucleus, bind related genes, and initiate their transcription. Through this pathway, NF-κB regulates bone metabolism and influences the skeletal system.
For example, Lin et al. applied paeoniflorin to an osteoclast model differentiated from the RAW 264.7 cell line to observe its effect on osteoclast signaling. The results suggested that paeoniflorin weakened the phosphorylation of p65 NF-κB, that is, it inhibited activation of the NF-κB signaling pathway (115, 116). The NF-κB pathway is one of the main biological pathways of osteoclast differentiation (117). It has been verified that, by inhibiting the activation of p65 NF-κB, paeoniflorin reduces the activity of the NF-κB signaling pathway, reduces osteoclast activity and bone resorption, and thus helps maintain bone homeostasis (118). Furthermore, Wang used osthole (OST) to interfere with osteoclasts and studied its mechanism of action on them (119). The experimental findings demonstrated down-regulation of NFATc1, CTSK, MMP-9, TRAP, and p-IκB expression, with up-regulation of p65 NF-κB expression; suppressing signaling through the NF-κB pathway can inhibit the further differentiation of osteoclasts and largely narrow the activity gap between osteoblasts and osteoclasts. This is the molecular mechanism of the anti-osteoporotic action of OST via the NF-κB pathway. Both peony and Cnidium seed ('snake bed seed', the source of osthole) are traditional Chinese medicines containing polyphenols; by inhibiting the NF-κB signaling pathway, they suppress the production of osteoclasts and thereby achieve a therapeutic effect in osteoporosis (116) (Figure 3).
Polyphenols promote bone formation and inhibit bone resorption
Researchers have extensively studied the beneficial effects of polyphenols, which are known to enhance bone formation and suppress bone resorption. During bone formation, osteoblasts play a vital role in the synthesis and secretion of crucial components of the bone matrix, including collagen and glycoproteins (120). Studies of MC3T3-E1 cells, SaOS-2 cells, D1 cells, NRG cells, osteosarcoma cells, and other cell models have confirmed that tea polyphenols can enhance the activity of alkaline phosphatase (ALP) (121), increase bone mineral formation and the mineralized area, and improve bone mineral density, thereby promoting bone formation. Bone resorption occurs when hematopoietic stem cell-derived pre-osteoclasts transform into osteoclasts in the presence of M-CSF, RANKL, and other cytokines (122). These cytokines can induce osteoclasts to undergo cell polarization, leading to their active involvement in the process of bone resorption.
According to one study, MGF can impede the differentiation of pre-osteoclastic bone marrow macrophages (BMMs) induced by M-CSF and RANKL, preventing their transformation into TRAP-positive multinucleated cells (123), i.e., osteoclasts.
This suggests that MGF inhibits the differentiation of BMM macrophages while promoting the expression of ER-β mRNA. MGF also promotes the proliferation and differentiation of the osteoblast precursor cell line MC3T3-E1 through Runx2; MGF may therefore promote bone formation by osteoblasts, thereby regulating the balance between osteoblast and bone-resorbing cell functions (124).
The role of GM in the treatment of OP
Recent research has revealed a significant link between the gut microbiota and osteoporosis. The gut microbiota regulates bone homeostasis and can affect osteoporosis through various mechanisms, including modulation of its metabolites, influences on host metabolism, altered drug metabolism, and regulation of the integrity of the gut barrier (125). Numerous studies have highlighted gut microbiota-associated alterations in bone collagen properties, which are closely linked to bone fragility; these changes encompass variations in biochemical properties and protein structure, emphasizing the significant role of the gut microbiota in bone health. Supplementation with specific probiotics in mouse models of osteoporosis improved bone density and enhanced bone heterogeneity. In addition, quercetin fights osteoporosis by regulating the level of short-chain fatty acids (SCFAs), improving the bone microenvironment, and restoring the integrity of the intestinal mucosa (126). Another study, by Zhang et al., showed that fecal microbiota transplantation (FMT) alleviated bone loss in ovariectomized osteoporotic mice by regulating gut microbiota composition and metabolic function (127).
The gut microbiome plays a crucial role in maintaining bone health by influencing the immune system, which is closely connected to bone cells. It achieves this by using the host's fully developed immune system to regulate responses throughout the body, thus controlling bone turnover and density. The gut microbiota improves bone health, enhances calcium absorption, and regulates serotonin production in the gut; serotonin interacts with bone cells and is considered a bone regulator (128).
The gut microbiota initially varies but stabilizes quickly as the immune system responds to environmental factors. Its composition changes with age, with great variability in the elderly (>65 years) (129). The gut microbiota offers many possible antigens to the host immune system. Under normal conditions, a harmonious relationship exists between the host and the commensal bacteria, which aid in food digestion and protect against invading pathogens (130). In conditions where the host's ability to control the entry of gut microbes is compromised, certain species may invade host tissues and cause disease. Changes in the composition of the gut microbiota can lead to intestinal inflammation and disrupt the balance of the immune regulatory network, which has been linked to osteoporosis in numerous studies (131).
In addition, the gut microbiota alleviates oxidative stress by producing antioxidant molecules such as glutathione, folate, and polysaccharides (132, 133). Furthermore, certain components of the intestinal microbiota can produce short-chain fatty acids (SCFAs), which not only stimulate the generation of antioxidant molecules but also help mitigate oxidative stress (134).
Certain lactic acid bacteria in the gut have been found to help prevent osteoporosis by reducing mutagenic activity. These bacteria can bind potent mutagens in the gut, lessening their mutagenic impact; this in turn lowers inflammation and DNA damage, providing better protection for the gut wall. Furthermore, this process promotes improved mineral absorption, ultimately helping to thwart the onset of osteoporosis (135).
In addition, exopolysaccharides exhibit a wide variety of biological activities, including immunomodulatory, antioxidant, and anti-tumor effects and regulation of intestinal microbial balance, thereby improving the immune response and playing anti-inflammatory and antioxidant roles (136).
One study examined how a Lactobacillus plantarum exopolysaccharide affects the intestinal immune response, oxidative stress, the intestinal mucosal barrier, and the microbial community in cyclophosphamide-induced immunosuppressed mice (137). The results suggest that the exopolysaccharide of L. plantarum JLAU103 may regulate the intestinal immune response by modulating SCFA production and the intestinal microbiota in immunosuppressed mice, thereby activating systemic immunity (137). In a separate study, Bifidobacterium WBIN03 was identified as having a high growth rate and high exopolysaccharide production, and the effects of these exopolysaccharides on the intestinal microflora of mice were examined. The exopolysaccharides boosted the growth of Lactobacillus and anaerobic bacteria while suppressing Enterobacter, Enterococcus, and Bacteroides fragilis (138). A further analysis of the gut microbiome revealed that Lactobacillus plantarum NCU116 enhanced the abundance of microbial populations involved in gut regeneration and glycan metabolism (139).
In conclusion, beyond their metabolites, the exopolysaccharides of gut microbes also have anti-inflammatory and antioxidant effects and thus have a certain impact on the treatment of osteoporosis.
Effects of GM improved by PPs on OP treatment
Because of their capacity to inhibit inflammatory factors and engage various other mechanisms, polyphenols have been identified as potential agents for the treatment of osteoporosis and chronic inflammation. Many of these mechanisms are closely intertwined with intestinal barrier function: the gut microbiota is tied to the immune regulatory network, regulation of the immune system is often induced by inflammatory factors, and inflammation is closely related to bone loss and osteoclast activation (140). Polyphenols act against osteoporosis when converted into metabolites that inhibit inflammatory factors, enhance intestinal barrier function, and regulate immunity so as to inhibit bone loss and osteoclast formation (141). Meanwhile, polyphenols can increase the abundance and activity of the gut microbiota, acting as regulatory mediators and inducers in the gut barrier-bone-immune axis (142). Studies in osteopenic ovariectomized mice fed a diet supplemented with crude dried plum extracts and dried plum polyphenol compounds showed that the polyphenols modified both the gut microbiota and the levels of cecal short-chain fatty acids. These findings demonstrate the potential prebiotic activity of dried plum polyphenols and their significant contribution to regulating both bone formation and bone resorption (143).
Sangeeta Huidrom et al. conducted a study demonstrating that oral administration of various probiotic strains had promising effects in reducing bone resorption and increasing bone density; this was observed in both animal models and human studies, suggesting the potential of probiotics as a therapeutic approach for osteoporosis (144). Probiotics may therefore be an effective way to prevent and treat postmenopausal osteoporosis. Acting as prebiotics, polyphenols can improve the gut microbiota and increase the number of intestinal probiotics, thereby helping to treat osteoporosis (144). Hence, the synergistic effect between polyphenols and the gut microbiota emerges as a critical factor in the treatment of osteoporosis.
Conclusions and perspectives
Osteoporosis is a common metabolic disease. In this paper, the effects of polyphenols and intestinal microbes on the treatment of osteoporosis were summarized. Polyphenols can be decomposed into metabolites that are more easily absorbed, while the abundance and activity of intestinal microorganisms are increased by the action of polyphenols. Under the synergistic effect of the two, each exerts its function to a greater extent, providing innovative ideas and important insights for the treatment of osteoporosis.
FIGURE 2 Molecular mechanism of osteogenic differentiation controlled by resveratrol and icariin. Icariin and resveratrol down-regulate the expression of GSK-3β by stimulating the expression of Runx2, thus achieving stable accumulation of β-catenin and its transfer into the nucleus, thereby activating the Wnt/β-catenin pathway, promoting osteogenic differentiation and enhancing bone density. GSK-3β, glycogen synthase kinase 3β; APC, adenomatous polyposis coli; Runx2/TCF/LEF, specific transcription factors.
FIGURE 3 Polyphenols act on the NF-κB pathway and the mechanism of oxidative stress. Paeoniflorin weakens the phosphorylation of p65 NF-κB, thereby inhibiting activation of the NF-κB signaling pathway. Oxidative stress is closely related to inflammatory cytokines, which are always associated with the NF-κB pathway. Inflammation can cause mitochondrial dysfunction, which impairs oxidative metabolism. After inhibition of the NF-κB signaling pathway, PG-F2α can be down-regulated to inhibit inflammatory factors.
TABLE 1 Microbes provoked or inhibited in the gut based on the consumption of polyphenols. Columns: polyphenol source | experimental method | changes in the microbiota | reference. One example row: green tea polyphenols (catechins, flavonoids and flavonols) | in vivo experiment: mice were divided into groups and given 100 mg/kg body weight TP (TPL), 200 mg/kg body weight TP (TPM), and 400 mg/kg body weight TP (TPH) by tube feeding, respectively, for 12 weeks [remaining cells not preserved]. "↑": the number is increased; "↓": the number is reduced.
2023-10-26T15:33:20.163Z
2023-10-23T00:00:00.000
{ "year": 2023, "sha1": "adb0110389478c0ba2d81e2d654123bbbd127d1c", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2023.1285621/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "53d982dba3f69eb4cb068396cc1f63d0ab1bd80d", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
233814739
pes2o/s2orc
v3-fos-license
Assisted computer and imaging system improve accuracy of breast tumor size assessment after neoadjuvant chemotherapy
Background The use of neoadjuvant therapy (NAT) in patients with early breast cancer is becoming increasingly common. The purpose of this study was to explore the use of a breast pathology cabinet X-ray system (CXS) to accurately assess the response to neoadjuvant treatment of breast cancer and to establish a standard evaluation system.
Methods A total of 100 patients with breast cancer after neoadjuvant treatment were randomly selected; preoperative imaging showed that the tumor masses had regressed significantly. Patients were randomly divided into experimental and control groups of 50 cases each, and the effective sampling rate of CXS-assisted collection was compared with that of the traditional method. The two largest diameters of the largest two-dimensional surface of the tumor bed were taken as the measurement objects: the macroscopic description values were D1/D2, the CXS-measured values in the experimental group were d1/d2, and the microscope-corrected true size of the tumor bed, H1/H2, served as the final reference standard. The differences of D1/D2 and d1/d2 from H1 and H2 were calculated, and d1 − H1 and d2 − H2 were compared with D1 − H1 and D2 − H2.
Results The average number of tissue blocks was 16.4 in the experimental group and 16.7 in the control group, with no difference between the two groups. The number of effective tumor bed blocks was 11.8 in the experimental group and 7.5 in the control group, a significant difference. The average effective percentage of tumor bed sampling was 72% in the experimental group and 44.8% in the control group, a difference that was also statistically significant. d1 − H1 and d2 − H2 differed significantly from D1 − H1 and D2 − H2, respectively.
Conclusions CXS-assisted collection of the breast tumor bed can significantly improve the efficiency of tumor bed sampling and reduce collection costs. Compared with naked-eye measurement of the maximum diameter of the tumor bed, the CXS-mapped value is closer to the value measured under the microscope.
Neoadjuvant chemotherapy is given to reduce the tumor mass and clinical stage, and to prepare fully for subsequent surgical removal of the lesion. With the rise of artificial intelligence and computer-assisted learning, such technology is widely used in the assessment of breast cancer, for example automatic digital patch stitching with convolutional neural networks to restore the tumor bed, or three-dimensional single-cell imaging for the analysis of RNA and protein expression in intact tumour biopsies. Three-dimensional imaging technology has also been tried for restoring the breast cancer tumor bed: the morphology of the bed after neoadjuvant treatment, whether it contracts concentrically or non-concentrically, can be revealed in simulated three-dimensional imaging. However, changes in the tumor bed place new demands on the pathologist's sampling and pathological evaluation. Needless to say, accurate sampling and description of the tumor bed can restore its true condition to the greatest extent and provide accurate data for pathological evaluation. Is there a new and effective way to determine the location of the tumor bed and to describe its size exactly? The cabinet-type X-ray radiography system (CXS) was recently introduced to China.
This article discusses CXS guidance of tumor bed sampling and the role of microscopic measurement of the tumor bed after NACT, to provide a preliminary reference for tumor bed sampling. We present the following article in accordance with the MDAR checklist (available at http://dx.doi.org/10.21037/tcr-20-2373).
Information
A total of 100 breast specimens from NACT between January and September 2019 at the Fourth Hospital of Hebei Medical University were collected and randomly divided into 50 cases in the artificial group (control group) and 50 cases in the machine-assisted group (experimental group); we obtained statements confirming informed consent from all patients. The patients in both groups were female, aged 35-65 years, with a median age of 48 years. Among them, 64 patients underwent radical resection of breast cancer and 36 underwent breast-conserving resection, and patients received 3-8 cycles of various chemotherapy regimens, with clinical response evaluated before surgery. Fifty-two cases were judged to be in complete or near-complete remission after NACT; 36 cases showed multifocal calcification on X-ray images; and in 12 cases the tumors were significantly reduced to 1-2 cm, with X-ray images showing high-density areas suspected of containing residual tumor parenchyma. Cases were screened on the following principle: after NACT and before radical surgery, the tumor had regressed significantly on imaging, approaching or reaching complete clinical remission compared with the pre-NACT state. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). The study was approved by the ethics board of the Fourth Hospital of Hebei Medical University (2020K-1334) and informed consent was taken from all the patients.
Instrument
Cabinet X-ray system.
Definition
To compare the efficiency of the two sampling methods simply and accurately, the efficiency of tumor bed sampling is defined as the number of effective specimens (tumor parenchyma, or non-tumor tissue showing a chemotherapy response) divided by the number of tumor bed specimens, i.e., P = T/N × 100%.
Specimen selection and correction under the microscope
The subjects were divided into breast-conserving specimens and radical mastectomy specimens after neoadjuvant therapy. For breast-conserving specimens, preliminary positioning was performed based on preoperative tattoos and titanium clip marks, and the specimens were irradiated with CXS for further accurate positioning. Margins in different directions were marked with dyes of different colors, and the margins were taken perpendicularly. The breast was dissected at 1-cm intervals to expose the largest two-dimensional surface of the tumor bed, and the maximum diameters in the two directions were described. The edges of the largest two-dimensional section of the tumor bed were irradiated with X-rays; suspicious positive margins were recorded, and the measured values of the largest two-dimensional surface were marked on the X-ray image. The largest tumor bed was then sampled in order. For radical mastectomy specimens, pathologists should use markers and preoperative imaging information to locate the tumor bed initially, and expose its largest two-dimensional surface at 1-cm intervals. According to the BIG-NABCG recommendation, for larger tumors, five representative blocks should be taken for complete sampling of the largest cross-section.
A few more of these maximum cross-sections can be selected at 1-cm intervals to determine the full extent of the tumor, and the maximum two-dimensional values of the tumor bed recorded. A ruler was used to mark the maximum diameter, and the largest diameter of the largest tumor bed among these sections was selected as the reference (Figure 1). At the same time, the tumor bed was restored to its largest extent (Figure 2), and the largest tumor bed was then sampled in order, recording the corresponding numbers of the tissue blocks (Figure 3). To save time, multiple sections can be irradiated together. The CXS marking function was used to mark the area of suspicious tumor bed on each section (Figure 4), and a marker pen was used to draw the corresponding suspicious area on the cut surface of the actual specimen (Figure 5). All suspicious areas were then sampled, and the total number of tumor bed blocks was recorded. The tissue blocks were dehydrated, embedded in paraffin, sectioned, and stained with hematoxylin and eosin (HE), then submitted to an experienced pathologist to evaluate the two dimensions of the largest tumor bed section under the microscope, H1 and H2 (H1 the larger, followed by H2). It should be pointed out that H1/H2 can be obtained as follows: observe the farthest ends of the tumor under the microscope, mark the slides with black dots, and finally superimpose the slides and measure the maximum diameters (Figure 6). Microscope correction yields one of two outcomes: first, a small error between the macroscopic description and the maximum diameter under the microscope; second, a large discrepancy, with the macroscopic value falling far above or below the final microscopic measurement. The value under the microscope is determined according to the consensus given by MD Anderson, that is, the largest path through the tumor parenchyma, with non-tumor areas such as fibrosis, inflammation, and foam cell reactions included within the measurement. A simple and effective method is to stack the slides to restore the largest two-dimensional surface of the tumor bed, mark both ends of the tumor parenchyma, and measure the two largest diameters in perpendicular directions with a ruler; when the tumor bed is large and the parenchymal component of the tumor is small, this avoids a complicated workload.
Evaluation method based on residual cancer burden (RCB)
Through the above specimen collection and observation under the microscope, we collected the following data: (I) the corrected lengths under the microscope in the two perpendicular directions of the largest two-dimensional surface of the tumor bed (d1/d2); (II) the percentage of residual tumor (CA%) and of residual ductal carcinoma in situ (DCIS%) (calculated for each slice and averaged); (III) the number of positive lymph nodes (LN) and the largest diameter of the nodal metastases (dmet) (the measurement of the largest diameter should include the fibrous stroma between the two most distant lesions). The score can be calculated with the published MD Anderson formula RCB = 1.4(finv × dprim)^0.17 + [4(1 − 0.75^LN) × dmet]^0.17, where dprim = √(d1 × d2) and finv = (1 − DCIS%/100) × (CA%/100). We log on to http://www.mdanderson.org/breastcancerrcb, enter the above data to calculate the score, and obtain the RCB rating. The scores correspond to the RCB classes in the following ranges:  RCB 0: pathologic complete response (pCR);  RCB I: low risk (0-1.36);  RCB II: moderate risk (1.37-3.28);  RCB III: high risk (>3.28).
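For readers who want to verify a score by hand, the published formula above can be implemented in a few lines. The following is a minimal, illustrative sketch: the function names and the example inputs are ours, not part of the study, and the MD Anderson online calculator remains the authoritative tool.

```python
import math

def rcb_score(d1_mm, d2_mm, ca_pct, dcis_pct, ln_pos, dmet_mm):
    """Residual Cancer Burden per the MD Anderson formula.
    d1_mm, d2_mm: the two largest diameters of the tumor bed (mm);
    ca_pct: overall percentage of residual carcinoma in the bed;
    dcis_pct: percentage of that carcinoma which is in situ;
    ln_pos: number of positive lymph nodes;
    dmet_mm: largest diameter of a nodal metastasis (mm)."""
    d_prim = math.sqrt(d1_mm * d2_mm)                # bidimensional primary size
    f_inv = (1 - dcis_pct / 100) * (ca_pct / 100)    # invasive cellularity fraction
    prim_term = 1.4 * (f_inv * d_prim) ** 0.17
    met_term = (4 * (1 - 0.75 ** ln_pos) * dmet_mm) ** 0.17 if ln_pos else 0.0
    return prim_term + met_term

def rcb_class(score):
    """Map a score onto the class boundaries listed in the text."""
    if score == 0:
        return "RCB 0 (pCR)"
    if score <= 1.36:
        return "RCB I (low risk)"
    if score <= 3.28:
        return "RCB II (moderate risk)"
    return "RCB III (high risk)"

# Hypothetical case, not taken from this study's data:
s = rcb_score(d1_mm=25, d2_mm=18, ca_pct=20, dcis_pct=10, ln_pos=2, dmet_mm=6)
print(round(s, 2), rcb_class(s))   # -> 3.25 RCB II (moderate risk)
```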
Statistical analysis
The number of effective tumor bed blocks and the total number of tumor bed blocks were recorded for each case in the experimental and control groups, and the efficiency of tumor bed sampling was calculated according to the formula P = T/N × 100%. A nonparametric rank sum test for two independent samples (Wilcoxon W) was used, with SPSS 16.0 software, to compare the differences between the experimental and control groups. The macroscopic values D1/D2, the CXS values d1/d2, and the microscopic values H1/H2 of each case were recorded, and the absolute differences from the microscopic value were calculated in each dimension: D1 − H1 and D2 − H2, and d1 − H1 and d2 − H2. The nonparametric rank sum test for two paired samples (Wilcoxon W) was used to test whether D1 − H1 differed from d1 − H1 and whether D2 − H2 differed from d2 − H2; P<0.05 was considered statistically significant.
Results
Of the 100 cases, 21 were evaluated as a mild treatment response, 15 as a moderate response, and 64 as a severe response. Forty cases were diagnosed as invasive ductal carcinoma of the breast (histological grade 2 in 31 cases, grade 3 in 9 cases). The statistical results show that the mean number of effective tumor bed blocks in the experimental group was 11.8 (median 12.0), the mean number of tumor bed blocks was 16.4 (median 15.5), and the mean effective sampling rate was 72.0% (median 75.0%); in the control group, the mean number of effective tumor bed blocks was 7.5 (median 7.0), the mean number of tumor bed blocks was 16.7 (median 17.0), and the overall effective sampling rate was 44.8% (median 45.0%). The number of effective tumor bed blocks differed between the experimental and control groups (P<0.05). There was no statistical difference between D2 and d2 (Table 2), whereas there was a statistical difference between D1 and d1 (P<0.05, Table 3). There were statistical differences between D1 − H1 and d1 − H1, and between D2 − H2 and d2 − H2 (P<0.05, Tables 4, 5). The line charts show that the CXS-measured values of the tumor bed are closer to the microscope measurements than the macroscopic values are; with CXS measurement, the error does not change markedly (Figures 7, 8). The statistical difference between D1 and d1 shows that, even without reference to the microscope-corrected true values, the two methods measured the tumor bed differently. D2 − H2 and d2 − H2 differed even though D2 and d2 did not, indicating that when the tumor bed was small, the CXS measurements closely tracked the curve of the microscopic measurements, whereas the macroscopic values lay farther away, scattered on either side of the CXS curve (macroscopic measurements were too low or too high).
T: the number of effective specimens; N: the number of tumor bed specimens; P: the efficiency of retaining bed material (%). CXS, cabinet X-ray system.
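The comparisons above can be reproduced with any standard statistics package. The sketch below uses scipy rather than the SPSS 16.0 used in the study, and the arrays contain placeholder values, since the per-case raw data are not reproduced in the paper.

```python
import numpy as np
from scipy import stats

# Placeholder per-case counts (effective blocks T, total blocks N):
T_exp, N_exp = np.array([12, 11, 13]), np.array([16, 15, 18])
T_ctl, N_ctl = np.array([7, 8, 6]), np.array([17, 16, 18])

# Sampling efficiency, P = T/N x 100%, computed per case.
P_exp = T_exp / N_exp * 100
P_ctl = T_ctl / N_ctl * 100

# Two independent samples: Wilcoxon rank-sum (equivalently Mann-Whitney U).
print(stats.mannwhitneyu(P_exp, P_ctl, alternative="two-sided"))

# Paired comparison of absolute errors against the microscope reference:
# |D1 - H1| (naked eye) versus |d1 - H1| (CXS), case by case.
err_macro = np.array([4.0, 5.5, 3.0])   # |D1 - H1| per case (placeholder)
err_cxs = np.array([1.0, 1.5, 0.5])     # |d1 - H1| per case (placeholder)
print(stats.wilcoxon(err_macro, err_cxs))   # paired signed-rank test
```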
Discussion
The treatment of breast cancer has gradually developed from simple surgical treatment to the current comprehensive treatment plan with surgery at its center, supplemented by chemotherapy, radiotherapy, immunotherapy, endocrine therapy, and so on; this is the inevitable result of multidisciplinary development of breast cancer research across imaging, surgery, and pathology (1). NACT not only benefits more breast cancer patients, but also brings many problems and challenges to pathologists (2). Specimen selection and the evaluation of important pathological parameters are critical to assessing the extent of treatment response, so it is important to recognize that histopathologists play a key role in this multidisciplinary environment. However, there is no uniform standard for tumor bed sampling. This means that applying common sampling methods to the breast tumor bed after neoadjuvant therapy may lose important parameters and cause large errors in the RCB score (3,4). According to the latest American Joint Committee on Cancer (AJCC) classification, combining macroscopic and microscopic tissue evaluation is the best way to assess the size range (ypT) of residual cancer in the tumor bed after neoadjuvant treatment.
After neoadjuvant therapy, tumor bed changes can be roughly divided into two types: concentric contraction and non-concentric contraction (Figures 9, 10). Concentric contraction is more common in HER2-overexpressing and triple-negative or basal-like types, while non-concentric contraction is mostly seen in luminal breast cancer. When the chemotherapy effect is good, fibrosis and necrosis often occur in the tumor bed area, and it is often difficult to distinguish residual tumor parenchyma with the naked eye. Therefore, when a non-concentrically contracted tumor bed approaches or reaches complete remission, sampling is difficult for pathologists because lesions are easily missed.
According to the BIG-NABCG recommendations, when residual cancer is found, a complete cross-section of the largest tumor area should be evaluated in sections under the microscope. For larger tumors, five representative sections are selected for complete sampling of the largest cross-section, and several more such maximum cross-sections can be selected every 1 cm to determine the full extent of the tumor. This method is sufficient to assess the tumor size and the percentage of residual cancer calculated for AJCC staging and the RCB. Correlation with preoperative imaging should be used to help locate the tumor site, and if possible, specimen radiographs should be used to locate the tumor-related site and/or calcifications before surgery. The size of the tumor before sampling determines the scope of sampling. Systematic sampling is preferred over blind sampling of the entire fibrotic area or of an arbitrary number of blocks; this requires comprehensive judgment through careful study of clinical data and imaging characteristics, so as to select the best area for sampling. When no residual tumor is found, BIG-NABCG recommends the following: one largest cross-section of the pretreatment area [or five representative slices for each 1 cm (for larger tumor beds, 1-2 cm)], taking a maximum of 25 blocks for larger tumor beds. In contrast, the US FDA recommends taking at least one block per centimeter of the pre-treatment tumor size, or a total of at least ten blocks, whichever is greater; the Royal College of Pathologists (UK), however, gives no specific recommendations for the handling of neoadjuvant specimens. For multifocal tumors, each lesion should be treated in the same way, and sections of the breast tissue between tumors should be recorded and sampled.
Figure 9 The tumor shows centripetal atrophy in CXS images. Figure 10 The tumor shows non-concentric atrophy in a CXS image. CXS, cabinet X-ray system.
Obviously, whether evaluation uses the postoperative Miller-Payne system or the RCB system, the sampling method suggested by BIG-NABCG has advantages. The key to pathological sampling is to determine the location of the tumor bed after surgery; if imaging is used as a reference, omission of the lesion can be avoided to the greatest extent. In recent years, multidisciplinary cooperation and the rise of artificial intelligence technology have undoubtedly provided better methods for breast cancer screening and diagnosis. For example, using a computer-aided diagnosis tool for contrast-enhanced spectral mammography (CAD-CESM), Patel et al. found, in an observation of 50 breast cancer patients, that CAD-CESM correctly identified 45 of the 50 lesions in the cohort, an accuracy of 90% (5). In the breast cancer field, current artificial intelligence technology is mainly used in early imaging screening (6-8), and pathologists use virtual microscopes and remote digital-section consultations (9). Maeda et al. used 200 hollow-core needle biopsy breast specimens, performed immunohistochemical staining for estrogen receptor (ER), synaptophysin, and CK14/p63, scanned the whole sections, and analyzed nuclear and cytoplasmic staining with image analysis software; as a proof of concept for diagnosis, cases with high ER expression may indicate malignancy (10).
Preoperative assessment of the patient's tumor bed can use methods such as mammography, magnetic resonance imaging (MRI), and Doppler ultrasound. The X-ray manifestations of breast cancer are divided into four kinds: (I) mass; (II) calcification; (III) structural distortion; (IV) asymmetric dense shadow. The imaging principle is based on the difference between the density of the lesion and that of the surrounding normal tissue. Therefore, when the treatment effect is poor, high-density lesions are still present; malignant cells at the edge of the lesion infiltrate the surrounding area, the inflammatory response of the normal tissue resists their spread and forms wrapping and pulling, and X-ray images show spiculation, roughness, and lobulation. When there is some effect, the density of the lesion decreases and its volume becomes smaller; when the effect is good, the density of the lesion matches the surrounding glands and its shape becomes indistinct. MRI is the most sensitive breast cancer detection method and the most accurate imaging method for evaluating the efficacy of neoadjuvant therapy for breast cancer (11); it includes diffusion-weighted imaging, dynamic contrast-enhanced MRI, magnetic resonance spectroscopy, and other techniques. Compared with clinical evaluation, MRI can more accurately predict the pathological response of breast cancer after neoadjuvant treatment, and the change in tumor volume is more valuable for evaluating efficacy than the change in diameter (12). The treatment response varies with the MRI appearance and subtype of the tumor: for tumors with clear boundaries on MRI, such as triple-negative breast cancer after NACT, the consistency between MRI and clinicopathological judgments of tumor size was better than for tumors with scattered interval enhancement on MRI, such as hormone receptor (HR)-positive breast cancer (13). At the same time, the type of chemotherapy regimen affects the accuracy of MRI evaluation (14,15).
Color Doppler ultrasound can intuitively evaluate the therapeutic effect. Usually a spoke-like scan is performed with the nipple as the center, adjusting the depth, gain, and focus position according to the actual condition of the mass to ensure a clear image. It can also be combined with two-dimensional ultrasound to detect the location, size, boundary, shape, and internal echo of the lesion, using the color Doppler mode to observe blood flow inside the tumor. Grading follows the Adler standard: level 0, no blood flow signal inside the tumor; level I, scant blood flow, with 1 to 2 punctate signals of diameter less than 1 mm; level II, more obvious blood flow, with 3 to 4 vessels visible in a radial distribution, at least one straddling the lesion; level III, abundant blood flow, with more than 4 vessels visible, distributed in a network. The effect of chemotherapy is judged by observing the patient's blood flow resistance index and comparing it with pre-operative images. If patients are sensitive to the chemotherapy drugs, their two-dimensional ultrasound appearance, blood flow classification, and resistance index change markedly, whereas there is no obvious change when the treatment is ineffective. This is mainly because, when the patient is sensitive to chemotherapy, the tumor cells are gradually destroyed, causing the tumor to shrink or even disappear; at the same time, the blood vessels inside the tumor become embolized, collapsed, and occluded, leading to a decrease in blood flow signals inside the tumor (16).
In this study, the CXS could clearly show the suspected area of the tumor bed and could accurately measure and mark its range. The CXS includes six main components. Lead chamber: composed of two layers of stainless-steel plate and 6-8-mm-thick lead plate. Stage: used to place samples. Detector: used to sense X-ray intensity and send black-and-white signals to the computer. Tube: generates X-rays. High-voltage generator: generates a 160 kV voltage to power the entire system. Vacuum system: the X-ray tube must work in a vacuum. Operation steps: after power-on, the high-voltage generator produces a high voltage that acts on the tube to generate the tube current (the total number of electrons escaping from the filament), comprising the tube wall current and the target current (the actual number of electrons reaching the target). The actual area of the target struck by the target current is called the focal spot size, which determines the smallest detectable defect in an object; generally, the smaller the focal spot, the smaller the defect that can be detected. The X-rays generated by the target current acting on the target illuminate the sample, which is then imaged on a digital camera. When the radiation penetrates the sample and reaches the sensing material, there is a voltage difference of 20 kV between the material and the phosphor screen. Because the X-ray intensity varies, the sensing material produces different numbers of electrons, which are accelerated and strike the phosphor screen to form a black-and-white image; the digital camera captures the image, which is converted into a digital signal and sent to the computer.
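The Adler grading described above reduces to a simple counting rule. A hypothetical helper, written only to make the grading thresholds explicit, might look like this; it is not part of any clinical software.

```python
def adler_grade(n_vessels: int) -> int:
    """Map the number of intratumoral vessels seen on color Doppler
    to the Adler grade described in the text."""
    if n_vessels == 0:
        return 0   # no blood flow signal inside the tumor
    if n_vessels <= 2:
        return 1   # scant: 1-2 punctate signals, diameter < 1 mm
    if n_vessels <= 4:
        return 2   # obvious: 3-4 radial vessels, >= 1 straddles the lesion
    return 3       # abundant: > 4 vessels in a network

print(adler_grade(3))   # -> 2
```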
We identify tumors by the gray scale of the image from black to white (25 levels, blackest to whitest). On the X-ray image, the tumor bed showed obvious high-density signal areas, and calcifications and DCIS showed brighter light spots (Figure 11). The radially dense white shadowed area of a concentrically contracting tumor bed is clearly distinguished from the surrounding black, low-density fat; non-concentrically contracted tumor beds differ only in signal intensity, but their corresponding range can still be determined. The comparison of sampling efficiency between the experimental and control groups shows that, although there was no statistical difference in the total number of blocks taken, the effective sampling rate of the tumor bed in the experimental group was higher, so the experimental group has a clear advantage. The reason is that the naked eye very easily misses areas with a more obvious treatment response. The CXS plays an important role in identifying the maximum two-dimensional cross-section of the tumor bed and in restoring the tumor bed, particularly when the specimens are small, a situation more common in breast-conserving specimens after neoadjuvant treatment. Determining the range of the tumor bed not only helps to obtain the largest two-dimensional surface, but also locates the margin closest to the tumor, so that the nearest margin can be sampled in the direction perpendicular to the tumor bed. We irradiated the specimen with CXS, observed the range of the suspicious density signal of the tumor bed, measured the distance from the tumor bed to the margin, and marked it; any residual DCIS was easier to observe this way. Where the tumor bed was too small and its boundary unclear, we took evaluation of its scope under the microscope as the best guiding principle, and required that the number of blocks on the largest surface be kept small (1-2 blocks) when sampling, to increase the accuracy of slide splicing. When sampling a smaller tumor bed, sampling can be extended into the adipose tissue around the bed, so that areas invisible to the naked eye or to CXS are not missed.
In this study, the measured values of the tumor bed in the control group showed a large error relative to the values under the microscope; compared with the experimental group, the error was more unstable, and the difference was statistically significant. When the tumor bed was larger, the error in the control group was more obvious, with no obvious change in the experimental group, which indicates that when the effect of chemotherapy is good and the tumor parenchyma is inconspicuous, the suspicious tumor bed covers a wide range. Sometimes, because the dense fibrous reaction at the edge of the tumor bed is not easy to distinguish from lower-density cancerous lesions, the diameter measured at the outermost edge of the tumor parenchyma observed under the microscope is smaller than the macroscopic measurement (Figure 12), but CXS can solve this problem. CXS can also provide an accurate assessment when small tumor foci remain at the edge of the tumor bed: lesions that are too small are easily missed when the maximum diameter is plotted by the naked eye (Figure 13), but on X-ray imaging they show as strong signal points, so small lesions are detected and omission is avoided. The shift of the tumor's main body after NACT is another major factor affecting macroscopic measurement, but CXS images can clearly reveal this phenomenon (Figure 14).
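The 25-level gray-scale reading described above amounts to flagging pixels whose rescaled level exceeds a cut-off. The sketch below is purely illustrative: the rescaling from an 8-bit image and the cut-off of 20 are our assumptions, not values reported by the study.

```python
import numpy as np

def flag_dense_regions(img: np.ndarray, level: int = 20) -> np.ndarray:
    """Return a boolean mask of pixels at or above a chosen gray level.
    img is an 8-bit radiograph mapped onto the 25-level scale mentioned
    in the text (0 = blackest fat, 24 = whitest calcification); both the
    mapping and the default cut-off are illustrative assumptions."""
    levels = np.round(img.astype(float) / 255 * 24).astype(int)
    return levels >= level

# Toy 3x3 "radiograph" with one bright, calcification-like pixel:
img = np.array([[10, 30, 20], [240, 60, 15], [25, 90, 200]], dtype=np.uint8)
print(flag_dense_regions(img))
```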
One of the advantages of CXS is that invasive cancer, DCIS, microcalcifications, and fibro-fatty reaction, as confirmed under the microscope (Figures 15-17), can be distinguished on its images by their different signal densities. In summary, in an era of rapid development of medical technology, the emergence of new sampling and diagnostic auxiliary equipment, represented by artificial intelligence, requires us to continuously explore its advantages and potential in order to overcome the difficulties encountered in daily work. The role of the CXS, however, still needs to be studied and explored by more scholars.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). The study was approved by the ethics board of the Fourth Hospital of Hebei Medical University (2020K-1334) and informed consent was taken from all the patients.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
Figure 16 The high-brightness points on the X-ray map can also be calcifications (hematoxylin and eosin). Figure 17 The lower-signal areas on the X-ray diagram often appear fibrotic under the microscope (hematoxylin and eosin).
2021-05-07T00:03:55.122Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "d3f87349f8ca0e59bca65acbb42d2bf4fe6affed", "oa_license": "CCBYNCND", "oa_url": "https://tcr.amegroups.com/article/viewFile/49508/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9100f18a253d85b79d0d0dc20337b7c8b42c560f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17435897
pes2o/s2orc
v3-fos-license
Delayed colonisation of Acacia by thrips and the timing of host-conservatism and behavioural specialisation Background Repeated colonisation of novel host-plants is believed to be an essential component of the evolutionary success of phytophagous insects. The relative timing between the origin of an insect lineage and the plant clade they eat or reproduce on is important for understanding how host-range expansion can lead to resource specialisation and speciation. Path and stepping-stone sampling are used in a Bayesian approach to test divergence timing between the origin of Acacia and colonisation by thrips. The evolution of host-plant conservatism and ecological specialisation is discussed. Results Results indicated very strong support for a model describing the origin of the common ancestor of Acacia thrips subsequent to that of Acacia. A current estimate puts the origin of Acacia at approximately 6 million years before the common ancestor of Acacia thrips, and 15 million years before the origin of a gall-inducing clade. The evolution of host conservatism and resource specialisation resulted in a phylogenetically under-dispersed pattern of host-use by several thrips lineages. Conclusions Thrips colonised a diversity of Acacia species over a protracted period as Australia experienced aridification. Host conservatism evolved on phenotypically and environmentally suitable host lineages. Ecological specialisation resulted from habitat selection and selection on thrips behavior that promoted primary and secondary host associations. These findings suggest that delayed and repeated colonisation is characterised by cycles of oligo- or poly-phagy. This results in a cumulation of lineages that each evolve host conservatism on different and potentially transient host-related traits, and facilitates both ecological and resource specialisation. Background Host-plant specialisation is common and central to explanations for the enormous diversity of plant-feeding insects. Phytophagous insects vary in the taxonomic breadth of their respective host-plant range, but most still tend to use only a fraction of the plants available to them in their environment [1][2][3][4][5]. Generally, selection promoting both the broadening and reduction of hostplant resources must take place. Host-plant conservatism is not universal and selection for generalised host associations is expected to be persistent because of characteristics such as resource abundance variability or environmental predictability [6,7]. Colonisation of a new plant taxon signifies the broadening of a species host range, and specialisation on traits of the host show a narrowing of resource use. Explaining mechanisms that cause expansions or contractions in host-ranges has been difficult especially for species rich interactions [8] because the vagaries of time tend to obscure complex patterns of association [9]. Here we investigate the timing of colonisation by a lineage that evolved diverse specialised modes of resource-use but remained relatively species depauperate. The enormous diversity of phytophagous insects has been attributed to traits associated with the insect herbivore (diet tolerances for plants and oviposition preferences), the plants they parasitise (defense strategies against herbivores), the interaction itself ('coevolution'), ecological community interactions (predation & competition), or the environment (bottom-up forces). 
Conventional hypotheses posit 'reciprocal' or 'sequential' bitrophic interactions between traits of diversifying clades that drive insect and plant radiations [10][11][12]. Trade-offs in reproduction or diet, competition and predation, and tolerance to plant 'defensive' traits are central to these arguments [2,13]. Alternative explanations argue host-plant conservatism can be driven by predictability [14], climate [15], life-history characteristics [16], geographical contexts [17], plasticity [18], genetic predispositions or ecological compatibilities suited to the use of a resource [19], and host-range ecology [20] or genetics [21]. To distinguish among these causal mechanisms it is necessary to study evolutionary periods that are meaningful to the association of interest. Transitions to specialisation on a novel host-plant resource are only meaningful for a finite period because a shift to a narrower set of resources can be transient or bidirectional [3,8,22]. Determining the period that separates the origin of the insect group and the host-plants they feed on is essential to unraveling hypotheses explaining the origin or loss of narrow host ranges. Discerning between colonisation and becoming reproductively isolated on the novel resource requires understanding distinct processes. The first phase in the evolution of a conservative host-plant affiliation is colonisation. Colonisation signifies a potential prelude to adaptation to a novel resource [5]. Colonisation of a novel plant lineage is either a fundamental shift to a resource previously not utilised in the evolutionary past or a secondarily derived association with a lineage used in the past [8]. The phylogenetic distance and dispersion among terminal host taxa has been used to distinguish between these two possibilities [23,24]. Furthermore, the relative time between the most recent common ancestor (MRCA) of the host lineage and inferred colonisation is expected to be indicative of the extent of the distance in resource space between natal and novel host [25]. This measure is informative because it describes the extent of niche-expansion, characterises differences between alternative niches, and provides a framework for identifying trade-offs between them. The most direct means of testing this distance is to determine whether the common ancestors of insect and plant clades are contemporaneous or not. The second phase following initial contact leads to reproductive isolation on the new host, assumed to ensue via disruptive selection in sympatry, or by gene flow disruption and drift in allopatric or parapatric isolation. Acacia (sensu stricto) Mill. (Leguminosae, Mimosoideae) is broadly distributed over Australia with an estimated 1020 species. A fossil-calibrated molecular study has placed the origin of the legume subfamily Mimosoideae at approximately 42 Mya [26]. The fossil record indicates that species of subfamily Mimosoideae assignable to genera other than Acacia (sensu lato) [27] were present in the eastern Great Australian Bight approximately 37 Mya during the Oligocene. Australian Acacia is thus an immigrant taxon among a number of mimosoid genera and probably established in Australia during the Late Oligocene-Early Miocene [27]. The evidence suggests Acacia became a dominant part of sclerophyll communities in Australia during the Pliocene, 7.0-1.5 Mya.
Thrips (Thysanoptera, Tubulifera, Phlaeothripinae) that parasitise Australian Acacia are uncharacteristic of the other 5500 estimated thysanopteran species, which mostly exhibit generalist relationships with plants [28]. Most Tubulifera species (ca. 60%, [29]) are fungivorous, some are phytophagous, and fewer still are predators. Approximately 15% of the 2000 thrips species belonging to the Tubulifera are able to induce galls. Endemic northern tropical Australian thrips include species belonging to genera present in the wet tropics of Southeast Asia [30], suggesting thrips in Australia had an ancestral origin in a tropical environment. Thrips specialising on Acacia comprise several distinct behavioural suites that exhibit variation in host-specificity and oviposition strategies [31]. Acacia thrips, estimated to be in excess of 230 species [32], feed almost exclusively on sections Phyllodineae Pedley, Plurinerves Benth., and Juliflorae Benth. (ca. 397 spp., 216 spp., and 255 spp. respectively, [33]). Of the 1020 Acacia species, approximately 950 develop phyllodes, the expanded petiole believed to be necessary for the radiation of thrips on Acacia. The most current molecular systematics of Thysanoptera supports the monophyly of this group [34]. The domicile-building Acacia thrips tie or glue phyllodes with silk to create a chamber. Kleptoparasitic thrips species invade and kill gall-inducing or domicile-building thrips on Acacia, while opportunistic Acacia thrips species utilise the abandoned domiciles, galls, or similar constructions of other insect orders.

Here we construct the most comprehensive Acacia (sensu stricto) molecular phylogeny to date and compare it with the evolutionary history of Acacia thrips. We expect one of three possible scenarios (Figure 1) to explain the colonisation of Acacia by thrips, each with a distinct timing pattern. Phylogenetically contemporaneous common ancestors of insect and host plant are explained by insect lineages tracking the host with conserved host switching among related taxa. A pattern showing a considerably younger insect common ancestor compared to that of the host lineage is expected when a lineage that has not been used in its recent evolutionary past is colonised. An insect common ancestor that predates the host lineage requires invoking extinctions of insect lineages on other plant taxa, or extinctions of distantly related ancestral host taxa. Specifically, we test the hypotheses that the MRCA of Acacia thrips: i) was contemporaneous with the MRCA of Acacia (rapid colonisation); ii) postdates the origin of the MRCA of Acacia (delayed colonisation); or iii) predates the MRCA of Acacia (convergent colonisation and extinction). We interpret the results in terms of distinguishing between colonisation of Acacia, the evolution of host conservatism, and the evolution of ecological specialisation amongst thrips lineages, with a focus on galling behaviour.

Results

Phylogenetic inference of Acacia
We inferred phylogenies using parsimony-based and probabilistic approaches to evaluate uncertainty in topology, test deviations from taxonomic classifications, and generate a distribution of phylograms to be used in divergence time estimation (see below). The reliability of the inferences between independent Bayesian analyses was evaluated using the standard deviation of split frequencies, which was below 0.01 in all runs.
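The two run-level diagnostics used here and below, the average standard deviation of split frequencies (ASDSF) and the potential scale reduction factor (PSRF), are simple to compute from MCMC output. The following is a minimal, self-contained sketch of both statistics, not the MrBayes implementation; the split frequencies and parameter traces are hypothetical toy data, and the ASDSF uses one common convention for the two-run standard deviation.

```python
import math

def avg_split_freq_sd(freqs_run1, freqs_run2):
    """Average standard deviation of split frequencies across two
    independent MCMC runs; values below ~0.01 suggest the runs are
    sampling very similar tree distributions."""
    splits = set(freqs_run1) | set(freqs_run2)
    sds = []
    for s in splits:
        f1 = freqs_run1.get(s, 0.0)
        f2 = freqs_run2.get(s, 0.0)
        mean = (f1 + f2) / 2.0
        # sample SD of two values (n - 1 = 1 in the denominator)
        sds.append(math.sqrt((f1 - mean) ** 2 + (f2 - mean) ** 2))
    return sum(sds) / len(sds)

def psrf(chains):
    """Gelman-Rubin potential scale reduction factor for one scalar
    parameter sampled by m chains of length n; values near 1.0
    indicate between-run agreement."""
    m = len(chains)
    n = len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)   # between-chain
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m               # within-chain
    var_hat = (n - 1) / n * W + B / n
    return math.sqrt(var_hat / W)

# Hypothetical toy data: split -> posterior frequency in each run,
# and two short traces of a substitution-model parameter.
run1 = {"AB|CD": 0.98, "AC|BD": 0.01}
run2 = {"AB|CD": 0.97, "AC|BD": 0.02}
print(avg_split_freq_sd(run1, run2))   # ~ 0.007, i.e. below the 0.01 threshold
print(psrf([[0.51, 0.49, 0.50, 0.52],
            [0.50, 0.52, 0.49, 0.51]]))  # ~ 0.87 for these near-identical chains
```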
The potential scale reduction factor (PSRF) ranged from 1.000 to 1.012 for all parameters in the separate 100 × 10⁶ generation runs, indicating consistent posterior parameters among runs. The Bayesian consensus tree indicated several poorly supported deeper nodes, but otherwise resolved the section clades (Figure 2).

Figure 1. Colonisation hypotheses. Relative timing of the common ancestors of Acacia thrips in relation to Acacia. The green branches indicate the Acacia clade and the black branches thrips. We calibrated the relative timing of the two clades at the nodes where a parallel divergence (codivergence) event occurred between them [89]. Explanations for hypotheses: H0) contemporaneous origin of Acacia and Acacia-thrips MRCA; H1) multiple independent colonisations of Acacia and extinction of ancestral hosts; and H2) host shifting from a more distantly related natal host.

Our consensus tree showed good general agreement with section classifications [33]. The topologies of the parsimony, maximum likelihood, and Bayesian inferences all indicated very similar polyphyletic groupings of species from all four sections (Additional files 1, 2, 3, 4, 5 and 6). The SH-test for section monophyly indicated that 100 constraint trees generated using maximum likelihood were all significantly worse (P < 0.0001) than the topology of the Bayesian consensus phylogeny. Considered together, species of sections Phyllodineae and Juliflorae cluster within the Plurinerves, as do Plurinerves within the Juliflorae. Acacia colletioides (Plurinerves) groups within a clade that is otherwise comprised of section Botrycephalae. Section Botrycephalae is a derived clade of section Phyllodineae. Acacia elata and A. terminalis are paraphyletic with other Botrycephalae in section Phyllodineae. These topological associations are well supported in our Bayesian inference (Figure 2) and all other inferences (Additional files 4 and 5). Acacia brachystachya (Juliflorae) is well supported within section Phyllodineae in all inferences. Within Juliflorae, A. stenophylla (Plurinerves) is a well-supported sister-species of A. xiphophylla in all inferences. Acacia heteroclita and A. confluens (Plurinerves) also consistently grouped within the Juliflorae. Our inferences also show that section Plurinerves comprises A. verniciflua, A. howittii, A. aspera, A. flexifolia, A. lineata, A. genistifolia, and A. montana, which have all been classified as Phyllodineae species. These relationships were well supported in the probabilistic inferences. Acacia verticillata (Juliflorae) grouped within the Plurinerves clade. Lineages that were not well resolved included A. cuthbertsonii, A. coriacea, A. masliniana and A. havilandiorum, and the clade comprising A. floribunda, A. mucronata, A. longifolia, A. orites, and A. triptera.

Phylogenetic inference of Acacia thrips
Inferences of Acacia thrips phylogeny were undertaken using the same procedure as for Acacia. The standard deviation of split frequencies was below 0.01 in all runs. The potential scale reduction factor (PSRF) was 1.000 for all parameters in the 100 × 10⁶ generation runs.

Figure 2. Bayesian consensus tree of Acacia. The consensus was derived from sampling every 1000th tree of 100 × 10⁶ iterations with 2 chains and a GTR+I+Γ model applied to each gene locus, with the final 75,000 trees of a 100,000-tree posterior sample retained after burn-in. Posterior probabilities > 0.90 are shown above branches. Red dots at branch terminals indicate host species. Taxon colour refers to Acacia sections: Plurinerves (blue); Juliflorae (green); Phyllodineae (black); and Botrycephalae (red).
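Returning to the section-classification results above: besides the SH-test, section monophyly can be checked directly on any consensus tree by asking whether some clade's leaf set exactly equals the set of species assigned to a section. The sketch below does this with a tiny rooted-Newick parser (no branch lengths or internal labels); the toy tree and taxon names are illustrative only, not the study's data, and an unrooted tree would additionally require checking leaf-set complements.

```python
def parse_newick(s):
    """Parse a rooted Newick string without branch lengths into nested lists."""
    s = s.strip().rstrip(";")
    pos = 0
    def node():
        nonlocal pos
        if s[pos] == "(":
            pos += 1                      # consume '('
            children = [node()]
            while s[pos] == ",":
                pos += 1
                children.append(node())
            pos += 1                      # consume ')'
            return children
        start = pos
        while pos < len(s) and s[pos] not in ",()":
            pos += 1
        return s[start:pos]               # leaf label
    return node()

def clades(tree, out=None):
    """Collect the leaf set of every internal node."""
    if out is None:
        out = []
    if isinstance(tree, str):
        return frozenset([tree]), out
    leaves = frozenset()
    for child in tree:
        sub, _ = clades(child, out)
        leaves |= sub
    out.append(leaves)
    return leaves, out

def is_monophyletic(tree, taxa):
    _, all_clades = clades(tree)
    return frozenset(taxa) in all_clades

# Toy tree mirroring the kind of result reported above: a 'section'
# whose members do not form a clade is flagged as non-monophyletic.
toy = "((colletioides,(elata,terminalis)),(stenophylla,xiphophylla));"
tree = parse_newick(toy)
print(is_monophyletic(tree, ["elata", "terminalis"]))          # True
print(is_monophyletic(tree, ["colletioides", "stenophylla"]))  # False
```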
The Bayesian consensus tree was largely concordant with that of previous work [32] and with our parsimony and likelihood inferences (Additional files 1, 2 and 3). An important difference in our topology arises from the uncertain placement of Kladothrips antennatus with respect to the clade containing Kladothrips zygus. Previous phylogenetic inference [35] also shows poor support for this relationship despite more thorough testing of topology.

Acacia divergence timing models
We inferred divergence time estimates using Bayesian and penalised likelihood (PL) approaches. The null molecular clock hypothesis of equal evolutionary rates was rejected (P < 0.0001). The estimated sample size (ESS) performance criteria (> 1000) indicated sufficient posterior parameter sampling. A total of n = 28 × 10³ Acacia phylograms were filtered according to the topological constraint inferred with MrBayes. Of these, n = 25 Acacia PL chronograms were identical to the constraint. As this sample was not sufficient to calculate confidence intervals (the age estimates were not normally distributed), we used the geometric mean to summarise the ranges of node age estimates inferred using PL. The dates of the parallel divergence inferred from the Bayesian approach and the geometric mean of the chronograms inferred using PL were 5.6 and 7.4 million years, respectively.

Acacia thrips divergence timing models
We inferred timing estimates of Acacia thrips to generate and test relative divergence timing hypotheses (see below). The null molecular clock hypothesis was rejected (P < 0.0001). The ESS performance criteria (> 1000) indicated sufficient posterior parameter sampling. A total of n = 28 × 10³ Acacia thrips phylograms were filtered according to a topological constraint inferred with MrBayes. Of these, n = 10 were identical to the constraint. After scaling node ages, the dates for the MRCA of Acacia thrips were 14.38 Mya under the Bayesian consensus, and 25.32 Mya as the geometric mean calculated from the PL inferences.

Testing between divergence timing models
The range of divergence timing estimates represented in our BEAST and r8s inferences was summarised as divergence timing models (Table 1). The assumption of co-cladogenesis, contemporaneous MRCAs at 20 Mya, and the maximal r8s estimate of approximately 50 million years for the MRCA were also tested. Bayes factor testing (Table 2) between divergence timing models, using stepping-stone sampling of the log marginal likelihoods among our three hypotheses for the MRCA of Acacia thrips, indicated very strong support for the delayed-colonisation model in which the MRCA of Acacia thrips postdates that of Acacia (Table 2).

Discussion
Our findings indicate that the common ancestor of Acacia thrips postdates the common ancestor of Acacia. Putative absolute estimates of divergence timing indicate that thrips included Acacia in their host range approximately 14 Mya. We detected phylogenetic under-dispersion in host-species use that is consistent with i) cycles of oligophagy or polyphagy interspersed with repeated colonisations of Acacia over protracted periods before the evolution of resource specialisation; and ii) colonisation of host phenotypes that favour resource use in one environment over the other.
Table 1. Date priors for the common ancestor of Acacia thrips and a parallel divergence event used to calibrate the thrips phylogeny.

Table 2. Path and stepping-stone sampling were used to estimate the marginal likelihood of each divergence timing model for Bayes factor tests. A split in the thrips and Acacia phylogenies was treated as fixed [89].

Opportunistic and domicile-building thrips are polyphyletic groups whose common ancestors appeared between 5 and 10 Mya. The galling genus Kladothrips arose as recently as 6 Mya and represents the least uncertain shift to more stringent host-specificity by thrips and specialisation solely on Acacia. The common ancestors of the kleptoparasitic genus Koptothrips, and the gallers on whom they specialise, arose at approximately the same time. The putative date for the origin of the galling clade is of particular interest because several hypotheses posited for the adoption of this life-history strategy can now be contextualised with the evolution of the Australian environment.

Colonisation of Acacia
By definition, colonisation and the change to include a new species imply a broadening of host range and a period of oligophagy. Our ultrametric inference (Figure 3) for the transition between oligophagy (or polyphagy) at colonisation and host conservatism on Acacia is explainable in several ways: i) oligophagy or polyphagy persisted for considerable evolutionary time after colonisation, before host conservatism on Acacia and the evolution of specialised behaviour; ii) host conservatism on Acacia evolved during or shortly after the colonisation of Acacia, and specialised behaviour considerably later; or iii) host conservatism at macro-evolutionary scales has obscured patterns of recolonisation of Acacia occurring at micro-evolutionary scales. Given the estimate for the origin of Acacia at 20 Mya, our results (Figure 3) indicate that the earliest possible transition to host conservatism on Acacia by thrips occurred at approximately 14 Mya. Uncertainty in our node estimates does not exclude earlier colonisation at 16.5 Mya. Thrips colonised Acacia a considerable period after the host lineage radiated. Recolonisations might be expected to occur after initial contact with Acacia if there was an extended period before resource specialisation, and where host ranges included several plant taxa [20,36,37]. The ability to colonise a phylogenetically wider range of potential hosts is consistent with oligophagy and the relatively rapid colonisation of the Juliflorae, Plurinerves, and Phyllodineae (Figure 3). Ancestor lineages of these host sections existed before the MRCA of the gallers. This suggests host switching among distantly related species was initially accompanied by high species-specificity (e.g. [38]). There appears to have been a protracted period before specialised behaviour evolved between thrips lineages and with Acacia. The 5 million year lag between the MRCA of the gallers and their divergence from the other genera is a relatively deep split. Primary and secondary associations with Acacia appear to be derived. Therefore, it is plausible that ancestors of extant species recolonised Acacia numerous times subsequent to the evolution of host conservatism on Acacia.

Host conservatism in Acacia thrips
Conservative associations between an insect and host-plant clade have been estimated at periods from 3 Mya (psyllids, [9]) and 20 Mya (gallwasps, [39]), to 40 Mya (yucca moths, [40]) and sometime since the Cretaceous (fig wasps, [16]).
The former two studies, of parasitic associations, reported delayed colonisation of the host. The latter two associations infer co-cladogenesis with rapid colonisation scenarios and involve pollination mutualisms. Mutualisms are expected to select for more specific host conservatism due to the pollinator habit [41]. By comparison, parasitisms having relatively high species-specificity have been shown to involve switching between more distantly related plants [42]. Generally, host-range limits vary between antagonistic associations compared to symbioses and mutualisms, where narrower species-to-species dependencies are more common [43-45]. Parasites of the galling habit exhibit wider host ranges than those of their prey species and evolve host-plant conservatism as a secondary association [46,47]. Kleptoparasitic and opportunist thrips have evolved associations with Acacia as secondary hosts, presumably by targeting the domiciles of other thrips species. Ecological specialisation amongst Acacia thrips characterised by primary and secondary associations predicts that the forces selecting for host-plant conservatism will vary among the Acacia thrips, as do their host-plant ranges. Host conservatism is transitory, as the diet breadths of phytophagous insects fluctuate over time [8,24,48-50]. Thrips with strict host-plant associations are rare, exhibit a willingness to engage in feeding on a wide variety of plant families, and have similar feeding apparatus in all life stages [51]. This suggests plasticity in host-plant tolerance is possibly linked to secondary associations with food resources that have been used in the evolutionary past [48,52,53] and facilitated cycles of recolonisation. Gall-inducing thrips, apart from those on Acacia, are able to exploit multiple plant taxa [25]. This suggests host choice by thrips involves multiple evolutionarily labile traits. For example, galling by sawflies has arisen independently on multiple occasions across five plant families [54]. The nematine subfamily of sawflies that specialises on the genus Salix also has several origins of galling, but on various parts of the plant [55]. Therefore, trade-offs between plesiotypic trait compatibility among available plants [19] and selection for traits resulting in host conservatism should strongly favour thrips associations with Acacia. In other words, a broad diet breadth facilitates colonisation of new plant lineages, but selection for host conservatism develops when genetic trade-offs in performance arise on the new host.

Host conservatism and environment
Shifting to a new host plant can result from trade-offs between alternative environments associated with natural enemies [56,57] and larval or oviposition performance on alternative hosts [58-61]. Thrips colonised Acacia at a time when it presumably supported a similar diversity of insects as it does today [62]. As a result, thrips likely experienced fitness costs associated with predation or competition during colonisation. Furthermore, the host conservatism and ecological specialisation expressed by contemporary thrips species appear to have taken several million years to evolve, as Australia experienced pronounced environmental change and ecological disruption. Our estimates suggest that the common ancestors of the thrips behavioural suites arose at approximately the beginning of the Quaternary, when Australia's climate was strongly linked to glacial/interglacial cycling [63].
Before this period, Australia experienced a more general transition from humid to seasonal climates. The development of arid environments resulted in profound structural changes to animal and plant communities [64,65], including Acacia [66,67]. Performance between host lineages has been shown to respond to such habitat gradients [15,68]. Our timeline for the colonisation of Acacia coincides with the first major step towards aridity during the mid-Miocene and the development of a more acute dry season in Central Australia [63]. It is at this time that Acacia replaced Eucalyptus in the developing arid regions. Acacia represented an expanding resource with geographical range changes that potentially influenced host resource suitability and predictability. Host conservatism evolved under transient abiotic and biotic conditions in a non-random manner during the diversification of Acacia thrips. Our inferences indicate that the phylogenetic distribution of Acacia host-species compared to non-hosts is non-random. For example, one or several species of Acacia in crown clades that support thrips have intermixed lineages that are free of thrips parasitism. This form of phylogenetic under-dispersion, where host lineages are distantly related and intermixed among terminal branches, is characteristic of recolonisation episodes [23,24,48]. We suggest these patterns are robust to our incomplete sample. Acacia thrips are a species-poor group (ca. 235 spp., [32]) compared to Acacia (> 1000 spp.). Single species of Acacia are known to support up to 5 thrips species, reducing the realised number of host species even further. Similarly, at a very broad taxonomic scale, thrips radiations on several plant families have occurred with noticeable absences from others. Thrips are associated with several angiosperm families, including species of Ficus (Moraceae), Geijera parviflora (Rutaceae), and Casuarina (Casuarinaceae), as well as genera specific to mosses, conifers, and cycads [30,69,70]. Plant families with very few or no specific patterns of affiliation with thrips include Myrtaceae, Proteaceae, Asteraceae, Leguminosae, and Poaceae. The latter two families have a remarkable diversity of thrips species attracted to flowers and leaves respectively, but with no perceivable pattern of affiliation. We propose that these phylogenetic patterns of under-dispersion are indicative of host conservatism driven by biotic and abiotic environmental compatibilities subsequent to delayed colonisation.

Host conservatism and geographic distribution
Thrips lineages associated with phylogenetically isolated host species suggest that the geographic range characteristics of non-host sister-taxa are not suited to supporting Acacia thrips [71]. Acacia have typical geographical range distributions, with most species having small and intermediate range sizes and few with large distributions. The size of the host species' geographic distribution appears independent of extant thrips associations, but might not be indicative of ancestral ranges during colonisation. For example, A. oswaldii is a broadly distributed arid-zone species inhabited by galling and kleptoparasitic thrips species. Acacia oswaldii is phylogenetically distinct from sister-taxa that are not parasitised by thrips (Additional file 6), suggesting this host has geographic range characteristics suited to the maintenance of Acacia thrips populations while its sister-species do not. Acacia cuthbertsonii, A. carneorum, and A. pickardii are also phylogenetically isolated, but these species have broad as well as very narrow geographic range distributions among them.
Phylogenetically isolated clades supporting thrips, including A. triptera, A. kempeana, A. aneura, A. citrinoviridis, and A. xiphophylla, also contain species with broad and narrow geographic distributions, suggesting historical factors are important to the maintenance of Acacia thrips populations. The Acacia lineage possessing both species with phyllodes (section Phyllodineae) and those with bipinnate leaves (section Botrycephalae) is presumably unsuitable for thrips inhabitation due to geographic range characteristics, biotic associations, or heritable traits. Phyllodinous Acacia are phylogenetically and chemically related to bipinnate forms [72-74], providing some basis for the presence of heritable traits partly explaining thrips absence in this stem clade (but see below). We suggest these patterns are consistent with host conservatism among genetically similar and dissimilar hosts, with heritable and non-heritable characteristics favouring host use.

Ecological specialisation on Acacia
Host conservatism and host specialisation can be differentiated as the evolutionarily conservative association of thrips and Acacia, and the evolution of distinct phenotypes that emerge directly or indirectly as a result of host conservatism [2]. Acacia thrips exhibit diverse phenotypes that characterise distinct forms of host specialisation, which appear to have evolved in a cumulative manner on particular Acacia-related traits. Selection pressures and new ecological opportunities for specialisation arising during the course of climate transition should be dependent on stochastic and plesiotypic factors. Our timeline for the origins of the Acacia thrips genera suggests the evolution of ecological specialisation was approximately contemporaneous, occurring midway between the common ancestor and the present. Once the inclusion of a novel host-plant in the dietary range of an insect occurs, conservative interactions conceivably select for traits such as gall induction [75]. Our findings show corresponding origins of the domicile-building and kleptoparasitic thrips genera that are consistent with previous work [76], in which it was suggested that gall-inducing was a selective response to kleptoparasitism. The observation that facultative kleptoparasitism is present in some species of Koptothrips suggests an intermediate stage of specialisation, similar to opportunism, that has responded to selection on habitat. These observations make it difficult to determine whether behavioural specialisation by kleptoparasitic lineages on the domicile-building and gall-inducing thrips evolved as a consequence of biotic or abiotic pressures. Abiotic forces have a strong influence over trade-offs between heritable and non-heritable constraints on host use [15]. The physical environment and spatial context of hosts have been shown to structure insect communities [77,78]. For instance, the impetus for galling behaviour is believed to include non-mutually exclusive factors associated with avoidance of natural enemies [79], minimising environmental stress [80], or optimising nutritional choices [81]. Galling arose in Kladothrips near the Miocene-Quaternary boundary. Pronounced ecological transitions during the Quaternary would have changed the selective landscape in Australia.
Evidence of non-random host associations, such as phylogenetic under-dispersion, is also indicative of specialised behaviour as a response to habitat and resource selection [77]. This type of host-plant conservatism suggests colonisation of phenotypes that favour host-use in one environment over the other (e.g. [82]). For instance, the evolution of galling is believed to be favoured in harsh xeric environments [83], which became particularly pronounced in Australia during the Quaternary. However, Acacia thrips are more species-rich and occupy more diverse ecological roles outside the arid biome. At the other climatic extreme, Acacia thrips are absent from hydric habitats in southeastern Australia [31,84]. Social behaviour in Kladothrips arose with the evolution of a specialised defensive caste and is symptomatic of species distributed in non-arid areas. An alternative strategy exhibited by non-social Kladothrips species is the adoption of physogastry and extreme fecundity. This 'boom-or-bust' lifestyle tends to characterise arid-distributed species. These hypotheses remain untested. Behavioural differentiation between environments predicts that ecological specialisation, under conditions alternating between xeric and mesic environments, was based on selection on behaviour and habitat specialisation.

Conclusions
A considerably younger Acacia thrips common ancestor compared to that of Acacia is consistent with colonisation of a lineage that has not been used as a host in its recent evolutionary past. Presumably, either oligophagous or polyphagous ancestral thrips populations were able to feed on and recognise Acacia subsequent to the evolution of host conservatism. We propose that colonisation of Acacia was initially characterised by either oligophagy or polyphagy and by subsequent recolonisations by a number of ancestral lineages. Colonisation of phenotypically and environmentally suitable lineages occurred over a protracted period, resulting in a phylogenetically under-dispersed pattern of host conservatism. The evolution of host conservatism on suitable Acacia lineages facilitated the evolution of ecological specialisation during a period that coincided with aridification and ecological disturbance in Australia. Our findings support the hypothesis that host conservatism is a process shaped by changing abiotic and biotic forces, and ecological specialisation an additive process imposed by changing selective pressures on habitat preference and behaviour.

Methods

Phylogenetic inference of Acacia
We inferred Acacia and thrips phylogenies using parsimony and probabilistic approaches to assess topological support for both thrips and Acacia datasets and to generate a distribution of phylograms to be used in divergence time estimation (see below). Four plastid loci (matK, the rpl32-trnL intergenic spacer, the psbA-trnH intergenic spacer, and the trnL-F intron and intergenic spacer) and two nuclear loci from internal and external transcribed spacers (ITS and ETS, respectively) of Acacia were sequenced. Previous work [85] has inferred several smaller trees that included multiple exemplars of some species used in this study. Primers and PCR protocols are described in a previous study [86]. We combined new sequence data with single representatives of species from the previous study and added 61 new species that included all Acacia that thrips are known to specialise on. Together, our sample comprised 125 (12.6%) described species and two outgroup taxa.
We used Paraserianthes lophantha [87], the sister taxon of Acacia, and Pararchidendron pruinosum [85] as outgroup taxa. Probabilistic and parsimony inferences were conducted in MrBayes v.3.2.1 [88], RAxML v.7.3.0 [89], and MEGA5 v.5.05 [90]. We used jModelTest v.2.1.1 [91] to justify priors for models of sequence evolution, selected according to the Akaike and Bayesian Information Criteria (AIC & BIC; [92]). The best-fit models selected by the AIC or BIC test for each of the plastid and nuclear partitions often differed (Additional file 7: Table S1). Most of these models could not be specified in the divergence time estimation approaches, so we applied the general time reversible (GTR) model, which generates distributions of parameters that approximate sub-models of the GTR model [93]. For the MrBayes and RAxML approaches, we fitted separate model priors (GTR with a proportion of invariant sites (I) and gamma (Γ) distributed rates) to each of the plastid and nuclear loci. Each Bayesian inference was performed over two simultaneous analyses with two Markov chains. Analyses were run four times to verify the repeatability of the phylogenetic inference: two runs at 100 × 10⁶ and two at 40 × 10⁶ generations. Posterior probabilities were derived from 75,000 trees sampled from post-burn-in generations 25-100 million, after the chains had reached apparent stationarity. Convergence was assessed using the MCMC Tracer Analysis Tool v.1.5 [94] by plotting the log likelihoods to identify the point in the chain where stable values were reached. For the likelihood analyses conducted with RAxML, we implemented the rapid bootstrap analysis and search for the best-scoring tree using 1 × 10⁴ runs. Our parsimony analyses conducted with MEGA5 were implemented using the Close-Neighbor-Interchange (CNI) method with a random starting tree and 1 × 10³ bootstrap replicates. Current phylogenetic relationships of Australian Acacia are not consistent with past classifications [85,86]. In our data, Acacia diphylla is a synonym of Acacia blakei subsp. diphylla (section Juliflorae). Although recent revisions have placed older classifications into doubt, we used the commonly used taxonomic ranking of Pedley. Four main clades, comprising sections Plurinerves, Juliflorae, Botrycephalae, and Phyllodineae [33], were considered in this study. We used the SH-test [95], as implemented in RAxML, to assess the section classifications presented in Maslin (2004) against our consensus trees. We specified a constraint tree that grouped each section as a multifurcating clade using Mesquite v.2.75 [96]. The constraint tree consisted of three polytomous crown clades, each grouping the section classifications Juliflorae, Plurinerves, and Botrycephalae, and a fourth stem clade as the Phyllodineae. We used RAxML to resolve the multifurcations and optimise the topology under maximum likelihood given the sequence alignment and gene partitions. The test used 100 runs (generating 100 ML trees) and the GTR+I+Γ substitution model. Each of the resulting 100 bifurcating topologies was compared with our consensus using the SH test (Figure 4).

Phylogenetic inference of Acacia thrips
The sole dependence of extant species of Acacia thrips on Acacia might be taken as evidence for the common ancestor sharing this attribute. However, without fossil material this is difficult to test, and it might not be the case given the difficulty in accurately estimating ancestral host-ranges.
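Returning to the SH-test procedure above: the multifurcating section constraint can be generated programmatically rather than by hand in Mesquite. The sketch below writes a Newick constraint with one polytomy per crown section and the Phyllodineae as the remaining stem group; this is one plausible reading of the constraint described in the text, and the species-to-section mapping is a hypothetical fragment, not the full dataset.

```python
# Build a multifurcating Newick constraint tree: one polytomy per
# crown section, with Phyllodineae taxa left at the basal polytomy.
sections = {
    "Juliflorae":    ["A_stenophylla", "A_xiphophylla", "A_brachystachya"],
    "Plurinerves":   ["A_verniciflua", "A_howittii", "A_aspera"],
    "Botrycephalae": ["A_elata", "A_terminalis"],
    "Phyllodineae":  ["A_longifolia", "A_floribunda", "A_triptera"],
}

def polytomy(taxa):
    """A single unresolved (multifurcating) clade in Newick notation."""
    return "(" + ",".join(taxa) + ")"

crown = [polytomy(sections[s])
         for s in ("Juliflorae", "Plurinerves", "Botrycephalae")]
# Three polytomous crown clades alongside the Phyllodineae stem taxa.
constraint = "(" + ",".join(sections["Phyllodineae"] + crown) + ");"

with open("section_constraint.tre", "w") as fh:   # hypothetical filename
    fh.write(constraint + "\n")
print(constraint)
```

A file like this can then be supplied to RAxML as a topological constraint, where the multifurcations are resolved during the ML search, which is exactly the resolution step described above; the appropriate option and file format should be checked against the RAxML documentation for the version used.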
Previous work [32] inferred an Acacia thrips phylogeny using most of the data presented here. The classification of the galling species has since been revised: three genera comprising the galling species have been collapsed into the genus Kladothrips. We have added new samples of the galling species. Furthermore, the Kladothrips rugosus species complex, previously believed to be an oligophagous group, is now considered to comprise separate monophagous species. Species delimitation using molecular approaches has been conducted in previous work [97] and demonstrates that the genetic divergence thresholds between these lineages are characteristic of separate species. As such, species of this clade are undescribed and have been designated by their host-species association. Sequence data from the cytochrome oxidase one (COI) mitochondrial locus, and from the nuclear elongation factor one alpha (EF-1α) and wingless gene fragments [98], were used to reconstruct a thrips phylogeny. Full details describing primers, PCR conditions, sequencing, alignment, and substitution model priors can be found in [96]. The outgroup taxon Gynaikothrips specialises as a leaf-galler on the genus Ficus [70] and was chosen based on previous work [32]. The same phylogenetic and model-testing approaches conducted with the Acacia sequence data were repeated using the thrips data. For the MrBayes and RAxML approaches, we fitted separate GTR+I+Γ models to the 1st, 2nd, and 3rd codon positions of COI and to single partitions of EF-1α and wingless. The thrips Bayesian inferences were performed using four Markov chains. The same protocols for assessing the repeatability and stationarity of the Acacia inferences were applied.

Acacia divergence time estimation
We tested whether our Acacia consensus tree obeyed a molecular clock hypothesis using MEGA5, by comparing the ML values for our topologies with and without the molecular clock constraint under the GTR+I+Γ model of evolution. Ultrametric trees were inferred using PL as conducted in r8s v.1.8 [99] and a Bayesian approach conducted in BEAST v.1.7.2 [100]. Date calibrations were based on the most recent divergence timing estimates [101], with the MRCA of Acacia (sensu stricto) at between 14.6 and 21.2 Mya. We used a putative date of 20 million years before present as a fixed calibration for the origin of Acacia. This calibration prior was fixed to facilitate testing the relative timing between the two clades. Absolute divergence dates based on previous estimates are assumed to broadly contextualise Acacia divergence timing with changes in the Australian environment. A macrofossil of the extant species Acacia melanoxylon identified from the Pliocene [27] enabled us to compare our inferred dates with the fossil record. Maximum clade credibility trees were inferred using BEAST. The model of evolution used to infer divergence time estimates was based on the priors implemented in the MrBayes inference across the locus partitions: GTR+I+Γ, four gamma categories, and empirical base frequencies. The chain was run for 100 × 10⁶ generations and sampled every 1000th generation, and the last 75,000 trees were used for inferring ultrametric consensus trees and 95% highest posterior density intervals. We conducted several pilot runs using different priors on gene partitioning, topology constraints, and parameter distributions to estimate clock rates to use as priors in the dedicated runs, in order to meet the posterior ESS optimisation criteria.
We used the lognormal relaxed clock (with 'estimate rate') for the gene partitions and a normal distribution prior on the 'ucld.mean' for all partitions. The Yule process was used as the speciation model, with a starting ultrametric tree topology constraint from a pilot BEAST inference that used a PL tree generated with r8s. Substitution and clock models were set to unlinked across gene partitions, and linked for tree priors. We used date priors only for those topology constraints necessary to define an ingroup and to calibrate the tree for the ultrametric hypothesis comparisons tested using path and stepping-stone sampling methods (see below). The 'stem' function was activated and clades were assumed to be monophyletic. The r8s PL approach uses a data-driven cross-validation procedure to select an appropriate level of rate smoothing, given branch length estimates proportional to substitution differences.

Acacia thrips divergence time estimation
Divergence time estimates for the Acacia thrips were inferred using a nominal root age of 1. As no reliable date calibration prior for the origin of the most recent common ancestor of Acacia thrips was available, we preferred to scale the thrips trees with respect to the date of a parallel divergence event [98] and our Acacia ultrametric trees generated by BEAST and r8s (see below). Although the parallel divergence was our only reliable date prior, the use of a single, derived calibration can produce spurious root-node age estimates. The Yule process was used as the speciation model with a starting ultrametric tree topology constraint from a pilot BEAST inference that used default priors. Otherwise, the same procedures used to estimate Acacia divergence timing were implemented with the thrips data.

Figure 4. Divergence timing of Acacia. An ultrametric comparison between Acacia (above) and thrips (below). The Acacia tree is abbreviated, and coloured branches indicate host sections Plurinerves (blue), Juliflorae (green), Phyllodineae (black), and Botrycephalae (grey). The time scale in millions of years is based on molecular dating [103]. Horizontal bars on nodes indicate the 95% highest posterior density intervals. Red dashed lines indicate the calibration model priors used for the thrips inference. Behavioural categories of thrips are indicated on the right. The yellow dot indicates the node where the inferred parallel divergence occurred.

Tests of temporal & phylogenetic congruence
We used a bifurcation in both the Acacia and thrips clades that is an inferred point of parallel diversification, and therefore temporally concordant, to estimate the relative timing between the clades. To test whether the MRCA of Acacia thrips was contemporaneous with, pre-, or post-dated the MRCA of Acacia, we compared ultrametric tree inferences using the various date priors estimated for their MRCAs and the parallel divergence. All posterior trees from our thrips and Acacia MrBayes runs were filtered using a consensus topology constraint conducted in PAUP* v.4b10 [102], and divergence times were estimated from these phylograms using r8s. Dates for the parallel bifurcations of the respective social and non-social thrips clades parasitising the same sister host-clades [98] were used to scale the Acacia thrips chronograms to estimate the date of the MRCA. The parallel diversification of Acacia thrips on the stem clade comprising A. cambagei and A. harpophylla and species in the sister-clade was used as a date prior to match the divergence date of these hosts in the Acacia chronograms.
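Scaling a relative-time chronogram against a calibration node of this kind reduces to a proportional rescaling of node ages. The sketch below shows the arithmetic, assuming node ages are expressed in relative units with the root fixed at 1; the node names and relative ages are invented for illustration, with the calibration chosen so the result echoes the ~14 Mya root estimate reported above.

```python
def rescale_ages(relative_ages, calib_node, calib_age_mya):
    """Rescale relative node ages so that the calibration node (here,
    the inferred parallel-divergence node) sits at a known absolute
    date; all other node ages scale by the same factor."""
    factor = calib_age_mya / relative_ages[calib_node]
    return {node: age * factor for node, age in relative_ages.items()}

# Hypothetical relative ages (root fixed at 1.0) from a thrips chronogram.
ages = {"root": 1.0, "parallel_divergence": 0.39, "gallers_mrca": 0.42}

# Suppose the parallel divergence dates to 5.6 Mya in the Acacia tree:
scaled = rescale_ages(ages, "parallel_divergence", 5.6)
print({k: round(v, 2) for k, v in scaled.items()})
# {'root': 14.36, 'parallel_divergence': 5.6, 'gallers_mrca': 6.03}
```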
The divergence timing estimates generated from BEAST and r8s provided maximum and minimum date priors for the origin of the MRCA of Acacia thrips with respect to the MRCA of Acacia. We tested these timing hypotheses as well as the assumption of co-cladogenesis (Table 2). The different divergence timing models were compared using Bayes factors (BF), by estimating marginal likelihoods using path sampling and stepping-stone sampling conducted in BEAST [103,104]. In terms of the relative strength of a model, the ln(BF) (natural log) indicates strong (2.3-3.4), very strong (3.4-4.6), and decisive (> 4.6) evidence [105].
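Given the log marginal likelihoods estimated by path or stepping-stone sampling, the Bayes factor comparison and the verbal scale quoted above reduce to a subtraction and a lookup. The following is a minimal sketch; the log marginal likelihood values are invented for illustration and are not the study's estimates.

```python
def ln_bayes_factor(lnL_model_a, lnL_model_b):
    """ln(BF) of model A over model B from log marginal likelihoods."""
    return lnL_model_a - lnL_model_b

def strength(ln_bf):
    """Verbal scale used in the text (natural-log units)."""
    if ln_bf > 4.6:
        return "decisive"
    if ln_bf > 3.4:
        return "very strong"
    if ln_bf > 2.3:
        return "strong"
    return "weak or inconclusive"

# Hypothetical stepping-stone estimates for the three timing models.
lnL = {"contemporaneous": -20515.2, "delayed": -20510.9, "predates": -20519.8}
for rival in ("contemporaneous", "predates"):
    bf = ln_bayes_factor(lnL["delayed"], lnL[rival])
    print(f"delayed vs {rival}: ln(BF) = {bf:.1f} ({strength(bf)})")
# delayed vs contemporaneous: ln(BF) = 4.3 (very strong)
# delayed vs predates: ln(BF) = 8.9 (decisive)
```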
The Contemporary Significance of the Study of Gorz's Labor View

In his study of labor, Gorz offers a new interpretation of labor liberation for our times, criticizes the labor problems of capitalism, and puts forward a new viewpoint: the combination of labor and ecology, the combination of labor and freedom, and the combination of material and immaterial labor together form his individual view of labor. This view of labor has a certain reference significance for the realization of the Chinese dream. Under the current concept of green development, we can therefore draw on the positive elements of Gorz's labor concept: injecting ecological elements into labor, enabling people to find happiness in labor, promoting sufficient employment, and taking a new perspective on immaterial labor. Of course, we should take a critical, selective attitude towards his views in light of the concrete realities of contemporary China.

For Gorz, labor viewed from an ecological standpoint requires reducing the reach of economic rationality and exchange value, and meeting people's basic needs to the greatest extent; the aim is not to pursue the maximization of profits but to endow labor with a social-ecological direction, the purpose of which is to liberate both labor and workers. This shows that Gorz inherits the Marxist view of labor to a certain extent: Marx held that labor is the creative activity of human beings, that people can realize themselves in labor, and that labor embodies the free nature of humanity. This can prompt people to re-understand labor, free them from the imprisonment of alienated concepts of production and consumption, and limit unnecessary labor. The lesson for China undoubtedly lies in the combination of labor and ecology. In today's China, in connection with the "green development concept" put forward by the Ninth Plenary Meeting, we should maintain an awareness of ecological protection in both production and consumption and practice the idea of combining labor with ecology. In this way, we contribute to the realization of harmony among man, nature, and society.

To Get Happiness in Labor
Gorz's combination of work and happiness provides a new picture of people's lives. How do people find happiness in their personal lives through their labor? Gorz thought that labor should become an autonomous and creative activity, so that people escape the control of economic rationality; in particular, labor must not become merely a means of earning money, and people should obtain as much satisfaction in their personal lives through labor as possible, which means no longer treating work as the sole goal of life. The future socialism envisaged by Gorz is what Marx called a communist society. As Marx put it: "It will be such a union, where the free development of every human being is a condition for the free development of all" (Central Compilation & Translation Bureau, 2009, p. 53). "In Gorz's view, more and more people are not content with regular jobs, trying to find their own way of activity or lifestyle, and they have to balance their work with the pleasures of other lives" (Gorz, 1999, p. 60). Gorz regarded them as the unsung heroes of precarious work, pioneers in the choice of working hours. Therefore, laborers should arrange their time reasonably, change the concept of "work first", and seek work-life balance; enhancing the happiness of personal life has important significance for people's well-being and for the realization of the Chinese dream.
Adequate Employment Is the Guarantee of the Happiness of the Whole People
While focusing on ecological factors and the need for sustainable development in production and consumption, Gorz emphasized the importance of a "social income plan" and the establishment of a basic social income for every citizen as a better form of job security (Gorz, 1985), recognizing that adequate employment is the guarantee of people's happiness. At present, in the process of building a modern society in which diverse economic organizations coexist, we should therefore adhere to the principle that "harmonious labor relations are mutually beneficial and win-win; management creates employment opportunities for labor, and labor brings profits to management" (Wu, 2010, p. 35). Although modes of employment have diversified, employment channels have broadened, and the employment structure has been optimized, traditional labor jobs are shrinking and unemployment remains a problem. In the future, workers should be encouraged to face unemployment and independent career choice correctly, and independent entrepreneurship should be encouraged. We should strengthen the government's macro-control of employment and develop an employment policy with Chinese characteristics, for example by relying on the Internet to expand employment channels, so as to effectively improve people's employment opportunities and maintain social stability, and, on this basis, improve labor productivity and promote economic and social development and the improvement of people's livelihood in an innovation-driven manner.

Viewing Immaterial Labor From a New Perspective
According to Gorz, immaterial labor cannot be treated as traditional work. This view is based on his review of the situation of contemporary capitalism and his analysis of the new changes and forms of capitalist labor, from which he summarized its characteristics; to a certain extent, it can be understood as Gorz absorbing, in the post-industrial society, Marx's thought on "immaterial productive labor". Gorz believed that Marx had long recognized that knowledge is the main source of (maximum) productivity and wealth, and that wealth could no longer be measured by "the direct form of labor". The creation of wealth depends less and less on labor time and the amount of labor employed, and more and more on progress in science and technology. Compared with general scientific labor, direct labor and its quantity recede as the decisive factor of production, becoming an essential but secondary process. The "production process" can now no longer be equated with a "labor process" (Gorz, 2010, pp. 2-3). Similarly, Gorz said that immaterial labor can produce economic value and is a driving force of social development in the service of capital proliferation; it can thus be seen as a new form of capital's exploitation of workers, drawn by capital into the production process, gradually occupying a major position in capitalist production, and becoming capitalist productive labor in the real sense.

Conclusion
The issue of the "emancipation of labor" is a new research hotspot. Gorz's view of labor is based on the actual situation of society in the developed Western countries; he consistently explores the emancipation of labor from the standpoint of individual existentialism and puts forward a series of paths to labor liberation.
Special attention should be paid to the fact that Gorz's road of emancipation applies to the developed Western welfare states. For China, under the current concept of shared development, learning from and studying Gorz's view of labor, such as the diversity of activities, the innovation of labor, rational consumption, and the adoption of ecological technology, helps people to view labor from an ecological point of view and to re-understand it; attaching importance to the creativity of labor helps people enjoy a happy life in their labor and realize the unity of personal value and social value.
AIBP protects retinal ganglion cells against neuroinflammation and mitochondrial dysfunction in glaucomatous neurodegeneration

Glaucoma is a leading cause of blindness worldwide in individuals 60 years of age and older. Despite its high prevalence, the factors contributing to glaucoma progression are currently not well characterized. Glia-driven neuroinflammation and mitochondrial dysfunction play critical roles in glaucomatous neurodegeneration. Here, we demonstrated that elevated intraocular pressure (IOP) significantly decreased apolipoprotein A-I binding protein (AIBP; gene name Apoa1bp) in retinal ganglion cells (RGCs), but resulted in upregulation of TLR4 and IL-1β expression in Müller glia endfeet. Apoa1bp−/− mice had impaired visual function and Müller glia characterized by upregulated TLR4 activity, an impaired mitochondrial network and function, increased oxidative stress, and induced inflammatory responses. We also found that AIBP deficiency compromised the mitochondrial network and function in RGCs and exacerbated RGC vulnerability to elevated IOP. Administration of recombinant AIBP prevented RGC death and inhibited inflammatory responses and cytokine production in Müller glia in vivo. These findings indicate that AIBP protects RGCs against glia-driven neuroinflammation and mitochondrial dysfunction in glaucomatous neurodegeneration and suggest that recombinant AIBP may be a potential therapeutic agent for glaucoma.

Introduction
Glaucoma is a leading cause of irreversible blindness worldwide in individuals 60 years of age and older. Despite the high prevalence of glaucoma, the factors contributing to its progressive worsening are currently not well characterized. To date, intraocular pressure (IOP) is the only proven treatable risk factor. Eye drops or systemic administration of medications are employed to lower IOP. However, lowering IOP is often insufficient for preventing disease progression. Neuroinflammation is defined as immune responses in the central nervous system, and there is great interest in better understanding the role of glia-mediated neuroinflammation in glaucoma [1,2]. However, the interplay between glia-mediated neuroinflammation and mitochondrial dysfunction in glaucomatous neurodegeneration is poorly understood. Apolipoprotein A-I binding protein (AIBP; gene name APOA1BP) is a secreted protein that associates with apolipoprotein A-I (APOA-I) [3] and high-density lipoprotein (HDL) [4]. Human APOA1BP mRNA is ubiquitously expressed, and the AIBP protein is found in cerebrospinal fluid and urine [3,5,6]. We and others have demonstrated that the binding of AIBP to HDL facilitates cholesterol efflux from endothelial cells and macrophages, resulting in reduction of lipid rafts, inhibition of angiogenesis and atherosclerosis, and regulation of hematopoietic stem and progenitor cell fate [4,7-9]. More recently, we have shown that AIBP binds Toll-like receptor 4 (TLR4), thus mediating selective regulation of lipid rafts in activated cells. It also inhibits TLR4 dimerization, neuroinflammation, and glial activation in mouse models of neuropathic pain states [10,11]. These findings demonstrate a mechanism by which AIBP regulates neuroinflammation and suggest a therapeutic potential of AIBP for treating neuropathic pain [10]. TLR4 is an important innate immune receptor that contributes to the innate and adaptive inflammatory responses.
Upon activation, TLR4 is recruited to lipid rafts, where it dimerizes and initiates a signaling cascade leading to proinflammatory responses [11-13]. Previous studies have demonstrated that TLR4-dependent signaling induces the IL-1β cascade in elevated IOP-induced acute glaucoma [14,15]. Evidence from our group and others strongly indicates that mitochondrial dysfunction and metabolic stress caused by glaucomatous insults, such as elevated IOP and oxidative stress, are critical to the loss of RGCs in experimental glaucoma [16-19]. TLR4 is also associated with mitochondrial damage caused by intracellular reactive oxygen species (ROS) and defective mitochondrial dynamics [20,21]. In transfected cells, AIBP is reported to localize to mitochondria [22], but potential mechanisms connecting AIBP, TLR4 signaling, mitochondrial dysfunction, and neuroinflammation in glaucoma remain to be elucidated. Here, we demonstrate that AIBP plays a critical role in protection against neuroinflammation and mitochondrial dysfunction during glaucomatous neurodegeneration. Using systemic AIBP knockout (Apoa1bp−/−) mice, we show that AIBP deficiency triggers mitochondrial dysfunction in both RGCs and Müller glia. It also increases TLR4 and IL-1β expression in Müller glia endfeet, leading to oxidative stress, RGC death and visual dysfunction. Moreover, AIBP deficiency exacerbates vulnerability to elevated IOP-induced RGC death. In particular, AIBP treatment inhibits inflammatory responses and protects RGCs against elevated IOP. These results suggest that AIBP has therapeutic potential for restraining excessive mitochondrial dysfunction and neuroinflammation in glaucomatous neurodegeneration.

Human tissue samples
Human retina tissue sections were obtained from a normal donor (age 81 years) and a patient with glaucoma (age 91 years) (San Diego Eye Bank, CA, USA) under a protocol approved by the University of California, San Diego Human Research Protection Program. The normal donor had no history of eye disease, diabetes, or chronic central nervous system disease.

Animals
Adult male and female DBA/2J and age-matched DBA/2J-Gpnmb+ (D2-Gpnmb+) mice (The Jackson Laboratory, ME, USA), and WT and Apoa1bp−/− mice, were housed in covered cages, fed a standard rodent diet ad libitum, and kept on a 12 h light/12 h dark cycle. C57BL/6J mice were initially purchased from the Jackson Laboratory, bred in-house for experiments, and used as wild-type (WT) mice. Apoa1bp−/− mice on a C57BL/6J background were generated in our group as previously reported [8,23]. Animals were assigned randomly to experimental and control groups. To investigate the effect of IOP elevation and/or AIBP deficiency, 10 month-old DBA/2J and age-matched D2-Gpnmb+ mice, and 4 month-old WT and age-matched Apoa1bp−/− mice, were used. Behavioral responses and visual function were studied with 3-4 month-old male and female mice. All procedures concerning animals were in accordance with the Association for Research in Vision and Ophthalmology Statement for the Use of Animals in Ophthalmic and Vision Research, and under protocols approved by the Institutional Animal Care and Use Committee at the University of California, San Diego (USA).

Induction of acute IOP elevation
Mice were anesthetized by an intraperitoneal (IP) injection of a cocktail of ketamine (100 mg/kg, Ketaset; Fort Dodge Animal Health, IA, USA) and xylazine (9 mg/kg, TranquiVed; VEDCO Inc., MO, USA). Eyes were also treated with 1% proparacaine drops.
Induction of acute IOP elevation was performed as previously described [24]. Briefly, a 30-gauge needle connected by flexible tubing to a saline reservoir was inserted into the anterior chamber of the right eye. By raising the reservoir, IOP was elevated to 70-80 mmHg for 50 min. Sham treatment was performed in the contralateral eyes by insertion of a needle into the anterior chamber without saline injection. Recirculation started immediately after removal of the cannula, and the IOP decreased to normal values within 5 min. Mice were anesthetized by an IP injection of a cocktail of ketamine/xylazine as described above prior to cervical dislocation at different time points for tissue preparation after reperfusion: 1 day and 4 weeks. Retinal ischemia was confirmed by observing whitening of the iris and loss of the red reflex of the retina. IOP was measured with a tonometer (icare TONOVET, Vantaa, Finland) during IOP elevation. Contralateral control retinas without IOP elevation were used as sham controls.

IOP measurement
IOP elevation onset typically occurs between 5 and 7 months of age, and by 9-10 months of age, IOP-linked optic nerve axon loss is well advanced [16,25]. IOP measurement was performed as previously described [16,25]. Glaucomatous DBA/2J mice with confirmed IOP elevation were obtained in 65.3% (64/98) of animals at 10 months of age [25]. Our previous study showed that mean IOP was 15 ± 1.8 mmHg in 3 month-old DBA/2J mice and that spontaneous IOP elevation typically began by 6-8 months. The peak of IOP elevation was 21.5 ± 4.5 mmHg in the right eyes and 19.9 ± 3.7 mmHg in the left eyes of 10 month-old DBA/2J mice [25]. As has been reported previously [26,27], substantial optic nerve damage, including axon loss, was observed in 10 month-old glaucomatous DBA/2J mice, confirming the presence of acquired optic neuropathy. Each of the 10 month-old DBA/2J mice used in this study had a single IOP measurement (to confirm development of spontaneous IOP elevation exceeding 20 mmHg) (n = 5 for selected DBA/2J mice). Also, each of the non-glaucomatous control C57BL/6 or D2-Gpnmb+ mice (n = 5) used in this study had a single IOP measurement. For WT and Apoa1bp−/− mice, IOP was measured with a tonometer (icare TONOVET) as described above.

Recombinant AIBP
N-terminal His-tagged AIBP was produced in a baculovirus/insect cell expression system to allow for post-translational modification and to ensure endotoxin-free preparation, as previously described [9,11]. AIBP protein was purified using a Ni-NTA agarose column (Qiagen, CA, USA) eluted with imidazole. Purified AIBP was dialyzed against PBS, and its concentration was measured. Aliquots were stored at −80 °C.

Tissue preparation
Mice were anesthetized by an IP injection of a cocktail of ketamine/xylazine as described above prior to cervical dislocation. For immunohistochemistry, the retinas and superior colliculus (SC) tissues were dissected from the choroids and fixed with 4% paraformaldehyde (Sigma) in phosphate-buffered saline (PBS, pH 7.4, Sigma) for 2 h at 4 °C. Retinas and SCs were washed several times with PBS, dehydrated through graded levels of ethanol, and embedded in polyester wax. For EM, the eyes were fixed via cardiac perfusion with 2% paraformaldehyde and 2.5% glutaraldehyde (Ted Pella, CA, USA) in 0.15 M sodium cacodylate (pH 7.4, Sigma) solution at 37 °C and placed in precooled fixative of the same composition on ice for 1 h. As described below, this procedure was used to optimize mitochondrial structural preservation and membrane contrast.
For Western blot and PCR analyses, extracted retinas were used immediately.

Western blot analyses

Harvested retinas were homogenized for 1 min on ice in a modified RIPA lysis buffer (#9806, Cell Signaling Technology, MA, USA) containing complete protease inhibitor cocktail (#HY-K0010, MedChemExpress, NJ, USA). The lysates were then centrifuged at 15,000 × g for 15 min, and protein amounts in the supernatants were measured by Bradford assay. Proteins (10-20 μg) were run on a NuPAGE Bis-Tris gel (Invitrogen, CA, USA) and transferred to polyvinylidene difluoride membranes (GE Healthcare Bio-Science, NJ, USA). The membranes were blocked with 5% non-fat dry milk in PBS/0.1% Tween-20 (PBS-T) for 1 h at room temperature and incubated with primary antibodies (sTable 1) overnight at 4 °C. Membranes were washed three times with PBS-T, then incubated with horseradish peroxidase-conjugated secondary antibodies (Bio-Rad, CA, USA) for 1 h at room temperature. Membranes were developed using an enhanced chemiluminescence substrate system. The images were captured using a UVP imaging system (UVP LLC, CA, USA).

Immunohistochemistry

Immunohistochemical staining of 7 μm wax sections of full-thickness retina was performed. Sections from wax blocks from each group (n = 4 retinas/group) were used for immunohistochemical analysis. To prevent non-specific background, tissues were incubated in 1% bovine serum albumin (BSA, Sigma)/PBS for 1 h at room temperature before incubation with the primary antibodies for 16 h at 4 °C. After several wash steps, the tissues were incubated with the secondary antibodies (sTable 1) for 4 h at 4 °C and subsequently washed with PBS. The sections were counterstained with the nucleic acid stain Hoechst 33342 (1 μg/ml; Invitrogen) in PBS. Images were acquired with an Olympus FluoView1000 confocal microscope (Olympus, Tokyo, Japan) or a Leica SPE-II confocal microscope (Leica, Wetzlar, Germany). For each target protein, fluorescent integrated intensity in pixels per area was measured using ImageJ software. All imaging parameters remained the same, and intensities were corrected by background subtraction.

Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining

TUNEL staining was performed using the In Situ Cell Death Detection Kit (TMR red, Roche Biochemicals, IN, USA) as previously described [28,29]. After rinsing in PBS, the sections were incubated with TUNEL mixture in reaction buffer for 60 min at 37 °C. To count TUNEL-positive cells, the retina was divided into three layers: ganglion cell layer (GCL), inner nuclear layer (INL) and outer nuclear layer (ONL). To determine whether TUNEL-positive cells were RGCs, we performed immunohistochemistry before TUNEL staining using an antibody against RNA-binding protein with multiple splicing (RBPMS) as described above. The sections were counterstained with Hoechst 33342 (1 μg/ml; Invitrogen) in PBS as described above. TUNEL-positive cells were counted in 5 microscopic fields (20×) per condition (n = 5 retinas) by two investigators in a masked fashion, and the scores were averaged. Images were acquired with an Olympus FluoView1000 confocal microscope (Olympus).

Whole-mount immunohistochemistry and RGC counting

Retinas from enucleated eyes of WT and Apoa1bp−/− mice were dissected as flattened whole-mounts. Retinas were immersed in PBS containing 30% sucrose for 24 h at 4 °C.
The retinas were blocked in PBS containing 3% donkey serum, 1% bovine serum albumin, 1% fish gelatin and 0.1% Triton X-100, and incubated with primary antibodies (sTable 1) for 3 days at 4 °C. After several wash steps, the tissues were incubated with the secondary antibodies (sTable 1) for 24 h and subsequently washed with PBS. Images were captured under fluorescence microscopy using a Nikon ECLIPSE microscope (E800; Nikon Instruments Inc., NY, USA) equipped with a digital camera (SPOT Imaging, MI, USA) or an Olympus FluoView1000 confocal microscope (Olympus). Image exposures were the same for all tissue sections, and images were acquired using Simple PCI version 6.0 software (Compix Inc.). To count RGCs labeled with Brn3a, each retinal quadrant was divided into three zones: central, middle, and peripheral retina (one sixth, three sixths, and five sixths of the retinal radius). RGC densities were measured in 12 distinct areas (one area each at central, middle, and peripheral locations per retinal quadrant) per condition by two investigators in a masked fashion, and the scores were averaged.

Serial block-face scanning electron microscopy (SBEM)

Retina tissues were washed with cacodylate buffer for 2 h at 4 °C and then placed into cacodylate buffer containing 2 mM CaCl2 and 2% OsO4/1.5% potassium ferrocyanide as previously described [16]. The tissues were left for 2 h at room temperature. After thorough washing in double-distilled water, the tissues were placed into 0.05% thiocarbohydrazide for 30 min. The tissues were again washed and then stained with 2% aqueous OsO4 for 1 h. The tissues were washed and then placed into 2% aqueous uranyl acetate overnight at 4 °C. The tissues were washed with water at room temperature and then stained with en bloc lead aspartate for 30 min at 60 °C. The tissues were washed with water and then dehydrated on ice in 50%, 70%, 90%, 100%, and 100% ethanol solutions for 10 min at each step. The tissues were then washed twice in dry acetone and placed into 50:50 Durcupan ACM:acetone overnight. The tissues were transferred to 100% Durcupan resin overnight. The tissues were then embedded and left in an oven at 60 °C for 72 h. SBEM was performed on a Merlin scanning electron microscope (Zeiss, Oberkochen, Germany) equipped with a 3View2XP and an OnPoint backscatter detector (Gatan, CA, USA). The retina volumes were collected at 2.5 kV accelerating voltage, with a pixel dwell time of 0.5 μs. The raster size was 20k × 5k, with 3.5 nm pixels and a 50 nm z step size. Once a volume was collected, the histograms for the tissues throughout the volume stack were normalized to correct for drift in image intensity during acquisition. Digital Micrograph files (.dm4) were normalized using Digital Micrograph and then converted to MRC format. The stacks were converted to eight bit, and volumes were manually traced for reconstruction and analysis using IMOD software (http://bio3d.colorado.edu/imod/).

3DEM tomography

EM tomography experiments were conducted on an FEI Titan Halo operating in Scanning Transmission Electron Microscope mode at 300 kV, which can resolve micrometer-thick plastic-embedded specimens down to nanoscale spatial resolution, as described previously. Vertical sections of retina tissues from each group were cut at a thickness of 750 nm, and electron tomography was performed following a previously described 4-tilt series scheme, with the specimen tilted from −60° to +60° every 0.5° at four evenly distributed azimuthal angle positions.
The magnification was 28,500× and the pixel resolution was 4.2 nm. The IMOD package was used for alignment, reconstruction and volume segmentation. Volume segmentation was performed by manual tracing of membranes in the planes of highest resolution with the Drawing Tools and Interpolator plug-ins [16,25,30]. The reconstructions and surface-rendered volumes were visualized using 3DMOD. Measurements of mitochondrial outer membrane, inner boundary membrane (IBM), and cristae membrane surface areas and volumes were made within segmented volumes using IMODinfo. These were used to determine the crista density, defined as the ratio of the sum of the cristae membrane surface areas to the mitochondrial outer membrane surface area.

Energy calculations

We used the biophysical modeling of Song et al. [31], which takes into account spatially accurate geometric representations of a crista, the inner boundary membrane and the crista junction to estimate the rate of mitochondrial ATP generation. The validity of this modeling was supported by its prediction of the higher proton motive force on cristae membranes, the effect of crista surface-to-volume ratio on this force, and the effect of crista membrane surface area on the rate of ATP synthesis, all integral to the current paradigm of mitochondrial structure/function and ATP mechanics. The first variable needed in the model is the crista shape factor, defined as the crista membrane surface area divided by the crista volume. Measurements of crista surface area and volume were made from 3DEM mitochondrial volumes using ImageJ measurement tools. The rate of ATP production per crista was derived from Fig. 5b of Song et al. [31], which plots the rate of ATP production as a function of crista surface area for differing values of the crista shape factor. This rate was then summed over all the cristae in a given mitochondrion to produce the rate of ATP production per mitochondrion. Because mitochondria differ in size, the rate of ATP production was also calculated per unit mitochondrial volume (μm³) after measuring each mitochondrion's volume using ImageJ tools. Also, because mitochondria produce most of their ATP for the cell's use, we calculated the amount of ATP available per second per unit cytoplasmic volume (μm³) by multiplying the mean ATP production rate per mitochondrion by the number of mitochondria in the cytoplasmic volume (nucleus excluded) and dividing by the cytoplasmic volume, after measuring each cell's volume using ImageJ tools. There are two caveats with the energy calculations. First, this analysis does not take into account the ATP produced by glycolysis. RGCs appear to favor oxidative phosphorylation [32]. In vitro studies indicate that Müller glia may be predominantly glycolytic. However, Müller glia metabolism may shift from glycolysis to oxidative phosphorylation during stress, such as when energy demand exceeds supply or under oxidative stress. Thus, a switch in Müller glia metabolism may be a biomarker for pathophysiology, at least in vitro. Second, mitochondrial respiration can be governed by various states, the most common in vitro being state 3 (active) and state 4 (resting). State 3 respiration can have an ATP production rate up to four times that of state 4. If the in vivo (or, in our case, ex vivo/in silico) energy state follows the in vitro findings, both RGC and Müller glia mitochondria exhibit state 4 respiration.
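To make the bookkeeping above concrete, the following is a minimal illustrative sketch of the crista-density, shape-factor and ATP-rate calculations. All numeric values are invented placeholders, and the per-crista rate function is a linear stand-in for reading values off the curves of Song et al. [31]; it is not the model itself.

```python
# Illustrative sketch of the crista-based ATP bookkeeping described above.
# All numbers are made-up placeholders; atp_rate_per_crista() stands in for
# reading the rate off Fig. 5b of Song et al. [31] and is NOT the real model.

def crista_density(cristae_areas, outer_membrane_area):
    # Sum of cristae membrane surface areas / outer membrane surface area.
    return sum(cristae_areas) / outer_membrane_area

def crista_shape_factor(area_um2, volume_um3):
    # Crista membrane surface area divided by crista volume.
    return area_um2 / volume_um3

def atp_rate_per_crista(area_um2, shape_factor):
    # Placeholder lookup: assumed linear purely so the example runs.
    K = 1.0e5  # hypothetical constant (ATP/s per um^2 per unit shape factor)
    return K * area_um2 * shape_factor

# One mitochondrion with three cristae: (surface area in um^2, volume in um^3)
cristae = [(0.20, 0.010), (0.15, 0.008), (0.25, 0.012)]

atp_per_mito = sum(atp_rate_per_crista(a, crista_shape_factor(a, v))
                   for a, v in cristae)

mito_volume = 0.5    # um^3, measured for this mitochondrion
cyto_volume = 150.0  # um^3, cell volume with the nucleus excluded
n_mito = 40          # mitochondria counted in that cytoplasmic volume

print("crista density:", crista_density([a for a, _ in cristae], 1.1))
print("ATP/s per mitochondrion:", atp_per_mito)
print("ATP/s per um^3 of mitochondrion:", atp_per_mito / mito_volume)
print("ATP/s per um^3 of cytoplasm:", atp_per_mito * n_mito / cyto_volume)
```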
Taking the estimates and uncertainties into consideration, it is important to understand that the calculations of ATP production rate may have only "first-order" accuracy. The ATP production rates of Müller glia endfeet and RGC mitochondria are in agreement with estimates for other cell types [33,34]. Note that the rate of ATP production per mitochondrion is higher for Müller glia endfeet and RGCs because their mitochondria are larger, especially compared with the small synaptic mitochondria examined by Garcia and coworkers.

Quantitative PCR analyses

Total RNA from the retina was isolated using NucleoSpin RNA columns (Clontech, CA, USA). Isolated RNA was reverse transcribed using RNA to cDNA EcoDry (Clontech) following the manufacturer's instructions. Quantitative PCR (qPCR) was performed using the KAPA SYBR FAST Universal qPCR kit (KAPA Biosystems, KK4602, Roche Diagnostics, IN, USA), with primers ordered from Integrated DNA Technologies (IDT, CA, USA), on a Rotor-Gene Q thermocycler (Qiagen). The qPCR was performed with cDNAs synthesized from 1 μg of total RNA from each group as template and specific primers (sTable 2).

Virtual optomotor response analysis

Spatial visual function was assessed on a virtual optomotor system (OptoMotry; CerebralMechanics Inc., AB, Canada) [35]. Unanesthetized mice were placed on an unrestricted platform in the center of a virtual cylinder comprised of four monitors arranged in a square (arena) that projected a sinusoidal grating (i.e., white versus black vertical bars) rotating at 12 deg/sec. Mice were monitored by a camera mounted at the top of the arena while a cursor placed on the forehead centered the rotation of the cylinder at the animal's viewing position. To assess visual acuity, tracking was scored when the mouse stopped moving its body and only head-tracking movement was observed. The spatial frequency threshold, a measure of visual acuity, was determined automatically with the accompanying OptoMotry software, which uses a step-wise paradigm based on head-tracking movements at 100% contrast. Spatial frequency began at 0.042 cyc/deg and was gradually increased until head movement was no longer observed.

Visual evoked potential (VEP) analysis

VEP was measured as previously described [36,37]. Mice were dark-adapted in a dark procedure room at the vivarium for less than 12 h. Mice were prepared for recording under dim red light and anesthetized with an IP injection of a mixture of ketamine/xylazine as described above. Pupils were dilated using equal parts of topical phenylephrine (2.5%) and tropicamide (1%). Proparacaine (0.5%) was used as a topical anesthetic to avoid blinking, and a drop of lubricant was applied frequently to the cornea to prevent dehydration and allow electrical contact with the recording electrode (a disposable gold wire loop). The top of the mouse's head was cleaned with an antiseptic solution. A scalpel was used to incise the scalp skin, and a metal electrode was inserted into the primary visual cortex through the skull, 0.8 mm deep from the cranial surface and 2.3 mm lateral to lambda. A platinum subdermal needle (Grass Telefactor) was inserted through the animal's mouth as a reference and through the tail as ground. The measurements commenced when the baseline waveform became stable, 10-15 s after attaching the electrodes. Flashes of light at 2 log cd·s/m² were delivered through a full-field Ganzfeld bowl at 2 Hz.
The signal was amplified, digitally processed by the software (Veris Instruments, OR, USA), then exported, and peak-to-peak responses were analyzed in Excel (Microsoft). To isolate the VEP of the measured eye from the crossed signal originating in the contralateral eye, a black aluminum foil eyepatch was placed over the eye not undergoing measurement. For each eye, the peak-to-peak response amplitude of the major component P1-N1 in IOP eyes was compared to that of the contralateral non-IOP controls. All recordings were carried out with the same stimulus intensity. The average signals for each group were compared with respect to both amplitude and latency.

Cholera toxin-B (CTB) labeling

Mice were anesthetized with an IP injection of a mixture of ketamine/xylazine as described above and topical 1% proparacaine eye drops. A Hamilton syringe was used to inject 1 μL of Alexa Fluor 594-conjugated CTB (Invitrogen) into the vitreous humor. Injections were given slowly over 1 min, and the needle was maintained in position for an additional 5 min to minimize CTB loss through the injection tract. At 3 days after injection, the mice were fixed via cardiac perfusion with 4% paraformaldehyde (Ted Pella) following an IP injection of a mixture of ketamine/xylazine. After perfusion, the SC tissues were dissected and immersed in PBS containing 30% sucrose for 24 h at 4 °C. The SC tissues were coronally sectioned at 50 μm using a Leica cryostat (Wetzlar, Germany). Thirty representative sections were mounted on slides, and images were acquired with an Olympus FluoView1000 confocal microscope (Olympus). The area densities in the images were analyzed using ImageJ (http://rsb.info.nih.gov/ij/; provided in the public domain by the National Institutes of Health, MD, USA) and Imaris software (Bitplane Inc., MA, USA).

Statistical analysis

For comparisons between two groups with small numbers of samples relative to a fixed control, statistical analysis was performed using a nonparametric analog of the one-sample t-test. For comparisons between two independent groups, a two-tailed Student's t-test was performed. For multiple-group comparisons, we used either one-way or two-way ANOVA, using GraphPad Prism (GraphPad, CA, USA). A P value less than 0.05 was considered statistically significant.

Reduced AIBP expression in RGCs in glaucomatous retinas

In neonatal mice, Apoa1bp mRNA is expressed in RGCs [8]. To determine whether elevated pressure alters the expression level of AIBP in murine RGCs, we first transiently induced acute IOP elevation in the eyes of normal C57BL/6J mice by cannulation of the anterior chamber connected to an elevated saline reservoir, maintaining an IOP of 70-80 mmHg for 50 min [24]. This model is also widely used by others to study RGC death and survival in degenerative retinal diseases, including acute glaucoma [14,38]. We found that elevated IOP significantly reduced Apoa1bp gene and AIBP protein expression in the retina at 24 h compared with sham control retina (Fig. 1A and B). Immunohistochemical analysis showed that AIBP immunoreactivity was localized in the outer plexiform layer (OPL), INL, inner plexiform layer (IPL), and GCL of control mice. In the GCL, AIBP immunoreactivity was present in RGC somas and axons, which were labeled with neuron-specific β-III tubulin (TUJ1), a marker for RGCs. Consistently, elevated IOP decreased AIBP immunoreactivity in the OPL and inner retinal layers (Fig. 1C). We further cultured primary RGCs and exposed the cells to elevated hydrostatic pressure (HP, 30 mmHg) for 3 days [16].
Notably, elevated HP exposure significantly reduced AIBP protein expression in RGCs (Fig. 1D). We next examined whether chronic IOP elevation alters AIBP protein expression in the retina using glaucomatous DBA/2J mice, which spontaneously develop elevated IOP and glaucomatous damage with age, and age-matched control D2-Gpnmb+ mice [39,40]. Interestingly, we found that the glaucomatous DBA/2J retina showed a pattern of reduced AIBP immunoreactivity similar to that in our acute model of IOP elevation (Fig. 1E). In D2-Gpnmb+ mice, AIBP immunoreactivity was present not only in RGC somas in the GCL but also in the IPL (Fig. 1F). In contrast, AIBP immunoreactivity was significantly decreased in the inner retina of glaucomatous DBA/2J mice (Fig. 1F and G). Since AIBP mediates the stabilization of ATP-binding cassette transporter A1 (ABCA1) by facilitating apoA-I binding to ABCA1 and prevents ABCA1 degradation via the ubiquitination pathway [41], we further tested whether chronic IOP elevation also alters the expression level of ABCA1 protein. In D2-Gpnmb+ mice, ABCA1 immunoreactivity was present in Brn3a-positive RGCs in the GCL and in the OPL (Fig. 1H). In contrast, ABCA1 immunoreactivity was markedly diminished in the neurons of the GCL, including Brn3a-positive RGCs, of the glaucomatous DBA/2J retina (Fig. 1H and I).

AIBP deficiency exacerbates RGC vulnerability to elevated IOP and triggers visual dysfunction

To test the hypothesis that AIBP deficiency plays an important role in glaucomatous RGC degeneration, we used Apoa1bp−/− mice, which are viable and fertile and have no apparent morphological defects compared with control mice under naïve conditions [8]. As shown in Fig. 2, we induced IOP elevation in WT and Apoa1bp−/− mice and assessed RGC loss at 4 weeks after IOP elevation. There was no statistically significant difference in IOP between WT and naïve Apoa1bp−/− mice. The mean IOP of contralateral control eyes was 9-10 mmHg, and IOP was elevated in ipsilateral eyes to 70-75 mmHg in WT and Apoa1bp−/− mice (n = 15 mice; Fig. 2A). Remarkably, we found that elevated IOP significantly enhanced RGC loss in all retinal areas of Apoa1bp−/− mice compared with sham control WT or naïve Apoa1bp−/− retinas (Fig. 2B and C). In addition, no statistically significant differences in RGC number were detected between the retinas of sham control WT and naïve Apoa1bp−/− mice (Fig. 2B and C). To test the effect of AIBP deficiency on visual function, we next measured 1) the maximum spatial frequency that could elicit head tracking ("acuity") in a virtual-reality optomotor system and 2) central visual function using VEP, a measurement of the electrical signal recorded at the scalp over the occipital cortex in response to a light stimulus. In the absence of AIBP, we found a significant reduction of visual acuity, reflected by decreased spatial frequency thresholds, in both male and female naïve Apoa1bp−/− mice (Fig. 2D). However, there were no statistically significant differences in VEP P1-N1 potentials or latency in naïve Apoa1bp−/− mice compared with WT mice (Fig. 2E and F). Because VEP is considered valid for analyzing and predicting visual properties in glaucoma patients with severe visual impairment [42], our results suggest that while AIBP deficiency triggers spatial vision dysfunction in the eye, it may not be sufficient to induce severe progression of optic nerve damage.
Additionally, we determined whether AIBP deficiency alters axonal transport from the retina to the SC by intravitreal injection of an anterograde tracer, CTB, into the eyes of WT and naïve Apoa1bp−/− mice. At 3 days after injection, we measured anterograde tracing of CTB to the SC. We found no statistically significant difference in the density of CTB labeling in the SC between WT and naïve Apoa1bp−/− mice (Fig. 2G and H).

Increased TLR4 and IL-1β expression in glaucomatous and Apoa1bp−/− Müller glia endfeet

AIBP plays a unique role in targeting cholesterol efflux machinery to TLR4-occupied inflammarafts [10,11]. Evidence from clinical and animal studies indicates that TLR4-dependent signaling is an important factor in the pathogenesis of primary open-angle glaucoma (POAG) and that this signaling is associated with activated glial cells and contributes to inflammatory responses in experimental glaucoma [43][44][45]. First, we determined the expression level and distribution of TLR4 and IL-1β proteins in glaucomatous retinas from human patients with POAG and from DBA/2J mice. Remarkably, we observed significantly increased TLR4 and IL-1β immunoreactivity in glutamine synthetase (GS)-positive Müller glia in both glaucomatous human and DBA/2J mouse retinas compared with control retinas (Fig. 3A and B). In the glaucomatous human retina, TLR4 immunoreactivity was increased in the endfeet of Müller glia in the GCL and nerve fiber layer (NFL) compared with normal retina, while IL-1β immunoreactivity was increased in both the processes and the endfeet of Müller glia in the IPL, GCL and NFL (Fig. 3A and B). In the glaucomatous DBA/2J mouse retina, both TLR4 and IL-1β immunoreactivities were significantly increased in the endfeet of Müller glia in the GCL but depleted in the processes of Müller glia compared with the age-matched control D2-Gpnmb+ mouse retina (Fig. 3A and B). Consistent with these results, glaucomatous retinas displayed significantly increased relative fluorescence intensity of both TLR4 and IL-1β proteins in the endfeet of Müller glia in the GCL compared with control Müller glia (Fig. 3C and D). Second, we determined whether AIBP deficiency alters TLR4 and IL-1β protein expression in Müller glia using Apoa1bp−/− mice. We found that AIBP deficiency not only significantly increased TLR4 and IL-1β immunoreactivities in the endfeet of Müller glia in the GCL but also produced the characteristic pattern of increased TLR4 and IL-1β expression in the processes of Müller glia in the IPL observed in DBA/2J mouse and human glaucomatous retinas (Fig. 3A-D).

AIBP deficiency induces mitochondrial fragmentation and reduces ATP production in Müller glia

TLR4 is associated with mitochondrial damage caused by intracellular ROS and defective mitochondrial dynamics [20,21]. Using naïve Apoa1bp−/− mice, we investigated whether AIBP contributes to the regulation of mitochondrial structure and function in the endfeet of Müller glia. 3DEM (Fig. 4A and B, and sFig. 1) demonstrated lower crista density and dark outer-membrane onion-like swirls in Apoa1bp−/− mitochondria (Fig. 4B and sFig. 1D), although these were fewer in number than in RGCs. Interestingly, we also found ring-shaped mitochondria, a hallmark of mitochondrial stress (sFig. 1E) [46], as well as lower rough endoplasmic reticulum (ER) density and dilated ER strands (sFig. 1D and E). Mitochondria were traced in yellow to aid identification, and those with lower crista density are indicated in the Apoa1bp−/− samples (Fig. 4B and sFig. 1).
Mitochondria in Müller glia have rarely been studied by EM at high resolution or in 3D in a quantitative manner [47]. Reconstructions showed examples of long tubular mitochondria in WT but small rounded mitochondria in Apoa1bp−/− Müller glia (Fig. 4C and D, and sMovies 1 and 2). Because each mitochondrion spanned multiple image planes at variable cutting angles (Fig. 4E and F), to perform more accurate length measurements, mitochondria were segmented by drawing a series of connected spheres centered along the length of each mitochondrion using IMOD open contours (Fig. 4G-I).

AIBP deficiency impairs mitochondrial dynamics and OXPHOS activity in the retina

AIBP was shown to localize to mitochondria in transfected cells [22]. At 24 h after acute IOP elevation, we found that AIBP protein expression was significantly decreased in the mitochondrial fraction (Fig. 5A), indicating that elevated IOP alters mitochondrial AIBP expression. Thus, we determined the effect of AIBP deficiency on mitochondrial dynamics and function in the retina. We first found significant decreases of the mitochondrial fusion proteins optic atrophy type 1 (OPA1) and mitofusin 2 (MFN2) in naïve Apoa1bp−/− retina compared with WT retina (Fig. 5B). In WT retina, we observed that OPA1 immunoreactivity was strongly present in Brn3a-positive RGCs, colocalizing with cytochrome c immunoreactivity (Fig. 5C). In contrast, AIBP deficiency diminished OPA1 immunoreactivity in the OPL and INL, as well as in RGCs of the GCL (Fig. 5C). Interestingly, we also observed that AIBP deficiency induced an increase of OPA1 immunoreactivity in GS-positive Müller glia (sFig. 2A). In the absence of AIBP, we next found significant decreases in the levels of total mitochondrial fission protein dynamin-related protein 1 (DRP1) and of DRP1 phosphorylated at serine 637 in the retina (Fig. 5D). Immunohistochemical analysis showed that DRP1 immunoreactivity was strongly present in the IPL and in Brn3a-positive RGCs of the GCL in WT retina (Fig. 5E). In the absence of AIBP, total DRP1 immunoreactivity was consistently diminished in the OPL and inner retinal layers, especially in RGC somas and the IPL (Fig. 5E). In contrast to sFig. 2A, DRP1 immunoreactivity was not detectable in Apoa1bp−/− Müller glia (sFig. 2B). Additionally, there were no statistically significant differences in the expression of mitochondrial dynamics-related genes (Opa1, Mfn2 and Drp1) between WT and naïve Apoa1bp−/− retinas (sFig. 3A). We further determined whether AIBP is involved in mitochondrial oxidative phosphorylation (OXPHOS) in the retina and found that the protein expression of OXPHOS complexes (Cxs) was significantly decreased in Apoa1bp−/− retina (Fig. 5F).

AIBP deficiency triggers mitochondrial fragmentation and reduces ATP production in RGCs

To determine whether AIBP deficiency directly affects mitochondrial structure and function in RGCs, we assessed the structural and functional changes of mitochondria in Apoa1bp−/− RGC somas. Applying 3DEM, we found that AIBP deficiency principally caused swelling and rounding of mitochondria and altered ER structure in RGC somas (Fig. 6A and B and sFig. 4A and B). Even though many of the ER strands were dilated, a hallmark of ER stress, ER-facilitated mitochondrial fission did not appear to be impaired (Fig. 6C-F and sFig. 4B). As with Müller glia endfeet, abnormal mitochondria with localized structural perturbation of the outer membrane, usually manifesting as onion-like swirling membranes, were commonly seen in Apoa1bp−/− RGC somas (Fig. 6G and H).
Long extended axons distinguish RGCs from displaced amacrine cells in the GCL. Variable RGC mitochondrial structures were rendered and analyzed (insets in Fig. 6I and J; sMovies 3 and 4). 3D volumes showed long tubular and branched mitochondria in WT RGC somas (Fig. 6I), whereas small, rounded and occasionally branched mitochondria were observed in Apoa1bp−/− RGC somas (Fig. 6J). We also found abnormal mitochondria with vesicular inclusions and what appears to be autophagosome formation in Apoa1bp−/− RGC somas (sFig. 4C and D). Unlike in Apoa1bp−/− Müller glia endfeet, Apoa1bp−/− RGC mitochondria were larger (Fig. 6K, n = 50 mitochondria in 5 RGC somas) and, likely owing to their larger size, occupied more of the cytoplasmic volume (Fig. 6L, n = 10 RGC somas), yet they were not increased in number (Fig. 6M, n = 10 RGC somas). The form factor of mitochondria was significantly lower, indicating more rounded mitochondria caused by volume dilation in Apoa1bp−/− RGCs compared to WT (Fig. 6N, n = 50 mitochondria in 5 RGC somas). As with Müller glia endfeet, the lengths of Apoa1bp−/− mitochondria were significantly decreased in RGC somas (Fig. 6O, n = 50 mitochondria in 5 RGC somas); their greater volume comes from their rounding. As implied in Fig. 6, crista density was significantly lower in Apoa1bp−/− RGC mitochondria (Fig. 7A, n = 50 mitochondria in 5 RGC somas), leading to a lower modeled rate of ATP production per unit mitochondrial volume (Fig. 7B). Yet, because mitochondria were larger in Apoa1bp−/− RGC somas, each mitochondrion, on average, was modeled to produce more ATP per second (Fig. 7B). However, unlike in Apoa1bp−/− Müller glia endfeet, the model for the rate of ATP production, which is based on 3D cristae surface area, predicts little decrease in cellular ATP production via mitochondria in Apoa1bp−/− RGC somas (Fig. 7B, n = 50 mitochondria in 5 RGC somas), even though the rate of ATP production per unit mitochondrial volume was significantly decreased (Fig. 7C, n = 50 mitochondria in 5 RGC somas); this decrease was simply offset by larger mitochondria. Tomographic volumes of WT RGC soma mitochondria show typical cristae (Fig. 7C and D, n = 10 RGC somas). In contrast, Apoa1bp−/− RGC soma mitochondria typically show cristae that are less densely packed, together with the commonly observed onion-like protuberances. Also, adjacent ER strands are often dilated (Fig. 7E-J; sMovies 5 and 6). Crista density was lower in Apoa1bp−/− RGC soma mitochondria due in part to these onion-like outer membrane protuberances. In summary, Apoa1bp−/− RGC somas have mitochondria that are structurally perturbed by dilation and rounding, with some localized structural perturbation of the outer membrane and some loss of cristae membrane. We next measured expression levels of mitofilin, a mitochondrial inner membrane protein that controls cristae architecture [48], in naïve Apoa1bp−/− and WT mice. In the absence of AIBP, we found a significant reduction of mitofilin protein expression in the retina (Fig. 7K), suggesting that AIBP deficiency may underlie the loss of cristae membrane. However, there was no significant difference in mitofilin gene expression between WT and naïve Apoa1bp−/− mice (Fig. 7L).
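Neither the mitochondrial form factor nor the connected-spheres length measurement is given an explicit formula in the text. Purely as an illustration, the sketch below assumes the common 2D morphometric convention for form factor (perimeter squared over 4π times area, equal to 1 for a circle and larger for elongated profiles, so a lower value reads as "more rounded") and approximates length as the summed distance between consecutive sphere centers; both conventions are assumptions, not the authors' documented definitions.

```python
import math

def form_factor(perimeter_um, area_um2):
    # Assumed convention: P^2 / (4*pi*A); equals 1.0 for a circle and grows
    # with elongation, so a drop toward 1.0 indicates rounder mitochondria.
    return perimeter_um ** 2 / (4 * math.pi * area_um2)

def chain_length(centers):
    # Mitochondrion length approximated as the summed distance between
    # consecutive sphere centers traced along it (cf. IMOD open contours).
    return sum(math.dist(p, q) for p, q in zip(centers, centers[1:]))

# Hypothetical example values (all in um)
print(form_factor(6.0, 1.8))    # elongated profile -> well above 1
print(form_factor(2.6, 0.52))   # near-circular profile -> close to 1
print(chain_length([(0, 0, 0), (0.4, 0.1, 0), (0.9, 0.2, 0.1)]))
```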
AIBP deficiency induces oxidative stress and MAPK signaling activation in RGCs

Under oxidative stress conditions, sirtuin 3 (SIRT3) impairment reduces the activity of superoxide dismutase 2 (SOD2) and increases ROS production [49,50], and multiple mitogen-activated protein kinase (MAPK) signaling pathways, such as p38 and extracellular signal-regulated kinase 1/2 (ERK1/2), are activated [51,52]. We therefore tested whether AIBP regulates the expression levels of SIRT3 and SOD2, as well as p38 and ERK1/2 activation, in WT and naïve Apoa1bp−/− mice. In the absence of AIBP, we found that SIRT3 and SOD2 protein expression was significantly decreased in the retina (Fig. 8A and B). However, there were no statistically significant differences in Sirt3 and Sod2 gene expression (sFig. 3B). Also, SIRT3 and SOD2 immunoreactivity was dramatically diminished in the IPL and GCL, especially in Brn3a-positive RGCs (Fig. 8C and D). Next, we found that AIBP deficiency significantly increased phosphorylation of p38 and ERK1/2 in the retina (Fig. 8E and F). Consistently, we also observed that phospho-p38 and phospho-ERK1/2 immunoreactivities were increased in the inner retinal layers in naïve Apoa1bp−/− mice (Fig. 8G and H). We noted that AIBP deficiency produced an increased pattern of phospho-p38 immunoreactivity in Brn3a-positive RGCs (Fig. 8G and H).

Administration of AIBP promotes RGC survival and inhibits inflammatory responses in the IOP mouse model

Since AIBP deficiency was associated with a neuroinflammatory and RGC death phenotype, we next tested the hypothesis that injections of recombinant AIBP would be protective. We intravitreally injected recombinant AIBP protein or BSA (1 μL, 0.5 mg/ml) into C57BL/6J mice at 2 days before induction of acute IOP elevation as described above. At 24 h after IOP elevation, we performed TUNEL staining and RBPMS immunohistochemistry. In BSA-injected animals, elevated IOP significantly increased the number of TUNEL-positive cells in all retinal layers compared with control mice (Fig. 9A and B). In the GCL, RBPMS-positive RGCs were co-stained with TUNEL, indicating apoptotic RGC death. In contrast, AIBP treatment remarkably and significantly reduced the number of TUNEL-positive cells in the retina in response to elevated IOP, whereas there were no TUNEL-positive cells in either sham control group (BSA and AIBP) (Fig. 9A and B). To determine the effect of AIBP administration on inflammatory responses in activated Müller glia in response to elevated IOP, we quantified expression levels of IL-1β protein in Müller glia endfeet by measuring the relative fluorescence intensity of IL-1β immunoreactivity in the GCL. Consistent with the data from glaucomatous retinas (Fig. 3), we found that elevated IOP significantly increased IL-1β immunoreactivity in the endfeet of Müller glia of BSA-treated control retina compared with BSA-treated sham control (Fig. 9C and D). In AIBP-treated animals, however, IL-1β immunoreactivity was significantly decreased in the endfeet of Müller glia despite elevated IOP (Fig. 9C and D). There was no significant difference in IL-1β immunoreactivity between BSA- and AIBP-treated sham control groups (Fig. 9C and D).

Discussion

Factors contributing to neuroinflammation, a process that plays a critical role in glaucomatous neurodegeneration, are poorly understood. In the present study, we identified AIBP as an important neuroprotective protein in the retina.
We demonstrated that elevated IOP reduced AIBP expression in the glaucomatous retina and that Apoa1bp−/− Müller glia showed an upregulated TLR4-mediated inflammatory response, with increased IL-1β expression accompanied by compromised mitochondrial dynamics and energy depletion. In parallel, we found that AIBP deficiency contributed to dysfunctional RGC mitochondria, oxidative stress and visual dysfunction. Also, AIBP deficiency exacerbated RGC death in response to elevated IOP. Remarkably, recombinant AIBP administration prevented apoptotic RGC death and inflammatory responses in Müller glia in vivo. Here we propose for the first time that AIBP could be a therapeutic target for treating neuroinflammation, mitochondrial dysfunction and RGC death in glaucoma progression. AIBP is known to accelerate cholesterol efflux from endothelial cells and macrophages [4,[7][8][9]23,41]. Accumulating evidence indicates that cholesterol is a risk factor for POAG [53][54][55][56][57]. Indeed, epidemiological studies indicate that POAG is linked to single-nucleotide polymorphisms of ABCA1 [53][54][55]. Interestingly, ABCA1 is expressed in human RGCs [53,54] and is significantly decreased in RGCs in response to elevated IOP [58]. In the current study, both AIBP and ABCA1 protein expression were found to be reduced in glaucomatous retina. A previous study demonstrated that AIBP mediates the stabilization of ABCA1 by facilitating apoA-I binding to ABCA1 and prevents ABCA1 degradation via the ubiquitination pathway [41]. Although the relationship between AIBP and ABCA1, particularly in RGCs, remains to be elucidated, it is likely that AIBP stabilizes ABCA1 and regulates cholesterol efflux in glaucomatous RGCs. Moreover, loss of AIBP induced by elevated IOP may contribute to deregulation of ABCA1 in glaucomatous neurodegeneration. Epidemiological studies indicate that human POAG is linked to single-nucleotide polymorphisms of TLR4 [43,45], and recent evidence from animal studies further suggests that TLR4-dependent signaling is an important factor in the pathogenesis of POAG [14,44]. In the current study, we found that glaucomatous retina showed significantly increased Tlr4 gene expression. Moreover, both glaucomatous and Apoa1bp−/− Müller glia endfeet, which are in close contact with RGCs, were characterized by increased TLR4 protein expression. Under normal conditions, Müller glia interact with and help maintain neurons, including RGCs, but reactive Müller glia contribute to neuronal degeneration [59]. Since RGCs are the main affected cell type in glaucomatous neurodegeneration, it is likely that the increased immunoreactivity of GS and TLR4 in the endfeet of Müller glia in glaucomatous and Apoa1bp−/− retinas is associated with degenerating or vulnerable RGCs. Since our previous work demonstrated that AIBP binds to activated microglia via TLR4, augments cholesterol efflux and the disruption of lipid rafts in LPS-stimulated cells, and reduces TLR4 dimerization [7,11], the current results strongly suggest that AIBP mediates inhibition of TLR4 activity in Müller glia and may have a critical role in protection against glaucomatous neuroinflammation. Recent evidence suggests that acute IOP elevation induces TLR4-mediated inflammasome activation, including NLR family pyrin domain containing 1 (NLRP1) and NLRP3, and activates the IL-1β cascade in the retina [14,15].
Further, genetic deletion or pharmacological inhibition of TLR4 significantly reduces RGC death and proinflammatory responses in experimental glaucoma [44,60,61]. Thus, we propose that loss of AIBP and activation of TLR4 signaling in glaucomatous Müller glia are critical to inflammatory response-mediated glaucomatous RGC degeneration. Indeed, this notion is strongly supported by our results showing a significant increase in IL-1β protein expression in both glaucomatous and Apoa1bp−/− Müller glia endfeet. In transfected cells, AIBP was localized to mitochondria [22], but the role of AIBP in the regulation of mitochondrial structure and function in mammalian cells remained unknown. Interestingly, activated TLR4 signaling is associated with mitochondrial damage in microglia caused by intracellular ROS and defective mitochondrial dynamics [20,21]. In the current study, we demonstrated for the first time that loss of AIBP impaired the mitochondrial network and function in Müller glia endfeet through induction of mitochondrial fragmentation and reduction of ATP production. Recent evidence suggests that Müller glia-induced neuroinflammation is linked with RGC death [62] and that Müller glia-derived lactate is a critical source for maintaining RGC energy metabolism and survival [32]. Given our findings and those of others, it is conceivable that loss of AIBP augments TLR4 signaling in glaucomatous Müller glia, which might compromise the mitochondrial network, increase ROS production and deplete energy production, leading to dysfunction of Müller glia, activation of inflammatory responses and RGC death. Another possible mechanism for a protective role of AIBP in glaucoma relates to angiogenesis. Müller glia are associated with the regulation of angiogenesis, which is linked to a severe form of secondary glaucoma commonly associated with proliferative diabetic retinopathy, ischemic central retinal vein occlusion, and ocular ischemic syndrome [63]. Further, Müller glia activation increases with age in glaucomatous DBA/2J mice, which show abnormal neovascularization [64]. Since previous studies have demonstrated that loss of AIBP results in dysregulated sprouting/branching angiogenesis and that enhanced AIBP expression inhibits angiogenesis [4,8], it is possible that Müller glia dysfunction induced by loss of AIBP may contribute to abnormal angiogenesis in secondary glaucoma. In addition, microglial activation is a common inflammatory response to elevated IOP-induced retinal injury, and microglia-mediated TLR4 activation is involved in retinal degeneration [14,65]. Given that elevated IOP increased Tlr4 gene expression in the retina and augmented microglial activation in Apoa1bp−/− retina, our findings collectively suggest that loss of AIBP exacerbates vulnerability to elevated IOP-induced RGC death through TLR4 signaling activation, mitochondrial dysfunction and inflammatory responses by activated Müller glia and microglia. We also provide the first evidence that AIBP regulates the structural and functional integrity of mitochondria in RGCs. Intriguingly, our study demonstrated a significant loss of mitochondrial AIBP in the retina in response to elevated IOP. Moreover, loss of AIBP significantly impaired not only the OXPHOS system in the retina but also mitochondrial dynamics and ATP production in RGCs, resulting in extensive mitochondrial fragmentation, energy depletion and possible autophagosome formation.
Since we have demonstrated that impairment of mitochondrial dynamics and function is strongly linked to RGC death in glaucomatous neurodegeneration [16,19,25,[66][67][68], it is likely that AIBP plays a critical role in mitochondrial quality control, maintaining cellular homeostasis by preserving mitochondrial dynamics and bioenergetics in RGCs against glaucomatous insults such as elevated IOP and oxidative stress. Since degenerative pruning of RGC dendrites and dysfunction of their synapses, as well as mitochondrial degeneration, have been implicated as early features of glaucomatous neurodegeneration [69,70], these findings, combined with our observations of decreased AIBP in the inner retina and of Müller glia activation, suggest the intriguing possibility that loss of AIBP in the inner retina may affect not only RGC somas and axons in the GCL but also RGC dendrites and their synapses in the IPL in an autocrine/paracrine manner during glaucomatous neurodegeneration. SIRT3, a mitochondrial NAD+-dependent deacetylase, has protective roles against oxidative stress, neuroinflammation and neurodegeneration [71,72]. SIRT3-mediated deacetylation and activation of SOD2 reduces ROS levels, enhancing resistance against oxidative stress [73,74]. Our study demonstrated that loss of AIBP significantly reduced the expression levels of SIRT3 and SOD2 proteins in the inner retina, including RGCs. Recent evidence indicates that the SIRT3-SOD2 pathway is linked to inflammation and oxidative stress [75,76]. In line with these and our findings, it is possible that mitochondrial AIBP contributes to the stabilization of the SIRT3-SOD2 axis, rescuing RGC mitochondria from neuroinflammation and/or oxidative stress. As shown in Fig. 8, SIRT3 and SOD2 are widely expressed in the retina, consistent with previous studies [19,77,78], and AIBP may contribute to SIRT3-SOD2-mediated protection of retinal structure and cells against oxidative stress. Under oxidative stress conditions, multiple MAPK signaling pathways, including p38 and ERK1/2, are activated [51,52]. Our study demonstrated that loss of AIBP persistently increased phosphorylation of p38 and ERK1/2 in the retina. p38 is phosphorylated in response to cytokines and oxidative stress [79,80], and activation of the p38 signaling pathway leads to mitochondrial dysfunction and inflammatory responses [81][82][83][84]. Because a p38 inhibitor blocks mitochondrial dysfunction and inhibits cytochrome c release [85], it is likely that retinal AIBP not only plays a role in the stabilization of mitochondrial proteins but also inhibits stress-activated intracellular signaling responses, such as p38 activation. On the other hand, ERK1/2 is also activated in response to cytokines, free radicals and inflammatory factors in neurodegenerative diseases [86,87]. In experimental glaucoma, ERK1/2 activation has a neuroprotective effect on RGC survival [88][89][90][91]. Since phosphorylation of p38 and ERK1/2 induced by lipopolysaccharide (LPS) in alveolar macrophages is inhibited in the presence of AIBP [9], our study suggests that AIBP may contribute to differential regulation of MAPK signaling pathways in RGCs against inflammatory responses and/or oxidative stress.
Our recent studies demonstrated that recombinant AIBP administration reduced not only spinal myeloid cell lipid rafts, TLR4 dimerization, neuroinflammation, and glial activation in facilitated pain states, but also LPS-induced airspace neutrophilia, alveolar-capillary leak, and secretion of IL-6 in acute lung inflammation [9,11]. These results demonstrated a mechanism by which AIBP regulates neuroinflammation and suggested the therapeutic potential of AIBP for treating preexisting pain states and lung inflammation. In the current study, we found that intravitreal administration of AIBP significantly protected not only RGCs in the GCL but also cells in the INL against apoptotic cell death, and reduced IL-1β-mediated inflammatory responses in Müller glia in response to elevated IOP. Müller glia are distributed throughout the entire retina, with their endfeet residing in the GCL and their cell bodies sitting in the INL [92]. Interestingly, it has been reported that TLR4-mediated neuroinflammation is highly associated with apoptotic cell death in the INL and GCL of the ischemic retina, and that inhibition of TLR4 signaling activation blocks apoptosis in these retinal layers [15]. Based on these and the current findings, it is possible that AIBP deficiency may increase the susceptibility of not only RGCs but also Müller glia to elevated IOP. Thus, we propose that AIBP may have therapeutic potential for treating glaucoma by blocking glia-driven neuroinflammation. In addition, accumulating evidence suggests that damage-associated molecular patterns (DAMPs), endogenous activators of TLR4 released by damaged cells, activate microglia and induce production of inflammatory cytokines [10,93]. Activated microglia-mediated neuroinflammation is an early event in glaucoma, leading to Müller glia activation, neuronal damage, and RGC loss [94,95]. Bidirectional interaction between activated microglia and Müller glia in retinal injury models induces adaptive responses. Cross-talk between Müller glia and activated microglia increases pro-inflammatory cytokine production, which in turn facilitates further activation of microglia in a positive feedback loop, as well as Müller glia-microglia adhesion via upregulation of adhesion molecules [96]. Thus, understanding the potential effect of AIBP on bidirectional signaling between Müller glia and activated microglia may help limit further amplification of inflammatory signaling and RGC death, and may highlight therapeutic strategies for neuroinflammatory glaucoma. Further studies will determine whether administration of AIBP also preserves RGC and glial mitochondria and whether this mechanism contributes to RGC survival and preservation of glial function against glaucomatous insults.

Conclusion

In summary, our study connects for the first time AIBP with protective mechanisms controlling mitochondrial pathogenic mechanisms, neuroinflammation and RGC death, as illustrated in the graphical abstract. We propose that combined therapeutic strategies that block the glia-driven inflammatory TLR4/IL-1β axis and mitochondrial dysfunction in glaucomatous neurodegeneration are likely to be beneficial. Further studies aimed at identifying the pathological pathways targeted by AIBP and understanding the potential mechanisms of AIBP-mediated neuroprotection may lead to the development of new therapeutic approaches for glaucoma and other inflammatory conditions.
Funding

This work was supported in part by National Institutes of Health grant EY018658 and the UCSD Academic Senate to WKJ; National Institutes of Health grants HL135737, HL136275 and NS102432 to YIM; National Institutes of Health grant EY027011 and an RPB Special Scholar Award to DSK; National Institutes of Health grants P30 EY022589 and T32 EY026590; and National Institutes of Health grant P41 GM103412 to MHE.
2020-08-27T09:13:39.649Z
2020-08-27T00:00:00.000
{ "year": 2020, "sha1": "521c2f2cc117211942f37bdeae7c467243ec3fde", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.redox.2020.101703", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e3b7bf1adbf18f0bded2e2fd25885681a35fbdfe", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
235346640
pes2o/s2orc
v3-fos-license
Characteristics and outcomes of pregnant women with placenta accreta spectrum in Italy: A prospective population-based cohort study

Introduction

Placenta accreta spectrum (PAS) is a rare but potentially life-threatening event due to massive hemorrhage. Placenta previa and previous cesarean section (CS) are major risk factors for PAS. Italy holds one of the highest rates of primary and repeat cesarean section in Europe; nonetheless, there is a paucity of high-quality Italian data on PAS. The aim of this paper was to estimate the prevalence of PAS in Italy and to evaluate its associated factors, ante- and intrapartum management, and perinatal outcomes. Also, since severe morbidity and mortality in Italy show a North-South gradient, we assessed and compared perinatal outcomes of women with PAS according to the geographical area of delivery.

Material and methods

This was a prospective population-based study using the Italian Obstetric Surveillance System (ItOSS) and including all women aged 15-50 years with a diagnosis of PAS between September 2014 and August 2016. Six Italian regions were involved in the study project, covering 49% of the national births. Cases were prospectively reported by a trained clinician for each participating maternity unit using electronic data collection forms. The background population comprised all women who delivered in the participating regions during the study period.

Results

A cohort of 384 women with PAS was identified from a source population of 458 995 maternities, for a prevalence of 0.84/1000 (95% CI, 0.75-0.92). Antenatal suspicion was present in 50% of patients, who showed reduced rates of blood transfusion compared to unsuspected patients (65.6% versus 79.7%, P = 0.003). Analyses by geographical area showed higher rates of both concomitant placenta previa and prior CS (62.1% vs 28.7%, P<0.0001) and of antenatal suspicion (61.7% vs 28.7%, P<0.0001) in women in Southern compared to Northern Italy. Also, these women had lower rates of hemorrhage ≥2000 mL (29.6% vs 51.2%, P<0.0001), blood transfusion (64.5% vs 87.5%, P = 0.001), and severe maternal morbidity (5.0% vs 11.1%, P = 0.036). Delivery in a referral center for PAS occurred in 71.9% of these patients.

Conclusions

Antenatal suspicion of PAS is associated with improved maternal outcomes, also among high-risk women with both placenta previa and prior CS, likely because of their referral to specialized centers for PAS management.

Introduction

Placenta accreta spectrum (PAS) is an obstetric condition caused by excessive trophoblast invasion into the myometrium of the uterine wall [1]. Defective decidualization in an area of scarring, mostly due to previous uterine surgery, is thought to be the main underlying mechanism of PAS [2]. Although rare, PAS represents a potentially life-threatening event, especially if not suspected before delivery [17,18]. It may result in massive hemorrhage ultimately requiring emergency hysterectomy to prevent maternal death [19][20][21]. Thus, PAS can be considered a "near-miss" event [17,22]. "Near-miss" events are proxies of maternal health care quality, and their monitoring and in-depth investigation provide essential feedback to improve obstetric care [23].
Considering that hemorrhage is the leading cause of maternal mortality and morbidity in Italy [24,25], where there is a paucity of high-quality studies on PAS notwithstanding high rates of CS [26][27][28][29][30][31], the Italian Obstetric Surveillance System (ItOSS) carried out a prospective, population-based study on hemorrhagic "near-miss" events, including PAS. The aim of this paper is to estimate the incidence of PAS and to analyze its associated factors, management, and perinatal complications. In addition, since Italy has regional health care imbalances, with the South displaying higher rates of morbidity and mortality [20,25], outcomes were compared according to the geographical area of delivery.

Material and methods

This is a prospective, population-based study including all women aged 15-50 years delivering at ≥22 weeks of gestation with a diagnosis of PAS from September 2014 to August 2016 in six Italian regions covering 49% of the national births. These regions were selected by annual number of births (≥25 000) and to ensure the representativeness of the Northern (Piedmont, Emilia Romagna, and Tuscany) and Southern (Lazio, Campania and Sicily) areas. The present study is part of a wider research project on severe maternal morbidity due to obstetric hemorrhage coordinated by the ItOSS, as previously reported [20]. Briefly, the ItOSS project prospectively collected data on women delivering at ≥22 weeks of gestation with any of the following complications: (1) severe postpartum hemorrhage (PPH), defined as "hemorrhage within 7 days from delivery requiring ≥4 units of whole blood or packed red blood cells"; (2) "hemorrhage due to complete or incomplete uterine rupture"; (3) "peripartum hysterectomy within 7 days from delivery"; and (4) PAS, clinically defined as "difficult or incomplete manual removal of the placenta following vaginal delivery and the need of blood transfusion within 48 hours" or "difficult removal of the placenta during cesarean delivery and clinical evidence of an abnormally invasive placenta". The present study includes all cases of PAS as defined in (4), independent of the associated outcomes, such as severe postpartum hemorrhage (1), uterine rupture (2), and peripartum hysterectomy within 7 days from delivery (3), which led to inclusion in the wider ItOSS research project. All maternity units in the selected regions were invited to participate in the study and to appoint a clinician as the reference person for reporting incident cases. Unified electronic data collection forms, prepared by a team of national experts by adapting the forms of the Nordic Obstetric Surveillance Study [32], were used for data collection. Each reference person was trained to use the web system for data collection before the study's commencement and received a monthly reminder by email to promote complete reporting. A multidisciplinary audit involving all healthcare professionals who assisted the women with a PAS diagnosis was recommended in each participating maternity unit.

Statistical analyses

The prevalence rate was calculated as the number of PAS cases per 1000 maternities with a 95% CI, assuming the Poisson approximation to the binomial distribution. When available, the background population was retrieved from the National Hospital Discharge database by selecting all women aged 15-50 who delivered during the same study period in the participating maternity units of the selected regions.
When not available, the background population was estimated in aggregate form from the National Birth Register, year 2015 [33]. Potential factors associated with PAS were identified by calculating unadjusted relative risks (RR) and 95% CI. Data were compared using Pearson's chi-square test or Fisher's exact test for categorical variables and the Mann-Whitney test for continuous variables.

Ethical approval

The study was approved by the Ethics Committee of the Italian National Institute of Health (Prot. PRE-839/13). Data were fully anonymized before being accessed and analyzed. Thus, the need for informed consent was waived by the local Ethics Committee.

Results

Seven of the 212 maternity units in the six selected regions did not provide the requested data, for a 96.7% participation rate.

PAS rate and associated factors

During the study period, 372 cases of PAS were notified. Assessment of data completeness led to the recovery of twelve additional cases, for a total of 384 cases out of 458 995 maternities, with an estimated prevalence of 0.84 per 1000 (95% CI, 0.75-0.92). Along with the regional and overall estimates of the PAS rate, Fig 1 shows the contribution to the PAS rate given by women with both placenta previa or low-lying placenta and previous CS. The solid line describes the percentage of women with previous CS in the background population. Women with PAS had a median age of 35 years (IQR, 31.4-39.0 years) at delivery; six women were older than 45 and one was younger than 20 years (Table 1). PAS patients were mostly Italian, with a low education level, and more likely to be multiparous. Overall, 54% had a previous CS, with 18.5% having two and 6.5% having ≥3 previous CS. Placenta previa or low-lying placenta was diagnosed in 60% of women. In 44.6% of cases there was both a placenta previa or low-lying placenta and a prior CS.

Fig 1. Regional and overall PAS rate and percentage of previous cesarean section in the background population. Bar graphs show the regional and overall prevalence distribution of PAS with the 95% CI (white bars and black lines). Grey bars display the contribution to the regional and overall PAS rate given by women with both placenta previa or low-lying placenta and previous cesarean section. Both white and grey bars are plotted on the left Y axis. The solid line with dots shows the regional and overall rate of previous cesarean section in the background population (plotted on the right Y axis).

Deliveries were by CS in 74.5% of cases, with elective surgery being the most common (73.4%). Median gestational age at delivery was 36 weeks (IQR, 35-38 weeks). Preterm delivery at <37 weeks' gestation occurred in 51.3% of cases and was more frequent among women with placenta previa or low-lying placenta than among women without (66.0% vs 16.3%, P<0.0001). The analysis of maternal characteristics showed a substantially higher risk of PAS in women with placenta previa or low-lying placenta and in women with previous CS or other uterine surgery, with the greatest risk increase for ≥2 previous CS (RR 17.6; 95% CI, 12.9-24.0). A modest risk increase was also observed for maternal age ≥35 years, multiparity, low education level, and delivery in Southern Italy (Table 1). Also, assisted reproductive technology (ART) and multiple gestation significantly increased PAS risk. In addition, women with PAS showed a 5- and 15-fold increase in the odds of delivering by CS and at <37 weeks' gestation, respectively.
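As a purely illustrative aside on the arithmetic named in the Statistical analyses section: the paper does not state exactly how the "Poisson approximation" interval was implemented, so the sketch below uses a simple normal approximation to the Poisson count, which happens to reproduce the reported 0.84 per 1000 (95% CI 0.75-0.92), together with the standard log-scale (Katz) CI for an unadjusted relative risk. The RR inputs are hypothetical counts, not the study's data.

```python
import math

# Prevalence per 1000 with a 95% CI from a normal approximation to the
# Poisson count (one plausible reading of the methods; the exact interval
# construction used by the authors is not specified).
cases, maternities = 384, 458_995
half_width = 1.96 * math.sqrt(cases)
rate = 1000 * cases / maternities
lo = 1000 * (cases - half_width) / maternities
hi = 1000 * (cases + half_width) / maternities
print(f"prevalence: {rate:.2f} per 1000 (95% CI {lo:.2f}-{hi:.2f})")
# -> 0.84 per 1000 (95% CI 0.75-0.92), matching the reported estimate

def relative_risk(a, n_exposed, c, n_unexposed, z=1.96):
    """Unadjusted RR with a standard log-scale (Katz) 95% CI.
    a / n_exposed:   events / total among the exposed
    c / n_unexposed: events / total among the unexposed
    """
    rr = (a / n_exposed) / (c / n_unexposed)
    se = math.sqrt(1 / a - 1 / n_exposed + 1 / c - 1 / n_unexposed)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts, for illustration only (not the study's data):
print(relative_risk(50, 10_000, 334, 448_995))
```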
Pregnancy, delivery, and perinatal outcomes of PAS

PAS was antenatally suspected in 50% of the cases, more often in multiparas with prior uterine surgery, placenta previa or low-lying placenta, or a combination of both (Table 2). These conditions were more frequent among women in Southern Italy, and, accordingly, a higher rate of antenatal suspicion was identified there (61.7% vs 28.7%, P < 0.0001). Most of the suspected women delivered in a high-level hospital setting and by a scheduled CS. None of the 35 women without risk factors for PAS was suspected prenatally, and 26 (74.3%) of them delivered vaginally. Four women were managed conservatively: three had a partial placenta accreta diagnosed after delivery, and only the abnormally adherent cotyledon was left in situ, whereas the remaining one had an antenatal diagnosis of complete placenta previa with signs of placenta percreta, and no attempt at placenta removal was made at the time of CS. Follow-up of these patients was not available at the time of data collection. Almost 73% of women were transfused with red blood cell (RBC) units, with higher rates among unsuspected women (Table 2).

Table 2 footnotes: (a) A high-level hospital setting was defined as a hospital with availability of an ICU and interventional radiology, and the possibility of blood transfusion within 15 minutes. (b) Severe maternal morbidity included vegetative state (n = 1), cardiac arrest (n = 2), respiratory distress (n = 3), acute pulmonary edema (n = 2), disseminated intravascular coagulopathy (n = 6), acute renal failure (n = 1), deep vein thrombophlebitis or pulmonary embolism (n = 1), sepsis or septic shock (n = 1), and hemorrhagic shock (n = 7); damage to adjacent organs during surgery and post-operative complications are described separately (details in the text). (c) Lazio region excluded.

At least one severe maternal morbidity condition was identified in 27 (7.0%) women, with hemorrhagic shock (n = 7) and disseminated intravascular coagulopathy (DIC, n = 6) being the most frequent. Twelve (3.1%) patients required mechanical ventilation, and 24% required admission to the ICU. There were no differences in the rate of severe maternal morbidity or ICU admission between women with and without suspected PAS (Table 2). Overall, 51 (13.3%) patients experienced organ damage during surgery, post-surgical complications, or a severe maternal morbidity condition. There was one maternal death in the study cohort, for a fatality rate of 2.6‰; it occurred in a primiparous young woman with no risk factors for PAS and no antenatal diagnosis, who delivered vaginally and experienced uterine inversion during the attempt to remove a partially attached placenta, with subsequent severe PPH, DIC, cardiac arrest, and death. Among the 398 infants delivered (372 singletons, ten twin pairs, and two sets of triplets), eight perinatal deaths were identified, for a perinatal mortality rate of 20.1‰: seven (87.5%) occurred postnatally, and in 85.7% of these, delivery took place before 26 weeks' gestation.

Histology data

A histology report was available at the time of data retrieval in 179 cases, 77.1% of whom had undergone hysterectomy. Overall, PAS was confirmed in 130 (72.6%) patients; the depth of invasion, with rates of placenta previa or low-lying placenta and previous CS, is shown in Fig 2. All histological diagnoses were performed on both uterine and placental specimens, except for five (3.8%) cases of placenta accreta that were identified by assessment of the placenta alone.
PAS was antenatally suspected in 62.7%, 52.4%, and 79.4% of cases with placenta accreta, increta, and percreta, respectively. Women with an antenatal diagnosis did not show a higher grade of invasion (placenta increta or percreta) compared to unsuspected women (44.7% vs 37.8%, P = 0.463). However, when the analysis was performed by geographical area, women in the Southern regions were more likely to have more severe forms of PAS than women in the North (46.5% vs 29.0%, P = 0.045). Rates of PPH ≥2000 mL and of blood transfusion among women with either placenta accreta or placenta increta/percreta and an antenatal diagnosis were similar to those of unsuspected women (Table 3).

Main findings

This study showed that the prevalence of PAS in the participating Italian regions was 0.84‰, with higher rates in Southern Italy. The results highlighted the pivotal contribution of placenta previa or low-lying placenta, prior CS and/or other uterine surgery, and ART to the occurrence of PAS. Half of the cases had no antenatal suspicion, even among women with relevant risk factors for PAS such as placenta previa and previous CS. Antenatal suspicion was not associated with improved outcomes in our cohort, except for a lower rate of RBC unit transfusion in suspected women. However, when assessed according to geographical area, adverse outcomes were less likely in patients in the Southern regions, notwithstanding higher rates of high-risk cases, such as those with both placenta previa and prior CS or placenta increta/percreta.

Strengths and limitations

The strengths of this study include the prospective and population-based design, the high participation rate of the maternity centers, and the opportunity to rely on the ItOSS surveillance system to validate the reported maternal death. Also, although subnational, the results are unlikely to be significantly biased, given the distribution of the participating regions across all geographical areas of the country. There are also limitations. In order to capture all cases of PAS, we used a clinical case definition that also included women with a vaginal delivery. Although unlikely [34], this may have led to the inclusion of cases of common entrapped placenta [35,36] and thus to an overestimation of PAS prevalence. Of note, the present study was designed and implemented before the FIGO guidelines on the PAS definition were published [4]. Also, cases without histological confirmation of PAS were included in the analyses. However, it is known that the absence of histological features indicative of PAS does not necessarily exclude the diagnosis, especially when clinical suspicion is high [37]. In addition, information on blood loss at delivery was lacking in almost 16% of women, which may have biased the interpretation of results; nevertheless, the absence of missing data on first- and second-line treatments of PPH has likely limited this possibility. Another potential limitation is the lack of a PAS code in the ICD-9 Hospital Discharge database with which to ascertain the completeness of notified cases. However, the presence of a trained clinician in each hospital, the active monthly checks of ItOSS case reporting, and previous ItOSS studies suggesting high rates of ascertainment [20] make this possibility unlikely. Finally, the lack of individual data on deliveries in women without PAS prevented us from adjusting the estimated RRs.

Interpretation

The PAS prevalence reported in this study (0.84‰) is higher than that reported for the Nordic countries by the NOSS (0.34‰) [36].
This finding might be related to the exclusion of women with vaginal delivery from the Nordic analysis. However, a lower rate of PAS (0.46‰) was still identified when these women were included in a previous analysis [32], suggesting a more relevant role of the prior CS rate (10% in the Nordic countries vs 16.8% in Italy) in causing the difference [20,36]. Similarly, the higher rate of prior CS might explain the increased PAS prevalence in Italy compared to France (0.48‰, prior CS rate 11.4%) [38] and to the United Kingdom (0.17‰, prior CS rate 14.9%) [34]. In turn, the use of statistical record-linkage procedures instead of active reporting may account for our lower prevalence estimates compared to a recent Australian population-based study (2.5‰, prior CS rate 14.4%) [16].

High rates of primary CS [39], alongside the policy defined by the axiom "once a cesarean always a cesarean" [40], have led to a considerably larger share of women with ≥2 previous CS in the ItOSS cohort (96/384, 25%) compared to the Nordic countries (32/205, 15.6%). It is known that the incidence of placenta previa and of PAS rises with the number of prior CS [8-10]. Accordingly, and in line with published data [36,41], we identified a "dose-dependent" relation between prior CS and PAS, with an increase in the RR of PAS from 3.8 for one to 17.6 for ≥two previous CS.

Antenatal suspicion of PAS has been associated with improved outcomes [18,42-45]. This finding has also been confirmed by the UKOSS study [34]. Although our rate of antenatal suspicion (192/384, 50%) was similar to that reported in that work (66/134, 49.3%), we did not observe any difference in terms of blood loss at delivery (Table 2), even when the analysis was restricted to cases with histological confirmation of placenta increta or percreta (Table 3). Notwithstanding this, we noted a lower rate of RBC unit transfusion among suspected women, possibly reflecting better preparedness for, and thus management of, severe PPH when substantial bleeding is expected at delivery and adequate planning is put in place [34,42,46]. Of note, there were 25 (6.5%) women with both placenta previa or low-lying placenta and prior CS, a combination of risk factors defining a clinical profile at high risk for adverse outcomes [38], who were not antenatally suspected. Knowledge of the relevant risk factors for PAS is pivotal to guide a targeted prenatal ultrasound scan and to increase the rate of antenatal diagnosis [42,47-55]. However, PAS can also occur in the absence of any known risk factor, as we observed in 35 (9.1%) cases and as reported by the NOSS in 15 (7.3%) cases [36].

Almost half of our patients had a high-risk clinical profile [38]. Although such a profile was substantially more common among women in the South than in the North, we observed improved outcomes in the South, with decreased rates of PPH, blood transfusion, ICU admission, and severe maternal morbidity. Also, these women less frequently experienced intra- and post-hysterectomy complications, notwithstanding higher rates of placenta previa and previous CS, conditions known to make surgery more technically challenging [38,44,56,57].
Since referral of suspected cases has been suggested to be a more important determinant of outcomes than the patient's clinical risk profile [45,53,58], it is plausible that this finding is related to the higher rate of antenatal diagnosis observed among Southern women (61.7% vs 28.7%), with their subsequent referral to specialized centers, which occurred in 71.9% of the cases. Altogether, our results suggest that outcomes can be optimized even in women with a high-risk clinical profile when high rates of antenatal diagnosis are followed by referral to specialized centers with skilled multidisciplinary teams for PAS management.

Overall, almost 50% of women with PAS underwent hysterectomy in our study cohort, a rate similar to published data [34,36]. Of note, a previous study from the same working group had reported PAS as the second leading cause (n = 191, 40.2%), after uterine atony (n = 214, 45.1%), of hysterectomy performed within 7 days of delivery for obstetric hemorrhage [20]. In the present work, all cases of PAS identified during the study period (n = 384) were included and assessed in terms of associated factors, management, and outcomes, providing novel Italian population-based data on the topic. The rate of maternal death in our study (2.6‰; national rate 0.09‰ [25]) was higher than in the UKOSS, NOSS, and Australian cohorts, which did not report any fatal case [16,34,36], but lower than in the French cohort (4.1‰) [38]. Of note, the only death in our cohort occurred in an unsuspected woman without risk factors for PAS.

Conclusions

A low CS rate in the population has already been proven to be the most effective way to decrease CS-related adverse outcomes, including PAS [36,60,61]. Considering that Italy has one of the highest rates of primary and elective repeat CS among European nations [28,30], it is urgent to promote educational efforts that support Italian obstetricians in safely reducing primary CS and in admitting women with prior CS to a trial of labor [29]. Management in specialized centers should be considered for all high-risk cases as a pivotal determinant of improved outcomes [54,55]. As recommended by the national guideline on PPH prevention and treatment [62], coordinated, multifaceted efforts should be directed at increasing antenatal suspicion of PAS by raising awareness of the relevant risk factors and referring patients at risk for targeted ultrasound assessment by expert sonographers [53,63].
2021-06-06T06:16:36.329Z
2021-06-04T00:00:00.000
{ "year": 2021, "sha1": "5bd5eed2e495701a6a1f75a6c157f0d97598125d", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0252654&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e7f751b69e39d9d6999dff2eed594fbc51616d0e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9431566
pes2o/s2orc
v3-fos-license
Comparative genomics of nucleotide metabolism: a tour to the past of the three cellular domains of life

Background Nucleotide metabolism is central to all biological systems, due to the essential role of nucleotides in genetic information and energy transfer, which in turn suggests its possible presence in the last common ancestor (LCA) of Bacteria, Archaea and Eukarya. In this context, elucidating the contribution of the origin and diversification of the de novo and salvage pathways of nucleotide metabolism will allow us to understand the links between the enzymatic steps associated with the LCA and the emergence of the first metabolic pathways.

Results In this work, the taxonomical distribution of the enzymes associated with nucleotide metabolism was evaluated in 1,606 complete genomes. 151 sequence profiles associated with 120 enzymatic reactions were used. The evaluation was based on profile comparisons using RPS-BLAST. Organisms were clustered according to their taxonomical classifications in order to obtain a normalized measure of the taxonomical distribution of enzymes: the average presence/absence of enzymes per genus, which in turn was used in a second step to calculate the average presence/absence of enzymes per clade.

Conclusion From these analyses, it is suggested that divergence at the enzymatic level correlates with environmental changes and with related modifications of the cell wall and membranes that took place during cell evolution. Specifically, the divergence of the 5-(carboxyamino)imidazole ribonucleotide mutase to phosphoribosylaminoimidazole carboxylase could be related to the emergence of multicellularity in eukaryotic cells. In addition, segments of the salvage and de novo pathways were probably complementary in the LCA for the synthesis of purines and pyrimidines. We also suggest that a large portion of the pathway to inosine 5'-monophosphate (IMP) in purines could have been involved in the synthesis of thiamine or its derivatives in early stages of cellular evolution, correlating with the fact that these molecules may have played an active role in the protein-RNA world. The analysis presented here provides general observations concerning the adaptation of the enzymatic steps in the early stages of the emergence of life and the LCA.

Electronic supplementary material The online version of this article (doi:10.1186/1471-2164-15-800) contains supplementary material, which is available to authorized users.

Background

Metabolism represents an intricate ensemble of enzyme-catalyzed reactions that lead to the synthesis and degradation of compounds within the cell. In recent years, an increasing amount of information on metabolism from different species has become available, allowing comparative genomic-scale studies on the evolution of specific pathways or whole metabolic networks [1-4]. Metabolism can be considered one of the most ancient biological networks, in which nodes represent substrates or enzymes and edges represent the relationships between them. From this perspective, the study of metabolic networks has focused on describing topological properties, such as the existence of functional modules, giving special relevance to clustering and motif formation and showing attributes similar to those of small-world and scale-free networks [3]. Against this background, two main hypotheses on the origin and evolution of enzyme-driven metabolism have been postulated, both based on the notion that gene duplication, followed by divergence, can lead to the origin of new metabolic reactions.
The "stepwise hypothesis" [5] suggests that, in the case where a substrate tends to be depleted, gene duplication can provide an enzyme capable of supplying the exhausted substrate, giving rise to homologous enzymes that catalyze consecutive reactions. On the other hand, the "patchwork hypothesis" [6] proposes that duplication of genes encoding promiscuous enzymes (capable of catalyzing multiple reactions) allows each descendant enzyme to specialize in one of the ancestral reactions. In this regard, it is plausible that a small number of enzymes with broad specificity existed in early stages of metabolic evolution. Genes encoding these enzymes would have been duplicated, generating enzymes that, through sequence divergence, became more specialized [7]. Collectively, these studies have highlighted the contribution of gene duplication in the evolution of metabolism [4]. In recent works, the universal occurrence of some pathways and branches, such as diverse amino acid pathways, in modern species suggests that they existed in the last common ancestor (LCA) of Bacteria, Archaea and Eukarya [1,8]. However, despite the importance of nucleotide metabolism in all organisms, few studies have addressed this issue by using genomic approaches [9,10]. Nucleotide metabolism is central in all living systems, due to its role in transferring genetic information and energy. Indeed, it has been described as one of the ancient metabolisms in evolution. Specifically, the emergence of an ancestral folding or P-loop hydrolase appeared parallel to this metabolism [11], reinforcing its antiquity. In addition, many of the intermediates associated with this metabolic module have been intimately associated with prebiotic chemistry and the origin of life [10,12,13]. In this regard, we adopted a multigenomic strategy for the reconstruction and analysis of the metabolism of nucleotides, evaluating the contribution of the origin and diversification of de novo and salvage pathways for nucleotides in the evolution of organisms. In addition, these analyses allow the identification of a metabolic link between the LCA and the first steps in the structure of biological networks [14][15][16]. Our strategy reveals some general rules concerning the adaptation of the first predominant chemical reactions to enzymatic steps in the LCA and allows us to infer environmental issues in the early stages of the emergence of life. In addition, it was possible to determine the presence of de novo biosynthesis pathways of the ribonucleotides and deoxyribonucleotides of uracil and cytosine for pyrimidine metabolism associated with the LCA. Finally, we found differences in the enzymes for nucleotide metabolism that correlated with environmental changes and with associated cellular architecture adaptations; such is the case for the enzymes involved in the synthesis of 5-aminoimidazole ribonucleotide (AIR) to (4-carboxyaminoimidazole ribonucleotide) (CAIR). These findings suggest the presence of HCO 3 − in primitive seas and its use as one of the main carbon sources for the first organisms. Our main results derived from the taxonomical distribution of enzymatic families belonging to nucleotide metabolism, supported by experimental evidence, are described in this report. Results and discussion Taxonomic distribution of nucleotide metabolic enzymes The taxonomic distribution of proteins provides clues concerning the relative occurrence of the enzymes and their branches and paths for the evolution of metabolism [1,8,11,17,18]. 
In this regard, the origin and evolution of nucleotide metabolism was traced in organisms belonging to the three cellular domains, Bacteria, Archaea and Eukarya, by an exhaustive evaluation of the taxonomic distribution of their enzymatic repertoires. Each enzymatic activity encoded by an EC number can exhibit different profiles, which are individual vectors of the presence/absence of homologous enzymes in all the genomes. In total, 151 profiles associated with 120 enzymatic reactions related to nucleotide metabolism were used to scan 1,606 genomes using RPS-BLAST (Additional file 1: Figure S1, Additional file 2: Figure S2, Additional file 3: Table S1, and Additional file 4: Table S2); a computational sketch of this scan-and-aggregate procedure is given below. Based on these comparisons, the evolutionary origins of nucleotide metabolism can be traced, even close to the LCA of all organisms. In this regard, enzymes widely distributed across the three cellular domains are proposed to have been present in the LCA [8,11,18-20]. Alternatively, enzymes constrained to specific clades or cellular domains would suggest adaptations of organisms or cellular domains to specific lifestyles. Based on these considerations, we discuss our most notable results in the following sections.

Evolution of purine metabolism

De novo purine biosynthesis

The de novo biosynthesis of purines, starting from D-ribose-1-phosphate and leading to the production of inosine 5'-monophosphate (IMP), the main intermediate in the synthesis of the guanine and adenine ribonucleotides and deoxyribonucleotides, follows a linear branch. The first step is associated with phosphoglucomutase (EC 5.4.2.2) or phosphopentomutase (5.4.2.7) and the second step with ribose-phosphate diphosphokinase (2.7.6.1); both steps are necessary for the synthesis of 5-phospho-alpha-D-ribosyl-1-pyrophosphate (PRPP) starting from D-ribose-1-phosphate. Based on their taxonomical distribution, the enzymes associated with the 5.4.2.2 and 2.7.6.1 catalytic activities were identified as widely distributed among Bacteria, Archaea and Eukarya, suggesting the probable existence of PRPP biosynthesis in the LCA. Indeed, PRPP is a key precursor in the de novo and salvage pathways for purines and pyrimidines; however, this intermediary is unstable and susceptible to hydrolysis, so it is probable that its abiotic synthesis, if it occurred, was not sufficient to sustain biosynthesis in the LCA [10]. The first step of purine biosynthesis, the conversion of ribose-1-phosphate to ribose-5-phosphate, can be achieved by either of the two enzymes EC 5.4.2.2 and 5.4.2.7. These two enzymes are analogous, since no homology at the sequence or structural level was detected. The enzyme 5.4.2.7 is partially distributed in Bacteria, mainly in organisms associated with a host, such as Streptococcus pneumoniae and Lactobacillus rhamnosus; however, it was not found in archaeal or eukaryal organisms, suggesting that it emerged after the divergence of the LCA, probably as a secondary adaptation to the bacterial host.
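Returning to the profile-scanning method described at the start of this section, the following minimal Python sketch illustrates the two-step normalization: significant profile hits are reduced to 0/1 presence calls per genome, averaged per genus, and then averaged per clade. This is our reconstruction, not the authors' scripts; the rpsblast invocation in the comment, the e-value cutoff, the profile identifier, and all names are assumptions made for illustration.

# Significant hits could first be produced along these lines
# (hypothetical database name; the e-value cutoff is an assumption):
#   rpsblast -query genome.faa -db enzyme_profiles -evalue 1e-5 -outfmt 6
from collections import defaultdict

def presence_per_clade(hits, genome_to_genus, genus_to_clade, profiles):
    # hits: set of (genome, profile) pairs with a significant match.
    genus_genomes = defaultdict(set)
    for genome, genus in genome_to_genus.items():
        genus_genomes[genus].add(genome)

    # Step 1: average 0/1 presence per genus, normalizing for genera
    # that are over-represented among sequenced genomes.
    genus_avg = {
        (genus, p): sum((g, p) in hits for g in gs) / len(gs)
        for genus, gs in genus_genomes.items() for p in profiles
    }

    # Step 2: average the per-genus values within each clade.
    clade_vals = defaultdict(lambda: defaultdict(list))
    for (genus, p), v in genus_avg.items():
        clade_vals[genus_to_clade[genus]][p].append(v)
    return {c: {p: sum(v) / len(v) for p, v in ps.items()}
            for c, ps in clade_vals.items()}

# Toy example with a made-up profile identifier:
hits = {("ecoli1", "PRK00001"), ("ecoli2", "PRK00001")}
g2g = {"ecoli1": "Escherichia", "ecoli2": "Escherichia", "sso": "Sulfolobus"}
g2c = {"Escherichia": "Gammaproteobacteria", "Sulfolobus": "Crenarchaeota"}
print(presence_per_clade(hits, g2g, g2c, ["PRK00001"]))
# -> {'Gammaproteobacteria': {'PRK00001': 1.0}, 'Crenarchaeota': {'PRK00001': 0.0}}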
Starting from the intermediary PRPP, in the linear branch towards IMP biosynthesis, we identified enzymes belonging to five catalytic steps (amidophosphoribosyltransferase, 2.4.2.14; phosphoribosylamine-glycine ligase, 6.3.4.13; phosphoribosylglycinamide formyltransferase, 2.1.2.2; phosphoribosylformylglycinamidine synthase, 6.3.5.3; phosphoribosylformylglycinamidine cyclo-ligase, 6.3.3.1) required for the transformation of PRPP into AIR [21]. Most of these enzymes were identified as widely distributed in the three cellular domains, suggesting their presence in the LCA (Figure 1). The enzymatic step associated with EC 2.1.2.2 is responsible for the transformation of glycinamide ribotide (GAR) to formylglycinamide ribotide (FGAR) and can be carried out by two enzymes belonging to different evolutionary families: PurN, or phosphoribosylglycinamide formyltransferase (Figure 1, gold box), and PurT, or phosphoribosylglycinamide formyltransferase 2 [9]. Proteins of the PurN family use derivatives of folate synthesis as substrates. This family was identified as widely distributed in Bacteria and Eukarya and partially distributed in Archaea. Alternatively, proteins of the PurT family were partially distributed in Archaea and Bacteria and sparsely in Eukarya. It is probable that the PurT enzymatic family was present in the LCA, with later loss events in Eukarya due to its requirement for formate as a substrate. In this regard, formate has been described as a one-carbon donor and one of the main molecules present under prebiotic conditions, prior to folate metabolism [22-24]. In a later phase, the emergence of folate biosynthesis might have facilitated the emergence of PurN (Figure 1, gold box), leading to the co-occurrence of PurN and PurT in the LCA. Indeed, previous works have suggested that the emergence of PurT preceded that of PurN, mainly because PurT utilizes a more primitive substrate, predating the folate-dependent pathway [24]. One of the evolutionary pressures for the selection of PurN instead of PurT in eukaryotic organisms could be the emergence of the mitochondrial respiratory chain.

Figure 1. Route of de novo biosynthesis towards IMP. In red are the enzymatic steps associated with the LCA. In green is the enzymatic synthesis towards CAIR. In pink are the enzymatic steps specifically identified in Archaea, associated with the synthesis of AICAR to IMP. In gold are the folate-dependent enzymatic steps. The asterisk shows the second catalytic activity, IMP cyclohydrolase, achieved by PurH. The precursor PRPP is also indicated.

It has been shown that the PurT substrate, formate, is toxic and binds to cytochrome c oxidase [25,26], uncoupling the redox reactions and favoring the selection of PurN in eukaryotic organisms. Another enzymatic step identified in the purine biosynthetic pathway is carried out by phosphoribosylformylglycinamidine synthase (6.3.5.3), which transforms FGAR to formylglycinamidine ribonucleotide (FGAM). This enzyme is composed of two catalytic subunits, PurQ and PurL. In general, it may occur as a multidomain protein or as a multiprotein complex in which the L and Q subunits are encoded independently. Altogether, we identified a co-occurrence of these subunits in the three cellular domains.
Finally, the last step towards AIR is carried out by phosphoribosylformylglycinamidine cyclo-ligase (EC 6.3.3.1), which transforms FGAM to AIR; this enzyme was found to be widely distributed in the three cellular domains, suggesting its presence in the LCA.

The emergence of the branches towards CAIR is interesting because it allows us to infer processes associated with environmental changes on Earth. In this context, Tribunskikh et al. [29] suggested that the divergence of phosphoribosylaminoimidazole carboxylase (4.1.1.21) to 5-(carboxyamino)imidazole ribonucleotide mutase (5.4.99.18) was a consequence of decreasing atmospheric CO2, which resulted in the addition of 5-(carboxyamino)imidazole ribonucleotide synthase (6.3.4.18), a change of specificity and, consequently, the two-step pathway. Those authors supported their proposal with the fact that the two-step pathway can work at low CO2 levels, under aerobic or anaerobic conditions. However, our data suggest an alternative scenario in which, although the concentration of CO2 in the primitive atmosphere was high, acidic conditions in the early oceans favored the formation of HCO3−. This is consistent with simulations showing that the concentration of HCO3− in the early oceans ranged from 30 to 30,000 times current levels [30]. Interestingly, the enzyme 6.3.4.18, which is associated with the two-step pathway, uses HCO3− as a substrate, and HCO3− is considered the dominant form of CO2 in the early oceans [30,31]; together with our taxonomic distribution data, this supports the notion that this pathway preceded the one in which the enzymatic reaction catalyzed by 4.1.1.21 uses CO2 as the substrate. Indeed, it has been experimentally shown that in aqueous solutions with high concentrations of KHCO3, AIR is easily converted into CAIR in the absence of enzymes [29,32,33]. This mechanism could take place through the accumulation of the intermediate N-CAIR (N5-carboxy-AIR, or N5-carboxyaminoimidazole ribonucleotide), which then undergoes a rearrangement to CAIR. These reactions appear to be the template on which the enzymatic activities of the two-step pathway, widespread in the cellular domains and probably present in the LCA, were built. HCO3− levels probably declined steadily with the reduction of atmospheric CO2, as documented in the evolution of the terrestrial atmosphere [31]. This reduction could have selected for the enzymes 6.3.4.18 and 5.4.99.18, which transform AIR to N-CAIR and N-CAIR to CAIR, respectively. Subsequently, once the atmosphere was provided with oxygen, the emergence of mitochondria and eukaryotic cells became possible. In this regard, HCO3− is one of the main products of mitochondrial respiration, undergoing a pH-dependent conversion to CO2 that turns an impermeable anion into a gas that can diffuse through membranes [34]. The conversion to CO2 of the HCO3− accumulated in cells with mitochondrial activity may have resulted in more efficient regulation of intracellular pH in eukaryotic cells and, in parallel, in the use of CO2 as a carbon source. All these processes might have favored multicellularity, because cells with high nutrient availability and high mitochondrial activity resulting from oxidative respiration could provide a membrane-permeable carbon source to other cells with low nutrient availability.
Altogether, the taxonomic distribution data, chemical synthesis information, and primitive ocean simulation data support the divergence of the enzyme 5.4.99.18 to 4.1.1.21 through the selection of a CO2-binding site. Although the selection of a CO2-binding site in metabolism is not exclusive to this protein family, its selection in these enzymes could have favored the development of multicellularity in eukaryotic cells.

The last two steps in the synthesis of IMP correspond to the transformation of 5-aminoimidazole-4-carboxamide ribonucleotide (AICAR) to 5-formamidoimidazole-4-carboxamide ribotide (FAICAR) and of FAICAR to IMP. The first step (2.1.2.3) requires AICAR formyltransferase activity, whereas the second (3.5.4.10) requires IMP cyclohydrolase activity. The gene for the bifunctional folate-dependent enzyme PurH, which is widely distributed in Bacteria and Eukarya and partially in Archaea, encodes both activities [35]. PurH exhibits two independent catalytic sites, each half of the enzyme catalyzing one reaction. The transformation to IMP by the bifunctional enzyme PurH may have arisen after the emergence of the folate synthesis pathway (Figure 1, gold box). In Archaea, the two catalytic reactions performed by PurH can instead be carried out by different proteins, each achieving one step independently: the first reaction is carried out by the formate-dependent enzyme PurP, whereas the second is achieved by the IMP cyclohydrolase PurO (Figure 1, magenta). In both cases, the catalytic mechanism is very similar to that of the PurH counterpart, and both enzymes are exclusively distributed in Archaea; however, these two enzymes show no relationship to PurH at the sequence or structural level [9,35]. In summary, these results suggest that PurP and PurO emerged after the archaeal divergence. In the case of PurO, we did not find homology relationships with any other enzymatic family associated with nucleotide biosynthesis. In addition, we found a correlation between the presence of PurP and PurO; however, some archaeal clades exhibit neither PurH nor PurO, which suggests that alternative enzymes performing the PurO function are probably present and remain to be described.

Purine salvage pathway

Adenine ribonucleotide salvage pathway

In the purine salvage pathway, diverse routes to generate the adenine and guanine ribonucleotides and deoxynucleotides were identified. The first pathway starts from hypoxanthine, via hypoxanthine-guanine phosphoribosyltransferase (PRTase) (2.4.2.8) (Figure 2), an enzyme with broad specificity that is universally distributed in the three cellular domains. This enzyme achieves the biochemical transformation of hypoxanthine and PRPP to IMP, with subsequent transformation to the adenine ribonucleotide through the de novo adenylosuccinate pathway, whose enzymes are widely distributed in the three cellular domains (Figure 2). For the transformation to the corresponding deoxyribonucleotide, there are two routes: the pathway associated with 1.17.4.1, comprising the ribonucleoside-diphosphate reductase beta-subunit (small subunit) and the ribonucleoside-diphosphate reductase alpha-subunit (large chain family), and the 1.17.4.2 pathway, which comprises the adenosylcobalamin-dependent ribonucleoside-triphosphate reductase and the anaerobic ribonucleoside-triphosphate reductase complex (NrdD and NrdG).
The first of these pathways for the formation of deoxyribonucleotides is widely distributed in the three cellular domains, suggesting its presence in the LCA. The enzymes of the 1.17.4.2 pathway, by contrast, are partially distributed in Bacteria and Archaea but absent in Eukarya, which makes it difficult to determine its presence in the LCA. This enzyme may have become specialized for the adenine substrate after the divergence of the LCA, from a broad-specificity ancestor similar to the HPRTases (2.4.2.8).

Route of salvage of guanine ribonucleotides

Starting from the first salvage route described above, which begins with hypoxanthine-guanine phosphoribosyltransferase (2.4.2.8), it is also possible to synthesize the guanine ribonucleotide by adding two enzymatic families that are widely distributed in the three cellular domains: inosine-5'-monophosphate dehydrogenase 1 (1.1.1.205) and the GMP synthase (glutamine-hydrolyzing) subunits A and B (6.3.5.2) (Figure 2). Because hypoxanthine-guanine phosphoribosyltransferase (2.4.2.8) also exhibits specificity for guanine, it is possible to synthesize guanosine 5'-monophosphate (GMP) in one step from guanine and PRPP in the salvage pathway. However, although the subsequent step (2.7.4.8), performed by the guanylate kinase family to transform GMP to GDP, is widely distributed in Eukarya and Bacteria, in Archaea only the reaction, not the enzyme, has been identified (Figure 2, star) [36]. These data limit the genomic and evolutionary comparisons between Bacteria and Eukarya needed to establish the possible presence of guanine deoxyribonucleotides in the LCA. However, the subsequent steps in the synthesis of GTP from GDP are carried out by nucleoside diphosphate kinase (2.7.4.6) and pyruvate kinase (2.7.1.40), both widely distributed in the three cellular domains, and the conversion of GTP to dGTP by the widely distributed ribonucleoside diphosphate reductase (1.17.4.1) suggests the presence of GTP and dGTP in the LCA. Therefore, although the gene sequence associated with the guanylate kinase function in Archaea has not yet been identified, we suggest that this protein could share a common origin with its counterparts in Bacteria and Eukarya, because most nucleotide kinases belong to the P-loop-containing nucleoside triphosphate hydrolase family, which shows broad specificity in the recognition of ribonucleotides and deoxyribonucleotides [37]. In this regard, it has been reported that only two mutations are enough to introduce adenylate kinase activity into guanylate kinase [38], suggesting a possible masking of the guanylate kinase assignment by adenylate kinase. In addition, it is possible that an enzyme with broad specificity similar to adenylate kinase (Figure 3, star) recognized NMPs and converted them into NDPs, with its specialization toward the guanylate kinase function occurring after the divergence of the three cellular domains. At present, members of the adenylate kinase family, and in particular of the AK6 subfamily, have been described as proteins able to transform different NMPs to their corresponding NDPs; specifically, the AK6 subfamily transforms AMP, dAMP, CMP, dCMP, IMP, and GMP to their corresponding NDPs and dNDPs [39,40].
This finding also suggests that an ancestral enzyme with broad specificity for the synthesis of ribonucleotides and deoxynucleotides could have been utilized in the LCA. It is also interesting that this enzyme could have used IMP as a substrate, since this nucleotide could have had an active role in the evolution of nucleic acids and the genetic code [41]. Interestingly, our results show that Archaea contain an adenylate kinase of the AK6-like subfamily, as do Eukarya and Bacteria, suggesting that the guanylate kinase function in archaeal organisms, which has not yet been identified, could be carried out at least in part by enzymes of this subfamily.

Integrating the evolutionary analysis of the salvage and de novo purine pathways

The taxonomic distribution of enzymes associated with purine metabolism shows that segments of the de novo and salvage pathways were complementary and critical to the availability of nucleic acids before the divergence of the three cellular domains. This finding correlates with the chemical synthesis, under prebiotic conditions, of nitrogenous base precursors for the purine salvage pathway, as previously described [10,13,42,43] (Figure 3, blue circles). The findings of those previous studies agree with the fact that these routes were dependent on PRPP biosynthesis, since it has been argued that PRPP, nucleosides, and nucleotides are susceptible to hydrolysis and thus are very unlikely prebiotic compounds.

Figure 2. Salvage routes of nucleotides and de novo biosynthesis starting from IMP. In red are the enzymatic steps associated with the LCA; in pink are enzymatic steps whose taxonomical distribution pattern is too ambiguous to associate them with the LCA; in blue are enzymatic steps not associated with the LCA. The enzymatic step associated with the guanylate kinase family is indicated with a star.

In addition, our data show a wide taxonomic distribution in the three cellular domains of the enzymes associated with the last two enzymatic steps (5.4.2.2 and 2.7.6.1) required for the transformation of ribose-1-phosphate to PRPP (Figure 3, yellow circle), supporting the biosynthesis of this molecule in the LCA. Despite the possible absence of folate and the folate-dependent enzymes PurH and PurN in the early stages of the emergence of life (Figure 1, gold boxes), most of the enzymes associated with the de novo biosynthesis pathway for IMP may have occurred in the LCA (Figure 1), based on their wide taxonomical distribution in the three cellular domains. After the early stages of life's emergence and the advent of folate biosynthesis, the appearance of PurH could have completed the purine biosynthesis pathway, filling the gap between AICAR and IMP (Figure 1, PurH is shown in the gold boxes). In addition, the fact that PurP and PurO have been identified exclusively in Archaea, without homology to PurH, suggests that these proteins were generated after the divergence of Archaea (Figure 1, magenta boxes). Before the emergence of folate synthesis, and consequently of PurH, it is possible that, in addition to the salvage routes, alternative semienzymatic routes existed in which the substrates 5-amino-4-imidazole carboxamide (AICA) and PRPP were transformed to AICAR by an ancestral phosphoribosyltransferase (PRTase), with subsequent nonenzymatic transformations to IMP, as previously proposed [10] (Figure 3, white boxes).
Specifically, adenine phosphoribosyltransferase (PRTase) (2.4.2.7) synthesizes adenine ribonucleotides in the adenine salvage pathway and also transforms AICA to AICAR. This enzyme may have become specialized after the divergence of the LCA, from a broad-specificity ancestor similar to the HGPRTase (2.4.2.8); indeed, the latter enzyme was found to be universally distributed among the three cellular domains and exhibits a similar catalytic mechanism. In evolutionary terms, the enzymes 2.4.2.7 and 2.4.2.8 belong to the phosphoribosyltransferase superfamily, suggesting a common evolutionary origin. Based on the data described above, we suggest that an ancestral PRTase similar to hypoxanthine-guanine phosphoribosyltransferase (2.4.2.8) (Figure 3, star), with broader specificity for diverse structurally related substrates such as guanine, xanthine, hypoxanthine, adenine, and AICA, was present in the early stage of the emergence of life. This ancestral enzyme, with prebiotic origins, could have been involved in the transformation of AICA (Figure 3, light blue circle) [10] to AICAR, not only to feed semienzymatic IMP synthesis, as previously proposed [10] (Figure 3, white box), but also, to a greater extent, thiamine (vitamin B1) biosynthesis. It is interesting that two pathways feed thiamine metabolism: as a bifurcation of the de novo IMP pathway and also with AICA in backflow as a seed substrate (Figure 3, following the direction of the dashed arrows).

Figure 3. Purine biosynthetic pathway associated with the LCA. In red are all the steps associated with the LCA. In white is the semienzymatic pathway associated with an early stage of cell development, previously proposed by Becerra et al. [10]. The proposed ancestral histidine biosynthesis that feeds AICAR is displayed in a box with bars. Substrates with prebiotic origins are shown in blue circles. Ribonucleotides that may have occurred in the LCA are in purple circles. Thiamine (vitamin B1) metabolism is in gray. Dashed lines indicate AICAR in backflow as a seed substrate towards AIR.

Additionally, the thiamine pathway may also have been fed from a third source by means of histidine metabolism (Figure 3, box with bars); the connection to purine biosynthesis results from an enzymatic step catalyzed by imidazole glycerol phosphate (IGP) synthase, which transforms N-(5-phosphoribosyl)-formimino-5-aminoimidazol-4-carboxamide ribonucleotide (PRFAR) into AICAR, which is then recycled into the de novo purine biosynthetic pathway, and imidazole-glycerol 3-phosphate, which leads to histidine. Interestingly, previous works have also suggested that the histidine biosynthetic route is ancient and related to the emergence of life [18]. Our proposal of an ancestral branch related to thiamine synthesis is consistent with the catalytic functions that have been proposed for this molecule in the early evolution of life, as suggested by its essential catalytic role in most organisms and its requirement at several central points of anabolic and catabolic intermediary metabolism [44], as well as in semienzymatic pathways that may have preceded the current ones [45]. It is interesting that, early in the emergence of life and before the constitution of the LCA, this branch could have fed thiamine metabolism in a semienzymatic way, since the AICAR-to-AIR transformation can occur through a facile, nonenzymatic chemical synthesis pathway [46].
In this regard, molecules of thiamine or its derivatives bind to mRNA in the absence of cofactors or proteins in the three domains of life, forming a complex that sequesters the ribosome binding site; this suggests the existence of an ancestral form of riboswitches, which have been implicated in regulatory mechanisms [47]. Additionally, thiamine could have interacted with RNA, leading to catalytically versatile ribozymes related to the RNA world, owing to its catalytic and RNA-binding capabilities [44]. Finally, we suggest that the guanylate kinase function could have been carried out in the LCA by an ancestral enzyme with broad specificity for nucleoside monophosphates (NMPs), similar to those of the AK6 adenylate kinase subfamily; its specialization to guanylate kinase could have occurred after the archaeal divergence. In parallel, this ancestral enzyme may have provided the nucleotide inosine (Figure 3, pink circle), which has been suggested to play an active role in a rudimentary stage of the genetic code [41]. ITP can compete with or replace ATP and GTP at the binding sites of diverse proteins, such as RNA polymerase [48,49], which may have maintained their affinity and specificity for ITP as a remnant of its ancestral role. In addition, inosine has a strong structural similarity to guanine [50], and it may even pair with cytosine in codons in the same way guanine does. Currently, inosine is found in the third position of anticodons, pairing with codons ending in U, C or A and thereby decreasing the need for 61 tRNAs, one for each sense codon. In this regard, it has been suggested that inosine may have been produced by adenosine deamination, or even by RNA-mediated catalysis, during an early stage of the emergence of the genetic code, and that it was excluded from nucleic acids when canonical Watson-Crick pairing evolved, to avoid ambiguous rules in replication [41]. After the divergence of Archaea from the LCA, the specialization and divergence to guanylate kinase of broad-specificity members of the ancestral AK6 subfamily type could have increased the availability of guanine, favoring the replacement of inosine by guanine.

Evolution of pyrimidine metabolism

De novo pyrimidine biosynthesis

Based on taxonomical distributions, we evaluated the enzymes associated with de novo pyrimidine biosynthesis, including the carbamoyl-phosphate synthase large chain and small chain (6.3.5.5) (Figure 4). These enzymes form the branch of UTP de novo biosynthesis starting from L-glutamine. This entire branch may have occurred in the universal ancestor, based on the wide taxonomical distribution of the enzymes that compose it (Figure 4, red boxes). One of the most interesting steps of this pathway is the conversion of dihydroorotate to orotate, which is carried out by enzymes classified into two families of dihydroorotate dehydrogenases (DHODs), according to the terminal electron acceptor and relationships at the sequence level [51]. Family 1 uses soluble electron acceptors; it is widely distributed in gram-positive Bacteria, Archaea and some unicellular eukaryotic organisms. In turn, this family is subdivided into DHODA (1.3.98.1) and DHODB (1.3.1.14), which are homodimeric and heterotetrameric, respectively. DHODA uses fumarate as the electron acceptor, whereas DHODB uses NAD+ [52]. Members of family 2 (1.3.5.2) are linked to the cell membrane and use quinones from the respiratory chain as electron acceptors [53].
These enzymes are mainly found in most eukaryotic organisms and gram-negative bacteria, in agreement with our taxonomical distribution results. In spite of these differences, both DHOD families belong to the FMN-linked oxidoreductase superfamily, suggesting a common ancestor. This ancestral enzyme was probably similar in functional terms to members of family 2 (Figure 4, star), judging by its electron acceptor molecules, which have been described as among the most abundant in extraterrestrial environments and which could have been delivered to Earth around 4 billion years ago, contributing to the prebiotic conditions needed for the emergence of life. Indeed, quinones have been found in meteorites in considerable amounts and have also been synthesized in good yields in cloud chamber simulations [54,55]. Additionally, quinones partition spontaneously into model membrane systems, which would have represented an evolutionary advantage for early organisms by providing some protection against UV radiation in the early Earth environment [56], and they were later exploited for their capacity to pump protons across membrane bilayers [57]. In a later step, changes associated with the cell membrane and cell wall could have led to the divergence of the DHODs from family 2 to family 1, via the incorporation of soluble electron acceptors. Strikingly, our proposal for DHOD family divergence is consistent with previous reports describing a transition from gram-negative to gram-positive bacteria [58,59] and with drastic changes in the chemical constituents of the archaeal cell membrane, such as glycerol stereochemistry [60], after the divergence of the LCA. Furthermore, the subdivision of family 1 might have proceeded from DHODB to DHODA, since most DHODA enzymes have been found in gram-positive bacteria adapted to parasitic or symbiotic relationships, as shown by our taxonomical distribution results. DHODA uses fumarate instead of oxygen as the terminal electron acceptor, producing succinate; this is one of the essential processes controlling redox homeostasis in many parasites living under anaerobic conditions [53].

Concerning the transformation to deoxyuridine, two main routes have been described. One pathway starts from deoxy-CDP, whereas the second starts from deoxy-CMP. The deoxy-CDP route requires two enzymatic steps: the first, carried out by nucleoside diphosphate kinase (2.7.4.6), produces dUTP, which is subsequently transformed to dUMP by inosine triphosphate pyrophosphatase 1 (nucleoside-triphosphate pyrophosphatase 1; 3.6.1.19). These two enzymes were identified as universally distributed in the three cellular domains and can therefore be associated with the LCA (Figure 5, red boxes). The second pathway converts dCMP to dUMP via cytidine/deoxycytidylate deaminase, associated with the catalytic activity 3.5.4.12, whose members are partially distributed among the three cellular domains. Finally, a third pathway, which starts from dCTP and is catalyzed by deoxycytidine triphosphate deaminase (3.5.4.13), is absent in eukaryotes and partially distributed in Bacteria and Archaea. The assignment of the second and third pathways towards deoxyuridine to the LCA must be weighed against the complex evolutionary history of their enzymes.
Finally, the transformation of deoxy-UMP to deoxy-TMP can be carried out by two folate-dependent enzymes, thymidylate synthase ThyX (2.1.1.148) and thymidylate synthase ThyA (2.1.1.45) [9] (Figure 5, gold boxes). These enzymes are not homologous, suggesting independent evolutionary origins. Both enzymes promote methylation using 5,10-methylenetetrahydrofolate (CH2-H4 folate) as the carbon donor. ThyA also uses CH2-H4 folate to produce dihydrofolate (H2-folate). In contrast, ThyX uses flavin adenine dinucleotide (FAD) and NAD(P)H as cofactors to form reduced tetrahydrofolate (H4-folate) [61].

Figure 4. The de novo pyrimidine biosynthesis pathway towards UMP. The enzymes associated with the LCA are shown in red boxes. The enzymatic step associated with the proposed ancestral dihydroorotate dehydrogenase family is denoted with a star.

ThyX is partially distributed in Bacteria and Archaea but absent in eukaryotes; its counterpart, ThyA, is partially distributed in Bacteria, sparsely distributed in Archaea and widely distributed in Eukarya. In this context, it has been suggested that the catalytic differences between ThyA and ThyX influenced the evolution of bacterial genomes [61,62]. In this regard, ThyA is 10 times more catalytically efficient than ThyX, and previous studies of more than 400 prokaryotic genomes have revealed that the catalytic capacity associated with ThyA correlates with its presence in organisms with large genome sizes [61,62]. Our taxonomic distribution results are consistent with those of earlier studies, which showed an anticorrelation in the presence/absence of these two enzyme families across bacterial clades, also described as mutual replacement events between the two families [63], indicating that bacterial metabolism has modulated the size and composition of bacterial genomes. Additionally, our data show the complete absence of ThyX and the universal presence of ThyA in eukaryotic genomes, suggesting that the influence of ThyA was a determinant of eukaryotic genome size and of those organisms' evolutionary potential. In this context, it is difficult to determine whether either of these families was present in the LCA, given their complex taxonomical distribution.

Pyrimidine salvage routes

Two main salvage routes for the uracil ribonucleotide start from uracil. The first pathway comprises uridine kinase (2.7.1.48) and uridine phosphorylase (2.4.2.3). Uridine phosphorylase was found sparsely distributed in the three cellular domains and uses preformed uracil as a substrate. Uridine kinase is widely distributed in Bacteria and Eukarya and sparsely distributed in Archaea. Therefore, based on these taxonomical distribution patterns, it is difficult to determine whether all these enzymes were present in the LCA, given their low distribution in archaeal genomes, which may also reflect horizontal transfer events. In a second pathway, pyrimidine-nucleoside phosphorylase (2.4.2.2) is only partially distributed in Bacteria, suggesting that this pathway was not present in the LCA. In a third salvage pathway, a single enzymatic step, using uracil phosphoribosyltransferase (2.4.2.9), is required; this enzyme is widely distributed in the three cellular domains, suggesting its presence in the LCA (Figure 6B).
Regarding the salvage routes for cytosine ribonucleotides, the main pathway occurs in two steps, involving uridine-cytidine kinase (2.7.1.48) and pyrimidine-nucleoside phosphorylase (2.4.2.2), starting from cytosine as the substrate. Based on their taxonomical distributions, the first enzymatic family, uridine-cytidine kinase, does not, as described above, exhibit a distribution pattern clear enough to place it in the LCA, while the enzyme of the second step is only sparsely distributed in Bacteria. In addition, there is another cytosine deoxyribonucleotide salvage pathway that starts from deoxycytidine, in which deoxyadenosine/deoxycytidine kinase (2.7.1.74) participates (Figure 6B); this enzyme, in turn, is only sparsely distributed in Eukarya. In the salvage pathway of deoxythymine, two enzymatic steps are involved, one with thymidine kinase (2.7.1.21) and the other with thymidine phosphorylase (2.4.2.4), starting from thymine as the substrate (Figure 6A). The enzyme of the first step is partially distributed in Bacteria and more sparsely distributed in Eukarya and Archaea, while the enzyme of the second step is sparsely distributed in the three cellular domains. It is difficult to determine whether this pathway was present in the LCA or whether its distribution reflects horizontal gene transfer or massive gene loss, given the poor distribution of its enzymes in the three cellular domains.

Integrating the evolutionary analysis of the de novo and salvage pathways of pyrimidine metabolism

Based on the taxonomical distributions of the enzymes associated with the de novo and salvage pathways for pyrimidines, it was possible to identify, in addition to the de novo pathway, a widespread salvage pathway (via 2.4.2.9) for the synthesis of the uracil ribonucleotide (Figure 6B), suggesting that the LCA had two pathways for its synthesis. On the other hand, for the synthesis of uracil deoxyribonucleotides, two main one-step routes have been identified: the first starts from dCTP, converted by the enzyme 3.5.4.13, and the second starts from dCMP, converted by the enzyme linked to 3.5.4.12. The wide taxonomic distribution of the 3.5.4.12 enzyme in the three cellular domains suggests that the second pathway was present in the LCA (Figure 7).

The taxonomical distribution of the enzymatic families associated with the DHODs suggests that their divergence correlated with the transition from gram-negative to gram-positive bacteria and was shaped by similar evolutionary pressures. These pressures could be associated with a variety of environmental changes, such as increases in atmospheric oxygen levels or temperature, or changes from water to soil habitats. Such changes could have modified the chemical properties of the plasma membrane, resulting in the divergence of the membrane-linked family 2 enzymes (1.3.5.2), which use quinones from the respiratory chain as electron acceptors, to the family 1 enzymes, which incorporate soluble electron acceptors. This transition from gram-negative to gram-positive was also suggested by Cavalier-Smith [59,60]; together with our taxonomic distribution data showing that family 1 is ubiquitous in gram-positive bacteria and Archaea, this leads us to suggest close evolutionary relationships between Archaea and gram-positive bacteria.
Although gram-negative bacteria and Eukarya contain subfamily 2 of the DHODs, it is not possible to deduce an evolutionary relationship, as we previously identified between gram-positive bacteria and Archaea, because members of this subfamily in Eukarya are linked to the mitochondrial membrane. In this regard, the mitochondrial acquisition in Eukarya has been described as a probable lateral gene transfer event, consistent with the endosymbiont theory [64]. Therefore, our results support the notion of phagocytosis of a gram-negative bacterium by a protoeukaryotic cell, with subsequent specialization into mitochondria. Additionally, the de novo synthesis of cytosine ribonucleotides and deoxyribonucleotides was probably associated with the LCA. In the de novo biosynthesis of thymine, the folate-dependent enzymes ThyA and ThyX can catalyze the transformation of this metabolite independently, suggesting that this pathway appeared after the emergence of folate biosynthesis. For the enzymes of the thymine salvage pathway, it was not possible to determine their presence in the LCA.

Conclusion

The analysis presented here is based on multiple complete genomes belonging to organisms from the three cellular domains, along with current biochemical knowledge. These analyses allowed us to address questions related to the origin and evolution of nucleotide metabolism. One of our main findings is that we were able to assess the ancestry of some segments of the purine salvage and de novo pathways, which could be complementary and closely related to the LCA of the three cellular domains. Additionally, it was found that a large part of the de novo purine branch is widely distributed in Bacteria, Archaea and Eukarya, primarily towards the de novo biosynthesis of IMP (a key precursor of purines). This branch may have been associated, in the early stages of cell evolution, with the metabolism of thiamine (vitamin B1) and was later complemented by the addition of two new enzymatic steps, via the folate-dependent PurH enzyme, to complete the IMP biosynthesis pathway, giving rise to the modern de novo synthesis of purines. The ancestry and divergence of the enzymes associated with these routes provide clues to the environmental changes in the early stages of the emergence of life. Such is the case for the divergence of the enzyme phosphoribosylaminoimidazole carboxylase (4.1.1.21) from N5-carboxyaminoimidazole ribonucleotide mutase (5.4.99.18); the divergence of these two enzymes supports the hypothesis of the origin of life in primitive seas with high levels of HCO3−. Once the atmosphere became oxygenated and the first eukaryotic organisms with mitochondria emerged, the enzyme 5.4.99.18 diverged into 4.1.1.21 through the acquisition of a CO2 binding site. This divergence, among other metabolic changes, may have facilitated the emergence of the first eukaryotic multicellular organisms. In the case of pyrimidines, we could infer that the LCA synthesized uracil ribonucleotides by both the de novo and salvage pathways, suggesting that this ribonucleotide could have been involved in a great number of enzymatic functions and/or regulatory roles as a remnant of the RNA world. Additionally, it was possible to associate the synthesis of cytosine and uracil deoxyribonucleotides with the LCA; once folate biosynthesis became possible, the thymine deoxyribonucleotides emerged, owing to their enzymatic dependence on the folate precursor.
Before the emergence of folate biosynthesis, some variants of these three bases, including cytosine or methylated cytosine derivatives, which resemble thymine, could have played a role in DNA similar to that of present-day thymine. This inference is supported by the fact that several analogs of cytosine and uracil partially integrate into DNA as replacements for thymine [65-67]. Thus, thymine has no active role in transcription and is therefore more readily replaceable. This argument is consistent with experiments on the evolution of bacterial strains [68], which show that strains can be generated with the ability to incorporate uracil derivatives (chlorouracil), replacing up to 98% of their thymine while maintaining cell viability. In relation to purine synthesis, our results revealed that the biosynthesis of adenine could have been carried out in the LCA by the adenylate kinase. In the case of guanine biosynthesis, we found a complex evolutionary history; for instance, its synthesis has been detected in archaeal organisms even though the corresponding gene sequence has not been identified [36]. The fact that we did not find the bacterial-type guanylate kinase in Archaea suggests three possible scenarios: a) high sequence divergence between these proteins, b) the function being masked by some other enzyme, and/or c) gene loss or nonorthologous gene displacement. As discussed above, it is likely that the guanylate kinase function has been masked by the AK6 subfamily of adenylate kinases. This enzyme is homologous to the canonical guanylate kinase of Eukarya and Bacteria and is probably closer to the possible ancestral enzyme present in the LCA, with promiscuous activity in the conversion of NMPs and dNMPs to their respective NDPs and dNDPs. In addition, we found this multispecific AK6-type adenylate kinase in Archaea, suggesting that it could have been inherited from the LCA with no major subsequent changes. In contrast, the guanylate kinase of Bacteria and Eukarya may have diverged to a greater extent from the adenylate kinase family subsequent to archaeal divergence. The structural similarity of inosine and guanine, the residual affinity for inosine of several proteins such as RNA polymerase, and the taxonomical distribution data showing a possible LCA route for the synthesis of ITP (Figure 3, magenta circle) by means of a broad-specificity ancestral enzyme similar to the AK6-type adenylate kinase together suggest that this base could have played an important role in early cell evolution, as previously proposed [41]. The ancestral enzyme might have provided both IDP and GDP for subsequent processing to NTPs, with both playing similar informational roles in the genetic code based on their structural similarities. The subsequent specialization of guanylate kinase may have facilitated a greater availability of guanine nucleotides to replace inosine, thus avoiding ambiguous base-pairing rules in DNA replication, an advance achieved during this consolidation stage of the genetic code. Interestingly, the divergence of the DHOD families in the LCA suggests transitions associated with changes in the cell wall and cell membrane, supporting an order of divergence from gram-negative-like cell walls and Eukarya/Bacteria-like membranes towards gram-positive cell walls and/or Archaea-like membranes.
Since the plasma membrane is considered a matter of vertical inheritance, we suggest that the divergence of the DHOD family can be associated with the divergence of Archaea and gram-positive bacteria.

Profiles (RPS-Blast)

In order to select the enzymatic numbers and their corresponding enzymes belonging to nucleotide metabolism, the KEGG [2] and MetaCyc [69] databases were exhaustively explored. In total, 120 enzymatic numbers and their corresponding enzymes were collected (Additional file 5: Table S3 and Additional file 6: Table S4). In a second step, RPS-Blast profiles were used to search for the occurrence of members of enzyme families in complete genomes. These profiles were extracted from PRIAM, a database specialized in the detection of enzymatic sequences [70]. This database encompasses representative profiles built from manually annotated alignments of members of a particular enzyme family, according to the Enzyme DB [71]. In addition, an enzymatic step can be associated with more than one profile, as in the case of protein complexes, families of nonhomologous proteins with the same function (analogous enzymes), or steps performed by members of the same family (paralogs). Altogether, profiles were curated manually based on functionally annotated domains according to the Enzyme DB [71].

Enzymatic function

For the analysis of enzymatic functions, the best hit for each sequence, with an E-value of ≤10⁻¹⁰ and a coverage of ≥55% relative to the profile, was considered. Similar criteria have previously been described for the enzymatic annotation of complete genomes [72].

Taxonomical distributions

In total, 151 sequence profiles associated with 120 enzymatic reactions related to nucleotide metabolism were evaluated in 2,044 complete genomes. The evaluation was based on profile comparisons, using RPS-Blast against the 2,044 complete genomes. Organisms classified as obligate parasites or with a reduced genome (i.e., fewer than 1,000 genes) were not considered in this study, with the aim of excluding a possible bias associated with massive gene loss, as previously described [1,8], leaving a total of 1,606 of the 2,044 complete genomes. In order to exclude redundancy among the genomes analyzed, we clustered the organisms based on their taxonomical classifications to obtain a normalized measure of the taxonomical distribution of enzymes, according to the following steps. In the first step, we obtained the average presence/absence of enzymes per genus, which in turn was used in the second step to calculate the average presence/absence of enzymes per clade. Clades corresponded to the taxonomical categories from the Joint Genome Institute's Integrated Microbial Genomes. Finally, we considered enzymes to be widely distributed when they were present in more than 50 percent of the clades of a given cellular domain (Additional file 1: Figure S1 and Additional file 2: Figure S2).
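The filtering and two-step normalization described above lend themselves to a direct implementation. The following is a minimal sketch in Python, not the authors' actual code: the record fields, the `hits` input format, and the rule that a clade counts as "having" an enzyme when its average is nonzero are illustrative assumptions.

```python
from collections import defaultdict

E_VALUE_CUTOFF = 1e-10   # best-hit threshold used for enzymatic annotation
COVERAGE_CUTOFF = 0.55   # minimum coverage relative to the profile

def annotate(hits):
    """Keep, per genome, the enzyme profiles whose best hits pass both cutoffs.

    `hits` is assumed to be an iterable of dicts with the keys:
    genome, genus, clade, domain, ec_number, evalue, coverage.
    Genomes with no passing hit simply do not appear (seeding every
    genome with an empty set is omitted here for brevity).
    """
    present = defaultdict(set)
    for h in hits:
        if h["evalue"] <= E_VALUE_CUTOFF and h["coverage"] >= COVERAGE_CUTOFF:
            key = (h["domain"], h["clade"], h["genus"], h["genome"])
            present[key].add(h["ec_number"])
    return present

def clade_distribution(present, ec):
    """Two-step normalization: genomes -> genus averages -> clade averages."""
    genus_vals = defaultdict(list)   # (domain, clade, genus) -> 0/1 per genome
    for (domain, clade, genus, _genome), ecs in present.items():
        genus_vals[(domain, clade, genus)].append(1.0 if ec in ecs else 0.0)
    clade_vals = defaultdict(list)   # (domain, clade) -> genus averages
    for (domain, clade, _genus), vals in genus_vals.items():
        clade_vals[(domain, clade)].append(sum(vals) / len(vals))
    return {k: sum(v) / len(v) for k, v in clade_vals.items()}

def widely_distributed(clade_dist, domain, cutoff=0.5):
    """An enzyme is 'widely distributed' in a domain when present in more
    than 50% of that domain's clades (presence taken as a nonzero clade
    average; the exact presence rule is an assumption)."""
    clades = [v for (d, _c), v in clade_dist.items() if d == domain]
    return bool(clades) and sum(1 for v in clades if v > 0) / len(clades) > cutoff
```

Averaging per genus before averaging per clade keeps heavily sequenced genera (e.g., model organisms with many strains) from dominating the clade-level signal, which is the point of the normalization.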
Kinesiophobia dilemma for older adults: a systematic review

Kinesiophobia is one of the complications of pain and may eventually cause disability. Several studies have shown correlations between age-related problems and kinesiophobia. The objective was to investigate clinical trials about managing kinesiophobia among older adults aged 65+ years published until March 2020. The PubMed, CINAHL, Google Scholar, and PsycINFO databases were electronically searched until March 2020. All studies about kinesiophobia with a clinical trial or randomized trial design among older adults aged 65+ years were included in the review. Two sets of search terms, 'kinesiophobia AND intervention' and 'fear of movement AND intervention', were used. From a total of 2669 articles, after exclusions for different reasons, only three articles related to the objectives of this study remained, with a total of 87 participants (mean age 68.5), all from Turkey. Two of them evaluated two different physiotherapy approaches to manage neck pain and low back pain, and one concerned falls. Kinesiophobia was used as a measure of the effectiveness of the treatments. Older adults with routine and properly designed exercise and activity are healthier, with a lower probability of disability and therefore a higher quality of life and a longer healthy life. But to reach those goals, age-related diseases and barriers should be investigated.

Introduction

Kinesiophobia is one of the complications of pain and may eventually cause disability. 1,2 Based on the definition given by Kori et al., kinesiophobia is an excessive, irrational, and debilitating fear of physical movement and activity resulting from a feeling of vulnerability due to painful injury or reinjury. 3 In simple words, individuals suffering from pain might not engage in activities because they fear re-injury or aggravation of the original injury. For a number of researchers, the importance of kinesiophobia in lowering the function and performance of older adults, and eventually causing disability, exceeds that of the initial pain itself. 4 Several studies have shown correlations between chronic pain, 5,6 low back pain, 1,2,7-9 neck pain, 2,7-9 cardiovascular diseases, pulmonary diseases, 10 depression 11 and falls 8,12-15 on the one hand and higher kinesiophobia and lower quality of life (QoL) on the other. Factors influencing the level of physical activity among older adults are not unimodal: besides sex, age, physical impairments, and smoking, elements such as pain, its characteristics, and kinesiophobia should be considered. 6 For example, patients with low back pain have a lower level of physical activity. 16 Larsson et al. showed a significant correlation between pain, kinesiophobia and physical activity, 6 and Swinkels-Meewisse et al. showed that personal and social activities could be improved by intervening against kinesiophobia. 17 Aging is accompanied by osteoporosis, 18 cognitive impairment, cardiovascular and pulmonary diseases such as chronic obstructive pulmonary disease, 19 diabetes, depression, musculoskeletal disorders 20 such as low back pain 7,21 and neck pain, 9 falls, 13,14,22,23 frailty, 24 and chronic pain. 6 Most of these problems result in reduced functional capacity, decreasing performance in activities of daily living, increasing dependency, and finally lowering QoL. 2,4,5,7,19
In addition to the projected increase in the older adult population by 2050, 20,25-27 this group faces more physical limitations and disabilities than individuals younger than 60 years, 25 and beyond medical problems it has many social, political and economic concerns. 12 Studying kinesiophobia among older adults needs to be considered by researchers because of the abundance of medical problems among older adults, the correlation between kinesiophobia-induced problems and the level of physical activity, the complexity of physical activity, the fact that individuals of any age with high physical activity have improved health, 28 and the benefits of physical activity 6 in preventing problems such as falling. 22,23 An adequate response to the needs of older adults is possible through proper and comprehensive assessment and by providing better conditions for healthy aging. 12,20 Therefore, the main objective of this review is to investigate clinical trials about managing kinesiophobia among older adults aged 65+ years published until March 2020.

Search strategy

The Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P, 2015 statement) was selected as the search protocol because it is comprehensive and there is consensus among researchers about its advantages. 29,30 The PubMed database was electronically searched until March 2020. To retrieve as many articles as possible, the search terms 'kinesiophobia AND intervention' were used. Restrictions to humans, clinical trials, and participants aged 65+ years were applied in PubMed. Other sources and databases, including CINAHL, Google Scholar, and PsycINFO, were also searched. 'Fear of movement' is considered a synonym for kinesiophobia; therefore, a search with the terms 'fear of movement AND intervention' and the same restrictions was also conducted in PubMed.

Selection criteria

All studies about kinesiophobia with a clinical trial or randomized trial design among older adults aged 65+ years were included in the review. Systematic reviews, meta-analyses, longitudinal studies without intervention, editorials, case reports, cross-sectional studies, and descriptive studies were excluded. All studies were read by the author.

Results

From a total of 2669 articles, after exclusions for different reasons (Figure 1), only three articles related to the objectives of this study remained, with a total of 87 participants (mean age 68.5), all from Turkey (Table 1). These studies had small sample sizes (fewer than 80 individuals each). Two of them evaluated two different physiotherapy approaches to manage neck pain and low back pain, and one concerned falls. Kinesiophobia was one of the measures of treatment effectiveness. These studies used one type of blinding in their design. Only one of them assessed the correlations between outcome measurements. All of the studies evaluated mostly physical variables related to the problem, and mental and psychological variables were not examined independently. Sociodemographic characteristics of the samples were not fully presented in any of the included studies. Although most of the articles were excluded for reasons such as study design and different sample ages, there were studies whose samples included individuals aged 65+ years, but none of them reported any age-specific results. Some characteristics of the included studies that can produce bias are shown in Table 2.
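For reproducibility, the PubMed portion of the search strategy described above can be approximated programmatically. The following is a minimal sketch assuming Biopython's Entrez module and standard PubMed field tags (the MeSH term "Aged" covers 65+ years); the exact filters applied through the PubMed web interface may have differed from these tags, and the email address is a placeholder.

```python
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

# Approximation of the review's restrictions: clinical trials in humans,
# participants aged 65+ (MeSH "Aged"), records published up to March 2020.
query = (
    '(kinesiophobia AND intervention) '
    'AND clinical trial[pt] AND humans[mh] AND aged[mh] '
    'AND ("1900/01/01"[PDAT] : "2020/03/31"[PDAT])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records found")
print(record["IdList"][:10])  # first ten PubMed IDs
```

The second query of the review would be run the same way, swapping in 'fear of movement AND intervention' as the search terms.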
Most of the studies retrieved with the search terms 'fear of movement AND intervention' concerned falls and fear of falling, which was not this study's objective.

Discussion and Conclusions

This review was designed to investigate interventions and treatments for kinesiophobia among older adults aged 65+ years published until March 2020. Only three studies were found in relation to the objective of this review, all of which were from Turkey, with sample sizes of fewer than 80 participants. Although all of the studies had a comprehensive perspective on the effectiveness of their treatment plans, measuring other criteria such as pain severity, range of motion, and QoL, this review yielded only three studies, two of which concerned pain and a third exercise among older adults. We need to consider that modern life and decreasing mobility, resulting from new technology and communication systems, lower the need for physical activity, 31 and that older adults suffer from a number of age-related physical and psychological problems; therefore, other problems should be investigated in the future. All three included studies concerned older adults between 65 and 80 years old, but individuals aged 80+ years were not considered. Since the goal of most health-care systems and societies is to expand the life span and have healthier older adults, this group should be studied. The study setting for two of the three studies was a university, which might introduce bias into sample recruitment. The older population includes both older adults in the general population and people who are marginalized for different reasons, such as living in rural and suburban areas, having many medical problems, poverty, and living alone. Including disadvantaged and hard-to-reach groups would yield more representative and generalizable results. We face varying presentations of kinesiophobia among older adults, 32 with a variety of health-related symptoms, including medical, psychological, social, economic, and cultural ones. 33 This suggests a need for a more holistic assessment and evaluation for this age group. A growing population of dependent older adults, combined with a lower birth rate and a smaller population of younger adults as the productive workforce, could be a nightmare for politicians and bring challenges for all sectors of society, 26 especially the healthcare system. 32 Milenkovic et al. showed lower activity of daily living, and therefore higher dependence, among older adults with a higher level of kinesiophobia. 4 Hence, underdiagnosed and untreated kinesiophobia could lead to an increasing number of dependent older adults. To improve the health of older adults and their independence at later ages, researchers therefore need to look at kinesiophobia and other age-related problems from a broader biopsychosocial view. 28 The benefits of adequate exercise for older adults include: i) positive physical changes, such as lower obesity, higher physical and functional capacity, and healthier musculoskeletal components; and ii) mental changes, such as increased psychomotor performance and lower rates of depression and anxiety; both types of benefit result in a better QoL. 23 But to reach those goals, as many aging-related diseases and barriers as possible should be investigated. This is the first review of interventions on kinesiophobia among older adults. All studies were in English, and the review was done by a single author, which decreases the possible bias arising from differing viewpoints.
Although an electronic search of large databases was conducted, there may be unpublished studies, or studies published but not indexed in these databases, that might change the results. The lack of standard keywords might also affect the study. In conclusion, the functional capacity of older adults is low. A society with a higher rate and number of dependent older adults would face economic and social problems in providing adequate formal and informal support. 4 Older adults with routine and properly designed exercise and activity are healthier, with a lower probability of disability and therefore a higher QoL and a longer healthy life. 23
Charles Bonnet Syndrome as Another Cause of Visual Hallucinations

Charles Bonnet syndrome (CBS) presents as gradual vision loss and associated visual hallucinations in a patient who is otherwise neurologically and psychiatrically intact. This syndrome presents primarily as ophthalmologic disease; however, it may be secondary to an ischemic stroke or tumor in the occipital lobe. Patients present with a complaint of vivid visual hallucinations ranging from spots and geometric shapes to seeing people, distorted figures, and landscapes. First noted in the 1760s, CBS did not reach the western scientific community until the early 1980s. Our patient reported seeing her dog and deceased mother only when looking left. She was having an experience of phantom images in addition to a visual field impairment but was otherwise of sound mind with no gross neurological deficits. Computed tomography of the brain revealed a subacute infarct in the right posterior occipital lobe. The patient was ultimately diagnosed with CBS following magnetic resonance imaging and ophthalmology consultation at a tertiary center. Diagnosis may be delayed by a lack of symptom reporting, as patients do not want to carry a stigma as 'crazy.' Further, physician awareness of this etiology is low, and a better understanding of the disease will prevent missed diagnoses as well as a lack of appropriate consultation and follow-up. Treatment includes close outpatient ophthalmology care, maximizing existing vision, and lifestyle changes including adjustments in lighting, decreasing stress, and increasing socialization. Trials of prescription treatments (i.e., antipsychotics, serotonin reuptake inhibitors, antiepileptics) have shown only anecdotal evidence of efficacy. CBS is an uncommon presentation of cerebrovascular disease that warrants the attention of emergency department physicians.

Introduction

Charles Bonnet syndrome (CBS) is defined as the occurrence of phantom vision in people living with some form of ophthalmologic disease who are otherwise cognitively and physiologically healthy. The condition was first noted by Charles Bonnet in 1760 in Geneva, Switzerland. Bonnet was a natural scientist, the youngest member of the Parisian Academy of Sciences, and was elected into the Royal Society of London. Unfortunately, he slowly began to lose his vision in his 20s, making continued microscope use difficult and prompting him to spend the second half of his life studying psychology, philosophy and metaphysics. Bonnet's published works reference his sane elderly grandfather and the 'visions' he would describe. Bonnet ultimately suffered similar hallucinations and continued visual loss late in his own life. The diagnosis was coined by the neurologist George de Morsier in the 1930s to describe visual hallucinations in the elderly whose insight remains intact [1]. Common ophthalmologic diseases associated with Charles Bonnet syndrome are macular degeneration and glaucoma, as they alter stimuli within the visual cortex [2]. The core features of CBS involve vision impairment, the existence of phantom images or visions, full or near-full insight into the unreal nature of what one sees, no discernible cognitive memory deficits, and visual images that do not extend to other sense modalities [1]. In less common cases, the syndrome can develop as a consequence of other clinical conditions such as brain surgery, multiple sclerosis, a tumor or, as in this case, an ischemic stroke. The following case report will discuss the presentation and diagnosis of CBS in the emergency department.
Case Presentation

A 68-year-old female with a history of hypertension and diabetes presented to the emergency department (ED) with complaints of left visual field hallucinations. The symptoms had begun four days earlier. She reported, "if I look to my left, I see my dog, but I know he is not there." The patient also reported seeing her deceased mother only in her left visual field. She stated that if she looked to her right, she no longer saw the hallucinations. She denied any motor or tactile sensory deficits. The patient's neurologic baseline was otherwise awake, alert and oriented, with no focal motor or sensory deficits. The patient did report right upper lid droop since her elective bilateral blepharoplasty one month earlier. Exam revealed an afebrile patient with a normal heart rate of 66 bpm, an elevated blood pressure of 197/99 mmHg, and an oxygen saturation of 97% on room air. Visual acuity showed OD: 20/40, OS: 20/60. The patient was an overall well-appearing female, awake and alert, in no acute distress. Her exam was unremarkable other than pertinent findings on the eye and neurologic exams. The eye exam showed pupils bilaterally equal, round and reactive to light. Extraocular movements were intact; however, mild right-sided ptosis was noted, reportedly chronic per the patient, secondary to the recent blepharoplasty. The neurologic exam revealed a patient awake, alert and oriented to person, place, time and purpose, with structured, linear mentation. All cranial nerves were intact and no dysmetria was noted. No sensory or motor deficits were found on examination of the four extremities. The patient was noted to have visual field loss in the left peripheral field. Evaluation in the ED included a complete blood count, coagulation studies, troponin, and an alcohol level, which were within normal limits. Both glucose and potassium were minimally elevated. The patient's chest X-ray demonstrated no active disease, and the electrocardiogram showed a normal sinus rhythm with no arrhythmias or signs of ischemia. Evaluation also included neuroimaging (Figure 1).

FIGURE 1: Computed Tomography of Brain. Computed tomography of the brain without contrast demonstrating a right posterior infarction (white arrows).

Computed tomography (CT) of the brain/head demonstrated mild underlying volume loss and a right posterior infarction, not present previously, either subacute or old. CT showed no evidence of intracranial hemorrhage, another obvious acute event, or vascular calcification. As noted in the exam, the patient did have a slight left-sided visual field loss that she previously had not noticed. However, upon questioning she did note that over the past month, when changing lanes to the left while driving, she had to turn her head further to the left to see her blind spot, but she disregarded this at the time. Our patient's case was discussed with a consulting neurologist at a tertiary center, where the patient was ultimately transferred for further evaluation of her new and persistent symptoms. MRI was performed upon admission at the tertiary center (Figure 2) and showed an acute infarct in the right medial posterior occipital lobe in the right posterior cerebral artery territory. The discharge diagnosis from the tertiary center was a right occipital infarct and Charles Bonnet syndrome. Unfortunately, patient follow-up and the progression of the disease are unknown.
FIGURE 2: Magnetic Resonance Imaging of Brain. Magnetic resonance imaging of the brain without contrast demonstrating an acute infarct in the right medial posterior occipital lobe in the right posterior cerebral artery territory (white arrow).

Discussion

Charles Bonnet syndrome is a relatively unknown condition typically associated with a primary ophthalmologic disease; however, the disease can less commonly present due to a brain lesion such as a tumor or infarct. A patient presenting with visual hallucinations may initiate a psychiatric work-up. However, if the patient is appropriate and healthy, with insight into the unreal nature of their hallucinations, and not otherwise impaired by prescription or nonprescription drugs, nonpsychiatric causes should be considered, including Charles Bonnet syndrome [3]. Precisely how and why CBS occurs has not yet been defined. Two theories have been accepted as possible explanations for the pathogenesis of the disease. Release theory suggests that a lesion in the visual pathway results in abnormal signals being sent to the visual cortex. The abnormal signals, combined with normal signals, result in hallucinations. Deprivation theory suggests that a reduction in sensory input leads to the production of spontaneous images from the visual association cortex, resulting in visual hallucinations. This theory is described analogously to the theory of phantom limb pain. When a stroke occurs in the visual regions of the brain, there is an increased risk of visual disturbances, including Charles Bonnet syndrome. In about 20% of strokes, visual or perceptual disturbances may occur. The content of these visual disturbances adds to the mystique of the disorder. Patients diagnosed with CBS have reported seeing anything from basic geometric shapes to vivid structures, distorted faces with headdresses, and even finding themselves in different landscapes [1]. It is not understood why or how each patient's hallucinations vary. Unlike most cases of Charles Bonnet syndrome due to primary eye disease, in stroke-induced Charles Bonnet syndrome the affected persons may retain central vision (visual acuity) even though they may suffer from peripheral visual field loss [3]. Charles Bonnet syndrome has been known for quite some time; however, many medical and health care professionals are unaware of it. An informal 2010 survey of 343 general practitioners in the metropolitan area of Sydney, Australia, showed that only two admitted to being familiar with Charles Bonnet syndrome [1]. They also expressed having very limited knowledge of the diagnosis and appropriate treatment. There is a high rate of non-reporting of visual hallucinations to care providers due to a fear of being labeled with a psychiatric disorder, not being believed, or losing one's independence. Vukicevic and Fitzmaurice found that whereas 21% of Charles Bonnet syndrome patients had not reported their symptoms to anyone, 64% had mentioned them to their family members and only 15% had told a health care professional. Directed questioning by health care professionals regarding hallucinations was essential in identifying patients suffering from Charles Bonnet syndrome [4]. Treatment of Charles Bonnet syndrome is focused on optimizing outpatient eye care. Maximizing improvement of vision may reduce or even resolve the symptoms. Since sensory deprivation is a crucial factor in CBS, some outpatient goals are to increase sensory stimulation, which may activate the brain in appropriate areas and thus improve symptoms.
Stress and anxiety tend to worsen symptoms. No pharmacological agents have been found to be effective. Preliminary studies with electromagnetic stimulation treatments have shown temporary reduction of symptoms, but these treatments have not been officially approved for Charles Bonnet syndrome [1].

Conclusions

Charles Bonnet syndrome is a clinical condition that is not known to many medical practitioners. Beyond this lack of awareness, it may be difficult to obtain a complete history from a patient suffering from Charles Bonnet syndrome due to their fear of being diagnosed with a psychiatric illness. Missing this diagnosis means missing a potential ischemic event or other major neurological condition. Knowledge of Charles Bonnet syndrome, as well as the ability to ask directed questions to obtain an appropriate history, is the mainstay of keying a provider in to a patient with Charles Bonnet syndrome and is relevant to the practice of emergency medicine.

Additional Information

Disclosures

Human subjects: Consent was obtained from all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
The Virtual Reality of Work – How to Create a Workplace that Enhances Well-Being for a Mobile Employee

Introduction

New developments in information and communication technology have changed the way people approach their life and work. Mobile virtual work is no longer bound to fixed locations, as utilizing information and communication technology allows people to function freely in various environments. An employee is considered mobile when he works more than ten hours per week outside of the primary workplace and uses information and communication technologies for collaboration (Gareis et al., 2006; Vartiainen & Hyrkkänen, 2010). Virtual reality (Fox et al., 2009), as an environment related to the new 'anytime anywhere work', can be called the virtual workplace. The virtual workplace provides connectivity through devices of different sizes and is accessed by different interfaces when supporting the performance of both individual and collaborative activities (Nenonen et al., 2009).

The interest of this article is the interrelationship between the physical and the virtual workplace, not only with regard to their infrastructure but also to their social and cultural contexts. Both the prerequisites connected to the virtual workplace and its actual use can be challenging. It could be claimed, for instance, that simultaneous physical and virtual copresence is generally not yet mastered in an effective way and that certain bottlenecks still exist for a mobile employee in entering virtual reality. Vischer (2007, 2008) has analyzed the workplace as a physical, functional and psychological entity in order to identify features related to comfort and fit between a workplace and an employee (fig. 1). When the environment sets inappropriate or excessive demands on users, in spite of their adaptation and adjustment behaviors, it manifests the concept of misfit. In a good fit there is a balance between a person's abilities, skills, degree of control and decision latitude and the work environment's demands, complexity, expectations and challenges. The nature of person-environment transactions arouses the sensation of either comfort or stress. Comfort may be considered as the fit of the user to the environment in the context of work (Vischer 2005, 2007; see also Dainoff et al. 2007). According to Vischer (2007), environmental comfort encompasses three hierarchical categories: the physical, functional, and psychological. Physical comfort relates to basic human needs, i.e.
safety, hygiene and accessibility. These needs are responded to by applying building codes and standards. Functional comfort is defined in terms of support for users' performance in work-related tasks and activities. Psychological comfort is related to feelings of belonging, ownership and control over the workspace. We have expanded the category of psychological comfort and fit to also cover social factors, and named the third category psychosocial comfort and fit.

Fig. 1. Vischer's (2005) model of comfort and fit, modified (Hyrkkänen & Nenonen, 2011) for assessing virtual workplaces

Vischer's user-centered model (2007) merges environmental aspects with psychological aspects in a dynamic way. Vischer developed the abovementioned model for assessing the fit or misfit of physical workspace. We have tested and developed its applicability for assessing virtual places (see Hyrkkänen & Nenonen 2011). In this article, the virtual workplace will be analyzed as a three-level entity that enhances well-being from the point of view of the mobile employee.

The purpose of this chapter is to explore which elements of the virtual workplace either hinder or enable productive mobile virtual work processes and well-being at work. The chapter will proceed as follows: first, there will be a broad literature inspection of the physical, functional and psychosocial elements of comfort and fit which either hinder or enable productive mobile virtual work. Secondly, the method and findings of a preliminary study called "virtual me" will be presented to enliven the literature review findings with vivid, up-to-date data.

Background

The basic proposition in the background of this research follows the idea of Vischer's modified and tested model (Hyrkkänen & Nenonen 2011). The factors of fit and misfit are examined in the upcoming sections from the physical, functional and psychosocial perspectives.

The elements of physical comfort and fit in the virtual workplace of a mobile employee

The elements of physical spaces and places impact the possibilities for effective virtual work. Constraints of physical places hamper the mobile worker's way to virtual workplaces. It could be claimed that access to virtual reality is restricted in many ways by poor and out-of-date working environments, their layouts, electrical designs and furniture. The reviewed articles demonstrated and confirmed this by describing many situations where mobile employees met physical hindrances.

Despite the increase of "hot desking", many odd places are still offered for building up a workstation, especially if the mobile employee is an occasional visitor (Hislop & Axtell, 2009; Mark & Su, 2010) at his own company's or a customer's premises. In public places, mobile employees have even reported the need to compete for electrical power due to a limited number of power outlets (Axtell et al., 2008; Brown & O'Hara, 2003; Forlano, 2008b; Mark & Su, 2010).
When executing the anywhere working style, the employee will undoubtedly encounter many physical places that are not primarily designed for working purposes. This is likely to happen at airports, in different means of transportation, in cafeterias or in hotel rooms (Axtell et al., 2008; Breure & van Meel, 2003; Brown & O'Hara, 2003; Laurier, 2004; Laurier & Philo, 2003). Their furniture is primarily designed for travelling or for leisure-time activities. They are hardly convertible for working. For example, in trains there are no flat surfaces large enough for laying down portable mobile devices (Perry & Brodie, 2006).

In the physical fit of virtual reality also lies the question of its appropriateness to the human sensory system. For example, visual and auditory problems may arise. To ensure the success of their work, mobile employees carry many tools with them, including redundant tools to be on the safe side. To avoid letting the burden grow beyond measure, increasingly smaller-sized devices are selected. With the small size, one inevitably chooses small displays, and with them visual difficulties (Axtell et al., 2008; Brown & O'Hara, 2003; Felstead et al., 2005; Hislop & Axtell, 2009; Mark & Su, 2010; Perry et al., 2001; Perry & Brodie, 2006; Vartiainen & Hyrkkänen, 2010; Venezia & Allee, 2007). Noisy physical environments may disturb and interrupt concentrated working in virtual reality. Especially in public places, in trains and airplanes, tourists and neighbors near the mobile worker may disturb the work (Axtell et al., 2008; Breure & van Meel, 2003). On the other hand, a smooth level of conversational voices, e.g. in a cafeteria, may help the worker to relax and lose him/herself in virtual reality (Forlano, 2008a; Rasila et al. 2011).

The contradictory relation between the physical and virtual worlds might cause misfit, which may also lead to safety risks, e.g. when driving a car (Laurier & Philo, 2003; Perry & Brodie, 2006). Switching concentration from driving to working with ICT tools causes hazards and is therefore, for safety reasons, limited by laws and norms (Hislop & Axtell, 2009).

The elements of functional comfort and fit in the virtual workplace of a mobile employee

The functional fit or misfit of the workplace can be assessed by defining the degree to which occupants can either conserve their attention and energy for their tasks or must expend them to cope with poor environmental conditions. Related to the functional fit of virtual places, the connectivity problems that cause disturbances and hindrances to the virtual work flow are crucial. The maturity and sophistication of the ICT infrastructure is one of the key factors. For example, Wi-Fi connections are not yet fully developed in all environments (Axtell et al., 2008). Some of the connectivity problems derive from the limited skills of mobile workers in employing virtual settings and infrastructure (Hallford, 2005; Mann & Holdsworth, 2003; Mark & Su, 2010; Perry & Brodie, 2006; Vartiainen & Hyrkkänen, 2010; Venezia & Allee, 2007). Time constraints and tight schedules of mobile employees, together with time-consuming downloads of connections and programs, also make it unreasonable to start virtual work (Axtell et al., 2008; Brown & O'Hara, 2003; Breure & van Meel, 2005; Mark & Su, 2010; Perry et al., 2001; Perry & Brodie, 2006).
The security regulations of mobile employees' own or their customers' companies may hinder access to, and functioning in, virtual places (Brown & O'Hara, 2003; Mark & Su, 2010). In addition, very expensive connections may present a barrier to employing functional connections (Axtell et al., 2008).

The elements of psychosocial comfort and fit in the virtual workplace of a mobile employee

In Vischer's (2005, 2007) environmental comfort model, psychological comfort links psychosocial aspects with environmental design and the management of workspace through the concepts of territoriality, privacy and control.

A sense of territory is associated with feelings of belonging and ownership. The territoriality of the virtual workplace may be considered as a composition of public, semipublic and private virtual places. Public shared places and platforms include the internet and many applications of social media and interfaces which are open to everyone. Semipublic areas include applications and media channels which demand an identity but are still shared among a defined group of users. The private zone requires a personal key and passwords, and the content is not shared; if it is, the principles of sharing are decided by the individual user. Virtual territory is personalized by individual choices, e.g. in screen savers, chosen applications and programs. The visual appearance is a significant factor indicating both individual ownership and social belonging, e.g. to the organization (see Ettlinger 2008).

In many cases, the need for belonging will not be fulfilled in virtual spaces (Brown & O'Hara, 2003; Hallford, 2005; Mann & Holdsworth, 2003; Perry et al., 2001). The lack of belonging is also affected by limited access to colleagues and individuals who are distant. This is the case in the mobile employee's physical world but also in virtual reality, e.g. when an employer attempts to avoid huge operating expenses (Axtell et al., 2008). Furthermore, the perceived problems of spreading tacit knowledge in virtual spaces (Hallford, 2005) can be seen as a factor of territoriality. The sense of presence is not easy to create.

In Vischer's model (2005), environmental control consists of mechanical or instrumental control, and empowerment. Instrumental control exists if the employee masters his furniture, devices and tools. Empowerment as a form of environmental control arises from participation in workplace decision making. The reviewed articles highlighted the lack of control over staying in virtual reality. The stress arose from expectations of continuous availability (Brown & O'Hara, 2003; Felstead et al., 2005; Hallford, 2005; Green, 2002; Mark & Su, 2010; Tietze & Musson 2005).

When comparing the factors identified in the reviewed articles to Vischer's psychosocial factors, the similarities are evident. Ensuring the psychosocial fit of a virtual workplace is a question of territoriality, privacy and control.

Method

In order to reflect on the results of the literature review, a small-scale empirical survey was carried out. The experience sampling method (ESM) was used as the research method. ESM refers to a technique that enables the capturing of people's behaviors, thoughts, or feelings as they occur in real time (Hektner et al. 2006).
The ESM research process consisted of five stages. In the first stage, the design for the research was made and the diary booklet was designed and tested. In the second stage, the subjects were contacted and the diary booklet was delivered to them. A sample of 20 employees (users) from different organizations participated. They were instructed to carefully enter all their actions and the places they had been to in a diary booklet. The diary phase focused on what virtual devices and tools are used and for what purposes. In the third stage, the filled diary booklets were retrieved and reviewed, and the first interpretations were made. In the fourth stage, the interviews concerning the themes of fit and misfit in virtual workplaces were finalized and carried out with 10 users. The aim of the interview phase was to examine employees' experiences of fit or misfit concerning the physical, functional and psychosocial features of their virtual workplace. In the fifth and final stage, the final interpretation of the collected material was done with the help of the ATLAS.ti program.

ESM can be seen as an application of the probes method. The probes method is a user-centered design approach and a qualitative knowledge-gathering research tool that is based on user participation by means of self-documentation (Gaver et al. 1999; Gaver et al. 2004; Boeher et al. 2007; Mattelmäki 2008). The purpose of the method is to understand human phenomena and find signals of new opportunities by examining users' personal perceptions and backgrounds. More precisely, probes are a collection of evocative assignments through which, or inspired by which, the users actively record the requested material (Mattelmäki 2008). The most typical forms of traditional self-documentation are diaries and camera studies. The academic purpose of self-documentation is to examine the daily factors of human lives (Graham et al. 2007; Mattelmäki 2008). A relevant feature of self-documentation is collecting data from several situations, which increases the reliability of the research (DeLongis et al. 1992). Self-documentation also minimizes the observers' possible influence on the person observed.

Results

Virtual devices and applications make it possible to work from almost any physical location. Some of the users started the working day in bed when waking up in the morning, by reading emails with their mobile phones, and ended it in the same place before going to sleep. The use of virtual tools was constant: at all times, in all places, in work and in leisure. For instance, both making and answering work-related phone calls and emails are done when shifting from one physical location to another: in staircases, streets, cars, public transportation vehicles, taxis, airports and airplanes. The virtual tools are also used in the middle of different kinds of work- and leisure-related events and meetings, such as in lunch restaurants, cafés and bars, offices, seminar facilities, saunas and at home. As one user (U4) wrote in his diary: "I welcomed seminar guests and at the same time I answered some phone calls".
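To make the analysis stage described in the Method section concrete, coding diary material against the three comfort categories could look like the minimal sketch below. The entry records and the category keyword lists are hypothetical illustrations, not the coding scheme actually used in the study (which was carried out qualitatively in ATLAS.ti).

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical keyword lists standing in for the study's coding scheme.
CATEGORY_KEYWORDS = {
    "physical": ["outlet", "desk", "posture", "noise", "furniture"],
    "functional": ["connection", "wi-fi", "battery", "login", "download"],
    "psychosocial": ["belonging", "privacy", "availability", "control"],
}

@dataclass
class DiaryEntry:
    user: str   # e.g. "U6"
    place: str  # physical location of the episode
    note: str   # free-text diary note

def code_entry(entry: DiaryEntry) -> list:
    """Assign an entry to every fit/misfit category whose keywords it mentions."""
    text = entry.note.lower()
    return [cat for cat, words in CATEGORY_KEYWORDS.items()
            if any(w in text for w in words)]

def tally(entries: list) -> Counter:
    """Count coded mentions per comfort category across all diary entries."""
    counts = Counter()
    for e in entries:
        counts.update(code_entry(e))
    return counts

entries = [
    DiaryEntry("U6", "airport", "Battery low, no free outlet near the gate."),
    DiaryEntry("U4", "seminar", "Answered calls while welcoming guests; no privacy."),
]
print(tally(entries))  # Counter({'physical': 1, 'functional': 1, 'psychosocial': 1})
```

A keyword tally like this could at most support, not replace, the interpretive coding the authors describe; its only purpose here is to show how the three-level model structures the data.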
Physical comfort and fit in the virtual workplace of a mobile employee

The themes of the discourse about physical comfort included the tools and applications for virtual work as well as the places for the work, including the theme of ergonomics (fig. 2). The employees used multiple physical places for work during their working days, and the number of different devices and applications utilized was large and varied from user to user. The most common virtual devices carried along were the laptop and the mobile phone. Some users also worked with tablet computers. The most common virtual application was e-mail. Additionally, users applied a wide range of other applications. Some of them were used via the Internet, e.g. Facebook, Skype, Google, blogs, virtual newspapers and net banks. Some of them did not demand an internet connection, like a shared hard disk, a virtual calendar and notebook, Microsoft Office programs and work-specific applications such as ArchiCAD.

In many cases the virtual tools were utilized concurrently. The users usually had many applications open at the same time and used them alternately. Some users also applied different devices for fulfilling the same task. As an example, a user (U6) was waiting for his next flight at the airport. The battery of his laptop was running low and he was charging it while waiting for the boarding call. When the call came, the battery was 70% recharged. The user decided to answer some of the latest emails with the laptop and older ones with his mobile phone. The concurrent use of different devices requires flat surfaces large enough to place the devices on; this was not fulfilled in the means of transport and was hardly fulfilled in bus stations, railway stations or airports. Also, the lack or paucity of functioning power outlets, internet plug-ins or wireless networks was considered to hamper the work, especially during transitions. The inability to use printers, or completely non-working printers, was a problem for some of the users.

The layouts of the physical workspaces were seen as a challenge in many cases. While on the move, it was especially hard to find a place that supports quiet work or confidential discussions. For these reasons, working on certain tasks with virtual applications was considered difficult.

Decent ergonomics of the workplaces used was also important. Many of the mobile employees mentioned musculoskeletal fatigue due to bad working postures. Inappropriate furniture and visual difficulties were the main causes of impaired working postures. On the other hand, some virtual tools allow flexible changes not only in the physical work position but also in bodily postures. According to the interviews, the mobile phone appeared to be the most flexible virtual tool from this point of view.

Functional comfort and fit in the virtual workplace of a mobile employee

The leading themes of the data included connectivity and the effective use of time (fig. 3). The most important factor in the nice and smooth, i.e.
functional, use of virtual devices and applications seemed to be the availability, speed and functionality of the internet connection. Most of the notes in the diaries were somehow related to the use of an internet connection. Altogether, a non-functional or difficult-to-access internet connection was regarded as the key hindrance to productive knowledge work in virtual workplaces. There was a requirement that a quick and easily accessible internet connection should be available everywhere. This also poses requirements for the infrastructure of both virtual and physical places: they should guide you in getting quickly connected.

Because the workdays of the employees seemed to be busy with many things to do, the baseline assumption was that the use of virtual devices and applications would be quick and smooth. If not, the irritation on account of wasted time increased, e.g. when sending emails took a few seconds instead of being instant. The interviewed subjects described wasted time as time that was spent with virtual tools but which did not directly contribute to completing the work-related tasks and duties they were working on. An example of an experience of wasted time is a laptop that took 10 minutes to turn on. Another example is a situation where a user spent hours learning to use a free program to find out if it was suitable for his purposes; it was not, and the time was wasted.

Psychosocial comfort and fit in the virtual workplace of a mobile employee

The discourse concerning the psychosocial fit of virtual places dealt with the concepts of territoriality, privacy and control (fig. 4).

When the employees described matters of territoriality, they stressed the importance of selecting the right virtual communication tools and channels. For example, sending emails was not the channel for enhancing belonging; more nuanced communication channels were selected and used.

The interviewed subjects described virtual team-building methods enhancing belonging that they had used or participated in. For this purpose, the normal working tools and applications were used for sharing matters unrelated to work. For example, they shared photos of leisure time and discussed their holiday plans with live-meeting tools.

The managers of distributed teams stressed how essential it is to select and use the most suitable virtual behaviors for enhancing the belonging of individuals. For example, calls "for no particular reason" played an important role in enhancing employees' feeling of belonging.

According to the interview data, the concept of privacy consisted of three components. The interviewees described privacy through problems in simultaneous co- and telepresence, in the simultaneous use of many virtual communication and collaboration channels, and in the simultaneous use of work- and leisure-related virtual environments. The ensemble of privacy was related to the concept of accessibility: the feeling of fit arose from good control over the multidimensional opportunities to access both the physical and virtual worlds.
Simultaneous co- and telepresence enabled the simultaneous use of many communication and collaboration channels. For example, when taking part in a conference call without visual communication, employees tend to perform many other duties too, i.e. mute the microphone, communicate face to face with colleagues and send e-mails. The simultaneous use of many virtual communication and collaboration channels also blurs the boundaries between work-related and leisure-related virtual places. While working in virtual places, many employees amused themselves with occasional visits to leisure-related places, e.g. their own Facebook pages. For achieving the fit of the psychosocial elements in the virtual workplace, success in control is essential. The interviewed subjects described control as flowing from success in handling the demands of continuous availability, from clear communication and collaboration rules inside the team, and from a good command of the different virtual working modes. They also pointed out the negative features of control. For example, virtual tracking methods may also be used in a way that signals distrust.

Conclusions

This research showed that Vischer's model (2005, 2007) of environmental fit is useful for a more detailed classification of virtual places and spaces. In virtual work, the threshold of usability is at the functional level due to accessibility demands (see fig. 1). The work of a mobile employee will stop completely if the worker is not able to connect.

In order to develop well-functioning virtual workplaces for mobile employees, extensive attention should be paid to the whole system within which employees confront their duties in different locations. Gaining a comprehensive understanding of the context in which a given task is performed starts by forming questions first on the physical places and later on the psychosocial themes. As such a vast field, the process demands profound multidisciplinary collaboration of different actors in organizations and support functions. The inspection of the different fit levels is a useful tool for helping different authorities to explain their expertise in relation to other authorities. Gaps in management may also be demonstrated. There is a need to put more emphasis on analyzing the non-visible work processes we have learned to conduct in the virtual entity.

According to this research, with Vischer's model as a frame of reference, it can be stated that:

- At the level of physical fit, building codes and standards should be expanded to also cover the needs generated by the new working modes, i.e. mobile work. The layouts of different premises should be clear and should also instruct occasional visitors to quickly settle down for working. The physical places should guide your route to virtual reality. Because virtual reality is its own world with its own voices and vistas, the disturbances caused by physical reality should be diminished, also when one is working in places not primarily designed for work, i.e. trains, cafeterias, hotel rooms. This is a truly demanding challenge for construction planning. The demands of performing mobile work should also be taken into account when designing furniture for premises not primarily aimed at working.

- At the level of functional fit, access creates the threshold of work. Entering virtual workplaces, i.e.
the virtual reality of work, is a question of an existing and well-functioning infrastructure. Moreover, questions concerning the ease of connecting as well as of finding help and support in using information technology are essential. The transfer to virtual workplaces via well-functioning infrastructures and applications must be attainable regardless of time and physical place. The operational environment of the mobile employee should be portable as well as easily perceivable (cf. Hyrkkänen et al., 2007).

• In enhancing the fit at the psychosocial level, the mixture of physical and virtual worlds, and simultaneous existence in both, should be more effectively understood and supported. A particular challenge, demanding a great deal of learning, lies in controlling simultaneous co- and telepresence, the simultaneous use of many virtual communication and collaboration channels, and the simultaneous use of work-related and leisure-related virtual channels. Although one of the major goals driving the development of virtual reality has been to provide a space for people to interact without the constraints of the physical world, the fact seems to be that we cannot totally rid ourselves of being physical as well (cf. Fox et al., 2009). On that account, we have to learn to behave and work also in the interspace, i.e., to control simultaneous existence and belonging in the mixture of physical and virtual worlds. Integrated design, which seamlessly combines physical and virtual places, needs to be developed further as well.

Fig. 2. The elements impacting the physical fit or misfit of virtual workplaces.
Fig. 3. The elements impacting the functional fit or misfit of virtual workplaces.
Fig. 4. The elements impacting the psychosocial fit or misfit of virtual workplaces.
BETA-2-MICROGLOBULIN IN BASAL CELL CARCINOMA

Differences in the cell surface of malignant cells, as compared with normal cells, are believed to be characteristic of many features of tumour cell behaviour. We have obtained evidence suggesting that solid and superficial basal cell carcinomas lack immuno-reactive beta-2-microglobulin (beta-2-m) on the cell surface, in contrast to normal epidermis and that of various non-malignant dermatoses, including basal cell papillomas.

The major human histocompatibility antigens, the HLA antigens, are composed of two types of polypeptide chains (2, 8, 10): one heavy chain, carrying the antigenic specificity, and one light chain, which is invariant and has been identified as beta-2-microglobulin (beta-2-m) (1, 5, 6, 8). Beta-2-m appears to be present at the surface of all human nucleated cells hitherto investigated, most of which have been studied in cell culture systems (3, 7), with the exception of the B-cell derived human Daudi lymphoblastoid line (7). It has been claimed that HLA heavy chains are present on the surface of the Daudi cells (11). By means of immunofluorescence (IFL), anti-beta-2-m has been shown to bind to cell surface associated beta-2-m in both epithelial cell cultures (9) and cryostat sections of human epidermis (4).

Changes in the surface properties of tumour cells have attracted widespread attention in recent years, as they may be important in determining behavioural characteristics of the tumour, such as its ability to proliferate in an uncontrolled fashion, invade normal tissue, and metastasize to distant sites. One may ask what alterations in the surface properties of tumour cells contribute to their escape from the controls to which normal cells are subject. In this study we have investigated the cell surface reactivity to anti-beta-2-m, concanavalin A (Con A) and pemphigus antibodies in basal cell carcinomas and basal cell papillomas, with the aim of elucidating possible differences in the cell surface binding of these markers in epidermal tumours.

MATERIALS AND METHODS

Skin biopsies were taken from basal cell carcinoma lesions and basal cell papillomas, using a punch 3-5 mm in diameter, and were quick-frozen in isopentane at -70°C. Most specimens were then sectioned immediately at 6 µm on a cryostat, but some were stored at -70°C and sectioned within 3 days. The slides were air dried, incubated with conjugates for 30 min at room temperature, washed thoroughly in PBS (pH 7.0) and mounted in 10% glycerol PBS (pH 7.2). For indirect tests, the slides were first incubated with positive pemphigus serum (diluted in PBS-BSA 4% to a titre of 1:10) for 30 min at room temperature, and then with conjugates.

The slides were examined in a Leitz Orthoplan microscope with incident light and blue, narrow-band activation, and illuminated with a Xenon XBO-75 lamp. FITC fluorescence was detected with the filter combination K480, 2 KP490, TK510/K515 and a secondary filter K510. The slides were read blind and decoded after examination. The same sections that were examined for IFL were afterwards stained with hematoxylin-eosin for light microscopy. Routine histopathological examination was made on parallel biopsies taken from the lesion.

RESULTS

Skin biopsies from 21 patients with basal cell carcinoma and from 8 patients with basal cell papilloma were investigated. Of the basal cell carcinomas, 17 were solid growing (3 were rich in connective tissue), one was fibrosing (morphea-like) and 3 were superficial.
In an earlier study (4), normal skin biopsies and biopsies from lesions as well as seemingly normal skin from patients with cold urticaria, contact dermatitis, dermatitis herpetiformis, erythema multiforme, pemphigoid, psoriasis, and SLE all showed the same degree of reactivity to conjugated anti-beta-2-m, that is, a strong interepithelial fluorescence from the basal layer upwards at a titre of 1:10 (Table I).

The 8 basal cell papillomas in this study also exhibited strong interepithelial fluorescence in the whole epidermis at a titre of 1:80, and it was still detectable at a titre of 1:320 (Table II and Fig. 1). In sharp contrast to the normal skin and non-malignant skin lesions, biopsies from 12 patients with solid growing basal cell carcinoma showed an interepithelial fluorescence only in the uppermost parts of the section, that is, in the normal epidermis, while in the carcinomatous parts no cell membrane-associated beta-2-m was found (Table II and Fig. 2). The fluorescence of the normal epidermis above the carcinomatous tissue, like that in the basal cell papillomas, was strong up to a titre of 1:80 and still detectable at 1:320. In 5 cases of solid basal cell carcinoma, including 3 rich in connective tissue, an irregular pattern of fluorescence was revealed: in certain parts of the carcinoma tissue there was brilliant intercellular staining, while most parts lacked detectable beta-2-m (Table II). In none of the 3 superficial basal cell carcinomas was any interepithelial fluorescence observed in the carcinoma tissue (Table II). The fibrosing basal cell carcinoma, which was extremely rich in dense hyalinised, fibrous stroma, showed thin strands of basal cell carcinoma with intense interepithelial fluorescence, as brilliant as in the normal epidermis and at the same titre (Table II). Unconjugated rabbit antihuman beta-2-m serum used at a dilution of 1:10 completely inhibited the staining of normal epidermis by conjugated rabbit antihuman beta-2-m at a dilution of 1:80.

In 10 solid basal cell carcinomas examined, inter- (7). Since many features of cellular organization are greatly affected by cultivation of cells in vitro, these findings do not necessarily reflect the situation in vivo. In 15 of the 20 cases of solid and superficial basal cell carcinoma in the present study, immuno-reactive beta-2-m appeared to be absent, while it was readily detected in all basal cell papillomas examined (Table II). The failure to detect beta-2-m on the cell surface may have several plausible explanations.

1. These antigens, when present on the cell surface of the malignantly transformed cells, could become reorganized into patches instead of being distributed evenly. However, beta-2-m would then be detectable in certain sections. The failure of the carcinoma tissue to show beta-2-m in any examined section of the 15 basal cell carcinomas mentioned above militates against this explanation.

2. The surface of the transformed cells could have lost all beta-2-m, or else beta-2-m might be present in trace amounts only, not detectable with this method.

3. The cell surface might have been altered after malignant transformation in such a way that beta-2-m is no longer accessible to antibodies, or the beta-2-m peptide chain might have been modified slightly and thus no longer be detectable with the specific antisera used.
That beta-2-m was found on the cell surface of the highly fibrosing basal cell carcinoma but not on most of the solid ones might simply mean that basal cell carcinomas are a heterogeneous group at the level of surface properties. A conceivable explanation for the irregular pattern of fluorescence found in 5 of 17 solid basal cell carcinomas is that normal cells might be intermingled with carcinoma tissue, either as an in vivo phenomenon or as a section artifact. The possibility of malignant transformation of a previously benign tumour must also be considered.

The absence or alteration of the antigenic expression of cell surface beta-2-m, as demonstrated in the present study, might render these tumours increasingly unresponsive to controlling mechanisms involved in cell interactions and might reflect a disturbance of the gene regulation of beta-2-m on the cell surface of solid and superficial basal cell carcinomas.

ACKNOWLEDGEMENTS

Histopathological examinations were kindly carried out by Dr P. Westermark. The skilful technical assistance of Mrs J. Shull and Miss A. Westin is gratefully acknowledged. This investigation was supported by grants from the Finsen Foundation and the Edvard Welander Foundation.
Physical constraints derived from FCNC in the 3-3-1-1 model

We investigate several phenomena related to FCNCs in the 3-3-1-1 model. The sources of FCNCs at tree level from both the gauge and Higgs sectors are clarified. Experiments on the oscillation of mesons most stringently constrain the tree-level FCNCs. The lower bound on the new physics scale is imposed more tightly than previously, M_new > 12 TeV. Under this bound, the tree-level FCNCs make a negligible contribution to Br(B_s → µ⁺µ⁻), Br(B → K*µ⁺µ⁻) and Br(B⁺ → K⁺µ⁺µ⁻). The branching ratio of the radiative decay b → sγ is enhanced by the ratio v/u via diagrams with charged Higgs mediation. In contrast, the charged currents of the new gauge bosons contribute significantly to the decay process µ → eγ.

flavor violation (LFV) processes l_i → l_j γ and the b → sγ decay. We organize our paper as follows. In Sec. II, we briefly overview the 3-3-1-1 model. In Sec. III, we describe the tree-level FCNCs and study their effects on the mass differences of mesons. We predict the NP contributions to the rare decays B_s → µ⁺µ⁻, B → K*µ⁺µ⁻ and B⁺ → K⁺µ⁺µ⁻ based on the constrained parameter space. Sec. IV studies the one-loop calculation of the relevant Feynman diagrams, which relate to b → sγ and µ → eγ; the consequences of the parameters for the branching ratios of these decays are inferred from the experimental data. Our conclusions are given in Sec. V.

A. Symmetry and particle content

The gauge symmetry of the model is SU(3)_C ⊗ SU(3)_L ⊗ U(1)_X ⊗ U(1)_N, where SU(3)_C is the color group, SU(3)_L is an extension of the SU(2)_L weak isospin, and U(1)_X, U(1)_N define the electric charge Q and B − L operators [36] as Q = T_3 + β T_8 + X and B − L = β' T_8 + N, where T_3, T_8 are diagonal SU(3)_L generators and β, β' are coefficients; both operators are free from anomalies. The parameters β, β' determine the Q and B − L charges of the new particles. In this work, we consider the model with β = −1/√3. This is the simple 3-3-1-1 model for dark matter [31]. The electrically-neutral scalars can develop vacuum expectation values (VEVs) and break the symmetry of the model down to SU(3)_C ⊗ U(1)_Q and a residual matter parity (W-parity) P, which takes the form P = (−1)^{3(B−L)+2s}. All SM particles have a W-parity of +1 (even W-particles), while the new fermions have a W-parity of −1 (odd W-particles). With W-parity preserved, the lightest odd W-particle cannot decay; if it is electrically neutral, it may account for dark matter (see [31]). The VEVs u, v break the electroweak symmetry and generate the masses of the SM particles, with the consistency condition u² + v² = (246 GeV)². The VEVs w, Λ break the SU(3)_L and U(1)_N groups and generate the masses of the new particles. For consistency, we assume w, Λ ≫ u, v.

The physical scalar spectrum contains, among others:
• one neutral CP-odd particle;
• two charged fields.
For the odd W-particle spectrum, there exists a complex scalar particle. For convenience, the mass expressions of these physical fields are used in the calculations below.

C. Fermion masses

The Yukawa interactions in the quark sector are written in [31]. After symmetry breaking, the up-quarks and down-quarks receive masses. Their mixing mass matrices, in the general case, are not flavor-diagonal.
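The display equations dropped from this passage presumably take the standard biunitary form; as a hedged reconstruction (the notation matches the text, but the paper's exact conventions may differ):

  V_{uL}^{\dagger} M_u V_{uR} = \mathrm{diag}(m_u, m_c, m_t), \qquad
  V_{dL}^{\dagger} M_d V_{dR} = \mathrm{diag}(m_d, m_s, m_b),

with flavor and mass eigenstates related by u_{L,R} = V_{uL,R}\, u'_{L,R}, d_{L,R} = V_{dL,R}\, d'_{L,R}, and the CKM matrix defined as V_{\mathrm{CKM}} = V_{uL}^{\dagger} V_{dL}.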
They can be diagonalized by unitary matrices V_{uL,R} and V_{dL,R}, so the mass eigenstates relate to the flavor states through these rotations, and the CKM matrix is defined as V_CKM = V_{uL}^† V_{dL}. The Yukawa interactions for the leptons are written analogously. The charged leptons have a Dirac mass, and the flavor states e_a are related to the physical states e'_a by two unitary matrices U^l_{L,R}. The neutrinos have both Dirac and Majorana mass terms. In the flavor basis n_L = (ν_L, ν^c_R)^T, the neutrino mass terms can be written in matrix form, and the mass eigenstates n'_L are related to the flavor states as n'_L = U^{ν†} n_L, where U^ν is a 6 × 6 matrix. The new neutral fermions N_a are Majorana fields, and they obtain their masses via effective interactions [32,33]. We suppose that the flavor states N_a relate to the mass eigenstates N'_a by the unitary matrices U^N_{L,R}.

D. Gauge bosons

Let us review the characteristics of the gauge sector. In addition to the SM gauge bosons, the 3-3-1-1 model predicts six new gauge bosons: X^{0,0*}, Y^±, Z_2, Z_N. The gauge bosons are W-parity even, except for the X, Y gauge bosons, which carry odd W-parity. The masses of the new gauge bosons have been given in [32,33].

III. A. Meson mixing at tree level

In previous works [32,36], the authors considered the FCNCs that couple to the new neutral gauge bosons Z_2 and Z_N at tree level. Due to the different arrangements of the quark generations, the SM quarks couple to two Higgs triplets; therefore, there also exist FCNCs coupled to the new neutral Higgs bosons at tree level. These interactions derive from the Yukawa Lagrangian (11). After rotating to the physical basis using Eqs. (12)-(14), we obtain the Higgs-mediated FCNC Lagrangian, where t_β = tan β = v/u and Γ^u, Γ^d are the corresponding coupling matrices. The Lagrangian of the tree-level FCNCs mediated by Z_2, Z_N, studied in [32], involves a mixing angle ξ fixed by the model parameters through tan 2ξ.

We now investigate the impact of the FCNCs associated with both the new gauge and scalar bosons on the oscillation of mesons. From the FCNCs given in Eqs. (22)-(24), we obtain the effective Lagrangian that affects meson mixing, with q denoting either an up- or down-type quark. This Lagrangian contributes to the mass differences of the meson systems. We remind the reader that the theoretical predictions of the meson mass differences account for both the SM and all tree-level NP contributions, so the meson mass differences can be separated into SM and NP parts, where the SM contributions are given in [37,38]. The theoretical predictions, given in Eq. (28), are compared with the experimental values given in [39,40], e.g., (Δm_K)_exp = 0.5293(9) × 10⁻²/ps. However, due to the long-distance effects in Δm_K, the uncertainties in this system are considerable; therefore, we require the theory to reproduce the kaon mass difference within 30%. The SM predictions for the B-meson mass differences are more accurate than those for the kaon, and we obtain the corresponding constraints by combining in quadrature the relative errors of the SM predictions and the measurements [41].

Let us perform a numerical study with input parameters taken from [40,42-45]. All mass parameters are in MeV. Besides, we assume t_N = 1 and g = √(4πα)/s_W, where α = 1/128 and s²_W = 0.231.
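For orientation on how the quoted constraints arise: a tree-level FCNC operator C_M (\bar q_{iL}\gamma^\mu q_{jL})^2 shifts a neutral-meson mass difference, in the vacuum-saturation approximation with bag factors set to one, by the standard amount

  \Delta m_M^{\mathrm{NP}} = \frac{2}{3}\, \mathrm{Re}(C_M)\, f_M^2\, m_M ,

a textbook result used throughout the 3-3-1 literature; the paper's exact normalization of C_M may differ.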
The mixing matrix for the right-handed up-quarks, V_uR, is taken as a unit matrix, whereas V_dR is parameterized by three mixing angles, θ^R_12, θ^R_13 and θ^R_23. For instance, we can choose θ^R_12 = π/6, θ^R_13 = π/4 and θ^R_23 = π/3. The NP scales are subject to the constraints w ∼ Λ ∼ −f ≫ u, v, due to the diagonalization condition for the mixing mass matrices in [32]. We first study the role of the FCNCs coupled to the scalar fields H_1, A in the meson mixing parameters. To see their effect, we vary the f-parameter, which only affects the masses of H_1 and A; the mixing parameters turn out to be only slightly affected by the FCNCs coupled to the scalar fields. Next, we consider the contributions of the FCNCs coupled to the new gauge bosons. To estimate their importance, we compare their contributions with those of the new scalar bosons; the ratio of these two contributions is presented in Fig. 2. The results show that the dominant contribution comes from the FCNCs of the new gauge bosons, which once again confirms the small effect of the new scalar fields on the meson mixing systems. Finally, we investigate the constraints on the VEVs from Δm_{K,B_s,B_d}. In Fig. 1, the region of parameters allowed by the constraints given in Eqs. (31) and (33) is shown in green. The electroweak symmetry breaking scale u is not constrained by the conditions imposed on the meson mass mixing parameters; however, these conditions do constrain the NP scale w. From Fig. 1, we obtain a lower bound on the NP scale, w > 12 TeV. This lower bound is more stringent, and remarkably larger, than that obtained previously [32]. The difference arises because, in the previous study, the authors compared the NP contributions with the experimental values alone and ignored the SM contributions to the theoretical predictions. Moreover, with the value used in Eq. (131) of [32], the upper limit for (Δm_{B_s})_NP is even greater than the experimental value given in Eq. (30). This is not reasonable, because the theoretical prediction must consist of both the SM and NP contributions, and the uncertainties of both the SM predictions and the measurements must be considered. Thus, the NP contributions have to be constrained by the conditions given in Eqs. (31) and (33).

Rare decays of the B meson, in particular those induced by the quark-level transition b → sℓ⁺ℓ⁻, are considered next; here M_lD = diag(m_e, m_µ, m_τ) denotes the charged-lepton mass matrix. It is worth noting that there is no neutral-Higgs-mediated FCNC in the lepton sector. The interactions of Z_2 and Z_N with two charged leptons, together with the form of their coefficients, have been written down in [31]. Combining the quark FCNCs and the leptonic FCNCs, we obtain the effective Hamiltonian for the B_s → µ⁺µ⁻, B → K*µ⁺µ⁻ and B⁺ → K⁺µ⁺µ⁻ processes; the operator basis is sketched below. The primed operators O'_{9,10,S,P} are obtained from O_{9,10,S,P} by replacing P_L ↔ P_R. Their Wilson coefficients consist of the leading SM and the tree-level NP contributions. For C_{9,10} we split C_{9,10} = C^SM_{9,10} + C^NP_{9,10}, where the central values of the SM parts are given in [46], C^SM_10 = −4.198 and C^SM_9 = 4.344. Noting that C^SM_{S,P} = C'^SM_{S,P} = 0, the coefficients C_{S,P} and C'_{S,P} arise purely from NP contributions, with Γ^l_{αα} and Δ^l_{αα} proportional to the lepton mass m_{l_α}. From the effective Hamiltonian given in (38), we obtain the branching ratio of the B_s → µ⁺µ⁻ decay, into which the total lifetime τ_{B_s} of the B_s meson enters.
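The operator definitions elided above presumably follow the standard b → s ℓ⁺ℓ⁻ basis; as a hedged sketch (overall normalizations, e.g. factors of m_b in the scalar operators, vary between conventions):

  \mathcal{O}_9 = \frac{\alpha_{\mathrm{em}}}{4\pi}\,(\bar s \gamma_\mu P_L b)(\bar\ell \gamma^\mu \ell), \qquad
  \mathcal{O}_{10} = \frac{\alpha_{\mathrm{em}}}{4\pi}\,(\bar s \gamma_\mu P_L b)(\bar\ell \gamma^\mu \gamma_5 \ell),

  \mathcal{O}_S = \frac{\alpha_{\mathrm{em}}}{4\pi}\,(\bar s P_R b)(\bar\ell \ell), \qquad
  \mathcal{O}_P = \frac{\alpha_{\mathrm{em}}}{4\pi}\,(\bar s P_R b)(\bar\ell \gamma_5 \ell).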
If the effect of oscillations in the B_s−B̄_s system is included, the theoretical and experimental results are related as in [47], through the parameter y_s = ΔΓ_{B_s}/(2Γ_{B_s}) = 0.0645(3) [39]. For B_s → e⁺e⁻, the SM prediction is given in [48] and the experimental bound in [49]. The SM contribution to the branching ratio of B_s → e⁺e⁻ is strongly suppressed relative to the current experimental upper bound; it may therefore be an excellent place to look for NP. Completely contrary to B_s → e⁺e⁻, the very recent measurement of the branching ratio of B_s → µ⁺µ⁻ is given by [7]

Br(B_s → µ⁺µ⁻)_exp = (3.09^{+0.46+0.15}_{−0.43−0.11}) × 10⁻⁹.

This experimental value is close to the central value of the SM prediction (including the effect of B_s−B̄_s oscillations) studied in [50],

Br(B_s → µ⁺µ⁻)_SM = (3.66 ± 0.14) × 10⁻⁹.

This shows that the experimental results are in slight tension with the SM prediction for Br(B_s → µ⁺µ⁻), so NP effects in B_s → µ⁺µ⁻ lead to new stringent constraints on the NP scale. Let us concentrate on the numerical study of B_s → µ⁺µ⁻. In the right panel of Fig. 3, we draw the NP contributions to each Wilson coefficient. Compared with C^NP_{9,10}, the C_{S,P} are further suppressed by a factor of 10⁻⁴-10⁻⁵, so the main NP contribution to Br(B_s → µ⁺µ⁻) comes from C^NP_10. In the limit w > 12 TeV, C^NP_10 is positive; it reduces Br(B_s → µ⁺µ⁻) by about 5%, which brings the theoretical prediction and the experimental value closer together. While C^NP_10 governs the decay B_s → µ⁺µ⁻, the B → K*µ⁺µ⁻ anomalies are controlled by C^NP_9 in the 3-3-1-1 model. In the limit w > 12 TeV, we obtain its maximal predicted value C^NP_9 ≈ −0.01, so the NP coming from the 3-3-1-1 model cannot explain the anomalies in the B → K*µ⁺µ⁻ process.

The measurements of the branching fraction of the decay B⁺ → K⁺µ⁺µ⁻ [23,24] have turned out to be slightly on the low side compared with SM expectations. Both C_9 and C_10 contribute to Br(B⁺ → K⁺µ⁺µ⁻). As predicted by the 3-3-1-1 model, the NP contribution to these parameters is minimal (see Fig. 3) because the NP scale satisfies the constraint w > 12 TeV. Both C^NP_9 and C^NP_10 are too small and far from the values of the global analyses; see [51-54]. Thus, we conclude that the NP effects in B⁺ → K⁺µ⁺µ⁻ remain small in the 3-3-1-1 model.

IV. RADIATIVE PROCESSES

The branching fraction and the photon energy spectrum of the radiative penguin process b → sγ were first reported by the CLEO experiment, Br(b → sγ) = (3.21 ± 0.43 ± 0.27^{+0.18}_{−0.10}) × 10⁻⁴ [8]. Recently, the HFLAV group obtained an average by combining the measurements from CLEO, BaBar and Belle, Br(b → sγ) = (3.32 ± 0.15) × 10⁻⁴ [39], for a photon-energy cut-off E_γ > 1.6 GeV. This result is in good agreement with the SM prediction at next-to-next-to-leading order (NNLO), Br(b → sγ) = (3.36 ± 0.23) × 10⁻⁴ [59,60], with the same energy cut-off. It suggests that the NP contributions to this process, if any, have to be small; thus, studying the b → sγ decay can give a strong constraint on the NP scale. The radiative process b → sγ is most conveniently described in the framework of an effective theory that arises after decoupling the new particles. Besides the charged currents associated with the W±_µ gauge boson, the 3-3-1-1 model contains new charged currents that couple to the new charged gauge bosons Y±_µ and to the two charged Higgs bosons H±_4, H±_5, as well as the FCNCs coupled to Z_{2,N} given in Eq. (24).
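The effective theory invoked above is conventionally built on the electromagnetic and chromomagnetic dipole operators; for reference, the standard definitions (not specific to this paper) are

  \mathcal{O}_7 = \frac{e}{16\pi^2}\, m_b\, (\bar s \sigma^{\mu\nu} P_R b)\, F_{\mu\nu}, \qquad
  \mathcal{O}_8 = \frac{g_s}{16\pi^2}\, m_b\, (\bar s \sigma^{\mu\nu} T^a P_R b)\, G^a_{\mu\nu},

entering through \mathcal{H}_{\mathrm{eff}} \supset -\frac{4 G_F}{\sqrt 2}\, V_{tb} V_{ts}^{*} \left( C_7 \mathcal{O}_7 + C_8 \mathcal{O}_8 \right).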
All of the above currents generate the b → sγ process. Let us write down the charged scalar currents relevant for b → sγ. The H±_4 couples only to the exotic quarks, so it does not create flavor-changing charged currents (FCCCs) for the SM quarks, while H±_5 couples to the SM quarks and creates scalar FCCCs; the relevant Lagrangian involves s_{2β} = sin 2β and t_{2β} = tan 2β. The charged currents associated with W±, Y± are described by V−A currents. The effective Hamiltonian for the decay b → sγ is split into the sum of the SM and 3-3-1-1 contributions. Note that the Wilson coefficients C'_{7,8} will be ignored in our calculation, since they are suppressed by the ratio m_s/m_b. The SM Wilson coefficients C^SM_{7,8} at the scale µ ∼ m_W were first given in [61]; the index (0) indicates that the Wilson coefficients are calculated without QCD corrections. The NP contributes to C^NP_{7,8} at the quantum level via the higher-order charged-current interactions in Eqs. (49), (50) and the FCNCs given in Eq. (24); it can be split into the corresponding individual contributions, expressed through loop functions f_{γ,g} and f̃_{γ,g}. The coefficients C^{Z_{2,N}(0)}_7(m_{Z_{2,N}}) are obtained from the FCNCs coupled to Z_{2,N}. For w = 10 TeV, we have m_Y ≈ 3.2 TeV. The branching ratio Br(b → sγ) is then expressed in the usual way [63], where N(E_γ) = 3.6(6) × 10⁻³ is a non-perturbative contribution, the ratio C = |V_ub/V_cb|² Γ(b → ceν̄_e)/Γ(b → ueν̄_e) = 0.580(16) [62], and the branching ratio of the semi-leptonic decay is Br(b → ceν̄_e) = 0.1086(35) [40]. Other parameters are input as in Sec. III A.

Br(b → sγ) behaves as a function of the new particle masses, such as m_Y, m_{H_5}, m_U; these masses are understood as free parameters. In the limit u, v ≪ −f(u² + v²)/(uv) ∼ w ∼ Λ they can be rewritten in terms of the NP scale, with g = √(4πα)/s_W. In Fig. 4, we show the dependence of Br(b → sγ) on the NP scale w in this limit. Each panel corresponds to a scenario of the mass hierarchy and three different choices of t_β. We see that the branching ratio strongly depends on the value of t_β, where the term containing t_β comes from C^{H_5}_7; we thus conclude that C^{H_5}_7 plays an important role in the radiative decay b → sγ. This is true for all three scenarios of the mass hierarchy. Besides, Fig. 4 indicates that the mass hierarchy does not affect Br(b → sγ) much. This is understood because the main contribution comes from C^{H_5}_7, which is enhanced over the other contributions by the coefficient t²_β. In the large-t_β limit, the lower bound on the NP scale depends on the value of t_β: specifically, w ≥ 1 TeV for t_β = 1, w ≥ 4.1 TeV for t_β = 10, and w ≥ 7.7 TeV for t_β = 20. These limits are weaker than the ones mentioned above. To close this section, we consider the influence of the NP on Br(b → sγ) in the limit u, v ≪ −f ∼ w ∼ Λ. In Fig. 5, we see that the dependence of the branching ratio on t_β is not as strong as in the previous case.

B. Charged lepton flavor violation

The charged lepton flavor violation (CLFV) processes are strongly suppressed in the SM with right-handed neutrinos, Br(l_i → l_j γ) ∼ 10⁻⁵⁵. Meanwhile, the current experimental limits are given as [40]

Br(µ⁻ → e⁻γ) < 4.2 × 10⁻¹³.

This implies that the CLFV processes open a large window for studying NP signals beyond the SM. Note that in the SM with right-handed neutrinos, the decay processes l_i → l_j γ arise at the one-loop level with W± mediated in the loop.
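The quoted SM suppression can be made explicit: with only light neutrinos and W± in the loop, the standard GIM-suppressed result (a textbook formula, quoted here for orientation) is

  \mathrm{Br}(\mu \to e\gamma) \simeq \frac{3\alpha}{32\pi} \left| \sum_{i} U_{\mu i}^{*} U_{e i}\, \frac{m_{\nu_i}^2}{m_W^2} \right|^2 \lesssim 10^{-54},

which is why any observable rate would be an unambiguous signal of new physics.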
The Br(l_i → l_j γ) is suppressed by the mixing matrix elements of the neutrinos. The 3-3-1-1 model anticipates the existence of additional charged currents associated with the new charged particles Y±, H±_{4,5}. Consequently, the new one-loop diagrams in the model may contribute significantly to Br(l_i → l_j γ); this branching ratio may reach the experimental upper bound given in Eq. (64). In order to study the CLFV processes, we first write down the relevant Lagrangian in terms of the physical states, including the charged currents associated with the new gauge bosons. Next, we write the effective Lagrangian relevant for the µ → eγ process in the traditional form, where the factors A_L, A_R are obtained by calculating all the one-loop diagrams. We use the 't Hooft-Feynman gauge and keep the external lepton masses in the calculations. The obtained results are inspired by [66]. The factors A_{L,R} are divided into individual contributions, expressed through loop functions f(x) and g(x); the notations m_{ν_j}, M_{ν_j}, m_e, m_µ are understood as the masses of the light neutrinos, heavy neutrinos, electron, and muon, respectively. From the effective Lagrangian (66), we finally obtain the branching ratio Br(µ → eγ), where G_F is the Fermi coupling constant and Br(µ → eν̄_e ν_µ) = 100%, as given in [40].

Before the numerical calculation of the branching ratio Br(µ → eγ), let us make some assumptions. We assume that the Yukawa couplings h^e_{ab} are represented by a diagonal matrix in the flavor basis; thus, the matrix U^ν_L is identified with the PMNS matrix U_PMNS, which has been measured experimentally. The mixing matrices U^ν_R, V^ν as well as U^N_{L,R} are new and not constrained by experiments. To simplify, we suppose that the Yukawa couplings of the right-handed neutrinos, h^ν, are represented by a diagonal matrix. This implies that the Majorana neutrino mass matrix has the form M_{ν_R} = diag(M_{ν_1}, M_{ν_2}, M_{ν_3}) and thus that the right-handed neutrino mixing matrix U^ν_R is a unit matrix. The mixing matrix V^ν is also assumed to be diagonal. Finally, the mixing matrix of the new leptons, U^N_R, is parameterized by three arbitrary angles θ^N_{ij} (i, j = 1, 2, 3) and a Dirac CP phase δ^N. With these choices, the Yukawa couplings h^e, h^ν can be translated into the charged-lepton and sterile-neutrino masses. The Yukawa couplings h^ν, which determine the neutrino Dirac masses, are rewritten using the Casas-Ibarra parametrization given in [65], where R is an orthogonal matrix parameterized by arbitrary angles. For the magnitudes of the relevant masses and the VEVs, we also work in the limit u, v ≪ w ∼ Λ. To be consistent with the unitarity bound [67], we need a constraint involving the mixing angles θ_{ij} of the neutrino mixing matrix. In addition, the branching ratio Br(µ → eγ) depends on unknown parameters, such as the six mixing angles (θ_{ij}, θ^N_{ij}), one CP phase δ^N, and the masses of the new particles m_N, M_{ν_i}. In the following, we present the results of numerical calculations for the case where the unknown parameters are chosen as θ^N_{12} = π/6, θ^N_{13} = π/3, θ^N_{23} = π/4, δ^N = 0.

V. CONCLUSIONS

The NP scale is strongly constrained by the experimental bounds on the meson mixing parameters. We have obtained the lower bound on the new gauge boson mass M_new > 12 TeV, which is more stringent than the constraint previously given in [32].
This change is because previous studies omitted the contributions of the new Higgs bosons and, especially, those of the SM. Our result is consistent with that of [68]. We also studied the tree-level FCNCs affecting the branching ratios of B_s → µ⁺µ⁻, B → K*µ⁺µ⁻ and B⁺ → K⁺µ⁺µ⁻. In the parameter region consistent with the experimental constraints on the meson mass differences, the tree-level FCNCs give small contributions to these branching ratios, which is consistent with the measurement of B_s → µ⁺µ⁻ [4-7] but cannot explain the B → K*µ⁺µ⁻ and B⁺ → K⁺µ⁺µ⁻ anomalies [16-24]. For the radiative decay processes, we concentrated on the flavor-changing b → sγ decay. The large contribution arises from the Wilson coefficient C^{H_5}_7, generated by one-loop diagrams with new charged Higgs boson mediation. In spite of the contributions enhanced by the factor t_β = v/u, the predicted branching ratio Br(b → sγ) is consistent with the measurement [39] if M_new is chosen as mentioned above. In contrast to the b → sγ decay, the branching ratio of the lepton-flavor-violating µ → eγ decay receives a large contribution from one-loop diagrams with new gauge boson exchange. Due to the large mixing of the new neutral leptons, the branching ratio Br(µ → eγ) can reach the experimental upper bound.
A machine learning model identifies M3-like subtype in AML based on PML/RARα targets

Summary

The typical genomic feature of the acute myeloid leukemia (AML) M3 subtype is the PML/RARα fusion event, and ATRA/ATO-based combination therapy is the current standard treatment regimen for the M3 subtype. Here, a machine-learning model based on the expression of PML/RARα targets was developed to identify M3 patients by analyzing 1228 AML patients. Our model exhibited high accuracy. To enable more non-M3 AML patients to potentially benefit from ATRA/ATO therapy, M3-like patients were further identified. We found that M3-like patients had strong GMP features, including the expression patterns of M3 subtype marker genes, the proportion of myeloid progenitor cells, and the deconvolution of AML constituent cell populations. M3-like patients exhibited distinct genomic features, low immune activity, and better clinical survival. The proactive identification of patients similar to the M3 subtype may help to identify more patients who would benefit from ATO/ATRA treatment and deepen our understanding of the molecular mechanisms of AML pathogenesis.

INTRODUCTION

Acute myeloid leukemia (AML) results from the clonal expansion of hematopoietic precursor cells with disease-causing genetic mutations or chromosomal changes [2,3]. Acute promyelocytic leukemia (APL) is a distinct subtype of AML characterized by the expansion and accumulation of leukemic cells that are blocked at the promyelocytic stage of granulocyte differentiation, as well as the presence of a specific disease-driver fusion gene encoding the PML/RARα oncoprotein [4]. An atlas of PML/RARα direct targets has been identified, which redefined the activating function acting through super-enhancers and explained the synergism of ATRA/ATO [5]. Morphologically, APL is recognized as the M3 subtype of AML by the French-American-British classification [7,8]. Among the various subtypes of AML, the M3 subtype has the highest survival rate [9], which is attributed to the combination therapy of ATRA and ATO. ATRA and ATO trigger the degradation of PML/RARα, thereby inhibiting disease progression, while non-M3 AMLs have a mixed response to this combination therapy [10,11]. We would like to explore which non-M3 AML patients may benefit from ATRA and ATO combination therapy through an in-depth study of PML/RARα target genes.

In the genetics of myeloid tumors, chromosomal translocations usually involve transcription factors (TFs); the resulting oncogenic fusion TFs abnormally regulate downstream target genes, induce malignant cell proliferation, and interfere with bone marrow differentiation [12]. In APL, both components of the most important oncogenic fusion, PML and RARα, are TFs that can directly trans-activate essential oncogenes, which play important roles in APL disease progression [13,14]. On the other hand, PML/RARα can also suppress the expression of some tumor suppressor genes. For example, PML/RARα inhibits PU.1-dependent activation of immune subunits, thereby contributing to the escape of APL cells from immune surveillance [15,16].
Both ATRA and ATO directly target PML/RARα-mediated transcriptional repression and protein stability [20,21]. These genes are important for APL cell differentiation or proliferation. In M3 subtype patients, the combination therapy approach has a synergistic effect on the induction of myeloid differentiation and apoptosis [23-25]. Overall, ATRA-ATO combination therapy shows good therapeutic efficacy and low drug resistance for M3 subtype patients and certain non-M3 AML cell lines. These results suggest that, besides the M3 subtype, other AML patients with expression patterns similar to those of the M3 subtype might also benefit from the ATRA-ATO combination treatment strategy. The good therapeutic efficacy of ATRA-ATO might not only depend on the PML/RARα fusion event but might also be closely correlated with the expression features of its target genes.

Besides the PML/RARα fusion event, the expression or genomic patterns of several genes also aid subtype characterization and treatment of AML. For example, the gene FLT3, which is mutated in approximately 40% of human APL cases, cooperates with PML/RARα in the development of the APL phenotype in mouse [26]. As another example, the expression of the peptidyl-prolyl cis-trans isomerase Pin1 is significantly increased in patients with various AMLs, including the M3 subtype, and Pin1 has been found to be involved in a variety of cancer pathways in AML [27]. Given these gene mutations and expression alterations, we believe that, in addition to PML/RARα fusion events, other molecular alterations might also be involved in the occurrence and progression of APL or AML. At present, the transcriptome characteristics of APL need to be studied further. Therefore, we believe that gene expression profiles can be used to explore similarities between the M3 subtype and other AML subtypes, and AML patients that are similar in gene expression to the M3 subtype might deserve the same treatment strategy.
In this study, we found that PML/RARα targets tend to be differentially expressed in multiple AML subtypes and contribute to the classification of the M3 subtype. Because the expression of PML/RARα targets is the downstream consequence of PML/RARα regulatory mechanisms, we hypothesized that our computational approach may aid the current classification of the M3 subtype from the view of the transcriptome and further uncover additional subpopulation therapeutic opportunities, which in turn could help identify pathogenic mechanisms. Therefore, an enrichment-based scoring index, defined as the M3-Like Score (M3-LS), was developed to assess how similar the expression pattern of PML/RARα target genes in non-M3 AML patients is to that of the M3 subtype. We further developed a classifier that identifies patients similar to the M3 subtype as those with scores above a threshold chosen by receiver operating characteristic (ROC) analysis. Moreover, by further requiring that PML/RARα targets respond to ATRA/ATO or be differentially expressed in the M3 subtype, the performance of our classifier was improved. The robustness of our model was further validated in independent AML populations. Notably, the expression patterns of several vital marker genes in M3-like patients were discovered to be more concordant with the M3 subtype. Moreover, we found that M3-like patients exhibited several features distinct from other non-M3 patients, including genomic mutations, molecular immune features, and survival prognosis. All these results indicate the value of identifying an M3-like subtype based on transcriptome analysis, suggesting that these samples may also benefit from ATRA/ATO therapy.

PML/RARα targets are perturbed across AMLs and help identify M3 subtype

We respectively obtained 363 and 424 PML/RARα target genes that were significantly repressed and activated in the M3 subtype from a previous study [5] by integrating the transcriptome and the regulation of PML/RARα in the NB4 cell line (Figure 1A). Moreover, differential expression analysis was performed by comparing the expression of patients in different subtypes with normal samples (FDR < 0.05, |FC| > 1.5) in the training AML cohort. We next evaluated whether PML/RARα target genes are enriched among these differentially expressed genes based on hypergeometric tests. As a result, we found that PML/RARα target genes were significantly enriched in the M3 subtype (p < 1.55e-13), and approximately 22.62% of the targets were significantly abnormally expressed (Figure 1B). Notably, the enrichment p-value in the M3 subtype was the most significant, and significant enrichments were also observed in other subtypes (Figure 1B). When changing the thresholds of the differential expression analysis, we obtained similar results (Figures S1A and S1B). Taking the target gene WT1 as an example, Figure 1C shows the effect of PML/RARα on this directly activated gene. It was significantly over-expressed in both M3 (FDR < 2.84e-62, FC = 6.73) and several other subtypes (Figure S1C), and it is known as a significant predictor of AML recurrence [28] as well as an important marker for the detection of AML minimal residual disease [29].
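The subtype-level enrichment described above (testing whether PML/RARα targets are over-represented among a subtype's differentially expressed genes) reduces to a one-sided hypergeometric test. A minimal sketch in Python with hypothetical counts (the real gene universes are defined in the paper's methods):

    from scipy.stats import hypergeom

    # Hypothetical counts for one AML subtype (illustrative only)
    N_genes = 20000  # genes tested in the differential expression analysis
    K_targets = 787  # PML/RARalpha targets among them (363 repressed + 424 activated)
    n_deg = 3000     # genes called differentially expressed (FDR < 0.05, |FC| > 1.5)
    k_overlap = 178  # targets that are also differentially expressed (~22.6% of K)

    # P(X >= k) for sampling without replacement: survival function at k - 1
    p_enrich = hypergeom.sf(k_overlap - 1, N_genes, K_targets, n_deg)
    print(f"hypergeometric enrichment p-value: {p_enrich:.3g}")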
We also found that the frequency of WT1 mutation in validation cohort-1 was 6.8%, and it was 4.5% in the M3 subtype. Only 0.25% of the target genes were differentially expressed between the WT1 mutation group and the wild-type group (Figure S2). These results suggest that WT1 mutation has no significant effect on the expression of PML/RARα targets. STAB1, as another example, was also significantly up-regulated in the M3 subtype (FDR < 7.84e-89, FC = 5.46, Figure S1D), and reducing its expression inhibits the growth of NB4 leukemia cells [30]. STAB1 is also a poor prognostic factor in AML, and its oncogenic functions have been confirmed in melanoma [31].

To further understand the functional roles of PML/RARα target genes, we next performed functional enrichment analysis (FDR < 0.05). We found that the differentially expressed targets across AML subtypes were significantly enriched in myeloid cell differentiation, activation, and immune regulation-related functions (Figure 1D). For example, differentially expressed PML/RARα target genes in the M3 subtype were significantly enriched in leukocyte migration, myeloid leukocyte activation, and T cell activation (Figure 1D). We further explored whether the expression patterns of PML/RARα targets can help distinguish patients of the M3 subtype from other subtypes. We performed tSNE dimensionality reduction and found that almost all M3 patients were not only clustered together but also clearly separated from the other subtypes (Figures 1E, S1E, and S1F). These observations suggested that the PML/RARα targets exhibit an M3-specific expression pattern and could help identify more M3 patients. Moreover, we found that certain samples from other subtypes clustered together with M3 patients, implying that these patients have expression patterns similar to those of the M3 subtype, although they do not carry the PML/RARα fusion event.

Together, these results suggest that the expression of PML/RARα targets is likely to be perturbed in multiple AML subtypes and that the expression patterns of PML/RARα targets can greatly help identify patients similar to the M3 subtype.

M3-LS model accurately predicts M3 subtype in AML

We next hypothesized that if PML/RARα-activated target genes are more likely to be upregulated in a patient, whereas repressed target genes are likely to be downregulated, the patient is more similar to the M3 subtype. A computational model, M3-LS, based on the expression pattern of PML/RARα targets was developed to predict patients of the M3 subtype. We applied the M3-LS model to the training AML cohort (Table 1) and found that the M3-LS can accurately distinguish M3 patients from other subtypes with an AUC of 0.813 (Figure 2A). Next, we also trained random forest and XGBoost models using the M3-LS as features in the training cohort, and the AUCs of the two classifiers reached 1.00 and 0.979 (Figure 2A), respectively. The sensitivity reached 0.841 when the normalized M3-LS was 0.560. Based on this cutoff, we predicted M3 patients, and 73% of M3 patients were successfully predicted (Figure 2B). In addition, the normalized M3-LS of M3 patients was the highest compared with the other subtypes (Figure 2C, p < 1.9e-15, Wilcoxon's rank-sum test).
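The exact M3-LS formula is not reproduced in this excerpt; as a hedged illustration of the general idea (a signed signature score that is high when activated targets are up and repressed targets are down), a minimal Python sketch with hypothetical variable names:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def m3_like_score(expr_z: dict, activated: list, repressed: list) -> np.ndarray:
        # expr_z maps gene -> z-scored expression vector across samples;
        # the score rises when activated targets are high and repressed ones low.
        up = np.mean([expr_z[g] for g in activated if g in expr_z], axis=0)
        down = np.mean([expr_z[g] for g in repressed if g in expr_z], axis=0)
        return up - down

    # score = m3_like_score(expr_z, activated_targets, repressed_targets)
    # score_norm = (score - score.min()) / (score.max() - score.min())
    # auc = roc_auc_score(is_m3, score_norm)          # is_m3: 1 for FAB M3, else 0
    # m3_like = (score_norm >= 0.560) & (is_m3 == 0)  # cutoff from the ROC analysis

The normalized score, its AUC, and the 0.560 cutoff mirror the quantities reported above; the random forest and XGBoost classifiers then take such scores as input features.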
Moreover, the M3-LS model successfully distinguished M3 patients from other subtypes in two independent AML cohorts assayed on different platforms. In the first validation cohort, the AUC scores reached 0.852, 0.797, and 0.809 for the three classifiers, respectively (Figure 2D). Similarly, approximately 75% of M3 patients were successfully predicted (Figure 2E), and their scores were also significantly higher than those of the other subtypes (Figure 2F, p < 3.5e-08, Wilcoxon's rank-sum test). Our model was also validated in the second cohort (Figures 2G-2I). Thus, these results indicate that the M3-LS model, integrating expression patterns with regulatory information, can accurately predict the M3 subtype in AML from the view of the transcriptome. That is, in addition to the genomic event of PML/RARα fusion, the perturbed expression patterns of its target genes can also reflect the molecular signature of the M3 subtype.

Performance of M3-LS model is improved by integrating ATO/ATRA response genes

The combination of ATO and ATRA is a landmark treatment regimen in M3 AML [32,33]. An increasing number of studies have also revealed that this treatment can alter the expression of PML/RARα target genes and subsequently perturb downstream biological functions [10,20,22]. We next explored to what extent the M3-LS model can be refined by integrating ATO and ATRA treatment datasets (Table 2). We first obtained drug-response genes after treatment with ATO or ATRA, as well as abnormally expressed genes in M3 patients. There were 448/414 genes significantly down-/up-regulated by ATRA treatment (Figure 3A). In addition, 61 genes were detected to respond to ATO treatment, and 671/407 genes were down-/up-regulated in M3 AML compared with normal samples (Figure 3A). Combined with the above set of PML/RARα targets, we obtained 109 refined target genes, including 61 activated and 48 repressed genes (Figure 3A). The M3-LS model was then re-trained based on these refined PML/RARα target genes, and we found that M3 patients in the training cohort could be distinguished from other subtypes with higher accuracy (AUC = 0.965, Figure 3B). In particular, the AUCs of the refined random forest and XGBoost classifiers reached 1.00 and 0.999, respectively (Figure 3B). Approximately 86.89% of M3 patients were successfully predicted, and their scores were significantly higher than those of the other subtypes (Figure 3C, p < 2.22e-16, Wilcoxon's rank-sum test). The robustness of our M3-LS model was evaluated from three aspects. First, we randomly used 10%-100% of the patients to train the model and evaluated the effect of sample size. We found that our model reached high AUCs for different numbers of patients (Figure 3D), even with a small number of patients. Second, considering the relatively large number of non-M3 patients compared with M3 patients, we randomly selected the same number of non-M3 patients as M3 patients to eliminate imbalance effects, repeating this process 1000 times. Our model also obtained high AUC values, ranging from 0.95 to 0.98 (Figure 3E). Finally, clear improvements of our models were observed in the other two validation cohorts (Figures 3F-3I), where the AUC values reached up to 0.99 and 0.939, respectively (Figures 3F and 3H). Similarly, M3 patients exhibited a significantly higher normalized M3-LS than other patients (Figures 3G and 3I). All these results support that integrating PML/RARα targets with ATO/ATRA response genes can further refine our model and show that the M3 subtype can be distinguished from other AML at the transcriptome level.
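The class-imbalance check described above (repeatedly sampling as many non-M3 as M3 patients and re-computing the AUC) can be sketched as follows; a hedged illustration, not the authors' code:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def balanced_aucs(scores, is_m3, n_rounds=1000, seed=0):
        # AUC distribution over class-balanced subsamples
        rng = np.random.default_rng(seed)
        m3_idx = np.flatnonzero(is_m3 == 1)
        other_idx = np.flatnonzero(is_m3 == 0)
        aucs = []
        for _ in range(n_rounds):
            sub = rng.choice(other_idx, size=m3_idx.size, replace=False)
            idx = np.concatenate([m3_idx, sub])
            aucs.append(roc_auc_score(is_m3[idx], scores[idx]))
        return np.asarray(aucs)

    # aucs = balanced_aucs(score_norm, is_m3)
    # print(aucs.min(), aucs.mean(), aucs.max())  # reported range was ~0.95-0.98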
M3-LS model identifies additional patients like M3 subtype

Based on the observation that the M3-LS model can accurately predict M3 patients, we next predicted M3-like AML patients in the three cohorts. That is, if a patient of a non-M3 subtype was predicted to be positive, the patient was assigned to an additional subtype named M3-like. In total, 7.6% of the non-M3 patients in the training cohort were M3-like, accounting for 14.04% of the M1 subtype, 9.21% of M2, and 5.2% of M4 (Figure 4A). In addition, 3.61% and 12.37% of the AML patients in the two validation cohorts were also predicted as the M3-like subtype, respectively (Figure 4A). We next sought to understand the relevance of these M3-like patients to the functional, biological, and clinical properties of the M3 subtype. First, it is well known that AML is a malignant disease of myeloid progenitor cells [34]. We thus applied the xCell method [35] to estimate the proportion of myeloid progenitor cells in AML patients. As a result, patients in both the M3 and M3-like subtypes exhibited much higher common myeloid progenitor (CMP) scores than the other subtypes (Figure 4B, p < 2.2e-16, Wilcoxon's rank-sum tests). Moreover, several marker genes of the M3 subtype exhibited significantly higher expression in M3-like patients, such as WT1, GFI1, GATA2, and KDM1A (Figure 4C). For example, WT1, an activated target gene of PML/RARα, was not only over-expressed in AML as described above but was also repressed by both ATO and ATRA. WT1 has been found to be an important regulator of normal and malignant hematopoiesis; it is usually inactivated in APL patients, resulting in the complete loss of WT1's inhibitory function on APL tumor cells [36]. We also observed higher expression of GATA2 in the M3 and M3-like subtypes, which has been demonstrated to be a prognostic factor in AML [21]. The combination of a KDM1A inhibitor and ATRA can promote the ATRA-driven induction and differentiation of leukemia cells [37]. We found that the expression levels of KDM1A were significantly increased in the M3 and M3-like subtypes. Significantly high expression of these genes was also discovered in the validation sets (Figures S3A-S3J).

To explore the molecular functions related to the M3-like subtype, differentially expressed genes were first identified, and Figure 4D shows the 10 most significantly differentially expressed genes in the training and validation cohorts, respectively. Among them, WT1 and GFI1 are PML/RARα target genes. Notably, the target genes activated by PML/RARα were all up-regulated in the M3-like subtype, while the target genes inhibited by PML/RARα were mostly down-regulated in the M3-like subtype (Figure 4D). These findings suggest that the PML/RARα target expression patterns of M3-like samples are highly similar to those of the M3 subtype. Carcinogenesis- and immune-related biological functions were further explored in AML patients by single-sample gene set enrichment analysis (ssGSEA; a simplified per-sample scoring sketch is given below). The cancer hallmark-associated pathways were obtained from the literature [38] and the MSigDB database [39].
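Full ssGSEA uses a weighted Kolmogorov-Smirnov-style running sum over a ranked gene list; as a deliberately simplified stand-in that captures the per-sample idea (mean within-sample rank of a gene set), a hedged sketch with hypothetical inputs:

    import pandas as pd
    from scipy.stats import rankdata

    def per_sample_set_score(expr: pd.DataFrame, gene_set) -> pd.Series:
        # Simplified per-sample gene-set score, NOT full ssGSEA:
        # expr is a genes x samples matrix; the score is the mean rank
        # (scaled to [0, 1]) of the set's genes within each sample.
        genes = [g for g in gene_set if g in expr.index]
        ranks = expr.apply(lambda col: rankdata(col), axis=0)
        return ranks.loc[genes].mean(axis=0) / expr.shape[0]

    # hallmark_scores = {name: per_sample_set_score(expr, genes)
    #                    for name, genes in hallmark_sets.items()}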
Globally, patients in the M3 and M3-like subtypes exhibited similar pathway activities across cancer hallmarks (Figures 4E and S3K). The patients in the M3-like subtype were found to be enriched in six particular functions, namely 'Negative regulation of cell proliferation', 'Negative regulation of cell cycle', 'Epithelial to mesenchymal transition', 'Cell migration', 'Vasculogenesis', and 'Chromosome organization', which are related to the 'Insensitivity to Antigrowth Signals', 'Tissue Invasion and Metastasis', 'Sustained Angiogenesis', and 'Genome Instability and Mutation' cancer hallmarks. For the cancer hallmark-related pathways, the patients in the M3 and M3-like subtypes were mostly enriched in pathways related to signal regulation, including WNT beta-catenin signaling, Notch signaling, Estrogen response early, TGF beta signaling, and Estrogen response late (Figure S3K). Thus, these findings reveal that multiple properties of M3-like patients are much more similar to those of M3 patients.

M3-like patients with strong GMP and distinct genomic features

A recent study has demonstrated that cellular hierarchy composition constitutes a novel framework for understanding disease biology and advancing precision medicine in AML [40]. We thus evaluated the cellular compositions of the AML patients. In total, the abundance of seven leukemic cell types was estimated by a deconvolution approach, three of which were leukemia stem and progenitor cells (LSPCs), namely Quiescent LSPCs, Primed LSPCs, and Cycling LSPCs. The other four leukemia cell types were GMP-like blasts, ProMono-like blasts, Mono-like blasts, and cDC-like blasts, as classified by a recent study [41]. Based on the leukemia hierarchy composition, we identified four distinct subtypes: Primitive (shallow hierarchy, LSPC-enriched), Mature (steep hierarchy, enriched for mature Mono-like and cDC-like blasts), GMP (dominated by GMP-like blasts), and Intermediate (balanced distribution). We found that patients in the M3 and M3-like subtypes exhibited a higher proportion of GMP-like cells (Figure 5A). Moreover, the majority of M3 and M3-like patients were classified into the GMP subtype (Figure 5B). By analyzing the expression of GMP-like marker genes, we found that these genes were more likely to be highly expressed in both M3 and M3-like patients (Figure 5C). For instance, the expression level of IGFBP2 is high in leukemia (Figure S4A). Inhibition of endogenous IGFBP2 expression in human leukemia cells leads to increased apoptosis, decreased migration, and decreased activation of AKT and other signaling molecules [42]. MPO is generally considered to be the definitive marker of myeloblasts. Targeting MPO expression or enzyme activity sensitizes AML cells to cytarabine therapy by triggering oxidative damage and persistent oxidative stress, especially in AML cells with high MPO expression [43] (Figure S4B). We also observed higher expression of CLEC11A in the M3 and M3-like subtypes (Figure S3C). TCGA data showed that high expression of CLEC11A is associated with a good prognosis [44] (Figure S4C).
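The four hierarchy classes above are assigned from the deconvolved cell-type fractions; as a toy illustration (the thresholds here are invented for the sketch, the study's actual rule may differ):

    import pandas as pd

    def hierarchy_class(frac: pd.Series) -> str:
        # frac: deconvolved fractions indexed by cell type, summing to ~1
        lspc = frac[['Quiescent LSPC', 'Primed LSPC', 'Cycling LSPC']].sum()
        mature = frac[['Mono-like', 'cDC-like']].sum()
        if frac['GMP-like'] >= 0.5:   # dominated by GMP-like blasts
            return 'GMP'
        if lspc >= 0.5:               # shallow hierarchy, LSPC-enriched
            return 'Primitive'
        if mature >= 0.5:             # steep hierarchy, mature blasts
            return 'Mature'
        return 'Intermediate'         # balanced distribution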
To better understand the genomic features of the M3-like subtype, we analyzed the somatic mutations of the patients in validation cohort-1 (Figure S4D; Table S1). Generally, the mutation burden of the M3 and M3-like subpopulations was relatively higher than that of the others (Figure S4E). On the one hand, several genes exhibited a higher mutation frequency in M3 patients (Figure S4F), such as FLT3 and ARID1B. On the other hand, distinct genomic features were found in M3-like patients, involving IDH2, RAD21, and CADM3 (Figure 5D). IDH2 mutation was not detected in the M3-like subtype, and it has been shown that the vulnerability conferred by IDH2 mutation in AML leads to sensitivity to APL-like targeted combination therapy [33]. RAD21 was significantly more likely to be mutated in M3-like patients (Figure 5D, p = 0.013 and OR = 21.52; a minimal sketch of this test is given at the end of this passage). RAD21 is a core subunit of the eukaryotic cohesin complex, which regulates chromosome separation and the DNA damage response [45]. RAD21 mutation sensitized patients to treatment with the BCL2 inhibitor ABT-199, and reducing RAD21 levels sensitized AML cells to BCL2 inhibition [46]. In detail, we found that FLT3, CRLF1, and CALR exhibited a higher mutation frequency in M3 patients (Figure 5E), while TP53, RAD21, IDH2, and FLT3 exhibited a higher frequency in M3-like patients (Figure 5E). Furthermore, we found CCDC60, BMPER, AMER3, AURKC, and AKNAD1 mutations only in the M3-like subtype (Figure 5E) [48,49]. AURKC is a member of the Aurora subfamily of serine/threonine protein kinases and may play a role in mitosis. It has been shown that single nucleotide polymorphisms in AURKC are associated with cancer risk in both glioblastoma and gastric cancer [50,51]. These specific mutations could be used to define the M3-like subtype. These results suggest that M3-like and M3 patients are highly similar in terms of GMP-like cells, while their abnormal genomic features are distinct.

M3-like patients with low immune activity and better clinical survival

Immunotherapy modulating the tumor microenvironment (TME) has a promising effect on AML [52], but the therapeutic effect depends on the TME of the patient. We next sought to determine whether the TMEs of M3-like patients are distinct from those of other subtypes. Immune scores were estimated in the training cohort by xCell [35], and the relatively low immune scores of patients in the M3 and M3-like subtypes were significant (Figure 6A, p < 3e-07 by Kruskal-Wallis test). A similar situation was found in both validation cohorts (Figures S5A and S5B), suggesting that M3-like patients had lower immune activity than M3 patients. Moreover, we explored the expression of the LM22 immunotherapy gene sets in the training cohort and also found that these genes exhibited significantly lower expression in patients of the M3 and M3-like subtypes (Figure S5C). Moreover, we used ssGSEA to estimate the abundance of cell types and the activities of particular gene sets. Interestingly, the proportions of myeloid cells in M3 and M3-like patients were higher (Figure 6B), and the β-catenin signaling pathway related to immunotherapy was also enriched in most M3 patients. In human metastatic melanoma samples, there is a correlation between the activation of the β-catenin signaling pathway in tumors and the absence of a T cell gene expression signature, which underlies a mechanism of immunotherapy resistance [53].
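Two of the statistics quoted in this section are one-liners in SciPy; a hedged sketch with hypothetical inputs (the real mutation table is in the paper's Table S1):

    from scipy.stats import fisher_exact, kruskal

    # 2x2 contingency table for RAD21 mutations, shaped like the reported
    # association (hypothetical counts; columns: M3-like, other non-M3)
    table = [[3, 10],      # RAD21 mutated
             [14, 1000]]   # RAD21 wild-type
    odds_ratio, p_fisher = fisher_exact(table, alternative="two-sided")

    # Kruskal-Wallis comparison of xCell immune scores across subtypes,
    # one score vector per group:
    # h_stat, p_kw = kruskal(scores_m3, scores_m3_like, scores_other)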
In contrast, M3 and M3-like patients were less enriched for other immune-related gene sets (Figure 6B). Moreover, in validation cohort-2, M3 and M3-like patients also had lower enrichment of immune-related gene sets, except for the myeloid cell-related gene sets (Figure S5D).

Finally, we explored the clinical correlations and found that patients in different subtypes exhibited significantly distinct survival outcomes, in line with the observed associations with the Cancer and Leukemia Group B (CALGB) cytogenetics risk category, and patients in the M3 and M3-like subtypes had better clinical survival (Figure 6C, p = 0.0004, log-rank test). Moreover, there were higher proportions of patients with favorable outcomes in the M3 and M3-like subtypes (Figure 6D, p < 2.2e-16, Fisher's exact test). Thus, M3-like patients were characterized by low infiltration of immune cells and a better clinical survival outcome.

DISCUSSION

In this study, we developed a novel computational model to discover the M3-like subtype of AML based on the expression features of PML/RARα targets. Our analysis found that the expression of PML/RARα targets was frequently perturbed across AMLs and helped identify the M3 subtype. Previous studies have shown that some AML patients with IDH2 mutations respond well to ATRA and ATO combination therapy, although they may not carry the PML/RARα fusion protein.33 Therefore, we hypothesized that non-M3 patients with high expression of PML/RARα up-regulated target genes and low expression of down-regulated target genes were likely to resemble the M3 subtype. Our computational model can not only distinguish patients of the M3 subtype but can also predict a set of samples with expression patterns similar to the M3 subtype.

Notably, several results suggest that these M3-like patients are more consistent with the M3 subtype, such as the expression patterns of several important marker genes of the M3 subtype, the proportion of myeloid progenitor cells, and the deconvolution of AML constituent cell populations. Furthermore, we found that M3-like patients exhibit molecular features that differ from other non-M3-like patients, including genomic mutations and molecular immune signatures. Benefiting from the high efficiency of ATRA and ATO combined therapy, the survival prognosis of M3 patients is generally superior to that of other subtypes.9 Interestingly, we found that the clinical prognosis of M3-like samples was similar to that of M3 samples and significantly better than that of the other samples. Moreover, an unexpected finding of our study was that both the M3 subtype and the M3-like subtype tend to have low immune characteristics, suggesting that they may not be suitable for immunotherapy and further indicating that they might be suitable for targeted therapy. The most widely accepted treatment regimen for the M3 subtype is the classic targeted combination therapy of ATO/ATRA, with a cure rate of up to 95%.6,7
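As context for the survival comparisons above (Figure 6C), here is a minimal sketch of a Kaplan-Meier fit with a three-group log-rank test, assuming the lifelines package and synthetic survival times rather than cohort data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Synthetic survival data for three groups (placeholders, not cohort data).
rng = np.random.default_rng(1)
group = np.repeat(["M3", "M3-like", "other"], 50)
time = rng.exponential(scale=np.repeat([36.0, 30.0, 18.0], 50))  # months
event = rng.integers(0, 2, size=150)  # 1 = death observed, 0 = censored

# Fit one Kaplan-Meier estimator per subtype.
fits = {g: KaplanMeierFitter().fit(time[group == g], event[group == g], label=g)
        for g in ("M3", "M3-like", "other")}

# Log-rank test across the three groups, as in Figure 6C.
result = multivariate_logrank_test(time, group, event)
print(f"log-rank p = {result.p_value:.4f}")
```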
Given such efficacy, expanding this treatment plan to more types of AML could enable more leukemia patients to be treated effectively. Our model performance was improved by further requiring the PML/RARα targets to respond to ATRA/ATO or to be differentially expressed in the M3 subtype. In addition, we found that treatment did not significantly affect the expression of PML/RARα targets or the efficacy of the model. The Jaccard coefficient between the PML/RARα targets and the genes differentially expressed between the treatment and diagnostic groups was very low, only 0.0188. The AUC of the model reconstructed using only diagnostic samples was 0.96. However, there are still some challenges in the optimization process. The ATRA-treated cell lines we collected were those of the M3 subtype, with higher consistency, while the ATO-treated cell lines were derived from multiple human tissues and were heterogeneous. Hence, we used different methods to extract ATRA and ATO target genes. If data on ATO/ATRA treatment were available consistently in the background of the M3 subtype, our model could be further improved. Additionally, we tried to find M3-like cells among existing cell lines to test the efficacy of ATRA and ATO. However, we found no cell lines with a high M3-LS except for NB4 (M3 type) (Table S3). In future studies, we will try to construct M3-like primary cells to validate the model.

A large number of targeted therapies for AML are currently being developed, and great progress has been made in targeted therapies for M3 patients. We believe that the initiative of identifying patients similar to the M3 subtype in our study may help to find patients who would benefit from ATO/ATRA treatment and deepen our understanding of AML pathogenesis.

Limitations of the study

There are still several challenges in the optimization process. Our collection of ATRA-treated cell lines comprised those of the M3 subtype, with higher consistency, while the ATO-treated cell lines were derived from multiple human tissues and were heterogeneous. Hence, we used different methods to extract ATRA and ATO target genes. If data on ATO/ATRA treatment were available consistently in the background of the M3 subtype, our model could be further improved.

STAR+METHODS

Detailed methods are provided in the online version of this paper and include the following:

Figure 1. PML/RARα targets are perturbed across AMLs and help identify M3 subtype. (A) The change in gene expression after PML/RARα gene knockout. The labels show the top 10 genes that are significantly upregulated or downregulated. (B) Enrichment of PML/RARα target genes among the differential genes between AML patients and healthy control samples (FDR < 0.05, FC > 1.5). The height of the bar graph is the proportion of differentially expressed genes among the targets, and the line chart is the -log10(p-value) of the hypergeometric test between the differential genes and the target genes. (C) Top: PML/RARα effects on the transcriptional activity of the directly activated gene WT1. This diagram illustrates the ChIP-seq abundance in WT1 after PML/RARα knockout using small interfering RNA (siRNA) targeting the fusion site of PML/RARα. The panel shows the genome browser tracks of PML/RARα binding. Bottom: The RNA-seq abundance was compared between two control samples and two samples with siRNA knockout of PML/RARα. The ChIP-seq data were obtained from a previous study.5 (D) Functional enrichment analysis of differentially expressed PML/RARα target genes in each AML subtype.
(E) t-SNE analysis of PML/RARα target gene transcriptomic data for 519 AML samples in the training cohort. Each point represents a sample visualized in a two-dimensional projection. Samples of each subtype are displayed using a different color. In particular, M3 subtype samples, represented by red dots, spontaneously cluster together.

Figure 2. M3-LS model accurately predicts M3 subtype in AML. (A) Random forest, XGBoost, and the M3-LS model were used to predict M3 samples, and receiver operating characteristic (ROC) curve analysis was used to evaluate the prediction models. (B) The proportion of M3-like samples predicted by the optimized model in each subtype; amaranth represents the proportion of samples predicted to be of the M3-like subtype, and yellow represents the proportion of samples not predicted to be of the M3-like subtype. (C) Model scores were compared for each AML subtype. Box and violin plots show the median and the 25th and 75th percentiles. Purple box and violin plots represent model scores for all AML samples except the M3 subtype. The Wilcoxon rank-sum test was used for statistical calculation. For validation cohorts 1 and 2: (D and G) random forest, XGBoost machine learning models and the M3-like scoring index were used to predict M3 samples; (E and H) the proportion of M3-like samples predicted by the optimized model in each subtype; (F and I) model scores were compared for each AML subtype.

Figure 3. Performance of M3-LS model was improved by integrating ATO/ATRA response genes. (A) Venn plot of model optimization, including leading edge genes (LEGs) of ATO, robust rank aggregation results of ATRA differential genes, PML/RARα target genes, and genes differentially expressed between M3 and healthy controls in the training cohort. (B) Random forest, the XGBoost machine learning model and the optimized M3-LS index were used to predict M3 samples, and ROC analysis was used to evaluate the prediction models in the training cohort. (C) Comparison of the scores of the optimized models for each subtype. (D) The model was validated using ROC analysis. The model was used to predict several randomly selected samples; the line graph represents the size of the AUC. (E) Probability density distribution plot of the AUC. (F) Random forest, the XGBoost machine learning model and the optimized M3-LS index were used to predict M3 samples, and ROC analysis was used to evaluate the prediction model in validation cohort-1. (G) Comparison of model scores for each subtype in validation cohort-1. (H) Random forest, the XGBoost machine learning model and the optimized M3-LS index were used to predict M3 samples, and ROC analysis was used to evaluate the prediction model in validation cohort-2. (I) Comparison of model scores for each subtype in validation cohort-2. Statistics were calculated using the Wilcoxon rank-sum test.
Figure 4. M3-LS model identifies additional patients like M3 subtype. (A) The proportion of M3-like samples predicted by the optimized model in each subtype in the training and validation cohorts, respectively. Amaranth represents the proportion of samples predicted to be of the M3-like subtype, and yellow represents the proportion of samples not predicted to be of the M3-like subtype. (B) Violin plot of the proportion of common myeloid progenitors (CMP) for each subtype identified in the training cohort. Box plots show the median and the 25th and 75th percentiles of CMP for each subtype. p values were calculated using the Kruskal-Wallis test. (C) The expression levels of WT1, GFI1, GATA2, and KDM1A were compared across AML cases predicted as M3-like versus the M3 subtype and other samples in the training cohort. p values were estimated using the Kruskal-Wallis test. (D) The differential expression of PML/RARα target genes in M3-like versus other samples in the training and validation cohorts. The heatmap shows the fold change (FC) values of differential genes in M3-like samples relative to other samples, and the genes in red font are characteristic genes of the M3 subtype. (E) Cancer hallmark pathway enrichment of the M3 subtype, the M3-like subtype and other samples. The heatmap shows the single sample gene set enrichment analysis (ssGSEA) results of each subtype sample in each cancer hallmark pathway (statistical significance was assessed by the Wilcoxon rank-sum test; *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001). Data are represented as means.

Figure 5. M3-like patients with strong GMP and distinct genomic features. (A) Relative abundance of each leukemic cell type per patient. Each bar represents a patient, and the distribution of colors on each bar represents the distribution of the leukemia cell populations within their leukemic hierarchy. (B) Hierarchical classification of leukemia cells for each subtype in the training cohort. (C) Expression of the GMP-like marker genes in the M3 subtype, the M3-like subtype and other samples in the training cohort. (D) Mutation frequency of selected genes in the M3-like subtype (left) and other samples (right). Statistical significance was assessed by Fisher's exact test. (E) Top 10 most frequently mutated genes in the M3 and M3-like subtypes.

Figure 6. M3-like patients with low immune activity and better clinical survival. (A) Immune scores for each subtype were calculated using xCell. Box plots show the median and the 25th and 75th percentiles of immune scores for each subtype. p values were calculated using the Kruskal-Wallis test. (B) Enrichment of various immune gene sets and myeloid gene sets for the M3 subtype, the M3-like subtype and other samples in the training cohort. The heatmap shows the ssGSEA results of each subtype sample in each gene set. (C) Kaplan-Meier survival analysis of AML cases predicted as M3-like versus the M3 subtype and other samples in validation cohort-1. p values were estimated using the log-rank test. (D) Percentage of patients with favorable outcomes for each subtype in validation cohort-1 (statistical significance was assessed by Fisher's exact test; *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001). Data are represented as means.

Table 1. Characteristics of AML patients

Table 2. Cell lines treated with ATO or ATRA
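The legends for Figures 2 and 3 above repeatedly compare classifiers by ROC analysis; below is a minimal sketch of such a comparison on synthetic data, using scikit-learn's gradient boosting as a stand-in for XGBoost.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an expression matrix (samples x target genes)
# with a binary M3 / non-M3 label.
X, y = make_classification(n_samples=500, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```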
THE MAIN PHASES OF OLEKSANDR ZUPAN'S DEVELOPMENT AS A GEOGRAPHER AND TEACHER

The main phases of the life history and the formation of the scientific work of Professor Oleksandr Zupan, a famous Austrian geographer, climatologist, teacher, and editor of the renowned geographical journal "Petermanns Mitteilungen", are described. His contribution to the development of geographical science and of Austrian and German school geography is shown through the analysis of O. Zupan's career.

Methodological principles of the research

The methodological principles of the research are based on the general scientific principles of historical authenticity, objectivity, eventuality, and a dialectic understanding of the historical process. They are focused on the priority of the documents and scientific works of the scientist, which allow us to analyze his activity and to identify the main stages of his formation as a scientist. We used general scientific methods of analysis (typologization, classification) and an interdisciplinary, structurally-systemic approach, as well as historical (problem-chronological, comparative-historical, descriptive) and cartographic research methods, together with methods of source analysis and historiographical analysis and synthesis.

Objective of the research

The purpose of the study is to identify the main stages of Oleksandr Zupan's development as a scientist who made a significant contribution to the development of geography, climatology and school geography. Objectives: to discover the main stages of Oleksandr Zupan's development as a geographer, teacher, and climatologist; to evaluate and systematize the published source base of the scientist; to analyze the various spheres of the scientist's socio-geographical development.

Introduction to the main material

Studying the scientific activity of Oleksandr Zupan, it should be noted that the scientist had a very complex and rich path to his formation as a scientist, marked by various events that played a great role in it. During his lifetime, Zupan visited various countries, including Italy, Slovenia, Austria, Germany, Ukraine, and Poland. During his research activity he visited more than 10 cities, where he wrote more than 20 works, some of which were republished several times. Moreover, for a long time he worked as the editor of the famous geographical magazine "Petermanns Mitteilungen", which also published a number of his ideas and studies. During the third phase he established himself both as a scientist and as a teacher. It should be noted that during this period the world saw the fundamental publication "Fundamentals of Physical Geography" ("Grundzüge der physischen Erdkunde") [8]. This work was translated into many European languages and was republished six times. During this period he also received the title of professor of geography at Franz Josef University (now the Yuriy Fedkovych Chernivtsi National University) and launched geography courses at the same university [14].

The fourth phase (1885-1907). During this period he moved to Gotha and became the editor of the famous geographical magazine "Petermanns Mitteilungen" [4,5]. During this period Zupan wrote most of his works, including "Austria-Hungary" [11], published in 1889 in Kirchhoff's Collection of Regional Descriptions of Europe, and "Temperaturzonen der Erde" ("Temperature Zones of the Earth", 1879) [6], in which the first map of climatic zones developed by Zupan was published.
He also published a number of works in the fields of climatology and oceanography, polar research, physical geography, economic geography, history and others. In 1886 he was elected a member of the German Academy of Sciences Leopoldina, and in 1904 he was awarded the Cothenius Medal [2].

The fifth phase (1908-1920) is the final stage of the scientific and pedagogical activity of Professor Oleksandr Zupan. He accepted the offer of the University of Wroclaw and moved to Wroclaw, where he resumed his teaching activity [3]. He devoted the remaining period of his life to work in the field of political geography, and in the summer of 1918 he published his last work, "Leitlinien der allgemeinen politischen Geographie: Naturlehre des Staats".

Conclusion

There is no future without the past, which is why it is worth remembering the scientists who made great efforts to provide us with a solid foundation for exploring the world.
How Industry 4.0 and Sensors Can Leverage Product Design: Opportunities and Challenges

The fourth industrial revolution, also known as Industry 4.0, has led to an increased transition towards automation and reliance on data-driven innovations and strategies. The interconnected systems and processes have significantly increased operational efficiency, enhanced organizational capacity to monitor and control functions, reduced costs, and improved product quality. One significant way that companies have achieved these benefits is by integrating diverse sensor technologies within these innovations. Given the rapidly changing market conditions, Industry 4.0 requires new products and business models to ensure companies adjust to current and future changes. These requirements call for an evolution of product design processes to accommodate design features and principles applicable in the current dynamic business environment. Thus, it becomes imperative to understand how these innovations can leverage product design to maximize benefits and opportunities. This research paper employs a Systematic Literature Review with Bibliometric Analysis (SLBA) methodology to explore and synthesize data on how Industry 4.0 and sensors can leverage product design. The results show that various product design features create opportunities that can be leveraged to guarantee the success of Industry 4.0 and sensor technologies. However, the research also identifies numerous challenges that undermine the ongoing transition towards intelligent factories and products.

Introduction

Companies are integrating technologies to design products that meet their customers' growing needs and expectations. As a result, most firms are leveraging Industry 4.0 technologies associated with emerging intelligent factories and products that promise to transform the manufacturing process, thus impacting multiple market sectors [1]. Representatives from business, politics, and academia introduced the term "Industry 4.0" in 2011 through an initiative aiming to promote the idea as a technique to strengthen the German manufacturing sector. Bahrin et al. [2] define Industry 4.0 as the fourth industrial revolution, involving the digitization of the manufacturing sector through automation and data exchange in manufacturing technologies, including the industrial Internet of Things (IoT), cyber-physical systems, cloud computing, and cognitive computing. In addition, sensors are critical components of Industry 4.0 since they connect various methods and devices, enabling multiple machines to communicate in order to track equipment and systems at each facility [3]. Consequently, incorporating sensors into Industry 4.0 technologies can enhance automation and sustainability and reduce costs through real-time output tracking and an improved capability to monitor automated control systems.

Product design is critical in actualizing Industry 4.0 and developing sensor technology. This argument is evident in Tatipala et al.'s [4] research, which stated that "without design, Industry 4.0 will fail", since design is vital in accelerating the transformation of the manufacturing process. Product design under Industry 4.0 and sensors involves networking between different elements, such as machines and products [3]. For example, using cost-effective active sensors facilitates the data collection incorporated into Industry 4.0 to create intelligent, connected products, ensuring customer value creation.
These processes can leverage product design principles that involve imagining, creating, and iterating products that address consumers' specific needs within a particular market. Therefore, smart products manufactured through Industry 4.0 can be customized to match target markets' needs and expectations, thus increasing competitiveness. However, Tatipala et al. [4] note that there is scarce research on product design in the context of Industry 4.0 despite its promising benefits and opportunities. Therefore, this Systematic Literature Review with Bibliometric Analysis (SLBA) aims to identify the challenges and opportunities in integrating Industry 4.0 and sensors to leverage product design.

The structure of the paper is as follows. First, we explain the methodological approach used to respond to the object of study. The second section presents the bibliometric analysis carried out. The following section describes the theoretical perspectives resulting from the analysis. Finally, we provide conclusions, implications, and future research directions.

Materials and Methods

In order to collect and synthesize the necessary data for this study, a Systematic Literature Review with Bibliometric Analysis (SLBA) was developed. According to Romanelli et al. [5], performing a bibliometric analysis allows researchers to assess the developments made in a given field, illustrating how the evidence connects to show the structure of the field. In this sense, the methodology was adopted due to its ability to unpack the evolutionary nuances in the fields of Industry 4.0, sensor technology, and product design, while also shedding light on emerging issues in these fields. This process started with the definition of eligibility criteria to ensure that the results of the documents considered are accurate, objective, meaningful, and relevant to the study. Therefore, the researcher employed eligibility criteria for inclusion and exclusion. The SLBA involves screening and selecting information sources to ensure the validity and accuracy of the data presented, in a process consisting of 3 phases and 6 steps [6][7][8][9] (Table 1).

Table 1. Process of systematic literature review with bibliometric analysis (SLBA).

Phase | Step | Description
Exploration | Step 1 | formulating the research problem
Exploration | Step 2 | searching for appropriate literature
Exploration | Step 3 | critical appraisal of the selected studies
Exploration | Step 4 | data synthesis from individual sources
Interpretation | Step 5 | reporting findings and recommendations

This methodological approach focuses on bibliographical research in SCOPUS, an online database for indexing scientific articles and one of the most important peer-reviewed databases in the academic world. The isolated use of Scopus is due to the fact that it is the main source of articles for academic journals, covering about 19,500 titles from more than 5000 international publishers, including coverage of 16,500 peer-reviewed journals in a variety of scientific fields, thus providing a real view of the topics researched with scientific and/or academic relevance [7]. The methodological procedure started with the use of the keyword "Industry 4.0" in order to identify the appropriate sources in the Scopus directory. This initial search generated a total of 23,300 references.
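The stepwise narrowing of this search is described next; as an illustration, the final combined filter could be expressed roughly as below using standard Scopus TITLE-ABS-KEY syntax (the authors' exact field codes and query are not reported, so this is an assumption).

```python
# Illustrative Scopus advanced-search string approximating the three-step
# keyword narrowing described in the text; TITLE-ABS-KEY is standard Scopus
# syntax, but the authors' exact query is not reported.
query = (
    'TITLE-ABS-KEY("Industry 4.0") '
    'AND TITLE-ABS-KEY("sensors") '
    'AND TITLE-ABS-KEY("Product Design")'
)
print(query)
```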
In order to reduce the high number of resulting sources, other search criteria were implemented based on the argument by Rosário and Dias [8] that only articles in journals and papers presented at conferences considered of "high quality" should be synthesized in a literature review, recommending that researchers adopt appropriate inclusion and exclusion criteria. Rosário and Dias [9] further explain that literary analysis improves readers' understanding of the breadth and depth of the existing literature. Therefore, to narrow the search to the most relevant literature, the keyword "sensors" was added, reducing the results to 2870 documents, and later a more specific keyword, "Product Design", was added, restricting the results to 26 scientific and/or academic documents (21 Conferences; 10 Articles; 2 Comments; and 2 Book Chapters) (Table 2).

Literature Analysis: Themes and Trends

The peer-reviewed documents were analyzed up to October 2022. The year 2022 was the year with the highest number of peer-reviewed documents on the topic, with 16 publications. Figure 1 analyzes the peer-reviewed publications published through October 2022. We can say that in 2022 there was an interest in research on Industry 4.0, sensors, and product design.

In Table 3, we analyze the Scimago Journal & Country Rank (SJR), the best quartile, and the H index by publication: Computers in Industry was of the highest quality, with 2.430 (SJR), Q1, and an H index of 108. There is a total of seven publications in Q1, three publications in Q2, and four publications in Q3. Data for 12 publications are not available.

The thematic areas covered by the 26 scientific and/or academic documents were Computer Science (21); Engineering (21); Mathematics (5); Physics and Astronomy (5); Decision Sciences (4); Materials Science (4); Chemical Engineering (3); Business, Management and Accounting (2); Biochemistry, Genetics and Molecular Biology (1); Chemistry (1); Medicine (1); and Social Sciences (1). The most cited article was "Industrie 4.0 and smart manufacturing - a review of research issues and application examples" by Thoben et al. (2017), with 590 citations, published in the International Journal of Automation Technology, with 0.280 (SJR), the best quartile Q3, and an H index of 20. The objective of that paper is to provide an overview of Industry 4.0 and smart manufacturing programs, analyze the application potential of CPS starting from product design through production and logistics up to maintenance and exploitation (e.g., recycling), and identify current and future research issues.

In Figure 2, we analyze the evolution of the documents' citations up to October 2022. The number of citations shows positive net growth (R² of 79%), with 234 citations in 2021, out of a total of 811 citations.
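As a flavor of the trend fit behind the R² figure above, here is a minimal sketch fitting a linear trend to yearly citation counts; the counts are placeholders, not the study's series.

```python
import numpy as np

# Hypothetical yearly citation counts (placeholders, not the study's data).
years = np.array([2017, 2018, 2019, 2020, 2021, 2022])
citations = np.array([20, 55, 90, 150, 234, 262])

# Least-squares linear trend and its coefficient of determination R^2.
slope, intercept = np.polyfit(years, citations, 1)
predicted = slope * years + intercept
ss_res = np.sum((citations - predicted) ** 2)
ss_tot = np.sum((citations - citations.mean()) ** 2)
print(f"slope = {slope:.1f} citations/year, R^2 = {1 - ss_res / ss_tot:.2f}")
```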
The objective of this paper is to provide an overview of Industry 4.0 and smart manufacturing programs, analyze the application potential of CPS starting from product design through production and logistics up to maintenance and exploitation (e.g., recycling), and identify current and future research issues. In Figure 2, we analyze the evolution of documents' citations until October 2022. The number of citations shows a positive net growth with R2 of 79% for the year 2021 with 234 citations with a total of 811 citations. The H index was used to verify the productivity and impact of published works, based on the largest number of articles included that had at least the same number of citations. Of the documents considered for the H index, eight were cited at least eight times. In Appendix A, Table A1, citations of all scientific and/or academic documents until October 2022 are analyzed; nine documents were not cited in this period, making a total of 811 citations. The H index was used to verify the productivity and impact of publish based on the largest number of articles included that had at least the same citations. Of the documents considered for the H index, eight were cited at times. In Appendix A, Table A1, citations of all scientific and/or academic docum October 2022 are analyzed; nine documents were not cited in this period, mak of 811 citations. Figure 3 presents the bibliometric study to investigate and identify indica dynamics and evolution of scientific information. The study of bibliometric res the scientific software VOSviewer, aims to identify the main research keywords that are part of the research area of Industry 4.0, sensors, and product design can see more clearly the most network nodes. The node size represents the occ the keyword, i.e., the number of times the keyword occurs. The link between indicates the co-occurrence between the keywords, i.e., keywords that occur ously or occur together, and its thickness reveals the occurrence of co-occur tween the keywords, i.e., the number of times the keywords occur together o The larger the node, the greater the occurrence of the keyword, and the thick between the nodes, the greater the occurrence of co-occurrences between the Each color represents a thematic cluster, where the nodes and links in that clu used to explain the topic coverage (nodes) of the theme (cluster) and the rel (links) between the topics (nodes) that manifest under that theme (cluster). The research was based on the analyzed articles about Industry 4.0, se product design. The associated keywords are presented in Figure 4, making cle work of keywords that appear together/linked in each scientific article, thus a to know the topics studied by the researchers and to identify future research tr The biggest nodes in this mapping are Lifecycle, 3D printers, and the Things. The results of the keyword development map from the Vosviewer a into three clusters. Cluster 1 is red with 17 keyword items, cluster 2 is green w word items, and cluster 3 is blue with 6 keyword items, which can be seen i below. Cluster 1 is the largest cluster and refers to Lifecycle. These articles ma on data mining, data analytics, advanced technology, blockchain, product lifec manufacturing process. Cluster 2 refers to 3D printers and focuses on issues su nology transfer, change management, additive manufacturing, 3D printed mid Figure 3 presents the bibliometric study to investigate and identify indicators of the dynamics and evolution of scientific information. 
Figure 3 presents the bibliometric study used to investigate and identify indicators of the dynamics and evolution of scientific information. The study of the bibliometric results, using the scientific software VOSviewer, aims to identify the main research keywords in studies that are part of the research area of Industry 4.0, sensors, and product design. Here, we can see the network nodes more clearly. The node size represents the occurrence of the keyword, i.e., the number of times the keyword occurs. A link between nodes indicates co-occurrence between keywords, i.e., keywords that occur simultaneously or together, and its thickness reveals the frequency of co-occurrence between the keywords, i.e., the number of times the keywords occur together. The larger the node, the greater the occurrence of the keyword, and the thicker the link between the nodes, the greater the number of co-occurrences between the keywords. Each color represents a thematic cluster, where the nodes and links in that cluster can be used to explain the topic coverage (nodes) of the theme (cluster) and the relationships (links) between the topics (nodes) that manifest under that theme (cluster).

The research was based on the analyzed articles about Industry 4.0, sensors, and product design. The associated keywords are presented in Figure 4, making clear the network of keywords that appear together/linked in each scientific article, thus allowing us to know the topics studied by the researchers and to identify future research trends. The biggest nodes in this mapping are Lifecycle, 3D printers, and the Internet of Things. The results of the keyword development map from VOSviewer are divided into three clusters. Cluster 1 is red with 17 keyword items, cluster 2 is green with 15 keyword items, and cluster 3 is blue with 6 keyword items, as can be seen in Figure 4. Cluster 1 is the largest cluster and refers to Lifecycle. These articles mainly focus on data mining, data analytics, advanced technology, blockchain, product lifecycles, and the manufacturing process. Cluster 2 refers to 3D printers and focuses on issues such as technology transfer, change management, additive manufacturing, 3D-printed MIDs, and production technology. Cluster 3, Internet of Things, involves cloud manufacturing, cyber-physical systems, assembly, and computer-aided design. The three clusters are interconnected through the Industry 4.0 and Product Design themes. In Figure 5, a profusion of bibliographic couplings with a cited-reference unit of analysis is presented.
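A minimal sketch of the keyword co-occurrence counting that underlies such maps, assuming the networkx package and invented per-document keyword lists.

```python
import itertools
import networkx as nx

# Toy per-document keyword lists (placeholders for the Scopus records).
docs = [
    ["industry 4.0", "product design", "lifecycle", "data mining"],
    ["industry 4.0", "3d printers", "additive manufacturing"],
    ["industry 4.0", "internet of things", "cyber-physical systems",
     "product design"],
]

# Nodes are keywords; edge weights count how often two keywords appear in
# the same document (what the VOSviewer links visualize).
G = nx.Graph()
for keywords in docs:
    for u, v in itertools.combinations(sorted(set(keywords)), 2):
        w = G.get_edge_data(u, v, {"weight": 0})["weight"]
        G.add_edge(u, v, weight=w + 1)

# Node "size" corresponds to the keyword's occurrence count.
occurrence = {k: sum(k in d for d in docs) for k in G.nodes}
print(sorted(occurrence.items(), key=lambda kv: -kv[1])[:3])
```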
Theoretical Perspectives

In today's competitive business environment, companies face challenges such as customers' demand for individualized products and short product lifecycles. In addition, there is an increasing need to integrate software into hardware products to deliver higher customer value while simultaneously increasing operational efficiencies across the organization [10]. As a result, Industry 4.0 has become a prominent development in recent years to address the need to interconnect machines, products, and people [11]. It raises collaboration productivity by facilitating linked systems, devices, and human resources to create quality, customized, high-value products. Sensors, as critical components of Industry 4.0, have supported these goals by enabling data collection, analysis, and processing, thus supporting automation and real-time monitoring of systems and processes [12]. However, developing products in the Industry 4.0 context must adhere to new design principles and guidelines to create intelligent products. Therefore, this section synthesizes data to demonstrate how Industry 4.0 and sensors can leverage product design to ensure that systems, processes, and products are developed to satisfy consumer needs and expectations (Table 4).

Table 4. Articles and scientific documents from the SCOPUS database. Each entry lists the title [reference] (journal), followed by the object of study and main conclusions.

- Presents the technology, protocols, and new innovations in industrial internet of things (IIoT) [13] (EAI/Springer Innovations in Communication and Computing): Smart devices are changing people's daily life around the world, a trend that has already extended to the industry sector. In the upcoming Industry 4.0, connected smart devices all around the world, linked via the Internet, provide secure, real-time, and reliable services for sensing, communicating, and computing, turning smart factories into reality.
- Clarifying Data Analytics Concepts for Industrial Engineering [14] (no source information available): The paper provides an overview of the data analysis techniques that could be used to extract knowledge from data along the manufacturing process.
- The digital twin in Industry 4.0: A wide-angle perspective [15] (Quality and Reliability Engineering International): This paper is about surrogate models, also called digital twins, that provide an important complementary capacity to physical assets. Digital twins capture the past, present, and predicted behavior of physical assets.
- The purpose of this study is to investigate and explore the potential of predictive maintenance and its relation to Industry 4.0, and product/process re-engineering through product lifecycle management (PLM), hence leading to Predictive Maintenance 4.0.
- Evaluation of different additive manufacturing technologies for MIDs in the context of smart sensor systems for retrofit applications [19] (14th International Congress: Molded Interconnect Devices, MID 2021 - Proceedings): In the context of this paper, three additive technologies are evaluated with respect to their applicability against the background of different retrofitting applications. A focus lies on the creation of 3D-shaped circuit carriers.
- An Industry 4.0 framework for tooling production using metal additive manufacturing-based first-time-right smart manufacturing system [20] (Procedia CIRP): This paper presents a concept for an integrated process chain for tooling production based on metal additive manufacturing.
- A Sensor Data Fusion-Based Locating Method for Reverse Engineering Scanning Systems: The present paper faces the locating problem of a handling device for reverse engineering scanning systems. It proposes a locating method using sensor data fusion based on a Kalman filter, implemented in a Matlab environment using low-cost equipment.
- Predictive Maintenance in Industry 4.0 [22] (ACM International Conference Proceeding Series): This paper looks at how to support predictive maintenance in the context of Industry 4.0.
- Unsupervised learning for product use activity recognition: An exploratory study of a "chatty device" [23] (Sensors): This paper proposes a model that enables new forms of agile engineering product development via "chatty" products. Products relay their "experiences" from the consumer world back to designers and product engineers through the mediation provided by embedded sensors, IoT, and data-driven design tools.
- Design of Injection Molding of Side Mirror Cover [24] (Sensors and Materials): The purpose of this paper is to develop a design for the injection molding of the product, applying the concept of Industry 4.0, which aims at intelligent processes.
- Utilizing cyber physical system to achieve intelligent product design: A case study of transformer [25] (Advances in Transdisciplinary Engineering): This study utilizes the framework of CPS to achieve intelligent product design.
- Towards Smart Assembly Based Design [26] (Lecture Notes in Mechanical Engineering): This paper proposes a new framework of data-driven smart assembly design to keep pace with the industrial and Information Technology (IT) revolution.
- The implementation of Industry 4.0 in manufacturing: from lean manufacturing to product design [11] (International Journal of Advanced Manufacturing Technology): With interconnection through Industry 4.0, upgraded legacy machinery can provide more in-depth and detailed process information which, as well as enabling process improvements, can inform the product design to achieve higher production efficiency.
- Electrospindle 4.0: Towards Zero Defect Manufacturing of Spindles [27] (CEUR Workshop Proceedings): In this paper, the authors discuss the goals of the Electrospindle 4.0 project, which aims at applying Zero Defect Manufacturing principles to the production of spindles.
- This work deals with the preliminary design of a double ridge waveguide device to perform indirect measurements of the complex permittivity of a traditional food product from Sardinia (Italy), i.e., Carasau bread, in the case of a small bakery industry.
- Barriers for industrial sensor integration design - an exploratory interview study [29] (Journal of Mechanical Design, Transactions of the ASME): The aim of this paper is to explore potential challenges within different contexts and suggest possible directions for research within the field of sensor integration design.
- Modeling Fused Filament Fabrication using Artificial Neural Networks [30] (Production Engineering): This study uses a trained artificial neural network (ANN) model as a digital shadow to predict the force within the nozzle of an FFF printer, using filament speed and nozzle temperatures as input data.
- Lean thinking in the digital era [31] (IFIP Advances in Information and Communication Technology): This paper describes the current state of the art in order to understand how lean thinking should be implemented in the context of the smart factory.
- A vest for treating jaundice in low-resource settings: This paper aims to address the issues and causes of insufficient NJ phototherapy on a global scale, presenting the design, testing, and development of a first prototype of a vest with embedded fiber optics and sensors for autonomous phototherapy treatment of newborn jaundice in LRSs.
- Emotion recognition for semi-autonomous vehicles framework [33] (International Journal on Interactive Design and Manufacturing): This article proposes a novel approach for emotion recognition that depends not only on images of the face, as in the previous literature, but also on physiological data.
- An Augmented Reality inspection tool to support workers in Industry 4.0 environments [35] (Computers in Industry): In this paper, an innovative AR tool is proposed to assist workers at the workplace during inspection activities of industrial products.
- 3D printed cellulose based product applications [36] (Materials Chemistry Frontiers): This review highlights the many promising and diverse functions and applications of sustainable 3D-printed cellulose-based products.
- The objective of this paper is to provide an overview of Industry 4.0 and smart manufacturing programs, analyze the application potential of CPS starting from product design through production and logistics up to maintenance and exploitation (e.g., recycling), and identify current and future research issues.

The industrial sector is crucial to every country's economic growth, since it is a key driver of economic growth and job creation. It involves manufacturing activities that transform raw materials into products, thus providing added value. With increasing competition among manufacturing countries, developing and adopting advanced technologies to increase efficiencies and reduce costs has become a critical strategy [11]. As a result, the world has experienced an industrial revolution termed "Industry 4.0", characterized by the broad application of technologies that significantly change established practices. For example, Industry 4.0 manufacturing plants have increasingly adopted automation and robots to increase operational efficiencies and reduce costs [2].
Data are the primary driver of this industrial revolution, since it involves using advanced Information and Communication Technology (ICT) to connect multiple manufacturing machines, factories, units, raw material suppliers, customers, logistics enterprises, and energy suppliers [13]. The use and integration of ICT across all levels of the manufacturing process build a smart manufacturing network that benefits from automated, autonomic, and optimized manufacturing processes.

Industry 4.0, or the fourth industrial revolution, refers to the next phase of digitizing the manufacturing sector using technologies such as the Internet of Things (IoT), cyber-physical systems, and the industrial Internet [38], and involves a combination of innovations, including software, sensors, processors, and communication technologies under the IoT and cyber-physical systems. These innovations are interconnected to facilitate information feeding into Industry 4.0, eventually adding value to the manufacturing processes [15]. The ultimate goal of Industry 4.0 is to create an open, smart manufacturing platform characterized by industrial-networked information applications [16]. Examples of technologies in Industry 4.0 include horizontal and vertical system integration, cybersecurity, the Internet of Things, the cloud, simulation, augmented reality, additive manufacturing (3D printing), big data analytics, and robots [2]. This networking is expected to allow companies easy and affordable access to modeling and analytical technologies that can be customized to meet each manufacturing company's needs. Understanding Industry 4.0 technologies can help determine how product design can be leveraged across product development processes to ensure organizational productivity, performance, and customer satisfaction.

Horizontal and vertical system integration in Industry 4.0 reflects the evolution of cross-company, universal data-integration networks through automated value chains. Horizontal system integration involves Industry 4.0's connected networks of cyber-physical and enterprise systems [15]. It facilitates increased flexibility, automation, and operational efficiency throughout production. For example, machines and production units are interconnected across the production network, allowing them to communicate and autonomously respond to dynamic production requirements. In contrast, vertical system integration involves connecting all business units and processes within the organization [16]. For instance, this aspect ensures a seamless data flow across all departments, from R&D, quality assurance, product management, and IT to sales and marketing. Therefore, horizontal and vertical system integration is the backbone of Industry 4.0, as it involves the interconnection of all units, processes, expertise, and systems within or across companies.

The IoT refers to physical objects equipped with software, sensors, processing ability, and other technologies capable of connecting and exchanging data with other devices and systems over the Internet. In this case, IoT enhances a manufacturing company's computing power by allowing devices to communicate and interact, thus decentralizing analytics and decision making and facilitating real-time responses [15]. Cybersecurity refers to protecting interconnected systems, devices, networks, servers, and data from malicious attacks.
While Industry 4.0 provides opportunities for companies to improve operational efficiencies, productivity, performance, and competitiveness, the interconnectivity within production networks and across value chains increases vulnerability to cybersecurity threats [17]. Therefore, cybersecurity technologies are critical success components of Industry 4.0, as they guarantee the safety and security of systems and processes. Cloud technologies provide computing services such as servers, storage, databases, networking, software, analytics, and intelligence over the Internet, known as "the cloud", to support data-driven production processes [17]. These cloud computing technologies have increasingly become critical in Industry 4.0, as they allow data sharing across sites and company confines.

Big data and analytics technologies allow the collection and comprehensive analysis of data from multiple sources and customers. As a result, these innovations support real-time decision making, optimize production quality, and help reduce energy consumption and equipment maintenance [39]. On the other hand, simulations are used to leverage real-time data in a virtual framework that mirrors the physical world, including machines, products, and humans. These technologies allow developers to test and optimize outcomes before introducing them to the actual application [17]. For example, simulations can adjust machine settings until the operator achieves the required performance levels. Similarly, additive manufacturing uses computer-aided design (CAD) or 3D object scanners to create objects with precise geometric shapes. It is widely used in Industry 4.0 to produce small batches of customized products that provide construction benefits such as complex, lightweight designs [19]. Other technologies, such as augmented-reality-based systems, send repair instructions or help select parts in a warehouse using mobile devices. At the same time, robots are expected to become increasingly autonomous, flexible, and cooperative, so that they can work safely alongside humans [20]. Although each of these Industry 4.0 technologies has distinct features and functionalities, they are interlinked to form a complex interconnected network that enhances collaboration and operational efficiency.

Sensors under Industry 4.0

Various industries use different types of sensors for varying applications, both routine and commercial. Sensors link multiple devices and systems and allow machines to communicate and track equipment and systems at each facility. Javaid et al. [3] (p. 2) define a sensor as a "device that detects the input stimulus, which may be any quantity, property, or condition from the physical environment, and responds to a measurable digital signal." Examples of input stimuli include environmental conditions such as temperature, pressure, moisture, heat, force, or light, while the response output has a frequency, resistance, capacitance, current, and voltage. As an independent system, sensors can perform onboard processing by assessing ambient conditions and changing operations accordingly [21]. Sensor technology in Industry 4.0 can collect and analyze large quantities of data rapidly and accurately to stimulate appropriate actions. This increased capacity eliminates issues such as human error, reduces the need to monitor systems, and enhances quality and production. Therefore, sensors are critical innovations in Industry 4.0. Industry 4.0 is characterized by integrated and smart networks that facilitate intelligent production.
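As a toy illustration of a sensor node feeding such a network, the sketch below simulates temperature readings, applies a simple condition-monitoring threshold, and serializes each reading as the kind of JSON message a node might publish to a plant broker; the topic name, threshold, and payload schema are invented for illustration.

```python
import json
import random
import time

THRESHOLD_C = 85.0  # illustrative maintenance limit, not a real spec

def read_temperature() -> float:
    """Stand-in for a real sensor driver: returns a simulated reading."""
    return random.uniform(70.0, 95.0)

for _ in range(5):
    value = read_temperature()
    message = {
        "topic": "factory/line1/press-07/temperature",  # hypothetical topic
        "temperature_c": round(value, 1),
        "alert": value > THRESHOLD_C,  # simple asset-monitoring rule
        "ts": time.time(),
    }
    # A real node would publish this to an MQTT or OPC UA broker;
    # here it is only printed.
    print(json.dumps(message))
```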
Sensors have multiple features and capabilities that make them essential for the success of Industry 4.0. For instance, they can capture and analyze data for appropriate decision making and promote automation across production lines by enabling self-optimization [21]. Beyond overall process automation, sensors support additional functions such as predictive maintenance and asset monitoring or condition monitoring [22]. The current manufacturing sector is rapidly transitioning to automated technologies to improve production processes and product quality [23]. In this case, leveraging software intelligence will require advanced sensor technologies to optimize manufacturing activities, monitor processes, and increase efficiencies. According to Lo et al. [24], sensors' capabilities are widely applied in pharmaceutical and chemical plants and in industrial robots, where sensor technologies are used for multiple applications, including process control through flow calculation and temperature sensing. In addition, smart sensors are used to evaluate dynamic circumstances and environmental changes throughout the manufacturing processes.

Industry 4.0 is primarily data-driven, with the technologies used in data collection, analysis, and interpretation playing the most critical roles. Sensors contribute to these practical applications by performing multiple vital functions, such as providing raw data that reveal broader inefficiencies affecting machine performance [24]. For example, smart sensing technologies collect data on temperature ranges, flow, pressure response, and fluid measurements. In addition, intelligent sensors are used in real-time data gathering, remote surveillance, and preventive maintenance, thus enhancing equipment performance. Javaid et al. [3] indicate that sensor technologies are used in manufacturing to perform multiple functions, including providing product details, ensuring precise positioning by giving feedback on motor movement, and issuing alerts on equipment conditions. In addition, sensors can be used to determine extrinsic and intrinsic characteristics of objects, including location, distance, proximity, temperature, and color [23]. Therefore, sensor technologies integrated into automation systems in Industry 4.0 provide a critical way of collecting data on procedures and production activities.

Product Design

The product design process involves creating a business innovation based on a market opportunity, a defined problem related to people's needs, technological possibilities, and business feasibility. It can be described as the process in which the designer establishes design decisions using various product data and transforms functional requirements into a specific implementation structure. Chen et al. [25] describe product design as a complex iterative process that begins with creating a product's principle scheme design, followed by the overall design and the final detailed scheme design. As a result of this complexity, the product design process is often broken down into various tasks and processes and characterized by a clear division of labor, where each activity is assigned to specific skilled staff and departments [26]. Despite the allocation of functions throughout the design process, the involved experts work collaboratively to develop innovations that meet specific customer needs. The product design process can include multiple phases. This study focuses on three main stages, each described in turn below: requirement analysis, conceptual design, and detailed design.
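Before the three stages are described, here is a minimal sketch, assuming scikit-learn and invented free-text customer requirements, of the kind of data-driven screening used in the first stage (requirement analysis): grouping raw requirement statements so a design team can review them by theme. This is illustrative, not a method taken from the reviewed studies.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical free-text customer requirements gathered during
# requirement analysis (placeholders).
requirements = [
    "longer battery life", "battery should last two days",
    "lighter housing", "reduce device weight",
    "app must sync with the cloud", "cloud backup of settings",
]

# TF-IDF turns each statement into a vector; k-means groups similar ones.
X = TfidfVectorizer().fit_transform(requirements)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for req, lab in zip(requirements, labels):
    print(lab, req)
```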
Requirement Analysis

During this phase, the designers and their teams analyze key customer preferences and translate them into useful product features and attributes. Appropriate methods are used to obtain customer requirement information, which is then screened for relevance and impact on product design [27]. For every manufacturing firm operating in the current competitive business environment, the primary motivation is to meet customer requirements by designing and producing products that directly address their problems. This necessity makes this initial stage of the product design process critical, since it ensures that all product features align with customer needs and expectations [38]. As a result, Industry 4.0 technologies such as big data and the Internet of Things are used to collect and analyze data on customer preferences and expectations, and the insights are integrated into product design decision making.

Conceptual Design

This stage involves a series of iterative and complex engineering processes where relevant knowledge is combined to establish a functional structure and search for a proper working principle. This stage pursues the correct combination mechanism, defines the basic solution course, and generates the design scheme [40]. Design concept generation is a critical determinant of the success of new product development. For instance, companies must develop products that meet their target customers' diverse, individualized needs while maintaining low production costs and short product development cycles [26]. The conceptual design phase can solve these issues by ensuring the efficient use of product data to generate a concept that considers consumer preferences and needs, resource availability, and profitability. Traditionally, designers have relied on their knowledge and experience in conceptual product design. However, Industry 4.0 technologies have shifted this by increasing designers' access to data and innovations that lead to improved product design quality.

Detailed Design

This phase models the product development process using data to create product solutions based on the requirements and the structure built in the first and second stages. For instance, the designer determines product features such as appearance, configurations, and parameter data [40]. In this case, the designers use the predefined product concept to complete a product's essential aspects, including desired performance levels and design information. The development of sensor and data storage technologies in Industry 4.0 makes available large volumes and varieties of product data that can aid the designing process [29]. As a result, data mining and database technologies are critical in developing and applying data-driven modeling methods in product design [30]. The data integrated into the design modeling process provide the rationale for the requirements considered in creating the detailed design. For instance, the data can be used to demonstrate why the adopted design is the most appropriate solution to the predefined problem, serving as the foundation for the product design process.

Opportunities in Leveraging Product Design in Industry 4.0 and Sensors

Industrial revolutions are associated with multiple benefits, including increased production performance through new technologies, reduced costs, and an increased ability to develop new, affordable products that meet current market needs. As a result, creating a compatible product design and development process (PDDP) becomes necessary for companies to exploit these benefits [31].
Therefore, the success of Industry 4.0 and sensor technology depends on manufacturing companies' capacity to develop new products and business models adjusted to fit rapidly changing market conditions [29]. Smart factories can support the production of customized products, while smart products have a higher potential to satisfy customer needs through information exchange and adaptiveness. However, research shows these benefits can only be achieved if the products are designed appropriately. Therefore, as the manufacturing sector transitions towards smart factories and smart products, the interdependence between Industry 4.0 technologies and innovations, including sensors, and product design engineering becomes increasingly deep [32]. This section explores design features and opportunities that can be leveraged in the context of Industry 4.0 and sensors to guarantee continuous development and improvement.

Design for Empowered Users/Customers

Customer empowerment in product design requires customer involvement throughout the process. It can occur during the definition of the final product configuration, where the designer creates building blocks instead of finalized products, or during the production process, where the designer recognizes customers' capability to produce the final products [41]. Designing building blocks requires a thorough understanding of the problem for which the solution is to be established. If customers are involved at this stage of product design, they must understand the combination of the various design elements: shape, texture, color, negative/white space, and value [34,38]. However, their involvement improves the probability of high adoption rates, since a final product developed with active customer participation is more likely to be easy for target customers to use. Industry 4.0 technologies applicable in this design phase include the cloud and augmented reality (AR). Marino et al. [35] explain that AR is a critical enabling technology in Industry 4.0 that integrates virtual information into a user's real-world perception by combining vision- and sensor-based tools. In addition, the authors indicate that AR can be used to assess actual products to identify design discrepancies between the planned features and the products developed. In the context of customers as configurators, AR can be used to enable customers to assess the design elements and ensure the right combination before settling on a solution. The second customer empowerment phase, during the production process, explores the possibility that customers produce the final products. In this case, the customers are encouraged to own the manufacturing process. Given the increased competition and customer awareness in the current business environment, companies increasingly involve their target customers in production [38]. Industry 4.0, as a data-driven industrial revolution, has seen an increase in customers' access to critical information that influences their perception of a product or the company itself. Therefore, leveraging this design feature can help create competitive products and improve a company's competitive position in the market [41]. The empowerment is achieved through multiple techniques, including customer education, providing user-friendly processes and tools, and ensuring a flexible, viable production capacity. The Industry 4.0 technologies considered in this design phase include the cloud and additive manufacturing (AM).
The cloud facilitates data storage and sharing, thus contributing to empowerment through access to relevant knowledge and by allowing customer contributions [17]. AM increases the flexibility and efficiency of manufacturing operations by using computer-aided design (CAD) software or 3D object scanners to deposit material layer upon layer in precise geometric shapes [20,36]. The CAD software can be used to capture customer information and digitally define objects, thus illustrating how the final product will look and creating an opportunity for feedback and further improvement.

Design for Cyber Security

Industry 4.0 is based on cyber-physical systems, where data, vertical and horizontal data integration and exchange, and data analytics play a central role. While these data innovations create opportunities to enhance efficiency, performance, productivity, and product quality, they also bring cyber-environment challenges, such as hackers and viruses, into the physical world [37]. In this case, the design engineer must guarantee data security by strengthening safety, security, privacy, and knowledge protection controls and measures [41]. Design for cyber security requires that product design processes prioritize security as the first principle in design and across the value chain. Therefore, the design team must include IT experts who understand the potential vulnerabilities associated with adding software and IoT into design solutions, as well as possible countermeasures [38]. Systems designed for cyber security have higher capacities to meet individual needs since they can perform functions linked to sensing and responding, autonomy, and configuration. The interconnection in Industry 4.0 facilitates the optimization of the increasing data density and the fusion of information and operational technologies. However, it increases vulnerability to cyber threats, making cyber security a core issue in Industry 4.0 [41]. In addition, sensors play a critical role in the initial stages of Industry 4.0, where they are attached to industrial assets to establish digital records by collecting data in a way that imitates human sensing [33]. Consequently, research shows that sensor-based platforms and applications are highly vulnerable to cyberattacks [19]. Considering these circumstances, leveraging the product design process can help mitigate the cyber security issues in Industry 4.0 and sensor technologies by ensuring that the proposed and developed designs prioritize safety, security, privacy, and knowledge protection.

Design for Data Analytics

Data modeling and data analytics are crucial in the product design process. According to Cattaneo et al. [14], data analytics involves the data mining and statistical models needed at the outset to clean data and validate rules. The IoT-based manufacturing sector in Industry 4.0 generates a tremendous amount of data that requires analysis through multiple methods, including artificial intelligence, machine learning, and data mining [42]. Data collected about products, markets, or customer needs can provide design knowledge, creating opportunities to improve product competitiveness and production efficiency. In addition, data-driven product design involves using data modeling and analysis to uncover hidden patterns and relevant information to enhance product and system schemes [14]. Product data are generated throughout the product lifecycle and through the interactions between products, humans, and the environment.
Industry 4.0 innovations such as cyber-physical systems, digital Internet resources, and scientific experiments are critical sources of product data. Other technologies that can be leveraged for data access include simulation and horizontal and vertical systems integration [18]. Therefore, data analytics leads to better product design decisions and improves customer satisfaction and organizational competitiveness, since designers better understand the target users and can thus develop individualized tools and resources. This design feature can be integrated into Industry 4.0 and sensor technologies to ensure that smart factories align their goals and practices toward addressing customers' specific needs.

Design for Changeability

High dynamics and increased individualization characterize the current manufacturing sector. As a result, it has become critical for companies to be able to adjust their production systems quickly to future needs and conditions. Design for changeability is therefore concerned with designing systems and products with built-in robustness against slight variations in use and potential future changes [27]. For example, intelligent factories and products require flexible updating and upgrading, the ability to accommodate new technologies, and adaptability to diverse user experiences. In this case, the designers must understand the factors that may drive future changes in the product and then determine changeable solution architectures [41]. For example, dynamic marketplaces and technical evolution can create the need for changes. Design for changeability is described from four primary perspectives: robustness, flexibility, agility, and adaptability. Robustness refers to a system's insensitivity to changes, while flexibility focuses on the capacity to accommodate changes easily. Agility refers to the ability to change rapidly, while adaptability refers to the ability to adjust to changing circumstances [43]. As the manufacturing sector transitions towards smart factories and smart products under Industry 4.0, design for changeability has become a critical feature, as it facilitates the creation of systems and products that can quickly adjust to change. Therefore, design for changeability principles should be leveraged when designing Industry 4.0 and sensor technology architectures.

Challenges in Leveraging Product Design

While multiple product design features can be leveraged in Industry 4.0 and sensor technologies, various challenges hinder their optimization. For instance, although there has been abundant research on Industry 4.0 and sensors, there is limited research on how they can leverage product design. It therefore becomes challenging to implement product design features and principles in smart factories and products due to a lack of adequate information and critical insights on how that can be achieved. Other challenges identified in the research include infrastructure constraints, a lack of technological competencies, and legal issues relating to privacy and security.

Infrastructure Constraints

Advanced technologies used in Industry 4.0 and sensors are expensive, limiting some companies, especially SMEs, from leveraging product design features and the associated opportunities.
Maximizing the benefits linked to the correlation between product design and Industry 4.0 innovations, including sensors, requires access to advanced ICT infrastructure to facilitate the establishment of cyber-physical systems and IoT-based applications and systems [29]. Despite rapid technological advancement, manufacturing plants still struggle with immature IT that cannot support the integration of Industry 4.0 technologies or facilitate a smooth transition towards intelligent factories and products. Therefore, infrastructural factors remain a significant barrier to leveraging product design in Industry 4.0 and sensor technologies.

Technological Competencies

A lack of skills and knowledge regarding Industry 4.0 innovations and how to effectively integrate them into organizational systems and processes is a significant challenge. The authors in [20] indicate that most traditional manufacturing companies struggle with a skills gap due to the high proportion of older workers in their workforce. Older employees often have limited technology skills and knowledge, which limits their ability to embrace and adopt advanced technologies. As a result, these companies must develop strategies and programs to increase employees' skills and raise awareness of topics such as deep simulation and coding [38]. Such training and awareness programs can lead to slow adoption rates, since employees may need time to acquire adequate skills and knowledge to adopt the advanced technologies. Moreover, the process can also be influenced by employees' readiness to adopt the technologies [27]. For example, if the older workforce holds negative attitudes towards the new technologies, the adoption rate or participation in the training programs may be lower. Alternatively, these companies can hire new employees with the necessary technology skills and knowledge. While these practices can increase an organization's competitiveness, they may be expensive, leading to organizational reluctance to adopt advanced technologies.

Legal Issues

Despite extensive research and awareness of cyber security and its impact on Industry 4.0, security and privacy concerns remain significant challenges. As a result, most governments have implemented regulations on data protection and IT security, liability, and intellectual property. Companies must meet compliance standards and ensure their practices are within the defined protocols to avoid legal issues, such as lawsuits. In addition, companies are expected to adhere to organizational or industry standards, codes, principles of good governance, and ethical and social norms [28]. However, compliance can be challenging because laws vary across territories. For example, many companies using emerging technologies operate in multiple countries worldwide. Differences in data protection and IT security laws and policies can significantly undermine these companies' compliance efforts.

Conclusions

This research identifies multiple design features that can be leveraged in Industry 4.0 and sensor technology to facilitate a smooth transition and development. Firstly, design for empowered users/customers advocates the active engagement of target customers throughout the design and production processes. This opportunity is critical in Industry 4.0 since it ensures that product development addresses customer needs and expectations directly. Secondly, design for cyber security can be leveraged in Industry 4.0 and sensors to reduce vulnerability to cyber-security threats.
Cyber security has become a significant issue due to the increased interconnection in Industry 4.0 networks; leveraging this design feature can help mitigate the problem. Design for data analytics reinforces the significance of adopting data-driven designs and processes to ensure that organizational practices and products meet market needs. Finally, design for changeability ensures that companies adopt production systems and processes that can easily and quickly adjust to future changes and variations. However, various challenges often hinder leveraging these opportunities and features, including infrastructural constraints, inadequate technological competencies, and legal issues relating to security and privacy concerns. As companies continue to integrate Industry 4.0 technologies, they must develop practical solutions addressing these issues to ensure they benefit from the opportunities presented by emerging, advanced innovations. This article has some limitations, mainly in the selection and use of databases and in the keywords chosen. Although Scopus is the largest database, there are publications indexed in other databases that might be extremely important. Furthermore, the theme of this article focuses on how Industry 4.0 and sensors can leverage product design, so other variables and aspects are not explained in detail. As for the keywords used in the research, we consider that the term Industry 4.0 can be reductive in the search. For future research, we expect to use other databases such as EBSCO and ISI Web of Science to map the development of search trends, and also to use other keywords related to the term Industry 4.0.

Conflicts of Interest: The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
2023-01-24T16:50:06.381Z
2023-01-19T00:00:00.000
{ "year": 2023, "sha1": "8f19145f342ac0a324eded025954d1d744808239", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/23/3/1165/pdf?version=1674132510", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "df36425f06815cfe6aef25032ad310d3ecb9037f", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
244729721
pes2o/s2orc
v3-fos-license
LEGS: Learning Efficient Grasp Sets for Exploratory Grasping

While deep learning has enabled significant progress in designing general purpose robot grasping systems, there remain objects which still pose challenges for these systems. Recent work on Exploratory Grasping has formalized the problem of systematically exploring grasps on these adversarial objects and explored a multi-armed bandit model for identifying high-quality grasps on each object stable pose. However, these systems are still limited to exploring a small number of grasps on each object. We present Learned Efficient Grasp Sets (LEGS), an algorithm that efficiently explores thousands of possible grasps by maintaining small active sets of promising grasps and determining when it can stop exploring the object with high confidence. Experiments suggest that LEGS can identify a high-quality grasp more efficiently than prior algorithms which do not use active sets. In simulation experiments, we measure the gap between the success probability of the best grasp identified by LEGS, baselines, and the most-robust grasp (verified ground truth). After 3000 exploration steps, LEGS outperforms baseline algorithms on 10/14 and 25/39 objects on the Dex-Net Adversarial and EGAD! datasets respectively. We then evaluate LEGS in physical experiments; trials on 3 challenging objects suggest that LEGS converges to high-performing grasps significantly faster than baselines. See https://sites.google.com/view/legs-exp-grasping for supplemental material and videos.

I. INTRODUCTION

Recent advances in deep learning have enabled the development of universal grasping systems that can robustly grasp a wide variety of objects [23-25, 28, 29, 34]. However, these systems can still struggle to grasp objects with adversarial [27,35] geometries or which are significantly out of distribution from the objects seen during training. This problem is common in many industrial settings, in which newly manufactured machine parts for custom applications may look very different from the objects in the datasets typically used for training universal grasping systems. Recently, bandit-style algorithms have been used to augment general-purpose grasping policies by rapidly adapting them to specific objects [11,19,21,22]. Danielczuk et al. [8] introduced Exploratory Grasping, where a robot learns to grasp novel objects through online exploration of grasps and stable poses. Their algorithm, Bandits for Online Rapid Grasp Exploration Strategy (BORGES), learns robust pose-specific grasping policies. However, BORGES limits exploration to a fixed set of 100 grasps per stable pose, possibly overlooking other high-quality grasps. In this work, we extend Danielczuk et al. [8] to explore thousands of grasps per stable pose. Considering grasp sets of this scale increases the likelihood of converging to a robust grasp, but also makes efficient exploration challenging. To address this challenge, we propose Learned Efficient Grasp Sets (LEGS), which adaptively curates an active set of promising grasps rather than restricting exploration to a small fixed subset. The key insight is to use a combination of priors from a universal grasping system and online trials to maintain confidence bounds on grasp-success probabilities. LEGS uses these bounds to (1) update the grasps in its active set and (2) decide when to stop exploring.
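Both BORGES and LEGS build on Beta-Bernoulli Thompson sampling over a per-pose set of candidate grasps, so a minimal sketch of that core loop may be useful before the contributions are listed. This is an illustrative reading of the mechanism rather than code from the paper; the class name and the prior_strength knob (how many pseudo-trials a prior success estimate is worth) are assumptions.

```python
import random

class ThompsonGraspBandit:
    """Beta-Bernoulli Thompson sampling over a set of candidate grasps."""

    def __init__(self, prior_success, prior_strength=1.0):
        # prior_success[i]: estimated success probability of grasp i
        # (e.g., from a learned grasp-quality model such as GQ-CNN).
        self.alpha = [1.0 + prior_strength * p for p in prior_success]
        self.beta = [1.0 + prior_strength * (1.0 - p) for p in prior_success]

    def select(self):
        # Draw one sample from each grasp's Beta posterior, pick the argmax.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, i, success):
        # Bernoulli reward: 1 if the lifted object stayed in the gripper.
        if success:
            self.alpha[i] += 1.0
        else:
            self.beta[i] += 1.0
```

A grasp trial then amounts to i = bandit.select() followed by bandit.update(i, success).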
This paper makes the following contributions: (1) a novel adaptive multi-armed bandit algorithm that curates a small set of high-performing grasps by actively removing and resampling grasps based on performance bounds, and a novel termination condition that enables a robot to predict (with high confidence) when it reaches a desired level of performance; (2) a self-supervised physical grasping system in which a robot explores candidate grasps with minimal human intervention (roughly 1 in every 100 grasp attempts); (3) simulation and physical experiments suggesting that LEGS can identify higher quality grasps within a fixed time horizon than prior algorithms which do not learn an active set.

A. Universal Grasping Algorithms

Recent robotic grasping algorithms generalize to a wide range of objects [18]. Open-loop algorithms synthesize grasps and predict their quality based on the geometry of the object, and then plan and execute a motion to attempt a high-quality grasp without feedback [20, 23-25, 29]. Closed-loop grasp planners that use vision-based gripper servoing [28,34] and RL [15,16] have also been popular in prior work. LEGS is designed to leverage priors from these universal grasping algorithms to efficiently learn a robust grasp policy for a specific, difficult-to-grasp object [27,35]. We use priors from Dex-Net 4.0 [25], a general grasp planner that learns a grasp-quality estimator from a large dataset of 3D object models in simulation and then uses this estimator to sample and evaluate the quality of grasps in physical trials.

B. Multi-Armed Bandits

Prior work on multi-armed bandits [31] has studied settings where the number of actions is large compared to the number of timesteps allocated for exploration [2,6,13,14,33,36,37]. One popular algorithmic framework for this setting is called best arm identification, where the goal is to adaptively reject a set of arms from consideration when there is high confidence that they are suboptimal [1,5,17]. LEGS builds on these ideas by adaptively filtering actions from an active set while maintaining confidence bounds on the reward corresponding to each action. This mechanism makes it possible to efficiently perform best arm identification across multiple bandit problems, where each bandit problem represents a distinct stable pose of an object. LEGS can quickly converge to high-quality grasps on problems with thousands of grasps per stable pose.

C. Exploratory Grasping

Universal grasping algorithms often struggle with certain objects [27,35]. Danielczuk et al. [8] show that grasping algorithms such as Dex-Net [25] are difficult to fine-tune online on such objects, and propose Exploratory Grasping, a problem formulation where the objective is to perform rapid online adaptation to grasp specific, unknown objects. To achieve this, prior works sample a fixed set of grasps on specific object stable poses and apply multi-armed bandit algorithms to rapidly identify high-performing candidates [11,19,21,22]. Danielczuk et al. [8] extend these ideas with BORGES, which explores grasps across all object stable poses by using Thompson sampling and a learned Dex-Net prior [21]. However, BORGES can often overlook high-quality grasps since it restricts exploration to a small initial set of grasps. To address this issue, LEGS begins with a large set of grasp candidates and adaptively curates sets of promising grasps by adding and removing grasp candidates during exploration.
By doing this, LEGS is able to converge to better long-term performance than BORGES (which uses a small fixed set of grasps), while also learning to robustly grasp an object faster than baselines that seek to directly explore large sets of grasp candidates.

III. PROBLEM STATEMENT

Overview: Given a difficult-to-grasp polyhedral object of unknown geometry that rests on a planar surface and is viewed by an overhead depth camera, we seek to learn to successfully grasp the object in all of its stable poses.

Problem Setup: Given a polyhedral object o, let N be its number of stable poses. Each stable pose s ∈ {1, 2, ..., N} is associated with a landing probability λ_s, which indicates the probability of the object landing in pose s when released from sufficient height in a randomized orientation [12,26]. Following Danielczuk et al. [8], we model our problem as a finite-horizon Markov Decision Process M = (S, A, T, R, H). We let S be the set of equivalence classes of distinguishable stable poses of the object and A be the set of all possible grasps on the object. Thus, A = ∪_{s∈S} A_s, where A_s is the set of grasps available at stable pose s. Given a grasp action a in stable pose s, the transition function T : S × A × S → [0, 1] determines the probability distribution over next stable poses. The reward function R : S × A → {0, 1} is binary: a grasp is successful and R(s, a) = 1 if the grasped object does not fall from the gripper after it is lifted, and R(s, a) = 0 otherwise. Let p_{s,a} = E[R(s, a)] be the expected success probability of grasp a on stable pose s. We define a grasping policy as π : S × A → [0, 1], where π(a|s) denotes the probability of selecting grasp a in pose s. We denote the finite horizon of the MDP as H. The robot initially does not know any of the stable poses or the number of stable poses N. If a grasp is successful, the robot randomizes the orientation of the object in the gripper, drops the object so that the next stable pose s′ is determined by the landing probabilities {λ_s}_{s=1}^N, and records the observed stable pose s′. We represent the actions A_s at each stable pose s as candidate grasps sampled on the object. We use the same method as Mahler et al. [25] to sample antipodal grasps on each stable pose. We do not make any assumptions on the grasping modality, so in practice these grasps can be sampled from various different grasp planners, including parallel-jaw or suction grasp planners. We denote the number of possible grasps for pose s as K_s = |A_s| and the total number of grasps over all states as K = Σ_{s∈S} K_s. An important difference between our problem setting and prior work [8] is that we consider settings in which K is large (> 1000) and thus is of the same order of magnitude as the exploration horizon H. This significantly exacerbates exploration challenges, since there is not enough time to fully explore each grasp, motivating the key innovations in LEGS.

Assumptions: In this work, we assume access to the following: (1) a grasp sampler which accepts as input a depth map and outputs a set of candidate grasp configurations on the surface of the depth map with associated robustness values; (2) a robot/gripper that can either execute these grasps or detect that they are in collision; (3) sufficient information in the camera image to detect whether the object stable pose changes; (4) an evaluation function to detect whether a grasp is successful. We note that these assumptions are satisfied by the system we build to instantiate LEGS in practice.
In addition, we make the following assumptions about the object's interaction with the environment: (5) if a grasp is unsuccessful, the object either remains in the same stable pose or topples into another stable pose; and (6) there exists a grasp with non-zero success probability on each stable pose. These last two assumptions are consistent with [8].

Metrics: We define the optimality gap as

Δ_π = E_{s∈S}[p*_s − p_{s,π(s)}],

where p*_s = max_{a∈A_s} E[R(s, a)] and p_{s,π(s)} = E[R(s, π(s))]. In simulation, we can evaluate the ground-truth grasp-success probability for a given grasp with robust quasi-static grasp wrench space analysis [38]. We thus approximate p*_s by sampling a large number of grasps on each stable pose. Intuitively, the optimality gap Δ_π measures the expected difference, across all stable poses, between the optimal policy, which selects the best available grasp, and the policy π. In physical experiments, the optimality gap cannot be computed, so we report the grasp-success rate of the learned policy π. The objective is to find a policy that minimizes the optimality gap for a given object within H grasp attempts. Denoting a policy learned after H grasp attempts by π_H, the objective is to identify π*_H such that

π*_H = argmin_{π_H} Δ_{π_H}.

IV. LEARNED EFFICIENT GRASP SETS

We propose Learned Efficient Grasp Sets (LEGS), a multi-armed bandit algorithm that uses confidence bounds on grasp-success probability to maintain a small active set of candidate grasps. LEGS starts with an estimate of the prior success probabilities for all grasps in a large reservoir of possible grasps, and updates their grasp-success probabilities based on online grasp trials using Thompson sampling as in Danielczuk et al. [8]. However, unlike BORGES, LEGS uses the priors and online grasp trials to construct confidence bounds on the grasp-success probabilities for each grasp (Section IV-A). LEGS is summarized in Algorithm 1. Once LEGS visits a stable pose s, it checks whether it has visited s before (line 4). In Sec. VI, we describe how to recognize stable poses in the physical setup. If the stable pose s has never been visited (line 5), LEGS adds the stable pose to the set of visited stable poses Ŝ (line 6) and initializes an active set of candidate grasps, Ã_s, along with the parameters of a Beta distribution associated with each grasp in the active set (lines 7-8). We rank the grasps in the reservoir by their estimated grasp success probabilities under the Grasp Quality Convolutional Neural Network (GQ-CNN) from Dex-Net 4.0 [25] and select the k = 100 grasps with the highest values. In each iteration, LEGS executes the grasp with the highest sampled value from the posterior (lines 9-11), observes the outcome (line 12), and updates the posterior distribution [30] (lines 13-16). In conjunction, LEGS also constructs confidence bounds on the success probability of each grasp (Section IV-A). Every n iterations, it uses these confidence bounds to identify and remove the grasps with low robustness (Section IV-B) (line 18), and replaces them with newly sampled grasps, again ranked by their estimated grasp success probabilities under GQ-CNN (lines 19-20).

A. Constructing Confidence Bounds on Robustness

To determine which grasps to remove from the active set, LEGS constructs upper and lower confidence bounds on grasp robustness. We model the success probability of grasp i via X_i ∼ Beta(α_i, β_i), and empirically select a confidence threshold δ.
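The bound construction of Section IV-A and the removal rule of Section IV-B, formalized in the paragraphs that follow, can be sketched as below. The function names are illustrative, and keeping the incumbent best grasp i* out of the removal set is our reading of the construction.

```python
from scipy.stats import beta

def confidence_bounds(alpha, beta_, delta=0.05):
    # (1 - delta) lower/upper bounds via the Beta percent-point function.
    lower = beta.ppf(delta, alpha, beta_)
    upper = beta.ppf(1.0 - delta, alpha, beta_)
    return lower, upper

def grasps_to_remove(alphas, betas, attempted, gamma=0.2, delta=0.05):
    bounds = [confidence_bounds(a, b, delta) for a, b in zip(alphas, betas)]
    lowers = [lo for lo, _ in bounds]
    uppers = [up for _, up in bounds]
    best_lower = max(lowers)            # X*_l, best lower bound in active set
    i_star = lowers.index(best_lower)   # currently best known grasp
    # Locally suboptimal: upper bound below the best grasp's lower bound.
    locally_subopt = {i for i, up in enumerate(uppers) if up < best_lower}
    # Globally suboptimal: attempted grasps likely below threshold gamma.
    globally_subopt = {i for i in attempted if uppers[i] < gamma}
    return (locally_subopt | globally_subopt) - {i_star}
```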
Then the percent-point function PPF(X_i, δ), the inverse of the cumulative distribution function F_{X_i}(x), returns the value x such that F_{X_i}(x) = δ. The (1 − δ)-lower and -upper confidence bounds for X_i are X_{i,l} = PPF(X_i, δ) and X_{i,u} = PPF(X_i, 1 − δ), respectively. As a grasp is sampled more often, the interval [X_{i,l}, X_{i,u}] tightens to reflect increased certainty in the robustness of the grasp.

B. Posterior Dependent Grasp Removal

LEGS avoids over-exploring less robust grasps by identifying and removing grasps from the active set that are highly likely to be either (1) inferior to another grasp in the active set (locally suboptimal) or (2) below a desired global grasp success probability threshold (globally suboptimal). Let the highest lower confidence bound across all active grasps be X*_l = max_{i∈Ã_s} X_{i,l}. We define the set of locally suboptimal grasps as the set of grasps whose (1 − δ)-confidence upper bound is worse than the (1 − δ)-confidence lower bound for the best grasp in the active set:

B = {i ∈ Ã_s : X_{i,u} < X*_l}.

Thus, B represents the set of grasps that are likely to be inferior to the best known grasp in the active set. However, in the early stages of exploration, we may not yet have sampled a high-performing grasp and B may be empty. In these cases, we still wish to remove and resample grasps that, with high confidence, are clearly low performing. Thus, given a minimum performance threshold γ ∈ [0, 1], we define the set of globally suboptimal grasps in the active set (denoted B_γ): grasps which have been sampled, but are likely to have success probability less than γ. Denoting the set of attempted grasps in the active set as P, we define

B_γ = {i ∈ P : X_{i,u} < γ}.

Let the index of the currently known best grasp be i*. The full set of grasps removed by LEGS is constructed by taking the union of the above sets while retaining the best known grasp:

(B ∪ B_γ) \ {i*}.

This allows LEGS to remove grasps which are unlikely to outperform the best known grasp in the current active set.

C. Early Stopping

Rather than setting the exploration horizon H to a fixed value, we can set a performance threshold and let LEGS stop exploring once it has high confidence that it has achieved the desired threshold. This early stopping condition allows LEGS to efficiently allocate exploration time by only continuing to explore objects that it cannot yet robustly grasp. Given a user-specified minimum performance threshold ρ_min ∈ [0, 1], we want to detect when, with high likelihood, the true performance of LEGS is above this threshold. More formally, given a confidence parameter δ_stop ∈ [0, 1], we want to calculate a (1 − δ_stop)-confidence lower bound, denoted by p̂, on the true expected performance of the grasping policy π, i.e., we want to find p̂ such that

Pr[p̂ ≤ E_{s∈S}[p_{s,π(s)}]] ≥ 1 − δ_stop.

Then, the robot can stop exploring when p̂ ≥ ρ_min. We cannot directly compute E_{s∈S}[p_{s,π(s)}] since we do not know the true stable pose distribution. Thus, we take a Bayesian approach where we approximate p̂ by sampling likely values of E_{s∈S}[p_{s,π(s)}] given the observed data and then taking the δ_stop-percentile of these samples [3,4]. First, for each observed stable pose s, we estimate the expected performance of the best grasp as p̂*_s = max_{i∈A_s} α_i/(α_i + β_i), where α_i and β_i are the parameters of the Beta posterior distribution over the success probability of grasp i. To reason about the performance of LEGS, we must account for uncertainty over the stable pose distribution, parametrized by the drop probabilities λ_1, ..., λ_N. However, N is unknown.
Thus, we model our belief over drop probabilities using a Dirichlet posterior distribution over N̂ + 1 drop probabilities, where N̂ is the number of observed stable poses and the +1 allocates probability mass to unobserved stable poses. Assuming a uniform Dirichlet prior, we take the empirical drop counts c_1, ..., c_N̂ for the N̂ observed stable poses, and sample from the posterior distribution over stable pose drop probabilities, Pr({λ_s}_{s=1}^{N̂+1} | c_1, ..., c_N̂, 0). Due to conjugacy [10], the desired posterior distribution is also a Dirichlet distribution with parameters (α_1 = c_1 + 1, ..., α_N̂ = c_N̂ + 1, α_{N̂+1} = 1). Given a sample {λ̃_s}_{s=1}^{N̂+1} from the above Dirichlet posterior, we transform it into a sample from the posterior over expected grasp robustness: p̃_π = Σ_{s=1}^{N̂} p̂*_s · λ̃_s, where we conservatively assume that the robot will fail to grasp the object in any unseen poses. We calculate a (1 − δ_stop)-confidence lower bound on the overall grasp robustness by finding the δ_stop-percentile, p̂ = PPF(p̃_π, δ_stop), using M samples of p̃_π.

V. SIMULATION EXPERIMENTS

A. Experimental Setup

We first evaluate LEGS in the Exploratory Grasping setting with a variety of adversarial objects in simulation. As in Danielczuk et al. [8], we consider 14 Dex-Net 2.0 Adversarial objects [24] and all 39 EGAD! Adversarial evaluation objects [27]. We use Dex-Net 4.0 [25] to sample a large reservoir of K = 2000 grasps for each stable pose. We also use GQ-CNN to set the Beta prior for LEGS following the method from [8,21]. Using the method outlined in Section IV, we update the active grasp set every n = 100 timesteps and use δ = 0.05 for constructing grasp confidence intervals with upper confidence threshold γ = 0.2. All experiments use a time horizon of H = 3000. We run 10 trials of each algorithm with 10 rollouts per trial, where each trial involves sampling a different reservoir of grasps, and each rollout for a trial involves running a grasp exploration algorithm.

B. Baselines

We compare LEGS against five baseline algorithms: Dex-Net, Tabular Q-Learning (TQL), BORGES (K_s = 100), BORGES (K_s = 2000), and LEGS (-AS). Dex-Net greedily chooses the best grasp evaluated by Dex-Net 4.0 [25] for each stable pose and does not do any online exploration. BORGES (K_s = 100) leverages a prior calculated by GQ-CNN to seed grasp success probability estimates, and then performs Thompson sampling for each encountered stable pose to explore an initial active set of 100 grasps sampled on each of the poses. While BORGES (K_s = 100) is provided with the same initial active set as LEGS, unlike LEGS, BORGES (K_s = 100) does not update its set over time. Moreover, in contrast to [8], it is not guaranteed that there exist successful grasps on all stable poses when K_s = 100, which implies that BORGES (K_s = 100) may not be able to transition between stable poses. The K_s = 100 Upper Bound refers to the optimality gap if, on each stable pose, the best grasp in the active set is selected. BORGES (K_s = 2000) is identical to BORGES (K_s = 100), but instead directly explores the full reservoir of K_s = 2000 sampled grasps. TQL implements tabular Q-learning on the full reservoir of K_s = 2000 sampled grasps, where each pose is a separate state s and each action a is a grasp on that pose, and a Q-table Q[s, a] is constructed to keep track of the corresponding 1-step Q-values. The values in the Q-table are initialized using the GQ-CNN prior and actions are chosen based on an ε-greedy policy [32] with ε = 0.1.
Finally, LEGS (-AS) is not provided with an initial active set, but instead operates on the full reservoir of K_s = 2000 grasps and uses the posterior-dependent removal procedure in Section IV-B to remove grasps from the reservoir.

C. Experimental Results

We first study aggregated results for LEGS and baselines over objects in the Dex-Net Adversarial and EGAD! evaluation datasets in Table I. We find that LEGS performs better than or equal to the baseline algorithms on 10 out of 14 objects in the Dex-Net Adversarial dataset, and on 25 out of 39 objects in the EGAD! evaluation dataset. In comparison, the best performing baseline algorithm, BORGES (K_s = 2000), only performs at least as well as the rest of the algorithms on 5 out of 14 Dex-Net Adversarial objects and on 14 out of 39 objects in the EGAD! evaluation dataset. On all of these objects we find that Dex-Net, which is not updated online, has a high optimality gap, motivating online grasp exploration. The improvement of LEGS over LEGS (-AS) and BORGES (K_s = 2000) indicates the increased efficiency of restricting exploration to a small active set, while the gap between LEGS and BORGES (K_s = 100) indicates the importance of updating this active set over time to prune poor performing grasps while discovering new, high-quality grasps outside of the initial active set. BORGES (K_s = 100) cannot outperform the success rate of the best grasp in its initial set (the K_s = 100 upper bound). By contrast, LEGS retains the efficiency of only exploring a small set of grasps while also being able to adapt this set over time to obtain successful grasps on difficult-to-grasp stable poses and reach a lower optimality gap. TQL learns much more slowly than BORGES because it fails to leverage the structure in the grasp exploration problem and does not learn separate policies for each stable pose. In Figure 2, we study LEGS and baselines on specific objects. We show two objects (Climbing Hold and C3) where LEGS converges faster to high performing grasps than prior algorithms and two objects (F6 and Turbine Housing) where LEGS does not outperform all baselines. We find that when high performing grasps are abundant, LEGS may converge to suboptimal grasps. However, when there are only a few successful grasps, LEGS can converge to good grasps much faster than baselines. If high quality grasps are already in the active set, LEGS can rapidly distinguish them from other grasps. If the active set does not contain successful grasps, LEGS can quickly replace bad grasps in the active set.

D. Early Stopping Results

Next, we study the accuracy and effectiveness of the early stopping criterion (Section IV-C). We test the proposed high-confidence performance bound across all objects in the Dex-Net Adversarial object set (individual results per object are reported in the supplement). We check whether LEGS has reached the stopping condition every 100 grasps for a horizon of H = 3000 total grasp attempts and use δ_stop = 0.05, resulting in a 95%-confidence lower bound p̂. We use M = 3000 samples to estimate p̂. We first test how often the predicted bound is a true lower bound on performance. We find that, on average across all Dex-Net Adversarial objects, our empirical lower bound is a 95.8%-accurate lower bound on the true performance over the true stable pose distribution. Thus, p̂ forms an empirically valid (1 − δ_stop)-confidence lower bound. We next test the tightness of our lower bound. On average, the difference between the true performance of LEGS and our empirical lower bound is only 2.97%.
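To make the construction of p̂ concrete, the following is a minimal sketch of the sampling procedure from Section IV-C, assuming NumPy; the function name and argument layout are ours.

```python
import numpy as np

def stopping_lower_bound(best_means, drop_counts, delta_stop=0.05, M=3000):
    """(1 - delta_stop)-confidence lower bound on expected grasp success.

    best_means[s]: posterior mean success of the best grasp in observed
                   pose s, i.e. max_i alpha_i / (alpha_i + beta_i).
    drop_counts[s]: number of times pose s was observed after a drop.
    """
    # Dirichlet posterior over the N_hat observed poses plus one "unseen"
    # pose; the uniform prior adds 1 to each count, the unseen pose gets 1.
    params = np.array(list(drop_counts) + [0.0]) + 1.0
    lam = np.random.dirichlet(params, size=M)      # M pose-probability samples
    # Unseen-pose mass conservatively contributes zero success probability.
    p_pi = lam[:, :-1] @ np.array(best_means)
    # Stop exploring once this value reaches rho_min.
    return np.quantile(p_pi, delta_stop)           # delta_stop-percentile
```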
The accuracy and tightness results above suggest that our lower bound is highly accurate and tight enough to provide a practical signal for when the robot can safely stop exploring. We next study, in simulation, the use of our high-confidence performance bounds for early stopping. As described in Section IV-C, given a user-specified minimum performance threshold ρ_min, the robot stops exploring when the lower confidence bound p̂ is greater than ρ_min. When the robot chooses to stop exploring the object, we evaluate the ground truth performance of the learned policy and check whether the true performance is also above the threshold ρ_min. We evaluate a wide range of thresholds and plot the results in Figure 3. Results suggest that we can achieve highly accurate early stopping, allowing the robot to accurately terminate exploration well before the full horizon of 3000 steps.

Fig. 3: Early Stopping Threshold Sensitivity: We evaluate early stopping over the Dex-Net Adversarial object set in simulation with a range of stopping thresholds ρ_min. We use a 95%-confidence lower bound on expected grasp robustness. Left: We plot the accuracy averaged over all objects and find that our empirical lower bound (Section V-D) is highly accurate across all stopping thresholds ρ_min. Right: We plot the number of steps before stopping, averaged across all objects. Intuitively, the required exploration time increases with higher performance thresholds. Importantly, the average number of steps before stopping is much lower than the 3000-step horizon.

VI. PHYSICAL EXPERIMENTS

In this section, we discuss our experimental setup for physical experiments, the methods we used to enable intervention-free grasp exploration on a physical robot, and results evaluating the performance of LEGS and BORGES (K_s = 2000) across 3 physical objects.

A. Experimental Setup

To deploy exploratory grasping algorithms on a physical robot, we modify the perception system introduced in Danielczuk et al. [8] to sample grasps and identify changes in the object stable pose. We capture a depth image of the object from an overhead camera, deproject it into a point cloud using the known camera intrinsics, demean the point cloud, and apply 3600 evenly spaced rotations to the point cloud around the camera's optical axis. We measure the chamfer distance between the rotated point clouds and previously cached point clouds and find the pair of point clouds that serves as the closest match. As in Danielczuk et al. [8], if at least 80% of the points are less than 0.02 mm away from the closest points in the cached point cloud, we classify the two point clouds as belonging to the same stable pose. If none of the cached point clouds satisfies this condition, the point cloud is cached and treated as a new stable pose. If there exists a matching point cloud, we further align the translation and rotation of the point cloud via iterative closest point [7]. Upon discovery of a new stable pose, we use Dex-Net 4.0 [25] to sample, evaluate, and cache grasps in the grasp reservoir. Thus, LEGS can explore grasps on objects with unknown geometries and unknown numbers of stable poses.

B. Self-Supervised Exploratory Grasping with LEGS

Danielczuk et al. [8] find that re-dropping the object during experiments often causes it to fall out of the workspace, requiring extensive human effort to reset the object.
To enable the robot to collect grasp data without human intervention, we introduce strategies to prevent the object from toppling out of the workspace while maintaining access to a wide variety of grasps. We drop the object within a bowl (Fig. 1), where the object's rebound height is lower than the rim of the bowl. The bowl keeps the object within the visible range of the overhead camera. However, the bowl's rim can obstruct grasps. We introduce two autonomous reset behaviors to address this: (1) we center the object above the bowl before dropping it, and (2) when the object topples near the boundary, the robot pushes the object towards the center of the bowl to improve grasp access [9]. Figure 4 shows learning curves from physical experiments comparing LEGS with BORGES (K_s = 2000) on three challenging objects from the Dex-Net Adversarial Dataset [24]. We run 3 trials with 1 rollout per trial for each object. We find that on 2 out of the 3 objects, LEGS is able to outperform BORGES (K_s = 2000) and identify high-performing grasps within a few hundred timesteps of online exploration.

VII. DISCUSSION

We present Learned Efficient Grasp Sets, an algorithm which efficiently explores large sets of grasps by adaptively constructing a small active set of promising grasps. Experiments suggest that LEGS identifies high-performing grasps more efficiently than baseline algorithms across 53 objects in simulation experiments and on three challenging objects in physical trials. We also propose a novel early stopping condition computed from a high-confidence lower bound on the expected grasp performance. Simulation results suggest that this high-confidence lower bound is highly accurate and tight. In future work, we will analyze LEGS to determine how the quality of the Dex-Net prior and the distribution over grasp success probabilities affect its convergence rate. Moreover, we will search for ways for LEGS to generalize across different stable poses and objects.

APPENDIX

We select the hyperparameters based on ablation studies performed on the Bar Clamp object from the Dex-Net 2.0 adversarial objects [24]. Specifically, we perform ablation studies on two hyperparameters: s, the strength of the GQ-CNN prior, and δ, used for constructing grasp confidence intervals. For each ablation experiment, we run 10 trials of each algorithm with 10 rollouts per trial, where each trial involves sampling a different reservoir of grasps, and each rollout for a trial involves running a grasp exploration algorithm. All experiments are run over a time horizon of H = 3000. Our experiments (Table I) show that this set of hyperparameters, tuned on a single object, can be applied across many objects.

A. Sensitivity to Prior Strength

The effect of the strength of the GQ-CNN prior was first studied in Li et al. [21]. In Danielczuk et al. [8], s is set to 5. Our sensitivity experiments show that when more grasps are sampled on each stable pose, s = 1 gives the best result. In this set of experiments, δ = 0.07.

B. Sensitivity to Confidence Interval Parameter

We also performed sensitivity experiments on the δ used for constructing confidence intervals for all grasps. Intuitively, a small δ leads to a larger confidence interval, which may slow down updates to the active set. A large δ leads to a smaller confidence interval, which may lead to false positives when identifying grasps with low success rates. In our ablation experiments, we find that δ = 0.05 gives the best performance on the bar clamp object.
In this set of experiments, s = 1.
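As a supplement to the stable-pose matching procedure of Section VI-A, one possible implementation of the rotation search and chamfer test is sketched below. It assumes a one-directional chamfer criterion with the camera's optical axis along z; all names are illustrative, and the brute-force scan over all 3600 rotations is kept for clarity rather than speed.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_match(query, cached, n_rotations=3600, threshold=0.02, frac=0.8):
    """Return True if `query` matches `cached` under some yaw rotation.

    query, cached: (N, 3) demeaned point clouds. `threshold` is the
    per-point distance cutoff and `frac` the required fraction of matched
    points, following the values quoted in Section VI-A.
    """
    tree = cKDTree(cached)
    angles = np.linspace(0.0, 2.0 * np.pi, n_rotations, endpoint=False)
    for a in angles:
        c, s = np.cos(a), np.sin(a)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        rotated = query @ R.T        # rotate about the camera's optical axis
        d, _ = tree.query(rotated)   # nearest-neighbor distances to cache
        if np.mean(d < threshold) >= frac:
            return True
    return False
```

A new depth image would be matched against every cached pose; on failure, its point cloud is cached as a new stable pose, as described above.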
2021-12-01T02:15:42.956Z
2021-11-29T00:00:00.000
{ "year": 2021, "sha1": "bc61134c9a1372be95c4ad7b95e6465ba60c8926", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "bc61134c9a1372be95c4ad7b95e6465ba60c8926", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
99226902
pes2o/s2orc
v3-fos-license
Kinetics of Nonbranched-Chain Processes of the Free-Radical Addition to Molecules of Alkenes, Formaldehyde, and Oxygen with Competing Reactions of Resulting 1:1 Adduct Radicals with Saturated and Unsaturated Components of the Binary Reaction System

Five reaction schemes are suggested for the initiated nonbranched-chain addition of free radicals to the multiple bonds of unsaturated compounds. The proposed schemes include a reaction competing with the chain propagation reactions through a reactive free radical. The chain evolution stage in these schemes involves three or four types of free radicals. One of them is relatively low-reactive and inhibits the chain process by shortening the kinetic chain length. Based on the suggested schemes, nine rate equations (containing one to three parameters to be determined directly) are deduced using the quasi-steady-state treatment. These equations provide good fits for the nonmonotonic (peaking) dependences of the formation rates of the molecular products (1:1 adducts) on the concentration of the unsaturated component in binary systems consisting of a saturated component (hydrocarbon, alcohol, etc.) and an unsaturated component (alkene, allyl alcohol, formaldehyde, or dioxygen). The unsaturated compound in these systems is both a reactant and an autoinhibitor generating low-reactive free radicals. A similar kinetic description is applicable to the nonbranched-chain process of free-radical hydrogen oxidation, in which oxygen, as its concentration increases, begins to act as an oxidation autoinhibitor (or an antioxidant). The energetics of the key radical-molecule reactions is considered.

INTRODUCTION

A free radical may be low-reactive if its unpaired p-electron can be delocalized, e.g., over conjugated bonds as in the case of the allyl radical CH2=CHĊH2, or along a double bond from carbon to the more electron-affine oxygen as in the case of the formyl radical HĊ=O. Note that the activity of a free radical is also connected to the heat of the reaction in which it participates. In nonbranched-chain processes of reactive free radical (addend) addition to the double bonds of molecules, the formation of rather low-reactive free radicals in reactions that are parallel to or competing with propagation via reactive radicals leads to chain termination, because these low-reactive radicals do not participate in further chain propagation and because they decay when colliding with each other or with chain-carrier reactive radicals, thus resulting in inefficient expenditure of the latter and in process inhibition. In similar processes involving the addend and inhibitor radicals in diffusion-controlled bimolecular chain-termination reactions of three types, the dependences of the rate of molecular 1:1 adduct formation on the concentration of the unsaturated component (which is the source of low-reactive free radicals in a binary system of saturated and unsaturated components) have a maximum, usually in the region of small (optimal) concentrations. The progressive inhibition of nonbranched-chain processes upon exceeding this optimal concentration may be an element of self-regulation of natural processes, returning them to a steady-state condition. Here, reactions of addition of reactive free radicals to the multiple bonds of alkene, formaldehyde, and oxygen molecules to give 1:1 adduct radicals are taken as examples to consider the role of low-reactive free radicals as inhibitors of nonbranched-chain processes at moderate temperatures.
In the case of oxidation, there is a tetraoxyl 1:2 adduct radical, arising upon addition of a peroxyl 1:1 adduct radical to molecular oxygen at high enough concentrations of the latter. The 1:1 adduct radical (which is the heaviest and the largest among the free radicals that result from the addition of one addend radical to the double bond of the molecule) may have an increased energy owing to the energy liberated in the transformation of a double bond into an ordinary bond (30-130 kJ mol-1 for the gas phase under standard conditions [1-4]). Therefore, it can decompose or react with one of the surrounding molecules in the place of its formation without diffusing in the solution and, hence, without participating in radical-radical chain termination reactions. Which of the two reactions of the adduct radical, the reaction with the saturated component or the reaction with the unsaturated component, dominates the kinetics of the process will depend on the reactivity and concentration ratios of the components in the binary system. Earlier [5,6], there were attempts to describe such peaking dependences fragmentarily, assuming that the saturated or the unsaturated component is in excess, in terms of direct and inverse proportionalities, respectively, that result from the simplification of a particular case of the kinetic equation set up by the quasi-steady-state treatment of binary copolymerization involving fairly long chains [5]. This specific equation is based on an irrational function, whose plot is a monotonic curve representing the dependence of the product formation rate on the concentration of the unsaturated component. This curve comes out of the origin of coordinates, is convex upward, and has an asymptote parallel to the abscissa axis. Replacing the component concentrations with the corresponding mole fractions generates a peak in this irrational function and thereby makes it suitable for describing the experimental data [7]. However, this circumstance cannot serve as a sufficient validation criterion for the mechanism examined, because the new property imparted to the function by the above artificial transformation does not follow from the solution of the set of algebraic equations that are set up for the reaction scheme accepted for the process in a closed system and that express the equality of the steady-state formation and disappearance rates of the reactive intermediates. This publication presents a comprehensive review of the nonbranched-chain kinetic models developed for particular types of additions of saturated free radicals to multiple bonds [8-14]. It covers free radical additions to alkenes [10,11], their derivatives [8,9], formaldehyde (the first compound in the aldehyde homologous series) [8,9,12], and molecular oxygen [13,14] (which can also add an unsaturated radical), yielding various 1:1 molecular adducts whose formation rates as a function of the unsaturated compound concentration pass through a maximum (free-radical chain additions to the C=N bond have not been studied adequately). In the kinetic description of these nontelomerization chain processes, the reaction between the 1:1 adduct radical and the unsaturated molecule, which is in competition with chain propagation through a reactive free radical (•PCl2, C2H5ĊHOH, etc.), is included for the first time in the chain propagation stage.
This reaction yields a low-reactive radical (such as CH2=C(CH3)ĊH2 or HĊ=O) and thus leads to chain termination, because this radical does not continue the chain and thereby inhibits the chain process [8]. We will consider kinetic variants for the case of comparable component concentrations with an excess of the saturated component [10,11] and for the case of an overwhelming excess of the saturated component over the unsaturated component [8,9,12]. Based on the reaction schemes suggested for the kinetic description of the addition process, we have derived kinetic equations with one to three parameters to be determined directly. Reducing the number of unknown parameters in a kinetic equation will allow one to decrease the narrowness of the correlation of these parameters and to avoid a sharp build-up of the statistical error in the nonlinear estimation of these parameters in the case of a limited number of experimental data points [15]. The rate constant of the addition of a free radical to the double bond of the unsaturated molecule, estimated as a kinetic parameter, can be compared to its reference value if the latter is known. This provides a clear criterion to validate the mathematical description against experimental data. The kinetic equations were set up using the quasi-steady-state treatment. This method is the most suitable for processes that include eight to ten or more reactions and four to six different free radicals and that are described by curves based on no more than three to seven experimental points. In order to reduce the exponent of the d[Ṙ]/dt = 0 equation to unity [8], we used the following condition for the early stages of the process: k6 = (2k5 · 2k7)^1/2 [16] and, hence, V1 = V5 + 2V6 + V7, where [Ṙ1] and [Ṙ2] are the concentrations of the addend radical and the low-reactive (inhibitor) radical, respectively; V1 is the initiation rate; V5, 2V6, and V7 are the rates of the three types of diffusion-controlled quadratic-law chain termination reactions; and 2k5, k6, and 2k7 are the corresponding rate constants. Our mathematical simulation was based on experimental data obtained for γ-radiation-induced addition reactions for which the initiation rate V1 is known. The analysis of stable liquid-phase products was carried out by the gas chromatographic method.

ADDITION TO THE C=C BOND OF ALKENES AND THEIR DERIVATIVES

When reacting with alkenes not inclined to free-radical polymerization, the free radicals originating from inefficient saturated telogens, such as alcohols [17] and amines [18], usually add to the least substituted carbon atom at the double bond, primarily yielding a free 1:1 adduct radical. This radical accumulates an energy of 90-130 kJ mol-1, which is released upon the transformation of the C=C bond into an ordinary bond (according to the data reported for the addition of nonbranched C1-C4 alkyl radicals to propene and of similar C1 and C2 radicals to 1-butene in the gas phase under standard conditions [1-4]). Such adduct radicals, which do not decompose readily for structural reasons, can abstract the most labile atom from a neighboring molecule of the saturated or unsaturated component of the binary reaction system, thus turning into a 1:1 adduct molecule. The consecutive and parallel reactions involved in this free-radical nonbranched-chain addition process are presented below (Scheme 1). In the case of comparable component concentrations with a nonoverwhelming excess of the saturated component, the extra reaction (1b) (k1b ≠ 0) is included in the initiation stage [10,11].
ADDITION TO THE C=C BOND OF ALKENES AND THEIR DERIVATIVES

When reacting with alkenes not inclined to free-radical polymerization, the free radicals originating from inefficient saturated telogens, such as alcohols [17] and amines [18], usually add to the least substituted carbon atom of the double bond, primarily yielding a free 1:1 adduct radical. This radical accumulates an energy of 90-130 kJ mol⁻¹, which is released upon the conversion of the C=C bond into an ordinary bond (according to the data reported for the addition of nonbranched C1-C4 alkyl radicals to propene and of similar C1 and C2 radicals to 1-butene in the gas phase under standard conditions [1-4]). Such adduct radicals, which do not decompose readily for structural reasons, can abstract the most labile atom from a neighboring molecule of the saturated or unsaturated component of the binary reaction system, thus turning into a 1:1 adduct molecule. The consecutive and parallel reactions involved in this free-radical nonbranched-chain addition process are presented below (Scheme 1). In the case of comparable component concentrations with a nonoverwhelming excess of the saturated component, the extra reaction (1b) (k1b ≠ 0) is included in the initiation stage [10,11]. In the case of an overwhelming excess of the saturated component, reaction (1b) is ignored (k1b = 0) [8,9,12]. The initiation reaction 1 is either the decomposition of a chemical initiator [5,17,18] or a reaction induced by light [5,17,18] or ionizing radiation [19-23]. The overall rate of chain initiation (reactions 1, 1a, and 1b) is determined by the rate of the rate-limiting step (k1b > k1a). The reaction between the free radical •R2, which results from reactions 1b and 4, and the saturated molecule R1A is energetically unfavorable because it implies the formation of the free radical •R1, which is less stable than the initial one. The addition reaction 2 may be accompanied by the abstraction reaction 2a, which yields R1B, but to a much lesser extent. The rates of formation (V, mol dm⁻³ s⁻¹) of the 1:1 adducts R3A (via the chain mechanism) and R3B (via the nonchain mechanism) in reactions 3 and 4 are given by Eqs. (1) and (2), in which V1 is the rate of the initiation reaction 1; l = [R1A] and x = [R2B] are the molar concentrations of the initial components, with l > x; k2 is the rate constant of the addition of the •R1 radical from the saturated component R1A to the unsaturated molecule R2B (reaction 2); and k1a/k1b and α = k3/k4 are the rate-constant ratios of the competing (parallel) reactions (α is the first chain-transfer constant for the free-radical telomerization process [5]). The rate ratio of the competing reactions is V3/V4 = αl/x, and the chain length is ν = V3/V1. Earlier mathematical simulation [8] demonstrated that replacing the adduct radical •R3 with the radical •R2 [5] in the reaction between identical radicals and in the reaction involving •R1 gives rise to a peak in the curve of the 1:1 adduct formation rate as a function of the concentration of the unsaturated component. Reaction 1b, which competes with reaction 1a, is responsible for the maximum in the curve described by Eq. (2), and reaction 4, which competes with reaction 3, is responsible for the maximum in the curve defined by Eq. (1). The number of unknown kinetic parameters to be determined directly (k2, k1a/k1b, and α) can be reduced by introducing the condition k1a/k1b ≈ α, which is suggested by the chemical analogy between the competing reaction pairs 1a-1b and 3-4. After these transformations, the overall formation-rate equation for the 1:1 adducts R3A and R3B (which may be identical, as in the case of R3H [5,8,9,12,13,18-21]) takes the form of Eqs. (3) and (3a), where lm and xm are the component concentrations l and x at the point of maximum of the function. Provided that V1 is known, the only parameter in Eq. (3a) to be determined directly is α. If V1 is known only for the saturated component R1A, then, for a binary system containing comparable concentrations of R1A and R2B, it is better to use Eq. (3b), the mole-fraction form of Eq. (3a), in which 1 − χ = l/(l + x) and χ = x/(l + x) are the mole fractions of the components R1A and R2B (0 < χ < 1), respectively, and χm is the χ value at the point of maximum. The overall formation rate of the 1:1 adducts R3A and R3B is a sophisticated function of the formation and disappearance rates of the radicals •R1 and •R2.
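Before turning to the experimental examples, a numerical illustration of the peaked behavior may help. Since Eqs. (1)-(3b) are not reproduced above, the sketch below borrows the rate law quoted later in the text (Eqs. (5)/(10) type), V = V1(αl + x)k2x/f with f = k2x² + (αl + β + x)√(2k5V1), taken with β = 0 as a stand-in; all parameter values are hypothetical:

import numpy as np

# Illustrative parameters (hypothetical values, not from the paper):
V1 = 5e-9        # initiation rate, mol dm^-3 s^-1
k2 = 1e4         # addition rate constant, dm^3 mol^-1 s^-1
two_k5 = 3e8     # 2k5, dm^3 mol^-1 s^-1
alpha = 2e3      # k3/k4
c_total = 10.0   # total concentration l + x, mol dm^-3

def rate(x, alpha=alpha, beta=0.0):
    """Peaked rate law of the form quoted later in the text, used here
    with beta = 0 as a stand-in for Eqs. (3)-(3b)."""
    l = c_total - x
    f = k2 * x**2 + (alpha * l + beta + x) * np.sqrt(two_k5 * V1)
    return V1 * (alpha * l + x) * k2 * x / f

x = np.linspace(1e-6, c_total - 1e-6, 20000)
V = rate(x)
xm = x[np.argmax(V)]
print(f"maximum rate {V.max():.3e} mol dm^-3 s^-1 at x_m = {xm:.3f} mol dm^-3")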
The application of the above rate equations to particular single nonbranched-chain additions is illustrated in Fig. 1. Curve 1 represents the results of simulation in terms of Eq. (3b) of the observed 1:1 adduct formation rate as a function of the mole fraction of the unsaturated component in the phosphorus trichloride-methylpropene reaction system at 303 K [19]. In this simulation, the 60Co γ-radiation dose rate was set at P = 0.01 Gy s⁻¹ and the initiation yield was taken to be G(•PCl2) = 2.8 particles per 100 eV (1.60 × 10⁻¹⁷ J) of the energy absorbed by the solution [19]. The product of reaction 3 is Cl2PCH2C(Cl)(CH3)CH3 (two isomers), V1 = 4.65 × 10⁻⁹ mol dm⁻³ s⁻¹ at χ = 0, and 2k5 = 3.2 × 10⁸ dm³ mol⁻¹ s⁻¹. This leads to α = (2.5 ± 0.4) × 10³, and the rate constant of reaction 2 derived from this α value is k2 = (1.1 ± 0.2) × 10⁴ dm³ mol⁻¹ s⁻¹. Note that, if the R2-B bond dissociation energy for the unsaturated component of the binary system is approximately equal to or above (not below) the R1-A bond dissociation energy for the saturated component, then the rate of reaction 4 relative to the rate of the parallel reaction 3 (chain propagation through the reactive free radical •R1) will be sufficiently high for an adequate description of R3A and R3B adduct formation in terms of Eqs. (1)-(3b) only at high temperatures [20]. In the phosphorus trichloride-propene system, the difference between the R2-B (B = H) and R1-A (A = Hal) bond dissociation energies in the gas phase under standard conditions [1] is as small as 5 kJ mol⁻¹, while in the tetrachloromethane-methylpropene (or cyclohexene) and bromoethane-2-methyl-2-butene systems this difference is 20.9 (37.7) and ~24 kJ mol⁻¹, respectively.

Excess of the Saturated Component

If the concentration of the saturated component exceeds the concentration of the unsaturated component in the binary system, reaction 1b can be neglected. If this is the case (k1b = 0), then, in the numerators of the rate equations for reactions 3 and 4 (Eqs. (1) and (2)), l/(l + x) = 1, and the overall rate equation for the formation of the 1:1 adducts R3A and R3B takes the form of Eq. (4), with the parameters designated in the same way as in the preceding equations. The rate equations for the chain-termination reactions 5-7 (Scheme 1, k1b = 0) are identical to Eqs. (9)-(11) (see below) with β = 0. Note that, if it is necessary to supplement Scheme 1 for k1b = 0 with the formation of R1B via the possible nonchain reaction 2a (which is considered in Section 2.1), the parameter k2a should be included in the denominator of Eq. (4). The analytical expression for k2 in the case of k2a ≠ 0 is identical to the expression for k2 for Eq. (4). The equation for the rate V2a(R1B) can be derived by replacing k2 with k2a in the numerator of the version of Eq. (4) containing k2a in its denominator.
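The initiation rates quoted in this section follow from the dose rate and the initiation yield by a straightforward unit conversion (V1 = GP in the appropriate units). Below is a small helper illustrating this; the PCl3 density used in the sanity check is an assumed handbook value, not a number from the paper:

# Convert a radiation-chemical initiation yield G (species per 100 eV of
# absorbed energy) and a dose rate P (Gy s^-1) into an initiation rate
# V1 (mol dm^-3 s^-1).
EV_PER_J = 1.0 / 1.602e-19   # electronvolts per joule
N_A = 6.022e23               # Avogadro constant, mol^-1

def initiation_rate(G_per_100eV, P_gy_s, density_kg_dm3):
    """V1 in mol dm^-3 s^-1; the density converts Gy (J/kg) to J per dm^3."""
    eV_per_dm3_s = P_gy_s * density_kg_dm3 * EV_PER_J
    return eV_per_dm3_s * (G_per_100eV / 100.0) / N_A

# Check against the PCl3-methylpropene example above (G = 2.8 per 100 eV,
# P = 0.01 Gy/s); 1.574 kg/dm^3 is an assumed room-temperature density of PCl3.
print(initiation_rate(2.8, 0.01, 1.574))  # ~4.6e-9, close to the quoted 4.65e-9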
ADDITION TO THE C=O BOND OF FORMALDEHYDE

Free radicals add to the carbon atom of the double bond of the carbonyl group of dissolved free (unsolvated, monomeric) formaldehyde. The concentration of free formaldehyde in the solution at room temperature is a fraction of a percent of the total formaldehyde concentration, which includes formaldehyde chemically bound to the solvent [27]. The concentration of free formaldehyde increases exponentially with temperature [28]. The energy released when the C=O bond is converted into an ordinary bond upon this addition is 30-60 kJ mol⁻¹ (according to the data on the addition of C1-C4 alkyl radicals in the gas phase under standard conditions [1-4]). The resulting free 1:1 adduct radicals can both abstract hydrogen atoms from the nearest-neighbor molecules of the solvent or of unsolvated formaldehyde and, owing to their structure, decompose by a monomolecular mechanism including isomerization [9,12].

Addition of Free 1-Hydroxyalkyl Radicals with Two or More Carbon Atoms

Free 1-hydroxyalkyl radicals (which result from the abstraction of a hydrogen atom from the carbon atom bearing the hydroxyl group in molecules of saturated aliphatic alcohols other than methanol under the action of chemical initiators [29,30], light [17,31], or ionizing radiation [32,33]) add at the double bond of free formaldehyde dissolved in the alcohol, forming 1,2-alkanediols [8,9,12,29-36], carbonyl compounds, and methanol [8,33] via the chain mechanism. (The yields of the latter two products in the temperature range of 303 to 448 K are one order of magnitude lower.) In these processes, the determining role in the reactivity of the alcohols can be played by the desolvation of formaldehyde in alcohol-formaldehyde solutions, which depends both on the temperature and on the polarity of the solvent [28,33]. For the radiolysis of the 1- (or 2-)propanol-formaldehyde system at constant temperature, the dependences of the radiation-chemical yields of 1,2-alkanediols and carbonyl compounds on the formaldehyde concentration show maxima and are symbatic [8,32]. For a constant total formaldehyde concentration of 1 mol dm⁻³, the dependence of the 1,2-alkanediol yield on temperature over 303-473 K shows a maximum, whereas the yields of carbonyl compounds and methanol increase monotonically [33] (along with the concentration of free formaldehyde [28]). In addition to the above products, the nonchain mechanism in the γ-radiolysis of solutions of formaldehyde in ethanol and in 1- and 2-propanol gives ethanediol, carbon monoxide, and hydrogen in low radiation-chemical yields (which, however, exceed the yields of the same products in the γ-radiolysis of the individual alcohols) [8,9,33]. The available experimental data can be described in terms of the following scheme of reactions (Scheme 2), comprising chain initiation, chain evolution, and chain termination stages. In these reactions, I is an initiator, e.g., a peroxide [29,30]; •R is a reactive initiator radical; R is an alkyl; ROH is a saturated aliphatic alcohol, primary or secondary, beginning from ethanol; CH2O is the unsaturated molecule, free formaldehyde; •CH2OH is the 1-hydroxymethyl fragment radical; •R(−H)OH is the reactive 1-hydroxyalkyl addend radical, beginning from 1-hydroxyethyl; R(−H)(OH)CH2O• is the reactive hydroxyalkoxyl 1:1 adduct radical; •CHO is the low-reactive formyl radical (inhibitor radical); R0H is a molecular product; R(−H)(OH)CH2OH is the 1,2-alkanediol; and R(−2H)HO is an aldehyde in the case of a primary alcohol and an R′R″CO ketone in the case of a secondary alcohol. The chain evolution stage of Scheme 2 includes the consecutive reaction pairs 2-3, 2-3a, and 3a-3b; the parallel (competing) reaction pairs 3-3a, 3-3b, 3-4, and 3a-4; and the consecutive-parallel reactions 2 and 4. Scheme 2 does not include radical-molecule reactions of the same types as those considered in Section 2.1 for Scheme 1. In addition, the addition of free adduct radicals to formaldehyde seems unlikely, even at higher temperatures, because it would result in the formation of an ether bond.
The addition of hydroxymethyl radicals to formaldehyde, which competes with reaction 3b, is not included either, because there is no chain formation of ethanediol at 303-448 K [33]. At the same time, small amounts of ethanediol can form via the dimerization of a small fraction of the hydroxymethyl radicals, but this cannot have any appreciable effect on the overall process kinetics. The addition of free formyl radicals to formaldehyde cannot proceed at a significant rate either, as indicated by the absence of chain formation of glycolaldehyde in the systems examined [33]. The mechanism of the decomposition of the free adduct radical via reaction 3a, which includes the formation of an intramolecular H···O bond and isomerization, can be represented as described in [8,9,12]. The probability of the occurrence of reaction 3a should increase with increasing temperature, as indicated by the experimental data presented above [8,9,12]. The decomposition of the hydroxyalkoxyl radical R(−H)(OH)CH2O• (reaction 3a) is likely endothermic; this is indirectly indicated by the fact that the decomposition of simple C2-C4 alkoxyl radicals RO• in the gas phase is accompanied by heat absorption. Reaction 3b, subsequent to reaction 3a, is exothermic for C2-C3 alcohols in the gas phase [2-4]. As follows from the above scheme of the process, reactions 3a and 3b, in which the highly reactive hydroxymethyl radical forms and is consumed (at equal rates under steady-state conditions), can be represented as a single bimolecular reaction 3a,b occurring in a "cage" of solvent molecules. The free formyl radical resulting from reaction 4, which competes with reactions 3 and 3a, is comparatively low-reactive because its spin density can be partially delocalized from the carbon atom via the double bond toward the oxygen atom, which possesses a higher electron affinity [1]. For example, in contrast to the methyl and alkoxyl π-radicals, the formyl σ-radical can be stabilized in glassy alcohols at 77 K [37]. In the gas phase, the dissociation energy of the C-H bond in the formyl radical is half of that for acetyl radicals and is about 5 times lower than the dissociation energy of the Cα-H bond in saturated C1-C3 alcohols [1]. As distinct from reactions 3 and 3a,b, reaction 4 leads to an inefficient consumption of hydroxyalkoxyl adduct radicals, without regenerating the initial 1-hydroxyalkyl addend radicals. Reaction 4, together with reaction 6 (mutual annihilation of free formyl and chain-carrier 1-hydroxyalkyl radicals), causes the inhibition of the nonbranched-chain process. For the disproportionation of the free radicals, the heats of reactions 5-7 for C1-C3 alcohols in the gas phase vary over a wide range. The rates of the chain formation of 1,2-alkanediols in reaction 3 (and of their nonchain formation in reaction 4), of carbonyl compounds in reaction 3a, and of methanol in reaction 3b are given by the following equations:

V3,4(R(−H)(OH)CH2OH) = V1(αl + x)k2x / f,   (5)

V3a(R(−2H)HO) = V1βk2x / f,   (6)

where f = k2x² + (αl + β + x)√(2k5V1); V1 is the initiation rate; l is the molar concentration of the saturated alcohol at a given total concentration of formaldehyde dissolved in it; x is the molar concentration of free formaldehyde (l >> x); k2 is the rate constant of reaction 2 (addition of the 1-hydroxyalkyl free radical to free formaldehyde); and α = k3/k4 and β = k3a/k4 (mol dm⁻³) are the ratios of the rate constants of the competing (parallel) reactions. Estimates of 2k5 were reported by Silaev et al. [39,40].
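A short numerical sketch of Eqs. (5) and (6) as given above, using the parameter values reported below for the 1-propanol-formaldehyde system at 413 K; the alcohol concentration l is an assumed value, since it is not quoted explicitly in this text:

import numpy as np

# Parameters reported below for the 1-propanol-formaldehyde system at 413 K.
alpha, beta, k2 = 0.36, 0.25, 6.0e3
V1, two_k5 = 4.07e-7, 4.7e9
l = 13.0  # mol dm^-3; assumed alcohol concentration, not from the paper

def diol_rate(x):
    """Eq. (5): chain + nonchain 1,2-alkanediol formation rate."""
    f = k2 * x**2 + (alpha * l + beta + x) * np.sqrt(two_k5 * V1)
    return V1 * (alpha * l + x) * k2 * x / f

def carbonyl_rate(x):
    """Eq. (6): carbonyl-compound formation rate."""
    f = k2 * x**2 + (alpha * l + beta + x) * np.sqrt(two_k5 * V1)
    return V1 * beta * k2 * x / f

x = np.linspace(1e-4, 2.0, 40000)   # free formaldehyde, mol dm^-3
xm = x[np.argmax(diol_rate(x))]
print(f"x_m = {xm:.2f} mol dm^-3, V_max = {diol_rate(xm):.2e} mol dm^-3 s^-1")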
From the extremum condition for reaction 2, ∂V3,4/∂x = 0, we derived an analytical expression for k2 in terms of the concentrations at the point of maximum, which allows k2 to be eliminated as an independent parameter. The alcohol concentration in alcohol-formaldehyde solutions at any temperature can be estimated by the method suggested in [38,39]. The data necessary for estimating the concentration of free formaldehyde from the total formaldehyde concentration in the solution are reported by Silaev et al. [28,39]. The overall process rate is a complicated function of the formation and disappearance rates of the •R(−H)OH and •CHO free radicals. The ratios of the rates of the competing reactions are V3/V4 = αl/x and V3a/V4 = β/x, and the chain length is ν = (V3 + V3a)/V1. The ratio of the rates of formation of the 1,2-alkanediol and the carbonyl compound is a simple linear function of x: V3,4/V3a = (αl + x)/β. The equations for the rates of the chain-termination reactions 5-7 are identical to Eqs. (12)-(14) (see below, Section 4.1). Neutral formaldehyde solutions in alcohols at room temperature primarily consist of a mixture of formaldehyde polymer solvates reversibly bound to the alcohol; these polymer solvates differ in molecular mass and have the general formula RO(CH2O)nH, where n = 1-4 [27]. The concentration of formaldehyde that occurs in solution as the free, unsolvated active species chemically unbound to the solvent (the species capable of scavenging free radicals) at room temperature is less than one percent of the total formaldehyde concentration [27]. The concentration x of the free formaldehyde species in solutions was determined by high-temperature UV spectrophotometry in the range 335-438 K at total formaldehyde concentrations c0 (free and bound species, including the polymer solvates) of 1.0-8.4 mol dm⁻³ in water, ethanediol, methanol, ethanol, 1-propanol, 2-propanol, and 2-methyl-2-propanol [28] (see the table in the Appendix). This concentration increases with temperature according to an exponential law and can reach a few percent of the total concentration in solution under the test conditions, up to 19.3% in the case of 2-methyl-2-propanol at a total concentration of 1.0 mol dm⁻³ and a temperature of 398 K. The following empirical equation relating the concentration x (mol dm⁻³) of free formaldehyde to the temperature T (K) and the total concentration c0 in the solution (measured at room temperature) was developed by the treatment of 101 data points [28,39]:

lg x = a + b(1/T) + h lg c0,   (7)

where the coefficients a and b were calculated as the parameters of a straight-line equation, by the least-squares technique, from the dependence of lg x on 1/T at c0 = 1.0 mol dm⁻³ for the various solvents, and the coefficient h was obtained as the average value of the slopes of lg x as linear functions of lg c0 at various series of fixed temperatures. Table 1 summarizes these coefficients for each solvent. Relative to the experimental data, the error of the concentrations x of free formaldehyde calculated by Eq. (7) in the specified temperature range was no higher than 25%.
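A minimal implementation of Eq. (7) as described; the coefficients a, b, and h below are placeholders for illustration only (the published Table 1 values are not reproduced here), with b < 0 so that x grows exponentially with temperature, as stated above:

import math

def free_formaldehyde(T, c0, a, b, h):
    """Eq. (7): lg x = a + b*(1/T) + h*lg(c0).
    Returns the free-formaldehyde concentration x in mol dm^-3.
    a, b, h are solvent-specific coefficients from Table 1 (the values used
    below are hypothetical)."""
    lg_x = a + b / T + h * math.log10(c0)
    return 10.0 ** lg_x

# Hypothetical coefficients for illustration only.
print(free_formaldehyde(T=413.0, c0=4.0, a=2.0, b=-1300.0, h=1.0))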
On the assumption that the dependence of the density of a given solution on the formaldehyde concentration is similar to the analogous linear dependence found for aqueous formaldehyde solutions (0-14 mol dm⁻³; 291 K) [27], the concentrations lT (mol dm⁻³) of the alcohol in alcohol-formaldehyde solutions at a given temperature can be estimated by Eq. (8), where c0 is the total formaldehyde concentration (mol dm⁻³); M is the molecular mass (g mol⁻¹) of the solvent; d and dT are the solvent densities (g cm⁻³) at room and the given temperature, respectively; and the coefficients 8.4 × 10⁻³ and 21.6 have the units of 10³ g mol⁻¹ and g mol⁻¹, respectively [38]. Earlier [28], it was found that the concentration x of the free formaldehyde species decreases with increasing solvent permittivity D298 at constant temperature. Water is an exception: although water is more polar than the alcohols, the concentration x of free formaldehyde in aqueous solution is anomalously high, reaching the level of its concentration in 2-propanol, all other factors being the same (see Fig. 2) [28,39]. This can be due to the specific instability of the hydrated formaldehyde species and the ease of their conversion into free formaldehyde with increasing temperature. Figure 3 illustrates the use of Eqs. (5) and (6) for describing the experimental dependences of the formation rates of 1,2-butanediol (curve 1, reactions 3 and 4) and propanal (curve 2, reaction 3a) on the concentration of free formaldehyde in the 1-propanol-formaldehyde reacting system at a total formaldehyde concentration of 2.0 to 9.5 mol dm⁻³ and a temperature of 413 K [8,9,41]. The concentration dependence of the propanal formation rate was described using the kinetic-parameter estimates obtained for the corresponding dependence of the 1,2-butanediol formation rate. We considered the diol data more reliable, because the carbonyl compounds forming in alcohol-formaldehyde systems can react with the alcohol, and this reaction depends considerably on the temperature and the acidity of the medium [27]. The mathematical modeling of the process was carried out using a 137Cs γ-radiation dose rate of P = 0.8 Gy s⁻¹ [32,41], a total initiation yield of G(CH3CH2ĊHOH) = 9.0 particles per 100 eV [8,9] (V1 = 4.07 × 10⁻⁷ mol dm⁻³ s⁻¹), and 2k5 = 4.7 × 10⁹ dm³ mol⁻¹ s⁻¹. The following values of the parameters were obtained: α = 0.36 ± 0.07, β = 0.25 ± 0.05 mol dm⁻³, and k2 = (6.0 ± 1.4) × 10³ dm³ mol⁻¹ s⁻¹.

Figure 3. Reconstruction of the functional dependences (curves) of the product formation rates V3,4 and V3a on the concentration x of free formaldehyde (model optimization with respect to the parameters α, β, and k2) from empirical data (symbols) for the 1-propanol-formaldehyde system at 413 K [8,9,41]: (1, ●) calculation using Eq. (5), standard deviation SY = 2.20 × 10⁻⁷; (2, □) calculation using Eq. (6), SY = 2.38 × 10⁻⁸.

Note that, as compared with the yields of 1,2-propanediol in the γ-radiolysis of the ethanol-formaldehyde system, the yields of 2,3-butanediol in the γ-radiolysis of the ethanol-acetaldehyde system are one order of magnitude lower [41]. Using data from [8,9], it can be demonstrated that, at 433 K, the double bond of 2-propen-1-ol accepts the 1-hydroxyethyl radical 3.4 times more efficiently than the double bond of formaldehyde [42].
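The "model optimization with respect to the parameters α, β and k2" mentioned in the caption of Fig. 3 amounts to a nonlinear least-squares fit. The sketch below illustrates it on synthetic data generated from Eq. (5) (the alcohol concentration l is again an assumed value); with only a handful of points, the parameter correlations discussed in the introduction make the error estimates broad:

import numpy as np
from scipy.optimize import curve_fit

V1, two_k5, l = 4.07e-7, 4.7e9, 13.0  # l is assumed, as above

def model(x, alpha, beta, k2):
    """Eq. (5) used as the fitting model for the diol formation rate."""
    f = k2 * x**2 + (alpha * l + beta + x) * np.sqrt(two_k5 * V1)
    return V1 * (alpha * l + x) * k2 * x / f

# Synthetic "measurements": the true curve plus 5% noise, standing in for
# the radiolysis data points of Fig. 3.
rng = np.random.default_rng(1)
x_data = np.linspace(0.02, 1.2, 10)
y_data = model(x_data, 0.36, 0.25, 6.0e3) * (1 + 0.05 * rng.standard_normal(10))

popt, pcov = curve_fit(model, x_data, y_data, p0=[0.5, 0.5, 1e4])
perr = np.sqrt(np.diag(pcov))
for name, v, e in zip(("alpha", "beta", "k2"), popt, perr):
    print(f"{name} = {v:.3g} +/- {e:.2g}")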
Addition of Hydroxymethyl Radicals

The addition of hydroxymethyl radicals to the carbon atom of the double bond of free formaldehyde molecules in methanol, initiated by the free-radical mechanism, results in the chain formation of ethanediol [34]. In this case, reaction 3a in Scheme 2 is the reverse of reaction 2, the 1-hydroxyalkyl radical •R(−H)OH is the hydroxymethyl radical •CH2OH, reaction 3b is therefore eliminated (k3b = 0), and reaction 5 yields an additional amount of ethanediol via the dimerization of the chain-carrier hydroxymethyl radicals (their disproportionation can practically be ignored [43]). The scheme of these reactions is presented in [35]. The rate equation for ethanediol formation by the chain mechanism in reaction 3 and by the nonchain mechanism in reactions 4 and 5 in the methanol-formaldehyde system, Eq. (9), has a more complicated form than Eq. (5) for the formation rate of the other 1,2-alkanediols [12]; its denominator is f = k2x² + (αl + β + x)√(2k5V1). (In an earlier publication [8], this equation did not take reaction 3a into account.) If the rate of ethanediol formation via the dimerization mechanism in reaction 5 is ignored as small compared with the total rate of ethanediol formation in reactions 3 and 4, Eq. (9) becomes identical to Eq. (5). After the numerator and denominator on the right-hand side of Eq. (5) are divided by k−2 ≡ k3a, k2 in this equation can be replaced by K2 = k2/k−2, the equilibrium constant of the reverse of reaction 2. Ignoring the reverse of reaction 2 (k3a = 0, β = 0) makes Eq. (5) identical to Eq. (4) for Scheme 1 at k3b = 0 (see Section 2.1); in this case, the rate constant k2 is an effective one.

ADDITION TO OXYGEN

The addition of a free radical or an atom to one of the two multiply bonded atoms of the oxygen molecule yields a peroxyl free radical and thus initiates oxidation, the basic process of chemical evolution. The peroxyl free radical then either abstracts the most labile atom from a molecule of the compound being oxidized or decomposes into a molecule of an oxidation product. The only reaction that can compete with these two reactions at the chain evolution stage is the addition of the peroxyl radical to the oxygen molecule (provided that the oxygen concentration is sufficiently high). This reaction yields a secondary, tetraoxyalkyl, 1:2 adduct radical, which is the heaviest and the largest among the reactants. It is less reactive than the primary, 1:1 peroxyl adduct radical and, as a consequence, does not participate in further chain propagation. At moderate temperatures, the reaction proceeds via a nonbranched-chain mechanism.

Addition of Hydrocarbon Free Radicals

Usually, the convex curve of the hydrocarbon (RH) autooxidation rate as a function of the partial pressure of oxygen ascends up to some limit and then flattens out [6]. When this is the case, the oxidation kinetics is satisfactorily described in terms of the conventional reaction scheme [2,5,6,16,44,45], which involves two types of free radicals: the hydrocarbon radical R• (addend radical) and the peroxyl 1:1 adduct radical RO2•. However, the existing mechanisms are inapplicable to the cases in which the rate of initiated oxidation as a function of the oxygen concentration has a maximum (Figs. 4, 5) [46,47]. Such dependences can be described in terms of the competition kinetics of free-radical chain addition, whose reaction scheme involves not only the above two types of free radicals but also the RO4• radical (1:2 adduct), which inhibits the chain process [13,14].
Scheme 3

The kinetic model of oxidation represented by Scheme 3 differs from the kinetic model of the chain addition of 1-hydroxyalkyl radicals to the free (unsolvated) form of formaldehyde in nonmethanolic alcohol-formaldehyde systems [8,9] only in details of the participating species. The 1:1 adduct radical RO2• possesses an increased energy owing to the energy released upon the conversion of the O=O multiple bond into the ordinary bond RO-O• (for addition in the gas phase under standard conditions, this energy is 115-130 kJ mol⁻¹ for C1-C4 alkyl radicals [1,2,4] and 73 kJ mol⁻¹ for the allyl radical [4]). Because of this, the adduct radical can decompose (reaction 3a) or react with a neighboring molecule (reaction 3 or 4) on the spot, without diffusing through the solution and, accordingly, without entering into any chain-termination reaction. In reaction 3, the interaction between the radical adduct RO2• and the hydrocarbon molecule RH yields, via a chain mechanism, either the alkyl hydroperoxide RO2H (this pathway regenerates the chain carrier R• and, under certain conditions, can be viewed as being reversible [2]) or the alcohol ROH (this is followed by the regeneration of R• via reaction 3b). The latter (alternative) pathway of reaction 3 consists of four steps, namely, the breaking of two old bonds and the formation of two new bonds in the reacting structures. In reaction 3a, the isomerization and decomposition of the alkylperoxyl radical adduct RO2• take place with O-O and C-O or C-H bond breaking [6,44], yielding the carbonyl compound R(−H)HO or R(−2H)HO. Reaction 3b produces the alcohol R′OH or water and regenerates the free radical R′• (or •OH), where R′ and R″ are radicals having a smaller number of carbon atoms than R. As follows from the above scheme of the process, the consecutive reactions 3a and 3b (whose rates are equal within the quasi-steady-state treatment), in which the highly reactive fragment, the oxyl radical RO• (or •OH), forms and then disappears, can be represented as a single, combined bimolecular reaction 3a,b occurring in a "cage" of solvent molecules. Likewise, the alternative (parenthesized) pathways of reactions 3 and 3b, which involve the alkoxyl radical RO•, can formally be treated as having equal rates. For simple C1-C4 alkyl radicals R, the pathway of reaction 3 leading to the alkyl hydroperoxide RO2H is endothermic (ΔH°298 = 30-80 kJ mol⁻¹) and the alternative pathway yielding the alcohol ROH is exothermic (ΔH°298 = −120 to −190 kJ mol⁻¹), while the parallel reaction 3a, which yields a carbonyl compound and the alkoxyl radical RO• or the hydroxyl radical •OH, is exothermic in both cases (ΔH°298 = −80 to −130 kJ mol⁻¹), as is reaction 3b, consecutive to reaction 3a (ΔH°298 = −10 to −120 kJ mol⁻¹), according to thermochemical data for the gas phase [2-4]. In reaction 4, which competes with (is parallel to) reactions 3 and 3a (chain propagation through the reactive radical R•), the resulting low-reactive radical that does not participate in further chain propagation and inhibits the chain process is supposed to be the alkyltetraoxyl 1:2 radical adduct RO4•, which has the largest weight and size. It has been hypothesized that raising the oxygen concentration in the o-xylene-oxygen system can lead to the formation of an [RO•···O2] intermediate complex [46], similar to the [ROO•···(π-bond)RH] complex between the alkylperoxyl 1:1 adduct radical and an unsaturated hydrocarbon suggested in this work; the electronic structure of the π-complexes is considered elsewhere [48].
Thermochemical data are available for some polyoxyl free radicals: the enthalpy of formation of the methyltetraoxyl radical, without the energy of the possible intramolecular hydrogen bond H···O taken into account, is −21.0 ± 9 kJ mol⁻¹ [49], obtained using the group-contribution approach. Some physicochemical and geometric parameters were calculated for the methyl hydrotetraoxide molecule as a model compound [50-52]. The IR spectra of dimethyl tetraoxide with isotopically labeled groups in Ar-O2 matrices were also reported [53]. For a reliable determination of the number of oxygen atoms in an oxygen-containing species, IR and EPR spectroscopy should be used in combination with the isotope tracer method [53]. The RO4• radical is possibly stabilized by a weak intramolecular H···O hydrogen bond [54], shaping it into a six-membered cyclic structure (a seven-membered cyclic structure in the case of aromatic and certain branched acyclic hydrocarbons) [56,57]. A similar cyclic structure, presumably with a hydrogen bond [6], also forms in the transition state of the dimerization of primary and secondary alkylperoxyl radicals RO2• via the Russell mechanism [5,55]. In the case of the methylperoxyl radical, reaction 4 yields the methyltetraoxyl radical; the decomposition of the latter is likely accompanied by the chemiluminescence typical of hydrocarbon oxidation [52]. These decomposition reactions regenerate oxygen as O2 molecules (including singlet oxygen [52,59]; note that the alkylperoxyl radicals RO2• are effective quenchers of singlet oxygen O2(a¹Δg) [58]) and, partially, as O3 molecules, and they yield the carbonyl compound R(−2H)HO (possibly in the triplet excited state [52]). Depending on the decomposition pathway, the other possible products are the alcohol ROH, the ether ROR, and the alkyl peroxide RO2R; by the principle of detailed balance, the number of product species in an elementary reaction should not exceed three, since the probability of a simultaneous interaction of four particles in the reverse reaction is negligible. The equations describing the formation rates of molecular products at the chain propagation and termination stages of the above reaction scheme, set up using the quasi-steady-state treatment, are Eqs. (10) and (11), where V1 is the initiation rate; l = [RH] and x = [O2] are the molar concentrations of the components; α = k3/k4 and β = k3a/k4; and the denominator f = k2x² + (αl + β + x)√(2k5V1) is common to both equations. The ratios of the rates of the competing reactions are V3/V4 = αl/x and V3a/V4 = β/x, and the chain length is ν = (V3 + V3a)/V1. Eq. (11) is identical to Eq. (6). Eqs. (10a) and (11a) were obtained by replacing the rate constant k2 in Eqs. (10) and (11) with its analytical expression (to reduce the number of unknown parameters to be determined directly). For αl >> β (V3 >> V3a), when the total yield of alkyl hydroperoxides and alcohols having the same number of carbon atoms as the initial compound far exceeds the yield of carbonyl compounds, as in the case of the oxidation of some hydrocarbons, the parameter β in Eqs. (10) and (10a) can be neglected (β = 0), and these equations become identical to Eqs. (3) and (3a), with the corresponding analytical expression for k2.
In the alternative kinetic model of oxidation, whose chain-termination stage involves, in place of R• (Scheme 3), RO2• radicals reacting with one another and with RO4• radicals, the dependences of the chain formation rates of the products on the oxygen concentration x derived by the same method have the form V = a0x/(b0x + c0), where a0, b0, and c0 are coefficients; this function has no extremum. For a similar kinetic model in which reactions 3a,b and 4 appearing in the above scheme are missing (k3a = k4 = 0), Walling [5], using the quasi-steady-state treatment in the long-kinetic-chain approximation, when it can be assumed that V2 = V3, and without using the substitution k6 = √(2k5 · 2k7) [5,6,16] (as distinct from this work), found that V2 = V3 is an irrational function of x involving the coefficients a1, b1, c1, and d1. Again, this function has no maximum with respect to the concentration of either of the two components. Thus, of the three kinetic models of oxidation mathematically analyzed above, all of which involve the radicals R• and RO2• in three types of quadratic-law chain-termination reactions (reactions 5-7) and are variants of the conventional model [2,5,6,16,44,45], the last two lead to an oxidation rate versus oxygen concentration curve that emanates from the origin of coordinates, is convex upward, and has an asymptote parallel to the abscissa axis. Such monotonic dependences are observed when the oxygen solubility in the liquid is limited under the given experimental conditions and the oxygen concentrations attained are below the region of the maximum. Unlike the conventional model, the above kinetic model of free-radical nonbranched-chain oxidation, which includes the pairs of competing reactions 3-4 and 3a-4 (Scheme 3), allows us to describe the nonmonotonic (peaking) dependence of the oxidation rate on the oxygen concentration (Fig. 4). In this oxidation model, as the oxygen concentration in the binary system is increased, oxygen begins to act as an oxidation autoinhibitor, or an antioxidant, via the shortening of the kinetic chains. The optimum oxygen concentration xm, at which the oxidation rate is the highest, can be calculated using kinetic equations (10a) and (11a) and Eq. (3a) with β = 0, or from the corresponding analytical expression for k2. In the familiar monograph Chain Reactions by Semenov [60], it is noted that raising the oxygen concentration when it is already sufficient usually slows down the oxidation process by shortening the chains. The existence of the upper (second) ignition limit in oxidation is due to chain termination in the bulk through triple collisions between an active species of the chain reaction and two oxygen molecules (at sufficiently high oxygen partial pressures). In the gas phase at atmospheric pressure, the number of triple collisions is roughly estimated to be 10³ times smaller than the number of binary collisions (and the probability of a reaction taking place depends on the specificity of the action of the third particle). Note that, in the gas-phase oxidation of hydrogen at low pressures of 25-77 Pa and a temperature of 77 K [47], when triple collisions are unlikely, the dependence of the rate of hydrogen peroxide formation on the oxygen concentration (the rate of passing of molecular oxygen through the reaction tube) also has a pronounced maximum (see curves 3 and 4 in Fig. 5), which indicates a chemical mechanism for the appearance of the maximum (see reaction 4 of Scheme 4).
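The qualitative distinction drawn here (a peaked curve for the competing-addition model versus monotonic, convex-upward curves for the conventional variants) is easy to visualize numerically; all coefficients in this sketch are arbitrary illustration values:

import numpy as np

x = np.linspace(1e-4, 1.0, 500)  # oxygen concentration, arbitrary units

# Competing-addition model (Scheme 3 type): peaked curve.
V1, k2, l, alpha, beta, sqrt_term = 1e-7, 1e4, 5.0, 0.1, 0.05, 0.7
f = k2 * x**2 + (alpha * l + beta + x) * sqrt_term
V_peaked = V1 * (alpha * l + x) * k2 * x / f

# Conventional-model variant: monotonic curve with a horizontal asymptote,
# V = a0*x / (b0*x + c0).
a0, b0, c0 = 1e-7, 1.0, 0.05
V_monotonic = a0 * x / (b0 * x + c0)

i = np.argmax(V_peaked)
print(f"peaked model: interior maximum at x = {x[i]:.3f}")
print(f"monotonic model: argmax at the boundary, x = {x[np.argmax(V_monotonic)]:.3f}")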
Addition of the Hydrogen Atom

A number of experimental findings concerning the autoinhibiting effect of an increasing oxygen concentration at modest temperatures on hydrogen oxidation both in the liquid phase [63] (Fig. 4, curve 2) and in the gas phase [47,64,65] (Fig. 5), considered in our earlier work [13,56,57,66], can also be explained in terms of the competition kinetics of free-radical addition [14,67]. Fig. 5 shows that the quantum yields of hydrogen peroxide and water (the products of the photochemical oxidation of hydrogen at atmospheric pressure and room temperature) are maximal in the region of small oxygen concentrations in the hydrogen-oxygen system (curves 1 and 2, respectively) [64].

Figure 5. (1, 2) Quantum yields of (1, ●) hydrogen peroxide and (2, ○) water resulting from the photochemical oxidation of hydrogen in the hydrogen-oxygen system as functions of the oxygen concentration x (light wavelength of 171.9-172.5 nm, total pressure of 10⁵ Pa, room temperature [64]). (3, 4) Hydrogen peroxide formation rate V(H2O2) (dashed curves) as a function of the rate V(O2) at which molecular oxygen is passed through a gas-discharge tube filled with (3, Δ) atomic and (4, □) molecular hydrogen; atomic hydrogen was obtained from molecular hydrogen in the gas-discharge tube before the measurements (total pressure of 25-77 Pa, temperature of 77 K [47]). The symbols represent experimental data.

Scheme 4. Nonbranched-chain oxidation of hydrogen and changes in enthalpy (ΔH°298, kJ mol⁻¹) for the elementary reactions.

According to Francisco and Williams [49], the enthalpy of formation of the HO4• radical has been estimated; calculations for the HO4• radical with a helical structure were carried out using the G2(MP2) method [68], giving stabilization energies of up to 88.5 ± 0.8 kJ mol⁻¹. The types of the O4 molecular dimers, their IR spectra, and higher oxygen oligomers were reported [69,70]. The structure and IR spectrum of the hypothetical cyclotetraoxygen molecule O4, a species with a high energy density, were calculated by the CCSD method, and its enthalpy of formation was estimated [71]. The photochemical properties of O4 and the van der Waals nature of the O2-O2 bond were investigated [72,73]. The most stable geometry of the dimer is two O2 molecules parallel to one another; the O4 molecule was identified by NR mass spectrometry [74]. The hydroperoxyl free radical HO2• [75-78] resulting from reaction 2 possesses an increased energy due to the energy released in the conversion of the O=O multiple bond into the HO-O• ordinary bond. Therefore, before its possible decomposition, it can interact with a hydrogen or an oxygen molecule as the third body via the parallel (competing) reactions 3 and 4, respectively. The hydroxyl radical HO• that appears and disappears in the consecutive parallel reactions 3 (first variant) and 3′ possesses additional energy owing to the exothermicity of the first variant of reaction 3, whose heat is distributed between the two products. As a consequence, this radical has a sufficiently high reactivity not to accumulate in the system during these reactions, whose rates are equal (V3 = V3′) under quasi-steady-state conditions, according to the above scheme. The parallel reactions 3 (second, parenthesized variant) and 3′ regenerate hydrogen atoms. It is assumed [56,57] that the hydrotetraoxyl radical HO4• (first reported in [79,80]), which results from the endothermic reaction 4 and is responsible for the peak in the experimental rate curve (Fig. 4, curve 2), is closed into a five-membered [OO-H···OO]• cycle by weak intramolecular hydrogen bonding [54,81].
This structure imparts additional stability to this radical and makes it the least reactive. The HO4• radical was discovered by Staehelin et al. [82] in a pulsed-radiolysis study of ozone degradation in water; its UV spectrum has an absorption maximum at 260 nm, and the molar absorption coefficient of HO4• is almost two times larger than that of ozone [82]. The assumption about the cyclic structure of the HO4• radical can stem from the fact that its mean lifetime in water at 294 K is (3.6 ± 0.4) × 10⁻⁵ s (as estimated in [66]); the HO4• radical can exist in both open and cyclic forms, but the cyclic structure is obviously dominant (87%, Keq = 6.5) [85]. Reaction 4 and, to a much lesser degree, reaction 6 inhibit the chain process, because they lead to an inefficient consumption of its main participants, HO2• and H•. The hydrogen molecule that results from reaction 5 in the gas bulk possesses an excess energy and, to acquire stability within the approximation used in this work, it should have time for deactivation via collision with a particle M capable of accepting the excess energy [87]. To simplify the form of the kinetic equations, it was assumed that the rate of the bimolecular deactivation of the molecule substantially exceeds the rate of its monomolecular decomposition, which is the reverse of reaction 5 [2]. The planar, six-atom, cyclic, hydrogen-bonded dimer (HO2•)2 was calculated using quantum-chemical methods (B3LYP density functional theory) [88]; the hydrogen-bond energy is 47.7 and 49.4 kJ mol⁻¹ at 298 K for the triplet and singlet states of the dimer, respectively. The reaction of ozone with H• atoms, which is not impossible, results in their replacement with HO• radicals. The relative contributions from reactions 6 and 7 to the process kinetics can be roughly estimated from the corresponding enthalpy increments (Scheme 4). When there is no excess hydrogen in the hydrogen-oxygen system, and the homomolecular dimer O4 [71-74,89,90], which exists at low concentrations (depending on the pressure and temperature) in equilibrium with O2 [70], can directly capture the H• atom to yield the heteronuclear cluster HO4•, which is more stable than O4 [70] and cannot abstract a hydrogen atom from the hydrogen molecule, nonchain hydrogen oxidation will occur, giving molecular oxidation products via the disproportionation of free radicals. (It is impossible to make a sharp distinction between the two-step bimolecular interaction of three species via the equilibrium formation of the labile intermediate O4 and the elementary trimolecular reaction O2 + O2 + H• → HO4•.) The low-reactive hydrotetraoxyl radical HO4• [82], which presumably has a high energy density [71], may be an intermediate in the efficient absorption and conversion of biologically hazardous UV radiation energy in the Earth's upper atmosphere. The potential-energy surface for the atmospheric reaction HO• + O3 has been studied; the HO• radical and O3 are observed at altitudes of 30-80 and 40-130 km, respectively [93]. Staehelin et al. [82] pointed out that, in natural systems, in which the concentrations of intermediates are often very low, kinetic chains in chain reactions can be very long in the absence of scavengers, since the rates of the chain-termination reactions decrease with decreasing concentrations of the intermediates according to a quadratic law, whereas the rates of the chain-propagation reactions decrease according to a linear law.
The kinetic description of the noncatalytic oxidation of hydrogen, including that in an inert medium [87], in terms of the simplified scheme of free-radical nonbranched-chain reactions (Scheme 4), which considers only quadratic-law chain termination and ignores surface effects [47], at moderate temperatures and pressures, in the absence of transitions to unsteady-state critical regimes, and at a substantial excess of the hydrogen concentration over the oxygen concentration, was obtained by means of the quasi-steady-state treatment, as in the previous studies on the kinetics of the branched-chain free-radical oxidation of hydrogen [76], even though the applicability of this method in the latter case under unsteady-state conditions was insufficiently substantiated. The method was used with the condition k6 = √(2k5 · 2k7) (for example, the ratio of the rate constants of the bimolecular disproportionation and dimerization of free radicals at room temperature is k(HO• + HO2•)/[2k(2HO•) · 2k(2HO2•)]^0.5 = 2.8 in the atmosphere [92] and k(H• + HO•)/[2k(2H•) · 2k(2HO•)]^0.5 = 1.5 in water [94]; these values are fairly close to unity). The resulting equations for the rates of reactions 6 and 7 (quadratic-law chain termination) are identical to Eqs. (13) and (14) provided that β = 0. In these equations, l and x are the molar concentrations of hydrogen and oxygen (l >> x); lm and xm are the respective concentrations at the maximum point of the function; V1 is the rate of initiation (reaction 1); α = k3/k4; the rate constant k2 is derived from the condition ∂V3/∂x = 0; and 2k5 is the rate constant of reaction 5 (hydrogen-atom recombination), which is considered bimolecular within the given approximation. In the case of nonchain hydrogen oxidation via the above addition reaction of H• to the O4 dimer, the formation rates of the molecular oxidation products in reactions 6 and 7 (Scheme 4, k2 = k3 = k4 = 0) are defined by modified Eqs. (13) and (14), in which β = 0, (αl + x) is replaced with 1, and k2 is replaced with kaddKeq (kaddKeq is the effective rate constant of H• addition to the O4 dimer; Keq = k/k′ is the equilibrium constant of the reversible reaction 2O2 ⇌ O4, with k′ >> kadd[H•]). The formation rates of the stable products of nonchain oxidation (k3 = 0), provided that either reaction 2 or reactions 2 and 4 occur, are given by modified Eqs. (13) and (14) with β = 0, (αl + x) replaced with 1, and x² replaced with x. Note that, if in Scheme 4 chain initiation via reaction 1 is due to the interaction between molecular hydrogen and molecular oxygen yielding the hydroxyl radical HO• instead of H• atoms, and if this radical reacts with an oxygen molecule (reaction 4) to form the hydrotrioxyl radical HO3• (which was obtained in the gas phase by neutralization-reionization (NR) mass spectrometry [83] and has a lifetime of >10⁻⁶ s at 298 K), and if chain termination takes place via reactions 5-7 involving the HO• and HO3• radicals instead of H• and HO4•, respectively, then the expressions for the water chain-formation rates derived in the same way will appear as a rational
function of the oxygen concentration x without a maximum. (The rate constant of reaction 2, in the case of the pulsed radiolysis of ammonia-oxygen (+ argon) gaseous mixtures at a total pressure of 10⁵ Pa and a temperature of 349 K, was calculated to be 1.6 × 10⁸ dm³ mol⁻¹ s⁻¹ [65]; a similar value of this constant for the gas phase was reported in an earlier publication [95]. Pagsberg et al. [65] found that the dependence of the yield of the intermediate HO• on the oxygen concentration has a maximum close to 5 × 10⁻⁴ mol dm⁻³. In their computer simulation of the process, they considered the strongly exothermic reaction HO2• + NH3 → H2O + •NHOH, which is similar to reaction 3 in Scheme 4, whereas the competing reaction 4 was not taken into account.) Curve 2 in Fig. 4 describes, in terms of the overall equation for the rates of reactions 3 and 7 (derived from Eqs. (3a) and (14), respectively, the latter in the form given in [96], in which k2 is replaced everywhere with its analytical expression derived from Eq. (10) with β = 0), the dependence of the hydrogen peroxide formation rate (minus the rate V(H2O2) = 5.19 × 10⁻⁸ mol dm⁻³ s⁻¹ of the primary formation of hydrogen peroxide after completion of the reactions in spurs) on the concentration of dissolved oxygen during the γ-radiolysis of water saturated with hydrogen (at the initial concentration 7 × 10⁻⁴ mol dm⁻³) at 296 K [63]. These data were calculated in the present work from the initial slopes of the hydrogen peroxide buildup versus dose curves for a 60Co γ-radiation dose rate of P = 0.67 Gy s⁻¹ and absorbed doses of D ≈ 22.5-304.0 Gy. The primary radiation-chemical yields G (species per 100 eV of energy absorbed) for the water γ-radiolysis products in the bulk of solution at pH 4-9 and room temperature were taken from [94] (taking into account that V = GP and V1 = G(H•)P); V1 = 4.15 × 10⁻⁸ mol dm⁻³ s⁻¹ and 2k5 = 2.0 × 10¹⁰ dm³ mol⁻¹ s⁻¹ [94]. As can be seen from Fig. 4, the best description of the data with an increase in the oxygen concentration in water is attained when the rate V7 of the formation of hydrogen peroxide via the nonchain mechanism in the chain-termination reaction 7 (curve 1, α = (8.5 ± 2) × 10⁻²) is taken into account in addition to the rate V3 of the chain formation of this product via the propagation reaction 3 (dashed curve 2, α = 0.11 ± 0.026). The rate constant of the addition reaction 2 determined from α is substantially underestimated: k2 = 1.34 × 10⁷ (vs. 2.0 × 10¹⁰ [94]) dm³ mol⁻¹ s⁻¹. The difference can be due to the fact that the radiation-chemical specifics of the process were not considered in the kinetic description of the experimental data; these include oxygen consumption via reactions that are not involved in the hydrogen oxidation scheme [66,97,98].

GENERAL SCHEME OF THE ADDITION OF FREE RADICALS TO MOLECULES OF ALKENES, FORMALDEHYDE, AND OXYGEN

The general scheme of the nonbranched-chain addition of a free radical from a saturated compound to an alkene (or its functionalized derivative), formaldehyde, or dioxygen (which can add an unsaturated radical as well) in liquid homogeneous binary systems of these components includes the following reactions [57,97,98]. The main molecular products of the chain process, R3A, R′R″CO, and R4A, result from reactions 3, 3a, and 3b, i.e., from chain propagation through the reactive free radical •R1 or •R4. The competing reaction 4, which opposes this chain propagation, yields the by-product R3B via a nonchain mechanism.
The rate of formation of the products is a complicated function of the formation rates (V3a = V3b) and disappearance rates of the free radicals involved (Eqs. (12)-(14)). The rate ratios of the competing reactions are V3/V4(4a) = αl/x and V3a/V4(4a) = β/x (where α = k3/k4(4a) and β = k3a/k4(4a) (mol dm⁻³), and l and x are the molar concentrations of the reactants R1A and R2B, respectively), and the chain length is ν = (V3 + V3a)/V1. Unlike the dependences of the rates of reactions 4a (or 4 at k1b = 0, with V4(4a) ≤ V1), 5, and 7 (for the last two, Eqs. (12) and (14)), the dependences of the rates V of reactions 3, 3a,b, 4 (at k1b ≠ 0), and 6 (Eqs. (1), (3)-(6), (10), (11), and (13)) on x have a maximum. Reaction 1b, which competes with reaction 1a, gives rise to the maximum in the dependence described by Eq. (2), whereas reaction 4 or 4a, competing with reactions 3 and 3a,b, is responsible for the maxima in the dependences defined by the other equations. Simplified forms of these equations allow tentative estimates of the parameters k2 and α to be derived from the experimental product formation rate V, provided that V1 and 2k5 are known, where δ = 1 under conditions (a) and (b) and δ = 2 at the point of maximum (where k2x² ≅ (αl + x)√(2k5V1)). Equations (10) and (11), under the condition k2x² >> (αl + β + x)√(2k5V1) (the descending branch of a peaked curve), can be transformed into Eqs. (17) and (18), respectively, which express simple, inversely proportional dependences of the reaction rates on x and provide tentative estimates of α and β; here δ = 2 at the point of maximum (where k2x² ≅ (αl + β + x)√(2k5V1)) and δ = 1 for the descending branch of the curve. Equation (3) for V3,4 under condition (b) transforms into Eq. (17). For radiation-chemical processes, the rates V in the kinetic equations should be replaced with radiation-chemical yields G, using the necessary unit-conversion factors and the relationships V = GP and V1 = ε1G(•R1)P, where P is the dose rate, ε1 is the electron fraction of the saturated component R1A in the reaction system [100], and G(•R1) is the initial yield of the chain-carrier free radicals (addends), i.e., the initiation yield [39,94].

CONCLUSIONS

In summary, the material on the kinetics of the nonbranched-chain addition of free saturated radicals to the multiple bonds of alkene (and its derivative), formaldehyde, or oxygen molecules makes it possible to describe, using rate equations (1)-(6) and (9)-(11) obtained by the quasi-steady-state treatment, experimental dependences with a maximum of the formation rates of molecular 1:1 adducts on the concentration of the unsaturated compound over the entire region of its change in binary reaction systems consisting of saturated and unsaturated components (Figs. 1, 3, 4). The proposed addition mechanism involves the reaction of a free 1:1 adduct radical with an unsaturated molecule yielding a low-reactive free radical (reaction 4, which competes with the chain-propagation reactions in Schemes 1-5). In such reaction systems, the unsaturated compound is both a reactant and an autoinhibitor, specifically, a source of low-reactive free radicals that shorten the kinetic chains. The progressive inhibition of the nonbranched-chain processes, which takes place as the concentration of the unsaturated compound is raised (after the maximum process rate is reached), can be an element of the self-regulation of natural processes that returns them to the stable steady state.
A similar description is applicable to the nonbranched-chain free-radical oxidation of hydrogen in water at 296 K [63] (Fig. 4, curve 2). Using the hydrogen oxidation mechanism considered here, it has been demonstrated that, in the Earth's upper atmosphere, the decomposition of O3 in its reaction with the HO• radical can occur via the addition of the latter to the ozone molecule, yielding the HO4• radical [82]. The optimum concentration xm of the unsaturated component in the binary system, at which the process rate is maximal, can be derived with the help of the obtained kinetic equations (3a), (4a), (10a), and (11a), or from the corresponding analytical expressions for k2 if the other parameters are known. This opens a way to the intensification of some technological processes that are based on the addition of free radicals to the double bonds of unsaturated molecules and occur via a nonbranched-chain mechanism through the formation of 1:1 adducts.
Energy Aware Processor Architecture for Effective Scheduling and Power Management in Cloud Using Inclusive Power-Cognizant Processor Controller

The fast acceptance of cloud technology by industry explains the increasing need for energy conservation and the adoption of energy-aware scheduling methods in the cloud. Power consumption is one of the top-of-mind issues in the cloud, because the use of cloud storage by individuals and organizations is growing rapidly. Developing an efficient power-management processor architecture has gained considerable attention. However, conventional power-management mechanisms fail to consider task-scheduling policies. Therefore, this work presents a novel energy-aware framework for power management. The proposed system leads to the development of an Inclusive Power-Cognizant Processor Controller (IPCPC) for efficient power utilization. To evaluate the performance of the proposed method, simulation experiments inputting random tasks as well as tasks collected from Google Trace Logs were conducted to validate the supremacy of IPCPC. The research based on real-world Google Trace Logs shows that the proposed framework leads to less than 9% of the total power consumption per task of a server, which proves a reduction in the overall power needed. Introduction Cloud computing and its pay-as-you-use cost model have enabled software service providers, application service providers, hardware infrastructure service providers, and platform service providers to provide computing services on demand, paid per use. This upward drift in cloud computing, combined with the demand for data-storage virtualization, is driving the rapid evolution of datacenter technologies towards more cost-effective, user-driven, and energy-efficient solutions. Cloud computing is defined as "a large-scale distributed computing paradigm that is driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms, and services are delivered on demand to external customers over the Internet" [1]. Power consumption is one of the prominent issues in the cloud [2]. In the cloud model, data owned by a user is managed in a distributed manner, and allocating resources to a correctly identified user process in a distributed cloud system consumes more energy. Moreover, multiple users access the cloud at the same time, which increases the energy cost enormously; this high energy consumption produces a huge amount of heat, and consequently the hardware system fails [3].
In a cloud data center, because of varying workloads, it is common for most servers to run at low utilization. Energy efficiency in a cloud data center can be achieved by putting idle servers to sleep, thereby reducing power consumption. Under low load, processor utilization is about 10%, yet power consumption is over 50% of the peak power [4]. In the cloud model, multiple datacenter applications are hosted on a common set of servers. This permits application workloads to be consolidated onto a small number of servers, which are then better utilized. Consolidation can be problematic, however, if it packs the maximum workload onto a minimal number of servers, which consequently suffer from performance degradation. Reducing the energy consumption of a cloud data center is thus a challenging task. The concept of green computing has gained much attention recently; it was developed for efficient resource utilization as well as for the reduction of energy consumption. The proposed work presents a framework for power management in the cloud. The proposed idea is implemented by calculating how much power and which configurations are required for a server to process a task, such as uploading a file; the task is then scheduled to the server that requires the minimum power to process it. The proposed system introduces a novel Inclusive Power-Cognizant Processor Controller (IPCPC) for minimizing power utilization; IPCPC integrates a Collection of Configuration Management (CCM) module, a Server/Task Mapping (STM) module, and an Anticipating Power Manager (APM). CCM is used for estimating the server configurations in the data center, and IPCPC enables CCM to set the configuration of a server. STM is used for scheduling and task mapping. APM estimates the current power consumption of a server by identifying three major portions of the power consumption: the power consumed by processor execution, the power consumed by the server excluding the processors, and the baseline power consumption of the idle processor. The output of APM is given to the energy-aware Earliest Deadline First algorithm, which maps tasks to the virtual machines of the servers. Unused virtual machines of a server can be turned off and its working frequency lowered to reduce power consumption and prolong the lifetime of the multiple servers. The main objectives of the proposed work are as follows.

• Enhance the system performance by using a task scheduling algorithm.
• Minimize the power consumption.

The rest of this paper is organized as follows. Section 2 reviews previous work on power management and scheduling in the cloud. Section 3 introduces the detailed architecture of the proposed work, and Section 4 analyzes the experimental results. Conclusions are finally drawn in Section 5.

Related Work

Energy conservation in cloud computing is attracting a wide range of attention in research and is leading to a new computing era known as green computing. Efficient scheduling techniques for reducing the energy consumption of data centers have been thoroughly examined in [5]-[7].
Chase, Anderson, Thakar, Vahdat, and Doyle address the energy-efficient management of homogeneous resources in Internet hosting centers; their method performs power-efficient resource allocation at the data-center level and reduces energy consumption by switching idle servers to power-saving modes [8]. Banerjee, Agrawal, and Iyengar [2] investigate all areas of a typical cloud infrastructure responsible for significant energy consumption and propose methodologies for decreasing power utilization. Kuribayashi [3] identifies the need for collaboration among all the servers, the communication network, and the power network to reduce power consumption in a cloud environment; the paper proposes signaling sequences to exchange power-consumption information between the network and the servers, and, to realize the proposed policy, estimates the power consumed by all network devices and assigns it to individual users. Zhang, Li, Lo, and Zhang [4] consider several green task-scheduling algorithms for heterogeneous computers with continuous or discrete speeds; all of these algorithms focus on minimizing energy consumption and on determining an optimal speed for the tasks assigned to each computer. Uchechukwu, Li, and Shen [9] characterize energy consumption and performance in cloud environments by analyzing and measuring the impact of various task and system configurations, and present formulas for calculating the total energy consumption of a cloud environment. Younge, von Laszewski, Wang, Lopez-Alarcon, and Carithers [10] present a framework for efficient green enhancements within a scalable cloud-computing architecture; it derives efficient methods for VM scheduling, VM image management, and advanced data-center design. The scheduling technique addressed there places VMs within the cloud infrastructure while minimizing the operating costs of the cloud itself, typically by optimizing either the power of the server equipment or the overall temperature within the data center, while the image management controls and manipulates the size and placement of VM images in various ways to conserve power. Ma, Gong, Sugihara, and Gupta [11] investigate power-aware scheduling algorithms for heterogeneous systems that must meet deadline constraints in high-performance computing applications; a pricing scheme is also presented in which the price of a task varies with its energy usage and with the rigidity of its deadline. Wang et al. [12] study the reduction of the power consumed by parallel tasks in a cluster using the Dynamic Voltage and Frequency Scaling (DVFS) technique, and also discuss the relationship between energy consumption and task execution time.
Basmadjian, De Meer, Lent, and Giuliani [13] study private cloud-computing environments from the perspective of energy saving. The paper presents a generic conceptual description of the ICT resources of a data center, identifies their energy-related attributes, and presents power-consumption prediction models for servers, storage devices, and network equipment; it shows that by applying appropriate energy-optimization policies guided by accurate power-consumption prediction models, about 20% of the energy consumption of a typical single-site private cloud data center can be saved.
Recently, a number of research works have addressed energy-efficient scheduling in data centers [14]. The orthodox power-reduction approach in a cloud system relies on an automatic scheme to control the usage of peripheral operations and the processor frequency. These mechanisms fail to meet user requirements and do not consider the workloads and operational status of the processors across the multiple cloud servers in a data warehouse. Moreover, not all processors are required, since cloud devices are not heavily loaded for most of their idle time; the unused idle processors can be shut down to save more power. In this paper, a novel framework is established to reduce the total energy consumption of data centers. The proposed method shows that by combining an energy-consumption reduction technique with a suitable scheduling technique, a large amount of power can be saved in cloud data centers. Our main contributions to cloud storage through the proposed Inclusive Power-Cognizant Processor Controller are as follows.
• An innovative concept to reduce server power consumption through Server/Task Mapping.
• Power management for the entire cloud-storage system.

Power-Aware Processor Using the Inclusive Power-Cognizant Processor Controller
This section gives a detailed explanation of the energy-aware scheduler IPCPC, which is proposed to minimize the power consumption of the servers and thus enhance system performance. IPCPC collects the configuration details of a server when a task is issued or completed, based on the current status and workload configuration of the server. It can manage host on/off states, adjust working frequencies, and schedule the task queues of each server to achieve the best system performance and to reduce the power consumption of the server system. To achieve this, the mechanism schedules the tasks of the task set under some constraints. First, the tasks entering the system are sorted by their deadline. Second, all possible system configurations are determined by IPCPC. Then, the tasks are scheduled to the most feasible configuration to achieve improved load balance as well as reduced power consumption. To this end, the IPCPC processor manager uses three techniques: CCM, STM, and APM. Figure 1 illustrates the conceptual organization of IPCPC. The following subsections introduce the details of these three mechanisms.
Collection of Configuration Management Technique
In a cloud system with IPCPC, a large number of tasks are submitted to the cloud and maintained in a task set denoted T = {T1, T2, …, Tn}. Assume the number of available servers in the cloud is K; the data center's servers are denoted S = {S1, S2, …, SK}, and each server hosts a number of virtual machines according to its capacity. The enabling status of server i is Si = 1 when the server is powered on and Si = 0 when the server is shut down or asleep. The set of all possible combinations of data-center enabling statuses is DC = {DC1, DC2, …, DC(2^K − 1)}, where DC1 = (1, 0, …, 0) and DC(2^K − 1) = (1, 1, …, 1); excluding the all-off state, the number of combinations of server enabling statuses is 2^K − 1. The set of possible working frequencies of server Si is Fi = {fij | 1 ≤ j ≤ m, fi1 < fi2 < … < fim}, where fi1 is the lowest frequency and fim the highest. The working frequencies of all server systems are therefore denoted Freq = {F1, …, Fg, …, FK}. The workload and the executing server of task i are denoted Ti.L and Ti.S, respectively; the set of server assignments of all tasks is TS = {T1.S, T2.S, …, Tn.S}, and TS_cur and TS_temp represent the current and temporary task sets. The proposed CCM technique must be executed to evaluate a feasible server configuration. CCM runs when a task is issued (Tissue = Begin) or when a task is completed (Tissue = Completed). CCM determines the possible system configurations that achieve the lowest virtual-machine migration, excellent load balance, and the highest working frequency; from the collected configuration details, a suitable one is selected for allocation. The server-system configuration generated by CCM is denoted as

Config = {Power, S, freq, L, TM}.  (1)

Equation (1) consists of five components: S denotes a feasible server system, freq the working frequency of the server, and Power the expected power consumption of the server, calculated using the Anticipating Power Model; L is the highest workload of the server, and TM is the maximum number of task migrations performed by STM. The additional functions of IPCPC are as follows. Offline computing evaluates the relevant parameters λ, ω, β, and TS_cur, which are used by CCM and APM. Server/Task Mapping STM(Ti.L, Di) schedules and assigns the tasks based on their load and deadline; the following subsection discusses this technique in detail. Load(i, TS) estimates and returns the workload of the task set on server i, where the task set is scheduled and reassigned in order to improve the load balance; this value is also used by the Anticipating Power Model to predict the power.
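To make the configuration bookkeeping concrete, the following is a minimal Python sketch of the Config tuple of Eq. (1) and a brute-force CCM-style selection. All names here (Config, enumerate_enabling_states, ccm_select) and the tie-breaking rule are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Config:
    """One candidate server-system configuration, Eq. (1): {Power, S, freq, L, TM}."""
    power: float        # expected power consumption (from the APM model)
    enabled: tuple      # server enabling status, e.g. (1, 0, 1) for K = 3
    freq: tuple         # chosen working frequency per server
    load: float         # highest per-server workload under this configuration
    migrations: int     # task migrations STM would need to reach this configuration

def enumerate_enabling_states(K):
    """All 2^K - 1 enabling combinations with at least one server on.
    Exponential in K, so this brute force is only practical for small K."""
    return [s for s in product((0, 1), repeat=K) if any(s)]

def ccm_select(candidates):
    """Pick the configuration with the lowest predicted power, breaking
    ties by fewer migrations and then by lower peak load."""
    return min(candidates, key=lambda c: (c.power, c.migrations, c.load))
```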
Server/Task Mapping
The power consumption of a server in the cloud is notably affected by its workload; in fact, a good workload balance among the servers improves the overall performance of the data center. To achieve load balancing, the proposed concept uses an effective scheduling algorithm, Earliest Deadline First (EDF). The scheduling algorithm considers factors such as the deadline, cost, reliability, and availability of the workflow. The performance of a job depends on the execution time ei(Ti) of task Ti on the server machine; for this reason, the execution time of a task is calculated from the MIPS rate before the task is assigned to a server. The deadline of a task is denoted di.

Task Arrangement
In the cloud, a large number of tasks T = {T1, T2, …, Tn} and servers S = {S1, S2, …, SK} are available. Algorithm 1 gives a detailed description of the task arrangement in the queue. Initially, the queue Q, the current task set TS_cur, and the temporary task set TS_temp are assumed to be empty. The current task set contains the tasks currently available for scheduling, while the temporary task set holds the currently executing tasks. When a task Ti enters the cloud, this is denoted Tissue = Begin. The basic idea of the proposed task-arrangement algorithm is to order the arriving task set by deadline. The load of each task is calculated as Ti.L using the auxiliary function Load(i, TS), and its deadline di is derived from the task length, where i varies from 0 to n. The task set is maintained in the queue, and the workload of the task set is the sum of the individual task loads. The tasks are sorted in ascending order of load and deadline; the corresponding flow chart is shown in Figure 2. The steps of the scheduling algorithm (Algorithm 2) as given, with step 1 being the initial deadline-based ordering produced by Algorithm 1, are:
2) Use CCM to gather the configuration of the server.
3) Sort all Si in descending order.
4) If Si has a feasible configuration, then
5) choose Si with Config = {Power, S, Freq, L, TM},
6) assign task Ti to Si, and
7) update Si.Config = {Power, S, Freq, L, TM} after completing the allocation of the task;
8) else, if Si.Config < Si+1.Config,
9) choose Si+1.Config.
The basic idea of the EDF scheduling algorithm is to use APM and CCM to balance the workload and to reduce the power consumption of the servers. Before a task is scheduled, the power of the current server system is calculated using APM and the system configuration is measured using CCM. Based on this information, the task is scheduled to the feasible server system, and TS_temp is incremented by one. When the task is fully completed, Tissue = Completed is initiated. Algorithm 2 gives a detailed description of the scheduling process. The EDF scheduling algorithm takes as parameters the load of each individual task and its deadline; the proposed work calculates the deadline from the task length. The task with the minimum load and earliest deadline is moved to the head of the queue and assigned to the enabled server i. The tasks currently in the queue are submitted for scheduling after being arranged in ascending order of workload and deadline. The server-system configuration is identified and the status of the server is evaluated; based on this information, it is determined whether the server has the capacity to accommodate the task, and if so, the task is allocated to that server. Figure 3 shows the data-flow diagram for the EDF scheduling algorithm.
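As a concrete reading of Algorithms 1 and 2, the following sketch sorts tasks by deadline and load and then maps each one to the feasible server with the lowest predicted power. It is a simplified illustration under assumed data structures (Task, Server, power_per_load), not the authors' simulator code.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    load: float      # Ti.L, workload derived from the task length
    deadline: float  # di

@dataclass
class Server:
    name: str
    capacity: float          # remaining load this server can accept
    power_per_load: float    # simple stand-in for the APM power estimate
    assigned: list = field(default_factory=list)

    def predicted_power(self, extra_load=0.0):
        used = sum(t.load for t in self.assigned) + extra_load
        return used * self.power_per_load

def edf_schedule(tasks, servers):
    """Order tasks by (deadline, load), then assign each one to the feasible
    server whose predicted power after the assignment is lowest."""
    for task in sorted(tasks, key=lambda t: (t.deadline, t.load)):
        feasible = [s for s in servers
                    if sum(t.load for t in s.assigned) + task.load <= s.capacity]
        if not feasible:
            continue  # task rejected: no server can host it (lowers guarantee ratio)
        best = min(feasible, key=lambda s: s.predicted_power(task.load))
        best.assigned.append(task)
    return servers
```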
Anticipating Power Model
As mentioned earlier, IPCPC has three major techniques that define the power-aware model of the cloud. The power level of each server cannot be measured exactly and promptly with a power meter, so this section explains how the power of a server is predicted using APM. APM estimates the current power consumption of a server by identifying the three major portions of that consumption: the power consumed by server execution, the power consumed by the other components apart from the server, and the baseline power consumption of the idle server. The predicted power is

P_APM = ε + β,  (2)

where ε denotes the power consumed by the server's core processor and β the power consumed by the other components of the cloud system apart from the server processor; β can be treated as a constant when the configurations of the components in the cloud servers are the same. When the data center consists of K servers and the power consumption of server h is denoted μh, the total power consumption of the servers is

P_total = Σ_{h=1..K} μh.  (3)

According to the results of [15] [16], the power consumption of a server core is formulated as P = K·C·V²·f, where K is a constant, C is the capacitance of the server, V is its working voltage, and f is the working frequency of the server processor. As the system workload increases, the power consumption of the server processor also increases, and the on/off enabling status of the server likewise affects its power consumption. Equation (3) can therefore be extended as

P_total = Σ_{h=1..K} Ph · (C·Vh²·Fh + ω·loadh),  (4)

where Ph denotes the enabling status of server h (Ph = 1 when the power of the server is turned on, Ph = 0 when server h is off or asleep), Fh represents its working frequency, Vh its working voltage, and loadh its workload, which can be estimated from the additional IPCPC functions Load(i, Ti.L) and TS_cur; ω is a constant relating the workload to the power consumption of the server. Finally, the overall power consumption of the system can be represented as

P_system = ε · Σ_{h=1..K} Ph·C·Vh²·Fh + ω · Σ_{h=1..K} Ph·loadh + β,  (5)

where ε, β, and ω are obtained from offline-computing() and can vary between cloud systems.

Simulation and Experimental Results
This section presents the experimental analysis of the IPCPC defined in Section 3. Experiments were conducted to analyze the power consumption of each server. To demonstrate the performance improvements of IPCPC, the proposed algorithm is compared with EARH [17] as well as with the existing scheduling algorithms Greedy-R [18], Greedy-P [18], and FCFS [18]. The performance metrics used to assess the system include the following power-consumption parameters: resource utilization by task (RU), the number of resources used by a task; effective utilization (EU), whether resources are effectively utilized as the number of tasks varies; guarantee ratio (GR), the fraction of tasks in the entire task set guaranteed to meet their deadlines; total energy consumption (ΔEC_total), the total energy consumed by the servers; and power consumption per task (PCT), the total power consumption per accepted task.
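As a concrete illustration of Eqs. (2)-(5) as reconstructed above, the following sketch computes the predicted system power from per-server state. The parameter values and the function name predict_system_power are assumptions made only for this example.

```python
def predict_system_power(servers, eps, omega, beta):
    """Predicted overall power, following Eq. (5):
    P_system = eps * sum(P_h * C_h * V_h^2 * F_h) + omega * sum(P_h * load_h) + beta.
    Each server is a dict with keys: on (0/1), C, V, F, load."""
    core = sum(s["on"] * s["C"] * s["V"] ** 2 * s["F"] for s in servers)
    load = sum(s["on"] * s["load"] for s in servers)
    return eps * core + omega * load + beta

# Example: two enabled servers (one at a lowered frequency) and one sleeping server
servers = [
    {"on": 1, "C": 1.0e-9, "V": 1.1, "F": 2.4e9, "load": 0.6},
    {"on": 1, "C": 1.0e-9, "V": 0.9, "F": 1.2e9, "load": 0.3},
    {"on": 0, "C": 1.0e-9, "V": 1.1, "F": 2.4e9, "load": 0.0},
]
print(predict_system_power(servers, eps=10.0, omega=40.0, beta=25.0))
```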
Experimental Setup
The CloudSim toolkit is used as the simulation platform in this application. A data center was simulated comprising multiple hosts, each with CPU performance equivalent to 9600 MIPS, 40 GB of RAM, and 11 TB of storage. Each virtual machine requires up to 2400 MIPS. These VMs are needed to support a wide variety of hardware, software, and varying user tasks. The Xen hypervisor provides the virtualized hardware to each VM, and an operating system is needed within each VM to accomplish the task; x86 hardware with the Linux operating system is suggested for this application. This configuration is able to detect the various loads of the tasks, and running modprobe to load a single module takes only 15 seconds.
The aim of this set of experiments is to validate the performance effect of the EDF scheduling algorithm. Figure 4 shows the performance of the EDF scheduling algorithm compared with Cura [19] and the three other existing algorithms; the comparison parameter is resource utilization under varying deadlines. To demonstrate the performance improvements of IPCPC, the proposed algorithm is compared with EARH as well as with the existing scheduling algorithms Greedy-R, Greedy-P, and FCFS. Figure 5 shows the comparison between the proposed IPCPC and Cura, using resource utilization under a varying task count as the parameter; resource utilization is considered for the comparison because ineffective utilization of cloud resources certainly undermines the power savings. Figure 6(a) shows that the algorithm essentially maintains its guarantee ratio even when the task count is varied; IPCPC with EDF achieves a higher guarantee ratio than the other algorithms. Figure 6(b) gives the power consumption per task for an increasing task count; considering these outcomes, it is established that the proposed IPCPC has the lowest power consumption. Figure 6(c) compares the total energy consumption of the tasks; six different algorithms are compared here, and it can be verified that the proposed IPCPC achieves the more efficient result.

Evaluation Based on Real Data from Google Trace
The above groups of experiments show the performance of the different algorithms with various randomly generated input tasks. To evaluate the proposed algorithm in practical use, experiments were carried out using data from a real-world Google trace as input; details of the Google trace logs are given in [20]. The trace log covers 29 days: in total, 25 million tasks grouped into 650 thousand jobs were processed at Google over nearly one month. Since this is a massive amount of data, only the first 5 hours of day 18 [20] were chosen for testing; during these 5 hours, 200 thousand tasks were submitted to the cloud. The task counts vary over time, and finishing a task takes 1587 seconds on average from its submission. The effective utilization of resources for a varying task count, based on tasks collected from the Google trace log, is shown in Figure 7. The total number of tasks guaranteed to meet their deadlines, based on tasks collected from the Google trace log, is shown in Figure 8. Figure 9 gives the power consumption per task for an increasing task count, and Figure 10 gives the total energy consumption for a varying task count, both from experiments conducted on the tasks collected from the Google trace log.
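The following is a small sketch of how the reported metrics could be computed from a completed simulation run; the record layout and the function names are assumptions made for illustration, not CloudSim API calls.

```python
def guarantee_ratio(tasks):
    """GR: fraction of submitted tasks that met their deadline."""
    met = sum(1 for t in tasks if t["finish"] is not None and t["finish"] <= t["deadline"])
    return met / len(tasks)

def power_per_task(total_energy_j, duration_s, tasks):
    """PCT: average server power divided by the number of accepted tasks."""
    accepted = sum(1 for t in tasks if t["finish"] is not None)
    return (total_energy_j / duration_s) / accepted

tasks = [
    {"finish": 90.0, "deadline": 100.0},
    {"finish": 130.0, "deadline": 120.0},   # missed its deadline
    {"finish": None, "deadline": 80.0},     # rejected, never scheduled
]
print(guarantee_ratio(tasks), power_per_task(3.6e5, 3600.0, tasks))
```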
All the above graphs demonstrate results based on the real-world trace records. From the analysis of these results, it can be seen that the proposed framework manages power more efficiently than the previous algorithms.

Conclusion
In this paper, the problems of energy conservation in the cloud are investigated. As a feasible solution, a framework for power management known as IPCPC is established. It reduces the overall power consumption and enhances resource utilization. The experimental results prove that IPCPC reduces power consumption more efficiently than the traditional power-aware algorithms. Scrutiny of the experimental results shows that the total power consumption per task of a server under IPCPC is 9%, which demonstrates a reduction in the overall power needed.

Figure 1. The organization of the proposed IPCPC system.
Figure 2. The data-flow diagram for the task-arrangement algorithm.
Figure 3. The data-flow diagram for the EDF scheduling algorithm.
Figure 4. Deadline-based resource utilization of IPCPC.
Figure 5. Resource utilization based on the number of tasks.
Figure 6. (a) Guarantee ratio; (b) power consumption per task for a varying task count; (c) total energy consumption.
Figure 7. Effective utilization for the real-world Google trace.
Figure 8. Guarantee ratio for the real-world Google trace.
Figure 9. Power consumption per task for the real-world Google trace.
Figure 10. Total energy consumption for the real-world Google trace.
Figure 11. Resource utilization for the real-world Google trace.
2017-10-30T03:56:50.139Z
2016-06-02T00:00:00.000
{ "year": 2016, "sha1": "07d83e909beb90232d347a56b03bbe53ddb690c7", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=67610", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "07d83e909beb90232d347a56b03bbe53ddb690c7", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
208979485
pes2o/s2orc
v3-fos-license
Economic Factors behind Social Entrepreneurship in Bangladesh
Bangladesh has a pool of entrepreneurs, and with them come new establishments, new employment opportunities, and new income sources. To better measure the characteristics of entrepreneurship, its growth and the impact of different indicators on it need to be identified. This paper therefore tries to identify the key economic indicators of entrepreneurship in the context of Bangladesh. The research is based on secondary data; it uses entrepreneurship as the dependent variable, proxied by self-employment, and seven independent variables: per capita income, unemployment rate, labor force, industrial structure change, capital, human capital, and literacy rate. Two regression models were used, covering data on the stated variables from 2008 to 2018. The first regression analysis examines whether a model can be constructed relating the overall economic variables to self-employment. The second regression model examines the explanatory power of the variables and the degree and pattern of the relationships. The research shows that literacy rate and human capital move in line with self-employment, whereas all the other variables do not, and thus fail to provide support for self-employment to thrive. The linear regression analysis shows that per capita income, labor force, and literacy rate play the most important roles in nourishing self-employment; the unemployment rate result is found to contradict previous findings in the context of Bangladesh.

Introduction
The study of economic indicators such as the employment index, gross domestic product, consumer price index, and labor force index has been crucial for development since the economic revolutions of the model countries. Developed countries conduct distinctive research on these quantitative factors, yet although many economic indicators have been identified as crucial, it is still a mystery which indicator plays the most important role in the economy. Entrepreneurship has played a major role in the rise of Bangladesh, a developing country, in world development rankings. In different sectors, entrepreneurship opens new opportunities to expand skills and business through which Bangladesh can compete with rival countries. Entrepreneurship plays a major role in the overall economic growth of the country, but not in isolation: other economic factors also drive growth, and those same factors act as controllers of entrepreneurship itself. Unemployment, in particular, plays a very deep role in economic development: individuals confronted with unemployment and low prospects for wage-employment will turn to self-employment as a viable alternative (Oxenfeldt, 1943), so entrepreneurship is interrelated with unemployment. Per capita income can be a booster for new business establishments, and a strong labor force plays a big role in the establishment of businesses. Government share, infrastructure, policy, the rate of inflation, capital, and similar factors also play key roles in defining the nature of entrepreneurship, while structural change in the agricultural, manufacturing, and service sectors has been important for the growth of entrepreneurship.
In this study we focus both on the theoretical positions of different authors and on the implications and validity of those theories in the context of Bangladesh. A key indicator, entrepreneurship, is chosen to magnify the growth of the economy, because new establishments bring new employment opportunities and hence new income sources. For a better measurement of the characteristics of entrepreneurship, its growth and the impact of different indicators on it are identified, and the relevant variables that can affect entrepreneurship are examined simultaneously. An overall picture of the relationships between these factors and entrepreneurship in the context of Bangladesh will surely reflect the true condition of the economy. Some facts about the economy of Bangladesh are noticeable here, because in the recent past there was an economic recession throughout the world. As an economy pulls out of a recession, output increases by a greater percentage than the rise in employment, and as it goes into a recession, output decreases by a greater percentage than the reduction in employment. This is just one example: such changes in unemployment have their own distinct impact on entrepreneurship, and many other economic variables are both affected by the economy and affect entrepreneurship. Many variables cause the rate of entrepreneurship to change, but the coordination and interdependency of those variables are still vague in Bangladesh; this research paper tries to contribute to that question. The motivation for this study is the fact that there has not been much analysis of the dependence of entrepreneurship on economic factors: there have been studies of the individual factors, but not enough research on their one-to-one effects on entrepreneurship, and the variation and interdependency analyses also remain undone in Bangladesh. The key economic variables controlling entrepreneurship, their interdependencies, and the paths of the different economic variables over the years are thus still not well established or researched. This paper therefore aims to determine the dependence of entrepreneurship on the relevant economic variables. The paper is essentially about the changing dimensions of entrepreneurship. In other countries, many people start something of their own whenever they get the chance, and this also applies to developing countries. Because entrepreneurship has contributed substantially to their economies, the SME sector has been closely observed by analysts, who, through observation and analysis, have identified the relevant impacts of the key economic variables on entrepreneurship. This study therefore tries to emphasize those indicators and their relationship with entrepreneurship. Entrepreneurship is treated here as a measure of solely individual establishments, itself an important indicator for an economy. Bangladesh is a country with a pool of entrepreneurs, which is why the microcredit policy was invented there, so it will be an advance if the key variables affecting entrepreneurship can be identified. All these theoretical findings come from different researchers, but none of them have been studied in the context of Bangladesh.
Implementing the data alongside the theories will thus be crucial in addressing the development indicators and their implications, which will redefine the picture of entrepreneurship in Bangladesh. The paper therefore intends to depict the exact relationship of the different economic variables with entrepreneurship, which can also help identify the determinants of entrepreneurship.

Literature Review
Entrepreneurship, and the consideration of risk, has been a steady subject of study in recent decades, and its associations with economic indicators have been used as tools to uncover the dimensions and changing diversity of entrepreneurship. Countries like Bangladesh depend deeply on labor-force participation and on reducing unemployment to lever up their economic growth (Khuda, 2012). The author presented key findings by sector, focusing on labor-force growth and participation, employment status, and the unemployment status of Bangladesh (1999-2000, 2002-03, and 2005-06); the findings indicate the contribution of agriculture to GDP, the population increase in recent years, comparative (male-female) labor-force participation, and sector-wise growth in unemployment. Rocha and Ponczek identified the effects of adult literacy on individuals and income in Brazil, where the estimated returns to education for Brazilian youths and adults at several education levels are of particular note (Fernandes & Narita, 2001; Fernandes & Annuatti, 2000). Blunch and Pörtner (2005, pp. 1-10) examined the effect of adult literacy programs on the living standards of households as measured by per capita spending. The study was designed to link literacy and returns: literate people were found to earn 22.4% more than illiterate people, men earn more than women, and experience and literacy increase the wage rate. Their model considered the main variables income, gender, composition of children, age, employment, private-sector status, and working hours per month. Nkurunziza (2012) also analyzed the relationship between income per capita and entrepreneurship. Using data on Africa, the paper shows a U-shaped relationship between income per capita and entrepreneurship, with entrepreneurship increasing above an income level of $7300 (income per capita). The paper draws on the theory of two kinds of entrepreneurs, opportunity entrepreneurs and necessity entrepreneurs, distinguished by income level (Wennekers, Van Stel, Carree, & Thurik, 2010, pp. 167-237): as the income level rises, the number of entrepreneurs also increases. The main variables used in the model to find the interrelationship are entrepreneurship (firms per 1000 working-age people), the number of full-time permanent workers per firm, unskilled workers (in %), firms providing training (in %), and the years of experience of the top manager in the firm's sector. Clogg and Sullivan (1982) discussed the underemployment process and the constitution of the labor force. The main theme to note is their construction of the Labor Utilization Framework (LUF) over time: essentially, they built a linear model in which the different LUF factors are considered along with their weights, giving an objective indicator for measuring underemployment at different times.
In the model, the authors used five variables: age, sex, color, time, and LUF (sub-unemployment, unemployment, low hours, low income, mismatch, and adequacy). Meager (2004) constituted two hypotheses: one suggests that unemployment automatically pushes people into self-employment, while the other holds that economic activity acts as a pull factor for self-employment. As a base point, he considered several factors relating unemployment to self-employment: the changing environment, sector change, changing aspirations, the changing strategies of employing organizations, government policy, and economic change. The data illustrations from the OECD for 1970-88 and the accompanying charts failed to identify any clear relationship between the two elements. For mathematical convenience, a simple model was then constructed for the inflow into self-employment, the outflow from self-employment, the level of self-employment at the end of the period, and unemployment at the start of the period; two different equations were constructed, one for the inflow and one for the outflow. In this simple linear relationship, the author found that the stock of self-employment is a function of the current unemployment rate, lagged unemployment, and a time trend. The conclusion drawn is that research on the behavior of self-employment should not rest on the unemployment-self-employment relationship. Noseleit (2012) raised the topic of the relationship of economic growth and structural change with entrepreneurship. Kuznets (1971) and Baumol, Blackman, and Wolff (1989) raised changes in sectoral structure as an important driver of economic growth, but others raised the points of technological necessity and cost; for example, structural change was not without cost (Zagler, 2009). The costs of structural change related to entrepreneurial activity include reallocations of factors that fail because the entrepreneur's vision of the future proves incorrect, as well as the unemployment and redundant qualifications that arise when incumbent businesses are replaced. Noseleit showed that entrepreneurial activity increases when there is infrastructural development or change; he analyzed economic growth through sector-wise structural change and entrepreneurship, and also focused on supporting local entities to alleviate the impact of structural change. In his regression analysis he considered the variables: similarity between entries and the initial sectoral structure in 1975, start-up rate (log), highly skilled employment share (log), population density (log), market potential (log), small-business employment share, and industry concentration. Baptista, Escária, and Madruga (2007) used Huber-White sandwich robust estimation, which accounts for variations in employment growth within and between regions over time simultaneously and is therefore preferable to fixed-effects estimation. Their analysis concluded that the indirect impact of new firms is much stronger than the direct impact, because of competition, efficiency, and innovation, and that the time lag for the supply-side effect could be eight years. The interrelation of self-employment and unemployment and their effects on each other are still under observation, and in most cases it has proved ambiguous to find a resolution. Thurik, Carree, van Stel, and Audretsch
(2008) tried to identify the unique effect of entrepreneurship, derived from two basic assumptions: increasing unemployment leads to increasing start-up activity, as theorized by Blau (1987), Evans and Jovanovic (1989), Evans and Leighton (1990), and Blanchflower and Meyer (1994); and high unemployment rates may correlate with stagnant economic growth, leading to fewer entrepreneurial opportunities, as argued by Audretsch (1995) and Audretsch, Carree, van Stel, and Thurik (2002). The authors built a two-equation vector autoregression model on OECD data and found that a change in unemployment has a positive impact on self-employment, while a change in the self-employment rate has a negative impact on the unemployment rate.

Method
The research is based on secondary data collected from the reliable websites of government and international organizations; per capita income and the contribution of services as a percentage of GDP are taken from the World Bank databases (1998-2018). The research is based on the historical data of the economic variables, and two regression analyses are carried out. The first regression analysis examines whether a model can be constructed relating the overall economic variables to self-employment; the second examines the explanatory power of the variables and the degree and pattern of the relationships, giving a depiction of the overall process. The research equation is thus

Self-employment (SE) = α + β1·I + β2·U + β3·L + β4·IS + β5·C + β6·H + β7·LT,

where I is per capita income, U the unemployment rate, L the labor force, IS industrial structure change, C capital, H human capital, and LT the literacy rate. The primary regression model above is constructed on the overall data so as to identify the regression value and the impact of the independent variables on the dependent variable; single-variable regression models (SE = α + βI, SE = α + βU, SE = α + βL, and so on) are then run to identify the relative contributions.

First Regression Model
The primary regression analysis gives the following output. The identified regression value is adjusted R² = 0.979, and the identified significance F value is 2.220E-10. From the analyzed model, the fitted equation with the coefficients of the related variables is

SE = 0.407 + (−0.0000312)·I + 0.01994·U + 3.72E-09·L + (−0.526)·IS + 0.131·C + (−0.253)·H + 0.0309·LT.

From these values it is seen that the value of the constant is 0.407. The impact of unemployment on self-employment is positive and moderate, while the impact of per capita income is negative but small. Labor force and capital are also positively correlated with self-employment, capital having more influence than labor. Industrial structure and human capital, however, are negatively correlated with self-employment.

Second Regression Model
The second regression model is a set of simple linear regressions. Each measures the variability of the parameter, which can be identified and explained by the R² value, and finds the impact on self-employment and which variables put the bigger impact on it.
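As a minimal sketch of how the two regression stages described above could be run in Python, the snippet below fits the full model and then one simple regression per indicator. The column names and the CSV file are placeholders assumed for illustration, not the authors' actual dataset.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical yearly dataset, 2008-2018, one row per year
df = pd.read_csv("bd_indicators.csv")  # placeholder file name
X_cols = ["income", "unemployment", "labor_force",
          "ind_structure", "capital", "human_capital", "literacy"]

# Stage 1: full multiple regression SE = a + b1*I + ... + b7*LT
full = sm.OLS(df["self_employment"], sm.add_constant(df[X_cols])).fit()
print(full.rsquared_adj, full.f_pvalue)
print(full.params)

# Stage 2: one simple regression per indicator, SE = a + b*X
for col in X_cols:
    simple = sm.OLS(df["self_employment"], sm.add_constant(df[col])).fit()
    print(col, simple.params[col], simple.pvalues[col], simple.rsquared)
```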
Table: Regression of self-employment on per capita income
Dependent variable: Self-employment
Independent variable: Per capita income (impact: positive)

Here it is seen that per capita income has a positive impact on self-employment: as per capita income increases, so does self-employment. For the model SE = α + βI, the P value (0.0000311) suggests a strong influence on self-employment. From the first regression model it can be seen that per capita income has a positive correlation with self-employment; that is the practical data with a mathematical analysis. In theory, an increase in per capita income has a positive impact, but only within a relevant range, up to $7300; above that income level the impact can be otherwise (Nkurunziza, 2012). In the context of Bangladesh, then, the previous findings match the data analysis.
For the unemployment model SE = α + βU, the P value (0.00829) suggests a strong influence on self-employment. According to theory, the unemployment rate can have both positive and negative impacts on self-employment. Previous experiments and experience suggest that unemployment exerts upward pressure on self-employment, because unemployment means there are more people than needed, and some who are energetic, high-spirited, and capable of running a business take the initiative and start something of their own. Another theory holds that unemployment signals a recession in the economy, and a recession means there is not ample opportunity to start a new business (Petrakis, 2004, pp. 85-98). In reality, the Bangladesh data set shows that unemployment has a negative impact on self-employment, although self-employment itself has a positive impact on the economy (Baptista et al., 2007, pp. 49-58).
For the labor-force model SE = α + βL, the value of R² is 0.889, which explains most of the variation, and the significance F value, the probability of this model occurring by chance, is 4.738E-10. The fitted relationship is

SE = 0.0077 + 2.89E-09·L,

a positive but not very strong correlation. The P value is very low (4.738E-10), suggesting a strong impact on entrepreneurship. The availability of the labor force can also be read from another angle: if Bangladesh has too large a labor force, this becomes a burden, as many people will be unemployed, putting pressure on the unemployment rate; a high unemployment rate indicates recession, which certainly pressures the growth of the economy and of entrepreneurship, and both unemployment and underemployment will then be high (Clogg & Sullivan, 1982; Petrakis, 2004, pp. 85-98). Unemployment is positively related to self-employment, and so is the labor force, so the theoretical and mathematical pictures agree.
The literacy rate in Bangladesh is rising every year, so it is necessary to check whether previous researchers' findings hold in the Bangladesh context. Literacy rate has a positive impact on self-employment: as the literacy rate increases, the self-employment rate also increases. For the model SE = α + βLT, the fitted equation is

SE = 0.0213 + 0.321·LT,

with a correlation coefficient of 0.321.
The value of R² is 0.863, which means the model explains most of the variation; it is a valid model. The probability of this model occurring by chance, the significance F, is 3.361E-09. The P value of the regression model SE = α + βLT is likewise 3.361E-09, which shows that literacy can create quite an impact on self-employment. The literacy rate has both positive and negative effects on self-employment. A rising literacy rate increases knowledge about a particular subject, and this sector- or subject-related knowledge helps a person start a business of his or her own; it also raises the level of confidence needed to start a new business. On the other hand, education implies the adoption of new technology, so uncertainty also increases, and a high level of education raises the wage rate of employees, so the overall impact is quite ambiguous (Petrakis, 2004, pp. 85-98). According to the first regression model, the literacy rate imposes a positive impact on self-employment, and in the second model it likewise has a positive impact on the self-employment rate, matching the previous findings this study set out to apply.

Conclusion
Entrepreneurship itself is a term with many controlling variables on which it relies heavily: sometimes it depends on one variable identified as the key variable, and sometimes on a bunch of variables (here, the key variables appeared to be per capita income, unemployment, labor force, and literacy rate). This study has tried to draw the picture of one such factor, self-employment, and to pave the way for thorough research that reveals the secrets of the economy of Bangladesh.
2019-11-14T17:05:59.590Z
2019-11-12T00:00:00.000
{ "year": 2019, "sha1": "d49813ed42d1c6427e722c72e3840cd87d94d93a", "oa_license": "CCBY", "oa_url": "http://www.scholink.org/ojs/index.php/ijafs/article/download/2417/2511", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "32d6b2aecf918972186f58aa0c016ca3c1755574", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
54609040
pes2o/s2orc
v3-fos-license
Recoil-β tagging study of the N = Z nucleus 66As
An in-beam study has been performed to further investigate the known isomeric decays and to identify T = 1 excited states in the medium-heavy N = Z = 33 nucleus 66As. The fusion-evaporation reaction 40Ca(28Si, pn)66As was employed at beam energies of 75 and 83 MeV. The half-lives and ordering of two known isomeric states in 66As have been determined with improved accuracy. In addition, several prompt γ-ray transitions from excited states, both bypassing and decaying to the isomeric states in 66As, have been observed. Most importantly, candidates for the 4+ → 2+ and 6+ → 4+ transitions in the T = 1 band have been identified. The results are compared with shell-model calculations using the modern JUN45 interaction in the pf5/2g9/2 model space.

I. INTRODUCTION
Self-conjugate odd-odd N = Z nuclei are interesting for various reasons, one of which is the competition between isospin T = 0 and T = 1 states, fundamentally arising from neutron-neutron (nn) or proton-proton (pp) (T = 1) and neutron-proton (np) (T = 1 or T = 0) correlations. In N = Z nuclei, neutrons and protons occupy the same single-particle orbits, which leads to the maximal overlap of their wave functions. This may lead to enhanced np pairing correlations in the isoscalar T = 0 channel. However, for medium-mass N = Z nuclei there is no clear evidence of the strong T = 0, np correlations until the A ∼ 90 mass region is reached [1-3].
Owing to the charge symmetry and charge independence of the strong nuclear force, any state that can be constructed in the even-even pp and nn systems (Z = N + 2 or Z = N − 2) has to exist also in the odd-odd (N = Z) np system. This fact leads to the concept of isospin symmetry, which implies that a set of states with the same isospin quantum number (T = 1) within an isobaric multiplet are degenerate. However, the observed differences in the excitation energies of the isobaric analog states (IAS) originate from isospin-nonconserving forces, such as the Coulomb interaction [4]. These energy differences, called Coulomb energy differences (CED), between the IAS can be used to probe the microscopic and macroscopic structure of nuclei. CED have been used to provide information on the alignment of the valence nucleons [5], shape changes as a function of spin [6], and the evolution of nuclear radii along the yrast line [7].
Two isomeric states have been previously identified in 66As [8]. In more recent studies, the decay of the isomers was used as a tag to identify excited states above the isomeric states [9], and new prompt γ rays were associated with 66As, without the ability to observe delayed transitions, in Ref. [10]. In the current work, both the isomeric and the prompt T = 0 and T = 1 structures have been studied. The half-lives and ordering of the isomeric states have been determined with improved accuracy, and internal conversion coefficients have been deduced for the transitions deexciting the isomers, allowing the determination of the corresponding experimental B(E2) transition strengths. Recent experimental [10] and theoretical [11] work has investigated the CED in the A = 66 (66As/66Ge) and A = 70 (70Br/70Se) systems. The present work agrees with some of the findings reported in Ref. [10], but differs for the T = 1, Iπ = 6+ state, resulting in a positive CED behavior.
The odd-odd N = Z nuclei in the mass A ∼ 60-70 region provide an opportunity to test shell-model (SM) interactions and model spaces for these midmass nuclei. In the present work, SM calculations have been performed using the modern JUN45 interaction [12] and a pf5/2g9/2 model space. The experimental results are compared with the SM predictions in terms of level energies, CED, and B(E2) values.
These nuclei lie close to the proton-drip line, where the production cross sections in fusion-evaporation reactions become very small compared to those of the lighter fp-shell nuclei. In addition, in the case of odd-odd N = Z nuclei, the T = 1 bands rapidly become non-yrast and are therefore weakly populated. All this means that new experimental approaches need to be investigated. The recent development of the recoil-β-tagging (RBT) technique [6,13,14] provides a tool to extend the use of the tagging methodology to the region of exotic medium-mass nuclei around the N = Z line. In the RBT method, a recoil formed in a fusion-evaporation reaction is identified by correlating a β particle, originating from the β decay, with the recoil from which it originated. This is not straightforward, because β-decay properties are generally unsuitable for tagging purposes, owing to the long half-lives and continuous energy distributions of β particles. However, the medium-heavy odd-odd N = Z and N < Z nuclei have β-decay properties that are suitable for RBT and can thereby serve as a clean tag for prompt or delayed γ-ray transitions. Specifically, odd-odd N = Z nuclei, like the 66As studied here, are Fermi superallowed β emitters, which have relatively short half-lives (∼100 ms) and β-particle energy distributions reaching up to ∼10 MeV. This differs from the other nuclei in the region, which have half-lives from seconds to hours and β-end-point energies reaching values only up to ∼3 MeV.
The experiment was performed at the Accelerator Laboratory of the University of Jyväskylä, where the beam was delivered by the K-130 cyclotron. The 40Ca(28Si, pn)66As reaction was employed at beam energies of 83 MeV (40 h of irradiation time) and 75 MeV (120 h of irradiation time) to populate excited states in 66As. The 28Si beam impinged on a nat Ca target rolled to a thickness of 800 μg/cm², with an average beam intensity of 5 pnA. Prompt γ rays were detected at the target position by the JUROGAM II γ-ray spectrometer, consisting of 24 EUROGAM Clover [15] and 15 EUROGAM Phase 1 [16] or GASP [17] type Compton-suppressed germanium detectors with a total photopeak efficiency of 6.1% at 1.33 MeV. Fusion-evaporation recoils were separated from the primary beam and other unwanted reaction products by the gas-filled recoil separator RITU [18,19].
After separation, the reaction products enter the GREAT spectrometer [20] located at the focal plane of RITU. In GREAT, the reaction products first pass through a multiwire proportional counter (MWPC) and are implanted into a pair of 700-μm-thick double-sided silicon strip detectors (DSSDs), where the subsequent β decays of the recoils are also detected. Each DSSD comprises an active area of 60 × 40 mm with a strip pitch of 1 mm, providing 4800 pixels in total. The fusion recoils are distinguished from scattered beam and other unwanted reaction products by energy-loss information obtained from the MWPC and time-of-flight information obtained between the DSSD and MWPC. In addition, the GREAT spectrometer has clover- and planar-type germanium detectors installed around the DSSDs to observe delayed γ radiation; the clover detector is situated above the GREAT chamber, whereas the planar detector is placed directly behind the DSSDs in the vacuum chamber. The planar germanium detector was also used for detecting the high-energy β particles in coincidence with the energy-loss signal obtained from the DSSD. The signals from each detector channel received a time stamp with 10-ns precision from the triggerless total data readout (TDR) [21] data-acquisition system and could be sorted online or offline according to the desired time and energy conditions. The software packages GRAIN [22] and RADWARE [23,24] were used to analyze the collected data.

FIG. 1. Identification matrix for high-energy β particles. The energy-loss information (ΔE) is obtained from the DSSD (x axis) and the full energy information (E) from the planar Ge detector (y axis). A two-dimensional energy gate can be applied to select β particles to be correlated with recoils within a correlation time of 300 ms. The low-energy detection threshold can be varied to achieve better statistics or cleaner tagged spectra.

A. The recoil-β tagging method
The exceptional β-decay properties of 66As make successful tagging possible, owing to the short half-life of ∼96 ms [25-27] and the high β+-end-point energy of ∼9.6 MeV [28,29]. This results from the fact that the ground state of 66As undergoes a Fermi superallowed β decay to the daughter 66Ge. The identification of high-energy β+ particles is carried out by detecting coincidences between the DSSD and the planar germanium detector within a 0- to 200-ns time gate; these detectors provide ΔE and full-E information for the particles, respectively. From the ΔE−E matrix, illustrated in Fig. 1, the events to be correlated with a recoil, which occurred in the same pixel of the DSSD as the β decay, within a maximum correlation time of 300 ms, are selected by setting a two-dimensional energy gate. The size of the gate can be varied to optimize for maximum statistics or for the cleanliness of the tagged spectra. The low-energy threshold for the β+ particles was varied between 0.5 and 5 MeV during the analysis of the correlated γ-ray transitions. The transitions originating from excited states in 66As were first identified with very strict tagging conditions, i.e., with a high β+-particle energy threshold of the order of ∼3-5 MeV; the threshold was then relaxed to ∼0.5-3 MeV in order to perform the prompt γγ and angular-distribution analyses with sufficient statistics.
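The following is a schematic sketch of the tagging logic just described: select β candidates inside the two-dimensional (ΔE, E) gate and pair each with the most recent recoil implanted in the same DSSD pixel within the correlation time. The event-record fields and the ΔE cut are illustrative assumptions, not the GRAIN implementation.

```python
def recoil_beta_tag(recoils, betas, e_min=3.0, e_max=10.0, t_corr=0.3):
    """Recoil-beta correlation sketch. Energies in MeV, times in seconds.
    e_min/e_max define the planar full-energy gate; t_corr is the maximum
    recoil-beta correlation time (300 ms in the text)."""
    tagged = []
    for b in betas:
        # Two-dimensional gate: planar full energy and an assumed DSSD dE cut
        if not (e_min <= b["E_planar"] <= e_max and b["dE_dssd"] > 0.2):
            continue
        candidates = [r for r in recoils
                      if r["pixel"] == b["pixel"] and 0.0 < b["t"] - r["t"] <= t_corr]
        if candidates:
            # Take the most recent implantation in that pixel as the tagged recoil
            tagged.append(max(candidates, key=lambda r: r["t"]))
    return tagged
```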
B. Angular distributions of γ-ray transitions
The multipolarities of the strongest γ-ray transitions originating from 66As were deduced by means of angular distributions and angular-distribution ratios. The JUROGAM II germanium detectors are divided into four rings at angles of 75.5° (12 detectors), 104.5° (12 detectors), 133.6° (10 detectors), and 157.6° (5 detectors) with respect to the beam direction. For the γ-ray angular distributions, β-tagged, β- and isomer-tagged, or only isomer-tagged prompt events were sorted separately into four spectra corresponding to the different rings of detectors. The intensities of the γ rays of interest were extracted from each spectrum and normalized by the detection efficiency of the corresponding ring. The reduced angular-distribution function

W(θ) = A0[1 + A2·P2(cos θ)],

where A0 and A2 are the angular-distribution coefficients used as free fitting parameters and P2(cos θ) is the second-order Legendre polynomial, was fitted to the plot of γ-ray intensity versus detection angle. The fitted parameter A2 was used to deduce the transition multipolarity: a positive value indicates a quadrupole character and a negative value a dipole character.
The angular-distribution ratios (R) were deduced by two methods, depending on the intensity and cleanliness of the γ-ray transition. The R values were extracted from three γγ matrices, formed by sorting β-tagged coincidence events with the combinations (133.6° + 157.6°) vs (all angles), (104.5°) vs (all angles), and (75.5°) vs (all angles). By setting the same energy gates on the y axis (all angles) of each matrix, three coincidence spectra were formed representing the aforementioned detection angles. The intensity of the γ ray under study was again extracted from the spectra and normalized by the detection efficiency. The angular-distribution ratio was calculated with the formulas R1 = Iγ(133.6° + 157.6°)/Iγ(104.5°) and R2 = Iγ(133.6° + 157.6°)/Iγ(75.5°), thus providing two R values for each transition, from which the final value was calculated as a weighted average (see Table I).
Where the γ-ray intensity and cleanliness allowed, two β-tagged (or β- and/or isomer-tagged) singles γ-ray spectra corresponding to the summed angles (133.6° + 157.6°) and (104.5° + 75.5°), with two different β-particle energy gates (large gate = 0.5-10 MeV and small gate = 3-10 MeV), were used to compute the R value. The resulting R values with error estimates for the 66As γ-ray transitions are listed in Table I, where the method used is also indicated. Transitions of known multipolarities originating from nuclei populated via other reaction channels were analyzed with the methods described above, yielding, on average, angular-distribution ratios of 1.30(7) for stretched ΔI = 2, E2 transitions and 0.70(6) for stretched ΔI = 1, M1 and E1 types of transitions.
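The following is a minimal fitting sketch for the reduced angular-distribution function above, using scipy; the example intensities are invented numbers purely to make the snippet runnable.

```python
import numpy as np
from scipy.optimize import curve_fit

def w_theta(theta_deg, a0, a2):
    """Reduced angular distribution W(theta) = A0 * (1 + A2 * P2(cos theta))."""
    c = np.cos(np.radians(theta_deg))
    p2 = 0.5 * (3.0 * c**2 - 1.0)  # second-order Legendre polynomial
    return a0 * (1.0 + a2 * p2)

angles = np.array([75.5, 104.5, 133.6, 157.6])      # JUROGAM II ring angles
intensity = np.array([95.0, 92.0, 108.0, 121.0])    # illustrative efficiency-corrected yields
(a0, a2), cov = curve_fit(w_theta, angles, intensity, p0=(100.0, 0.1))
print(a0, a2)  # a2 > 0 would suggest a stretched quadrupole transition
```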
III. RESULTS

The level scheme of ⁶⁶As constructed in the present work is shown in Fig. 2. Details of the measured γ-ray transitions are listed in Tables I and II. These results are based on the prompt, delayed, and delayed-prompt γγ coincidence analysis. Isomeric structures in ⁶⁶As have been previously studied by Grzywacz et al., leading to the discovery of two isomeric states and nine connecting γ-ray transitions [8]. Recently, an in-beam study performed by de Angelis et al. [10] provided information on several new γ-ray transitions bypassing the isomeric states. In the following, results from the present data concerning both the isomeric and the prompt structures are presented. A comparison with the previous works is carried out, and discrepancies are discussed.

A. Isomeric states in ⁶⁶As

The delayed γ-ray transitions, which were identified in Ref. [8], were also observed in the present study. This is illustrated in Figs. 3(a) and 3(b), where the β-tagged delayed ⁶⁶As singles γ-ray spectra recorded in the planar and GREAT clover detectors, respectively, are presented.

Coincidence relations between transitions below the isomeric states can be seen in Fig. 4, where β-tagged and gated γ-ray spectra from a planar-clover matrix are illustrated. The γ rays detected in the clover detector in coincidence with the 114-keV γ rays seen in the planar detector are shown in panel (a). Similarly, in panel (c) γ rays seen in the clover detector coinciding with the γ rays at 124 keV seen in the planar detector are presented. In panels (b) and (d) the same data are illustrated as in panels (a) and (c), but with a narrow γγ time gate of −100 to 100 ns added to identify only prompt coincidences. The time gate on the γ(planar)-recoil time difference was set to 0-21 μs in all panels of Fig. 4. A comparison between panels (a) and (b) immediately reveals that the 124-keV transition directly depopulates one of the isomeric states, as the line at 124 keV disappears when imposing the prompt coincidence time gate. All of the other seven γ-ray peaks remain in prompt coincidence with the 114-keV line when the γγ time gate is applied, indicating that the 114-keV transition directly deexcites the other isomeric state. Comparing panels (c) and (d) confirms the conclusions made above, since the 124-keV line is no longer seen in coincidence with the 114- and 1553-keV lines after the narrow γγ time gate is added. In addition, the isomeric state depopulated by the 124-keV line has to lie lower in excitation energy, as it is fed from above by the 1553-keV γ-ray transition. The ordering of the 114- and 124-keV transitions was further confirmed by comparing the time stamps of these decay events. This was possible owing to the time stamping with 10-ns precision of each data event in the TDR system. A comparison of the time stamps for the 114- and 124-keV γγ coincidences leads to the conclusion that in 98% of the detected coincidences the 114-keV transition precedes the 124-keV transition.

The 1007- and 670-keV γ-ray transitions are seen in coincidence only with the 114-keV γ-ray transition, which indicates that they bypass the lower-lying isomeric state. However, the 1553-keV line is seen in coincidence with both the 114- and 124-keV lines and, as stated earlier, the 1553-keV transition precedes the 124-keV transition, as does the 114-keV transition. This leads to the conclusion that the isomeric states are connected by the consecutive 114- and 1553-keV γ rays. The sum of energies of the 124- and 1553-keV γ-ray transitions equals the sum of the 670- and 1007-keV transitions, which are concluded to form a parallel cascade with the 124- and 1553-keV transitions. When imposing the narrow γγ time gate on the spectrum gated by the 124-keV γ-ray transition, coincidences are only observed with the 267-, 394-, 836-, and 963-keV transitions, as illustrated in Fig. 4(d). This stems from the fact that the previously mentioned transitions must originate from states lying below the lower-lying isomeric state.
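As a quick consistency check, the energy sums quoted above work out exactly:

$$E_{124} + E_{1553} = 124 + 1553 = 1677\ \mathrm{keV} = 670 + 1007 = E_{670} + E_{1007}.$$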
Recoil-gated spectra from the planar-clover matrix are presented in Fig. 5, showing γ-ray transitions observed in the clover detector in coincidence with the 394- and 267-keV transitions detected in the planar. The reduction in statistics from β tagging, added to the drop in γ-ray detection efficiency of the planar detector above 150 keV, did not permit a β-tagged γγ analysis for these transitions. The observed coincidences presented in Fig. 5 show that the 670-, 836-, and 1007-keV γ-ray transitions are in coincidence with the 394-keV transition and that the 670-, 963-, and 1007-keV transitions are in coincidence with the 267-keV transition. As the 267- and 394-keV lines are not seen in mutual coincidence and the sum of energies of the 267- and 963-keV transitions equals the sum of the 394- and 836-keV transitions, it can be concluded that they form parallel cascades depopulating a state at 1230 keV. This state is fed by the 670-keV/1007-keV cascade from a state at 2907 keV. The ordering of the γ-ray transition pairs with energies of 670 keV/1007 keV, 394 keV/836 keV, and 267 keV/963 keV cannot be assigned unambiguously at this stage. This is established later on by prompt γγ analysis (see Secs. III B1 and III B2).

1. Half-lives of the isomeric states

Half-lives of the isomeric states were determined making use of the logarithmic binning method described by Schmidt et al. [30,31]. This method is very convenient for discriminating between different radioactive species and is applicable especially in cases where only limited statistics are available. In this method, the number of radioactive decay events is plotted against the natural logarithm of the time differences, giving rise to a bell-shaped distribution. The half-life can be extracted from the centroid of this distribution. The two-component function fitted to the half-life data is of the form

f(Θ) = Σᵢ nᵢλᵢ exp(Θ) exp[−λᵢ exp(Θ)],   (1)

where the substitution Θ = ln t is introduced and nᵢ and λᵢ, with i = {1, 2}, are the numbers of counts and the decay constants of the two different activities, respectively. Figure 6 presents the half-life data and the fitted two-component functions under various gating conditions. The black and red data points correspond to recoil-correlated and β-tagged delayed γ-ray data, respectively. The solid curves represent fits of Eq. (1) to the data.

[FIG. 5. Recoil-gated delayed γ rays from the planar-clover matrix. The gate is set on the 394-keV transition detected in the planar, whereas in the inset the gate is set on the 267-keV transition. The time gate for the γ(planar)-recoil time difference is set to 0-21 μs in the main figure and to 0-5 μs in the inset in order to avoid random coincidences with contaminant γ rays. In addition, a narrow −100- to 100-ns γγ time gate is applied in both panels.]
Recoil-gated data provide the desired statistics for reliable half-life determinations, but to verify the accuracy of the results, the β-tagging conditions were also applied. The larger peaks in the time distributions presented in Figs. 6(a) and 6(b) correspond to real activities caused by the decay of the isomeric states, whereas the smaller components at higher ln(Δt) values are attributable to random background. In the case of the higher-lying isomeric state, the half-life can be extracted from the γ-recoil time differences of the 1553-keV γ rays detected in the GREAT clover detector. Other γ rays, such as the 1007- and 670-keV transitions below the higher-lying isomeric state, could have been used. However, this causes the random component to become the dominant part of the distribution, owing to the background at lower energies originating mainly from Compton scattering. Using a single γ-ray energy gate to extract reliable γ-recoil time differences for the lower-lying isomeric state does not work owing to the feeding of the higher-lying isomer. To overcome this issue, the time difference of two or more γ rays detected in the planar and clover detectors can be resolved. The time-difference spectrum presented in Fig. 6(b) shows the time difference between the 114- or 1553-keV transition recorded in the clover detector and the 124-keV transition observed in the planar detector. This method provides a low-background time distribution in both the recoil-correlated and β-tagged cases to accurately determine the half-life of the lower-lying isomeric state. An extremely clean time distribution, without any random background events, can be obtained for the lower-lying isomeric state using the β-tagging condition by excluding the detection of the 114-keV γ ray in the clover from the gating conditions, but naturally this yields fewer statistics. The time distribution obtained in this way is shown in Fig. 6(b) as a gray histogram. The half-life is extracted from these data by using the maximum likelihood method [30].

Half-lives for the ⁶⁶As isomeric states can be extracted from the fitted λ₁ parameter, which yields t₁/₂ = 8.01(34) μs from the recoil-correlated data and t₁/₂ = 7.70(39) μs from the β-tagged data for the higher-lying isomeric state. The corresponding values for the lower-lying isomeric state are t₁/₂ = 1.16(4) μs from the recoil-correlated data and t₁/₂ = 1.09(10) μs from the β-tagged data. Applying the maximum likelihood method to the data presented in Fig. 6(b) as a gray histogram produces a value of t₁/₂ = 0.99(+0.22/−0.16) μs for the lower-lying state. The values obtained from the differently conditioned data are consistent within error limits and can be considered to give accurate values for the isomeric half-lives. To combine the final values for the half-lives, a weighted average was calculated for each isomer, yielding t₁/₂ = 7.9(3) μs and t₁/₂ = 1.15(4) μs for the higher- and lower-lying isomeric states, respectively. These values and the ones reported in Ref. [9] are in agreement within error limits.

In the present study, data were also produced for the ⁶⁹Ge and ⁶⁵Zn nuclei, which both contain long-lived states.
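For illustration, the logarithmic-binning fit can be sketched as follows, assuming an array of γ-recoil time differences in seconds; the bin counts are converted to a density per unit Θ before fitting Eq. (1). The initial guesses and names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_component(theta, n1, lam1, n2, lam2):
    """Eq. (1): density of decay events in Theta = ln(t) for two activities
    (the real activity plus a random-background component)."""
    t = np.exp(theta)
    return n1 * lam1 * t * np.exp(-lam1 * t) + n2 * lam2 * t * np.exp(-lam2 * t)

def fit_half_life(dt, bins=40, p0_lam=(6e5, 1e2)):
    """Histogram ln(dt) and fit the two-component function; returns the
    half-life of the first component and its uncertainty (seconds)."""
    theta = np.log(np.asarray(dt, dtype=float))
    counts, edges = np.histogram(theta, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    p0 = [0.9 * len(dt), p0_lam[0], 0.1 * len(dt), p0_lam[1]]
    popt, pcov = curve_fit(two_component, centers, counts / width, p0=p0)
    lam1, dlam1 = popt[1], np.sqrt(pcov[1, 1])
    return np.log(2) / lam1, np.log(2) * dlam1 / lam1**2
```

The weighted_average helper shown earlier can then combine the recoil-correlated and β-tagged results, as was done to obtain the final values of 7.9(3) and 1.15(4) μs.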
2. Internal conversion coefficients of the isomeric γ-ray transitions

The total internal conversion coefficients can be determined for the two transitions deexciting the isomeric states by demanding the preservation of the γ-ray intensity through a cascade. To evaluate the intensity balance, detailed information on the detector efficiencies is crucial. Efficiency curves for the planar and the clover germanium detectors were simulated with the GEANT4 toolkit [34] according to the experimental circumstances. The distribution of implanted recoils in the DSSD and the thickness of the implantation detector were taken into account in these simulations. As RITU is designed to operate in a heavier mass region, the separation of fusion residues from the primary beam and other unwanted products is challenging in the mass A = 70 region. For this reason, the optimal settings for RITU could not be used, which caused the recoil distribution to be focused more on the right-hand side of the DSSDs. Clearly, if the recoil distribution is not uniform across the DSSD, the γ-ray detection efficiencies of the planar and clover detectors placed around the DSSD will be affected by this geometrical deviation.

The total intensity of the 114-keV transition, feeding a state which is depopulated by the 1007- and 1553-keV transitions, has to equal the sum of the intensities of the latter transitions. The internal conversion of the 1007- and 1553-keV transitions is negligible owing to their high energies, so there is no need to make assumptions about the transition characteristics, nor to correct the experimental intensities for conversion. The efficiency-corrected intensity of the β-tagged 114-keV γ-ray transition observed in the planar is thus compared to the sum of the efficiency-corrected intensities of the β-tagged 1007- and 1553-keV γ rays detected in the clover to resolve the total internal conversion coefficient for the 114-keV transition. Despite the β-tagging conditions, there is always a certain amount of contaminant events in the 114-keV planar peak, originating from random correlations of the ⁶⁵Ga β decay to the excited states in ⁶⁵Zn, where one of the states is depopulated by a 115-keV γ-ray transition. Fortunately, the magnitude of the contamination can be estimated and corrected for, as there is also a 61-keV γ-ray transition depopulating the same state as the 115-keV transition in ⁶⁵Zn. The intensity ratio of these transitions can be resolved as a function of the γ-recoil time differences in order to obtain a correction factor for the 114-keV γ-ray intensity. This is shown in the inset of Fig. 3(a). At time differences between 0.1 and 1 μs, the intensity ratio of the 114- and 61-keV peaks remains at a constant value, as it should, before the ratio starts to increase monotonically owing to the decay of the higher-lying isomeric state in ⁶⁶As, which increases the intensity of the 114-keV peak rapidly. The correction factor 2.8 can be obtained from the plateau in the curve, which is then used to subtract the intensity corresponding to the contamination (2.8 × I(61 keV)) from the total intensity of the 114-keV peak. After this correction, the total internal conversion coefficient can be determined, yielding the value of α_exp = 0.41(13) for the 114-keV transition in ⁶⁶As.
The closest total internal conversion coefficients for this transition energy, obtained from Ref. [35], are α_th(E2) = 0.48(1) and α_th(M2) = 0.59(1), hence suggesting that the transition has an E2 character. The error of the theoretical value originates from the uncertainty in the energy measurement of the 114-keV γ ray.

The total intensity of the 124-keV transition has to equal the sum of the 267- and 394-keV transition intensities, as they feed and deexcite the same state. The problem is that this state is also fed from the higher-lying isomer via the 1007- and 670-keV transitions. As there is a large difference between the isomeric half-lives, the additional feeding from above can be eliminated by setting a strict 0- to 1-μs time gate on the γ-recoil time difference. The validity of the time gate can be verified from the plot presented in the inset of Fig. 3(a). Theoretical total internal conversion coefficients for the 267- and 394-keV transitions are practically negligible for any of the multipolarities below λ = 4. Therefore, no assumptions on their character are needed, nor corrections to the intensity for conversion. The efficiency-corrected intensity of the β-tagged 124-keV transition detected in the planar is thus compared to the sum of the efficiency-corrected intensities of the β-tagged 267- and 394-keV transitions, also detected in the planar, giving rise to the total internal conversion coefficient of α_exp = 0.31(16). The relevant coefficients obtained from Ref. [35] are α_th(E2) = 0.35(1) and α_th(M2) = 0.43(1), confirming the 124-keV transition multipolarity to be λ = 2 and suggesting an electric character.

The experimental conversion coefficients reported in Ref. [8] are 1.3(4) for the 114-keV transition and 0.7(3) for the 124-keV transition. The discrepancies probably result from the underestimation of the γ-ray intensities in Ref. [8] owing to a large Compton background.
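The intensity-balance argument reduces to a one-line formula once the efficiency-corrected intensities are in hand. A minimal sketch, with the contamination correction included (all names are illustrative):

```python
def alpha_total(I_converted, I_feeders, contamination=0.0):
    """Total internal conversion coefficient from intensity balance.

    I_converted   : efficiency-corrected gamma intensity of the converted
                    transition (e.g. the 114-keV line in the planar)
    I_feeders     : summed efficiency-corrected intensities of the
                    unconverted branch (e.g. the 1007- and 1553-keV lines)
    contamination : intensity to subtract from I_converted
                    (e.g. 2.8 * I(61 keV) for the 65Ga admixture)

    Balance: I_feeders = I_gamma * (1 + alpha)  =>  alpha = I_feeders/I_gamma - 1
    """
    i_gamma = I_converted - contamination
    return I_feeders / i_gamma - 1.0

# e.g. alpha_total(I114, I1007 + I1553, contamination=2.8 * I61)
```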
B. Short-lived states in ⁶⁶As

The γ rays originating from ⁶⁶As can be identified already with a large 1- to 10-MeV β gate, as illustrated in Fig. 7(a). This is essential when statistics are needed for the γγ analysis and the angular distributions. However, the spectrum tagged with the majority of detected β particles suffers from heavy contamination caused by stronger reaction channels such as ⁶⁶Ge and ⁶⁵Ga, which were the main contaminants. Raising the β-particle detection threshold by 2 MeV allows for a clean identification of the ⁶⁶As γ rays. From the β-tagged singles spectrum shown in Fig. 7(b), five prominent peaks located at energies of 355, 379, 394, 836-841, and 960-963 keV can be observed. These transitions have to originate from levels rather close to the ground state of ⁶⁶As, because one would expect a rapid increase in the level density, hence strong fragmentation of the γ-ray transition intensity, when going to higher excitation energy.

[FIG. 7. Recoil-β-tagged JUROGAM II singles spectra with a 300-ms correlation time. In panel (a) the β-particle energy gate is set at 1-10 MeV, whereas in panel (b) it is set at 3-10 MeV, with a background-subtraction condition added to eliminate randomly correlated γ-ray transitions. Peaks labeled in black are transitions of ⁶⁶As, while gray labels are for transitions originating from other reaction channels such as ⁶⁶Ge, ⁶⁵Ge, ⁶⁵Ga, and ⁶⁴Zn.]

The prominent peaks listed represent decays from both the T = 0 and the T = 1 states in ⁶⁶As. In the following discussion the results concerning the prompt γ-ray transitions are presented. The experimentally observed excited states in ⁶⁶As have been divided into isospin T = 1 and T = 0 structures. The illustrated γγ coincidence spectra represent cases where rather strict β gates (∼3-10 MeV) have been used to show the cleanest coincidences. This naturally excludes some of the good events, which are more pronounced with relaxed gating conditions, along with the contaminant γ-ray transitions. The coincidence spectra illustrated in Figs. 8(a) and 8(b) represent the effect of the size of the β gate on the observed coincidences. In panel (b), the low-energy threshold is raised by 1.5 MeV, which produces a clean and low-background spectrum, but the coincidence with the 1137-keV transition seems to be missing, although it can be clearly identified in panel (a). In the other spectra, shown in Figs. 9 and 10, all the transitions which have been found to coincide with the gating transition under relaxed tagging conditions are labeled, even if they do not clearly stand out from the background in these particular figures.

[FIG. 8. β-tagged and gated prompt JUROGAM II spectra illustrating observed coincidences within the T = 1 band and between the T = 0 and T = 1 bands. In panels (a) and (b) the gate is set on the 963-keV transition with β-particle energy gates of 2.5-10 and 4-10 MeV, respectively, to illustrate the effect of the size of the β gate on the gated spectra. In panel (c) the gates are set on the 963- and 1226-keV transitions with a 2.5- to 10-MeV β gate. The inset shows the low-background region where the 1226- and 1486-keV lines are identified. In each panel background subtraction is performed by setting a background gate, which has the same width as the main gate, near the gating transition. Peaks labeled in gray and marked with a "c" are contaminants from ⁶⁶Ge, ⁶⁵Ga, and ⁶⁴Zn.]

[FIG. 9. β-tagged and gated prompt JUROGAM II coincidence spectra. In panel (a) the gate is set on the 836-keV transition with a 2.75- to 10-MeV β gate. In panel (b) the gate is set on the 840- to 841-keV transitions with a 3.25- to 10-MeV β gate. The inset in panel (b) illustrates a part of the coincidence spectrum gated by the 840- to 841-keV transitions with a 1.5- to 10-MeV β gate. Peaks labeled in dark gray are unidentified transitions, while the one labeled in gray and marked with a "c" is a contaminant from ⁶⁵Ga.]

1. T = 1 states

The angular distribution information [A₂ = 0.30(4)] and the value of the angular distribution ratio [R = 1.27(15)] obtained for the 963-keV peak suggest a stretched E2 character. Thus, on the basis of intensity and energy arguments, the 963-keV transition is assigned as the 2⁺₁ → 0⁺₁ transition in ⁶⁶As. Analysis of the γγ coincidences, with a gate set on the 963-keV transition and simultaneously varying the size of the β gate, reveals a peak located at 1226 keV (see Fig. 8).
When the gate is set on the 1226-keV transition, the most intense coincidence is seen with the 963-keV transition; thus, these two transitions can be concluded to form a cascade. The energy of the 4⁺₁ → 2⁺₁ transition found in ⁶⁶Ge is 1216 keV, which is rather close to 1226 keV. These arguments, along with the deduced angular distribution ratio of R = 1.64(58) for the 1226-keV transition, suggest that it is the second transition in the ⁶⁶As T = 1 band, deexciting a 4⁺₃ state at 2189 keV. Further investigation of the coincidence events gated by the 1226-keV transition reveals a γ-ray peak at an energy of 1486 keV. This transition stands out from the background with a rather large β gate of the order of 2-10 MeV, and it can be distinguished as a separate peak from the ⁶⁶Ge 6⁺₁ → 4⁺₁ 1481-keV transition. The 1486-keV transition is tentatively assigned to deexcite the T = 1, 6⁺₁ state at 3674 keV, because of the similarity with the corresponding transition found in ⁶⁶Ge and the observed coincidence relations. Coincident events with the 963- and 1226-keV lines are illustrated in Fig. 8(c), where the low-background region containing the candidates for the 4⁺₃ → 2⁺₁ and 6⁺₁ → 4⁺₃ transitions is shown in the inset. The peak at 1272 keV is a contaminant from ⁶⁴Zn. Further proof for the existence of the level at 3674 keV can be obtained from the other observed coincidences, as discussed in Sec. III B3.

[FIG. 10. β-tagged and gated prompt JUROGAM II spectra with the gate on the (a) 355-keV and (b) 379-keV γ-ray transitions. The size of the β gate is 3.25-10 MeV in both panels. Peaks labeled in gray and marked with a "c" are contaminants from ⁶⁵Ga and ⁶²Ga, while the one labeled in dark gray is an unidentified transition.]

2. T = 0 states

The 836-keV transition, seen in both the delayed and prompt spectra, is assigned to deexcite the lowest T = 0, 1⁺₁ level. This is supported by the observed high intensity of prompt γ rays and the conclusions drawn from the delayed coincidence data. Furthermore, both the extracted angular distribution coefficient [A₂ = −0.36(3)] and the value of the angular distribution ratio [R = 0.70(12)] are indicative of a stretched ΔI = 1, M1 transition. The prompt coincidences seen with a gate on the 836-keV transition are shown in Fig. 9(a).
The most intense coincidences, when the β gate is relaxed slightly, occur with the 394- and 670-keV transitions. Both of these transitions were also seen in the delayed spectra; thus it can be assumed that these three transitions form a T = 0 cascade (Band 3). The angular distribution information obtained for the 394- and 670-keV lines suggests that they are both stretched ΔI = 2, E2 transitions. Taking into account the γ-ray intensities deduced from the delayed data (Table II), the 394- and 670-keV transitions are assigned to deexcite a 3⁺ state at 1230 keV and a 5⁺₃ state at 1900 keV, respectively. It was confirmed earlier that the isomeric transition, with an experimental conversion coefficient corresponding to an E2 character, feeds the state at 1230 keV. Therefore, the isomeric state at 1354 keV is assigned as 5⁺₁. The nonobservation in the prompt data of the 1007-keV transition, which clearly belongs to the same T = 0 cascade as the 836-, 394-, and 670-keV transitions, might be attributable to the nonyrast nature of the level at 2907 keV, added to the favored branching of the 1553-keV transition, which deexcites the same state. Recalling the experimental conversion coefficient, which suggests an E2 character for the isomeric 114-keV γ-ray transition feeding the state at 2907 keV, the states at 2907 and 3021 keV can be assigned as 7⁺₃ and 9⁺₁, respectively.

The most intense coincidence with the 963-keV transition appears to be the 379-keV line, as illustrated in Fig. 8; hence, the 379-keV transition is concluded to feed the 2⁺₁ state at 963 keV from another T = 0 sequence. The angular distribution coefficient [A₂ = −0.39(9)] and the angular distribution ratio [R = 0.77(6)] obtained for the 379-keV transition strongly imply a stretched ΔI = 1, M1 character for this γ ray; therefore, a spin assignment of 3⁺₂ is made for the T = 0 level at 1342 keV. The 379- and 355-keV transitions are seen in strong mutual coincidence. The 355-keV line is also seen in coincidence with the 728-, 521-, and 394-keV mutually coinciding transitions, which in turn are seen from below via the 1⁺₁ → 0⁺₁ 836-keV transition. This supports the fact that the 1137-keV transition lies between the 355- and 379-keV transitions. Both of these γ-ray transitions naturally see the 1137-keV line, as can be noted from Figs. 10(a) and 10(b). The 379-, 1137-, and 355-keV transitions are concluded to belong to the same T = 0 band (Band 2). The angular distribution coefficients and ratios suggest an E2 character for both the 1137- and the 355-keV transitions. Therefore, spin assignments of 5⁺₄ and 7⁺₂ are made for the T = 0 levels at 2479 and 2833 keV, respectively. It should be noted that the γ-ray energies of the parallel branches, consisting of the 963-, 379-, and 1137-keV transitions and of the 836-, 394-, 521-, and 728-keV transitions, add up to the same sum energy of 2479 keV. The angular distribution ratio obtained for the 728-keV transition, partially deexciting the 5⁺₄ level at 2479 keV, has a value expected for an M1 character, whereas the R value of the subsequent 521-keV transition is consistent with a mixed M1/E2 transition. Based on these numbers, the level at 1751 keV is tentatively assigned as I = 4.
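The parallel-branch bookkeeping can be verified directly from the transition energies:

$$963 + 379 + 1137 = 2479\ \mathrm{keV} = 836 + 394 + 521 + 728,$$

with the running sums reproducing the quoted levels at 963, 1342, and 2479 keV in Band 2 and at 836, 1230, 1751, and 2479 keV in the parallel cascade.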
The 556-keV transition is seen in coincidence with the 355- and 963-keV transitions, where the coincidence with the latter γ ray seems to be more intense. For this reason the 556-keV transition is assigned to feed the 2⁺₁ state at 963 keV from a T = 0 state located at 1519 keV. The observed coincidences illustrated in Figs. 8(a), 8(b), and 10(a) all show a peak at 960 keV, which corresponds exactly to the energy difference between the 2479- and 1519-keV levels. The angular distribution coefficient and ratio suggest an E2 character for the 556-keV transition, which implies that the state at 1519 keV is 4⁺₁. This would mean that the 960-keV transition from the 2479-keV, 5⁺₄ state to the 1519-keV, 4⁺₁ state should be of M1 type. Unfortunately, it was not possible to extract the R value with a small enough uncertainty to fix the multipolarity of the 960-keV transition. Therefore, the level at 1519 keV is only tentatively assigned as I = 4.

3. The (T = 1, 6⁺) state at 3674 keV

One of the most prominent peaks presented in Fig. 7(b), located at 836-841 keV, is most probably a triplet. As previously mentioned, the 836-keV line represents the transition from the 1⁺₁ state in Band 3, and the formerly known 841-keV γ-ray transition feeds the isomeric 9⁺₁ state [9] in Band 4. Looking at the coincidences illustrated in Figs. 8(a), 8(c), and 10(a), a line at 840 keV can be observed in each of the figures, which cannot be associated with either of the two previously mentioned γ-ray transitions. After fixing most of the levels within the T = 1 and the different T = 0 bands, the 840-keV line fits, within error limits, between the T = 0, 7⁺₂ and the tentative T = 1, 6⁺₁ levels located at 2833 and 3674 keV, respectively, and satisfies the observed coincidences. The angular distribution ratio [R = 0.60(21)], which is consistent with a stretched M1 transition, was derived for the 840-keV line because it can be effectively separated from the other members of the triplet by clean γγ coincidence relations.

4. Short-lived structures above the isomeric states

The recoil-isomer tagging method [38,39] was employed alone and in conjunction with the β-tagging method. The structures above the 9⁺₁ isomeric state, previously reported in Ref. [9], were also observed in the present study, and the ordering of the transitions was confirmed on the basis of the γγ analysis. The 841-keV transition is clearly the most intense, as can be noted from Fig. 11(a). Therefore, it has to be feeding the isomeric 9⁺₁ state at 3021 keV. Both the angular distribution coefficient [A₂ = 0.30(5)] and the ratio [R = 1.17(3)] deduced for the 841-keV transition are typical for an E2 transition. This leads to a spin assignment of 11⁺₁ for the level at 3862 keV. A second intense transition in Fig. 11(a) is the 1462-keV line, with angular distribution values indicating an E2 character. A strong mutual coincidence observed between the 841- and 1462-keV lines suggests that the latter transition feeds the 11⁺₁ state and depopulates a 13⁺₁ level at 5325 keV; hence, they belong to the same T = 0 band. The 1206-keV transition is observed in coincidence with both of the previously mentioned lines, and the extracted angular distribution ratio implies an M1 character. The 1206-keV transition is therefore tentatively assigned to depopulate a 14⁺₁ state at 6530 keV, in good agreement with Ref. [9].

[FIG. 11. Recoil-isomer- and β-tagged JUROGAM II singles spectra. In panel (a) all delayed γ-ray transitions associated with ⁶⁶As are used as a tag, with a β-energy gate of 1.5-10 MeV. In panel (b) only the delayed γ-ray transitions originating from states below the lower-lying isomeric 5⁺₁ state in ⁶⁶As are used as a tag, along with a β gate of 1.5-10 MeV. Peaks labeled in dark gray are unidentified transitions, while the peak labeled in gray and marked with a "c" is a contaminant from ⁶⁵Ga.]

The 722-keV transition [R = 1.49(28)] is observed
in coincidence with the 841-keV line, simultaneously with the 1946- and 1262-keV transitions, but not with the relatively strong 1206- and 1462-keV transitions. The 722-keV transition is tentatively assigned to deexcite the 14⁺₁ state at 6530 keV and to feed a 12⁺₁ state at 5808 keV, which in turn is deexcited by the 1946-keV transition.

Peaks labeled in gray in Figs. 11(a) and 11(b) are γ-ray transitions which could not be associated with any of the competing reaction products nor linked with the other observed ⁶⁶As γ-ray transitions. The 894-, 909-, and 1133-keV transitions were also reported in Ref. [9], but the authors were unable to place them in the level scheme.

Figure 11(b) shows a β- and isomer-tagged JUROGAM II singles spectrum with a 0- to 3-μs γ-recoil time gate suitable for the lower-lying 5⁺₁ isomeric state. Three intense peaks at 841, 902, and 995 keV are observed. The latter two were confirmed to be in mutual coincidence but could not be connected to any other prompt γ-ray transitions found in ⁶⁶As. The 902- and 995-keV transitions were investigated with very strict β and time gates and can be unambiguously associated with ⁶⁶As. As the 995-keV transition is found to be slightly more intense than the 902-keV transition, the 995-keV line is assigned to feed directly into the isomeric 5⁺₁ state. The angular distribution ratios obtained for both the 902- and 995-keV lines favor E2-type transitions; thus, the levels at 2349 and 3251 keV are tentatively assigned as 7⁺₁ and 9⁺₂, respectively. There seems to be a small peak at 835 keV right next to the 841-keV peak, as illustrated in Fig. 11(b). In addition, there are some events detected around 1486 keV, which are visible in both panels of Fig. 11. One could speculate that an 835-keV M1 transition from a T = 1, 4⁺₃ state could feed directly into the isomeric 5⁺₁ state. However, this scenario could not be confirmed unambiguously during the data analysis; hence, it is left as an open question.

IV. DISCUSSION

The structure of ⁶⁶As has been studied theoretically by Hasegawa et al. [36] and Honma et al. [12]. Both of these studies were based on SM calculations using the 2p₃/₂, 1f₅/₂, 2p₁/₂, and 1g₉/₂ single-particle orbits as a model space. Differences between these calculations arise mainly from the interaction used and the single-particle energies. Calculations identical to those applied in Ref. [12], using the modern effective JUN45 interaction, have been employed in the present work to compare with the experimental data. These calculations were extended beyond the isomeric structures to include the properties of all states and the E2/M1 transition strengths. The resulting theoretical level energies are illustrated in Fig. 12.

A. Isomeric states and E2 transition strengths

The studies presented in Refs. [36] and [12] both suggest that the structure of the experimentally observed isomeric 9⁺₁ and 5⁺₁ states can be interpreted as fully aligned proton-neutron pairs in the g₉/₂ and f₅/₂ orbitals, respectively. This conclusion seems to be valid according to the experimentally confirmed spins and parities of these states. It is interesting to compare the different theoretical E2 transition strengths for the 9⁺₁ → 7⁺₃ and 5⁺₁ → 3⁺₁ transitions with the ones derived from the experimental lifetimes and conversion coefficients. The corresponding B(E2) values are listed in Table III, where the experimental B(E2) values, as reported in Ref. [8], are also included for comparison.
It should be noted that those values are derived from the experimental half-lives (superseded later in Ref. [9]) and conversion coefficients. The extended P + QQ interaction with monopole corrections (hereafter called EPQQM) used in Ref. [36] produces B(E2) values which differ approximately by factors of 0.1 and 10 from the respective experimental values. The experimental level energies of the isomeric 9⁺₁ and 5⁺₁ states are, however, roughly reproduced by the calculation. The present calculation using the JUN45 interaction produces a B(E2; 5⁺₁,th → 3⁺₂,th) value which agrees well with the experimental one, suggesting that the model correctly describes the wave functions of the states involved in the transition. Nevertheless, the predicted level energy for the isomeric 5⁺₁,th state is 0.95 MeV below the experimental counterpart. The theoretical B(E2; 9⁺₁,th → 7⁺₂,th) is again too low by a factor of 10, and the 9⁺₁,th level energy is 0.52 MeV below the experimental isomeric 9⁺₁ state. Nucleon occupancies of the orbitals from the present SM calculation are presented in Table IV. This theoretical study and the one presented in Ref. [36] both predict an ∼20% occupation of valence nucleons in the g₉/₂ orbit in the case of the isomeric 9⁺₁,th state, while for the other calculated levels the g₉/₂ occupation is, on average, only 3%-6%. This is especially true for the theoretical 7⁺₂,th state, into which the isomeric 9⁺₁,th state is expected to decay. This result implies that the isomerism of the 9⁺₁,th state is indeed attributable to its structural difference compared to the 7⁺₂,th state. However, the present SM calculation predicts another 7⁺₁,th state with almost identical orbital occupancies as obtained for the isomeric 9⁺₁,th state. This structural similarity is naturally reflected in the pronounced E2 transition strength, which is of the order of 460 e²fm⁴. Taking this fact into account, and remembering the theoretical underestimation of the B(E2; 9⁺₁,th → 7⁺₂,th) value, one can speculate whether the mixing of the different 7⁺ states is correctly reproduced by the theory. Alternatively, the effect of the g₉/₂ orbit on the structure of excited states in ⁶⁶As could possibly be refined. The isomerism of the 5⁺₁ state is not likely to originate from major structural differences, at least in the light of the calculated orbital occupation numbers, but can simply be explained by the low decay energy.
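Since the experimental B(E2) values in Table III follow from the measured half-lives and conversion coefficients, the conversion is worth making explicit. Below is a minimal sketch using the standard single-photon E2 rate relation T(E2) ≈ 1.22×10⁹ Eγ⁵ B(E2) (T in s⁻¹, Eγ in MeV, B(E2) in e²fm⁴); a single decay branch is assumed, and the numbers in the usage comment simply reuse values quoted earlier in the text.

```python
import math

def be2_from_half_life(t_half_s, e_gamma_mev, alpha_tot, branching=1.0):
    """B(E2) in e^2 fm^4 from an isomeric half-life.

    The gamma-ray decay rate is the total decay rate scaled by the branching
    ratio and reduced by internal conversion:
        lambda_gamma = branching * ln(2) / t_half / (1 + alpha_tot)
    """
    lam_gamma = branching * math.log(2) / t_half_s / (1.0 + alpha_tot)
    return lam_gamma / (1.22e9 * e_gamma_mev ** 5)

# e.g. for the 124-keV decay of the 1.15(4)-us isomer with alpha ~ 0.3:
# be2_from_half_life(1.15e-6, 0.124, 0.31)
```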
B. Oblate 3⁺ shape isomer

The existence of a 3⁺₁,th shape isomer was predicted in Ref. [36]. The prediction of the isomerism arises from the calculated quadrupole moments, from which one can infer an oblate shape for the 3⁺₁,th state and prolate shapes for the other low-lying states. However, the predicted isomeric state was not found in the present study. The experimental setup used in this work has certain limitations for observing fast decays. This is attributable to the ∼500-ns flight time of the fusion residues through the RITU separator. This limit is cross-section dependent, but if the isomer exists, the lifetime of the state should be of the order of >100 ns to be observed at the focal plane of RITU. Also, the 10-ns time resolution of the TDR does not permit the investigation of small time differences of the γ rays measured at the JUROGAM II target position.

Recent experimental work on ⁶⁶As, reported in Ref. [10], led to the discovery of a 3⁺₂ state with a 1.1(3)-ns half-life, which was determined on the basis of the centroid-shift method [40]. This state is proposed to be the predicted oblate shape isomer and is deexcited by a strong 379-keV M1, a weaker 506-keV, and a nonobserved 112-keV γ-ray transition. In the present study a 3⁺₂ state, which is deexcited similarly by the strong 379-keV M1 and the weaker 506-keV (E2) γ-ray transitions, was identified. It is reasonable to assume that it is the same 3⁺₂ state which has been discovered in both experiments. However, no 112-keV γ rays originating from ⁶⁶As were observed in the present study. In Ref. [10] the nonobservation of the 112-keV transition is explained by the germanium-array detection efficiency, which was reduced owing to the strong absorption in the CsI charged-particle ancillary detectors used in that experiment. With the JUROGAM II array such limitations were not present, and therefore the reported 112-keV transition with 6% intensity should have been observed.

If the 3⁺₂ state has an ∼1-ns half-life, the γ-ray emission should take place 0-30 mm downstream from the JUROGAM II target position. This would cause a slight drop in the detection efficiency of the 379- and 506-keV γ rays but, more importantly, the change in the detection angle would lead to an incorrect Doppler correction, or a shift of a few keV in the measured γ-ray energy, in the 75.5° and 104.5° JUROGAM II rings. This should be observable in the γ-ray spectrum as a broadened or skewed peak shape. The peak shapes of the 355-, 379-, and 394-keV transitions were examined, but no differences in their respective shapes were observed.

C. T = 1 and T = 0 states in ⁶⁶As

The present SM calculation produces the level energies of the T = 1, 2⁺₁ (967 keV), 4⁺₃ (2222 keV), and 6⁺₁ (3891 keV) states in relatively good agreement with the experimental 2⁺₁ (963 keV), 4⁺₃ (2189 keV), and (6⁺₁) (3674 keV) states (see Fig. 12). Recent theoretical work by Kaneko et al. [11], which again is based on calculations identical to those used in the present work, predicts the CED between the T = 1 states in odd-odd N = Z systems and their analog even-even partners. Recent experimental work on ⁶⁶As [10] proposes a T = 1, 6⁺₁ state at an energy of 3637 keV, which results in the initially positive CED trend between ⁶⁶As/⁶⁶Ge acquiring a sudden negative gradient at spin 6ℏ. In Ref. [10] this unusual behavior, along with the unique negative CED trend observed within the A = 70 pair (⁷⁰Br/⁷⁰Se), was accounted for by the different mixing of competing shapes between the isobaric analog states. However, in Ref. [11] the SM calculations correctly reproduce the negative CED trend for the A = 70 pair with a nearly static oblate deformation in ⁷⁰Se. The main reason for the anomalous trend in the latter work is found to be the enhanced neutron and reduced proton excitations to the g₉/₂ orbit owing to the electromagnetic spin-orbit interaction. In the present work, the candidate for the T = 1, 6⁺₁ state is found to lie at 3674 keV, 37 keV higher than proposed in Ref. [10]. This leads to a moderately positive CED behavior within the A = 66 pair, as illustrated in Fig. 13.

[FIG. 13. (Color online) The experimental CED systematics for the mass A = 66, 70, 74, and 78 systems (solid lines). The calculated CED with the JUN45 interaction for the mass A = 66 pair is shown as a dashed line to compare with the experimental data. Data are taken from Refs. [6,37,41-43].]
A similar trend is also predicted by the present theoretical calculation, if one particularly considers the first 6⁺th states (see Fig. 13). Figure 13 also shows heavier systems for comparison. In the case of the mass A = 74 and 78 pairs, large positive and almost flat CED trends are observed, respectively. Generally, the positive CED trends are explained by the Coriolis antipairing effect, i.e., the breaking of valence-nucleon pairs when angular momentum is generated [6]. This causes the even-even N = Z − 2 partner to have a greater reduction in Coulomb energy, because it has more pp pairs than the odd-odd N = Z partner of the multiplet. In the case of the A = 78 pair, the almost flat CED is proposed to be attributable to the deformed shell gap at Z, N = 38, which inhibits shape changes and suppresses pairing effects [44]. The observed CED trend for the A = 66 pair is only slightly steeper than the one observed for the A = 78 pair. Clearly, the Z, N = 38 shell gap should not have much of an influence in the case of ⁶⁶As. In addition, taking into account the recent theoretical result for the mass A = 70 pair, coexisting shapes may not necessarily be the origin of the observed flatness in the CED behavior in the case of the mass A = 66 pair. In Ref. [11] the single-particle energy-shift component, which is greatly affected by the electromagnetic spin-orbit interaction, is found to flatten the CED trend for the A = 66 system, as it is purely negative, as in the case of the A = 70 pair. This hints toward the importance of the g₉/₂ orbit and its interplay with the fp-shell orbits in the structure of ⁶⁶As. Further discussion of the CED and its implications around the N = Z line will be carried out in a future publication [45], where new results on the full A = 66 isospin triplet will be presented.

In Fig. 2 the tentative 840-keV γ-ray transition connecting the supposed 6⁺₁ state and the 7⁺₂ state is very interesting. The quasideuteron description [46] can be used to estimate and predict the isovector M1 transition strengths in odd-odd N = Z nuclei. According to this approximation, the M1 transition strength is greatly dependent on the characteristics of the single-particle orbits contributing to the level configuration. In the case of j = l + 1/2 orbitals, the spin of the nucleon and the orbital angular momentum are aligned, and strong isovector M1 transitions are favored. If the single-particle orbital is of the j = l − 1/2 type, the spin and orbital parts are out of phase, resulting in small M1 matrix elements. Obviously, as the low-lying excitations in ⁶⁶As are presumably mainly based on the f₅/₂ (j = l − 1/2) and p₃/₂ (j = l + 1/2) configurations, a strong M1 transition between the lowest T = 0 and T = 1 states, i.e., between 2⁺₁ and 1⁺₁, is experimentally missing. The situation, however, might be different at higher values of angular momentum. As already noticed in the case of the 9⁺₁ isomeric state, the importance of the g₉/₂ (j = l + 1/2) orbit becomes evident. If one considers the situation where the amplitude of the g₉/₂ component increases along with the spin within the T = 1 band, M1 transitions might become the dominant decay mechanism over E2 transitions. This might be the case for the 6⁺₁ state, where the 840-keV γ-ray branch to the T = 0, 7⁺₂ state is greater (82%) than the 1486-keV γ-ray branch feeding the T = 1, 4⁺₃ state (18%). The B(M1) value for the 6⁺₁ → 7⁺₂ transition can be estimated in a manner similar to that used in Ref. [47], by using the experimental branching ratio and the recently measured B(E2; 2⁺₁ → 0⁺₁) value in ⁶⁶Ge [48]. Assuming the B(E2) value does not significantly change between the higher-lying T = 1 states in ⁶⁶Ge, the B(M1; 6⁺₁ → 7⁺₂) value is estimated to be ∼1 μN², which is surprisingly large. The present SM calculation does not support this scenario in terms of M1 transition strengths and g₉/₂ occupancy (see Table IV). If the monopole matrix elements are correctly described by the theory, this should lead to a rather high M1 transition strength in the case of the 3⁺₂ state decay, to explain the experimentally observed favoring of the M1 branch over the E2 branch, but such an enhancement was not predicted.
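The branching-ratio estimate quoted above can be sketched as follows, using the standard single-photon rate relations T(M1) ≈ 1.78×10¹³ Eγ³ B(M1) and T(E2) ≈ 1.22×10⁹ Eγ⁵ B(E2) (Eγ in MeV, B(M1) in μN², B(E2) in e²fm⁴). The B(E2) input must be taken from the ⁶⁶Ge measurement of Ref. [48] and is therefore left as a parameter here.

```python
def bm1_from_branching(frac_m1, frac_e2, e_m1_mev, e_e2_mev, be2_e2fm4):
    """Estimate B(M1) of one decay branch from the competing E2 branch.

    The ratio of partial decay rates equals the ratio of the branching
    fractions: T(M1) / T(E2) = frac_m1 / frac_e2.
    """
    t_e2 = 1.22e9 * e_e2_mev ** 5 * be2_e2fm4      # E2 partial rate (1/s)
    t_m1 = t_e2 * frac_m1 / frac_e2                # M1 partial rate (1/s)
    return t_m1 / (1.78e13 * e_m1_mev ** 3)        # B(M1) in muN^2

# e.g. for the 840-keV (82%) vs 1486-keV (18%) branches of the 6+ state,
# with BE2_66GE taken from Ref. [48]:
# bm1_from_branching(0.82, 0.18, 0.840, 1.486, be2_e2fm4=BE2_66GE)
```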
The theoretically predicted level energies of the low-lying T = 0 states are in relatively good agreement with the experimental ones. The agreement is particularly good in the case of Band 3 (T = 0), which is connected to the isomeric states. The theory predicts three 7⁺ states with similar energies, which agrees extremely well with the experimental data. The theoretical description fails in the case of Bands 4 and 5 in terms of excitation energy and level spacings. Despite the daunting task of theoretically describing odd-odd N = Z systems, the current model is found to do very well in the case of the low-lying excitations of ⁶⁶As. This fact is reflected in the experimental and theoretical B(E2; 5⁺₁ → 3⁺₁) values, which are in remarkable agreement.

V. SUMMARY

The odd-odd N = Z nucleus ⁶⁶As has been experimentally studied in detail. Prompt and delayed structures have been observed utilizing the RBT and recoil-isomer tagging methods. The half-lives of two isomeric states and the internal conversion coefficients of the γ rays depopulating these levels were measured with improved accuracy, yielding the experimental B(E2) values. Some of the newly observed prompt γ-ray transitions were also identified in Ref. [10]. The arrangement of the γ-ray transitions differs slightly between these two studies, especially within the T = 0 structures. The level energies of the T = 1, 2⁺₁ and 4⁺₃ states are established in agreement with the ones reported in Ref. [10]. However, the candidates for the T = 1, 6⁺₁ state differ in terms of level energy. Depending on which one of the experimental 6⁺₁ energies is used, a somewhat different behavior of the CED trend is obtained. The SM calculations using the effective JUN45 interaction predict that the CED should have a positive trend, which is consistent with the current data. Low-lying T = 0 states are described well by the theory in terms of excitation energy when compared to the experimental counterparts. The same holds for the T = 1 band members. A disagreement between experiment and theory was found for the B(E2) and B(M1) strengths of the 9⁺₁ → 7⁺₃ and 6⁺₁ → 7⁺₂ transitions, respectively. This discrepancy is most likely attributable to the theory not correctly reproducing the behavior of the g₉/₂ orbit at higher spins.
[FIG. 3. β-tagged delayed ⁶⁶As γ rays detected in the (a) planar and (b) clover germanium detectors. The low-energy threshold for the β particles was set to 1 MeV. Transitions with gray labels (and marked with a "c") in panel (a) are contaminants from the ⁶⁵Ga β decay feeding the excited states of ⁶⁵Zn. The time gate for the γ-recoil time difference is 0-21 μs. Inset in panel (a): the intensity ratio of the β-tagged 114- and 61-keV γ rays observed in the planar detector as a function of the γ-recoil time difference. Information on the ⁶⁵Ga contamination in the ⁶⁶As 114-keV peak can be obtained from the flat part of the curve (see Sec. III A2 for details).]

[FIG. 4. β-tagged and gated delayed γ-ray spectra from planar-clover matrices. In panels (a) and (b) the gate is set on the 114-keV transition detected in the planar, whereas in panels (c) and (d) the gate is set on the 124-keV transition. In all panels the β gate was set to 0.5-10 MeV and the γ(planar)-recoil time gate to 0-21 μs. Panels (b) and (d) have a narrow −100- to 100-ns γγ time gate applied to identify only prompt γ-ray coincidences.]

[FIG. 6. (Color online) Half-life data and fits used to extract the half-lives of the (a) 3021-keV and (b) 1354-keV isomeric states, respectively. The dashed line indicates the centroid of the time distribution, which corresponds to the half-life of the state. Details of the time spectra and the determination of the half-lives are explained in the text.]

[FIG. 12. (Left) The energy levels of ⁶⁶As predicted by the present SM calculation. The width of the arrow corresponds to the relative value of the calculated E2 (solid arrow) and M1 (dashed arrow) transition strengths. The dashed levels are theoretically predicted but not observed in the experiment. (Right) Comparison of the experimental (Exp) and theoretical (Th) level energies for the T = 1 band (right) and the different T = 0 sequences.]

[TABLE I. The prompt γ-ray transitions measured for ⁶⁶As: the energy of the γ rays (Eγ) and the relative γ-ray intensity (I_rel), normalized to 100 for the 2⁺₁ transition (gate on 963 keV).]

[TABLE II. The γ rays measured for ⁶⁶As at the focal plane of RITU. Intensities are relative to the 1⁺₁ → 0⁺₁ 836-keV transition. Columns: Eγ (keV), I_rel (%), E_i (keV), I^π.]

[TABLE III. Comparison of the experimental and SM-predicted γ-ray transition strengths for ⁶⁶As.]

[TABLE IV. Nucleon occupation numbers in the four model-space orbits for the low-lying T = 1 and T = 0 states in ⁶⁶As.]
Influence of assessment site in measuring transcutaneous bilirubin

ABSTRACT

Objective: To investigate the influence of the site of measurement of transcutaneous bilirubin (forehead or sternum) on the reproducibility of results as compared to plasma bilirubin.

Methods: A cohort study including 58 term newborns with no hemolytic disease. Transcutaneous measurements were performed on the forehead (halfway between the hairline and the glabella, from the left toward the right side, making consecutive determinations one centimeter apart) and the sternum (five measurements, from the suprasternal notch to the xiphoid process, with consecutive determinations one centimeter apart) using the Bilicheck® device (SpectRx Inc, Norcross, Georgia, USA). The correlation and agreement between both methods and plasma bilirubin were calculated.

Results: There was a strong linear correlation between serum bilirubin and both transcutaneous determinations, at the forehead and at the sternum (r=0.704; p<0.01 and r=0.653; p<0.01, respectively). There was correspondence of the mean values of transcutaneous bilirubin measured on the sternum (9.9±2.2mg/dL) with the plasma levels (10.2±1.7mg/dL), but both differed from the values measured on the forehead (8.6±2.0mg/dL), p<0.05.

Conclusion: In term newborn infants with no hemolytic disease, measurement of transcutaneous bilirubin on the sternum had higher accuracy against serum bilirubin than measurement on the forehead.

INTRODUCTION

Most newborn (NB) infants develop jaundice in the first week of life. It occurs in up to 92% of term and late preterm NB infants. (1) The occurrence of high serum bilirubin levels for a prolonged time may permanently damage structures of the Central Nervous System, such as the globus pallidus, subthalamic nuclei, hippocampus, and oculomotor nucleus, among others, leading to kernicterus. (2) The indication of phototherapy for treating neonatal jaundice depends on serum bilirubin levels, the presence of blood incompatibility, weight, chronological and gestational ages, and associated comorbidities. (3) Hence, the American Academy of Pediatrics recommends that every NB have its bilirubin level measured before hospital discharge and that this measurement be repeated in the first days after discharge. (4)

Invasive bilirubin dosing demands drawing blood, with many inconveniences, such as technical difficulties in venous puncture, delay in obtaining results, discomfort caused by pain, (5,6) and parental stress, (7) so it is important to minimize not only the amount of blood the NB loses in blood draws, but also the number of draws. (8) In that sense, in the early 1980s, non-invasive (transcutaneous) techniques were developed to measure bilirubin and minimize the inconvenience of blood draws. The first equipment developed only correlated the intensity of the yellow skin color with bilirubinemia, suffering the interference of many factors, such as the amount of melanin, hemoglobin, and connective tissue. (5) In the past few years, a new generation of devices for the transcutaneous measurement of bilirubin has been produced. (6) They differ from previous models in being based on microspectrometry, which enables determining the optical density of bilirubin, hemoglobin, and melanin in the subcutaneous layer of the NB infant's skin.
Excluding the factors that interfere in the determination of bilirubin allows its optical density to be measured in the subcutaneous capillaries and tissues with greater accuracy, (6) thus enabling the replacement of plasma measurements by transcutaneous measurements. (9) This technique has been studied in our setting, and a good correlation between transcutaneous and serum bilirubin (10) has been found, even in a multiracial population. (11) Recently, a meta-analysis gathered the results of 21 studies comparing transcutaneous bilirubin to serum bilirubin in preterm infants, confirming the accuracy of this technique also in this NB population. (12) On the other hand, bilirubin measurement may suffer the influence of the site of measurement: forehead or sternum. (13) The literature presents conflicting results: it has been demonstrated in term NB infants that transcutaneous measurements on the forehead and sternum are equivalent, (10,14-16) but also that sternal measurement yields higher bilirubin levels than the forehead. (17-19)

OBJECTIVE

To verify the influence of the measurement site of transcutaneous bilirubin, forehead or sternum, on the reproducibility of results, as compared to plasma bilirubin.

METHODS

After approval by the Research Ethics Committee (REF CEP/Einstein 08/896), a prospective cohort study was conducted including healthy term infants born at Hospital Israelita Albert Einstein, a private tertiary-care hospital in the city of São Paulo (SP), between April and September 2009. NB infants with gestational age ≥37 weeks and <72 hours of life were included. Newborn infants with hemolytic disease, with skin abnormalities, or with previous phototherapy treatment were excluded. Jaundice due to hemolytic disease was defined as that with early onset (in the first 48 hours of life) or with laboratory values incompatible with physiological jaundice (occurrence of reticulocytosis, positive Coombs or eluate tests).

After obtaining informed consent from the parents or legal guardians, NB infants with a clinical indication for plasma bilirubin measurement, according to the routine assessment by the neonatologist, also had bilirubin measured transcutaneously. Immediately after the collection of plasma bilirubin, always the same researcher measured transcutaneous bilirubin on the forehead and on the sternum, using the Bilicheck® device (SpectRx Inc, Norcross, Georgia, USA). The device was calibrated before each measurement, according to the manufacturer's instructions, to ensure measurement accuracy. (20) For each measurement, the device was positioned on the infant's skin, and five individual measurements at different points led to one result. On the forehead, the five measurements were taken halfway between the hairline and the glabella, starting on the left toward the right side, one centimeter apart. On the sternum, five measurements were taken, starting at the suprasternal notch toward the xiphoid process, with consecutive determinations one centimeter apart.

The statistical analysis was performed by calculating Pearson's correlation coefficient for both transcutaneous measurements compared to the plasma measurements; Bland-Altman plots were then constructed to assess agreement; and, finally, one-way analysis of variance (ANOVA) was used to compare means, with the Student-Newman-Keuls test as a discriminatory post-test.
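For illustration, the agreement analysis described above can be sketched as follows, assuming paired arrays of transcutaneous and plasma values in mg/dL; this mirrors the described workflow, not the SigmaStat implementation.

```python
import numpy as np
from scipy import stats

def bland_altman(tc, plasma):
    """Correlation and Bland-Altman agreement between transcutaneous
    and plasma bilirubin measurements."""
    tc, plasma = np.asarray(tc, float), np.asarray(plasma, float)
    r, p = stats.pearsonr(tc, plasma)        # Pearson's linear correlation
    diff = tc - plasma
    bias = diff.mean()                        # mean difference (bias)
    loa = 1.96 * diff.std(ddof=1)             # 95% limits of agreement
    return {"r": r, "p": p, "bias": bias,
            "limits": (bias - loa, bias + loa)}

# comparing the three sets of means, as in the one-way ANOVA:
# stats.f_oneway(forehead, sternum, plasma)
```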
The calculated sample size was 50 NB infants, considering a difference between means to be detected of 1.0mg/dL, an expected standard deviation of 2.4mg/dL, (17) a test power of 0.80, and a significance level of 0.05. For the statistical analysis, the SigmaStat software, version 2.0, was used.

RESULTS

A total of 58 NB infants were studied, with a birth weight of 3221±402g (mean±standard deviation), gestational age of 38.4±1.2 weeks, one-minute Apgar score of 9.0±0.3, and five-minute Apgar score of 10.0±0.0. All NB infants were Caucasian, and measurements were taken at 1.8±0.9 days of life. A total of 94.8% of the NB infants included were born by cesarean section and 5.2% by vaginal delivery.

A good linear correlation was observed both between transcutaneous bilirubin measurements on the forehead and serum levels (r=0.704; p<0.01) and between transcutaneous measurements on the sternum and serum levels (r=0.653; p<0.01) (Figures 1 and 2). The differences in results between the transcutaneous measurements on the forehead and sternum and total serum bilirubin are shown in figures 3A and 3B. The mean difference between the transcutaneous measurements from the sternum and plasma bilirubin was 0.3mg/dL, a value below the difference found between the measurements taken on the forehead and the corresponding plasma bilirubin levels (1.6mg/dL). The comparison between the means of the bilirubin values found on the forehead and the sternum, as well as the corresponding serum bilirubin, is shown in figure 4. There was a correspondence of the values measured on the sternum with the plasma values, but both differed from the values measured on the forehead (p<0.05).

DISCUSSION

This study had the objective of verifying, in term NB infants with no hemolytic disease, the influence of the transcutaneous bilirubin measurement site on the accuracy of the results. The principal contribution of this study was to provide, with data obtained in our setting, information about an issue for which the international literature shows nonhomogeneous, and sometimes conflicting, results.

The accuracy of the measurement of transcutaneous bilirubin in relation to serum bilirubin has recently been demonstrated in a meta-analysis gathering data from 3527 patients published in 21 studies. In 16 of those studies, measurements were taken on the forehead, in 10 on the sternum, and in 3 on the abdomen. (12) Our data demonstrate that measurements taken on the sternum have a good correlation with serum measurements, unlike measurements taken on the forehead. Possibly as a result of continuous exposure to room light, bilirubin measurements in areas that are not covered by clothes, like the face, may present lower bilirubin values.

Many authors have tried to relate the measurement site of transcutaneous bilirubin (forehead, sternum, dorsum, knee, or foot) to the accuracy of the results, (13) and the measurements taken on the forehead and sternum presented the best correlations with serum bilirubin. (6,17,21,22) Like the results of this study, Maisels et al. found a better correlation with serum bilirubin when transcutaneous measurements were taken on the sternum (r=0.953) as compared to measurements on the forehead (r=0.914). (18) Similarly to our results, a review published in 2009, including 13 studies addressing the influence of the measurement site on the results of transcutaneous bilirubin, concluded that the sternum presents a good correlation with serum bilirubin.
However, in six studies included in this review, no differences were observed between measurements taken on the forehead and sternum, and, in two studies, measurements taken on the forehead were more reliable than those taken on the sternum.(13) In another study, transcutaneous measurement of bilirubin on the forehead was influenced by crying, and the lowest values were found in NB who were crying at the time of measurement.(23) In our study, NB infants were not crying upon measurement. In disagreement with this study, Bertini and Rubaltelli demonstrated that the precision of transcutaneous measurements taken on the forehead and sternum is comparable, but that sternum measurements are, on average, 0.8 to 0.9mg/dL higher.(10) Likewise, the average of transcutaneous bilirubin measurements taken on the trunk was demonstrated to be 0.4mg/dL higher than serum measurements, whereas measurements on the forehead were 0.3mg/dL (5µmol/L) lower than serum measurements; the authors concluded that plasma and transcutaneous measurements showed approximate values, but that, after hospital discharge, forehead measurements underestimated values by 5%, and they recommend using trunk measurements for bilirubinemia.(15) Results similar to those of this study were found in a group of 345 NB, in which a better correlation between blood bilirubin and transcutaneous measurements was found on the sternum than on the forehead.(19) Even though accuracy is similar for measurements taken on the frontal region and on the sternum, the correlation is greater for the latter, possibly due to the head's exposure to room light.(24) Disagreeing with the findings of this study, in our setting, a group of 44 NB, with an average gestational age of 35.1±3.4 weeks and an average birth weight of 2151±889g, 73% of them Caucasian, was analyzed between the second and third days of life; the authors did not find differences between serum bilirubin levels and transcutaneous bilirubin levels measured on covered areas of the forehead and sternum 24 hours after the beginning of phototherapy.(16)

This study presents some weaknesses, including its relatively small sample, composed exclusively of term Caucasian NB with no hemolytic disease. The inclusion of only Caucasian NB infants was unintentional and occurred at the time of patient recruitment, resulting in a sample that differs from the Brazilian population; however, this fact does not significantly compromise our conclusions. It is important, nonetheless, to highlight that the influence of the site of transcutaneous bilirubin measurement (forehead or sternum) relative to serum bilirubin should also be assessed in preterm infants, in NB of other races, and in those with hemolytic disease, before generalizing these conclusions to these groups.

CONCLUSION

In Caucasian term NB without hemolytic disease, transcutaneous bilirubin measurement taken on the sternum presents greater accuracy than forehead measurement, when compared to serum bilirubin.
2017-08-15T21:20:01.162Z
2014-03-01T00:00:00.000
{ "year": 2014, "sha1": "053393886b601ce9da052ef7ebe62afdb704a0ed", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/eins/v12n1/1679-4508-eins-12-1-0011.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "053393886b601ce9da052ef7ebe62afdb704a0ed", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
52181560
pes2o/s2orc
v3-fos-license
Relationship between perioperative thyroid function and acute kidney injury after thyroidectomy

Thyroid dysfunction may alter kidney function via direct renal effects and systemic haemodynamic effects, but information on the effect of thyroid function on postoperative acute kidney injury (AKI) following thyroidectomy remains scarce. We reviewed the medical records of 486 patients who underwent thyroidectomy between January 2010 and December 2014. Thyroid function was evaluated based on the free thyroxine or thyroid stimulating hormone levels. The presence of postoperative AKI was determined using the Kidney Disease: Improving Global Outcomes (KDIGO) criteria. AKI developed in 24 (4.9%) patients after thyroidectomy. There was no association between preoperative thyroid function and postoperative AKI. Patients with postoperative hypothyroidism showed a higher incidence of AKI than patients with normal thyroid function or hyperthyroidism (19.4%, 6.7%, and 0%, respectively; P = 0.044). Multivariable logistic regression analysis showed that male sex (OR, 4.45; 95% CI, 1.80–11.82; P = 0.002), preoperative use of beta-blockers (OR, 4.81; 95% CI, 1.24–16.50; P = 0.016), low preoperative serum albumin levels (OR, 0.29; 95% CI, 0.11–0.76; P = 0.011), and colloid administration (OR, 5.18; 95% CI, 1.42–18.15; P = 0.011) were associated with postoperative AKI. Our results showed that postoperative hypothyroidism might increase the incidence of AKI after thyroidectomy.

The incidence of AKI after total thyroidectomy was not significantly different from that after hemithyroidectomy (4.7% vs. 6.4%; P = 0.566). AKI incidence did not differ between patients who received postoperative T4 replacement and patients who did not (P = 0.286).

Thyroid function and AKI. There was no association between preoperative thyroid function and the occurrence of postoperative AKI (P = 0.661). For 72 patients, thyroid function was evaluated within 7 postoperative days. There was a higher incidence of AKI among patients with postoperative hypothyroidism than among patients with normal thyroid function or hyperthyroidism (19.4%, 6.7%, and 0%, respectively; P = 0.044).

Outcomes. Based on the outcome analyses, patients with AKI were more likely to stay longer in hospital (6.5 [4–11] days vs. 5 [4–7] days; P = 0.040) than patients without AKI (Table 4). None of the patients with postoperative AKI needed renal replacement therapy.

Discussion

In this study cohort, 4.9% of the patients undergoing thyroidectomy developed AKI based on the KDIGO criteria. There was a higher incidence of AKI in patients with postoperative hypothyroidism than in patients with normal thyroid function or hyperthyroidism. Multivariable analysis indicated that male sex, preoperative use of beta-blockers, low serum albumin levels, and colloid administration were associated with the occurrence of AKI. Moreover, postoperative AKI was associated with longer hospital stays.

The occurrence of postoperative AKI raises major concerns regarding patient safety, but few studies to date have assessed AKI incidence after thyroidectomy. Previous studies have reported that AKI develops in 0.8–10% of patients after non-cardiac surgeries15–17. Additionally, recent studies have shown that AKI develops in 4.4% of patients after unilateral total knee arthroplasty18. After colorectal surgery, AKI has been shown to develop in 9.6% of patients based on the Acute Kidney Injury Network (AKIN) criteria, and in 5.5% of patients based on the Risk, Injury, Failure, Loss, and End-stage Renal Failure (RIFLE) criteria19.
The prevalence of AKI in the present study is consistent with previous reports, although it should be noted that thyroidectomy is a relatively low-risk surgery in terms of bleeding or haemodynamic instability.

The impact of thyroid dysfunction on renal function has been emphasized in recent studies. Thyroid hormones play important roles in renal development and in the function of many transport systems along the nephron1,2,5,6. They also affect water and electrolyte metabolism, as well as cardiovascular function3,4,9. All these effects lead to important alterations in renal function in both hyperthyroidism and hypothyroidism. Serum creatinine levels are lower in cases of hyperthyroidism, whereas the opposite is noted in cases of hypothyroidism. The renal impairment associated with hypothyroidism is primarily believed to be a result of reduced cardiac output and the subsequent decrease in renal blood flow and GFR7,13,14,20,21. Thus, hypothyroidism may contribute to the exacerbation of pre-existing chronic kidney disease or to the occurrence of AKI in the presence of other renal insults. Before radioiodine scanning for thyroid cancer follow-up, patients must stop taking levothyroxine and be placed in a hypothyroid state. Kreisman et al. reported that such patients show a consistent elevation of serum creatinine levels in the hypothyroid state, and that this elevation is reversible after replacement of levothyroxine7. Our study showed comparable results in that patients with postoperative hypothyroidism exhibited a higher incidence of AKI than patients with normal thyroid function or hyperthyroidism (19.4%, 6.7%, and 0%, respectively).

The time over which AKI develops in patients with hypothyroidism remains unknown. In a previous study, serum creatinine levels were found to be elevated within 2 weeks of the onset of hypothyroidism7. We defined AKI using the KDIGO criteria based on the serum creatinine level within 7 days, and this may be an insufficient period to detect the effect of postoperative hypothyroidism on postoperative AKI. Nevertheless, the association between postoperative thyroid function and serum creatinine level should be carefully considered by clinicians, because hypothyroidism can lead to AKI in patients with normal preoperative creatinine levels. Although the elevation of serum creatinine levels typically normalizes following thyroid hormone replacement after a short period of hypothyroidism, slower and incomplete recovery has been noted in cases with more prolonged periods of severe hypothyroidism20. Furthermore, the changes in renal function in the hypothyroid state may also lead to potential alterations in therapeutic drug doses7.

Table 3. Univariate and multivariable regression analyses to identify factors associated with acute kidney injury after thyroidectomy. Odds ratios and 95% confidence intervals (CI) are expressed. The variables with P < 0.1 in univariate analyses were entered into the multivariable logistic regression model.

Our multivariable analysis agreed with previous studies in terms of factors associated with AKI, including male sex, preoperative use of beta-blockers, low serum albumin level, and colloid administration. Albumin is known to have a renoprotective effect, mediated by antioxidant and anti-inflammatory properties22,23. Moreover, it functions as a reservoir for signalling molecules and as a donor of nitric oxide (NO), which enhances renal blood flow and GFR by dilating vessels, thereby improving renal function24.
Furthermore, albumin tends to improve the microcirculatory performance that supports the maintenance of major organ functions25. Thus, both preoperative and postoperative hypoalbuminaemia have been identified as major risk factors for AKI in many previous studies18,26–28. Although controversial, beta-blockers have effects on renal function similar to those of albumin. Despite concerns about haemodynamic effects, including a decrease in renal blood flow, several beta-blockers are known to mitigate renal injury through antioxidant properties or the activation of NO synthase. Several animal studies have reported that beta-blockers reduce the severity of AKI or have renal protective effects29–31. However, Le Manach et al. reported that the use of preoperative beta-blockers was associated with an increased frequency of renal failure, because beta-blockers limit the compensatory increase of cardiac output when major blood loss occurs32. In a number of studies of the effects of beta-blockers in advanced liver disease, patients receiving beta-blockers had a high probability of developing AKI, and this was related to an inhibited cardiac compensatory reserve33,34. These harmful effects of beta-blockers could be related to renal hypoperfusion35. Furthermore, beta-blockers are recommended as third-line antihypertensive agents in patients with proteinuria according to the Kidney Disease Outcomes Quality Initiative (K/DOQI) guidelines36. Thus, the effectiveness of beta-blockers against postoperative AKI among patients in normal haemodynamic states after minor surgery remains unclear, and further studies are required to clarify the effects of beta-blockers on renal function.

The detrimental effect of colloid administration on renal function remains a major concern. The oncotic force of these solutions may decrease the renal filtration pressure, and this may inhibit renal function37. Another potential pathologic mechanism involves renal interstitial proliferation, macrophage infiltration, and tubular damage contributing to hydroxyethyl starch-induced nephrotoxicity38. Previous studies have shown adverse renal effects of colloid administration in critically ill and septic patients39,40, although sufficient evidence on this topic is not available for healthy patients under perioperative care. A retrospective study of 174 patients who underwent orthotopic liver transplantation showed a higher incidence of AKI after colloid administration as compared with albumin administration41. In contrast, another study showed no association between intraoperative colloid administration and increased AKI risk after living donor hepatectomy42. Despite the relatively healthy patient characteristics in the present study, colloid administration was associated with AKI after thyroidectomy. However, additional studies with a randomized, controlled design are needed to clarify the findings regarding the nephrotoxicity of colloid administration in surgical patients.

The prevalence of underlying diabetes mellitus was higher in the AKI group, but it showed no statistical relationship with AKI in the multivariable analysis. Diabetes is also known as one of the risk factors for postoperative AKI, and this association is thought to result from the possibility of pre-existing CKD43. This may affect our results, because we excluded patients with CKD. Additionally, the development of postoperative AKI is related to the type of operation.
Previous reports on the relationship between diabetes and postoperative AKI showed inconsistent results varying by procedure44. Lastly, the low incidence of postoperative AKI in our study may have affected the multivariable analysis findings.

The retrospective observational study design resulted in some important limitations. As serum creatinine was not measured on every single day of postoperative admission and follow-up, there might be some undetected cases of postoperative AKI. Nevertheless, the incidence of postoperative AKI in this study was 4.9%, which is consistent with previous reports. Additionally, in accordance with the KDIGO guidelines, the frequency of serum creatinine and urine output measurements to detect AKI should be individualized based on patient risk45. A lack of urine analysis, including sodium concentration and proteinuria, can be another concern. The analysis of urine sodium concentration could help identify the cause of AKI after thyroidectomy. Although the renal impairment associated with hypothyroidism is primarily believed to be a result of reduced cardiac output and the subsequent decrease in renal blood flow and GFR, thyroid hormone is also known to affect kidney function through direct effects on the renal tubular system. Additionally, the presence of proteinuria in diabetes can result in the loss of thyroid hormone, and diabetes itself can contribute to AKI incidence. Although we considered as many variables as possible and performed multivariable analysis to obtain reliable results, we could not eliminate the possibility of residual confounding variables. Additionally, the low incidence of postoperative AKI among patients involved in this retrospective study limited the power to detect the relationship between thyroid function and AKI, as well as the effects of the investigated variables on AKI. Nevertheless, this was a suitable strategy for evaluating the effect of thyroid function on postoperative AKI in the absence of prospective studies. Further prospective studies with well-constructed designs are needed to clarify the effect of thyroid function on postoperative AKI.

In conclusion, AKI developed in 4.9% of patients who underwent thyroidectomy. We found a higher incidence of AKI among patients with postoperative hypothyroidism than among patients with normal thyroid function or hyperthyroidism after thyroidectomy. As knowledge of the association between postoperative thyroid function and postoperative AKI may have important clinical implications, further prospective studies should be conducted to clarify the effect of thyroid function on postoperative AKI incidence in thyroidectomy patients.

Methods

After approval was obtained from the Institutional Review Board of Asan Medical Center, we reviewed the records of all patients who underwent thyroidectomy for thyroid cancer at Asan Medical Center, Seoul, Republic of Korea, between January 2010 and December 2014. Informed consent was waived due to the retrospective nature of our study. Of the 516 identified patients, we excluded those aged <18 years (n = 7) and those with chronic kidney disease (n = 23); thus, a total of 486 patients were finally included in the present study (Fig. 1). This manuscript adheres to the STROBE guidelines. We collected information regarding the baseline characteristics and the laboratory, intraoperative, and postoperative data from the computerized patient record system at our institution (Asan Medical Center Information System Electronic Medical Records).
Baseline characteristics included sex, age, body mass index, comorbidities (hypertension, diabetes mellitus, and cardiovascular disease), and the use of prescribed medications (beta-blockers and levothyroxine). Pathological diagnosis, tumour stage, and tumour size were also included as cancer characteristics. Laboratory data included sodium, potassium, chloride, calcium, haemoglobin, albumin, uric acid, and serum creatinine levels. To evaluate thyroid function, free thyroxine (FT4) and thyroid stimulating hormone (TSH) levels were recorded. Serum TSH and FT4 levels were measured using the TSH-CTK-3 immunoradiometric assay (IRMA) kit (DiaSorin S.p.A, Saluggia, Italy) and the FT4 radioimmunoassay (RIA) kit (Beckman Coulter/Immunotech, Prague, Czech Republic), respectively. Hyperthyroidism was defined as a TSH level <0.45 mIU/L with normal FT4 levels, or FT4 levels >2.0 ng/dL. Hypothyroidism was defined as a TSH level >4.5 mIU/L with normal FT4 levels, or FT4 levels <0.8 ng/dL46. The type of thyroidectomy and lymph node dissection performed were also recorded. Recorded intraoperative data included anaesthesia time, lowest mean blood pressure, volume of administered fluids, and use of vasoactive drugs. Anaesthesia time was defined as the time from anaesthesia induction to the transfer of the patient from the operating room. Intraoperatively, additional fluid or vasoactive drug administration was considered if systolic blood pressure remained below 80 mmHg.

The primary outcome of this study was the prevalence of AKI based on the Kidney Disease: Improving Global Outcomes (KDIGO) criteria. According to the KDIGO criteria, AKI was defined as an increase in the serum creatinine level by ≥0.3 mg/dL within 48 hours, or an increase in serum creatinine to ≥1.5 times baseline within 7 days45. Serum creatinine was measured on days 1, 2, 3, 5, and 7 after surgery, and at least once during that period in all patients. We did not use the urinary output criterion due to the unreliability of urine output measurements. The other outcome variables included the occurrence of postoperative intensive care unit (ICU) admission and the duration of hospital stay.

Statistical analysis. Data are presented as mean ± standard deviation, median (interquartile range), or number (percentage), as appropriate. The χ²-test or Fisher's exact test was used to compare categorical variables between the postoperative AKI groups. Continuous variables in these two groups were compared using the t-test, or the Mann-Whitney U test if the distribution was not normal. To identify the risk factors for postoperative AKI, logistic regression analysis was used to calculate ORs with 95% CIs. All variables in Tables 1 and 2 were tested, and variables with P < 0.1 in the univariate analysis were entered into the multivariable logistic regression model. The final models were determined by backward elimination with P < 0.05 as the model retention criterion. All P values less than 0.05 were considered statistically significant. All statistical analyses were performed using SPSS Statistics (version 21; IBM Corp, Chicago, IL).

Data Availability Statement. All data generated or analysed during this study are available from the corresponding author upon reasonable request.
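To make the two rule-based definitions above concrete, here is a minimal Python sketch of the KDIGO creatinine criterion and the TSH/FT4 classification. The function names and example values are hypothetical, and the "normal FT4" band (0.8–2.0 ng/dL) is inferred from the stated cut-offs rather than quoted directly; treat this as a schematic, not the study's actual analysis script.

```python
def kdigo_aki(baseline_scr, scr_48h, scr_7d):
    """KDIGO creatinine criterion as described in the text:
    AKI if serum creatinine rises by >=0.3 mg/dL within 48 h,
    or to >=1.5x baseline within 7 days (urine output not used)."""
    rise_48h = max(scr_48h) - baseline_scr >= 0.3
    rise_7d = max(scr_7d) >= 1.5 * baseline_scr
    return rise_48h or rise_7d

def thyroid_status(tsh, ft4, ft4_normal=(0.8, 2.0)):
    """TSH in mIU/L, FT4 in ng/dL; cut-offs as defined in the text.
    The 'normal FT4' band is inferred from the stated FT4 cut-offs."""
    lo, hi = ft4_normal
    if ft4 > hi or (tsh < 0.45 and lo <= ft4 <= hi):
        return "hyperthyroid"
    if ft4 < lo or (tsh > 4.5 and lo <= ft4 <= hi):
        return "hypothyroid"
    return "euthyroid"

# Hypothetical example values.
print(kdigo_aki(0.8, [0.9, 1.15], [0.9, 1.15, 1.1]))  # True: +0.35 mg/dL within 48 h
print(thyroid_status(6.2, 1.1))                        # 'hypothyroid'
```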
2018-09-14T14:12:18.610Z
2018-09-10T00:00:00.000
{ "year": 2018, "sha1": "07c3d00198b2f458a50a5e5ff18b283acd509432", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-31946-w.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bd12022221bbe17b368c41b577169beb6627a945", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
12477400
pes2o/s2orc
v3-fos-license
Crystallographic Characterization on Polycrystalline Ni-Mn-Ga Alloys with Strong Preferred Orientation

Heusler-type Ni-Mn-Ga ferromagnetic shape memory alloys can demonstrate an excellent magnetic shape memory effect in single crystals. However, this effect is greatly weakened in polycrystalline alloys due to the random distribution of crystallographic orientations. Microstructure optimization and texture control are of great significance, and remain challenging, for improving the functional behavior of polycrystalline alloys. In this paper, we summarize our recent progress on microstructure control in polycrystalline Ni-Mn-Ga alloys in the form of bulk alloys, melt-spun ribbons and thin films, based on detailed crystallographic characterization through neutron diffraction, X-ray diffraction and electron backscatter diffraction. The presented results are expected to offer some guidelines for the microstructure modification and functional performance control of ferromagnetic shape memory alloys.

Introduction

Conventional shape memory alloys (SMAs) can generate large output strains as a result of a reversible martensitic transformation. However, a major inconvenience for the practical application of SMAs is their low working frequency (less than 1 Hz), since thermal activation is necessary. Recently, a significant breakthrough in the research of high-performance SMAs came about with the discovery of ferromagnetic shape memory alloys (FSMAs) [1], where the magnitude of the output strain is comparable to that of conventional shape memory alloys [2-8]. Moreover, the possibility of controlling the shape change by the application of a magnetic field enables relatively higher working frequencies (kHz) than those of conventional shape memory alloys. With the integration of a large output strain and a fast dynamic response under an external magnetic field, FSMAs are conceived as promising candidates for a new class of actuation and sensing applications.

During the last two decades, numerous experimental studies have been conducted on the composition-dependent magnetic shape memory behavior of Ni-Mn-Ga alloys. Thus far, the field-induced output strains have almost reached the theoretical limit in single crystals, i.e., ~7%, ~11% and ~12% in single crystals with 5M, 7M and NM martensite, respectively [6-8]. It should be noted that the high cost of fabricating single crystals represents a severe obstacle for practical applications. In contrast, the preparation of polycrystalline alloys is much simpler and of lower cost. However, the more or less random distribution of crystallographic orientations in polycrystalline alloys greatly weakens the field-controlled functional behavior. To improve the functional properties of polycrystalline alloys, microstructure optimization and texture control are of great significance and remain challenging.

In this paper, we present our recent progress on microstructure control in polycrystalline Ni-Mn-Ga alloys in the form of bulk alloys, melt-spun ribbons and thin films, based on detailed crystallographic characterizations through neutron diffraction, X-ray diffraction and electron backscatter diffraction (EBSD). For the bulk alloys, which were prepared by directional solidification, a thermo-mechanical treatment (compressive load applied during the martensitic transformation) was introduced in order to redistribute the variants. For the melt-spun ribbons with strong preferential orientation, the orientation inheritance between austenite and 7M martensite was analyzed.
For the thin films deposited on an MgO(1 0 0) substrate, the preferential orientation and variant distribution are illustrated.

Experimental

Bulk polycrystalline alloys with the nominal compositions Ni50Mn30Ga20 (at. %) and Ni50Mn28.5Ga21.5 (at. %) were prepared by directional solidification. In order to obtain composition homogenization, the directionally solidified Ni50Mn30Ga20 and Ni50Mn28.5Ga21.5 bulk alloys were homogenized at 1173 K for 24 h in a sealed vacuum quartz tube, followed by quenching into water. A part of the homogenized alloy was ground into powder, and the powder was then annealed at 873 K for 5 h in vacuum to release the internal stress for the subsequent powder X-ray diffraction (XRD) measurements. Ribbons with the nominal compositions Ni53Mn22Ga25 (at. %) and Ni51Mn27Ga22 (at. %) were prepared by melt-spinning.

The room-temperature crystal structure was determined by X-ray diffraction (XRD) with Cu-Kα radiation. The martensitic transformation temperatures were measured by differential scanning calorimetry (DSC, TA Q100) with a heating and cooling rate of 10 K/min. The microstructural characterization was performed in a field emission gun scanning electron microscope (SEM, Jeol JSM 6500 F) with an EBSD acquisition camera and Channel 5 software. The neutron diffraction measurements were performed using the materials science diffractometer STRESS-SPEC operated by FRM II and HZG at the Heinz Maier-Leibnitz Zentrum (MLZ), Garching, Germany, with a monochromatic wavelength of 2.1 Å [32]. The uniaxial compressive load was applied along the sample axis, parallel to the solidification direction, during the in-situ neutron diffraction measurements.

Thermo-Mechanical Treatment of Directionally Solidified Alloys

In general, the martensitic transformation is a deformation-dominant, diffusionless phase transformation with symmetry breaking. The lower symmetry of the product martensitic phase may result in the formation of self-accommodated multi-variants to compensate the elastic strains associated with the phase transformation. However, such a self-accommodated microstructure is not favorable for the achievement of the magnetic shape memory effect in Ni-Mn-Ga alloys, since the co-existence of multi-variants greatly enhances the resistance to variant reorientation. As the deformation accompanying the martensitic transformation is anisotropic, a unidirectional constraint (tension or compression) applied during the martensitic transformation can promote the formation of certain favorable variants while eliminating unfavorable ones [34]. Therefore, a strong preferential orientation of the martensite can be achieved through the selective formation of favorable variants under an external field applied during the martensitic transformation, thus realizing the optimization of the crystallographic anisotropy and the magnetic shape memory effect [2,4,13,14,35]. In this section, thermo-mechanical treatments (compressive load applied during the martensitic transformation) were introduced in order to reformulate the variant distribution. Through in-situ neutron diffraction, the martensitic transformation process under uniaxial compressive load was traced, and direct evidence of the variant redistribution induced by the thermo-mechanical treatments was obtained.

Austenite to 7M Martensite Transformation

Polycrystalline Ni50Mn30Ga20 alloy with 7M martensite at room temperature was prepared by directional solidification. Cylindrical samples (φ5 mm × 10 mm) with the axial direction parallel to the solidification direction were cut from the homogenized ingot for thermo-mechanical treatment and neutron diffraction. The actual composition was verified to be Ni50.1Mn28.8Ga21.1 by energy dispersive spectroscopy (EDS). According to DSC measurements, the start and finish temperatures of the forward (Ms, Mf) and inverse (As, Af) martensitic transformation were determined to be 347.8 K, 331.3 K, 336.8 K and 352.2 K, respectively. Powder XRD measurements reveal that the directionally solidified Ni50Mn30Ga20 alloy consists of 7M martensite at room temperature with lattice parameters a7M = 4.2651 Å, b7M = 5.5114 Å, c7M = 42.365 Å, and β = 93.27°, where the crystal structure of 7M martensite is depicted as an incommensurate monoclinic superstructure consisting of ten unit cells [22]. Moreover, microstructural observations have shown that the original austenite of the directionally solidified alloy forms coarse columnar grains, several hundreds of microns in size, along the solidification direction (SD) [36].

To reveal the global texture of the directionally solidified Ni50Mn30Ga20 alloy, the complete pole figures were measured by neutron diffraction (Figure 2). The high penetration capability of neutrons, which exceeds that of X-rays by about four orders of magnitude, allows a more reliable analysis of the global orientation distribution of the studied samples.
In order to modify the variant distribution, cyclic thermo-mechanical treatments were performed on the directionally solidified Ni50Mn30Ga20 alloy, and the martensitic transformation process under external load was traced by in-situ neutron diffraction. The sample was first heated to 393 K to reach the fully austenitic state, and the uniaxial compressive load was applied along the solidification direction (SD). Since the austenite of the directionally solidified Ni50Mn30Ga20 alloy possesses a strong <0 0 1>A preferential orientation parallel to the solidification direction, the uniaxial compressive load was effectively applied along <0 0 1>A. Prior to the thermo-mechanical treatment, neutron diffraction patterns were collected at 393 K and 303 K in the 2θ range of ~36°–52° for the tested sample, as shown in Figure 3a. It is seen that, within the measured 2θ range, only the {2 0 0}A diffraction can be observed in the austenite (aA = 5.83 Å) temperature region. After the martensitic transformation, {2 0 0}A evolves into {−1 0 10}7M, {1 0 10}7M and {0 2 0}7M, with the {1 0 10}7M diffraction possessing the strongest intensity. Figure 3b–d displays the serial patterns measured on cooling across the martensitic transformation under compressive loads of −10 MPa (Cycle 1), −25 MPa (Cycle 2) and −50 MPa (Cycle 3) applied along the solidification direction, respectively.
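As a quick cross-check on the quoted 2θ window, the short sketch below applies Bragg's law, λ = 2d·sinθ, to the lattice parameters given above at the λ = 2.1 Å neutron wavelength. The choice of reflections is illustrative, and the {0 2 0}7M spacing is approximated as b7M/2; the {1 0 10}-type spacings would require the full monoclinic metric.

```python
import math

WAVELENGTH = 2.1  # neutron wavelength in angstroms, as quoted above

def two_theta(d_spacing, wavelength=WAVELENGTH):
    """Bragg's law: lambda = 2 d sin(theta); returns 2*theta in degrees."""
    return 2 * math.degrees(math.asin(wavelength / (2 * d_spacing)))

# {2 0 0} of cubic austenite: d = a_A / 2 with a_A = 5.83 A.
d_200_A = 5.83 / 2
# {0 2 0} of 7M martensite, approximated as b_7M / 2 with b_7M = 5.5114 A.
d_020_7M = 5.5114 / 2

print(f"{{2 0 0}}A : 2theta ~ {two_theta(d_200_A):.1f} deg")   # ~42.2 deg
print(f"{{0 2 0}}7M: 2theta ~ {two_theta(d_020_7M):.1f} deg")  # ~44.8 deg
# Both values fall inside the ~36-52 deg window scanned in Figure 3.
```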
With the increase of the compressive load, the intensity of the {0 2 0}7M diffraction increases gradually. After three cycles of treatment, almost only the {0 2 0}7M diffraction remained in the measured 2θ range, as shown in Figure 3e. Apparently, the uniaxial compression has exerted a significant influence on the variant distribution, creating a strong preferential orientation of {0 2 0}7M. Moreover, with increasing compressive load, the macroscopic deformation accompanying the martensitic transformation increased gradually, i.e., −2.1%, −2.8% and −3.3% for Cycle 1, Cycle 2 and Cycle 3, respectively, which also indicates an increase in the degree of preferred variant orientation. Under a compressive load applied during the martensitic transformation, the variants with a reduction of the plane spacing along the loading axis should be more favorable. Thus, the formation of {0 2 0}7M from {2 0 0}A is preferred under compressive load, leading to the large macroscopic strain and to the formation of a strong <0 1 0>7M preferred crystallographic orientation along the loading axis.

The applied uniaxial compressive load can also result in an increase of the martensitic transformation temperatures. Experimentally, the increases of Ms under −10 MPa, −25 MPa and −50 MPa applied during the martensitic transformation were ~0.9 K, ~2.3 K and ~8.5 K, respectively [36]. The shifts of the transformation temperatures under a uniaxial load σ can be well explained by the Clausius-Clapeyron relation, dσ/dT = −ΔS·ρ/ε, where ΔS and ε stand, respectively, for the entropy change and the transformation strain, and ρ is the mass density. According to the Clausius-Clapeyron relation, the increases of Ms under uniaxial loads of −10 MPa, −25 MPa and −50 MPa were determined as 1.2 K, 4.1 K and 9.5 K, respectively, which is very close to the experimentally observed transformation temperature shifts [36].
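To make the Clausius-Clapeyron estimate concrete, the sketch below rearranges dσ/dT = −ΔS·ρ/ε into ΔT ≈ ε·|σ|/(ρ·ΔS) and evaluates it with the per-cycle transformation strains quoted above. The mass density and entropy change used here are assumed typical Ni-Mn-Ga values, not figures from this excerpt.

```python
# Clausius-Clapeyron estimate of the Ms shift under uniaxial load:
# dsigma/dT = -dS * rho / eps  =>  dT ~ eps * |sigma| / (rho * dS).
RHO = 8000.0     # kg/m^3, assumed typical Ni-Mn-Ga density (not from the text)
DELTA_S = 22.0   # J/(kg K), assumed typical transformation entropy change

cycles = [  # (|sigma| in Pa, transformation strain quoted in the text)
    (10e6, 0.021),  # Cycle 1: -10 MPa, -2.1%
    (25e6, 0.028),  # Cycle 2: -25 MPa, -2.8%
    (50e6, 0.033),  # Cycle 3: -50 MPa, -3.3%
]

for sigma, eps in cycles:
    dT = eps * sigma / (RHO * DELTA_S)
    print(f"|sigma| = {sigma / 1e6:.0f} MPa -> dT ~ {dT:.1f} K")
# Prints ~1.2 K, ~4.0 K and ~9.4 K, close to the 1.2/4.1/9.5 K quoted above.
```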
Austenite to 5M Martensite Transformation

In order to figure out the effect of the direction of the external load applied during the martensitic transformation on the selection of preferential variants, a directionally solidified Ni50Mn28.5Ga21.5 alloy with two preferred orientations, <0 0 1>A and <1 1 0>A parallel to the solidification direction (SD), was used for the thermo-mechanical treatments. Rectangular parallelepiped samples (10 mm × 6.5 mm × 6.5 mm) with their longitudinal direction parallel to the solidification direction were cut from the homogenized alloy for neutron diffraction testing. According to EDS, the actual composition was determined to be Ni49.6Mn28.4Ga22.0. Powder XRD measurement shows that the alloy consists of 5M martensite at room temperature, with lattice parameters a5M = 4.226 Å, b5M = 5.581 Å, c5M = 21.052 Å and β = 90.3°. DSC measurements demonstrate that the martensitic transformation occurs above room temperature; the start and finish temperatures of the forward and reverse martensitic transformation determined from DSC are, respectively, 322.9 K (Ms), 318.4 K (Mf), 329.5 K (As) and 333.2 K (Af) [38].

It is seen in Figure 5 that both the {1 0 5}5M/{−1 0 5}5M and {0 2 0}5M poles are roughly located at tilt angles Psi of ~0°, ~40° and ~90° in the corresponding pole figures. Since the {1 0 5}5M/{−1 0 5}5M and {0 2 0}5M of 5M martensite originate from {2 0 0}A in the transformation from austenite to 5M martensite [39], it can be inferred that the initial austenite of the directionally solidified Ni50Mn28.5Ga21.5 alloy mainly possesses two preferred orientation components, i.e., <0 0 1>A//SD and <1 1 0>A//SD. Thus, during the subsequent thermo-mechanical treatment, the compressive load applied along the SD during the martensitic transformation can be viewed as being applied along <0 0 1>A and <1 1 0>A. For the initial austenite with the preferred orientation <0 0 1>A//SD, the resultant {1 0 5}5M/{−1 0 5}5M and {0 2 0}5M of the martensite are either perpendicular or parallel to the SD. More specifically, for {1 0 5}5M/{−1 0 5}5M, the orientation component perpendicular to the SD possesses a much higher intensity, indicating that {1 0 5}5M/{−1 0 5}5M tends to be perpendicular to the SD.
On the other hand, for {0 2 0}5M, the orientation component parallel to the SD has the higher intensity. For the initial austenite with the orientation <1 1 0>A//SD, the resultant {2 0 0}5M/{0 0 10}5M tends to be perpendicular to the SD, and {1 2 5}5M/{−1 2 5}5M to be parallel to the SD. For each cycle of the thermo-mechanical treatment, two diffractions of 5M martensite, i.e., {0 2 0}5M and {1 0 5}5M/{−1 0 5}5M, remained in the measured 2θ range after the martensitic transformation. However, the intensity ratio between {0 2 0}5M and {1 0 5}5M/{−1 0 5}5M increases with increasing compressive load, suggesting a redistribution of martensitic variants induced by the compressive load applied during the martensitic transformation [38].

Figure 7 shows the corresponding pole figures measured by neutron diffraction after five cycles of thermo-mechanical treatment for the directionally solidified Ni50Mn28.5Ga21.5 alloy. It should be mentioned that, although the pole figures presented in Figure 5 (without thermo-mechanical treatment) and Figure 7 (after thermo-mechanical treatment) were obtained from two different samples, the initial textures of the two samples should be very similar, since they were cut from the same directionally solidified alloy rod and one sample was adjacent to the other when cutting. It is seen that, under the compressive load applied along <0 0 1>A and <1 1 0>A during the martensitic transformation (Figure 7a), the variant orientation distribution is strongly dependent on the austenite orientation and on the direction of the external load; variants such as {1 2 5}5M/{−1 2 5}5M ⊥ SD, formed from {2 2 0}A of the austenite, should be preferentially activated to accommodate the external constraint. Therefore, the coupling between the anisotropic lattice distortion of the martensitic transformation and the external constraint dominates the preferred orientation of the martensite variants formed under an external constraint applied during the martensitic transformation [38].

Orientation Inheritance from Austenite to 7M Martensite in Melt-Spun Ribbons

Rapid solidification based on the melt-spinning technique has been proven to be an effective processing route for the preparation of ribbon-shaped ferromagnetic shape memory alloys [40-50]. This method avoids the long post-heat-treatment otherwise needed to achieve composition homogeneity. Moreover, melt-spun ribbons usually tend to form a highly textured microstructure [51]. In this section, Ni53Mn22Ga25 and Ni51Mn27Ga22 ribbons, with austenite and 7M martensite at room temperature, respectively, were prepared by melt-spinning.
The preferred orientation of the austenite and of the 7M martensite in the ribbons is presented, and their correlation is further analyzed.

Figure 8a shows the EBSD orientation map measured from the ribbon plane for the Ni53Mn22Ga25 ribbons. The austenite grains appear equiaxed in the ribbon plane, with an average grain size of ~10–20 µm. Figure 8b displays the corresponding {2 2 0}A, {4 0 0}A and {4 2 2}A pole figures recalculated from the EBSD measurements. Obviously, the austenite in the ribbons develops a strong preferred orientation with {4 0 0}A parallel to the ribbon plane [52], which should be attributed to the thermal gradient during the melt-spinning process. Figure 9 shows the corresponding pole figures of the 7M martensite in the Ni51Mn27Ga22 ribbons [52].
Because of the lack of direct correlation of martensitic microstructures with crystallographic orientations, precise information on the configurations of variants in Ni-Mn-Ga thin films are still not available. In this section, based on XRD measurements and electron backscatter diffraction (EBSD) analyses, the crystal structures of constituent phases, the configurations of martensite variants and their orientation correlations are addressed. Global Microstructure and Texture of Thin Film Epitaxially grown thin films with nominal composition of Ni 50 Mn 30 Ga 20 were prepared on the MgO(1 0 0) substrate with a Cr buffer layer by DC magnetron sputtering [62,63]. Figure 10a shows the ψ-dependent XRD patterns of the thin films obtained by conventional θ-2θ coupled scanning at the room temperature. At each tilt angle ψ, there appear only a limited number of diffraction peaks. Figure 10b presents the XRD patterns measured using a large-angle position sensitive detector under two different incident beam conditions. Some extra diffraction peaks can be seen in the 2θ range of 48 • -55 • and~82 • . Based on the XRD patterns in Figure 10a,b, it can be inferred that austenite, 7M martensite and NM martensite co-exist in the as-deposited thin films at room temperature. The austenite phase has a cubic L2 1 crystal structure with lattice constant a A = 5.773 Å. The 7M martensite phase has a monoclinic crystal structure with lattice constants a 7M = 4.262 Å, b 7M = 5.442 Å, c 7M = 41.997 Å, and β = 93.7 • . The NM martensite phase is of tetragonal crystal structure with lattice constants a NM = 3.835 Å and c NM = 6.680 Å [62,63]. [52]. Based on the pole figures presented in Figure 8 (austenite) and Figure 9 (7M martensite) in ribbons, it can be inferred that the transformation from austenite to 7M martensite exhibits a strong orientation inheritance and such orientation inheritance should be attributed to the intrinsic orientation relationship between austenite and 7M martensite [52]. Preferential Orientation and Variant Distribution of Thin Film The magnetron sputtering technique has been viewed as an effective method for the texturation of ferromagnetic Ni-Mn-Ga thin films epitaxially grown on a single crystal substrate [53][54][55][56][57][58][59][60][61]. In general, the epitaxial growth of Ni-Mn-Ga thin films on single crystal substrate may produce quite different microstructures compared to those of polycrystalline bulk alloys [18,20,27]. The microstructural and crystallographic characterizations of thin films remain challenging due to the local constraints from substrates, the specific geometry of thin films, and the ultrafine microstructures of constituent phases. Because of the lack of direct correlation of martensitic microstructures with crystallographic orientations, precise information on the configurations of variants in Ni-Mn-Ga thin films are still not available. In this section, based on XRD measurements and electron backscatter diffraction (EBSD) analyses, the crystal structures of constituent phases, the configurations of martensite variants and their orientation correlations are addressed. Global Microstructure and Texture of Thin Film Epitaxially grown thin films with nominal composition of Ni50Mn30Ga20 were prepared on the MgO(1 0 0) substrate with a Cr buffer layer by DC magnetron sputtering [62,63]. Figure 10a shows the ψ-dependent XRD patterns of the thin films obtained by conventional θ-2θ coupled scanning at the room temperature. 
Figure 11a presents a secondary electron (SE) image acquired from the top surface of an electrolytically polished sample with a gradient thickness relative to the film surface. As schematically illustrated in Figure 11b, the right side and the left side of the image represent the microstructure near the film surface and deep inside the film, respectively. Although the thin film has an overall plate-like microstructure, there exists a certain plate thickening from its interior to its surface, which indicates a complete change of the microstructural constituents or phases along the film thickness [28]. Based on EBSD measurements [62], it is revealed that the coarse plates in the top layer of the film are of the NM martensite, whereas the fine plates in the film interior are of the 7M martensite. Moreover, in combination with the XRD results, it is deduced that the NM martensite is located near the free surface of the film, the austenite just above the substrate surface, and the 7M martensite in the intermediate layers between them.
As shown in Figure 12b, {0 0 4}NM and {2 2 0}NM tend to lie close to the substrate surface. Although X-ray diffraction offers global texture information on the film, it is difficult to correlate the crystallographic features with those of the microstructure; therefore, SEM/EBSD analysis is needed.

Figure 13 presents typical SE images of the 7M martensite in the Ni50Mn30Ga20 thin film. It is seen in Figure 13a that the martensite plates are clustered in groups, exhibiting either low relative contrast (e.g., Group 1), with straight plates parallel to the substrate edges, or high relative contrast (e.g., Group 2 and Group 3), with bent plates oriented at roughly 45° with respect to the substrate edges. In addition, the traces of the inter-plate interfaces in the high relative contrast zones have three distinct orientations, as indicated by the dotted yellow and green lines and the solid black lines in Figure 13b. In fact, the SE image contrast is related to the surface topography of the observed object; thus, the low and high relative contrast zones are expected to have low and high surface reliefs, respectively. Here, the low relative contrast zone (Group 1) corresponds to the so-called Y pattern, and the high relative contrast zones (Group 2 or Group 3) to the X pattern [64].

7M Variants Distribution in the Thin Film

EBSD measurements show that one 7M martensite plate corresponds to one orientation variant, and that in total four different variants are distributed in one plate group. Here, the four orientation variants representing one plate group are denoted VL-A, VL-B, VL-C and VL-D for the low relative contrast zones, and VH-A, VH-B, VH-C and VH-D for the high relative contrast zones.
Crystallographic calculations show that there exist three types of twinning relations between adjacent variants, i.e., Type-I, Type-II, or compound twinning, for the four variants in one plate group. The complete twinning elements were reported elsewhere [62]. Analyses show that, in the low relative contrast zone, the majority of variants are in a Type-I twin relation. Both Type-I and Type-II twin interfaces are nearly perpendicular to the substrate surface. For the high relative contrast zone, the majority of variants are in a Type-II twin relation; the existence of height differences between adjacent variants accounts for the high relative contrast in this region. Further crystallographic calculations indicate that the preferential occurrence of the different twinning types is a consequence of the external constraint from the rigid substrate. The dominant twinning type allows effective cancellation of the shear deformation in the film normal direction [62].
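As a rough illustration of how such twin relations are verified from EBSD orientation data: a Type-I twin corresponds to a 180° rotation about the twinning-plane normal, and a Type-II twin to a 180° rotation about the twinning direction. The sketch below, with hypothetical orientation matrices and a placeholder twinning element (the actual twinning elements are given in [62]), tests whether two variant orientations satisfy such a relation.

```python
import numpy as np

def rot180(axis):
    """Rotation matrix for a 180-degree rotation about a unit axis (Rodrigues formula)."""
    n = np.asarray(axis, dtype=float)
    n /= np.linalg.norm(n)
    return 2.0 * np.outer(n, n) - np.eye(3)

def is_twin_related(g1, g2, axis, tol_deg=2.0):
    """Check whether orientations g1, g2 (crystal->sample rotation matrices)
    are related by a 180-degree rotation about `axis` (crystal frame)."""
    dg = g1.T @ g2              # misorientation, expressed in the frame of variant 1
    target = rot180(axis)
    # angular deviation between the two rotations via the trace of their relative rotation
    cos_theta = (np.trace(target.T @ dg) - 1.0) / 2.0
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return theta < tol_deg

# Hypothetical example: variant 2 constructed as a Type-I twin of variant 1
g1 = np.eye(3)                          # variant 1 orientation (placeholder)
K1 = [1, 0, 1]                          # placeholder twinning-plane normal
g2 = g1 @ rot180(K1)                    # variant 2 = 180-deg rotation about K1
print(is_twin_related(g1, g2, K1))      # True -> Type-I relation about K1
```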
Figure 15a shows an SE image of NM martensite for the Ni50Mn30Ga20 thin film. Similar to 7M martensite, the clustered colonies can also be characterized by two different relative contrasts, i.e., low relative contrast (Z1) or high relative contrast (Z2), as illustrated in Figure 15a. The low relative contrast zones consist of long and straight plates running with their length direction parallel to one edge of the substrate (i.e., [1 0 0]MgO or [0 1 0]MgO). The high relative contrast zones are of shorter and somewhat bent plates oriented roughly at 45° with respect to the substrate edges.

NM Variants Distribution in the Thin Film

Microstructural observation reveals that there exist two variants distributed alternately in one martensite plate, as highlighted with yellow and blue lines in Figure 15b. Of the two contrasted neighboring lamellae, one is thicker and the other is thinner, which is different from the situation of 7M martensite. The two lamellar variants in one plate have a compound twin relationship with {1 1 2}NM as the twinning plane and <1 1 −1>NM as the twinning direction. As the BSE image contrast for a monophase microstructure with homogeneous chemical composition originates from the orientation differences of the microstructural components, the thicker and thinner lamellae distributed alternately in each plate should be correlated with two distinct orientations, which is also confirmed by the indexation of Kikuchi line patterns [63].
Detailed EBSD orientation analyses were conducted on the NM martensite plates in the low and high relative contrast zones (Z1 and Z2 in Figure 15a). In each variant colony, there are four types of plates, i.e., A, B, C and D in the low relative contrast (Z1) zones and 1, 2, 3 and 4 in the high relative contrast (Z2) zones, as illustrated in Figure 15c,d. Since one NM plate contains two variants, there are in total eight NM variants in one variant colony. For easy visualization, they are denoted as V1, V2, …, V8 in Figure 15c and SV1, SV2, …, SV8 in Figure 15d, where the symbols with odd subscripts correspond to the thicker (major) variants and those with even subscripts to the thinner (minor) variants. The measured orientations of the NM variants in the two relative contrast zones are presented in the form of {0 0 1}NM and {1 1 0}NM pole figures, as displayed in Figure 16a,b [63]. For the low relative contrast zones (Z1), the major and minor variants are oriented respectively with their {1 1 0}NM planes and {0 0 1}NM planes nearly parallel to the substrate surface (Figure 16a). In the high relative contrast zones (Z2), such plane parallelisms hold for plates 2 and 4 but with an exchange of the planes between the major and minor variants, whereas both major and minor variants in plates 1 and 3 are oriented with their {1 1 0}NM planes nearly parallel to the substrate surface (Figure 16b). In correlation with the microstructural observations, plates 2 and 4 are featured with higher brightness and plates 1 and 3 with lower brightness [63]. Indeed, for the two distinct relative contrast zones, the crystallographic orientations of the in-plate martensitic variants with respect to the substrate surface are not the same, which should be the origin of the topological differences observed for the two relative contrast zones. In the low relative contrast zones, the in-plate major and minor variants have the same orientation combination for all NM plates, and they are distributed symmetrically with respect to the inter-plate interfaces. As no microscopic height misfits across inter-plate interfaces appear in the film normal direction, the relative contrast between adjacent NM plates is not pronounced in the SE images. However, in the high relative contrast zones, the asymmetrically distributed lamellar variants in adjacent NM plates lead to pronounced height misfits across inter-plate interfaces in the film normal direction, which gives rise to surface reliefs, hence the high relative contrast between adjacent NM plates [63].
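The reported near-parallelism between certain {0 0 1}NM or {1 1 0}NM planes and the substrate surface can be checked numerically from EBSD orientations by computing the angle between the rotated plane normal and the film normal. The sketch below treats the lattice as orthogonal for simplicity (for the tetragonal NM cell, the reciprocal-lattice vector (h/a, k/a, l/c) would be the exact plane normal) and uses a hypothetical orientation matrix.

```python
import numpy as np

def plane_to_film_normal_angle(g, hkl):
    """Angle (deg) between the normal of plane {hkl} of a crystal with
    orientation matrix g (crystal -> sample) and the film normal [0 0 1]."""
    n_crystal = np.asarray(hkl, dtype=float)
    n_crystal /= np.linalg.norm(n_crystal)
    n_sample = g @ n_crystal                            # rotate plane normal into the sample frame
    cos_a = abs(n_sample @ np.array([0.0, 0.0, 1.0]))   # film normal along sample z
    return np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0)))

# Hypothetical variant orientation: 45-degree rotation about the sample z axis
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
g = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

print(plane_to_film_normal_angle(g, (0, 0, 1)))  # ~0 deg: {0 0 1} parallel to the substrate
print(plane_to_film_normal_angle(g, (1, 1, 0)))  # 90 deg: {1 1 0} perpendicular to it
```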
Conclusions

(1) The influence of uniaxial compression on martensitic transformation in directionally solidified Ni50Mn30Ga20 and Ni50Mn28.5Ga21.5 polycrystalline alloys was studied by neutron diffraction. It was shown that the distribution of martensite variants can be tuned through cyclic thermo-mechanical treatments. For the Ni50Mn30Ga20 alloy with a <0 0 1>A preferential orientation parallel to the solidification direction, a strong <0 1 0>7M preferential orientation of 7M martensite along the loading direction (parallel to the solidification direction) was induced by the external compression during martensitic transformation. In addition, it was found that the selection of preferential variants induced by thermo-mechanical treatments was strongly dependent on the austenite orientation and the direction of the external load, which was evidenced in Ni50Mn28.5Ga21.5 polycrystalline alloys with <0 0 1>A and <1 1 0>A parallel to the solidification direction.

(2) For the epitaxial Ni50Mn30Ga20 thin films, in the low relative contrast zone the majority of 7M variants are in a Type-I twin relation, whereas in the high relative contrast zone the majority of variants are in a Type-II twin relation. The selection of twinning type is a consequence of the external constraint from the rigid substrate, and the twinning type with less shear deformation in the film normal direction is favored. For NM martensite, one plate group also consists of four martensite plates, but each plate is composed of two twin-related variants with one thicker than the other. The in-plate major and minor variants are distributed symmetrically with respect to the inter-plate interfaces in the low relative contrast zones, but asymmetrically in the high relative contrast zones. The difference in the orientation combination of the in-plate variants accounts for the topological differences observed for the two relative contrast zones. The presented investigations are expected to provide some fundamental information for the microstructure modification and functional performance control of ferromagnetic shape memory alloys.
2017-07-27T07:29:29.770Z
2017-04-27T00:00:00.000
{ "year": 2017, "sha1": "f1a11f1eef57b3a7304dee2f929505dc582be020", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/10/5/463/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f1a11f1eef57b3a7304dee2f929505dc582be020", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
258235380
pes2o/s2orc
v3-fos-license
Combination of double-sliding advancement genioplasty and prearthroplastic distraction osteogenesis in cases of TMJ ankylosis with severe mandibular atrophy

ABSTRACT The aim of this study is to present a case of facial asymmetry secondary to unilateral long-standing temporomandibular joint (TMJ) ankylosis managed by a staged treatment protocol. Treatment for facial asymmetry secondary to unilateral TMJ ankylosis can have varied approaches followed by different workers according to their experiences. This predistraction arthroplasty versus prearthroplastic distraction debate has been at center stage in the literature for quite some time. Hereby, we present a case following the latter approach, along with double-sliding genioplasty to correct chin asymmetry. A 25-year-old male patient with a history of facial trauma 15 years ago reported a complaint of inability to open the mouth and gradually developing facial asymmetry. The patient was thoroughly evaluated using radiographs and cephalometric analysis to establish the diagnosis of TMJ ankylosis with facial asymmetry and suspected sleep apnea. The patient was treated according to our institutional protocol of prearthroplastic asymmetry correction followed by ankylosis release, along with double-sliding genioplasty to correct residual deformity at a later date. Correction of facial asymmetry before ankylosis release provides a more evidence-based approach, as supported by the current literature. Moreover, any residual deformity can be rectified using orthomorphic procedures such as genioplasty. Since there is an ongoing debate in the current literature about sequencing in the treatment of facial asymmetry cases, the presented case adds to the argument that the approach followed herein provides for a more favorable outcome.

INTRODUCTION The treatment of temporomandibular joint (TMJ) ankylosis involves restoring the function of the joint and correction of the associated facial asymmetry. The correction of facial deformity can be done by distraction osteogenesis, orthognathic surgery, advancement genioplasty, or a combination of any of these procedures. The correction of facial asymmetry along with arthroplasty can also be done in the same stage. [1] If not in the first stage, it can be done in the second stage of the surgical treatment. Prearthroplastic mandibular distraction osteogenesis is indicated in some cases, such as those with obstructive sleep apnea (OSA), because there is evidence in the literature that if correction of such deformities is not done before arthroplasty, it can lead to severe patient discomfort during physiotherapy, resulting in a lack of physiotherapy and eventually a risk of reankylosis. [2] In cases with a severe chin deformity, a double-sliding genioplasty can give better results, as it can provide up to 20 mm of chin advancement with good surface contact. [3] We have documented a similar case of bilateral TMJ ankylosis with associated severe mandibular atrophy, surgically treated with prearthroplastic distraction osteogenesis and a second-stage surgery comprising TMJ interpositional arthroplasty and double-sliding genioplasty for the correction of chin deformity.
CASE REPORT A 25-year-old male patient reported to the oral and maxillofacial surgery outpatient department, King George's Medical University, Lucknow, India, complaining of reduced mouth opening, severe snoring, episodes of apnea during sleep, and a bird-like unpleasant facial appearance, with a history of trauma due to a fall from height 15 years earlier. On clinical examination, there was complete absence of mouth opening, a severely retruded mandible, and absence of TMJ movements on palpation [Figure 1]. Signs of mild OSA were seen during polysomnography. [4] Computed tomography (CT) revealed extensive bilateral bony TMJ ankylosis, a retruded mandible, and bilaterally enlarged coronoid processes. The posterior airway space was consistently reduced [Figure 2]. A bilateral mandibular body discrepancy of 11 mm and a chin discrepancy of 17 mm were calculated based on CT measurements and cephalometric analysis. The phase of surgical treatment commenced with prearthroplastic mandibular distraction osteogenesis with the placement of extraoral uniplanar distractors bilaterally over the body region between the 2nd and 3rd molars. Distraction was started after a latency period of 5 days, and 11 mm of distraction was achieved bilaterally. Clinically, an Angle's Class I molar relation was achieved after the completion of the distraction phase. The distractors were retained in the patient thereafter for the period of consolidation [Figure 3]. Second-stage surgery was planned after a 4-month consolidation phase. Bilateral TMJ osteoarthrectomy and interpositioning with temporalis fascia were done along with bilateral coronoidectomy. Simultaneous correction of the chin deformity was done with a double-sliding advancement genioplasty. A planned chin advancement of approximately 17 mm was achieved, with an intraoperative mouth opening of 45 mm [Figure 4]. A satisfactory chin and facial profile were obtained postoperatively, with promising soft tissue changes in the chin [Figures 5 and 6]. The posterior airway space was also satisfactory, with no signs of OSA in postoperative polysomnography.

DISCUSSION A comprehensive treatment protocol for the management of TMJ ankylosis includes functional correction as well as the correction of the residual facial deformity due to TMJ ankylosis. [5] Different sequencing is used in the treatment of such cases, differing from case to case. Mandibular distraction osteogenesis is a widely used treatment modality in cases of TMJ ankylosis with mandibular micrognathia. As stated by Papageorge and Apostolidis [6] in 1999, distraction osteogenesis has many advantages in correcting the facial deformity caused by mandibular hypoplasia in TMJ ankylosis patients, just like the case reported here: the growth of the callus can be controlled, and it allows the surrounding soft tissue to regenerate. Distraction osteogenesis may be opted for either before or after the TMJ arthroplasty. Andrade et al. proposed prearthroplastic distraction osteogenesis in cases with associated OSA. According to the authors, if osteoarthrectomy is chosen as a treatment modality before distraction osteogenesis in such cases, the patient will have severe discomfort during physiotherapy, which will lead to a lack of proper physiotherapy, eventually increasing the risk of TMJ reankylosis or even worsening of OSA. [2] There has long been debate over the preferred treatment sequence for TMJ ankylosis patients with facial deformity. Some authors preferred arthroplasty before distraction.
In a study by Chellappa et al. [7] in 2015, 10 cases of unilateral TMJ ankylosis were treated with prearthroplastic distraction and 10 with simultaneous arthroplasty and distraction. They concluded that the former allows better control of distraction, although the contralateral nonankylosed joint may experience pain. Zhang et al. [8] in their study concluded that distraction osteogenesis as the first-stage treatment and arthroplasty or TMJ reconstruction as the second-stage treatment is suitable for the management of patients with TMJ ankylosis and secondary deformities, especially those with obstructive sleep apnoea/hypopnoea syndrome (OSAHS). In a systematic review by Chugh et al. [9] in 2021, it was concluded that prearthroplastic DO appears to be the best timing for the correction of dentofacial deformity in the mandible. Treatment of facial asymmetry or facial deformity in patients with TMJ ankylosis may be achieved by means of orthognathic surgery including advancement genioplasty, distraction osteogenesis, or a combination of these procedures. According to Wiese and Lawson, [3] the limitation of single-sliding genioplasty is that chin advancement of no more than 10 mm is possible with this procedure. To overcome this limitation, they proposed a technique of "double-sliding," "multiple-sliding," or "tandem" genioplasty, in which an advancement ranging from 20 mm to 30 mm could be achieved. [4,10] However, the limitation of this procedure is that it is not possible in cases with a compromised mandibular height.

CONCLUSION Double-sliding genioplasty can be advantageous in cases which require extreme chin advancement, provided the available mandibular height is adequate. It can be proposed that cases of bilateral TMJ ankylosis with severe mandibular micrognathia and OSA can be treated with a staged surgical protocol consisting of prearthroplastic mandibular distraction osteogenesis followed by a second-stage surgery consisting of TMJ arthroplasty and simultaneous correction of the chin deformity with a double- or single-sliding advancement genioplasty, depending on the amount of advancement required.

Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.

Financial support and sponsorship Nil.

Conflicts of interest There are no conflicts of interest.
2023-04-20T15:08:29.711Z
2023-04-14T00:00:00.000
{ "year": 2023, "sha1": "d873d75bfba9bc0ff77f2affba549235d42cec48", "oa_license": "CCBYNCSA", "oa_url": "https://journals.lww.com/njms/Fulltext/2023/14010/Combination_of_double_sliding_advancement.24.aspx", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "43f3e50aa23f26bafd23d615dc73d43bddb67005", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
258687386
pes2o/s2orc
v3-fos-license
Progressive Cardiac Metabolic Defects Accompany Diastolic and Severe Systolic Dysfunction in Spontaneously Hypertensive Rat Hearts

Background: Cardiac metabolic abnormalities are present in heart failure. Few studies have followed metabolic changes accompanying diastolic and systolic heart failure in the same model. We examined metabolic changes during the development of diastolic and severe systolic dysfunction in spontaneously hypertensive rats (SHR). Methods and Results: We serially measured myocardial glucose uptake rates with dynamic 2-[18F] fluoro-2-deoxy-d-glucose positron emission tomography in vivo in 9-, 12-, and 18-month-old SHR and Wistar Kyoto rats. Cardiac magnetic resonance imaging determined systolic function (ejection fraction), diastolic function (isovolumetric relaxation time), and left ventricular mass in the same rats. Cardiac metabolomics was performed at 12 and 18 months in separate rats. At 12 months, SHR hearts, compared with Wistar Kyoto hearts, demonstrated increased isovolumetric relaxation time and slightly reduced ejection fraction, indicating diastolic and mild systolic dysfunction, respectively, and higher (versus 9-month-old SHR, decreasing) 2-[18F] fluoro-2-deoxy-d-glucose uptake rates (Ki). At 18 months, only a few SHR hearts maintained similar abnormalities as 12-month-old SHR, while most exhibited severe systolic dysfunction, worsening diastolic function, and markedly reduced 2-[18F] fluoro-2-deoxy-d-glucose uptake rates. Left ventricular mass normalized to body weight was elevated in SHR, more pronounced with severe systolic dysfunction. Cardiac metabolite changes differed between SHR hearts at 12 and 18 months, indicating progressive defects in fatty acid, glucose, branched chain amino acid, and ketone body metabolism. Conclusions: Diastolic and severe systolic dysfunction in SHR are associated with decreasing cardiac glucose uptake and progressive abnormalities in metabolite profiles. Whether and which metabolic changes trigger progressive heart failure needs to be established.

Current heart failure staging considers hypertension and left ventricular hypertrophy (LVH) as stages A and B of heart failure (HF), respectively. 4 Close to half of the patients who present with clinical features of HF exhibit preserved left ventricular ejection fraction (HFpEF) 2 (also referred to as diastolic HF), while the other half have HF with reduced ejection fraction (HFrEF; also referred to as systolic HF). Both HFpEF and HFrEF are associated with significant morbidity and mortality (22%). 5 Although treatment of hypertension is beneficial in HF prevention, it is not clear whether treatment reduces morbidity or mortality in patients with known HF. 6,7 Novel effective diagnostic and therapeutic strategies are needed to improve the management of the increasing number of adults with HF. 3 Numerous studies have described metabolic abnormalities in HFrEF, but only a few have evaluated metabolic changes in HFpEF. 8 Whether different metabolic changes are associated with or trigger HFpEF and HFrEF is currently unknown. The spontaneously hypertensive rat (SHR) is a widely used animal model of primary hypertension, LVH, and HF development. 9 SHR are prehypertensive with normal cardiac function at 1 month of age, exhibit hypertension with mild cardiac systolic dysfunction at 2 months, and LVH beginning at 5 months. 10 SHR show compensated cardiac hypertrophy at 9 months 11 and develop diastolic and severe systolic dysfunction between 12 and 20 months.
[11][12][13] Note that clinical symptoms defining HF (shortness of breath, fatigue and weakness, and edema) are difficult to document in animals. Therefore, we are not using the term HF when describing results for SHR. However, when discussing our findings, we compare metabolic abnormalities associated with diastolic and severe systolic dysfunction in SHR with those observed in patients with HFpEF or diastolic HF and HFrEF or systolic HF, respectively. In a previous study we found that, in young SHR, profound metabolic changes were present at the earliest stages of hypertension, before or concomitant with mild cardiac systolic dysfunction, while LVH only developed later. 10 Intriguingly, we recently demonstrated that targeting metabolic abnormalities with metformin during early hypertension prevented cardiac metabolic and functional abnormalities, as well as LVH, suggesting that metabolic changes that develop in response to chronic hypertension may trigger and sustain cardiac dysfunction and increasing LV mass. 14 Another group observed in older SHR (8-22 months of age), by longitudinal imaging with small animal positron emission tomography (PET)/computed tomography, increased glucose and fatty acid utilization at 8 months of age that was sustained up to 20 months. 15,16 Decreased systolic function and increased cardiac volume occurred only at 20 months. In the present study, we evaluated relationships between metabolic and functional and structural abnormalities during the development of diastolic and severe systolic dysfunction in SHR.

RESEARCH PERSPECTIVE

What Is New?

• Serial in vivo 2-[18F] fluoro-2-deoxy-d-glucose positron emission tomography imaging of spontaneously hypertensive rats revealed that the rate of glucose uptake in hearts with mild diastolic and systolic dysfunction was increased and was markedly decreased with progressive diastolic and severe systolic dysfunction when compared with control rat hearts.

• Metabolomics analysis of spontaneously hypertensive rat hearts uncovered distinct abnormalities between hearts at 12 and 18 months of age, when spontaneously hypertensive rats developed mild diastolic and systolic dysfunction and progressive diastolic and severe systolic dysfunction, respectively.

What Question Should Be Addressed Next?

• Whether and how the uncovered cardiac metabolic abnormalities trigger progressive diastolic and systolic dysfunction and heart failure will need to be explored.

We used serial dynamic 2-[18F] fluoro-2-deoxy-d-glucose positron emission tomography (FDG PET) imaging to measure myocardial glucose uptake rates and cardiac magnetic resonance imaging (CMR) to measure left ventricular mass (LVM) and diastolic and systolic function in vivo in the same 9-, 12-, and 18-month-old SHR and Wistar Kyoto (WKY) rats. In addition, we performed cardiac metabolomics analyses at 12 and 18 months in separate groups of rats. Our data demonstrate that, at 12 months of age, SHR hearts with LVH and mild diastolic and systolic dysfunction exhibit increased FDG uptake rates when compared with WKY, but decreasing FDG uptake rates when compared with 9-month-old SHR. At 18 months, SHR hearts with pronounced diastolic and severe systolic dysfunction, and significant LVH, have dramatically decreased myocardial FDG uptake rates. Cardiac metabolomics analyses established different metabolite profiles in the 2 age groups, consistent with progressive metabolic impairments with increasing severity of diastolic and systolic dysfunction in SHR hearts.
METHODS

The data supporting the findings of this study are available from the corresponding authors upon reasonable request. The personnel who performed CMR imaging and metabolomics studies were blinded to the experimental groups.

Rat Model

Male SHR and WKY rats at 3 weeks of age were purchased from Charles River (Charles River, Kingston, NY) and housed under controlled conditions (temperature 21±1 °C, humidity 60%±10%, 12 hours light/12 hours dark cycle, and free access to standard rat chow and water). Rats were used for experiments as described below at 9, 12, and 18 months of age during the development of diastolic and systolic HF. Note that hearts subjected to ex vivo molecular analyses (metabolomics and immunoblotting) were isolated from 3 different groups of rats (1 for each age group), and these rats did not undergo FDG PET and CMR imaging. All animal experiments were approved by the Institutional Animal Care & Use Committee of the University of Virginia and performed according to the National Institutes of Health Guide for the Care and Use of Laboratory Animals.

FDG PET Imaging In Vivo

Rates of myocardial FDG uptake (Ki) were determined by dynamic FDG PET imaging using the Siemens Focus F 120 microPET scanner at 9 and 12 months and the trimodal Albira PET/computed tomography/single-photon emission computed tomography scanner 17 at 18 months of age, as we described for mice 18,19 and rats. 10 List-mode data acquired with the microPET were histogrammed, reconstructed, and analyzed using methods we described in prior studies in mice and rats. 10,[18][19][20][21] Imaging of rats with the Albira trimodal imager followed a similar protocol as with the microPET, as described in recent studies from our laboratory. 20,21 Using formalisms developed in our laboratory for mouse and rat hearts, 10,19,21 a 3-compartment kinetic model that simultaneously corrects for spillover and partial volume effects for both the blood pool and myocardium was used to compute rates of myocardial FDG uptake (Ki). The analysis was performed using the MATLAB_r2018a (Mathworks Inc., Natick, MA) computing environment.

CMR Imaging In Vivo

Two days after FDG PET scans, rats were subjected to in vivo CMR using an electrocardiogram-triggered cine black blood pulse sequence 22 on a 7T Bruker-Siemens scanner (ClinScan) as described. 10,23,24 With the heart in sinus rhythm, the trans-mitral and transaortic velocity-time curves were used to quantify IVRT, defined as the time between the closure of the aortic valve and the opening of the mitral valve.

Cardiac and Blood Metabolite Analyses and Blood Pressure Measurements

Blood glucose was measured in tail vein blood of SHR and WKY rats after a 6-hour fast using a glucometer (ACCU-CHEK Nano, Bayer). Rats were then subjected to invasive carotid artery catheterization to measure mean arterial pressure (MAP) as we previously described. 10 Briefly, rats were anesthetized with Inactin Hydrate (Sigma; 100 mg/kg BW) via intraperitoneal injection. A PE-50 catheter was inserted into the right carotid artery, and MAP was recorded over 30 minutes using a Blood Pressure Analyzer (Micromed Inc). After the blood pressure measurements, blood samples were collected from the right carotid artery through the PE-50 catheter. Next, hearts were excised, rinsed with phosphate-buffered saline, dried on blotting paper, dropped into liquid nitrogen, and stored at −80 °C until processing.
Frozen hearts were weighed (total heart weight [HW]) and powdered with a Cellcrusher (from CELLCRUSHER) in liquid nitrogen. Aliquots were processed for evaluation of protein expression (see below) and for metabolomics analysis by Metabolon Inc as described. 10

Statistical Analysis

Due to the invasive and terminal method used to acquire the MAP measurements, replicated MAP measurements were obtained at only 1 time point (age) per rat, and the replicated MAP measurements were treated as random subsample units and analyzed by mixed 2-way ANOVA with random subsample error. 26 In terms of the mixed 2-way ANOVA model specification, the dependent variable was MAP, the independent variables were the rat type (SHR and WKY) and assessment age (9, 12, and 18 months), and the random error components were the traditional 2-way ANOVA residual error and subsampling error. For hypothesis testing, a set of linear contrasts of the ANOVA least-squared means was utilized to conduct between-rat-type comparisons of MAP distribution means at 9, 12, and 18 months. The standard errors of the linear contrasts were estimated based on the residual mean square error, and a Bonferroni-corrected P<0.05 decision rule was used as the null hypothesis rejection rule. PET and CMR serial imaging data (Ki, EDV, ESV, EF, LVM, LVM/BW, and IVRT) were analyzed by way of linear mixed models for time-point-by-time-point between-rat-type comparisons. Hypothesis testing was with respect to the mean response, and a comparison-wise P≤0.05 decision rule was used as the null hypothesis rejection criterion for between-rat-type comparisons. Within-rat-type trends in the serial PET and CMR in vivo imaging data were examined by Piecewise Random Coefficient Regression as described. 10 A P≤0.05 decision rule was used as the null hypothesis rejection criterion for testing for nonzero slope, and for testing within-rat-type differences in the Piecewise Random Coefficient Regression slope parameter values. Immunoblots, HW, HW/BW ratios, and circulating blood glucose, insulin, FFA, BCAA, and β-hydroxybutyrate (BHB) levels were analyzed using paired Student t tests comparing results for same-age groups of SHR and WKY. For metabolomics data, raw area count values for each of the 733 biochemicals analyzed were rescaled to set medians equal to 1, and missing values were imputed with minimum values on a per-biochemical basis. The rescaled and imputed values were transformed to their natural logs, and 2-way ANOVA analysis was performed to identify metabolites that differed significantly between groups (SHR and WKY of the same age). Exponentiated least-square means were used to determine geometric mean ratios (relative fold differences) between metabolite levels for SHR and WKY at the same age. Principal Component Analysis was used as an unsupervised analysis to reduce the dimension of the metabolomics data. Hierarchical Clustering Analysis was used as an unsupervised method for clustering data to show large-scale differences using the Euclidean distance, where each sample was a vector with all of the metabolite values. Supervised classification of data was performed by Random Forest Analysis as described. 27 Principal Component Analysis, Hierarchical Clustering, and Random Forest Analysis were all performed on natural log transformed data. To determine correlations of metabolite levels with HWs, we used linear Principal Component Analysis 28 on raw area count metabolite levels rescaled to set medians equal to 1. A P≤0.05 decision rule was used as the null hypothesis rejection criterion. The statistical software packages SAS version 9.4 (SAS Institute Inc., Cary, NC) and GraphPad Prism 7 (GraphPad Inc., La Jolla, CA) were used to conduct the statistical analyses for all data except the metabolomics data. These were analyzed using ArrayStudio and the programs R (http://cran.r-project.org/) or JMP Statistical Software.
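The metabolomics preprocessing and testing pipeline described above (median rescaling to 1, minimum-value imputation, natural-log transform, and a per-metabolite 2-way ANOVA) is straightforward to reproduce. Below is a minimal pandas/statsmodels sketch; the input table and column names are hypothetical stand-ins, not the authors' data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def preprocess(raw: pd.DataFrame) -> pd.DataFrame:
    """Rescale each metabolite so its median is 1, impute missing values
    with the per-metabolite minimum, then natural-log transform."""
    scaled = raw / raw.median()                 # per-column median -> 1
    imputed = scaled.fillna(scaled.min())       # per-column minimum imputation
    return np.log(imputed)

def two_way_anova(values: pd.Series, strain: pd.Series, age: pd.Series):
    """Two-way ANOVA (strain x age) for a single metabolite; returns the model fit."""
    df = pd.DataFrame({"y": values, "strain": strain, "age": age})
    return smf.ols("y ~ C(strain) * C(age)", data=df).fit()

# Hypothetical example with 2 metabolites, 2 strains, 2 ages
rng = np.random.default_rng(0)
raw = pd.DataFrame({"lactate": rng.lognormal(0, 0.3, 24),
                    "fumarate": rng.lognormal(0, 0.3, 24)})
strain = pd.Series(["SHR"] * 12 + ["WKY"] * 12)
age = pd.Series(([12] * 6 + [18] * 6) * 2)

logged = preprocess(raw)
fit = two_way_anova(logged["lactate"], strain, age)
print(fit.pvalues)  # p-values for the strain, age, and interaction terms
```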
RESULTS

Blood Pressure, EDV, ESV, Systolic and Diastolic Function, and LVM and BW in SHR as a Function of Age

MAPs were significantly higher in SHR than WKY rats at 9, 12, and 18 months of age (P=0.001, P<0.001, and P=0.003, respectively). All SHR values were in the hypertensive range (>160 mm Hg; Figure 1A). At 12 months of age, IVRT was on average longer in SHR when compared with WKY (P=0.01), with 4 out of 8 SHR above the range of WKY and the remaining 4 SHR at the highest IVRT value found in only 1 WKY (19.8 milliseconds; Table S1). Thus, 50% of SHR developed diastolic dysfunction at 12 months of age, and the other 50% were in the high normal range. At 18 months, IVRT was increased above the normal range in all surviving SHR, with longer IVRT in SHR with severe systolic dysfunction and significantly increased EDV (P<0.001) than in SHR with preserved EF and no change in EDV (P=0.01; Figure 1E). Interestingly, SHR that died between 12 and 18 months had on average the highest IVRT at 12 months (24.2±2.2 milliseconds, versus 19.8 milliseconds and 22±1.1 milliseconds for the first and second SHR groups, respectively). IVRT was not determined in the rats used in this study when they were 9 months old because the method for measuring IVRT in rats was not established then. Preliminary data with a separate group of rats suggest that diastolic dysfunction may already be present in some SHR at 9 months.

BWs in SHR were significantly decreased at 12 and 18 months of age when compared with WKY rats (P=0.02 and P<0.001, respectively) but were like WKY at 9 months of age (Figure 1F). SHR showed significantly elevated LVM (Table S1) and LVM/BW ratios at 9 (P=0.01), 12 (P<0.001), and 18 months, with higher LVM/BW ratios for SHR with severely reduced EF versus SHR with preserved EF (P<0.001 and P<0.001, respectively; Figure 1G). Further analysis revealed that BW for SHR with severely reduced EF decreased significantly from 9 to 18 months of age (slope=−3.457 units/month, P=0.013; Figure 1F), while BW did not change for SHR with preserved EF (slope=−1.865 units/month, P=0.318; Figure 1F). Hence, LVM/BW ratios in SHR from 9 to 18 months of age increased more in rats with severely reduced EF (Figure 1G) than in rats with preserved EF (slope=0.1444 units/month, P<0.001; Figure 1G). For WKY, BW increased from 9 to 18 months of age (slope=7.488 units/month, P<0.001; Figure 1F), as did LVM/BW ratios (slope=0.0797 units/month, P=0.01; Figure 1G). Consistent with the increased LVM and LVM/BW in SHR determined by CMR, harvested SHR HWs and HWs normalized to BWs were increased at 9, 12, and 18 months of age (P<0.05 versus WKY; Figure 1H).

In Vivo Myocardial Glucose Uptake Abnormalities in SHR as a Function of Age

SHR hearts exhibited a 3.2-fold higher FDG uptake rate than WKY hearts at 9 months of age (P<0.001; Figure 1I). At 12 months, when compared with SHR at 9 months, the FDG uptake rate in SHR hearts (Figure 1I) dropped dramatically (2.3-fold; P<0.001; Figure 1E). However, FDG uptake rates in SHR hearts were still 2-fold higher than in WKY hearts (P=0.033; Figure 1I). At 18 months, the rates of glucose uptake remained elevated (1.8-fold; P=0.08) in SHR hearts that maintained mild systolic dysfunction (Figure 1D). However, FDG uptake rates were dramatically reduced in SHR hearts that developed severe systolic dysfunction.
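The Ki values discussed here come from a 3-compartment kinetic model with spillover and partial-volume corrections, which is beyond a short example, but the core idea of extracting an FDG uptake rate from dynamic PET data can be illustrated with the simpler Patlak graphical method. The following Python sketch uses synthetic time-activity curves; it illustrates the concept and is not the authors' MATLAB implementation.

```python
import numpy as np

def patlak_ki(t, c_plasma, c_tissue, t_star=10.0):
    """Estimate the FDG uptake rate Ki (1/min) with the Patlak graphical method.

    After time t* the model predicts a linear relation
        C_tissue(t)/C_plasma(t) = Ki * Int_0^t C_plasma ds / C_plasma(t) + V0,
    so Ki is the slope of a straight-line fit.
    """
    # cumulative integral of the plasma input function (trapezoidal rule)
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))
    late = t >= t_star                      # use only the quasi-steady-state frames
    x = int_cp[late] / c_plasma[late]       # "normalized time"
    y = c_tissue[late] / c_plasma[late]
    ki, v0 = np.polyfit(x, y, 1)            # slope = Ki, intercept = V0
    return ki, v0

# Synthetic demonstration data (illustrative only)
t = np.linspace(0.25, 60, 240)                       # minutes
c_plasma = 100.0 * np.exp(-0.15 * t) + 5.0           # decaying input function
int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))
true_ki = 0.02
c_tissue = true_ki * int_cp + 0.4 * c_plasma         # irreversible trapping + blood volume
print(patlak_ki(t, c_plasma, c_tissue))              # slope should recover ~0.02
```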
mTOR and GRP78 Activation

Earlier, in a model of acutely increased cardiac pressure overload in mice using transverse aortic restriction, we found that changes in glucose metabolism precede and regulate remodeling of the stressed heart, with glucose 6-phosphate playing a critical role in load-induced mechanistic target of rapamycin (mTOR) activation and endoplasmic reticulum stress. 25 Increased mTOR activity has also been implicated in the development of cardiac hypertrophy in SHR. 29 Consistent with these observations, we observed increased mTOR activation (as determined by measuring p70S6 kinase [p70S6K] phosphorylation) 10 in young SHR hearts at early stages of hypertension, and normalization of mTOR activity together with prevention of LVH in response to metformin treatment. 14 In this study with older SHR hearts, we found increased p70S6K phosphorylation, most likely due to increased mTOR activity, in SHR hearts at 9 and 12 months of age (Figure 2A), with a statistically significant difference between SHR and WKY at 12 months (P<0.001; Figure 2B). Total p70S6K expression in SHR and WKY hearts was similar at all ages. Although LVH further increased in SHR hearts that develop severe systolic dysfunction, mTOR activity was no longer elevated at 18 months of age. It is thus possible that factors other than increased mTOR activity promote cardiac hypertrophy. 30 As a marker of ER stress, we evaluated GRP78 protein expression. There was no significant change in GRP78 in SHR hearts at 9, 12, and 18 months when compared with WKY rat hearts (Figure 2C and 2D).

Figure 2. A, SHR (n=6) and WKY (n=6) hearts were immunoblotted for phospho-p70S6K (p-p70S6K, a marker for mTOR activity), total p70S6K, and GAPDH. B, Signal intensities for p-p70S6K and p70S6K were normalized to GAPDH, and p-p70S6K/p70S6K ratios were determined. Results were normalized within age groups (for which samples were analyzed on the same immunoblots) and fold changes shown. C, SHR (n=6) and WKY (n=6) hearts were immunoblotted for GRP78, a marker for ER stress, and GAPDH. D, Signal intensities for GRP78 normalized to GAPDH are presented. Results were normalized within age groups (for which samples were analyzed on the same immunoblots) and fold changes were graphed. All data are shown as mean±SE. *P≤0.05 SHR vs WKY. Data were compared between SHR and WKY at each age using paired Student t tests. ER indicates endoplasmic reticulum; mTOR, mechanistic target of rapamycin; SHR, spontaneously hypertensive rats; and WKY, Wistar Kyoto rats.

Metabolomics Analysis of SHR Hearts at 12 and 18 Months of Age

To obtain further insight into cardiac metabolic changes, hearts from 12- and 18-month-old SHR and WKY were analyzed for 733 named metabolites using metabolomics. Levels of 230 (90 up/140 down) and 347 (118 up/229 down) metabolites were different between SHR and WKY hearts at 12 and 18 months, respectively. Principal Component Analysis of the metabolite data demonstrated that the first principal component separated SHR and WKY well, while the second principal component separated the 2 age groups (12 and 18 months; Figure 3A). These results show that chronic hypertension and advancing age induce substantial shifts in cardiac metabolite profiles. Separation between SHR and WKY and between SHR at 12 and 18 months was also observed with Hierarchical Cluster Analysis (Figure 3B). Random Forest Analysis binned samples into age groups based on metabolite similarities and differences with an accuracy of 67% for WKY and an accuracy of 100% for SHR.
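The three pattern-recognition analyses just described (Principal Component Analysis, hierarchical clustering on Euclidean distances, and Random Forest classification, all on log-transformed data) map onto a few scikit-learn/scipy calls. The sketch below uses random stand-in data with the same 4-group design; the matrix shapes and labels are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = np.log(rng.lognormal(0.0, 0.5, size=(24, 733)))   # 24 hearts x 733 metabolites (stand-in)
groups = np.repeat(["WKY12", "WKY18", "SHR12", "SHR18"], 6)

# PCA: first two components for the score plot (Figure 3A analog)
scores = PCA(n_components=2).fit_transform(X)

# Hierarchical clustering on Euclidean distances (Figure 3B analog)
tree = linkage(X, method="ward", metric="euclidean")
clusters = fcluster(tree, t=4, criterion="maxclust")

# Random Forest classification with cross-validated accuracy (Figure 3C analog)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(rf, X, groups, cv=4).mean()
print(scores[:2], clusters[:6], round(acc, 2))
```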
Among the top 30 metabolites that contributed most strongly to the separation between SHR at 12 and 18 months were changes in amino acid metabolites as well as fatty acid- and BCAA-derived carnitines, suggesting increasing abnormalities in protein remodeling and energy metabolism, respectively (Figure 3C). Note that, due to the high costs of CMR and PET imaging, the rats used for cardiac metabolomics analyses did not undergo CMR or FDG PET analyses. Thus, metabolite profiles for SHR hearts at 12 and 18 months could not be directly linked to the severity of diastolic or systolic dysfunction or to changes in FDG uptake rates in the same hearts. This is a limitation of the present study. Nonetheless, the cardiac metabolic profiles of SHR at 12 and 18 months differ considerably. Tables S2 and S3 provide lists of individual metabolites, with relative fold differences between SHR and WKY hearts specified. Table S2 shows metabolites that only differed at 12 or 18 months and metabolites that changed in opposite directions at 12 and 18 months. Table S3 presents metabolites that were changed in the same direction at both 12 and 18 months. In Figures 3D and 3E, changes of representative metabolites in energy-providing pathways (selected from Tables S2 and S3, respectively) are displayed. Most of the metabolites shown in Table S2 and Figure 3D were normal in hearts of 12-month-old SHR but significantly decreased in 18-month-old SHR hearts when compared with WKY rat hearts of the same age. Relevant to energy metabolism, this includes long-chain saturated, monounsaturated, and polyunsaturated fatty acids, long-chain fatty acid-containing monoacylglycerols, short-, medium-, and long-chain fatty acid-derived carnitines, BCAA-derived carnitines, and the Krebs cycle-associated succinylcarnitine and acetyl-coenzyme A (CoA). However, a few short-chain carnitines derived from fatty acid or ketone body metabolism 31 (including butenoylcarnitine, and the hydroxy fatty acyl carnitines (R)-3-hydroxybutyrylcarnitine, (S)-3-hydroxybutyrylcarnitine, and 3-hydroxyhexanoylcarnitine) were highly elevated (2-4-fold) in SHR hearts at 12 months and normal at 18 months. Valine, isoleucine-derived tiglylcarnitine (C5:1-DC), and lactate were moderately elevated at 12 months and normal at 18 months. Several fatty acid- and/or BCAA-derived carnitines, including valerylcarnitine (C5), butyrylcarnitine, β-hydroxyisovaleroylcarnitine, and carnitine, were increased at 12 months and decreased at 18 months. Among the metabolites that changed in the same direction in hearts of both 12- and 18-month-old SHR, the majority (≈2/3) were downregulated, and fold changes were larger at 18 months for many of them (Table S3 and Figure 3E). Downregulated metabolites, as relevant to energy metabolism, included the fatty acid synthesis intermediate malonylcarnitine, a few long-chain monounsaturated and polyunsaturated fatty acids, diacylglycerols, several medium- and long-chain fatty acid-derived carnitines, the fatty acid dicarboxylate 2-hydroxyglutarate, the BCAA-derived metabolites methylsuccinate and methylsuccinoylcarnitine, the glucose metabolites 2- and 3-phosphoglycerate, and glycogen metabolites.
Metabolites that were increased in both 12- and 18-month-old SHR hearts included the fatty acid dicarboxylates glutarate and maleate, the monohydroxy fatty acid 4-hydroxybutyrate, dihydroxy fatty acids, nervonoylcarnitine, the ketone body 3-hydroxybutyrate, the BCAAs leucine and isoleucine and several of their metabolites, the glucose metabolite phosphoenolpyruvate, as well as the Krebs cycle intermediates fumarate and malate. There were widespread disturbances in nucleotide metabolism, with metabolites mostly downregulated (including cyclic adenosine monophosphate (cAMP), an important signaling molecule; Table S3). Notable exceptions were xanthine, urate, and thymidine, which were upregulated in SHR hearts at 18 months (Table S2). Nucleotide metabolites involved in energy transfer, ATP, ADP, and AMP, as well as reduced nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide, were normal, but NAD+ was decreased in 12- and 18-month-old SHR hearts (Table S3 and Figure 3E). There were also numerous abnormalities in amino acid metabolites in SHR hearts (in addition to the BCAA metabolites mentioned above), including decreased glutamate at 12 months (Table S2) and increased tyrosine at 12 and 18 months (Table S3). Indicators of protein remodeling were decreased (for example, N6-acetyllysine; Table S3), while indicators of protein degradation were increased (such as the histidine degradation products hydantoin-5-propionate at 18 months [Table S2] and imidazole lactate at 12 and 18 months [Table S3]). Notable changes of other amino acid metabolites were 2- to 4-fold decreases in carnosine and anserine (Table S3), 2 histidyl peptides that can scavenge lipid peroxidation products generated during heart failure, 32 and increased aminosugar N-acetylglucosamine 6-phosphate at 18 months (Table S2). Altered redox homeostasis in SHR hearts was reflected in decreased cystathionine (Table S3), which together with increased 2-hydroxybutyrate (18 months, Table S2) and increased cysteine (Table S3) suggests increased synthesis of glutathione, a thiol with antioxidant properties.

Figure 3. Metabolites were analyzed in hearts of SHR and WKY at 12 (n=6) and 18 months (n=6) as described. 10 A, Principal Component Analysis, with light blue cones representing WKY at 12, dark blue cylinders WKY at 18, orange cones SHR at 12, and red cylinders SHR at 18 months (mo). B, Hierarchical Clustering Analysis. C, Random Forest Analysis. D and E, Metabolite profiles. D, Metabolites that were differentially modified in hearts of 12- and 18-month-old SHR relative to WKY at the same age, with most metabolites normal (P>0.05 SHR vs WKY) at 12 months, except valerylcarnitine, (R)-3-hydroxybutyrylcarnitine, butyrylcarnitine, and carnitine, which were statistically significantly different (P<0.05 SHR vs WKY). Statistically significant (P<0.05 SHR vs WKY) decreases or increases were found for all metabolites shown for 18-month-old rats, except for (R)-3-hydroxybutyrylcarnitine, which was normal (P>0.05 SHR vs WKY). E, Metabolites that were changed in the same direction in hearts of 12- and 18-month-old SHR relative to WKY at the same age. P<0.05 SHR vs WKY at the same age, except 0.05≤P≤0.1 for octanoylcarnitine, phosphoenolpyruvate, fumarate, and NAD+ at 12, and for leucine at 18 months. For analyses, raw area count values for each biochemical shown were rescaled to set medians equal to 1. Rescaled values were transformed to their natural logs, and 2-way ANOVA analysis was performed to identify metabolites that differed significantly between groups (SHR and WKY at the same age).
Exponentiated least-square means ±95% CI are shown in (D) and (E). Principal Component Analysis (A), Hierarchical Clustering (B), and Random Forest Analysis (C) were performed on natural log transformed data. Lipid, BCAA (branched chain amino acid), glucose, and TCA (tricarboxylic acid cycle) name the metabolic pathways with which designated metabolites are associated. NAD+ indicates nicotinamide adenine dinucleotide; SHR, spontaneously hypertensive rats; and WKY, Wistar Kyoto rats.

Increased oxidized glutathione at 18 months (Table S2) and reduced glutathione at 12 and 18 months (Table S3) in SHR hearts are compatible with increased oxidative stress. Concomitant increases in γ-glutamyl amino acids in 18-month-old SHR hearts (Table S2) further support an increased demand for glutathione regeneration due to increased cardiac oxidative stress. Endocannabinoids, fatty acid-derived metabolites with anti-inflammatory properties, were normal at 12 months and decreased at 18 months (Table S2). Highly inflammatory fatty acid-derived eicosanoids, including 12- and 15-hydroxyeicosatetraenoic acids, however, were normal. Changes in membrane phospholipid composition in SHR hearts at 12 and 18 months were suggested by up- and downregulation of several phospholipids, including phosphatidylcholine, phosphatidylethanolamine, glycosyl phosphatidylethanolamine, phosphatidylglycerol, phosphatidylinositol, and sphingolipids (Tables S2 and S3). To complement the cardiac metabolite analysis, we measured circulating levels of the energy-providing substrates FFAs, glucose, BCAA, and the major ketone body β-hydroxybutyrate (BHB, also named 3-hydroxybutyrate) in SHR and WKY rats at 9, 12, and 18 months. FFA and BCAA levels were significantly increased in SHR at 12 and 18 months (Figure 4A and 4B), whereas BHB was significantly increased only at 18 months (Figure 4C). Glucose levels (after 6 hours of fasting) were similar for SHR and WKY at all ages. Insulin levels in SHR at 12 and 18 months were lower (P=0.016 and P=0.073, respectively; Figure 4D and 4E). The normal glucose and lower insulin levels exclude general insulin resistance and major abnormalities in whole body glucose homeostasis as contributing factors to the metabolic changes observed in SHR hearts. Because the rats used for metabolomics analysis were not subjected to FDG PET and CMR imaging, metabolic signatures could not be directly correlated with the severity of diastolic and systolic dysfunction or with changes in FDG uptake. However, increased LVM and LVM/BW in SHR with severe systolic dysfunction suggested a correlation between heart mass and cardiac function. Simple linear regression analyses combining all data at 12 and 18 months of age for the group of SHR that were imaged indeed showed positive and negative correlations, respectively, between LVM and diastolic function (IVRT; r2=0.47, P=0.009) and between LVM and systolic function (EF; r2=0.50, P=0.007). We thus used the weights of the rat hearts that were subjected to metabolomics analysis as a surrogate for the severity of cardiac dysfunction in SHR. After combining data for 12- and 18-month-old SHR and WKY, linear Principal Component Analysis was performed to calculate the proportion of total variability in HW (dependent variable) that is explained by a combination of 17 metabolites (independent variables).
The 17 metabolites were representative of metabolites derived from fatty acid, glucose, amino acid, and ketone body metabolism, with most of them (13) differentially changed between 12 and 18 months (Table S2) and the remaining ones (nervonoylcarnitine, 3-hydroxybutyrate, carnosine, and anserine) substantially changed in the same direction at 12 and 18 months (Table S3). Figure 5A shows a 3-dimensional plot of HW and the 2 principal components PC1 and PC2, illustrating a clear separation of the SHR and WKY data into 2 clusters. PC1 and PC2 explained 45% and 26% (total of 71%) of the variance in the data. To understand how much each of the metabolites contributed to PC1 and PC2, loadings were calculated (Table S4).

In summary, SHR hearts at 9 months of age had mild cardiac systolic dysfunction in the presence of elevated glucose uptake and increased LVH. At 12 months, SHR hearts maintained mild systolic dysfunction and exhibited diastolic dysfunction and increasing LVH. Myocardial glucose uptake was decreased in 12-month-old SHR versus 9-month-old SHR but was still elevated compared with WKY. A small group of SHR maintained mild systolic dysfunction and elevated glucose uptake at 18 months of age but showed increasing diastolic dysfunction and LVH. Most SHR developed severe systolic dysfunction and pronounced diastolic dysfunction and LVH with drastically decreased myocardial glucose uptake, or did not survive to 18 months. The metabolite profiles of SHR hearts showed clear abnormalities at 12 months and more pronounced alterations at 18 months, when all SHR have diastolic and most exhibit severe systolic dysfunction. The changes are consistent with deteriorating impairments in energy metabolism (illustrated in Figure 6). Decreasing mitochondrial β-oxidation of fatty acids was indicated by reduced levels of mostly long-chain fatty acyl carnitines at 12 months and of long-, medium-, and short-chain fatty acyl carnitines at 18 months. Impaired glycolysis in SHR hearts was indicated by decreases in glycolytic intermediates such as 2- and 3-phosphoglycerate at 12 and 18 months. Increased lactate and normal pyruvate in SHR hearts at 12 months are compatible with increased conversion of pyruvate to lactate in the presence of impaired glucose oxidation. Increased phosphoenolpyruvate levels at 12 and 18 months are consistent with reduced glucose oxidation and simultaneously decreased conversion of phosphoenolpyruvate to pyruvate. Differences in BCAA metabolism between 12 and 18 months were indicated by changes in BCAA-derived carnitines: while they were normal or increased at 12 months, they were decreased at 18 months. Increased cardiac BCAA and BCAA α-ketoacid levels were present at both 12 and 18 months. Increased 3-hydroxybutyrate at 12 and 18 months in SHR hearts, and in the circulation at 18 months, together with increased 3-hydroxybutyrylcarnitines in SHR hearts at 12 months, support changes in ketone body metabolism. Shortages in acetyl-CoA and succinylcarnitine (18 months) and NAD+ (12 and 18 months), and increases in Krebs cycle intermediates that lie upstream of the NAD+-using/NADH-producing rate-limiting reactions in the Krebs cycle (malate and fumarate at 12 and 18 months), are compatible with bottlenecks in generating the metabolites needed for production of reducing equivalents that can be used in the respiratory chain. Analysis of the relationship between HW and select metabolites uncovered several metabolites whose levels were highly correlated with increasing HW. These include 3-hydroxybutyrate and ketone body-, fatty acid-, and BCAA-derived carnitines.
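The heart-weight-versus-metabolites analysis amounts to a principal component regression: standardize the 17 metabolite levels, extract the leading components, and regress HW on the component scores to quantify explained variability. A minimal scikit-learn sketch with stand-in data follows; the 45%/26% variance figures quoted above refer to the authors' data, not to this toy example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
metabolites = rng.normal(size=(24, 17))          # 24 hearts x 17 selected metabolites (stand-in)
hw = 1.2 + 0.3 * metabolites[:, 0] + 0.1 * rng.normal(size=24)  # synthetic heart weights (g)

# Standardize, then project onto the first two principal components
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(metabolites))

# Regress HW on PC1 and PC2; R^2 is the proportion of HW variability explained
model = LinearRegression().fit(scores, hw)
print("R^2 of HW on PC1 + PC2:", round(model.score(scores, hw), 2))
```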
These include 3-hydroxybutyrate and ketone body-, fatty acid-, and BCAA-derived carnitines. Impairments in fatty acid, glucose, and BCAA metabolism occurred in the presence of increased levels of circulating free fatty acids and BCAA and normal glucose levels. The decreased insulin levels excluded whole-body insulin resistance as a cause of the cardiac metabolic changes. In conclusion, our analyses revealed progressive metabolic impairments in SHR that paralleled the increasing severity of cardiac diastolic and systolic dysfunction.

DISCUSSION

Many studies have reported cardiac metabolic abnormalities in pressure overload-induced compensated cardiac hypertrophy and systolic HF (HFrEF) in animals and humans. 8,33 However, examination of metabolic abnormalities associated with diastolic dysfunction (HFpEF) is limited. 8,34 Our in-depth analysis of metabolic changes with mild diastolic and systolic dysfunction in 12-month-old SHR hearts, and comparison of these in the same model with abnormalities in SHR hearts at 18 months of age, when severe systolic and advanced diastolic dysfunction develops in most SHR, add key information towards establishing metabolic changes during the progression of diastolic and systolic HF. Apparent discrepancies with earlier SHR studies, including that of Li et al, 16 may be due to the fact that, although age-dependent changes in FDG uptake rates were analyzed, no parallel functional analysis separated hearts based on the severity of diastolic and systolic dysfunction. 15,16 Interestingly, in a clinical study we found stepwise decreases in FDG uptake rates with increasing severity of diastolic dysfunction in hypertensive humans. 36 SHR in aggregate also have decreasing glucose uptake with increasing diastolic dysfunction between 12 and 18 months of age. However, linear regression analysis relating individual Ki values for FDG uptake and IVRT data did not show a significant correlation in SHR.

Metabolite Profiles of Failing Hearts

Cardiac metabolomics analyses have been performed in rodent models during the development of compensated hypertrophy and systolic HF: after transverse aortic constriction in mice without 37 and with a small apical myocardial infarction, 38 and in Dahl salt-sensitive rats fed a high-salt diet. 35 Only one of these studies 37 performed nontargeted metabolomics analyzing 288 named metabolites (versus 733 in our study); the other 2 did targeted metabolite analyses. 35,38 No previous study has performed metabolomics analysis of hearts with diastolic dysfunction or HFpEF. When comparing metabolite profiles at the time of severe systolic dysfunction or HFrEF, changes in lipid, glucose, and BCAA metabolites, as well as Krebs cycle intermediates, were commonly identified. However, none of the reported abnormalities changed in the same direction in all studies, except that increases in the BCAAs leucine, isoleucine, and/or valine in heart or plasma were reported by all. There were, however, metabolites that were similarly upregulated or downregulated in our SHR hearts at 18 months and in at least 1 of the models of systolic HF.
Those included decreases in the fatty acids nonadecanoate and arachidate and in the fatty acid and BCAA metabolite 2-methylmalonylcarnitine; 37 changes in amino acid metabolites, with increases in pipecolate (lysine metabolite), 37 cysteine, and citrulline (arginine metabolite), 35 and decreases in hypotaurine and cystathionine (methionine/cysteine/taurine metabolites), carnosine (histidine metabolite), and creatinine and carnitine; 38 decreases in the nucleotide metabolites inosine monophosphate, 3′-AMP, and 3′,5′-ADP; 37 and increased polyamine metabolites putrescine and spermidine. 37 The lack of abnormalities common to all models of systolic HF may be due to differences in animal models, methods used to induce systolic HF, and time span to develop systolic HF, as well as the time point during the development of systolic HF at which analyses were performed.

Fatty Acid and Glucose Metabolism With Diastolic Dysfunction

In HFpEF, increased fatty acid oxidation and decreased or increased glucose oxidation have been observed. 34 In a mouse model of aortic constriction, a decrease in myocardial glucose oxidation preceded the development of diastolic dysfunction, 39 while in a rat model glucose oxidation and glycolysis increased with HFpEF. 40 In Dahl salt-sensitive rats fed a high-salt diet for up to 9 weeks, cardiac glycolysis increased while glucose oxidation did not change with progressive diastolic dysfunction, leading to an uncoupling of glucose metabolism and increased proton production (lactate). 41 No decreases in fatty acid oxidation or overall ATP production were observed in the same Dahl salt-sensitive rat model with diastolic dysfunction; impaired fatty acid oxidation was only present with systolic HF. Our metabolomics data for SHR suggest reduced cardiac fatty acid metabolism and oxidation (despite increased circulating FFA levels) at 12 months, when in the imaging group one-half of the SHR hearts develop overt diastolic dysfunction and IVRT values for the other half are in the high-normal range. As it relates to glucose metabolism, the sharp decrease in glucose uptake in SHR hearts at 12 months, while still increased compared with WKY, is accompanied by decreases in glucose metabolites upstream of pyruvate. Marginally increased lactate in SHR hearts at 12 months indicates that pyruvate, instead of entering the Krebs cycle for oxidation, is metabolized to lactate at an increased rate, consistent with the previous observation in Dahl salt-sensitive rats with diastolic dysfunction. 41

Fatty Acid and Glucose Metabolism With Systolic HF

In systolic HF, decreased fatty acid oxidation together with decreased myocardial fatty acid uptake, 35 decreased fatty acid transporter expression, 35 and decreased expression of enzymes involved in fatty acid metabolism 35,37 have been consistently reported. 34 Decreased myocardial fatty acid metabolism was also observed in human patients with systolic HF. 42 Our fatty acid metabolite data for SHR at 18 months, when severe systolic dysfunction develops in most SHR, are consistent with these earlier observations. Impaired glucose metabolism in HFrEF is commonly characterized by increased glycolysis and decreased glucose oxidation 34 in the presence of increased glucose uptake and increased glucose transporter GLUT1 expression. 35 But decreased expression of the glucose transporters GLUT1 and GLUT4, and of phosphofructokinase, an important rate-limiting enzyme of glycolysis, has been described in failing human hearts. 43
Our data for 18-month-old SHR are more consistent with the latter findings, as we observe decreased glucose uptake with severe systolic dysfunction and decreases in glycolysis intermediates. Pyruvate was increased or decreased in systolic HF models 35,37,38 but was normal in 18-month-old SHR hearts. However, phosphoenolpyruvate was increased 2-fold in 18-month-old SHR hearts. Pyruvate kinase is the rate-limiting enzyme that irreversibly metabolizes phosphoenolpyruvate to pyruvate, and increased expression of the pyruvate kinase M2 (instead of the M1) isoform in the failing heart has been reported. 44 Furthermore, a recent study showed that pyruvate kinase M1 expression was reduced in failing human and mouse hearts and that cardiomyocyte-specific deletion of pyruvate kinase M1 exacerbated cardiac dysfunction and fibrosis in response to pressure overload. Conversely, pyruvate kinase M1 overexpression in cardiomyocytes protected the heart from pressure overload-induced heart failure. 45 Thus, increased phosphoenolpyruvate in SHR hearts may be due to decreased pyruvate kinase M1 expression/activity.

BCAA Metabolism With Diastolic and Systolic HF

Recently, changes in cardiac BCAA metabolism have been reported in patients and animals with systolic HF. 46,47 Specifically, impaired BCAA oxidation with increased plasma and cardiac BCAA and BCAA α-ketoacid levels and decreased activity of branched-chain α-keto acid dehydrogenase, the rate-limiting enzyme in BCAA catabolism, has been described. 8,34,46 Little is currently known about BCAA metabolism in HFpEF. 34 Our findings of increased BCAA, BCAA α-ketoacids, and BCAA metabolites upstream of branched-chain α-keto acid dehydrogenase, and of decreased BCAA metabolites (including BCAA-derived carnitines) downstream of branched-chain α-keto acid dehydrogenase, in SHR hearts at 18 months of age are consistent with the earlier observations in systolic HF. However, abnormalities in BCAA metabolites were already present at 12 months in SHR hearts, when several SHR exhibit diastolic dysfunction. Normal or increased levels of branched-chain α-keto acid dehydrogenase products together with increased BCAA and BCAA α-ketoacids suggest that BCAA may still be used for oxidation in hearts of 12-month-old SHR. Although BCAA oxidation only accounts for ≈1% to 2% of total ATP produced in the normal heart and <1% in the failing heart, accumulation of BCAAs and BCAA α-ketoacids impairs glucose oxidation and increases mTOR activity to stimulate cardiac growth in the failing heart. 8 Indeed, mTOR activity is increased at 12 months, and glucose metabolism is impaired at 12 and 18 months in SHR hearts.

Ketone Body Metabolism With Diastolic and Systolic HF

Previous studies suggested increased cardiac ketone body oxidation in humans and animal models of HF 8,34,48 with concomitant inhibition of fatty acid oxidation. 8,34 Increased serum ketone body levels have been observed in human patients with HFpEF that were higher than in patients with HFrEF. 49 Substantially increased 3-hydroxybutyrate (also referred to as BHB) in SHR hearts at 12 and 18 months, increased levels of ketone body breakdown products at 12 months, and elevated circulating BHB levels at 18 months are compatible with increased BHB metabolism and may also suggest differences in ketone body use with increasing cardiac dysfunction. Note that increased ketone body metabolism is not necessarily associated with increased ketone body oxidation.
A recent study suggested decreased BHB oxidation in HFpEF 50 in the presence of increased cardiac BHB and decreased circulating BHB. Our observation in SHR hearts at 12 months of age is compatible with increased formation of 3-hydroxybutyrylcarnitine from 3-hydroxybutyrate. As described by Soeters et al, this could involve direct activation of 3-hydroxybutyrate to 3-hydroxybutyryl-CoA by acetyl-CoA synthase and subsequent conversion to 3-hydroxybutyrylcarnitine by carnitine acetyltransferase. 31 The formation of 3-hydroxybutyrylcarnitine from 3-hydroxybutyrate suggests that delivery of 3-hydroxybutyrate exceeds oxidation capacity, compatible with the impaired BHB oxidation observed by Deng et al. 50 Mechanisms responsible for increased 3-hydroxybutyrylcarnitine levels in SHR hearts at 12 months of age will need to be further explored. At 18 months of age, although cardiac and circulating BHB are both significantly elevated, 3-hydroxybutyrylcarnitine levels in SHR hearts are no longer significantly different from WKY, suggesting changes in BHB metabolism. Whether increased ketone body oxidation occurs at 18 months in SHR hearts, as suggested by other studies of HFrEF, 8 will need to be determined.

Energy Deficit With Diastolic and Systolic HF

Kato et al showed a deficit in myocardial energy reserve by in situ 31P-magnetic resonance spectroscopy (phosphocreatine/ATP) with systolic HF in Dahl salt-sensitive rats. 35 We observed that acetyl-CoA, the common breakdown product of all energy-providing substrates and the key metabolite oxidized in the Krebs cycle, was reduced at 18 months in SHR hearts, implying a deficit in the generation of the metabolites (reduced nicotinamide adenine dinucleotide [NADH] and reduced flavin adenine dinucleotide [FADH2]) necessary for oxidative phosphorylation of ADP to ATP. Consistent with this and previous studies (reviewed in 51), we also observed decreased NAD+ at 12 and 18 months in SHR hearts. Although the mechanisms leading to altered NAD(H) levels in HF are currently unknown, decreased NAD+ is not only a problem for energy transfer but also for redox homeostasis and signaling, and it may contribute in several ways to the deterioration of cardiac function and changes in cardiac structure. 8,51 Interestingly, supplementation of nicotinamide precursors raises NAD levels in failing mouse hearts and improves HF outcomes. 8,51

Metabolic Changes in Hearts of Young SHR

We previously analyzed metabolic changes in 2-month-old SHR during early hypertension. 10 We observed increased cardiac glucose uptake similar to 9- and 12-month-old SHR hearts. However, changes in cardiac metabolites were different in young SHR hearts. Markedly elevated pyruvate, fatty acyl- and BCAA-derived carnitines, and increased markers of oxidative stress and inflammation (including lipid oxidation and peroxidation products) were compatible with augmented glucose, fatty acid, and BCAA catabolism, with the supply of metabolites for oxidation in the Krebs cycle exceeding mitochondrial capacity.

Limitations

A major limitation of our study is that we were not able to perform FDG PET and CMR on the rat hearts that were used for metabolomics analysis. However, using HW as a surrogate for the severity of cardiac dysfunction, we identified several metabolites for which levels correlated with increasing HW. Causal relationships between these metabolites, HW, and cardiac function will need to be explored in the future.
Metabolomics analyses provide single-time-point, steady-state metabolite measurements and thus fail to capture the rates at which metabolic reactions occur. To clearly identify the defect(s) responsible for changes in specific metabolites in SHR hearts, metabolic flux studies with stable isotopes will need to be performed. Our interpretation of the metabolite profiles in SHR hearts provided above, therefore, estimates likely scenarios of how metabolic pathways may be affected and serves as a guide for isotope studies. However, the observed metabolite changes discussed above are, at least in part, consistent with previous studies that directly measured fatty acid and glucose oxidation in different animal models of HF.

Our study used only male SHR and WKY. The overall lifetime risk of HF is similar between men and women, but there are important and under-recognized sex differences. Men are predisposed to HFrEF, whereas women predominate in HFpEF. 52 The higher risk of HFrEF in men compared with women may be attributable to men's predisposition to macrovascular coronary artery disease and myocardial infarction, whereas coronary microvascular dysfunction/endothelial inflammation has been postulated to play a key role in HFpEF in women. The guidelines for HF treatment are predominately based on male-derived data because only 20% to 25% of the cohorts recruited to HF clinical trials were women. Large gaps in knowledge exist in sex-specific mechanisms, optimal drug doses for women, and sex-specific criteria for device therapy. Analyzing female rats in addition to male rats in our study would have been cost prohibitive. In the future, to establish sex-specific changes, female SHR and WKY need to be evaluated.

SHR represent a genetic model of hypertension, and several genetic differences from control WKY have been identified. 53 It is thus possible that at least some of the cardiac metabolic changes are due to intrinsic differences between SHR and WKY. For example, impaired fatty acid uptake caused by an inherent deletion of the long-chain fatty acid transporter CD36 in SHR strains, including the one we used for our studies, 10 has been described. 54 However, earlier studies observing normal fatty acid uptake in hearts of 9-month-old fed SHR and increased fasting fatty acid uptake rates in hearts of SHR at 8 and 13 months of age 15,16 suggest that CD36 is not the only determinant of fatty acid uptake in SHR hearts. Our own earlier metabolomics analyses also do not support a limited supply of long-chain fatty acids to SHR hearts at 2 months of age. 10 In addition, our unpublished metabolomics analysis of hearts at 1 month of age, when SHR are prehypertensive, cardiac function is normal, and no cardiac hypertrophy is present, 10 supports minimal intrinsic metabolic abnormalities. As it relates to energy metabolism, changes (SHR/WKY) were mostly restricted to glucose metabolism (including increased 2,3-diphosphoglycerate [1.5], phosphoenolpyruvate [2.9], and pyruvate [1.9]). These were compatible with increased glucose uptake and oxidation in isolated hearts of 1-month-old SHR. 10 Only a few changes in fatty acid metabolites, incompatible with decreased long-chain fatty acid uptake, were observed; there were increases in a couple of fatty acids (laurate [1.6], …).
Although we cannot rule out that genetic differences account for some of the observed changes, the fact that the metabolite profiles of 1-, 2-, 12-, and 18-month-old SHR hearts differ drastically suggests that age- and hypertension-associated changes in the heart also contribute to the cardiac metabolite differences between SHR and WKY. Future studies with other models of hypertension, cardiac hypertrophy, and HF will establish which of the metabolic changes are specific to SHR and which are disease related.

CONCLUSIONS

Progressive cardiac diastolic and systolic dysfunction in SHR is marked by profound myocardial metabolic abnormalities. Increased and decreased glucose uptake, respectively, differentiate SHR hearts with mild systolic dysfunction from SHR hearts with severe systolic dysfunction. Escalating impairments in fatty acid, glucose, and BCAA metabolism during the progression from mild diastolic and systolic dysfunction to pronounced diastolic and severe systolic dysfunction are indicated by deteriorating production of the metabolites necessary for ATP generation, with ketone bodies possibly used as an alternative energy substrate. Widespread abnormalities in nucleotide, amino acid, and phospholipid metabolism may further contribute to deteriorating cardiac function and mediate structural abnormalities. At this point, we do not know which metabolic changes trigger or are detrimental to the development of diastolic and/or severe systolic dysfunction in SHR. Since abnormalities in …
Settings-Free Hybrid Metaheuristic General Optimization Methods

Several population-based metaheuristic optimization algorithms have been proposed in the last decades, none of which is able either to outperform all existing algorithms or to solve all optimization problems, according to the No Free Lunch (NFL) theorem. Many of these algorithms behave effectively, under a correct setting of the control parameter(s), when solving different engineering problems. The optimization behavior of these algorithms is boosted by applying various strategies, which include the hybridization technique and the use of chaotic maps instead of pseudo-random number generators (PRNGs). Hybrid algorithms are suitable for a large number of engineering applications in which they behave more effectively than thoroughbred (non-hybrid) optimization algorithms. However, they increase the difficulty of correctly setting control parameters, and sometimes they are designed to solve particular problems. This paper presents three hybridizations, dubbed HYBPOP, HYBSUBPOP, and HYBIND, of up to seven algorithms free of control parameters. Each hybrid proposal uses a different strategy to switch the algorithm charged with generating each new individual. These algorithms are Jaya, the sine cosine algorithm (SCA), Rao's algorithms, teaching-learning-based optimization (TLBO), and chaotic Jaya. The experimental results show that the proposed algorithms perform better than the original algorithms, which implies the optimal use of these algorithms according to the problem to be solved. One more advantage of the hybrid algorithms is that no prior process of control parameter tuning is needed.

Introduction

It is well known that metaheuristic optimization methods are widely used to solve problems in several fields of science and engineering. Population-based metaheuristic methods iteratively generate new populations to increase diversity in the current generation. This increases the probability of reaching the optimum of the considered problem. These algorithms are proposed to replace exact optimization algorithms when the latter are not able to reach an acceptable solution. The inability to provide an adequate solution may be due either to the characteristics of the objective function or to a wide search space, which renders an exhaustive search useless. In addition, classical optimization methods, such as greedy-based algorithms, need to make several assumptions that can be hard to satisfy for the considered problem.

Preliminaries

As mentioned above, among the best control-parameter-free algorithms are the Jaya algorithm [16], the SCA algorithm [13], the supply-demand-based optimization (SDO) method [15], Rao's optimization algorithms [18], the Harris hawks optimization (HHO) method [17], and the teaching-learning-based optimization (TLBO) algorithm [14]. Among these proposals, the HHO algorithm is the most complex. It consists of two phases. During the first phase, the elements of the population are replaced without comparing the fitness of the associated solutions, which is an unwanted strategy for hybrid algorithms. In addition, the SDO algorithm, which offers impressive initial results, works with two populations, preventing its integration into our hybrid proposals. As mentioned earlier, the use of chaotic maps can improve the behavior of some metaheuristic methods. The 2D chaotic map reported in [33] has significantly improved the convergence rate of the Jaya algorithm [33,77].
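For concreteness, the following is a minimal C sketch of the standard Jaya update rule of [16]; rand01() is an illustrative helper that is not taken from the paper's code, and in CJaya the two draws would instead be chaotic values extracted from the 2D chaotic map:

    /* Minimal sketch of the standard Jaya update rule [16].
       rand01() is an illustrative helper; in CJaya the two draws
       would come from the 2D chaotic map instead. */
    #include <math.h>
    #include <stdlib.h>

    static double rand01(void) { return (double)rand() / RAND_MAX; }

    void jaya_update(double *pop_m, const double *best, const double *worst,
                     int numDesignVars)
    {
        for (int k = 0; k < numDesignVars; k++) {
            double r1 = rand01(), r2 = rand01();
            /* move towards the best individual and away from the worst one */
            pop_m[k] += r1 * (best[k] - fabs(pop_m[k]))
                      - r2 * (worst[k] - fabs(pop_m[k]));
        }
    }

In the hybrid skeleton discussed below, the new candidate replaces Pop_m only when it improves the fitness of the current solution (line 21 of Algorithm 6).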
The generation of the 2D chaotic map is shown in Algorithm 4, where the initial conditions are chA_1 = 0.2, chB_1 = 0.3, k = i, and dimMap = 500. The computed values of chA_i and chB_i are in [−1, 1]. The chaotic Jaya algorithm (in short, CJaya) is shown in Algorithm 5, where ch_x, x ∈ [1…6], are chaotic values randomly extracted from the 2D chaotic map. Other chaotic maps have been applied to Jaya in [32,78]. However, they do not surpass the chaotic behavior of the aforementioned 2D map. As they present a similar structure, Algorithms 1-5 are used for designing our hybrid algorithms. A fragment of one of these listings (population initialization and the start of the main loop) reads:

    7:  Pop_m^k = MinValue_k + (MaxValue_k − MinValue_k) * r_1
    8:  end for
    9:  Compute and store the function fitness F(Pop_m^k)
    10: end for
    11: for iterator = 1 to max_ITs do
    12:   Search for the current BestPop and WorstPop
    13:   for m = 0 to popSize do
    14:   …

Hybrid Algorithms

The proposed hybrid algorithms are designed using the seven algorithms described in Section 2. These algorithms have been selected thanks to their performance in solving constrained and unconstrained functions, but also because they share a similar structure that allows the implementation of different hybridization strategies. Algorithm 6 shows the skeleton of the proposed hybrid algorithms, which includes all common and uncommon tasks without any updating procedure of the current population. Since the TLBO algorithm is a two-phase algorithm, the proposed hybrid algorithms apply these two phases consecutively to each individual. In contrast to the other algorithms, where a single phase is executed, a control parameter Phase is applied to process the same individual twice when the TLBO algorithm is used (see lines 24-29 of Algorithm 6). The algorithm used to obtain a new individual is determined by AlgSelected (see line 17 of Algorithm 6). In Algorithms 6-9, AlgSelected determines the algorithm accountable for producing a new individual. Given that only algorithms that are free of control parameters have been considered, proposals that require the inclusion of control parameters have been discarded. Following these guidelines, we have designed three hybrid algorithms, an analysis of which is provided in Section 4.

The first proposed hybrid algorithm, shown in Algorithm 7, processes the entire population in each iteration using the same algorithm and is referred to as the HYBPOP algorithm. This is the most straightforward hybridization technique, where the requirement to follow the structure given by Algorithm 6 is not mandatory for all algorithms. In Algorithms 7-9, NumOfAlgorithms is the number of algorithms free of control parameters involved in the hybrid proposals. Another listing fragment (teaching-phase steps) reads:

    7:  Set the scaling factor S_F and the teaching factor T_F (an integer random value ∈ [1, 2])
    8:  for k = 1 to numDesignVars do
    9:    AveragePop_k = Σ_{m=1} Pop_m^k / numDesignVars
    10: end for
    11: …

The second algorithm, named HYBSUBPOP, is described through Algorithm 8. It logically splits the population into sub-populations. During the optimization process, each sub-population is processed by one of the seven algorithms mentioned previously. It is worth noting that the aim of the proposed hybrid algorithms is not to improve the convergence ratio of the used algorithms separately, nor to perform optimally for a particular problem. It is to show outstanding performance for a large number of problems without adjusting any control parameters of the considered algorithms.
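To illustrate the switching strategies, the following C sketch shows two possible implementations of AlgSelected; the rules below are assumptions made for the sake of illustration, since the precise switching logic is specified by Algorithms 7-9:

    /* Illustrative selectors for AlgSelected in the hybrid skeleton
       (Algorithm 6). The switching rules are assumed here; Algorithms
       7-9 define the actual ones. */
    #include <stdlib.h>

    enum { JAYA, CJAYA, SCA, RAO1, RAO2, RAO3, TLBO, NumOfAlgorithms };

    /* HYBPOP style: the whole population in a given iteration is
       processed by the same algorithm, cycling through the pool. */
    int alg_selected_hybpop(int iterator)
    {
        return iterator % NumOfAlgorithms;
    }

    /* HYBIND style: an independent choice is made for each new individual. */
    int alg_selected_hybind(void)
    {
        return rand() % NumOfAlgorithms;
    }

HYBSUBPOP would instead key the choice on the sub-population index, so that each sub-population is always served by the same algorithm.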
Numerical Experiments

In this section, the performance of the proposed hybrid algorithms is analyzed by solving 28 well-known unconstrained functions (see Table 1), the definitions of which can be found in [77]. The hybrid proposals, along with the original algorithms, were implemented and tested in the C language, compiled with GCC v.4.4.7 [79], and run on an Intel Xeon E5-2620 v2 processor at 2.1 GHz. The C implementations of the original algorithms are not available on the Internet; however, their Java/Matlab implementations are commonly available. The data collected from the experimental analysis are as follows:

• NoR-AI: the total number of replacements of any individual.
• NoR-BI: the total number of replacements of the current best individual.
• NoR-BwT: the total number of replacements of the current best individual with an error of less than 0.001.
• LtI-AI: the last iteration (iterator) in which a replacement of any individual occurs.
• LtI-BI: the last iteration (iterator) in which a replacement of the best individual occurs.

Three of the five analyzed quantities (NoR-) indicate the number of times the current individual (Pop_m) is replaced by a new individual (newPop_m) that provides a better fitness (see line 21 of Algorithm 6), while the remaining two (LtI-) refer to the last generation (iterator) in which at least one individual has been replaced. All data given below were obtained over 50 runs, 50,000 iterations (max_ITs = 50,000), and two population sizes (popSize = 140 and 210). The maximum values of the analyzed data are listed in Table 2.

Tables 3-5 show the data for all the considered algorithms independently, i.e., without hybridization. As expected, the behavior of the different algorithms does not follow a common pattern. In addition, it depends on the objective function. Regarding a global convergence analysis, both TLBO and CJaya behave better, but with a higher order of complexity (see [77,80]). Moreover, note that when using TLBO, two new individuals are generated in each iteration: one in the teacher phase and the other in the learner phase. The values in brackets in Tables 3-5 refer to the standard deviation of the data over 50 runs. Note that heuristic optimization algorithms are partially based on randomness, which leads to high values of the standard deviation. The average standard deviations are approximately equal to 16%, 22%, 15%, 30%, 23%, 23%, and 22% for Jaya, Chaotic Jaya, SCA, RAO1, RAO2, RAO3, and TLBO, respectively.

An important aspect, not shown in Tables 3-5, is whether the solution obtained by each algorithm is acceptable or not. In particular, the original algorithms fail to obtain a solution tolerance of less than 0.001 for 3, 8, 2, 4, 7, 5, and 2 functions for Jaya, CJaya, SCA, RAO1, RAO2, RAO3, and TLBO, respectively. Therefore, considering only the original algorithms, there is no algorithm whose behavior is always the best, which justifies the development of a generalist hybrid system that can solve a large number of benchmark functions and engineering problems. Comparing the quality of the solutions obtained by the proposed hybrid algorithms, it can be concluded that the HYBSUBPOP algorithm is the worst one because the same thoroughbred algorithm is always applied to the same sub-population, which degrades the algorithm's performance for a small population.
Contrary to HYBSUBPOP, the HYBPOP and HYBIND algorithms apply the selected algorithms to all individuals, which leads to hybridizations with better exploitation. The HYBSUBPOP algorithm fails to obtain a solution tolerance of less than 0.001 in 3 functions (F11, F23, and F27), and the HYBPOP and HYBIND algorithms fail in only one function (F27 and F11, respectively). If the population size is increased to 210, the HYBIND algorithm succeeds with all functions; thus, the HYBIND algorithm has a slightly better performance in comparison to HYBPOP. Local exploration has improved both in the HYBPOP method and especially in the HYBIND method, as stated above. Figures 1 and 2 show the convergence curves of all the individual methods and of the three proposed hybrid methods for the first 1000 and 100 iterations, respectively, for functions F1, F8, F11, and F18. Each point in both figures is the average of the data obtained from 10 runs. As shown in these figures, the curves of the three hybrid methods are similar to the curves of the best single algorithms for each function. Therefore, global exploitation, while not improving on all methods, behaves similarly to the best single methods for each function. It should be noted that the hybrid methods behave similarly to the best individual methods for each function, which are not always the same.

Table 6 sorts the algorithms according to the number of iterations required to obtain an error of less than 0.001; if an algorithm is missing from a row, it did not reach an acceptable solution. As seen from this table, no algorithm outperforms all other algorithms. Moreover, a computational cost analysis would be necessary to classify them correctly. Table 7 exhibits the computational cost of the different algorithms. This table reveals that the hybrid algorithms are mid-ranked in terms of computational cost, and HYBIND is computationally less expensive than HYBPOP.

An analysis of the contribution of each algorithm in the HYBPOP and HYBIND algorithms is exhibited in Tables 8-10. Table 8 indicates the number of times that an individual has been replaced in each algorithm. The replacement is accepted when the new individual improves the fitness of the current solution. As seen from Table 8, the HYBIND algorithm performs more replacements of individuals. In addition, the numbers of replacements per individual for the contributing algorithms are nearly equal, except for the RAO1 algorithm, whose contribution to replacements is limited. The standard deviations of the data (from 50 runs) are given in brackets. We found that, on average, the standard deviations for the HYBPOP and HYBIND algorithms are both equal to 14%. Table 9 shows the last iteration in which each optimization algorithm replaces an individual in the population, i.e., when it no longer brings improvement to the hybrid algorithm. As can be seen from Table 8, the optimization algorithms, except the RAO1 algorithm, work efficiently in the hybrid algorithms. It is also revealed that the considered algorithms contribute over more generations in the HYBIND algorithm. The mean value of the standard deviation rises to 28% and 23% for HYBPOP and HYBIND, respectively, due to randomness and the lower LtI-AI values. Finally, Table 10 shows the last iteration in which each algorithm obtains a new optimum. A careful analysis of the results in Table 10 reveals that in the HYBPOP algorithm, the seven algorithms contribute similarly to reaching a better solution as new populations are produced.
By contrast, when using the HYBIND algorithm, the most powerful algorithms are CJaya and TLBO. It should be noted that the CJaya algorithm extracts random individuals from the population to generate new individuals, while the TLBO algorithm collects all the individuals of the population to obtain new individuals. Therefore, these algorithms exploit the results obtained by the rest of the algorithms to converge towards the optimum. This is due to the nature of these algorithms, in which the best solution correctly guides the individuals. The mean value of the standard deviation is high because LtI-BI is strongly affected by randomness. It has been found that the HYBSUBPOP algorithm does not reach excellent optimization performance because of the lack of harmony between the original algorithms, so it was left without further analysis. On the other hand, the exploration behavior of the HYBPOP and HYBIND algorithms is similar, whereas the HYBIND algorithm outperforms the HYBPOP one in terms of exploitation. The hybridization of the original algorithms is implemented at the individual level in the HYBIND algorithm, contrary to the HYBPOP algorithm, in which the hybridization is performed at the population level. Finally, the HYBPOP algorithm included algorithms that update the population without analyzing the fitness of the associated solutions, while this restriction is mandatory in the HYBIND algorithm.

Conclusions

This paper proposed a hybridization strategy for seven well-known algorithms. Three hybrid algorithms free of setting parameters, dubbed the HYBSUBPOP, HYBPOP, and HYBIND algorithms, were designed. These algorithms are derived from a dynamic skeleton allowing the inclusion of any metaheuristic optimization algorithm that exhibits further improvements. The only requirement for merging a new optimization algorithm into the proposed skeleton is to know whether the replacement of an individual in that algorithm is based on the enhancement of the cost function or not. Moreover, both chaotic algorithms and multi-phase algorithms have been employed in designing the proposed hybrid algorithms, which demonstrates the versatility of the proposed hybridization skeleton. The experimental results show that the HYBPOP and HYBIND algorithms effectively exploit the capabilities of all the considered algorithms. They present an excellent ability to solve a large number of benchmark functions while improving the quality of the solutions obtained. Generally speaking, hybridization at the individual level is better than hybridization at the population level, which explains why the performance of the HYBSUBPOP algorithm is inferior to that of the other hybrid algorithms. As future lines of work, we intend to integrate more efficient algorithms into the proposed hybridization skeleton, to evaluate new versions of hybridization, and to extend the performance analysis of the potential algorithms to solving more complex functions and real-world engineering problems.
Engaging Private Health Care Providers to Identify Individuals with TB in Nepal

In Nepal, 47% of individuals who fell ill with TB were not reported to the National TB Program in 2018. Approximately 60% of persons with TB initially seek care in the private sector. From November 2018 to January 2020, we implemented an active case finding intervention in the Parsa and Dhanusha districts targeting private provider facilities. To evaluate the impact of the intervention, we report crude intervention results. We further compared case notifications during the implementation to baseline and control population (Bara and Siraha) notifications. We screened 203,332 individuals; 11,266 (5.5%) were identified as presumptive for TB and 8077 (71.7%) were tested for TB. Approximately 8% had a TB diagnosis, of whom 383 (56.2%) were bacteriologically confirmed (Bac+). In total, 653 (95.7%) individuals were initiated on treatment at DOTS facilities. For the intervention districts, there was a 17% increase for bacteriologically positive TB and a 10% increase for all forms TB compared to baseline. In comparison, the changes in notifications in the control population were 4% for bacteriologically positive and −2% for all forms TB. Through engagement of private sector facilities, our intervention was able to increase the number of individuals identified with TB by over 10% in the Parsa and Dhanusha districts.

Introduction

Annually, tuberculosis (TB) affects over 10 million individuals worldwide [1]. Nepal, like its two neighboring countries, India and China, is plagued with a high TB burden. Based on the National TB Prevalence Survey in 2018, there are about 117,000 people living with TB in Nepal, resulting in a reported prevalence rate of 416 per 100,000 population, 1.8 times higher than previously estimated [2]. In addition to underestimation of the TB burden in the country, Nepal also faces challenges with underreporting of TB cases. In 2018, there were 69,000 individuals who fell ill with TB in Nepal; however, only 32,474 (47%) were reported to the National TB Program [3]. This may be due to a variety of reasons, including extended and costly travel to health facilities, poor TB knowledge, and initial care-seeking in the private sector [4]. To address underreporting from the private sector, a key recommendation from Nepal's 2018 National Prevalence Survey is the establishment of a mandatory TB notification system in the private sector [2]. In Nepal, the private sector finances approximately 65% of health care [5]. Patient pathway analyses have shown that approximately 60% of persons with TB initially seek care in the private sector upon developing symptoms [6]. Previous studies have noted that in urban settings of Nepal, 50% of individuals with TB were poorly managed by the private sector, where staff are often not adequately trained in TB management and care [5,7]. Further, a study conducted among private providers in Nepal noted that only 27% of private providers maintained a complete record of the individuals with TB whom they diagnosed and/or treated [5]. These findings underscore the importance of private sector engagement in controlling the TB epidemic in Nepal. In efforts to improve TB screening for those accessing care in the private sector, Sahayog Samittee Nepal (SS Nepal), a non-profit organization in Nepal aiming to ensure the right to health care for all individuals, designed and implemented a private sector engagement intervention.
This intervention, supported by a TB REACH grant, aimed to intensify TB case finding in private healthcare provider facilities, including private physicians and pharmacies. This evaluation presents the crude results of our intervention and the overall additional TB notifications following the implementation period.

Setting

From November 2018 to January 2020, we developed and implemented a private sector active case finding (ACF) intervention in two districts, Parsa and Dhanusha, henceforth the evaluation population (EP). Two other districts, Bara and Siraha, with similar socio-demographics, population size, and economic indicators, were selected as control districts, henceforth the control population (CP). This was done to help evaluate the results of the intervention by comparing CP and EP notification trends (see Figure 1).

Dhanusha and Parsa are two border districts located in Province 2 in southeastern Nepal. Province 2 borders India and has an all forms TB case notification rate (CNR) of 92 per 100,000 population [8]. Despite being the province with the fourth highest CNR in the country, the private sector only contributes 17% of case finding in the province, which is the second lowest contribution rate, only preceded by Sudurpaschim (12%) [8]. In Dhanusha, the intervention was implemented in two municipalities: Janakpur (also known as Janakpurdham) and Sabaila, both located southeast of Kathmandu. Janakpur is a sub-metropolitan city in Dhanusha district and has a population of approximately 173,924, which is nearly 25% of the district's total population. Sabaila has a population of 24,893, which represents 3% of the district's total population. In Parsa, the intervention was implemented in Birgunj and Pokhariya, located south of Kathmandu. Birgunj is a metropolitan city with a population of 240,922, representing 35% of the district's total population.
Pokhariya is a municipality with a population of 32,885, which is nearly 5% of the district's total population.

Intervention

Prior to implementing the intervention, we mapped and engaged private provider facilities in the Parsa and Dhanusha districts to establish "cough screening desks" (CSDs). Private health facility staff, hereafter health volunteers (HVs), were placed at CSDs to screen individuals seeking care for TB (Figure 2). HVs approached all attendants of private provider facilities for TB screening. Ultimately, only consenting individuals were screened using a paper-based screening questionnaire, which included questions on TB symptoms (cough ≥ 2 weeks, fever, night sweats, loss of appetite, weight loss, and/or presence of blood in sputum), previous history of TB, and contact with persons with TB. An individual was identified as presumptive for TB if they had one or more TB-like symptoms, a previous history of TB, and/or had recently been in contact with a person with confirmed TB. Consenting individuals identified as presumptive for TB were asked to provide one sputum sample if Xpert testing was available, or two sputum samples in the case of microscopy testing. One sputum sample was taken on the spot, and another was taken one hour later. If sputum could not be provided by the individual identified as presumptive, they were referred for clinical examination and chest X-ray (CXR) by a physician. Health mobilizers (HMs), recruited by SS Nepal staff, delivered the sputum samples from each CSD to NTP laboratories by motorbike for sputum smear or Xpert evaluation (depending on availability). Individuals confirmed for TB, either clinically or bacteriologically, were referred to public Directly Observed Treatment Short-Course (DOTS) facilities for treatment initiation and TB notification. Each private provider received a performance-based incentive of 200 Nepalese rupees (NPR), approximately 1.70 USD, per individual confirmed with TB and 10 NPR (0.08 USD) for each sputum sample collected. Each HV received 1.5 NPR per individual screened at the CSD. The laboratory personnel at the NTP laboratories received 10 NPR (0.08 USD) per sputum slide examined. HMs received a monthly salary and an allowance for their motorbike fuel.

Data Collection, Analysis, and Evaluation

The methodology for evaluating the intervention followed the established TB REACH monitoring and evaluation framework [9]. To assess the impact of the intervention, the framework compares TB notifications from the EP during the timeframe of the project to (1) historic notifications in the intervention districts prior to project implementation and (2) notifications from the CP, where the intervention was not implemented. As part of the evaluation, we established a set of indicators that were collected from the participating health facilities in the intervention districts. Indicators, disaggregated by district, included the number of individuals screened, tested for TB, bacteriologically confirmed, clinically confirmed, initiated on treatment, and successfully treated. Data from each of the participating private health care facilities in the intervention districts were collected using paper-based screening forms that were filled in by HVs at the CSD. Each HV was also responsible for entering all patient information, testing results, and referrals into a presumptive TB register at the CSD. Every month, HVs tabulated the data from the presumptive TB register to send to the Project Coordinator via mobile phone.
During monthly meetings, HVs also brought and submitted the paper version of the tabulated indicators to the Project Coordinator. The Project Coordinator digitized the aggregate indicators into an Excel sheet and sent them for approval to the District Health Officers (authorized staff of the NTCC). Aggregate data from all CSDs were entered and tabulated in Excel 2016. Additional analyses were undertaken using RStudio. TB case notifications (bacteriologically confirmed and clinically diagnosed) were collected from the NTP registers for the previous three years for both the EP and CP. To achieve this, SS Nepal was granted credentials to access the NTP District Health Information Software (DHIS2), which contains TB notification data for all districts. Changes in TB notifications for both the intervention and control districts were calculated from the difference between the historical and intervention periods, and the trend was calculated using simple regression analysis. A test of proportions was used to assess whether the additionality (representing the change in case notifications) in the EP was significantly different from that in the CP.

Results

In total, we mapped 115 private health facilities in the Parsa and Dhanusha districts, of which 63 (55%) were engaged by the intervention. There were 27 physicians, 30 pharmacies/auxiliary health workers (AHWs), and 6 laboratories offering outpatient department (OPD) services. We engaged 63 HVs and 4 HMs. From November 2018 to January 2020, we screened 203,332 individuals for TB, of whom 109,874 (53.5%) were male and 93,458 (46.5%) were female (see Table 1). No refusals for screening were recorded. Among these individuals, 11,266 (5.5%) were identified as presumptive for TB, of whom 8077 (71.7%) were tested for TB. The main reasons for not being bacteriologically tested for TB were loss to follow-up and inability to produce sputum. Individuals who received a clinical diagnosis or who had extrapulmonary TB were included in the all forms (AF) category, as well as those who had a diagnosis via sputum smear or Xpert for pulmonary TB. The bacteriologically confirmed (Bac+) category is limited to those who received a sputum smear or Xpert diagnosis. Approximately 8% (682) were confirmed for TB, of whom 383 (56.2%) were Bac+. Among those who were confirmed for TB, 431 (63.5%) were male and 251 (36.5%) were female. In total, 653 (95.7%) individuals were initiated on treatment at DOTS facilities and 540 (82.7%) completed treatment. As shown in Table 1, throughout the intervention more individuals were screened, tested, and treated in Dhanusha compared to Parsa. Further, more males (431, 63.5%) were diagnosed with TB in comparison to females (251, 36.5%). Overall, the intervention resulted in an 8% increase in the number of individuals identified with TB in Parsa and a 13% increase in Dhanusha compared to baseline (Table 2). Table 2 further compares the change in notifications between the baseline period of 16 November 2016 to 15 November 2018 (ending immediately prior to initiation of the intervention) and the implementation period (16 November 2018 to 15 January 2020). In the EP, there was a 17% increase for Bac+ and a 10% increase for AF compared to baseline. In comparison, the changes in notifications in the CP were 4% for Bac+ and −2% for AF. Test of proportions results demonstrate that the changes observed for Dhanusha and Parsa in both Bac+ and AF case notifications, compared to the observed changes in the CP, are significant (p < 0.01).
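For reference, the following is a minimal C sketch of a pooled two-proportion z-test of the kind referred to above; the exact formulation used in the analysis is not specified in the text, so the pooling and the variable names here are assumptions:

    /* Minimal sketch of a pooled two-proportion z-test; it is assumed
       (not stated in the text) that the test of proportions took this
       standard form. x1/n1 and x2/n2 are the proportions being compared
       in the EP and CP, respectively. */
    #include <math.h>

    double two_proportion_z(double x1, double n1, double x2, double n2)
    {
        double p1 = x1 / n1, p2 = x2 / n2;
        double pooled = (x1 + x2) / (n1 + n2);   /* pooled proportion */
        double se = sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2));
        return (p1 - p2) / se;   /* |z| > 2.58 corresponds to p < 0.01 (two-sided) */
    }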
Discussion

Our project was able to engage with over half of the private sector facilities that we mapped in our intervention districts. Through our engagement of private sector facilities, we were able to increase the number of individuals identified with TB by 10% in the Parsa and Dhanusha districts compared to baseline, which demonstrates the need for interventions engaging the private sector in Nepal. Our results highlight the fact that TB affects males more than females. While the proportions of males and females screened and tested were approximately the same, more males (63.5%) were diagnosed with TB in comparison to females (36.5%). This is reflected in other literature indicating that men represent 57% of the people who develop TB, in comparison to women, who represent 32% [10]. This may be due to risk-exposing occupations, care-seeking behaviors, or biological differences [10][11][12].

Of the 8077 microbiological tests conducted (i.e., Xpert MTB/RIF or sputum smear microscopy), only 383 (5.0%) were bacteriologically confirmed. In a study conducted by Nepal et al., 32 (6.8%) individuals were confirmed as bacteriologically positive for TB among 468 microbiological tests conducted [5]. This suggests that our bacteriological positivity rate was lower than expected. This may indicate poor quality of sputum sample production and/or examination. To improve sputum quality, additional education on how to produce effective sputum samples should be provided to individuals with presumptive TB [13]. Further, we found that while Bac+ notifications remained slightly lower in the EP (1472 versus 1524) post-implementation, AF notifications were higher in the EP (2670) than in the CP (2494). Although there could be various reasons for this, it is possible that providing the opportunity for CXR screening to individuals who were symptomatic but could not provide a sputum sample contributed to all forms TB detection. According to Nepal's recent TB prevalence survey, 70% of individuals identified with TB did not have TB-like symptoms and were only identified by CXR [3]. Although the current intervention did not focus on CXR referral, SS Nepal scaled up its private sector engagement intervention in January 2020 and has placed a bigger emphasis on CXRs. SS Nepal is currently collecting data on CXR referrals and outcomes.

Further, our intervention results show higher numbers of individuals screened, tested, diagnosed, and subsequently treated in Dhanusha district in comparison to Parsa district. We found that more private providers agreed to participate in the intervention in Dhanusha; thus, more CSDs were implemented, which led to more individuals being screened. Further, a higher proportion of the Bac+ individuals was found in Dhanusha (66.3%) than in Parsa (33.7%). One of the reasons for this could be that in Parsa there is only one GeneXpert machine, whereas more are present in Dhanusha; thus, more individuals were tested using GeneXpert in Dhanusha, while in Parsa more individuals were tested with sputum smear. Specifically, at the time of the implementation, there were four Xpert machines in Dhanusha: one at the Janakpur District Health Office laboratory, one at Yadukoha Primary Health Care Centre (PHC), one at the PHC of Sabaila Municipality, and one at Dhalkebar Health Post. In Parsa, the only Xpert machine was located at Narayani central hospital. This underscores the importance of increasing accessibility to GeneXpert machines in Nepal to increase TB case detection.
Lessons Learnt

As this intervention was implemented as a proof of concept, the accumulated lessons learnt are important to highlight. First, certain providers were situated far away from NTP laboratories, rendering sputum transport by HMs much more difficult. In these cases, the project team recommended that individuals identified as presumptive at those locations be referred to nearby NTP laboratories for sputum collection by NTP staff. At the NTP laboratories, there were instances of supply chain issues causing shortages of reagents for microscopy testing and/or a lack of Xpert cartridges. These issues were addressed through careful coordination and collaboration with the NTP, as well as by ensuring communication with other NTP laboratories that could be used as backup. Alternative NTP laboratories were also used when GeneXpert machines were out of service. This highlights the importance of ensuring robust laboratory networks, as well as strong engagement of local NTP staff to enable quick action when such challenges arise. Further, there was unexpected staff turnover among the HVs, which was addressed through continuous re-orientation and training of staff. There was also some reluctance among individuals with TB symptoms to provide sputum, since it was not prescribed by their physician. Such concerns were allayed through education and counselling from the HVs. Lastly, difficulties in ensuring treatment enrollment for individuals who did not have access to a phone or who lived outside the intervention districts were resolved through close communication with the DOTS center staff.

Limitations

While our evaluation has highlighted the strengths of our intervention, there were some limitations. First, we did not document the experiences of the private providers who engaged with our intervention. Our intervention showcases the successes of engaging private providers; however, to enable successful planning of future interventions, we require knowledge of the experiences of private providers to ensure their requests are integrated into future approaches. Secondly, our intervention only engaged private provider facilities/clinics and did not engage public sector clinics. For this reason, the total number of individuals identified with TB in these two districts might have been higher if screening had been carried out in public facilities as well. Nevertheless, we aimed to engage private providers due to previous findings indicating a high prevalence of initial care seeking in the private sector [6]. Further, we only engaged with two municipalities within each district. Since the TB REACH grant received was aimed at proving the concept of the approach employed, only a limited number of districts were involved. However, given the success of this intervention in increasing TB case finding in Dhanusha's and Parsa's private sectors, SS Nepal has received a second TB REACH grant to scale up the intervention to three other districts. Another limitation is that the project was not able to account for individuals visiting the CSDs who lived outside the EP; thus, certain individuals may have been found presumptive at the CSDs in the EP but notified as cases in the CP, which may have diluted the yield of the intervention. It was also not possible to distinguish notifications from the public and private sectors for the intervention; thus, we could only report on case notifications integrating both sectors.
Future studies should consider disaggregating notifications by public and private sector to enable evaluation of the intervention's effect on private sector notifications. Additionally, the CP had higher notifications than the EP despite similar population and sociodemographic characteristics. We believe that this could be due to ongoing interventions from another organization providing TB services in many districts including Bara and Siraha (CP), where there was ongoing Global Fund support to increase ACF in government facilities. At the time of implementation, there was also an ongoing intervention, IMPACT TB, which aimed to increase TB case detection in four districts in Nepal, including Dhanusha [14]. This intervention, implemented by the Birat Nepal Medical Trust, could account for part of the increase in case notifications in this district and could partially explain the higher increase in case notifications seen in Dhanusha compared to Parsa (13% versus 8%). The two interventions implemented in the same period may have had a synergistic effect, so the increase in notifications cannot be solely attributed to the SS Nepal intervention.

Conclusions

Our evaluation showcases the impact of engaging private providers in TB screening and diagnosis. Through the presence of CSDs directly at the private provider clinics, we were able to screen a substantial number of individuals. Private providers are often the first point of contact for many individuals seeking care, and integrating TB screening into their facilities was shown to increase TB case detection and notification in two urban districts of Nepal; our intervention demonstrated a 10% increase in TB case notifications. To further increase the level of involvement of private providers, qualitative studies examining their experiences with active case finding interventions are required. These will enable implementors to provide holistic interventions that are not only beneficial to the individuals with TB but also facilitate engagement with private providers. Further, similar interventions should be piloted and evaluated in the country, specifically in rural areas of Nepal where populations have limited access to health services. Proof-of-concept interventions such as this one also present important opportunities to compile lessons learnt and to share them with the TB community, providing recommendations to improve and strengthen future ACF implementation.
Monte Carlo studies of dynamical compactification of extra dimensions in a model of nonperturbative string theory

The IIB matrix model has been proposed as a non-perturbative definition of superstring theory. In this work, we study the Euclidean version of this model, in which extra dimensions can be dynamically compactified if a scenario of spontaneous breaking of the SO(10) rotational symmetry is realized. Monte Carlo calculations of the Euclidean IIB matrix model suffer from a very strong complex action problem due to the large fluctuations of the complex phase of the Pfaffian, which appears after integrating out the fermions. We employ the factorization method in order to achieve effective sampling. We report on preliminary results that can be compared with previous studies of the rotational symmetry breakdown using the Gaussian expansion method.

Introduction

Large-N reduced models have been proposed as the non-perturbative definition of superstring theory. In particular, the IIB matrix model [1] is one of the most successful proposals. The IIB matrix model is formally obtained by the dimensional reduction of ten-dimensional N = 1 super Yang-Mills theory to zero dimensions. In the IIB matrix model, spacetime is dynamically generated from the degrees of freedom of the bosonic matrices, despite the fact that it does not exist a priori in the model. Superstring theory is well-defined only in ten-dimensional spacetime, and it is an important question how our four-dimensional spacetime dynamically emerges. Monte Carlo studies of the IIB matrix model have the possibility to shed light on this question from a first-principles calculation.

The Euclidean version of the IIB matrix model is obtained after a Wick rotation of the temporal direction. It has a manifest SO(10) rotational symmetry which, if spontaneously broken, yields a spacetime compactified to lower dimensions. However, its numerical simulation has been hindered by the "complex action problem", because the Pfaffian obtained after integrating out the fermions is complex in general. Apart from the matrix models of superstring theory, there are many interesting systems that are plagued by the complex action problem. Lattice gauge theories with a non-zero chemical potential are the ones that have attracted most of the attention in this context. In this work, we apply the "factorization method", which was originally proposed in ref. [2] and generalized in ref. [3], to the Monte Carlo studies of the Euclidean version of the IIB matrix model. The IIB matrix model has also been studied analytically by the Gaussian Expansion Method (GEM) [4,5]. Preliminary results of our Monte Carlo simulation are consistent with the GEM results and provide evidence that the factorization method is a successful approach to studying interesting systems that suffer from the complex action problem.

Factorization method

Generally, it is difficult to numerically simulate a complex action system
$$Z = \int dA \, e^{-S_0 + i\Gamma} .$$
Since $e^{-S_0 + i\Gamma}$ is not real positive, we cannot view it as a sampling probability in the Monte Carlo simulation. One way to calculate the vacuum expectation value (VEV) of an observable $O$ is to use the reweighting
$$\langle O \rangle = \frac{\langle O \, e^{i\Gamma} \rangle_0}{\langle e^{i\Gamma} \rangle_0} .$$
Here, $\langle \cdots \rangle$ and $\langle \cdots \rangle_0$ are the VEV's for the original partition function $Z$ and the phase-quenched partition function $Z_0 = \int dA \, e^{-S_0}$, respectively. This is not an easy task, since the phase $\Gamma$ may fluctuate wildly. In order to compute $\langle O \rangle$ with given accuracy, one needs $O(e^{\mathrm{const.}\,V})$ configurations, where $V$ is the system size.
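To illustrate this scaling concretely, the following toy sketch (ours, not from the paper) estimates $\langle e^{i\Gamma} \rangle_0$ by naive reweighting for a Gaussian phase whose variance is assumed to grow linearly with the system size $V$. The exact value shrinks exponentially while the statistical error stays roughly constant, so the relative error explodes:

```python
# Toy illustration (not from the paper) of the complex action problem:
# estimate <e^{i Gamma}>_0 by naive reweighting when the phase Gamma
# fluctuates with a width that grows with the system size V.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

for V in [1, 4, 16, 64]:
    sigma = np.sqrt(V)              # assume Var(Gamma) ~ V (generic for extensive phases)
    gamma = rng.normal(0.0, sigma, n_samples)
    est = np.mean(np.cos(gamma))    # <e^{i Gamma}>_0 is real here by Gamma -> -Gamma symmetry
    exact = np.exp(-sigma**2 / 2)   # exact value for a Gaussian phase
    stat_err = np.std(np.cos(gamma)) / np.sqrt(n_samples)
    print(f"V={V:3d}  estimate={est:+.3e}  exact={exact:.3e}  "
          f"relative error ~ {stat_err / exact:.1e}")
```

Already at V = 64 the signal is buried many orders of magnitude below the statistical noise, which is why an importance sampling adapted to the full weight, as the factorization method aims to achieve, is essential.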
This is called the "sign problem" or the "complex action problem". Yet another problem is that the important configurations are different for different partition functions. This is called the "overlap problem". We are plagued with this overlap problem in trying to obtain the VEV O through the simulation of the phase-quenched partition Z 0 . The factorization method was proposed in order to reduce the overlap problem and achieve an importance sampling for the original partition function Z [2,3]. We select the set of the observables Σ = {O k |k = 1, 2, · · · , n}, (2.2) which are strongly correlated with the phase Γ. In the following, we define the normalized observ- We employ the factorization property of the density of states ρ(x 1 , · · · , x n ): 3) The constant C = e iΓ 0 is irrelevant in the following. ρ (0) (x 1 , · · · , x n ) = ∏ n k=1 δ (x k −Õ k ) 0 is the density of states in the phase-quenched model. w(x 1 , · · · x n ) = e iΓ x is the VEV in the constrained system (2.4) When the system size V goes to infinity, the VEV's are given by Õ k =x k , where (x 1 , · · · ,x n ) is the position of the peak of ρ(x 1 , · · · , x n ). This can be obtained by solving the saddle-point equation When we properly choose the maximal set of the observables Σ, we achieve effective importance sampling for the original partition function Z [3]. Euclidean version of the IIB matrix model We study the IIB matrix model [1], which is defined by the following partition function: where the bosonic part S b and the fermionic part S f are respectively 2) The bosons A µ (µ = 1, 2, · · · , 10) and the Majorana-Weyl spinors ψ α (α = 1, 2, · · · , 16) are N × N traceless hermitian matrices. In the following, without loss of generality we set g 2 N = 1. The indices are contracted by the Euclidean metric after the Wick rotation. Γ µ are the 16 × 16 Gamma matrices after the Weyl projection, and C is the charge conjugation matrix. This model has the SO(10) rotational symmetry. In ref. [6], it is shown that the partition function is positive definite without cutoffs. This model is formally obtained by the dimensional reduction of ten-dimensional N = 1 super Yang-Mills theory to zero dimensions. The IIB matrix model has the N = 2 supersymmetry ε ψ = ε. (3.4) For the linear combinationδ This leads to the interpretation of the eigenvalues of the bosonic matrices A µ as the spacetime coordinates. Hence, the spontaneous symmetry breakdown (SSB) of the SO(10) rotational symmetry is identified with the dynamical compactification of the extra dimensions. The order parameters of the SSB of the SO(10) rotational symmetry are the eigenvalues λ n (n = 1, 2, · · · , 10) of the "moment of inertia tensor" which are ordered as λ 1 > λ 2 > · · · > λ 10 before taking the expectation value. If λ 1 , · · · , λ d grow and λ d+1 , · · · , λ 10 shrink in the large-N limit, this suggests the SSB of the SO(10) rotational symmetry to SO(d) and hence the dynamical compactification of ten-dimensional spacetime to d dimensions. This scenario has been studied via GEM in ref. [5]. The results of the studies of the SO(d) symmetric vacua for 2 ≤ d ≤ 7 are summarized as follows: 1. The extent of the shrunken directions r = lim N→∞ √ λ n (n = d + 1, · · · , 10) is r 2 ≃ 0.155, which does not depend on d (universal compactification scale). 2. The ten-dimensional volume of the Euclidean spacetime does not depend on d except d = 2 (constant volume property). 
For the extent of the extended directions $R = \lim_{N\to\infty} \sqrt{\langle \lambda_n \rangle}$ ($n = 1, 2, \cdots, d$), the volume is $V = R^d r^{10-d} = l^{10}$, with $l^2 \simeq 0.383$.

3. The free energy takes the minimum value at d = 3, which suggests the dynamical emergence of three-dimensional spacetime.

In ref. [4], the six-dimensional version of the Euclidean IIB matrix model was studied via GEM, and the six-dimensional version also turns out to have these three properties. The same model was studied numerically in ref. [7], and the results are consistent with the GEM results.

Next, we review the mechanism of the dynamical compactification of spacetime in the Euclidean IIB matrix model [8]. Integrating out the fermions, we have
$$Z = \int dA \, \mathrm{Pf}\mathcal{M}(A) \, e^{-S_b}, \tag{3.7}$$
where $\mathcal{M}_{a\alpha, b\beta} = -i f_{abc} (C\Gamma_\mu)_{\alpha\beta} A^c_\mu$ is a $16(N^2-1) \times 16(N^2-1)$ anti-symmetric matrix. The indices $a, b, c$ run over $1, 2, \cdots, N^2-1$, and $f_{abc}$ are the structure constants of SU(N). $A^c_\mu$ are the coefficients in the expansion $A_\mu = \sum_{c=1}^{N^2-1} A^c_\mu T^c$ with respect to the SU(N) generators $T^c$. Under the transformation $A_{10} \to -A_{10}$, $\mathrm{Pf}\mathcal{M}$ becomes complex conjugate. We define the phase of the Pfaffian $\Gamma$ as $\mathrm{Pf}\mathcal{M} = |\mathrm{Pf}\mathcal{M}|\, e^{i\Gamma}$. $\mathrm{Pf}\mathcal{M}$ is real for the nine-dimensional configuration $A_{10} = 0$. When the configuration is d-dimensional ($3 \le d < 9$), we find
$$\frac{\partial^m \Gamma}{\partial A^{a_1}_{\mu_1} \cdots \partial A^{a_m}_{\mu_m}} = 0 \quad \text{for } m = 1, 2, \cdots, 9-d,$$
because the configuration is at most nine-dimensional up to the $(9-d)$-th order of the perturbations. Thus, the phase of $\mathrm{Pf}\mathcal{M}$ becomes more stationary for the lower dimensions. The numerical results in ref. [9] also suggest that there is no SSB of the rotational symmetry in the phase-quenched model. We calculate $\langle \lambda_n \rangle_0$ numerically, where $\langle \cdots \rangle_0$ is the VEV with respect to the phase-quenched partition function
$$Z_0 = \int dA \, |\mathrm{Pf}\mathcal{M}| \, e^{-S_b}. \tag{3.8}$$
We use the Rational Hybrid Monte Carlo (RHMC) algorithm, whose details are presented in Appendix A of ref. [7]. The result in fig. 1 shows that $\langle \lambda_n \rangle_0$ converge to $l^2 \simeq 0.383$ at large N for all $n = 1, 2, \cdots, 10$. This suggests that there is no SSB of the SO(10) rotational symmetry, and that the result is consistent with the constant volume property.

Results

The model (3.1) suffers from a strong complex action problem, and we apply the factorization method to this system. It turns out to be sufficient to constrain only one eigenvalue $\lambda_n$; namely, the choice of the set $\Sigma$ in eq. (2.2) should be $\Sigma = \{\lambda_n\}$. This is because the larger eigenvalues do not affect much the fluctuation of the phase. This choice of $\Sigma$ is similar to that of the six-dimensional version of the IIB matrix model [7]. When we constrain $\lambda_n$, the eigenvalues $\lambda_n, \lambda_{n+1}, \cdots, \lambda_{10}$ take small values, which corresponds to the SO(d) symmetric vacuum with $n = d+1$. This leads us to simulate the partition function of the constrained system
$$Z_{n,x} = \int dA \, |\mathrm{Pf}\mathcal{M}| \, e^{-S_b} \, \delta(x - \tilde{\lambda}_n), \tag{4.1}$$
which is simulated via the RHMC algorithm. The ratio $\tilde{\lambda}_n = \lambda_n / \langle \lambda_n \rangle_0$ corresponds to the square of the ratio of the extents of the extended and shrunken directions, $(r/l)^2$, in the SO(d) vacua with $n = d+1$. The saddle-point equation (2.5) is now simplified as
$$\frac{1}{N^2} f^{(0)}_n(x) = -\frac{d}{dx} \Phi_n(x), \qquad f^{(0)}_n(x) \equiv \frac{d}{dx} \log \rho^{(0)}_n(x), \tag{4.2}$$
in the large-N limit, where $w_n(x) = \langle e^{i\Gamma} \rangle_{n,x}$ and $\langle \cdots \rangle_{n,x}$ is the VEV of the partition function $Z_{n,x}$. We have $\langle e^{i\Gamma} \rangle_{n,x} = \langle \cos\Gamma \rangle_{n,x}$, because under the transformation $A_{10} \to -A_{10}$ the Pfaffian $\mathrm{Pf}\mathcal{M}$ becomes complex conjugate while the bosonic action (3.2) and the eigenvalues of the tensor (3.6) are invariant. The solution $\bar{x}_n$ of the saddle-point equation (4.2) gives the VEV $\langle \tilde{\lambda}_n \rangle = \bar{x}_n$ in the SO(d) vacuum with $n = d+1$. Solving this saddle-point equation amounts to finding the minimum of the free energy in the SO(d) vacuum with $n = d+1$.
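To make the order parameters concrete, the following sketch (illustrative only; random traceless Hermitian matrices stand in for thermalised RHMC configurations, so only the mechanics, not the physics, is shown) builds the "moment of inertia tensor" $T_{\mu\nu} = \frac{1}{N}\mathrm{Tr}(A_\mu A_\nu)$ of eq. (3.6) and extracts its ordered eigenvalues:

```python
# Minimal sketch of the order parameter computation for the SSB of SO(10):
# T_{mu nu} = (1/N) Tr(A_mu A_nu), diagonalised to get lambda_1 >= ... >= lambda_10.
import numpy as np

def random_traceless_hermitian(N, rng):
    X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (X + X.conj().T) / 2
    return H - np.trace(H) / N * np.eye(N)   # remove the trace part

def moment_of_inertia_eigenvalues(A):
    D = len(A)                                # number of directions (10 here)
    N = A[0].shape[0]
    T = np.empty((D, D))
    for mu in range(D):
        for nu in range(D):
            # Tr(A_mu A_nu) is real for Hermitian matrices; np.real fixes the dtype.
            T[mu, nu] = np.real(np.trace(A[mu] @ A[nu])) / N
    return np.sort(np.linalg.eigvalsh(T))[::-1]   # ordered lambda_1 >= ... >= lambda_D

rng = np.random.default_rng(1)
N = 16
A = [random_traceless_hermitian(N, rng) for _ in range(10)]
print(moment_of_inertia_eigenvalues(A))
```

In an actual simulation, these eigenvalues would be averaged over configurations generated with respect to the phase-quenched or constrained weight.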
The GEM result suggests that the free energy takes the minimum for the SO(3) vacuum. In order to reduce the CPU costs, we focus on the n = 3, 4, 5 cases, which correspond to the SO(2), SO(3) and SO(4) vacua, respectively.

In fig. 2 (LEFT) we plot $\log w_n(x)$ for n = 4 up to N = 16, where we observe a good scaling behavior at small x:
$$\frac{1}{N^2} \log w_n(x) \simeq -a_n x^{11-n} - b_n. \tag{4.5}$$
The coefficients $a_n$ and $b_n$ are obtained for each N by fitting the data. Then, we extrapolate the coefficients $a_n$, $b_n$ and obtain the large-N limit, which corresponds to $\Phi_n(x) = \lim_{N\to+\infty} \frac{1}{N^2} \log w_n(x)$. This is represented by the solid line in fig. 2 (LEFT).

The function $f^{(0)}_n(x)$ has a scaling behavior around 0. Subtracting this effect in order to reduce finite-N effects, we plot
$$\frac{1}{N^2} \left\{ f^{(0)}_n(x) - f^{(0)}_n(0) \right\} \tag{4.6}$$
in fig. 2 (RIGHT). We find that the results scale reasonably well up to N = 24 in the small-x region $x \le 0.4$. This implies the hard-core potential structure at small x. In the six-dimensional version of the IIB matrix model, this effect is absent in the one-loop approximation [2], but is observed in the full model without one-loop approximation [7]. The intersection of (4.6) with $-\frac{d}{dx}\Phi_n(x)$ represents the solution of the saddle-point equation (4.2). Fig. 2 (RIGHT) shows that the solution $\bar{x}_n$ is close to $\frac{r^2}{l^2} \simeq \frac{0.155}{0.383} = 0.404\cdots$ for n = 4. For n = 3, 5, too, we have obtained similar results, and the solution $\bar{x}_n$ is close to 0.404. This is consistent with the "universal compactification scale" property.

Next, we compare the free energy (4.4) for the SO(d) vacuum. Up to an irrelevant constant, the free energy at $x = \bar{x}_n$ is
$$\mathcal{F}_{\mathrm{SO}(d)} = -\frac{1}{N^2} \log \rho^{(0)}_n(\bar{x}_n) - \frac{1}{N^2} \log w_n(\bar{x}_n), \tag{4.7}$$
with $n = d+1$. Due to the scaling behavior (4.6), the first term of the r.h.s. of eq. (4.7) vanishes at large N. Thus we compare $\frac{1}{N^2} \log w_n(\bar{x}_n)$. From fig. 3, we see that the free energy $\mathcal{F}_{\mathrm{SO}(2)}$ is much higher than $\mathcal{F}_{\mathrm{SO}(3)}$ and $\mathcal{F}_{\mathrm{SO}(4)}$ around $x \simeq 0.4$. It is still difficult to determine whether the SO(3) or the SO(4) vacuum is energetically favored. More analysis will be reported elsewhere.

Conclusion

In this work, we have performed Monte Carlo simulations of the Euclidean version of the IIB matrix model using the factorization method, in order to study the dynamical compactification of the extra dimensions. The results turn out to be consistent with the GEM predictions. We have seen that in the phase-quenched model there is no SSB of the SO(10) rotational symmetry, and that the volume of spacetime is consistent with the GEM results. The function $f^{(0)}_n(x)$ has a hard-core potential structure, and as a result of that, the computed shrunken dimensions are found to be consistent with the GEM results. Also, we have succeeded in finding that the SO(2) vacuum is energetically disfavored, compared to the SO(3) or SO(4) vacuum. The results of the Lorentzian version of the IIB matrix model, where (3+1)-dimensional spacetime is found to expand dynamically [10], and the scenario discussed in this work, suggest that the physical interpretation of the Euclidean IIB matrix model needs to be further investigated.
The Discourse of Historicity in George Orwell's 1984

The issues of historicity, the Party's control over memory and history, and the effect of Newspeak on historical consciousness are all covered in this in-depth analysis of George Orwell's dystopian classic 1984. Drawing on the theories of New Historicism, including authors like Michel Foucault, Stephen Greenblatt, Catherine Gallagher, Hayden White, and Louis Montrose, this study explores the ways in which the Party manipulates historical records, the importance of comprehending historicity, and the ways in which language and memory are shaped and controlled within the novel's totalitarian society. By carefully examining these ideas, this analysis illuminates the complex relationship between language, memory, power, and historical interpretation, underscoring the perils of authoritarianism and the necessity of preserving a range of viewpoints and critical thinking.

I. INTRODUCTION

English novelist, essayist, and critic George Orwell was born in 1903 as Eric Arthur Blair (Crick, 2004). He is well known for his enlightening and dystopian writings, with 1984 being one of his most significant works. First published in 1949, 1984 depicts a totalitarian society ruled by the repressive Party and presided over by the mysterious Big Brother (Orwell, 1949). The book examines issues such as censorship, governmental surveillance, psychological manipulation, and the loss of personal freedom. The themes and motifs in 1984 were greatly influenced by Orwell's own encounters with and views of totalitarian governments, particularly those of Nazi Germany and the Soviet Union. Through his writing, Orwell wanted to warn people about the perils of dictatorship and the potential repercussions of unfettered state power (Hitchens, 2002).

Because 1984 explores how historical narratives may be manipulated and controlled in a totalitarian state, understanding the discourse of historicity in this novel is of utmost importance. In order to preserve its dominance and control over the people, the ruling Party in the novel's dystopian future modifies and destroys historical documents. By analyzing the discourse of historicity in 1984, we learn about the power relationships between the state and its people, the effect of propaganda and false information on shaping collective memory, and the potential risks of a society lacking true historical knowledge. The way Orwell depicts historical revisionism serves as a caution against the rewriting and distorting of history for political ends. It pushes readers to consider how crucial it is to maintain independent thought in the face of totalitarian governments. Through the concept of historicity in 1984, Orwell emphasizes the vulnerability of societies without a trustworthy historical record and the potential effects of such manipulation on individual and collective identities. A thorough study of the discourse of historicity in 1984 reveals tangible examples of revisionism and totalitarian regimes; this helps us to navigate the hidden motivations of such regimes in forging history and knowledge for the sake of power, and to think critically about the power relations at play in the construction of historical narratives.
The discourse of historicity is used by the governing Party in George Orwell's 1984 as a potent instrument for manipulating and controlling the collective memory of the society, illuminating the grave repercussions of historical revisionism and the demise of individual agency under a totalitarian dictatorship. Through an analysis of the Party's control over history, the role of protagonist Winston Smith, and the broader socio-political commentary embedded within the novel, this research paper aims to expose the dangers of historical manipulation and emphasize the importance of maintaining an accurate historical record as a defense against authoritarian control.

In the dystopian future depicted in 1984, three superstates are perpetually at war with one another. Winston Smith, a middle-aged Party member who lives in Oceania under the Party's totalitarian control, is the main character of the novel. Big Brother and the Party have total control over every aspect of residents' lives, including their memories and ideas. Winston is employed by the Ministry of Truth, where he tampers with the past to support Party propaganda. As a result of his growing disillusionment with the harsh system, he discreetly rebels by engaging in an illicit romance with Julia, a fellow Party member. Winston also develops a keen interest in the past and seeks out forbidden information. As he engages in illegal activities, Winston encounters O'Brien, a high-ranking Party member whom he believes to be part of the Brotherhood, a covert resistance group. Winston and Julia are eventually apprehended by the Thought Police, tortured horribly, and brainwashed.

The abuse and exploitation of authority by the ruling Party is one of the main themes of 1984. The story describes a society in which the Party uses monitoring, propaganda, and psychological blackmail to exert complete control over its population. The Party's motto, "War is Peace, Freedom is Slavery, Ignorance is Strength," highlights the Party's relentless pursuit of power (Orwell, 1949, p. 4). This contradictory slogan exemplifies the Party's capacity to manipulate language and the narrative in order to strengthen its hold on power.

The concepts of reality, truth, and the malleability of history are further explored in 1984. The Party practices historical revisionism, tampering with documents and eliminating any proof that conflicts with its interpretation of events. Winston Smith, the main character, holds a position at the Ministry of Truth where he is tasked with editing historical records to support the Party's viewpoint. The Party's manipulation of history and truth reflects its attempt to dominate not just the present but also the past, in order to ensure its domination over the future.

The theme of surveillance is also present throughout the entire novel. Telescreens, concealed microphones, and the ever-vigilant Thought Police are symbols of the Party's pervasive monitoring regime. This ongoing surveillance fosters fear, stifles criticism, and invades personal privacy, adding to the novel's repressive tone. Another important theme in 1984 is resistance and the indomitable human spirit. Characters like Winston and Julia oppose the Party's authority in search of freedom and genuine human connection, despite the Party's efforts to put an end to independent thought and suppress rebellion.
As a cautionary novel, 1984 warns against the perils of dictatorship, linguistic manipulation, and the loss of personal freedom. It continues to be a potent and timely work that inspires thought about the precarious balance between authority and individual rights. The Party's massive surveillance is one example of how it controls its citizens. As Winston Smith, the protagonist, reflects, "There was, of course, no way of knowing whether you were being watched at any given moment (...) You had to live (...) in the assumption that every sound you made was overheard" (Orwell, 1949, p. 3). By assuring conformity and stifling opposition, this extensive surveillance fosters a culture of perpetual dread and helps the Party keep its hold on power.

In addition to surveillance, the Party manipulates history and truth to control the narrative and uphold its authority. As the Party member O'Brien puts it, doublethink means "to know and not know, to be conscious of complete truthfulness while telling carefully constructed lies... to hold simultaneously two opinions which canceled out, knowing them to be contradictory and believing in both of them" (Orwell, 1949, p. 35). This distortion of reality eliminates personal responsibility and promotes cognitive dissonance, which makes it simpler for the Party to impose its ideology on the populace.

The degradation of one's right to privacy and the destruction of individuality characterize the society portrayed in 1984. The protagonist's illicit relationship with Julia becomes a metaphor for resistance against the Party's hold on intimate relationships and feelings. As Orwell writes, "Their embrace had been a battle, the climax a victory. It was a blow struck against the Party. It was a political act" (Orwell, 1949, p. 132). This act of disobedience demonstrates the Party's aim to govern its citizens' private and emotional lives as well as their outward behavior.

Orwell's dystopian imagination is clearly a mirror of the historical context of his time. Orwell intended to decode the very world he lived in, warning his readers against falling into the same trap again. Robert A. Lee notes, "The society portrayed by Orwell is a logical extension of trends that were only beginning to be observable when the book was written, and it remains a thought-provoking reminder of the potential abuses of authority" (Lee, 2009, p. 93). The writer sought to unsettle his readers by giving them the feeling of unfreedom and showing the horrifying scenarios that they might experience.

II. UNDERSTANDING HISTORICITY

Historicity is a concept that involves more than simply the construction, interpretation, and comprehension of history. It has received extended study and attention from major theorists in this field. Foucault's view of historicity is distinctive.
He tends to focus on the power dynamics that sustain the construction of history, holding that history involves continual exchanges and enactments of power. He also proposes that history is not neutral, unbiased knowledge reflecting past events, but a set of narratives shaped by the dominant authorities. He believes that "History is not a science that reconstructs the past for its own sake; it is a practice of power that shapes the present and the future" (Foucault, 1977, p. 139). He adds that historicity is part of the exercise of power and the process of power creation.

Hartog investigates historicity through the prism of memory. He contends that historical awareness is significantly shaped by the context in which it is created and by a society's remembering practices. Historical awareness, according to Hartog, is an active and deliberate process of remembering and forgetting rather than a collection of facts or a straightforward portrayal of the past (Hartog, 2003, p. 25). This viewpoint emphasizes how memories shape historical narratives and how historicity is a subjective concept.

The importance of narrative in aiding historical comprehension is emphasized by White's idea of narrative historiography. He contends that historians create stories to give the past context and logic. White states, "History is not the givenness of events themselves, but the givenness of texts, the construction of stories about the events" (White, 1978, p. 2). According to White, historicity is closely related to the decisions historians make when choosing and interpreting events to tell a whole tale.

The "hermeneutics of suspicion" theory, developed by Ricoeur, examines the complexities of interpretation and significance present in historical texts. He contends that a critical analysis of underlying ideologies and covert motives is necessary for historical comprehension. Ricoeur states, "To understand the past, we need to unveil the hidden meanings, ideological biases, and cultural assumptions that shape historical texts" (Ricoeur, 1984, p. 52). This viewpoint emphasizes the significance of engaging with historical sources critically in order to uncover the various layers of historicity.

Hartog further explores the idea of historicity by analyzing the connection between memory and history. He proposes that memory practices, and how societies remember and forget the past, have a significant impact on how we view history. Historical consciousness is dependent on a society's remembering practices and the discourses that organize them, according to Hartog (Hartog, 2003, p. 20). This viewpoint emphasizes how memory impacts our perception of history, underlining the dynamic and subjective nature of historicity.

The idea of historicity is highly relevant to both literary and social analysis because it sheds light on the complex relationships that connect the past, present, and future and offers insights into societal dynamics and individual experiences. By examining the effects of historical circumstances and events on people's lives and social institutions, literature frequently engages with historicity. As Hayden White suggests, "Narrative is the primary way in which historical consciousness is organized" (White, 1978, p. 2). Through their tales, literary works capture the historical zeitgeist by illuminating the complexity and nuances of the past and their effects on the present. Literature sheds light on the human experience during certain historical eras by fusing real and fictitious components.
This illumination provides insightful information about how individuals negotiate and make sense of their lives in connection to greater historical forces. Additionally, historicity is essential to social research because it offers a framework for comprehending the creation and evolution of societies over time. Fernand Braudel argues, "The long duration (la longue durée) is the foundation of any historical understanding" (Braudel, 1958, p. 29). The ability to recognize trends, pinpoint structural changes, and comprehend the ingrained mechanisms that mold civilizations depends on historical perspectives in social analysis. By examining historical contexts, the causes and effects of social events can be ascertained, the underlying power dynamics can be identified, and social structures and institutions can be critically analyzed.

The genealogy idea put forward by Michel Foucault emphasizes the importance of history to social understanding even further. According to Foucault, historical research should concentrate on the emergence and evolution of power relations and discourses over time. He states, "My objective (...) is to create a history of the different modes by which, in our culture, human beings are made subjects" (Foucault, 1977, p. 149). Social analysis can better comprehend the dynamics of dominance and resistance within societies by tracking the historical development of power structures and looking at the ways in which people are constituted as subjects.

III. HISTORICAL CONTEXT OF 1984

To fully understand 1984, it is essential to consider the political and social setting that influenced George Orwell's novel. The dystopian future Orwell envisioned in the book was greatly shaped by his personal experiences and opinions about the political milieu of his time. The development of authoritarian governments in the 20th century had a significant impact on Orwell. In particular, the rise of fascism in Europe and Joseph Stalin's totalitarian reign in the Soviet Union influenced Orwell's depiction of oppressive regimes. Orwell, who took part in the Spanish Civil War, saw the rise of dictatorship firsthand. He wrote, "I have seen British imperialism at work in Burma, and I have seen something of the effects of poverty and unemployment in Britain. But... I should say that the horrors of the Russian régime have far exceeded them" (Orwell, 1946, p. 7). In this excerpt, Orwell voiced tremendous concern about the implications of totalitarianism for individual freedom and human rights.

In Orwell's political era, propaganda and mass surveillance were frequently used as tools of control. The widespread use of propaganda during World War II and the Cold War heightened Orwell's awareness of how governments might sway public opinion. He wrote, "Political language... is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind" (Orwell, 1946, p. 180). This sentence exemplifies Orwell's suspicion of deception and the use of language as a tool of oppression.

Additionally, the social inequities and class distinctions of Orwell's day strongly influenced his image of a stratified society in 1984. The widening gap between the rich and the poor, as well as the struggles that the working class faced, shaped Orwell's portrayal of the Party's control over the proletariat.
According to Orwell, the vast majority of people are kept in a state of ignorance and subjection so that they can be used whenever their rulers deem their collective body essential (Orwell, 1946, p. 162). This statement captures Orwell's concern about how the ruling class exploits and controls the working class.

IV. THE PARTY'S CONTROL OVER HISTORY
The Party in George Orwell's 1984 has complete control over memory and history and uses them as a tool of power. Through the lenses of theorists such as Hartog, Foucault, Ricoeur, and White, we can examine the methods and ramifications of the Party's control over history and memory in the dystopian society portrayed in the book.

François Hartog's concept of regimes of historicity provides insights into the Party's manipulation of history. According to Hartog, different historical periods have distinct ways of relating to the past. In 1984, the Party establishes its own regime of historicity by constantly rewriting history and altering past events to align with its current objectives. The Party's motto, "Who controls the past controls the future; who controls the present controls the past," exemplifies this manipulation (Orwell, 1949, p. 37). By controlling the narrative of history, the Party maintains its authority and perpetuates its control over the present and future.

Michel Foucault's concept of power and knowledge sheds light on the Party's control over memory. Foucault argues that power operates through the production and control of knowledge. In 1984, the Party employs tactics such as Newspeak and Doublethink to manipulate and control memory. Newspeak, the Party's language, limits the range of thought and erases critical thinking by eliminating words and concepts that challenge the Party's authority. Doublethink encourages citizens to hold contradictory beliefs, effectively distorting their memories and rendering them susceptible to the Party's propaganda. As Foucault suggests, "Power (...) is exercised rather than possessed; it is not the 'privilege,' acquired or preserved, of the dominant class" (Foucault, 1977, p. 141). The Party's control over memory enables its dominance and sustains its oppressive regime.

Paul Ricoeur's hermeneutic approach contributes to understanding the Party's manipulation of history and memory. Ricoeur emphasizes the interpretive process involved in understanding and reconstructing the past. In 1984, the Party's Ministry of Truth serves as a symbol of historical distortion and manipulation. Winston Smith, the protagonist, works at the Ministry of Truth, altering historical records to fit the Party's narrative.
Ricoeur's perspective helps us see how the Party's control over history and memory denies individuals the opportunity for genuine understanding and interpretation of the past. It further reinforces the Party's control by eroding individuals' sense of reality and historical truth.

The Party's skewed use of historical narratives is revealed by Hayden White's theories on the formation of historical narratives. According to White, storytelling strategies and narrative frameworks inherently shape historical writing. The Party's revision of history in 1984 is consistent with White's view of historiography as an artistic endeavor. The Party creates a story that advances its objectives and uses a skewed account of the past to influence the present and the future. This narrative manipulation is exemplified by the term "doublethink," whereby the Party simultaneously maintains incompatible versions of history without hesitation or acknowledgement. By controlling the narrative, the Party gains power over the communal memory and sense of reality.

Stephen Greenblatt's notion of cultural poetics is also essential for understanding the Party's control over history and memory. Cultural poetics studies how cultural practices, literature among them, influence and are influenced by power structures. Cultural poetics is demonstrated in 1984 through the Party's control over historical narratives, as the Party chooses the interpretations and meanings of historical events. Greenblatt argues that "meaning and value are created and modified in specific historical contexts" (Greenblatt, 1990, p. 18). The Party fabricates history and falsifies documents in order to support its rule and stifle any competing viewpoints.

The Party's control over history and memory is further explained by Michel Foucault's concept of discourse. Discourse refers to systems of knowledge and power that influence how we perceive the world. The Party's discourse in 1984 imposes a single, totalitarian account of history and squelches all competing accounts. Foucault states, "Discourses are practices which systematically form the objects of which they speak" (Foucault, 1977, p. 49). The Party controls language and information through tools like Newspeak and the Ministry of Truth, which in turn control memory and historical interpretation.

New Historicism also places a strong emphasis on how literature interacts with and reflects the social and political environment of its time. George Orwell uses political ideas and historical events from his time to construct a dystopian future in 1984. The Party's control of history and memory in the novel can be interpreted as a critique of historical revisionism and the shaping of popular memory for political purposes.
Analyzing the Party's manipulation of history and memory in 1984 through New Historicist theories demonstrates the complex interrelationship between authority, knowledge, and literature. The Party's molding of historical narratives and its repression of alternative interpretations highlight how power shapes and regulates the way society views the past. By exploring the interaction between literature and historical context, New Historicism offers important insights into the Party's totalitarian control and the ramifications of historical manipulation in 1984.

Catherine Gallagher, in her work on counter-history, sheds light on the suppression of alternative narratives by the Party. Gallagher argues that counter-history examines marginalized voices and alternative interpretations of the past. In 1984, the Party's manipulation of historical records serves to erase counter-histories and maintain its version of events as the sole truth. As Gallagher states, "By recovering marginal voices, counter-history makes us aware of the necessary ideological preconditions of all historical truth" (Gallagher, 1992, p. 22). The Party's control over history erases dissenting viewpoints and reinforces its ideological hegemony.

Hayden White's concept of emplotment helps analyze the Party's control over memory in the novel. White argues that narratives are structured through specific plot devices, influencing the way events are interpreted. In 1984, the Party employs a specific emplotment that portrays itself as the ultimate authority while vilifying its opponents. This narrative structure shapes collective memory, ensuring the Party's control over interpretations of the past. White asserts, "Historical narratives are, after all, verbal fictions, the contents of which are as much invented as found" (White, 1987, p. 7). The Party's manipulation of memory through narrative emplotment reflects its desire for total domination.

Additionally, Louis Montrose's emphasis on the historicity of texts contributes to our understanding of the Party's control over history and memory. Montrose argues that texts are embedded in historical processes and reflect the socio-political context of their creation. In 1984, Orwell draws on historical events and ideologies of his time to create a dystopian world. The Party's control over history and memory mirrors the historical circumstances of oppressive regimes. Montrose explains, "Works of literature are historical practices and bear the traces of the circumstances that produced them" (Montrose, 1989, p. 22). The Party's manipulation of historical narratives reflects its desire to maintain power and suppress dissent.

V. CONCLUSION

In conclusion, the examination of the Party's control over history and memory in George Orwell's 1984, through the lens of New Historicism and the theories of scholars such as Catherine Gallagher, Hayden White, Louis Montrose, and others, reveals the profound impact of power, language, and memory manipulation in a dystopian society. The Party's control over history, exemplified by the manipulation of historical records, serves as a mechanism for maintaining its authoritarian rule. Through Newspeak, the Party restricts language and narrows the range of thought, effectively erasing dissenting voices and alternative interpretations of the past. This suppression of counter-history reinforces the Party's version of events as the only acceptable truth.
Additionally, the Party's manipulation of historical records influences collective memory and historical consciousness. By selectively altering or erasing information, the Party shapes the perception of the past, reinforcing its authority and consolidating its power. The Party's emplotment of history through Newspeak constructs a narrative that perpetuates its dominance while marginalizing opposing viewpoints. The application of New Historicism allows us to critically analyze the Party's control over history and memory, highlighting the intricate relationship between power, language, and historical interpretation. The theories of Gallagher, White, Montrose, and others offer valuable insights into the mechanisms through which the Party shapes historical consciousness and maintains its ideological hegemony. Ultimately, the examination of the Party's manipulation of history and memory in 1984 serves as a stark reminder of the dangers of authoritarian regimes and the importance of preserving diverse perspectives and critical thinking. By understanding and questioning the manipulation of history, we are better equipped to safeguard against the erasure of truth and the distortion of collective memory.
Universal Medical Image Segmentation using 3D Fabric Image Representation Encoding Networks

Abstract—Data scarcity is a common issue for deep learning applied to medical image segmentation. One way to address this problem is to combine multiple datasets into a large training set and train a unified network that simultaneously learns from these datasets. This work proposes one such network, Fabric Image Representation Encoding Network (FIRENet), for simultaneous 3D multi-dataset segmentation. As medical image datasets can be extremely diverse in size and voxel spacing, FIRENet uses a 3D fabric latent module, which automatically encapsulates many multi-scale sub-architectures. An optimal combination of these sub-architectures is implicitly learnt to enhance the performance across many datasets. To further promote diverse-scale 3D feature extraction, a 3D extension of atrous spatial pyramid pooling is used within each fabric node to provide a finer coverage of rich-scale image features. In this study, FIRENet was first applied to 3D universal bone segmentation involving multiple musculoskeletal datasets of the human knee, shoulder and hip joints. FIRENet exhibited excellent universal bone segmentation performance across all the different joint datasets. When transfer learning was used, FIRENet exhibited both excellent single dataset performance during pre-training (on a prostate dataset) and significantly improved universal bone segmentation performance. In a following experiment involving the simultaneous segmentation of the 10 Medical Segmentation Decathlon (MSD) challenge datasets, FIRENet produced good multi-dataset segmentation results and demonstrated excellent inter-dataset adaptability despite highly diverse image sizes and features. Across these experiments, FIRENet's versatile design streamlined multi-dataset segmentation into one unified network, whereas traditionally similar tasks would often require multiple separately trained networks.
Deep learning for medical image segmentation is a rapidly evolving field with the potential to enhance disease diagnosis [36] and treatment planning [44]. The effectiveness of deep learning can be largely attributed to its data-driven nature. However, this reliance on data can also be a major limitation in medical image analysis, as data scarcity remains a problem. Unlike the abundance of publicly available large-scale datasets [34], [18], [9] used in 2D computer-vision tasks, (expert) labelled medical image datasets are much smaller in quantity due to several factors:

• Data acquisition challenges: The acquisition of medical images such as magnetic resonance (MR) scans is highly specialised and resource-intensive. Voxel-wise manual annotation of 3D volumes (as required for training 3D image segmentation models) is expertise- and time-intensive and subject to variable operator error. In addition, careful planning and expert contouring protocols are typically required to minimise intra- and inter-rater segmentation variability.

• Data fragmentation: Clinical studies involving medical imaging are typically highly focused, relatively small investigations due to high imaging costs. Datasets from different studies can exhibit considerable inter-dataset variations (for example, different imaging fields of view and contrasts) associated with different acquisition sequences and protocols. Most current deep learning methods in medical image analysis lack versatility as they are typically trained on individual domain-specific datasets.

• Access to image datasets: Collecting large-scale medical imaging datasets with expert annotations is difficult, as explicit consent, strict adherence to ethics and systematic coordination are required; the publication of patient data, even when de-identified, is a highly sensitive matter.

Multi-dataset learning (including transfer learning [53], [7] and simultaneous multi-dataset learning [23]) has been shown to improve segmentation performance. However, most current deep learning methods used for medical image analysis are designed and optimised for single-dataset applications. Even highly adaptive architectures like nnUNet [25] are trained on a per-dataset basis. Currently, using one model instance to segment multiple data distributions (datasets) remains a challenge. Previous works in this field include 3D MDUNet and 3D U²Net, which both simultaneously applied a U-Net-like architecture to several MSD challenge datasets. However, they were only evaluated on a selection of the MSD datasets. To extend the coverage to drastically more datasets, a model with more focus on diverse-scale feature extraction and architecture versatility is desirable. Fabric-shaped architectures [58], [19], [43] are ideal candidates as they create a superposition of many multi-scale sub-architectures. This work proposes a versatile end-to-end 3D fabric network, Fabric Image Representation Encoding Network (FIRENet), which simultaneously self-adapts to multiple medical image datasets for medical image segmentation.
The central features of FIRENet are summarised as follows:

1) Multi-dataset and multi-scale image feature representations are learnt and encoded in the dense residual fabric (DRF) (Figure 1), which encapsulates many multi-scale sub-architectures. The major advantage of DRF is architectural generalisability. This work demonstrates its application to multiple medical image datasets simultaneously without dataset-specific hyper-parameter tuning.

2) The nodes in DRF are connected via weighted residual summation (WRS) (weighted connections) for improved adaptability. These connections are trainable for automatic architecture-level adaptation to different medical image datasets (Figure 1).

3) A 3D extension of atrous spatial pyramid pooling (ASPP) [5] is employed in each node of DRF to provide finer coverage of different-sized (image) features.

The experimental results show FIRENet's excellent (I) single-dataset performance for semantic prostate segmentation, (II) simultaneous universal (multi-anatomy) bone segmentation performance with and without transfer learning, and (III) simultaneous multi-object segmentation performance for all 10 image datasets in the MSD challenge. Notably, FIRENet also exhibits multi-dataset adaptability without tailored training procedures to reach convergence.

A. Convolutional Neural Networks (CNNs) for image segmentation

The most common class of deep learning models for (medical) image segmentation are CNNs. Initially, CNNs were developed for image classification [29], [46] and were later found suitable for image segmentation tasks. Typically, CNNs use consecutive convolution (image filtering) and hierarchical down-sampling to extract features. However, since image segmentation outputs dense predictions (one for every pixel/voxel), image segmentation models require feature learning at both global (coarse) and local (fine-detailed) scales. Excessive down-sampling, as used in classic CNNs [46], [22], can degrade segmentation accuracy due to loss of image resolution. In response, later works, including UNet [42], incorporated an encoder-decoder architecture which first extracts large-context features using an encoder CNN, then reconstructs the full-resolution image using a decoder CNN (reversing the down-sampling performed by the encoder). Another notable contribution of UNet is the use of shortcuts. During the encoding phase, a shortcut temporarily stores high-resolution features. In the decoding phase, the stored high-resolution features are added back to fill in the lost fine-detailed features. Several recent UNet-based networks [59], [56], [39], [60] have been successfully applied to medical image segmentation tasks.

For 3D medical image analysis, most 2D segmentation methods can be readily extended to 3D [8], [37] to process volumetric medical image data. The primary issue in model building is the extremely large memory consumption of 3D convolution. Therefore, workarounds such as reducing model complexity are often required to train 3D CNNs. However, overly simplifying a CNN architecture can limit its learning capacity. In response, 2.5D CNNs [49], [47] and patch-based methods [10], [12], [14], [13] have been proposed as alternative solutions to end-to-end 3D methods. However, both 2.5D and patch-based methods are unable to capture the full 3D context of an image. For patch-based methods, there are also additional hyper-parameters (such as a fixed patch size) and pre- or post-processing steps that can limit the model's applicability to different datasets, as sketched below.
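As a contrast with end-to-end whole-volume processing, the following sketch illustrates the kind of fixed-patch-size, sliding-window inference that patch-based methods rely on; the patch and stride sizes are arbitrary placeholders, and the "model" is a dummy intensity threshold rather than any method cited above.

```python
# Illustrative sketch (not FIRENet code) of patch-based 3D inference:
# a fixed patch size and stride must be chosen per dataset, and the
# overlapping predictions stitched back together, which is what
# end-to-end whole-volume methods avoid.
import numpy as np

def sliding_window_predict(volume, predict_patch, patch=(64, 64, 64), stride=(32, 32, 32)):
    out = np.zeros(volume.shape, dtype=np.float32)
    counts = np.zeros(volume.shape, dtype=np.float32)
    Z, Y, X = volume.shape
    pz, py, px = patch
    sz, sy, sx = stride
    for z in range(0, max(Z - pz, 0) + 1, sz):
        for y in range(0, max(Y - py, 0) + 1, sy):
            for x in range(0, max(X - px, 0) + 1, sx):
                block = volume[z:z+pz, y:y+py, x:x+px]
                out[z:z+pz, y:y+py, x:x+px] += predict_patch(block)
                counts[z:z+pz, y:y+py, x:x+px] += 1.0
    return out / np.maximum(counts, 1.0)      # average overlapping predictions

# Example with a dummy "model" that thresholds intensity:
vol = np.random.rand(96, 96, 96).astype(np.float32)
seg = sliding_window_predict(vol, lambda p: (p > 0.5).astype(np.float32))
```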
B. Multi-scale feature extraction
Multi-scale feature extraction [17], [38], [2], [57], [31] has been growing in popularity as a means of extracting rich image feature representations. Typically, multi-scale feature extraction divides the input into several branches, each with a different receptive field. For example, the atrous spatial pyramid pooling (ASPP) in DeepLabV3 [5], [6] is a powerful multi-scale feature extractor using parallel dilated [54] convolution branches to achieve diverse receptive field sizes. Another, more recent multi-scale network achieving state-of-the-art 2D image segmentation performance is HRNet [48]. Multi-scale networks like DeepLabV3 and HRNet are, in essence, ensembles of different-scaled features and have been shown to substantially improve the accuracy of complex image segmentation tasks. The evolution of CNNs has resulted in an expanded hyper-parameter search space, and exhaustive architecture search has become infeasible. This problem gave rise to a class of fabric-like CNNs [58], [19], [43] seeking to generally encapsulate an exponential number of multi-scale sub-architectures. These networks use interlaced multi-scale convolutional blocks, and all the blocks are trained end-to-end using gradient descent. Recently, AutoDeepLab [35] demonstrated explicit architecture search: it employed weighted trainable connections between cells during training, and after training, weak connections can be pruned to reveal a compact architecture for the specific training set. The resulting architecture was found to be as successful as many hand-crafted architectures.
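To make the parallel-dilated-branch idea above concrete, the following is a minimal 3D ASPP-style module. It is an illustrative reading of the ASPP concept, not DeepLabV3's exact module or FIRENet's ASPP3D node.

```python
# Minimal sketch of an ASPP-style multi-scale extractor: parallel 3x3x3 dilated
# convolutions with different dilation rates, fused by a 1x1x1 convolution.
# With kernel size 3, padding = dilation keeps the spatial size unchanged.
import torch
import torch.nn as nn

class ASPP3DSketch(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv3d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # each branch sees a different receptive field; concatenate and fuse
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```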
1) Medical image segmentation involving multiple datasets: While deep learning models for 3D medical image segmentation have become more and more sophisticated [21], [32], most of them are still limited to one study (dataset) at a time. Incorporating multiple datasets can be beneficial for medical image analysis in tackling data scarcity. Works such as [41], [28], [50], [45], [7] have all demonstrated that pre-trained weights (transfer learning) can generally lead to improved accuracy and convergence. However, applying a deep learning model at a larger scale (and to multiple medical image datasets) is still an under-explored application with many practical benefits. A deep learning model specifically developed for multi-dataset medical image analysis requires excellent dataset generalisability. Several works have developed models that can self-adapt to different medical image datasets. For example, nnUNet [25] automatically configures the model's hyper-parameters according to the geometry of the training dataset. Neural Architecture Search (NAS) [55], [35], [61] is another popular class of methods for creating neural networks that best suit the data. However, methods like nnUNet and NAS are not suitable for simultaneous multi-dataset processing, as the models are configured (or optimised) on a per-dataset basis. The resulting model has limited generalisability to new datasets, especially in medical image analysis. Applying one model to multi-dataset medical image segmentation is more challenging, and there are fewer studies in the literature. Methods such as 3D U²Net [23], [51], [24], [33] showed that it is highly desirable, and indeed possible, to segment multiple datasets (organs) using a unified method. However, methods such as [51] rely on complex components developed for a limited scope (CT lesion detection), making them unsuitable for more general multi-dataset medical image analysis. [23], [33] and [24] demonstrated more flexibility by incorporating domain adapters throughout the model. However, none of these methods has demonstrated simultaneous multi-dataset segmentation at a large scale: most of them were only evaluated on a small subset of the MSD challenge datasets. Moreover, these works used U-Net-like backbones which lack diverse-scale feature coverage; hence their applicability to diverse image sizes can be limited.
II. METHODS
A. Network architecture
1) Dense residual fabric latent module: Fabric structures present a general multi-scale architecture solution that inherently aligns with the nature of multi-dataset medical image segmentation. Hence, this work employs a 3D fabric latent representation module, the dense residual fabric (DRF) (Figure 3a), in anticipation of diverse-sized medical images. The DRF consists of inter-weaved 3D feature extractors (denoted ψ). Each feature extractor has three major components (as in Figure 2): the input size equaliser (ISE), weighted residual summation (WRS) and Atrous Spatial Pyramid Pooling 3D (ASPP3D).
ISE: an operation for automatically resizing and voxel-aligning incoming 3D feature maps. ISE is required for feature map summation and concatenation.
WRS: an operation for fusing the aligned feature maps from ISE. As Figure 3b shows, each input is multiplied by its associated sigmoid-gated weight before being added to the others. The weights are trainable and are uniformly sampled from [-0.03, 0.03]. WRS gives the DRF the flexibility to optimise connection strengths between inputs (see the sketch after this subsection).
ASPP3D: Most multi-scale networks can only extract features at a limited number of scales (using different branches). ASPP3D gives the DRF additional node-level multi-scale feature extraction capabilities. For instance, a DRF with three branches, each using ASPP3D nodes with dilation rates 1, 2 and 4, would yield nine unique receptive field sizes (3, 5, 6, 7, 10, 12, 14, 20 and 28). This would otherwise require nine dedicated branches without ASPP3D.
2) The fabric structure: We span the fabric with two axes (Figure 3a): a width axis W representing the number of different-scaled branches, and a depth axis N representing the fabric's network-wide depth. For a given 3D input of scale s, the fabric first splits it into w ∈ W parallel branches; the output of each feature extractor ψ(i,j), with i ∈ {1, ..., n} and j ∈ {1, ..., w}, is fed into the subsequent feature extractors ψ(i+1,j), ψ(i+1,j+1) and ψ(i+1,j−1) using WRS. Strided convolution and bilinear up-sampling are used to resize feature maps to their target sizes as needed. We avoid transpose convolution as it has been shown to produce "checkerboard" artefacts [40]. The depths {c(i,j) | i ∈ {1, ..., n} and j ∈ {1, ..., w}} of the feature extractors are distributed following the geometry of a pyramid, increasing towards the mid-point of the lowest-resolution branch (ψ(i=n/2, j=w)) of the fabric. Let C be the DRF's input channels; then the number of channels of any feature extractor in the first half of the fabric (from i = 0 to i = n/2) can be defined as c(i,j) = min(C × 2^(i−1), C × 2^(j−1)). The number of channels of the feature extractors in the second half of the fabric then gradually shrinks along N, mirroring the first half. At the end of the fabric, the different-scaled parallel branches are merged using WRS to form an output of the original scale s.
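The following is a minimal sketch of WRS as described above (trainable, sigmoid-gated weights initialised uniformly in [-0.03, 0.03], then summation), together with the pyramid channel-distribution rule. Input alignment (the ISE step) is assumed to have been done already; names are illustrative.

```python
# Minimal sketch of weighted residual summation (WRS): each incoming feature
# map is scaled by a trainable, sigmoid-gated weight, then the maps are summed.
import torch
import torch.nn as nn

class WRS(nn.Module):
    def __init__(self, n_inputs):
        super().__init__()
        # weights uniformly sampled from [-0.03, 0.03], as described above
        self.w = nn.Parameter(torch.empty(n_inputs).uniform_(-0.03, 0.03))

    def forward(self, inputs):           # inputs: list of same-shaped tensors
        gates = torch.sigmoid(self.w)    # connection strengths in (0, 1)
        return sum(g * x for g, x in zip(gates, inputs))

def drf_channels(i, j, C):
    # pyramid channel rule for the first half of the fabric:
    # c_(i,j) = min(C * 2**(i-1), C * 2**(j-1)); the second half mirrors it
    return min(C * 2 ** (i - 1), C * 2 ** (j - 1))
```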
3) Dense residual connections: [22] showed that network depth is positively correlated with training difficulty. We include supplementary residual shortcuts (Figure 3c) to densely connect the feature extractors in the fabric. That is, in addition to the different-scaled features from the immediately previous layers, each feature extractor ψ(i,j) receives shortcut signals from all other preceding feature extractors with compatible channel sizes ({ψ(î,j) | î ∈ {0, ..., i − 2}, c(î,j) = c(i,j)}, where c stands for the number of channels), as illustrated in Figure 3c.
4) Encoder-decoder backbone: As maintaining high-resolution features through a fully 3D network is not feasible on current-generation GPUs, the DRF is embedded in a limited encoder-decoder base (Figure 1), with WRS acting as shortcuts passing features from the encoder to the decoder. The encoder and the decoder have the same number of convolutional blocks. Each block is a residual unit [22] with two convolutional layers followed by max-pooling. In addition, a convolutional layer is added to each encoder-to-decoder shortcut to reduce semantic gaps [59].
5) Instantiation parameters: The encoder contains two convolutional blocks of 32 and 64 channels, respectively. Then, the encoded representation of the input is passed into a DRF instantiated with W = 3, N = 4 and C = 64. Each feature extractor has three parallel branches with dilation rates of 1, 2 and 4, respectively. Finally, the fabric output is passed through two decoder blocks with 64 and 32 channels, respectively, to arrive at the network's output. The shortcut convolutional layers used for semantic gap reduction have the same depths as their corresponding encoder or decoder blocks.
B. Training
Various works have shown that deep supervision [30] substantially increases the performance of deep learning for image segmentation [26], [33]. For the training of FIRENet, a similar concept is used, where each decoder block produces an auxiliary segmentation output through point-wise (1 × 1 × 1) convolution. The loss for each output (auxiliary and main) is the sum of a categorical cross-entropy loss and a Dice similarity coefficient (DSC) loss, and the losses were minimised using the Adam optimiser [27]. The hardware used for training consisted of an NVIDIA Tesla V100 (32 GB), and the training duration was capped at 1 day as the performance plateaus.
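The per-output loss just described (categorical cross-entropy plus a Dice loss) can be sketched as follows. The exact smoothing and class weighting used by FIRENet are not specified in the text, so this is one standard formulation, assumed for illustration.

```python
# Minimal sketch of the per-output training loss: cross-entropy + soft Dice.
import torch
import torch.nn.functional as F

def ce_plus_dice(logits, target, eps=1e-6):
    # logits: (B, K, D, H, W); target: (B, D, H, W) with integer class labels
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.shape[1])  # (B, D, H, W, K)
    onehot = onehot.permute(0, 4, 1, 2, 3).float()           # (B, K, D, H, W)
    dims = (0, 2, 3, 4)
    inter = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * inter + eps) / (denom + eps)                 # per-class soft DSC
    return ce + (1 - dice.mean())
```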
C. Experiment setups
1) Experiment I: Multi-dataset transfer learning: FIRENet was tested for transfer learning involving several 3D medical imaging datasets. For pre-training, FIRENet was first trained on a recently released 3D prostate magnetic resonance (MR) dataset [15] (the prostate dataset). Then, the trained FIRENet instance was transferred for simultaneous multi-dataset bone segmentation on a composite bone dataset (the multi-bone dataset). Elastic deformation was used for data augmentation [42]. In line with other methods applied to this dataset, the evaluation metrics used were the DSC, Hausdorff distance (HD) and mean surface distance. The prostate dataset contains 211 3D MR examinations of the pelvic region with manual segmentation labels for five foreground classes: body, bone (pelvic spine and girdle, proximal femur), urinary bladder, rectum and prostate. The multi-bone dataset is composed of four smaller datasets: three 3T MR musculoskeletal (MSK) datasets (knee [20], shoulder [52], hip [4]) and the OAI knee [1] dataset. The main difficulty of this experiment is the diverse image sizes and the imbalanced numbers of training examples in each dataset (62, 25, 53 and 507 MR examinations, respectively).
2) Experiment II: Simultaneous multi-dataset segmentation on MSD: The Medical Segmentation Decathlon (MSD) is a well-known 10-dataset segmentation challenge targeted at assessing the generalisability of machine learning models applied to medical image segmentation. Most previous works ([23] and [24]) on simultaneous multi-dataset segmentation were performed on a subset of the MSD datasets only, which excluded important and challenging tasks such as HepaticVessel and Lung. To provide a complete performance assessment, the current work evaluated FIRENet on all 10 MSD constituent datasets. These datasets are highly diverse in image size and voxel spacing, with the smallest dimension being 11 voxels and the largest dimension being 751 voxels. For preprocessing, each image was re-sampled to the same voxel spacing of [1, 1, 1] and patches were extracted from the re-sampled images. To limit memory usage, the size for patch extraction was set to min(d, 160) for each dimension (where d is the number of voxels of that dimension). Finally, voxel intensity standardisation was applied before the images entered the network. Each dataset was divided into training and validation sets according to an 80%-20% split ratio, as per previous work [33]. Because the datasets contain different numbers of classes, and a universal categorical output for all the classes would be unrealistically memory-intensive, each dataset was paired with a designated up-sampling decoder after the DRF to produce segmentations with the desired number of classes. However, the DRF, which contains most of the learnt features, was shared across the 10 datasets. The segmentation for each dataset was predicted using the shared DRF and its designated decoder. The evaluation metric was the averaged per-class (excluding the background) DSC, which alleviates bias towards majority classes. Due to the lack of available comparison methods for the multi-bone segmentation task, nnUNet was chosen, and one instance was trained to establish a baseline. All the datasets were pooled into a single dataset, during which the smaller datasets were duplicated to ensure data balance. All the preprocessing, training and evaluation steps carefully followed nnUNet's official instructions and were performed using the official scripts (https://github.com/MIC-DKFZ/nnUNet).
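As a concrete reading of the preprocessing rules above (patch size capped at min(d, 160) per dimension, followed by intensity standardisation), the following is a minimal sketch. Isotropic resampling is assumed to be handled elsewhere, and the fixed-corner crop stands in for whatever sampling strategy the authors actually used.

```python
# Minimal sketch of the stated MSD preprocessing rules; names are illustrative.
import numpy as np

def patch_size(shape, cap=160):
    # the paper caps each patch dimension at min(d, 160)
    return tuple(min(d, cap) for d in shape)

def standardise(img):
    # per-image voxel intensity standardisation (zero mean, unit variance)
    return (img - img.mean()) / (img.std() + 1e-8)

def extract_patch(img):
    ps = patch_size(img.shape)
    patch = img[: ps[0], : ps[1], : ps[2]]  # illustrative fixed-corner crop
    return standardise(patch)

# vol = np.random.rand(11, 320, 751); extract_patch(vol).shape -> (11, 160, 160)
```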
III. RESULTS AND DISCUSSION
A. Experiment I: Multi-dataset transfer learning
1) FIRENet pre-training evaluation: Table I shows the subject-level, 3-fold validation segmentation results from the prostate MR pre-training experiment for the body, bone, urinary bladder, rectum and prostate classes. FIRENet was compared to four other contemporary 3D deep learning baseline methods (3D UNet [8], improved UNet [26], VNet [37] and CAN3D [11]). FIRENet produced better results than the baseline methods in DSC, HD and mean surface distance values across the different classes. In terms of performance on outlier cases (minimum DSC), FIRENet was amongst the most resilient models, producing fewer segmentation errors. Table II shows the baseline results published by [16], [3] using traditional methods. These baseline results are comparable even to the more recent deep learning results (UNet and VNet) in Table I. FIRENet's and CAN3D's median results significantly exceed these traditional methods, especially across the most challenging classes (prostate and rectum).
2) Transfer learning: For transfer learning, two FIRENet instances were applied to the multi-bone dataset: the FIRENet previously pre-trained on the prostate dataset (denoted FIRENet-T) and a randomly initialised FIRENet (denoted FIRENet-R). The bone segmentation DSC results of FIRENet-R and FIRENet-T are provided in Table III and Fig. 6. Overall, FIRENet-R and FIRENet-T exhibited the ability to simultaneously segment diverse medical image data despite relying on only one shared set of weights: the results on the OAI dataset are comparable to other methods [11], [1] which were specifically designed for one dataset. Comparing the mean DSC values of FIRENet-R and FIRENet-T, it can be seen that FIRENet could leverage pre-training to improve the segmentation performance across all the constituent bone datasets. Moreover, as Fig. 6 shows, FIRENet-T's lowest DSC results were also noticeably improved over FIRENet-R's, indicating reduced critical segmentation errors. In terms of convergence, FIRENet-T was also more stable and faster (Fig. 5), supporting the benefits of transfer learning for simultaneous multi-dataset segmentation. The results of nnUNet on this bone segmentation task indicated that it is unsuitable for simultaneous multi-dataset processing: the best DSC results were substantially lower, at 0.612, 0.851, 0.633 and 0.986 for the MSK hip, knee, shoulder and OAI, respectively. As the visualisation in Fig. 4 shows, the results from nnUNet were inconsistent, with several segmentation failures in each of the smaller datasets (MSK knee, shoulder and hip). It was also observed that the training process of nnUNet was highly unstable, and early stopping was required to obtain usable results. nnUNet's inability to produce satisfactory results could be attributed to the data-dependent nature of its self-configuring procedure, as well as the lack of multi-scale feature exchange in the UNet architecture. In the case of FIRENet-R and FIRENet-T, aside from the deliberate lack of dataset-specific configurations, there is also a strong emphasis on multi-dataset feature learning. Figure 7 A, B and D show visualisations of how the multi-scale features are exchanged within the DRF of FIRENet. Four representative pairs of feature maps were captured before and after WRS. Noticeably, WRS is shown to merge features to create more prominent activations. It also seems to "clean up" the activation maps by removing "extraneous" signals. In Figure 7B, WRS appears to have removed noise-like artefacts that share little correlation with the shape of the bone.
B. Experiment II: Simultaneous multi-dataset segmentation on MSD
FIRENet's segmentation performance was compared to the recently published 3D MDUNet [33] and 3D U²Net, and some example predictions are visualised in Fig. 8. As Table IV shows, compared to 3D MDUNet and its 3D U²Net baseline (as it appears in the 3D MDUNet publication), FIRENet's DSC results in the Hippocampus, Pancreas and Spleen segmentation tasks are considerably improved. Compared to the original 3D U²Net, FIRENet achieved a significantly better Pancreas segmentation DSC (by 12.1%) and similar segmentation DSCs in the Heart, Liver and Prostate segmentation tasks. However, it did under-perform in terms of Hippocampus segmentation DSC. It is worth noting that both 3D MDUNet and 3D U²Net only involved a subset of the MSD challenge and, by extension, only a subset of the challenge's complexity, whereas FIRENet was trained on all 10 datasets.
While nnUNet has been the gold standard in the MSD challenge, as Experiment I shows, pooling different datasets together is not within the scope of nnUNet's design. The currently published nnUNet results are indeed state-of-the-art, but all are specially tuned for one dataset at a time. Hence, nnUNet's single-dataset results were not included as a performance baseline for this experiment.
C. Limitations
As FIRENet applies universally to different imaging datasets, its design and training methodology purposefully lack domain-specific optimisations. Compared to public challenge results produced by highly specialised, single-dataset methods, a general architecture does not often yield optimal numerical results across all the different datasets. Although FIRENet's architecture is free of overhead in handling multiple datasets, there is an increase in total model size with each additional decoder (required for different output formats). Finally, as a CNN, FIRENet faces the current limitations of deep learning, for example, the lack of clinical explainability and difficulty extrapolating to unseen data distributions.
D. Future work
As new labelled medical imaging datasets and deep learning accelerators become available, FIRENet's size and training set composition can continue to expand. It would also be beneficial to train multiple instances of FIRENet to specialise in different imaging modalities. In terms of architectural development, FIRENet could be effectively re-purposed for classification, regression and multi-task learning by adding downstream prediction heads. For example, a classification head could be added for 3D medical image classification based on the features extracted by FIRENet.
IV. CONCLUSION
In response to the issue of data scarcity in deep learning for medical image analysis, this work proposes FIRENet, a versatile 3D neural network architecture geared towards simultaneous multi-dataset segmentation. To ensure maximum flexibility when learning features from multiple datasets, FIRENet uses a generally inclusive fabric structure to encapsulate a superposition of many sub-networks, thus alleviating the need for dataset-specific architecture designs. In addition, each fabric node employs ASPP3D for rich-scale feature extraction to ensure maximum coverage of different-scaled features. The prostate, bone and MSD segmentation tasks showed that FIRENet is well-suited to multi-modal, multi-size and multi-target segmentation performed simultaneously.
Hydraulic Performance of Wave-Type Flow at a Sill-Controlled Stilling Basin
Downstream of a sluice gate or weir, wave-type flows inevitably occur in stilling basins with no tailwater. This paper aims to investigate the hydraulic performance of wave-type flows at a sill-controlled stilling basin through experimental research. The flow pattern, the bottom pressure profiles along the stilling basin, and the air concentrations on the bottom and the sidewall were examined in five sill-controlled stilling basins by altering the sill position and height. The results show that wave-type flow patterns contain submerged and non-submerged jumps, which are relevant to the ambient pressure head and air entrainment. The bottom pressure profiles exhibit larger pressure fluctuations at large unit discharges and two peak pressure values in the vicinity of the sill. The air concentrations on the bottom and the sidewall decrease with increasing unit discharge. The flow zone in the vicinity of the sill should be focused upon concerning protection against cavitation damage because of the slight air entrainment and significant pressure fluctuations. These findings advance our understanding of wave-type flows, and their ambient pressure heads and air entrainment are useful for designing sill-controlled stilling basins.
Introduction
A hydraulic jump is a phenomenon that occurs when supercritical flow converts to a subcritical flow regime downstream of hydraulic structures. Hydraulic jumps mostly occur in stilling basins with sills or blocks after sluice gates or weirs. For a sill-controlled stilling basin, the flow passing over the sill changes with the supercritical Froude number, and the relevant hydraulic characteristics (i.e., the hydraulic jump performance) are strongly influenced by the position and height of the sill [1-4]. In the stilling basin, hydraulic jumps can be classified successively into five types, namely A, B, minimum B, C, and wave-type flow, as the tailwater decreases, as illustrated in Figure 1 [5]. The A-jump corresponds practically to a classical jump, as the sill position is at the end of the surface roller and the sill has no effect on the jump (Figure 1a). The B-jump occurs when the tailwater depth decreases: the jump toe moves toward the sill and deflection of the bottom stream occurs (Figure 1b). Consequently, the minimum B-jump is marked by the formation of a second roller downstream of the sill, and a C-jump forms when the maximum difference between the flow depth over the sill and the tailwater depth is realized (Figure 1c,d). As the tailwater is gradually reduced, eventually reaching the point of no tailwater, a wave-type flow will occur in the vicinity of the sill, and the resulting downstream flow is characterized by supercritical flow conditions. Distinct from other types of flow, wave-type flows result in excessive standing waves and highly erosive supercritical flow downstream of the sill; thus, this type of flow should generally be avoided (Figure 1e). In the design of stilling basins or energy dissipators, it is very important to make sure that a hydraulic jump occurs for all possible tailwater depths. Thus, in the practical engineering design of the sill-controlled stilling basin (i.e., an abrupt bottom rise), wave-type flow inevitably occurs when there is no tailwater [6,7].
In the literature, much attention has been given to wave formation in different types of hydraulic jumps [8-10]. Different cases of wave-type flows resulting from abrupt bottom changes have been studied under supercritical downstream conditions. Kawagoshi and Hager [11] investigated wave formation at an abrupt drop and examined various parameters, such as the sequent depth ratio, the maximum height, the location of the plunging point, the length of the downstream jump, and the resultant surface roller. Hager and Li [5] conducted an analysis of the impact of a continuous, transverse sill on the hydraulic jump in a rectangular channel. The study revealed that the jump controlled by the sill was a perturbed classical jump, as evident from the overall jump pattern. A significant reduction in the energy dissipation of the wave-type flow was observed compared to other patterns. Eroğlu and Tokyay [12] presented a simple empirical expression for determining the hydraulic characteristics of wave-type flows at abrupt bottom drops. Eroğlu and Taştan [13] analyzed the flow pattern and energy dissipation of wave-type flow when the basin bottom both rose and fell. Huang et al.
[14] conducted an experimental study on wave characteristics in stilling basins with negative steps. The study describes wave height, period, probability density, and power spectrum in different stilling basins. It establishes relationships between characteristic wave heights and provides an empirical formula for the relative characteristic wave height. Zhou et al. [15] studied the energy dissipation of wave-type flow caused by a sill in the stilling basin and compared it with energy dissipation data obtained using a positive step. Moreover, better agreement with respect to energy dissipation curves can be obtained by considering both upstream and downstream conditions.
Previous research has investigated different cases involving wave-type flows, including energy dissipation and hydraulic variables such as sequent depth ratios, maximum wave height, and wave profile. However, the effect of wave-type flows on the ambient pressure head and air entrainment has not been analyzed thoroughly. In this study, we aim to examine the hydraulic characteristics of wave-type flows, with a special focus on the bottom pressure profile along the stilling basin and the air concentration on the bottom and sidewall. By gathering these data, it will be possible to improve the accuracy of computational fluid dynamics (CFD) models used to predict the hydraulic behavior of wave-type flows. Furthermore, the findings of this research can expand the application of sill-controlled stilling basins by developing new relationships based on the data collected.
Experimental Setup and Methodology
The experiments were conducted in the High-Speed Flow Laboratory at Hohai University in Nanjing, China. The experimental setup consisted of a large feeding basin, a pump, an approach conduit, a rectangular flume, a stilling basin model, and a flow return system (Figure 2). The rectangular flume, made of Perspex, was 25.00 m long, 0.50 m wide, and 0.60 m high. The stilling basin model consisted of a weir and a sill. The height of the weir, P, was 0.36 m. The weir was a standard Waterways Experiment Station (WES) weir with a crest profile of y = 1.81x^1.85 and a chute with an angle of 57° relative to the basin bottom. The sill in the stilling basin had a thickness of 0.01 m and a width of 0.50 m. Wave-type flow was observed downstream of the weir at the sill under no-tailwater conditions, and the downstream flows were also supercritical.
Figure 3 shows the experimental variables for submerged and non-submerged jumps, including the upstream flow depth, y0; the approaching supercritical flow depth upstream of the sill, y1; the inflow discharge, Q; the sill height, s; the length of the stilling basin, ls (i.e., the sill position), from the weir toe (Station 0.0 m) to the upstream face of the sill; and the downstream depth, y2. The inflow discharge, Q, was measured using a 90° V-notch weir with an accuracy of ±1%, and the flow depths y0 and y2 were measured using point gauges with an accuracy of ±1%. Due to the effect of free-surface instability, y2 was measured 10 m downstream of the sill, where the free-surface undulations had diminished.
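The paper only states that Q was measured with a 90° V-notch weir; the following sketch shows the standard thin-plate discharge relation for such a weir. The discharge coefficient of roughly 0.58 is a textbook assumption, not a value taken from this study.

```python
# Minimal sketch of the standard 90-degree V-notch weir discharge relation:
# Q = Cd * (8/15) * sqrt(2g) * tan(theta/2) * h^(5/2)   [m^3/s]
import math

def vnotch_discharge(head_m, cd=0.58, theta_deg=90.0, g=9.81):
    return cd * (8.0 / 15.0) * math.sqrt(2 * g) * \
        math.tan(math.radians(theta_deg) / 2) * head_m ** 2.5

# vnotch_discharge(0.30) -> roughly 0.068 m^3/s for a 0.30 m head
```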
Since a submerged jump may occur under certain conditions, different methods were used to calculate the flow depth y1 and the velocity v1. For non-submerged jumps (Figure 3b), the flow depth y1 was calculated by means of an energy balance (i.e., E0 = y1 + Q²/(2gφ²B²y1²), where E0 is the total energy upstream of the weir, g is the acceleration of gravity, B = 0.50 m is the basin width, and φ = 0.95 is the coefficient of velocity), and the velocity v1 was calculated by means of the continuity law. For submerged jumps (Figure 3a), v1 was calculated as v1 = √(2g(y0 − y4)), where y4 denotes the flow depth at the terminal section of the weir [16], and y1 was calculated by means of the continuity law.
Figure 4 shows the layout of the air concentration and pressure measurement points. The pressure measurement points were placed at the centerline of the bottom (Figure 4b), corresponding to stations 0 m, 0.05 m, 0.15 m, ..., and 1.85 m (from the second to the last at 0.1 m intervals). The pressure on the bottom was measured with piezometric tubes with an error of ±0.5 mm. The air concentration measurement points were placed both 0.1 m off the centerline on the bottom (Figure 4b) and 0.05 m above the bottom on the sidewall at the same stations (Figure 4a), corresponding to stations 0.35 m, 0.45 m, ..., and 1.05 m (from the first to the last at 0.1 m intervals). The air concentrations on both the bottom and the sidewall were measured using a CQ6-2005 aeration apparatus with a sampling rate of 1020 Hz, a sampling period of 10 s, and an error of ±0.3% [17].
For this study, the experiments were conducted for inflow unit discharges 0.102 m²/s ≤ qw ≤ 0.230 m²/s (i.e., 0.102, 0.154, 0.188, 0.205 and 0.230 m²/s), corresponding to Reynolds numbers of 3.97 × 10⁵ ≤ Re ≤ 9.14 × 10⁵. Hence, the Reynolds numbers were large enough to avoid the significant scale effects identified in air-water flows in stilling basins [18]. The experiments encompassed five sill-controlled stilling basins obtained by altering the position (ls) and height (s) of the sill. Table 1 lists the experimental flow conditions for all sill configurations, comprising the upstream and downstream Froude numbers F1 (= v1/√(gy1)) and F2 (= v2/√(gy2)).
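The non-submerged computation above can be sketched numerically as follows. The exact energy-balance expression is garbled in the source, so the form E0 = y1 + Q²/(2gφ²B²y1²) is an assumption consistent with the stated variables; the supercritical root (small y1) is bracketed below the critical depth.

```python
# Minimal sketch: solve the assumed energy balance for the supercritical depth
# y1, then compute the upstream Froude number F1 = v1 / sqrt(g * y1).
from scipy.optimize import brentq

G, B, PHI = 9.81, 0.50, 0.95   # gravity, basin width, coefficient of velocity

def y1_from_energy(E0, Q):
    f = lambda y: y + Q**2 / (2 * G * PHI**2 * B**2 * y**2) - E0
    yc = (Q**2 / (G * B**2)) ** (1 / 3)   # critical depth bounds the small root
    return brentq(f, 1e-4, yc)            # supercritical (small-depth) root

def froude(q_unit, y):
    v = q_unit / y                        # continuity: v = q / y
    return v / (G * y) ** 0.5

# Example (illustrative numbers): E0 = 0.56 m, q_w = 0.102 m^2/s
# y1 = y1_from_energy(0.56, 0.102 * B); F1 = froude(0.102, y1)
```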
Flow Pattern
In most practical cases, a stilling basin with a positive step or a sill (i.e., an abrupt bottom rise) is constructed to grant a forced jump inside the basin [2]. Figure 5 illustrates the wave-type flow for both Case M31 and Case M32 at different inflow unit discharges. As the unit discharge increased, implying a decrease in F1, the beginning of the jump roller moved downstream. The wave-type flow exhibited a significant wave height in the vicinity of the sill, a significant water drop downstream of the sill, and resultant downstream supercritical flow (F2 > 1), as listed in Table 1. In addition, air entrainment occurred at the toe of the jump, and entrained bubbles were transported into the downstream zone.
For comparison purposes, the beginning of the jump roller in Case M32 moved downstream more distinctly, presenting a clear transition from the submerged jump to the non-submerged jump, especially when the unit discharge varied from 0.154 m²/s to 0.205 m²/s. Moreover, the visual observations suggested a more significant amount of air in the submerged jump in Case M32; the entrained air could even reach the stilling basin bottom.
In addition to the wave-type flow, a jet flow may occur, with the curvature of the streamline approaching the sill becoming larger. The supercritical flow would splash over the sill, and a cavity formed between the jet flow and the downstream depth [7]. Because the splash flow occurred when the relative sill height s/y1 was larger than its critical value, the flow pattern was closely related to the inflow conditions and the sill height.
In Figure 6, the relative sill height S (i.e., S = s/y1) is plotted against the upstream Froude number F1. The data for the critical splash flow conditions are also illustrated in this figure. The relative sill height, s/y1, in this study conformed to the wave-type flow conditions found in a previous study [7]. When wave-type flow occurred with a positive step, the flow zone behind the step could be classified into aerated or non-aerated flow. The main difference between these two wave types was that the aerated wave-type flows were associated with a better aeration effect and higher wave heights due to the sub-atmospheric pressure at the horizontal step surface [19].
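Relating the quantities plotted in Figure 6, the following sketch treats the relative sill height as a simple screening criterion. Using S = 1 as a hard threshold (the limit line reported for steps in [13], discussed below) is an illustrative simplification, since the actual wave and splash limits also depend on F1.

```python
# Minimal sketch: dimensionless grouping used in Figure 6, with the S = 1 line
# of [13] applied as an illustrative wave/non-wave screening threshold.
def relative_sill_height(s, y1):
    return s / y1

def is_wave_type(s, y1):
    return relative_sill_height(s, y1) >= 1.0   # above the S = 1 limit line

# is_wave_type(s=0.05, y1=0.03) -> True (S ~ 1.67)
```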
For these two wave types, the data for the relative positive step height hs/y1 ag F1 are also plotted in Figure 6 [13], and the line, S = 1, was referred to as the limit bet non-wave and wave-type flows.For wave-type flow occurring with a sill in this s the data for S against F1 also suggest that a higher relative sill height correspondi submerged wave-type flows is relevant for a better aeration effect, as observed in F 5. Pressure Head Figure 7 shows the maximum, mean, and minimum pressure head, Hp, on the bo along the stilling basin starting at the weir toe (Station 0.0 m) at different unit disch for Case M32 and a CHJ (a classical hydraulic jump).It can be observed that the bo pressure varied greatly along the stilling basin for each unit discharge and reached a both at the location upstream and downstream of the sill (Figure 7a-e).Moreover, peak values all increased with an increasing unit discharge due to the increase in surface elevation.The maximum, mean, and minimum pressure head of a CHJ is shown in Figure 7f, where the depth of supercritical flow (y1) and its subcritical conj depth (y2) are 0.17 m and 0.92 m, respectively, when qw = 0.93 m 2 /s [20].For compa purposes, the pressure profiles of a CHJ increased monotonically along the stilling b Figure 6.Relative sill height S against the upstream Froude number F 1 [7,13]. For these two wave types, the data for the relative positive step height h s /y 1 against F 1 are also plotted in Figure 6 [13], and the line, S = 1, was referred to as the limit between non-wave and wave-type flows.For wave-type flow occurring with a sill in this study, the data for S against F 1 also suggest that a higher relative sill height corresponding to submerged wave-type flows is relevant for a better aeration effect, as observed in Figure 5. 
Pressure Head
Figure 7 shows the maximum, mean, and minimum pressure head, Hp, on the bottom along the stilling basin, starting at the weir toe (Station 0.0 m), at different unit discharges for Case M32 and for a CHJ (a classical hydraulic jump). It can be observed that the bottom pressure varied greatly along the stilling basin for each unit discharge and reached a peak both upstream and downstream of the sill (Figure 7a-e). Moreover, these peak values all increased with increasing unit discharge due to the increase in water surface elevation. The maximum, mean, and minimum pressure head of a CHJ is also shown in Figure 7f, where the depth of supercritical flow (y1) and its subcritical conjugate depth (y2) are 0.17 m and 0.92 m, respectively, when qw = 0.93 m²/s [20]. For comparison purposes, the pressure profiles of a CHJ increased monotonically along the stilling basin, except at its beginning. As observed in Figure 7a-e, for a small unit discharge (e.g., qw = 0.102 m²/s and 0.154 m²/s), the values of the maximum, mean, and minimum pressure were nearly the same; i.e., the pressure fluctuation along the stilling basin bottom was small. However, as the unit discharge qw increased (qw = 0.188 m²/s, 0.205 m²/s and 0.230 m²/s), the pressure fluctuations became greater in the vicinity of the sill, particularly at qw = 0.205 m²/s. This was attributed to the turbulent roller region of the jump being closer to the sill, related to the transition from a submerged jump to a non-submerged jump. By contrast, the pressure fluctuation of the classic hydraulic jump was more distinct at a near-prototype scale. In order to prevent erosion below overflow spillways, chutes, and sluices, the pressure peaks of wave-type flows should be carefully focused upon in this study.
Figure 8 illustrates the normalized mean pressure heads (Hp/y2) along the stilling basin at different unit discharges (qw) for each case and for a CHJ. In this figure, the downstream flow depth, y2, was used to normalize Hp. In general, the streamwise dimensionless mean pressure heads Hp/y2 for each case (Figure 8a-e) exhibited a similar trend at different unit discharges, with two distinct peak values observed along the stilling basin. For the CHJ in Figure 8f, Hp/y2 typically decreased initially and then increased along the stilling basin. A larger Hp/y2 could be obtained for a smaller qw until the end of the jump, except at the beginning of the stilling basin. According to the existence of the pressure peaks and the sill position in Figures 7 and 8, the streamwise dimensionless mean pressure indicated the following flow zones: (1) deflection zone, (2) jump zone, and (3) wave impact zone.
The deflection zone was characterized by pronounced mean pressures due to the impact and curvature of the flow. In this zone, the pressure head decreased, and the dimensionless impact pressure Hp/y2 was determined by the upstream weir flow. In the jump zone, the jump formation involved a strong pressure variation. The pressure head increased continually and reached its maximum closest to the upstream face of the sill. The resultant first peak value was much larger than the downstream depth. In the wave impact zone, wave-type flows induced a significant water drop downstream of the sill, resulting in a sudden pressure decrease followed by a secondary peak. The peak pressure value is still very high compared to the downstream stable flow conditions y2 (e.g., Hp approximates 2.0 times y2 when qw = 0.205 m²/s for Case M31). After the wave impact, the pressure head gradually decreased to a
constant value. The pressure profile for a classical hydraulic jump controlled by a sill, for which the sill position was at sta./y2 = 5.71, 5.79, and 5.87 at unit discharges of 0.24 m²/s, 0.46 m²/s, and 0.93 m²/s, respectively, is also illustrated in Figure 8f. Apart from the apparent pressure drop and wave impact in the vicinity of the sill, the mean pressure downstream of the sill was quasi-hydrostatic [21].
Generally, designers and contractors should reinforce the stilling basin with concrete and steel to prevent scouring of the bedrock, particularly at the local peak pressures [22]. The relative first and second peak pressure heads in the vicinity of the sill, normalized by the sill height as Hp/s, can be expressed in terms of the dimensionless variables F1, L, and S, where L = ls/y1 and S = s/y1 denote the relative length and height of the stilling basin, respectively, and a, b, c, and d are fitted constants. The relative first and second pressure peak values for the sill configurations can be expressed with the resulting relationships, Equations (2) and (3). The graphs of Equations (2) and (3) are plotted in Figure 9. Equations (2) and (3) also reflect the relationships among the peak pressure heads, the inflow conditions, and the sill configurations; i.e., the relative peak pressure increased with increasing F1 and increasing relative stilling basin length ls/y1, but decreased with increasing sill height s/y1.
Air Entrainment
The jump formation in the stilling basin results in extreme water turbulence and pressure fluctuations on the bottom or the sidewall (e.g., with a restricted width) [23]. Due to the structural vibration caused by these pressure fluctuations and the high velocity near the bottom of the basin, the risk of cavitation increases.
The cavitation damage can be greatly reduced by introducing enough air, so more attention should be focused on the air concentration of the flow (i.e., the ratio of the air volume to the sum of the air and water volumes) on the bottom and the sidewall. Air concentrations on the bottom (Cb) and the sidewall (Cs) in the vicinity of the sill at different inflow unit discharges qw for both Case M32 and Case M31 are highlighted in Figure 10. In this figure, X = (x − ls)/ls represents the location of the measuring point relative to the sill position along the stilling basin floor.
As shown in Figure 10a, the air concentrations (Cb and Cs) decreased with increasing qw for M32. At each qw value, a decreasing trend was observed in the flow zone upstream of the sill, and the opposite trend was found downstream of the sill. The data for M31 also exhibited the same trend over most of the flow zone in Figure 10b, but most bottom air concentrations upstream of the sill measured zero due to the flow pattern of the wave-type flows. For instance, with the increase in qw for Case M32, the jump type changed from submerged to non-submerged, and the entrained air was gradually transported downstream of the sill. At a small qw value, entrained air resulting from the submerged jump at the toe of the weir reached the stilling basin bottom (e.g., qw = 0.102 m²/s and 0.154 m²/s in Figure 5a). The blackwater zone close to the stilling basin bottom gradually enlarged with an increase in the unit discharge. In contrast, for Case M31, the flow zone upstream of the sill was always characterized by blackwater in this study (Figure 5b).
Table 2 shows the air concentrations on the bottom and the sidewall along the stilling basin for all cases. Generally, the air concentrations on the sidewall (Cs) were higher than those on the bottom (Cb). Peterka [24] proved that when the air concentration on the structure's surface is 1.0-2.0%, cavitation damage can be considerably reduced. It is worth noting that the flow zone in the vicinity of the sill had relatively small air concentration values (i.e., below 1%) both on the bottom and on the sidewall. Within these zones, the pressure profiles fluctuated markedly, especially at the location upstream of the sill, as observed in Figure 7. Thus, the resultant slight air entrainment and large pressure fluctuations should be focused upon in future work. In the term "x/y" of the air concentration data in Table 2, x and y are the experimental data on the bottom and the sidewall, respectively.
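The screening implied by Peterka's criterion can be sketched as follows: flag the measuring stations where both bottom and sidewall air concentrations fall below 1%, marking zones that may need cavitation protection. The function and its threshold default are illustrative.

```python
# Minimal sketch: flag cavitation-prone measuring stations following the
# 1.0-2.0% air-concentration criterion of Peterka [24] (1% used here).
def cavitation_prone(stations, c_bottom, c_side, threshold=1.0):
    # concentrations in percent, aligned with the station list (metres)
    return [x for x, cb, cs in zip(stations, c_bottom, c_side)
            if cb < threshold and cs < threshold]

# cavitation_prone([0.35, 0.45], [0.4, 1.6], [0.8, 2.1]) -> [0.35]
```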
Conclusions
Downstream of a sluice gate or weir, wave-type flows may occur under no-tailwater conditions in a stilling basin. Experimental tests were conducted on five different sill configurations, varying the sill position and the sill height, at five test unit discharges between 0.102 m²/s and 0.230 m²/s. The main findings are summarized as follows:
(1) When a sill is located near the upstream weir flow (i.e., the weir toe in this study), the jump types of the wave-type flow can be classified as submerged and non-submerged. The submerged wave-type flow, corresponding to a higher relative sill height, was relevant for obtaining a better aeration effect.
(2) The ambient pressure head of the wave-type flow (i.e., the bottom pressure of the stilling basin) is strongly influenced by the flow pattern. Pressure fluctuations were more significant in the vicinity of the sill; these are caused by the movement of the turbulent region of the jump, especially during the change of the wave-type flow from a submerged jump to a non-submerged jump. The streamwise mean bottom pressure profile revealed the existence of three distinct flow zones: (1) deflection zone, (2) jump zone, and (3) wave impact zone. There were two peak pressure points along the stilling basin, and their values can be distinguished by the upstream Froude number and the position and height of the sill.
(3) The air concentrations on the bottom and the sidewall were also affected by the flow pattern. For a given sill-controlled stilling basin, the air concentrations on the bottom and the sidewall decreased with increasing unit discharge. The flow zone in the vicinity of the sill had slight air entrainment and significant pressure fluctuations, which may make it prone to cavitation. Thus, this region near the sill should be focused upon in order to provide protection.
The findings from this study have the potential to expand the application of sill-controlled stilling basins in hydraulic engineering by establishing new relationships. Additionally, the results can be used to improve the accuracy of CFD models in predicting wave-type flow behavior.
Figure 4. Layout of air concentration and pressure measurement points: (a) side view; (b) plan view.
Figure 7. Maximum, mean and minimum pressure head on the bottom along the stilling basin at different unit discharges for Case M32 and a CHJ: (a) qw = 0.102 m²/s; (b) qw = 0.154 m²/s; (c) qw = 0.188 m²/s; (d) qw = 0.205 m²/s; (e) qw = 0.230 m²/s; (f) qw = 0.93 m²/s for a classical hydraulic jump [16].
Figure 9. The relative first and second pressure peaks: (a) first peak; (b) second peak.
Figure 10. Air concentrations on the bottom (Cb) and the sidewall (Cs) for both Case M32 and M31 at different qw values: (a) Case M32; (b) Case M31.
Table 1. Hydraulic and geometrical parameters of the sill-controlled stilling basin. NSWTF and SWTF are abbreviations for non-submerged and submerged wave-type flow, respectively.
Table 2. Air concentrations on the bottom and the sidewall (%).
2023-04-20T15:20:26.771Z
2023-04-18T00:00:00.000
{ "year": 2023, "sha1": "5165f1d42a8eff1c2289b311b9713b37dc5c6657", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/13/8/5053/pdf?version=1681810506", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ae6f655074e38e3c0830b4211c938edf8a84de1f", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [] }
119620417
pes2o/s2orc
v3-fos-license
Mollification of $\mathcal{D}$-solutions to Fully Nonlinear PDE Systems

In a recent paper (arXiv:1501.06164) the author has introduced a new theory of generalised solutions which applies to fully nonlinear PDE systems of any order and allows the interpretation of merely measurable maps as solutions. This approach is duality-free and builds on the probabilistic representation of limits of difference quotients via Young measures over certain compactifications of the "state space". Herein we establish a systematic regularisation scheme for this notion of solution which, by analogy, is the counterpart of the usual mollification by convolution of weak solutions and of the mollification by sup/inf convolutions of viscosity solutions.

Introduction

is valued. Obviously, D_i ≡ ∂/∂x_i, x = (x_1, ..., x_n), u = (u_1, ..., u_N) and R^{Nn^1}_s = R^{Nn}. In the recent paper [K8] we introduced a new theory of generalised solutions which allows merely measurable maps to be rigorously interpreted and studied as solutions of systems with even discontinuous coefficients and without requiring any structural assumptions (like ellipticity or hyperbolicity). Namely, our approach applies to measurable solutions u : Ω ⊆ R^n → R^N of the pth order system

(1.2)  F(x, u(x), D^[p]u(x)) = 0,  x ∈ Ω,

where D^[p]u := (Du, D²u, ..., D^p u) denotes the pth order jet of u. Since we do not assume that solutions must be locally integrable on Ω, the derivatives Du, ..., D^p u may not have a classical meaning, not even in the distributional sense. Using this new approach, in the very recent papers [K8, K9, K10] we studied efficiently certain interesting problems arising in PDE theory and in the vectorial Calculus of Variations, which we discuss briefly at the end of the introduction.

In the present paper we are concerned with the development of a systematic method of mollification of generalised solutions to the fully nonlinear system (1.2) by constructing approximate smooth solutions to approximate systems. The mollification method we establish herein is the counterpart of the standard mollification by convolution, which is the standard analytical tool in the study of weak solutions, and of the so-called sup/inf convolutions used in the theory of viscosity solutions of Crandall-Ishii-Lions (for a pedagogical introduction we refer to [K7]). Our starting point for the definition of solution is based neither on standard duality considerations via integration by parts (the cornerstone of weak solutions) nor on the maximum principle (the mechanism of the more recent method of viscosity solutions). Instead, we build on the probabilistic representation of limits of difference quotients by utilising Young measures, also known as parameterised measures. These are well-developed objects of abstract measure theory of great importance in the Calculus of Variations and PDE theory (see e.g. [K8] and [E, P, FL, CFV, FG, V, KR]). In the present setting, a version of Young measures is utilised in order to define generalised solutions of (1.2) by applying them to the difference quotients of the candidate solution. The essential idea, restricted to the first order case p = 1 of (1.2), goes as follows: let u ∈ W^{1,1}_loc(Ω, R^N) be a strong solution to

(1.3)  F(x, u(x), Du(x)) = 0,  a.e. x ∈ Ω.

That is, we understand the gradient Du : Ω ⊆ R^n → R^{Nn} as a probability-valued map given by the Dirac measure at the gradient, in hopes of relaxing the requirement to have a concentration measure.
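In symbols, the Dirac-measure reading of a strong solution can be sketched as follows; this display is our rendering of the step just described, with labels and phrasing that are not quoted from the source:

```latex
% A strong solution u of (1.3) induces the Dirac-valued Young map
%   Du : \Omega \to \mathscr{P}(\mathbb{R}^{Nn}), \quad x \mapsto \delta_{Du(x)},
% and (1.3) becomes a condition on the support of this measure:
\[
  F\big(x,\, u(x),\, X\big) \;=\; 0
  \quad \text{for all } X \in \operatorname{supp}\big(\delta_{Du(x)}\big)
  = \{\,Du(x)\,\},
  \qquad \text{a.e. } x \in \Omega .
\]
```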
The goal is to allow instead general probability-valued maps arising as limits of the difference quotients of nonsmooth maps. Indeed, if u : Ω ⊆ R^n → R^N is only measurable, we consider the probability-valued mappings

δ_{D^{1,h}u} : Ω ⊆ R^n → P(R̄^{Nn}),  x ↦ δ_{D^{1,h}u(x)},

where D^{1,h} is the usual difference quotient operator and R̄^{Nn} is the Alexandroff 1-point compactification R^{Nn} ∪ {∞}. Namely, we view δ_{D^{1,h}u} as an element of the space of Young measures Y(Ω, R̄^{Nn}) (see Section 2 for the precise definitions). By using that Y(Ω, R̄^{Nn}) is weakly* compact, there always exist probability-valued mappings Du ∈ Y(Ω, R̄^{Nn}) such that, along infinitesimal subsequences (h_m)_{m=1}^∞,

(1.4)  δ_{D^{1,h_m}u} ⇀* Du  in Y(Ω, R̄^{Nn})  as m → ∞,

and the system (1.3) can be required to hold in the sense that

(1.5)  F(x, u(x), X) = 0  for all X ∈ supp*(Du(x)),  a.e. x ∈ Ω,

for any "diffuse" gradient Du, where supp*(Du(x)) := supp(Du(x)) \ {∞}. Since (1.4) and (1.5) are independent of the regularity of u, they can be taken as a notion of diffuse derivatives of the measurable map u : Ω ⊆ R^n → R^N and of D-solutions to the PDE system (1.3), respectively. If u happens to be weakly differentiable, then we have Du = δ_{Du} a.e. on Ω and we reduce to strong solutions. Except for a small further technical generalisation (we may need to take special difference quotients depending on F), (1.4) and (1.5) comprise our notion of generalised solutions in the first order case of (1.2).

This paper is organised as follows. The introduction is followed by Section 2, which is a quick review of the main points of [K8] necessary for this work. The main results of this paper are in Section 3 (Theorem 11 and Corollary 12) and establish the main properties of our approximations. The technical core of our analysis is contained in the preparatory Lemma 10. We expect the analytical results established herein to play a prominent role in future developments of the theory, but we refrain from providing any immediate applications in this paper.

We conclude this introduction with some results recently obtained by using the technology of D-solutions. Our motivation to introduce them primarily comes from the necessity to study the recently discovered equations arising in the vectorial Calculus of Variations in the space L^∞, that is, for variational problems related to functionals

(1.6)  E_∞(u, Ω) := ‖H(·, u, Du)‖_{L^∞(Ω)}

applied to Lipschitz maps u : Ω ⊆ R^n → R^N (for an introduction to the topic we refer to [C, BEJ, K7]). In the simplest case of H(·, u, Du) = |Du|², the analogue of the Euler-Lagrange equation is the ∞-Laplace system

(1.7)  Δ_∞ u := (Du ⊗ Du + |Du|² [Du]^⊥ ⊗ I) : D²u = 0,

where [Du]^⊥ is the orthogonal projection on the orthogonal complement of the range of Du. The vectorial case of the theme has been pioneered by the author in a series of recent papers [K1]-[K6], while the scalar case is relatively standard by now and has been pioneered by Aronsson in the 1960s ([K7]). In the paper [K8] we studied the Dirichlet problem for (1.7), while in [K9] we studied the Dirichlet problem for the system arising from the general functional (1.6) for n = 1. In [K9] we also considered the Dirichlet problem for fully nonlinear 2nd order degenerate elliptic systems, and in [K10] we considered the problem of equivalence between distributional and D-solutions for linear symmetric hyperbolic systems.

¹ The version of the definition we are using herein is different from the one we put foremost in [K8]-[K10] because this simplifies the proofs that follow. In [K8] we proved several equivalent formulations of the same notion which we will not utilise, so we take this version as primary here.

2. A quick guide to D-solutions for fully nonlinear systems

2.1. Preliminaries. We begin with some basic material needed in the rest of the paper.
Basics. The constants n, N ∈ N will always denote the dimensions of the domain and the target of our candidate solutions u : Ω ⊆ R^n → R^N, defined over an open set. Such mappings will always be understood as being extended by zero on R^n \ Ω. Unless indicated otherwise, Greek indices α, β, γ, ... will run in {1, ..., N} and Latin indices i, j, k, ... (perhaps indexed i_1, i_2, ...) will run in {1, ..., n}, even when their range of summation may not be given explicitly. The norm symbols |·| will always mean the Euclidean ones, whilst Euclidean inner products will be denoted either by "·" on R^n, R^N or by ":" on tensor spaces, for example on R^{Nn^p}_s. Our measure-theoretic and function-space notation is either standard, as e.g. in [E, E2], or self-explanatory. For example, the modifier "measurable" will always mean "Lebesgue measurable", the Lebesgue measure will be denoted by |·|, the characteristic function of a set E by χ_E, the L^p spaces of maps u : Ω ⊆ R^n → R^N by L^p(Ω, R^N), etc. We will systematically use the Alexandroff 1-point compactification of R^{Nn^p}_s, denoted by

R̄^{Nn^p}_s := R^{Nn^p}_s ∪ {∞}.

The metric topology on it will be the standard one, which makes it isometric to the sphere of equal dimension (via the stereographic projection which identifies the north pole with infinity {∞}). We also note that balls taken in R^{Nn^p}_s (which we will view as a metric vector space isometrically contained in R̄^{Nn^p}_s) will be understood as Euclidean.

Young Measures. Let E be a measurable subset of R^n and K a compact subset of some Euclidean space, which we will later take to be R̄^{Nn} × ··· × R̄^{Nn^p}_s.

Definition 1 (Young Measures). The set of Young measures Y(E, K) consists of the probability-valued mappings ϑ : E → P(K) which are measurable in the following weak* (i.e. pointwise) sense: for any continuous function Ψ ∈ C^0(K), the function E ⊆ R^n → R given by

x ↦ ∫_K Ψ(X) d[ϑ(x)](X)

is measurable.

The set Y(E, K) can be identified with a subset of the unit sphere of a certain L^∞ space, and this provides very useful compactness and other properties. Consider the L^1 space L^1(E, C^0(K)) of Bochner integrable maps which are valued in the separable space C^0(K) of continuous functions over K. For background material on these spaces we refer e.g. to [FL, Ed, F, V]. The elements of L^1(E, C^0(K)) coincide with the Carathéodory functions, in the sense that each such Φ induces a map E ∋ x ↦ Φ(x, ·) ∈ C^0(K). By Carathéodory functions we mean that for every X ∈ K the function x ↦ Φ(x, X) is measurable and for a.e. x ∈ E the function X ↦ Φ(x, X) is continuous. The Banach space L^1(E, C^0(K)) is separable and, by using the duality (C^0(K))* = M(K), it can be shown that (see e.g. [FL])

(L^1(E, C^0(K)))* = L^∞_{w*}(E, M(K)),

the space of measure-valued maps E → M(K) which are weakly* measurable; the norm of the space is given by

‖ϑ‖_{L^∞_{w*}(E, M(K))} = ess sup_{x ∈ E} ‖ϑ(x)‖(K).

Here "‖·‖(K)" denotes the total variation on K. The duality pairing between the spaces, ⟨·,·⟩ : L^∞_{w*}(E, M(K)) × L^1(E, C^0(K)) → R, is given by

⟨ϑ, Φ⟩ := ∫_E ∫_K Φ(x, X) d[ϑ(x)](X) dx.

Then, the set of Young measures can be identified with a subset of the unit sphere of L^∞_{w*}(E, M(K)):

Y(E, K) = { ϑ ∈ L^∞_{w*}(E, M(K)) : ϑ(x) ∈ P(K) for a.e. x ∈ E }.

Remark 2 (Properties of Y(E, K)). The following facts about Young measures will be extensively used hereafter (the proofs can be found e.g. in [FG]): the set of Young measures is convex and sequentially compact in the weak* topology induced from L^∞_{w*}(E, M(K)).

The next lemma is a minor variant of a classical result (see [K8, FG, FL]), but it plays a fundamental role in our setting because it guarantees the compatibility of strong solutions with D-solutions.

General frames, derivative expansions, difference quotients.
In what follows we will consider non-standard orthonormal frames of R^{Nn^p}_s and write derivatives D^p u with respect to them. This generalisation is irrelevant to the mollification results we establish herein, but it was absolutely essential for the existence-uniqueness results we established in [K8]-[K10]. In any case, these bases will not appear explicitly anywhere in the proofs and they will not imply any technical ramifications. Let {E_1, ..., E_N} be an orthonormal frame of R^N and suppose that for each α = 1, ..., N we have an orthonormal frame {E_{(α)1}, ..., E_{(α)n}} of R^n. Given such bases, we will equip the space R^{Nn^p}_s with the induced orthonormal base

(2.1)  { E_α ⊗ (E_{(α)i_1} ∨ ··· ∨ E_{(α)i_p}) },

where "∨" is the symmetrised tensor product. Given such frames, let D^p_{(α)i_1···i_p} := D_{(α)i_1} ··· D_{(α)i_p} denote the usual pth order directional derivative along the respective directions. Then, the pth order derivative D^p u of a map u : Ω ⊆ R^n → R^N can be expressed as

(2.3)  D^p u = Σ_α Σ_{i_1,...,i_p} (D^p_{(α)i_1···i_p} u_α) E_α ⊗ (E_{(α)i_1} ∨ ··· ∨ E_{(α)i_p}).

We will use the following compact notation for the (formal) Taylor expansion around a point x ∈ Ω:

(2.4)  u(x + z) = Σ_{q=0}^{p} (1/q!) D^q u(x) : z^{⊗q}.

The notation "⊗q" stands for the qth tensor power and ":" is the obvious contraction of indices. Expansions analogous to (2.3) will also be applied to difference quotients, which play a crucial role in our approach. Given a ∈ R^n with |a| = 1 and h ∈ R \ {0}, the 1st order difference quotient of u along the direction a at x will be denoted by

(2.5)  D^{1,h}_a u(x) := [u(x + ha) − u(x)]/h.

By iteration, if h_1, ..., h_p ≠ 0, the pth order difference quotient along a_1, ..., a_p is

D^{p,(h_1,...,h_p)}_{a_1···a_p} u(x) := D^{1,h_p}_{a_p}(··· (D^{1,h_1}_{a_1} u) ···)(x).

We now introduce difference quotients taken with respect to frames as in (2.1).

Definition 4 (Difference quotients). Let {E_1, ..., E_N} be an orthonormal frame of R^N and let also {E_{(α)1}, ..., E_{(α)n}} be, for each α = 1, ..., N, an orthonormal frame of R^n, while for any p ∈ N the tensor space R^{Nn^p}_s is equipped with the frame (2.1). Given any vector-indexed infinitesimal sequence (h_m)_{m∈N^p}, we define the pth order difference quotients D^{p,h_m}u of the measurable mapping u : Ω ⊆ R^n → R^N (with respect to the fixed reference frames) arising from (h_m)_{m∈N^p} as the family of maps D^{p,h_m}u : Ω → R^{Nn^p}_s, each built component-wise along the frame directions from iterated difference quotients, with the bracket notation as in (2.4), (2.5). Further, given any matrix-indexed infinitesimal sequence (h_m)_{m∈N^{p×p}}, we will denote its nonzero row elements by m_q := (m^1_q, ..., m^q_q) ∈ N^q, q = 1, ..., p, and we define the pth order jet D^{[p],h_m}u of difference quotients of u (with respect to the reference frames) arising from (h_m)_{m∈N^{p×p}} as the family of maps

D^{[p],h_m}u := (D^{1,h_{m_1}}u, ..., D^{p,h_{m_p}}u).

Definition 5 (Multi-indexed convergence). If m ∈ N^{p×p} is a lower triangular matrix of indices as above, the expression "m → ∞" will by definition mean successive convergence with respect to each index separately, in a fixed prescribed order.
Assume also that we have fixed some reference frames as in Definition 4 and consider the pth order PDE system We say that the measurable map u : Ω ⊆ R n −→ R N is a D-solution of (2.6) when for any diffuse pth order Jet D [p] u of u arising from any infinitesimal multi-indexed sequence (h m ) m∈N p×p (Definition 6) we have We now consider the consistency of the D-notions with the strong/classical notions of solution. For more details we refer to [K8] and also to [K9, K10]. In general, diffuse derivatives may be nonunique for nonsmooth maps. However, as the next simple consequence of Lemma 3 shows, they are compatible with weak derivatives and a fortiori with classical derivatives: Lemma 8 (Compatibility of weak and diffuse derivatives). If u ∈ W p,1 loc (Ω, R N ), then the pth order diffuse Jet D [p] u is unique and for any k ∈ N we have The next result asserts the plausible fact that D-solutions are compatible with strong solutions. Its proof is an immediate consequence of Lemma 8. Proposition 9 (Compatibility of strong and D-solutions). Let F be a Carathéodory map as in (1.1) and u ∈ W p,1 loc (Ω, R N ). Consider the pth order PDE system Then, u is a D-solution on Ω if and only if u is a strong a.e. solution on Ω. Lemma 8 and Proposition 9 remain true if u is merely p-times differentiable in measure, a notion weaker than approximate differentiability (see [K8,AM]). For more details on the material of this section (e.g. analytic properties, equivalent formulations of Definition 7, etc) we refer to [K8]- [K10]. Mollification of D-solutions to fully nonlinear systems We begin with the next result which is the main technical core of our constructions. Our method of proof is inspired by the paper of Alberti [A]. Lemma 10 (Construction of the approximations). Let Ω ⊆ R n −→ R N be a measurable map and p ∈ N. Then, for any ε > 0 and any multi-index m ∈ N p×p as in Definition 4 , there exist a measurable set E ε,m ⊆ Ω and a smooth map The reader can easily be convinced that even if u ∈ L 1 loc (Ω, R N ), the standard mollifier u * η ε of u does not satisfy these approximation properties (the best we can get is approximation in dual spaces, not almost uniform on Ω). Proof of Lemma 10. Let u : Ω ⊆ R n −→ R N be a given measurable map (extended on R n \ Ω by zero). Let D [p],hm u be the Jet of pth order difference quotients of u where the multi-index m ∈ N p×p is fixed. We also fix ε > 0. Step 1. We may assume that Ω has finite measure. This hypothesis does not harm generality for the following reason: assuming we have established (3.1), (3.2) on subdomains of Ω which have finite measure, we can fill Ω a.e. by disjoint open cubes (Ω i ) ∞ 1 such that Ω \ ∪ ∞ 1 Ω i = 0 and on each Ω i (3.1) holds with 2 −i ε instead of ε for respective sequences of functions (u ε,m,i ) ∞ i=1 and sets (E ε,m,i The conclusion of Lemma 10 then follows. Step 2. We now show there exists a measurable set and smooth maps Indeed, let us define and for any R > 0 we consider the truncation for any R > ε we have that Further, we can find a sequence of smooth compactly supported maps (V ε,k ) ∞ k=1 such that for any s ∈ [1, ∞), V ε,k − T R(ε) (V ) −→ 0, in L s (Ω) and a.e. on Ω as k → ∞. In view of the identity (3.17) |Ω αδ | = α n |Ω δ |, by choosing and (3.15) has been established as well. For each i ∈ N such that Q δ,i ⊆ Ω, we consider a cut off function and let x δ,i ∞ 1 := the centres of the cubes Q δ,i ∞ i=1 ⊆ R n . 
We now define a mapping u ε,m ∈ C ∞ c (Ω, R N ) which is given by Then, by (3.19), on each Q αδ,i ⊆ Ω, the map u ε,m equals the restriction of a pth order polynomial and hence for any k ∈ {1, ..., p}, we have for x ∈ Q αδ,i . By recalling the properties of the measurable set F ε,m ⊆ Ω of (3.3), for any k ∈ {0, 1, ..., p} and for a.e. x ∈ Q αδ,i \ F ε,m , we have which by (3.9) gives that a.e. on ∈ Q αδ,i \ F ε,m . By (3.21) and (3.10)-(3.14) we deduce that Finally, we set Hence, by replacing ε by ε/3p, we see that (3.1) has been established. Step 4. We now establish (3.2) under the additional hypothesis that u ∈ L r (Ω, R N ). We begin by noting that on top of (3.3) we can also arrange to have (3.25) u − U 0,ε,m L r (Ω) ≤ ε. This follows by the next simple modification of Step 2: we replace T R (V ) by u, T R D [p],hm u , choose s := r and use that u can be approximated in the L r norm by smooth compactly supported mappings. Then we obtain (3.25) and the first and last inequalities of (3.3). The middle inequality of (3.3) follows by (3.25) and (by perhaps modifying F ε,m and the choice of ε accordingly): Next, by (3.25) and by (3.20) we have that Moreover, by (3.20) (and (3.19)) we have the estimates By inserting (3.27) into (3.26) we obtain the estimate (3.28) In view of (3.28) and of (3.13)-(3.18), by decreasing δ further and by increasing α even further, we can achieve u − u ε,m L r (Ω) ≤ 7ε. Hence, the desired conclusion follows by replacing ε by ε/7. The lemma has been established. By utilising Lemma 10, we may now state and prove the main result of this paper. Theorem 11 (Mollification of D-solutions). Let Ω ⊆ R n be an open set and Then, there exists a multi-indexed sequence of maps (u m ) m∈N p×p ⊆ C ∞ 0 (Ω, R N ) with the following properties: for any diffuse pth order jet D [p] u generated along subsequences of an infinitesimal multi-indexed sequence (h m ) m∈N p×p , there exists a single-indexed subsequence with the following properties: (3.30) both as ν → ∞. In addition, for each ν ∈ N, the map u ν ∈ C ∞ 0 (Ω, R N ) is a smooth strong solution to the approximate pth order PDE system x ∈ Ω and f ν : Ω ⊆ R n −→ R M is a measurable map satisfying f ν −→ 0 as ν → ∞ in the following sense: on Ω. Note that the second statement of (3.30) is interesting because the (fibre) product Young measure D [p] u can be weakly* approximated by the product Young measures δ D [p] u ν as ν → ∞ (confer with Definition 6). The following consequence of our constructions will also follow from Theorem 11. Corollary 12 (Mollification of D-solutions contn'd). In the setting of Theorem 11, the mode of convergence (3.32) is (up to the passage to a subsequence) equivalent to either of the modes of convergence as ν → ∞: on Ω. (b) For any ε > 0, and any E ⊆ Ω with |E| < ∞, In addition, we have The next example shows that in general it is not possible to strengthen the modes of convergence in (3.32)-(3.35) to L p or even to a.e. on Ω even for linear equations: Example 13 (Optimality of Theorem 11). Consider the equation (I) Let u c ∈ C 0 0 (0, 1) be a Cantor-type function 2 with u c = 0 and u c = 0 a.e. on (0, 1). Then, u c is a D-solution because it satisfies the equation a.e. on (0, 1). However, for any hypothetical approximate equation on Ω and f ε −→ 0 in L 1 (0, 1), by Poincaré inequality we have u ε c L 1 (0,1) ≤ C u ε c L 1 (0,1) = C f ε L 1 (0,1) −→ 0, as ε → 0, while u ε c −→ u c a.e. on Ω and u c = 0. 
(II) Let u s ∈ L 1 c (0, 1) be a singular solution to the equation such that for any diffuse derivative Du s ∈ Y (0, 1), R , we have Du s ≡ δ {∞} on a set of positive measure. Such a solution can be constructed by taking a compact nowhere dense set K ⊆ (0, 1) of positive measure (e.g., we may take 1 is an enumeration of the rationals in [1/3, 2/3]) and setting u s := χ K . Then, u s satisfies as h → 0. By Lemma 3, we have that all diffuse gradients coincide and are given by a.e. x ∈ (0, 1). However, it is impossible to obtain f ε −→ 0 a.e. on (0, 1) for any approximation scheme as in the theorem such that For the proof of Theorem 11 we need three lemmas which are given right next. The first two are variants of results established in [K8], while the third one is a consequence of standard results on Young measures. They are all given below in the generality of Young measures because they do not utilise the special structure of diffuse derivatives. For the sake of completeness, we provide the first two results in full by giving all the details of their proofs, while for the third we give a precise reference. Lemma 14 (Convergence lemma V2, cf. [K8]). Suppose that u m −→ u ∞ a.e. on Ω, as m → ∞ where u ∞ , (u m ) ∞ 1 are measurable maps Ω ⊆ R n −→ R N . Let W be a finite dimensional metric vector space, isometrically and densely contained into a compactification K of itself. Suppose also we are given Carathéodory mappings and we are also given Young measures ϑ ∞ , (ϑ) ∞ 1 in Y Ω, K such that the following modes of convergence hold true: Then, if for a given function Φ ∈ C 0 c (W) we have We will later apply this lemma to the case of W = R N n × · · · × R N n p s for the compactification of W given by the torus K = R N n × · · · × R N n p s . We remind that the metric on W is the product metric induced by the imbedding of R N n q s into R N n p s , q = 1, ..., p. Proof of Lemma 14. We first fix Φ ∈ C 0 c (W) and define and we claim that our convergence hypotheses imply φ m −→ 0 a.e. on Ω. Indeed, let us fix x ∈ Ω such that u m (x) −→ u ∞ (x) (and the set of such points x has full measure). Then, we can find compact sets K R N and K W such that for large m ∈ N we have u m (x), u ∞ (x) ∈ K and also supp(Φ) ⊆ K . By the convergence assumption on the maps F m , we have as m → ∞. Since this happens for a set of points x ∈ Ω of full measure, we deduce that φ m (x) −→ 0 for a.e. x ∈ Ω. We now fix R > 0 and Φ ∈ C 0 c (W) as in the statement of the lemma and set This convergence and the form of Ω R imply that the Carathéodory functions Thus, the weak*strong continuity of the duality pairing We recall now that by our hypothesis the right hand side of the above vanishes in order to obtain the desired conclusion after letting i → ∞ and then taking R → ∞. The next lemma says that if the distance between two sequences of measurable maps asymptotically vanishes, then the maps represent the same Young measure in the compactification (see also [K8,FL]). Lemma 15. Let W be a finite dimensional metric vector space, isometrically and densely contained into a compactification K of itself. Let also E ⊆ R n be a measurable set. If U m , V m : E ⊆ R n −→ W are measurable maps satisfying Proof of Lemma 15. We begin by fixing ε > 0, φ ∈ L 1 (E) and Φ ∈ C 0 (K). 
Since Φ is uniformly continuous on K, there exists a bounded increasing modulus of continuity ω ∈ C 0 [0, ∞) such that By letting first m → ∞ and then ε → 0, the density in L 1 E, C 0 (K) of the linear span of the products of the form φ(x)Φ(X) implies the desired conclusion. The following result is the last ingredient needed for the proof of Theorem 11. Lemma 16. Let W and W be finite dimensional metric vector spaces, isometrically and densely contained into certain compactification K and K of W , W respectively. Let also E ⊆ R n be a measurable set. If U m : E ⊆ R n −→ W , V m : E ⊆ R n −→ W are sequences of measurable maps satisfying Proof of Lemma 16. The proof of this result can be found (actually in a much more general topological setting) e.g. in [FG], Corollary 3.89 on p. 257. Now we may prove our main result. Proof of Theorem 11. Step 1. We begin by noting a general convergence fact. Let (X, ρ) be a metric space, f ∈ X and let also m ∈ N p×p be a matrix of indices as in Definition 4. Suppose that {f m } m∈N p×p ⊆ X is a multi-indexed sequence such that the successive limit converges to f in X (Definition 5): Then, there exist subsequences (m b a,ν ) ∞ ν=1 , a, b ∈ {1, ..., p}, such that, if (m ν ) ∞ ν=1 ⊆ N p×p is the single-indexed sequence with components m b a,ν , then f m ν converges to f in X: lim This is a simple consequence of the definitions of limits. Step 2. Let u : Ω ⊆ R n −→ R N be a D-solution to the system and let D [p] u = Du × · · · D p u be a pth order Jet of u arising along matrix-indexed infinitesimal subsequences as m → ∞ (Definitions 4, 5, 6, 7). Since the weak* topology on the Young measures is metrisable, we may apply Step 1 to for some metric ρ inducing the weak* topology. Thus, we infer that there exists a single-indexed subsequence (m ν ) ∞ 1 ⊆ N p×p such that lim Since u is measurable, by invoking Lemma 10 for ε = 1/|m| we obtain a multiindexed sequence (u m ) m∈N p×p ⊆ C ∞ 0 (Ω, R N ). Let (u ν ) ν∈N be the subsequence of it corresponding to (m ν ) ∞ 1 . Then, by (3.1) and by recalling that almost uniform implies a.e. convergence, we immediately have u ν −→ u a.e. on Ω as ν → ∞. Further, again by (3.1) we have on Ω, as ν → ∞. By Lemma 15 and the above, we obtain Step 3. We now define for each ν ∈ N the mapping f ν : Ω ⊆ R n −→ R M given by In order to conclude the theorem, we seek to show that for any Φ ∈ C 0 c R N n × · · · × R N n p s , we have that for a.e. x ∈ Ω, Φ D [p] u ν (x) f ν (x) −→ 0 as ν → ∞. This last statement is a consequence of the Convergence Lemma 14. Indeed, since u is a D-solution on Ω, for a.e. x ∈ Ω, we have F x, u(x), X = 0, X ∈ supp * D [p] u(x) . We fix such an x as above and we choose a function Φ as above. Then, we note that the definition of D-solutions implies that the continuous function R N n × · · · × R N n p s X −→ Φ(X) F x, u(x), X ∈ R is well-defined on the compactification and vanishes on the support of the probability measure D [p] u(x). Hence, we have R N n ×···×R N n p s Φ(X) F x, u(x), X d D [p] u(x) (X) = 0, a.e. x ∈ Ω. In addition, by utilising that u ν −→ u a.e. on Ω \ E and Lemma 16, we have = Ω \E R N n ×···×R N n p s min F x, u(x), X , 1 d D [p] u(x) (X) dx. Since u is a D-solution, we have sup X∈ supp * (D [p] u(x)) F x, u(x), X = 0 a.e. on Ω. Moreover, for any ε ∈ (0, 1) we have the inequality Further, we note that for a.e. x ∈ Ω \ E, D [p] u(x) is a probability measure on R N n × · · · × R N n p s (not just on the compactification). 
By recalling the definition of the function ψ_ν, (3.36) and the above observations give the desired convergence. Conclusively, we have obtained that f_ν → 0 locally in measure on Ω \ E and hence, up to a subsequence, f_ν → 0 a.e. on Ω \ E as ν → ∞. The corollary has been established.
2015-08-22T15:01:15.000Z
2015-08-22T00:00:00.000
{ "year": 2015, "sha1": "701ccff2be1a1f680e031e445dea45464551c73f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "701ccff2be1a1f680e031e445dea45464551c73f", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
5403852
pes2o/s2orc
v3-fos-license
Predictors of postoperative decline in quality of life after major lung resections

Objective: Severe impairment in quality of life (QoL) is one of the major patients' fears about lung surgery. Its prediction can be valuable information for both patients and physicians. The objective of this study was to identify predictors of clinically relevant decline of the physical and emotional components of QoL after lung resection. Methods: This is a prospective observational study on 172 consecutive patients submitted to lobectomy or pneumonectomy (2007-2008). QoL was assessed before and 3 months after operation through the administration of the Short Form 36v2 survey. The relevance of the perioperative changes in physical component summary (PCS) and mental component summary (MCS) scales was measured by the Cohen's effect size (mean change of the variable divided by its baseline standard deviation). An effect size > 0.8 is regarded as large and clinically relevant. QoL changes were dichotomized according to this threshold. Logistic regression and bootstrap analyses were used to identify reliable predictors of large decline in PCS and MCS. Results: A total of 48 patients (28%) had a large decline in the PCS scale and 26 (15%) in the MCS scale. Patients with better preoperative physical functioning (p = 0.0008) and bodily pain (p = 0.048) scores and those with worse mental health (p = 0.0007) scores were those at higher risk of a relevant physical deterioration. Patients with a lower predicted postoperative forced expiratory volume in 1 s (ppoFEV1; p = 0.04) and higher preoperative scores of social functioning (p = 0.02) and mental health (p = 0.06) were those at higher risk of a relevant emotional deterioration. The following logistic equations were derived to calculate the risk of decline in the physical or emotional components of QoL, respectively: risk of physical decline = R/(1 + R), where ln R = -11.6 + 0.19 × PF (physical functioning) + 0.05 × BP (bodily pain) - 0.05 × MH (mental health); risk of emotional decline = R1/(1 + R1), where ln R1 = -8.06 - 0.03 × ppoFEV1 + 0.11 × SF (social functioning) + 0.055 × MH. Conclusions: A consistent proportion of patients undergoing lung resection exhibit an important postoperative worsening in their QoL. We were able to identify reliable risk factors and predictive equations estimating this decline. These findings may be used as selection criteria for efficacy trials on perioperative physical rehabilitation or psychological treatments, during preoperative counseling, in the surgical decision-making process and for selecting those patients who would benefit from physical and emotional supportive programs.

Introduction

The decision to proceed to surgery is often based on measurable physiologic parameters mainly associated with perioperative morbidity and mortality. Little attention in this regard is paid to the patients' physical and emotional perceptions and expectations. However, what patients fear most about lung surgery is, in most cases, not so much the risk of cardiopulmonary morbidity, but rather to be left physically and mentally handicapped and unable to resume a decent daily lifestyle [1]. Although an abundance of risk models has been published to predict perioperative morbidity and mortality, scant information exists regarding the prediction of postoperative quality of life (QoL). The objective of this investigation was to develop models predicting the risk of decline of physical and emotional components of QoL after lung resection.
Patients and methods

This is a prospective longitudinal study of 172 consecutive patients submitted to lobectomy (160 cases) or pneumonectomy (12 cases) from January 2007 to December 2008 for non-small-cell lung cancer (NSCLC) at a single center. QoL of all patients was assessed before (within 1 month of the operation) and after (3 months) the operation by the administration of the Short Form 36v2 (SF36v2) survey. The study was approved by the local Institutional Review Board of the hospital, and all patients gave their informed consent to participate. Postoperative early mortality was 1.1% (two patients), and another 14 patients were not available for follow-up at 3 months; these patients were excluded from the analysis. Similarly, all those patients who underwent extended procedures such as pleuropneumonectomy and chest wall or diaphragm resections were not included. Operability exclusion criteria included a predicted postoperative forced expiratory volume in 1 s (ppoFEV1) and a predicted postoperative diffusion lung capacity for carbon monoxide (ppoDLCO) below 30% of the predicted value in association with a VO2 peak (VO2, maximal oxygen uptake) lower than 10 ml kg-1 min-1. As a rule, all operations were performed through a lateral muscle-sparing, nerve-sparing [2] thoracotomy by board-certified thoracic surgeons. Patients were extubated in the operating room and transferred to a dedicated thoracic ward. Postoperative management focused on early-as-possible mobilization, antithrombotic and antibiotic prophylaxis, and physical and respiratory rehabilitation. Thoracotomy chest pain was assessed at least twice daily and controlled through a systemic continuous infusion of non-opioid drugs. Therapy was titrated to achieve a visual analog score < 5 (on a scale ranging from 0 to 10) during the first 48-72 h. This regimen was usually switched to an oral therapy after removal of chest tubes. No formal preadmission or post-discharge physiotherapy or psychological supportive programs were administered. Neurological or psychotropic personal medications, if present, were generally resumed the day following surgery. QoL was assessed by the SF36v2 questionnaire [3], which is a generic instrument assessing eight physical and mental health concepts (PF, physical functioning; RP, role limitation caused by physical problems; BP, bodily pain; GH, general health perception; VT, vitality; SF, social functioning; RE, role limitation caused by emotional problems; and MH, mental health). Scores standardized to norms and weighted averages are used to create physical component summary (PCS) and mental component summary (MCS) scores on a standard scale. In the SF36v2, all health dimension scores are standardized to norms by employing a linear transformation of data originally scored on a 0-100 scale. Norm-based scores have a mean of 50 and a standard deviation of 10. As a consequence, for all health dimensions and component scales, any score < 50 falls below the general population mean and each point represents 1/10th of a standard deviation. This allows for a direct comparison of measures among different populations and scales.

Statistical analysis

The importance of the perioperative changes in physical and emotional composite scales (PCS and MCS) was measured by the Cohen's effect size method (mean change of the variable divided by its baseline standard deviation) [4]. An effect size > 0.8 is regarded as large and clinically relevant [5].
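To illustrate the effect-size computation and the 0.8 threshold, the short Python sketch below uses made-up norm-based PCS scores, not study data; the per-patient dichotomization shown is one plausible reading of how individual changes are compared against the baseline standard deviation.

```python
import numpy as np

# Hypothetical norm-based PCS scores (mean 50, SD 10) before and after surgery.
pcs_pre  = np.array([52.0, 47.5, 60.1, 38.9, 55.3, 49.0])
pcs_post = np.array([50.5, 39.0, 48.2, 37.0, 44.1, 48.5])

baseline_sd = pcs_pre.std(ddof=1)              # SD of the preoperative scores

# Group-level Cohen's effect size: mean change divided by baseline SD.
group_es = (pcs_pre - pcs_post).mean() / baseline_sd

# Per-patient dichotomization at the same 0.8 threshold.
patient_es = (pcs_pre - pcs_post) / baseline_sd
large_decline = patient_es > 0.8

print(f"group effect size = {group_es:.2f}")
print("large-decline flags:", large_decline.astype(int))
```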
QoL changes were dichotomized according to this threshold (> 0.8 or ≤ 0.8). Stepwise logistic regression analysis was used to identify reliable predictors of a relevant perioperative decline in PCS and MCS (effect size of preoperative minus postoperative values > 0.8). The following variables were initially screened by univariate analysis to be included in the logistic regression: age, body mass index (BMI), forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC), FEV1-to-FVC ratio, diffusion lung capacity for carbon monoxide (DLCO), predicted postoperative values (ppoFEV1 and ppoDLCO), arterial oxygen and carbon dioxide tensions, preoperative hemoglobin level, smoking pack-years, American Society of Anesthesiologists score, Eastern Cooperative Oncology Group score, history of coronary artery disease (CAD), neoadjuvant chemotherapy, diabetes, type of operation (lobectomy vs pneumonectomy), the presence of peripheral vascular disease, history of cerebrovascular accident, and all eight individual QoL physical and emotional domains. Numeric variables were tested by using the unpaired Student's t-test (normal distribution) or the Mann-Whitney test (non-normal distribution). Categorical variables were compared by the Chi-square test or the Fisher's exact test as appropriate. Variables with a p < 0.1 at univariate analysis were used as independent predictors in two stepwise backward logistic regression analyses (dependent variables: relevant decline of PCS or relevant decline of MCS, respectively). A p < 0.1 cutoff was used for retention of variables in the final model. Multicollinearity was avoided by using in the regression only one variable of a set of correlated variables (r > 0.5), which was selected by the bootstrap technique. Bootstrap analyses using 1000 samples of the same number of patients as the original dataset were used to assess the reliability of the final models and predictors. In the bootstrap procedure, repeated samples of the same number of observations as the original database were selected with replacement from the original set of observations. For each sample, stepwise logistic regression was performed. The stability of the final stepwise model can be assessed by identifying the variables that enter most frequently in the repeated bootstrap models and comparing those variables with the variables in the final stepwise model. If the final stepwise model variables occur in a majority (> 50%) of the bootstrap models, the original final stepwise regression model can be judged to be stable [6][7][8]. Statistical analysis was performed with the Stata 9.0 statistical software.

Results

The characteristics of the 172 patients included in this series are provided in Table 1. Compared with the average general population (score 50), 59 patients (34%) had a reduced (< 50) physical composite scale before the operation and 85 (49%) after 3 months. Ninety-three patients (54%) had a depressed (< 50) mental composite scale before surgery and 81 (47%) after 3 months. Forty-eight patients (28%) had a large decline in the physical composite scale and 26 (15%) in the emotional composite scale. Table 2 shows the results of the univariate comparison between patients with and without a large decline of PCS. Table 3 shows the results of the univariate comparison between patients with and without a large decline of MCS. Variables with a p-level < 0.1 were used as independent predictors in logistic regression analyses, whose results are displayed in Table 4.
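Returning to the bootstrap stability rule described in the statistical analysis section above, here is a self-contained Python sketch on simulated data. The variable-selection step is a deliberately simplified stand-in (a correlation screen) for the stepwise logistic regression actually used, and all data and thresholds are invented; only the resample-count-compare logic mirrors the published procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 172 patients, 5 candidate predictors, binary outcome.
n, names = 172, ["PF", "BP", "MH", "ppoFEV1", "age"]
X = rng.normal(50, 10, size=(n, len(names)))
logit = -11.6 + 0.19 * X[:, 0] + 0.05 * X[:, 1] - 0.05 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

def select(Xb, yb, thr=0.15):
    """Placeholder for stepwise selection: keep variables whose absolute
    correlation with the outcome exceeds thr."""
    r = [abs(np.corrcoef(Xb[:, j], yb)[0, 1]) for j in range(Xb.shape[1])]
    return {names[j] for j, rj in enumerate(r) if rj > thr}

final_model = select(X, y)
counts = dict.fromkeys(names, 0)
for _ in range(1000):                      # 1000 bootstrap resamples
    idx = rng.integers(0, n, n)            # sample n rows with replacement
    for v in select(X[idx], y[idx]):
        counts[v] += 1

# Stability rule: final-model variables should enter >50% of bootstrap models.
stable = all(counts[v] > 500 for v in final_model)
print(counts, "stable:", stable)
```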
Independent reliable predictors of physical decline included higher preoperative physical functioning (p = 0.0008) and bodily pain (p = 0.048) scores and a lower mental health (p = 0.0007) score. Reliable predictors of emotional decline were a lower ppoFEV1 (p = 0.04) and higher preoperative scores of social functioning (p = 0.02) and mental health (p = 0.06). Similar to the physical component, the proportion of patients with a postoperative perceived emotional decline was not influenced by the type of operation (pneumonectomy 16% vs lobectomy 15%, p = 0.9), age (elderly > 70 years of age 16% vs younger 15%, p = 0.8), or COPD status (18% vs 13%, p = 0.3), but it was increased in those patients experiencing postoperative complications (24% vs 11%, p = 0.02). This last finding may be explained by a higher psychological burden caused by a longer postoperative stay and the need for more complex management. The following logistic equations were derived to calculate the risk of decline in the physical or emotional composite scales of QoL, respectively:

risk of physical decline = R/(1 + R), where ln R = -11.6 + 0.19 × PF + 0.05 × BP - 0.05 × MH;
risk of emotional decline = R1/(1 + R1), where ln R1 = -8.06 - 0.03 × ppoFEV1 + 0.11 × SF + 0.055 × MH.

Discussion

Patient-centered outcomes are gaining importance in orienting health-care management. The focus of health-care providers and the public is gradually shifting from early postoperative endpoints (such as morbidity and mortality) to long-term outcomes (such as survival, residual function, and QoL). For decades, surgeons' attitude in evaluating surgical success has focused mainly on minimizing the risk of postoperative complications and death. This mentality has led to a physician-driven counseling system, often relegating the patients to a passive role. This has been the case particularly for subjects affected by a fatal disease, such as lung cancer, for which surgery may still represent the only chance of cure. The increased depression and tension-anxiety levels present in these patients compared with the general population [9] and the prospect of a cure may in fact lead them to totally rely on the physician's clinical judgment. Fortunately, in recent years, this trend has changed and there is now greater attention both to what patients really fear about their surgical experience and to the price they are willing to pay for increasing their chance of cure. Many of them are willing to accept even postoperative cardiopulmonary complications, but less so long-term functional disability [1]. For this reason, we as physicians should be willing to inform patients about their postoperative residual functional and emotional status. Unfortunately, despite recent guidelines emphasizing the importance of this parameter [10], validated models to predict postoperative QoL are still lacking. Therefore, the objective of this investigation was to develop equations to predict the risk of postoperative decline of the physical and emotional components of QoL. The intent is twofold: to provide a specific tool for perioperative counseling, assisting in setting patients' expectations about their surgical experience, and to identify those patients at increased risk of QoL deterioration, who may benefit from perioperative and post-discharge physical and mental supportive programs.
QoL was assessed before and after operation by administering the SF36v2 questionnaire [3], which is one of the most-used instruments for evaluating the physical and mental status of patients. This generic type of health measure uses norm-based scores, allowing the comparison of test results of the group under analysis to the general population mean and of scores of different groups of patients. Although other types of surveys exist and are probably more specific for pulmonary and neoplastic patients, the SF36 has a well-established reliability and has been reported to be sensitive to postoperative changes after thoracic surgery for NSCLC [11]. We chose to re-assess the patients after 3 months with the main intent to limit the dropout rate and include the majority of patients undergoing major lung resections during that period. Longer follow-up has been associated with dropout rates as high as 40% in patients with lung cancer [12,13]. This would have inevitably determined a 'cream-skimming' effect, with the best patients selected for evaluation. Indeed, at 3 months we already had 8% of patients lost at follow-up or who refused to be re-assessed. Perioperative changes in physical and emotional composite scales were assessed by the Cohen's effect size [4], a standardized, scale-free mean-difference statistic providing a measure of separation between two group means. This parameter represents a standardized measure of the magnitude of the effect of an intervention, and it is particularly used in the social and behavioral sciences. According to the Cohen's conventional criteria [5], an effect > 0.8 is defined as large. Accordingly, we chose this cutoff to dichotomize the perioperative decline in QoL. Based on these criteria, we found that a considerable proportion of patients experience a large decline in the physical and emotional components of their QoL compared with their preoperative status. Furthermore, compared with the general population, nearly half of the patients displayed a depressed physical and emotional status 3 months after surgery. Although we were not able to find an association between decline in perceived physical status and preoperative physiologic parameters, specific QoL domains were found to be associated with this outcome. Patients with better preoperative physical functioning and bodily pain perception (less symptomatic) and those with worse mental health are those at higher risk of experiencing a large physical decline. For instance, based on the logistic regression equation, a patient with preoperative SF36v2 norm-based PF and BP scores of 55 (falling half an SD above the general population mean) and with an MH score of 45 (falling half an SD below the general population mean) would have a risk of large physical decline of 31%. The risk of perceived emotional decline was found to be greater in patients with lower ppoFEV1 and higher preoperative social functioning and mental health scores. Based on the logistic regression equation, a patient with a preoperative ppoFEV1 of 50% along with SF and MH scores of 55 (falling half an SD above the general population mean) would have a risk of large emotional decline of 38%. In general, these findings confirm that patients reporting a better preoperative physical fitness, but with a more compromised mental/emotional status, are those more prone to experience severe deterioration of their physical condition.
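The two published equations lend themselves to direct computation. The following minimal Python sketch (ours, not from the paper) encodes them and re-derives the worked examples above; the emotional example reproduces exactly, while the physical one comes out near 34% rather than the reported 31%, presumably because the published coefficients are rounded.

```python
import math

def risk_physical(pf, bp, mh):
    """Risk of large physical decline; PF, BP, MH are norm-based SF-36v2 scores."""
    ln_r = -11.6 + 0.19 * pf + 0.05 * bp - 0.05 * mh
    return math.exp(ln_r) / (1 + math.exp(ln_r))   # risk = R / (1 + R)

def risk_emotional(ppo_fev1, sf, mh):
    """Risk of large emotional decline; ppoFEV1 in % of predicted."""
    ln_r = -8.06 - 0.03 * ppo_fev1 + 0.11 * sf + 0.055 * mh
    return math.exp(ln_r) / (1 + math.exp(ln_r))

# Worked examples from the Discussion:
print(f"physical:  {risk_physical(55, 55, 45):.0%}")   # text reports 31%
print(f"emotional: {risk_emotional(50, 55, 55):.0%}")  # text reports 38%
```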
It is likely that those patients with an impaired physical condition and with a more stable emotional status have lower expectations and are more prepared to be sick and to face the challenges of a cancer operation. Similar findings were reported in the elderly when compared with younger patients [14,15]. Similarly, those patients feeling emotionally better before the operation are those experiencing the worst emotional decline, presumably because their expectations were higher than in those with an already compromised emotional status. In addition to mental-related scales of QoL, ppoFEV1 seems inversely related to decline in MCS in these patients. Although comparison with previous investigations appears difficult due to different methodologies and study designs, this is one of the rare circumstances in which an objective physiologic respiratory parameter is found to be associated with postoperative change in emotional status. Previous investigations have found these factors more related to physical components of QoL or to global scores [15,16].

The study may have potential limitations. Similar to all longitudinal studies, we had dropouts (cancer recurrence, refusal to take the test, etc.). These patients may have been those in the worst conditions, and their inclusion in the analysis might have influenced the results. This should be taken into account when interpreting this and other similar studies. QoL continues to improve up to 6-12 months [17,18]; thus, extending the follow-up may have affected the results of this study, and the generalization of our models to longer-period evaluations needs to be confirmed. Eighteen patients (10%) in this series underwent adjuvant chemotherapy. Although this factor has been shown by some to influence postoperative QoL [19], we decided to include these patients after a preliminary analysis showed no standardized mean difference in PCS or MCS values compared with those without chemotherapy. This analysis was performed by using the SF36v2 survey, a generic QoL instrument; reproducibility of these results with other instruments needs to be verified. QoL may reflect the patients' perspective and may be affected by many external factors with an emotional impact, such as the type of information provided, the radicality of the treatment, satisfaction with the provided care, and the availability of family and social support. Further analyses are needed to include these factors in predictive models and to assess their role in influencing residual QoL.

A consistent proportion of patients undergoing lung resection exhibit an important postoperative worsening in their QoL. We were able to identify reliable risk factors and predictive equations estimating this decline. These findings may be used as selection criteria for efficacy trials on perioperative physical rehabilitation or psychological treatments, during preoperative counseling, in the surgical decision-making process, and for selecting those patients who would benefit from physical and emotional supportive programs.
2018-04-03T00:49:54.288Z
2011-05-01T00:00:00.000
{ "year": 2011, "sha1": "a54f14e045d6db392319bfcd21b094a9ce15c076", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/ejcts/article-pdf/39/5/732/22142568/39-5-732.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "4eeba0fabaad22c1ffd6c5d5dc6a9e5ffce50267", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
191142416
pes2o/s2orc
v3-fos-license
Data-Driven Adaptive Iterative Learning Method for Active Vibration Control Based on Imprecise Probability

A data-driven adaptive iterative learning (IL) method is proposed for the active control of structural vibration. Considering the repeatability of structural dynamic responses in the vibration process, the time-varying proportional-type iterative learning (P-type IL) method was applied for the design of feedback controllers. The model-free adaptive (MFA) control, a data-driven method, was used to self-tune the time-varying learning gains of the P-type IL method for improving the control precision of the system and the learning speed of the controllers. By using multi-source information, the state of the controlled system was detected and identified. The square root values of the feedback gains can be considered as characteristic parameters, and the theory of imprecise probability was investigated as a tool for designing the stopping criteria. The motion equation was derived from the dynamic finite element (FE) formulation of the piezoelectric material, and then was linearized and transformed properly to design the MFA controller. The proposed method was numerically and experimentally tested on a piezoelectric cantilever plate. The results demonstrate that the proposed method performs excellently in vibration suppression and that the controllers have fast learning speeds.

Introduction

Many industrial systems accomplish tasks in a limited period of time and repeat control processes continuously. In these systems, it is attractive to improve the system performance by repeating the control process, which draws attention to an intelligent control strategy named the iterative learning (IL) method. The IL method is applicable to controlled systems with repetitive motion properties. The fundamental IL method is a learning process based on output errors and learning gains: to obtain better control performance, upgraded system inputs are generated for the next repetition of the process from the latest tracking errors [1]. In practical industrial processes, the IL method is an effective approach to produce control inputs so that the system outputs are as close as possible to the desired system outputs, for example: control trajectory tracking for lower limb rehabilitation [2], design of a shaping method for the residual vibration control of industrial robots [3], compensation for aerodynamic disturbance of the aerial refueling system [4], and design of the controller for homing guidance of missiles [5]. Considering the repeatability of structural dynamic responses in the vibration process, several research groups have applied the proportional-type iterative learning (P-type IL) method with fixed gain to suppress the vibrations of piezoelectric laminated composite structures.

In contrast, the theory of imprecise probability can work as a more general model to deal with uncertainties. Probabilities are represented by intervals, which interprets uncertainty from the perspective of behavior and achieves good results in the application of state diagnosis. In addition, the sliding mode controller is removed, which can reduce the computational burden. The theory of imprecise probability provides a formal framework to determine an optimal decision under uncertainties of the state of the system, which makes it suitable for a wide range of application areas [23,24]. In this paper, the theory of imprecise probability was used to design the stopping criteria.
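To fix ideas, the following minimal Python sketch illustrates the generic fixed-gain P-type IL update u_{j+1}(k) = u_j(k) + Γ e_j(k+1) on a toy first-order plant; the plant model, gain, and desired trajectory are invented for illustration, and this is not the time-varying controller developed in this paper.

```python
import numpy as np

# Toy repetitive plant: y(k+1) = 0.9*y(k) + 0.5*u(k), run over T steps per trial.
T, trials, gamma = 50, 30, 0.8                    # horizon, IL iterations, gain
y_d = np.sin(np.linspace(0, 2 * np.pi, T))        # desired trajectory (hypothetical)
u = np.zeros(T)                                   # initial control input

for j in range(trials):
    y = np.zeros(T)
    for k in range(T - 1):
        y[k + 1] = 0.9 * y[k] + 0.5 * u[k]
    e = y_d - y                                   # tracking error of this trial
    u[:-1] = u[:-1] + gamma * e[1:]               # P-type update with shifted error
    print(f"trial {j:2d}: max |e| = {np.abs(e).max():.4f}")
```

With |1 - Γb| = 0.6 < 1 for the input gain b = 0.5, the tracking error contracts from trial to trial, which is the basic convergence mechanism the fixed-gain P-type law relies on.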
To deal with the multicriteria and multiobjective problems in vibration control systems, Dempster's combination rule was used to fuse multi-source information. Based on the imprecise probability theory and the combination rule, the learning processes of all controllers can be monitored and diagnosed in real time. In this paper, by combining the time-varying P-type IL method with the MFA method, a data-driven adaptive IL method is presented for the active vibration control of piezoelectric laminated composite structures. Considering the system uncertainty in practical applications, the MFA control was incorporated into the time-varying P-type IL method to tune the learning gains in real time. The square root values of the feedback gains were regarded as characteristic parameters. Based on the imprecise probability theory, a multi-source information diagnosis technology was presented for the design of the stopping criteria. Decisions made under the imprecise probability theory were used to decide whether the learning processes should be terminated. Numerical simulations and experimental studies were carried out, and the results were analyzed and discussed.

In the rest of the paper, the state-space model of the system is established for the controller design, and the motion equation of the piezoelectric structure driven by the P-type IL controller is shown in Section 2. Section 3 introduces the dynamic linearization technique for the state-space system, and the MFA controller is given. The stopping criteria based on the imprecise probability theory and Dempster's combination rule are proposed in Section 4. The proposed method is summarized in Section 5. In Section 6, numerical simulation results are presented for verifying the effectiveness of the proposed method. A complete vibration control system is established, and the results are discussed in Section 7. The conclusions and future outlooks are given in Section 8.

State-Space Model and P-Type IL Method

A finite element (FE) formulation for the dynamic response of piezoelectric material has been given as [14]:

(1)  M_uu q̈ + C_uu q̇ + K_uu q + K_uφ φ = F_ue,
     K_φu q + K_φφ φ = F_φ,

where M_uu and C_uu are the mass matrix and the damping matrix; K_uu, K_uφ, K_φu, and K_φφ represent the stiffness matrix, the piezoelectric coupled matrix, the coupled capacity matrix, and the piezoelectric capacity matrix, respectively; F_ue and F_φ are the external force vector and the electric load vector; q and φ are the nodal displacement vector and the voltage vector; q̇ and q̈ denote the first and second derivatives of q with respect to time. The damping matrix is usually taken linear with respect to the mass matrix and stiffness matrix using the Rayleigh damping coefficients α and β:

(2)  C_uu = α M_uu + β K_uu.

Equation (1) can be uncoupled by solving for the electric potential:

(3)  φ = K_φφ^(-1) (F_φ − K_φu q),

which gives

(4)  M_uu q̈ + C_uu q̇ + K* q + K_uφ K_φφ^(-1) F_φ = F_ue,

where K* = K_uu − K_uφ K_φφ^(-1) K_φu. The electric load vector is usually equal to zero in the sensor. Again using Equation (3), the sensor electric potential is given as φ_s = −K_φφ^(-1) K_φu q. The first-time derivative of φ_s can be given as:

(5)  φ̇_s = −K_φφ^(-1) K_φu q̇.

The system output error can be defined as:

(6)  e = y_d − y,

where y_d and y are the desired system output signal and the measured system output signal, respectively. The desired system output signal y_d is always zero. The measured system output signal y is equal to the first-time derivative of the sensor electric potential φ̇_s in Equation (5).
The system output error at the kth moment of the discrete-time system is given as:

(7)  e(k) = y_d(k) − y(k).

According to the P-type IL method [7], the feedback gain can be expressed in the iteration form of Equation (8), where δ is the proportional learning gains matrix. The actuation voltage can be written as Equation (9). The electric load vector at the kth moment is given as Equation (10), where C_a represents the capacitance constant of the piezoelectric material. By combining Equations (5), (9), and (10), motion Equation (4) can be approximated as Equation (11).

Dynamic Linearization and MFA Controller Design

Combining Equations (5) and (7), the state form of system (11) can be rewritten as Equation (12). In the time-varying P-type IL version, the update rule is given as in [25] (Equation (13)), where δ(k − 1) is the time-varying learning gains matrix. For the sample period T, we have ẏ(k) = [y(k+1) − y(k)]/T, and the discrete-time form of system (12) can be transformed using Equation (13) into Equation (14), where F*(k) = K* q(k) − F_ue(k); δ(k − 1) and y(k) are the system input and output, respectively. The proofs of Lemma 1 can be obtained by similar steps (see Reference [20]) and are omitted. Based on Equation (15), the following dynamic linearization form can be obtained (Equation (16)), where ϕ_1(k) and ϕ_2(k) are dynamically changing. The MFA controller for calculating the learning gains δ is given as in [20] (Equation (17)), where a step-size constant ϕ ∈ (0, 1] is added to make Equation (17) general. The parameters of the PPD matrix are estimated as in [20] (Equation (18)), where a step-size constant η ∈ (0, 1] is added to make Equation (18) general.

Preliminary Notion of Imprecise Probability

In the theory of imprecise probability, many decision criteria have been developed [26]. The Γ−maximin criterion was applied in this paper to design the stopping criteria. Assume that a decision d induces a real-valued gain J_d and that the set of all available decisions is D, d ∈ D. Our purpose is to identify the optimal decision d in D, and the solution is given as:

(19)  d_opt = arg max_{d ∈ D} J_d.

The variables whose values are uncertain can influence the gain J_d. According to the expected utility of its gain, a decision can be ranked reasonably, and the expected utility should be maximized:

(20)  d_opt = arg max_{d ∈ D} E_µ(J_d),

where E_µ(J_d) is the expected utility of the gain J_d, and µ is the probability measure. As a robust form of Equation (20), E_p can be used in place of E_µ, and the Γ−maximin criterion can be written as

(21)  d_opt = arg max_{d ∈ D} E_p(J_d),

where E_p is the lower expected utility, obtained by minimizing E_µ over the admissible probability measures. The Γ−maximin criterion can be understood as a worst-case optimization, and a decision is made by maximizing the worst expected gain.

Fault Reliability

The real-time feedback gains of the controllers are obtained for diagnosing the system state. Assume that there are N sensors glued on the laminated composite plate in the vibration control system. When the jth sensor works, there are L_j characteristic parameters to represent the state types of the system. For the sake of simplicity, suppose that all state types are independent of one another and that only one state can occur at any given time. Let S_j represent the characteristic parameter vector obtained from the jth sensor:

(22)  S_j = (s_j1, s_j2, ..., s_jL_j),

where s_ji is the ith element of S_j; the characteristic parameter s_ji obtained from the jth sensor can be used to identify a certain state i in the current circumstances, i = 1, 2, ..., L_j, where L_j is the number of characteristic parameters provided by the jth sensor.
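Before continuing with the fault reliability construction, note that the Γ−maximin selection in Equation (21) reduces to a small computation once the probability interval and the gains are fixed. The Python sketch below (with invented gain values and decision labels) exploits the fact that the expected utility is linear in the fault probability, so its minimum over an interval is attained at an endpoint.

```python
def gamma_maximin(p_min, p_max, gains):
    """Pick the decision with the largest worst-case expected gain.

    gains[d] = (gain if fault occurs, gain if no fault) for decision d;
    the fault probability is only known to lie in [p_min, p_max].
    """
    best, best_lower = None, float("-inf")
    for d, (g_fault, g_ok) in gains.items():
        # Expected utility is linear in p, so its minimum over the interval
        # is attained at one of the two endpoints.
        lower = min(p * g_fault + (1 - p) * g_ok for p in (p_min, p_max))
        if lower > best_lower:
            best, best_lower = d, lower
    return best, best_lower

# Hypothetical numbers: "stop" = declare fault / terminate learning.
gains = {"stop": (1.0, -0.4), "continue": (-0.8, 0.9)}
print(gamma_maximin(0.35, 0.6, gains))
```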
Considering an exponential function as the evidence-generating function, a basic fault reliability assignment $m_{ji}$ can be defined (Equation (23)) in terms of constants $r_{i}$, $a_{i}$, and $\alpha_{i}$, which can be determined directly from expert experience or prior knowledge. In state diagnosis, $m_{ji}$ can be interpreted as the degree of reliability of state $i$ obtained by evaluating the measurements of the $j$th sensor. The fault reliability for the $j$th sensor is then calculated from these assignments (Equation (24)).

Establishing the Fault Probability Interval

The fault reliabilities obtained from the $N$ controllers can be expressed in vector form $M = \{m_{1}, m_{2}, \cdots, m_{N}\}$. After sorting the elements in ascending order, a new fault reliability vector $M' = \{m'_{1}, m'_{2}, \cdots, m'_{N}\}$ is obtained, which is divided into two groups, a low fault reliability group $M_{\min}$ and a high fault reliability group $M_{\max}$:

$$M_{\min} = \{m'_{1}, \ldots, m'_{l}\}, \qquad M_{\max} = \{m'_{l+1}, \ldots, m'_{N}\} \quad (25)$$

where $l$ is a natural number. The fault reliability conflicts within $M_{\min}$ and within $M_{\max}$ are smaller than those within $M'$, so better fused results can be obtained with Dempster's combination rule. Suppose $m_{\Pi}$ and $m_{\Lambda}$ are two fault reliabilities from the same group. The degree of conflict between them is quantified by the conflict coefficient $K$ [27,28]; the larger the value of $K$, the more conflicting the two fault reliabilities:

$$K = \sum_{B \cap C = \varnothing} m_{\Pi}(B)\, m_{\Lambda}(C) \quad (26)$$

where $\varnothing$ is the empty set. Within the same fault reliability group, Dempster's combination rule is given as [28,29]

$$(m_{\Pi} \oplus m_{\Lambda})(A) = \frac{1}{1-K}\sum_{B \cap C = A} m_{\Pi}(B)\, m_{\Lambda}(C), \qquad A \neq \varnothing \quad (27)$$

Dempster's combination rule in Equation (27) is applied to fuse the fault reliabilities in the groups $M_{\min}$ and $M_{\max}$, and the two fused fault reliabilities are denoted $m_{\min}$ and $m_{\max}$. Based on the pignistic probability transformation (PPT), the fused fault reliabilities become fault probabilities, namely $P_{\min} = m_{\min}$ and $P_{\max} = m_{\max}$ when there is only one element in the fault reliability vector. The fault probability interval is then established as $[P_{\min}, P_{\max}]$. The system considered in this paper has two state types, learning termination and normal learning, and the fault probability interval above serves as the prediction in the fault indication function introduced next.

Diagnosis Cost Functions and Decision-Making

The diagnosis cost functions are designed as follows (Equation (29)): $f_{1}$ expresses that the fault occurs, where $a(e)$ is the gain when the fault actually occurs and the state is diagnosed correctly and $b(e)$ is the gain when the fault does not occur and the state is diagnosed incorrectly; $f_{2}$ expresses that the fault does not occur, where $c(e)$ is the gain when the fault actually occurs and the state is diagnosed incorrectly and $d(e)$ is the gain when the fault does not occur and the state is diagnosed correctly, with $a(e) > b(e)$ and $d(e) > c(e)$; $e$ is a parameter that can be determined directly from expert experience or prior knowledge. Since the fault indication function of Equation (28) is predicted, the fault diagnosis problem is transformed into a decision-making process over the expected intervals of $f_{1}$ and $f_{2}$. Under the $\Gamma$-maximin criterion, the decision is made by comparing the lower expected utilities of $f_{1}$ and $f_{2}$; their expected intervals are calculated from the fault probability interval (Equation (30)).

The square root values of the feedback gains are regarded as the characteristic parameters, and the basic fault reliability assignment is calculated with the exponential function.
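A minimal implementation of the conflict coefficient and of Dempster's combination rule from Equations (26)-(27), specialized to the two-state frame {termination, normal} used here, may clarify the group-wise fusion step; the example masses are arbitrary.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic reliability assignments with Dempster's rule (Eq. (27)).

    m1, m2: dicts mapping frozenset hypotheses to masses summing to 1.
    Raises if the conflict coefficient K (Eq. (26)) equals 1.
    """
    conflict = 0.0
    fused = {}
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if not inter:
            conflict += a * b                      # conflict coefficient K
        else:
            fused[inter] = fused.get(inter, 0.0) + a * b
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {A: v / (1.0 - conflict) for A, v in fused.items()}

# Two fault reliabilities over the states F (termination) and N (normal):
F, N = frozenset({"F"}), frozenset({"N"})
m_a = {F: 0.8, N: 0.2}
m_b = {F: 0.7, N: 0.3}
print(dempster_combine(m_a, m_b))   # fused mass concentrates on F (~0.90)
```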
The fault reliability vector is divided into two groups, the high fault reliability group and the low fault reliability group. Using Dempster's combination rule, the fused results of these two groups are used to establish the fault probability interval, and the $\Gamma$-maximin criterion from the theory of imprecise probability is adopted for state diagnosis. A threshold value is predefined to serve as the stopping criterion: based on the diagnosis results of the $\Gamma$-maximin criterion, the decision is made by comparison with the threshold value.

The Summary of the Proposed Method

In summary, the flow chart is shown in Figure 1 and detailed as follows (a schematic sketch of this loop is given after the list):

Step 1. Construct the full-form dynamic linearization model in Equation (15).
Step 2. Predict the time-varying PPD values in Equation (18) using only the online system input δ(k) and output y(k) data.
Step 3. Compute the time-varying learning gains δ(k) with the MFA controller in Equation (17).
Step 4. Calculate the feedback gain matrix G(k) in Equation (13).
Step 5. Extract the real-time feedback gains and transform them into the characteristic parameters S_k.
Step 6. Calculate the fault reliabilities in Equation (24) for all sensors.
Step 7. Divide the fault reliability vector into two groups, the high fault reliability group and the low fault reliability group (Equation (25)).
Step 8. Fuse the elements of each group using Dempster's combination rule (Equation (27)).
Step 9. Establish the fault probability interval from the fused results and calculate the expected intervals (Equation (30)) of the diagnosis cost functions (Equation (29)).
Step 10. Make decisions based on the lower expected utility and the stopping criteria.
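The following runnable Python skeleton mirrors only the ordering of Steps 1-10; every helper body is a toy placeholder (including the reliability function, which here simply rewards convergence of the characteristic parameters), not the paper's Equations (13)-(30). Step 3 is absent from the extracted list and is assumed above to be the MFA gain computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(group):
    # Placeholder for Dempster fusion within one group (Eq. (27)).
    return float(np.mean(group)) if len(group) else 0.0

threshold = 0.9323                 # fused-reliability threshold (Sec. 6.1)
G = np.zeros(3)                    # feedback gains of three controllers
s_prev = np.sqrt(np.abs(G))
for k in range(500):
    y = rng.normal(scale=np.exp(-k / 60.0), size=3)  # toy sensor outputs
    delta = 0.05                   # Steps 2-3: PPD estimate + MFA gain (toy)
    G = G + delta * (0.0 - y)      # Step 4: gain update, desired output y_d = 0
    s = np.sqrt(np.abs(G))         # Step 5: characteristic parameters
    m = np.exp(-50.0 * np.abs(s - s_prev))  # Step 6: toy "termination" reliability
    s_prev = s
    m_sorted = np.sort(m)
    m_min, m_max = fuse(m_sorted[:1]), fuse(m_sorted[1:])  # Steps 7-8
    if min(m_min, m_max) > threshold:       # Steps 9-10: Gamma-maximin decision
        print(f"learning terminated at sample {k}")
        break
```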
FE Modeling and Setting of Controller Parameters

The numerical simulations were carried out for active vibration control of a cantilevered plate with piezoelectric patches. The piezoelectric cantilevered plate comprises one laminated composite plate (414 mm × 120 mm × 1 mm), on which six piezoelectric patches (60 mm × 24 mm × 1 mm) were bonded in pairs, as shown in Figure 2. The laminated composite plate was made of graphite-epoxy (GE, carbon-fiber-reinforced) composite material with five substrate layers. Its total thickness was 1 mm with the angle-ply layup (0/90/0/90/0), and the thickness of each substrate layer was 0.2 mm. The upper piezoelectric patches were actuators and the lower ones worked as sensors. The three actuator-sensor pairs are labelled a, b, and c, respectively. The positions of the piezoelectric patches were chosen following Reference [29]. The locations of Point A, Point B, and Point C are given in Figure 2. The root of the laminated composite plate was clamped. The properties of the laminated composite plate and the piezoelectric material, including the Young's modulus and elastic stiffness values (in GPa), are listed in Table 1 (not reproduced here).

In this paper, the dynamic FE model of the piezoelectric cantilevered plate was constructed in ANSYS. The laminated composite plate and the piezoelectric patches were modeled with SOLID46 and SOLID5 elements, respectively. The laminated composite plate was meshed with 69 × 20 × 1 elements, and each piezoelectric patch was meshed with 10 × 4 × 1 elements. For the electric degree of freedom, the nodes at the surfaces of the piezoelectric patches were coupled with the CP command. Modal analysis was carried out to identify the natural frequencies of the piezoelectric cantilevered plate and to design the sampling period for the numerical simulations [30]. The first three natural frequencies were calculated and show good agreement with the experimental results in Table 2; the largest error, 13.9%, arose in the second modal frequency. Since the numerical modal frequencies were only used as approximate values to verify the dynamic FE model, the differences between the numerical and experimental results were acceptable.

The sampling period was taken as T = 1/(20ω₁), where ω₁ is the first natural frequency of the piezoelectric cantilevered plate, and α = 2β = 0.003 were the Rayleigh damping coefficients. The constants of the MFA controllers were set to γ = 1, ϕ = 1, µ = 1, and η = 1. The fault reliability is calculated from Equation (24). The system in this paper consists of two state types: learning termination and normal learning. The square root values of the feedback gains were regarded as the characteristic parameters. The constants for the calculation of the basic fault reliability assignment in Equation (23) were r = 1, a = 2.924, and α = −1, and the constants of the diagnosis cost functions were defined as a(e) = 1, b(e) = 1, c(e) = 1, and d(e) = 1. A threshold value is predefined to serve as the stopping criterion, and the decision is made by comparison with this threshold. Since controllers connected to different sensors may have distinct convergence speeds, two threshold values were defined so that all controllers could learn sufficiently: for the lower expected utility of the fused fault reliabilities, the threshold was specified as 0.9323; for a single fault reliability, the threshold was specified as 0.7978. The learning process was terminated as soon as either threshold was met; otherwise, learning continued. In the P-type IL method, the maximum iteration number was set to 500, and the fixed learning gains were δ₁ = 0.078 and δ₂ = 0.068 for the various simulations.
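A small sanity-check script for the quoted constants follows; note the text does not say whether ω₁ in T = 1/(20ω₁) is in rad/s or Hz, so the conversion below is an assumption, and the M and K matrices are toy placeholders for illustrating the Rayleigh damping of Equation (2).

```python
import numpy as np

f1_hz = 5.4377                       # first natural frequency (Table 2)
omega1 = 2 * np.pi * f1_hz           # rad/s (assumed unit of omega_1)
T = 1.0 / (20.0 * omega1)            # sampling period T = 1/(20*omega_1)
alpha = 0.003                        # Rayleigh coefficients, alpha = 2*beta
beta = alpha / 2.0
print(f"T = {T*1e3:.2f} ms, alpha = {alpha}, beta = {beta}")

# Damping matrix C = alpha*M + beta*K (Eq. (2)), with toy M and K:
M = np.eye(2)
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
C = alpha * M + beta * K
```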
Harmonic Excitation

The active vibration control of the first mode of the piezoelectric cantilevered plate was investigated in this case. Considering the harmonic excitation f(t) = 5 cos(ω₁t) N applied at Point C, where ω₁ = 17.083 rad/s (5.4377 Hz) is the first natural frequency, all numerical results for the robust MFA-IL control are also given in this section.

Figure 3a,b shows the time-history dynamic responses at Point A and Point B, respectively, and illustrates that the first-mode vibration was suppressed effectively by the proposed method, the P-type IL method, and the robust MFA-IL control. The control performance of the piezoelectric actuators was notable both at places with piezoelectric sensors (e.g., Point A) and at places without them (e.g., Point B). It is worth noting that these conclusions differ from Saleh's [31], who pointed out that the P-type IL method cannot effectively control the unwanted vibration at the locations of the observation points and could not obviously control the first-mode vibration of piezoelectric structures.

An effective active vibration control system should reduce the amplitude of the overall structure, not merely parts of it, so the control rule needs to be considered carefully before the system is designed. The control performance also depends on the positions and sizes of the piezoelectric patches [32]. The best positions for the piezoelectric patches are the places where the mechanical strain is largest. To generate satisfactory control forces, the dimensions of the piezoelectric actuators should be investigated and designed, and the dimensions of the piezoelectric sensors should be selected appropriately so that precise information on the structural vibration can be acquired; a misreading of the sensor measurement signals may generate unreasonable control forces and seriously deteriorate the dynamic performance of the system. As long as the positions and dimensions of the piezoelectric patches are chosen appropriately, the P-type IL method shows good performance in first-mode vibration control, and the controllability of the structural vibration is notable both at locations with sensors and at locations without them.

The actuator time-history voltages are presented in Figure 4a,b; the actuator voltages changed suddenly at 4.4 s while the system was controlled by the P-type IL method, and after the learning processes were terminated, the amplitudes of the actuator voltages became smooth again. Controllers connected to distinct sensors have different convergence speeds in the learning processes, which may cause the control forces to mismatch one another: if a piezoelectric actuator cannot perform as desired, the adjacent piezoelectric actuators are negatively affected. To avoid this phenomenon, more iterations are needed to improve the control stability; too few iterations may directly lead to system spillover. No instability occurred when the system was controlled by the proposed method or by the robust MFA-IL control. The measurement signals from sensor a/b and sensor c are shown in Figure 4c,d.
In comparison with the P-type IL method, smaller amplitudes were obtained when the piezoelectric cantilevered plate was controlled by the robust MFA-IL control and by the proposed method. The root mean square (RMS) values of the dynamic responses and measurement signals, listed in Table 3, were used to quantitatively compare the P-type IL method, the robust MFA-IL control, and the proposed method. From Table 3, both the robust MFA-IL control and the proposed method outperformed the P-type IL method: the vibration amplitude was reduced by 41.22% under the proposed method and by 40.36% under the robust MFA-IL control, so the two have similar control precision.
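The RMS comparison of Table 3 reduces to the following computation; the two signals here are synthetic placeholders, scaled so that the printed reduction lands near the reported 41%.

```python
import numpy as np

def rms(x):
    # Root mean square value of a sampled signal
    return float(np.sqrt(np.mean(np.square(x))))

t = np.linspace(0.0, 10.0, 5000)
uncontrolled = np.cos(2 * np.pi * 5.4377 * t)   # toy first-mode response
controlled = 0.59 * uncontrolled                # toy suppressed response
reduction = 100.0 * (1.0 - rms(controlled) / rms(uncontrolled))
print(f"vibration amplitude reduced by {reduction:.2f}%")  # ~41%, cf. Table 3
```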
The computational time of the various algorithms is shown in Figure 5, including the time for running each iteration and the time for convergence of the feedback gains. From Figure 5, both the robust MFA-IL control and the proposed method have fast convergence speed, which overcomes the inherent shortcoming of the P-type IL method. Compared with the proposed method, however, the computational burden of the robust MFA-IL control was higher at each iteration. The longer computational time increases the time delay, and large time delays make the vibration suppression system uncontrollable. In the proposed method, the main computational cost lies in the MFA control part, which iteratively determines the learning gains in real time. The robust MFA-IL control additionally integrates sliding mode control (SMC), whose introduction brings a large computational burden and hence large time delays. Generally speaking, the more actuator-sensor pairs are bonded on the plate, the more vibration modes of the plate can be controlled; the time delay therefore also increases with the number of actuator-sensor pairs, which may limit the application areas of the robust MFA-IL control. To obtain a slight improvement in control precision, the robust MFA-IL control incurs a larger time delay. In practical applications, the design of a vibration control system must balance control precision against the real-time realization of vibration suppression, and the proposed method offers such a compromise.

The learning processes of the feedback gains are depicted in Figure 6a,b. From Figures 4 and 6, the proposed method and the robust MFA-IL control have fast learning speed while maintaining good control performance and system stability. The real-time diagnosis results for the fused information and the single information sources are given in Figure 7. From Figure 7, the controllers connected with actuator a/b had faster convergence speed than the controller connected with actuator c. Based on the theory of imprecise probability, all controllers could learn sufficiently, and satisfactory control performance could be achieved.
To verify the stability of the controllers, an instability test was carried out. The noise signals shown in Figure 8 were added to excite the piezoelectric cantilevered plate at Point A; they started at 6 s and lasted only one second. The controller parameters were set as above. The time-history dynamic responses at Point A and Point B are given in Figure 9a, and the measurement signals from sensor a/b and sensor c are shown in Figure 9b. When the noise signals began to excite the plate, the dynamic responses and the sensor measurement signals changed greatly; however, no divergence occurred. After the noise excitation stopped, the proposed method restored the vibration control system to a stable state.

Random Excitation

In this case, the plate was driven at Point C by the random force shown in Figure 10.
The time-history dynamic responses at Point A and Point B are shown in Figure 11. The control voltages of actuator a/b and actuator c are presented in Figure 12a,b, the measurement signals from sensor a/b and sensor c are displayed in Figure 12c,d, and the feedback gains are depicted in Figure 13.
From the results above, the proposed method gives the system smaller dynamic response amplitudes and faster convergence speed. The learning gain δ in the P-type IL method is a fixed constant selected from the practical experience of researchers. A larger learning gain can lead to system instability and reduced robustness [10], so a smaller learning gain is necessary to improve the control precision; however, the smaller the learning gain, the more iterations are needed, and the learning speeds of the controllers slow down [7,9]. In the proposed method, the learning gain is self-tuned according to the system's dynamic behavior, so the convergence speeds of the controllers are improved while high control precision is maintained. The RMS values for evaluating the proposed method and the P-type IL method in this case are listed in Table 3, and the real-time diagnosis results of the system states are given in Figure 14.

Experiment Setup

To validate the feasibility and control effect of the proposed method, an experimental system for controlling the vibration of the piezoelectric cantilevered plate was developed, as shown in Figure 15, and experiments on first-mode vibration control were conducted. The setup consisted of the piezoelectric cantilevered plate (one laminated composite plate and six piezoelectric patches), the vibration excitation system, the data acquisition system, and the active vibration control system. The laminated composite plate was made of GE composite material, and the dimensions of the piezoelectric cantilevered plate are given in Section 6.1. The excitation position, Point C, was replaced by a metal patch.

The signal generator (DH1301, Taizhou, China) was used to generate the external excitation signals.
After digital-to-analog (D/A) conversion, the excitation signals were amplified by the voltage amplifier (YE5872A, PA, USA) and used to drive the piezoelectric cantilevered plate through the electric eddy-current exciter (JZF-1, Beijing, China), which transformed the electric signals into a mechanical force. Three piezoelectric sensors detected the vibration information, and their measurement signals were selected as the feedback signals. After analog-to-digital (A/D) conversion, all measurement data were acquired and stored on the PC. Since the control target of the piezoelectric cantilevered plate was the first mode, a low-pass filter was applied to eliminate high-frequency noise. The controllers performed the signal processing and computation in the real-time semi-physical simulation system (Quarc, Toronto, Canada). Running the proposed method, the controllers generated the control signals; after D/A conversion, the control outputs were sent to the high-voltage amplifier (E70, Harbin, China) and applied to the piezoelectric actuators for vibration suppression. The experimental sample period was chosen as 3 ms.

Modal Analysis

A swept-sine (chirp) signal with an amplitude of 100 V was used to excite actuator a and identify the modal frequencies of the system. The initial frequency was 0.5 Hz, and the terminal frequency was 50 Hz. Fourth-order Butterworth filters were used to eliminate high-frequency noise; the cutoff frequency of the low-pass filters was specified as 30 Hz for the modal identification and 14 Hz for the first-mode control. After filtering, the time-domain response signal measured by sensor a was stored and is shown in Figure 16a. The fast Fourier transform (FFT) of the time-domain response data was computed to depict the frequency response of the system in Figure 16b, from which the first three modal frequencies were obtained; they are listed in Table 2.
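The modal identification step (FFT of the filtered time response) can be reproduced on synthetic data as follows; the sampling rate and the three toy modes are assumptions standing in for the measured signal.

```python
import numpy as np

fs = 1000.0                                   # Hz, assumed acquisition rate
t = np.arange(0.0, 20.0, 1.0 / fs)
# Synthetic response: three decaying modes standing in for the sensor data
modes = [(5.33, 1.0), (31.0, 0.4), (55.0, 0.2)]   # (frequency Hz, amplitude)
x = sum(a * np.exp(-0.2 * t) * np.sin(2 * np.pi * f * t) for f, a in modes)

spec = np.abs(np.fft.rfft(x))                 # magnitude spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f1 = freqs[np.argmax(spec)]                   # dominant spectral peak
print(f"first modal frequency = {f1:.2f} Hz")
```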
Experiment Results

The proposed method and the P-type IL method were investigated for active vibration control of the flexible plate in the experiments. The plate was driven at 5.326 Hz for the first-mode control. In the P-type IL method, the number of iterations was predefined as 1500 to improve the system stability and control precision, and the learning gain was specified as Φ₁ = 0.54. In the MFA controller, the parameters were selected as γ = 1, ϕ = 0.1, µ = 0.1, and η = 1. The stopping criteria in this section were the same as those in Section 6.1. The measurement signals of the sensors (shown in Figure 17a,b) were shifted forward to compensate for the phase delay caused by hardware factors. The P-type IL method and the proposed method were activated 5 s after the harmonic excitation started.

The measurement signals from sensor a/b and sensor c are presented in Figure 17a,b, and the control voltages of actuator a/b and actuator c are presented in Figure 17c,d. The data used to calculate the RMS values were recorded after learning termination, and the RMS values are given in Table 3. From Table 3, the proposed method reached comparatively ideal control performance: the vibration amplitudes were reduced by 33.9% under the proposed method and by 31.8% under the P-type IL method.
This performance was obtained by integrating the MFA method into the time-varying P-type IL method. The learning processes of the feedback gains are depicted in Figure 18; the proposed method can simultaneously maintain the control performance and quickly damp the structural vibration. Under the different control methods, the feedback gains obtained from the same controllers converge to distinct values. The real-time diagnosis curves for the fused information and the single information sources are given in Figure 19. Through the theory of imprecise probability, the learning processes of the feedback gains can be diagnosed in real time, and the decisions made with the designed stopping criteria let all controllers learn sufficiently, yielding excellent control performance.
Conclusions and Outlooks

A data-driven adaptive IL method was proposed for active vibration control of piezoelectric laminated composite structures. Based on the P-type IL method, the motion equation of the piezoelectric cantilevered plate was derived from the dynamic FE equations, and the PPD matrix was estimated with the modified projection algorithm to dynamically linearize the motion equation. Considering the uncertain nonlinear dynamic processes, an MFA controller was designed and applied to self-tune the learning gains of the time-varying P-type IL method, accelerating the learning speed. The square root values of the feedback gains were regarded as characteristic parameters to diagnose the state of the vibration control system, stopping criteria were designed based on the theory of imprecise probability, and decisions were made accordingly to avoid over-learning of the controllers.

When the positions and dimensions of the piezoelectric patches are chosen appropriately, the P-type IL method is effective for first-mode control of the piezoelectric cantilevered plate, and the controllability of the structural vibration is notable at locations both with and without sensors; these conclusions differ from other published studies. Considering the system uncertainties, the MFA method self-tunes the learning gains in real time according to the system's dynamic behavior, which accelerates the convergence speed of the controllers and improves the control precision. The stopping criteria based on the theory of imprecise probability allow all controllers to learn sufficiently, and satisfactory control performance is achieved. The proposed method thus overcomes the shortcomings of the P-type IL method and achieves the expected control performance. The robust MFA-IL control improves control precision at the expense of a large time delay, while the proposed method reduces the computational burden and the misdiagnosis of system states at the expense of a slight decrease in control precision; the proposed method can therefore serve as a compromise.

The proposed method has an open scheme and can be integrated with other methods, including model-based methods. Data-driven and model-based methods can be complementary and cooperative in the design of a control system: the more precise the information and model of the system, the better the control performance that can be expected. In the future, the proposed method can be integrated with a model identification method to handle more complex problems in practical applications, which would simplify the controller structure while maintaining satisfactory control precision.
Two color probing of the ultrafast photo-acoustic response in a single biological cell

The measurement of the mechanical properties of single biological cells with nanometer depth resolution using only coherent light is proposed. A pump-probe set-up based on an ultrafast laser (100 fs pulses) is used to excite and detect acoustic frequencies in the GHz range. Experiments are performed on single fixed mouse MC3T3 cells adhering to a titanium alloy substrate. Using two different probe wavelengths, the contributions to the optical detection resulting from the displacements of the cell interfaces and from the interactions between the acoustic waves and the laser light are identified. Semi-analytical calculations allow the determination of acoustic celerities and thicknesses in cells thinner than 150 nm.

Introduction

Since the 1980s [1], interest in the picosecond acoustics technique has been growing owing to its wide range of applications in non-destructive testing and solid-state physics. In this technique, high-frequency acoustic waves are both generated and detected with laser light pulses (duration < 1 ps), allowing the measurement of the optical, thermal, or mechanical properties of submicrometric materials [2]. Applications of picosecond acoustics include thickness measurement and bonding control at the nanometer scale, as well as studies of material microstructures; experiments can be performed in opaque or transparent media. In this paper, the picosecond acoustics technique is applied to biological cells, the smallest units of life. Innovative experiments have already been performed in vegetal Allium cepa cells, demonstrating the potential of the picosecond acoustics technique to improve current cell imaging resolution [3,4]. The method is now applied to single fixed mouse MC3T3 cells adhering to a titanium alloy substrate, and acoustic celerities are measured in cells thinner than 150 nm. In the second section of the paper, the optical detection in a cell of finite thickness adhering to an opaque substrate is described; two different probe wavelengths are used at the same point in the cell in order to identify the different contributions to the opto-acoustic detection. Simulation results and measurements are compared in the third section to determine the acoustic celerities and thicknesses of the cells.

Picosecond acoustics detection in a thin viscoelastic film adhering to a half-space substrate

The picosecond acoustics technique has already been used to study thin transparent films of nanometric to micrometric thickness [5-7]. The cell is modeled as a thin transparent elastic film of thickness d adhering to an opaque substrate. A short light pulse is absorbed in the vicinity of the substrate, and the subsequent thermal expansion generates a short strain pulse. The continuity of displacement and stress at the cell/substrate interface launches two acoustic pulses, one propagating in the cell and one propagating in the substrate. The wavelength of the acoustic pulse propagating in the cell is dictated by the acoustic celerity and the optical penetration depth in the titanium alloy substrate [1], around 5.8 nm/ps and 20 nm, respectively; this acoustic wavelength is about 5 nm in the present study. A zero-stress boundary condition is imposed at the cell/air interface. The acoustic pulse propagates in the cell and is reflected at the cell interfaces.
The period of the first harmonic of the acoustic resonances in the cell is given by Equation (1) and is proportional to d/v, where v and d are the acoustic celerity and the thickness of the cell, respectively. The optical detection of the acoustic response is the sum of the interface displacements and of the acousto-optic interaction between the acoustic pulse and the probe laser radiation [6]. The acousto-optic interaction leads to the so-called Brillouin oscillation [5], whose period for a probe at normal incidence is

$$T_B = \frac{\lambda}{2nv} \quad (2)$$

where λ and n are the laser wavelength and the optical index of the medium. The amplitude of these oscillations is proportional to the piezo-optic coefficient [6].

Figure 1. Picosecond acoustics of a thin biological cell. For each plot, the calculated signals labelled with square and star markers represent the acousto-optic contribution and the interface displacement contribution, respectively; the unlabelled plots are their sum. We compare cells where (a) the acoustic frequency is higher than the acousto-optic frequency and (b) the inverse situation.

When λ ≪ d, that is T_B ≪ T_A, Brillouin oscillations arise in the cell; in Figure 1(a), λ ≫ d and Brillouin oscillations cannot arise. Let us now compare experimental data obtained in submicrometric cells with the modeling presented above.

Figure 2. Experimental data (unlabelled plots) obtained in two different cells, one cell per row. In the left-hand plots, detection uses the blue probe; in the right-hand plots, detection is performed at the same point in the cell with the red probe. Triangles are the corresponding calculated signals, the sum of the acousto-optic detection (squares) and the interface displacement detection (stars).

The experimental set-up is a classical pump-probe set-up used to generate and detect acoustic waves in a single MC3T3 fixed cell adhering to a Ti6Al4V substrate [8]. Red pulses (800 nm) of 100 fs duration are generated by a mode-locked Ti:sapphire laser with a repetition rate of 82 MHz. A polarizing beam splitter divides the laser beam into pump and probe beams. The pump beam passes through an acousto-optic modulator (330 kHz) to provide the reference signal for lock-in amplification. One of the laser beams is converted into blue light using a BBO crystal, and the other beam passes through a delay line to provide a tunable time delay between the pump and the probe. Both beams propagate through the transparent cell and are focused at normal incidence with a ×20 microscope objective at the Ti6Al4V surface; either beam can be used as pump or probe. The width at half-height of the spatial cross-correlation of the pump and probe beams is approximately 5 µm. Reflectometric measurements are presented.

Four experimental results obtained on two different cells, one cell per row, and the corresponding simulations are presented in Figure 2. The unlabelled plots are the experimental detection: in the left-hand plots the signals are detected with the blue probe, while in the right-hand plots the detection is performed with the red probe at the same point of the cell. The experiments were performed on the vacuole, across the nucleus, the cytoplasm, and the cytoskeleton of the cell. The lines with triangles are the corresponding simulations; the physical parameters are identical for each cell, and the piezo-optic coefficients are adjusted to match the ratio of the acousto-optic to interface-displacement contributions to the change of reflectivity. For each cell, the signal detected with the blue probe allows the measurement of the mechanical properties of the inspected cell.
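Since Equation (2) is the relation actually inverted in the next section, a short numerical sketch may be useful; the 400 nm probe wavelength (frequency-doubled 800 nm) and the 1100 kg·m⁻³ density are representative inputs taken from the text, and the printed values land near the reported order of magnitude rather than reproducing them exactly.

```python
# Acoustic celerity from the Brillouin frequency at normal incidence:
# T_B = lambda / (2 n v)  =>  v = f_B * lambda / (2 n)
lam = 400e-9        # blue probe wavelength (m), frequency-doubled 800 nm
n = 1.4             # cell optical index [9]
f_B = 33e9          # measured Brillouin (acousto-optic) frequency (Hz)
v = f_B * lam / (2.0 * n)      # ~4.7e3 m/s, i.e. ~4.7 nm/ps

rho = 1100.0        # cell density (kg/m^3), as assumed in the text
M = rho * v**2      # longitudinal modulus ("stiffness"), tens of GPa
print(f"v = {v:.0f} m/s, M = {M/1e9:.1f} GPa")
```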
Assuming a cell optical index equal to 1.4 [9], the acoustic celerity measured with the acousto-optic detection (square plots) is 4.4 and 3.7 nm/ps in cells 1 and 2, respectively, corresponding to acousto-optic frequencies of 33 and 26 GHz. Taking a cell density equal to 1100 kg·m⁻³, the measured stiffness (longitudinal modulus ρv²) of the cell nucleus is evaluated at between 15 and 21 GPa; comparable rigidity has already been measured in osteoblast cells on a titanium alloy substrate [10]. The interface displacement contribution (star plots) allows the measurement of the cell thicknesses, 300 and 140 nm for cells 1 and 2, respectively. The data obtained with the red probe do not permit the mechanical evaluation of the cells, for two reasons. For cell 1, the signal is very low, which may be attributed to a weak piezo-optic coefficient of the MC3T3 cell at this probe wavelength. Cell 2 is very thin compared with the red probe optical wavelength, so the probe is almost insensitive to the acousto-optic effect, and it is not possible to separate the acoustic celerity from the cell thickness in the detection.

Conclusion

Optical detection of acoustic frequencies higher than 30 GHz has been successfully performed in MC3T3 cells, allowing the determination of the acoustic celerity and thickness of single cells thinner than 150 nm. This technique suggests promising perspectives in single-cell imaging and biomedical applications as various as cancer studies and cell adhesion on biomaterials.
Incidence and Severity of Virus Diseases of Okra (Abelmoschus esculentus L. Moench) Under Different Mulching Types

Okra (Abelmoschus esculentus L. Moench) belongs to the family Malvaceae. There are four known domesticated species of Abelmoschus. Among these, A. esculentus (common okra) is the most widely cultivated, in South and East Asia, Africa, and the southern USA. In the humid zone of West African countries, A. caillei (West African okra), with a longer production cycle, is also cultivated [1]. Plants of A. manihot sometimes fail to flower, and this species is extensively cultivated for its leaves in Papua New Guinea (Farooq 2010), the Solomon Islands, and other South Pacific islands [2]. A plant that originated in Africa, okra is now cultivated in tropical, subtropical, and warm temperate regions around the world [1]. The economic importance of okra cannot be overemphasized; it is a widely cultivated fruit vegetable grown by subsistence farmers of the Guinea and Sudan savannah of West Africa (Kumar 2010). The soil should be pulverized, moistened, and enriched with organic matter before sowing, and it is recommended to plant okra on plains of sandy loam soil of pH 6.0 to 6.8 for excellent production, especially when well-treated organic mulch is incorporated [5]. Okra cultivation and production are widely practiced because of their importance to economic development, and okra can be found in almost every market in Africa (AVRDC, 2004). Okra is an important fruit vegetable crop and a source of calories (4550 kcal/kg) for human consumption, ranking first among other vegetable crops [6]. Okra contains carbohydrate, protein, and vitamin C in large quantities [7], and the essential and non-essential amino acids it contains are comparable to those of soybean. Eke et al. [8] also reported that fresh okra fruit is a good source of vitamins, minerals, and plant proteins; as a result, it plays a vital role in the human diet, and the young immature fruits can be consumed boiled, fried, or cooked.

The word mulch has probably been derived from the German word "molsch", meaning soft to decay, which apparently referred to the use of straw and leaves by gardeners as a spread over the ground [9]. Mulches are used for various reasons in agriculture, but water conservation and erosion control are the most important objectives, particularly in arid and semi-arid regions. Other reasons for mulching include soil temperature modification, weed control, and soil conservation; after decomposition, organic mulch adds plant nutrients, improves soil structure, and increases crop quality and yield. Mulching reduces the deterioration of soil by preventing runoff and soil loss, minimizes weed infestation, and reduces water evaporation [9]. Thus, it facilitates greater retention of soil moisture, helps control temperature fluctuations, improves the physical, chemical, and biological properties of the soil by adding nutrients, and ultimately enhances the growth and yield of crops [10]. In addition, mulch can effectively minimize water vapour loss, soil erosion, weed problems, and nutrient loss [11]. Organic mulches are efficient in reducing nitrate leaching, improve soil physical properties, prevent erosion, supply organic matter, regulate temperature and water retention, improve the nitrogen balance, take part in the nutrient cycle, and increase biological activity [12]. Natural materials, however, cannot easily be spread on growing crops and require considerable human labour [13].
Chen and Katan [14] also reported high water content in the top 5 cm of soil (an increase of 4.7 per cent in clayey, 3.1 per cent in loamy and 0.8-1.8 per cent in sandy soil) with polythene mulch. Das et al. (2000) observed that the use of polyethylene mulch in the field increased the soil temperature, especially in early spring, reduced weed problems, increased moisture conservation, reduced certain insect pest populations, and led to higher crop yield and more efficient use of soil nutrients. Abu-Awwad (2009) showed that covering the soil surface reduced the amount of irrigation water required by the pepper and the onion crop by about 14 to 29 and 70 per cent, respectively. Trials conducted in the higher potential areas of Zimbabwe indicated that mulching significantly reduced surface runoff and infiltration [15]. Therefore, the main objectives of this paper are to investigate the use of mulching as a cultural practice in ameliorating viral disease incidence and severity on okra, and to evaluate the effect of treatment combinations of mulching materials (dry grasses and polythene film).

Material and Methods

The experiment was conducted at the Teaching and Research Farm. The rainy season at the site lasts from June to October, with a brief dry spell which in most cases occurs in the second half of August. The peak rainfall period is June/July and September/October, while the short dry season lasts from November to December. The daily temperature ranges between 26 °C and 49 °C [16]. The site of the field experiment falls under an agro-ecological zone (AEZ). The topography of the land was medium high with sandy loam soil, and this area has been proven to be suitable for okra cultivation [17]. The experiment was laid out as a 3 × 4 factorial fitted into a randomized complete block design (RCBD) with three replicates. Each block consisted of 12 treatment combinations. The total planted land area measured 30 m × 15 m; block sizes measured 5 m × 15 m, with 1 m alleyways between replicates. The experimental field was partitioned into three blocks with mulching types within the plots. Each experimental plot consisted of 24 ridges, each 5 m long. The mulching types were at three levels, namely: no mulching, plastic (polythene) mulching, and organic mulching (dry grass). Each treatment was replicated three times and was randomly assigned to each plot. All data on growth, yield and disease parameters were collected weekly in the morning, as due.

Results

Analysis of the results on mulching types showed that at the 3rd and 4th week after planting, the highest incidence was recorded under no mulch (13.21% and 23.7%, respectively), while the polythene mulching type had the lowest percentage incidence (3.29% and 5.66%, respectively). This implied that the no-mulch and dry-grass levels were not significantly different from each other, but both were significantly different from the polythene mulching type. At week 5, there was a significant difference between the regimes where dry grass mulching was applied and the polythene mulching regimes, while there was no significant difference between the no-mulch regime and the dry grass mulching regimes. The values from weeks 6 and 7 followed the same trend: at weeks 6 and 7, there was a significant difference between the no-mulch regimes and the polythene mulching regimes, while there was no significant difference between the no-mulch regimes and the dry grass mulching type (Table 1).
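As a rough illustration of how percentage incidence values like those in Table 1 could be derived and compared across mulching types, the following Python sketch applies the standard incidence formula (infected plants divided by total plants, times 100) and runs a one-way ANOVA. The plot counts below are hypothetical placeholders, not the data observed in this study.

```python
# Hedged sketch: percentage disease incidence per plot and a one-way
# ANOVA across mulching types. All counts are hypothetical placeholders.
from scipy.stats import f_oneway

def incidence_pct(n_infected, n_total):
    """Disease incidence (%) = infected plants / total plants * 100."""
    return 100.0 * n_infected / n_total

# Hypothetical infected counts out of 40 plants per plot, 3 replicates each.
plots = {
    "no_mulch":  [(9, 40), (11, 40), (8, 40)],
    "dry_grass": [(8, 40), (9, 40), (10, 40)],
    "polythene": [(2, 40), (3, 40), (1, 40)],
}
groups = {t: [incidence_pct(i, n) for i, n in reps] for t, reps in plots.items()}

for treatment, vals in groups.items():
    print(treatment, [round(v, 2) for v in vals])

# One-way ANOVA across the three mulching types.
f_stat, p_val = f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```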
The main effect of the mulching types in Table 2 showed that at the 3rd week after planting, no mulch had the highest percentage severity (17.72%), followed by dry grasses (16.57%), while polythene (5.36%) had the lowest severity. This implied that there was no significant difference between the no-mulch and dry-grass levels, while the polythene mulch was significantly different from both no mulch and dry grasses.

Discussion

Farmers are continually developing a stronger interest in okra production given its potential as an economic crop and its ability to grow optimally in the absence of fertilizers. The mulching treatment combinations tried in this study had effects on the incidence and severity of viral diseases. This study showed that the incidence of virus diseases was lowest in the treatment combination where polythene mulching was applied and highest under no mulch. It therefore implied that low virus incidence existed under the polythene mulch. This assertion is in agreement with Alegbejo [18], who reported that viral incidence decreases progressively with weeding regime. This development must have been a result of polythene inhibiting the growth of weeds, which usually serve as an abode for potential vectors of the viruses [19]. Viral incidence was observed to be highest in the regime that was not weeded and lowest in the regime that was weeded thrice. The high incidence recorded was attributed to high weed interference. This is in conformity with Hooks et al. (2012), who reported that weeds act as reservoirs for insects, disease agents and nematodes. This study also showed that the interactive effect of mulching could effectively reduce the incidence of virus diseases, but it is better determined by the polythene mulching type; this accords with the report of Holland [20], who found that a polythene-mulched rhizosphere had the greatest potential to aid the growth and development of herbaceous plants combating viral diseases. Also, yield reduction due to insect pests was estimated at 89.7-91.6% in the regime with dry grasses compared with no mulch and polythene mulch; this assertion is in agreement with Aiyelaagbe and Jolaoso [21], who reported that damage by insect pests on okra can be as high as 80-100% if not effectively controlled. This study showed that Okra mosaic virus (OkMV) was the most virulent virus and tested positive irrespective of the control measures applied; this was so because OkMV has the widest host range [18]. According to Alegbejo, OkMV's epidemiology is premised on early rains with intermittent dry and wet spells; other conditions that favour OkMV are warm weather and the availability of abundant vectors and alternative hosts. Moreover, the study also showed that a treatment combination of polythene mulching and weeding thrice produced the highest yield parameters. This suggests that while weeding could be effective in viral disease control, its effect is better realized on polythene mulch. This could be explained by the protection provided by polythene against insects harboured in the alternative hosts (weeds) surrounding the okra plants, and this corroborates data obtained by Bhardwaj [13,22,23].
Regional Public Service Agency's Financial Management Implementation in Walanda Maramis North Minahasa Public Hospital

Regional Public Service Agency (BLUD) financial management is one of the government's new policies intended to improve financial performance and public services at Walanda Maramis North Minahasa Public Hospital. The purpose of this research is to analyze the Regional Public Service Agency's financial management in Walanda Maramis North Minahasa Public Hospital (before and after implementation). This research uses a qualitative approach with a descriptive analysis method; secondary data were sourced from the 2020 and 2021 financial reports, journal articles and reference books, complemented by interview instruments. The Balanced Scorecard and the Miles and Huberman analysis model are the techniques used to comprehensively measure financial and non-financial performance. The results show that: 1) Walanda Maramis North Minahasa Public Hospital has implemented the Regional Public Service Agency's financial management in the forms of governance, accountability and transparency; and 2) performance evaluation consists of three aspects, namely financial performance, operational service performance, and service quality improvement performance, which contribute to public services and welfare; the hospital also obtained a "Healthy" financial performance grade with a score of 74.15.

Introduction

Reforms in the field of financial management mandate a shift in the budgeting system from traditional to performance-based budgeting, so that the use of government and public funds becomes output-oriented. For this purpose, the government strives to conduct performance assessments of institutions and organizations, which should apply not only to profit-oriented institutions but also to non-commercial institutions, including Regional General Hospitals (RSUD). Government Regulation Number 12 of 2019 explains that a BLUD is an agency within the government formed to provide services to the community in the form of goods and/or services that are sold without prioritizing profit-seeking, carrying out its activities based on the principles of efficiency and productivity. Permendagri Number 77 of 2020 concerning Regional Public Service Agencies (BLUD) is a form of legal certainty accompanying the development of laws and regulations regarding BLUDs, serving as technical guidance for local governments in financial management. The implementation of the BLUD financial management pattern has not been able to run optimally everywhere. This is due to remaining obstacles in the BLUD's internal and external environment. Internally, BLUDs are still constrained by the limited quality and quantity of human resources who understand BLUD operations. The lack of understanding of the BLUD financial management pattern has resulted in wrong assumptions in various respects. The flexibility of financial management owned by a BLUD causes the BLUD to be equated with regionally owned enterprises (BUMD). Flexibility in financial management at hospitals is expected to improve service performance and financial performance, so that hospitals are able to provide optimal health services and can compete with competitors; however, BLUDs cannot be equated with BUMDs, which carry out their activities for profit.
Therefore, it is very important for local governments to understand the BLUD financial management pattern thoroughly. BLUD internal problems can be managed by conducting training for the human resources involved in the BLUD financial management pattern, or by adding human resources with the competencies required by that pattern. One strategy for serving the public well is to give the regional apparatuses that operationally provide services to the community flexibility in their financial management. This flexibility of financial management requires good governance. The Regional General Hospital (RSUD) Maria Walanda Maramis, North Minahasa Regency, is required to comply with all government regulations related to the implementation of financial management, and is required to be transparent and accountable in the use of state money and public funds. So far, transparency and accountability in the financial management of the Maria Walanda Maramis Hospital in North Minahasa Regency have been implemented by presenting an accountability report on the use of funds, submitted to several external parties, namely the Ministry of Finance and the Ministry of Health. However, the public, who are among the interested parties, have so far not received information on the management of these funds. The task of managing finances and health services becomes very difficult given the demands for financial transparency and accountability that must be published in the media, as well as in deciding the standard amount of health costs that must be communicated to the public, including the overall need for health services. As an institution that is capital-intensive, labor-intensive, and science- and technology-intensive, this hospital requires professionals skilled in modern business management. Through the Financial Management Pattern of the Public Service Agency (PPK-BLUD), Maria Walanda Maramis Regional General Hospital, North Minahasa Regency, is expected to be able to improve the performance of its services to the community in order to promote general welfare and educate the nation, by being given flexibility in financial management based on the principles of economy and productivity and the application of sound business practices.

Public Service Agency Concept

Law of the Republic of Indonesia Number 1 of 2004 concerning the State Treasury, Article 1, states that a Public Service Agency is an agency within the government formed to provide services to the community in the form of goods and/or services that are sold without prioritizing profit-seeking, carrying out its activities based on the principles of efficiency and productivity. This concept was re-adopted in its implementing regulations, namely in Article 1 number 1 of PP No. 12 of 2019 concerning Financial Management of Public Service Agencies, and in Minister of Home Affairs Regulation Number 77 of 2020 concerning Regional Public Service Agencies (BLUD). This is a form of legal certainty, with the development of legislation regarding BLUDs serving as a guide for local governments in managing BLUD finances. Agencies included among BLUDs are hospitals, health centers, educational institutions, licensing services, and broadcasting. Good governance is characterized by strict supervision, improved public sector performance and the handling of corruption.
Good governance can also improve organizational leadership, management and oversight, result in more effective interventions, and ultimately lead to better outcomes and improved lives.

Hospital Financial Management

Government Regulation Number 12 of 2019, which regulates the Regional Public Service Agency (BLUD), emphasizes that an RSUD must make many adjustments, especially in technical financial and budget management, including cost determination. With flexible financial management, the hospital is required to become an inexpensive, quality public service institution. Based on the concept of PP No. 12 of 2019, the government hospital has undergone a change into a Regional Public Service Agency. This institutional change shifts financial accountability from the Ministry of Health to the Ministry of Finance. Financial reporting must comply with Financial Accounting Standards, so technical financial management must also be carried out with reference to the principles of accountability, transparency and efficiency. The budget prepared by the hospital must also be prepared on a performance basis. Based on these principles, the technical aspects of financial management need to be supported by a good and sustainable relationship between the hospital, the government and stakeholders, especially in determining the cost of health services, which covers unit cost, efficiency and service quality. It should also be ensured that audits or examinations by an independent party cover not only financial reporting but also clinical audits. The things that a hospital must prepare to become a BLUD in the financial aspect are:

a. Tariff determination must be based on unit cost and service quality. Hospitals must be able to carry out cost tracing for the determination of all tariffs stipulated for services. So far, tariff determination has been based on budgets or government subsidies, so there is still a cost culture that does not support improving performance or service quality. The preparation of hospital tariffs should be based on unit cost and the market (consumers' ability to pay and the chosen strategy). The tariff is expected to cover all costs, apart from the expected subsidies.

b. Budgeting must be based on cost accounting, not only on subsidies from the government. Thus, budgeting must be based on indicators of input, process and output.

c. Financial reports must be prepared in accordance with PSAK 45, which is issued by the professional accounting organization, and must be ready to be audited by an independent accounting firm rather than by the government.

d. An indicator-based and evidence-based remuneration system must be prepared. In preparing the remuneration system for regional public hospitals, the rationale is that remuneration is tiered: level one is the basic salary, a means of guaranteeing security for employees; the basic salary is not affected by hospital income. The second level is incentives, a means of motivating employees; the provision of these incentives is strongly influenced by hospital income. The third level is a bonus, a means of rewarding employees; the provision of this bonus is strongly influenced by the level of hospital profits.
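To make this three-tier remuneration structure concrete, the following Python sketch is a toy illustration of how total remuneration could be composed. The rate parameters are hypothetical, invented for illustration only; they are not drawn from any regulation or from the hospital's actual remuneration scheme.

```python
# Toy illustration of the three-tier BLUD remuneration structure described
# above. `incentive_rate` and `bonus_rate` are hypothetical parameters.
def remuneration(base_salary, hospital_income, hospital_profit,
                 incentive_rate=0.001, bonus_rate=0.002):
    # Level 1: basic salary, fixed and independent of hospital income.
    level1 = base_salary
    # Level 2: incentive, driven by hospital income.
    level2 = incentive_rate * hospital_income
    # Level 3: bonus, driven by hospital profit (no bonus when profit <= 0).
    level3 = bonus_rate * max(hospital_profit, 0)
    return level1 + level2 + level3

print(remuneration(base_salary=5_000_000,
                   hospital_income=2_000_000_000,
                   hospital_profit=150_000_000))
```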
With the implementation of the institutional change to become a Regional Public Service Agency, it is hoped that, in the technical financial aspect, the hospital will provide quality assurance and cost certainty, leading to better health services. In order to improve the performance of regional budgets, one important aspect is regional financial management and regional budgeting. For this reason, regional financial management is needed that is able to steer regional financial policies economically, efficiently, effectively, transparently and accountably. The principles underlying the management of state/regional finances should always be firmly adhered to and implemented by government administrators, because the community fundamentally holds basic rights over the government.

Hospital Performance Assessment System

A performance appraisal system based on indicators is one of the tools that can be used to continuously assess the activities of a BLU hospital. As a regionally owned hospital, the BLUD hospital must be able to provide information that describes the hospital's progress over a given period.

Relevant Previous Research

Previous empirical research used as study material in this study is as follows. Meidyawati, in her research entitled "Analysis of the Implementation of Financial Management Patterns for Public Service Agencies (PPK-BLU) at the Bukittinggi National Stroke Hospital", found that the hospital had implemented a BLU financial management pattern in the forms of governance, accountability and transparency. The assessment of the National Stroke Hospital was viewed from three aspects, namely finance, service operations and service quality improvement; it was very beneficial for service to the community, obtaining a performance value of "A" with a score of 79.20, meaning "Healthy". Indarto Waluyo, in his research entitled "Public Service Agency: A New Pattern in Financial Management in Government Work Units", found that implementing the Public Service Agency (BLU) management pattern increases the potential of government work units to serve the public effectively and efficiently, based on specific arrangements for government work units that deliver community services in various forms. Freddy Semuel Kawatu (2020) found that the financial performance of Manado State University from 2017 to 2019 showed a positive increasing trend with an average level of financial independence of 16.43%, while the growth of financial ratios, income and expenditure fluctuated, and SILPA increased every year; this indicates that the capability and financial performance of Manado State University were still not optimal, so further improvements in its financial management are necessary. Madjid et al. (2009) examined the financial performance of 69 hospitals belonging to the central and local governments; the results showed that, in general, the average current ratio, quick ratio and debt ratio were quite good, but many BLUD RSUDs had ratio figures below rather than above the average.

Research Methods

This study uses a qualitative approach with a descriptive analysis method.
Data for this field research were collected by observation and interviews, as well as by providing lists of questions (questionnaires) and structured statements to the respondents: hospital leaders, employees and other stakeholders (hospital service users). Secondary sources, used as comparison material, were obtained from the Financial Performance Report and the Service Operational Performance report of the Maria Walanda Maramis Hospital in North Minahasa Regency. Secondary data were also obtained from journals, literature, related reports and other written works related to this research. The data analysis technique uses Balanced Scorecard analysis (this study did not use the deductive method, so no hypothesis was needed). After the data were collected, data analysis was carried out using descriptive analysis methods.

Analysis of Governance Pattern Implementation

Based on the results of the analysis, the implementation of the governance pattern at the Regional General Hospital (RSUD) Maria Walanda Maramis, North Minahasa, has been going quite well, but there are still weaknesses related to the following:

a. The organization and management that have been built have not fully paid attention to organizational needs and to mission and strategy development, and have not changed the work-culture paradigm of the organizational units in the Maria Walanda Maramis Hospital, North Minahasa. The hospital organization as a whole is not ready to change its paradigm from civil service to entrepreneurship.

b. In the implementation of accountability, not all proposals from the work units can be fulfilled, so the implementation of the main tasks and functions of the work units has not reached the maximum target. Furthermore, there are still follow-up programs from the Ministry of Health that must be carried out by hospitals, which require coordination in their implementation and time to realize. In addition, the programs that have not achieved the planned targets have not been evaluated for their causes and constraints.

c. The formulation of targets has not been in line with the formulation of policies; this can be seen in the hospital management system policy, which starts from the four perspectives of the balanced scorecard (financial, customer, internal business, and learning and growth).

Analysis of Financial Report Implementation

The accounting system and financial reports of regional public service agencies (BLUD) are organized in accordance with the Financial Accounting Standards (SAK) issued by the professional accounting association. If there is no applicable accounting standard, the BLUD can apply industry-specific accounting standards after obtaining approval from the Minister of Finance. The BLUD accounting and financial reporting system is regulated in a Minister of Finance regulation. Consolidation of BLUD financial statements with the financial statements of ministries/institutions is carried out in accordance with Government Accounting Standards (SAP), accompanied by financial statements prepared in accordance with SAK. The BLUD's annual financial report is audited by an external auditor. BLUD financial reports include budget realization reports/operational reports (activity reports), balance sheets, cash flow reports, notes to the financial statements, and performance reports. The financial statements of the Maria Walanda Maramis Hospital, North Minahasa, have been audited annually by an independent auditor, and since becoming a BLUD in 2021 it has not yet received an unqualified opinion.
The financial statements prepared by the Maria Walanda Maramis Hospital, North Minahasa, were in accordance with the Regulation of the Minister of Finance and the Hospital BLUD Accounting Guidelines. However, some limitations and obstacles remain in the preparation of these financial statements, namely: 1) BLUs are required to compile financial reports under SAK, which is accrual-based, and under SAP, which is cash-based, for consolidation purposes; the two have different accounting systems and accounts, which makes it difficult for hospitals to make adjustments for consolidation with the financial statements of ministries/institutions, so that consolidation can only be carried out on the balance sheet accounts. Moreover, the Maria Walanda Maramis North Minahasa Regional General Hospital has not yet developed a cost accounting system to produce information on the cost of services provided, unit cost per service unit, and variance evaluation, which are very important for planning and control, decision making, and the calculation of service tariffs and remuneration. 2) The review of financial reports conducted by the Internal Audit Unit (SPI) is still not optimal because the SPI has not been fully supported by human resources who meet the competency qualifications to review financial statements.

Performance Analysis of Maria Walanda Maramis Hospital

Overall, after becoming a BLUD, the performance score obtained by the Maria Walanda Maramis Hospital, North Minahasa, increased: by 1.65 points in the first year and 3.20 points in the second year. Although the increase has not been significant, the hospital remains in the "Healthy" category with an A grade. The implementation of PPK-BLUD at the Maria Walanda Maramis Hospital has only been running for almost one year, the hospital having become a gradual BLUD in 2020 and a full BLUD in 2021. The hospital's BLUD status was obtained without being preceded by the readiness of all hospital parties to make the various changes intended by the government's goal of making the hospital a BLUD, so the necessary changes and adjustments have been slow and gradual. Improvements to the performance data collection system need to be carried out, especially to produce accurate and reliable performance scores for decision making. Increasing the value of financial performance, services, service quality and benefits to the community cannot happen by itself, because it is closely related to other aspects such as increasing transparency and accountability, implementing good governance, improving the quality of human resources, placing employees in accordance with the required competencies, good and orderly resource management, and the reliability of performance data sources. Furthermore, professional management support with a commitment to continuously improving performance is urgently needed. Although there has not been a significant change in the hospital's performance indicator values, PPK-BLUD has provided benefits for the smooth delivery of services to patients, including: 1) PPK-BLUD provides flexibility in the use of funds: the hospital can use the funds obtained from its operations without having to first deposit them into the state treasury and go through a long and time-consuming bureaucratic disbursement procedure, which would otherwise disrupt hospital operations when funds run out.
2) PPK-BLUD simplifies the process of procuring goods and services, especially medicines and consumables, which routinely must be available quickly, because hospitals can make purchases directly from distributors; they can thus get cheaper prices and official invoice discounts, which makes the selling price of drugs charged to patients cheaper. 3) PPK-BLUD gives the hospital the flexibility to cooperate with third parties in the form of joint operations (KSO) or memoranda of understanding (MoU). With a KSO/MoU, the process of obtaining equipment becomes easier and does not require long bureaucracy, and if the equipment provided under the KSO is damaged or disrupted, the company will immediately repair or replace it, so that the smooth running of services to patients is not disturbed.

Based on the calculations, the current ratio of the Maria Walanda Maramis Hospital is 3.36, meaning that total current assets are 3.36 times short-term liabilities, or that each Rp 1.00 of short-term liabilities is guaranteed by Rp 3.36 of current assets. Judged against the hospital standard, where the normal value of the current ratio is 1.75-2.75, the current ratio of the Maria Walanda Maramis Hospital is good compared with the standard. This can be interpreted to mean that the ability of the Maria Walanda Maramis Hospital, North Minahasa Regency, to cover its current liabilities is very good. The calculations also show that the total debt-to-equity ratio is 0.188. The smaller the ratio of liabilities to equity, the better the company's ability to survive in bad conditions and still meet its obligations to creditors. A debt-to-equity ratio of 1 indicates that liabilities and owner's equity have the same value; in other words, if the company incurs a loss equal to the amount of its liabilities, the company's total assets remaining for creditors will equal the amount of their claims on those assets. Thus, it can be said that the debt-to-equity ratio of RSUD Maria Walanda Maramis, North Minahasa Regency, is still quite good. The calculations further show that total assets turnover at the Maria Walanda Maramis Hospital is 0.053, which means that for every Rp 1.00 of assets, the assets turn over 0.053 times a year. The standard value of total assets turnover is 0.9 to 1.1 times. Thus, it can be said that total assets turnover at the Maria Walanda Maramis Hospital is still low.

Conclusion

This study aims to determine the implementation of the Regional Public Service Agency financial management pattern at the Maria Walanda Maramis Hospital, especially in terms of two aspects, namely financial performance and service operational performance. The implementation of the Regional Public Service Agency financial management at the Maria Walanda Maramis Hospital has been running effectively according to the regulations of the Ministry of Finance and the Ministry of Health, although it has not been optimally useful for public services. The financial performance results are in accordance with the guidelines for evaluating the performance of Regional Public Service Agencies (BLUD) in the field of health services: the current ratio of the Maria Walanda Maramis Hospital is 3.36, meaning that total current assets are 3.36 times short-term liabilities, or that each Rp 1.00 of short-term liabilities is guaranteed by Rp 3.36 of current assets.
The total debt-to-equity ratio of the hospital is 0.188. The smaller the ratio of liabilities to equity, the better the company's ability to survive in bad conditions and still meet its obligations to creditors; the debt-to-equity ratio of RSUD Maria Walanda Maramis, North Minahasa Regency, can therefore be said to be still quite good. Total assets turnover at the Maria Walanda Maramis Hospital is 0.053, meaning that for every Rp 1.00 of assets, the assets turn over 0.053 times a year; against the standard of 0.9 to 1.1 times, total assets turnover at the hospital is still low. Service operational performance from the customer perspective, based on the balanced scorecard at the Maria Walanda Maramis Hospital in North Minahasa Regency, showed that most respondents (66.45%) said they were satisfied, while 3% stated they were very dissatisfied, 10.11% were not satisfied, and the remaining 20.44% said they were very satisfied. The highest satisfaction value was for the friendliness of the officers and service announcements, while the lowest satisfaction value was for the complaint service variable. Viewed as a whole, the average customer satisfaction value is 3.07, which means the service is very good.
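As a minimal illustration of the three ratio computations reported above, the following Python sketch applies the standard formulas to hypothetical balance-sheet figures. The inputs are placeholders chosen only so the outputs match the reported ratios; they are not the hospital's actual figures.

```python
# Hedged sketch of the standard liquidity, solvency and activity ratios
# discussed above. All balance-sheet inputs are hypothetical placeholders.
def current_ratio(current_assets, current_liabilities):
    # Current ratio: rupiah of current assets backing each rupiah of
    # short-term liabilities.
    return current_assets / current_liabilities

def debt_to_equity(total_liabilities, total_equity):
    # Debt-to-equity ratio: leverage relative to owners' equity.
    return total_liabilities / total_equity

def total_assets_turnover(operating_revenue, total_assets):
    # Total assets turnover: how many times assets turn over per year.
    return operating_revenue / total_assets

# Hypothetical figures in rupiah, reverse-engineered to reproduce the
# reported values of 3.36, 0.188 and 0.053.
print(current_ratio(33_600_000_000, 10_000_000_000))          # -> 3.36
print(debt_to_equity(18_800_000_000, 100_000_000_000))        # -> 0.188
print(total_assets_turnover(5_300_000_000, 100_000_000_000))  # -> 0.053
```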
Phylogenetic Insight into the Nonribosomal Peptide Synthetase (NRPS) Adenylation Domain of the Antibacterial Streptomyces BDUSMP 02 Isolated from the Pitchavaram Mangrove

Identification of gene clusters in Streptomyces holds promise for the discovery of regulatory pathways linked to bioactive metabolites. We isolated Streptomyces sp. BDUSMP 02, a strain with broad-spectrum antibacterial potential, from mangrove sediment. We further found a distinct phylogenetic pattern for the NRPS A-domain in Streptomyces sp. BDUSMP 02. The results suggest that Streptomyces sp. BDUSMP 02 has the potential to produce a new type of antibacterial compound of the NRPS type.

Background

In the last five decades, natural compounds produced by actinobacteria have been used enormously to develop most of the common antibiotics commercialized by pharmaceutical industries [1]. Given this, the isolation of actinomycetes from the unexplored marine environment has attracted particular attention due to the structural diversity and distinct bioactivities of the secondary metabolites they produce [2]. For example, Salinispora, a genus of actinomycetes, was first isolated from ocean sediments [3]. Mangrove forests are located in the tidal zones of tropical and subtropical regions [4]. Bissett et al. (2007) reported that mangrove sediments are known to contain high organic content, which favours the rapid development of species diversity corresponding to environmental variation [5]. The exploitation of mangrove actinomycetes for bioactive compounds has increased dramatically [6][7][8]. Streptomyces sp. isolated from mangrove ecosystems have been able to grow in freshwater, brackish water and seawater, which suggests that they are adapted to various environmental conditions due to the water current [7]. Besides, they could be a starting point for studying the evolution of gene clusters responsible for the biosynthesis of novel antibiotics because of their adaptation to extraordinarily salty and marshy conditions [7]. It is evident that gene clusters in Streptomyces are likely to encode natural product biosynthetic pathways in sequenced microbial genomes [9]. The biosynthetic potential of different strains isolated from various sources can be approximated by detecting the genes involved in the synthesis of secondary metabolites, such as those for a polyketide synthase (PKS) and a non-ribosomal peptide synthetase (NRPS). Non-ribosomal peptide synthetases (NRPSs) are megaenzymes, usually with a multimodular structure, which catalyze the non-ribosomal assembly of peptides from proteinogenic and non-proteinogenic amino acids. Komaki and Harayama (2006) reported that the DNA sequences of these genes could be used to predict the chemical nature of the compounds [14]. The biological functions of NRPSs, exerted via the synthesized compounds, are associated with the chemical nature of the peptide, which is correlated with the gene sequence [11]. Therefore, it is crucial to study the phylogeny of NRPS in potential actinomycetes, as this would provide new opportunities for drug discovery.

Materials and Methods

Isolation and identification of actinomycetes: Soil samples were collected from mangrove sediment at Pitchavaram (latitude 11.4° N, longitude 79.8° E), Tamil Nadu, India, in sterile airlock polythene bags and transported to the laboratory according to a previously described method [6].
One gram of each air-dried spot soil sample was added to 9 ml of sterile water and subjected to a selective pretreatment of dry heat at 56 °C for 10 min to effectively increase the number of mycelium-forming actinomycetes relative to the non-actinomycetal heterotrophic microbial flora. After that, the samples were vigorously shaken and further diluted up to 10^-6 in sterile water. 100 µl of each diluted sample was spread with a sterile glass rod onto humic acid-vitamin B agar (HV) medium (Hayakawa & Ohara, 1987) supplemented, after autoclaving, with the antibiotics cycloheximide (40 µg/ml), nystatin (30 µg/ml) and nalidixic acid (10 µg/ml) to inhibit fungal and non-filamentous bacterial growth. The inoculated plates were incubated at 30 °C for ten days or until the appearance of colonies with a tough leathery texture, a dry or folded appearance, and branching filaments with or without aerial mycelia.

Phylogenetic analysis of the NRPS adenylation domain: BLAST network services at the NCBI were used to analyze the resulting NRPS gene sequence [16]. Multiple alignments were performed using CLUSTAL_X version 1.8 [17]. The phylogenetic tree was inferred with the neighbor-joining method using the MEGA 6.0 software package [18]. The unrooted phylogenetic tree topology was evaluated by the bootstrap resampling method with 1000 replicates [19].

Results and Discussion

Isolation and characterization of the mangrove actinomycete: The results of the morphological, physiological and biochemical characterization of strain BDUSMP 02 are shown in Table 1. The cell wall of the strain was found to contain LL-diaminopimelic acid (chemotype I), which is characteristic of the genus Streptomyces. Phylogenetic analysis of the 16S rRNA gene sequence (1388 bp) of strain BDUSMP 02 revealed that the isolate belongs to the genus Streptomyces. The 16S rDNA sequence has been deposited in the GenBank database under Accession No. KF918272.1. Based on the morphological, physiological and biochemical characterization and the 16S rDNA sequence analysis, the isolate was named Streptomyces sp. BDUSMP 02. In agreement with previous reports, the results presented in this study indicate that bioactive secondary metabolite production by the mangrove sediment actinomycete, and the gene clusters responsible for its biosynthesis, could at a later stage be taken into the molecular biology of natural product research.

NRPS gene adenylation domain (A-domain): The Streptomyces sp. BDUSMP 02 non-ribosomal peptide synthetase gene, partial cds, has been deposited in the GenBank database under Accession No. KJ598809.1. The amino acid sequences corresponding to the nucleotide sequences of the amplified NRPS A-domain showed conserved motifs, as shown in Figure 1. Three core motifs were identified in the amplified 450 bp fragments of the A-domain: A2 (TGxPKGV), A3 (FD) and A4 (NxYGPTE). The NRPS A-domain was best matched with previously reported Streptomyces. The resulting amino acid sequences shared low similarities with those available in databanks. Liu et al. (2019) reported that Streptomyces isolated from mangrove sediment harbour NRPS genes involved in the synthesis of antibacterial compounds. Secondary metabolite production in Streptomyces is growth dependent and involves the expression of physically clustered regulatory and biosynthetic genes under a tightly regulated mechanism [21].
Similarly, in the present study, the biosynthetic NRPS gene sequences provided valuable genomic-based information in parallel with the antimicrobial activity of the isolate. Our results thus proved the presence of NRPS genes, in support of the bioassay-guided analysis of antibacterial activity. Figure 2 presents the phylogenetic tree of Streptomyces sp. BDUSMP 02 based on the NRPS A-domain amino acid sequence. The NRPS A-domain amino acid sequence of the isolate Streptomyces sp. BDUSMP 02 showed low identity to sequences from various Streptomyces sp. In good agreement with the bioassay-guided identification of antibacterial properties, this isolate had functional NRPS genes in its putative gene cluster responsible for the synthesis of antibacterial compounds. Interestingly, the strain's NRPS A-domain shared similarity with Streptomyces avermitilis. It is therefore desirable to isolate the secondary metabolite with antibacterial properties and relate it to its functional genes.

Conclusion

We describe a Streptomyces sp. from the mangrove environment as a promising source of novel antibacterial compounds. There is increasing interest in the characterization of gene clusters, which mainly contain NRPS, PKS and NRPS/PKS, in addition to culture-dependent experimentation for distinct bioactivities. We found an NRPS adenylation domain in the potential isolate, which can be further explored for drug discovery using a genome mining approach.
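The CLUSTAL_X and MEGA 6.0 workflow described in the Methods can be approximated in scripted form. The Python sketch below uses Biopython to build a neighbor-joining tree from an existing A-domain amino acid alignment; the input file name is a hypothetical placeholder, and the identity-based distance model is a simplification of the substitution model actually used in MEGA.

```python
# Hedged sketch: neighbor-joining tree from a pre-computed amino acid
# alignment, approximating the CLUSTAL_X + MEGA 6.0 workflow described
# above. "adomain_aln.fasta" is a hypothetical placeholder file holding
# the aligned NRPS A-domain sequences.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Load the multiple sequence alignment (e.g., exported from CLUSTAL_X).
alignment = AlignIO.read("adomain_aln.fasta", "fasta")

# Pairwise distances; 'identity' is a simple stand-in for the substitution
# model used in the original MEGA analysis.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# Neighbor-joining tree construction.
constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)

# Quick text rendering of the unrooted topology.
Phylo.draw_ascii(nj_tree)
```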
On Model Identification and Out-of-Sample Prediction of Principal Component Regression: Applications to Synthetic Controls

We analyze principal component regression (PCR) in a high-dimensional error-in-variables setting with fixed design. Under suitable conditions, we show that PCR consistently identifies the unique model with minimum $\ell_2$-norm. These results enable us to establish non-asymptotic out-of-sample prediction guarantees that improve upon the best known rates. In the course of our analysis, we introduce a natural linear algebraic condition between the in- and out-of-sample covariates, which allows us to avoid distributional assumptions for out-of-sample predictions. Our simulations illustrate the importance of this condition for generalization, even under covariate shifts. Accordingly, we construct a hypothesis test to check when this condition holds in practice. As a byproduct, our results also lead to novel results for the synthetic controls literature, a leading approach for policy evaluation. To the best of our knowledge, our prediction guarantees for the fixed design setting have been elusive in both the high-dimensional error-in-variables and synthetic controls literatures.

Introduction

We consider error-in-variables regression in a high-dimensional setting with fixed design. Formally, we observe a labeled dataset of size n, denoted as {(y_i, z_i) : i ≤ n}. Here, y_i ∈ R is the response variable and z_i ∈ R^p is the observed covariate. For any i ≥ 1, we posit that

$$y_i = \langle x_i, \beta^* \rangle + \varepsilon_i, \qquad (1)$$

where β* ∈ R^p is the unknown model parameter, x_i ∈ R^p is a fixed covariate, and ε_i ∈ R is the response noise. Unlike traditional settings where z_i = x_i, the error-in-variables (EiV) setting reveals a corrupted version of the covariate x_i. Precisely, for any i ≥ 1, let

$$z_i = \pi_i \bullet (x_i + w_i), \qquad (2)$$

where w_i ∈ R^p is the covariate measurement noise, π_i ∈ {1, NA}^p is a binary mask with NA denoting a missing value, and • is the Hadamard product. Further, we consider a high-dimensional setting where n and p are growing, with n possibly smaller than p. We analyze the classical method of principal component regression (PCR) within this framework. PCR is a two-stage process: first, PCR "de-noises" the observed in-sample covariate matrix Z = [z_i^T] ∈ R^{n×p} via principal component analysis (PCA), i.e., PCR replaces Z by its low-rank approximation. Then, PCR regresses y = [y_i] ∈ R^n on the low-rank approximation to produce the model estimate β̂. The focus of this work is to answer the following questions about PCR:

Q1: "When p > n, is there a model parameter that PCR consistently identifies?"
Q2: "Given deterministic, corrupted, and partially observed out-of-sample covariates, can PCR recover the expected responses?"

Contributions

Model identification. Regarding Q1, we prove that PCR consistently identifies the projection of the model parameter onto the linear space generated by the underlying covariates. This corresponds to the unique minimum ℓ_2-norm model, which is arguably sufficient for valid statistical inference (Shao and Deng, 2012).

Out-of-sample prediction. For Q2, we leverage our results for Q1 to establish non-asymptotic out-of-sample prediction guarantees that improve upon the best known rates. Notably, these results are novel for the fixed design setting. In the course of our analysis, we introduce a natural linear algebraic condition between the in- and out-of-sample data that supplants distributional assumptions on the underlying covariates that are common in the literature.
We construct a hypothesis test to check when this condition holds in practice. We also illustrate the importance of this condition through extensive simulations.

Applications to synthetic controls. Our responses to Q1-Q2 lead to novel results for the synthetic controls literature, a popular framework for policy evaluation (Abadie and Gardeazabal, 2003; Abadie et al., 2010). In particular, our results provide theoretical guarantees for several PCR based methods, namely Amjad et al. (2018, 2019). To the best of our knowledge, we provide the first counterfactual ℓ_2-prediction guarantees for the entire counterfactual trajectory in a fixed design setting for the synthetic controls literature. We apply our hypothesis test to two widely analyzed studies in the synthetic controls literature.

Organization

Section 2 details the PCR algorithm. Section 3 describes our problem setup and assumptions. Section 4 provides formal statistical guarantees on Q1-Q2. Section 5 reports on simulation studies. Section 6 presents a hypothesis test to check when a key assumption that enables PCR to generalize holds in practice. Section 7 contextualizes our findings within the synthetic controls framework. Section 8 discusses related works from the error-in-variables, PCR, and functional PCA/PCR literatures. Section 9 offers directions for future research. We relegate all mathematical proofs to the Appendix.

Notation

For a matrix A ∈ R^{a×b}, we denote its operator (spectral), Frobenius, and max element-wise norms as ∥A∥_2, ∥A∥_F, and ∥A∥_max, respectively. By rowspan(A), we denote the subspace of R^b spanned by the rows of A. Let A† denote the pseudoinverse of A. For a vector v ∈ R^a, let ∥v∥_p denote its ℓ_p-norm. We denote the sub-gaussian (Orlicz) norm as ∥v∥_{ψ2}. Let ⟨·, ·⟩ and ⊗ denote the inner and outer products, respectively. For any two numbers a, b ∈ R, we use a ∧ b to denote min(a, b) and a ∨ b to denote max(a, b). Let [a] = {1, ..., a} for any positive integer a. Let f and g be two functions defined on the same space. We say that f(n) = O(g(n)) if and only if there exists a positive real number M and a real number n_0 such that for all n ≥ n_0, |f(n)| ≤ M|g(n)|. Analogously, we say f(n) = Θ(g(n)) if and only if there exist positive real numbers m, M such that for all n ≥ n_0, m|g(n)| ≤ |f(n)| ≤ M|g(n)|; f(n) = o(g(n)) if for any m > 0, there exists n_0 such that for all n ≥ n_0, |f(n)| ≤ m|g(n)|; and f(n) = ω(g(n)) if for any m > 0, there exists n_0 such that for all n ≥ n_0, |f(n)| ≥ m|g(n)|. Õ(·) is defined analogously to O(·), but ignores log dependencies.

Observations

As described in Section 1, our in-sample (train) data consists of n labeled observations {(y_i, z_i) : i ≤ n}. By contrast, our out-of-sample (test) data consists of m ≥ 1 unlabeled observations. That is, for i > n, we observe the covariates z_i but do not observe the associated response variables y_i. Let Z = [z_i^T : i ≤ n] ∈ R^{n×p} and Z′ = [z_i^T : i > n] ∈ R^{m×p} denote the matrices of in- and out-of-sample covariates, respectively.

Description of Algorithm

We describe PCR, as introduced in Jolliffe (1982), with a variation to handle missing data.

I: Model identification. Let ρ̂ denote the fraction of observed entries in Z. Replace all missing values (NA) in the covariate matrices with zero. Let

$$\tilde{Z} = (1/\hat{\rho})\, Z = \sum_{i=1}^{n \wedge p} s_i \, u_i \otimes v_i,$$

where s_i ∈ R are the singular values and u_i ∈ R^n, v_i ∈ R^p are the left and right singular vectors, respectively.
For a hyperparameter k ∈ [n ∧ p], let $\tilde{Z}^k = \sum_{i=1}^{k} s_i \, u_i \otimes v_i$ and define the estimated model parameter as

$$\hat{\beta} = (\tilde{Z}^k)^{\dagger}\, y. \qquad (3)$$

II: Out-of-sample prediction. Let ρ̂′ denote the proportion of observed entries in Z′. Let

$$\tilde{Z}' = (1/\hat{\rho}')\, Z' = \sum_{i=1}^{m \wedge p} s'_i \, u'_i \otimes v'_i,$$

where s′_i ∈ R are the singular values and u′_i ∈ R^m, v′_i ∈ R^p are the left and right singular vectors, respectively. Given algorithmic parameter ℓ ∈ [m ∧ p], let $\tilde{Z}'^{\ell} = \sum_{i=1}^{\ell} s'_i \, u'_i \otimes v'_i$, and define the test response estimates as ŷ′ = Z̃′^ℓ β̂. If the expected responses are known to belong to a bounded interval, say [−b, b] for some b > 0, then the entries of ŷ′ are truncated as follows: for every i > n, ŷ′_i is clipped to b if ŷ′_i > b and to −b if ŷ′_i < −b; this yields ŷ′trunc.

Additional Useful Properties of PCR

We state a few useful properties of PCR that we use extensively. These are well-known results that are discussed in Chapter 17 of Roman (2008) and Chapter 6.3 of Strang (2006).

Property 2.1 The PCR solution β̂, as given in (3), is (1) the unique solution to the following program: minimize ∥β∥_2 over β ∈ R^p such that β ∈ arg min_{β′ ∈ R^p} ∥y − Z̃^k β′∥_2; and (2) embedded within rowspan(Z̃^k).

Imputing Missing Covariate Values

As shown in Agarwal et al. (2019, 2021), PCR can equivalently be interpreted as first applying the matrix completion algorithm hard singular value thresholding (HSVT) to Z to obtain Z̃^k, and then performing OLS with this de-noised output matrix. Accordingly, this work utilizes the simple imputation method of replacing NA values with zero to enable HSVT. We justify this imputation approach as follows: by setting NA values to zero, it follows that E[Z_ij] = ρ X_ij + (1 − ρ) · 0 = ρ X_ij; recalling Z̃_ij = (1/ρ̂) Z_ij, we then obtain E[Z̃_ij] = X_ij. Indeed, constructing Z̃ such that E[Z̃] = X is a crucial step that enables the HSVT subroutine of PCR to produce a good estimate of X through Z̃^k. Naturally, there are other matrix completion methods, such as nearest neighbors or alternating least squares, that do not first impute missing values. As long as the approach taken yields a sufficiently good estimator, cf. Lemma 3 of Appendix B, our main results on model parameter identification and generalization would naturally extend to these settings.

Choosing the Number of Principal Components

The ideal number of principal components k is rarely known a priori. As such, the problem of choosing k has become a well-studied problem in the low-rank matrix completion literature, and there exists a suite of principled methods. These include visual inspections of the plotted singular values (Cattell, 1966), cross-validation (Wold, 1978; Owen and Perry, 2009), Bayesian methods (Hoff, 2007), and "universal" thresholding schemes that preserve singular values above a precomputed threshold (Gavish and Donoho, 2014; Chatterjee, 2015).

Figure 1 (caption): Entries of X are sampled independently from N(0, 1); entries of W are sampled independently from N(0, σ²) with σ² ∈ {0, 0.2, ..., 0.8}. We see a steep drop-off in magnitude in the singular values across all noise levels; this marks the "elbow" point. The top singular values of Z correspond closely with those of X (σ² = 0). The remaining singular values are induced by W. Thus, the "effective rank" of Z is the rank of X.

A common argument for these approaches is rooted in the underlying assumption that the smallest nonzero singular value of the "signal" X is well-separated from the largest singular value of the "noise" W.
Under reasonable "signal-to-noise" (snr) scenarios, Weyl's inequality implies that a sharp threshold or gap should exist between the top r singular values and the remaining singular values of the observed data Z. This gives rise to a natural "elbow" point, shown in Figure 1, and suggests choosing a threshold within this gap. As such, a researcher can simply plot the singular values of Z and look for the elbow structure to decide if PCR is suitable for the application at hand. We formalize a notion of snr in (4) and establish our results in the following section with respect to this quantity.

Problem Setup

This section formalizes our problem setup. Let X = [x_i^T : i ≤ n] ∈ R^{n×p} and X′ = [x_i^T : i > n] ∈ R^{m×p} represent the underlying in- and out-of-sample covariates, respectively.

Assumptions

Collectively, we assume (1) and (2) are satisfied. We make the following additional assumptions.

Assumption 3.1 (Response noise) Let {ε_i : i ≤ n} be a sequence of independent mean zero subgaussian random variables with ∥ε_i∥_{ψ2} ≤ σ.

Assumption 3.1 is a standard assumption in the regression literature that posits the idiosyncratic response noise to be independent draws from a subgaussian distribution.

Assumption 3.2 (Covariate noise and missing values) Let {w_i : i ≤ n + m} be a sequence of independent mean zero subgaussian random vectors with ∥w_i∥_{ψ2} ≤ K and ∥E[w_i ⊗ w_i]∥_2 ≤ γ². Let π_i ∈ {1, NA}^p, where NA denotes a missing value, be a vector of independent Bernoulli variables with parameter ρ ∈ (0, 1]. Further, let ε_i, w_i, π_i be mutually independent.

Consistent with standard assumptions in the error-in-variables (EiV) regression literature, Assumption 3.2 posits the idiosyncratic EiV vector-valued noise w_i to be subgaussian and independent across measurements; note, however, that the noise is allowed to be dependent within a measurement, i.e., the coordinates of w_i can be correlated. Finally, we require missing entries in the observed covariate vector to be missing completely at random (MCAR). In Section 4.3.1, we discuss ways to allow for more heterogeneous missingness patterns.

Assumption 3.3 (Bounded covariates) Let ∥X∥_max ≤ 1 and ∥X′∥_max ≤ 1.

Assumption 3.3 bounds the magnitude of the underlying noiseless covariates, not the observed noisy covariates. This assumption is made to simplify our analysis, and it can be generalized to hold for any C that is an absolute constant; our theoretical results would correspondingly change only by an absolute constant.

Main Results

This section responds to Q1-Q2. For ease of notation, let C, c > 0 be absolute constants whose values may change from line to line or even within a line. Let H = X†X ∈ R^{p×p} and H⊥ = I − H denote the projection matrices onto the rowspace and nullspace of X, respectively. Let H′, H′⊥ ∈ R^{p×p} be defined analogously with respect to X′. We define β̃* = Hβ* as the projection of β* onto the linear space spanned by the rows of X.

Model Identification

Q1: "When p > n, is there a model parameter that PCR consistently identifies?"

The model parameter β* is not identifiable in the high-dimensional regime, as infinitely many solutions satisfy (1). Among all feasible parameters, we show that PCR recovers β̃*, the unique parameter with minimum ℓ_2-norm that is entirely embedded in the rowspace of X, provided the number of principal components k is aptly chosen. From Property 2.1, recall that PCR enforces β̂ ∈ rowspan(Z̃^k). Hence, if k = r and the rowspace of Z̃^r is "close" to the rowspace of X, then β̂ ≈ β̃*. The "noise" in Z̃ arises from the missingness pattern induced by π and the measurement error W; meanwhile, the "signal" in Z̃ arises from X, where its strength is captured by the magnitude of its singular values. Accordingly, we define the snr as

$$\mathrm{snr} := \frac{\rho \, s_r}{\sqrt{n} + \sqrt{p}}. \qquad (4)$$

Here, s_r is the smallest nonzero singular value of X, ρ determines the fraction of observed entries, and √n + √p is induced by the perturbation in the singular values from W. As one would expect, the signal strength scales linearly with ρ. From standard concentration results for sub-gaussian matrices, it follows that ∥W∥_2 = O(√n + √p) (see Lemma 9). With this notation, we state the main result on model identification.

Theorem 4.1 Let Assumptions 3.1-3.3 hold. Consider (i) PCR with k = r = rank(X), (ii) ρ ≥ c(np)^{-1} log²(np), and (iii) snr ≥ C(K + 1)(γ + 1). Then, w.p. at least 1 − O((np)^{-10}), the parameter estimation error ∥β̂ − β̃*∥_2 is bounded as in (5).

Interpretation. We make a few remarks on Theorem 4.1. First, condition (iii) is not necessary, but we impose it to simplify the parameter estimation bound in (5); please refer to (31) in Appendix B for details. We now briefly discuss why the ℓ_1-norm of β̃* shows up in the bound. Our analysis of the parameter estimation error involves an EiV error term of the form ∥(X − Z̃^k)β̃*∥_2, which can be bounded in terms of ∥β̃*∥_1, as stated in (6); see (22) in Appendix B for details. From (6), it is clear that ∥β̃*∥_1 is controlled if s_r is sufficiently large. Indeed, Assumption 4.1 below is one such natural condition on s_r. To gain a better view on Theorem 4.1 regarding consistency, let us suppress dependencies on (K, γ, σ) for the following discussion. Theorem 4.1 implies that a sufficient condition for consistency is given by (7); that is, PCR recovers β̃* provided snr grows sufficiently fast. Finally, (6) implies that (5) can be purely expressed through the smallest nonzero singular value of X. We now describe a natural setting for which we can provide an explicit bound on the snr. Towards this, we introduce Assumption 4.1, a condition on the magnitude of s_r, and discuss its meaning in Section 8.3. Ignoring dependencies on (ρ, r, d), Corollary 4.1 implies that the model identification error scales as min{1/√np, 1/(n ∧ p)²}. Hence, the error vanishes as min{n, p} → ∞. The requirement that p grows arises from the error-in-variables problem; more specifically, in the PCA subroutine, we show that Z̃^k is a good estimate of X provided both n and p grow (see Lemmas 2 and 3 in Appendix B for details).
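To make the two-stage procedure of Section 2 concrete, the following NumPy sketch implements a minimal version of PCR: zero-imputation, rescaling by the observed fraction, hard singular value thresholding, and minimum-norm least squares. It is a sketch, not the authors' reference implementation; the rank is assumed known, matching the oracle setting of Theorem 4.1, whereas in practice k would be chosen via the elbow or thresholding heuristics discussed above.

```python
# Minimal sketch of the PCR algorithm described in Section 2.
# NaN entries play the role of NA. The rank k is assumed known here.
import numpy as np

def pcr_fit(Z, y, k):
    """Stage I: model identification. Returns the PCR estimate of beta."""
    mask = ~np.isnan(Z)
    rho_hat = mask.mean()                      # observed fraction
    Z_tilde = np.where(mask, Z, 0.0) / rho_hat # impute zeros, rescale
    U, s, Vt = np.linalg.svd(Z_tilde, full_matrices=False)
    # Hard singular value thresholding: keep the top-k principal components.
    Zk = (U[:, :k] * s[:k]) @ Vt[:k, :]
    # Minimum l2-norm least squares on the de-noised covariates, as in (3).
    return np.linalg.pinv(Zk) @ y

def pcr_predict(Z_test, beta_hat, ell, b=None):
    """Stage II: out-of-sample prediction, with optional truncation to [-b, b]."""
    mask = ~np.isnan(Z_test)
    rho_hat = mask.mean()
    Z_tilde = np.where(mask, Z_test, 0.0) / rho_hat
    U, s, Vt = np.linalg.svd(Z_tilde, full_matrices=False)
    Zl = (U[:, :ell] * s[:ell]) @ Vt[:ell, :]
    y_hat = Zl @ beta_hat
    return np.clip(y_hat, -b, b) if b is not None else y_hat

# Toy usage with synthetic low-rank covariates.
rng = np.random.default_rng(0)
n, p, r = 100, 200, 5
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, p)) / np.sqrt(r)
beta = np.zeros(p); beta[:10] = 1.0
y = X @ beta + 0.1 * rng.normal(size=n)
Z = X + 0.1 * rng.normal(size=(n, p))          # measurement noise
Z[rng.random(Z.shape) < 0.1] = np.nan          # 10% missing entries
print(np.round(pcr_fit(Z, y, k=r)[:5], 3))
```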
The "noise" in Z arises from the missingness pattern induced by π and the measurement error W ; meanwhile, the "signal" in Z arises from X, where its strength is captured by the magnitude of its singular values. Accordingly, we define the snr as Here, s r is the smallest nonzero singular value of X, ρ determines the fraction of observed entries, and √ n + √ p is induced by the perturbation in the singular values from W . As one would expect, the signal strength s r scales linearly with ρ. From standard concentration results for sub-gaussian matrices, it follows that ∥W ∥ 2 = O( √ n + √ p) (see Lemma 9). With this notation, we state the main result on model identification. Theorem 4.1 Let Assumptions 3.1-3.3 hold. Consider (i) PCR with k = r = rank(X), (ii) ρ ≥ c(np) −1 log 2 (np), and (iii) snr ≥ C(K +1)(γ+1). Then w.p. at least 1−O((np) −10 ), Interpretation. We make a few remarks on Theorem 4.1. First, condition (iii) is not necessary but we impose it to simplify the parameter estimation bound in (5). Please refer to (31) in Appendix B for details. We now briefly discuss why the ℓ 1 -norm ofβ * shows up in the bound. Our analysis of the parameter estimation error involves an EiV error term of the form ∥(X − Z k )β * ∥ 2 , which can be bounded as follows: See (22) in Appendix B for details. From (6), it is clear that ∥β * ∥ 1 is controlled if s r is sufficiently large. Indeed, Assumption 4.1 below is one such natural condition on s r . To gain a better view on Theorem 4.1 regarding consistency, let us suppress dependencies on (K, γ, σ) for the following discussion. Theorem 4.1 implies that a sufficient condition for consistency is given by That is, PCR recoversβ * provided snr grows sufficiently fast. Finally, (6) implies that (5) can be purely expressed through the smallest nonzero singular value of X. We now describe a natural setting for which we can provide an explicit bound on the snr. Towards this, we introduce the following assumption and discuss its meaning in Section 8.3. Ignoring dependencies on (ρ, r, d), Corollary 4.1 implies that the model identification error scales as min{1/ √ np, 1/(n ∧ p) 2 }. Hence, the error vanishes as min{n, p} → ∞. The requirement that p grows arises from the error-in-variables problem; more specifically, in the PCA subroutine, we show that Z k is a good estimate of X provided both n and p grow (see Lemmas 2 and 3 in Appendix B for details). Out-of-sample Prediction Q2: "Given deterministic, corrupted, and partially observed out-of-sample covariates, can PCR recover the expected responses?" Towards answering Q2, we define PCR's out-of-sample (test) prediction errors with respect to y and y trunc as respectively. Let s ℓ , s ′ ℓ ∈ R be the ℓ-th largest singular values of X and X ′ , respectively. Recall from Section 2 that s ℓ , s ′ ℓ are defined analogously for Z and Z ′ , respectively. Analogous to (4), we define a signal-to-noise ratio for the out-of-sample covariates as Next, we bound MSE test in probability and MSE trunc test in expectation with respect to snr and snr test . For ease of notation, we define n min = n ∧ m and n max = n ∨ m. Theorem 4.2 Let the setup of Theorem 4.1 hold with ρ ≥ c(mp) −1 log 2 (mp). Consider (i) PCR with ℓ = r ′ = rank(X ′ ) and (ii) ∥β * ∥ 1 = Ω(1). Then w.p. at least 1 − O((n min p) −10 ), where here, δ β is given by the righthand side of (5) and C ′ noise = C(K + 1) 6 (γ + 1) 4 (σ 2 + 1). where Interpretation. Let us briefly dissect Theorem 4.2. 
Theorem 4.2 Let the setup of Theorem 4.1 hold with ρ ≥ c(mp)^{-1} log²(mp). Consider (i) PCR with ℓ = r′ = rank(X′) and (ii) ∥β̃*∥_1 = Ω(1). Then, w.p. at least 1 − O((n_min p)^{-10}), the prediction error bounds in (10) and (11) hold, where δ_β is given by the right-hand side of (5) and C′_noise = C(K + 1)⁶(γ + 1)⁴(σ² + 1).

Interpretation. Let us briefly dissect Theorem 4.2. Firstly, condition (ii) is not necessary but is made to simplify the resulting bound. On a more interesting note, it is well known that generalization error bounds rely on some notion of "closeness" between the in- and out-of-sample covariates. A canonical assumption within the statistical learning theory literature considers the two sets of covariates to be drawn from the same underlying distribution à la i.i.d. samples. As seen in (10) and (11), we consider a complementary notion of covariate closeness that is captured by the term ∥H_⊥H′∥_2 in Δ_1. In words, it measures the size of the linear subspace spanned by the out-of-sample covariates that is not contained within the linear subspace spanned by the in-sample covariates. Effectively, this term quantifies the ℓ_2-distance, or ℓ_2-similarity, between the in- and out-of-sample covariates. If each out-of-sample covariate is some linear combination of the in-sample covariates, then this error term vanishes and the out-of-sample prediction error decreases. We formalize this concept in Assumption 4.2 below.

To aid our intuition of Assumption 4.2, consider (1) in the classical regime where n > p. The canonical assumption within this paradigm considers X to have full column rank, i.e., rank(X) = p. Accordingly, the in-sample covariates span R^p, so the subspace spanned by the out-of-sample covariates necessarily lies within that spanned by the in-sample covariates, yielding ∥H_⊥H′∥_2 = 0. In this view, Assumption 4.2 generalizes the full column rank assumption in the classical regime to the collinear setting in the high-dimensional regime.

Proof (of Corollary 4.2) Under Assumption 4.2, we have ∥H′H_⊥∥²_2 = 0.

Proof (of Corollary 4.3) Using identical arguments to those made in the proof of Corollary 4.1 and noting r′ ≤ r, it follows that Assumption 4.3 gives snr_test ≥ cρ√((m ∧ p)/r). Plugging the bounds on snr, snr_test, and (7) into Theorem 4.2 completes the proof.

For the following discussion, we suppress dependencies on (K, γ, σ, r) and log factors, assume ρ = Θ(1), and only consider the scaling with respect to (n, m, p). Corollary 4.3 implies that if p = o(n·n_min) and n = o(p²), then the out-of-sample prediction error vanishes to zero both in expectation and w.h.p. as n, m, p → ∞. If we make the additional assumption that n = Θ(p) and p = Θ(m), then the error scales as O(1/n) in expectation. This improves upon the best known rate of O(1/√n), established in Agarwal et al. (2019, 2021); notably, these works do not provide a high probability bound. Additionally, Agarwal et al. (2019, 2021) require i.i.d. covariates to leverage standard Rademacher tools for their out-of-sample analyses. In contrast, we consider fixed design points; thus, our generalization error bounds do not rely on distributional assumptions regarding X and X′. Finding the optimal relative scalings of (n, m, p) to achieve consistency remains future work.
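Because Assumption 4.2 is a purely linear-algebraic condition, the quantity ∥H′H_⊥∥_2 it controls can be computed directly on (noiseless) covariate matrices. A small illustration with toy data of our own construction:

```python
import numpy as np

def projector(A, tol=1e-10):
    # Projection onto rowspan(A), built from the right singular vectors.
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    Vr = Vt[s > tol, :].T
    return Vr @ Vr.T

rng = np.random.default_rng(0)
n, m, p, r = 50, 40, 200, 5
U, V = rng.normal(size=(n, r)), rng.normal(size=(p, r))
X = U @ V.T

X_in = rng.normal(size=(m, r)) @ V.T   # shares rowspan with X
X_out = rng.normal(size=(m, p))        # generic, leaks outside rowspan(X)

H_perp = np.eye(p) - projector(X)
for Xp in (X_in, X_out):
    leak = np.linalg.norm(projector(Xp) @ H_perp, 2)  # ||H' H_perp||_2
    print(f"leakage = {leak:.3e}")
# Expected: ~0 for X_in; order-one for X_out.
```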
Heterogeneous Missingness Patterns

Assumption 3.2 considers MCAR patterns in the observed covariate matrix Z. This is motivated by the HSVT subroutine of PCR, as discussed in Section 2.4.1. If the missingness pattern is instead heterogeneous, other matrix completion methods designed for such settings can be utilized to more accurately recover the underlying covariates. Matrix completion with heterogeneous missingness patterns is an active area of research, and there has been a recent emergence of exciting results, including Schnabel et al. (2016); Ma and Chen (2019); Sportisse et al. (2020); and Bhattacharya and Chatterjee (2022), to name a few. At a high level, these algorithms follow a two-step approach: (i) construct estimates ρ̂_ij of ρ_ij; (ii) use ρ̂_ij and Z to estimate X_ij. With regards to step (i), let Π ∈ {0, 1}^{n×p} denote the binary mask matrix with E[π_ij] = ρ_ij. The common assumption driving these approaches is that E[Π] is a low-rank matrix; note that if E[π_ij] = ρ (MCAR), then rank(E[Π]) = 1. As such, matrix completion algorithms can first be applied to Π to obtain the estimates ρ̂_ij. Then, X can be estimated using ρ̂_ij and Z. Within the context of this work, if the matrix completion algorithm can faithfully recover the underlying covariates, cf. Lemma 3 of Appendix B, then our main results in Section 4 would naturally extend. A formal analysis of this more general estimator is left as interesting future work.

For the specific setting where there is a different probability of missingness {ρ_j}_{j∈[p]} for each of the p covariates, we propose a straightforward extension of PCR, sketched below. Let ρ̂_j be the fraction of observed entries in the j-th column of Z. Let P̂ ∈ R^{p×p} be a diagonal matrix with the j-th diagonal element given by ρ̂_j. After setting the NA values of Z to zero, we now redefine Z̄ as Z̄ = Z P̂^{-1}. In words, rather than uniformly re-weighting Z by 1/ρ̂, we now re-weight the j-th column of Z by 1/ρ̂_j. As a result, our theoretical results will go through in an analogous manner, with the scaling now depending on ρ_min = min_{j∈[p]} ρ̂_j.
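A minimal implementation of this column-wise re-weighting (names ours); the downstream HSVT and regression steps of PCR are unchanged:

```python
import numpy as np

def rescale_columnwise(Z_obs, floor=1e-3):
    """Zero-fill NAs and divide column j by its own observed fraction rho_j.

    Z_obs : (n, p) array with np.nan for missing entries.
    floor : lower bound guarding against (near-)empty columns.
    """
    mask = ~np.isnan(Z_obs)
    rho_j = np.maximum(mask.mean(axis=0), floor)  # per-column observed fraction
    Z_filled = np.where(mask, Z_obs, 0.0)
    return Z_filled / rho_j                       # broadcasts over columns
```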
PCR Theory with Misspecified Number of Principal Components

The results of this section rely on an oracle version of PCR that has access to the true ranks of X and X′. We leave a formal treatment of PCR when the number of principal components is misspecified as an important future line of inquiry. With that said, we remark that the universal data-driven approach of Gavish and Donoho (2014), as mentioned in Section 2.4.2, often performs remarkably well in practice. We apply this approach in our simulation studies on PCR's generalization performance in Sections 5.2-5.4.

Towards a Lower Bound on Model Identification

To the best of our knowledge, Theorem 4.1 provides the first upper bound on PCR's model parameter estimation error in the high-dimensional EiV setting with fixed design. In Lemma 24 of Appendix F, we take the first step towards establishing a complementary lower bound to better understand the limitations of PCR in such a setting.

Viewing Generalization through Assumption 4.2

As discussed, our out-of-sample guarantees do not rely on any distributional assumptions between the in- and out-of-sample covariates. Rather, our results rely on a purely linear algebraic condition given by Assumption 4.2. In this view, Assumption 4.2 offers a complementary, distribution-free perspective on generalization and has possible implications for learning under covariate shifts. We examine the role of Assumption 4.2 in our simulations in Section 5. As a preview, our results provide empirical evidence that PCR can generalize even when the in- and out-of-sample covariates obey different distributions, provided Assumption 4.2 holds. In light of these findings, we furnish a data-driven diagnostic in Section 6 to check when Assumption 4.2 may hold in practice.

Illustrative Simulations

In this section, we present illustrative simulations to support our theoretical results. We provide details of the simulations in Appendix A.

PCR Identifies the Minimum ℓ_2-norm Model Parameter

To see how Theorem 4.1 plays out in practice, we design a simulation on model identification.

Setup. We consider p = 512 and r = 15. We generate β* and set it to have unit norm. For each n ∈ {30, 98, 167, . . . , p}, we generate X and define the minimum ℓ_2-norm solution as β̃* = X†Xβ*. We conduct 1000 simulation repeats per sample size n. For each repeat, we sample (ε, W) to construct y = Xβ* + ε and Z = X + W.

Results. For each simulation repeat, we apply PCR on (y, Z) to learn a single β̂ with k = r chosen correctly. Figure 2 visualizes the root-MSE (RMSE) of β̂ with respect to β̃* and β*. As predicted by Theorem 4.1, the RMSE with respect to β̃* decays to zero as the sample size increases. In contrast, the RMSE with respect to β* stays roughly constant across different sample sizes. This reaffirms that PCR identifies the minimum ℓ_2-norm solution amongst all feasible solutions.

PCR is Robust to Covariate Shifts

We study PCR's generalization properties, as predicted by Theorem 4.2, in the presence of covariate shifts, i.e., when the in- and out-of-sample covariates follow different distributions.

Setup. We adopt the same considerations on (n, p, r) and generate β* as in Section 5.1. Let m = n. For each n, we generate X as per distribution D_1. We then generate four different out-of-sample covariates X′_1, . . . , X′_4, whose distributions are distinct from one another and from D_1. Critically, Assumption 4.2 is satisfied between X and X′_i for every i ∈ {1, . . . , 4}. We define θ′_i = X′_iβ*.

Results. For each simulation repeat, we apply PCR on (y, Z) to learn a single β̂ by choosing k via the universal data-driven approach of Gavish and Donoho (2014). For each i, we construct ŷ′_i from the de-noised version of Z′_i and β̂. Figure 3 displays the MSE of ŷ′_i with respect to θ′_i. As predicted by Corollary 4.3, the out-of-sample prediction error decays as the sample size increases for each covariate shift. Hence, our results provide further evidence that PCR is robust to corrupted out-of-sample covariates and, perhaps more importantly, to covariate shifts, provided Assumption 4.2 holds.

PCR Generalizes under Assumption 4.2

This simulation further examines the role of Assumption 4.2. Specifically, we compare PCR's generalization error under two settings: (i) there is covariate shift but Assumption 4.2 holds; (ii) there is distributional invariance (i.e., the in- and out-of-sample covariates obey the same distribution) but Assumption 4.2 is violated.

Setup. We adopt the same considerations on (n, m, p, r) and generate β* as in Section 5.2. For each n, we generate X ∼ D_1. We then generate two out-of-sample covariates, X′_1 and X′_2 (see Appendix A.3). We conduct 1000 simulation repeats. For each repeat, we sample (ε, W, W′) to construct y = Xβ* + ε, Z = X + W, and Z′_i = X′_i + W′.

Results. For each simulation repeat, we apply PCR on (y, Z) to learn a single β̂ by choosing k via the universal data-driven approach of Gavish and Donoho (2014). For each i, we construct ŷ′_i from the de-noised version of Z′_i and β̂. Figure 4 displays the MSE of ŷ′_i with respect to θ′_i. When Assumption 4.2 holds, the MSE decays as the sample size increases; by contrast, when Assumption 4.2 fails, the MSE is stagnant across varying sample sizes. Our findings reinforce the importance of Assumption 4.2 for PCR's ability to generalize.
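A compact sketch of this comparison, with generative choices of our own that mirror Appendix A.3 (same V with rescaled row factors when Assumption 4.2 holds; a fresh V′ when it fails):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, r = 200, 200, 512, 15
U, V = rng.normal(size=(n, r)), rng.normal(size=(p, r))
X = U @ V.T
beta_star = rng.normal(size=p); beta_star /= np.linalg.norm(beta_star)

# Case 1: covariate shift, but Assumption 4.2 holds (same V, rescaled rows).
X1 = rng.normal(scale=np.sqrt(5.0), size=(m, r)) @ V.T
# Case 2: same distribution as X, but Assumption 4.2 fails (new V').
X2 = U[:m] @ rng.normal(size=(p, r)).T

sigma = np.sqrt(0.2)
y = X @ beta_star + rng.normal(scale=sigma, size=n)
Z = X + rng.normal(scale=sigma, size=(n, p))

# Oracle-rank PCR fit, then out-of-sample prediction on de-noised covariates.
Uz, sz, Vzt = np.linalg.svd(Z, full_matrices=False)
beta_hat = Vzt[:r].T @ ((Uz[:, :r].T @ y) / sz[:r])
for Xp in (X1, X2):
    Zp = Xp + rng.normal(scale=sigma, size=(m, p))
    Uw, sw, Vwt = np.linalg.svd(Zp, full_matrices=False)
    Zp_denoised = (Uw[:, :r] * sw[:r]) @ Vwt[:r]
    mse = np.mean((Zp_denoised @ beta_hat - Xp @ beta_star) ** 2)
    print(f"out-of-sample MSE: {mse:.4f}")
# Expected: the MSE for Case 1 is small, while Case 2 remains stagnant.
```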
PCR Generalizes with MCAR Entries

This simulation investigates PCR's out-of-sample performance under varying intensities of MCAR patterns in the observed covariate matrices.

Setup. We adopt the same considerations on (n, m, p, r) and generate β* as in Section 5.2. For each n, we generate X, X′ ∼ D_1 with Assumption 4.2 satisfied. Next, we define θ′ = X′β*. We consider varying intensities of MCAR entries with ρ ∈ {0.4, 0.6, 0.8, 0.99}. We conduct 1000 simulation repeats for each (ρ, n) pair. For each repeat, we sample (ε, W, W′, Π, Π′); a ρ fraction of the entries in Π, Π′ are randomly assigned the value 1, and each repeat considers a different set of revealed entries.

Results. For each simulation repeat, we apply PCR on (y, Z) to learn β̂ by choosing k via the universal data-driven approach of Gavish and Donoho (2014). We construct ŷ′ from the de-noised version of Z′ and β̂. Figure 5 displays the MSE of ŷ′ with respect to θ′. Across varying intensities of ρ, the MSE decays as the sample size increases, which suggests that PCR can generalize when entries in the observed covariate matrices are MCAR.

A Hypothesis Test for Assumption 4.2

Our theoretical and empirical results highlight the importance of Assumption 4.2. Accordingly, we present a hypothesis test to check when Assumption 4.2 holds in practice. Recall the definitions of (H, H_⊥) and (H′, H′_⊥) as defined at the start of Section 4. We consider the hypotheses H_0: rowspan(X′) ⊆ rowspan(X) and H_1: rowspan(X′) ⊈ rowspan(X). Since (X, X′) are unobserved, we use (Z, Z′) as proxies. To this end, let Ĥ_k and Ĥ′_ℓ denote the projection matrices formed by the right singular vectors of Ẑ_k and Ẑ′_ℓ, respectively; see Section 2.2 for a recall of relevant notation. We then define our test statistic as

τ := ∥(I − Ĥ_k) Ĥ′_ℓ∥²_F. (12)

In words, τ measures the ℓ_2-distance between the in- and out-of-sample covariates represented by the rowspaces of Ẑ_k and Ẑ′_ℓ, respectively. We define the test as follows: for any significance level α ∈ (0, 1) and corresponding critical value τ(α), retain H_0 if τ ≤ τ(α) and reject H_0 if τ > τ(α). In Sections 6.1 and 6.2 below, we discuss two approaches to perform the hypothesis test.

Type I and Type II Guarantees

Given our choice of τ and τ(α), we control both Type I and Type II errors of our test. For ease of exposition, we will consider a more restrictive form of Assumption 3.2, namely that the entries of the covariate noise are independent and (Z, Z′) are fully observed. The particular C for which Theorem 6.1 holds depends on the underlying distribution of the covariate noise w_i. C can be made explicit for certain classes of distributions; as an example, Corollary 6.1 specializes Theorem 6.1 to the case where the w_i are normally distributed.

Corollary 6.1 Consider the setup of Theorem 6.1 with C = 4. Let w_i be normally distributed for all i ≤ n + m. Then, the guarantees of Theorem 6.1 hold with explicit constants.

We now argue that (13) is not a restrictive condition. Conditioned on H_1, observe that r′ > ∥HH′∥²_F always holds. If Assumptions 4.1 and 4.3 hold, then one can easily verify that the latter two terms on the right-hand side of (13) decay to zero as (n, m, p) grow.

Computing the Critical Value

Computing τ(α) requires estimating (i) ς²; (ii) r, r′; (iii) s_r, s′_{r′}. Under our assumptions, the covariance of w can be estimated from the sample covariance matrices of (Z, Z′). By standard random matrix theory, the singular values of Z and X are close. Thus, as discussed in Section 2.4.2, the spectrum of Z serves as a good proxy to estimate (r, s_r). Analogous arguments hold for Z′ with respect to X′.
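The statistic itself is straightforward to compute from truncated SVDs of the observed matrices. A sketch of ours (rank inputs k, ℓ supplied, e.g., from the spectra as just discussed), together with the practical thresholding variant described in the next subsection:

```python
import numpy as np

def test_statistic(Z, Z_prime, k, ell):
    """tau = || (I - H_k) H'_ell ||_F^2, per (12).

    H_k and H'_ell are the projections onto the top right-singular
    subspaces of Z and Z', respectively.
    """
    Vk = np.linalg.svd(Z, full_matrices=False)[2][:k].T          # p x k
    Vl = np.linalg.svd(Z_prime, full_matrices=False)[2][:ell].T  # p x ell
    M = Vl - Vk @ (Vk.T @ Vl)  # (I - H_k) applied to an orthonormal basis of H'_ell
    return np.sum(M ** 2)      # squared Frobenius norm

def practical_test(tau, r_prime, alpha=0.05):
    """Reject H0 if more than an alpha fraction of H'_ell's spectral energy
    (trivially bounded by r') lies outside the span of H_k."""
    return tau > alpha * r_prime
```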
Corollary 6.2 specializes τ(α) under Assumptions 4.1 and 4.3.

Corollary 6.2 Let the setup of Theorem 6.1 hold, and suppose Assumptions 4.1 and 4.3 hold. If we consider the noiseless case, w_i = 0, then τ(α) = 0. More generally, if the spectra of X and X′ are well-balanced, then Corollary 6.2 establishes that τ(α) = o(1), even in the presence of noise. We remark that Corollary 6.1 allows for exact constants in the definition of τ(α) under the Gaussian noise model.

A Practical Approach

We now provide a practical approach to computing τ(α). To build intuition, observe that τ represents the remaining spectral energy of H′ not contained within H. Further, we note that τ is trivially bounded by r′. Thus, one can fix some fraction α ∈ (0, 1) and reject H_0 if τ > τ(α), where τ(α) = r′α. In words, if more than an α fraction of the spectral energy of H′ lies outside the span of H, then the alternative test rejects H_0. We remark that this variant is likely more robust compared to its exact-computation counterpart in (12), which requires estimating several "nuisance" quantities and varies with the underlying modeling assumptions on the covariate noise and singular values. Accordingly, without knowledge of these quantities, we recommend the practical approach. To see how this heuristic plays out in practice, see Section 7.3 and Squires et al. (2022).

Synthetic Controls

This section contextualizes our results in Section 4 for synthetic controls (Abadie and Gardeazabal, 2003; Abadie et al., 2010), which has emerged as a leading approach for policy evaluation with observational data (Athey and Imbens, 2017). Towards this, we connect synthetic controls to (high-dimensional) error-in-variables regression with fixed design.

Synthetic Controls Framework

Consider a panel data format where observations of p + 1 units, indexed as {0, . . . , p}, are collected over n + m time periods. Each unit i at time t is characterized by two potential outcomes, Y_ti(1) and Y_ti(0), corresponding to the outcomes under treatment and absence of treatment (i.e., control), respectively (Neyman, 1923; Rubin, 1974). For each unit, we observe their potential outcomes according to their treatment status, i.e., we either observe Y_ti(0) or Y_ti(1), never both. Let Y_ti denote the observed outcome. For ease of exposition, we consider a single treated unit, indexed by the zeroth unit and referred to as the target. We refer to the remaining units as the control group. We observe all p + 1 units under control for the first n time periods. In the remaining m time periods, we continue to observe the control group without treatment but observe the target unit with treatment. We call the first n and final m time steps the pre- and post-treatment periods, respectively. We encode the control units' pre- and post-treatment observations into matrices Z and Z′, respectively. With these concepts in mind, we connect the synthetic controls framework to our setting of interest.

Out-of-Sample Prediction

Synthetic controls tackles the counterfactual question: "What would have happened to the target unit in the absence of treatment?" Formally, the goal is to estimate the (expected) counterfactual vector E[y′(0)], where y′(0) = [Y_t0(0) : t > n] ∈ R^m. Methodologically, this is answered by regressing y on Z and applying the regression coefficients β̂ to Z′ to estimate the treated unit's expected potential outcomes under control during the post-treatment period. From this perspective, we identify that counterfactual estimation is precisely out-of-sample prediction.
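In code, this pipeline is exactly the PCR routine from earlier applied to panel data. A hedged sketch (function and variable names ours):

```python
import numpy as np

def counterfactual_pcr(y_pre, donors_pre, donors_post, k):
    """Estimate E[y'(0)], the target's post-treatment outcomes under control.

    y_pre       : (n,) target unit's pre-treatment outcomes
    donors_pre  : (n, p) control units' pre-treatment outcomes (Z)
    donors_post : (m, p) control units' post-treatment outcomes (Z')
    k           : number of principal components
    """
    # Learn synthetic-control weights by PCR on the pre-treatment panel.
    U, s, Vt = np.linalg.svd(donors_pre, full_matrices=False)
    beta_hat = Vt[:k].T @ ((U[:, :k].T @ y_pre) / s[:k])

    # De-noise the post-treatment donor panel to rank k, then predict.
    Up, sp, Vpt = np.linalg.svd(donors_post, full_matrices=False)
    post_denoised = (Up[:, :k] * sp[:k]) @ Vpt[:k]
    return post_denoised @ beta_hat
```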
Error-in-Variables

As is typical in panel studies, potential outcomes are modeled as the sum of a latent factor model and a random variable, in order to capture measurement error and/or misspecification (Abadie, 2021); latent time and unit features, with r much smaller than (n, m, p), generate the expected outcomes, and ε_t0 ∈ R models the stochasticity. This is also known as an interactive fixed effects model (Bai, 2009). Put differently, the observed matrices Z and Z′ are viewed as noisy instantiations of X = E[Z] and X′ = E[Z′], where X, X′ are low-rank matrices. They represent the matrices of latent expected potential outcomes, which are a function of the latent time and unit factors. Since β̂ is learned using Z, not X, synthetic controls is an instance of error-in-variables regression.

Remark 1 (Clarifying MCAR entries) As described in Section 3, we allow the entries in Z and Z′ to be missing completely at random (MCAR). We emphasize that these missing elements do not correspond to our counterfactual estimands of interest. Readers who find the MCAR setting to be implausible can proceed with the balanced panel data setting in mind.

Linear Model

The underlying premise behind synthetic controls is that the target unit is a weighted composition of control units. In our setup, this translates more formally into the existence of a linear model β* ∈ R^p relating the target's expected outcomes under control to those of the control group. We note that Agarwal et al. (2021) establish that such a β* exists w.h.p. if r is much smaller than (n, m, p).

Fixed Design

Several works in the literature, e.g., Agarwal et al. (2021), enforce the latent time factors to be sampled i.i.d. Subsequently, the pre- and post-treatment data under control are also i.i.d. In contrast, we consider a fixed design setting that avoids distributional assumptions on the expected potential outcomes. This allows us to model settings with underlying time trends or shifting ideologies, which are likely present in many panel studies.

Novel Guarantees for the Synthetic Controls Literature

With our connection established, we transfer our theoretical results to the synthetic controls framework. In particular, we analyze the robust synthetic controls (RSC) estimator of Amjad et al. (2018) and its extension in Amjad et al. (2019), which learns β̂ via PCR.

Model Identification. Intuitively, β* defines the synthetic control group. That is, the magnitude (and sign) of the i-th entry specifies the contribution of the i-th control unit in the construction of the target unit. Theorem 4.1 establishes that RSC consistently identifies the unique synthetic control group with minimum ℓ_2-norm.

Counterfactual Estimation. We denote RSC's estimate of the expected counterfactual trajectory as ŷ′(0). The counterfactual estimation error is then the mean squared error of ŷ′(0) with respect to E[y′(0)], as in (14), which precisely corresponds to (8). Theorem 4.2 immediately leads to a vanishing bound on (14) as (n, m, p) grow. The exact finite-sample rates given in Theorem 4.2 improve upon the best known rate provided in Agarwal et al. (2021), which is only established in expectation and for random designs. To the best of our knowledge, Theorem 4.2 is also the first guarantee for fixed designs in the synthetic controls literature.

Examining Assumption 4.2 for Two Synthetic Controls Studies

We revisit two canonical synthetic controls case studies: (i) terrorism in Basque Country (Abadie and Gardeazabal, 2003) and (ii) California's Proposition 99 (Abadie et al., 2010). These studies have been used extensively to explain the utility of the synthetic controls method.
We apply the practical variant of our hypothesis test for Assumption 4.2 in Section 6.2 to study the potential feasibility of counterfactual inference in both studies.

Terrorism in Basque Country

Background & setup. Our first study evaluates the economic ramifications of terrorism on the Basque Country of Spain. Our data comprise the per-capita GDP associated with 17 Spanish regions over 43 years. Basque Country is the sole treated unit that is affected by terrorism; the remaining p = 16 regions are the control regions that are relatively unaffected by terrorism. The pre- and post-intervention durations are n = 14 and m = 29 years, respectively. We note that the original work of Abadie and Gardeazabal (2003) uses 13 additional predictor variables for each region, including demographic information pertaining to one's educational status, and average shares for six industrial sectors. We only use information related to the outcome of interest, i.e., the per-capita GDP.

Hypothesis test results. We consider α = 0.05. We estimate r′ = 3 via the universal data-driven approach of Gavish and Donoho (2014). This sets τ(α) = 0.15. Estimating r analogously to r′, we obtain r = 5 and τ = 0.61. Since τ > τ(α), our test suggests that the PCR-based method of Amjad et al. (2018) may not be suitable for this study under our assumptions. In fact, our test only passes for (effectively) α > 0.21, which roughly translates to allowing for over 21% of the spectral energy of H′ to fall outside of H.

California Proposition 99

Background & setup. Our second study evaluates the effect of California's Proposition 99 on the consumption of tobacco. Our data comprise annual per-capita cigarette sales at the state level for 39 U.S. states over 31 years. With the exception of California, the other states in this study neither adopted an anti-tobacco program nor raised cigarette sales taxes by 50 cents or more. As such, the remaining p = 38 states are considered the control states, and California is considered the treated state. The pre- and post-intervention durations are n = 18 and m = 13 years, respectively. The original work of Abadie et al. (2010) uses six additional covariates per state. We do not include these variables in our study.

Discussion of Findings

Although our tests do not pass for either study, our results are not meant to discredit the previous conclusions drawn in Amjad et al. (2018) and Agarwal et al. (2021). Rather, our tests highlight that these studies warrant further investigation. We hope our findings not only motivate the usage of this test, but also spark the development of new robustness tests to stress-test the assumptions that underlie statistical methods and, thus, the associated causal conclusions drawn from these methods.

Related works

This section discusses related prior works from several literatures.

Principal Component Regression

Since its introduction in Jolliffe (1982), there have been several notable works analyzing PCR, including Bair et al. (2006); Agarwal et al. (2019, 2021); Chao et al. (2019). We pay particular attention to Agarwal et al. (2019, 2021) given their closeness to this article. Agarwal et al. (2019, 2021) focus purely on prediction and thus do not provide any results for model identification. This work proves that PCR identifies the unique minimum ℓ_2-norm model with non-asymptotic rates of convergence.
Out-of-Sample Prediction. Agarwal et al. (2019, 2021) show that PCR's out-of-sample prediction error decays as O(1/√n) when m, p = Θ(n). Agarwal et al. (2019, 2021) conjecture that their "slow" rate is an artefact of their Rademacher complexity arguments. By leveraging our model identification result in Theorem 4.1, we establish the "fast" rate of O(1/n).

Covariate design. Agarwal et al. (2019, 2021) consider a random design setting with i.i.d. covariates. By contrast, we consider a fixed design setting. As Shao and Deng (2012) note, estimation in high-dimensional regimes with fixed designs is very different from that with random designs due to the identifiability of the model parameter. Additionally, since we treat the covariates as deterministic, we do not impose that the in- and out-of-sample covariates obey the same distribution. Under the linear algebraic condition of Assumption 4.2, we prove that PCR achieves consistent out-of-sample prediction in Corollary 4.2.

Functional Principal Component Analysis

We consider functional principal component analysis (fPCA), which generalizes PCA to infinite-dimensional operators (Yao et al., 2005; Li and Hsing, 2010; Descary et al., 2019). This literature often assumes access to n randomly sampled trajectories at p locations, which are carefully chosen from a grid with minor perturbations, forming an n × p data matrix D. Thus, DᵀD is the empirical proxy of the underlying covariance kernel that corresponds to these random trajectories. Under appropriate assumptions on the trajectories, the DᵀD matrix can be represented as the additive sum of a low-rank matrix and a noise matrix. This resembles the low-rank matrix estimation problem, with a key difference being that all entries here are fully observed. In Descary et al. (2019), the low-rank component is estimated by performing an explicit rank minimization, which is known to be computationally hard. The functional (or trajectory) approximation from this low-rank estimation is obtained by smoothing (or interpolation); this is where the careful choice of locations in a grid plays an important role. The estimation error is provided with respect to the normalized Frobenius norm (i.e., the Hilbert-Schmidt norm when discretized). Finally, we remark that the fPCA literature has thus far considered diverging n with fixed p, or n ≫ p.

In comparison, PCR utilizes hard singular value thresholding (HSVT), a popular method in the matrix estimation toolkit, to recover the low-rank matrix; such an approach is computationally efficient and even yields a closed-form solution. As shown in Agarwal et al. (2021), PCR can be equivalently interpreted as HSVT followed by ordinary least squares. Hence, unlike the standard fPCA setup, PCR allows for missing values in the covariate matrix, since HSVT recovers the underlying matrix in the presence of noisy and missing entries. Analytically, our model identification and prediction error guarantees rely on matrix recovery bounds with respect to the ℓ_{2,∞}-norm, which is stronger than the Frobenius norm, i.e., (np)^{-1/2}∥A∥_F ≤ n^{-1/2}∥A∥_{2,∞}. Put differently, the typical Frobenius norm bound is insufficient to provide guarantees for PCR with error-in-variables. Finally, our setting allows for both n ≪ p and n ≫ p; the current fPCA literature only allows for n ≫ p.
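The norm comparison above follows from ∥A∥²_F = Σ_j ∥A_{·j}∥²_2 ≤ p·∥A∥²_{2,∞}. A quick numerical sanity check (our example):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 300))  # n = 100, p = 300

lhs = np.linalg.norm(A, "fro") / np.sqrt(A.size)             # (np)^{-1/2} ||A||_F
rhs = np.linalg.norm(A, axis=0).max() / np.sqrt(A.shape[0])  # n^{-1/2} max column norm
assert lhs <= rhs + 1e-12
print(lhs, rhs)
```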
In this view, our work offers several directions for research within the fPCA literature: (i) allow the sampling locations to differ across the n measurements, provided there is sufficient overlap; (ii) consider settings beyond n ≫ p; (iii) extend fPCA guarantees to computationally efficient methods like HSVT. There has also been work on functional principal component regression (fPCR), which allows β* to be an infinite-dimensional parameter. Notable works include Hall and Horowitz (2007) and Cai and Hall (2006), which consider the problems of model identification and prediction error, respectively. These works, however, do not allow for error-in-variables. As noted above, model identification and out-of-sample guarantees at the fast rate of O(1/n) for PCR with error-in-variables in the finite-dimensional case have remained elusive. Extending these results to fPCR with error-in-variables remains interesting future work.

Out-of-Sample Prediction. By and large, this literature has focused on model identification. Accordingly, the algorithms in the works above are ill-equipped to produce reliable predictions given corrupted and partially observed out-of-sample covariates. Therefore, even if the true model parameter β* is known, it is unclear how prior results can be extended to establish generalization error bounds. This work shows that PCR can be easily adapted to handle these cases.

Knowledge of Noise Distribution. Many existing algorithms explicitly utilize knowledge of the underlying noise distribution to recover β*. Typically, these algorithms perform corrections of the form ZᵀZ − E[WᵀW]. To carry out this computation, one must assume access to either oracle knowledge of E[WᵀW] or a good data-driven estimator for it. As Chen and Caramanis (2013) note, such an estimator can be costly or simply infeasible in many practical settings. PCR does not require any such knowledge. Instead, the PCA subroutine within PCR implicitly de-noises the covariates. The trade-off is that our results only hold if the number of retained singular components k is chosen to be the rank of X. Although there are numerous heuristics to aptly choose k, we leave a formal analysis of PCR when k is misspecified as important future work.

Operating Assumptions

We compare our primary assumptions with canonical assumptions in the literature.

I: Low-rank vis-à-vis sparsity. The most popularly endowed structure in high-dimensional regression is that the model parameter β* is r-sparse. This work posits that the in-sample covariate matrix X is described by r nonzero singular values. These two notions are related. If rank(X) = r, then there exists an r-sparse β̃ such that Xβ* = Xβ̃; see Proposition 3.4 of Agarwal et al. (2021). Meanwhile, if β* is r-sparse, then there exists an X̃ of rank r that also provides equivalent responses. In this view, the two perspectives are complementary. With that said, it is difficult to verify the sparsity of β*, but the low-rank assumption on X can be examined through the singular values of Z, as described in Section 2.4.2. It is also well established that (approximately) low-rank matrices are abundant in real-world data science applications (Xu, 2017; Udell and Townsend, 2017, 2018).

II: Well-balanced spectra vis-à-vis restricted eigenvalue condition. The second common condition in the literature captures the amount of "information spread" across the rows and columns of X, which leads to a bound on its smallest singular value.
This is referred to as the restricted eigenvalue condition (see Definitions 1 and 2 in Loh and Wainwright (2012)), which is imposed on the empirical estimate of the covariance of X. This work assumes the spectrum of X is well-balanced (Assumption 4.1). This assumption is not necessary for consistent estimation. Rather, it is one condition that yields a reasonable snr, which guarantees both model identification and vanishing out-of-sample prediction errors. In many prior works, the restricted eigenvalue condition (and its variants) are shown to hold w.h.p. if the rows of X are i.i.d. (or at least independent) samples from a mean-zero sub-gaussian distribution. This data generating process implies that the smallest and largest singular values of X are of order O(√n + √p). However, under the assumptions rank(X) = r and |X_ij| = Θ(1), one can verify that ∥X∥_2 = Ω(√(np/r)). The difference in the typical magnitude of the largest singular value reflects the difference in applications in which a restricted eigenvalue assumption versus a low-rank assumption is likely to hold. The restricted eigenvalue assumption is particularly suited to applications such as compressed sensing, where researchers design X. The applications arising in the social or life sciences primarily involve observational data. In such settings, a low-rank assumption on X is arguably more suitable to capture the latent structure amongst the covariates.

Ultimately, Assumption 4.1 is similar to the restricted eigenvalue condition in that it requires the smallest and largest nonzero singular values of X to be of the same order. It turns out that analogous assumptions are pervasive across many fields. Within the econometrics factor model literature, it is standard to assume that the factor structure is separated from the idiosyncratic errors, e.g., Assumption A of Bai and Ng (2020); within the robust covariance estimation literature, this assumption is closely related to the notion of pervasiveness, e.g., Proposition 3.2 of Fan et al. (2018); within the matrix/tensor completion literature, it is assumed that the nonzero singular values are of the same order to achieve minimax optimal rates, e.g., Cai et al. (2019). Assumption 4.1 has also been shown to hold w.h.p. for the embedded Gaussians model, which is a canonical probabilistic generating process used to analyze probabilistic PCA (Tipping and Bishop, 1999; Bishop, 1999; Agarwal et al., 2021). Finally, like the low-rank assumption, a practical benefit of the well-balanced spectra assumption is that it can be empirically examined via the same procedure outlined in Section 2.4.2.

Linear Regression with Hidden Confounding

The problem of high-dimensional error-in-variables regression is related to linear regression with hidden confounding, a common model within the causal inference and econometrics literatures (Guo et al., 2020; Ćevid et al., 2020). As noted by Guo et al. (2020), a particular class of error-in-variables models can be reformulated as linear regression with hidden confounding. Using our notation, they consider a high-dimensional model where the rows of X are sampled i.i.d. As such, X can be full-rank, but W is assumed to have low-rank structure. The aim of that work is to estimate a sparse β*. In comparison, we place the low-rank assumption on X and assume the rows of W are sampled independently and, thus, can be full-rank. Notably, for this setup, Ćevid et al. (2020) "deconfound" the observed covariates Z by a spectral transformation of its singular values.
It is interesting future work to analyze PCR for this important and closely related scenario.

Conclusion

The most immediate direction for future work is to establish bounds when the covariates are approximately low-rank. Within this context, our analysis suggests PCR induces an additional error of the form ∥(I − V_rV_rᵀ)β*∥_2, where V_r is formed from the top r principal components of X. This is the unavoidable model misspecification error that results from taking a rank-r approximation of X. It stands to reason that soft singular value thresholding (SVT), which appropriately down-weights the singular values of Z, may be a more appropriate algorithmic approach than hard SVT.

Another future line of research is to bridge our out-of-sample prediction analysis with recent works that analyze over-parameterized estimators. Bartlett et al. (2020), for instance, demonstrate that the minimum ℓ_2-norm linear regression solution predicts well out-of-sample despite a perfect fit to noisy in-sample data; this phenomenon is known as "benign overfitting". To establish their result, Bartlett et al. (2020) introduce two notions of "effective rank" of the data covariance and characterize linear regression problems that exhibit benign overfitting with respect to these quantities. In comparison, this work characterizes the out-of-sample prediction performance of PCR with respect to the ℓ_2-distance between the in- and out-of-sample covariates (see Assumption 4.2). Accordingly, one exciting research agenda is to explore the interplay of these two conceptions for over-parameterized linear estimators. This may also have implications for approximately low-rank settings.

A. Illustrative Simulations: Details

We present the generative models used in our simulation studies in Section 5.

A.1 PCR Identifies the Minimum ℓ_2-norm Model

We generate X = UVᵀ, where the entries of U, V are sampled independently from a standard normal distribution. Next, we generate β* ∈ R^p by sampling from a multivariate standard normal vector with independent entries, and normalize it by ∥β*∥_2 so that it has unit norm. We define β̃* = X†Xβ*. For each simulation repeat, we independently sample the entries of ε ∈ R^n from a normal distribution with mean 0 and variance σ² = 0.2. The entries of W ∈ R^{n×p} are sampled in an identical fashion. We then define our observed response vector as y = Xβ* + ε and observed covariate matrix as Z = X + W. For simplicity, we do not mask any of the entries.
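A direct implementation of this generative model (our code):

```python
import numpy as np

def generate_instance(n, p, r, sigma2=0.2, seed=0):
    """Generative model of Appendix A.1.

    Returns the latent covariates X, the targets (beta_star, beta_tilde),
    and one noisy draw (y, Z).
    """
    rng = np.random.default_rng(seed)
    U, V = rng.normal(size=(n, r)), rng.normal(size=(p, r))
    X = U @ V.T                                       # rank-r covariates
    beta_star = rng.normal(size=p)
    beta_star /= np.linalg.norm(beta_star)            # unit norm
    beta_tilde = np.linalg.pinv(X) @ (X @ beta_star)  # min l2-norm target
    y = X @ beta_star + rng.normal(scale=np.sqrt(sigma2), size=n)
    Z = X + rng.normal(scale=np.sqrt(sigma2), size=(n, p))
    return X, beta_star, beta_tilde, y, Z
```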
A.2 PCR is Robust to Covariate Shifts

We generate X = UVᵀ as in Appendix A.1. Next, we generate four different out-of-sample covariates X′_1, X′_2, X′_3, X′_4 via the following procedure: we independently sample the entries of U′_1 from a standard normal distribution and define X′_1 = U′_1Vᵀ; X′_2, X′_3, and X′_4 are constructed analogously from other row-factor distributions. By construction, the mean and variance of the entries in X′_3 match those of X′_1; an analogous relationship holds between X′_4 and X′_2. While X′_1 follows the same distribution as X, there is a clear distribution shift from X to X′_2, X′_3, X′_4. We proceed to generate β* from a standard multivariate normal. We define θ′_1 = X′_1β*, and define θ′_2, θ′_3, θ′_4 analogously. Further, the entries of ε and W, W′ are independently sampled from a normal distribution with variance σ² = 0.2. We define the training responses as y = Xβ* + ε and the observed training covariates as Z = X + W. The first set of observed testing covariates is defined as Z′_1 = X′_1 + W′, with analogous definitions for Z′_2, Z′_3, Z′_4.

A.3 PCR Generalizes under Assumption 4.2

We generate X = UVᵀ as in Appendix A.1. We now generate two different testing covariates. First, we generate X′_1 = U′Vᵀ, where the entries of U′ are independently sampled from a normal distribution with mean zero and variance 5. As such, it follows that Assumption 4.2 immediately holds between X′_1 and X, though they do not obey the same distribution. Next, we generate X′_2 = UV′ᵀ, where the entries of V′ are independently sampled from a standard normal (just as in V). In doing so, we ensure that X′_2 and X follow the same distribution, though Assumption 4.2 no longer holds.

A.4 PCR Generalizes with MCAR Entries

We generate X = UVᵀ as in Appendix A.1 and generate X′ = U′Vᵀ, where the entries of U′ are independently sampled from a standard normal. As such, it follows that Assumption 4.2 immediately holds between X′ and X. We generate β* as in Appendix A.2, and define θ′ = X′β*. We also generate (ε, W, W′) as in Appendix A.2. A ρ fraction of the entries in Π, Π′ are randomly assigned the value 1, and each repeat considers a different set of revealed entries. Putting everything together, we define the training data as y = Xβ* + ε and Z = (X + W) • Π, and the testing data as Z′ = (X′ + W′) • Π′.

B. Proof of Theorem 4.1

We start with some useful notation. Note that Xβ* = Xβ̃*. Let y = Xβ* + ε be the vector notation of (1), with y = [y_i : i ≤ n] ∈ R^n and ε = [ε_i : i ≤ n] ∈ R^n. Throughout, let X = USVᵀ denote the singular value decomposition (SVD) of X. Recall that we write Z̄ = ρ̂^{-1}Z = ÛŜV̂ᵀ for the SVD of Z̄. Its truncation using the top k singular components is denoted as Ẑ_k = Û_kŜ_kV̂_kᵀ. Further, we will often use the following bound: for any A ∈ R^{a×b} and v ∈ R^b,

∥Av∥_2 ≤ ∥A∥_{2,∞}∥v∥_1, (15)

where ∥A∥_{2,∞} = max_j ∥A_{·j}∥_2, with A_{·j} representing the j-th column of A.

As discussed in Section 4.1, we will consider β̃* as our model parameter of interest. This corresponds to the unique minimum ℓ_2-norm model parameter satisfying (1) for i ≤ n. As a result, it follows that

V_⊥ᵀβ̃* = 0, (16)

where V_⊥ represents a matrix of orthonormal basis vectors that span the nullspace of X. Similarly, let V̂_{k,⊥} ∈ R^{p×(p−k)} be a matrix of orthonormal basis vectors that span the nullspace of Ẑ_k; thus, V̂_{k,⊥} is orthogonal to V̂_k. The estimation error then decomposes as in (17); note that in the last equality we have used Property 2.1, which states that V̂_{k,⊥}ᵀβ̂ = 0. Next, we bound the two terms in (17). The first term simplifies since V̂_k has orthonormal columns. For the second term, we use (15); recalling that Ẑ_k = Û_kŜ_kV̂_kᵀ and using (18), we conclude (19). By Property 2.1, we have (20); from (20) and (21), we obtain (22), where we again used (15). From (19) and (22), we conclude (23), where (a) follows from V_⊥ᵀβ̃* = 0 due to (16). From (24) and (25), it follows that (26) holds. Bringing together (17), (23), and (26), we collectively obtain (27).

Key lemmas. We state the key lemmas bounding each of the terms on the right-hand side of (27). This will help us conclude the proof of Theorem 4.1. The proofs of these lemmas are presented in Sections B.1, B.2, B.3, and B.4.

Lemma 2 Consider the setup of Theorem 4.1 and PCR with parameter k = r. Then, for any t > 0, the following holds w.p. at least 1 − exp(−t²). Here, s_r > 0 represents the r-th largest singular value of X.
B.1 Proof of Lemma 2

Recall that U, V denote the left and right singular vectors of X (equivalently, ρX), respectively; meanwhile, Û_k, V̂_k denote the top k left and right singular vectors of Z̄ (equivalently, Z), respectively. Further, observe that E[Z] = ρX, and let W̄ = Z − ρX. To arrive at our result, we recall Wedin's Theorem (Wedin, 1972).

Theorem B.1 (Wedin's Theorem) Let Û_k, V̂_k (respectively, U_k, V_k) correspond to the truncation of Û, V̂ (respectively, U, V) that retains the columns corresponding to the top k singular values of A (respectively, B). Let s_k denote the k-th singular value of A. Then, the stated subspace perturbation bound holds.

Using Theorem B.1 for k = r, it follows that (39) holds, where s_r is the smallest nonzero singular value of X. Next, we obtain a high probability bound on ∥W̄∥_2. To that end, we decompose ∥W̄∥_2 as in (40) and bound the two terms in (40) separately. We recall the following lemma, which is a direct extension of Theorem 4.6.1 of Vershynin (2018) to the non-isotropic setting; we present its proof for completeness in Section B.5.

Lemma 6 (Independent sub-gaussian rows) Let A be an n × p matrix whose rows A_i are independent, mean-zero, sub-gaussian random vectors in R^p with second moment matrix Σ = (1/n)E[AᵀA]. Then, for any t ≥ 0, the following holds w.p. at least 1 − exp(−t²).

The matrix W̄ = Z − ρX has independent rows by Assumption 3.2. We state the following lemma about the distribution of the rows of W̄, the proof of which can be found in Section B.6. From Lemmas 6 and 7, w.p. at least 1 − exp(−t²), the bound (42) holds. Finally, we claim the following bound on ∥E[W̄ᵀW̄]∥_2, the proof of which is in Section B.7.

Lemma 8 Let Assumption 3.2 hold. Then, the stated bound on ∥E[W̄ᵀW̄]∥_2 holds.

From (40), (42), and Lemma 8, we have, w.p. at least 1 − exp(−t²) for any t > 0, a bound on ∥W̄∥_2; from this, we conclude the following lemma. Using the above and (39), we conclude the proof of Lemma 2.

B.2 Proof of Lemma 3

We want to bound ∥X − Ẑ_k∥²_{2,∞}. To that end, let Δ_j = X_{·j} − Ẑ^k_{·j} for any j ∈ [p]. Our interest is in bounding ∥Δ_j∥²_2 for all j ∈ [p]. Now, note that Ẑ^k_{·j} − Û_kÛ_kᵀX_{·j} belongs to the subspace spanned by the column vectors of Û_k, while Û_kÛ_kᵀX_{·j} − X_{·j} belongs to its orthogonal complement with respect to R^n. As a result, the two components can be bounded separately, where we use the fact that ∥Û_kÛ_kᵀ∥_2 = 1. Recall that U ∈ R^{n×r} represents the left singular vectors of X. We now state Lemmas 10 and 11; their proofs are in Sections B.8 and B.9, respectively.

Lemma 11 Consider any matrix Q ∈ R^{n×ℓ} with 1 ≤ ℓ ≤ n such that its columns Q_{·j}, j ∈ [ℓ], are orthonormal vectors. Then, for any t > 0, the stated tail bound holds; subsequently, the corresponding bound holds w.p. at least 1 − O(1/(np)^{10}).

Recalling X = USVᵀ, we obtain UUᵀX_{·j} = X_{·j}, since UUᵀ is the projection onto the column space of X. Using Assumption 3.3, note that ∥X_{·j}∥²_2 ≤ n. Thus, using Lemma 2 with k = r, we have, w.p. at least 1 − O(1/(np)^{10}), the bound in (48).

Concluding. From (43), (47), and (48), we obtain the claimed bound w.p. at least 1 − O(1/(np)^{10}). This completes the proof of Lemma 3.

B.3 Proof of Lemma 4

To bound ŝ_k, we recall Weyl's inequality.

Lemma 12 (Weyl's inequality) Given A, B ∈ R^{m×n}, let σ_i and σ̂_i be the i-th singular values of A and B, respectively, in decreasing order and repeated by multiplicities. Then, for all i ∈ [m ∧ n], |σ_i − σ̂_i| ≤ ∥A − B∥_2.

Let s̃_k be the k-th singular value of Z. Then, ŝ_k = (1/ρ̂)s̃_k, since ŝ_k is the k-th singular value of Z̄ = (1/ρ̂)Z. By Lemma 12, we have a perturbation bound relating s̃_k to the k-th singular value of ρX; recall that s_k is the k-th singular value of X. As a result, from Lemma 9 and Lemma 10, it follows that, w.p. at least 1 − O(1/(np)^{10}), the claimed bound holds. This completes the proof of Lemma 4.

B.4 Proof of Lemma 5

We need to bound ⟨Ẑ_k(β̂ − β̃*), ε⟩.
To that end, we recall that β̂ = V̂_kŜ_k^{-1}Û_kᵀy, Ẑ_k = Û_kŜ_kV̂_kᵀ, and y = Xβ* + ε. Now, ε is independent of Û_k, Ŝ_k, V̂_k, since Ẑ_k is determined by Z, which is independent of ε. As a result, the cross term has zero mean, where we used the fact that E[ε] = 0. To obtain a high probability bound, using Lemma 16, it follows that, for any t > 0, the first tail bound holds due to Assumption 3.1; note that we have used the fact that Û_kÛ_kᵀ is a projection matrix and that ∥X∥_{2,∞} ≤ √n due to Assumption 3.3. Similarly, for any t > 0, the second tail bound holds due to Assumption 3.1. Finally, using Lemma 17 and (51), it follows that, for any t > 0, the third tail bound holds, since Û_kÛ_kᵀ is a projection matrix and by Assumption 3.1. From (49), (52), (53), and (54), we conclude that, w.p. at least 1 − O(1/(np)^{10}), the claimed bound holds. This completes the proof of Lemma 5.

B.5 Proof of Lemma 6

As mentioned earlier, the proof presented here is a natural extension of that of Theorem 4.6.1 in Vershynin (2018) to the non-isotropic setting. Recall that ∥A∥_2 = max_{x∈S^{p−1}, y∈S^{n−1}} ⟨Ax, y⟩, where S^{p−1}, S^{n−1} denote the unit spheres in R^p and R^n, respectively. We start by bounding the quadratic term ⟨Ax, y⟩ for the finite set of x, y obtained by placing a 1/4-net on the unit spheres, and then use the bound on them to bound ⟨Ax, y⟩ for all x, y over the spheres.

Step 1: Approximation. We use Corollary 4.2.13 of Vershynin (2018) to establish a 1/4-net N of the unit sphere S^{p−1} with cardinality |N| ≤ 9^p. Applying Lemma 4.4.1 of Vershynin (2018), we reduce the bound over the sphere to a bound over the net. To achieve our desired result, it remains to show the corresponding tail bound with ε = K² max(δ, δ²).

Step 2: Concentration. Let us fix a unit vector x ∈ S^{p−1} and write Y_i = ⟨A_{i,·}, x⟩. Since the rows of A are assumed to be independent sub-gaussian random vectors with ∥A_{i,·}∥_{ψ2} ≤ K, it follows that the Y_i are independent sub-gaussian random variables, and hence Y_i² − E[Y_i²] are independent, mean-zero, sub-exponential random variables. As a result, we can apply Bernstein's inequality (see Theorem D.1), where the last inequality follows from the definition of δ in (41) and because (a + b)² ≥ a² + b² for a, b ≥ 0.

Step 3: Union bound. We now apply a union bound over all elements in the net, for large enough C. This concludes the proof.

B.6 Proof of Lemma 7

Recall that z_i = (x_i + w_i) • π_i, where w_i is an independent mean-zero subgaussian vector with ∥w_i∥_{ψ2} ≤ K, and π_i is a vector of independent Bernoulli variables with parameter ρ. Hence, z_i − ρx_i has zero mean and is independent across i ∈ [n]. The only remaining item is a bound on ∥z_i − ρx_i∥_{ψ2}. To that end, note that z_i − ρx_i = w_i • π_i − x_i • (ρ1 − π_i). Now, (ρ1 − π_i) is an independent, zero-mean random vector whose entries are bounded in absolute value by 1, and it is component-wise multiplied by x_i, whose entries are bounded in absolute value by 1 as per Assumption 3.3. That is, x_i • (ρ1 − π_i) is a zero-mean random vector whose components are independent and bounded in absolute value by 1; hence, its ψ_2-norm is bounded by an absolute constant C. For w_i • π_i, note that w_i and π_i are independent vectors and the coordinates of π_i have support {0, 1}. Therefore, from Lemma 13, it follows that ∥w_i • π_i∥_{ψ2} ≤ ∥w_i∥_{ψ2} ≤ K by Assumption 3.2. The proof of Lemma 7 is complete by choosing a large enough C.

Lemma 13 Suppose that Y ∈ R^n and P ∈ {0, 1}^n are independent random vectors. Then, ∥Y • P∥_{ψ2} ≤ ∥Y∥_{ψ2}.

Proof Given a binary vector P ∈ {0, 1}^n, let I_P = {i ∈ [n] : P_i = 1}, and observe that Y • P agrees with Y on the coordinates in I_P and vanishes elsewhere.
Here, • denotes the Hadamard product (entry-wise product) of two matrices. By the definition of the ψ_2-norm, let u_0 ∈ S^{n−1} denote the maximum-achieving unit vector (such a u_0 exists because inf{···} is continuous with respect to u and S^{n−1} is compact). For any u ∈ S^{n−1}, the conditional ψ_2-norm of ⟨Y • P, u⟩ is no larger than that of the corresponding functional of Y; therefore, taking the supremum over u ∈ S^{n−1}, we obtain the claim.

B.7 Proof of Lemma 8

Note that ∥diag(XᵀX)∥_2 ≤ n due to Assumption 3.3. Using Assumption 3.2, the claimed bound follows. This completes the proof of Lemma 8.

B.8 Proof of Lemma 10

By the Binomial Chernoff bound, for α > 1, the stated tail bound holds; by the union bound, it extends uniformly. Noticing that α + 1 < 2α < 2α² for all α > 1, we obtain the desired bound claimed in Lemma 10. To complete the remaining claim of Lemma 10, we consider an α that satisfies the stated relation for a constant C > 0. Then, with ρ ≥ c log²(np)/(np), we have that α ≤ 2. Further, by choosing C > 0 large enough, we have that the claimed inequality holds w.p. at least 1 − O(1/(np)^{10}). This completes the proof of Lemma 10.

B.9 Proof of Lemma 11

By definition, QQᵀ ∈ R^{n×n} is a rank-ℓ matrix. Since Q has orthonormal column vectors, the projection operator satisfies ∥QQᵀ∥_2 = 1 and ∥QQᵀ∥²_F = ℓ. For a given j ∈ [p], the random vector Z_{·j} − ρX_{·j} has zero mean and independent components that are sub-gaussian by Assumption 3.2. For any i ∈ [n], j ∈ [p], we have, by a property of the ψ_2-norm, ∥z_ij − ρx_ij∥_{ψ2} ≤ ∥z_i − ρx_i∥_{ψ2}, which is bounded by C(K + 1) using Lemma 7. Recall the Hanson-Wright inequality (Vershynin (2018)):

Theorem B.2 (Hanson-Wright inequality) Let ζ ∈ R^n be a random vector with independent, mean-zero, sub-gaussian coordinates. Let A be an n × n matrix. Then, for any t > 0, the standard Hanson-Wright tail bound holds.

Now, with ζ = Z_{·j} − ρX_{·j} and the fact that QᵀQ = I ∈ R^{ℓ×ℓ}, we have ∥QQᵀζ∥²_2 = ζᵀQQᵀζ. Therefore, by Theorem B.2, for any t > 0, the tail bound holds, where ζ = Z_{·j} − ρX_{·j}; hence, (a) follows from E[ζ] = E[Z_{·j} − ρX_{·j}] = 0, (b) follows from ζ having independent components, and (c) follows from each component of ζ having ψ_2-norm bounded by C(K + 1). Therefore, it follows by a union bound that, for any t > 0, the stated uniform bound holds. This completes the proof of Lemma 11.

C. Proof of Theorem 4.2

Recall that X′ and Z′ denote the latent and observed testing covariates, respectively. We denote the SVD of the former as X′ = U′S′V′ᵀ. Let s′_ℓ be the ℓ-th singular value of X′. Further, recall that Z̄′ = (1/ρ̂′)Z′, and its rank-ℓ truncation is denoted as Ẑ′_ℓ. Our interest is in bounding ∥Ẑ′_ℓβ̂ − X′β̃*∥_2. Towards this, consider the decomposition in (55); we shall bound the two terms on its right-hand side next. Note that ∥Z̄′ − Ẑ′_ℓ∥_2 is the (ℓ + 1)-st largest singular value of Z̄′. Therefore, by Weyl's inequality (Lemma 12), we have, for any ℓ ≥ r′, the bound in (56). Recall that H and H_⊥ span the rowspace and nullspace of X, respectively; similarly, recall that H′ and H′_⊥ are defined analogously with respect to X′. Let Ĥ_r = V̂_rV̂_rᵀ denote the projection matrix onto the rowspace of Ẑ_r. From (23) and the above, we obtain (57) and (58). In summary, plugging (57) and (58) into (56), we have (59).

Bounding ∥(Ẑ′_ℓ − X′)β̃*∥²_2. Using inequality (15), we obtain (60).

Combining. Incorporating (59) and (60) into (55) yields (61), where Δ_1 and Δ_2 are defined accordingly. Note that (61) is a deterministic bound. We will now proceed to bound Δ_1 and Δ_2, first in high probability and then in expectation.

D. Helpful Concentration Inequalities

In this section, we state and prove a number of helpful concentration inequalities used to establish our primary results.

Lemma 14 Let X be a mean-zero, sub-gaussian random variable. Then, for any λ ∈ R, E[exp(λX)] ≤ exp(Cλ²∥X∥²_{ψ2}).

Lemma 15 Let X_1, . . . , X_n be independent, mean-zero, sub-gaussian random variables.
Then, ∥Σ_{i=1}^n X_i∥²_{ψ2} ≤ C Σ_{i=1}^n ∥X_i∥²_{ψ2}.

Theorem D.1 (Bernstein's inequality) Let X_1, . . . , X_n be independent, mean-zero, sub-exponential random variables. Then, for every t ≥ 0, the standard Bernstein tail bound holds, where c > 0 is an absolute constant.

Lemma 16 (Modified Hoeffding Inequality) Let X ∈ R^n be a random vector with independent, mean-zero, sub-Gaussian coordinates with ∥X_i∥_{ψ2} ≤ K. Let a ∈ R^n be another random vector that satisfies ∥a∥_2 ≤ b almost surely for some constant b ≥ 0. Then, for all t ≥ 0, the stated tail bound on ⟨X, a⟩ holds, where c > 0 is a universal constant.

Proof Let S_n = Σ_{i=1}^n a_iX_i. Applying Markov's inequality for any λ > 0, we obtain P(S_n ≥ t) = P(exp(λS_n) ≥ exp(λt)) ≤ exp(−λt) E[exp(λS_n)]. Now, conditioned on the random vector a, observe that E[exp(λS_n) | a] ≤ exp(Cλ²K²b²), where the equality in the intermediate step follows from conditional independence, the first inequality from Lemma 14, and the final inequality by assumption. Therefore, P(S_n ≥ t) ≤ exp(−λt + Cλ²K²b²). Optimizing over λ yields the desired result. Applying the same arguments to −⟨X, a⟩ gives a tail bound in the other direction.

Lemma 17 (Modified Hanson-Wright Inequality) Let X ∈ R^n be a random vector with independent, mean-zero, sub-Gaussian coordinates with ∥X_i∥_{ψ2} ≤ K. Let A ∈ R^{n×n} be a random matrix satisfying ∥A∥_2 ≤ a and ∥A∥²_F ≤ b almost surely for some a, b ≥ 0. Then, for any t ≥ 0, the stated tail bound holds.

Proof The proof follows similarly to that of Theorem 6.2.1 of Vershynin (2018). Using the independence of the coordinates of X, we have the useful decomposition of XᵀAX − E[XᵀAX] into diagonal and off-diagonal sums. Therefore, letting the diagonal and off-diagonal sums each exceed t/2 define the events of interest, we can express the tail probability as P(Σ_i A_ii(X_i² − E[X_i²]) ≥ t/2) + P(Σ_{i≠j} A_ijX_iX_j ≥ t/2) =: p_1 + p_2. We will now proceed to bound each term independently.

Step 1: diagonal. Applying Markov's inequality for any λ > 0, we bound p_1. Since the X_i are independent, sub-Gaussian random variables, X_i² − E[X_i²] are independent, mean-zero, sub-exponential random variables. Conditioning on A and optimizing over λ using standard arguments yields the bound on p_1.

Step 2: off-diagonals. Let S = Σ_{i≠j} A_ijX_iX_j. Again, applying Markov's inequality for any λ > 0, we bound the moment generating function of S. Let g be a standard multivariate Gaussian random vector, and let X′ and g′ be independent copies of X and g, respectively. By a standard decoupling and comparison argument (Vershynin (2018)), the moment generating function is controlled for |λ| ≤ c/a. Optimizing over λ then gives the bound on p_2.

Step 3: combining. Putting everything together completes the proof.

E. Proof of Theorem 6.1

Type I error. We first bound the Type I error, which anchors on Lemma 18, stated below; the proof of Lemma 18 can be found in Appendix E.2. Note that I − H is a projection matrix, and hence ∥I − H∥_2 ≤ 1. By adapting Lemma 2, we have, w.p. at least 1 − α/2, a bound on ∥Ĥ′_ℓ − H′∥_F. Note that we have used the following: (i) ∥Ĥ′_ℓ − H′∥_F = ∥sin Θ∥_F, where sin Θ ∈ R^{r′×r′} is a matrix of principal angles between the two projectors (Absil et al., 2006), which implies rank(Ĥ′_ℓ − H′) ≤ r′; (ii) the standard norm inequality ∥A∥_F ≤ √(rank(A))·∥A∥_2 for any matrix A. Using the result above, we obtain the Type I bound; again, to arrive at this inequality, we use ∥I − H∥_2 ≤ 1 and tr(Ĥ′_ℓ) = r′. Defining the upper bound as τ(α) completes the bound on the Type I error.

Type II error. Next, we bound the Type II error. We will leverage Lemma 19, the proof of which can be found in Appendix E.3.

Lemma 19 The following equality holds: τ = r′ − c_1 − c_2, where c_1 and c_2 are defined accordingly.

We proceed to bound each term on the right-hand side of (82) separately.

Lemma 21 Let the setup of Lemma 2 hold. Further, assume the entries of W and W′ are independent Gaussian r.v.s with variance ς². Then, for any α ∈ (0, 1), the stated bound holds w.p. at least 1 − α.

Proof The proof is identical to that of Lemma 2, except ∥Z − X∥_2 is now bounded above using Lemma 20.
The remainder of the proof of Corollary 6.1 is identical to that of Theorem 6.1.

E.2 Proof of Lemma 18

Observe that the two sets of equalities stated in the lemma hold by direct computation; applying them together completes the proof.

E.3 Proof of Lemma 19

Because Ĥ′_ℓ is a projection of rank r′, r′ = ∥Ĥ′_ℓ∥²_F = ∥Ĥ_kĤ′_ℓ∥²_F + ∥(I − Ĥ_k)Ĥ′_ℓ∥²_F. Therefore, the claimed identity follows. Now, consider the second term of the equality above.

Lemma 23 If A ∈ R^{n×n} is a symmetric matrix and B ∈ R^{n×n} is a symmetric PSD matrix, then tr(AB) ≤ λ_max(A)·tr(B), where λ_max(A) is the top eigenvalue of A.

F. Towards a Lower Bound on Model Identification

We now take a first step towards establishing a lower bound on PCR's parameter estimation error in Lemma 24 below. Recall that Theorem 4.1 implies that PCR faithfully recovers the model parameter β̃* provided snr grows sufficiently fast. Conversely, if snr = O(1), then Lemma 24 suggests the parameter estimation error is lower bounded by an absolute constant. To establish our result, we show that the Gaussian location model problem (Wu, 2020) is an instance of error-in-variables regression. Here, B_2 = {v ∈ R^p : ∥v∥_2 ≤ 1} denotes the constraint set appearing in Lemma 24.

We make several important remarks. First and foremost, our result stated in Lemma 24 is only a partial correspondence with that stated in Theorem 4.1. The minimax bound in Lemma 24 is stated with ρ = 1, i.e., it does not capture the refined dependence on ρ. Meanwhile, (4) and (5) suggest that the error decays as ρ^{-4}. While this dependency on ρ may not be optimal, similar dependencies have appeared in error bounds within the error-in-variables literature, e.g., Loh and Wainwright (2012) and references therein. Establishing the optimal dependence with respect to ρ is interesting future work. Moreover, Lemma 24 considers the constraint set B_2, which contrasts with that considered in the main body of this work. Finally, as seen in the proof below, our reduction argument utilizes a specific choice of X, while the main body of this work considers a fixed design matrix that the practitioner is unable to choose. Closing the gap on these limitations would significantly enhance the current lower bound, and we leave a formal treatment of this problem as important future work.

F.1 Proof of Lemma 24

Broadly, we proceed in three steps: (i) stating the Gaussian location model (GLM) and an associated minimax result; (ii) reducing GLM to an instance of error-in-variables regression; (iii) establishing a minimax result on the parameter estimation error of error-in-variables using the GLM minimax result.

Gaussian location model. Below, we introduce the GLM setting through a well-known minimax result (Lemma 25).

Reducing GLM to error-in-variables. We will now show how an instance of GLM can be reduced to an instance of error-in-variables. Towards this, we follow the setup of Lemma 25 and define β* = θ*, β = θ, and s = 1/σ. For convenience, we write β = β* + η, where the entries of η are independent Gaussian r.v.s with mean zero and variance 1/s²; hence, β ∼ N(β*, (1/s²)I_p). Now, recall that the error-in-variables setting reveals a response vector y = Xβ* + ε and covariates Z = X + W, where the parameter estimation objective is to recover β* from (y, Z). Below, we construct instances of these quantities using β, β* as follows: (i) Let the SVD of X be defined as X = s·u ⊗ v, where u = (1, 0, . . . , 0)ᵀ ∈ R^n and v = β*. Note that, by construction, rank(X) = 1 and β* ∈ rowspan(X). (ii) To construct y, we first sample ε ∈ R^n whose entries are independent standard normal r.v.s.
Next, we define $y = su + \varepsilon$. From (i), we note that $X\beta^* = su$, such that $y$ can be equivalently expressed as $y = X\beta^* + \varepsilon$. (iii) Let $Z = s\, u \otimes \beta$. By construction, it follows that $Z = X + s\, u \otimes \eta$. Note that $W = s\, u \otimes \eta$ is an $n \times p$ matrix whose entries in the first row are independent standard normal r.v.s and whose remaining entries are zero. To attain our desired result, it suffices to establish that $p/s^2 = \Omega(1)$. By (4) and under the assumption $n = O(p)$, we have that $s^2 \le 2\,\mathrm{snr}^2 (n + p) \le c\, \mathrm{snr}^2\, p$ for some $c > 0$. As such, if $\mathrm{snr} = O(1)$, then the minimax error is bounded below by a constant.
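To make the reduction concrete, the following sketch instantiates the construction numerically (the dimensions and the value of $s$ are arbitrary illustrative choices, not values from the paper):

```r
# Numerical instantiation of the GLM-to-error-in-variables reduction above.
set.seed(1)
n <- 50; p <- 100; s <- 3
beta_star <- rnorm(p)
beta_star <- beta_star / sqrt(sum(beta_star^2))   # unit-norm target parameter
u    <- c(1, rep(0, n - 1))                       # u = (1, 0, ..., 0)^T
X    <- s * tcrossprod(u, beta_star)              # X = s u (beta*)^T, rank one
eta  <- rnorm(p) / s                              # eta ~ N(0, (1/s^2) I_p)
beta <- beta_star + eta                           # beta ~ N(beta*, (1/s^2) I_p)
y    <- s * u + rnorm(n)                          # y = s u + eps = X beta* + eps
Z    <- s * tcrossprod(u, beta)                   # Z = X + s u eta^T
qr(X)$rank                                        # 1, as claimed
max(abs(Z - X - s * tcrossprod(u, eta)))          # ~0: W = s u eta^T
```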
2020-10-28T01:01:38.360Z
2020-10-27T00:00:00.000
{ "year": 2020, "sha1": "ea968d2d3b00a1defac01bc9781c561a2681b62a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ea968d2d3b00a1defac01bc9781c561a2681b62a", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
255342186
pes2o/s2orc
v3-fos-license
REPIMPACT - a prospective longitudinal multisite study on the effects of repetitive head impacts in youth soccer

Repetitive head impacts (RHI) are common in youth athletes participating in contact sports. RHI differ from concussions; they are considered hits to the head that usually do not result in acute symptoms and are therefore also referred to as "subconcussive" head impacts. RHI occur, e.g., when heading the ball or during contact with another player. Evidence suggests that exposure to RHI may have cumulative effects on brain structure and function. However, little is known about brain alterations associated with RHI, or about the risk factors that may lead to clinical or behavioral sequelae. REPIMPACT is a prospective longitudinal study of competitive youth soccer players and non-contact sport controls aged 14 to 16 years. The study aims to characterize consequences of exposure to RHI with regard to behavior (i.e., cognition and motor function), clinical sequelae (i.e., psychiatric and neurological symptoms), brain structure, function, diffusion and biochemistry, as well as blood- and saliva-derived measures of molecular processes associated with exposure to RHI (e.g., circulating microRNAs, neuroproteins and cytokines). Here we present the structure of the REPIMPACT Consortium, which consists of six teams of clinicians and scientists in six countries. We further provide detailed information on the specific aims and the design of the REPIMPACT study. The manuscript also describes the progress made in the study thus far. Finally, we discuss important challenges and approaches taken to overcome these challenges.

Introduction

Repetitive head impacts (RHI) and concussions are common in athletes participating in contact sports. RHI differ from concussion as RHI are considered hits to the head that usually do not result in acute symptoms and are therefore also referred to as "subconcussive" head impacts. However, recent studies suggest that RHI, particularly when sustained in close proximity in time, may have cumulative effects (Koerte et al., 2012; Koerte, Lin, Muehlmann, et al., 2015a; Koerte, Lin, Willems, et al., 2015b; Koerte, Mayinger, Muehlmann, et al., 2015c). Nonetheless, little is known about brain alterations caused by RHI, or about the risk factors that lead to clinical or behavioral sequelae. Soccer provides an accessible model to investigate the effects of RHI. Soccer is the most popular and fastest growing sport in the world, with an estimated 265 million players (http://www.fifa.com, 2021). Soccer is unique in that the unprotected head is intentionally used when heading the ball, making it the sport with the largest number of RHI (Covassin et al., 2003a, 2003b; Gessel et al., 2007). Match heading frequency is greater for boys than girls and increases with age (Sandmo, Andersen, et al., 2020a), and, with the addition of many more headers during practice, players are likely exposed to thousands of headings during their career. Head accelerations of up to 20 g may occur when heading a soccer ball (Naunheim et al., 2003; Sandmo, McIntosh, et al., 2019; Shewchenko et al., 2005a, 2005b, 2005c). Indeed, there is some evidence to suggest a link between RHI in soccer players and brain alterations (Koerte et al., 2012; Koerte, Lin, Muehlmann, et al., 2015a; Koerte, Mayinger, Muehlmann, et al., 2015c).
More severe effects can be expected when a child or adolescent with insufficient neck strength or motor control attempts to head a high-velocity ball (Covassin et al., 2012; Harmon et al., 2013; Hessen et al., 2007). Recently, a directive by the United States Soccer Federation to ban headings for players under age 11 has been followed by The Football Association in England as well as by The Scottish Football Association (Association, 2020; England, 2020; Soccer, 2019). However, there is very limited scientific evidence in children 11 years or younger playing soccer to support such directives. On the other hand, there is preliminary evidence to suggest that RHI may have harmful effects on older youth players as well. For example, a small study on female youth soccer players (15-18 years) found cognitive dysfunction immediately after exposure to RHI during a soccer practice (number of performed headers based on self-report: median 6; range 2-20) (M. R. Zhang et al., 2013). Another study in 10 young adult soccer players (mean age 21 years) found transient dysfunction of vestibular processing one day following an experimental exposure to bouts of RHI (Hwang et al., 2017). Moreover, one study in male youth soccer players (15-17 years) showed less improvement on a cognitive task over the course of a play season, compared with table tennis players (Koerte et al., 2017). However, the above-mentioned studies were small (n < 20) and did not employ quantitative measures such as neuroimaging or fluid biomarkers to investigate the underlying pathophysiology (for review see Tarnutzer et al., 2017). Nonetheless, these studies support the hypothesis that exposure to RHI in youth athletes may disrupt brain processes and thus form the rationale for studying youth soccer players with an array of objective, quantitative measures. The current paper describes the aims, study design, and methodological approach of the REPIMPACT study as well as the structure of the REPIMPACT Consortium. We also describe the progress made in the study, the characteristics of the study population, as well as significant challenges and approaches taken to overcome these.

Study aims

REPIMPACT is a prospective longitudinal multisite study of competitive youth soccer players and control athletes who do not participate in contact sports. The study aims to characterize between-group differences in behavior, clinical sequelae, neuroimaging measures, as well as blood- and saliva-derived measures of molecular processes, over the course of one year (Fig. 1). A second aim is to explore the association between RHI exposure and changes in outcome measures over time within the soccer group.

Consortium structure

The REPIMPACT Consortium includes investigators from 6 research groups based in 6 countries: Germany, Belgium, Israel, Norway, Slovakia, and The Netherlands, as well as consultants from institutions in the U.S. Altogether, the REPIMPACT Consortium consists of physicians, neuroscientists, computer scientists, mathematicians, engineers, statisticians, computational biologists, psychologists, and neurobiologists. Collectively, the research team has expertise in traumatic brain injury, sports-related concussion, sports medicine, neurology, child neurology, child psychiatry, advanced structural and functional neuroimaging, mathematical modelling, medical image processing, integrative computational biology, statistical analysis, neuropsychology, cognitive neuroscience, neurophysiology, neurodegeneration, and neuroimmunology.
The consortium included three data acquisition sites: Germany (Ludwig-Maximilians-Universität, Munich), Belgium (KU Leuven and University Hospitals Leuven, Leuven), and Norway (Oslo Sports Trauma Research Center, Oslo). The three additional sites contributed specific expertise: imaging protocols and algorithm development were headed by the group in Israel (Tel-Aviv University, Tel-Aviv). The group in The Netherlands (Utrecht University, Utrecht) led image postprocessing. Fluid biomarker analyses were headed by the group in Slovakia (Institute of Neuroimmunology, Slovak Academy of Sciences, Bratislava). The consultants added specific expertise in MR spectroscopy, neuroimaging in traumatic brain injury, and biostatistics to the Consortium. All specific tasks were divided into work packages and distributed among the consortium partners.

Study design

REPIMPACT is a prospective longitudinal multisite study evaluating youth soccer players with exposure to RHI compared to a control group of athletes without exposure to RHI. The REPIMPACT study collected a comprehensive longitudinal battery of quantitative measures across three time points within a period of 15 months: TP1, before the beginning of the competitive season; TP2, towards the end of the season; and TP3, 2 months after TP2.

Study participants

Youth athletes were recruited from competitive soccer clubs in Germany, Belgium, and Norway. We presented the study aims and provided information relevant to the study to players, parents, and coaches at competitive youth soccer clubs and academies in Germany, Belgium, and Norway. Control athletes without exposure to RHI were recruited through email and information relevant to the study that was addressed to relevant sports associations and clubs (e.g., swimming). Due to limited funding, only male athletes were included.

Fig. 1 REPIMPACT aims to characterize between-group differences in behavior, clinical sequelae, molecular processes, as well as neuroimaging measures of brain biochemistry, brain connectivity, and brain structure

Inclusion criteria for soccer players were: 14-16 years of age, male, participation in competitive soccer with at least three soccer training sessions per week, and proficiency in the language of the respective country (i.e., German, Dutch, and Norwegian). Inclusion criteria for the control athletes were: 14-16 years of age, male, participation in a competitive non-contact sport with at least three weekly training sessions, no history of contact sports within 12 months prior to inclusion in the study, and proficiency in the language of the respective country (i.e., German, Dutch, and Norwegian). Exclusion criteria for both groups were: a history of physician-diagnosed concussion or any other form of traumatic brain injury, recent brain surgery, physician-diagnosed developmental or learning disorder, any active or past neurological disorder, history of prematurity (birth prior to 37 weeks of gestation), intake of neuroleptic or psychiatric medication, illegal substance abuse, and contraindications to magnetic resonance imaging (MRI).

Study protocol

The study protocol at each time point (TP1-TP3) included collection of demographics, medical history, and previous exposure to RHI, comprehensive neuropsychological evaluation, neuropsychiatric screening, balance assessment, neuroimaging, and blood and saliva collection (Fig. 1). Most procedures were identical between data acquisition sites (Table 1), and those that were not identical are mentioned in the text below.
Demographics, Medical History, and Exposure to RHI

We conducted a semi-structured interview to capture demographic data and information on current and previous sport history, concussion, and pre-existing personal and medical history. Participants also completed a standardized measure of self-reported symptoms from the Sport Concussion Assessment Tool 5 (SCAT5) symptom list. Detailed information was acquired on training habits and lifestyle, including age at start of systematic training, hours of training per week, number of headings performed per day/week/year, position in the field, injuries in general, head injuries, history of symptomatic concussion, and handedness. In addition, "HeadCount" (Lipton et al., 2013), a previously published standardized questionnaire, was validated for this age group and then applied to evaluate exposure to RHI. This questionnaire gathers information on intentional headers as well as on unintentional head impacts sustained in practices and games.

Intelligence

A measure of fluid intelligence (Culture Fair Intelligence Test (CFT20-R)) was administered at the first time point. IQ testing was performed at the German site only.

Neuropsychiatric Screening

In order to screen for psychiatric symptoms, the Youth Self Report (YSR) (Achenbach & Rescorla, 2001) was administered to gather information about the following dimensions: anxious/depressed, withdrawn/depressed, somatic complaints, social problems, thought problems, attention problems, rule-breaking behavior, and aggressive behavior.

Neurological Evaluation

A semi-standardized neurological evaluation, including neurological and developmental history as well as a neurological examination, was performed to screen for coincidental and confounding neurological illnesses, and to document the presence or absence of neurological abnormalities. Whenever possible, the neurological exam was performed at TP1; however, for practical reasons, in some cases it was performed instead at TP2 or TP3. The neurological examination evaluated cranial nerves, motor strength and tone, deep tendon reflexes, sensation, coordination, fine motor function, and gait, validated and standardized as described elsewhere (Hadders-Algra et al., 2010).

Neurocognitive Functioning

Neurocognitive functioning was evaluated through a computer assessment of psychomotor function, executive function, mental flexibility, processing speed, attention, learning, memory, and working memory using the "Cogstate" Computerized Cognitive Assessment Tool (Collie et al., 2007).

Balance Assessment

Balance performance was assessed using the Balance Tracking System (Goble et al., 2016) (BTrackS, Balance Tracking Systems Inc., San Diego, CA, USA) at all three data acquisition sites. The EquiTest System (NeuroCom International, Clackamas, OR, USA) was used in addition at the Belgian site.

Neuroimaging

3 T MRI data were acquired using a common protocol across the three data acquisition sites (Table 2). Sequences were designed to optimize sensitivity to alterations in brain structure and function, while minimizing site differences, allowing for better harmonization. Before data acquisition, each site was asked to scan an identical phantom, which was used to assure that the MRI scanners provide comparable signal and comparable image deformations. The three scanners were from the same vendor (Philips) and had the same major software version (5.3). Sequence parameters were matched across scanners to minimize between-site variability.
Diffusion and functional MRI sequences were identical in Germany and Belgium and leveraged multi-band acquisitions. Since the Norwegian site did not have multi-band capabilities, matching sequences that do not use multi-band were designed and installed. (The values indicated within [ ] in Table 2 are specific to the sequence without multi-band.) Finally, test-retest experiments were performed on healthy volunteers at each site, to evaluate the comparability of within-site and between-site variability ("travelling heads"). To further assure site comparability, an imaging manual was provided for each site, and the technicians at each site were instructed on how to follow this manual when acquiring the data by representatives from the REPIMPACT consortium. The imaging protocol included: 1) high-resolution T1-weighted and T2-weighted anatomical images to characterize regional and gross gray and white matter volume, cortical thickness, and myelin mapping; 2) a multi-shell diffusion acquisition with low and high b-values to investigate brain tissue microstructure and structural connectivity. The multiple shells allow the application of novel analysis methods that go beyond the common DTI analysis, such as free-water imaging (Pasternak et al., 2009), neurite orientation dispersion and density imaging (NODDI) (Zhang et al., 2012), and diffusion kurtosis imaging (DKI) (Jensen et al., 2005). Parameters of these models are putative measures for gliosis, vasogenic and cytotoxic edema, axonal and myelin pathology, atrophy, and neuroinflammation; 3) high spectral resolution and multiple regions-of-interest MR spectroscopy sequences to assess brain metabolism. Short-echo single voxel spectroscopy was acquired in the anterior cingulate gyrus and the periventricular white matter, which have been characterized by abnormalities in TBI (Bartnik-Olson et al., 2020); 4) a functional MRI sequence to detect subtle alterations of functional connectivity at resting state, complementing structural MRI and diffusion MRI (Fig. 3).

Fluid Biomarkers

Blood and saliva samples were collected, preprocessed, and stored at −80°C before shipping the samples to Bratislava, Slovakia for further analyses. Blood-derived measures: The rationale for analyzing miRNAs is based on recent reports on their potential diagnostic value in traumatic brain injury (Atif and Hicks, 2019; Toffolo et al., 2019). Moreover, miRNAs may potentially provide insight into the processes (physiological and pathological) induced by repetitive head impacts. Because miRNAs cross the blood-brain barrier and are stable in peripheral tissue and fluid, they can be quantified in, e.g., plasma and saliva. In addition, specific neuro- and immune-related proteins such as total/phosphorylated microtubule-associated tau protein, neurofilament light polypeptide, ubiquitin C-terminal hydrolase-L1, glial fibrillary acidic protein, and cytokines (IL6, IL10, TNFα) have recently been shown to serve as promising biomarkers and prognostic indicators of traumatic brain injury (Bhomia et al., 2016; Di Pietro et al., 2017; Gill et al., 2018; Peltz et al., 2020; Redell et al., 2010; Yang et al., 2016). Validated circulating miRNAs will be analyzed using integrative computational biology to identify dysregulated pathways associated with RHI. In a secondary and exploratory aim, we investigate the utility of the above-listed measures extracted from saliva as a possible replacement for blood samples.
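Returning briefly to the multi-shell diffusion design described earlier in this section: the reason both low and high b-values are acquired can be seen from the standard diffusion kurtosis signal representation (a textbook relation following Jensen et al. (2005), not a protocol-specific detail):

```latex
\ln S(b) \;=\; \ln S_0 \;-\; b\, D_{\mathrm{app}} \;+\; \tfrac{1}{6}\, b^{2}\, D_{\mathrm{app}}^{2}\, K_{\mathrm{app}}
```

Estimating the quadratic kurtosis term $K_{\mathrm{app}}$ in addition to the apparent diffusivity $D_{\mathrm{app}}$ requires sampling at least two nonzero b-value shells, which is precisely what the multi-shell acquisition provides.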
Statistical approach

For the statistical analysis of the repeated measures, we will use linear mixed-effects models. Our models will include a fixed effect for site, age of the participant (in months), time since baseline (in months), a group variable (soccer vs control), and a binary variable indicating whether the measurement is baseline (TP1) or during the course of the study (TP2 and TP3). We will also include fixed effects for time since baseline interactions with group and with the measurement occasion. Finally, we will also include a random intercept for each subject and a random slope for time since baseline to account for the between- and within-subject variability. Our main hypothesis is that there will be significant differences in the change of the outcomes between soccer players and controls over the course of the three time points; this will be tested using the parameter of the baseline x group interaction. Other hypotheses are: 1) there are group differences at baseline (TP1), which will be tested using the parameter of the group main effect; and 2) there is an association between changes in outcome measures and exposure to RHI. To evaluate the association between RHI exposure and changes in outcome measures over time, the model will be applied to the soccer players only, and instead of group as the exposure variable, we will include measures of head impact exposure. To adjust for inflated false-positive rates due to multiple comparisons, we will apply the Benjamini-Hochberg procedure to control the false discovery rate at the 5% level. All analyses will be conducted in R (R Core Team, 2017) with lme4 (Bates et al., 2005); an illustrative model specification is sketched below.

Fig. 2 Overview of tested and included participants at the three timepoints at all data acquisition sites. Abbreviations. SOC = soccer players, CON = control athletes, GER = Germany, BEL = Belgium, NOR = Norway, TP1 = first assessment, TP2 = second assessment, TP3 = third assessment

Enrollment

Launched in the fall of 2017, the REPIMPACT Consortium has completed the planned data collection. The three data acquisition sites have successfully enrolled 129 athletes into the study. Monthly conference calls among the REPIMPACT Consortium have ensured consistency of enrollment criteria and data sampling between the three data acquisition sites. In total, 167 athletes were invited for testing. Of those, 35 were excluded during the medical interview or neurological examination (history of physician-diagnosed concussion (n = 15), physician-diagnosed migraine (n = 2), history of prematurity (n = 4), dyslexia (n = 7), history of viral or cerebral infections (n = 3), oppositional defiant disorder (n = 1), congenital hydrocephalus (n = 1), claustrophobia (n = 1), control participant actively playing contact sport (n = 1)). In addition, three participants were excluded after inclusion due to an incidental finding on MRI suggestive of a neurological abnormality. Control athletes participated in a variety of non-contact sports (Table 3). The final cohort included a total of 82 soccer players and 47 control athletes (Fig. 2). Of these 129 athletes, 106 completed all three time points. The mean time between TP1 and TP2 was 9.6 months (range 6.9-14.0 months). The time between TP2 and TP3 was on average 2.7 months (range 0.5-6.0 months). The mean time between TP1 and TP3 was 12.2 months (range 8.9-16.1 months). Of note, due to a nation-wide lockdown in the spring and summer of 2020, 3 of the enrolled controls and 4 of the enrolled soccer players could not be followed up at TP2 and TP3 in Germany.
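As a concrete illustration of the statistical approach described above, a minimal lme4 specification might look as follows (a sketch under assumed variable names, not the study's actual analysis code):

```r
# Mixed model sketch: fixed effects for site, age, time since baseline,
# group, and measurement occasion, plus the group and occasion interactions
# with time since baseline; random intercept and slope per subject.
library(lme4)

fit <- lmer(
  outcome ~ site + age_months + time_since_baseline + group + is_baseline +
    time_since_baseline:group + time_since_baseline:is_baseline +
    (1 + time_since_baseline | subject_id),
  data = repimpact   # hypothetical analysis data frame
)
summary(fit)

# Across outcomes, p-values would then be adjusted to control the false
# discovery rate at the 5% level, e.g. p.adjust(p_values, method = "BH").
```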
Neuroimaging

Of the 129 included study participants, MRI data are available for TP1 (n = 125), TP2 (n = 108), and TP3 (n = 100). Ninety-six participants completed MRI at all three time points. Data quality has been continuously monitored and data have been preprocessed (Fig. 3). Moreover, diffusion imaging data have been harmonized to correct for between-site differences (Cetin Karayumak et al., 2019).

Fluid biomarkers

Using miRNA arrays that allow quantification of 179 different microRNAs in a single plasma sample, we performed primary screening employing the RT-qPCR technique. We have identified a population of miRNAs with altered expression in soccer players when compared to controls, and also in the longitudinally collected samples of control athletes and soccer players. So far we have identified several panels of deregulated miRNAs as primary hits for further investigation as potential diagnostic or prognostic biomarkers. After the validation of primary hits, the data are prepared for bioinformatics and prediction of gene targets and deregulated molecular pathways associated with RHI.

Fig. 3 The REPIMPACT imaging protocol includes sequences for the acquisition of structural, diffusion and functional MRI data. Sequences were designed to be as similar as possible across sites and scanners. Imaging analyses include segmentation and parcellation of structural data, diffusion measures, structural and functional connectivity analyses as well as MR spectroscopy

Data storage

The data were de-identified using an alphanumeric code without any personal information. Identifiable information is kept separate from de-identified research data. To ensure that data security is consistent with the European General Data Protection Regulation (GDPR), data are stored on secure servers while providing individual access to the REPIMPACT Consortium Investigators. A common Research Electronic Data Capture (REDCap) (Harris et al., 2009) database was created and maintained to ensure standardized and structured demographic and clinical data entries. Neuroimaging and balance data were stored on secure servers. Prior to uploading, data were stored locally at each site. Uploaded data were fully de-identified and neuroimaging data were "de-faced". Following the upload, the neuroimaging data were organized according to Brain Imaging Data Structure (BIDS) (Gorgolewski et al., 2016) criteria. Blood and saliva samples were stored locally at −80°C and, after the end of data collection, securely transferred to Slovakia for further analysis.

Data Curation

Throughout the study, uniformity of data between sites and data quality have been monitored, discussed, and addressed in the monthly Consortium calls and annual Consortium meetings. Further, inclusion and exclusion criteria have been discussed whenever necessary. Following the completion of data collection, data have been digitized using the secure web application REDCap. Consistency in entering the data has been ensured across all three sites. A master file was prepared that summarizes the completeness of data for all questionnaires and assessments and was made available to all Consortium investigators. Moreover, any additional information (e.g., detailed information on medical history or data quality) that was deemed important for the interpretation of the data was documented and shared. Quality checks have been performed for all questionnaires, clinical evaluations and neuroimaging data, and quality issues and artefacts have been noted and categorized.
Between-site inconsistencies

The difference in timing of the competitive season between the three data acquisition sites represented a challenge. While soccer teams in Germany and Belgium begin their season in late August of a given year, teams in Norway start in late January. Since REPIMPACT aimed to include comparably competitive non-contact sports athletes, the difference in timing of the play season also affected the choice of non-contact sports: the Germany and Belgium sites included mostly swimmers as well as rowers and table tennis players, while the Norway site also included sports that have their competitive season during the winter (e.g., cross-country skiing). These differences reflect national and regional differences between sites. Differences in characteristics of control participants became apparent during data curation. Initially, exclusion criteria included participation in soccer beyond age 12 years. However, during the study it became evident that recruitment of control participants was challenging and that participation in soccer in particular was very common, given the popularity of the sport across Europe. It was then decided to also include those who stopped participation in soccer at least 12 months prior to enrolment in REPIMPACT. In our database this group of controls has been labeled as "intermediate controls". The proportion of controls and intermediate controls varies between study sites, and this difference between sites will be taken into account when performing statistical analyses.

Quantification of head impact exposure

REPIMPACT intended to measure head impact exposure using the in-ear sensor MV1 (MVTRAK, Durham, NC, USA). Thus, before the start of the cohort study, we performed both laboratory and on-field evaluation of the MV1 sensor, which demonstrated major challenges (Sandmo et al., 2019). In brief, in the laboratory setting, the sensor was mounted to a Hybrid III headform (HIII) and impacted with a linear impactor or football (range: 9-144 g). Random and systematic error were calculated using HIII as reference. The MV1 sensor showed considerable random error and substantially overestimated head impact exposure. While MV1 displayed accuracy in counting the number of head impacts, it provided inaccurate information on the magnitude of acceleration. Most importantly, due to poor positive predictive value for detecting headers in real-life settings, secondary verification would be needed, using e.g. video analysis or direct observation. Further, the substantial effort required for installing the devices on each individual player before each training and match made them infeasible to use. As a result of this experiment, we decided not to use the MV1 sensor in the REPIMPACT study. Instead of using physical sensors, REPIMPACT validated and applied a questionnaire known as HeadCount (Catenaccio et al., 2016), using self-reported head impacts as a measure for estimating periodical head impact exposure. The validation study (for a full report see Sandmo, Gooijers, et al., 2020b) was conducted at all three sites. In brief, we found that self-reported data could be used to group youth players into high and low heading exposure groups, but not to estimate individual heading exposure. We then used this questionnaire to capture information on head impacts experienced by our study participants. More accurate methods for estimation of head impact exposure remain an important challenge for the field.
Neuroimaging

During the first year of the study, a hardware failure made an update of the MR scanner at the German site necessary. Following this update, the image quality of the multi-band diffusion and resting-state fMRI sequences was insufficient. The failure was identified through periodic quality control of the obtained images. Following the identification of the issue, it was decided to no longer use the multi-band option at the German site, and instead, the non-multi-band sequence from Norway was installed and tested at the German site. To account for this change in sequences in the image processing, harmonization and statistical analyses, we defined the German data as originating from two sites, one with multi-band data and one without.

Conclusions

REPIMPACT aims to comprehensively address the gap in knowledge regarding the effects of RHI on brain structure, function, biochemistry and development in competitive youth soccer players. Between 2017 and 2020, REPIMPACT enrolled 129 youth athletes in three European countries, employing a multimodal and multidimensional approach of comprehensive measures across many domains. The study includes advanced neuroimaging techniques, including novel, sensitive, and specific measures that are expected to be better associated with clinical and behavioral outcome measures. As such, the REPIMPACT study is positioned to address several key scientific questions regarding the effects of exposure to RHI on the brain. The Consortium proactively applied several measures to ensure consistency of participant inclusion, data collection, and data quality across the data acquisition sites; yet there is the possibility of site-specific effects in the assessment of study participants. As with most multisite multinational studies of this scope, there were challenges and limitations (e.g., only male youth athletes were included). By reporting the setup of the study, the limitations, the challenges, and how we addressed them, we intend this manuscript to be of use for researchers who are planning to establish new multisite studies. In summary, the REPIMPACT Consortium is on track to make important contributions to our understanding of the effects of RHI on the brain in youth soccer athletes. Moreover, we anticipate that this study will pave the way for management guidelines and ultimately for prevention of brain alterations due to exposure to RHI in athletes.

Funding

Funding was provided through the framework of ERA-NET Neuron. The individual national funding agencies are the German Ministry for Education and Research (Germany), the Research Foundation Flanders (G0H2217N) and Flemish Government (Sport Vlaanderen, D3392), Slovak Academy of Sciences and Ministry of Education of Slovak Republic (APVV-17-0668), the Dutch Research Council (NWO), the Norwegian Research Council (NFR), and the Ministry of Health, Israel (#3-13898). Open Access funding enabled and organized by Projekt DEAL.

Acknowledgments

We thank all study participants for taking the time to contribute to our research. We also thank everyone involved in data collection.

Authors' contributions

Inga K. Koerte drafted the manuscript and approved the final version of the manuscript; Roald Bahr critically edited the manuscript and approved the final version of the manuscript; Peter Filipcik critically edited the manuscript and approved the final version of the manuscript; Jolien Gooijers critically edited the manuscript and approved the final version of the manuscript;
Alexander P. Lin critically edited the manuscript and approved the final version of the manuscript; Yorghos Tripodis critically edited the manuscript and approved the final version of the manuscript; Martha E. Shenton critically edited the manuscript and approved the final version of the manuscript; Alexander Leemans critically edited the manuscript and approved the final version of the manuscript; Nir Sochen critically edited the manuscript and approved the final version of the manuscript; Stephan P. Swinnen critically edited the manuscript and approved the final version of the manuscript; Ofer Pasternak critically edited the manuscript and approved the final version of the manuscript.

Data availability

All data and material used are available upon request.

Declarations

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2023-01-02T15:22:27.939Z
2021-09-10T00:00:00.000
{ "year": 2021, "sha1": "3959e0d999254c1429b930b3fe1a4d3b90f419b2", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11682-021-00484-x.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "3959e0d999254c1429b930b3fe1a4d3b90f419b2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
244909642
pes2o/s2orc
v3-fos-license
COVID-19 and the labour market outcomes of disabled people in the UK

The economic impact of COVID-19 has exacerbated inequalities in society, but disability has been neglected. This paper addresses this knowledge gap by providing a comprehensive analysis of the differential labour market impact of COVID-19 by disability in the UK. Using data from the Labour Force Survey before and during the pandemic, it estimates disability gaps in pre-pandemic risk factors, as well as changes in labour market inequality nearly one year on. Disabled workers are found to face higher COVID-19-related economic and health risks, including being more likely to work in 'shutdown' industries, and in occupations with greater proximity to others and exposure to disease. However, established measures of inequality, including the disability employment and pay gap, suggest limited impact of COVID-19 in 2020. Nevertheless, the increase in the probability of being temporarily away from work, even among otherwise comparable workers, is 40% higher for disabled workers and consistent with disproportionate use of the government's job retention scheme. While the reasons for this are likely to be complex, there is a risk that it will contribute to future disability-related labour market inequality.

Introduction

One of the defining features of COVID-19 has been the way it has reinforced inequalities in society, including in the UK. While attention focused most immediately on ethnicity because of dramatic differences in health risk (see Platt and Warwick, 2020), there was subsequent concern relating to gender due to the associated closure of schools and additional childcare responsibilities (see Hupkau and Petrongolo, 2020) and age as a consequence of pronounced job losses among young people (see Wilson and Papoutsaki, 2021). In contrast, disability has been largely neglected. Indeed, in a comprehensive analysis of the impact of COVID-19 on inequality in the UK by Blundell et al. (2020), which documented variation in labour market outcomes by socio-economic status, education, age, gender and ethnicity, disability is not mentioned. This is despite disabled people representing nearly 20% of the UK working-age population, being subject to some of the most profound and persistent labour market inequality pre-pandemic (Baumberg et al., 2015), and broader United Nations calls for a disability-inclusive COVID-19 government response. In the UK, the Office for National Statistics (ONS) provided early statistical evidence relating to disability and health risks, and social isolation during COVID-19. Conditional on other risk factors (including underlying health conditions), the risk of death due to COVID-19 was found to be significantly higher for disabled compared to non-disabled people (ONS, 2021a). Disabled people also reported a more detrimental impact of COVID-19 on their life and wellbeing than non-disabled people (ONS, 2021b). The relative absence of disability in evidence on economic inequality is, however, consistent with broader neglect of the economic contribution of disabled people and a dearth of labour market analysis relative to other protected characteristics (Jones and Wass, 2013), including in relation to the economic cycle (Jones et al., 2021). This paper aims to address this knowledge gap by providing the first comprehensive analysis of the impact of COVID-19 on disability-related labour market inequality. We do this in the context of the UK, and provide evidence to December 2020, just less than one year into the pandemic.
Building on a series of studies relating to other protected characteristics (for example, Blundell et al., 2020 and Warwick, 2020), we use data from the Quarterly Labour Force Survey (QLFS) to explore the differential labour market impact by disability in two stages. First, we use pre-pandemic (2019) data to estimate relative COVID-19 work-related economic and health risks by disability. For example, we explore economic risks such as working in a shutdown industry and health risks including exposure to disease. We estimate both raw disability gaps and those adjusted for other personal characteristics. Subsequently, we compare labour market outcomes in 2020 with those in 2019 to explore the differential labour market impact, including in relation to economic status, proxies for 'furlough' (the UK government's coronavirus job retention scheme (JRS); see Adams-Prassl et al., 2020b), working reduced hours, working from home, and pay. Again, we consider disability gaps before and after accounting for other characteristics, including occupational and industrial risks, in order to identify aggregate gaps and those among 'comparable workers'. Such evidence is clearly timely and relevant to policy designed to improve disability-related labour market equality, particularly the government's recent National Disability Strategy (NDS) and current 'levelling up' agenda. Based on pre-COVID-19 job characteristics, we find that, relative to comparable non-disabled workers, disabled workers face higher COVID-19-related economic and health risks. This includes a higher probability of working in a shutdown industry, and being in an occupation with greater proximity to others and exposure to disease. The likely protection provided by homeworking is unclear, with disabled workers more likely to work from home but to be employed in occupations with less homeworking potential. Established indicators of labour market inequality, including the disability employment gap (DEG) and disability pay gap (DPG), however, show little change in 2020. In contrast, the increase in the probability of being temporarily away from work (which includes those on the government JRS) is about 40 percent larger for disabled workers even after accounting for differences in work-related characteristics. While potentially reducing the short-term labour market impact of COVID-19 on disability inequality, the risk is that some longer-term consequences of this remain. The remainder of the paper is structured as follows. Section 2 provides a brief overview of pre-existing disability-related labour market inequality in the UK and early international evidence on disability inequality and COVID-19. Data from the QLFS and measures used in this analysis are introduced in Section 3 and the statistical analysis applied is outlined in Section 4. Section 5 presents our findings in relation to the labour market impact of COVID-19 by disability and Section 6 briefly concludes.

Pre and early pandemic disability-related labour market inequality

Disabled people in the UK experience some of the most pronounced labour market inequality of all groups protected under the 2010 Equality Act. Academic and policy attention has focused on the DEG, the percentage point difference in the employment rate between disabled and non-disabled people, which at about 30 percentage points is both large and enduring (see, for example, Baumberg et al., 2015).
Conditional on employment, disabled workers have also been found to be more likely to work part-time (Jones, 2007) and in self-employment (Jones and Latreille, 2011), with these differences potentially leading to greater susceptibility to COVID-19-related labour market consequences (see Blundell et al., 2020 for evidence on the disproportionate impact on the self-employed). Further, there is evidence of a sizeable DPG (Longhi et al., 2012) likely to reinforce this sensitivity, given evidence of a disproportionate COVID-19 impact on the low paid (see Blundell et al., 2020). Disability gaps in labour market outcomes are typically smaller, but remain evident, after the adjustment for other observable personal and (where relevant) work-related characteristics, consistent with disability-related labour market inequality. In contrast, industrial and occupational segregation by disability, particularly important given the sectoral impact of COVID-19, has not been extensively explored. Evidence related to the impact of the economic cycle on disability inequality is useful in anticipating the impact of COVID-19 as an economic contraction. Internationally, disabled people have been found to be 'first fired, last hired' (Kruse and Schur, 2003), with US evidence relating to the financial crisis confirming that disabled workers were more likely to be displaced (Mitra and Kruse, 2016). In the UK, Jones et al. (2021) explore the in-work experience of the financial crisis, finding comparable disabled employees more likely to report recession-induced changes to workload, work organisation, wages and access to training than their non-disabled counterparts, a possible reflection of employers' greater ability to discriminate in a downturn and/or changing priorities from equality towards performance. Nevertheless, while providing important context, COVID-19 is distinct from previous downturns in the speed of contraction and subsequent recovery, its dramatic sectoral impact, and the extent of government support. The latter, particularly the JRS designed to cushion job loss in the UK, is anticipated to limit the impact on employment (and the DEG) relative to similar cyclical contractions. COVID-19 has, however, also brought wider changes in the social and physical environment, benefit system and healthcare, potentially with differential effects by disability. Where these disproportionately affect disabled people, disability gaps in the labour market impact of COVID-19 are likely to be magnified. In relation to the impact of COVID-19, while the evidence on other protected characteristics including gender and ethnicity has grown rapidly, including in the UK, the international evidence on disability is scarce. In terms of pre-COVID-19 risk factors, Schur et al. (2020) highlight the potential benefits of flexible working arrangements, particularly working from home, for disabled people. However, while they find disabled workers in the US are more likely to primarily work from home in their current role, they argue the potential impact of increased homeworking is more limited since disabled workers are less likely to be employed in occupations with high homeworking potential. In the UK, Hoque and Bacon (2021) find that, in 2011, disabled employees are no more likely to work from home than comparable non-disabled employees.
They set out conflicting arguments in relation to the benefits of homeworking for disabled people but confirm the restricting role of the less skilled occupational distribution among disabled workers. The additional health risks posed by COVID-19 may, however, create or exacerbate a pre-existing disability gap in the benefits of homeworking, leading to a differential increase during COVID-19. In terms of early economic outcomes, Houtenville et al. (2021) use data from the US Current Population Survey to track employment rates for disabled and non-disabled people from February 2020 to January 2021 and find largely common trends. Using the same data but restricting their analysis to people in work within the last 12 months, Schur et al. (2021) instead find that the DEG increased during COVID-19, partially due to differential occupation-related risks. In the UK, Citizens Advice (2020) report, on the basis of a survey of 6015 people, a higher risk of redundancy among disabled workers between June and July 2020, which increases with disability severity (particularly for those required to 'shield'). Using national data from the COVID-19 monthly (April-June 2020) surveys of Understanding Society, Emerson et al. (2021) further explore the initial impact of COVID-19 and find that disabled people (albeit defined several years prior) were more likely than non-disabled people to work reduced hours and experience greater financial stress, as measured by food poverty, debt and self-assessed financial circumstances. These differences are reduced but not eliminated by controlling for basic demographic characteristics and pre-lockdown financial status. In contrast, they find no differences in redundancy rates or job loss. Importantly, however, the analysis does not control for established COVID-19 work-related risk factors, including industry and occupation. Finally, in evidence to the UK Work and Pensions Committee Inquiry into the DEG submitted during the development of this paper, Roberts et al. (2021) find no significant change in the DEG but a disproportionate increase in disabled people being away from work, based on descriptive statistics from the QLFS from January 2018 until September 2020. They suggest a higher prevalence of disabled workers in part-time, insecure jobs and in sectors at high risk as potential drivers, something we explore in the multivariate analysis which follows. The early UK evidence therefore tentatively suggests a disproportionate labour market impact of COVID-19 on disabled people. It is, however, limited in both scope and depth, with studies typically relying on descriptive statistics, sometimes based on relatively small samples and non-standard measures, covering periods early in the pandemic, and undertaking limited pre-pandemic comparison. This paper starts to address these limitations by using large-scale, nationally representative data to analyse a comprehensive range of established indicators by disability as defined by legislation. Following Blundell et al. (2020), we first assess the potential differential impact based on pre-pandemic disability gaps in established COVID-19-related economic and health risk factors. We then trace changes in disability gaps in labour market outcomes post-pandemic, including national measures of disability inequality in employment status and pay (the latter highlighted by Schur et al., 2021 as important for future COVID-19-related research), as well as proxies for government employment support, changes in hours and homeworking.
Our analysis considers the period up to the end of 2020, nearly a year post-pandemic, and extends the focus of the early literature beyond immediate short-term changes. Given the consistency of the QLFS over time, we utilise information pre-pandemic as a comparator and explore the influence of pre-existing trends. Importantly, we build on the disability inequality literature to explore the extent to which disability gaps arise due to disability per se or pre-existing factors, including prior labour market disadvantage. In doing so, we extend the literature on disability inequality to consider whether this profound external health and economic shock compounded existing inequalities, and contribute new evidence on disability to the growing literature on COVID-19-related labour market inequality (see Adams-Prassl et al., 2020a; Blundell et al., 2020 and Warwick, 2020 for the UK). Such evidence is clearly important to the NDS, and the government's aim to get 1 million more disabled people into work by 2027.

The Quarterly Labour Force Survey (QLFS)

We use data from the QLFS (ONS, 2020), the largest nationally representative household survey in the UK, which contains comprehensive information on personal and work-related characteristics and has been extensively used for analysis of disability (for example, Baumberg et al., 2015) and to track the early impact of COVID-19 (for example, Blundell et al., 2020). It has several advantages in this context. It contains comparable data before and during COVID-19, including detailed information on occupation and industry to control for recognised risk factors. Critically, it collects information on disability according to an established definition aligned to legislation, and for a large enough sample to perform robust analysis. A further advantage is that we track labour market outcomes using conventional measures that can be compared pre-pandemic. The trade-off is, however, that, unlike specialised surveys, current versions of the QLFS do not contain tailored COVID-19-related measures. COVID-related questions added to the QLFS are currently classed as experimental, with access restricted (ONS, 2021c). The QLFS has a rotational panel design such that, in every quarter, 20 percent of individuals are in their first wave and 20 percent are in their fifth and final wave. Two separate datasets are constructed for this analysis. First, to explore risk factors, an annual 2019 (pre-COVID-19) cross-sectional dataset is created by pooling individuals in their first or final wave across the four constituent quarters. Second, to explore the labour market impact, individuals in wave 5 are retained across the four quarters in 2019 and 2020 (the maximum post-pandemic period available at the time of writing). The restriction to individuals in wave 5 has two advantages. First, we utilise two independent annual cross sections. Second, it was particularly wave 1 data collection, undertaken via face-to-face interviews (which were replaced with telephone interviews), that was directly affected by COVID-19. The trade-off is that the wave 5 sample is most affected by attrition across the QLFS panel element. Our findings are, however, robust to a series of changes, including pooling individuals in waves 1 and 5 and, given COVID-19-related changes in sample composition (see ONS, 2020b), additionally controlling for housing tenure (see Appendix Table A5).
Throughout, we define post-COVID-19 as after the initial national lockdown (March 23, 2020) and principally compare this to the same period one year earlier (pre-COVID-19). This captures the initial national lockdown and relaxation, and subsequent devolved local and national restrictions in Autumn 2020. Albeit subject to a series of changes (including generosity), the government JRS operated throughout this period. Our sample is restricted to working-age individuals (aged 16-65) throughout, with additional restrictions imposed depending on the precise measure analysed (see below). Given evidence of diverging pre-COVID-19 trends, particularly narrowing of the DEG (see Appendix Figure A2), in additional specifications we extend our pre-COVID-19 period to the same period each year from 2013 (the longest period over which disability is consistently measured) to control for pre-existing convergence/divergence in disability gaps which would otherwise potentially bias our estimate of the impact of COVID-19.

Disability

Disability is defined according to the 2010 Equality Act, where a long-term health problem substantially limits day-to-day activities. Individuals are asked 'Do you have any physical or mental health conditions or illnesses lasting or expecting to last 12 months or more?'. Those who respond positively are then asked 'Does your condition or illness reduce your ability to carry out day-to-day activities?', to which individuals can respond Yes, a little; Yes, a lot; and Not at all. As per guidance from the UK Government Statistical Service on the Equality Act 2010, those who respond yes to the first and second question (either a little or a lot) are defined as disabled (see ONS, 2021c). Remaining individuals form the non-disabled group. As is typical in the literature, we predominantly focus on this global, binary measure. However, since individuals indicate the nature of their health problem(s) from a list of 17 (18 in 2020) responses, in a similar manner to Jones et al. (2018), we construct a measure of severity based on multiple health problems and use information on the main health problem to create a measure of physical versus mental impairment (see Appendix Table A2 for definitions). In sensitivity analysis we explore impairment further by disaggregating it into 5 groups (see Appendix Table A7). While widely used, there are well-established limitations of using self-reported information on disability for labour market analysis. First, given the individual nature of the threshold for defining a health condition as limiting, self-reported information will suffer from measurement error, which will likely bias estimates downward. Second, offsetting this, if disability is used to justify inferior economic outcomes, disability inequality will be overestimated (see Bound, 1991). While disability has been on a rising trend in the UK since 2013, it is possible that COVID-19 itself (particularly long-COVID) increased disability prevalence in 2020. COVID-19 might have also influenced disability reporting, although the direction of this is less clear. While there are potential incentives to over-report disability, such as to justify government support, there are likely to be opposing pressures given greater stigma/increased COVID-19-related economic risks.
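To make the construction of the disability indicator described above concrete, the coding logic can be sketched as follows (variable and level names are illustrative assumptions, not actual QLFS variable names):

```r
# Equality Act disability flag from the two QLFS questions described above:
# a 12-month health condition that limits day-to-day activities (a little/a lot).
qlfs$disabled <- qlfs$health_cond_12m == "Yes" &
  qlfs$activity_limited %in% c("Yes, a little", "Yes, a lot")
# Severity and physical vs mental impairment measures would then be derived
# from the number of reported health problems and the main health problem.
```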
A significant increase in disability prevalence among the working-age population, from 19.3 to 20.1 percent pre- and post-COVID-19, is evident in the QLFS, but this seems to follow a rising trend from 2016 rather than reflect a distinct COVID-19-related increase (see Appendix Figure A1). In terms of type and severity, the increase is evident among those with multiple impairments and impairments relating to breathing and organs, and other.

Pre-pandemic economic and health risk factors

The impact of COVID-19 is separated into 2019 risk factors and changes in outcomes pre- and post-COVID-19. In defining the former, we use established measures based on analysis of the early impact of COVID-19 (see Appendix Table A1 for details). Our measures capture both economic and health-related risks. First, following Joyce and Xu (2020) and Blundell et al. (2020), we capture the risk of low labour demand resulting from the sectoral nature of the COVID-19 policy response using a binary measure for shutdown industries, defined based on the detailed (4-digit) 2007 Standard Industrial Classification (SIC) and covering industries such as retail, transport, accommodation, and leisure. Although the focus has been on job loss, following Farquharson et al. (2020) we also consider risks associated with being a key worker (defined using the ONS (2020a) classification based on detailed (4-digit) Standard Occupational Classification (SOC) 2010 and SIC codes). Being in high demand, key workers are likely to be at greater health risk from COVID-19 but also from high work intensity. We also measure health risks more directly, utilising information on pre-pandemic exposure to COVID-19 derived from ONS analysis of the US Occupational Information Network (O*NET). More specifically, proximity to others and exposure to disease are measured on a standardised scale from 0 to 100 (increasing in risk) and mapped at the detailed SOC level. Proximity to others can also be considered as an economic risk due to the likely impact of social distancing. Our final set of measures capture working from home, expected to reduce economic and health risks. First, we focus on the probability of 'mainly' working from home. Second, we use detailed SOC measures of potential homeworking (previously found to impact on COVID-19-related job loss, Adams-Prassl et al., 2020a) derived by ONS from O*NET. Overall homeworking ability is derived from five facets and measured as an index from 0 to 5, decreasing in ability. All work-related risks are measured conditional on work (employment or self-employment).

Economic outcomes

Although much of the early literature focused on risk factors by necessity, we also consider peri-pandemic labour market outcomes. These include established measures of disability-related inequality. We also capture a reduction in labour demand not reflected in employment status, for example, individuals who are furloughed as part of the Government JRS (see Brewer et al., 2020). In the absence of a direct measure, we utilise the proportion temporarily away from paid work (compared to the previous year), as recommended by ONS (2020c) and applied by Wilson and Papoutsaki (2021) among others. We further explore changes in hours among those who remain in work to capture additional adjustment at the intensive margin and 'flexible' furlough. For being temporarily away from work and for hours, we create additional measures capturing whether these are the outcome of 'economic or other' causes, to align more closely with COVID-19.
This information can also be used to explore the probability of being away from work due to being 'sick or injured' but, consistent with evidence on sickness absence rates during COVID-19 (ONS, 2021d), we find no significant increase in this post-COVID-19. We complement this with self-reported information on underemployment, measured as a preference to work more hours at the same rate of pay. We also explore differences in actual homeworking (as described above). Finally, given the potential for adjustment, both through furlough (which requires employers to pay a minimum of 80% of usual pay for hours not worked, up to a monthly cap of £2500) but also pay freezes or cuts, we consider the hourly DPG. Except for hours, in-work measures are considered for all workers to capture the full effect of COVID-19 including the influence of furlough, although we explore the robustness of our findings to restricting the analysis to those who remain in work (results available upon request).

Analytical approach

Regression analysis is applied to estimate adjusted disability gaps in pre-COVID-19 risk factors and differential changes in outcomes pre- and post-COVID-19 by disability. We model each 2019 risk factor (R_i) for individual i using Ordinary Least Squares (OLS) as follows:

R_i = α + μD_i + P_i'γ + ε_i  (1)

where D_i is a binary measure of disability and P_i denotes personal characteristics, namely gender; age band; marital status; presence of dependent children; highest qualification; ethnicity and region. All models also include a control for quarter given the nature of these data. We explore the disability gap (μ) before and after accounting for personal characteristics. Work-related characteristics are excluded since they are likely to be jointly determined with occupation and industry. Where risk factors are binary, we therefore estimate linear probability models, but estimates are similar to marginal effects from the corresponding probit models. For each labour market outcome, the change in the disability gap pre- and post-COVID-19 is estimated as follows:

L_it = α + μD_i + λPost_t + β(D_i × Post_t) + P_it'γ + ε_it  (2)

where the labour market outcomes for individual i in year t are given by L_it, and disability and personal characteristics are defined above. For in-work outcomes, we additionally include work-related characteristics (W_it) including part-time employment; self-employment (where relevant); months of tenure with current employer (and tenure squared) and sector. In an additional specification we also control for SOC 2010 major occupations and SIC 2007 industry sectors to capture work-related economic risks as discussed above. Adams-Prassl et al. (2020a, 2020b) and Hupkau and Petrongolo (2020) among others estimate similar specifications when modelling job loss and furlough. Except for hourly pay, which is only available for employees, we retain self-employed workers in our sample given previous evidence of their disproportionate COVID-19-related impact. Consequently, we are unable to include controls for temporary employment or workplace size, but these are included in an additional specification restricted to employees. Our focus is on the interaction between disability and the period post-COVID-19 (Post_t), where β measures the change in the disability gap over time. Its statistical significance would indicate a differential change in outcomes pre- and post-COVID-19. While the sample is too small to explore variation over the post-COVID-19 period, the results are robust to controlling for post-COVID-19 × month interactions (see Appendix Table A5).
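Equation (2) is a standard two-period difference-in-differences specification and can be estimated with off-the-shelf tools. The following is a minimal self-contained sketch using the Python statsmodels formula API with simulated data; all variable names are hypothetical stand-ins for the QLFS fields, not the authors' code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 10_000

# Simulated stand-in for a pooled 2019/2020 extract.
df = pd.DataFrame({
    "disabled": rng.integers(0, 2, size=n),
    "post": rng.integers(0, 2, size=n),  # 0 = 2019, 1 = 2020
    "age_band": rng.integers(1, 6, size=n),
    "female": rng.integers(0, 2, size=n),
})
# Outcome: away from work, with a widening disability gap built in.
p = 0.05 + 0.04 * df.disabled + 0.10 * df.post + 0.04 * df.disabled * df.post
df["temp_away"] = rng.binomial(1, p)

# 'disabled * post' expands to both main effects plus the interaction,
# whose coefficient is beta, the change in the disability gap.
res = smf.ols(
    "temp_away ~ disabled * post + C(age_band) + female", data=df
).fit(cov_type="HC1")  # robust standard errors, as in the table notes
print(res.params["disabled:post"], res.bse["disabled:post"])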
We introduce personal and (where relevant) work-related characteristics sequentially and explore the impact on β. Without controls, β measures the overall COVID-19 differential impact by disability. The inclusion of controls nets out other risk factors, including differences in the concentration of disabled workers in industries and/or occupations more affected by COVID-19. It comes closer to estimating the disproportionate impact on disabled workers in comparable jobs, or inequality, which has been the focus of the literature. As in equation (1), μ is the pre-COVID-19 disability gap. As is well-established, to interpret β, the change in the disability gap (or difference-in-difference), as approaching a causal impact of COVID-19 requires the assumption of parallel trends in outcomes by disability pre-COVID-19. Given the pre-existing narrowing documented above, this assumption is not plausible for the DEG. In a final specification we therefore extend the pre-COVID-19 period to 2013 and include a time trend and a disability × time trend interaction. The latter captures longer-term disability-related outcome convergence/divergence that could otherwise be attributed to COVID-19. Throughout, OLS estimates are provided for ease of interpretation. Appendix Table A2 provides full definitions and means for all the control variables by disability and pre-/post-COVID-19. The descriptive statistics confirm some well-established differences, including that disabled people are older and less qualified on average; however, they also highlight some differences particularly relevant to COVID-19, including higher rates of part-time employment among disabled workers, and a relative concentration in less skilled occupations and industries including distribution, hotels and restaurants and public administration, education and health.

Risk factors (pre-COVID-19)

Table 1 presents 2019 COVID-19 work-related risk factors for workers (employees and the self-employed), by disability status. Percentage point gaps between disabled and non-disabled workers are supplemented with differences (relative to the non-disabled) in percent to facilitate comparison between measures. Disabled workers face higher economic and health risks of COVID-19. For example, in terms of economic risks, disabled workers are 11 percent more likely to be employed in a shutdown industry, with disability gaps evident in retail, accommodation and food, and personal care (see Appendix Table A3). In terms of health risks, disabled and non-disabled workers have a similar probability of being a key worker, but this disguises differences between key worker occupations. Disabled workers are significantly more likely to work in health and social care; key public services; food and other necessary goods; and in local and national government, but are significantly less likely to work in transport or utilities, communication and financial services (see Appendix Table A3). In relation to direct health risk measures, disabled workers are significantly more likely to work in occupations involving proximity to others and exposure to disease. Consistent with recent US evidence (Schur et al., 2020), pre-pandemic disabled workers are slightly more likely than non-disabled workers to work from home but are less likely to work in occupations with high homeworking ability, consistent with homeworking providing a form of accommodation of disability. As noted by Schur et al. (2020), this generates an inconclusive picture in terms of COVID-19.
While the higher homeworking probability reduces COVID-19-related health and economic risks, disability-related occupational differentials mean disabled workers will be less likely to benefit from COVID-19-related increases in homeworking. Overall, disabled workers appear to have higher COVID-19-related health and labour market risks, albeit it is important not to infer higher risks for disabled people as a whole given their lower employment rate. It is also worth noting that (except for actual homeworking) these disability gaps relate to differences in occupation and industry rather than disability per se, but nevertheless are likely to have implications for disabled people's experience of work, and health and economic outcomes during COVID-19. Of course, disability gaps might be a consequence of other personal characteristics correlated with disability, to which we now turn.
[Table 1 notes: Authors' calculations based on the QLFS 2019 (waves 1 and 5). All figures relate to workers (employees and the self-employed). The percentage disability gap (in parentheses) is measured relative to the non-disabled. ***, **, * denote statistical significance from zero at the 1%, 5% and 10% levels respectively. Sample sizes are specific to each risk measure and are reported in brackets.]
Table 2 presents estimates of the disability gap (equation (1)) for the six risk factors. Model (1) confirms the raw gaps discussed above. Controls for personal characteristics (coefficient estimates available upon request) are added in Model (2), and the disability gap tends to narrow slightly. Nevertheless, even after accounting for this, disabled workers remain at higher COVID-19-related economic and health risks, including working in a shutdown industry, and in occupations with proximity to others and exposure to disease. This is a concern given the likely more acute implications of these risks for disabled workers due to existing economic inequalities and underlying differences in health. Consistent with the discussion of Table 1, the role of homeworking is confirmed as complex, depending on the extent to which disabled workers had disproportionate access during COVID-19, something we explore below.
[Table 2 notes: *p < 0.10, **p < 0.05, ***p < 0.01. All models include a constant and quarter fixed effects. All figures relate to workers (employees and the self-employed).]
Table 3 presents descriptive statistics for labour market outcomes including employment status, being temporarily away from work, and in-work measures such as hours, homeworking and pay, pre- and post-COVID-19 respectively, by disability status. We present disability gaps as well as post-COVID-19 values relative to pre-pandemic levels. The data confirm well-established disability-related labour market inequality, including a DEG of about 30 percentage points, an additional disability gap in hours for those in work, and a DPG of about 15 percent.
[Table 3 notes: ***, **, * denote significance of the disability gap at the 1%, 5% and 10% levels respectively. Usual and actual hours include paid overtime. Unemployment is measured as a percentage of the economically active population. For pay, the sample is restricted to employees.]

Early economic impact

In terms of the change pre- and post-COVID-19, and notwithstanding the rise in unemployment, there is relatively limited impact on employment status for either disabled or non-disabled people. This has been previously recognised (see, for example, Brewer et al., 2020) and largely attributed to the JRS, although it is thought to partially reflect changes in the QLFS sample composition, something we explore in the multivariate analysis which follows. There is more evidence of changes in outcomes among those in employment, and consistent with the government JRS scheme, the proportion of workers temporarily away from work more than doubles post-COVID-19. Moreover, consistent with Roberts et al. (2021), we find the disability gap in being away from work doubles from 4 to 8 percentage points, suggesting disabled workers are disproportionately affected, possibly reflecting a greater requirement to shield.
Consistent with this, a greater proportion of disabled workers report being temporarily away from work post-COVID-19 due to economic reasons (9 percent compared to 7 percent). Interestingly, among those who remain in work, disabled workers are no more likely to report changes in hours for economic reasons, suggesting a higher risk of full, but not partial, furlough. Aligned to this, actual hours among those who remain in work are reduced only slightly, albeit the gap between usual and actual hours widens more substantially. While homeworking increases during COVID-19, the growth according to our measure (from 14 percent to 18 percent) is surprisingly limited and might reflect a lack of clarity around whether temporary COVID-19-related changes should be included in the LFS definition of 'mainly' working from home. The rates are, for example, substantially lower than homeworking in the ONS Labour Market Survey, which refers to working from home in the reference week (ONS, 2020d). There is evidence of nominal wage growth for both disabled and non-disabled employees and suggestive evidence that the DPG has widened. These trends are explored more formally in Table 4, which presents the pre-COVID-19 disability gap, the impact of COVID-19 on non-disabled people and the differential COVID-19 impact by disability (β in equation (2)). It is the latter which demonstrates whether the disability gap has changed and provides our estimate of a differential experience of COVID-19. Successively more comprehensive specifications are reported in Models (1)-(4) where, in Model (4), the controls for occupation and industry capture broad differences in risk factors (coefficient estimates available upon request). The sample necessarily varies between outcomes, but for those measured for workers we estimate an additional specification in Model (5) restricted to employees. COVID-19 is associated with a significant but relatively small decline in the probability of employment among the working-age population. We find limited impact on the DEG, where there is weak evidence of significant narrowing (by about 2 percentage points) in Model 2. This appears to contrast with the evidence on expectations of redundancy from Citizens Advice (2020), but it is worth highlighting that, because of the lower pre-COVID-19 employment rates among disabled people, the same percentage reduction in the probability of employment will lead to a narrowing DEG. That is, non-disabled people are likely to be disproportionately impacted simply because they are more likely to be in work. Nevertheless, in contrast to the decline for non-disabled people, Table 3 shows a positive percentage change in the employment rate of disabled people pre- and post-COVID-19, albeit this is negligible and insignificant. Since workers on furlough remain employed, we explore the impact on being away from work. Here we find an increase among non-disabled workers of about 10 percentage points post-COVID-19 and considerable widening of the disability gap, which nearly doubles. Further, this is not explained by differences in the jobs disabled workers hold and appears to relate to disability per se.
Indeed, these results are robust to the inclusion of more detailed (4-digit) controls for occupation and industry or controls for shutdown industries and ability to work from home (see Appendix Table A5). The widening disability gap is likely to arise from both demand and supply side influences and is not necessarily a signal of employer marginalisation, since disabled workers might have greater need to 'shield' or desire to avoid COVID-19-related health risks which are higher for those with underlying conditions. It is also possible that employers might have selectively used 'furlough' to retain those experiencing disability onset, particularly temporary disability. Nevertheless, the differential might have longer-term consequences for disability-related labour market inequality, not limited to disproportionate job losses following withdrawal of the JRS but through, for example, the impact on human capital accumulation and career progression. We additionally explored disability gaps in economic-related reasons for job loss and reductions in hours post-COVID-19 (see Appendix Table A4), and consistent with Table 3 our findings confirm a significant disability gap in being away from work for economic reasons, but not hours conditional on remaining in work. In terms of other outcomes, as expected, COVID-19 is associated with an increase in homeworking, but disabled workers experienced a much smaller increase (2 percentage points compared to 4 percentage points for non-disabled workers), albeit the difference is not significant among employees. The differential is also insignificant when the sample is restricted to those who remain in work, suggesting the disproportionate use of furlough likely contributes to the widening disability gap (results available upon request). Overall, therefore, there is no evidence that disabled workers have disproportionately worked from home during COVID-19. This is true after controlling for occupation which, as noted above, likely limited the increase among disabled workers. Average hourly wages have grown during COVID-19 at a similar rate for disabled and non-disabled employees (7 percent and 3 percent before and after adjusting for characteristics respectively), resulting in stability of the raw and adjusted DPG. This is despite the disability gap in the probability of furlough. Given the availability of data pre-COVID-19, we explore the extent to which changes estimated between 2019 and 2020 might reflect a continuation of a prior trend in Model (6). Disability differences in time trends are only statistically significant in the case of employment, and consistent with this, we find no significant change in the DEG during COVID-19 in this specification, suggesting the previous evidence of narrowing reflected continuation of pre-existing trends. The remaining findings of a widening disability gap in being away from work, a smaller increase in homeworking among disabled people and no change in the DPG are confirmed.
[Table 4 (COVID-19 labour market indicators, difference-in-difference estimates) notes: Authors' calculations based on the QLFS 2019 and 2020 (wave 5) (and in Model (6) QLFS 2013-2020 (wave 5)). The sample is the working-age population for employment, workers (employees and self-employed) for temporarily away and working at home, and employees for pay. Reference categories are non-disabled and pre-COVID-19. Robust standard errors in parentheses. *p < 0.10, **p < 0.05, ***p < 0.01. All models include a constant term. Work-related characteristics for employees (Model (5)) additionally include temporary employment and workplace size. Model (6) additionally controls for a time trend and a disability × time trend interaction. For pay, the sample is restricted to employees.]
In Table 5 we explore whether the changes post-COVID-19 exhibit heterogeneity by disability severity and type.
For conciseness, we present the most comprehensive specification with personal and (where relevant) work-related characteristics, including occupation and industry, but the key findings are not sensitive to this choice (see Appendix Table A6). In terms of severity, the findings confirm previous evidence of more substantial pre-pandemic 'gaps' for those with multiple health problems. In most cases the differential impact of COVID-19 is similar between single and multiple conditions, the main exception being that the DEG has narrowed exclusively among those with multiple health problems. In terms of type, the DEG, probability of being temporarily away from work and the DPG are wider pre-pandemic for those with mental health problems, but it is those with physical disabilities who appear to fare worse during COVID-19. There is no evidence of a reduction in the DEG among those with physical impairments, but there is evidence of an increase in being away from work and a relative reduction in the probability of homeworking. While the increase in furlough might reflect higher COVID-19-related health risks for those with physical impairments, the reduction in homeworking is more difficult to explain. Further analysis, which separates broad types of physical disabilities (see Appendix Table A7), suggests it is people with impairments relating to breathing and organs, who might be particularly at risk during COVID-19, who exhibit a differential labour market experience.

Conclusion

Using data from the largest household survey in the UK, this paper provides the first comprehensive analysis of the economic impact of COVID-19 on disabled people. It explores both pre-COVID-19 work-related risks and the impact of COVID-19 on disability labour market inequality. Importantly, the QLFS allows us to explore established measures of COVID-19-related impacts and disability inequality, and to use multivariate analysis to control for a rich set of personal and work-related factors, and pre-pandemic trends. In doing so, the analysis integrates and extends two distinct themes within the inequalities literature. First, it explores disability, neglected in existing economic analysis of inequality arising from COVID-19. Second, it extends the literature on disability-related labour market inequality to assess changes brought by COVID-19, a profound external health and economic shock. Based on pre-pandemic (2019) data, disabled workers are found to be at higher COVID-19 work-related economic and health risks. For example, disabled workers are 11 percent more likely than non-disabled workers to work in shutdown industries.
The higher risks are partly a function of differences in other personal characteristics, but a significant residual disability gap remains. Regardless of the underlying reason, the higher risks for disabled workers are of concern since they suggest a compounding effect of COVID-19 on health and labour market inequalities. Our analysis traces the latter. The role of occupational risks in explaining differential COVID-19 health impacts on disabled people remains an important question to be explored. By the end of 2020 we observe an impact of COVID-19 on employment, being temporarily away from work and homeworking. While there is limited impact on established measures of disability inequality, including the DEG and DPG, disabled people appear to be more likely to use the government JRS, with the rise in being temporarily away from work 40% greater among disabled workers. Importantly, this disability gap is evident among comparable workers and does not simply reflect differences in pre-COVID-19 risk factors. This difference is also evident if we define the reason for being temporarily away from work as economic, aligned to COVID-19 restrictions. Interestingly, the effect appears to operate through being completely rather than partially away from work, with disabled people remaining in work being no more likely to reduce their hours. It also appears to reflect changes for those with physical rather than mental health impairments, and particularly those with impairments relating to breathing and organs, a likely reflection of high COVID-19-related health risks. The higher probability of being away from work among disabled people might therefore reflect personal choice, the requirements of shielding, as well as employer-initiated protection or discrimination, and distinguishing between these is an important avenue for future research. The longer-term implications of this remain to be seen, but there is a clear risk that disabled workers will be disproportionately in jobs unsustainable in the absence of government support, albeit early evidence suggests this is far less than the number of people on furlough at the end of the JRS (ONS, 2021e). It is also possible that there is a longer-term scarring impact resulting from the depreciation of human and firm-specific capital, which may itself have differential effects by disability. Tracing the longer-term impact of COVID-19 and the future DEG and DPG is therefore critical. Related to this, several important questions remain, including the impact of COVID-19 on disability prevalence, as well as the differential impact of more permanent labour market changes brought by COVID-19. Indeed, there is a question as to whether, in highlighting the vulnerabilities of those with underlying health conditions, COVID-19 may have reinforced negative stereotypes relating to disabled workers (see Bui et al., 2020 for similar arguments relating to older workers). Conversely, and albeit not without risks, there are likely to be potential disproportionate benefits for disabled people of permanently higher rates of homeworking. Our evidence suggests these have not been realised during COVID-19, which raises questions about the impact of more permanent change. This, however, requires ongoing scrutiny, particularly given the imperfect nature of our measure of homeworking. Evidence of widening disparities for many protected groups during COVID-19 has focused attention on inequality. It is critical that disability is embedded within this and the current UK 'levelling up' policy agenda.
In this respect, future analysis of the impact of COVID-19 needs to explore disability gaps in broader measures including income and poverty, and health and wellbeing. Longitudinal data offers additional opportunities to explore the impact of COVID-19 on disability gaps in labour market entry and exit, including whether the impact of disability onset on job retention has changed. Of course, COVID-19 has also disrupted existing data collection, including the QLFS, and these findings remain to be explored with complementary data. Internationally, future research is also needed to assess the extent to which our findings are specific to the UK context and policy response, where the emphasis has been on protecting jobs.
[Table 5 notes: Authors' calculations based on the QLFS 2019 and 2020 (wave 5). The sample is the working-age population for employment, workers (employees and self-employed) for temporarily away and working at home, and employees for pay. Reference categories are non-disabled and pre-COVID-19. Robust standard errors in parentheses. *p < 0.10, **p < 0.05, ***p < 0.01. The specification includes personal and (where relevant) work-related characteristics, including occupation and industry.]

Declaration of competing interest

None.
2021-12-07T14:07:39.354Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "203b20d3748ae4543c60c168f0db1a0fb702abfe", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.socscimed.2021.114637", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "c513e66ed3a73e49787dd79fb3f86c6780df58a0", "s2fieldsofstudy": [ "Economics", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
260272835
pes2o/s2orc
v3-fos-license
Conspiratorial Ideation Is Associated with Lower Perceptions of Policy Effectiveness: Views from Local Governments during the COVID-19 Pandemic

Governments around the world struggled to formulate an effective response to the coronavirus disease 2019 pandemic, which was hampered by the widespread diffusion of various conspiracy theories about the virus. Local governments are often responsible for implementing mitigation measures such as mask mandates and curfews but have received very limited attention in the scholarly literature. In this article, the authors use data from local policy actors in Colorado to evaluate the relationship between conspiratorial beliefs and perceptions of mitigation policy effectiveness. The authors find that many local policy actors hold conspiratorial beliefs, which combine with partisanship to contribute to lower perceptions of policy effectiveness. The authors conclude by discussing future research directions, noting that the broad adoption of conspiracy theories likely changes enforcement at the local scale.

Governments around the world introduced measures to mitigate the spread of coronavirus disease 2019 (COVID-19) (known more colloquially as the "coronavirus"). These included mandating masks, social distancing, closing businesses or limiting their hours, and other measures. Although there is disagreement about the magnitude of their effectiveness, these efforts did reduce the spread of COVID-19 and saved lives (Goldstein, Levy Yeyati, and Sartorio 2021; Howard et al. 2021; Huang et al. 2021; Lyu and Wehby 2020; Trivedi and Das 2021). However, there were also nontrivial criticisms of and resistance to many of these measures. Some of these criticisms rested on the secondary effects of the mitigation efforts. For instance, limiting business hours (or occupancy) could significantly reduce revenues. Closing schools and transitioning to online learning is likely not as effective as in-person learning (Akpınar 2021; Armstrong-Mensah et al. 2020). Lockdowns contributed to job losses (Fazzari and Needler 2021). Social distancing likely deteriorated mental health (Kämpfen et al. 2020; Rodríguez-Fernández et al. 2021:19). Furthermore, the policy response to the pandemic likely engendered food and energy insecurity (Mayer and Ryder 2022; Pereira and Oliveira 2020). Thus, there are real and deleterious impacts of aggressive mitigation measures. Yet in addition to these critiques, some of the resistance to COVID-19 mitigation appears to have been motivated by partisanship and conspiracy theorizing. A range of conspiracy theories emerged, and some version of these conspiracy theories was adopted by many people in the United States. These include the belief that COVID-19 was a biological weapon created by the Chinese government, that it was manufactured for the sake of population control, or that vaccines contain mind-control microchips, among many others (Douglas 2021; Miller 2020; Uscinski et al. 2020). Belief in conspiracy theories has long been a common feature of American politics (Hofstadter 1964). Conspiratorial beliefs can have undesirable social consequences such as reducing prosocial behavior and eroding trust in experts (Jolley and Douglas 2014; Pummerer 2022; Van Prooijen, Spadaro, and Wang 2022). For instance, those who believe that climate change is a hoax are less likely to make efforts to reduce their carbon footprint (Pummerer 2022; Van der Linden 2015; Van Prooijen et al. 2022). Using an online panel for the general U.S. population, Pummerer et al.
(2022) found that belief in COVID-19 conspiracy theories is associated with less support for COVID-19 mitigation. A study using Amazon Mechanical Turk data reached similar conclusions (Imhoff and Lamberty 2020). Soon after the start of the pandemic, conservative U.S. media figures and politicians began to question the severity of COVID-19 and promulgate conspiracy theories (Bisbee and Lee 2022; Hamilton and Safford 2021; Hart, Chinn, and Soroka 2020), undoubtedly shifting attitudes among conservatives and Republicans. Existing research demonstrates that Republican party affiliation or conservative political ideology significantly reduces vaccination intentions or other efforts to address COVID-19 (Bruine de Bruin, Saw, and Goldman 2020; Callaghan et al. 2021; Kerr, Panagopoulos, and van der Linden 2021; Romer and Jamieson 2021). Yet conservatives might support some mitigation measures if they are not especially intrusive or punitive (Lyons and Fowler 2021). In the United States, measures such as mask mandates and curfews were typically instituted by state governments and relied upon local enforcement (Karch 2020). At the substate level, counties and municipal governments also instituted a variety of policy measures (Ebrahim et al. 2020). A large literature in political science points to the role of local governments and "street-level" bureaucrats in implementing policies that are instituted or enforced at the local scale (Lee and Park 2021; Meyers and Vorsanger 2007). This work implies that local actors do not necessarily enact policies in the way envisioned by higher level policy makers and in some cases exercise a great deal of discretion. Immigration policy is an example, where municipal governments do not fully cooperate with federal authorities (Blizzard and Johnston 2020; Ridgley 2008). A state government might institute a mask mandate, but the enforcement of this mandate occurs through the discretion of local actors. Business owners prefer for local law enforcement to enforce mask mandates, but law enforcement was often reluctant (Jacobs and Ohinmaa 2020). Thus, local policy actors used some discretion in how they enforced policies. Yet there is remarkably little direct research about local governments and COVID-19 mitigation measures. In particular, we do not know the extent of conspiratorial thinking among local policy actors, and how this may affect perceptions or actions around mitigation. Using survey data collected among local governments in Colorado, we evaluate how conspiracy theories about COVID-19 influence perceptions of the effectiveness of COVID-19 mitigation efforts.

Study Region

The present research is part of a larger project to understand the impacts of COVID-19 in the state of Colorado in the western United States. This project involved interviews with the public and policy makers, as well as a general population survey. Colorado has a large and growing economy and regularly is listed as one of the healthiest states in the United States. In 2020, during the height of the pandemic, the state government implemented a variety of measures, such as mask mandates, curfews, and closing of government offices. However, the governor also received both praise and criticism for deferring to local governments to decide on issues such as mask mandates and school closures as the pandemic wore on (Netsanet, Sempson, and Choe 2022). Thus, Colorado provides a compelling context to study perceptions of local policy actors because county and municipal governments were given such authority.
Data Collection

Sampling local governments presents several unique challenges. There are no third-party providers (e.g., Dynata, Qualtrics) that can provide low-cost and quick data. To circumvent this issue, we acquired a list of municipal, county, and other local governments from the state treasurer's office. We collected e-mail addresses from local government Web sites to develop a sample to distribute a survey hosted on the Qualtrics platform. Roughly 2 percent of local governments (typically small, rural locations) did not provide direct e-mail addresses to local policy actors but rather used online contact forms. In these cases we submitted the survey link and a short invitation via the online contact forms. Four local governments did not provide any contact information for staff or officials, only names. We attempted to locate administrative staff members (e.g., secretaries) who might be able to provide contact information, but those efforts were not successful. We sought to include any local policy actors that might have some role in enforcement and implementation. These included, but were not limited to, mayors and other elected boards, law enforcement, commissioners, court officials, and many others. We excluded local government employees who likely had little role in enforcement, such as groundskeepers. We acquired 3,310 total e-mail addresses. Among these, 552 e-mails were "bounced"; that is, they were marked as spam and could not reach the recipients. Another 27 were undeliverable, and 4 were duplicates (the same e-mail address was listed for multiple people). Thus, we proceeded with 2,727 valid e-mail addresses. We used seven contact attempts between July 13, 2021, and August 17, 2021, with a median completion time of 6.3 minutes. Four hundred twenty-nine policy actors began the survey, although the number of completions was lower at 202, for a completion rate of 41 percent. Nearly all (about 95 percent) of the incompletes occurred when a respondent navigated to the survey but did not answer a single question. We screened respondents for residence in Colorado, age over 18 years, and employment in local government. Using the most conservative response rate calculation (American Association for Public Opinion Research Response Rate 1), the response rate was 7.4 percent. Although other studies of local policy actors in the region have produced similar response rates (e.g., Mayer 2018), we suspect that several factors may have coalesced to lower the response rate. For one, some of the nonresponders were likely laid off from their local government positions, as governments were facing severe fiscal challenges. They may not have felt that they were eligible to participate. Second, a few respondents mentioned that they had already filled out COVID-19 surveys, although not ones related to their roles in local government. We suspect that research fatigue or even confusion among surveys might have eroded the response rate. Finally, during this time, many local governments were operating with few in-person staff members and had moved to virtual meetings. Some digital fatigue may have occurred, wherein potential respondents simply did not want to spend extra time on their devices.

Dependent Variable: Perceived Policy Effectiveness

Following other research (e.g., Maekelae et al. 2020), we sought to understand perceptions of the effectiveness of a variety of mitigation measures for COVID-19 ("we'd like you to think about how effective various efforts to combat the spread of COVID-19 have been").
These included curfews, stay-at-home orders, mask mandates in businesses, mask mandates in public spaces, temporarily closing nonessential businesses, temporarily closing schools, temperature and symptom checks before entering buildings, limiting social gatherings, signs encouraging sick people to stay home, signs encouraging handwashing, and vaccinations. Figure 1 shows the distribution of these items. Signs, masks, vaccinations, and limiting social gatherings all had more than 20 percent of respondents stating that they were "extremely effective," while business closures, school closures, symptom checks and curfews were seen as less effective. Notably, fewer than 5 percent of respondents stated that none of the mitigation efforts were effective. We next performed factor analysis on these items to understand their underlying dimensionality. To do so, we estimated a polychoric correlation matrix (a type of correlation coefficient for ordinal data) and extracted the factors using the iterated principal factors method with a varimax rotation (Holgado-Tello et al. 2010). The factor analysis (Appendix A) produced three factors that we call "general effectiveness," "mask mandates," and "information."

Conspiracy Theories

Borrowing from the prior literature on COVID-19 conspiracy theories, we asked the extent to which respondents agreed or disagreed with the following conspiratorial beliefs about the origins of COVID-19: COVID-19 is a myth to force vaccinations, there is no such thing as COVID-19, COVID-19 was created as part of a U.S. bioweapon program, COVID-19 was created as part of a Chinese bioweapons program, Big Pharma is deliberately encouraging the spread of COVID-19 to make profits, the government could cure COVID-19 but chooses not to, and COVID-19 escaped from a laboratory. Respondents could respond from "strongly disagree" through "strongly agree" with these items. Figure 2 implies that conspiratorial beliefs are not uncommon but certainly not prevalent among our respondents. For instance, 46.1 percent "strongly disagree" that COVID-19 was created by the Chinese government as part of a bioweapons program, and only 11.9 percent "strongly agree." Yet 15.7 percent "strongly agree" that COVID-19 escaped from a laboratory. Furthermore, only 18 percent of the sample replied "strongly disagree" for all the conspiracy questions. We used factor analysis to determine the dimensionality of these items, again using a polychoric correlation matrix with the iterated principal factors method for extraction and a varimax rotation; this analysis strongly pointed to a single-factor solution (Appendix B).

Partisanship and Control Variables

The perceived effectiveness of policies could depend upon respondents' own experiences with mitigation policies at the local scale. We asked respondents if their local government had implemented any of the following policies: curfews, stay-at-home orders, mask mandates for government buildings, mask mandates for public places, virtual public meetings, temporarily closing facilities, temporarily closing businesses, and limiting social gatherings. We then created a tally of "yes" responses to use as a control in our regression models (mean = 4.78, range = 0-8). Earlier we noted the partisan nature of resistance to COVID-19 mitigation. We asked respondents, "Politically, how do you identify?" with response categories ranging from "very conservative" to "very liberal."
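Returning briefly to the measurement step: the extraction and rotation procedure used for both factor analyses above (iterated principal-axis factoring of a correlation matrix, followed by a varimax rotation) can be sketched in a few lines of numpy. This is an illustration rather than the authors' code; the polychoric correlation matrix is assumed to have been estimated by a dedicated routine, and a Pearson matrix on simulated data stands in for it here.

import numpy as np

def iterated_principal_factors(R, n_factors, n_iter=200, tol=1e-6):
    """Principal-axis factoring with iterated communality estimates."""
    Rw = R.copy()
    # Initial communalities: squared multiple correlations.
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        np.fill_diagonal(Rw, h2)  # replace diagonal with communalities
        vals, vecs = np.linalg.eigh(Rw)
        order = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))
        h2_new = (loadings ** 2).sum(axis=1)
        if np.max(np.abs(h2_new - h2)) < tol:
            return loadings
        h2 = h2_new
    return loadings

def varimax(L, n_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix."""
    p, k = L.shape
    T = np.eye(k)
    obj = 0.0
    for _ in range(n_iter):
        Lr = L @ T
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p)
        )
        T = u @ vt
        if s.sum() < obj * (1 + tol):  # objective stopped improving
            break
        obj = s.sum()
    return L @ T

# Toy stand-in: 11 effectiveness items, 3 factors as in Appendix A.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))
R = np.corrcoef(X, rowvar=False)  # substitute for the polychoric matrix
rotated = varimax(iterated_principal_factors(R, n_factors=3))
print(np.round(rotated, 2))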
Thirty-five percent of the sample identified as "very conservative" or "somewhat conservative" and 33.9 percent as "very liberal" or "somewhat liberal." We also control for education using a seven-category variable that we recoded into two categories to represent those who had completed college and those who had not. We also control for respondents' household income (1 = less than $50,000, 2 = $50,000-$99,999, 3 = $100,000 or more). Given the relatively small sample size (n = 167 in the following regressions), we opted for few control variables but provide correlations between the outcomes and other sociodemographic variables in Appendix C.
[Figure 2 note: A = agree; D = disagree; Neither = neither agree nor disagree; SA = strongly agree; SD = strongly disagree.]

Regression Models

Our factor score for general effectiveness is continuous, so we turn to ordinary least squares regression to model the effects of the predictor variables on this outcome (Table 1). In the first model, we include all the variables described in the prior section except for political ideology. In model 2, we add political ideology and evaluate changes in the estimates of model 1. In model 1, the conspiracy factor score is statistically significant (b = −0.292, p = .007), and its effects are similar in model 2 with the inclusion of political ideology (b = −0.249, p = .013). Political ideology is significant, with "very conservative" respondents less likely to view policies as effective (b = −1.853, p = .000). The inclusion of political ideology has also markedly improved the R² value from .112 to .261. Local government mitigation efforts are associated with increased perceptions of effectiveness in both models (b = 0.104, p = .026 in both models), while income is nearly significant. But in both models, the effects of demographic factors appear to be relatively muted, a finding that is echoed in the additional models reported in Appendix D. Endorsement of conspiracy theories, experience with local mitigation efforts, and political ideology are the most consistent predictors. In Appendix D, we estimate regression models for the information and mask mandate factor scores, which show results that are in line with those from Table 1. We provide a series of robustness checks in Appendix E, which imply that the effects reported in Table 3 are relatively robust.

Discussion and Conclusion

Local governments were important actors during the height of the COVID-19 pandemic but have received limited attention in the literature. We find that a nontrivial portion of local policy actors subscribe to various conspiracy theories. These beliefs, coupled with political conservatism, are associated with lower perceptions of policy effectiveness. That is, prior research implies that conspiratorial beliefs lead to lower support for mitigation measures, and our work extends these findings to the domain of perceived policy effectiveness. Although further research is needed, our results imply that conspiracy theories might influence how policy is enacted at the local scale; policy actors who endorse conspiracy theories may exercise some discretion in how mitigation measures (e.g., mask mandates) are enforced. Future studies should link enforcement data with conspiracy theorization, or researchers could use field-based qualitative methods (e.g., participant observation) to observe the enforcement process during a future pandemic.
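The nested specifications reported in Table 1 above amount to comparing an OLS model without political ideology against one that adds it, and checking the change in R². A minimal self-contained sketch with statsmodels follows; the data are simulated and the column names are hypothetical, so the coefficients will not reproduce the paper's estimates.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 167  # analytic sample size reported above

# Simulated stand-ins for the survey variables.
df = pd.DataFrame({
    "effectiveness": rng.normal(size=n),       # factor score (outcome)
    "conspiracy": rng.normal(size=n),          # conspiracy factor score
    "n_policies": rng.integers(0, 9, size=n),  # tally of local measures
    "college": rng.integers(0, 2, size=n),
    "income": rng.integers(1, 4, size=n),
    "ideology": rng.integers(1, 6, size=n),    # very conservative .. very liberal
})

base = "effectiveness ~ conspiracy + n_policies + college + C(income)"
m1 = smf.ols(base, data=df).fit()                    # model 1
m2 = smf.ols(base + " + C(ideology)", data=df).fit() # model 2 adds ideology

# In the paper, adding ideology raised R-squared from .112 to .261.
print(round(m1.rsquared, 3), round(m2.rsquared, 3))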
Also, for any given conspiracy theory, some 10 percent to 22 percent of respondents neither agreed nor disagreed (i.e., perhaps they were undecided); enforcement could become more troublesome if these respondents move toward endorsing conspiracy theories. Notably, most of the conspiracy theories covered by our scale are related to the origins of COVID-19, not its effects. Conspiracy beliefs about the origins of COVID-19 could hypothetically coexist with a recognition that the virus is dangerous. However, our results imply that conspiratorial beliefs about the origins of COVID-19 reduce perceptions of effectiveness, possibly leading to less rigid enforcement of state mandates at the local scale. Those who do not believe the expert opinion on the origins of COVID-19 (i.e., that it originated in nature) appear to be less willing to stop the spread of COVID-19. Furthermore, the variation in agreement across the different types of conspiracy theories supports particularism, that is, the view that conspiracy theories cannot be lumped together but must be assessed on the basis of their own evidential merits (Dentith and Keeley 2018). Richards (2022) designed a framework for evaluating conspiracy theories as they range along detachment from reality and threat level. Yet other aspects, such as who gives attention to a particular theory, and where, may also be important. For example, the conspiracy theory with the least amount of disagreement and the most uncertainty was the "lab leak" theory (i.e., COVID-19 escaped from a lab). Unlike the idea that COVID-19 did not exist, some reputable, mainstream publications have engaged with the lab leak theory (i.e., NPR, the Washington Post, and Vanity Fair), and it was endorsed by the U.S. Department of Energy (albeit with a great deal of uncertainty) long after our data collection ended (Davis and Hawkins 2023). Presumably this increased the spread of the theory and lent it some credibility in early and mid-2021, just before we collected these data (Eban 2021; Farhi and Barr 2021; Ruwitch 2021). Thus, developing clearer understandings of the levels and variations of COVID-19 conspiracy theories, and of their differential detachment from reality and potential for harm, is an important area for future research. Finally, our present work suggests that more attention should be given to local governments and how they implement health-related policies, during pandemics or other times. Our work implies that conspiratorial beliefs, coupled with political conservatism, were potentially a barrier to an effective response to COVID-19 during the height of the pandemic. This could also persist in other domains; for example, perhaps climate change conspiracy theories hinder local efforts to promote renewable energy or other decarbonization measures, or conspiracy theories about election results could lead to local governments refusing to enforce election outcomes. Any of these are worrying possibilities in the current U.S. political climate.

Data Access Statement

The research data supporting this publication are available as supplementary information accompanying this publication.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was funded by a Rapid Response Grant from the Natural Hazards Center in Boulder, Colorado.

Supplemental Material

Supplemental material for this article is available online.
2023-07-29T15:08:59.526Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "63547f56273ae12c02ffa2169e285cffe7c3a51d", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f78b5fb0757d93981a74091afe0acf2930c68308", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
42983040
pes2o/s2orc
v3-fos-license
Mechanisms Responsible for the Promoter-Specific Effects of Myocardin.

Understanding the mechanism of smooth muscle cell (SMC) differentiation will provide the foundation for elucidating SMC-related diseases such as atherosclerosis, restenosis and asthma. Recent studies have demonstrated that the interaction of SRF with the co-activator myocardin is a critical determinant of smooth muscle development. It has been proposed that the specific transcriptional activation of smooth muscle-restricted genes, as opposed to other SRF-dependent genes, by myocardin results from the presence of multiple CArG boxes in smooth muscle genes that facilitate myocardin homodimer formation. This proposal was further tested in the current study. Our results show that the SMC-specific telokin promoter, which contains only a single CArG box, is strongly activated by myocardin. Furthermore, myocardin and a dimerization-defective mutant myocardin induce expression of endogenous telokin, but not c-fos, in 10T1/2 fibroblast cells. Knocking down myocardin by siRNA decreased telokin promoter activity and expression in A10 SMCs. A series of telokin and c-fos promoter chimeric and mutant reporter genes were generated to determine the mechanisms responsible for the promoter-specific effects of myocardin. Data from these experiments demonstrated that the Ets binding site (EBS) in the c-fos promoter partially blocks the activation of this promoter by myocardin. However, the binding of Ets factors alone was not sufficient to explain the promoter-specific effects of myocardin. Elements 3' of the CArG box in the c-fos promoter act in concert with the EBS to block the ability of myocardin to activate the promoter. Conversely, elements 5' and 3' of the CArG box in the telokin promoter act in concert with the CArG box to facilitate myocardin stimulation of the promoter. Together these data suggest that the promoter specificity of myocardin is dependent on complex combinatorial interactions of multiple cis elements and their trans binding factors.
Introduction

There is extensive evidence showing that altered control of the differentiated state of smooth muscle cells contributes to the development and/or progression of a variety of diseases, including atherosclerosis, hypertension and asthma. These diseases are all associated with decreased expression of proteins required for the differentiated function of smooth muscle cells. An understanding of the mechanisms that control smooth muscle cell differentiation is required before it will be possible to determine how these control processes are altered in pathological conditions.

upregulated following myocardin infection of rat aortic smooth muscle cells (12). In order to further investigate the mechanisms underlying the promoter-specific effects of myocardin we have compared the ability of myocardin to activate two single CArG box-containing genes, the smooth muscle-specific telokin gene and the widely expressed c-fos gene. Results demonstrate that myocardin and its dimerization-deficient LZ (leucine zipper) mutant are capable of strongly trans-activating the single CArG box-containing, smooth muscle-specific telokin promoter and of inducing telokin expression in 10T1/2 cells, although the myocardin LZ mutant is less effective than the wild-type myocardin. In contrast, myocardin had no effect on c-fos promoter activity or c-fos gene expression in 10T1/2 cells. Knocking down endogenous myocardin in SMCs by siRNA decreased telokin promoter activity and endogenous telokin expression. Analysis of a series of chimeric and mutant telokin and c-fos reporter genes demonstrated that in the c-fos promoter the Ets binding site (EBS), which binds ets factors, partially blocks the activation of this promoter by myocardin; however, an additional region between -300 and +39 is required to prevent myocardin activation of the c-fos promoter. Conversely, multiple cis-elements in the telokin promoter are required for maximal myocardin activation. We propose that the gene specificity of myocardin is dependent on combinations of multiple positive and negative cis elements and their trans binding factors.

QuickChange site-directed mutagenesis kit (Stratagene, La Jolla, CA) (6). All promoter reporter genes were constructed by cloning fragments of promoters into the pGL2B luciferase vector (Promega, Madison, WI). The mouse and rabbit telokin promoter-luciferase reporter genes used include nucleotides -190 to +181 (T370) and -256 to +147 (T400), respectively, of the telokin gene as described previously (13). The SM22α-luciferase reporter gene includes nucleotides -475 to +61 of mouse SM22α (14,15). The SM α-actin promoter fragment extended from nucleotide -2,555 to +2,813 (9) and the SM-MHC promoter from -4,200 to +11,600 (16). The Egr1 and c-fos luciferase reporter genes spanned from -637 to +79 and -605 to +39, respectively. The minimal TK promoter used comprised nucleotides -113 to +20 of the thymidine kinase gene. All mutant reporter gene constructs were initially generated in the pCR pBlunt vector (Invitrogen, Carlsbad, CA) using the QuickChange site-directed mutagenesis kit (Stratagene, La Jolla, CA) and then transferred to the pGL2B vector. The resultant plasmids were sequenced to verify the integrity of the insert. Transfection was carried out as previously described (17). The level of promoter activity was evaluated by measurement of the firefly luciferase relative to the internal control Renilla luciferase using the Dual Luciferase Assay System essentially as described by the manufacturer (Promega, Madison, WI).
A minimum of six independent transfections was performed and all assays were replicated at least twice. Results are reported as the mean ± SEM.

Reverse transcription coupled to PCR. Total RNA was isolated with TRIzol reagent (Invitrogen, Carlsbad, CA). A pair of unique primers for telokin was designed as sense 5'-GACACCGCCTGAGTCCAACCTCCG-3' and antisense 5'-

Results

Myocardin trans-activates the telokin promoter. In contrast to many smooth muscle Figure 3C). To confirm that telokin promoter activity is myocardin dependent in SMCs, plasmid-based myocardin siRNA or a scrambled siRNA control pShuttle plasmid were transiently co-transfected into A10 SMCs together with telokin promoter reporter genes and luciferase activity determined. As shown in Figure 3D, the activity of the rabbit telokin promoter, but not the thymidine kinase promoter, was significantly reduced to approximately 40% of control levels in A10 cells transfected with either 300 ng or 600 ng of myocardin siRNA plasmid. Maximal myocardin activity on the telokin promoter requires multiple cis-elements. As both telokin and c-fos promoters contain single CArG boxes, we determined whether the specific sequence of the CArG box within the telokin promoter contributes to the ability of myocardin to activate the promoter. Reporter genes were generated in which the telokin promoter CArG box was mutated to the c-fos gene CArG box sequence or the SM22α gene CArG-near sequence or to a sequence no longer able to bind SRF. These mutant reporter genes were co-transfected together with myocardin and luciferase activity determined (Figure 4A). Mutant telokin promoter reporter genes containing either a c-fos or SM22α CArG box were activated by myocardin similarly to the wild-type telokin promoter. As expected, a mutant telokin promoter that was unable to bind SRF showed no activation by myocardin, showing that the intact CArG box is critical for myocardin activation (Figure 4A). These data demonstrated that SRF binding to the CArG box is necessary for myocardin activation of the telokin promoter, but the sequence of the CArG box does not explain the ability of myocardin to activate the telokin promoter as opposed to the c-fos promoter. To define the minimal regions of the telokin promoter required for myocardin activation, the ability of myocardin to activate a series of deletion constructs was determined (Figure 4B). Results from this analysis suggest that the regions between -80 and -66 (an AT-rich region) and between +36 and +82 are important for myocardin activation. In contrast, deletion of residues -190 to -80 or +82 to +171 did not alter the ability of myocardin to activate the promoter, suggesting that these regions are not important for this effect. Deletion of the region from +36 to +82 or from -80 to -66 decreased the ability of myocardin to activate the promoter over 10- and 20-fold, respectively. These data demonstrated that the CArG box together with the regions from +36 to +82 and -80 to -66 are necessary for myocardin activation of the telokin promoter. To determine if these regions are sufficient to confer myocardin activation, the telokin CArG box, the -66 to -80 region (AT-rich region) and the +36 to +82 region were fused to a minimal TK promoter, alone or in combination. Each of these regions alone was not sufficient to confer a large amount of myocardin activation (Figure 4C).
Although the CArG element alone increased activation 11-fold, when all three elements were present the ability of myocardin to activate the minimal TK promoter was increased to 50-fold. These data suggest that multiple cis-elements of the telokin promoter are necessary and largely sufficient to confer maximal activation by myocardin. The ets binding site (EBS) in the c-fos promoter partially blocks myocardin activation. It has been reported that the SRF binding affinity of the c-fos CArG box is higher than that of the SM22α CArG boxes and that the variations among the CArG boxes of c-fos and SM22α influence cell-type specificity of expression (19). To determine if the specific sequence of the c-fos CArG box is important for the lack of response of this promoter to myocardin, the CArG box was mutated to the telokin CArG box sequence or to a sequence unable to bind SRF. Analysis of these mutant reporter genes demonstrated that c-fos promoters containing either the native or telokin CArG box sequence were poorly activated by myocardin (Figure 5B) and, as expected, a mutant c-fos promoter that was unable to bind SRF showed no activation by myocardin. These data, together with those obtained from the mutant telokin promoters described in Figure 4A, suggest that the precise sequence of the CArG boxes in the c-fos and telokin promoters does not account for the promoter-specific effects of myocardin. It has recently been reported that growth signals can repress smooth muscle-specific genes by triggering the displacement of myocardin from SRF by ELK1, an ets family member that competes for the myocardin docking site on SRF through a structurally related SRF-binding motif (20,21). Although the telokin promoter lacks a consensus ets binding site adjacent to the CArG box, sequence alignment between the telokin and c-fos promoters revealed a significant degree of sequence similarity in this AT-rich region (Figure 5A). This sequence similarity allowed us to determine if the sequences immediately 5' of the CArG boxes are important for the promoter-specific effects of myocardin. When the EBS region in the c-fos promoter was mutated to the corresponding sequence in the telokin promoter, the mutant c-fos promoter remained largely refractory to myocardin activation (activation increased from 4-fold to 10-fold, Figure 5B). In addition, when the corresponding region of the telokin promoter was mutated to match the EBS and surrounding nucleotides of the c-fos promoter, this did not prevent myocardin from activating the promoter (Figure 5C). Similar results were obtained when the CArG box sequences were also switched in conjunction with the AT-rich/EBS sequences (Figure 5C). These data suggest that there are additional regulatory regions within the c-fos promoter that prevent myocardin activation of the promoter. To begin to identify these regions, a truncated promoter (-324 to +39) was generated in which the region 5' of the ets binding site was deleted. This truncated construct had myocardin activation similar to the wild-type c-fos promoter (Figure 6B). In addition, changing the EBS and CArG box from the c-fos promoter to the corresponding telokin promoter sequences, within the context of this truncated promoter, had no further effect on myocardin activation (Figure 5B). In a reciprocal experiment, changing the AT-rich region and CArG box of the telokin promoter to the corresponding sequences in the c-fos promoter, within the context of a -80 to +82 minimal telokin promoter, did not prevent myocardin from activating the telokin promoter (Figure 5C).
Together these data suggest that sequences in the c-fos promoter between -300 and +39 and between -55 and +82 of the telokin promoter are responsible for the promoter-specific effects of myocardin on these two genes. To determine if the telokin +36 to +82 region is sufficient to confer myocardin responsiveness to the c-fos gene, this fragment was added to the c-fos promoter and the ability of myocardin to activate the promoter was determined. This chimeric promoter showed only a small increase in myocardin activation, to 8-fold, compared to the 4-fold activation of the wild-type c-fos promoter (Figure 5D). When the +36 to +82 region was added in combination with the telokin AT-CArG sequence, no further activation of the promoter was observed. These data imply that the positive elements within the telokin promoter are not able to override the negative elements located between -300 and +39 of the c-fos promoter. Discussion Our data demonstrate that myocardin increases telokin expression through a CArG-dependent mechanism that requires the cooperative activity of multiple cis-acting regulatory elements. Conversely, the inability of myocardin to activate the growth factor-responsive c-fos gene appears to result from the lack of these key cooperative positive regulatory elements together with the presence of multiple negative elements that help prevent myocardin's activation of the promoter. It has been proposed that the ability of myocardin to specifically activate cardiac and smooth muscle-specific genes is dependent on cooperative interaction of pairs of CArG boxes. This would explain why growth-regulated genes such as c-fos, which contain a single CArG box, are not activated by myocardin (6). However, another early growth response gene, Egr-1, which has five CArG boxes located in its 5' flanking promoter sequence, was not activated by myocardin (Figure 1A) or MRTF-A (7,8). In addition, the proximal SM α-actin and SM myosin heavy chain promoters, each of which contains two CArG boxes, are not sufficient to drive SMC-specific transgene expression (9,10), whereas the telokin promoter, which contains only one CArG box, is sufficient to drive SMC-specific expression in transgenic mice (22). In the current study, we have further shown that the telokin promoter is strongly activated by myocardin (Figures 1 and 4A), that myocardin can activate endogenous telokin expression in 10T1/2 cells (Figure 2) and that knocking down endogenous myocardin in SMCs decreases telokin expression (Figure 3). Furthermore, although a myocardin LZ mutant, which is not able to dimerize, activated the telokin promoter (Figure 2D) and induced endogenous telokin expression in 10T1/2 fibroblast cells (Figure 2A,C), it did so much less effectively than wild-type myocardin. These data suggest that a myocardin monomer is sufficient to induce telokin and other smooth muscle-specific gene expression when expressed at high levels. However, at more physiological levels of expression it is likely that the ability of myocardin to dimerize is important for its ability to activate smooth muscle genes, including those, such as the telokin gene, that contain only a single CArG box in their promoter regions. Taken together, these data would suggest that the ability of paired CArG box elements to promote myocardin dimerization is not sufficient to account for the smooth and cardiac muscle-specific effects of myocardin.
Although siRNA-mediated knockdown of myocardin resulted in decreased telokin, SM22α and calponin expression in A10 cells, no changes in the level of expression of SM α-actin or the 130 kDa MLCK were observed (Figure 3). These latter findings are puzzling in light of our data (Figure 2) and those of others showing that myocardin induces expression of SM α-actin and the 130 kDa MLCK in 10T1/2 cells. At least one explanation for this apparent discrepancy could be that in A10 cells much of the expression of SM α-actin and the 130 kDa MLCK occurs through myocardin-independent mechanisms. This is particularly likely for these two proteins, as expression of neither protein is restricted to smooth muscle cells. For example, SM α-actin is expressed in skeletal muscle myoblasts that do not express myocardin, and the 130 kDa MLCK is expressed in most adult cell types (23,24). Together these data suggest that the expression of SM α-actin and the 130 kDa MLCK may occur by myocardin-dependent pathways in some cell types and by myocardin-independent pathways in other cells that do not express myocardin. Consistent with previous reports (2-4), our results demonstrate that an intact CArG element is required but not sufficient for telokin promoter activation by myocardin (Figure 4). Although essential for myocardin activation, the precise sequence of the CArG box has little effect on the ability of myocardin to activate the promoter. Within the telokin promoter, at least two additional regions (-80 to -66 and +36 to +82) are required to act in concert with the CArG box to facilitate high levels of promoter activation by myocardin (Figure 4). Although this combination of elements is sufficient to confer a significant amount of myocardin activation to a minimal thymidine kinase promoter, these elements are not sufficient to confer increased myocardin responsiveness to the c-fos promoter (Figure 5D). These data would suggest that, in addition to lacking key positive-acting cis-regulatory elements, the c-fos promoter also contains a negative regulatory region located between nucleotides -300 and +39 that blocks the activity of these positive elements. Based on a recent report demonstrating that growth signals can repress smooth muscle-specific genes by triggering the displacement of myocardin from SRF by ELK1, an ets family protein, it is logical to propose that the inability of myocardin to activate the c-fos promoter is likely due to binding of ets factors to the serum response element of the c-fos promoter (20,21). Similarly, there are multiple ets binding sites surrounding the CArG boxes in the Egr-1 promoter that may be responsible for the inability of myocardin to activate this promoter despite the presence of multiple CArG boxes (25). However, although our data demonstrated that the ets binding site in the c-fos promoter is partially involved in inhibiting the activation of this promoter by myocardin, when the ets binding site and CArG box in the c-fos promoter were replaced with the corresponding sequences from the telokin promoter, the mutant c-fos promoter remained refractory to myocardin stimulation (Figure 5). Conversely, when the AT-rich region and CArG box from the telokin promoter were replaced with the ets binding site and CArG box from the c-fos promoter, the mutant telokin promoter remained strongly activated by myocardin.
Together these data would suggest that the regions 3' of the CArG boxes of the c-fos (-300 to +39) and telokin (-55 to +82) promoters, rather than the EBS/AT-rich sequences or CArG boxes, are critical for determining the promoters' responsiveness to myocardin. Presumably each region binds specific trans-acting factors that interact with myocardin and/or SRF to modify their function. The -55 to +82 region in the telokin gene is highly conserved between species, being 90% identical among the mouse, rabbit and human genes, further suggesting that this region contains important regulatory elements (26). The identity of the key regulatory factors is currently unknown; however, it is tempting to speculate that these factors may comprise part of the transcription initiation complex that forms on each gene. This proposal arises from the observation that the putative regulatory regions span the transcription initiation sites and that the telokin and c-fos promoters utilize different cis-elements to initiate transcription. The telokin promoter initiates transcription from multiple start sites spanning approximately 70-80 bp in a TATA-independent manner, whereas the c-fos gene is a classical TATA-dependent gene with a single transcription initiation site. Although the core components of the transcription initiation complex will be identical on both genes, additional accessory factors may be promoter-specific. This raises the possibility that specific components of the transcription initiation complexes may be required for myocardin to strongly activate transcription. In summary, our results suggest that the promoter-specific effects of myocardin, and likely also MRTF-A, result from a complex interaction of positive regulatory elements that include at least one CArG box in responsive genes together with negative regulatory elements in unresponsive genes that block the activity of myocardin family members.
2018-04-03T05:29:00.608Z
2005-03-18T00:00:00.000
{ "year": 2005, "sha1": "999ab8b3fa54303023f71538ef82e36d251f7310", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/280/11/10861.full.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "4d47b3728bb29dd0eec6c914b10d8449b54ffb70", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
246473432
pes2o/s2orc
v3-fos-license
Robustness and Consistency in Linear Quadratic Control with Untrusted Predictions We study the problem of learning-augmented predictive linear quadratic control. Our goal is to design a controller that balances \textit{"consistency"}, which measures the competitive ratio when predictions are accurate, and \textit{"robustness"}, which bounds the competitive ratio when predictions are inaccurate. We propose a novel $\lambda$-confident policy and provide a competitive ratio upper bound that depends on a trust parameter $\lambda\in [0,1]$ set based on the confidence in the predictions and some prediction error $\varepsilon$. Motivated by online learning methods, we design a self-tuning policy that adaptively learns the trust parameter $\lambda$ with a competitive ratio that depends on $\varepsilon$ and the variation of system perturbations and predictions. We show that its competitive ratio is bounded from above by $1+O(\varepsilon)/(\Theta(1)+\Theta(\varepsilon))+O(\mu_{\mathsf{Var}})$ where $\mu_\mathsf{Var}$ measures the variation of perturbations and predictions. This implies that when the variations of perturbations and predictions are small, by automatically adjusting the trust parameter online, the self-tuning scheme ensures a competitive ratio that does not scale up with the prediction error $\varepsilon$. Can such adversarial guarantees be provided for control policies that use black-box AI predictions? To provide adversarial guarantees necessarily means not precisely following the black-box AI predictions. Thus, there must be a trade-off between the performance in the typical case (consistency) and the quality of the adversarial guarantee (robustness). Trade-offs between consistency and robustness have received considerable attention in recent years in the online algorithms community, starting with the work of [20], but ours is the first work in the context of control. Contributions. In this paper, we answer the question above in the affirmative, in the context of linear quadratic control, providing a novel algorithm that trades off consistency and robustness to provide adversarial guarantees on the use of untrusted predictions. Our first result provides a novel online control algorithm, termed λ-confident control, that provides a competitive ratio of 1 + min{O(λ^2 ε) + O((1 − λ)^2), O(1) + O(λ^2)}, where λ ∈ [0, 1] is a trust parameter set based on the confidence in the predictions, and ε is the prediction error (Theorem 2.2). When the predictions are accurate (ε ≈ 0), setting λ close to 1 will obtain a competitive ratio close to 1, and hence the power of the predictions is fully utilized; on the other hand, when the predictions are inaccurate (ε very large), setting λ ≈ 0 will still guarantee a constant competitive ratio, meaning the algorithm will still have good robustness guarantees when the predictions turn out to be bad. Therefore, our approach can get the best of both worlds, effectively using black-box predictions but still guaranteeing robustness. The above discussion highlights that the optimal choice of λ depends on the prediction error, which may not be known a priori. Therefore, we further provide an adaptive, self-tuning learning policy (Algorithm 3) that selects λ so as to learn the optimal parameter for the actual prediction error, thereby selecting the optimal balance between robustness and consistency.
Our main result proves that the self-tuning policy maintains a competitive ratio that remains bounded regardless of the prediction error ε (Theorem 3.1); this result is summarized informally below. This result provides a worst-case performance bound for the use of untrusted predictions, e.g., the predictions from a black-box AI tool, regardless of the accuracy of the predictions. The second term in the competitive ratio upper bound indicates a nontrivial non-linear dependency of CR(ε) on the prediction error ε, matching our experimental results shown in Section 4. The third term measures the variation of perturbations and predictions. Such a term is common in regret analysis based on the "Follow The Leader" (FTL) approach [14,15]. For example, the regret analysis of the Follow the Optimal Steady State (FOSS) method in [19] contains a similar "path length" term that captures the variation of the state trajectory. Proving our main result is complex due to the fact that, unlike in classical online learning models, the cost function in our problem depends on previous actions via a linear dynamical system (see (1)). The time coupling can even be exponentially large if the dynamical system is unstable. To tackle this time-coupling structure, we develop a new proof technique that relates the regret and competitive ratio to the convergence rate of the trust parameter. Finally, in Section 4 we demonstrate the effectiveness of our self-tuning approach using three examples: a robotic tracking problem, an adaptive battery-buffered EV charging problem and the Cart-Pole problem. For the robotic tracking and adaptive battery-buffered EV charging cases, we illustrate that the competitive ratio of the self-tuning policy performs nearly as well as the lower envelope formed by picking multiple trust parameters optimally offline. We also validate the practicality of our self-tuning policy by showing that it works well not only for linear quadratic control problems but also in the nonlinear Cart-Pole problem. Preliminaries We consider a Linear Quadratic Control (LQC) model. Throughout this paper, ‖·‖ denotes the ℓ2-norm for vectors and the matrix norm induced by the ℓ2-norm. Denote by x_t ∈ R^n and u_t ∈ R^m the system state and action at each time t. We consider a linear dynamical system with adversarial perturbations, x_{t+1} = Ax_t + Bu_t + w_t, for t = 0, . . . , T − 1, (1) where A ∈ R^{n×n} and B ∈ R^{n×m}, and w_t ∈ R^n denotes some unknown perturbation chosen adversarially. We make the standard assumption that the pair (A, B) is stabilizable. Without loss of generality, we also assume the system is initialized with some fixed x_0 ∈ R^n. The goal of control is to minimize the following quadratic cost, given matrices A, B, Q, R: J = Σ_{t=0}^{T−1} (x_t^T Q x_t + u_t^T R u_t) + x_T^T P x_T, where Q, R ≻ 0 are positive definite matrices, and P is the solution of the discrete algebraic Riccati equation (DARE) P = A^T P A − A^T P B(R + B^T P B)^{−1} B^T P A + Q, which must exist because (A, B) is stabilizable and Q, R ≻ 0 [13]. Given P, we can define K := (R + B^T P B)^{−1} B^T P A as the optimal LQC controller in the case of no disturbance (w_t = 0). Further, let F := A − BK be the closed-loop system matrix when using u_t = −Kx_t as the controller. By [13], F must have a spectral radius ρ(F) less than 1. Therefore, Gelfand's formula implies that there must exist constants C > 0 and ρ ∈ (0, 1) such that ‖F^t‖ ≤ Cρ^t for all t ≥ 0. Our model is a classical control model [16] and has broad applicability across various engineering fields.
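As a concrete illustration of these preliminaries, the sketch below computes P, K and F with SciPy's DARE solver and checks that ρ(F) < 1. The double-integrator example at the bottom is an arbitrary choice for demonstration, not a system from the paper:

```python
# Sketch: DARE solution P, gain K = (R + B'PB)^{-1} B'PA, closed loop F = A - BK.
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_setup(A, B, Q, R):
    P = solve_discrete_are(A, B, Q, R)                 # solves the DARE
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal LQC gain
    F = A - B @ K                                      # closed-loop matrix
    rho = max(abs(np.linalg.eigvals(F)))               # spectral radius, must be < 1
    return P, K, F, rho

# Arbitrary stabilizable example: a discrete-time double integrator.
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [0.2]])
P, K, F, rho = lqr_setup(A, B, np.eye(2), np.eye(1))
assert rho < 1.0
```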
In the following, we introduce ML/AI predictions into the classical model and study the trade-off between consistency and robustness in this model for the first time. Untrusted Predictions Our focus is on predictive control and we assume that, at the beginning of the control process, a sequence of predictions of the disturbances (ŵ_0, . . . , ŵ_{T−1}) is given to the decision maker. At time t, the decision maker observes x_t, w_{t−1} and picks a decision u_t. Then, the environment picks w_t, and the system transitions to the next step according to (1). We emphasize that, at time t, the decision maker has no access to (w_t, . . . , w_{T−1}) and their values may be different from the predictions (ŵ_t, . . . , ŵ_{T−1}). Also, note that w_t can be adversarially chosen at each time t, adaptively. The assumption that a sequence of predictions (ŵ_t, . . . , ŵ_{T−1}) is available is not as strong as it may first appear, nor as strong as other similar assumptions made in the literature, e.g., [29,19], because we allow for prediction error. If there are no predictions or only a subset of predictions are available, we can simply set the unknown predictions to be zero, and this does not affect our theoretical results and algorithms. In our model, there are two types of uncertainty. The first is caused by the perturbations, because the future perturbations (w_t, . . . , w_{T−1}) are unknown to the controller at time t. The second is the prediction error due to the mismatch e_t := w_t − ŵ_t between the perturbation w_t and the prediction ŵ_t at each time. Formally, we define the prediction error as ε(F, P, e_0, . . . , e_{T−1}) := Σ_{t=0}^{T−1} ‖Σ_{τ=t}^{T−1} (F^T)^{τ−t} P e_τ‖^2. (2) Notice that the prediction error is not defined as a form of classical mean squared error for our problem. The reason is that the mismatch e_t at each time has a different impact on the system. Writing the prediction error as in (2) simplifies our analysis. In fact, if we define ū_t and û_t as the two actions given by an optimal linear controller (formally defined in Section 1.3.2) as if the true perturbations were w_0, . . . , w_{T−1} and ŵ_0, . . . , ŵ_{T−1}, respectively, then it can be verified that ε = Σ_{t=0}^{T−1} ‖ū_t − û_t‖^2, which is the accumulated action mismatch for optimal linear controllers provided with different estimates of the perturbations. In Section 4, using experiments, we show that the competitive ratios (with a fixed "trust parameter", defined in Section 2.2) grow linearly in the prediction error ε defined in (2). Finally, we assume that the perturbations (w_0, . . . , w_{T−1}) and predictions (ŵ_0, . . . , ŵ_{T−1}) are uniformly bounded, i.e., there exist constants w̄ > 0 and w̄' > 0 such that ‖w_t‖ ≤ w̄ and ‖ŵ_t‖ ≤ w̄' for all 0 ≤ t ≤ T − 1. In summary, Figure 1 demonstrates the system model considered in this paper.
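Under the reconstruction of (2) above, ε can be computed with a backward recursion, since the inner sum satisfies ζ_t = P e_t + F^T ζ_{t+1}. A minimal sketch (the function name is ours):

```python
import numpy as np

def prediction_error(F, P, errors):
    """Evaluate (2): sum_t || sum_{tau>=t} (F^T)^(tau-t) P e_tau ||^2."""
    eps = 0.0
    tail = np.zeros(F.shape[0])
    for e_t in reversed(errors):       # zeta_t = P e_t + F^T zeta_{t+1}
        tail = P @ e_t + F.T @ tail
        eps += float(tail @ tail)      # accumulate ||zeta_t||^2
    return eps
```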
Defining Consistency and Robustness As discussed in the introduction, while predictions can be helpful, inaccurate predictions can lead to an unbounded competitive ratio. Our goal is to utilize predictions to achieve good performance (consistency) while still providing adversarial worst-case guarantees (robustness). In this subsection, we formally define the notions of consistency and robustness we study. These notions have received increasing attention recently in the area of online algorithms with untrusted advice, e.g., [4,22,27,7,6,5]. We use the competitive ratio to measure the performance of an online control policy and quantify its robustness and consistency. Specifically, let OPT be the offline optimal cost when all the disturbances (w_t)_{t=0}^{T−1} are known, and ALG be the cost achieved by an online algorithm. Throughout this paper we assume OPT > 0. We define the competitive ratio for a given bound on the prediction error ε as follows. Definition 1.1. Given ε ≥ 0, the competitive ratio is CR(ε) := sup ALG/OPT, where the supremum is taken over all perturbation and prediction sequences whose prediction error is at most ε. Building on the definition of competitive ratio, we define robustness and consistency as follows. Definition 1.2. An online algorithm is said to be γ-robust if, for any prediction error ε > 0, the competitive ratio satisfies CR(ε) ≤ γ, and an algorithm is said to be β-consistent if the competitive ratio satisfies CR(0) ≤ β. Background: Existing Algorithms Before proceeding to our algorithm and its analysis, we first introduce two extreme algorithm choices that have been studied previously: a myopic policy that we refer to as 1-confident control, which places full trust in the predictions, and a pure online strategy that we refer to as 0-confident control, which places no trust in the predictions. These represent algorithms that can achieve consistency and robustness individually, but cannot achieve consistency and robustness simultaneously. The key challenge of this paper is to understand how to integrate ideas such as what follows into an algorithm that achieves consistency and robustness simultaneously. A Consistent Algorithm: 1-Confident Control. A simple way to achieve consistency is to put full faith in the untrusted predictions. In particular, if the algorithm trusts the untrusted predictions and follows them, the performance will always be optimal if the predictions are accurate. We refer to this as the 1-confident policy, which is defined by a finite-time optimal control problem that trusts that (ŵ_0, . . . , ŵ_{T−1}) are the true disturbances. Formally, at time step t, the actions (u_t, . . . , u_{T−1}) are computed via argmin_{(u_t,...,u_{T−1})} Σ_{τ=t}^{T−1} (x_τ^T Q x_τ + u_τ^T R u_τ) + x_T^T P x_T, (3) subject to the dynamics (1) with each w_τ replaced by its prediction ŵ_τ for all τ = t, . . . , T − 1. With the obtained solution (u_t, . . . , u_{T−1}), the control action u_t at time t is fixed to be u_t and the other actions (u_{t+1}, . . . , u_{T−1}) are discarded. We highlight the following result (Theorem 3.2 in [29]) that provides an explicit expression for the algorithm in (3), which can be viewed as a form of Model Predictive Control (MPC): u_t = −Kx_t − (R + B^T P B)^{−1} B^T Σ_{τ=t}^{T−1} (F^T)^{τ−t} P ŵ_τ. (4) It is clear that this controller (3) (or equivalently (4)) achieves 1-consistency because, when the prediction errors are 0, the control action from (3) (and the state trajectory) will be exactly the same as the offline optimal. However, this approach is not robust, and one can show that prediction errors can lead to unbounded competitive ratios. In the next subsection, we introduce a robust (but not consistent) controller. A Robust Algorithm: 0-Confident Control. On the other extreme, a natural way to be robust is to ignore the untrusted predictions entirely, i.e., place no confidence in the predictions. The 0-confident policy does exactly this. It places no trust in the predictions and synthesizes the controller by assuming w_t = 0. Formally, the policy is given by u_t = −Kx_t. (5) This recovers the optimal pure online policy in classical linear control theory [3]. As shown by [29], this controller has a constant competitive ratio and is therefore O(1)-robust. However, this approach is not consistent as it does not utilize the predictions at all. In the next section, we discuss our proposed approach, which achieves both consistency and robustness.
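A sketch of the two extreme policies follows, using the reconstructed forms (4) and (5) above and the `lqr_setup` helper from the earlier sketch; the function names are ours:

```python
import numpy as np

def zero_confident(K, x):
    return -K @ x                                        # policy (5)

def one_confident(B, P, K, F, R, x, w_hat, t):
    # Builds sum_{tau=t}^{T-1} (F^T)^(tau-t) P w_hat_tau backwards, then applies (4).
    corr = np.zeros(x.shape[0])
    for tau in reversed(range(t, len(w_hat))):
        corr = P @ w_hat[tau] + F.T @ corr
    return -K @ x - np.linalg.solve(R + B.T @ P @ B, B.T @ corr)
```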
Consistent and Robust Control The goal of this paper is to develop a controller that performs near-optimally when predictions are accurate (consistency) and meanwhile is robust when the prediction error is large. As discussed in the previous section, a myopic, 1-confident controller that puts full trust in the predictions is consistent, but not robust. On the other hand, any purely online 0-confident policy that ignores predictions is robust but not consistent. The algorithms we present trade off between these extremes by including a "confidence/trust level" for the predictions. The algorithm design challenge is to determine the right way to balance these extremes. In the first (warmup) algorithm, the policy starts out confident in the predictions, but when a threshold of error is observed, the policy loses confidence and begins to ignore predictions. This simple threshold-based policy highlights that it is possible for a policy to be both robust and consistent. However, the result also highlights the weakness of the standard notions of robustness and consistency, since the policy cannot make use of intermediate-quality predictions and only performs well in the extreme cases when predictions are either perfect or poor. Thus, we move to considering a different approach, which we term λ-confident control. This algorithm selects a confidence level λ that serves as a weight for a linear combination between purely myopic 1-confident control and purely online 0-confident control. Our main result shows that this policy provides a smooth trade-off between robustness and consistency and, further, in Section 3, we show that the confidence level λ can be learned online adaptively so as to achieve consistency and robustness without exogenously specifying a trust level.
Algorithm 1: Threshold-based Control
Initialize δ = 0
for t = 0, . . . , T − 1 do
  if δ < σ then compute u_t with the 1-confident update (4)
  else compute u_t with the best myopic online algorithm A_Online without predictions
  end
  Update x_{t+1} = Ax_t + Bu_t + w_t and δ ← δ + ‖w_t − ŵ_t‖
end
Warmup: Threshold-based control We begin by presenting a simple threshold-based algorithm that can be both robust and consistent, though it does not perform well for predictions of intermediate quality. This distinction highlights that looking beyond the classical narrow definitions of robustness and consistency is important when evaluating algorithms. The threshold-based algorithm is described in Algorithm 1. It works by trusting predictions (using the 1-confident control update (4)) until a certain error threshold σ > 0 is crossed and then ignoring predictions (using an online algorithm A_Online that attains the minimal competitive ratio C_min among all online algorithms that do not use predictions). The following result shows that, with a small enough threshold, this algorithm is both robust and consistent because, if predictions are perfect, it trusts them entirely, but if there is an error, it immediately begins to ignore predictions and matches the 0-confident controller performance, which is optimal. A proof can be found in Appendix D. Theorem 2.1. There exists a threshold parameter σ > 0 such that Algorithm 1 is 1-consistent and (C_min + o(1))-robust, where C_min is the minimal competitive ratio of any pure online algorithm. The term o(1) in Theorem 2.1 converges to 0 as T → ∞.
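A minimal sketch of Algorithm 1, reusing `one_confident` from the earlier sketch; `online_action` is a placeholder standing in for the best pure online algorithm A_Online:

```python
import numpy as np

def run_threshold_control(A, B, P, K, F, R, x0, w_hat, w_seq, sigma, online_action):
    x, delta, history = x0, 0.0, []
    for t, w_t in enumerate(w_seq):
        if delta < sigma:                              # still trust predictions
            u = one_confident(B, P, K, F, R, x, w_hat, t)
        else:                                          # threshold crossed: go online
            u = online_action(x)
        history.append((x.copy(), u))
        delta += float(np.linalg.norm(w_t - w_hat[t])) # delta += ||w_t - w_hat_t||
        x = A @ x + B @ u + w_t                        # dynamics (1)
    return history
```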
While Algorithm 1 is optimally robust and consistent, it is unsatisfying because it does not improve over the online algorithm unless predictions are perfect: in the proof, the threshold parameter σ > 0 is set arbitrarily small to make the algorithm robust and 1-consistent, and the definitions of consistency and robustness only capture the behavior of the competitive ratio CR(ε) when either ε = 0 or ε is large. As a result, in the remainder of the paper we look beyond the extreme cases and prove results that apply for arbitrary prediction error quality. In particular, we prove competitive ratio bounds that hold for arbitrary ε, of which consistency and robustness are then special cases. λ-confident control We now present our main results, which focus on a policy that, like Algorithm 1, looks to find a balance between the two extreme cases of 1-confident and 0-confident control. However, instead of using a threshold to decide when to swap between them, the λ-confident controller considers a linear combination of the two. Specifically, the policy presented in Algorithm 2 works as follows. Given a trust parameter 0 ≤ λ ≤ 1, it implements a linear combination of (4) and (5). Intuitively, the selection of λ allows a trade-off between consistency and robustness based on the extent to which the predictions are trusted. Our main result shows a competitive ratio bound that is consistent with this intuition. A proof is given in Appendix B.
Algorithm 2: λ-confident Control
for t = 0, . . . , T − 1 do
  Take u_t = −Kx_t − λ(R + B^T P B)^{−1} B^T Σ_{τ=t}^{T−1} (F^T)^{τ−t} P ŵ_τ
  Update x_{t+1} = Ax_t + Bu_t + w_t
end
Theorem 2.2. Under our model assumptions, with a fixed trust parameter λ > 0, the λ-confident control in Algorithm 2 has a worst-case competitive ratio of at most 1 + min{O(λ^2 ε + (1 − λ)^2)/OPT, O(1) + O(λ^2)}. (6) From this result we see that λ-confident control is guaranteed to be (1 + ‖H‖(1 − λ)^2 C)-consistent. This highlights a trade-off between consistency and robustness such that if a large λ is used (i.e., predictions are trusted), then consistency decreases to 1, while the robustness increases unboundedly. In contrast, when a small λ is used (i.e., predictions are distrusted), the robustness of the policy converges to the optimal value, but the consistency does not improve on the robustness value. Due to the time-coupling structure in the control system, the mismatches e_t = w_t − ŵ_t at different times contribute unequally to the system. As a result, the prediction error ε in (2) and (7) is defined as a weighted quadratic sum of (e_0, . . . , e_{T−1}). Moreover, the term OPT in (6) is common in the robustness and consistency analysis of online algorithms, such as [22,20,7,5].
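A sketch of Algorithm 2 as a convex combination of the two extremes above, reusing the earlier helpers; because both share the −Kx term, this is equivalent to applying λ times the prediction correction of (4):

```python
def lambda_confident(B, P, K, F, R, x, w_hat, t, lam):
    # lam = 0 ignores predictions (policy (5)); lam = 1 fully trusts them (policy (4)).
    u0 = zero_confident(K, x)
    u1 = one_confident(B, P, K, F, R, x, w_hat, t)
    return (1.0 - lam) * u0 + lam * u1
```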
Self-Tuning λ-Confident Control While the λ-confident control finds a balance between consistency and robustness, selecting the optimal λ parameter requires exogenous knowledge of the quality of the predictions ε, which is often not possible. For example, black-box AI tools typically do not allow uncertainty quantification. In this section, we develop a self-tuning λ-confident control approach that learns to tune λ in an online manner. We provide an upper bound on the regret of the self-tuning λ-confident control, compared with using the best possible λ in hindsight, and a competitive ratio for the complete self-tuning algorithm. These results provide the first worst-case guarantees for the integration of black-box AI tools into linear quadratic control. Our policy is described in Algorithm 3 and is a "follow the leader" approach [14,15]. At each time t = 0, . . . , T − 1, it selects a λ_t in order to minimize the gap between ALG and OPT over the previous t rounds and chooses an action using the trust parameter λ_t. Then the state x_t is updated to x_{t+1} using the linear system dynamics in (1) and this process repeats. Note that the denominator of λ_t is zero if and only if η(ŵ; s, t − 1) = 0 for all s. To make λ_t well-defined, we set λ_t = 1 in this case. Algorithm 3: Self-Tuning λ-Confident Control The key to the algorithm is the update rule for λ_t. Given previously observed perturbations and predictions, the goal of the algorithm is to find a greedy λ_t that minimizes the gap between the algorithmic and optimal costs so far. This gap can be equivalently written as Σ_{s=0}^{t−1} (λη(ŵ; s, t − 1) − η(w; s, t − 1))^T H (λη(ŵ; s, t − 1) − η(w; s, t − 1)), (8) which is a quadratic function of λ. Rearranging the terms in (8) yields the choice of λ_t in the self-tuning control scheme: λ_t = [Σ_{s=0}^{t−1} η(w; s, t − 1)^T H η(ŵ; s, t − 1)] / [Σ_{s=0}^{t−1} η(ŵ; s, t − 1)^T H η(ŵ; s, t − 1)]. Algorithm 3 is efficient since, in each time step, updating the η values only requires adding one more term. This means that the total computational complexity of computing λ_t is O(T^2 n^α), where α < 2.373, which is polynomial in both the time horizon length T and the state dimension n. According to the expression for λ_t in Algorithm 3, at each time t, the terms η(w; s, t − 2) and η(ŵ; s, t − 2) can be pre-computed for all s = 0, . . . , t − 1. Therefore, the recursive formula η(w; s, t) := Σ_{τ=s}^{t} (F^T)^{τ−s} P w_τ = η(w; s, t − 1) + (F^T)^{t−s} P w_t implies the update rule for the terms {η(w; s, t − 1) : s = 0, . . . , t − 1} in the expression for λ_t. This gives that, at each time t, it takes no more than O(T n^α) steps to compute λ_t, where α < 2.373 and O(n^α) is the computational complexity of matrix multiplication.
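A sketch of the self-tuning update, under the reconstructed closed form for λ_t above. The powers `FT_pows[k] = (F^T)^k` are assumed pre-computed (e.g., with `np.linalg.matrix_power`), and the η terms are maintained incrementally via the recursion just described:

```python
import numpy as np

def extend_etas(etas, FT_pows, P, y_t, t):
    """Maintain eta(y; s, t) = eta(y; s, t-1) + (F^T)^(t-s) P y_t for s = 0..t."""
    Py = P @ y_t
    for s in range(t):
        etas[s] = etas[s] + FT_pows[t - s] @ Py
    etas.append(Py.copy())                             # eta(y; t, t) = P y_t

def self_tuned_lambda(etas_w, etas_what, H):
    num = sum(float(ew @ H @ eh) for ew, eh in zip(etas_w, etas_what))
    den = sum(float(eh @ H @ eh) for eh in etas_what)
    return 1.0 if den == 0.0 else num / den            # lambda_t = 1 when denominator is 0
```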
Convergence We now move to the analysis of Algorithm 3. First, we study the convergence of λ_t, which depends on the variation of the predictions ŵ := (ŵ_0, . . . , ŵ_{T−1}) and the true perturbations w := (w_0, . . . , w_{T−1}), where we use a boldface letter to represent a sequence of vectors. Specifically, our results are in terms of the variation of the predictions and perturbations, which we define as follows. The self-variation µ_VAR(y) of a sequence y := (y_0, . . . , y_{T−1}) is defined as The goal of the self-tuning algorithm is to converge to the optimal trust parameter λ* for the problem instance. To specify this formally, let ALG(λ_0, . . . , λ_{T−1}) be the algorithmic cost with adaptively chosen trust parameters λ_0, . . . , λ_{T−1} and denote by ALG(λ) the cost with a fixed trust parameter λ. Then, λ* is defined as λ* := argmin_{λ∈R} ALG(λ). Further, let W(t) := Σ_{s=0}^{t} η(ŵ; s, t)^T H η(ŵ; s, t). We can now state a bound on the convergence rate of λ_t to λ* under Algorithm 3. The bound highlights that if the variation of the system perturbations and predictions is small, then the trust parameter λ_t converges quickly to λ*. A proof can be found in Appendix C.1. Regret and Competitiveness Building on the convergence analysis, we now prove bounds on the regret and competitive ratio of Algorithm 3. These are the main results of the paper and represent the performance of an algorithm that adaptively determines the optimal trade-off between robustness and consistency. Regret. We first study the regret as compared with the best fixed trust parameter in hindsight, i.e., λ*, whose corresponding worst-case competitive ratio satisfies the upper bound given in Theorem 2.2. Note that the baseline we evaluate against in Regret := ALG(λ_0, . . . , λ_{T−1}) − ALG(λ*) is stronger than the baselines in previous static regret analyses for LQR, such as [2,10], where online controllers are compared with a linear control policy u_t = −Kx_t with a strongly stable K. The baseline policy considered in our regret analysis is the λ-confident scheme (Algorithm 2) with the best fixed trust parameter λ*, which contains the class of strongly stable linear controllers as a special case. Moreover, the regret bound in Lemma 2 holds for any predictions ŵ_0, . . . , ŵ_{T−1}. Taking ŵ_t = w_t for all t = 0, . . . , T − 1, our regret directly compares ALG(λ_0, . . . , λ_{T−1}) with the optimal cost OPT, and therefore our regret also includes the dynamic regret considered in [19,29,30] for LQR as a special case. To interpret this lemma, suppose the sequences of perturbations and predictions satisfy: These bounds correspond to an assumption of smooth variation in the disturbances and the predictions. Note that it is natural for the disturbances to vary smoothly in applications such as tracking problems, where the disturbances correspond to the trajectory, and in such situations one would expect the predictions to also vary smoothly. For example, machine learning algorithms are often regularized to provide smooth predictions. Given these smoothness bounds, we have that Note that, as long as To understand how this bound may look in particular applications, suppose we have ρ(s) = O(1/s). In this case, the regret is polylogarithmic, i.e., Regret = O((log T)^2). If ρ(s) is exponential the regret is even smaller, i.e., if ρ(s) = O(r^s) for some 0 < r < 1 then Regret = O(1). The regret bound in Lemma 2 depends on the variation of perturbations and predictions. Note that such a term commonly appears in regret analyses based on the "follow the leader" approach [14,15]. For example, the regret analysis of the follow the optimal steady state (FOSS) method in [19] contains a similar "path length" term that captures the state variation, and there is a fundamental limit on regret that depends on the variation budget (c.f. Theorem 3 in [19]). There is a similar variation budget of the predictions or prediction errors in Theorem 1 of [30]. In many robotics applications (e.g., the trajectory tracking and EV charging experiments in this paper, shown in Section 4), each w_t comes from some desired smooth trajectory. Competitive Ratio. We are now ready to present our main result, which provides an upper bound on the competitive ratio of self-tuning control (Algorithm 3). Recall that, in Lemma 2, we bound the regret Regret := ALG(λ_0, . . . , λ_{T−1}) − ALG(λ*) and, in Theorem 2.2, a competitive ratio bound is provided for the λ-confident control scheme, including ALG(λ*)/OPT. Therefore, combining Lemma 2 and Theorem 2.2 leads to a novel competitive ratio bound for the self-tuning scheme (Algorithm 3). Note that compared with Theorem 2.2, which also provides a competitive ratio bound for λ-confident control, Theorem 3.1 below considers a competitive ratio bound for the self-tuning scheme in Algorithm 3 where, at each time t, a trust parameter λ_t is determined by online learning and may be time-varying. Theorem 3.1. Under our model assumptions, the competitive ratio of the self-tuning control in Algorithm 3 is bounded from above by 1 + O(ε)/(Θ(1) + Θ(ε)) + O(µ_VAR), where H, C, OPT and ε are defined in Theorem 2.2. In contrast to the regret bound, Theorem 3.1 states an upper bound on the competitive ratio CR(ε) defined in Section 1.2, which indicates that CR(ε) scales as 1 + O(ε)/(Θ(1) + Θ(ε)) as a function of ε. As a comparison, the λ-confident control in Algorithm 2 has a competitive ratio upper bound that is linear in the prediction error ε (Theorem 2.2).
This improved dependency highlights the importance of learning the trust parameter adaptively. Our experimental results in the next section verify the implications of Theorem 2.2 and Theorem 3.1. Specifically, the simulated competitive ratio of the self-tuning control (Algorithm 3) is a non-linear envelope of the simulated competitive ratios for λ-confident control with fixed trust parameters, and as a function of prediction error ε it matches the implied competitive ratio upper bound 1 + O(ε)/(Θ(1) + Θ(ε)). Theorem 3.1 is proven by combining Lemma 2 with Theorem 2.2, which bounds the competitive ratios for fixed trust parameters. Case Studies We now illustrate our main results using numerical examples and case studies to highlight the impact of the trust parameter λ in λ-confident control and demonstrate the ability of the self-tuning control algorithm to learn the appropriate trust parameter λ. We consider three applications. The first is a robot tracking example where a robot is asked to follow the locations of an unknown trajectory and the desired location is revealed only immediately before the robot decides how to modify its velocity. Predictions of the trajectory are available. However, the predictions can be untrustworthy and may contain large errors. The second is an adaptive battery-buffered electric vehicle (EV) charging problem where a battery-buffered charging station adaptively supplies the energy demands of arriving EVs while maintaining the state of charge of the batteries as close to a nominal level as possible. Our third application considers a non-linear control problem, the Cart-Pole problem. Our λ-confident and self-tuning control schemes use a linearized model while the algorithms are tested in the non-linear environment. We use the third application to demonstrate the practicality of our algorithms by showing that they work not only for LQC problems but also for non-linear systems. To illustrate the impact of randomness in prediction errors in our case studies, the three applications all use different forms of random error models. For each selected distribution of w − ŵ, we repeat the experiments multiple times and report the worst case with the highest algorithmic cost; see Appendix E for details. The robot's location at time t + 1, denoted by p_{t+1} ∈ R^2, depends on its previous location and its velocity v_t ∈ R^2, such that p_{t+1} = p_t + 0.2v_t, and at each time t + 1 the controller is able to apply an adjustment u_t to modify its velocity, such that v_{t+1} = v_t + 0.2u_t. Together, letting x_t := ((p_t − y_t)^T, v_t^T)^T, where y_t is the desired location at time t, this system can be recast in the canonical form (1).
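A sketch of one way to build this canonical form, under the assumption (consistent with the two update equations above) that the position block of the state tracks the offset p_t − y_t, so the unknown trajectory increment enters through w_t:

```python
import numpy as np

dt = 0.2
I2, Z2 = np.eye(2), np.zeros((2, 2))
A = np.block([[I2, dt * I2], [Z2, I2]])    # p_{t+1} = p_t + 0.2 v_t
B = np.vstack([Z2, dt * I2])               # v_{t+1} = v_t + 0.2 u_t

def tracking_disturbance(y, t):
    # With x_t = (p_t - y_t, v_t), w_t carries the trajectory increment y_t - y_{t+1}
    # in the position block and zeros in the velocity block.
    return np.concatenate([y[t] - y[t + 1], np.zeros(2)])
```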
Experimental results. In our first experiment, we demonstrate the convergence of the self-tuning scheme in Algorithm 3. To mimic the worst-case error, a random prediction error e_t = w_t − ŵ_t is used at each time t. We then sample prediction errors, run our algorithm on several error instances, and choose the one with the worst competitive ratio. The details of the settings can be found in Appendix E. To better simulate the task of tracking a trajectory and make it easier to observe the tracking accuracy, we ignore the cost of modifying velocity by setting R to a zero matrix for Figure 2a and Figure 2b. In Figure 2a, we observe that the tracking trajectory generated by the self-tuning scheme converges to the unknown trajectory (y_1, . . . , y_T), regardless of the level of prediction error. We plot the tracking trajectories every 60 time steps with a scaling parameter (defined in Appendix E) c = 10^{-2} (left), c = 10^{-1} (middle) and c = 1 (right), respectively. In all cases, we observe convergence of the trust parameters. Moreover, for a wide range of prediction error levels, without knowing the prediction error level in advance, the scheme is able to automatically switch modes and become both consistent and robust by choosing an appropriate trust parameter λ_t to accurately track the unknown trajectory. In Figure 2b, we observe similar behavior when the prediction error is generated from Gaussian distributions. Next, we demonstrate the performance of self-tuning control and the impact of trust parameters. In Figure 3, we depict the competitive ratios of the λ-confident control algorithm described in Section 2.2 with varying trust parameters, together with the competitive ratios of the self-tuning control scheme described in Algorithm 3. The label of the x-axis is the prediction error ε (normalized by 10^3), defined in (7). We divide our results into two parts. The left sub-figure in Figure 3 considers a low-error regime, where we observe that the competitive ratio of the self-tuning policy is close to the lower envelope formed by picking multiple trust parameters optimally offline. The right sub-figure in Figure 3 shows the performance of self-tuning when the prediction error is high. In the high-error regime, the competitive ratio of the self-tuning control policy is close to that of the best fixed trust parameter. Application 2: Adaptive battery-buffered EV charging Problem description. We consider an adaptive battery-buffered Electric Vehicle (EV) charging problem. There is a charging station with N chargers, with each charger connected to a battery energy storage system. Let x_t be a vector in R_+^N whose entries represent the State of Charge (SoC) of the batteries at time t. The charging controller decides a charging schedule u_t in R_+^N, where each entry u_t(i) is the energy to be charged into the i-th battery from the external power supply at time t. The system can be represented in the canonical form (1), where A is an N × N matrix that describes the degradation of battery charge levels and B is an N × N diagonal matrix whose diagonal entries 0 ≤ B_i ≤ 1 represent the charging efficiency coefficients. In our experiments, without loss of generality, we assume A and B are identity matrices. The perturbation w_t is defined as a length-N vector whose entry w_t(i) = E when, at time t, an EV arrives at charger i and demands energy E > 0; otherwise w_t(i) = 0. Therefore, the perturbations (w_0, . . . , w_{T−1}) depend on the arrivals of EVs and their energy demands. The charging controller can only make a charging decision u_t at time t before knowing w_t (as well as w_{t+1}, . . . , w_{T−1}) and the EVs that arrive at time t (as well as future EV arrivals). The goal of the adaptive battery-buffered EV charging problem is to maintain the battery SoC as close to a nominal value x̄ as possible. Therefore, the charging controller would like to minimize Σ_{t=0}^{T−1} (x_t − x̄)^T Q (x_t − x̄) + u_t^T R u_t, or equivalently (after shifting the state) Σ_{t=0}^{T−1} x_t^T Q x_t + u_t^T R u_t, where Q can be some positive definite matrix and R encodes the costs of the external power supply. In our experiments, we set Q to an identity matrix and R = 0.1 × Q.
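A small sketch that assembles this EV charging instance; the tuple format of `arrivals` is our own convention for illustration:

```python
import numpy as np

def ev_charging_instance(N, T, arrivals):
    """arrivals: iterable of (t, charger_index, energy_demand) triples (our format)."""
    A, B = np.eye(N), np.eye(N)            # no degradation, unit charging efficiency
    Q, R = np.eye(N), 0.1 * np.eye(N)      # cost weights as in the experiments
    w = np.zeros((T, N))
    for t, i, E in arrivals:
        w[t, i] = E                        # EV with demand E arrives at charger i
    return A, B, Q, R, w
```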
Experimental results. We show the performance of self-tuning control and the impact of trust parameters for adaptive EV charging in Figures 6a and 6b. In Figure 6a we consider a synthetic case where EVs with 5 kWh battery capacity arrive at a constant rate of 0.2, e.g., 1 EV arrives every 5 time slots. The results are divided into two parts. In Figure 6b, we use daily data (ACN-Data) that contain EVs' energy demands, arrival times and departure times collected from a real-world adaptive EV charging network [17]. We select a daily charging record from Nov 1st, 2018, depicted in Figure 5. The left sub-figure considers a magnified low-error regime and the right sub-figure shows the performance of self-tuning when the prediction error is high. For both regimes, the competitive ratios of the self-tuning control policy are nearly as good as the lower envelope formed by picking multiple trust parameters optimally offline. We see in both Figure 3 and Figure 6a that, with fixed trust parameters, the competitive ratio is linear in ε, matching what Theorem 2.2 indicates (in the sense of order in ε). Moreover, for the self-tuning scheme, in both Figure 3 and Figure 6a we observe a competitive ratio of 1 + O(ε)/(Θ(1) + Θ(ε)), which matches the competitive ratio bound given in Theorem 3.1 in the order sense (in ε). (b) Experiments with daily EV charging data [17]. Figure 6: Impact of trust parameters and performance of self-tuning control for adaptive battery-buffered EV charging with synthetic EV charging data (top) and realistic daily EV charging data [17] (bottom). The third set of experiments we consider is the classic Cart-Pole problem illustrated in Figure 7. The goal of the controller is to stabilize the pole in the upright position. This is a widely studied nonlinear system. Neglecting friction, the dynamics are given by θ̈ = [g sin θ − cos θ · (u + m l θ̇^2 sin θ)/(M + m)] / [l(4/3 − m cos^2 θ/(M + m))] (11) and ÿ = [u + m l (θ̇^2 sin θ − θ̈ cos θ)]/(M + m), (12) where u is the input force; θ is the angle between the pole and the vertical line; y is the location of the cart; g is the gravitational acceleration; l is the pole length; m is the pole mass; and M is the cart mass. Taking sin θ ≈ θ and cos θ ≈ 1 and ignoring higher-order terms, the dynamics of the Cart-Pole problem can be linearized as θ̈ = (gθ − u/(M + m))/η and ÿ = (u − m l θ̈)/(M + m), where η := l(4/3 − m/(m + M)). In our experiments, we set the cart mass M = 10.0 kg, pole mass m = 1.0 kg, pole length l = 10.0 m and gravitational acceleration g = 9.8 m/s^2. We set Q = I and R = 10^{-3}, and each w_t is a fixed external force defined as w_t = 60 × B (see Appendix E).
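A sketch of the linearized Cart-Pole model in discrete time, following the small-angle equations above; the state ordering (y, ẏ, θ, θ̇) and the Euler step size dt are our assumptions, not values stated in the paper:

```python
import numpy as np

M, m, l, g, dt = 10.0, 1.0, 10.0, 9.8, 0.02     # dt is an assumed step size
eta = l * (4.0 / 3.0 - m / (m + M))

# Continuous-time linearization around the upright equilibrium, from
# theta_ddot ~= (g*theta - u/(M+m))/eta and y_ddot ~= (u - m*l*theta_ddot)/(M+m):
Ac = np.array([[0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, -m * l * g / (eta * (M + m)), 0.0],
               [0.0, 0.0, 0.0, 1.0],
               [0.0, 0.0, g / eta, 0.0]])
Bc = np.array([[0.0],
               [(1.0 + m * l / (eta * (M + m))) / (M + m)],
               [0.0],
               [-1.0 / (eta * (M + m))]])
A, B = np.eye(4) + dt * Ac, dt * Bc             # forward-Euler discretization
```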
Experimental results. We show the performance of the self-tuning control (Algorithm 3) and the impact of trust parameters for the Cart-Pole problem in Figure 8, together with the λ-confident control scheme in Algorithm 2 for several fixed trust parameters λ. The algorithms are tested using the true nonlinear dynamical equations in (11)-(12). In Figure 8, we vary the variance σ^2 of the prediction noise e_t = w_t − ŵ_t and plot the average episodic rewards in the OpenAI Gym environment [9]. Unlike the worst-case settings in the previous two applications, we run episodes multiple times and plot the mean rewards. The height of the shaded area in Figure 8 represents the standard deviation of the rewards. The detailed hyper-parameters are given in Appendix E. Our results show that, despite the fact that the problem is nonlinear, the self-tuning control algorithm using a linearized model is still able to automatically adjust the trust parameter λ_t and achieve both consistency and robustness, regardless of the prediction error. In particular, it is close to the best algorithms for small prediction error while also staying among the best when the prediction error is large. Concluding Remarks In this paper we detail an approach that allows the use of black-box AI tools in a way that ensures worst-case performance bounds for linear quadratic control. Further, we demonstrate the effectiveness of our approach in multiple applications. There are many potential future directions that build on this work. First, we consider a linear quadratic control problem in this paper, and an important extension will be to analyze the robustness and consistency of non-linear control systems. Second, our regret bound (Lemma 2) and competitive ratio results (Theorem 3.1) are not tight when the variation of the perturbations or predictions is high; therefore it is interesting to explore the idea of "follow-the-regularized-leader" [23,21] and understand whether adding an extra regularizer in the update rule of λ for self-tuning control can improve the convergence and/or the regret. Finally, characterizing a tight trade-off between robustness and consistency for linear quadratic control is of particular interest. For example, the results in [22,27] together imply a tight robustness and consistency trade-off for the ski-rental problem. It would be interesting to explore whether it is possible to do the same for linear quadratic control. A Useful Lemmas Before proceeding to the proofs of our main results, we present some useful lemmas. We first present a lemma below from [28] that characterizes the difference between the optimal and the algorithmic costs. Lemma 3 (Lemma 10 in [28]). For any ψ_t ∈ R^n, if u_t = −Kx_t − (R + B^T P B)^{−1} B^T ψ_t at each time t = 0, . . . , T − 1, then the gap between the optimal cost OPT and the algorithmic cost ALG induced by selecting the control actions (u_0, . . . , u_{T−1}) equals ALG − OPT = Σ_{t=0}^{T−1} (ψ_t − Σ_{τ=t}^{T−1} (F^T)^{τ−t} P w_τ)^T H (ψ_t − Σ_{τ=t}^{T−1} (F^T)^{τ−t} P w_τ), where H := B(R + B^T P B)^{−1} B^T and F := A − HPA. The next lemma describes the form of the optimal trust parameter. Proof of Lemma 4. The optimal trust parameter λ* is implying that λ* = λ_T. Next, we note that the static regret depends on the convergence of λ_t. Proof of Lemma 6. Based on the assumption, for any 1 ≤ t ≤ T, we have that Since W_t ≠ 0 for all 1 ≤ t ≤ T and W_T ≠ 0, the lemma follows. We first prove the following theorem. Theorem B.1. With a fixed trust parameter λ > 0, the λ-confident control in Algorithm 2 has a worst-case competitive ratio of at most where H := B(R + B^T P B)^{−1} B^T, OPT denotes the optimal cost, C > 0 is a constant that depends on A, B, Q, R, and ε(F, P, e_0, . . . , e_{T−1}) := Σ_{t=0}^{T−1} ‖Σ_{τ=t}^{T−1} (F^T)^{τ−t} P e_τ‖^2. B.1 Proof of Theorem 2.2 Denote by ALG the cost induced by taking actions (u_0, . . . , u_{T−1}) in Algorithm 2 and by OPT the optimal total cost. Note that we assume OPT > 0. Lemma 3 implies that Therefore, with a sequence of actions (u_0, . . . , u_{T−1}) generated by the λ-confident control scheme, (16) leads to where e_t := w_t − ŵ_t for all t = 0, . . . , T − 1. Moreover, denoting by x*_t and u*_t the offline optimal state and action at time t, the optimal cost satisfies for some constant 0 < D_0 < min{λ_min(P), λ_min(Q)/2} that depends on Q, R and K, where in (17), λ_min(Q), λ_min(R) and λ_min(P) are the smallest eigenvalues of the positive definite matrices Q, R and P, respectively. Let Continuing from (19), Putting (21) into (18), we obtain which implies that To obtain the second bound, noting that Noting that W := for some constant C > 0 that depends on A, B, Q and R. C Regret Analysis of Self-tuning Control Throughout, for notational convenience, we write C.1 Proof of Lemma 1 In this section, we show the proofs of Lemma 2 and Lemma 1. We begin by rewriting λ_t − λ_T as below.
Applying Lemma 6, it suffices to prove that for any 1 ≤ t ≤ T. In the following, we deal with the terms (a) and (b) separately. C.1.1 Upper bound on (a) To bound the term (a) in (23), we notice that (a) can be regarded as a difference between two algebraic means. Rewriting the first mean in (a), we get We state a lemma below, which shows that the sequence (η(ŵ; 0, T), . . . , η(ŵ; T, T)) satisfies the assumption in Lemma 7. C.1.2 Upper bound on (b) Next, we provide a bound on (b) in (23). For (b), we have Noting that η(ŵ; s, T) − η(ŵ; s, t) = Σ_{τ=t+1}^{T} (F^T)^{τ−s} P ŵ_τ, we obtain By our assumption, ‖w_t‖ ≤ w̄ and ‖ŵ_t‖ ≤ w̄' for all t = 0, . . . , T − 1. Therefore, for any s ≤ t: Plugging in (35) Using the same argument, the following bound holds for (34): Finally, together, (31) and (39) imply the following: The same argument also guarantees that The following lemma together with (40) and (41) justifies the conditions needed to apply Lemma 6. D Proof of Theorem 2.1 First, note that the total cost is given by J = Σ_{t=0}^{T−1} (x_t^T Q x_t + u_t^T R u_t) + x_T^T P x_T. Since the threshold σ > 0 can be chosen arbitrarily small, we consider the case in which the accumulated error exceeds the threshold σ. Without loss of generality, we suppose the accumulated error δ exceeds the threshold σ at time s ≥ 0 and assume the predictions ŵ_t, 0 < t < s − 1, are accurate. Throughout, we define J_1 := Σ_{t=1}^{s−1} x_t^T Q x_t + u_t^T R u_t and J_2 := Σ_{t=s}^{T−1} x_t^T Q x_t + u_t^T R u_t and use tilded letters J̃, x̃ and ũ to denote the corresponding cost, actions and states of the threshold algorithm (Algorithm 1). We consider the best online algorithm (with no predictions available) that minimizes its corresponding competitive ratio and use barred letters J̄, x̄ and ū to denote the corresponding cost, actions and states. The competitive ratio of the best online algorithm is denoted by C_min. D.1 Upper Bound on J_1 We first provide an upper bound on J_1, the first portion of the total cost. For 1 ≤ t < s, the threshold-based algorithm gives Lemma 10 in [28] implies and ALG(s : Rewriting (44) Therefore, combining (43) and (45), Denote ∆J_1 := J̃_1 − J̄_1. We obtain Since the following is true: we have Therefore, as a conclusion, J_1 can be bounded from above by D.2 Upper Bound on J_2 From Section D.1, we know that ‖x̃_s − x̄_s‖ = O(1). Let J'_2 denote the cost of running the 1-confident algorithm from x̃_s with correct predictions, and x'_t denote the states obtained in this procedure. Then Therefore, If ‖x'_t‖ = O(1) and ‖u'_t‖ = O(1) for all t, then Otherwise, suppose x'_{i_1}, x'_{i_2}, . . . , x'_{i_k} and u'_{j_1}, u'_{j_2}, . . . , u'_{j_l} are some functions of T; then for any 1 ≤ m ≤ k and 1 ≤ n ≤ l, ‖x'_{i_m}‖/(x'_{i_m}^T Q x'_{i_m}) → 0 and ‖u'_{j_n}‖/(u'_{j_n}^T R u'_{j_n}) → 0. Therefore, Combining the two cases, we conclude that Therefore, from (46) and (47), we conclude that The proof is completed by noticing that when the prediction error is zero and w_t = ŵ_t for all t = 0, . . . , T − 1, the accumulated error δ will always be 0, and since the threshold σ is positive, the algorithm is always optimal and 1-consistent. As a result, Algorithm 1 is 1-consistent and (C_min + o(1))-robust. E Experimental Setup In our three case studies, we consider i.i.d. prediction errors, i.e., e_t = w_t − ŵ_t is an i.i.d. additive prediction noise. To illustrate the effects of randomness when simulating the worst-case performance, we consider varying types of noise in the case studies.
For the robot tracking case, we set $e_t = cX$, where $X \sim B(10, 0.5)$ is a binomial random variable with 10 trials and success probability 0.5 and $c > 0$ is a scaling parameter. For the battery-buffered EV charging case, we set $e_t = Y$, where $Y \sim N(0, \sigma^2)$ is a normal random variable with zero mean and a variance $\sigma^2$ that can be varied to generate varying prediction error. For the Cart-Pole problem, we set $e_t = Z w_t$, where $w_t = 60 \times B$ with $\eta := l\left(\frac{4}{3} - \frac{m}{m+M}\right)$, and $Z \sim N(0, \sigma^2)$ is a normal random variable with zero mean and a variance $\sigma^2$ ranging between 0 and $8 \times 10^2$. To simulate the worst-case performance of the algorithms, in our experiments we run each algorithm 5 times, generating a new sequence of prediction noise in each run, and report the run with the largest overall cost. Finally, Table 1 and Table 2 list the detailed settings and the hyper-parameters used in the robot tracking, battery-buffered EV charging and Cart-Pole case studies.
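As a concrete illustration of this worst-of-5 simulation protocol, the sketch below replays an algorithm under freshly sampled noise sequences and keeps the largest cost. It is a minimal sketch, not the code used in the paper: the cost routine, horizon T, and scale parameters are illustrative assumptions.

import numpy as np

def worst_case_cost(run_algorithm, sample_noise, n_trials=5, seed=0):
    # Replay the algorithm under a fresh prediction-noise sequence each run
    # and keep the largest (worst-case) total cost, as described above.
    rng = np.random.default_rng(seed)
    return max(run_algorithm(sample_noise(rng)) for _ in range(n_trials))

# Illustrative noise models for two of the case studies (T, c, sigma assumed):
T, c, sigma = 100, 0.1, 1.0
robot_noise = lambda rng: c * rng.binomial(10, 0.5, size=T)  # e_t = c * X, X ~ B(10, 0.5)
ev_noise = lambda rng: rng.normal(0.0, sigma, size=T)        # e_t = Y, Y ~ N(0, sigma^2)

# Example with a stand-in cost function (sum of squared perturbed terms):
demo_cost = lambda e: float(np.sum((1.0 + e) ** 2))
print(worst_case_cost(demo_cost, robot_noise))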
Digital Virtual Simulation Experiment Design of Consumer Behavior in Smart Classroom Scenario

Abstract—This work provides a specific experimental teaching paradigm for the establishment and application of a social context-aware system in the smart classroom. The paper studies the physical context-aware system based on the smart classroom and generates the original big data by constructing a Consumer Behavior Cyber-physical System. The data are then dimensionality-reduced to form a reliable data source for deep learning, and the TRIZ method is used to develop the virtual simulation experiment design of specific consumption scenes. The research is a positive exploration of virtual simulation experiments for economic and management behaviors, and it also verifies the practical application value of virtual simulation frontier technology in experimental teaching.

I. INTRODUCTION

The construction and teaching practice of smart classrooms is flourishing in China. It provides an advanced implementation platform for teaching content management, classroom data processing, classroom situational design, and data-driven scientific management of teaching. At present, the design and implementation of smart classrooms, with Sichuan University, Huazhong Normal University, Xi'an Jiaotong University, and other institutions as pioneers, has achieved considerable results in resource platform construction, physical space design, information interaction, and general data collection. These results provide strong support for the deep virtualization design of experimental teaching based on smart classrooms. Therefore, laboratory construction and teaching design based on VR and AR technology will become the focus and testing ground for the smart classroom concept in the reform of virtual simulation experiments.

A. The education applications of Cyber-physical systems

The CPS (Cyber-physical system) is a simulation system in which the physical environment is fully digitized into a virtual reality environment [1]. Virtual simulation of complex systems has been attempted in engineering design, architectural design, and medical teaching practice in China [2]. This field differs from traditional mathematical modeling: instead, a product digitalization method is adopted to realize the transformation from the physical environment to the virtual environment of a digital twin CPS [3]. Practitioners in the field of engineering teaching and practice took the lead in proposing the construction of a digital twin production system, the CPPS (Cyber-physical production system), using real-time data acquisition and processing technology. Beijing University of Aeronautics and Astronautics (Tao Fei, Liu Weiran, 2018) put forward the idea of a Cyber-physical workshop and applied the idea of cyber-physical integration to the digital operation design of production sites [4]. Since then, Donghua University has applied a Cyber-physical experiment on aerospace structural component modeling [5] to realize the digital interactive integration of physical space and information space at the production site (Guo Dongsheng, Bao Jinsong, 2018). Beijing Institute of Technology has realized the Cyber-physical design of a complex environment for on-orbit spacecraft assembly (Zhang Yuliang, Zhang Jiapeng).
In addition, in the field of humanities and social sciences, Cyber-physical applications have been adopted in the preservation of material cultural heritage [6], reflecting the shift of VR applications in the social sciences from front-end sensory simulation to back-end application data simulation (Qin Xiaozhu, Zhang Xingwang, 2018).

B. The situation of smart classroom construction

The construction of the smart classroom forms a context-awareness system in the physical world, while the teaching activities in the smart classroom are social situations characterized by social intercourse. In social situations, the social activities of the human real world can be mapped to virtual social networks through electronic social relationships, thus forming a Cyber-physical world convergence. Within this convergence, social scenarios in both real and virtual social networks can be perceived: in real social networks, social scenarios are acquired through physical sensors; in virtual social networks, they are acquired through the APIs of various social application software. Therefore, according to the data source, social scene acquisition platforms can be divided into physical platforms, virtual platforms, and hybrid platforms.

The core mission of this research is to integrate CPS, from design idea to implementation scheme, into the experimental teaching of economics and management departments, and to establish the Cyber-physical consumer behavior system (hereinafter referred to as CPCBS). Previous results of this study successfully constructed a VR experimental design of a consumption-accompanying environment, which has been applied in the exploration of experimental teaching of economics and management. The present research follows up on the design effect of the earlier VR visual exhibition experiment, relying on the construction of the smart classroom and using digital twin technology to realize the digital conversion of consumer behavior characteristics and the interactive integration of virtual and real information, thereby applying the CPS to the experimental teaching practice of economics and management.

A. The design concept

The study uses the theory of social context-awareness to guide the establishment of a Cyber-physical system under consumption scenarios. Social context-awareness computing is a computational model that is the product of the integration of social context-awareness and social computing. A social situation is one kind of scenario, mainly referring to the aggregation of users' social relations and social activities. Its focus shifts from the user's physical environment (location, time, temperature, etc.) to the user's social environment (social relations, social roles, interactive events, etc.). Social context-awareness computing has three aspects: identifying social scenarios, perceiving social scenarios, and computing social scenarios [7]. In this computing model, the system can discover and utilize social scenario information, analyze and process the social attributes of user scenarios, and provide users with the required services; the five social context elements are people, social events, objects, time, and place. Social context-awareness computing based on consumption scenarios can play an incredible role in the real world. Previous studies have explored how to identify individuals' influence based on community scenarios and optimize advertising strategies.
In terms of customized movie information retrieval and recommendation, H. P. Xuan's research achieves intelligent information distribution based on the real space-time scene (Where? When? With whom?) and the virtual scene (the "My Movie History" of a Facebook account). The design and deployment of the existing smart classroom has completed the work of physical context-awareness computing, and engineering digital twin technology is likewise an engineering application of physical context-awareness computing. At present, however, the establishment and application of a social context-awareness system has not been applied in the experimental teaching of economics and management.

B. Design objectives

The system is intended to address three real-world application issues:

• The application content of experimental teaching is not deep enough in the existing construction schemes of smart classrooms, and the users' big data on smart classroom platforms still lacks application setups.

• Existing VR virtual simulation laboratory construction and research focus on the realization of sensory simulation and lack progress in the simulation of the data background.

• The teaching application of existing Cyber-physical systems is limited to the field of engineering manufacturing simulation and has not been applied in simulation teaching experiments for economic and management systems.

C. Implementation Methods

In the design of the CPCBS, the key of this research is to realize effective information conversion from the physical layer to the signal layer to the digital layer, and to digitalize the real consumption environment. In addition, embodying the frontier results of fog computing and edge computing in computationally supported virtual consumer behavior decision-making is a technical issue to be further explored.

• Using the TRIZ analogy method of engineering, the 39 engineering parameters and 40 corresponding inventive principles for solving practical innovation problems are transformed by analogy of the contradiction matrix in the design of the CPCBS. The analysis yields the main contradiction and conflict relationships, and the overall solution design is then realized by collecting the core data and modeling the system.

• Deploy a fully digital acquisition environment. The physical context-awareness system of the smart classroom should be built and deployed at the physical layer, device layer, and network layer. The CPCBS should increase the density of sensing devices and the acquisition frequency for consumer behavior and status. In addition, to meet the needs of cognitive multi-modality learning, eye trackers and brain biometric scanners should be deployed to collect and digitalize non-contact consumer bio-signals. The acquisition and digital conversion of consumer physical-space signals are realized by using a physical green screen in the VR environment and limb movement locators and trajectory positioning in the Internet of Things environment. The above data constitute the raw big-data base (RAW) for consumer behavioral experiments.

• Store the above data in the cloud, form a big-data base through structured data processing, and construct a system model of consumer behavior characteristics under the conditions of cardinal utility theory and ordinal utility theory. Then invoke deep learning models to realize the design of the simulation experiment.
• In the VR and AR environment, the system experiment of consumer behavior virtualization is carried out by changing external factors such as the consumption environment, and the experimental designs for teaching and research purposes are gradually realized.

• To debug and optimize the system's computational logic, we use a double-blind test comparing real consumer information interaction in the physical world with information interaction in the CPCBS to solve specific marketing problems such as new product development, consumer characteristic portraits, and market segmentation.

After completing the research and design of the project, the relevant research concepts can be introduced into the experimental design of consumer behavior, and digital modeling of the consumer evaluation system can be carried out to realize dynamic simulation of the demand side. In addition, a digital simulation system for consumer decision-making with edge computing features can be realized, which provides a testable digital feedback simulation environment for enterprise business activities such as R&D, market segmentation, and new product promotion, and simulates real market feedback effects. The scope of the project covers experiments on marketing behavior in management and economics, and it can effectively improve the effectiveness of existing VR experiments.

IV. DEEP LEARNING AND VIRTUAL SIMULATION EXPERIMENT DESIGN BASED ON CPCBS

The main idea of virtual simulation experiment design based on the CPCBS is to mine the big data of the Cyber-physical system through deep learning technology and form a virtual simulation of consumer behavior data. The CPCBS acquires information through the physical layer of the smart classroom and carries out digital regulation through digital-to-analog conversion. This multi-dimensional and multi-modal primitive digital information exists in a rough, non-structured form. Problems arising from the complexity and diversity of consumer behavior data affect the quality of the big data itself and the feasibility of deep learning; therefore, missing values, redundancy, and sparsity of the data require particular attention. The massive real-time data collected by the CPCBS cannot be directly mined by deep learning models, so a dimensionality-reducing vectorization method is needed to process the original information into linear machine-readable data. This type of data can then be recognized and calculated by deep learning models and used in the decision-making of artificial intelligence systems.

Data mining based on deep learning technology mainly includes the input of consumption context-awareness big data, big-data processing based on deep learning models, and the output of big-data mining results. The CPCBS achieves the input of big data, and the purpose of the mining determines the output of the results in the relevant consumption scenarios. Therefore, the deep learning model is chosen according to the direction of input and output and the scope of application of the different models. Deep Belief Networks (DBN), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory networks (LSTM), and Recursive Neural Networks (RNN) are typical and widely used deep learning models. The specific experimental designs and models are selected as follows:
A. Consumer behavior tracking and stochastic performance prediction based on deep learning

Set up the virtual simulation experiment scene for consumer behavior, distinguish the real laboratory scene from the virtual reality scene, and capture and track the state characteristics of the subjects in the physical scene of the smart classroom in real time. Because the factors affecting consumers' choices are many and complex, Deep Belief Networks and Convolutional Neural Networks are more suitable for this data processing.

B. Random experiment design of a consumer evaluation dynamic model

The consumer dynamic evaluation model based on typical consumer behavior test experiments can rely on the real digital twin system or can be fitted with data from virtual social platforms. Long short-term memory networks and recurrent neural networks have good applicability for this data processing and deep learning.

C. New product testing experiments based on deep learning

New product testing includes the application of the consumer evaluation model, the behavior tracking of consumer context-awareness, and the design of a multi-modal communication mechanism for consumer feedback. It is suitable for the mixed use of multiple deep learning models. In the experimental design of event impact, the recurrent neural network is used as the pre-test arrangement of the pre-treatment experiment.

D. Simulation of consumer psychological-emotional cognition and scene perception

Cognitive computation of lagging consumer psychological emotion strongly relies on the processing of text and image data, for which convolutional neural networks and long short-term memory networks are highly applicable. In real-time consumer scenario perception computing, the perception of consumers' psychological emotions requires deep belief networks for deep learning.

The virtual simulation experiment design completed through the above deep learning can connect the front end of VR and AR laboratory sensory simulation with the back end of economic operation data simulation. The idea of near-field simulation of consumption decision-making is proposed to verify the commercial deployment and decision support of edge computing and fog computing under ubiquitous computing conditions.

V. CONCLUSIONS

This paper studies how to establish a Cyber-physical consumer behavior system that, through social context-awareness computing and deep learning technology, achieves digital virtual simulation of consumer scenarios and completes real-time interaction and customer behavior testing of physical and virtual products in marketing experiments in economics and management. The contributions of the experimental designs are as follows:

• The research embodies the deep application of smart classroom construction in the field of laboratory teaching. It provides strong support for the experimental teaching of business research methods, consumer behavior, and network marketing courses in economics and management. Relevant experimental design projects can be deployed on the front line of virtual simulation teaching in this field.

• The research assumptions and the guiding ideology of the experimental design are forward-looking for the construction of virtual simulation laboratories for economics and management. Digital processing of consumer behavior is accompanied by VR simulation of consumer scenes.
It integrates university experimental teaching with the virtual product design and market development of enterprises to achieve the combination of production, teaching, and research.

• The research results achieve interdisciplinary communication. Digital simulation of consumer behavior is the inevitable step of Cyber-physical technology from product digitalization to enterprise marketing digitalization. The virtual docking of market research scenarios with the engineering manufacturing process brings multi-modal VR into the enterprise R&D process.

• The practical significance of this research lies in the application of Cyber-physical systems in the virtual simulation experiments of economics and management, the realization of a digital interactive teaching environment, and the formation of a demonstration sample of big-data application in the smart classroom.

The research results can provide platform support for core big data and dynamic digital-to-analog conversion simulation for the establishment of smart classrooms in management laboratories, and the application of VR and AR technology in the experimental courses of management and the humanities can also form an effective promotion scheme, helping non-engineering disciplines to build virtual simulation teaching laboratories. In the further construction of the smart classroom, these teaching research results also become a near-line function module supplement to the smart classroom cloud platform, which helps to exploit the potential efficiency of the smart classroom and improve the level of laboratory data operation.
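To make the model pairings in Section IV concrete, the following is a minimal sketch of a sequence classifier of the LSTM type suggested for the dynamic consumer-evaluation experiment. It is illustrative only, not the authors' implementation: the feature dimension, sequence length, and class count are assumptions.

import torch
import torch.nn as nn

class ConsumerEvalLSTM(nn.Module):
    # Classify a sequence of consumer-behavior feature vectors
    # (e.g., per-time-step state features collected by the CPCBS).
    def __init__(self, n_features=16, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)     # final hidden state summarizes the sequence
        return self.head(h[-1])

model = ConsumerEvalLSTM()
dummy = torch.randn(8, 20, 16)       # 8 sequences of 20 time steps
logits = model(dummy)                # shape: (8, 2)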
Stigma in Elderly Females with Stress Urinary Incontinence: A Latent Profile Analysis

Background: Stress urinary incontinence (SUI) is a commonly occurring urological disorder in females, particularly among the elderly population. Females with SUI often experience significant stigma associated with their condition. This study aimed to investigate the current status of stigma among elderly females with SUI and analyze its heterogeneous subtypes. Methods: The Stigma Scale for Chronic Illness (SSCI) was used to survey 245 participants in two tertiary hospitals in Guangdong from November 2021 to September 2022. Latent profile analysis was employed to create a classification model, and variance and correlation analyses were conducted to assess the influencing factors. Results: A total of 245 elderly females with SUI participated in the survey. They had an average stigma score of 83.70 ± 13.88, consisting of self-stigma (48.64 ± 8.04) and perceived stigma (35.06 ± 6.80) scores. Latent profile analysis identified three distinct and comparable subtypes: the low-self-low-perceived group (14.69%), the high-self-medium-perceived group (49.38%), and the high-self-high-perceived group (35.91%). These subtypes exhibited statistically significant differences in all dimensions and the overall stigma score (p < 0.05) and were found to be correlated with the patient's level of education, marital status, drinking habits, number of chronic illnesses, presence of diabetes, and frequency of urinary leakage (p < 0.05). Conclusion: This study demonstrates that elderly females with SUI face elevated levels of stigma, and it reveals distinct classification characteristics among them. Additionally, it emphasizes the importance of providing specific support and attention to individuals with higher levels of education, increased fluid intake, marital status, severe urinary leakage, and diabetes.
Introduction

Stress urinary incontinence (SUI) is a urinary system disorder characterized by involuntary urine leakage during activities like coughing, sneezing, physical exertion, or other situations that elevate intra-abdominal pressure, resulting in temporary urinary incontinence (UI) [1]. Among females, especially in the elderly demographic, SUI stands as a prevalent urologic condition. Studies have reported prevalence rates ranging from 18.9% to 40%, with a notably higher prevalence of up to 28.2% in females aged over 60 years [2,3,4,5]. A study has revealed that 60.6% of patients experiencing UI perceive it as significantly more embarrassing than depression and cancer [6]. This embarrassment, and the associated shame, often leads to a delay in seeking medical treatment. Consequently, patients find it challenging to access timely and effective therapeutic measures, which worsens disease symptoms and adds to their psychological stress. The intensifying symptoms and negative emotions further contribute to an increased sense of shame among patients, subsequently diminishing their social participation and reducing their inclination to seek medical treatment [7,8,9,10]. This creates a vicious circle that adversely impacts the overall quality of life of these patients. Additionally, it is important to acknowledge that several factors, including a lack of awareness and education, cultural norms, gender roles, and age, contribute to the stigma surrounding incontinence [8,9,10]. Currently, research concerning the experienced stigma among elderly females with SUI [9,10,11,12] predominantly focuses on evaluating clinical outcomes using composite scores, often without considering the heterogeneity among the items in these scales. Latent profile analysis (LPA), however, is a clustering method based on a latent variable model, offering the capability to identify different groups within the data and describe the unique characteristics of each group [13]. Hence, this study aims to use LPA as a tool for exploring and analyzing the various subgroups of stigma characteristics present among elderly females with SUI. The results of this study furnish valuable insights for the development of targeted nursing interventions. These interventions are designed to reduce stigma, minimize its impact on patients' health-related behavior, and ultimately improve their overall quality of life.

Participants and Procedure

This cross-sectional study was conducted from November 2021 to September 2022 at two tertiary hospitals located in Guangzhou, Guangdong Province. The study specifically targeted participants admitted to the urology and geriatric departments, employing a simple random sampling method. The inclusion criteria were as follows: (1) patients who met the diagnostic criteria outlined by the International Association of Urinary Control for UI [14]; (2) SUI diagnosis confirmed by a physician; (3) elderly females aged ≥60 years; (4) patients with clear consciousness, devoid of verbal communication impairments, possessing some level of text reading comprehension ability, and capable of independently completing the questionnaire; and (5) patients with relatively stable health conditions.
Sample Size

The current study was designed to conduct a cross-sectional assessment of the prevalence of morbidity and the stigma experienced by female patients with SUI in a specific location. We conducted a two-sided test with a significance level (alpha) set at 0.05, considering an expected standard deviation of 30 and a margin of error of 5. The sample size was determined using PASS 15 software (NCSS, LLC., Kaysville, UT, USA) [15], resulting in a calculation of N = 139 cases. Accounting for a 20% anticipated loss to follow-up rate, a minimum of 174 cases were required as study participants. Ultimately, the study successfully enrolled 245 elderly female patients with SUI.

Sociodemographic and Clinical Characteristics

Based on the existing literature, the survey questionnaire assessed various sociodemographic characteristics, including age, educational attainment, income, marital status, obesity, history of constipation, and water intake. It also collected data on participants' smoking and drinking habits. Furthermore, the questionnaire gathered information regarding clinical characteristics, encompassing the type and number of chronic diseases, history of genitourinary surgeries, and details pertaining to UI, such as the type of incontinence, number of leakage episodes, and frequency of micturition.

Stigma Assessment

The Stigma Scale for Chronic Illness (SSCI) is a comprehensive measurement tool developed by Rao et al. [16] in 2009. This tool was specifically designed to assess the extent of stigma experienced by patients with various chronic diseases and builds upon the foundation of the Patient-Reported Outcomes Measurement Information System. The SSCI comprises 24 items classified into two dimensions: self-stigma and perceived stigma. Of these, 13 items pertain to self-stigma, while the remaining 11 items are associated with perceived stigma. A 5-point Likert scale, ranging from 1 (none) to 5 (always), is employed, resulting in a total score range of 24 to 120 points; higher scores indicate a greater degree of stigma. Deng et al. [17] adapted this scale into a Chinese version known as the Chronic Disease Stigma Scale. The adapted scale exhibited excellent internal consistency and stability, as reflected by a Cronbach's alpha coefficient of 0.95. Moreover, the total scale exhibited a content validity of 0.932, while each individual item demonstrated a content validity ranging from 0.800 to 1.000.
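The sample-size calculation reported above (two-sided alpha = 0.05, expected standard deviation 30, margin of error 5, 20% anticipated attrition) can be reproduced with the usual normal-approximation formula. The authors used PASS 15; the sketch below is only an independent check of the arithmetic.

import math
from scipy.stats import norm

z = norm.ppf(1 - 0.05 / 2)              # two-sided critical value, approx. 1.96
n = math.ceil((z * 30 / 5) ** 2)        # (z * SD / margin)^2 -> 139 cases
n_enrolled = math.ceil(n / (1 - 0.20))  # inflate for 20% loss -> 174 cases
print(n, n_enrolled)                    # 139 174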
Statistical Analysis

Mplus 8.3 software (Muthén & Muthén, Los Angeles, CA, USA) was employed to construct a latent profile classification model. This model used the SSCI scores as exogenous variables and targeted elderly female patients with SUI. Initially, the model consisted of a single category, and subsequent iterations expanded the number of category models. Model fitness was assessed based on multiple criteria, including the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the sample-size-adjusted BIC (aBIC), the entropy index, the Lo-Mendell-Rubin adjusted likelihood ratio test (LMR), and the Bootstrap Likelihood Ratio Test (BLRT) [18,19]. The criteria for evaluating the model's fitness encompassed the following: (1) smaller values of AIC, BIC, and aBIC indicate better model fit [18]; (2) higher entropy values, closer to 1, indicate a greater probability of accurate individual categorization [13,20]; (3) the LMR and BLRT were employed to compare the fit difference between the k-class and (k-1)-class models, with a p-value < 0.05 indicating that the k-class model outperformed the (k-1)-class model. Iterations continued until an optimal model fit was achieved [21].

Upon determining the optimal model, sociodemographic and clinical characteristics were compared among profiles using the combined sample from the discovery and replication cohorts. IBM SPSS Statistics for Windows (version 23; IBM Corp., Armonk, NY, USA) was utilized to analyze the sociodemographic and clinical characteristics across the different profiles. Variations were examined through analysis of variance (ANOVA), t-tests, and χ² tests [22].

Ethical Considerations

The study was conducted according to the guidelines of the Declaration of Helsinki and was approved by the Ethics Committee of the First Affiliated Hospital of Guangdong Pharmaceutical University (2022-87). Before we distributed the questionnaires, we assured the participants that the questionnaire would be used for academic research, that their personal information would remain confidential, and that they could withdraw at any stage. Moreover, the participants signed informed consent forms.

Participant Characteristics

The mean scores for total, self, and perceived stigma were 83.70 ± 13.88, 48.64 ± 8.05, and 35.06 ± 6.80, respectively, constituting approximately 69.7%, 74.8%, and 63.7% of the corresponding maximum scores. A total of 245 elderly females participated in this study, with a mean age of 73.91 ± 9.02 years. Table 1 provides detailed information about the participants, including the proportions of participants with various characteristics. Statistically significant differences were observed among participants with varying levels of education, water intake, smoking or drinking habits, the presence or absence of diabetes mellitus, and the frequency of urinary leakage and urination. Furthermore, significant differences were found in the total stigma and perceived stigma scores among participants with different marital statuses. Additionally, significant differences were observed in the total stigma and self-stigma scores among participants with varying income levels.
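The authors fitted the latent profile models in Mplus 8.3. As an analogous, simplified sketch, a similar model-selection loop can be mimicked with a diagonal Gaussian mixture over the 24 SSCI items, comparing AIC and BIC across 1-6 classes. The LMR and BLRT tests are not available in scikit-learn, and the item scores below are random placeholders, not the study data.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(245, 24)).astype(float)  # placeholder SSCI item scores (1-5)

for k in range(1, 7):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=10, random_state=0).fit(X)
    print(f"k={k}  AIC={gm.aic(X):.1f}  BIC={gm.bic(X):.1f}")

best = GaussianMixture(n_components=3, covariance_type="diag",
                       n_init=10, random_state=0).fit(X)
profiles = best.predict(X)  # latent class assignment for each participant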
Latent Profile Analysis

The process began with the initial model, progressively constructing latent category models ranging from 1 to 6 classes, with the results outlined in Table 2. As the number of model categories increased, both the AIC and BIC values exhibited a gradual decrease, indicative of an improved model fit. It is worth noting that each model maintained an entropy index >0.8, indicating a reliable classification. Furthermore, the Bootstrap Likelihood Ratio Test indexes for the 2- to 6-category models all registered values <0.05, indicating that the model with k categories outperformed the model with k-1 profiles. Regarding the AIC and BIC indices, the 3-category model surpassed the 2-category model but was slightly inferior to the 4-category model. However, the 3-category model showed more substantial reductions of 7.80% and 6.47% in AIC and BIC, respectively, compared to the 4-category model's decreases of 1.51% and 1.22%. This suggests that the decline in AIC and BIC was more significant for the 3-category model than for the 4-category model. Additionally, the proportion of the category with the lowest relative frequency in the 3-category model stood at 14%, which was slightly higher than that observed in the 4-category model. Considering the combination of model fit indices and model simplicity, the 3-category latent profile model was concluded to be the most suitable.

The scores of the three latent categories on the SSCI scale are shown in Fig. 1. Class 1, comprising 14.69% of the population, exhibited scores below the mean in all dimensions and was consequently labeled the "low-self-low-perceived" group. Class 2, constituting 49.38% of the population, scored close to the mean in perceived stigma and above the mean in self-stigma and was labeled the "high-self-medium-perceived" group. Class 3, comprising 35.91% of the population, achieved scores above the mean in all dimensions and was labeled the "high-self-high-perceived" group. An ANOVA was conducted using the participants' latent categories as independent variables and the dimension scores as well as the total scores as dependent variables, as outlined in Table 3. The results revealed statistically significant differences (p < 0.05) in self-stigma among the three latent categories, with the "high-self-high-perceived" group recording the highest score, followed by the "high-self-medium-perceived" and "low-self-low-perceived" groups. Similarly, significant differences (p < 0.05) were observed in perceived stigma among the three latent categories, with the same ordering. Likewise, the differences in the total SSCI scores among the three latent categories were all statistically significant (p < 0.05), with the "high-self-high-perceived" group again recording the highest score, followed by the "high-self-medium-perceived" and "low-self-low-perceived" groups.
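The between-profile comparison reported in Table 3 amounts to a one-way ANOVA of the SSCI scores across the three assigned classes. The sketch below illustrates this step on simulated scores; the group means and spreads are assumptions, with group sizes matching the reported proportions of the 245 participants.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
low = rng.normal(60, 8, size=36)    # "low-self-low-perceived" (14.69% of 245)
med = rng.normal(82, 8, size=121)   # "high-self-medium-perceived" (49.38%)
high = rng.normal(95, 8, size=88)   # "high-self-high-perceived" (35.91%)

F, p = f_oneway(low, med, high)
print(f"F = {F:.1f}, p = {p:.3g}")  # p < 0.05 mirrors the significant differences reported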
Participant Characteristics Across Potential Categories

The latent categories were analyzed using a χ² test and correlation analysis with the general data. This analysis uncovered statistically significant differences (p < 0.05) across various factors, including educational attainment, marital status, water intake, the number of chronic diseases, the presence of diabetes mellitus, and the frequency of urinary leakage. Among the latent categories, the "low-self-low-perceived" group exhibited the highest proportion of individuals with an elementary school education or below (50.0%), with an adjusted residual of 4.1. Conversely, the "high-self-high-perceived" group had the highest proportion of individuals with a junior high school education (50.0%), but with an adjusted residual of -2.8. The "high-self-medium-perceived" group had the highest proportion of individuals with a senior high school education or above (36.4%), with an adjusted residual of 2.1. Regarding marital status, the "high-self-high-perceived" group had the highest proportion of individuals with a spouse (83.0%), with an adjusted residual of 2.8, whereas the "high-self-medium-perceived" group had the highest proportion of individuals without a spouse (44.4%), with an adjusted residual of 2.4. Considering water intake, the "low-self-low-perceived" group had the highest proportion of individuals consuming 0-1000 mL (27.8%), with an adjusted residual of 5.3, while the "high-self-medium-perceived" group had the highest proportion of individuals consuming 1000-2000 mL (79.3%), with an adjusted residual of 2.3. Meanwhile, the "high-self-high-perceived" group had the highest proportion of individuals consuming ≥2000 mL (29.5%), with an adjusted residual of 2.7. Regarding the number of chronic diseases, the "high-self-medium-perceived" group had the highest proportion of individuals with <2 conditions (50.4%), with an adjusted residual of 2.9, whereas the "low-self-low-perceived" group had the highest proportion of individuals with ≥5 conditions (50.4%), also with an adjusted residual of 2.9. Regarding diabetes mellitus, the "high-self-high-perceived" group had the highest proportion of patients with diabetes (68%), with an adjusted residual of 2.9, while the "high-self-medium-perceived" group had the highest proportion of patients without diabetes (89.3%), with an adjusted residual of 6.8. As for urinary leakage, the "low-self-low-perceived" group had the highest proportion of individuals experiencing leaks ≤1 time (50.0%), with an adjusted residual of 3.5, while the "high-self-high-perceived" group had the highest proportion of individuals experiencing leaks several times per day (26.1%), with an adjusted residual of 2.5. Fig. 2 shows a visual representation of the correlation between participant characteristics and latent categories.

Discussion

This study aimed to investigate the prevailing levels of stigma among elderly females experiencing SUI. Additionally, the study employed the LPA technique to categorize participants according to their stigma experiences, leading to the identification of three distinctive profiles: low-self-low-perceived, high-self-medium-perceived, and high-self-high-perceived. The findings indicated that the largest proportion of participants fell into the "high-self-medium-perceived" group.
In this study, it was observed that elderly female patients experiencing SUI exhibited elevated levels of stigma. Furthermore, the patients' level of self-stigma was found to surpass their level of perceived stigma. These findings can be attributed to several factors. Firstly, SUI can exert a significant impact on patients' social activities and overall quality of life. Consequently, patients may harbor increased concerns about their own physical well-being, leading to an increased sense of self-stigma [23,24,25]. Moreover, older adults often have limited social circles and may place less emphasis on external evaluations; consequently, their perceived stigma scores tend to be relatively low [26,27]. In a study involving 506 female patients experiencing UI, Guan et al. [27] also found that patients had the highest scores for intrinsic shame, which is consistent with the findings of the current study.

The study revealed that educational attainment (level of literacy), marital status, and water intake emerged as significant factors influencing the sense of shame among elderly female patients with SUI. Patients with varying levels of literacy may harbor different attitudes toward themselves and their illness, thereby impacting the degree of shame they experience [28]. Those with higher levels of literacy might place greater importance on etiquette and cultural refinement in social interactions, potentially leading to a greater mental and psychological burden when experiencing incontinence. This, in turn, may increase the likelihood of falling into the "high-self-medium-perceived" group; conversely, patients with low levels of literacy may tend to belong to the "low-self-low-perceived" group. These findings align with those of Wang et al. [28], a study investigating the relationship between stigma and healthcare-seeking behaviors in elderly females. That study found that marital status independently influenced patients' intention to seek healthcare; however, it did not identify a direct impact of marital status on patients' stigma. Conversely, our study demonstrated that patients with a spouse were more inclined to belong to the "high-self-high-perceived" group, while patients without a spouse were more likely to fall into the "low-self-low-perceived" group. This difference could be attributed to patients with spouses being more concerned about their image and privacy, which may lead to an intensified sense of stigma. However, these findings should be further explored with larger sample sizes. Daily water intake exhibited a positive correlation with stigma, with higher water intake associated with membership in the progressively higher-stigma "low-self-low-perceived", "high-self-medium-perceived", and "high-self-high-perceived" groups. This might be because excessive water intake can burden the urinary system, exacerbating UI symptoms and intensifying feelings of shame and embarrassment. The findings from Andersen et al. [29] also suggest that a well-managed water intake regimen can help alleviate UI symptoms, consequently reducing patients' stigma [30,31].
The study also found that the presence of diabetes mellitus, the frequency of urine leakage, and the number of chronic diseases can impact the perception of stigma in elderly females with SUI [32]. Patients with comorbid diabetes were more inclined to belong to the "high-self-high-perceived" group, whereas patients without diabetes were more likely to fall into the "high-self-medium-perceived" group. This phenomenon may be attributed to the fact that diabetes not only affects incontinence symptoms but can also lead to other health issues like retinopathy and neuropathy. These additional health concerns increase the susceptibility of patients to external influences, thereby amplifying their perception of stigma [33,34,35,36,37]. This finding is consistent with the findings of Akyirem et al. [33]. Furthermore, patients with less frequent urine leakage were more likely to fall into the "low-self-low-perceived" group, while those with more frequent leakage tended to belong to the "high-self-high-perceived" group. This observation can be attributed to the fact that urinary leakage not only impacts patients' social activities but also increases the burden and discomfort experienced by others; consequently, patients become more acutely aware of their incontinence [7,8,38,39]. This aligns with the results reported by Cai [38], which indicated that the social interactions and comfort of patients with UI are often affected by urine leakage. It is important to note that while some studies have shown that chronic diseases can lead patients to develop a sense of shame [40,41,42], the present study found that patients with ≥5 chronic diseases reported a lower sense of shame compared to those with <2 chronic diseases. This could be attributed to patients gradually accepting their condition and adapting to the impact of their illnesses on their lives when dealing with multiple chronic diseases. However, it is worth acknowledging that the study's conclusion may be limited by its small sample size and underrepresentation, highlighting the need for further investigation with a larger sample size.

In conclusion, this study highlights the significant impact of stigma on elderly females experiencing SUI. Understanding the factors influencing these emotional responses, such as level of literacy, marital status, water intake, the presence of diabetes mellitus, the frequency of urinary leakage, and the number of chronic diseases, can help healthcare professionals design customized interventions and support systems aimed at enhancing the psychosocial well-being of these patients. Through the reduction of stigma and the promotion of acceptance, there is the potential to improve the overall quality of life for elderly females living with SUI.

Limitations

This study has certain limitations. First, due to its cross-sectional design, causal relationships could not be inferred from the results. Second, data collection was limited to participants from two tertiary hospitals in Guangdong, China, and focused exclusively on older adults, thereby limiting the generalizability of the results.
Conclusion

In summary, this study used the latent profile analysis method alongside the SSCI scale to investigate stigma among elderly female patients with SUI. The findings identified three distinct subgroups, with the majority falling into the "high-self-medium-perceived" group. Particular attention should be directed toward patients with high levels of literacy, elevated water intake, a spouse, serious urine leakage, and coexisting diabetes. Tailored nursing interventions should be implemented to enhance their mental well-being and diminish the burden of stigma they experience.

Fig. 1. Latent profile indicator mean values for the three profiles. Note: S1-S24 refer to items 1 to 24 of the Stigma Scale for Chronic Illness (SSCI).
Depression Detection on Twitter Social Media Using Decision Tree

Depression is a major mood illness that causes patients to experience significant symptoms that interfere with their daily activities. As technology has developed, people now frequently express themselves through social media, especially Twitter. Twitter is a social media platform that allows users to post tweets and communicate with each other. Therefore, detecting depression from social media can support early treatment for sufferers before further care is needed. This study created a system to detect whether a person shows indications of depression based on the Depression Anxiety and Stress Scale-42 (DASS-42) and their tweets, using the Classification and Regression Tree (CART) method with TF-IDF feature extraction. The results show that the most optimal model achieved an accuracy score of 81.25% and an f1 score of 85.71%, both higher than the baseline results of a 62.50% accuracy score and a 66.66% f1 score. In addition, we found that changing the maximum number of features in TF-IDF and the maximum depth of the tree had significant effects on model performance.

Introduction

Depression is a mental health mood disorder that causes patients to experience severe symptoms that affect their daily activities, such as eating, sleeping, and working, and how they feel or think [1]. According to the WHO, depression affects 3.8% of the human population worldwide, including 5.0% of adults and 5.7% of adults over 60 years old; approximately 280 million people worldwide suffer from depression. Depression can cause a person to suffer extremely and to perform poorly in daily activities; it can even lead to suicide. People with depression are frequently misdiagnosed, while people who are not depressed are prescribed antidepressants [2]. With the development of technology, humans often express themselves through posts on social media. A study by Budiman et al. [3] therefore collected data with keywords indicating depressive disorders on the Twitter platform, involving psychiatrists to label the dataset as indicating depression or not. Based on that study, we can identify whether a person shows indications of depression through social media, especially Twitter.

Social media are online platforms for socializing between users with similar interests, backgrounds, or activities, allowing users to interact without restrictions. With social media, humans can communicate with each other wherever they are and whenever they want [4]. According to Kepios, as of April 2022, 58.7% of humans worldwide have social media accounts [5]. Twitter is a social medium for connecting and communicating through the quick and frequent exchange of messages. Users can post tweets containing text, photos, videos, and links; tweets are shown on the user's profile and can be seen by followers or found through Twitter search [6]. The Statista Research Department reports that in January 2022, Twitter had 342.75 million monetizable daily active users worldwide, with Indonesia ranked fifth [7].

Many studies have been done to detect depression through social media, especially Twitter. Research conducted by Nugroho, K. S. et al. [8] on the potential for depression and anxiety disorders on Twitter using BiLSTM resulted in an accuracy score of 94.12%. However, although the accuracy is high, BiLSTM can cause overfitting if the dataset is not big enough.
Ahmed Husseini et al. [9] studied depression detection from Twitter users using several methods; their study reported that a Recurrent Neural Network (RNN) resulted in an accuracy score of 91.245% but has limitations regarding long sentences. A study by Rizki, A. et al. [11], which defined a binary classification identifying whether a person indicates depression based on their Twitter activities using a Support Vector Machine (SVM), Naive Bayes (NB), and a Decision Tree (DT) with all possible combinations of feature values, showed that the SVM model achieved the best accuracy at 82.5%, while NB reached 80% and the DT model, which can fail when exposed to brand-new data, reached 77.5%. In a study by Le Yang et al. [12] that classified depression from audio and video information using a Decision Tree, the performance was almost 100% correct classification, and on the test set the f1 score was 72.4%, higher than the baseline.

If we can detect whether someone shows indications of depression through their social media, then further treatment can be given, either professionally or as moral support from the closest people, before the condition requires more intensive handling. Thus, studying a system that can detect whether a person indicates depression based on their tweets can assist in providing treatment for people who indicate depression. This research was conducted to build a classification model that classifies tweet data to detect whether someone indicates depression or not. We proposed the Decision Tree method because, based on the study by Le Yang et al. [12], a Decision Tree classified almost 100% correctly with a 72.4% f1 score. In addition, we focused on increasing the accuracy and f1 score through hyperparameter tuning to obtain a model with better predictions. Classifying whether someone indicates depression using a Decision Tree model has thus been shown to produce good performance.

Research Methods

This research on detecting users who indicate depression builds on several studies as references. We propose a Decision Tree (DT)-based model, namely the Classification and Regression Tree (CART), that detects which users indicate depression from their tweets, using Term Frequency-Inverse Document Frequency (TF-IDF) for feature extraction. Figure 1 shows the flowchart of the system. This section explains the methods used in this research.

Data Collection

The dataset was obtained by crawling Twitter. Before crawling the tweets, we shared the Depression Anxiety Stress Scale (DASS) 42 questionnaire with respondents; this questionnaire is used for labeling the dataset. The DASS-42 is a psychological assessment scale that measures a person's depression, anxiety, and stress levels based on 42 questions. Each scale (depression, anxiety, and stress) contains 14 items; Table 1 shows the distribution of items [13]. Self-assessment is done by assigning a scale value of 0 to 3 for each item, where 0 means it does not occur, 1 rarely occurs, 2 sometimes occurs, and 3 often occurs. The DASS-42 is scored by calculating the total score for each disorder, so the maximum score for each disorder is 3 × 14 = 42. Table 2 shows the severity levels of the disorder [13].
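As a small sketch of the scoring just described, the function below sums the 14 depression items (each rated 0-3) and applies the binary label used in this study, which is detailed in the next paragraph (a score above 9 indicates depression). The example ratings are made up.

def depression_label(item_scores):
    # DASS-42 depression subscale: 14 items rated 0-3, total score 0-42.
    assert len(item_scores) == 14 and all(0 <= s <= 3 for s in item_scores)
    score = sum(item_scores)
    return 1 if score > 9 else 0  # 1 = indicates depression (scores 10-42)

print(depression_label([1, 2, 0, 3, 1, 1, 0, 2, 1, 0, 1, 0, 1, 0]))  # total 13 -> 1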
In this research, we use only the depression scale for labeling: a respondent is labeled as indicating depression if their depression score is above 9 (scores of 10 to 42 are labeled as indicating depression), without considering the severity level of the disorder. Table 3 shows the depression items; the recovered rows, translated from Indonesian, are:

4. Feeling sad and depressed
5. Losing interest in many things (e.g., eating, moving about, socializing)
6. Feeling unworthy
7. Feeling that life is not valuable
8. Being unable to enjoy the things I do
9. Feeling hopeless and despairing
10. Finding it difficult to be enthusiastic about many things
11. Feeling worthless
12. Having no hope for the future
13. Feeling that life is meaningless
14. Finding it difficult to take the initiative in doing things

After respondents completed the questionnaire (DASS-42) and filled in their Twitter usernames, we crawled their tweets for the dataset. We crawled the tweets without date or keyword limits. The result of data collection contains the username, tweet, and label in CSV format, which becomes the dataset for the next process. Table 4 shows an example of the data collection result.

The dataset contains 157 users with usernames, tweets, and labels. Figure 2 shows the distribution of dataset labels, which contains two labels: "1" means indicating depression, and "0" means not indicating depression. There were 92 users who indicated depression and 65 users who did not.

Data Preprocessing

Data preprocessing is a method to improve data quality and performance [14]. In this research, the preprocessing techniques are case folding, data cleaning, tokenization, stop word removal, and stemming. Case folding is the stage of changing uppercase letters into lowercase letters [15]. Data cleaning is the process of removing noise in the data, such as numbers, emoticons, and punctuation, to remove unnecessary information [16]. Tokenization is the process of splitting sentences into word tokens. Stop word removal is the process of removing unimportant words to reduce the word dimensionality. Finally, stemming is the process of reducing affixed words to their base words [15]. Table 5 shows an example of data preprocessing.

Feature Extraction with TF-IDF

Machine learning algorithms cannot process raw text directly; feature extraction is needed to convert text into a matrix or vector [17]. Feature extraction is a technique to remove irrelevant data features and reduce the dimensionality of the data space [18]. In this research, we propose Term Frequency-Inverse Document Frequency (TF-IDF) as the feature extraction method. TF-IDF is a technique that calculates the weight of each word: TF measures how often a word appears in a single document, while IDF weighs each word by how rare it is across documents, so a word that appears often in a document but rarely in the corpus receives a higher weight [19].

Modeling with Decision Tree

A Decision Tree (DT) is an algorithm based on the concept of converting data into a visual form of decision tree rules [20]. A DT is a tree-like classification model where each branch represents a choice and each leaf represents the outcome of the decision. The advantage of this method is that it can break a previously complex decision-making area into simpler and more specific regions. In addition, a DT is flexible in selecting features at its internal nodes; the selected feature differentiates one criterion from other criteria at the same node. This flexibility can improve the quality of the decision results [21].
A tree starts with a root node that represents a decision. Based on the root node, the tree splits into branches representing the possible decisions; the result is a leaf node that represents the resulting class [21]. A DT needs to split each node based on the best value, and different DT algorithms use different calculations to determine this best split value. Table 6 shows a comparison of these calculations [22].

CART is the term introduced by Breiman et al. (1984) to refer to DT algorithms for classification or regression modeling. CART uses the gini index as its splitting criterion [23]. The gini index is defined as

Gini(D) = 1 − Σⱼ Pⱼ²,

where D is a dataset containing n samples and Pⱼ is the relative probability that a sample of category j appears in dataset D. The gini index is used to measure how uneven the category distribution is at a node: a lower gini index value corresponds to a more uneven (purer) category distribution of the samples, which means the capacity to differentiate between various categories improves when the subsets created by a splitting point have higher category purity [23]. For a split of D into D₁ and D₂, the gini index of an attribute is

Gini_split(D) = (n₁/n) Gini(D₁) + (n₂/n) Gini(D₂),

where n₁ represents the amount of data in D₁ and n₂ represents the amount of data in D₂ [23].

Evaluation

In this research, we used the accuracy score and the f1 score to evaluate the system's performance. Accuracy represents how many classes are classified correctly and is obtained from the True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN) of the confusion matrix, which represents the actual and predicted classes in a square matrix [24]. Accuracy is defined as

Accuracy = (TP + TN) / (TP + TN + FP + FN).

The f1 score can be defined as the harmonic mean of precision and recall; a high f1 score means the model has good precision and recall values. Precision is the ratio between TP and the total data predicted to be positive, and recall is the ratio between TP and the total data that are actually positive [25]. Precision, recall, and the f1 score are defined as

Precision = TP / (TP + FP),
Recall = TP / (TP + FN),
F1 = 2 × Precision × Recall / (Precision + Recall).

Dataset

In this research, we shared the DASS-42 questionnaire with respondents to label the dataset. Table 7 shows the top five rows of the DASS-42 results. After that, we performed data preprocessing on the dataset, from case folding to stemming; Table 8 shows an example of the preprocessing result. Then, we split the dataset and performed feature extraction using TF-IDF with various numbers of features. These various data split ratios and numbers of TF-IDF features were used to determine the baseline: we split the data into 70:30, 80:20, and 90:10 ratios, with 5000, 7000, and 10000 maximum features in TF-IDF, before modeling.

Experimental Result

In this research, we conducted three experiments: the CART algorithm with various data split ratios and various maximum features in TF-IDF to determine the baseline; the CART algorithm with hyperparameter tuning of the maximum tree depth to increase performance; and a comparison of our model with other DT-based algorithms. Figure 3 shows the results for the 70:30 data split with 5000, 7000, and 10000 maximum features in TF-IDF; the best result is 5000 maximum features, with a 56.25% accuracy score and a 55.31% f1 score. Figure 4 shows the results for the 80:20 data split with 5000, 7000, and 10000 maximum features in TF-IDF; the best result is 5000 maximum features, with a 59.37% accuracy score and a 43.47% f1 score. Finally, Figure 5 shows the results for the 90:10 data split with 5000, 7000, and 10000 maximum features in TF-IDF.
The best result is 5000 maximum features, with a 62.50% accuracy score and 66.66% f1 score. Based on these results, the model performed best with the 90:10 data split and 5000 maximum features in TF-IDF, giving a 62.50% accuracy score and 66.66% f1 score. This result is the baseline for the next experiment. The comparison of data split ratios and numbers of maximum features can be seen in Table 9. The number of features in TF-IDF has a significant effect on performance: more TF-IDF features decrease the accuracy and increase the f1 score, while fewer features increase the accuracy and decrease the f1 score. At the 70:30 data split, when the maximum features are increased from 5000 to 10000, the accuracy decreases by 4.17% and the f1 score increases by 1.29%. At the 80:20 data split, the same increase decreases the accuracy by 6.25% and increases the f1 score by 13.67%. At the 90:10 data split, the same increase decreases the accuracy by 6.25%, while the f1 score remains unchanged. Based on these results, we concluded that a larger amount of training data enhances the model's performance, whereas a higher number of TF-IDF features decreases the accuracy score but tends to increase the f1 score. Figure 6 shows the accuracy obtained when trying various values of the maximum tree depth parameter against the train and test data, and Figure 7 shows the trendlines of the train and test accuracy. As can be seen, increasing the maximum depth enhances training accuracy but lowers test accuracy: the trendline for the training data rises while the trendline for the test data falls, and the gap between train and test accuracy widens. This leads to overfitting, in which the model predicts almost perfectly on the training data but fails to predict on the test data. We therefore pre-pruned the tree by stopping its growth early, and obtained an optimal maximum depth of 4. With this value, the accuracy is 82.26% on the train data and 81.25% on the test data; in this scenario, the accuracy was increased by 18.75% from the baseline. Thus, tuning the maximum depth of the tree can lead to better performance, but an ill-chosen depth can lead to overfitting, with the test data predicted far less well than the train data. The third experiment compares the CART algorithm to other DT-based algorithms: AdaBoost Decision Tree, Gradient Boosted Decision Tree, and Random Forest. The comparison results can be seen in Table 10. The CART algorithm with hyperparameter tuning has the best result among the DT-based algorithms, including the baseline result: the accuracy increases by 18.75% and the f1 score by 19.05% over the baseline. This means that tuning the tree's maximum depth significantly affects the model's performance. Conclusion In this research, we created a detection model that predicts whether a user indicates depression from their tweets. We developed a CART algorithm with TF-IDF feature extraction and hyperparameter tuning of the maximum tree depth. Our model outperforms the baseline result and other DT-based algorithms such as AdaBoost, Gradient Boosting, and Random Forest.
The best model is the CART with a 90:10 data split, 5000 maximum features in TF-IDF, and a maximum tree depth of 4, with an accuracy score of 81.25% and an f1 score of 85.71%. Furthermore, our experiments show that the amount of training data, the number of TF-IDF features, and the depth of the tree all have a significant effect on the model, but they must be chosen carefully so that the model does not overfit. Based on the results, the classification model can detect whether a person indicates depression from their tweets with good performance, which can assist in providing treatment for a person who shows indications of depression. For future work, the approach can be tested with larger datasets, other parameter tuning, and other feature extraction methods as comparisons to achieve better performance.
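As a concrete illustration of the pre-pruning experiment described above, the following Python sketch sweeps the max_depth hyperparameter of a scikit-learn DecisionTreeClassifier and reports train and test accuracy, the comparison the paper uses to pick a depth of 4. The vectorized matrices X_train_v and X_test_v are assumed to come from the TF-IDF step sketched earlier; this is a sketch of the procedure, not the authors' code.

from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Sweep candidate depths; a widening train-test gap signals overfitting.
for depth in range(1, 21):
    clf = DecisionTreeClassifier(criterion="gini", max_depth=depth, random_state=42)
    clf.fit(X_train_v, y_train)
    train_acc = accuracy_score(y_train, clf.predict(X_train_v))
    test_acc = accuracy_score(y_test, clf.predict(X_test_v))
    print(f"max_depth={depth:2d}  train={train_acc:.4f}  test={test_acc:.4f}")

Picking the depth where test accuracy peaks while the train-test gap stays small reproduces the early-stopping (pre-pruning) logic of the second experiment.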
2022-09-02T15:13:23.741Z
2022-08-31T00:00:00.000
{ "year": 2022, "sha1": "91857731697f55c6101c011e3ecc547755eb1bc4", "oa_license": "CCBY", "oa_url": "https://jurnal.iaii.or.id/index.php/RESTI/article/download/4275/637", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c8467c4372fb0aab45843e8aa20a8107645b7edb", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
235677161
pes2o/s2orc
v3-fos-license
Clinical Perspectives on the Use of Subcutaneous and Oral Formulations of Semaglutide Early and effective glycemic control can prevent or delay the complications associated with type 2 diabetes (T2D). The benefits of glucagon-like peptide-1 receptor agonists (GLP-1RAs) are becoming increasingly recognized and they now feature prominently in international T2D treatment recommendations and guidelines across the disease continuum. However, despite providing effective glycemic control, weight loss, and a low risk of hypoglycemia, GLP-1RAs are currently underutilized in clinical practice. The long-acting GLP-1RA, semaglutide, is available for once-weekly injection and in a new once-daily oral formulation. Semaglutide is an advantageous choice for the treatment of T2D since it has greater efficacy in reducing glycated hemoglobin and body weight compared with other GLP-1RAs, has demonstrated benefits in reducing major adverse cardiovascular events, and has a favorable profile in special populations (e.g., patients with hepatic impairment or renal impairment). The oral formulation represents a useful option to help improve acceptance and adherence compared with injectable formulations for patients with a preference for oral therapy, and may lead to earlier and broader use of GLP-1RAs in the T2D treatment trajectory. Oral semaglutide should be taken on an empty stomach, which may influence the choice of formulation. As with most GLP-1RAs, initial dose escalation of semaglutide is required for both formulations to mitigate gastrointestinal adverse events. There are also specific dose instructions to follow with oral semaglutide to ensure sufficient gastric absorption. The evidence base surrounding the clinical use of semaglutide is being further expanded with trials investigating effects on diabetic retinopathy, cardiovascular outcomes, and on the common T2D comorbidities of obesity, chronic kidney disease, and non-alcoholic steatohepatitis. These will provide further information about whether the benefits of semaglutide extend to these other indications. Glycemic management in patients with T2D has become more individualized, and there are now several different treatment options available, with various factors influencing the most appropriate choice for individual patients. Glucagon-like peptide-1 (GLP-1) receptor agonists (GLP-1RAs) are a well-established class of glucose-lowering agents that act on multiple pathophysiological defects in T2D, providing effective glycemic control, weight loss, and a low risk of hypoglycemia, with a well-characterized safety profile (3). In addition, as described by Smits and van Raalte in this supplement (4), certain GLP-1RAs have also been shown to reduce the risk of cardiovascular (CV) events, as well as some renal-related endpoints, in CV outcomes trials (CVOTs) (5)(6)(7)(8). This article will review the place of GLP-1RAs in therapy and, within this class, specifically discuss some clinical considerations around the use of the long-acting GLP-1RA, semaglutide, when given subcutaneously or via its new oral formulation. WHAT IS THE PLACE OF GLP-1RAS IN THERAPY? Metformin is the first-line therapy of choice for most patients with T2D; however, if patients do not achieve their individualized HbA1c target after 3-6 months, another glucose-lowering medication should be added (9).
In 2018, the American Diabetes Association (ADA)/European Association for the Study of Diabetes (EASD) consensus for the management of hyperglycemia in T2D presented a new decision algorithm and, as part of this, key patient characteristics should be assessed, including the existence of comorbidities such as atherosclerotic CV disease (CVD), chronic kidney disease (CKD), or heart failure (HF), which necessitate the preferential use of certain classes of glucose-lowering agents as second-line therapy (9,10). In patients who have established atherosclerotic CVD or evidence of high atherosclerotic CVD risk, the ADA/EASD consensus now recommends either a GLP-1RA or a sodium-glucose co-transporter-2 inhibitor (SGLT2i) (if estimated glomerular filtration rate [eGFR] is adequate) with proven efficacy to reduce the risk of CV events (9,11). This change represents a shift in diabetes management beyond glycemic control alone and was based on CVOTs, which demonstrated that several GLP-1RAs and SGLT2is reduced the risk of major adverse CV events (MACE; CV death, nonfatal myocardial infarction, and nonfatal stroke) compared with placebo (5-8, 12, 13). A 2019 update to the ADA/EASD consensus, based on results from the REWIND CVOT with dulaglutide, suggests that a GLP-1RA or SGLT2i should also be considered in high-risk T2D patients without established CVD but with indicators of high CV risk, such as age ≥55 years with coronary, carotid, or lower-extremity artery stenosis >50%, left ventricular hypertrophy, an eGFR <60 mL/min/1.73 m², or albuminuria (8,11). Of note, beneficial outcomes observed in CVOTs do not appear to be restricted to patients with elevated HbA1c, and the 2019 update of the ADA/EASD consensus suggests that GLP-1RAs or SGLT2is should be considered independently of baseline HbA1c or the individualized HbA1c target in patients at high CV risk (11). In recent guidelines from the European Society of Cardiology on diabetes, prediabetes, and CVD, in collaboration with the EASD, a GLP-1RA or SGLT2i with proven CVD benefit is recommended as an add-on therapy to metformin and even as a first-line therapy in drug-naïve or metformin-intolerant patients with T2D and CVD or at high or very high CV risk (14). For patients in whom HF or CKD predominates, the ADA/EASD consensus recommends an SGLT2i with evidence of reducing HF and/or CKD progression, or, if SGLT2is are not tolerated or contraindicated or if eGFR is less than adequate, a GLP-1RA with proven CV benefit can be added (11). If further treatment intensification is needed after second-line SGLT2i therapy, a GLP-1RA may be added (11). Recent results from a meta-analysis indicate greater reductions in HbA1c, body weight, and systolic blood pressure with a lower requirement of rescue therapy when a GLP-1RA was added in combination with an SGLT2i vs. SGLT2i monotherapy alone (15). For patients without CVD, the ADA/EASD consensus advocates involving specific factors that could impact on the choice of treatment, including the need to avoid weight gain and/or hypoglycemia, in the decision cycle (9,11). In addition, the importance of choosing treatment regimens to optimize adherence and persistence is emphasized (9). For patients without established CVD but with a compelling need to minimize weight gain or promote weight loss, either a GLP-1RA with good efficacy for weight loss or an SGLT2i is recommended (9,11).
For patients without established CVD but with a compelling need to minimize hypoglycemia, a GLP-1RA, an SGLT2i, a dipeptidyl peptidase-4 inhibitor, or a thiazolidinedione are the recommended options. A sulfonylurea or a thiazolidinedione should be considered when cost is a major issue. Current Underutilization Despite being effective glucose-lowering therapies with CV and renal benefits, GLP-1RAs are often underutilized. A nationwide analysis in Denmark found that, while the use of GLP-1RAs has increased since their introduction in 2005, they still only accounted for 8% of all glucose-lowering drugs used in 2017 (16). In a survey of patients who initiated a GLP-1RA in Northern Italy over the period 2010 to 2018 (N = 5,408), it appeared that over time GLP-1RAs were being prescribed to patients with progressively more advanced disease, with significant increases in baseline age, diabetes duration, presence of CVD, and insulin use in patients receiving GLP-1RA therapy during the study period (17). This apparent delay in prescribing GLP-1RAs and intensifying treatment, despite poor glycemic control in a substantial proportion of patients, was also seen in a UK survey of 113 physicians who contributed data for 1,096 patients (18). The median time from diagnosis to GLP-1RA initiation was 6.1 years, and patients had HbA1c values above 7.0% for a median of 13.5 months prior to switching from their last oral regimen to a GLP-1RA. In a UK physician perceptions survey completed in 2014, factors that most commonly caused hesitation when prescribing GLP-1RAs included that they were not considered first-line therapy according to guidelines, their injectable mode of administration, cost, and the potential for gastrointestinal (GI) adverse effects (19). The most common reasons reported for prescribing GLP-1RAs were weight loss, good efficacy, and low hypoglycemia risk. DEVELOPMENT OF GLP-1RAS AND SEMAGLUTIDE Although GLP-1RAs act via the same overall mechanism, they vary structurally and differ in their pharmacokinetics and clinical specifics (Table 1), with some degree of heterogeneity with respect to their ability to reduce HbA1c and body weight, and evidence of cardiorenal protection (27,28). The first GLP-1RAs to be developed needed to be administered subcutaneously twice daily (exenatide (20)) or once daily (lixisenatide (21) and liraglutide (22)). Subsequent developments led to the approval of longer-acting GLP-1RAs that could be administered once weekly (exenatide extended release [ER] (23), dulaglutide (24), and semaglutide (25)) to reduce the injection burden and improve convenience. Indeed, once-weekly regimens have been associated with better adherence than more frequently dosed agents (exenatide vs. liraglutide) (29), and this may lead to improved outcomes. It is known that some patients prefer oral over injectable medications (44,45), and lower treatment adherence has been reported with more frequent administration or when patients perceive the treatment as difficult or inconvenient (45,46). Oral medication may also help to overcome the clinical inertia seen in the frequent reluctance to initiate injectable medicines. For this reason, an oral formulation of semaglutide was developed and was approved for the treatment of adults with T2D by the U.S. Food and Drug Administration in September 2019 and by the European Medicines Agency in April 2020.
In Europe, subcutaneous semaglutide and oral semaglutide are indicated as adjuncts to diet/exercise either as monotherapy, when metformin is considered inappropriate due to intolerance or contraindications, or in combination with other glucose-lowering medication(s), for patients who do not have sufficient glycemic control (25,26). As the first oral formulation of a GLP-1RA, oral semaglutide represents a useful option to help improve acceptance and adherence compared with injectable formulations in those patients with a preference for oral therapy, and may contribute to the reversal of current underutilization, potentially leading to earlier initiation of GLP-1RAs in the T2D disease continuum. DOSING CONSIDERATIONS WITH SUBCUTANEOUS AND ORAL SEMAGLUTIDE Dose Escalation As a class, GLP-1RAs have a well-defined safety profile. The most commonly reported adverse events (AEs) are GI-related effects, including nausea, diarrhea, and vomiting, which are generally mild-to-moderate in severity and transient in nature (47). In general, GI AEs are most frequent shortly after treatment initiation and therefore slow up-titration of the dose is recommended for most GLP-1RAs (Table 1). For subcutaneous semaglutide, the starting dose is 0.25 mg once weekly, and after 4 weeks, the dose should be increased to 0.5 mg once weekly (25). After at least 4 weeks on a dose of 0.5 mg once weekly, the dose can be increased to 1 mg once weekly to further improve glycemic control. For oral semaglutide, patients should start treatment with the 3 mg dose once daily for 1 month, then increase to 7 mg once daily (26). After at least 1 month on a dose of 7 mg once daily, the dose can be increased to a maintenance dose of 14 mg once daily if needed to further improve glycemic control. When starting semaglutide, patients should be reassured that GI AEs do not affect the majority of patients and are likely to be only mild-to-moderate in severity and transient (25,26). To help minimize any nausea, patients could be advised to eat smaller meals and stop when they feel full, and to avoid meals with a high fat content (48)(49)(50). Dosing Instructions Subcutaneous semaglutide can be dosed at any time on the day of the weekly injection, with or without meals (25). For oral semaglutide, the presence of food in the stomach impairs absorption (51,52). Patients are advised to swallow the oral semaglutide tablet on an empty stomach, with a sip of water (up to half a glass of water, equivalent to 120 mL), and to wait at least 30 minutes before eating, drinking, or taking other oral medications (26). This may be problematic for some patients, and may influence their preferred choice of formulation. In pharmacokinetic studies, subcutaneous or oral semaglutide did not have clinically relevant effects on the exposure of other widely used medications, such as warfarin, metformin, digoxin, atorvastatin/rosuvastatin (53)(54)(55), or the combined oral contraceptive, ethinylestradiol/levonorgestrel (Figure 1) (56,57). In addition, oral semaglutide did not have clinically relevant effects on the exposure of lisinopril or furosemide (53,54). When tested with omeprazole, which increases gastric pH, no clinically relevant interactions were observed on the exposure of oral semaglutide (58). In a drug-drug interaction study, levothyroxine exposure was increased by 33% when co-administered with oral semaglutide 14 mg, which may be due to delayed gastric emptying and increased levothyroxine absorption (59).
Monitoring of thyroid parameters should therefore be considered when treating patients with oral semaglutide at the same time as levothyroxine (26). When co-administering other oral medications, it is important to adhere to the administration instructions for oral semaglutide, and consider increased monitoring for medications that have a narrow therapeutic index or that require clinical monitoring (60). In population pharmacokinetic and exposure-response analyses, the exposure range following oral semaglutide was wider than for subcutaneous dosing but with a considerable overlap between oral semaglutide 7 and 14 mg and subcutaneous semaglutide 0.5 and 1.0 mg (61). The effect of switching between oral and subcutaneous semaglutide cannot easily be predicted because of the high pharmacokinetic inter-individual variability of oral semaglutide; however, exposure after 14 mg oral semaglutide once daily appears comparable with 0.5 mg subcutaneous semaglutide once weekly (26). It is recommended that patients switching from once-weekly subcutaneous semaglutide at a dose of 0.5 mg can be transitioned onto oral semaglutide at a dose of 7 or 14 mg once daily, up to 7 days after their last injection of subcutaneous semaglutide; however, there is no equivalent oral dose for those switching from subcutaneous semaglutide 1 mg (60). SEMAGLUTIDE IN RENAL IMPAIRMENT CKD is a common complication of T2D and a major cause of morbidity and mortality (62). The exendin-4-based GLP-1RAs, exenatide (immediate-release and ER) and lixisenatide, are partially renally eliminated and are not recommended in patients with severe renal impairment (eGFR <30 mL/min/1.73 m²). To provide further data on the use of semaglutide in patients with renal dysfunction, the PIONEER 5 trial evaluated the efficacy and safety of once-daily oral semaglutide 14 mg vs. placebo in 324 patients with T2D and moderate renal impairment (eGFR 30-59 mL/min/1.73 m²) (65). Superior and significant reductions in HbA1c and body weight were observed with oral semaglutide vs. placebo over 26 weeks, and renal function was unchanged throughout the study in both treatment groups. Patients with CKD were also included in the SUSTAIN 6 and PIONEER 6 CVOTs (6,66). Indeed, in SUSTAIN 6, the CKD-related endpoint of new or worsening nephropathy was found to occur in significantly fewer patients in the subcutaneous semaglutide group compared with the placebo group (3.8% vs. 6.1%; HR 0.64; 95% CI 0.46-0.88; p = 0.005) (6). GLP-1RAs may exert beneficial actions on the kidneys through reductions in blood glucose, blood pressure, and weight, as well as via possible direct cardio-nephroprotective mechanisms, such as improved endothelial dysfunction, reduced oxidative stress, and reduced inflammation (62). FIGURE 1 | Effect of (A) subcutaneous semaglutide and (B) oral semaglutide on the pharmacokinetics of co-administered drugs (53)(54)(55)(56)(57). AUC, area under the curve; CI, confidence interval; Cmax, maximum concentration. The phase III FLOW trial (NCT03819153) is ongoing to determine the effect of once-weekly subcutaneous semaglutide 1.0 mg vs. placebo on the progression of renal impairment in over 3,000 patients with T2D and CKD (eGFR 50-75 mL/min/1.73 m² and urinary albumin-to-creatinine ratio [UACR] >300-<5,000 mg/g or eGFR 25-50 mL/min/1.73 m² and UACR >100-<5,000 mg/g) (67).
The primary endpoint is the time to the first occurrence of a composite primary outcome event, defined as persistent eGFR decline of ≥50% from trial start, reaching ESRD, death from kidney disease, or death from CVD, for up to 5 years. SEMAGLUTIDE IN HEPATIC IMPAIRMENT There is a complex interplay between T2D and liver disease, particularly non-alcoholic fatty liver disease (NAFLD) and non-alcoholic steatohepatitis (NASH), which are common in patients with T2D (68). The mechanisms responsible for the link between NAFLD and T2D are not completely understood but could include genetic factors, insulin resistance, dysfunctional adipose tissue, chronic hyperglycemia, altered gut microbiome, and changes in hepatokines, among others (68,69). Novel therapies are in demand for the treatment of NAFLD, and early studies suggested that GLP-1RAs may reduce liver inflammation and fibrosis (72). Potential mechanisms for the GLP-1RAs' benefit in the context of NAFLD include: reduced body weight and body fat through central regulation of satiety; reduced hepatic, skeletal muscle, and adipose tissue insulin resistance due to decrease in body weight; modified intestinal lipoprotein metabolism; and amelioration of dysfunctional adipose tissue and enhancement of insulin release (72,73). The safety and efficacy of liraglutide 1.8 mg once daily for 48 weeks were tested in a phase II trial in 52 patients with NASH, in which this drug was found to be well-tolerated (74). Furthermore, there was evidence of histological resolution in the end-of-treatment biopsy in 39% of patients in the liraglutide group compared with only 9% in the placebo group. A phase II trial recently evaluated the effects of once-daily subcutaneous semaglutide (0.1 mg, 0.2 mg, and 0.4 mg) vs. placebo in 320 patients with NASH (75). Treatment with semaglutide 0.4 mg resulted in a significantly higher percentage of patients achieving the primary endpoint of NASH resolution and no worsening of fibrosis than placebo after 72 weeks (59% vs. 17%; p < 0.001). Given the lack of hepatic GLP-1 receptor expression, the potential mechanism of action by which semaglutide results in NASH resolution may be mediated via weight loss. However, semaglutide is also associated with improvements in insulin resistance, hepatic lipotoxicity, and hepatic inflammation. In pre-clinical models, improvements in inflammation with liraglutide were shown to be independent of weight reduction, as was prevention of initiation of fibrosis (76). Thus, it appears unlikely that improvements in NASH with GLP-1 receptor agonists are solely mediated via weight reduction. SEMAGLUTIDE IN OBESITY Compared with other GLP-1RAs, the capability for weight loss appears to be higher with semaglutide, and the ADA/EASD consensus provides the following ranking for weight-loss efficacy: subcutaneous semaglutide > liraglutide > dulaglutide > exenatide > lixisenatide (9). The mechanisms responsible for weight loss have been investigated for both subcutaneous and oral semaglutide (77,78). In 30 patients with obesity, ad libitum energy intake was substantially lower with once-weekly subcutaneous semaglutide (dose escalated to 1.0 mg) vs. placebo for 12 weeks, and this was associated with reduced appetite and food cravings, better control of eating, and lower preference for fatty, energy-dense food (77).
Subcutaneous semaglutide induced a 5.0 kg reduction in mean body weight after 12 weeks, which was found to be derived predominantly from body fat mass reduction, assessed by air displacement plethysmography. Consistent results have been observed with once-daily oral semaglutide (dose escalated to 14 mg) vs. placebo in a similar study in 15 patients with T2D (78). A phase II dose-finding trial evaluated the efficacy and safety of once-daily subcutaneous semaglutide in promoting weight loss (79). In total, 957 patients with obesity (body mass index [BMI] ≥30 kg/m²) but without T2D were randomized to once-daily subcutaneous semaglutide (dose escalated to 0.05 mg, 0.1 mg, 0.2 mg, 0.3 mg, or 0.4 mg), once-daily subcutaneous liraglutide (dose escalated to 3.0 mg), or placebo, in combination with dietary and physical activity counseling, with the primary endpoint of percentage weight loss at week 52. Estimated mean weight change was -2.3% for the placebo group and ranged from -6.0% with subcutaneous semaglutide 0.05 mg to -13.8% with subcutaneous semaglutide 0.4 mg after 52 weeks (all p ≤ 0.001). Furthermore, mean body weight reductions with semaglutide at a dose of 0.2 mg or higher were significantly greater than with liraglutide (-7.8%). These findings paved the way for the phase III STEP (Semaglutide Treatment Effect in People with obesity) program, which is currently investigating body weight changes following treatment with once-weekly 2.4 mg subcutaneous semaglutide (80). This global clinical program has enrolled approximately 5,000 adults with overweight or obesity. The main eligibility criteria for weight in the STEP 1, 3, 4, and 5 trials were BMI ≥30 kg/m² or BMI ≥27 kg/m² with at least one weight-related comorbidity (hypertension, dyslipidemia, obstructive sleep apnea, or CVD), while patients in STEP 2 had to have a BMI ≥27 kg/m² and T2D. The primary endpoint of STEP 1-5 is the change from baseline to end of treatment in body weight; the proportion of patients achieving a body weight reduction of ≥5% is a co-primary endpoint in STEP 1-3 and 5. In the completed STEP trials, semaglutide 2.4 mg as an adjunct to lifestyle intervention led to mean body weight losses of 15-17% over 68 weeks in patients without T2D (STEP 1, 3 and 4), with a smaller mean weight loss of 9.6% seen in patients with T2D over the same period (STEP 2). At week 68, 86-89% of patients without T2D achieved ≥5% body weight loss (STEP 1, 3 and 4), with 69% of patients with T2D achieving this threshold (STEP 2). Across all studies, semaglutide 2.4 mg also demonstrated benefits beyond weight loss on cardiometabolic parameters and patient-reported outcomes (81)(82)(83)(84). In addition to the STEP program, the effect of semaglutide treatment on CV outcomes is being assessed in adults aged ≥45 years with overweight or obesity. The SELECT phase III trial (NCT03574597) is investigating whether once-weekly subcutaneous semaglutide (up to 2.4 mg) can reduce MACE vs. placebo in approximately 17,500 people with overweight or obesity and established CVD, with a follow-up of approximately 5 years (85).
Most events occurred early in the trial, and this has been suggested to be attributable to the magnitude and rapidity of the HbA 1c reduction in patients with pre-existing diabetic retinopathy (86). Patients with proliferative retinopathy or maculopathy resulting in active treatment were excluded from the PIONEER 6 CVOT, in which no apparent imbalance was observed between oral semaglutide and placebo in the AE reporting of diabetic retinopathy over 16 months (66). The long-term FOCUS phase III trial (NCT03811561) is currently ongoing to specifically investigate the effects of subcutaneous semaglutide on diabetic retinopathy complications (87). Approximately 1,500 patients with T2D and Early Treatment Diabetic Retinopathy Study (ETDRS) level of 10-75 in both eyes and no ocular or intraocular treatment for diabetic retinopathy or diabetic macular edema in the 6 months prior to screening will receive once-weekly subcutaneous semaglutide 1.0 mg or placebo for up to 5 years, with the primary endpoint of progression of 3 steps or more in ETDRS level. Subcutaneous semaglutide significantly reduced the rate of MACE vs. placebo in a post-hoc non-prespecified analysis of SUSTAIN 6, but it is unknown whether oral semaglutide can also reduce CV events (6). In PIONEER 6, oral semaglutide significantly reduced the rate of MACE and decreased all-cause mortality vs. placebo. However, while oral semaglutide was demonstrated to be noninferior to placebo in PIONEER 6, the trial was not powered to assess any potential CV benefit (66). SOUL (NCT03914326) is an ongoing CVOT evaluating the effects of once-daily oral semaglutide (up to 14 mg) vs. placebo in 9,642 patients with T2D and CVD, cerebrovascular disease, symptomatic peripheral artery disease, or CKD (88). The primary endpoint is time to the first occurrence of MACE, with a follow-up of approximately 5 years. Secondary endpoints will explore the effects of oral semaglutide on other CV endpoints and assess any improvements in additional diabetic complications, including CKD and limb ischemia. CONCLUSIONS The benefits of GLP-1RAs are becoming increasingly recognized in international T2D recommendations and, along with other agents targeted at T2D pathophysiology, such as SGLT2is, their initiation early in the disease trajectory is advocated. The higher efficacy of semaglutide in reducing HbA 1c and body weight compared with other GLP-1RAs and favorable clinical characteristics make semaglutide, either subcutaneous or oral, an advantageous choice for T2D treatment. Oral semaglutide provides an additional treatment option for patients and physicians who may be reluctant to initiate or intensify therapy by injection, and this may also help to increase earlier GLP-1RA utilization. Where unanswered questions remain about the impact of semaglutide on outcomes, ongoing trials are underway to provide additional clarity. Effects on diabetic nephropathy and retinopathy are being assessed for subcutaneous semaglutide, and whether there are any positive CV benefits of oral semaglutide will also be determined. The management of comorbidities that are increasingly common in patients with T2D, such as obesity and liver disease, need to be better addressed; in this respect, ongoing trials will provide further information about whether the benefits of semaglutide extend to these other indications.
2021-06-30T13:22:45.474Z
2021-06-29T00:00:00.000
{ "year": 2021, "sha1": "853994e08e1991eafb691065ddee6f3203740136", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2021.645507/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "853994e08e1991eafb691065ddee6f3203740136", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211033000
pes2o/s2orc
v3-fos-license
Lysophosphatidic Acid Induces Apoptosis of PC12 Cells Through LPA1 Receptor/LPA2 Receptor/MAPK Signaling Pathway Lysophosphatidic acid is a small extracellular signaling molecule, which is elevated in pathological conditions such as ischemic stroke and traumatic brain injury (TBI). LPA regulates the survival of neurons in various diseases. However, the molecular mechanisms underlying LPA-induced neuronal death remain unclear. Here we report that LPA activates LPA1 and LPA2 receptors, and the downstream MAPK pathway, to induce the apoptosis of PC12 cells through mitochondrial dysfunction. LPA elicits the activation of ERK1/2, p38, and JNK pathways, decreases the expression of Bcl2, promotes the translocation of Bax, and enhances the activation of caspase-3, resulting in mitochondrial dysfunction and cell apoptosis. This process can be blocked by LPA1 and LPA2 receptor antagonists and MAPK pathway inhibitors. Our results indicate that the LPA1 receptor, the LPA2 receptor, and the MAPK pathway play a critical role in LPA-induced neuronal injury. LPA receptors and MAPK pathways may be novel therapeutic targets for ischemic stroke and TBI, where excessive LPA signaling exists. INTRODUCTION Lysophosphatidic acid (LPA) is the structurally simplest phospholipid, which functions as an extracellular signaling molecule via binding to its receptors. Six G protein-coupled LPA receptors have been reported (Hecht et al., 1996; Noguchi et al., 2003; Kotarsky et al., 2006; Lee et al., 2006; Pasternack et al., 2008; Wu et al., 2018). Activation of LPA receptors results in the activation of various downstream pathways including MAPK, Rock, and PI3K (Wu et al., 2018). Through activating these pathways, LPA mediates a series of cellular functions including cell proliferation, cell migration, and cell survival (Wu et al., 2018). In recent years, LPA has been found to induce neuronal death both in vitro and in vivo (Ramesh et al., 2018). Steiner et al. (2000) found that micromolar concentrations of LPA induced cell death of cultured hippocampal neurons and neuronal PC12 cells. The concentration of LPA in the cerebrospinal fluid of patients with traumatic brain injury (TBI) was elevated compared to controls. The administration of an LPA monoclonal antibody blocked LPA signaling and exerted a protective effect against TBI-induced brain injury (Crack et al., 2014). In a previous study (Wang et al., 2018), we reported that after ischemic brain injury, the concentration of LPA was increased in the rat brain, while an inhibitor of autotaxin, the enzyme that catalyzes the production of LPA, reduced the apoptotic rate of neurons in a rat model of ischemia-induced brain injury. These findings suggest that LPA may regulate neuronal damage under pathological conditions. However, the specific molecular mechanisms underlying LPA-induced neuronal death remain unclear. Our current research focuses on how LPA induces neuronal apoptosis, as apoptosis is one of the major pathways by which neuronal death occurs (Hollville et al., 2019). Neuronal apoptosis induced by LPA was accompanied by a decrease in mitochondrial membrane potential (MMP) (Steiner et al., 2000); hence we tested whether mitochondrial dysfunction contributes to LPA-induced cell death. Among the six LPA receptors, the LPA1 receptor and LPA2 receptor are the most extensively studied, and their specific antagonists are widely used in research (Lee et al., 2019; Lopez-Serrano et al., 2019).
It has been reported that LPA induces activation of the MAPK pathway, which consists of ERK1/2, p38, and JNK (Park et al., 2018). Here we investigated the molecular mechanisms underlying LPA-induced cell death. We found that the activation of the LPA1 receptor/LPA2 receptor/MAPK pathway and mitochondrial dysfunction contribute to LPA-induced neuronal injury. Cell Culture PC12 cells were obtained from the China Center for Type Culture Collection, and cultured in DMEM medium supplemented with 10% horse serum (v/v), 5% fetal bovine serum (v/v), 50 U/mL penicillin, and 50 µg/mL streptomycin at 37 °C in a 5% CO2 humidified incubator. The cells were differentiated by incubation in DMEM medium supplemented with 50 ng/mL nerve growth factor for 2 days before experiments. Primary rat cortical neurons were prepared from E16 embryos, and cultured as previously described (Liu et al., 2008; Meng et al., 2019) at 37 °C in a 5% CO2 humidified incubator. Cortical neurons were cultured for 8 days in vitro before experiments. Preparation of LPA Stock Solutions and Treatment of Cells Lysophosphatidic acid stock solution was prepared by dissolving LPA in calcium- and magnesium-free phosphate buffered saline (PBS), pH 7.2, in the presence of 1% (w/v) bovine serum albumin (essentially fatty acid-free); 5 mg of LPA was dissolved in about 11.45 mL PBS, yielding a 1 mM LPA stock solution. Neuronal PC12 cells were incubated in Locke's solution before LPA treatment and were treated in three ways. In the first treatment, PC12 cells were treated with different concentrations of LPA (20, 40, or 60 µM)/BSA or BSA alone for 24 h and then used for further experiments. In the second treatment, PC12 cells were treated with LPA/BSA for various times (0, 6, 12, or 24 h) and then used for further experiments. In the third treatment, PC12 cells were pretreated with DMSO (vehicle), LPA1 receptor antagonist (AM095, 5 µM), LPA2 receptor antagonist (5 µM), ERK1/2 inhibitor (U0126, 5 µM), p38 inhibitor (SB203580, 10 µM), or JNK inhibitor (SP600125, 10 µM) for 2 h, and all groups were then subjected to LPA for 24 h before further experiments. CCK-8 Assay The viability of neuronal PC12 cells was measured using the CCK-8 assay kit according to the manufacturer's manual. Briefly, PC12 cells were plated into a 96-well plate (10000 cells/well) and cultured in the presence of LPA for the desired time. 10 µL CCK-8 solution was mixed with 100 µL medium and added to each well. The absorbance at 450 nm was measured after 2 h of treatment. TUNEL Staining Apoptotic DNA fragmentation was examined using the One-step TUNEL apoptosis assay kit according to the manufacturer's protocol. Briefly, PC12 cells were plated into a 24-well plate and cultured in the differentiation medium for 48 h, and then treated as described (see section "CCK-8 Assay"). The cells were fixed in 4% paraformaldehyde for 30 min, permeabilized in 0.3% Triton X-100 for 5 min, and then incubated with TUNEL kits for 1 h at 37 °C. The slides were washed with PBS and stained with DAPI solution for 5 min. Four independent fields were selected for examination. The percentage of TUNEL-positive nuclei in each region was calculated to evaluate apoptosis. The marker index was calculated as the number of TUNEL-positive cells divided by the total number of cells in each visual field, and the apoptotic index (AI) of each sample was the mean marker index across visual fields.
Measurement of Mitochondrial Membrane Potential (ΔΨm) Mitochondrial membrane potential was assessed with the Rhodamine 123 (Rh123) probe. PC12 cells were plated into 6-well plates. After treatment, the cells were incubated with 5 µM Rh123 at 37 °C for 30 min. The fluorescence intensity of Rh123 was then measured by fluorescence microscopy or flow cytometry. Depolarization of the MMP causes a rise in the fluorescence intensity of Rh123 (Huang et al., 2007). Quantitative PCR PC12 cells were plated into culture dishes (6 cm in diameter), cultured in the differentiation medium for 48 h, and then treated as described. Total RNA was extracted using Trizol reagent (Thermo Fisher Scientific). Statistical Analysis All the quantitative data were presented as mean ± SEM. Statistical analysis was performed using the Mann-Whitney U-test or the Kruskal-Wallis test followed by Bonferroni post hoc test. Differences with P-values < 0.05 were considered significant. LPA Induces Apoptosis and Mitochondrial Dysfunction in Neuronal PC12 Cells and Primary Neurons We first examined the effect of LPA on cell viability using the CCK-8 kit. LPA decreased the cell viability of neuronal PC12 cells in a concentration- and time-dependent manner (Figures 1A,B). TUNEL staining found that LPA increased the apoptotic rate of PC12 cells in a concentration- and time-dependent manner (Figures 1C,D). Next, we detected the MMP of neuronal PC12 cells treated with LPA using Rh123 staining. Fluorescence microscopy and flow cytometry experiments found that LPA induced a significant increase in the fluorescence intensity of Rh123 in a concentration- and time-dependent manner (Figures 1E,F). To confirm the effect of LPA, we also tested the effect of LPA in primary neurons and obtained similar results as in neuronal PC12 cells (Supplementary Figures S1a,b). These results indicate that LPA induces cell injury and loss of MMP both in neuronal PC12 cells and in cultured neurons. To investigate the underlying molecular mechanisms of LPA-induced cell injury, we performed Western blot and quantitative PCR to investigate the expression of MMP-related genes. As shown in Figure 2, LPA challenge elicited ERK1/2, p38, and JNK phosphorylation in a concentration- and time-dependent manner, indicating that LPA induces the activation of the MAPK pathways (Figures 2A-F). We further investigated the expression of apoptosis-related genes, and found that the Bcl2 mRNA level, Bcl2 protein level, and Bcl2/Bax ratio were all significantly decreased in neuronal PC12 cells following LPA treatment (Figures 3A-D). LPA also induced the translocation of Bax from the cytoplasm to mitochondria. The protein level of cleaved caspase-3 was elevated after LPA treatment (Figures 3C,D). These results indicate that LPA activates the MAPK pathways and induces mitochondrial dysfunction. Blockade of LPA1 Receptor and LPA2 Receptor Attenuates LPA-Induced Neuronal Damage and Mitochondrial Dysfunction To explore the roles of the LPA1 receptor and LPA2 receptor in LPA-induced neuronal injury, we treated PC12 cells with LPA1 receptor antagonist or LPA2 receptor antagonist, respectively, and then exposed the cells to 60 µM LPA. We found that both the LPA1 receptor antagonist and the LPA2 receptor antagonist markedly mitigated neuronal injury induced by LPA, as shown by CCK-8 and TUNEL staining analysis (Figures 4A,B). MMP analysis demonstrated that both the LPA1 receptor antagonist and the LPA2 receptor antagonist significantly ameliorated mitochondrial dysfunction induced by LPA (Figure 4C).
Moreover, both the LPA1 receptor antagonist and the LPA2 receptor antagonist blocked the decrease of Bcl2 mRNA level, Bcl2 protein level, and Bcl2/Bax ratio induced by LPA. The LPA1 receptor antagonist and LPA2 receptor antagonist also blocked the translocation of Bax and the activation of caspase-3 (Figures 5A,B). We also assessed the effect of LPA receptor antagonists on cell injury induced by a lower concentration of LPA (20 µM), and obtained similar results as in cells treated with 60 µM LPA (Supplementary Figures S2a-c). To further confirm the role of LPA receptors in LPA-induced cell injury, we tested the effect of another LPA1 receptor antagonist, BMS986020. BMS986020 exerted similar protective effects as AM095 (Supplementary Figures S3a-c). In primary neurons, we obtained similar results as in PC12 cells (Supplementary Figures S4a,b). These results indicate that the LPA1 receptor and LPA2 receptor mediate LPA-induced neuronal injury and mitochondrial dysfunction. FIGURE 4 | Blockade of LPA1 receptor and LPA2 receptor prevents LPA-induced neuronal damage and alleviates mitochondrial dysfunction. The viability of neuronal PC12 cells was measured using CCK-8 kit (A). The apoptosis of neuronal PC12 cells was detected by TUNEL staining (B). The MMP of neuronal PC12 cells was estimated by Rh123 staining (C). Scale bar: 20 µm. Data are mean ± SEM of five independent experiments. **P < 0.01. Blockade of ERK1/2, p38, and JNK Pathways Ameliorates LPA-Induced Neuronal Injury and Mitochondrial Dysfunction To investigate the role of MAPK pathways in the toxic effect of LPA, we tested the effects of the ERK1/2 inhibitor, p38 inhibitor, and JNK inhibitor on LPA-induced neuronal injury. We preincubated the cells with the ERK1/2 inhibitor, p38 inhibitor, or JNK inhibitor, respectively, and then added 60 µM LPA to the culture medium. We found that the ERK1/2 inhibitor, p38 inhibitor, and JNK inhibitor markedly ameliorated LPA-induced neuronal injury, as shown by CCK-8 and TUNEL staining analysis (Figures 6A,B). Furthermore, the MAPK inhibitors also alleviated the mitochondrial dysfunction induced by LPA, as demonstrated by the detection of MMP (Figure 6C). In addition, the ERK1/2 inhibitor, p38 inhibitor, and JNK inhibitor blocked the decrease of Bcl2 mRNA level, Bcl2 protein level, and Bcl2/Bax ratio, as well as the translocation of Bax and the activation of caspase-3 (Figures 7A,B). We also assessed the effect of the MAPK inhibitors on a lower concentration of LPA (20 µM). These inhibitors also attenuated cell injury and mitochondrial dysfunction induced by 20 µM of LPA (Supplementary Figures S5a-c). In primary neurons, we also obtained similar results as in PC12 cells (Supplementary Figures S6a,b). These results indicate that the ERK1/2, p38, and JNK pathways participate in LPA-induced neuronal damage and mitochondrial dysfunction. DISCUSSION Apoptosis is a form of programmed cell death. It is mediated through endogenous and exogenous pathways (Zaman et al., 2014; Green and Llambi, 2015). Mitochondrial dysfunction plays a pivotal role in the endogenous apoptotic pathway (Lopez and Tait, 2015). It has been reported that the relative ratio of the anti-apoptotic protein Bcl2 and the pro-apoptotic protein Bax decides the fate of cells. When the ratio of Bcl2/Bax decreases, Bax translocates from the cytoplasm to mitochondria, which contributes to mitochondrial dysfunction.
Following mitochondrial dysfunction, the downstream apoptosis-related protein caspase-3 is activated, which leads to cell apoptosis (Martinou and Youle, 2011; Hassan et al., 2014; Green and Llambi, 2015). In the present study, LPA-induced cell apoptosis was accompanied by decreased MMP, an observation that is consistent with the findings of Steiner et al. (2000). More importantly, we found that LPA induced a reduction in Bcl2 mRNA levels, Bcl2 protein levels, and the Bcl2/Bax ratio. Consequently, the translocation of Bax from the cytoplasm to mitochondria was increased, and the apoptosis-related protein caspase-3 was activated. Furthermore, we demonstrated that this process was mediated by the LPA1 receptor, LPA2 receptor, and MAPK pathways, as the pathological process induced by LPA was blocked by the LPA1 receptor antagonist, LPA2 receptor antagonist, and MAPK pathway inhibitors. Previous studies found that LPA and MAPK pathways mediate distinct changes in different cells. For example, LPA induces the proliferation of ovarian carcinoma cells (Rogers et al., 2018). However, LPA induces cell apoptosis in neurons (Steiner et al., 2000). Lower concentrations of LPA (0.1-1 µM) have been reported to attenuate apoptosis induced by lipopolysaccharide (LPS) in human umbilical cord mesenchymal stem cells, whereas LPA at high concentrations (>1 µM) induces cell injury (Li et al., 2017). These results indicate that the effect of LPA is concentration-dependent. Here we show that LPA induced cell injury in PC12 cells and primary neurons. Taken together, these results suggest that the effect of LPA depends on its concentration and the cell type. Ischemic stroke and TBI induce cell apoptosis and mitochondrial dysfunction (Cheng et al., 2012; Yang et al., 2018; Han et al., 2019). Here we show that LPA induced cell apoptosis and mitochondrial dysfunction, which was blocked by the LPA1 receptor antagonist, LPA2 receptor antagonist, and MAPK pathway inhibitors. Since the LPA level in the brain increases after ischemic stroke and TBI, and the administration of an LPA-directed monoclonal antibody or an autotaxin inhibitor reversed the neuronal damage, we believe that the LPA/LPA receptor/MAPK axis plays an important role in ischemic stroke and TBI (Li et al., 2008; Crack et al., 2014; Wang et al., 2018). In summary, our results indicate that the LPA1 receptor/LPA2 receptor/MAPK pathway and mitochondrial dysfunction mediate the neuronal apoptosis induced by LPA. The LPA1 receptor antagonist, LPA2 receptor antagonist, and inhibitors against MAPK pathways may be novel therapeutic strategies for patients with diseases like ischemic stroke and TBI, where excessive LPA signaling exists. DATA AVAILABILITY STATEMENT All datasets generated for this study are included in the article/Supplementary Material. AUTHOR CONTRIBUTIONS JZ and YL performed most of the experiments. ZZ conceived the project and designed the experiments. CW, YW, YZ, and LH participated in data analysis. All authors have contributed to this last version of the manuscript. FUNDING This study was supported by the National Natural Science Foundation of China (Grant no. 81671051) to ZZ. SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnmol.2020.00016/full#supplementary-material FIGURE S1 | Lysophosphatidic acid induces cell injury and mitochondrial dysfunction in a dose-dependent manner. TUNEL staining detects the apoptosis of primary neurons following treatment (a).
Rh123 staining estimates the MMP of primary neurons after treatment (b). Scale bar: 20 µm. Data are mean ± SEM of four independent experiments; for Kruskal-Wallis test, ##P < 0.01; for Bonferroni post hoc test (each group compared with control), *P < 0.05, **P < 0.01. FIGURE S2 | Blockade of LPA1 receptor and LPA2 receptor prevents LPA-induced neuronal damage and alleviates mitochondrial dysfunction. The viability of neuronal PC12 cells was measured with CCK-8 following treatment (a). The apoptosis of neuronal PC12 cells was detected by TUNEL staining after treatment. Scale bar: 50 µm. (b) The MMP of neuronal PC12 cells was estimated by Rh123 staining following treatment. Scale bar: 20 µm. (c) Data are mean ± SEM of four independent experiments. *P < 0.05, **P < 0.01. FIGURE S3 | Blockade of LPA1 receptor using BMS986020 prevents LPA-induced neuronal damage and alleviates mitochondrial dysfunction. The viability of neuronal PC12 cells was measured using CCK-8 kit (a). The apoptosis of neuronal PC12 cells was detected by TUNEL staining. Scale bar: 20 µm. (b) The MMP of neuronal PC12 cells was estimated by Rh123 staining. Scale bar: 50 µm. (c) Data are mean ± SEM of four independent experiments. *P < 0.05, **P < 0.01. FIGURE S4 | Blockade of LPA1 receptor and LPA2 receptor protects against LPA-induced neuronal damage and alleviates mitochondrial dysfunction. The apoptosis of primary neurons was detected by TUNEL staining (a). The MMP of primary neurons was estimated by Rh123 staining (b). Scale bar: 20 µm. Data are mean ± SEM of four independent experiments. *P < 0.05, **P < 0.01. FIGURE S5 | Blockade of MAPK pathway prevents LPA-induced neuronal damage and alleviates mitochondrial dysfunction. The viability of neuronal PC12 cells was measured with CCK-8 following treatment (a). The apoptosis of neuronal PC12 cells was detected by TUNEL staining after treatment. Scale bar: 50 µm (b). The MMP of neuronal PC12 cells was estimated by Rh123 staining following treatment. Scale bar: 20 µm. (c) Data are mean ± SEM of four independent experiments. *P < 0.05, **P < 0.01. FIGURE S6 | Blockade of MAPK pathway protects against LPA-induced neuronal damage and alleviates mitochondrial dysfunction. The apoptosis of primary neurons was detected by TUNEL staining (A). The MMP of primary neurons was estimated by Rh123 staining (B). Scale bar: 20 µm. Data are mean ± SEM of four independent experiments. *P < 0.05, **P < 0.01.
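The Statistical Analysis paragraph of this article names the Mann-Whitney U-test and the Kruskal-Wallis test followed by Bonferroni post hoc correction. As a minimal, hypothetical Python sketch of how such comparisons can be run with scipy (the group names and the replicate values are illustrative placeholders, not the authors' data or code):

from scipy.stats import mannwhitneyu, kruskal

# Hypothetical replicate measurements (e.g., apoptotic index) per treatment group.
control = [0.10, 0.12, 0.09, 0.11, 0.10]
lpa_20 = [0.21, 0.25, 0.22, 0.24, 0.23]
lpa_60 = [0.41, 0.44, 0.39, 0.42, 0.45]

# Two-group comparison: Mann-Whitney U test.
u_stat, p_two = mannwhitneyu(control, lpa_60, alternative="two-sided")
print(f"Mann-Whitney: U={u_stat}, p={p_two:.4f}")

# Multi-group comparison: Kruskal-Wallis, then pairwise tests with Bonferroni correction.
h_stat, p_kw = kruskal(control, lpa_20, lpa_60)
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.4f}")
pairs = [("control vs 20 uM", control, lpa_20),
         ("control vs 60 uM", control, lpa_60),
         ("20 uM vs 60 uM", lpa_20, lpa_60)]
alpha_corrected = 0.05 / len(pairs)  # Bonferroni-adjusted threshold
for name, a, b in pairs:
    _, p = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name}: p={p:.4f}, significant={p < alpha_corrected}")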
2020-02-06T14:12:22.548Z
2020-02-06T00:00:00.000
{ "year": 2020, "sha1": "4b22d5e00ff6bd12edf42ad5f60f66778d6940e2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3389/fnmol.2020.00016", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4b22d5e00ff6bd12edf42ad5f60f66778d6940e2", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
269932316
pes2o/s2orc
v3-fos-license
High power illumination system for uniform, isotropic and real time controlled irradiance in photoactivated processes research In the study of photocatalytic and photoactivated processes and devices, tight control of the illumination conditions is mandatory. The practical challenges in the determination of the necessary photonic quantities pose serious difficulties for the characterization of catalytic performance and of reactor designs and configurations, compromising an effective comparison between different experiments. To overcome these limitations, we have designed and constructed a new illumination system based on the concept of the integrating sphere (IS). The system provides uniform and isotropic illumination on the sample, either in batch or continuous flow modes, these characteristics being independent of the sample geometry. It allows direct, non-contact and real time determination of the photonic quantities as well as versatile control of the irradiance values and their spectral characteristics. It can also be scaled up to admit samples of different sizes without affecting its operational behaviour. The performance of the IS system has been determined in comparison with a second illumination system, mounted on an optical bench, that provides quasi-parallel beam (QPB), nearly uniform illumination under tightly controlled conditions. System performance is studied using three sample geometries: a standard quartz cuvette, a thin straight tube and a microreactor, by means of potassium ferrioxalate actinometry. Results indicate that the illumination geometry and the angular distribution of the incoming light greatly affect the absorption at the sample. The sample light absorption efficiency can be obtained with statistical uncertainties of about 3% and in very good agreement with theoretical estimations. Appendix A.2. Tube (TB) The tube (TB) sample consists of a silicone cylindrical tube with internal and external diameters of 1.0 mm and 2.9 mm, respectively. In order to define the irradiated surface, the tube was covered by black, UV absorbing tape except for a small, 30 mm long section enclosing an internal exposed volume of 0.024 cm³. The upper drawing in Figure (A.1) schematically shows a cross section of the TB geometry under QPB illumination. The tube is covered by the opaque absorbing tape except for the small, L = 30 mm length portion from A to B. In QPB illumination, light reaches the actinometer along planes perpendicular to the tube axis, and the actinometer, flowing from left to right in the picture, is exposed to light only across the length L defined by the A-B section. Therefore, if Q is the flow rate and s the tube's cross section, the residence time t is given by $t = \frac{sL}{Q}$. In case of irradiation inside the integrating sphere, lower drawing of Figure (A.1), light reaches the outer tube surface from every angle. Because of refraction, light reaches the actinometer volume before point A and also after B. These regions, although small in extent, are not exposed in the QPB configuration. In consequence, for the same flow rate, the residence time is slightly longer in the IS.
A simple estimate of the correction to the residence time inside the IS can be obtained as follows. Let D = 2.9 mm and d = 1 mm be the external and internal tube diameters. Then, the volume ΔV of actinometer that is exposed to refracted UV radiation before point A is calculated using elementary geometry, with θ_r,max the maximum angle of refraction at point A, corresponding to incident light at a grazing angle of incidence θ_i = π/2. This volume can be conveniently expressed as an equivalent increment in the effective length of the irradiated region, ΔL = ΔV/s. For a refractive index n = 1.4 at 365 nm, Snell's law gives a maximum refraction angle θ_r,max = arcsin(1/n) = 0.80 rad, and ΔL = 1.8 mm. On the one hand, we should count the effect at both sides of the tube, A and B, and multiply this number by 2. On the other hand, at A and B only light coming from half of the solid angle can reach the tube's inner section before A or after B, so we should divide it by 2. These two factors cancel and, for a given flow rate, the residence time is 6.1% longer in the IS than in the QPB system. Appendix A.3. Microreactor (MR) The microreactor (MR) sample was built in a 40 mm × 40 mm × 5 mm polydimethylsiloxane (PDMS) slab using 3D printing technologies. It has a 345.4 mm long, serpentine-shaped channel with circular cross section, 1.38 mm in diameter, defining an internal volume of 0.517 cm³. Accuracy in the geometric dimensions is better than 2% according to the measurements of residence times at known flow rates. Inlet and outlet silicone feeding tubes are fixed to the microreactor using connection tips directly glued in the PDMS structure. Connection tips and silicone tubes are largely transparent to UV light, allowing conversion in the actinometer outside the microreactor channel. During tests we checked that the increment in the actinometer conversion fraction due to this factor can be as large as 20%, depending on the illumination geometry and the exposed length of the feeding tubes. In order to have actinometer conversion only in the microreactor channel, and therefore precise control of the irradiated volume, the connection tips and the feeding tubes were completely wrapped with UV absorbing tape. Appendix A.4. Light propagation in the samples We discuss here some details of the light propagation across these samples that are relevant in the context of the present study. Of particular interest is the CV sample, where the actinometer conversion fraction can be accurately related to the measured irradiance values under normal, parallel beam illumination and total absorption conditions. Consider a quartz cuvette sample CV filled with an actinometer solution and a light beam reaching the external surface, the air-quartz interface, at point A with an angle of incidence θ1 (Figure A.2). Refraction at A changes the direction of the light beam, which reaches the internal surface, the quartz-actinometer interface, at A'. At this point it suffers a second refraction and enters the actinometer volume. Angles of incidence and refraction at each point are related through Snell's law and depend on the refractive indexes.
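Since both the tube estimate of Appendix A.2 and the cuvette discussion rest on repeated applications of Snell's law, the short Python sketch below evaluates the quantities quoted in these appendices: the maximum refraction angle for grazing incidence on the tube, the relative residence-time increase implied by ΔL = 1.8 mm over L = 30 mm, and the angle chain across the air-quartz-actinometer interfaces. It is a numeric companion to the text using the refractive indexes quoted here, not an independent derivation of ΔV.

import math

# Tube (Appendix A.2): grazing incidence, theta_i = pi/2, wall index n = 1.4
n_tube = 1.4
theta_r_max = math.asin(1.0 / n_tube)
print(f"theta_r_max = {theta_r_max:.2f} rad")            # ~0.80 rad, as quoted

L, dL = 30.0, 1.8                                        # mm, from the text
print(f"residence-time increase ~ {100 * dL / L:.1f}%")  # ~6%, close to the 6.1% quoted

# Cuvette (Appendix A.4): Snell chain air -> quartz -> actinometer
n1, n2, n3 = 1.0, 1.56, 1.35
theta1 = math.radians(30.0)                              # example angle of incidence
theta2 = math.asin(n1 * math.sin(theta1) / n2)           # refraction at the air-quartz interface
theta3 = math.asin(n2 * math.sin(theta2) / n3)           # refraction at the quartz-actinometer interface
print(f"theta1 = {math.degrees(theta1):.1f} > theta3 = {math.degrees(theta3):.1f} "
      f"> theta2 = {math.degrees(theta2):.1f} degrees")  # reproduces theta1 > theta3 > theta2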
Consider a quartz cuvette sample CV filled with an actinometer solution and a light beam reaching the external surface, the air-quartz interface, at point A with an angle of incidence θ₁ (Figure A.2). Refraction at A changes the direction of the light beam, which reaches the internal surface, the quartz-actinometer interface, at A'. At this point it suffers a second refraction and enters the actinometer volume. Angles of incidence and refraction at each point are related through Snell's law and depend on the refractive indexes.

Refractive indexes at λ = 365 nm are n₁ = 1 for air and n₂ = 1.56 for quartz [2]. For the actinometric solution we can safely assume that it has a refractive index very similar to that of deionized water, n₃ = 1.35 [3]. Since n₁ < n₃ < n₂, then θ₁ > θ₃ > θ₂. Although not shown in the figure, there is also light reflection at A and A'. The fraction of light that passes to the second medium after refraction is the transmittance T, which is given by the Fresnel equations and has a complex dependence on the refractive indexes as well as on the angle of incidence [4]. In normal incidence, points B and B' in Figure A.2, θ₁ = θ₂ = θ₃ = 0 and the light beam maintains its original direction. In this case, the Fresnel equations provide a particularly simple expression for the transmittance,

T = 4 n₁ n₂ / (n₁ + n₂)²,   (A.6)

that gives T = 0.952 at the air-quartz interface and T = 0.995 at the quartz-actinometer interface. In normal incidence, the overall transmittance from air to actinometer is their product, T = 0.947. Expression (A.6) is frequently used, because of its simplicity, despite normal incidence conditions rarely being met in real experiments. Sometimes the transmittance is simply taken as T = 1. Care must be taken, since normal incidence is seldom found in practice and the transmittance sharply drops at large angles of incidence. Finally, let us consider a light beam that reaches the CV wall at point C, located on the cuvette edges defined by its wall thickness. Like in A, light is refracted, here reaching the wall at C'. Since n₂ > n₁, the phenomenon of total reflection occurs at C' and the beam is completely reflected towards the inner wall at C'', where it enters the actinometer volume after a second refraction. Notice that in the absence of refraction at C this light beam would not enter the actinometer volume. To obtain a precise relationship between the actinometer conversion fraction and the sample irradiance, situations like those at point C must be avoided. This can be done by ensuring nearly normal incidence at C (small θ₁) so that the refracted light beam either travels inside the cuvette wall or its reflection at C' escapes the cuvette's walls before reaching the inner surface. Refractive indexes and cuvette dimensions set a limiting angle of incidence θ₁ ≤ 9.9° for which this condition is satisfied.
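The normal-incidence transmittances quoted above follow directly from expression (A.6); the short sketch below reproduces the 0.952, 0.995 and 0.947 values from the stated refractive indexes.

```python
def transmittance_normal(n1: float, n2: float) -> float:
    """Normal-incidence transmittance at an interface (Fresnel, eq. A.6)."""
    return 4.0 * n1 * n2 / (n1 + n2) ** 2

n_air, n_quartz, n_act = 1.0, 1.56, 1.35     # refractive indexes at 365 nm

T1 = transmittance_normal(n_air, n_quartz)   # air -> quartz: 0.952
T2 = transmittance_normal(n_quartz, n_act)   # quartz -> actinometer: 0.995
print(f"{T1:.3f} {T2:.3f} {T1 * T2:.3f}")    # overall air -> actinometer: 0.947
```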
Details of light propagation differ in each sample, due to their different geometries. Figure (A.3) schematically shows the CV, MR and TB samples under parallel beam illumination that is also normal to the external surface of CV and MR. For the CV sample there is normal incidence at all interfaces and the effective width of the actinometer irradiated surface coincides with the geometric width w. In the MR sample, there is no refraction at the first surface but there is refraction at the internal surface, because of the cylindrical geometry of the channel. The refractive index of PDMS is 1.45 at λ = 350 nm [5], so that the preceding discussion also applies here. Light beams are refracted at the PDMS-actinometer interface, slightly spreading inside the actinometer volume, working as a divergent flat-concave cylindrical lens. The exposed surface coincides with the geometric cross section of the microreactor channel, but the length of the light paths inside the actinometer is affected by refraction. Finally, in the case of the TB sample, normal incidence does not exist (except for light beams along the tube's diameter) and there is refraction at both the external and internal surfaces. The first refraction bends light beams towards the inner channel, while the second refraction spreads the light beam again, but retaining convergence. As expected, the tube walls act as a convergent cylindrical lens, focusing the light beam. The effective light collecting width is greater than the geometric diameter w, and refraction also affects the effective path length inside the inner cavity. Besides refraction, total reflection phenomena, not shown in the figure, are also expected at the actinometer walls for the TB and MR samples.

Lensing effects do not appear only at the interface of curved surfaces. They are also present at planar surfaces when the incoming light is not perpendicular to the sample surface. This fact explains the apparently higher irradiance inside the CV sample, as determined by actinometry, with respect to the irradiance at the external sample surface, determined with the spectrophotometer, in the case of uniform, isotropic illumination inside the IS.

For simplicity, consider a parallel light beam that falls perpendicularly on a small surface element ∆S at the interface between air (n = 1) and a second medium with refractive index n′ > 1, Figure A.4(a). At ∆S the irradiance will have a value E₀. Now consider exactly the same light beam, but arriving at the interface with an angle of incidence θ, Figure A.4(b). Because irradiance depends on the surface projected in the direction of the beam, the irradiance at ∆S will now be E = E₀ cos θ < E₀. In order to have the same irradiance as in case (a), we must increase the beam intensity by a factor 1/cos θ. A similar situation occurs if light comes isotropically from every direction, like in the IS system, Figure A.4(c), where more light intensity is necessary to achieve the same irradiance. Now consider a second surface element ∆S′, the surface of the actinometer, situated inside the second medium. In normal incidence there is no refraction, and the irradiance at ∆S′ will be the same as at ∆S multiplied by the corresponding transmittance, like in the case of the CV sample in QPB illumination. In case (c), with isotropic light incidence, refraction at the interface redirects light towards the surface element ∆S′. This reduces the incidence angle at ∆S′, effectively increasing the irradiance. Alternatively, we can see that the light reaching ∆S′ comes from a greater effective surface at the interface. If the light intensity in (a) and (c) was tuned to give identical irradiance at ∆S, then the irradiance at ∆S′ will be greater, with higher actinometer conversion, in (c) than in (a). Although the effect is clear for an elementary small surface ∆S′, it is not so simple for extended surfaces, where it becomes a border effect. For a detailed calculation, the angular dependence of the transmittance and the details of the geometries and materials should be taken into account. That theoretical study is beyond the scope of the present work.
However, we have checked the border effects in the case of the CV sample inside the IS system (see following section), testing that light enters the actinometer from the entire external surface of the cuvette, which is 12.5 mm wide including the wall thickness, something that does not happen in the QPB system. It must finally be noticed that this effect is not exclusive to the isotropic illumination in the IS system. It must be present, to some extent, in all systems where illumination is not perfectly normal.

Appendix A.5. CV sample: actinometer conversion fraction as a function of the exposed surface

The effect of the illumination geometry, for a given irradiance value, on the conversion fraction X is summarized in the illumination factor η. This effect can also be interpreted as an increase of the effective collecting surface area, with S_eff = ηS, where S is the geometric area of the actinometer volume exposed to light. As has been shown in the main text, due to refraction in the CV walls, light reaches the actinometer from points on the external CV surface, in the borders defined by the cuvette's walls, that do not contribute in QPB illumination. The external and internal widths of the CV sample are in a 1.25:1 ratio. Notice that the numerical coincidence with the value η = 1.22 is fortuitous, since the same factor affects the other sample geometries.

To verify this point, we irradiated a CV sample in the IS successively covering a greater fraction of its external surface with the UV absorbing tape, changing in this way the light gathering area. More precisely, we irradiated the CV sample in the following configurations (neck and base of the CV sample are always covered):

A. Four sides uncovered. This is the normal configuration used in the experiments described in the main text.
B. One side covered with tape.
C. Two sides covered with tape.
D. Three sides covered, only one side open to light. The width of the open side is equal to the external width of the CV sample (1.25 cm).
E. Like in D, but we also carefully covered the two edges, 1.25 mm wide, that correspond to the CV wall thickness. In this case the width of the open section is 1.0 cm and coincides with the width of the internal cross section of the actinometer volume. This configuration was measured twice (E1 and E2).
F. Sample completely covered with tape. Light should not reach the actinometer.

Except for the covered areas, we followed the same procedure for the irradiation and analysis of the actinometer conversion fraction as in the regular experiments. The results are shown in Figure (A.5). Data points calculated using the geometric area of the exposed surface of the actinometer inside the CV sample are shown in red. Data points calculated with the exposed external surface of the CV sample are shown in black. Notice that they displace to the right, since with this assumption their photon dose increases by a factor 1.25. Points E1 and E2 are shown in blue. Since the CV borders are covered with tape, for E1 and E2 the internal and external areas are equal and they are not affected by the area correction. Both fitting lines in red (uncorrected) and black (corrected) include points E1 and E2. If no correction is assumed, points E1 and E2 have a similar photon dose to point D, but they exhibit conversion fractions X that are about 20% lower than for case D. This confirms the influence of the wall edges of the cuvette. The surface correction factor affects all data points except E1 and E2, bringing all of them, including E1 and E2, to a common line, slightly increasing the value of R². In addition, the slope of the corrected data is compatible with the main CV data shown in the text, because the 1.25 factor resembles the actual value η = 1.22, while the uncorrected data is not.

This confirms that light enters the actinometer volume through the edges corresponding to the CV walls. However, it cannot be taken as a precise determination of the effect, which would require a more exact control on the exposed surfaces.
Appendix B. M365LP1 UV LED source

The M365LP1 is a non-encapsulated LED with a 1.4 mm × 1.4 mm square emitting surface that is fixed to a heat sink for efficient heat dissipation. According to the technical specifications, the LED characteristic optical power is 1400 mW and the electrical power consumption 6800 mW, thus having an electrical efficiency of 20.6%. Its nominal peak wavelength is in the 360 nm to 370 nm range, typically 365 nm, and it has a bandwidth, quoted as Full Width at Half Maximum (FWHM), of 9 nm. The LED optical output is controlled manually by varying the driving current using a LEDD1B driver (Thorlabs Inc.). The peak wavelength in LED sources slightly drifts with increasing driving currents. For the M365LP1, monitoring during the experiments confirms that it is centred in the 366-368 nm range, in agreement with its technical specifications. It has been tested that the LED output stability for periods of time comparable to those used in this work (typically less than 3 minutes) is good, with variations (standard deviation) of less than 0.16% of the nominal value.

Appendix B.1. LED irradiance model. Derivation

The emitting surface of the M365LP1 LED is a small square, with a 2 mm long diagonal, so that it can be considered as a point-like light source at distances greater than about 20 mm. In this condition, it is easy to obtain an expression for the irradiance E at a point P located on a surface S parallel to the LED emitting surface.

The geometry of interest is depicted in Figure (B.6). Consider the optic axis of the system (Z) perpendicular to S at O and passing through the LED position, at O′. The distance z between O and O′ is the LED-surface distance and every point P in S is located at a transverse distance r from O. For a point-like source of radiant intensity I, the irradiance at P, located at a distance d from the source, is

E = I(θ′) cos θ / d²,   (B.1)

with θ the angle between the direction of the incoming light and the normal to S. Furthermore, non-encapsulated LEDs, like the M365LP1, can be regarded as perfectly diffuse (lambertian) emitters whose radiant intensity has an angular dependence given by

I(θ′) = I₀ cos θ′,   (B.2)

where I₀ is the radiant intensity in the direction perpendicular to the LED surface and θ′ is the angle that the direction of interest makes with the normal to the LED surface.

Inserting equation (B.2) into equation (B.1) gives

E = I₀ cos θ′ cos θ / d².   (B.3)

For this particular geometry, cos θ′ = cos θ = z/d and d² = z² + r², therefore

E = I₀ z² / (z² + r²)²,   (B.4)

with a maximum irradiance E_max = I₀/z² obtained at point O, right below the LED in the figure (r = 0). Equation (B.4) can be rewritten in terms of E_max to obtain the irradiance distribution over S, the LED irradiance model,

E = E_max / (1 + (r/z)²)²,   (B.5)

which obviously exhibits rotational symmetry around the Z axis.

Also of interest is the total flux F_T emitted by the LED source over the entire hemisphere. For a lambertian emitter,

F_T = π I₀,   (B.6)

which allows us to compute the illumination efficiency ξ, defined as the fraction of the emitted flux that reaches a sample of surface S,

ξ = E_avg S / F_T,   (B.7)

with E_avg = U_I E_max the average irradiance over the sample and U_I the uniformity index defined later in this text (see section Irradiance uniformity).
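Before describing the experimental validation, a minimal sketch of the model in equation (B.5) may be useful. Evaluated at the detector offsets used later in Appendix F (r = 6 mm and r = 15 mm at z = 250 mm), it reproduces the correction factors f_D quoted there; the distances are taken from Appendix F and everything else is just the model itself.

```python
def irradiance_ratio(r: float, z: float) -> float:
    """E/E_max on a plane at distance z from a point-like lambertian LED, eq. (B.5)."""
    return 1.0 / (1.0 + (r / z) ** 2) ** 2

# Detector offsets used in Appendix F, distances in mm (z = 250 mm)
for r in (6.0, 15.0):
    ratio = irradiance_ratio(r, 250.0)
    print(f"r = {r:4.1f} mm  E/E_max = {ratio:.4f}  f_D = {1.0 / ratio:.4f}")
# prints f_D = 1.0012 (TB) and f_D = 1.0072 (CV), as quoted in Appendix F
```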
Appendix B.2. LED irradiance model. Experimental validation

The LED irradiance model in equation (B.5) has been tested experimentally in the laboratory. The M365LP1 and the light detector were aligned on an optical table, with the LED on an optical rail with an engraved scale, enabling the adjustment of the distance z. The detector was mounted on a computer controlled linear displacement stage, precision 0.01 mm, allowing fine-tuned transverse displacements along the distance r. To avoid unwanted reflections that could interfere during the measurements, the entire system was enclosed in a protective box made of a highly absorbing, black material. During the measurements, the laboratory was kept completely dark except for the LED source.

After optical alignment, the LED output was fixed at a constant value and irradiance measurements were taken at different values of r and z, representative of those to be used in the experiment. On-axis measurements at O (r = 0) were taken at 21 values of z ranging from 1.9 cm to 21.9 cm. At each z, the irradiance was also measured at 0.50 cm intervals in the transverse direction r, at both sides of the optic axis, up to r = 4.0 cm.

Figure B.7 shows the measured irradiances at r = 0 as a function of the inverse squared LED-detector distance 1/z², arbitrarily normalized to the irradiance at z = 1.9 cm. Experimental values accurately fit to a straight line, confirming the 1/z² dependence expected on the optic axis. Figure B.8 shows the normalized irradiance E/E_max as a function of the transverse distance r for selected values of z. There is a very good general agreement between model predictions and measurements, even at the relatively short distance z = 2.3 cm. For even shorter distances, not measured in these tests, the assumption that the LED source behaves as a point-like source may be questioned. At z = 2.3 cm the mean value of the absolute differences between measurements and model predictions is 4.9%. As z increases these differences notably decrease and, for the data shown in the figure, they are 0.7% at z = 8.2 cm and ≤ 0.2% at z = 14.2 cm and z = 20.2 cm. The figure also shows the cross sections, to scale, of the CV, TB and MR samples to visualize the expected variations in irradiance across their surface at each value of z. This clearly shows that illumination uniformity, to be discussed in detail in the following section, improves as z increases with respect to the characteristic dimension of the target sample. However, because of the 1/z² dependence, very large z values drastically decrease the irradiance at the sample, requiring impractically long exposure times for a given actinometer conversion fraction. In practice, a compromise with respect to the distance z is necessary.

Appendix C. Irradiance uniformity. Uniformity Index U_I

Ideally, a good illuminating system should provide uniform illumination, the same irradiance, over the entire surface of the sample under study. In this situation the irradiance could be measured at any representative point on the sample and calculations of area integrated quantities, like the total incident radiant or photon flux, would be straightforward. The uniformity of the irradiance on a particular surface S can be appropriately quantified using a uniformity index U_I. This index can be defined in several ways, not mathematically equivalent but conveying the same idea, i.e. quantifying the departure from a perfectly uniform illumination. In particular, we will use here¹

U_I = E_avg / E_max,   (C.1)

where E_max and E_avg are the maximum and average irradiance in S. This form of U_I is particularly useful because it is a simple ratio and includes E_avg, which can be used to give the total power received by a surface S as the product E_avg S. Furthermore, the point of maximum irradiance E_max is usually easier to identify or to measure.

With this definition U_I ≤ 1, and U_I = 1 only in the case of a perfectly uniformly illuminated surface, where the irradiance at all points is equal and E_max = E_avg. In case of uneven illuminations U_I < 1. For narrow band emitting sources U_I can be regarded as a purely geometric quantity and in consequence applies equally to the irradiance or to the photon irradiance.
Because illuminating systems tend to have a symmetric design, an irradiance measurement at the centre of the sample is likely to correspond to E_max and not to E_avg. Unless U_I = 1, using E_max instead of E_avg produces a biased, overestimated estimation of the irradiance. Regardless of the illumination geometry, and using (C.1), the error committed can be written in terms of U_I as

ε = (E_max − E_avg)/E_max = 1 − U_I,

showing for instance that errors lower than 5% require U_I ≥ 0.95 and errors lower than 1% can be obtained only if U_I ≥ 0.99.

In the case of a point-like LED source illuminating a sample surface S, the uniformity index U_I depends on the LED-surface distance z and on the sample size and shape. For a general plane surface with arbitrary shape S, E_avg can be calculated by numerical integration of the LED irradiance model. An analytic solution can be obtained in some simple geometries, in particular in the case of a circular disk with radius R, the geometry of some spiral-shaped microreactors [6,7]. Because of the rotational symmetry around the optic axis Z, the disk geometry has the maximum possible value of U_I for a given z and total surface area, and it depends solely on the ratio R/z.

To obtain this expression, we first note that the average irradiance is the radiant flux F reaching the disk divided by the disk area, E_avg = F/S with S = πR², and that the total flux is

F = ∫_S E dS,   (C.4)

with E given by the LED irradiance model, equation (B.5). Since E depends solely on r, we can take the elementary surface as dS = 2πr dr, as shown in Figure C.9, and integrate (C.4) so that

U_I = E_avg/E_max = 1/(1 + (R/z)²),

which depends only on the ratio R/z. Inverting this expression, the necessary R/z ratio for a given U_I can be obtained. Although these expressions are exact only for a disk geometry, they can be used for other (not very elongated) geometries as a first approximation. For this purpose, we can consider R as a characteristic sample dimension, or as the radius of an equivalent disk with the same area as the real surface.

For the disk geometry, U_I ≥ 0.95 and U_I ≥ 0.99 require R/z = 0.23 and R/z = 0.10 respectively, i.e. the LED-disk distance must be about 4 or 10 times the disk radius R. Conversely, configurations with a single LED very close to the sample surface are prone to very large errors. If z is of the same order as R, then R/z ≈ 1 and U_I ≈ 0.5, so that the irradiance is overestimated by a factor of 2. It must be noted that if the irradiance is overestimated, the efficiency of a particular catalyst or reaction system may be underestimated by the same factor.

Figure (C.10) shows U_I as a function of the distance z for the disk geometry and for the CV sample, calculated by direct numerical integration of the LED irradiance model, equation (B.5). The disk radius R was chosen so that the disk has the same area as the CV sample in QPB illumination. As expected, U_I,disk > U_I,CV and the disk shows a better uniformity. Major differences can be seen at small z values, where border effects introduced by each geometry are relevant due to the proximity of the LED source. As z increases, the irradiance uniformity improves and the differences between both geometries tend to disappear. At the distance z = 25 cm used in the present work, as shown in the inset figure, the uniformity of the CV sample is very close to that of the disk, i.e. it is very close to the maximum attainable uniformity for this kind of illumination.
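The disk results above are easy to reproduce. The sketch below evaluates the analytic uniformity index for the disk geometry and inverts it for a target U_I; only the closed-form disk expression is used, since the CV curve in Figure C.10 requires the numerical integration described in the text.

```python
import math

def ui_disk(R: float, z: float) -> float:
    """Uniformity index of a disk of radius R at LED distance z (disk result above)."""
    return 1.0 / (1.0 + (R / z) ** 2)

def z_for_ui(R: float, ui: float) -> float:
    """LED-disk distance required to reach a target uniformity index."""
    return R / math.sqrt(1.0 / ui - 1.0)

R = 1.0                                              # disk radius, arbitrary units
for ui in (0.95, 0.99):
    print(f"U_I = {ui:.2f} requires z = {z_for_ui(R, ui):.1f} R")
print(f"z = R gives U_I = {ui_disk(R, 1.0):.2f}")    # ~0.5, a factor-2 overestimate
```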
Appendix D. Light measurement

Irradiance has been determined on a spectral basis using a miniature spectrometer (StellarNet BLUE-Wave) configured with a 25 µm slit and a 600 grooves/mm diffraction grating. The spectrometer is connected to a UV-VIS-NIR cosine corrector (StellarNet CR2, 200-1700 nm, 180° FOV) with a UV-VIS fiber-optic cable (190-1100 nm, 600 µm core diameter). The cosine corrector is necessary to obtain unbiased measurements in diffuse light environments like the inside of the IS. To reduce systematic errors when comparing them, the same detector system was used in the QPB and IS experiments.

Spectral irradiance data are collected in the 270-1100 nm range at ∆λ = 0.5 nm intervals. The whole system was factory calibrated (NIST traceable source) for absolute irradiance measurements. The irradiance E (W m⁻²) and the photon flux density (photon irradiance) E_p (m⁻² s⁻¹) are computed from the spectral irradiance data as

E = Σᵢ Eᵢ × 0.5   (D.1)

and

E_p = Σᵢ Eᵢ (λᵢ / hc) × 0.5,   (D.2)

with Eᵢ the spectral irradiance (W m⁻² nm⁻¹) at wavelength λᵢ, h the Planck constant and c the speed of light in vacuum. The factor 0.5 accounts for the 0.5 nm bin size of the spectral irradiance data. In order to include the whole M365LP1 spectral emission we set the summation limits λ₁ = 330 nm and λ₂ = 410 nm, i.e. a range equivalent to about 9 times the LED's FWHM. It must be noticed that the spectral summations in (D.1) and (D.2) give the actual irradiance and photon irradiance, irrespective of the exact position of the central peak wavelength or the typical small asymmetries in the peak shape of narrow-band LED sources. Therefore, equation (D.2) will give more accurate estimates than simply computing E_p by dividing the total irradiance by hc/λ₀ at the theoretical central wavelength λ₀ in the technical specifications. For instance, in this last case, a small difference δλ₀ = 2 or 3 nm between the true and the theoretical peak wavelength (365 nm) would result in a bias error of 0.5% to 0.8% in E_p.
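The spectral summations (D.1) and (D.2) are straightforward to implement. The sketch below uses a hypothetical Gaussian spectrum as a stand-in for the measured spectral irradiance, only to make the example self-contained.

```python
import numpy as np

h, c = 6.62607015e-34, 2.99792458e8    # Planck constant (J s), speed of light (m/s)
dlam_nm = 0.5                          # spectral bin size, nm

# Hypothetical spectral irradiance: a Gaussian toy spectrum, W m^-2 nm^-1
lam_nm = np.arange(330.0, 410.0 + dlam_nm, dlam_nm)
E_i = 0.2 * np.exp(-0.5 * ((lam_nm - 365.0) / 4.0) ** 2)

E = np.sum(E_i) * dlam_nm                                # eq. (D.1), W m^-2
E_p = np.sum(E_i * (lam_nm * 1e-9) / (h * c)) * dlam_nm  # eq. (D.2), photons m^-2 s^-1
print(f"E = {E:.3f} W m^-2, E_p = {E_p:.3e} photons m^-2 s^-1")
```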
Appendix E. Actinometer

Potassium ferrioxalate actinometry is based on the photoreduction of Fe(III) to Fe(II), whose amount, after complexation with a buffer-phenanthroline solution, is determined by absorbance measurements at 510 nm. The useful spectral domain for ferrioxalate actinometry is 250 nm-500 nm, with a relatively constant quantum yield Φ.

The preparation of potassium ferrioxalate was performed according to the IUPAC recommendations [8]. Ferrioxalate crystals were obtained from a mixture of 3 volumes of K₂C₂O₄, 1.5 M, with 1 volume of FeCl₃, 1 M. The precipitate was recrystallized three times with hot water, filtered and dried in darkness at 45 °C for 24 h. The 6 mM (C₀) ferrioxalate solution used in the actinometric measurements was prepared by dissolving 2.947 g of ferrioxalate crystals in 100 mL of H₂SO₄, 0.5 M, later diluted with deionized water up to 1 L. Actinometer preparation and all the experiments were performed in a dark room, with working areas illuminated with a custom illumination system using narrow band red LEDs (Thorlabs LED630E) with peak emission at 630 nm and 10 nm FWHM, greatly exceeding the recommended minimum wavelength of 500 nm necessary to prevent accidental conversion in the actinometer during manipulation [8].

Absorbance measurements in a 1/10 diluted, 0.6 mM ferrioxalate solution give a Napierian absorption coefficient κ = 1890 L mol⁻¹ cm⁻¹ at 365 nm. Since κ is a decreasing function of wavelength, a more accurate estimate is obtained using a spectral average weighted by the normalized emission of the M365LP1 LED source (Figure E.11), which gives κ = 1720 L mol⁻¹ cm⁻¹. This corresponds to a decadic absorption coefficient ε = 747 L mol⁻¹ cm⁻¹, in agreement with recent determinations [9]. According to the Beer-Lambert law, for this absorption coefficient and actinometer concentration (C₀ = 6 mM) the fraction of light not absorbed after a 1 cm path length is only 3.3 × 10⁻⁵, which can be completely neglected. This implies that total absorption conditions are fully satisfied in the CV sample. However, this is not true for the TB and MR samples.

For Fe(II) complexation, a premixed buffer-phenanthroline solution was prepared in proportions 8:1 with a 0.1% (1 g/L) solution of 1,10-phenanthroline and a buffer solution of 82 g of NaC₂H₅CO₂ and 10 mL of concentrated H₂SO₄ diluted in 1 L of deionized water. Immediately after irradiation, 1 mL (V₁) of irradiated actinometer solution was introduced in a volumetric flask containing 4.5 mL of the buffer-phenanthroline solution and deionized water was added up to a final volume of 10 mL (V₂). After 1 h of storage in darkness, the concentration of the phenanthroline complex was determined by spectrophotometric absorbance measurements at λ = 510 nm in a standard, 1 cm path-length cuvette.

The calibration of the relationship between the absorbance at 510 nm, A₅₁₀, and the concentration of Fe(II), C_Fe, was performed with standard solutions of known C_Fe [10]. The relationship is accurately linear, A₅₁₀ = aC_Fe + b with a = 11.13 ± 0.10 mM⁻¹ and b = 0.0014 ± 0.0045 (R² = 0.9995). These results are similar to those reported in the literature [6,11]. Since b is compatible with 0, C_Fe is simply given by the ratio A₅₁₀/a, and the concentration ratio of Fe(II) ions with respect to the original ferrioxalate solution (C₀), X_Fe, is given by

X_Fe = (V₂/V₁) C_Fe / C₀ = (V₂/V₁) A₅₁₀ / (a C₀).

For each experimental run, and together with the X_Fe ratios corresponding to the irradiated actinometer, the same ratio was obtained for a blank, non-irradiated sample, X_Fe,0, that accounts for the residual presence of iron ions prior to irradiation [12]. The actinometer conversion fraction due to irradiation only, X, is therefore

X = X_Fe − X_Fe,0.
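As a worked example of the calibration just described: the sketch below converts absorbance readings into a conversion fraction X. The V₂/V₁ dilution factor reflects our reading of the 1 mL to 10 mL procedure, and the absorbance values are illustrative, not measured data.

```python
# Conversion fraction from absorbance, following the Appendix E calibration.
a = 11.13           # calibration slope, mM^-1
C0 = 6.0            # initial ferrioxalate concentration, mM
V1, V2 = 1.0, 10.0  # aliquot and final volumes, mL (assumed dilution factor)

def conversion_fraction(A510: float, A510_blank: float) -> float:
    """Actinometer conversion fraction X = X_Fe - X_Fe,0 from absorbances at 510 nm."""
    x_fe = (A510 / a) * (V2 / V1) / C0         # irradiated sample
    x_fe0 = (A510_blank / a) * (V2 / V1) / C0  # blank, non-irradiated sample
    return x_fe - x_fe0

print(f"X = {conversion_fraction(0.550, 0.010):.3f}")  # illustrative absorbances
```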
Appendix F. Irradiance determination in the QPB illumination system

Precise determination of the average irradiance at the sample surface in the QPB system requires a tight control on the LED, sample and detector positions. To achieve this, all these elements were mounted on an optical table with posts providing the vertical adjustment. For horizontal alignment, sample holders were fixed on a computer controlled motorized translation stage allowing precise lateral displacements (precision 0.01 mm), perpendicular to the optic axis Z. After alignment, the optic axis Z passed through the centres of the LED and the sample front surface, marked with a cross (+) in Figure (F.12). At these points the maximum irradiance E_max is obtained.

For the CV and TB samples, the light detector was fixed close to the sample, at transverse distances r = 15 mm and r = 6 mm respectively. Therefore, the irradiance at the detector, E_D, is slightly less than the irradiance at the centre of the sample, E_max, so that E_max = f_D E_D. The numerical value of f_D is obtained directly from equation (B.5) and is f_D = 1.0072 for the CV sample and f_D = 1.0012 for the TB sample. In both cases, the required corrections are very small. In this configuration, multiple spectra can be obtained during irradiation and the temporal average can be taken as a better estimate. Typically 6 to 8 spectra were recorded in each experiment.

As for the MR sample, due to its square shape, the detector cannot be put so close, and it was placed 40 mm off its centre. The computer controlled linear stage was used to switch positions between sample and detector, moving the detector to the position occupied by the centre of the MR sample. In this case, several spectral irradiance measurements were taken before and after each irradiation experiment and the average spectrum was employed in the calculations. Since measurements are taken at the position of the centre of the MR sample, f_D = 1.

According to equation (C.1), the average irradiance on each sample was calculated as E_avg = U_I E_max = U_I f_D E_D. The uniformity indexes for each sample were calculated by numerical integration of equation (B.5) over the cross section of the exposed actinometer volume, taking into account both shapes and dimensions. Table F.1 lists the numerical values of these factors as well as the overall correction factor f_D U_I, which is actually very small (less than 0.5%). The QPB condition is obtained using a relatively large LED-sample distance z, resulting in small irradiance values at the sample. At z = 25 cm a maximum irradiance of about 7 W m⁻² was obtained with the maximum LED driving current. This implies a total LED flux of 1.37 W, close to the nominal 1.4 W in the technical data. For the CV sample the illumination efficiency (equation B.7) with S = 3.375 cm², U_I = 0.9972 and z = 25.0 cm is ξ = 1.7 × 10⁻³, or 0.17%, which is very small, clearly showing that for the QPB system uniformity is obtained at the expense of irradiance.

Appendix G.1. Irradiance at the sample inside the IS

Consider a surface element dS at the centre of the integrating sphere, Figure (G.14). The irradiance E_s at dS is obtained by integrating, over the visible hemisphere, the contributions of the wall elements dS′, each with radiance L_IS, with θ the angle between the direction of the incoming light and the normal to dS, θ′ the corresponding angle at dS′ and d the distance between both elements. For the walls of the IS, θ′ = 0 (cos θ′ = 1) and d = R, the sphere's radius. Furthermore, dS′ = R² sin θ dθ dϕ and then

E_s = L_IS ∫₀^{2π} dϕ ∫₀^{π/2} cos θ sin θ dθ,

showing that the irradiance at dS is simply related with the radiance of the IS walls as

E_s = π L_IS.

Although the integration has been performed for a dS element at the centre of the sphere, the result is valid at other, off-centre locations. The reason is that the incoming flux from a given point of the IS surface depends solely on the extent of the solid angle, since L_IS is not angle dependent. In consequence, equal solid angles contribute exactly the same to E_s, irrespective of the direction. This means that, as long as the sample size and properties do not perturb the behaviour of the IS, the irradiance at all the points on the sample's surface should be the same. In other words, the irradiance is uniform.
It only remains to relate the radiance of the internal walls of the IS, L_IS, with the irradiance measured by the detector, E_D. First we notice that a detector placed in the inner surface of the IS actually measures the irradiance at the IS internal walls, i.e. E_D = E_IS. On the other hand, at any point in the IS surface, E_IS (total incoming radiant flux per unit of surface area) can be related with the radiant exitance M_IS (total outgoing radiant flux per unit of surface area) as

M_IS = ρ E_IS,

with ρ being the diffuse reflectance of the IS walls. For lambertian surfaces, radiant exitance and radiance are simply related [13,14] as M_IS = π L_IS, so that

L_IS = ρ E_IS / π.

Combining with the previous result, we finally obtain

E_s = π L_IS = ρ E_IS = ρ E_D,

as stated.

Appendix G.2. Design and construction

The integrating sphere consists of two separated hemispheres that fit together by means of a half-lap joint that runs over the entire border of each hemisphere. This joint ensures a good mechanical adjustment, preventing light from entering or exiting through the junction during operation. Due to limitations in the maximum size that could be made by the 3D printer, each hemisphere was actually printed in two parts that were later glued and mechanically fixed to form a single body. The necessary ports for the input light sources, light detector and feed tubes for continuous flow experiments were also created during the printing process. When the two hemispheres are brought together to form the sphere, they are secured in position using two screws with nuts that are fixed by hand. The lower hemisphere (LH) has a rod-like joint that fits into a stainless steel post fixed to a heavy stainless steel plaque, providing a sturdy, stable base during measurements. If needed, the base can be further screwed to another surface.

Holes, 30 mm in diameter, were created during the printing process to hold low profile (4.5 mm), 30 mm diameter thread adapters, permanently fixed with glue. These thread adapters allow standard LED sources to be screwed on the outside of the sphere and, at the same time, reduce the input hole diameter to 11.4 mm, minimizing its effect on the sphere port fraction f. Due to their low profile, the LED emitting surface is placed very close to the input hole, increasing the light gathering efficiency. Thread adapters also admit light guide pipes and light filter systems to be used with other light sources like high power Xe lamps. Small circular baffles, directly welded on the internal side of the thread adapters, redirect the input light to the internal surface of the integrating sphere, avoiding direct, non-diffuse illumination of the sample or the light detector.

The lower hemisphere has two independent light inputs, located at opposite sites and with their centres 18° below the horizontal plane. The detector port is also allocated in this hemisphere, with its centre 18° below the horizontal and 30° away from one of the light inputs. The cosine corrector of the detector head is directly screwed in the IS using an adapter (StellarNet CR2-AD) glued in the sphere's wall (port D), ensuring that the CR2 is levelled with the sphere's internal surface.

Two upper hemispheres (UH1 and UH2) have been constructed. One of them (UH2) has two extra light input ports, placed at right angles with respect to those in the lower hemisphere.

Figure A.1: Effect of the illumination geometry on the effective residence time (effective irradiated length) in the TB. Drawing not to scale.

Figure A.2: Light refraction in the walls of a quartz cuvette.
Figure A.3: Schematic diagrams showing light propagation in CV, MR and TB samples under parallel beam illumination. Drawings are not to scale.

Figure A.4: Effect of the angle of incidence on the irradiance on a surface element.

Figure A.5: Actinometer conversion fraction X in the CV sample. Photon dose calculated with the internal actinometer exposed area (red) or the external cuvette exposed area (black). Blue points simultaneously belong to both calculation methods. Letters indicate the CV sections covered by the black tape, schematically shown at the top of the figure.

Figure B.6: LED geometry for the LED irradiance model.

Figure B.7: Testing the LED irradiance model for the M365LP1. Experimental, normalized on-axis (r = 0) irradiance E/E_max as a function of the inverse squared distance 1/z². Experimental values (open triangles) and their linear fit (dashed red line) compared to the model predictions (solid black line).

Figure B.8: Testing the LED irradiance model for the M365LP1. Experimental, normalized irradiance E/E_max as a function of the transverse distance r for selected values of the LED-detector distance z. Symbols represent experimental values and solid lines the corresponding curves given by the irradiance model. Cross sections of the CV, TB and MR samples are shown to scale for comparison.

Figure C.9: A disk of radius R illuminated by a single, point-like source placed at a distance z on the disk axis.

Figure C.10: Uniformity index U_I for the disk geometry and the CV sample as a function of the LED-sample distance z. The inset shows the region around z = 25 cm that corresponds to the experimental set-up.

Figure E.11: Napierian absorption coefficient κ of potassium ferrioxalate as a function of wavelength. The emission spectrum of the M365LP1 UV LED source, scaled by an arbitrary constant for display purposes, is shown in red.

Figure F.12: Sample positioning for QPB illumination measurements before enclosing the system with the protecting box. Upper row: CV (a) and TB (b) together with the light detector D (the small white circle is the cosine corrector of the light detection system). In (c) the MR sample is pictured through the small window in the internal wall of the enclosing box and the detector is not visible. The lower row schematically shows the position of the detector D, close to the CV (d) and TB (e) samples. Detector and MR sample switch positions for the determination of irradiance values using a computer controlled motorized stage. For each sample, the cross (+) marks the optic axis, passing through the LED position.
Figure G.14: A surface element dS at the centre of an integrating sphere with radius R.

¹ Other possibilities include E_min/E_avg, E_min/E_max, 1 − (E_max − E_min)/E_avg or (E_max − E_min)/(E_max + E_min), with E_max, E_min and E_avg the maximum, minimum and average irradiance values at the surface.

Table F.1: Correction factors for average irradiance determination in QPB illumination.
Optimization of Processing Conditions of Traditional Cured Tuna Loins – Muxama

Muxama is a traditional, highly valued food product prepared from dry-cured tuna loins in southern Portugal and Spain. The production procedure has seen little change over the last centuries. The muxama's stability is due to reduced water activity. In addition, the drying method has secondary effects on characteristics of flavor, color, and the nutritional value of the product. Our objectives were to describe the dynamics of important physicochemical parameters such as moisture content, water activity (a_W), NaCl concentration (as water-phase salt, Z_NaCl), pH and color, during the salting and drying stages of muxama production, and to test the effect(s) of changes in the traditional processing conditions followed in southern Portugal, aiming at optimizing the production procedure. The lowest values of moisture and a_W and the highest Z_NaCl, obtained after drying tuna loins for seven days at 20 °C, exceeded the values reported for commercial products and have an impact on the appearance (color) of the product. Therefore, drying tuna loins at lower temperatures (ca. 14 °C) is probably more appropriate. The results obtained in this study could be used in the design of future experiments at other conditions and to assess other quality parameters, e.g., total volatile base nitrogen (TVB-N), thiobarbituric acid reactive substances (TBA-RS), microorganism abundance and sensory attributes, and in subsequent validation trials.

Introduction

Muxama (in Portugal), or mojama (in Spain), is a traditional, highly valued food product prepared from dry-cured tuna loins that is a delicatessen in the southern Iberian Peninsula: Algarve (Portugal) and Andalucía, Murcia, Alicante and Valencia (Spain). Its production process is slightly different among locations. These differences supported the application and recent registration of two Protected Geographical Indications (PGI), Mojama de Barbate [1] and Mojama de Isla Cristina [2], within the European Union's quality schemes for agricultural products and foodstuffs [3] by two municipalities in Andalucía (Spain), namely Barbate and Isla Cristina. Muxama is one of numerous food products that can be obtained from a tuna at the end of the (traditional) quartering of specimens, a.k.a. "ronqueamento" (or "ronqueo") [4-6].

Succinctly, the tuna (mostly Thunnus obesus and T. albacares) loins are salted and dried according to a predominantly artisanal procedure that incorporates empirical knowledge passed down numerous generations since at least the tenth century [6] or even earlier. According to [4,5], native Iberians were already drying and salting fish, particularly tuna, in pre-Roman times (earlier than the second century B.C.) and during Roman rule over Hispania (until the fourth century). The practice was further developed by the Arabs during Al-Andalus (in the eighth and ninth centuries) [7,8]. The preparation of muxama involves a series of steps that are described in [4,6,7] and more recently in [9]. The production process has changed little over the years, but today the tuna used in the production of muxama is fished elsewhere and arrives frozen at the plants [4,10], instead of being fished using an "armação" (or "almadrava"), an off-shore maze of bottom-fixed nets to imprison, capture and hold the fish [11].
Drying is one of the earliest known means of preserving food [12-14], namely fish and other seafood. A preparatory dry salting or brining stage usually precedes it, and the stability of the end-product derives from the reduced water activity and, in some products, a lowered pH. In addition, the drying method has secondary effects on characteristics of flavor, color, and the nutritional value of the products [14,15]. A number of authors have studied and/or reviewed the salting and drying process of fish and other seafood and its effects on various quality parameters of the final products, e.g., [13,16-25]. In what concerns muxama, Barat and Grau [10] observed a clear shortening of the processing time required to obtain muxama with the simultaneous brine thawing and salting of frozen tuna loins.

Our objectives were to describe the dynamics of important physical-chemical parameters such as moisture content, water activity (a_W), NaCl concentration (as water-phase salt and ratio of NaCl incorporation during drying), pH and color, during the salting and drying stages of muxama production, and to test the effect(s) of changes in the traditional processing conditions followed in southern Portugal (Algarve), aiming at optimizing the production procedure.

Results

In Experiment I, the incorporation of salt in tuna loins was accompanied by water loss (Figure 1a-c). Notwithstanding, the patterns of these processes along the salting period were quite different for the outer, exterior (Ext.) portions when compared to the center, interior (Int.) portions. Differences were more prominent for the diffusion of NaCl, which followed a sigmoidal, logistic model in the case of interior portions instead of the expected hyperbolic behavior of Fickian diffusion processes found in exterior portions of the loins (Figure 1a). Initially, salt was incorporated and water was diffused out of the loins at higher rates in the case of exterior portions (Figure 1a,b). In contrast, the distance corresponding to the outer portion is responsible for a delay in the increase of water-phase salt (Z_NaCl) of about 10 h (Figure 1a). These changes in Z_NaCl were statistically modelled using the Zugarramurdi and Lupin [26] model for the exterior portions and the three-parameter logistic model for the interior portions (Table 1). In addition, the final NaCl concentration was higher, and the moisture content and water activity were substantially lower, for exterior portions than for interior portions of the loins (Figure 1a-c).

Table 1. Mathematical models fitted to the parameters (y) for exterior and interior portions of tuna loins during the salting experiment.
Exterior, Zugarramurdi & Lupín [26]: X_0s = −0.009 (0.009), p = 0.9186; X_1s = 0.246 (0.019), p < 0.0001; k_s = 0.100 (0.018), p = 0.0009. Interior, biexponential [27].

Changes in the portions' pH were readily visible and displayed a similar behavior, but pH was slightly higher in the interior portions throughout the salting period (Figure 1). A two-compartment exponential model fitted the data for exterior and interior portions of "loins" during salting (Table 1). Generally, the pH decreased along the experiment, especially during the first hours of salting.

The physical-chemical changes related to the diffusive processes of salt intake and water loss were accompanied by changes in appearance, namely color (Figure 2), that were similar in exterior and interior portions of tuna loins in terms of plain L*, a* and b*, but not in terms of composite color parameters, particularly chroma and saturation. Despite the observed variability, values of L* and a* decreased in interior portions of loins whilst they remained stable in exterior portions. In addition, the changes in chroma and saturation were more obvious in interior portions, which peaked after about 12-16 h salting time.

When studying the drying stage of the traditional process of producing muxama (Experiment II), data on moisture, a_W, NaCl content (as Z_NaCl and ratio of NaCl incorporation during drying, R_NaCl), and color (Commission Internationale de l'Éclairage CIE L*a*b* and derived parameters) were also obtained at important milestones: fresh, raw material; just after salting; and following the drying stage.
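Before turning to those results, note that the kinetic models in Table 1 can be fitted with standard nonlinear least squares. The sketch below assumes the usual first-order form of the Zugarramurdi and Lupín model and a generic three-parameter logistic (our reading of the models named above, not code from the study); the Z_NaCl values are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order approach to the equilibrium salt content, commonly used for fish salting
def zugarramurdi_lupin(t, x0, x1, k):
    return x1 + (x0 - x1) * np.exp(-k * t)

# Three-parameter logistic, as used here for the interior portions
def logistic3(t, asym, k, t0):
    return asym / (1.0 + np.exp(-k * (t - t0)))

t = np.array([0.0, 2.5, 5.0, 10.0, 24.0])           # sampling times, h
z_ext = np.array([0.00, 0.05, 0.09, 0.15, 0.22])    # illustrative Z_NaCl values

popt, _ = curve_fit(zugarramurdi_lupin, t, z_ext, p0=[0.0, 0.25, 0.1])
print(dict(zip(["X0s", "X1s", "ks"], np.round(popt, 3))))
```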
For salted loins, a* = 5.01 (± 0.71) and b* = −0.59 (± 0.79). The further effects of temperature (14 and 20 °C) and time (four and seven days) of the subsequent drying stage on those parameters and other, derived parameters were studied in the context of a two-level factorial experiment. The analysis of variance (ANOVA) results are compiled in Table 2.

Except for a* and Hue angle (H_ab), all other variables were affected by the temperature and the duration of the drying stage (Figures 3 and 4). The combination of high(er) temperature and long(er) time contributes to significantly decrease the moisture content and a_W and to increase the NaCl content (in terms of Z_NaCl and R_NaCl). The additive effects of temperature and time were unique to moisture and Z_NaCl. The moisture content (H) and a_W at the end of drying were significantly lower (28-32 g·100 g⁻¹ and 0.70-0.73, respectively) compared to values obtained just after salting, particularly for loins dried for seven days at 20 °C (H = 26 g·100 g⁻¹ and a_W = 0.707). Moreover, during drying there was a significant increase in the NaCl content, by 2.5-4-fold the concentration determined after 24 h salting (R_NaCl), mainly in the case of samples dried for 7 days at 20 °C. The final NaCl concentration (in terms of Z_NaCl) was 0.15-0.16. Temperature and time also affected other variables in a multiplicative way, i.e., there were significant interaction effects. Color changes were readily visible along the drying of loins, and were conveyed in the significant differences found in color difference (∆E) and chroma/saturation values among treatments. At the higher temperature, the loins were significantly darker after the longer period of drying (seven days), in contrast to what was found for the loins dried only four days. Unexpectedly, no significant changes were found for parameter a*, i.e., in terms of redness, with the loins remaining reddish throughout the drying stage.
Discussion

The incorporation of salt was accompanied, as expected [15], by water loss in Experiment I. In fact, salting is basically a sodium and chloride transport by a diffusion mechanism induced by differences in concentrations and osmotic pressures between the inter-cellular medium and the salting agent [10,13]. Notwithstanding, the observed patterns of these processes were distinct for the exterior and interior portions of the loins.

On one hand, the diffusion length of water and solutes involved in mass transport is supposed to affect the osmotic concentration behavior. Moreover, the rate of salt uptake by food diminishes when equilibrium between the concentration in the salt medium and the food matrix is attained [24]; hence the distinct behaviors of salt uptake observed in exterior and interior portions. In addition, the higher solid gain at/near the surface, and the consequent formation of a solute layer, was probably the cause of the decreased water loss in interior portions, due to a reduction of diffusion [24]. On the other hand, the results might also reflect the fact that frozen-thawed tuna was used herein, since the resultant flesh characteristics and cell structure affect salt diffusion [16,24].

Furthermore, the pH is expected to affect the salting of fish via its effect upon ion (Cl−) diffusion, water loss and, ultimately, osmotic equilibrium [28], due to alterations in the selective permeability of cell membranes. The pH in fresh tuna determined in this study, 5.9-6.0, is consistent with published results [29,30]. Gallart-Jornet et al. [7] report a pH of 5.8 in muxama, close to the values found herein. In contrast, Lã and Vicente [4] found that the pH of muxama from southeastern Algarve (Portugal) was 7.10, higher than that of fresh tuna (5.72), but the authors do not provide an explanation for that result.

We modeled the changes in the abovementioned physical-chemical parameters of tuna loins during the dry-salting stage of the production process using simpler and/or more general equations (Table 1) than those published (e.g., [10]). In contrast, we decided not to model the changes observed in the values of the color parameters, considering the observed variability; still, values of L* and a* decreased in interior portions of loins while remaining relatively constant in exterior portions, and chroma and saturation peaked after 12-16 h of salting.

In Experiment II, we studied the drying stage of the traditional process of muxama production. The values of moisture content and a_W obtained at the end of the drying stage, particularly for loins dried for 7 days at 20 °C, are lower than the moisture values of muxamas reported by Lã and Vicente [4] and Gallart-Jornet et al. [7], 47-50 g·100 g⁻¹, and the a_W measured by Gómez et al. [31] in samples of muxama from Spain, 0.851, and by Lã and Vicente [4] in muxamas from Vila Real de Santo António (southeast Algarve, Portugal), 0.79. Most likely, this was due to a relatively shorter/minimal desalting stage in our experiment that contributed to prolonging salt incorporation/water loss during the following drying stage. The relatively low a_W contributes to the stability of the product, since it is expected to inhibit the development of a number of (pathogenic) microorganisms [28].
The removal of water and the continued penetration of the remaining salt during the drying trials contributed to significantly increase the NaCl content. The final NaCl concentration (in terms of Z_NaCl) was in line with values reported by Lã and Vicente [4], approximately 10% (i.e., a Z_NaCl of about 0.17), and by Gallart-Jornet et al. [7], 7-8 g·100 g⁻¹ (Z_NaCl of 0.13-0.14), for marketed muxamas.

In addition, color changes were readily visible during the drying of loins. The changes were expressed as significant differences found in ∆E and chroma/saturation values among treatments. The values of ∆E calculated herein are greater than 2.3, a value stated as the just noticeable difference [32]. Seemingly, these composite color parameters reflected the changes in L* but not in a*. The color changes might be the result of browning reactions taking place in association with water removal through drying, and of lipid oxidation favored by NaCl. Notwithstanding, according to EU Implementation Regulations 2015/2110 and 2016/199 [1,2], muxama is expected to be dark brown on the outside and deep red on the inside. When cut, it shows varying darker shades at the edges.

Materials and Methods

Tuna loin replicates (30 × 30 × 100 mm, Figure 5) mimicking the parallelepiped shape and size proportions of actual tuna loins were used herein. These loins were prepared from fresh tuna (Thunnus sp.) loins acquired at the fish market in Faro (Algarve, Portugal) and frozen using a blast and fluid bed freezer (Armfield Ltd., Ringwood, England) until the core temperature reached −20 °C. Before each experiment, stored frozen loins were thawed (overnight) in air inside a walk-in cooler at 4 °C until a core temperature of 0 °C was attained. Two distinct, successive experiments were carried out, each conducted once with n = 2 per sampling time.

Experiment I: To study the dynamics of salting, ten loins were stacked in alternating layers of fish and solid salt (1:1 w/w) in a polystyrene box for up to 24 h. Two loins were sampled at the start of the experiment (0 h) and after 2.5, 5, 10 and 24 h of dry-salting. From each loin, the inner, center portion was separated from the outer, exterior section (Figure 5). The concentration of NaCl, the a_W, the moisture content (g·100 g⁻¹) and the pH were determined for the two portions (in duplicate) using, respectively, a chlorides-selective probe (Crison, Barcelona, Spain) connected to a potentiometer (Crison), an a_W-meter (Rotronic HygroLab 3, Bassersdorf, Switzerland), and a pH-meter (Crison). The NaCl concentration was handled herein as the water-phase salt concentration,

Z_NaCl = X_NaCl / (X_NaCl + X_W),

where X_NaCl is the concentration of NaCl (g·100 g⁻¹) and X_W is the moisture content (g·100 g⁻¹), because salt content is meaningful for sensory perception and favors (deleterious) enzymatic or bacterial reactions when in solution [16,33]. Moreover, color measurements (6 per loin) were carried out directly on the samples using a tri-stimulus colorimeter (Hach Lange Spectro-Color, Dusseldorf, Germany) and the CIE L*a*b* color scale (Commission Internationale de l'Éclairage CIE, Vienna, Austria). Composite color descriptors, color difference (∆E), chroma (C), saturation (S_ab) and Hue angle (H_ab), were derived from the CIE L*a*b* parameters [32,34].
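Before describing Experiment II, a small sketch of the quantities just defined may help; it assumes the common CIE76 definition of the color difference, and the example inputs are illustrative values, not measured data.

```python
import math

def z_nacl(x_nacl: float, x_w: float) -> float:
    """Water-phase salt concentration from NaCl and moisture contents (g/100 g)."""
    return x_nacl / (x_nacl + x_w)

def delta_e(lab1, lab2):
    """CIE76 colour difference between two L*a*b* triplets."""
    return math.dist(lab1, lab2)

def chroma(a: float, b: float) -> float:
    """Chroma C* from a* and b*."""
    return math.hypot(a, b)

print(f"Z_NaCl = {z_nacl(10.0, 50.0):.2f}")    # ~0.17, as in marketed muxama
print(f"dE = {delta_e((40, 5.0, -0.6), (30, 5.5, 2.0)):.1f}")
```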
Experiment II: To study the drying stage of muxama production, loins were stacked in alternating layers of fish and solid salt (1:1 w/w) in a polystyrene box for 24 h and then hung to dry at 14 or 20 °C for four or seven days, following a two-level factorial design.

Figure 1. Changes in (a) NaCl concentration (as water-phase salt concentration Z_NaCl); (b) moisture content; (c) water activity (a_W) and (d) pH in the interior (Int., empty circles ○ and dotted line) and exterior (Ext., filled circles • and continuous line) portions of tuna loins along the 24 h period of salting. Lines depict the models described in Table 1 and shaded areas correspond to 95% confidence intervals.

Figure 2. Changes in color (a-c) Commission Internationale de l'Éclairage CIE L*a*b* and derived parameters: (d) Chroma; (e) Saturation; and (f) Hue angle in the interior (Int., empty circles ○ and dotted line) and exterior (Ext., filled circles • and continuous line) portions of tuna loins along the 24 h period of salting. Non-parametric smoothing curves (splines) are shown for illustrative purposes only.
[…] (±0.71), a* = 5.01 (±0.71) and b* = −0.59 (±0.79) for salted loins. The further effects of the temperature (14 and 20 °C) and time (four and seven days) of the subsequent drying stage on those parameters and other, derived parameters were studied in the context of a two-level factorial experiment. The analysis of variance (ANOVA) results are compiled in Table 2.

Figure 3. Interaction plots for (a) moisture; (b) water activity (a_W); (c) ratio of NaCl incorporation during drying (R_NaCl); and (d) Z_NaCl content of muxama obtained from portions of tuna loins previously dry-salted for 24 h and dried at 14 or 20 °C (black squares and red triangles, respectively) for four or seven days (individual data points are presented as filled circles).

Figure 4. Interaction plots for color (a,c) CIE L* and b* and derived parameters (b) saturation (S_ab) and (d) color difference (∆E) of muxama obtained from portions of tuna loins previously dry-salted for 24 h and dried at 14 or 20 °C (black squares and red triangles, respectively) for four or seven days (individual data points are presented as filled circles).

Figure 5. Illustration of the size and shape of the tuna loins used in the salting-drying experiments carried out in this study. In Experiment I, (A) exterior and (B) interior portions of the loins were sampled and analyzed, whereas complete loins were used in Experiment II. See main text for further details.

Table 1. Mathematical models fitted to the parameters (y) for exterior and interior portions of tuna loins during the salting experiment.
2019-09-23T11:36:51.612Z
2018-01-09T00:00:00.000
{ "year": 2018, "sha1": "4a8e4b8281c9e6a09efe40fc25dc13f9380094a8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2410-3888/3/1/3/pdf?version=1515512326", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "4c48d8c136e7a9b1b62d9ac2f98ab354bb918006", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
224849462
pes2o/s2orc
v3-fos-license
Recent Development of Microfluidic Technology for Cell Trapping in Single Cell Analysis: A Review

Microfluidic technology has emerged from MEMS (Micro-Electro-Mechanical Systems) technology as an important research field. During the last decade, various microfluidic technologies have been developed to open up a new era for biological studies. To understand the function of single cells, it is very important to monitor the dynamic behavior of a single cell in a living environment, and cell trapping for single cell analysis is therefore urgently demanded. There have been some review papers focusing on drug screening and cell analysis; however, cell trapping in single cell analysis has rarely been covered in previous reviews. The present paper focuses on recent developments in cell trapping and highlights the mechanisms, governing equations and key parameters affecting the cell trapping efficiency of contact-based and contactless approaches. The applications of the cell trapping methods are discussed according to their basic research areas, such as biology and tissue engineering. Finally, the paper highlights the most promising cell trapping method for this research area.

Introduction

In the past decade, single cell analysis has received significant attention from the research community due to its wide applications in pharmaceuticals [1], biology [2], healthcare [3] and tissue engineering [4]. Many studies have shown that individual cells, even those that are identical in morphology, exhibit intercellular variations due to differences in their micro-environmental conditions and gene expression [5,6]. Previous cell analysis was performed on large populations of cells, which reflected average values derived from the bulk cell response [7]. The bulk cell analysis approach ignored the characteristics of individual cells, and these limitations have motivated the development of single cell analysis. In contrast to bulk cell analysis, single cell analysis reveals significant physiological characteristics of an individual cell, such as metabolism [8], protein levels [9] and gene expression [10].

Progress in single cell analysis depends on the development of tools and equipment which allow new insights into the cell. Since the invention of the microscope, single cell analysis has been successfully carried out. Many new illumination, staining and detection methods, such as flow cytometry [11], have been developed in order to increase the optical resolution of the microscope and observe the behavior of a single cell [12]. With these methods, single cell analysis has become feasible. However, conventional methods of observing a single cell have limited performance in terms of standardized reproducibility and high throughput, since the accuracy of a traditional single cell […]

In the governing equation for particle motion, $\tau_p$ is the particle response time to changes in the flow field and $\alpha$ is the density ratio between the fluid and the particle; $\vec{F}_R$ represents the Brownian random force and $\vec{F}_S$ the spring force. The embedded obstacles play an important role in changing the external forces exerted on the cells, such as the Stokes drag and pressure-gradient forces. Once target cells are trapped at hydrodynamic trapping locations, they can be further employed for various studies. Typically, cell trapping obstacles include walls or pores with various shapes, and arrays consisting of a pattern of such trapping obstacles can be used to realize cell trapping.
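The equation of motion itself is not reproduced here, but the scale of $\tau_p$ can still be illustrated. A minimal Python sketch follows, assuming the usual Stokes-flow response time $\tau_p = \rho_p d_p^2/(18\mu)$; the numerical values are illustrative, not taken from the review:

```python
def stokes_response_time(rho_p: float, d_p: float, mu: float) -> float:
    """Particle response time tau_p = rho_p * d_p**2 / (18 * mu)
    for a small sphere in Stokes (creeping) flow.
    rho_p: particle density [kg/m^3], d_p: diameter [m], mu: viscosity [Pa s]."""
    return rho_p * d_p ** 2 / (18.0 * mu)

# Illustrative values: a 15-um cell-like sphere in water at room temperature.
tau_p = stokes_response_time(rho_p=1050.0, d_p=15e-6, mu=1.0e-3)
alpha = 1000.0 / 1050.0  # fluid-to-particle density ratio
print(f"tau_p = {tau_p:.2e} s, alpha = {alpha:.3f}")
# tau_p ~ 1e-5 s: the cell relaxes to the carrier flow almost instantaneously,
# which is why trap geometry, not particle inertia, dominates hydrodynamic trapping.
```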
The development of microfluidic devices has driven the evolution of various hydrodynamic trapping devices. Chen et al. [49] designed an integrated microfluidic device for particle arrangement and isolation (Figure 1). The device was able to selectively immobilize desired microparticles in an array of hydrodynamic traps based on three different physical characteristics: size, elastic modulus and internal structure. A scaling theory based on the particle and trap dimensions, the particle elastic modulus and the applied pressure was also developed to define the criterion for particle parking. In this theory, $r_p$ is the radius of the particle, $r_c$ is the half-width of the trap entrance, $x_m$ is the height (as shown in Figure 1c), $\mu$ is the friction coefficient between the flow channel and the particle, and $\kappa = 4\left[x_m - r_c \tan^{-1}(x_m/r_c)\right]$; additionally, $h_c$ and $h_p$ are the channel height and particle height, respectively, and $C$ is a correction factor representing the deformation of the flow channel. The critical pressure marks the threshold at which the trapping process is realized, and trapping can be achieved for particles of different sizes and stiffnesses. The isolation efficiency can reach as high as 95%. This setup can potentially be employed for trapping soft and biological objects.
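To make the geometric factor concrete, here is a minimal sketch that evaluates κ; the bracketing 4[x_m − r_c tan⁻¹(x_m/r_c)] is my reading of the garbled source expression, and the dimensions are purely illustrative, not Chen et al.'s:

```python
import math

def kappa(x_m: float, r_c: float) -> float:
    """Geometric factor kappa = 4 * (x_m - r_c * atan(x_m / r_c))
    from the particle-parking scaling theory (bracketing assumed)."""
    return 4.0 * (x_m - r_c * math.atan(x_m / r_c))

# Illustrative trap dimensions in micrometers.
for x_m_um, r_c_um in [(5.0, 2.0), (8.0, 2.0), (8.0, 4.0)]:
    print(f"x_m={x_m_um} um, r_c={r_c_um} um -> kappa={kappa(x_m_um, r_c_um):.2f} um")
# kappa grows with the trap height x_m and shrinks as the entrance half-width r_c
# widens, shifting the critical parking pressure accordingly.
```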
Zhu et al. [50] presented a proof-of-concept microfluidic device for the immobilization, culturing and imaging of zebrafish embryos. A schematic illustration and an actual photo of their microfluidic device are shown in Figure 2. The device consisted of a flat glass substrate and two layers of polydimethylsiloxane (PDMS) structures replicated from 3D-printed masters. An embryo-culturing channel and five traps were embedded in the bottom PDMS layer to load and capture embryos. The working procedure is as follows. The first step is to introduce an embryo through the embryo inlet. The second step is to insert a PTFE (polytetrafluoroethylene) plug to avoid leakage of the working fluid. The third step is to introduce the working fluid into the device by a syringe pump. The fourth step is to tilt the device slightly, enabling the embryo-trapping function to capture embryos one by one. The shear stress on the immobilized embryos was estimated by using Computational Fluid Dynamics (CFD) simulations [51]. This device could potentially be applied to monitor dynamic embryonic development.

Fan et al. [52] designed a microfluidic device using flow resistance to achieve highly efficient single-cell capture and analysis. A schematic illustration of their microfluidic device is shown in Figure 3a. Each single cell was encapsulated in a micro-droplet formed at the T-junction (Figure 3b). The micro-droplets initially moved along the main channel at an approximately constant velocity; however, once a micro-droplet was blocked in the main channel of a "main-bypass" structure unit, the remaining micro-droplets would travel through the bypass channel into the subsequent "main-bypass" unit, where another droplet would be trapped (Figure 3c). This trapping process would repeat itself and continue. After that, the trapped cells could be further observed and analyzed (Figure 3d). The cell trapping efficiency of this device was up to 60%, achieved by controlling the injection process.
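The "main-bypass" behavior can be illustrated with a lumped hydraulic-resistance sketch. This is my illustration of the general principle rather than Fan et al.'s actual geometry, and it uses the standard low-aspect-ratio approximation for a rectangular microchannel:

```python
def rect_channel_resistance(mu: float, L: float, w: float, h: float) -> float:
    """Approximate hydraulic resistance of a rectangular microchannel (h < w):
    R ~ 12 * mu * L / (w * h**3 * (1 - 0.63 * h / w))  [Pa s / m^3]."""
    assert h <= w, "formula assumes the height is the short side"
    return 12.0 * mu * L / (w * h ** 3 * (1.0 - 0.63 * h / w))

mu = 1.0e-3                                                          # water, Pa s
R_main = rect_channel_resistance(mu, L=200e-6, w=60e-6, h=30e-6)     # short trap path
R_bypass = rect_channel_resistance(mu, L=800e-6, w=60e-6, h=30e-6)   # longer loop

# Flow splits inversely with resistance: with the trap empty, most flow (and the
# first droplet) takes the low-resistance main path.
print(f"main/bypass flow ratio (trap empty): {R_bypass / R_main:.1f}")
# Once a droplet plugs the main slot, R_main effectively diverges and the
# following droplets are diverted through the bypass to the next unit.
```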
Similar to the design of Fan et al., Xu et al. [53] developed a microfluidic device with double-slit arrays, as shown in Figure 4a. The microfluidic device consisted of an inlet reservoir, support and disperse pillars, a micro-array, an outlet channel and an outlet reservoir; the double-slit structure of the micro-array is shown in Figure 4b. The effects of different combinations of the flow velocity, the fluid pressure and the stress on the cells on the cell trapping efficiency were also investigated in detail in their work. This built on previous work, which found that double-slit arrays perform better than single-slit and seamless structures [54]. The geometric effect was employed to optimize the stress that the cells suffered. The trapping efficiency was found to be dependent on the flow velocity, the fluid pressure and the equivalent stress of the cells, and reached up to 70%.

Zhu et al. [55] also developed a microfluidic device for cell trapping based on the hydrodynamic method.
For the first step, they tested the capabilities of cell trapping in three types of microstructures, as shown in Figure 5a. In these three designs, the gaps were embedded at different locations around the pillars. It was found that the microstructure of type C was the most efficient in cell trapping, as shown in Figure 5a(ii), where the blue and red bars represent the trapping efficiency without and with reversed flow, respectively. The trapping efficiency of type C with reversed flow is about 90%, which is much higher than that of types A and B. After that, a face-to-face heart-shaped microstructure was developed to carry out cell trapping using the type C structure (Figure 5b). Oil was employed in the isolation chambers to reduce cross-talk. To study the trapping mechanism, numerical simulations were carried out to obtain the flow velocity and shear stress through the pillars. A shadow area with relatively low flow velocity was found behind the pillar, which supported the hypothesis that the cell could be trapped there. The efficiencies of cell trapping and cell pairing were 93% and 84%, respectively. The height of the gap and the height of the pillar were varied in the numerical simulations to investigate their effect on the efficiency of cell pairing.

Dielectrophoresis (DEP) Actuated Cell Trapping

Dielectrophoresis (DEP) is an effective way to realize cell trapping in microfluidic devices. The principle of DEP-actuated cell trapping is to employ the DEP force imposed on a dielectric particle/cell. In a non-uniform electric field, the strength of the DEP force is strongly dependent on the magnitude and polarity of the charges induced on the particle/cell. For a spherical cell, the time-averaged DEP force takes the classical form

$\vec{F}_{DEP} = 2\pi \varepsilon_e a^3\, \mathrm{Re}[K(f)]\, \nabla |\vec{E}|^2, \qquad K(f) = \frac{\varepsilon_p^* - \varepsilon_e^*}{\varepsilon_p^* + 2\varepsilon_e^*}, \qquad \varepsilon^* = \varepsilon - j\,\frac{\sigma}{2\pi f},$

where $\varepsilon_e$, a, f, E and σ represent the permittivity of the medium, the radius of the cell, the frequency, the electric field strength and the conductivity, respectively, and K(f) is the Clausius-Mossotti factor.

Aslan and Kulah [56] designed a portable microfluidic DEP device to realize DEP-actuated cell trapping and a CMOS (Complementary Metal Oxide Semiconductor) imaging function. The manufacturing process of their microfluidic DEP device is shown in Figure 6 (RIE is short for Reactive-Ion Etching). To realize the CMOS imaging function, glass was employed as the substrate due to its transparency. The channel was made of parylene on account of its biocompatibility and bio-stability. An AC (Alternating Current) signal was applied to the electrodes of the microfluidic DEP device to achieve trapping of MCF-7 (Michigan Cancer Foundation-7) breast cancer cells. The proposed DEP device could potentially be used where equipment is limited. The counting accuracy was up to 90%, as reported in the paper, and depended on the conductivity and angular frequency of the electric field.
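The dependence on conductivity and frequency enters through the Clausius-Mossotti factor K(f). The minimal sketch below (with illustrative cell and buffer properties, not values from the cited papers) shows how the sign of Re[K] sets the pDEP/nDEP regimes exploited by the devices discussed here:

```python
import math

def clausius_mossotti(f_hz, eps_p, sig_p, eps_e, sig_e):
    """Real part of the Clausius-Mossotti factor K(f) for a homogeneous sphere.
    eps_*: absolute permittivities [F/m], sig_*: conductivities [S/m]."""
    w = 2.0 * math.pi * f_hz
    ep = complex(eps_p, -sig_p / w)  # complex permittivity of the particle
    ee = complex(eps_e, -sig_e / w)  # complex permittivity of the medium
    return ((ep - ee) / (ep + 2.0 * ee)).real

EPS0 = 8.854e-12
# Illustrative, not paper-specific: a cell-like sphere in a low-conductivity buffer.
for f in (1e5, 1e7, 1e8, 1e9):
    k = clausius_mossotti(f, eps_p=60 * EPS0, sig_p=0.5, eps_e=78 * EPS0, sig_e=0.01)
    print(f"f = {f:.0e} Hz -> Re[K] = {k:+.3f} ({'pDEP' if k > 0 else 'nDEP'})")
# With these values Re[K] decays from ~+0.94 and changes sign between 1e8 and
# 1e9 Hz: positive pulls the cell to field maxima (pDEP), negative pushes it
# toward field minima (nDEP).
```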
Takeuchi et al. [57] developed an electro-active micro-well array with barriers (EMAB) to realize highly efficient single cervical cell trapping. A schematic illustration of the EMAB device is shown in Figure 7. Patterned electrodes were embedded at the bottom of cell-sized micro-wells to achieve cell trapping. A cell could be trapped in a micro-well by applying a sinusoidal electric potential (peak-to-peak voltage V_pp = 5 V at 1 MHz) to the electrodes. With the help of barriers located beside the micro-wells, cell holding could be realized even after shutting off the DEP, as shown in Figure 7a. An actual image of the EMAB microfluidic device is shown in Figure 7b; as shown in its yellow region, the microfluidic channel consisted of many microwells and barriers, each microwell containing a pair of ITO electrodes at the bottom to achieve the cell trapping and holding functions. The microfluidic device could be employed for cell trapping, staining and imaging (Figure 7c). The cell trapping efficiency was up to 92%, which was determined by the permittivity of the medium.

Puri et al. [58] presented a C-serpentine microchannel to achieve the trapping and separation of live and dead yeast cells (Saccharomyces cerevisiae) through DEP. A schematic illustration of the geometry is shown in Figure 8a,b. The C-serpentine geometry was employed to generate a gradient distribution of the electric field. To specify the electric field distribution in the geometry, a multi-shell yeast model was employed, consisting of three concentric layers: wall, membrane and nucleus (Figure 8c). Due to the differences in the electric conductivity of the cell membrane, the live and dead yeast cells would be driven to the pDEP and nDEP regions, respectively (Figure 8d). An average trapping efficiency of 97.9% for dead cells and 93.4% for live cells was obtained, which was determined by the applied voltage.

Fritzsch et al. [59] demonstrated contactless cell trapping with the octupole technology, using DEP. This technology can be applied in miniaturized octupole cytometry.
Compared with the traditional cytometry approach, the proposed approach could trap the targeted cells for further analysis. To investigate the trapping efficiency of single cells, three different octupole nDEP field control modes were employed: ACB (non-rotating octupolar field), ACC (non-rotating quadrupolar field) and ROTX (rotating quadrupolar field). It was found that cells could be efficiently trapped under the ROTX mode. Contactless cell trapping was thus realized using the octupole technology, independent of cell size and morphology.

Chen et al. [60] reported a microfluidic chip for trapping Shewanella oneidensis bacteria at the cell level, using the positive DEP (pDEP) effect. A schematic illustration of the experimental setup is provided in Figure 9. The bacteria were first injected into the microfluidic chip using a syringe pump; HEPES (4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid) is a zwitterionic sulfonic acid buffering agent. As the bacteria traveled in the microchannel, the trapping process could be captured by a fluorescence microscope combined with a CCD (charge-coupled device) camera. This device demonstrated the possibility of trapping bacteria at the cell level.

Magnetophoresis Actuated Cell Trapping

Magnetophoresis is another effective way to realize cell trapping in microfluidic channels. The principle of magnetically actuated cell trapping is to apply a magnetic force on a particle/cell. This kind of trapping method can be further classified into positive and negative magnetophoresis: positive magnetophoresis is the migration of a magnetic particle/cell in a diamagnetic medium, while negative magnetophoresis occurs in a magnetic medium [61].
In addition, if the susceptibility of the particle/cell is larger than that of the ambient medium, positive magnetophoresis will also occur. The magnetic force imposed on a particle suspended in a fluid medium is

$\vec{F}_m = \frac{V(\chi - \chi_m)}{2\mu_0}\,\nabla|\vec{B}|^2,$

where V is the volume of the particle, χ is its magnetic susceptibility, χ_m is the susceptibility of the surrounding medium, µ_0 is the magnetic permeability of air and B is the magnetic flux density.

Scherr et al. [62] developed a two-magnet microfluidic setup to achieve high-efficiency trapping for biofluids. In this configuration, the trapping process was realized by using two magnets instead of one. The experimental bead distributions in a stationary tube are shown in Figure 10. In the one-magnet configuration, the beads dispersed along one side wall only; in the presence of the second magnet, the beads dispersed along both side walls. The interaction area between the beads and the fluid was much larger than in the single-magnet configuration, and the bead volume fraction was found to increase three-fold compared with the one-magnet system. The bead volume fraction was determined by the magnetic field.
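Plugging representative numbers into the force expression above indicates the scale of force available to these magnet layouts. A minimal sketch, with illustrative bead and field-gradient values (not taken from the cited studies):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [T m/A]

def magnetophoretic_force(radius, chi_p, chi_m, grad_B2):
    """F = V * (chi_p - chi_m) / (2 * mu0) * grad(|B|^2), for a sphere.
    radius [m], susceptibilities dimensionless (SI), grad_B2 [T^2/m]."""
    V = 4.0 / 3.0 * math.pi * radius ** 3
    return V * (chi_p - chi_m) / (2.0 * MU0) * grad_B2

# Illustrative: a 1-um magnetic bead (chi ~ 1) in water near a small NdFeB
# magnet where |B|^2 drops by ~0.25 T^2 over ~5 mm, i.e. grad(|B|^2) ~ 50 T^2/m.
F = magnetophoretic_force(radius=0.5e-6, chi_p=1.0, chi_m=-9e-6, grad_B2=50.0)
print(f"F ~ {F:.2e} N")  # ~1e-11 N, i.e. ~10 pN, enough to deflect beads in flow
```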
Kirby et al. [63] performed cell separation and trapping using magnetic and centrifugal forces, which were combined in a centrifuge-magnetophoretic microfluidic device. Six chambers were embedded in the disk-shaped device, and three magnets were located beside each chamber. As the disk rotated, a centrifugal force was introduced, and the combination of the magnetic and centrifugal forces resulted in the separation and trapping of particles/cells of different sizes. This centrifugal microfluidic platform could be used for the separation and trapping of blood cells and tagged cancer cells.

Guo et al. [64] developed a magnetically controlled microfluidic device to trap magnetically tagged Salmonella typhimurium (Figure 11). In this microfluidic device, a sample stream and a buffer stream were injected into the microflow channel. The magnetically tagged Salmonella typhimurium were separated by the lateral magnetic force and led toward the patterned nickel array for trapping. For cell trapping using the positive magnetophoresis technique, the major difficulty was the accumulation of magnetic particles/cells into a cluster, which would block the microflow channel; to solve this problem, researchers tried to regulate the gradient of the magnetic field. The trapping efficiency was affected by the magnetic force and the drag force.

Huang et al. [65] embedded microwells in a microfluidic device to achieve an immunomagnetic single cell trapping function (Figure 12). A layer consisting of microwells was embedded between the microchannel and the magnet. Due to the presence of the microwells, a uniform distribution of the magnetic field in the microfluidic device could be achieved. The single-particle trapping efficiency in the microwell can be up to 62%, and the purity can be up to 99.6%. Immunomagnetically labeled THP-1 cells were employed to demonstrate the feasibility of the microfluidic device. The trapping efficiency was supposed to be affected by the magnetic susceptibility of the Dynabeads occupying the microwell.
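As noted for the Guo et al. device, whether a tagged cell is held depends on the competition between the magnetic force and the Stokes drag. A minimal order-of-magnitude sketch of that balance (illustrative values only, not parameters from the cited studies):

```python
import math

MU0 = 4e-7 * math.pi

def stokes_drag(mu, radius, velocity):
    """Stokes drag on a sphere: F = 6 * pi * mu * a * v."""
    return 6.0 * math.pi * mu * radius * velocity

def magnetic_force(radius, d_chi, grad_B2):
    """Magnetophoretic force F = V * d_chi * grad(|B|^2) / (2 * mu0)."""
    V = 4.0 / 3.0 * math.pi * radius ** 3
    return V * d_chi * grad_B2 / (2.0 * MU0)

# A tagged bacterium (effective radius ~1 um, susceptibility contrast ~0.1)
# is held only while the magnetic force exceeds the drag at the local flow speed.
a = 1e-6
F_mag = magnetic_force(a, d_chi=0.1, grad_B2=50.0)
v_max = F_mag / (6.0 * math.pi * 1.0e-3 * a)  # speed at which drag equals F_mag
print(f"F_mag ~ {F_mag:.1e} N, holds cells up to ~{v_max * 1e3:.2f} mm/s local flow")
```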
If the susceptibility of the particle/cell is smaller than that of the ambient medium, negative magnetophoresis will occur. Hejazian and Nguyen [66] reported a new method for the size-selective trapping of non-magnetic particles, using diluted ferrofluid as the working fluid. A schematic illustration of the microchannel is shown in Figure 13a. Two arrays of magnets were embedded at the two opposite sides of a straight microchannel. The minimum and maximum of the simulated magnetic field were labeled dark blue and light blue, respectively. Figure 13b shows the experimental trapping results for small and large particles at the magnetic field maxima and minima: the red spots stand for the small particles, while the green spots stand for the large particles. The physics behind the phenomena was explained by the combination of three kinds of forces, namely the hydrodynamic, negative magnetophoretic and magnetoconvective forces (Figure 13c), which together determined the trapping efficiency.

Wang et al. [67] employed a micro-magneto-fluidic technique to trap bacteria suspended in a flowing fluid (Figure 14a). A thermal bonding technique was employed to build the microfluidic channel from poly(methyl methacrylate) (PMMA). An island was embedded at the center of the microchannel, as shown in Figure 14b. The magnetic nanoparticles in the ferrofluid were magnetite (Fe₃O₄).
Polybeads were employed to predict the behavior of the bacteria, due to their similar size. The trapping behavior under the combination of ferrofluid and magnetic field is shown in Figure 14c, and the variation in the (x, y) coordinates of trapped bacteria with time under the applied magnetic field is shown in Figure 14d. The effects of the applied magnetic field, the duration of its application and the fluid flow rate on the trapping efficiency were investigated systematically.

Instead of placing multiple pairs of magnets along a straight flow channel [68], Zhou et al. [69] placed the magnet near a T-junction microchannel (Figure 15a), along the centerline of the main branch of the T-junction. The flow direction is indicated by the blue color. The diamagnetic particles were trapped along the side wall of the main branch, while magnetic particles were trapped along the wall of the side branch, indicating the coexistence of negative and positive magnetophoresis (Figure 15b). A 3D numerical model was developed to simulate the trapping procedure, and the effect of the ferrofluid was found to be important.

Wilbanks et al. [70] investigated the effects of magnet asymmetry on the trapping performance of diamagnetic particles in ferrofluid flow. The magnetic configuration is shown in Figure 16a. The asymmetric magnets were embedded at the two sides of the microflow channel to realize the trapping function (Figure 16b); the dimensions of the microchannel with the magnets are provided in Figure 16c. Under the influence of magnet asymmetry, a circular streamline shape of trapped particles was obtained, and the trapping performance was found to be dependent on the asymmetry of the magnets.

Gertz and Khitun [71] investigated the trapping of red blood cells (RBCs), using magnetic nanoparticles.
A schematic illustration of the experimental setup is shown in Figure 17a. Two Cu wires covered by silicon dioxide were embedded in the working device, and a power supply was employed to provide a micro-electromagnet field to trap the RBCs. Without activation of the current, the RBCs were randomly distributed in the channel; once the current was on, the RBCs were trapped around the wire (Figure 17b,c). The strength of the magnetic field influenced the trapping procedure significantly.

Optical Tweezers

The optical tweezer has been regarded as an effective method to trap cells/particles at the microscale [43]. An optical tweezer employs a focused laser beam to induce an optical force that realizes the trapping function. However, the capture efficiency of optical tweezers is sometimes not high, due to the low refractive index contrast of some biological cells [72,73], and it remains a challenge to improve the trapping efficiency of biological cells using optical tweezers [74,75]. Recently, some researchers have employed the thermal effect induced by optical absorption to enhance the trapping efficiency; the combination of the thermal effect and natural convection flow could trap the cells in a hotter region [76]. The optical force imposed on a single particle is obtained by integrating the time-averaged Maxwell stress tensor over a surface enclosing the particle,

$\vec{F} = \oint_S \overleftrightarrow{T}_M \cdot \vec{n}\,\mathrm{d}S, \qquad \overleftrightarrow{T}_M = \frac{1}{2}\,\mathrm{Re}\left[\varepsilon \vec{E}\vec{E}^* + \mu \vec{H}\vec{H}^* - \frac{1}{2}\left(\varepsilon|\vec{E}|^2 + \mu|\vec{H}|^2\right)\overleftrightarrow{I}\right],$

where $\vec{n}$ is the surface normal vector, $\overleftrightarrow{I}$ is the unit dyadic, ε and µ are the electric permittivity and magnetic permeability of the surrounding medium, $\vec{E}$ is the electric field and $\vec{H}$ is the magnetic field.

Li et al. [77] reported a new type of optical tweezer that makes use of thermophoresis and natural convection flow to trap and arrange erythrocytes (Figure 18). A schematic illustration of the thermophoresis and natural convection flow is shown in Figure 18a. Induced by optical absorption through a fiber, a hot zone formed on the quartz plate, resulting in a temperature gradient on the working plate. The thermophoresis and natural convection flow under low incident power could trap erythrocytes effectively. In addition, the optical scattering force under high incident power could be employed to arrange the erythrocytes efficiently (Figure 18b). The erythrocytes were trapped and arranged over a long distance, without injury. The enlarged view and SEM (scanning electron microscope) image of the graphene-coated microfiber probe (GCMP) are provided in Figure 18c,d. Figure 18e presents the schematic diagram of the experimental setup: a 980 nm laser was focused on the GCMP through the fiber, and a CCD camera was mounted on the top to monitor the trapping and arrangement process. The AFM (atomic force microscopy) image of an erythrocyte is shown in Figure 18f.

Sipova et al. [78] also employed the photothermal effect, to probe DNA films. DNA cargo from individual gold nanoparticles was successfully trapped and manipulated by optical tweezers. The trapping procedure was affected by the natural convection flow and the thermophoretic force on the particles.
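For particles much smaller than the laser wavelength, the stress-tensor integral reduces to the familiar Rayleigh gradient-force approximation, which makes the low-index-contrast limitation explicit. A minimal sketch using that standard formula (the bead, cell and beam values are illustrative, not from the cited work):

```python
import math

C = 3.0e8  # speed of light [m/s]

def polarizability_factor(m: float) -> float:
    """Clausius-Mossotti-like factor (m^2 - 1) / (m^2 + 2), m = n_p / n_m."""
    return (m ** 2 - 1.0) / (m ** 2 + 2.0)

def gradient_force(n_m: float, a: float, m: float, grad_I: float) -> float:
    """Rayleigh-regime gradient force F = (2*pi*n_m*a^3/c) * (m^2-1)/(m^2+2) * grad(I).
    n_m: medium index, a: particle radius [m], grad_I: intensity gradient [W/m^3]."""
    return 2.0 * math.pi * n_m * a ** 3 / C * polarizability_factor(m) * grad_I

# Illustrative: 10 mW focused to a ~1 um waist gives I ~ 3e9 W/m^2 over ~1 um.
grad_I = 3.0e9 / 1.0e-6
for name, m in [("polystyrene bead", 1.59 / 1.33), ("cell", 1.37 / 1.33)]:
    F = gradient_force(n_m=1.33, a=0.5e-6, m=m, grad_I=grad_I)
    print(f"{name}: F_grad ~ {F:.1e} N")
# The cell's weak index contrast (m ~ 1.03) gives a several-fold smaller force
# than the bead, which is exactly the low-contrast limitation noted above.
```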
Liu et al. [79] developed a microfluidic device to selectively trap Escherichia coli cells in a human blood solution, based on size and shape. A fiber optical tweezer was embedded in a T-type microflow channel to realize the E. coli trapping function (Figure 19). With the help of the optical tweezer, the E. coli cells were selectively trapped at the tip of the optical fiber tweezer. The trapping efficiency of E. coli was 39.5%, and the separation efficiency was 100%. The optical force played an important role in the trapping and separation process.

Lee et al. [80] employed optical trapping and microfluidics to investigate the mechanism of red blood cell (RBC) aggregation. A schematic illustration of the experimental chamber is shown in Figure 20, including the top view (a) and side view (b). Solution 2 (S2) was introduced into the microfluidic flow channel through a pressure supply, while the cells were located in the larger chamber containing Solution 1 (S1). An optical tweezer was employed to move the trapped cells into S2. Evidence for a cross-bridge-induced interaction of cells was observed in the experiment, and the initial solution played an important role in measuring the cell-interaction strength.

Pilat et al. [81] developed a promising multifunctional microfluidic device for assessing optical trapping experiments quantitatively (Figure 21). The layout of the microfluidic device was designed to guarantee that the cells could not flow out of the chamber, due to their low diffusion rate.
A benchmark for safe and non-invasive optical trapping of Saccharomyces cerevisiae could be achieved by using this configuration.

Zhang et al. [82] proposed and demonstrated a hollow annular-core fiber (HACF)-based optical tweezer for living cell trapping and sterile transport (Figure 22). A microfluidic channel was embedded in the optical fiber, allowing cells/particles to flow through the flow channel. The competition between the optical trapping forces (OTF) and the liquid viscous resistances (LVR) determined the trapping location and moving trajectory.

Liu et al. [83] proposed and demonstrated an optofluidic strategy to trap and transport cell chains using a large-tapered-angle fiber probe (LTAP). In their research, Escherichia coli cells, yeast cells and red blood cells were used to study the feasibility of this approach. Their strategy employed the combination of optical force and flow drag force. The experimental results of trapping and transporting an E. coli cell chain are shown in Figure 23. The trapping procedure could be controlled by adjusting the laser power and flow velocity.

Qi et al. [84] employed an optical tweezer and microfluidic devices to trap and sort denitrifying anaerobic methane oxidizing (DAMO) microorganisms.
This technique showed many advantages, such as high purity, low infection rates and no harm to cell viability. A schematic illustration of the chip design is shown in Figure 24. A mixed culture and a buffer solution were introduced into the microfluidic channel, and the optical tweezer was employed at the outlet to trap a target DAMO cell (marked with a black circle) and transport it into the collection channel. This technique could potentially be used for slow-growing microorganisms.

Acoustic Trapping

Another method for the active trapping of cells is acoustically actuated cell trapping. Ultrasonic standing waves (USWs) can be employed for contactless cell trapping in microfluidic channels, and acoustic trapping is widely used in microfluidic systems for cell trapping, transportation and manipulation. Yin et al. [85] proposed a particle-based cell manipulation method employing acoustic radiation forces, as shown in Figure 25. Three typical types of particles were selected in their investigation: poly(lactic-co-glycolic acid) (PLGA) microspheres, silica-coated magnetic microbeads and polydimethylsiloxane (PDMS) microspheres. Their responses to USWs demonstrated that the PDMS microspheres were suitable for cell trapping. The proposed method did not have a harmful effect on the cells, and the acoustic contrast factor played an important role in the trapping procedure.
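The sign of the acoustic contrast factor decides whether a particle collects at the pressure nodes or the antinodes of the standing wave. A minimal sketch of the standard small-particle standing-wave expression (the material properties are rough literature-order values, not measurements from Yin et al.):

```python
def acoustic_contrast(rho_p, rho_f, kappa_p, kappa_f):
    """Acoustic contrast factor Phi for a small sphere in a 1D standing wave:
    Phi = (1 - kappa_p/kappa_f)/3 + (rho - 1)/(2*rho + 1), rho = rho_p/rho_f.
    Phi > 0: pushed to pressure nodes; Phi < 0: pushed to antinodes."""
    rho = rho_p / rho_f
    return (1.0 - kappa_p / kappa_f) / 3.0 + (rho - 1.0) / (2.0 * rho + 1.0)

WATER = dict(rho=1000.0, kappa=4.5e-10)  # compressibility in 1/Pa
# Illustrative material properties (order-of-magnitude literature values).
for name, rho_p, kappa_p in [("cell-like particle", 1050.0, 4.0e-10),
                             ("PDMS microsphere", 1030.0, 1.0e-9)]:
    phi = acoustic_contrast(rho_p, WATER["rho"], kappa_p, WATER["kappa"])
    where = "node" if phi > 0 else "antinode"
    print(f"{name}: Phi = {phi:+.2f} -> collects at pressure {where}")
# The opposite signs for cell-like and PDMS particles are what make such
# particle-assisted acoustic manipulation schemes possible.
```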
Fornell et al. [86] established a microfluidic system to trap hydrogel droplets using acoustic forces. The experimental setup is shown in Figure 26. A T-shaped microfluidic channel was employed to generate cell-laden droplets, which were then cross-linked with UV light. Next, the droplets were introduced into a second microfluidic channel, where they were trapped by acoustic forces. The droplets could be trapped at flow speeds of up to 3.2 mm/s.

Lim et al. [87] reported a novel method for evaluating the acoustic trapping performance by tracking the motion of a microparticle. The acoustic trapping force was assessed from a series of microscopy images obtained with a high-speed camera and high-resolution microscopy; this method could also be employed to estimate cell membrane deformability. The experimental setup and procedure for measuring the trapping forces are shown in Figure 27. A microparticle was randomly selected and trapped by the acoustic tweezer. The transducer was then turned off and translated by a distance of 250 µm. After that, sinusoidal bursts were applied to the transducer, and the motion of the attracted particle toward the acoustic focus center was recorded, which could be further analyzed to estimate the trapping force.
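The analysis step in Lim et al.'s method amounts to converting the tracked particle motion into a force: at low Reynolds number, the instantaneous trapping force is balanced by the Stokes drag. A minimal sketch of that conversion (my illustration of the principle, with made-up tracking data; a full treatment would also correct for wall effects):

```python
import math

def force_from_track(positions_um, dt_s, mu=1.0e-3, radius_m=5e-6):
    """Estimate the trapping force from tracked 1D positions [um] sampled every
    dt_s. At low Reynolds number the particle moves at terminal velocity, so
    F_trap ~ F_drag = 6 * pi * mu * a * v at each instant."""
    forces = []
    for x0, x1 in zip(positions_um, positions_um[1:]):
        v = (x1 - x0) * 1e-6 / dt_s  # velocity in m/s
        forces.append(6.0 * math.pi * mu * radius_m * v)
    return forces

# Hypothetical track: a particle pulled 250 um toward the focus over a few frames.
track_um = [250.0, 180.0, 110.0, 55.0, 15.0, 0.0]
F = force_from_track(track_um, dt_s=0.01)
print([f"{f:.1e}" for f in F])  # peak magnitude ~6.6e-10 N, i.e. sub-nN
```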
Wu et al. [88] reported a simple and reliable method to generate multicellular spheroids using an acoustic method (Figure 28). Their device consisted of capillaries, a standing surface acoustic wave (SSAW) generator, a pair of interdigital transducers (IDTs) and a piezoelectric substrate. Once the radio-frequency signal was applied, a periodically distributed acoustic field formed in the capillary. The gradient of this acoustic field produced an acoustic radiation force and generated an array of pressure nodes. The suspended cells in the capillary were pushed by the acoustic force to the pressure nodes and assembled into spheroids there.

Lu et al. [89] developed a microfluidic platform to trap and isolate cancer cells based on their size using acoustic microstreaming (Figure 29). With the activation of acoustic microstreaming, the microtrap discriminated and trapped cancer cells in its vicinity. The tunable and reversible properties of the acoustic microstreaming produced by the micropillar trap played an important role in the trapping efficiency. Hayakawa et al. [90] proposed a similar method for trapping single motile cells.

Xu et al. [91] developed an improved method to separate sperm cells from dilute "large volume" samples that contained an abundance of female DNA, using bead-assisted acoustic trapping (Figure 30). A PDMS fluid layer was sandwiched between two glass reflecting layers, forming a resonator. Through this three-layer structure, trapping nodes were generated by ultrasonic standing waves. The addition of polymeric beads at a critical concentration to the dilute sample was found to initiate aggregation and to improve sperm cell trapping significantly, while not affecting DNA extraction and PCR (polymerase chain reaction). Hence, this successful bead-assisted trapping of sperm cells in the enclosed glass-PDMS-glass microdevice suggested that acoustic differential extraction (ADE) could be a useful tool for the processing of real forensic samples.
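Whether a given particle collects at the pressure nodes or antinodes of such standing waves is set by its acoustic contrast factor. The sketch below evaluates the standard contrast factor from particle and fluid density and compressibility; the material values are typical order-of-magnitude literature numbers used purely for illustration, not parameters taken from the works above.

```python
import numpy as np

def acoustic_contrast(rho_p, kappa_p, rho_f, kappa_f):
    """Acoustic contrast factor Phi for a small sphere in a standing wave.

    Phi = (5*rho_p - 2*rho_f) / (2*rho_p + rho_f) - kappa_p / kappa_f
    Phi > 0: particle driven to pressure nodes; Phi < 0: to antinodes.
    """
    return (5 * rho_p - 2 * rho_f) / (2 * rho_p + rho_f) - kappa_p / kappa_f

# Water as the suspending fluid.
rho_f, kappa_f = 998.0, 4.5e-10        # density (kg/m^3), compressibility (1/Pa)

# Illustrative material values (order-of-magnitude literature numbers).
materials = {
    "silica":  (2200.0, 2.7e-11),
    "PDMS":    (1030.0, 1.0e-9),       # soft, highly compressible polymer
    "PLGA":    (1300.0, 2.5e-10),
}

for name, (rho_p, kappa_p) in materials.items():
    phi = acoustic_contrast(rho_p, kappa_p, rho_f, kappa_f)
    where = "pressure nodes" if phi > 0 else "pressure antinodes"
    print(f"{name:7s} Phi = {phi:+.2f} -> collects at {where}")
```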
Lu et al. [92] investigated topographical manipulation of microparticles and cells using acoustic microstreaming, a technique they named acoustic topographical manipulation (ATM). The working principle is shown in Figure 31a. The microparticles were introduced into the aqueous microfluidic system and deposited on the bottom surface of the cell. Some particles became obstacles due to electrostatic and van der Waals interactions. Localized microstreaming formed around each obstacle upon the application of a standing acoustic wave field. The acoustic microstreaming force, together with the radiation forces, could trap microparticles in the vicinity of the obstacles. Notably, the localized acoustic microstreaming vortex, acting as the manipulating force, guided the topographic movement of the microparticle around the obstacle (Figure 31b-e). The dependence of the manipulated microparticle's velocity on the applied driving frequency and voltage of the acoustic transducer was also studied in the work (Figure 31f,g).
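Several of the devices above, for example the glass-PDMS-glass chip of Xu et al. [91], operate as layered half-wave resonators, where the fluid layer thickness sets the USW resonance and the node positions. A minimal sketch of that relationship, with an assumed channel height and speed of sound (not the dimensions of any specific device), is given below.

```python
# Half-wave acoustic resonator estimate for a water-filled channel.
# Assumed values for illustration only; real devices depend on the
# full layered-structure acoustics, not just the fluid layer.
c_water = 1480.0        # speed of sound in water (m/s)
h = 150e-6              # fluid layer (channel) height (m), assumed

# Fundamental half-wave resonance: h = lambda/2  ->  f = c / (2*h)
f_res = c_water / (2 * h)
wavelength = c_water / f_res

# For the fundamental mode there is a single pressure node at mid-height.
node_positions = [h / 2]

print(f"resonance frequency ~ {f_res/1e6:.2f} MHz")
print(f"wavelength          ~ {wavelength*1e6:.0f} um")
print(f"pressure node(s) at  {[f'{z*1e6:.0f} um' for z in node_positions]}")
```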
Meng et al. [93] reported an improved sonoporation method to trap microcells using the cavitation effect, as shown in Figure 32. Multiple rectangular microchannels of uniform size were embedded, staggered along the main microflow channel, to produce a microbubble array. The microbubble array oscillated with almost the same amplitude and resonant frequency, resulting in homogeneous sonoporation. The microcells were trapped at the corners of the rectangular micro side channels by the acoustic radiation forces introduced by the oscillating microbubbles.

Conclusions

This paper reviews the different methods for cell trapping. At the early development stages, hydrodynamic trapping, a contact-based method, proved efficacious for cell trapping, and many researchers focused on contact-based cell trapping devices because of their simple fabrication processes. With the development of microfluidic technologies, an increasing number of contactless cell trapping methods have been proposed and reported. Contactless approaches allow cell trapping to satisfy a wide range of requirements, and trapping cells at precisely defined locations has been receiving more and more attention from both the engineering community, as end users, and the research community, as a research tool. Magnetophoresis is highly recommended for cell trapping in microfluidic systems: it traps cells without changing the physical properties of the sample solution, such as pH value, ion concentration and temperature, and its ease of design, ease of operation and low cost make it a popular option in the scientific research community.
2020-10-19T18:07:47.090Z
2020-10-05T00:00:00.000
{ "year": 2020, "sha1": "5e13bc9c269fe73632357f96f64fa1d21934ec96", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9717/8/10/1253/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "35176cf358dc2e4d30bf7001e7f6dabc3310d96d", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [] }
9957134
pes2o/s2orc
v3-fos-license
Circadian Rhythm in Kidney Tissue Oxygenation in the Rat Blood pressure, renal hemodynamics, and electrolyte and water excretion all display diurnal oscillation. Disturbance of these patterns is associated with hypertension and chronic kidney disease. Kidney oxygenation is dependent on oxygen delivery and consumption, which in turn are determined by renal hemodynamics and metabolism. We hypothesized that kidney oxygenation also demonstrates 24-h periodicity. Telemetric oxygen-sensitive carbon paste electrodes were implanted in Sprague-Dawley rats (250–300 g), either in renal medulla (n = 9) or cortex (n = 7). Arterial pressure (MAP) and heart rate (HR) were monitored by telemetry in a separate group (n = 8). Data from 5 consecutive days were analyzed for rhythmicity by cosinor analysis. Diurnal electrolyte excretion was assessed by metabolic cages. During lights-off, oxygen levels increased to 105.3 ± 2.1% in cortex and 105.2 ± 3.8% in medulla. MAP was 97.3 ± 1.5 mmHg and HR was 394.0 ± 7.9 bpm during the lights-off phase, and 93.5 ± 1.3 mmHg and 327.8 ± 8.9 bpm during lights-on. During lights-on, oxygen levels decreased to 94.6 ± 1.4% in cortex and 94.2 ± 8.5% in medulla. There was significant 24-h periodicity in cortex and medulla oxygenation. Potassium excretion (1,737 ± 779 vs. 895 ± 132 μmol/12 h, P = 0.005) and the distal Na+/K+ exchange (0.72 ± 0.02 vs. 0.59 ± 0.02, P < 0.001) were highest in the lights-off phase; this phase difference was not found for sodium excretion (P = 0.4). It seems that oxygen levels in the kidneys follow the pattern of oxygen delivery, which is known to be determined by renal blood flow and peaks in the active phase (lights-off).

Abbreviations: CKD, chronic kidney disease; GFR, glomerular filtration rate; HR, heart rate; MAP, mean arterial pressure; MESOR, circadian rhythm-adjusted mean; pO2, tissue oxygen concentration; RAAS, renin-angiotensin-aldosterone system; RBF, renal blood flow.

INTRODUCTION Endogenous timing mechanisms have evolved to adapt to environmental changes imposed by alternating periods of light and darkness. Twenty-four-hour patterns in sleep, activity, food intake, and excretion are well-known representatives of such physiological homeostatic adaptations. Via the kidneys, these homeostatic mechanisms maintain constancy of the body's extracellular fluid compartment throughout the 24-h cycle. In mammals, diurnal variations in urinary volume and electrolyte excretion are the best-studied features of the 24-h renal rhythm. Urinary excretion of water, sodium, and potassium peaks during the active period of the day, when intake is also highest (Cohn et al., 1970; Hilfenhaus, 1976; Pons et al., 1996; Zhang et al., 2015). Concomitantly, the activity of the renin-angiotensin-aldosterone system (RAAS), a major regulator of blood pressure, varies in a circadian fashion in rodents (Hilfenhaus, 1976) and humans (Armbruster et al., 1975; Mahler et al., 2015). Specifically, plasma aldosterone levels peak just before the activity phase and are inverted by reversal of the light-dark cycle (Hilfenhaus, 1976). Disturbance of the circadian blood pressure pattern, exposed as a non-dipping profile at night, is associated with hypertension and nephropathy (Fukuda et al., 2006; Sachdeva and Weder, 2006). Sleep problems have been reported in almost 80% of end-stage renal disease patients (Koch et al., 2009), and a disturbed blood pressure pattern is associated with higher risk for chronic kidney disease (CKD) (Portaluppi et al., 1991).
Not surprisingly, restoration of the dipping profile during the inactive phase has been a target of interest for anti-hypertensive chronotherapy (Simko and Pechanova, 2009). Timed inhibition of the renin-angiotensin system can be used to suppress the rise in blood pressure upon awakening (Oosting et al., 1999). Disturbed renal sodium transport seems to be linked to abnormal circadian blood pressure profiles (Fujii et al., 1999). Troughs in the excretion patterns of electrolytes typically occur during the resting or sleeping phase of the 24-h cycle. Assuming that the kidneys spend most of their energy and oxygen on sodium transport (Brezis et al., 1994), one could hypothesize that tissue oxygen concentration (pO2) in renal tissue is lowest when sodium reabsorption activity is highest. On the other hand, the increased oxygen use in the kidney may be fully matched by increased oxygen delivery, because 24-h variations in blood pressure and renal blood flow coincide with the periods of highest excretion (Pons et al., 1996). Normal kidney oxygenation is crucial, as disturbed pO2 within the kidneys has been linked to the progression of CKD (Evans et al., 2013). Data on 24-h variations in oxygenation are lacking because, until recently, it was not possible to measure kidney oxygenation continuously. To answer the question whether 24-h variations in renal function are associated with synchronous variations in renal oxygenation, pO2 levels in the kidney were continuously monitored in healthy rats by a telemetry-based technique for 5 consecutive days. Additionally, 24-h variations in tissue oxygenation were compared for renal cortical and medullary tissue and linked to the magnitude of day/night differences in water and food intake and urinary water and electrolyte excretion. Very recently, Adamovich et al. described that oxygen levels may adjust the timing of the internal circadian clock (Adamovich et al., 2017). They showed that a circadian rhythm in pO2 levels can be detected in the rat kidney (cortex). In this study we further substantiate these findings and expand them to both the medullary and cortical regions of the kidney. Animals Experiments were conducted in male Sprague Dawley rats (250-300 g, supplier: Charles River). All procedures were approved by the Animal Ethics Committee of University of Utrecht (DEC 2014.II.03.015) and were in accordance with the Dutch Codes of Practice for the Care and Use of Animals for Scientific Purposes. All animals were kept on a 12 h light/dark cycle with lights-on at 6 a.m. (ZT 0) and lights-off at 6 p.m. (ZT 12). Rats had access to water and standard rat chow (containing 0.3% Na+ and 0.69% K+) ad libitum. To promote animal welfare and normal physiological activity around the clock, the rats were co-housed. Only during urine collection for electrolyte analysis were the rats housed individually for 24 h. System Overview The telemetry-based technique to measure oxygenation in the kidney has been described in detail (Emans et al., 2016; Koeners et al., 2016). In summary, oxygen-sensitive carbon paste electrodes were implanted in the right kidney, either in the cortex (n = 7) or medulla (n = 9). The kidney was exposed via laparotomy. The telemeter (TR57Y, Millar, Houston, US) was placed in the rat abdomen. After 2 weeks of complete recovery and stabilization of the oxygen signal, 24-h oscillations in pO2 were recorded continuously. In a third group, blood pressure telemeters (TRM54P, Millar, Houston, US) were implanted in the abdominal aorta (n = 8).
Analysis After subtraction of the offset value, the original pO2 data were filtered with a 25 Hz low-pass digital filter. Artifacts were removed when the first-order derivative exceeded a threshold of 5 nA/s, as described previously (Emans et al., 2016). To describe the 24-h rhythm, 1-h average pO2 values were calculated. These hourly averages were used to determine the mean pO2 level over the 5 consecutive days and were then re-expressed relative to this 5-day mean value (MESOR). The cosinor method was applied to determine the amplitude and phase of the oscillation in the pO2 signal (Refinetti et al., 2007). Twenty-four-hour rhythmicity was deemed present when the amplitude of the fitted curve was significantly greater than 0. For blood pressure and heart rate, absolute values were used. Electrolyte Excretion Rats were individually housed in metabolic cages for 24 h (n = 13) to determine water and food intake and to collect urine. Urine was sampled in epochs of 12 h, starting at 6 p.m. (lights-off/active phase) and continuing at 6 a.m. (lights-on/resting phase). In these 12-h urine samples, sodium and potassium concentrations were determined by flame photometry (Model 420, Sherwood, UK). Urinary creatinine was determined by DiaSys kit (DiaSys Diagnostic Systems, Holzheim, Germany). Distal sodium/potassium exchange was quantified as kaliuresis/(natriuresis + kaliuresis) (Hene et al., 1984). Statistics Data are expressed as mean ± SEM. The data collected during the active and resting phases were compared by paired Student's t-test. Differences were considered significant when p < 0.05. RESULTS An original tracing of a 3.5-day consecutive recording of cortical pO2 levels is depicted in Figure 1A. An example of a tracing obtained with the probe in the medulla is presented in Figure 1B. On visual inspection of the raw telemetric data, oxygen levels in both the cortex and medulla peaked during the lights-off period in these nocturnally active rats, while trough values were usually found during the lights-on or resting period of the day. The 5-day mean was set at 100% pO2. Quantification of these observations by cosinor analysis revealed that during the lights-off phase, oxygen levels increased to 105.3 ± 2.1% and 105.2 ± 3.8% in renal cortex and medulla, respectively. During the lights-on phase, oxygen levels decreased to 94.6 ± 1.4% in cortex and 94.2 ± 8.5% in medulla, relative to the 5-day mean (Table 1). The mean amplitude of the fitted curve for pO2 rhythmicity tended to be larger in cortex than in medulla (5.8 vs. 4.9%), although this difference was not significant (Figure 2). Ninety-five percent confidence intervals (95% CI) of both cortex and medulla oxygenation did not exceed the MESOR. Twenty-four-hour blood pressure and heart rate rhythms were in phase with those occurring in pO2. Water and food intake were significantly higher during the lights-off phase than during the lights-on phase (28 ± 2 vs. 4 ± 1 ml and 20 ± 1 vs. 3 ± 1 g, P < 0.001, Figures 3A,B). Urine volume did not differ much between lights-off and lights-on (Figure 3C). Creatinine excretion tended to increase during the lights-off vs. lights-on phase (P = 0.052, Figure 3D). There was no phase difference for urinary sodium excretion or Na+/creatinine (Figures 3E,H), but urinary potassium excretion and K+/creatinine were increased during the 12-h lights-off vs. the lights-on period (P < 0.01, Figures 3F,I).
Distal Na+/K+ exchange (kaliuresis/(natriuresis + kaliuresis)) was increased in the lights-off period (P < 0.001, Figure 3G). DISCUSSION Normotensive rats display a significant diurnal rhythmicity in renal oxygenation (Adamovich et al., 2017). In this study we dissected the rhythmicity of cortical and medullary oxygenation, both of which follow a diurnal pattern.

FIGURE 2 | Cosinor analysis of the averaged circadian rhythms in kidney oxygenation, blood pressure, and heart rate. Data are plotted as hourly mean values ± SEM as recorded over 5 days in each rat, relative to the overall mean value (= MESOR, indicated by the dotted line). Note that different rats were used for obtaining (A) cortical and (B) medullary pO2, as well as for the (C) blood pressure recordings. (D) Heart rate values were derived from the blood pressure measurements. A significant circadian rhythm was assessed when the amplitude of the fitted curve was statistically >0; see Table 1.

Using telemetric techniques, peak values in tissue oxygenation were found during the lights-off period, when renal excretion of electrolytes was highest. Trough values in renal pO2 were observed during the lights-on period, when excretion patterns are minimal. These data suggest that the circadian rhythm in (both cortical and medullary) pO2 is mainly the result of a 24-h variation in oxygen delivery to the kidneys. In rats, cardiac output and blood pressure are highest during the lights-off period (Oosting et al., 1997), when rats display the highest locomotor activity and eat and drink the most. Assuming that during normal activity renal blood flow is stable at approximately 20% of cardiac output, oxygen delivery to this organ is highest during the active phase. While direct renal blood flow measurements are not available over 24 h, previous studies in rats using inulin and p-aminohippuric acid clearance have repeatedly shown (Pons et al., 1996) that GFR and RBF indeed peak during the lights-off period. This corroborates the hypothesis that oxygen delivery is the most important determinant of this circadian pattern in oxygenation. Recently, 24-h oxygen recordings in the kidneys were obtained in sheep (Calzavacca et al., 2015). In that study, a clear 24-h pattern in tissue oxygenation of the kidneys was absent, and RBF and tissue perfusion did not show a circadian fluctuation either. Presumably, normal locomotor activity was suppressed in these sheep because they were housed in metabolic cages. In the current study, we co-housed the rats to facilitate normal social behavior and thereby normal locomotor activity. An alternative explanation for the discrepancy between our observations and those in sheep may be that ruminants have a less pronounced fasting phase than omnivorous species such as rats, and that the delivery of nutrients is therefore more constant than in species with a pronounced daily activity rhythm, such as rats. While studying mechanisms of the 24-h variation in potassium excretion, Steele et al. found that the bulk of potassium excretion was determined by food intake (delivery) rather than by flow (Steele et al., 1994). This suggests that peak oxygen levels in the kidneys may also occur in parallel with the delivery patterns of certain nutrients, waste products, or electrolytes. Future studies are needed to sort out cause and consequence of such associations. Very recently, a similar daily pattern in kidney pO2 was briefly described in rodents.
Peak values were found during lights-off, when the oxygen consumption rate was highest as well (Adamovich et al., 2017).

FIGURE 3 | Water and food intake and urine analysis. Rats were individually housed in metabolic cages for 24 h. Urine was collected in 2 samples, one during the lights-off (active) and one during the lights-on (resting) period. Individual data and mean ± SEM are indicated for (A) water intake, (B) food intake, (C) urine volume, (D) creatinine excretion, (E) Na+ excretion, (F) K+ excretion, (G) urine K+/urine (K+ + Na+) as an estimate of distal Na+/K+ exchange, (H) Na+/creatinine, and (I) K+/creatinine. Lights-on/off differences were compared with paired Student's t-tests.

In rats, ANGII and aldosterone peak during the lights-on period, when electrolyte excretion is lowest (Hilfenhaus, 1976; Lemmer et al., 2000; Naito et al., 2009). These hormones stimulate tubular sodium reabsorption during the lights-on period and thereby determine oxygen use. Presumably, this is an evolutionary mechanism to retain fluids when water intake is low. RBF and GFR also exhibit 24-h periodicity, with peaks during lights-off and troughs during lights-on (Pons et al., 1996). Important genes related to renal sodium and water transport, such as NHE and aquaporins 2 and 4, have been linked to circadian expression (Saifur Rohman et al., 2005). The lower kaliuresis and distal Na+/K+ exchange at the time of the decline in tissue oxygenation at rest suggest that oxygen consumption per se is not contributing to the pattern of renal oxygen content in our study. The variations in arterial blood pressure and heart rate throughout the day have been studied in detail in healthy, chronically instrumented rats (Henry et al., 1990; Janssen et al., 1992; Teerlink and Clozel, 1993; van den Buuse, 1994). The variation between the nightly peaks and daily troughs is less pronounced in our study than reported by some others. This is probably because the light-dark cycle in the animal room corresponded with real day and night, making it possible that minor researcher- or caretaker-induced disturbances occurred in the recordings during the daily resting phase of the animals, thereby leading to an underestimate of the 24-h amplitude in the blood pressure oscillation. Reversing the experimental light/dark cycle would probably not have diminished the observed 24-h amplitudes but actually magnified them. This may also apply to the pO2 values. We decided to set the 5-day mean at 100% pO2 for each animal to allow inter-animal comparison (Emans et al., 2016). Comparison between animals would decrease the sensitivity by introducing a large SD between animals and obscuring the day/night variation within one animal. Another technical limitation (inherent to studying small rodents) was that we were not able to record RBF variations. However, our setting does not interfere with the natural behavior and physiological processes of these nocturnal animals. The rats were unrestrained and co-housed, which allows them full activity at night, accompanied by higher heart rates and probably a higher RBF as well. Oxygen and arterial pressure assessment could not be performed in the same animal. Cortex and medulla recordings were also performed in separate animals. However, the rhythms were consistent within each of the three groups, suggesting that extrapolation of the results to the full set is acceptable. The animals were fully acclimatized to the 12:12 light-dark cycle in our facility.
Our assessment of natriuresis may have been affected by the chosen 12-h sampling period, because sodium excretion can peak just before the light phase (Roelfsema et al., 1980; Pons et al., 1996). Due to ethical and technical issues, we decided not to acclimatize our rats to metabolic cage housing. This may have interfered with the excretion values calculated from collected urine. Other researchers, who applied an equilibration period in metabolic cages for 2-3 days (Nikolaeva et al., 2012; Johnston et al., 2016), did find significant differences in urinary sodium excretion in rodents. However, since diurnal effects were prominent for fluid and food intake, urine flow, kaliuresis, and distal Na/K exchange, this suggests that, if anything, these diurnal differences would have been even more marked in acclimatized rats. The kidney has more clock-regulated genes than most other organs (Gumz, 2016). It has also been suggested that every cell type in the kidney follows its own circadian clock (Tokonami et al., 2014). Probably cortex and medulla follow their own circadian patterns as well. However, we did not find a different pattern in oxygenation between cortex and medulla. Our data suggest that the kidney may be more vulnerable to hypoxia during sleep. Insults that render the kidneys hypoxic in general could cause damage during the resting phase, when the oxygen concentrations are already somewhat lower. This could be relevant in the pathogenesis of diseases that are associated with low oxygenation at night, such as obstructive sleep apnea and the associated progression of renal disease. One could argue that diseases associated with kidney hypoxia, for instance CKD (Evans et al., 2013) and diabetes (Franzen et al., 2016), could be more progressive during the resting phase. On the other hand, our data may also provide a new look at the association of misalignment with the internal circadian clock (e.g., in shift workers) with the development of hypertension and CKD (Lieu et al., 2012). Furthermore, low oxygen levels at rest could contribute to the non-dipping profile and hypertension, because low levels of oxygen in the kidneys may not allow a normal decline in MAP and RBF during rest. In conclusion, the circadian rhythm of regional kidney oxygenation that we describe is a new phenomenon that provides further research opportunities regarding the onset and progression of hypertension and CKD.
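For readers who want to reproduce the rhythm analysis described in the Analysis section above, the sketch below fits the cosinor model V(t) = MESOR + A*cos(2*pi*t/24 - phi) to hourly averages by ordinary least squares. The hourly series here is synthetic placeholder data, not the recorded pO2 values.

```python
import numpy as np

# Synthetic hourly pO2 series (% of 5-day mean), placeholder for real telemetry.
rng = np.random.default_rng(0)
t = np.arange(0, 120)                               # 5 days of hourly means (h)
y = 100 + 5 * np.cos(2 * np.pi * t / 24 - 1.0) + rng.normal(0, 1.5, t.size)

# Cosinor model linearized: y = M + b1*cos(w t) + b2*sin(w t), w = 2*pi/24.
w = 2 * np.pi / 24
X = np.column_stack([np.ones_like(t, dtype=float), np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
M, b1, b2 = coef

amplitude = np.hypot(b1, b2)                        # A = sqrt(b1^2 + b2^2)
acrophase = np.arctan2(b2, b1)                      # phi (rad); peak at t = phi/w

print(f"MESOR     = {M:.1f} %")
print(f"amplitude = {amplitude:.1f} %")
print(f"acrophase = {acrophase:.2f} rad (~{(acrophase % (2*np.pi))/w:.1f} h)")
```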
2017-05-17T19:56:18.780Z
2017-04-06T00:00:00.000
{ "year": 2017, "sha1": "25aa9bc176aef7825fc8fd52cffd2cc6862b8a2f", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2017.00205/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "25aa9bc176aef7825fc8fd52cffd2cc6862b8a2f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
151212724
pes2o/s2orc
v3-fos-license
KM Practice in Malaysia Community College: KMS to Support KM Framework Knowledge Management (KM) is a relatively new concept, especially in the community college environment, where knowledge has not yet been captured, shared and managed systematically. Realizing the value and importance of the KM approach, the researcher identified several goals to be achieved in providing a viable KM framework that supports the current knowledge transfer and sharing activities in a community college environment. Five different techniques were used: observation, small talk, interviews, field notes and surveys, and experiments. The study also covered KMS development for the CMS technology stated in the framework. This study focuses on how to create a framework that works for community colleges, since they are looked upon as the lifelong learning and training centers of the country. The findings show that the KMS (prototype) makes the KM framework visible and possible to implement in Malaysian community colleges. Community colleges act as agents for the Malaysian government in developing the socioeconomic conditions of local communities through knowledge transfer and knowledge sharing, and they are also part of the higher learning institution landscape. A. Enablers extracted from the KMS Success Model derived from DeLone and McLean's IS Success Model by Joel L. Feliciano (2006) Feliciano has discussed the enablers of a successful KMS model. An enabler is something that makes something else more likely to occur, or more effective. He stressed the technical and organizational factors that make the functions of a KMS possible. Among the technical enablers that drive knowledge workers to interact with the KMS, he listed: scalability, the ability of a system to scale from the local level to the organizational level; uniqueness, since the system must be assessed against the needs of the organization (the model also touches on a taxonomy of knowledge); adaptability, meaning the system should be able to incorporate new technologies such as blogs and mobile usage; transparency, as the system must be transparent to the worker; and dependability, to earn the worker's confidence so that the system is used frequently, the interface of the system being a crucial aspect in securing the worker's contribution. Moreover, personalization: the KMS should make it easy to review the knowledge that exists on a particular subject, providing a platform for a smart system with the power to recommend other sources of knowledge. The other part is the organizational enablers. Resource allocation covers allocated time and monetary resources. Sharing concerns policies and culture: corporate culture plays an important role in determining whether a knowledge worker is going to share knowledge or not. Evaluation is also an enabler: annual evaluations help the knowledge worker and the organization determine how much the system is being used and how much it helps. Next is training, which is crucial to knowledge generation and KM in general. Lastly, business alignment: the processes of the organization, as well as its strategic plan, need to be matched by the system. Comments were received from participants enrolled in the prototype. One participant pointed out: "This tool is useful to me because you have a lot of important information in one place instead of searching through pages and pages to get what you want. It saves time and effort as well."
Another participant said: "Some of the things that I really liked were the fact that you were able to post screen shots giving a clear idea of what the steps were. I also loved the fact that it was possible to add comments in case a question arose." In addition to the use of these posts for assignments, it was interesting that the participants still requested more information and tips from the instructor regarding the project assignments. D. Knowledge Management System (KMS) Prototype for CMS A survey was then conducted, and a questionnaire was randomly distributed online. The items were divided into four categories: creation, organization, distribution and search. The researchers agreed that these elements are crucial for knowledge and information visualization in the KMS. The survey was answered by 83 targeted candidates, including staff, members of local communities and alliance partners. Using a rating scale, the researchers built the items on 5 values. The value 2 corresponds to "very little" improvement, 3 to "moderate" and 4 to "high" improvement. The full scale reaches from 1 ("worse") to 5 ("very high"). The results are shown in Table 2. Conclusion First, the researchers identify community colleges as centers of socioeconomic development with strong links to the government, coordinated by a management that actively supports technology and knowledge transfer and provides communities with facilities and services. They attract mainly local communities, who expect benefits and synergies from the community college's existence. The co-operation between community colleges and the local community takes different forms, through formal or informal linkages and through human-resource-based issues. Additionally, the social and physical structure influences the performance and the style of work in community colleges.
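As an illustration of how such 5-point ratings can be summarized per survey category, the sketch below averages hypothetical responses for the four categories named above; the response values are invented for demonstration and do not reproduce Table 2.

```python
from statistics import mean

# Hypothetical 5-point ratings (1 = "worse" ... 5 = "very high") from a few
# respondents, grouped by the four survey categories; illustrative only.
responses = {
    "creation":     [4, 3, 5, 4, 3],
    "organization": [3, 4, 4, 3, 4],
    "distribution": [5, 4, 4, 5, 3],
    "search":       [3, 3, 4, 2, 4],
}

for category, ratings in responses.items():
    print(f"{category:13s} mean = {mean(ratings):.2f}  (n = {len(ratings)})")
```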
2019-05-13T13:05:27.433Z
2012-10-24T00:00:00.000
{ "year": 2012, "sha1": "84aaccb99ebac573340561ffced5f49ea0ba4ff3", "oa_license": "CCBY", "oa_url": "https://www.ojcmt.net/download/km-practice-in-malaysia-community-college-kms-to-support-km-framework.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "2dd3c0f0346fadc08fadde8a27362ff07bee1be6", "s2fieldsofstudy": [ "Education", "Business" ], "extfieldsofstudy": [ "Sociology" ] }
259138365
pes2o/s2orc
v3-fos-license
Quantum fluctuations spatial mode profiler The spatial mode is an essential component of an electromagnetic field description, yet it is challenging to characterize it for optical fields with low average photon number, such as in a squeezed vacuum. We present a method for reconstruction of the spatial modes of such fields based on homodyne measurements of their quadrature noise variance performed with a set of structured masks. We show theoretically that under certain conditions we can recover individual spatial mode distributions by using a weighted sum of the basis masks, where the weights are determined using measured variance values and phases. We apply this approach to analyze the spatial structure of a squeezed vacuum field with various amounts of excess thermal noise generated in Rb vapor. I. INTRODUCTION Transverse spatial distribution is an important element of the description of any classical or quantum electromagnetic field. For many applications it is essential to restrict light propagation to a single, well-defined spatial mode. However, the multimode nature of light can be desirable in fields such as optical information multiplexing 1,2 or imaging [3][4][5]. In either case the ability to identify and characterize the spatial mode composition of an electromagnetic field becomes a helpful tool. Several solutions have been proposed recently for classical optical fields, in which specially designed dispersive elements spatially separate the various modes (often in the Laguerre-Gauss or Hermite-Gauss basis) into uniquely positioned spots [6][7][8][9]. The situation becomes significantly more challenging when the multimode optical field consists primarily of squeezed vacuum quantum fluctuations, since there is no accompanying strong classical field to tune to a selected mode. In this case identification of individual modes becomes akin to looking for a black cat in a dark room. Traditional quantum noise detection requires a strong local oscillator (LO) to amplify weak fluctuations to a detectable level. However, this method relies on perfect spatial overlap of the LO and the unknown quantum probe 10,11, and thus requires a priori knowledge of the quantum field's transverse distribution, or else the perfect shape of the LO needs to be found via a set of optimization measurements. The situation becomes more complicated if the quantum noise is spatially multimode, such as a mixture of, e.g., squeezed vacuum and thermal light. In some cases, e.g., if the squeezed modes are not overlapping, it is possible to obtain information about their number, shapes and squeezing parameters by reducing the size of the LO mode [12][13][14] or by sampling correlations between nearby pixels 15, as was demonstrated for twin-beam squeezing. A multimode quantum field is a useful resource for quantum imaging, as information about spatial transmission masks can be obtained by shaping the LO 12,16,17 or by analyzing the noise correlations for each camera pixel 18,19. However, these measurements rely on relative modification of the quantum probe noise, and may not be useful for diagnostics of the original multimode probe itself. The Bloch-Messiah reduction 20-22 offers a promising method to extract information about the squeezing modes of a multimode optical field. It was shown to recover the set of quantum eigenmodes of frequency combs 23,24 and parametric amplifiers via diagonalization of the measurement basis.
However, this is a data-intensive procedure: the required number of measured covariances is proportional to the square of the number of measurement basis elements. Here we propose a protocol for characterizing and reconstructing the spatial profiles of single- and two-mode quantum fluctuations with no prior information. While not as general as the Bloch-Messiah reconstruction, our method is significantly simpler, since the required number of measurements scales linearly with the number of basis elements (spatial pixels). In particular, we reconstruct the transverse distribution of a single squeezed vacuum mode, and then expand the formalism to describe a mixture of squeezed and thermal modes. Our method is based on single-pixel imaging techniques [25][26][27][28][29] adapted to the quantum domain and combined with homodyne detection 30. Full wavefront information about the phase and amplitude at each point is extracted from the quantum quadrature variance measurements. In our experimental reconstruction we use a quadrature-squeezed vacuum source based on the PSR nonlinearity in Rb atoms [31][32][33], and trace the modification of the output quantum state from a mostly single-mode squeezed vacuum to an admixture of squeezed vacuum and excess thermal noise as the temperature of the Rb vapor increases [34][35][36][37]. However, our method is general and can be adapted to a wide range of squeezed light sources and wavelengths. The general idea of our quantum noise mode profiler is inspired by classical single-pixel imaging 27 combined with homodyne detection 30. The principal difference is that we detect the quadrature noise variance, rather than the average light power, of the optical field after a set of spatial transmission masks, as illustrated in Fig. 1(a). For each squeezed mode the mask modifies the quantum noise, reducing the quadrature fluctuations and rotating the squeezing angle. We trace these changes for each mask H_m using the homodyne detector. By analyzing the variance as a function of the LO phase, we can find the minimum and maximum variances $V^\pm_m$ as well as the relative phase $\theta_m$ [Fig. 1(b,c)] and use them to calculate the weights to reconstruct the original signal spatial profile from the masks [Fig. 1(d)]. Notably, the same procedure works if the masks are placed on the LO rather than on the signal field. II. QUADRATURE VARIANCE CALCULATIONS FOR MULTIMODE QUANTUM FLUCTUATIONS In this section we analytically calculate the quadrature variance of a Gaussian optical quantum state with multiple spatial modes overlapped with a LO in a balanced homodyne measurement setup, and validate the mode reconstruction method from the measured noise values. To calculate the homodyne detection output in the multimode case, we need a formalism that can efficiently handle the multimode complexity. Fortunately, we can model the signal quantum field as N-mode Gaussian states, i.e., continuous-variable (CV) states with a Gaussian Wigner function 38,
$$W(\mathbf{x}) = \frac{\exp\left[-\tfrac{1}{2}(\mathbf{x}-\bar{\mathbf{x}})^T \sigma^{-1} (\mathbf{x}-\bar{\mathbf{x}})\right]}{(2\pi)^N \sqrt{\det\sigma}}. \qquad (1)$$
These states are completely determined by their first two moments, the mean vector $\bar{\mathbf{x}} = \langle(\hat{q}_1, \hat{p}_1, \ldots, \hat{q}_N, \hat{p}_N)^T\rangle$ and the covariance matrix
$$\sigma_{ij} = \tfrac{1}{2}\langle\{\hat{x}_i, \hat{x}_j\}\rangle - \langle\hat{x}_i\rangle\langle\hat{x}_j\rangle, \qquad (2)$$
where $\{\cdot,\cdot\}$ denotes the anticommutator, and
$$\hat{q}_k = \tfrac{1}{\sqrt{2}}(\hat{a}^\dagger_k + \hat{a}_k), \qquad \hat{p}_k = \tfrac{i}{\sqrt{2}}(\hat{a}^\dagger_k - \hat{a}_k) \qquad (3)$$
are the quadrature operators associated with the k-th mode, defined via the standard creation ($\hat{a}^\dagger_k$) and annihilation ($\hat{a}_k$) operators. Diagonal elements of the covariance matrix represent the quadrature variances of the field modes.
For example, a signal field consisting of two squeezed vacuum modes with different spatial profiles is represented by a 4 × 4 block-diagonal covariance matrix
$$\sigma = \begin{pmatrix} S_1 & 0 \\ 0 & S_2 \end{pmatrix}, \qquad S_k = R(\phi_k)\,\mathrm{diag}\!\left(e^{r_k}, e^{-r_k}\right) R(\phi_k)^T, \qquad (4)$$
where $r_k$ and $\phi_k$ are the squeezing parameter and squeezing angle for each mode, and $R(\phi)$ is a 2 × 2 rotation matrix. Next, we need to describe the transformation of the quantum state after the mask and predict the output of the homodyne detector. We can model these optical elements using two symplectic matrices. Matrix B models a mask as a beam splitter with transmission T, and matrix $R_\theta$ represents the single-mode phase rotation by $\theta$:
$$B = \begin{pmatrix} \sqrt{T}\,I_2 & \sqrt{1-T}\,I_2 \\ -\sqrt{1-T}\,I_2 & \sqrt{T}\,I_2 \end{pmatrix}, \qquad R_\theta = R(-\theta) \oplus I_2 . \qquad (5)$$
The first diagonal matrix element of the final covariance matrix 39 provides the value of the output noise quadrature at the output of the homodyne detection, $V(\theta)$:
$$V(\theta) = \left[R_\theta\, B\, \sigma\, B^T R_\theta^T\right]_{11} = \sum_{j,k,l,m} (R_\theta)_{1j}\, B_{jk}\, \sigma_{kl}\, (B^T)_{lm}\, (R_\theta^T)_{m1}, \qquad (6)$$
where we connect the initial and final covariance matrices by matrix multiplication. Taking into account the positions of the nonzero elements of the matrices involved, indices j and m can only be 1 and 2, and consequently indices k and l can only be 1 and 3 or 2 and 4. This reduces the matrix product to only 8 nonzero terms. The beam-splitter matrix B accounts for the transformation of each of the squeezed modes by the mask. The transformation (matrix) coefficients are exactly the overlaps between the input modes $u_k(x, y)$ and the detection mode after the mask $H(x, y)$, defined by the integral
$$O_k = \iint u^*_{LO}(x,y)\, H(x,y)\, u_k(x,y)\, dx\, dy .$$
In this case of two modes, $T = |O_1|^2$, $1 - T = |O_2|^2$, and the final expression for the variance of the two-single-mode-squeezing signal after the mask becomes
$$V(\theta) = \sum_{k=1,2} |O_k|^2\left[e^{r_k}\cos^2(\theta-\phi_k) + e^{-r_k}\sin^2(\theta-\phi_k)\right]. \qquad (7)$$
This result can be easily generalized to N modes by applying an N-way beam-splitter transformation and absorbing the phases induced by the transformation into the squeezing angles. Next, we can further extend the treatment to squeezed thermal states. The covariance matrix is modified by adding an additional factor $2\bar{n}_{th,k}$ to all diagonal terms, where $\bar{n}_{th,k}$ is the average thermal photon number in the k-th mode. Combining all this into Eq. 7 we get
$$V(\theta) = \sum_{k}|O_k|^2\left[\left(e^{r_k} + 2\bar{n}_{th,k}\right)\cos^2(\theta-\phi_k) + \left(e^{-r_k} + 2\bar{n}_{th,k}\right)\sin^2(\theta-\phi_k)\right]. \qquad (8)$$
It is easy to see that the results in Eqs. 7 and 8 can be written in the same general form,
$$V(\theta) = \sum_{k}|O_k|^2\left[V^+_k\cos^2(\theta-\theta_k) + V^-_k\sin^2(\theta-\theta_k)\right], \qquad (9)$$
if we identify $V^+_k = e^{r_k} + 2\bar{n}_{th,k}$ and $V^-_k = e^{-r_k} + 2\bar{n}_{th,k}$.
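As a numerical cross-check of this section, the sketch below builds the two-mode covariance matrix of Eq. 4, applies the beam-splitter and rotation matrices of Eq. 5, and compares the resulting $[R_\theta B \sigma B^T R_\theta^T]_{11}$ with the closed form of Eq. 8. All squeezing, thermal and mask parameters are arbitrary example values.

```python
import numpy as np

def rot2(phi):
    """2x2 counterclockwise quadrature rotation."""
    return np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])

def single_mode_cov(r, phi, n_th=0.0):
    """Covariance block R(phi) diag(e^r, e^-r) R(phi)^T + 2*n_th*I (Eq. 4)."""
    S = rot2(phi) @ np.diag([np.exp(r), np.exp(-r)]) @ rot2(phi).T
    return S + 2 * n_th * np.eye(2)

# Example parameters (arbitrary): two squeezed thermal modes.
r1, phi1, n1 = 0.7, 0.3, 0.1
r2, phi2, n2 = 0.4, 1.1, 0.5
Z = np.zeros((2, 2))
sigma = np.block([[single_mode_cov(r1, phi1, n1), Z],
                  [Z, single_mode_cov(r2, phi2, n2)]])

T = 0.6                                    # mask transmission, |O_1|^2
B = np.block([[np.sqrt(T) * np.eye(2),  np.sqrt(1 - T) * np.eye(2)],
              [-np.sqrt(1 - T) * np.eye(2), np.sqrt(T) * np.eye(2)]])

theta = 0.8                                # LO phase
R = np.block([[rot2(-theta), Z], [Z, np.eye(2)]])

V_matrix = (R @ B @ sigma @ B.T @ R.T)[0, 0]

def V_closed(theta):
    """Closed form of Eq. 8 with |O_1|^2 = T, |O_2|^2 = 1 - T."""
    out = 0.0
    for w, r, phi, n in [(T, r1, phi1, n1), (1 - T, r2, phi2, n2)]:
        out += w * ((np.exp(r) + 2 * n) * np.cos(theta - phi) ** 2
                    + (np.exp(-r) + 2 * n) * np.sin(theta - phi) ** 2)
    return out

print(V_matrix, V_closed(theta))           # the two values should agree
```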
III. UNKNOWN SPATIAL MODE RECONSTRUCTION VIA QUADRATURE NOISE MEASUREMENTS Now we are ready to discuss the reconstruction of the unknown signal mode profile by measuring its quadrature variance after a complete set of transmission masks $H_m(x, y)$. In general, the signal may consist of multiple spatial modes, each described by $u_k(x, y)$. For the purpose of this discussion, we will assume that the quantum fluctuations of each mode are defined by its maximum and minimum quadrature noise $V^\pm_k$ (normalized to the vacuum state noise), and that $\theta_k$ is the squeezing angle with respect to the local oscillator, which we assume to be a single-mode coherent field with the spatial distribution $u_{LO}(x, y)$. To gain information about the spatial profile of the input field, we modify the signal field by passing it through the masks $H_m(x, y)$ and measuring the corresponding quadrature variance $V_m(\theta)$. Writing each overlap in terms of its magnitude and phase,
$$O^k_m = |O^k_m|\, e^{i\theta^k_m}, \qquad (10)$$
we obtain
$$V_m(\theta) = \sum_k |O^k_m|^2\left[V^+_k\cos^2(\theta-\theta_k-\theta^k_m) + V^-_k\sin^2(\theta-\theta_k-\theta^k_m)\right] + 1 - \sum_k |O^k_m|^2, \qquad (11)$$
where
$$O^k_m = \iint u^*_{LO}(x,y)\, H_m(x,y)\, u_k(x,y)\, dx\, dy . \qquad (12)$$
Note that such a mask can instead be introduced into the LO path; this would not change the above overlap parameter definition, except that the mask would appear complex conjugated. In most situations, we do not have information about either the spatial distribution or the noise statistics of the participating modes, and need to extract them from the measurements. This may not be possible under general conditions, since the contributions of all modes combine into one simple functional dependence,
$$V_m(\theta) = V^+_m\cos^2(\theta-\theta_m) + V^-_m\sin^2(\theta-\theta_m), \qquad (13)$$
where $V^+_m$ and $V^-_m$ are the maximum and minimum quadrature variances detected for the m-th mask, respectively, and $\theta_m$ is some global mask-dependent phase shift. While these parameters are relatively simple to extract from experimental data (see Fig. 1), the system of measurements is under-constrained, and we generally do not have enough information to independently extract $V^\pm_k$, $O^k_m$, $\theta_k$, and $\theta^k_m$. Nevertheless, below we consider several important cases for which we can obtain the quantum mode profiles. A. Reconstruction of a spatial mode for a single-mode squeezed vacuum Let us assume that the input state consists of a squeezed vacuum field in a single unknown spatial mode. In this case, Eq. 11 simplifies to
$$V_m(\theta) = |O^{sq}_m|^2\left[V^+\cos^2(\theta-\theta_m) + V^-\sin^2(\theta-\theta_m)\right] + 1 - |O^{sq}_m|^2, \qquad (14)$$
where we dropped the mode index k = 1. It is easy to see that the minimum and maximum values of the measured quadrature variance are $V^\pm_m = |O^{sq}_m|^2(V^\pm - 1) + 1$, and therefore we can extract the value of the overlap parameter as
$$O^{sq}_m = \pm\sqrt{V^+_m - V^-_m}\; e^{i\theta_m}, \qquad (15)$$
where we omitted the factor $1/\sqrt{V^+ - V^-}$, since it is a common normalization factor for all masks $H_m$. We can use well-established single-pixel camera methods for intensity 27 or field 30 spatial distribution reconstruction, modified to recover the squeezed field multiplied by the LO field profile, which we call the shaped squeezed field:
$$U^{sq}(x,y) = u^*_{LO}(x,y)\, u_{sq}(x,y), \qquad (16)$$
which is the main interest of this manuscript. The projection of the shaped squeezed field onto a mask is given by
$$O^{sq}_m = \iint H_m(x,y)\, U^{sq}(x,y)\, dx\, dy, \qquad (17)$$
where $O^{sq}_m$ is the weight of the m-th mask in the reconstruction of the shaped squeezed field. The above equation can be written in matrix notation as
$$\mathbf{O}^{sq} = H\,\mathbf{U}^{sq}, \qquad (18)$$
where we move from the continuous two-dimensional xy representation (Eq. 12) to the pixel basis (p) and unfold the 2D space into a single column tracking pixel location. To have a fully defined system, we need as many independent mask measurements as there are sampled pixels. The rest is just linear algebra. The shaped squeezed field can be calculated from the measurements as
$$\mathbf{U}^{sq} = H^{-1}\,\mathbf{O}^{sq}. \qquad (19)$$
Here the rows of matrix H consist of the pixel representations of the masks. If $H^T = H^{-1}$, as is the case for Hadamard masks, the above equation simplifies to
$$\mathbf{U}^{sq} = H^{T}\,\mathbf{O}^{sq}. \qquad (20)$$
One potential obstacle comes from the requirement that the mask overlaps carry a ±1 factor (Eq. 15). This ambiguity is resolved by measuring a complementary mask shape $1 - H_m$, which defines an overlap with the unity mask as the reference ($O_r$). A mask overlap added to its complementary mask overlap must equal the reference overlap for any mask, thus constraining the sign: a simple comparison of the possible permutations of ±1 multipliers for a mask and its complementary mask provides the correct sign. Overall, we have a method to obtain the shaped squeezed field $u^*_{LO}(x, y) u_{sq}(x, y)$, up to a numerical normalization factor, for the single-squeezed-mode state. B. Mode decomposition reconstruction for thermal and squeezed vacuum modes Now we consider the input state as a combination of one squeezed mode (subscript sq) and one thermal mode (subscript th). In this case we can use Eq. 8 to calculate the expected quadrature variance:
$$V_m(\theta) = |O^{sq}_m|^2\left[V^+\cos^2(\theta-\theta_m) + V^-\sin^2(\theta-\theta_m)\right] + |O^{th}_m|^2 V_{th} + 1 - |O^{sq}_m|^2 - |O^{th}_m|^2. \qquad (21)$$
Note that the variance of the thermal state ($V_{th}$) does not depend on the quadrature angle, and thus its contribution is phase-independent. This equation obeys the general form of Eq. 13. Thus we can easily detect $V^\pm_m$ and $\theta_m$; however, there is not enough information to find $O^{sq}_m$, $O^{th}_m$, $V^\pm$, and $V_{th}$ from just 3 observables.
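A minimal way to extract $V^+_m$, $V^-_m$, and $\theta_m$ from a phase scan is to note that Eq. 13 is a single second-harmonic cosine in $\theta$: $V_m(\theta) = (V^+_m + V^-_m)/2 + [(V^+_m - V^-_m)/2]\cos 2(\theta - \theta_m)$. The sketch below fits this form by linear least squares to a simulated scan; the scan values are synthetic placeholders, not measured data.

```python
import numpy as np

# Synthetic phase scan of the measured variance for one mask (placeholder).
rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 200)
Vp_true, Vm_true, th_true = 5.5, 0.8, 0.6
V = (Vp_true * np.cos(theta - th_true) ** 2
     + Vm_true * np.sin(theta - th_true) ** 2
     + rng.normal(0, 0.05, theta.size))

# Eq. 13 rewritten: V(theta) = c0 + a*cos(2*theta) + b*sin(2*theta).
X = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
c0, a, b = np.linalg.lstsq(X, V, rcond=None)[0]

contrast = np.hypot(a, b)            # (V+ - V-)/2
V_plus, V_minus = c0 + contrast, c0 - contrast
theta_m = 0.5 * np.arctan2(b, a)     # mask phase theta_m (defined mod pi)

print(f"V+ = {V_plus:.2f}, V- = {V_minus:.2f}, theta_m = {theta_m:.2f} rad")
```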
The large thermal mode shifts the observed quantum noise up and dominates it. But Eq. 21 shows that to obtain the squeezed mode overlap we need to track the noise contrast (the difference between maximum and minimum noise), as in Eq. 15. This holds even in the presence of a strong thermal mode. To reconstruct the shaped squeezed mode $U^{sq}$, we can use exactly the same formalism as for the single squeezed mode above. Moreover, we assume that the thermal mode is much noisier than shot noise, i.e., $V_{th} \gg 1$, and consequently the thermal mode variance is much larger than the squeezed quadrature variance, i.e., $V_{th} \gg V^-$. With this assumption
$$|O^{th}_m|^2 = V^-_m - 1, \qquad (22)$$
where we again neglected the common normalization factor $1/V_{th}$. We can then reconstruct the intensity overlap of the local oscillator with the thermal mode, i.e., the shaped thermal field intensity $I^{th}(p) = |u_{LO}(p)|^2 |u_{th}(p)|^2$, as
$$I^{th}(p) = \sum_m \left(H^{-1}\right)_{pm} |O^{th}_m|^2 . \qquad (23)$$
Here we use the fact that the variance of the thermal mode is proportional to its intensity, and that this relationship does not depend on the loss of the system. This allows us to generalize the single-pixel detector intensity formalism 18,27. IV. EXPERIMENTAL REALIZATION The experimental apparatus used to illustrate our method is depicted in Fig. 2. The pump laser beam has a power of 7.3 mW at the entrance of the Rb cell and a radius of 60 µm at the focus (at the center of the cell). We use a strong linearly polarized pump tuned to the 5S1/2 F = 2 → 5P1/2 transition of 87Rb atoms to generate a squeezed vacuum field in the orthogonal polarization via the polarization self-rotation (PSR) effect 19,36. The output squeezed vacuum field is the input state to the quantum mode spatial profiler, as previous research indicated that this field may contain several squeezed or thermal modes 35,36. For the measurements we reuse the pump field as the LO for the homodyning balanced photodiode detector (BPD), which measures the quadrature fluctuations in the squeezed field. We use an interferometer consisting of two polarizing beam splitters (PBS) and two mirrors (one of which is mounted on a PZT transducer) to introduce a controllable phase shift ($\theta$) between the LO and the squeezed field. We use a phase-only liquid crystal spatial light modulator (SLM, model Meadowlark Optics PDM512-0785). We take advantage of the polarization dependence of the SLM to impose spatial masks only on the squeezed field, without affecting the local oscillator. This arrangement is crucial to reduce the effect of the temporal common phase flicker due to the liquid crystal driving circuit: since both optical fields propagate and bounce off the SLM together, they see the SLM phase flicker as a common phase which cancels out in the measurement. To introduce a field amplitude mask, we apply a blazed diffraction grating pattern with varying modulation depth 30,40 and select its zeroth order. This way we can controllably apply the "on" or "off" patterns of the Hadamard mask basis set to shape the squeezed field. Technically, we need masks with 1 and −1 amplitudes for the Hadamard patterns. As −1 intensities are not physically feasible, we use 1 and 0 patterns and their complements, following a well-established technique for single-pixel camera detectors 27. After the SLM, the unchanged LO and the masked squeezed field enter the homodyning BPD, and we record the squeezed field quadrature variance (noise level) with a spectrum analyzer. We measure the noise level as a function of the LO phase for every mask, $V_m(\theta)$ (see Fig. 1), and extract the maximum noise levels $V^+_m$, the minimum noise levels $V^-_m$, and the corresponding phase shifts $\theta_m$ for every mask, as in Eq. 13. A blank mask with no modifications to the input squeezed beam is used to define a reference phase with the LO. From these measurements, using Eqs. 15 and 22, we are able to reconstruct the mask overlaps for the squeezed ($O^{sq}_m$) and thermal ($O^{th}_m$) fields. Once we know these, we reconstruct the shaped squeezed and thermal fields using Eqs. 20 and 23. V. EXPERIMENTAL MODE RECONSTRUCTIONS PSR squeezing makes a potent subject for mode decomposition analysis, as many previous experiments demonstrated that it is far from pure and is plagued by excess noise [34][35][36] that increases with the temperature of the Rb vapor. Spatial mode analysis can shed light on the nature of the excess noise. In particular, we assume that the optical field coming out of the Rb cell consists of a single-mode squeezed vacuum and some thermal noise mode. Previous measurements suggest that the shapes of these two modes do not match each other. To distinguish between them we run the mode decomposition analysis for two different Rb cell temperatures: T = 65 °C, for which the maximum PSR squeezing is detected and we suspect a relatively small contribution from the thermal noise, as this low-temperature regime is close to the single-squeezed-mode case 37,41, and T = 80 °C, for which the excess noise dominates due to a significant addition of the thermal mode. For a direct comparison, see Fig. 3a,c, where the squeezing reconstruction has larger noise values, and Fig. 4a,c, where the thermal reconstruction has larger noise values compared to the squeezed amplitude. Figs. 3 and 4 present the 32×32 pixel reconstructions of the squeezed vacuum output following the analysis described in Sec. III B. Each figure has three distinct columns. The first column shows the amplitude and phase of the overlap between the squeezed mode and the LO, reconstructed using Eq. 15. The second column shows the shaped thermal mode intensity reconstructed with Eq. 23. Note that the thermal state by definition has no phase dependence; this is used as an implicit assumption during reconstruction. Finally, the last column shows the classical reconstruction 30 using a small leakage of the classical LO field into the squeezing polarization (due to the limited extinction ratio of the polarizing beam displacer). The lower temperature corresponds to a lower atomic density and a weaker nonlinear effect, which is responsible for the squeezing and the output mode structure 37. The reconstruction at 65 °C (Fig. 3) shows a clear fundamental Gaussian beam shape in both the classical (Fig. 3d,e) and quantum (Fig. 3a,b) reconstructions. This is expected, since the squeezing is generated in a mode very similar to the LO, which was used as the pump for the squeezer 36,37,[41][42][43]. At 65 °C we observe −2.0 dB of squeezing (noise suppression relative to the shot noise level) directly out of the Rb cell. Due to absorption in optical elements such as polarizers and less than 100% reflection off the SLM, this amount of squeezing is reduced to −0.5 dB after the squeezed field propagates through the imaging optics (see Fig. 2). We also detect about 5.7 dB of antisqueezing at the detector after the imaging optics, hinting at the presence of thermal noise.
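To make the full pipeline just described concrete (Eqs. 15, 20), the sketch below simulates noiseless projections of a synthetic 16-pixel shaped squeezed field onto a Hadamard mask set and recovers the field via $\mathbf{U}^{sq} = H^T \mathbf{O}^{sq}$; the field values are arbitrary examples, and the comments note where the experimental quantities would enter.

```python
import numpy as np
from scipy.linalg import hadamard

# Synthetic "shaped squeezed field" U(p) = u_LO*(p) u_sq(p) on 16 pixels,
# complex-valued; example numbers only.
n = 16
rng = np.random.default_rng(2)
U_true = rng.normal(size=n) + 1j * rng.normal(size=n)

# Hadamard measurement basis: rows are +-1 masks, satisfying H @ H.T = n * I.
H = hadamard(n)

# In the experiment each projection O_m = sum_p H[m, p] U(p) is obtained from
# the variance scan: |O_m| from sqrt(V+_m - V-_m) (Eq. 15), its phase from
# theta_m, and the residual +-1 sign from the complementary-mask measurement.
O = H @ U_true

# Reconstruction (Eq. 20); the 1/n factor arises because H here is the
# unnormalized +-1 Hadamard matrix, for which H^{-1} = H.T / n.
U_rec = (H.T @ O) / n

print(np.allclose(U_rec, U_true))   # True: the field is recovered exactly
```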
While we cannot predict the shape of the thermal mode, we must assume that it occupies a similar spatial region to the squeezed vacuum, since we observe its negative effect on the measured squeezing noise 36,37 . This prediction is supported by the measured thermal mode profiles. To increase the atomic density we raise the Rb cell temperature to 80 °C. At this high temperature, we no longer have any squeezing (the measured $V^-_m$ exceeds the shot noise level): after passing through the imaging optics, the minimum noise is 2.7 dB above shot noise (due to the increased contribution of the thermal mode) and the maximum noise is 11.5 dB above shot noise. This noise increase is expected at higher temperatures. When compared to the low-temperature reconstruction (Fig. 3), we see a spatial mode change in both the classical and quantum reconstructions (see Fig. 4). In the classical fields overlap reconstruction, an additional "ring" appears (Fig. 4e), likely due to self-defocusing of the laser field in the hot atomic vapor. The quantum reconstructions (Fig. 4a,b) also show a modification of the original Gaussian, even though they suffer from some digital "boxiness" that is highly dependent on post-processing phase choices. However, even the imperfectly reconstructed thermal mode shape (Fig. 4c), which is phase-independent, is very distinct from the classical shapes, as two "lobes" appear. One can notice a similar two-lobe structure even in the low-temperature thermal mode reconstruction (Fig. 3c), albeit much less pronounced. The magnitude of the reconstructed fields is proportional to the input squeezing and thermal variances (recall that we did not normalize by $\sqrt{V^+ - V^-}$ and $V_{th}$ in Eqs. 15 and 22). Thus we can see that at higher atomic densities a noisier (higher input variance) field is generated. We would like to note that it is possible to obtain higher resolution images, since we were mainly limited by the acquisition time for each mask and by the liquid crystal settling speed of the SLM (Meadowlark PDM512), which was the bottleneck of our setup. It takes about 45 minutes to collect a 32×32 pixel reconstruction. FIG. 3. A low-temperature reconstruction (65 °C), where (a) is the amplitude of the shaped squeezed field, (b) the squeezed phase, (c) the amplitude of the shaped thermal field, and (d) and (e) the amplitude and phase reconstructions of the shaped classical field, respectively. Classical field images (recovered with the methods described in 30 ) are included to provide comparison. Phase colorbars are in radians. Quantum field amplitude colorbars are proportional to the square root of the quantum noise variance. FIG. 4. A high-temperature reconstruction (80 °C), with panels as in Fig. 3: (a) amplitude of the shaped squeezed field, (b) squeezed phase, (c) amplitude of the shaped thermal field, and (d), (e) amplitude and phase reconstructions of the shaped classical field, respectively. Classical field images (recovered with the methods described in 30 ) are included to provide comparison. Note the thermal shape difference in (c) compared to Fig. 3c. Phase colorbars are in radians. Quantum field amplitude colorbars are proportional to the square root of the quantum noise variance. VI. CONCLUSION We demonstrated a method to reconstruct the spatial profile of an optical field consisting of several quantum noise modes with different transverse profiles. The proposed formalism is general, but we specifically considered the case of a single-mode squeezed vacuum field, alone or with some contribution of a thermal mode.
We applied this analysis to the squeezed vacuum generated in Rb vapor via the PSR effect, and observed signs of thermal-noise emergence at higher temperatures, as expected from previous experimental results. Potentially, when the measurements extract enough information about the covariance matrix, a back transformation can be applied and the initial covariance matrix can be exactly reconstructed. We can verify the reconstruction fidelity when the process finds a diagonal covariance matrix. The developed profiler technique has potential use in many quantum communication and precision measurement applications, where exact mode matching with an unknown quantum mode is necessary for high-fidelity quantum state detection.
Stabilization of three-dimensional charge order through interplanar orbital hybridization in PrxY1−xBa2Cu3O6+δ The shape of 3d-orbitals often governs the electronic and magnetic properties of correlated transition metal oxides. In the superconducting cuprates, the planar confinement of the $d_{x^2-y^2}$ orbital dictates the two-dimensional nature of the unconventional superconductivity and a competing charge order. Achieving orbital-specific control of the electronic structure to allow coupling pathways across adjacent planes would enable direct assessment of the role of dimensionality in the intertwined orders. Using Cu L3 and Pr M5 resonant x-ray scattering and first-principles calculations, we report a highly correlated three-dimensional charge order in Pr-substituted YBa2Cu3O7, where the Pr f-electrons create a direct orbital bridge between CuO2 planes. With this we demonstrate that interplanar orbital engineering can be used to surgically control electronic phases in correlated oxides and other layered materials. Charge order (CO) in the cuprates has so far been regarded as a 2D electronic phenomenon hosted in the CuO2 planes, reflecting the weak interplanar coupling of the planar Cu $3d_{x^2-y^2}$ orbitals. In diffraction experiments, the 2D character is evidenced by a reciprocal space 'rod' that is broad along the out-of-plane direction (Miller index L) and maximized at half-integer values of L due to a weak, but out-of-phase, coupling between adjacent planes 19 . The strength of this interplanar coupling can be further quantified by extracting the correlation lengths from the widths of the scattered CO peaks along L. In YBa2Cu3O6+δ (YBCO), the highest reported out-of-plane correlation length (10 Å) is nearly an order of magnitude smaller than the highest reported in-plane correlation length (95 Å) 5 , highlighting the 2D nature of the CO phase. It is not clear whether disorder [20][21][22] or the low dimensionality of the underlying Cu $3d_{x^2-y^2}$ orbitals intrinsically limits the out-of-plane correlation length, or if the CO could, in principle, develop into a truly long-range order, as suggested by recent experiments [20][21][22] . It has since been observed that the application of certain perturbations (high magnetic fields 4,23-25 , epitaxial strain in thin films 26 , or uniaxial strain 27,28 ) can induce a CO phase with three-dimensional (3D) coherence. Upon the application of these external influences, a second CO peak emerges, this time centered at integer L-values, evidencing an out-of-plane coupling that locks the phase of adjacent CuO2 planes. The 3D CO peaks have significantly increased out-of-plane correlation lengths, achieving up to 55 Å 24 , 61 Å 26 , and 94 Å 27 , respectively. All of these 3D CO correlation lengths are still considerably shorter than the typical crystalline c-axis correlation lengths found in this compound. Furthermore, the 2D rod centered at half-integer L-values is enhanced upon applying the external influences, showing a persistent coexistence of the 3D and 2D COs. While it is easy to discern the 2D nature of the unperturbed CO upon consideration of the underlying planar Cu $3d_{x^2-y^2}$ orbitals, the mechanisms by which these external perturbations are able to induce a 3D CO peak remain unclear.
Moreover, the in situ application of these perturbations presents complicated technical challenges that preclude many experimental techniques altogether, making it difficult to systematically investigate how the dimensionality of the CO can be tuned and obscuring its connection with SC. Taking an orthogonal route, we hypothesized that 3D CO could instead be stabilized by tuning the underlying orbital character via hybridization, to more directly enhance the out-of-plane coupling between adjacent CuO2 planes. Here we show that, by substituting Pr on the Y sites in PrxY1−xBa2Cu3O7 (Pr-YBCO) (Fig. 1a), a highly correlated 3D CO state can be stabilized with an out-of-plane correlation length of ~364 Å (Fig. 1b), a number that is bounded by the crystalline correlation length, within our experimental resolution. This material was chosen because substitution by Pr, which is the largest trivalent rare-earth ion except for Ce (which does not form the YBCO structure 29 ), results in the emergence of hybridization between the Pr 4f orbitals and planar CuO2 states 30 that yields an electronically relevant, hybridized orbital 31 with spatial extension in three dimensions, in stark contrast to the planar Cu $3d_{x^2-y^2}$ orbitals that dominate the physics of the parent compound. Unlike substitution by other rare-earth elements, such as Dy, which do not significantly alter the parent YBCO phase diagram 32 , increasing Pr substitution in the PrxY1−xBa2Cu3O7 system continually reduces the superconducting $T_c$, yielding a pseudogap regime 30,33,34 and eventually an antiferromagnetic insulating phase 30,[35][36][37][38][39][40] . Furthermore, the in- and out-of-plane zero-temperature superconducting coherence lengths are substantially longer in Pr-YBCO than in YBCO and increase monotonically with Pr concentration [41][42][43] . This indicates that SC gains additional 3D character with increasing Pr substitution, which has been attributed to increased coupling between CuO2 planes through the bonding with the substituted Pr ions. Various results suggest that localized Pr 4f states are appreciably hybridized with the valence band states associated with the conducting CuO2 planes, specifically the O 2p level 44,45 . We present density-functional calculations showing that, through this hybridization, which is unique to Pr, the CO on adjacent CuO2 planes can couple to yield a stable 3D CO phase. Altogether, our results constitute the first detection of a fully stabilized, long-range 3D CO that competes with SC, achieved by intrinsically engineering the orbital character of the electronic structure. Results We used resonant soft x-ray scattering (RSXS) at the Cu L3 and Pr M5 edges to investigate the CO properties in a PrxY1−xBa2Cu3O7 sample with x ≈ 0.3 and a superconducting $T_c$ = 50 K, a concentration value chosen because it features pseudogap behavior, as measured by various probes 30,33,34 , and because it yields a $T_c$ similar to underdoped YBa2Cu3O~6.67, a doping level where the CO phase is maximal. Because our samples are not detwinned (Methods section), we cannot determine whether the 3D CO peak is biaxial or uniaxial; and if the 3D CO is uniaxial, we cannot determine whether it is located along the H or the K reciprocal axis. The location of the 3D CO peak is thereby referred to as being along H or K to reflect this, except in places where the position is labeled simply by K for the sake of readability.
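For context on the correlation lengths discussed below: they are conventionally obtained from the inverse half-width of a diffraction peak. The following is a minimal sketch, not from the paper; the Lorentzian line shape, the lattice parameter and all numbers are placeholder assumptions:

```python
# Sketch: fit a Lorentzian to an L-scan of a CO peak and convert the
# half-width at half-maximum from r.l.u. to an out-of-plane correlation
# length via xi = 1 / HWHM(Q), with dQ = (2*pi/c) * dL.
import numpy as np
from scipy.optimize import curve_fit

c_axis = 11.7  # c lattice parameter in angstroms (illustrative value)

def lorentzian(L, amp, L0, hwhm, bkg):
    return amp * hwhm**2 / ((L - L0)**2 + hwhm**2) + bkg

# Synthetic stand-in for a measured scan around L = 1:
L = np.linspace(0.9, 1.1, 201)
intensity = lorentzian(L, 1.0, 1.0, 0.005, 0.1) + 0.01 * np.random.randn(L.size)

popt, _ = curve_fit(lorentzian, L, intensity, p0=(1.0, 1.0, 0.01, 0.0))
hwhm_rlu = abs(popt[2])
xi_c = c_axis / (2 * np.pi * hwhm_rlu)   # correlation length in angstroms
print(f"xi_c ~ {xi_c:.0f} angstroms")
```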
Reciprocal space dependence A reciprocal space map of the HL- or KL-plane in reciprocal lattice units (r.l.u.), measured at $T_c$ = 50 K and at 932.4 eV, is shown in Fig. 2a. In stark contrast to all other reports of 3D CO, no scattered intensity was detected in the vicinity of L ≈ 1.5, indicating the apparent absence of 2D CO (see Supplementary Methods). This represents the first unique aspect of our work: to within the limits of our instrumental resolution, we only observe a peak at L = 1, suggesting an effective isolation of the CO phase with an out-of-plane coupling. Further inspection of the 3D CO signal reveals a reciprocal space structure that is broad along H or K but narrow along L. X-ray absorption fine structure measurements indicate that, while the Pr ions are relatively well-ordered at the Y sites, there is clear disorder in the CuO2 planes and in the oxygen environment around the Pr 29 , which makes the enhancement of the correlation length along the c-axis even more striking. The broad shape of the peak along H or K at L = 1, shown in Fig. 2b, is consistent with the broad feature observed in many previous RSXS measurements of CO in cuprates 12,14,15,46 that has been attributed to a fluctuating component in YBCO 47 , suggesting that the actual static contribution may be narrower than it appears. Another important feature of our discovery is shown in Fig. 2c, which compares reciprocal space cuts along L close to integer values, with K centered at the in-plane CO wavevector. The broadest peak (dark green triangles) displays the data reported for 3D CO induced by high magnetic field 4 . The next broadest peak (red stars) displays the data for 3D CO induced by epitaxial strain in a thin film 26 . The next broadest peak (lime green squares) displays the data measured under the application of 1.0% uniaxial strain 27 , which had yielded the previously highest reported out-of-plane correlation length for 3D CO. The Pr-YBCO 3D CO peak (purple circles) is considerably narrower, yielding a correlation length of ~364 Å. This value is found to be similar to the absorption length for this compound, photon energy (~930 eV), and angle of incidence (~10°). The observable correlation length of the 3D CO may thus be limited by the finite penetration depth being of similar magnitude. We believe this is not a significant factor, however, because the (002) structural reflection (blue diamonds) has a correlation length that is within the experimental uncertainty of the 3D CO, even though it was measured at higher energy (~1750 eV) and angle of incidence (~38°), both of which contribute to a significantly longer absorption length. This suggests that, in this Pr-YBCO system, the 3D CO peak has a width that is limited by the width of the crystallographic Bragg peaks. The measured ~364 Å thus represents a lower bound on the out-of-plane correlation length. Energy dependence The energy dependence of the scattered intensity at Q = (0, −0.335, 1) is shown in Fig. 3a, overlaid with the corresponding x-ray absorption spectrum (XAS) measured with the electric field of the x-rays parallel to the bond directions in the CuO2 planes. The XAS reveals two resonances that correspond to the Pr M5 (930.9 eV) and the Cu L3 (932.6 eV) edges. There are also two peaks observed in the energy dependence of the 3D CO (930.3 eV and 932.8 eV), which most likely correspond to contributions from Pr and Cu, respectively.
However, due to the energetic overlap of the Pr M5 and Cu L3 edges, the energy dependence of the scattering is unavoidably complex; as such, we refer to them simply as peaks A and B (see Supplementary Discussion). Unlike in YBCO films with 3D CO 26 , we do not observe a significant shift in spectral weight to higher energy that would indicate CO coupling through the CuO chains. Furthermore, we observe in Fig. 3b that the 3D CO peak can still be detected at energies far below the resonance (850 eV), albeit much more weakly, which is in contrast to all other cuprates, where the CO peaks studied by RSXS lack sufficient scattering strength to be observed off-resonance. This indicates a sizeable lattice distortion rarely seen 7 in other cuprate systems, highlighting that in Pr-YBCO the 3D CO becomes more structurally stable than previously reported. Temperature dependence The interplay of the 3D CO with superconductivity can be investigated by measuring the temperature dependence of the former. In Fig. 3c, we plot the scattered intensity at Q = (0, −0.335, 1), at energies corresponding to peaks A and B in the energy dependence, as a function of temperature. While the overall scattering intensity is higher at the peak A energy than at the peak B energy, which is consistent with the measured energy dependence, it is notable that the 3D CO scattering signal is still detectable at room temperature for both energies. Upon cooling from T = 300 K, the temperature dependences at both energies maintain roughly equivalent slopes until the vicinity of $T_c$ = 50 K. Cooling below $T_c$ produces a cusp-like maximum, indicating a competition between SC and the isolated 3D CO phase. This signature behavior confirms that CO is at least a major contributor to the observed scattering, regardless of whether additional structural contributions exist. This is in contrast to the 3D CO induced by very high magnetic fields, where any competition between 3D CO and SC is obscured by the very presence of the magnetic field, which, while necessary to induce 3D coherence, comes with the unavoidable expense of greatly suppressing the SC phase. Discussion Having established experimentally that 3D CO can be stabilized with a long out-of-plane correlation length, we turn to discuss the possible origin of the c-axis coupling in the Pr-YBCO system. It is already well known that, unlike any other rare earth, Pr substitution uniquely suppresses SC in YBCO by localizing holes via orbital hybridization 44,45,48,49 . To this end, we performed density-functional theory plus Hubbard U (DFT+U) calculations for both PrBa2Cu3O6 (PrBCO) and DyBa2Cu3O6 (DyBCO) structures 50 to understand the role of this hybridization within the context of 3D CO and its competition with SC (see Supplementary Methods). Figure 4a schematically depicts the orbital character of the electronic states near the Fermi level ($E_F$) in PrBCO. In addition to the characteristic $pd\sigma$ bands of the CuO2 planes, which host all the 2D electronic phenomena, another band crosses the Fermi level. From prior calculations 49 , it is clear that the effective doping is affected, as this band above the Fermi level takes holes from the superconducting band, which is consistent with the observation that $T_c$ is suppressed with increasing Pr concentration. This results from the antibonding coupling between the Pr $4f_{z(x^2-y^2)}$ state and its nearest-neighbor O $2p_\pi$ states in adjacent CuO2 planes 48,49 (Fig. 4c, d).
We speculate that this orbital coupling with an out-of-plane component locks together the phase of the CO on adjacent CuO2 planes, resulting in a diffraction peak at L = 1 19 . For later rare-earth elements with lower 4f energy, the $4f_{z(x^2-y^2)}$-$2p_\pi$ antibonding band is expected to be lowered and removed from the Fermi level (Fig. 4). This aspect of Pr makes it the appropriate rare-earth to substitute for Y to stabilize and isolate 3D CO, which occurs concomitantly with a lattice distortion, according to our data. The exact structural mechanism of stabilization, e.g., phonons, is a subject of future research. Our discovery of a fully stable 3D CO without a 2D signal has important implications for our understanding of CO and its interplay with SC. First, we confirm that a fully coherent, isolated 3D CO can be stabilized despite the intrinsic disorder inevitably present in cuprates. We note here that our Pr-substituted samples are expected to host at least as much structural and chemical disorder as pristine YBCO, if not more, due to the additional defect channel. This result may further elucidate the complex relationship between CO and SC, both of which have now been shown to substantially gain 3D character with increasing Pr concentration in the Pr-YBCO system [41][42][43] . Second, we confirm that a stable 3D CO still coexists and competes with SC, implying that the system's ground state can comprise two long-range, static, coexisting orders. Third, since the 3D coupling does not rely on the CuO chains that are unique to YBCO, perhaps other forms of hybridization can be used to stabilize 3D CO in other cuprate families, where it has not yet been observed. Finally, we show that controlling the orbital content of the Fermi surface by assigning it a 4f character with an out-of-plane component can have a sizable impact on the electronic ordering tendencies of the CuO2 plane. This can be used as a tuning knob to study the validity of 2D models to describe layered systems, like the cuprates or intercalated graphitic systems [51][52][53][54][55] . In summary, we have shown how utilizing the hybridization between the 4f states of Pr and planar CuO2 orbitals to tune the underlying orbital character can significantly enhance the out-of-plane coupling, phase-locking the CO across adjacent planes and rendering a stable CO phase that is fully correlated along the out-of-plane direction, without a 2D counterpart. The c-axis correlation length has a lower bound matching that of the crystal itself, showing that Pr substitution is a more efficient way of stabilizing 3D CO than external perturbations, like magnetic fields and strain, and uniquely does not suffer from the experimental complications arising from their in situ application. Furthermore, through resonant spectroscopy, we attribute the formation of the 3D coupling to the role of the Pr ions located between the CuO2 planes. To understand the mechanism of this out-of-plane coupling, we turned to DFT+U calculations, which show a hybridized 4f-2p band crossing the Fermi level, a feature that is unique to Pr-substituted YBCO. Since our system does not rely on external perturbations, other techniques can be employed to investigate this material and shed light on the connection between CO and SC. Moreover, this demonstrates how the influence of the underlying orbital character on an electronic phase can be tuned via orbital hybridization, which can be generalized to other correlated transition metal oxides and layered systems.
Sample preparation Single crystals of PrxY1−xBa2Cu3O7 45 were grown according to the method described in reference 56 . The starting materials used in the crystal growth consisted of 99.99% pure Y2O3, Pr6O11, BaCO3, and CuO powders. The crystals were annealed in flowing oxygen to maintain full oxygenation and optimize their superconducting properties. The Pr0.3Y0.7Ba2Cu3O~7 sample we studied has an orthorhombic crystal structure that is not detwinned, with lattice parameters c = 11.67 Å and a = b = 3.87 Å. The superconducting transition temperatures of the crystals were determined from magnetization measurements performed with a vibrating sample magnetometer in a Quantum Design DynaCool Physical Property Measurement System. RSXS measurement The data shown in this manuscript were collected from scattering experiments carried out at beam line 13-3 of the Stanford Synchrotron Radiation Lightsource (SSRL). Crucial measurements and insights were gained through scattering experiments carried out at Sector 29 of the Advanced Photon Source (APS). The sample was mounted using silver paint on an in-vacuum multiple-circle diffractometer. The sample temperature was controlled by an open-cycle helium cryostat. The incident photon polarization was fixed as σ (vertical linear) polarization. The (0 K L) scattering plane was determined by aligning the (0 0 2), (0 −1 1), and (0 1 1) structural Bragg reflections at 1746 eV photon energy. We note that we have also observed this phenomenon in a second sample with a similar Pr concentration (see Supplementary Note). A 256 × 1024 pixel (26 μm × 26 μm pixel size) CCD detector was used. The scattering intensity data were collected within a region-of-interest in the center of the CCD detector. Dark images and data measured by the CCD detector outside of this region-of-interest were used to subtract any background fluorescence contributions, which were generally very small compared to the 3D CO scattered intensity, except when off-resonance or at high temperature. A beam shutter was used to cut the incoming x-ray beam between two consecutive CCD shots to prevent undesired collection of x-ray photons during read-out. A 100 nm Parylene/100 nm Al filter was placed in front of the CCD to stop photoelectrons emitted from the sample from contributing to the signal on the CCD. Further details about the data collection and analysis methods used may be found in the Supplementary Methods. Data availability The data generated in this study have been deposited in the Harvard Dataverse database available at https://doi.org/10.7910/DVN/2BIWWI.
A detailed analysis of single-channel Nav1.5 recordings does not reveal any cooperative gating Cardiac voltage-gated sodium (Na+) channels (Nav1.5) are crucial for myocardial electrical excitation. Recent studies based on single-channel recordings have suggested that Na+ channels interact functionally and exhibit coupled gating. However, the analysis of such recordings frequently relies on manual interventions, which can lead to bias. Here, we developed an automated pipeline to de-trend and idealize single-channel currents, and assessed possible functional interactions in cell-attached patch clamp experiments in HEK293 cells expressing human Nav1.5 channels as well as in adult mouse and rabbit ventricular cardiomyocytes. Our pipeline involved de-trending individual sweeps by linear optimization using a library of predefined functions, followed by digital filtering and baseline offset. Subsequently, the processed sweeps were idealized based on the idea that the ensemble average of the idealized currents, identified by thresholds between current levels, best reconstructs the ensemble average current from the de-trended sweeps. This reconstruction was achieved by non-linear optimization. To ascertain functional interactions, we examined the distribution of the numbers of open channels at every time point during the activation protocol and compared it to the distribution expected for independent channels. We also examined whether the channels tended to synchronize their openings and closings. However, we did not uncover any solid evidence of such interactions in our recordings. Rather, our results indicate that wild-type Nav1.5 channels are independent entities or exhibit only very weak functional interactions that are probably irrelevant under physiological conditions. Nevertheless, our unbiased analysis will be important for further studies examining whether auxiliary proteins potentiate functional Na+ channel interactions. However, similar pathological manifestations have been observed in patients without any SCN5A mutation, which led to the idea that the function of Nav1.5 channels is tightly controlled by several regulatory proteins (Abriel et al., 2015; Rivaud et al., 2020). These interacting proteins play diverse roles, for example in membrane trafficking of the channels, in their anchoring to cytoskeletal proteins, in their post-translational modifications and in the regulation of their biophysical properties (Abriel et al., 2015; Dong et al., 2020). For instance, calmodulin regulates Nav1.5 channel function, and mutations in either the binding site on Nav1.5 or in calmodulin itself can reduce the peak of I_Na and destabilize its inactivation, leading to an increased persistent I_Na (Kang et al., 2021; Kim et al., 2004). Another interacting protein is 14-3-3η, and it has been suggested that this protein plays a role in Na+ channel clustering and modifies the biophysical properties of I_Na by shifting its steady-state inactivation curve to more negative potentials (Allouis et al., 2006). In a recent study, Clatot et al. (2017) suggested that the α subunits of two Na+ channels form dimers, and that two neighbouring channels can interact either directly or indirectly via the mediator protein 14-3-3η.
Based on single-channel recordings in HEK293 cells expressing wild-type human Na+ channels, the authors proposed that the biochemical interaction of these dimers may cause the sodium channels to also interact functionally by exhibiting cooperative gating, implying that the probability that the two channels are open simultaneously is increased (Clatot et al., 2017). In their experiments, the functional interaction was reduced by difopein, a protein that disrupts the interaction via 14-3-3η. However, the authors did not report any changes in the overall macroscopic behaviour of I_Na (Clatot et al., 2017). Thus, the cardiac Na+ channel could be considered as a dimeric complex rather than a single functional unit. The notion that Na+ channels interact functionally represents a paradigm shift because it implies that the functional units of I_Na are channel dimers (or possibly multimers) rather than single channels. In a previous study, we developed a mathematical model of a pair of interacting channels compatible with the data of Clatot et al. (2017), and illustrated the implications of this interaction in causing the dominant negative effect of the BrS-causing p.L325R variant of the human Nav1.5 channel (Hichri et al., 2020). However, the paradigm of functionally interacting Na+ channels is still in its emergence and requires further investigation. Therefore, to gain more detailed insights into the functional interactions between Na+ channels and to understand their implications for physiology, biophysics, pathophysiology and modelling, we developed an automated pipeline to analyse cell-attached patch clamp recordings of Na+ channel currents, and applied it to our own recordings from HEK293 cells expressing human Nav1.5 channels and from adult mouse and rabbit ventricular cardiomyocytes. Single-channel cell-attached recordings are indeed the most suitable approach to studying the microscopic behaviour of ion channels. However, such recordings are corrupted by noise, baseline drift and capacitance artefacts. Very commonly, these disturbances are removed manually before further analysis. Moreover, the analysis frequently requires user-defined thresholds to identify channel openings and closings. This large number of manual interventions is highly prone to subjective bias and may lead to inaccurate data interpretation. To minimize this bias, the analysis pipeline presented in this study has only a minimal set of parameters. It subtracts the capacitance artefacts and baseline drifts, idealizes the single-channel currents and counts the number of open channels at every time point during every recorded sweep. Then, we used different approaches to explore any significant interaction quantitatively. We generalized our previously published method (Hichri et al., 2020) to quantify the interaction between two or more identical and indistinguishable channels. Next, we applied a modification of the method proposed by Chung and Kennedy (1996) to investigate the tendency of channels to synchronize their openings and closings (coupled gating). Overall, we did not find any evidence for a strong functional interaction in our recordings, in contrast to what was reported by others (Clatot et al., 2017). Rather, our results indicate that wild-type Nav1.5 channels are independent entities or exhibit only very weak functional interactions that are probably irrelevant under physiological conditions.
Ethical approval The handling of animals was done in accordance with the ethical principles and guidelines of the Swiss Academy of Medical Sciences. The procurement of animals, husbandry and experiments were done according to the European Convention for the Protection of Vertebrate Animals used for Experimental and other Scientific Purposes. According to Swiss legislation, the protocols used here were approved and authorized by the Commission of Animal Experimentation of the Cantonal Veterinary Office of the Canton of Bern, Switzerland [authorizations BE 88/2022 (for mice) and BE 132/2020 (for rabbits)]. The experiments were conducted according to the animal welfare committee guidelines of the University of Bern, under the overarching framework of Swiss federal legislation and the guidelines of the Swiss Academy of Medical Sciences. The investigators understand the ethical principles under which The Journal of Physiology operates, and our work complies with the animal ethics checklist outlined in the editorial by Grundy (2015). Isolation of mouse ventricular myocytes Ventricular cardiomyocytes from wild-type adult C57BL/6J mice (own breeding at the Central Animal Facility of the Faculty of Medicine of the University of Bern) were isolated according to a modified procedure using established enzymatic methods (Rougier et al., 2019). At least 7 days before the mice were killed, tunnel/cup handling procedures were performed to reduce the stress of the animals. The animals always had ad libitum access to food (standard diet) and water. The mice (n = 4 females, 23-28 weeks old, weighing 22-28 g) were deeply and terminally anaesthetized via an intraperitoneal injection of ketamine and xylazine (200 and 20 mg/kg body weight, respectively). To avoid blood coagulation inside the heart, which could interfere with the ex vivo procedures, heparin (5000 IU/mL, 4 mL/kg body weight) was also added to the anaesthesia mix. The depth of anaesthesia was assessed by the absence of the pedal withdrawal reflex. The animals were then killed by cervical dislocation before a thoracotomy was performed. Hearts were rapidly excised, cannulated and mounted on a Langendorff apparatus for retrograde perfusion (3 mL/min) at 37°C. The hearts were rinsed free of blood with a nominally Ca2+-free solution containing (in mmol/L): NaCl, 135; KCl, 4; MgCl2, 1.2; NaH2PO4, 1.2; HEPES, 10; glucose, 11; pH = 7.4 (adjusted with NaOH), and subsequently digested with a solution supplemented with 50 μM Ca2+ and collagenase type II (1 mg/mL, Worthington, Allschwil, Switzerland) for ~15 min. Following digestion, the atria were removed, and the ventricles were transferred to a nominally Ca2+-free solution, where they were minced into small pieces. Single cardiac myocytes were liberated by gentle trituration of the digested ventricular tissue and filtered through a 100 μm nylon mesh. Isolation of rabbit ventricular myocytes Ventricular myocytes from adult wild-type New Zealand white rabbits were kindly provided by Katja Odening and her research group. The animals were bred at the Centre for Experimental Models and Transgenic Services, University Medical Centre, Freiburg, Germany, and housed at the Central Animal Facility of the Faculty of Medicine of the University of Bern. They were fed ad libitum (normal diet) and had constant access to water.
The rabbits (n = 2 females, 6 months old, weighing 3.8 kg) were anaesthetized by subcutaneous injection of ketamine and xylazine (12.5 and 3.75 mg/kg body weight, respectively) and anti-coagulated with an intravenous injection of heparin (1000 IU in 1 mL). The animals were then killed by an intravenous injection of pentobarbital (150 mg/kg body weight). Death was assessed by the cessation of breathing, dilated pupils and the absence of corneal reflex and pain reaction. The hearts were then excised and mounted on a Langendorff apparatus for enzymatic digestion with collagenase, as described in detail by Odening et al. (2019). These myocytes were leftover cells obtained in the context of separate studies by Katja Odening. Electrophysiological recordings Cell-attached patch clamp recordings were acquired using the integrating head stage mode for quiet single-channel recordings (β configuration set to Patch) of an Axopatch 200B amplifier (Molecular Devices Corp., Sunnyvale, CA, USA). The gain was set to α = 10. Experiments were controlled using pClamp10 software (Molecular Devices). Currents were filtered with a 5 kHz four-pole Bessel filter and digitized at a sampling rate of 100 kHz. The bath solution contained (in mmol/L): KCl, 140; NaCl, 5; MgCl2, 3; EGTA, 2; glucose, 6; HEPES, 10, at pH = 7.4 adjusted with CsOH. The high potassium concentration in the bath ensured that the resting membrane potential was brought near zero and thus facilitated control of the transmembrane voltage of the part of the cell membrane under the pipette. The pipette solution contained (in mmol/L): NaCl, 140; TEA-Cl, 30; CaCl2, 0.5; HEPES, 10 (nifedipine, 0.002, was used to inhibit Cav1.2 calcium channels when recording Na+ currents in cardiomyocytes), at pH = 7.4 adjusted with CsOH. Patch pipettes with a resistance of 5-12 MΩ were pulled from quartz capillaries (QF150-75-10; Sutter Instruments, Hofheim, Germany) using a P2000 laser puller (Sutter Instruments). The pipettes were coated with Sylgard® (Sigma-Aldrich, St. Louis, MO, USA) to decrease their wall capacitance. Pipette capacitance was compensated for to minimize the capacitive transient as far as possible. Cell-attached recordings were conducted upon a 50 ms depolarizing step to −60, −40 or −20 mV from a holding potential of −120 mV, and the recordings were repeated every 1 s. Sweeps were continuously acquired until their number reached 1000 or until the seal was lost. For the determination of single-channel conductance, 50 sweeps were acquired. The recordings were obtained at room temperature (22-23°C). Recordings were pursued only if seal resistance was >10 GΩ. Recordings from the isolated myocytes were conducted on the lateral membrane. Data analysis Cell-attached recordings were analysed using custom MATLAB programs (MathWorks, Natick, MA, USA). The programs build up an automated pipeline that comprises multiple steps (Fig. 1). Of note, in the cell-attached mode, inward Na+ currents have a positive sign and are acquired as such. Thus, in this work, Na+ currents were considered to be positive and their sign was not inverted in the analysis and graphical representations. Overview of the analysis As illustrated in Fig. 1, raw single-channel recordings are always corrupted by capacitance artefacts, baseline drifts and high-frequency noise.
Therefore, in a first step, capacitive artefacts and baseline drifts were identified and subtracted in every recorded sweep using a library of exponential and linear functions, applying an algorithm developed by Kang and colleagues (Abrams et al., 2020; Kang et al., 2021). In a second step, the resulting de-trended sweeps were optionally low-pass filtered. In a third step, the baseline of the de-trended and filtered sweeps (i.e. the current in the absence of any open channels) was offset to 0. Idealizing the currents represented the fourth step of the processing pipeline. Here, we developed a new approach based on the idea that the ensemble average of the idealized currents should deviate as little as possible from the ensemble average current of the processed sweeps recorded from a given patch. In detail, the idealized current levels were determined iteratively (assuming that all channels produce the same current) using a non-linear procedure minimizing the mean square residual. This procedure was repeated under the assumption of a given number of channels (increasing from one to five), and the residual converged when the maximal number of simultaneous openings was identified (Fig. 2). To explore functional interactions between channels, we counted, for every time point during the voltage clamp protocol, the number of simultaneously open channels and calculated the distribution of the fractions of sweeps with k open channels ($f_k$), with k ranging from 0 to the number of channels in the patch. Under the assumption of identical and indistinguishable channels, the probability that a channel is open ($p_{open}$) was calculated from the distribution $f_k$. Then, from $p_{open}$, the binomial distribution $\bar{f}_k$ that would be expected for channels that do not interact functionally (i.e. independent channels) was computed. Next, we compared statistically the observed and expected distributions $f_k$ and $\bar{f}_k$ (multiplied by the number of sweeps) using the χ2 test. For the case of two channels, this approach corresponds to the analysis of a contingency table, as illustrated in the bottom left panel of the Abstract Figure. Figure 1. The pipeline is illustrated by the flowchart on the left. First row: example sweeps obtained from a patch in an HEK cell upon application of a voltage step from +120 to +40 mV (corresponding, in the cell-attached mode, to a membrane depolarization from −120 to −40 mV). The voltage step was applied at time 0. The thick green curves represent the trends identified by the de-trending algorithm. Second row: sweeps after subtraction of the trends. Third row: de-trended sweeps after digital filtering (in this example: low-pass with 3.7 kHz cut-off frequency) and offsetting the baseline (absence of any open channel) to 0. Figure 2. Idealization of single-channel currents. A, de-trended and baseline offset sweeps (black) recorded from an HEK cell membrane patch depolarized to −40 mV that exhibited up to four simultaneously open channels. The same sweeps are replicated in every column, whereby only one channel was open in the first row, two were simultaneously open in the second, three in the third and four in the fourth (labels). The orange trace represents the idealized currents together with the number of simultaneously open channels, under the assumption of one to four channels present in the patch (columns, labels). B, mean square residual (top) and single-channel current (bottom) output by the idealization algorithm under the assumption of one to five channels in the patch. C, ensemble average current (of 516 sweeps, top) computed from the processed sweeps and from the idealized currents, as well as the residual (bottom) under the assumption of $N_{assumed}$ = 4 channels. The voltage step was applied at time t = 0. Note that the analysis was started at t = 0.38 ms because the very large capacitive artefact (orders of magnitude larger than the single-channel currents, see Fig. 1) could not be fully and reliably de-trended before that time.
To quantify the interaction, we also computed the difference of the Shannon entropies (Shannon, 1948) of the distributions $f_k$ and $\bar{f}_k$; this entropy difference is expected to be 0 for independent channels and negative for functionally interacting channels. To quantify the strength of the interaction, we also calculated the relative differences $\varepsilon_k$ between $f_k$ and $\bar{f}_k$ in a small time window centred around the peak of $p_{open}$. In the presence of cooperative gating, we always expect $\varepsilon_1$ to be negative. In addition, we examined whether the channels have a tendency to synchronize their openings and closings by inspecting the transition rates, over a predefined time interval, between configurations with given numbers of open channels (arrows in the bottom right panel of the Abstract Figure). These rates were compared to those that would be expected for non-interacting channels using an approach inspired by the work of Chung and Kennedy (1996). For two channels exhibiting coupled openings, the observed transition rate from zero to two open channels is expected to be larger than that without interaction. Conversely, for two channels exhibiting coupled closings, the observed transition rate from two to zero open channels is expected to be larger than that without interaction. The de-trending, filtering and idealization pipeline, as well as the functions used to assess channel interactions, were incorporated into user-friendly MATLAB graphical interfaces that are available to the scientific community on the repository Zenodo (https://doi.org/10.5281/zenodo.7817601). Technical and mathematical details of our analysis are provided in the remainder of the Methods section. Details of the analysis Step 1: The first step involves subtracting the capacitance artefact and the baseline drift from each individual sweep by using a library of functions, as described by Kang and colleagues (Abrams et al., 2020; Kang et al., 2021). This library contains predefined functions to remove the capacitive artefact as well as linear drifts and constant offsets. In our application, the library contains 25 decaying exponential functions with time constants ranging from 0.05 to 7.40 ms in a geometric progression, as well as one linear and one constant function. In this de-trending process, the following cost function is minimized: $$\min_{E,\,s}\;\left\| I_{\mathrm{measured}} - D E - s \right\|_2^2 + \lambda \left\| s \right\|_1,$$ with $E_{\mathrm{fit}} = D E$ and $I_{\mathrm{DE}} = I_{\mathrm{measured}} - E_{\mathrm{fit}}$, where $I_{\mathrm{measured}}$ is the raw sweep from a recording (in the form of a column vector of $N_{samples}$ elements, $N_{samples}$ being the number of time samples), while D is an $N_{samples} \times n_E$ matrix containing, in its columns, the $n_E$ = 27 library functions, and E is a vector of the corresponding $n_E$ coefficients. The vector s (with $N_{samples}$ elements) represents an estimation of positive outliers (samples > 0) corresponding to the actual current through one or several Na+ channels. The parameter λ is a regularization penalty parameter for the optimization problem, which should be set at a value not exceeding the root mean square (RMS) noise level. The norms $\|\cdot\|_2$ and $\|\cdot\|_1$ represent the L2 and L1 norms, respectively.
$E_{\mathrm{fit}}$ is the fitted capacitive transient and linear drift, and $I_{\mathrm{DE}}$ is the de-trended current that is left after subtracting this capacitive and baseline drift fit from the raw current. The cost function was minimized using an iterative steepest gradient descent algorithm [MATLAB code developed by Kang and colleagues (Abrams et al., 2020; Kang et al., 2021) and kindly provided by Manu Ben-Johny, Columbia University]. In our application, we observed that setting λ to 0.01 pA (5% of a typical RMS noise level of 0.20 pA) and using 500-800 iterations always led to a good performance. In the original algorithm, the coefficients in the vector E are constrained to be zero or positive (Abrams et al., 2020; Kang et al., 2021), but in some circumstances, a better fit was obtained without constraint on the coefficients in E. Step 2: In a second step, if necessary, the de-trended current was digitally filtered using a 3.7 kHz low-pass filter. This filtering was conducted using convolution with a Gaussian kernel. In many cases, filtering was not necessary due to the good quality of the raw signal. Step 3: The algorithm of Kang and colleagues (Abrams et al., 2020; Kang et al., 2021) performs well to de-trend the signal, up to a constant offset term which depends on the parameter λ. Therefore, in this third step, the baseline was offset to 0 according to the following algorithm, based on the assumptions that (i) for a given patch, there is a certain fraction of blank sweeps (in our experiments, this fraction was usually >10%), (ii) in every sweep, all channels are closed for a certain period of time (which is usually the case for a small number of Na+ channels) and (iii) the single-channel current is larger than the peak-to-peak noise (which was the case in our recordings). The sweeps were ranked by peak-to-peak amplitude, and the fraction $f_{blank}$ of sweeps with the lowest peak-to-peak amplitudes was extracted and their samples were pooled together; this pool of samples served to estimate the peak-to-peak noise level $I_{pp}$ of the recordings. The value of the parameter $f_{blank}$ was set by default to 0.1. Then, the sweeps having a peak-to-peak amplitude lower than $\theta I_{pp}$ (where θ is a parameter used as a multiplicative threshold) were categorized as blank, and these sweeps were used to estimate the mean residual current μ. A default value was used for θ. The set of samples in the interval $[\mu - I_{pp}/2,\, \mu + I_{pp}/2]$ was extracted, a Gaussian function was fitted to the histogram of this subset of samples, and the fitted peak of this Gaussian was subtracted from the sweep.
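Steps 1-3 can be condensed into a short sketch. The authors used the MATLAB implementation of Kang and colleagues; the Python translation below, the alternating-minimization solver and the median-based baseline estimate are our assumptions:

```python
# Sketch of Steps 1-3: fit each sweep with a library of decaying exponentials
# plus linear and constant terms, while an L1-penalized, non-negative outlier
# vector s absorbs the channel openings so that they do not bias the trend fit.
import numpy as np

def detrend(sweep, dt, lam=0.01, n_iter=500):
    n = sweep.size
    t = np.arange(n) * dt                              # time axis in ms
    taus = np.geomspace(0.05, 7.40, 25)                # 25 time constants (ms)
    D = np.column_stack([np.exp(-t / tau) for tau in taus] + [t, np.ones(n)])
    s = np.zeros(n)
    for _ in range(n_iter):
        E, *_ = np.linalg.lstsq(D, sweep - s, rcond=None)  # trend fit given s
        resid = sweep - D @ E
        s = np.maximum(resid - lam / 2, 0.0)           # closed-form minimizer over s >= 0
    return sweep - D @ E                               # de-trended current I_DE

def offset_baseline(detrended, i_pp):
    """Simplified Step 3: centre the near-baseline samples on zero."""
    mu = np.median(detrended)
    sel = np.abs(detrended - mu) < i_pp / 2
    return detrended - detrended[sel].mean()
```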
Step 4: The next step is the idealization of the de-trended and baseline offset current sweeps, $I_{sweep,i}$, with the index i going from 1 to the number of utilizable sweeps, $N_{sweeps}$. The novel idea behind this step is that the ensemble average of the idealized sweeps ($E_{idealized}$), identified by thresholds between current levels, must reconstruct as closely as possible the ensemble average current of the de-trended sweeps ($E_{sweeps}$). $E_{idealized}$ is defined as $$E_{idealized}(t) = \frac{1}{N_{sweeps}} \sum_{i=1}^{N_{sweeps}} I_{idealized,i}(t),$$ where $I_{idealized,i}$ is the ith idealized sweep, and the ensemble average current of the de-trended sweeps is computed as $$E_{sweeps}(t) = \frac{1}{N_{sweeps}} \sum_{i=1}^{N_{sweeps}} I_{sweep,i}(t).$$ The sweeps are idealized by identifying thresholds between current levels corresponding to zero, one, two, etc., simultaneously open channels, up to the maximal number ($N_{max}$) of simultaneously open channels in a given set of sweeps (corresponding to a single membrane patch). Of note, this maximal number is not necessarily equal to the actual number of channels present in the patch, but it is in no case greater than it. Assuming that the channels have an identical unitary current $i_u$, the current levels are equally spaced from $i_0$ to $i_0 + N_{max} i_u$, where $i_0$ is the zero-current level (in the absence of any open channel). We then set the thresholds at the midpoints between these levels and idealized the sweeps by rounding every individual sample to the nearest current level. The aim of the idealization algorithm is to find the values of $i_0$ and $i_u$ that minimize a cost function defined by the difference between $E_{sweeps}$ and $E_{idealized}$. As a cost function, we used the squared L2 norm of this difference divided by the number of time samples ($N_{samples}$), that is, the mean square residual ($R^2$) defined as $$R^2 = \frac{1}{N_{samples}} \sum_{j=1}^{N_{samples}} \left[ E_{sweeps}(t_j) - E_{idealized}(t_j) \right]^2,$$ where $t_j$ is the time of the jth sample. Due to the thresholding procedure, $R^2$ (a function of $i_0$ and $i_u$) is piecewise constant and thus discontinuous in the $i_0$-$i_u$ parameter space. Hence, any minimization algorithm can easily miss the true minimum of $R^2$. Therefore, the computation of $R^2$ was regularized by introducing a regularization parameter ρ to smooth the transitions over the thresholds, as follows. The thresholds $\theta_k$ between the kth and the (k+1)th current levels were defined as the midpoints $$\theta_k = i_0 + \left(k + \tfrac{1}{2}\right) i_u, \qquad k = 0, 1, \ldots, N_{max} - 1.$$ Then, for every sample s from $I_{sweep,i}(t)$, a set of auxiliary variables $p_k$ (with k ranging from 0 to $N_{max}$) was defined by smoothing the hard thresholding over the scale ρ. The values of $p_k$ can be intuitively understood as the 'probability' that sample s corresponds to the current carried by k open channels. The idealized sample $s_{idealized}$ was then computed as $$s_{idealized} = \sum_{k=0}^{N_{max}} p_k \,(i_0 + k\, i_u).$$ These idealized samples served to construct the idealized sweeps and their ensemble average $E_{idealized}$, and to compute $R^2$, now a continuous function of $i_0$ and $i_u$. Of note, in the limit as ρ approaches 0, the procedure outlined above corresponds to rounding the samples to the nearest current level. Starting with ρ = 0.04 pA (~20% of the typical RMS noise level of 0.20 pA), the minimum of $R^2$ was identified using the Nelder-Mead downhill simplex method (function 'fminsearch' in MATLAB) (Nelder & Mead, 1965). As initial guesses, a value of 0 was used for $i_0$, and the initial guess of $i_u$ was estimated from a priori knowledge of the single-channel current (typically 1.5-2.5 pA) expected at a given step potential, or directly from a rapid visual inspection of the sweeps. The minimization algorithm was then re-run a further seven times, halving ρ every time, with initial guesses provided by the values of $i_0$ and $i_u$ minimizing $R^2$ in the previous run. At the end, the algorithm was run with ρ = 0. This iterative procedure resulted in robust convergence, insensitive to the initial guesses of $i_0$ and $i_u$. An initial execution of the algorithm was carried out with $N_{max}$ = 1, providing optimized values of $i_0$, $i_u$ and $R^2$. The algorithm was then repeated with increasing values of $N_{max}$. Incrementing $N_{max}$ led to a convergence of $R^2$ as well as of $i_0$ and $i_u$; increasing $N_{max}$ further did not improve the minimization of $R^2$. Therefore, the value of $N_{max}$ for which $R^2$ reached its minimum was considered the correct estimate of the maximal number of simultaneously open channels in an experiment with a given patch. At the end of this idealization step, the algorithm outputs the number of open channels at every sample of every sweep.
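A minimal sketch of the idealization step follows (illustrative only; the smoothing in ρ is omitted, and scipy's Nelder-Mead routine stands in for MATLAB's fminsearch):

```python
# Sketch: round every sample to the nearest of the Nmax + 1 equally spaced
# levels i0 + k*iu and score (i0, iu) by the mean square residual between
# the ensemble averages of the raw and the idealized sweeps.
import numpy as np
from scipy.optimize import minimize

def idealize(sweeps, i0, iu, n_max):
    k = np.clip(np.round((sweeps - i0) / iu), 0, n_max)
    return i0 + k * iu, k.astype(int)       # idealized currents, open-channel counts

def residual(params, sweeps, n_max):
    i0, iu = params
    if iu <= 0:
        return np.inf                       # guard against unphysical unitary currents
    ideal, _ = idealize(sweeps, i0, iu, n_max)
    diff = sweeps.mean(axis=0) - ideal.mean(axis=0)
    return np.mean(diff**2)

def fit_levels(sweeps, iu_guess, n_max):
    res = minimize(residual, x0=[0.0, iu_guess], args=(sweeps, n_max),
                   method="Nelder-Mead")
    return res.x                            # optimized (i0, iu)
```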
Quantification of the interaction between channels To explore whether there is any significant interaction between channels under non-stationary conditions, we generalized our previously published approach (Hichri et al., 2020). This generalization allows us to quantify the interaction between two or more channels under the assumptions that all the channels are identical and indistinguishable and that the action of one channel on another is reciprocal. After obtaining the idealized traces, our program counts the number of sweeps containing 0, 1, 2, . . . , $N_{max}$ open channels as a function of time. These counts correspond to current levels $L_k(t)$, with $k \in \{0, 1, 2, \ldots, N_{max}\}$ and $\sum_{k=0}^{N_{max}} L_k(t) = N_{sweeps}$. The respective fractions were calculated as $f_k(t) = L_k(t)/N_{sweeps}$, and hence $f_k(t)$ denotes the fraction of sweeps having k channels open at time t. In the limit of a large number of sweeps, these fractions approach the true probabilities of observing a given number of open channels at a given time during the voltage clamp protocol. From the distribution of these counts, the binomial distribution expected for independent, non-interacting channels was calculated as follows. First, an assumption must be made regarding the number of channels actually present in the patch ($N_{assumed}$), a number that is necessarily greater than or equal to $N_{max}$, the maximal number of simultaneously open channels observed in a given patch. From the assumption that the channels are identical, the probability (fraction of cases) that any given channel is open ($p_{open}$) or, respectively, shut (i.e. closed or inactivated) is computed as $$p_{open}(t) = f_{N_{assumed},open}(t) = \frac{1}{N_{assumed}} \sum_{k=0}^{N_{assumed}} k\, f_k(t), \qquad f_{N_{assumed},shut}(t) = 1 - p_{open}(t),$$ where $f_{N_{assumed},shut}$ and $f_{N_{assumed},open}$ are respectively the fractions of channels being shut and open. These fractions are calculated from the fractions yielded by the observed counts. From these, using the binomial distribution formula, we compute the fractions of sweeps $\bar{f}_k$ and the numbers of sweeps $\bar{L}_k$ that would be expected to yield k open channels at time t in the absence of any interaction (the overbar indicates expected values without interaction) as $$\bar{f}_k(t) = \binom{N_{assumed}}{k}\, p_{open}(t)^k \left(1 - p_{open}(t)\right)^{N_{assumed}-k}, \qquad \bar{L}_k(t) = N_{sweeps}\, \bar{f}_k(t),$$ so that $\{\bar{f}_0(t), \ldots, \bar{f}_{N_{assumed}}(t)\}$ forms a binomial distribution. The significance of the difference between the observed distribution of counts $\{L_0(t), L_1(t), L_2(t), \ldots, L_{N_{assumed}}(t)\}$ and the expected distribution $\{\bar{L}_0(t), \bar{L}_1(t), \bar{L}_2(t), \ldots, \bar{L}_{N_{assumed}}(t)\}$ was ascertained using the χ2 test. This χ2 test typically yielded p-values that fluctuated between 0 and 1, with values sometimes near 0. However, one must keep in mind here that one or only a few p-values lower than a predefined threshold (e.g. 0.05) do not imply significance, because repeated testing was conducted at every time point. Moreover, it must be noted that $p_{open}(t)$ as well as $f_k(t)$ and $\bar{f}_k(t)$ are auto-correlated signals, because their temporal variations are small and progressive rather than large and abrupt from sample to sample. For this reason, classical correction methods (such as the Bonferroni correction) cannot be applied. In the presence of a truly significant interaction, however, one can expect the p-value to remain lower than a predefined threshold for a substantial period of time. Thus, we considered the interaction to be significant only when p remained below 0.05 for at least 0.5 ms (50 samples).
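The binomial comparison can be condensed into a short sketch (illustrative only; function and variable names are our own):

```python
# Sketch: at one time point, compare the observed open-channel level counts
# with the counts expected for identical, independent channels (chi-square).
import numpy as np
from scipy.stats import binom, chisquare

def independence_test(open_counts, n_assumed):
    """open_counts: open-channel number in each sweep at this time point."""
    n_sweeps = open_counts.size
    L_obs = np.bincount(open_counts, minlength=n_assumed + 1)
    p_open = open_counts.mean() / n_assumed            # fraction of open channels
    f_exp = binom.pmf(np.arange(n_assumed + 1), n_assumed, p_open)
    _, p = chisquare(L_obs, n_sweeps * f_exp, ddof=1)  # one parameter estimated
    return p

# Per the text, this is applied at every sample; only sustained runs of
# p < 0.05 (at least 0.5 ms) would count as evidence of interaction.
```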
The difference between the distributions $\{f_0(t), f_1(t), f_2(t), \ldots, f_{N_{assumed}}(t)\}$ and $\{\bar{f}_0(t), \bar{f}_1(t), \bar{f}_2(t), \ldots, \bar{f}_{N_{assumed}}(t)\}$ was also quantified using the difference between the Shannon entropies (Shannon, 1948) of these distributions as follows: $$S(t) = -\sum_{k=0}^{N_{assumed}} f_k(t) \ln \frac{f_k(t)}{\binom{N_{assumed}}{k}}, \qquad \bar{S}(t) = -\sum_{k=0}^{N_{assumed}} \bar{f}_k(t) \ln \frac{\bar{f}_k(t)}{\binom{N_{assumed}}{k}}, \qquad \Delta S(t) = S(t) - \bar{S}(t),$$ where $S(t)$ and $\bar{S}(t)$ represent Shannon's entropy of the observed and expected distributions, respectively. The entropy difference $\Delta S(t)$ quantifies the interaction at any time point based on the information that is lost by assuming independent (i.e. non-interacting) channels. If the channels do not exhibit any interaction, $\Delta S(t)$ is expected to be 0; otherwise, $\Delta S(t)$ is expected to be negative. The binomial coefficients in the denominators take into account that there are $\binom{N_{assumed}}{k}$ possible arrangements of k open channels among a total of $N_{assumed}$. Because the channels are identical, each arrangement then has the same probability. The entropy is maximized by the binomial distribution. To quantify the strength of the interaction between channels, we calculated (for $k \in \{1, 2, \ldots, N_{assumed}\}$) the relative difference between $f_k$ and $\bar{f}_k$ in a small time window (0.4 ms, unless specified otherwise) centred around the peak open probability (the peak of $f_{N_{assumed},open}$). Specifically, we calculated the means $\mu_k$ and $\bar{\mu}_k$ of $f_k$ and $\bar{f}_k$, respectively, within this time window, and quantified the effect as the relative change of $\mu_k$ with respect to $\bar{\mu}_k$: $$\varepsilon_k = \frac{\mu_k - \bar{\mu}_k}{\bar{\mu}_k}.$$ The presence of an interaction will lead to $\varepsilon_k \neq 0$ for some of the k values. For the case of two channels with cooperative gating, we expect $\varepsilon_1$ to be negative and $\varepsilon_2$ to be positive. Even with more than two channels, we always expect $\varepsilon_1$ to be negative.
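Both effect-size measures admit a compact sketch (illustrative only):

```python
# Sketch: Shannon entropy difference (with binomial coefficients in the
# denominators) and relative differences eps_k around the peak open probability.
import numpy as np
from scipy.special import comb

def entropy(f, n):
    """S = -sum_k f_k * ln(f_k / C(n, k)), skipping empty levels."""
    k = np.arange(n + 1)
    nz = f > 0
    return -np.sum(f[nz] * np.log(f[nz] / comb(n, k)[nz]))

def delta_entropy(f_obs, f_binom, n):
    return entropy(f_obs, n) - entropy(f_binom, n)   # 0 if independent, < 0 otherwise

def epsilons(f_obs_win, f_binom_win):
    """Window means of f_k and f_bar_k -> eps_k = (mu_k - mu_bar_k) / mu_bar_k.
    Inputs have shape (time points in window, levels)."""
    mu = f_obs_win.mean(axis=0)
    mu_bar = f_binom_win.mean(axis=0)
    return (mu - mu_bar) / mu_bar
```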
The approach presented above detects deviations of the distribution of the $f_k$ values from the binomial distribution expected in the absence of interactions. However, this approach does not provide any direct information regarding the tendency of channels to synchronize their openings or closings (coupled gating). To gain such information, we implemented an approach based on the work of Chung and Kennedy (1996). We consider a collection of $N$ identical channels that can be either shut or open at arbitrary predefined times $t_1$ and $t_2$ during the voltage clamp protocol, with $t_1 < t_2$. By counting the number of sweeps in which there are $i$ channels open at time $t_1$ and $j$ channels open at time $t_2$ and by dividing this number by the number of sweeps in which there are $i$ channels open at time $t_1$, we can obtain an estimate of the transition probability (during that time interval) from a configuration with $i$ open channels to one with $j$ open channels. These probabilities can be arranged as an $(N+1) \times (N+1)$ matrix as

$$A_{t_1 \to t_2} = \begin{pmatrix} r_{0\to 0} & r_{0\to 1} & \cdots & r_{0\to N} \\ r_{1\to 0} & r_{1\to 1} & \cdots & r_{1\to N} \\ \vdots & \vdots & \ddots & \vdots \\ r_{N\to 0} & r_{N\to 1} & \cdots & r_{N\to N} \end{pmatrix},$$

where $r_{i\to j}$ is the conditional probability (or the fraction of observations) of having $j$ channels open at time $t_2$ when there were $i$ channels open at time $t_1$. For the particular case with only one channel ($N = 1$), we have

$$V_{t_1 \to t_2} = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix},$$

where $\alpha$ and $\beta$ can be understood as opening and closing probabilities over the interval from $t_1$ to $t_2$. Note that, in general, for channels having more than one open or more than one shut configuration (as is the case for Na+ channels), the matrices $A_{t_1 \to t_2}$ and $V_{t_1 \to t_2}$ as well as the values of $\alpha$ and $\beta$ depend on both $t_1$ and $t_2$ and not only on the time interval $\Delta t = t_2 - t_1$. The reason for this is the non-stationary behaviour of our collection of channels, which, in their ensemble average, exhibit a transient behaviour (activation and inactivation). Chung and Kennedy analysed only stationary recordings, which made it possible to pool together all transition counts for all possible intervals of one sampling period; in our case, however, such pooling cannot be conducted because of non-stationarity.

Chung and Kennedy (1996) described in detail how to derive $\bar{A}$ from $V$ (i.e. from $\alpha$ and $\beta$) for any $N$ using Kronecker products and further matrix operations under the assumption that the channels are independent. Importantly, this derivation does not require stationarity. For example, for $N = 2$,

$$\bar{A}_{t_1 \to t_2} = \begin{pmatrix} (1-\alpha)^2 & 2\alpha(1-\alpha) & \alpha^2 \\ \beta(1-\alpha) & (1-\alpha)(1-\beta)+\alpha\beta & \alpha(1-\beta) \\ \beta^2 & 2\beta(1-\beta) & (1-\beta)^2 \end{pmatrix},$$

where the overbar indicates the assumption of independence and distinguishes it from the matrix $A_{t_1 \to t_2}$ estimated from experimental data. The key step is then to find the values of $\alpha$ and $\beta$ that yield the $\bar{A}_{t_1 \to t_2}$ that best fits the observed $A_{t_1 \to t_2}$. As a measure of the quality of the fit, we used $\lVert A_{t_1 \to t_2} - \bar{A}_{t_1 \to t_2} \rVert$, the Frobenius norm of the difference between the two matrices, and minimized it using the algorithm of Nelder and Mead (1965). By examining the entries of $A_{t_1 \to t_2}$ and $\bar{A}_{t_1 \to t_2}$, information can then be obtained regarding whether transitions between configurations with given numbers of open channels are favoured or not by a possible interaction. For example, for $N = 2$, $r_{0\to 2} > \bar{r}_{0\to 2}$ suggests coupled openings while $r_{2\to 0} > \bar{r}_{2\to 0}$ suggests coupled closings during the interval from $t_1$ to $t_2$. In our concrete application, we selected a predefined interval $\Delta t$ of 0.05 ms and computed $A_{t_1 \to t_2}$ and $\bar{A}_{t_1 \to t_2}$ with $t_1$ sliding along the voltage clamp protocol and $t_2 = t_1 + \Delta t$. Then, we inspected whether plots of corresponding entries of $A_{t_1 \to t_2}$ and $\bar{A}_{t_1 \to t_2}$ vs. $t_1$ overlapped or not.
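A compact sketch of this comparison for $N = 2$ follows (our illustration, not the authors' code): the observed matrix is estimated by counting, the independent expectation is built from $\alpha$ and $\beta$ as in the matrix above, and the fit uses the Nelder-Mead implementation in SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def transition_matrix(idealized, t1, t2, n=2):
    """r_{i->j}: fraction of sweeps with j channels open at t2 among the sweeps
    with i channels open at t1 (NaN if no sweep had i channels open at t1)."""
    A = np.full((n + 1, n + 1), np.nan)
    for i in range(n + 1):
        sel = idealized[:, t1] == i
        if sel.any():
            for j in range(n + 1):
                A[i, j] = np.mean(idealized[sel, t2] == j)
    return A

def independent_matrix(a, b):
    """Expected matrix for two identical, independent channels."""
    return np.array([
        [(1 - a)**2,   2*a*(1 - a),             a**2],
        [b*(1 - a),    (1 - a)*(1 - b) + a*b,   a*(1 - b)],
        [b**2,         2*b*(1 - b),             (1 - b)**2],
    ])

def fit_alpha_beta(A_obs):
    """Find (alpha, beta) minimizing the squared Frobenius norm of the difference."""
    def cost(x):
        a, b = np.clip(x, 0.0, 1.0)
        d = A_obs - independent_matrix(a, b)
        return np.nansum(d * d)               # NaN entries are ignored
    res = minimize(cost, x0=[0.1, 0.1], method="Nelder-Mead")
    a, b = np.clip(res.x, 0.0, 1.0)
    return a, b, independent_matrix(a, b)
```

Sliding $t_1$ along the protocol and plotting corresponding entries of the observed and fitted matrices then reproduces the comparison described above.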
Pipeline for the analysis of single-channel recordings

Figure 1 illustrates our automated analysis pipeline. The first three rows in Fig. 1 show example current sweeps recorded in the cell-attached mode from an HEK cell upon repetition of a depolarizing voltage step, the trends identified by the de-trending algorithm, the sweeps after subtracting these trends, and the corresponding sweeps after digital filtering and automated baseline offset. The bottom row in Fig. 1 shows the de-trended sweeps together with their idealization. In this example, maximally one channel was open in the first two sweeps, whereas up to two channels were open together in the third sweep.

The idealization algorithm is illustrated in Fig. 2. Figure 2A shows sweeps from an HEK cell patch that exhibited up to four open channels simultaneously. In the first column of Fig. 2A, the algorithm was run under the assumption of $N_{\text{assumed}} = 1$ channel in the patch. Under this assumption, the single-channel current was clearly overestimated (orange traces) and the algorithm failed to detect multiple opening levels. With $N_{\text{assumed}} = 2$, the algorithm performed better, but failed to idealize the current levels with three or four simultaneous openings. Performance was further improved with $N_{\text{assumed}} = 3$ (third column), whereby the simultaneous opening of four channels was still poorly detected. In contrast, optimal performance was achieved with $N_{\text{assumed}} = 4$ (fourth column). Figure 2B shows the convergence of both the identified single-channel current and the mean squared difference (residual) between the ensemble average current computed directly from all the sweeps of this experiment (516 successfully recorded sweeps before the seal was lost) and the ensemble average of the idealized currents. The performance of the algorithm was not improved by increasing $N_{\text{assumed}}$ beyond 4. Thus, there were at most four channels open simultaneously in this experiment. Figure 2B illustrates the precise reconstruction of the ensemble average current by our algorithm, with a low residual for $N_{\text{assumed}} = 4$.

Scarce evidence of cooperative channel interactions in the HEK cell expression system as well as in isolated cardiomyocytes

Idealized currents output by the automated algorithm were then used to ascertain whether cooperative functional channel interactions were present in our cell-attached patch clamp recordings. Specifically, we ascertained whether the distribution of the number of open channels vs. time deviates from the binomial distribution that would be predicted in the absence of interactions. Figure 3 illustrates an experiment with an HEK cell expressing human Nav1.5 channels. In this experiment, 160 successive sweeps were recorded upon application of the voltage protocol before the seal became unstable and was eventually lost. Figure 3A shows example sweeps and Fig. 3B shows the idealization of the 160 sweeps in the form of a colour map. This map shows that there was no manifest rundown of the preparation. While most channel activity occurred at the $L_1$ level (cyan), there were occasional simultaneous openings of the two channels ($L_2$ level, magenta). Figure 3C shows the observed fractions $f_1$ and $f_2$ of sweeps exhibiting, at every individual time point, one or two open channels, together with the fractions $\bar{f}_1$ and $\bar{f}_2$ that would be expected without channel-channel interactions (independent channels). Figure 3D shows the corresponding single-channel open probability $p_{\text{open}}$ (i.e. the fraction of open channels, assuming that the channels are identical), which peaked at 0.16 at time $t = 2$ ms. There was no large difference between the observed ($f_1$ and $f_2$) and expected ($\bar{f}_1$ and $\bar{f}_2$) fractions. In a window of 0.4 ms centred about the peak open probability, $f_1$ was only slightly larger than $\bar{f}_1$ while $f_2$ was less than $\bar{f}_2$ (coloured arrows in Fig. 3C). Quantitatively, $\varepsilon_1$ amounted to 0.11 (positive) while $\varepsilon_2$ was -0.72 (negative), which in fact argues against cooperative gating. The difference in Shannon's entropy between the observed and expected distributions fluctuated between 0 and -0.025 and remained altogether small (Fig. 3E). The $\chi^2$ test comparing these distributions yielded p-values that fluctuated erratically between 0 and 1 (Fig. 3F), and the duration of the longest segment during which p remained <0.05 was 0.13 ms, near the peak of $p_{\text{open}}$. In summary, the results of this experiment do not reveal any strong or significant interaction between the channels.

Figure 4 depicts an experiment and the corresponding analysis for an isolated murine ventricular myocyte, in a manner similar to that in Fig. 3. The membrane patch, containing two Na+ channels, was stepped to a membrane potential of -40 mV. Figure 4A shows examples among the 100 successfully recorded sweeps. The full set of 100 idealized sweeps is shown as a colour map in Fig. 4B. As shown in Fig. 4C, the observed fractions of sweeps with respectively one and two open channels did not manifestly differ from the fractions expected based on the binomial distribution, as can be appreciated from the overlap of the corresponding solid and dotted traces. Peak open probability reached its maximum of 0.43 near $t = 0.5$ ms (Fig. 4D). Quantitatively, $\varepsilon_1$ was -0.01 while $\varepsilon_2$ was 0.02. Similarly to the experiment shown in Fig. 3, the entropy difference fluctuated between 0 and -0.025 (Fig. 4E).
The p-value of the $\chi^2$ test fluctuated between 0 and 1 (Fig. 4F), and the duration of the longest segment with p < 0.05 was 0.06 ms, in a narrow interval occurring at a time clearly falling into the macroscopic inactivation process (reflected by the time course of the single-channel open probability $p_{\text{open}}$), but comprising only a small segment of inactivation. Thus, there was no significant cooperation (or antagonism) between the channels.

An experiment with an isolated rabbit ventricular myocyte is shown in Fig. 5, in which currents from two Na+ channels (Fig. 5A) were recorded from the patch and idealized (Fig. 5B). As shown in Fig. 5C, contrary to the anticipation of finding $f_1 < \bar{f}_1$ and $f_2 > \bar{f}_2$ based on the hypothesis of cooperative gating, the observed $f_1$ was larger than the expected $\bar{f}_1$ and $f_2$ was smaller than the expected $\bar{f}_2$ in the period preceding the peak of open probability (corresponding to macroscopic activation shown in Fig. 5D, see coloured arrows in Fig. 5C), with $\varepsilon_1 = 0.05$ and $\varepsilon_2 = -0.01$. This behaviour was similar to that shown in Fig. 3. The entropy difference between the distributions remained small (Fig. 5E), between 0 and -0.015, and the duration of the longest segment with p < 0.05 for the $\chi^2$ test was 0.20 ms (Fig. 5F). Thus, there was no manifest functional interaction between the channels.

Figure 6 illustrates that our analysis can also be applied to more than two channels. In Fig. 6, an HEK cell patch with four human Nav1.5 channels was examined. The protocol yielded 516 utilizable sweeps (Fig. 6A) before the seal was lost. Figure 6B shows the proportion of sweeps that exhibited one to four open channels vs. time ($f_1$ to $f_4$) and the corresponding proportions based on the binomial distribution ($\bar{f}_1$ to $\bar{f}_4$). In the presence of more than two channels, the interpretation of these distributions is slightly different from that with only two channels. Under the hypothesis of cooperative gating, we now anticipate finding $f_1 < \bar{f}_1$ and $f_4 > \bar{f}_4$, while no clear anticipation can be made for $f_2$ and $f_3$. The data of Fig. 6B and C indeed show that $f_1$ was less than $\bar{f}_1$ around the peak of $p_{\text{open}}$ ($\varepsilon_1 = -0.04$), while $f_4$ was then more than $\bar{f}_4$ ($\varepsilon_4 = 1.28$). Moreover, $f_3$ was also larger than the expectation ($\varepsilon_3 = 0.22$), while $f_2$ was slightly smaller ($\varepsilon_2 = -0.06$). Thus, the fraction of observations with three or four open channels was larger and the fraction with one or two open channels was smaller than the expectation from the binomial distribution. Therefore, there was some synergy between the channels. However, the entropy difference again remained between 0 and -0.015 (Fig. 6D), and the duration of the longest segment with p < 0.05 for the $\chi^2$ test was only 0.40 ms, during early inactivation (Fig. 6E).

Table 1 lists all 16 experiments in which at least 100 sweeps were recorded successfully. The durations of the longest segments with p < 0.05 ranged from 0 to 0.45 ms, and these segments occurred inconsistently during diverse phases (activation, near the peak, inactivation) of the ensemble average current. The brevity of such segments and their inconsistent occurrence thus do not lend support to the idea that Na+ channels interact functionally. Furthermore, $\varepsilon_1$, expected to be negative in the presence of cooperative openings, was positive in some experiments. In addition, $\varepsilon_1$ in our experiments was considerably closer to 0 than $\varepsilon_1 = -0.69$, the value we calculated previously (Hichri et al., 2020) based on the data of Clatot et al. (2017). Considering all the experiments listed in Table 1, $\varepsilon_1$ was 0.018 ± 0.051 (mean ± SD) and was not statistically different from 0 (P = 0.1768, two-tailed Student's t test vs. 0; P = 0.3255, Wilcoxon signed rank test).
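The group-level comparison reported above can be reproduced in outline as follows (the $\varepsilon_1$ values below are placeholders for illustration; the real values are those in Table 1):

```python
import numpy as np
from scipy.stats import ttest_1samp, wilcoxon

eps1 = np.array([0.11, -0.01, 0.05, -0.04, 0.02, 0.03])  # placeholder values only
t_res = ttest_1samp(eps1, popmean=0.0)                    # two-tailed by default
w_res = wilcoxon(eps1)                                    # signed-rank test vs. 0
print(f"t test P = {t_res.pvalue:.4f}; Wilcoxon P = {w_res.pvalue:.4f}")
```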
(Table 1 note: Figure 3 shows results with HEK293/hNav1.5 cell 01 at +40 mV; Figure 4 with mouse cell 07 at +40 mV; Figures 5 and 8 with rabbit cell 01 at +60 mV; Figure 6 with HEK293/hNav1.5 cell 05 at +40 mV.)

This suggests that under our experimental conditions, the cooperative interaction between channels was very weak, if present at all.

Single-channel conductances are typical of voltage-gated Na+ channels

Figure 7 shows single Nav1.5 channel current-voltage relationships observed for the HEK cell human Nav1.5 expression system as well as for the adult murine and rabbit ventricular myocytes. For this analysis, protocols with at least 50 sweeps were used and the single-channel current was obtained using our idealization algorithm. On average, single-channel conductance was comparable for all three species (human: 21.9 pS; mouse: 18.8 pS; rabbit: 23.8 pS) and in the range of values reported in the literature (Benndorf, 1994; Kang et al., 2021; van Bemmelen et al., 2004). These values, together with the clearly positive reversal potentials and the typical activation and inactivation time courses of the ensemble average currents (Figs 2-6), demonstrate that our single-channel recordings were indeed carried by voltage-gated Na+ channels.
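For reference, conductance and reversal potential follow from a straight-line fit of the single-channel current-voltage relationship. A sketch with placeholder values (not our measurements) is shown below:

```python
import numpy as np

V = np.array([-60.0, -50.0, -40.0, -30.0, -20.0])   # test potentials (mV), placeholders
i = np.array([-2.9, -2.7, -2.4, -2.2, -2.0])        # single-channel currents (pA)

slope, intercept = np.polyfit(V, i, 1)              # linear fit: i = slope*V + intercept
conductance_pS = slope * 1e3                        # 1 pA/mV = 1 nS = 1000 pS
v_rev = -intercept / slope                          # reversal potential (mV)
print(f"conductance ~= {conductance_pS:.1f} pS, reversal ~= {v_rev:.1f} mV")
```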
Analysis in the time domain does not reveal coupled gating

The analyses presented so far have examined the distributions of the numbers of simultaneously open channels at individual time points, but provide no information with respect to the question of whether channel openings or closings occur close together in time. To answer this question and further substantiate the absence of coupled gating in our experiments, we developed an analysis based on the approach by Chung and Kennedy (1996). First, we estimated the probabilities of transiting from a configuration with $i$ open channels to one with $j$ open channels ($r_{i \to j}$) between a given time $t_1$ and time $t_2 = t_1 + \Delta t$, with a fixed $\Delta t$ and with $t_1$ sliding along the depolarizing step. Then, we compared these probabilities with those expected according to Chung and Kennedy assuming identical, indistinguishable and independent (non-interacting) channels. As the time interval $\Delta t$, we used 0.05 ms (five samples), which is approximately the time constant of the low-pass filter used. Of note, this analysis requires a sufficient number of sweeps (>400) to provide a meaningful result, and we illustrate it in Fig. 8 for the 498 sweeps recorded from a rabbit cardiomyocyte patch with two Na+ channels (same data as in Fig. 5B). Figure 8 shows that the transition probability $r_{0 \to 2}$ of going from zero to two open channels within the interval $\Delta t$ was not larger (in fact lower) than expected for independent channels, arguing against the idea of coupled openings. Similarly, $r_{2 \to 0}$ was not larger either (in fact lower) than expected for independent channels, arguing against the suggestion of coupled closings. Thus, the results of this analysis contradict the hypothesis of coupled gating. Similar results were obtained for $\Delta t$ values of 0.1 and 0.2 ms.

Figure 8. Analysis of coupled gating between two Na+ channels in a rabbit ventricular myocyte patch. This analysis was conducted on the same set of 498 sweeps with a voltage step to -60 mV shown in Fig. 5B. A, ensemble average current; the grey rectangle corresponds to the interval analysed in B. B, the individual panels (labels) show the estimated transition probabilities $r_{i \to j}$ of going from a configuration with $i$ open channels to one with $j$ open channels between time $t_1$ (after the onset of the voltage step, abscissae) and time $t_2 = t_1 + \Delta t$, with $\Delta t$ = 0.05 ms. Red data points show the probabilities estimated directly from the data and blue data points show the expected probabilities under the assumption that the channels are independent, calculated according to Chung and Kennedy (1996).

As a positive control for the analysis illustrated in Fig. 8, we used data generated by stochastic simulations using our previously published Markovian model of two interacting Na+ channels (Hichri et al., 2020). In this modelling approach, individual channels were represented by the six-state model of Clancy and Rudy (1999), and a 36-state model of a channel pair was constructed by combining every possible state of the first channel with every possible state of the second. Interactions were then represented by modifying the free energies of the combined states and the energy barriers between these states to mimic the published results of Clatot et al. (2017). Specifically, the interaction (called Interaction II in Hichri et al., 2020) produced a strongly increased $f_2$ and a strongly decreased $f_1$ as well as coupled openings and closings (the majority of latencies between coupled openings/closings within 0.01 ms), without a manifest difference in ensemble average current. Figure 9 shows that the observed $r_{0 \to 2}$ was clearly larger than that expected for independent channels up to 1 ms (activation phase), reflecting the coupled openings, while the observed $r_{2 \to 0}$ was larger than that expected up to 1.5 ms (inactivation), reflecting the coupled closings. Furthermore, $r_{1 \to 0}$ and $r_{1 \to 2}$ were also increased, indicating that the situation with only one open channel is less stable. Moreover, $r_{0 \to 1}$, $r_{1 \to 1}$ and $r_{2 \to 1}$ were all decreased, reflecting the fact that the system of two channels was less prone to transition into a configuration with one open channel, again suggesting coupled gating.

Figure 9. Analysis of coupled gating in a Markovian model of two interacting Na+ channels. The same analysis as in Fig. 8 was conducted for a stochastic simulation (1000 sweeps) of a pair of interacting Markovian cardiac Na+ channel models with coupled gating (Hichri et al., 2020). The voltage was stepped to -20 mV at time 0. A, ensemble average current; the grey rectangle corresponds to the interval analysed in B. B, the individual panels (labels) show the estimated transition probabilities $r_{i \to j}$ of going from a configuration with $i$ open channels to one with $j$ open channels between time $t_1$ (abscissae) and $t_2 = t_1 + \Delta t$, with $\Delta t$ = 0.05 ms. Red: probabilities estimated directly from the simulated data. Blue: expected probabilities under the assumption of channel independence.

This positive control analysis underlines the idea that coupled gating was absent in the experiment in Fig. 8, or that the two channels were even prevented from opening or closing together. Our results also demonstrate that the analysis of Chung and Kennedy can be used for non-stationary single-channel recordings.
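How such a pair model can be assembled is sketched below for the independent baseline (our illustration, under the standard assumption that the joint rate matrix of two independent copies of a channel is the Kronecker sum of the single-channel rate matrix with itself; the published interaction then modifies selected entries, i.e. state energies and barriers, which we do not reproduce here). The toy two-state scheme is for illustration only.

```python
import numpy as np

def pair_rate_matrix(Q):
    """Joint transition-rate matrix of two independent copies of a channel
    (Kronecker sum); for a 6-state channel this yields the 36 joint states."""
    I = np.eye(Q.shape[0])
    return np.kron(Q, I) + np.kron(I, Q)

# toy example: a two-state channel, shut <-> open
k_open, k_close = 2.0, 1.0                  # rates (1/ms), placeholders
Q1 = np.array([[-k_open, k_open],
               [k_close, -k_close]])
Q2 = pair_rate_matrix(Q1)                   # 4x4 joint rate matrix
```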
Discussion

In this work, we have developed a novel automatic pipeline to de-trend and analyse single-channel currents from cell-attached patch-clamp recordings with the aim of quantifying functional channel-channel interactions. We performed cell-attached recordings of Na+ currents, not only in HEK293 cells expressing wild-type human Nav1.5 channels but also in adult mouse and rabbit cardiomyocytes. The latter cells permitted us to assess the presence or absence of interactions between Nav1.5 channels from other species in their native cellular environment. In contrast to the findings of Clatot et al. (2017) that voltage-gated Na+ channels exhibit cooperative and coupled gating, we did not uncover any quantitative evidence of such interactions in our recordings, neither in the same HEK293 cell expression system nor in isolated cardiomyocytes. Indeed, the time intervals consistent with cooperativity with p < 0.05 (repeated $\chi^2$ test) when comparing the observed distributions of current levels vs. the expected binomial distributions were too brief and occurred inconsistently during different phases of the ensemble average current. Moreover, when comparing these distributions, we observed that the relative difference for the first level at the moment of peak $p_{\text{open}}$ (our quantitative parameter $\varepsilon_1$) was in most experiments positive or very close to 0 and, on average, not clearly different from 0. This observation does not support the hypothesis of cooperative gating, which is expected to decrease the probability of observing only one open channel. In experiments with two channels, the relative difference for the second level ($\varepsilon_2$) was frequently negative, which also does not support this hypothesis. Furthermore, our refined analysis based on the method of Chung and Kennedy (1996) did not demonstrate any increased propensity of the channels to synchronize their openings and closings. Together, under our experimental conditions, we do not corroborate the presence of coupled gating between Nav1.5 channels.

Our results are supported by several methodological and technical advantages. First, we did not limit our experiments and analysis only to patches containing two channels, because if channels do interact, this interaction would also have been revealed by our parameter $\varepsilon_1$ in the case of three or more channels. Second, all the currents were recorded at a sampling frequency of 100 kHz with only minimal filtering, which allowed us to have a high temporal resolution and thus more accuracy in the analysis. Third, we aimed to obtain large numbers of sweeps in individual patches and did not pool sweeps from different patches. Because different patches may be intrinsically dissimilar due to cellular variability, we thereby avoided possible confounding factors arising from such pooling. Finally and importantly, all our cell-attached recordings were processed and idealized with the same automated pipeline and with a minimal number of manual settings. One strength of our idealization algorithm is that it considers the entire set of sweeps simultaneously rather than sequentially sweep by sweep.

The question thus arises why our results are different from those of Clatot et al. (2017). One possible explanation is that the experimental conditions (including culture conditions), although comparable, were not identical. In particular, we opted for a physiological Na+ concentration (140 mmol/L) in the pipette, while Clatot et al. (2017) used an increased Na+ concentration of 280 mmol/L. The resulting difference in osmolarity may possibly have caused the channels to interact in their experiments.
Another possible explanation is that wild-type Na+ channels only very rarely form functionally interacting pairs and we were unfortunate to miss this phenomenon in our entire set of experiments. If such interactions are indeed very rare (in the wild-type situation), it is then questionable whether they modify cardiac cellular excitability in a substantial and relevant manner.

Comparison to previous studies

The question of whether Na+ channels functionally interact has been a matter of debate since the 1980s. While performing cell-attached and inside-out patch-clamp recordings with a neuroblastoma cell line, Aldrich et al. (1983) observed that there was a higher probability of an even number of channels under the pipette than an odd number, which led to the suggestion that Na+ channels might not be independent entities. However, by analysing the mean durations of periods with only one channel open vs. two channels open simultaneously, they later did not detect any difference between the observed mean periods with two open channels and the predicted mean periods based on the open lifetime of a single channel under the assumption of independent channels (Aldrich & Stevens, 1987). This led them to conclude that there is no inter-channel cooperativity. Kiss and Nagy (1985) also addressed the question of whether Na+ channels interact by performing cell-attached recordings in mouse neuroblastoma cells. They reported that in their recordings, there was a tendency of blank sweeps (without any channel activity) and of sweeps with channel openings (irrespective of whether one, two or more channels opened) to form clusters of consecutive sweeps (Kiss & Nagy, 1985). In patches with three or more channels recorded at positive transmembrane potentials (>10 mV), they demonstrated that sweeps with channel openings tend to form large clusters. However, this finding can be explained by the larger channel open probability at potentials >10 mV (Kiss & Nagy, 1985). In a computational model, Naundorf et al. (2006) showed that by introducing populations of coupled Na+ channels, it was possible to replicate the rapid initiation and variable onset voltage of neuronal action potentials as seen in experiments. However, they did not perform any single-channel recordings to quantify or substantiate this coupling.

Although we did not observe functional channel-channel interactions, our results do not exclude the idea that Na+ channels can form dimers or multimers (biochemical interaction), as reported in previous studies (Clatot et al., 2017; Iamshanova et al., 2022). Our results also do not exclude the possibility that channels start interacting under conditions of cellular stress. For example, Undrovinas et al. (1992) showed that during voltage steps starting from a very negative holding potential (<= -150 mV) and in the presence of the ischaemic metabolite lysophosphatidylcholine, cardiac Na+ channels tend to exhibit synchronized openings and closings. However, the proportion of patches with channels exhibiting coupled gating was low, in the range of a few per cent. Intriguingly, there are also reports that Na+ channels may exhibit negative cooperativity. For example, in experiments with batrachotoxin-modified neuronal channels, Iwasa et al. (1986) observed that the likelihood of having two channels open together was lower than that expected from the binomial distribution.
Conversely, the probability of having only one channel open was higher than the expected one (corresponding to a positive $\varepsilon_1$ in our framework). All these effects reported under stress conditions may be the consequence of a direct interaction between channels, or an indirect one secondary to modifications in the regulatory proteins of the Na+ channel complex.

Clinical implications and translational perspective

While our results do not provide evidence of functional interactions between wild-type Nav1.5 channels, they do not exclude that Na+ channel variants may possibly exhibit functional interactions between themselves or with wild-type channels. This may be relevant in the context of inherited channelopathies, in which variant and wild-type channels are typically co-expressed (Clatot et al., 2018; Ruhlmann et al., 2020; Sottas & Abriel, 2016). Furthermore, a functional interaction may be caused and modulated by a vast array of Na+ channel-associated proteins under specific conditions not met under our experimental conditions. Our analysis pipeline offers the prospect of processing single/multi-channel recordings with minimal human intervention, and the output idealized current can then be used to explore potential cooperativity in variant dimers and multimers of Nav1.5 channels as well as in other channels of the Nav1.X family. If such interactions are identified and if they appear relevant for function, this can pave new ways for the discovery of new therapeutic targets for arrhythmias and neurological disorders.

Limitations

Despite the rigour of our analysis, our work has several limitations that should be discussed. First, our analyses require a large number of sweeps (at least 100, but ideally 1000 or more) to have sufficient power. However, this is inherent to any method examining interactions at the single-channel level due to the stochastic nature of channel gating. Obtaining large numbers of sweeps from the same patch is challenging and frequently limited by the instability of the patch and its seal. Second, performing cell-attached recordings may be associated with the difficulty of controlling the transmembrane potential precisely. Thus, experiments conducted with the same test potential but on different cells may lead to different peak open probabilities, and thus such experiments cannot be directly compared. This was the principal reason why we did not pool sets of sweeps in our analyses. A possible solution would be to use excised inside-out or outside-out patches, but this would occur at the expense of losing the native environment on the intracellular side of the channels, especially the biochemical interaction of the Nav1.5 channels with intracellular anchoring or regulatory proteins (Allouis et al., 2006; Gavillet et al., 2006; Jespersen et al., 2006; Kang et al., 2021; Lemaillet et al., 2003). This aspect is particularly important when one investigates the gating behaviour of the channels in isolated cardiomyocytes. However, excised patches may be less stable than cell-attached recordings. Because Na+ channels activate and inactivate faster at large depolarizing step potentials, we limited our cell-attached recordings to low depolarizing steps between -60 and -20 mV. This was necessary because, at more positive step potentials, a large number of channel openings are completely masked by the capacitive artefact and the de-trending algorithm no longer operates reliably. Moreover, single-channel current amplitudes become smaller with increasing step voltages (see Fig. 7), which also affects the reliability of the idealization procedure.
Thus, one may argue that cooperative gating could be more apparent at higher voltages. At such voltages, however, the openings may merge more due to shorter latencies to first opening and shorter open times, which may then give a false impression of cooperative gating. This underlines the importance of always applying a rigorous, unbiased analysis.

Another important point to note is that one does not know the true number of channels in a patch. It can never be excluded that in a given patch there were more channels than the maximal number of simultaneously open channels observed in the corresponding set of sweeps, raising the question of how underestimating the number of assumed channels $N_{\text{assumed}}$ would affect the results. To answer this question, we repeated the analyses in Figs 3-6 and Table 1 with increasing values of $N_{\text{assumed}}$. Interestingly, increasing $N_{\text{assumed}}$ by one, two or three channels always increased the values of $\varepsilon_1$, which became more and more positive. Since $\varepsilon_1$ is, in our opinion, a suitable marker to ascertain the magnitude of the effect of any putative interaction, these tests do not support the idea that underestimating the true number of channels may have concealed cooperative or coupled gating in our analysis. Moreover, increasing $N_{\text{assumed}}$ had only a minor influence on the calculated intervals during which the p-values of the $\chi^2$ tests were <0.05.

It may be speculated that cooperative interactions appear only in the presence of larger numbers or densities of Na+ channels, as suggested and modelled mathematically in a neuroscience study by Naundorf et al. (2006). The performance of our idealization algorithm decreases with increasing channel numbers because of the additional noise arising from the intrinsic fluctuations of the current flowing through open channels, which blurs the individual current levels. Evidencing interactions at the single-channel level is therefore difficult in the presence of larger Na+ channel clusters. In this context, further insight may be provided by non-stationary mean-variance analysis (Hille, 2001; Sigworth, 1980), an elegant technique in which the relationship between the variance and the mean of the current is fitted by the quadratic function predicted by the binomial distribution expected for independent channels ($\sigma^2 = i\mu - \mu^2/N$, where $i$ is the single-channel current and $N$ the number of channels). The single-channel current is then obtained from the initial slope of the quadratic relationship. However, if channels gate cooperatively, this technique will overestimate the single-channel current. For example, if channel clusters consist of dimers that open and close almost together, then the single-channel current would be overestimated by a factor close to 2. Because we did not find substantial functional interactions in the first place, we did not pursue this aspect further. Finally, in the absence of any experimental positive control data, we had recourse to computer simulations of interacting channels (Fig. 9).

Conclusions

Whether cardiac Na+ channels or channels of the Nav1.X family interact functionally is controversial, and our study will not resolve the debate. However, our study underlines the value of recording as many sweeps as possible to address this question. It also underlines the importance of rigorous and unbiased signal processing and the development and application of appropriate analyses.
We believe that continued research on this controversial question is nevertheless necessary, as discovering situations in which channels do interact may have a profound impact on fundamental notions of physiology, with far-reaching consequences for biomedical applications.
"Alphabet" Selenoproteins: Implications in Pathology

Selenoproteins are a group of proteins containing selenium in the form of selenocysteine (Sec, U) as the 21st amino acid coded in the genetic code. Their synthesis depends on dietary selenium uptake and a common set of cofactors. Selenoproteins accomplish diverse roles in the body and cell processes by acting, for example, as antioxidants, modulators of the immune function, and detoxification agents for heavy metals, other xenobiotics, and key compounds in thyroid hormone metabolism. Although not all the functions of this protein family are known, researchers have described several disorders in their structure, activity, or expression. They concluded that selenium or cofactor deficiency, on the one hand, or polymorphisms in selenoprotein genes and synthesis, on the other hand, are involved in a large variety of pathological conditions, including type 2 diabetes and cardiovascular, muscular, oncological, hepatic, endocrine, immuno-inflammatory, and neurodegenerative diseases. This review focuses on the specific roles of the selenoproteins named after letters of the alphabet in medicine, which are less known than the rest, regarding their implications in the pathological processes of several prevalent diseases and disease prevention.

Introduction

It is now very well known that the history and importance of the implications of selenoproteins in health and diseases began in 1817, when the trace element selenium (Se) was first discovered by the Swedish chemist Jöns Jacob Berzelius, named after the Greek goddess of the Moon, Selene, and originally considered a naturally occurring toxicant. In 1957, this point of view changed thanks to Schwartz's and Foltz's unexpected discovery that selenium prevented liver necrosis in rats. This discovery changed the perception of selenium as a health threat. As time passed, selenium began to be viewed as an essential and beneficial trace element for health. Based on these discoveries, the era of selenoproteins started in 1974, when the American biochemist Thressa Campbell Stadtman added the famous and unique new amino acid selenocysteine (Sec, U) as the 21st naturally occurring amino acid in the genetic code [1]. Sec is cotranslationally inserted into nascent polypeptide chains in response to the UGA codon, otherwise known as a stop codon. For this "magic" to be possible, organisms evolved an intensely researched insertion machinery requiring a cis-acting Sec insertion sequence (SECIS) element [2].
Regarding selenoproteins and the selenoproteome, 25 selenoprotein genes corresponding to 25 selenoproteins have been identified in humans, showing different properties and functions, most broadly classified as antioxidant enzymes. This selenoproteome includes glutathione peroxidases (GPxs), iodothyronine deiodinases (DIOs), thioredoxin reductases (TRxRs), methionine sulfoxide reductases (Msrs), and selenoproteins named after letters of the alphabet (H, I, K, M, N, O, P, R, S, T, V, and W). GPxs are oxidoreductases that are involved in reducing varied hydroperoxides, such as hydrogen peroxide (H2O2) to H2O, using glutathione (GSH). The DIO selenoproteins (DIO1, DIO2, and DIO3) are implicated in regulating thyroid hormones and catalyzing reductive deiodination. TrxRs are essential protein disulfide reductases in cells. Msrs are thiol (or selenol)-dependent oxidoreductases [2]. Several functions of the "alphabet" selenoproteins are listed in Table 1. (Table 1 fragment, only partially recovered: V, specific expression in testes [7]; W, antioxidant [25].)

The functions of selenoproteins are altered in diseases associated with mutations of the SECISBP2 (Selenocysteine Insertion Sequence-Binding Protein 2, SECIS Binding Protein 2) or SEPSECS (Sep (O-Phosphoserine) tRNA:Sec (Selenocysteine) tRNA Synthase) genes. SECISBP2 encodes a protein that represents an essential component of the machinery that carries out the insertion of Sec into selenoproteins. Some mutations generate only clinical phenotypes expressed in specific tissues due to the deficiency of particular selenoproteins, whereas other phenotypes have multifactorial causes [26]. All these patients have similarly low plasma selenium levels, reflecting low SELENOP and GPx3 synthesis, and abnormal thyroid hormone values caused by the diminished activity of deiodinases. Their values are raised for FT4 and reverse T3 (rT3), normal to low for FT3, and normal to high for FSH [27][28][29]. Children have a growth slowdown, intellectual development delay, and motor coordination deficits [30,31]. Several patients have shown progressive muscular dystrophy in axial and proximal limb muscles that is very similar to the phenotype of the SELENON deficiency myopathy [32]. Another phenotype revealed azoospermia due to the compromised latter stages of spermatogenesis caused by a marked deficiency of the testis-expressed selenoproteins GPx4, TXNRD3, and SELENOV [33][34][35][36][37]. Other phenotypes that include increased subcutaneous fat mass, insulin sensitivity, and cutaneous photosensitivity have possible multifactorial origins involving impaired antioxidant and ER stress defense [29]. Studies on patients with SEPSECS phenotypes report serious intellectual and developmental delays, spasticity, epilepsy, and axonal neuropathy. In addition, some patients suffer from autosomal recessive pontocerebellar hypoplasia type 2D (PCH2D), also known as progressive cerebellocerebral atrophy (PCCA), and have optic atrophy and hypotonia with progressive microcephaly caused by the atrophy process [38][39][40][41][42].

This review focuses on the implications of the selenoproteins named after letters of the alphabet, which are less known than the rest. These selenoproteins also play vital roles in the pathogenesis and prevention of many diseases (cardiovascular, gastrointestinal, hepatic, immuno-inflammatory, neurodegenerative, oncological, and muscular diseases, type 2 diabetes, etc.)
as described below. The pathological conditions arise, as mentioned above, due to the deficiency of selenium or cofactors, on the one hand, or polymorphisms in selenoprotein genes and synthesis, on the other hand.

An already large number of studies have shown that selenoproteins are involved in many processes in the organism, such as cellular oxidative stress, ER stress, antioxidant defense, and regulation of the inflammatory and immune response [43][44][45][46], and have essential functions in antioxidant, anti-apoptotic, anti-inflammatory, and other various complex mechanisms [47][48][49].

Various selenoproteins respond to endoplasmic reticulum (ER) stress conditions. The ER is widely distributed in eukaryotic cells and is an essential organelle involved in protein processing and steroid synthesis [50]. When too many unfolded or misfolded proteins accumulate in the ER for a long period of time, this can lead to an imbalance in calcium homeostasis and, consequently, to an ER stress response, which, if it is not well managed, activates the corresponding signaling pathway and induces apoptosis [51]. The ER-resident selenoproteins involved in regulating ER stress include the 15 kDa selenoprotein (Sep15), DIO2 (iodothyronine deiodinase 2), SELENOS, SELENON, SELENOK, SELENOM, and SELENOT [52][53][54][55]. These ER-resident selenoproteins are implicated in ER stress, inflammation, and/or intracellular calcium homeostasis by regulating the calcium flux [56,57]. SELENON acts as a redox cofactor for ryanodine receptors (RyRs) [54], whereas Sep15, a redox enzyme, is associated with the proteins implicated in protein-folding quality control [58].

Implications of Selenoproteins in Cardiovascular Diseases

In selenium deficiency conditions, vascular injury is triggered through multiple mechanisms, such as necrosis, apoptosis, and inflammation [50,59]. Increased selenoprotein expression in vascular endothelial cells may play a protective role by reducing abnormal cell adhesion induced by pro-inflammatory cytokines [60,61]. In addition, the downregulation of SELENOS can effectively prevent the development of cardiovascular diseases, such as atherosclerosis and hypertension [50]. Generally, selenoproteins protect the heart from the accumulation of cholesterol in blood vessel walls by increasing the levels of coenzyme A in myocardial cells and increasing energy production [62].

Studies have shown that selenium deficiency could play an essential role in the pathogenesis of Keshan disease (KD), an endemic cardiomyopathy that leads to heart failure [63][64][65]. The disease was first reported in Keshan County in northeast China in 1935. Similar cases were reported in Nagano Prefecture in Japan and the northern mountains of North Korea in the 1950s. KD occurs because of low body selenium levels, a consequence of the low selenium content of the soil in Keshan County [66,67], and oral selenium supplementation was found to eliminate Keshan disease a long time ago [68]. Regarding Keshan disease, infection with Coxsackievirus B3 (CVB3) is a factor that also contributes to this disease [69,70], but the exact mechanism of selenium involvement remains unclear [71].
Keshan disease is characterized by cardiac arrhythmia, acute heart failure, and congestive heart failure, and it is classified into acute, sub-acute, chronic, and latent KD. Nowadays, acute and sub-acute cases are almost absent. Only chronic and latent cases are reported, but rarely, in many geographical areas. Besides KD, selenium deficiency is also correlated with other cardiovascular diseases, such as cardiomyopathies, atherosclerosis, coronary heart disease, myocardial infarction, and heart failure [43]. A recent study regarding serum selenoprotein P and Keshan disease carried out in Heilongjiang Province in China concluded that the mean serum SELENOP levels of 28 KD endemic counties, meaning 56% of all surveyed endemic counties, were lower than those in all endemic counties and, in a spatial regression analysis, were positively correlated with the per capita GDP [72].

Besides Keshan disease, Chagas disease is caused by low selenium intake and infection with the microbial parasite Trypanosoma cruzi. Some patients infected with this parasite develop cardiomyopathy, a common cause of heart failure in South America [73]. Moreover, patients with Chagas disease tend to develop increased heart dysfunction, which may suggest a protective role of selenoproteins that remains to be fully elucidated [74,75].

SELENOT was shown to prevent free-radical injuries and cell death during ischemia/reperfusion, as SELENOT-derived peptides protect the heart from these processes by inhibiting apoptosis and oxidative stress [52]. In regulating cardiac apoptosis and survival mechanisms during cell stress conditions, ER stress has an essential role. The involvement of selenoproteins in regulating the mechanisms and pathways of the ER stress response represents only one component of that laborious and complex response in the organism. The ER stress induced by misfolded proteins is regulated by SELENOK in association with SELENOS, whereas SELENOM, SELENON, and Sep15 may regulate the cardiac response to ER stress [76]. It is already well known that SELENOK is an ER protein with an antioxidant function in cardiomyocytes, having a high mRNA expression in the heart [77].

Plasma SELENOP supplies cells with selenium, providing the necessary support for the optimal expression of selenoproteins. Moreover, SELENOP reduces peroxynitrite-induced protein oxidation and nitration, as well as lipid and LDL peroxidation, oxidizing TRX (thioredoxin reductase) in return [78].

Clinical studies examining the correlation between selenium status and cardiovascular mortality have provided contradictory data, but low selenium levels correlate with the risk of myocardial infarction [79,80].

Schomburg et al. reported a strong association between low SELENOP levels and the mortality risk associated with all causes, including cardiovascular mortality and a first cardiovascular event. The studies were performed on a large group of Swedish subjects with no history of cardiovascular events [81]. In addition, Schomburg et al.
summarized the hypothesized mechanisms by which SELENOP may modify cardiovascular risk [81,82]. These hypotheses are as follows: SELENOP transports selenium to tissues with the specific uptake receptors ApoER2 or megalin, so selenoprotein biosynthesis increases to play roles in antioxidative defense and in regulating the protein quality-control systems. SELENOP is capable of catalyzing the degradation of phospholipid hydroperoxides by exhibiting GPx (glutathione peroxidase) activity, thereby protecting cell membrane integrity [83] and LDL particles from oxidation [84]. SELENOP reduces peroxynitrite [77] and associates with the extracellular matrix via its heparin-binding domain [85]. SELENOP binds heavy metals, such as Cd, As, and Hg, preventing toxicity in the plasma [86] and reducing oxidative stress. The studies carried out refer to a cohort of Hg-exposed subjects and do not apply to the general population that is not exposed to Hg. A recent study has shown that subjects with high selenium intake and levels were less hypertensive and had fewer strokes and myocardial infarctions than those with low selenium levels (Table 2) [87]. (Table 2 fragment, only partially recovered: Type 2 diabetes mellitus: P [122,123], S [124,125], K [126]; Obesity: P [127,128], S [129], R [130], N, W [131].)

Implications of Selenoproteins in Liver Diseases

Many experiments have demonstrated that selenoproteins are involved in nonalcoholic fatty liver disease (NAFLD), which is nowadays considered the most common chronic liver disease and is associated with serious complications, such as obesity and/or insulin resistance [132]. Studies found that SELENOP levels were positively correlated with insulin resistance and NAFLD, but for serum selenium levels, the conclusions were different [88,132]. Wang et al. have shown that adding 1.0 mg/kg of Se can reduce the liver damage induced by high fat in a NAFLD pig model [89]. Moreover, Zhu et al. identified several upregulated selenoproteins in mild NAFLD liver samples compared to healthy controls, such as SELENON, SELENOP, SELENOT, SELENOW, DIO2, DIO3, GPx4, and GPx5, suggesting that in NAFLD, selenium-related processes are progressively perturbed [90]. In addition, other experiments revealed the essential role of selenoproteins in hepatic function after genetically excluding them in mice, which, under these conditions, developed hepatocellular degeneration and necrosis, leading to early death [133].

The liver secretory selenoprotein SELENOP is related to insulin resistance [134]. Administration of native selenoprotein P impairs insulin signaling and insulin function in both hepatocytes and myocytes. In contrast, the knockdown or knockout of SELENOP enhances the reactivity to insulin and improves glucose tolerance in mice [134]. At the same time, a selective loss of the so-called housekeeping selenoproteins SELENOP, SELENOF, DIO1, and TXNRD1 determined the upregulation of the genes involved in cholesterol biosynthesis and the downregulation of the genes that have roles in cholesterol metabolism and transport, suggesting that these selenoproteins have an effect in favoring hypercholesterolemia [91].

In an article, Stergios A. Polyzos et al.
concluded that the association between Se or SELENOP and insulin resistance, a principal pathogenic factor in NAFLD, remains inconclusive. The results of clinical studies are conflicting, except for those performed in advanced liver diseases, such as cirrhosis or hepatocellular carcinoma, in which lower plasma selenium and SELENOP are consistent findings [135].

Other studies regarding SELENOS have shown that its mRNA level in the liver of pigs fed a high-fat diet can be significantly increased and that its expression is negatively correlated with the apoptosis rate and the symptoms of nonalcoholic steatohepatitis, suggesting that this selenoprotein may be essential in protecting the liver from high-fat-induced damage [89].

It is already known that dietary selenium deficiency can reduce liver selenase activity and, consequently, lead to oxidative stress and thus initiate oxidative stress-related signals [136,137]. In this situation, redox imbalance is induced by regulating selenoproteins at the mRNA and protein levels and by blocking the GSH system while enhancing GSH synthesis and catabolism [137]. In hepatocellular carcinoma, selenium plays an immunomodulatory role by regulating oxidative stress, inflammation, immune response, cell proliferation and growth, angiogenesis, signaling pathways, and apoptosis [136,138]. As shown by Sang et al., the Se concentration was usually low in patients with hepatocellular carcinoma, and enhancing the Se concentration via exogenous supplementation was correlated with a reduction in the number and size of tumors [138].

Recent experiments have demonstrated that in the liver there is a group of proteins called hepatokines, such as selenoprotein P, fetuin-A, and fibroblast growth factor-21 (FGF-21), that directly affect glucose and lipid metabolism, similarly to adipokines and myokines [139].

A serial analysis of gene expression revealed that SELENOP is associated with insulin resistance in humans [140]. Studies have also shown that patients with type 2 diabetes mellitus and those with NAFLD have higher serum levels of this selenoprotein than healthy controls [141][142][143]. Moreover, it was found that salsalate and adiponectin ameliorated palmitate-induced insulin resistance in hepatocytes by inhibiting selenoprotein P via the AMPK-Forkhead box protein O1α (FOXO1α) pathway, suggesting that this action might be a novel mechanism mediating the antidiabetic effects of salsalate and adiponectin [143,144].

Implications of Selenoproteins in Intestinal Diseases

There is strong evidence that Se levels are linked to the incidence and severity of intestinal diseases, which have become very frequent and serious pathologies in the world, including inflammatory bowel disease (IBD) and colorectal cancer (CRC) [145,146]. Inflammatory bowel disease is a generalized term that includes Crohn's disease (CD, regional ileitis) and ulcerative colitis. Selenium reduces intestinal inflammation due to the action of selenoproteins, which have a protective role. In intestinal infections, their actions involve type-3 innate lymphoid cells (ILC3) and T-helper 17 (Th17) cells, which protect the intestinal barrier essential for maintaining physiological intestinal function [147,148]. Inflammation leads to barrier damage by increasing ROS (reactive oxygen species) production, while dietary Se supplementation can reduce their levels [148].
SELENOP is significantly reduced in the serum of Crohn's disease (CD) subjects, and its serum concentration is negatively correlated with CRC risk [76].

Selenoprotein P originates from the colonic epithelium and represents a source of antioxidant-mediated protection against colitis-associated cancer, whereas its downregulation promotes oxidative stress in ulcerative colitis [149]. Intestinal epithelial SELENOP knockdown increases the tumor load and genomic instability in a colitis-associated cancer model, suggesting its important role in the development of colon cancer [147,149].

Moreover, reduced selenium levels promote T-helper 1 (Th1) cell differentiation in patients with Crohn's disease. Selenium supplementation can inhibit Th1 cell differentiation through SELENOW, eliminate cytoplasmic ROS, and relieve symptoms in patients with Crohn's disease [150]. In addition, experiments performed both in vitro and in vivo on Sep15 knockout colon cancer cells or mouse models have shown a reversal of the colon cancer phenotype and a reduction in the number of chemically induced tumors [58,137].

SELENOS and SELENOK have also been implicated in inflammation and IBD [92][93][94]. It has been reported that increased production of cytokines has an inflammatory effect, with a decrease in the expression of SELENOS. Moreover, in the absence of SELENOK, the inflammatory cytokines decrease [93]. These findings must be further investigated.

In IBD, many immune cells, such as macrophages, T-cells, and innate lymphoid cells, are involved in this pathological condition, and studies have shown the important impact of selenium and selenoproteins on the inflammatory signaling pathways implicated in the pathogenesis of this disease. Two transcription factors, nuclear factor-κB (NF-κB) and peroxisome proliferator-activated receptor γ (PPARγ), involved in activating immune cells and also implicated in various stages of inflammation, are impacted by the Se status. In addition, there is a correlation between the levels of NF-κB in the gut and the severity of IBD. Before resection surgery for Crohn's disease, histological colon samples revealed a correlation between NF-κB levels and histological score, where higher levels led to a higher histological score [151]. Because NF-κB is a redox-sensitive transcription factor, it is also regulated by selenoproteins, which possibly act as antioxidants and can alleviate the symptoms of IBD [152]. Studies regarding SELENOP, which has both reductase and peroxidase activities, have shown that it is decreased in IBD. The oxidative stress developed during IBD can lead to the activation of NF-κB, so the selenoproteins SELENOP and GPx2 (glutathione peroxidase 2) have the role and ability of reducing this stress, and this could lead to a decrease in the activation of NF-κB [153].

PPARγ is a key receptor that is highly expressed in the epithelial cells of the colon, second to adipose tissue, and, like NF-κB, has been implicated in the inflammation of the colon [154]. In contrast with NF-κB, whose expression is increased in IBD, in the case of PPARγ, a greater decrease is observed in patients suffering from ulcerative colitis compared to those suffering from Crohn's disease [155].
Selenium plays an essential role in activating PPARγ and its ligands, which are derived from the arachidonic acid pathway of cyclooxygenase activity in macrophages. Selenium can increase both PPARγ and its ligand, the prostaglandin 15d-PGJ2 [93,156], so, eventually, under selenium supplementation, IBD would be significantly ameliorated.

Colorectal cancer (CRC) can be another complication of IBD, and patients suffering from IBD have a high risk of developing CRC. Clinical trials in which Se supplements were administered reported a decrease in the number of colorectal cancer cases compared to patients who were administered a placebo [157]. Oxidative damage to DNA can lead to tumor development; in that case, selenoproteins can decrease the risk of CRC [157], so selenium and selenoproteins can be used as chemoprotective agents, since selenium is involved in regulating apoptosis and proliferation of the intestinal epithelium [153].

Implications of Selenoproteins in Cancer

As many studies have demonstrated, both selenium and selenoproteins play important roles in the occurrence of tumors and the progression of the malignant process [158][159][160][161].

Many selenoprotein gene polymorphisms have been linked to the risk of developing cancer. Polymorphisms in SELENOP, besides GPx2 and GPx4 (glutathione peroxidases), have been implicated in colorectal cancer [95,96]. Sep15 polymorphisms have been related to an increase in lung cancer risk [97]. SELENOS promoter polymorphisms have been linked to gastric cancer [98]. Recent experiments have shown that epistasis between polymorphisms of SELENOS and mitochondrial superoxide dismutase (SOD) is linked to prostate cancer risk [162]. Moreover, changes in the expression of SELENOP, Sep15, GPx1, GPx2, and TrXR1 (thioredoxin reductase 1) have been related to different forms of cancer [161,163].

SELENOK acts as a tumor suppressor in human choriocarcinoma cells because it negatively regulates the human chorionic gonadotropin β subunit (β-HCG) expression, which may be used as a novel therapeutic target for human choriocarcinoma in vitro [99]. In addition, regarding SELENOK, it was found that this selenoprotein is critical in promoting the calcium fluxes that induce melanoma progression [100,101].

Numerous analyses were performed using the NPC (Nutritional Prevention of Cancer) trials to determine whether selenium acts as a cancer-preventing agent. One of them referred to the possibility that selenium supplementation could reduce the risk of skin carcinomas. The trials concluded that although the incidence of skin cancer did not differ between the trial groups, the total incidence of cancer decreased, including prostate, lung, and colorectal cancer [171]. The studies confirmed the protective effect of selenium supplementation in preventing prostate cancer [172]. Another recent study, the SELECT (Selenium and Vitamin E Cancer Prevention) study, found no significant decrease in prostate cancer after selenium supplementation. The SELECT study used purified selenomethionine, while the NPC study used selenized yeast [173].

Implications of Selenoproteins in Neurological Diseases

The brain retains selenium even under conditions of dietary selenium deficiency. Selenoproteins are highly expressed in the brain, especially in the cortex and hippocampus [174,175]. Selenoproteins are essential for physiological brain function, and a decline in their function can lead to impaired cognitive function and neurological diseases [175][176][177][178].
ROS actions and damage take place in neurodegenerative disorders, such as Alzheimer's disease (AD), Parkinson's disease (PD), Huntington's disease (HD), epilepsy, ischemic damage, brain tumors, and exposure to environmental toxins and drugs [178].

Implications of Selenoproteins in Alzheimer's Disease (AD)

Alzheimer's disease, the most common type of progressive dementia, involves the parts of the brain that control thought, memory, and language. AD manifests in memory loss, impaired cognitive function, and changes in behavior and personality [179]. The brains of AD patients accumulate abnormal amounts of extracellular amyloid plaques, consisting of the protein amyloid β, and intracellular neurofibrillary tangles, formed by tau proteins, which affect neuronal functioning and connectivity and result in the progressive loss of brain function. The abnormal interaction of β-amyloid 42 with copper, zinc, and iron induces peptide aggregation and oxidation in AD. Amyloid β degradation is mediated by extracellular metalloproteinases, neprilysin, insulin-degrading enzyme (IDE), and matrix metalloproteinases. In their autopsy studies, Dorothea Strozyk et al. found a strong inverse correlation between cerebrospinal fluid β-amyloid 42 and cerebrospinal metals, such as copper, zinc, iron, manganese, and chromium, with no association with selenium or aluminum. Moreover, a synergistic interaction of elevated copper and zinc with lower cerebrospinal fluid β-amyloid 42 levels was also found [180].

Most cases of AD are late-onset and progress with age [181].

Studies have shown that several autosomal dominant mutations can result in early-onset AD. One of these mutations is in presenilin-2, an enzyme involved in processing amyloid precursor protein [182].

It is believed that SELENOM might play a suppressive or protective role in AD because, in a mouse model overexpressing the human presenilin-2 mutation, the levels of brain SELENOM were reduced [102].

Another selenoprotein, SELENOP, is abundant in neurons and ependymal cells of the human brain [187]. SELENOP expression in the brain increases with age, suggesting a probable role of SELENOP in decreasing oxidative stress [188]. Studies found that the genetic deletion of SELENOP results in a decrease in central-nervous-system-associated selenium levels, suggesting that other selenoproteins compensate for the SELENOP deficiency and that maintaining basal brain selenium levels is probably a priority for the available selenium in the body [181,189]. Selenoprotein P deficiency causes subtle deficits in spatial learning acquisition and memory and severely disrupts synaptic plasticity in area CA1 of the hippocampus. Researchers concluded that it is difficult to discern whether these effects are due to SELENOP itself or to the loss of selenium transport to the brain [190].

Bellinger et al. investigated the expression of SELENOP in the post-mortem human brain, discovered a unique expression pattern of SELENOP within the center of neuritic (dense-core) plaques, and found a co-localization of SELENOP with plaques and neurofibrillary tangles, which suggests a possible role of SELENOP in reducing the oxidation accompanying plaques [191].

SELENOP is highly influenced by dietary selenium, so selenium supplementation may play a direct neuroprotective role by increasing SELENOP expression [192]. Several studies have even suggested that selenium supplementation can decrease amyloid toxicity in cell cultures and animal models [180,193].
Considering oxidative stress, a hallmark of Alzheimer's disease, SELENOP, owing to its prominent antioxidant role, might act in AD by protecting neurons against oxidative damage or by transporting selenium so that other antioxidant selenoproteins can be synthesized. SELENOP contains two His-rich regions that are high-affinity binding sites for transition metals, suggesting a possible role in blocking metal-mediated β-amyloid 42 aggregation and the subsequent generation of ROS (reactive oxygen species) [103]. In addition, studies found that SELENOP inhibits tau aggregation through its two His-rich domains and disassembles preformed tau aggregates induced by the presence of Cu+/Cu2+ [194]. These two His-rich regions of SELENOP associate with the acidic tail of α-tubulin via an ionic interaction, suggesting that SELENOP may be involved in microtubule events associated with the maintenance of cell polarity, intracellular transport, and cell division and migration [175,195].

Implications of Selenoproteins in Parkinson's Disease (PD)

Parkinson's disease (PD) is a neurodegenerative disorder characterized by a loss of motor control, caused mainly by a dramatic loss of dopaminergic neurons in the midbrain substantia nigra [196,197]. Before cell loss, Lewy bodies are formed, which are intracellular inclusions of insoluble protein aggregates of ubiquitinated α-synuclein [198]. Symptoms of PD include rigidity, bradykinesia, resting tremor, flexed posture, "freezing", and loss of movement control and postural reflexes, with mood changes and cognitive impairments occurring in the later stages of the disease. Parkinson's disease is the major cause of Parkinsonism, a clinical syndrome comprising combinations of the motor problems mentioned above [196].

The substantia nigra and putamen have higher selenium concentrations than other brain regions [149]. Selenium may play an important role in PD by reducing oxidative stress via selenoproteins [149]. Plasma selenium was found to decrease in PD [199]. An explanation might be that there is intense selenium utilization for selenoprotein synthesis in the brain, possibly to prevent further oxidative damage. SELENOP is found together with presynaptic terminals in the striatum. Besides SELENOP, GPx4 is also decreased in the substantia nigra of patients with PD [104]. Moreover, glutathione levels in the midbrain are decreased before clinical symptoms appear, so GPx function is impaired, promoting oxidation [200,201].

Loubna Boukhzar et al.
found that SELENOT plays a major role in protecting dopaminergic neurons against oxidative stress because, according to their studies, its loss enhanced the neurotoxin-induced degeneration of the nigrostriatal system, decreased dopamine secretion, and impaired motor function. These studies provided the first evidence that SELENOT is involved in the nigrostriatal pathway, and the first demonstration of a selenoprotein maintaining the functionality of the dopaminergic system and preserving motor function under oxidative stress conditions [105]. Previous studies had shown that only a few selenoproteins, particularly TrxR (thioredoxin reductase), can protect neuronal cells [202,203]. SELENOT exerts an oxidoreductase activity like TrxR through its thioredoxin-like fold, so it represents an important new component of the thioredoxin system, localized in the ER, in addition to the cytosolic TrxR1 and mitochondrial TrxR2 [204]. Experiments using quantitative PCR, immunochemical, and Western blot analyses revealed that SELENOT expression is significantly increased in PD mouse models, both in vitro and in vivo. The researchers concluded that SELENOT acts as a gatekeeper of redox homeostasis in the nigrostriatal pathway that is essential for physiological dopamine secretion and, therefore, for maintaining motor function under oxidative stress conditions. Moreover, the oxidoreductase activity in the nigrostriatal pathway, from the substantia nigra pars compacta to the caudate putamen, prevents rapid-onset motor impairments in mouse models of PD [105]. Alongside Boukhzar et al., Bellinger et al. reported an altered expression of SELENOP and GPx4 in surviving nigral cells and dystrophic putamen dopaminergic fibers in Parkinson's disease patients, suggesting that different selenoproteins may be useful as complementary biomarkers of PD [105].

Implications of Selenoproteins in Epilepsy

Epilepsy is a chronic neurological disease characterized by periodic episodes of abnormal electrical activity (seizures) that cause temporary interruptions in normal brain function. The types of seizures vary and are clinically classified into partial epilepsy syndromes, with a specific location, and generalized epilepsy syndromes, which spread throughout the brain [205]. In generalized syndromes, seizures typically originate simultaneously in both cerebral hemispheres, whereas in partial syndromes, seizures originate in one or more foci but can spread throughout the brain. Epilepsies are also classified according to their etiology as idiopathic or symptomatic. Idiopathic forms develop from recurring unprovoked seizures, present no apparent neurological problems, have an unknown cause, and may be influenced by genetic factors. Symptomatic epilepsies are sporadic and characterized by multiple seizures, and they have many causes, such as inborn cellular and anatomical brain abnormalities and impaired metabolic brain processes [206].
A considerable number of studies have demonstrated an inverse correlation between serum selenium levels and epileptic seizures [207,208]. In infants, studies have also shown that low serum selenium levels lead to seizures and neurological disturbances [208]. Even in the case of febrile seizures, which are not uncommon in childhood, there is an inverse correlation with serum selenium levels, suggesting a preventive role of selenium against certain types of epilepsy [209]. In addition, selenium deficiency promotes the risk of seizures in childhood epilepsy [207,210,211]. However, a recent study demonstrated decreased serum selenium and zinc in patients with idiopathic intractable epilepsy, independent of nutritional intake [212]. Epilepsy may increase the utilization of selenium even when intake is adequate, probably to support the antioxidant activity of GPx and other selenoproteins in preventing the cytotoxicity of seizures. This hypothesis is supported by the increased expression of SELENOW, GPx1, and TrxR1 observed in the excised brain tissue of patients with severe epilepsy requiring surgery [106].

Epilepsy, ischemia, and brain trauma may trigger the initiation of a cascade of free radicals and the activation of pro-apoptotic transcription factors, with consequent neuronal loss [213].

The knockout of SELENOP increases seizures under selenium deficiency, while brain-specific knockout of all selenoproteins leads to severe seizures [107].

Implications of Selenoproteins in Muscle Diseases

Selenium deficiency causes muscle disorders, observed in both humans and animals, especially in regions with low selenium levels in the soil. Selenium deficiency causes a myopathy with weakness and muscle pain. White muscle disease (WMD) is a muscle disorder that develops in farmed regions where livestock are raised on land with low selenium levels [214]. The muscles of affected animals appear paler than normal and may show distinct longitudinal striations or a distinct chalky appearance due to abnormal calcium deposition. This disease can affect both skeletal and cardiac muscles, where SELENOW is highly expressed. SELENOW was the first selenoprotein described to be linked to a muscular disorder [108]. SELENOW is less abundant in the muscles of WMD animals. The sarcoplasmic reticulum of WMD muscles has impaired calcium sequestration, resulting in the calcification of skeletal and cardiac muscle tissue. Studies have also revealed that SELENOW is complexed with glutathione in the cytosol through a covalent linkage to one of its cysteine residues. SELENOW is named after white muscle disease, and its levels are upregulated in response to exogenous oxidants in muscle cells [109,110].

The term "muscular dystrophy" includes several muscular disorders characterized by the slow degeneration of muscle tissue [111].
Several of these muscular disorders have genetic causes. One such muscular dystrophy, termed multi-minicore disease, is a recessively inherited form characterized by multiple small lesions and cores scattered throughout the muscle fiber on muscle biopsy and by the clinical features of a congenital myopathy [112]. Although there is genetic heterogeneity with clinical variability, the classic phenotype is easily recognizable via spinal rigidity, early scoliosis, and respiratory impairment. Multi-minicore disease occurs due to recessive mutations in the selenoprotein N gene (SEPN1), whereas recessive mutations in the skeletal muscle ryanodine receptor gene (RYR1) have been associated with wider clinical features, such as ophthalmoplegia, distal weakness, and wasting or predominant hip girdle involvement, resembling central core disease (CCD). In CCD, there may be a histopathologic continuum at biopsy, with multiple larger lesions ("multicores") due to dominant RYR1 mutations [112,113]. The role of SELENON in these diseases remains elusive because its exact function is still incompletely known. One mutation causing multi-minicore disease involves the loss of a selenium-response element (SRE), a cis element found in some selenoprotein mRNAs in addition to the SECIS element. The SRE is localized within the RNA-coding region following the UGA codon. An SRE mutation prevents read-through, leading to early termination of translation [114].

Ryanodine receptors are channels in the sarcoplasmic reticulum responsible for the redox-sensitive, calcium-stimulated release of calcium from intracellular stores [215]. These receptors potentiate calcium signals that may be initiated from membrane calcium channels and receptors or via other calcium store channels, for example, InsP3-sensitive channels [216].

Regarding glucose tolerance in muscles, adenosine monophosphate-activated protein kinase (AMPK) is a mediator of the regulatory activity of SELENOP, which makes SELENOP a potential future therapeutic target in type 2 diabetes mellitus [224].

Implications of Selenoproteins in Inflammation and Immune Response

SELENOK is one of the selenoproteins essentially involved in calcium flux, T-cell proliferation, and neutrophil migration in immune cells, and it also protects cells from ER-stress-induced apoptosis [91]. In regulating immunity, SELENOK acts as a cofactor of enzymes involved in key post-translational modifications of proteins by enhancing their catalytic efficiency, and it also plays a biochemical role through antioxidant activity and protein repair [225].

Selenoprotein S is also involved in the immune response. SELENOS is an ER membrane protein that interacts with the ER membrane protein Derlin and with VCP (p97, valosin-containing protein), a cytosolic ATPase [226][227][228]. VCP is translocated to the ER membrane by binding to SELENOS during endoplasmic-reticulum-associated degradation (ERAD) and is responsible for the retro-translocation of misfolded proteins from the ER, where they are tagged with ubiquitin and then transported to the cell proteasome [181,226]. Because of this action, SELENOS is also named VIMP, for VCP (valosin-containing protein)-interacting membrane protein [115].
Selenoprotein K is another known p97(VCP)-binding selenoprotein, and the expression of both SELENOK and SELENOS is increased under ER stress. The translocation of p97(VCP) to the ER membrane is regulated by SELENOS, not by SELENOK, but p97(VCP) is required for the association of SELENOK with SELENOS. In addition, the interaction between p97(VCP) and SELENOK is regulated via SELENOS. The degradation of ERAD substrates requires p97(VCP), and its translocation from the cytosol to the ER membrane is essential to shuttle ERAD substrates to the proteasome. SELENOK and SELENOS are essential for forming the ERAD complex, alongside p97(VCP), in the response to ER stress [116,117].

Polymorphisms of the SELENOS promoter can lead to downregulated expression of SELENOS and cause the accumulation of misfolded proteins in the ER. Subsequently, ER stress can induce NF-κB, which can upregulate inflammatory cytokines and lead to apoptosis [181].

The expression of SELENOS in liver cells is regulated by inflammatory cytokines and extracellular glucose [229,230]. Studies reveal that polymorphisms significantly impair the expression of selenoprotein S, for example, a change from G to A at position -105 in the SELENOS promoter [231]. Moreover, subjects with this polymorphism have increased plasma levels of the inflammatory cytokines TNFα and IL-1β, and the polymorphism is correlated with an increased incidence of stroke in women [232], pre-eclampsia [233], coronary heart disease [234], and gastric cancer [103]. The -105 polymorphism exhibits epistasis with the -511 polymorphism of IL-1β, and together they increase the risk of rheumatoid arthritis, although neither polymorphism alone correlated with rheumatoid arthritis [235]. On the other hand, other studies did not find correlations with stroke [236], autoimmune disorders [237], or inflammatory bowel disease [112].

In the inflammatory phase of wound healing, soluble factors such as chemokines and cytokines are released to promote the phagocytosis of debris, bacteria, and damaged tissues. Recent studies have revealed that SELENOS has an essential role in this inflammatory phase. As mentioned in this review, SELENOS is a transmembrane ER protein whose functions include removing misfolded proteins from the ER lumen, protecting cells from oxidative damage, and contributing to ER-stress-induced apoptosis. The depletion of SELENOS by siRNA increases the release of the inflammatory cytokines IL-6 and TNF-α, so SELENOS may regulate cytokine production in macrophages and subsequently participate in controlling inflammatory responses [14].
A real-time PCR study revealed lower SELENOP mRNA expression in the whole blood of Kashin-Beck disease (KBD) patients compared to healthy controls, with higher expression in articular cartilage tissue. These findings suggested that the decreased SELENOP mRNA expression in KBD reflected the selenium-deficient condition of KBD patients. Under selenium deficiency, glutathione (GSH) metabolism is impaired and glutathione peroxidase activity decreases, leading to increased oxidative damage in bone and articular cells [121]. KBD is a particular type of chronic osteoarthritis, endemic in the northern part of China, Russia, and a few northern areas of North Korea. KBD mainly affects the knee, ankle, and hand joints, causing articular cartilage damage and chondrocyte apoptosis. KBD has traditionally been classified as a non-inflammatory osteoarthritis, but recent studies demonstrate that inflammation plays an important role in its development and evolution. Recently, it was found that KBD is no longer an exclusively endemic disease, because non-endemic factors such as age, altered biomechanics, joint trauma, and secondary osteoarthritis can also cause it. It was concluded that the advanced stages of joint complications and failure in KBD are tightly linked to the immune response, and the subsequent stage of chronic inflammation leads to the progression of the disease [238].

Implications of Selenoproteins in Type 2 Diabetes Mellitus

SELENOP, which originates from the liver, is essential for supplying extrahepatic tissues with the selenium required for the biosynthesis of selenoproteins. It has been shown that increased plasma SELENOP levels are associated with hyperglycemia in patients with type 2 diabetes mellitus (T2DM) [122,123]. Moreover, it was recently found that high SELENOP plasma levels are also associated with hepatic steatosis and fibrosis in NAFLD patients [88]. Insulin sensitivity in the liver and skeletal muscle was improved in SELENOP-deficient mice, while intraperitoneal injection of SELENOP impaired insulin signaling, suggesting that SELENOP is a hepatokine capable of inducing insulin resistance [101,210].

Several studies have revealed that, in humans, plasma SELENOP levels are saturated at a daily intake of approximately 50-100 µg Se and do not increase further upon ingesting larger doses of selenium supplements [101,210,239]. High plasma levels may be an accompanying effect of insulin resistance and hyperglycemia, because research has shown that hepatic SELENOP biosynthesis is suppressed by insulin and increased by high glucose concentrations [101,240,241]. Hepatic transcription of selenoprotein P is regulated similarly to that of a gluconeogenic enzyme, through the transcription factors FoxO1 and HNF-4α together with the co-activator PGC-1α, and may also become dysregulated in hyperglycemia and insulin-resistance states [101,241,242].
Many researchers suggest that suppressing SELENOP may provide a therapeutic route to treating T2DM and its vascular complications [243]. Metformin (an antidiabetic drug) phosphorylates and inactivates FoxO3a via the activation of AMPK (AMP-activated protein kinase) and suppresses SELENOP expression in hepatocytes [244]. Eicosapentaenoic acid (an ω-3 polyunsaturated fatty acid) downregulates SELENOP by inactivating sterol-regulatory-element-binding protein-1c, independently of the AMPK pathway [245]. Moreover, a novel molecular strategy of neutralizing SELENOP with the monoclonal antibody AE2 reportedly improved glucose tolerance, insulin secretion, and insulin resistance in vivo and in vitro [246].

Serum SELENOS, mostly secreted by hepatocytes, has been associated with T2DM and its macrovascular complications (macroangiopathy) [125,247]. SELENOS has antioxidant and anti-inflammatory functions and contributes to maintaining ER morphology and regulating ER stress, suggesting that it may be involved in the occurrence and development of T2DM [248,249]. Moreover, several genetic polymorphisms in the SELENOS gene were demonstrated to be related to T2DM, serum insulin levels, blood glucose levels, and the homeostasis model assessment of insulin resistance [247,250].

SELENOK protects cells from apoptosis induced by ER stress and is essential for promoting Ca2+ flux during immune cell activation [19,251]. Experiments performed in vitro have shown that the expression of SELENOK, as well as DIO2 (deiodinase 2), was downregulated by about 10% under high glucose levels [126].

Recent studies have discovered a role of SELENOV in protection against the oxidative damage of reactive oxygen and nitrogen species (ROS/RNS) mediated by ER stress [252,253].

Implications of Selenoproteins in Obesity

Adipocyte SELENOP is significantly influenced by the pro-inflammatory stimuli involved in the pathogenesis of obesity and its associated metabolic disorders. Studies have shown that differentiated adipocytes responded to omentin exposure with a significant decrease in SELENOP expression and in the pro-inflammatory response [127]. Omentin is a novel adipokine with insulin-sensitizing effects, produced especially by visceral adipose tissue, and its circulating levels are decreased in insulin-resistant conditions, such as obesity and diabetes. Other studies concluded that SELENOP gene expression in 3T3-L1 adipocytes was reduced in response to TNF-α or H2O2 treatment, indicating a link between adipose tissue inflammation and oxidative stress in obesity and altered selenoprotein metabolism [128]. Moreover, negative regulation of SELENOP levels also occurs with the hypoxia-induced increase in the pro-inflammatory cytokines IL-6 and MCP1 [128].

Researchers have demonstrated a significant decrease in SELENOP gene expression in the adipose tissue of obese (ob/ob) mice, HFD-fed animals, Zucker rats, and insulin-resistant patients [254]. When leptin treatment was administered to ob/ob mice, there was a shift toward lipid catabolism genes involving the inhibition of SREBP1 (sterol regulatory element-binding protein 1) downstream signaling, as well as the upregulation of SELENOP and SREBP1 expression in the liver [255]. In contrast, SELENOP expression was found to be two-fold higher in the obese adipose tissue of OLETF rats [256].
SELENOS expression in adipose tissue is increased in obese patients and is significantly correlated with anthropometric measures of obesity and insulin resistance. Studies performed in vitro using isolated human adipocytes have demonstrated that insulin upregulates its expression, suggesting a link between insulin resistance and SELENOS expression in obesity [129].

Methionine sulfoxide reductases (Msrs) may also be involved in the development of obesity and/or its associated metabolic impairments. In experiments studying diet-induced obesity, a high-fat diet with 45% of calories from fat reduced both MsrA and MsrB (predominantly MsrB1, also known as SELENOR) activities, as well as their protein abundance, in VAT (visceral adipose tissue) but not in SAT (subcutaneous adipose tissue) [130].

It has also been demonstrated that obesity upregulates the hepatic expression of MsrB1, SELENON, SELENOP, and SELENOW, as well as GPx4, in diabetic patients by 33-35% compared to non-obese subjects [131].

Conclusions

Members of the selenoprotein family named after letters of the alphabet require, like other selenoproteins, a common set of cofactors for their synthesis, which depends on dietary selenium intake. The energy consumed for their synthesis suggests their great importance for physiological cell function, a consequence of their quite varied roles. "Alphabet" selenoproteins are also involved in numerous diseases and pathological conditions, including type 2 diabetes and cardiovascular, muscular, brain, liver, neurodegenerative, immuno-inflammatory, and gastrointestinal diseases, as described in this article. Consequently, it is of great importance to highlight the medical correlations and implications of these "alphabet" selenoproteins, which are less well known than the rest of the selenoproteins and could otherwise risk remaining overlooked, both for establishing quicker prevention, on one hand, and for the diagnostic and therapeutic management of diseases, on the other. Given the numerous and varied roles of these selenoproteins, strategies targeting the expression of specific selenoproteins could and should be considered in the future for therapeutic and preventive management. Although the functions of several selenoproteins remain unknown, further research into and understanding of each member of the whole selenoprotein family, including the "alphabet" selenoproteins, will be essential in establishing the health benefits of selenium.
2023-09-22T13:06:24.915Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "2ef17a9a227fad07253688114f350f21fafec95b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/20/15344/pdf?version=1697694633", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "692f13873afb3afca8310f81505cf5dee74f66ac", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
15152717
pes2o/s2orc
v3-fos-license
Macau, World Capital for Gambling: A Longitudinal Study of a Youth Program Designed to Instill Positive Values This study investigated the effectiveness of a positive youth development program for Chinese Secondary 3 students in two schools, who had been followed up since their entry to Secondary 1. A mixed research method was carried out using a pre- and post-test pre-experimental design and a focus group for the participants. The subjective outcome evaluations, which included participants' perceptions of the program, the program instructors, the benefits of the program, and overall satisfaction, were positive. The longitudinal data from the objective outcome evaluation showed some notable improvements, and the overall effect of the program was also found to be positive for newcomers in the junior secondary years. The focus group interviews revealed mostly positive feedback in terms of the students' general impressions of the program, with the majority of participants perceiving benefits to themselves from the program. The findings offer positive evidence of the effectiveness of the program.

INTRODUCTION

Macau is a small city located near Hong Kong in South East Asia, famous for tourism and its growing gaming industry. In 2011, the estimated population of Macau was 557,400; it has a comparatively young population, with those aged between 10 and 24 making up 21% of the total (1). The Macau Government opened up gaming licensing in 2002, leading to the rapid development of this industry, which has generated a considerable increase in revenue contributing to the economic growth of Macau, but which may also have potentially negative influences on adolescents. Attracted by the employment opportunities and perks in the gaming industry, many adults work in casinos, which require them to work long and irregular hours. One potential implication of this development was highlighted in a government report that clearly identified the problem of poor communication between parents and their children and its adverse effect on adolescent development (2).

The Youth Indicators published by the Education and Youth Affairs Bureau in 2009 revealed that many teenagers lacked social norms, and that their participation in social functions/affairs and their sense of belonging to Macau had deteriorated in comparison with an earlier study conducted in 2006 (3). This research also reflected a dramatic increase in youths' stress levels, originating from pressure at school as well as family conflicts. A recent study revealed that over half of 744 respondents (54%) agreed that gambling was a common phenomenon among young people in Macau. It is suggested that building up positive social norms and a sense of morality in adolescents may result in a more harmonious society in Macau (4). A well-structured local youth program can potentially help adolescents' positive growth and ensure that they are better prepared for future challenges in life. At present, youth studies and theoretically sound, comprehensive programs for adolescent positive growth and development in Macau are lacking (5,6). In this study, positive youth development is simply defined as "the growth, cultivation, and nurturance of developmental assets, abilities, and potentials in adolescents" (7). A review by Catalano et al. (8) of 77 programs for positive youth development in North America found that only 25 were successful in terms of positive changes in some objective outcome indicators.
However, 15 positive youth development constructs were identified in one or more of the goals of these successful programs. These constructs included: (1) promotion of bonding, (2) cultivation of resilience, (3) promotion of social competence (SC), (4) promotion of emotional competence (EC), (5) promotion of cognitive competence (CC), (6) promotion of behavioral competence (BC), (7) promotion of moral competence (MC), (8) cultivation of self-determination (SD), (9) development of self-efficacy (SE), (10) promotion of spirituality, (11) promotion of beliefs in the future (BF), (12) development of a clear and positive identity, (13) recognition for positive behavior, (14) provision of opportunities for prosocial involvement (PI), and (15) fostering of prosocial norms (PN). With financial support from The Hong Kong Jockey Club Charities Trust, through a joint research project involving five universities in Hong Kong, a well-tested and comprehensive positive youth development program, "P.A.T.H.S.," has been developed (9,10). The word "P.A.T.H.S." denotes Positive Adolescent Training through Holistic Social Programs, and the project consists of two tiers of programs. The Tier 1 Program is a universal positive youth development program in which students in Secondary 1-3 participate, normally with 20 h of training in the full program or at least 10 h of training in the core program in each grade. The Tier 1 Program incorporated the 15 positive youth development constructs identified from the existing successful programs (8): Bonding (BO), SC, EC, CC, BC, MC, SE, PN, Resilience (RE), SD, Spirituality (SP), Clear and Positive Identity (ID or CPI), BF, PI, and Recognition for Positive Behavior (PB). All these constructs emphasized helping students to learn and develop personal autonomy based on moral principles, and to make independent and critical judgments, via a happy, healthy, and stimulating teaching and learning process during their schooling. Hong Kong and Macau share a similar Chinese culture; therefore, the well-tested comprehensive positive youth development program "P.A.T.H.S.," developed for Chinese students in Hong Kong, was modified and adapted for use in Macau. With support from the Education and Youth Affairs Bureau, a local research team was formed by the author and his colleagues, who modified the program so that the content reflected local terminology, such as Macau citizens instead of Hong Kong citizens and the government structure of the Macau Special Administrative Region instead of the Hong Kong Special Administrative Region, together with some indigenous heritage and customs, to suit the local context (11). The team also monitored the implementation of the program and evaluated its effectiveness for three consecutive academic years after completion. Two secondary schools were invited to participate as pilot schools to run the program, starting with their Secondary 1 students. Training for teachers and school social workers was also organized in both Hong Kong and Macau. Positive findings of the Secondary 1 and 2 program evaluations during the academic years 2009-2010 and 2010-2011, respectively, were reported (11,12). Macau provides 15 years of free non-tertiary education, with direct promotion from primary to secondary school without any public examination. An individual admission examination is required after secondary school education for entry into local universities. There is a rather relaxed and less competitive learning atmosphere in Macau's education system.
It is not uncommon to see students repeating grades in primary and secondary schools if they find their academic performance unsatisfactory. It is roughly estimated that about 20% of students repeat grades in the junior secondary years in many schools (13). In this study, the effectiveness of the Macau version of the Tier 1 Program of "P.A.T.H.S." for Secondary 3 students in two pilot schools during the academic year 2011-2012 is evaluated. Since the Secondary 3 classes of the pilot schools included new students who were either repeating the Secondary 3 class or had transferred from other schools, the effectiveness of the program on these new participants was also examined.

MATERIALS AND METHODS

A mixed research method was adopted for the triangulation of data. It consisted of a quantitative approach using a pre- and post-test pre-experimental design, together with a qualitative approach using a participant focus group.

PARTICIPANTS

The two main sources of data were self-reported questionnaires and focus group discussions. The study participants included all Secondary 1 students in the two chosen schools, School A and School B, totaling 232 at the start in 2009. These students were followed for 3 years, up to and including Secondary 3. When this group of students was promoted to Secondary 2, 53 dropped out of their classes and 79 new students joined the program, either repeating Secondary 2 or transferring from other schools. When the group was promoted to Secondary 3, 41 dropped out and 48 new students were added (Table 1). Regarding the focus group discussions, two group interviews were conducted, one consisting of 8 participants selected randomly from School A and the other of 8 participants from School B. There were 256 and 240 students who participated in the Wave 5 (W5) pre-test and Wave 6 (W6) post-test, respectively. Among the 256 students in the W5 pre-test, 48 (18.75%) were new participants. After discarding the questionnaires that were invalid (mainly due to missing data), 236 questionnaires were successfully matched for analysis. Among these 236 students, 142 had joined the program in Secondary 1, 50 in Secondary 2, and 44 in Secondary 3 (Table 1). There were no significant differences in socio-demographic background between the new students (who joined in Secondary 3) and the old students (who joined in Secondary 1 or 2) using the chi-square test, except for age (Table 2). The mean age of the new students (mean ± SD = 16.06 ± 1.17) was higher than that of the old students (mean ± SD = 15.00 ± 1.13).

INSTRUMENTS

The two sets of questionnaires used in Years 1 and 2 were used again when the students were promoted to Year 3. The components of these questionnaires are described below.

LIFE SATISFACTION SCALE

Life satisfaction is another important indicator of positive youth development (15). The five-item Life Satisfaction Scale (LIFE) was developed by Diener et al. (16) to assess a person's global judgment of his/her own quality of life. The Chinese version was translated by Shek (17) with acceptable psychometric properties. The Cronbach's alpha in the present study is 0.80. A higher LIFE score indicates a higher level of life satisfaction.

BEHAVIORAL INTENTION SCALE

This five-item scale was used to assess the adolescents' behavioral intention to engage in problem behavior, including drinking, smoking, taking drugs, having sex, and gambling. The scale was developed by Shek et al. (7) and has good reliability (α = 0.84).
The Cronbach's alpha in the present study is 0.71. A higher Behavioral Intention Scale (BI) score indicates a higher behavioral intention.

SCHOOL ADJUSTMENT MEASURES

The scale was developed by Shek (18) and has good reliability (α = 0.73). The school adjustment measures (SA) consist of three items. Two assess the participant's perception of his/her academic performance. The third assesses the participant's perception of his/her conduct. Previous studies have shown these measures to be temporally stable and valid (19,20). The Cronbach's alpha in the present study was 0.84. In line with the other measures, a higher scale score indicates a higher level of school adjustment in this study.

SUBJECTIVE OUTCOMES SCALE (FORM A)

The Subjective Outcome Evaluation Form (Form A) was designed by Shek and Siu (21). The Form consists of 39 items and 4 open questions, divided into 5 parts. The first asks for the participants' views on the program (10 items). The second examines the participants' views of those involved in delivering the program, including teachers and/or social workers (10 items). The third section examines the participants' perceptions of the effectiveness of the program (16 items). Three further items ask about the likelihood of their joining a similar program in the future, their overall satisfaction with the program, and whether they would recommend it to others. The final part consists of four open questions on the things that participants learned and appreciated most, as well as their opinions about the instructors and areas for improvement. The Form has good reliability across all 39 items (α = 0.99, mean inter-item correlation = 0.80) (22). The present study has a Cronbach's alpha of 0.98, with a mean inter-item correlation of 0.55.

PROCEDURES

Quantitative data were collected at two time-points in each year. First, before the program started, the pre-test self-reported questionnaires were completed within 1-2 weeks after the start of the school year. The second data collection time-point occurred at the end of that academic year, after the students had finished the program. The pre-test data Wave 1 (W1) and post-test data Wave 2 (W2) of Secondary 1 were collected and analyzed (11). The pre-test data Wave 3 (W3) and post-test data Wave 4 (W4) of Secondary 2 were analyzed and the results were published (12). These data were used as the baseline for the longitudinal assessment of the Secondary 3 year. The pre-test data W5 and post-test data W6 of Secondary 3 were collected in this study to assess the effectiveness of the Secondary 3 program. At the pre- and post-tests, the participants were invited to complete a valid and reliable questionnaire, including measures of positive youth development, life satisfaction, school adjustment, adolescent problem behaviors, and demographic information. An identical questionnaire was used in the pre- and post-tests. After completion of the program each year, an evaluation questionnaire was also completed by the participants to assess their satisfaction with the course and the perceived benefits of the program, providing subjective outcome measures for evaluation. The focus group interviews took about 1 h each. The principal investigator conducted the focus group interviews using a semi-structured interview guide provided by the Hong Kong research team.

DATA ANALYSIS

Descriptive statistics were used to present the subjective outcome measures.
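Since an internal-consistency coefficient is reported for every scale above, a minimal sketch of how such a coefficient is typically computed from an items-by-respondents score matrix may be helpful. The sample responses and the helper name cronbach_alpha below are hypothetical illustrations, not part of the study's actual analysis pipeline.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 students answering a 5-item scale on a 1-6 Likert range.
responses = np.array([
    [4, 5, 4, 5, 4],
    [3, 3, 4, 3, 3],
    [5, 6, 5, 6, 5],
    [2, 2, 3, 2, 2],
    [4, 4, 4, 5, 4],
    [5, 5, 6, 5, 5],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # items that move together give a high alpha
```

On consistently answered items like these, the coefficient approaches 1, which is the pattern behind the high alphas reported for the scales above.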
The paired-samples t-test, one-way ANOVA, and repeated-measures ANOVA were performed to examine differences between the scale scores, providing objective outcome measures for evaluation. Regarding the qualitative data, the content of the interviews was audio-taped with the consent of the participants. It was then transcribed by the research assistant and checked for accuracy by the principal investigator. The raw data of the two groups were analyzed together through coding. After comparison of all the codes, relevant themes were developed.

SUBJECTIVE OUTCOME EVALUATION

Table 3 shows the participants' perceptions of the program and its instructors. Since the Likert scales used for this section of the questionnaire ranged from 1 to 6, the proportion of responders who endorsed 1, 2, or 3 (disagreement) was summed and compared to the proportion who endorsed 4, 5, or 6 (agreement). The results showed that most of the students evaluated the program positively. With reference to views of the course, the most positive response was that there was much peer interaction among students (84.1% agreed responses; M = 4.53). The least positive one was "overall, I have a very positive evaluation of the program" (73.1% agreed responses; M = 4.03). With regard to views of the instructors, the most positive response was that the instructors encouraged the students to participate in the activities (92.9% agreed responses; M = 4.71). The least positive one was that the instructors' teaching skills were good (79.2% agreed responses; M = 4.30).

As far as the participants' perception of the benefits of the Tier 1 program was concerned, since the Likert scales used for this section of the questionnaire ranged from 1 to 5, the proportion of responders who endorsed 1 or 2 was summed as unhelpful responses and compared to the proportion who endorsed 4 or 5 as helpful responses. After discarding the middle score of 3, the scale is symmetrical for comparison. The results showed that participants gave more helpful than unhelpful responses on every item. The most helpful response was that the program had enhanced their SC (53.9% helpful responses; M = 3.48). The least helpful one was that it had strengthened their bonding with teachers, classmates, and their family (39.9% helpful responses; M = 3.23) (Table 4). Regarding other aspects of the evaluation, most of the participants indicated that they would recommend the program to friends with needs and conditions similar to their own (74.2%), and over 60% of them would consider joining similar courses in the future (64.8%) (Table 5). On the whole, a vast majority of them were satisfied with this course (87.1%). The qualitative analysis of the four open-ended questions will not be reported in this paper.

OBJECTIVE OUTCOME EVALUATION

For the objective evaluation of the Secondary 3 program, a paired t-test was used. Table 6 highlights the changes between the pre-test (W5) and the post-test (W6) for participants who had joined the program in Secondary 3. There was a significant improvement in the CYPDS score, and 8 of the 15 subscales (RE, SC, RP, EC, CC, MC, SD, PN), as well as SA, were found to have significant positive changes. For the students who had joined the program before Secondary 3, significant positive changes were also found in the CYPDS score, three of the subscales (RE, CC, ID), and LIFE (Table 7). However, the SA score was found to have decreased significantly.
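To make the preceding comparisons concrete, the following hedged sketch shows the two core tests used in the objective evaluation, a paired-samples t-test on matched pre-/post-test scores and a one-way ANOVA across the three entry cohorts, using simulated scores rather than the study's data; the variable names (w5, w6, s1-s3) and distribution parameters are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical matched pre/post CYPDS mean scores for 44 newly joined
# Secondary 3 students (the real study data are not reproduced here).
w5 = rng.normal(4.2, 0.5, size=44)          # pre-test (Wave 5)
w6 = w5 + rng.normal(0.2, 0.3, size=44)     # post-test (Wave 6), simulated gain

# Paired-samples t-test on the matched scores, as in the W5/W6 comparison.
t_stat, p_value = stats.ttest_rel(w6, w5)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# One-way ANOVA comparing W6 scores across the three entry cohorts
# (joined in Secondary 1, 2, or 3); group sizes follow Table 1.
s1 = rng.normal(4.4, 0.5, size=142)
s2 = rng.normal(4.4, 0.5, size=50)
s3 = rng.normal(4.4, 0.5, size=44)
f_stat, p_anova = stats.f_oneway(s1, s2, s3)
print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")
```

The paired design pairs each student's post-test score with their own pre-test score, which is why matched questionnaires had to be identified before analysis.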
Table 8 shows the differences between W5 and W6 among all the participants. On the whole, significant improvements were found in the CYPDS, eight of the CYPDS subscales (RE, SC, EC, CC, MC, SD, ID, PN), and LIFE. To investigate the outcomes against the length of time spent participating in the program, the W6 data were analyzed using one-way ANOVA. Non-significant results were found for all the scales except BI (F = 3.86, df = 2, 237, p = 0.022) (Table 9). To assess the longitudinal effect of the program from Secondary 1 to 3, a repeated-measures ANOVA was conducted using the W1 (baseline), W2, W4, and W6 data after the participants had completed all 3 years of the program. Significant results were found in the CYPDS (Table 10). The scoring trends of these scales are shown in Figure 1. The scores of CYPDS and LIFE were found to decrease after the Secondary 1 or 2 program, but increased after completion of the Secondary 3 course. On the other hand, significant negative changes were observed in SA and BI over the duration of the program.

FINDINGS OF THE FOCUS GROUP INTERVIEWS

Eight students from School A, four male and four female, participated in one of the group interviews. Two were very quiet and seldom responded to the interviewer. Eight students from School B, six male and two female, participated in the other group interview. All were quite responsive in the group discussion. The qualitative findings were mainly analyzed in two areas: the participants' general impressions of the program and the perceived benefits of the program to themselves. The preliminary analyses were classified into positive and negative comments on the program. Regarding their general impressions, among the 14 who gave feedback, 4 claimed that they felt bored, that there was a lack of time in a session, that they felt tired toward the end of the day when the program was conducted, or that the content had been repeated with little new input; the rest gave positive responses, such as that the content met their daily needs, that it was not as boring as other subjects, and that it was interesting, relaxing, and interactive, with free communication and enjoyable small group discussions. The activities that aroused their interest were games, videos, success stories, experiences shared by instructors, and incentives provided during the course. With reference to the perceived benefits, eight students who gave positive feedback asserted that through the program they had learned how to establish goals for the future, to see things from different angles, to make decisions, to control their emotions, and to develop healthy relationships with others. Some of their narratives were as follows:

There was one session about how to develop your future. I might be a bit puzzled about my own future. So, the worksheet in that session helped me to establish personal goals and dreams. -From a student in School A.

Once I was taught to see things from different angles, I practiced what I had been taught and found that things were really different when you saw them from different perspectives. -From a student in School A.

One session talked about romance. If I have to choose a girl or a boy, it is possible that I will remember the principles from that session, and that they will help me to make the right decision. -From a student in School A.

It is helpful to manage one's emotions. We were taught some techniques, such as eating, listening to music, etc. Listening to music helps to release emotions. It has a calming influence.
-From a student in School B.

My temper has improved somewhat. I remember being taught not to be unhappy due to being angry at others. -From a student in School B.

In our age group, dating is common, and we think it is very important. But we do not know how to manage broken relationships. Some even think of committing suicide. Through this program, we learned that even without love from the opposite sex, we still have friendships and love from our family members. -From a student in School B.

DISCUSSION

In this study, the participants' perceptions of the program and of their program instructors were positive. With regard to their perceptions of the effectiveness of the program and their overall satisfaction, the feedback was also positive. On the whole, the subjective outcome evaluations supported positive perceptions of the program, the program instructors, the benefits of the program, and overall satisfaction with the whole course, and these findings were consistent with previous results from the Secondary 1 and 2 programs in Macau (11,12) and those reported in Hong Kong (22,23). Though the results in Hong Kong were better than those in Macau, it should be noted that it is reasonable to sum the proportion of responders who endorsed 1, 2, or 3 and compare it to the proportion who endorsed 4, 5, or 6, since the two halves of the scale are symmetrical. On the other hand, it is inappropriate to sum the proportion of respondents who endorsed 1 and 2 and compare it to the proportion who endorsed 3, 4, or 5, which may give an exaggerated figure since the halves are not symmetrical. One of the subscales in the Hong Kong study was calculated in this manner, which might present an overly positive picture.

Referring to their views of the instructors, fewer participants agreed that the instructors' teaching skills were good. This might be one of the reasons why some participants claimed that they felt bored when interviewed in the focus groups. There may be several possible reasons for this view. First, participants in Secondary 3 were older; they may have had higher expectations of their instructors. Second, since the same 15 constructs are repeated each year, but with more in-depth exploration for the senior class, more knowledge and skill are required of the instructors in order to integrate and apply the constructs at the different levels. In future, more training should be provided to instructors, and sharing among peers will help enrich the knowledge and experience needed to enhance the teaching skills required to conduct this type of adolescent program. However, looking at the most positive responses in the two main sets of subjective data, most participants agreed that the program provided much peer interaction and that the instructors encouraged their participation, which is internally consistent with their perception that the program enhanced their SC. Besides, the objective outcome evaluation and the qualitative data provide alternative evidence to support a positive view.

With regard to the objective outcome evaluation, our findings showed that the Secondary 3 program was effective for newly joined participants, with improvements in more than half of the subscales. This is consistent with the findings for the Secondary 2 program, which was also effective for new participants (12). Possible reasons may be related to its new, interactive, and non-academic nature. In addition, great improvement was also found in their school adjustment.
The program may have had some effect on their conduct, while repeating Secondary 3 may have helped to improve their academic performance. With reference to the old participants who had joined the program before Secondary 3, positive changes were also found. However, there was a decrease in the school adjustment score in this group; one possible reason may be the higher academic demands placed on them when promoted to the senior class in secondary school.

Viewing the old and new participants as a single group, there was a significant improvement in development and life satisfaction, which is consistent with the results in Hong Kong (24). However, when comparing the Secondary 3 students in Macau (means of CPYDS = 4.44 and BI = 1.56) with the Hong Kong group (means of CPYDS = 4.54 and BI = 1.44) (24), the Hong Kong students showed slightly better results than those in Macau. It would be interesting to compare the "before" score in Hong Kong with the "before" score in Macau to see the real differences and to explore the reasons behind them. This area is therefore worthy of further exploration.

As far as the duration of participation in the program is concerned, whether students joined the Secondary 1, 2, or 3 program, using W6 as the final exit point, no significant differences in the developmental scales or other measures were found, which shows that participants can join at different times with an effective final result. The findings revealed both the short- and long-term effects of the 3-year programs on the participants. In other words, participants can benefit from joining one, two, or all three programs. On the other hand, there was a significant difference in problem behavior intention. Participants joining the program for three consecutive years, starting from Secondary 1, displayed a lower level of intention to engage in problem behavior than students joining in Secondary 2 or 3, which is also consistent with studies in Hong Kong (25) finding that the program can be a protective factor in preventing adolescent problem behavior. It also showed that the program can be more beneficial if students participate at an earlier age. When assessing the longitudinal effect of the program from Secondary 1 to 3, there was a positive result on the developmental scale and life satisfaction, which is consistent with the findings in Hong Kong (24). The findings once again revealed the effectiveness of the modified program in Macau, in line with the positive results of the Secondary 1 and 2 programs (11,12). Although there were negative changes in school adjustment and problem behavior intention, as indicated in the longitudinal study in Hong Kong (25), the program can still be a protective mechanism in delaying adolescent problems.

THE FOCUS GROUP INTERVIEWS

Regarding the general impression of the program, only a few of the participants perceived it to be boring. Most of them found the program more relaxing and interesting than their conventional moral or civics classes or other formal classes. The classes were interactive, with a variety of activities and free communication in small groups. The students enjoyed games and video shows that were up-to-date and related to their daily life. This is consistent with the observation of Shek (24) that only a small portion of participants, approximately 15%, failed to perceive the program as effective.
With reference to the perceived benefits of the program, all participants were positive in their feedback. Generally speaking, benefits were observed at both the personal and interpersonal levels. The focus group observations were generally consistent with both the subjective and objective outcome evaluation findings in this study, with the students moving in a positive direction in various developmental domains, such as establishing personal goals, controlling their emotions, and developing rational thinking and healthy interpersonal relationships. Some suggestions for improving the program were also obtained from the focus groups, such as increasing outdoor activities and extending the time of each session to facilitate more in-depth discussion. On the whole, the program is effective due to its non-academic, informal, and interactive nature, to which adolescents are more receptive in building up their positive potential. On the other hand, more attention should be paid to behavioral intentions regarding drugs, sex, and gambling, which may be shaped by the different messages conveyed through the popularity of the internet and may not be easily discerned by adolescents nowadays.

LIMITATIONS

This study had several limitations. First, only two schools were involved and the sample size was relatively small, raising the potential for sample bias and making any generalization of the findings difficult. Second, the present study was based on a one-group pre-/post-test design, which may not be the most appropriate. Other approaches, such as a randomized controlled trial, which provides a more rigorous design and more insight into the effectiveness of an intervention program, could have been considered. Third, a comparatively large portion of students dropped out of the program, which may have affected the longitudinal observation and the observation of long-term effects. It would be worthwhile to follow up this portion of students to see if there is any change in scores afterward. Nevertheless, the present study contributes to an understanding of the potential benefits of evidence-based youth work, and it is also a ground-breaking scientific study showing the impact of the P.A.T.H.S. program on the holistic development of Chinese adolescents in Macau.

CONCLUSION

The subjective outcome evaluation highlighted positive results from all participants. The effect of the program in the objective outcome evaluation was found to be positive for both the newcomers and the old students in the junior secondary years. There was also overall positive feedback from the focus group interviews. Based on the findings from the subjective and objective outcome evaluations and the focus group interviews, it can be concluded that there is positive evidence of the effectiveness of the Secondary 3 program. In addition, the longitudinal data from the objective outcome evaluation supported an improvement across different time spans in the program, demonstrating that the 3-year P.A.T.H.S. program has both short- and long-term benefits for program participants. Moreover, students who participated in the program for three consecutive years did better, in terms of the intention to engage in problem behavior, than those who took part for only 1 or 2 years. This shows that the Tier 1 program of the P.A.T.H.S. project can help to prevent adolescent problem behavior by promoting positive development.
This program is particularly beneficial to adolescents in Macau, since they are more easily tempted by gambling, Macau being a world capital of gambling. IMPLICATIONS FOR SCHOOL HEALTH There are several implications of this study for school health. First, it demonstrates that participating in a youth development program can have both short- and long-term positive effects on personal growth, and that the program can also be a protective factor in preventing adolescent problem behavior. Second, the interactive and non-academic nature of the program can be more easily accepted by participants, especially Chinese adolescents in Asian countries, where teachers are dominant and directive in the Eastern pedagogical culture and academic performance is greatly emphasized (26). Third, since not every youth development program incorporates a mechanism for evaluation, this study shows the importance of evaluating youth programs and provides evidence for the promotion of health during early adolescence.
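The Macau-versus-Hong Kong comparison suggested above can be illustrated with a standard two-sample test on summary statistics. The sketch below uses the CPYDS means reported in the text; the standard deviations and group sizes are hypothetical placeholders, since the source does not report them here.

```python
# A minimal sketch: Welch's t-test from summary statistics (scipy).
# Means are from the text; SDs and ns are HYPOTHETICAL assumptions.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=4.44, std1=0.60, nobs1=250,   # Macau Secondary 3 (SD, n assumed)
    mean2=4.54, std2=0.60, nobs2=250,   # Hong Kong Secondary 3 (SD, n assumed)
    equal_var=False,                    # Welch's correction for unequal variances
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With the real "before" and "after" scores from both samples, the same call would quantify whether the small observed difference between the two cohorts is statistically meaningful.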
2016-05-12T22:15:10.714Z
2013-11-26T00:00:00.000
{ "year": 2013, "sha1": "0239f13f406b696c6c786e86f6dd0f8fce2c9b07", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2013.00058/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0239f13f406b696c6c786e86f6dd0f8fce2c9b07", "s2fieldsofstudy": [ "Business", "Education", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
86813167
pes2o/s2orc
v3-fos-license
Results of single-stage acetabulum reconstruction and total hip arthroplasty in the management of pelvic discontinuity caused by ununited acetabulum fracture Introduction: Non-union of acetabulum fracture is a common occurrence in developing countries. Pelvic discontinuity due to an ununited acetabulum fracture poses a serious challenge in the management of the patient. Results of ORIF of these fractures are not promising for various reasons. We aim to present results of single-stage acetabulum reconstruction and total hip arthroplasty, the literature on which is lacking. Materials and Methods: We prospectively studied the outcome of single-stage acetabulum reconstruction and total hip arthroplasty in patients with pelvic discontinuity caused by ununited fracture acetabulum of more than 3 months duration. From March 2015 to December 2018, 11 patients (7 males, 4 females, average age 42 years) were treated in our hospital. All cases underwent open reduction internal fixation with a posterior column plate, impaction bone grafting, and uncemented porous coated hemispherical acetabulum cup hip arthroplasty. Results: Union of the columns was achieved in all the cases, with 7 patients (63.6%) showing union at 12 months and the remaining 4 patients (36.4%) taking 18 months to unite. At final follow-up, the mean Harris hip score improved from 54.2 points (range 24-63) to 91.4 points (range 79-95). One superficial surgical site infection and one heterotopic ossification were noted in our study, neither of which had any sequelae on the clinical outcome at the last follow-up. Conclusion: Single-stage acetabulum reconstruction by posterior column plating with impaction bone grafting and uncemented porous coated hemispherical acetabulum cup hip arthroplasty is a genuine option in the management of pelvic discontinuity caused by ununited fracture of the acetabulum. Introduction Acetabulum fractures going on to non-union are very commonly encountered in developing countries, as most of these fractures are either neglected or treated conservatively, which may result in non-union. 1,2 Open reduction and internal fixation of these fractures is associated with difficult mobilization of the caudal fragment due to scarring between fragments, and results are not promising because of repeated indentation of the head, difficulty in achieving acceptable reduction, abnormal callus formation and chondral changes. 3 Total hip arthroplasty is a reliable option for these neglected, ununited fractures, but there are problems such as non-union of the column fracture and medial displacement and rotation of the caudal fragment, which make total hip arthroplasty difficult. The question that arises, therefore, is whether osteosynthesis should be done simultaneously with total hip arthroplasty, or performed first to ensure healing of the column and then followed by total hip arthroplasty, because the stability of the cup depends on the integrity of the column. Two-stage management of pelvic discontinuity associated with total hip arthroplasty is well described in the literature. 4 Studies on single-stage management of acute acetabulum fracture by acetabulum reconstruction and total hip arthroplasty also exist, 5 but literature on single-stage management of pelvic discontinuity caused by ununited fracture acetabulum is lacking. We aim to present our results of single-stage management of pelvic discontinuity caused by ununited fracture acetabulum by acetabulum reconstruction and total hip arthroplasty. 
Materials and Methods This is a prospective study done at King George Medical University, Lucknow from March 2015 to December 2017. Eleven patients with pelvic discontinuity caused by ununited fracture acetabulum of more than 3 months duration were included in this study after obtaining clearance from the university ethics committee. Pelvic discontinuity associated with revision hip replacement surgery, mal-united acetabulum fractures, previously treated failed acetabulum fractures and iatrogenic injury were excluded from the study. Pre-operatively, x-rays of the pelvis with Judet views and CT scans with 3D reconstruction were obtained in all patients. Basic haematological investigations were done to rule out infection. After aseptic preparation, the patient was positioned in the lateral position. A standard postero-lateral approach was used. Haemostasis was achieved. A standard femoral neck osteotomy was done. Exposure of the acetabulum was performed to assess the fracture and the bony defect. Fracture sites were curetted from inside the acetabulum, including the anterior column, posterior column and quadrilateral plate. At this point, the discontinuity between the superior and caudal fragments was assessed and a record was made of the stability of the columns. The posterior column of the acetabulum was exposed by placing bone levers in the sciatic notches while maintaining hip extension and knee flexion to avoid sciatic nerve injury. Thereafter, the caudal fragment was mobilized with the help of a Schanz screw in the ischial tuberosity and the posterior column was reduced to its best possible position. The reduction was temporarily fixed with K-wires and inter-fragmentary screws were applied wherever applicable. The reduction was supported by a posterior column plate extending from the ischial tuberosity to the supra-acetabular region. Two to three cortical screws were used in the distal as well as the proximal fragments. According to the bone defect in the acetabular socket, bone graft was prepared from the femoral head. Using the transverse acetabular ligament as the landmark, serial reaming of the acetabulum was done. The acetabulum was deliberately under-reamed and the defect was re-assessed. Bone graft was applied over the defect and impaction grafting was done in all cases using reverse reaming. The posterior column fracture site was also grafted from outside the acetabulum. A trial acetabulum cup was placed according to the last reamer size. Stability was assessed and the final acetabulum component of the same size was implanted. The implant used in all cases was a hemispherical porous coated acetabulum cup, to avoid sinking of the cup. We took the superior acetabular dome as a reference to avoid medial mal-placement of the cup, because the caudal fragment was already medially displaced in most of the cases. The cup was fixed with 2-3 screws into the intact ilium and posterior column. A standard cemented or uncemented femoral stem was applied, with a 36 mm femoral head. Thereafter, hip joint stability was assessed and closure was done in layers over a drain. A prophylactic antibiotic was given 30 minutes before the incision. Post-operatively, IV antibiotics were given for 5 days, followed by oral antibiotics until stitch removal. In-bed mobilization exercises were done for 6 weeks. Partial weight bearing was allowed at 6 weeks. Full weight bearing was started according to healing of the discontinuity. Post-operatively, x-rays of the pelvis with Judet views were taken to assess radiological union and possible complications. Functional status was assessed by the Harris Hip Score at each follow-up visit. 
Follow-up was done at 6 weeks, 12 weeks and 24 weeks, and thereafter on a six-monthly basis. Results Eleven patients were operated upon during the study. Patient characteristics are depicted in Table 1. Two patients had an associated ipsilateral femoral shaft fracture, for which closed reduction and internal fixation with an intramedullary interlocking nail had been performed at the time of trauma. No patient was lost to follow-up in this series. Radiological results at the last follow-up and comparison of serial radiographs showed that all the cases had union as well as well-fixed implants, without evidence of loosening or mal-orientation. Migration of the cup was present in one patient, although the patient was asymptomatic and the implant was well fixed, as depicted in Fig. 1. Nine out of the 11 grafts (impaction grafting) showed union at three months and had become structurally integrated with the parent bone, as evidenced by trabecular reorientation on serial radiographs. The remaining 2 grafts showed union at six months post-operatively. Union of the columns took longer in our study, with 7 patients (63.6%) showing union at 12 months and the remaining 4 patients (36.4%) taking 18 months to unite. Fig. 2 shows union of the column at 1 year together with the patient's functional status. A superficial surgical site infection was seen in one patient, which was treated with local debridement and IV antibiotics. Heterotopic ossification (HO) was present in one hip (9.1%), where excessive periosteal stripping had been done due to a comminuted posterior wall fragment. However, this had no sequelae on the final clinical outcome. No dislocation, neurological complications or deep venous thrombosis was noticed during the follow-up. Discussion Recently, hip joint replacement has been advocated in many studies of acute acetabulum fractures, especially in elderly patients with impaction of the femoral head or comminution of the dome, where there is a high chance of failure of open reduction and fixation. 6,7 Total hip arthroplasty in fractures of the acetabulum does not produce the same results as in arthritis of the hip joint from other causes, because of the complexity arising from bone defects, bony irregularities, and mal-union or non-union of acetabular fractures. 8 Pelvic discontinuity caused by ununited fracture acetabulum presents many challenges when planning for total hip arthroplasty, as the integrity of the column is of utmost importance for the survival of the arthroplasty. Reduction and maintenance of the fragments until union is one problem. Providing adequate contact between the acetabular cup and host bone is another challenging task. Many techniques, such as plating of the columns with acetabular cup implantation, 9 the cup cage construct, 10 jumbo cups and rings, 11 the custom triflange cup 12 and distraction methods, 13 are described in the literature to treat pelvic discontinuity of various etiologies. Pelvic discontinuity is most commonly encountered in revision hip surgeries. Other causes are traumatic and sometimes iatrogenic during uncemented cup implantation. Each technique has its own indications and complications, but in general, when the stability of the cup is provided by the acetabulum component itself through cages or rings, there is a higher chance of implant failure due to problems of osteo-integration. In our study, we have tried to be more biological by reducing the posterior column as much as possible and applying a compression plate to it. The acetabulum defect was assessed and, according to its morphology, graft was prepared from the femoral head. The defect was grafted, followed by impaction grafting. 
Impaction bone grafting has a crucial role not only in providing a biological environment for acetabulum cup osteo-integration but also in promoting union of the fractures. This has been supported by many studies. 14,15 The shortcoming of our technique is that if the fracture fails to unite, it will lead to failure of the acetabulum cup. The question therefore arises whether it is worthwhile to perform open reduction internal fixation and total hip arthroplasty in a single stage, or to first achieve healing of the fracture and then proceed to total hip replacement in a second stage. Two-stage treatment of old or neglected ununited acetabulum fractures carries the burden of dual surgery, requires more blood transfusion, and makes the second surgery difficult because of extensive fibrosis and scarring. Two-stage treatment is reserved for osteoporotic patients with pelvic discontinuity caused by revision hip surgery, in which first-stage reconstruction of the acetabulum with plate and bone grafting is followed by total hip arthroplasty in the second stage. The biology of such patients is not favourable for single-stage management, owing to the bone defect caused by osteolysis and the decreased healing potential of the discontinuity. 4 In our study, 63.6% of patients united within 12 months. The remaining 36.4% of patients had delayed union, taking 18 months to unite. These results can be set against studies on pelvic discontinuity associated with revision hip arthroplasty treated by various methods, such as that of Berry et al. 16 A limitation of our study is the small sample size. Also, the mean follow-up period is not sufficient to comment on the final outcome, functional status, revision rate and possible long-term complications. Most importantly, no study on ununited acetabulum fractures causing pelvic discontinuity is available in the literature against which to compare our results or to plan future cases of a similar nature. Conclusion Although this study is preliminary, we conclude that single-stage acetabulum reconstruction by posterior column plating with impaction bone grafting and uncemented porous coated hemispherical acetabulum cup hip arthroplasty is a reliable option in the management of pelvic discontinuity caused by ununited fracture of the acetabulum.
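The time-to-union figures reported above (7 of 11 columns united by 12 months, the remainder by 18 months) can also be summarised as a simple time-to-event curve. The following is a minimal, illustrative sketch using the lifelines library; the study itself reported plain proportions, and the per-patient timings below are reconstructed only from the two reported groups.

```python
# A minimal, illustrative sketch: summarise the reported time-to-union data
# (7 columns united by 12 months, 4 by 18 months) as a Kaplan-Meier curve.
# Per-patient timings are reconstructed from the two reported groups only,
# so this is an approximation of the underlying data.
from lifelines import KaplanMeierFitter

months_to_union = [12] * 7 + [18] * 4   # follow-up intervals from the text
event_observed = [1] * 11               # union was achieved in all 11 patients

kmf = KaplanMeierFitter()
kmf.fit(months_to_union, event_observed=event_observed, label="not yet united")
print(kmf.survival_function_)           # fraction still ununited over time
```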
2019-03-28T13:33:14.031Z
2018-12-30T00:00:00.000
{ "year": 2020, "sha1": "d9a617f6953ea811ac4a100a332547be67e73f9d", "oa_license": "CCBYNCSA", "oa_url": "https://www.ijos.co.in/journal-article-file/8094", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ba9a693a0aa075210e4476cbb1c8571d40b0bff5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247097369
pes2o/s2orc
v3-fos-license
A phase II study of chemotherapy in combination with telomerase peptide vaccine (GV1001) as second-line treatment in patients with metastatic colorectal cancer Background: GV1001 is a human telomerase peptide vaccine that induces a CD4/CD8 T-cell response against cancer cells, thereby affording an immunological anti-tumor effect. Here, we evaluated the efficacy and safety of GV1001 in combination with chemotherapy in patients with metastatic colorectal cancer who had failed first-line chemotherapy. Methods: This multicenter, non-randomized, single-arm phase II study recruited recurrent or metastatic colorectal cancer patients with measurable disease who had failed first-line chemotherapy. Patients received GV1001 and chemotherapy concomitantly based on a pre-established schedule. Cytotoxic chemotherapy and targeted agents (bevacizumab, cetuximab, or aflibercept) were allowed to be used at the discretion of the investigator. The primary endpoint was the disease control rate; secondary endpoints were the objective response rate, progression-free survival, overall survival, and safety outcomes. The baseline serum eotaxin level (a potential predictive biomarker of GV1001) was analyzed. To determine whether an adequate immune response had been induced, a delayed-type hypersensitivity test and a T-cell proliferation test were performed. Results: From May 13, 2015 to October 13, 2020, 56 patients with recurrent or metastatic colorectal cancer treated in seven hospitals of South Korea were enrolled. The median patient age was 64 years (range, 29-82 years); 67.9% were men. Of all patients, 66.1% had left-side colorectal cancer and the RAS mutation was present in 25%. The disease control rate and the objective response rates were 90.9% (95% confidence interval [CI]: 82.4-99.4%) and 34.1% (95% CI, 20.1-48.1%), respectively. The median progression-free survival was 7.1 months (95% CI, 5.2-9.1 months) and the median overall survival was 12.8 months (95% CI, 9.9-15.8 months). The most common all-grade adverse events were neutropenia (48.2%), nausea (26.8%), neuropathy (25.0%), stomatitis (21.4%), and diarrhea (21.4%). Immune response analysis showed that no patient had positive delayed-type hypersensitivity test results; antigen-specific T-cell proliferation was observed in only 28% of patients. The baseline eotaxin level was not associated with any efficacy outcome. Conclusion: Although no clear GV1001-specific immune response was observed, the addition of GV1001 vaccination to chemotherapy was tolerable and associated with modest efficacy outcomes. Introduction Colorectal cancer (CRC) is the third most common cancer and the second leading cause of cancer death worldwide [1]. Although the overall mortality of CRC continues to decline if the disease is operable, the survival outcomes of metastatic disease remain dismal. Chemotherapy in combination with targeted monoclonal antibodies has become the main treatment modality for inoperable disease; however, immunotherapy has recently changed the treatment paradigm for metastatic CRCs. After pembrolizumab, an anti-programmed death 1 (PD-1) immune checkpoint inhibitor, exhibited a significant clinical benefit in patients with mismatch repair-deficient (dMMR) CRCs [2], many immune checkpoint inhibitors were investigated. At the 2021 American Society of Clinical Oncology (ASCO) annual meeting, pembrolizumab first-line therapy was reported to be superior to chemotherapy in patients with dMMR CRCs [3]. 
Anti-cancer vaccination is another type of immunotherapy; many vaccines (including peptide, autologous, and dendritic cell vaccines) have been tested in CRC patients. However, no such vaccine has exhibited a clinical benefit thus far [4][5][6]. Telomerase (a telomere-repair enzyme) is expressed in 85-90% of human solid cancers [7]; thus, it is an attractive target for anti-cancer treatment. In normal cells, the telomeric ends of DNA become progressively shortened with repeated cell division; the cells eventually enter replicative senescence [8]. However, cancer cells avoid such senescence, becoming immortal through the reactivation of telomerase. This has a crucial role in the oncogenic transformation of many cancers, including CRCs [9,10]. GV1001 is a 16-amino acid peptide vaccine derived from the human telomerase reverse transcriptase subunit. GV1001 induces a CD4/CD8 T-cell response against cancer cells, yielding an immunological anti-tumor effect [11]. After a GV1001-specific immune response and promising efficacy results were obtained in early-stage clinical studies of patients with pancreatic and non-small cell lung cancers [12,13], a large-scale study evaluating the synergistic effects of GV1001 and conventional chemotherapy in pancreatic cancer patients was conducted, but failed to prove a benefit over chemotherapy alone [14]. However, in the subgroup analysis, patients with a high baseline eotaxin level showed significantly better overall survival (OS) with GV1001 vaccination [15]. At the 2021 ASCO annual meeting, synergistic effects of GV1001 and conventional chemotherapy were reported in pancreatic cancer patients with high eotaxin levels, with a significant improvement in median OS [16]. GV1001 has also been investigated in advanced melanoma [17], B-cell chronic lymphocytic leukemia [18] and cutaneous T-cell lymphoma [19], and has shown modest efficacy outcomes with induction of an immune response. However, to date, GV1001 has not been investigated in patients with advanced CRCs. Most CRCs (approximately 85%) have chromosomal instability (CIN), while the remaining CRCs have a high-grade microsatellite instability (MSI) phenotype, and telomere dysfunction may be considered a major driving mechanism of CIN development [10]. Consistent reports indicate that increased telomerase activity is associated with tumor progression and poor survival [20], and these results provide a theoretical background for investigating GV1001, a telomerase peptide vaccine, in patients with advanced CRCs. In this study, we evaluated the efficacy and safety of GV1001 vaccination in combination with chemotherapy as a second-line treatment for patients with metastatic CRCs. Study design and patient eligibility This study was a multicenter, single-arm, phase 2 trial conducted at 7 hospitals in South Korea. 
Patients were eligible for this study if they fulfilled all of the following criteria: (1) pathologically confirmed recurrent or metastatic colorectal cancer with failure of first-line chemotherapy (an oxaliplatin- or irinotecan-containing regimen); (2) measurable disease, as defined using version 1.1 of the Response Evaluation Criteria In Solid Tumors (RECIST); (3) age ≥19 years; (4) Eastern Cooperative Oncology Group (ECOG) performance status of 0-2; (5) life expectancy ≥12 weeks; (6) adequate hematological, renal, and hepatic functions, defined as an absolute neutrophil count of ≥1.5 × 10^9/L, a platelet count of ≥100 × 10^9/L, serum creatinine ≤1.5 × the upper limit of normal (UNL) or creatinine clearance ≥50 mL/min, serum bilirubin ≤2 × UNL, and aspartate aminotransferase and alanine aminotransferase levels of ≤2.5 × UNL; and (7) willingness to provide informed consent to participate in this study. Patients were excluded based on the following criteria: (1) other previous or concurrent malignancies within the last 5 years, with the exception of cured basal cell carcinoma of the skin or carcinoma in situ of the uterine cervix; (2) presence of intracerebral metastases or meningeal carcinomatosis; (3) other clinically significant comorbid conditions, such as an active infection or severe cardiopulmonary dysfunction; (4) medication that might affect immunocompetence, such as long-term steroids or other immunosuppressants for an unrelated condition. Treatment The vaccine GV1001 consists of a synthetic peptide corresponding to the 16-amino acid sequence of residues 611 to 626 (EARPALLTSRLRFIPK) of human telomerase reverse transcriptase (hTERT) and is capable of binding to molecules encoded by multiple alleles of all three loci of HLA class II. GV1001 was manufactured by Samsung Pharmacy (Hwasung-si, Korea) and supplied by GemVax & KAEL (Seongnam-si, Korea). The selection of second-line chemotherapeutic agents and targeted agents (bevacizumab, cetuximab or aflibercept) was at the investigator's discretion. GV1001 (0.56 mg) was injected intradermally on days 1, 3, 5 and 8 during the first cycle of chemotherapy, then once on day 1 of subsequent cycles. GV1001 was diluted with 0.3 mL of 0.9% normal saline and administered intradermally to the lower abdomen within 6 hours after dilution. This treatment was repeated every 2 weeks until treatment was discontinued due to the subject's request, toxicities, or disease progression. Serum eotaxin level test To determine the relationship between eotaxin level and treatment response, we conducted eotaxin level testing in patients who consented to the test. Peripheral blood was collected at baseline and on the first day of the 2nd, 4th, 7th and 10th cycles of treatment, and was analyzed using the Bio-Plex® 200 system at the Seoul Clinical Laboratories. Delayed-type hypersensitivity test The delayed-type hypersensitivity (DTH) test was performed to determine whether an immune response had been induced. The test was performed at baseline and on the first day of the 2nd, 4th, 7th and 10th cycles of chemotherapy, and was continued until the result was positive. 0.08 mL of the solution remaining after preparation of the GV1001 injection (Solution A) was diluted in 0.22 mL of normal saline to a concentration of about 0.7 mg/mL (Solution B). 0.15 mL of Solution B was extracted and administered intradermally on the opposite side of the lower abdomen within 6 hours after GV1001 injection. If the erythema or induration was more than 5 mm, the result was evaluated as positive. 
T-cell proliferation test Peripheral blood mononuclear cells (PBMCs) were isolated from blood samples before the start of vaccination and on the first day of the 2nd, 4th, 7th and 10th cycles of chemotherapy for the T-cell proliferation test. T-cell proliferation was detected by flow cytometry using carboxyfluorescein diacetate succinimidyl ester (CFSE) (eBioscience 65-0850). After thawing PBMCs at the end of the cycle, 1-5 × 10^6 cells were incubated with 2 mM CFSE at room temperature for 10 min and washed with ice-cold complete RPMI1640 medium. 1 × 10^5 CFSE-stained cells were seeded in an anti-human CD3 (1 mcg/mL)-coated 96-well culture plate in complete RPMI1640 medium. CFSE-labeled PBMCs were stimulated with anti-human CD28 (1 mcg/mL) and GV1001 peptide (20 mcg/mL) in the anti-human CD3-coated 96-well culture plate. PBMCs were incubated at 37 °C and 5% CO2 for 72 h. Dividing cells were detected by flow cytometry and analyzed using CYTOFLEX software (Beckman). A positive proliferative T-cell response was defined as present if one of the following criteria was met: i) a stimulation index (SI) ≥ 2 (the SI was calculated by dividing the T-cell population after GV1001 injection by the baseline value); or ii) a difference in the number of T-cell divisions before and after GV1001 injection of ≥ 1. Endpoints The primary endpoint was the disease control rate (DCR), and the secondary endpoints were the overall response rate (ORR), progression-free survival (PFS), OS and toxicity profiles. The DTH test and T-cell proliferation test were performed to evaluate the immune response, which was the exploratory endpoint. Statistical analysis According to Simon's optimal two-stage design, 46 patients were required for enrollment to test the null hypothesis that the true DCR is 30% versus the alternative hypothesis that the true DCR is at least 50%, with a two-sided alpha of 0.10 and 90% power. If 7 patients or more with disease control (complete response + partial response + stable disease) were observed among the 22 patients in the first stage, the study was continued with 24 additional patients included. As the drop-out rate was assumed to be 20%, the number of patients necessary for recruitment into the study was calculated to be 57. Descriptive statistics were used to summarize the patients' characteristics, tumor responses, and safety events. The Kaplan-Meier method was used to estimate the median PFS and OS. All patients who received at least one cycle of treatment were included in the safety analysis, and those who underwent at least one response evaluation were defined as the modified intent-to-treat (mITT) population and included in the efficacy analysis. Patient characteristics From May 13, 2015 to October 13, 2020, 56 patients with recurrent or metastatic CRC treated in seven hospitals of South Korea were enrolled. Table 1 shows the baseline characteristics of all patients. The median age was 64 years (range, 29-82 years) and 67.9% were men. Of all patients, 92.8% exhibited ECOG performance status 0-1. The primary tumors were predominantly located in the left side of the colon (left- vs. right-sided, 66.1% vs. 33.9%) and the RAS mutation was present in 25% of patients. Prognostic factors We performed univariate and multivariate analyses to identify factors potentially prognostic of PFS and OS (Table S1). The multivariate analysis included factors with p-values < 0.5 in the univariate analyses. 
In the mITT population, two factors were independently associated with poor PFS in the multivariable analysis: age ≥ 65 years (hazard ratio for PFS, 3.37 [95% CI, 1.34-8.49], p = 0.010) and ECOG performance status 1 or 2 (hazard ratio for PFS, 2.6 [95% CI, 1.01-6.69], p = 0.048). DTH and T-cell proliferation tests DTH results were available for 20 patients; no patient exhibited positive results during treatment. T-cell proliferation tests were conducted on 25 patients; GV1001-specific T-cell proliferation was evident in 7 (28.0%). The positive result of one patient is shown in Figure 2. Neither the ORR (42.8% in the positive vs. 53.3% in the negative group, p = 0.943) nor the DCR (100.0% in the positive vs. 88.9% in the negative group, p = 1.000) differed between the T-cell proliferation-positive and -negative groups. The median PFS (8.5 months [95% CI, 3.0-13.9 months] vs. 4.7 months [95% CI, 2.5-6.9 months], p = 0.303) tended to be longer in the T-cell proliferation-positive group, but this difference was not statistically significant. The median OS could not be analyzed in this subgroup because of the censored data. Discussion In this study, we evaluated the efficacy and safety of GV1001 combined with chemotherapy in CRC patients. To our knowledge, this is the first study to test a telomerase vaccine in patients with CRC. Figure 2. In vitro T-cell proliferation in PBMCs before vaccination (C0D1) and after GV1001 vaccination (cycles 2, 4, 7 and 10). Histogram plots show the division peaks following anti-CD3 (1 µg/mL), anti-CD28 (1 µg/mL), and GV1001 (20 µg/mL) stimulation of carboxyfluorescein diacetate succinimidyl ester (CFSE)-labeled CD3-high cells. Significance was evaluated by one-way ANOVA. **p < 0.01 and ***p < 0.001. In patient no. 11, T cells started to divide (histogram) from the fourth cycle of GV1001 vaccination, and the CFSE intensity of CD3-high cells also significantly increased from the fourth to the tenth cycle of treatment. In the results, the DCR was 90.9% (95% CI, 82.4-99.4%); this was higher than the predefined value for proof of efficacy. The median PFS and OS (secondary endpoints) were 7.1 months (95% CI, 5.2-9.1 months) and 12.8 months (95% CI, 9.9-15.8 months); these were comparable to the values in pivotal studies of second-line chemotherapies for CRCs [21,22]. However, no obvious immune response (on the DTH or T-cell proliferation test) was observed, in contrast to other GV1001 trials in pancreatic and non-small cell lung cancers. No patient exhibited positive results on the DTH test; antigen-specific T-cell proliferation was observed in only 28% of patients (7 of 25). In addition, the results of the T-cell proliferation test were not correlated with the efficacy outcomes. A discrepancy between the DTH reactions and T-cell responses was also observed in a previous study on pancreatic cancer patients [12]; a plausible explanation is that different sensitivities of the two assays, or biologically different immune reactions generated by vaccination, might have influenced the outcomes. Considering our results, it is difficult to clearly determine whether the observed efficacy is attributable to a synergistic effect of GV1001 vaccination and chemotherapy or to the chemotherapy itself. Unlike other studies of GV1001, we did not combine injections of granulocyte macrophage colony-stimulating factor (GM-CSF) with GV1001 vaccination, which may explain why we did not observe an obvious immune response. 
In general, a level of immunogenicity that breaks the immune tolerance of the host is essential for a cancer vaccine to be effective; concomitant delivery of adjuvant GM-CSF with a vaccine is a widely adopted strategy. Although GM-CSF-based vaccines induced potent anti-tumor immune responses in preclinical studies [23,24], the effects were not robust in clinical trials and sometimes contradicted the results from animal models [25,26]. Under certain conditions, GM-CSF induces the production of myeloid-derived suppressor cells and immunosuppressive regulatory T cells, leading to unexpected outcomes. Based on the fact that adding GM-CSF to vaccination could induce immunosuppression [27], we hypothesized that GV1001 vaccination alone (i.e., without GM-CSF injection) might induce an adequate immune response. However, it is possible that the omission of GM-CSF may have compromised immunogenicity. Moreover, when cytotoxic chemotherapy is used in conjunction with anti-cancer vaccination, the chemotherapy itself is immunosuppressive and can thus affect antigen-specific T-cell responses. Gemcitabine and fluorouracil, the combination partners of GV1001 in the phase III trial of pancreatic cancer, have preclinical evidence of synergism with GV1001: these agents induce apoptosis of cancer cells, leading to the release of antigens that can be taken up by antigen-presenting cells and cross-presented to cytotoxic T cells [28,29]. However, there is a lack of evidence that oxaliplatin and irinotecan (the drugs used in the present study) act synergistically with GV1001. Eotaxin is an eosinophil-specific chemokine associated with allergic reactions [30]. In general, chemokines have important roles in cancer progression, including modulating tumor cell growth and migration [31]. However, little is known regarding the role of eotaxin in cancer. After the predictive value of eotaxin for GV1001 vaccination was proposed in the TeloVac study [15], consistent results were obtained in a subsequent phase III study of pancreatic cancer patients with high eotaxin levels [16]. However, in this study, there was no significant relationship between the baseline eotaxin level and the efficacy outcomes. Because no obvious immune response was induced, it is difficult to clearly explain a possible role for eotaxin as a predictive marker in this study. In addition, considering the conflicting results on the role of eotaxin in various types of cancer, the role of eotaxin as a predictive marker for GV1001 in pancreatic cancer as well as in other cancers needs to be further verified. Indeed, eotaxin level was associated with a poor prognosis in certain types of cancer [32]; conversely, it was associated with tumor suppression in other cancers [33]. Conclusion Although no obvious immune response was observed, this first clinical study of telomerase vaccination for CRC patients showed that GV1001 vaccination in combination with conventional chemotherapy was tolerable and associated with modest efficacy outcomes. More robust studies are required to validate a potential role for GV1001 in CRC treatment.
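As an aside on the Simon two-stage design described in the Statistical analysis section above, the stage-1 stopping rule (stop if 6 or fewer of the first 22 patients achieve disease control) can be checked with a short binomial computation. A minimal sketch follows; the final-stage rejection threshold is not reported in this excerpt, so only the stage-1 early-termination probabilities are computed.

```python
# A minimal sketch of the stage-1 rule of the Simon two-stage design:
# stop if <= 6 of the first 22 patients achieve disease control. Computes
# the probability of early termination under the null (true DCR 30%) and
# the alternative (true DCR 50%) stated in the Methods.
from scipy.stats import binom

n1, r1 = 22, 6          # stage-1 size and futility boundary (stop if X <= 6)
for p in (0.30, 0.50):  # null and alternative true disease control rates
    pet = binom.cdf(r1, n1, p)   # P(early termination) = P(X <= r1)
    print(f"true DCR = {p:.0%}: P(stop after stage 1) = {pet:.3f}")
```

A well-chosen boundary makes early stopping likely under the null but unlikely under the alternative, which is exactly what this design aims for.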
2022-02-26T00:22:54.900Z
2022-02-14T00:00:00.000
{ "year": 2022, "sha1": "c7a4e40092d0e0b0b94406e90d93b33e9fefa2db", "oa_license": "CCBYNC", "oa_url": "https://www.jcancer.org/v13p1363.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7fc40f4c120ca3e4008b384760cda057fdae237e", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
17714595
pes2o/s2orc
v3-fos-license
Transcriptional Activation of Pericentromeric Satellite Repeats and Disruption of Centromeric Clustering upon Proteasome Inhibition Heterochromatinisation of pericentromeres, which in mice consist of arrays of major satellite repeats, are important for centromere formation and maintenance of genome stability. The dysregulation of this process has been linked to genomic stress and various cancers. Here we show in mice that the proteasome binds to major satellite repeats and proteasome inhibition by MG132 results in their transcriptional de-repression; this de-repression is independent of cell-cycle perturbation. The transcriptional activation of major satellite repeats upon proteasome inhibition is accompanied by delocalisation of heterochromatin protein 1 alpha (HP1α) from chromocentres, without detectable change in the levels of histone H3K9me3, H3K4me3, H3K36me3 and H3 acetylation on the major satellite repeats. Moreover, inhibition of the proteasome was found to increase the number of chromocentres per cell, reflecting destabilisation of the chromocentre structures. Our findings suggest that the proteasome plays a role in maintaining heterochromatin integrity of pericentromeres. Introduction Packaging of DNA into chromatin plays an important role in transcriptional regulation. Euchromatin is accessible to the transcription machinery whereas heterochromatin is more compact and associated with transcriptional repression [1]. Multiple factors including transcription factors, post-translational histone modifications and DNA methylation are thought to maintain heterochromatin repression [2]. Among them, histone hypoacetylation, histone 3 lysine 9 trimethylation (H3K9me3) and heterochromatin protein 1 (HP1α) were shown to be required for maintenance of heterochromatin [3,4]. Constitutive heterochromatin predominantly consists of satellite repeats. In mouse cells, pericentromeric and centromeric satellite repeats are the major and minor satellite repeats respectively [5]. Heterochromatinisation of pericentromeric repeats is important for centromere formation and maintenance of genome stability [6]. Low levels of pericentromeric satellite repeat transcription have been detected under various physiological conditions, including cell cycle, senescence, development and differentiation [7][8][9][10]. However, aberrant overexpression of pericentromeric satellite repeats has been detected in several pathological conditions, including cellular stress [11][12][13], cancer [14][15][16][17] and some genetic disorders [18][19][20]. The proteasome is a highly conserved proteolytic complex comprised of the catalytic 20S core particle (CP) capped at one or both ends by the 19s regulatory particle (RP). It regulates protein quality by recognising, unfolding and degrading polyubiquitin tagged, aged, misfolded or damaged proteins [21][22][23]. Growing evidence, mainly from studies in yeast, suggests that the proteasome is associated with chromatin and regulates transcription [24][25][26][27][28][29][30]. Thus, the proteasome regulates the levels and binding of activators as well as recruitment of co-activators at 5' regulatory regions, thereby controlling transcriptional initiation [26,31,32] as well as elongation [27,33]. It is also thought to enable release of RNA polymerase II (RNAPII) and thereby regulate transcription termination [34]. Moreover, defects of the proteasome subunits in yeast were shown to enhance transcriptional repression of heterochromatin [35]. 
Additionally, ubiquitin mediated degradation of the Jmj family protein Epe1 was shown to be required for the accurate formation of heterochromatin boundaries [36]. Notably, a few studies, mostly in mammalian cells, suggest that the proteasome also regulates transcriptional repression. For example, inhibition of the 20S proteasome resulted in increased levels of RNAPII and the active chromatin mark H3K4me3 at a glucocorticoid-responsive gene promoter, where proteasome binding was identified in human cells [37]. Another study proposed that the proteasome blocks nonspecific transcription initiation by preventing formation of the preinitiation complex at cryptic transcription sites [38] and degrades RNAPII or a member of the pre-initiation complex that drives the transcription at these ectopic sites, thereby suppressing transcription. Moreover, a study performed on rat liver showed that proteasome inhibition led to global histone hypomethylation (especially at H3K9 and H3K27 residues) and hyperacetylation [39]. Here we demonstrate that proteasomal activity in mice is also involved in the repression of pericentromeric satellite repeat expression and the integrity of pericentromeric clusters. Results and Discussion Binding of the 20S proteasome at major satellite repeats Several studies have shown the presence of the proteasome in eukaryotic nuclei [40][41][42][43][44] and its recruitment to chromatin including centromeres [45], telomeres [46] and sites of cryptic transcriptional initiation [38]. To investigate whether the proteasome might participate in transcriptional silencing of heterochromatin, proteasome binding at pericentromeric and several other endogenous repeats was analysed using ChIP-seq data previously obtained in mouse 3T3-L1 cells [47]. The results indicated a ~1.2-fold enrichment of the proteasome at pericentromeric major satellite repeats and LINE L1 elements and a ~1.9-fold enrichment at centromeric minor satellite repeats, compared to input, whereas all other elements showed no signal above input (Fig 1A). To replicate this qualitative observation, ChIP was performed in another mouse cell line, NIH3T3, using an antibody against the 20S proteasome, which confirmed binding of the 20S proteasome to major satellite repeats as well as to LINE L1 elements (Fig 1B). The signal from the minor satellite was relatively low and close to background, as was the case for the ChIP-seq data, making it difficult to be precise about the extent of enrichment at this location. Transcriptional activation of major satellite repeat expression upon proteasome inhibition To assess the role of the proteasome at major satellite repeats, pericentromeric transcription was assessed in NIH3T3 cells treated with MG132, a widely used and specific proteasome inhibitor. Fig 1. Binding of the 20S proteasome particle at major satellite repeats. (A) ChIP-seq data analysis obtained by ChIP against the FLAG-tagged β1 subunit (PSMB1) of the 20S proteasome particle in the mouse 3T3-L1 cell line. The enrichment was greater at major and minor satellites as well as LINE_L1 elements but not at other classes of DNA repetitive elements. (B) ChIP-qPCR analysis using an antibody against the β6 subunit of the 20S proteasome and a no-antibody control, performed on the mouse NIH3T3 cell line. The enrichment level is shown relative to input after subtraction of background. Error bars = SEM of 3 biological replicates. 
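As an illustration of the enrichment calculation summarised in the Fig 1B legend (signal relative to input after background subtraction), a common percent-of-input scheme for ChIP-qPCR is sketched below. The Ct values, the 10% input fraction, and the assumption of ~100% PCR efficiency are hypothetical placeholders, not values from the study, whose exact normalisation may differ.

```python
# A minimal sketch of a percent-of-input ChIP-qPCR calculation with
# background (no-antibody) subtraction. All numbers are HYPOTHETICAL.
import math

def percent_input(ct_ip, ct_input, input_fraction=0.10):
    # Adjust the input Ct for the fraction of chromatin kept as input,
    # then convert the Ct difference to a linear scale (2^-dCt).
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

ip = percent_input(ct_ip=28.0, ct_input=26.0)   # 20S proteasome ChIP
bg = percent_input(ct_ip=31.0, ct_input=26.0)   # no-antibody control
print(f"enrichment after background subtraction: {ip - bg:.2f}% of input")
```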
A dose-dependent increase in the transcript level of the major satellite repeat was observed upon MG132 treatment, reaching 10-fold in treated cells when compared to dimethyl sulfoxide (DMSO vehicle) (Fig 2A), whereas no effect was seen on minor satellite expression (Fig 2A). The increase in major satellite transcript levels could be due to an increase in transcription or to inhibition of major satellite transcript degradation. To assess whether transcription was required for the increase in major satellite repeat transcripts, cells were treated with the transcriptional inhibitor Actinomycin D (ActD) concomitant with MG132. Transcription was effectively inhibited (Fig 2B) and the increase in major satellite repeat transcription upon proteasome inhibition was blocked in cells treated concomitantly with both MG132 and ActD (Fig 2B), indicating that the upregulation in response to MG132 was indeed transcription dependent. This was further validated at the single cell level using RNA fluorescent in situ hybridization (RNA FISH) targeting major satellite transcripts. Fig 2. Kinetics of the effect of proteasome inhibition and/or RNAPII inhibition on the expression of major satellite repeats. The right graph indicates the efficiency of transcriptional inhibition, measured by decay of the c-MYC and major satellite transcripts. The exponential decay curve was obtained using a best-fit nonlinear regression model. NIH3T3 cells were treated with either 20μM MG132, 20μg/ml ActD, or both 20μg/ml ActD and 20μM MG132 for 1h, 2h, 4h and 8h, followed by RNA extraction and q-RT-PCR. Relative expression was normalised against spiked RNA or Gapdh and shows fold change relative to DMSO. Error bars = SEM of 3 biological replicates. (C) RNA FISH imaging of major satellite repeat transcription upon proteasome inhibition: NIH3T3 cells were treated with 20μM MG132 for 2h and 4h, followed by RNA-FISH analysis using a major satellite probe. Representative mid-zone confocal z-section images for DAPI (blue), major satellite (red) and the merge are shown for 2h and 4h with DMSO and MG132 treatment. Scale bar 2μm. * p<0.05, ** p<0.001 (Student's t test). Proteasome inhibition led to an increase in major satellite repeat expression in cells treated with MG132 for 4h (Fig 2C). Major satellite signal was located surrounding or within a proportion of DAPI-dense heterochromatin regions showing the characteristic morphology of chromocentres [48]. Signal was also identified in areas of the nucleus lacking chromocentres, suggesting either partial transcription from intergenic major satellite DNA sequences or migration of the transcript away from the chromocentres (Fig 2C). Thus, the RNA-FISH analysis is consistent with the previous observation that major satellite repeat transcription was upregulated upon proteasome inhibition (Fig 2A and 2B). Upregulation of major satellite repeats upon proteasome inhibition occurs independently of cell cycle perturbation Considering that (i) proteasome inhibition induces cell-cycle arrest in various cell types [49][50][51][52] and (ii) the cell cycle was shown to regulate the transcription of both pericentromeric and centromeric satellite repeats [7,53], it was necessary to determine whether the transcriptional activation of major satellite repeats was due to cell cycle skewing. To confirm the cell-cycle dependent transcription of major and minor satellite repeats, counterflow centrifugal elutriation [54] was performed for cell synchronization. 
Elutriation offers the advantage of selecting cells in different stages of the cell cycle and, unlike chemical agents, does not affect the metabolism of the cells [54]. The expression of major and minor satellite repeats peaked in the G1 and G2/M phases of the cell cycle, respectively (S1 Fig), consistent with previously published studies (Ferri et al., 2009; Lu and Gilbert, 2007). We next investigated the effect of proteasome inhibition on (i) the kinetics of transcriptional activation of major and minor satellite repeats and (ii) the cell cycle. Interestingly, the kinetic analysis showed an upregulation of major satellite repeat expression as early as 4h after MG132 treatment (Fig 3A), a time point at which no significant effect was seen on the cell cycle profile (Fig 3B). As before, minor satellite repeat expression was unaffected (Fig 3A). Taken together, these results suggest that the upregulation of major satellite repeat expression upon proteasome inhibition is unlikely to be a consequence of cell cycle skewing, because it occurred without any significant effect on the cell cycle distribution. The effect of proteasome inhibition on chromatin structure To assess whether the major satellite repeat upregulation was accompanied by an alteration in the chromatin, histone modifications including H3 acetylation, H3K9me3, H3K4me3 and H3K36me3 were analysed by ChIP-qPCR after treatment of NIH3T3 cells with MG132. H3K9me3, which is considered a "hallmark" of heterochromatin [55], was found to be unaffected by MG132 treatment (Fig 4A). This was further confirmed by immunofluorescence (IF) (Fig 4B). Similar to H3K9me3, the levels of H3 acetylation, H3K4me3 (which is normally associated with promoters of actively transcribed genes [56,57]) and H3K36me3 (which is normally associated with elongating RNAPII [58,59]) were found to be similar between cells treated with MG132 and untreated (DMSO) cells (Fig 4A). Thus, the ChIP-qPCR results suggest that proteasome inhibition does not have any net effect on these histone modifications at the major satellite repeat locus. The lack of effect might seem surprising given that proteasome inhibition led to significantly increased transcription of the repeats. This may relate to the proportion of repeat sequences that were activated by inhibiting the proteasome. The RNA FISH experiment indicates that the number of repeats or heterochromatic clusters that are transcriptionally activated might be a small proportion of the total, and therefore any chromatin change would be diluted by the majority of DNA sequences that are not expressing the transcript. This lack of effect on H3K9me3 has been observed in several previous studies in which major satellite repeats were found to be upregulated to a similar degree. In contrast, in Pax3/Pax9-deficient iMEFs [60] there was a more significant increase in major satellite transcription and a clear loss of H3K9me3. Lastly, Zhang et al. have similarly reported that transcriptional activation of silent heterochromatin in yeast can occur without any significant changes in histone modifications [61]. Heterochromatin at pericentromeres is regulated not only by histone modification but also by structural proteins such as HP1α, which are involved in chromatin condensation and the maintenance of stable heterochromatin [62,63]. 
Therefore, the effect of proteasome inhibition on the localisation and distribution of HP1α was evaluated by IF after treatment of NIH3T3 cells with the proteasome inhibitor. As expected, in the untreated cells (DMSO), HP1α was concentrated in the DAPI-dense stained regions, confirming its localisation to pericentromeric heterochromatin. Interestingly, MG132 treatment resulted in a dispersed distribution of the HP1α protein throughout the nucleus (Fig 4C and S2A Fig); however, total HP1α levels remained similar between treated and untreated cells (S2B Fig). Thus, proteasome inhibition led to displacement of HP1α from chromocentres, without visible changes in the DAPI-dense staining domains. Whether HP1α is displaced specifically from pericentromeric heterochromatin (or from other repressed genomic loci that are associated with chromocentres) remains to be shown. Furthermore, this observation is consistent with previous studies where loss of HP1 from pericentromeric heterochromatin was not sufficient to disrupt the DAPI-dense stained regions or H3K9me3 [64,65]. It is also in line with previous reports [64,66], where transcriptional activation of pericentromeric repeats was accompanied by either partial or full displacement of HP1 from pericentromeres without any changes in H3K9me3 levels, the latter of which serves as a platform for HP1 recruitment to chromatin [67,68]. Also, HP1α/β double knockout in MEF cells led to upregulation of major satellite repeat expression without affecting the localization of H3K9me3 [64]. Another example comes from BRCA1-deficient cells, where a significant reduction in the number of HP1-positive foci and loss of ubiquitylation of histone H2A (H2Aub) was reported to result in activation of major and minor satellite repeat transcription [66]. Lastly, mutation of the variant H3.3 at an early stage in development resulted in increased accumulation of major satellite repeat transcripts, which was accompanied by displacement of HP1 from chromocentres [69]. Therefore, it is tempting to speculate that dissociation of HP1α, as seen here, could be sufficient for the remodelling of heterochromatin, rendering it accessible to the transcriptional machinery. The immunofluorescence imaging of chromocentres (visualised by DAPI) after treatment with the proteasome inhibitor did not reveal any obvious structural changes (Fig 4C). Considering that (i) major satellite repeats reside at these regions and (ii) a previous study suggested that upregulation of major satellite repeat expression was accompanied by a marked decrease in the number of chromocentres [66], here the number of chromocentres was analysed in NIH3T3 cells after proteasome inhibition. To acquire cells in a high-throughput manner, an imaging flow cytometer (ImageStream X) was used. Cells were treated with either DMSO or MG132, followed by staining with DRAQ5, which is less toxic for living cells compared to DAPI [70]. Treatment of cells with MG132 resulted in a shift of the distribution of the number of chromocentres per cell towards the right (Fig 5), which increased with time. It is possible that the increased number of chromocentres per cell after proteasome inhibition might result from destabilisation and disaggregation of the existing chromocentre structures. Here we showed the binding of the proteasome to major satellite repeats and their dysregulation upon proteasome inhibition by MG132. 
As the proteasome participates in a large number of cellular pathways and controls the steady-state level of many proteins, it is difficult to distinguish between direct involvement of the proteasome in the transcription of the pericentromeric repeat and indirect effects mediated by a protein whose level is controlled by proteasome activity. Fig 4C. Delocalisation of HP1α from chromocentres upon proteasome inhibition: NIH3T3 cells were treated with 20μM MG132 for 4h and immunolabeled with HP1α antibody (green) and co-stained with DAPI (blue). Scale bar 10μm. Several studies have shown that the proteasome degrades stalled RNAPII during transcription-coupled repair [71,72] and a similar mechanism could potentially operate at major satellite repeats. At pericentromeric repeats the proteasome might function to degrade RNAPII and thereby prevent their expression. Additionally, accumulation of misfolded proteins due to proteasome inhibition is known to trigger cell stress responses [73][74][75][76][77][78][79][80]. Therefore, a component of the cell stress pathway might also be responsible for the de-repression of the major satellite repeat. For example, HSF1 was previously shown to activate the transcription of pericentromeric Satellite II and III repeats in human cells upon heat shock in order to form nuclear stress bodies [13,81,82]; this effect was reported to be human-specific (Valgardsdottir et al., 2008), but similar factors may play a role in other mammals. It is interesting that the eviction of HP1 and the disaggregation of chromocentres shown here precede the detection of major satellite transcripts, suggesting that the proteasome is required for the integrity of heterochromatin. Quantitative Reverse Transcription Coupled to PCR After RNA isolation using TRIzol® Reagent (Invitrogen) and genomic DNA digestion with a DNA-free kit (Ambion), cDNA was synthesized using the ThermoScript kit (Invitrogen) and random hexamers following the manufacturer's instructions. The quantitative PCR was performed using SYBR® Green JumpStart™ Taq ReadyMix™ (Sigma-Aldrich®) or SensiMix™ (Bioline) in a Chromo4 DNA engine (MRJ) with Opticon Monitor 3 (BioRad) software. A list of primers used throughout this study is shown in S1 Table. Flow Cytometric Analysis Cells were fixed in ice-cold 70% ethanol by incubation overnight at -20°C. The next day, the fixed cells were washed three times in PBS and treated with 0.2μg/μl RNAse A (Sigma-Aldrich) in PBS for 20min at 37°C. Cells were washed again in PBS and cellular DNA was stained with 50mg/ml propidium iodide (Millipore) diluted in PBS. Stained cells were quantified by FACS (BD LSRII) and gated according to the single-cell population and to DNA content representative of the cell cycle. Cell cycle separation by counterflow centrifugal elutriation Cell cycle separation was conducted using an elutriation system with a JE-5.0 elutriator rotor (Beckman Coulter Inc.) equipped with an Avanti J-26 XP centrifuge (Beckman Coulter Inc.) and a pump. 2 × 10^8 cells were harvested and washed twice with Elutriation buffer (3.4mM EDTA and 1% FBS in 1X PBS). In order to obtain a single cell suspension, the cell pellet was resuspended in 40ml Elutriation buffer and passed twice through an 18-gauge needle (25G) syringe. Next, the sample was loaded into the pre-assembled elutriation chamber. Throughout the elutriation process the centrifuge was maintained at a constant speed of 1700rpm at 4°C. To obtain elutriation fractions, the flow rate of the elutriation buffer was increased from 8ml/min to 20ml/min in 1ml/min increments. 
Consecutive 200ml effluent volumes were collected from the centrifuge at each flow rate and cells were pelleted by centrifugation. To assess the quality of synchronization in each elutriated fraction, cells were stained with PI followed by FACS analysis. Based on the similarity of the cell cycle profiles, elutriation fractions were further grouped and categorised into 3 different fractions. For the ChIP-seq analysis, the repeat element annotations for the mouse GRCm38 genome build were downloaded from the RepeatMasker track of the UCSC genome browser website. GSM841627 and GSM1095381 sequencing reads (Catic et al., 2013) were aligned to repeat genomes of several different repeat families using Bowtie 2 (v2.2.6; default parameters), and hits were counted in a way that allows multiple matches per sequence. The repeat genomes were constructed by concatenating the sequences of each instance of the repeat into a single repeat genome for that repeat family, with individual repeats separated by NNNNN to prevent sequences from mapping across instance boundaries. The minor satellite consensus sequence: TTGTAGAACAGTGTATATCAATGAGTTACAATGAGAAACATGGAAAATGATAAAAACCACACTGTAGAACATATTAGATGAGTGAGTTACACTGAAAAACACATTCGTTGGAAACGGGAT, was added manually to the Minor Satellite repeat genome. RNA-FISH RNA in situ hybridization (RNA-FISH) was performed using the ViewRNA™ ISH Cell Assay kit (Affymetrix, eBioscience) following the manufacturer's instructions. The images were acquired with a 63X oil-immersion lens using a Leica Microsystems SP5 confocal microscope. Analysis of images was performed with Fiji ImageJ software. RNA probe: The major satellite RNA probe set was designed and produced by eBioscience (Affymetrix) probe developers. This probe corresponded to V00846 (Mouse satellite DNA sequence, type 1). Immunofluorescence Cells were fixed with 4% PFA diluted in PBS containing 0.1% (v/v) Triton X100 for 15 min at room temperature, followed by three washes with PBS and permeabilisation with 0.5% (v/v) Triton X100 diluted in PBS for 30min at room temperature. Cells were washed again three times in PBS and incubated in 20mM glycine dissolved in PBS for 30min at room temperature. Subsequently, cells were blocked for 1h with PBS+ (1% BSA, 0.1% casein, 0.02% fish skin gelatin in 1X PBS, pH 7) and incubated with primary antibody appropriately diluted in PBS+ in a dark humidified chamber overnight at 4°C. Cells were washed again three times with PBS and incubated in fluorochrome-conjugated (Alexa488 or Alexa568) secondary antibody diluted to the required concentration in PBS for 1h at RT in a dark humidified chamber. Cells were further washed nine times with PBS and incubated for 15min with DAPI diluted 1:1000 in PBS at room temperature before mounting with Vectashield mounting medium (Vector Laboratories). Images were acquired with a Leica Microsystems SP5 confocal microscope. Analysis of images was performed with Fiji ImageJ software. Antibodies used for immunofluorescence were: HP1α (Millipore, 05689) and H3K9me3 (Millipore, 17-625). ImageStreamX 0.5 million NIH3T3 cells were pelleted and washed twice in cold PBS++ (PBS containing 1mM EDTA and 0.02% (w/v) sodium azide) and resuspended in 100μl of 1μM DRAQ5 dissolved in PBS++. 20000 events were collected at 40X magnification in bright field and at the 658nm laser wavelength with the ImageStreamX (Amnis, Seattle, Washington). Raw data were quantitated using the associated image analysis software (IDEAS, Amnis). 
After single-cell and DRAQ5 fluorescence gating, the number of chromosome clusters per cell was determined by computing the intensity of localized DRAQ5 bright spots within the image that were greater than 2.75 pixels in radius.

Western blot
Cells were lysed using RIPA buffer (50 mM Tris pH 8.0, 150 mM NaCl, 0.5% Na-deoxycholate, 1% NP-40, 0.1% SDS; 0.5 μl P8430 protease inhibitor cocktail (Sigma) and 5 μl 0.1 M PMSF in isopropanol were added freshly per 1 ml of RIPA buffer) and the cell extract was cleared by centrifugation at 12,000 g for 15 min. The supernatant was collected and the protein concentration was determined by the Bradford dye colorimetric assay (Bio-Rad), following the manufacturer's instructions. For protein denaturation, lysate samples with specific amounts of protein were mixed with 6X Laemmli buffer (Alfa Aesar) followed by incubation at 100°C for 10 min. 5 μg of protein was loaded on an SDS-PAGE gel and, when electrophoresis was complete, the proteins were electro-transferred from the SDS-PAGE gel to a pre-washed PVDF membrane (GE Healthcare) in transfer buffer (25 mM Tris pH 8.3, 190 mM glycine, with 20% (v/v) methanol added freshly). The membrane was blocked in blocking buffer (0.1% (v/v) Tween 20, 5% non-fat milk in PBS) for 1 h at room temperature, followed by incubation with the primary antibody in blocking buffer on a rocking platform overnight at 4°C. After washing the membrane in PBS supplemented with 1% Tween, the membrane was incubated with secondary antibody conjugated with horseradish peroxidase (HRP) in blocking buffer for 1 h at room temperature. The presence of HRP on the membrane was then detected using ECL Plus Western Blotting Detection Reagents (GE Healthcare) according to the manufacturer's instructions.
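As a side note to the ChIP-seq analysis above, the repeat-genome construction step can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the file name and placeholder instance sequences are ours, not the study's actual inputs, and the alignment commands in the trailing comment are the standard Bowtie 2 invocations rather than the exact ones used.

```python
# Minimal sketch of the repeat-genome construction described in the
# ChIP-seq analysis. File names and instance sequences are hypothetical.

def build_repeat_genome(instances, spacer="NNNNN"):
    """Concatenate repeat-family instances into one pseudo-genome,
    separated by NNNNN so reads cannot map across instance boundaries."""
    return spacer.join(seq.strip().upper() for seq in instances)

# Minor satellite consensus sequence quoted in the text, added manually.
MINOR_SAT_CONSENSUS = (
    "TTGTAGAACAGTGTATATCAATGAGTTACAATGAGAAACATGGAAAATGATAAAAACCA"
    "CACTGTAGAACATATTAGATGAGTGAGTTACACTGAAAAACACATTCGTTGGAAACGGGAT"
)

# Placeholder RepeatMasker-derived instances (not real annotations).
instances = ["ACGTACGTACGTACGT", "TTGACCATGGAATTGA"]

repeat_genome = build_repeat_genome(instances + [MINOR_SAT_CONSENSUS])

with open("minor_satellite_repeat_genome.fa", "w") as fh:
    fh.write(">MinorSatellite\n{}\n".format(repeat_genome))

# The FASTA would then be indexed and reads aligned, e.g.:
#   bowtie2-build minor_satellite_repeat_genome.fa minor_satellite
#   bowtie2 -x minor_satellite -U reads.fastq -S hits.sam
```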
Using Visual Patient to Show Vital Sign Predictions, a Computer-Based Mixed Quantitative and Qualitative Simulation Study

Background: Machine learning can analyze vast amounts of data and make predictions for events in the future. Our group created machine learning models for vital sign predictions. To transport the information of these predictions without numbers and numerical values and make them easily usable for human caregivers, we aimed to integrate them into the Philips Visual-Patient-avatar, an avatar-based visualization of patient monitoring. Methods: We conducted a computer-based simulation study with 70 participants in 3 European university hospitals. We validated the vital sign prediction visualizations by testing their identification by anesthesiologists and intensivists. Each prediction visualization consisted of a condition (e.g., low blood pressure) and an urgency (a visual indication of the timespan in which the condition is expected to occur). To obtain qualitative user feedback, we also conducted standardized interviews and derived statements that participants later rated in an online survey. Results: The mixed logistic regression model showed 77.9% (95% CI 73.2–82.0%) correct identification of prediction visualizations (i.e., condition and urgency both correctly identified) and 93.8% (95% CI 93.7–93.8%) for conditions only (i.e., without considering urgencies). A total of 49 out of 70 participants completed the online survey. The online survey participants agreed that the prediction visualizations were fun to use (32/49, 65.3%), and that they could imagine working with them in the future (30/49, 61.2%). They also agreed that identifying the urgencies was difficult (32/49, 65.3%). Conclusions: This study found that care providers correctly identified >90% of the conditions (i.e., without considering urgencies). The accuracy of identification decreased when considering urgencies in addition to conditions. Therefore, in future development of the technology, we will focus on either only displaying conditions (without urgencies) or improving the visualizations of urgency to enhance usability for human users.

Introduction
Vast amounts of data are being generated daily within healthcare, especially in electronic anesthesia records, where, among other data, continuous patient monitoring data are stored. The ever-increasing use of this data will fundamentally change and improve the way medical care will be practiced in the future [1-4]. A pressing challenge is to adequately process the data so that caregivers can make evidence-based decisions for the benefit of patients [1]. Machine learning (ML) can curate and analyze large amounts of data, identify the underlying logic, and generate models that can accurately recognize a situation or predict a future state [5,6]. Predictive ML models have already been developed for various fields of medicine [7-9]. However, a significant gap exists between the number of developed models, clinically tested applications, and commercially available products [8].
There are several reasons why ML models do not deliver the expected performance in clinical trials [10,11]. One is a lack of trust of the users in the models [7,12,13]. To increase trust, clinically meaningful models should be developed with good unbiased data and should not patronize the users but rather support them in their clinical work [11,14-16]. An integral part of such a clinically meaningful model is the presentation of information without imposing an additional cognitive load on the user [17]. A decision support tool that uses a ML model should not lead to alarm fatigue or increased workloads but provide actionable advice that fits into existing workflows [4,13].

To make the ML models that we developed for vital sign predictions in surgical patients clinically meaningful and usable, we developed a user-centered, patient avatar-based graphical representation to visualize vital sign predictions. These visualizations are an extension to Visual Patient (VP), an avatar-based patient monitoring technology [18]. VP has been available in Europe since 2023 as the Philips Visual-Patient-avatar. Studies reported that healthcare providers were able to retrieve more vital signs with higher diagnostic confidence and lower perceived workload when using VP rather than wave- and number-based monitoring, allowing them to obtain a comprehensive picture of the patient's condition more quickly [18,19]. Additionally, care providers positively reviewed the technology and found it intuitive and easy to learn and use [20].

The project's objective is to implement vital sign predictions into the VP (provisionally named VP Predictive). To achieve this goal, the project aims to integrate the front-end (the way predictions are presented to the users) with the back-end (the ML models calculating the predictions).

In the present study, we report the validation process of the front-end. Specifically, we aimed to determine how accurately users identify the different vital sign prediction visualizations after a short educational video. The development and validation process of the back-end ML models is the subject of a separate study.

Methods
A declaration of non-jurisdiction (BASEC Nr. Req-2022-00302) was issued by the Cantonal Ethics Committee, Zurich, Switzerland. Due to the study's exemption from the Human Research Act, ethical approval was not required for the German study centers. Participation was voluntary and without any financial compensation. All participants signed a consent for the use of their data. In reporting the study, we followed the Guidelines for Reporting Simulation Research in Health Care, an extension of the CONSORT and STROBE statements [21].

Study Design and Population
We conducted an investigator-initiated, prospective, multi-center, computer-based simulation study at the University Hospitals of Zurich, Frankfurt, and Wuerzburg. The study consisted of three parts. First, we validated the prediction visualizations by testing their identification by physicians. We included senior and resident physicians employed in the study centers' anesthesia or intensive care departments according to availability. Following this part, we invited participants from Frankfurt and Wuerzburg to take part in face-to-face, standardized interviews. From the interview transcripts, we identified key topics and derived representative statements. In the third study part, the participants from all three centers rated these statements on Likert scales.
VP and VP Predictive
VP is a user-centered visualization technology specifically developed to improve situation awareness (Supplementary Materials Video S1). It creates an animated avatar of the patient to visually display various vital signs according to the real-time conventional monitoring data.

VP Predictive was developed as an add-on to VP, with the goal of integrating vital sign predictions into the standard VP. A prediction consists of a condition and an urgency. The condition signals which vital sign is predicted to change and in which direction (low/high), while the urgency gives the time horizon in which this change is expected to occur. The VP Predictive educational video (Supplementary Materials Video S2) and Figure 1 explain the technology.

Condition
There are 22 condition visualizations, which are based on the original VP visualizations. These conditions are displayed as blank visualizations with white dashed borders and superimposed on the VP. The only exception to this display method is oxygen saturation, for which a "low" condition is shown by coloring the blood pressure shadow of the original VP in blue.

Urgency
There are three different urgencies: urgent, intermediate, and non-urgent. For an urgent prediction, the corresponding condition is shown for 3.5 s every 7 s and flashes during the display. An intermediate urgency prediction is shown for 3.5 s every 14 s and does not flash. Finally, a non-urgent prediction is shown for 3.5 s every 28 s and is partially transparent. This way, a more urgent prediction is displayed more frequently than a less urgent one. The additional flashing (urgent) and transparency (non-urgent) are designed to allow users to distinguish the different urgencies upon first viewing (a schematic encoding of these display parameters is sketched below).

Study Procedure
We conducted a computer-based simulation study followed by standardized interviews and an online survey.

Part I: Simulation Study
Participants were welcomed into a quiet room. After a short session briefing and the completion of a sociodemographic survey, we showed the participants a video explaining VP (Video S1). Afterward, participants had the opportunity to practice on a Philips Visual-Patient-avatar simulator for up to 5 min. Then, an educational video explaining VP Predictive was shown (Video S2).
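To make the urgency scheme above concrete, the following is a minimal sketch of the stated display parameters (3.5 s display window; 7/14/28 s cycles; flashing for urgent, partial transparency for non-urgent). The class and function names are illustrative assumptions, not part of the actual Visual Patient software.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UrgencyDisplay:
    cycle_s: float      # how often the condition visualization reappears
    shown_s: float      # how long it stays on screen per cycle
    flashing: bool      # urgent predictions flash while displayed
    transparent: bool   # non-urgent predictions are partially transparent

# Display parameters as stated in the text.
URGENCIES = {
    "urgent": UrgencyDisplay(cycle_s=7.0, shown_s=3.5, flashing=True, transparent=False),
    "intermediate": UrgencyDisplay(cycle_s=14.0, shown_s=3.5, flashing=False, transparent=False),
    "non-urgent": UrgencyDisplay(cycle_s=28.0, shown_s=3.5, flashing=False, transparent=True),
}

def is_visible(urgency: str, t: float) -> bool:
    """Whether the prediction overlay is on screen at time t (seconds)."""
    u = URGENCIES[urgency]
    return (t % u.cycle_s) < u.shown_s
```

Note how the scheme makes urgency a purely temporal code: the shape of the condition visualization never changes, only how often and how saliently it reappears.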
During the simulation, each participant was shown 33 videos. Each video displayed a standard VP with all vital signs in the normal range, along with an overlaid prediction visualization containing a single condition and urgency. To provide each participant with a randomized set of 33 videos and to ensure that each video was equally represented, we first created randomized sets of 66 videos (3 urgencies × 22 conditions). Then, each set was split in two (videos 1 to 33 and 34 to 66) and watched in sequence by the participants (see the allocation sketch below). During the videos, the participants were asked to select the condition shown (22 possible answers) and the urgency (3 possible answers). We stopped the video as soon as the participant had answered, or after one minute at the latest. After the participant had completed all questions, we played the next video in the set. All data were collected on an Apple iPad (Apple Inc., Cupertino, CA, USA) using the app iSurvey (Harvestyourdata.org, Wellington, New Zealand) [22].

Part II: Standardized Interviews
After a short break, we conducted a standardized interview with participants from Frankfurt and Wuerzburg. The question was as follows: "What do you think about the VP Predictive visualizations?". The answers were recorded using an Apple iPhone and later automatically transcribed using Trint (Trint Limited, London, UK). The transcripts were then manually checked for accuracy and translated into English using DeepL (DeepL SE, Cologne, Germany). After manually checking the translation, we divided the text into individual statements for analysis. Using the template approach, we developed a coding tree [23]. Two study authors independently coded each statement. Differences in coding were discussed, and a joint coding per statement was agreed upon.

Part III: Online Survey
Based on the interview results, we created six statements on recurring topics to be rated using Likert scales in an online survey. This survey was designed using Google Forms (Google LLC, Mountain View, CA, USA) and sent by email to all participants of study part I. The survey remained active for three weeks in July-August 2022. Halfway through this period, a single reminder email was sent.

Outcomes
Part I: Simulation Study
We defined correct prediction identification as the primary outcome. If participants correctly identified both condition and urgency, we counted this as correctly identifying the prediction. As secondary outcomes, we chose correct condition identification and correct urgency identification, defined as the correctly identified condition and urgency, respectively. In addition, we analyzed the 22 conditions and the 3 urgencies individually.

Part II and III: Standardized Interviews and Online Survey
For the standardized interviews, we analyzed the distribution of individual statements within the topics of the coding tree. For the online survey, we analyzed the distribution of the answers on the 5-point Likert scale for each statement (from "strongly disagree" to "strongly agree").

Statistical Analysis
For descriptive statistics, we show medians and interquartile ranges for continuous data and numbers and percentages for categorical data.
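Referring back to the video allocation in Part I above, the randomization can be sketched as follows; the condition names are placeholders for the study's 22 visualizations, and the function is our illustration rather than the tool actually used.

```python
import random

CONDITIONS = ["condition_%d" % i for i in range(1, 23)]  # placeholders for the 22 conditions
URGENCIES = ["urgent", "intermediate", "non-urgent"]

def make_video_sets(seed=None):
    """Build one randomized set of 66 videos (22 conditions x 3 urgencies)
    and split it into two sequences of 33, as described in the Methods."""
    rng = random.Random(seed)
    videos = [(c, u) for c in CONDITIONS for u in URGENCIES]  # 66 unique videos
    rng.shuffle(videos)
    return videos[:33], videos[33:]

set_a, set_b = make_video_sets(seed=1)
assert len(set_a) == len(set_b) == 33
```

Splitting one shuffled 66-video set between two participants is what guarantees that, across pairs of participants, every condition-urgency combination is shown equally often.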
Part I: Simulation Study
We used mixed logistic regression models with just an intercept to estimate the correct prediction, condition, and urgency identification while considering that we had repeated, non-independent measurements from each study participant. The estimates are given as percentages with 95% confidence intervals (95% CI). For estimates by condition, we added the condition information to the aforementioned model. We used a mixed logistic regression model to see if there was a learning effect by including the number of the respective question (between 1 and 33). Estimates of this model are given as odds ratios (OR).

Part II and III: Standardized Interviews and Online Survey
In part II of the study, we assessed the agreement of the two coders prior to consensus by calculating the interrater reliability using Cohen's Kappa. In part III, we used the Wilcoxon matched-pairs signed-rank test to evaluate whether the answers significantly deviated from neutral. We used Microsoft Word, Microsoft Excel version 16.77.1 (Microsoft Corporation, Redmond, WA, USA), and R version 4.2.0 (R Foundation for Statistical Computing, Vienna, Austria) to manage and analyze our data. We used GraphPad Prism version 9.4.1 (GraphPad Software Inc., San Diego, CA, USA) to generate the figures. We considered a p-value < 0.05 to be statistically significant.

Sample Size Calculation
To assess the appropriate sample size for the simulation study, we conducted a pilot study with six participants at the University Hospital Zurich. Correct prediction identification was 94.4%. Considering that these participants were already familiar with VP (but did not know VP Predictive), we calculated the sample size based on a true proportion of 90%. In this case, 70 participants are needed to construct a 95% CI for an estimated proportion that extends no more than 10% in either direction.

Results
We recruited 70 anesthesiologists and intensive care physicians in April-May 2022. All participants completed the simulation study. A total of 21 out of the 70 participants (30.0%) gave an interview, and 49 participants (70.0%) completed the online survey. Table 1 shows the study and participants' characteristics.

Correct Prediction Identification
Overall, participants correctly identified both condition and urgency in 74.3% of cases; the mixed logistic regression model estimated 77.9% (95% CI 73.2-82.0%) correct prediction identification. Figure 2 shows these results for each condition individually. It is apparent that not all conditions were identified equally well. The best-identified conditions showed close to 90% correct prediction identification, whereas a few showed less than 60% correct prediction identification. The mixed logistic regression model-based estimations tended to be a few percentage points higher.

Correct Condition Identification
Considering conditions alone (without urgencies), 2117/2310 (91.7%) were correctly identified. The mixed logistic regression model showed an accuracy of 93.8% (95% CI 93.7-93.8%). Figure 3 shows the correct condition identification for each condition individually. Most conditions were very well identified, with two exceptions: low pulse rate (68.6%) and low respiratory rate (58.1%).
Learning Effect
The mixed logistic regression model showed a significant learning effect on correct prediction identification, with the odds of correctly identifying the predictions increasing by 3% for each additional prediction shown (OR 1.03, 95% CI 1.02-1.04, p < 0.001).

Part II: Standardized Interviews
From the transcripts of the interviews, we identified 126 different statements. At first coding, the two independent raters agreed on the classification of 83.3% of the statements (105/126), with a Cohen's Kappa of 0.8. Most of the positive comments considered VP Predictive to be intuitive. Negative comments mainly concerned identification difficulties, especially with the different urgencies. Several participants noted a learning effect during the session or believed an additional learning effect could be achieved by using VP Predictive more frequently. Figure 4 shows the coding tree in detail. Note that 15.1% of the statements were not codable; these primarily represented statements not relevant to the posed question.

Part III: Online Survey
The questionnaire was completed by 70.0% of the invited participants (49/70). Most of the participants agreed or strongly agreed that VP Predictive was fun to use (32/49, 65.3%) and intuitive (25/49, 51.0%); many of them also agreed or strongly agreed that it was eye-catching (23/49, 46.9%). Almost two-thirds (32/49, 65.3%) agreed or strongly agreed that the urgency identification was difficult. Nevertheless, most participants (31/49, 63.3%) agreed or strongly agreed that they had a steep learning curve during the study session, and only very few (5/49, 10.2%) disagreed or strongly disagreed that they could imagine working with VP Predictive in the future. Figure 5 shows these results in detail.
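As a numerical aside before the Discussion: the sample-size statement in the Methods can be sanity-checked with a simple Wald approximation for the confidence interval of a proportion. This is a simplified sketch under a normal-approximation assumption, not necessarily the exact method the authors used; with the assumed true proportion of 90% and n = 70, it yields a half-width of about 7 percentage points, comfortably within the stated 10% bound.

```python
import math

def wald_ci_half_width(p: float, n: int, z: float = 1.96) -> float:
    """95% Wald confidence-interval half-width for a proportion p estimated from n subjects."""
    return z * math.sqrt(p * (1.0 - p) / n)

print(round(wald_ci_half_width(0.90, 70), 3))  # 0.07, i.e. roughly +/-7 percentage points
```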
Discussion
We sought to investigate VP Predictive. This technology is an extension of the original VP designed to easily represent vital sign predictions with little cognitive load. Participants correctly identified both condition and urgency in the prediction visualizations in almost three quarters of the cases (74.3%). The majority found VP Predictive to be enjoyable to use, with 65.3% rating it as fun and only 16.3% considering it not intuitive.

In this study, correct condition identification was high (91.7%). Regarding the conditions with the lowest percentages of correct identification, i.e., low pulse rate and low respiratory rate, we believe the reason for this result lies in the short display time (3.5 s) combined with the slow movement of the corresponding visualizations. In this short time frame, the visualizations, which move very slowly, perform less than a complete cycle, probably making users less confident about what they saw. We, therefore, believe that a longer display time may solve this problem.

Compared to correct condition identification, correct prediction identification (i.e., correct identification of both condition and urgency) was not an equally high percentage (74.3%). This finding is also in line with the participants' subjectively perceived difficulty in identifying the different urgencies, as expressed during the interviews and in the survey. The different urgencies aimed to provide vital sign predictions with an expected occurrence time. For example, the prediction for low blood pressure could be displayed with three different urgencies (e.g., 1, 5, or 20 min). The differences in the percentages of correct identification become understandable when considering that the identification of conditions alone involved the interpretation of less visual information than when additional urgencies also needed to be identified.
Interestingly, the primary outcome result in the pilot study differed significantly from the one in the actual study (pilot 94.4% vs. study 74.3% correct prediction identification). One possible explanation for this difference is that the pilot study cohort was already familiar with the original VP visualizations (although not with the prediction visualizations) and, thus, had fewer new things to learn before the study. In comparison, the majority of the actual study participants were encountering VP for the first time. This raises the question of whether a longer familiarization period could have improved the percentage of correct urgency identification, and, thus, also that of correct prediction identification.

This hypothesis is supported by the learning effect that we confirmed quantitatively and from the participants' feedback. Intuitiveness and learning ease are essential for accepting new technologies and are crucial for their successful clinical introduction [20]. In our case, these requirements seem to have been achieved, as the majority of the survey participants could imagine working with VP Predictive in the future.

Considering our study results, we believe that, with some modifications, VP Predictive may have the potential to display vital sign predictions generated by ML models in a way that professionals can understand and translate into direct actions. VP Predictive is intended to guide users' attention. When alerted by a prediction, caregivers should ultimately consider all available information and decide on an appropriate response (e.g., fluids or vasopressors in case of a low blood pressure prediction). However, like any new technology, it needs to be learned and trained before it can be integrated into practice. VP Predictive can probably simplify this process by intuitively displaying predictions in the form of visual representations, compared to using numbers or curves. On the other hand, this can also lead to the loss of potentially useful information.
Strengths and Limitations
First, the conditions under which the study took place differ from the clinical reality, in which many more factors are present [24]. Second, participants evaluated only videos in which the VP was shown in a physiological state and in which exactly one prediction was shown at a time. Such scenarios differ from the more complex clinical reality, so studies in more realistic settings will be needed to evaluate the true clinical value (e.g., a high-fidelity simulation study) [25]. Third, the standardized interviews were conducted only with willing participants from Frankfurt and Wuerzburg, and the online survey was not completed by all participants, thus reducing the sample size of these two study parts compared to that of the simulation study.

At the same time, a computer-based study also has advantages over a real-life study. First, it allows completely new technologies to be tested without patient risks [26]. It also standardizes the study conditions, an essential prerequisite for minimizing possible bias due to external disturbances.

Another strength of our study is that it was multicenter and multinational, allowing the results to be generalized to a certain extent. Based on the pilot study, the trial was adequately powered; however, the participants' selection was based on availability during working hours and, therefore, was not random.

Conclusions
Despite promising results and feedback, the current Visual Patient Predictive visualizations need some modifications, followed by further high-fidelity simulation studies, to test their suitability for the intended task of displaying vital sign predictions to healthcare providers in an easily understandable way. In this study, care providers correctly identified >90% of the conditions (i.e., without considering urgencies). The percentage of correct identification decreased when considering urgencies in addition to conditions. Therefore, in future development of the technology, we will focus on either only displaying conditions (without urgencies) or on improving the visualizations of urgency to enhance usability for human users.

Figure 1. Visual Patient and Visual Patient Predictive. (a) Visual Patient displays vital signs in the form of colored visualizations; (b) Visual Patient Predictive uses the same visualizations as blank figures with dashed borders. Images (c-f) show examples where tidal volume (c), bispectral index (d) and train-of-four ratio (e) are predicted to become high, and oxygen saturation (f) is predicted to become low, respectively.
Figure 2. Correct prediction identification (correctly identified condition and urgency) for each condition individually: (a) the percentages of correct prediction identification, (b) the estimates based on the mixed logistic regression model. ST-deviation, ST-segment deviation; TOF, train-of-four ratio; Temp, body temperature; etCO2, end-expiratory carbon dioxide concentration; BIS, bispectral index; TV, tidal volume; BP, blood pressure; SpO2, oxygen saturation; RR, respiratory rate; HR, heart rate; CVP, central venous pressure; PR, pulse rate (vital signs are color-coded for better readability).

Figure 4. Distribution of the statements within the topics of the coding tree. We show percentages and numbers.

Figure 5. Doughnut charts showing the statements and the distribution of the answers on the 5-point Likert scale. The results are shown as numbers. We calculated p-values using the Wilcoxon signed-rank test to determine whether the responses significantly deviated from neutral. VPP, Visual Patient Predictive; IQR, interquartile range.
Study of muscle fibers of the extensor digitorum longus and soleus muscles of C57BL/6 females exposed to glyphosate during pregnancy and lactation

ABSTRACT
Objective
To evaluate the morphology and morphometry of the muscles extensor digitorum longus and soleus of C57BL/6 females, who were exposed to glyphosate during pregnancy and lactation.

Methods
Twelve female mice from the C57BL/6 lineage were used. After detection of pregnancy, they were divided into a Control Group, which received only water, and a Glyphosate Group, which received water with 0.5% glyphosate during pregnancy and lactation. Both groups received a standard diet ad libitum. After weaning, the females were euthanized and weighed; naso-anal length was measured, and fat pads were collected and weighed. The muscles extensor digitorum longus and soleus were collected, and their length and weight were measured. Then, the muscles were fixed in Methacarn for the histological study of muscle fibers.

Results
The Glyphosate Group presented lower weight gain during pregnancy and also lower final body weight and naso-anal length; however, the other body parameters evaluated did not present a significant difference in relation to the Control Group. Significant differences were also not observed in the analysis of muscle fibers and connective tissue.

Conclusion
Exposure to 0.5% glyphosate during pregnancy and lactation resulted in lower weight gain during pregnancy, lower final weight, and shorter naso-anal length. Despite not directly altering the morphology of muscle tissue, these results may indicate exposure sufficient to interfere with animal metabolism.

❚ INTRODUCTION
Glyphosate (N-(phosphonomethyl)glycine) is an organophosphorus compound that ranked first in the list of the ten best-selling active ingredients in Brazil in 2018. (1) It is present in the formulation of Roundup® Original DI (Monsanto do Brasil LTDA., São Paulo, SP, Brazil), one of the most widely used herbicides in the world, (2) which accounted for almost 72% of global pesticide use in 2016. (3) Its mechanism of action consists in inhibiting the enzyme 5-enolpyruvylshikimate-3-phosphate synthase of the shikimate pathway, responsible for the production of the intermediate chorismate, a compound required in the synthesis of aromatic amino acids essential for plant development. (4) Although this pathway is not present in mammals, studies have shown that the herbicide is toxic in rats (5) and mice, (6) as well as in humans, (7) and is associated with the genesis of several diseases. (8)

In Brazil, there are still no limits set on glyphosate or any other herbicide in water or soil by regulatory agencies. According to the Environmental Protection Agency (EPA), an agency of the United States, the glyphosate limit in drinking water is 700 µg/L, with an acceptable daily dose of 0.05 mg/kg of body weight. (9) However, it is common for the stipulated dose to be exceeded, which in turn is reflected in increased concentrations of this compound in the environment, (10) promoting contamination of rivers and surface waters, (11) and becoming a potential source of exposure for humans. (12)

With regard to human health, exposure to glyphosate has been recurrently associated with health problems such as cancer, endocrine disruption, (12) depression, Parkinson's disease, and Alzheimer's disease, (13) among others.
In rodents, experimental studies have shown that pesticide exposure increases the incidence of tumors, (6) and promotes abnormalities in liver, heart, and brain function, (14) as well as damage to cell junctions of intestinal cells, leading to increased membrane permeability. (15) Furthermore, organophosphate pesticides have been found to promote inhibition of acetylcholinesterase (AChE), (16) and to lead to degeneration (17) and necrosis of muscle fibers. (18)

It is known that pregnancy is a period of numerous physiological changes, which make both mother and fetus vulnerable. Therefore, it is strictly important to reduce exposure to any toxins during this period. However, maternal exposure to pesticides is becoming increasingly common, since it can occur through contact with air, water, or contaminated food, in the work environment, during the mixing of chemical compounds, in the application of pesticides, in the cleaning of equipment, or even indirectly, during the handling of contaminated crops or food. (19)

Although some experimental studies report that exposure to glyphosate promotes changes in some tissues, and in the metabolism of the offspring of rats and mice, (2,20) the effects of exposure to this herbicide on the skeletal muscles of females exposed during pregnancy and lactation are not yet known. Thus, this study is of great importance for the understanding of possible musculoskeletal changes promoted by exposure to glyphosate.

❚ OBJECTIVE
To evaluate the morphology and morphometry of the muscles extensor digitorum longus and soleus of C57BL/6 females exposed to glyphosate during pregnancy and lactation.

❚ METHODS
Obtaining the animals
Initially, 30 C57BL/6 mice of reproductive age were used, 20 females and 10 males, aged between 60 and 90 days, with a mean body weight of 20 g to 25 g. The animals were kept under controlled temperature (28 ± 2 °C) and light conditions (12-hour light/dark cycle), and received standard rodent chow (Supralab, São Leopoldo, RS, Brazil) and filtered water ad libitum throughout the experiment. All experiments reported in this study were conducted in accordance with national and international legislation, as per the guidelines of the National Council for the Control of Animal Experimentation.

Crossbreeding
After 7 days of acclimatization, vaginal smears were taken to follow the estrous cycle of the females, which were allocated for mating when they were in proestrus, at a proportion of two females to one male, during the night. In the morning of the following day, the vaginal smear was taken again to identify spermatozoa, and the estrous cycle was determined to confirm pregnancy. Females considered pregnant showed the presence of spermatozoa or a 4-day stay in the diestrus phase after mating. The females that were not pregnant were again submitted to the mating process until pregnancy was confirmed.

Glyphosate administration
Once pregnancy was confirmed, the females were placed in individual boxes, separated into a Control Group (CTL, n=6), which received filtered water during the entire period of pregnancy (21 days) and lactation (30 days), and a Glyphosate Group (GF, n=6), which received the herbicide 0.5% glyphosate Roundup® Original DI in drinking water, from the fourth day of pregnancy until the end of lactation. This dosage had been used in a previous study, (20) and was chosen because it mimics direct groundwater contamination, being similar to the amount of pesticide found in water after agricultural practices. (21)
The commercial formulation of Roundup® Original DI glyphosate used contained 445 g/L of N-(phosphonomethyl)glycine diammonium salt, equivalent to 370 g/L (37.0% m/v) of the active component glyphosate [N-(phosphonomethyl)glycine].

Euthanasia of females
After 30 days of lactation, weaning occurred and the females were euthanized after completing two estrous cycles. The animals were anesthetized with xylazine hydrochloride (Anasedan®, Vetbrands, Axxon Group, Rio de Janeiro, RJ, Brazil) and ketamine hydrochloride (Dopalen®, Vetbrands, Axxon Group, Rio de Janeiro, RJ, Brazil) at concentrations of 9 mg/kg and 90 mg/kg, respectively, and were then euthanized. Females were weighed after euthanasia. Naso-anal length (NAL) was measured, and retroperitoneal and perigonadal fat was collected and weighed.

Collection of the muscles extensor digitorum longus and soleus
To collect the extensor digitorum longus (EDL) muscle, the skin was detached and the tibialis anterior muscle was removed for dissection and removal of the EDL muscle. The gastrocnemius muscle was removed for dissection and removal of the soleus (SOL) muscle. The EDL and SOL muscles were weighed (g) on an analytical scale (Shimadzu UX620H, São Paulo, SP, Brazil) and their length (mm) was measured with the aid of a digital caliper (Digimess®, São Paulo, SP, Brazil).

Histological study
For the study of muscle fibers, the EDL and SOL muscles of the right antimere of the pelvic limbs were removed and stored in a glass container with Methacarn fixative. After 24 hours, they were transferred to 70% alcohol and embedded in paraffin, with an n-butyl alcohol embedding protocol. The EDL and SOL muscles were transversely cut and submitted to hematoxylin-eosin staining (22) for morphological analysis of the muscle fibers; ten microscopic fields (40x lens) were analyzed for each animal, quantifying the numbers of nuclei and fibers, the nucleus-fiber ratio, and the area and the major and minor diameters of each muscle fiber. The same cutting procedure was performed for Masson's trichrome staining, (23) which allows connective tissue to be quantified, by analysis of ten microscopic fields for each animal (20x lens). The images of the muscle fibers were obtained using an Olympus BX60® microscope coupled to an Olympus DP71 camera (Tokyo, Japan), with the aid of the DP Controller 3.2.1.276 software. The Image-Pro Plus 6.0® software (Media Cybernetics, Maryland, USA) was used for morphological and morphometric analysis of the materials.

Statistical analysis
The data obtained were submitted to statistical analysis using the GraphPad Prism® (La Jolla, CA, USA) software, taking into consideration the results of the normality tests. For normally distributed data, the statistical test used was the Student's t test, whereas for non-normal data, the Mann-Whitney test was used (a minimal sketch of this test-selection rule is given below). Values of p<0.05 were considered significant.

❚ RESULTS
Pregnancy and lactation data
The GF Group had lower gestational weight gain (p=0.0327) when compared to the CTL Group (Table 1). However, the data for weight loss during lactation, gestation time, and litter size showed no statistical differences when compared to the CTL Group (Table 1).
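The test-selection rule from the Statistical analysis section can be sketched as below. The text does not name the normality test used, so the Shapiro-Wilk test is an assumed stand-in here, and the function and example values are our illustration rather than the authors' actual workflow or data.

```python
from scipy import stats

def compare_groups(ctl, gf, alpha=0.05):
    """Student's t test when both samples pass a normality check,
    otherwise the Mann-Whitney U test, as in the Methods."""
    normal = (stats.shapiro(ctl).pvalue > alpha and
              stats.shapiro(gf).pvalue > alpha)
    if normal:
        return "Student's t", stats.ttest_ind(ctl, gf).pvalue
    return "Mann-Whitney", stats.mannwhitneyu(ctl, gf).pvalue

# Hypothetical example with made-up weight-gain values (grams):
ctl = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
gf = [4.2, 4.5, 4.1, 4.4, 4.0, 4.3]
print(compare_groups(ctl, gf))
```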
Morphological and morphometric analysis of muscle fibers
The evaluation of the muscle fibers of the EDL and SOL muscles showed fibers with preserved morphology, maintaining the polygonal aspect, the presence of peripheral nuclei, and the occasional presence of central nuclei in the two groups studied (Figure 1). As to the morphometric analysis of the muscle fibers of the EDL and SOL muscles, none of the parameters evaluated showed significant differences between the CTL and GF Groups (Table 3).

Body data
Animals from the GF Group showed lower body weight (p=0.0103) and NAL (p=0.0002) when compared to the CTL Group (Table 2). In contrast, the parameters of retroperitoneal and perigonadal fat weights were similar between the Groups (Table 2). The weight and length data of the EDL and SOL muscles also showed no statistical differences when comparing the Groups (Table 2).

Analysis of the amount of connective tissue
Masson's trichrome staining showed the presence of connective tissue between muscle fibers, especially in the perimysium involving the fascicles (Figure 2). Regarding the estimated amount of connective tissue in the two muscles studied, no statistical differences were observed between the CTL and GF Groups (Figure 3).

❚ DISCUSSION
The study showed that the exposure of females to glyphosate during pregnancy and lactation promoted lower weight gain during gestation, which was also observed in other studies that exposed pregnant rats to 1% (14,24) and 0.5% (2) glyphosate concentrations. Furthermore, glyphosate exposure also resulted in lower final body weight and NAL of the exposed animals, which corroborates the findings of Teleken et al., (20) who also exposed mice to 0.5% glyphosate during these phases of their life cycle. Although the present study did not assess the animals' water and food consumption, Beuret et al. (24) and McKenna et al. (25) showed that animals exposed to glyphosate had lower water and food consumption compared to unexposed animals, which would explain the lower weight gain and the lower final body weight and NAL of the GF Group, since glyphosate administration may reduce the palatability of the ingested water, or promote changes in the thirst regulatory centers, due to the effects of the herbicide and its metabolites. (14)

As to muscle fibers, Bright et al. (17) demonstrated that rats exposed to sublethal doses of sarin presented degeneration of muscle fibers and mononuclear infiltrates in the diaphragm muscle when euthanized 24 hours and 3 days after exposure, respectively. De Bleecker et al. (18) noted that exposure to the paraoxon compound promoted fiber necrosis in several muscle groups of rats, with predominance in the diaphragm muscle. However, mixed muscles, such as the masseter and soleus, were also affected. In view of these results, the authors observed a correlation between the oxidative capacity of muscles and their susceptibility to necrosis, with mixed muscles in which oxidative fibers predominate being more prone to necrosis.

Although the literature findings demonstrated that exposure to organophosphorus compounds promotes degeneration and necrosis of muscle fibers, as well as a relation between the predominant fiber type and susceptibility to necrosis, the same was not observed in the EDL and SOL muscle fibers upon exposure to glyphosate. This may be explained by recent findings showing that the toxic potential of this herbicide during direct exposure is minimal, despite the current association of glyphosate exposure with the occurrence of diseases. (26)
Thus, the absence of changes in the morphological and morphometric parameters of the EDL and SOL muscles of the females, and of any necrotic processes pointing to possible fiber degeneration, may be associated with the low toxicity of the herbicide in this first, direct exposure. However, even if the toxicity of glyphosate in direct exposure is low, a study showed that, despite not causing effects in the first generation, this herbicide promotes an increase in the occurrence of diseases in the offspring of exposed rats, pointing to its ability to promote epigenetic changes that are transmitted to subsequent generations. (27)

Due to the effects promoted by exposure, glyphosate has been investigated as a potential chemical endocrine disruptor, (28) that is, a substance capable of altering the maternal environment and influencing the stages of intrauterine development, as well as increasing the risk of chronic diseases in adulthood. (29) Despite its low toxicity during direct exposure, the potential action of glyphosate as an endocrine disruptor can promote changes in exposed offspring, and it is strictly necessary to take this fact into account in the etiology of diseases in future generations.

❚ CONCLUSION
Exposure to 0.5% glyphosate during pregnancy and lactation promoted lower weight gain during gestation and lower body weight and size of the females. Although the morphological characteristics of muscle tissue were not altered, the change in body parameters indicates that glyphosate may interfere in the metabolism of the animal, promoting changes in its cycles of energy acquisition and storage.
White Matter Structural Network Analysis to Differentiate Alzheimer's Disease and Subcortical Ischemic Vascular Dementia

To explore the value of white matter structural network analysis in the differentiation of Alzheimer's disease (AD) and subcortical ischemic vascular dementia (SIVD), 67 participants [31 AD patients, 19 SIVD patients, and 17 normal controls (NC)] were enrolled in this study. Each participant underwent 3.0T MRI scanning. Diffusion tensor imaging (DTI) data were analyzed by graph theory (GRETNA toolbox). Statistical analyses of global parameters [gamma, sigma, lambda, global shortest path length (Lp), global efficiency (Eg), and local efficiency (Eloc)] and nodal parameters [betweenness centrality (BC)] were performed. Network-based statistical analysis (NBS) was employed to analyze group differences in structural connections. The diagnostic efficiency of nodal BC in identifying different types of dementia was assessed by receiver operating characteristic (ROC) analysis. There were no significant differences in gender or years of education among the groups. There were no significant differences in sigma and gamma in AD vs. NC and SIVD vs. NC, whereas the Eg values of AD and SIVD patients were significantly decreased, and the lambda values were increased. The BC of the frontal cortex, left superior parietal gyrus, and left precuneus in AD patients was markedly reduced, while the BC of the prefrontal and subcortical regions was decreased in SIVD patients, compared with NC. SIVD patients had decreased structural connections in the frontal, prefrontal, and subcortical regions, while AD patients had decreased structural connections in the temporal and occipital regions and increased structural connections in the frontal and prefrontal regions. The highest area under the curve (AUC) of BC was 0.946, in the right putamen, for AD vs. SIVD. White matter structural network analysis may be a potential and promising method, and the topological changes of the network, especially the BC change in the right putamen, were valuable in differentiating AD and SIVD patients.

INTRODUCTION
According to a report from the World Health Organization in 2017, almost 50 million people have been diagnosed with dementia, and the number is expected to increase to 82 million by 2030 (The Lancet Neurology, 2018). Dementia creates a heavy financial burden on society. Alzheimer's disease (AD) is a progressive and degenerative disease resulting in cognitive impairment and behavior dysfunction. Vascular dementia (VaD), due to various vascular pathologies, is the second most common cause of dementia after AD (Kang et al., 2016). AD and VaD account for approximately 60% and 20% of dementia cases, respectively (Rizzi et al., 2014). Subcortical ischemic vascular dementia (SIVD) accounts for a large part of VaD (Benjamin et al., 2016). SIVD has attracted attention due to its high prevalence. It is hard to distinguish SIVD from AD clinically, due to their similar neuropsychological symptoms. The main issues for SIVD patients are executive and semantic memory dysfunction (Palesi et al., 2018). Previous studies demonstrated that the cognitive impairment of SIVD was related to disconnection of the frontal subcortical circuit (Seo et al., 2010). The progression of SIVD is reversible, as the vascular risk factors of SIVD are controllable. However, AD patients mostly display diffuse cortical atrophy and the progression is irreversible (McDonald et al., 2009).
The microstructure of the human brain has been explored in vivo by neuroimaging in recent years. Many studies have proved that the cognitive impairment of SIVD results from lesions in the white matter (Chen et al., 2018). Diffusion tensor imaging (DTI) is acknowledged as a precise MRI method that is sensitive to microstructural changes of white matter. Analysis of DTI data can thus help to understand cerebral microstructure change, and may further elucidate the etiology of cognitive and behavioral deficits in SIVD patients. In the present study, we aim to explore brain structural network alterations in AD and SIVD patients, and to explore the value of brain structural network analysis in the differentiation of AD and SIVD.

Participants
A total of 67 right-handed Chinese Han subjects [19 SIVD, 31 AD, and 17 normal controls (NC)] were enrolled in this study at the Department of Radiology in the First Affiliated Hospital of Soochow University from June 2018 to June 2020. This study was approved by the ethics committee of the First Affiliated Hospital of Soochow University, and written informed consent was obtained from each subject prior to participation. All subjects underwent a comprehensive neuropsychological test and 3.0 Tesla MRI scanning of the whole brain. The cognitive functions of all the subjects were evaluated by an experienced neuropsychologist. General cognitive function of participants was evaluated using the Beijing version of the Montreal cognitive assessment (MoCA) and the mini mental state examination (MMSE; Lu et al., 2011). Episodic memory function was assessed by the auditory verbal learning test, Huashan version (AVLT), including the auditory verbal learning test immediate recall (AVLT-IR) and the auditory verbal learning test delayed recall (AVLT-DR; Zhao et al., 2012). Executive function was assessed by the Stroop color-word test. The speed (Stroop test 1) and the accuracy (Stroop test 2) of performance were measured.

The inclusion criteria for AD patients referred to the National Institute on Aging-Alzheimer's Association criteria (McKhann et al., 2011). In addition, AD patients in our study also met the following criteria: (1) absence of white matter hyperintensities (WMH) or mild severity of WMH on T2 FLAIR images; and (2) a MoCA score less than 26. According to the MoCA score, the AD patients and SIVD patients were subdivided into mild cognitive impairment (18 ≤ MoCA < 26) and dementia (MoCA ≤ 17). NC were age- and gender-matched healthy volunteers: (1) without clinical evidence or history of cognitive dysfunction, with MoCA ≥ 26; (2) without brain abnormality detected on a routine non-contrast MRI examination; and (3) with no neuropsychological disorders. The exclusion criteria for all participants were as follows: (1) metabolic conditions, such as hypothyroidism or folic acid deficiency; (2) a history of stroke; (3) central nervous system diseases that could cause cognitive decline, such as Parkinson's disease, epilepsy, multiple sclerosis, and so on; and (4) MRI scanning contraindications.

Image Acquisition
All MRI examinations were performed using a 3.0 T MRI scanner (Signa HDxt, GE Healthcare, Milwaukee, WI, USA) with an eight-channel head coil. A three-dimensional fast spoiled gradient recalled (3D-FSPGR) sequence was performed with the following parameters: repetition time (TR) 6.50 ms, echo time (TE) 2.80 ms, inversion time (TI) 900 ms, flip angle 8°, field of view (FOV) 256 × 256 mm, number of slices 176, slice thickness 1 mm without slice gap, and scan time 4 min.
DTI data were obtained using an echo planar imaging (EPI) sequence with the following parameters: TR 17,000 ms, TE 85.4 ms, flip angle 90°, matrix size 128 × 128, FOV 256 × 256 mm, slice thickness 2 mm without slice gap, number of signal averages (NEX) 2, 30 non-collinear diffusion-encoding directions (b = 1,000 s/mm² for each direction), and a scan time of 9 min. Additionally, axial T2-weighted and FLAIR sequences were obtained to detect visible white matter damage.

Data Processing

The PANDA toolbox, based on FMRIB Software Library v5.0, was applied in the DTI data processing (Cui et al., 2013), which comprised several steps: brain extraction, DTI image format conversion, realignment, eddy current and motion artifact correction, fractional anisotropy (FA) calculation, and diffusion tensor tractography. When tracking white matter fibers, an FA threshold of 0.2 and a turning angle threshold of 45° were set for the Fiber Assignment by Continuous Tracking (FACT) algorithm (Basser et al., 2000). The Anatomical Automatic Labeling (AAL) atlas was used to parcellate each brain into 90 regions of interest (ROIs). The nodes of the structural network were defined according to the AAL template, and interconnections between brain regions were taken as the edges of the structural network. An edge was defined between two regions if the number of interconnecting white matter fibers was more than 3.

The global topological parameters, including the small-world measures [gamma, sigma, lambda, and global shortest path length (Lp)], global efficiency (Eg), and local efficiency (Eloc), were obtained with the GRETNA toolbox (Wang et al., 2015). Eg indicates how efficiently information is exchanged over the whole network. Eloc reflects clustering and specialization within a network and the fault tolerance of the network. Lp, a measure of the average nodal shortest path length, reflects the speed of information transfer across the whole brain. Sigma, the ratio of gamma to lambda, is a measure of the small-world property of the network. In addition, the betweenness centrality (BC) of each node was measured to describe nodal characteristics of the white matter structural network; BC is the fraction of all shortest paths in the network that pass through a given node.
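The study computed these measures with the GRETNA toolbox in MATLAB. Purely as an illustrative cross-check, the edge-definition rule and several of the metrics can be sketched in Python with networkx; this is our sketch, not the study's pipeline. The fiber-count matrix below is random placeholder data, the network is treated as binary, and the random-network normalization required for gamma, lambda, and sigma is omitted:

import numpy as np
import networkx as nx

N_REGIONS = 90  # AAL atlas parcellation
rng = np.random.default_rng(0)

# Placeholder fiber-count matrix; in practice this comes from FACT tractography
fibers = rng.integers(0, 10, size=(N_REGIONS, N_REGIONS))
fibers = np.triu(fibers, 1)
fibers = fibers + fibers.T  # symmetric, zero diagonal

# Edge rule from the paper: connect two regions if more than 3 fibers link them
adjacency = (fibers > 3).astype(int)
G = nx.from_numpy_array(adjacency)

Eg = nx.global_efficiency(G)             # mean of 1/d_ij over all node pairs
Eloc = nx.local_efficiency(G)            # mean efficiency of each node's neighborhood
Lp = nx.average_shortest_path_length(G)  # characteristic path length (assumes a connected graph)
bc = nx.betweenness_centrality(G)        # fraction of shortest paths passing through each node

print(f"Eg={Eg:.3f} Eloc={Eloc:.3f} Lp={Lp:.3f}")
print("node with highest BC:", max(bc, key=bc.get))

On real data, the same calls would be applied per subject to the thresholded tractography matrix before group-level statistics.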
Demographics and Clinical Variables

All statistical analyses were performed using the Statistical Product and Service Software (SPSS ver. 20.0, Chicago, IL, USA). The normality of the distribution of continuous variables was examined by the Kolmogorov-Smirnov test. Differences in age and years of education among groups were analyzed by the Kruskal-Wallis H test. Differences in MMSE, MoCA, AVLT-IR, AVLT-DR, and Stroop tests 1 and 2 among groups were analyzed by general linear modeling, with age, gender, and years of education as covariates; Bonferroni correction was used to adjust p values in multiple comparisons. The distribution of cognitive severity between AD and SIVD patients was analyzed by the χ² test. Group differences in categorical variables were analyzed by the Pearson chi-squared test when the sample size was over 40 and the minimal expected frequency was over 5; otherwise, the continuity-corrected chi-squared test was chosen, and an R×C table was used when a categorical variable had more than two levels. A p value of less than 0.05 was considered statistically significant, and continuous variables are reported as mean ± SD.

Comparison of Topographic Network Parameters Among Groups

Statistical analyses of global and nodal parameters were performed with the GRETNA toolbox. Between each pair of groups, global network parameters and the BC of each node were compared by a two-sample t-test with FDR correction, with age, gender, and years of education as covariates. For global and nodal parameters, a p-value of less than 0.001 was considered statistically significant. To evaluate the diagnostic accuracy of BC, in nodes showing group differences, for identifying AD and SIVD patients, receiver operating characteristic (ROC) analysis was performed and the area under the curve (AUC) was calculated with MedCalc (MedCalc statistical software, ver. 15.8).

White Matter Structural Connectome Comparison Between Groups

Network-based statistical analysis (NBS), a method that improves statistical power while controlling the type I error rate, was applied to identify changes in structural connections in AD patients vs. NC and SIVD patients vs. NC. Statistical comparisons in NBS were conducted with age, sex, and years of education as covariates. A p-value of less than 0.05 was considered statistically significant.
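The ROC analysis described above was run in MedCalc; for illustration only, an equivalent AUC computation can be sketched in Python with scikit-learn, using simulated nodal BC values rather than the study's data:

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)

# Simulated nodal BC values for one node: label 0 = SIVD (n=19), label 1 = AD (n=31)
labels = np.array([0] * 19 + [1] * 31)
bc_values = np.concatenate([
    rng.normal(0.02, 0.01, 19),  # hypothetical SIVD group
    rng.normal(0.05, 0.01, 31),  # hypothetical AD group
])

auc = roc_auc_score(labels, bc_values)
fpr, tpr, _ = roc_curve(labels, bc_values)
print(f"AUC = {auc:.3f}")

With well-separated group distributions like these, the AUC approaches 1, as with the 0.946 reported below for the right putamen; an AUC of 0.5 would indicate chance-level discrimination.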
Demographic and Neuropsychological Characteristics

The demographic information and neuropsychological performance of AD and SIVD patients are shown in Table 1. As shown in Table 1, there were no significant differences in gender or years of education among the three groups. Compared with NC, the scores of MoCA, MMSE, AVLT-IR, and AVLT-DR in the AD and SIVD groups were significantly decreased. Performance on Stroop tests 1 and 2 was significantly worse in SIVD patients than in both AD patients and NC. However, there were no significant differences in the scores of MMSE, MoCA, AVLT-IR, and AVLT-DR between AD and SIVD patients, and no significant differences in Stroop tests 1 and 2 between AD patients and NC. There were 19 AD patients and 14 SIVD patients with dementia, and there was no statistical difference in the distribution of severity of cognitive dysfunction between AD and SIVD patients (p = 0.369).

Between-Group Difference in the Global Parameters of the Network

All subjects showed small-world organization (sigma > 1) in the DTI structural network. As shown in Table 2, there were no statistically significant differences in sigma and gamma when the AD group was compared with NC or when the SIVD group was compared with NC. The Eg values of AD and SIVD patients were significantly decreased and their lambda values were increased. The Eloc of the SIVD group was significantly reduced in SIVD patients vs. NC. Moreover, the lambda of SIVD patients was also increased when SIVD patients were compared with AD patients. SIVD patients showed significantly increased Lp when compared with both NC and the AD group. Compared with AD patients, the Eg and Eloc of SIVD patients were reduced.

Between-Group Difference in Nodal Parameters

When AD patients were compared with NC, the BC of prefrontal cortices such as the left inferior frontal gyrus (orbital part) and left superior frontal gyrus (medial part) was significantly decreased in the AD group. In addition, AD patients had significantly decreased BC in the right rolandic operculum, right postcentral gyrus, left superior parietal gyrus, and left precuneus (Table 3; Figure 1A). Compared with NC, SIVD patients demonstrated decreased BC mainly in prefrontal regions, such as the right superior frontal gyrus, right superior frontal gyrus (orbital part), right inferior frontal gyrus (opercular part), and left anterior cingulate gyrus, as well as in the right putamen and right superior occipital gyrus (Table 4; Figure 2A).

Between-Group Difference in Structural Connectivity

Compared with NC, the AD group demonstrated 41 significantly decreased structural connections and 18 increased structural connections. The decreased structural connections of AD patients mainly involved temporal and occipital regions, while approximately 50% of the increased structural connections involved frontal and prefrontal regions (Figure 1B). When SIVD patients were compared with NC, SIVD patients manifested 80 significantly decreased structural connections and 13 increased structural connections. Structural connections between frontal and prefrontal regions were significantly decreased in SIVD patients. Additionally, the structural connections between frontal-subcortical regions and prefrontal-subcortical regions were also decreased (Figure 2B).

[Figure 1: Structural connectivity alterations between the AD and NC groups. The matrix axes represent the 90 brain regions of the automated anatomical labeling (AAL) template; dot color encodes connection strength, with positive values indicating increased and negative values indicating decreased structural connections in the AD group compared with the NC group.]

[Figure 2: Structural connectivity alterations between the SIVD and NC groups. The matrix axes represent the 90 brain regions of the AAL template; dot color encodes connection strength, with positive values indicating increased and negative values indicating decreased structural connections in the SIVD group compared with the NC group.]

Discriminative Power of BC Among AD, SIVD, and NC Groups

The AUC of BC in the cortices that differed significantly between AD patients and NC ranged from 0.766 to 0.935; as shown in Table 3, the AUC of BC in the left precuneus was the highest, at 0.935. When SIVD was compared with NC, the AUC of BC in the right putamen was the highest, at 0.974 (Table 4). As shown in Table 5, the AUC of BC in significantly different nodes (from the SIVD vs. NC and AD vs. NC comparisons) for identifying AD and SIVD patients ranged from 0.599 to 0.946; the AUC of BC in the right putamen was the highest, at 0.946. [Table 5 footnote: AD, Alzheimer's disease; SIVD, subcortical ischemic vascular dementia; AUC, area under curve; other abbreviations as in Tables 3 and 4.]

DISCUSSION

According to the present study, executive dysfunction differed significantly between the SIVD and AD groups. SIVD patients performed worse in the Stroop color-word tests than AD patients: SIVD patients spent more time on Stroop test 1 (lower speed), and naming/reading errors (lower accuracy) occurred more frequently than in AD patients, as shown in Table 1.
These results indicated that the executive function of SIVD patients was more seriously damaged. Subcortical vascular pathology, which interrupts frontal-striatal circuits, is frequently present in SIVD patients, and executive dysfunction in SIVD patients may result from the disruption of cortical and subcortical connections (O'Brien and Thomas, 2015; Tuladhar et al., 2015). Many studies have suggested that memory deficit is more obvious in AD patients; however, the SIVD group performed worse than the AD group in the present study (Graham et al., 2004; Reed et al., 2007). Liu et al. (2019) found that memory loss existed in the deteriorating course of SIVD and that memory decreased with cognitive deterioration. Moreover, studies have shown that memory deterioration correlates with the overall severity of dementia (Kang et al., 2016). Thus, the discrepancy supposedly resulted from the uneven distribution of global cognitive severity.

DTI and the graph theory method were applied to explore the structural network alterations of AD and SIVD patients. In the present study, AD patients, SIVD patients, and NC all exhibited small-world properties. Small-world organization, reflecting an optimal balance of integration and segregation, appears to be a ubiquitous feature of anatomical connectivity (Bassett and Bullmore, 2006; Zhou et al., 2020). Sigma is a measure of the small-world property of the structural network (Sun et al., 2017), and a decreased sigma indicates a less optimal topological structure. Although there were no significant differences in sigma between AD and NC or between SIVD and NC, this does not indicate that the small-world topological properties of AD and SIVD were normal: because sigma is calculated by dividing gamma by lambda, the sigma value can be affected by both the gamma value and the lambda value. Lambda is the ratio of the characteristic path length between the real network and 100 random networks, which quantifies the overall routing efficiency of a network (Zhang et al., 2019). The lambda values of SIVD and AD patients were clearly increased in the present study, suggesting that network integration in SIVD and AD patients was disrupted. Eg and Eloc measure, respectively, the capacity for parallel information transmission over the network and the fault tolerance of the network (Zhang et al., 2019). The Eg and Eloc in SIVD patients and the Eg in AD patients were significantly reduced compared with NC, which indicated that global integration of the structural network was disrupted and information processing was impaired in the AD and SIVD groups (Bai et al., 2012). Lp, a measure of the average nodal shortest path length, reflects the speed of information transfer across the whole brain (Li et al., 2017); the significantly increased Lp in SIVD indicated disrupted global integration of the structural network. Compared with AD patients, SIVD patients showed lower Eg and Eloc and higher Lp and lambda, suggesting that the destruction of white matter structure in the SIVD group was worse than that in the AD group.
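For reference, the standard formulas behind these verbal definitions, as commonly used in the network-neuroscience literature that this study appears to follow, can be written in LaTeX as:

\gamma = \frac{C_{\mathrm{real}}}{C_{\mathrm{rand}}}, \qquad \lambda = \frac{L_{\mathrm{real}}}{L_{\mathrm{rand}}}, \qquad \sigma = \frac{\gamma}{\lambda}

E_g = \frac{1}{N(N-1)} \sum_{i \neq j} \frac{1}{d_{ij}}, \qquad L_p = \frac{1}{N(N-1)} \sum_{i \neq j} d_{ij}

Here C is the clustering coefficient, L the characteristic path length, d_{ij} the shortest path length between nodes i and j, N the number of nodes (90 in this study), and the subscript "rand" denotes the mean over matched random networks (100 in this study); sigma > 1 indicates small-world organization.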
BC is the fraction of all shortest paths in the network that pass through a given node (Fortanier et al., 2019); decreased nodal BC means reduced nodal importance in the network. Compared with NC, AD patients demonstrated significantly reduced BC in the left precuneus. A previous study found marked atrophy of the precuneus in AD patients, and the precuneus plays an important role in the default mode network (Abu-Akel and Shamay-Tsoory, 2011; Hojjati et al., 2018). Besides the left precuneus, AD patients showed decreased BC in parietal and prefrontal cortices, such as the right postcentral gyrus, left superior parietal gyrus, right rolandic operculum, left inferior frontal gyrus (orbital part), and left superior frontal gyrus (medial part). Some studies have suggested that atrophy of the AD brain may initiate in the inferior parietal cortices and spread to the prefrontal cortices (McDonald et al., 2009). These structural alterations may be linked to the corresponding BC changes in these regions and illuminate one of the underlying causes of cognitive dysfunction in AD patients.

Executive dysfunction is the main clinical symptom of SIVD patients, and it may be correlated with the integrity of prefrontal-subcortical circuits (Román et al., 2002). Compared with NC, the BC of the left anterior cingulate gyrus in SIVD patients was significantly decreased in the present study. The cingulum, a complex structure that interconnects the frontal, parietal, and medial temporal regions (Bubb et al., 2018), plays an important role in executive function and episodic memory, and its impairment may result in executive dysfunction and memory deficit in SIVD patients. Additionally, BC was decreased in some prefrontal cortices, such as the right superior frontal gyrus, right superior frontal gyrus (orbital part), and right inferior frontal gyrus (opercular part), as well as in the right putamen and right superior occipital gyrus in SIVD patients. Some researchers have reported that the putamen is smaller in volume and the occipital gyrus thinner in SIVD patients (Seo et al., 2010; Thong et al., 2014). The occipital gyrus is a main origin and destination of long-association fibers, among which the inferior fronto-occipital fasciculus plays an important role in attention and visual processing; therefore, structural alterations of the occipital gyrus may lead to deficits in attention and visual processing (Catani and Thiebaut De Schotten, 2008).

The main brain regions involved in the structural network alterations differed between AD and SIVD patients. The present study suggests that the left precuneus is vulnerable to damage in AD patients, while the right putamen is vulnerable to damage in SIVD patients. The prefrontal and frontal cortices and subcortical regions were the main regions affected in the SIVD group, which is consistent with previous studies (Yi et al., 2012; Thong et al., 2014). As to the diagnostic efficiency of nodal BC in differentiating SIVD and AD patients, the AUC of the right putamen was the highest, at 0.946; therefore, the change of BC in the right putamen could identify and distinguish SIVD and AD patients.

Structural connections in frontal-prefrontal regions, frontal-subcortical regions, and prefrontal-subcortical regions were reduced in SIVD patients in the present study. Previous studies have demonstrated decreased frontal-subcortical connections in SIVD patients (Sang et al., 2020), in line with the present study.
In addition, the structural connections between prefrontal and frontal regions were reduced, which presumably reflects damage to these regions in SIVD patients. AD patients mostly manifested decreased structural connections in the temporal and occipital regions and increased connections in the frontal and prefrontal regions in the present study. In the study of McDonald et al. (2009), AD patients with mild cognitive impairment suffered from atrophy of the temporal and occipital cortices. Structural alterations in the temporal and occipital cortices may explain the decreased structural connections in these regions, which are related to function deficits, whereas increased structural connections in the frontal and prefrontal regions could potentially compensate for function deficits in AD patients.

As discussed above, although most of the conclusions of the present study were similar to those of previous studies, there were still several new findings. First, the structure of the precuneus was disrupted in AD patients, which illuminates part of the underlying causes of cognitive dysfunction, while AD patients had increased prefrontal-frontal structural connections, which are thought to compensate for function deficits. Second, the structure of the occipital cortices was impaired in SIVD patients, which relates to deficits in attention and visual processing. Finally, the BC change of the right putamen had the highest AUC in the ROC analysis, suggesting that it is suitable for differentiating SIVD and AD patients.

There were several limitations in the present study. First, the sample size of the SIVD group was quite small and needs to be enlarged in future studies. Second, the intra-individual and intra-group differences in global cognition were not considered sufficiently: although there were no statistical differences in MMSE and MoCA scores between AD and SIVD patients, and no statistical difference in the distribution of severity of cognitive dysfunction between them, the confounding factor of the severity of cognitive impairment remained, and AD and SIVD patients need to be further subdivided in a future study. Third, the AD patients were diagnosed by clinical criteria without pathological confirmation.

CONCLUSION

White matter structural network analysis, including the topological changes of the network and especially the BC change in the right putamen, may be a potential and promising method for differentiating AD and SIVD patients.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the ethics committees of the First Affiliated Hospital of Soochow University. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

HD and YW designed this study. MF, YL, ZW, ZS, and MM collected the patients' data. The cognitive functions of all subjects were evaluated by YZ. Each author participated in writing the article and gave final agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors contributed to the article and approved the submitted version.
Iraqi heritage restoration, grassroots interventions and post-conflict recovery: reflections from Mosul

The deliberate targeting and violent destruction of cultural heritage in Iraq's ancient city of Mosul by the Islamic State (2014–17) has recently given way to the emergence of heritage initiatives aimed at restoring its urban character and reviving its cosmopolitan spirit. Such restoration projects invariably stir debates over timing, funding and local consultation, as well as their potential to contribute to post-war social cohesion and communal healing. This article argues that in post-conflict settings heritage restoration is always an ambivalent and contingent process, involving the selective use of emotive historic symbols to create new realities. Based on 50 in-depth interviews with a diverse section of Moslawi society and site observations from Mosul (2022–23), the article explores local perspectives and the ongoing dynamic negotiation of heritage restoration. Amidst conflicting communal perceptions of large-scale internationally funded reconstruction projects, the article highlights the potential for grassroots heritage initiatives to offer a new impetus towards communal rehabilitation. The paper focuses on three less examined but locally championed Moslawi heritage sites—the souqs, Qila'yat district and heritage homes. These civic spaces may offer greater opportunity for social recovery through economic development, cultural exchange and everyday co-existence.

Introduction

After a 20-year hiatus, on 1 May 2023, Mosul's Spring Festival finally returned to the city's war-scarred streets. 1 A public parade of decorative floats, musical troupes and costumed performers showcasing Mosul's and Nineveh's ancient heritage drew thousands of Moslawis, including government officials, local dignitaries and representatives of Muslim, Christian, Shabak, Turkmen and Yazidi communities (Iraqi Presidency, 2023). Optimistically entitled 'Eternal Spring, Reconstruction and Peace', the festival captures the hope of Mosul's re-birth after the genocidal 'cultural cleansing' (Baker et al., 2010) of the Islamist terrorist group 'Islamic State' (IS), which targeted and desecrated over 70 shrines, mosques, churches and heritage homes within the city (Matthews et al., 2020: 128). Aspirations for Mosul's urban and social rehabilitation have become linked to heritage restoration projects, largely funded by foreign states and international agencies, seeking to recreate the city's cosmopolitan past. In the words of UNESCO's initiative, 'Revive the Spirit of Mosul', such projects aim to contribute to 'community reconciliation and peace building through the recovery of the living environment and rehabilitation of the city's heritage sites' (UNESCO, 2023).
Post-conflict heritage restoration projects stir heated debates over international funding and local agency; timing and sequencing; conservation or transformation; and the need for integration of cultural heritage into wider reconstruction policies (Isakhan and Meskell, 2023; Munawar, 2023; Barakat, 2021). Equally divisive is the dispute over heritage's potential to contribute to post-war social cohesion and communal healing or, conversely, its political malleability in enforcing structural inequalities and elite-driven agendas (Giblin, 2014; Matthews et al., 2020). This article argues that in post-conflict settings heritage restoration is an ambivalent and contingent process: a dynamic contestation over the selective use of emotive symbols by multiple actors to create new realities from past memories. This paper seeks to move beyond the question of whether heritage restoration heals or hurts post-conflict societies, focusing instead on how individuals relate to, interpret and navigate heritage discourses and projects as part of their individual and communal recovery. In so doing, it argues that grassroots heritage initiatives can contribute to more holistic peacebuilding approaches as they have the potential to empower, rather than simply co-opt, local communities; incorporate heritage sites within communal living spaces, translating pluralist values into embodied practices of co-habitation and commercial co-operation; and provide historic spaces to celebrate local traditions, architecture and culture, restoring communal pride and belonging amidst post-conflict pain and urban dislocation.

Mosul presents a fascinating case to observe heritage restoration as a dynamic process. Entangled within global interventionist debates and Iraq's delicate recovery, the city bears witness to the post-war realities of communal fragmentation, displacement and urban destruction. While academic attention has been turned to heritage restoration after violence, particularly in Syria and Iraq (Quntar et al., 2015; Clapperton et al., 2017; Munawar and Symonds, 2022), very few studies focus on local opinions towards heritage initiatives. Isakhan and Meskell's (2023) large-scale survey of 1600 Moslawi attitudes to heritage restoration, conducted in the spring of 2021, provides a welcome exception, offering insights into local perspectives on internationally led heritage projects. Our research seeks to build on and contribute to such findings, based on ethnographic observations and interviews with a wide range of Moslawi society. This includes government officials, scholars, activists, archaeologists, religious and tribal leaders, ordinary citizens, new urban migrants and displaced minorities-sampled across religious, class, gender and age divides. 2 The findings reveal diverging attitudes, with interviewees frustrated by the slow speed of Mosul's restoration, beset with endemic political corruption and disjointed municipal planning. Heritage restoration, however, still offers many Moslawis the opportunity to reimagine a more triumphant and pluralist past-an escape from the constant reminders of IS's extremism. For some, it represents an act of resistance, an attempt to reassert one's identity after deliberate attempts at 'urbicide' and 'identicide' (Kalman, 2017).
Our findings first focus on local responses to UNESCO's key restoration sites-al-Nouri Mosque, Nabi Younis, and al-Taheera and al-Sa'aa churches. While most affirm these iconic sites' symbolic importance to the city, many argue that everyday religious spaces also need to contribute to communal rehabilitation. Secondly, Moslawis emphasised that social cohesion does not merely derive from reconstructing collective symbolic icons but from reviving shared everyday heritage sites such as the souqs (markets), old neighbourhoods (such as Qila'yat) and heritage homes. Such sites are often championed by local entrepreneurs and have the potential to reconnect economic, social and civic ties through joint commercial ventures, housing projects and cultural initiatives. Our research indicates that heritage restoration efforts in Mosul should extend beyond physical sites and focus on rebuilding communities, which will require that strategic initiatives incentivise minority returns (Christian, Yazidi, Turkmen, Shabak) while adapting to new demographic realities and accommodating the needs of Mosul's current inhabitants. Before exploring the research findings, the paper examines the relevant literature on cultural heritage, healing and recovery, and provides a summary of research methods and the Mosul context. The article concludes by applying the empirical insights derived from Mosul to other post-conflict settings, in which heritage restoration is part of the recovery process.

Heritage, healing and post-conflict recovery

Although the significance of heritage restoration in post-war recovery is widely accepted, its framing, role and potential remain contested. Scholars have sought to approach cultural heritage through a human rights framework, emphasising the role of the UN Human Rights Council and UNESCO as international protectors and global guardians of 'tangible/intangible cultural rights' and 'universal world heritage'. However, studies warn that such intergovernmental bodies often favour state interests over individual rights, legitimising exclusive nationalist discourses rather than safeguarding minority concerns (Meskell, 2013; Matthews et al., 2020). A second approach has been to link heritage recovery to global sustainable development goals (SDGs), in which principles of 'Building Back Better' integrate heritage projects in attempts to reconstruct 'inclusive, safe, resilient and sustainable' cities (SDG 11). Such universalist principles require greater contextualisation, especially in settings of recurring violence (Khalaf, 2020), and may lead to the exploitative commodification of heritage sites (Miura, 2015) or 'cultural heritage predation' by political/sectarian actors (Kathem et al., 2022).

Thirdly, heritage reconstruction has been credited with contributing to peacebuilding-enabling communities to overcome traumatic loss, re-appropriate suffering and rebuild peaceful futures (Isakhan and Shahab, 2020: 7; Giblin, 2014). Scholars posit that heritage restoration provides reclaimed spaces for personal recovery, communal connection and social cohesion. As Atabay et al.
(2022: 15) argue, heritage sites can anchor and embed peacebuilding initiatives into local contexts, tying the 'recovery of built environment' to the recovery of 'individuals and communities'. Research on post-conflict memorialisation in Bosnia and Herzegovina (BiH) similarly stresses that heritage restoration can facilitate individual empowerment and temporal re-orientation: 'we remember in order to give meaning to the present and thus gain power over the future' (Palmberger, 2016: 12). Meskell and Scheermeyer's (2008: 154) analysis of post-apartheid South Africa observes the capacity of 'heritage as therapy' to address past wrongs, yet still recognises that state-led 'heritage pageantry' is often more about 'national performance rather than social justice and restitution'. Indeed, the misuse and politicisation of post-war heritage initiatives in peacebuilding has been well documented, from Israel/Palestine, the Balkans and South Africa to Afghanistan and Mali (Dumper and Larkin, 2012; Lostal and Cunliffe, 2016; Isakhan and Meskell, 2023). These cases demonstrate the dangers of elite instrumentalization of heritage sites, often used to bolster ethno-nationalist claims and commercial interests or prioritise international heritage agendas. As Barakat (2021: 445) reminds us, often, in post-conflict settings, 'restoring iconic heritage with outstanding universal value marginalises the most vulnerable heritage locations key to socioeconomic restoration'.

In response, multiple grassroots initiatives have emerged in the form of peace museums and heritage centres-District Six, Cape Town; Community Peace Museums Heritage Foundation, Kenya; Casa Museo de la Memoria de Tumaco, Colombia-seeking to challenge dominant discourses, integrate marginal voices and reclaim hidden historic lives: 'We do not wish to recreate District Six as much as to re-possess the history of the area as a place where people lived, loved and struggled' (McEachern, 1998: 504). While such 'heritage from below' (Robertson, 2016) projects may provide alternative sites and interpretations, they can also be 'selective, biased and partial to the actual past itself' (Muzaini and Minca, 2018: 12). This article does not romanticize grassroots heritage initiatives as more effective forms of cultural rehabilitation, but rather suggests they have greater potential to contribute to post-war healing due to their local ownership, organic development and holistic approach to social recovery. Emerging research confirms the positive effects of community-based heritage projects on individuals' mental health, stirring a passion for history, and the creation of new spaces of 'hybrid inter-relational and interstitial connectivities' in which 'people's sense of place, belonging and security can grow' (Power and Smyth, 2016: 166).
This study acknowledges, however, that post-conflict heritage can both unify and divide, recognising its ambivalence and contingency. As Giblin (2014: 515) explains, the symbolic healing of traumatic pasts is 'negotiated in the present through the continuous creation and deconstruction of emotive symbols to create social, political and economic cultural renewal'. Heritage never stands still; it is constantly remade to affirm lost identities and assert social and political claims through embodied symbols. In times of post-war flux, as in the case of Mosul and Iraq, heritage is 'intensified as the past is aggressively negotiated', or as Giblin (2014: 500, 515) contends, 'when a culture is in a perceived state of shock … heritage use becomes intensified in response to the preceding trauma as an accelerated form of cultural production'. If post-war settings exacerbate heritage's role as a site of contestation and dissonance, they also provide an opportunity to address its dichotomous function. As Viejo-Rose (2011: 214) argues in her book on the role of cultural heritage in Reconstructing Spain, 'Only by recognizing [heritage's] potential to impart messages of fear, domination, and violence can its potential as a resource to reconciliation be engaged and any historical grievances linked to it addressed'.

Our research in Mosul grapples with this ambiguity and encourages a deeper engagement with Moslawi voices and local heritage sites. An integrated approach to heritage as a form of post-conflict recovery must seek to balance rights and economic growth alongside social stability and urban regeneration. While scholars have stressed the importance of 'dynamic and ongoing' communal engagement to support heritage projects, there remains a lack of understanding of how sites intersect with collective memories (Munawar, 2023; Larkin and Rudolf, 2023a), and how to reconcile the conflicting interests of 'heritage owners and conservators-restorers' (Hirsenberger et al., 2019: 217). The Moslawi case affirms recent research in BiH and Kosovo, which demonstrates the importance of anchoring heritage restoration in the work of local heritage actors, supporting their capacity for organic growth while recognising how 'presences and absences in the post-conflict landscape' impact 'processes of exclusion and inclusion' (Kappler and Selimovic, 2021: 3).
Post-war recovery is never the return to an idealised past but the search for a mediated and pragmatic everyday. While heritage restoration projects seek to recreate pluralist and shared pasts, these can contradict contemporary urban realities. Bădescu (2020: 130), writing on post-war Sarajevo and Beirut, notes, 'Reconstruction stressed the continuity of religious buildings belonging to all groups, sustaining a cosmopolitan brand which is belied by now segregated demographic realities and daily practices'. Bădescu distinguishes between cosmopolitanism as a heritage marketing brand-important in generating international support-and the perverse reality of 'exclusionary cosmopolitanism', in which forced displacement was never reversed and urban newcomers (Shi'a in Beirut and rural Bosniaks in Sarajevo) were blamed for contributing to a deterioration of culture within the city. Such processes have relevance for Mosul, highlighting the danger of social exclusion and the prioritisation of heritage sites over communal repatriation. The potential of grassroots heritage initiatives in Mosul to contribute to peace therefore depends on transforming symbols of pluralist life into everyday realities of shared coexistence through revived local markets, historic neighbourhoods and heritage museums/homes.

Mosul context and research methods

Mosul has long been one of Iraq's most ancient, multicultural, ethnically mixed (Arab, Kurd, Turkmen, Assyrian, Shabak) and religiously diverse (Sunni/Shia Muslim, Christian, Yazidi) cities. It has been a strategic hub, linking historic trade routes and imperial conquests; located on the edge of empire, its boundaries and loyalties have been fought over by international powers, state authorities and Islamist insurgents (Shields, 2000). Mosul's cosmopolitan nature has been subject to demographic shifts caused by Iraqi Baathist 'Arabization' policies from the 1970s-displacing non-Arabs and resettling Arabs in northern provinces-and more recently through the rise of al-Qaeda and the IS takeover (2014-17), resulting in the killing, forced expulsion and mass exodus of the city's Christian, Yazidi and Shia minorities (Mufti, 2004; Isakhan and Shahab, 2020). This traumatic social rupture has stirred Moslawi memories and romantic tales of urban coexistence, resilience and collective struggle. As one Moslawi, a Sunni Arab teacher, reflects, 'We want to go back in time when people from different sects stood together to repel the Persian invasion back in the 18th century. The leader of the campaign was Nadir Shah. All the people of Mosul stood together-Christians and Muslims'. 3
Such nostalgia for collective religious unity undoubtedly reveals public frustrations with Mosul's post-liberation fragmentation, evidenced in urban dislocation, state and municipal infighting, and splintered security actors-the Iraqi army, Popular Mobilization Forces (PMF / al-hashd) and Kurdish peshmerga. Nevertheless, it also demonstrates why Mosul remains such a fascinating case to test the limits and potentiality of heritage restoration in contributing to Mosul's and Iraq's post-IS recovery. Since 2018, multilateral institutions (UNESCO, UNDP, EU, World Monuments Fund) and foreign governments have raised almost $150 million towards the documenting, digitising and restoration of museums, libraries, churches, mosques, statues, shrines, archaeological sites and local craftworks (Isakhan and Meskell, 2019, 2023). Our research explores local attitudes towards such heritage projects and seeks to understand the sites and spaces ignored by external funders but championed by local communities.

Between February 2022 and June 2023, we conducted 50 in-depth interviews with individuals from Mosul and Nineveh, including activists, scholars, religious leaders, tribal sheikhs, politicians, community representatives and ordinary citizens across ethno-religious (31 Arab Sunni; 5 Shia; 5 Christian; 5 Yazidi; 2 Turkmen; 2 Shabak), gender (41 male; 9 female), age and class divides. 4 Our sample comprised locals/returnees (67%), some new urban migrants (8%) and displaced minorities (25%), seeking to capture the diversity of the city, but also reflecting the challenge of including more female and minority voices due to social constraints and public mistrust. While the sample does not claim to fully represent Moslawi society, it captures key dominant themes and reflects current demographic realities. Interviews were conducted by the authors in Arabic, half online and half in person, during fieldtrips to Baghdad and Mosul, with interviews conducted in Hamdaniya, Sheykhan and Sinjar. A diverse sample was achieved by relying on the authors' Iraqi research networks and through snowballing techniques. Interviewees provided informed consent and have been fully anonymised to protect their identities, while pseudonyms were used to reflect the diversity of the sample. 5 Research of such a sensitive nature requires understanding, reflexivity and empathy, and the authors drew on their previous Iraqi and post-conflict experience, learning when to pause, simply listen or terminate the interview. As foreign Arabic-speaking researchers who have lived in the Middle East, we continue to navigate 'outsider-insider-inbetweener' dynamics (Milligan, 2016), constantly reflecting on how our status/positionality affects local responses.
In addition to qualitative semi-structured interviews, ethnographic observations of heritage sites were conducted during a field trip to Mosul in May 2023. The authors participated in walking tours around the city accompanied by local activists, historians and photographers. During these 'walking interviews' (Evans and Jones, 2011), personal stories of loss and resilience were infused within tours of rehabilitated mosques and newly restored public squares, as well as visits to Moslawi homes, where returnees spoke of clearing rubble and dead bodies before they could re-inhabit their buildings. In post-war Mosul, few photographs or walled posters attest to the city's 40,000 martyrs, yet as one activist shared: 'The images and pictures are in our minds. Every time I pass a street, a house or junction, traumatic memories flood back … In this house a whole family died due to a direct coalition rocket strike … Now there is just an empty space in the middle of a row of buildings. They have been physically erased'. 6 Drawing on rich personal testimonies, the paper also integrates secondary sources such as policy reports, news articles and academic journals, as well as social media in Arabic and English.

'Here UNESCO works to Revive the Spirit of Mosul'

Five years after the launch of UNESCO's 'Revive the Spirit of Mosul' project in 2018, it remains the preeminent cultural heritage programme, providing a hub for reconstruction initiatives, international fundraising and the aspiration to 'develop a people-centred urban vision for the future' (Khalaf, 2020). The priority of the programme has been the rehabilitation of Mosul's iconic religious sites-the great mosque of al-Nouri and its leaning minaret (al-hadba); Nabi Younis Mosque and the Assyrian palace of Nineveh; and al-Sa'aa and al-Tahera churches within the Christian square (Hosh al-Bieaa). While only a few of the interviewed Moslawis challenged the selection and symbolic importance of these historic sites, many shared their concerns over funding, design plans, community consultation and adaptive heritage functions (Isakhan and Meskell, 2019, 2023).

These religious sites are emblematic of Mosul's destruction under IS, and their recovery is explained by interviewees in terms of physical healing or a search for belonging. As one young Christian Moslawi explains, 'ISIS bombed every part that belongs to our identity. We are now left without an identity. We are bereft of our history'. 7 A number of Sunni Arab interviewees felt that the rehabilitation of churches and mosques could help expunge the 'spiritual evil' and 'extremist virus' spread by IS, who deliberately detonated a bomb within Nabi Younis on the most holy night of Ramadan, Laylat al-Qadr (Night of Power), and destroyed al-Nouri mosque's al-Hadba minaret as a final act of religious desecration in June 2017. Some interviewees, particularly among the older generation of Moslawis, felt that no restoration scenario could ever restore the intimate memories irretrievably lost with the destruction of the original religious sites: 'It is possible to rebuild and renovate 100 mosques like Al-Nouri Mosque. The problem is not this. When I was a child, I used to climb to the top of al-Hadba minaret. I have so many memories in this place … We are deeply angered by those who deprive us of the collective memories and the special relationship we have with these sites'. 8
Although most interviewees blamed IS for the urbicide inflicted on the city, many also accused International Coalition forces of material destruction during the battles for the city's liberation (Figures 3 and 4). Isakhan and Meskell's (2023) large-scale quantitative survey of Moslawi attitudes to heritage restoration confirms dichotomous local perspectives on the importance of 'heritage authenticity'-faithful restoration to 'the way it was before' balanced against adaptive transformation of heritage sites into modern, useful community buildings. While 48% of the 1600 interviewees preferred that 'the sites are restored and reconstructed into a new and more modern structure', 43% maintained that the sites should either be reconstructed to their pre-war condition or to the way they were when they were first built (Isakhan and Meskell, 2023: 16). These figures may suggest a greater local appetite for innovation than international heritage bodies and conservationists are willing to concede, yet there remains much ambiguity as to how citizens envision this reconstruction process. Our interviews attest to diverging perspectives, cutting across age, gender and religion. Predictably, archaeologists valued conservatism, heritage activists favoured innovation, and ordinary Moslawis desired heritage projects that create 'historic re-connection' while still meeting everyday needs. One archaeologist commenting on al-Nouri Mosque's reconstruction insisted: 'People in Mosul like what is ancient without any alteration or modification to the originality of the site'. 9 However, many Moslawis remain ambivalent, nostalgic for the pre-war version of their sacred sites, acknowledging the need for adaptation and yet accepting that replicating old facades will not bring back lived experiences: 'Rebuilding is possible, and we can have a newer and better minaret. The issue is that the now destroyed minaret was full of life because it had soul to it, formed by the people who share memories in this special place'. 10

As the prominent Mosul historian Omar Mohammed explained in an interview, the iconic optics of many symbolic heritage sites can be preserved while modernising the old structures so that they can resist future challenges such as extreme weather variations. 11 Mosul residents' tolerance for minor modifications should not be read as a willingness to trade the familiar sights of their cherished cultural landmarks for any sort of radical architectural departure. Local suspicions persist over the funding of al-Nouri Mosque's reconstruction by the UAE ($50.4 million) and the Egyptian-awarded architectural design that some interviewees claim betrays the integrity of the site and threatens to transform al-Nouri into a 'tourist site that is bereft of any cultural, memorial, historic, or archaeological significance to Moslawis'. 12 Other activists and community leaders warn against superficial public consultations and the lack of buy-in, particularly from excluded minorities, be they Yazidi, Shabak or Turkmen. As one activist reflects, 'the reconstruction of a site is not about the physical building, it's about the rehabilitation of its spirit. And its spirit comes from the people. If they are not involved, this site will have zero value'. 13 A Shi'a Shabak religious leader lamented the loss of 7000 Shabak from Mosul, their forced expulsion to 64 towns in the Eastern Nineveh valley disconnecting them from religious sites and heightening 'Christian-Shabak struggles over services'. 14
The long-term impact of the displacement of Mosul's minorities poses difficult questions regarding the efficacy of restoring tangible religious sites without the physical presence of the living communities. While the reconstruction of Mosul's churches is perceived as a critical step towards reviving the city's multicultural heritage, a majority of interviewees expressed doubts as to whether such measures would bring back the Christians. A Syrian Catholic priest, Father Ra'id from al-Bishara church, admitted, 'The city needs services and infrastructure to restore its soul. After this comes the reconstruction of houses and churches'. 15 Heritage restoration can play an integrative healing function only when it is accompanied by socioeconomic opportunities, security and some form of restorative justice (Barakat, 2021).

Nabi Younis: layers of complication

Controversies abound over the reconstruction of one of Mosul's most ancient heritage sites. The Mosque of Nabi Younis sits atop a shrine of the prophet Jonah revered by Muslims, Christians and Jews; it is built upon a Nestorian church, a Jewish synagogue and the Assyrian palace of King Esarhaddon, who ruled Nineveh in the seventh century BC. Before its destruction by IS, Nabi Younis drew worshippers from multiple backgrounds as well as Moslawis seeking the best views of the city and a place to socialise on the landscaped grounds (Nováček et al., 2021).

While many interviewees claim its reconstruction is an important milestone towards the post-conflict healing of the city's diverse communities, heated debates surround the ongoing excavations and its stalled restoration. One local scholar cautioned against pushing for the site's rapid reconstruction in the absence of adequate joint mechanisms to bring back its true spirit: 'How do you convince the Muslims that there are popular Christian or Jewish sites underneath? How do you convince the Christians that there is a mosque on the top? And how do you convince the archaeologist that, on top of the heritage site, there are sites with contemporary value? That is a very complicated matter in Mosul and it's not given enough time to be discussed'. 16

The disputes over restoration which have delayed progress at the site underline the importance of a coherent national policy of heritage reconstruction which can accommodate conflicting sub-national preferences. Nevertheless, as one interviewee pointed out, neither UNESCO nor the Iraqi government has demonstrated initiative to support the mosque's reconstruction, which is currently funded by donations of Moslawis and a local charity: 'Many ministers came to visit the mosque and have promised us several times that reconstruction will be funded by the government, but all these promises were empty. The government purposely delayed reconstruction due to the discovery of Esarhaddon Palace'. 17
The commitment of Mosul's Arab Sunni residents to restoring their local mosque seems to confirm the finding of Isakhan and Meskell (2023: 20) that Moslawis favour the restoration of local spiritual centres over grand heritage projects or archaeological digs. This certainly applies to the ongoing restoration of multiple mosques throughout Mosul, undertaken by families, communities and the Islamic waqf. It reflects the local pressure applied on Iraqi authorities to finally concede that Nabi Younis Mosque be rebuilt by Sunni Waqf authorities alongside the ongoing excavations of the Assyrian palace by international archaeologists. This reveals that Moslawis view religious sites not merely as heritage repositories but as living spaces for everyday worship, prayer, religious festivals and social gatherings. Therefore, the potential of heritage restoration to contribute to social cohesion and communal re-integration can be identified in everyday dynamic spaces, in which cultural heritage is intertwined with souqs, old neighbourhoods and heritage homes.

Souqs and bazaar

Despite a lack of government funding, Mosul's old city souqs and the historic Saray bazaar have made an impressive comeback, largely driven by grassroots initiatives, local owners and philanthropists, such as the distinguished al-Jalili family. From jewellers' shops to the blacksmith alley, Ali Al-Baroodi (2021) has documented the hopes of merchants and craftsmen slowly reviving their businesses. When asked about the challenges faced in rebuilding his shops, the local entrepreneur Wadhah al-Jalili explained: 'The bridges were all down in River Tigris, and my properties are in the heart of the bazaar, and no one can reach them with lorries as they were blocked with rubble' (Al-Baroodi, 2021). Nonetheless, Jalili decided to push ahead with reconstruction, restoring workshops and ancient khans such as Khan al-Komrk (Ali et al., 2022), while helping those struggling with the lack of funds and government support. 18 Locally led reconstruction within the souqs has shown sensitivity to heritage and cultural needs, while repurposing 'shops for handicrafts and old Moslawi professions' and khan squares for 'community cultural events' (Ali et al., 2022: 15). Interviewees praised the independent restorations within some souqs, identifying them as 'one of the safest areas, in which Moslawis have enough power to decide what and how it is to be preserved, because they were completely rehabilitated and recovered by their landlords, by the families themselves, so they have full authority over them'. 19

This cross-community solidarity, as reflected in the social dynamics of the Saray bazaar, is what native Moslawis describe as the heart and soul of their city's past. As one local merchant recounted: 'I used to go to the trade centre (souq) in the old industrial area in Mosul where I witnessed the co-existence between different social groups-all sat next to each other. I used to see my dad sitting next to a Yazidi merchant selling hummus who wanted to sit in our office, next to a merchant selling wheat from al-Hamdaniya district in Nineveh, next to another merchant selling barley … This was like a university, which is a place bringing different people from different religions, ethnicities and sects together'. 20
The nostalgic hope of reviving commercial pluralism will depend on reconnecting familial links and trade relationships, often between urban and rural communities. The old bazaar also has to compete with the new shopping centres on the eastern side of the city as well as in Mosul's al-Jadida neighborhood. Meanwhile, the Saffarin souq and Al Quozin souq, once jewels of Mosul, are reportedly 'struggling to remain open after the city's liberation from IS as most of those who worked there were either killed or have fled the city'. 21 Therefore, attracting investors, restoring war-ravaged buildings and securing capital for those who have risked reopening workshops are steps that the Iraqi government could undertake to ensure the future of the bazaar, not just as a commercial hub but as a thriving symbol of Moslawi and Iraqi multiculturalism. As one interviewee reflected, 'Reconciliation cannot be forced. It can only happen naturally, when we have the people communicating with each other, which is why the role of the local markets of Mosul, especially that of the old market, is fundamental'. 22

The souqs contain important heritage sites-old hammams (bath houses), mosques, khans-which, according to Moslawis, 'reaffirm and reassert Moslawi identity, ethnicity, religion and culture'. 23 Our interviews confirm the significance and emotional connection many Moslawis feel towards the souqs. Therefore, active protection of this heritage site and government investment may help restore greater confidence in the Iraqi authorities' commitment to Mosul's revival. Lastly, in view of its contribution to the economic recovery of conflict-affected communities in the old city, the bazaar can serve as a positive example of a local-needs-driven approach to heritage restoration which can unlock synergies between international and local heritage practitioners, residents and state officials. Moreover, Mosul can draw lessons from the disastrous post-war souq renovations in Beirut by Solidere, which created exclusive, designer-driven outlets aimed at Arab Gulf tourism rather than local citizens' needs and ended up obstructing social re-integration within the city (Hourani, 2015).

Reconstructing the Qila'yat district

Another key site, lauded by locals as carrying the DNA of the city, is Mosul's Qila'yat district. With its historic riverside forts dating back to 1080 BC, the Qila'yat neighborhood holds symbolic value for many of the city's residents. As a native of the district explains, 'Rebuilding this area is reviving the civilization and culture of Mosul … The Qila'yat is home to the first mosque ever built in the city and the oldest church'. 24

The cultural significance of the Qila'yat district extends beyond its historic architecture to its emotive history. Residents tell stories of shared lives, with one interviewee arguing that Moslawi youth need to learn tolerance and co-existence not from books or Prevention of Violent Extremism (PVE) programmes, but through the restoration of plural neighbourhoods in which they 'will witness how a mosque and church stand side-by-side. This is the tolerance and multi-culturalism that Mosul is well known for'. 25
However, the neglect of the district under the former governor Nawfal Aqoub raises suspicions among locals (Al-Baroodi, 2021). They wonder about the motivations behind such actions, suspecting an attempt to erase the city's history. Systemic administrative negligence fuels feelings of mistrust among residents, who often believe there has been 'a deliberate effort to prolong the war in the old city and destroy many of the city's historical and cultural landmarks'.26 Despite the establishment of the 'Mosul Reconstruction Committee', little progress has been made in rebuilding the Qila'yat area. Residents continue to live in camps or rented houses, unable to return home. A report of the Iraqi human rights observatory stressed that efforts to rebuild these areas are undermined by 'impossible conditions, complex procedures, problems related to real estate borders, and the loss of identity documents' (Al-Araby Al-Jadeed, 2021). Furthermore, armed parties exploit ownership rights without compensating the area's original inhabitants.

Finally, the economic potential of the district increases the appetite of profit-driven entrepreneurs to cash in their political leverage over the formal and informal bidding process. One Moslawi activist has thus warned about the risk of corrupt officials endorsing reconstruction designs which would be far removed from the locals' legitimate interests:

There are different bids being submitted to rebuild Qila'yat. These bids are all catastrophic because some of them want to transform the neighbourhood into a modern one with skyscrapers like Dubai, others have a vision of turning the neighbourhood into a corniche, while other bidders want to open shisha lounges in the area … If Qila'yat disappeared, Mosul as a city ends and becomes a soulless city.27

As this powerful quote demonstrates, the Qila'yat should be rebuilt sensitively, balancing the potential for tourist growth - so residents 'make a living by reviving the infrastructure' - against the importance of historical integrity, communal consultation and local-led expertise. The restoration of such Moslawi neighbourhoods, rather than symbolic religious sites or civic landmarks, may contribute more to resurrecting the spirit of intercommunal solidarity, as cherished by many Qila'yat residents. As one Sunni Arab resident reflects,

We used to mingle with our friends and neighbours who were from different religious and ethnic groups … We lived in one united community. In addition, I want to salute my Yazidi friends. They used to come sell bulgur wheat in the area. All women used to wait for the seller to come so they can buy wheat and cook it. Cooperation and collaboration between women in the area is what makes Qila'yat unique. We coexisted with Christians and Yazidis.28

Amid stories of shared Moslawi life, residents of the Qila'yat (now predominantly Arab Sunni) still hope that their former neighbours (Christians, Yazidis, Shabak) will return and receive compensation to rebuild their lives. The area presents a golden opportunity to restore not just the built environment of an iconic urban centre, but also to rekindle a sense of trust among conflict-affected communities that heritage reconstruction initiatives can tangibly contribute to post-war social healing. By embracing their cultural heritage and involving the people connected to the district, Mosul can rebuild not only its historic urban centre, but also its sense of identity and communal bonds.
Heritage homes

One of the most notable UNESCO achievements in Mosul has been the restoration of over 124 historic houses within the old city (Mosul Eye, 2022). Yet even more remarkable has been the initiative to restore grassroots-led heritage/museum homes showcasing Moslawi tangible and intangible cultural heritage. Three prominent examples are the restored historic houses of the Mosul Heritage organization located in bayt al Rahim, the Baytna (Our Home) Institution for Culture, Heritage and Arts, and the al-Talib home of the grassroots organization 'Volunteer with Us'. These three initiatives have thrived through a symbiotic relationship between the activist founders and the legal owners of the houses. The legal owners have donated or rented out the buildings to be restored as heritage sites and repurposed as headquarters for organizations with a strong humanitarian mission and local grounding.

Ayoub Thanoon, founder of the Mosul Heritage project, shared how the family of the deceased owner donated the house, entrusting him with transforming it into a heritage museum. The house, which was partially destroyed during the conflict with IS, has been restored with meticulous attention to detail. It comprises multiple floors containing Moslawi traditional living spaces and an impressive heritage museum. The museum displays locally donated artifacts and antiquities from different periods of the city's history. It includes old sewing machines, record players, newspapers and banknotes dating back to Iraq's monarchy. The museum highlights architectural landmarks of Mosul and Nineveh, including a three-dimensional model of the al-Hadba leaning minaret and images from the legendary caravan city of Hatra, which was severely damaged by IS. The main purpose of the museum is not only to preserve Mosul's heritage but also to increase public awareness about the city's iconic legacy. Workshops are organized to develop the skills of local activists, aiming to revitalize the tourism industry in Mosul. Ayoub proudly shared, 'Everyone visits the museum. What shocked us was that people from Duhok are coming to visit the museum. Visitors come from all districts and towns around Mosul such as Al Hamdaniya, Baaj, Bartella, and Tel Kaif'.29 Involving local communities in the museum's activities also enhances these audiences' heritage appreciation. This increased appreciation can lead to more sustainable protection of these assets, ultimately benefiting the entire community (Matthews et al., 2020: 135).

Within a short span of six months, the museum has recorded 24,000 visitors, including 12,000 university students and school groups. The museum has become an attractive destination for tourists and local families, while the Mosul Heritage organization has expanded its outreach throughout the Nineveh governorate as part of the Nineveh Heritage Preservation Initiative. They organize excursions to different towns and districts, connecting communities and fostering a shared pride in the governorate's diverse cultural heritage. This people-centred approach to heritage promotion has the potential to improve locals' livelihood prospects while contributing to the emotional recovery and reconciliation of conflict-affected communities.
An equally inspiring local initiative is the Baytna Institution for Culture, Heritage and Arts, founded by Moslawi journalist and cultural entrepreneur Saker al-Zakariya in 2019. The institution aims to showcase the unifying power of Mosul's rich cosmopolitan legacy. Housed in a carefully restored traditional house in the heart of Mosul's Old City, Baytna is a place that is dedicated to Moslawi identity. The museum house is adorned with portraits of famous Moslawi artists, writers and public figures, while exhibiting vintage items and memorabilia, reminding visitors of Mosul's glorious past as a centre of trade and cultural exchange. It hosts regular cultural events, including art exhibitions, readings and musical performances, that aim to create an atmosphere of pride: a space, in al-Zakariya's words, in which 'people feel proud of their city's heritage'. Baytna aims to combat the stigma associated with the city's violent history by reshaping the narrative and highlighting the resilience of its people. Al-Zakariya recognizes that the restoration of physical structures is not enough to heal the city's wounds. By offering Baytna as a space where locals can reconnect with their history and rediscover their roots, he endeavours to restore a sense of dignity, identity and pride in the residents.

The third historic home, bayt al-Talib, which houses the grassroots organization 'Volunteer with Us' led by Omar Mohammed, acts as a gathering point for volunteers and activists who are passionate about preserving Mosul's legacy. 'Volunteer with Us' mobilizes local youth, engaging them in various projects related to heritage, education and community development. From the rehabilitation of schools destroyed during the war with IS to working with orphans or teaching children how to build vertical gardens, the team of 'Volunteer with Us' seeks to tackle the grievances of multiple conflict-affected communities. Through workshops and activities, the organization encourages residents to share their stories, music and celebrations, fostering unity and understanding among different communities. Omar and his team stress that 'cultivating an atmosphere of acceptance and love will help repair the cracks in society and revive the social fabric of Mosul'.30 Barakat (2021: 443) similarly argues that in post-conflict settings, where socio-economic and political upheaval create uncertainty and apprehension, heritage restoration assumes a critical role in defining and reaffirming post-conflict identities: 'it is critical to acknowledge that, from the perspective of those affected, heritage becomes more than mere tangible manifestations - as does architecture, so do historical artefacts and archival documents start to assume complex roles in forging identity'.
While these grassroots initiatives have garnered praise for their commitment to preserving Mosul's heritage and fostering community engagement, sceptics may question their inclusivity and ability to address the city's economic challenges. As Bădescu (2020: 133) reminds us, heritage projects may help create 'spatial practices - uses and clusters of people around places perceived as being cosmopolitan', but this is different from restoring cosmopolitan attitudes and cross-communal networks. Nevertheless, in post-conflict Mosul, such projects are crucial for consolidating local peacebuilding platforms and fostering innovative collaborations. These heritage hubs are less centred on the commodification and tourist-driven models that often affect large-scale projects, which can lead to urban gentrification and a deepening of post-war inequalities (Meskell, 2021). Like the District Six Museum in Cape Town, a community-led project to restore and remember a diverse South African neighbourhood, Moslawi heritage homes focus on the 'reappropriation of the city which was taken from them' (McEachern, 1998: 517). In the uncertainty of post-apartheid South Africa and post-IS Iraq, community-driven heritage projects suggest that 'the retrieval of a more desirable past provides a way into new identity' (McEachern, 1998: 517).

Iraqi officials should build on these heritage initiatives, promoting inclusive economic development which ensures that the benefits of heritage restoration reach all segments of society. A holistic approach is needed, combining heritage preservation with measures aimed at job creation, economic growth and welfare. Empowering local actors who understand Mosul's history and priorities is vital. The international community should provide financial assistance and capacity-building to these local heritage entrepreneurs, while respecting their autonomy and avoiding the imposition of 'peacebuilding jargon' (Larkin and Rudolf, 2023b).

[Figures 5 and 6: Heritage homes - Baytna and 'Volunteer with Us'. Photos: Inna Rudolf.]

As one interviewee put it:

the most pressing matter for Moslawis is the rebuilding of public facilities such as hospitals, schools, and services. The most important consideration for the people in Mosul is achieving security and stability; food; education; and only later comes the organisation of cultural, tourist and recreational activities.31

The city's holistic recovery hinges on connecting symbolic and living heritage with safe, viable communities. Many Moslawis are uncomfortable whenever external funding is seen as prioritizing inanimate artifacts over the needs of the living. Social cohesion, psychological healing and urban recovery may be tied more to the provision of new municipal services than to the costly restoration of old heritage sites. As one female Sunni Moslawi explained: 'We need to build modern infrastructure. When I see a new building in Mosul, I feel positive and much better. I feel comfortable because I see that there is development in the infrastructure'.32
The slow investment in Mosul's infrastructure draws criticism for not aligning with the priorities of its inhabitants. Road repairs favour the movement of armed actors, while demolition orders consolidate new ownership conglomerations. The stalled re-opening of Mosul airport further reflects nefarious power dynamics and elite negotiation:

There is a struggle internally and internationally, there is an agenda, which we don't fully understand, to not execute the reconstruction of Mosul airport. … It is Mosul's gateway to the world in terms of tourism, business … There is a deliberate delay to not reconstruct it.33

The suspicions surrounding Mosul's reconstruction mirror Iraq's post-IS uncertainties. Many residential buildings in the city lie in ruins, and their owners struggle to receive adequate compensation from Iraqi authorities. The lack of tangible support for the reconstruction of people's homes hinders the return of internally displaced persons (IDPs) and prompts locals to emigrate. With a significant portion of Mosul's native population either deceased, missing or displaced, the meticulously restored cultural and religious landmarks of these uprooted communities run the risk of becoming museum artifacts with limited meaning to those who stay or those who have newly arrived. Heritage restoration projects conducted in contexts of violence and displacement should avoid turning the city's cosmopolitan legacy into a static museum.

Conclusion

In conclusion, our research in Mosul affirms the potential of grassroots-led heritage projects to contribute to post-conflict recovery in Iraq. In practical terms, smaller sites and projects result in quicker delivery, more flexible approaches, local engagement and self-organization. This can help build momentum, evident in the ongoing recovery of Moslawi souqs and heritage homes. In terms of social impact, grassroots initiatives can also encourage urban pride and solidarity, erase vestiges of traumatic violence and provide spaces for communal resilience.

The Mosul case study highlights three generalisable principles of relevance to heritage restoration in other post-conflict settings. Firstly, grassroots heritage restoration that empowers local communities is far more likely to outlast and have deeper impact than externally funded, time-limited initiatives. Origins and power structures matter, and bottom-up initiatives have the potential to create networking synergies and cross-communal exchanges, in which local citizens can emotionally reconnect to their city and culture. Secondly, community-led projects are often a reminder that historic heritage sites must be integrated within communal living spaces; symbolic restoration of sites of religious pluralism (mosques, churches, shrines) must be accompanied by the rehabilitation of historic spaces of everyday cohabitation and co-operation. Thirdly, the nostalgic, romanticised past provides an avenue for social recovery and for the revival of civic pride: a historic vantage point for residents to process their loss and suffering. Mosul's heritage homes create both spaces to celebrate Mosul's illustrious past and opportunities to develop cultural awareness, heritage skills and citizen volunteerism.
This paper, however, does not seek simply to juxtapose heritage projects (externally led/grassroots-initiated) or to highlight the inherent tensions between global normative heritage concepts and domestic local realities. Instead, the authors recognise the need for a more holistic heritage approach in post-conflict settings. This inevitably involves integrating heritage restoration projects within wider reconstruction plans; co-ordinating stronger heritage networks across local/state/international actors; and utilizing heritage's potential for economic development and social recovery. Within Mosul, such an approach is possible but remains highly contingent on Iraq's national recovery and political stability, as well as the challenges of overcoming endemic corruption. On a local level, our findings point to the importance of integrating local heritage clusters and large-scale symbolic sites; grappling with how heritage should reflect displacement and demographic shifts; and navigating the impulse to honour the past while not neglecting traumatic pain and loss. If preservation and conservation efforts are adapted to meet the needs of conflict-affected communities, the city can heal, and its diverse heritage can help generate a shared, historically grounded identity.
2023-12-30T16:16:42.419Z
2023-12-28T00:00:00.000
{ "year": 2024, "sha1": "719569f42eada789b7cf28d59a0012cbade9a61a", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/14696053231220908", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "9a48b986cd2817a44cc9609f06214184673afc64", "s2fieldsofstudy": [ "History", "Sociology", "Political Science" ], "extfieldsofstudy": [] }
52018901
pes2o/s2orc
v3-fos-license
Oral steroids for resolution of otitis media with effusion in children (OSTRICH): a double-blinded, placebo-controlled randomised trial

Summary

Background: Children with persistent hearing loss due to otitis media with effusion are commonly managed by surgical intervention. A safe, cheap, and effective medical treatment would enhance treatment options. Underpowered, poor-quality trials have found short-term benefit from oral steroids. We aimed to investigate whether a short course of oral steroids would achieve acceptable hearing in children with persistent otitis media with effusion and hearing loss.

Methods: In this individually randomised, parallel, double-blinded, placebo-controlled trial we recruited children aged 2–8 years with symptoms attributable to otitis media with effusion for at least 3 months and with confirmed bilateral hearing loss. Participants were recruited from 20 ear, nose, and throat (ENT), paediatric audiology, and audiovestibular medicine outpatient departments in England and Wales. Participants were randomly allocated (1:1) to sequentially numbered identical prednisolone (oral steroid) or placebo packs by use of computer-generated random permuted block sizes stratified by site and child's age. The primary outcome was audiometry-confirmed acceptable hearing at 5 weeks. All analyses were by intention to treat. This trial is registered with the ISRCTN Registry, number ISRCTN49798431.

Findings: Between March 20, 2014, and April 5, 2016, 1018 children were screened, of whom 389 were randomised. 200 were assigned to receive oral steroids and 189 to receive placebo. Hearing at 5 weeks was assessed in 183 children in the oral steroid group and in 180 in the placebo group. Acceptable hearing was observed in 73 (40%) children in the oral steroid group and in 59 (33%) in the placebo group (absolute difference 7% [95% CI −3 to 17], number needed to treat 14; adjusted odds ratio 1·36 [95% CI 0·88–2·11]; p=0·16). There was no evidence of any significant differences in adverse events or quality-of-life measures between the groups.

Interpretation: Otitis media with effusion in children with documented hearing loss and attributable symptoms for at least 3 months has a high rate of spontaneous resolution. A short course of oral prednisolone is not an effective treatment for most children aged 2–8 years with persistent otitis media with effusion, but is well tolerated. One in 14 children might achieve improved hearing but not quality of life. Discussions about watchful waiting and other interventions will be supported by this evidence.

Funding: National Institute for Health Research (NIHR) Health Technology Assessment programme.

Exclusion criteria

Children with one or more of the following were not eligible for inclusion:

- Current involvement in another clinical trial of an investigational medicinal product (CTIMP), or participation in a CTIMP during the last 4 months
- Current systemic infection or ear infection
- Cleft palate, Down's syndrome, diabetes mellitus, Kartagener's or Primary Ciliary Dyskinesia, renal failure, hypertension or congestive heart failure
- Confirmed, major developmental difficulties (e.g. are tube fed, have chromosomal abnormalities)
- Existing known sensory hearing loss
- Taken oral steroids in the preceding 4 weeks
- Had a live vaccine in the preceding 4 weeks if aged under 3 years
- Has a condition that increases their risk of adverse effects from oral steroids (i.e.
on treatment likely to modify the immune system, or who are immunocompromised, such as undergoing cancer treatment)
- Has been in close contact with someone known or suspected to have Varicella (chicken pox) or active Zoster (shingles) during the 3 weeks prior to recruitment, and has no prior history of Varicella infection or immunisation
- Already has ventilation tubes (grommets)
- On a waiting list for grommet surgery, anticipating surgery within 5 weeks, and unwilling to delay it

Changes to methods

The main changes to the protocol that occurred during the conduct of the trial are summarised below.

A number of changes were made to the protocol to make it easier for sites to recruit patients and schedule the follow-up appointments. For example, we extended our site coverage into England; the eligibility criterion for audiometry-confirmed hearing loss was extended to 14 days preceding recruitment; follow-up visits were conducted in ENT or Audiology outpatient clinics; and the timeframe windows for follow-up were extended to +2 weeks for the 5-week follow-up and ±2 weeks for the 6- and 12-month follow-ups. Paediatric Audiology and AVM clinics were included as sites, and Audiovestibular Physicians were included as designated OSTRICH clinicians. Additions were made to the exclusion criteria, such as ear infections, Kartagener's or Primary Ciliary Dyskinesia, existing known sensory hearing loss, undergoing cancer treatment, being on a waiting list for grommet surgery with surgery anticipated within 5 weeks and unwillingness to delay it, and live vaccines in the 4 weeks prior to recruitment if aged under 3 years old.

A number of changes to the planned trial procedures were made due to time constraints resulting from the longer than anticipated recruitment period; for example, removal of the medical notes search and data linkage used to identify healthcare consultations during the 12-month follow-up period in secondary care and primary care. As a result of this, a specific assessment of resource use at baseline could not be collected. Lastly, a number of further amendments were made to the protocol, such as sending reminders for follow-up appointments, contacting parents regarding missed appointments, and an exploratory analysis to assess the association between baseline hearing threshold and quality of life. We also undertook a qualitative sub-study to explore parents' understanding of the treatment options available to them, their views on shared decision making in the context of managing glue ear, and their views on the use of oral steroids for glue ear.

The following changes were added to the statistical analysis plan following publication of the protocol paper and approved by the IDMC:

1. For the primary outcome, in addition to adjusting for child's age at recruitment, site and time to follow-up were also deemed important to adjust for.
2. A negative binomial model was used instead of the intended Poisson model due to overdispersion.
3. For the symptom scores, a component was added to combine each individual symptom score into an overall score so that the issue of multiple outcomes was overcome.

The following change was omitted from the SAP following publication of the protocol paper:

1. Given the limited number of weeks of follow-up, the duration between start and resolution of symptoms was not examined and modelled using a time-to-event (Cox regression) model. Instead, the analysis proposed in point 3 above was included.
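As a rough illustration of the SAP change in point 2 above, the sketch below fits both a Poisson and a negative binomial model to simulated overdispersed count data and compares them by AIC, the criterion the supplement cites. The data, effect sizes and variable names are hypothetical, not the trial's analysis code.

```python
# Hypothetical illustration of preferring a negative binomial over a Poisson
# model for overdispersed counts (e.g. OME-related consultations).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)                  # 0 = placebo, 1 = oral steroid
mu = np.exp(0.2 - 0.3 * group)                 # modest treatment effect
# Gamma-Poisson mixture: variance exceeds the mean (overdispersion)
counts = rng.poisson(mu * rng.gamma(shape=0.5, scale=2.0, size=n))

X = sm.add_constant(group)
poisson = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
negbin = sm.NegativeBinomial(counts, X).fit(disp=False)

print("Poisson AIC:", round(poisson.aic, 1))   # lower AIC = better fit
print("NegBin AIC: ", round(negbin.aic, 1))
print("IRR (steroid vs placebo):", round(float(np.exp(negbin.params[1])), 2))
```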
Further changes were made to the proposed longer-term modelling to be conducted as part of the health economic analysis as a result of the trial results, and these can be found in Appendix 2 of the funder's report (found at https://www.journalslibrary.nihr.ac.uk/).

Pre-specified sub-group analyses

The sub-group analyses that were pre-specified, and undertaken, were:

Category | Groups
Age | 2-5, 6-8
Atopy | History of atopy or not
Recent antibiotics for ear problem | Antibiotic consumption for ear problem in the past month or not
Previous OME | No previous episodes of OME, 1 or more previous episodes
Duration of symptoms relating to current episode | <12 months, ≥12 months
History of tonsillectomy or adenoidectomy | No previous tonsillectomy or adenoidectomy, previous tonsillectomy or adenoidectomy at any time
Smoker in the household | No current smokers in the household, one or more smokers (>5 hours per week) living in the household
Season of recruitment | Spring, summer, autumn, winter
Deprivation | Quintiles

Methods: Secondary outcomes analyses

Secondary outcomes with a binary outcome measured over several time points (audiology, tympanometry, and otoscopy) were analysed using repeated measures logistic regression, and effects were reported as adjusted odds ratios. For continuous outcomes (HUI3, PedsQL, and OM8-30 scores), repeated measures linear regression models were run, adjusting for baseline scores, and the effect of oral steroids was reported as adjusted differences in mean scores. Transformations (squared and cubed) of the raw scores were performed as necessary to improve residuals and model fit. If no transformation was suitable, the raw scores were dichotomised and a repeated measures logistic regression model used. All repeated measures models investigated differences between trial groups and over time, and included an interaction term for time and group to investigate any divergent or convergent pattern in outcomes.

Weekly scores were reported on the child's symptoms on a scale of 0 to 6 (not present to as bad as it could be) for eight symptoms (any problems with hearing, ear pain, speech, energy levels, sleep, attention span, balance, being generally unwell). Cronbach's alpha and factor analysis confirmed that the symptoms could be combined into an overall score. The effect of oral steroids on weekly overall scores was examined using a repeated measures linear model. Changes in nausea, and in behaviour and mood, over time were similarly examined. An adjusted multilevel Cox (shared frailty) regression model examined the days from recruitment to insertion of ventilation tubes, with the treatment effect reported as an adjusted hazard ratio (aHR). Days off school/work and OME-related healthcare consultations were analysed using a negative binomial model, with effects presented as an adjusted incidence rate ratio (aIRR) (oral steroid compared to placebo).

Methods: Health economic analyses

Discounting was not applied because the trial duration was only 12 months. The cost of the course of oral steroids was calculated and combined with differences in costs between intervention and control groups to determine overall costs associated with the intervention. The resource utilisation of both groups (consultations, medications, operations, equipment, etc.) and treatments associated with adverse events were assessed through self-completed questionnaires included in the parent diary at baseline, five weeks, six months and 12 months, and translated into costs using appropriate published unit costs.1
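To make the repeated-measures logistic modelling above concrete, here is a minimal sketch using a GEE with an exchangeable working correlation, one common way to implement such a model; the outcome, visit times and effect sizes are invented for illustration and are not taken from the trial dataset.

```python
# Hypothetical repeated-measures logistic regression (GEE) with a
# group-by-time interaction, as described for the binary secondary outcomes.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, visits = 150, 3
df = pd.DataFrame({
    "child": np.repeat(np.arange(n), visits),
    "weeks": np.tile([5, 26, 52], n),                  # assessment times
    "steroid": np.repeat(rng.integers(0, 2, n), visits),
})
logit_p = -0.5 + 0.3 * df["steroid"] + 0.005 * df["weeks"]
df["acceptable_hearing"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.gee(
    "acceptable_hearing ~ steroid * weeks",            # interaction term
    groups="child", data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(np.exp(model.params))                            # adjusted odds ratios
```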
All costs were recorded in 2015-16 prices in pounds (£) sterling. The cost of a 7-day course of oral soluble prednisolone, weighted for the different prescription dosages based on age, was estimated to be £59 (taking into account the prescription dispensing cost, this figure rises to £62). The cost-effectiveness analysis compared the incremental changes in costs with the differences in the primary outcome and PedsQL score; the cost-utility analysis used Quality Adjusted Life Years (QALYs) computed from the HUI3 score and from utilities derived from mapping responses to the OM8-30 questionnaire.2 Non-imputed data were used for the primary outcome to reflect the clinical analysis. A multiple imputation approach using the predictive mean matching technique was used to address the assumption of data missing at random for HRQOL outcomes.3 The multiple imputation approach included covariates to control for age, gender and site. A series of one-way sensitivity analyses were conducted to assess the impact of parameter variation on baseline estimates of the cost-effectiveness ratios, with non-parametric bootstrap methods used to present cost-effectiveness acceptability curves of the probability of oral steroids being considered cost-effective at different willingness-to-pay thresholds for the cost-utility analysis.

Subgroup analyses

The following tables show each of the pre-specified subgroup analyses. All analyses adjusted for site, child's age group at recruitment (2-5, 6-8 years), and time since recruitment of the 5-week assessment (days).

From the 349 diaries that were returned by parents (oral steroid 179; placebo 170), the total number of healthcare consultations relating to OME over the 5-week period was examined. Very few children consulted with any healthcare setting over the five weeks post-randomisation (Table S4.1), with no difference between treatment groups. Similar conclusions were found for time taken off school/nursery, or days off work for family members, for ear problems and other illnesses.

[Table fragment: 15 (9) vs 8 (4); IRR 0.49 (0.14 to 1.66); p=0.25]
a Adjusted for site and child's age group at recruitment (2-5, 6-8 years).
b Incidence rate ratio (IRR) of the oral steroid group compared to placebo; an IRR<1 indicates more events in the placebo group and an IRR>1 more events in the oral steroid group.
c Negative binomial model used due to overdispersion; a better-fitting model than Poisson (determined by the Akaike Information Criterion (AIC)).

Secondary outcomes - Symptom scores

The following eight problems were combined into a single symptom scale to avoid multiple outcomes: hearing, ear pain, speech, energy levels, sleep, attention span, balance, and being generally unwell. At 1 week post-randomisation, Cronbach's alpha for the eight symptom scores was 0.77, indicating good reliability between the eight symptoms and suggesting that they could be combined into a single symptom score ranging from 0 (problems not present at all) to 48 (all problems as bad as possible). The factor analysis also suggested that these symptoms could form a single scale. The Cronbach's alpha values for the subsequent four weeks were all >0.80, suggesting relatively high internal consistency over time. The distributions of the weekly overall symptom score were positively skewed, indicating that problems were largely absent. The highest median scores were at the end of week 1 (7 in placebo and 6 in oral steroid), indicating that these symptoms were not a problem. When scores were changed into a binary outcome (no vs.
some symptoms), there was no difference between treatment groups nor over time (Table S5.1). Two categories of symptoms (nausea, vomiting or indigestion; and changes in behaviour and mood over time) were examined separately; a high proportion of children had resolution of symptoms over time, with no difference between treatment groups.

Health economic results

The resource utilisation questionnaires provided the relative costs associated with each group, with unit prices given in Table S10.1. The primary cost-utility analysis (incremental cost per QALY gained at 12 months) found evidence for oral steroids being dominated (i.e. less effective and more costly) by placebo. The multiply imputed data showed a non-significant incremental QALY decrease for the steroid group over 12 months (Table S6.2). The cost increase of £145 and the -0·015 incremental QALYs result in the steroid group being dominated by the placebo. Bootstrapping the patient-level results (Figure S6.1) indicates that the probability of oral steroid treatment being cost-effective when compared to placebo at a £20,000 per QALY threshold is 17%, increasing slightly to 22% at a £30,000 per QALY threshold. The incremental increase in costs differs from that of the CEA due to the data treatment. Utilising the OM8-30 results to supplement the HUI3 CUA via a utilities mapping technique found non-significant differences in the incremental effect, with an impact of 0·004 QALYs.

[Figure S6.1: cost-effectiveness acceptability curve showing the probability of oral steroids being cost-effective against the value of the ceiling ratio (£10,000 to £50,000 per QALY).]
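The bootstrapped acceptability curve reported above can be sketched as follows: resample patient-level costs and QALYs with replacement, and at each willingness-to-pay threshold record how often the incremental net monetary benefit of steroids is positive. All figures below are simulated placeholders chosen only to echo the shape of the reported result; they are not trial data.

```python
# Hypothetical non-parametric bootstrap for a cost-effectiveness
# acceptability curve (CEAC): P(incremental net monetary benefit > 0).
import numpy as np

rng = np.random.default_rng(2)
n = 180                                   # per-arm sample size (illustrative)
cost_s = rng.normal(350, 120, n)          # steroid arm costs (£)
cost_p = rng.normal(205, 100, n)          # placebo arm costs (£)
qaly_s = rng.normal(0.870, 0.05, n)       # steroid arm QALYs
qaly_p = rng.normal(0.885, 0.05, n)       # placebo arm QALYs

B = 2000
for wtp in range(0, 50001, 5000):         # willingness to pay, £/QALY
    wins = 0
    for _ in range(B):
        i = rng.integers(0, n, n)         # resample each arm with replacement
        j = rng.integers(0, n, n)
        d_cost = cost_s[i].mean() - cost_p[j].mean()
        d_qaly = qaly_s[i].mean() - qaly_p[j].mean()
        wins += (wtp * d_qaly - d_cost) > 0
    print(f"£{wtp:>6}/QALY: P(cost-effective) = {wins / B:.2f}")
```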
2018-08-17T13:11:40.040Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "2e181d460c58422519b392dcdf4c1ea89f573c0c", "oa_license": "CCBYNCND", "oa_url": "http://www.thelancet.com/article/S0140673618314909/pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "503452af8045195b6ea6720781f376236d480554", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3442258
pes2o/s2orc
v3-fos-license
Effect of lithium on ventricular remodelling in infarcted rats via the Akt/mTOR signalling pathways

Activation of phosphoinositide 3-kinase (PI3K)/Akt signalling is the molecular pathway driving physiological hypertrophy. As lithium, a PI3K agonist, is highly toxic at regular doses, we assessed the effect of lithium at a lower dose on ventricular hypertrophy after myocardial infarction (MI). Male Wistar rats after induction of MI were randomized to either vehicle or lithium (1 mmol/kg per day) for 4 weeks. The dose of lithium led to a mean serum level of 0.39 mM, substantially lower than the therapeutic concentrations (0.8–1.2 mM). Infarction in the vehicle group was characterized by pathological hypertrophy in the remote zone; histologically, by increased cardiomyocyte sizes, interstitial fibrosis and left ventricular dilatation; functionally, by impaired cardiac contractility; and molecularly, by an increase of p-extracellular-signal-regulated kinase (ERK) levels, nuclear factor of activated T cells (NFAT) activity, GATA4 expression and foetal gene expressions. Lithium administration mitigated pathological remodelling. Furthermore, lithium caused increased phosphorylation of eukaryotic initiation factor 4E binding protein 1 (p-4E-BP1), the downstream target of mammalian target of rapamycin (mTOR). Blockade of the Akt and mTOR signalling pathway with deguelin and rapamycin resulted in markedly diminished levels of p-4E-BP1, but not ERK. The present study demonstrated that chronic lithium treatment at low doses mitigates pathological hypertrophy through an Akt/mTOR-dependent pathway.

Introduction

Ventricular remodelling is associated with cardiac physiological or pathological hypertrophy after myocardial infarction (MI), depending on interventional drugs [1]. Distinct signalling pathways are responsible for the development of cardiac pathological and physiological hypertrophy. Physiological hypertrophy is mediated primarily by the insulin-like growth factor-1/phosphoinositide 3-kinase (PI3K (p110α)) pathway [2]. Akt, a downstream target of PI3K, phosphorylates and activates the mammalian target of rapamycin (mTOR), which is central to cardiac physiological hypertrophy. Transgenics with a dominant-negative mutant of the PI3K subunit p110α or a disruption of the Akt1 gene have virtually no signs of hypertrophy in response to exercise training [3], a kind of cardiac physiological hypertrophy. In contrast, pathological hypertrophy is mediated by G-protein-coupled receptors (GPCRs) following stimulation by hormones such as angiotensin II and endothelin-1, both of which are increased after MI [4]. Activation of GPCRs results in a number of downstream signalling events, such as activation of mitogen-activated protein kinases (MAPKs) (e.g. extracellular-signal-regulated kinase 1/2 (ERK1/2)) and dephosphorylation of nuclear factor of activated T cells (NFAT) transcription factors by calcineurin [5]. NFAT is not activated by physiologic stimuli, suggesting that activation of NFAT may specifically regulate pathological remodelling of the myocardium [6]. Thus, the PI3K/Akt axis seems more linked to physiological hypertrophy, whereas MAPK signalling and NFAT pathways participate in the development of pathological hypertrophy. Physiological hypertrophy shows a normal cardiac structure with a relatively normal pattern of cardiac gene expression and improved cardiac function [7].
Pathological hypertrophy is associated with cardiomyocyte hypertrophy, interstitial fibrosis, cardiac dysfunction, left ventricular dilatation and increased expression of foetal genes such as atrial natriuretic peptide (ANP), β-myosin heavy chain (β-MHC) and skeletal α-actin [8,9]. Lithium has been the mainstay of treatment for bipolar disorder for more than 60 years. Lithium has been recognized for its neuroprotective effects against diverse insults, such as ischaemia, both in vitro and in vivo [10,11]. Recently, lithium has been shown to activate insulin-like growth factor-1 [5], which in turn triggered PI3K/Akt signalling pathways [12]. However, the mechanism whereby PI3K activation by lithium mediates ventricular remodelling after MI is unknown. In contrast, previous studies have shown that lithium has an additive effect on cardiac hypertrophy in a model of abdominal aortic banding, a pathological hypertrophy [13]. The effect of lithium after MI on physiological compared with pathological hypertrophy is unknown. Lithium is highly toxic at regular doses, and whether a subtherapeutic concentration is enough for optimal efficacy and acceptable toxicity remains controversial. Thus, the purpose of the present study was: (i) to investigate how lithium chloride (LiCl) at a low dose affects physiological or pathological hypertrophy during ventricular remodelling; and (ii) to assess the axis of the Akt/mTOR system in a rat MI model.

Materials and methods

All rats received humane care and the experiment was approved and conducted in accordance with local institutional guidelines of the China Medical University for the care and use of laboratory animals and conformed with the National Institutes of Health Guide for the Care and Use of Laboratory Animals.

Part 1

Male Wistar rats (250-300 g) were intubated and the anterior descending artery was ligated using a 6-0 silk, resulting in infarction of the left ventricle (LV), as previously described [4]. For surgery, haemodynamic measurements, electrophysiological studies and sacrifice, rats were intraperitoneally anaesthetized with ketamine (90 mg/kg body weight (BW)) and xylazine (9 mg/kg). Anaesthesia was monitored by rear-foot reflexes before and during procedures, observation of respiratory pattern, and responsiveness to manipulations throughout the procedures. Twenty-four hours after ligation, rats were randomly assigned to either a saline group (NaCl) or LiCl (1 mmol/kg per day). The drug was given orally by gastric gavage once a day. The drug was started 24 h after MI; during this window, the drug can maximize benefits while minimizing the possibility of a direct effect on infarct size [14]. For chronic lithium treatment, rats were given water and saline ad libitum to prevent hyponatraemia caused by lithium-induced increased excretion of sodium. To evaluate the general toxicity of lithium, BW was monitored weekly. Mortality rate and general conditions of the animals were also observed daily throughout the whole experiment. The study duration was designed to be 4 weeks because the majority of the myocardial remodelling process in the rat (70-80%) is complete within 3 weeks [14]. Sham rats underwent the same procedure except that the suture was passed under the coronary artery and then removed. The sham operation served as control.

Part 2

Although results of the above study showed that LiCl significantly increased ventricular hypertrophy after infarction (see 'Results'), the involved mechanism remained unclear.
To rule out a non-specific effect of lithium and confirm the importance of Akt and mTOR signalling in LiCl-induced hypertrophy, we employed deguelin (a specific Akt inhibitor) and rapamycin (an mTORC1 inhibitor) in an ex vivo experiment. Four weeks after induction of MI by coronary ligation, infarcted rat hearts were isolated and subjected to saline (NaCl), LiCl (0.4 mM), or a combination of LiCl and deguelin (10 μM, Sigma, St. Louis, MO) or LiCl and rapamycin (0.4 μM, Sigma, St. Louis, MO). Each heart was perfused with a non-circulating modified Tyrode's solution as previously described [15]. Drugs were infused for 1 h. The doses of LiCl, deguelin and rapamycin used were as previously described [2,5,16]. At the end of the study, all hearts (n=5 per group) were used for Western blot of eukaryotic initiation factor 4E binding protein 1 (4E-BP1) and ERK in the remote zone (>2 mm outside the infarct).

Echocardiogram

At 28 days after operation, rats were lightly anaesthetized with an intraperitoneal injection of ketamine (45 mg/kg) and xylazine (5 mg/kg). Echocardiographic measurements were done using the GE Healthcare Vivid 7 Ultrasound System (Milwaukee, WI) equipped with a 14-MHz probe. M-mode tracing of the LV was obtained from the parasternal long-axis view to measure LV end-diastolic diameter dimension (LVEDD), LV end-systolic diameter dimension (LVESD) and fractional shortening (FS, %). The wall tension index (WTI) was defined as the ratio LVEDD/(2 × posterior wall thickness), as described previously [17]. WTI was measured in order to indirectly assess myocardial wall stress. After this, the rats quickly underwent haemodynamic measurement after systemic heparinization.

Haemodynamics and infarct size measurements

Haemodynamic parameters and infarct size were measured in anaesthetized rats at the end of the study, as described in detail in the Supplementary Material online.

Western blot analysis of Ser473-p-Akt1, Akt1, Thr37/46-p-4E-BP1, 4E-BP1, Thr202/Tyr204-p-ERK1/2 and ERK1/2

Samples were obtained from the remote zone at week 4 after infarction. Experiments were replicated three times and results were expressed as the mean value, as described in detail in the Supplementary Material online.

Real-time RT-PCR of GATA4, ANP, β-MHC and skeletal α-actin

mRNAs were quantified by real-time RT-PCR with cyclophilin as a loading control. The cardiac-specific transcription factor GATA4 plays important roles in cardiac hypertrophy. For a detailed method, please refer to the Supplementary Material online.

Morphometric determination of myocyte size and interstitial fibrosis

Because ventricular remodelling after infarction is a combination of reactive fibrosis and myocyte hypertrophy, we measured cardiomyocyte sizes in addition to myocardial weight to avoid the confounding influence of non-myocytes on cardiac hypertrophy. For a detailed method, please refer to the Supplementary Material online.

Laboratory measurement

Blood samples were collected from rats at the end of the study from the ascending aorta, and serum was separated by centrifugation for the estimation of lithium levels using an EEL flame photometer. NFAT activity was analysed by ELISA according to the manufacturer's instructions (TransAM NFAT Family Transcription Factor Assay Kit; Active Motif). Briefly, nuclear extracts were added to the wells of a 96-well plate that contained the immobilized oligonucleotide carrying an NFAT consensus site, 5′-AGGAAA-3′.
Proteins bound to this immobilized oligonucleotide were detected by incubating with a primary antibody that recognizes active NFAT, followed by a horseradish peroxidase-conjugated secondary antibody, and were quantified by spectrophotometry at 450 nm with a reference wavelength of 650 nm. Histological collagen results were confirmed by a hydroxyproline assay adapted from Stegemann and Stalder [18]. The samples from remote areas were immediately placed in liquid nitrogen and stored at −80 °C until measurement of the hydroxyproline content. The results were calculated as hydroxyproline content per weight of tissue.

Statistical analysis

Results were presented as mean ± S.D. Comparisons among groups were assessed for significance by one-way ANOVA. When significant differences were detected, individual mean values were compared by Bonferroni's post hoc test (SPSS, version 18.0, Chicago, IL). Probability values were two-tailed and P<0.05 was considered to be statistically significant.

Lithium affects ventricular remodelling

Differences in mortality rates between saline- and lithium-treated infarcted groups were not found throughout the study. Most of the mortalities occurred within the first hours after ligation, with deaths due to excessive infarction and arrhythmia. No animals died due to lithium treatment.

[Table 2 notes: values are mean ± S.D.; abbreviations as in Table 1; LVPW, left ventricular posterior wall; *P<0.05 compared with respective sham; †P<0.05 compared with saline-treated infarcted group; ‡P<0.05 compared with saline-treated sham.]

Relative heart weights corrected for tibia length at the end of the experimental period (12 weeks of age) are presented in Table 1. Consistent with a previous study [19], the gain in BW in lithium-treated rats was less than that in the saline-treated rats, despite there being no difference in weight at the start of the study. Four weeks after infarction, the infarcted area of the LV was very thin and was totally replaced by fully differentiated scar tissue. The weight of the LV inclusive of the septum remained essentially constant for 4 weeks between the two infarcted groups. The lung weight (LungW)/tibia ratio, an index of lung oedema, was significantly lower in the LiCl-treated infarcted group compared with that in the saline-treated infarcted group. The values of +dp/dt and -dp/dt were significantly higher in the LiCl-treated infarcted group compared with those in the saline-treated infarcted group. LV end-systolic pressure (LVESP), LV end-diastolic pressure (LVEDP) and infarct size did not differ between the two infarcted groups. To characterize the cardiac hypertrophy on a cellular level, morphometric analyses of LV sections were performed (Figure 1a). Compared with saline-treated sham, saline-treated infarcted rats showed structural changes such as increased cardiomyocyte sizes (Figure 1b-b'), consistent with LV remodelling. LiCl-treated infarcted rats had a further increase in cardiomyocyte size compared with saline-treated infarcted rats. Fibrosis of the LV from the remote area was examined in tissue sections after Sirius red staining, as shown in Figure 1c-c'. Compared with sham, infarcted rats treated with saline had significantly increased fibrosis, as evidenced by increased collagen staining. The lithium-treated infarcted rats showed attenuated cardiac fibrosis compared with saline-treated infarcted rats.
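To illustrate the statistical procedure described in the Methods above (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons), here is a minimal sketch on invented group data; the values are placeholders, not the study's measurements, and SciPy stands in for SPSS in this example.

```python
# Hypothetical one-way ANOVA with Bonferroni-corrected pairwise t tests,
# mirroring the comparison of sham, MI+saline and MI+lithium groups.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
groups = {
    "sham":       rng.normal(1.0, 0.2, 8),   # e.g. relative myocyte size
    "MI+saline":  rng.normal(1.6, 0.2, 8),
    "MI+lithium": rng.normal(1.9, 0.2, 8),
}

f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4g}")

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)                    # Bonferroni-adjusted threshold
for a, b in pairs:
    t, p_pair = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p_pair:.4g}, significant = {p_pair < alpha}")
```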
Measurement of hydroxyproline content mirrored the histological observation (3.26 ± 0.96% dry weight tissue in saline-treated infarcted rats compared with 2.48 ± 0.67% dry weight tissue in lithium-treated infarcted rats, P<0.05). LV functional parameters were studied by echocardiography 28 days after the surgical procedure (Table 2, Figure 2). Compared with sham-operated hearts, MI hearts showed structural changes such as increased LV diastolic and systolic diameters, consistent with LV remodelling. Both LVEDD and LVESD in rats with MI were significantly reduced by LiCl compared with saline (P<0.05). LV FS was significantly higher in the LiCl-treated infarcted group compared with saline. A significant decrease in WTI was observed in the LiCl-treated infarcted group compared with saline (P<0.05). These data were corroborated by the results that +dp/dt and -dp/dt were significantly improved in the LiCl-treated infarcted group compared with saline.

Lithium increases phosphorylation of Akt and 4E-BP1

Western blot showed that lithium treatment resulted in a significant increase (P<0.05) in the relative p-Akt level, 32 ± 8% compared with 25 ± 4% in saline-treated infarcted rats (Figure 3). Treatment with LiCl enhanced 4E-BP1 phosphorylation by 138% (P<0.01) in the infarcted rats compared with the saline-treated rats. This effect of lithium treatment on the levels of 4E-BP1 phosphorylation was completely blocked in the presence of deguelin or rapamycin (Figure 4), implying an Akt/mTOR axis in regulating 4E-BP1 activity.

Lithium inhibits NFAT and ERK activities

As expected, MI significantly increased NFAT-dependent transcription compared with sham (Figure 5a). Lithium administration significantly reduced NFAT activity by 21% (P<0.05) in the infarcted rats compared with the saline-treated rats. These data indicate that a low concentration of lithium is efficient at selectively inhibiting important regulators involved in pathological hypertrophy (such as NFAT). In addition, the MI-induced up-regulation of p-ERK levels was attenuated in the presence of lithium (Figure 3). Finally, to further assess the role of the Akt/mTOR pathway in lithium-attenuated p-ERK levels, Western blot was performed on infarcted hearts treated with deguelin or rapamycin in an ex vivo model. As shown in Figure 4, neither deguelin nor rapamycin affected ERK phosphorylation compared with lithium alone, implying that the attenuated ERK levels after adding lithium are not related to the Akt/mTOR pathway.

[Figure 4: Western blot analysis of 4E-BP1 and ERK, to further confirm Akt and mTOR kinase activity in homogenates of the LV from the remote zone in a rat isolated infarcted heart model. A significantly increased p-4E-BP1 level is noted in the LiCl-treated group compared with that seen in the saline-treated group, which was attenuated after administering deguelin (a specific Akt inhibitor) and rapamycin (an mTORC1 inhibitor).]

Discussion

Our data indicate for the first time that lithium at a low dose could be utilized to alleviate the pathological development of hypertrophy and improve adaptive physiological cardiac growth. These results were concordant for beneficial effects of lithium, as documented structurally by an increase in myocyte sizes, molecularly by myocardial Akt/4E-BP1 levels, and functionally by improvement of cardiac contractility.
Our results were consistent with a previous observation that enhanced PI3K activity by pharmacological intervention had a beneficial impact against subsequent pressure overload by inhibiting pathological processes [20]. Thus, lithium acts as an activator of physiological hypertrophy and inhibits pathological hypertrophy. The present study provides several novel findings that increase our understanding of the signal transduction mechanism of the cardioprotection afforded during ventricular remodelling. A low dose of lithium is efficient at selectively inhibiting regulators involved in pathological hypertrophy (such as NFAT, ERK and GATA4), while enhancing pathways involved in physiological hypertrophy (Akt and 4E-BP1), as evidenced by three observations (Figure 6).

[Figure 6: The diagram summarizes the histological, molecular and pharmacological evidence. Inhibition of these signalling pathways by their respective inhibitors is indicated by the vertical lines. Our data suggest that lithium is efficient at selectively inhibiting important regulators involved in pathological hypertrophy (such as NFAT and ERK), while augmenting pathways involved in physiological hypertrophy (such as 4E-BP1).]

Lithium shows beneficial effects at a low dose

Recommendations for target serum lithium concentrations (0.8-1.2 mM) appear to have been originally derived from studies of the effect of lithium on various indexes such as mania recurrence [21]. Despite the obvious advantages of chronic lithium therapy, its clinical use is often curtailed by its narrow therapeutic index and its devastating overdose-induced toxicity. Furthermore, it should be noted that in spite of therapeutic lithium serum levels, wide variations between serum lithium levels and intracellular concentrations of lithium have been reported [22,23]. However, low dose levels have scarcely been assessed in animal and clinical studies. The present study showed that lithium at a low dose can provide beneficial biological effects after MI.

Lithium enhances physiological remodelling

Both lithium- and infarction-induced hypertrophy are associated with an increase in myocyte size but with distinct molecular and histological phenotypes. During ventricular remodelling after MI, pathological cardiac hypertrophy is characterized molecularly, by the induction of foetal genes, such as ANP, β-MHC and skeletal α-actin; histologically, by increased interstitial fibrosis and left ventricular dilatation; and functionally, by impaired cardiac contractility. Given the reduction in LV dilatation, better-preserved systolic function, reduction in WTI and decreased expression of foetal genes with lithium treatment, this hypertrophy may be viewed as more 'physiological' than 'pathological'. Given that p-4E-BP1 is activated in the development of physiological hypertrophy, lithium-increased p-4E-BP1 levels are blunted to a level similar to the vehicle group when Akt1 is inhibited by deguelin, implying that Akt is essential for the development of physiological hypertrophy. Previously, Chun et al. [24] reported that deguelin treatment had only minimal effects on the MAPK pathway. In our study, we also found no significant differences in phosphorylation levels of ERK, suggesting that deguelin application does not have a sufficient effect on MAPK signalling after MI. Whether Akt-induced cardiac hypertrophy is physiological or pathological is complex [17]. Activation of PI3K/Akt1 signalling is required for exercise-induced hypertrophy [16].
Others also point out that Akt1 is a critical mediator of pathological cardiac hypertrophy [25,26]. These latter conclusions, however, are derived from transgenic mouse models overexpressing constitutively active Akt1 at 15-fold higher than the physiological levels. Overexpression of Akt1 to this extent can overtake the function of other Akt isoforms and can also lead to off-target effects because of non-physiological protein-protein interactions and aberrant intracellular localization. Indeed, Akt1 has a dichotomous role in cardiac remodelling by mediating physiological compared with pathological signalling based upon the duration, the intensity and the type of stress. Our study answered the question of the effect of chronic pharmacological activation of Akt with lithium on cardiac hypertrophy after MI. Our results do not seem consistent with previous studies showing that chronic Akt1 activation, which activates mTORC1, worsens aging-induced cardiac hypertrophy and impairs myocardial contractility [27]. mTOR exerts its main cellular functions by interacting with specific adaptor proteins to form two distinct multiprotein complexes, mTORC1 and mTORC2 [28]. mTORC1 has been shown to play a crucial role in the regulation of cellular homoeostasis, growth and response to stress. However, its functional role is still under debate because different roles of mTORC1 have been suggested under various experimental conditions. The data from pharmacological modulators of mTOR and from animal models with genetic modifications of the components of the mTOR signalling pathway can be expected to differ because the degree of mTORC1 activation among models is different. The degree of mTORC1 activation, and which mTORC1 physiological functions must be preserved to convert mTORC1 activation from detrimental into beneficial during cardiac stress, remains unclear. Indeed, our results were consistent with previous findings showing that activation of the mTORC1/4E-BPs axis plays a role in physiological hypertrophy [29].

Lithium inhibits pathological remodelling

The PI3K pathway can inhibit pathological growth in addition to promoting physiological growth. Akt is activated by PI3K (p110α) to induce physiological hypertrophy but is also activated in response to GPCR agonists, e.g. endothelin-1, via another PI3K isoform (p110γ) that induces pathological hypertrophy [2]. That is why the p-Akt levels were significantly higher after inducing MI (Figure 3). Furthermore, PI3K (p110α) signalling negatively regulated GPCR-stimulated extracellular responsive kinase and Akt (via PI3K, p110) activation [16]. Thus, although there was similar activation of Akt between the two groups of lithium-treated sham rats and vehicle-treated infarcted rats, we assessed ERK1/2 activation. p-ERK was significantly increased in infarcted hearts but not changed in the lithium-treated sham, implying different downstream signalling pathways. Finally, in the infarcted rats, lithium administration reduced the p-ERK levels and NFAT activity compared with the saline group, implying an inhibitory effect of lithium on pathological hypertrophy. Our results were consistent with the notion that the PI3K/Akt axis is more linked to physiological hypertrophy, whereas MAPK signalling, in collaboration with the NFAT pathway, participates in the development of pathological hypertrophy [30]. To more directly address this interpretation, molecular markers of pathological cardiac hypertrophy were analysed by mRNA.
The data showed that ventricular remodelling was associated with the expression of ANP, β-MHC and skeletal α-actin in the heart. Dephosphorylated NFAT (increased activity) enters the nucleus, where it interacts with GATA4 and causes transcriptional activation of hypertrophic foetal genes, leading to cardiomyocyte hypertrophy [31]. The expression of these foetal genes was inhibited after lithium administration, consistent with the conclusion that lithium inhibits pathological remodelling.

Other mechanisms

Although the present study suggests that the mechanisms of lithium-induced physiological ventricular remodelling may be related to an Akt/mTOR axis, other pathways may take part in the effect of lithium. It may be supposed that lithium elicits cardioprotection, in part, through its ability to inhibit GSK-3β by increasing GSK-3β phosphorylation. Previous studies have shown that the lack of GSK-3β phosphorylation in response to pressure overload is associated with reduced hypertrophy and development of dilated cardiomyopathy, highlighting the important role of GSK-3β phosphorylation in the development of compensatory hypertrophy [32]. Thus, lithium may increase physiological cardiac hypertrophy by inhibiting GSK-3β activity.

Clinical implications

The present study was undertaken to explore the possibility that lithium might have clinical efficacy for treatment after MI. Traditional therapeutics to prevent post-MI remodelling (e.g. angiotensin-converting enzyme inhibitors and angiotensin-receptor antagonists) are effective to some degree, but progression to congestive heart failure or death, despite standard approaches, is common. Novel signalling pathways involved in cardiac remodelling after MI, like Akt/mTOR signalling, need to be explored. Induction of physiological cardiac hypertrophy may be a potential therapeutic strategy for the treatment of heart failure. Lithium's ability to induce hypertrophy would reduce the LV WTI according to Laplace's law, and its effects on contractility would enhance the function of the non-infarcted myocardium, both of which would be of particular benefit if applied while the process of remodelling was beginning. Our findings may have potential implications for lithium as a therapeutic agent for the treatment of patients post-MI. Activation of PI3K (p110α), via exercise training or pharmacological approaches, offers a novel therapeutic strategy for preventing LV remodelling in patients at risk of developing heart failure. While angiotensin-converting enzyme inhibitors and angiotensin receptor blockers slow LV remodelling by targeting pathological hypertrophy signalling pathways, activation of PI3K (p110α) attenuates LV remodelling by activating physiological hypertrophy signalling pathways as well as inhibiting pathological signalling pathways.
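The appeal to Laplace's law in the Clinical implications paragraph can be made explicit. The following is a minimal sketch using the thin-walled sphere form of the law; the choice of this form, and the reading of WTI as a wall tension index, are assumptions added here rather than statements from the original text.

```latex
% Laplace's law for a thin-walled sphere, as an approximation of the left ventricle:
% wall stress \sigma in terms of cavity pressure P, cavity radius r and wall thickness h.
\sigma = \frac{P\,r}{2h}
% Hypertrophy increases the wall thickness h (and limits post-MI dilatation of r),
% so at a given pressure P the wall stress \sigma, and with it a wall tension index,
% decreases.
```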
Reversibility in the diffeomorphism group of the real line

An element of a group is said to be reversible if it is conjugate to its inverse. We characterise the reversible elements in the group of diffeomorphisms of the real line, and in the subgroup of order preserving diffeomorphisms.

Introduction

An element of a group is reversible if it is conjugate to its inverse. We say that such an element is reversed by its conjugator. A diffeomorphism of the real line R is an infinitely differentiable homeomorphism of R whose derivative never vanishes. We consider the group Diffeo(R) of all diffeomorphisms of R, and the subgroup Diffeo+(R) of order preserving diffeomorphisms. The object of this paper is to characterise the reversible elements in each of these two groups.

An involution in a group is an element of order two. One way to obtain a reversible element is to form the product of two involutions. Such an element is reversed by each of the two involutions, and conversely, an element that is reversed by an involution can be expressed as a product of two involutions. Elements reversed by involutions are called strongly reversible. The only involution in Diffeo+(R) is the identity map. There are many non-trivial involutions in Diffeo(R), but all are conjugate to the map x → −x.

Interest in reversibility originates from the theory of time-reversible symmetry in dynamical systems, and background to the subject can be found in [2,6]. Finite group theorists use the terms real and strongly real instead of reversible and strongly reversible, because an element g of a finite group G is reversible if and only if each irreducible character of G takes a real value when applied to g. Reversibility in the homeomorphism group of the real line has been considered before by Jarczyk [4] and Young [12]. See also [3,8]. Reversibility in the group of invertible formal power series was considered by O'Farrell in [9]. Previously, in [1], Calica had studied reversibility in groups of germs of homeomorphisms and diffeomorphisms that fix 0. Reversibility in the diffeomorphism group of the real line is of particular interest because, whilst it is difficult to fully classify conjugacy in Diffeo+(R) and Diffeo(R), we are able to give a complete account of reversibility.

The main results of the paper follow.

Theorem 1.1. An element of Diffeo+(R) is reversible if and only if it is conjugate to a map f in Diffeo+(R) that fixes each integer and satisfies

f(x + 1) = f^{-1}(x) + 1 for each real number x.    (1.1)

Theorem 1.1 gives us an explicit method to generate all order preserving reversible diffeomorphisms.
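The following short computation, added here as an illustration rather than quoted from the paper, shows why condition (1.1) is precisely the statement that f is reversed by the unit translation.

```latex
% Let t(x) = x + 1, so t^{-1}(x) = x - 1. Conjugating f by t gives
(t f t^{-1})(x) = f(x - 1) + 1 .
% Hence t f t^{-1} = f^{-1} holds exactly when f(x - 1) + 1 = f^{-1}(x) for all x,
% that is (substituting x + 1 for x),
f(x) + 1 = f^{-1}(x + 1) .
% Since a map reverses f if and only if it reverses f^{-1}, interchanging f and
% f^{-1} gives the form recorded in (1.1):
f(x + 1) = f^{-1}(x) + 1 .
```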
Theorem 1.2. An element of Diffeo(R) is reversible in Diffeo(R) if and only if it is either (i) reversible in Diffeo+(R) or (ii) strongly reversible in Diffeo(R).

The alternatives (i) and (ii) are not exclusive. If f is an order reversing reversible diffeomorphism then Theorem 1.2 tells us that it is strongly reversible. Composing two non-trivial involutions in Diffeo(R) gives rise to an order preserving diffeomorphism; hence f must be an involution.

We prove the following result about composites of reversible maps.

Theorem 1.3. Each member of Diffeo+(R) can be expressed as a composite of four reversible diffeomorphisms.

We do not know whether each element of Diffeo+(R) can be expressed as a composite of three, or even two, reversible elements. We also prove the following result about composites of involutions.

Theorem 1.4. Each member of Diffeo(R) can be expressed as a composite of four involutions.

The number four in this theorem is sharp, because an order preserving diffeomorphism that is not strongly reversible cannot be expressed as a composite of three involutions. An obvious corollary of Theorem 1.4 is that each element of Diffeo(R) can be expressed as a composite of two (strongly) reversible diffeomorphisms.

The structure of the paper is as follows. Section 2 contains relevant background material. Then in Section 3 we focus on the group Diffeo+(R), and prove all our results related to that group, including Theorems 1.1 and 1.3. Section 4 is about the group Diffeo(R), and in that section we prove all our results about Diffeo(R), including Theorems 1.2 and 1.4. Finally, in Section 5 we list some open problems.

Background results

An element f of a group G is reversible if there is another element h in G such that hfh^{-1} = f^{-1}. We say that h reverses f, or that f is reversed by h. If g is an element of G that is conjugate to f then g is also reversible. We denote the fixed point set of a homeomorphism g by fix(g). Listed below are several results about diffeomorphisms of the real line.

Lemma 2.1 (Sternberg, [11]). Each fixed point free member of Diffeo+(R) is conjugate either to the map x → x + 1 or the map x → x − 1.

We remark that x → x + 1 and x → x − 1 are not conjugate in Diffeo+(R), but they are conjugate by the map x → −x in Diffeo(R).

Lemma 2.2 (Kopell, Lemma 1(a), [5]). Suppose that f and g are C^2 order preserving homeomorphisms of an interval [a, b) such that fg = gf. If g has no fixed points in (a, b), but f has a fixed point in (a, b), then f is the identity map.

For f in Diffeo(R), we use the notation T_a f to denote the truncated Taylor series of f at a, regarded as a formal power series in the indeterminate X. Let P denote the group of formally invertible formal power series having real coefficients, under the operation of formal composition. The identity of P is the series X. The formal inverse of a power series P is denoted P^{-1}.

Lemma 2.3 (Kopell, Lemma 1(b), [5]). Let f and g be two elements of Diffeo+(R) that both fix 0 and commute. If T_0 f = X and 0 is not an interior point of fix(f), then T_0 g = X also.

Our final lemma is about reversibility of formal power series. We are unable to find a precise reference for this lemma, so we provide a brief proof that relies on results from [7,9].

Lemma 2.4. Suppose that S and T are members of P such that TST^{-1} = S^{-1}, where S = X + a_p X^p + ..., with a_p ≠ 0. Then T is an involution.

Proof. Let λ denote the X coefficient of T. By [9, Theorem 4 and Corollary 6], p is even. Since T^2 commutes with S, we deduce from [7, Proposition 1.5] that the X coefficient of T^2, namely λ^2, equals 1. Moreover, comparing the coefficients of X^p on both sides of TST^{-1} = S^{-1} gives a_p λ^{1−p} = −a_p, so λ^{p−1} = −1; since p − 1 is odd, λ = −1. Next, we can apply [9, Lemma 3(ii)] to deduce that either T is an involution (in which case the lemma is proved), or else there is an odd integer q and a non-zero real number b_q such that T^2 = X + b_q X^q + .... Since T^2 commutes with S, the order q would have to coincide with p, and we reach a contradiction because p is even and q is odd. Therefore T is an involution.

Reversible maps

Elementary dynamical considerations tell us that a reversible element in the group of order preserving homeomorphisms of R must have infinitely many fixed points. In fact, a reversing conjugation can only achieve its purpose by shunting the components of the complement of the fixed point set. This was first pointed out by Calica [1].
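The fixed point free translation illustrates the contrast between the two groups: having no fixed points at all, it cannot be reversed within Diffeo+(R), yet in Diffeo(R) it is reversed by an involution. A quick check, added here for the reader and not quoted from the paper:

```latex
% Let t(x) = x + 1 and \sigma(x) = -x, so \sigma^{-1} = \sigma is an involution.
(\sigma t \sigma)(x) = \sigma(-x + 1) = x - 1 = t^{-1}(x) ,
% so \sigma t \sigma^{-1} = t^{-1}: the translation t is strongly reversible.
% Equivalently, t = (\sigma t)\,\sigma is a product of two involutions, since
% (\sigma t)(x) = -x - 1 satisfies (\sigma t)\bigl((\sigma t)(x)\bigr) = -(-x - 1) - 1 = x .
```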
We use the following lemma about homeomorphisms.

Lemma 3.1. Suppose that f and h are order preserving homeomorphisms of R such that hfh^{-1} = f^{-1}. Then each fixed point of h is also a fixed point of f.

Proof. Suppose that h fixes the point p. We have two equivalent equations, hfh^{-1} = f^{-1} and hf^{-1}h^{-1} = f. From these equations we obtain hf(p) = f^{-1}(p) and hf^{-1}(p) = f(p). Order preserving homeomorphisms such as h have no periodic points other than fixed points; thus, since h^2 f(p) = hf^{-1}(p) = f(p), the point f(p) is fixed by h, so that f(p) = hf(p) = f^{-1}(p). Hence f^2(p) = p, and therefore f(p) = p.

The next lemma works for diffeomorphisms, but not for homeomorphisms.

Lemma 3.2. Suppose that f and h are order preserving diffeomorphisms of R such that hfh^{-1} = f^{-1}. If h has a fixed point then f is the identity map.

Proof. If h is the identity map then f is an order preserving involution, and therefore f is also the identity map. Suppose then that h is not the identity map, but that it nevertheless has a fixed point. Choose a component (a, b) in the complement of fix(h). One of a or b is a real number (that is, we cannot have both a = −∞ and b = +∞). Let us assume that a is a real number; the other case can be dealt with similarly. By Lemma 3.1, f fixes (a, b) as a set. The map f cannot be free of fixed points on (a, b). To see this, suppose, by switching f and f^{-1} if necessary, that f(x) > x for each real number x in (a, b). Then, since h maps (a, b) to itself and preserves order, for each x in (a, b) we have

f^{-1}(x) = hfh^{-1}(x) > hh^{-1}(x) = x,

which is a contradiction, because f(x) > x forces f^{-1}(x) < x. Since f has a fixed point in (a, b), Lemma 2.2 applied to the maps f and h^2 shows that f coincides with the identity map on (a, b). We already know that f coincides with the identity map on fix(h); thus f is the identity map.

We are now in a position to prove the first main result.

Proof of Theorem 1.1. Let t be the map given by t(x) = x + 1 for each x. Then (1.1) states that tft^{-1} = f^{-1}. Thus a diffeomorphism f that satisfies (1.1) is reversible, and likewise all conjugates of f are reversible.

Conversely, suppose that g and h are elements of Diffeo+(R) such that hgh^{-1} = g^{-1}. The theorem holds when g is the identity, so let us suppose that g is not the identity. This means that we can assume, by Lemma 3.2, that h is free of fixed points. Observe that h^{-1}gh = g^{-1}; hence by replacing h with h^{-1} if necessary we can assume that h(x) > x for each x. By Lemma 2.1 we see that there is an element k in Diffeo+(R) such that khk^{-1} = t. Define f = kgk^{-1}. Then tft^{-1} = f^{-1}; that is, (1.1) holds. Now f, like g, must have a fixed point, and by conjugating f by a translation we may assume that this fixed point is 0. Since translations commute, this conjugation does not affect (1.1). Finally, from the equation tf^n t^{-1} = f^{-n} we deduce that f fixes each integer.

We can construct a diffeomorphism f that satisfies (1.1) explicitly by defining f on [0, 1] to be an arbitrary order preserving diffeomorphism of [0, 1] such that T_0 f = (T_1 f)^{-1}, and then extending the domain of f to R using (1.1). More precisely, we have the following corollary of Theorem 1.1.

Remark 3.4. Each map f of part (ii) commutes with x → x + 2. Hence f is the lift under the covering map x → exp(πix) of the order preserving diffeomorphism F of the unit circle given by F(e^{iπθ}) = e^{iπf(θ)}. Moreover f is reversed by x → x + 1; thus F is reversed by rotation by π.

Composites of reversible maps

Lemma 3.5. Each fixed point free element of Diffeo+(R) can be expressed as a composite of two reversible elements of Diffeo+(R).
Proof. By Lemma 2.1 it suffices to find a single fixed point free map that can be expressed as a composite of two reversible diffeomorphisms. Let f be a reversible order preserving diffeomorphism such that f(x + 1) = f^{-1}(x) + 1 for each real number x, and f(y) > y for each element y of (0, 1). The graph of such a map f is shown in Figure 1.

Let a be an element of the interval (1/2, f(1/2)). Notice that every order preserving diffeomorphism h of [1/2, a] satisfies h(x) < f(x) for x in [1/2, a]. Choose an order preserving diffeomorphism g of [a, 5/2] such that T_a g = T_{5/2} g = X, and such that g(x) < f(x) for each x in [a, 5/2]. (This construction is possible by a classic result of Borel, which says that to each formal power series P there corresponds a smooth function f defined in a neighbourhood of 0 such that T_0 f = P.) Next, choose an order preserving diffeomorphism k from [1/2, a] to [a, 5/2] such that T_{1/2} k = T_a k = X. We extend the definition of g to R by defining g(x) = k^{-1}g^{-1}k(x) for x in [1/2, a], and g(x + 2) = g(x) + 2 for all x in R. We extend the definition of k by defining k(x) = k^{-1}(x) + 2 for x in [a, 5/2] and k(x + 2) = k(x) + 2 for all x in R. The resulting maps g and k are both order preserving diffeomorphisms. Moreover, one can check that the equation g(x) = k^{-1}g^{-1}k(x) is satisfied for points x in [1/2, 5/2]. Since both maps commute with x → x + 2, this equation is satisfied throughout R, so g is reversible. Finally, we have defined g such that f(x) > g(x) for elements x of (1/2, 5/2], and in fact f(x) > g(x) everywhere, again because both maps commute with x → x + 2. Therefore g^{-1}f is a fixed point free diffeomorphism expressed as a composite of two reversible maps.

Proof of Theorem 1.3. Choose f in Diffeo+(R). Choose a fixed point free diffeomorphism g such that g(x) < f(x) for each x in R. Then g^{-1}f(x) > x for each x in R, so the map h = g^{-1}f is also free of fixed points. Since f = gh, the result follows from Lemma 3.5.

We do not know whether each element of Diffeo+(R) is the composite of three reversible elements.

Order reversing reversible maps

We denote the set of order reversing diffeomorphisms of R by Diffeo−(R). The next proposition fails for homeomorphisms.

Proposition 4.1. An order reversing member of Diffeo(R) is reversible in Diffeo(R) if and only if it is an involution.

Proof. Involutions are all reversible by the identity map. Conversely, suppose that f ∈ Diffeo−(R), h ∈ Diffeo(R), and hfh^{-1} = f^{-1}. By replacing h with hf if necessary, we may assume that h preserves order. From the equation hf = f^{-1}h we deduce that h fixes the unique fixed point of f. Now, hf^2h^{-1} = f^{-2}, and f^2 preserves order; therefore Lemma 3.2 applies to show that f^2 is the identity map, as required.

Proposition 4.1 accounts for all order reversing reversible diffeomorphisms. In Theorem 1.1 we described all order preserving diffeomorphisms that are reversed by order preserving maps. That leaves only order preserving diffeomorphisms that are reversed by order reversing maps. These are examined next.

Strongly reversible maps

Lemma 4.2. Fixed point free diffeomorphisms are strongly reversible.

Proof. A fixed point free diffeomorphism is, by Lemma 2.1, conjugate in the group Diffeo(R) to x → x + 1, and this map is reversed by the involution x → −x.

Proposition 4.3. If a member of Diffeo+(R) is reversed by a member of Diffeo−(R) then it is strongly reversible.

Proof. Let f ∈ Diffeo+(R), h ∈ Diffeo−(R), and hfh^{-1} = f^{-1}. We wish to show that f is strongly reversible. Given Lemma 4.2, we may assume that f has a fixed point. By conjugation, we can assume that the fixed point of h is 0. Notice that h permutes the fixed points of f. We define an involutive homeomorphism k by

k(x) = h(x) for x ≤ 0, and k(x) = h^{-1}(x) for x > 0.

First, suppose that f coincides with the identity on a neighbourhood of 0.
In this case we have freedom to adjust the definition of k near 0 so that it is an involutive diffeomorphism, without disturbing the validity of the equation kfk = f^{-1}. Second, suppose that 0 is not an interior point of fix(f), but that T_0 f = X. Since h^2 commutes with f, it follows from Lemma 2.3 that T_0 h^2 = X, so that T_0 h is an involution and k is already a diffeomorphism. Third, suppose that 0 is a fixed point of f and T_0 f ≠ X. By Lemma 2.4, T_0 h is an involution, and again, k is a diffeomorphism. Now suppose that 0 lies inside a component (a, b) of R \ fix(f). Since f has a fixed point, we know that (a, b) ≠ R. Moreover, because the order reversing map h fixes (a, b), both end points a and b are finite, and h(a) = b and h(b) = a. Therefore h^2 fixes a, b, and 0, and commutes with f. By Lemma 2.2, h^2 coincides with the identity map on (a, b). This means that h and h^{-1} coincide inside (a, b), so that k is a diffeomorphism and kfk = f^{-1}.

Proof of Theorem 1.2. Combine Propositions 4.1 and 4.3.

Since all non-trivial involutions in Diffeo(R) are conjugate to x → −x, we have the following explicit method to construct all strongly reversible elements of Diffeo(R): take any diffeomorphism g that is reversed by the involution x → −x, and form its conjugates hgh^{-1} with h in Diffeo(R). Note that the graph of a map reversed by x → −x is symmetric in the line y = −x.

Corollary 4.4. Let g ∈ Diffeo+(R). Then the following two conditions are equivalent: (i) the map g is reversible in Diffeo(R) by an order reversing diffeomorphism; (ii) there exist (a) a formally invertible power series P that is strongly reversed by the power series −X; (b) a point p and an order preserving diffeomorphism φ : [p, ∞) → [−p, ∞) such that T_p φ = P; and (c) h ∈ Diffeo(R), such that g = hfh^{-1}, where ...

Refer to [4,8,12] for more information on strong reversibility of homeomorphisms. Proposition 4.3 shows that elements of Diffeo+(R) that are reversed by order reversing elements of Diffeo(R) are strongly reversible in Diffeo(R). There are, however, elements of Diffeo+(R) that are reversed by order preserving elements of Diffeo(R) that are not strongly reversible in Diffeo(R). In fact, for order preserving diffeomorphisms, the properties of being reversible in Diffeo+(R) and strongly reversible in Diffeo(R) are logically independent. To demonstrate this, we must, in turn, find an example of an order preserving diffeomorphism that is

(i) neither reversible in Diffeo+(R) nor strongly reversible in Diffeo(R);
(ii) not reversible in Diffeo+(R), but strongly reversible in Diffeo(R);
(iii) reversible in Diffeo+(R), but not strongly reversible in Diffeo(R);
(iv) reversible in Diffeo+(R) and strongly reversible in Diffeo(R).

Examples of (i) and (ii) are readily constructed. For (ii), any non-trivial strongly reversible diffeomorphism which coincides with the identity map outside a compact set will suffice, because Theorem 1.1 tells us that such a map cannot be reversible in Diffeo+(R). We now give an example of (iii), and then a non-identity example of (iv).

Example (iii). We shall describe an order preserving diffeomorphism f that is reversible by order preserving diffeomorphisms, but not by order reversing involutions. The map described is not even strongly reversible as a homeomorphism. We assume some common knowledge of conjugacy in the homeomorphism group of the real line, which can be found, for example, in [3].
We shall define f to be an element of Diffeo+(R) such that fix(f) = Z. To specify f up to topological conjugacy, it remains only to describe the signature on R \ Z, which we represent by an infinite sequence of + and − symbols. A + symbol corresponds to an interval (n, n + 1) for which f(x) > x for each x in (n, n + 1), and a − symbol corresponds to an interval (n, n + 1) for which f(x) < x for each x in (n, n + 1). The signature of a homeomorphism of R is discussed in more detail in [3]. Suppose the signature of f consists of the 12 symbol sequence

+, +, +, −, −, +, −, −, −, +, +, −,    (4.1)

repeated indefinitely, in both directions. The map f can be chosen to be a diffeomorphism. A portion of the graph of such a function is shown in Figure 2. It satisfies hfh^{-1} = f^{-1}, where h is given by the equation h(x) = x + 6. On the other hand, it is straightforward to see (or refer to [3]) that f is not reversible by a non-trivial involution, as the doubly infinite sequence generated by (4.1) read forwards is different from the same sequence read backwards.

Example (iv). Choose an involutive element τ of Diffeo−(R) satisfying τ(x + 2) = −τ(−x) − 2 for all x in R. Then f = −τ is an element of Diffeo+(R) that is strongly reversible in Diffeo(R), being the composite of the two involutions x → −x and τ. On the other hand, since τ(x + 2) = −τ(−x) − 2 for all x in R,

f(x + 2) = −τ(x + 2) = τ(−x) + 2 = −f(−x) + 2 = f^{-1}(x) + 2,

so f is reversed by the translation x → x + 2. Hence f is also reversible in Diffeo+(R). The corresponding result for homeomorphisms is due to Fine and Schweigert [3].

Open questions

We list two open problems which have emerged from our study.

Question 5.1. What is the smallest positive integer m such that each member of Diffeo+(R) can be expressed as a composite of m reversible maps?
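The two signature properties used in Example (iii) can be verified mechanically. The following snippet is a sanity check added here, not part of the original paper; the sign conventions encoded in the comments are the ones stated in the example.

```python
# Signature block (4.1) of Example (iii): +1 if f(x) > x on (n, n+1), -1 if f(x) < x.
s = [1, 1, 1, -1, -1, 1, -1, -1, -1, 1, 1, -1]
m = len(s)

# h(x) = x + 6 reverses f: shifting the signature by 6 must negate it, because
# conjugating by a shift moves intervals and passing from f to f^{-1} flips signs.
assert all(s[(n + 6) % m] == -s[n] for n in range(m))

# The doubly infinite sequence read backwards differs from the sequence read
# forwards at every alignment (with or without a sign flip), matching the
# paper's reason why f is not reversed by a non-trivial involution.
rev = s[::-1]
for flip in (1, -1):
    assert all(
        any(s[(n + k) % m] != flip * rev[n] for n in range(m))
        for k in range(m)
    )
print("signature checks passed")
```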
Combining Participatory Influenza Surveillance with Modeling and Forecasting: Three Alternative Approaches

Background: Influenza outbreaks affect millions of people every year, and influenza surveillance is usually carried out in developed countries through a network of sentinel doctors who report the weekly number of influenza-like illness cases observed among the patients they see. Monitoring and forecasting the evolution of these outbreaks supports decision makers in designing effective interventions and allocating resources to mitigate their impact.

Objective: Describe the existing participatory surveillance approaches that have been used for modeling and forecasting of the seasonal influenza epidemic, and how they can help strengthen real-time epidemic science and provide a more rigorous understanding of epidemic conditions.

Introduction

Epidemiological surveillance is an important facet in the detection and prevention of the spread of an epidemic [1]. Knowing which diseases and variations of these diseases are present can help medical researchers identify appropriate interventions as well as strategies for treatment to reduce the overall impact of the disease, including mortality. Because of the utility of such data, a number of agencies collect and distribute surveillance reports on prevailing epidemics or other diseases of interest. In the United States, the Centers for Disease Control and Prevention (CDC) produces surveillance counts for influenza and other diseases based on reports from state and local laboratories and medical health centers (www.cdc.gov/flu/weekly/summary.htm). Internationally, the World Health Organization and other agencies produce surveillance data for a number of emerging diseases such as Zika and Ebola (www.who.int/emergencies/zika-virus/situation-report/25-august-2016/en/). While these clinically-based disease surveillance systems are necessary to keep track of disease prevalence and contain disease spread, they have practical limitations [2]. Given the time required to collate surveillance numbers, the reports are usually several weeks old, resulting in a mismatch between the public health response and conditions on the ground [3]. Depending upon the transmissibility of the epidemic, there can be a big difference in prevalence from week to week. Additionally, even when collecting data from local medical centers, coverage is not always uniform. As a result, the CDC weights the public health response based on state population as well as a region's past history of influenza-like illness (ILI) cases [1]. Finally, the level of detail afforded by the medical laboratories and centers reporting to these clinically-based systems may not be sufficient for examining the type of regional demographics that help to identify interventions that are likely to be effective [3].
A number of algorithms and technical approaches have been developed in recent years to attempt to mitigate the shortcomings of clinically collected surveillance data. To address the time delay between when surveillance data become available and the current date, approaches have been developed for ILI that use mechanistic modeling, based on epidemiological knowledge of the pathways of flu transmission, to produce near real-time and future estimates of flu activity [4,5]. Other approaches have attempted to leverage information from constantly changing Internet-based data sources to identify patterns that may signal a change in the incidence of ILI cases in a population. These data sources include Internet search engines [6-12], Twitter and its microblogs [13-17], clinicians' Internet search engines [18], and participatory disease surveillance systems where responders on the ground report on disease propagation [19]. Sharpe et al [20] conducted a comparative study to analyze whether Google-, Twitter-, or Wikipedia-based surveillance performs best when compared to CDC ILI data.

In addition to helping address the time delay problem, participatory disease surveillance can also offer valuable insight into the characteristics of a disease and the demographics of the affected population [19,21-24]. It can help to augment coverage in areas where there are fewer medical centers or where infected people are less likely to go for clinical evaluation. Finally, participatory surveillance also offers a good opportunity to promote awareness of an epidemic [25].

Participatory surveillance has its limitations as well, especially participatory bias resulting from nonuniform coverage and from waning interest and participation over the duration of an epidemic [22]. Additionally, although not addressed with the examples in this paper, training and trust issues may lead to under- or incorrect reporting [23]. Combining participatory surveillance with modeling and simulation can not only help to reduce participatory bias but can also improve real-time forecasting and thus help identify which interventions are most likely to be effective over time in a given area.

In this article, we investigate how an understanding of the results from 3 participatory disease surveillance systems, WISDM (Widely Internet-Sourced Distributed Monitoring), Influenzanet, and Flu Near You (FNY), can be or have been extended through the use of modeling, simulation, and forecasting.

Using Modeling to Measure Participatory Bias

WISDM is a Web-based tool developed at Virginia Tech that supports crowdsourced behavioral data collection, inspection, and forecasting of social dynamics in a population. When integrated with online crowdsourcing services such as Amazon's Mechanical Turk (MTurk), WISDM provides a cost-effective approach to real-time surveillance of potentially evolving disease outbreaks [26]. So far, WISDM has been used primarily to collect demographic and health behavior data for epidemiological research. Here, we describe how modeling can be used in combination with WISDM to measure participatory (nonresponse) bias.
Crowdsourcing platforms like MTurk can be used to recruit responders for a low fee. MTurk allows requesters to recruit human intelligence to conduct tasks that computers cannot do; individuals who browse among existing jobs are called workers. However, there is some concern that users recruited on crowdsourcing platforms may not be representative of the population at large [27,28]. MTurk workers tend to be young, educated, and digitally savvy, so their responses may systematically differ from the responses of those who did not participate in the survey. Given this potential for nonresponse or participatory bias, understanding how to use data from such surveys for epidemic surveillance is a challenge.

To address this issue, we developed a simulation-based approach. Specifically, we combined results of a survey of Delhi, India, residents conducted on WISDM through MTurk with agent-based simulations of the Delhi population to understand the MTurk sample bias. First, we constructed a synthetic population that was statistically indistinguishable from the Delhi census (V in Figure 1), thus providing the best extant at-scale representation of the population.

The synthetic population is generated by combining marginal distributions of age, household income, and household size for each census block group with the corresponding Public Use Microdata Sample. This is done using the iterative proportional fitting procedure [29]; a minimal sketch of the procedure is given below. Validation is done by comparing distributions of variables not included in the iterative proportional fitting step with the corresponding distributions in the generated synthetic population. The procedure is guaranteed to converge [30], and the inferred joint distribution is the best in the maximum entropy sense [31].

The synthetic population is generated for each block group, which is the highest resolution at which US Census data are available publicly. We generate social contact networks (contact matrices) for the synthetic population through a detailed data-driven model where, after the agents matching the region's demographics are generated, they are assigned home locations using road network data (from Here, formerly known as Navteq), daily activity patterns are assigned using the National Household Travel Survey data, and activity locations are assigned using Dun and Bradstreet data. This allows social contact networks to be extracted based on agents being simultaneously present at locations for overlapping durations. We refer to the literature for a detailed description of the construction of synthetic populations and their applications [32-41].

From this synthetic population, we selected individuals whose demographics most closely matched the demographics of the MTurk respondents of the WISDM survey (S in Figure 1). Then, epidemic characteristics of this selected subsample were studied and compared to the epidemic characteristics of the entire synthetic population.
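To make the iterative proportional fitting (IPF) step concrete, here is a minimal self-contained sketch in Python. It is illustrative only: the two-variable table, the target marginals, and the names are hypothetical stand-ins for the census inputs actually used.

```python
import numpy as np

def ipf(seed, row_marginal, col_marginal, tol=1e-9, max_iter=1000):
    """Iterative proportional fitting for a 2-way table.

    seed: initial joint counts (e.g., from a microdata sample).
    row_marginal / col_marginal: target totals (e.g., census counts by age band
    and by household-size band). Returns a table matching both marginals.
    """
    table = seed.astype(float).copy()
    for _ in range(max_iter):
        # Scale rows to match the row marginal.
        table *= (row_marginal / table.sum(axis=1))[:, None]
        # Scale columns to match the column marginal.
        table *= (col_marginal / table.sum(axis=0))[None, :]
        if np.allclose(table.sum(axis=1), row_marginal, atol=tol):
            break
    return table

# Hypothetical example: 3 age bands x 2 household-size bands.
seed = np.array([[40.0, 10.0], [35.0, 20.0], [10.0, 30.0]])
age_totals = np.array([120.0, 80.0, 50.0])        # target row sums
household_totals = np.array([150.0, 100.0])       # target column sums
print(ipf(seed, age_totals, household_totals).round(1))
```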
Process for Finding the Mechanical Turk-Matched Delhi Synthetic Population

First, we used WISDM to collect demographics and health behaviors of about 600 MTurk workers; the health behaviors included preventative and treatment behaviors related to influenza. Then we calculated the Euclidean distance between each of these approximately 600 responders and every person in the synthetic population of the same age, gender, and household size. Next, we selected the closest synthetic matches to each survey respondent. If more than 1 match was identified, all of the matches were retained. We repeated this procedure for each responder in the survey, which provided us with a subpopulation of the synthetic population that most closely matched the WISDM-based survey respondents. This subpopulation is denoted by S in Figure 1, and V denotes the total synthetic population of Delhi.

However, the synthetic subpopulation (S) was not statistically representative of the MTurk sample, given that survey respondents could be matched with multiple individuals. Thus, we used stratified sampling to construct a finer sample of the synthetic population that was equivalent to those who took the MTurk survey. Specifically, we divided both the survey and synthetic subpopulation (S) data into H mutually exclusive strata, where each stratum corresponded to a unique combination of 3 demographic variables: age, gender, and household size. Only these 3 demographic factors were used for stratification, since the India Census did not have information on other common socioeconomic variables like income, education, employment, and access to the Internet. Variables such as income and access to the Internet could be especially important in matching MTurk workers with individuals in the synthetic population, but due to lack of data this could not be done. This is a significant limitation of the current analysis, which we expect to improve upon as more data become available in the future.

We discretized age into A distinct intervals and household size into B intervals. Gender was split into 2 groups. This resulted in H = 2AB strata. Because all matched synthetic people had been retained, the number of observations (N_1) in the first stratum of the synthetic subpopulation S was much larger than the number of observations (n_1) in the first stratum of the actual MTurk survey sample. Thus, to obtain a representative sample of this first stratum, n_1 observations were randomly sampled from the synthetic subpopulation without replacement. The same procedure was performed for all the remaining strata. This provided us with the final MTurk-matched Delhi synthetic population sample set S' in Figure 1, which demographically matched the MTurk survey data.
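A minimal sketch of the stratify-and-downsample step just described (illustrative; the column names and data frames are hypothetical, and the real pipeline runs on the full synthetic population rather than a toy table):

```python
import pandas as pd

def stratified_match(survey: pd.DataFrame, matched_synthetic: pd.DataFrame,
                     strata_cols=("age_band", "gender", "hh_size_band"),
                     seed=0) -> pd.DataFrame:
    """For each stratum (age x gender x household size), draw without replacement
    as many synthetic individuals as there are survey respondents in that stratum.

    Assumes every survey stratum appears in the matched pool with N_h >= n_h.
    """
    strata_cols = list(strata_cols)
    stratum_sizes = survey.groupby(strata_cols).size()   # n_h per stratum
    pools = matched_synthetic.groupby(strata_cols)       # strata of S, sizes N_h
    samples = [
        pools.get_group(stratum).sample(n=n_h, replace=False, random_state=seed)
        for stratum, n_h in stratum_sizes.items()
    ]
    return pd.concat(samples, ignore_index=True)         # the final sample set S'
```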
Comparing Epidemic Outcomes Using Widely Internet-Sourced Distributed Monitoring

Our goal was to understand the differences in influenza epidemic outcomes across the 3 populations (V, S, and S'). We considered 3 different metrics for measuring epidemics: (1) the size of the epidemic (ie, the attack rate), (2) the peak number of infections, and (3) the time it takes for the epidemic to peak. A difference in these metrics between S and S' would be equivalent to the sample bias if we assume S captures the entire MTurk population. This may not be true unless the sample size is very large, which is not the case in this study. However, for very large samples, it would give the sample bias, since S' is the sample and S is the entire synthetic subpopulation that matches the attributes of the sample. Differences between the V and S metrics would be equivalent to the nonresponse bias, because individuals outside S did not participate in the survey.

In order to compare the epidemic outcomes, we simulated an influenza outbreak using a susceptible, exposed, infected, and recovered (SEIR) disease model [34,35] in the synthetic Delhi population. Each node in the network represents an individual, and each edge represents a contact on which the disease can spread. Each node is in 1 of 4 states at any given time: S, E, I, or R. An infectious person spreads the disease to each susceptible neighbor independently with a probability referred to as the transmission probability, given by p = λ(1 − (1 − τ)^Δt), where λ is a scaling factor to lower the probability (eg, in the case of vaccination), τ is the transmissibility, and Δt is the duration of interaction in minutes. Durations of contact are labels on the network edges. A susceptible person undergoes independent trials from all of its neighbors who are infectious. If an infectious person infects a susceptible person, the susceptible person transitions to the exposed (or incubating) state. The exposed person has contracted influenza but cannot yet spread it to others. The incubation period is assigned per person according to the following distribution: 1 day (30%), 2 days (50%), 3 days (20%). At the end of the exposed or incubation period, the person switches to an infected state. The duration of infectiousness is assigned per person according to the following distribution: 3 days (30%), 4 days (40%), 5 days (20%), 6 days (10%). After the infectious period, the person recovers and stays healthy for the remainder of the simulation period. This sequence of state transitions is irreversible and is the only possible disease progression. We seed the epidemic in a susceptible population with 10 infections that are randomly chosen every day. A total of 25 replicates were run to account for the stochastic randomness arising from the selection of initial infectors.
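The per-contact infection step of this model can be sketched in a few lines of Python (illustrative only; the graph representation and helper names are assumptions standing in for the full agent-based simulator):

```python
import random

def transmission_probability(tau, dt_minutes, lam=1.0):
    """p = lambda * (1 - (1 - tau)**dt), as in the SEIR model description."""
    return lam * (1.0 - (1.0 - tau) ** dt_minutes)

def sample_incubation_days(rng):
    # 1 day (30%), 2 days (50%), 3 days (20%)
    return rng.choices([1, 2, 3], weights=[0.3, 0.5, 0.2])[0]

def sample_infectious_days(rng):
    # 3 days (30%), 4 days (40%), 5 days (20%), 6 days (10%)
    return rng.choices([3, 4, 5, 6], weights=[0.3, 0.4, 0.2, 0.1])[0]

def exposures_for_day(infectious, contacts, state, tau, rng):
    """One day's independent infection trials along infectious-susceptible edges.

    contacts: dict mapping person -> list of (neighbor, minutes_of_contact).
    state: dict mapping person -> 'S', 'E', 'I' or 'R'.
    """
    newly_exposed = []
    for person in infectious:
        for neighbor, minutes in contacts.get(person, []):
            if state[neighbor] == "S" and rng.random() < transmission_probability(tau, minutes):
                state[neighbor] = "E"
                newly_exposed.append(neighbor)
    return newly_exposed

rng = random.Random(42)  # reproducible example run
```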
Influenzanet

In 2008, a large research project funded by the European Commission and coordinated by the Institute for Scientific Interchange in Turin, Italy, led to the creation of Influenzanet, a network of Web-based platforms for participatory surveillance of ILI in 10 European countries [42]. The ambition was to collect real-time information on population health through the activity of volunteers who provide self-reports about their health status and, by combining this real-time data feed with a dynamical model for spatial epidemic spreading, to build a computational platform for epidemic research and data sharing. The results of this multiannual activity have been used to create a novel, modular framework (the FluOutlook framework) capable of capturing the disease transmission dynamics across country boundaries, estimating key epidemiological parameters, and forecasting the long-term trend of seasonal influenza [43].

The input component estimates initial infections for a given week in any census area from self-reported information collected from volunteers on Influenzanet platforms or from other data proxies like Twitter. Influenzanet data collection has been described in several previous papers [44]. The number of users reporting a case of ILI each week is used to calculate the weekly incidence of ILI among active users. Active users are those who completed at least 1 Influenzanet symptoms questionnaire during the influenza season. Since users report their place of residence at the level of postal codes, the ILI weekly incidence can be calculated at the resolution of postal codes.

The simulation and forecast component is a computational modeling and simulation engine named the Global Epidemic And Mobility model (GLEAM) [45,46]. The GLEAM dynamical model is based on geographical census areas defined around transportation hubs and connected by long- and short-range mobility networks. The resulting meta-population network model can be used to simulate infectious disease spreading in a fully stochastic fashion. The simulations, given proper initial conditions and a disease model, generate an ensemble of possible epidemic evolutions for epidemic parameters such as newly generated cases. In the application to seasonal influenza, GLEAM is limited to the level of a single country, with only the population and mobility of the country of interest taken into account. The numbers of ILI cases extracted from the Influenzanet platforms are mapped onto the corresponding GLEAM geographical census areas and used as seeds to initialize the simulations. The unique advantage provided by using the data collected by the Influenzanet platform as initial conditions consists in the high resolution, in time (daily) and space (postal code level), with which data are available. This geographical and temporal resolution for the initial conditions cannot be achieved with any other signal. Moreover, these are not proxy data for the ILI activity among the population; they represent a high-specificity ground truth for the initial conditions that cannot be obtained with any other source of information. Given these high-quality and highly reliable initial conditions, the GLEAM simulations perform a Latin hypercube sampling of a parameter space covering possible ranges of transmissibility, infection periods, immunization rates, and a tuning parameter regulating the number of generated infected individuals.
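Latin hypercube sampling of a parameter space can be sketched as follows; the parameter ranges shown are hypothetical placeholders, not the values used by GLEAM:

```python
import numpy as np

def latin_hypercube(ranges, n_samples, rng):
    """Cut each parameter range into n_samples equal strata, draw one point per
    stratum, and shuffle the strata independently per dimension."""
    samples = np.empty((n_samples, len(ranges)))
    for j, (lo, hi) in enumerate(ranges):
        strata = (rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
        samples[:, j] = lo + strata * (hi - lo)
    return samples

rng = np.random.default_rng(1)
# Hypothetical ranges: transmissibility, infectious period (days), initial immune fraction.
params = latin_hypercube([(0.02, 0.08), (2.0, 7.0), (0.1, 0.5)], n_samples=100, rng=rng)
```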
In the prediction component of the framework, the large-scale simulations generate a statistical ensemble of epidemic profiles for each sampled point in the parameter space. For each statistical ensemble, the prediction component measures its likelihood with respect to up-to-date ILI surveillance data and selects a set of models by considering a relative likelihood region [47]. The set of selected models represents the output component and provides both long-term (ie, 4 weeks in advance) and short-term predictions for epidemic peak time and intensity. Results are disseminated as interactive plots that can be explored on the public website fluoutlook.org [48].

To quantify the framework's forecast performance, the Pearson correlation between each predicted time series and the sentinel doctors' surveillance time series can be used. Moreover, the mean absolute percent error can be used to evaluate the estimation of the peak magnitude, and the peak week accuracy is defined as the percentage of the selected ensemble of simulations providing predictions within 1 week of the peak time.
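The three evaluation metrics just described are straightforward to compute. A minimal sketch, with hypothetical arrays standing in for the forecast ensemble and the surveillance series:

```python
import numpy as np

def pearson_corr(forecast, observed):
    return float(np.corrcoef(forecast, observed)[0, 1])

def mape_peak(forecast, observed):
    """Mean absolute percent error on the peak magnitude."""
    return 100.0 * abs(forecast.max() - observed.max()) / observed.max()

def peak_week_accuracy(ensemble, observed, window=1):
    """Share of ensemble members whose predicted peak week falls within
    `window` weeks of the observed peak week."""
    obs_peak = int(np.argmax(observed))
    hits = sum(abs(int(np.argmax(member)) - obs_peak) <= window for member in ensemble)
    return hits / len(ensemble)

# Hypothetical weekly ILI incidence series (per 1000 active users).
observed = np.array([1.0, 1.5, 2.4, 4.0, 6.2, 5.1, 3.0, 1.8])
ensemble = [observed * 0.9, np.roll(observed, 1), observed * 1.2]
print(pearson_corr(ensemble[0], observed), mape_peak(ensemble[0], observed),
      peak_week_accuracy(ensemble, observed))
```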
Flu Near You

FNY is a participatory disease surveillance system launched in October 2011 by HealthMap of Boston Children's Hospital, the American Public Health Association, and the Skoll Global Threats Fund [17]. FNY maintains a website and mobile app that allow volunteers in the United States and Canada to report their health information using a brief weekly survey. Every Monday, FNY sends users a weekly email asking them to report whether or not they experienced any of the following symptoms during the previous week: fever, cough, sore throat, shortness of breath, chills, fatigue, nausea, diarrhea, headache, or body aches. Users are also asked to provide the date of symptom onset for any reported symptoms. Users experiencing fever plus cough and/or sore throat are considered by FNY to be experiencing an ILI. FNY's definition of ILI differs slightly from the US CDC outpatient Influenza-Like Illness Surveillance Network (ILINet) definition, which defines ILI as fever plus cough and/or sore throat without a known cause other than influenza.

FNY was conceived to capture flu activity in a population group that may not necessarily seek medical attention, while CDC's ILINet was designed to monitor the percentage of the population seeking medical attention with ILI symptoms. Recent estimates confirm that only approximately 35% of FNY participants who report experiencing ILI symptoms seek medical attention. Despite this design (and observed) difference, and because these 2 distinct groups (those seeking medical attention versus those not doing so) interact, large changes in ILI in the CDC's ILINet are also generally observed in the FNY signal, as shown in Figure 3 for the 2013-2014 and 2014-2015 flu seasons and as previously shown by Smolinski et al [19]. To produce Figure 3, spikes of unrealistically increased FNY ILI rates (calculated as the weekly number of users who experienced ILI divided by the total number of reports received during the same week) were first removed. These unrealistic spikes (defined as a weekly change in the FNY ILI rates larger than 10 standard deviations from the mean change of the last 4 weeks) are often associated with media attention on FNY that causes a temporary surge of interest in the system among people sick with the flu, as described by Aslam et al [17]. Flu estimates were then produced 1 week ahead of the publication of CDC reports by combining historical CDC-reported flu activity (via a lag-2 autoregressive model) with the smoothed weekly FNY rates. These flu estimates are displayed in blue and labeled AR(2)+FNY in Figure 3. The reason why we used CDC-reported ILI rates as our reference for traditional flu surveillance is that these ILI rates have been recorded for multiple years, and public health officials have used them as proxies of influenza levels in the population. This is consistent with multiple influenza activity prediction studies in the United States [7-9,49,50]. With the intent of providing more timely yet still familiar information to public health officials, we use the smoothed FNY ILI rates as one of multiple data inputs into the HealthMap Flu Trends influenza surveillance and forecasting system [51].

The HealthMap Flu Trends system relies on a machine-learning modeling approach to predict flu activity using disparate data sources [49], including Google searches [8,9], Twitter [15], near real-time electronic health records [50], and data from participatory surveillance systems such as FNY [19]. The HealthMap Flu Trends system provides accurate real-time and forecast estimates of ILI rates at the national as well as regional levels in the United States up to 2 weeks ahead of CDC's ILINet flu reports.

The multiple data sources entered into the HealthMap Flu Trends system are each individually processed using machine-learning algorithms to obtain a predictor of ILI activity. These individual predictions of ILI rates are then fed into an ensemble machine-learning algorithm that combines the individual predictions to produce robust and accurate ILI estimates, as described by Santillana et al [49]. The estimates produced by this ensemble machine-learning approach outperform all of the predictions made using each of the data sources independently.
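The AR(2)+FNY estimator described above amounts to a regression on two lags of the CDC ILI series plus the current FNY rate. The following is a minimal sketch under that reading; the least-squares fit and the variable names are assumptions, not the authors' exact implementation:

```python
import numpy as np

def fit_ar2_exog(cdc, fny):
    """Fit ili_t ~ b0 + b1*ili_{t-1} + b2*ili_{t-2} + b3*fny_t by least squares."""
    y = cdc[2:]
    X = np.column_stack([np.ones_like(y), cdc[1:-1], cdc[:-2], fny[2:]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def predict_next(beta, cdc, fny_now):
    """One-week-ahead estimate from the two latest published CDC values and the
    current FNY rate, which is available before the CDC report is released."""
    return beta @ np.array([1.0, cdc[-1], cdc[-2], fny_now])

# Hypothetical weekly series (percent ILI).
cdc = np.array([1.1, 1.3, 1.8, 2.6, 3.9, 4.8, 4.1, 3.0, 2.2])
fny = np.array([0.9, 1.2, 1.7, 2.9, 4.2, 5.0, 4.3, 2.8, 2.0])
beta = fit_ar2_exog(cdc, fny)
print(predict_next(beta, cdc, fny_now=1.8))
```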
Widely Internet-Sourced Distributed Monitoring-Based Results

The results based on WISDM are illustrated as time series of daily infections (also called epidemic curves) in Figure 4. Figures 4a and 4b correspond to low transmission (0.00003 per minute of contact time and R_0 = 1.4) and high transmission (0.00006 per minute of contact time and R_0 = 2.7) rates, respectively. The red epidemic curve in each represents the entire Delhi synthetic population (V). The black and blue epidemic curves show results for the MTurk-matched Delhi synthetic population sample (S') and the entire MTurk-matched Delhi synthetic subpopulation (S), respectively. Under a high transmission rate, the attack rate and peak infection rate are higher, but the time-to-peak is lower. This is expected, since a higher transmission rate spreads the disease quickly and to more individuals in the population.

If surveillance were restricted to only the MTurk sample (S'), the level of bias would equal the difference between the red and black curves. This difference represents a combination of the nonresponse bias (the difference between the red curve and the blue curve) and the sample-size bias (the difference between the blue curve and the black curve).

In order to measure the significance of the total bias, the nonresponse bias, and the sample-size bias of the simulation illustrated in Figure 4, we tested the differences in attack rate, peak infection rate, and time-to-peak by using the 2-sample t test. The mean differences, 95% confidence intervals, and P values are summarized in Tables 1 and 2 for low and high transmission rates, respectively.

As shown in Table 1, with a low transmission rate (0.00003), the attack rate for S' is about 10% lower than that for V, while the peak infection rate for S' is 1.36% lower and the epidemic curve peaks 1 day later. Total biases for all 3 metrics are statistically significant. Also, for all 3 metrics, the nonresponse bias is larger than the sample bias and dominates the total bias. This is consistent with the fact that MTurk survey responders tend to be younger, educated males, among whom the incidence of disease is typically lower than in much of the rest of the population.

Results for the higher transmission rate (0.00006) are similar (Table 2). Note, however, that the difference between the red and black curves (in Figure 4) shrinks as the transmission rate becomes higher.
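The significance testing across the 25 replicates can be sketched as follows. The replicate arrays below are hypothetical, and the Welch (unequal variance) form is one reasonable choice where the text says only "2-sample t test":

```python
import numpy as np
from scipy import stats

def bias_test(sample_a, sample_b, alpha=0.05):
    """Mean difference, Welch 95% CI, and P value for one epidemic metric
    (attack rate, peak infections, or time-to-peak) across replicates."""
    a, b = np.asarray(sample_a, float), np.asarray(sample_b, float)
    na, nb = len(a), len(b)
    va, vb = a.var(ddof=1), b.var(ddof=1)
    diff = a.mean() - b.mean()
    se = np.sqrt(va / na + vb / nb)
    # Welch-Satterthwaite degrees of freedom.
    dof = (va / na + vb / nb) ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    t_crit = stats.t.ppf(1 - alpha / 2, dof)
    p_value = stats.ttest_ind(a, b, equal_var=False).pvalue
    return diff, (diff - t_crit * se, diff + t_crit * se), p_value

# Hypothetical attack rates over 25 replicates for V and S'.
rng = np.random.default_rng(7)
attack_v = rng.normal(0.45, 0.02, 25)
attack_s_prime = rng.normal(0.35, 0.02, 25)
print(bias_test(attack_v, attack_s_prime))
```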
Influenzanet-Based Results

In this section, we show results for simulations and forecasts performed for the 2015-2016 influenza season. The input component of the framework was initialized with ILI cases from a number of selected countries that are part of the Influenzanet network: Belgium, Denmark, Italy, the Netherlands, Spain, and the United Kingdom. In the simulation component, weekly surveillance data from sentinel doctors (also called traditional surveillance) in each of the selected countries were used as ground truth to select the set of models with maximum likelihood.

Figure 5 illustrates the results of 1-week, 2-week, 3-week, and 4-week predictions. We include results for 1-week (also called now-casting) predictions for the following reason. Now-casting predictions (ie, inferring the incidence value that traditional influenza surveillance will report in the following week) are usually used to evaluate the performance of predictions based on the model described in this work with respect to predictions based on linear regression models applied to traditional surveillance data only. In a recent work by Perrotta et al [52], it has been shown how real-time forecasts of seasonal influenza activity in Italy can be improved by integrating traditional surveillance data with data from the participatory surveillance platform Influweb, and the now-casting predictions were used as a benchmark test to compare the 2 approaches.

Figure 5 shows that, for all countries under study, the empirical observations (ie, the ground truth of the traditional surveillance reference data, represented as black dots in the figure) lie within the 95% confidence intervals for most weeks. This gives a qualitative indication of the accuracy of the predictions.

In Figure 6, we show results for the Pearson correlation between each predicted time series and the sentinel doctors' surveillance time series, and also results for the mean absolute percent error (MAPE). As expected, the statistical accuracy of the ensemble forecasts increases as the season progresses. In the case of a 1-week lead prediction, the correlation is close to 1 for Italy and Belgium. The correlations are around 0.8 for 2-week predictions for the United Kingdom, around 0.7 for the Netherlands, and above 0.8 for 4-week lead predictions for the United Kingdom and Italy. The peak magnitude is 1 of the free parameters we fit in the model. As the correlation increases over the course of the season, the MAPE (ie, the percentage error on the peak magnitude estimated by the model) decreases or remains quite stable for countries like the United Kingdom, in which the correlation is consistently high. For other countries, the performance is not as good and the peak magnitude is not as well estimated. Belgium and Spain are the 2 countries in which the performance is the worst. This might be due to the fact that the ILI incidence curve from Influenzanet in Spain is very noisy, mainly due to low participation, and this has affected the quality of the predictions in terms of amplitude and correlation. In Belgium, the ILI incidence data from traditional surveillance have been very noisy due to an unusually mild influenza season in this country. More information about the Influenzanet ILI incidence curves in the various countries can be found on the Influenzanet page (www.influenzanet.eu/en/flu-activity/). The peak week accuracy also increases as the season progresses and, notably, accuracy is already above 60% with up to 4 weeks of lead time in the case of Italy, the Netherlands, and Spain.

Overall, even for a peculiar influenza season such as 2015-2016, with an unusually late peak, the results show that our framework is capable of providing accurate short-range (1-week, 2-week) forecasts and reasonably accurate longer-range (3-week, 4-week) predictions of seasonal influenza intensities and temporal trends.
Flu Near You-Based Results

We quantitatively confirmed that incorporating data from our participatory surveillance system improved real-time influenza predictions by comparing the aforementioned influenza estimates with estimates produced using a model based only on historical CDC-reported influenza activity (a lag-2 autoregressive model), labeled AR(2) in Figure 3. The correlation between the observed influenza activity and the estimates obtained using a model based only on historical ILI information for the 2013-2015 time window was 0.95, whereas the correlation with the model that incorporates FNY information was 0.96. While this represents a mild improvement in the correlation values, a more statistically robust test introduced by Yang et al [9] showed that the incorporation of FNY information led to a 10% mean error reduction (90% CI 0.04 to 0.24) when compared to the baseline autoregressive model. The bottom panel of Figure 3 shows visually the errors from each model. HealthMap Flu Trends national-level real-time predictions that were available 1 week ahead of the publication of the weekly CDC reports for the 2013-2014 and 2014-2015 influenza seasons are shown in red in Figure 3. For comparison purposes, the correlation of the HealthMap Flu Trends estimates with the observed CDC ILI rates is 0.99 for the 2013-2015 time window, and the addition of multiple data sources leads to a mean error reduction of about 83% (90% CI 0.69 to 0.85) when compared to the estimates of the model that only uses CDC historical information (AR(2)). In Figure 7, the historical contributions of the different individual predictors (and their tendencies) to the HealthMap influenza estimates are displayed. As illustrated in Figure 7, FNY inputs do contribute to the ensemble-based influenza prediction estimates.

Discussion

We have described 3 different participatory surveillance systems, WISDM, Influenzanet, and FNY, and we have shown how modeling and simulation can be or have been combined with participatory disease surveillance to (1) measure the nonresponse bias present in a participatory surveillance sample using WISDM and (2) now-cast and forecast influenza activity in different parts of the world using Influenzanet and FNY.

While the advantages of participatory surveillance, compared to traditional surveillance, include its timeliness, lower costs, and broader reach, it is limited by a lack of control over the characteristics of the population sample. Modeling and simulation can help overcome this limitation.
Use of MTurk and WISDM combined with synthetic population modeling, as shown here, is one way to measure nonresponse and sample bias. The results measure the nonresponse and sample bias for 3 epidemic outcomes (ie, epidemic size, peak infection rate, and time-to-peak). As shown in Table 1, a lower transmission rate results in a higher nonresponse bias and a higher total bias. Total biases for all 3 metrics are statistically significant. Also, for all 3 metrics, the nonresponse bias is larger than the sample bias and dominates the total bias. This is consistent with the fact that MTurk survey responders tend to be younger, educated males, among whom the incidence of disease is typically lower than in much of the rest of the population. Results for the higher transmission rate are similar. In summary, the WISDM-based results show that the bias that occurs in a skewed survey sample can be measured through modeling and simulation to infer more dependable observations than can be derived from the survey data alone.

Our results confirmed that combining participatory surveillance information from FNY with modeling approaches improves short-term influenza activity predictions. In addition, we described how combining participatory surveillance information with other data sources, by means of a robust machine-learning modeling approach, has led to substantial improvements in short-term influenza activity predictions [49]. Information from participatory surveillance may also help improve influenza forecasting approaches such as those proposed in other studies [53-56].

Moreover, we have shown how, by combining digital participatory surveillance data with a realistic data-driven epidemiological model, we can provide both short-term now-casts (1 or 2 weeks in advance) of epidemic intensities and long-term (3 or 4 weeks in advance) forecasts of significant indicators of an influenza season. It is indeed the participatory surveillance data component that allows for real-time forecasts of seasonal influenza activity. ILI incidence estimates produced by traditional surveillance systems undergo weekly revisions, are usually released with at least a 1-week lag, and lack the geographical resolution needed to inform high-resolution dynamical models such as GLEAM. Participatory surveillance data are available as soon as participants report their health status. This real-time component allows for accurate now-casting (1 week) and forecasting (2, 3, and 4 weeks) as soon as influenza activity among the population begins, even before the epidemic curve surpasses the epidemic threshold. Data from traditional surveillance up until a specific week are used to fit the selected ensembles, which then provide predictions for the upcoming weeks, but these ensembles need to be generated by using the high-resolution real-time data from participatory surveillance.

For future work aimed at harmonizing these 3 approaches, results from the WISDM platform about nonresponse bias could be used to assess similar biases in the groups of self-selected individuals participating in Influenzanet and FNY [24].
The projects described here not only strengthen the case for modeling and simulation becoming an integral component of the epidemic surveillance process, but they also open up several new directions for research. Important questions are yet to be answered. How do we optimally integrate other sources of data with data obtained through participatory surveillance? How do we incorporate participatory surveillance data that are reweighted at each point in time based on active learning techniques to maximize forecast accuracy? How can hypotheses be generated and tested in an abductive setting? An abductive setting is one where the models and experiments can be run iteratively to test data-driven hypotheses that evolve as new data arrive in real time.

With the increasing reach of the Internet and cellular communication, participatory surveillance offers the possibility of early detection of and response to infectious disease epidemics. Continued integration of participatory surveillance with modeling and simulation techniques will help to strengthen real-time epidemic science and provide a more rigorous understanding of epidemic conditions.

Figure 1. Mapping of MTurk sample to synthetic individuals.
Figure 3. (Top panel) The US Centers for Disease Control and Prevention (CDC) influenza-like illness (ILI) percent value (y-axis) is displayed as a function of time (x-axis). Predictions produced 1 week ahead of the publication of CDC-ILI reports using (1) only historical CDC information via an autoregressive model, AR(2); (2) an autoregressive model that combines historical CDC information with Flu Near You (FNY) information, AR(2)+FNY; and (3) an ensemble method that combines multiple data sources including FNY, Google search frequencies, electronic health records, and historical CDC information (all sources) are shown. (Bottom panel) The errors between the predictions and the CDC-reported ILI for each prediction model are displayed.
Figure 4. (a) Epidemic curves under low transmission rate. (b) Epidemic curves under high transmission rate.
Figure 5. Epidemic profiles for Belgium, Denmark, Italy, the Netherlands, Spain, and the United Kingdom considering 4-week, 3-week, 2-week, and 1-week lead predictions. The best estimation (solid line) and the 95% confidence interval (colored area) are shown together with sentinel doctors' surveillance data (black dots), which represent the ground truth (ie, the target signals).
Figure 6. Pearson correlations, mean absolute percentage errors, and peak week accuracy obtained by comparing the forecast results and the sentinel doctors' influenza-like illness surveillance data along the entire season in each country.
Figure 7.
Heatmap showing the relevance of each of the input data sources on the flu prediction during the 7/2013-4/2015 time window (x-axis). These values change from week to week due to a dynamic model recalibration process. The multiple data sources entered into the HealthMap Flu Trends system are on the y-axis with their tendencies, or derivatives. The bar on the right is a color code of the magnitude of the regression coefficients of the multiple data sources used as inputs.
Table 1. Bias in epidemic metrics under low transmission rate.
Table 2. Bias in epidemic metrics under high transmission rate.
2017-11-02T16:31:53.239Z
2017-11-01T00:00:00.000
{ "year": 2017, "sha1": "5743d9cbad326075cebdf6dcf716288f2cf1937b", "oa_license": "CCBY", "oa_url": "https://publichealth.jmir.org/2017/4/e83/PDF", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "bc01ecab4c37195df757ea1b8c7642a7946f9c33", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
118665948
pes2o/s2orc
v3-fos-license
On the origin of strong photon antibunching in weakly nonlinear photonic molecules In a recent work [T. C. H. Liew and V. Savona, Phys. Rev. Lett. 104, 183601 (2010)] it was numerically shown that in a photonic 'molecule' consisting of two coupled cavities, near-resonant coherent excitation could give rise to strong photon antibunching with a surprisingly weak nonlinearity. Here, we show that a subtle quantum interference effect is responsible for the predicted efficient photon blockade effect. We analytically determine the optimal on-site nonlinearity and frequency detuning between the pump field and the cavity mode. We also highlight the limitations of the proposal and its potential applications in the demonstration of strongly correlated photonic systems in arrays of weakly nonlinear cavities.

The photon blockade is a quantum optical effect preventing the resonant injection of more than one photon into a nonlinear cavity mode [1], leading to antibunched (sub-Poissonian) single-photon statistics. Signatures of photon blockade have been observed by resonant laser excitation of an optical cavity containing either a single atom [2] or a single quantum dot [3] in the strong coupling regime. Arguably, the most convincing realization was based on a single atom coupled to a micro-toroidal cavity in the Purcell regime [4], suggesting that the strong coupling regime of cavity-QED need not be a requirement. Concurrently, on the theory side there have been a number of proposals investigating strongly correlated photons in coupled cavity arrays [5][6][7] or one-dimensional optical waveguides [8]. The specific proposals based on the photon blockade effect include the fermionization of photons in one-dimensional arrays of cavities [9], the crystallization of polaritons in a coupled array of cavities [10], and the quantum-optical Josephson interferometer in a coupled photonic mode system [11]. It is commonly believed that photon blockade necessarily requires a strong on-site nonlinearity U for a photonic mode, whose magnitude should well exceed the mode broadening γ. However, in a recent work [12] Liew and Savona numerically showed that strong antibunching can be obtained with a surprisingly weak nonlinearity (U ≪ γ) in a system consisting of two coupled zero-dimensional (0D) photonic cavities (boxes), as shown in Fig. 1(a) [12]. Such a configuration can be obtained, e.g., by considering two modes in two photonic boxes coupled with a finite mode overlap due to leaky mirrors: the corresponding tunnel strength will be designated by J. In Ref. [12] numerical evidence indicated that a nearly perfect antibunching can be achieved for an optimal value of the on-site repulsion energy U and for an optimal value of the detuning between the pump and mode frequency.
However, a physical understanding of the mechanism leading to strong photon antibunching is needed to identify the limitations of the scheme in the context of proposed experiments on strongly correlated photons, as well as to determine the dependence of the optimal coupling and detuning on the relevant physical parameters J and γ. In this letter, we show analytically that the surprising antibunching effect is the result of a subtle destructive quantum interference effect which ensures that the probability amplitude to have two photons in the driven cavity is zero. We show that the weak nonlinearity is required only for the auxiliary cavity that is not laser driven and whose output is not monitored, indicating that photon antibunching is obtained for a driven linear cavity that tunnel-couples to a weakly nonlinear one. We determine the analytical expressions for the optimal nonlinearity U and for the pump frequency detuning required to have a perfect antibunching as a function of the mode coupling J and broadening γ. Our analytical results are in excellent agreement with fully numerical solutions of the master equation for the considered system. Before concluding, we discuss the experimental realization of such a scheme using cavities embedding weakly coupled quantum dots. Moreover, we also consider the case of a ring of coupled photonic molecules, showing that strong antibunching persists in the presence of intersite photonic correlations.

We consider two photonic modes coupled with strength J; each mode has energy E_i and an on-site photon-photon interaction strength U_i (i = 1, 2). The Hamiltonian is written as

Ĥ = Σ_{i=1,2} [ E_i â_i† â_i + U_i â_i† â_i† â_i â_i ] + J (â_1† â_2 + â_2† â_1) + F (â_1† e^{−iω_p t} + â_1 e^{iω_p t}),   (1)

where â_i is the annihilation operator of a photon in the ith mode, and F and ω_p are the pumping strength and frequency, respectively. [Fig. 1(b) caption: g^{(2)}_{ij}(τ = 0) plotted as functions of the nonlinearity U = U_1 = U_2 normalized to γ; nearly perfect antibunching is obtained at the pumped mode.] Following Ref. [12], we first calculate the second-order correlation functions g^{(2)}_{ij}(0) = ⟨â_i† â_j† â_j â_i⟩ / (⟨â_i† â_i⟩⟨â_j† â_j⟩) in the steady state using the master equation in a basis of Fock states [13]. The results are shown as functions of the nonlinearity U in Fig. 1(b). As already demonstrated in Ref. [12], we can get a strong antibunching of the pumped mode (g^{(2)}_{11}(0) ≃ 0) for an unexpectedly small nonlinearity U = 0.0428γ. In order to understand the origin of the strong antibunching, we use the Ansatz

|ψ⟩ = C_00 |00⟩ + C_10 |10⟩ + C_01 |01⟩ + C_20 |20⟩ + C_11 |11⟩ + C_02 |02⟩   (2)

to calculate the steady state of the coupled cavity system. Here, |mn⟩ represents the Fock state with m particles in mode 1 and n particles in mode 2. Under weak pumping conditions (C_00 ≫ C_10, C_01 ≫ C_20, C_11, C_02), we can calculate the coefficients C_mn iteratively. For one-particle states, the steady-state coefficients are determined by

(ΔE_1 − iγ_1/2) C_10 + J C_01 + F C_00 = 0,   (3a)
(ΔE_2 − iγ_2/2) C_01 + J C_10 = 0,   (3b)

where ΔE_j = E_j − ω_p and we consider a damping with rate γ_j in each mode. Since we assume weak pumping, the contribution from the higher states (C_20, C_11, and C_02) to the steady-state values of C_10, C_01 is negligible. From Eq. (3b), the amplitude of mode 2 can be written as

C_01 = −J C_10 / (ΔE_2 − iγ_2/2),   (4)

indicating that for strong photon tunneling (J ≫ |ΔE_2|, γ_2), the probability of finding a photon in the auxiliary cavity is much larger than in the driven cavity. [Fig. 2(b) caption: One path is the direct excitation from |10⟩ to |20⟩, but it is forbidden by the interference with the other path, drawn by dotted arrows.] In the same manner, the coefficients of the two-particle states are determined by

(2ΔE_1 + 2U_1 − iγ_1) C_20 + √2 J C_11 + √2 F C_10 = 0,   (5a)
(ΔE_1 + ΔE_2 − i(γ_1 + γ_2)/2) C_11 + √2 J (C_20 + C_02) + F C_01 = 0,   (5b)
(2ΔE_2 + 2U_2 − iγ_2) C_02 + √2 J C_11 = 0.   (5c)

When we simply consider E_1 = E_2 = E and γ_1 = γ_2 = γ, the conditions to satisfy C_20 = 0 are derived from Eqs.
(4) and (5) as

2(ΔE − iγ/2)³ + U_2 [ 2(ΔE − iγ/2)² + J² ] = 0,   (6)

whose real and imaginary parts must vanish separately. For fixed J and γ, from these equations, the optimal conditions (those that lead to C_20 = 0) are given by

U_opt = γ²/(8 ΔE_opt) − (3/2) ΔE_opt,   (7)

with ΔE_opt fixed by the real part of Eq. (6), and, if J ≫ γ, they are approximately written as

ΔE_opt ≃ γ/(2√3),   (8a)
U_opt ≃ 2γ³/(3√3 J²).   (8b)

In Fig. 2(a), the optimal ΔE_opt and U_opt [Eq. (7)] are plotted as functions of J/γ. The strong antibunching can be obtained even if U_2 < γ, provided J > γ/√2. Remarkably, the required nonlinearity decreases with increasing tunnel coupling J, obeying Eq. (8b). In Fig. 2(b), we show a sketch of the quantum interference effect responsible for this counter-intuitive photon antibunching. The interference is between the following two paths: (a) the direct excitation from |10⟩ to |20⟩, and (b) the tunnel-mediated path |10⟩ → |01⟩ → |11⟩ → |20⟩. In order to show in detail the origin of the quantum interference, we rewrite Eqs. (6) for C_20 = 0 as follows. First, we calculate C_11 from Eqs. (4) and (5), neglecting C_20, as

C_11 = −(ΔE − iγ/2 + U_2) F C_01 / [ 2(ΔE − iγ/2)(ΔE − iγ/2 + U_2) − J² ].   (9)

This amplitude is the result of excitation from |01⟩ to |11⟩ and of the coupling between |10⟩ and |01⟩ and also between |11⟩ and |02⟩. From this amplitude, C_20 is determined by Eq. (5a) as C_20 ∝ J C_11 + F C_10, and we can derive Eqs. (6) from the condition C_20 = 0. As seen in Fig. 1(b), while no more than one photon is present in the first cavity mode at the optimal condition, there can be more than one photon in the whole system. While there is nearly perfect antibunching in the driven mode [g^{(2)}_{11}(0) ≪ 1], the cross-correlation between the two modes exhibits bunching [g^{(2)}_{12}(0) > 1]. The amplitude oscillation between |10⟩ and |01⟩ produces the time oscillation of g^{(2)}_{11}(τ) with period 2π/J, as reported in Ref. [12] and shown in Fig. 3(a). The equal-time correlation function is plotted in Fig. 3(b) as a function of the pump detuning ΔE/γ: while the optimal value of the detuning is at ΔE = 0.275γ, a strong antibunching is obtained in a range of about 0.3γ around the optimal value, and the width of this window does not significantly depend on J/γ. This may suggest that pump pulses of duration Δt_p longer than 1/(0.3γ) could be enough to ensure strong antibunching. However, the timescale over which strong quantum correlations between the photons exist is on the order of 1/J < √2/γ, as seen in Fig. 3(a). While weak nonlinearities do lead to strong quantum correlations, these correlations last for a timescale that scales as 1/J ∝ √U_opt (see Eq. (8b)). From a practical perspective, a principal difficulty with the observation of photon antibunching with weak nonlinearities is that it requires fast single-photon detectors [14]. Conversely, for a given detection set-up, the required minimal value of the nonlinearity is ultimately determined by the time resolution of the available single-photon detector. As seen in Eq. (6), the nonlinearity U_1 of the pumped cavity mode is not essential for the antibunching. This means that only the auxiliary (undriven) photonic mode must have a (weak) nonlinearity to achieve the quantum interference leading to perfect photon antibunching. As a practical realization, one could consider two coupled photonic crystal nanocavities, where the auxiliary cavity contains a single quantum dot that leads to the required weak nonlinearity (see the inset in Fig. 4). The Hamiltonian is written as

Ĥ = Σ_{i=1,2} E_i â_i† â_i + J (â_1† â_2 + â_2† â_1) + F (â_1† e^{−iω_p t} + â_1 e^{iω_p t}) + E_ex |ex⟩⟨ex| + g (â_2† |g⟩⟨ex| + â_2 |ex⟩⟨g|).   (10)

Here, |g⟩ and |ex⟩ represent the ground and excited states of the quantum dot, respectively, E_ex is the excitation energy, and g is the coupling energy with cavity mode 2.
Since the required nonlinearity is relatively weak, one can use a quantum dot which is off-resonant with respect to the cavity mode (|E_ex − E_2| > γ_2 = γ) and/or does not satisfy the strong-coupling condition (g ≃ γ). We take the quantum dot exciton broadening to be equal to the cavity decay rate for simplicity. We have solved numerically the master equation associated with the Hamiltonian in Eq. (10). Fig. 4 shows g^{(2)}_{11}(0) of the pumped mode as a function of g/γ. The coupling energy between the two cavities is J = 3γ, and then the required nonlinear energy should be U_opt = 0.0428γ from Fig. 2. In the present system, this nonlinear energy is practically achieved at g = 1.4γ, which is an intermediate strength between the weak- and strong-coupling regimes of the cavity mode and the quantum dot excitation. The dashed line in Fig. 4 represents the results for a system consisting of one quantum dot and one cavity: in this ordinary Jaynes-Cummings system, only a small antibunching is obtained at g ≃ γ, and strong coupling g ≫ γ is required for the observation of large photon antibunching [1,2]. In contrast, in the new scheme using the quantum interference, a nearly perfect antibunching can be obtained even for g ≃ γ. Finally, we note that the quantum interference can be generalized to a system of many coupled photonic molecules: in this case, the strong on-site antibunching can show an interesting interplay with quantum correlation between neighboring photonic modes. As a demonstration, we consider a ring of three molecules whose driven cavities are coupled with each other by a tunnel coupling of amplitude J_2 [see Fig. 5(a)]. Also in this case a nearly perfect antibunching can be observed in each driven mode, as shown in the plots of g^{(2)}_{ii}(0) as a function of J_2/γ that are shown as a solid line in Fig. 5(b). In order to optimize the antibunching at a finite value of J_2 ≃ γ, values of U = 0.0769γ and ΔE = 0.450γ, slightly different from the single-molecule optimal ones (U_opt = 0.0428γ and ΔE_opt = 0.275γ), had to be chosen. At the same time, a strong bunching effect is observed in the equal-time cross-correlation function between neighboring cavities, which shows a value of g^{(2)}_{i≠j}(0) significantly larger than the coherent field value of g^{(2)}_{i≠j}(0) = 1. This remarkable combination of strong on-site antibunching and strong inter-site bunching suggests that this system may be a viable alternative for the realization of a Tonks-Girardeau gas of fermionized photons discussed in Ref. [9]. In summary, we have analytically determined that a destructive quantum interference mechanism is responsible for strong antibunching in a system consisting of two coupled photonic modes with small nonlinearity (U < γ). The quantum interference effect occurs for an optimal on-site nonlinearity U_opt ≃ 2γ³/(3√3 J²), where J is the intermode tunnel coupling energy and γ is the mode broadening. This robust quantum interference effect has the peculiar feature that the resulting quantum correlations between the generated photons survive only for timescales much shorter than the photon lifetime. Nonetheless, we have shown that this quantum interference scheme has the potential to generate strongly correlated photon states in arrays of weakly nonlinear cavities.
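Although no code appears in the letter itself, the weak-pumping hierarchy above lends itself to a direct numerical check. The sketch below (illustrative drive strength; equal mode energies and losses, as assumed in the text) solves the one- and two-photon sectors of Eqs. (3) and (5) as linear systems and scans U = U_1 = U_2 for the antibunching dip.

```python
import numpy as np

def g2_driven_mode(dE, U, J, gamma=1.0, F=1e-3):
    """Steady-state g2(0) of the driven mode from the weak-drive hierarchy
    C00 ~ 1 >> C10, C01 >> C20, C11, C02 (equal energies and losses)."""
    z = dE - 1j * gamma / 2                      # complex detuning of each mode
    # one-photon sector: z*C10 + J*C01 = -F ;  z*C01 + J*C10 = 0
    C10, C01 = np.linalg.solve(np.array([[z, J], [J, z]]), [-F, 0.0])
    # two-photon sector in the basis (C20, C11, C02); the on-site term
    # U n(n-1) contributes 2U for a doubly occupied mode
    s2 = np.sqrt(2.0)
    A2 = np.array([[2 * z + 2 * U, s2 * J, 0.0],
                   [s2 * J, 2 * z, s2 * J],
                   [0.0, s2 * J, 2 * z + 2 * U]])
    b2 = np.array([-s2 * F * C10, -F * C01, 0.0])
    C20, C11, C02 = np.linalg.solve(A2, b2)
    return 2 * abs(C20) ** 2 / abs(C10) ** 4     # g2(0) ~ 2|C20|^2 / |C10|^4

J, gamma = 3.0, 1.0                               # in units of gamma
Us = np.linspace(0.005, 0.10, 400)
g2 = [g2_driven_mode(0.275 * gamma, u, J, gamma) for u in Us]
print(f"dip at U ~ {Us[int(np.argmin(g2))]:.4f} gamma")
# a sharp dip is expected near U ~ 0.04 gamma (cf. U_opt = 0.0428 gamma in the text)
```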
2010-07-09T15:17:02.000Z
2010-07-09T00:00:00.000
{ "year": 2010, "sha1": "322a460ef853e7bc64b578205244e88f0f9ea216", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1007.1605", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "322a460ef853e7bc64b578205244e88f0f9ea216", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
59482113
pes2o/s2orc
v3-fos-license
Problems Faced While Simulating Nanofluids Problems tend to surface once a technique has been in use for a considerable amount of time; the problems discussed here relate to nanofluids. Nanofluids have been considered for different engineering applications over the last three decades; however, work on their simulation has only been carried out over the last two decades. Over time, nanofluid simulations have been increasing in number compared with experimental testing. Researchers conducting nanofluid simulations encounter difficulties and problems while trying to simulate such systems. In addition, researchers are often unaware of some basic problems and find themselves stuck in relentless difficulties. Most of the time, these problems are very basic and can waste a lot of the useful time of a researcher. Therefore, this chapter introduces some fundamental problems that a researcher may encounter while simulating nanofluids, together with simple ways of dealing with them. Moreover, the chapter contains extensive information on how to design and model a nanofluid system, and it elaborates the nanofluid simulation methodology in a precise manner. The literature shows that nanofluid simulation has gained high consideration over the last two decades, as experimental techniques are not within everyone's reach; in addition, they are expensive, time-consuming and require high skill. Simulation, however, seems to be picking up pace and is increasingly being adopted by experts dealing with nanofluids. This opens up strong prospects for simulating nanofluids in the future, and user-friendly software for conducting nanofluid simulations can be expected. Finally, issues and their resolution are conveyed, which is the main focus of this topic.

Introduction A couple of decades back, nanofluid research was mostly conducted using experimental techniques. With time, as computational power underwent drastic developments, new algorithms were designed; therefore, today we have sophisticated software and mathematical models to solve and simulate the nanofluid environment.

Background knowledge Nanofluids comprise two constituents: 'nano' comes from the nanoparticles and 'fluid' from the base fluid. Combining nanoparticles with a fluid is necessary for enhancing the properties of the base fluid. Adding nanoparticles to the base fluid helps in altering and optimizing properties such as physiochemical [1], thermo-physical [2] and rheological [2][3][4] properties, to give a new composite performance. The initial mixing of nanofluids can be dated back to 1995, when Choi was the first to form a nanofluid at Argonne National Laboratory, USA [5,6]. He used the nanofluid for optimization of thermal conductivity. Since then, there have been several experimental studies on the thermal conductivity of different nanoparticles in various base fluids [5,[7][8][9]. Observing the thermal conductivity improvements, other researchers came up with different ideas and formulations for utilizing this technique in various fields of science. Today, nanofluids are being used in biology, pharmaceuticals and medicine [10], engineering [7], and the lubrication industry [11,12]. Major experimental work in all these industries has been carried out; however, nanofluid experiments require highly skilled labour and expensive equipment.
Furthermore, material purchase and characterization are costly. For these reasons, researchers and industrialists working with nanofluids are trying to develop models that can replicate the mechanisms of nanoparticle-fluid interactions. However, this subject is wide and requires considerable expertise. Now that computational power has reached a level where systems can be simulated and replicated on personal computers, simulating nanofluids is becoming a manageable task. But the task is not as simple as it seems: it requires a thorough understanding of physiochemical interactions together with thermo-physical boundary conditions. There are many algorithms and mathematical models to be considered, and as the number of these models and algorithms increases, more computational power is required for solving them. Nevertheless, the endless applications and usage make it worthwhile for an end-user to adopt this approach, as it enables one to understand the process and makes it visually quantifiable. Before moving forward, it is necessary to understand some basic theory behind the dispersion of nanoparticles within a fluid.

Theory behind dispersion of nanoparticles Dispersion of nanoparticles is a process in which they are distributed throughout a medium such as a fluid. These fluids are of different grades, such as biological fluids, aerospace and automotive fluids, and buffering solutions. According to the kinetic theory of molecules, as one molecule interacts with another, heat is generated by the kinetic molecular motion of the particles. This motion is responsible for the dispersion of nanoparticles in different fluids and, in this model, causes the anomalous increase in the heat transfer of nanofluids. Furthermore, using this model, four major effects produced by nanoparticle dispersion can be explained: (a) Brownian motion of the nanoparticles, (b) liquid layering at the liquid-particle interface, (c) the nature of heat transport between nanoparticles and (d) the clustering effect of nanoparticles in the fluid. These factors are responsible for inducing random motion in the particles and liquid layers, and this phenomenon is Brownian motion. During the interaction between nanoparticle and fluid, heat is evolved, causing nanoparticles to cluster and agglomerate. These mechanisms have already been replicated by various researchers for analysing (a) rheological, (b) thermo-physical and (c) physiochemical properties, as mentioned in Section 1.2.

Applications There are various applications in the area of nanofluid simulation. Currently, nanofluid simulation is applied to analyse the rheological properties of nanofluid environments, which is useful for the biological, oil and gas, lubrication and chemical industries. With the help of simulation, it is now possible to test conditions that could not be tested before, such as viscosity at very low and very high temperatures. The properties of an ideal nanofluid can be tested, and the results can be validated using autocorrelation functions. The use of molecular dynamics has enabled us to test and quantify the thermo-physical quantities of nanofluids at a remarkable level of detail. Chemical interactions that were complicated to understand at the real interface have become straightforward to examine, showing how the atoms of the fluid and the nanoparticle interact together; Brownian dynamics, too, is more readily demonstrated and visualized.
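As a concrete, purely illustrative companion to the Brownian-motion picture above, the following sketch integrates a single nanoparticle with overdamped Langevin (Brownian) dynamics. The particle radius, fluid viscosity and time step are invented numbers, and the Stokes-Einstein relation is assumed for the diffusion coefficient.

```python
import numpy as np

def brownian_positions(n_steps, dt, radius, temperature=300.0, viscosity=1.0e-3):
    """Overdamped Langevin (Brownian dynamics) trajectory of one nanoparticle.

    Uses the Stokes-Einstein diffusion coefficient D = kT / (6 pi eta a);
    each step is a Gaussian displacement with variance 2 D dt per axis.
    """
    k_b = 1.380649e-23                        # Boltzmann constant, J/K
    D = k_b * temperature / (6 * np.pi * viscosity * radius)
    steps = np.sqrt(2 * D * dt) * np.random.standard_normal((n_steps, 3))
    return np.cumsum(steps, axis=0)           # positions relative to the origin

# 50 nm particle in a water-like fluid, 1 ns time step, 10000 steps (10 us total)
traj = brownian_positions(10_000, 1e-9, 50e-9)
print(f"displacement after 10 microseconds: {np.linalg.norm(traj[-1]):.3e} m")
```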
Having all this, and being able to analyse the different properties of fluid-nanoparticle interaction, it is now easy to obtain other parameters such as specific heat [13], total energy, bond formation at the molecular level, chemical interactions, etc. [14]. Furthermore, various effects that could not be judged by experimental testing can now easily be examined, such as the effect of liquid layering on thermal conductivity, as investigated by Li et al. [15]. The particle effect on thermal conductivity can now be determined, as carried out by Lu and Fan [16]. The effect of surfactant addition to a nanofluid system can also be tested using molecular dynamics, which reveals the chemical interactions and aggregation dynamics within the system, as conveyed by Mingxiang and Lenore [17]. Rudyak also succeeded in showing that changing nanoparticle size and shape affects the viscosity [18]. Therefore, in view of the vast applications of nanofluid simulation, it is necessary to give an overview of how these simulations can be conducted.

Need of simulations over experiments Simulations are being preferred over experimental practice in the twenty-first century. Experiments require a lot of manpower and material, which is costly and time-consuming; therefore, researchers favour simulations, as they save material, money and time. With the advancement of computational technology, simulation is increasingly used to replicate nanofluids. Simulation is not a new technique and has gained firm ground; currently, simulation of the real phenomena of dispersion is at an intermediate stage. Before moving to simulations, it is important to understand the dispersion and interaction mechanisms of nanoparticles with fluids. The major phenomenon used for dispersion is Brownian motion, an important aspect that controls the random component of nanoparticle dispersion.

Simulations of nanofluids Nowadays, the use of simulation techniques is increasing due to their cost-effectiveness and time-saving capabilities. Simulations of nanofluids are mostly performed as molecular dynamics simulations (MDS). However, before MDS, researchers adopted theoretical and numerical calculation methods for computing thermo-physical quantities. Earlier theoretical formulations related to MDS research never established a firm footing for replicating the mechanisms of heat transfer, rheology and thermo-physics involved in nanofluid dispersion, because different researchers modelled the system using various assumptions rather than a definite formulation. This creates ambiguity when collating results; such models were, however, well utilized for initial prediction of the thermal transfer properties of nanofluids, at the cost of wide inaccuracies. Experimental results representing the actual system are sometimes far from the ideal model; in addition, researchers apply various differential equations to bring the modelled system as close to realistic results as possible. These methods are the single-phase and two-phase methods [19] of nanofluid heat convection. They are still being used for predicting several properties related to heat transfer, convection and conduction within nanofluid systems [19][20][21], and both methods are now being embedded in computational fluid dynamics and molecular dynamics for heat transfer analysis [21].
The single-phase method of heat convection in nanofluids is an older method and is good for initial prediction of the thermal properties of a nanofluid; the two-phase method is costlier, as it requires higher computing power. The two-phase method is, however, quite versatile, as its predictions are in closer agreement with experimental results. The numerical approach simulates the nanofluid system using classical thermodynamics principles, which is closer to the single-phase model. Different correlations are applied to estimate the imbalance between the heat propagation values of the actual and the ideal system. The physical interaction kinetics involved in a real nanofluid system are not mimicked, which is why realistic prediction is hard to achieve by this approach; moreover, two-phase fluid heat transfer involves higher mathematical complexity, which requires high computational power for general analysis of nanofluid heat transfer, rheology and thermo-physical quantities. Antonis Sergis observed that the absence of a standardized procedure for nanofluid preparation leads to scattered accuracy in the experimental results obtained [2]. In this respect, MDS comes into play, as it helps in simulating both the nanoparticle and the fluid in one single domain, enabling us to mimic the reaction kinetics of both materials together. However, such simulations require high computational power, as they involve the kinetic molecular movement of many atoms. Initially, MDS of heat transfer within a nanofluid system did not involve analysis with respect to geometrical features; particles were spherical with no surface texture, and the analysis was simple, in a uniform and homogeneous system. In early work, the properties of SiO2 nanoparticles were calculated using the Stillinger-Weber potential [22], and the fluid particles were later represented by the L-J potential. There are two different dispersion prospects of MDS, i.e. (1) non-equilibrium MDS (NEMD) and (2) equilibrium MDS (EMD). MDS mimics the microscopic molecular interactions between molecules of various elements, in compound or ionic form. These different thermo-physical interactions of molecular dynamic quantities can be tailored and analysed through true boundary conditions. These boundary conditions relate to the physical settings, chemical interactions, charges, viscosity of the system and the motion exhibited by the particles. The interaction between the molecules is exhibited through Brownian motion, as this mimics the random forces in the system. The system relies on different algorithms behind the scenes to produce a virtual dispersion of nanoparticles in a fluid. Furthermore, the interaction kinetics of a nanofluid system concern the nanoparticle surface interacting with the surrounding fluid; this involves the exchange of energy, the surface tension between the two, the orientation of the nanoparticle, surface energy, bonding configuration, nanoparticle dynamics and kinematics (including nanoparticle spin), liquid layering between nanoparticle and fluid molecules, and the diffusion rate. To explain the trajectories and velocities of a fluidic system, it is necessary to adopt a hydrodynamic framework. Computer simulation to mimic the trajectory of hydrodynamic dispersion of a particle dispersed in a fluid was introduced by Ermak [23]; the work of Ermak and McCammon [24] was more focused on hydrodynamically concentrated systems.
That hydrodynamic work showed that the inter-particle distance can be much greater than the range of hydrodynamic interactions; moreover, Ermak's implementation of Brownian dynamics gave results highly concurrent with the experimental values achieved. The hydrodynamics of the system displays a combination of Coulomb interactions (long-range) and van der Waals interactions (short-range). Furthermore, the dynamics of the system is more convincing after applying the Derjaguin, Landau, Verwey and Overbeek (DLVO) theory [25] to mimic the charges and to make the intermolecular attractions and repulsions more realistic. Currently, different nanoparticles are being considered for various applications. Therefore, for simulating nanofluids, modelling the nanoparticle is important, for which the nanoparticle structure, shape and properties should be known. Subsequently, interaction potentials are mimicked using force fields such as the embedded atom method (EAM), COMPASS, Universal, etc., and, for the other forces between atoms and molecules, the velocity Verlet theorem is implemented. The velocity Verlet algorithm gives the time-dependent movement of atoms from one position to another, with the random component of the motion defined through Brownian dynamics (BD). In addition, the velocities or movements of atoms are controlled using thermodynamic ensembles, i.e. canonical (NVT), grand canonical (μVT), isothermal-isobaric (NPT) and microcanonical (NVE). These ensembles support thermal and physical perturbation to change the dynamical positions of the atoms and molecules within a desired system, which moves the system to a non-equilibrium state. After starting from a non-equilibrium state, the system is then equilibrated until it converges to the equilibrium state. By this convergence, the system acquires stable temperature and physical-quantity fluctuations. This convergence is an iterative process in which the time steps are varied to achieve truly converged results [26,27]. Currently, there are various simulations of nanofluids, for example CuO, TiO2 and CeO2 nanoparticle dispersions in water [3,4]; furthermore, there are also studies of dispersing nanoparticles in hydrocarbons [28]. With two different simulation strategies, a robust methodology can be formulated: as these simulations are performed on two different types of fluid, i.e. polar and non-polar, a concurrent methodology for both fluids can be deduced. Furthermore, up to date, investigators have carried out various studies on nanofluid MDS; the last two decades of work are summarized in Figure 1. The following are the details of their work in the field of nanofluid simulation. In 1998, Malevanets and Kapral [29] formulated a method for computing complex fluidic systems using the H-theorem, which helped in solving hydrodynamic equations and transport coefficients. A colloidal model and a random stochastic movement algorithm based on Brownian dynamics were formulated by Lodge and Heyes [30]. Francis W. Starr investigated the effect of the glass transition temperature on bead-spring polymer melts containing a nanoscopic particle, and found that surface interaction dominates the nanoparticle diffusion within the melted polymeric system [31].
Simulation of chemical interactions was also carried out, in which the bond length and structural orientation were noted for silica nanoparticles in a poly(ethylene oxide) (PEO) oligomer system. From this study, Barbier et al. concluded that silica nanoparticles influence the structural properties of PEO up to two to three layers [32]. Mingxiang and Lenore worked on a hydrocarbon surfactant in an aqueous environment with a nanoparticle diffused within the system. They observed that the agglomeration between water molecules and surfactant was independent of the nanoparticle, i.e. it did not matter whether the particle was present or not [17]. Sarkar and Selvam designed a nanofluid system of Cu nanoparticles with argon as the base fluid; they used the EAM potential and the Green-Kubo technique to find the thermal conductivity of the system, and observed periodic oscillations arising from the heat fluxes imposed by the Lennard-Jones (L-J) potential [9]. Li et al. later worked on a similar system of Cu nanoparticles in an Ar base fluid; however, they found that Brownian dynamics induces a thin layer around a particle, giving a hydrodynamic effect to the particle dispersion [33]. Lu and Fan investigated the thermo-physical quantities of alumina nanoparticles dispersed in water and concluded that the particle volume fraction and size affect the viscosity and thermal conductivity [16]. Sankar et al. examined and formulated an algorithm for calculating the thermal conductivity of metallic nanoparticles in a fluid; they articulated that the volume fraction of nanoparticles and the temperature of the system affect the overall thermal conductivity [8]. Moreover, Cheung carried out research on L-J nanoparticles within a solvent and quantified that the detachment energy decreases as the nanoparticle-solvent attraction rises [1]. Sun et al. devised an EMD technique using the Green-Kubo method to find the effective thermal conductivity of Cu nanoparticles in liquid Ar; a linear increase in the effective thermal conductivity of a shearing nanofluid due to micro-convection was found [34]. Rudyak and Krasnolutskii later worked on aluminium and lithium nanoparticles in liquid Ar and suggested that the size and material of the nanoparticle considerably affect the viscosity [18]. Lin Yun Sheng et al. also detected an increase in thermal conductivity on dispersing Cu nanoparticles in ethylene glycol; in this study, the Green-Kubo formulation was used for finding the thermal conductivity with NEMD [35]. Furthermore, Mohebbi investigated a method to calculate the thermal conductivity of nanoparticles in a fluid using non-periodic boundary conditions with EMD and NEMD [14]. Kang et al. carried out work on the coupling factor between Cu nanoparticles and an Ar base fluid; their investigations suggest that the coupling factor is proportional to the volume concentration of particles, and that there is no effect of temperature change from 90 to 200 K on the coupling factor [36]. Rajabpour et al. investigated the specific heat capacity of Cu nanoparticles in water and found that the specific heat capacity of the system decreases with increasing particle volume fraction in the base fluid [13]. Loya et al.
initiated work on the dispersion of CuO nanoparticles in water, focusing on the change of viscosity with increasing temperature; they found that a temperature increase decreases the viscosity of the nanofluid, as also predicted initially by experimental testing [37]. In addition, further rheological analyses of CuO nanoparticles in straight-chain alkanes [28] and water [4], and of CeO2 in water [3], were carried out by Loya et al. These simulations were conducted using molecular dynamics and provided viscosity results in close agreement with experimental findings. Finally, having reviewed the perspective of nanofluid simulation, a simple and general workflow is deduced for researchers, industrialists and their co-workers in Section 2.3.

Mimicking different properties of nanofluids using simulation Several simulation studies have been reported on the diffusion of polymeric, ionic and mineral nanoparticles [38][39][40]. An example is calcite nanoparticles, which have been simulated in water-salt molecular dynamics for thermal-energy-storage nanofluid simulations [38]. Such simulations mostly concern the diffusion of polymeric nanoparticles or di-block polymers represented by spheres. The major diffusion phenomenon implemented for nanoparticle or polymer dispersion is BD, targeting the random motion of the particles in a solvent or solution system. Further surveys show that some of the best simulations of metal oxide nanoparticle dispersion in water were carried out using the dissipative particle dynamics (DPD) potential [41][42][43]. This potential has the power to disperse nanoparticles while replicating the phenomenon of BD [44]. DPD was first applied to nano-water systems by Hoogerbrugge and Koelman [44,45]; the technique was then grounded in statistical mechanics by Español and Warren. The DPD technique imparts stochastic dynamics to the particles [46]; this is how BD was integrated into the DPD technique. However, the random forces act only in pairwise interactions, since DPD at the same time imparts a hydrodynamic effect on the system. Many DPD studies of complex fluidic systems [41][42][43] show that the dispersion of nanoparticles in water exhibits complex properties; to simulate this, the initial selection of boundary conditions is important for replicating the real scenario. The best way to simulate is therefore to acquire the boundary conditions of an existing experimental system and then implement them in a molecular dynamics simulator [47]. The boundary conditions to consider are the particle sizes, the force field for particle-to-particle interactions, the solvent in which the particles will be diffused, and the physiochemical nature of the system [48,49]. Within the simulation system, the force field plays an important role, since it provides the charges on atoms for interaction. A force field is a set of mathematical parameters that governs the energies and potentials between interacting atoms. The physiochemical settings of the system refer to its thermal, chemical and physical properties, such as the initial temperature settings, charges and dynamics. Finally, the temperature is controlled using different ensembles.

Simulation strategy Nanofluid interactions occur at the molecular level.
Therefore, with this in mind, conducting nanofluid simulations requires a simulation technique that operates at the molecular level. Hence, the technique used is molecular dynamics, and the package focused on in this chapter is the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). How to approach this is described in the next section of this chapter, i.e. Section 3.1.1.

Approach The simulation of nanoparticle dispersion is carried out with MDS. The software or package is selected based on the conditions that need to be simulated, with flexibility a major concern for applicability to different systems. LAMMPS is well suited as a molecular dynamics package for simulating a nanofluidic system; it is the code developed at Sandia National Laboratories by Plimpton [50]. This molecular dynamics software offers high flexibility compared with other available packages such as Monte Carlo codes and GROMACS. After selection of the MD package, to simulate a desired system with realistic features it is vital to know and understand the initial boundary conditions. For a dispersion of nanoparticles, these initial conditions relate to the charges within the system, molecular bonding, forces of attraction (i.e. van der Waals or electrostatic Coulomb interactions), force fields, pair potentials (i.e. molecular mechanics constants) and molecular weight. To perform an MD simulation, the initial boundary conditions are the fundamental parameters that reproduce the actual dynamics of a real system. After setting the initial parameters, the velocity of the system is equilibrated and ensembles are applied to mimic the real thermo-physical conditions. Once all the boundary conditions related to the chemical and thermo-physical parameters are set, the system is equilibrated for a certain number of time steps, and the simulation is run until converging results are obtained, matching those of the actual system. Here, the time step is the major controlling factor: it accounts for equilibrating the kinetics of the system, i.e. the movement of the system from a non-equilibrated state to equilibrium conditions. The method explained above is condensed and illustrated in the flowchart of Figure 2. Beyond the overall approach, it is also important to know brief details about steps such as the force field, pair potentials and ensembles. After setting up the atoms in a coordinate system using molecular modelling software, a force field is applied to the system (i.e. Universal, COMPASS, OPLS, etc.), by which the atomic charges and bond configurations are set up. These force fields are interlinked with pair potentials (such as DPD, BD, smoothed particle hydrodynamics, L-J, etc.), which are parameters used to describe the vibrational and oscillation settings between two different atoms. Finally, ensembles are applied to the molecular dynamics system to equilibrate the actual thermal settings, for example NVT, NPT, NPH, etc.

Techniques and tools As is now clear from the previous sections, to perform MDS it is necessary to know the techniques and tools that are useful for executing the work.
Today, there are several tools and ways to perform this; however, researchers are still unsure about the clear steps for conducting nanofluid simulations using molecular dynamics. Therefore, this section conveys a brief and concise workflow for people working under the horizon of nanofluid simulation. The steps are as follows (a scripted sketch of these steps is given at the end of this section):

a. First, to create the nanofluid simulation system, a nanoparticle and a fluid must be set up and then combined; Materials Studio is well suited to designing the nanoparticle. The nanoparticles can then be inserted and replicated in a box containing fluid particles; however, this may be tedious for bigger systems. It is therefore suggested to use Packmol: after creating the Protein Data Bank (PDB) file from Materials Studio, write an input script for Packmol to replicate the system with as many particles and fluid molecules as required. This software automatically packs the overall molecular arrangement within a confined imaginary box.

b. Once the nanofluid system is set up, an input data file is required for LAMMPS; this can be generated by converting the PDB file to the required .CAR and .COR formats using Materials Studio. Before conversion, do not forget to assign charges to the atoms of the nanoparticle and the fluid molecules; the Discover module of Materials Studio can be used for this. After conversion to .CAR and .COR, use the msi2lmp package provided with LAMMPS to convert the file into a LAMMPS-readable input.

c. Once the LAMMPS-readable input file is generated, use the "read_data" command so that LAMMPS reads this file during simulation execution.

Finally, data quantification, visualization of the effects and the properties that can be analysed are described in the following sections.

Data quantification The data obtained using different compute commands can be quantified in MATLAB or Excel. MATLAB initially requires more time for developing a script to compute the mathematical problem or graphs, but in the long run it saves time; Excel is easier to start with but requires more time for plotting graphs each time new data are fed in. MATLAB scripting helps in formulating the work in a precise manner and in producing high-quality output for journal publications, but it requires good command of MATLAB scripting and functions. Using MATLAB, it is easy to apply discrete as well as continuous algorithms and equations for refining and optimizing results; it also helps in applying regression to noisy data for refinement. Similar operations are possible in Excel, but they are more complicated, as macros must be applied. MATLAB computations can nowadays be run in parallel mode; for Excel, this is again quite difficult. However, for graphical representation of data, Excel is quite versatile. Both tools have their own benefits over each other; the choice depends entirely on the user's familiarity with the software. In addition, to establish complex calculations in Excel, its macros must be interlinked with Visual Basic scripting, found under the developer's tool library, which is mostly hidden from newcomers.
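To make steps (a)-(c) more tangible, here is a sketch that writes a Packmol input programmatically. The file names (water.pdb, cuo_np.pdb, nanofluid.pdb), particle counts and box size are hypothetical placeholders; the Packmol keywords themselves (tolerance, filetype, output, structure/number/inside box) are standard.

```python
from pathlib import Path

# Hypothetical file names; replace with the PDB files exported from the
# molecular modelling tool in step (a).
PACKMOL_INPUT = """
tolerance 2.0
filetype pdb
output nanofluid.pdb

structure water.pdb
  number 4000
  inside box 0. 0. 0. 60. 60. 60.
end structure

structure cuo_np.pdb
  number 4
  inside box 0. 0. 0. 60. 60. 60.
end structure
"""

Path("pack_nanofluid.inp").write_text(PACKMOL_INPUT.strip() + "\n")
# Then, outside Python:  packmol < pack_nanofluid.inp
# followed by the .CAR/.COR conversion and msi2lmp (step b), and a LAMMPS
# input script containing:  read_data nanofluid.data   (step c)
```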
Visualizing the effects After successful execution of the simulation, LAMMPS produces dump files; software that can read LAMMPS trajectories can then be used to read and visualize them. Visual Molecular Dynamics (VMD) can be used for this; OVITO is also a good software for visualizing trajectories. The results generated by OVITO are represented as small spheres merged together to form the representation of a particular system, as shown in Figure 3 for a CuO-water nanofluid system. The corresponding VMD output is shown in Figure 4; it is similar to that of OVITO, but VMD is capable of representing the trajectories in the form of molecular structures. This gives researchers working in the areas of biochemistry, pharmacy, drug delivery and biomedicine an extra possibility of representing and observing chemical kinetics in real time, i.e. how one atom reacts and interacts with another atom within a confined system.

Properties that can be analysed Some properties and parameters can be analysed directly in VMD using the trajectory dump files. VMD has options for analysing the radial distribution function (RDF) and the mean square displacement (MSD), which indicate the agglomeration and the dispersion rate, respectively.

Figure 3. Representation of OVITO output of molecular dynamics of CuO nanoparticles in a water system [4].

Where nanofluids are concerned, the major parameters or properties researchers are interested in are viscosity, thermal conductivity, specific heat capacity, thermal diffusivity, diffusion coefficient, total energy, heat loss, etc. To find these properties, LAMMPS provides versatile compute options using different algorithms or previously established techniques. The variables of main concern among those mentioned above are viscosity, diffusion coefficient and thermal conductivity. Therefore, the next section discusses how to validate and quantify the results obtained from the simulation.
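Since the diffusion coefficient is obtained from the slope of the MSD, a small sketch of that fit may be useful before turning to validation. The MSD data below are synthetic, and the Einstein relation MSD(t) → 6Dt for three dimensions is assumed.

```python
import numpy as np

def diffusion_from_msd(times, msd, fit_start=0.5):
    """Diffusion coefficient from the long-time slope of the MSD.

    Einstein relation in 3D: MSD(t) -> 6 D t, so D = slope / 6.
    Only the tail of the curve (t > fit_start * t_max) is fitted, since
    the short-time ballistic part does not obey the linear law.
    """
    times, msd = np.asarray(times), np.asarray(msd)
    tail = times > fit_start * times[-1]
    slope, _ = np.polyfit(times[tail], msd[tail], 1)
    return slope / 6.0

# toy MSD: linear growth (slope 6D with D = 2.5e-9 m^2/s) plus small noise
t = np.linspace(0, 1e-9, 200)                  # seconds
msd = 6 * 2.5e-9 * t + 1e-22 * np.random.standard_normal(t.size)
print(f"D ~ {diffusion_from_msd(t, msd):.2e} m^2/s")   # expect ~2.5e-9
```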
During the intermolecular kinetics drag is created between the Problems Faced While Simulating Nanofluids http://dx.doi.org/10.5772/66495 molecular layers, this drag is due to the effect of shearing forces. Ultimately as the system is equilibrated, it shows unstable response of the SACF, however, as it approaches stability the SACF starts to converge to a monotonic level, which satisfies that the viscosity analysed is acceptable. In the similar manner, thermal conductivity is quantified, but here instead of stress and shear forces, heat is considered. Therefore, this is known as heat autocorrelation function (HACF), which quantifies or validates the thermal conductivity obtained is satisfactory. In addition to HACF and SACF for thermal conductivity and viscosity, respectively, for diffusion coefficient, velocity autocorrelation function is used for its quantification. As diffusion coefficient is measured by taking the slope of the MSD. So to quantify and validate it, displacement with respect to time i.e. velocity can be used. The accuracy of results equilibrated for measuring the viscosity and thermal conductivity of a system can be justified in a better way with the estimation of heat autocorrelation function and stress autocorrelation function as show in Figure 5. The graphical result in Figure 5 explains the process of the integration of non-equilibrated system to equilibration. At step (a), the system starts with a thermodynamic equilibrium, but the system is not at equilibrium state. At step (b), the thermodynamic conditions are changed due to implementation of thermal ensemble so the system tends to go towards equilibrium. At step (c), the Figure 5. Autocorrelation output gained by running a molecular dynamics simulations [26]. Nanofluid Heat and Mass Transfer in Engineering Problems non-equilibrium system moves to equilibrated level of convergence at this level the system satisfies the convergences. This process is followed during the equilibration of the thermophysical quantities, the convergence time steps depend on the volume and quantity of the atoms in that system. For the larger system, large amount of computational power and time step will be required for convergence. Problems faced for simulating nanofluids So far the topic has been conveying the techniques, approach and method for carrying out nanofluid simulations. Moreover, there has been no data available for the expertise to know what are the problems faced when these simulations are conducted, number of questions can arise, for example, (1) Till what level, computational power can support our simulations? (2) Is there any other way out rather than this? (3) How larger systems can be simulated? etc. Therefore, to answer these questions, it is necessary to understand the material and knowledge given before, however, as the number of atoms are increased within a nanofluid system the molecular dynamics demonstrates sluggish performance due to less computational capabilities i.e. either central processing unit (CPU) power or graphic processing unit (GPU). Furthermore, it is not just simulation that need to be carried out but for the data quantification, the data that are gathered requires huge memory for storage. Thereby, requiring the random access memory (RAM) and hard disk drive (HDD) to be large enough to store the required data easily [51]. 
After hardware issues, the second set of problems faced by nanofluid simulation is the use of multiple software for designing, modelling, processing and visualization, which needs a lot of understanding of computer for a new geek. Furthermore, if this all is combined in one package, this can marvellously save time and money for purchasing different software for data acquisition. It is slightly known at the moment that there are few software in market for helping in simulating nanofluid; however, academia is not yet aware of it due to less versatility such as Medea and Scienomics MAPS. One of the major problem is that, people of twenty-first century like working using graphical user interface (GUI), as it is easy and you can do everything by just clicks rather than using complicated commands, however, most of the molecular dynamics package are used on Linux operating system, moreover, commands are used for computing and feeding the data for computation. In addition to high computing power, it should be known that before attempting to simulate large scale molecular dynamics (i.e. with more than 0.1 million atoms), it is required to have parallel processing enabled on the PC. For that high end, CPU or GPU is required with multi cores for processing the data in parallel mode. However, this processing has some drawbacks that are loop holes for simulations, one such kind is that sometimes the algorithm is not designed in a way to parallel the process efficiently, which in turn gives ambiguous s imulation output and convergence. For avoiding this, it is necessary for the user to know the correct working of the algorithm. Moreover, the field programmable gate array (FPGA) is good outbreak technology that is being implemented for paralleling the process [52,53], nevertheless, again this technology requires new stuff and bits coding to be learned before operating or using this module for rapidly solving the simulation. Conclusion The chapter has brought about marvellous information and the literature for new geeks for conducting a nanofluid simulation. However, this chapter acts as a guide for a newbie for initialising the nanofluid simulation. Nomenclature Words Abbreviation
2018-12-21T22:06:32.520Z
2017-03-15T00:00:00.000
{ "year": 2017, "sha1": "e66b27e1f167e28180bbc7d58ed34171e1acee83", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/53300", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0d943bbd0386e7405d42bf02f4dcdd38f53078b7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
238859457
pes2o/s2orc
v3-fos-license
Endoscopic Management of Anastomotic Leakage after Esophageal Surgery: Ten Year Analysis in a Tertiary University Center Background/Aims Anastomotic leakage after esophageal surgery remains a feared complication. During the last decade, management of this complication changed from surgical revision to a more conservative and endoscopic approach. However, the treatment remains controversial as the indications for conservative, endoscopic, and surgical approaches remain non-standardized. Methods Between 2010 and 2020, all patients who underwent Ivor Lewis esophagectomy for underlying malignancy were included in this study. The data of 28 patients diagnosed with anastomotic leak were further analyzed. Results Among 141 patients who underwent resection, 28 (19.9%) developed an anastomotic leak, eight (28.6%) of whom died. Thirteen patients were treated with endoluminal vacuum therapy (EVT), seven patients with self-expanding metal stents (SEMS), four patients with primary surgery, one patient with a hemoclip, and three patients were treated conservatively. EVT achieved closure in 92.3% of the patients with a large defect and no EVT-related complications. SEMS therapy was successful in clinically stable patients with small defect sizes. Conclusions EVT can be successfully applied in the treatment of anastomotic leakage in critically ill patients, while SEMS should be limited to clinically stable patients with a small defect size. Surgery is only warranted in patients with sepsis with graft necrosis. INTRODUCTION Endoluminal vacuum therapy was first described for the treatment of anastomotic leakage during rectal surgery. 11 The same procedure was adopted for defects in the upper gastrointestinal tract in 2006. 12 To date, however, the treatment of anastomotic insufficiency remains controversial, as the indications for conservative, endoscopic, or surgical treatment remain non-standardized. 13,14 Recently, the Surgical Working Group on Endoscopy and Ultrasound (CAES) developed a classification for intrathoracic anastomotic leaks, suggesting a classification and treatment algorithm. 15 The aim of this study was to evaluate the endoscopic treatment options for postoperative intrathoracic anastomotic leaks, mainly comparing SEMS and EVT in a tertiary university center and outlining a more standardized approach for the future. Study Population Between 2010 and 2020, all patients who underwent Ivor Lewis esophagectomy for an underlying malignancy were included in this study. All patients who developed post-surgical anastomotic leaks were further analyzed. This led to a total of 28 patients who were treated for anastomotic leak at our hospital. In addition, the following parameters were examined: date of surgery, postoperative day of detection of an anastomotic leak, CAES classification, type of treatment used, median combined intensive care unit and intermediate care stay, median hospital stay, tumor histology, tumor grading, neoadjuvant therapy, number of lymph nodes harvested, R-status, operation method, operation time, morbidity, and mortality. Regarding endoscopic treatment, the following variables were analyzed: location of the defect, size of the defect, number of stents or vacuum sponges used, event-related complications, length of treatment, and treatment outcome. Diagnosis of Anastomotic Insufficiency An anastomotic leak was defined as a communication between the intra- and extraluminal compartments through a defect in the integrity of the intestinal wall of the anastomosis. Routine examination using a dynamic swallow study was performed until 2016.
If the dynamic swallowing study suggested the presence of an anastomotic leak, it was followed by an upper endoscopy (UE) or computed tomography (CT). After 2016, routine examination of the anastomosis was abolished and was only performed if patients showed symptoms suggestive of a leak. If so, a combination of UE and CT was performed. Subsequently, patients with a macroscopically visible mediastinal leakage cavity (referred to as the "extraluminal cavity") were always treated using EVT. In contrast to this group, patients with smaller anastomotic defects and no or only a small leakage cavity (referred to as the "intraluminal cavity") were treated with stent therapy. In addition, EVT has become increasingly established as a standard therapy over the past few years. EVT If anastomotic leakage was clinically suspected or confirmed using a CT scan, EVT was evaluated as a therapeutic option. A sufficient external thoracic drain was inserted in the case of a large mediastinal septic abscess. In cases of confirmed or suspected anastomotic leakage, UE was performed in sedated or mostly intubated patients (GIF-H180, GIF-H190; Olympus Co., Tokyo, Japan). If there was evidence of a large extraluminal leakage cavity, which could only be inspected using a small-caliber nasal endoscope (GIF-N180) but not with a normal endoscope (GIF-H180, GIF-H190), the defect was expanded using balloon dilatation (CRE™ Wireguided 12-15 mm; Boston Scientific, Marlborough, MA, USA). During the initial endoscopy, the leakage cavity was cleaned and measured to determine the required length and diameter of the sponge, which was then reshaped accordingly. Open-pore polyurethane sponges, Eso-SPONGE® (B. Braun Melsungen AG, Melsungen, Germany) with a primary diameter of 24 × 55 mm and a 12 CH Redon drain, or an individually adapted sponge (V.A.C. VERAFLO™ Dressing Kit; KCI, St. Paul, USA) fixed to a drain (Argyle™ Edlich Gastric Lavage Tube; 16 CH, Medsitis, USA), were used. In general, the intraluminal placement of the sponge in the case of small anastomotic defects (usually less than 8 to 10 mm) or residual cavities with no infection can be differentiated from the intracavitary placement of the sponge, where it is introduced through the wall defect into the extraluminal, i.e., mediastinal, cavity. The intracavitary version of EVT was preferred. For placement of the sponge, two endoscopic methods were used, the "push" technique or the "piggyback" technique. Using the push technique, the sponge was advanced to the correct location along an overtube with a pusher or the endoscope, and a specially approved device (Eso-SPONGE®; B. Braun Melsungen AG, Melsungen, Germany). Using the piggyback technique, the sponge was placed in the leakage cavity under direct endoscopic vision, while a suture loop placed at the tip of the sponge was grasped using endoscopic forceps and the sponge was pulled close to the endoscope. While the first technique is often used for small anastomotic leaks with intraluminal positioning of the sponge, the second technique is preferably used for intracavitary placement of the sponge. The drainage tube was placed transnasally and connected to a variable-speed medical vacuum pump (V.A.C. ULTA®; KCI, San Antonio, Texas, USA). Suction was applied at a negative pressure of 75-125 mmHg. In addition, a transnasal gastric or duodenal tube was inserted for enteral nutrition. After a dwell time of 3-5 days, the next endoscopy was performed.
In this procedure, the sponge was removed orally after it was disconnected from the vacuum pump. The treated cavity was then examined using an endoscope to document the success of the treatment, particularly with a focus on subsequent granulation. A new sponge was inserted after re-measuring the size of the cavity to determine the size of the new sponge. To promote effective cavity closure, the diameter of the sponge was first reduced without reducing its length to allow for closure of the remaining channel in subsequent treatment cycles from the distal part of the channel to the proximal part. EVT was continued until the cavity was reduced to less than 1 cm. During each endoscopy, the CAES grading of anastomotic insufficiency in the esophagus was reevaluated retrospectively. Figure 1 illustrates the typical clinical course of treatment in one of the study patients undergoing successful EVT. SEMS Similar to the principles of EVT, endoscopic evaluation was carried out with regard to the size of the defect, existence of an extraluminal leak cavity, and perfusion of the anastomosis or the gastric sleeve. SEMS were mostly inserted under direct endoscopic view in the intensive care unit (ICU) without radiological control. First, a stiff wire (Amplatz Super Stiff Guidewire; Boston Scientific) was placed down to the stomach under endoscopic control. A fully or partially covered SEMS with a diameter of 22-28 mm was inserted and released under endoscopic view (WallFlex™ Esophageal Stent, partially covered, 22-28 mm; Boston Scientific) with a total length of 100 mm. The SEMS was removed after approximately three weeks. In the case of persistent insufficiency, another stent was inserted. CAES grading was reevaluated retrospectively. Figure 2 illustrates the management of an anastomotic leak in one of the study patients using an SEMS. Statistical Analysis Statistical analysis was performed using IBM SPSS Statistics Version 24 (64-bit) for Mac OS (IBM Co., Armonk, NY, USA). Continuous variables are presented as medians. To compare these variables, we employed analysis of variance (ANOVA) with multiple factors. Categorical variables were compared using the chi-squared test. Statistical significance was defined as p < 0.05. Patient Characteristics Between 2010 and 2020, 141 patients underwent Ivor Lewis esophagectomy for underlying malignancy. All relevant patient characteristics are presented in Table 1. A total of 28 patients were diagnosed with postoperative anastomotic leakage, resulting in an anastomotic insufficiency rate of 19.9%. Of these patients, three patients were treated conservatively, 13 patients were treated with EVT, seven patients were treated with SEMS, one patient was treated with a hemoclip, and four patients received primary surgery to treat the defect (Fig. 3). Six patients required surgical revision after the initiation of endoscopic treatment. All 28 patients were classified using the CAES classification. Of the 28 patients, 23 were men and five were women. The median age was 58.5 years (range: 32-75 years). The median body mass index and American Society of Anesthesiologists classification were 25 kg/m² and 2, respectively. The reasons for esophagectomy were adenocarcinoma in 25 patients (89.3%) and squamous cell carcinoma in three cases (10.7%). Twenty of the 28 patients received neoadjuvant chemotherapy or radiotherapy. All but one patient underwent open Ivor Lewis esophagectomy with a median operation time of 290 min (range: 144-624 min).
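To illustrate the categorical comparison named in the Statistical Analysis paragraph, here is a hedged Python sketch. The 2×2 table uses the defect-closure counts reported in the Results below (12/13 for EVT, 6/7 for SEMS) purely as an example; with expected cell counts this small, a Fisher exact test would normally be preferred over the chi-squared test.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative 2x2 table: rows = treatment, columns = closure outcome
table = np.array([[12, 1],    # EVT:  defect closed, not closed
                  [ 6, 1]])   # SEMS: defect closed, not closed
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```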
Overall Clinical Outcomes Of the 141 patients, 28 (19.9%) were diagnosed with postoperative anastomotic insufficiency. The median time from surgery to diagnosis was 7.5 days (range: 2-30 days). The median distance from the upper incisor to the defect was 25 cm (range: 18-30 cm), with a median defect size of 10 mm (range: 5-30 mm). In 10 patients, the defect developed into a macroscopically visible extraluminal cavity. The median hospital stay was 48.5 days (range: 9-193 days) with a median ICU/intermediate care (IMC) stay of 22 days (range: 9-193 days). Of the 28 patients who developed anastomotic insufficiency, 20 (71.4%) were treated successfully, while eight (28.6%) patients died. The overall endoscopic findings are presented in Table 2. Eight patients were treated with alternative methods to EVT or SEMS. Of these eight patients, four underwent primary surgery, two of whom required additional surgical revision and died postoperatively. Three of the eight patients were treated conservatively, and one patient was treated with endoscopic clipping of the defect. All patients receiving non-surgical alternative treatments were successfully treated. The median combined IMC/ICU and hospital stays were 12 and 29 days, respectively. The median defect size was 3.5 mm (range: 2-4 mm) for the conservative group and 22.5 mm (range: 10-25 mm) for the surgical group. Clinical Outcomes: EVT Thirteen patients (46.4%) with a mediastinal leakage cavity (extraluminal cavity) were treated with EVT. The median time to diagnosis of anastomotic leak after the primary surgery was 8 days. All patients with an extraluminal cavity were treated using EVT, and all of those within the EVT group who died had an extraluminal cavity. No EVT-related complications were observed. The median time for EVT was 24.5 days (range: 8-80 days) with a median of five exchanged sponges (range: 4-18). The median defect size was 15 mm, and the median defect was located 26 cm from the upper incisor. Eight patients were successfully treated (61.5%), while five patients died. However, complete closure of the defect was achieved in 12 patients (92.3%). Five patients needed surgical revision during EVT, and four of those patients died. All of these patients required surgery due to complications that could not be attributed to EVT. The reasons for surgery were ischemia of the colon, necrosis of the pancreas, postoperative incarcerated hiatal hernia, and hemothorax. The median ICU/IMC stay was 38 days (range: 9-193 days), with a median hospital stay of 74 days (range: 9-193 days). The overall outcomes of EVT are shown in Table 3. Clinical Outcomes: SEMS Seven patients (25%) were treated with SEMS. In the stent group, none of the patients had an extraluminal cavity. The median duration of SEMS therapy was 22 days (range: 3-31 days), with a median of one SEMS exchange (range: 1-2). The median defect size was 6 mm, and the median defect was located 23 cm from the upper incisor. Six patients were successfully treated (85.7%), while one patient required surgical revision and died. The median ICU/IMC stay was 20 days (range: 16-57 days) with a median hospital stay of 41 days (range: 22-123 days). The median time to diagnosis of anastomotic leak was 7 days. During SEMS treatment, event-related complications, including stent migration (n = 1) and perforation (n = 1), were noted. Closure of the defect was achieved in six out of seven patients (85.7%).
The overall outcomes of SEMS therapy are shown in Table 3. Comparison of SEMS and EVT Statistical analysis comparing SEMS and EVT revealed statistically significant differences (p < 0.05) in terms of defect size, presence of an intraluminal or extraluminal cavity, and the number of sponges/stents used. No other differences in outcomes between the two treatment options were statistically significant. The results of the statistical analysis comparing the two modalities are presented in Table 3. DISCUSSION Post-surgical anastomotic insufficiency is one of the most feared complications and is associated with high morbidity and mortality rates. Therefore, the goal of every physician involved in the treatment of patients following esophageal surgery is to diagnose and manage the event and its related complications in a timely manner. As already mentioned, the incidence of anastomotic leaks after surgery can be up to 50% and is associated with a mortality rate of 20%. However, if operative revision is necessary, the mortality rate can exceed 60%. Historically, the damage was controlled through a combination of surgery and conservative management with nil per mouth, antibiotics, and drainage. In the last decade, however, EVT and SEMS have been used to successfully manage intrathoracic anastomotic insufficiency. In 2018, the German CAES group suggested a classification and treatment algorithm for intrathoracic leaks. Here, surgical revision was only suggested in cases of graft necrosis or in patients with pre-sepsis. 15 Endoscopic therapy for intrathoracic leaks includes clipping of the defect, use of EVT, and insertion of different types of stents. Schaheen et al. performed a systematic review of the use of stents in the management of anastomotic leaks and found 25 studies. 16 Endoscopic placement was successful in 72% of patients, with an overall mortality of 15%. The types of stents used included SEMS and self-expanding plastic stents (SEPS), with an average time remaining in situ of 6 and 8 weeks, respectively. Stent-related complications included stent migration, perforation, bleeding, and tissue ingrowth. These findings mirror the results of our retrospective analysis. SEMS treatment was successful in 85.7% of the patients, with a mortality rate within the group of 14.3%. Two out of seven patients had event-related complications, including stent migration (14.3%) and perforation (14.3%). However, due to the small sample size, the results were not statistically significant (p > 0.05). The median number of stents used was one. The systematic review concluded that endoscopic stenting remains an experimental therapy as stenting has the ability to "stent the seal" but not "heal the leak"; therefore, mortality remains high even after endoscopic stenting. EVT is another option for treating anastomotic leaks. Similar to vacuum-assisted closure of secondary wound infections, a sponge is placed intraluminally or intracavitarily with suction applied through a transnasal drain. A systematic review of three available studies showed that 37 out of 40 patients (93%) were successfully treated with EVT without the presence of EVT-related complications. 17-19 This is also reflected in our results. In 12 out of 13 patients (92.3%), complete closure of the leak was achieved with no EVT-related complications noted. Unfortunately, five patients required surgery during EVT due to complications that could not be attributed to the therapy.
The reasons for surgery were ischemia of the colon, necrosis of the pancreas, incarcerated hiatal hernia, and hemothorax. This is also reflected by the CAES classification (EVT vs. SEMS), as shown in Table 3. However, due to the small sample size, there was no statistical significance regarding CAES classification (p > 0.05). The difference in defect size between the SEMS and EVT groups was very noticeable in our retrospective analysis. There was a tendency to treat patients with smaller defects and no mediastinal leakage cavity with stent therapy. The defect size in the SEMS group was 6 mm compared to 15 mm in the EVT group, yielding a 2.5-fold size difference. A comparison of the defect size between the two groups using multivariate analysis was statistically significant (p < 0.05). In addition, all patients who had an extraluminal cavity on endoscopic findings were treated with EVT instead of SEMS (p < 0.05), and only patients in the EVT group had a worse outcome, underscoring the importance of extraluminal, i.e., mediastinal, cavities for the overall prognosis. As already outlined, there was a tendency to treat more critically ill patients with EVT. These results suggest that SEMS treatment is only warranted in patients with a small defect size and with no extraluminal cavity. Unfortunately, we were not able to find similar results across published data of other groups analyzing EVT and SEMS therapy, as defect size and presence of an extraluminal cavity were not specifically described and analyzed. Only one study group mentioned the size of the defect and the presence of a cavity in their study. 20 In our opinion, these are two crucial findings that seem to be related to treatment and patient outcomes and should be analyzed further. This study has some limitations. First, this was a retrospective, non-randomized study. Second, the sample size was small and, therefore, it was difficult to compare the two treatment modalities to identify a statistically significant difference. However, although these limitations are present, this is one of the only available studies mentioning defect size and analyzing its potential influence on the choice of treatment. In summary, our results suggest that EVT seems to be a better treatment option than SEMS for patients with a large defect size and the presence of an extraluminal cavity. It can be safely applied to critically ill patients with large defects. SEMS therapy seems warranted only in non-septic patients with a small defect size and no extraluminal cavity. Primary surgical revision should be reserved only for septic patients with graft necrosis.
2021-10-15T06:16:47.271Z
2021-10-14T00:00:00.000
{ "year": 2021, "sha1": "570d7298bf76f25916f271a442279d2b9f88f1e0", "oa_license": "CCBYNC", "oa_url": "https://www.e-ce.org/upload/pdf/ce-2021-099.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "874ae922566d2cebb54f5bf8c2c789773eb9e002", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237310090
pes2o/s2orc
v3-fos-license
American Veterans in the Era of COVID-19: Reactions to the Pandemic, Posttraumatic Stress Disorder, and Substance Use Behaviors The COVID-19 pandemic may have a compounding effect on the substance use of American veterans with posttraumatic stress disorder (PTSD). This study investigated the effects of PTSD and current reactions to COVID-19 on alcohol and cannabis use among veterans who completed a survey 1 month prior to the pandemic in the USA and a 6-month follow-up survey. We hypothesized that veterans with PTSD would experience more negative reactions to COVID-19 and increased alcohol and cannabis use behaviors relative to those without PTSD. Veterans with PTSD prior to the pandemic, relative to those without, endorsed poorer reactions, greater frequency of alcohol use, and greater cannabis initiation and use during the pandemic. Veterans with PTSD may use substances to manage COVID-related stress. Clinicians may see an increase in substance use among this group during and after the pandemic and may need to implement specific behavioral interventions to mitigate the negative effects of COVID-19. Early research linked the pandemic to worsening mental health symptomology, with contributing stressors including fear of COVID-19, economic hardship, loneliness, and social isolation (Fitzpatrick et al., 2020; Horigian et al., 2020; McGinty et al., 2020; Witteveen & Velthorst, 2020). In addition to mental health concerns, substance use is concerning during the pandemic, and research has reported increases in use and initiation of substance use during COVID-19, with particular increases in alcohol use among adults compared to other substances (Pollard et al., 2020; Sharma et al., 2020; Vanderbruggen et al., 2020). Results regarding changes in cannabis use have been more mixed, with some reporting increases in use and others reporting decreases or no change in use (Sharma et al., 2020; Vanderbruggen et al., 2020). Of great concern is addressing populations with pre-pandemic behavioral health conditions, as these conditions may amplify adverse changes in substance use behaviors and mental health during the pandemic (Alonzi et al., 2020; Horigian et al., 2020; Kim et al., 2020). For example, those already struggling with posttraumatic stress disorder (PTSD) may react differently to the global stress that surrounds the pandemic and, further, may resort to increased substance use behaviors, the latter of which may be used as a means to cope with the added stress of the pandemic or the exacerbation of symptoms. Increases in substance use behaviors have been observed as coping reactions to stressful life events (Hyman & Sinha, 2009; Kevorkian et al., 2015; Werner et al., 2016). Thus, for those with PTSD, additional stressful events could exacerbate pre-existing symptoms and make coping with the new stressors more difficult or perhaps lead to maladaptive coping behaviors (Evans et al., 2013; Green et al., 2010). There is an urgent need to understand if, and how, the COVID-19 pandemic has influenced behavioral health outcomes among at-risk groups, such as American veterans. Prior work notes that veterans report rates of PTSD ranging from 11% to 30% and are at heightened risk for problematic substance use (Dursa et al., 2014; Gradus, 2014; Lapierre et al., 2007; Teeters et al., 2017). American veterans who report PTSD prior to the pandemic may be at heightened risk for exacerbated substance use as well as more negative reactions to the pandemic. Yet, little is known regarding veterans' ability to manage substance use, PTSD symptomology, and COVID-specific stressors.
Such work is needed to inform clinical and outreach efforts in the post-pandemic period. Currently, studies with veterans that include a pre-pandemic time point are limited, making it difficult to draw conclusions about changes in substance use behaviors over time for veterans. The Present Study The overall purpose of this study was to examine veterans' reactions to COVID-19 and substance use patterns, with particular attention to those who screened positive for PTSD prior to the pandemic. More specifically, in February 2020 (Time 1), approximately 1 month prior to the Trump administration's declaration of a national emergency in the USA (AJMC, 2020), we conducted a survey of veteran substance use behaviors and PTSD symptoms. Six months later, in August 2020 (Time 2), we conducted a follow-up survey with veterans to assess their substance use behaviors, PTSD symptoms, and reactions to COVID-19. First, we sought to examine veterans' reactions to the pandemic, including emotions (e.g., anxiety, depression), behaviors (e.g., sleep problems), stress (e.g., financial), and family and social relationships. We hypothesized that veterans would report poorer reactions as the course of the pandemic progressed. Second, we sought to examine changes in substance use from Time 1 to Time 2. We expected drinking and cannabis use to either stay approximately the same or increase due to stress related to the pandemic. Third, we examined whether veterans who screened positive for PTSD at Time 1 reported poorer reactions to the pandemic and more substance use at Time 2 compared to those who did not screen positive, hypothesizing that veterans with positive PTSD screens would report poorer outcomes. Lastly, we assessed if reactions to COVID-19 during the course of the pandemic moderated the association between PTSD and substance use, hypothesizing that those with poorer reactions to COVID-19 and who screened positive for PTSD would report greater substance use. Participants and Procedures In February 2020, participants were recruited via social media ad campaigns for a study on "veteran attitudes and behaviors" as part of a survey effort to examine drinking and mental health symptoms among a sample of young adult veterans recruited outside of VA settings. The purpose of the larger study was to learn more about recently discharged veterans' mental health and substance use behaviors to inform future intervention content. Eligibility criteria were (1) age 18 to 40 and (2) separation from the Air Force, Army, Marine Corps, or Navy. Participants were excluded if they were active duty or in the reserve or guard components of the US armed forces. Ads were displayed on Facebook, Instagram, and veteran-specific social media sites (RallyPoint, We Are The Mighty) for 8 days. Veterans were directed to a secure study website that hosted an online consent form and survey. Once consented, participants completed a 30-minute online survey and received a $20 Amazon gift card. The flow of recruitment and retention of the sample can be found in Fig. 1. In total, 5,776 individuals clicked on ads and reached the online consent form, of which 2,750 (48%) did not pursue participation. An additional 94 (2%) were screened and found to be ineligible (i.e., over age 40, not a US veteran), and 1,077 (19%) attempted to access the study once we had reached an IRB-approved quota to prevent new participants from filling out the survey.
The remaining 1,855 individuals (32%) consented and completed the survey. Of those, 625 (34%) failed internal validation checks, such as not endorsing consistent responses between items (e.g., rank, branch, and paygrade needed to match), completing the survey in an impossible amount of time, completing the survey more than once (e.g., reviewing IP addresses for duplicates), or failing to answer test items correctly that assessed for careless responding (e.g., asking participants to endorse a specific value to check for attention). In August 2020, 6 months after the first survey, the 1,230 participants who completed the February 2020 survey (Time 1) were sent an email invitation to complete a 30-minute follow-up survey (Time 2) about "their experiences and reactions to COVID-19." Of these, 1,031 consented to the study and completed the Time 2 survey (84% of the final Time 1 sample). Nine participants failed internal validation checks and were removed from the Time 2 sample, leaving a sample of 1,025 that completed both surveys. Demographics and Military Characteristics Participants reported on their age, race/ethnicity, gender, and branch of service. Participants also filled out a measure of combat exposure using 11 items from prior work with veterans (e.g., witnessing an accident resulting in serious injury or death; engaging in hand-to-hand combat; Schell & Marshall, 2008) and an additional item of ever feeling like they were in great danger of being killed. Participants responded to each of the 12 items with Yes or No, and participants with any Yes response were coded as having combat exposure. Reactions to COVID-19 On the Time 2 survey, we included 13 items modified from prior work (JHSPH, 2020) related to emotional, stress, sleep, and relationship reactions to the pandemic. Participants were asked to rate the 13 items from 0 "not at all" to 4 "a great deal" for two separate time periods: the first 3 months of the pandemic in the USA (March, April, May 2020) and the past 3 months (June, July, August 2020). In an effort to create a composite score for each time period, we used an exploratory factor analysis and concluded the items fell into a single factor at each time period. We then dropped four items from the scale with a factor loading of less than 0.35. This gave us a 9-item scale. These nine items fit the data well (CFI = 0.97, RMSEA = 0.02, SRMR = 0.02 for the first 3 months; CFI = 0.97, RMSEA = 0.03, SRMR = 0.02 for the past 3 months). We took the mean of the 9 items at each time period to create a composite score. Posttraumatic Stress Disorder PTSD symptom severity at Time 1 was assessed using the 20-item Posttraumatic Stress Disorder Checklist for DSM-V (PCL-5; Bovin et al., 2016), where participants indicated how bothered they were by 20 symptoms of PTSD in the past month in relation to a stressful experience (e.g., natural disaster, combat, sexual assault) from not at all (0) to extremely (4). The PCL-5 yields a total sum score from 0 to 80 and was reliable in the current sample (α = 0.96). Using the cutoff score of 33 (Bovin et al., 2016), participants who scored at or above this threshold were classified as having a positive screen for possible PTSD at Time 1.
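The PCL-5 screening rule described above is simple enough to express directly. Here is a minimal Python sketch (the function name and the example respondent are illustrative, not from the study) that sums the 20 items, each scored 0-4, and flags a positive screen at the conventional cutoff of 33 (Bovin et al., 2016).

```python
def pcl5_screen(item_scores, cutoff=33):
    """Sum the 20 PCL-5 items (each scored 0-4) and flag a positive
    screen at the conventional cutoff of 33."""
    if len(item_scores) != 20 or not all(0 <= s <= 4 for s in item_scores):
        raise ValueError("PCL-5 requires 20 items scored 0-4")
    total = sum(item_scores)          # total ranges 0-80
    return total, total >= cutoff

total, positive = pcl5_screen([2] * 20)  # hypothetical respondent
print(total, positive)                   # 40 True
```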
Substance Use At Time 1 and Time 2, participants completed items for past 30-day alcohol use: days of any alcohol use, days of alcohol use with binge drinking (i.e., 4 or more drinks on a drinking occasion for females, 5 or more drinks for males), number of drinks consumed on a typical drinking occasion, and the number of drinks consumed on the occasion when the participant drank the most (max drinks). At Time 1, participants reported whether or not they had used cannabis in their lifetime, and if so, they were asked how many days in the past 30 days they had used cannabis in any form (e.g., smoking, vaping, edibles). At Time 2, participants reported on any cannabis use in the past 6 months (since the Time 1 survey), and if they reported any use, they indicated how many days in the past 30 days they used cannabis. Analytic Plan The analytic plan was guided by the four aims of the paper. First, we used paired samples t-tests to compare means of the reactions to COVID-19 measure during the first 3 months of the pandemic (March to May 2020) to the past 3 months of the pandemic (June to August 2020). We report Cohen's d effect sizes (Cohen, 1992) to describe the relative magnitude of change. We then used paired samples t-tests to compare means for past 30-day substance use outcomes: drinking days and binge drinking days for all participants, average drinks and max drinks per occasion for past-30-day drinkers, and cannabis use frequency among all participants and among cannabis users only (i.e., those who reported any cannabis use at either time point). A chi-square test was used to examine whether lifetime cannabis non-users at Time 1 began using cannabis during the past 6 months of the pandemic at Time 2. Third, we used logistic and linear regression models that controlled for demographic and military characteristics to examine if individuals who screened positive for PTSD at Time 1 reported greater substance use at Time 2 compared to those without a positive PTSD screen. Lastly, we ran a series of linear regression models with each of the reactions to COVID-19 composite scores separately (i.e., first 3 months and then past 3 months of the pandemic) to determine if poorer reactions moderated the relationship between PTSD and substance use and explored whether those moderators varied by substance or time period. Significant interactions were plotted for interpretation using +1 and −1 standard deviations from the mean for all continuous measures. All continuous variables were grand mean centered to facilitate interpretation. Table 1 contains a description of the sample. Participants reported a mean age of 34.6 (SD = 3.5), and the majority were white (82.9%) and of male gender (89.5%). Most participants were veterans of the Army (70.4%), and nearly all had experienced some combat (96.1%). Nearly one-third met criteria for possible PTSD (31.2%). Reactions to COVID-19 Individual means for each of the reactions to COVID-19 items are included in Table 2. During the months of June through August 2020 (past 3 months of the pandemic), participants reported higher means for the reactions to COVID-19 composite, as well as for the individual items, compared to the first 3 months of the pandemic (March to May 2020), with effect sizes ranging from 0.18 to 0.55 (see Table 2).
Participants who screened positive for PTSD at Time 1 reported significantly higher composite scores on the reactions to COVID-19 measure for the first 3 months of the pandemic (M = 1.30, SD = 0.48) compared to those who did not screen positive for PTSD (M = 0.99, SD = 0.40), t(1020) = 10.91, p < .001, Cohen's d = 0.73. Participants who screened positive for PTSD at Time 1 also reported significantly higher composite scores on the reactions to COVID-19 measure for the past 3 months of the pandemic (M = 1.73, SD = 0.66) compared to those who did not screen positive for PTSD (M = 1.21, SD = 0.54), t(1020) = 12.42, p < .001, Cohen's d = 0.89. Alcohol Use Means and standard deviations for alcohol use outcomes are presented in Table 3. Participants reported significantly decreasing their drinking days and binge drinking days. Past-30-day drinkers significantly decreased their average drinks per occasion and max drinks on one occasion (Fig. 2). Cannabis Use Among the full sample, 19.5% of participants (n = 200) at Time 1 reported any use of cannabis in their lifetime, while at Time 2, 23.5% of the sample (n = 241) reported use during the past 6 months. Nearly 14% of those who reported no lifetime use of cannabis at Time 1 reported use of cannabis within the past 6 months at Time 2 (see Table 3). There were no significant differences in the mean number of cannabis days from Time 1 to Time 2 for all participants and for lifetime users. For the full sample, a positive PTSD screen at Time 1 was associated with 3.82 times greater odds (95% CI [2.45, 5.95]; a ~282% increase) of using cannabis in the past 6 months at Time 2, after controlling for age, race/ethnicity, combat severity, and lifetime cannabis use. For those who reported cannabis use at either time point (n = 314), a positive PTSD screen at Time 1 was associated with greater frequency of cannabis use (β = 1.35, SE = 0.65, p = .040; b = 0.11) at Time 2 (see Fig. 3). PTSD and Reactions to COVID-19 Interactions In the models for number of past month drinking days, there was a significant interaction for PTSD and reactions to COVID-19 in the first 3 months of the pandemic, such that those who screened positive for PTSD and reported high (+1 SD) poor reactions to COVID-19 reported drinking the most frequently (see Fig. 4). This difference resulted in over three additional alcohol use days for those with PTSD and high poor reactions to COVID compared to veterans who also screened positive for PTSD but reported relatively low poor reactions to COVID-19. Simple slopes analyses revealed significant slopes for both high (slope gradient = 4.05, t = 7.53, p < 0.001) and low poor reactions to the pandemic (slope gradient = 2.15, t = 3.39, p = 0.001), with a steeper slope for those with higher poor reactions. There was no interaction effect for the past 3 months of the pandemic. Similarly, in the models for average drinks per occasion, there was a significant interaction for PTSD and reactions to COVID-19 in the first 3 months of the pandemic, such that those who screened positive for PTSD reported drinking the most number of drinks if they also reported experiencing higher poor reactions to COVID-19 (see Fig. 5). Simple slopes analyses revealed a significant slope for higher poor reactions to the pandemic (slope gradient = 0.55, t = 2.49, p = 0.013) but not lower poor reactions to the pandemic (p = 0.888). Again, there was no interaction effect for the past 3 months of the pandemic.
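A minimal Python sketch of the moderation analysis described in the Analytic Plan, assuming simulated data and hypothetical variable names (the study's actual dataset and variables are not available): the moderator is grand-mean centered, the interaction model is fit with statsmodels, and simple slopes are probed at ±1 SD of the moderator, mirroring the approach reported above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "ptsd": rng.integers(0, 2, n),                  # positive screen at Time 1
    "reactions": rng.normal(1.2, 0.5, n),           # COVID reactions composite
    "drink_days": rng.poisson(6, n).astype(float),  # past-30-day drinking days
})
# grand-mean center the continuous moderator to ease interpretation
df["reactions_c"] = df["reactions"] - df["reactions"].mean()

model = smf.ols("drink_days ~ ptsd * reactions_c", data=df).fit()
print(model.params)

# Simple slopes of the PTSD effect at +/- 1 SD of the centered moderator
sd = df["reactions_c"].std()
b = model.params
for label, x in [("-1 SD", -sd), ("+1 SD", +sd)]:
    slope = b["ptsd"] + b["ptsd:reactions_c"] * x
    print(label, round(slope, 2))
```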
There were no significant interactions for binge drinking days, max drinks, or cannabis use days. Discussion In this study, we assessed how American veterans, a group at risk for experiencing PTSD prior to the pandemic, fared in terms of their reactions to the pandemic and substance use behaviors during the initial months after COVID-19 was declared a national emergency in the USA. In a sample of 1,025 veterans, participants reported increasingly poorer reactions throughout the first 6 months of the pandemic, including greater feelings of anxiety, depression, and stress; less sleep than typical; and suffering family and social relationships. Those veterans screening positive for PTSD prior to the pandemic reported poorer reactions to COVID-19 than those without positive PTSD screens. This fits with prior literature from multiple countries, including studies in North America, Europe, Australia, and Asia, demonstrating that individuals with pre-existing mental health conditions may have a harder time coping with the pandemic (Alonzi et al., 2020; Ettman et al., 2020; Neill et al., 2020; Rajkumar, 2020; Varga et al., 2021; Wardell et al., 2020). In our study, poor reactions to the pandemic tended to increase from the first 3 months to the next 3 months of the initial outbreak in the USA; thus, it will be important to assess veterans for increased poor reactions as the pandemic continues to help inform prevention and intervention efforts with this group. Regarding substance use, veterans generally reported lower drinking levels than pre-pandemic levels. This is inconsistent with prior work in the general global population, as studies have generally found significant increases in drinking, although these increases have tended to be small (e.g., average increase of less than 0.2 days of binge drinking in Pollard et al., 2020) or based on cross-sectional retrospective reports (e.g., asking participants if their drinking changed over the pandemic rather than assessing at a pre-pandemic time point as in Kilian et al., 2021). Given the stay-at-home orders in many US states, it may be that American veterans were drinking less due to limited social engagement with friends outside their homes, such as in bars or at sporting events. However, consistent with the idea that individuals may use alcohol to cope with pre-pandemic mental health problems, veterans who screened positive for PTSD prior to the pandemic reported greater frequency (overall days, binge drinking days) and quantity (average amount, maximum amount on one occasion) of alcohol use than those veterans without positive PTSD screens. Though we did not assess reasons for veterans' use of substances, prior work has shown an increase in using substances to cope with pandemic stressors (Czeisler et al., 2020; Ornell et al., 2020). For cannabis use, an interesting pattern emerged. First, there was no significant increase in days of cannabis use, which fits with some prior work on American and Belgian adults during the first few months of the pandemic (Sharma et al., 2020; Vanderbruggen et al., 2020). However, a substantial proportion of veterans in our study (14%) who had never used cannabis (i.e., no lifetime use at Time 1) reported using cannabis during the first 6 months of the pandemic.
Veterans may have begun using cannabis during the pandemic simply due to availability; cannabis has become increasingly more available for recreational sale and possession, and multiple states determined that cannabis outlets could remain open during the lockdown, with reports of increased sales during the first 3 months of 2020 (Groshkova et al., 2020). Alternatively, for veterans with PTSD, motives for cannabis use have been associated with enjoyment, coping, and use as a sleep aid (Metrik et al., 2018; Metrik et al., 2016); and given the pandemic's effect on bars closing, stay-at-home orders, and social distancing, some veterans with PTSD may have shown greater inclination toward the initiation of cannabis use during this period. Moreover, veterans in our sample with pre-pandemic positive PTSD screens reported greater odds of cannabis use during the pandemic and greater frequency of use, which fits with prior work showing cannabis initiation as a reaction to stressful life events (Hyman & Sinha, 2009; Kevorkian et al., 2015). Veterans with PTSD are also more likely to use cannabis to cope than those without PTSD (Boden et al., 2013). Veterans with pre-pandemic PTSD may require special attention to prevent heavy drinking and cannabis use during and after the pandemic, especially considering the detrimental physical and behavioral health effects that may occur with co-occurring disorders during the COVID-19 pandemic. Veterans with positive PTSD screens who reported poorer reactions to COVID-19 during March through May 2020 reported the most frequent drinking and the greatest quantity of drinks per occasion during the pandemic. Cumulative risk perspectives posit that various risks can co-occur and accumulate (Evans et al., 2013), leading to a variety of undesirable or negative outcomes such as substance use disorders (Green et al., 2010). Unfortunately, veterans already face unique context-specific risks related to their deployment and subsequent reintegration into civilian life (Derefinko et al., 2018), which can be compounded by COVID-19 pandemic stressors like financial insecurity/job loss, social isolation, and relationship difficulties. Such compounding effects may lead to greater drinking for those already impacted by PTSD. There were no significant interactions between PTSD and poor reactions to the pandemic during the second 3 months (June-August), suggesting that during the initial months of the pandemic in the USA, poor reactions were more impactful on drinking among those with PTSD than during the later months of the pandemic. Though the general population has been affected by COVID-19 and its aftermath, American veterans with PTSD are a unique group likely requiring targeted outreach and intervention efforts to assist them with stress, mental health symptoms, and substance use during and after the pandemic. Researchers and clinicians from the USA, Australia, Canada, England, and the Netherlands have joined together to outline several steps that can be taken to help veterans during the pandemic, including promoting and using telehealth options to provide behavioral health services to veterans, increasing financial supports and long-term investments in suicide prevention programs, targeting those with pre-existing behavioral health conditions that may be at increased risk, and assisting those healthcare workers treating veterans (McFarlane, Jetly, Castro, Greenberg, & Vermetten, 2020).
The Veterans Affairs Healthcare System has ramped up telehealth efforts for veterans (Connolly et al., 2021), which has made access to care easier for many veterans. Despite this, rates of care initiation declined during the initial months of the pandemic for VA veterans, making it essential to continue to reach out to veterans in need through targeted outreach campaigns aiming to increase behavioral healthcare enrollment for those with PTSD in particular. Limitations Strengths include a large sample of veterans and two assessment time points, with one immediately prior to the outbreak of COVID-19 and another 6 months later. Limitations include the use of self-report data, which for substance use has been shown to be valid (Simons et al., 2015), but underreporting/overreporting or errors due to retrospective recall may have been present. The sample was also restricted to young adults, and generalizability to older veterans may be limited. In addition, we attempted to capture reactions to COVID-19 using a limited-burden measure that encompassed emotional, stress, and relationship reactions to the pandemic. In doing so, we may have missed other aspects of COVID-19 reactions. We also asked about COVID-19 reactions during the first 3 months of the pandemic and about the 3 months after that, at the same assessment point (Time 2). Although the salience of COVID-19 likely made it easy to recall how much one was impacted by the pandemic during the initial versus the latter months of the pandemic, this is nonetheless a limitation. Future work should consider reactions to COVID-19 in the months and years following the outbreak to determine whether these negative reactions continue to increase or at some point begin to reduce. Additionally, our sample consisted of predominantly white, male-identifying individuals, and we may have missed important effects within veteran populations in terms of gender, race, and/or ethnic identity. Others have noted the physical and behavioral health burdens that the COVID-19 pandemic poses on minority populations in particular (Egede & Walker, 2020; Gravlee, 2020; Gray et al., 2020). Conclusions Results from this study suggest that veterans struggling with PTSD prior to the COVID-19 pandemic may be at particularly high risk for difficulties managing COVID-related stress. Clinicians working with veterans may see increases in substance use and coping-related substance behaviors among veterans with PTSD during and after the pandemic, which may require specific treatments targeting co-occurring symptoms of PTSD and substance use disorders. Outreach efforts may be necessary to reach those with PTSD to bring them in for care to address these issues and prepare veterans to use alternative coping strategies to manage stressful reactions during this unprecedented time.
2021-08-27T13:57:12.886Z
2021-08-26T00:00:00.000
{ "year": 2021, "sha1": "9c4780477aaf4859b4889b54691f51d2c340d1aa", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s11469-021-00620-0.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "9c4780477aaf4859b4889b54691f51d2c340d1aa", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
218629844
pes2o/s2orc
v3-fos-license
Clinical and endoscopic characteristics of eosinophilic esophagitis in Japan: a case-control study Background Eosinophilic esophagitis (EoE) is an allergy-associated clinicopathologic condition gaining an increasing amount of recognition in various areas of the world. While the clinical definition and characteristics may differ depending on country and region, sufficient studies have not yet been performed in Japan. Objective To assess the prevalence of EoE among the Japanese population and the clinical features associated with the disease. Methods Endoscopic data from January 2012 to October 2018 were gathered from 9 Japanese clinical institutes. EoE, defined as esophageal mucosal eosinophilia of at least 15 eosinophils per high-power field, was determined based on esophageal biopsies. Clinical and endoscopic patterns in the cases with EoE were investigated and compared with 186 age- and sex-matched controls. Results From 130,013 upper endoscopic examinations, 66 cases of EoE were identified (0.051%; mean age, 45.2 years [range, 7–79 years]; 45 males). Twenty-five patients (37.9%) with EoE were diagnosed by endoscopy during a medical check-up. Patients with EoE had more symptoms (69.7% vs. 10.8%, p < 0.01) such as dysphagia and food impaction, and more allergies (65.2% vs. 23.7%, p < 0.01) compared with the controls. The prevalence of atrophic gastritis was lower in EoE patients than in the controls (20.0% vs. 33.3%, p < 0.05). Conclusion The prevalence of EoE in the Japanese population was 0.051%, which was comparable with previous reports in Japan. History of allergies and the absence of atrophic gastritis were associated with EoE. INTRODUCTION Eosinophilic esophagitis (EoE) is an allergy-associated chronic inflammatory condition characterized by the accumulation of eosinophils in the esophagus [1]. This illness can cause obstruction-related symptoms, such as dysphagia and food impaction. The etiology of EoE is still being clarified, and food allergies may contribute to its pathogenesis [2]. It is well described that several risk factors are associated with the development of this disease, which include sex (male predominance), race (Caucasian), and other allergic conditions (asthma, seasonal rhinitis, and atopic dermatitis) [3]. The incidence and prevalence of EoE have been increasing, and its clinical features have been extensively investigated, especially in Western countries [4]. On the other hand, in Japan, EoE has been poorly acknowledged by general clinicians. The first case of EoE in Japan was reported in English in 2006 [5], although another group reported a case of EoE in the Japanese literature in 1998 [6]. Over the past decade, however, it has become recognized that the prevalence of EoE is increasing in Japan [7,8]. A prospective, multicenter study in 2011 reported that the prevalence of EoE in Japan was 0.02% [9]. Of note, the prevalence appears to be rapidly increasing, and a more recent report indicated the prevalence to be nearly 0.4% in the Japanese population [10]. The low prevalence of EoE in Japan may be in part due to a lack of awareness of the illness. Further study is still needed to determine the exact prevalence of EoE in Japan. Therefore, in the present study, we assessed the prevalence of EoE in our related clinical institutes in Japan, as well as the clinical features associated with the disease. MATERIALS AND METHODS We conducted a multicenter, retrospective case-control study.
A search was made of endoscopic databases in 9 Japanese clinical institutes to identify all diagnosed cases during the period from January 2012 to October 2018. The clinical institutes comprised 1 university hospital, 5 general hospitals, and 3 clinics, which were located in Hiroshima, Okayama, Kagawa, Shizuoka, and Tokyo in Japan. In many previous studies, EoE was diagnosed by international consensus criteria which include symptoms of esophageal dysfunction and esophageal biopsy findings [11,12]. However, in the present study, to avoid underestimating the prevalence of EoE in Japan, we defined a diagnosis of EoE solely based on a biopsy finding of esophageal mucosal eosinophilia of at least 15 eosinophils per high-power field. Patients with neoplasia, peptic ulcers, inflammatory bowel disease, fungal esophagitis, eosinophilic gastroenteritis, and other systemic diseases with eosinophilia were excluded. Ultimately, 66 cases with EoE were diagnosed for further analysis, and a total of 186 age- and sex-matched controls without EoE were chosen. The clinical characteristics and endoscopic patterns were investigated. This study was approved by the institutional ethics committee (no. 137) at Public Mitsugi General Hospital. Data are presented as the number of cases and percentages for categorical data. Statistical analysis for categorical data was performed using the Pearson chi-squared test, and the t test for unpaired quantitative data. A p value of <0.05 was considered statistically significant. RESULTS We found a total of 66 cases (0.051%) of EoE among 130,013 upper endoscopic examinations. A total of 186 age- and sex-matched controls without EoE were recruited from the same sampling frame. The 2 groups, subjects with and without EoE, were not different in mean age (45.2 years vs. 47.1 years) or male percentage (68.2% vs. 67.7%) (Table 1). Our study included 3 patients who were 15 years old or younger. They all had some kind of allergy, including food allergies, atopic dermatitis, or bronchial asthma. All patients less than 30 years old (9 of 66, 13.6%) had allergic diseases. The endoscopic characteristics of EoE have been described previously, such as linear furrows, concentric rings, and whitish exudates [7,13]. In this study, linear furrows, esophageal rings, and whitish exudates were observed in 87.9%, 81.8%, and 63.6% of patients with EoE, respectively (Table 2). In contrast, such findings were rarely seen in the control group. Representative endoscopic and histological findings of patients with EoE are shown in Fig. 1. There was no difference in the frequency of reflux esophagitis in the EoE and control groups (24.2% vs. 18.3%). A recent study has suggested that Helicobacter pylori infection is inversely associated with EoE [14]. In the present study, atrophic change in the gastric mucosa, suggesting possible H. pylori infection, was found less frequently in the EoE group (20.0% vs. 33.3%, p < 0.05). We further analyzed the treatment and prognosis of 55 out of the 66 patients with EoE. Thirty-two patients (78.0%) received proton pump inhibitors with or without swallowed topical corticosteroids. During the follow-up period (mean, 23 months), no patient worsened with regard to clinical or endoscopic findings. Of the 14 asymptomatic patients analyzed, 6 were treated with proton pump inhibitors or H2-blockers, and 8 were followed with yearly endoscopy without any treatment.
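As a quick check on the headline figure, here is a small Python sketch (illustrative, not part of the study; it assumes statsmodels is available) that reproduces the reported prevalence of 66 cases among 130,013 examinations and attaches a Wilson 95% confidence interval, which behaves well for proportions this small.

```python
from statsmodels.stats.proportion import proportion_confint

cases, exams = 66, 130_013            # figures reported in the study
prevalence = cases / exams
lo, hi = proportion_confint(cases, exams, alpha=0.05, method="wilson")
print(f"prevalence = {prevalence:.4%} (95% CI {lo:.4%} - {hi:.4%})")
```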
DISCUSSION In the present study, we found a total of 66 cases (0.051%) of EoE out of 130,013 upper endoscopic examinations during the period from 2012 to 2018 in Japan. Although the exact reasons are still unclear, EoE is a much rarer disease in Japan compared with Western countries [15]. However, since the recognition of EoE in Japan, the prevalence of this disease has been increasing over the last decade, presumably owing to a higher awareness among endoscopic physicians. The prevalence of EoE in Japan is 0.02%-0.4% according to reports published from 2011 to 2018 [9,10,16]. Therefore, the prevalence of EoE in our study population is within the same range as the previous reports. To avoid underestimating the prevalence of EoE in Japan in this study, we defined a diagnosis of EoE solely based on a biopsy finding of esophageal mucosal eosinophilia of at least 15 eosinophils per high-power field. This allowed us to count asymptomatic and presumably relatively mild cases of EoE. Indeed, 30.3% of the EoE cases in this study were asymptomatic (Table 1). Even including asymptomatic cases, the prevalence of EoE was found to be just 0.051%, which indicates that this illness is still rare in Japan. A similar epidemiologic study was recently published by another Japanese research group [16]. In that study, they diagnosed EoE based on esophageal biopsies and reported that the prevalence was 0.20%, which was higher than in our study. However, they found 17 cases of EoE among only 8,589 upper endoscopic examinations in just a single clinical institute. The present study analyzed a larger number of cases (130,013 cases) from 9 clinical institutes, which showed that the prevalence of EoE is 0.051% in Japan. A previous report on the clinical characteristics of Japanese EoE patients indicated the male/female ratio was 3.3:1, with a male preponderance [7]. In the present study, the male/female ratio for EoE was 2.14:1, again showing a male predominance, although less pronounced. Consistent with previous reports, approximately 60%-70% of patients had some history of allergic diseases. A history of smoking was less frequent in the EoE group in our study, which is also supported by a previous study [17]. Although the mechanism underlying the lower frequency of smoking in EoE is unclear, it could be postulated that nicotine might affect either the mucosal infiltration or the function of eosinophils. Of interest, it has been reported that nicotine has a positive influence on some inflammatory diseases, including ulcerative colitis [18]. Smoking may suppress the onset or ameliorate the progression of EoE through the anti-inflammatory effects of nicotine. Of note, a previous study suggested an inverse association between H. pylori infection and EoE [14]. A subsequent case-control study also showed the possible influence of H. pylori infection on EoE in Japanese patients [19]. In the present study, atrophic changes in the gastric mucosa, suggesting possible H. pylori infection [20], were seen in 20.0% of the patients with EoE, which was significantly less than in the age- and sex-matched controls (33.3%). However, the difference is too small to be conclusive. It is still unclear whether exposure to H. pylori infection has a protective role like that in bronchial asthma [21,22]. Further studies involving a large population should be performed to determine the association between H. pylori infection and allergic disorders, including EoE.
In conclusion, the present study identified 66 Japanese cases (0.051%) of EoE from 130,013 upper endoscopic examinations. A history of allergies and the absence of atrophic gastritis were associated with EoE. EoE is a disease that can cause dysphagia through obstruction of the esophagus. In our study, however, 30% of the patients were asymptomatic and the rest had only mild symptoms. A long-term follow-up study is required to clarify the prognosis.
2020-04-30T09:10:59.106Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "64393f62078a07c875cfbb540fea4781d5615787", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5415/apallergy.2020.10.e16", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "18707b7cd04f8d1e81ad1e9114b186908639f3eb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119293068
pes2o/s2orc
v3-fos-license
A semiclassical collective response of heated, asymmetric and rotating nuclei

The Landau Fermi-liquid and extended Gutzwiller periodic-orbit theories are presented for the semiclassical description of collective excitations in nuclei, which are close to the main topics of the fruitful activity of S.T. Belyaev. Static susceptibilities show the ergodicity of Fermi liquids. Transport coefficients (nuclear friction and inertia) as functions of the temperature for the hydrodynamic and zero-sound modes are derived within the response theory by using the Fermi-liquid droplet model, in agreement with the shell model at large temperatures. The surface symmetry binding-energy constants are obtained as functions of the Skyrme force parameters in the approximation of a sharp-edged proton-neutron asymmetric nucleus. The energies and sum rules of the isovector dipole giant resonances are in fairly good agreement with the experimental data. An analysis of the specific structure of these resonances in terms of a main and a satellite peak, in comparison with the experimental data and microscopic theoretical models, might turn out to be of importance for a better understanding of the values of the surface symmetry-energy constant. The semiclassical collective moment of inertia is derived analytically, beyond the quantum perturbation approximation of the cranking model, for any potential well as a mean field. It is shown that this moment of inertia can be approximated by its rigid-body value for the rotation with a given frequency within the ETF and more general periodic-orbit theories in the nearly local long wave-length approximation. Its semiclassical shell-structure components are derived in terms of the periodic-orbit free-energy shell corrections. We obtained good agreement between the semiclassical and quantum shell-structure components of the moment of inertia for several critical bifurcation deformations for the harmonic-oscillator mean field.

For the nuclear collective excitations within the general response-function theory [4,23,24], the basic idea is to parametrize the complex dynamical problem of the collective motion of many strongly interacting particles in terms of a few collective variables, chosen according to the physical meaning of the considered dynamical problem, for example the nuclear surface itself [25][26][27] or its multipole deformations [4]. One can then study the response to an external field of the dynamical quantities describing the nuclear collective motion in terms of these variables. In this way, one gets important information on the transport properties of nuclei. For such a theoretical description of the collective motion, it is very important to take into account the temperature dependence of the dissipative nuclear characteristics, such as the friction coefficient, as shown in [24,[28][29][30]. The friction depends strongly on the temperature, and its temperature dependence can therefore not be ignored in the description of the collective excitations in nuclei. Concerning the temperature dependence of the nuclear friction, one of the most important problems is related to the properties of the static susceptibilities and to the ergodicity of Fermi systems such as nuclei. However, the quantum description of dissipative phenomena in nuclei is rather complicated, because one has to take into account the residual interactions beyond the mean-field approximation.
Therefore, simpler models [26,[31][32][33] accounting for some macroscopic properties of the many-body Fermi system are helpful to understand the global average properties of the collective motion. Such a model is based on the Landau Fermi-liquid theory [34][35][36], applied to the nuclear interior with simple macroscopic boundary conditions on the nuclear surface [26,27,33,[37][38][39][40] (see also macroscopic approaches with different boundary conditions [41][42][43][44][45]). In [32], the response-function theory was applied to describe collective nuclear excitations such as the isoscalar quadrupole mode. The transport coefficients, such as friction and inertia, are simply calculated within the macroscopic Fermi-liquid droplet model (FLDM) [31][32][33], and their temperature dependence can be clearly discussed (see also the earlier works [27,37,[46][47][48][49]). The asymmetry of heavy nuclei near their stability line and the structure of the isovector dipole resonances are studied in [33,[50][51][52] (see also [53,54]). In this way, the giant multipole resonances were described and, with increasing temperature, a transition from zero-sound modes to the hydrodynamic first sound was found [31,32]. The friction in [31,32] is due to the collisions of particles, which were taken into account in the relaxation-time approximation [35,36,[55][56][57][58] with a temperature and frequency dependence (retardation effects) [31,34]. The most important results obtained in [32,59] are related to the overdamped surface excitation mode in the low-energy region and its dissipative characteristics, such as friction. For the low excitation-energy region, these investigations can be complemented by additional sources of friction related to a more precise description of heated Fermi liquids, presented in [57,58] for infinite matter. Following [57], we should take into account the thermodynamic relations along with the dynamical Landau-Vlasov equation and introduce the local-equilibrium distribution, instead of the global-static one used earlier in [32,59], for the linearization procedure of this equation. These new developments of the Landau theory are especially important for further investigations of the temperature dependence of the friction. As a first step, we have to work out in more detail the theory [57] of heated Fermi liquids for nuclear matter, and then apply it to the dynamical description of the collective motion in the interior of nuclei within the macroscopic FLDM [31,32]. Our purpose is also to find the relations to some general points of the response-function theory and to clarify them using the example of an analytically solvable model based on the non-trivial temperature-dependent Fermi-liquid theory. Among the most important questions to clarify are the above-mentioned ergodicity property and the temperature dependence of the friction and of the coupling constant. Another important extension of this macroscopic theory is to study the structure of the isovector giant dipole resonance (IVGDR) as a splitting phenomenon due to the nuclear symmetry interaction between neutrons and protons near the stability line [33,40,[50][51][52][53][54]. The neutron skin of exotic nuclei with a large excess of neutrons is also still one of the exciting subjects of nuclear physics and nuclear astrophysics [2,[60][61][62][63][64][65][66][67][68][69].
Simple and accurate solutions for the isovector particle-density distributions were obtained within the nuclear effective surface (ES) approximation [25-27, 39, 40]. It exploits the saturation of nuclear matter and the narrow diffuse-edge region in finite heavy nuclei. The ES is defined as the location of points of the maximum density gradient. The coordinate system, connected locally with the ES, is specified by the distance from the given point to the surface and by tangent coordinates at the ES. The variational condition for the nuclear energy with some additional fixed integrals of motion in the local energy-density theory [70,71] is significantly simplified in these coordinates. In particular, in the extended Thomas-Fermi (ETF) approach [72,73] (with Skyrme forces [74][75][76][77][78][79]) this can be done for any deformation by using an expansion in a small leptodermic parameter. The latter is of the order of the ratio of the diffuse-edge thickness of a sufficiently heavy nucleus to its mean curvature radius, i.e., of the inverse of the number of nucleons to the power one third, under the distortion constraint in the case of deformed nuclei (a short numeric illustration is given at the end of this passage). The accuracy of the ES approximation in the ETF approach without spin-orbit (SO) and asymmetry terms was checked [27] by comparing results of Hartree-Fock (HF) [80,81] and ETF calculations [72,73] for some Skyrme forces. The ES approach (ESA) [25][26][27] was then extended by taking SO and asymmetry effects into account [39,40]. Solutions for the isoscalar and isovector particle densities and energies at quasi-equilibrium in the ESA of the ETF approach were applied to analytical calculations of the neutron skin and isovector stiffness coefficients in the leading order of the leptodermic parameter, and to the derivation of the macroscopic boundary conditions [40]. Our results are compared with the fundamental studies [2,[60][61][62] in the liquid droplet model (LDM). These analytical expressions for the energy surface constants can be used for IVGDR calculations within the FLDM [33,[49][50][51][52]. A further interesting application of the semiclassical response theory consists in the study of the properties of collective rotational bands in heavy deformed nuclei. One may consider nuclear collective rotations within the cranking model as a response to the Coriolis external-field perturbation. The moment of inertia (MI) can be calculated as a susceptibility with respect to this external field. The rotation frequency of the rotating Fermi system in the cranking model is determined, for a given nuclear angular momentum, through a constraint, as for any other integral of motion, in particular the particle-number conservation [81]. In order to simplify such a rather complicated problem, the Strutinsky shell-correction method (SCM) [3,82] was adapted to collective nuclear rotations in [5,15]. The collective MI is expressed, as a function of the particle number and temperature, in terms of a smooth part and an oscillating shell correction. The smooth component can be described by a suitable macroscopic model, like the dynamical ETF approach [72,73,[83][84][85][86][87][88] similar to the FLDM, which has proven to be both simple and precise. For the definition of the MI shell correction, one can apply the Strutinsky averaging procedure to the single-particle (s.p.) MI, in the same way as for the well-known free-energy shell correction.
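Returning to the leptodermic expansion parameter introduced above, a short numeric sketch assuming the textbook values a ≈ 0.55 fm for the diffuse-edge thickness and R = r₀A^{1/3} with r₀ ≈ 1.2 fm; these numbers are illustrative and are not taken from the cited references.

```python
# Order-of-magnitude estimate of the leptodermic (small) parameter a/R,
# with ASSUMED textbook values a ~ 0.55 fm and r0 ~ 1.2 fm.
a = 0.55   # fm, diffuse-edge thickness scale (assumed)
r0 = 1.2   # fm, nuclear radius constant (assumed)

for A in (40, 120, 208):
    R = r0 * A ** (1 / 3)        # mean radius R = r0 * A^(1/3)
    print(f"A = {A:3d}: R = {R:5.2f} fm, a/R = {a / R:.3f}  (~ A^(-1/3))")
```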
For a deeper understanding of the quantum results and of the correspondence between classical and quantum physics of the MI shell components, it is worthwhile to analyze these shell components in terms of periodic orbits (POs), which is now well established as the semiclassical periodic-orbit theory (POT) [73,[89][90][91][92][93][94] (see also its extension to a given angular-momentum projection along with the energy of the particle [95], and to the particle densities [96,97] and pairing correlations [97]). Gutzwiller was the first to develop the POT for completely chaotic Hamiltonians with only one integral of motion (the particle energy) [89]. The Gutzwiller approach to the POT, extended to potentials with continuous symmetries for the description of the nuclear shell structure, can be found in [73,91,93,98]. The semiclassical shell-structure corrections to the level density and energy have been tested for a large number of s.p. Hamiltonians in two and three dimensions (see, for instance, [73,[99][100][101][102][103][104][105]). For the Fermi gas, the entropy shell corrections of the POT as a sum over periodic orbits were derived in [91], and with their help, simple analytical expressions for the shell-structure energies in cold nuclei were obtained there, following the general semiclassical theory [73]. These energy shell corrections are in good agreement with the quantum SCM results, for instance for elliptic and spheroidal cavities, including the superdeformed bifurcation region [100,102]. In particular, in three dimensions, the superdeformed bifurcation structure leads, as a function of deformation, to a double-humped shell-structure energy with first and second potential wells in sufficiently heavy nuclei [73,94,98,102,104], which is well known as the double-humped fission barriers in the region of actinide nuclei. At large deformations, the second well can be understood semiclassically, for spheroidal-type shapes, through the bifurcation of equatorial orbits into equatorial and the shortest three-dimensional periodic orbits, because of the enhancement of the POT amplitudes of the shell correction to the level density near the Fermi surface at these bifurcation deformations. For finite heated fermionic systems, it was also shown [73,91,97,[106][107][108] within the POT that the shell structure of the entropy, the thermodynamical (grand-canonical) potential and the free-energy shell corrections can be obtained by multiplying the terms of the POT expansion by a temperature-dependent factor, which decreases exponentially with temperature. For the case of the so-called "classical rotations" around the symmetry z axis of the nucleus, the MI shell correction is obtained, for any rotational frequency and at finite temperature, within the extended Gutzwiller POT through the averaging of the individual angular momenta aligned along this symmetry axis [95,106,107]. A similar POT problem, dealing with the magnetic susceptibility of fermionic systems like metallic clusters and quantum dots, was worked out in [108,109]. It was suggested in [110] to use the spheroidal cavity and the classical perturbation approach to the POT by Creagh [73,111] to describe the collective rotation of deformed nuclei around an axis (the x axis) perpendicular to the symmetry z axis.
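Before proceeding, note that the temperature-dependent factor mentioned above has, in the standard POT treatment, the form x/sinh(x) with x = πT t_PO/ℏ, where t_PO is the period of the periodic orbit; the following sketch assumes this textbook form (the period value is illustrative) and shows the suppression of the orbit contributions with temperature.

```python
# Standard POT temperature-damping factor x/sinh(x), x = pi*T*t_PO/hbar,
# multiplying each periodic-orbit term of a shell correction. The orbit
# period t_PO below is an ILLUSTRATIVE value, not taken from the text.
import numpy as np

hbar = 6.582e-22   # MeV*s
t_po = 1.5e-22     # s, assumed period of a short periodic orbit

for T in (0.5, 1.0, 2.0, 4.0):           # temperature in MeV
    x = np.pi * T * t_po / hbar
    print(f"T = {T:3.1f} MeV: x = {x:5.2f}, factor = {x / np.sinh(x):.4f}")
```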
The small parameter of the POT perturbation approximation turns out to be proportional to the rotational frequency, but also to the classical action (in units of ℏ), which causes an additional restriction to Fermi systems (or particle numbers) of small enough size, in contrast to the usual semiclassical POT approach. In [112,113], the nonperturbative extended Gutzwiller POT was used for the calculation of the MI shell corrections within the mean-field cranking model for both the collective and the alignment rotations. In these works, for statistical-equilibrium nuclear rotations, the semiclassical MI shell corrections were obtained in good agreement with the quantum results in the case of the harmonic-oscillator potential. We extend this approach for collective rotations perpendicular to the symmetry axis to the analytical calculation of the MI shell corrections for the case of different mean fields, in particular with spheroidal shapes and sharp edges. The main purpose is to study semiclassically the enhancement effects in the MI within the improved stationary-phase method (improved SPM, or ISPM for short) [94,100,102,103,105], due to the bifurcations of the periodic orbits in the superdeformed region.

In the present review, in Section II A we present some basic formulas of the temperature-dependent Fermi-liquid theory [57]. We consider in Sec. II B the particle-number and momentum conservation equations and derive from them the energy conservation and general transport equations, in particular the expressions for the viscosity, shear-modulus and thermal-conductivity coefficients. In Sec. II C we determine the density-density and density-temperature response functions with the low-temperature corrections. Section II D shows the long wave-length (LWL, or hydrodynamic) limit for the response functions, and the specific expressions for the transport coefficients. In Sec. II E, one obtains the static isolated, isothermal, and adiabatic susceptibilities to clarify some important points of the general response-function theory, mainly the ergodicity property of Fermi systems [29,114]. We study the relaxation and correlation functions on the basis of the fluctuation-dissipation theorem and establish their relation to the ergodicity of the Fermi-liquid system in Section II F. General aspects of the response-function theory for the collective motion in nuclei are presented in Sec. III A, in line with [24,29]. Section III B shows the basic ingredients and the collective response function of the nuclear FLDM. Section III C is devoted to the derivation of the temperature dependence of the transport coefficients, such as friction, inertia, and stiffness, for the density modes of slow collective motion. Numerical illustrations are given in Sec. III D. In Sec. IV, the semiclassical theory is extended to neutron-proton asymmetric nuclei and applied to the calculation of IVGDRs. In Sec. V, the smooth ETF and fluctuating shell-structure components of the moments of inertia are derived for collective rotations of heavy nuclei. The MI shell component is presented analytically in terms of the periodic orbits and their bifurcations within the POT. This component is compared with the quantum results for the simplest case of the deformed harmonic-oscillator Hamiltonian. Comments and conclusions are finally given in Sec. VI.
Some details of the thermodynamical, FLDM (in the LWL limit) and POT calculations, such as the analytical derivations of the incompressibility, viscosity, thermal-conductivity, coupling, and surface symmetry-energy constants, as well as of the semiclassical MI, are presented in Appendices A-E.

A. Equations of motion for the heated Fermi liquid

In the semiclassical approximation, the dynamics of a Fermi liquid may be described by the distribution function f(r, p, t) in the one-body phase space. Restricting to small deviations of the particle density ρ(r, t) and temperature T from their values in thermodynamic equilibrium, one may apply the linearized Landau-Vlasov equation [35,57],

∂δf(r, p, t)/∂t + (∂ε_p^{g.e.}/∂p) · (∂/∂r) [ δf(r, p, t) − (∂f^{g.e.}/∂ε_p^{g.e.}) (δε(r, p, t) + V_ext(r, t)) ] = δSt,   (2.1)

where f^{g.e.}(ε_p^{g.e.}) is the distribution function of the global equilibrium (g.e.) and V_ext the external field. The g.e. quasiparticle energy ε_p^{g.e.} will be assumed to be of the form ε_p^{g.e.} = p²/2m*, with m* being the effective nucleonic mass. In (2.1), δε(r, p, t) stands for the variation of the quasiparticle energy ε(r, p, t),

δε(r, p, t) = ε(r, p, t) − ε_p^{g.e.}.   (2.4)

The quasiparticles' density of states N(T) at the chemical potential µ is given by

N(T) = ∫ [2 dp/(2πℏ)³] (−∂f^{g.e.}/∂ε_p^{g.e.}).   (2.5)

Evidently, because of our linearization, the density N(T) here is the one of equilibrium. In the sequel, such a convention will inherently be applied to any coefficient of quantities of order δf. The factor 2 accounts for the spin degeneracy. The amplitude of the quasiparticle interaction, F(p, p′), is commonly written in terms of the Landau parameters F_0 and F_1, according to

F(p, p′) = F_0 + F_1 p̂ · p̂′,   p̂ = p/p.   (2.6)

These two constants may be related to two properties of nuclear matter, namely the isothermal incompressibility K_T (see Appendix A.1) and the effective mass m*,

m* = G_1 m,   G_n = 1 + F_n/(2n + 1)   (2.8)

(n = 0, 1). The equation for the effective mass m* is known [35,57] to be valid for systems obeying Galilean invariance, which shall be assumed here. In principle, the Landau parameters F_0 and F_1 might vary with the momenta p and p′. Such a dependence will be neglected henceforth. This approximation appears to be reasonable, as we are going to stick to small excitations near the Fermi surface and to temperatures T which are small compared to the chemical potential µ. Likewise, we shall discard any temperature dependence of the effective mass. Notice that, in addition to the ratio (T/µ)², this dependence would be governed by the additional factor |m*/m − 1|, which is small for nuclear matter. These assumptions will allow us to simplify further the theory [57] and to get more explicit results by making use of the temperature expansion of the response functions in the small parameter T/µ, as well as of the standard perturbation approach to eigenvalue problems needed later for the hydrodynamic (long wave-length) limit. We will follow [57] in neglecting higher-order terms of the expansion (2.6) in Legendre polynomials.

Later on, we want to study motion of the system which can be classified as an excitation on top of the local equilibrium. Following [35,57], the collision term δSt can be considered in the relaxation-time approximation,

δSt = −δf^{l.e.}(r, p, t)/τ,   (2.9)

where δf^{l.e.}(r, p, t) denotes the deviation of the distribution function from the local equilibrium (l.e.) one,

f^{l.e.}(ε_p^{l.e.}) = {1 + exp[(ε_p^{l.e.} − p · u(r, t) − µ(r, t))/T(r, t)]}^{-1}.   (2.10)

Here, f^{l.e.}(ε_p^{l.e.}) is the distribution function of the local equilibrium, and ε_p^{l.e.} is the associated quasiparticle energy. µ(r, t) represents the chemical potential, u(r, t) the mean velocity field, and T(r, t) the temperature, all defined in the local sense. Like in [32], the relaxation time τ is assumed to be independent of the quasiparticle momentum p. However, τ will be allowed to depend on T as well as on the frequency of the motion (thus accounting for retardation effects in collision processes).
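As a numerical illustration of the Landau-parameter relations (2.6) and (2.8) above, a small sketch evaluating G_n, the effective mass m* = G₁m, and the T = 0 quasiparticle level density per unit volume, N(0) = m*p_F/(π²ℏ³) (spin factor included); the values of F₀, F₁ and the Fermi momentum k_F are rough nuclear-matter numbers assumed for illustration.

```python
# Illustrative evaluation of G_n = 1 + F_n/(2n+1), m* = G_1*m, and the
# T = 0 level density N(0) = m* * p_F / (pi^2 * hbar^3) per unit volume.
# F0, F1 and kF are ASSUMED rough nuclear-matter values.
import numpy as np

hbar_c = 197.327           # MeV*fm
m_c2 = 939.0               # MeV, bare nucleon mass
F0, F1 = 0.2, -0.6         # assumed Landau parameters
kF = 1.36                  # fm^-1, Fermi momentum

G0, G1 = 1 + F0, 1 + F1 / 3
m_eff = G1 * m_c2                             # effective mass (MeV)
pF = hbar_c * kF                              # Fermi momentum (MeV)
eF = pF**2 / (2 * m_eff)                      # eps_F = p_F^2 / 2m*
N0 = m_eff * pF / (np.pi**2 * hbar_c**3)      # MeV^-1 fm^-3

print(f"G0 = {G0:.2f}, G1 = {G1:.2f}, m*/m = {m_eff / m_c2:.2f}")
print(f"eps_F = {eF:.1f} MeV, N(0) = {N0:.2e} MeV^-1 fm^-3")
```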
For the l.e. quasiparticle energy ε_p^{l.e.}, one has

ε_p^{l.e.} = ε_p^{g.e.} + δε_p^{l.e.},   (2.13)

where δε_p^{l.e.} is defined as in (2.4), with only δf(r, p, t) replaced by δf^{l.e.}(r, p, t). According to (2.11) and (2.12), for the simplified interaction (2.6), one gets

δε_p^{l.e.} = δε(r, p, t) = [F_0/N(T)] δρ(r, t) + [F_1 mρ/(N(T) p_F²)] p · u,   (2.14)

where δρ is the dynamical component of the particle density,

ρ(r, t) = ∫ [2 dp/(2πℏ)³] f(r, p, t) = ρ_∞ + δρ(r, t),   (2.15)

with ρ_∞ being its g.e. value, associated to f^{g.e.}(ε_p^{g.e.}) for the infinite Fermi liquid. The vector of the mean velocity u can be expressed in terms of the first moment of the distribution function (the current density) and the particle density (2.15),

u(r, t) = [1/(mρ)] ∫ [2 dp/(2πℏ)³] p δf(r, p, t).   (2.16)

The definition of the collision term in the form (2.9) is incomplete without posing conditions for the conservation of the particle number, momentum, and energy (for simplicity of notation, we shall omit the index ∞ in the static nuclear-matter density component ρ in second-order terms of the energy-density variations). Notice that, to the order considered, in the equation for energy conservation ε may be replaced by ε_p^{g.e.} (see also [58]). Incidentally, for the quasiparticle interaction (2.6), this substitution even becomes exact, as the dynamical part δε would drop out of the last integral [as follows from (2.14), (2.13), and the first two equations in the following set of conditions [57]]:

∫ dp δf^{l.e.}(r, p, t) = 0,   ∫ dp p δf^{l.e.}(r, p, t) = 0,   ∫ dp ε δf^{l.e.}(r, p, t) = 0.   (2.17)

These equations mimic the conservation of the corresponding quantities in each collision of quasiparticles and ensure that of the same quantities calculated for the total system (without external fields). Together with the basic equation (2.1), one thus has 6 equations for the 6 unknown quantities δρ(r, t), δµ(r, t), u(r, t) and δT(r, t). They allow one to find unique solutions as functionals of the external field V_ext(t). Below, we shall solve these equations in terms of response functions. It may be noted that, due to the conditions (2.17), the first term δf^{l.e.}(r, p, t) of the splitting (2.11) of the distribution-function variation drops out of the dynamical component δρ(r, t) of the density ρ(r, t) and of the velocity field u(r, t). As one knows (see, e.g., [35,57,58]), the equation for the velocity field reduces to an identity if one takes into account the definition of the effective mass m* given by (2.8).

B. The conserving equations

In this section, we would like to deduce conserving equations for the particle number, momentum, and energy, which later on will turn out helpful to find appropriate solutions of the Landau-Vlasov equation (2.1). The procedure, which is based on a moment expansion, is well known from textbooks [36,58,115]. We will follow more closely the version of [31,32] (see also [48]).

THE MOMENT EXPANSION

Whereas particle-number conservation implies

∂ρ/∂t + ∇(ρu) = 0,   (2.18)

the momentum conservation is reflected by the set of equations (2.19). Besides quantities introduced before, they involve the momentum flux tensor

Π_{αβ}(r, t) = ∫ [2 dp/(2πℏ)³] (p_α p_β/m*) δf(r, p, t).   (2.20)

Substituting for δf(r, p, t) (2.11) into (2.20), one gets the decomposition (2.21). The first component, σ_{αβ}, which results from the first term δf^{l.e.}(r, p, t) on the right of (2.11), determines the dynamic shear stress tensor,

σ_{αβ} = −∫ [2 dp/(2πℏ)³] [(p_α p_β − δ_{αβ} p²/3)/m*] δf^{l.e.}(r, p, t),   (2.22)

whose trace vanishes.
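As a quick consistency check of the traceless structure of the shear stress tensor (2.22), the sketch below builds the symmetric, traceless gradient combination of the same structure (as used also for the stress-tensor forms in the following subsection) and verifies numerically that its trace vanishes; the velocity-gradient matrix is an arbitrary example.

```python
# Numerical check that the traceless shear combination
# du_a/dx_b + du_b/dx_a - (2/3)*delta_ab*div(u) has zero trace,
# using an arbitrary random velocity-gradient matrix.
import numpy as np

grad_u = np.random.default_rng(0).normal(size=(3, 3))  # du_a/dx_b (arbitrary)
div_u = np.trace(grad_u)                               # div u

sigma = grad_u + grad_u.T - (2.0 / 3.0) * np.eye(3) * div_u
print("trace of shear combination:", np.trace(sigma))  # ~ 0 (round-off)
```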
For a linearized dynamics, the nondiagonal components of the momentum flux tensor Π_{αβ} equal the corresponding stress tensor (but with the opposite sign), with correction terms proportional to u_α u_β in δf, and thus of higher order, see (2.16) for u_α. The second component of the momentum flux tensor in (2.21) can be derived from the variation δf^{l.e.}(ε_p^{l.e.}) as given by (2.12). It represents the compressional part of the momentum flux tensor, δ_{αβ} δP, with

δρ ≡ δρ(r, t) = ∫ [2 dp/(2πℏ)³] δf^{l.e.}(ε_p^{l.e.})   (2.23)

[mind (2.11) and (2.17)]. Notice that here only the diagonal parts survive. The only non-diagonal ones could come from the terms in (2.12) involving p · u, but they vanish when integrating over angles in momentum space. Traditionally, δP in (2.23) is referred to as the scalar pressure, see [116]. Using (2.12) for the distribution δf^{l.e.}(ε_p^{l.e.}) and its properties mentioned above, after some simple algebraic transformations one gets

δP = (K_T/9) δρ + (∂P/∂T)_ρ δT,   (2.24)

with K_T being the isothermal incompressibility (2.7). For the derivation of the second term in (2.24), one can use (i) the transformation of δµ to the variations δρ and δT [see (2.49)], and (ii) the relations (A.13), (A.14), and (A.16) for the entropy per particle ς, the particle density ρ, and the quantity M (A.16), respectively. Inspecting (A.18) and (A.8), it becomes apparent that the expression on the very right of (2.24) may indeed be interpreted as an expansion of the static pressure to first order in δρ and δT. It is thus seen that the truly non-equilibrium component δf^{l.e.}(r, p, t) only appears in the shear stress tensor σ_{αβ} given in (2.22).

Note that here and below, within Sec. II B, we omit immaterial constants related to the global-equilibrium (static) components of the moments, to simplify the notation and to adapt it to that of the standard textbooks where this does not lead to misunderstanding. We should emphasize that the Landau quasiparticle theory, which is the basis of our derivations, works in a self-consistent way with small deviations from (small excitations near) the Fermi surface, which are denoted by the symbol "δ", and takes the above-mentioned static components from external phenomenological (experimental) data. Therefore, all relations discussed below in this section should be understood as relations between such close-to-Fermi-surface quantities within our linearized Landau-Vlasov phase-space dynamics, after exclusion of all the above-mentioned immaterial constants. Nevertheless, we keep the symbol δ with the scalar pressure δP to avoid possible misunderstandings related to the linearization procedure; see more comments below, after (2.38).

THE STRESS TENSOR

It may be worthwhile to relate the stress tensor σ_{αβ} given in (2.22) to the standard form in terms of the coefficients of the shear modulus λ and the viscosity ν,

σ_{αβ} = σ^{(λ)}_{αβ} + σ^{(ν)}_{αβ}.   (2.25)

Here, the first term,

σ^{(λ)}_{αβ} = λ (∂w_α/∂x_β + ∂w_β/∂x_α − (2/3) δ_{αβ} ∇ · w),   (2.26)

is the conservative part of the stress tensor σ_{αβ}, with u = ∂w/∂t and w being the displacement field. The second term in (2.25) can be written as

σ^{(ν)}_{αβ} = ν (∂u_α/∂x_β + ∂u_β/∂x_α − (2/3) δ_{αβ} ∇ · u),   (2.27)

where ν is the coefficient of the shear viscosity (or the first viscosity). For more details, see Appendix A.2, in particular for the expressions of the coefficients λ (B.12) and ν (B.13) in terms of the Fermi-liquid interaction parameters. To obtain microscopic expressions for the shear modulus λ and the viscosity ν, one needs to exploit the solution δf^{l.e.}(r, p, t) of the Landau-Vlasov equation (2.1) for the stress tensor σ_{αβ}(r, t) (2.22), reducing the latter to the form (2.25).
Such a calculation of λ and ν in terms of the Landau Fermi-liquid parameters is discussed in Appendix A.2, in which Fourier transforms are exploited [31]. Equivalently, one may express functions of space and time by plane waves, which for the distribution function reads [31,46]

δf(r, p, t) = δf̃(p; q, ω) exp[i(q · r − ωt)],   (2.28)

with q being the wave vector and ω the frequency of the vibrational modes of nuclear matter. Such a plane-wave representation is to be applied to both sides of (2.22) and (2.25). The amplitudes of the velocity field ũ and the displacement field w̃ then satisfy

w̃ = ũ/(−iω).   (2.30)

A total effective incompressibility K_tot includes the change of the pressure due to variations of the temperature with density. With the help of (A.18), the incompressibility K_tot can be expressed through the specific heat per particle C_V (A.9), see (2.31); again, δT̃ and δρ̃ are the Fourier components of δT(r, t) and δρ(r, t). Like all other kinetic coefficients, such as λ and ν given in (B.12) and (B.13), respectively, this effective, total incompressibility modulus K_tot, too, depends on ω and q. Later on, we shall discuss these quantities in more detail in the LWL limit. In this limit, the total incompressibility K_tot will be seen to become identical to the adiabatic one, K_ς, given in (A.29).

ENERGY CONSERVATION AND THE GENERAL TRANSPORT EQUATION

So far, we have not looked at the energy conservation. For this purpose, one needs to consider thermal aspects as they appear in the equations for the change of entropy and temperature. To do this, we will follow standard procedures. We first build the scalar product of the mean velocity u with the vector equation whose component α is given by (2.19). Making use of the continuity equation (2.18), after some manipulations one gets the energy-balance equation (2.32). On its left-hand side, there appear the mean kinetic-energy density and the internal-energy density ρE per unit volume (defined again up to an immaterial constant). The density E itself may be split into three different components, see (2.33). The first one is related to shear deformations, which is known from solid-state physics and, for Fermi liquids, comes from distortions of the Fermi surface [34]. The second one may be written as in (2.35); it represents the compressional component, associated with the effective total incompressibility K_tot, in line with the known thermodynamic relations. Equation (2.35) resembles the expression found in [26,27], except for a generalization of the physical meaning of the incompressibility modulus K_tot as a function of ω and q given in (2.31), as compared to the quasistatic adiabatic case. The third component in (2.33) represents the heat part, resulting from a change of entropy. We keep here the dynamical variation symbol δ for the entropy ς (and also for the pressure P, here and below) to remember that all quantities of the Landau Fermi-liquid theory are presented for small dynamical deviations near the Fermi surface, in linear (or, after multiplying (2.19) by u, quadratic) form in δf. We thereby avoid a misunderstanding in the subsequent transformations of the energy E (2.33), say Legendre ones, to the differential form, in line with the general comment at the beginning of this section. On the r.h.s. of (2.32), the enthalpy tensor W_{αβ} per particle has been introduced,

W_{αβ} = −σ_{αβ} + δP δ_{αβ}   (2.36)

(see the comment above concerning δP).
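A one-line consistency check of the plane-wave representation (2.28): substituting plane-wave forms into the linearized continuity equation (2.18) replaces ∂/∂t by −iω and ∇ by iq, yielding the relation q·u = ωδρ/ρ used below; the sympy sketch assumes a one-dimensional wave for simplicity.

```python
# Sketch: plane-wave substitution in the linearized continuity equation
# d(delta_rho)/dt + rho*d(u)/dx = 0 gives u = omega*delta_rho/(rho*q),
# i.e. q*u = omega*delta_rho/rho. One-dimensional wave for simplicity.
import sympy as sp

x, t = sp.symbols("x t", real=True)
q, w, rho = sp.symbols("q omega rho", positive=True)
drho, u0 = sp.symbols("delta_rho u0")       # complex amplitudes

wave = sp.exp(sp.I * (q * x - w * t))
eq = sp.diff(drho * wave, t) + rho * sp.diff(u0 * wave, x)
sol = sp.solve(sp.Eq(eq / wave, 0), u0)[0]
print(sp.simplify(sol - w * drho / (rho * q)))  # -> 0
```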
Furthermore, the thermodynamic relation for the dynamical variations of the internal energy E in terms of the variation δς of the entropy per particle ς, of the density ρ and of the displacement tensor w_{αβ} is given by (2.37). The displacement tensor w_{αβ} is defined as

w_{αβ} = (1/2)(∂w_α/∂x_β + ∂w_β/∂x_α).   (2.38)

Note that equation (2.29) for the pressure δP is important to get (2.33) by integration of (2.37). According to (2.36), we get the standard relation of the linearized thermodynamics of [116], for instance between the enthalpy (2.36) and the entropy ς, up to second-order terms in δP. In (2.32), we also added and subtracted the term ∇ · j_T containing the heat current,

j_T = −κ ∇δT(r, t),   (2.39)

with the coefficient κ for the thermal conductivity. We may now write the equation for energy conservation in the form (2.40). In this way, it is seen that from (2.32) and (2.40), together with the continuity equation (2.18) and the definition of the heat current j_T (2.39), one gets the equation (2.41) for the change of entropy [57,58]. Thereby, we obtain rather simple expressions for the collective and internal components of the energy beyond the hydrodynamical limit.

POTENTIAL FLOW: FERMI LIQUID VERSUS HYDRODYNAMICS

Below, we shall be interested in the case of a viscous potential flow, for which one has u = ∇ϕ, with ϕ(r, t) the velocity potential [see (2.42)-(2.44)]. The diagonal term given on the very right of (2.44) has been used to remove δρ, which still appears in (2.30), leading to the equation (2.45) for ϕ. The structure of (2.45) for the potential flow is similar to that of the Navier-Stokes equation for the velocity potential ϕ. The difference to the case of a common classical liquid is seen in the terms proportional to λ, viz. in the presence of the anisotropy term (2.26), which actually represents a reversible motion. Such a term is known from the dynamics of amorphous solids. We emphasize that for Fermi liquids this term arises only in the presence of the Fermi-surface distortions, which survive even in the non-viscous limit; they will turn out to be important for our applications below. The shear modulus λ may be interpreted as a measure of those distortions, which are related to a reversible anisotropy of the momentum flux tensor. They disappear in the hydrodynamic limit, and so does λ, in which case all formulas of this section turn into those for normal liquids; for more details, see Section II D.

At this place, an important remark is in order. It should be noted that, in contrast to classical hydrodynamics, our system of equations for the moments is not closed in terms of the first few of them, namely the particle density δρ and the velocity field u. This is true in particular for the equation (2.45) for the potential flow ϕ. Indeed, the coefficients λ, ν and K_tot depend on the variable ω/q, which is as yet unknown. The latter is determined from a dispersion relation, which in turn has to be derived from the Landau-Vlasov equation (2.1). Such a procedure goes back to [34], where the dispersion relation was exploited for the collisionless case at T = 0. A collision term in the relaxation approximation has been taken into account in [35]. The extension to heated Fermi liquids and low excitations, in the way which we are going to use later on, was developed in [57]. It may be noted that this version of the dispersion relation, which we are aiming at, differs essentially from the one obtained in the "truncated" (scaling-model) versions of the Fermi-liquid theory of [118,119], where the momentum flux tensor is not influenced by higher moments of the distribution function.
We take into account all other multipolarities (larger than the quadrupole one) of the Fermi-surface distortions when there is no convergence of the multipolarity expansion of the distribution function, for finite and large ωτ or for finite K_tot, for instance for nuclear matter with small F_0, in contrast to the Fermi liquid ³He.

C. Response functions

DYNAMIC RESPONSE

As mentioned earlier, we want to solve the linearized equations of motion in terms of response functions. We concentrate on two quantities, namely the particle density ρ(r, t) and the temperature T(r, t), and examine how they react to the external field V_ext(r, t) introduced earlier. This may be quantified by the following two response functions, the density-density response χ^{coll}_{DD} and the temperature-density response χ^{coll}_{TD}, defined as

χ^{coll}_{DD}(q, ω) = δρ(q, ω)/δV_ext(q, ω)   (2.46)

and

χ^{coll}_{TD}(q, ω) = δT(q, ω)/δV_ext(q, ω),   (2.47)

respectively. To keep the notation simple, we will omit the tilde characterizing the Fourier transform of the distribution function (2.28) (it should suffice to only show the arguments q, ω). The definition of the response functions is identical to the one of [57], except that we have introduced the suffix "coll". This was done adopting a notation used in the nuclear-physics literature when the dynamics of a finite nucleus is expressed in terms of shape variables, to which we will come below. Notice, however, that V_ext(q, ω) is only proportional to the density, V_ext(q, ω) = q_ext(ω)ρ(q, ω), with q_ext(ω) being some externally determined function. Often, one therefore defines response functions in a slightly modified way, in that the functional derivatives are performed with respect to q_ext(ω) instead of V_ext(q, ω) (see, e.g., [36]). As will be seen below, these functions only depend on the wave number q but not on the angles of the wave vector q. For this reason, it is convenient to introduce the dimensionless quantities

s = ω/(q v_F),   τ_q = q v_F τ   (2.48)

instead of the frequency ω and the wave number q.

To calculate the response functions (2.46) and (2.47), we follow the procedure of [57]. As any further details may be found there, it may suffice to outline briefly the main features. In short, the strategy is as follows. Firstly, one rewrites the Landau-Vlasov equation (2.1) in terms of the Fourier coefficients introduced in (2.28). Evidently, in the spirit of the separation specified in (2.11), we need to evaluate explicitly only the first component δf^{l.e.}(r, p, t), which enters the conditions (2.17). By a straightforward calculation, one may then express δf^{l.e.}(r, p, ω) in terms of the unknown quantities δρ, u, δµ and δT for any given external field V_ext; the form is given in (B.4). The continuity equation (2.18) in the Fourier representation through (2.28), q · u = ω δρ/ρ, may be used to eliminate the velocity field u. Furthermore, the thermodynamic relation (2.49) [see (A.17) and (A.8)] allows one to express the chemical potential δµ in terms of the two unknown variables δρ and δT. Next, one may exploit the conditions (2.17). As the second (set of) equation(s) is just an identity, provided one uses the appropriate definition of the effective mass (2.8), it is only the first and the third equations which matter. They determine the remaining two variables δρ and δT in terms of the external field, see (2.50) and (2.51). Here, an auxiliary quantity has been introduced, with N(0) being the level density (2.5) of the quasiparticles at T = 0 and ε_F = p_F²/2m*. The functions χ_n entering these expressions are given by the integrals (2.54), with n = 0, 1, 2, ...
Furthermore, in (2.50) and (2.51) a shorthand notation δV_eff has been used for the sum of two terms, namely

δV_eff = V_ext + k δρ,   (2.56)

with the "coupling constant" k defined in (2.57). In (2.56), δV_eff may be considered as an effective field which includes the true external field V_ext and the "screened" field kδρ [57]. Our notation follows the one often used for finite nuclei: the second term in (2.56) plays the role of the collective variable, and k of (2.57) represents the "coupling" constant (see, e.g., [4,24,29]). The response function χ^{coll}_{DD} of (2.46) can now be obtained from (2.50) and (2.51) in the form (2.58), with the functions D(τ_q, s) and D_0(τ_q, s) given by (2.59) and (2.60); the quantity ℵ(τ_q, s) entering (2.58) is finally given by (2.61). It is worth noticing that the collective response function for the density-density mode, as given by (2.46) or (2.58), can be expressed as

χ^{coll}(q, ω) = χ(q, ω)/[1 + k χ(q, ω)].   (2.62)

This form is analogous to the one used to describe the dynamics of shape variables [4,24,29]. We omit here the suffix DD, because the TD response function takes on a similar form (with some modification of the numerator). It is here where the "coupling constant" k appears, as defined in (2.57), together with the "intrinsic" [or "unscreened" (see [36,57])] response function χ (2.63). Both expressions can be found already in [57]. However, later we will find the form (2.58) more convenient for our applications, in particular for the discussion of the low-frequency limit ωτ ≪ 1. If one first expands χ (2.63) in (2.62) in small ωτ near the poles of χ^{coll} (2.58) (see the next section), one has to assume that the singularities of χ, related to the zeros of D_0 in (2.63), are far away from the zeros of D in (2.58), i.e., that χ is a smooth function of ωτ near these poles. After the cancellation of the possible singularity source D_0 in (2.58), we are free of such an assumption.

Finally, let us turn to the temperature-density response function χ^{coll}_{TD} (2.47). It is determined by the same system of equations (2.50) and (2.51) and can be written in the form (2.62), but with another "intrinsic" response function χ_{TD}, see (2.64) and (2.65), appearing in the numerator, where D_0(τ_q, s) is given by (2.60). As compared to the one printed in [57], this expression contains an additional factor isτ_q/χ_1, which later on will turn out to be important, for instance when calculating the susceptibilities and the incompressibility K_tot (2.31). (We are grateful to H. Heiselberg for confirming this misprint.) Substituting (2.65) into the numerator of (2.62) instead of χ, one gets the temperature-density response function (2.47) in a form (2.66) similar to (2.58).

LOW TEMPERATURE LIMIT

The expressions for the collective response functions become much simpler at low temperatures, T ≪ µ. In this case, one may calculate the χ_n of (2.54) by expanding in powers of T/µ. For the applications to nuclear physics we have in mind, the temperature is sufficiently small that it suffices to stick mainly to order two; fourth-order terms shall be shown only when necessary. A basic element for the quantities which we need to evaluate is the derivative ∂f_p/∂ε_p taken at global equilibrium,

−∂f^{g.e.}/∂ε_p^{g.e.} = {4T cosh²[(ε_p^{g.e.} − µ)/(2T)]}^{-1}.   (2.68)

It appears in N(T) of (2.5) [see also (A.15)], which in turn is needed for the χ_n of (2.54). For small T, this derivative is a sharp, bell-shaped function of ε_p^{g.e.}, such that one may evaluate the averaging integrals (2.54) and (A.15) by expanding the smooth functions in terms of ε_p^{g.e.} near ε_p^{g.e.} = µ^{g.e.}. In this way, the Fourier-Bernoulli integrals over the dimensionless variable [(ε_p − µ)/T]^{g.e.}
appear (see, e.g., [57]), which lead to the expansions (2.69)-(2.71). Here, Q_1(ζ) is the Legendre function of the second kind,

Q_1(ζ) = (ζ/2) ln[(ζ + 1)/(ζ − 1)] − 1,

with ζ = s + i/τ_q, and the reduced temperature T̄ = T/ε_F is used also in Appendix A.3. These quantities may now be used to calculate the response functions (2.46) and (2.47) [or, more specifically, (2.58) and (2.66)]. For zero temperature, one gets the standard solutions [35,57]. So far, no assumption has been made concerning the parameter ωτ, which specifies the importance of collisions in the various regimes of the collective motion [35]. In particular, the formulas obtained in this section are valid both for the regime of zero sound (ωτ ≫ 1) and for hydrodynamics (ωτ ≪ 1). For ωτ ≫ 1, our solutions agree with those of [35,57]. However, below we shall be interested mainly in collective excitations of low frequencies. The notion "low frequencies" is meant to indicate that the corresponding excitation energies are smaller than those of the giant resonances. Next, we will turn to the hydrodynamic regime, where ωτ → 0. As we shall see, at low temperatures our solutions approach the ones of normal classical liquids, in agreement with [115,116].

D. Hydrodynamic regime

DISPERSION RELATION

The response functions can be simplified significantly in the long wave-length limit. Using τ_q introduced in (2.48), this (LWL) limit may be defined as τ_q ≪ 1. It can be reached in two ways, namely for small wave numbers q and finite collision time τ, or for small τ but finite q. Both cases imply that the dimensionless parameter ωτ = sτ_q, which determines the collision rate in comparison with the frequency of the modes, becomes small for any finite value of s (2.48) (|s| ≲ 1). As will be shown below, for nuclear matter at low temperatures this quantity s is not large, in contrast to the case of liquid ³He. Therefore, a small τ_q implies hydrodynamic behavior, in contrast to the zero-sound regime, where τ_q ≫ 1 or ωτ ≫ 1. The Landau-Vlasov equation (2.1) is an integral equation. Its solution may be sought in terms of an eigenvalue problem, with the distribution function δf being the eigenfunction and the sound velocity s (2.48) being the eigenvalue, see also [35,36]. This eigenvalue problem may be solved perturbatively, with τ_q being the smallness parameter [55,56]. It may be noted in passing that this method may also be applied, to some extent, to the eigenvalue problem of the Schrödinger equation. We shall use it to get the hydrodynamic sound velocities from the kinetic equation, see [55,56]. To this end, we expand the solutions for s and δf into power series with respect to τ_q, restricted to linear order. Thus, we may write

s = s_0 + τ_q s_1 + O(τ_q²),   (2.72)

where s_0 and s_1 are independent of the expansion parameter τ_q. In Appendix A.2, it is shown how the density-density response function may be calculated in the LWL limit. There, two non-linear equations for the coefficients s_0 and s_1 are obtained from the dispersion relation; their solutions are given by (2.73) and (2.76), respectively. These solutions for s (2.72) can be written in terms of the dimensional frequency ω by means of (2.48) in the forms (2.74) and (2.75) for ω^{(0)} and ω^{(1)}_±. The first root, ω^{(0)} given in (2.74), is purely imaginary and corresponds to the overdamped excitations of the hydrodynamic Rayleigh mode [114,115]. The second and third ones, ω^{(1)}_±, correspond to the usual first-sound mode, expressed in terms of the (macroscopic) parameters of viscosity and thermal conductivity of normal liquids [115,116].
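For orientation, the sketch below implements the Legendre function Q₁ introduced above and contrasts the two regimes: the textbook collisionless (zero-sound) dispersion relation Q₁(s) = 1/F₀ (valid for F₁ = 0 and τ → ∞; this simplified relation is quoted from standard Fermi-liquid theory, not from the full dispersion relation of the text) against the hydrodynamic first-sound value s₀ = [(1 + F₀)/3]^{1/2} for F₁ = 0, both in units of v_F.

```python
# Q1(z) = (z/2)*ln((z+1)/(z-1)) - 1 and the textbook zero-sound relation
# Q1(s) = 1/F0 (F1 = 0, collisionless limit), compared with the
# hydrodynamic first-sound speed s0 = sqrt((1+F0)/3). Units of v_F;
# the F0 values are illustrative.
import numpy as np
from scipy.optimize import brentq

def Q1(z):
    return 0.5 * z * np.log((z + 1.0) / (z - 1.0)) - 1.0

for F0 in (0.5, 1.0, 5.0):
    s_zero = brentq(lambda s: Q1(s) - 1.0 / F0, 1.0 + 1e-12, 50.0)
    s_first = np.sqrt((1.0 + F0) / 3.0)
    print(f"F0 = {F0:3.1f}: zero sound s = {s_zero:.3f}, first sound s0 = {s_first:.3f}")
```

Note that the zero-sound root always lies above v_F (s > 1), while for small F₀ the hydrodynamic value stays well below it, consistent with the nuclear-matter discussion below.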
In (2.73) and (2.76), small corrections of the order of the product of the two small quantities T̄² and F_0 have been neglected, along with T̄²|(m* − m)/m| = T̄²|F_1|/3. This procedure should be valid for nuclear matter, where the relevant parameters are small, both |F_0| and |(m* − m)/m| being of order 0.2. Discarding such small corrections, our results for the sound frequencies ω^{(1)}_± (2.75) are in agreement with [57]. In particular, up to these small corrections, the volume (or second) viscosity disappears, as is the case in [57]. In the expressions (2.74) to (2.76), more explicit temperature corrections are given for ω^{(0)} and ω^{(1)} than those discussed in [57]. This will turn out to be important for the thermal conductivity κ, which we shall address in Sec. II D 3 [see (2.95)]. The "widths" Γ^{(0)} and Γ^{(1)} are proportional to τ_q, and thus to the relaxation time τ, which represents the effects of two-body collisions. For nuclear matter, the Landau parameters F_0 and F_1 are small [G_0 and G_1 are close to unity, see (2.8)]. For this reason, according to the last equation in (2.48), the sound velocities cannot be large [see the approximation (2.73)]. So, the LWL limit (τ_q ≪ 1) may be identified with the hydrodynamic collision regime, ωτ = sτ_q ≲ τ_q ≪ 1. Note that for the Fermi liquid ³He, for instance, the parameters F_0 and F_1 are large, and the second-order equation of (2.73) cannot be applied. Moreover, according to the first line in (2.73), the sound velocity is then large. Therefore, in this case a small τ_q does not yet imply that ωτ is also small, i.e., the LWL condition is not sufficient for the hydrodynamic collision regime.

RESPONSE FOR INDIVIDUAL MODES

In the following, we are going to examine the collective response function χ^{coll}_{DD} (2.58), in particular its behavior in the neighborhood of the individual modes given by (2.74) and (2.75). To simplify the notation, we shall at times omit the lower index "DD" and move down the upper index "coll". Near any of the sound poles ω^{(1)}_± given in (2.75), the collective response function χ^{coll} (2.58) may be written in the form (2.77); here, we have made use of (B.24), (B.25), (2.57), as well as of (2.72). It will turn out convenient to present separately the dissipative and reactive parts, χ′′ (2.78) and χ′ (2.79). Notice that for τ_q = +0 the Lorentzians in (2.78) turn into δ-functions. The relaxation time τ, which determines the dimensionless quantity τ_q = τ v_F q, might depend on temperature and frequency. A useful form is found in

τ = τ_o/[T² + c_o (ℏω/2π)²] → τ_o/T²,   (2.80)

with some parameters τ_o and c_o independent of T and ω; see, e.g., [32]. As indicated on the very right, for our present purpose we may neglect the frequency dependence, simply because we are interested in describing low-frequency modes at larger temperatures (with respect to ω). Indeed, it is such a condition which helps to justify the assumption of local equilibrium. We shall return to this question later, when we apply the Landau theory to a finite Fermi-liquid drop. Substituting (2.80) into the damping coefficient Γ^{(1)} (2.76), one obtains (2.81). To leading order, this gives the expected dependence on temperature commonly associated with hydrodynamics, namely Γ^{(1)} ∝ 1/T². Finally, we may note that in the long-wave limit the effective, total incompressibility K_tot (2.31) becomes identical to the adiabatic incompressibility K_ς (A.7) specified in Appendix A; the leading-order sound velocity entering here was defined in (2.73).
As the specific heat C_V (A.27) is proportional to T̄, we need in (2.84) only the linear terms to get the temperature correction of second order in T̄ to the total incompressibility K_tot (2.31). Substituting (2.84), (A.27) and (A.28) into (2.31) for the total incompressibility K_tot, one obtains identically the same expression as in (A.29) for the adiabatic incompressibility K_ς. The same result (2.84) in the LWL limit can also be obtained from (B.11).

Let us now address the pole at ω^{(0)} [see (2.74)]. Near the latter, the collective response function χ^{coll} (2.58) takes the form (2.85) [as may be checked with the help of (B.24), (B.25), (2.57) and (2.72)], with Γ^{(0)} being defined in (2.74). It may be rewritten in a more traditional form, see [114], [115] and [116]: introducing the "diffusion coefficient" D_T (2.86), one gets (2.87). Note that, according to (2.74) and (2.80), the temperature dependence of Γ^{(0)} becomes similar to the one found in (2.81). For the dissipative and reactive parts of the response function, explicit expressions follow [see, in particular, (2.89) for the dissipative part]. The strength distribution χ^{(0)′′}_{coll} has a maximum at ω = Γ^{(0)}/2 and a width Γ^{(0)}/2 ∝ τ_q. In the LWL limit τ_q ≪ 1, this distribution becomes quite sharp, with the maximum lying close to ω = 0. As may be inferred with the help of (2.85) and (2.74), the maximal value does not depend on τ_q and is proportional to T̄². It will be demonstrated shortly that the pole at ω^{(0)} (2.74) is related to the heat conduction, for which reason it is sometimes called the "heat pole". Notice that the reactive response function χ^{(0)′}_{coll} is finite at ω = 0, with a value independent of τ_q.

In the hydrodynamic regime with τ_q ≪ 1, the response function χ^{coll} found for the Fermi liquid becomes identical to the one for normal liquids [115,116]. This can be made more apparent after introducing the dimensional sound velocity c and a width parameter Γ, determined as in (2.90), as well as the diffusion coefficient D (2.86) and the specific heats. The sum of the two contributions discussed above may then be written in the form (2.91). Traditionally, the peaks related to the first and second terms are called the Brillouin and the Rayleigh (or Landau-Placzek) peak, respectively. The ratio of the specific heats C_P and C_V per particle is discussed in Appendix A.1, see (A.22) and (A.32). Note that the sound speed s_0, see (2.73), is identical to the adiabatic sound velocity found in Appendix A.1, see (A.31) (c in dimensional units for normal liquids), as it should be for normal liquids [115,116]. The structure of (2.91) is identical to that discussed in the literature [see, e.g., (4.44a) of [115]], if one only expresses the quantities appearing here in terms of viscosity and thermal conductivity. As a matter of fact, the alert reader might expect a third term [as in (4.44a) of [115]], but this one is of the order of τ_q² and thus is neglected here. The specific temperature dependence of these parameters (in the LWL limit) will be discussed in the next subsection; with respect to the specific heats, see also Appendix A.1.

Note that in the derivation of both amplitudes, a^{(0)} (2.85) and a^{(1)} (2.77), we took D(s) (2.59) at low temperatures using (2.69) to (2.71), and then expanded it first near the poles (2.74) and (2.73), respectively, and second in the small τ_q of the LWL limit. This way of calculation is much simpler, because the two last operations can be exchanged only if next-order terms in τ_q are taken into account, which requires much harder work.
If we exchanged the last two operations, expanding first in τ_q in the linear LWL approximation (2.72) and then near the poles, some important terms would be lost.

SHEAR MODULUS, VISCOSITY AND THERMAL CONDUCTIVITY

As explained in Appendix A.2, these coefficients may be obtained by applying expansions to the χ_n within the perturbation theory mentioned above, for low temperatures (with T̄ ≪ 1); see in particular (B.21), (B.22) and (B.23). They specify the stress tensor σ_{αβ} (2.25)-(2.27) and the heat current j_T (2.39). The shear modulus λ (B.12) in the time-reversible part σ^{(λ)}_{αβ} (2.26) of the stress tensor σ_{αβ} (2.25) turns into zero in the long wave-length approximation linear in τ_q, as in [57], up to immaterial corrections of the order of T̄⁴. In other words, in this case λ is a small quantity of the order of τ_q², because such corrections were neglected everywhere. This means the disappearance of the Fermi-surface distortions in our linear approach (2.72), which are the main peculiarity of Fermi liquids as compared to normal ones.

For the shear viscosity ν (B.13), taken at the first-sound frequency ω = ω^{(1)}_0, one obtains the two-component expression (2.92), with ν^{(1)} and ν^{(2)} given by (2.93) and (2.94). The first term, ν^{(1)} (2.93), in (2.92) is proportional to the relaxation time τ and essentially coincides with that obtained earlier for mono-atomic gases, and for a Fermi liquid by using another method [57], except for the specific explicit dependence on temperature presented here. The temperature dependence of the shear viscosity ν^{(1)} (2.93) is essentially the same as that of the sound damping rate Γ^{(1)} (2.81), ν^{(1)} ∝ 1/T², following from the temperature dependence of the relaxation time τ (2.80). Although the viscosity component ν^{(2)}, too, is related to the first-sound solution ω^{(1)}_0, it is proportional to 1/τ, similar to the viscosity of zero sound but in contrast to the standard first-sound viscosity (2.93). The ν^{(2)} component (2.94) of the viscosity (2.92) increases with temperature as T⁶, see also (2.80) for the relaxation time τ. Although the second component ν^{(2)} of the shear viscosity is proportional to T̄⁴, and thus may be considered small under usual conditions, it may become important for small wave numbers q (or frequencies ω) [for more details, see the discussion below in Sec. III C 2]. This component of the viscosity was not discussed in [57].

Let us finally turn to the thermal conductivity κ, which shows up in the equation for the variations of the temperature T(r, t) with r and t (see Appendix A.2). The form (B.20) [for the heat mode ω = ω^{(0)} of (2.74)] may be rewritten as (2.95); we present here explicitly also the temperature corrections up to terms of the order of T̄². Our expression for the thermal conductivity κ (2.95) differs from the ones found in [115] and [57] by small T̄² corrections. However, these are not important for the calculation of the damping coefficient Γ^{(1)} of the first-sound mode as defined in [115] and [116], see also the comment before (2.91). Here, ν^{(1)} is the part of the shear-viscosity coefficient related to the first-sound mode, see (2.93), and C_P/C_V is the adiabatic ratio of the specific heats, see (A.9) and (A.10). We omitted here corrections related to the second viscosity, in line with the second approximation in (2.76). Thus, up to the temperature corrections discussed above, we have agreement with the results of [57] for the dispersion equation, viscosity and thermal-conductivity coefficients in the hydrodynamic limit.
Our derivations are more rigorous and direct within the perturbation theory for the eigenvalue problem. We also recover the transition to the hydrodynamics of normal liquids discussed in [114][115][116], in terms of the macroscopic parameters mentioned above.

E. Susceptibilities

In this section, we want to address the calculation of the static susceptibilities, for which one distinguishes isolated, isothermal and adiabatic ones [114][115][116]. Their comparison is relevant for ergodicity properties, see [114,116]. Here, we will concentrate on the density mode of nuclear matter, considered as an infinite Fermi-liquid system.

ADIABATIC AND ISOTHERMAL SUSCEPTIBILITIES

The isolated susceptibility χ_{DD}(0) is defined as the static limit of the response function χ_{DD}(q, ω) [or χ_{DD}(τ_q, s) of (2.63) in dimensionless variables], for which one first has to take the limit q → 0 (or τ_q → 0), and then ω → 0 (or s → 0) (see, e.g., [115]):

χ_{DD}(0) = lim_{s→0} lim_{τ_q→0} χ_{DD}(τ_q, s),   (2.97)

χ_{DD}(0) = δρ/δV_eff,   (2.98)

where δV_eff and δρ are quasistatic variations. They can be considered as independent of time, in contrast to the ones discussed in Sec. II C 1, see (2.56). The isothermal susceptibility χ^T_{DD} is defined as the density-density response at constant temperature T, and the adiabatic one, χ^ς_{DD}, as that at constant entropy (per particle, ς). Suitable variables for studying the variations of the density ρ are therefore the pressure P and the temperature T in the first case, and the pressure P and the entropy per particle ς in the second one. These two representations of δρ can be written as in (2.99). For the isothermal and adiabatic susceptibilities χ^T_{DD} and χ^ς_{DD}, one thus gets the two relations (2.100) and (2.101). The variations of the density with pressure are related to the (in)compressibilities, see (A.7) and (A.8). As shown in Appendix A.1, their ratio can be expressed by that of the corresponding specific heats, see (A.10). Building the ratio, one therefore gets from (2.100) and (2.101)

χ^T_{DD}/χ^ς_{DD} = C_P/C_V.   (2.102)

This is a general relation from thermodynamics, where we have only replaced the system's total entropy [116] by the entropy per particle ς, applied to intensive systems such as normal and Fermi liquids. We are more interested in the calculation of the differences between the isothermal susceptibility χ^T_{DD}, defined by the relations in (2.100) [or the adiabatic one χ^ς_{DD}, see (2.101)], and the isolated (static) susceptibility χ_{DD}(0) presented by (2.98) [29]. For this purpose, we first find the ratio of the isothermal-to-isolated susceptibilities, χ^T_{DD}/χ_{DD}(0), in terms of the ratio of the static "intrinsic" temperature-density response function to the isolated one χ_{DD}(0) (2.63). The static temperature-density susceptibility χ_{TD}(0) is defined in the same way (2.97) as the static limit of the "intrinsic" temperature-density response function χ_{TD}(τ_q, s) given by (2.64). Note that the limits ω → 0 (or s → 0) and q → 0 (or τ_q → 0), which we consider to get the static response functions, are not commutative [115]. Taking the second equations in (2.100) and (2.98) for intensive systems such as liquids, one gets the relation (2.103). We used here the definitions (2.63) and (2.64) for the density-density and temperature-density response functions and (2.97) for their static limits χ_{DD}(0) and χ_{TD}(0). We then applied the thermodynamic relations of Appendix A.1 to transform the derivative (∂ρ/∂P)_T. This derivative appears in the definition of the isothermal susceptibility χ^T_{DD} in (2.100) and is converted into other, simpler thermodynamic derivatives for the application to Fermi liquids, see below.
To this end, we transform the variables (T, P) to the new ones (T, µ). The derivatives of the pressure P with respect to these two new variables can then be reduced to derivatives of the density ρ shown on the r.h.s. of (2.103) with the help of (A.5). Thus, the calculation of the susceptibilities reduces to the derivation of the static limits, defined by (2.97), of the temperature-density χ_TD(τ_q, s) and density-density χ_DD(τ_q, s) response functions, see (2.63) and (2.64), and of their ratio χ_TD(0)/χ_DD(0) for the case of a heated Fermi liquid. We can then calculate the two ratios of susceptibilities (2.103) and (2.102), which together determine each of the considered susceptibilities separately.

FERMI-LIQUID SUSCEPTIBILITIES

The expression for the ratio of the isothermal to the static susceptibility (2.103) can be simplified by making use of the specific properties of Fermi liquids given by (A.17) and by the second equation in (A.18); this yields (2.104). According to the definition (2.97) of the static response functions, applied to the ones entering the ratio χ_TD(0)/χ_DD(0) of (2.104), we shall use (2.65) and (2.63) for the corresponding intrinsic susceptibilities (with F_0 = F_1 = 0 there). The static limit (2.97) of the response functions χ_DD(τ_q, s) (2.63) and χ_TD(τ_q, s) (2.65) in (2.104) can be found by using the LWL expansions in the small parameter τ_q ≪ 1 at low temperatures, see Sec. II C 2 and Appendix A.2 for the first limit (τ_q → 0) in (2.97). We now substitute the perturbation-theory expansions for small τ_q of the quantities s (2.72), χ_1 (B.22), ℵ (B.24) and D_0 (B.25) into (2.63) and (2.65). We obtain this limit as functions of s_0 and s_1, and then take the second limit s_0 → 0 and s_1 → 0. Finally, we arrive at the very simple result (2.105), neglecting small cubic terms in T̄, which correspond to T̄^4 corrections in the susceptibilities and do not matter in this section. Note that the sequence of the limit transitions defined in (2.97) and recommended in [115] is important for the calculation of this ratio: we get zero for this ratio if we take first s → 0 and then τ_q → 0. Substituting now the ratio (2.105) of the susceptibilities into (2.104), one obtains (2.106); see also (A.32) for the second equation. Comparing this result with (2.102), we find that our Fermi-liquid system satisfies the ergodicity property (2.107). This ergodicity property was proved at low temperatures, for which the Landau Fermi-liquid theory can be applied. It is related to the adiabaticity of the velocity of the sound mode s_0, see (2.73) and the discussion after (2.91). Moreover, we obtained the normal-liquid (hydrodynamic) limit from the Fermi-liquid dynamics, and therefore the ergodicity property holds generally for heated Fermi liquids and for normal (classical) ones. Another aspect of the discussed ergodicity property might be its relation to the non-degeneracy of the excitation spectrum in infinite Fermi liquids, apart from the spin degeneracy. We have only two-fold degenerate quasiparticle states, due to the spin degeneracy. However, this does not affect our results concerning the ergodicity relations, because we consider the density-density excitations, which do not disturb the spin degree of freedom. The spin degeneracy produces only an overall factor of two in all susceptibilities, and this does not change the ratios of susceptibilities, which are the only quantities relevant for the ergodicity discussed here.
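The sensitivity to the order of the limits in (2.97) can be illustrated with a toy function that mimics the behavior described above; the function R below is an illustration only, not the actual Fermi-liquid response ratio.

```python
# Toy illustration (not the actual Fermi-liquid functions) of why the order
# of the static limits in (2.97) matters: R(tau_q, s) = s^2 / (s^2 + tau_q^2)
# has no unique limit at the point (0, 0).
def R(tau_q, s):
    return s**2 / (s**2 + tau_q**2)

s_fixed = 1e-3
print(R(1e-9, s_fixed))    # tau_q -> 0 first, then s -> 0: approaches 1
tau_fixed = 1e-3
print(R(tau_fixed, 1e-9))  # s -> 0 first, then tau_q -> 0: approaches 0
```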
The susceptibilities obtained above satisfy the Kubo relations (2.108), see (4.2.32) of [114], with the equality sign in the second relation because of the ergodicity property. To see this, one should take into account that C_P > C_V (or K_ς > K_T), according to (A.22), because all quantities on the r.h.s. of this equation are positive for the stable modes, G_0 = 1 + F_0 > 0. The equality sign in the first relation of (2.108) becomes true in two limit cases: for the temperature T going to 0, or for incompressible matter, when the interaction constant F_0 tends to ∞. In both limit cases one obviously has the equality C_P = C_V, and all susceptibilities are identical [equality signs in both relations of (2.108)]. Note that it is precisely the specific Fermi-liquid expression for the static susceptibility χ_DD(0), see (2.63) with F_0 = F_1 = 0 for the case of the intrinsic response functions, that depends on the sequence of the limit transitions discussed near (2.97), (2.103), (2.105) above and in [115]. For the definition (2.97) of [115], one gets (2.109); in the last equation we also used (2.106). Taking the opposite sequence of the limit transitions, first s → 0, one obtains the result N(T) (2.5) for the isolated susceptibility χ_DD(0), as for the isothermal one χ^T_DD. The difference lies in the T̄^2 corrections. Ignoring them, both versions of the limit transitions coincide, and we arrive at the temperature-independent result discussed in [36]. The ergodicity property (2.107), Kubo's relations (2.108) and the relation (2.102) between the isothermal and adiabatic susceptibilities do not depend on the specific peculiarities of the static limit of the response function discussed here in connection with Fermi liquids.

RELAXATION FUNCTION

Coming back to the dynamical problem, we note that the dissipative part of the response function χ″(ω) is related to the relaxation function Φ″(ω) [114] by (2.110). We follow the notations of [24,29] and omit the index "coll" in this section: for the comparison with the microscopic results of [29] we actually need the relaxation and correlation functions related to the intrinsic response functions. According to (2.62) and (2.57), all these intrinsic functions can be formally obtained from the collective ones by setting the Landau constants F_0 and F_1 to zero. Taking into account also (2.78) and (2.89), one obtains (2.111). This equation can be rewritten, in the same way as (2.91), in terms of the parameters c, Γ and D_T, see (2.90) and (2.86), as (2.112). We used here the Jacobian relations together with (A.8) and (2.7) to transform the coefficient in front of the square brackets in (2.91) into the intrinsic isothermal susceptibility χ^T (2.109) (with F_0 = 0). We also neglected terms of the order of τ_q^2, as in the derivation of (2.91). Equation (2.112) for the relaxation function Φ″(ω) is identical to the imaginary part of the r.h.s. of (28.29) in [116], with the same transparent physical meaning as (2.91). The first term in the square brackets of (2.111) and (2.112) is the first-sound Brillouin component with the poles (2.75), associated with the finite frequencies ±ω^(1)_0 of the time-dependent relaxation-function oscillations and with their damping rate 1/Γ^(1) (±ω_s and 1/γ_s in the notation of [116], respectively; see a more complete discussion of the properties of the time-dependent relaxation function, as a Fourier transform of the relaxation function Φ(ω), in [116]).
The second term in (2.111) and (2.112) describes the purely damped Rayleigh mode corresponding to the overdamped pole ω^(0) (2.74), defined by the thermal diffusivity coefficient D_T ∝ Γ^(0) (or ∝ γ_T in the notation of [116]). As noted in [116], the strength of this peak is a factor 1 − C_V/C_P smaller than that of the two first-sound peaks. According to (A.32), in the zero-temperature limit T → 0 the Rayleigh peak disappears, while the Brillouin peaks become dominant because Γ ∝ Γ^(1) ∝ 1/T^2; see the second equation of (2.90) for the relation of Γ to Γ^(1), and (2.81). Note also that the coefficient in front of the square brackets in (2.111) remains finite in the limit T → 0.

CORRELATION FUNCTION

We also present the correlation function, partly for the sake of completeness and partly to allow for comparisons with calculations of this function in the nuclear SM approach of [28,30], see also [24,29], applied to the collective motion of finite nuclei. Let us use the fluctuation-dissipation theorem [114] to obtain the correlation function ψ″(ω), see (2.113). In the semiclassical limit ℏ → 0 considered here, one has (2.114). According to (2.91) and (2.112), this correlation function can be split into two components as in [28,29], see (2.115). Here, ψ″_0 is the heat-pole part (2.116). This component has no singularity at ω = 0 for τ_q → 0, as seen from (2.90), (2.75) and (2.81) [see (2.78) for χ^(1)″(ω) in the middle of (2.117)]. According to the second equation in (2.116), the heat-pole part ψ″_0(ω) of (2.115) for the intrinsic correlation function can be written as in [28,29], see (2.118)-(2.120). We applied here (2.102) in the first equation of (2.120) and the ergodicity condition (2.107) for the second one. The specific expressions for the quantities Γ^(0), χ^T and χ(0) in the last two equations, (2.119) and (2.120), can be found in (2.74), (2.88) and (2.109). Note that the correlation function (2.118), corresponding to the heat pole, carries a Lorentzian multiplier. This multiplier approaches the δ(ω) function in the hydrodynamic limit τ_q → 0, because Γ_T → 0 according to (2.119) and (2.74) (Γ^(0) → 0), i.e., one obtains (2.121). The relations (2.118), (2.120) and (2.121) confirm the discussion in [29] concerning the heat-pole contribution to the correlation function. The specific property of the Fermi liquid is that this system is exactly ergodic, see (2.107), as used in the second equation of (2.120).

A. Basic definitions

So far we have considered the Fermi-liquid theory for the study of collective excitations, at finite temperatures much smaller than the Fermi energy ε_F, in infinite nuclear matter. This theory can also be helpful for the investigation of the collective modes and transport properties of a heavy heated nucleus, considered as a finite Fermi system, within the macroscopic FLDM [26, 31-33, 37, 38, 46-49]. Such a semiclassical nuclear model, successfully applied earlier to the description of giant multipole resonances [27, 31-33, 46, 48, 49, 120], is expected to be incorporated in practice also as an asymptotic high-temperature limit of the quantum transport theory [29] based on the shell model. The latter takes into account the residual interactions, such as particle collisions, for the study of the low-energy excitations in nuclei.
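The δ(ω)-like limit (2.121) of the heat-pole Lorentzian can be checked numerically: a normalized Lorentzian keeps unit area while its width shrinks. This is a generic sketch, independent of the Fermi-liquid parameters.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the delta(omega)-like limit (2.121): a normalized Lorentzian of
# half-width Gamma_T/2 keeps unit area while Gamma_T -> 0, so the heat-pole
# multiplier sharpens into a delta function.
def lorentzian(w, gamma):
    return (gamma / (2.0 * np.pi)) / (w**2 + (gamma / 2.0)**2)

def area(gamma):
    # split at the peak so each semi-infinite integral is well resolved
    left, _ = quad(lorentzian, -np.inf, 0.0, args=(gamma,))
    right, _ = quad(lorentzian, 0.0, np.inf, args=(gamma,))
    return left + right

for gamma in (1.0, 0.1, 0.01):
    print(f"Gamma_T={gamma:5.2f}: area={area(gamma):.6f}, "
          f"peak height={lorentzian(0.0, gamma):.1f}")
# the area stays 1 while the peak height grows as 2/(pi*Gamma_T)
```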
The latter application of the FLDM is very important for the understanding of dissipative processes such as nuclear fission at finite temperatures (see, e.g., [24,28,29,59,121]). Following [24,29], let us describe the many-body excitations of nuclei in terms of the response to an external perturbation (3.1), where F̂ is some one-body operator (3.2). The linear response function is determined through the Fourier transform F_ω of the time-dependent quantum average ⟨F̂⟩_t by (3.3). Here and below we omit the unperturbed average value F_0 and use the same notation as in [29]. In the following, we shall consider the operators F̂ neglecting the momentum dependence in a phase-space representation, in the linear approximation for an external field V_ext, and writing F̂ = F̂(r). According to (3.3), one can then express χ^coll_FF(ω) explicitly in terms of the Fourier transform δρ_ω(r) of the transition density δρ(r, t) [32], as in (3.4). Note that in a macroscopic picture the transition density is the dynamical part δρ(r, t) of the particle density (3.5), where ρ_qs is the quasistatic equilibrium particle density. We now define F̂ as related to the variation of the self-consistent mean field V in the nuclear Hamiltonian (3.6); the total Hamiltonian Ĥ_tot is given by (3.7). As shown in [24,29], conservation of the nuclear energy ⟨Ĥ⟩ for the Hamiltonian Ĥ (3.6) leads to the equation of motion (3.8), which becomes the secular equation (3.9) in the Fourier representation. The coupling constant k is given by (3.10) in terms of the stiffness coefficient of the internal energy E(Q, S) at constant nuclear entropy S_0 and of the static (isolated) susceptibility χ_FF(0). F_ω and Q_ω are then related to each other by the self-consistency condition (3.11), with Q_ω being the Fourier component of the collective variable Q(t). The ergodicity condition (3.12), with χ^ad_FF being the adiabatic susceptibility, was not actually used in the derivation of the self-consistency condition (3.11) with the coupling constant k from (3.10) in [29], provided the definition of a slow variation of the time-dependent F is employed under certain physical conditions, see (3.7-14), (3.3-15) in [29] and the discussion there. The isolated susceptibility χ_FF(0) is the static limit ω → 0 of the intrinsic response function χ_FF(ω) defined by (3.13). Thus, the intrinsic response function χ_FF(ω) is related to the collective response function χ^coll_FF(ω) through the relation (2.62) [4,29]. Within the FLDM formulated below, it is simpler to derive first the collective response function χ^coll_FF(ω) by making direct use of the definition (3.4). For comparison with the microscopic quantum theory [29] and for the study of the susceptibilities and of the ergodicity property, it is helpful to present the intrinsic response function χ(ω) in terms of the collective response function χ^coll_FF(ω); it is found from (2.62) as (3.14).

B. Fermi-Liquid Droplet Model

In this section we follow [32] for the basic grounds of the FLDM [31,37] for heavy nuclei, taking into account the quasiparticle Landau-Vlasov theory for the collective dynamics of heated Fermi liquids described in [57] and developed in more detail for nuclear matter in the previous sections. The main idea is to apply this semiclassical theory for the distribution function inside the nucleus with macroscopic boundary conditions [26,37] at its moving surface, as for normal liquids. These boundary conditions are used for the solutions of the dynamical collisional Landau-Vlasov equation (2.1), coupled with the thermodynamic relations for the motion in the Fermi-liquid-drop interior.
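As a hedged sketch of the intrinsic/collective connection expressed by (2.62) and (3.14), one may assume the standard self-consistent (RPA-like) form χ_coll = χ/(1 − kχ), consistent with obtaining the intrinsic functions at zero coupling; the model pole parameters below are placeholders, not the FLDM result.

```python
import numpy as np

# Hedged sketch of the intrinsic <-> collective connection of (2.62), (3.14),
# assuming the standard self-consistent (RPA-like) form
#   chi_coll(omega) = chi(omega) / (1 - k*chi(omega)),
# inverted as chi = chi_coll / (1 + k*chi_coll). The coupling constant k and
# the damped-oscillator parameters below are placeholders.
def chi_intrinsic(w, C=1.0, M=1.0, gamma=0.3):
    # damped-oscillator model response, a stand-in for the FLDM result
    return 1.0 / (C - M * w**2 - 1j * gamma * w)

k = -0.2
w = 0.7 + 0.0j
chi = chi_intrinsic(w)
chi_coll = chi / (1.0 - k * chi)
chi_back = chi_coll / (1.0 + k * chi_coll)
print(np.isclose(chi, chi_back))   # True: the two relations are inverses
```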
Our derivations are based on the concept of dynamics linearized near the local equilibrium, instead of the global one considered earlier in [31,32]. This is important for the low-frequency region of nuclear excitations in which we are interested in this review. We shall consider below small isoscalar vibrations of the nuclear surface near a spherical shape, induced by the external field V_ext(t) (3.1). To this end, we define a collective variable Q(t) in the usual way (3.15), where R_0 is the equilibrium radius of the nucleus and Y_L0(r̂) is the spherical harmonic which represents the axially symmetric shapes as a function of the radius-vector angles r̂. For Q(t) we expect the form (3.16) with the same frequency ω as for the external field (3.1).

EQUATIONS OF MOTION INSIDE THE NUCLEUS

The quasiparticle concepts of the Landau Fermi-liquid theory can be justified in the nuclear volume, where the variations of the density ρ(r, t) (2.15) are small. Therefore, in the interior of sufficiently heavy nuclei one may describe the semiclassical phase-space dynamics in terms of the distribution function δf(r, p, t) (2.11), which satisfies the collisional Landau-Vlasov equation (2.1). We recall the equations of Sec. II A, which present the collective dynamics linearized with respect to the local equilibrium (2.9). Our interior nuclear collective dynamics is then described by 6 equations, see (2.1) and (2.17), for the 6 local quantities δρ(r, t), δµ(r, t), u(r, t) and δT(r, t), defined inside the nucleus as for nuclear matter. The conservation equations (2.18), (2.19) [or (2.45) for a potential flow], (2.40) and (2.41) are helpful for finding them in the semiclassical approximation. For the isoscalar multipole vibrations of the Fermi-liquid-drop surface (3.15), we shall look for the solutions of (2.1), (2.17) in terms of a superposition (3.17) of the plane sound waves (2.28) over all angles q̂ of the unit wave vector, with amplitude A_L(q̂). Here L is the multipolarity of the collective vibrations and q̂_z is the projection of the unit vector q̂ = q/q onto the symmetry z-axis. The Fourier amplitudes δf(q, p, ω) are presented as a spherical-harmonic expansion in momentum space (3.18), where the A_l′ are small vibration amplitudes. For such solutions, the velocity field u corresponds to the potential flow (2.42). The relaxation time τ in (2.9) is assumed to be frequency and temperature dependent, as in (2.80). Following [29,31,32], we take the form (3.19) with (3.20). For c_o one has several values: for instance, c_o = 1/(4π^2) according to [34,117], c_o = 1/π^2 follows from [29,36], c_o = 3/(4π^2) from [31], and several numbers near these constants were suggested in [55,56]. Formula (3.20), with c_o = 1/π^2 and a finite cut-off constant c that weakens the dependence on both the frequency ω and the temperature T at large values of these quantities, may in some sense be compared with the expressions suggested in [29] for the imaginary part of the self-energy, to be used in microscopic computations [24,29] [the constant c in (3.20) should not be confused with the sound velocity c used for the description of normal liquids, see, e.g., (2.90) and (2.91)]. In line with these computations, we shall use Γ_o = 33.3 MeV and c = 20 MeV in our FLDM calculations. The value of the parameter c_o = 3/(4π^2) is taken as in [31,32]. The specific value of this parameter is not important for the derivations and results that follow in this section, because we shall apply the temperature-dependent Fermi-liquid theory for low frequencies and large temperatures.
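The generic structure of a collisional width quadratic in ω and T and weakened by a cut-off c, as discussed around (3.19), (3.20), can be sketched as follows. The exact placement of the constants Γ_o and c_o in (3.20) is not reproduced here and is an assumption of this sketch; only the quoted values Γ_o = 33.3 MeV, c = 20 MeV and c_o = 3/(4π²) are taken from the text.

```python
import numpy as np

# Hedged sketch of a frequency- and temperature-dependent collisional width
# of the type discussed around (3.19), (3.20): quadratic growth in omega and
# T, weakened at large arguments by the cut-off constant c. The arrangement
# of the constants below is assumed, not copied from (3.20).
hbar = 1.0  # work in units with hbar = 1 (energies in MeV)

def collisional_width(omega, T, Gamma0=33.3, c=20.0, c_o=3.0 / (4 * np.pi**2)):
    x = omega**2 + c_o * (2.0 * np.pi * T)**2
    return (x / Gamma0) / (1.0 + x / c**2)

def tau(omega, T):
    return hbar / collisional_width(omega, T)

for T in (2.0, 6.0, 10.0):
    print(f"T={T:4.1f} MeV: tau(0,T) = {tau(0.0, T):.3f} hbar/MeV")
# tau falls roughly as 1/T^2 at low T and saturates once x >> c^2
```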
Note that for c → ∞ the expression (3.20) was derived in [31,34,36,55,56].

BOUNDARY CONDITIONS AND COUPLING CONSTANT

The dynamics in the surface layer of the nucleus can be described by means of macroscopic boundary conditions, as in [37], using the effective surface approximation [26,27,38]. For small vibration amplitudes, they read (3.21) and (3.22), where u_r and Π_rr are the radial components of the velocity field u (2.16) and of the momentum flux tensor Π_αβ (2.20), both determined in the nuclear volume; see [38,41-45] for other (mirror and diffuse) boundary conditions applied directly to the distribution function as a solution of the Landau-Vlasov equation. In the case of the potential flow (2.42), we shall use the specific expression (2.44) for the momentum flux tensor, with the shear modulus (λ) and viscosity (ν) coefficients given by (B.12) and (B.13), respectively. The surface pressure P_S, due to the tension forces for the isoscalar motion in symmetric nuclei, is given by (3.23), where α is the surface tension coefficient; see Sec. IV and Appendix D for the isovector asymmetric modes. For the tension coefficient α, we used the expression found in [27] within the ESA. This approximation is based on an expansion of nuclear characteristics, such as the particle density and the total energy, in the small parameter a/R_0 ∼ A^{-1/3}, where a is the diffuseness parameter and R_0 is the mean curvature radius of the nuclear surface [26,27], see also [39,40] and Appendix D. In this way, one derives the nuclear-energy expansion [Weizsäcker formula (D.8), (D.9)], E = E_V + E_S + ..., with the volume part of the energy E_V proportional to the particle number A, the surface energy E_S = b_S A^{2/3} (b_S = 4πr_0^2 α corresponds to the surface tension constant α, b_S ≈ 20 MeV, r_0 = R_0/A^{1/3} ≈ 1.1-1.2 fm), and so on, see [26,27,39,40] and Appendices A.4 (symmetric nuclei) and D (asymmetric ones) for more details (the suffix "+" is omitted here). According to (D.7) of [27,40], one has (3.24). Here and below we neglect the relatively small corrections of the order of A^{-1/3} of the ESA, which are in particular related to the semiclassical corrections and to the external field. The coefficient C appeared earlier in front of the term proportional to (∇ρ_qs(r))^2 in the nuclear energy-density formula [see (D.1)], C = 40-60 MeV·fm^5 [40]. An external pressure P_ext appears in (3.22); it makes the connection (3.25) to the external potential V_ext (3.1) [32,44]. For the density in equilibrium, one has (3.26). This density is expressed in terms of the profile function w(ξ), which decreases sharply from one to zero in a narrow region of the order of the diffuseness parameter a near ξ = 0, as in a step function [w(ξ) → θ(R − r) for a → 0]; b_V ≈ 16 MeV is the separation energy per nucleon [26,27,37,40]. The value of the equilibrium density inside the nucleus, ρ_0 [26], is given by (3.27), where ρ_∞ is the particle density of infinite nuclear matter, ρ_∞ = 3/(4πr_0^3). The surface energy constant b_S and the incompressibility modulus K in (3.27) depend on the condition of constant temperature, constant entropy, or the static limit, as shown in Appendix C. In (3.27) and below, we omit the index X of these quantities which specifies one of these conditions, see Appendix C. For instance, the incompressibility in (3.27) is denoted simply as K = K_tot(ω = 0) = K_ς, as shown above through (2.31), (2.84) and (A.29).
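A small worked example of the leptodermous expansion quoted above shows how the surface term scales against the volume one; only the A-scaling and the quoted constants b_V ≈ 16 MeV, b_S ≈ 20 MeV are used.

```python
# Worked example for the leptodermous (Weizsaecker-type) expansion quoted
# above, E = E_V + E_S + ..., with |E_V| = b_V*A and E_S = b_S*A^(2/3);
# only the A-scaling of the ratio matters here.
b_V, b_S = 16.0, 20.0        # MeV, values quoted in the text

for A in (40, 120, 230):
    ratio = (b_S * A**(2.0 / 3.0)) / (b_V * A)   # E_S/E_V ~ A^(-1/3)
    print(f"A={A:3d}: E_S/E_V = {ratio:.3f}")
# the surface term is a 20-40% correction, decreasing as A^(-1/3),
# smallest for the heaviest nuclei considered in this review
```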
The surface energy constant b_S in (3.27) is also identical to the adiabatic one, like the incompressibility (see Appendix C). The second term in the round brackets of (3.27) is a small correction, proportional to A^{-1/3}, due to the surface tension. The boundary conditions (3.21) and (3.22) were re-derived here from (2.18) and (2.19), with all quantities now extended to the surface region with a sharp coordinate dependence of the particle density, as in the approach of [37]. However, we used the specific properties of the heated Fermi-liquid drop, following the same ESA [26,27,37]. For the derivation of (3.22), e.g., the key equation (A.38) for the Gibbs thermodynamic potential per particle g, which satisfies the thermodynamic relations (A.35), was applied instead of the energy per particle ε of [37]. The result (3.22) has the same form as in [32,37] because in its derivation we simultaneously have to use (A.37) of the temperature-dependent Fermi-liquid theory (with the entropy term T dς), in contrast to the adiabatic equation (17) of [37]; see Appendix A.4 for details. The external field V_ext (3.1) in (3.25) is determined by the operator F̂(r) (3.7), and hence V_ext is concentrated in the surface region of the nucleus. Indeed, for the operator F̂(r) (3.7) in the FLDM, one gets the form (3.28), see (C.15). After substitution of (3.28) into (3.25) we obtain (3.29) and (3.30). Integration by parts in (3.25) and equation (C.4) for the quasistatic coupling constant k^{-1} were used in the derivation of (3.29), (3.30); see the second equation of (C.12), and also the applications to calculations of the collective vibration modes in [142,155].

COLLECTIVE RESPONSE FUNCTION

As shown in [26,27,48], the linearized dynamic part of the nucleonic density δρ(r, t) for the isoscalar modes can be represented as a sum (3.31) of a "volume" and a "surface" term, where δR is the variation of the nuclear radius (3.15), δR = R_0 Q(t) Y_L0(r̂), and w is defined around (3.26) and in Appendix D. For isovector vibration modes of odd multipolarity (dipole), one has to account for center-of-mass conservation [33,52] [see (4.14)]. The upper index "vol" in δρ^vol(r, t) of (3.31) indicates that this dynamical particle-density variation is determined by the equations of motion in the nuclear volume and is given, through (2.15), in terms of the local part δf_l.e.(ε_l.e.) (2.12) of the distribution function δf(r, p, t) (2.11). Solving (2.45) with the first boundary condition (3.21), one gets the potential ϕ in the form (3.32), where j_L(x) is the spherical Bessel function and j′_L(x) = dj_L(x)/dx. From the continuity equation (2.18) with (3.32), one obtains (3.33). Therefore, according to (3.31) and (3.33), one finds (3.34). With this solution, we may now proceed to calculate the response function χ^coll_FF(ω) (3.4), by expressing the integral over the coordinates r for the average F_ω (3.3), in the numerator of (3.4), in terms of our collective variable Q_ω given by (3.16). Indeed, substituting the Fourier transform of (3.34), together with F̂ from (3.28), into (3.4), we obtain (3.35). Using (2.44), (3.23), (3.29) and (3.32), one may write the second boundary condition (3.22), in terms of the collective variable Q(t) and the periodic time dependence of the external field V_ext, in the form of the equation of motion (3.36). We have introduced here several new quantities; one of them is a complex function of ω through (2.67) for the sound velocity s, with (2.59)-(2.61) and (2.69)-(2.71).
In (3.37), Ω is the characteristic frequency of the classical particle rotation in a mean potential well of radius R_0 with energy near ε_F; it serves as a convenient frequency unit. The other quantities are defined by (3.38)-(3.45), and the resulting collective response function is given by (3.46). The poles of this collective response function are determined by the secular equation (3.47), see (3.44) for D_L(ω); its roots ω^(n)_i (3.48) are ordered by increasing magnitude. We shall consider these roots ω^(n)_i in the frequency region ω ≲ Ω, which overlaps the low-frequency energy region discussed below (see also [34,117]). For the above-mentioned frequencies ω and temperatures T, for which the quasiparticle and local-equilibrium concepts of the theory of heated Fermi liquids can be applied, only the lowest solutions have been found in the infinite sequence (3.48). They are associated with i = 0, 1 and 2 for the "first sound" branch n = 1, and with i = 0 for the "Landau-Placzek" branch (n = 0). (Quotation marks indicate that the corresponding names are, in fact, realized only asymptotically in the hydrodynamic limit.) The total response function is the sum of the two branches mentioned above. The response function (3.46) contains all the important information about the excitation modes of the Fermi-liquid drop. One way to extract this information is to analyze the poles (3.48) of the response function and their residues. However, this approach is often inconvenient and becomes too complicated when a few poles are close to each other or lie on (or close to) the imaginary axis of the complex ω-plane. A more transparent way, free of such disadvantages, is to describe the response function in terms of transport coefficients [29].

C. Transport properties for a slow collective motion

The macroscopic response of the nucleus to an external field is a good tool for calculating the transport coefficients. To achieve this goal we follow the lines of [24,29]. For instance, in cranking-model-type approximations one assumes the collective motion to be sufficiently slow, so that the transport coefficients can be evaluated simply in the "zero-frequency" limit. For such a slow collective motion we shall study the transport coefficients within the FLDM, looking at excitation energies smaller than the distance between gross shells [91] in heavy nuclei, ω ≲ Ω [Ω is the particle rotation frequency (3.37)], i.e., less than or of the order of the giant-multipole-resonance energies. Within this slow collective motion (ω ≲ Ω), we shall deal first with the simpler case of the hydrodynamic approximation, applicable for frequencies much smaller than the characteristic "collisional frequency" 1/τ related to the relaxation time τ (3.19), i.e., for ωτ ≪ 1. Using this hydrodynamic expansion of the macroscopic response function (3.46) in the small parameter ωτ, we shall look in Sec. III C 1 for the relation to the "zero frequency limit" discussed in [29]. Another problem of interest in this section concerns the correlation functions, the "heat pole friction" and the ergodicity property, see [24,29]. In Sec. III C 2 we shall consider a smaller frequency region where the nuclear heat pole, analogous to the Landau-Placzek mode of infinite matter, appears within the hydrodynamic approximation. This subsection ends with a more general treatment of the transport coefficients in terms of the parameters of the oscillator response function.
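Since D_L(ω) is transcendental, the roots (3.48) of the secular equation (3.47) are in practice located numerically in the complex ω-plane. The following sketch illustrates the generic strategy with a Newton iteration on a toy denominator; D below is a stand-in, not the FLDM D_L(ω) of (3.44).

```python
import numpy as np

# Sketch of locating complex poles omega_i^(n) of a secular equation like
# (3.47), D_L(omega) = 0, by Newton iteration in the complex omega-plane.
# D below is a toy stand-in (a damped-oscillator denominator); only the
# numerical strategy is illustrated.
def D(w, C=1.0, M=1.0, gamma=0.4):
    return C - M * w**2 - 1j * gamma * w

def newton(f, w0, h=1e-6, tol=1e-12, maxit=100):
    w = w0
    for _ in range(maxit):
        df = (f(w + h) - f(w - h)) / (2.0 * h)   # numerical derivative
        step = f(w) / df
        w -= step
        if abs(step) < tol:
            break
    return w

# start near the expected underdamped pole and near its mirror image
for w0 in (1.0 - 0.1j, -1.0 - 0.1j):
    root = newton(D, w0)
    print(f"pole: {root:.6f},  |D| = {abs(D(root)):.2e}")
```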
The method of [24,29] can be applied for the low-frequency excitations, ω ≲ Ω, but also in the case when the hydrodynamic approach fails, i.e., for ωτ ≳ 1. Following [24,29], we shall study the "intrinsic" response function χ_FF(ω), related to the collective one χ^coll_FF(ω) (3.4) by the relation (3.14). The collective response function χ^coll_FF(ω) (3.46) in the FLDM was derived directly from (3.4) in terms of the solution (3.31) for the transition density δρ. The "intrinsic" response function can then be obtained with the help of (3.14). This route is more convenient in the FLDM than the opposite one usually used in microscopic quantum calculations based on the shell model [29]. By expanding the denominator D_L(ω) (3.44) of the response function χ^coll_FF(ω) (3.46) up to fourth-order terms in the small parameter ωτ in the low-frequency region ω/Ω ≪ 1, and then using (3.14), one gets the response function in the F mode in the form (3.50), where M, C and γ can be defined as the Q-mode mass, stiffness and friction coefficients. Here and below we omit the lower index "FF" in the FF-response functions wherever this does not lead to misunderstanding. Note that the formulas which we derive here and below for the "intrinsic" response function χ(ω) can also be applied to the collective response function χ^coll(ω) if we simply omit the index "in" in C_in and in the functions of C_in denoted by the same index (except for some approximations based on the specific properties of C_in compared to C, noted below where necessary). Another argument for presenting our results in terms of the "intrinsic" response functions is to compare them more directly with those discussed in [29] in connection with correlation functions and ergodicity. For the inertia M and the stiffness C, we obtain the parameters of the classical hydrodynamic model, namely the inertia of irrotational flow (3.51) and the stiffness coefficient of the surface energy (3.52), see (3.40). (We introduced here the more traditional notations labeled by the index "LD", which refers to the usual liquid-drop model of irrotational flow.) For the friction γ (3.53), we arrive at the temperature dependence typical for hydrodynamics. Here, ν_LD is the classical hydrodynamic limit ν^(1) (2.93) of the viscosity coefficient, and τ is the relaxation time (3.19), (3.20) at ω = 0, see (3.54). However, our result (3.53) for the classical liquid-drop model of irrotational flow, if only extended to include the two-body viscosity, differs from the one found in [118,122] by an additional factor of (2L + 1)/L, see [32,123]. We neglected the fourth-order terms in T̄ (Sec. II C 2) in (3.51), (3.52) and (3.53), because of the presence of more important lower-order terms there. For the coefficient Υ in the term proportional to 1/ω in (3.50), one obtains (3.55), with the viscosity component ν^(2) given by (2.94), up to small temperature corrections of the next order. Both equations in (3.55) show the leading term in the temperature expansion of the coefficient Υ in front of 1/ω in (3.50). Note that it appears at order T̄^4 and cannot be neglected for sufficiently small frequencies ω. As seen from (3.50), considered for the case of the collective response, i.e., with the index "in" omitted in C_in of (3.50), for sufficiently small frequencies ω there is a pole approximately equal to iΥ/2. Therefore, the physical meaning of the parameter Υ (3.55) is the "width" of the overdamped pole in the asymptotic collective response function χ^coll(ω) at sufficiently low frequencies.
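The idea of the "zero frequency limit" (3.72)-(3.74) can be illustrated by extracting M, C and γ from the value and low-order derivatives of a smooth model response at ω = 0. The oscillator convention χ(ω) = 1/(C − Mω² − iγω) assumed below is a sketch choice; the exact definitions of [29] are not reproduced.

```python
import numpy as np

# Hedged sketch of the "zero frequency limit" idea behind (3.72)-(3.74):
# for a smooth, oscillator-like intrinsic response near omega = 0,
#   chi(omega) = 1 / (C - M*omega^2 - i*gamma*omega)   (assumed convention),
# C, gamma and M follow from chi(0) and its low-order derivatives at 0.
C_true, M_true, g_true = 2.0, 1.5, 0.6

def chi(w):
    return 1.0 / (C_true - M_true * w**2 - 1j * g_true * w)

h = 1e-4
c0 = chi(0.0)
d1 = (chi(h) - chi(-h)) / (2 * h)           # first derivative at 0
d2 = (chi(h) - 2 * c0 + chi(-h)) / h**2     # second derivative at 0

C_fit = 1.0 / c0.real                       # chi(0) = 1/C
g_fit = d1.imag * C_fit**2                  # Im chi ~ gamma*omega/C^2
M_fit = 0.5 * d2.real * C_fit**2 + g_fit**2 / C_fit
print(np.round([C_fit, g_fit, M_fit], 6))   # recovers [2.0, 0.6, 1.5]
```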
As shown below, this pole, and the corresponding pole of the intrinsic response function (3.50), are overdamped. This pole is similar to the Landau-Placzek pole in infinite nuclear matter and to the nuclear heat pole found in [29]; see the more detailed discussion in Sec. III C 2. The "width" Υ (3.55) of such a "heat pole" is inversely proportional to the relaxation time τ and increases with temperature and particle number. Note also that this "width" is proportional to the component ν^(2) (2.94) of the viscosity discussed in Sec. II D 3. This is somewhat similar to the viscous part of the standard expression (2.96) for the first-sound "width" Γ in terms of the first component ν^(1) of the viscosity coefficient (2.92), Γ ∝ ν^(1). However, (3.55) contains the surface energy constant b_S and the particle-number factor A^{1/3}, which are both specific parameters of a finite Fermi-liquid drop. Thus, the denominator of the hydrodynamic response function (3.50) contains two friction terms: one proportional to the friction coefficient γ, γ ∝ ν^(1), and another proportional to Υ (Υ ∝ ν^(2)). In the next two Secs. III C 1 and III C 2 we shall consider the two limiting cases, neglecting first the heat-pole Υ term for sufficiently large frequencies ω within the hydrodynamic approximation ωτ ≪ 1, and then the γ friction term for smaller frequencies, where the heat pole dominates.

HYDRODYNAMIC SOUND RESPONSE

For sufficiently large frequencies ω within the frequent-collision (hydrodynamic) regime, one finds the first-sound (i = 1) solution s (2.73). In this case, one can neglect the last term, proportional to 1/ω, compared to the friction term in the denominator of the asymptotic expression (3.50); this defines the condition (3.56). The critical frequency ω_crit is defined in the second equation of (3.56) as the frequency at which these two compared terms coincide. The critical value ω_crit τ increases with temperature as T̄^2 and does not depend on the particle number for the first-sound mode n = 1. It equals zero for the Landau-Placzek branch n = 0, according to (3.55) for Υ. For the n = 1 mode, ω_crit τ is small for all temperatures T ≲ 10 MeV, ω_crit τ ≈ 0.6 T̄^2 ≪ 1, at the typical values of the parameters ε_F = 40 MeV and r_0 = 1.2 fm, and for a value C of the Skyrme forces considered in [27], C = 80 MeV·fm^5, which is somewhat larger than those of [40] (Sec. IV and Appendix D) in the ESA, where A^{-1/3} is assumed to be small. Here and below we took L = 2 for the quadrupole vibrations, and F_0 = −0.2, F_1 = −0.6 for the Landau constants, which are close to the values commonly used for the calculations of nuclear giant multipole resonances [124,125] and a little more "realistic" than in [31,32]. For frequencies ω within the condition (3.56), we arrive at the oscillator-like response function (3.57), with all hydrodynamic transport coefficients given in (3.51), (3.52) and (3.53). In the middle of (3.57), χ_osc(ω) is the "intrinsic" oscillator response function, which describes the dynamics in terms of the Q(t) variable for a collective harmonic-oscillator potential. As seen now, the constants M, C and γ were naturally referred to above as transport coefficients. The collective response function χ^coll(ω) within the approximation (3.56) is the same (3.57), but with the index "in" omitted in the stiffness coefficient, as noted above.
This remark also applies to the oscillator QQ-response function χ^coll_osc(ω) (3.58), useful for the following analysis of the response functions in terms of the transport coefficients. We obtain the QQ-response functions from the FF ones, for instance from χ(ω) (3.57), simply by multiplying by the constant k^2, because of the self-consistency condition (3.11). Note also that the condition (3.56) for the Landau-Placzek branch of the solutions for the sound velocity s [see (2.74)] is always fulfilled for ωτ ≪ 1. In order to compare our results with those of previous calculations [29], we introduce the dimensionless quantity (3.60). This hydrodynamic effective friction η mainly decreases with temperature as 1/T^2. For large temperatures and a finite cut-off parameter c, the dimensionless friction parameter η (3.60) approaches a constant. We have two kinds of poles of the response function (3.57) as roots of the quadratic polynomial in the denominator: the overdamped poles (3.61), see [29], and the underdamped ones (3.62). These solutions depend on the two parameters (3.63). Note also that the two hydrodynamic poles in (3.57) approximately coincide for both branches n = 0 and 1 of the solutions of the dispersion equation (2.67) for the velocity s. The difference between these two modes is related only to the last term, proportional to Υ, in the brackets on the r.h.s. of (3.50), and it was neglected under the condition (3.56). For the real and imaginary parts of the response function χ(ω) (3.57), with the help of (3.63) for the overdamped case (3.61), for instance, one gets for completeness (3.64) and (3.65). For the simpler case of the collective response in the FLDM, we omit the index "in" in the formulas of this section [see the comment after (3.50)]. From (3.59) for η, with the parameters used above for the estimate of ω_crit τ in (3.56) and the "standard" Γ_0 = 33.3 MeV [29], one has overdamped motion, η > 1, for all temperatures T ≲ 10 MeV and particle numbers A ≲ 230, as seen from Fig. 1. Moreover, for such temperatures and particle numbers, one can expand the "widths" Γ_± in the small parameter 1/(4η^2), see (3.66). Fig. 1 shows that this expansion parameter 1/(4η^2) in (3.66) is indeed small for all considered temperatures. Using (3.51), (3.52), (3.53) for the transport coefficients and the definition (3.54) of τ, as in the derivation of (3.59), (3.60), one obtains from (3.66) the expressions (3.67) and (3.68). One of the "widths", Γ_+ (3.67), is mainly a decreasing function of temperature, Γ_+ ∝ τ ∝ 1/T^2 at low temperatures. This is typical for hydrodynamic modes such as the first-sound vibrations in normal liquids, in contrast to the other "width" Γ_− (3.68), Γ_− ∝ 1/τ ∝ T^2, similar to the zero-sound damping as concerns the τ-dependence. Both become approximately constant at high temperatures, due to the cut-off factor c. Note that Γ_+ (3.67) decreases with particle number as A^{-2/3}, while Γ_− ∝ A^{-1/3}, see (3.68). The different A-dependence of the "widths" Γ_− (3.68) and Γ_+ (3.67) cannot, nevertheless, be attributed, even formally, to the so-called "one-body" and "two-body" dissipation, respectively. (Collisions with the potential walls, without the integral collision term in the Landau-Vlasov equation but with mirror or diffuse boundary conditions, might lead to "widths" proportional to Ω of (3.37), Ω ∝ A^{-1/3}, as in equation (49) of [44] or through the wall formula [126-128].) They both depend on the collisional relaxation time τ and correspond to "two-body" dissipation.
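Assuming the usual oscillator conventions, with η = γ/(2√(MC)), the overdamped widths Γ_± and their leading 1/(4η²) estimates discussed around (3.61)-(3.68) can be checked numerically; the parameter values below are placeholders.

```python
import numpy as np

# Sketch of the overdamped-pole analysis around (3.61)-(3.68), assuming the
# oscillator conventions: poles of 1/(C - M*w^2 - i*gamma*w) and the
# effective damping eta = gamma / (2*sqrt(M*C)). For eta > 1 both poles lie
# on the negative imaginary axis; expanding in 1/(4*eta^2) gives one small
# width ~ 2C/gamma and one large width ~ 2*gamma/M - 2C/gamma.
M, C, gamma = 1.0, 1.0, 4.0
eta = gamma / (2.0 * np.sqrt(M * C))                 # = 2.0, overdamped

disc = np.sqrt((gamma / (2 * M))**2 - C / M + 0j)    # real for eta > 1
poles = (-1j * gamma / (2 * M) + 1j * disc,
         -1j * gamma / (2 * M) - 1j * disc)
widths = sorted(-2 * p.imag for p in poles)

print(f"eta = {eta:.2f}; exact widths: {widths[0]:.4f}, {widths[1]:.4f}")
print(f"leading estimates: {2*C/gamma:.4f}, {2*gamma/M - 2*C/gamma:.4f}")
```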
The latter means here the collisional damping of the viscous Fermi liquid, as in [31,32]. The physical source of the damping is in both cases the same: collisions of particles in the nuclear volume, due to the integral collision term (2.9) with the relaxation time τ. We would like to emphasize, however, that the collisional Γ_− (3.68) depends on the surface energy constant b_S and disappears proportionally to A^{-1/3} with increasing particle number A, like Ω of (3.37), because we took into account the finite size of the system through the boundary conditions (3.21), (3.22). An additional overdamped pole with the "width" Γ_− (3.68) appears because of the finiteness of the system and the collisions inside the nucleus. This is rather in contrast to the wall friction [127,128], which comes from the collisions with only the walls of the potential well. We shall now come back to the intrinsic response function χ(ω) (3.57). For the "intrinsic stiffness" C_in, one has (3.69). In the last equation we neglected the small parameter kC (3.70) for the typical values of the parameters mentioned above, before (3.57). We also neglected the small temperature corrections of (A.29) to the incompressibility modulus K, K = K_ς, in the second equation of (3.70). Using the smallness of the parameter kC (3.70), we shall now obtain the relation of the coupling constant k^{-1} to the isolated susceptibility χ(0) and the stiffness C, as in equation (3.1.26) of [29]. For this purpose, we take the limit ω → 0 in (3.57) for the "intrinsic" response function χ(ω) and then expand the obtained expression for χ(0) in powers of the small parameter kC (3.70) up to second-order terms. As a result, we arrive at the relation (3.71). The liquid-drop transport coefficients M (3.51), C (3.52) and γ (3.53) can now be compared with those in the "zero frequency limit", M(0), C(0) and γ(0), respectively, defined by equations (3.1.84)-(3.1.86) in [29]; see (3.72)-(3.74). Expanding χ(ω) near zero frequency ω = 0 in the secular equation (3.9), see [29], we assumed here, and will show below, that the "intrinsic" response function χ(ω) is a smooth function of ω for small frequencies within the hydrodynamic condition (3.56). For the definition of the transport coefficients in "the zero frequency limit" (3.72)-(3.74), we also needed to know the properties of the "intrinsic" response function in the secular equation (3.9) concerning its pole structure. For the "intrinsic" case, the quantity η_in, see (3.63), plays a role similar to that of the effective damping η (3.59) for the collective motion. Moreover, η_in determines the correction to the liquid-drop mass parameter M in (3.74) for the inertia M(0) in "the zero frequency limit". Due to the smallness of the parameter kC (3.70), η_in is much smaller than η (3.60) for large particle numbers A ≈ 200-230, as seen from Fig. 1 and (3.75). The "underdamped" poles ω^±_in approach the real axis at a large distance from the imaginary one, compared to the liquid-drop frequency ϖ_LD = (C/M)^{1/2}, |ω^±_in| ≫ ω_LD. They have a small "width" 2ω_in η_in = γ/M ∝ 1/T^2 for our choice of large temperatures (T ≳ 5 MeV); see (3.63), (3.53), (3.51) and (3.54). For this reason, in the "underdamped" case of small η^2_in and low frequencies ω ≲ ω_LD, the intrinsic response function χ(ω) is a smooth function of ω. For smaller temperatures T ≲ 4 MeV and for our parameters used in (3.75), one has the "overdamped" poles (3.61) of the intrinsic response function χ(ω), η_in > 1. For such temperatures, η_in (3.75) is sufficiently large.
We can therefore use the expansion of the "widths" Γ^in_± of (3.61) in the small parameter (Mω_in/γ)^2 = (4η^2_in)^{-1} (see Fig. 1); this yields (3.77). The "intrinsic width" Γ^in_+, see (3.77), is mainly larger than Γ^in_−. They become comparable with increasing temperature. Since the Γ^in_± are large compared to the characteristic collisional frequency 1/τ (for the same choice of parameters), both poles are far away from zero; see more discussion of the "intrinsic widths" below, in connection with the heat pole, in Sec. III C 2. Therefore, the intrinsic response function χ(ω) (3.57) is a smooth function of ω in the "overdamped" case of sufficiently large η_in, used in the derivation of (3.77), just as in the "underdamped" one discussed above. Thus, we expect that the "zero frequency limit", based on the expansion of the intrinsic response function χ(ω), is a good approximation for low frequencies larger than the critical value ω_crit within the hydrodynamic condition (3.56). It would be interesting now to obtain the "overdamped" correlation function ψ″(ω), determined by the imaginary part of the corresponding response function χ″(ω) (3.65) through the fluctuation-dissipation theorem, see (2.113) and (2.114). In the semiclassical approximation, using (2.114) and (3.65) for the first-sound mode, one writes (3.80). Using the approximations as in (3.77) and (3.69), one gets from (3.80) the result (3.81). The second Lorentzian in the middle of (3.81) is negligibly small compared to the first one, because of (3.82) and because 4η^2_in is large in these derivations; see the discussion between (3.77) and (3.78). It seems that we are left with the Lorentzian term of this correlation function on the very right of (3.81), which looks like the Landau-Placzek heat-pole correlation function (2.118) and equation (4.3.30) of [29], with obvious constants ψ^(0) and Γ_T. However, we cannot identify the correlation function ψ″_s(ω) (3.81) with the heat-pole one. The "width" Γ^in_− of (3.77) in (3.81) is finite and large compared to the characteristic collision frequency 1/τ, which, in turn, is much larger than the considered frequencies ω, as shown above, see (3.82). The limit Γ^in_− → 0 at fixed finite ω, and the corresponding δ(ω) function which would show the relation to the heat-pole correlation function, do not make sense within the approximation (3.56) used in (3.81). In particular, the response (3.57) and correlation (3.81) functions were derived for sufficiently large frequencies ω ≫ ω_crit, due to the condition (3.56). Note also that the inertia parameter M (3.74) is not zero, as it should be for the heat pole.

HYDRODYNAMIC CORRELATIONS AND HEAT POLE

For lower frequencies ω, smaller than the critical value ω_crit, we should take into account the last, additional term in the denominator of (3.50) for the response function. For such small frequencies, this friction term, proportional to Υ (3.55), becomes dominant compared to the liquid-drop one, γ = γ_LD. Within this approximation, we shall derive the heat-pole response and correlation functions, and relate the Υ (3.55) of (3.50) to the corresponding heat-pole friction. This subsection ends with a discussion of the nuclear ergodicity. For smaller frequencies (3.83) [see the second equation in (3.56) for the critical frequency ω_crit], one can neglect the friction term iγω in the denominator of the asymptotic response function (3.50), compared to the last one, γω ≪ ΥC/2ω. The mass term there is even smaller than the friction one for frequencies ω ≲ ω_crit for the considered parameters and will be neglected too, Mω^2 ≪ γω.
In this approximation, one obtains from (3.50) the heat-pole response function χ(ω) ≈ χ_hp(ω) (3.84), with its pole given by (3.85); this is similar to (2.85), (2.87) for infinite nuclear matter. In these derivations, we used the specific properties of the intrinsic response functions, in which we are now interested for the analysis of the correlation functions and ergodicity conditions [29]. In (3.84), and in all approximate equations below in this subsection, we also applied the expansion in the small parameter kC (3.70), as in (3.69). The real and imaginary parts of the response function χ_hp(ω) (3.84) are given, respectively, by (3.86) and (3.87), up to small kC corrections, see (3.70). We shall now derive the correlation function ψ″_hp(ω) by applying the fluctuation-dissipation theorem (2.114) to the "intrinsic" response function χ″_hp(ω) (3.87) obtained in the asymptotic limit (3.83). From (2.114) and (3.87) one gets (3.88). This correlation function looks like the Landau-Placzek peak for the infinite Fermi liquid, see (2.118); written as (3.89), it is identical to the r.h.s. of equation (4.3.30) in [29], but with the specific parameters (3.90). The "width" Γ_hp in (3.90) is much smaller than the characteristic collision frequency 1/τ, see (3.55) and (3.70); this is expressed by (3.91). The relationship (3.91) for Γ_hp is in contrast to the one (3.82) for the "intrinsic overdamped widths" Γ^in_± (3.77), which are much larger than the collision frequency 1/τ for the same selected parameters, at all temperatures T ≲ 10 MeV and particle numbers A = 200-230. For the following discussion of the friction coefficients, we now compare the "width" Γ_hp with the widths obtained above, see (3.92). For all temperatures and particle numbers which we discuss here, this ratio is small, see (3.93). In the second equation of (3.93) we used the same values of the parameters as in (3.70). Note that the "width" of the Landau-Placzek peak Γ^(0), Γ^(0) ∼ τ_q^2/τ ≪ 1/τ for τ_q ≪ 1, is similar to Γ_hp and unlike the Γ^in_± (3.77) entering (3.81) for the hydrodynamic sound correlation function. In contrast to the hydrodynamic sound case, see (3.81), we may consider the correlation-function approximation (3.88) in the zero-width limit Γ_hp → 0 (or in the zero-temperature limit T → 0), taking any small but finite frequency ω under the condition (3.83). Therefore, for such frequencies ω, the correlation function (3.88) can be approximated by a δ(ω)-like function, as in (2.121) for the correlation function (2.118) of the infinite Fermi liquid. Because of the very close analogy of the equation for the correlation function ψ″_hp(ω) (3.89) with the Landau-Placzek peak for infinite Fermi liquids in the hydrodynamic limit, see (2.118), and with equation (4.3.30) of [29], we associate the pole (3.85), and the corresponding asymptotics of the response (3.84) and correlation (3.88) functions, with the "heat pole". As in the case of infinite nuclear matter, this pole of the finite Fermi-liquid drop is situated at zero frequency, ω = 0. Moreover, both are called "heat poles" because they disappear in the zero-temperature limit T → 0, in line with the discussion near equation (4.3.30) of [29] and after it. In the case of infinite matter, we can see this property from (2.116), because C_V/C_P → 1 [or from (2.120) for ψ^(0) in (2.118)]. For the finite Fermi-liquid drop, the reason is that Υ → 0 in the zero-temperature limit T → 0, see (3.55), and only the hydrodynamic sound condition (3.56) is then satisfied, with the response function (3.57) and the correlation function (3.81) in which the heat pole is absent, see the discussion after (3.81).
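A hedged sketch of the heat-pole structure: a relaxator (Debye) form for χ_hp, combined with the semiclassical fluctuation-dissipation theorem ψ″ = (2T/ω)χ″, yields a Lorentzian of half-width Γ_hp/2 whose integrated weight is independent of Γ_hp, which is the δ(ω)-like behavior invoked above. The values of T, χ_st and the relaxator form itself are assumptions of this sketch, not the FLDM expressions (3.84)-(3.90).

```python
import numpy as np
from scipy.integrate import quad

T, chi_st = 3.0, 0.8   # placeholder temperature and static susceptibility

def psi2(w, Gamma):
    # (2T/omega)*Im chi for the assumed relaxator chi = chi_st/(1 - 2i*w/Gamma),
    # written in closed form to avoid the removable 0/0 at w = 0
    return 2.0 * T * chi_st * (2.0 / Gamma) / (1.0 + (2.0 * w / Gamma)**2)

for Gamma in (1.0, 0.1):
    left, _ = quad(psi2, -np.inf, 0.0, args=(Gamma,))
    right, _ = quad(psi2, 0.0, np.inf, args=(Gamma,))
    print(f"Gamma_hp = {Gamma}: area/(2*pi) = {(left + right) / (2*np.pi):.4f}")
# both print 2.4000 = T*chi_st: the peak narrows but its weight is conserved
```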
To obtain more explicit expressions for ψ^(0) and Γ_T of (3.90), we now use (3.30), (3.55), (3.53) and (3.52) for the coupling constant k^{-1}, the parameter Υ, the friction γ and the stiffness C(0), respectively. With these expressions, one obtains approximately from (3.90) the results (3.94) and (3.95), see (2.75). The other approximations are the same as in the derivation of (3.70) for kC, used in (3.95) through (3.85). The temperature dependences of the "intrinsic overdamped width" Γ^in_− (3.78) and of the "heat pole" one, Γ_hp (3.95), (3.91), are different, namely Γ_hp ∝ T̄^4/τ(0, T) and Γ^in_− ∝ 1/τ(0, T), where the temperature dependence of the relaxation time τ(0, T) can be found in (3.54). Both "widths" are growing functions of temperature, as in [29], but with different powers. The dependence on the particle number A is completely different for these two compared poles: the "width" Γ^in_− is a growing function of A, Γ^in_− ∝ A^{1/3}, while Γ_hp is a decreasing one, Γ_hp ∝ A^{-1/3}. As noted above, like the Landau-Placzek peak [see (2.118), (2.119) and (2.120)], the heat pole with the "width" Γ_hp (3.95) exists only in heated systems, with a temperature T ≠ 0. However, in contrast to the result (2.119), (2.88) for the Γ_T of the heat pole in the infinite Fermi liquid, the heat-pole "width" Γ_T (3.95) disappears with increasing particle number A, i.e., Γ_T → 0 for A → ∞. This allows us to emphasize also that this kind of heat pole appears only in a finite Fermi system. The correlation function ψ″_hp(ω) (3.89) was obtained approximately near the pole −iΓ_hp/2, see (3.85). The corresponding QQ-correlation function ψ″_hp,QQ(ω) = k^2 ψ″_hp(ω) is identical to the oscillator correlation function ψ″_osc(ω), defined through the imaginary part of χ^hp_osc(ω) (3.96), i.e., of χ_osc(ω) from the second equation of (3.57) at zero mass parameter, M = 0, see [29]; the corresponding heat-pole friction γ_hp is given by (3.97)-(3.99). The quantity (3.100) has a clear physical meaning as the ratio of the hydrodynamic friction coefficient γ (3.53) to the heat-pole one γ_hp (3.99). The smallness of this ratio, shown above, implies that the heat-pole friction γ_hp is much larger than the typical hydrodynamic one γ; see more discussion concerning this comparison of different friction coefficients below. As seen from the inequalities (3.83), with the definition of ω_crit from (3.56), the heat pole appears only in the "sound" branch n = 1 and does not exist for the Landau-Placzek branch of the solutions of (2.67) for s. We realize this immediately by noting that the width parameter Υ (3.55) is proportional to s_0, which is finite for the n = 1 case and zero for the n = 0 one, see (2.75) and (2.74), respectively. As shown in [29], for sufficiently small Γ_T, the coefficient ψ^(0) in front of the Lorentzian-like correlation function, see (2.118) and (3.89), is related to the difference of susceptibilities by (3.101). Neglecting the small difference χ^T − χ^ad according to (C.19), (C.14), see Appendix C and [29] for details, one notes that the ergodicity condition (3.12) means smallness of (1/T)ψ^(0) compared to the stiffness C. However, from (3.90), (3.94) one gets a large quantity, (1/T)ψ^(0)_hp/C ≈ 1/(kC) ≫ 1. Note that in the derivations of (3.88), (3.90) we took first ω → 0 (small ωτ) at finite Γ^hp_T, see also (3.55) for Υ in the second equation of (3.90), and then considered Γ^hp_T → 0 (the small-temperature limit T → 0). We emphasize that the limits ω → 0 and Γ^hp_T → 0 do not commute, i.e., the result of the correlation-function calculation depends on the order in which these two operations are carried out, as for infinite Fermi-liquid matter [115].
This is obvious if we take into account that the last, "heat pole" term in the denominator of (3.50) appears at the next (T̄^4) order in T̄ and is proportional to 1/(ωτ), in contrast to the other classical (sound) hydrodynamic terms; i.e., this Υ-term vanishes for Γ_T → 0 (T → 0). The relation (3.101) was derived in [29] using the opposite sequence of the above-mentioned limits, namely first Γ_T → 0 and then ω → 0, in line with the recommendations of Forster [115] [first Γ_T ∝ q^2 → 0, or τ_q → 0, see (2.119), (2.74), and then ω → 0 (s → 0) for the infinite Fermi liquid]. In this case there is no contradiction with ergodicity for the finite Fermi-liquid drop. In the limit Γ^hp_T → 0 (T → 0) at a finite value of ω, the condition (3.56) is fulfilled instead of (3.83), and the "heat pole" term, proportional to 1/ω, in the denominator of the response function (3.50) disappears within the ESA used in the FLDM, as noted above. Formally, this means that one can neglect ψ^(0)_hp in (3.89), and we have small quantities on both sides of (3.101), taking into account the ergodicity condition (3.12) derived in Appendix C. It is not obvious that the relation (3.101) can also be derived for the opposite sequence of the above-mentioned limit transitions, contrary to Forster's recommendations, i.e., taking first the limit ω → 0 at finite Γ_T and then considering the limit Γ_T → 0. In particular, (3.81) for the overdamped correlation function was obtained for this last choice of the limit sequence. Equation (3.81) does also have a Lorentzian-like shape, but it is not related to the "heat pole", because the coefficient in front of the Lorentzian is not equal to χ^T − χ(0). This equation was derived only for Γ^in_− large compared to 1/τ, see (3.82), and is valid only under these conditions and within the inequalities (3.56). There is no δ(ω)-like peak in (3.81) for any of the variations of the parameters for which this equation was derived. An overdamped shape of the correlation function like (2.118) does not yet mean that this function is the "heat pole" one, though the converse statement is true. We point out again that (1/T)ψ^(0) of (3.81) is indeed large compared to the stiffness C, (1/T)ψ^(0) = 1/k, and within the hydrodynamic conditions (3.56) it is the ergodicity condition (3.12) that is fulfilled, rather than the relation (3.101) between (1/T)ψ^(0) and χ^T − χ(0). Following Forster's recommendations [115], i.e., taking first the limit of small Γ_T (Γ_T → 0), or of small temperature (T → 0), one gets the typical hydrodynamic response function (3.57) without "heat pole" terms. The subsequent limit ω → 0 (ωτ → 0) in (3.57) leads to the finite value (3.102), up to relatively small corrections of higher order in the parameter kC (3.70). This is in line with Appendix C, and the ergodicity condition (3.12) is fulfilled for the finite Fermi-liquid drop within the ESA. Note that we accounted above for the kC correction at second order in (3.102). In this way, we obtained the relation (3.71) between the coupling constant k^{-1}, the isolated susceptibility χ(0) and the stiffness C, provided that the condition (3.56) holds; see also (3.10) with the stiffness C(0) = C of the "zero frequency limit". Note also that the "heat pole" response function χ_hp(ω) (3.84) has a sharp peak near zero frequency, and hence is not smooth, i.e., "the zero frequency limit" for the transport coefficients cannot be applied in the case (3.83).
Thus, all properties of the finite Fermi liquids within the ESA concerning the ergodicity relation (3.12), as applied to (3.101), are quite similar to those of infinite nuclear matter [apart from the expressions (3.68), Γ_− ∝ b_S/A^{1/3}, and (3.91), Γ_hp ∝ 1/(b_S A^{1/3}), which themselves depend on b_S]. Our study of these properties is helpful for understanding the microscopic shell-model approach [24,29,59]. We point out that the strength function corresponding to the asymptotics (3.50) is a curve with two maxima, related to the "heat pole" and to the standard (sound) hydrodynamic modes. However, for intermediate frequencies ω of the order of ω_crit in the low-frequency region, see (3.56) and (3.83), the asymptotic response function (3.50) cannot be represented exactly as a sum of two oscillator response functions like (3.58). For instance, in this case we have the transition from the "heat pole" mode to the sound hydrodynamic peak, and the response function (3.50) is more complex. A similar problem arises when the hydrodynamic condition ωτ ≪ 1 becomes invalid. For larger frequencies, i.e., for ωτ larger than or of the order of 1, but still within the low frequencies ω smaller than Ω, see (3.37), the equation for the collective motion becomes more complicated. Generally speaking, it does not reduce to a second-order differential equation with constant coefficients, as it does in the zero-frequency limit of the hydrodynamic approach (3.56). As shown and applied in [28,29] (see also [32] in connection with the FLDM), the problem of defining the transport coefficients can nevertheless be overcome by a procedure of fitting an oscillator response function (3.58) to selected peaks of the collective response function χ^coll_QQ(ω) of (3.46), with respect to the parameters M, C and γ. Here such a fitting procedure would also be adequate for the temperatures mentioned above, especially because our response function (3.46) has several poles (3.48), for instance with i = 0, 1, 2 for n = 1, and i = 0 for n = 0. Some of them are overdamped poles close to the imaginary axis in the complex ω-plane. This procedure can be done analytically in the zero-frequency limit, provided that the response function (3.46) can be approximated by oscillator response functions as in (3.58), or by χ^hp_osc(ω) in (3.96). In this case we have an analytical fit of the collective response function (3.46) by these oscillator response functions, and we obtain the expressions (3.72)-(3.74) for the transport coefficients in the zero-frequency limit (3.56) [or (3.97) for the heat-pole friction in the smaller frequency region (3.83)]. For larger frequencies, we need to carry out the fitting procedure numerically. We should also comment a little more on the definition of the transport coefficients in the zero-frequency limit, in connection with the one through the fitting procedure, to avoid possible misunderstanding. The transport coefficients in the zero-frequency limit can be related to the "intrinsic" response function and its derivatives taken at ω → 0 [24,29]; see (3.72), (3.73) and (3.74).
For the application of this method of transport-coefficient calculation, we must be careful in the case when there are several peaks in the strength function but we need to obtain the transport coefficients for, say, the second or a higher peak. In these cases the zero frequency limit may also be applied, but we first have to remove all lower peaks in the collective response function and only then take the corresponding "intrinsic" response function and its derivatives without these lower peaks. In practical applications, this limit for the transport coefficients obtained in such a way is close to the same limit for the oscillator response function which fits the selected peak. The latter could also be the second or a higher one. We shall now consider the hydrodynamic approximation ωτ ≪ 1 for the response function, see (3.50), for the two cases: the sound response function (3.57) for the sound condition (3.56) and the heat-pole response function (3.84) for the heat-pole condition (3.83). The corresponding correlation functions are the sound correlation function (3.81) and the heat-pole correlation function (3.88). These two different approximations are realized for different sequences of the limit transitions, i.e., the approximate result depends on the order in which they are applied. The heat-pole case (3.83) is realized when we take first the limit ω → 0 for a finite width Γ_T, and then Γ_T → 0 (or the zero temperature limit T → 0). This leads approximately to a δ(ω)-like correlation function. In contrast to this, the sound case (3.56) is realized when we take first Γ_T → 0 (or T → 0), to remove the last heat-pole term proportional to Υ in the hydrodynamic response (3.50), and then ω → 0. We follow this last sequence of limit transitions, in line with the Forster recommendations [115], for which we have the response (3.57) and correlation (3.81) functions without the heat pole. In this case the transport coefficients for ωτ ≪ 1 are the standard hydrodynamic ones.

In this subsection, we discuss the results of the FLDM calculations for the collective response function and transport coefficients. We shall now explain in more detail the application of the general fitting procedure for the definition of the transport coefficients. We also discuss the stiffness and inertia parameters found within the FLDM. This subsection will end with a discussion of the friction versus temperature. One of the important points of this discussion is the "heat pole" friction and its comparison with the quantum shell-model calculations [24,28,29]. We show first the imaginary part of the response function χ_QQ^coll(ω) (3.46) (its strength) for different temperatures in Fig. 2. The total collective response function χ_QQ^coll is presented in Fig. 2 as a sum of the two branches n = 0 and 1 of the eigenfrequencies ω^(n), see (3.48), in the imaginary part (strength) of the response function (3.46). They are related to the two different solutions of the dispersion equation (2.67) for the sound velocity s^(n). These solutions are similar to the Landau-Placzek (Rayleigh) and the sound (Brillouin) ones in normal liquids. The latter are approached exactly by the s^(0) and s^(1) solutions for the sound velocity s in the hydrodynamic limit ωτ → 0, which are related to the eigenfrequencies of the infinite-matter vibrations ω^(0) (2.74) and ω^(1) (2.75), respectively. The integral collision term is parametrized in terms of the relaxation time τ(ω, T) (3.19), (3.20) with c = 20 MeV.
We took the nucleus with particle number A = 230 (²³⁰Pu) as an example of a sufficiently heavy nucleus. For the intermediate temperatures 4 MeV ≲ T ≲ 6 MeV we have a three-peak structure. More detailed plots for smaller frequencies are shown in Fig. 3 for the temperature T = 6 MeV, for which the first two peaks (the "heat pole" and the usual hydrodynamic one) are better seen on a normal scale. In Fig. 3, we also show the separate contributions of the two branches n = 0 (dotted line) and n = 1 (dashed line) for the eigenfrequencies ω^(n) (3.48), calculated from the secular equation (3.47) at each s^(n) (n = 0, 1), as in Fig. 2. We also present the imaginary part of the asymptotic response function (3.50) obtained analytically above in the hydrodynamic frequent-collision limit. As seen from Fig. 3, we found from (3.46) the n = 1 mode with two (i = 0, 1) peaks and the n = 0 mode with one peak (i = 0) for small frequencies ω and small parameter ωτ, in agreement with the asymptotics (3.50). The heat-pole contribution is shown separately by the dotted curve. Note that the two curves for i = 0 and 1 at n = 1 in Fig. 3 coincide because they were both calculated without the last Υ term in (3.50). For the dotted curve, one has Υ ∝ s₀^(0) = 0, and for the dashed one, the last Υ term in (3.50) is omitted under the asymptotic sound condition (3.56). Therefore, the upper asymptotic data (thin solid), marked also by the condition (3.56), are larger by a factor of about two than the dotted, dashed, or asymptotic (3.50) ones. The third peak in Fig. 2 appears for intermediate temperatures and larger frequencies. This peak comes from the third pole i = 2, which belongs to the branch n = 1 in (3.48). This is the essentially Fermi-liquid underdamped mode due to the Fermi-surface distortions related to the shear modulus λ given by (B.12). Such a peak moves from the large zero-sound-frequency region of the giant resonances to smaller frequencies with increasing temperature. The second (i = 1) peak in the n = 1 branch and the first (i = 0) peak in the n = 0 branch in the low-frequency region (ωτ ≪ 1) are related to the overdamped motion described approximately by an overdamped oscillator response function like (3.58) for the same cut-off parameter c = 20 MeV. For c = ∞ the overdamped motion turns into the underdamped one for large temperatures T ≳ 7 MeV. The next (third) peak in a higher-frequency region (ωτ ≳ 1) corresponds to the underdamped mode for both c values. The first, lowest peak in Figs. 2 and 3, which is not seen in Fig. 2 as it lies too close to the ordinate axis and is studied separately in Fig. 3, is due to the overdamped "heat pole" iΥ/2 in the collective response function, see (3.55) for Υ. The most remarkable property of this "heat pole" peak for smaller temperatures is that it has a very narrow width (3.55), which increases with the temperature as T⁶, see the comments concerning the heat-pole "width" after (3.55) and (3.95). This is in contrast to the T² temperature behavior of the width Γ₋ (3.68) of the hydrodynamic sound peak at large temperatures. Fig. 2 shows the three peaks only for the intermediate temperatures 4 ≲ T ≲ 6 MeV, because for smaller temperatures the third peak moves to the high-frequency region above Ω corresponding to the giant resonances, and the first peak is very close to the ordinate axis. The transport coefficients for such a two- or three-resonance structure were calculated by a fitting procedure of the oscillator response functions to the selected peaks.
We first subtract the "heat pole" peak, known analytically, see (3.84), from the total response function (3.46). We are then left with a two-humped curve and fit it by a sum of two oscillator response functions as in (3.58). One of them, which fits the first (hydrodynamic) peak in the curve with the remaining two maxima, is the overdamped oscillator response function (η > 1), and the other (higher, in the low-energy region) corresponds to the underdamped motion (η < 1). In this way, we obtain the two sets of transport coefficients presented in Figs. 4-7. In these figures, the heavy squares are related to the second, hydrodynamic-sound peak of Figs. 2 and 3 for the mostly overdamped modes with the effective friction η > 1. The open squares show the third, Fermi-liquid peak (see Fig. 2) related to the underdamped motion (η < 1) and the Fermi-surface distortions, which are very specific to Fermi liquids, in contrast to normal liquids. For temperatures below about 6 MeV the second peak i = 1 in the total response function is overdamped and comes from the two poles (i = 1, n = 1) and (i = 0, n = 0), which are close to the standard hydrodynamic approach. The third peak, due to the Fermi-surface distortions as noted above, cannot be found in principle in the hydrodynamic limit. The main difference between the second and third peaks can be seen in the comparison of the stiffness coefficient C with the liquid-drop value C_LD, both obtained from the fitting procedure mentioned above. For the third peak ("Fermi liquid" in the sense of its relation to the Fermi-surface distortions specific to Fermi liquids, in contrast to normal ones), the stiffness C is much higher than the liquid-drop value C_LD, in contrast to the second (typically hydrodynamic) peak, for which the stiffness C is very close to C_LD at almost all temperatures, see Fig. 4. It means that the third peak is of an essentially different nature than the second one, because it exists only due to the Fermi-surface distortions. A measure of these distortions is the anisotropy (or shear modulus) coefficient λ, see (B.12), which disappears in the hydrodynamic limit. For sufficiently large temperatures (of the order of 7 MeV or larger) the three peaks are no longer distinguishable in Fig. 2. For such large temperatures the fitting procedure is slightly modified to select these three peaks, which are close to each other. For the finite c = 20 MeV and all the large temperatures presented in Fig. 2, near 7-10 MeV, we have one wide peak, which can be analyzed as the superposition of the three peaks, namely the "heat-pole", the usual overdamped hydrodynamic, and the underdamped "Fermi-liquid" ones. Subtracting the first "heat pole" peak [see (3.84)], as for lower temperatures, we then fit the remaining curve by a single overdamped oscillator function like (3.58) with η > 1. We then subtract this fitted overdamped oscillator function from the response function (3.46) without the heat-pole one (3.84) and fit the rest by a single underdamped oscillator. The parameters of these two last oscillator response functions are used as initial values for the iterative fitting procedure, in which the sum of two oscillator response functions of the same types is fitted to the response function (3.46) (without the heat pole). The resulting transport coefficients are presented in Figs. 4-7. For sufficiently large temperatures, near 10 MeV, in the case c = ∞, a single underdamped oscillator can be used to fit the one remaining peak [after the exclusion of the heat pole from (3.46)].
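A minimal numerical sketch of this two-oscillator fit follows (Python). It assumes the standard damped-oscillator strength Im χ_osc(ω) = γω/[(C − Mω²)² + (γω)²] and synthetic data; the actual FLDM response (3.46), its units, and the analytic heat-pole subtraction are not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def im_chi_osc(w, M, C, gamma):
    """Strength of a single damped-oscillator response function."""
    return gamma * w / ((C - M * w**2)**2 + (gamma * w)**2)

def im_chi_two(w, M1, C1, g1, M2, C2, g2):
    # two-peak model: overdamped (hydrodynamic) + underdamped (Fermi-liquid) mode
    return im_chi_osc(w, M1, C1, g1) + im_chi_osc(w, M2, C2, g2)

# Synthetic "strength function" standing in for Im chi_QQ^coll after the heat
# pole has been subtracted analytically, as described in the text.
w = np.linspace(0.01, 3.0, 600)
true = (0.9, 1.0, 2.0, 1.1, 4.0, 0.5)
data = im_chi_two(w, *true) + 1e-4 * np.random.default_rng(0).normal(size=w.size)

# Iterative two-oscillator fit; p0 would come from the one-peak pre-fits.
p, _ = curve_fit(im_chi_two, w, data, p0=(1, 1, 1.5, 1, 3.5, 0.4))
print("fitted (M, C, gamma) per mode:", p[:3], p[3:])
# Effective damping eta = gamma/(2*sqrt(M*C)): eta > 1 overdamped, eta < 1 under.
for M, C, g in (p[:3], p[3:]):
    print("eta =", g / (2 * np.sqrt(M * C)))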
We also show the mass parameters found from the above-described fitting procedure for several selected peaks in Fig. 5. For the third, "Fermi-liquid" peaks the mass parameter M is close to the liquid-drop values M_LD related to the irrotational flow. The mass parameter of the second, "hydrodynamic" peak, due to the mixture of the nearly identical (i = 1, n = 1) and (i = 0, n = 0) poles, is significantly smaller than the liquid-drop value M_LD but finite. For the first, "heat pole" (i = 0, n = 1) peak the mass parameter can be approximated only by zero. As noted above, the stiffness parameter for the third peak is much larger than the one for the other (hydrodynamic) poles, which is mostly close to the liquid-drop value (see Fig. 4). As shown in Figures 4 and 5, for sufficiently large temperatures the temperature dependences of the stiffness (C) and mass (M) parameters are close to their zero frequency limits, see (3.72) for C(0) and (3.74) for M(0). For smaller temperatures, the inertia M(0) [Fig. 5] becomes essentially larger than that found from the response function (3.46). This is in contrast to the stiffness C(0), which is identical to the liquid-drop quantity in the semiclassical limit ħ → 0, when C(0) does not contain quantum shell corrections. Figs. 6 and 7 show the results for the friction coefficient γ/ħ versus the temperature for the collective response function χ_QQ^coll(ω) = k²(T)χ_FF^coll(ω) related to χ_FF^coll(ω) (3.46). We used here the same parameters as in Figs. 2 and 3 for the response function. The solid line for the friction γ (3.53) corresponds to the response function (3.58) in the hydrodynamic limit (3.56), the same as for the zero frequency approach (3.73). The heavy squares show the result of the fit of (3.46) to the oscillator response function (3.58). We also present the "heat pole" contribution to the friction obtained from the fitting procedure by one "heat pole" (overdamped) oscillator response function (3.84), see the circles in Fig. 7. We may compare the results of this fit to the friction found analytically in terms of the heat-pole asymptotics (3.97), valid for smaller temperatures and shown by thin solid lines in Figs. 6 and 7. They are in good agreement for smaller temperatures, where the overdamped "heat pole" with the "width" Υ (3.55) is more important. This "heat pole" friction is very large compared to the other friction components, related to the hydrodynamic-sound (full squares) and "Fermi-liquid" poles, on the usual scale of Fig. 6. Therefore, we use the logarithmic scale in Fig. 7. Our FLDM friction, except for the "heat pole" one, is similar to the corresponding result of the SM calculations [28,29], see Fig. 8. The large SM friction coming from the diagonal matrix elements in Fig. 8 and the standard hydrodynamic friction (3.53), as well as the heavy squares shown in Figs. 6 and 7, are obviously similar. All these curves for temperatures T ≳ 2 MeV show a mainly decreasing friction, γ ∝ τ ∝ 1/T², roughly as in hydrodynamics, see (3.53). The deviation of the friction temperature dependence in Fig. 6 at large temperatures T from the usual hydrodynamic 1/T² behavior, i.e., the approach to a constant asymptotics, is related to the different temperature behavior of Γ(0, T) (3.20) for a finite and an infinite cut-off parameter c: Γ(0, T) goes to a constant for large temperatures if c is finite, so that the friction levels off, while for c = ∞ the friction continues to decrease toward zero; see the solid and dashed lines in Fig. 6.
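The following sketch is illustrative only: it assumes a collisional width of the commonly used form Γ(ω, T) ∝ (ħ²ω² + 4π²T²)/[1 + (ħ²ω² + 4π²T²)/c²], chosen solely because it reproduces the two limits just quoted (saturation of Γ(0, T) at large T for finite c; Γ ∝ T², and hence γ ∝ τ ∝ 1/T², for c = ∞); the actual parametrization (3.19), (3.20) and its overall constant are not reproduced here.

import numpy as np

def width(T, c=20.0, w=0.0, gamma0=33.0):
    """Assumed collisional width Gamma(w, T) in MeV; gamma0 is an arbitrary scale."""
    x = w**2 + 4.0 * np.pi**2 * T**2
    return x / gamma0 if np.isinf(c) else (x / gamma0) / (1.0 + x / c**2)

for T in (1.0, 2.0, 4.0, 7.0, 10.0):
    tau_c20 = 1.0 / width(T, c=20.0)   # tau ~ hbar/Gamma (hbar = 1 here)
    tau_inf = 1.0 / width(T, c=np.inf)
    # hydrodynamic friction gamma_fric ~ tau, cf. (3.53): it levels off for the
    # finite cut-off c = 20 MeV but keeps falling like 1/T^2 for c = infinity.
    print(f"T = {T:4.1f} MeV:  tau(c=20) = {tau_c20:.2e}   tau(c=inf) = {tau_inf:.2e}")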
A similarity is also noted concerning the third ("Fermi-liquid") peak, presented by the lower open squares with mainly increasing friction in Figs. 6 and 7 and by the joined full squares in Fig. 8. For c = 20 MeV and temperatures below about 10 MeV the friction of this mode increases, see Figs. 6 and 7, in contrast to the standard hydrodynamic behavior (for c = ∞ this friction first increases up to about 6-7 MeV and then decreases at larger temperatures). In Fig. 8 the lower curve, with a growing dependence on the temperature for c = 20 MeV, was obtained by excluding the contribution of the diagonal terms in the response function within the quantum approach based on the SM, see [24,29] for detailed explanations. Within the concepts of the FLDM and the classical hydrodynamics of normal liquid drops, the first, "heat pole" friction, obtained for sufficiently small frequencies (3.83) within the hydrodynamic collision regime ωτ ≪ 1 at finite temperature, is a physical mode which can be excited when this regime is realized, like the Landau-Placzek pole for normal liquids. However, the hydrodynamic collision regime, being still within a low-frequency region (sufficiently small collision frequency 1/τ), is not expected to be reached in fission experiments. Therefore, the friction is related mainly to another Fermi-liquid mode, corresponding to the third peak only, owing to the Fermi-surface distortions. The friction of this mode is much smaller than the hydrodynamic one for small temperatures, and they become comparable for high ones. The Fermi-surface-distortion friction can be characterized by a completely different, mainly growing temperature behavior, see the lower curve marked by open squares in Figs. 6 and 7. Concerning the SM calculations, it seems that we should omit the diagonal matrix elements, see [29], for similar reasons: the hydrodynamic collision regime does not seem to be realized in nuclear fission processes. (These diagonal matrix elements might correspond to the physical hydrodynamic mode if it is excited, say, in other systems such as a normal liquid drop.) The quantum shell-model friction without contributions of the diagonal matrix elements is probably related to another, non-hydrodynamic mode, such as the third peak for a Fermi-liquid drop, and this might be the physical reason for the exclusion of these matrix elements. Note that in the SM response-function derivations the diagonal matrix elements mentioned above do not contribute in Forster's sequence of the limit transitions discussed at the end of the previous section: first Γ_T → 0, for the exclusion of the diagonal matrix elements at finite ω, and then the ω → 0 limit. In this case, there is no contribution of the diagonal matrix elements to the friction, and we are left with the low friction curves with an increasing temperature dependence shown in Figs. 6-8. For the opposite limit sequence, if we consider first the small-frequency limit ω → 0 for finite (large) Γ_T, we have the contribution of the diagonal matrix elements to the friction, shown by the curves decreasing with temperature, which correspond here to the hydrodynamic limit. As noted above, the exclusion of the diagonal matrix elements in this last case could be justified because the physical condition of the hydrodynamic limit, ωτ ≪ 1, is probably not realized in fission processes. In that case, we expect the increasing friction, which is of an essentially different, non-hydrodynamic nature. We may interpret it within the FLDM as related to the third peak, due to the Fermi-surface distortions.
IV. NEUTRON-PROTON CORRELATIONS AND IVGDR

A. Extensions to the asymmetric nuclei

The FLDM was successfully applied to studying the global properties of the isoscalar multipole giant resonances, with nice agreement of their basic characteristics, such as the energies and sum rules, with the experimental data for collective excitations of heavy nuclei [31,46]. For the collective excitation modes in asymmetric neutron-proton nuclei, the FLDM was straightforwardly extended, in particular for calculations of the IVGDR structure [33,49,51,52]. In this case, one has the two coupled (isoscalar and isovector) Landau-Vlasov equations (4.1) for the dynamical variations of the distribution functions, δf_±(r, p, t), in the nuclear phase-space volume [33]. Here m*_± are the isoscalar (+) and isovector (−) effective masses, ε = p²/(2m*_±), and ε_F = (p_F^±)²/(2m*_±) is the Fermi energy. The splitting between the Fermi momenta p_F^± originates from the difference of the neutron and proton potential-well depths due to the Coulomb interaction [1,33], where F₀′ = 3J/ε_F − 1 is the isotropic isovector Landau constant of the quasiparticle interaction (4.6), and J is the volume symmetry-energy constant [2]. The asymmetry parameter I = (N − Z)/A is assumed to be small near the nuclear stability line; N and Z are the neutron and proton numbers in the nucleus (A = N + Z). In (4.1), the dynamical variations of the self-consistent quasiparticle (mean-field) interaction δε_±(r, p, t) are given by (4.3), where the sum is taken over the sign index σ = ±. The dynamical variations of the quasiparticle interaction δε_± at first order with respect to the equilibrium energy p²/(2m*_±) are defined through those of the particle density [the zero-order p-moments of the dynamical distribution functions δf_σ(r, p, t) (2.28)] and the current density (their first p-moments). The Landau interaction constants F_{l,σσ′} in (4.3) are defined by the expansion of the quasiparticle scattering amplitude F_{σσ′}(p, p′) in a Legendre polynomial series. For the sake of simplicity, we assume that F_{l,σσ′} is a symmetric matrix (l ≤ 1) and that F_{l,pp} − F_{l,nn} is of second order in the parameter ∆ [see below (4.1)] and can be neglected in the linear approximation with respect to ∆. Thus, we arrive at the usual simple definitions for the isoscalar F₀ and F₁ and isovector F₀′ and F₁′ Landau interaction constants [1,33]. These constants are related to the Skyrme interaction constants in the usual way [130]. The isoscalar (F₀) and isovector (F₀′) isotropic interaction constants are associated with the volume incompressibility modulus K and the symmetry-energy constant J, respectively. The anisotropic interaction constants F₁ and F₁′ correspond to the effective masses through m*₊ = m(1 + F₁/3) and m*₋ = m(1 + F₁′/3). The periodic time-dependent external field in (4.1) is given by V_ext ∝ exp(−iωt), as in (3.1). The collision term δSt_± is taken in the simplest τ_±-relaxation-time approximation (2.9). For simplicity, we consider in this section the low-temperature limit T → 0, neglecting the difference between the local and global equilibrium for the quasistatic distribution function.
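As a small numerical aside (Python), the sketch below evaluates the relations quoted above, F₀′ = 3J/ε_F − 1 and m* = m(1 + F₁/3), for illustrative values of J, ε_F and F₁, and then solves the textbook l = 0 Landau zero-sound dispersion relation (s/2) ln[(s + 1)/(s − 1)] − 1 = 1/F₀, which is the decoupled (∆ → 0, F₁ = F₁′ = 0, ωτ → ∞) limit of the coupled equations; the full coupled dispersion equation of the text is not reproduced here.

import numpy as np
from scipy.optimize import brentq

eps_F, J = 37.0, 30.0                 # illustrative Fermi and symmetry energies (MeV)
F0p = 3.0 * J / eps_F - 1.0           # F0' = 3J/eps_F - 1
print(f"F0' = {F0p:.2f}")
for F1 in (0.5, -0.3):                # illustrative anisotropic constants
    print(f"F1 = {F1:+.1f}  ->  m*/m = {1.0 + F1 / 3.0:.3f}")   # m* = m(1 + F1/3)

def zero_sound_s(F0):
    """Solve (s/2) ln((s+1)/(s-1)) - 1 = 1/F0 for s = v_sound/v_F > 1."""
    f = lambda s: 0.5 * s * np.log((s + 1.0) / (s - 1.0)) - 1.0 - 1.0 / F0
    return brentq(f, 1.0 + 1e-10, 50.0)

for F0 in (0.5, 1.0, F0p, 3.0):
    print(f"F0 = {F0:4.2f}  ->  s = {zero_sound_s(F0):.4f}")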
Solutions of these equations (4.1) associated with the dynamic multipole particle-density variations, δρ_±(r, t) ∝ Y_{L0}(r) in the spherical coordinates r, θ, φ, can be found in terms of a superposition of the plane waves (2.28) over the angles of the wave vector q. The factor NZ/A² ensures the conservation of the center-of-mass position for the odd vibration multipolarities L [129], in particular for the dipole modes (L = 1). The amplitudes of the Fermi-surface distortions A_± are determined by (4.1). For the simplest case of zero anisotropic interaction (F₁ = F₁′ = 0) in the collisionless limit ωτ → ∞, the dispersion equation for the sound velocity s takes the form (4.10). (We accounted for a small ∆ and large ωτ at zero temperature.) This equation has the two solutions s = s_n, with n = 1 related to the main peak and n = 2 to its satellite, see (26) of [33] for finite ωτ and nonzero F₁ and F₁′. In the limit ∆ → 0, the dispersion equations given by (25) of [33], with our definitions for the s₁ and s₂ modes (n = 1 and 2), reduce to the two separate (isovector and isoscalar) zero-sound dispersion equations, respectively. For the finite Fermi-liquid drop with a sharp ES [27,37,38], the macroscopic boundary conditions for the pressures and those for the velocities were derived in [33,39,40]. For small isovector vibrations near the spherical shape, the radial mean-velocity u_r and momentum-flux-tensor Π_rr components, defined through the moments of the distribution function δf₋ as solutions of the kinetic equation (4.1) [see (2.16) and (2.20)], are given by (3.21) and (3.22) with u_r = u_r⁺ − u_r⁻ and Π_rr = Π_rr⁺ − Π_rr⁻. The r.h.s.'s of these boundary conditions involve the isovector ES velocity u_S = RQ_S Y_{L0}(r) and the capillary pressure excess (4.12); the surface symmetry-energy constant entering (4.12) is considered in [49,53]. This constant essentially differs from the isovector stiffness introduced in [2] for the description of the neutron skin as a collective variable, see the more detailed discussions in [40,51]. The energy constant, D = ħω A^{1/3}, and the energy-weighted sum rule (EWSR), ∫ dω ω Im χ_coll(ω) (4.13), for the IVGDR can be found from the collective response function χ_coll(ω). The response function (3.4) is determined by the transition density (3.31) generalized to the dynamic isoscalar and isovector components [52], where δℵ_L^± is defined by the center-of-mass conservation (∫ dr r δρ_± = 0), and w_±(ξ) is given by (D.2) and (D.4). In Fig. 9, the strong SO dependence of the isovector density w₋(ξ) is compared with that of the isoscalar one w₊(ξ) (the subscript "+" is omitted here and below) for the SLy7 force as a typical example [39,40]. As shown in [40], the isoscalar density w(ξ), and therefore the isovector density w₋(ξ), depends rather strongly on the choice among most of the Skyrme forces [74,75] near the ES. In Fig. 10 (on a logarithmic scale), one observes notable differences in the isovector densities w₋ derived from different Skyrme forces within the edge diffuseness. In particular, this is important for the calculations of the neutron skins of nuclei [40]. We emphasize that the dimensionless densities, w(x) (D.2) and w₋(x) (D.4), shown in Figs. 9 and 10 were obtained in the leading ES approximation (a/R ≪ 1) as functions of the specific combinations of the Skyrme-force parameters, such as β and c_sym of (D.5).
Therefore, they are universal distributions, independent of the specific properties of the nucleus, such as the neutron and proton numbers, and of the deformation and curvature of the nuclear ES; see also [25,27,39]. These distributions yield approximately the spatial coordinate dependence of the local densities in the direction ξ normal to the ES. With the correct asymptotic behavior outside of the ES layer for any ES deformation, they satisfy the leptodermic condition a/R ≪ 1, in particular for the semi-infinite nuclear matter. The universal functions w(ξ) (D.2) and w₋(ξ) (D.4) of the leading order in the ESA can be used [explicitly analytically in the quadratic approximation for ε(w)] for the calculations of the surface energy coefficients b_S^(±) (D.7), the neutron skin, and the isovector stiffness (see [40]). As shown in Appendices B and C of [40], only these particle-density distributions w_±(ξ) within the surface layer are needed, through their derivatives [the lower limit of the integration over ξ in (D.7) can be approximately extended to −∞, because there are no contributions from the internal volume region in the evaluation of the main surface terms of the pressure and energy]. Therefore, the surface symmetry-energy coefficient k_S in (D.10) and (D.12) (and also the neutron skin and the isovector stiffness [40]) can be approximated analytically in terms of functions of the definite critical combinations of the Skyrme parameters, such as β, c_sym, a [see (D.5)], and the parameters of the infinite nuclear matter (b_V, ρ_∞, K). Thus, they are independent of the specific properties of the nucleus (for instance, the neutron and proton numbers) and of the curvature and deformation of the nuclear surface in the considered ESA. Solving the Landau-Vlasov equations (4.1) in terms of the zero-sound plane waves (4.9), using the dispersion equations (26) in [33] for the sound velocities s_n and the macroscopic boundary conditions (3.21) and (3.22) with (4.12) on the nuclear ES, from (3.4) and (4.14) one obtains the collective response function (4.15). Here, c₁ ≈ 1 − s₁² + F₀′ and d₁ ≈ 1 − s₁² + F₀′ for the main (n = 1) IVGDR peak. The small anisotropic F₁ and F₁′ corrections and the more bulky expressions for s₂ of the satellite (n = 2) peak of a smaller (∝ I) strength were omitted [see (D11) in [33] for more precise expressions]. We also present the simplest expressions for the amplitudes, A₁(q) ≈ −ρ_∞ R³ j₁(qR)/(mω²) and A₂(q) ∝ ∆ ∝ I for the n = 1 and 2 modes, respectively [see the more complete equation (60) in [33]]. The Bessel function j₁(z) and its derivative j₁′ were defined after (3.32) (L = 1). The poles of the response function χ_coll(ω) (4.15) [the roots ω_n of the equation D^(n)(ω − iΓ/2) = 0, or q_n] determine the IVGDR energies ħω_n through their real parts (the IVGDR width Γ is determined by their imaginary parts). The residue A_n is important for the calculation of the EWSR (4.13) at a small IVGDR width Γ. Note that an expression like (4.15), but with only the one main peak (without the IVGDR structure) in symmetric nuclei (N = Z), was obtained earlier in [49] using phenomenological boundary conditions of the same form as (3.21) and (3.22), in which, however, the isovector neutron-skin stiffness was applied instead of the surface symmetry-energy constant b_S^(−) in the capillary pressure excess (4.12).
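A trivial sketch (Python) of the EWSR-weighted average used below for the mean IVGDR energy constant; the peak energies D_n and EWSR fractions here are invented placeholders, not values from Table 1.

def mean_D(D, ewsr):
    """EWSR-weighted average of the energy constants D_n (MeV)."""
    return sum(d * w for d, w in zip(D, ewsr)) / sum(ewsr)

D = (78.0, 95.0)        # hypothetical main (n=1) and satellite (n=2) constants
ewsr = (0.9, 0.1)       # main peak exhausts most of the EWSR; satellite ~ I
Dbar = mean_D(D, ewsr)
print(f"Dbar = {Dbar:.1f} MeV  (cf. D_exp ~ 80 MeV for heavy nuclei)")
A = 208
print(f"A = {A}:  hbar*omega = Dbar*A**(-1/3) = {Dbar * A ** (-1 / 3):.1f} MeV")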
B. Discussions of the asymmetry effects

The isovector surface-energy constants k_S (D.10), obtained in the ESA using the simplest quadratic approximation for ε(w) of the energy density (D.1), are shown in Table 1 for several Skyrme forces [74,75]. These constants are rather sensitive to the choice of the Skyrme forces. The modulus of k_S for the Lyon Skyrme forces SLy4-7 [74] is significantly larger than for the other forces, and all of them are much smaller than the values related to [2,60-62]. For T6 [74], one has C₋ = 0, and therefore k_S = 0, in contrast to all the other forces shown in Table 1. Notice that the isovector gradient terms, which are important for the consistent derivations within the ESA [40], are also not included (C₋ = 0) in the energy density of [63,65]. For RATP [74], the isovector stiffness (∝ −1/k_S), i.e., the inverse of k_S but with the opposite sign [40], is even negative, since C₋ > 0 (k_S > 0). The reason for the significant differences in these values might be related to the differences in the critical isovector Skyrme parameter C₋ in the gradient terms of the energy density (D.1). The different experiments used for fitting this parameter were found to be almost insensitive for determining its value uniquely, and hence k_S [or b_S^(−)].

The IVGDR energy constants D = ħω₋ A^{1/3} of the hydrodynamic model (HDM) are roughly in good agreement with the well-known experimental value D_exp ≈ 80 MeV for heavy nuclei, within a precision of the order of 10% or better, as shown in [40,51] (see also [33,49,131]). A more precise A^{-1/3} dependence of D seems to be beyond the accuracy of these HDM calculations. This remains the case even when accounting more consistently for the ES motion, because of several other effects (the macroscopic Fermi-surface distortions [49], the structure of the IVGDR [33,50,52-54,131], and the curvature, Coulomb, quantum-shell, and pairing [6] effects) on the way towards the realistic self-consistent calculations based on the Skyrme HF approach [132-136]. Larger values, 30-80 MeV, of the isovector stiffness [2] (smaller k_S) were found in [60,62,67,72]. With smaller |k_S| (see Table 1; i.e., a larger isovector stiffness), the fundamental parameter of the LDM expansion in [2,60] is really small for A ≳ 40, and therefore the results obtained by using this expansion are justified [40]. Table 1 also shows the mean IVGDR energies D obtained [40,51] within the more precise FLDM [33]. The IVGDRs, even for spherical nuclei, have a double-resonance structure: the main peak n = 1, which exhausts most of the EWSR for almost all Skyrme forces, and the satellite n = 2, with significantly smaller EWSR contributions proportional to the asymmetry parameter I, typical of heavy nuclei. The last row shows the average D(A), weighted by the EWSR distribution, in rather good agreement with the experimental data within the same accuracy of about 10%, and in agreement with the results of various other macroscopic IVGDR models [49,53,54,131]. Exceptions (see Table 1) are the Skyrme forces SIII [74] and SkL3 [75], for which we obtained somewhat larger IVGDR energies. Note that the main characteristics of the IVGDR described by the mean D are almost insensitive to the isovector surface-energy constant k_S [40,51]. Therefore, we suggested [40,52] studying the IVGDR two-peak (main and satellite) structure in order to fix the ESA value of k_S [40] from comparison with the experimental data [137-139] and theoretical results [132-136,140].

V. NUCLEAR COLLECTIVE ROTATIONS
A. General ingredients of the cranking model

Within the cranking model, the nuclear collective rotation of the Fermi independent-particle system, associated with a many-body Hamiltonian H^ω = H + H^ω_CF, can be described, to a good approximation [129], in the restricted subspace of Slater determinants, by the eigenvalue problem for an s.p. Hamiltonian, usually called the Routhian. For this Routhian, in the body-fixed rotating frame [4,5,15], one has (5.1), where h^ω_CF is the s.p. cranking field, approximately equal to the Coriolis interaction (neglecting a smaller centrifugal term, ∝ ω²). The Lagrange multiplier ω (the rotation frequency of the body-fixed coordinate system) is defined through the constraint on the nuclear angular momentum I, evaluated through the quantum average ⟨ℓ + s⟩_ω = I of the total s.p. operator ℓ + s, where ℓ is the orbital angular momentum and s is the spin of the quasiparticle, thus defining a function ω = ω(I). The quantum average of the total s.p. operator ℓ + s is obtained by evaluating expectation values of the many-body Routhian H^ω_CF in the subspace of Slater determinants. For the specific case of a rotation around the x axis (ω = ω_x), which is perpendicular to the symmetry z axis of the axially symmetric mean field V, one has (5.2) (dismissing for simplicity the spin (spin-isospin) variables), where d_s is the spin (spin-isospin) degeneracy in the case of the corresponding symmetry of the mean potential V. The occupation numbers n_i^ω for the Fermi system of independent nucleons are given by (5.3). In (5.2), ψ_i^ω(r) are the eigenfunctions, ψ_i^ω*(r) their complex conjugates, ε_i^ω the eigenvalues of the Routhian h^ω (5.1), and μ^ω is the chemical potential. For relatively small frequencies ω and temperatures T, μ^ω is, to a good approximation, equal to the Fermi energy, μ^ω ≈ ε_F = ħ²k_F²/(2m*), where k_F is the Fermi momentum in units of ħ. From (5.2), the rotation frequency ω can be expressed in terms of a given angular momentum I_x of the nucleus, ω = ω(I_x). Within the same approach, one has an analogous approximate expression for the particle number, which determines the chemical potential μ^ω for a given number of nucleons A. As we introduce the continuous parameter ω and ignore the uncertainty relation between the angular momentum and the angles of the body-fixed coordinate system, the cranking model is semiclassical in nature [81]. Thus, we may consider the collective MI Θ_x (for a rotation around the x axis, omitting, to simplify the notation, the spin and isospin variables) as a response of the quantum average δ⟨ℓ_x⟩_ω (5.2) to the external cranking field h^ω_CF in (5.1). Similarly to the magnetic or isolated susceptibilities [108,109,141,142], one can write (5.6). Traditionally [5,110,113], another, parallel (alignment) rotation with respect to the symmetry z axis can also be considered, as presented in Appendix A of [113]. As was shown in [4-9,15], one can treat the term −ω · ℓ = −ω ℓ_x as a perturbation for a nuclear rotation around the x axis. With the constraint (5.2) and the MI (5.6) treated in second-order perturbation theory, one obtains the well-known Inglis cranking formula. Instead of carrying out the rather involved calculations presented above, one could, to obtain the yrast-line energies E(I_x) for small enough temperatures T and frequencies ω, approximate the angular frequency by ω = I_x/Θ_x and write the energy in the form (5.7). As usually done, the rotation term needs to be quantized through I_x² → I_x(I_x + 1) in order to study the rotational bands.
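A one-line numerical sketch (Python) of the yrast-line estimate just described: with ω = I_x/Θ_x and the quantization I_x² → I_x(I_x + 1), the band energy is E(I) = E(0) + ħ²I(I + 1)/(2Θ). The Θ value below is a hypothetical rare-earth-scale moment of inertia, not a number from the text.

def yrast_energy(I, theta, E0=0.0):
    """Rotational energy (MeV); theta is the MI in units of hbar^2/MeV."""
    return E0 + I * (I + 1) / (2.0 * theta)

theta = 37.5                                  # hbar^2/MeV, hypothetical
for I in range(0, 10, 2):
    print(f"I = {I:2d}:  E(I) = {1e3 * yrast_energy(I, theta):6.1f} keV")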
B. Self-consistent ETF description of nuclear rotations

Following reference [87], a microscopic description of rotating nuclei was obtained in the Skyrme Hartree-Fock formalism, within the Extended Thomas-Fermi density-functional theory up to order ħ². Within a variational space restricted to Slater determinants, the minimization of the expectation value of the nuclear Hamiltonian leads to the s.p. Routhian h_q^ω (5.1), which is determined by a one-body potential V_q(r), a spin-orbit field W_q(r), and an effective-mass form factor f_q^eff(r) = m/m*_q (see also [72]). In addition, in the case when the time-reversal symmetry is broken, a cranking-field form factor α_q(r) and a spin-field form factor S_q(r) also appear. In this subsection the (roman) subscript q refers to the nucleon isospin (q = {n, p}) and should not be confused with the wave number q in the other sections. All these fields can be written as functions of local densities and their derivatives, such as the neutron and proton particle densities ρ_q(r), the kinetic-energy densities τ_q(r), the spin densities (also referred to as spin-orbit densities) J_q(r), the current densities j_q(r), and the spin-vector densities ρ_q(r). Note that in the present subsection τ_q(r) stands for the kinetic-energy density, which should not be confused with the relaxation time of the previous sections (distinguished here by the subscript q, unlike in sections II-IV and Appendices A and B). In principle, two additional densities appear, a spin-vector kinetic-energy density τ_q(r) and a tensor coupling J_αβ(r) between spin and gradient vectors, which have, however, been neglected since their contribution should be small, as suggested by [143]. The cranking-field form factor α_q(r) contains two contributions. One of them comes from the orbital part of the constraint, −ω ℓ, which has been shown in [144] to correspond to the Inglis cranking formula [7]. The other, a Thouless-Valatin self-consistency contribution [145], has its origin in the self-consistent response of the mean field to the time-odd part of the density matrix generated by the cranking term of the Hamiltonian. The aim is now to find functional relations for the local densities τ_q(r), J_q(r), j_q(r) and ρ_q(r) in terms of the particle densities ρ_q(r), in contrast to those given by Grammaticos and Voros [146] in terms of the form factors V_q, f_q^eff, W_q, α_q and S_q. Taking advantage of the fact that, at the leading Thomas-Fermi order, the cranking-field form factor is given by (5.8) [87], one simply obtains the rigid-body value (5.9) for the Thomas-Fermi current density. This result is not trivial, since it is only through the effect of the Thouless-Valatin self-consistency terms that such a simple result is obtained. Notice also that (5.9) corresponds to a generalization to the case f_q^eff ≠ 1 of a result already found by Bloch [147]. Equation (5.9) can also be considered as an extension of the Landau quasiparticle (generalized TF) theory [34,35], presented in Secs. V B and V A, to the case of rotating Fermi-liquid systems; cf. (5.9) with (4.5) for the current density as an average of the particle velocity, p_rot/m = ω × r, rotating with the frequency ω.
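A minimal sketch (Python) of the rigid-body Thomas-Fermi current density just discussed, j_q(r) = ρ_q(r)(ω × r); the Woods-Saxon profile and all numbers are illustrative, and the isospin index and conventions of [87] are suppressed.

import numpy as np

omega = np.array([0.02, 0.0, 0.0])               # rotation about the x axis
def rho(r, rho0=0.16, R=6.0, a=0.55):            # illustrative density (fm^-3)
    return rho0 / (1.0 + np.exp((np.linalg.norm(r) - R) / a))

for point in (np.array([0.0, 3.0, 0.0]), np.array([0.0, 0.0, 5.0])):
    j = rho(point) * np.cross(omega, point)      # rigid-body current at this point
    print(f"r = {point}:  j = {j}")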
In particular, the renormalization of the cranking-field form factor, α_q^(TF) = f_q^eff α_o with α_o = (r × ω), by (5.8) can also be explained as related to the effective-mass corrections, f_q^eff ≠ 1, obtained by Landau [34] using both the Galilean invariance principle and the Thouless-Valatin self-consistency corrections to the particle mass m due to the quasiparticles' (self-consistent) interaction through a mean field. They lead in [87] to the self-consistent TF angular momentum of the quasiparticle, ℓ_q = f_q^eff ℓ_o, with the classical angular momentum ℓ_o = r × p of the particle, so that −ω·ℓ_q = α_q·p. This effect is similar to that for the kinetic energies of the quasiparticles, see after (4.1). With this transparent connection to the Landau quasiparticle theory, it is clear that there is no contradiction with the TF limit, ħ → 0, of the current densities (4.5), accounting for the particle densities (4.4), nor with the definitions in subsections IV A and V C, because ħ appears in (5.9) formally, due to the traditional use of dimensionless units for the angular momenta in the quantum-mechanical picture, for comparison with the experimental nuclear data. Another reason is related to a consistent treatment of the essentially quantum spin degrees of freedom, beyond the Landau quasiparticle approach to the description of Fermi liquids, which have no direct classical limit, in contrast to the orbital angular momentum ℓ. The convergence in the TF limit ħ → 0 can be realized for already-smoothed quantities after the statistical (macroscopic) averaging over many s.p. (more generally speaking, many-body) quantum states, which removes the fluctuating (shell) effects that appear through ħ in the denominators of the exponents within the POT (see Sec. V C for more detailed discussions). Finally, the spin paramagnetic effect can be considered as a macroscopic one in the MI, like the orbital diamagnetic contribution. For instance, the spin-vector density does not have a direct classical analogue, such as the orbital angular momentum has, and is considered as an object of leading order ħ. Starting from these results, one takes advantage of the fact that, in the functional ETF expressions up to order ħ², it is sufficient to replace quantities, such as the cranking-field form factor α_q, by their Thomas-Fermi expressions (after the statistical averaging mentioned above). In order to obtain a semiclassical expression that is correct to that order in ħ, one obtains for the spin-vector densities ρ_n and ρ_p, which are of order ħ in the considered ETF expansion, a system of linear equations. These can be easily solved [87]. One also notices from this system of equations that the spin-vector densities are proportional to the angular velocity ω. Exploiting the well-known analogy of the microscopic Routhian problem with electromagnetism, one may then define spin susceptibilities χ_q through ρ_q = χ_q ω (5.10). The key question now is to assess the sign of these susceptibilities and to decide whether or not the corresponding alignment is of a "Pauli paramagnetic" character. The study of [87] shows that this is the case, i.e., that the spin polarization is, indeed, of paramagnetic character, thus confirming the conclusions of the work performed by Dabrowski [148] in a simple model of non-interacting nucleons.
Since the cranking-field form factor α_q is, apart from the constraining-field part α_o, determined only by the current densities j_q and the spin-vector densities ρ_q, one can then write down [87] the contributions to the current densities j_q going beyond the Thomas-Fermi approach. The semiclassical corrections of order ħ² can be split into contributions (δj_q)_ℓ and (δj_q)_s coming, respectively, from the orbital motion and the spin degree of freedom. It is found [87] that the orbital correction (δj_q)_ℓ corresponds to a surface-peaked counter-rotation with respect to the rigid-body current proportional to (ω × r), thus recovering the Landau diamagnetism characteristic of a finite Fermi gas. With the expressions of the current densities j_q and the spin-vector densities ρ_q up to order ħ², one can write down the corresponding ETF expressions for the kinetic-energy density τ_q(r) and the spin-orbit density J_q(r). Having now at hand the ETF functional expressions, up to order ħ², of all the densities entering our problem, one is able to write down the energy of the nucleus in the laboratory frame as a functional of these local densities (5.11), where ρ = ρ_n + ρ_p as in Appendix D, ρ ≈ ρ_∞ w₊. Upon some integration by parts, one finds that E can be written as a sum of the energy of the non-rotating system, E(0), and its rotational part, in line with (5.7). Within the ETF approach, one then has from (5.11) the rotational energy, where Θ_TF^(dyn) is the ETF dynamical moment of inertia for the nuclear rotation with the frequency ω. This MI is given in the form (5.13), where r_⊥ is the distance of a given point from the rotation axis and W₀ is the Skyrme-force strength parameter of the spin-orbit interaction [72]. One notices that the Thomas-Fermi term, which comes from the orbital motion, turns out to be the rigid-body moment of inertia. Semiclassical corrections of order ħ² come from both the orbital motion and the spin degree of freedom; Θ_orb^(dyn) is negative, corresponding to a surface-peaked counter-rotation in the rotating frame. Such a behavior is to be expected for an N-particle system bound by attractive short-range forces (see [149]). The spin contribution Θ_spin^(dyn) turns out to be of the paramagnetic type, thus leading to a positive contribution which corresponds to an alignment of the nuclear spins along the rotation axis. It can also be shown (see [150]) that the ETF kinematic moment of inertia is identical to the ETF dynamical moment of inertia presented above. It is now interesting to study the importance of the Thouless-Valatin self-consistency terms. This has been accomplished by calculating the moment of inertia in the Thomas-Fermi approximation while omitting, this time, the Thouless-Valatin terms. One then finds [87] the expression (5.15) for the dynamical moment of inertia in what is simply the Inglis-cranking (IC) limit, where q̄ is the other charge state (q̄ = p when q = n and vice versa) and B₃ is defined through the Skyrme-force parameters t₁, t₂, x₁ and x₂ (see [87]). Apart from the corrective term in ρ_q ρ_q̄, one notices that the first term in the expression above, which is the leading term, leads, at least for a standard HF-Skyrme force where f_q^eff ≥ 1, to a smaller moment of inertia than the corresponding term in (5.13) containing the Thouless-Valatin corrections.
It is also worth noting that, in this approximate case, the kinematic moment of inertia is given by an expression which turns out to be quite different from the dynamical moment of inertia (5.15) given above, obtained in the same limit (the Thomas-Fermi limit, omitting the Thouless-Valatin self-consistency terms). To investigate the importance of the different contributions to the total moment of inertia, we have performed self-consistent ETF calculations up to order ħ⁴ for 31 non-rotating nuclei, imposing spherical symmetry and using the SkM* Skyrme effective nucleon-nucleon interaction [151]. Such calculations yield variational semiclassical density profiles for neutrons and protons [72], which are then used to calculate the above-given moments of inertia. The nuclei included in our calculations span the range from ¹⁶O to ²⁴⁰Pu. The results of these calculations are displayed in figure 11, taken from [87]. One immediately notices the absence of any significant isovector dependence. The good reproduction of the total ETF moment of inertia by the Thomas-Fermi (rigid-body) value is also quite striking. One finds that the orbital and spin semiclassical corrections are not small individually but cancel each other to a large extent. To illustrate this fact, the ETF moments obtained by omitting only the spin contribution are also shown in the figure. One thus obtains a reduction of the Thomas-Fermi result that is about 6% in ²⁴⁰Pu but as large as 43% in ¹⁶O. The Inglis-cranking approach performed at the Thomas-Fermi level underestimates the kinematic moment of inertia by as much as 25% and the dynamical moment of inertia by about 50% in heavy nuclei, demonstrating in this way the importance of the Thouless-Valatin self-consistency terms. In [87], a crude estimate of the semiclassical corrections due to the orbital and spin degrees of freedom has been made by considering the nucleus as a piece of symmetric nuclear matter (no isovector dependence, as already indicated by the self-consistent results shown in figure 11 above). It turns out that these semiclassical corrections have an identical A dependence (A^{-2/3} relative to the leading-order Thomas-Fermi, i.e., rigid-body, term). A fit of the parameters η_ℓ and η_s to the numerical results displayed in Fig. 11 yields η_ℓ = −1.94 and η_s = 2.63, giving a total (orbital + spin) corrective term of 0.69 A^{-2/3}. For a typical rare-earth nucleus (A = 170) all this would correspond to a total corrective term equal to 2.2% of the rigid-body value, resulting from a −6.3% correction for the orbital motion and an 8.5% correction for the spin degree of freedom. Whereas in the calculations that led to figure 11 above spherical symmetry was imposed, fully variational calculations have been performed in [88], imposing, however, the nuclear shapes to be of spheroidal form. In this way, the nuclear rotation clearly impacts the specific form of the matter densities ρ_n and ρ_p, which, in turn, in the framework of the ETF approach, determine all the other local densities, as explained above. To keep contact with the usual shape parametrizations, the standard quadrupole parameters β and γ were defined by equating the semi-axis lengths of the spheroids with those of a standard quadrupole drop. As a result, figure 12 shows the evolution of the equilibrium solutions (the ones that minimize the energy for a given angular momentum I) as a function of I.
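The crude estimate above is easy to check numerically (Python); the only inputs are the fitted values η_ℓ = −1.94 and η_s = 2.63 quoted in the text, and the percentages for A = 170 come out as stated (to rounding).

eta_l, eta_s = -1.94, 2.63       # fitted hbar^2 corrections relative to rigid body

for A in (16, 170, 240):
    scale = A ** (-2.0 / 3.0)    # common A-dependence of both corrections
    print(f"A = {A:3d}:  orbital {100*eta_l*scale:+5.1f}%  "
          f"spin {100*eta_s*scale:+5.1f}%  total {100*(eta_l+eta_s)*scale:+5.1f}%")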
One clearly observes that at low values of the angular momentum (I in the range between 0 and 50ħ) the nuclear drop takes on an oblate shape, corresponding to increasing values of the quadrupole parameter β with increasing I, but keeping the non-axiality parameter fixed at γ = 60°. For larger values of the total angular momentum (I beyond 55ħ), one observes a transition to triaxial shapes, where the nucleus evolves rapidly to more and more elongated shapes. For even higher values of I (beyond 70ħ) the nucleus approaches the fission instability. These results are in excellent qualitative agreement with those obtained by Cohen, Plasil and Swiatecki [152] in a rotating LDM. It is amusing to observe here a backbending phenomenon at the semiclassical level when one plots, as usual, the moment of inertia Θ_ETF vs. the rotational angular momentum, see Fig. 13. One should, however, insist on the fact that this backbending has strictly nothing to do with the breaking of a Cooper pair. The rapid increase of the moment of inertia at about I = 60ħ, with a practically constant (or even slightly decreasing) rotational frequency ω, comes simply from the fact that at such a value of I (between I ≈ 60ħ and I ≈ 70ħ) the nucleus elongates substantially, increasing in this way its deformation and, at the same time, its moment of inertia. It is therefore interesting to notice that the semiclassical ETF approach leads to a moment of inertia that is very well approximated by its Thomas-Fermi, i.e., rigid-body, value. The Thouless-Valatin terms, which arise from the self-consistent response of the mean field to the time-odd part of the density matrix generated by the cranking piece of the Hamiltonian, are naturally taken care of in this approach. Semiclassical corrections of order ħ², coming from the orbital motion and the spin degree of freedom, are not small individually, but compensate each other to a large extent. One has, however, to keep in mind that the shell and pairing effects, which go beyond the ETF approach, are not included in this description. These effects are not only both present, but influence each other to a large extent, especially for collective high-spin rotations of strongly deformed nuclei, as shown in [19,22,153].

C. MI shell structure and periodic orbits

We shall first outline the basic points of the POT for the semiclassical level-density and free-energy shell corrections [3,82,94]. We then apply the POT to the derivation of the MI through the rigid-body MI (with the shell corrections, see Appendix E) in the NLLLA, related to the equilibrium collective rotation with a given frequency ω [113]. For simplicity, we shall discard the spin and isospin degrees of freedom, in particular the spin-orbit and asymmetry interactions. Notice also that, from the results presented in Figs. 11 and 13 (with the help of Fig. 12), one may conclude that the main contribution to the moment of inertia of strongly deformed heavy nuclei can be found within the ETF approach to the rotational problem as a smooth rigid-body MI.

GREEN'S FUNCTION TRAJECTORY EXPANSION

For the derivation of shell effects [82] within the POT [73,89,91-94], it turns out to be helpful to use the coordinate representation (5.18) of the MI through the Green's functions G(r₁, r₂; ε) [112,113,141,142,154]. The Fermi occupation numbers n(ε) (5.3) are approximately considered at ω = 0 (ε = ε_i). In (5.18), ℓ_x(r₁) and ℓ_x(r₂) are the s.p.
angular-momentum projections onto the perpendicular rotation x axis at the spatial points r₁ and r₂, respectively. With the usual energy-spectral representation for the one-body Green's function G in the mean-field approximation, one finds the standard cranking-model expression, which, however, includes the diagonal matrix elements of the operator ℓ_x. In this sense, equation (5.18) looks more general, going beyond the standard perturbation approximation, see [113]. Moreover, the quantum criterion for the applicability of this standard cranking-model approximation, which is the smallness of the cranking-field perturbation h^ω_CF in (5.1) as compared to the distance between neighboring states of the unperturbed spectrum, becomes weaker in the semiclassical approach; see more comments below in relation to [10,21]. For the MI calculations by (5.18), through the Green's function G, one may use the semiclassical Gutzwiller trajectory expansion [89], extended to continuous-symmetry [73,91,93,95,98,99] and symmetry-breaking [73,94,102,103] problems, (5.19), where the sum runs over all isolated classical trajectories (CTs), or their families, inside the potential well V(r) which, for a given energy ε, connect the two spatial points r₁ and r₂. Here S_CT is the classical action along such a CT, σ_CT denotes the phase associated with the Maslov index through the number of caustic and turning points along the path CT, and φ_d is the constant phase depending on the dimension of the problem [73,91,94,103]. The amplitudes A_CT of the Green's function depend on the classical stability factors and the trajectory degeneracy, due to the symmetries of the potential [73,91,98,102,103]. For the case of isolated CTs [73,89], one has the explicit semiclassical expression (5.21) for the amplitudes through the stability characteristics of the classical dynamics. Here, J_CT(p₁, t_CT; r₂, ε) is the Jacobian for the transformation between the two sets of variables p₁, t_CT and r₂, ε; p₁ and t_CT are the initial momentum and the time of motion of the particle along a CT, t_CT = ∂S_CT/∂ε, and r₂ and ε are its final coordinate and energy. In the more general case, if the mean-field Hamiltonian h obeys a higher symmetry, like that of spherical or harmonic-oscillator potentials with rational ratios of frequencies, one has to use other expressions for the amplitude A_CT(r₁, r₂; ε) for closed trajectories of a finite action (with reflection from the potential boundary), taking into account such symmetries. They account for an enhancement in powers of ħ owing to the classical degeneracy (see [73,91,94,103] and the discussion in the subsection below). In the case of the bifurcation of POs, generated by a symmetry breaking, one may use the ISPM [102,103], especially for superdeformed shapes of the potential. Some examples of the specific amplitudes for the degenerate families of closed POs in the harmonic-oscillator (HO) potential are given in Appendix E of [113]. Note that (5.21) can be applied to any potential well for the contributions of closed and non-closed trajectories which can be considered as isolated ones (no PO families) for the given end points r₁ and r₂. Among all the CTs in (5.19), we may single out CT₀, which connects r₁ and r₂ directly, without intermediate turning points, see Fig. 14. It is associated with the component G_CT₀ of the sum (5.19) for the semiclassical Green's function.
Therefore, for the Green's function G(r₁, r₂; ε) (5.19), one then has the separation (5.22). In the NLLLA [113,154], the first term G_CT₀ of the splitting in the middle of (5.22) is given by (5.23), where p(r) = √(2m[ε − V(r)]) and V(r) is the mean nuclear potential. The second term, G₁, is the fluctuating part of the Green's function (5.19), determined by all the other trajectories CT₁ ≠ CT₀ in the sum (5.19), with reflection points at the potential surface (see one such trajectory, CT₁, in Fig. 14); G_CT₁ is the Green's function component (5.20) taken at CT = CT₁, i.e., CT ≠ CT₀.

LEVEL-DENSITY AND ENERGY SHELL CORRECTIONS

The level density, g(ε) = Σ_i δ(ε − ε_i), where ε_i is the quantum spectrum, is identically expressed in terms of the Green's function G. According to (5.22), this level density can be presented semiclassically as a sum of smooth and oscillating components (5.28) [73,89,91,94], where g_ETF(ε) is given by the ETF approach, related to the component G₀ in (5.22) in the NLLLA (5.23), r₁ → r₂ → r [72,73,94,155]. The local part of g_ETF(ε) is the simplest, Thomas-Fermi (TF) level density g_TF(ε) [73]. The second, oscillating term δg_scl(ε) of the level density (5.28) corresponds to the fluctuating part G₁ in the sum (5.22) for the Green's function G near the Fermi surface. The stationary-phase conditions for the (standard or improved) SPM evaluation of the integral of G₁ over the spatial coordinates r are the PO equations. As a result, one arrives at the PO sum (5.29) for this oscillating level density [73,89,91,92]. The sum runs over the isolated POs and, in the case of degeneracies owing to the symmetries of a given potential well, over all families of POs. B_PO is the oscillation amplitude depending on the stability factors [73,89,91-94,102], S_PO(ε) is the action integral along a given PO, and σ_PO is the Maslov phase associated with the turning and caustic points along the PO, see [73,94,103] for detailed explanations. The semiclassical free-energy shell corrections δF_scl at finite temperature (T ≲ ħΩ ≪ ε_F) can be expressed through the PO components of the energy shell corrections δU_scl [73,91,94] (see Appendix E.1), (5.30), with the exponentially decreasing temperature-dependent factor (5.31) [73,91,106,108,109,113]. Finally, through (5.30), the shell corrections δF_scl and δU_scl are determined by the PO level-density shell-correction components δg_PO(ε) of (5.29) at the chemical potential, ε = μ ≈ ε_F. In (5.30), one has the additional factor ∝ 1/t²_PO, which ensures the convergence of the PO sum (without averaging δg(ε) over the s.p. spectrum); t_PO is the time of motion along the PO (the PO period). Another (exponential) convergence of δF_scl (5.31), with increasing period t_PO and temperature T, is provided by the temperature-dependent factor in front of δU_PO.

FROM CRANKING MODEL TO THE RIGID BODY ROTATION

Substituting (5.22) into (5.18), one has a sum of several terms (5.32); the indexes n and n′ run independently over the two integers 0 and 1. As shown in Appendix E.2a, the main smooth part of the semiclassical MI Θ^x_scl (5.32) is associated with the TF (ETF) rigid-body component through the first term, Θ^00_x, averaged over the phase-space variables; see section V B, also [88,113,154], and the previous publications [83-86]. The statistical averaging over the phase-space coordinates removes the non-local long-range correlations.
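As a quantitative aside on the temperature factor (5.31) mentioned above: the sketch below (Python) assumes the standard POT suppression factor x/sinh(x) with x = π t_PO T/ħ (the exact factor of the text is not reproduced), showing how long orbits and higher temperatures exponentially damp the shell corrections; the PO period is a hypothetical gross-shell-scale value.

import math

HBAR = 6.582e-22                    # MeV * s

def po_temperature_factor(t_po, T):
    """Assumed POT damping factor x/sinh(x); behaves like 2x*exp(-x) for x >> 1."""
    x = math.pi * t_po * T / HBAR
    return x / math.sinh(x) if x > 1e-12 else 1.0

t_po = 6.0e-22                      # hypothetical shortest-PO period (s)
for T in (0.5, 1.0, 2.0, 3.0):      # temperature in MeV
    print(f"T = {T:3.1f} MeV:  factor = {po_temperature_factor(t_po, T):.3e}")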
The corrections of the smooth ETF approach to the TF approximation were obtained in [86][87][88]; see Sect. V B for a review of these works. Using the transformation of the coordinates r₁ and r₂ to the center-of-mass and relative ones, r and s₁₂,

r = (r₁ + r₂)/2, s₁₂ = r₂ − r₁, (5.34)

in (5.33), one greatly simplifies the calculation of the oscillating terms Θ_x^{01} + Θ_x^{10} + Θ_x^{11}. In this way, one finds that the shell component δΘ_x^{01} of Θ_x^{01} [see (5.33) at n = 0 and n′ = 1] dominates in the MI shell correction δΘ_x^scl within the NLLLA (5.23); see Appendix E.2b. Indeed, in this approximation, substituting the components G₀ and G₁ of the Green's function (5.22) [see (5.24) for G₀] into (5.33) for Θ_x^{01}, and using the averaging over the phase-space variables in the fluctuating (shell) part δΘ_x of Θ_x, one arrives at the relationship for the corresponding shell corrections (see Appendix E.2b):

δΘ_x^scl ≈ δΘ_x^{01} ≈ δΘ_x^{(RB)}. (5.35)

Here, δΘ_x^{(RB)} is the shell correction to the rigid-body MI Θ_x^{(RB)}, which is related to the semiclassical particle density ρ(r) through

Θ_x^{(RB)} = m ∫ dr ρ(r) r²_⊥x, (5.36)

with

r²_⊥x = y² + z². (5.37)

The particle density ρ(r), and therefore the MI (5.36), can be expressed in terms of the Green's function G,

ρ(r) = −(2/π) Im ∫ dε n(ε) G(r, r; ε), (5.38)

where n(ε) are the occupation numbers. With the splitting of the Green's function (5.22), one obtains the semiclassical sum of the smooth and oscillating (shell) components [96,97]:

ρ_scl(r) = ρ_ETF(r) + δρ(r). (5.39)

The integration over ε in (5.38) is performed over the whole s.p. energy spectrum. For the Green's function G, we applied the semiclassical expansion (5.19) in terms of the sum (5.22) of CTs in the last equation for the semiclassical particle density ρ_scl(r). The first term in (5.39) is the (extended) Thomas–Fermi component (see Appendix E.2a). Substituting the particle-density splitting (5.39) into (5.36), one has the corresponding semiclassical expression for the rigid-body MI,

Θ_x,scl^{(RB)} = Θ_x,ETF^{(RB)} + δΘ_x^{(RB)}. (5.40)

We introduced here the shell corrections δρ (see [97]) to the particle density ρ and δΘ_x^{(RB)} to the rigid-body MI,

δΘ_x^{(RB)} = m ∫ dr δρ(r) r²_⊥x, (5.41)

δρ(r) = −(2/π) Im Σ_{CCT₁} ∫ dε n(ε) G_{CCT₁}(r₁, r₂; ε), (5.42)

where G_{CCT₁} is given by (5.20) with the CT being the closed trajectory CT₁, i.e., CCT₁ (r₁ → r₂ → r). With the smooth (extended) TF MI component (E.13), see also section V B, equation (5.35) yields semiclassically

Θ_x^scl ≈ Θ_x,ETF^{(RB)} + δΘ_x^{(RB)} ≈ Θ_x,scl^{(RB)}, (5.43)

which is in agreement with the adiabatic picture of the statistically equilibrium rotation [113]. Note that the non-adiabatic MI at arbitrary rotation frequencies for the HO mean field, derived by Zelevinsky [16], was extended to finite temperatures in [113]. We emphasize that, owing to the averaging over the phase-space variables, only the NLLLA contribution survives. Note also that the classical angular-momentum projection (E.6) in the rotating body-fixed coordinate system is caused by the global rotation with a given frequency ω rather than by the motion of particles along the trajectories inside the nucleus with respect to this system, as usually considered in the cranking model. According to the time-reversal symmetry of the Routhian, the particles indeed move, in the non-rotating coordinate system, along these trajectories in both opposite directions. Their contributions to the total angular momentum of the nucleus turn out to be zero. Performing then the integration over s in (E.18) in the spherical coordinate system, one obtains the rigid-body shell correction δΘ_x^{(RB)} in the NLLLA, as explained in Appendix E.2.
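Since the rigid-body MI (5.36) is just the classical moment of inertia of the density distribution, it is easy to check numerically for a sharp-surface leading-order density. The sketch below integrates Θ_x = m ∫ ρ(r)(y² + z²) dr for a uniform spheroid by Monte Carlo and compares it with the elementary result (M/5)(b² + c²); the mass and density units and the semi-axes are illustrative assumptions.

```python
# Sketch: rigid-body MI (5.36) for a sharp uniform spheroidal density, checked by
# Monte Carlo against Theta_x = (M/5)(b^2 + c^2). All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, rho0 = 1.0, 0.16                 # nucleon mass (arb. units), rho_inf ~ 0.16 fm^-3
a, b, c = 6.0, 6.0, 8.4             # prolate spheroid semi-axes along (x, y, z)
N = 1_000_000
pts = rng.uniform(-1.0, 1.0, size=(N, 3)) * np.array([a, b, c])
inside = (pts[:, 0]/a)**2 + (pts[:, 1]/b)**2 + (pts[:, 2]/c)**2 <= 1.0
box_vol = 8.0 * a * b * c
# Theta_x = m * int rho(r) (y^2 + z^2) d^3r with rho = rho0 inside the sharp surface
theta_mc = m * rho0 * box_vol * np.mean(np.where(inside, pts[:, 1]**2 + pts[:, 2]**2, 0.0))
M = m * rho0 * (4.0 * np.pi / 3.0) * a * b * c
print(theta_mc, (M / 5.0) * (b**2 + c**2))   # agree to ~0.1% at this sample size
```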
Note that the cranking model for nuclear rotation implies that the correlation (non-local) corrections to (E.18) and (5.36) should be small enough, with respect to the main rigid-body shell component δΘ_x^{(RB)}, to be neglected within the adiabatic picture of the separation of the global rotation of the Fermi system from its vibrations and, then, of both from the internal motion of the particles. Other contributions besides the smooth rigid-body part coming from Θ_x^{00}, such as Θ_x^{10} and Θ_x^{11}, referred to as the fluctuation (non-local) corrections to the rigid-body MI, are found semiclassically to be negligibly small in the NLLLA owing to the averaging over the phase-space variables, see Appendix E.2b. In particular, for the HO Hamiltonian it was shown in [113] that there is almost no contribution of δΘ_x^{11} at leading order in ħ. Thus, with semiclassical precision, from the adiabatic cranking-model expression (5.18) we come to the MI of the statistically equilibrium rotation (5.43), which must be the rigid-body MI according to the general theorem of statistical physics. This is in agreement with the ETF approach of section V B. Our semiclassical derivations, valid for rotation frequencies ω ≪ Ω, go beyond the quantum criterion of applicability of the standard second-order perturbation approach within the cranking model, where ω must be small as compared to the distance between neighboring levels of the quantum spectrum. We point out that this weakening of the perturbation-theory criterion is similar to that achieved by the statistical averaging in heated Fermi systems and by accounting for the pairing correlations [19,21], where the temperature and the pairing gap, respectively, play the role of the distance between neighboring quantum energy levels, as the distance ħΩ between gross shells (3.49) does in the POT [91].

SHELL CORRECTIONS TO THE RIGID-BODY MI

Using (5.42) for the calculation of the rigid-body MI shell correction δΘ_x,scl^{(RB)} (5.41), one may exchange the order of the integrations over the coordinate r and the energy ε. By making use also of the semiclassical trajectory expansion (5.19) for the oscillating Green's function component G₁(r₁, r₂; ε) of the sum (5.22), one finds

δΘ_x,scl^{(RB)} = −(2m/π) Im Σ_{CT₁} ∫ dε n(ε) ∫ dr r²_⊥x G_CT₁(r, r; ε). (5.44)

As usual, with semiclassical precision, we evaluate the spatial integral by the SPM extended to continuous symmetries [73,91,94] and to the bifurcation phenomena (ISPM) [94,100,102,103,105]. The SPM (ISPM) condition reads

[∂S_CT₁(r₁, r₂; ε)/∂r₁ + ∂S_CT₁(r₁, r₂; ε)/∂r₂]* = (p₂ − p₁)* = 0, (5.45)

where the asterisk means the SPM value of the spatial coordinates and momenta, r_j = r*_j and p_j = p*_j (j = 1, 2), at the closed trajectories CT₁ in the phase space, r*₁ = r*₂ and p*₁ = p*₂. Thus, with the standard relations for the canonical variables, using the action as a generating function, one arrives at the PO condition on the right of (5.45). Within the simplest ISPM [94,102,103,105], the other smooth factors r²_⊥x and A_CT₁(r, r, ε) of the integrand in (5.44) can be taken out of the integral over r at these stationary points. Assuming that the quantum averages of y² + z² are smooth enough functions of ε as compared to the other factors, for instance δn, one may also take them approximately out of the integral over ε, at the chemical potential ε = µ. For example, for the HO potential (see [113]) they are simply exact constants. Therefore, the main contribution to the integral in (5.44) comes from the PO stationary-phase points determined by (5.45), as in the calculation of the level-density shell corrections δg_scl (5.29) [73,91,94,113].
The SPM condition (5.45) is an identity at any stationary point of the classically accessible spatial region of the particle motion filled by PO families in the case of high degeneracy, K ≥ 3. For instance, this is the case for the contribution of the three-dimensional (3D) orbits in the axially symmetric HO potential well with commensurable frequencies, ω_x = ω_y = ω_⊥ and ω_z [99,113]. For a smaller degeneracy K, the stationary points occupy some spatial subspace. In the latter case of the equatorial orbits (EQs) (K = 2) in this HO potential well, the SPM condition is an identity in the equatorial plane z = 0. Following derivations similar to those of the oscillating component δg_scl (5.29) of the level density g_scl(ε) (5.28) and of the free-energy shell correction δF_scl (5.31), one expands the smooth amplitudes and the action phases of the MI shell correction δΘ_x,scl^{(RB)} (5.44) up to the first nonzero terms (see Appendix C of [113] and Appendix E.2 here). Finally, from (5.44), one obtains [113] the PO sum (5.46) for δΘ_x,scl^{(RB)}, in which each PO term is weighted by the average

⟨r²_⊥x⟩_{PO,ε} = ∫ dr A_PO(r, r; ε) r²_⊥x / ∫ dr A_PO(r, r; ε) (5.47)

taken at ε = µ; here A_PO(r, r; ε) are the Green's function amplitudes for a closed CT₁ in the phase space, i.e., for a PO. The integration over r is performed over the classically accessible region of the spatial coordinates. The semiclassical expression (5.46) is general for any potential well. Shorter POs dominate in the PO sum (5.46) [73,91,94,106,113], see (5.31), (5.30). Therefore, according to (5.31) for δF_scl, we obtain approximately the relation

δΘ_x,scl^{(RB)} ∝ m ⟨r²_⊥x⟩_µ δF_scl, (5.48)

where ⟨r²_⊥x⟩_µ is the average value of the quantity (5.47) at ε = µ, independent of the specific PO, over the short dominating POs. For the axially symmetric HO potential well with the commensurable frequencies ω_⊥ and ω_z, as the simplest example, the integration in (5.47) over r for the 3D contribution runs over the 3D volume occupied by the 3D orbit families. For the EQ component, the integral is taken over the 2D spatial region filled by the EQ families in the equatorial (z = 0) plane [113]. In the incommensurable-frequency case (irrational ω_⊥/ω_z), one has only the EQ-orbit contributions. The average (5.47) can be easily calculated by using the Green's function amplitudes A_PO for the 3D and the EQ orbits, which are given in [99,113]. Finally, for the considered HO potential, one arrives at the explicit relation (5.49) between δΘ_x,scl^{(RB)} and the semiclassical free-energy shell correction δF_scl of the PO sums (5.31), (5.30), with a proportionality coefficient depending on the deformation parameter η_HO = ω_⊥/ω_z. For the parallel (alignment) rotation around the symmetry axis, one finds similar relations of the MI, through the rigid-body MI, to the free-energy shell corrections. Moreover, one has such relations for the smooth TF parts, in particular in the HO case; see Appendices E.2.1 here and D1 in [113]. Thus, for the total moment Θ_x [see (5.32)], one may prove semiclassically within the POT, up to the same corrections in the smooth TF part, that the MI and free-energy shell corrections are approximately proportional, (5.50), and exactly so for that HO Hamiltonian [113]. We emphasize that the POT expressions (5.49) for δΘ_x,scl and (5.50) for Θ_x,scl were derived without direct use of the statistically equilibrium rotation condition [4,113].
Substituting the semiclassical PO expansion (5.30) for the free-energy shell correction δF_scl (5.31) (after [102]), for the 3D orbit families and for the EQ POs, into (5.49), one finally arrives at the explicit POT expressions for the MI shell corrections δΘ_x in terms of the characteristics of the classical POs. For a mean field with spheroidal shapes and sharp edges (the spheroidal cavity), these derivations can be performed similarly to the HO-Hamiltonian case of [113], but with account of the specific PO degeneracies. Note that the parallel, δΘ_z, and perpendicular, δΘ_x, MI shell components are expressed through the 3D and EQ POs via the free-energy shell correction, which, generally speaking, contains both of them for deformations larger than the bifurcation ones. Whether one of these families dominates, or both coexist, depends on the surface deformation parameter (the semi-axis ratio of the spheroid). At the critical deformations, and to the right of them, one observes a significant enhancement of the MI shell corrections through the PO level-density amplitudes B_PO [see (5.29)] of the free-energy shell corrections (5.31), (5.30). Fig. 15 shows the semiclassical free-energy shell correction δF_scl [(5.31), (5.30); see also [99,113]] versus the particle-number variable, A^{1/3}, at a small temperature T = 0.1 ħω₀, for the different critical symmetry-breaking and bifurcation deformations η_HO = 1, 6/5 and 2 of the HO potential [73,113], together with the corresponding quantum SCM results for the same deformations. This comparison shows a practically perfect agreement between the semiclassical, (5.31) and (5.30), and the quantum results. For the spherical case (η_HO = 1), one has only the contributions of the families of 3D orbits with the highest degeneracy, K = 4. At the bifurcation points η_HO = 6/5 and 2, relatively simple families of these 3D POs appear along with the EQ orbits of smaller degeneracy. For η_HO = 6/5, one has mainly the contributions from the EQ POs, because the 3D orbits are generally too long in this case. At the bifurcation point η_HO = 2, one finds an interference of the two comparably large contributions of the EQ and 3D orbits with essentially different time periods t_EQ and t_3D, respectively.

COMPARISON OF SHELL STRUCTURE CORRECTIONS WITH QUANTUM RESULTS

The quantum (QM) and semiclassical (SCL) shell corrections to the MI, δΘ_x of (5.49), are compared in Fig. 16. An excellent agreement is observed between the semiclassical and quantum results, as for the free-energy shell corrections δF. This is not really astonishing because of the proportionality of δΘ_x to δF [see (5.49)]. One finds, in particular, the same clear interference of the contributions of the 3D and EQ POs in the shell corrections to the MI at η_HO = 2. The exponential decrease of the shell oscillations with increasing temperature, due to the temperature factor in front of the PO energy-shell-correction components δU_PO in (5.31), is clearly seen in Fig. 16. As the MI and free-energy shell corrections are basically proportional [see (5.46)] for any mean potential well, we may emphasize the amplitude enhancement of the MI near the bifurcation deformations, due to the corresponding enhancement of the energy-shell corrections found in [94,100,102,103,105]. The critical temperature for the disappearance of the shell effects in the MI is found, for prolate deformations (η > 1) and particle numbers A ∼ 100–200, at approximately T_cr = ħω_EQ/π ∼ ħω₀/π ≈ 2–3 MeV, just as for δF, see [73,91,113]. This effect is also general for any potential.
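The statement that the EQ orbits dominate at η_HO = 6/5 while the EQ and 3D contributions interfere at η_HO = 2 can be made quantitative through the PO periods: for ω_⊥/ω_z = k/l in lowest terms, the shortest 3D family closes after k transverse periods, so t_3D = k t_EQ (k = 6 at η_HO = 6/5, but only k = 2 at η_HO = 2). A small sketch follows, using the temperature factor x_PO/sinh(x_PO) of (5.31) for the suppression of each orbit; ħω₀ = 8 MeV is an assumed typical scale.

```python
# Sketch: temperature suppression of EQ vs 3D orbit families in the axial HO.
# For omega_perp/omega_z = k/l (lowest terms), t_3D = k * t_EQ; each orbit is
# damped by x/sinh(x), x = pi*T*t_PO/hbar, as in (5.31). hw0 is an assumption.
import numpy as np
from fractions import Fraction

hbar, hw0 = 6.582e-22, 8.0                    # MeV*s, MeV
t_eq = 2.0 * np.pi * hbar / hw0               # EQ period for omega_perp ~ omega_0

def damp(x):                                  # POT temperature factor x/sinh(x)
    return x / np.sinh(x)

for eta in (1.2, 2.0):
    k = Fraction(eta).limit_denominator(100).numerator   # t_3D = k * t_EQ
    for T in (0.5, 2.0):                      # temperature in MeV
        x = np.pi * T * t_eq / hbar
        print(f"eta={eta}, T={T}: EQ factor {damp(x):.3g}, 3D factor {damp(k*x):.3g}")
```

At η_HO = 6/5 the 3D factor is orders of magnitude below the EQ one already at moderate T, while at η_HO = 2 the two factors remain comparable at low T, consistent with the interference pattern described above.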
The particle-number dependence of the shell corrections δΘ_z to the total MI Θ_z (alignment) is not shown, because it is similar to that of δΘ_x through their approximate relations, δΘ_z ∝ δΘ_x ∝ δF.

VI. CONCLUSIONS

We derived the dynamical equations of motion, such as the conservation of particle number, momentum and energy, as well as the general transport equation for the entropy, for low-frequency excitations in nuclear matter within the Landau quasiparticle theory of heated Fermi liquids. Our approach is based essentially on the Landau–Vlasov equation for the distribution function, and it includes all its moments in phase space, in contrast to several truncated versions of fluid dynamics, similar to the hydrodynamic description in terms of a few first moments. From the dynamics of the Landau–Vlasov equation for the distribution function, linearized near the local equilibrium, we obtained the momentum flux tensor and the heat current in terms of the shear modulus, viscosity, in-compressibility and thermal-conductivity coefficients, as for very viscous liquids, sometimes called amorphous solids. We obtained the dependence of these coefficients on the temperature, the frequency and the Landau interaction parameters. We derived the temperature expansions of the density–density and temperature–density response functions for nuclear matter and obtained their specific expressions for temperatures small as compared to the chemical potential. The hydrodynamic limit of normal liquids for these response functions was obtained within perturbation theory from the Landau–Vlasov equation, for both the distribution function and the sound velocity, as for an eigenvalue problem. In this way we found the Landau–Placzek and first-sound peaks in the corresponding strength functions as the hydrodynamic limit of the Fermi-liquid theory for heated Fermi systems. The former (heat-pole) peak was obtained only because of the use of the local equilibrium in the linearized Landau–Vlasov dynamics, instead of the global static Fermi distribution of giant multipole-resonance physics. This is very important for the dispersion equation and its wave-velocity solutions. We obtained the isolated, isothermal and adiabatic susceptibilities for Fermi liquids and showed that they satisfy the ergodicity condition of the equivalence of the isolated and adiabatic susceptibilities, as well as the general Kubo inequality relations. We found the correlation function using the fluctuation–dissipation theorem and discussed its relation to the susceptibilities and to the Landau–Placzek "heat pole" in the hydrodynamic limit. We applied the theory of heated Fermi liquids to the Fermi-liquid drop model of finite nuclei within the Landau–Vlasov dynamics in the nuclear interior, with macroscopic boundary conditions in the effective sharp-surface approximation. Solutions of this problem in terms of the response functions and transport coefficients were obtained. We considered the hydrodynamic limit of these solutions and found the "heat pole" correlation function for frequencies smaller than some critical frequency. The latter was realized only because of the use of the local equilibrium for the distribution function. The isolated, isothermal and adiabatic susceptibilities for finite nuclei within the FLDM in the ESA were derived. We showed that the ergodicity condition is satisfied also for finite Fermi systems, as for infinite nuclear matter, in the same ESA.
We found a three-peak structure of the collective strength function: the "heat" peak, the standard hydrodynamic peak and the essentially Fermi-liquid peak. The conditions for the existence of such modes were analyzed, and the temperature dependence of their transport coefficients, such as friction, stiffness and inertia, was obtained, in particular in the hydrodynamic limit. We arrived at an increasing temperature dependence of the friction coefficient for the specific Fermi-liquid mode, which exists due to the Fermi-surface distortions. At sufficiently large temperatures, we showed good agreement with the results for the friction obtained earlier within the microscopic shell-model approach of [24]. The correlation functions found in the FLDM and in quantum shell models were discussed in relation to the susceptibilities and the ergodicity properties of finite nuclei. The expression for the surface symmetry-energy constant k_S was derived from simple isovector solutions for the particle density and energies in the leading ES approximation. We used them for the calculations of the energies, the sum rules of the IVGDR strength and the transition densities within the HDM and the FLDM [33] for several Skyrme-force parametrizations. The surface symmetry-energy constant depends strongly on the fundamental, well-known parameters of the Skyrme forces, mainly through the coefficient of the density-gradient terms in the isovector part of the energy density. The value of this isovector constant is also rather sensitive to the SO interaction. The IVGDR strength is split into the main and satellite peaks. The mean energies and EWSRs within both the HDM and the FLDM are in fairly good agreement with the experimental data. Semiclassical functional expressions were derived in the framework of the extended Thomas–Fermi approach. We used these analytical expressions to obtain a self-consistent description of rotating nuclei, where the rotation velocity affects the structure of the nucleus. It has been shown that such a treatment leads, indeed, to the Jacobi phase transition to triaxial shapes, as already predicted in [152] within the rotating LDM. We emphasize that the rigid-body moment of inertia gives a quite accurate approximation to the full ETF value. Being aware of the mutual influence between rotation and pairing correlations [19,22,153], it would be especially interesting to work on an approach able to determine the nuclear structure as a function of the angular velocity, as we have done here within the ETF approach, but taking the pairing correlations and their rotational quenching into account. We also derived the shell corrections of the MI in terms of the free-energy shell corrections within the nonperturbative extended POT, through those of the rigid-body MI of the equilibrium rotation, which is exact for the HO potential. For the HO, we extended Zelevinsky's derivation of the non-adiabatic MI at arbitrary rotation frequency to the finite-temperature case. For the deformed HO potential, one finds a perfect agreement between the semiclassical POT and the quantum results for the free-energy and MI shell corrections at several critical deformations and temperatures. For larger temperatures, we showed that the short EQ orbits are mostly dominant. For small temperatures, one observes a remarkable interference of the short 3D and EQ orbits in the superdeformed region. An exponential decrease of all shell corrections with increasing temperature is observed, as expected.
We also point out the amplitude enhancement of the MI shell corrections due to the bifurcation catastrophe phenomenon. As further perspectives, it would be worthwhile to apply our results to calculations of the IVGDR structure within the Fermi-liquid droplet model, in order to determine the value of the fundamental surface symmetry-energy constant from a comparison with the experimental data for the pygmy resonance [137,138] and with theoretical calculations [52,[132][133][134][135][136]. For further extensions to the description of the isovector low-lying collective states, one first has to use the POT to include the shell effects semiclassically [73,91,[155][156][157]. It would also be worthwhile to apply this semiclassical theory to the shell corrections of the MI for the spheroidal cavity and to the inertia parameter of the low-lying collective excitations in nuclear dynamics involving magic nuclei [110,141,142,155]. One of the most attractive subjects of the semiclassical periodic-orbit theory, in line with the main works of S.T. Belyaev [6,10,12,13], is its extension to the pairing correlations [14,97] and to their influence on the collective vibrational and rotational excitations in heavy deformed neutron-rich nuclei [19,22,153] (see also [158] for the semiclassical phase-space dynamical approach to the Hartree–Fock–Bogoliubov theory).

Appendix A: Thermodynamic relations

A.1. General relations

For homogeneous systems the intensive quantities depend only on two independent variables. For instance, the entropy per particle, S/N = ς(E/N, V/N), depends only on the energy and volume per particle, E/N and V/N, respectively. For such systems, the adiabaticity condition may simply be expressed as ς = const. Commonly in nuclear physics one uses the particle density ρ = N/V, in which case the chemical potential can be expressed as

µ = (∂φ/∂ρ)_T, (A.5)

with φ = F/V being the free energy per unit volume. For differential quantities there exist various variants of the Gibbs–Duhem relation, as follows from Legendre transformations. Thus, for the derivatives of the pressure P, considered as functions of T and ρ, one gets relation (A.6) from (A.5). In deriving such relations it is useful to employ special properties of the Jacobian, which allow one to perform transformations between different variables (see, e.g., [116]). These relations will be used below to get the specific heats as well as the isothermal and adiabatic compressibilities, together with the corresponding susceptibilities. At first we shall look at the in-compressibilities, defined by the derivative of the pressure over the particle density (multiplied by a factor of 9). At constant entropy per particle ς, the adiabatic in-compressibility K_ς reads

K_ς = 9 (∂P/∂ρ)_ς. (A.7)

To get the corresponding quantity at constant temperature, K_T, one only needs to replace ς by T. According to (A.6) and (A.3), one obtains the explicit expression (A.8). Next we turn to the specific heats at constant volume and at constant pressure. If measured per particle, they can be defined in terms of the entropy per particle ς as

C_V = T (∂ς/∂T)_ρ, C_P = T (∂ς/∂T)_P. (A.9)

They obey the following well-known relation to the in-compressibilities [115,116]:

C_P/C_V = K_ς/K_T (A.10)

(an elementary symbolic check of this identity is sketched below). For the variation of the entropy ς per particle, one finds (A.11) after using (A.4) and the specific heat C_V of (A.9). To get the first term, we applied the identity (A.12), which is a consequence of (A.4).
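The relation (A.10) between the specific heats and the in-compressibilities can be verified symbolically in the simplest limiting case. The sympy sketch below checks C_P/C_V = K_ς/K_T for a classical monatomic ideal gas; this is a sanity check of the general identity only, not of the Fermi-liquid expressions of the text.

```python
# Sanity check of (A.10), C_P/C_V = K_sigma/K_T, for a classical monatomic ideal
# gas (an elementary limiting case; k_B = 1 and constant entropy offsets dropped).
import sympy as sp

rho, T, c = sp.symbols('rho T c', positive=True)
Psym = sp.Symbol('P', positive=True)
P = rho * T                                             # equation of state
varsigma = sp.Rational(3, 2) * sp.log(T) - sp.log(rho)  # entropy per particle (+const)

C_V = sp.simplify(T * sp.diff(varsigma, T))                      # = 3/2
C_P = sp.simplify(T * sp.diff(varsigma.subs(rho, Psym / T), T))  # = 5/2

K_T = 9 * sp.diff(P, rho)                               # = 9T
P_adiabat = rho * (c * rho**sp.Rational(2, 3))          # adiabat: T = c*rho^(2/3)
K_s = (9 * sp.diff(P_adiabat, rho)).subs(c, T / rho**sp.Rational(2, 3))

print(sp.simplify(C_P / C_V), sp.simplify(K_s / K_T))   # both equal 5/3
```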
A.2. Landau theory proper

In the following, we repeat some important relations discussed in [57], without dwelling on their proofs. These relations will be needed to derive some specific thermodynamic properties for quantities such as the entropy or the specific heats. A basic element in Landau theory is the microscopic expression for the entropy per particle,

ς = −(1/ρ) ∫ [2 dp/(2πħ)³] [n ln n + (1 − n) ln(1 − n)], (A.13)

[cf. (2.2)]. The (static) quasiparticle density ρ in (A.13) may be expressed as

ρ = ∫ [2 dp/(2πħ)³] n(ε_p), (A.14)

which may be rewritten through the density of states N(T) (2.5). The additional factor 2 in the integration measure accounts for the spin degeneracy. The expressions on the right in both (A.13) and (A.14) are obtained after integrating by parts. The brackets ⟨···⟩ denote averages which, written for any quantity A(r, p, t), are defined as

⟨A⟩ = ∫ dp A(r, p, t) (−∂n/∂ε_p) / ∫ dp (−∂n/∂ε_p). (A.15)

For the proof of the second equation, we refer to (3.35) of [57] (mind, however, a difference in the notation for the specific heat: our ρC_V is identical to the C_V of [57]). For our C_V, one may derive formula (A.17) (see (3.34) of [57]). Collecting (A.8), (2.7) and (A.18), one can write the corresponding variation of the pressure. Thermodynamic quantities such as in-compressibilities and susceptibilities are calculated under different conditions, such as fixed temperature or fixed entropy. As is well known (see, e.g., [115]), these (in-)compressibilities may be associated with different sound velocities. To make use of the adiabaticity condition mentioned earlier, we need the derivatives of the entropy per particle ς(ρ, T). The ones arising in (A.11) can be simplified by exploiting the specific Fermi-liquid expressions given in (A.17) and the second relation of (A.18) between the entropy per particle ς and the specific heat C_V. Next we turn to the adiabatic in-compressibility K_ς (A.7). It may be expressed through the isothermal one, K_T, given in (2.7); see (2.82). To derive this relation, the Jacobian transformation from (ρ, ς) to (ρ, T) for the derivatives of the pressure in (A.7) has been applied [mind also (A.18), (A.8) and (A.21)]. Finally, the ratio of the specific heats is found from (A.10), (2.7) and (2.82).

A.3. Low temperature expansion

In this subsection, we address the temperature dependence of the quantities introduced above. It may be derived as discussed in [57] and is conveniently expressed by expansions in terms of T̃ = T/ε_F, with ε_F being the Fermi energy at zero temperature, ε_F = p_F²/(2m*) = (3π²ħ³ρ)^{2/3}/(2m*). For some of the quantities discussed below, we shall include terms of third order in T̃, which are not considered in [57]. From (A.14) one gets the low-temperature expansion of the particle density ρ(µ, T) as a function of the chemical potential µ and the temperature T. For the chemical potential µ, one obtains µ ≈ ε_F[1 − (π²/12) T̃² + ...], which is typical for a system of independent fermions. At this stage it may be worthwhile to mention that the formulas presented here remain largely unchanged in the presence of a density-dependent potential V(ρ). As long as such a potential does not depend on the momentum, we may just change our s.p. energy ε_p^{g.e.} to p²/(2m*) + V(ρ), and the chemical potential µ to the µ′ = µ − V(ρ) of [57]. For the density of states N(T) of the quasiparticles, one finds the corresponding expansion from (2.5), where N(0) is given by (2.53). Similarly, one gets the expansion of M(T), defined in (A.16). In contrast to [57], we include here a temperature correction, which is of interest for some of the quantities described in the text. The specific heat C_V (A.19) per particle at constant volume becomes linear in T̃ at leading order, C_V ≈ (π²/2) T̃ + O(T̃³). For the isothermal in-compressibility K_T, one gets the expansion from (2.7) and (A.25). Likewise, for the in-compressibility modulus K_ς (2.82) at constant entropy ς per particle, one obtains the corresponding expansion (A.29).
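The leading Sommerfeld term of the chemical potential quoted above, µ ≈ ε_F[1 − (π²/12)T̃²], can be checked numerically for a free Fermi gas by inverting ρ(µ, T) at fixed density. A minimal sketch follows (units ħ = m* = 1 and ε_F = 1; an illustration only, not the full quasiparticle calculation).

```python
# Numerical check of mu(T) ~ eps_F [1 - (pi^2/12) Ttilde^2] for a free Fermi gas;
# units hbar = m* = 1, eps_F = 1 (all illustrative assumptions).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def density(mu, T):
    # rho(mu,T) ~ int sqrt(e) f(e) de; the overall prefactor cancels at fixed rho
    f = lambda e: np.sqrt(e) / (1.0 + np.exp((e - mu) / T))
    return quad(f, 0.0, mu + 40.0 * T, points=[mu], limit=200)[0]

rho0 = density(1.0, 1e-4)                     # density fixed so that eps_F = 1
for T in (0.02, 0.05, 0.10):
    mu = brentq(lambda x: density(x, T) - rho0, 0.3, 1.5)
    print(T, mu, 1.0 - np.pi**2 / 12.0 * T**2)   # numerical vs Sommerfeld
```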
Using (A.29), the adiabatic sound velocity v^{(ς)} (cf. [115,116]) can be expressed accordingly. The ratio of the specific heats (A.10) may be calculated using the expansions of the in-compressibilities. Thus, from (A.27) and (A.32), one obtains the expansion (A.33) of the specific heat at fixed pressure.

A.4. Thermodynamic relations for a finite Fermi-liquid drop

In this subsection, we apply the formulas derived above to extend the derivations of the boundary conditions of [26,37,38] to the case of equilibrium at a finite T. As in these papers, the finite Fermi-liquid drop is treated in the effective sharp-surface approximation; see subsection III B 2 and Appendix D. Applying the standard thermodynamic relations dE = T dS − P dV − P_Q dQ and dG = −S dT + V dP − P_Q dQ, we include the change of the collective variable Q (see, e.g., [24,29]). G is the Gibbs free energy, G = F + PV = E − TS + PV, defined similarly to the free energy F but with the volume V replaced by the pressure P as independent variable. For the FLDM it is more convenient to use G rather than F, simply because in general the volume may not be conserved, whereas the pressure has to be fixed by the boundary condition (3.22). The Gibbs free energy is used for deriving these boundary conditions, as well as for the calculation of the coupling constants and susceptibilities associated with the operator F̂(r) (3.28). For the following derivations, we need the relations for the thermodynamic potentials per particle. The Gibbs free energy per particle, G/N, which is identical to the chemical potential µ, is related to the corresponding free energy F/N by the relation G/N ≡ µ = F/N + P/ρ. For a finite Fermi-liquid drop, where the particle density ρ is a function of the coordinates (smooth inside and sharply decreasing in the surface region), these relations are written, as in [26,37,38], through the variational derivatives δg/δρ and δφ/δρ, with g and φ the thermodynamic-potential densities per unit volume, respectively; this relation now takes the form (A.34). These densities depend on the coordinates through ρ and its gradients. Their calculation is carried out from the variations of the corresponding total integral quantities G and F, with subsequent integration by parts; see [26,37,38] and (17) of [37]. The one-to-one correspondence of this derivation with that explained in [26,37,38] becomes obvious if we note that equation (17) was found from the adiabatic condition of a constant entropy per particle (ε here is the same as δε/δρ in the notation of [37]). The variational derivative δg/δρ (A.34) (or the chemical potential µ) now appears in the key equation for the derivation of the surface condition (3.22), in which b_V is the nucleon binding energy in infinite nuclear matter and H is the mean curvature of the nuclear surface, H = 1/R₀ for the spherical shape at equilibrium. The index "vol" means that the Gibbs free energy per particle is taken as that found in the nuclear interior. Hence, it is a smooth quantity taken at the nuclear surface, as are the quantities on the l.h.s. of the boundary conditions. In the second equation, we applied (A.20), which shows that the expression in the middle of (A.39) is proportional to the gradient of the particle density, with a smooth coefficient related to the in-compressibility K. Relation (A.39) will be used in Appendix C for the calculation of several coupling constants and susceptibilities at constant temperature and constant entropy, as well as in the static limit ω → 0, with the corresponding in-compressibility modulus and particle density in the last equation of (A.39).
For the derivations of the susceptibilities in Appendix C and of the ratio of the surface-energy constants (C.21), we also need here the thermodynamic relation (A.40). We obtained this relation as explained in Appendix A1 of [29], with the only change being the replacement of the free energy F by the Gibbs free energy G. The derivatives in these equations should be taken at constant pressure instead of constant volume of the Fermi-liquid drop.

b. THE SHEAR MODULUS AND VISCOSITY

The shear modulus λ and the viscosity ν can now be found from a comparison of (B.2) for continuous matter with the explicit expressions (B.5), obtained above from the Fermi-liquid distribution function δf^{l.e.}(q, p, ω) (B.4) for the same stress-tensor components σ_zz and σ_xz. Indeed, substituting (B.5) into the l.h.s. of (B.2) and cancelling the velocity-field components on both sides, one finds the two relations (B.10). From the first equation, one has the ratio δT/δρ (B.11). Separating the real and imaginary parts in the second equation, one obtains the shear modulus λ (B.12) and the viscosity ν (B.13). With these constants λ and ν, the equations (2.25), (2.26), (2.27) and (2.22) become identities. The aim of the following derivations of the shear modulus and the viscosity is to simplify J₁ (B.7), J₂ (B.8) and χ_xz (B.9). For this aim, we make use of transformations of averages of the type ⟨p_x^k p_z^l ε_p^m (q·v_p)^n / D_p⟩_{g.e.}, with integers 0 ≤ k ≤ 4, 0 ≤ l ≤ 4, m = 0, 1 and n = 0, 1, in terms of the simpler functions χ_n (n = 0, 1, 2) introduced in [57] for the response functions; see (2.54). For these functions, one has simple temperature and hydrodynamic expansions, presented below at the end of this Appendix B. After such lengthy but simple algebraic derivations, one finally arrives at the explicit expressions. For the following derivations of the thermal conductivity κ in Fermi liquids, we need to derive the equation for the temperature T from the general transport equation (2.41). The latter, in the linear approximation with respect to the dynamical variations δf, in terms of the moments such as the velocity field u (2.16), the particle density δρ, the entropy density per particle δς, and so on, takes the form (B.17), where j_T is the heat current, given in terms of the thermal conductivity κ and the temperature gradient by (2.39). By making use of the thermodynamic relation for the entropy ς per particle,

dς = (∂ς/∂P)_T dP + (∂ς/∂T)_P dT, (B.18)

and the well-known arguments leading to the thermal-conductivity equation, we consider the process at constant pressure rather than at constant particle density. (As in Sec. II B, we again omit the variation symbol δ.) With the help of (B.18), one then arrives at the Fourier thermal-conductivity equation (B.19), where C_P is the specific heat per particle at constant pressure, see (A.33). As shown in section II C and subsections B.1a and B.1b, many physical quantities, such as the response functions, see (2.58), the shear modulus (B.12) and the viscosity (B.13), can be expressed in terms of the same helpful functions χ_n (2.54). For this reason, it is easy to obtain their LWL limit by expanding only the χ_n in the small parameter τ_q. For small τ_q, one can use the asymptotic expansions of the Legendre function of the second kind, Q₁(ζ), and of its derivatives, which enter the χ_n, according to (2.69), (2.70) and (2.71). This approximation is valid for large arguments ζ.
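The large-ζ asymptotics invoked here follows from the closed form Q₁(ζ) = (ζ/2) ln[(ζ+1)/(ζ−1)] − 1, whose expansion for ζ ≫ 1 is Σ_{n≥1} 1/[(2n+1)ζ^{2n}] = 1/(3ζ²) + 1/(5ζ⁴) + ... A short numerical check is sketched below.

```python
# Check of the large-zeta asymptotics of the Legendre function of the second kind:
# Q_1(z) = (z/2) ln[(z+1)/(z-1)] - 1 = sum_{n>=1} 1/[(2n+1) z^(2n)] for z > 1.
import numpy as np

def Q1(z):
    return 0.5 * z * np.log((z + 1.0) / (z - 1.0)) - 1.0

def Q1_asym(z, nmax=4):
    return sum(1.0 / ((2 * n + 1) * z**(2 * n)) for n in range(1, nmax + 1))

for z in (2.0, 5.0, 10.0):
    print(z, Q1(z), Q1_asym(z))   # the agreement improves rapidly with z
```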
Substituting these expansions into the functions χ_n (2.54) and ℘ (2.52), one gets, to fourth order in τ_q, the corresponding LWL expansions. With these expressions, one obtains the collective response function χ^coll_DD of (2.58), (2.59) through

ℵ(s) ≡ ℵ(τ_q, s₀, s₁) = (π²τ_q³/27) { −3is₀ + [1 + 6s₀² + 3s₁] τ_q + (π²T̃²/120) [ 93i − (2 + 186s₀² + 93s₁) τ_q ] } N²(0), (B.24)

[and similarly for the temperature–density response function (2.66)]. These two quantities determine the expansion of the function D(s) ≡ D(τ_q, s₀, s₁) (2.59) in powers of τ_q and then, approximately, the excitation modes given by the dispersion relation (2.67). Indeed, setting to zero the coefficients in front of each power of τ_q in this expansion of D(τ_q, s₀, s₁), we obtain equations for the unknown quantities s₀ and s₁ of (2.72). Solving these equations, one obtains the positions of the poles as given in (2.74) and (2.75). The shear-modulus (λ) and viscosity (ν) coefficients enter the response functions χ^coll_FF (3.46) and (3.44) through the combination (λ − iνω)/(ρ₀ε_F). The LWL expansion of this combination can be obtained with the help of (B.21), (B.22) and (2.72), together with the expansions of all static quantities in T̃ (see Appendix A), but taking fourth-order terms into account. Separating the real and imaginary parts in these equations, one gets the LWL approximation for both real coefficients λ and ν. The terms linear in ωτ determine the hydrodynamic viscosity ν^{(1)} (2.93), and the terms proportional to 1/ωτ are related to ν^{(2)} (2.94); see the discussion of the "heat pole" for the FLDM transport coefficients in Sec. III C. The LWL approximation for the thermal conductivity κ is obtained similarly. The explicit final expressions for the viscosity ν and the thermal conductivity κ are presented and discussed in subsection II D in the LWL limit, in connection with the first-sound and overdamped (heat-pole) modes, see (2.92) and (2.95). As seen immediately from (B.28), the terms linear in τ_q for the shear modulus λ appear as high-temperature corrections proportional to T̃⁴. They are regular in ωτ and, therefore, are totally immaterial; see more discussion in the subsection mentioned above. In the linear approximation in τ_q of the LWL limit, it is easy to check that the derivative δT/δρ (B.11) is the same as that obtained in terms of the response functions in (2.84); therefore, the in-compressibility K_tot (2.31) turns into the adiabatic one.

Appendix C: Coupling constants and susceptibilities

Let us consider the change of the average F(r) of the operator F̂(r) (3.7) due to a quasistatic variation of the particle density ρ_qs(r, Q, T), see (C.1). The index "X" denotes one of the conditions: constant temperature (X = T), constant entropy (X = "ad"), or the static limit ω → 0 (X = "ω = 0"). We shall follow the notations of [24,29], omitting the index ω = 0 for the coupling constant (k_{ω=0} ≡ k), the surface-energy constant (b_S^{ω=0} ≡ b_S), and the in-compressibility [K_{ω=0} ≡ K = K_tot for ω = 0, see (2.31)]. We write it as the zero argument for the isolated susceptibility, χ_{ω=0} ≡ χ(0), and the stiffness coefficient, C_{ω=0} = C(0). f_X and δf_X denote the quantity f and its variation provided the condition X is fulfilled. The index "qs" stands for the quasistatic quantities, as in [29], and will be omitted within this Appendix. Note that in the FLDM the operator F̂(r) (3.28) depends on X through the derivatives of the particle density; for this reason, the upper index X appears in F̂^X(r) of (C.1).
From the first of (C.1), with (3.28) and (C.2), one gets the self-consistency condition (3.11), with the expression (C.4) for the coupling constant. We also omit the lower indices "FF" for the susceptibilities. Note that −k_X^{−1} and χ_X are not identical, because we earlier neglected higher-order A^{−1/3} corrections in the derivation of the operator F̂ (3.28), in particular in the FLDM approximation (3.26) for the quasistatic particle density ρ_qs. Equation (C.5) is in agreement with (3.10) (identical to equation (3.1.26) of [29]); see also (3.70), (3.102), for the specific relation between the coupling constant k^{−1} and the isolated susceptibility χ(0) in the presence of the stiffness term C(0) in "the zero-frequency limit" within the FLDM. As shown in Sec. III C 1 through (3.46), by using the expansion in the small parameter kC (3.70) up to second-order terms in kC, the isolated susceptibility χ(0), see (3.14) at ω = 0, is related to the coupling constant k^{−1} by (3.10) with the stiffness term C(0). The correction related to the stiffness C(0) appears in (C.5) at higher order than A^{−1/3}, because it is of the order of the small parameter kC ∼ A^{−2/3}; see (3.70) and the discussion near that equation. The zero-frequency stiffness C(0) is approximately equal to the liquid-drop value C (3.52) in the FLDM at the considered sufficiently large temperatures, for which the quantum shell effects can be neglected. The derivatives of the quasistatic particle density in (C.2), (C.3) and (C.4) can be found from (3.26) at Q = 0, as in (3.27). We emphasize that the surface-energy constant b_S^X (or the surface-tension coefficient α_X) also depends on the type of process specified by the index X, as does the in-compressibility K_X, because of the X dependence of the particle-density derivative in the integrand of (3.24) for the tension coefficient. The total quasistatic energy is the sum of the volume and surface parts, determined by the in-compressibility K_X and the surface constant b_S^X, respectively. The in-compressibility modulus K_X (responsible for the change of the volume energy) is given by (A.8) for X = T and by (A.7) for X = "ad"; see also (2.7), (2.82) or (A.28), (A.29) for their more specific expressions for nuclear matter. The in-compressibility K equals the adiabatic one, K_ς, as shown through (2.84) and (A.29): K = K_tot(ω = 0) = K_ς. In the derivation of (C.6), we took into account that ρ₀^X (C.7) does not depend on Q, and the density ρ_∞ (or r₀) is assumed to be approximately independent of the index "X" in (C.6). Substituting (C.6) into (C.4) for the coupling constant k_X^{−1}, one arrives at (C.8). The first term, proportional to the density in ∂ρ/∂T of (C.6), leads to small A^{−1/3} corrections to the coupling constant k_X^{−1} (C.8) with respect to the second component, which depends on the coordinate derivative ∂ρ/∂r. However, all terms, including these corrections related to the variation of the temperature δT in (C.4) [or (C.8)], can be neglected as compared to the first term in the square brackets. (It comes from the variation of the collective variable, δQ, up to the same relatively small corrections of the order of A^{−1/3}.) Indeed, for the isothermal case X = T one has this exactly, by definition.
For other "X" the quantity δT /δQ in (C.3) and (C.8) with the density For instance, for the constant entropy (adiabatic) condition S = drρς = S(ρ, T ) = const., see (A.13) with the quasistatic particle density ρ (3.26), the derivative δT /δQ can be calculated through a variation of this density ρ as shown in the middle of (C.9). In the quasistatic limit ω → 0 all quantities of the equilibrium state can be considered also as a functional of the only density ρ (3.26) in the ESA and one has again (C.9). We have used already this property in the derivation of the opera-torF (r) for transformations of the derivatives of a mean field V in (3.28). As noted and used for the derivations in Appendix A.1, the temperature T (r) is approximately independent of the spatial coordinates r at equilibrium. Therefore, according to (C.9), the second terms in (C.3) and (C.8), which appear due to the temperature variation δT , turn into zero with the FLDM precision. After the simple integration over the anglesr in (C.8) for the coupling constant k −1 X , one then arrives at According to (3.26), the integrands in (C.10) contains the sharp bell function ∂ρ/∂r of r. Therefore, the integrals converges there to a small spatial region near the effective nuclear surface defined as the positions of maxima of this derivative at r = R 0 (Appendix D). We use these properties of the integrand in the derivation of (C.10) taking smooth quantities as r 2 at the nuclear surface point r = R 0 off the integrals up to small corrections of the order of A −1/3 within the same ESA. [This is like for the derivations of the boundary conditions (3.21), (3.22), see [26,27,37], and of (3.28) for the operatorF (r).] In this way, we get the expansion of the coupling constant k −1 X (C.4), in powers of the A −1/3 with the leading term shown in the second equation there, see (C.10). For the following derivations, we specify now the quasistatic derivative (δV /δρ) X at Q = 0 taken it from (A.36), where index X in ∇f X means the gradient of the quantity f taken for the condition marked by X as in the variation δf X . The proportionality of the gradients in (C.11) shows the self-consistency within the ESA precision, see [37] for more general relations of the self-consistency in the FLDM. Using (A.39) and (3.24) in (C.10) for the coupling constants and (C.5) for the susceptibilities, one obtains the identical results for these quantities with small correc-tions of the order of A −1/3 , We shall show now from (C.12) that the adiabatic susceptibility χ ad and coupling constant k −1 ad are equal to the isolated (χ(0)) and quasistatic (k −1 ) ones, respectively, up to small A −1/3 corrections within the ESA. As noted above, for the adiabatic (K ς ) and quasistatic (K) in-compressibility modula, we got K ς = K, see after (2.84). The surface energy constant b S equals the adiabatic one b ad S , according to (A.38). Indeed, the volume energy per particle is also approximately the same for these cases, b ad V = b V , because of its relation b V ≈ K/18 to the in-compressibility modulus (b ad V = K ς /18) within the ESA [27] and equivalence of the corresponding incompressibility modula. The functional derivative in (A.38) is the quasistatic chemical potential µ which does not depend obviously on the type of the process X . From (A.38), one gets now α ad = α for the surface tension coefficient or b S = b ad S for the surface energy constant. Namely, this quantity should be identified with the experimental value b S = 17 − 19 MeV in the FLDM computations. 
Thus, from (C.12) one obtains the ergodicity condition (3.12) for the susceptibilities within the ESA precision, see (C.13), up to small A^{−1/3} corrections. As seen from (C.12), one also gets k^{−1} = k_ad^{−1} for the coupling constants within the same approximation; see (3.30) for k^{−1}. The index "ad" for the coupling constant will be omitted below, in line with [29]. We are also interested in the difference between the susceptibilities χ_T and χ_ad. From (C.5), one has (C.14). It is useful to re-derive this relation by applying Appendix A1 of [29] to the specific Fermi-liquid drop thermodynamics, see (A.40). As noted in Appendix A.4, it is more convenient to use the Gibbs free energy G instead of the free energy F. As the derivations of χ_T − χ_ad in Appendix A1 of [29] do not involve any change in the volume and pressure variables, we can use all the formulas of A1 of [29] here, with the replacement of the free energy F by the Gibbs free energy G, in particular in (A.40). There is a specific property of the FLDM with respect to the microscopic shell model with the residual interaction of [29]: the second derivatives of the Hamiltonian, ∂²Ĥ/∂Q², depend on the type of process X. Indeed, we used this self-consistent dependence of the mean potential V on the particle density ρ in the derivations of the operator F̂(r) (3.28) in the FLDM: the derivatives of V are proportional to those of the density ρ [see (C.11)], and both depend on X, i.e., on whether we consider them at fixed temperature or at fixed entropy. Therefore, (A.1.6) and (A.1.7), with the definitions (A.1.8) of [29], as applied to the FLDM, see (A.40), should be slightly modified. The derivatives of the thermodynamic potential G are taken at constant pressure instead of the constant volume used in the free-energy case. Similar calculations of the average value of the second derivative of the Hamiltonian, ⟨∂²Ĥ/∂Q²⟩_X, as for the coupling constants, lead to (C.19). Applying then the relation G = Aµ of the Gibbs free energy to the chemical potential µ, we note that there is a factor A^{−1} which strongly suppresses the contribution of the first term compared to the second one, k_T^{−1} − k^{−1}; see (C.12) and (C.20). Moreover, the terms in the square brackets of (C.19) are zero, because the derivative (∂µ/∂Q)_T vanishes within the precision of the FLDM. To show this, let us take the same equation as for the temperature, (C.9), with the only replacement of the temperature T by the chemical potential µ. The above-mentioned statement now becomes obvious, because at equilibrium the chemical potential µ, like the temperature T, is constant as a function of the spatial coordinates, independently of the type of process X within the FLDM. As a result, we obtain the same relation (C.14), with the difference of the coupling constants shown in (C.20). We can evaluate the ratio of the surface-energy coefficients b_S^T/b_S of (C.12) by using in (C.14) the fundamental relation (2.102) for the ratio of the susceptibilities χ_T/χ_ad in terms of the in-compressibility moduli K/K_T (K = K_ς = K_ad), see (C.21). In the last equation, we used the temperature expansions of the in-compressibilities K (A.29) and K_T (A.28). Thus, the surface-energy constant b_S^T at constant temperature is larger than the adiabatic (or quasistatic) b_S, and their difference is small, of order T̃².

Appendix D: Isoscalar and isovector particle densities

In the energy-density functional (D.1), ρ_∞ ≈ 0.16 fm⁻³ is the density of infinite nuclear matter [see around (3.26)]. The isovector component can be simply evaluated as ε₋ = 1 − ρ₋²/(Iρ₊)² = 1 − w₋²/w₊².
In both of these energies ε_±, w_± = ρ_±/(I_± ρ_∞) are the dimensionless particle densities, with I₊ = 1 and I₋ = I. The isoscalar SO gradient terms in (D.1) are defined with a constant, D₊ = −9mW₀²/(16ħ²), where W₀ ≈ 100–130 MeV·fm⁵, and D₋ is relatively small [72,74,75]. From the condition of minimum energy E under the constraints of fixed particle number, A = ∫ dr ρ₊(r), and fixed neutron excess, N − Z = ∫ dr ρ₋(r), one arrives at the Lagrange equations for ρ_±, with the corresponding multipliers being the isoscalar and isovector chemical potentials with the surface corrections at first order, Λ_± ∝ I_± a/R ∼ A^{−1/3} [26,27,39,40]. The isoscalar and isovector particle densities w_± can be derived from (D.1), first at the leading approximation in the small parameter a/R. For the isoscalar particle density w₊ = w₊(ξ) [ξ is the distance of the given point r from the ES, in units of the diffuseness parameter a, in the local ES coordinates, see (3.26); ξ = (r − R)/a for spherical nuclei], one finds the solution given in Appendix B of [40] and in [27,39], where the specific solutions ξ(w) in the quadratic approximation for ε₊(w) were derived in terms of elementary functions. For β = 0 (i.e., without SO terms), it simplifies to the solution w(ξ) = tanh²[(ξ − ξ₀)/2] for ξ ≤ ξ₀ = 2 arctanh(1/√3), and zero for ξ outside the nucleus (ξ > ξ₀). For the same leading term of the isovector density, w₋(w), one approximately finds (D.5) (Appendix A of [40]), valid for large enough constants c_sym of all the desired Skyrme forces [74,75]; a ≈ 0.5–0.6 fm is the diffuseness parameter [see (3.26)]. Simple expressions for the constants b_S^{(±)} (D.10) can easily be derived in terms of elementary functions in the quadratic approximation to ε₊(w), given explicitly in Appendix A of [40]. Note that in these derivations we neglected the curvature terms and the shell corrections, which are of the same order. The isovector energy terms were obtained within the ES approximation with high accuracy, up to the product of the two small quantities I² and (a/R)². Within the improved ES approximation, accounting also for the next-order corrections in the small parameter a/R, we derived the macroscopic boundary conditions (D.6) (Appendix B of [40]), where δP_± are the isovector and isoscalar surface-tension (capillary) pressures, δH ≈ −δR_±/R_±² are small variations of the mean ES curvature H (A.38), δR_± are the radius variations (3.15), and α_± are the tension coefficients, respectively. The conditions (D.6) ensure the equilibrium through the equivalence of the volume- and surface-pressure (isoscalar or isovector) variations; see the detailed derivations in Appendix B of [40]. As shown in Sec. III [26,27,33], the pressures δP_± can be obtained through moments of the dynamical variations of the corresponding phase-space distribution functions δf_±(r, p, t) (2.28) in the nuclear volume. For the nuclear energy E in this improved ESA, one obtains the expression given in Appendix C of [40].

Fig. 2. Response function Im χ^coll_QQ [the sum of the response-function (s^{(n)}) branches over n (n = 0, 1) in (3.46), without the k² factor] versus the frequency ω, in units of Ω = v_F/R, for different temperatures T shown by numbers; F₀ = −0.2; the other parameters are the same as in Fig. 1.

Fig. 3. Response function for the hydrodynamic collision regime and the heat pole of Fig. 2, at small frequencies and temperature T = 6 MeV. The numbers n = 0 and/or 1 show the sum of the response-function (s^{(n)}) branches or one of them (the latter curves coincide).
Contribution of the "heat pole" to friction for the non-ergodic system: for the fully drawn curve Γ T is evaluated for c = 20 MeV and for dashed curve 1/c = 0 ; as reference values, the result of the wall formula (line with stars) and the contribution from the non-diagonal matrix elements (line with squares) are shown (after [28,29]). for several critical Skyrme forces [74,75] in the logarithmic scale (after [40]). (1) 2 momenta of a particle at these points; s 12 = r 2 − r 1 ; polar axises z and z s and the corresponding angles θ 1 and θ 2 are shown, respectively. Fig. 15 for the perpendicular rotation as function of the particle number variable, A 1/3 , temperatures T = 0.1 and 0.2 ω 0 . The thin dotted line shows the contribution of 3D orbits, the thin dashed line the contribution of EQ orbits for a temperature T = 0.1 ω 0 , and broad dashed line the one of EQ orbits for T = 0.2 ω 0 (after [113]). [74,75]; D(A) is the mean IVGDR energy constants for particle numbers A = 50 − 200 within the FLDM and last within the hydrodynamic (Steinwendel-Jensen) model; experimental data is about 80 MeV (after [40]).
Advances in Nanotechnology Development to Overcome Current Roadblocks in CAR-T Therapy for Solid Tumors Chimeric antigen receptor T cell (CAR-T) therapy for the treatment of hematologic tumors has achieved remarkable success, with five CAR-T therapies approved by the United States Food and Drug Administration. However, the efficacy of CAR-T therapy against solid tumors is not satisfactory. There are three existing hurdles for CAR-T cells in solid tumors. First, the lack of a universal CAR to recognize antigens at the site of solid tumors and the compact tumor structure make it difficult for CAR-T cells to localize in solid tumors. Second, soluble inhibitors and suppressive immune cells in the tumor microenvironment can inhibit or even inactivate T cells. Third, the low survival and proliferation rates of CAR-T cells in vivo significantly influence the therapeutic effect. As an emerging approach, nanotechnology has great potential to enhance cell proliferation, activate T cells, and restart the immune response. In this review, we discuss how nanotechnology can modify CAR-T cells through various methods to improve the therapeutic effect in solid tumors. INTRODUCTION CAR-T therapy has made remarkable achievements in the research and clinical treatment of cancer, especially in the treatment of B cell malignancies (1)(2)(3). Unlike conventional surgery, radiotherapy, and chemotherapy, immune checkpoint blockade therapies, targeted drug therapies, and CAR-T cell therapies offer more therapeutic options for patients with previously refractory tumors (4)(5)(6)(7)(8). To date, the United States Food and Drug Administration has approved five CAR-T therapies, namely Kymriah, Yescarta, Tecartus, Breyanzi, and Abecma, for hematologic malignancies (9). However, CAR-T cell therapy has not achieved satisfactory results in the treatment of solid tumors, such as colon, kidney, and ovarian cancers, for which the best clinical trial outcome is stable disease (10)(11)(12)(13)(14). To improve the efficacy of CAR-T therapy in solid tumors, CAR-T cells must overcome three obstacles. First, the lack of tumor-specific antigens, the dense stroma, and the aberrant vasculature at the tumor site prevent CAR-T cells from efficiently targeting the solid tumor site (15). Second, the tumor immune microenvironment and immunosuppressive mechanisms reduce the antitumor activity of CAR-T cells in solid tumors. Finally, because of the initial differentiation state of the selected T cells, the cumbersome production process of CAR-T cells, and the hypoxic, acidic, and nutrient-poor tumor microenvironment (TME), the survival and proliferation rates of CAR-T cells in vivo are low. Nanotechnology has multiple features that allow it to address the challenges of CAR-T cell therapy in treating solid tumors. With optimal sizes, high surface-area-to-volume ratios, a variety of shapes and compositions, as well as tunable surface modification and charge, nanoparticles have a wide range of applications in tumor therapy (16)(17)(18)(19)(20). Nanoparticles employed in clinical treatments can be targeted to the site of the lesion with less accumulation in healthy tissue and enhanced drug permeability and retention, and they can be rapidly biodegraded and eliminated without pharmacological or toxicological activity (21)(22)(23). Therefore, a number of researchers are exploring the use of nanoparticles in combination with CAR-T therapy to improve the efficacy of CAR-T therapy in solid tumors.
Herein, we briefly introduce the three major challenges for CAR-T cells in solid tumor therapy and summarize how nanoparticles can be combined with CAR-T cells, from different perspectives, to solve these challenges (Figure 1). CURRENT ROADBLOCKS FOR CAR-T CELLS IN SOLID TUMORS Numerous clinical trials of CAR-T cell therapy for solid tumors have been carried out, and a meta-analysis of the efficacy of CAR-T therapy in solid tumors showed an overall response rate of only 9%, although various therapeutic strategies have been implemented (24). There are three major factors that influence CAR-T therapy, as described below. Targeting and Infiltration CAR-T cells are designed to target tumor-associated antigens (TAAs) owing to the lack of tumor-specific antigens (TSAs). In a large number of clinical trials, CAR-T cells targeting tumor-associated antigens have been found to cause damage to normal tissues with low expression of tumor-associated antigens while recognizing and killing tumor cells, which is referred to as the off-target effect (25). Moreover, one of the reasons behind the success of CAR-T cells in the treatment of hematologic tumors is that they can migrate in blood, lymph nodes, and bone marrow to interact with cancer cells (26). Using dynamic imaging microscopy on fresh tumor slices from nine patients, Donnadieu et al. (27) observed T cells with reduced motility in the stroma of human lung tumors, which hinted that T cells face difficulties entering the tumor because of the presence of obstacles. This makes it easy to understand that there are several other reasons why CAR-T cells have difficulty entering solid tumors. Tumor-associated fibroblasts (TAFs) and the abnormal vasculature at the tumor site result in compact tumor tissue and a dense extracellular matrix (ECM), which prevent CAR-T cells from entering the solid tumor microenvironment (28,29). The experiments conducted by Peschel et al. (30) confirmed the lack of accumulation of adoptively transferred T cells in solid tumors, whereas the infused HER2-specific T cells spread out in the bone marrow of breast cancer patients. In addition, chemokines can induce T cell migration along the direction of increasing chemokine concentration. However, some solid tumors inhibit chemokine secretion, and CAR-T cells lack receptors that match the chemokines secreted by solid tumors (31,32), such that the chemokine receptors on T cells are mismatched with the tumor-secreted chemokines (33)(34)(35). Moreover, the low expression of adhesion molecules, including ICAM-1 and -2, VCAM-1, and CD34, in tumor endothelial cells (ECs) inhibits effector T cells from adhering to the ECs and being transported into the tumor (36). Tumor Immunosuppression Immunosuppression within the solid tumor microenvironment is another significant challenge for CAR-T therapy. The causes of tumor cells escaping the antitumor immune response are complex, including the presence of immunosuppressive cells, the presence of immunosuppressive cytokines, and the absence of immune-activating factors. The presence of immunosuppressive cells such as dendritic cells (DCs), myeloid-derived suppressor cells (MDSCs), regulatory T cells (Tregs), and M2 macrophages at solid tumor sites, which secrete suppressive factors such as transforming growth factor-β (TGF-β), adenosine, interleukin-10 (IL-10), and vascular endothelial growth factor (VEGF) extracellularly, suppresses the immune system and reduces the antitumor activity of CAR-T cells (37)(38)(39)(40).
Moreover, the immune checkpoint molecules PD-1 and CTLA-4, when bound by their corresponding ligands, inhibit both the killing effect of T cells on the tumor and the activation of T cells (41,42). Survival and Proliferation CAR T cells are targeted to the tumor site by the chimeric receptor expressed on the T cell surface and eliminate cancer cells through direct cell killing (43). Studies have shown that the long-term survival and proliferation of CAR T cells capable of maintaining normal function in vivo play a decisive role in the therapeutic effect (44). However, the in vivo expansion of CAR T cells during the treatment of solid tumors is poor. For example, Michael et al. detected a large number of CAR T cells in ovarian cancer patients 2 days after transfusing in vitro gene-edited T cells back into the body, but the increase only lasted for about 1 month and quickly declined, with the cells becoming virtually undetectable in the majority of patients (13). Even with large doses of CAR T cells, the presence of CAR T cells in the circulatory system was not detected (45). Moreover, clinical data showed that longer CAR-T cell persistence indicates a longer delay in the development of disease progression (46). The factors that influence the survival of CAR T cells in patients are complex, including the differentiation and functional status of CAR T cells, CAR target affinity, CAR immunogenicity, the tedious and time-consuming production process, and the immunosuppressive, hypoxic tumor microenvironment (47)(48)(49). Various nanotechnology strategies may improve CAR T cell persistence and expansion in vivo, which would endow CAR-T therapy with superior antitumor activity in the treatment of solid tumors. Nanotechnology to Aid CAR T Cell Targeting and Accumulation in Solid Tumors To overcome the off-target effect caused by tumor-associated antigens, one group designed circular bispecific aptamers to help T cells recognize and bind to tumor cells. The aptamer can simultaneously bind naïve T cells and tumor cells, and then specifically activate T cells in the cell-cell junction complex. This strategy helps T cells pinpoint the tumor site and kill cancer cells. Targeted treatment of many kinds of cancer may thus be realized through the use of specific anticancer aptamers (50). In an effort to arm CAR T cells to collapse the physical barriers formed by angiogenesis, a dense extracellular matrix, and stroma at tumor sites, researchers have proposed numerous NP-based strategies (51,52). By combining photothermal therapy with the adoptive transfer of CAR T cells, Gu et al. succeeded in promoting the accumulation of CAR T cells and enhancing conventional CAR-T therapy against solid tumors. Indocyanine green (ICG), a near-infrared (NIR) dye, was wrapped in poly(lactic-co-glycolic) acid (PLGA) nanoparticles. Once exposed to NIR light irradiation, the ICG released into the solid tumor acts as the photothermal agent (53)(54)(55). Mild hyperthermia of the tumor disrupts its compact structure, reduces interstitial fluid pressure (IFP), increases blood perfusion, and releases tumor-specific antigens that can significantly stimulate CAR T cells. After about 20 days, tumor growth was significantly inhibited, and no tumor cells were detected in about one-third of the treated mice (56). Other researchers conjugated indocyanine green nanoparticles (INPs) to CAR T cells via a bioorthogonal reaction. After mild photothermal intervention, tumor vessels expanded, blood perfusion increased, the ECM was ablated, and the tumor tissues became loose.
Thus, the INP-engineered CAR-T biohybrids accumulated and infiltrated extensively in the tumor, remodeled the TME, restarted the immune response, and boosted the efficacy of CAR-T immunotherapy. This microenvironment photothermal-remodeling strategy provides a promising prospect for CAR-T therapy in solid tumors (57). Nanotechnology to Remold the Tumor Microenvironment to Stimulate CAR T Cells To reverse the immunosuppression of the cancer environment and promote the activation of CAR T cells, Zhao and colleagues employed a nanozyme-based method. They synthesized a tumor-targeting HA@Cu2−xS-PEG (PHCN) nanozyme with photothermal and catalytic properties. After irradiation by a near-infrared laser, the tumor extracellular matrix is damaged by the conversion of light energy into local heat (58)(59)(60). Moreover, the reactive oxygen species generated by nanocatalyzed tumor therapy increased the secretion of key cytokines, such as interferon and tumor necrosis factor, as well as tumor-specific antigens, thus activating the corresponding CAR T cells at the tumor site (61). To surmount the obstacle of the hostile microenvironment, researchers tend to combine CAR-T therapy with the use of cytokines and/or antibodies. One problem, however, is that CAR T cells and cytokines/antibodies disperse, preventing their co-accumulation at tumor sites (62,63). Therefore, Xie et al. used a pH-sensitive benzoic-imine bond and inverse electron demand Diels-Alder cycloaddition to link magnetic nanoclusters (NCs) and the PD-1 antibody (aP) together to form NC-aP. The constructed NC-aP binds to effector T cells through their PD-1 expression. Guided by magnetic resonance imaging (MRI), magnetic attraction enriched the T cells and aP in solid tumors. In the acidic tumor microenvironment, the benzoic-imine bond is hydrolyzed and the aP is released. Consequently, the adoptively transferred T cells and aP synergistically inhibit solid tumor growth with few side effects (64). One of the immunosuppressive molecules that inhibit the immune function of CD4+ and CD8+ T cells is adenosine. The A2a adenosine receptor (A2aR) expressed on the surface of activated T cells is triggered by adenosine accumulating outside the cell, which suppresses T-cell proliferation and inhibits IFN-γ secretion (65,66). Thus, using nanotechnology to efficiently transport SCH-58261 (SCH), a small-molecule inhibitor of A2aR, to CAR T cells in tumors is a promising method. According to their report, Wang et al. used CAR-T therapy together with SCH-loaded cross-linked multilamellar liposomes (cMLVs), which significantly inhibited tumor growth and improved the survival of the treatment groups, the tumor infiltration rate of T cells, and the expression level of IFN-γ in vivo. By rescuing tumor-residing T-cell hypofunction, this method augments CAR T-cell efficacy in solid tumors (67). The presence of immune checkpoint molecules such as CTLA-4 and PD-L1 is another important cause of tumor immunosuppression. They enable tumor cells to escape surveillance by inhibiting the activation of immune cells, a phenomenon known as 'immune escape' (68,69). To reset the suppressive solid tumor microenvironment, inhibitors targeting checkpoint molecules (such as CTLA-4, PD-1, and PD-L1) have been used in combination with CAR-T therapy (70,71). The disadvantages of using immune-checkpoint inhibitors (ICIs) include the emergence of a series of new immune-related adverse events and systemic toxicities (72). Stephan et al.
designed a liposomal drug-loaded nanoparticle and decorated it with the tumor-targeting peptide iRGD. In addition, PI-3065, a PI3K kinase inhibitor that disrupts the function of immunosuppressive regulatory T cell subsets and myeloid-derived suppressor cells (40), and 7DW8-5, an immunostimulatory invariant natural killer T cell (iNKT) agonist, were placed in the liposome (73,74). They demonstrated that this new targeted nanoparticle alleviates tumor immunosuppression and evidently enhances the anti-tumor activity of CAR T cells (75). Nanotechnology to Aid CAR T Cell Survival and Proliferation The number of tumor-infiltrating lymphocytes is positively correlated with the clinical outcomes of CAR-T therapies (36,76,77). T cells obtained from patients are limited in number, so amplification in vitro may be an effective solution. In the body, the expansion of T cells requires the assistance of antigen-presenting cells (APCs), assistance that is difficult to reproduce in vitro. In light of this problem, Mooney et al. utilized mesoporous silica to create micro-rods and added the cytokine interleukin-2, which extends the lifespan of T cells. They also coated the high-aspect-ratio mesoporous silica micro-rods (MSRs) with supported lipid bilayers (SLBs) and a variety of antibodies that activate T cells, mimicking the APC cell membrane. In cell culture, these rods randomly and spontaneously form a scaffold structure that allows T cells to move around and expand freely. Results showed that the APC-mimetic scaffolds generate more CAR T cells and maintain good killing efficacy compared with conventional expansion systems (78). The lack of proliferation signals in the TME results in a low survival rate of CAR T cells. As an emerging therapy, nanoparticulate RNA vaccines deliver liposomal antigen-encoding RNA (RNA-LPX) to activate T cells in cancer patients (79). Recently, Sahin et al. combined CAR-T therapy with a nanoparticulate RNA vaccine to achieve regulated, RNA-LPX dose-dependent proliferation of CAR-T cells. The mechanism involves antigen delivery to antigen-presenting cells in the spleen, lymph nodes, and bone marrow by intravenous injection, followed by the initiation of a toll-like receptor-dependent, type-I IFN-driven immunostimulatory program (80). Moreover, Chan et al. used a tailored nanoemulsion (Clec9A-TNE) vaccine to effectively solve the problem of limited antigen presentation, promote the proliferation of CAR T cells in vivo, and augment the efficacy of solid tumor therapy (81). Conventional manufacturing of CAR-T cells involves several elaborate procedures, such as isolation, modification, and expansion, resulting in only a small number of effective redirected T cells that can be used. Meanwhile, virus transfection and electroporation are commonly used to help T cells express targeted chimeric antigen receptors (CARs) or T cell receptors. These methods, however, have drawbacks: they are time-consuming and have a small application scale (82,83). Stephan et al. designed a new genetic programming approach named 'hit-and-run', which transports mRNA nanocarriers into cells through simple mixing and transient expression of the target gene. The mRNA nanocarrier has three prominent advantages: (i) mRNA NPs can be lyophilized for each application without affecting their properties and efficacy; (ii) NP uptake and transfection efficiency do not differ whether or not the T cells are proliferating; and (iii) lymphocyte-targeted mRNA nanocarriers can edit the genome of CAR-T cells without influencing their function.
The paramount advantage of this method is that it can simply produce CAR T cells at a clinical scale within a short time and without complex handling procedures in vitro (84). Another novel method was developed to program numerous circulating T cells and effectively remove cancer cells in situ. Anti-CD3e F(ab′)2 fragments are coupled to the surface of the biodegradable poly(β-aminoester)-based nanoparticles to target T cells. Inside the nanoparticles, the poly(beta-aminoester) (PBAE) polymer is assembled with microtubule-associated sequences (MTAS) and nuclear localization signals (NLS), which facilitate gene transfer into the nucleus of the T cells. To maintain CAR expression in T cells, the CD19 CAR plasmid was integrated by the piggyBac transposase through a cut-and-paste mechanism. These stable polymer nanoparticles allow simple manufacture and storage, which provides a practical, economical, and widely available pathway for CAR-T therapy (85). The immunosuppression and hypoxia in the solid tumor microenvironment weaken CAR T cell infiltration and proliferation. One research group constructed an injectable hydrogel-encapsulated porous immune-microchip system (i-G/MC) with oxygen reservoirs to deliver CAR T cells intratumorally. In the injectable i-G/MC system, IL-15-loaded alginate microspheres were made into thin immune-MCs (i-MCs), which were connected with HEMOXCell (Hemo; an oxygen carrier)-loaded alginate, and the alginate forms a gel layer by self-assembly (86). The i-MCs were highly porous and interconnected, which facilitates CAR T cell transport. Hemo, a marine extracellular hemoglobin, has a strong oxygen storage capacity and binds up to 156 oxygen molecules per Hemo molecule. After the i-G/MC was injected into the solid tumor, the hydrogel (gel) layer degraded quickly, and Hemo delivered oxygen to the TME as well as to the CAR T cells, decreasing the expression level of HIF-1α. Results showed that the immune niche improves the hypoxic TME and promotes the survival and infiltration of CAR T cells in solid tumors. To avoid the side effects of systemically administered supporting cytokines such as interleukins, protein nanogels (NGs) carrying an interleukin-15 (IL-15) super-agonist were designed. The NGs recognize a specific cell surface antigen and subsequently release the drug at the sites of antigen encounter, for instance, the tumor microenvironment. Most importantly, the NG delivery enhanced the cell proliferation level 16-fold in tumors and allowed the administration of eight-fold higher doses of cytokine without toxicity (87). CONCLUSION In preclinical studies, researchers have proposed a number of strategies to improve CAR T cell function through the use of nanotechnology. However, there are still some fundamental issues to be addressed in the clinical application of CAR-T therapy. For example, the carcinogenicity, reproductive toxicity, and persistence of magnetic nanoclusters are still unknown, and they therefore cannot yet be used in clinical therapy. Near-infrared lasers can damage human skin: short-term use can cause skin swelling, while long-term exposure may affect human reproductive function and induce cancer. The safety, immunogenicity, and toxicity of nano-vaccines have yet to be verified. Will the biodegradation products of nanomaterials induce non-specific immune responses? Because of the specificity of tumor-associated antigens, the preparation cycle of a tailored nanoemulsion vaccine is time-consuming and involves high cost….
These questions from clinical studies may seem disappointing, but many studies have highlighted the potential of nanotechnology in combination with CAR-T therapies for solid cancers, which gives us great hope for CAR T cells. Currently, there are about 40 CAR-T targets in clinical trials in solid tumors, significantly outnumbering those for hematological tumors. Unlike CD19, which is often used as a target for CAR-T therapy in hematologic tumors, the main targets of CAR-T development in solid tumors include mesothelin, GD2, HER2, GPC3, and claudin 18.2 (CLDN18.2). Most CAR-T studies in solid tumors have low response rates, in the 0-25% range (88). Recently, the EMA granted PRIME eligibility to the CAR T-cell product candidate CT041, which targets the claudin 18.2 protein (CLDN18.2), for the treatment of gastric/gastroesophageal junction cancer. Results from a phase I clinical trial published in 2019 show a total objective response rate of 33% in a small group of patients with advanced gastric or pancreatic cancers, with no serious side effects (89). This means that CT041 is expected to become the world's first approved solid tumor CAR-T product, marking a first breakthrough in solid tumor treatment. AUTHOR CONTRIBUTIONS JM: Conceptualization; writing - original draft. QY: Writing - review and editing. YM: Conceptualization; writing - review and editing. All authors contributed to manuscript revision, read, and approved the submitted version.
2022-03-29T13:35:11.109Z
2022-03-23T00:00:00.000
{ "year": 2022, "sha1": "83e74cb5245dc9433b8d700aa075b1cd339725eb", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "83e74cb5245dc9433b8d700aa075b1cd339725eb", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17580507
pes2o/s2orc
v3-fos-license
SCES '08 - concluding remarks This year's SCES has proved exciting in the array of unconventional phenomena discovered both in novel systems, and by the renewed investigation of age-old systems, arguably in the vicinity of QCPs. From heavy fermion systems, to cuprate superconductors, and in a new twist iron pnictide superconductors - some questions remain: just how similar or different are correlated phenomena in these systems? Further, how ubiquitous are ultra-strongly correlated effects such as the fractional quantum Hall effect (QHE), and can cold atom systems mimic such correlated phases? We shall discuss some of these issues here. INTRODUCTION This year's SCES conference has been, as usual, memorable - in no small part because of the idyllic surroundings in the picturesque fishing town of Buzios, Brazil. We would especially like to thank the organisers, Elisa Saitovitch and Mucio Continentino, for the sterling conference organisation. A breadth of exciting topics were discussed at this year's SCES, yet limitations of time mean that we are unable to do justice to them all. With a view to our particular areas of condensed matter specialisation and interest, we have chosen to focus on the area of novel materials - both those that occur in the solid state, and those that are artificially created. The narrative of novel phase emergence in the vicinity of phase instabilities has now been central to strongly correlated electron systems for more than a decade. One of the first reasons for the attraction of condensed matter physicists to phenomena in the vicinity of a (nearly) continuous phase transition at zero temperature - known as a Quantum Critical Point (QCP) - was the possible appearance of universal behaviour where microscopic details of electrons and structures are bypassed in favour of macroscopic patterns [1,2]. This year's SCES has proved exciting in the array of unconventional phenomena discovered both in novel systems, and by the renewed investigation of age-old systems, arguably in the vicinity of QCPs. (Footnotes: * Suchitra E. Sebastian acknowledges financial support from Trinity College (Cambridge University), a Royal Society conference grant, and the Institute for Complex Adaptive Matter. † C. Morais Smith acknowledges partial financial support from the Netherlands Organization for Scientific Research (NWO) and from the National Science Foundation under Grant No. NSF PHY05-51164.) From heavy fermion systems, to cuprate superconductors, and in a new twist iron pnictide superconductors - some questions remain: just how similar or different are correlated phenomena in these systems? Further, how ubiquitous are ultra-strongly correlated effects such as the fractional QHE, and can cold atom systems mimic such correlated phases? HEAVY FERMION SUPERCONDUCTORS The puzzle of superconductivity in magnetic metals first came to the fore with the discovery of superconductivity in the heavy fermion systems UBe13 [3,4] and CeCu2Si2 [5] over a quarter of a century ago, a finding that was initially received amidst considerable incredulity. Soon, however, similar phenomena were discovered in a broad array of heavy fermion materials, revealing a pattern of superconductivity potentially mediated by magnetic interactions. The concept of novel phases mediated by enhanced interactions at a QCP came to the fore at this juncture and rapidly became ubiquitous.
It is intriguing, however, that despite the superficially similar fashion in which these materials were initially thought to behave, various aspects of the physics of these 'model' systems continue to baffle. The 115 family The family of 115 heavy fermion systems perhaps constitutes the prototypical class of magnetic heavy fermion superconductors. Yet materials within this family continue to surprise. CeCoIn5 Ambient pressure superconductivity in the low-dimensional heavy fermion system CeCoIn5 [6] followed the discovery of pressure-induced superconductivity in its three-dimensional (3D) analogue CeIn3 [7] a decade ago. The fact that unanswered questions continue to swirl thick and fast almost a decade after the discovery of superconductivity in CeCoIn5 is a testament to the complexity of physics underlying an apparently straightforward case of magnetic interaction mediated superconductivity. A prominent debate at SCES this year pertained to whether or not a Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phase (i.e. a superconducting pairing state between Zeeman exchange split parts of the Fermi surface) [8,9] is realised at high magnetic fields in CeCoIn5. A new phase within the superconducting phase appears at high magnetic fields, and one interpretation is that this phase is the realisation of an FFLO state [10]. The latest results of vortex lattice imaging studies as a function of magnetic field in CeCoIn5 were presented, indicating a departure from Ginzburg-Landau physics [11]. Yet the case for the realisation of an FFLO phase at high magnetic fields is nebulous. Experimental results under pressure were presented to make the case for an FFLO ground state: the suppression of magnetism by the application of pressure was shown to result in an enlarged novel phase region, supporting the case for an FFLO phase [12]. However, recent results of high magnetic field neutron diffraction experiments challenge this notion of an FFLO state. Evidence of long range antiferromagnetic order is found to be associated with the novel phase region of superconductivity at high magnetic fields; the invariance of the observed Cooper pair momentum with magnetic field appears to be inconsistent with a potential FFLO state [13]. Of additional interest in the family of the 115s is their increasing similarity to the cuprate family of high Tc superconductors. In this case, commonalities appear to run more than skin deep. Not only does superconductivity appear to be related to magnetic interactions, but both evidence a form of density wave order, potentially antiferromagnetism only at high magnetic fields, perhaps suggesting a unique form of associated superconductivity. Further experiments to detect whether the high-field antiferromagnetic phase is confined to the superconducting region may serve to shed light on this mystery. PuCoGa5 Another member of the 115 family in which superconductivity has recently been discovered is PuCoGa5, with Tc ∼ 18 K [14]. Spin susceptibility measurements that yield finite values at the lowest temperature have established a dirty d-wave form of the superconducting wavefunction in this material [15]. However, a spirited debate at this year's SCES pertained to whether unconventional superconductivity in this material is mediated by electron-phonon interactions or a form of magnetic interaction such as spin exchange between the magnetic lattice and the metallic environment.
While superconductivity in PuCoGa5 develops out of a Pauli paramagnetic state, it is not entirely clear how close it lies to a magnetic instability. The presence of spin-disorder scattering and local-moment behaviour that appears to be linked to the appearance of superconductivity have been cited as potential evidence for magnetically mediated superconductivity. As we heard in this year's SCES, however, theoretical work by Caciuffo et al. [16] that models results of irradiation experiments on PuCoGa5 served to caution us that nodal superconductivity does not necessarily imply magnetically mediated superconductivity; in fact, most experimental features could be explained by means of an electron-phonon mechanism within Eliashberg theory. A rebuttal of this point of view came from Piers Coleman, who instead proposed a different explanation in which spins constitute the fabric of exchange rather than the glue [17]. In this proposal, Kondo spin quenching and superconductivity develop simultaneously in a composite pairing mechanism involving Kondo spin exchange. Further experiments are required to distinguish between alternate mediating mechanisms of superconductivity. The variety of proposed mechanisms for unconventional superconductivity in PuCoGa5 reveals yet again the complex physics arising from materials diversity within the 115 family of superconductors, which has long been considered a model for magnetic interaction mediated superconductivity. New Heavy Fermion Superconductors 2.2.1. β-YbAlB4 A new entry into the class of f-electron superconductors was β-YbAlB4. While following the theme of proximity to a phase instability, β-YbAlB4 broke new ground in constituting the first known Yb-based f-electron superconductor, with Tc ∼ 80 mK [18]. It had remained another mystery of heavy fermion superconductors as to the prevalence of superconductivity in systems based on Ce (4f1), but a singular absence in systems based on its single hole analogue Yb (4f13). The discovery of β-YbAlB4 goes some way in solving this mystery, and appears to follow the narrative of Ce-based heavy fermion superconductors: proximity to an antiferromagnetic instability. This remarkable discovery required measurements on ultra pure single crystals of β-YbAlB4, and an experimental tour de force involving challenging low temperatures - in fact, far more stringent conditions were required to observe superconductivity than in other f-electron families of materials. However, while proximity to a phase instability seems clear from the specific heat, magnetic susceptibility, and transport behaviour of β-YbAlB4, the unconventional transport scaling behaviour renders unclear the nature of the neighbouring phase instability. Another mystery pertains to the far lower superconducting energy scale in this material as compared to Ce-based heavy fermion superconductors, possibly related to the size of antiferromagnetic interactions, the Fermi surface topology [19], or indeed, the nature of the neighbouring phase instability. The study of pressure-induced magnetisation in β-YbAlB4 may shed light on some of these puzzles, as will more experiments to probe the nature and symmetry of superconductivity in this material. NpPd5Al2 An exciting new f-electron superconductor discussed at SCES this year was NpPd5Al2.
A particularly intriguing aspect of this discovery was the accidental fashion in which this material was grown, resulting from an attempt to grow NpPd3 single crystals out of Pb flux in Al2O3 crucibles. As it turned out, NpPd5Al2 was found to superconduct at 4.9 K [20]. On the face of it, this material appears to carry some of the trademarks seen in unconventional f-electron superconductors. Preliminary experiments of specific heat, susceptibility, and the upper critical field performed on this material suggest potential d-wave singlet superconductivity in the paramagnetic limit. Currently, possibilities to explain the physics in this system are boundless. A potentially finite momentum superconducting state like that of CeCoIn5, or a composite pairing state, have been suggested, yet further experiments alone will reveal as yet unexplored possibilities beyond the realm of familiar models. Even as more members are added to the category of heavy fermion superconductors, the notion of a 'universal QCP' driving materials' properties appears more of a chimera than ever. Given the unexplained mysteries evidenced both in the familiar 115 family of compounds and in new families of heavy fermion superconductors, it is more likely than not that phenomena outside the confines of simple theories of quantum criticality will emerge with more careful measurements and a broader scope of materials families under exploration. FERROMAGNETIC SUPERCONDUCTORS The very notion of superconductivity existing in an itinerant ferromagnet was initially treated with disbelief, until its experimental discovery in UGe2 [21]. The overarching notion of superconductivity in close proximity to a near-continuous phase instability (in this case ferromagnetic) thought to underlie this phenomenon motivated the discovery of more ferromagnetic superconductors such as URhGe [22]. With new materials and more measurements, indeed, come unexpected findings that may not fit the simple narrative of QCPs, but may birth new discoveries in themselves. URhGe Interestingly enough, the case of UGe2, where the narrative of novel phases in proximity to a QCP largely originated, has since been found to be more complex than first thought. The superconducting dome in UGe2 lies in the vicinity of both a transition from a paramagnetic to a ferromagnetic state, and a transition between two different ferromagnetic states - neither of these phase instabilities is thought to be continuous. So too the case of URhGe at ambient pressure and low magnetic fields. However, a new development arose with the discovery of unconventional superconductivity at high magnetic fields in the vicinity of a metamagnetic transition in URhGe [23]. Drawing on the theme of novel phases mediated in the vicinity of a continuous instability, the novelty of this discovery was in the different class of instability that was probed, further enabling tuning in a two-dimensional (2D) angular plane instead of along a one-dimensional (1D) axis. New measurements presented at SCES this year probed the enhancement in effective mass via transport measurements of the A-coefficient in URhGe [24,25], thereby accessing the evolution of fluctuations in the vicinity of high field superconductivity. The striking finding from these experiments is a maximum in the A-coefficient that coincides with the peak of the superconducting dome, consistent with the notion of enhanced interactions in a quantum critical region that mediate unconventional superconductivity.
The case of re-entrant superconductivity in URhGe appears to be a rarity in that it closely follows the simple theoretical description of superconductivity near a QCP, in this case terminating a plane of first order transitions - in fact branching off from a possible tricritical point [26]. URhGe is perhaps a model system where microscopic measurements may be used to probe the potential divergence of length-scales in the vicinity of a QCP. Further experiments of interest will no doubt be direct Fermi surface measurements to trace the enhancement in effective quasiparticle mass, and additionally, experiments that directly probe the superconducting wavefunction in this material, which could then be compared and contrasted with the case of UGe2. New Ferromagnetic Superconductors 3.2.1. UCoGe As we have seen, there exists a breadth of different possibilities for the physics of phases mediated at a quasi-continuous instability even within the category of ferromagnetic superconductors. New members in this category, therefore, provide an excellent opportunity for further exploration. A new material we heard about at SCES this year was UCoGe [27], a ferromagnetic superconductor in the manner of pressure-tuned UGe2 and ambient pressure URhGe. Measurements of magnetisation, transport, thermal expansion, and specific heat reveal ferromagnetism below 3 K that coexists with superconductivity below 0.8 K, suggesting that this material lies along the pressure-tuning axis, with its location to the left of the dome maximum - lying between UGe2 to the left of the superconducting dome onset, and URhGe to the right of the superconducting dome maximum. Despite the similarities between these materials, however, they in fact display subtly different forms of magnetism - the magnetic transition is in the longitudinal moment in UGe2, in the transverse moment in URhGe, and metamagnetism appears to be absent in UCoGe. Future experiments on mass enhancement and microscopic aspects of the magnetic phase transition in UCoGe may reveal deeper complexities that potentially underlie phase space in the vicinity of superconductivity in this material. CeFeAs Another material that has been discovered to lie close to a ferromagnetic QCP is an alloy of CeFeAsO and CeFePO [28]. While CeFePO constitutes an extremely heavy fermion system (γ ∼ 1000 mJ/mol K²) with no evidence of magnetic ordering, CeFeAsO exhibits antiferromagnetism associated with the lattice of local Ce moments at TN ∼ 3.8 K, and γ ∼ 60 mJ/mol K². It has therefore been suggested that a ferromagnetic instability may lie partway between these two systems, and may be accessed by substituting As for P to yield CeFeAs1−xPxO. The investigation of this material showed considerable foresight, considering that related members of this family of materials later formed parent systems of the recently discovered pnictide high Tc superconductors. Much awaited experiments would involve further tuning of phase space to access this ferromagnetic phase instability in clean single crystals and investigating the possible emergence of unconventional superconductivity. IRON PNICTIDE SUPERCONDUCTORS Arguably the high point of condensed matter discoveries this year was that of the new family of iron based high temperature superconductors. Interestingly enough, the discovery was made by the chemist Hideo Hosono, with the original goal of evaluating low-dimensional candidate materials for their potential as magnetic semiconductors.
First signs of the imminent breakthrough came in 2006 and 2007 with the discovery of the intrinsic superconductors LaFePO (Tc ∼ 4 K) [29] and LaNiPO (Tc ∼ 3 K) [30]. In 2008, a significant advance was made with the finding of Tc ∼ 33 K superconductivity in LaFeAsO doped with F [31], to be followed up by the discovery of superconductivity on doping related materials such as REOFeAs (RE = La, Ce, Pr, Nd, Sm) and AFe2As2 (A = Ca, Sr, Ba, Eu) [32]. Parent materials in these families of compounds are typically antiferromagnets with relatively high temperature spin density wave transitions TSDW ∼ 100 K, which upon doping or the application of pressure achieve superconducting temperatures as high as Tc ∼ 55 K [33]. In a bid to bring existing theoretical pictures to bear on this new family of superconductors, the question has been posed as to whether the parent materials of these superconducting compounds are Mott insulators like the cuprate family of superconductors, or in contrast are closer to itinerant metals. Optical studies presented at SCES reveal a sharp drop in scattering rate and plasma frequency at the magnetic transition in these materials, providing robust evidence for an itinerant system gapped by a spin density wave resulting in the loss of the majority of carriers [34]. Results of quantum oscillation experiments presented at the meeting also reveal a very small remnant Fermi surface due to reconstruction by a spin density wave, indicating itinerant character [35]. A puzzle for theorists has been how to incorporate this itinerant character into a magnetic model of local exchange interactions constructed to understand the electronic structure in this system. One intriguing possibility presented at this conference was a 'traffic light' model where J1-J2 exchange interactions are combined with itinerant electrons, much in the fashion of traffic lights regulating the flow of traffic [36]. The symmetry of the superconducting wavefunction in these materials is currently the subject of intense debate: detailed microscopic measurements on single crystals of improved purity will no doubt be crucial to progress on this front. CUPRATE HIGH-TC SUPERCONDUCTORS A discussion of correlated electron systems would be incomplete without mentioning cuprate high-Tc superconductors. An understanding of unconventional superconductivity in these materials poses a conundrum that will hopefully be solved within the next few decades. There have been some important recent developments in the understanding of these materials, for instance relating to the enigmatic pseudogap phase. We briefly touch on a few new experimental developments here. Scanning tunnelling microscopy (STM) results from the group of Yazdani suggest that high-Tc superconductivity occurs in two steps: at TP incoherent Cooper pairs are formed, and at Tc these preformed pairs break the U(1) gauge symmetry and reach phase coherence. It has been proposed that this intermediate temperature TP between T* and Tc is closely related to the superconducting gap ∆, given that 2∆/kB TP ≈ 8 [37]. Some recent neutron scattering results [38] have been interpreted in terms of the Varma phase with circulating currents [39]. Nevertheless, no consensus has been achieved yet among the different scenarios proposed theoretically for explaining the pseudogap phase [39,40,41] and an unambiguous answer to this puzzle is as yet lacking.
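To give a feel for the scale set by the quoted ratio, one can invert 2∆/kB TP ≈ 8 for the pairing temperature; the gap value used below is purely illustrative and is not taken from the cited measurements.

```latex
% Illustrative inversion of 2\Delta/(k_B T_P) \approx 8; \Delta = 30 meV is an assumed value.
T_P \approx \frac{2\Delta}{8\,k_B}
    = \frac{2 \times 30\ \mathrm{meV}}{8 \times 0.0862\ \mathrm{meV\,K^{-1}}}
    \approx 87\ \mathrm{K}
```

A gap of a few tens of meV would thus place TP well above typical underdoped Tc values yet below the pseudogap temperature T*, consistent with the two-step picture sketched above.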
An experimental breakthrough in the YBCO family of cuprates was the measurement of the electronic structure via quantum oscillations, thereby enabling access to low energy coherent quasiparticles. The measurement of Shubnikov-de Haas and de Haas-van Alphen oscillations in YBa2Cu3O6+δ [42,43,44] and YBa2Cu4O8 [45,46] was made possible by advances involving high magnetic fields and improved single crystal quality. At high magnetic fields where superconductivity is destroyed, a Fermi surface comprising small sections was measured in these underdoped cuprate materials, indicating likely translational symmetry breaking that 'reconstructs' the large paramagnetic Fermi surface. The origin of such a superlattice and its relation to unconventional superconductivity are currently a subject of debate and ongoing experiments. Another outstanding question relates to the apparent dichotomy between the 'Fermi arcs' measured at high temperatures and low magnetic fields by photoemission experiments [47,48], and the closed 'Fermi pockets' measured at low temperatures and high magnetic fields by quantum oscillation measurements. Experiments that relate these two regimes are crucial to understand quasiparticle excitations in the precursor phase to superconductivity, thereby providing potential clues as to the Cooper pairing mechanism. UNCONVENTIONAL SUPERCONDUCTIVITY: SAME OR DIFFERENT It is striking that the occurrence of superconductivity on the brink of magnetism appears to be ubiquitous. Indeed, common elements are in play in the assorted f- and d-electron families discussed: a low-dimensional crystal structure, (anti)ferromagnetism, and tuneability to the brink of magnetism. Similar tuning parameters such as doping and applied pressure are seen to suppress magnetism and induce superconductivity in these materials, the particular attraction of the d-electron family of materials lying in their significantly higher energy scales. The commonality in behaviour which becomes apparent on considering representative materials' families, however, poses a puzzle. On further inspection, this apparently universal behaviour is found to display surprising variations in the details of mediating mechanisms and mediated phases. This deviation from notions of universality leads us to another important consideration that may inform such dichotomous behaviour - the flattened energy landscape in the vicinity of a phase instability. The consequent degeneracy of phases in this region needs to be weighed in the balance with potential universal behaviour in order to understand the similar yet different manifestation of physical phenomena. ULTRA STRONGLY CORRELATED PHENOMENA IN GRAPHITE AND BISMUTH While unconventional superconductivity is a consequence of strong correlations in condensed matter systems, in the limit of ultra strong correlations arguably more exotic effects come into play. A 2D electron gas in the presence of a perpendicular magnetic field offers a rich playground for the observation of such exotic quantum states of matter, celebrated examples of which include the fractional QHE states such as the Abelian Laughlin liquid [49] or the non-Abelian Pfaffian and parafermionic states [50,51]. At this year's SCES, it was suggested that practical 3D materials, examples being graphite and bismuth, may also exhibit such quantum effects.
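As a brief aside on the quantum oscillation measurements mentioned above: the Onsager relation converts an oscillation frequency F into an extremal Fermi-surface cross-section via F = (ħ/2πe)·A_k. A minimal sketch follows; the frequency of a few hundred tesla is an assumed, order-of-magnitude value for a small reconstructed pocket, not a figure quoted from the cited works.

```python
import numpy as np

# Onsager relation: F = (hbar / (2*pi*e)) * A_k, so A_k = 2*pi*e*F / hbar.
# F below is an assumed order-of-magnitude oscillation frequency for a small
# reconstructed pocket; it is illustrative, not a value from the cited works.
hbar = 1.054571817e-34  # reduced Planck constant, J s
e = 1.602176634e-19     # elementary charge, C

F = 500.0                       # oscillation frequency, tesla (assumed)
A_k = 2 * np.pi * e * F / hbar  # Fermi-surface cross-section, m^-2
k_F = np.sqrt(A_k / np.pi)      # radius of an equivalent circular pocket

print(f"A_k = {A_k:.2e} m^-2, k_F = {k_F / 1e9:.2f} nm^-1")
# A few hundred tesla corresponds to a pocket occupying only a few percent of a
# typical cuprate Brillouin zone, i.e. the 'small sections' of Fermi surface above.
```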
Graphite Although graphite is a 3D material consisting of several graphene planes, the high anisotropy between out-of-plane and in-plane transport observed in highly oriented pyrolytic graphite (HOPG), ρout/ρin ∼ 5×10⁴ [52], puts this material in the class of 2D conductors. In the presence of a perpendicular magnetic field, 2D electron systems are expected to display the QHE. The integer QHE in the presence of a perpendicular magnetic field has been previously demonstrated in HOPG [52]. Careful analysis of the steps in HOPG further showed that this material exhibits the QHE for both Dirac-like holes and massive electrons [53]. This observation that the Fermi surface in graphite comprises both electron and hole pockets is also consistent with de Haas-van Alphen and Shubnikov-de Haas quantum oscillations [54], scanning tunneling spectroscopy [55], far-infrared magneto-transmission spectroscopy [56], and angle-resolved photoemission experiments [57]. At this year's SCES, studies were presented on HOPG with improved mobility (µ ∼ 10⁶ cm²/Vs) and ultra-high pulsed magnetic fields (up to B = 57 T) applied perpendicular to the graphene layers [58]. Deep in the quantum limit, for fields B ≫ BQL ∼ 7-8 T, several plateaus are seen in the Hall resistivity at fractional filling factors. Although the longitudinal resistivity ρxx does not vanish in the plateau region, it exhibits small dips at the filling factors for which ρxy shows plateaus, as do other 3D systems. The series of plateaus observed at fractional filling factors ν = 2/7, 1/4, 1/5, 2/11, 1/6, 1/8, 2/17, and 1/9 [58] suggests a fractional QHE in graphite, albeit a more complex scenario than the Jain series that appears in more conventional 2D electron systems [59]. Bismuth A rather more surprising example of a 3D material where the fractional QHE is potentially realised is bismuth [60]. Due to the extremely small Fermi surface of bismuth and the long Fermi wavelength of itinerant electrons in this system, the quantum limit can be attained by applying moderate magnetic fields B ∼ 9 T along the trigonal axis of the material [61]. Earlier studies up to B = 12 T showed quantum oscillations in the Nernst coefficient in the vicinity of the quantum limit [62]. It was further argued that the peak in the Nernst signal where the Landau level crosses the Fermi level is indicative of a 'quantum Nernst effect' associated with the integer QHE. At this year's SCES, measurements of transport properties of a single crystal of bismuth under magnetic fields up to B = 35 T were reported [60], revealing a plateau in the Hall resistivity at a fractional filling factor. The previous detection of three peaks in the Nernst response corresponding to rational fractions 2/3, 2/5, and 2/7 of the first integer peak were the first indications of a fractional QHE in this material [61]. However, no plateaus were observed in the corresponding Hall resistivity data, which had only low resolution at T = 0.44 K [61]. Recently, a clear plateau-like feature was measured for magnetic fields applied at magic angles with respect to the trigonal axis [60], at a filling factor which would naively correspond to ν = 1/3 for holes. While the high mobility and small Fermi surface of bismuth make it a promising candidate for exotic quantum effects such as the fractional QHE, it remains a puzzle as to how its 3D structure could support such a state. These novel observations may introduce more questions than answers.
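The filling factors quoted above follow from the elementary relation ν = n_s h/(eB). The sketch below treats HOPG as a stack of weakly coupled 2D layers and uses an assumed areal carrier density, chosen only so that the quantum limit (ν = 1) falls near the BQL ∼ 7-8 T quoted above; the actual density is not stated here.

```python
import numpy as np

# Landau-level filling factor for a 2D electron system: nu = n_s * h / (e * B).
# n_s below is an assumed areal carrier density per layer, picked so that the
# quantum limit (nu = 1) falls near B_QL ~ 7-8 T; it is illustrative only.
h = 6.62607015e-34   # Planck constant, J s
e = 1.602176634e-19  # elementary charge, C

n_s = 1.8e15         # assumed areal carrier density, m^-2 (~1.8e11 cm^-2)

def field_for_filling(nu):
    """Perpendicular field B (tesla) at which filling factor nu is reached."""
    return n_s * h / (e * nu)

print(f"quantum limit (nu = 1) at B = {field_for_filling(1):.1f} T")
for nu in (1/3, 2/7, 1/5, 1/9):
    print(f"nu = {nu:.3f} would require B = {field_for_filling(nu):.0f} T")
```

With this choice, the deepest fractions quoted above indeed call for pulsed fields of several tens of tesla, in line with the 57 T experiments mentioned.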
Among these open questions, for instance, is the role of the rich Fermi surfaces of graphite and bismuth, which comprise both electron and hole pockets. Whereas in graphite the electrons are massive and the holes are Dirac-like, in bismuth the situation is the opposite: the holes are massive and the electrons are massless. It is conceivable that competing responses from electrons and holes introduce novel effects. It is hoped that further investigations of higher mobility samples at lower temperatures and stronger magnetic fields could shed further light in this direction. COLD ATOMS The experimental realization of ultracold atomic gases loaded into optical lattices has opened a unique pathway to study strongly correlated quantum many-body systems. The great versatility in engineering optical lattices allows for full control of the lattice parameter and of the barrier (height and width) separating neighboring sites. In addition, the lattice can be loaded with bosons, fermions, or Bose-Fermi mixtures, and the inter- and/or intra-species interactions can be tuned from attractive to repulsive by using the technique of Feshbach resonance [64]. Several interesting phenomena predicted in the context of condensed matter systems have recently been observed with cold atoms. A prominent example is the detection of the superfluid/Mott insulator transition in 3D [65] and more recently in 2D [66] optical lattices loaded with bosons. In the limit of strong repulsive on-site interactions U the bosons are localized and form a Mott insulator, whereas a superfluid phase with spontaneously broken gauge symmetry emerges below the critical value of the ratio between U and the hopping parameter t. Other interesting examples are the observation of the crossover between the Bose-Einstein condensate and BCS regimes by varying the interaction strength in fermionic condensates [67], and more recently the creation of an antiferromagnetic Néel state with spinful fermionic atoms [68]. In this context, some novel realizations of strongly correlated phases were proposed during this year's SCES. One of them addressed the investigation of Luttinger liquid physics by generating 1D tubes of strongly interacting fermionic isotopes in a 2D optical lattice. If the tubes are well separated, two regimes are expected to occur, depending on the total atomic density and on the 3D s-wave scattering length between different species of atoms. In the Spin-Coherent regime, the usual spin-charge separation is expected to occur. However, another regime can be realized, in which the Luttinger liquid becomes 'Spin-Incoherent' and only charge excitations remain as a collective mode. The measurement of the off-diagonal (spin-up spin-down) correlator of density fluctuations was proposed as the natural observable for detecting both phases, since it would exhibit different signs in the two different regimes [69]. Another proposal considered the effect of a staggered rotation for atoms loaded in a 2D square optical lattice [70]. The rotation acts as an effective staggered magnetic field and the Hamiltonian describing the system becomes a generalized Hubbard model, with complex and anisotropic hopping coefficients [71]. For the case of bosons, the system exhibits different superfluid phases at small values of the Hubbard parameter U, depending on the strength of the 'gauge' field. For weak magnetic fields, the bosons condense at zero momentum and the conventional uniform superfluid is realized.
This phase is reminiscent of the Meissner phase in superconductors. On the other hand, for stronger fields the minimum of the single-particle spectrum moves to the boundary of the Brillouin zone and a finite-momentum condensate is realized [71]. This phase bears analogies with the FFLO state; nevertheless, in the FFLO state the Cooper pairs carry a finite linear momentum, whereas in this case the bosons carry a finite angular momentum. Indeed, this phase consists of a square vortex-antivortex lattice, analogous to the Abrikosov vortex lattice observed in type-II superconductors. The quantum phase transition between the uniform and the finite momentum condensates occurs when the magnetic flux per plaquette equals one half of the fundamental flux quantum φ0, and it is of first order. When loaded with fermions, instead, at half filling this system realizes the physics of graphene [72]. Due to the staggered rotation, the optical lattice is divided into A and B sublattices and the single-particle spectrum exhibits 4 Dirac cones, with two inequivalent ones. At the critical value of the 'magnetic field' where the flux φ = φ0/2, the fermionic system exactly realizes the graphene spectrum because the Dirac cones are isotropic. However, by changing the magnetic field the cones become anisotropic in the x and y directions and the cold atom system shares the features proposed to occur by depositing graphene on top of a periodic potential [73]. This superlattice configuration could be very important for technological applications involving graphene. Moreover, at this critical field the fermionic system realizes the so-called staggered π-flux phase, which was proposed long ago to be the ground state in the pseudogap phase of high-Tc cuprates [74]. An even more interesting case emerges when this staggered lattice is loaded with both fermions and bosons. In this case an unconventional superconducting phase can arise, because a nearest-neighbor attractive interaction between fermions in the A and B sublattices can be generated, mediated by the bosons. Due to the additional sublattice degree of freedom, a state which is singlet in the spin and in the sublattice but odd in the orbital can occur, thus opening the possibility of realizing unconventional superconductivity with cold atoms [75]. From the theoretical point of view, it is by now clear that cold atom systems under staggered rotation in an optical lattice offer the possibility of realizing a panoply of interesting quantum states of matter, ranging from finite-momentum Bose-Einstein condensates, to anisotropic Dirac fermions, or even unconventional superconductivity. The experimental realization of such exotic phases remains a challenge to be faced in the forthcoming years. WHERE TO LOOK NEXT A veritable bevy of experiments is currently underway to unearth details of the nature of superconductivity, magnetism, and possible interaction mechanisms that might potentially mediate unconventional superconductivity in systems such as heavy fermions and the cuprate and pnictide high-Tc superconductors. Amidst all this busyness, however, a pressing question to be asked is: why was potential high temperature superconductivity in the pnictides not explored earlier, given the shared characteristics that appear evident in hindsight? As we have seen, diverse and unexpected phenomena in disparate materials families can nonetheless be woven into the narrative of novel phases nucleated at a near-continuous phase instability.
Materials with at least nominally similar characteristics (crystal structure and magnetisation, for instance) would be the natural candidates in which to search for unconventional phenomena, and not just superconductivity, but possibly even more exotic pairing symmetries, for instance spontaneous forms of current order. A further boost in this effort to identify candidates for novel quantum phases may be provided through cold atom models. These models hold the promise of tailoring, under extremely well controlled conditions, the most exotic correlated systems. Such an initiative could play a pivotal role in the understanding of the interplay between magnetism and superconductivity in these complex systems. While various discoveries of novel phases can be collectively viewed through the prism of quantum critical phenomena, adherence to this prescription is likely to yield not a blandly universal array of physical phenomena, but rather a rich landscape of novel phases and mediating mechanisms peculiar to each materials system studied. Looking forward toward developing a condensed matter community that seeks to explore new and uncharted territory, the importance of a directed search for and study of candidate new materials families cannot be sufficiently emphasised.
2009-03-26T10:53:01.000Z
2009-03-26T00:00:00.000
{ "year": 2009, "sha1": "17a4658abf6ca6a80c598c9a20039a0c06fe05fc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0903.4548", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "17a4658abf6ca6a80c598c9a20039a0c06fe05fc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
150315691
pes2o/s2orc
v3-fos-license
Among rewilding mountains: grassland conservation and abandoned settlements in the Northern Apennines ABSTRACT Due to agricultural abandonment and the urban preference for 'wilderness' over 'rural' areas, abandoned settlements and rewilding grasslands are often the last traces of agriculture in today's protected areas of the Northern Apennines. However, since the late 1990s, an increasing number of policy makers have appreciated these spatial manifestations of the nature/culture gestalt and developed projects to conserve the last grasslands in rewilding protected areas. Does the land cover reflect these changing attitudes? Using the Foreste Casentinesi National Park as our case in point, we aim to (1) detect land cover changes during 1990-2001 and 2001-2010; (2) analyse change trajectories; and (3) reveal potential discrepancies between the conservation of biodiversity and rural built heritage. Our results show that grassland loss dominated 1990-2001, whereas grassland maintenance/restoration can be observed for 2001-2010. However, the decadence of the rural built heritage seems to continue. Introduction and aims 'Implicit in the wilderness idea is an anti-human bias that seeks to present nature as the antithesis of culture' (Gade, 1999, p. 6) Background In many Southern European mountain regions, the complementary processes of urbanisation and deagrarianisation have led to changes in rural land use practices, often leading to the complete abandonment of agricultural land since the 1950s (on European mountains in general see MacDonald et al., 2000; on Mediterranean mountains in particular see Papanastasis, 2012). In a case study in the Borau Valley of the Spanish Pyrenees, an area characterised by severe depopulation during the 20th century, Vicente-Serrano, Lasanta, and Cuadrat (2000) noticed increased woody vegetation on former agricultural land and called this process the 'banalization of landscape'. Agricultural abandonment was also observed in the second half of the past century in Italian mountain regions (Vecchio, 1989). Bender and Haller (2017) analysed the nexus of population mobility and landscape development in the Alps and identified deep-rooted cultural practices as important drivers of change. In a quantitative study, Falcucci, Maiorano, and Boitani (2007, p. 622) detected a large increase in forest areas in the Alps and Apennines between 1960 and 2000. The Apennines, which Goethe in his Italian Journey called 'ein merkwürdiges Stück Welt' ('a remarkable part of the world'), additionally presented a large decrease in pastures. Due to their regional geographic characteristics, the Northern Apennines are a perfect case in point (Farina, 1995, 2006, pp. 256-257); mountain depopulation in Tuscany and Emilia-Romagna was not considered alarming until the end of the 1930s 1 (Toniolo, 1937), but gathered pace from the 1950s. While urban regions on the plains of the Po and Arno, as well as along the coasts of the Ligurian and Adriatic Sea, expanded, 2 many rural mountain municipalities in the Northern Apennines experienced abandonment of agricultural land and settlements (for case studies in the Northern Apennines see Dossche, Rogge, & Van Eetvelde, 2016; Kühne, 1974; Torta, 2004; Vos, 1993).
This process of physical and demographic urbanisation was accompanied by profound attitudinal changes leading to (1) the preference for 'natural landscapes' over 'rural landscapes' and (2) the creation of protected mountain areas managed according to the paradigm of 'renaturalization' (Agnoletti, 2014), a tendency influenced by discussions on conserving wilderness. 3 In the first half of the 1980s, this development resulted in the institutional establishment of the 'wilderness' movement in Italy (Zunino, 1995). Yet in the 1990s, with the implementation of the EU Habitats Directive and the adoption of the Mediterranean Landscape Charter (followed by the European Landscape Convention in 2000; see Jones, Howard, Olwig, Primdahl, & Sarlöv Herlin, 2007; Olwig, 2007), a shift from wilderness-style to utilised protected areas occurred in Western/Mediterranean Europe (Zimmerer, Galt, & Buck, 2004, p. 527). This shift was accompanied by a general change in attitude (D'Angelo, 2016): the return to the appreciation of ordinary agricultural landscapes among rewilding mountains, mostly for biodiversity conservation but also for cultural-historical motives. In a Council of Europe publication, Sangiorgi (2008, p. 5) underlines that 'the landscape, the environment, the land and the people are part of one and the same unit and that this heritage should be preserved not only as a memory of the past but also as a resource for future development'. This fact particularly applies to the protected areas of IUCN category II (principally aimed at the conservation of biodiversity, ecological structures, and processes) in the Tuscan and Emilian-Romagnan Apennines, which today are a type of 'green and mountainous interstice' (sensu Gambino & Romano, 2003, p. 8) embedded between urbanised lowlands. In the Central Italian Foreste Casentinesi National Park, the appearance of new attitudes toward grasslands, and the emerging consciousness of the need to conserve these traces of human presence, can be observed since the late 1990s. Against the massive loss of grasslands since the 1950s, 4 Tellini Florenzano (1999) published a report on a bird monitoring project carried out during the 1990s, concluding that: 'It would be of highest importance to stop the current tendency of forest regrowth (natural and artificial), conserving at least the present share of pastures, arable lands, or shrub formations. To preserve the current state, the direct intervention of humans is necessary and appropriate incentives for agropastoral activities in the area should be provided. Unfortunately, hardly any of these nonforest environments form part of the public regional heritage. However, from a global perspective on conservation, it is nevertheless possible to intervene on private properties. [. . .] Within this scope, the middle and higher elevations (700-1100 masl) are of importance'. (Tellini Florenzano, 1999, p. 78; translated from Italian by the authors). At the same time, in 1999, a project to restore pasture habitats in the park started in the 'Monte Gemelli, Monte Guffone' Natura 2000 site (IT4080003), which was financed by the EU program LIFE NATURA (LIFE99 NAT/IT/006237).
More recently, in 2013, the park authorities sent out a press release announcing a project in collaboration with the local Union of Mountain Municipalities (Unione dei Comuni Montani del Casentino), which aims at recovering some abandoned pastures in the south of the park: 'The grasslands currently existing in the protected area are results of the preexisting clearing of forest and could only be conserved over centuries by their use. The latter represents an important element of biological diversification and safeguards the existence of ecotones [. . .], where the extraordinary vegetational, faunistic, landscape, and environmental biodiversity typical for these habitats is as rare as particularly worthy of attention'. (Parco Nazionale delle Foreste Casentinesi, Monte Falterona e Campigna, 2013, s.p.; translated from Italian by the authors). Hence, despite the continuing appeal and aim of creating 'wilderness' in this Apennine protected area, since the turn of the millennium we can observe a series of efforts to maintain grasslands in the midst of rewilding mountains; these include the acquisition of abandoned poderi (small farms belonging to a larger mezzadria or sharecropping system 5 ) by the park authorities to ensure the conservation of grasslands: 'The acquirement of the poderi of Bagnatoio, Briganzone, Centine and Romiti [. . .] (392 ha, 700 millions [of Italian lire]) has almost been finalised. By acquiring these properties, the park authority aims to take care of the management and restoration of some of the most significant rural environments of the park. The agricultural part of these areas could be leased to breeders for the seasonal pasturing of the livestock [sheep and cattle].' (Parco Nazionale delle Foreste Casentinesi, Monte Falterona e Campigna, 2002a, p. 31; translated from Italian by the authors). If human-made grasslands in the Foreste Casentinesi National Park are considered spatial manifestation of the nature/culture gestalt (Gade, 1999)-where 'culture' should be understood in a broad sense, one that considers both individual and community, law, justice, and customs (see Olwig, 1996)-, then the question emerges whether this attitudinal change (from the preference of rewilding to the consciousness of conserving the remaining grasslands) is reflected by the development of the park's land cover. Is there a noticeable decrease in-or even a stop in-shrub encroachment on grassland after 2000? Did grassland conservation go along with the restoration of abandoned farmsteads? Hence, the present article specifically aims at (1) detecting land cover changes during 1990-2001 and 2001-2010; (2) analysing change trajectories during 1990-2001 and 2001-2010, focusing on changes between 'grassland' and 'wood or shrubland' in different altitudinal zones; and (3) revealing potential discrepancies between the conservation of biodiversity and rural built heritage by identifying abandoned farmsteads on stable grassland. Study area The Italian Foreste Casentinesi National Park (officially Parco Nazionale delle Foreste Casentinesi, Monte Falterona e Campigna), created between 1990 (tentative definition of the park's boundaries; Ministerio dell'Ambiente, 1990) and 1993 (establishment of the government body of the national park; Ministerio dell'Ambiente, 1993), is located between approximately 43°42ʹ and 44°02ʹ northern latitude, and 11°42ʹ and 11°56ʹ eastern longitude ( Figure 1). 
The park covers more than 36,000 ha of the Northern Apennines, reaching almost equally into Tuscany (southwest) and Emilia-Romagna, in fact the Romagna toscana (northeast). The park's municipalities of San Godenzo and Londa belong to the Metropolitan City of Florence, and Pratovecchio-Stia, Poppi, Bibbiena, and Chiusi della Verna are part of the Province of Arezzo. The municipalities of the Romagna (Bagno di Romagna, Santa Sofia, Premilcuore, Portico e San Benedetto, and Tredozio) lie in the Province of Forlì-Cesena. Physical-geographical setting The Foreste Casentinesi National Park spans both sides of the Northern Apennines' main ridge, which is formed by sedimentary rocks, mainly sandstone and marl (Cavagna & Cian, 2003; Rother & Tichy, 2008). While the southwestern slopes are relatively smooth, the northeastern side's relief is rather steep and rugged. According to Rubel, Brugger, Haslinger, and Auer (2017), the vast majority of the park has a Cfb climate, that is, a warm temperate climate without a dry season but with warm summers (Köppen-Geiger climate classification; HISTALP data from 1986-2010). The area of the park, at present almost entirely covered by Natura 2000 sites, ranges from approximately 400 m asl up to Monte Falco (1658 m asl). Consequently, different vegetation zones, similar to those of the southern European Alps (Franz, 1979), can be distinguished (Viciani & Agostini, 2008): a colline zone up to approximately 600 m asl (4.5% of the park area), the lower (600-800 m asl; 26%) and upper submontane belts (800-1000 m asl; 37%), as well as the lower (1000-1400 m asl; 30%) and upper montane regions (above 1400 m asl; 2.5%). While the colline and submontane areas are characterised by oaks (Quercus cerris), hop hornbeams (Ostrya carpinifolia), and sweet chestnut (Castanea sativa), the montane zones are typically covered by beech forests (Fagus sylvatica). Apart from deciduous trees, areas with plantations of evergreen trees (e.g. Abies spp., Pinus spp., or Picea spp.), most famously the silver fir (Abies alba) forest created by the monks of Camaldoli (Pungetti, Hughes, & Rackham, 2012), as well as grassland and shrubland can be found in the park. As indicated in the management plan of the park, grasslands used as meadows include Dactylis glomerata, whereas Bromus erectus and Cynosurus cristatus are found in pastures. Once the abandonment process starts, diffusion of Brachypodium pinnatum often occurs, and pioneer species such as the deciduous and simple-leaved Spanish broom (Spartium junceum) increasingly colonise the extensively used grasslands (Parco Nazionale delle Foreste Casentinesi, Monte Falterona e Campigna, 2002b, pp. 23-24). Social transformations and population dynamics The shift from grassland to wood or shrubland (Figure 2) is often linked with social transformations and population dynamics. In an overview of the Italian Apennines, Rother and Wallbaum (1975) found a slight decrease in depopulation in the Northern Apennines during 1961-1971, with exceptions in Liguria. Differences between the Tuscan and Emilian-Romagnan slopes were not observed. Depopulation in selected Emilian-Romagnan municipalities of today's national park (Premilcuore and Portico e San Benedetto) was studied in detail by Kühne (1974) during 1966-1969.
He revealed that outmigration in the 1960s, mainly to large cities such as Florence, Forlì, or Milan, primarily affected the scattered buildings of poderi and led to the abandonment of these isolated farmsteads, a tendency Kühne saw in the context of the area's mezzadria or sharecropping system. Since the 1980s, certain differences between the two regions of today's national park municipalities emerged: the Emilian-Romagnan municipalities registered decreasing or, at best, stagnating populations, whereas population increases were registered by the Tuscan municipalities. From 1981 to 2011 (Table 1), Portico e San Benedetto, Premilcuore, and Tredozio lost more than 20% of their population; San Godenzo, Poppi, Bibbiena, and Londa, in turn, showed population gains between 6% and 68%. The latter value was clearly driven by suburbanisation and/or postsuburbanisation processes (on the differences, see Borsdorf, 2005) in the metropolitan area of Florence. Data acquisition To analyse land cover changes using a Geographic Information System (GIS), remotely sensed data were used: satellite imagery (sharpened 432-RGB composites of Landsat TM scenes, 30 m resolution, from 1990 (July 20), 2001 (July 26), and 2010 (July 3)) and a digital elevation model (ASTER GDEM; 30 m resolution). Training samples for classification were produced by combining [. . .]; reference points (373) were created for assessing the accuracy of classification using very high-resolution satellite imagery (acquired on 29 August 2014) in the free virtual globe software Google Earth. Although the use of imagery via Google Earth has several limitations (e.g. temporal, spatial, and spectral metadata cannot always be found), the quality of the data is considered sufficient for a range of practice-oriented research and mapping exercises (see, for instance, Potere, 2008). Very high-resolution imagery from Google Earth (acquired on 29 August 2014) was also used to map abandoned farmsteads in the study area. Data processing Land cover change and settlement mapping Each of the three Landsat TM composites was classified into 'wood or shrubland', 'grassland', or 'other', always applying the same process. To assess the accuracy of the 2010 classification, we compared it with a set of reference points in a so-called error matrix. Three hundred reference points were created randomly using very high-resolution imagery from 2014, the vast majority (273) of them [. . .]. Because the 1990 and 2001 images were acquired by the same sensor in the same month, and processed identically, we assumed that the accuracy of the classifications from 1990 and 2001 was comparable to that of the 2010 image. Finally, a cross-tabulation matrix was calculated (following Pontius, Shusas, & McEachern, 2004) and land cover change trajectories for 1990-2001-2010 were analysed by pixel-wise comparison, and then visualised and interpreted. Abandoned settlements visible on very high-resolution images were manually digitised. As abandonment itself cannot be detected visually, we manually searched for and digitised all single buildings that clearly had no complete roof (either a damaged roof or none at all) at a viewing height of 800 m. Groups of buildings were joined and counted as one settlement (the location of the largest building was mapped). The location information of the abandoned buildings, imported into the GIS as kml files, was complemented by the trajectory of the site's land cover change (the respective 30 × 30 m pixel was considered).
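To make the two processing steps above concrete, the following minimal Python sketch (not the authors' code; array and function names are ours, and integer-coded class rasters are assumed) shows how an error matrix with overall, producer's, and user's accuracy can be derived from reference points, and how per-pixel 1990-2001-2010 trajectories can be encoded from three classified rasters.

import numpy as np

def error_matrix(reference, classified, n_classes=3):
    # Cross-tabulate reference labels (rows) against classified labels (columns).
    m = np.zeros((n_classes, n_classes), dtype=int)
    for ref, cls in zip(reference, classified):
        m[ref, cls] += 1
    return m

def accuracies(m):
    # Overall accuracy plus per-class producer's (omission side) and
    # user's (commission side) accuracy from the error matrix.
    overall = np.trace(m) / m.sum()
    producers = np.diag(m) / m.sum(axis=1)
    users = np.diag(m) / m.sum(axis=0)
    return overall, producers, users

def trajectories(c1990, c2001, c2010):
    # Encode each pixel's class sequence as one integer; e.g. with
    # wood_or_shrubland = 0 and grassland = 1, code 110 reads
    # grassland -> grassland -> wood or shrubland.
    return c1990 * 100 + c2001 * 10 + c2010

Counting the occurrences of each trajectory code then reproduces the cross-tabulation logic of Pontius, Shusas, & McEachern (2004) at the pixel level.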
Classification accuracy assessment On the basis of the 373 reference points, the percentage of correctly allocated pixels reached 87.13% (Table 2). The majority of classes had individual producer's and user's accuracy values of 73% or more. The only exception was the user's accuracy of 'grassland' (60.66%), possibly because of the fuzzy boundary between 'grassland' with incipient shrub encroachment and 'wood or shrubland' proper. In sum, the accuracy targets set by Thomlinson, Bolstad, and Cohen (1999), namely more than 85% correctly allocated pixels with no individual class below 75%, can be considered almost achieved for the 2010 classification. Allocation disagreement showed somewhat higher values than quantity disagreement. Land cover and change trajectories The results show that 67% of today's park area was covered by wood or shrubland in 1990. At the same time, grassland covered 30%, and all other land cover types together made up only 3% of the study area. In 2001, in turn, the situation had changed: wood or shrubland covered approximately 84% and grassland had decreased to only 13%, whereas other land cover types did not show large changes. This implies that grassland areas were more than halved within a period of only 11 years. The situation in 2010 shows a distribution similar to that in 2001: 83% covered by wood or shrubland, 13% by grassland, and 4% by other land cover. Land cover changes 1990-2001 and 2001-2010 The results presented in Table 3 indicate that net land cover changes during 1990-2001 were less than 17%. On comparing gains and losses per category, it becomes clear that swaps (equal transitions between two categories) make up approximately 2%, and thus a total change of little more than 19%. This implies that approximately 81% of the park's area was persistent between 1990 and 2001. Subsequently, during 2001-2010, the park's land cover showed even higher rates of persistence (approximately 92%), whereas swaps amounted to 7%; net changes reached a value of only 1% of the park's area. A closer look at grassland reveals that this category gained 1% but lost 18% of the park's area during 1990-2001. Between 2001 and 2010, however, the amount of grassland gained was almost equal to that of grassland lost, resulting in a total change of 7.5%. In this context, it is astonishing that net changes within the grassland category are even smaller than those in the category of 'other'. These figures convey the impression that, regarding the total land cover structure, conservation was highly effective between 1990 and 2010 (high rates of persistence). Moreover, the land cover trajectories seem to confirm the attitudinal change from 'wilderness only' (net change from grassland to wood or shrubland during 1990-2001) to a consciousness of conserving the remaining grasslands (swaps between grassland and wood or shrubland during 2001-2010). Yet the question of whether these changes are systematic and dominant remains. Systematic and random changes Under random processes of gain or loss, the distribution of the total gains or losses would be expected to correlate with the respective categories' share in the total area. Hence, by subtracting the expected values from the observed transitions between categories, we can identify and interpret systematic changes (values not equal to zero) and random changes (values equal to zero).
In addition, values very close to zero (those between 0.1% and −0.1%) are also considered nonsystematic changes. In terms of gains during 1990-2001 (Table 4), the observed transition from grassland to wood or shrubland shows a difference of 1.56% between observed and expected changes. Hence, when wood or shrubland gains, it systematically replaces grassland. The observed changes from wood or shrubland to grassland (0.05%) equal the expected values, and thus are not systematic. In terms of losses during 1990-2001 (Table 4), the value of observed transitions from grassland to wood or shrubland is higher than expected (difference of 0.27%), indicating that, when grassland loses, it is systematically replaced by wood or shrubland, and wood or shrubland gains. Regarding transitions from wood or shrubland to grassland, a difference of 0.22% indicates a systematic change. Following Alo and Pontius (2004) and Braimoh (2006), the transitions from grassland to wood or shrubland are the dominant changes (note 10). For 2001-2010, in terms of gains (Table 4), we see systematic changes for transitions from grassland to wood or shrubland (difference of 0.7%), indicating that, when wood or shrubland gains, it systematically replaces grassland. In addition, transitions from wood or shrubland to grassland (2001-2010) seem to be systematic (difference of 0.15%); thus, when grassland gains, it tends to replace wood or shrubland. In terms of losses (Table 4), the observed transition from grassland to wood or shrubland almost equals the expected value; thus, no systematic change exists. For transitions from wood or shrubland to grassland, in turn, there is a difference of 0.43%, implying that, when wood or shrubland loses, it is systematically replaced by grassland. According to Alo and Pontius (2004) and Braimoh (2006), the changes from grassland to wood or shrubland were no longer dominant during 2001-2010. Instead, transitions from wood or shrubland to grassland were dominant. Disappeared and existent grasslands The areas of grassland cover lost during 1990-2010 are shown in Figure 3. It becomes clear that the decrease of grasslands occurred in both Tuscany and Emilia-Romagna. Moreover, altitudinal differences can be observed, as the majority of disappeared grasslands are almost homogeneously distributed in the lower parts of the park. This is no surprise because in 1990 around 85% of grasslands were below 1000 m asl (with 78% in the submontane zones), whereas only 15% were in the montane zones: 8,598 pixels in the colline zone (covering 49% of this altitudinal zone), 45,791 pixels in the lower submontane zone (covering 43%), 50,739 pixels in the upper submontane zone (covering 33%), 17,297 pixels in the lower montane zone (covering 14%), and 572 pixels in the upper montane zone (covering 6%). The altitudinal distribution of disappeared grassland illustrated in Figure 4 shows that 77% of the grassland areas lost between 1990 and 2010 were in the submontane zones (which cover 63% of the park), with a peak at the limit between the lower and upper submontane areas (800 m asl). Thus, while the distribution of grassland in 1990 clearly tends toward the submontane zone (obviously because the Emilian-Romagnan grasslands are on mountains that hardly surpass heights of 1000 m asl), grassland loss during 1990-2010 within this zone occurred quite randomly, a fact that underlines the clear dominance of natural wood or shrubland expansion ('rewilding') over human-induced reforestation.
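The expected-versus-observed comparison behind the systematic-change analysis above can be sketched as follows. This is our own hedged reading of the random-gain logic in Pontius, Shusas, & McEachern (2004), not the authors' code: the gross gain of each category is distributed over the other categories in proportion to their time-1 shares, and the cross-tabulation is passed as percentages of the park area.

import numpy as np

def expected_gains_if_random(p):
    # p[i, j]: percentage of the area in category i at time 1 and j at time 2.
    t1_shares = p.sum(axis=1)                # category sizes at time 1
    gross_gains = p.sum(axis=0) - np.diag(p) # gross gain of each category
    expected = np.array(p, dtype=float)      # keep observed persistence on the diagonal
    for j in range(p.shape[1]):
        for i in range(p.shape[0]):
            if i != j:
                expected[i, j] = gross_gains[j] * t1_shares[i] / (100.0 - t1_shares[j])
    return expected

A difference (observed minus expected) far from zero flags a systematic transition, e.g. the +1.56% reported above for grassland to wood or shrubland during 1990-2001.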
With respect to the grasslands existent in 2010, the areas shown in Figure 3 indicate that large patches of 2010 grassland appear particularly in the municipalities of Portico e San Benedetto, Premilcuore, and Santa Sofia (Emilia-Romagna); the Tuscan municipalities with large patches of 2010 grassland were Pratovecchio-Stia, Poppi, and Chiusi della Verna. The fact that the 2010 grassland areas of the Tuscan part are mainly at the edge of the park while the Emilian-Romagnan patches tend to be at a greater distance from the park's border can be explained by the different types of relief on the two sides of the Apennine ridge, and does not necessarily mean differences in the altitudinal distribution of 2010 grassland areas. In 2010, around 88% of grassland was below 1000 m asl (with 81% in the submontane zones), whereas only 12% was in the montane zones (Figure 4): 3,620 pixels in the colline zone [. . .]. In summary, grassland areas in the total park area decreased by 57% between 1990 and 2010. The colline zone also lost 57% of its grasslands, and grassland areas in the lower and upper submontane belts decreased by 53% and 55%, respectively. In contrast, the lower and upper montane zones show values clearly above the total average: 64% and 67%, respectively. The decrease of grassland above 1100 m asl is evident (see Figure 4); a further concentration of grassland areas in the submontane zones occurred. Abandoned settlements on stable grassland According to the prominent school of landscape studies identified with the cultural geography of the University of California, Berkeley, and the geographer Carl Sauer, grasslands are understood as a spatial manifestation of a nature/culture gestalt or whole (Gade, 2011). From this 'Sauerian' perspective, which draws on both American and European sources, we regard grassland maintenance/restoration as including the rural built heritage. If, however, grasslands are simply seen as habitat areas where the vegetation is dominated by grasses, we would expect a significant number of abandoned settlements (buildings with either a damaged roof or none at all) on stable grassland. We identified 81 abandoned settlements on very high-resolution imagery from 2014, of which 44 were found on 2010 grassland. They were distributed almost equally between the lower (20 settlements) and upper submontane zones (21 settlements); only three settlements were located in the lower montane zone. By interpreting the selected settlements against the respective trajectory of land cover change (1990-2001-2010), we found 36 abandoned settlements on stable grassland (note 11; Figure 5). These abandoned settlements were located in the lower submontane zone (18 settlements), the upper submontane zone (16), and, slightly above 1000 m asl, in the lower montane zone (2). While the quantitative results of land cover change clearly show the effectiveness of the park authorities' efforts to maintain/restore grasslands since the new millennium (although the inclusion of mountain communities in general, and breeders in particular, remains a challenge; see Acciaioli, Tellini Florenzano, & Parrini, 2014), the motive behind these attitudinal changes, which developed during the 1990s, seems to be clearly driven by the EU Habitats Directive's aim of conserving biodiversity. As the many abandoned settlements on stable grassland indicate (Figure 6), cultural-historical motives clearly play a minor or even no role.
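The altitudinal breakdowns above (pixel counts and percentage cover per belt) follow directly from overlaying the classification on the DEM. A minimal Python sketch, with belt limits taken from the study-area description and all names ours, could look like this.

import numpy as np

BELTS = [(0, 600, "colline"), (600, 800, "lower submontane"),
         (800, 1000, "upper submontane"), (1000, 1400, "lower montane"),
         (1400, 9999, "upper montane")]

def grassland_share_by_belt(dem, landcover, grass_code=1):
    # dem and landcover are co-registered 30 m grids of equal shape.
    dem, landcover = dem.ravel(), landcover.ravel()
    out = {}
    for lo, hi, name in BELTS:
        in_belt = (dem >= lo) & (dem < hi)
        n_belt = int(in_belt.sum())
        n_grass = int((landcover[in_belt] == grass_code).sum())
        # Pixel count and percentage cover per belt, as in Figure 4.
        out[name] = (n_grass, 100.0 * n_grass / n_belt if n_belt else 0.0)
    return out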
The Agenzia Regionale per lo Sviluppo e l'Innovazione nel Settore Agricolo-Forestale of the regional government of Tuscany, for instance, has published a helpful manual on Northern Apennine grassland management (ARSIA, 2010), which clearly considers pasturing an instrument of biodiversity conservation: 'Human activities are not always in conflict with biodiversity, quite the contrary. Human activities, such as certain forms of agriculture, forestry, or livestock farming, have contributed, and still contribute, to the maintenance of very high biodiversity values [. . .], because human actions maintain varied (diversified) [. . .]'. Grasslands are thus seen as spaces of biodiversity. There is, however, still a potential to better integrate the conservation of biodiversity and the cultural-historical heritage in the sense of the European Landscape Convention (see Sarmiento, Bernbaum, Brown, Lennon, & Feary, 2014; Seardo, 2016). In this context, it is crucial to recognise the dynamic nature and the increasing complexity of social-ecological grassland systems in mountains (see Scolozzi, Soane, & Gretter, 2014). Regarding the rural built heritage in the park, the 2002 management plan of the national park already underlines that '[s]pecial attention must be paid to the restoration of those buildings not permanently inhabited [. . .]. In particular, more attention has to be paid in case these buildings are disused, not connected to the municipal, provincial, or national road system, and without supply of public services (power, gas, water, telephone, etc.)'. Given that montane grasslands are more affected than submontane grassland areas, and considering the diversity of the ecotone between the submontane forest and the beech-dominated montane zone, from both an ecological and an aesthetic point of view, abandoned farmsteads and grasslands at approximately 1000 m asl are particularly suitable for integrating the successful efforts of biodiversity conservation with those of restoring the rural built heritage, raising awareness of the importance of the past for a sustainable present and future. Ongoing artistic projects, such as the Le Valli project (www.progettolevalli.org) started in 2011 by the Florence-based artist Andrea Papi, are already contributing to the restoration and valorisation of the rural built heritage in the mountains of San Godenzo, for instance by creating a Sentiero dell'architettura rurale (a 'trail of rural architecture'). Moreover, the Popoli del Parco project (www.popolidelparco.it), an initiative of the Foreste Casentinesi National Park, launched a website in 2017, raising awareness of local landscape history by making photo archives and video interviews with contemporary witnesses accessible online. Such signposted landscape trails and web-based communication tools (see Jones, 2007) are definitely a step in the right direction. Notes 10. '[. . .] a category of 2000, and that 2000 category has also systematically gained from the same 1990 category, then we can conclude a systematic process of transition between the two categories'. Braimoh (2006) calls this a 'dominant' process. 11. In this context, 'stable grassland' does not mean that there was no shrub encroachment on these lands during 1990-2010, nor does it automatically mean that these grasslands were used as pastures, as only the land cover at three selected points in time was analysed.
Transcriptome and Comparative Chloroplast Genome Analysis of Vincetoxicum versicolor: Insights Into Molecular Evolution and Phylogenetic Implication Vincetoxicum versicolor (Bunge) Decne is the original plant species of the Chinese herbal medicine Cynanchi Atrati Radix et Rhizoma. The lack of information on the transcriptome and chloroplast genome of V. versicolor hinders evolutionary and taxonomic studies of the species. Here, the V. versicolor transcriptome and chloroplast genome were assembled and functionally annotated. In addition, a comparative chloroplast genome analysis was conducted between the genera Vincetoxicum and Cynanchum. A total of 49,801 transcripts were generated, and 20,943 unigenes were obtained from V. versicolor. One thousand and thirty-two unigenes from V. versicolor were classified into 73 functional transcription factor families. The transcription factors bHLH and AP2/ERF were the most abundant, indicating that they should be analyzed carefully in V. versicolor ecological adaptation studies. The chloroplast genomes of Vincetoxicum and Cynanchum exhibited a typical quadripartite structure with highly conserved gene order and gene content. They shared an analogous codon bias pattern in which the codons of protein-coding genes had a preference for A/U endings. Natural selection pressure predominantly influenced the chloroplast genes. A total of 35 RNA editing sites were detected in the V. versicolor chloroplast genome from RNA sequencing (RNA-Seq) data, and one of them restored the start codon in the chloroplast ndhD of V. versicolor. Phylogenetic trees constructed with protein-coding genes supported the view that Vincetoxicum and Cynanchum are two distinct genera. INTRODUCTION Apocynaceae is a large, globally distributed family of plants, which contains around 4,500 species in approximately 370 genera (Endress et al., 2014; Fishbein et al., 2018). Vincetoxicum versicolor (also known as Cynanchum versicolor in the Flora of China) belongs to the Apocynaceae family and is the original plant species of the Chinese herbal medicine Cynanchi Atrati Radix et Rhizoma (Chinese Pharmacopoeia Commission, 2015). However, the generic placement of this plant has not been settled, owing to the controversial phylogenetic relationship between the genera Vincetoxicum and Cynanchum, which may affect the application of Cynanchi Atrati Radix et Rhizoma worldwide. The phylogenetic relationship between Vincetoxicum and Cynanchum has been controversial since the first transfer of Vincetoxicum hirundinaria and several other Eurasian Vincetoxicum species to the genus Cynanchum by Persoon in 1805 (Persoon, 1805). Some researchers have suggested that Vincetoxicum should be grouped into the genus Cynanchum based on the similarity of the corona structure (Jiang and Li, 1977; Gilbert et al., 1996). On the other hand, these two genera have been considered distinct, and Vincetoxicum has been regarded as an independent genus based on molecular data and chemical constituents (Qiu et al., 1989; Liede-Schumann, 2000). Moreover, the latter view is supported by studies based on some regions of the nuclear and chloroplast DNA (Yamashiro et al., 2004; Fishbein et al., 2018). Although Vincetoxicum is generally considered an independent genus in Apocynaceae taxonomy around the world (Goyder et al., 2012; Endress et al., 2014; Liede-Schumann et al., 2016; Liede-Schumann and Meve, 2018), the concept of Vincetoxicum as a section of the genus Cynanchum is still reflected in the taxonomy of modern floras in China (Li et al., 2012).
Therefore, more evidence should be provided to promote the unification of the phylogenetic relationship between Vincetoxicum and Cynanchum. Chloroplasts originated from ancient endosymbiotic cyanobacteria and are active metabolic centers that sustain life on Earth by converting solar energy into carbohydrates via the photosynthesis process and oxygen release (Leister, 2003;Daniell et al., 2016). Chloroplasts carry their own genomes and genetic systems. The typical angiosperm chloroplast genome has a quadripartite structure, with a genome size of 107-218 kb and gene content of 120-130 genes (Daniell et al., 2016;Kim et al., 2019). The chloroplast genome has the characteristics of uniparental inheritance, moderate nucleotide substitution rate, haploid status, and no homologous recombination (Shaw et al., 2005;Hansen et al., 2007;Yang et al., 2019b). These features make it a suitable tool for molecular identification of species and genetic diversity studies (Zhang et al., 2017;Chen et al., 2018). Moreover, the entire chloroplast genome contains more informative sites than chloroplast DNA fragments, which can provide a higher resolution of the phylogenetic relationship at multiple taxonomic levels (Yang X.-Y. et al., 2018). The development of next-generation sequencing technology has led to more and more angiosperm chloroplast genomes available, making comparative chloroplast genomics a convenient and efficient method for phylogenetic and evolutionary studies (Ge et al., 2018;Gu et al., 2019). Next-generation sequencing not only greatly improves our ability to obtain genomic resources in non-model species but also facilitates the development of the RNA-Seq technique. RNA-Seq is an efficient technology for large scale transcriptome investigations, which provides a convenient way to obtain information from expressed genomic regions quickly and offers an opportunity to solve comparative transcriptomic-level problems for non-model organisms (Logacheva et al., 2011;Zhang et al., 2013). Transcriptome analysis provides an effective way for novel gene discovery (Emrich et al., 2007) and expression profile construction (Fox et al., 2014), as well as for molecular marker development (Zhang et al., 2013) and analysis of adaptive evolution (Jia et al., 2017). As a non-model species, V. versicolor lacks transcriptome analysis, delaying molecular studies at the transcriptional level. RNA editing, which is identified primarily by the RNA-Seq technique, is a repair mechanism derived by species in response to abnormal DNA mutations during evolution. RNA editing is a post-transcriptional process in which the nucleotide in the transcript differs from the encoded DNA sequence by nucleotide insertion, deletion, or conversion (Takenaka et al., 2013). Most RNA editing events occur in internal codons, resulting in aminoacid substitutions. However, in some cases, the ACG codon is restored to the AUG start codon because of the C-to-U RNA editing, contributing to the conservation of the translation start signals at the gene level, which is essential for protein synthesis (Hirose and Sugiura, 1997). This editing-restored start codon has been reported in the chloroplast transcripts from maize (rpl2), tobacco (psbL), but especially in the ndhD transcript of several species, including Arabidopsis, Betula, tobacco, spinach, and snapdragon (Neckermann et al., 1994;Wang et al., 2018). Here, we de novo assembled the transcriptome and chloroplast genome of V. 
versicolor and performed a comparative chloroplast genome analysis between species of the genera Vincetoxicum and Cynanchum. The aims of this study were (1) to characterize the transcriptome and chloroplast genome of V. versicolor, (2) to explore the V. versicolor molecular evolution, and (3) to provide insights into the phylogenetic relationship between the genera Vincetoxicum and Cynanchum. Plant Materials Collection and DNA and RNA Extraction The young fresh leaves of a single plant of V. versicolor were collected in August 2019 from Tianjin University of Traditional Chinese Medicine (117.06 • E, 38.96 • N), Tianjin City, China. The voucher specimens were deposited at Tianjin State Key Laboratory of Modern Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin, China (voucher number 2019bsbq). The collected leaves were snap-frozen in liquid nitrogen and then stored at -80 • C until DNA and RNA extraction. The total DNA was extracted using the extract Plant DNA kit (QIAGEN, Germany) following the manufacturer's instructions. Total RNA was extracted using the QIAGEN RNeasy Plant Mini Kit (QIAGEN, Germany) following the manufacturer's instructions. The purity and concentration of DNA and RNA were checked using NanoPhotometer R spectrophotometer (IMPLEN, CA, United States) and Qubit R DNA Assay Kit in Qubit R 2.0 Fluorometer (Life Technologies, CA, United States), respectively. DNA and RNA Sequencing, Assembly, and Annotation of Chloroplast Genome and Transcriptome The DNA-Seq library with an average insert size of 350 bp was constructed using the Truseq Nano DNA HT Sample Preparation Kit (Illumina United States). The strand-specific RNA-Seq library was constructed using the protocol described by Zhong et al. (2011). Then, the RNA-Seq library was sequenced on the Illumina HiSeqTM 2,500 platform. Subsequently, clean DNA and RNA data were obtained by removing adaptors and low-quality reads from the raw data. The V. versicolor chloroplast genome was de novo assembled using NOVOPlasty3.7.2 (Dierckxsens et al., 2017). To validate the reads coverage of the assembled chloroplast genome, clean data were mapped to the V. versicolor chloroplast genome using bowtie 2 (Langmead and Salzberg, 2012), and the average reads coverage was 2,418×. The V. versicolor chloroplast genome was annotated using GeSeq (Tillich et al., 2017), coupled with manual corrections for the start and stop codons. Finally, the V. versicolor chloroplast genome was deposited in the National Center for Biotechnology Information (NCBI) GenBank under accession number MT558564. For the transcriptome assembly, high-quality RNA-Seq data were de novo assembled into transcripts using Trinity (Grabherr et al., 2011) with min_kmer_cov set to two and other parameters set to default. The trinity-obtained contigs were then linked into transcripts. To remove redundant transcripts and obtain the primary representative of each gene locus, only the longest transcript in each cluster was selected as the unigene for subsequent analysis. Finally, the obtained unigenes were annotated using a BLAST search against the following databases, namely KOG (euKaryotic Ortholog Groups), GO (Gene Ontology), KO (KEGG Ortholog), Swiss-Prot (a manually annotated and reviewed protein sequence database), Nr (NCBI non-redundant protein sequences), Nt (NCBI non-redundant nucleotide sequences), and Pfam (protein family). 
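As a side note on the assembly statistics reported in the Results (for instance a unigene N50 of 2,128 bp), N50 is the standard measure of assembly contiguity and can be computed as below. This is a generic, self-contained sketch following the textbook definition, not code from Trinity or from the authors.

def n50(lengths):
    # N50 is the length L such that contigs of length >= L together
    # cover at least half of the total assembly length.
    total = sum(lengths)
    acc = 0
    for length in sorted(lengths, reverse=True):
        acc += length
        if acc * 2 >= total:
            return length
    return 0

assert n50([2, 2, 2, 3, 3, 4, 8, 8]) == 8  # toy example: total 32, half 16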
Annotation of Functional Genes, Prediction of Biochemical Pathways, and Detection of Transcription Factors Gene Ontology functional analysis was implemented using blast2go tool (Götz et al., 2008). The KAAS software (Moriya et al., 2007) was used to predict the biochemical pathways of the V. versicolor unigenes based on the KO database. The transcription factors were detected using the iTAK program (Zheng et al., 2016). Identification of RNA Editing Sites RNA-Seq reads were mapped to the chloroplast genome of V. versicolor using bowtie 2 (Langmead and Salzberg, 2012). Then, samtools was applied to call single nucleotide polymorphisms to recognize editing sites in the V. versicolor chloroplast genome. Codon Usage Calculation The number of codons and the relative synonymous codon usage (RSCU) were calculated using Mega X (Kumar et al., 2018). The effective number of codons (ENc) values against GC content in the third position of synonymously variable codons (GC3s) values of protein-coding genes of chloroplast genome were calculated using CodonW v1.4.4 (Peden, 1999). Then, the relationships between ENc and GC3s were analyzed using the R script. Phylogenetic Analyses A total of 20 chloroplast genomes (Supplementary Table 1) of 18 Apocynaceae species and two Gentianaceae species available in GenBank were collected to reconstruct phylogenetic trees. Besides, another Vincetoxicum species (V. rossicum) was added to phylogenetic analysis. Although the full-length chloroplast genome of V. rossicum was not available, its raw reads were present in NCBI Sequence Read Archive under accession number SRR934046 (Straub et al., 2013). So, a draft chloroplast genome of V. rossicum was assembled using NOVOPlasty3.7.2. The draft chloroplast genome was incomplete and contained many degenerate bases in the intergenic regions, but its proteincoding genes were complete and could be used for phylogenetic analysis. The protein-coding genes from 21 chloroplast genomes were extracted, aligned separately, and recombined to construct a matrix using PhyloSuite_v1.1.15 . The generated matrix was used to conduct the Bayesian inference (BI) and Maximum likelihood (ML) phylogenies. The BI phylogenies were inferred using MrBayes 3.2.6 (Ronquist et al., 2012) under JC + I + G model, which was determined from the ModelFinder (Kalyaanamoorthy et al., 2017). The ML phylogenies were inferred using IQ-TREE (Nguyen et al., 2015) under an edge-linked partition model for 5,000 ultrafast (Minh et al., 2013) bootstraps, as well as the Shimodaira-Hasegawa-like approximate likelihood-ratio test (Guindon et al., 2010). Transcriptome Features Illumina pair-end sequencing produced 52,502,062 raw reads for V. versicolor, and 51,764,112 clean reads were obtained after removing adaptors and low-quality data ( Table 1). The base quality value Q20 and Q30 reached 97.51 and 93.01%, respectively, which indicated that the produced data could be used for further analysis. A total of 49,801 transcripts were generated in V. versicolor, of which 20,943 unigenes (N50 = 2,128 bp, average length = 1,491 bp) were identified. Most transcripts and unigenes were 1,001-2,000 bp, and the number of transcripts and unigenes over 2,000 bp were 14,787 and 5,443, respectively (Supplementary Figure 1). There were 16,895 unigenes (80.60%) for V. 
versicolor with at least one significant match to the databases discussed earlier, and 3,177 [. . .]. Gene Ontology and Biochemical Pathways Prediction The GO concept aims to use a common vocabulary to annotate homologous genes and protein sequences in various organisms in a flexible and dynamic way. Thus, scientists can query and retrieve genes and protein sequences based on their shared biology (Ashburner et al., 2000). The functional classification of the unigenes in the GO database was assigned to three categories: biological processes, cellular components, and molecular functions (Figure 1). A total of 12,369 unigenes were assigned to the GO classification groups. In the 'Biological processes' group, 'Cellular process' (7,374) was the most abundant term. Regarding 'Cellular components', 'Cell' (4,159) and 'Cell part' (4,159) were the dominant items. In the 'Molecular functions' category, 'Binding' (7,101) was the largest cluster. Interestingly, the most abundant terms in the corresponding GO categories in V. versicolor were highly similar to those in other angiosperm transcriptomes, such as Raphanus (Mei et al., 2016), Glycyrrhiza, and Dipteronia (Zhou et al., 2016). These data suggest that these gene groups are highly expressed and have functional importance in angiosperms. The KO database is an integrated database resource composed of genes, proteins, small molecules, reactions, pathways, diseases, drugs, organisms, and viruses, as well as more conceptual objects, aiming to assign functional meanings to genes and genomes at both the molecular and higher levels (Kanehisa et al., 2017). For the biochemical pathway prediction in the KO database, a total of 6,705 unigenes were assigned to KO pathways (Figure 2). The cluster for 'Translation' (799) represented the largest group, followed by 'Carbohydrate metabolism' (542) and 'Folding, sorting and degradation' (477), which indicated that these pathways might be crucial for V. versicolor development. FIGURE 1 | Functional classification of unigenes of V. versicolor in the GO database; GO terms were annotated according to the three main categories 'biological processes', 'cellular components', and 'molecular functions'. FIGURE 2 | Annotation of unigenes of V. versicolor in the KO database; 'A' denotes 'Cellular processes', 'B' 'Environmental information processing', 'C' 'Genetic information processing', 'D' 'Metabolism', and 'E' 'Organismal systems'. Detection of Transcription Factors Transcription factors play pivotal roles in complex biological processes under multiple environmental signals by regulating gene transcription through binding to specific DNA sequences in the target gene promoters (Honys and Twell, 2004). Transcription factors are generally classified into different families based on their DNA-binding domains (Jin et al., 2014). A total of 1,032 unigenes of V. versicolor were classified into 73 functional families (Table 2). Among these families, bHLH transcription factors were the most abundant (57), followed by AP2/ERF (56). These two transcription factor families deserve attention in ecological adaptation studies of V. versicolor, as they play essential roles in resistance to abiotic stress in plants (Chinnusamy et al., 2003; Yang et al., 2016; Tripathi et al., 2017). Detection of Chloroplast RNA Editing Sites The RNA editing sites in the V. versicolor chloroplast genome were identified based on RNA-Seq data.
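Conceptually, calling these sites amounts to comparing the genomic base with the RNA-Seq consensus at each position. The authors used bowtie 2 and samtools for this; the pure-Python sketch below only illustrates the C-to-U logic on pre-computed per-site base counts (e.g. parsed from a samtools pileup), and the depth and frequency thresholds are ours, not the study's parameters.

def call_c_to_u_sites(genome_seq, pileup, min_depth=20, min_edit_frac=0.5):
    # pileup: {position: {'A': n, 'C': n, 'G': n, 'T': n}} from RNA-Seq reads.
    # A candidate C-to-U editing site has a genomic C but mostly T in the RNA.
    sites = []
    for pos, counts in pileup.items():
        depth = sum(counts.values())
        if genome_seq[pos] == 'C' and depth >= min_depth:
            frac = counts.get('T', 0) / depth
            if frac >= min_edit_frac:
                sites.append((pos, frac))
    return sites  # includes, e.g., the site restoring the ndhD ACG start codon to AUG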
The type and position of the editing sites are shown in Table 4, where 'genome position', 'gene position', and 'codon position' refer to the positions of RNA editing in the chloroplast genome, genes, and codons, respectively. All RNA editing sites identified were C-to-U. A total of 35 RNA editing sites were detected in the V. versicolor chloroplast genome, of which 33 were located in protein-coding regions and the remaining two in a tRNA region (trnN-GUU). All identified RNA editing sites occurred at the first or second position of the codon, resulting in amino acid changes at the transcription level. Among these changes, the change from serine (S) to leucine (L) was the most abundant. We found an interesting phenomenon when checking the annotated genes: the chloroplast ndhD of V. versicolor did not seem to have a start codon at the genome level (the sequence was validated by polymerase chain reaction and Sanger sequencing data). A further comparison of the chloroplast ndhD between species of the genera Vincetoxicum and Cynanchum showed that only the C. auriculatum ndhD started with the standard AUG. In contrast, the ndhD of V. versicolor, V. shaanxiense, and C. wilfordii exhibited ACG instead of AUG at the corresponding codon position (Figure 4). FIGURE 4 | Comparison of chloroplast ndhD in Vincetoxicum and Cynanchum species; the red dotted box represents the amino acid changes at the transcription level. 'Start' represents the start codon, whereas 'T', 'S', 'L', and 'F' represent threonine, serine, leucine, and phenylalanine, respectively. Therefore, we speculated that RNA editing restored the start codon AUG in V. versicolor, V. shaanxiense, and C. wilfordii, as observed in Arabidopsis, tobacco, spinach, Betula, and snapdragon (Neckermann et al., 1994; Wang et al., 2018). Examination of V. versicolor transcripts revealed seven RNA editing sites in ndhD, one of which appeared on the first ndhD codon, causing the codon change from ACG to AUG (this editing site was validated by reverse transcription-polymerase chain reaction; Supplementary Figure 2). This confirmation of the editing-restored ndhD start codon in V. versicolor strongly supported our hypothesis, despite the lack of transcripts from the other two species. To further verify whether the editing-restored ndhD start codon is a common phenomenon in Apocynaceae, the ndhD of 17 Apocynaceae species was compared (Supplementary Table 3). The results showed that almost all of the examined Apocynaceae species exhibited ACG at the first ndhD codon (except for C. auriculatum), suggesting that the editing-restored ndhD start codon is prevalent in Apocynaceae. This kind of editing-restored ndhD start codon has also been reported in other angiosperms, especially in dicots (López-Serrano et al., 2001; Tsudzuki et al., 2001). In Apocynaceae, only C. auriculatum showed the proper AUG start codon in ndhD, suggesting that a mutation in this species corrected the start codon of ndhD at the genomic level after the interspecific differentiation in Cynanchum, as implied by previous studies on Liliaceae and Aloaceae (López-Serrano et al., 2001). Codon Usage Analyses As an important evolutionary feature, the codon usage pattern has been widely investigated in plant chloroplast genomes (Gao et al., 2018; Somaratne et al., 2019; Yang et al., 2019a).
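Both codon-usage statistics used in the following analysis can be stated precisely: RSCU is the observed count of a codon divided by the mean count of its synonymous family, and Wright's (1990) expected ENc curve gives the ENc value if GC3s alone shaped codon usage. A hedged, self-contained sketch follows (not code from Mega X or CodonW; the synonymous codon families for the relevant genetic code must be supplied by the caller).

def rscu(codon_counts, synonym_families):
    # codon_counts: {codon: count}; synonym_families: list of codon lists,
    # one list per amino acid. RSCU = observed / (family total / family size).
    out = {}
    for family in synonym_families:
        total = sum(codon_counts.get(c, 0) for c in family)
        if total:
            mean = total / len(family)
            for c in family:
                out[c] = codon_counts.get(c, 0) / mean
    return out

def enc_expected(gc3s):
    # Wright's (1990) null expectation, with gc3s given as a fraction (0-1):
    # the reference curve in an ENc-versus-GC3s plot.
    return 2 + gc3s + 29.0 / (gc3s**2 + (1 - gc3s)**2)

Genes falling well below the enc_expected curve are the ones interpreted in the text as shaped mainly by natural selection rather than by mutational GC bias.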
To explore the codon usage pattern in the chloroplast genomes of the Vincetoxicum and Cynanchum species, we calculated the number of codons and RSCU of protein-coding genes in the four chloroplast genomes using Mega X (Supplementary Table 4). The 88 shared protein-coding genes were encoded by 26729, 26671, 26716, and 26586 codons in the chloroplast genomes of V. versicolor, V. shaanxiense, C. wilfordii, and C. auriculatum, respectively. AAA encoding lysine was the most commonly used codon in the chloroplast genome of V. versicolor, whereas AUU encoding isoleucine was the most abundant codon in the chloroplast genomes of V. shaanxiense, C. wilfordii, and C. auriculatum. In the four chloroplast genomes, the A/U content in the third codon position was 68.70-69.11%, showing the preference for A/U-ending codons. Codon bias contributes to the efficiency of gene expression and, therefore, is generated and maintained by selection pressure (Hershberg and Petrov, 2008). The bias toward A/U in the third codon position is commonly observed in the angiosperm chloroplast genomes (Cui et al., 2019;Mehmood et al., 2020). This reflects the strong selection pressure that affects the codon usage of the chloroplast genome, thus regulating the chloroplast gene expressions. Additionally, except for UUG, all preferred synonymous codons (RSCU > 1) ended with A/U. The usage of the initial codon AUG and tryptophan UGG had no bias (RSCU = 1), as observed in other angiosperms . The plot of the ENc values against the GC3 values is a useful indicator to explore the factors that affect the codon usage. The predicted values are in the expected curve when the codon usage of a gene is constrained only by the G + C mutation bias. Moreover, the predicted values are much lower than the expected curve when natural selection played a major role in optimizing codon usage bias (Wright, 1990). The four chloroplast genomes shared the analogous codon bias pattern (Figure 5). A small number of protein-coding genes followed the standard curve, suggesting that the codon bias of these genes was caused mainly by the nucleotide composition bias in the third codon position. In particular, more than half of the genes were below the curve, indicating that natural selection predominantly influenced these genes. The photosynthesis-related genes represent most of them, revealing their importance so that strong selection pressure is necessary to keep these genes conserved. However, not all photosynthesisrelated genes were below the curve. These photosynthesis-related genes exhibited discrete distribution, which implies that other factors such as gene expression level can also affect codon bias (Hershberg and Petrov, 2008). Phylogenetic Analysis Complete chloroplast genomes can provide abundant genetic information for understanding the phylogenetic relationships at various taxonomic levels (Huang et al., 2019;Yang et al., 2019b). To explore the phylogenetic relationship between the genera Vincetoxicum and Cynanchum in the Apocynaceae family, the phylogenetic analysis was conducted based on protein-coding genes of chloroplast genomes of 19 Apocynaceae species (Figure 6). ML and BI trees had a highly similar typology at most branches, except that the position of Vincetoxicum hainanense between ML and BI trees was inconsistent. In the ML and BI trees, four Vincetoxicum species (V. versicolor, V. shaanxiense, V. hainanense, and V. 
rossicum) were clustered into a monophyletic branch (bootstrap proportions = 100, posterior probabilities = 1), whereas two Cynanchum species formed another monophyletic branch (bootstrap proportions = 100, posterior probabilities = 1). Phylogeny between Vincetoxicum and Cynanchum was described as {Cynanchum + [Vincetoxicum + (Asclepias + Calotropis)]}, which strongly supports the previous view (Liede-Schumann, 2000;Yamashiro et al., 2004;Alessandro et al., 2007) that there was no close phylogenetic relationship between the genera Vincetoxicum and Cynanchum. CONCLUSION This study was the first effort to characterize the transcriptome and chloroplast genome of V. versicolor. A total of 49,801 transcripts were generated, and 20,943 unigenes were obtained from V. versicolor. The GO classification showed that "Cellular process, " "Cell, " "Cell part, " and "Binding" were the most abundant terms in the corresponding categories. KO pathway prediction indicated that the "Translation" cluster represented the largest group. A total of 1,032 unigenes from V. versicolor were classified into 73 functional transcription factor families. The bHLH and AP2/ERF transcription factors were significantly abundant, suggesting that they should be carefully evaluated in the V. versicolor ecological adaptation studies. The comparative analysis showed that the Vincetoxicum and Cynanchum chloroplast genomes were highly conserved in terms of gene order, gene content, and AT content. They shared an analogous codon bias pattern in which their protein-coding genes exhibited a preference for A/U-ending codons. More than half of the chloroplast genes were predominantly influenced by natural selection pressure, and photosynthesis-related genes accounted for most of them. The RNA-Seq data revealed 35 editing sites in the chloroplast genome of V. versicolor, and one of which restored the ndhD start codon in V. versicolor. Phylogenetic analysis based on ML and BI trees strongly supported the view that Vincetoxicum and Cynanchum were two distinct genera. Thus, Vincetoxicum should be regarded as an independent genus in the Apocynaceae family. Overall, this study provided valuable insights into the evolution and phylogeny of V. versicolor. DATA AVAILABILITY STATEMENT The dataset generated for this study can be found in NCBI Sequence Read Archive (SRA) under the accession numbers SRR10838756 (DNA) and SRR10838799 (RNA). The assembled chloroplast genome of V. versicolor can be found in GenBank under the accession number MT558564. AUTHOR CONTRIBUTIONS XT and DW designed the study and revised the manuscript. XY assembled, annotated, analyzed the chloroplast genome and transcriptome, and drafted the manuscript. XY and WW performed the experiment. XY, HY, and XZ analyzed the data. All authors contributed to the article and approved the submitted version. FUNDING This work is supported by grants from the State Key Laboratory of Component-based Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin, 300193, China.
Effect of Rooting Media, Cutting Types and Watering Frequency on Dry Matter Production of Long Pepper (Piper cappense) at Jimma Long pepper cuttings are traditionally planted in a trench and covered with a plastic sheet to obtain a large number of transplantable seedlings. However, transplant success is often low; hence, it is common to retain cuttings for more than a year to synchronize their stage of transplanting with the start of the main rainy season. This incurs extra costs for nursery operation and maintenance, a gap that can be alleviated by identifying the best growing medium, the most suitable cutting type, and the appropriate watering frequency. The present study was conducted at Jimma Agricultural Research Center (JARC) to investigate the influence of rooting media, cutting types, and watering frequency on dry matter production of long pepper cuttings. Four media types composed of subsoil (ss), topsoil (ts), farmyard manure (fym), and fine sand, in the following proportions: 2 topsoil + 1 farmyard manure + 2 fine sand (recommended for coffee cuttings), 6 topsoil + 3 farmyard manure + 2 fine sand (recommended for coffee nurseries), 1/3 upper subsoil + 2 topsoil + 1 farmyard manure + 1 fine sand (recommended as tea medium), and 2 topsoil + 1 farmyard manure + 1 fine sand (recommended for coffee nurseries); three cutting types, softwood (sw), semi-hardwood (shw), and hardwood (hw); and four levels of watering frequency were combined in a split-plot design with three replications, in which the four watering frequency levels were assigned to main plots, the four media types to subplots, and the three cutting types to sub-subplots, giving a factorial arrangement (4 x 4 x 3) with 48 treatments. Data were collected on root and shoot dry matter production six months after planting. Analysis of variance showed that average shoot dry weight was significantly influenced by watering frequency, rooting medium, and cutting type. The main effects of watering frequency and rooting medium, the interaction effects of watering frequency with rooting medium, watering frequency with cutting type, and medium with cutting type, and the three-way interaction effect of watering frequency, rooting medium, and cutting type were very highly significant (P < 0.001). Attention should also be given to selecting the cutting type and its position on the stock plant while preparing the cuttings. Peppers such as black pepper (Piper nigrum L.) are utilized for seasoning and provide an oil that is, to a certain extent, used as an aromatic in the drinks industry and in medicine. In Ethiopia, field surveys have shown that both the utilized and the wild species of long pepper grow in the understory of the natural forest areas of the country. The crop is a short shrub that can be disturbed if appropriate cultural practices are not applied. Long pepper should be planted during the rainy season, since much undergrowth is expected in the field. When the dry season commences, mulching and sometimes watering are advisable. This spice grows under the shade of natural forest; when the shade becomes very dense, some of the branches need to be removed to let light in for effective flowering, pollination, fruiting, and maturity. Long pepper grows under natural forest but, from experience, this plant has a tendency to move towards the open areas or the margins of the forest shade.
Long pepper cuttings are traditionally planted in a trench and covered with a plastic sheet to obtain a large number of transplantable seedlings. However, transplant success is often low; hence, it is common to retain cuttings for more than a year to synchronize the transplanting time with the beginning of the rainy season. This requires extra costs for nursery operation and maintenance, which can be alleviated by identifying the best growing media, the most suitable cutting type, and the appropriate watering frequency. However, no research work has been carried out on cutting propagation of long pepper, and hence much of the information on nursery practices and improved long pepper propagation technologies is lacking in the growing areas of Ethiopia. Therefore, this study was designed to address the above-mentioned gaps. Description of the Study Area The study was conducted at Jimma Agricultural Research Center (JARC), located 365 km southwest of Addis Ababa and 12 km from Jimma town. The nursery site is located at 7°40' N latitude and 36°47' E longitude, at an altitude of 1,753 m above sea level. It is situated in the tepid to cool humid mid-highlands of southwestern Ethiopia. The long-term (ten-year) mean annual rainfall of the area is 1,639 mm, with maximum and minimum air temperatures of 26.6°C and 13.9°C, respectively. According to JARC 2010 meteorological data, the relative humidity of the area ranges from 35 to 95 percent. Experimental Treatments The experimental materials used in this study included rooting media composed of topsoil, subsoil, farmyard manure, and fine sand; stem cuttings obtained from a long pepper accession of the 1979 collection batch; and watering frequency treatments. Rooting Media Proportions (Types) The basic media used for the preparation of the potting mixes were topsoil, subsoil, farmyard manure, and fine river sand. Topsoil was collected from the upper 25 cm layer of uncultivated land, and subsoil from the layer next to the topsoil, at about 30-35 cm depth, was also collected from the same area. Well-decomposed animal dung was collected from a private dairy farm around Jimma town; these materials were sun-dried, crushed, and sieved through a mesh before mixing with the other media components. Finally, four rooting media types with the following proportions (v/v) were prepared. Preparation of Cutting Types Long pepper plants already established in the clone garden of Tepi Agricultural Research Center were used as the source of stem cuttings, taken from vertically growing orthotropic shoots. Uniform and healthy cuttings with 2-4 nodes were harvested early in the morning, when the shoots and leaves were turgid, from the softwood (upper part of the shoot), semi-hardwood (middle part of the shoot), and hardwood (nearer to the main stem), using sharp pruning shears sanitized with alcohol. The cuttings were placed immediately in a plastic bag to prevent dehydration and then transported to the propagation site, where the whole operation was carried out under shaded conditions to provide protection against sunlight. Double-node cuttings of softwood, semi-hardwood, and hardwood were prepared by cutting the shoot just above each node and removing the woody and young parts from the lower and upper ends of the shoot, respectively. The leaves on the cuttings were trimmed off completely to reduce the rate of transpiration. A slant cut at the base of each cutting was made before setting it in the rooting medium.
To maintain internal turgidity, all the cuttings were kept in a plastic bag. Finally, they were inserted to a depth of 3-4 cm into the potted media in February 2012 and watered up to field capacity. A polythene sheet was buried along the edges of the bed to provide a humidified environment for the cuttings.
Watering Frequency
The quantity of water applied to a plot at a time (per irrigation) was equivalent to the amount required to replenish or maintain the moisture content of the growth medium at field capacity. Entry of water into adjacent plots upon irrigating a plot was controlled by careful application using a fine-holed standard watering can. Water from external sources, particularly rainfall, was excluded by a white transparent plastic film spread over wooden poles and string to cover the main plot; the plastic film was kept closed at all times except during the watering hours of the day.
Propagator Structure
Eucalyptus wood, elephant grass and 30-micron-thick white plastic sheet were used to construct the propagator. Raised nursery beds of 1.2 m width x 10 m length were prepared to arrange the treatments. A simple and inexpensive non-mist propagator was then made from a wooden frame (eucalyptus posts). The frame was covered with the 30-micron-thick white translucent plastic sheet.
Figure 1. Propagator structure framework.
Artificial shade supported by wooden poles was erected at a height of 2 meters above ground level and covered with elephant grass to provide approximately 70 to 75% shade (Behailu et al., 2006); both sides of the propagator were also protected with elephant grass to avoid direct sunlight.
Experimental Design and Treatment Layout/Arrangement
The experiment was conducted in the nursery at Melko (JARC) using stem cuttings of long pepper in a split plot design with 3 replications, where four watering frequency levels were assigned to main plots, four media types to sub plots and three cutting types (soft wood, semi hard wood and hard wood) to sub-sub plots, giving a factorial arrangement (4 x 4 x 3) with 48 treatments (Table 1). Each treatment contained 12 cuttings, and 1728 cuttings were used for the experiment. The cuttings were inserted directly into the media filled in 16 cm wide and 25 cm long black polyethylene bags and randomly assigned within the main plots of the propagator in two rows, with 10 cm spacing between treatments.
After Planting Care
To maintain the required levels of moisture, temperature and relative humidity, water was applied manually using a 10-liter plastic watering can according to the assigned watering frequency (every week, every two weeks, every three weeks or every month), opening and closing back the polyethylene sheet each time. Daily minimum and maximum temperatures inside the propagator were recorded with a thermometer; the ranges were 22-23 °C, 20-21 °C, 21-22 °C and 29 °C under watering intervals of every week, every two weeks, every three weeks and every month, respectively. The relative humidity (RH) inside the propagator was also recorded daily, averaging 66-70%, 80-81%, 81-83% and 87% under watering intervals of every week, every two weeks, every three weeks and every four weeks, respectively.
Data Collection
Destructive data were collected 185 days after planting. Rooting percentage was determined based on all surviving cuttings per plot, and the average was taken.
Five sample cuttings selected from each plot were separated into root and shoot parts and evaluated for the different parameters. The parameters measured and the methods used are presented below.
Soil Analysis
Prior to the nursery experiment, soil was sampled from each rooting media prepared before planting the cuttings and analysed for physical and chemical properties. The analysis was carried out at the JARC soil laboratory following the procedure outlined by Sahlemedhin and Taye (2000).
Water holding capacity (%): calculated using the following formula:
WHC (%) = (weight of water in the saturated media (g) / weight of the saturated media (g)) x 100
Chemical Properties
pH: the pH of the rooting media was determined with a pH meter from a 1:2.5 soil-water suspension. Organic carbon (%): the organic carbon content of the soil was determined by the wet combustion procedure of the Walkley and Black (1934) method. Total nitrogen (%): the total nitrogen content of the rooting media was determined by the wet-oxidation procedure using the modified Kjeldahl method. Available phosphorus (ppm): determined with 0.5 M sodium bicarbonate extraction solution (pH 8.5) by the method of Olsen (1954). Available potassium (ppm): determined using atomic absorption or flame photometry.
Dry Matter Parameters
Root dry weight (g): after drying the roots in an oven (at 100 °C to constant weight), weight was measured on a sensitive balance and the average calculated for each treatment. Root to shoot dry weight ratio: determined by dividing the dry weight of the root by that of the shoot for each sample cutting, averaging over each treatment. Shoot dry weight (g): after drying the shoots in an oven (at 100 °C to constant weight), weight was measured on a sensitive balance and the average calculated for each treatment.
Data Analysis
Data collected for the various root, shoot and dry matter parameters were checked against the assumptions of ANOVA. The results are presented and discussed on a per-plant basis. Percentage data (percentage of sprouting) were transformed using the arcsine transformation before analysis. Data were analyzed using SAS software (SAS version 9.2, 2008). Mean comparisons were performed using Duncan's Multiple Range Test (DMRT). A significance level of 5% was used for all statistical analyses.
RESULTS AND DISCUSSION
Dry Matter Production
3.1. Shoot Dry Weight (g)
Analysis of variance showed that the average dry shoot weight of long pepper stem cuttings was significantly influenced by watering frequency, rooting media and cutting type. The main effects of watering frequency and rooting media, the two-way interactions of watering frequency with rooting media, watering frequency with cutting type and media with cutting type, and the three-way interaction of watering frequency, rooting media and cutting type were very highly significant (P < 0.001). However, the main effect of cutting type did not show a significant (p > 0.05) difference (Table 3). (Table 3 notes: * = significant at 5%; NS = not significant; p < 0.05, significant at 5%; p < 0.01, significant at 1%.)
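The transformation and ANOVA pipeline described in the Data Analysis section can be sketched as follows. This is a minimal illustration in Python rather than the SAS code used by the authors: it enumerates the 48 treatment combinations, applies the arcsine transformation to synthetic percentage data, and fits a simplified fixed-effects three-way ANOVA. Factor labels and data are illustrative assumptions, and a full split-plot analysis would additionally require separate error terms for the main-plot and sub-plot strata.

```python
# Sketch (not the authors' SAS code): 4 x 4 x 3 factorial layout, arcsine
# transformation of percentages, and a simplified three-way ANOVA on
# synthetic data.
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

watering = ["1wk", "2wk", "3wk", "4wk"]                    # main-plot factor
media = ["2ts1fym2fs", "6ts3fym2fs", "tea", "2ts1fym1fs"]  # sub-plot factor
cutting = ["sw", "shw", "hw"]                              # sub-sub-plot factor

treatments = list(itertools.product(watering, media, cutting))
assert len(treatments) == 48
# 48 treatments x 12 cuttings x 3 replications = 1728 cuttings in total.

rng = np.random.default_rng(0)
rows = []
for rep in range(3):
    for w, m, c in treatments:
        pct = rng.uniform(10, 90)   # synthetic rooting percentage per plot
        rows.append({"rep": rep, "water": w, "media": m, "cut": c,
                     # arcsine (angular) transformation of a percentage
                     "y": np.degrees(np.arcsin(np.sqrt(pct / 100.0)))})
df = pd.DataFrame(rows)

model = ols("y ~ C(rep) + C(water) * C(media) * C(cut)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```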
The highest average dry shoot weights per cutting (14.7 g and 14.3 g) were recorded under a weekly watering interval in the 2ts:1fym:1fs media proportion and under watering every three weeks in the same rooting media and cutting type, respectively (Table 6). The lowest average dry shoot weight per cutting (5.33 g) was recorded for semi hard wood cuttings grown in the 2ts:1fym:1fs media proportion with a monthly watering interval.
Root Dry Weight (g)
Analysis of variance showed that the average dry root weight of long pepper stem cuttings was significantly influenced by watering frequency, rooting media and cutting type. The analysis showed highly significant (P < 0.001) differences for the interactions of watering frequency with rooting media, media with cutting type, and watering frequency with rooting media and cutting type. However, no significant difference (P > 0.05) was recorded for the main effects of watering frequency, rooting media and cutting type, or for the interaction of watering frequency with cutting type (Table 4). The highest average dry root weight per cutting (2.7 g) was recorded under watering every two weeks for hard wood cuttings grown in the 6TS:3FYM:2FS media proportion. Likewise, within the interaction of media and cutting type, the highest average dry root weight per cutting (2.13 g) was recorded for semi hard wood cuttings grown in the (1/3 SS + 2TS:1FYM:1FS) media proportion (Table 6). The lowest average root weight (0.5 g) was recorded for a monthly watering interval with soft wood cuttings grown in the (2TS:1FYM:1FS) media proportion (Table 6).
Root to Shoot Dry Weight Ratio
The root to shoot ratio was highly significantly (p < 0.01) influenced by the interaction of watering frequency, rooting media and cutting type. The main effect of rooting media and the interactions of watering frequency with rooting media, rooting media with cutting type, and the three-way interaction of watering frequency, rooting media and cutting type were highly significant. However, the main effect of cutting type and the interaction of watering frequency with cutting type showed no significant difference. The highest root to shoot ratio (0.33) was registered for watering every three weeks with semi hard wood cuttings grown in the (2TS:1FYM:2FS) media proportion (Table 6). The lowest root to shoot ratio (0.04) was recorded for a monthly watering interval with soft wood cuttings grown in the (2TS:1FYM:1FS) media proportion (Table 6).
2021-08-27T16:50:04.028Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "dd5883dc1a5a1fcac0ef790523f5187ec6a84074", "oa_license": null, "oa_url": "https://doi.org/10.20431/2454-6224.0704003", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "61f69165eb019bfc0e200f8b3e56287239e4ea0e", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
230487657
pes2o/s2orc
v3-fos-license
Prevalence of serum antibodies to Toxoplasma gondii in free-ranging cats on Tokunoshima Island, Japan
The prevalence of Toxoplasma gondii infection in free-ranging cats on Tokunoshima Island was assessed by testing 125 serum samples using an anti-T. gondii IgG indirect enzyme-linked immunosorbent assay. The overall seropositivity rate was 47.2% (59/125). Seropositivity rates in cats with body weight >2.0 kg (57.4%) were significantly higher than in those with body weight ≤2.0 kg (12.5%, P<0.01). Analysis of the number of seropositive cats by settlement revealed the presence of possibly infected cats in 17 of 23 settlements, indicating the widespread prevalence of T. gondii on the island. This is the first study to show the seroprevalence of T. gondii in free-ranging cats on Tokunoshima Island. The information revealed in this paper will help to prevent the transmission of T. gondii among cats, as well as to wild and domestic animals and humans on the island. Toxoplasma gondii is a zoonotic protozoan parasite able to infect diverse species of warm-blooded animals, including humans. Oocysts of T. gondii are shed in the feces of infected domestic cats and wild felids, the definitive hosts, and ingestion of oocyst-contaminated feces, soil, and water is one of the main routes of infection in both cats and the other animals that act as intermediate hosts. Since oocysts are distributed widely by infected cats, and T. gondii can infect various hosts in various ways, the impact extends to wildlife and livestock production. Tokunoshima Island in the Nansei Islands is located in a subtropical area, has a surface area of approximately 247.85 km² and about 23,600 residents [7]. The forest area of Tokunoshima Island harbors many endemic mammals, such as the Amami rabbit (Pentalagus furnessi), Ryukyu long-haired rat (Diplothrix legata), and Tokunoshima spiny rat (Tokudaia tokunoshimensis). Conserving endemic animals is currently one of the main issues on this island. In addition, a case of T. gondii infection in the endemic Amami spiny rat (T. osimensis) [18] and suspected toxoplasmosis of the Amami rabbit [9] have been reported on the adjacent Amami-Oshima Island. Free-ranging cats (stray, feral, and owned outdoor cats) are frequently found not only in town, but also in forest areas and farmlands on the island [10]. This study surveyed T. gondii infection in free-ranging cats on Tokunoshima Island for the first time, as a first step towards inferring the distribution of the infection on the island. One hundred and twenty-five serum samples of free-ranging cats were provided through the population control program on Tokunoshima Island, carried out by the local Tokunoshima government and the Ministry of the Environment. In the program, free-ranging cats were captured in traps in all three towns on the island: Amagi-cho, Isen-cho and Tokunoshima-cho. Captured cats were either released or adopted by new owners after being neutered. Information on the samples, such as the date of capture, area of capture, sex, and body weight, was also provided. All serum samples were stored at −20°C until further analysis. Ten serum samples from specific pathogen free (SPF) cats were used as negative controls. Eight serum samples from naturally infected cats from Okinawa Island, previously confirmed to be infected using commercial latex agglutination test (LAT) kits (Toxocheck-MT; Eiken Chemical, Tokyo, Japan), were used as positive controls.
Serum anti-T. gondii antibody was measured by indirect enzyme-linked immunosorbent assay (ELISA). Antigen was prepared from T. gondii RH Ankara strain tachyzoites as described previously [3]. Briefly, tachyzoites were obtained from the peritoneal fluids of BALB/c mice infected intraperitoneally 72 hr earlier. After washing three times with sterile phosphate buffered saline (PBS, pH 7.4), the pellet was suspended in 20% sodium dodecyl sulfate (SDS), left for 30 min, and then centrifuged at 14,000 g for 5 min. The supernatant was collected and kept at −20°C. The tests were conducted following the method reported previously [2], with some modifications. Ninety-six-well microtiter plates were coated with 100 µl per well of an antigen suspension containing 3.0 × 10⁵/ml T. gondii RH Ankara strain tachyzoites in PBS and incubated overnight at 4°C. All other incubations were performed at room temperature. Plates were washed with PBS-T (PBS-0.05% Tween 20) and blocked for 1 hr with 200 µl of 1% bovine serum albumin (BSA) in PBS-T per well. The serum samples were diluted 1:100 in dilution buffer (0.1% BSA in PBS-T) and applied at 100 µl per well for 1 hr after washing. Next, plates were washed and 100 µl of goat anti-cat IgG horseradish peroxidase (HRP)-conjugated antibodies (Life Technologies, Frederick, MD, USA), diluted 1:20,000 in dilution buffer, were applied. After 1 hr of incubation and washing, a 10-min reaction with 100 µl of substrate (TMB Microwell Peroxidase Substrate System; SeraCare Life Sciences, Milford, MA, USA) was carried out and stopped by adding 50 µl of 2 N H₂SO₄ per well. The absorbance was measured at 450 nm with a microplate reader (SpectraMax Paradigm, Molecular Devices, Sunnyvale, CA, USA). All samples were analyzed in duplicate. The optical density (OD) values of each sample, the negative controls, and the positive controls were calculated. The cut-off value was defined as the mean OD value of the negative controls plus three standard deviations. Fisher's exact test was performed to detect differences in positivity rates between sexes (male and female), weight classes, and towns. The level of maturity of free-ranging cats was judged conventionally by body weight: cats weighing 2.0 kg or less were categorized as young, while cats over 2.0 kg were categorized as adults. P-values less than 0.01 were considered significant.
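The cut-off rule and the Fisher's exact test just described are simple enough to sketch numerically. The snippet below is a minimal illustration in Python (the study used R): the OD values are synthetic and the 2 × 2 table contains assumed counts for illustration only, not the study's raw data.

```python
# Minimal sketch of the seropositivity workflow described above.
# OD values and the 2 x 2 table are synthetic/illustrative.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)

# Synthetic OD450 readings: 10 SPF negative controls and 125 field samples.
neg_controls = rng.normal(0.25, 0.06, size=10)
samples = np.concatenate([rng.normal(0.25, 0.06, size=66),   # "negative-like"
                          rng.normal(0.90, 0.20, size=59)])  # "positive-like"

# Cut-off = mean OD of negative controls + 3 standard deviations.
cutoff = neg_controls.mean() + 3 * neg_controls.std(ddof=1)
positive = samples > cutoff
print(f"cut-off = {cutoff:.3f}, seropositive = "
      f"{positive.sum()}/{len(samples)} ({100 * positive.mean():.1f}%)")

# Fisher's exact test on a hypothetical adult-vs-young 2 x 2 table
# (rows: adult, young; columns: positive, negative). Counts are assumed.
table = [[55, 41],
         [2, 14]]
odds, p = fisher_exact(table)
print(f"odds ratio = {odds:.2f}, P = {p:.4f} (significant if P < 0.01)")
```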
All data analysis and map drawing of Tokunoshima Island with GPS points were performed using R version 3.6.3 [16]. To show the representative points of settlements, the map and the land-use information were obtained from the National Spatial Planning and Regional Policy Bureau, Ministry of Land, Infrastructure, Transport and Tourism of Japan [14]. To confirm the diagnostic accuracy of the ELISA assay, western blotting was performed on 66 randomly chosen serum samples. Crude tachyzoite lysate was prepared as previously reported [1]. T. gondii PLK strain maintained in Vero cell cultures was collected and passed through a 27 G needle three times. The cells were pelleted at 2,000 rpm for 10 min, washed in PBS, and then passed through a 5 µm filter. Parasites were then repelleted at 2,000 rpm for 10 min and lysed with M-PER™ Mammalian Protein Extraction Reagent (Thermo Fisher Scientific, Waltham, MA, USA). Crude tachyzoite lysate was diluted 3:1 with SDS sample buffer and boiled at 95°C for 5 min. Ten µg of sample per well was loaded on a 12.5% polyacrylamide gel and separated by electrophoresis. Antigens were then transferred onto a polyvinylidene difluoride western blotting membrane (Roche Diagnostics GmbH, Mannheim, Germany) using the Trans-Blot SD Semi-dry Transfer Cell (Bio-Rad, Hercules, CA, USA) and blocked in Block Ace (DS Pharma Biomedical, Osaka, Japan) overnight at 4°C. After washing in PBS-T, membranes were incubated in cat serum diluted 1:500 in dilution buffer (1% BSA PBS-T) at room temperature for 1 hr. Membranes were then incubated in goat anti-cat IgG HRP-conjugated antibodies (Life Technologies), diluted 1:10,000 in dilution buffer, for 1 hr after washing. Next, membranes were washed, and chemiluminescent images were developed using Amersham ECL Western Blotting Detection Reagent (GE Healthcare UK Ltd., Buckinghamshire, UK) and a LAS-3000 mini (Fujifilm, Tokyo, Japan). The overall seropositivity rate in free-ranging cats on Tokunoshima Island was 47.2% (59/125), with the cut-off value set at 0.438 (Fig. 1A). The seropositivity rate was then compared by sex, maturity, and town (Table 1). No significant difference was observed between males and females (P=0.35) (Fig. 1B). Adult cats had a significantly higher seropositivity rate (57.4%) than young cats (12.5%, P<0.01) (Fig. 1C). No significant difference was found in the seropositivity rates between the 3 towns (P=0.85). The three towns on Tokunoshima Island can be divided into 44 settlements. A total of 125 samples were obtained from 23 settlements. In 17 of those 23 settlements, 1 or more samples tested positive (Fig. 2). Thirty-seven of 66 samples were positive (56.1%) by western blotting analysis (Fig. 3 and Table 2). The ELISA and western blotting results were reasonably concordant (Table 3), and the seroprevalence based on ELISA using PLK strain antigen, with positive/negative controls and randomly selected samples, was substantially concordant with the result of ELISA using RH Ankara strain antigen (data not shown). This study showed a considerably high prevalence of anti-T. gondii IgG in free-ranging cats, indicating some risk of T. gondii infection to other animals and humans on Tokunoshima Island. This may be the highest seroprevalence among all studies of cats conducted previously in Japan. A seropositivity rate of 5.4% (78/1,477) was reported in cats that visited animal hospitals (Kumamoto) in 1997 [15]. The seroprevalence in free-ranging cats in Japan was reported as 9% in 2013-2017 on Amami-Oshima Island [12] and 13.4% in 1998-1999 in Chiba prefecture [6]. A study in Tokachi subprefecture, Hokkaido, in 2013-2014 found a significantly higher seroprevalence in cats allowed to roam outdoors or reared on farms (30.0%) than in cats reared indoors (7.5%) [17]. Various seroprevalence values, including much higher results in free-ranging cats, have been reported from other countries, such as La Rioja and Madrid in Spain (36.4%) [13], Tasmania, Australia (84.2%) [5], and Izmir, Turkey (34.4%) [2]. As those studies suggest, the tendency for free-ranging cats to have higher seropositivity than house-kept cats might also hold on Tokunoshima Island. A dietary analysis using fecal samples obtained from free-ranging cats on Tokunoshima Island showed that 17.7% and 30.8% of those samples contained forest-living species, such as the Amami rabbit, Ryukyu long-haired rat, and Tokunoshima spiny rat, and farmland animals, such as the black rat (Rattus rattus) and shrews (Crocidura spp.), respectively [10].
Thus, cats that roam around both forest areas and farmlands may have opportunities to bring oocysts into, and contaminate, such environments. In this study, the seropositivity rate in adult cats was significantly higher than in young cats. This result is consistent with studies reporting that seropositivity rates in adult cats are higher than those in kittens or juvenile cats, reflecting more opportunities for exposure to T. gondii [5,11,13,17]. Anti-T. gondii IgG persists for a long period after infection, at least 6 years in cats [4]. The results also showed that possibly infected free-ranging cats were found in 17 of the 23 settlements (Fig. 2), suggesting that T. gondii infection occurs widely on the island and that the introduction of T. gondii to this island is not a recent event. Although it is not clear how long cats have been established on the island, the cattle industry, which creates population sources of free-ranging cats, has flourished for the last 50 years [8], suggesting that free-ranging cats have been abundant for the last several decades. A case of T. gondii infection was reported in a deceased Amami spiny rat on Amami-Oshima Island, revealing the risk of T. gondii infection to wildlife [18]. No significant difference was found in the seropositivity rate between the three towns on the island, nor between samples from cats captured in the forest area and the residential area including farmland (data not shown), indicating that the forest area, as well as the residential area, might be contaminated with oocysts shed by free-ranging cats. Interestingly, the reported seroprevalence of T. gondii in free-ranging cats was much lower on Amami-Oshima Island [12], which is located about 42 km northeast of Tokunoshima Island (nearest coastline distance). Amami-Oshima Island (712 km²) is the largest of the Amami Islands, more than 2.5 times larger than Tokunoshima Island. The disagreement in seroprevalence despite their proximity might result from geographical features or cat density, given the smaller size of Tokunoshima Island. Furthermore, Tokunoshima Island relies more on livestock production than Amami-Oshima Island. Cattle barns are possibly one of the major drivers of the free-ranging cat population, because a study on Tokunoshima Island suggested that feeding cats in cattle barns contributes to maintaining that population [8]. To identify the reasons behind the high seroprevalence in free-ranging cats on Tokunoshima Island, more comprehensive studies that include analysis of natural and artificial geographical features are required. Because of the wide host range and complicated routes of infection, a One Health viewpoint, which considers humans, animals, and wildlife integrally, is essential for epidemiologic research on T. gondii transmission.
2020-12-31T09:04:01.080Z
2020-12-30T00:00:00.000
{ "year": 2020, "sha1": "f1f4193b84d1a016d011939b8c95602f2da86f01", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/jvms/83/2/83_20-0512/_pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "378ee13948a335e28965c5410cdbf3223ba17a5a", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
257524314
pes2o/s2orc
v3-fos-license
Epoxidized and Maleinized Hemp Oil to Develop Fully Bio-Based Epoxy Resin Based on Anhydride Hardeners
The present work aims to develop thermosetting resins using epoxidized hemp oil (EHO) as a bio-based epoxy matrix and a mixture of methyl nadic anhydride (MNA) and maleinized hemp oil (MHO) in different ratios as hardeners. The results show that the mixture with only MNA as a hardener is characterized by high stiffness and brittleness. In addition, this material is characterized by a long curing time of around 170 min. On the other hand, as the MHO content in the resin increases, the mechanical strength properties decrease and the ductile properties increase. Therefore, the presence of MHO confers flexible properties on the mixtures. The thermosetting resin with balanced properties and high bio-based content was determined to contain 25% MHO and 75% MNA. Specifically, this mixture showed 180% higher impact energy absorption and a roughly threefold lower flexural modulus than the sample with 100% MNA. This mixture also cures in significantly shorter times than the mixture containing 100% MNA (around 78 min versus 170 min), which is of great interest at an industrial level. Therefore, thermosetting resins with different mechanical and thermal properties can be obtained by varying the MHO and MNA content.
Introduction
Thermosetting polymers, particularly epoxy resins, are widely used in engineering as adhesives and as polymeric matrices for composite materials. This product family currently accounts for 14% [1] of the global polymer market, being used in industrial sectors such as the automotive, aircraft and marine industries, civil infrastructure, electronic components and sporting goods, among others [2,3]. Epoxy resins are generally characterized by excellent mechanical, thermal, adhesive and solvent resistance properties [4]. Aside from these properties, epoxy resins have several drawbacks. On the one hand, most of them are obtained from petrochemical products, which, combined with the difficulty of recycling them, makes reducing the carbon footprint produced during their life cycle very difficult [5]. On the other hand, epoxy resin is characterized by inherent brittleness, due to its highly crosslinked structure, and by moisture sensitivity [6]. Acting on the first point, numerous studies have focused on the total or partial replacement of both main components, the conventional synthetic reinforcement and the resin, by natural fibers or bio-based polymers [7-9]. The fibers most commonly used to develop green composites are based on ramie, jute, sisal, kenaf, hemp or flax [8]. These studies cover a wide range of topics, from manufacturing techniques, material properties, fiber pre-treatments and coatings to possible applications. On the other hand, petrochemical epoxy resins can be replaced by bio-based epoxy resins formulated from precursors such as furans, tannins, cardanol, natural rubber and especially vegetable oils (VO) [10-12]. In this context, VO are of particular interest because they are sustainable, inexpensive and easily substitutable for other petrochemical epoxy resins in specific low-load-bearing applications, in addition to alleviating the fragility problems mentioned above [13]. VO are formed by a long-chain structure based on unsaturated triglycerides (C=C), which can easily be modified with active molecules such as oxirane oxygen, maleic anhydride, hydroxyl or acrylate groups.
These modified VO can be employed as plasticizers, chain extenders, compatibilizers, coatings and thermosetting resins [14-17]. Epoxidation is one of the most widely used processes in the chemical modification of VO, with epoxidized soybean (ESO) and linseed (ELO) oils commercially available. These can be used as reactive diluents instead of styrene to produce vinyl esters, or in applications such as coatings, automotive and steel primers, thermal insulation, glues and adhesives [18]. However, these oils are also used in food production, which may raise an ethical issue given the growing demand for bio-based feedstock in engineering applications. Therefore, to alleviate this conflict of interest, it is essential to focus on fast-growing non-food oil crops as feedstock for bioresins. Hemp seeds contain between 28 and 35% oil, depending on the geographical region of cultivation, the seed variety and the climatic conditions [19]. The annual export of hemp seed in Europe in 2021 amounted to 34,412 tonnes, with the Netherlands being the European country that exported the most hemp seed that year, with a total of 27,435 tonnes [20]. In addition, hemp seed oil, due to its high content of unsaturated fatty acids, allows a wide variety of chemical modifications, such as epoxidation, maleinization or acrylation [21]. Hemp as an industrial raw material is therefore promising for the manufacture of bioresins. As can be observed in Table 1, the theoretical oxirane oxygen content of hemp oil, a parameter related to the amount of monounsaturated and polyunsaturated fatty acids (MUFA and PUFA, respectively), is above 10.53%, making it one of the VO with the most significant potential. Currently, there are few studies in the literature reporting the use of hemp oil in the manufacture of thermosets. Manthey et al. [22], responding to the need for new bio-based materials, developed biocomposites made from epoxidized hemp oil with jute fiber as reinforcement and compared them with samples containing commercial epoxidized soybean oil. They showed that the samples with epoxidized hemp oil had slightly better mechanical, water absorption and dynamic mechanical properties than the samples manufactured with epoxidized soybean oil. Thus, when mixing epoxidized hemp oil with jute fibers, the resulting material is a serious competitor to commercially produced epoxidized soybean oil in biocomposite applications. On the other hand, to convert epoxidized vegetable oil into a crosslinked thermoset material, hardeners such as anhydrides or amines are used [10,17]. Anhydrides are the most commonly employed curing agents, but they can present problems due to susceptibility to hydrolytic degradation [31]. This aspect is directly related to the moisture sensitivity of the epoxy resin and can be aggravated if the epoxy resin is used to develop composites with natural fibers, which can have moisture contents between 3 and 13% [6]. Thus, to avoid this problem and to find environmentally friendly alternatives to these petrochemical hardeners, vegetable oils modified by a maleinization process can be employed. For this purpose, the unsaturations present in the triglyceride are reacted with maleic anhydride (MA) through a combination of Diels-Alder reactions and "ene" reactions.
The maleic groups present in maleinized vegetable oil can react with the epoxy groups of epoxidized vegetable oil (EVO), resulting, with the right combination of accelerators and catalysts, in a crosslinked structure. Some previous studies have looked at maleinized linseed oil as a hardener [15] and maleinized castor oil to produce biopolymers [4]. Still, there is no literature related to the use of maleinized hemp oil. Therefore, the main aim of the present study is to develop and optimize an epoxy resin based on epoxidized hemp oil, using different contents of maleinized hemp oil as a potential substitute for the petroleum-derived methyl nadic anhydride (MNA). By using different proportions of maleinized hemp oil and MNA, a wide range of mechanical and thermal properties can be obtained. Furthermore, these ratios ultimately define the bio-based carbon fraction of the final material. Although the use of epoxidized hemp oils as a basis for future bio-based resins is still underexplored, the main novelty of this study is the use of maleinized hemp oil (MHO) as a bio-based hardener. The use of MHO opens the door to the substitution of current petrochemical-based hardeners, which are based on molecules such as anhydrides or acids, resulting in almost 100% bio-based thermoset resins.
Materials
The hemp seed was obtained from a local market in Callosa de Segura. A CZR-309 press machine (Changyouxin Trading Co., Zhucheng, China) was used at room temperature to extract the hemp seed oil (HSO). The epoxidation process was carried out with acetic acid (99.7%), sulfuric acid (97%) and hydrogen peroxide (30% v/v) supplied by Sigma Aldrich (Madrid, Spain). The maleinization process was performed by adding maleic anhydride (MA) with purity >98%, supplied by Sigma Aldrich (Madrid, Spain). The hardeners used for crosslinking the epoxidized hemp oil were methyl nadic anhydride (MNA) and maleinized hemp oil (MHO). MNA, with an anhydride equivalent weight (AEW) of 178 g·eq⁻¹, is of petrochemical origin and was supplied by Sigma Aldrich (Madrid, Spain); MHO is of biological origin. In addition, glycerol at 0.8 wt.% was used as an initiator and 1-methylimidazole at 2 wt.% as an accelerator, both supplied by Sigma Aldrich (Madrid, Spain) [32]. Figure 1 displays the chemical structures of all the components used: the epoxy resin, the crosslinkers, the initiator and the accelerator.
Epoxidation Process
The epoxidation reaction used to obtain the bio-based epoxy matrix followed the process of Dominguez-Candela et al. [23] with minor modifications. A three-neck round-bottomed flask with a capacity of 1000 mL was used, fitted with a two-bladed stirrer and immersed in a thermostatic water bath whose temperature could be controlled to ±0.1 °C. The epoxidation was carried out for 8 h at a constant temperature and agitation of 70 °C and 220 rpm, respectively. Ten minutes after reaching the required temperature, a mixture of sulphuric acid and hydrogen peroxide was added dropwise; the addition took 30 min to complete. Figure 2 shows the proposed reaction mechanism for the epoxidation stage of hemp oil. This mechanism is later contrasted using techniques such as the oxirane oxygen content and FTIR.
The oxirane oxygen content of the resulting epoxidized hemp oil was 7.2%, obtained according to ASTM D1652. The epoxy equivalent weight (EEW) of the EHO, defined as the mass (in grams) of epoxy resin containing one equivalent of epoxy groups (g·eq⁻¹), was also obtained according to ASTM D1652 by titration; the value obtained was 226 g·eq⁻¹.
Maleinization Process
The process followed to obtain MHO from virgin hemp oil has been described in previous studies [16]. MHO is characterized by an acid number of 106 mg KOH·g⁻¹ and a maximum viscosity at 20 °C of 10 dPa·s.
Sample Preparation
The different formulations were made by keeping the amounts of EHO, glycerol and 1-methylimidazole constant and changing the amounts of the hardeners, MNA and MHO, as shown in Table 2. The ratio of epoxide equivalent weight to anhydride equivalent weight (EEW:AEW) was set at 1:1 [32]. The different mixtures were placed in aluminum containers to be weighed, shaken vigorously and then poured into a silicone mold to obtain standard rectangular specimens (80 mm × 10 mm × 4 mm) according to ISO 178. The curing process was carried out at 90 °C for 3 h, with post-curing at 120 °C for 1 h. A schematic representation of the preparation of the different samples can be seen in Figure 3.
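As a worked illustration of the 1:1 EEW:AEW dosing just described, the sketch below (not the authors' exact recipe) computes hardener masses per batch of EHO. The EEW of EHO and the AEW of MNA are taken from the text; the MHO equivalent weight is an assumption estimated from its acid number.

```python
# Hardener dosing sketch for a 1:1 epoxy-to-anhydride equivalent ratio.
# EEW_EHO and AEW_MNA come from the text; EW_MHO_EST is an illustrative
# assumption estimated from MHO's acid number
# (56,100 mg KOH per equivalent / 106 mg KOH per g ~= 529 g/eq).
EEW_EHO = 226.0               # g per epoxy equivalent (ASTM D1652 titration)
AEW_MNA = 178.0               # g per anhydride equivalent
EW_MHO_EST = 56100.0 / 106.0  # assumed equivalent weight of MHO

def hardener_mass(resin_mass_g: float, eew: float, ew_hardener: float,
                  share_of_equivalents: float = 1.0) -> float:
    """Mass of one hardener for a 1:1 EEW:AEW ratio, where
    share_of_equivalents is that hardener's fraction of the equivalents."""
    equivalents = resin_mass_g / eew
    return equivalents * share_of_equivalents * ew_hardener

resin = 100.0  # g of EHO
m_mna = hardener_mass(resin, EEW_EHO, AEW_MNA, 0.75)      # 75% MNA share
m_mho = hardener_mass(resin, EEW_EHO, EW_MHO_EST, 0.25)   # 25% MHO share
print(f"per {resin:.0f} g EHO: {m_mna:.1f} g MNA + {m_mho:.1f} g MHO")
# With 100% MNA: 100 / 226 * 178 ~= 78.8 g MNA per 100 g EHO.
```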
Figure 4 shows the interaction of EHO when reacting with MHO. The free volume can be seen to increase due to the functionalized long fatty acid chains. Chain mobility is therefore increased, contributing to the better flexibility of the final thermosetting resins [33]. As for the equivalences used for the EHO and MHO molecules, the black dots refer to the oxirane groups in EHO and the red dots to the maleic groups in MHO. Figure 5 shows the plausible reaction between EHO, used as an epoxy resin, and MNA, used as a crosslinker. In this case, the reaction is initiated by the interaction of a hydroxyl group present in the initiator molecules with the MNA, giving rise to an ester. The acid group resulting from this reaction reacts with an epoxy group to produce a diester and a new hydroxyl group [34]. The MNA confers rigidity on the final material since, as can be seen in Figure 5, the resulting structure is more clustered than that seen in Figure 4.
Oxirane Oxygen Content (Oo) and Acid Value
The oxirane content (Oo) was determined according to ASTM D1652. For this purpose, the sample of EHO was dissolved in chlorobenzene, a drop of crystal violet was added, and the mixture was titrated with a solution of hydrobromic acid (HBr) in glacial acetic acid. The Oo content was obtained using Equation (1):
Oo (%) = 1.6 × N × (V − B) / W   (1)
where N is the normality of the HBr in glacial acetic acid, V is the volume of HBr solution used in the titration of the sample (in mL), B is the volume of HBr solution used in the blank titration (in mL) and W is the amount (in grams) of sample used. At least five measurements were made for the sample and the average values were reported. The acid value was obtained according to ISO 660. A titration of hemp oil (2 g) dissolved in 5 mL of ethanol was carried out using potassium hydroxide in ethanolic solution as the standard reagent, to a phenolphthalein end point (the pink color of phenolphthalein persisting for at least 30 s). Finally, Equation (2) was used to obtain the acid value:
Acid value (mg KOH·g⁻¹) = 56.1 × C × V / m   (2)
where C corresponds to the exact concentration of the potassium hydroxide (KOH) solution (in mol·L⁻¹), V is the KOH volume used for the sample titration (in mL) and m is the mass of the sample used for the titration (in grams).
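A small sketch of these two titration calculations, Equations (1) and (2), is given below. The titration volumes are made up purely for illustration (the first set is chosen so the result matches the 7.2% oxirane oxygen reported above); only the formulas themselves come from the text.

```python
# Worked example of Equations (1) and (2) with illustrative (made-up)
# titration figures.

def oxirane_oxygen_pct(N: float, V_mL: float, B_mL: float, W_g: float) -> float:
    """ASTM D1652: oxirane oxygen (%) = 1.6 * N * (V - B) / W."""
    return 1.6 * N * (V_mL - B_mL) / W_g

def acid_value(C_mol_L: float, V_mL: float, m_g: float) -> float:
    """ISO 660: acid value (mg KOH/g) = 56.1 * C * V / m."""
    return 56.1 * C_mol_L * V_mL / m_g

# Illustrative: 0.1 N HBr, 9.2 mL sample / 0.2 mL blank, 0.2 g EHO -> 7.2 %.
print(f"oxirane oxygen = {oxirane_oxygen_pct(0.1, 9.2, 0.2, 0.2):.2f} %")
# Illustrative: 0.1 mol/L KOH, 3.8 mL titrant, 2.0 g hemp oil.
print(f"acid value = {acid_value(0.1, 3.8, 2.0):.1f} mg KOH/g")
```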
Fourier Transform Infrared Spectroscopy (FTIR)
Fourier transform infrared spectroscopy (FTIR) was used to analyze the chemical structure of the virgin hemp oil, EHO, MHO and MNA. These samples were analyzed using a Bruker Vector 22 spectrometer from Bruker Española, S.A. (Madrid, Spain). The samples were subjected to a total of 20 scans in the range of 4000-400 cm⁻¹, with a resolution of 4 cm⁻¹. The spectra obtained were normalized to a maximum ordinate of 1 absorbance unit.
Mechanical Characterization
Flexural, impact and hardness tests were carried out to analyze the mechanical properties of the different solid mixtures. The flexural test was carried out with an Ibertest ELIB 30 universal testing machine from S.A.E. Ibertest (Madrid, Spain) at room temperature, using a crosshead speed of 5 mm·min⁻¹. The impact tests were performed using a 6 J Charpy pendulum from Metrotec S.A. (San Sebastián, Spain) according to ISO 179-1. The Shore D hardness was obtained using a Shore D durometer 676-D (J. Bot S.A., Barcelona, Spain) according to ISO 868. Five samples were tested in each test to obtain an average.
Thermal Characterization
The curing process of the different samples was studied by differential scanning calorimetry (DSC). DSC tests were carried out in a Mettler Toledo 821e calorimeter (Schwerzenbach, Switzerland). Samples were subjected to a temperature ramp of 30-350 °C at a rate of 10 °C·min⁻¹ under a nitrogen atmosphere with a flow rate of 66 mL·min⁻¹. The starting temperature, the maximum crosslinking (peak) temperature and the final temperature of the process were obtained from each calorimetric curve. In addition, the enthalpy value (ΔH) of each sample was obtained by integrating the exothermic peak area. DSC was then carried out with the same equipment and conditions on the crosslinked samples to obtain their cure percentages. Given that the curing process was carried out at 90 °C for 3 h and at 120 °C for 1 h, it is possible that the samples were not 100% cured. The DSC curves make it possible to determine whether a small residual exothermic peak appears; the percentage cured can then be measured by comparing the enthalpy obtained in the curing cycle from 30 to 350 °C for the liquid samples, which end up 100% cured, with the second, smaller exothermic peak of the cured samples.
Thermomechanical Characterization
Dynamic mechanical and thermal analysis was performed in plate-plate mode in an AR G2 oscillating rheometer from TA Instruments (New Castle, DE, USA).
The samples were in a liquid state and were subjected to an isothermal temperature of 90 °C for 5 h at a frequency of 1 Hz. The gel time was obtained as the crossover point between the storage modulus (G′) and the loss modulus (G″). In addition, the rheometer was also used in torsion mode to test the cured rectangular samples with dimensions of 40 mm × 10 mm × 4 mm. These samples were subjected to a temperature ramp from −20 to 110 °C at a frequency of 1 Hz, a heating rate of 2 °C·min⁻¹ and a strain of 0.1%.
Morphological Characterization
After the impact test, the fractured samples were taken to observe their surfaces in a field emission scanning electron microscope (FESEM), model Zeiss ULTRA, from Oxford Instruments (Abingdon, UK), at a voltage of 2 kV. Before observation, the samples were coated with a thin layer of gold and platinum using an EM MED020 sputter coater from Leica Microsystems (Wetzlar, Germany).
Figure 6 shows the FTIR spectra of virgin hemp oil, MHO, EHO and MNA. The characteristic peaks of the double bonds are peak 1, located at 3010 cm⁻¹ (=CH stretching), caused by the stretching of the cis-olefin bonds; peak 2, located at 1672 cm⁻¹ (C=C stretching), due to the stretching of disubstituted cis-olefins; and peak 3, located at 723 cm⁻¹ (cis C=C bending), caused by the combination of out-of-plane deformation and rocking vibration in cis-disubstituted olefins [35]. As can be seen in Figure 6b,c, corresponding to MHO and EHO respectively, these peaks are diminished compared to virgin hemp oil. This is because these double bonds reacted during the maleinization and epoxidation processes, reducing the number of double bonds present. On the other hand, in the MHO sample (Figure 6b), two peaks appear relative to the virgin hemp oil sample, located at 1781 and 1861 cm⁻¹ (peak 4) and related to the symmetric and antisymmetric vibrations of the carbonyl (C=O) of the anhydride groups, respectively. This is due to the maleic anhydride used as a reagent in the maleinization of virgin hemp oil [36]. This assignment is supported by studies in which maleinized chia oil, chemically very similar to MHO, was analyzed by NMR [37]. In that study, it was concluded that the characteristic peak at 2.8-3.2 ppm is attributed to the methylene and succinic protons created after maleinization of the oil. This confirms the presence of reactive maleic anhydride (MA) groups in the maleinized oil, which was used here as a hardener for the first time. In the EHO sample (Figure 6c), a new peak is observed at 821 cm⁻¹ (peak 5) in comparison with virgin hemp oil, related to the oxirane group (C-O-C stretching). This group appears due to oxygen insertion into the double bonds through the peracetic acid formed during epoxidation [38]. Finally, a peak appears in all the samples at 3470 cm⁻¹ (peak 6), associated with -OH stretching. This peak is higher in the EHO sample (Figure 6c) because it is related to the vibration of the hydroxyl group and demonstrates the formation of -OH groups due to the opening of the epoxy ring in the epoxidation process [39].
Thermal Properties
Differential scanning calorimetry (DSC) was used to analyze the cure cycle of the different thermosetting resins studied.
Figure 7 shows the DSC curves of the curing cycles of the different resins, where it is possible to identify the exothermic peaks associated with the crosslinking process. As can be seen, there are variations in the initial, peak and final reaction temperatures of the different samples. For the 100% MNA sample, the curing process takes place at higher temperatures than in the rest, with a starting temperature of 140 °C and a final process temperature of around 225 °C. However, when MHO is added, the start and end temperatures of the resin reaction decrease, and the decrease grows as the MHO content in the sample increases. The sample with 100% MHO presents start and end cure temperatures of 111 °C and 210 °C, respectively, a decrease of 34 °C and 15 °C with respect to the start and end cure temperatures of the 100MNA sample. The same trend can be observed in the peak temperature, which corresponds to the maximum reaction rate: when MHO is added, the maximum crosslinking temperature decreases, and the decrease is greater as the MHO content increases. However, the addition of 25% MHO (75MNA25MHO) hardly affects the maximum crosslinking temperature, which matches the peak temperature of the 100MNA sample. These results indicate that the EHO reaction with MHO takes place more easily than reactions using MNA, which is consistent with the gel times, as MNA leads to higher gel time values than MHO [40]. Similarly, the enthalpy shows the same downward trend, decreasing as the MHO content increases (Table 3). The maximum reaction enthalpy is obtained with the 100MNA mixture, at a value of 189.40 J·g⁻¹, while the minimum enthalpy value of 83.33 J·g⁻¹ is reached with the 100MHO sample. Therefore, MHO leads to lower exothermicity. Sample 75MNA25MHO also shows a decrease in enthalpy, to 141.30 J·g⁻¹. Less exothermic values are obtained due to the chemical structure of MHO: these macromolecules are heavier, and the number of reactive groups per gram is lower than in MNA [41]. Regarding the percentage of curing, all the samples have a high degree of cure. For the samples that contain MNA in their formulation, the curing percentage is at least 90%; the samples containing only MHO as a hardener have a curing percentage of 85%. This is a good sign, as the result obtained for the sample with 100% MNA content does not differ significantly from that with 100% MHO content.
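A minimal sketch of how such cure percentages follow from the two DSC enthalpies (total exotherm of the liquid resin versus residual exotherm of the cured sample) is given below. The total enthalpies are the values quoted above from Table 3; the residual enthalpies are assumptions chosen purely for illustration.

```python
# Degree of cure from DSC enthalpies: alpha = 1 - dH_residual / dH_total.
# Total enthalpies come from the text; residual enthalpies are assumed
# values for illustration only.
def cure_percentage(dh_total_J_g: float, dh_residual_J_g: float) -> float:
    return 100.0 * (1.0 - dh_residual_J_g / dh_total_J_g)

dh_total = {"100MNA": 189.40, "75MNA25MHO": 141.30, "100MHO": 83.33}
dh_residual = {"100MNA": 15.0, "75MNA25MHO": 12.0, "100MHO": 12.5}  # assumed

for name, total in dh_total.items():
    print(f"{name}: {cure_percentage(total, dh_residual[name]):.1f}% cured")
# e.g. 100MHO: 100 * (1 - 12.5 / 83.33) = 85.0%, matching the 85% quoted above.
```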
Thermomechanical Properties
The gel time is critical in handling thermoset materials since, at this point, the material stops flowing and can no longer be processed. The rheometer was used to obtain this value, taking the gel time as the point where a phase angle of 45° is reached. Table 4 shows the most relevant data from this test: the start of the curing process (δ ≈ 90°), the end (δ ≈ 0°) and the intermediate point or gel time (δ = 45°). These results show that as the amount of MHO increases, the reaction and crosslinking rates of the mixtures also increase, thus reducing the curing time. The sample containing 100% MNA (100MNA) has a crosslinking onset of 10,200 s, a gel time of 11,500 s and a crosslinking endset of 15,270 s. After incorporating 25% MHO (sample 75MNA25MHO), these values decrease to a crosslinking onset of 4690 s, a gel time of 5433 s and a crosslinking endset of 7900 s. Finally, the sample containing 100% MHO as a hardener (100MHO) shows a crosslinking onset of 555 s, a gel time of 1006 s and a crosslinking endset of 1920 s. This decrease in the curing time of the MHO-containing samples suggests that the anhydride group included in the MHO is more reactive than the MNA, so crosslinking is faster [42]. The reduction in gelation time produced by MHO is interesting in the field of composites, as it reduces the precipitation of particles that could lead to phase separation and a heterogeneous material [43]. Moreover, this reduction in curing times is an important feature at the industrial level, as fully cured materials can be obtained in shorter times. Once all the cured and post-cured EHO materials crosslinked with the MNA/MHO blends had been characterized with the rheometer, it was observed that changing the MNA/MHO ratio changed the glass transition temperature (Tg) of the materials. The phase angle (δ) is shown in Figure 8 and the glass transition temperatures (Tg) are compiled in Table 5. As can be seen, Tg decreases as the amount of MHO in the MNA/MHO mixtures increases. The sample with the highest Tg is 100MNA, with a value of 48.7 °C. A significant decrease in Tg is observed with the addition of MHO, the lowest Tg being obtained for the sample with 100% MHO, at 6.8 °C. Furthermore, this decrease in Tg after the incorporation of MHO results in materials with higher ductility. This is best observed in the samples with MHO contents above 50%, where the Tg values are below room temperature. In conclusion, as Tg decreases, the material changes from brittle to more ductile.
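The gel-point determination used in this section (the G′/G″ crossover, equivalently δ = 45° or tan δ = 1) is easy to sketch numerically. The curves below are synthetic stand-ins for the AR G2 rheometer output, not measured data.

```python
# Gel time as the G'/G'' crossover (phase angle = 45 degrees, tan delta = 1).
# The moduli curves are synthetic stand-ins for rheometer output.
import numpy as np

t = np.linspace(0, 15000, 3001)          # time, s
G_loss = 50.0 * np.ones_like(t)          # G'' (Pa), roughly flat here
G_store = 0.01 * np.exp(t / 1250.0)      # G' (Pa), rising as the network forms

# First index where the storage modulus overtakes the loss modulus.
idx = np.argmax(G_store >= G_loss)
print(f"gel time ~ {t[idx]:.0f} s (G' = G'' crossover, tan delta = 1)")
```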
Mechanical Properties
The flexural strength and flexural modulus of the samples are presented in Figure 9. As can be seen in Figure 9a, the flexural modulus of the 100MNA sample is around 300 MPa. Samples containing MHO have a lower stiffness, which decreases further as the percentage of MHO increases and the percentage of MNA decreases. Specifically, with 75% MNA the flexural modulus drops to 100 MPa, roughly a threefold reduction. This decrease grows with the MHO content, giving a modulus of 7 MPa for the sample with 100% MHO. Figure 9b presents the flexural strength of the tested samples. A trend similar to that of the flexural modulus is observed: as the MHO content increases, the flexural strength decreases compared to the 100MNA sample, falling from almost 7 MPa for the 100MNA sample to 1 MPa for the sample with the highest MHO content (100MHO). This decrease in mechanical strength is related to the chemical structure of the hardeners: MNA, a cyclic anhydride, confers rigidity on the mixture, whereas MHO, composed of triglycerides, provides flexibility to the samples containing it [37]. Rösch and Mülhaupt [44] obtained highly flexible, rubber-like crosslinked polymers using anhydrides (succinic, hexahydrophthalic and norbornene dicarboxylic acid) with epoxidized soybean oil as the bio-based epoxy matrix.
On the other hand, Figure 10a shows the results obtained for the Shore D hardness of the samples. As with the flexural strength, it is observed that the hardness decreases as the MHO content in the samples increases. The maximum hardness, 63 Shore D, is obtained for the sample crosslinked with 100% MNA. This value is very similar to that reported in the literature for soybean oil epoxidized and crosslinked with maleic anhydride, 70 Shore D [45]. As the MHO content increases, the hardness decreases, with the lowest value, 21 Shore D, obtained for the sample crosslinked with 100% MHO. Furthermore, it can be observed that the samples crosslinked with 75% MNA and 50% MNA give similar values of around 44 Shore D. Finally, the results of energy absorption obtained after performing the Charpy impact test on the different samples are shown in Figure 10b. As can be seen in Figure 10b, two of the five samples subjected to the impact test (25MNA75MHO and 100MHO) did not break and, therefore, no values are available for them. The value obtained for the 100MNA sample, 6.3 kJ/m², is lower than that obtained for the 75MNA25MHO sample, 17.6 kJ/m², an increase of 180%. In addition, the 50MNA50MHO sample presents a very similar value to the previous sample, 17 kJ/m². This increase in impact energy absorption corroborates the increase in ductility due to the presence of MHO in the samples. Sahoo et al. [46] reported that toughening a petroleum-based epoxy (DGEBE) with renewable resources, linseed oil and a bio-based crosslinker, increases the impact absorption energy by 40% over the petroleum-based resin alone.
Morphological Properties
To support the results obtained in the mechanical tests, the fractured surfaces of the samples after the Charpy test were studied using field emission scanning electron microscopy (FESEM). Figure 11 shows the fracture FESEM images of the samples subjected to the Charpy impact test. The sample with 100% MNA content is shown in Figure 11a, and it can be seen that the fracture surface is smooth, which is characteristic of a rigid material. As can be seen in Figure 11b,c, when the MHO content in the samples increases, the surface is no longer smooth and cracks are observed on the fracture surface. This is directly related to the higher ductility that MHO confers to the samples. These results agree with those obtained in the mechanical tests, as they confirm the increased ductility as the MHO content increases. Similarly, Domínguez-Candela et al. [14] report that by increasing the oil content in the sample, the resulting mixture gives rise to a more ductile material.
Conclusions
After compiling all the data obtained, it can be concluded that maleinized hemp oil (MHO) is an excellent crosslinking agent, alongside methyl nadic anhydride (MNA), which is of petrochemical origin, for epoxidized hemp oil (EHO).
After performing the mechanical tests, it was observed that the sample containing 100% MNA (100MNA) presented high rigidity and brittleness, whereas, with the addition of MHO in different amounts, the material showed greater ductility and flexibility. For example, a significant difference in mechanical properties was observed between the 100MNA sample and the sample with 25% MHO (75MNA25MHO), as the material changed from rigid to more ductile with the addition of only 25% MHO. The decrease in the flexural strength of the sample with 100% MNA compared to the sample with 75% MNA and 25% MHO is 11%, while the impact energy absorption for the same samples shows an increase of 180%. On the other hand, the calorimetric study showed that the incorporation of MHO leads to a reduction in the curing temperatures of the different samples studied, with lower onset, maximum crosslinking and end temperatures obtained as the MHO content increases. For example, the 100MNA sample presents onset, maximum crosslinking and end temperatures of 140 °C, 191 °C and 225 °C, respectively, whereas at the other extreme, the 100MHO sample gives values of 111 °C, 155 °C and 210 °C for the onset, maximum crosslinking and end temperatures, respectively. The reduction in the maximum crosslinking temperature between these two samples is 23%. Furthermore, it has been observed that as the MHO content in the samples increases, the gel time decreases, from 11,500 s for the 100MNA sample to values of around 1000 s for the 100MHO sample. In addition, the results obtained for the Tg of the materials show that as the MHO content increases, the Tg decreases, meaning that the material changes from a more rigid to a more flexible one. Given these results, the mixture with the most balanced properties compared to the 100MNA sample is the 75MNA25MHO sample, as it has a higher renewable content, greater flexibility and a shorter curing time, which is very interesting from an industrial point of view. The developed thermoset resins, due to their highly ductile properties, could be used in a variety of industrial sectors, such as construction, automotive, aerospace, electronics and marine, among others. In construction, they can be used for the manufacture of insulation panels, coatings and adhesives. In the automotive and aerospace industries, they can be used in the production of structural parts and components for vehicle interiors. In electronics, they could be employed in component encapsulation and printed circuit boards and, in the marine industry, in the manufacture of saltwater-resistant parts. Finally, it should be noted that different mechanical and thermal properties can be obtained by changing the percentage of MHO, as it is possible to obtain materials with more or less flexibility and, most importantly, with a high renewable content, which is an essential characteristic from an environmental perspective.
2023-03-15T15:12:32.866Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "44a1cca8ede261c2fee07d832b47bf15923060c9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/15/6/1404/pdf?version=1678525969", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e7edcf78110ca8b8f24a0e7fb380a7ad3f75bec", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
267377265
pes2o/s2orc
v3-fos-license
Mechanical power during mechanical ventilation
Mechanical ventilation provides lifesaving support for patients with acute respiratory failure. However, the pressures and volumes required to maintain gas exchange can cause ventilator-induced lung injury. The current approach to mechanical ventilation involves attention to both tidal volume and airway pressures, in particular plateau pressures and driving pressures. The ventilator provides energy to overcome airway resistance and to inflate alveolar structures. This energy delivered to the respiratory system per unit time equals mechanical power. Calculation of mechanical power provides a composite number that integrates pressures, volumes, and respiratory rates. Increased levels of mechanical power have been associated with tissue injury in animal models. In patients, mechanical power can predict outcomes, such as ICU mortality, when used in multivariable analyses. Increases in mechanical power during the initial phase of ventilation have been associated with worse outcomes. Mechanical power calculations can be used in patients on noninvasive ventilation, and measurements of mechanical power have been used to compare ventilator modes. Calculation of mechanical power requires measurement of the area in a hysteresis loop. Alternatively, simplified formulas have been developed to provide this calculation. However, this information is not available on most ventilators. Therefore, clinicians will need to make this calculation. In summary, calculation of mechanical power provides an estimate of the energy requirements for mechanical ventilation based on a composite of factors, including airway resistance, lung elastance, respiratory rate, and tidal volume.

Introduction
Mechanical ventilation provides lifesaving support for patients with acute respiratory failure. However, this support can also cause ventilator-induced lung injury. 1 The usual classification for ventilator-induced lung injury includes barotrauma, volutrauma, atelectrauma associated with the repeated opening and closing of areas of the lung parenchyma, and biotrauma with the release of inflammatory markers into the lung and systemic circulation. Determining whether or not ventilator-induced lung disease develops in a patient on mechanical ventilation is difficult, since the initial disorder causing acute respiratory failure causes lung injury with edema formation, inflammation, and potentially fibrosis. Current ventilator standards concentrate on "safe ventilation" with smaller tidal volumes (6-8 ml/kg ideal body weight), reduced plateau pressures (<30 cm H2O), and reduced driving pressures (plateau pressure - PEEP <15 cm H2O). The respiratory rate and minute ventilation should be adjusted to maintain PaCO2 levels at or above 40 mmHg. Patients usually require sedatives and narcotics for comfort and better interaction with ventilators. The FiO2 and PEEP combination can be based on the low PEEP level or the high PEEP level tables. Patients with very poor gas exchange may benefit from short-term use of paralytic drugs and the use of prone positioning. 1
This approach to ventilator management focuses on static intrapulmonary pressures, with the expectation that lower pressures are associated with less lung injury. However, low pressure strategies generally require a higher minute ventilation to achieve equivalent gas exchange. The clinician can decrease the energy added per machine breath (tidal volume), but this may increase the energy added per unit of time (power). There are theoretical reasons, with some support from experimental evidence, that mechanical power may be a more important determinant of lung injury than the work of each delivered tidal volume.

Calculating mechanical power, or the energy delivered to the lung per minute during mechanical ventilation, provides an alternative approach to understanding the development of ventilator-induced lung injury. Gattinoni and colleagues developed the concept of mechanical power and its effect on the development of ventilator-induced lung injury. 2 Important parameters include pressures, volumes, flow, and respiratory rates. Abnormalities in the lung parenchyma during acute respiratory failure include differences in the disease process in various regions of the lung resulting in inhomogeneity, cyclic collapse and recruitment of the lung parenchyma, and the primary events associated with the development of lung injury, which include edema formation, inflammation, and fibrosis. The mechanical ventilator applies energy to the lung and chest wall during each ventilatory cycle; the energy per unit time is power. This energy is not distributed uniformly throughout the damaged lung during the respiratory cycle. Consequently, mechanical power calculations provide only an index of the overall mechanical events during the respiratory cycle. Key equations used in the Gattinoni publication included:

Power = RR × {ΔV² × [0.5 × ELrs + RR × (1 + I:E)/(60 × I:E) × Raw] + ΔV × PEEP}

where ELrs × ΔV = ΔP, i.e., the pressure component due to elastic recoil; Raw × F = Ppeak − Pplat, i.e., the pressure component due to air flow; PEEP = baseline tension at end expiration; ΔV = tidal volume; ELrs = elastance of the respiratory system; I:E = the inspiratory to expiratory time ratio; and Raw = airway resistance.
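To make the units concrete, here is a minimal Python sketch of the comprehensive equation as reconstructed above. The completion of the truncated bracket follows the form commonly attributed to Gattinoni's group, and the 0.098 factor (converting cmH2O·L to joules) is borrowed from the simplified formulas quoted later in this review; the example settings are hypothetical.

```python
def mechanical_power(rr, vt, el_rs, raw, ie_ratio, peep):
    """Comprehensive mechanical power equation (volume-controlled
    ventilation) as reconstructed above.

    rr       : respiratory rate (breaths/min)
    vt       : tidal volume (L)
    el_rs    : respiratory system elastance (cmH2O/L)
    raw      : airway resistance (cmH2O/L/s)
    ie_ratio : inspiratory:expiratory time ratio (e.g., 0.5 for 1:2)
    peep     : positive end-expiratory pressure (cmH2O)

    Returns power in J/min; 0.098 converts cmH2O*L to joules.
    """
    elastic = 0.5 * el_rs * vt**2
    resistive = vt**2 * rr * (1 + ie_ratio) / (60 * ie_ratio) * raw
    static = vt * peep
    return 0.098 * rr * (elastic + resistive + static)

# Plausible settings: VT 0.5 L, RR 15/min, ELrs 25 cmH2O/L,
# Raw 10 cmH2O/L/s, I:E = 1:2, PEEP 5 cmH2O.
print(f"{mechanical_power(15, 0.5, 25, 10, 0.5, 5):.1f} J/min")
```

With these settings the sketch returns roughly 11 J/min, in the range discussed for patients on conventional ventilation.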
Methods to Measure Mechanical Power
Mechanical power can be calculated using graphs plotting changes in pressure versus changes in volume during a tidal breath. This requires software to measure this area. An ideal method is to directly measure the volume and pressure using a high rate of sampling during one tidal breath. The energy is calculated by solving the integral of airway pressure with respect to change in volume, which represents the area of the pressure-volume loop. This requires high-quality data collection and software to make the calculations. Mechanical power can also be calculated by the comprehensive formulas developed by Gattinoni, which require multiple measurements to calculate the power needed to overcome resistance, elastic recoil, and PEEP. This approach is not practical for most clinicians for bedside management of ventilators. However, surrogate equations have been developed, based on the comprehensive equation, to provide simpler calculations at the bedside. Pressure-volume curves plot changes in pressure against changes in volume during inspiration and expiration. Some of the work required to inflate the lung during inspiration is recovered by relaxation of elastic structures during exhalation. Work required to drive flow across resistance is lost, however, and the loss must be dissipated as heat. Hysteresis is another amount of energy required during inspiration that is not recovered during expiration (Figure 1). The lost energy is heat that may contribute to lung injury if the respiratory system cannot dissipate it. The main determinants of hysteresis are the air-liquid surface forces in the alveoli, stress relaxation of lung tissue, and lung re-expansion and collapse during inflation and deflation. These loops can be evaluated at different PEEP levels. If PEEP increases and lung volumes increase secondary to recruitment, there should be a change in the configuration of the loop. The mechanical energy calculated from a hysteresis loop should be compared to the energy calculated from various formulas that consider pressure, tidal volume, and resistance. The differences should represent energy lost to heat, associated with tissue injury, and stored in the lung parenchyma.

Gattinoni summarized recent studies on the utility of mechanical power in 2023. 4 The mechanical power formula multiplies each pressure component involved in mechanical ventilation by the tidal volume to calculate work or energy. It is then multiplied by the respiratory rate to determine power in joules per minute. The pressure components include elastic pressure, resistive pressure, and static pressure. The need for more mechanical power during mechanical ventilation is associated with mortality, but the boundaries for safe mechanical power levels are uncertain. In pigs, the reported safe threshold has ranged from 4 to 7 J/min up to 12 J/min. In animal studies, experimental adjustment of respiratory rate, tidal volume, and PEEP causes the same level of lung damage provided mechanical power is similar, referred to as iso-power, at the various settings. 5 It is likely that mechanical power needs to be normalized to other physical characteristics of the lung, such as compliance or lung volume, or to body weight. The distribution of mechanical power during the respiratory cycle may need to be considered, since it is unlikely to be uniform. In addition, the recovery of mechanical power during exhalation also depends on ventilator parameters. 6,7
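Before moving on, here is a minimal Python sketch of the graphical method described at the start of this section: the enclosed area of a sampled pressure-volume loop gives the net (unrecovered) energy per breath, which, multiplied by the respiratory rate, yields power. The sampled loop below is hypothetical and purely illustrative.

```python
import numpy as np

def loop_energy_joules(pressure_cmH2O, volume_L):
    """Net energy per breath (J) from a sampled pressure-volume loop,
    computed as the enclosed area via the shoelace formula.
    Points must trace one full inspiration-expiration cycle in order."""
    p = np.asarray(pressure_cmH2O, dtype=float)
    v = np.asarray(volume_L, dtype=float)
    # Shoelace formula for the area enclosed by the loop.
    area_cmH2O_L = 0.5 * abs(np.dot(p, np.roll(v, -1)) - np.dot(v, np.roll(p, -1)))
    return 0.098 * area_cmH2O_L  # 1 cmH2O*L ~= 0.098 J

# Illustrative loop: inspiration up the right limb, expiration back down.
p = [5, 12, 20, 25, 22, 14, 7, 5]
v = [0.00, 0.15, 0.35, 0.50, 0.48, 0.30, 0.10, 0.00]
energy = loop_energy_joules(p, v)
print(f"Energy dissipated per breath: {energy:.2f} J; at RR = 15/min "
      f"this is {energy * 15:.1f} J/min.")
```

Note that the enclosed area corresponds only to the energy not recovered during exhalation (dissipated as heat, as in the Figure 1 caption); the total inflation energy would be the area to the left of the inspiratory limb.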
In most studies, the measured mechanical power is the energy needed to inflate both the lung and chest wall; to determine the mechanical power applied to the lung only would require placement of an esophageal balloon to measure transpulmonary pressures.

In summary, mechanical energy is used to create flow into the lungs, expand (inflate) the lungs, and maintain volume stability at various pressures. It also creates heat and can cause tissue injury in some patients. Some energy is stored in the lung, and some is released during exhalation. Mechanical power can be calculated graphically using pressure-volume curves. Alternatively, it can be calculated using a comprehensive formula developed by Gattinoni and coworkers. Finally, surrogate formulas have been developed to make it easier to calculate mechanical power at the bedside. 3 Mechanical power has been studied in patients with acute respiratory failure to determine its association with outcomes and its changes during an episode of acute respiratory failure and to compare ventilator modes. The energy delivery has been studied in patients on noninvasive ventilation and in animal models. Some of these studies are discussed below.

Mechanical Power as Predictor of Outcomes
Serpa Neto et al. used two large databases to study the outcomes in patients with acute respiratory failure. 8 The median mechanical power on the second day of ventilator care was 21.4 J/min in the first cohort and 16.0 J/min in the second cohort. Approximately 10% of the patients had ARDS, and the overall mortality was 29.9% and 31.0% in the 2 cohorts. Mechanical power was independently associated with in-hospital mortality. The odds ratio for each 5 J/min increase was 1.06 in the first cohort and 1.10 in the second cohort. Mechanical power was associated with ICU mortality, 30-day mortality, the number of ventilator-free days, and ICU and hospital length of stay. Higher mechanical power levels were associated with worse outcomes in patients who were on a low tidal volume ventilator strategy and had low driving pressures. Since mechanical power calculations integrate several ventilator parameters, mechanical power might be used as a method to determine optimal ventilator settings that potentially reduce lung injury. In this study the calculation for mechanical power was: MP (J/min) = 0.098 × VT × RR × (Ppeak − 1/2 × driving pressure).

Mechanical Power Normalized to Body Size
Zhu used the data stored in a large critical care database. 9 This study involved patients who were on invasive mechanical ventilation for at least 48 hours, and the mechanical power was normalized to the predicted body weight. This study eventually included 1301 patients; 365 patients died. Patients in the fourth quartile of normalized mechanical power had an increased ICU mortality rate, an increased ICU length of stay, and a decreased number of ventilator-free days at 28 days of ventilation. The formula used in this study normalized the mechanical power to the predicted body weight.

Serpa Neto combined the clinical and ventilator information from 2 large patient cohorts with acute respiratory failure.
This study included 8191 patients requiring invasive ventilation. 10 The authors calculated absolute mechanical power, mechanical power normalized to predicted body weight, mechanical power normalized to body mass index, and mechanical power normalized to body surface area. All 4 values were increased in non-survivors in this cohort. However, these parameters were not significantly increased in the patients with ARDS. These results suggest that normalized mechanical power calculations can improve predictions of outcomes in patients with acute respiratory failure.

Changes in Mechanical Power During Mechanical Ventilation
Chi et al. studied the outcomes of 602 patients who required mechanical ventilation for acute respiratory failure for more than 48 hours. 11 This study excluded patients with a mechanical power less than 10 J/min. Patients were classified as having a decrease in mechanical power at 24 hours or an increase or no change in mechanical power at 24 hours. The baseline mechanical power levels were 11.7 J/min in the group with increasing mechanical power and 12.2 J/min in the group with decreasing mechanical power at 24 hours. Patients who had decreased mechanical power had decreased mortality in comparison to the patients who did not; the mortality rates were 24% and 36%, respectively. The 24-hour mechanical power variation rate was associated with ICU mortality after adjusting for confounders. All mechanical power components improved in the group that had reduced levels at 24 hours. Minute ventilation and PEEP levels contributed to the increase in mechanical power in the group that had increases in mechanical power. The PaO2 levels at 24 hours were identical in the 2 groups. Compliance improved in the patients with improved mechanical power levels. The formula used in this study was: MP = 0.098 × RR × TV × (PIP − 0.5 × driving pressure).

Pozzi et al. enrolled 69 patients with ARDS in a prospective study to determine outcomes and mechanical ventilation variables, including PaO2/FiO2 ratios, mechanical power, and alveolar dead space fraction. 12 Thirty-six patients (52%) died during the study. The initial mechanical power in the entire cohort was 18.7 (14.7-22.2) J/min, and the mechanical power ratio was 7.0 (5.8-8.3). The PaO2/FiO2 was 139 (93-168), and the alveolar dead space fraction was 46 (30-62)%. The only difference between the 2 groups on admission was in the mechanical power ratio, which was lower in survivors. Based on CT analysis, the total amount of nonaerated lung tissue was 47 (38-56)%. Patients who survived had higher PaO2/FiO2 ratios on the third day of mechanical ventilation in the ICU. These patients also had lower mechanical power ratios and lower alveolar dead space fractions. Based on the average values over 3 days of monitoring, the mechanical power ratio, the driving pressure, and the PaO2/FiO2 ratio were significantly associated with ICU mortality. In this study, the mechanical power ratio equals the measured mechanical power divided by a calculated ideal mechanical power based on equations that involve the ideal body weight, ideal respiratory rate, and ideal plateau pressure. This equation provides the expected mechanical power for a healthy lung. In summary, monitoring these gas exchange variables and the energy requirement to deliver a tidal volume (i.e., mechanical power and mechanical power ratio) can predict outcomes in these patients during the initial phase of mechanical ventilation.
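As a bedside illustration, the following Python sketch combines the surrogate formula quoted above (MP = 0.098 × RR × TV × (PIP − 0.5 × driving pressure)) with normalization to predicted body weight, as in the Zhu study. The PBW formula shown is the widely used ARDSNet one; this is an assumption, since the studies' exact normalization formulas are not quoted here, and the ventilator settings are hypothetical.

```python
def mp_simplified(vt_L, rr, p_peak, driving_pressure):
    """Surrogate formula quoted in the text:
    MP (J/min) = 0.098 * VT * RR * (Ppeak - 0.5 * driving pressure)."""
    return 0.098 * vt_L * rr * (p_peak - 0.5 * driving_pressure)

def predicted_body_weight(height_cm, male=True):
    """ARDSNet predicted body weight (kg); assumed here for illustration."""
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

vt, rr, p_peak, dp = 0.45, 20, 28, 14   # hypothetical ventilator settings
mp = mp_simplified(vt, rr, p_peak, dp)
pbw = predicted_body_weight(175, male=True)
print(f"MP = {mp:.1f} J/min; normalized = {mp / pbw:.3f} J/min/kg PBW")
```

For these settings the sketch gives about 18.5 J/min, or roughly 0.26 J/min/kg PBW, close to the average normalized power reported by Costa et al. below.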
Mechanical Power Measurements to Compare Ventilator Modes
Buiteman-Kruizinga measured mechanical power in 24 patients requiring mechanical ventilation for at least 1 day. 13 Twelve patients were ventilated with adaptive support ventilation, and 12 patients were ventilated with pressure-controlled ventilation. Mechanical power was calculated 3 times per day. It was lower with adaptive support ventilation than with pressure-controlled ventilation, 15.1 J/min versus 22.9 J/min. The tidal volumes were similar, but the maximum pressure and respiratory rate were lower with adaptive support ventilation. The authors concluded that this mode of ventilation may have benefit since it requires lower levels of mechanical power.

Mechanical Power in Comparison to Respiratory Rate and Driving Pressure to Predict Outcomes
Costa et al. analyzed patient-level data for 4549 patients with acute respiratory failure. 14 The average mechanical power was 0.32 ± 0.14 J/min/kg of predicted body weight. The driving pressure was 15 ± 5.8 cm of water; the respiratory rate was 25.7 ± 7.4 breaths/min. The overall mortality was 38%. Univariable predictors of mortality included driving pressure, PEEP level, plateau pressure, respiratory rate, and mechanical power. Models were subsequently adjusted for baseline risk factors in patients with ARDS. When all variables were entered into a multivariable model, only driving pressure and respiratory rate were significantly associated with mortality; the effect size of each 1 cm of water increase in driving pressure was approximately 4 times the effect size of a 1 breath/min increase in respiratory rate. The components of mechanical power were then introduced into the model. In this analysis the elastic dynamic component was associated with mortality and had a stronger effect than total power. A model that included a relationship between driving pressure and respiratory rate predicted mortality better than power.

Overall, this study suggested that driving pressure and respiratory rate were independently associated with survival. Mechanical power was independently associated with mortality, but this was attributed to the dynamic elastic component of this equation. Driving pressure had a greater effect on mortality than respiratory rate, which might suggest that adjusting the tidal volume to lower the driving pressure could have beneficial effects on overall mortality even if the respiratory rate is increased. The level of mechanical power needed for mechanical ventilation should reflect the disease severity. However, poorly adjusted ventilator settings with an unnecessarily high mechanical power may increase the potential for ventilator-induced lung injury. The stress and strain per breath applied to the lung is reflected in the driving pressure; the frequency of this stress/strain applied to the lung is reflected in the respiratory rate. These authors conclude that mechanical power is associated with mortality. However, driving pressure and the respiratory rate are also predictors of mortality and are easier to measure at the bedside. Driving pressure potentially has more effect on mortality than respiratory rate and should be adjusted first.

Mechanical Power During Noninvasive Ventilation
Musso et al. measured mechanical power in patients with hypoxemic respiratory failure secondary to COVID-19. 15
They analyzed the differences in mechanical power in the supine and prone positions. This study included 216 patients who underwent noninvasive ventilation (NIV). They normalized the mechanical power to well-aerated lung volumes determined by computed tomography scans. The prone position was associated with a 34% reduction in mechanical power. Patients with a high mechanical power during the first 24 hours of NIV had higher 28-day NIV failure and higher death rates. Mechanical power performed better than other ventilatory variables as a predictor of 28-day NIV failure and death. It also predicted gas exchange, ultrasound changes in the lung, and inflammatory biomarker changes (CRP). In this study, mechanical power was calculated as: MP = 0.098 × RR × Vt × [PEEP + ΔPi], where ΔPi = airway pressure above PEEP. The definition of high mechanical power used in this study was at least 9.1 J/min/liter of well-aerated lung; low mechanical power was defined as less than 9.1 J/min/L of well-aerated lung. Throughout the first 7 days of patient management, mechanical power decreased with every change from the supine to the prone position. The median mechanical power on day 1 in patients who were ventilated in the prone position was 16.7 J/min. The mean mechanical power in patients in the supine position on day 1 was 16.9 J/min. This study demonstrates that mechanical power can be calculated in patients on noninvasive ventilation and has important associations with NIV failure and with death at 28 days of management.

Mechanical Power Studies in an Animal Model
Cressoni et al. used an animal model to try to determine the level of mechanical power which results in ventilator-induced lung injury. 16 These piglets were ventilated at a mechanical power level known to be lethal; the tidal volume was 38 mL per kilogram, the plateau pressure was 27 cm of water, and the respiratory rate was 15 breaths per minute. Other groups of piglets were ventilated with the same tidal volumes and plateau pressures but at lower respiratory rates. All animals were ventilated for 54 hours. Mechanical power levels greater than 12 J/min caused ventilator-induced lung injury. The animals at power levels greater than 12 J/min developed whole lung edema; animals ventilated below 12 J/min developed isolated densities in their lungs. These authors found a significant relationship between the mechanical power applied to the lung and increases in lung weight and lung elastance and decreases in PaO2/FiO2 ratios. There were significant changes in the configuration of the pressure-volume loops in the animals receiving higher mechanical power levels at the end of the experiment.

Vassalli et al. used a porcine model to determine the effects of changes in mechanical power and the effects of changes in tidal volume, respiratory rate, and PEEP, using adjustments to maintain a similar or iso-mechanical power in animals with different respiratory parameters. 5
In the iso-mechanical power studies, the tidal volume was twice the functional residual capacity, the respiratory rate was 40 breaths/min, and the PEEP level was 25 cm of water. The mechanical power levels were 15 and 30 J/min, and the treatment protocol was 48 hours. They found that the lung weight, wet-to-dry ratio, and histologic scores were similar regardless of the ventilatory strategies and the power levels. The high PEEP level group had larger changes in hemodynamics and required increased fluid administration. The authors suggest that understanding ventilator-induced lung injury requires an assessment of all relevant lung parameters, including tidal volume, respiratory rate, and PEEP level. There were no differences in lung histology in animals in the 2 power groups, but it is possible that 15 J/min is high enough to cause a significant lung injury.

Critique of Mechanical Power as a Gauge for Lung Injury
The work of breathing, whether expressed as energy per breath or as energy per time, i.e., mechanical power, is a cause of lung injury, an indicator of lung injury, or both. As lungs become stiffer, more energy is required to inflate them. At one point, it became fashionable to adjust ventilator knobs in such a way as to minimize the work of breathing. However, that strategy ignored some simple truths. If the goal of ventilator strategy were to minimize the work of breathing, then every patient should be on neuromuscular blockade 24/7. If the goal of ventilation were to minimize the work of breathing, the clinician should stop the ventilator completely, a ridiculous suggestion. The goal of ventilation is gas exchange, not some exercise in minimum heat transfer. Decreases in tidal volume require increases in rate to maintain equal levels of gas exchange. The clinician can reduce the work required for each breath at the expense of increasing mechanical power. Is this good or bad? The lack of knowledge of the underlying mechanisms leading to lung injury limits any conclusions about the correct answer to this question. More basic observational evidence is needed to define the power thresholds which cause lung tissue temperatures to rise, which would seem necessary for any mechanism of injury based on mechanical power. Associations of increased mechanical power with poor outcomes must somehow correct for the difficulty that worse disease requires greater mechanical power to inflate the lungs and achieve adequate gas exchange. Consideration of mechanical power is useful with these limitations in mind. Delivering mechanical insults that cause harm more frequently over time will lead to greater injury than the same insult delivered less frequently. Once thresholds for harm have been exceeded, increasing either rate or energy per breath will be harmful, so practitioners should be careful not to decrease one parameter while increasing the other parameter without limit.

Marini and co-authors have written a succinct review of the mechanical factors potentially causing ventilator-induced lung injury. 17
They focus on both static and dynamic contributors to injury. Important considerations include the reduced size of the injured lung, the heterogeneous distribution of the injury, the heterogeneous pathologic processes involved in the injury, the energy applied to the lung per cycle and per unit time, and the fact that over time the distribution of absorbed energy in the lung likely opens up new areas in the damaged lung which are then at risk for mechanical trauma. Their conclusions provide eight statements and questions which illustrate the complexity of any analysis trying to understand ventilator-induced lung injury and suggest that more studies are needed to understand the utility of mechanical power measurements in patients with acute respiratory failure.

Conclusions
Mechanical ventilation provides lifesaving support for patients with acute respiratory failure. However, the pressures and volumes required to maintain gas exchange can cause ventilator-induced lung injury. The current approach to mechanical ventilation involves attention to both tidal volume and airway pressures, in particular plateau pressures and driving pressures. The ventilator provides energy to overcome airway resistance and to inflate alveolar structures and the chest wall. This energy delivered to the respiratory system per unit time equals mechanical power, and calculation of mechanical power provides a composite number which integrates pressures, volumes, and respiratory rates. Increased levels of mechanical power have been associated with tissue injury in animal models. In patients, mechanical power can predict outcomes, such as ICU mortality, when used in multivariable analyses. Increases in mechanical power during the first day of ventilation have been associated with worse outcomes. Mechanical power calculations can be used in patients on noninvasive ventilation, and measurements of mechanical power have been used to compare ventilator modes. This calculation requires measurement of the area in a hysteresis loop. Alternatively, simplified formulas have been developed to provide this calculation. However, this information is not available on most ventilators, and clinicians will need to make this calculation. In summary, calculation of mechanical power provides an estimate of the energy requirements for mechanical ventilation based on a composite of factors, including airway resistance, lung elastance, respiratory rate, and tidal volume. Manufacturers should add mechanical power calculations to the software of new ventilators.

Figure 1. Graphical illustration of work or energy per breath. This is the traditional view of breath cycles in medicine and physiology. In this view, work or energy of movement in the plane is the area or integral to the LEFT of the curve. Physics and thermodynamics discussions display volume as the x-axis and pressure as the y-axis, so work is the integral under the curve. A breath cycle moves from A to B to C during inspiration and from C to D to A during exhalation. The gray area is the energy recovered as elastic tissues relax during exhalation. The gray + blue areas represent the work or energy required to inflate the lung during inspiration. The blue elliptical area represents the net work required to be added to the respiratory system for each breath. This is also the net energy per breath that must be dissipated as heat.
2024-02-02T16:19:19.191Z
2024-01-29T00:00:00.000
{ "year": 2024, "sha1": "9e84045bfb20858485ad41a00abc919313913135", "oa_license": "CCBYSA", "oa_url": "https://pulmonarychronicles.com/index.php/pulmonarychronicles/article/download/1275/2727", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4630aa2ece480d635474bea2ebae8a5418578111", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [] }
13485018
pes2o/s2orc
v3-fos-license
Endocannabinoid-binding CB1 and TRPV1 receptors as modulators of sperm capacitation
Mammalian spermatozoa reach the ability to fertilize only after they complete a complex series of physical-chemical modifications, the capacitation. Recently, the endocannabinoid-binding type-1 cannabinoid receptor (CB1) and transient receptor potential vanilloid 1 (TRPV1) channel have been proposed to play a key role in the control of capacitation. In particular CB1, acting via a Gi protein/cAMP/PKA pathway, maintains low cAMP levels in the early stages of the post-ejaculatory life of male gametes. In this way it promotes the maintenance of membrane stability, thus avoiding the premature fusion of the plasma membrane (PM) and outer acrosome membrane (OAM), which is mandatory for the exocytosis of the acrosome content. TRPV1, on the contrary, becomes active during the latest stages of capacitation, and allows the rapid increase in intracellular calcium concentration that leads to the removal of the F-actin network interposed between PM and OAM, leading to their fusion and, ultimately, to the acrosome reaction.

Mammalian spermatozoa, immediately after ejaculation, are unable to fertilize the oocytes. In fact, male gametes acquire full fertilizing competence only after residing for a period that ranges from hours to days (depending on the species) within the female genital tract. Here, after the removal of seminal plasma, a complex series of morpho-functional modifications, collectively known as "capacitation," occurs due to the interaction of sperm cells with the female environment. The capacitation process is completed when sperm cells become able to recognize the oocyte and to extrude the content of the acrosomal vesicle (acrosome reaction, AR), thus penetrating the zona pellucida (ZP) and reaching the oocyte membrane. Since their discovery in the 1950s, the molecular mechanisms involved in capacitation and AR have attracted much attention, with a particular interest in the well-balanced dialog between activating and inhibiting factors that drive the modification of the whole biochemical asset and signal transduction machinery. Recently, it has been proposed that the endocannabinoid system (ECS) could play a key role in the control of sperm physiology.
[1][2][3] In line with this hypothesis, the relevance of ECS in the physiopathology of male reproduction is supported by its conservation along the evolutionary axis, from echinoderms (sea urchin 4) to amphibians (frog 5) and mammals (mouse, 5 bull, 6 boar, 2 human 1). Also the finding that the assumption of tetrahydrocannabinol (THC), the main psychoactive ingredient of cannabis (Cannabis sativa), negatively affects sperm motility, metabolism and fertility, 7,8 has further supported this view. In this context, we have recently proposed that the endocannabinoid-binding type-1 cannabinoid receptor (CB1) 9 and transient receptor potential vanilloid 1 (TRPV1) 10 channel could participate in the modulation of spermatozoa maturation (Fig. 1). In particular CB1, a Gi/o protein coupled receptor, seems to be involved in an integrated dialog with bicarbonate that finely regulates membrane dynamics. In early stages of post-ejaculatory life (as happens in the uterus), sperm cells are exposed to a high concentration of endocannabinoids and to low levels of bicarbonate. As a result, the intracellular concentration of cAMP is maintained low through the stimulation of CB1, which inhibits the activity of the trans-membrane isoform of adenylyl cyclase (tmAC). In this state, the membranes are in a highly stable asymmetrical organization: the aminophospholipids phosphatidylserine (PS) and phosphatidylethanolamine (PE) are concentrated in the inner leaflet, and the choline phospholipids sphingomyelin (SM) and phosphatidylcholine (PC) in the outer leaflet. This asymmetry is established and maintained by the action of several translocating enzymes with differing phospholipid specificities, the activity of which is modulated by PKA-dependent phosphorylation. As the spermatozoa progress along the female genital tract, they are gradually exposed to lower concentrations of endocannabinoids and higher concentrations of bicarbonate (oviduct). 11 This condition is associated with the migration of CB1 to the equatorial region of the sperm, where its activity progressively decreases. The reduced CB1 activity and the presence of high levels of bicarbonate, which activate a soluble isoform of adenylyl cyclase (sAC), cause a rise in intracellular cAMP levels.
Interestingly, an autocrine/paracrine inhibitory feedback seems to exist between intracellular cAMP levels and CB1 localization. In fact, CB1 modulates the endogenous tone of cAMP and, in turn, this second messenger seems to control receptor translocation. Over time, the increased cAMP, acting via a PKA-dependent pathway, leads to PM phospholipid redistribution and to the rupture of the inner and outer leaflet asymmetry ("lipid scrambling"). This event causes increased sperm membrane fluidity and disorder, and sustains a cholesterol efflux, driven by soluble protein acceptors, from the anterior sperm head. Concomitantly, the antigenic mosaic modifies its organization in order to allow the specific localization of molecules involved in signal transduction, as well as the reorganization of membrane microdomains (i.e., portions that contain larger amounts of cholesterol, sphingomyelin, gangliosides, phospholipids with saturated long-chain acyl chains, and proteins such as GPI-anchored proteins, caveolin and flotillin), 12,13 where the transduction signaling machinery is segregated. As a consequence, the plasma membrane (PM) and outer acrosome membrane (OAM) progressively increase their ability to fuse with each other (the so-called "fusogenicity"), which is mandatory for the completion of AR. A result of this integrated dialog is the promotion of a capacitative status of spermatozoa and the control of the PM and OAM ability to fuse at the right time, thus avoiding lack of responsiveness to ZP and premature loss of acrosome integrity. The latter event can be prevented, during capacitation, by the calcium-dependent polymerization of G-actin, which leads to the formation of an F-actin network that acts as a diaphragm between PM and OAM. This structure has important functions in signal transduction, 14 and acts as a mechanical shelter against the premature fusion of sperm head membranes. 15 Its removal at the right moment (i.e., when the physiological stimulus is received) is controlled by the endocannabinoid/endovanilloid-binding transient receptor potential vanilloid 1 (TRPV1) channel. The latter protein, in fact, is inactive at early stages of capacitation, and is located over the post-equatorial area of the sperm head. As the acquisition of fertilizing ability progresses, TRPV1 translocates from the post-equatorial to the anterior region of the sperm head, thus becoming active. TRPV1 is a non-selective cationic channel, and its activation triggers a membrane depolarization wave and, as a consequence, the opening of voltage-operated calcium channels (VOCC). Then, the subsequent fast elevation of intracellular calcium concentration causes several biochemical events and, particularly, the depolymerisation of F-actin and the disappearance of the actin network. At this time, the contact between PM and OAM, and thus AR, becomes possible. In conclusion, our present investigation has improved our understanding of the role of the ECS in the complex events leading to sperm-oocyte recognition. In the female genital tract, endocannabinoid-binding receptors seem a crucial element to modulate sperm membrane response to environmental stimuli, and to regulate sperm functioning in a spatio-temporal manner. Besides the possible physiological role of these receptors in controlling sperm acquisition of fertilizing ability, it is tempting to suggest that they may represent a novel platform on which new diagnostic and therapeutic strategies may be developed to treat male infertility and to improve sperm management and storage.

Figure 1. (Early events) The slowly increasing [Ca2+]i allows G-actin polymerization, thus creating a diaphragm between PM and OAM (2). cAMP concentration is maintained at a low level by CB1, located in the post-equatorial area, which actively inhibits tmAC, and by a low intracellular concentration of bicarbonate (A). (Late events) Progressively, TRPV1 translocates to the acrosomal area and becomes active (1). Its opening triggers a membrane depolarization wave (2) and, as a consequence, the recruitment of VOCCs which, in turn, promote a dramatic increase in [Ca2+]i (3). The latter event causes a fast depolymerization of the F-actin network (4), thus allowing the contact between PM and OAM. At the same time, CB1 migrates to the equatorial area and becomes inactive (A). This event, along with the markedly increased intracellular concentration of bicarbonate (B), promotes the rise in cAMP levels (C) that, via a PKA-dependent pathway, causes the activation of lipid scrambling and increases membrane fusogenicity (D).
2018-04-03T00:36:37.395Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "ca27c8009c78ef5061ffeda69e01e0ada5ecc4ee", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4161/cib.18118", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "27b75fde1a7db8a13e39824f7d0f99359056e6a6", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
220977714
pes2o/s2orc
v3-fos-license
Mobile Phone Addiction and Risk-Taking Behavior among Chinese Adolescents: A Moderated Mediation Model
Objectives: The mobile phone (MP) is an indispensable digital device in adolescents' daily lives in the contemporary era, but being addicted to MPs can lead to more risk-taking behavior. However, little is known about the mediating and moderating mechanisms underlying this relation. To address the gaps in the literature, the present study examined the idea that MP addiction is associated with reduced self-control, which further associates with increased risk-taking behavior. In addition, this study also investigated the moderation effect of adolescent sex in the association between MP addiction and self-control. Methods: A three-wave longitudinal study, each wave spanning six months apart, was conducted in a sample of Chinese adolescents (final N = 333, 57.4% girls). Results: Results of the moderated mediation model suggest that after controlling for demographic variables and baseline levels of self-control and risk-taking behavior, MP addiction at T1 positively predicted increased risk-taking behavior at T3 through reduced self-control at T2 for girls but not for boys. Conclusions: Theoretically, these findings contribute to the understanding of the working processes in the association between MP addiction and risk-taking behavior in adolescents. Practically, the results imply that boosting self-control appears to be a promising way to reduce girls' risk-taking behavior, particularly for those who are addicted to MPs.

Introduction
Mobile phones (MPs) are ubiquitous in the contemporary era. With the advancement of high-speed internet, MPs have become more versatile and play a significant role in people's daily lives. According to a recent report, China's MP users reached 897 million, an increase of 4.2 million compared to 2018 [1]. Adolescents constitute a major proportion of MP users in China. Although the appropriate use of MPs (e.g., looking up useful information and maintaining positive social ties) can be beneficial to adolescents (e.g., feeling higher subjective well-being) [2,3], being addicted to an MP is associated with a wide array of undesirable outcomes in adolescents [4][5][6]. Among others, a salient undesirable consequence associated with MP addiction is risk-taking behavior [7][8][9][10]. MP addiction is a type of addiction to technology, and it can be defined as the uncontrolled or excessive use of mobile phones, with an inability to control craving, feelings of anxiety, withdrawal, and productivity loss as symptoms [11]. Although the association between MP addiction and risk-taking behavior in adolescents has been well documented, scant research has examined how and for whom MP addiction is associated with risk-taking behavior. Examining these issues may shed light on the intervention and prevention of adolescents' risk-taking behavior.

The Mediation Effect of Self-Control
Self-control is defined as the ability of individuals to make an effort to overcome impulses and automatic reactions, and to support the pursuit of long-term goals [32,33]. As a vital psychological function, self-control is associated with a number of positive outcomes, including less risk-taking behavior [32][33][34]. The general theory of crime postulates that individuals with low self-control are inclined to be short-sighted and impulsive, which is a core cause of young people's delinquency [35].
Research has found that adolescents with low self-control are more likely to engage in excessive drinking [36,37], substance abuse [38], gambling [39,40] and other risk-taking behavior. In addition, the strength model of self-control posits that self-control resource depletion in the previous stage limits the availability of self-control for the next stage, which may increase the likelihood of the occurrence of risk-taking behavior [41,42]. Self-control may play a "bridge" role between MP addiction and adolescents' risk-taking behavior. There could be different pathways from mobile phone addiction to low self-control. On the one hand, MP addiction may reduce cognitive control, distract attention, and make the cognitive control system "lazy" in adolescents, so that they prefer intuitive cognitive processing [27,28,43]. For instance, in a sample of 1721 adolescents, Hong et al. (2020) found that MP addiction leads to cognitive failures. On the other hand, MPs may provide immediate stimulation and feedback that may activate the socioemotional system, rendering adolescents vulnerable to instant gratification and short-term rewards. Individual differences in low self-control and the temporary depletion of self-control resources due to MP addiction can render adolescents' cognitive resources insufficient to override the tendencies of seeking novelty and excitement, which may be further associated with more risk-taking behavior. On these bases, we assume that MP addiction can indirectly affect adolescents' risk-taking via reduced self-control.

The Moderation Effect of Sex
Previous research has revealed sex differences in the pattern of mobile phone use [15,16]. For instance, girls are more prone to use MPs for social networking, entertainment and shopping [44,45], while boys are more likely to use MPs for work and games [15]. In this sense, compared to boys, girls are considered to be more emotionally involved when using MPs, experience more emotional shifts, and have higher social motivation [15]. In addition, boys are prone to engage in more risk-taking behavior than girls during adolescence [46], because boys have higher sensation-seeking tendencies and lower impulse control [47,48]. As a relatively malleable personal characteristic, self-control can be affected by the environment. Compared to boys, girls are more likely to be susceptible to external factors (e.g., MP addiction) [13]. In line with this, MP addiction could suppress girls' self-control rather than boys'. Therefore, we assume that sex would moderate the effect of MP addiction on adolescents' self-control.

The Present Study
Taken together, this three-wave longitudinal study, with each wave spanning six months apart, investigates the association between MP addiction and risk-taking behavior, as well as the underlying mechanisms, in a sample of Chinese adolescents. Specifically, we examine the idea that MP addiction would be associated with increased risk-taking behavior through reduced self-control. Moreover, we examine the moderating role of sex (see Figure 1).
In sum, we hypothesized that: (1) MP addiction would be positively related to adolescent risk-taking behavior; (2) self-control would mediate the relation between MP addiction and risk-taking behavior; (3) sex would moderate the effect of MP addiction on adolescents' self-control, with the negative effect of MP addiction on self-control being stronger for girls than for boys; and (4) sex would moderate the mediation effect of self-control, with the mediation effect of self-control being more pronounced for girls than for boys. Combining all these hypotheses results in a moderated mediation model (Figure 1).
Participants and Procedures

The data were collected from a public middle school in a large city in southern China. All the procedures involving human participants were reviewed and approved by the research ethics committee of the School of Education at Guangzhou University (Protocol Number: GZHU2019018). Written consent forms from the parents and oral assent from the adolescents were obtained before data collection at each wave. At each wave, two trained research assistants hosted the survey, and the participants completed the questionnaires during regular class hours in the classroom. All the participants received a small gift worth 15 RMB (approximately 2.5 US dollars) after completing the questionnaires each time. A total of 412 parents provided consent for their children's participation. Finally, 399 10th graders (M = 15.37, SD = 0.52, 52.1% girls) participated in the first wave of data collection (Time 1, T1). Of the 399 adolescents, 353 (attrition rate = 11.53%) and 386 (attrition rate = 3.26%) participated in the assessments at Time 2 (T2) and Time 3 (T3), respectively. The time interval between each wave of data collection was six months. Detailed demographic characteristics of the T1 sample are presented in Table 1.

MP Addiction at T1

We used the Mobile Phone Addiction Index Scale (MPAI) [49] to measure the participants' MP addiction at T1. This scale consists of 17 items rated on a five-point scale (from 1 = never to 5 = almost always). A mean score can be calculated by averaging all the items, with a higher score indicating a higher level of MP addiction. Sample items are "Your friends and family complained about your use of the mobile phone" and "You feel lost without your mobile phone". In the current study, the Cronbach's alpha of this scale was 0.97.

Self-Control at T1 and T2

We used the Chinese version of Tangney et al.'s (2004) Brief Self-Control Scale (BSCS) [50,51] to assess the participants' self-control at T1 and T2. This scale consists of 13 items rated on a five-point scale (from 1 = not like me at all to 5 = like me very much). A higher mean score indicates better self-control. Sample items are "I am good at resisting temptation" and "Sometimes I can't stop myself from doing something, even if I know it is wrong". In this research, the Cronbach's alpha of this scale at T1 and T2 was 0.79 and 0.80, respectively.

Risk-Taking Behavior at T1, T2, and T3

Risk-taking behavior was assessed with the 15-item Adolescent Risk-Taking Questionnaire (ARQ) [17] from T1 to T3. Adolescents reported the frequency of performing various risk-taking behaviors (e.g., unprotected sex, driving/cycling after drinking) on a five-point Likert scale (from 0 = never done to 4 = done very often). A mean score was calculated by averaging all the items. This measure has been translated into Chinese and demonstrated to be valid and reliable in Chinese samples [18]. In this study, the Cronbach's alpha of this scale at T1, T2, and T3 was 0.96, 0.81, and 0.76, respectively.
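Since every scale above is scored as an item mean and internal consistency is reported as Cronbach's alpha throughout, a minimal Python sketch of both computations follows. The randomly generated data matrix is purely illustrative (the original analyses were run in SPSS):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative: 13 BSCS items rated 1-5 by 399 respondents (random data,
# so alpha will be near 0 here rather than the 0.79-0.80 reported above)
rng = np.random.default_rng(0)
bscs = rng.integers(1, 6, size=(399, 13)).astype(float)
scale_score = bscs.mean(axis=1)                 # the mean score used in the paper
print(round(cronbach_alpha(bscs), 2))
```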
Covariates at T1

The child's age, only child at home or not (0 = yes, 1 = no), parents' employment status (1 = freelance, 2 = part-time job, 3 = full-time job), and parents' educational levels (1 = junior middle school and below, 2 = high school degree, 3 = college degree, 4 = bachelor's degree, 5 = master's or doctoral degree) were included as covariates, since prior studies have found significant associations between these demographic variables and risk-taking behavior [52,53].

Data Analyses

First, descriptive statistics and bivariate correlations were computed using SPSS 22.0 (IBM, Armonk, NY, USA) to examine the central tendencies of and associations among the variables of interest. Second, structural equation modeling (SEM) was performed using Mplus 7.0 (Muthén & Muthén, Los Angeles, CA, USA) to test the hypothesized moderated mediation model. Missing data were handled with full information maximum likelihood estimation (FIML) [54]. In this model, T1 MP addiction was the independent variable, T2 self-control was the mediator, T3 risk-taking behavior was the outcome, and sex was the moderator. We also controlled for the baseline levels of self-control at T1 and risk-taking behavior at T1 and T2, as well as the effects of the covariates on the outcome (i.e., T3 risk-taking behavior). Given that the bootstrapping technique has several advantages over traditional approaches to examining mediation models, such as higher statistical power [55], we used bootstrapping (N = 5000) and its 95% confidence intervals to judge the significance of the mediation; a mediation effect is deemed significant if the 95% confidence interval excludes 0. The following indices were used to evaluate the overall model fit [56]: a nonsignificant chi-square statistic (χ2), the comparative fit index (CFI), the root mean square error of approximation (RMSEA) [57] with its 90% confidence interval (CI), and the standardized root mean square residual (SRMR). However, given that the sample size of the current study is large and the χ2 statistic is sensitive to sample size, a significant χ2 statistic was expected.
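To make the bootstrapping step concrete, the sketch below estimates the a-path (T1 MP addiction to T2 self-control) and b-path (T2 self-control to T3 risk-taking behavior) by OLS and builds a percentile 95% CI for the indirect effect a*b from resampled data. It is a deliberately bare-bones stand-in for the Mplus SEM (no FIML, no latent variables, and only a subset of the controls), and all variable names and data are illustrative assumptions:

```python
import numpy as np

def ols_slope(y, X):
    """Coefficient vector of y ~ X, where X already includes an intercept column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def boot_indirect(mpa1, sc1, sc2, rtb1, rtb3, n_boot=5000, seed=0):
    """Percentile bootstrap for the indirect effect a*b.

    a: MP addiction (T1) -> self-control (T2), controlling for T1 self-control.
    b: self-control (T2) -> risk-taking (T3), controlling for T1 risk-taking
       and T1 MP addiction (the direct path c').
    """
    rng = np.random.default_rng(seed)
    n, one = len(mpa1), np.ones(len(mpa1))
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)            # resample cases with replacement
        Xa = np.column_stack([one, mpa1[idx], sc1[idx]])
        a = ols_slope(sc2[idx], Xa)[1]
        Xb = np.column_stack([one, sc2[idx], mpa1[idx], rtb1[idx]])
        b = ols_slope(rtb3[idx], Xb)[1]
        est[i] = a * b
    lo, hi = np.percentile(est, [2.5, 97.5])
    return est.mean(), (lo, hi)                # significant if the CI excludes 0

# Illustrative demo on synthetic data
rng = np.random.default_rng(1)
n = 333
mpa1, sc1, rtb1 = rng.normal(size=(3, n))
sc2 = -0.1 * mpa1 + rng.normal(size=n)
rtb3 = -0.1 * sc2 + rng.normal(size=n)
print(boot_indirect(mpa1, sc1, sc2, rtb1, rtb3, n_boot=500))
```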
Descriptive Statistics and Bivariate Correlations

Means, standard deviations, and bivariate associations are shown in Table 2. As can be seen in the table, MP addiction at T1 was negatively related to T1 and T2 self-control (r = −0.43 and −0.35, ps < 0.001), but positively associated with T1/T2/T3 risk-taking behavior (r = 0.22-0.37, ps < 0.001). Adolescent self-control and risk-taking behavior were also negatively correlated, within and across time points (r = −0.22 to −0.30, ps < 0.001). According to Cohen's (1992) standard [58], the effect sizes of these correlation coefficients were small to medium.

As shown in Figure 2 and Table 3, T1 MP addiction was not directly related to T3 risk-taking behavior at a statistically significant level. Nevertheless, T1 MP addiction was significantly related to T2 self-control (B = −0.08, SE = 0.03, p = 0.009). Moreover, T2 self-control was significantly related to T3 risk-taking behavior (B = −0.11, SE = 0.04, p = 0.003). More importantly, T2 self-control significantly mediated the association between T1 MP addiction and T3 risk-taking behavior (indirect effect B = 0.01, 95% CI = [0.002, 0.021]), over and above the effects of the baseline levels of self-control and risk-taking behavior and the covariates. Regarding the moderation effect of sex, there was a significant interaction effect between MP addiction and adolescent sex on T2 self-control (B = −0.13, SE = 0.06, p = 0.02). As shown in Figure 3 and Table 4, the association between T1 MP addiction and T2 self-control was significant only for girls (B = −0.14, SE = 0.04, p < 0.001), but not for boys (B = −0.03, SE = 0.04, p = 0.56). Moreover, we found that the mediation effect of T2 self-control was significant for girls (B = 0.018, SE = 0.007, 95% CI = […]) but not for boys.
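Because sex moderates only the a-path, the conditional indirect effect reported above is (a1 + a3 × sex) × b, where a3 is the interaction coefficient. A tiny sketch of that computation follows; the coefficients and the 0/1 sex coding are illustrative assumptions, not the exact values from Tables 3 and 4:

```python
def conditional_indirect(a1, a3, b, sex):
    """Indirect effect (a1 + a3 * sex) * b at a given sex code (0 or 1)."""
    return (a1 + a3 * sex) * b

a1, a3, b = -0.05, -0.10, -0.12  # illustrative coefficients only
for sex, label in [(0, "boys"), (1, "girls")]:
    a_path = a1 + a3 * sex
    print(label, "a-path:", round(a_path, 2),
          "indirect:", round(conditional_indirect(a1, a3, b, sex), 4))
```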
Discussion

MPs have become an inseparable part of adolescents' lives, but MP addiction can be related to various undesirable outcomes such as high-stakes risk-taking behavior. To examine how and for whom MP addiction is related to risk-taking behavior in adolescents, this three-wave longitudinal study examined the mediating role of self-control and the moderating role of sex in a sample of Chinese high school students. The results reveal that adolescents' MP addiction is related to increased risk-taking behavior via self-control, but this mediation pathway is significant only for girls, not for boys.

MP Addiction and Adolescents' Risk-Taking Behavior

Prior studies have found that MP addiction is related to risky driving behavior [25,59]. The current study adds to this line of literature. Supporting our first hypothesis, this study reveals that MP addiction is related to other forms of risk-taking behavior in addition to risky driving. More importantly, using a three-wave longitudinal design and controlling for baseline levels of risk-taking behavior, our results indicate that MP addiction increases risk-taking behavior over time, although the effect is indirect rather than direct. MPs can be used for multiple content categories, such as gathering information, playing games, and maintaining social networks. Different content categories can be related to different consequences. For example, exposure to risk-taking photos posted on the internet may increase adolescents' acceptance of and propensity for engaging in risk-taking behavior [60,61]. However, the current study did not examine which content category is most related to risk-taking behavior. This could be a promising avenue for future research to explore.

The Mediation Effect of Self-Control

Confirming the second hypothesis, the results of the mediation model show that self-control mediates the effect of MP addiction on risk-taking behavior. The first part of the mediation process (i.e., mobile phone addiction → self-control) supports flow theory, which suggests that the immediate gratification and intrinsic rewards induced by mobile phones lead individuals to lose themselves in electronic devices [62]. The perceived presence of an MP is a temptation even when people are not using it, because it distracts attention and makes it more difficult to focus on a task [9,63]. MP addiction may lead adolescents to develop a habit of checking their mobile phone frequently in daily life, which may undermine adolescents' self-control [64] and render them vulnerable to immediate rewards [8]. The second part of the mediation process (i.e., self-control → risk-taking behavior) is consistent with previous findings [42,65]. Self-control is a crucial psychological function associated with numerous life outcomes [34,35]. The mediation model suggests that MP addiction may increase risk-taking behavior by limiting one's self-control. We call this a "restraining path," such that MP addiction increases risk-taking behavior by restraining protective factors such as self-control. Although the current study does not provide a direct examination, it is worth noting that, according to the dual systems model, there may also be a "promotive path" that could explain how MP addiction increases risk-taking behavior. For instance, MP addiction may increase adolescents' sensitivity to incentives and rewarding stimuli [10,21], and thus adolescents may meet their desires by engaging in more risk-taking behavior.
As this study does not examine this assumption directly, future research may examine other mediators (e.g., increased reward sensitivity) in the association between MP addiction and risk-taking behavior.

The Moderation of Sex

Confirming the third and fourth hypotheses, our results reveal that the mediation of self-control in the association between MP addiction and risk-taking behavior is significant only for girls. This could be because girls' self-control is more sensitive to environmental stimuli (e.g., cell phones) and more malleable [13] compared to boys', and thus MP addiction imposes a more negative effect on self-control in girls than in boys. As discussed above, the current study provides support for the "restraining path" and finds that this working mechanism operates only among girls. Specifically, given that girls' self-control is more malleable and matures earlier [13], girls' self-control can serve as a protective factor against risk-taking behavior. However, MP addiction, as a risk factor, may restrain girls' self-control, thus increasing the likelihood of risk-taking behaviors. In contrast, boys generally have higher sensation seeking and impulsivity than girls during adolescence, which suggests that boys' self-control may be less able to prohibit risk-taking behaviors [46]. In this case, the "restraining path" of self-control plays little role in the association between MP addiction and the increase in risk-taking behaviors in boys. Thus, it remains an open question whether the "promotive path" works equally well in both sexes. Future research may pursue this line of inquiry to further clarify the working mechanisms underlying the "MP addiction-risk-taking behavior" link.

Implications

This study bears two implications for the prevention of and intervention in adolescents' risk-taking behavior. On the one hand, the results show that MP addiction may increase risk-taking behavior over time. This implies that addressing MP addiction may be a crucial way to reduce the occurrence of risk-taking behavior in adolescents. On the other hand, reduced self-control significantly mediates the association between MP addiction and risk-taking behavior in girls. This suggests that using evidence-based programs (e.g., mindfulness training) to boost girls' self-control may be useful in reducing girls' risk-taking behavior.

Limitations

We must acknowledge that this study has several limitations. First, only self-reported data were collected, and thus the associations could be inflated by common method bias. To enhance the internal validity of the results, future research may use multiple measurement modalities to triangulate each variable. In addition, as discussed above, this study does not reveal which content category of MP use is related to risk-taking behavior. Future research may pursue this issue to achieve a fuller understanding of the relationship between MP addiction and risk-taking behavior in adolescents. Finally, family relationships have been found to be associated with adolescents' screen behaviors and risk behaviors [11]. Future research should take family relationships into further consideration.

Conclusions

Taken together, this study reveals that MP addiction is a risk factor for risk-taking behavior via reduced self-control in adolescent girls. These findings bear important implications for the prevention of and intervention in adolescents' risk-taking behavior and for the promotion of positive youth development.
2020-08-06T09:05:30.073Z
2020-07-29T00:00:00.000
{ "year": 2020, "sha1": "b2561a0c9ab40734771a1c4e40579c426f262708", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/17/15/5472/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d7dd344ac68a9651fa63ae4e56cfa522e5c4673b", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
205326465
pes2o/s2orc
v3-fos-license
A Novel Histone H4 Arginine 3 Methylation-sensitive Histone H4 Binding Activity and Transcriptional Regulatory Function for Signal Recognition Particle Subunits SRP68 and SRP72

Background: Histone methylation is believed to recruit specific histone-binding proteins. Results: We identified SRP68/72 heterodimers as major nuclear proteins whose binding of the histone H4 tail is inhibited by H4R3 methylation. Conclusion: SRP68/72 are novel histone H4-binding proteins. Significance: This work uncovers a novel chromatin regulatory function for SRP68/72 and suggests that histone arginine methylation may function mainly in inhibiting rather than recruiting effector proteins.

Arginine methylation broadly occurs in the tails of core histones. However, the mechanisms by which histone arginine methylation regulates transcription remain poorly understood. In this study we attempted to identify nuclear proteins that specifically recognize methylated arginine 3 in the histone H4 (H4R3) tail using an unbiased proteomic approach. No major nuclear protein was observed to specifically bind methylated H4R3 peptides. However, H4R3 methylation markedly inhibited the binding of two proteins to the H4 tail peptide. These proteins were identified as the SRP68 and SRP72 heterodimers (SRP68/72), components of the signal recognition particle (SRP). Only SRP68/72, but not the SRP complex, bound the H4 tail peptide. SRP68 and SRP72 bound the H4 tail in vitro and associated with chromatin in vivo. The chromatin association of SRP68 and SRP72 was regulated by PRMT5 and PRMT1. Both SRP68 and SRP72 activated transcription when tethered to a reporter via a heterologous DNA binding domain. Analysis of the genome-wide occupancy of SRP68 identified target genes regulated by SRP68. Taken together, these results demonstrate a role of H4R3 methylation in blocking the binding of effectors to chromatin and reveal a novel role for the SRP68/SRP72 heterodimer in chromatin binding and transcriptional regulation.
Like histone lysine methylation (12-14), histone arginine methylation can in principle regulate transcription either through in cis effects on other histone modifications and/or by serving as a histone code that influences the binding of histone-interacting effector proteins. In this regard, H4R3me2a catalyzed by PRMT1 has been shown to promote subsequent histone acetylation by CBP/p300 (7,15); this in cis effect explains at least in part the role of H4R3me2a in transcriptional activation. In support of the histone code hypothesis, an increasingly large number of proteins have been shown to specifically bind various methylated lysine residues in histone N-terminal tails and play diverse roles in epigenetic regulation (14,16,17). In contrast, so far only a few proteins, including Tudor domain-containing protein 3 (TDRD3), DNA methyltransferase 3a (Dnmt3a), the RNA polymerase-associated protein 1 (PAF1) complex, and p300/CBP-associated factor (PCAF), have been implicated in binding of methylated arginine residues in histone tails (18-21), and among them only the binding of H3R17me2a and H4R3me2a by TDRD3 is supported by biochemical and structural evidence (22). TDRD3 binds H3R17me2a and H4R3me2a via a Tudor domain that has been recognized as a structural motif for binding of arginine-methylated non-histone proteins (23). The limited number of arginine-methylated histone-binding proteins identified so far raises the possibility that a large number of arginine-methylated histone-specific effectors remain to be identified. Alternatively, it may underscore a major mechanistic difference in the action of arginine and lysine methylation.

The mammalian signal recognition particle (SRP) is a ribonucleoprotein complex composed of six SRP proteins (SRP9, SRP14, SRP19, SRP54, SRP68, and SRP72) and an RNA molecule known as 7S RNA or 7SL RNA (24,25). The SRP complex is conserved in evolution and plays a central role in the co-translational targeting of secretory and membrane proteins to the endoplasmic reticulum (ER). SRP binds nascent signal peptide sequences of proteins as they emerge from the ribosome. The resulting targeting complex then docks to the ER via interaction with the SRP receptor in a GTP-dependent manner (26). Previous studies have shown that SRP68 and SRP72 exist predominantly as a stable SRP68/72 heterodimer that is essential for SRP-mediated ER targeting of proteins (27).

In this study we used an unbiased proteomic approach to screen for proteins that bind specifically to H4R3me2s and H4R3me2a. Instead of identifying new methyl-H4R3-binding proteins, we found two proteins, SRP68 and SRP72, whose binding to the H4 tail was inhibited by arginine methylation. Our study illustrates a novel function of H4R3 methylation in inhibiting binding of chromatin effectors and reveals a novel transcriptional function for SRP68 and SRP72.

EXPERIMENTAL PROCEDURES

HeLa and 293T cell lines were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum. Transient transfections in 293T and HeLa cells were performed using Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. Luciferase reporter assays were performed essentially as described (30).

Isolation of Binding Proteins from HeLa Nuclear Extracts Using Biotinylated H4 Tail Peptides-Nuclear extracts were prepared from HeLa cells by the protocol of Dignam et al. (31).
The C-terminally biotinylated H4 tail peptides (amino acids 1-16), either unmodified or carrying H4R3me2a or H4R3me2s, were synthesized and purified by Beijing Scilight Biotechnology Ltd. Co. Purification of the corresponding H4 peptide-binding proteins from HeLa nuclear extracts was carried out essentially as described (32).

Mass Spectrometry and Western Blot Analysis-Both methods were performed as described previously (32).

Preparation of Cytosol, Nuclear Extract, and Chromatin and Chromatin Immunoprecipitation (ChIP)-Western Blot-To fractionate cellular contents into cytosol, nuclear, and chromatin fractions, cultured cells were collected by centrifugation and washed twice with ice-cold PBS. The pellets were resuspended in 2 packed cell volumes of solution A (20 mM Tris-HCl, pH 8.0, 50 mM NaCl, 1% Nonidet P-40, 1 mM DTT, and protease inhibitors), incubated on ice for 10 min, and centrifuged at 4000 rpm for 5 min at 4°C. The resulting supernatants were collected and designated as cytosol. The pellets were resuspended in solution A containing 0.4 M NaCl and incubated on ice for 20 min. The samples were centrifuged at 12,000 rpm for 10 min, and the supernatants were designated as nuclear extracts. The pellets were washed once with solution A, resuspended in 2 volumes of 1× SDS loading buffer, and designated as chromatin fractions. For ChIP-Western blot analysis, HeLa or 293T cells were treated with 1% formaldehyde for 15 min in culture medium. The cells were lysed as above, and the pellets containing nuclei were resuspended in solution A plus 3 mM CaCl2 and 5 units of micrococcal nuclease. After incubation on ice for 1 h, the samples were sonicated, and the soluble chromatin was prepared. Immunoprecipitation was carried out with or without the addition of anti-SRP68 antibody, and Western blot analyses were performed using the antibodies as indicated.

Immunofluorescent Staining and Cell Imaging with CFP Fusion Proteins-Subcellular localization analyses for endogenous and ectopically expressed SRP proteins were performed in HeLa or 293T cells. The interaction between SRP68 and the H4 tail, assessed by co-localization in DG44 CHO cells, was analyzed as described previously (34).

Chromatin Immunoprecipitation and High-throughput Sequencing-Chromatin immunoprecipitation assays were performed with or without immunoaffinity-purified SRP68 antibody using chromatin prepared from 293T cells. After immunoprecipitation, the purified DNAs were subjected to sequencing on an Illumina Genome Analyzer. SOAP 2.20 was used to align reads, allowing two mismatches for each sample, against the updated human genome database available online at the Human (Homo sapiens) Genome Browser Gateway. Sequences with greater than 83% identity were used for further analyses. To identify ChIP peaks, ChIP-seq data were analyzed using the MACS program (available online), with the SRP68 ChIP-seq data as input and the control ChIP-seq data as background. Default parameters were set for the human genome, and the p value was set to p < 0.00001 or p < 0.0001 for the identification of positive peaks. Cisgenome was used to annotate the peaks to their associated genes and to obtain their physical distribution in relation to promoters, exons, introns, and other features. SRP68 association was confirmed using ChIP followed by quantitative PCR analysis. The ChIP primers used are shown in supplemental Table S1.
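To make the peak-to-gene annotation step concrete, the sketch below assigns each peak to genes whose transcription start site (TSS) lies within 100 kb of the peak center, the window applied under "Results." It is a simplified stand-in for Cisgenome, and the data structures, coordinates, and gene names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Peak:
    chrom: str
    start: int
    end: int

@dataclass
class Gene:
    name: str
    chrom: str
    tss: int

def annotate_peaks(peaks, genes, window=100_000):
    """Map each peak to all genes with a TSS within `window` bp of its center."""
    by_chrom = {}
    for g in genes:
        by_chrom.setdefault(g.chrom, []).append(g)
    annotated = {}
    for p in peaks:
        center = (p.start + p.end) // 2
        annotated[(p.chrom, p.start, p.end)] = [
            g.name for g in by_chrom.get(p.chrom, [])
            if abs(g.tss - center) <= window
        ]
    return annotated

# Illustrative usage with made-up coordinates
peaks = [Peak("chr1", 1_000_000, 1_000_400)]
genes = [Gene("GENE_A", "chr1", 1_050_000), Gene("GENE_B", "chr1", 2_500_000)]
print(annotate_peaks(peaks, genes))  # GENE_A is within 100 kb; GENE_B is not
```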
RESULTS

H4R3 Methylation Blocks the Binding of SRP68/72 to the H4 Tail Peptide-In an attempt to identify proteins that specifically recognize the H4R3me2s and H4R3me2a codes, we employed an unbiased in vitro affinity purification approach using immobilized histone tail peptides and HeLa nuclear extracts. We have previously used this approach to identify chromatin effectors specific for methylated H3K4 or H3K9 (32,35,36). Three biotinylated H4 N-terminal tail peptides (amino acids 1-16) containing either no modification, R3me2s, or R3me2a were synthesized. These peptides were immobilized on streptavidin-agarose beads through a C-terminal biotin moiety and used for affinity purification of specific binding proteins from HeLa nuclear extracts. Bound polypeptides were resolved by electrophoresis on a 4-20% SDS-PAGE gel and visualized by silver staining (Fig. 1A). In multiple experiments, no prominent polypeptides that bound specifically to either the H4R3me2s or H4R3me2a peptides but not to the H4 peptide were observed. Instead, two polypeptides with molecular masses in the range of 70-75 kDa were reproducibly observed to be enriched in the control H4 peptide sample as compared with the H4R3me2s and H4R3me2a peptides (Fig. 1A). We sequenced the two protein bands by mass spectrometry and determined their identities to be SRP68 and SRP72. Subsequent Western blot analyses using antibodies specific for SRP68 and SRP72 confirmed that both proteins bound with higher affinity to the H4 peptide than to the H4R3me2s and H4R3me2a peptides (Fig. 1B). Thus, we identified SRP68 and SRP72 as novel H4 tail-binding proteins whose binding activity is inhibited by H4R3 methylation.

The SRP68/72 Heterodimer but Not the SRP Complex Binds to the H4 Tail-The observed binding of SRP68 and SRP72 to the H4 N-terminal tail peptide raised the question of whether these proteins bind the H4 tail as heterodimers or as part of the SRP complex (25,27). Western blot analysis demonstrated that, unlike SRP68 and SRP72, SRP9, SRP14, SRP19, and SRP54 did not bind the H4 peptide (Fig. 1B), suggesting that SRP68 and SRP72 bind the H4 tail peptide independently of the other SRP proteins. To substantiate this further, we utilized Superose 6 gel filtration chromatography to separate the SRP68- and SRP72-containing complexes in HeLa nuclear extract. Western blot analysis revealed at least two protein complexes that contained SRP68 and SRP72. The larger molecular weight complex cofractionated with SRP54 and probably represented the fully assembled SRP (Fig. 1C, lanes 3-5). A smaller complex(es) also contained SRP68/72 but lacked SRP54 (Fig. 1C, lanes 6-9). When these fractions were assayed for binding of the H4 tail peptide, we observed that only SRP68 in the smaller complex bound the H4 tail peptide (Fig. 1D). In addition, among all four histone tail peptides tested, SRP68 and SRP72 bound only the H4 tail peptide (Fig. 1E). The binding of SRP68 and SRP72 to the H4 peptide was insensitive to RNase A treatment (Fig. 1F), further supporting the conclusion that SRP68 and SRP72 bind in the form of heterodimers in the absence of 7SL SRP RNA. Taken together, these data provide compelling evidence that SRP68/72 heterodimers, but not the SRP complex, are the histone H4 tail-specific binding proteins.

SRP68 and SRP72 Associate with Chromatin in Vivo-SRP68 and SRP72 are well known for their exclusive protein targeting function within the SRP complex.
Our finding that SRP68 and SRP72 bind specifically to the unmodified H4 tail peptide suggests a new function of SRP68 and SRP72 in chromatin regulation. In agreement with previous reports (37,38), immunostaining of HeLa cells revealed a predominantly cytoplasmic localization for SRP9, SRP19, and SRP54 (Fig. 2A and data not shown). However, endogenous SRP68 in HeLa cells was detected predominantly in the nucleus, whereas SRP72 was present both in the nucleus and the cytoplasm (Fig. 2A). Similarly, we observed that ectopically expressed SRP68 and SRP72 were mainly nuclear in HeLa cells (Fig. 2B). To investigate if nuclear SRP68 and SRP72 associated with chromatin, cytosol, nuclear, and chromatin fractions were prepared from HeLa cells. A substantial amount of SRP68 and SRP72 was found to associate with chromatin (Fig. 2C). In contrast, SRP54 did not associate with chromatin under the same conditions (Fig. 2C). As markers for appropriate cellular fractionation, β-actin was detected only in the cytosol, whereas core histone H3 was detected only in the chromatin fraction (Fig. 2C). To test further the chromatin association of SRP68 and SRP72, HeLa cells were treated with 1% formaldehyde, and nuclei were prepared and subjected to chromatin digestion with micrococcal nuclease. After centrifugation to remove the insoluble nuclear pellets, the soluble chromatin-containing fraction was immunoprecipitated with anti-SRP68 antibody and assayed for the presence of core histones by Western blot analysis. Fig. 2D shows the presence of core histones H3 and H4 in immunoprecipitates of soluble chromatin obtained with anti-SRP68 antibody. Significantly, the chromatin that co-precipitated with SRP68 was devoid of H4R3me2s (Fig. 2D). This was in agreement with the in vitro peptide binding data showing that H4R3 methylation blocks the binding of SRP68/72 to the H4 tail. Similar results were observed when immunoprecipitation was performed with anti-SRP72 antibody and chromatin derived from HeLa cells (data not shown). To further demonstrate that SRP68 binds the H4 N-terminal tail within cells, we made use of a CHO cell line that contains a large number of Lac operator sequences stably integrated at a single chromosomal site (39). Expression of the control CFP-tagged Lac protein or CFP-Lac fused with a tandem H4 tail peptide (designated CFP-Lac-H4t) in these cells generated bright foci due to the binding of the integrated Lac sequences by the CFP-Lac fusion proteins. We observed co-localization of ectopically expressed HA-SRP68 with CFP-Lac-H4t but not the control CFP-Lac (Fig. 2E), indicating a tandem H4 tail-dependent interaction with SRP68. Similar results were observed for SRP72 (data not shown). Taken together, these data demonstrate that SRP68 binds the H4 tail peptide in cells and that a portion of the SRP68 and SRP72 proteins is intracellularly associated with chromatin.

SRP68 and SRP72 Directly Interact with the H4 Tail in an H4R3 Methylation-sensitive Manner-To investigate if SRP68 and SRP72 directly interact with the H4 tail, in vitro translated polypeptides were assayed for binding to the four histone tail peptides using pulldown assays. In vitro synthesized SRP68 and SRP72 bound the H4 but not the other histone tail peptides (Fig. 2F). Under the same conditions, SRP54 did not bind the H4 tail peptide (Fig. 2F). Using a series of SRP68 deletion mutants, we further mapped the H4 tail binding activity to the C-terminal region (amino acids 436-620) of SRP68, which is known to also bind SRP72 (Fig. 2G).
We observed that both the N-terminal region (amino acids 1-356) and the C-terminal region (amino acids 529-659) of SRP72 were able to bind the H4 tail peptide (Fig. 2H). To test if SRP68 and SRP72 bind the H4 tail peptide directly, we expressed and purified GST fusions of full-length SRP68 and SRP72 and their C-terminal H4 binding domains. In pulldown assays, these recombinant proteins bound specifically the H4 tail peptide (Fig. 2I). Furthermore, H4R3 methylation abolished the binding of recombinant SRP68 and SRP72 to the H4 tail (Fig. 2I, compare lane 5 with lane 4). Together these data demonstrate that both SRP68 and SRP72 bind directly to the H4 tail in an Arg-3 methylation-sensitive manner.

FIGURE 1. H4R3 methylation inhibits the binding of SRP68 and SRP72 heterodimers to the H4 tail peptide. A, an unbiased peptide pulldown assay revealed SRP68 and SRP72 as major nuclear proteins whose binding to the H4 tail peptide was inhibited by either H4R3me2a or H4R3me2s. The identities of the polypeptides were determined by mass spectrometry. B, HeLa nuclear extracts were subjected to pulldown assay as in A and analyzed by Western blot analysis using antibodies as indicated. Western blot analyses confirmed that SRP68 and SRP72 bound to the H4 tail peptide but not the H4R3me2a and H4R3me2s peptides. In addition, binding of the H4 tail peptide was detected for SRP68 and SRP72 but not other SRP proteins. The amounts of immobilized peptides employed in the pulldown reactions are shown by Coomassie Blue staining. C, HeLa nuclear extracts were fractionated on a Superose 6 gel filtration column, and the indicated fractions were analyzed by Western blot. Brg1, a subunit of the human SWI/SNF complex with a molecular mass of ~2 MDa, served as a control. D, the fractions of HeLa nuclear extracts derived from Superose 6 gel filtration were tested for binding to the H4 tail peptide by pulldown assay. The amounts of immobilized peptides in the pulldown reactions are shown by Coomassie Blue staining. E, the SRP proteins in HeLa nuclear extracts were tested for binding of all four histone tail peptides by in vitro pulldown and analyzed by Western blot using antibodies as indicated. F, the HeLa nuclear extracts were treated with or without RNase A first and then assayed for binding of the H4 tail peptide by in vitro pulldown assay.

FIGURE 2. SRP68 and SRP72 associate with chromatin in cells and directly bind to the H4 tail peptide in vivo and in vitro. A, the subcellular localization of SRP subunits in HeLa cells was analyzed by immunofluorescent staining using an antibody specific for each subunit. The nuclei were revealed by DAPI staining. Note that a predominantly nuclear localization of endogenous SRP68 and SRP72 was observed. In contrast, SRP54 was found primarily in the cytoplasm. B, HA-tagged SRP68 and SRP72 were transfected into HeLa cells, and the subcellular localization of these proteins was revealed by immunostaining using anti-HA antibody. Note that both HA-SRP68 and HA-SRP72 expressed in HeLa cells showed a predominantly nuclear localization. C, cellular fractionation of HeLa cells revealed the presence of SRP68 and SRP72 but not SRP54 in the chromatin fraction. HeLa cells were fractionated into cytosol (Cyto), nuclear (Nucl), and chromatin (Chro) fractions as described under "Experimental Procedures," and the presence of SRP subunits in each fraction was determined by Western blot analysis. β-Actin served as a cytosol marker, and core histone H3 served as a chromatin marker. D, ChIP followed by Western blot analysis revealed the association of SRP68 with soluble chromatin. HeLa cells were treated with 1% formaldehyde to cross-link chromatin-associated proteins to chromatin. The nuclei were prepared, and chromatin was released into the soluble fraction by micrococcal nuclease digestion. The soluble chromatin fraction was then subjected to immunoprecipitation with anti-SRP68 antibody followed by Western blot (WB) analysis using antibodies as indicated. Note that the chromatin co-precipitated with SRP68 was devoid of H4R3me2s. E, colocalization of HA-SRP68 with CFP-Lac-H4t but not the control CFP-Lac in DG44 CHO cells is shown. CFP-Lac-H4t contains a tandem H4 tail (amino acids 1-20) peptide. The bright focus with colocalization of CFP-Lac-H4t and HA-SRP68 is marked by an arrow. F, in vitro synthesized, [35S]Met-labeled SRP68 and SRP72 bound to the H4 but not other histone tails in in vitro pulldown assays. As a control for binding specificity, no binding was detected for SRP54 under the same conditions. The binding was revealed by autoradiography. G, mapping of the H4 tail binding domain of SRP68 using a series of SRP68 deletion mutants and a peptide pulldown assay is shown. The top panel illustrates the domain structure of SRP68. The SRP68 mutants used for the peptide pulldown assay were synthesized in vitro and labeled with [35S]Met. The binding was revealed by autoradiography. aa, amino acids; FL, full-length. H, mapping of the H4 tail binding domain of SRP72 using a series of SRP72 deletion mutants and a peptide pulldown assay was performed as above. I, the recombinant SRP68 and SRP72 bound preferentially to the H4 tail peptide. GST fusion proteins were purified and subjected to in vitro binding with the various histone tail peptides as indicated. The binding of recombinant proteins was revealed by Coomassie Blue staining.

PRMT5 Regulates SRP68/72 Chromatin Association and Subcellular Localization-Having established that H4R3 methylation blocks the binding of SRP68 and SRP72 to the H4 tail peptide and that SRP68 and SRP72 associate with chromatin in cells, we next investigated the effect of H4R3 methylation on the intracellular association of SRP68 and SRP72 with chromatin. For this purpose, we overexpressed FLAG-PRMT5 and its enzymatically defective mutant in 293T cells and analyzed the effect on SRP68 and SRP72 subcellular localization and chromatin association by cellular fractionation. We found that overexpression of wild-type (Fig. 3A, compare lanes 5 and 6 with lanes 2 and 3) but not the mutant PRMT5 (Fig. 3A, compare lanes 8 and 9 with lanes 2 and 3) reduced the levels of nuclear and chromatin-associated SRP68 and SRP72. Western blot analysis also detected increased levels of H4R3me2s in chromatin derived from FLAG-PRMT5- but not FLAG-PRMT5m-expressing cells (see Fig. 5A, compare lane 6 with lanes 3 and 9). As controls for proper cellular fractionation, β-actin was detected mainly in the cytosol and core histone H3 in the chromatin. These results indicate that PRMT5 regulates SRP68 and SRP72 chromatin association in an enzymatic activity-dependent manner. Furthermore, PRMT5 appears to promote nuclear-to-cytoplasmic translocation of SRP68 and SRP72. To further examine the effect of PRMT5 on SRP68 and SRP72 subcellular localization and chromatin association, we cotransfected FLAG-PRMT5 with HA-SRP68 or HA-SRP72 into HeLa cells and analyzed the subcellular localization of HA-SRP68 and HA-SRP72 by immunofluorescent staining.
Although HA-SRP68 was primarily nuclear in cells expressing HA-SRP68 alone, it was predominantly localized in the cytoplasm in cells that coexpressed wild-type but not mutant PRMT5 (Fig. 3B). Similar results were observed for SRP72 (Fig. 3C). Together with the cellular fractionation experiments described above, these results suggest that PRMT5 inhibits the binding of SRP68/72 to chromatin and sequesters SRP68 and SRP72 from the nucleus toward the cytosol in an enzymatic activity-dependent manner.

In our in vitro binding assays, both the H4R3me2a and H4R3me2s modifications inhibited the binding of SRP68 and SRP72 to the H4 tail peptide (Fig. 1, A and B). As H4R3me2a and H4R3me2s are known to have distinct transcriptional regulatory functions, we sought to determine whether overexpression of PRMT1, the enzyme that catalyzes the H4R3me2a modification, also influences the chromatin association and subcellular localization of SRP68 and SRP72. We thus overexpressed PRMT1 in 293T cells and carried out cellular fractionation experiments to determine the effect on SRP68 and SRP72 chromatin association and subcellular distribution. As shown in Fig. 3D, we found that overexpression of PRMT1 led to the dissociation of SRP68 and SRP72 from chromatin. Western blot analysis confirmed an increased H4R3me2a level upon ectopic expression of HA-PRMT1. This result is in agreement with our in vitro H4 tail peptide binding data showing that the H4R3me2a modification also inhibits binding of the H4 tail by SRP68 and SRP72.

FIGURE 3. PRMT5 and PRMT1 regulate SRP68 and SRP72 chromatin association. A, 293T cells were transfected with or without FLAG-tagged wild-type PRMT5 (F-PRMT5) or enzymatically inactive PRMT5 (F-PRMT5m), and 2 days after transfection the cells were collected and fractionated into cytoplasm (Cyto), nuclear (Nucl), and chromatin (Chro) fractions. The expression of F-PRMT5 and F-PRMT5m was confirmed by Western blot analysis. The presence of SRP68 and SRP72 in each fraction was determined by Western blot analysis. Note that expression of F-PRMT5 but not F-PRMT5m led to an ~2-fold increase of H4R3me2s. β-Actin served as a cytosol marker, and core histone H3 served as a chromatin marker. B, HeLa cells were cotransfected with F-PRMT5 or F-PRMT5m together with HA-SRP68. The effect of ectopically expressed PRMT5 on HA-SRP68 subcellular localization was analyzed by immunofluorescent staining. Note that expression of F-PRMT5 but not F-PRMT5m resulted in export of transfected HA-SRP68 from the nucleus to the cytoplasm. C, the effect of ectopically expressed PRMT5 on HA-SRP72 subcellular localization was analyzed by immunofluorescent staining as in B. Again, expression of F-PRMT5 but not F-PRMT5m resulted in export of transfected HA-SRP72 from the nucleus to the cytoplasm. D, 293T cells were transfected with or without HA-tagged PRMT1 (HA-PRMT1), and 2 days after transfection the cells were collected and fractionated into cytoplasm, nuclear, and chromatin fractions. The effect of ectopic expression of HA-PRMT1 on SRP68 and SRP72 distribution was analyzed by Western blot analysis. Note that expression of HA-PRMT1 diminished the levels of SRP68 and SRP72 in the chromatin but not the nuclear fraction. Also note that expression of HA-PRMT1 led to increased levels of H4R3me2a. E, HeLa cells were cotransfected with FLAG-tagged PRMT1 (F-PRMT1) and HA-SRP68 or HA-SRP72, and the subcellular localization was revealed by immunofluorescent staining. Note that expression of F-PRMT1 did not affect the nuclear localization of HA-SRP68 and HA-SRP72.
Unlike the case of PRMT5 overexpression, PRMT1 overexpression did not appear to reduce the nuclear fraction of SRP68 and SRP72 (Fig. 3D, compare lane 5 and lane 2). Indeed, unlike PRMT5, ectopic expression of PRMT1 did not affect the nuclear localization of SRP68 and SRP72, as shown by immunofluorescent staining (Fig. 3E). Together these results show that both PRMT5 and PRMT1 regulate SRP68/72 chromatin association, presumably through their abilities to catalyze H4R3me2s and H4R3me2a, respectively, which in turn interfere with the binding of SRP68/72 to the histone H4 tail in chromatin. The mechanism by which PRMT5 and PRMT1 differentially affect the subcellular localization of SRP68/72 remains to be investigated.

Both SRP68 and SRP72 Appear to Possess a Transcriptional Activation Activity-The above findings, that both SRP68 and SRP72 associate with chromatin in cells and that their chromatin association is regulated by H4R3 methylation, raise the possibility that SRP68 and SRP72 are involved in transcriptional regulation. To this end, we investigated if SRP68 and SRP72 possess transcriptional activity. We generated fusion proteins of SRP68 and SRP72 with a heterologous DNA binding domain (DBD, amino acids 1-147) from the yeast transcription factor Gal4. When cotransfected into 293T cells with a minimal TK promoter-driven luciferase reporter containing four tandem Gal4 binding sites (UAS) upstream of the TK promoter (4xUAS-TK-luc), expression of Gal-SRP68 or Gal-SRP72 led to transcriptional activation in a dose-dependent manner (Fig. 4A). The correct expression of Gal-SRP68 and Gal-SRP72 was verified by Western blot analysis using a Gal4(DBD)-specific antibody (Fig. 4A, lower panel). These results suggest that SRP68 and SRP72 have a transcriptional activation function. To map the potential transcriptional activation domain(s) in SRP68 and SRP72, we fused different regions of SRP68 and SRP72 to Gal4(DBD) and tested their ability to activate transcription in the luciferase reporter assay as above. We found that the transcriptional activation activity of SRP68 resided mainly in the C-terminal region, amino acids 436-620 (Fig. 4B). The differences in transcriptional activity for the different regions of SRP68 were not due to variation in protein expression, because Western blot analysis revealed similar expression levels for the various Gal-SRP68 fusion proteins (Fig. 4B, lower panel). For SRP72, the major transcriptional activation domain was mapped to the N-terminal region, amino acids 1-356. Given that Gal-(1-356) exhibited only half of the transcriptional activity of full-length SRP72, a contribution of the additional C-terminal region to the transcriptional activity could not be excluded. Fig. 4D summarizes the transcriptional activation domain mapping results for SRP68 and SRP72.

FIGURE 4. Tethering SRP68 or SRP72 to a reporter gene results in transcriptional activation. A, 293T cells were co-transfected with the 4xUAS-TK-Luc reporter (100 ng) and control Gal(DBD) vector, Gal-SRP68, or Gal-SRP72 constructs as indicated. Two days after the transfection, the cells were collected, and the relative luciferase activities were determined. The amounts of expression plasmids used were: +, 100 ng; ++, 200 ng. The samples were also analyzed for expression of Gal-SRP68 and Gal-SRP72 by Western blot (WB). The control Gal(DBD) was not detected due to its small size (it ran off the gel). B, mapping of the potential transcriptional activation domain of SRP68 is shown.
293T cells were transfected with the 4xUAS-TK-Luc reporter and SRP68 mutants as indicated, and luciferase activities were assayed as above. The amount of DNA used was 100 ng each. C, mapping of the potential transcriptional activation domain of SRP72 is shown. The experiments were performed as in B except that various SRP72 mutants, as indicated, were used. The amount of DNA used was 100 ng each. D, shown is a summary of the transcriptional activity of SRP68 and SRP72 and their deletion mutants.

Although the precise transcriptional activation domain(s) and the mechanism(s) by which they activate transcription remain to be determined, these data nevertheless support a transcriptional regulatory function for both SRP68 and SRP72.

Identification of Potential SRP68 Target Genes by ChIP-Seq Analysis-We next attempted to identify potential endogenous SRP68 target genes using ChIP followed by high-throughput sequencing (ChIP-seq). 293T cells were fixed with formaldehyde, and the chromatin was fragmented by sonication and immunoprecipitated using purified specific anti-SRP68 antibodies or no antibody as the ChIP negative control. A total of 1166 SRP68 binding peaks with a p value of 1.00e-004 were identified, and 681 of the 1166 peaks could be annotated to 638 unique genes using a parameter of a maximum distance of 100 kb up- and downstream of the transcription start sites (TSS) (Fig. 5, A and B). Representative SRP68 binding profiles are shown in Fig. 5C for the CD1E and TCTA genes. The SRP68 binding sites were not enriched in particular genomic regions (e.g., promoters or introns) (supplemental Fig. S1A) but, interestingly, were enriched on chromosomes 1, 3, 13, and X (supplemental Fig. S1B). We randomly selected 18 SRP68 peaks and validated the binding of SRP68 to these regions by ChIP followed by quantitative PCR analysis (Fig. 5D). As negative controls, binding of SRP68 was not observed in the promoter regions of the NKX3.1 and PSA genes (data not shown). These results suggest that most if not all peaks identified by our ChIP-seq analysis are authentic SRP68 binding sites. GO analysis revealed that the 638 unique SRP68 binding site-containing genes are slightly enriched for cytoskeleton organization, cell adhesion, DNA catabolic processes, and apoptosis (supplemental Fig. S2).

SRP68 May Regulate Target Gene Expression in a Context-dependent Manner-Having identified the potential SRP68 target genes, we next investigated if SRP68 has a role in their expression. We knocked down SRP68 in 293T cells by RNAi and verified the efficient down-regulation of SRP68 protein by Western blotting (Fig. 5E). We then analyzed the effect of SRP68 knockdown on the mRNA levels of the 18 genes that had been verified for binding of SRP68, using quantitative RT-PCR. We found that knockdown of SRP68 resulted in substantial up-regulation of DDIT3, DIDO1, CUL1, HNRNPA3, SLC37A3, YEATS4, and DRD2, in significant down-regulation of TMEM110, TCTA, NF1, OCDC43, CD1E, and CDKN1A, and in insignificant changes for the remaining genes. This effect on target gene expression was reproducible in three independent experiments. Thus, although the reporter assays clearly suggested a transcriptional activation function for SRP68, knockdown of SRP68 differentially affected the expression of its associated target genes, suggesting a context-dependent transcriptional function for SRP68.
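The log2 expression ratios described above can be obtained from qPCR Ct values with the ΔΔCt method; the short sketch below shows that arithmetic. The reference gene and all Ct values are illustrative assumptions, since the normalization gene is not stated here:

```python
def log2_relative_expression(ct_target_kd, ct_ref_kd, ct_target_ctrl, ct_ref_ctrl):
    """log2 fold change of a target gene in knockdown vs. control cells (ddCt).

    Relative expression = 2 ** -ddCt, so its log2 is simply -ddCt.
    """
    d_ct_kd = ct_target_kd - ct_ref_kd          # normalize to a reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_kd - d_ct_ctrl
    return -dd_ct

# Illustrative: a gene up-regulated ~2-fold upon siSRP68 treatment -> log2 ratio 1.0
print(log2_relative_expression(22.0, 18.0, 23.0, 18.0))
```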
FIGURE 5. Genome-wide SRP68 distribution determined by ChIP-seq analysis and the effect of SRP68 on target gene expression. A, the total numbers of peaks and annotated peaks enriched by anti-SRP68 antibody are shown. ChIP-seq analysis was performed with 293T cells. B, shown is the distance of SRP68 peaks relative to the transcription start sites (TSS) of the genes. C, the SRP68 binding profiles for two representative genes are shown. SRP68 is in black, and the control is in purple. D, validation of SRP68 binding for 18 annotated SRP68 peaks (target genes) by quantitative PCR ChIP analysis is shown. E, Western blot analysis confirmed the efficient knockdown of SRP68 in 293T cells by siRNA. F, 293T cells were treated with siSRP68, and the effect on the transcription of the 18 SRP68 target genes was assessed by quantitative RT-PCR. The relative levels of transcription to that of the control siRNA-treated sample were calculated as log2.

DISCUSSION

In this study we attempted to use an unbiased proteomic approach to identify nuclear proteins that selectively bind histone H4 N-terminal tail peptides carrying H4R3me2a or H4R3me2s. Despite extensive effort, we did not observe any prominent nuclear protein that binds the H4 tail peptide in an H4R3me2a- or H4R3me2s-dependent manner (Fig. 1A). Instead, we consistently observed that both H4R3me2a and H4R3me2s inhibited the binding of two nuclear proteins that were subsequently identified as the SRP68/72 heterodimers. It is noteworthy that the same experimental approach previously permitted the identification of multiple H3K4me2- and H3K9me2-binding proteins (36). Thus, the failure to identify methylated H4R3-specific binding proteins in this study is unlikely to be due to technical reasons and more likely reflects the general nature of histone arginine methylation. In contrast to the increasingly large number of methylated lysine-specific histone-binding proteins that have been identified, so far only a limited number of proteins have been reported to bind arginine-methylated histones. On the basis of our study and others (see below), we therefore propose that histone arginine methylation is more likely employed to inhibit than to recruit effector proteins (Fig. 6).

The role of histone arginine methylation in inhibiting rather than recruiting effector proteins is not unique to H4R3 methylation. It was shown previously that H3R2me2a catalyzed by PRMT6 antagonizes H3K4 methylation by interfering with the binding of the H3K4 methyltransferase mixed lineage leukemia (MLL) complexes and other proteins to chromatin (40-42). More recently, H3R2 methylation has been shown to inhibit the binding of UHRF1/ICBP90 to the histone H3 tail (43). In addition, using the same unbiased peptide pulldown approach employed in this study, we also failed to detect any prominent nuclear proteins that bind H3 tail peptides containing either H3R17me2a and/or H3R26me2a (44). Instead, we uncovered a role of H3R17me2a and H3R26me2a, in conjunction with histone acetylation, in inhibiting the binding of corepressors such as the Mi-2/NuRD complex to the H3 tail in vitro and to chromatin in vivo (44). Together these data support the idea that, unlike histone lysine methylation, histone arginine methylation, including H4R3 methylation, may act mainly to inhibit or repel rather than recruit histone effector proteins.
In this regard, arginine residues are often involved in protein-protein interactions because they contain five potential hydrogen bond donors that can form favorable interactions with biological hydrogen bond acceptors (4). Methylation of arginine not only reduces the number of available hydrogen bond donors but also alters its conformation, which in turn may inhibit arginine-engaged protein-protein interactions. We want to emphasize that our data do not exclude the existence of methylated arginine-specific binding proteins such as survival motor neuron protein (SMN) and TDRD3. The Tudor domain-containing proteins are known to engage hydrophobic and hydrogen-bond interactions with methylated arginine residues in histones and nonhistone proteins through an aromatic cage. Thus, histone arginine methylation is likely to have a combinatorial effect on the binding of chromatin effector proteins: on one hand inhibiting the binding of proteins such as MLL (40-42), UHRF1 (43), and SRP68/72, and on the other hand facilitating the chromatin association of proteins such as TDRD3 (19). Nevertheless, the limited number of arginine-methylated histone-binding proteins identified so far raises the possibility that this type of histone modification mainly functions to inhibit rather than to recruit effector proteins.

H4R3me2a and H4R3me2s are catalyzed, respectively, by PRMT1 and PRMT5 and have been linked to transcriptional activation and repression, respectively (7, 9-11). Given their opposite roles in transcription, we initially expected to identify distinct sets of proteins that bind H4R3me2a or H4R3me2s, respectively, or whose binding to the H4 tail is differentially affected. Although we could not rule out the possibility that we failed to observe such proteins due to technical limitations in our experiments, it is equally possible that H4R3me2a and H4R3me2s exert opposite effects on transcription through their distinct cis-effects on other histone modifications. For example, the presence of H4R3me2a has been shown to facilitate in cis histone acetylation catalyzed by CBP/p300 (7) and PCAF (20). On the other hand, H4R3me2s catalyzed by PRMT5 has not been shown to enhance in cis histone acetylation. Thus, the effect of histone arginine methylation on transcription is likely due to the combinatorial effect of histone arginine methylation on the binding of effectors and/or other histone modifications in cis.

A surprising finding in this study is the identification of SRP68 and SRP72 as major H4-binding proteins. SRP68 and SRP72 are part of the SRP, a particle critically important for targeting secretory and membrane proteins to the ER. SRP68 and SRP72 were previously shown to form heterodimers independent of the SRP complex and are released from the SRP as a stable SRP68/72 heterodimer that is essential for SRP-mediated protein targeting (27,29). Our pulldown and gel filtration assays provide compelling evidence that the SRP68/72 heterodimer, but not the SRP complex, binds the H4 tail. Multiple lines of evidence support a direct binding of SRP68/72 to H4 tails, including an H4 tail-dependent recruitment (co-localization) of SRP68 in CHO cells (Fig. 2E) and binding of the H4 tail peptide in vitro by recombinant SRP68 and SRP72 (Fig. 2I). The ability of both SRP68 and SRP72 to bind the H4 tail may allow SRP68/72 heterodimers to bind chromatin with high affinity. It is noteworthy that SRP68 and SRP72 do not share sequence similarity. Exactly how these proteins bind the H4 tail peptide remains to be determined.
Consistent with an inhibitory role of H4R3me2s and H4R3me2a in the binding of SRP68/72 to the H4 tail peptide, ectopic expression of PRMT5 or PRMT1 both resulted in the dissociation of SRP68/72 from chromatin. Interestingly, ectopic expression of PRMT5 also drives SRP68/72 out of the nucleus, whereas ectopic expression of PRMT1 does not. At this stage we do not know how PRMT1 and PRMT5 differentially regulate SRP68/72 subcellular localization. One possibility is that PRMT5 not only methylates H4R3, which leads to dissociation of SRP68/72 from chromatin, but also methylates SRP68/72, which results in nuclear export of SRP68/72. On the other hand, PRMT1 may only methylate H4R3 to affect SRP68/72 chromatin association. Further work is needed to elucidate the mechanisms by which PRMT5 regulates SRP68/72 subcellular localization.

FIGURE 6. A model of H4R3 methylation in regulating chromatin association of SRP68/72. SRP68/72 has dual roles, either in the SRP complex participating in protein targeting (left) or as a heterodimer involved in transcription (right). Indicated are the six SRP proteins (SRP9 to SRP72) in relation to the folded SRP RNA (black line) as well as the signal peptide of the secretory protein (sp). The methylated H4 tail (H4R3me2, red triangle) is highlighted in the chromatin complex. Binding of SRP68/72 to H4 tails of chromatin is inhibited by H4R3 methylation, as shown by the X. The association of SRP68/72 with chromatin is likely to influence transcription either directly and/or indirectly.

By tethering SRP68 and SRP72 to a luciferase reporter through a heterologous Gal4 DNA binding domain, we demonstrated that both SRP68 and SRP72 possess transcriptional activation activity (Fig. 4A). The transcriptional activity can be mapped primarily to the C-terminal region of SRP68 and the N-terminal region of SRP72 (Fig. 4, B and C). Although the mechanism by which tethering SRP68 or SRP72 to DNA leads to transcriptional activation remains to be investigated, it nevertheless suggests that chromatin-associated SRP68 and SRP72 may have a transcriptional regulatory function. In support of this notion, we carried out ChIP-seq analysis and identified 1166 SRP68-associated regions and 638 potential SRP68 target genes using a parameter of maximum SRP68 peak distance of 100 kb up- and downstream of the transcription start sites (TSS) (Fig. 5, A and B). As the enrichment of SRP68 in the SRP68 peaks was confirmed by ChIP-quantitative PCR analysis for 18 randomly selected genes (Fig. 5D), most of the SRP68 peaks identified in this study are likely authentic SRP68-associated regions in 293T cells. Given that SRP68 and SRP72 form heterodimers, the SRP68-associated regions are most likely SRP68/72-associated regions, although this remains to be tested experimentally. Interestingly, although SRP68 possesses transcriptional activation activity in the reporter assay, knockdown of SRP68 affects positively or negatively the expression of its directly associated genes (Fig. 5F), suggesting that the effect of SRP68 on target gene expression is likely context-dependent. Although the underlying mechanism remains to be fully investigated, many transcription factors and epigenetic regulators possess a context-dependent transcriptional function. For example, the stem cell factor Oct4 can activate or repress its target gene expression (45), in part depending on the coregulators it interacts with (46,47).
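For readers who want to see how the log2 relative-expression values reported for the knockdown experiments could be derived, a minimal delta-delta-Ct sketch follows. The Ct values and the normalization to a reference gene are illustrative assumptions, not the authors' published protocol.

```python
# Minimal sketch (assumed workflow, not the authors' exact protocol) of
# computing log2 relative expression after knockdown from qPCR Ct values
# with the standard delta-delta-Ct method. All Ct values are invented.
def log2_relative_expression(ct_target_kd, ct_ref_kd, ct_target_ctrl, ct_ref_ctrl):
    """log2 of target expression in the knockdown relative to the control
    siRNA sample, normalized to a reference gene."""
    ddct = (ct_target_kd - ct_ref_kd) - (ct_target_ctrl - ct_ref_ctrl)
    return -ddct  # fold change = 2**(-ddCt), so log2(fold change) = -ddCt

# Example: the target amplifies 2 cycles later after knockdown -> ~4-fold down.
print(log2_relative_expression(26.0, 18.0, 24.0, 18.0))  # -2.0
```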
Taken together, our study has identified SRP68 and SRP72 as novel H4 tail-binding proteins whose binding to H4 tails is inhibited by H4R3 methylation, and we have thus uncovered a novel transcriptional regulatory function for SRP68/72 (Fig. 6). The identification of potential SRP68/72 target genes by ChIP-seq substantiates the histone binding activity of SRP68/72 and sets the stage for characterization of their transcriptional and potentially other chromatin-related functions.
2018-04-03T03:27:56.965Z
2012-10-08T00:00:00.000
{ "year": 2012, "sha1": "432f238fa3136a358bddcd5f46ced2393d296938", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/287/48/40641.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "fcf6d7153333799c70b9ee0aab3dd48bbdda3a2f", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
235577182
pes2o/s2orc
v3-fos-license
Kolmogorov-Smirnov APF Test for Inhomogeneous Poisson Processes with Shift Parameter

In this article, we study a Kolmogorov-Smirnov type goodness-of-fit test for the inhomogeneous Poisson process with an unknown multidimensional translation (shift) parameter. The basic hypothesis and the alternative are composite and concern the intensity measure of the inhomogeneous Poisson process, and the intensity function is regular. For this shift-parameter model, we propose a test which is asymptotically partially distribution free and consistent. We show that under the null hypothesis the limit distribution of this statistic does not depend on the unknown parameter.

Introduction

One of the central themes of statistical theory and practice is the quality of goodness-of-fit tests. The problems of constructing goodness-of-fit tests in the i.i.d. case are well studied in [1]. To set up a test that allows, if possible, accepting or rejecting the hypothesis to be tested against a given alternative, depending on a data set, a nonparametric study of hypothesis tests is required; a typical example is the goodness-of-fit test, and other important examples for applications are the tests for symmetry, independence and homogeneity. [2] [3], and many other authors have worked in this area, mainly in the minimax approach, which is considered in nonparametric statistics as a good framework for assessing the performance of an estimator. In classical mathematical statistics, [4] studied in depth the Chi-square, Kolmogorov-Smirnov and Cramér-von Mises tests, and the Kolmogorov-Smirnov and Cramér-von Mises goodness-of-fit tests were shown to be asymptotically distribution free (i.e., their limit laws under the null hypothesis do not depend on the underlying distribution). [5] recently studied tests of nonparametric hypotheses for the intensity of the inhomogeneous Poisson process. The study they carried out is an extension to Poisson processes of Ingster's work. [4] studied nonparametric tests for Gaussian white noise models with a noise level ε tending to 0. [6] presented in their article a review of several results concerning the construction of Kolmogorov-Smirnov-type and Cramér-von Mises-type goodness-of-fit tests for continuous-time processes. As models, they considered a small noise stochastic differential equation, an ergodic diffusion process, a Poisson process, and self-exciting point processes. [7] [8] consider the shift parameter model and the shift and scale parameter model, and show that the Cramér-von Mises test is asymptotically distribution free and asymptotically partially distribution free, and consistent. For each model, they propose tests which provide the asymptotic size α and describe the form of the power function under local alternatives. In applications, the hypotheses to be tested are often of a more complex nature. The first works on the problem of goodness-of-fit testing of composite hypotheses in classical statistics are due to [9], who proposed to test composite hypotheses in the case where the distribution function under the hypothesis to be tested depends on a multidimensional unknown parameter. The null hypothesis therefore becomes composite, i.e. it does not determine the distribution of the sample in a unique way. In the case where the parameters are estimated, the Kolmogorov-Smirnov test, as well as the Cramér-von Mises test, is no longer asymptotically distribution free.
It follows that the critical values change from one null hypothesis to another. Different values of the parameter result in different critical values, often within the same parametric family. The distribution free character is therefore crucial in applications, since the critical values are calculated only once for any distribution defined under the hypothesis to be tested. To work around this problem, [9] suggested the split sample method. Durbin's problem admits a martingale transformation of the parametric empirical process, which was proposed by [10]. The martingale approach of [10] allows building asymptotically distribution free hypothesis tests. This approach proposed by [10] is used by various authors, including [11] in regression models, and [12]. We use an approach similar to that of [10] to construct, in this article, Kolmogorov-Smirnov-type asymptotically distribution free and consistent goodness-of-fit tests. We will consider the same model as [7]. In general, dealing with the intensity measure of the Poisson process, we consider the model depending on an unknown translation parameter with a composite parametric basic hypothesis and show that the Kolmogorov-Smirnov test is asymptotically parameter free.

Statement of the Problem and Auxiliary Results

Suppose that we observe $n$ independent inhomogeneous Poisson processes $X^{(n)} = (X_1, \ldots, X_n)$, where $X_j = \{X_j(t),\ 0 \le t \le T\}$ are trajectories of Poisson processes with mean function
$$\Lambda(t) = \mathbf{E}\,X_j(t) = \int_0^t \lambda(s)\,\mathrm{d}s, \qquad 0 \le t \le T.$$
Here $\lambda(\cdot) \ge 0$ is the corresponding intensity function. Then we can introduce the Kolmogorov-Smirnov (K-S) type statistic based on the empirical mean function
$$\hat{\Lambda}_n(t) = \frac{1}{n}\sum_{j=1}^{n} X_j(t).$$
Under the basic hypothesis, $\Lambda(\cdot) = \Lambda(\cdot, \vartheta)$ is a known mean function of the Poisson process depending on some finite-dimensional unknown parameter $\vartheta \in \Theta \subset \mathbb{R}^d$. Note that under $\mathscr{H}_0$ there exists a true value $\vartheta_0 \in \Theta$ such that the mean of the observed Poisson process is $\Lambda(\cdot, \vartheta_0)$. The K-S type GoF test can be constructed in a similar way. Introduce the normalized process
$$\eta_n(t) = \sqrt{n}\left(\hat{\Lambda}_n(t) - \Lambda(t, \hat{\vartheta}_n)\right),$$
where $\hat{\vartheta}_n$ is the maximum likelihood estimator (MLE) of $\vartheta$. The goal of this work is to show that if the unknown parameter $\vartheta \in \Theta$ is the shift parameter, then it is possible to construct a test statistic $\hat{\Gamma}_n$ whose limit distribution does not depend on $\vartheta_0$. The test will be uniformly consistent against a class of alternatives
$$\mathscr{F}_\rho = \left\{ \Lambda(\cdot) : \inf_{\vartheta \in \Theta}\ \sup_{0 \le t \le T} \left| \Lambda(t) - \Lambda(t, \vartheta) \right| > \rho \right\}.$$
Here $\rho > 0$ is some given number. The mean function under the null hypothesis is known, and therefore the solution can be calculated before the experiment using, say, numerical simulations. We are given $n$ independent observations $X^{(n)}$. Here $\vartheta_0$ is the true value, and the intensity function under the hypothesis is of the shift type, $\lambda(\vartheta, t) = \lambda(t - \vartheta)$. It is convenient to use two different functions, and we hope that such notation will not be misleading. Therefore, we have the parametric null hypothesis
$$\mathscr{H}_0 :\quad \Lambda(\cdot) \in \left\{ \Lambda(\cdot, \vartheta),\ \vartheta \in \Theta \right\},$$
where $\Lambda(\cdot, \vartheta)$ is a known absolutely continuous function. In this work, a dot denotes the derivative with respect to $\vartheta$ of any function. We consider the class of tests of asymptotic level $\varepsilon$:
$$\mathscr{K}_\varepsilon = \left\{ \bar{\phi}_n :\ \lim_{n \to \infty} \mathbf{E}_{\vartheta_0}\, \bar{\phi}_n = \varepsilon \ \text{for all}\ \vartheta_0 \in \Theta \right\}.$$
The test studied in this work is based on the following statistic of K-S type:
$$\hat{\Gamma}_n = \sup_{0 \le t \le T} \left| \eta_n(t) \right| = \sup_{0 \le t \le T} \sqrt{n}\, \left| \hat{\Lambda}_n(t) - \Lambda(t, \hat{\vartheta}_n) \right|.$$
As we use the asymptotic properties of the MLE $\hat{\vartheta}_n$, we need some regularity conditions.

Conditions. The intensity function $\lambda(\cdot)$ is strictly positive and three times continuously differentiable. Note that, under these conditions, the MLE $\hat{\vartheta}_n$ is consistent and asymptotically normal.
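To make the construction tangible, the sketch below simulates n shifted inhomogeneous Poisson processes and evaluates a K-S type distance. It is a numerical illustration under assumed ingredients (a specific intensity, and the true ϑ plugged in instead of the MLE), not the paper's exact statistic Γ̂_n.

```python
# Numerical illustration (assumptions: a chosen intensity, true theta used
# in place of the MLE) of the K-S type distance for n inhomogeneous Poisson
# processes with shifted intensity lambda(t - theta) on [0, T].
import numpy as np

rng = np.random.default_rng(0)
T, n, theta = 5.0, 200, 1.0
lam = lambda t: 2.0 + np.cos(t - theta)                        # intensity > 0
Lam = lambda t: 2.0 * t + np.sin(t - theta) - np.sin(-theta)   # integral from 0

# Simulate each process by thinning a homogeneous process of rate lam_max.
lam_max = 3.0
events = []
for _ in range(n):
    m = rng.poisson(lam_max * T)
    cand = rng.uniform(0, T, m)
    events.append(np.sort(cand[rng.uniform(0, lam_max, m) < lam(cand)]))

grid = np.linspace(0, T, 2001)
Lam_hat = np.mean([np.searchsorted(e, grid) for e in events], axis=0)
D_n = np.sqrt(n) * np.max(np.abs(Lam_hat - Lam(grid)))
print(f"K-S type statistic: {D_n:.3f}")
```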
Main Result

Let us introduce the following random variable. The main result of this work is the following theorem. The proof of the theorem is based on the proof of the following fundamental lemma.

Proof of Lemma 3.4. The proof of the Lemma is based on the Central Limit theorem for stochastic integrals (see, e.g., Kutoyants [14], Theorem 1.1). We follow the proof of that theorem. In particular, we obtain, as $n \to \infty$, the convergence of the characteristic function to the characteristic function defined in (3.11). Therefore, we have the convergence of the one-dimensional distributions. In the general case, the verification of the convergence is entirely similar.

In this work, we found the Kolmogorov-Smirnov GoF test based on the sup-metric in the case of the translation parameter. It is natural to ask: what happens if we take the $L_2$ metric instead?
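Since the paper's limit variable is not reproduced here, the following hedged sketch only illustrates the generic recipe mentioned earlier, namely that thresholds "can be calculated before the experiment using, say, numerical simulations". The supremum of a Wiener process on [0, 1] is used purely as a stand-in limit law.

```python
# Hedged illustration of computing a test threshold by Monte Carlo: the
# (1 - eps)-quantile of sup |W(s)| on [0, 1], used here only as a stand-in
# for the actual limit variable of the test.
import numpy as np

rng = np.random.default_rng(1)
eps, reps, steps = 0.05, 20_000, 1_000
sups = np.empty(reps)
for r in range(reps):
    w = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / steps), steps))  # Wiener path
    sups[r] = np.max(np.abs(w))
threshold = np.quantile(sups, 1 - eps)
print(f"Monte Carlo {1 - eps:.0%} quantile of sup|W|: {threshold:.3f}")
# The level-eps test rejects H0 when the observed statistic exceeds this value.
```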
2021-06-22T17:55:28.532Z
2021-04-28T00:00:00.000
{ "year": 2021, "sha1": "db419df26f0c191237077704c63220467581559a", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=108776", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a8d722803a4ae91893c6c1702d01b995bb053edf", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
11105525
pes2o/s2orc
v3-fos-license
Improved human disease candidate gene prioritization using mouse phenotype Background The majority of common diseases are multi-factorial and modified by genetically and mechanistically complex polygenic interactions and environmental factors. High-throughput genome-wide studies like linkage analysis and gene expression profiling tend to be most useful for classification and characterization but do not provide sufficient information to identify or prioritize specific disease causal genes. Results Extending an earlier hypothesis that the majority of genes that impact or cause disease share membership in any of several functional relationships, we, for the first time, show the utility of mouse phenotype data in human disease gene prioritization. We study the effect of different data integration methods, and based on the validation studies, we show that our approach, ToppGene, outperforms two of the existing candidate gene prioritization methods, SUSPECTS and ENDEAVOUR. Conclusion The incorporation of phenotype information for mouse orthologs of human genes greatly improves human disease candidate gene analysis and prioritization. Background Although the availability of complete genome sequences and the wealth of large-scale biological data sets opened up unprecedented opportunities to elucidate the genetic basis of rare and common human diseases [1], comprehending the underlying pathophysiological mechanisms continues to be challenging. The majority of common diseases are genetically intricate, polygenic and multifactorial, and frequently manifest as different clinical phenotypes. Additionally, these complex conditions are often triggered by an interaction of genetic, environmental, and physiological factors, making it difficult for researchers to narrow their focus to a single gene or a few genes. High-throughput genome-wide studies like linkage analysis and gene expression profiling, although useful for classification and characterization, do not provide sufficient information to identify specific disease causal genes. Both of these approaches typically result in hundreds of potential candidate genes, failing to help the researchers in reducing the target genes to a manageable number for further validation. Functional enrichment approaches [2][3][4] focusing on gene sets that share common biological function, chromosomal location, or regulation, although successful in identifying enriched biological themes, are not suitable for gene prioritization. To overcome this, several gene prioritization methods have been developed [5][6][7][8][9] (see Tiffin et al [10] and Oti and Brunner [11] for a complete list of existing approaches and web tools for the prediction or prioritization of disease candidate genes). POCUS [6], for instance, finds candidate genes by identifying an enrichment of keywords associated with gene ontology (GO), shared protein domains and expression profiles among a given set of susceptibility loci relative to the genome at large. Similarly, PROSPECTR [8] and SUSPECTS [12], focusing on Mendelian and oligogenic disorders, compare GO, protein domains and expression libraries of putative disease genes with those known to be involved with the same disease. Integrating genomic and proteomic data, Mootha et al [13] identified the LSFC (Leigh syndrome, French-Canadian type) causal gene. The recent method, ENDEAVOUR [9], uses several data sources to prioritize candidate genes.
None of these approaches, however, utilizes mouse phenotype data in their prioritization, although the mouse is the key model organism for the analysis of mammalian developmental, physiological and disease processes [14]. Additionally, there have been several reports [15,16] wherein a direct comparison of human and mouse phenotypes allowed for the rapid recognition of disease causal genes. Extending the above-mentioned approaches, and an earlier hypothesis that the majority of disease causal genes are functionally closely related [6], we reasoned that an integrative genomics-transcriptomics-phenomics-bibliomics approach utilizing the available human gene annotations, mouse phenotype data and literature co-citations of genes will expedite human complex disease candidate gene identification and prioritization. We call our prioritization method ToppGene (acronym for Transcriptome Ontology Pathway PubMed based prioritization of Genes). For the first time, we incorporated mouse phenotype data as one of the feature parameters, apart from GO, pathways, biomedical literature, protein domains, protein interactions and gene expression, to prioritize human disease candidate genes, and we demonstrate its utility. Mouse phenotype as a feature for candidate gene prioritization The Mammalian Phenotype (MP) Ontology enables robust annotation of mammalian phenotypes in the context of mutations, quantitative trait loci and strains that are used as models of human biology and disease. The MP Ontology (MPO) supports different levels and richness of phenotypic knowledge and flexible annotations to individual genotypes [17]. Each node in MPO represents a category of phenotypes, and each MP ontology term has a unique identifier, a definition and synonyms, and is associated with gene variants causing these phenotypes in genetically engineered or mutagenesis experiments. In the current study, we retrieved mouse genes associated with each of the MP terms and extracted the corresponding human orthologous genes. In the current version of MPO, there are 4280 terms associated with 4329 unique Entrez mouse genes (extrapolated to 4329 orthologous human genes). We do not check whether the human orthologous gene of a mouse gene causes a similar phenotype. Rather, we assume that orthologous genes cause "orthologous" phenotypes and test the potential of the extrapolated mouse phenotype terms as a similarity measure between the training and test groups of genes in candidate gene analysis. Document identifier as a feature for candidate gene prioritization We use biomedical literature abstract identifiers (PubMed identifiers, PMIDs) as a feature for classification, where the dimensionality of the feature space was equal to the number of documents in the document set. We hypothesized that if a PMID is cross-referenced in two genes, the two genes are likely to have a direct or indirect association. A large number of co-citations for a pair of genes (i.e. the same PMIDs associated with two different genes) probably represents a relationship (direct or indirect association) between the two genes. For each gene, ToppGene considers all associated articles (represented as PMIDs) as the literature annotation of this gene. The gene to PMID association file ("gene2pubmed.gz") was downloaded from the NCBI Entrez Gene ftp site [18]. 44,806 PMIDs were associated with more than one gene and 25,294 genes had at least one PMID association. 24,273 genes shared at least one PMID with another gene. For the current study, we do not look into the details of the relationship type between the genes but consider only co-citation. In other words, the PMIDs are used only as a feature of similarity measure in the candidate gene analysis.
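A minimal sketch of how such co-citation counts can be derived from the gene2pubmed mapping is shown below. It is not ToppGene's implementation, and the records in the example are made up.

```python
# Minimal sketch (not ToppGene's implementation) of deriving gene-gene
# co-citation counts from NCBI's gene2pubmed mapping, which the text uses
# as a similarity feature. The records below are hypothetical.
from collections import defaultdict
from itertools import combinations

# (gene_id, pmid) pairs as found in gene2pubmed.gz
gene2pubmed = [(1, 111), (2, 111), (2, 222), (3, 222), (1, 333), (3, 333)]

pmid_to_genes = defaultdict(set)
for gene, pmid in gene2pubmed:
    pmid_to_genes[pmid].add(gene)

cocitations = defaultdict(int)  # (geneA, geneB) -> number of shared PMIDs
for genes in pmid_to_genes.values():
    for a, b in combinations(sorted(genes), 2):
        cocitations[(a, b)] += 1

print(dict(cocitations))  # {(1, 2): 1, (2, 3): 1, (1, 3): 1}
```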
Comparison of ToppGene with other gene prioritization approaches To evaluate the performance of our approach and also compare it with other similar gene prioritization approaches [8,9,12], we performed two types of comparisons: large-scale cross-validations and small-scale test cases (see Additional file 1 for the workflow, and Tables 1 and 2 for a comparison of features and methods used in the 3 applications, namely, SUSPECTS, ENDEAVOUR and ToppGene). For the large-scale cross-validations, we used the same or similar training sets as mentioned in the previous methods. Specifically, we compared ToppGene's performance with ENDEAVOUR [9] using random-gene cross-validation; and for comparison with PROSPECTR [8] and SUSPECTS [12], we used locus-region cross-validation. Additionally, as test cases, we selected two diseases, congenital heart defects (CHD) and diabetic retinopathy (DR), and compared the prioritization performance of ToppGene with SUSPECTS [12] and ENDEAVOUR [9]. Comparison of ToppGene with ENDEAVOUR: Random-gene cross-validation In the current study we used our own disease training sets because the complete data sets used by ENDEAVOUR are not available for public access. We, therefore, randomly selected 19 diseases along with their associated genes from Online Mendelian Inheritance in Man (OMIM) and the Genetic Association Database (GAD). Each disease gene set contained 30 to 44 genes. The total number of genes across the 19 selected diseases was 693 (see Additional file 2 for the complete list of the datasets). For negative controls, 20 sets, each containing 35 random genes, were created as training data. We followed the same methodology as ENDEAVOUR to evaluate the performance of our prioritization method and also compare the results with ENDEAVOUR. In each validation run, the gene group of a particular disease (with one gene removed as the "target") was used as the training set. The "target" gene was then mixed with 99 random genes to make a test set of 100 genes. The rank of the "target" gene in the resulting list, following prioritization, was recorded. This process was repeated for each gene in the list. Sensitivity was defined as the frequency of "target" genes that are ranked above a particular threshold position, and specificity as the percentage of genes ranked below the threshold. For instance, a sensitivity/specificity value of 70/90 indicates that the correct disease gene (the "target") is ranked among the best-scoring 10% of genes in 70% of the prioritizations. Receiver operating characteristic (ROC) curves were plotted based on the sensitivity/specificity values, and the area under the curve (AUC) was computed as the standard measure of the performance of the method. ENDEAVOUR reported a 90/74 sensitivity/specificity value and an AUC score of 0.866 [9]. Using ToppGene, we first created the overall ROC curves. In order to compare with ENDEAVOUR directly, we followed the same definitions for sensitivity and specificity as described by Aerts et al [9]. Figure 1 shows the overall ROC curves using ToppGene. The AUC score of the 19 disease training sets was 0.916, and the sensitivity/specificity was 90/77, i.e. the "target" gene was ranked among the top 23% in 90% of the cases. In the case of the control, the AUC score of the 20 random training sets was 0.503 (see section A of Table 3).
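The random-gene cross-validation loop described above can be summarized in a few lines; the sketch below is not ToppGene's code, and `prioritize` is a placeholder standing in for any candidate gene scoring method.

```python
# Sketch of the random-gene (leave-one-out) cross-validation protocol:
# each disease gene in turn is the "target", mixed with 99 random genes,
# and its rank after prioritization is recorded. `prioritize` is a
# placeholder for any scoring method.
import random

def prioritize(training_genes, test_genes):
    # Placeholder: order test genes by some training-set similarity score.
    return sorted(test_genes, key=lambda g: random.random())

def loo_validation(disease_genes, genome, n_random=99):
    ranks = []
    for target in disease_genes:
        training = [g for g in disease_genes if g != target]
        decoys = random.sample([g for g in genome if g not in disease_genes],
                               n_random)
        ranked = prioritize(training, decoys + [target])
        ranks.append(ranked.index(target) + 1)  # rank 1 = best
    return ranks

# Sensitivity at a threshold rank = fraction of targets ranked above it;
# sweeping the threshold from 1 to 100 traces the ROC curve and its AUC.
```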
Second, we studied the ROC curves based on p-value based scores. ENDEAVOUR provides a ranking of the "target" gene based on p-values from order statistics, which are local p-values. In contrast, ToppGene provides p-values based on random sampling of the whole genome. ToppGene p-value based scores are therefore global measures of the similarity of the test genes to the training genes. As a result, sensitivity and specificity can also be defined based on the p-value based scores; specifically, sensitivity is the true positive rate (the proportion of detected "target" genes among all "target" genes) at a cutoff score, and specificity is the true negative rate (the proportion of "rejected" genes among all "non-target" genes) at the same cut-off level. For example, a sensitivity/specificity of 70/90 indicates that 70% of the "target" genes and 10% of the "non-target" genes have scores higher than a particular cut-off value.

Data type | SUSPECTS | ENDEAVOUR | ToppGene
Attribute-based data | Semantic similarity | p-value from meta-analysis | Fuzzy measure-based similarity
Vector-based data | Pearson correlation | Pearson correlation | Pearson correlation
Combination of scores | Weighted mean | p-value from order statistics | p-value from meta-analysis

Evaluation of features used for gene prioritization in ToppGene To study the efficiency of the different features (GO, Gene Ontology; MP, Mouse Phenotype; Pathways; PubMed; Protein Domains; Gene Expression; and Protein Interactions), a ROC curve for each of the feature sets was generated. Figure 2 shows the corresponding AUC scores of the ROC curves, depicting the relative performance of each feature set in the prioritization method. The mouse phenotype and PubMed features showed the best performance while the protein interactions and gene expression features performed poorly. In terms of coverage (the percentage of genes annotated with each of these features in the whole genome), PubMed was the best while MP had the least coverage (only about 19% of known genes have at least one MP term association). To understand better the relative performance and the power of each of the features in gene prioritization, we tested ToppGene by performing cross-validations with one of the features left out. The performance decreased significantly only when MP was removed (see ROC curve in Figure 3). As expected, the best performance was recorded when all the features were considered for prioritization, with an AUC of 0.913 (see ROC curve in Figure 3) and a coverage of ~89%. For a cutoff score of 0.93, the sensitivity/specificity was 74/90. In other words, 74% of the "target" genes were included in the candidate list (about a 9-fold reduction from the original test set). Comparison of ToppGene with SUSPECTS and PROSPECTR: Locus-region cross-validation In this cross-validation we compared the performance of ToppGene with two other gene prioritization methods, namely, SUSPECTS [12] and PROSPECTR [8]. We used the same data set [6] that was used in the SUSPECTS and PROSPECTR studies (see Additional file 3 for a complete list of the data set). This data set contains a list of 29 OMIM diseases (each disease had multiple known gene associations). For each cross-validation run, the training set was composed of all the genes related to a disease except the "target" gene. The test set was created by including all the genes in the 15 Mb locus region, i.e. genes occurring in the 7.5 Mb flanking regions (5' and 3') of the "target" gene's chromosomal location, along with the "target" gene itself.
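Building the locus-region test set is straightforward to express in code; the sketch below (with invented gene positions and an assumed single-position-per-gene layout) collects all genes within 7.5 Mb on either side of the target.

```python
# Illustrative helper (assumed data layout) for building the locus-region
# test set: all genes within the 15 Mb window centered on the "target"
# gene's position, i.e. 7.5 Mb flanking on each side.
FLANK = 7_500_000

def locus_region_test_set(target, gene_positions):
    """gene_positions: {gene: (chrom, position)}; returns genes in the window."""
    chrom, pos = gene_positions[target]
    return [g for g, (c, p) in gene_positions.items()
            if c == chrom and abs(p - pos) <= FLANK]

genes = {"TARGET": ("chr7", 100_000_000), "NEAR": ("chr7", 103_000_000),
         "FAR": ("chr7", 120_000_000), "OTHER": ("chr2", 100_500_000)}
print(locus_region_test_set("TARGET", genes))  # ['TARGET', 'NEAR']
```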
PROSPECTR, which uses sequence features alone for gene prioritization, ranked the "target" gene on average in the top 31.23% of the prioritized test lists and among the top 5% about 20 times out of 155 (i.e. about 13%). On the other hand, SUSPECTS, which uses GO, protein domains, gene expression, and sequence features for gene prioritization, ranked the "target" genes in the top 5% of the prioritized lists 87 times out of 155 (~56%), and on average the "target" genes were ranked in the top 12.93% of the prioritization results. In comparison, ToppGene was able to rank the "target" gene among the top 5% of the prioritized lists 118 times out of 150 (79%). Five genes in the original list were not present in the current NCBI Entrez Gene database and were therefore excluded. Thus, instead of 155 genes, 150 genes were used for this cross-validation test. On average, the "target" genes were ranked in the top 7.39% of the prioritized lists using our approach (see section B of Table 3). To evaluate the performance of the individual features, we repeated the same locus-region cross-validation with one feature removed at a time (as described earlier under the comparison of ToppGene with ENDEAVOUR). The performance did not change significantly if only GO, pathway, protein domains, protein interactions or gene expression features were excluded during gene prioritization. The performance however declined significantly when MP or PubMed was not included as one of the features in gene prioritization (see Table 4 and Figure 4).

Figure 1. ROC curves of random-gene cross-validation based on score ranks. The blue curve was generated from the 19 disease gene training sets. The black curve, the negative control, was generated from 20 random training sets. See text for the definitions of sensitivity and specificity.

Comparison of ToppGene with ENDEAVOUR and SUSPECTS Test case 1: Congenital heart disease (CHD) We used 28 genes implicated in congenital heart disease (CHD) (see Additional file 4 for the complete list and a comparison of the relative rankings of "target" genes using the different gene prioritization approaches) as the test case and prioritized the genes using the random-gene cross-validation method as described in the earlier sections. In each run, the same training and test sets were submitted to SUSPECTS, ENDEAVOUR and ToppGene manually. Twenty-eight prioritizations were performed by each of the three methods and the average size of the test sets was 20 genes. Test case 2: Diabetic retinopathy (DR) A similar comparative analysis was repeated with diabetic retinopathy (DR) as a test case using the locus-region cross-validation as described in the previous section. The training set comprised 27 known genes implicated in DR (see Additional file 5 for the complete list and a comparison of the relative rankings of the "target" genes using SUSPECTS, ENDEAVOUR and ToppGene) while the test sets comprised genes in the locus regions of the "target" genes. The "target" genes were ranked among the top 5% in the resulting lists 12 times out of 27 (~44%) with both SUSPECTS and ENDEAVOUR based gene prioritization. As witnessed in the earlier comparisons, ToppGene again outperformed both SUSPECTS and ENDEAVOUR by ranking the "target" genes among the top 5% 17 times out of 27 (~63%). If we considered the top 10%, surprisingly SUSPECTS fared better than ENDEAVOUR and was close to ToppGene's performance. Thus, the "target" genes were ranked among the top 10% of the prioritized gene lists 17, 15 and 19 times (63%, 56% and 70%) respectively with SUSPECTS, ENDEAVOUR and ToppGene. The average rank ratios of the "target" genes were 17.04%, 13.31% and 8.49% for SUSPECTS, ENDEAVOUR and our approach respectively (see section D of Table 3).
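The summary metrics quoted in these comparisons (the average rank ratio and the fraction of "target" genes ranked in the top 5% or 10%) can be computed as below; the rank and list-size values in the example are fabricated.

```python
# Sketch of the evaluation metrics used above: average rank ratio of the
# "target" genes and the fraction ranked within the top 5% / 10%.
def rank_metrics(ranks, list_sizes):
    ratios = [r / s for r, s in zip(ranks, list_sizes)]
    return {"avg_rank_ratio": sum(ratios) / len(ratios),
            "top_5pct": sum(x <= 0.05 for x in ratios) / len(ratios),
            "top_10pct": sum(x <= 0.10 for x in ratios) / len(ratios)}

# Fabricated example: 4 prioritizations with test lists of ~100 genes.
print(rank_metrics(ranks=[2, 30, 5, 11], list_sizes=[100, 100, 120, 100]))
# {'avg_rank_ratio': 0.1179..., 'top_5pct': 0.5, 'top_10pct': 0.5}
```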
ToppGene implementation and access The programs of our prioritization method are implemented purely in Java. The open source Java package FtpBean by Calvin Tai [19] is used to automatically download data and annotation files from FTP servers. BioJava packages [20] are used to process UniProt records [21] and extract related protein domain information. GOLEM [22] source code was adapted and modified for dealing with ontology annotations. The Colt [23] and Jakarta Commons-Math [24] libraries are used for statistical analysis. The fuzzy similarity measure and related functions are implemented locally. Our prioritization method is available as a standalone web application [25]. The user interface is written in JavaScript, JSP and servlets, and integrated with the Tomcat web server. Users can enter the training and test sets of genes of interest as queries from the interface, and the application will display enriched themes (based on the GO, Pathways, Phenotype, Protein Domains, PubMed and Protein Interactions) in the training set genes along with annotated prioritized test genes. All the gene information and annotation data will be updated automatically except for pathways.

Discussion

Traditionally there are two categories of approaches to compute the similarity between any two genes based on semantic annotations: pair-based and set-based [26]. In pair-based methods, an average or maximum of pairwise term information content is calculated as the similarity between the two genes. This however causes inconsistency problems. Specifically, an average of pairwise term information content tends to underestimate the similarities (e.g. two identical genes have a similarity of less than 1) while a maximum of pairwise term information content tends to overestimate the similarity (e.g. two genes sharing one annotation term have a similarity equal to 1). On the other hand, set-based similarity measures, such as Jaccard and Dice similarity [26], will generate 0 if the two genes do not share a common annotation term. This behavior is especially undesirable for annotation terms from ontologies. The fuzzy measure-based similarity adopted and applied in our approach can overcome these problems and therefore could generate a better similarity measure than the traditional methods.
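The inconsistency noted above is easy to demonstrate with a toy example; the scoring below is deliberately simplified (exact-match pairwise scores) and is not the fuzzy measure used by ToppGene.

```python
# Toy comparison of the similarity behaviors discussed above: the maximum of
# pairwise scores saturates at 1 for a single shared term, while Jaccard
# similarity drops to 0 when no term is shared.
def max_pairwise(terms_a, terms_b):
    # 1 for an exact term match, 0 otherwise, as a stand-in pairwise score
    return max((1.0 if t1 == t2 else 0.0) for t1 in terms_a for t2 in terms_b)

def jaccard(terms_a, terms_b):
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b)

g1, g2 = {"GO:1", "GO:2", "GO:3"}, {"GO:3", "GO:8", "GO:9"}
print(max_pairwise(g1, g2))         # 1.0 -- overestimates: 1 of 3 terms shared
print(jaccard(g1, g2))              # 0.2
print(jaccard({"GO:1"}, {"GO:2"}))  # 0.0 even if the terms are close ancestors
```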
Figure 2. AUC of different feature sets.

Most of the current tools to enrich lists of genes or for candidate gene prioritization are based on GO, gene expression or pathways [2,4,27,28]. Previous studies have also shown that integrating multiple lines of evidence is good for candidate gene analysis. However, to the best of our knowledge none of the previous candidate gene prioritization approaches used mouse phenotype features, although the mouse is a key model organism for the analysis of mammalian developmental, physiological, and disease processes [14]. Additionally, there have been reports wherein a direct comparison of human and mouse phenotypes allowed for the rapid recognition of disease causal genes (for example, ROR2 as the Robinow syndrome gene [16]; the phenotype of the Abcc6-/- mouse shares calcification of elastic fibers with the human Pseudoxanthoma elasticum, PXE, pathology, caused by mutations in the human ABCC6 gene [15]). In this paper, for the first time, we use phenotype annotations for mouse orthologs of human genes as one line of evidence for candidate gene analysis. We are aware that comparing phenotypes between two different organisms may involve consideration of several issues. For instance, the mouse genotype may involve mutations to orthologs of one or more of the genes associated with a phenotype, but the mouse phenotype may not resemble the disease in human. Nevertheless, finding, for instance, that targeted disruption of the mouse ortholog of the human CFC1 gene (associated with visceral heterotaxy, which is characterized by congenital anomalies that include complex cardiac malformations and situs inversus or situs ambiguus [29]) results in L-R laterality defects including cardiac malformations [30] can lead to novel and interesting hypotheses. Although our results have conclusively demonstrated the utility of mouse phenotype data in human candidate gene analysis, there are some inherent limitations in using mouse phenotype annotations. For instance, MP is not a disease-centric ontology, and the phenotype of the same gene mutation can vary depending on specific mouse strains or their genetic backgrounds. Most importantly, orthologous genes need not necessarily result in orthologous phenotypes. We are currently working on a more efficient cross-species phenome extrapolation wherein the mouse phenotype terms are mapped to human phenotype concepts (from UMLS [31]) semantically ("orthologous phenotypes") and the resultant orthologous genes associated with an orthologous phenotype are identified. How to efficiently utilize this kind of information in human disease candidate gene prioritization is a topic of future research. Apart from the contribution of MP, the improved performance of ToppGene over other methods can be attributed partially to the usage of more comprehensive data resources. For instance, unlike ENDEAVOUR, the pathway data set in ToppGene is not limited to the KEGG resource. We compiled more than 700 additional pathways (associated with about 4800 human genes) from various sources (see Methods) and used them for gene prioritization. Our approach however has some limitations. First, by using a training set we assume that the disease genes we have yet to discover will be consistent with what is already known about a disease and/or its genetic basis, which may not always be the case. Second, it is important to note that the annotations and analyses provided and the prioritization by our approach can only be as accurate as the underlying online sources from which the annotations are retrieved. Only one-fifth of the known human genes have pathway or phenotype annotations, and there are still more than 40% of genes whose functions are not defined (see Methods). Third, an appropriate training set is important: although the difference was not significant, while cross-validating we noted that using larger training sets (> 100 genes) would decrease the sensitivity and specificity of the prioritization when compared to using smaller training sets (7 to 21 genes).

Figure 3. ROC curves of random-gene cross-validation based on scores. The red curve was generated using all feature sets (AUC score 0.913).
The blue curve was generated without Mouse Phenotype annotations (AUC score 0.893). The orange curve was generated without Mouse Phenotype and PubMed annotations (AUC score 0.888). See text for the definitions of sensitivity and specificity.

Figure 4. The performance of locus-region cross-validation using different feature sets. The average rank ratio (y-axis on the left) indicates the average rank ratio of the "target" genes in the resulting list, with a lower value corresponding to better performance. At the same time, the higher the number of top 5% ranked "target" genes among the total of 150 prioritizations (y-axis on the right), the better the performance. As a result, it is very clear that removing MP, PubMed or both resulted in a significant drop of performance.

Conclusion

Existing disease candidate gene prioritization methodologies mine biological and functional information about candidate genes, and we believe that our system, ToppGene, can complement these existing approaches by using a novel method that mines mouse phenotype data. The aim of ToppGene is to generate likely candidates by extensive analysis of all known characteristics of genes, and it is inevitably restricted by existing information, be it GO annotation, pathways, phenotype or gene expression data. Through various examples, we demonstrate that ToppGene performs better than SUSPECTS, PROSPECTR and ENDEAVOUR in candidate gene prioritization. However, it needs to be emphasized that our aim is not to prove that ToppGene prioritized genes are true disease genes but to aid in the selection of a subset of most likely disease gene candidates from larger sets of disease-implicated genes identified by high throughput genome-wide techniques like linkage analysis and microarray analysis. For the first time, we have used mouse phenotype data in human disease candidate gene analysis. Our results demonstrate that employing the mouse phenotype data improves candidate gene prioritization significantly and can therefore aid in the process of focusing the search for the most likely human disease gene candidates. Lastly, as the functional annotations of human and mouse genes improve, especially the mouse phenotype annotations, we envisage a proportional increase in the performance of ToppGene and strongly believe that it will be a valuable adjunct to wet lab experiments in human genetics and disease research.

Data sources

We used seven data sources (6 human-related and 1 mouse-related) to prioritize the gene candidates (see Figure 5). 1. Gene Ontology (GO): Gene Ontology [32] was downloaded from the GO web site [33]. Corresponding human GO-gene annotations were downloaded from the NCBI Entrez Gene ftp site [18]. This data set contained 15,068 human genes annotated with 7,124 unique GO terms. GO Molecular Function (GO:MF) and GO Biological Process (GO:BP) were considered as separate features since, although they belong to the same annotation family (GO), they have separate roots and term spaces. 2. Mammalian Phenotype (MP): The MP ontology [17], mouse gene phenotype annotations, and the corresponding orthologous genes from human were downloaded from the Mouse Genome Informatics (MGI) website [34]. This data set contained 4329 human genes compiled by extrapolating the mouse genes annotated with 4280 mouse phenotype terms. 5.
PubMed: Gene-PubMed ID relations were downloaded from the NCBI Entrez Gene ftp site [18]. This data set contained 25,294 distinct genes associated with at least one PMID (a total of more than 142,000 PubMed abstracts). About 32% (44,806) of these papers were associated with at least two genes.

Pre-processing of annotation terms

A pre-processing step was performed prior to using the eight features for candidate gene prioritization. The information content values of all categorical annotation terms, namely, GO:MF, GO:BP, MP, Pathways, Protein Domains, PubMed, and Protein Interaction annotations, were calculated. The information content ($g_i$) of annotation term $T_i$ of a gene was calculated and later used as the fuzzy density of the term.

Processing of training set genes

The training process was to create a representative profile of the training genes based on all the 8 annotations (features). For categorical gene annotations this process was to identify the over-represented terms from the training genes. The hypergeometric distribution with Bonferroni correction was used as the standard method. For the numeric gene annotation, i.e. microarray expression levels, the training process generated the average (a vector of size 79) of all the training genes.

Similarity measure

Again, different methods were used for the similarity measures of categorical and numeric annotations. A fuzzy measure-based similarity measure was applied for categorical terms. The following part explains the method in detail. If $G = \{T_1, \ldots, T_n\}$ denotes the set of annotation terms of a gene, a Sugeno fuzzy measure, $g$, is a real valued function $g: 2^G \to [0, 1]$ satisfying 1) $g(\Phi) = 0$ and $g(G) = 1$, and 2) for disjoint subsets $A, B \subseteq G$, $g(A \cup B) = g(A) + g(B) + \lambda\, g(A)\, g(B)$ for some $\lambda > -1$. For a given gene annotation set $G$, the parameter $\lambda$ of its Sugeno fuzzy measure can be determined uniquely by solving the following equation:
$$\lambda + 1 = \prod_{i=1}^{n} \left(1 + \lambda\, g_i\right), \qquad (3)$$
where $g_i$ is the fuzzy density of term $T_i$, i.e. the information content obtained in the pre-processing step, and $n$ is the number of terms in $G$. The fuzzy measure-based similarity (FMS) of two sets $G_1$ and $G_2$ of annotation terms is defined in terms of the fuzzy measures of the shared terms $G_1 \cap G_2$, which can be derived based on the values of $\lambda_1$ and $\lambda_2$ determined using equation (3). For ontological terms, the augmented FMS (AFMS) was used to account for the hierarchical structure of ontology annotations, where $[G_1 \cap G_2]^+ = [G_1^+ \cap G_2^+] = [G_1 \cap G_2] \cup \{T_{1i}, T_{2j}\}$, $G_1^+ = G_1 \cup \{T_{1i}, T_{2j}\}$, $G_2^+ = G_2 \cup \{T_{1i}, T_{2j}\}$, and $\{T_{1i}, T_{2j}\}$ denotes the set of most specific common ancestors of every pair of terms $(T_{1i}, T_{2j})$ from $G_1$ and $G_2$. This ensures that, for two genes annotated with ontological terms, the similarity measure is > 0 even when they do not share common terms (see Popescu et al [26] for additional details). For numeric annotation, i.e. the microarray expression values, the similarity score was calculated as the Pearson correlation of the two expression vectors of the two genes.

Processing of the test set genes

In this step, each of the genes from the test set was compared to the representative profile of the training set. As described earlier, the training profile contained the over-represented terms from the training genes for all categorical annotations and the average vector for the expression values. For a test gene, a similarity score to the training profile for each of the eight features was derived using the methods mentioned in the previous section. The test gene was then summarized by the 8 similarity scores. In case of a missing value (for instance, lack of one or more annotations for a test gene), the score was set to -1. Otherwise, it is a real value in [0, 1].
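Equation (3) has no closed-form solution in general, but its single admissible root can be found numerically. The sketch below uses bisection with invented fuzzy densities; the bracketing rests on the standard property that λ has the sign of 1 − Σg_i.

```python
# Sketch of numerically solving the Sugeno lambda-measure equation (3),
# prod(1 + lambda * g_i) = 1 + lambda, by bisection. The densities g_i
# below are invented. Standard property: lambda has the sign of 1 - sum(g_i).
import math

def solve_lambda(densities, tol=1e-12):
    f = lambda lam: math.prod(1 + lam * g for g in densities) - (1 + lam)
    s = sum(densities)
    if abs(s - 1) < tol:
        return 0.0
    lo, hi = (1e-12, 1.0) if s < 1 else (-1 + 1e-9, -1e-12)
    if s < 1:                       # grow the upper bracket until f(hi) > 0
        while f(hi) < 0:
            hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:     # root lies in [lo, mid]
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

g = [0.4, 0.3, 0.2]                 # sum < 1, so lambda > 0
lam = solve_lambda(g)
print(round(lam, 4), round(math.prod(1 + lam * x for x in g) - (1 + lam), 8))
```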
In order to combine the 8 similarity scores into an overall score, we applied a statistical meta-analysis. A p-value for each annotation of a test gene G was derived by random sampling from the whole genome. The p-value of similarity score $S_i$ was defined as the proportion of genes in the random sample whose score for that feature is at least $S_i$. Fisher's inverse chi-square method, which states that $-2\sum_{i}\ln p_i$ follows a chi-square distribution with $2k$ degrees of freedom (assuming the $p_i$'s come from $k$ independent tests), was then applied to combine the p-values from multiple annotations into an overall p-value. Since the p-values of GO:MF and GO:BP were highly correlated, a single p-value was generated by taking the p-value of the average of the GO:MF and GO:BP scores in the random sample. A pairwise Pearson correlation test result of the p-values is shown in Additional file 6. The final similarity score of the test gene was then obtained as 1 minus the combined p-value. We used random sampling to estimate the p-values because the density functions of the similarity scores were not easy to estimate, and although this process increased the computation time, for a reasonably large random sample the p-values were fairly stable.
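For concreteness, the Fisher combination referenced above looks like the following in code; the per-feature p-values in the example are invented, and scipy is assumed to be available.

```python
# Worked example of Fisher's inverse chi-square method: under independence,
# -2 * sum(ln p_i) follows a chi-square distribution with 2k degrees of
# freedom, giving a combined p-value. The p-values below are invented.
import math
from scipy.stats import chi2

def fisher_combine(pvalues):
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    return chi2.sf(stat, df=2 * len(pvalues))  # combined p-value

feature_pvalues = [0.04, 0.20, 0.01, 0.50]
combined = fisher_combine(feature_pvalues)
print(f"combined p = {combined:.4f}, final similarity score = {1 - combined:.4f}")
```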
2016-05-12T22:15:10.714Z
2007-10-16T00:00:00.000
{ "year": 2007, "sha1": "958620f2d25da676e7d16faca13b95cbaca20d19", "oa_license": "CCBY", "oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/1471-2105-8-392", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "958620f2d25da676e7d16faca13b95cbaca20d19", "s2fieldsofstudy": [ "Biology", "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Biology", "Computer Science" ] }
17928855
pes2o/s2orc
v3-fos-license
Protective measures and H5N1-seroprevalence among personnel tasked with bird collection during an outbreak of avian influenza A/H5N1 in wild birds, Ruegen, Germany, 2006 Background In Germany, the first outbreak of highly pathogenic avian influenza A/H5N1 occurred among wild birds on the island of Ruegen between February and April 2006. The aim of this study was to investigate the use of recommended protective measures and to measure H5N1-seroprevalence among personnel tasked with bird collection. Methods Inclusion criteria of our study were participation in collecting wild birds on Ruegen between February and March 2006. Study participants were asked to complete a questionnaire and to provide blood samples. For the evaluation of the use of protective measures, we developed a personal protective equipment (PPE)-score ranging between 0 and 9, where 9 corresponds to consistent and complete use of PPE. Sera were tested by plaque neutralization (PN) and microneutralization (MN) assays. Reactive sera were reanalysed in the World Health Organization-Collaborating Centre (WHO-CC) using an MN assay. Results Of the eligible personnel, consisting of firemen, government workers and veterinarians, 61% (97/154) participated in the study. Of those, 13% reported having always worn all PPE-devices during bird collection (PPE-score: 9). Adherence differed between firemen (mean PPE-score: 6.6) and government workers (mean PPE-score: 4.5; p = 0.006). The proportion of personnel always adherent to wearing PPE was lowest for masks (19%). Of the participants, 18% had received seasonal influenza vaccination prior to the outbreak. There were no reports of influenza-like illness. Five sera initially H5-reactive by PN assay were negative by WHO-CC confirmatory testing. Conclusion Gaps and variability in adherence demonstrate the risk of exposure to avian influenza under conditions of wild bird collection, and justify serological testing and regular training of task personnel. Background Severe human A/H5N1 infections were first observed during outbreaks of highly pathogenic avian influenza (HPAI) A/H5N1 among poultry in Hong Kong in 1997 [1]. Since its re-emergence in Asia in 2003, 438 human cases have been reported worldwide, of which 60% had a fatal outcome (as of 11 August 2009) [2]. The main risk factor for human A/H5N1 infection is direct contact with HPAI A/H5N1-infected animals [3]. In Germany, the first outbreak of HPAI A/H5N1 occurred among wild birds on the island of Ruegen in the federal state Mecklenburg-Western Pomerania, in northeastern Germany, between 8th February and 6th April 2006. Of 1,881 tested birds, 8.4% were laboratory-confirmed H5-positive. The most commonly affected birds were wild swans (90%) [4]. Soldiers of the German Federal Defence Force, professional firemen from Mecklenburg-Western Pomerania, firemen of the local auxiliary fire brigade and local government workers (administrative staff) participated in the collection of wild birds on Ruegen during the outbreak. In addition, local veterinarians collected and transported wild birds to the laboratory for testing. The Ruegen Health Office recommended protective measures for personnel tasked with wild bird collection according to official German recommendations [5][6][7]. These included use of personal protective equipment (PPE: headwear, protective goggles, masks, protective clothing, gloves and protective boots; Figure 1) during bird collection and current seasonal influenza vaccination.
Acting on the recommendation of the State Office of Health and Social Affairs Mecklenburg-Western Pomerania, antiviral prophylaxis (oseltamivir) was not recommended for the personnel tasked with bird collection who collected the birds using the recommended PPE. So far, no human A/H5N1 infection has been reported in Germany. However, the possibility of asymptomatic A/H5N1 infection after exposure to potentially A/H5N1-infected poultry has been reported in different countries, and the prevalence can be as high as 3-10% [8,9]. Poultry is known to have played a major role in the epizootic transmission of avian influenza to humans [10]. To date, only limited studies have been carried out to investigate the transmission of avian influenza to humans by close contact with potentially infected wild birds during outbreaks [11,12]. We launched an investigation to assess adherence to the use of protective measures among personnel tasked with wild bird collection during the HPAI A/H5N1 outbreak on Ruegen in order to improve recommendations to prevent exposure to potentially HPAI A/H5N1-infected animals. In addition, we searched for clinical symptoms of enrolled personnel during and after wild bird collection and measured their seroprevalence of anti-H5N1 antibodies in order to assess the risk of human A/H5N1 infection. Study participants We organized several information sessions on Ruegen in March 2007 in order to motivate study participation. Inclusion criteria of our study were participation in collecting wild birds on the island of Ruegen between February and March 2006. As soldiers of the German Federal Defence Force and professional firemen (according to the Ruegen Health Office: about 400 soldiers and 34 professional firemen) had been recruited for bird collection from all parts of Germany and could not be contacted at the time of investigation, they were not included in this study. Ethical clearance and data protection The study was approved by the Ethics Commission of the Charité, Universitätsmedizin, Berlin and the Commissioner for Data Protection and Freedom of Information of the German Federal Government and the State of Mecklenburg-Western Pomerania. Data collection Persons who agreed to participate in this study were asked to give written informed consent and to complete a questionnaire soliciting demographics (date of birth, sex, occupation), conditions of wild bird collection (species and status of collected birds, finding situation), use of protective measures during bird collection (kinds of PPE-devices used and frequency of PPE use), problems regarding PPE use (difficulties in adherence to PPE use, risk behaviour that could reduce the protective effect of PPE), seasonal influenza vaccination status, and acute respiratory symptoms during and up to 5 days after bird collection [see Additional files 1 and 2]. Influenza-like illness was defined as the presence of fever, cough, headache and muscle or limb pain. Evaluation of PPE use To evaluate adherence to the use of protective measures we constructed a PPE-score. The score integrated both completeness and frequency of use. Generally, masks, protective clothing, gloves and protective goggles were reported to be the most effective PPE-devices against influenza virus [13,14]. However, we considered goggles less effective against A/H5N1 infection as, in contrast to other avian influenza subtypes (e.g. A/H7), conjunctivitis has rarely been reported as a clinical manifestation of A/H5N1 infection [3].
Therefore, we considered masks, protective clothing and gloves more effective in protecting personnel tasked with bird collection than headwear, protective goggles and protective boots, and these were assigned scores of 2 and 1, respectively, if they were "always" used during bird collection. When a PPE-device was "sometimes" used, half the score was assigned (Table 1). Therefore, a person who indicated that he or she "always" used all PPE-devices during bird collection obtained a maximal score of 9 (3*2 + 3*1). We also calculated the PPE-device specific "adherence ratio" as the sum of all scores obtained for a specific PPE-device by all participants divided by the maximum possible score multiplied by the number of participants. Serological testing Persons who agreed to participate in this study were asked to provide a single 5 mL blood sample, which was sent to the German National Reference Centre for Influenza (GNRCI), Berlin. Unrefrigerated transport to the GNRCI took a maximum of 24 hours. Serum was extracted from blood and stored at -20°C at the GNRCI until tested for antibodies against A/H5N1 virus. The sera were tested by plaque neutralization (PN) and microneutralization (MN) assays using the reference virus strain A/whooper swan/R65-2/Germany/2006 (H5N1), which was taken directly from an infected swan during the outbreak on Ruegen. Reactive sera were reanalysed by the World Health Organization-Collaborating Centre for Reference and Research on Influenza (WHO-CC), London, by MN assay using the reference virus strains A/bar-headed goose/Qinghai/1A/2005 (H5N1) and A/whooper swan/Mongolia/244/2005 (H5N1), which were isolated from infected birds in Asia. The reference strains used by the GNRCI and the WHO-CC belong to the same cluster of clade 2.2. Serum samples were considered to be reactive by PN or MN assay if the anti-H5 titre was > 1:20. In addition, an MN assay was performed by the GNRCI using the reference virus strains A/New Caledonia/20/99 (H1N1) and A/Wisconsin/67/05 (H3N2) to analyse the possible presence of cross-reactive antibodies to human influenza A/H1N1 and A/H3N2. Statistical analysis The data were analysed using Excel (version 11, Microsoft Corporation, Redmond, Washington, USA) and Stata (version 9.0, StataCorp LP, College Station, Texas, USA). Mean PPE-scores are presented with a 95%-confidence interval (95%-CI). Associations between human influenza vaccination status and the PN assay result, and between human influenza vaccination status and acquiring acute respiratory symptoms, were tested by the chi-square test. Fisher's exact test was used when expected cell values were below 5. The Mann-Whitney test was used to assess factors influencing the PPE-score. A p-value of less than 0.05 was considered statistically significant. PPE use during bird collection Of 94 participants, 12 (13%) reported having always worn all PPE-devices during bird collection (PPE-score: 9); 91 (97%) reported having ever used at least one PPE-device during bird collection. The mean calculated PPE-score of the 94 participants was 6.3 (95%-CI: 5.9-6.8), and the median was 6.9 (interquartile range: 8-5 = 3). Firemen had the highest PPE-score of all three groups. They applied PPE significantly more frequently and completely than government workers (p = 0.006; Table 2). Both the PPE-device specific adherence ratio and the proportion of participants reporting having always been adherent to wearing the respective PPE-device were highest for gloves, second highest for protective boots and lowest for masks (Table 3).
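The PPE-score and adherence ratio defined above are simple enough to express directly; the sketch below encodes the weights from the text, and the participant responses in the example are fabricated.

```python
# Sketch of the PPE-score computation described above (device weights of
# 2 or 1, half credit for "sometimes"). The example responses are fabricated.
WEIGHTS = {"mask": 2, "clothing": 2, "gloves": 2,
           "headwear": 1, "goggles": 1, "boots": 1}  # maximum total = 9
CREDIT = {"always": 1.0, "sometimes": 0.5, "never": 0.0}

def ppe_score(usage):
    """usage: {device: 'always' | 'sometimes' | 'never'}"""
    return sum(WEIGHTS[d] * CREDIT[u] for d, u in usage.items())

def adherence_ratio(device, all_usage):
    """Sum of scores for one device across participants over the maximum possible."""
    got = sum(WEIGHTS[device] * CREDIT[u[device]] for u in all_usage)
    return got / (WEIGHTS[device] * len(all_usage))

participants = [
    {"mask": "always", "clothing": "always", "gloves": "always",
     "headwear": "always", "goggles": "always", "boots": "always"},   # score 9
    {"mask": "sometimes", "clothing": "always", "gloves": "always",
     "headwear": "never", "goggles": "never", "boots": "always"},     # score 6
]
print([ppe_score(p) for p in participants])   # [9.0, 6.0]
print(adherence_ratio("mask", participants))  # 0.75
```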
FFP3 masks (33/72) and multilayer surgical masks (29/72) were the most commonly used types among participants who reported having always or sometimes used masks during collection of wild birds. No differences in the mean mask score were found between the types of masks used (p = 0.11).

Problems regarding PPE use
Difficulties in adherence to recommended PPE use were reported by 24 of the 88 participants answering this question. The most commonly reported problems were short supply of PPE-devices (5/24) and mobility constraints (3/24). More specifically, 25/86 participants reported that PPE-devices had interfered with their work, particularly protective goggles, masks and protective clothing. Regarding behaviour of personnel tasked with bird collection that potentially reduced, or created gaps in, the protective effect of PPE, 45% (41/92) of participants reported using a mobile phone at least once during bird collection and 33% (30/92) reported driving an automobile while wearing protective clothing.

Acute respiratory symptoms during and after collecting birds
Of 90 participants, 7 (8%) reported symptoms of acute respiratory disease during the period of bird collection or up to 5 days thereafter. Reported symptoms were cough (7/7), cold (5/7), headache (4/7), and muscle or limb pain (4/7). No participant reported fever. Thus, no reports fulfilled the case definition of influenza-like illness. Neither PPE-score (p = 0.33) nor influenza vaccination status (p = 0.28) was associated with acquiring acute respiratory symptoms.

Serological analysis
Blood samples were provided by 78/97 (80%) participants. All serum samples were screened by PN assay at GNRCI; 5 sera were reactive against H5. Three of the 5 sera were tested by MN assay at GNRCI, of which only one showed reduced viral replication. Retesting of the 5 reactive sera by MN assay at WHO-CC gave negative results. The 5 sera reactive in the PN assay were also tested by MN assay with human influenza A viruses, and all showed very high antibody titres against influenza A/H1N1 compared to A/H3N2. Information on influenza vaccination status was available for 4 of the 5 study participants whose sera were reactive in the PN assay by GNRCI. All had received at least one seasonal influenza vaccination prior to the serological analysis (September 2005 to December 2006). However, no association was found between human influenza vaccination status and the PN assay result among all study participants (p = 0.13). An overview of information on occupation, PPE use, influenza vaccination status, respiratory symptoms, H5-titres, H1-titres and H3-titres of these 5 participants is presented in Table 4.

Discussion
To our knowledge, this is the first systematic assessment of adherence to recommended protective measures used by personnel tasked with bird collection during a large outbreak of HPAI A/H5N1 in wild birds. The environmental conditions during wild bird collection differed considerably from those during culling of poultry in outbreaks of avian influenza. Study participants reported difficulties owing to a wet and cold environment during wild bird collection, and almost half the participants collected potentially infected birds that were still alive, which resulted in a high risk of exposure. These environmental conditions have been reported to be favourable for virus survival, as A/H5N1 viruses are more stable in wet and fresh feces of infected animals [15,16].
In contrast to other studies assessing adherence to recommended preventive measures during outbreaks of HPAI, we not only measured the proportion of PPE always used, but constructed a score to summarize both the completeness and frequency of PPE use while simultaneously considering differences in the protective effect of PPE-devices. Studies conducted after the HPAI A/H7N7 outbreak in poultry in the Netherlands in 2003 showed low self-reported adherence in the consistent use of masks and protective goggles among poultry farmers (6%, 1%) and cullers (25%, 13%) [17]. Based on the PPE-score, our study showed that PPE adherence differed between occupational groups as well, and was highest in firemen, who, similar to cullers, probably had more previous experience in the use of PPE owing to their occupation. However, this interpretation requires confirmation by further investigations. Our study also showed better adherence to using protective goggles among all personnel tasked with bird collection (37%). Compared with the results of the study after an A/H7N3 outbreak in poultry in Canada in 2004 (consistent use of masks: 83%, gloves: 85% and protective goggles: 55%), the adherence in our study was poor (19%, 78%, 37%) [18]. However, the better result for adherence measured using the PPE-score method (46%, 88%, 51%) suggests an underestimation of the true adherence when the analysis is restricted to the consistent use of PPE. Interference of PPE with the task of wild bird collection was reported in particular for protective goggles and masks, in keeping with the lowest adherence for these two PPE-devices. The use of mobile phones during bird collection could have reduced the adherence of mask use as well. To our knowledge, other studies have not specifically addressed the question of gaps or barriers reducing adherence. Since 2003, the German Committee for Biological Agents has recommended seasonal influenza vaccination for people exposed to A/H5N1-infected birds or poultry [6]. Even though seasonal vaccination does not protect against infection with avian influenza, it can potentially reduce opportunities for reassortment by avoiding the simultaneous infection of humans with avian and human influenza viruses. After an influenza vaccination, the development of an immune response takes about 2 weeks [19]. In our study, all participants had been offered seasonal influenza vaccination. However, 53% of them were unvaccinated. Among all participants, 29% had received a seasonal influenza vaccination only shortly before bird collection in February 2006 and may not have developed immunity against seasonal influenza during the first few days of bird collection. The proportion of study participants with a seasonal influenza vaccination prior to the outbreak (18%) in our investigation was similar to the proportions in a study carried out after an outbreak of A/H7N3 in Canada (21%) [18] and in a study after an outbreak of HPAI A/H5N1 in England (16%) [20]. However, this proportion is lower than influenza immunization coverage in the general population of Germany in the season 2005/06 (32.5%) [21]. In our study, differences were found between PN and MN results. Five sera reactive to A/H5N1 in the PN assay could not be confirmed by MN assay. Discrepant results between serological assays were also observed for sera in the study in the Netherlands: A/H7N7-reactive sera initially tested by haemagglutination inhibition (HI) assay were all negative by an MN assay [22].
Therefore, the MN assay showed higher specificity than the PN and HI assays. The lack of concordance of results between GNRCI and WHO-CC might also be explained by the use of different reference strains, even though both belong to clade 2.2. A possible cross-reaction of A/H1N1 antibodies in high concentrations with the A/H5N1 reference virus was found by MN assay at GNRCI. It has been postulated that seasonal influenza vaccination may boost the anti-N1 response and could therefore lead to false-positive results against A/H5N1 virus because of cross-reactions between A/H1N1 and A/H5N1 [23]. As no association between human influenza vaccination status and the PN assay result was found among study participants, it is unclear whether the cross-reaction between A/H5N1 and A/H1N1 could explain the reactivity by PN assay. A limitation of our study is its conduct one year after the outbreak. Some study participants might have been unable to remember the details of their activities during the outbreak in 2006, which could reduce the validity of the study findings. Influenza antibody titres may also decay over time [24], so the serological investigation might have failed to reveal seroconversion to H5 owing to the length of time elapsed since exposure. Because soldiers of the German Federal Defence Force and professional firemen could not be included in this study, this investigation was based on participation of local personnel who performed the initial response to the outbreak. As this study conducted on Ruegen was voluntary, not all individuals involved in bird collection were included. Participation among government workers and veterinarians was high at > 80%; therefore, this study provides a good assessment for these two groups. The participation among firemen, however, was lower (55%), which might have led to selection bias, as the firemen taking part in this study were possibly more motivated and interested and used PPE more intensively than their non-responding colleagues. Therefore, the findings from this group could be less reliable. This is another limitation of this study. Evidence of risk factors or of the protective effect of the protective measures could not be further analysed in our study because no A/H5N1-positive case or influenza-like illness was detected. Our findings should be complemented by future studies under other operating conditions.

Conclusion
As every human infection with avian influenza presents a chance for further adaptation of the virus and might lead to severe disease with a high case fatality, adherence to recommendations for the use of protective measures needs to be improved to reduce the risk of exposure to A/H5N1 infection. Personnel with potential involvement in bird collection during wild bird outbreaks should be identified in advance and offered early and regular training in PPE use, particularly regarding the use of masks and protective goggles and the environmental conditions of wild bird collection. Only persons vaccinated against seasonal influenza should be admitted to participate in bird collection. Problems regarding PPE use and behavioural risk factors should be taken into account in the recommendations to avoid gaps in the use of PPE or reduction of its protective efficacy. Recommendations should also consider aspects of work organisation to prevent avoidable risks, e.g. by assigning separate personnel for transport or communication.
Possible exposure to infected animals and adherence to recommendations should be assessed systematically and in a timely manner. The potential risk of bird-to-human transmission during HPAI outbreaks among wild birds, even if it could not be quantified here, justifies early follow-up of exposed persons by means of serological testing using high-specificity assays.

SB participated in the design and coordination of the study, data and blood collection and helped to draft the manuscript. ML participated in the design of the study. JH participated in the design and coordination of the study, data and blood collection. WH participated in the design and coordination of the study, data and blood collection, assisted in the statistical analysis, helped to draft the manuscript and made final revisions of the manuscript. All authors read and approved the final manuscript.
Long-stay in forensic-psychiatric care in the UK

Purpose
Forensic services provide care for mentally disordered offenders. In England this care is provided at three levels of security: low, medium and high. A significant number of patients within these settings remain detained for protracted periods of time. This is both very costly and highly restrictive for individuals. No national studies have been conducted on this subject in England.

Methods
We employed a cross-sectional design using anonymised data from medical records departments in English secure forensic units. Data were collected from a large sample of medium secure patients (n = 1572) as well as the total high secure patient population (n = 715) resident on the census date (01-04-2013). We defined long-stay as a stay of more than 10 years in high, 5 years in medium or 15 years in a mix of high and medium secure settings. Long-stay status was assessed against patient demographic and admission information.

Results
We identified a significant proportion of long-stayers: 23.5% in high secure and 18.1% in medium secure care. Amongst medium secure units a large variation in long-stay prevalence was observed, from 0 to 50%. Results indicated that MHA section, admission source and current ward type were independent factors associated with long-stay status.

Conclusion
This study identified a significant proportion of long-stayers in forensic settings in England. Sociodemographic factors identified in studies in individual settings may be less important than previously thought. The large variation in prevalence of long-stayers observed in the medium secure sample warrants further investigation.

Background
The purpose of forensic-psychiatric care is to improve the mental health of mentally disordered offenders whilst reducing their risk of recidivism. Forensic-psychiatric services in England provide care and treatment for mentally disordered offenders in high, medium and low secure in-patient facilities as well as in the community. High secure units (HSUs) admit individuals detained under the Mental Health Act who "require treatment under conditions of high security on account of their dangerous, violent or criminal propensities" [1]. Medium secure units (MSUs) were developed in the late 1970s to bridge the gap between high secure and general psychiatric care and are designed for those patients detained under the Mental Health Act who "pose a serious danger to the public" [1]. There are currently three high secure hospitals in England providing just over 700 beds, and around 60 medium secure units providing around 3500 medium secure beds, with nearly 35% of those beds provided by the independent sector. Since the 1950s there has been an increasing tendency towards deinstitutionalisation, with more patients being treated in community settings rather than as inpatients in general psychiatric hospitals. This process has been consistently associated with greater user satisfaction, increased met needs, and better outcomes on adherence to treatment, clinical symptoms and quality of life [2][3][4][5]. Whilst bed numbers have decreased in general psychiatric hospitals, they have actually increased in forensic-psychiatric services over the same period. A significant number of patients in secure units remain in care for extended periods of time. In addition, it has previously been found that between one- and two-thirds of patients in high secure settings in the UK did not need that level of security [6][7][8][9][10].
Meanwhile, it appears that the average length of stay (LoS) within MSUs is on the rise [11], and it is similarly expected that a substantial proportion of patients will be placed under restrictions inappropriate to their level of risk. There are strong ethical and financial concerns arising from potentially unnecessarily protracted stays in secure care. Secure settings are extremely restrictive, characterised by a loss of privacy, repetitive daily routines and low-stimulation environments. Although this may be necessary for some patients, it is of concern that some individuals remain in secure care for potentially inappropriate lengths of time. Secure care provision is also very expensive. MSUs in the UK, for example, cost around £175,000 per annum per patient, consuming £1.2 billion per annum; this is 1% of the entire NHS budget and 10% of the mental health budget [1,12]. Services must therefore aim to target only those individuals who require and will benefit from them. At the same time, there is a suggestion that community-based services do not provide sufficient levels of care for a sub-group of forensic patients, for whom 'deinstitutionalisation' may not be appropriate [13]. There has been some recognition of this group of patients at the international level. A review of the international literature revealed two European countries that have responded proactively to the needs of patients who require long-term forensic-psychiatric care: in both the Netherlands and Germany, long-stay units have been developed. Purposefully designed long-stay wards in the Netherlands may attract some cost savings compared to regular treatment wards as well as increased patient satisfaction due to their focus on quality of life [14]. Thus, identifying the characteristics of long-stay patients can support service improvements, not only to better facilitate patient discharge, but also to aid the development of more cost-effective pathways with better quality of life for patients genuinely requiring longer term care. It is therefore imperative to identify the characteristics of long-stay patients and the factors behind their LoS in order to design appropriate service models to meet their particular needs. Previous studies in secure settings have identified a number of predictors of LoS: severity of index offence, psychopathology, referral from another secure or psychiatric setting, restriction orders, and lack of facilities with lower levels of care and security [11,15,16]. However, these previous studies have been based upon samples from single units, and no national studies are currently available on long-stay in forensic-psychiatric care in England. This may hinder the provision of services to support the discharge of this patient group or to improve their quality of life within secure care [17,18].

Aims and objectives
Using a cross-sectional approach, this study aimed to: (1) identify the prevalence of long-stay in high secure units in England; (2) estimate the prevalence of long-stay in medium secure units in England and (3) identify individual-level sociodemographic and service factors associated with long-stay amongst patients in high and medium secure care in England.

Design and approval
A cross-sectional design was used, collecting anonymised data on patients resident on the census date (01-04-2013) in all high secure units and in a sample of medium secure units in England. Individuals who were on trial leave at the time were excluded.
Data were submitted to the research team in anonymised form and only included routinely collected data. As such, the study did not require ethical approval and was deemed to fall within the remit of service evaluation by the sponsoring organisation.

Definition of 'long-stay'
Patients were categorised into long-stayers and non-long-stay patients. Our piloting data from one high secure care setting suggested that just over 15% of patients stayed for over 10 years. For medium secure care, the literature suggests that 10-20% stay 5 years or longer. For our study, we aimed to capture the extreme end of long-stay; therefore, a cut-off that would capture around 15-20% of the population seemed appropriate. This is also the percentage of patients in long-stay services in countries where dedicated long-stay services exist. Allocation to 'long-stay' status was determined by total time of continuous stay in high and/or medium secure care, i.e. from admission to any such setting to the census date. Long-stay was defined as five or more continuous years in medium secure care OR ten or more continuous years in high secure care OR a combination of high and medium secure settings totalling 15 years or more of continuous secure care. Assignment to long-stay status was initially made by review of the admission date to the current unit. If patients did not fall within the long-stay category based on their LoS in the current unit, we then enquired about admission source and, if the admission source was high or medium secure care, asked medical records staff or responsible clinicians whether the individual fulfilled our criteria for long-stay. Unfortunately, the date of first admission to secure care was not readily available in all cases, i.e. clinicians would be able to say with confidence that a particular individual had been in services in excess of our cut-off, but without being able to identify the exact overall LoS in secure care; this is the reason why we were not able to use LoS as a continuous variable in the analyses.

Selection of participating units
All three high secure units in England were included. A stratified cluster sampling frame was adopted for MSUs, with 23 units sampled. This included 14 NHS and 9 independent units, drawn according to geographical region (according to boundaries of the then 10 Strategic Health Authorities), size and specialisation, with oversampling of units specialising in particular patient groups (e.g. patients with intellectual disabilities). This sample represents approximately 40% of all MSUs in England. One medium secure unit was included in regions with 1 to 3 units, 2 in regions with 4 or 5 units, 3 in regions with 6 or 7 units, 4 in regions with 8 or 9 units and 5 in regions with 10 or more medium secure units. Based on patient numbers, 11 (48%) of these units were classed as small (≤ 50 patients), 7 (30%) were medium-sized (51-99 patients) and 5 (22%) were large (≥ 100 patients). Units were located across all English regions: North East (n = 1), North West (4), Yorkshire and the Humber (2), East Midlands (2), West Midlands (2), East of England (4), London (3), South East (2), South Central (1), South West (2).

Data collected
Data for both HSUs and MSUs were collected through medical records departments on length of stay and basic patient and admission characteristics. Data collected were based on information known to be readily available from administrative systems on the basis of a pilot trial conducted in one HSU and one MSU.
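The allocation rule defined above translates directly into a simple classifier. A minimal sketch follows, with hypothetical inputs; note that in the study itself status was assigned from admission dates and clinician enquiry rather than from a complete pair of stay durations.

```python
# A minimal sketch of the long-stay rule: >=10 continuous years in high
# secure care, OR >=5 in medium secure care, OR >=15 years in a
# combination of the two. Example durations are hypothetical.

def is_long_stay(years_high: float, years_medium: float) -> bool:
    """Classify a patient as a long-stayer from continuous years of stay."""
    return (years_high >= 10
            or years_medium >= 5
            or (years_high + years_medium) >= 15)

print(is_long_stay(11, 0))  # True  (high secure criterion)
print(is_long_stay(2, 6))   # True  (medium secure criterion)
print(is_long_stay(9, 4))   # False (no single criterion met; total is 13)
print(is_long_stay(4, 3))   # False
```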
This included the following variables: date of admission to current unit, age, gender, ethnic class (White, Black, Asian, mixed, other), admission source, current Mental Health Act (MHA) section, diagnostic specification of current ward [mental illness, personality disorder (PD), comorbidity, intellectual disability (ID), neuropsychiatry, mixed diagnosis, other, cannot assign] and stage of treatment specification of current ward (admission/assessment, treatment, high dependency, long-stay/slow stream, pre-discharge/rehab, mixed assessment/treatment, other, cannot assign).

Data analysis
Admission source was collapsed into community (any non-secure psychiatric settings, including Psychiatric Intensive Care Units, non-institutional settings, police stations), low, medium and high secure settings, and prison. MHA section was categorised as civil/quasi-civil (s2, 3, 37, 37(N), 41(5), 47), hospital orders with restriction (s37/41, CPIA), prison transfer (s47/49, 48/49), pre-sentencing (s35, 36, 38) and other. (The different sections of the MHA differ with regard to the power of decision-making over transfer and discharge and the possibility of a move back to prison, amongst other things. At the simplest level, the sections designated here as 'civil/quasi-civil' allow the clinical team to decide upon the patient's placement, while patients on a restriction order require agreement by the Ministry of Justice for any transfer or discharge. Those on prison transfer orders can be moved back to prison as long as their sentence has not yet been served in full.) Data analysis was conducted separately for patients in high and medium secure settings, as factors determining length of stay might differ between the two settings, and only the former constitutes the full population. Results presented for the medium secure analysis are adjusted for sampling weights. Summary statistics were taken of all included variables. National-level information on the demographic and service use variables used in the sampling stratification is currently unavailable, which precluded adjustment for unequal probability of selection within the medium secure sample. The variability of long-stay status across units was first investigated in a two-level logistic regression with the secure unit as the level-two analytical unit. Results showed no significant unit-level variability amongst HSUs, but statistically significant variability among MSUs (var = 0.491, 95% CI 0.186-1.292, intra-class correlation: 13.0%). Exploratory analysis further showed non-significant region-level variance among units. Therefore, a general logistic regression was used to explore the association between long-stay and various influential factors for the high secure sample, and a two-level logistic regression was used for the medium secure sample. As exploratory analysis showed there were some missing values for some influential factors (ethnic class: 7.4%, admission source: 9.1%), the robustness of the results was assessed by comparing them with the results of modelling data with missing covariates imputed by means of multiple imputation. All analysis was conducted using STATA 15. For the high secure sample, univariate analysis was conducted with each factor entered individually into a logistic regression model to allow comparison of associations with and without adjustment for other factors. For the medium secure sample, a series of two-level logistic regressions were run, with the secure unit entered as the level-2 analytical unit.
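The reported intra-class correlation can be reproduced from the unit-level variance if, as is standard for random-intercept logistic models, the level-1 residual variance is fixed at π²/3 on the latent scale. A small check under that assumption:

```python
import math

def icc_logistic(unit_variance: float) -> float:
    """Latent-variable ICC for a random-intercept logistic model:
    ICC = var_u / (var_u + pi^2 / 3)."""
    return unit_variance / (unit_variance + math.pi ** 2 / 3)

print(round(icc_logistic(0.491), 3))  # 0.13, matching the reported 13.0%
```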
All factors were subsequently entered simultaneously into a multivariate model. As this study is exploratory, no hypotheses were made regarding the analysis.

Prevalence of long-stay
According to our criteria, the prevalence of long-stay across all three English high secure settings was 23.5%, ranging from 21.6 to 26.5%. Within the medium secure sample, the prevalence of long-stay was 18.1%. There was a wide degree of variation between the medium secure units in our sample, with long-stay prevalence ranging from 0 to 50%. With sampling weights and adjustment for unit-level variance, the predicted probability of long-stay in the medium secure sample was 16.9% (95% CI 12.7%, 21.1%) (Table 1).

Factors associated with long-stay status
Variables entered in the logistic models included MHA section and admission source; for the high secure sample, ward diagnostic category was entered additionally, and for the medium secure sample, ward pathway category. For the high secure analysis, the categories 'low secure unit' from admission source and 'pre-sentencing' and 'other' from the MHA section variable were omitted given the inadequate number of long-stay cases; the latter two categories were also omitted from the medium secure analysis.

High secure care
Results for the high secure population are shown in Table 2. The multivariate analysis found that demographic variables (gender and ethnic class) were not significantly associated with long-stay. For ethnic class, an additional analysis of white patients compared to non-white was also non-significant (not shown in table). Compared with patients admitted on s37/41, the other MHA section types were associated with a significantly reduced likelihood of long-stay status: those with a civil/quasi-civil section had 42% reduced odds, and patients on a prison transfer had 68% reduced odds. Compared with prison admissions, admission from high secure care or from the community was associated with a significantly increased likelihood of long-stay, whereas admission from medium secure care was non-significant (OR = 1.257, p = .369). It should be noted that there were only 5 cases admitted from the community in the high secure sample, and estimates may not be reliable for this group. Diagnostic ward categorisation was a significant factor when comparing intellectual disability against personality disorder wards, with cases from the latter presenting with a reduced likelihood of prolonged stay. It may also be noted that patients on intellectual disability wards were more likely to be long-stayers compared to mixed-type and mental illness wards at the marginal significance level (p = .081 and 0.076, respectively, not shown in Table 2).

Medium secure care
Similarly to the high secure population, the multivariate results for the medium secure sample in Table 3 show that demographic variables (gender and ethnic class) were not significantly associated with long-stay. For ethnic class, an additional analysis of white patients compared to non-white was also non-significant (not shown in table). Regarding MHA section, compared with patients sectioned on hospital orders with restrictions, those on a civil/quasi-civil section had 63% reduced odds of being a long-stayer, and the odds were reduced by 65% for prison transfer patients. Admission source was also a significant factor: compared to prison-admitted patients, cases arriving from medium and high secure settings had approximately eight times the odds of being a long-stayer.
For ward diagnostic category, patients in learning disability wards showed increased odds of long-stay compared to all other ward types, though none of the results reached the significance level.

Discussion
This study sought to assess the prevalence of long-stay as well as some of the key determinants of long-stay status within high and medium secure care settings in England. According to our specified criteria, 24% of patients within high secure units were classified as long-stayers, as were an estimated 17.4% in medium secure care. There is limited research identifying how many patients stay for extended periods of time in high or medium secure hospitals in England, and comparisons with previous studies are difficult to draw, as previous research has used different cut-offs for 'long-stay', calculated LoS in patients' current unit only rather than over continuous care, and sampled from single units. In high secure care, Dell et al. [16] found that 44.4% of patients had exceeded the average LoS of 8 years in their study at one high secure hospital. This would appear to be a higher figure than ours, though their study used a lower LoS cut-off; in addition, the data from that study are now 20 years old, and policy and pathways have changed, not least because the accelerated discharge programme has since taken place [19], targeting some of the residents in the Dell et al. [16] study. Meanwhile, in medium secure settings, studies using our cut-off of 5 years for LoS in England reported figures of between just under 10% and just over 20% [20][21][22]. Two of these figures are lower than ours, but these used current-unit LoS and sampled from single units. Given the huge variation in prevalence between units in our study, it is clear that research in one single setting does not provide a useful national picture of LoS. The large variation in prevalence of long-stay in medium secure care is worth noting. One of the units included here had a ward set up specifically for those leaving high secure care as part of the accelerated discharge programme; therefore, a higher percentage of long-stayers in this unit was expected. On the other hand, about two-thirds of the high and half of the medium secure long-stay group were admitted from the same or lower levels of security. Variation in long-stay numbers may arise as a result of the different patient groups (e.g. those with PD or LD) catered for; some studies have also identified variation in admission rates by geographical location due to differences in social deprivation, ethnic class and availability of low secure beds [23]. These factors are unlikely to fully account for the differences in long-stay, though, particularly as we did not find some of them, e.g. ethnic class, to be associated with long-stay status. There are no national standards with regard to admission criteria for medium secure care beyond the patient being a "serious danger to the public" [1], and it is possible, though this cannot be confirmed by our study, that individual units adopt their own (implicit or explicit) criteria, such as not admitting patients with little prospect of moving on to less secure settings or being discharged. Alternatively, it is possible that the interventions offered in units with a higher proportion of long-stayers are less effective in allowing patients to move on. Our final models suggested that MHA section, admission source and current ward type were each independently associated with long-stay status.
Previous studies have produced somewhat conflicting findings with regard to associations between sociodemographic factors and LoS, though most have not found such a relationship. Two previous studies identified that non-white patients had a shorter LoS than white ethnic groups [21,22], and studies that looked at gender differences have found shorter LoS in females [24]; notably, though, their longer-term outcome seems to be worse [25]. We did not find any difference between long-stayers and non-long-stayers on gender or ethnic class; the higher percentage of white ethnic class among long-stayers in the medium secure setting failed to reach statistical significance. Unsurprisingly, long-stayers were older than non-long-stayers in both high and medium secure care. The large number of older patients, with about one-third of the long-stay population being over 50, has important implications for service planning for this patient group. In line with other research in individual settings [15,24,26], our national study has also identified an association between MHA section and long-stay status in both medium and high secure patients, with significantly more patients in the long-stay groups on hospital orders with restrictions and fewer on prison transfers. This reflects the practical realities of this section, in that it does not allow return transfer back to prison for those who may no longer benefit from hospital treatment. Compared to those on civil sections (or quasi-civil sections, such as hospital orders without restrictions), these patients also require Ministry of Justice approval for moves to other secure settings, another reason for the potential delay in their transfer. The data on admission source additionally reflect potential challenges in the smooth transfer of this patient group along a pathway from more to less secure settings, as identified by others (e.g. Tetley et al. for PD patients [27]). Such pathways typically describe the journey of a patient from more to less secure units, and ideally back into the community. If dedicated long-stay services were developed in England in the future, decisions would also have to be made about the point at which such services would become part of this pathway. A number of authors have suggested that a lack of secure services for LD patients might contribute to their higher LoS [28], and most studies have found that severe mental illness was associated with longer, and PD with shorter, LoS [11]. This study did not use formal diagnostic data, but diagnostic ward type was used as a proxy and reflects these findings. It should be noted, though, that diagnostic ward type concerns whether particular groups will be admitted, not that the unit will be entirely populated by patients with that diagnosis, and proportions may vary between units.

Limitations
This was a cross-sectional study, limiting any causal inference. Lack of diagnostic data meant we had to refer to ward diagnostic classification as a proxy measure, which may be unreliable. Demographic and admissions data were restricted to what was readily obtainable within the sample. Many variables other than those modelled in this study are likely to explain variation in long-stay.

Conclusion
There is a large number of patients resident in English high or medium secure settings who remain in those settings for prolonged periods of time, and the prevalence of long-stay varied greatly between medium secure settings, suggesting a potential lack of consistency in admission criteria and/or discharge procedures.
The large number of patients admitted from the same or lower levels of security is of concern and suggests a trajectory of movement within the system rather than progression outwards. These experiences can cause a significant amount of distress for patients and carers. To facilitate more effective treatment and discharge of long-stay patients from secure settings, further investigation of the characteristics and needs of this patient group is required in order to identify suitable therapeutic interventions. A national strategy for the management of this patient group might assist in this.
Genome-Wide Characterization of miRNAs Involved in N Gene-Mediated Immunity in Response to Tobacco Mosaic Virus in Nicotiana benthamiana

microRNAs (miRNAs) are a class of endogenous small RNAs (sRNAs) that play pivotal roles in plant development, abiotic stress response, and pathogen response. miRNAs have been extensively studied in plants, but rarely in Nicotiana benthamiana, despite its wide use in plant virology studies, particularly for studying N protein–tobacco mosaic virus (TMV) interactions. We report an efficient method using high-throughput sequencing and bioinformatics to identify genome-wide miRNAs in N. benthamiana. A total of 30 conserved miRNA families and 113 novel miRNAs belonging to 93 families were identified. Some miRNAs were clustered on chromosomes, and some were embedded in host gene introns. The predicted miRNA targets were involved in diverse biological processes, such as metabolism, signaling, and responses to stimuli. miRNA expression profiling revealed that most of them were differentially expressed during N-mediated immunity to TMV. This study provides a framework for further analysis of miRNA functions in plant immunity.

Introduction
MicroRNA (miRNA) is a pivotal category of small RNA (sRNA) used in the regulation of gene expression in eukaryotes. 1 miRNAs are approximately 21 nt endogenous noncoding RNAs that negatively regulate gene expression at the post-transcriptional level, by either repressing gene translation or cleaving target mRNAs. 2 Unlike animals, plants use Dicer-like (DCL) proteins to generate the stem-loop precursor miRNA (pre-miRNA) and process the miRNA:miRNA star (miRNA:miRNA*) duplex with two-nucleotide 3′ overhangs, 3 which is then transported from the nucleus into the cytoplasm by HASTY (HST). 4 Once 2′-O-methylated by Hua Enhancer 1 (HEN1), the mature miRNA strand is predominantly incorporated into argonaute-1 (AGO1)- or argonaute-10 (AGO10)-containing RNA-induced silencing complexes (RISCs) that inhibit gene expression by perfect or near-perfect complementarity to target transcripts. 5,6 Many miRNA sequences are highly conserved within the same kingdom, 7 whereas others are species specific. These non-conserved miRNAs are difficult to identify by conventional methods. However, recently established high-throughput sequencing technologies together with powerful bioinformatics tools have allowed efficient identification of not only conserved miRNAs but also low-abundance miRNAs in several plant species. [8][9][10] In plants, miRNAs are involved in diverse processes such as development 11,12 and responses to nutrient 13 and environmental stresses. 14 They also play critical roles in resistance to bacterial pathogens and viruses. For example, Arabidopsis treatment with flg22, a flagellin-derived peptide, increases the transcriptional level of miR393, which then negatively regulates the auxin receptors TIR1, AFB2, and AFB3 in bacterial resistance mechanisms. 15 In Arabidopsis, miR160a, miR398b, and miR773 participate in plant innate immunity against Pseudomonas syringae by regulating pathogen-associated molecular pattern (PAMP)-induced callose deposition. 16 In diverse plant species, miR482/2118 superfamily members target the P-loop motif coding sequence of resistance genes with nucleotide binding site (NBS) and leucine-rich repeat (LRR) motifs, which leads to RNA-dependent RNA polymerase 6 (RDR6)-dependent mRNA degradation and production of secondary small interfering RNAs (siRNAs).
17 Similarly, nta-miR6019 and nta-miR6020 from tobacco guide the cleavage of the mRNA encoding the TIR domain of the immune receptor N, which also leads to RDR6- and DCL4-dependent production of secondary siRNAs. 18 In accordance with the function of miRNAs in plant immunity, genes required for miRNA biogenesis are also required for resistance against bacterial pathogens. For example, both HEN1 and DCL1 are required for PAMP-triggered immunity (PTI). 19 The tobacco N gene belongs to the TIR-NB-LRR class of resistance (R) genes that confer resistance to tobacco mosaic virus (TMV). 20 When TMV attacks tobacco cells, p50, the TMV replicase fragment, is recognized by the N protein through direct interaction. This triggers a series of signal transduction cascades, which initiate a hypersensitive response (HR), inhibit TMV spread, and induce systemic acquired resistance (SAR). Interestingly, the N protein's function is temperature sensitive and reversible. 21 At temperatures above 28 °C, N-mediated HR is restricted and TMV spreads throughout the plant. When the temperature is below 28 °C, the N protein is reactivated, resulting in HR in TMV-containing tissues. In recent decades, many proteins have been identified by virus-induced gene silencing (VIGS) technology as participating in N-mediated signaling pathways. Like other TIR-NB-LRR proteins, the N protein requires enhanced disease susceptibility 1 (EDS1) for its function. 22 The jasmonic acid (JA) and ethylene signaling pathways have been implicated in the resistance response to TMV through their respective hormone receptors, COI1 and CTR1. The N protein occurs in a large complex with Rar1/SGT1, COP9 signalosome (CSN), and HSP90, suggesting that ubiquitin-mediated protein degradation and molecular chaperones play key roles in the N-mediated signaling pathway. [22][23][24] Two MAPK cascades, MEK1 MAPKK and NRK1 MAPK, function downstream of the recognition step. The transcription factors WRKY1-3 and MYB1 might function downstream of the MAPK cascades. 25 Although miRNAs have been implicated in plant immunity, whether miRNAs are involved in the N-mediated resistance pathway is still unknown. To address this question, we constructed three sRNA libraries of TMV-infected Nicotiana benthamiana plants from selected time points after N gene activation. Through library sequencing and analysis, we identified 30 families of conserved miRNAs and 93 families of N. benthamiana-specific miRNAs. Furthermore, we identified numerous candidate miRNAs and their putative targets that may participate in regulating N-mediated resistance to TMV. This is the first large-scale survey of miRNAs in N. benthamiana, and it has revealed putative miRNAs and targets that participate in the N-mediated resistance pathway.

Results
High-throughput sequencing of sRNAs in N. benthamiana
To probe miRNA regulation of N gene-mediated resistance to TMV, we deep sequenced (Solexa-Illumina) sRNAs from TMV-infected N. benthamiana plants containing the transgenic N gene 22 at zero, two, and eight hours after transfer from a five-day 32 °C treatment to normal growth conditions. We obtained 11,597,524 (zero hour), 10,492,893 (two hours), and 11,125,715 (eight hours) reads. After removing 3′ and 5′ adaptors and low-quality reads, we obtained 11,200,906 (zero hour), 10,129,898 (two hours), and 11,125,715 (eight hours) high-quality reads, ranging in size from 10 to 30 nt (Table 1). These high-quality reads were then used to determine the sRNA length distribution. sRNA lengths varied but were similarly abundant between samples.
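As an illustration of this step, a minimal sketch of tallying read lengths from a file of cleaned reads follows; the file name is hypothetical, and one sequence per line after each FASTA header is assumed.

```python
# A minimal sketch of the sRNA length-distribution tally performed for
# each library. `reads.fa` is a hypothetical FASTA file of adapter-
# trimmed, high-quality reads, one sequence per line after each header.

from collections import Counter

def length_distribution(fasta_path: str) -> Counter:
    """Count reads per length (nt), skipping FASTA header lines."""
    counts = Counter()
    with open(fasta_path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith(">"):
                counts[len(line)] += 1
    return counts

dist = length_distribution("reads.fa")
total = sum(dist.values())
for size in sorted(dist):
    print(f"{size} nt: {dist[size]} ({100 * dist[size] / total:.2f}%)")
```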
The most abundant size class was 19-24 nt sRNA, accounting for 77.31% (zero hour), 77.79% (two hours), and 83.38% (eight hours) of the sRNAs (Fig. 1). Of the major specific sRNA lengths, the 21 and 24 nt sRNAs were similarly abundant in all three samples and significantly more abundant than other lengths (P < 0.01, Table S8 and Fig. S2). To identify putative miRNAs in the pool of sRNA reads, we first removed other sRNA categories (rRNA, snRNA, snoRNA, tRNA) from our analysis. We identified the other sRNA categories by comparing the cleaned reads (see Materials and Methods) to entries in the annotated sRNA databases of GenBank (http://www.ncbi.nlm.nih.gov/genbank/) and Rfam (http://rfam.sanger.ac.uk). The remaining unannotated reads were mapped to the N. benthamiana genome (version 0.4.4). Next, all mapped reads were analyzed to identify candidate miRNAs. Although we excluded the other sRNA categories from further analysis, we note here that rRNA levels clearly increased from zero hour (1.67%) to two hours (6.24%), but had decreased slightly by eight hours (4.21%). A similar pattern was also found for tRNA, indicating that many functional genes are expressed immediately after N gene activation and that their expression peaked around two hours after the transfer to normal growth conditions.

Conserved N. benthamiana miRNAs
To identify conserved N. benthamiana miRNAs, we used a computational protocol similar to that of Mackowiak et al. 26 with modifications for plant miRNA identification 27 to align excised mapped reads to Nicotiana tabacum miRNAs and miRNA*s deposited in miRBase and in a recent database reported by Frazier et al. 28 We identified 95 miRNAs previously described in N. tabacum. With the remaining excised precursors, we used other plants' miRNAs in miRBase as a reference and identified 17 N. benthamiana miRNAs similar to those of other plant species. In total, 112 conserved miRNAs were identified, 100 of which were expressed in at least one of our sRNA libraries (97, 96, and 98 miRNA genes in the zero-, two-, and eight-hour samples, respectively; Table S1). The miRNA names, mature sequences, star sequences, the corresponding read numbers, the reference miRNAs from other plants with the same seed, and the pre-miRNAs' positions on the scaffolds are presented in Table S1. Furthermore, the partner miRNA* was identified for over 90% (102/112) of the conserved N. benthamiana miRNA genes. We also detected 16 sequences within the loop structure of miRNA genes. miRNA* and loop RNA are generally short lived, indicating that the high-throughput sequencing technology was very sensitive for identifying miRNAs. We found nearly equal numbers of reads from the two arms of the stem-loop precursors for 5 of the 112 known miRNAs (nbt-miR169a, nbt-miR160e, nbt-miR398, nbt-miR396b, and nbt-miR396a), and even more reads of the star strand than of the miRNA strand annotated by miRBase for 3 miRNAs (nbt-miR319a, nbt-miR319b, nbt-miR482b) (Table S1). These results indicate that the biogenesis of mature miRNA is highly complex in N. benthamiana. According to a previous study, mature plant miRNAs preferentially have a U at the first position from the 5′-end. 6 We constructed position weight matrices (PWMs) in WebLogo 29 for all conserved 18-22 nt mature N. benthamiana miRNA sequences. The results confirmed the reported findings: the extreme 5′-end of the miRNAs we examined had a 60% U bias based on the graphed nucleotide composition per position (Fig. 2A). There were also other positions that showed biased base composition. For example, position 3 had more G, positions 5 and 15 more A/U, and position 11 more G/C than expected at random (Fig. 2A). We categorized the known miRNAs into 30 families according to their mature sequence identity (Table 2). The largest family, nbt-miR166, has 20 members, followed by nbt-miR156 with 10 members. Members of these two miRNA families match their Arabidopsis family member counterparts nearly perfectly, suggesting that the families are evolutionarily conserved and originated before the divergence of the two dicot branches.
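The per-position base composition that WebLogo visualizes can be tabulated directly. A minimal sketch follows, with hypothetical mature miRNA sequences used purely for illustration.

```python
# A minimal sketch of per-position base-composition counting for a set
# of mature miRNA sequences (the quantity a sequence logo displays).
# The example sequences are hypothetical.

from collections import Counter

def position_frequencies(seqs):
    """Return, per 5'->3' position, the fraction of each base among
    sequences long enough to cover that position."""
    max_len = max(len(s) for s in seqs)
    freqs = []
    for i in range(max_len):
        col = Counter(s[i] for s in seqs if len(s) > i)
        n = sum(col.values())
        freqs.append({base: count / n for base, count in col.items()})
    return freqs

mirnas = ["UUGAUACGCACCUGAAUCGGC", "UGAAGCUGCCAGCAUGAUCUA",
          "UCGGACCAGGCUUCAUUCCCC", "UUUGGAUUGAAGGGAGCUCUA"]
freqs = position_frequencies(mirnas)
print(freqs[0])  # {'U': 1.0}: all four example sequences start with U
```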
There were also other positions that showed biased base conservations. For example, position 3 had more G, positions 5 and 15 more A/U, and position 11 more G/C than random expectations ( Fig. 2A). We categorized the known miRNAs into 30 families according to their mature sequence identity ( Table 2). The largest family, nbt-miR166, has 20 members, followed by nbt-miR156 with 10 members. Members of these two miRNA families match their Arabidopsis family member counterparts nearly perfectly, suggesting that the families are evolutionarily conserved and originated before the divergence of the two dicot branches. Novel N. benthamiana mirNAs. We identified novel N. benthamiana miRNAs with a computational protocol similar to that of Sebastian et al. 26 with modifications for plant miRNA identification 27 (the same program used for predicting conserved miRNAs) using a probabilistic method to score the compatibility of the miRNA position and the frequency of sRNA within the secondary structure of the miRNA precursors. 30 A total of 113 unique sequences were identified as potential miRNA genes with a true possibility of over 71% (Table S2). Since we excluded miRNAs that had high similarity with the miRNA of the reference plants, these miRNAs are believed to be N. benthamiana specific. The miRNA* read was present in over 79% (90/113) of the novel miRNAs in at least one of the libraries, similar to the ratio observed for conserved miRNAs. In all, 43 (38.1%) miRNA reads mapped to the loop structure, a ratio much higher than that of conserved miRNAs (14.3%). Interestingly, most reads of 32 novel miRNAs mapped to the loop rather than the star region of the pre-miRNA. For these 32 novel miRNA candidate genes, there were fewer miRNA* sequences than the corresponding mature miRNA sequences. Like conserved miRNAs, novel miRNAs' mature sequence length was mostly 21 nt (62), followed by 22 nt (21) (Table 3). However, longer mature sequences with up to 24-25 nt were also present (Table 3). We also inspected the nucleotide bias of the novel miRNA mature sequences by WebLogo. 29 As expected, U was mostly preferred at the first position, albeit with a lower frequency (50%) than that of conserved miRNAs. Unlike conserved miRNAs, there was no obvious nucleotide bias at the other sites, indicating that the novel miRNAs in N. benthamiana are highly variable (Fig. 2). Based on sequence similarity (identity .90%), the 113 novel N. benthamiana miRNAs were grouped into 93 families. The largest families were miRN3 and miRN10 with four members. Only 15 families had more than one member (Table S3). Of the 112 novel miRNAs, 50 had a very high confidence score (over 100) with a corresponding predicted true possibility higher than 94% (Table S2). clustered N. benthamiana mirNAs. Clustered miRNAs have been reported in both animal and plant genomes, 31,32 but have not yet been described in tobacco. Since assembled chromosome data are not available for N. benthamiana, we mapped pre-miRNAs onto N. benthamiana genome scaffolds and contigs. We found 11 pre-miRNAs from five families distributed in five clusters (Table S4). After filtering out candidates who did not meet high stringency criteria, we identified two pre-miRNA clusters containing two nbt-miR399 and three nbt-miRN41 members (Fig. 3). The two nbt-miRN41 family members (nbt-miRN41a and nbt-miRN41b) on the scaffold Niben044Scf00014276 are separated by less than 400 bp. 
The closeness of nbt-miRN41a and nbt-miRN41b suggests that the two miRNAs are transcribed as a single primary transcript. However, the other clustered miRNAs are separated by much longer distances of 6-8 kb. Interestingly, members of the clustered pre-miRNAs belong to the same miRNA family. Moreover, both miRNA clusters are located in intergenic regions according to the current genome annotation.

Intronic miRNAs in N. benthamiana
Intronic miRNAs are a type of miRNA located in the introns of host genes; they were first identified in fruit flies and worms, and then in mammals. In animals, about 80% of miRNAs are embedded in gene introns. 33 Recently, they have also been detected in the genomes of plants, including rice, Arabidopsis, and Populus. [34][35][36][37] However, their possible presence in the Solanaceae has not been previously explored. Therefore, we exploited our sRNA data to address this possibility. By mapping pre-miRNAs onto N. benthamiana intron sequences, we identified eight intronic miRNAs with high confidence in the genome sequence assembly. Information including the structures of genes that harbor intronic miRNAs, the positions in host genes, transcriptional directions, and folding structures of the pre-miRNAs is presented in Figure 4.

Conserved and novel miRNA expression during N gene-mediated immunity to TMV
To test whether miRNAs participate in N gene-mediated immunity to TMV, we investigated the expression profiles of all the miRNAs at different time points of the N gene activation process using the quantifier module of the miRDeep2 algorithm. 26 To increase the quantification's fidelity, miRNAs with fewer than 100 total reads were excluded. Conserved and novel miRNAs were quantified separately, and each family member was quantified individually. Most of the conserved miRNAs were differentially expressed; however, few displayed a dramatic change in pattern across the sampling time points (Fig. 5A). Most of the novel N. benthamiana-specific miRNAs were also differentially regulated during N gene activation (Fig. 5B). The dynamic N. benthamiana miRNA expression during N gene activation suggests a complex regulation of miRNAs in N gene-activated defense against TMV and indicates that the miRNAs identified in our study may be involved in this crucial immune pathway.

Prediction of miRNAs' candidate target genes
miRNAs associate with their target transcripts by base-pair complementarity, which ultimately leads to modulation of target gene expression by mRNA cleavage or translation inhibition. 38 Thus, predicting potential targets helps to infer a miRNA's function. We computationally predicted potential N. benthamiana targets using predicted cDNAs from the N. benthamiana genome v0.4.4 (http://solgenomics.net/) with the plant miRNA target analysis tools on the psRNATarget server (http://plantgrn.noble.org/psRNATarget/). 39 The predicted cDNAs contain about 48,342 non-overlapping gene models with annotations and comprise the longest representative transcripts. Using the default parameter settings, we identified 315 unique potential targets for 98 conserved miRNAs from 28 families and 609 unique potential targets for 104 novel miRNAs from 85 families. The miRNA ID, matched sequence, corresponding target transcript ID, inhibition type, and annotation for conserved and novel miRNAs are presented in Tables S5 and S6, respectively.
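To illustrate the principle underlying plant miRNA target prediction (psRNATarget itself uses a more elaborate expectation-based score), the following toy sketch scans a transcript for windows nearly complementary to a miRNA. All sequences here are hypothetical, and the simple mismatch count stands in for the real scoring scheme.

```python
# A toy sketch of the complementarity search behind miRNA target
# prediction: slide the miRNA along the transcript and keep windows with
# few mismatches when the miRNA (5'->3') is paired against the window
# read 3'->5'. Not the psRNATarget algorithm; sequences are hypothetical.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def target_sites(mirna, transcript, max_mismatches=3):
    """Return (position, mismatches) for near-complementary windows."""
    k = len(mirna)
    hits = []
    for i in range(len(transcript) - k + 1):
        window = transcript[i:i + k]
        mism = sum(COMPLEMENT[b] != w for b, w in zip(mirna, window[::-1]))
        if mism <= max_mismatches:
            hits.append((i, mism))
    return hits

mirna = "UGACAGAAGAGAGUGAGCAC"                        # miR156-like example
site = "".join(COMPLEMENT[b] for b in mirna)[::-1]    # perfect target site
transcript = "AGGUC" + site + "CCAUG"
print(target_sites(mirna, transcript))  # [(5, 0)]: one perfect site
```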
GO analysis of predicted miRNA target functions
A total of 924 putative target transcripts of both conserved and novel miRNAs were assigned GO terms based on a BLAST search of transcripts with known functions using the Blast2GO program (false discovery rate (FDR) cutoff of P < 0.05). 40 Each transcript was assigned a molecular function and a biological process. The molecular function of most transcripts fell into either the binding (50.2%) or catalytic activity (31.8%) categories (Fig. 6A). The two predominant binding activities were protein binding (35.1%) and nucleic acid binding (30.7%) (Table S7). These results suggest that, to a large extent, N. benthamiana miRNAs regulate transcription by modulating transcription factors. Among the GO biological processes assigned to the putative target transcripts, cellular (27.6%) and metabolic processes (24.8%) were the largest categories.

Discussion
Characterization of N. benthamiana miRNAs
N. benthamiana is widely used as a host in plant-virus studies. Its susceptibility to a large number of pathogens, such as bacteria, fungi, and oomycetes, demonstrates its utility as a model in plant-pathogen research. To date, 21,516 mature miRNAs have been identified in plant species and deposited in miRBase (release 18). 41 However, no N. benthamiana miRNA has been annotated in this database. Next-generation technologies have been instrumental in finding conserved as well as novel miRNAs in Arabidopsis, rice, Populus, and several non-model species. 10,[42][43][44][45][46][47] Through high-throughput sequencing, we identified 30 miRNA families conserved between N. benthamiana and other species. Most (95) of the 112 conserved miRNAs are found in N. tabacum (Table S1) and show high homology of mature miRNA sequences, suggesting these two species share many common miRNA pathway features and might have split only recently in evolutionary history. We also identified 93 novel, possibly N. benthamiana-specific, miRNA families, which showed no sequence conservation with miRNAs from other plant species in miRBase. By comparing miRNAs between N. tabacum and N. benthamiana, we found that only 28 miRNA families existed in N. tabacum, whereas 98 miRNAs are specific to N. benthamiana (Table S7). The differences in miRNAs between these two close species may account for their difference in response to TMV, and further studies are needed to test this hypothesis. The abundance of novel miRNAs was lower than that of conserved miRNAs. Possibly, as previously proposed, 48 conserved miRNAs are responsible for regulating basic cellular and developmental processes common to many eukaryotes, while species-specific miRNAs are involved in regulating unique pathways. The conserved N. benthamiana miRNAs tended to form multi-member families (Table 2), whereas no novel miRNA family contained more than four members (only 2 families contained four members and 11 families contained two members; Table S3). This is consistent with observations from other species, such as Arabidopsis 49 and Populus, 50 and supports the hypothesis that conserved miRNA families have expanded by duplication. [51][52][53] However, the exact frequency of birth and death needs to be further investigated by comparisons based on the genomes of distantly related species.
In accordance with observations in the genomes of other plant species, 49 the majority of both conserved and novel N. benthamiana 21 nt mature miRNA sequences contained a 5′ U, whereas 5′ A was overrepresented in 24 nt mature miRNA sequences (Fig. S1). When miRNAs were first discovered, numerous observations suggested that only one strand of the miRNA duplex could be the effector sRNA and that the other strand, termed miRNA*, is degraded. However, traditional miRNA cloning methods and recently developed high-throughput sequencing technologies have increasingly revealed equal or close to equal miRNA/miRNA* ratios in vivo. In mice, both strands of miR-30c and miR-142 have been cloned. 54 In the fruit fly, there are even more miRNA genes that yield close to equal miRNA*/miRNA ratios in vivo, most of which are relatively abundant in the total RNA transcriptome. 55 Recently, nine miRNAs (ptc-miR160f, ptc-miR169b, ptc-miR169l, ptc-miR171h, ptc-miR171m, ptc-miR172h, ptc-miR393a, ptc-miR393b, and ptc-miR403c) with high miRNA* levels were found in Populus euphratica from searches of cDNA libraries prepared from plants under drought stress and controls. 35 Here, we identified five miRNA genes (nbt-miR169a, nbt-miR160e, nbt-miR398, nbt-miR396b, nbt-miR396a) that yielded nearly equal numbers of miRNA* and miRNA reads. Interestingly, two of these miRNAs (nbt-miR169a and nbt-miR160e) are similar to some of the abovementioned P. euphratica miRNAs with high miRNA* levels (ptc-miR160f, ptc-miR169b, and ptc-miR169l). We also identified three miRNA genes (miR319a, miR319b, and miR482b) with slightly more reads for the miRNA* than for the annotated miRNA. Since the sRNAs we measured were at steady state, these miRNA* sequences may be involved in particular biological processes in cells. In Drosophila, miRNA* preferentially associates with AGO2; thus, independent sorting of miRNA/miRNA* strands is a general feature of Drosophila miRNA genes. 55 In Arabidopsis, miR393* also binds AGO2, thereby downregulating a Golgi-localized SNARE gene (MEMB12) by translational inhibition. 56 Because miR393 is bound by AGO1, a possible mechanism of independent sorting of duplex strands via distinct AGOs is suggested. Whether the abundant miRNA*s identified here are also bound by AGO2 is unknown and requires further investigation by coimmunoprecipitation with different AGOs.

Polycistronic miRNAs. miRNAs are often clustered together in animal and plant genomes. They may share a common primary miRNA transcript (pri-miRNA), which is subsequently excised by DCL1 into different pre-miRNAs. Clustered miRNAs with the same transcription start site are referred to as polycistronic miRNAs. Polycistronic miRNA precursors are more abundant in animals (25-45% of total miRNAs) than in plants (10-20% of total miRNAs). 57 In N. benthamiana, we identified two potentially polycistronic miRNA precursors, one containing known miRNAs and the other containing novel miRNAs. The proportion of potential polycistronic miRNA precursors (<1% of total miRNA genes) appears to be much lower than in other investigated plants, but this may be because of the incomplete sequencing of the N. benthamiana genome. We also identified another six potential polycistronic miRNA genes (Table S4), which were discarded because of consecutive Ns (unsequenced nucleotide positions) between individual miRNA loci. In animals, clustered miRNA genes are often heterogeneous.
For example, the miR-17 cluster consists of sequences encoding miR-17, miR-18, miR-19a, miR-19b, miR-20, miR-25, miR-92, miR-93, miR-106a, and miR-106b. 67 In contrast, plant miRNA gene clusters are generally thought to comprise homologous members, 6,58,59 and our discovery is consistent with this view. The clustering of plant miRNAs may have arisen by tandem duplication and suggests a dosage effect on miRNA expression.

miRNA target prediction. We have identified over 100 known miRNAs, some of which are conserved across several model plant species, including Arabidopsis, Populus, and rice. Even with our stringent target prediction criteria, most of the targets of conserved N. benthamiana miRNAs were conserved with targets in other plant species and favored genes encoding transcription factors. For example, nbt-miR156 targets SPL transcription factors 60 ; nbt-miR159 targets MYB domain-containing transcription factors 61 ; nbt-miR160 targets the ARF gene family 62 ; nbt-miR165/166 targets the homeodomain-leucine zipper (HD-Zip) gene family 63 ; and nbt-miR172 targets AP2-like transcription factors. 64 These miRNAs are classified as highly conserved in plants. 6 However, for moderately conserved miRNAs, 6 only three (nbt-miR164, nbt-miR169, and nbt-miR397) out of eight N. benthamiana miRNAs target genes from the same family (Table S5). The targeting of conserved genes by conserved miRNAs observed in N. benthamiana also occurs in other plants and even animals. 52 For example, miR165/166 is conserved in all plant lineages, including mosses, monocots, and dicots, and the binding site in their targets, which encode the HD-Zip family transcription factors, is also conserved in these taxa. The conservation between miRNAs and their targets implies that regulatory networks involving miRNA-target interactions may have evolved over a very long time and play a pivotal role in key processes during the plant life cycle.

Dynamic miRNA expression programs during N-mediated resistance to TMV. Most of the miRNAs we identified, from both the conserved and the novel miRNA pools, showed dynamic expression patterns during the TMV response, implying that miRNAs are involved in N-mediated resistance to TMV. This observation is in agreement with miRNA expression after bacterial infection in Arabidopsis. 65 Since the predicted targets of these miRNAs have diverse functions, such as binding, catalytic activity, transporter activity, and transcription activity (Fig. 6A), and are involved in many biological processes, such as cellular, metabolic, developmental, and stimulus-response processes (Fig. 6B), the miRNAs' dynamic expression might regulate gene expression systematically at different layers of the N-mediated resistance pathway to TMV. Usually, miRNAs from the same family have similar expression patterns. Interestingly, we identified one miRNA, nbt-miR160e, which displayed an expression pattern distinct from the other members of its family. During TMV infection, nbt-miR160a/b/c/d was downregulated, while levels of nbt-miR160e increased (Fig. 5). This distinct expression pattern suggests that nbt-miR160e functions differently from the other members in the resistance pathway. Although miRNAs from the same family have the same or similar mature sequences, and therefore the same or similar target genes, their genomic contexts are different, which might explain the distinct expression patterns of different members. Similar observations for multicopy miRNAs from rice and Arabidopsis have been made. 66
To date, few miRNAs have been shown to regulate plant immunity. miR393 targets TIR/AFB F-box genes, thereby downregulating auxin signaling and contributing to resistance to the bacterium DC3000. 15 However, we did not observe any significant changes in nbt-miR393 expression during N gene activation upon infection, suggesting that nbt-miR393 is not involved in the N gene pathway during N. benthamiana's immunity response to TMV. miR160a enhances PTI, while overexpression of miR398b negatively regulates PTI in Arabidopsis. 16 We found that both nbt-miR160e and nbt-miR398 increased rapidly within eight hours of N gene activation in N. benthamiana, indicating that these miRNAs positively regulate immunity to TMV in the N gene pathway. In N. tabacum, the family member nta-miR6019 has been shown to cleave N gene transcripts, thereby attenuating N-mediated resistance to TMV. 18 We also found that expression of the N receptor inhibitor nbt-miR6019a/b was inhibited two hours after N gene activation, suggesting a tight control between miRNAs and the immune receptor-mediated resistance pathway. This result also supports the reliability of our miRNA identification and expression profile analyses.

Materials and Methods

Plant material and growth conditions. Transgenic N-expressing N. benthamiana plants 22 were grown in soil in a controlled climate chamber providing 16 hours light/8 hours dark cycles at 22-25 °C. For high-temperature treatment, sets of plants were transferred to a chamber providing 16 hours light/8 hours dark cycles at 32 °C. After two days of pre-treatment under these conditions, the plants were rub-inoculated with TMV-GFP and immediately moved back to high-temperature conditions for another five days. Leaves with full GFP fluorescence were collected at zero, two, and eight hours after the transfer to 22-25 °C.

Inoculation of TMV-GFP. N. benthamiana leaves were infiltrated with Agrobacterium carrying the TMV-GFP T-DNA construct. At five days post-infiltration (dpi), TMV-GFP-infected leaves were homogenized and rub-inoculated onto the leaves of tested plants.

sRNA library construction and high-throughput sequencing. Total RNA was extracted from the plant material with TRIzol (Invitrogen) following the manufacturer's instructions. The RNA quality and quantity were determined with an Agilent 2100 Bioanalyzer (Agilent). The RNA was separated by PAGE, and then 16-30 nt sRNA was purified and ligated to 5′ and 3′ RNA adaptors. A reverse transcription reaction was performed, followed by several cycles of PCR, and the products were sequenced by Solexa technology.

Bioinformatics analysis of high-throughput sequencing data. The raw Solexa sequencing data were preprocessed by filtering out low-quality reads, trimming adaptors and contaminants formed by adaptors, and removing reads shorter than 18 nt (both siRNAs and miRNAs are longer than 18 nt in plants). The clean reads were then compared with entries in the available sRNA databases, Rfam (http://rfam.sanger.ac.uk, Release 9.1) and GenBank (http://www.genbank.com). All the reads that mapped to rRNA, tRNA, snRNA, and snoRNA entries in these two databases were annotated and removed. The remaining reads were first mapped to the N. benthamiana genome (Niben.genome.v0.4.4.scaffolds.nrcontigs) by miRDeep2's mapper module. 30 The arf file and reads file obtained from the mapping procedure, together with the genome file, were used to identify novel and known miRNAs.
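The preprocessing just described reduces to a few mechanical steps; a minimal Python sketch (the read contents and adaptor sequence are illustrative, and real pipelines use dedicated tools for quality filtering and Rfam/GenBank matching) is:

```python
# Minimal sketch of the sRNA read-cleanup steps described above.
from collections import Counter

MIN_LEN = 18  # plant siRNAs and miRNAs are longer than 18 nt

def clean_reads(reads, adaptor, annotated):
    """Trim 3' adaptors, drop short reads, and drop reads matching
    annotated rRNA/tRNA/snRNA/snoRNA sequences."""
    kept = []
    for read in reads:
        read = read.upper().split(adaptor)[0]  # naive 3' adaptor trim
        if len(read) < MIN_LEN:
            continue
        if read in annotated:   # set built from Rfam/GenBank entries
            continue
        kept.append(read)
    return Counter(kept)        # collapse to unique reads with counts

raw = ["UGACAGAAGAGAGUGAGCACAAGCUUAGCUA", "ACGU"]
print(clean_reads(raw, adaptor="AAGCUUAGCUA", annotated=set()))
```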
Before miRNA identification with the miRDeep2 module, we modified several parts of the module's Perl script to adapt it for plant miRNA identification, as Wen et al. did. 27 For conserved miRNA identification, we used the mature miRNAs and miRNA precursors of the phylogenetically close N. tabacum as the first reference. N. tabacum mature miRNAs and their precursors were obtained from two sources: miRBase (http://www.mirbase.org) and reported miRNAs predicted using Expressed Sequence Tag (EST) sequences. 28 We then used all plant-derived mature miRNAs (from miRBase, http://www.mirbase.org) except those of N. tabacum as the second reference. After excluding reads mapped to the conserved miRNAs, the remaining reads were subjected to novel miRNA identification. We grouped mature miRNAs (both conserved and novel) with identical and near-identical (>90% identity) sequences into the same family. miRNA expression across distinct samples was profiled using the quantifier module of miRDeep2. 30 For intronic miRNA identification, we first obtained the genomic 5′ and 3′ coordinates of each miRNA precursor; we then compared these locations with the corresponding gene models in the GFF file of the genome. If both the 5′ and 3′ ends of a precursor lay within an intron, we defined the miRNA as intronic. In addition, only genes longer than 500 bp with introns shorter than 10 kb were considered.
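The intronic-miRNA test above is a simple coordinate-containment check; the sketch below mirrors it (gene models would really be parsed from the GFF file, and the example coordinates are made up):

```python
# Sketch of the intronic-miRNA criterion described above.
def is_intronic(pre_start, pre_end, gene, min_gene=500, max_intron=10_000):
    """True if a pre-miRNA lies entirely inside one intron of `gene`.

    `gene` is a dict with 'start', 'end', and 'introns' (a list of
    (start, end) tuples), standing in for records parsed from a GFF file.
    """
    if gene["end"] - gene["start"] <= min_gene:       # genes > 500 bp only
        return False
    for intron_start, intron_end in gene["introns"]:
        if intron_end - intron_start >= max_intron:   # introns < 10 kb only
            continue
        # both the 5' and 3' ends of the precursor must lie in the intron
        if intron_start <= pre_start and pre_end <= intron_end:
            return True
    return False

gene = {"start": 1000, "end": 6000, "introns": [(2000, 3500)]}
print(is_intronic(2100, 2300, gene))  # True
```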
2018-04-03T00:56:45.888Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "af27f80a36258bf2a15dcb45cbdf583a82de5e8e", "oa_license": "CCBYNC", "oa_url": "http://journals.sagepub.com/doi/pdf/10.4137/EBO.S20744", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "af27f80a36258bf2a15dcb45cbdf583a82de5e8e", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
9877418
pes2o/s2orc
v3-fos-license
Classification of interacting electronic topological insulators in three dimensions

A fundamental open problem in condensed matter physics is how the dichotomy between conventional and topological band insulators is modified in the presence of strong electron interactions. We show that there are 6 new electronic topological insulators that have no non-interacting counterpart. Combined with the previously known band insulators, these produce a total of 8 topologically distinct phases. Two of the new topological insulators have a simple physical description as Mott insulators in which the electron spins form spin analogs of the familiar topological band insulator. The remaining ones are obtained as combinations of these two 'topological paramagnets' and the topological band insulator. We prove that these 8 phases form a complete list of all possible interacting topological insulators, and are classified by a Z_2^3 group structure. Experimental signatures are also discussed for these phases.

The last few years have seen tremendous progress [1][2][3][4][5] in our understanding of electronic topological insulators modeled by band theory. Despite this, there is currently very little understanding of the interplay between strong electron interactions and the phenomenon of topological insulation. Can interaction-dominated phases be in a topological insulating state? Are there new kinds of topological insulators that might exist in interacting electron systems and that have no non-interacting counterpart? These questions acquire particular importance in light of the ongoing experimental search for topological phenomena in strongly correlated materials with strong spin-orbit coupling. It is important to first distinguish the topological insulator from a different class of more exotic topological phases - ones with a bulk gap but with "intrinsic" topological order 6 , as exemplified most famously by the fractional quantum Hall states. Intrinsically topologically ordered phases have exotic bulk excitations which may exhibit fractionalization of quantum numbers. A fascinating minimal generalization of a topological insulator to interacting systems is to states of matter known as Symmetry Protected Topological (SPT) phases. In contrast to more exotic generalizations 7,8 , SPT phases have a bulk gap and no intrinsic topological order but nevertheless have non-trivial surface states that are robust in the presence of a global internal symmetry. We focus here on the all-important example of time-reversal symmetric insulating phases of electrons with a conserved global charge (corresponding to a global U(1) symmetry). Non-interacting insulators with this symmetry in 3D have a well-known distinction [1][2][3]9 between the topological and trivial band insulators (corresponding to a Z_2 classification). We show that with interactions there are 6 other non-trivial topological insulating states, corresponding to a classification by the group Z_2^3. This group structure means that all these interacting topological insulators can be obtained from 3 'root' states by taking combinations. One of the 3 root states is the standard topological band insulator. The other two require interactions. They can be understood as Mott insulating states of the electrons where the resulting quantum spins have themselves formed an SPT phase. Such SPT phases of quantum spins were dubbed 'topological paramagnets' in Ref. 10 and their properties in 3D elucidated. The three root states and their properties are briefly described in Table I.
A formal abstract classification for some symmetries (which include neither charge conservation nor spin-1/2 electrons) in 3d has been attempted 31 but leaves many physical questions unanswered. Our strategy - which sidesteps the difficulties of this prior approach - is to first constrain the symmetries and statistics of monopole sources of external electromagnetic fields. We then incorporate these constraints into a theory of the surface, and determine the resulting allowed distinct states. In general it is natural to attempt to construct possible SPT phases of a fermion system by first forming bosons as composites out of the fermions and putting the bosons in a bosonic SPT state. However not all these boson SPTs remain distinct states in an electronic system. We determine that the distinct such states (see Appendix B) can all be viewed as topological paramagnets as described above. While such spin-SPT phases can clearly exist, we give very general arguments that the only other electronic root state is the original topological band insulator. We also clarify a number of other questions about interacting topological insulators (see the end of the paper and Appendices F and G). For instance we explain the fundamental connection between topological insulation and the Kramers structure of the electron.

[Table I caption fragment: the thermal Hall conductivity κ_xy is quoted in units of (π^2 k_B^2/3h) T (T is the temperature). N is the number of gapless Majorana cones protected by time-reversal symmetry when the surface becomes a superconductor. A combination of these measurements could uniquely determine the TI.]

Generalities: For any fully gapped insulator in 3D, the effective Lagrangian for an external electromagnetic field obtained by integrating out all the matter fields will take the form L_eff = L_Maxwell + L_θ. The first term is the usual Maxwell term and the second is the 'theta' term, L_θ = (θ e^2/4π^2 ℏc) E · B, where E and B are the external electric and magnetic fields respectively. Under time reversal, θ → −θ, and in a fermionic system the physics is periodic under θ → θ + 2π. Time-reversal symmetric insulators thus have θ = nπ with n an integer. Trivial time-reversal symmetric insulators have θ = 0 while free fermion topological insulators have θ = π 32 . Any new interacting TI that also has θ = π can be combined with the usual one to produce a TI with θ = 0. Thus it suffices to restrict attention to the possibility of new TIs which have θ = 0. Consider the symmetry properties of monopole sources of the external magnetic field. At a non-zero θ, this elementary monopole carries electric charge θ/2π, so that it is neutral when θ = 0. Under time reversal the monopole becomes an anti-monopole, as the magnetic field is odd. Formally, if we gauge the global U(1) symmetry to introduce a dynamical monopole field m, it must transform under time reversal as T m T^{-1} = e^{iα} m†, T m† T^{-1} = e^{−iα} m. However 22 (see Appendix A), by combining with a gauge transformation we can set the phase α = 0. Physically this is because the time-reversed partner of a monopole lives in a different topological sector with opposite magnetic charge and hence is not simply a Kramers partner. This fixes the symmetry properties of the bulk monopole. There are still in principle two distinct choices corresponding to the statistics of the monopole: it may be either bosonic or fermionic. We will consider them in turn below. Bosonic monopoles will be shown to allow for the topological paramagnets mentioned above and nothing else. Fermionic monopoles will be shown not to occur in electronic SPT phases.
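The θ/2π monopole charge quoted above is the Witten effect. A compact, textbook-style sketch (in units ℏ = c = 1; signs depend on conventions, and this is not a derivation specific to this paper) is:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Witten effect: sketch in units \hbar = c = 1; signs depend on conventions.
The axion term modifies Gauss's law. Varying
\[
  \mathcal{L}_\theta = \frac{\theta e^2}{4\pi^2}\,\vec{E}\cdot\vec{B}
\]
with respect to the scalar potential gives
\[
  \nabla\cdot\vec{E} = \rho - \frac{\theta e^2}{4\pi^2}\,\nabla\cdot\vec{B}.
\]
Integrating over a sphere enclosing an elementary monopole,
$\oint \vec{B}\cdot d\vec{S} = 2\pi/e$, shows that the monopole binds
electric charge $q = -\theta e/2\pi$, i.e. charge $\theta/2\pi$ in units
of $e$ up to sign. Time reversal sends $\theta \to -\theta$; combined with
the $2\pi$ periodicity of $\theta$, this allows only $\theta = n\pi$ in a
time-reversal-invariant insulator, and the monopole is neutral precisely
when $\theta = 0$.
\end{document}
```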
Topological insulators at θ = 0 - bosonic monopoles: Consider the surface of any insulator with θ = 0 and a bosonic monopole. This is conveniently incorporated into an effective theory of the surface formulated in terms of degrees of freedom natural when the surface is superconducting, i.e., it spontaneously breaks the global U(1) but not time-reversal symmetry. The suitable degrees of freedom then are hc/2e vortices and (neutralized) Bogoliubov quasiparticles 33 (spinons), which have mutual semion interactions. In general we can also allow for co-existing topological order, i.e. other fractionalized quasi-particles, in the surface superconductor 34 . This gives a dual description of 2D electronic systems that is particularly convenient for studying not just the superconducting phase but also some topologically ordered insulating phases. Imagine tunneling a monopole from the vacuum into the system bulk. Since the monopole is trivial in both regions, the tunneling event - which leaves an hc/e vortex on the surface - also carries no non-trivial quantum number. Hence the surface dual effective field theory has a bosonic hc/e vortex that carries no non-trivial quantum number. We can therefore proliferate (condense) the hc/e vortex on the surface, which disorders the superconductor and yields an insulator with the full U(1) and time-reversal symmetry unbroken. However, as is well known from dual vortex descriptions 33,35 of spin-charge separation in 2D, the resulting state has intrinsic topological order. In this surface topologically ordered, symmetry-preserving insulator, a quasi-particle of charge q sees the hc/e vortex as a 2πq/e flux. Hence, the hc/e vortex condensate confines all particles with fractional charge and quantizes the charge to q = ne for all the remaining particles in the theory (for a more detailed discussion see Appendix C). However, we can always remove integer charge from a particle without changing its topological sector by binding physical electrons. Hence the particle content of the surface topological order is {1, . . .} × {1, c}, where only the physical electron c in the theory is charged, and all the non-trivial fractional quasiparticles in {1, . . .} are neutral. Since the time-reversal operation preserves the U(1) charge, its action has to be closed within the neutral sector {1, . . .}. We can therefore describe the surface topological order as a purely charge-neutral quantum spin liquid with topological order {1, . . .}, supplemented by the presence of a trivial electron band insulator, {1, c}. In particular, any gauge-invariant local operator made out of the topological theory must be neutral (up to binding electrons), but in an electron system a local neutral object has to be bosonic. Hence the theory should be viewed as emerging purely from a neutral boson system. This implies that the bulk SPT order should also be attributed to the neutral boson (spin) sector, i.e., it should be an SPT of spins in a Mott insulating phase of the electrons (a topological paramagnet). The SPT states of neutral bosons with time-reversal symmetry are classified 10,22,23 by Z_2^2, with two fundamental non-trivial root phases. These can both be understood as Mott insulators in topological paramagnet phases. Adding to this the usual θ = π TI captured by band theory, we have 3 root states corresponding to a Z_2^3 classification.

Topological insulators at θ = 0 - fermionic monopoles?: The possibility that the monopole may be fermionic in a system which also has fermionic charges is naively consistent with time-reversal symmetry.
However we can show that such a state cannot occur in any electronic 3D SPT phase. Crucial to our argument is the requirement of 'edgability' defined in Ref. 22. Any theory that can occur in strictly d dimensions (as opposed to the surface of an SPT in (d + 1) dimensions) must admit a physical edge to the vacuum. We show that electronic systems with a fermionic monopole are not edgable. To illustrate the difficulty, consider a Bose-Fermi mixture, with both the boson b and the electron c carrying charge 1. Now put the electron into a trivial band insulator, and the boson into a bosonic SPT state. Then the charge-neutral external monopole source becomes a fermion 22,24 . We may attempt to get rid of the bosons in the bulk by taking their charge gap to infinity (i.e., projecting them out of the Hilbert space). However they will make their presence felt at the boundary, and the theory is not edgable as a purely electronic system. Indeed we show in Appendix D by a direct and general argument that fermionic statistics of the monopole in an SPT phase implies the existence of physical charge-1 bosons at the boundary. This is not possible in a purely electronic system.

Physical characterization of interacting topological insulators: We now describe phenomena which in principle can be used to completely experimentally identify the various TIs. We consider breaking symmetry at the surface to obtain states with no intrinsic topological order. The results are summarized in Table I. A different, less practical, but conceptually powerful characterization is in terms of a gapped topologically ordered surface state, which we describe in Appendix B. First consider surface states breaking time-reversal symmetry (and with no intrinsic topological order). Of the 8 insulating phases we obtained, four have electromagnetic response θ = π (of which one is the topological band insulator) and four have θ = 0 (of which one is the trivial insulator). The θ term in the response means that such a surface state will have quantized electrical Hall conductivity σ_xy = ν e^2/h with ν = θ/2π + n, where n can be any integer signifying a conventional integer quantum Hall effect on the surface. A further distinction is obtained by considering the thermal Hall effect κ_xy in this surface state. In general, in a quantum Hall state κ_xy = ν_Q (π^2 k_B^2/3h) T, where k_B and T are Boltzmann's constant and the temperature respectively. The number ν_Q is a universal property of the quantum Hall state. Two of the θ = π insulators have ν_Q = ν = 1/2 + n (including the topological band insulator) while the other two have ν_Q = ν ± 4. Similarly two of the θ = 0 insulators (including the trivial one) have ν_Q = ν = n while the other two have ν_Q = ν ± 4 36 . Thus a combined measurement of electrical and thermal Hall transport when T is broken at the surface can provide a very useful practical (albeit partial) characterization of these distinct topological insulators.
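The transport fingerprints just described can be collected compactly; the following summary is our restatement of the prose above, not a reproduction of the paper's Table I:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Surface-transport fingerprints, restated from the text (not Table I).
With time reversal broken at the surface and no topological order,
\[
  \sigma_{xy} = \Big(\frac{\theta}{2\pi} + n\Big)\frac{e^2}{h},
  \qquad
  \kappa_{xy} = \nu_Q\,\frac{\pi^2 k_B^2}{3h}\,T ,
\]
and the eight phases split according to
\[
  \theta \in \{0,\ \pi\}, \qquad \nu_Q - \nu \in \{0,\ \pm 4\},
\]
with two phases for each of the four combinations. A third label, the
number $N \in \{0,\ 8\}\ (\mathrm{mod}\ 16)$ of protected Majorana cones
of the proximitized superconducting surface discussed next, completes the
$2^3 = 8$ count of the $\mathbb{Z}_2^3$ classification.
\end{document}
```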
Next we consider surface superconducting states (again without topological order) obtained by depositing an s-wave superconductor on top. It was noticed in Ref. 43 that the surfaces of Topological Paramagnets I and II become identical to that of a topological superconductor (see also Appendix E for a simpler derivation). The corresponding free fermion superconductor has N = 8 (mod 16) gapless Majorana cones at the surface protected by time-reversal symmetry. Thus inducing superconductivity on the surface of either Topological Paramagnet I or II leads to 8 gapless Majorana cones, which should be observable through photoemission measurements. Taken together with the T-breaking surface transport, we have a unique fingerprint for each of the 8 TIs.

Other symmetries, Kramers fermions, and θ = π topological insulators: As a by-product of our considerations we are able to address a number of other fundamental questions about interacting topological insulators. For free fermion systems the Kramers structure is what allows a topological insulator with θ = π. What precise role, beyond free fermion band theory, does the Kramers structure of the electron play in enabling θ = π? We show non-perturbatively that any gapped insulator with a θ = π response and no intrinsic topological order necessarily has charge carriers that are Kramers doublet fermions. We also use a similar insight to show the necessity of magnetic ordering when the exotic bulk excitations of the topological Mott insulator phase of Ref. 8 are confined. Finally we show that time-reversal-breaking electronic systems with global charge U(1) symmetry have no interacting topological insulator phase in three dimensions. These results are described in Appendices F and G. Our results set the stage for a number of future studies including identification of the new topological insulators in microscopic models and in real materials. Strongly correlated materials with strong spin-orbit interactions are natural platforms for the various topological insulator phases we described. We expect that our results will inform the many ongoing searches (e.g., in rare earth insulators, or in iridium oxides) for topological phenomena in such materials.

Acknowledgements - We thank X. Chen and C.-M. Jian, and particularly A. Vishwanath, for useful discussions, and B. Swingle and P. A. Lee for comments on our manuscript. This work was supported by Department of Energy Grant DESC-8739-ER46872 (TS and CW) and NSF Grant No. DGE-0801525 (ACP), and partially by the Simons Foundation through award number 229736 (TS). TS also thanks the hospitality of Harvard University where this work was partially done. After this work was completed we learnt of Ref. 39, which also pointed out the relation between Kramers fermions and the θ = π TI.

Appendix A: Time reversal action on the magnetic monopole

As the magnetic field is odd under time reversal, a magnetic monopole becomes an anti-monopole. We briefly recapitulate the reasoning of Ref. 22 to show that in Eqs. (3) and (4) of the main paper the phase α can always be set to zero. To see this we observe that the T operator can be combined with a (magnetic) gauge transformation to define a new time-reversal operator, T̃ = U(α) T (A1), where U(α) = e^{−iα q_m} and q_m is the total magnetic charge. Since q_m is odd under time reversal, we have U(α)T = T U(α); hence the order of the product in Eq. (A1) does not matter. When acting on physical gauge-invariant states, T̃ has the same effect as T, but the monopole fields m, m† transform with α = 0.

Appendix B: Topologically ordered surface states

A powerful and complete characterization of the different three-dimensional interacting topological insulators is in terms of a gapped symmetry-preserving surface with intrinsic topological order. The physical symmetries are realized in this surface topological order in a manner which cannot be realized in strict two dimensions. The surface topological order of the topological paramagnets was studied in Refs. 10,22,23.
The simplest such surface states have Z_2 topological order, with two particles e and m having mutual π statistics. Topological Paramagnet I supports a surface theory in which both the e and m particles are Kramers bosons (denoted as eTmT), while Topological Paramagnet II has a surface in which both e and m are non-Kramers fermions (efmf). The third state, being a composite of the previous two, has e and m both being Kramers fermions (efTmfT). The topological band insulator can also be characterized in terms of its surface topological order. In contrast to the topological paramagnets, the surface topological order in this case is non-abelian; such states have recently been studied in Refs. [38][39][40][41]. The resulting states are variants of the familiar Moore-Read state describing the ν = 5/2 quantum Hall system, modified to accommodate time-reversal symmetry. In Table II we list the representative surface topological orders of the three root states described in the main text. In hindsight, in interacting electron systems the descendants of neutral boson SPT states are naturally expected to arise. However, one could also have naively included the descendants of boson SPT states made out of Cooper pairs (charge-2 objects). The non-trivial boson SPT made out of physical bosons with charge q = 2 supports a surface theory 10,22,24 in which both e and m are non-Kramers bosons carrying charge q/2 = 1 (denoted as eCmC). However, since we have physical Kramers fermions with charge 1 in the system (the electrons), we can bind them with the e and m particles. This converts them to neutral Kramers fermions, which yields exactly one of the SPT surface states (efTmfT) of neutral bosons. Hence the SPT state made out of charge-2 bosons does not add any non-trivial fermion topological insulator. Apart from its conceptual value, the study of the surface topological order also provides a very useful theoretical tool to access the topological paramagnets. It allowed Ref. 22 to construct the root states of the two time-reversal symmetric topological paramagnets (as well as other bosonic SPT phases) in a system of coupled layers where each layer forms a state that is allowed in strictly 2d systems (see also Ref. 23).

Appendix C: Reduction of θ = 0 insulators with bosonic monopoles to topological paramagnets

Here we provide more details of the argument establishing that electronic topological insulators with θ = 0 and a bosonic monopole can be reduced to bosonic topological paramagnets. It is convenient to start with a symmetry-preserving surface termination that has intrinsic topological order. Such a surface state is characterized by a set of anyons {1, c, X, X̄, Y_I}, where I is a discrete label, and their corresponding braiding and fusion rules. Each anyon will be characterized by a sharply quantized charge q under the global U(1) symmetry. Let us denote this topological information and symmetry assignments as the initial surface anyon theory, T_initial. A useful theoretical device 24 is to consider creating a monopole source of an external (non-dynamical) magnetic field, and dragging that monopole through the topologically ordered surface at position R. Such a monopole insertion event changes the external magnetic flux, Φ_B, piercing the surface by 2π/e (in units where ℏ = c = e = 1). When the monopole sits close to the under-side of the surface, this extra flux, δΦ_B, is concentrated in the vicinity of R.
Suppose we take a surface excitation, Y, with fractional charge q_Y, and drag it around a sufficiently large loop that encloses (nearly all) the additional magnetic flux from the monopole insertion. This process accumulates a Berry phase e^{2πi q_Y} ≠ 1 because of Y's fractional charge. However, the total monopole insertion event is a local physical process, and since there are no gapless excitations in the system it cannot have non-trivial action on distant events (clearly if Y is arbitrarily far from R, it should not be able to discern whether the monopole is infinitesimally above or infinitesimally below the surface). Therefore, if T_initial contains quasi-particles Y_I with fractional charge q_I, the monopole insertion event must also create a quasi-particle of type X in the surface theory which has mutual statistics θ_{X,Y_I} = e^{−2πi q_I}. This mutual statistics then exactly compensates the non-trivial Berry phase from encircling the additional flux from the monopole insertion, and ensures that the overall monopole insertion event does not have unphysical non-local consequences. Furthermore, since the bulk monopole is chargeless and bosonic, X is a neutral boson. We can similarly consider the time-reversed version of this process by inserting an anti-monopole from the vacuum into the bulk. Let us denote by X̄ the particle nucleated at the surface. Clearly X and X̄ are exchanged by T, indicating that, like X, X̄ is a charge-neutral boson. The mutual statistics of an anyon Y with X̄ is then e^{2πi q_Y}. Further, as the monopole and antimonopole can annihilate each other to give back the ground state, X̄ must be the antiparticle of X. These mutual statistics indicate that driving a phase transition in which X, X̄ condense will confine all fractionally charged particles. However, in general it is not guaranteed that the condensation of X, X̄ preserves T. To avoid this issue, we take a detour through an intermediate superconducting phase in which descendants of X, X̄ can be safely condensed while preserving T. This results in a topologically ordered state, T_final, which has the desired structure of a neutral boson theory. Our strategy is to first enter a superconducting phase obtained by condensing the physical Cooper pair, b ≡ c_↑ c_↓, from T_initial, and then to exit it through a different phase transition to reach the final topological order T_final. In the theory T_initial, the Cooper pair is local with respect to all nontrivial anyons. Thus its condensation preserves the topological order T_initial. The resulting topologically ordered superconductor is conventionally denoted SC* (see Ref. 33) to distinguish it from the ordinary non-fractionalized BCS superconductor, SC. Let us denote the Cooper pair field by b = √ρ_b e^{iφ}. A long-wavelength effective Lagrangian density for the theory can be written L = L_b[b] + L_{T_initial}[X, X̄, Y_I, . . .] + L_mixed, where L_b describes the dynamics of the Cooper-pair field, L_{T_initial}[X, X̄, Y_I, . . .] is the Lagrangian density encoding the topological content of the topologically ordered phase, and L_mixed = λ Σ_{{N_I}} Π_I e^{i q_I φ/2e} Y_I^{N_I} encodes all charge-conserving interaction terms between b and gauge-invariant combinations of operators in the topologically ordered theory. When b condenses to produce a superconducting phase, apart from the original topological quasiparticles there will also be quantized vortex excitations where the phase φ of b winds by 2nπ with n an integer. Following the terminology of Ref. 33 we will call these vortons (to distinguish them from the vortices of conventional superconductors without topological order).
We wish to disorder the superconducting order by condensing a suitable vortex to obtain the desired insulating surface theory T_final. This may be done in a dual effective field theory in terms of the vorton degrees of freedom. To formulate such a dual field theory, it is very convenient to introduce "neutralized" fields Ỹ_I = e^{i q_I φ/2e} Y_I, obtained by binding a fraction of the Cooper pair to Y_I. In terms of these neutralized variables the advantage of this choice is manifest: the Cooper-pair phase φ is no longer directly coupled to the neutralized fields Ỹ_I. The Ỹ_I however now acquire a phase e^{iπ q_I} on encircling an elementary vorton. Following the standard duality transformation, we can rewrite the boson current j^µ_b = ρ_b ∂^µ φ as the flux of a gauge field α_µ: j^µ_b = (ε^{µνλ}/2π) ∂_ν α_λ. In the dual theory, the vorton field, denoted by v, is a bosonic field that couples minimally to this gauge field, and in addition has statistical interactions with the Ỹ particles: L_dual = |(∂_µ − iα_µ − i Σ_I ℓ^{(I)} a^I_µ) v|^2 − V(|v|^2) + Σ_{I,J} (K_{IJ}/4π) ε^{µνλ} a^I_µ ∂_ν a^J_λ + Σ_I j^µ_{Y_I} a^I_µ, where the gauge fields a^I, the integer vectors ℓ^{(I)}, and the multicomponent Chern-Simons term with K-matrix K_{IJ} capture the mutual statistics between the vortons and the fields Ỹ_I. Here, j^µ_{Y_I} is the current of the Ỹ_I particles, and V(|v|^2) is a potential for the vorton field. Now consider the particles v^2 X and v^{†2} X̄. These carry vorticity ±2 and are interchanged under time reversal. They are the relics of the monopole tunneling event in this superconducting state discussed in the main text. Due to the coupling of v to the dual gauge field α, we may always choose a gauge such that time reversal is implemented as T (v^2 X) T^{-1} = v^{†2} X̄, with no additional phase. We may now condense v^2 X, v^{†2} X̄ and preserve time-reversal symmetry. The condensation also destroys the superconducting order and produces the desired new topological order T_final. Note that the neutralized particles Ỹ_I have no non-trivial mutual statistics with v^2 X, as the phase around v^2 exactly cancels the phase around X. Hence they survive in T_final as quasiparticles. The vortex condensate however quantizes the electric charge to be an integer. In particular, the charge-q bosons obtained by fractionalizing the Cooper pair, b_q = e^{iqφ/2}, are confined unless q is an integer. In effect, the original electrically charged Y_I particles are bound to these fractional bosons to produce the neutral Ỹ_I particles. The vortons v also survive as particles in T_final, but they are electrically neutral. The detour through the superconductor essentially implements a 'charge-anyon' separation of the original topological theory T_initial. This is completely analogous to the conceptual utility of superconducting degrees of freedom in implementing 'spin-charge' separation in 2d insulators 33 . Though we will not elaborate on this here, an alternate route from T_initial to T_final is through a parton construction where we fractionalize the charged anyons into a charged boson and a neutral anyon. This proves that T_final only has integer-charged quasiparticles. Without loss of generality, we may relabel the quasi-particle content of T_final by binding an appropriate number of electrons to each quasi-particle to remove the remaining integer charge. The resulting theory has quasi-particle content {1, v, Ỹ_I} × {1, c}, which can be decomposed into the direct product of a neutral boson sector {1, v, Ỹ_I} trivially accompanied by a gapped electron. This completes the desired proof that the θ = 0 classification reduces to the classification of neutral bosonic phases.
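The phase e^{iπq_I} acquired by a Ỹ_I encircling an elementary vorton, and the resulting integer charge quantization, are ordinary Aharonov-Bohm bookkeeping; a short check (our restatement) is:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Aharonov-Bohm phase around an elementary (hc/2e) vorton.
An elementary vorton carries flux $\Phi = hc/2e$, so transporting a
quasiparticle of charge $q_I$ around it gives
\[
  \exp\Big(\frac{i q_I}{\hbar c}\oint \vec{A}\cdot d\vec{l}\Big)
  = \exp\Big(\frac{i q_I \Phi}{\hbar c}\Big)
  = e^{i\pi q_I/e},
\]
i.e. the $e^{i\pi q_I}$ of the text in units $e = 1$. A double vortex
$v^2$ instead produces $e^{2\pi i q_I/e}$, which is trivial for all
integer charges; this is consistent with the statement that condensing
$v^2 X,\ v^{\dagger 2}\bar{X}$ quantizes the electric charge to integers
while leaving the integer-charged sector unfrustrated.
\end{document}
```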
Appendix D: Impossibility of a fermionic monopole

In this section we provide a general argument against the possibility of fermionic monopoles in a purely electronic SPT insulator. We will show that fermionic monopoles in the bulk necessarily lead to inconsistencies in the boundary theory, as long as the charge U(1) symmetry is preserved. When the charge U(1) is gauged, apart from monopoles we may also consider bulk dyons parametrized by (q_m, q_e), where q_e is the electric charge and q_m the magnetic charge. If the neutral monopole (1, 0) is fermionic in a purely electronic system (where the (0, 1) particle is identified with the electron), all dyons with q_m = 1 are also fermions. If time reversal is broken in the bulk, the θ value may change from 0, leading to these dyons acquiring non-zero charge. However, their statistics stays fermionic. It follows that if any putative time-reversal symmetric electronic topological insulator phase with a fermionic monopole exists, then it will stay a non-trivial topological insulator even in the absence of time-reversal symmetry. Thus, to rule out such putative topological insulators, it suffices to show that fermionic monopoles are forbidden even in the absence of time-reversal symmetry. We will show that SPT states of electrons with a global U(1) symmetry admit unphysical boundary excitations if the monopole is fermionic. Suppose we could construct a state with fermionic monopoles. By the arguments of the previous section, we may describe this phase in terms of a surface topological order with particle content {1, f, Y_I, . . .} × {1, c}. (D1) Here, f is the surface excitation corresponding to the bulk monopole, and hence is a neutral fermion having mutual statistics e^{−2πi q_I/e} with the particles Y_I of charge q_I. (Even if time reversal is not present, we imagine tuning to a point where the monopole is neutral.) Following a line of reasoning analogous to that of Appendix C, we can now pair condense the remnant of the fermionic monopole, ⟨ff⟩ ≠ 0, which immediately confines all the fractionally charged particles Y_I unless q_I = ne/2 for some integer n, due to their mutual statistics with f. By attaching enough physical electrons (c), we can always take the charge of the particles Y_I to be either 0 or e/2. The resulting theory can thus be written as {1, f, C_I, N_I, . . .} × {1, c}, (D2) where the C_I have charge e/2 and the N_I are neutral quasiparticles. Note that f is local with respect to the N_I and is a mutual semion with the C_I. The neutral sector of the theory {1, f, N_I} is closed under fusion and braiding due to charge conservation. Moreover, it forms a consistent topological field theory. To see this, let us momentarily dispense also with charge-conservation symmetry (for example by explicitly breaking it), and then condense ⟨cf⟩ ≠ 0, which confines all the 1/2-charged particles C_I while keeping all the neutral particles N_I unaffected. Furthermore, as f is local with respect to all the N_I's, the theory {1, f, N_I} can be viewed as a topological field theory of a system with a physical fermion f in the absence of any symmetry. Such a theory can then be confined to {1, f} without obstruction. Returning to the original theory in Eq. D2, this implies that we may get rid of the neutral particles N_I and be left with {1, f, C_i, . . .} × {1, c}, (D3) where {C_i} is a subset of the original charge-e/2 particles {C_I}. Without loss of generality, we can restrict our attention to a single species of fractionally charged particle, C_1, and its anti-particle. The only possible fusion outcomes consistent with charge conservation are C_1 × C_1 ∈ {c, cf}.
If two copies of C_1 fuse to c, then c†C_1 is the anti-particle of C_1. However, this is not possible, since the topological spins (self-statistics) of c†C_1 and C_1 differ by −1, whereas anti-particles must have the same topological spin. A similar argument rules out the possibility that two copies of C_1 fuse to cf. This line of reasoning shows that the topological order of Eq. D3 is internally inconsistent unless there are no C_i particles, i.e., unless the topological order contains only the following particles: {1, f} × {1, c}. (D4) Since f has trivial mutual statistics with c, it must be a physical object that is microscopically present in the system (i.e., it is not an emergent particle). However, there is no such neutral fermion degree of freedom in an electronic system. It follows that in a purely electronic system the monopole cannot be fermionic in an SPT phase with global U(1) symmetry. We note that the Bose-Fermi example constructed in the main paper has a neutral fermion excitation (a bound state of the boson and the fermion) and hence is allowed to have a fermionic monopole. Let us examine this more closely. We put the electron into a trivial band insulator, and the boson into a boson topological insulator. Then the charge-neutral external monopole source becomes a fermion 22,24 . We initially consider such a system in a geometry with no boundaries. We then tune the boson charge gap to infinity, so that the charged bosons disappear from the spectrum, and we are left with a purely electronic theory. But since the fermionic monopole does not carry any boson charge, it survives as the only charge-neutral monopole. Now the bulk theory is exactly what we were looking for, but we need to examine its boundary and see if it is consistent with a time-reversal invariant electronic system. As the electrons are in a trivial insulator they do not contribute anything special at the boundary, so we only have to worry about the surface states of the eCmC boson SPT. We first consider a symmetric surface state with topological order. It is known 10 that one of the possible surface states of the bosonic TI is described by a Z_2 gauge theory with both e and m carrying charge 1/2 and the fermion being charge-neutral (the state denoted eCmC in Ref. 22). By setting the boson charge gap to infinity, the e and m particles disappear from the spectrum, but the neutral fermion survives as a gauge-invariant local object, which is not allowed in a system purely made of charged fermions. Another way to see the inconsistency of the surface is to look at the surface state without topological order in which time-reversal symmetry is broken. The boson topological insulator leads to a surface electrical quantum Hall conductance σ_xy = ±1 and thermal Hall conductance κ_xy = 0. 10 The difference of σ_xy, κ_xy between the two time-reversal-broken states should correspond to an electronic state in two dimensions without topological order. Here we have Δσ_xy = 2 and Δκ_xy = 0, which cannot be realized in a purely electronic system without topological order. Indeed, adding integer quantum Hall states of electrons increases σ_xy and κ_xy by the same amount. It is possible to add a neutral boson integer quantum Hall state without topological order, but that requires σ_xy = 0, κ_xy = 0 (mod 8). Hence the boundary as a purely electronic theory is not consistent with time-reversal symmetry, and the bulk theory cannot be realized in strictly three dimensions, although it may be realizable at the surface of a four-dimensional system.
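The sign flip of the topological spin invoked in the fusion argument above follows from the standard composition rule for bound states of anyons; spelled out (our restatement of a standard fact):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Binding a physical electron flips a quasiparticle's topological spin.
For two anyons $a, b$ with mutual statistics phase $M_{ab}$, the bound
state has topological spin $\theta_{a\times b} = \theta_a\,\theta_b\,M_{ab}$.
The electron $c$ is local, so $M_{c, C_1} = 1$, and fermionic, so
$\theta_c = -1$; hence
\[
  \theta_{c^\dagger C_1} = \theta_{c^\dagger}\,\theta_{C_1}
  = -\,\theta_{C_1} \neq \theta_{C_1}.
\]
Since antiparticles must share the same topological spin, $c^\dagger C_1$
cannot be the antiparticle of $C_1$, excluding $C_1 \times C_1 = c$; the
same sign flip rules out $C_1 \times C_1 = cf$.
\end{document}
```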
We also note that if we allow topological order (or other exotic long-range entanglement) in the bulk, then the monopole may be fermionic.

Appendix E: eTmT topological order from the N = 8 topological superconductor surface

In this section, we provide a physical construction of the eTmT topological order from the N = 8 Majorana-cone surface state of a time-reversal invariant topological superconductor phase. We start from the free theory of eight gapless Majorana cones χ_{ia}, where i ∈ {1, . . . , 4} and a ∈ {↑, ↓}, and with time reversal acting on the real (Majorana) fermions as T: χ_{i↑} → χ_{i↓}, χ_{i↓} → −χ_{i↑}, so that T^2 = −1. We can group the theory into four complex (Dirac) fermions by writing ψ_i = χ_{i↑} + iχ_{i↓}; the Lagrangian then simply describes four gapless Dirac cones in which time reversal acts as T ψ_i T^{-1} = iψ_i†. It is easy to see that the theory is protected from gap-opening at the free (quadratic) level. We can then ask: could a non-perturbative gap be opened when interactions are introduced? The way to tackle this problem is to first introduce a symmetry-breaking mass term into the fermion theory, viewing the mass term as a fluctuating order parameter, and ask if one can recover the symmetry by disordering the phase of the mass field. For this purpose it is convenient to first introduce an auxiliary global U(1) symmetry as a microscopic symmetry in the model (rather than a subgroup of the emergent SO(8) flavor symmetry). This auxiliary symmetry will be removed at the end of the argument, so the final result does not depend on the existence of this U(1) symmetry. The total symmetry is now enlarged to U(1) × T, with U_θ T = T U_θ (i.e., the conserved quantity associated with the auxiliary U(1) symmetry changes sign under T, like a component of spin rather than an electrical charge). One can now write down a pairing-gap term, L_Δ = Δ Σ_i ψ_i^T (iσ^y) ψ_i + h.c., (E7) which breaks both U(1) and T (Δ → −Δ* under time reversal because T^2 = −1 on physical fermions). The task for us now is then to disorder the field Δ and restore time-reversal symmetry. The virtue of the auxiliary U(1) symmetry shows up here: the field Δ is XY-like, so to disorder it we can follow the familiar and well-understood route of proliferating vortices of the order parameter. It is important here to notice that although the gap in Eq. (E7) breaks both U(1) and T, it does preserve a time-reversal-like subgroup generated by T̃ = T U_{π/2}. Since we want to restore T by disordering Δ (which will certainly restore U(1)), we must do it while preserving T̃. This "modified time reversal" looks almost like the original one, but there is a crucial difference: T̃^2 = 1 when acting on the fermion field ψ. Now we are ready to disorder the field Δ. At first glance it seems sufficient just to proliferate the fundamental vortex (hc/2e vortex) and obtain a trivial gapped insulator. However, as we will see below, T̃^2 = −1 on these fundamental vortices; hence proliferating them could not restore time-reversal symmetry. The vortex here is subtle because of the fermion zero-modes associated with it. It is well known that a superconducting Dirac cone gives a Majorana zero-mode in the vortex core 44 . So the four Dirac cones in total give four Majorana zero-modes, i.e., two complex fermion zero-modes f_{1,2}. We then define the different vortex operators as |v_{n_1 n_2}⟩ = (f_1†)^{n_1} (f_2†)^{n_2} |F_N⟩ with n_{1,2} ∈ {0, 1}, where |F_N⟩ denotes the state with all the negative-energy levels filled in a vortex background. The U(1) being spin-like under T (hence also under T̃) means that a vortex configuration is time-reversal invariant. The only non-trivial action of T̃ is thus on the zero-modes, T̃ f_i T̃^{-1} = f_i†, and by choosing a proper phase definition, T̃ |F_N⟩ = f_1† f_2† |F_N⟩. It then follows straightforwardly that {v_00, v_11} and {v_01, v_10} form two "Kramers" pairs under T̃.
Moreover, since the two pairs carry opposite fermion parity, they actually see each other as mutual semions. We thus conclude that to preserve the symmetry, the "minimal" construction is to proliferate double vortices. The resulting insulating state has Z_2 topological order {1, e, m, ε}, with e being the remnant of {v_00, v_11}, m being the remnant of {v_01, v_10}, and ε being the neutralized fermion ψ̃. Now that the full U(1) × T is restored, we can ask how these symmetries are implemented on {1, e, m, ε}. Obviously these particles are charge-neutral, so the question is then about the implementation of T alone. However, since the particles are neutral, the extra auxiliary U(1) rotation in T̃ is irrelevant, and they transform identically under T̃ and T. Hence we have T^2 = T̃^2 = −1 on e and m, and T^2 = T̃^2 = 1 on ε, which is exactly the topological order eTmT. The charged physical fermion ψ is now trivially gapped and plays no role in the topological theory; one can thus introduce explicit pairing to break the auxiliary U(1) symmetry. Since the topological order stems from the charge-neutral sector, pair condensation of ψ does not alter the topological order, and the resulting state is just the eTmT state with only T symmetry.
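As a consistency check of the Kramers assignment, one can act twice with T̃ on the vortex states using the zero-mode action reconstructed above (T̃ f_i T̃^{-1} = f_i† and T̃|F_N⟩ = f_1†f_2†|F_N⟩ are our reconstruction, so conventions may differ from the original):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Verifying \tilde{T}^2 = -1 on the vortex doublets, under the
% reconstructed zero-mode action (an assumption; conventions may differ).
With $\tilde{\mathcal{T}} f_i \tilde{\mathcal{T}}^{-1} = f_i^\dagger$ and
$\tilde{\mathcal{T}}|F_N\rangle = f_1^\dagger f_2^\dagger |F_N\rangle$:
\begin{align*}
  \tilde{\mathcal{T}}\, v_{00} &= f_1^\dagger f_2^\dagger |F_N\rangle
    = v_{11}, &
  \tilde{\mathcal{T}}\, v_{11} &= f_1 f_2\, f_1^\dagger f_2^\dagger
    |F_N\rangle = -\,v_{00},\\
  \tilde{\mathcal{T}}\, v_{01} &= f_2\, f_1^\dagger f_2^\dagger
    |F_N\rangle = -\,v_{10}, &
  \tilde{\mathcal{T}}\, v_{10} &= f_1\, f_1^\dagger f_2^\dagger
    |F_N\rangle = v_{01},
\end{align*}
so $\tilde{\mathcal{T}}^2 = -1$ on both doublets $\{v_{00}, v_{11}\}$ and
$\{v_{01}, v_{10}\}$, while the two doublets differ in fermion parity
$(-1)^{f_1^\dagger f_1 + f_2^\dagger f_2}$, as stated in the text.
\end{document}
```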
2013-11-13T18:37:33.000Z
2013-06-13T00:00:00.000
{ "year": 2013, "sha1": "1e08455a015f46227375efc4b5543509575453d7", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1306.3238", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1e08455a015f46227375efc4b5543509575453d7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }